Talk:Lambda calculus

Single-Taped Turing Machine Simulation?

This comment: "It is a universal model of computation that can be used to simulate any single-taped Turing machine..."

Seems odd because any single-taped Turing machine can simulate any multitape Turing machine. By transitivity, then, shouldn't lambda calculus be able to simulate any multitape Turing machine, since it can simulate any single-taped one?

Assuming that's true (and I would appreciate an expert confirming it is, just to be sure), then we can probably change the above to "It is a universal model of computation that can be used to simulate any Turing machine..."

Turner's citation

Let's talk about what is missing in the statement. --Ancheta Wis (talk | contribs) 16:24, 11 May 2023 (UTC)[reply]

I added what I think is the quote being referred to:

This mechanism works well for Algol 60 but in a language in which functions can be returned as results, a free variable might be held onto after the function call in which it was created has returned, and will no longer be present on the stack. Landin (1964) solved this in his SECD machine.

I think the basic issue is that what the article says,

In a functional programming language where functions are first class citizens, this systematic change in variables to avoid capture of a free variable can introduce an error, when returning functions as results.

does not follow from this. It is not capture-avoiding substitution that causes issues, it is the fact that a function partially applied to an argument can outlive its containing function. It's true that the Algol 60 mechanism used capture-avoiding substitution, but every functional programming language avoids unwanted captures in some way. Mathnerd314159 (talk) 20:30, 11 May 2023 (UTC)[reply]
And Turner 2012, immediately after his Landin 1964 ref, says "Landin (1964) solved this in his SECD machine. A function is represented by a closure, consisting of code for the function plus the environment for its free variables. The environment is a linked list of name-value pairs. Closures live in the heap".
So the function definition (i.e. the lambda abstraction on x) has to be augmented by its closure [x, as well as environment y, z, ...], which Turner puts on a heap; i.e. in lambda notation the abstraction has to carry its environment with it, so that nothing needed is lost when a function call (an application) returns. Otherwise we get an indication that the expression evaluation has not yet terminated, and machine application slows. (My mental picture is the sequence of twin primes, the output getting slower and slower with each application to successive twins, as the list of divisors y, z, w, ... (the closures) grows without bound.) --Ancheta Wis (talk | contribs) 05:18, 12 May 2023 (UTC)[reply]
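The mechanism in that quote is easy to sketch. Below is my own minimal Haskell illustration (not code from Turner 2012 or Landin 1964): a function value is code plus an environment, and the environment is a linked list of name-value pairs, exactly as the quote describes.

data Term = Var String | Lam String Term | App Term Term

type Env = [(String, Value)]           -- linked list of name-value pairs

data Value = Closure String Term Env   -- code for the function + its environment

eval :: Env -> Term -> Value
eval env (Var x)   = case lookup x env of
                       Just v  -> v
                       Nothing -> error ("unbound variable: " ++ x)
eval env (Lam x b) = Closure x b env   -- capture the current environment
eval env (App f a) = let Closure x b env' = eval env f
                     in  eval ((x, eval env a) : env') b

Because a Closure carries its environment with it (on the heap), a function returned as a result still finds its free variables after the call that created them has returned, which is the point of Landin's fix.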
WillNess is pointing out that there is a problem with the scope of parameters used in intertwining functions (when expressing functions in lambda notation). The problem reminds me of the slogan 'no global variables'.
There must be a way to resolve this; otherwise, why use this notation? --Ancheta Wis (talk | contribs) 22:17, 12 May 2023 (UTC)[reply]
I believe something's gone awry. Turner 2012 does not say that capture-avoiding substitution introduces errors. The error he is talking about is the funarg problem, which doesn't require capture-avoiding substitution to occur. He is saying that capture-avoiding substitution is one of the basic mechanisms for correct functional programming (it allows passing functions as arguments), but is not enough (the funarg problem means you can't always return functions). The funarg problem has no chance of occurring in the lambda calculus because there is no concept of a stack or other kind of evaluation context; there is never a need to "look up" a variable while evaluating an expression. I have removed the whole statement as I believe it to be irrelevant to the section (if not incorrect or confusing) and will make similar edits to SECD machine. Howtonotwin (talk) 09:39, 8 January 2024 (UTC)[reply]
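For concreteness, here is a sketch of capture-avoiding substitution itself, the mechanism the removed sentence blamed; this is my own Haskell illustration with a deliberately naive renaming scheme. Note that it is a purely local operation on terms, with no stack or environment anywhere in sight, which is why the funarg problem cannot even be stated at this level.

import Data.List (delete, union)

data Term = Var String | Lam String Term | App Term Term

free :: Term -> [String]
free (Var x)   = [x]
free (Lam x b) = delete x (free b)
free (App f a) = free f `union` free a

-- subst x s t: replace the free occurrences of x in t by s, renaming a
-- bound variable when it would capture a free variable of s.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if y == x then s else Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y b)
  | y == x          = Lam y b            -- x is shadowed: nothing to substitute
  | y `elem` free s = Lam y' (subst x s (subst y (Var y') b))
  | otherwise       = Lam y (subst x s b)
  where y' = y ++ "'"                    -- naive "fresh" name; a real
                                         -- implementation must pick a name
                                         -- that is genuinely fresh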

typo in a sentence

One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.

should be

"what the typed calculus can do" Federicolo (talk) 19:08, 2 October 2023 (UTC)[reply]

No, this is right: the untyped lambda calculus is more powerful. For example, one can easily write the Y combinator in untyped lambda calculus, whereas many typed lambda calculi (such as the simply typed lambda calculus) do not allow it. Mathnerd314159 (talk) 01:41, 3 October 2023 (UTC)[reply]
Ok, I agree with you now: I misinterpreted the sentence. Federicolo (talk) 16:14, 13 October 2023 (UTC)[reply]
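(For readers of this thread: the standard untyped definition is Y = λf.(λx.f (x x)) (λx.f (x x)). The self-application x x is precisely what the simply typed lambda calculus rules out: x would need a type T with T = T → S for some S, and no simple type satisfies that equation.)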

"Functions that operate on functions" feels incomplete

The section "Functions that operate on functions" contains a discussion on the identity and constant functions, but fails to use them to illustrate the point of the section - functions being used as input or output. VeeIsMe (talk) 15:55, 4 November 2023 (UTC)[reply]

 Done I added function composition. The whole section is a mess; I tried to clean up a little bit. - Jochen Burghardt (talk) 09:44, 5 November 2023 (UTC)[reply]
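(For reference, and without claiming this is the exact form now in the article: the usual composition term in this notation is λf.λg.λx.f (g x). It takes two functions as input and yields a function as output, which is the point the section is meant to illustrate; for instance, applying it to the identity, (λf.λg.λx.f (g x)) (λy.y) reduces to λg.λx.g x.)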

"Intuitively this means a function's arguments are always reduced before the function itself."

That sentence is supposed to be describing applicative order. I think it is confusing and unnecessary. For starters, a beta reduction is always applied to a redex. What exactly is an "argument" here? If, in that sentence, argument refers to the right side of a redex, then that sentence is false (that interpretation would fit call by value, not applicative order). And if it refers to bound variables, then the sentence is still false, because bound variables are not redexes. I think the most charitable interpretation of the sentence is that given a redex R, before R is reduced, one must reduce every redex U, if U's right side is a bound variable of R's function. Unfortunately, despite that new sentence's verbosity, it is not a complete definition of applicative order reduction. 81.158.160.194 (talk) 04:14, 29 March 2024 (UTC)[reply]

An argument is the right-hand side of an application (this is defined in the lead and mentioned in numerous places). E.g. in M N, M is the function and N is the argument. So what this "intuition" is trying to explain is that, for example, with A = B = C = λx.x, applicative order will reduce A(BC) to A(C) rather than to BC.
The complete definition of applicative order is "leftmost-innermost reduction", it is already pretty concise. Mathnerd314159 (talk) 05:16, 29 March 2024 (UTC)[reply]
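Spelling that example out: with A = B = C = λx.x, the term A (B C) contains two redexes, the whole application A (B C) and the inner B C. Applicative order (leftmost-innermost) contracts the inner one first: A (B C) → A C → C. Normal order (leftmost-outermost) contracts the outer one first: A (B C) → B C → C. Both reach C, but applicative order reduces the argument B C before applying the function A to it, which is the intuition behind "a function's arguments are always reduced before the function itself".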