Talk:Haskell/Denotational semantics

Comment by Apfelmus

 * Look of &perp;: in a mathematical context, $$\perp$$ is preferred, whereas in a code context, a code-styled rendering should be the way to go.
 * Draw real pictures showing the various poset structures. Especially the ones with the recursive data type definitions!
 * It's very cool that semantics can partly be explored in Haskell itself. Are there paragraphs where this opportunity was forgotten?

Apfelmus 19:39, 22 December 2006 (UTC)
 * Improve examples and exercises. Most were invented very quickly and are not very good. Those that are good are most likely ones that I remembered from different textbooks. Exercises where one must repeat or slightly modify the explanation from the text are helpful for a first understanding but should be kept to a small number. True insight can only be achieved by tasks where one has to twist one's mind. Train your brain, don't train a pattern.
 * Add more exercises and examples? A spot where readers frequently get stuck is clearly a good place for additional examples and exercises. Hopefully, these will emerge at this Talk-page.

Denotational semantics
Imperative languages can certainly be given denotational semantics. It's just that a direct semantics doesn't work anymore. You need to model the store, and usually continuations as well.


 * Furthermore, imperative languages can be given other non-operational semantics, such as Floyd-Hoare-Dijkstra-style axiomatic semantics. You (Apfelmus) may be interested to search the EWD archive for what Dijkstra had to say about operational reasoning, most of it in regard to imperative programming. It was I who removed the erroneous and misleading paragraph mentioning imperative languages. I apologise for not adding an edit summary; this was due to my stupidity, not rudeness.


 * Of course, you are right. Indeed, just by using a MonadState, one already gives denotational semantics to imperative stuff. I somehow wanted to pin down that reasoning is much easier in purely functional languages because the semantics are very direct. So I just denied denotational semantics for imperative stuff altogether and hoped nobody would complain :) Compared to pure functional programs, it doesn't matter much whether I have operational ("machine, change your state!") or denotational ("plan of how machine state changes") semantics, the reasoning doesn't get much easier. Denotational semantics tries to recapture as much pure stuff as possible from the operational semantics, similar in power to the finding of a suitable loop invariant. Ah, and thanks for the EWD link. Apfelmus 22:29, 27 December 2006 (UTC)
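As an illustration of the MonadState remark above, here is a sketch (not from the original discussion) of a denotational semantics for a tiny imperative language, with the store modelled as a single Integer. All names here (Stmt, denote, example) are invented for this example, and it assumes the mtl package for Control.Monad.State:

```haskell
import Control.Monad.State (State, get, put, execState)

-- A toy imperative language.
data Stmt = Assign (Integer -> Integer)  -- update the store
          | Seq Stmt Stmt                -- sequential composition
          | While (Integer -> Bool) Stmt -- loop

-- The denotation of a statement is a store transformer.
denote :: Stmt -> State Integer ()
denote (Assign f)  = get >>= put . f
denote (Seq s t)   = denote s >> denote t
denote (While p s) = do
  x <- get
  if p x then denote s >> denote (While p s) else return ()

-- Example: while (x < 10) { x := x + 3 }, starting from store 0.
example :: Integer
example = execState (denote (While (< 10) (Assign (+ 3)))) 0
```

Since the denotation is a pure function from stores to stores, reasoning about the imperative program reduces to reasoning about that function.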

forall a . a
The type 'forall a. a' doesn't have to give an error. It can also loop forever. (Which is an error, but not observable.)


 * Fixed, "any function of the polymorphic type forall a . a must denote &perp;". Why didn't I choose that compelling formulation in the first place? Apfelmus 21:04, 27 December 2006 (UTC)
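The claim can be made concrete in Haskell itself (a sketch; the names are invented): any closed term one can actually write at type forall a . a either raises an error or loops forever, i.e. it denotes &perp;.

```haskell
-- Two ways to inhabit forall a . a; both denote ⊥.
bottomByError :: a
bottomByError = error "bottom"   -- an observable error

bottomByLoop :: a
bottomByLoop = bottomByLoop      -- loops forever: also ⊥, just not observably so
```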

Mistake in the definition of $$\sqsubset$$ ?
Isn't there a mistake in the definition, which states that "$$a \sqsubset b$$ denotes that a is more defined than b" ? It seems inconsistent with "every concrete number is more defined than $$\perp$$: $$\perp\ \sqsubset 1\ ,\ \perp\ \sqsubset 2\, \dots$$". Shouldn't it be "less defined" ?


 * I was going to point this out too. How about "$$a \sqsubset b$$ denotes that b is more defined than a"? 71.202.203.117 00:34, 27 December 2006 (UTC)

Oops, it's a mistake of course. Fixed. At the same time, I reverted "completely equal programs" to "equal", because of the subtlety that programs can be syntactically equal as well. Of course, they can be syntactically different and still denote the same "mathematical object". Apfelmus 20:47, 27 December 2006 (UTC)

Love it!
Has really helped my understanding. Thank you.

I did some images, they're perhaps a bit big, anyone who cares can resize them. DavidHouse 17:32, 26 December 2006 (UTC)

Non-strict constructors
"because we can write a function that tells them apart" f (Just _) = 4 f _       = 7 "As  ignores the contents of the Just constructor,   is 4 but   is 7." This argument is truly ingenious but unfortunately wrong :) With the above definition, we have . The stated properties cannot hold because they violate monotonicity. The fact that   is different from &perp; cannot be "proved", it's a design decision. How can this be formulated nicely? The author above clearly wanted to appeal to a "proof" by example like in the case of  . Apfelmus 13:38, 28 December 2006 (UTC)

What's currently in the article is clearly wrong, as you point out, but the fact that f _|_ = _|_ and f (Just _|_) = 4 should be enough to achieve the author's point, that _|_ and (Just _|_) are distinct. -Chinju
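For what it's worth, the corrected claim can be checked in GHC (a sketch using Control.Exception to observe the &perp; raised by undefined):

```haskell
import Control.Exception (SomeException, evaluate, try)

f :: Maybe Int -> Int
f (Just _) = 4
f _        = 7

-- f (Just ⊥) is 4: matching (Just _) does not look inside the Just.
-- f ⊥ is ⊥ (not 7): matching against (Just _) forces the argument itself.
main :: IO ()
main = do
  print =<< evaluate (f (Just undefined))    -- prints 4
  r <- try (evaluate (f undefined)) :: IO (Either SomeException Int)
  putStrLn (either (const "bottom") show r)  -- prints bottom
```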

Approximations of the Factorial Function
The sentence "That is, it transforms less defined functions to more defined ones." seems to mean that $$x\sqsubseteq g(x)$$ for all x which isn't true.


 * You're right. I just put in a clarification that hopefully makes more sense.  Stereotype441 15:28, 8 February 2007 (UTC)
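For readers who want to experiment, the fixed-point iteration itself can be run in GHC (a sketch; the names g and approx are invented here):

```haskell
-- g maps an approximation of factorial to a better one.
g :: (Integer -> Integer) -> (Integer -> Integer)
g f n = if n == 0 then 1 else n * f (n - 1)

-- The k-th iterate starting from ⊥ is defined on the inputs 0..k-1.
approx :: Int -> (Integer -> Integer)
approx k = iterate g undefined !! k   -- f_k = g^k ⊥
```

For instance, approx 4 3 gives 6, while approx 4 4 would hit the undefined starting point and raise an error, matching the fact that f_4 is undefined at 4.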

Denotional semantics & Equational reasoning
I just found an interesting article Fast and loose reasoning is morally correct. Basically it says that even though having Bottom results in inconsistent logic, with a few simple constraints equational proofs still work. I wonder whether some of it can be useful here. I'll be away the rest of the weekend but then will try to read the paper more carefully. I also wonder if the subject of paraconsistent logic has any applicability here. 64.160.39.153 05:28, 4 March 2007 (UTC)


 * Hello, sorry for this late response. I planned a chapter Haskell/Equational reasoning that covers reasoning formally about Haskell programs. As you point out, it would be great to have a subsection based on the article you mentioned that shows how equational reasoning keeps working with &perp; although it usually assumes that all programs terminate. Thanks for your suggestion :) I don't know enough about paraconsistent logic, but equational reasoning that ignores &perp; is like calculating with infinitesimal numbers: done right, it works; done wrong, it doesn't :). Related to that, I'd also like to move the infinite lists out of this chapter to a place where all forms of program proofs (equational, denotational semantics, coinduction) live together, maybe even as a separate chapter. -- apfe&lambda;mus 12:10, 24 April 2007 (UTC)

Continuity
The text says that for any type, any monotone function   must be continuous. Why? I can only prove $$\sup\{f(x_0), f(x_1), \ldots \} \sqsubseteq f(\sup\{x_0, x_1, \ldots \})$$. Why is there not a type and a sequence of values $$x_0 \sqsubseteq x_1 \sqsubseteq \ldots$$ of that type such that, say, $$f(x_i) = \perp$$ for any $$i$$ and $$f(\sup \{x_0, x_1, \ldots\}) = 0$$?


 * In the light of your comment, the argument in the text indeed doesn't show anything.
 * The fundamental question I wanted to answer is this: although allowing or disallowing continuity is an arbitrary choice, every programming language can only express continuous functions. What magic makes this possible? One can of course prove for every given language (given by its programming constructs, like recursion and so on) that only continuous functions can be written down. However, I feel that this is not the right way to prove it, because this property is universal across all programming languages; hence there ought to be a proof working for all languages at the same time.
 * If the type  has only chains with finitely many elements, continuity is granted for free. But for example,   has infinite chains, one being

$$x_i = n\mapsto\mbox{ if } n \le i \mbox{ then } 0 \mbox{ else } \perp$$.
 * Of course, $$x_i\sqsubset x_{i+1}$$. Now, imagine a non-continuous function  with the requirements

$$f(x_i)=\perp$$ $$f(\sup \{x_0, x_1, \ldots\})=0$$
 * Can this function be written down in Haskell? No, for it would have to test its function argument at all possible values and check whether the outcome is &perp;. Clearly, this would take an infinite amount of time. What I had in mind when writing the text is that the transition of $$f$$ from   to   must happen at some finite point in the chain; the decision may not be postponed to infinity. This is due to the finite nature of computation. With this in mind, the current text argues that the image of any function has in essence only finite chains, so that the decision about the result value can indeed be made after a finite number of steps.
 * Of course, this whole argument isn't formally correct, but I think it hints at what's going on. I currently don't know a correct argument.
 * Clearly, the text needs a rewrite, though I'm unsure on how to do that. -- apfe&lambda;mus 11:31, 24 April 2007 (UTC)
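For concreteness, the chain above can be written in Haskell (a sketch assuming nonnegative Integer arguments; the name x is invented here):

```haskell
-- x i is the i-th chain element: 0 on arguments up to i, ⊥ above.
x :: Integer -> (Integer -> Integer)
x i n = if n <= i then 0 else undefined

-- The least upper bound of the chain is const 0. A non-continuous f with
-- f (x i) = ⊥ for all i but f (const 0) = 0 cannot be written: to return ⊥
-- on every x i, f would have to find an argument n with x i n = ⊥, and any
-- fixed n it tests becomes defined once i reaches n.
```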

I have an informal counterexample (maybe), but I haven't the knowledge to formalize it. It is inspired by Fran (indeed, I found this book when trying to understand Fran semantics). Let us define

A value  means "time is  " and a value   with $$t_1 \le t_2$$ means "time is in between   and  ". Thus, according to this meaning, we define  $$\sqsubset$$   for any $$t \in [t_1,t_2]$$ and   $$\sqsubset$$   iff $$[s_1,s_2] \subset [t_1,t_2]$$. Now consider the sequence, which converges to  , and consider the function   where

and  is any function such that   and   for any $$x \ne 0$$. This is only an informal example, I don't know if it is valid.


 * I don't think it's a counterexample because the meaning of $$\sqsubset$$ is fixed as "less defined than". I mean, neither  $$\sqsubset$$   nor   $$\sqsupset$$   hold. In other words, $$\sqsubset$$ is quite different. Your example basically tries to introduce the real numbers into denotational semantics by redefining $$\sqsubset$$. Of course, not all functions on real numbers are continuous anymore. (In fact, they all are continuous again in intuitionistic logic but that leads too far here)
 * Also note that  is a bad choice, because there is a smallest nonzero floating point number, i.e. the sequence   doesn't converge to  . But you can use an exact representation of rational numbers instead to make at least this convergence work.
 * -- apfe&lambda;mus 15:50, 26 April 2007 (UTC)

Not all Functions in Strict Languages are Strict
I think this section is wrong. The reason the lifting trick works is not that functions are not strict in function types, but that lambda terms like  or   are in normal form and so cannot take a step of evaluation. Other terms of that type, like  , will be reduced.

In fact (ignoring type classes), evaluation in a Haskell-like language ought to be completely independent of type information, which can be erased and forgotten once type checking is completed. In a strict language, if, then.

The following Standard ML code supports this:

In the above code,  has type   and , which has equivalent behaviour to Haskell's  , has type.

The line  gives an exception (bottom), not 1.

A strict language shown in chapter 14 of TAPL has an 'error' term, very similar to Haskell's 'undefined'. It also has this behaviour.

Rebooted 15:32, 6 September 2007 (UTC)


 * Good point. The problem is that ML has "two" undefined functions, namely
 * and
 * or better
 * In other words, we can distinguish between a function that is an exception and a function that raises an exception on every value. Put differently, the function type is lifted by an additional  and every function is indeed strict with respect to this &perp;. Unfortunately, this &perp; is useless for fixpoint iteration which depends on   (as defined in the beginning of the chapter).
 * So, yes, the section is not correct but the denotational semantics of call-by-value are even trickier than the section is wrong :-) I'm unsure what to do. I wanted the section to somehow show the  trick which can simulate call-by-name in a call-by-value language. Hm, maybe it doesn't belong to denotational semantics at all?
 * -- apfe&lambda;mus 16:59, 6 September 2007 (UTC)
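The distinction also exists in Haskell, where the function space is lifted and seq can observe the extra &perp; (a sketch of the situation above, with invented names; it is not the ML example itself):

```haskell
import Control.Exception (SomeException, evaluate, try)

fBottom, fConstBottom :: Int -> Int
fBottom      = undefined        -- the function itself is ⊥
fConstBottom = \_ -> undefined  -- a defined function returning ⊥ everywhere

main :: IO ()
main = do
  r1 <- try (evaluate (fBottom `seq` "defined"))
          :: IO (Either SomeException String)
  r2 <- try (evaluate (fConstBottom `seq` "defined"))
          :: IO (Either SomeException String)
  putStrLn (either (const "bottom") id r1)   -- prints bottom
  putStrLn (either (const "bottom") id r2)   -- prints defined
```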


 * I'd recommend removing the section and replacing it with one entitled "Emulating non-strict behavior in strict languages" which just demonstrates the trick.
 * Rebooted 15:31, 9 September 2007 (UTC)


 * On second thought, I'm not sure whether this solves the problem, because the issue is even larger now: the discussion of "strict language" is wrong, too. I mean, the whole chapter implicitly assumes that  which doesn't work for call-by-value. Also, the trick still works thanks to some kind of non-strictness, namely the fact that   in contrast to.
 * It's probably best to look up the exact denotational semantics of call-by-value in a paper/book to figure out how (or whether) to present strict semantics.
 * --apfe&lambda;mus 08:33, 10 September 2007 (UTC)


 * Self-note: fixpoints in call-by-value are fine, it's just that the iterated functional is not strict in the value (and thus not internal to the language). In other words, a function  that starts the iteration with &perp; would be rather useless in a call-by-value language.
 * --apfe&lambda;mus 20:05, 6 October 2007 (UTC)

Partiality of $$\sqsubseteq$$ (in "Strict functions")
FIXME: $$\sqsubseteq$$ defines a totally ordered set (toset) on [ ], so we have either $$2 \sqsubseteq 5$$ or $$5 \sqsubseteq 2$$ or both; if both were true, then 2 = 5, which would effectively be a contradiction, but we can still have either $$2 \sqsubseteq 5$$ or $$5 \sqsubseteq 2$$ where one of these two predicates is true and the other is false. Such a thing is easy to define by giving a semantics to $$x \sqsubseteq y$$, where x and y are both non-undefined integers, using a "friend" toset on [ ]-{ } (such as $$\le$$). So 2 and 5 can be compared with $$\sqsubseteq$$.

Isn't there a confusion here between the toset $$\sqsubseteq$$ and the poset $$\sqsubset$$ ?

-- 90.50.99.233


 * Eh? The main point about $$\sqsubseteq$$ (i.e. $$\sqsubset$$) is that it's not a total ordering, as exemplified in Haskell/Denotational semantics. The choice that neither $$2 \sqsubset 5$$ nor $$2 \sqsupset 5$$ holds is intentional; the semantic approximation order only says whether one value (like &perp;) is an approximation to another one (like  ). The usual ordering $$\leq$$ of integers is irrelevant here.
 * --apfe&lambda;mus 20:02, 6 October 2007 (UTC)

Definition of $$\sqsubseteq$$
The definition of $$\sqsubseteq$$ seems pretty counter-intuitive to me: are not $$2$$ and $$5$$ equally defined?


 * $$2=5$$ still means "equal". In other words, $$\sqsubseteq$$ is "the same value, less defined" but not "some values ordered by some degree of definedness". apfe&lambda;mus 08:54, 18 June 2008 (UTC)

There seem to be several people confused about the details of the $$\sqsubseteq$$ relation (there's a chat about it under the 'Partial Functions and Approximation Order' heading, and I found it non-obvious upon first reading as well). Perhaps it would be clearer if the $$\sqsubseteq$$ relation were defined in one go instead of being spread over several examples - e.g. $$a \sqsubseteq b$$ iff "you can replace parts of b with $$\bot$$ to get a" Yet Another Haskell User (discuss • contribs) 12:40, 21 August 2020 (UTC)

Functions with several Arguments
Should not the code associated with "We see that it is strict in  depending on whether ..." be

 cond True x &perp;  = x
 cond False x &perp; = &perp;

instead of the current one? --AndresSR (talk) 21:46, 28 January 2008 (UTC)


 * That's right, thanks :-) Fixed now. -- apfe&lambda;mus 08:23, 29 January 2008 (UTC)
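For reference, the behaviour described above can be checked directly (a sketch, assuming cond is defined by the usual pattern match):

```haskell
cond :: Bool -> a -> a -> a
cond True  x _ = x
cond False _ y = y

-- cond True x ⊥ = x and cond False x ⊥ = ⊥: whether cond is strict in its
-- third argument depends on the value of the first.
example :: Int
example = cond True 1 undefined   -- evaluates to 1
```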

take 2 (map (+1) nats) = take 3 nats

Is this really true? Doesn't the left expression evaluate to [2, 3] while the right expression evaluates to [1, 2, 3]?

--Md2perpe (talk) 00:03, 17 February 2008 (UTC)


 * Oops, that's wrong of course. I'm going to fix that. -- apfe&lambda;mus 22:01, 20 February 2008 (UTC)
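Assuming nats = [1, 2, 3, ...] as in the discussion above, the two sides can be compared directly (a sketch):

```haskell
nats :: [Integer]
nats = [1 ..]

lhs, rhs :: [Integer]
lhs = take 2 (map (+ 1) nats)   -- [2,3]
rhs = take 3 nats               -- [1,2,3]
```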

Recursive data types and infinite lists diagram
In the illustration of the data type

data List = [] | : List

The value :[] is shown below :_|_. Is this a mistake? Thanks.


 * Oops, it indeed is. -- apfe&lambda;mus 21:11, 8 May 2008 (UTC)

Naïve Sets are unsuited for Recursive Data Types
Nevermind, didn't read carefully enough 76.105.193.33 (talk) 13:45, 22 September 2008 (UTC)

Is g in section 3.1 a monotone functional?
Take f such that f(2)=1 and f = _|_ everywhere else. Now g(f)(2) = 2*f(1) = 2 * _|_ = _|_, which is less defined than f(2) ?


 * Oh. Right, thank you for pointing this out. Looks like there's a fundamental flaw in the argument presented. I'll fix that.
 * -- apfe&lambda;mus 08:06, 12 October 2008 (UTC)
 * Ah wait, no, that's no problem. Monotone doesn't mean that the result is more defined than the original ($$x \sqsubseteq g(x)$$); it means that a larger original will be turned into a larger result ($$x \sqsubseteq y \Rightarrow g(x) \sqsubseteq g(y)$$). So, the fixed point iteration only works if we start with $$x_0=\perp$$.
 * -- apfe&lambda;mus 12:24, 12 October 2008 (UTC)

Functions is a (d)cpo
I don't understand why the fact that monotone sequences may be of infinite length implies that the functions  form a (d)cpo.

$$\forall n \in \mbox{ Integer }. \sup_{\sqsubseteq} \{\dots f_{j-1}(n) \sqsubseteq f_j(n) \sqsubseteq f_{j+1}(n) \sqsubseteq \dots\} = i_n$$

$$f(n) = i_n$$

$$\forall j \mbox{ is index of chain }. \forall m \in \mbox{ Integer }. f_j(m) \sqsubseteq i_m \Rightarrow \forall j \mbox{ is index of chain }. f_j \sqsubseteq f$$

$$\therefore f \mbox{ is upper bound of } \{\dots f_{j-1}(n) \sqsubseteq f_j(n) \sqsubseteq f_{j+1}(n) \sqsubseteq \dots\}$$

Define $$k \sqsubset g \mbox{ if } \forall x. k(x) \sqsubseteq g(x) \mbox{ and } \exists y. k(y) \sqsubset g(y)$$

$$\forall h. h \sqsubset f \Rightarrow \exists n. h(n) \sqsubset f(n) = i_n$$

$$\Rightarrow h(n) \mbox{ is not upper bound of chain } f_j(n) \Rightarrow h \mbox{ is not upper bound of chain } f_j$$

$$\therefore \sup_{\sqsubseteq} \{\dots f_{j-1} \sqsubseteq f_j \sqsubseteq f_{j+1} \sqsubseteq \dots\} = f$$

$$\therefore$$ every chain of functions  has a least upper bound. Therefore, the functions  form a (d)cpo. --LungZeno (talk) 03:55, 7 November 2008 (UTC)


 * What the text wants to say is the following: if monotone sequences are always finite, like in the case of, then it's clear that the ordering is complete. Of course, this doesn't work for  , but there is a (slightly) different argument that does show that the partial functions are a cpo, too.
 * apfe&lambda;mus 09:51, 7 November 2008 (UTC)


 * Does "finite" mean that number of element in a monotone sequence is finite?
 * Or mean that number of monotone sequences in  is finite?
 * --LungZeno (talk) 11:10, 7 November 2008 (UTC)
 * I saw the explanation.
 * You mean that the number of elements in every monotone sequence in  is finite.
 * --LungZeno (talk) 11:42, 7 November 2008 (UTC)


 * My proof is wrong. Because
 * $$\forall h . h \sqsubset f \Rightarrow \exists n . h(n) \sqsubset f(n) = i_n$$
 * $$\Rightarrow h(n) \mbox{ is not upper bound of chain } f_j(n) \Rightarrow h \mbox{ is not upper bound of chain } f_j$$
 * $$\therefore \sup_{\sqsubseteq} \{\dots f_{j-1} \sqsubseteq f_j \sqsubseteq f_{j+1} \sqsubseteq \dots\} = f$$
 * only proves that every chain of functions  has a minimal upper bound.
 * Changing my proof to
 * $$\forall n \in \mbox{ Integer } . \mbox{ cardinal of } \{f_j(n) | f_j(n) \sqsubseteq f_{j+1}(n)\} \mbox{ is finite }$$
 * $$\Rightarrow \mbox{ supremum } i_n \in \{f_j(n) | f_j(n) \sqsubseteq f_{j+1}(n)\}$$
 * $$\therefore \forall j, h , n. f_j \sqsubseteq h \Rightarrow f_j(n) \sqsubseteq h(n) \Rightarrow i_n \sqsubseteq h(n) \Rightarrow f(n) \sqsubseteq h(n) \Rightarrow f \sqsubseteq h$$
 * $$\therefore$$every chain of functions  has a least upper bound.
 * --LungZeno (talk) 12:24, 7 November 2008 (UTC)

Why is Nothing less defined than Just True?
And why is it at the same level as Just _|_ ? --Colin-adams (talk) 21:03, 22 November 2008 (UTC)


 * It's not. Nothing and Just True are incomparable, and Just &perp; is less defined than Just True.
 * In the diagrams, "every link between two nodes specifies that the one above is more defined than the one below" (Section 2.2). Should there be a reminder in the section about data types?
 * -- apfe&lambda;mus 10:09, 23 November 2008 (UTC)
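The incomparability can be demonstrated with a function that tells the two values apart (a sketch; distinguish is an invented name):

```haskell
import Data.Maybe (isNothing)

distinguish :: Maybe Bool -> String
distinguish m = if isNothing m then "Nothing" else "Just something"

-- distinguish Nothing == "Nothing", and distinguish (Just undefined) ==
-- "Just something" (the contents of the Just are never forced). If
-- Nothing ⊑ Just ⊥ held, monotonicity would force "Nothing" ⊑
-- "Just something", which is false.
```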

Haskell Recursion Example
I think it might be useful to point out that the "let x = f x in x" construction for the "fix" function actually uses the same least fixpoint semantics under the hood. Thus "x = f x" semantically means that "x = fix(g)", where g = \x -> f x, leading to x = lfp(f); thus you are using the same semantics for recursive functions to denote recursive elements.

I think some extra explanation such as the above is really useful, as I can see many people becoming puzzled by this "let x = f x in x" construction, because there could be many fixpoints of f and it is not clear that the Haskell semantics for functions are the same for these values. jjanssen (talk) 18:51, 2 April 2009 (UTC)


 * Sounds good. A new subsection on how the discussion applies to Haskell (touching on the examples,   and  ) in the recursion chapter is probably the best way to incorporate that.
 * -- apfe&lambda;mus 20:51, 2 April 2009 (UTC)
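A minimal sketch of the construction under discussion: fix written with the recursive let denotes the least fixed point of its argument.

```haskell
-- fix via the recursive let from the discussion above.
fix :: (a -> a) -> a
fix f = let x = f x in x

-- The least fixed point of (1:) is the infinite list of ones:
onesPrefix :: [Integer]
onesPrefix = take 3 (fix (1 :))   -- [1,1,1]
```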

Request additional Algebraic Data Types examples
More examples of data types (such as bool, lifted bool, unit, nat) and operations (lift, cartesian conjunction, square, disjunctive join) in the form of trees (one picture is worth a thousand words...) would help in understanding the topic. Can more examples be added? --194.1.130.37 (talk) 09:48, 16 November 2010 (UTC)

"Recursive Data Types and Infinite Lists", the diagram
There is a mistake in the diagram in the chapter "Recursive Data Types and Infinite Lists": the nodes ":⊥" and ":[]" are swapped. Does anyone have a source code for that diagram? --Beroal (discuss • contribs) 15:07, 23 January 2011 (UTC)


 * I think it was an SVG file; maybe it's still available? In any case, I very much like the diagrams by E.Z. Yang. Maybe we can ask him to release them with a CC license?
 * --apfe&lambda;mus 08:16, 27 January 2011 (UTC)

Syntax and semantics
The word semantics is usually singular, much like physics: “Denotational semantics gives meanings to programs,” “this language has this semantics,” etc. The CS literature is consistent on this point AFAIK. (The word can also be plural: “We consider several possible semantics, including call-by-value and call-by-name.” But that's not the usage anywhere in this text.)

Luke Maurer (discuss • contribs) 08:18, 2 February 2011 (UTC)

A "wrong" section?
I just noticed that someone added a brisk "This section is wrong." note to the top of the "Not all Functions in Strict Languages are Strict" section. I don't know enough theory to evaluate that statement; so could anyone more knowledgeable than me tell us whether that section is wrong (and, if so, why)? Thanks, Duplode (discuss • contribs) 04:51, 25 April 2012 (UTC)

Section missing
In Section 6.3 "Naïve Sets are unsuited for Recursive Data Types" there's a link to Naïve Sets are unsuited for Recursive Definitions, but it does not exist.
 * The section was renamed to What to choose as Semantic Domain?. The link is now fixed. Duplode (discuss • contribs) 03:14, 11 April 2014 (UTC)

Stumbling block: Partial Functions and Approximation Order
Under "Partial Functions and Approximation Order", I found the section discussing $$\sqsubseteq$$, rather indigestible. We have the para:


 * This is contrasted to ordinary order predicate $$\le$$, which can compare any two numbers. A quick way to remember this is the sentence: "1 and 2 are different in terms of information content but are equal in terms of information quantity". That's another reason why we use a different symbol: $$\sqsubseteq$$.

"A quick way to remember this" -- What is "this" in this sentence? Does it pertain to the immediately preceding $$\le$$? Or is it trying to describe $$\sqsubseteq$$ ? And what is content vs quantity?

Further, the sentence "1 and 2 are [...in some sense equal, in another sense not equal...]" is followed by this:


 * neither $$1 \sqsubseteq 2$$ nor $$2 \sqsubseteq 1$$ hold, but $$1 \sqsubseteq 1$$ holds.

Huh? How does this statement using $$\sqsubseteq$$, where both expressions involving 1 and 2 do not hold, relate to the previous sentence which says that something about 1 and 2 does hold?

Hopefully someone can clarify this. Gwideman (discuss • contribs) 06:17, 5 February 2013 (UTC)

-- I am similarly confused by this section. First it defines $$\sqsubseteq$$ as: ... $$a\sqsubseteq b$$ will denote that either $$b$$ is more defined than $$a$$ or both are equal (and so have the same definedness).

A little later we read: neither $$1 \sqsubseteq 2$$ nor $$2 \sqsubseteq 1$$ hold,

I would think that given the definition of $$\sqsubseteq$$ that both $$1 \sqsubseteq 2$$ and $$2 \sqsubseteq 1$$ would hold. Aren't $$1$$ and $$2$$ equally defined, so both the relations hold?

Thanks in advance. --Dougc (discuss • contribs) 11:57, 13 April 2013 (UTC)

-- disclamer: I'm no expert

If both $$1 \sqsubseteq 2$$ and $$2 \sqsubseteq 1$$ held, then the antisymmetry law of posets would be broken, since $$1 \neq 2$$.

Antisymmetry: if both $$x \sqsubseteq y$$ and $$y \sqsubseteq x$$ hold, then $$x$$ and $$y$$ must be equal: $$x=y$$.

So you either don't define $$1 \sqsubseteq 2$$ and $$2 \sqsubseteq 1$$, or you say both are false. In both cases you break a law of totally ordered sets, which seems to be the reason Integer is a poset instead of a totally ordered set.

Maybe you could just deny that Integer is a poset (or total order) and make $$1 \sqsubseteq 2 \land 2 \sqsubseteq 1$$ true, but I guess that would be unnecessarily complicated.

--89.246.167.57 (discuss) 19:48, 14 July 2013 (UTC)

Why is f0 = _|_ and why do we have to start at f0?
Section: Approximations of the Factorial Function.

And why $$\bot=f_0 \sqsubseteq f_1 \sqsubseteq f_2 \sqsubseteq \dots $$?

$$f \sqsubseteq g \mbox{ if } \forall x. f(x) \sqsubseteq g(x)$$ is not true here.

For example $$ f_2(0) \not \sqsubseteq f_3(2) $$ or  $$ f_1(0) \not \sqsubseteq f_2(\bot) $$

--89.246.167.57 (discuss) 20:30, 14 July 2013 (UTC)

Interpreting factorial as a set
The article currently says:

Now that we have a means to describe partial functions, we can give an interpretation to recursive definitions. Lets take the prominent example of the factorial function $$f(n)=n!$$ whose recursive definition is

$$f(n) = \mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n \cdot f(n-1)$$

Although we saw that interpreting this directly as a set description may lead to problems, …

When did we see that? What is that referring to? Is it another article? (In that case, I think a link should be there.) Or is it something that was removed since? (Then this should be rewritten.) Or did I just miss something? Svick (discuss • contribs) 22:04, 19 March 2014 (UTC)
 * The reference is to the discussion about recursive functions in Haskell/Denotational semantics. I reworded the passage slightly to make it more obvious. Duplode (discuss • contribs) 03:18, 11 April 2014 (UTC)

Bottom type vs. Bottom value
Someone should note the difference between the bottom value (without type) of this article and the bottom type (requiring subtyping) from Wikipedia: Bottom type. At first I assumed they were the same and couldn't follow the point of creating a bottom value requiring definedness to avoid solving the halting problem, when you could just create a bottom type without value.


 * On a tangential note, the type with no values (sans bottom) is usually known in Haskell as Void. --Duplode (discuss • contribs) 00:29, 11 May 2019 (UTC)

Approximations of the Factorial Function: Use of next/previous
There are two ways to iterate the factorial function:

$$f(n) = \mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n \cdot f(n-1)$$

The first is the operational approach:

$$f(n) = n \cdot T, n \cdot \left [ (n-1) \cdot T \right ], \dots, n \cdot \ldots \cdot (3 \cdot (2 \cdot 1))$$

The second is the denotational approach:

$$f(n) = \bot, g(\bot), g(g(\bot)), \dots$$

The operational approach iterates from $$n$$ towards $$\bot$$ while the denotational approach iterates from $$\bot$$ towards $$n$$. In both cases the function call order remains unchanged: call the next function with diminishing $$n$$, i.e. $$n-1$$. What changes is the definition of 'next' or 'previous'. In the operational case, next is towards $$\bot$$. In the denotational case, next is towards $$n$$. Try as an exercise recreating the steps using:

$$f(n) = f(n+1)$$

In this sentence:

"we calculate a sequence of functions $f_k$ with the property that each one consists of the right hand side applied to the previous one"

the use of 'previous' corresponds to the denotational approach. Intepreted operationally it seems that the function call order has to be reversed, which means:

$$f_{k+1}(n) = f_k(n-1)$$

This is wrong! The correct solution is simply:

$$f_{k+1}(n) = f_k(n+1)$$

This mistake isn't possible with the factorial function because its call order can't be inverted (possibly related to the fact that it converges).


 * I believe that sentence of the text can be made clearer by replacing "one" with "function", so I'm doing that. --Duplode (discuss • contribs) 05:50, 6 May 2019 (UTC)

Monotonicity Inverse
It is also crucial that the inverse does not hold:

$$ x \not\sqsubseteq y \not\Rightarrow f(x) \not\sqsubseteq f(y) $$

For example, $$1 \not\sqsubseteq 2$$ does not imply $$f(1) \not\sqsubseteq f(2)$$. If that were the case, then $$f(1) \neq f(2) \land f(1) \not\sqsubset f(2)$$, and these examples would not be valid:

$$ f(1) = 2, f(2) = 2 $$

$$ f(1) = \bot, f(2) = 2 $$

Convergence Terminology
A directed-complete partial order refers to a partial order in which every directed subset ("every monotone sequence") is complete ("has a least upper bound"). A dcpo with a least element is termed a pointed directed-complete partial order, which this book abbreviates simply as cpo ("we will only meet dcpos which have a least element $$\bot$$ which are called complete partial orders (cpo)"). See Wikipedia: Complete Partial Order.

Continuity refers to Scott continuity. See Wikipedia: Scott Continuity.

Interpretation as Least Fixed Point: Multiple fixed points
The example with multiple fixed points is not an assignment of $$n \mapsto \dots$$ to $$f$$ (that would use $$:=$$ rather than $$=$$); rather, it defines a fixed-point equality (i.e., an equation):

$$f = n\mapsto\mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } f(n+1)$$,

The corresponding solution would be a fixed point to:

$$g(x) = n\mapsto\mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } x(n+1)$$,

Here are two fixed-point solutions (thanks to monochrom). The first has a correspondence from  and is also the least fixed point:

$$ f(n) = \begin{cases} 1 & \mbox{ if } n \leq 0 \\ \bot & \mbox{ else } \end{cases} $$

This second solution is clearly more defined than the first:

$$ f(n) = \begin{cases} 1 & \mbox{ if } n \leq 0 \\ 3 & \mbox{ else } \end{cases} $$

Strict Functions: power2
The question regarding power2 was somewhat vague. Yes, a proof via $$2^n$$ is possible, but what does a proof involving fixed point iteration mean? If you iterate this functional:

$$ g(x) = n \mapsto \mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } 2 \cdot x(n-1) $$

it converges to the function $$n \mapsto 2^n$$ (with $$\bot$$ for $$n < 0$$). If it weren't for the if-then-else stop condition, it would simply converge to $$\bot$$. Unfortunately, these iterations don't answer the question; rather, the intention was simply to iterate `power2` starting with the input $$\bot$$.
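The iteration can be played with in GHC: the k-th iterate is defined exactly on the inputs $$0$$ to $$k-1$$ (a sketch; `approximations` is a name invented here):

```haskell
-- The functional from the discussion:
g :: (Integer -> Integer) -> Integer -> Integer
g x n = if n == 0 then 1 else 2 * x (n - 1)

-- Fixed-point iteration starting from the everywhere-undefined function ⊥:
approximations :: [Integer -> Integer]
approximations = iterate g (\_ -> undefined)

-- (approximations !! k) n is 2^n for 0 <= n < k, and ⊥ otherwise.
```

For instance, `(approximations !! 3) 2` evaluates to `4`, while `(approximations !! 3) 3` diverges.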

Recursive Data Types and Infinite Lists: iterate in Haskell
I think it's interesting to see a fixed-point iteration of `ones` in Haskell:

```haskell
ones :: [Integer]
ones = 1 : ones

g :: [Integer] -> [Integer]
g x = 1 : x

x0 :: [Integer]
x0 = undefined

(f0:f1:f2:f3:f4:fs) = iterate g x0
```

Recursive Data Types and Infinite Lists: Exercise #4 possible solution
$$\sup_{\sqsubseteq} \{\bot \sqsubseteq (1:\bot = \bot) \sqsubseteq (1:1:\bot = \bot) \sqsubseteq (1:1:1:\bot = \bot) \sqsubseteq \dots\} = \bot $$

Hidden assumption
Expanding the definition of factorial, $$f_{k+1}(n) = \mbox{ if } n == 0 \mbox{ then } 1 \mbox{ else } n \cdot f_k(n-1)$$, at some point we have to assume that $$n \cdot \bot = \bot$$, e.g. in $$f_1 (2) = 2 \cdot f_0 (1) = 2 \cdot \bot$$. Is this something obvious?

(Bartosz (discuss • contribs) 19:34, 7 September 2019 (UTC))


 * Fortunately, almost exactly this gets explained a few paragraphs later, in the 'Strict Functions' section, which shows why $$n \cdot \bot = \bot$$.


 * Unfortunately, it does come a few paragraphs too late. Yet Another Haskell User (discuss • contribs) 12:22, 24 August 2020 (UTC)
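The assumption $$n \cdot \bot = \bot$$ is exactly the strictness of multiplication, which can be observed empirically in GHC, using `undefined` as a stand-in for $$\bot$$ and catching the error it raises (a sketch; treating a caught exception as "hit $$\bot$$" is an approximation, and `hitsBottom` is a name invented here):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Returns True iff forcing the value hits ⊥ (here: raises an exception).
hitsBottom :: Integer -> IO Bool
hitsBottom v = do
  r <- try (evaluate v) :: IO (Either SomeException Integer)
  return (either (const True) (const False) r)

-- (*) on Integer is strict in both arguments, so 2 * undefined is ⊥.
```

`hitsBottom (2 * undefined)` gives `True`, while `hitsBottom (2 * 3)` gives `False`.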

Stumbling Block: sup function reads like argmax
The `sup` function for finding the supremum of a set is used in the "Convergence" section without explaining what it does.

On first reading, it looks like it just selects a maximum element from a (potentially infinite) set (i.e. `sup`=`argmax`), whereas in fact one of its critical properties is that it gets to draw its results from some broader set than that which it is passed.

The term 'least upper bound' is mentioned a few times, but doesn't help all that much for a first-time reader since if you don't know it has a precise technical meaning, it's very easy to just consider it a synonym for 'maximum'.

I suggest adding a few words to emphasize that the supremum is allowed to be from outside the set in question. Yet Another Haskell User (discuss • contribs) 12:48, 21 August 2020 (UTC)
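The point can be illustrated with the chain of approximations of `ones` from the book: every element of the chain ends in $$\bot$$, yet the supremum is the totally defined infinite list, which is not itself a member of the chain (a sketch; the names `approx` and `supChain` are invented here):

```haskell
-- The k-th element of the chain: k ones followed by ⊥.
approx :: Int -> [Integer]
approx 0 = undefined
approx k = 1 : approx (k - 1)

-- The supremum: infinitely many ones, strictly above every approx k,
-- yet not equal to any of them.
supChain :: [Integer]
supChain = 1 : supChain
```

`take 3 (approx 5)` is `[1,1,1]`, and so is `take 3 supChain`; but `take 6 (approx 5)` diverges, while `take 6 supChain` does not.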

'Interpretation as Least Fixed Point' subsection seems out of place
The whole of the 'Recursive Definitions as Fixed Point Iterations' section seems like its purpose is to assemble all the mathematical machinery needed to give recursive functions a solid denotation (it even says as much at the start - "Now that we have the means to describe partial functions, we can give an interpretation to recursive definitions. ").

Once we get to the end of reading the 'Convergence' subsection, it seems like the problem is solved; i.e. we can define the denotation of a recursive function to be the supremum of a particular infinite set of partial functions.

As such, it's quite confusing to read on and find that there is a whole other (quite long) subsection about fixed point iterations and continuity.

It's not at all clear to me why it's necessary to have this subsection at all - the preceding text seems to offer a satisfying enough explanation.

Is the subsection there just because it happens to be interesting that recursive functions satisfy a fixed-point equation? Or is the subsection necessary to prove some details (e.g. existence, uniqueness) which the previous explanation doesn't?

It would be helpful either to add a short intro explaining why we can't just stop after the 'Convergence' (+ 'Bottom includes non-termination') subsections, or to move the subsection into the 'Other Selected Topics' section, where it wouldn't break the article's flow as much. Yet Another Haskell User (discuss • contribs) 13:26, 21 August 2020 (UTC)


 * After doing some further reading, I get the impression that:
 * - The supremum-based definition is actually OK for defining the denotation
 * - Interpretation as a fixed point is mainly of use for manually simplifying the denotation of particular expressions to something usable in proofs about (e.g.) program correctness


 * Is this the case? Yet Another Haskell User (discuss • contribs) 12:28, 26 August 2020 (UTC)

Is 'converges' the best term to use?
This may be nit-picking, but using the word 'converges' (e.g. "... the sequence converges to the factorial function ...") seems to suggest that there is a metric space involved.

I know next to nothing about metric spaces, but the general idea seems to be that 'points' (here, partial functions) need to have 'distances' between them (difference in definedness?). And that it's necessary to have all of this (plus some axioms) in order to be using the word 'converges' precisely.

While this may well be the case, it seems a bit beyond the scope of this article.

How about using a more informal word like 'approaches'? Yet Another Haskell User (discuss • contribs) 14:10, 21 August 2020 (UTC)