Talk:Haskell/Laziness

GHCi's :print and :sprint commands
Is there room to use GHCi's :print command to show what thunks look like to GHC? Maybe as an exercise? Something like:

$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> let x = (length [1..5], reverse "olleh")
Prelude> :print x
x = ((_t1::Int),(_t2::[Char]))
Prelude> let y = fst x
Prelude> :print y
y = (_t3::Int)
Prelude> y
5
Prelude> :print x
x = (5,(_t4::[Char]))
Prelude> let y = snd x
Prelude> y
"hello"
Prelude> :print x
x = (5,"hello")
Prelude>

This shows both thunks (_tX) and the shared nature of GHC data structures.

TrueFarnz (discuss • contribs) 17:40, 1 June 2015 (UTC)

WHNF picture
Hi David,

Just thought I might suggest making the WHNF picture correspond to the example, if possible. Best, -- Kowey 06:26, 9 June 2007 (UTC)
 * To add to that, boy am I looking forward to this chapter! I've always had a vague and imprecise understanding about laziness ("sum'ing about thunks") and would love to be more comfortable with the idea -- Kowey 07:17, 9 June 2007 (UTC)


 * The reason I didn't do that is because there would be too many stages of evaluation for (length [1..5], reverse "olleh"). It'd go something like this:

thunk
(thunk, thunk)
(length thunk, thunk)
(length (1:thunk), thunk)
(1 + length thunk, thunk)
(1 + length (2:thunk), thunk)
(1 + 1 + length thunk, thunk)
(1 + 1 + length (3:thunk), thunk)
(1 + 1 + 1 + length thunk, thunk)
(1 + 1 + 1 + length (4:thunk), thunk)
(1 + 1 + 1 + 1 + length thunk, thunk)
(1 + 1 + 1 + 1 + 1, thunk)
(5, thunk)
(5, reverse thunk)
(5, reverse ('o':thunk))
(5, reverse thunk ++ [o])
(5, reverse ('l':thunk) ++ [o])
(5, reverse thunk ++ [l] ++ [o])
(5, reverse ('l':thunk) ++ [l] ++ [o])
(5, reverse thunk ++ [l] ++ [l] ++ [o])
(5, reverse ('e':thunk) ++ [l] ++ [l] ++ [o])
(5, reverse thunk ++ [e] ++ [l] ++ [l] ++ [o])
(5, reverse ('h':thunk) ++ [e] ++ [l] ++ [l] ++ [o])
(5, reverse thunk ++ [h] ++ [e] ++ [l] ++ [l] ++ [o])
(5, reverse [] ++ [h] ++ [e] ++ [l] ++ [l] ++ [o])
(5, [] ++ [h] ++ [e] ++ [l] ++ [l] ++ [o])
(5, "hello")

And even that's skipping the definitions of (+) and (++). Something like that could feasibly go into the article proper, though, what do you think? DavidHouse 10:04, 9 June 2007 (UTC)


 * Ouch. What if we used a shorter example, something like  ?  Also, it might be helpful to have the implementation of reverse somewhere in a sidebox maybe (don't know if that would just be distracting).  But yeah, it would probably be useful to have an example worked out in the main text.  It's also more accessible to people who can't see the graphics -- Kowey 11:02, 9 June 2007 (UTC)

Introduction
I thought it would be useful to start out with a quick motivation section, not a lot of blah blah but just enough to drive home the point that laziness is more about writing nice code than it is about writing efficient code (as far as I understand). I would like to have a very simple, minimal "obvious" example without a lot of explanation. I thought maybe we could lay that example down, not explain it too much, and then add a transition like "so let's see what makes this tick" -- Kowey 07:37, 9 June 2007 (UTC)


 * I'm somewhat unsatisfied with my "it makes the right things efficient enough" because it doesn't capture the idea of making circular code possible in the first place. I wonder if there is a better way to phrase this -- Kowey 11:11, 9 June 2007 (UTC)

What is strict/lazy
Following the introduction, I was thinking that a sort of shallow bullet-pointed overview of laziness and strictness might be helpful, something like:


 * data structures are lazy
 * functions are lazy wrt their arguments
 * pattern matching, however, is strict

Not everything in a lazy language can be lazy, otherwise nothing would ever get done! Somewhere in the language, we need to have something to force things to get evaluated. In Haskell, the thing which forces evaluation is pattern matching. -- Kowey 08:01, 9 June 2007 (UTC)


 * True. I'll add something like this in to the first section. DavidHouse 09:55, 9 June 2007 (UTC)
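For the record, the "pattern matching forces evaluation" point is easy to check with a small sketch (the function names here are illustrative, not from the book):

```haskell
-- A sketch of "pattern matching forces evaluation": an argument
-- that is never matched against a constructor is never evaluated.

-- 'lazyConst' ignores its second argument entirely, so passing
-- 'undefined' there is harmless.
lazyConst :: a -> b -> a
lazyConst x _ = x

-- 'strictHead' matches on the (:) constructor, so the outermost
-- cons cell must be evaluated -- but only that: the element and
-- the tail stay as thunks.
strictHead :: [a] -> a
strictHead (x:_) = x
strictHead []    = error "strictHead: empty list"
```

Here lazyConst 42 undefined yields 42 and strictHead (1 : undefined) yields 1, because in each case only as much as the pattern demands is evaluated.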

Possible split
I've thought about splitting this article into two sections, one that explains the theory behind lazy evaluation and one that will probably get merged with the Strictness article and can talk about the performance implications of laziness. Thoughts? DavidHouse 10:06, 9 June 2007 (UTC)


 * Well, that sounds like a good idea. On the other hand, there is this point that making code lazier can sometimes result in performance gains (but I guess that could appear in the Strictness chapter too).  I would say that if you're going to split the article, try moving those parts directly to Strictness first.  But otherwise, maybe it's best to just leave those bits as stubs and see how the situation clarifies itself as the rest of the article gets written -- Kowey 11:05, 9 June 2007 (UTC)


 * Well, yeah, the idea was to have a "Laziness and nonstrictness" section where we explore why it's important to have nonstrict semantics in Haskell (the kinds of things that are going into your introduction: easier to write concise code, don't have to worry about execution order etc., brief mention of performance gains) and how this is implemented (explain laziness in terms of thunks), then have a different chapter where we look at the performance implications of nonstrictness and cover ways to make your program more strict and ways to make it lazier, and when either is beneficial. DavidHouse 11:34, 9 June 2007 (UTC)


 * Splitting sounds good. I originally intended that the chapter Haskell/Graph reduction covers how functional programs are evaluated at all, i.e. what the different normal forms are and how lazy and eager evaluation work. So, the basic question is "How to evaluate an expression?". It really helps to do several reductions by hand on a piece of paper. Maybe the chapter name is too special, something like Haskell/Evaluating Functional Programs would be more fitting. Mixing _|_ into the reduction strategy discussion is more confusing than helpful, because it has to be mentioned over and over again. But I think it can be put in a separate extra section "Approximating Reduction with Denotational Semantics" or something like that. Basic cheat: assume that Haskell has lazy evaluation but don't mention this false claim :-)
 * The plan for Haskell/Laziness was to showcase the benefits and power of lazy evaluation. There are some points to make: formulating algorithms compositionally without damaging asymptotic time complexity, create your own control structures, infinite data structures, tying knots, memoization. Reasoning about space and time complexity is needed a bit, but detailed methods (from Okasaki's book) are probably best left to a data structure chapter. This means that this chapter should not explain what lazy evaluation is, that has to be done in Haskell/Graph reduction.
 * For Haskell/Strictness, I had in mind that it should explain how to enforce strictness and when or when not we want that at all. So,  is good,   is not, why   is useless, how to get a strict state monad and so on. In other words, this chapter would focus on common cases and deal with finding out what exactly gets evaluated.
 * The plan in a sentence: Haskell/Graph reduction tells how stuff actually works, Haskell/Laziness opens your eyes and Haskell/Strictness helps with day-to-day laziness/strictness sorrows. The chapter titles may not fit, but each chapter can mention the others in the introduction to help the reader find the right one :-)
 * -- a p f e l m u s  21:12, 10 June 2007 (UTC)
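As one concrete instance of the "infinite data structures, memoization" item above (the standard Fibonacci example, not taken from the chapter):

```haskell
-- Infinite data structure plus memoization in one definition:
-- 'fibs' names the whole (infinite) list of Fibonacci numbers, and
-- each element is computed at most once, because the definition
-- shares the list with itself.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

Only the demanded prefix is ever built: take 10 fibs gives [0,1,1,2,3,5,8,13,21,34].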

When to use lazy pattern matching
The article says to use lazy pattern matching when there is only one constructor. But normally (except in the case of tuples) you can declare a type having only one constructor as a newtype, and then there's no pattern match overhead because the constructor doesn't actually exist at runtime! So it seems that it's only useful for tuples - am I wrong?--Greenrd 21:26, 21 June 2007 (UTC)


 * Well, the constructor may have multiple arguments. But of course, you can replace this with a tuple in a newtype:  and lazy pattern match as in.
 * However, lazy pattern matches are also useful with multiple constructors, namely with the "blueprint technique". An intended rewrite of this chapter will incorporate a section on this. In the meantime, see also
 * Haskell wiki - Tying the Knot
 * Re: Optimization problem and Haskell wiki - Separating shape and content
 * Re: Quicksearch vs. lazyness
 * -- apfe&lambda;mus 09:52, 22 June 2007 (UTC)
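To make the comparison above concrete, here is a sketch of the two formulations (the type and function names are made up for illustration):

```haskell
-- With 'data', the constructor exists at runtime, and matching on
-- it forces the value to weak head normal form.
data DPair = DPair Int Int

dMatches :: DPair -> Bool
dMatches (DPair _ _) = True   -- dMatches undefined diverges

-- With 'newtype', the constructor is erased at compile time, so
-- the "match" is free and never forces anything.
newtype NPair = NPair (Int, Int)

nMatches :: NPair -> Bool
nMatches (NPair _) = True     -- nMatches undefined returns True

-- The inner tuple still needs a lazy pattern if its components
-- are to stay unevaluated until used:
nFst :: NPair -> Int
nFst (NPair ~(x, _)) = x
```

So for a single-argument constructor the newtype indeed makes the lazy pattern on the constructor itself unnecessary; the tuple inside is where laziness annotations still matter.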

Another interesting sample where lazy pattern matching is needed is shown here: Lazy patterns. Two mutual recursive functions are defined producing lists, which represent a couple of interacting automata (a "server" and a "client") exchanging tokens; without lazy patterns the two agents would deadlock waiting for each other's token.

-- Blaisorblade (talk) 17:42, 30 December 2007 (UTC)
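A condensed sketch of that interacting-automata example, modelled on the one in the linked "Lazy patterns" section (process and next are placeholder functions):

```haskell
-- A client and a server exchanging values through each other's
-- output streams. The lazy pattern ~(resp:resps) lets the client
-- emit its first request before any response exists.
client :: Int -> [Int] -> [Int]
client initial ~(resp:resps) = initial : client (next resp) resps

server :: [Int] -> [Int]
server (req:reqs') = process req : server reqs'
server []          = []

process, next :: Int -> Int
process req = req + 1   -- the server's reply to a request
next resp   = resp      -- the client's next request

reqs, resps :: [Int]
reqs  = client 0 resps  -- the requests depend on the responses ...
resps = server reqs     -- ... and vice versa
```

take 5 reqs yields [0,1,2,3,4]; remove the ~ and client demands a response before producing its first request, so the two definitions deadlock.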

let (x,y) = ... in
About the following quote from the text:

let (x, y) = (length [1..5], reverse "olleh") in ... «(We'll assume that in the 'in' part, we use x and y somewhere. Otherwise, we're not forced to evaluate the let-binding at all; the right-hand side could have been undefined and it would still work if the 'in' part doesn't mention x or y. This assumption will remain for all the examples in this section.)»

I question the validity of that comment. As I understand it, the text implies that the pattern matching is not run until x and y are used; but a non-lazy pattern match is used, so the right-hand side of the let binding must be evaluated, and cannot be undefined. What is correct to say is that "length [1..5]" and reverse "olleh" could be undefined, if x and y are not used.

-- Blaisorblade (talk) 17:30, 30 December 2007 (UTC)


 * Let bindings - as opposed to case-expressions - are always lazy, so it's indeed true that the right-hand side won't be evaluated until x or y are demanded. But the whole chapter needs a big overhaul anyway, and I plan to rewrite this example. -- apfe&lambda;mus 09:19, 1 February 2008 (UTC)
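The claim is easy to check directly; a minimal sketch (names are illustrative):

```haskell
-- Let bindings are lazy: the right-hand side is untouched until
-- x or y is actually demanded, so even 'undefined' is fine here.
noForce :: Int
noForce = let (x, y) = undefined :: (Int, Int) in 42

-- Demanding x forces the pair and its first component, but the
-- undefined second component is never evaluated.
forceOne :: Int
forceOne = let (x, y) = (1 + 1, undefined :: Int) in x
```

noForce evaluates to 42 and forceOne to 2, exactly because a let pattern binding, unlike a case match, is irrefutable.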

isInfixOf xs [1..]
is presented as if it will terminate, even though [1..] is infinite. I think this is a mistake: it should either return True or not terminate at all. I suggest we show two examples instead: one which makes an infinite list from another infinite list (like map or tail -- filter isn't completely safe), and another which produces a finite result from an infinite list (like take 42 or head).

Loup Vaillant (unsubscribed)


 * Right, the example is not so good. Concerning this subsection, I had examples similar to John Hughes' "Why functional programming matters" in mind. I'll rewrite the stuff here at some point. -- apfe&lambda;mus 09:19, 1 February 2008 (UTC)
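The two suggested kinds of example might look like this (a sketch):

```haskell
-- An infinite list made from another infinite list: map never
-- finishes, but it produces its output incrementally.
doubled :: [Int]
doubled = map (*2) [1..]

-- A finite result produced from an infinite list: take only
-- demands the first few elements.
firstFive :: [Int]
firstFive = take 5 doubled
```

firstFive evaluates to [2,4,6,8,10], whereas something like isInfixOf xs [1..] can only ever return True or diverge.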

"Bad haskellish logic" comments
I removed from the body of the article the accusations of rose-tinted glasses shown in this diff. While that section is not, IMO, significantly misleading, I tried to account partially for the criticism by qualifying one of the claims in the first sentence. --Duplode (discuss • contribs) 09:04, 1 May 2012 (UTC)