F Sharp Programming/Computation Expressions

Computation expressions are easily the most difficult, yet most powerful, language construct to understand in F#.

Monad Primer
Computation expressions are inspired by Haskell monads, which in turn are inspired by the mathematical concept of monads in category theory. To avoid all of the abstract technical and mathematical theory underlying monads, a "monad" is, in very simple terms, a scary-sounding word which means "execute this function and pass its return value to this other function".


 * Note: The designers of F# use the terms "computation expression" and "workflow" because they are less obscure and daunting than the word "monad", and because monads and computation expressions, while similar, are not precisely the same thing. The author of this book prefers "monad" to emphasize the parallel between F# and Haskell (and, strictly as an aside, because it's just a neat-sounding five-dollar word).

Monads in Haskell

Haskell is interesting because it's a functional programming language where all statements are executed lazily, meaning Haskell doesn’t compute values until they are actually needed. While this gives Haskell some unique features such as the capacity to define "infinite" data structures, it also makes it hard to reason about the execution of programs since you can't guarantee that lines of code will be executed in any particular order (if at all).

Consequently, it's quite a challenge to do things which need to be executed in a sequence, which includes any form of I/O, acquiring lock objects in multithreaded code, reading/writing to sockets, and any conceivable action which has a side-effect on memory elsewhere in our application. Haskell manages sequential operations using something called a monad, which can be used to simulate state in an immutable environment.

Visualizing Monads with F#

To visualize monads, let's take some everyday F# code written in imperative style:
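A minimal sketch of such imperative code (the helper names read_line and print_string are illustrative, not taken from any library) might be:

```fsharp
let read_line () = System.Console.ReadLine()
let print_string (s : string) = printf "%s" s

// Plain imperative style: each statement runs in sequence,
// and intermediate values are bound with ordinary lets.
let main () =
    print_string "What's your name? "
    let name = read_line ()
    print_string ("Hello, " + name)
```

Calling main() prompts for a name and greets the user.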

We can re-write these functions to take an extra parameter, namely a continuation function to execute once our computation completes. We'd end up with something that looks more like this:
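A continuation-passing sketch of the same program (again, the names are illustrative) could look like:

```fsharp
// Each function takes an extra parameter: the function to call
// once its own work is done. Sequencing is now explicit.
let read_line (cont : string -> unit) =
    cont (System.Console.ReadLine())

let print_string (s : string) (cont : unit -> unit) =
    printf "%s" s
    cont ()

let main () =
    print_string "What's your name? " (fun () ->
        read_line (fun name ->
            print_string ("Hello, " + name) (fun () -> ())))
```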

If you can understand this much, then you can understand any monad.

Of course, it is perfectly reasonable to say ''what masochistic reason would anyone have for writing code like that? All it does is print out "Hello, Steve" to the console!'' After all, C#, Java, C++, and other languages we know and love execute code in exactly the order specified—in other words, monads solve a problem in Haskell which simply doesn't exist in imperative languages. Consequently, the monad design pattern is virtually unknown in imperative languages.

However, monads are occasionally useful for modeling computations which are difficult to capture in an imperative style.

The Maybe Monad

A well-known monad, the Maybe monad, represents a short-circuited computation which should "bail out" if any part of the computation fails. Using a simple example, let’s say we wanted to write a function which asks the user for 3 integer inputs between 0 and 100 (inclusive) -- if at any point, the user enters an input which is non-numeric or falls out of our range, the entire computation should be aborted. Traditionally, we might represent this kind of program using the following:
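One sketch of such a program (readNum is a hypothetical helper built on Int32.TryParse) uses nested matches:

```fsharp
// Returns Some n for a valid integer in [0, 100], None otherwise
let readNum () =
    printf "Enter a number between 0 and 100: "
    match System.Int32.TryParse(System.Console.ReadLine()) with
    | true, n when n >= 0 && n <= 100 -> Some n
    | _ -> None

// Bail out as soon as the first invalid value appears
let imperativeSum () =
    match readNum () with
    | Some x ->
        match readNum () with
        | Some y ->
            match readNum () with
            | Some z -> printfn "Sum: %d" (x + y + z)
            | None -> printfn "Invalid input"
        | None -> printfn "Invalid input"
    | None -> printfn "Invalid input"
```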


 * Note: Admittedly, the simplicity of this program -- grabbing a few integers -- is ridiculous, and there are many more concise ways to write this code by grabbing all of the values up front. However, it might help to imagine that reading each value was a relatively expensive operation (maybe it executes a query against a database, sends and receives data over a network, or initializes a complex data structure), and the most efficient way to write this program requires us to bail out as soon as we encounter the first invalid value.

This code is very ugly and redundant. However, we can simplify this code by converting it to monadic style:
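A sketch of the monadic version, with a hypothetical readNum helper and a hand-written bind function:

```fsharp
let readNum () =
    printf "Enter a number between 0 and 100: "
    match System.Int32.TryParse(System.Console.ReadLine()) with
    | true, n when n >= 0 && n <= 100 -> Some n
    | _ -> None

// bind extracts the value from an option and feeds it to the next
// step; a None anywhere short-circuits the rest of the computation.
let bind value func =
    match value with
    | Some x -> func x
    | None -> None

let monadicSum () =
    bind (readNum ()) (fun x ->
        bind (readNum ()) (fun y ->
            bind (readNum ()) (fun z ->
                printfn "Sum: %d" (x + y + z)
                Some ())))
```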

The magic is in the bind function: it extracts the return value from the computation passed as its first argument and passes it (or binds it) as the parameter to the next function in the chain.

Why use monads?

The code above is still quite extravagant and verbose for practical use; however, monads are especially useful for modeling calculations which are difficult to capture sequentially. Multithreaded code, for example, is notoriously resistant to efforts to write it in an imperative style; however, it becomes remarkably concise and easy to write in monadic style. Let's modify our bind method above as follows:
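One possible modification (a sketch using the .NET thread pool):

```fsharp
open System.Threading

// Run the continuation on a thread-pool thread
// instead of the current one.
let bind (value : 'a) (func : 'a -> unit) =
    ThreadPool.QueueUserWorkItem(fun _ -> func value) |> ignore
```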

Now our bind method will execute a function in its own thread. Using monads, we can write multithreaded code in a safe, imperative style. Here's an example in fsi demonstrating this technique:
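A session along these lines (the URLs and helper names are illustrative; WebClient is used for brevity) might look like:

```fsharp
open System.Net
open System.Threading

let bind (value : 'a) (func : 'a -> unit) =
    ThreadPool.QueueUserWorkItem(fun _ -> func value) |> ignore

// Print which thread each download starts and finishes on
let download (url : string) =
    printfn "Downloading %s on thread %d" url Thread.CurrentThread.ManagedThreadId
    use client = new WebClient()
    let html = client.DownloadString(url)
    printfn "Got %d chars from %s on thread %d"
        html.Length url Thread.CurrentThread.ManagedThreadId

["http://www.google.com"; "http://www.microsoft.com"]
|> List.iter (fun url -> bind url download)
```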

It's interesting to notice that Google starts downloading on thread 5 and finishes on thread 11. Additionally, thread 11 is shared between Microsoft, Peta, and Google at some point. Each time we call bind, we pull a thread out of .NET's threadpool; when the function returns, the thread is released back to the threadpool where another piece of work might pick it up again—it's wholly possible for async functions to hop between any number of threads throughout their lifetime.

This technique is so powerful that it's baked into the F# library in the form of the async workflow.

Defining Computation Expressions
Computation expressions are fundamentally the same concept as seen above, although they hide the complexity of monadic syntax behind a thick layer of syntactic sugar. A monad is a special kind of class which must provide, at a minimum, the methods Bind and Return, and often others such as Delay.

We can rewrite our Maybe monad described earlier as follows:
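A sketch of such a builder class (the method names Bind, Return, and Delay follow the computation-expression convention):

```fsharp
type MaybeBuilder() =
    // Called for let!: unwrap the option and feed the
    // value to the rest of the computation
    member this.Bind(m, f) =
        match m with
        | Some x -> f x
        | None -> None
    // Called for return: wrap a plain value back up
    member this.Return(x) = Some x
    // Wraps the body of the workflow
    member this.Delay(f) = f ()

let maybe = MaybeBuilder()
```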

We can test this class in fsi:
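Exercising the builder directly, without syntax sugar, might look like this (sumThree is an illustrative helper):

```fsharp
type MaybeBuilder() =
    member this.Bind(m, f) =
        match m with
        | Some x -> f x
        | None -> None
    member this.Return(x) = Some x
    member this.Delay(f) = f ()

let maybe = MaybeBuilder()

// Chain three option-producing computations by hand
let sumThree a b c =
    maybe.Delay(fun () ->
        maybe.Bind(a, fun x ->
            maybe.Bind(b, fun y ->
                maybe.Bind(c, fun z ->
                    maybe.Return(x + y + z)))))

sumThree (Some 1) (Some 2) (Some 3)   // Some 6
sumThree (Some 1) None (Some 3)       // None
```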

Syntax Sugar
Monads are powerful, but beyond two or three variables, the number of nested functions becomes cumbersome to work with. F# provides syntactic sugar which allows us to write the same code in a more readable fashion. Workflows are evaluated using the form builder { comp-expr }. For example, the following pieces of code are equivalent:
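A sketch of the two equivalent forms, using a maybe builder with Bind, Return, and Delay:

```fsharp
type MaybeBuilder() =
    member this.Bind(m, f) =
        match m with
        | Some x -> f x
        | None -> None
    member this.Return(x) = Some x
    member this.Delay(f) = f ()

let maybe = MaybeBuilder()

// Sugared form
let sugared =
    maybe {
        let! x = Some 5
        let! y = Some 10
        return x + y
    }

// Desugared equivalent
let desugared =
    maybe.Delay(fun () ->
        maybe.Bind(Some 5, fun x ->
            maybe.Bind(Some 10, fun y ->
                maybe.Return(x + y))))
```

Both evaluate to the same option value.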


 * Note: You probably noticed that the sugared syntax is strikingly similar to the syntax used to declare sequence expressions, seq { ... }. This is not a coincidence. In the F# library, sequences are defined as computation expressions and used as such. The async workflow is another computation expression you'll encounter while learning F#.

The sugared form reads like normal F#. The code return x + y behaves as expected, but what is let! doing? Notice that when we write let! x = Some 1, the value x has the type int rather than int option. The construct let! invokes a method called Bind, where the value wrapped inside the option is bound to the parameter passed into the Bind function.

Similarly, return invokes a method called Return. Several other keywords de-sugar to builder methods, including ones you've already seen in sequence expressions, like for and yield, as well as new ones like do! and return!.

This fsi sample shows how easy it is to use our maybe monad with computation expression syntax:
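A sketch of such a session, combining the maybe builder with a hypothetical readNum helper:

```fsharp
type MaybeBuilder() =
    member this.Bind(m, f) =
        match m with
        | Some x -> f x
        | None -> None
    member this.Return(x) = Some x
    member this.Delay(f) = f ()

let maybe = MaybeBuilder()

let readNum () =
    printf "Enter a number between 0 and 100: "
    match System.Int32.TryParse(System.Console.ReadLine()) with
    | true, n when n >= 0 && n <= 100 -> Some n
    | _ -> None

// Reads three numbers; any invalid input aborts the whole workflow
let sumThreeNumbers () =
    maybe {
        let! x = readNum ()
        let! y = readNum ()
        let! z = readNum ()
        return x + y + z
    }
```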

This code does the same thing as the desugared code, only it's much easier to read.

Dissecting Syntax Sugar
According to the F# spec, workflows may be defined with the following members:
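The members a builder may provide are sketched below, with signatures shown schematically over an abstract type M<'a> (consult the spec for the exact forms):

```fsharp
type Builder =
    member Bind       : M<'a> * ('a -> M<'b>) -> M<'b>       // let!, do!
    member Return     : 'a -> M<'a>                          // return
    member ReturnFrom : M<'a> -> M<'a>                       // return!
    member Delay      : (unit -> M<'a>) -> M<'a>             // wraps the body
    member Zero       : unit -> M<'a>                        // empty else branches
    member Combine    : M<unit> * M<'a> -> M<'a>             // sequencing
    member For        : seq<'a> * ('a -> M<unit>) -> M<unit> // for ... do
    member While      : (unit -> bool) * M<unit> -> M<unit>  // while ... do
    member Using      : 'a * ('a -> M<'b>) -> M<'b>          // use, use!
    member TryWith    : M<'a> * (exn -> M<'a>) -> M<'a>      // try ... with
    member TryFinally : M<'a> * (unit -> unit) -> M<'a>      // try ... finally
    member Yield      : 'a -> M<'a>                          // yield
    member YieldFrom  : M<'a> -> M<'a>                       // yield!
```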

These sugared constructs are de-sugared as follows:
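The main translations, shown schematically for a builder instance b, run along these lines:

```fsharp
// let! pat = expr in cexpr   →  b.Bind(expr, fun pat -> cexpr)
// do! expr in cexpr          →  b.Bind(expr, fun () -> cexpr)
// return expr                →  b.Return(expr)
// return! expr               →  b.ReturnFrom(expr)
// yield expr                 →  b.Yield(expr)
// yield! expr                →  b.YieldFrom(expr)
// use pat = expr in cexpr    →  b.Using(expr, fun pat -> cexpr)
// for pat in expr do cexpr   →  b.For(expr, fun pat -> cexpr)
// while expr do cexpr        →  b.While((fun () -> expr), b.Delay(fun () -> cexpr))
// if expr then cexpr         →  if expr then cexpr else b.Zero()
```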

What are Computation Expressions Used For?
F# encourages a programming style called language oriented programming to solve problems. In contrast to a general purpose programming style, language oriented programming centers on programmers identifying the problems they want to solve, then writing domain-specific mini-languages to tackle them, and finally solving the problems in the new mini-language.

Computation expressions are one of several tools F# programmers have at their disposal for designing mini-languages.

It's surprising how often computation expressions and monad-like constructs occur in practice. For example, the Haskell User Group has a collection of common and uncommon monads, including those which compute distributions of integers and parse text. Another significant example, an F# implementation of software transactional memory, is introduced on hubFS.

Additional Resources

 * Haskell.org: All About Monads - Another collection of monads in Haskell.