Haskell/Applicative functors

When covering the vital Functor and Monad type classes, we glossed over a third type class: Applicative, the class for applicative functors. Like monads, applicative functors are functors with extra laws and operations; in fact, Applicative is an intermediate class between Functor and Monad. Applicative is a widely used class with a wealth of applications. It enables the eponymous applicative style, a convenient way of structuring functorial computations, and also provides means to express a number of important patterns.

Functor recap
We will begin with a quick review of the Functor class chapter. Functor is characterised by the fmap method:

fmap :: Functor f => (a -> b) -> f a -> f b

If a type is an instance of Functor, you can use fmap to apply a function to values in it. Another way of describing fmap is saying that it promotes functions to act on functorial values. To ensure fmap works sanely, any instance of Functor must comply with the following two laws:

fmap id = id                   -- 1st functor law
fmap (g . f) = fmap g . fmap f -- 2nd functor law

Maybe, for example, has a Functor instance, and so we can easily modify the value inside it...

...as long as it exists, of course.
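Both cases, restated as a small compilable sketch (the example values are of our choosing):

```haskell
modified :: Maybe Int
modified = fmap negate (Just 2)   -- Just (-2)

absent :: Maybe Int
absent = fmap negate Nothing      -- still Nothing: there is no value to modify
```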

fmap has an infix synonym, (<$>). It often helps readability, and also suggests how fmap can be seen as a different kind of function application.
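For instance (using a sample value of our choosing):

```haskell
infixStyle :: Maybe Int
infixStyle = negate <$> Just 2    -- the same as fmap negate (Just 2)
```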

Application in functors
As useful as it is, fmap isn't much help if we want to apply a function of two arguments to functorial values. For instance, how could we sum Just 2 and Just 3? The brute force approach would be extracting the values from the Maybe wrapper. That, however, would mean having to do tedious checks for Nothing. Even worse: in a different Functor extracting the value might not even be an option (just think about IO).

We could use fmap to partially apply (+) to the first argument:

fmap (+) (Just 2) :: Num a => Maybe (a -> a)

But now we are stuck: we have a function and a value both wrapped in Maybe, and no way of applying one to the other. The closest we can get to the desired Just 5 is probably this:

fmap (\f -> f 3) (fmap (+) (Just 2)) -- evaluates to Just 5

(note that \f -> f 3 is ($ 3)).

What we would like to have is an operator with a type akin to f (a -> b) -> f a -> f b, to apply functions in the context of a functor. If that operator was called (<*>), we would be able to write:

fmap (+) (Just 2) <*> Just 3
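Checking that this really produces Just 5 (the names here are ours):

```haskell
summed :: Maybe Int
summed = fmap (+) (Just 2) <*> Just 3    -- Just 5

inStyle :: Maybe Int
inStyle = (+) <$> Just 2 <*> Just 3      -- the same, with the (<$>) synonym
```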

Lo and behold - that works!

The type of (<*>) is:

(<*>) :: Applicative f => f (a -> b) -> f a -> f b

(<*>) is one of the methods of Applicative, the type class of applicative functors - functors that support function application within their contexts. Expressions such as (+) <$> Just 2 <*> Just 3 are said to be written in applicative style, which is as close as we can get to regular function application while working with a functor. If you pretend for a moment that the (<$>), (<*>) and Just aren't there, our example looks just like (+) 2 3.

The Applicative class
The definition of Applicative is:

class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

Beyond (<*>), the class has a second method, pure, which brings arbitrary values into the functor. As an example, let's have a look at the Maybe instance:

instance Applicative Maybe where
    pure                  = Just
    (Just f) <*> (Just x) = Just (f x)
    _        <*> _        = Nothing

It doesn't do anything surprising: pure wraps the value with Just; (<*>) applies the function to the value if both exist, and results in Nothing otherwise.
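A few quick checks of that behaviour, with sample values of our choosing:

```haskell
wrapped :: Maybe Int
wrapped = pure 3                    -- Just 3

bothThere :: Maybe Int
bothThere = Just (+1) <*> Just 2    -- Just 3

oneMissing :: Maybe Int
oneMissing = Just (+1) <*> Nothing  -- Nothing
```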

Applicative functor laws
Just like Functor, Applicative has a set of laws which reasonable instances should follow. They are:

pure id <*> v = v                            -- Identity
pure f <*> pure x = pure (f x)               -- Homomorphism
u <*> pure y = pure ($ y) <*> u              -- Interchange
pure (.) <*> u <*> v <*> w = u <*> (v <*> w) -- Composition
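The laws are not enforced by the compiler, but they can be spot-checked on particular values; here is a quick sketch for Maybe (helper names are ours):

```haskell
identityHolds :: Bool
identityHolds = (pure id <*> Just 3) == Just (3 :: Int)

homomorphismHolds :: Bool
homomorphismHolds = (pure negate <*> pure 3) == (pure (negate 3) :: Maybe Int)

interchangeHolds :: Bool
interchangeHolds = (Just negate <*> pure 3) == (pure ($ 3) <*> Just negate :: Maybe Int)
```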

Those laws are a bit of a mouthful. They become easier to understand if you think of pure as a way to inject values into the functor in a default, featureless way, so that the result is as close as possible to the plain value. Thus:


 * The identity law says that applying the pure id morphism does nothing, exactly like with the plain id function.
 * The homomorphism law says that applying a "pure" function to a "pure" value is the same as applying the function to the value in the normal way and then using pure on the result. In a sense, that means pure preserves function application.
 * The interchange law says that applying a morphism to a "pure" value pure y is the same as applying pure ($ y) to the morphism. No surprises there - as we have seen in the higher order functions chapter, ($ y) is the function that supplies y as an argument to another function.
 * The composition law says that pure (.) composes morphisms similarly to how (.) composes functions: applying the composed morphism pure (.) <*> u <*> v to w gives the same result as applying u to the result of applying v to w.

There is also a bonus law about the relation between fmap and (<*>):

fmap f x = pure f <*> x
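A quick sanity check of this relation for Maybe:

```haskell
fmapAgrees :: Bool
fmapAgrees = fmap negate (Just 3) == (pure negate <*> Just (3 :: Int))
```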

Applying a "pure" function with (<*>) is equivalent to using fmap. This law is a consequence of the other ones, so you need not bother with proving it when writing instances of Applicative.

Déjà vu
Does pure remind you of anything?

pure :: Applicative f => a -> f a

The only difference between that and...

return :: Monad m => a -> m a

... is the class constraint. pure and return serve the same purpose; that is, bringing values into functors. The uncanny resemblances do not stop here. Back in an earlier chapter we mentioned a function called liftM2...

liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c

... which could be used to make functions with many arguments less painful to handle in monadic code:
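For instance (values of our choosing):

```haskell
import Control.Monad (liftM2)

monadicSum :: Maybe Int
monadicSum = liftM2 (+) (Just 2) (Just 3)   -- Just 5
```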

liftM2 (+) (Just 2) (Just 3) looks a lot like (+) <$> Just 2 <*> Just 3.

Those, of course, are not coincidences. Monad inherits from Applicative...

class Applicative m => Monad m where

... because return and (>>=) are enough to implement pure and (<*>):

pure  = return
(<*>) = ap
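Here is a sketch of those definitions; ap lives in Control.Monad, and we spell it out with do-notation under names of our own:

```haskell
pureFromReturn :: Monad m => a -> m a
pureFromReturn = return

apFromBind :: Monad m => m (a -> b) -> m a -> m b
apFromBind u v = do
    f <- u        -- run the computation producing the function
    x <- v        -- then the one producing the argument
    return (f x)
```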

Several other monadic functions have more general applicative versions. Here are a few of them: (<*>) generalises ap, (*>) generalises (>>), and liftA2 generalises liftM2.

ZipList
Lists are applicative functors as well. Specialised to lists, the type of (<*>) becomes...

[a -> b] -> [a] -> [b]

... and so (<*>) applies a list of functions to another list. But exactly how is that done?

The standard instance of Applicative for lists, which follows from the Monad instance, applies every function to every element, like an explosive version of zipWith:
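For example:

```haskell
allCombinations :: [Int]
allCombinations = [(2 *), (5 *), (9 *)] <*> [1, 4, 7]
-- [2,8,14,5,20,35,9,36,63]
```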

Interestingly, there is another reasonable way of applying a list of functions. Instead of using every combination of functions and values, we can match each function with the value in the corresponding position in the other list. A Prelude function which can be used for that is zipWith:

zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
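Using zipWith with ($) pairs each function with the value at the same position:

```haskell
pairwise :: [Int]
pairwise = zipWith ($) [(2 *), (5 *), (9 *)] [1, 4, 7]
-- [2,20,63]
```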

When there are two useful possible instances for a single type, the dilemma is averted by creating a newtype which implements one of them. In this case, we have ZipList, which lives in Control.Applicative:

newtype ZipList a = ZipList { getZipList :: [a] }

We have already seen what (<*>) should be for zip-lists; all that is needed is to add the newtype wrappers:

instance Applicative ZipList where
    (ZipList fs) <*> (ZipList xs) = ZipList (zipWith ($) fs xs)
    pure x                        = undefined -- to be figured out

As for pure, it is tempting to use pure x = ZipList [x], following the standard list instance. We can't do that, however, as it violates the applicative laws. According to the identity law:

pure id <*> v = v

Substituting our (<*>) and the suggested pure, we get:

ZipList [id] <*> v = v

Now, suppose v is the infinite zip-list ZipList [1..]:

ZipList [id] <*> ZipList [1..]
= ZipList (zipWith ($) [id] [1..])
= ZipList [1]

The problem is that zipWith produces lists whose length is that of the shortest list passed as argument, and so ZipList [id] will cut off all elements of the other zip-list after the first. The only way to ensure (<*>) never removes elements is making the list of functions infinite. The correct pure follows from that:
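Here is the complete instance as a standalone sketch; we restate the newtype and a Functor instance so the file compiles on its own, though the real definitions live in Control.Applicative:

```haskell
newtype ZipList a = ZipList { getZipList :: [a] }

instance Functor ZipList where
    fmap f (ZipList xs) = ZipList (map f xs)

instance Applicative ZipList where
    pure x                        = ZipList (repeat x)  -- an infinite supply of x
    (ZipList fs) <*> (ZipList xs) = ZipList (zipWith ($) fs xs)
```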

The ZipList applicative instance offers an alternative to all the zipN and zipWithN functions in Data.List, one which can be extended to any number of arguments:
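For instance, a three-way zip, here building triples from sample lists:

```haskell
import Control.Applicative (ZipList (..))

threeWayZip :: [(Int, Int, Int)]
threeWayZip =
    getZipList $ (,,) <$> ZipList [1, 2] <*> ZipList [3, 4] <*> ZipList [5, 6]
-- [(1,3,5),(2,4,6)]
```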

Sequencing of effects
As we have just seen, the standard Applicative instance for lists applies every function in one list to every element of the other. That, however, does not specify (<*>) unambiguously. To see why, try to guess what the result of [(2*),(3*)] <*> [4,5] is without looking at the example above or the answer just below.

Unless you were paying very close attention or had already analysed the implementation of (<*>), the odds of getting it right were about even. The correct result is [8,10,12,15]; the other possibility would be [8,12,10,15]. The difference is that for the first (and correct) answer the result is obtained by taking the skeleton of the first list and replacing each element by all possible combinations with elements of the second list, while for the other possibility the starting point is the second list.
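Both sequencings, written out; the list comprehension mimics the hypothetical right-to-left instance:

```haskell
leftToRight :: [Int]
leftToRight = [(2 *), (3 *)] <*> [4, 5]                 -- [8,10,12,15]

rightToLeft :: [Int]
rightToLeft = [f x | x <- [4, 5], f <- [(2 *), (3 *)]]  -- [8,12,10,15]
```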

In more general terms, the difference between [8,10,12,15] and [8,12,10,15] is one of sequencing of effects. Here, by effects we mean the functorial context, as opposed to the values within the functor (some examples: the skeleton of a list, actions performed in the real world in IO, the existence of a value in Maybe). The existence of two legal implementations of (<*>) for lists which only differ in the sequencing of effects indicates that [] is a non-commutative applicative functor. A commutative applicative functor, by contrast, leaves no margin for ambiguity in that respect. More formally, a commutative applicative functor is one for which the following holds:

liftA2 f u v = liftA2 (flip f) v u

Or, equivalently,

f <$> u <*> v = flip f <$> v <*> u

By the way, if you hear about commutative monads in Haskell, the concept involved is the same, only specialised to Monad.

Commutativity (or the lack thereof) affects other functions which are derived from (<*>) as well. (*>) is a clear example:

(*>) :: Applicative f => f a -> f b -> f b

(*>) combines effects while preserving only the values of its second argument. For monads, it is equivalent to (>>). Here is a demonstration of it using Maybe, which is commutative:
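For example:

```haskell
keepsSecond, swapped, stopsOnNothing, startsWithNothing :: Maybe Int
keepsSecond       = Just 2  *> Just 3    -- Just 3
swapped           = Just 3  *> Just 2    -- Just 2
stopsOnNothing    = Just 2  *> Nothing   -- Nothing
startsWithNothing = Nothing *> Just 2    -- Nothing
```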

Swapping the arguments does not affect the effects (that is, the being and nothingness of wrapped values). For IO, however, swapping the arguments does reorder the effects:
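Here is a sketch that makes the ordering observable: instead of printing, we log each effect into an IORef of our own and read the log back.

```haskell
import Data.IORef (modifyIORef, newIORef, readIORef)

effectOrder :: IO [String]
effectOrder = do
    ref <- newIORef []
    -- step logs its name as an effect, then yields its value
    let step name val = modifyIORef ref (++ [name]) *> pure val
    _ <- step "foo" (2 :: Int) *> step "bar" (3 :: Int)
    readIORef ref   -- ["foo","bar"]: the left argument's effect runs first
```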

The convention in Haskell is to always implement (<*>) and other applicative operators using left-to-right sequencing. Even though this convention helps reduce confusion, it also means appearances are sometimes misleading. For instance, the (<*) function is not flip (*>), as it sequences effects from left to right just like (*>):
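With the same IORef logging trick, (<*) keeps the left value while still running the left effect first:

```haskell
import Data.IORef (modifyIORef, newIORef, readIORef)

keepLeft :: IO (Int, [String])
keepLeft = do
    ref <- newIORef []
    let step name val = modifyIORef ref (++ [name]) *> pure val
    v <- step "foo" (2 :: Int) <* step "bar" (3 :: Int)
    order <- readIORef ref
    pure (v, order)   -- (2, ["foo","bar"])
```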

For the same reason, (<**>) :: Applicative f => f a -> f (a -> b) -> f b from Control.Applicative is not flip (<*>). That means it provides a way of inverting the sequencing:
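Compare the two on lists, where the effect order shows up as the order in which the lists are traversed:

```haskell
import Control.Applicative ((<**>))

usual :: [Int]
usual = [(2 *), (3 *)] <*> [4, 5]           -- [8,10,12,15]

invertedOrder :: [Int]
invertedOrder = [4, 5] <**> [(2 *), (3 *)]  -- [8,12,10,15]
```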

An alternative is the Control.Applicative.Backwards module from transformers, which offers a newtype for flipping the order of effects:

newtype Backwards f a = Backwards { forwards :: f a }

A sliding scale of power
Functor, Applicative, Monad. Three closely related functor type classes; three of the most important classes in Haskell. Though we have seen many examples of Functor and Monad in use, and a few of Applicative, we have not compared them head to head yet. If we ignore pure/return for a moment, the characteristic methods of the three classes are:

fmap  :: Functor f     => (a -> b) -> f a -> f b
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(>>=) :: Monad m       => m a -> (a -> m b) -> m b

While those types look disparate, we can change the picture with a few cosmetic adjustments. Let's replace fmap by its infix synonym, (<$>); (>>=) by its flipped version, (=<<); and tidy up the signatures a bit:

(<$>) :: Functor t     =>   (a ->   b) -> (t a -> t b)
(<*>) :: Applicative t => t (a ->   b) -> (t a -> t b)
(=<<) :: Monad t       =>   (a -> t b) -> (t a -> t b)

Suddenly, the similarities are striking. (<$>), (<*>) and (=<<) are all mapping functions over functors. The differences between them are in what is being mapped over in each case:


 * (<$>) maps arbitrary functions over functors.
 * (<*>) maps t (a -> b) morphisms over (applicative) functors.
 * (=<<) maps a -> t b functions over (monadic) functors.

The day-to-day differences in uses of Functor, Applicative and Monad follow from what the types of those three mapping functions allow you to do. As you move from (<$>) to (<*>) and then to (=<<), you gain in power, versatility and control, at the cost of guarantees about the results. We will now slide along this scale. While doing so, we will use the contrasting terms values and context to refer to plain values within a functor and to whatever surrounds them, respectively.

The type of (<$>) and the two associated laws ensure that it is impossible to use it to change the context, no matter which function it is given. In fmap f xs, the f function has nothing to do with the [] context of the xs functorial value, and so applying it cannot affect the context. For that reason, if you do fmap f xs on some list xs, the number of elements of the list will never change.

That can be taken as a safety guarantee or as an unfortunate restriction, depending on what you intend. In any case, (<*>) is clearly able to change the context:
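For example, with lists:

```haskell
grown :: [Int]
grown = [(2 *), (3 *)] <*> [2, 5, 6]
-- [4,10,12,6,15,18]: two functions and three values make six results
```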

The t (a -> b) morphism carries a context of its own, which is combined with that of the t a functorial value. (<*>), however, is subject to a more subtle restriction. While t (a -> b) morphisms carry context, within them there are plain a -> b functions, which are still unable to modify the context. That means the changes to the context (<*>) performs are fully determined by the context of its arguments, and the values have no influence over the resulting context.

Thus with list (<*>) you know that the length of the resulting list will be the product of the lengths of the original lists, with IO (<*>) you know that all real-world effects will happen as long as the evaluation terminates, and so forth.

With Monad, however, we are in a very different game. (=<<) takes an a -> t b function, and so it is able to create context from values. That means a lot of flexibility:
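For example, with the list monad the values themselves decide how much context (list structure) the result has:

```haskell
valueDriven :: [Int]
valueDriven = [1, 2, 5] >>= \x -> replicate x x
-- [1,2,2,5,5,5,5,5]
```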

Taking advantage of the extra flexibility, however, might mean having fewer guarantees about, for instance, whether your functions are able to unexpectedly erase parts of a data structure for pathological inputs, or whether the control flow in your application remains intelligible. In some situations there might be performance implications as well, as the complex data dependencies monadic code makes possible might prevent useful refactorings and optimisations. All in all, it is a good idea to only use as much power as needed for the task at hand. If you do need the extra capabilities of Monad, go right ahead; however, it is often worth it to check whether Applicative or Functor is sufficient.

The monoidal presentation
Back in Understanding monads, we saw how the Monad class can be specified in alternative ways, for instance using join instead of (>>=). In a similar way, Applicative also has an alternative presentation, which might be implemented through the following type class:

class Functor f => Monoidal f where
    unit  :: f ()
    (*&*) :: f a -> f b -> f (a, b)

There are deep theoretical reasons behind the name "monoidal". In any case, we can informally say that it does look a lot like a monoid: unit provides a default functorial value whose context wraps nothing of interest, and (*&*) combines functorial values by pairing values and combining effects. The Monoidal formulation provides a clearer view of how Applicative manipulates functorial contexts. Naturally, unit and (*&*) can be used to define pure and (<*>), and vice-versa.
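Here is a sketch of the two-way translation, written against the existing Applicative class; the names unitA, pairA, pureM and apM are ours:

```haskell
-- Monoidal operations from the Applicative ones:
unitA :: Applicative f => f ()
unitA = pure ()

pairA :: Applicative f => f a -> f b -> f (a, b)
pairA u v = fmap (,) u <*> v

-- ... and Applicative operations from the Monoidal ones:
pureM :: Applicative f => a -> f a
pureM x = fmap (const x) unitA

apM :: Applicative f => f (a -> b) -> f a -> f b
apM u v = fmap (\(f, x) -> f x) (pairA u v)
```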

The Applicative laws are equivalent to the following set of laws, stated in terms of Monoidal:

fmap snd (unit *&* v) = v                    -- Left identity
fmap fst (u *&* unit) = u                    -- Right identity
fmap asl (u *&* (v *&* w)) = (u *&* v) *&* w -- Associativity
  -- where asl (x, (y, z)) = ((x, y), z)
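Spot-checking the three laws on Maybe, with unit and (*&*) written in terms of pure and (<*>); the helper names are ours:

```haskell
unitA :: Applicative f => f ()
unitA = pure ()

pairA :: Applicative f => f a -> f b -> f (a, b)
pairA u v = fmap (,) u <*> v

asl :: (a, (b, c)) -> ((a, b), c)
asl (x, (y, z)) = ((x, y), z)

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = fmap snd (unitA `pairA` Just (3 :: Int)) == Just 3
rightIdentity = fmap fst (Just (3 :: Int) `pairA` unitA) == Just 3
associativity =
    fmap asl (Just (1 :: Int) `pairA` (Just (2 :: Int) `pairA` Just (3 :: Int)))
        == ((Just 1 `pairA` Just 2) `pairA` Just 3)
```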

The functions to the left of the equals signs are just boilerplate to convert between equivalent types, such as b and ((), b). If you ignore them, the laws are a lot less opaque than in the usual Applicative formulation. By the way, just like for Applicative there is a bonus law, which is guaranteed to hold in Haskell:

fmap (g *** h) (u *&* v) = fmap g u *&* fmap h v -- Naturality
  -- where g *** h = \(x, y) -> (g x, h y)