Haskell/Traversable

We have already studied four of the five type classes in the Prelude that can be used for data structure manipulation: Functor, Applicative, Monad and Foldable. The fifth one is Traversable. To traverse means to walk across, and that is exactly what Traversable generalises: walking across a structure, collecting results at each stop.

Functors made for walking
If traversing means walking across, though, we have been performing traversals for a long time already. Consider the following plausible Functor and Foldable instances for lists:
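
    -- A sketch of what such instances might look like. The actual instances
    -- in base behave the same way; these are for illustration only and would
    -- clash with them if compiled as-is.
    instance Functor [] where
        fmap _ []     = []
        fmap f (x:xs) = f x : fmap f xs

    instance Foldable [] where
        foldMap _ []     = mempty
        foldMap f (x:xs) = f x `mappend` foldMap f xs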

fmap f walks across the list, applies f to each element and collects the results by rebuilding the list. Similarly, foldMap f walks across the list, applies f to each element and collects the results by combining them with mappend. Functor and Foldable, however, are not enough to express all useful ways of traversing. For instance, suppose we have the following Maybe-encoded test for negative numbers...
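
    -- One plausible way to write such a test; the name deleteIfNegative is
    -- just a convenient choice for this example.
    deleteIfNegative :: (Num a, Ord a) => a -> Maybe a
    deleteIfNegative x = if x < 0 then Nothing else Just x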

... and we want to use it to implement...
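
    -- Again, the name rejectWithNegatives is just an illustrative label.
    rejectWithNegatives :: (Num a, Ord a) => [a] -> Maybe [a]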

... which gives back the original list wrapped in Just if there are no negative elements in it, and Nothing otherwise. Neither Functor nor Foldable on their own would help. Using Foldable would replace the structure of the original list with that of whatever Monoid we pick for folding, and there is no way of twisting that into giving either the original list or Nothing. As for Functor, fmap deleteIfNegative might be attractive at first...
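
    -- An illustrative GHCi session; testList is an arbitrary sample list.
    GHCi> let testList = [-5,3,2,-1,0]
    GHCi> fmap deleteIfNegative testList
    [Nothing,Just 3,Just 2,Nothing,Just 0]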

... but then we would need a way to turn a list of Maybe values (that is, [Maybe a]) into Maybe [a]. If you squint hard enough, that looks somewhat like a fold. However, instead of merely combining the values and destroying the list, we need to combine the Maybe contexts of the values and recreate the list structure within the combined context. Fortunately, there is a type class which is essentially about combining Functor contexts: Applicative. Applicative, in turn, leads us to the class we need: Traversable.

Traversable is to Applicative contexts what Foldable is to Monoid values. From that point of view, sequenceA is analogous to fold − it creates an applicative summary of the contexts within a structure, and then rebuilds the structure in the new context. sequenceA is the function we were looking for:
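
    -- Continuing the session above; testList is the sample list defined earlier.
    GHCi> let rejectWithNegatives = sequenceA . fmap deleteIfNegative
    GHCi> rejectWithNegatives testList
    Nothing
    GHCi> rejectWithNegatives [0..10]
    Just [0,1,2,3,4,5,6,7,8,9,10]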

These are the methods of Traversable:
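
    class (Functor t, Foldable t) => Traversable t where
        traverse  :: Applicative f => (a -> f b) -> t a -> f (t b)
        sequenceA :: Applicative f => t (f a) -> f (t a)

        -- These two have default definitions; they are merely
        -- versions of the methods above specialised to Monad.
        mapM      :: Monad m => (a -> m b) -> t a -> m (t b)
        sequence  :: Monad m => t (m a) -> m (t a)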

If sequenceA is analogous to fold, traverse is analogous to foldMap. They can be defined in terms of each other, and therefore a minimal implementation of Traversable just needs to supply one of them:
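
    -- The default definitions: supplying either method gives the other.
    traverse f = sequenceA . fmap f
    sequenceA  = traverse id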

Rewriting the list instance using traverse makes the parallels with fmap and foldMap obvious:
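
    -- Again for illustration; an equivalent instance already exists in base.
    instance Traversable [] where
        traverse _ []     = pure []
        traverse f (x:xs) = (:) <$> f x <*> traverse f xs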

In general, it is better to write traverse when implementing Traversable, as the default definition of traverse performs, in principle, two runs across the structure (one for fmap and another for sequenceA).

We can cleanly define rejectWithNegatives directly in terms of traverse:
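
    rejectWithNegatives :: (Num a, Ord a) => [a] -> Maybe [a]
    rejectWithNegatives = traverse deleteIfNegative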

Interpretations of Traversable
Traversable structures can be walked over using the applicative functor of your choice. The type of traverse...
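
    traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)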

... resembles that of mapping functions we have seen in other classes. Rather than using its function argument to insert functorial contexts under the original structure (as might be done with fmap) or to modify the structure itself (as (>>=) does), traverse adds an extra layer of context on top of the structure. Put another way, traverse allows for effectful traversals − traversals which produce an overall effect (i.e. the new outer layer of context).

If the structure below the new layer is recoverable at all, it will match the original structure (the values might have changed, of course). Here is an example involving nested lists:
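
    -- An illustrative choice of inputs:
    GHCi> traverse (\x -> [0..x]) [0..2]
    [[0,0,0],[0,0,1],[0,0,2],[0,1,0],[0,1,1],[0,1,2]]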

To understand what is going on here, let's break this down step by step.

The inner lists retain the structure of the original list − all of them have three elements. The outer list is the new layer, corresponding to the introduction of nondeterminism through allowing each element to vary from zero to its (original) value.

We can also understand Traversable by focusing on sequenceA and how it distributes context.
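
    -- Again, an illustrative choice of inputs:
    GHCi> sequenceA [[1,2,3,4],[5,6,7]]
    [[1,5],[1,6],[1,7],[2,5],[2,6],[2,7],[3,5],[3,6],[3,7],[4,5],[4,6],[4,7]]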

In this example, sequenceA can be seen distributing the old outer structure into the new outer structure, and so the new inner lists have two elements, just like the old outer list. The new outer structure is a list of twelve elements, which is exactly what you would expect from combining with (<*>) one list of four elements with another of three elements. One interesting aspect of the distribution perspective is how it helps make sense of why certain functors cannot possibly have instances of Traversable (how would one distribute an IO action? Or a function?).

The Traversable laws
Sensible instances of Traversable have a set of laws to follow. The two main ones are the identity and composition laws:
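
    traverse Identity = Identity                                    -- identity
    traverse (Compose . fmap g . f)
        = Compose . fmap (traverse g) . traverse f                  -- composition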

Plus a bonus law, which is guaranteed to hold:
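
    -- t here is any applicative homomorphism (defined just below):
    t . traverse f = traverse (t . f)                               -- naturality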

Those laws are not exactly self-explanatory, so let's have a closer look at them. Starting from the last one: an applicative homomorphism is a function which preserves the Applicative operations, so that:
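
    -- t maps between two applicative functors...
    t :: (Applicative f, Applicative g) => f a -> g a

    -- ... and preserves the Applicative operations:
    t (pure x)  = pure x
    t (x <*> y) = t x <*> t y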

Note not only that this definition is analogous to that of monoid homomorphisms, which we saw earlier on, but also that the naturality law mirrors exactly the property relating foldMap and monoid homomorphisms seen in the chapter about Foldable.

The identity law involves Identity, the dummy functor:
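
    -- Essentially the definitions provided by Data.Functor.Identity:
    newtype Identity a = Identity { runIdentity :: a }

    instance Functor Identity where
        fmap f (Identity x) = Identity (f x)

    instance Applicative Identity where
        pure                      = Identity
        Identity f <*> Identity x = Identity (f x)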

The law says that all traversing with the Identity constructor does is wrap the structure with Identity, which amounts to doing nothing (as the original structure can be trivially recovered with runIdentity). The Identity constructor is thus the identity traversal, which is very reasonable indeed.

The composition law, in turn, is stated in terms of the Compose functor:
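
    -- Essentially the definitions provided by Data.Functor.Compose:
    newtype Compose f g a = Compose { getCompose :: f (g a) }

    instance (Functor f, Functor g) => Functor (Compose f g) where
        fmap f (Compose x) = Compose (fmap (fmap f) x)

    instance (Applicative f, Applicative g) => Applicative (Compose f g) where
        pure x                  = Compose (pure (pure x))
        Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)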

Compose performs composition of functors. Composing two Functors results in a Functor, and composing two Applicatives results in an Applicative. The instances are the obvious ones, threading the methods one further functorial layer down.

The composition law states that it doesn't matter whether we perform two traversals separately (right side of the equation) or compose them in order to walk across the structure only once (left side). It is analogous, for instance, to the second functor law. The fmaps are needed because the second traversal (or the second part of the traversal, for the left side of the equation) happens below the layer of structure added by the first (part). Compose is needed so that the composed traversal is applied to the correct layer.

Identity and Compose are available from Data.Functor.Identity and Data.Functor.Compose respectively.

The laws can also be formulated in terms of sequenceA:
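
    sequenceA . fmap Identity = Identity                              -- identity
    sequenceA . fmap Compose  = Compose . fmap sequenceA . sequenceA  -- composition
    -- For any applicative homomorphism t:
    t . sequenceA = sequenceA . fmap t                                -- naturality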

Though it's not immediately obvious, several desirable characteristics of traversals follow from the laws, including:

 * Traversals do not skip elements.
 * Traversals do not visit elements more than once.
 * Traversals cannot modify the original structure (it is either preserved or fully destroyed).

Recovering fmap and foldMap
We still have not justified the Functor and Foldable class constraints of Traversable. The reason for them is very simple: as long as the Traversable instance follows the laws, traverse is enough to implement both fmap and foldMap. For fmap, all we need is to use Identity to make a traversal out of an arbitrary function:
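
    -- fmap recovered from traverse, using the Identity functor.
    fmap f = runIdentity . traverse (Identity . f)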

To recover foldMap, we need to introduce a third utility functor: Const, from Control.Applicative:
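
    -- Essentially the definition provided by Control.Applicative:
    newtype Const a b = Const { getConst :: a }

    instance Functor (Const a) where
        fmap _ (Const x) = Const x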

Const is a constant functor. A value of type Const a b does not actually contain a b value. Rather, it holds an a value which is unaffected by fmap. For our current purposes, the truly interesting instance is the Applicative one:
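
    instance Monoid a => Applicative (Const a) where
        pure _              = Const mempty
        Const x <*> Const y = Const (x `mappend` y)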

(<*>) simply combines the values in each context with mappend. We can exploit that to make a traversal out of any Monoid m => a -> m function that we might pass to foldMap. Thanks to the Applicative instance above, the traversal then becomes a fold:
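
    -- foldMap recovered from traverse, using the Const functor.
    foldMap f = getConst . traverse (Const . f)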

We have just recovered from Traversable two functions which on the surface appear to be entirely different, and all we had to do was pick two different functors. That is a taste of how powerful an abstraction functors are.