Talk:Haskell/Category theory

Untitled first section by DavidHouse
This page is getting there. Some stuff I'd still like to do:


 * More exercises
 * Diagrammatic explanations of CT concepts

Monad laws are incorrect
Hi all. I was looking over the monad laws section and noticed some errors in the first and second laws. The first law is incorrect because join cannot be composed with join directly, since it is a function from M^2 to M. The confusion here is over subscripts: the join_x can be lifted to join_{Mx}, which will then be OK. It is better to write this as (join M) using the standard monad notation. I.e. the first law should read jM.j = j.jM. See the Wikipedia article on monads (category theory) for the correct commutative diagrams.

There is a similar abuse of notation problem with the second law. This may be confusing to the inexperienced. I did not change them because the following example would have to be altered as well. Perhaps a note about the abuse of notation would be sufficient, since it will be awkward to explain the Haskell code equivalent. -- Marc
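Marc's point can be checked concretely in Haskell, where the type checker performs the specialization that the subscripts make explicit on paper. A minimal sketch for the list monad (where join is concat); the names lhs and rhs are just for illustration:

```haskell
import Control.Monad (join)

-- First monad law, categorially: join . M(join) = join . join,
-- with the two occurrences of join specialized at different types.
lhs, rhs :: [[[Int]]] -> [Int]
lhs = join . fmap join  -- fmap join plays the role of M(join)
rhs = join . join       -- the inner join is specialized at the type [Int]

main :: IO ()
main = print (lhs [[[1,2],[3]],[[4]]] == rhs [[[1,2],[3]],[[4]]])
```

Both sides flatten the nested list to [1,2,3,4]; the subscripts are inferred automatically, which is exactly the abuse of notation under discussion.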


 * The trouble comes from the fact that in Haskell, join is of polymorphic type $$M(M x) \to M x$$. The parametrization on the monad $$M$$ is not important here, but the parametric type variable $$x$$ is. This makes it a function and a natural transformation at the same time, with $$x$$ being the type argument. The point is that the Haskell compiler will accept the composition join . join because it can figure out that the first join takes the base type $$M x$$ if the other takes a base type $$x$$. In other words, it will infer the subscripts automatically. I'm unsure what to do here, but I suspect that this has to be explained at some point anyway. -- apfe&lambda;mus 09:30, 8 March 2007 (UTC)


 * I agree that it is just an abuse of notation and that it doesn't affect the code since the compiler is smart enough to figure out what is going on. This is why I didn't go trampling through the page making changes. I do think that it is worth mentioning that this happens for clarity.


 * Perhaps something like: Notice that the subscript on the second join is actually Mx so that this composition is proper from a category-theoretic standpoint, but since join is of polymorphic type in Haskell the compiler understands which join is meant from context, so in (coding) practice the distinction is not important. -- Marc


 * Actually, I think I should have called it "type inference" instead of "compiler". From the Haskell point of view, "subscript" may be confusing :), it's "type specialization". Maybe something like: "note that the second join is specialized to the type $$M x$$".
 * But the issue is more subtle than that: because of polymorphism, the category Hask allows us to internalize not only morphisms but natural transformations as well. I'm not well-versed in category theory but I guess there is some construction corresponding to that? In particular, I suspect that the naturality laws 3 and 4 are free theorems (although I don't know whether higher kinded free theorems have been worked out yet). I think that this is what makes innocent looking Haskell notation break with the standard category theory notation.
 * Maybe it's best to simply introduce natural transformations explicitly in the article and solve notational and conceptual problems with that. Hiding them is probably more confusing than giving a translation between Cat-speak and Haskell-speak. Actually, I like the $$T\mu$$ notation more than subscripts or type inference because it results in more symmetric monad laws. -- apfe&lambda;mus 21:47, 8 March 2007 (UTC)


 * I am well versed in category theory, but fairly new to Haskell. I'm using my category theory skills to understand Haskell, which is how I ended up here. I think that laws 3 and 4 are OK -- they are just expressing the requirement that unit and join must be natural transformations.


 * I'm not sure if it is worth giving a significant introduction to natural transformations into the article, though a mention and a link to a wikipedia page might be appropriate. My main concern is just that the monad laws be replaced with their correct formulations so that others approaching this in the manner that I have won't have to wrestle with the laws for a while before figuring out that they are not what a category theorist would expect. (Normally I would not be such a stickler, but this is the category theory article after all.) I am, however, willing to help make the article more in-depth from a category-theoretic point of view. Perhaps in conjunction with more concepts we will find that we need to discuss natural transformations. Marc Harper 02:19, 9 March 2007 (UTC)


 * I agree, it's a category theory article, after all :) So, subscripts are important. I'm unsure how to state the categorially correct monad laws without natural transformations, though. Concerning the introduction of concepts, the rule of thumb would be that a concept may be introduced as long as there is corresponding Haskell code for it. Of course, it's best introduced if it's also useful. Your additions would be most welcome! -- apfe&lambda;mus 09:28, 9 March 2007 (UTC)

I think that we should use the $$T\mu$$ notation and explain that the type system can infer the correct domain for (M join). For the reader who understands functors, this will not be surprising, since monads are functors and we are basically just applying fmap to join to lift it to the correct domain. Natural transformations can be viewed as types of the form $$ \forall a, m a \to n a $$ for functors m and n, which should be sufficient for most readers.

Once some more content is added that uses natural transformations and gives interesting examples of them, it may be appropriate to have a more detailed discussion of natural transformations. There is a lot of categorical content to be added -- even relatively simple constructions such as identifying tuple types as the categorical product. Does anyone know of a good reference for a categorical discussion of the properties of Hask? Marc Harper 21:15, 9 March 2007 (UTC)
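The $$\forall a, m\,a \to n\,a$$ view of natural transformations has direct Haskell counterparts. A minimal sketch (safeHead is an assumed helper name, not something from the article):

```haskell
-- A natural transformation from the list functor to Maybe:
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- Naturality square: fmap f . safeHead = safeHead . fmap f.
-- In Haskell this holds for free, by parametricity.
main :: IO ()
main = print (fmap (+1) (safeHead [1,2,3]) == safeHead (map (+1) [1,2,3]))
```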


 * Agreed.


 * Note that the tuple is (unfortunately) not the categorial product :) This basically comes from the fact that $$\perp \neq (\perp,\perp)$$, i.e. the tuple type has an extra element $$\perp$$ that breaks the universal property. The sum type (coproduct) Either has the same problem. For more on $$\perp$$ ("bottom") and the semantics of general recursion, see Haskell/Denotational semantics.


 * AFAIK, there is no concretely worked out category theory behind Hask. In general, Lambda calculi are connected to cartesian closed categories. A quick google suggests Cartesian closed categories and the λ-calculus as trampoline. I don't know where to find a categorial construction of universal quantification $$\forall a$$, but the inventors of the polymorphic lambda calculus System F are Girard and Reynolds and they have semantics for it. Then, there are initial algebra semantics for folds and generic programming in general. The "banana and barbed wires"-paper mentioned on the haskell wiki page is one of the most famous in that direction. There is also the work of Moggi in Categorial semantics, he mostly deals with monads. -- apfe&lambda;mus 11:01, 10 March 2007 (UTC)
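The non-isomorphism $$\perp \neq (\perp,\perp)$$ mentioned above can be observed operationally: matching on the pair constructor forces the scrutinee to weak head normal form, which tells an undefined pair apart from a pair of undefined components. A sketch (isRealPair is a hypothetical helper name):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Pattern matching on (_, _) forces the pair constructor only,
-- not the components:
isRealPair :: (a, b) -> Bool
isRealPair p = case p of (_, _) -> True

main :: IO ()
main = do
  ok  <- try (evaluate (isRealPair (undefined, undefined)))
  bad <- try (evaluate (isRealPair undefined))
  print (ok  :: Either SomeException Bool)  -- Right True
  print (bad :: Either SomeException Bool)  -- Left <exception>
```

If tuples were the true categorical product, no function could distinguish the two cases, so the universal property would survive.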


 * It is disappointing that it is not as elegant as it seems from the surface. Thanks for the references. -- Marc
 * What about restricting our attention to only strict functions in the category Hask? Then we seem to get better categorical properties. Also, if we consider types to be 'pointed' with basepoint $$\perp$$ then we can get properties such as $$\perp = (\perp,\perp)$$ by taking the product in the pointed category (much in analogy with pointed topological spaces). -- Marc 74.139.215.32 01:08, 28 October 2007 (UTC)


 * I don't think that a restriction to strict functions makes things better. While (co)products may work out fine, function types won't: $$() \to a$$ and $$a$$ are not isomorphic, and the former can be used to simulate call-by-name in a strict language. In other words, $$\perp$$ will still bite when considering function types. Usually, one restricts everything to total functions, thereby kicking out $$\perp$$ completely, and that does give a satisfying category. Partiality $$\perp$$ can then be modeled by a monad (I don't have a good reference for that at hand).
 * For an introductory article like this one, I think the most reasonable thing to do is to mention the problem and otherwise declare all functions to be total :-) (The current article "allows" partial functions while ignoring them at the same time, I think that's a bit clumsy.)
 * -- apfe&lambda;mus 09:58, 28 October 2007 (UTC)

Strong monads
There is another subtlety, namely the fact that every "monad" in Haskell is also a "strong monad", but not the other way round. See Moggi's paper. This means that there is a natural transformation

 t :: Monad m => (a, m b) -> m (a, b)
 t (a, m) = m >>= \b -> return (a, b)
          = join . fmap (\b -> return (a, b)) $ m

Apparently, the category Hask offers quite a lot of structure for free. -- apfe&lambda;mus 21:47, 8 March 2007 (UTC)
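Spelled out as compilable code, both phrasings of the transformation agree for any Haskell Monad (strength is an assumed name; Moggi writes t):

```haskell
import Control.Monad (join)

-- Tensorial strength: definable for every Haskell Monad, two ways.
strength :: Monad m => (a, m b) -> m (a, b)
strength (a, mb) = mb >>= \b -> return (a, b)

strength' :: Monad m => (a, m b) -> m (a, b)
strength' (a, mb) = join (fmap (\b -> return (a, b)) mb)

main :: IO ()
main = print (strength (1 :: Int, "ab") == strength' (1, "ab"))
```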

Powerset eq List Monad?
This may be irrelevant from a pedagogical standpoint, but most of the time I've heard category theorists talk about the list monad (in the operads world), they seem to view it as something in a symmetric monoidal category, i.e. tensor products of Thingies, and not using powersets -- the powerset analogy is slightly flawed in that it would disallow things like [2,3,2,2,1,2]. On the other hand, proposing a change that actually works out simple enough for an example to be good is over my own head right now. Michiexile 09:29, 29 January 2007 (UTC)


 * They are not the same. This is because sets are not ordered/indexed by default, i.e. a set would not contain the element 2 twice, but a list could. As far as illustrating the idea though, they are otherwise very similar.


 * To be precise about the distinction, if you consider sets with the monoidal structure introduced by the cartesian product, then you can view lists as elements of a product of sets. (This allows the list elements to be of different types though, so it is more properly viewed as a tuple.) So... lists of length n (in Haskell) are really elements of A^n, the product of A with itself n times. The proper category-theoretic way to view this is to generate a category with the product and a set A. Then n-lists can be viewed as a monad. If you wanted to treat all lists simultaneously, you could toss in the element B=A^{\infty}. It still has a product (B x B = B) and you could consider lists of length n to be infinite lists where all terms after the n-th are zero. -- Marc


 * Since you mentioned operads, the connection here is that you can view operads as monoids in the monoidal category of symmetric families of a category... I think that a discussion of operads in the article would be unnecessarily complicated at this time, and even though the powerset example is not quite the same I think that it is pedagogically sufficient. -- Marc

Right chapter?
This is in the "Time and space" chapter, but the only connection to time and space (which I understand to mean performance in this context) appears to be the mention of fusion - which is tangential, and doesn't, I think, justify this article's inclusion in this chapter. --Greenrd 21:49, 16 January 2007 (UTC)


 * Would it be reasonable to rename the Program correctness chapter to something more general and move this there? Actually... would program correctness be reasonable as is?  -- Kowey 23:00, 16 January 2007 (UTC)


 * I also think that it belongs to the chapter currently entitled "Program correctness", as this chapter intends to subsume anything about "Formal program manipulation". While "Program correctness" is too narrow for category theory to fit in, I have been reluctant to change the title because "Formal" could (unfortunately) scare people off. Maybe it's a good idea to change the title anyway but to bait people with a subchapter "Why is my program correct?" which serves as a hands-on introduction to the "Formal program manipulation" subject? A sneakier version would be "Help, my program doesn't work", because the chapter won't tell anything about debugging; it will tell about "Program derivation from specification" and how to prove your programs do what they should. In short, it will show how to write correct programs in the first place. -- Apfelmus 10:43, 17 January 2007 (UTC)

Category theory wikibook
Have you seen Category theory? -- Kowey 09:55, 13 January 2007 (UTC)

Suggestion
I know what a Set is, but I don't know what a Group is. Perhaps link to the wikipedia pages for these two things - so people can have access to the mathematical definition.

"Functors on Hask" paragraph
Another suggestion.

Maybe it's me, maybe because English is not my first language, maybe because I'm a physicist, maybe because the word "map" has a lot of meanings, maybe a lot of things but...

I've seen and heard "functors represent types that can be mapped over" and it never seems clear to me at all. Not even after I think I understood what they are.

As an intuitive explanation, the "mapped over" has been to me as useful at saying "functors are functors".

Couldn't something like "functors are the generalization to categories of linear operators" be added?

Or if I'm wrong, add an explanation of why the comparison doesn't hold. --Dragomang87 (discuss • contribs) 20:16, 14 November 2013 (UTC)


 * The intuition here comes from practical Haskell: "mapped over" there refers to how, for instance, fmap allows us to apply a function to each element of a list. If the Functor type class is unfamiliar to you, check the chapter which introduces it.--Duplode (discuss • contribs) 00:23, 15 November 2013 (UTC)
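A concrete illustration of "mapped over", using standard Functor instances (a minimal sketch):

```haskell
-- fmap applies a function "inside" a Functor, leaving the shape alone.
main :: IO ()
main = do
  print (fmap (+1) [1, 2, 3])               -- each list element is mapped
  print (fmap (+1) (Just 10))               -- the value inside Just is mapped
  print (fmap (+1) (Nothing :: Maybe Int))  -- nothing to map, still Nothing
```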

Review from the Haskell-cafe mailing list
On the 16th Jan 2007, a message was sent regarding this article to the Haskell mailing list, asking for review and comment. This was really helpful, poking holes in my knowledge as well as helping to improve the article considerably. Most of the changes you can see in the history since that date are tweaks due to that review. Notable reviewers were Brian Hulley and Yitzchak Gale, whose comments were very encouraging and hopefully had a bit of influence on the article's content. DavidHouse 18:32, 19 January 2007 (UTC)

Diagrams
... were done using the Inkscape vector editing program. The original SVG files are available on request, email me (see my talk page). DavidHouse 18:32, 19 January 2007 (UTC)


 * The wikimedia projects (e.g. wikibooks) support .svg diagrams. Have you given it a try?  It works badly on the ones produced by Omnigraffle; perhaps you would have better luck -- Kowey 19:38, 19 January 2007 (UTC)


 * Also, it's highly worth it to move the diagrams to commons. This will make the diagrams transparently shareable across all wikimedia projects, for example, a hypothetical French version of the wikibook -- Kowey 19:40, 19 January 2007 (UTC)

Comment
The first exercise asks 'As was mentioned, any partial order (P, \leq) is a category with objects as the elements of P and a morphism between elements a and b iff a \leq b. Which of the above laws guarantees the transitivity of \leq?' This seems backwards to me. The statement is that a partial order gives rise to a category, not the other way around. Given a partial order we don't need to guarantee the transitivity of \leq. That is an axiom. The question should be something along the lines of 'which of the axioms of a poset guarantees the second category theory law?' -- astrolabe

Morphism composition. If the existence of a (unique) f_{a,b} from a to b is equivalent to a \leq b, then: a \leq b and b \leq c mean there are f_{a,b} and f_{b,c}, therefore there must be f_{a,c} = f_{b,c} \circ f_{a,b} --- which amounts to saying a must be \leq c. -- askyle

Another Comment
When explaining functors as they apply to Hask, the list functor is explained as a functor from Hask to Lst. But for a Monad, the underlying functor is from a category to itself: $$M : C \to C$$. So considered as a Monad, is it not the case that the list functor should be a functor from Hask to Hask (where its image in Hask is only those objects in the subcategory Lst)? I know this was (perhaps still is!) a point of confusion for me, perhaps a note of clarification would be worthwhile.--Jyossarian (talk) 03:39, 12 January 2010 (UTC)


 * Lst is a (full) subcategory of Hask, i.e. every object in Lst is a Haskell data type. The list functor can be viewed as an endofunctor $$M : \mathbf{Hask} \to \mathbf{Hask}$$ but it can also be interpreted as a functor from Hask to a subcategory Lst. The text simply chose the latter interpretation when introducing the functor.
 * -- apfe&lambda;mus 09:52, 15 January 2010 (UTC)

adjoint functors
I read somewhere that monad transformers were really adjoint functors, whatever those are. Could some explanation be added? I like this article a lot and it helped me with the traditional stumbling block of understanding monads (I sort of get them now), but I'm still baffled by monad transformers. Thanks


 * Oh? That would be interesting, I only know that every pair of adjoint functors (like $$S\to\cdot$$ and $$S\times\cdot$$) gives rise to a monad ($$S\to(S\times\cdot)$$). Concerning monad transformers, the wikibook has a chapter on them, but it's not optimal yet. Meanwhile, have a look at . Concerning adjoint functors, this module will hopefully explain them at some point in the future. -- apfe&lambda;mus 10:21, 7 December 2007 (UTC)

Convention used for function composition
In the section on Category Laws, we have the following:


 * … if $$f : A \to B$$ and $$g : B \to C$$, then there must be some morphism $$h : A \to C$$ in the category such that $$h = f \circ g$$.

This suggests that $$f \circ g$$ is being used here for the morphism that results from composition of the morphism f preceding the morphism g. However, this conflicts with the Haskell composition notation, in which $$f \circ g$$ (well, f.g) refers to the composition of the morphism f following the morphism g.

The section is also internally inconsistent in that it states that for the category shown in the figure, for which $$f: B \to A$$ and $$g: A \to B$$, $$g \circ f = id_B$$ and $$f \circ g = id_A$$ (this corresponds to the convention used in Haskell). But for the same category, the text states that $$f \circ id_A = id_B \circ f = f$$ (which uses the opposite – and un-Haskell-like – convention).

Which convention should be adopted here? (I’m no authority on Haskell or on category theory, so I wouldn’t wish to make the choice myself.)

See also the compositions for unit and join, which appear to use the Haskell-like convention.
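For reference, a minimal sketch of the Haskell convention; f and g here are arbitrary illustrative functions:

```haskell
-- (.) composes right-to-left: (f . g) x = f (g x),
-- i.e. f . g means "f after g", matching the usual mathematical convention.
f :: Int -> Int
f = (* 2)

g :: Int -> Int
g = (+ 3)

main :: IO ()
main = do
  print ((f . g) 1)  -- f (g 1) = (1 + 3) * 2
  print ((g . f) 1)  -- g (f 1) = (1 * 2) + 3
```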


 * Oh! The "Haskell" convention is correct. In fact, it's the convention used all over mathematics. Fixed. -- apfe&lambda;mus 08:16, 10 April 2008 (UTC)


 * Not used ‘all’ over mathematics – but certainly by far the most common convention, I agree. (And my comment on the last relation there was mistaken, as it wasn’t of course referring to the category in the diagram.) Thanks for fixing this. --Axnicho (talk) 09:17, 10 April 2008 (UTC)

-- It would be more instructive, I think, if in the diagram that explains the existence of f o g and g o f, at least one of the composites were something other than the identity. Currently the example is such that both f o g and g o f are identities, and this might be misleading for the beginner.

Typographical conventions for unit, join
In this article, we have, for example:


 * $$join \circ M(join) = join \circ join$$

Is this really the desired typography? I’d have been inclined to use


 * $$\mbox{join} \circ M(\mbox{join}) = \mbox{join} \circ \mbox{join}$$

But perhaps there is some Haskell convention that I’m unaware of? (The same applies to unit.)


 * Function names in italic (more precisely \mathit{} in LaTeX) have a widespread tradition in papers about functional programming and Haskell, especially when there is some "mathematical" touch to them. A prototypical example is probably the book The Fun of Programming. Some chapters are available online, like Fun with Binary Heap Trees. Naturally, typewriter face is widespread, too. A variable width but upright font is the exception, a prominent example being Chris Okasaki's thesis and corresponding book.
 * In other words, italic is what we want here :-) -- apfe&lambda;mus 08:42, 10 April 2008 (UTC)


 * OK – I can accept that italic is used in Haskell and functional-programming papers (though why the standard mathematical convention that I mentioned should be used specifically in papers with ‘some "mathematical" touch to them’ eludes me!). Still, at the very least, we should improve the typography so that it looks more like this (I have made this change):


 * $$\mathit{join} \circ M(\mathit{join}) = \mathit{join} \circ \mathit{join}$$


 * By using math-mode italics (bare $…$) rather than \mathit{…} or \textit{…}, the spacing of the word ‘join’ is pretty awful – and a poor advertisement for TeX. As Lamport notes in the LaTeX 2ε user guide (p. 51):


 * ‘LaTeX normally uses an italic type style for letters in math mode. However, it interprets each letter as a separate mathematical symbol [which is not what we want here], and it produces very different spacing than an ordinary italic typestyle. You should use \mathit for multiletter symbols in formulas.’
 * --Axnicho (talk) 09:35, 10 April 2008 (UTC)


 * Yes, thanks for making the change to \mathit{} :-)
 * Concerning the italic convention, I imagine it as follows: usually, the objects of mathematical discourse have only one-letter names like $$x$$ or $$\zeta$$. This is not practical for programming, so multi-letter names like $$\mathit{map}$$ or $$\mathit{foldr}$$ are used. But since they're almost on "equal footing" with those nice looking italic letters, they gotta be italic, too. I mean, in the papers with "mathematical touch", they're not so much actual code to be typed into the terminal (-> typewriter face), but mathematical entities that obey laws like $$\mathit{map}\,(f\circ g) = \mathit{map}\,f \circ \mathit{map}\,g$$. (Hm, same for $$\sin(x)$$ of course. Must be the higher order functions, i.e. the fact that map can also be argument of a function. Oh well, whatever, it looks good :-)
 * -- apfe&lambda;mus 13:15, 11 April 2008 (UTC)


 * That is pretty funny, since in mathematics the well-established convention is to write variables (whether the variable contains a number, a function, or some other object does not matter) as single letters in italics, and the multi-letter symbols for named functions (whose meaning is known, so they are not variables; for example: sin, max, log, etc.) in upright. And clearly map and foldr etc. are in this category too; not variables but names for known functions, and no mathematician would write them in italic. So apparently Haskell/functional programming papers with "some mathematical touch to them" use a typographical convention that is based on misunderstanding the normal mathematical typography. Whether this wikibook should follow the normal mathematical or the alternative functional programming convention is a difficult question. Maybe it depends on which audience this wikibook is targeted at: readers familiar with math but new to functional programming (who'd be less confused by normal math typography), or readers more familiar with functional programming papers and less familiar with normal math (who'd be more confused by the normal math typography).
 * -- Sampo Smolander 10:56, 26 November 2011 (UTC)

Applicative Functors
Should there be a section about applicative functors? It seems to me that a functor F can be made into an applicative functor if it respects exponential objects, where $$ B^A $$ should be the object consisting of functions from A to B. By that I mean that there needs to be a map

$$F(B^A)\to F(B)^{F(A)}$$

Another way to say this is that there is a map

$$ F(B^A)\times F(A)\to F(B)$$

But I haven't quite figured out where the pure requirement fits in (I'm not sure why a map $$A\to F(A)$$ is important to have for this). Possibly, we want the map $$ F(B^A)\times F(A)\to F(B)$$ to be the unique map that makes a diagram commute, where we use $$\text{pure}:B^A\times A\to F(B^A\times A)$$, and map that to $$ F(B) $$ by $$F(f)$$ where $$f:B^A\times A\to B$$ is the evaluation map, and we also map $$B^A\times A$$ to $$ F(B^A)\times F(A)$$ by the product of the two pure maps (sorry if it isn't clear; in the end we have a square we want to commute). I'm not sure if this makes sense/is actually true though.

Taylor561 (talk) 18:43, 26 November 2010 (UTC)
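For comparison with the maps above, this is how the Applicative class exposes them in Haskell; apPair is an assumed helper name, and whether it satisfies the suggested universal property is exactly the open question in the comment:

```haskell
-- (<*>) :: Applicative f => f (a -> b) -> f a -> f b is the curried form
-- of the internal map F(B^A) -> F(B)^{F(A)}; uncurrying gives:
apPair :: Applicative f => (f (a -> b), f a) -> f b
apPair (ff, fa) = ff <*> fa

-- pure :: a -> f a is the map A -> F(A) whose role is being discussed.
main :: IO ()
main = do
  print (apPair (Just (+1), Just (2 :: Int)))
  print (apPair ([(+1), (*10)], [1, 2 :: Int]))
```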

ID remark is incorrect
The article says that: The function id in Haskell is polymorphic - it can take many different types for its domain and range, or, in category-speak, can have many different source and target objects. But morphisms in category theory are by definition monomorphic - each morphism has one specific source object and one specific target object. This is a gross error. A monomorphism is a left-cancellative morphism. The author is confusing the property of a function of sets being single-valued with a monomorphism. (Monomorphisms generalize the notion of an injective function, because they go mono-e-mono... now you'll never forget that!)

Strictly speaking, from the mathematical point of view, the id function can be viewed as an abuse of notation. Thus Hask is a category.

Moreover, with regards to subtyping, one can use a Subobject classifier in a beautiful way...since the objects of Hask are types.

Just thought it might be nice to clarify that from the mathematician's point of view...

Pqnelson (discuss • contribs) 01:09, 10 February 2011 (UTC)


 * Nicely spotted! The funny thing is that two rather different concepts have "monomorph" in their name.
 * monomorphic type vs polymorphic type. That's what the author meant.
 * morphism, monomorphism, epimorphism and friends. That's what you mean and what the author seemed to write.
 * In category-speak, the id function is a natural transformation. The Hask category has the curious property that a lot of natural transformations can be reified as elements of some objects (which correspond to polymorphic types). This is like a cartesian closed category, where there are power objects $$B^A$$ whose elements correspond to morphisms.
 * I agree that the text needs a small rewrite to make the above distinction more clear.
 * --apfe&lambda;mus 09:25, 10 February 2011 (UTC)

[video] Dominic Verity on Category Theory
Maybe it is worth including links to http://vimeo.com/17207564 and http://www.youtube.com/watch?v=yilkBvVDB_w

Dominic Verity presents a gentle introduction to Category Theory, perfect for those who've been playing with Haskell for some time and wanted to know what it's all about. Yrogirg (discuss • contribs) 10:13, 18 August 2011 (UTC)

What the heck is a morphism?
I applaud the initiative to give an overview of category theory, and its relationship to Haskell. And it's encouraging that the discussion starts with the basic: "a category is a collection with three components".

But the article immediately heads off into the weeds by explaining everything in terms of "morphisms", a word which appears over fifty times. Yet never is any clue provided as to what the morphism word actually delineates (although we are told that they are sometimes called arrows, no wait, ignore that, and they may not be functions, yet correspond to functions in the Haskell sense).

Just from the context of this article, morphisms appear to be functions, or something very like functions. But the fact that this article studiously avoids calling them functions suggests that anyone assuming they are functions would be on the wrong track. So basically, if the reader doesn't already understand what morphism refers to in this neighborhood of mathematics they are going to be out of luck.

If someone takes on the task of adding a description introducing the word "morphism", before it is used, telling what "morphism" encompasses, could they please also comment on how it relates (or doesn't) to the polymorphism, monomorphism and any other morphisms we're likely to encounter.

Thanks! Gwideman (discuss • contribs) 09:16, 30 December 2012 (UTC)


 * Pretty good question! The underlying issue is that there is nothing else we can say about what a morphism is in general; the concept, just like that of object, is entirely abstract. There are things called objects, and things called morphisms which have one object as source and one object as target, and so forth. Category Theory abstracts from what those things actually consist of.


 * What we can do to clarify things is providing examples of categories. Then it is not surprising that morphisms appear to be "something very like functions", given that in the very familiar example of the Set category functions are the morphisms, and in the no less familiar example of Hask Haskell functions are the morphisms. But it needn't be so. For an arbitrary example, you can make a category out of the integer numbers by taking a single object (and it doesn't even matter what it is), the integers as morphisms, addition as composition and zero as identity. For a different sort of example, when Gabriel Gonzalez talks in the pipes documentation about "the category of Unix pipes" he refers to the fact that a Pipe is a morphism in a category with >-> as composition and cat (the forward-as-is pipe) as identity (a neat presentation of the underlying ideas is provided by "The category design pattern" in Gabriel's blog).


 * As for the relation to monomorphism and polymorphism with respect to types, they are different things. This question, and Apfelmus' answer to it, may shed some light on the distinction.


 * Duplode (discuss • contribs) 21:48, 12 April 2014 (UTC)
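The integers-as-morphisms example above can be sketched directly; IntArrow, compose and identity are made-up names for illustration:

```haskell
-- A one-object category: morphisms are integers, composition is (+),
-- and the identity morphism is 0. The single object never appears.
newtype IntArrow = IntArrow Int deriving (Eq, Show)

compose :: IntArrow -> IntArrow -> IntArrow
compose (IntArrow m) (IntArrow n) = IntArrow (m + n)

identity :: IntArrow
identity = IntArrow 0

main :: IO ()
main = do
  -- category laws: associativity of composition and the identity laws
  print (compose (IntArrow 1) (compose (IntArrow 2) (IntArrow 3))
           == compose (compose (IntArrow 1) (IntArrow 2)) (IntArrow 3))
  print (compose identity (IntArrow 5) == IntArrow 5)
  print (compose (IntArrow 5) identity == IntArrow 5)
```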

Defintion of composition unclear
"A notion of composition of these morphisms. If h is the composition of morphisms f and g, we write $$h=f\circ g$$."

It is very unclear what the exact requirements are. Is this "notion of composition" the same for all pairs of objects / morphisms? Does it have to exist between all morphisms?

E.g. the Set category does not work if composition between all $$f : A \to B$$ and $$g : C \to D$$ is required even if $$B \neq C$$, but from the "definition" given above, it is not clear why only specific morphisms have to be composable. How about this (copied from http://en.wikibooks.org/wiki/Category_Theory/Categories#Data )

"For each ordered triple of objects A, B, C in $$\mathcal C$$, there is a law of composition: If $$f:A\to B$$ and $$g:B\to C$$, then the composite of f and g is a morphism $$g\circ f:A\to C$$"

I personally don't know any category theory, so I don't know if that is correct: It would mean that the composition is allowed to follow different laws for different ordered triples of objects.


 * Well spotted - $$f\circ g$$ is only defined if the target of $$f$$ is the source of $$g$$, and the text didn't make that clear enough. I rewrote the passage to show sources and targets explicitly. --Duplode (discuss • contribs) 20:24, 12 April 2014 (UTC)

Second monad law
In discussing the equation: join . fmap return = join . return = id

the "= id" at the end is ignored, but in fact it's the whole point of the equation, namely that both join . fmap return and join . return are no-ops (i.e. join undoes fmap return, and return by itself, so composing them produces an id of the appropriate type). I leave it to the author to amend the text. 2602:306:CD38:1040:F8A5:67E8:E2DA:1569 (discuss) 10:07, 17 July 2015 (UTC)
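The point that both composites are identities is easy to check for the list monad (a minimal sketch):

```haskell
import Control.Monad (join)

-- Second monad law: join . fmap return = join . return = id.
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
  print ((join . fmap return) xs == xs)  -- fmap return xs = [[1],[2],[3]]
  print ((join . return) xs == xs)       -- return xs = [[1,2,3]]
```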


 * Well, the original authors are long gone :) Fortunately, this is Wikibooks, and so we have carte blanche to fix things. I agree with your remark, and will try to improve the text later. In fact, I find the treatment of the so-called "third and fourth laws" (justified in footnote 2) presented here somewhat distasteful. If the reader has gone all the way to find out about Category Theory, this kind of lies-to-children isn't very useful. Furthermore, in more recently written chapters about various type classes the book talks about "laws" and "bonus laws" (the latter being naturality properties, which always hold in Haskell thanks to parametricity), so there is background to mention natural transformations in passing. Fixing this issue should lead to the improvement you suggest, as we would then be able to talk about the actual three monad laws.--Duplode (discuss • contribs) 18:15, 17 July 2015 (UTC)

The second exercise in the "Introduction to Categories" section
From the little and brittle knowledge I have about category theory, the diagram does seem like a category to me, and it really bugs me to be told it's not. I'm no expert in pedagogy, but I can feel as a beginner that that exercise is doing more harm than good, unless the answer is given within reach. I'm trying to form an understanding around these very abstract concepts, and the exercise tells me that the picture I've been forming so far is wrong, without telling me why. So I question everything in the picture and it weakens my understanding. (This would actually make it a great exercise, if the answer were also given.) Thanks for the great wikibook :) Enisbayramoglu (discuss • contribs) 04:14, 25 January 2016 (UTC)


 * If you follow the diagram, you will see that we can compose $$f$$, $$g$$ and $$h$$, like so: $$f \circ g \circ h$$. The associative law states that the following must hold true: $$f \circ (g \circ h) = (f \circ g) \circ h$$. From the diagram, we can see that $$g \circ h = id_B$$ and $$f \circ g = id_A$$. Thus, we can replace into the expression for the associative law: $$f \circ id_B = id_A \circ h$$. By the definition of identity, this means that $$f = h$$, which is not true. Thus, the diagram's description does not hold up to the category laws, and cannot be a category. (Disclaimer: I am a complete beginner at this. Please correct me if I did any mistakes.) --Insipido (discuss • contribs) 16:36, 11 September 2016 (UTC)


 * Your solution is correct. I do think Enisbayramoglu has a point: beyond the emptiness of the solutions page, something about this exercise strikes me the wrong way. Perhaps the presentation is a little brusque, or the background assumption that the drawn arrows correspond to all distinct morphisms (which e.g. is what allows saying that $$g \circ h = id_B$$) is not obvious enough. One more thing to consider in a future cleanup of this chapter... --Duplode (discuss • contribs) 04:36, 6 October 2016 (UTC)

the do-block section has the numbering of the laws all confused ?
For example, 'return x >>= f = f x' is bullet-listed as (1), whereas it is in fact the 3rd law in the earlier section.