Learn You a Haskell for Great Good! - Miran Lipovaca [134]
This law might be a bit less obvious than the first one. It says that if we have a monadic value and we use >>= to feed it to return, the result is our original monadic value. Let’s take a look at why it should hold. When we feed monadic values to functions by using >>=, those functions take normal values and return monadic ones. return is also one such function, if you consider its type.
return puts a value in a minimal context that still presents that value as its result. This means that, for instance, for Maybe, it doesn’t introduce any failure; for lists, it doesn’t introduce any extra nondeterminism.
Here’s a test run for a few monads:
ghci> Just "move on up" >>= (\x -> return x)
Just "move on up"
ghci> [1,2,3,4] >>= (\x -> return x)
[1,2,3,4]
ghci> putStrLn "Wah!" >>= (\x -> return x)
Wah!
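To see why the minimal context matters, consider a hypothetical alternative to return for lists (fakeReturn is made up for this sketch, not a real function) that adds extra nondeterminism. It breaks right identity:

```haskell
-- A hypothetical "return" for lists that is NOT minimal:
-- it duplicates the value, introducing extra nondeterminism.
fakeReturn :: a -> [a]
fakeReturn x = [x, x]

-- The real return satisfies right identity:
minimal :: Bool
minimal = ([1,2,3] >>= return) == [1,2,3]

-- fakeReturn does not: the context has grown, so the result
-- is no longer the original list.
grown :: Bool
grown = ([1,2,3] >>= fakeReturn) == [1,1,2,2,3,3]
```

Both checks come out True, which shows that right identity holds only because return keeps the context minimal.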
For the list monad, the implementation of >>= is as follows:
xs >>= f = concat (map f xs)
So when we feed [1,2,3,4] to return with >>=, return first gets mapped over [1,2,3,4], resulting in [[1],[2],[3],[4]]. Then this is concatenated, and we have our original list.
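That unfolding can be spelled out step by step (the names step1 and step2 are just for illustration):

```haskell
-- [1,2,3,4] >>= return, unfolded via  xs >>= f = concat (map f xs)

-- First, return gets mapped over the list:
step1 :: [[Int]]
step1 = map return [1,2,3,4]   -- [[1],[2],[3],[4]]

-- Then the result is concatenated, giving back the original list:
step2 :: [Int]
step2 = concat step1           -- [1,2,3,4]
```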
Left identity and right identity are basically laws that describe how return should behave. It’s an important function for making normal values into monadic ones, and it wouldn’t be good if the monadic value that it produced had any more than the minimal context needed.
Associativity
The final monad law says that when we have a chain of monadic function applications with >>=, it shouldn’t matter how they’re nested. Formally written, doing (m >>= f) >>= g is just like doing m >>= (\x -> f x >>= g).
Hmmm, now what’s going on here? We have one monadic value, m, and two monadic functions, f and g. When we’re using (m >>= f) >>= g, we’re feeding m to f, which results in a monadic value. Then we feed that monadic value to g. In the expression m >>= (\x -> f x >>= g), we take a monadic value and we give it to a function that feeds the result of f x to g. It’s not easy to see how those two are equal, so let’s take a look at an example that makes this equality a bit clearer.
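Here’s a small Maybe example that makes the equality concrete (half and dec are made-up helpers, not from the text):

```haskell
-- Two monadic functions that can each fail:
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing

dec :: Int -> Maybe Int
dec x = if x > 0 then Just (x - 1) else Nothing

-- Nesting to the left: feed Just 8 to half, then feed the
-- result to dec.
nestedLeft :: Maybe Int
nestedLeft = (Just 8 >>= half) >>= dec

-- Nesting to the right: feed Just 8 to a function that feeds
-- the result of half x to dec.
nestedRight :: Maybe Int
nestedRight = Just 8 >>= (\x -> half x >>= dec)
```

Both expressions evaluate to Just 3, regardless of how the applications are nested.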
Remember when we had our tightrope walker, Pierre, walk a rope while birds landed on his balancing pole? To simulate birds landing on his balancing pole, we made a chain of several functions that might produce failure:
ghci> return (0, 0) >>= landRight 2 >>= landLeft 2 >>= landRight 2
Just (2,4)
We started with Just (0, 0) and then bound that value to the next monadic function, landRight 2. The result of that was another monadic value, which got bound to the next monadic function, and so on. If we were to explicitly parenthesize this, we would write the following:
ghci> ((return (0, 0) >>= landRight 2) >>= landLeft 2) >>= landRight 2
Just (2,4)
But we can also write the routine like this:
return (0, 0) >>= (\x ->
landRight 2 x >>= (\y ->
landLeft 2 y >>= (\z ->
landRight 2 z)))
return (0, 0) is the same as Just (0, 0), and when we feed it to the lambda, the x becomes (0, 0). landRight takes a number of birds and a pole (a tuple of numbers), and that’s what it gets passed. This results in a Just (0, 2), and when we feed this to the next lambda, y is (0, 2). This goes on until the final bird landing produces a Just (2, 4), which is indeed the result of the whole expression.
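If you want to replay this in GHCi, here are the tightrope definitions as they appeared earlier in the book (an imbalance of four or more birds knocks Pierre off the rope), along with both nestings:

```haskell
type Birds = Int
type Pole = (Birds, Birds)

landLeft :: Birds -> Pole -> Maybe Pole
landLeft n (left, right)
    | abs ((left + n) - right) < 4 = Just (left + n, right)
    | otherwise                    = Nothing

landRight :: Birds -> Pole -> Maybe Pole
landRight n (left, right)
    | abs (left - (right + n)) < 4 = Just (left, right + n)
    | otherwise                    = Nothing

-- Explicitly parenthesized to the left:
routineLeft :: Maybe Pole
routineLeft = ((return (0, 0) >>= landRight 2) >>= landLeft 2) >>= landRight 2

-- Nested to the right with lambdas:
routineRight :: Maybe Pole
routineRight = return (0, 0) >>= (\x ->
               landRight 2 x >>= (\y ->
               landLeft 2 y >>= (\z ->
               landRight 2 z)))
```

Both routines produce Just (2,4).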
So it doesn’t matter how you nest feeding values to monadic functions. What matters is their meaning. Let’s consider another way to look at this law. Suppose we compose two functions named f and g:
(.) :: (b -> c) -> (a -> b) -> (a -> c)
f . g = (\x -> f (g x))
If the type of g is a -> b and the type of f is b -> c, we arrange them into a new function that has a type of a -> c, so that its parameter is passed between those functions. Now what if those two functions were monadic? What if the values they returned were monadic values? If we had a function of type a -> m b, we couldn’t just pass its result to a function of type b -> m c, because that function accepts a normal b, not a monadic one. We could, however, use >>= to make that happen.
(<=<) :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
f <=< g = (\x -> g x >>= f)
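Viewed through <=<, the associativity law reads like ordinary composition associativity: f <=< (g <=< h) is the same as (f <=< g) <=< h. A quick sketch with made-up list functions (the operator definition is repeated from the text so the file stands alone):

```haskell
-- Repeating the definition from the text, for a self-contained file:
(<=<) :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
f <=< g = (\x -> g x >>= f)

-- Three illustrative monadic functions on lists:
fl, gl, hl :: Int -> [Int]
fl x = [x, -x]
gl x = [x * 3, x * 2]
hl x = [x + 1]

-- Both groupings give the same result when applied to 3:
assocL, assocR :: [Int]
assocL = ((fl <=< gl) <=< hl) 3
assocR = (fl <=< (gl <=< hl)) 3
```

Either grouping yields [12,-12,8,-8]: hl 3 gives [4], gl turns that into [12,8], and fl negates each value alongside the original.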
So now we can compose two monadic functions:
ghci>