Learn You a Haskell for Great Good! - Miran Lipovaca
list. What about the probability? Well, return x is supposed to make a monadic value that always presents x as its result, so it doesn’t make sense for the probability to be 0. If it always must present this value as its result, the probability should be 1!

What about >>=? Seems kind of tricky, so let’s make use of the fact that m >>= f always equals join (fmap f m) for monads and think about how we would flatten a probability list of probability lists. As an example, let’s consider this list where there’s a 25 percent chance that exactly one of 'a' or 'b' will happen. Both 'a' and 'b' are equally likely to occur. Also, there’s a 75 percent chance that exactly one of 'c' or 'd' will happen. 'c' and 'd' are also equally likely to happen. Here’s a picture of a probability list that models this scenario:

What are the chances for each of these letters to occur? If we were to draw this as just four boxes, each with a probability, what would those probabilities be? To find out, all we need to do is multiply each outer probability by each of the inner probabilities it contains. 'a' would occur one time out of eight, as would 'b', because if we multiply one-half by one-quarter, we get one-eighth. 'c' would happen three times out of eight, because three-quarters multiplied by one-half is three-eighths. 'd' would also happen three times out of eight. If we sum all the probabilities, they still add up to one.
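As a quick sanity check, this arithmetic can be reproduced directly with Rational values from Data.Ratio (the same type Prob uses for its probabilities):

```haskell
import Data.Ratio ((%))

-- Each outer probability multiplied by the inner probability it contains.
probA, probB, probC, probD :: Rational
probA = (1 % 4) * (1 % 2)  -- chance of 'a': 1 % 8
probB = (1 % 4) * (1 % 2)  -- chance of 'b': 1 % 8
probC = (3 % 4) * (1 % 2)  -- chance of 'c': 3 % 8
probD = (3 % 4) * (1 % 2)  -- chance of 'd': 3 % 8

main :: IO ()
main = print (probA + probB + probC + probD)  -- prints 1 % 1
```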

Here’s this situation expressed as a probability list:

thisSituation :: Prob (Prob Char)
thisSituation = Prob
    [(Prob [('a',1%2),('b',1%2)], 1%4)
    ,(Prob [('c',1%2),('d',1%2)], 3%4)
    ]

Notice that its type is Prob (Prob Char). So now that we’ve figured out how to flatten a nested probability list, all we need to do is write the code for this. Then we can write >>= simply as join (fmap f m), and we have ourselves a monad! So here’s flatten, which we’ll use because the name join is already taken:

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

The function multAll takes a tuple consisting of a probability list and the probability p that accompanies it, and multiplies every inner probability by p, returning a list of pairs of items and probabilities. We map multAll over each pair in our nested probability list, and then we just flatten the resulting nested list.
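To see flatten in action, here is a self-contained sketch. It restates the Prob newtype and its Functor instance from earlier in the chapter, then flattens thisSituation into a plain probability list:

```haskell
import Data.Ratio ((%))

-- The Prob newtype and Functor instance from earlier in the chapter.
newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x, p) -> (f x, p)) xs

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

thisSituation :: Prob (Prob Char)
thisSituation = Prob
    [(Prob [('a',1%2),('b',1%2)], 1%4)
    ,(Prob [('c',1%2),('d',1%2)], 3%4)
    ]

main :: IO ()
main = print $ getProb (flatten thisSituation)
-- [('a',1 % 8),('b',1 % 8),('c',3 % 8),('d',3 % 8)]
```

The four probabilities in the output match the ones we worked out by hand, and they still sum to one.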

Now we have all that we need. We can write a Monad instance!

instance Monad Prob where
    return x = Prob [(x,1%1)]
    m >>= f = flatten (fmap f m)
    fail _ = Prob []

Because we already did all the hard work, the instance is very simple. We also defined the fail function, which is the same as it is for lists, so if there’s a pattern-match failure in a do expression, a failure occurs within the context of a probability list.
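One caveat if you try this on a modern GHC: Monad now has Applicative as a superclass, and fail has moved to the separate MonadFail class, so the instance needs to be split up along these lines (a sketch, restating Prob and flatten from earlier in the chapter):

```haskell
import Control.Monad (ap)
import Data.Ratio ((%))

newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x, p) -> (f x, p)) xs

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

-- On modern GHC, pure replaces return, <*> can be derived from >>= via ap,
-- and fail goes in its own MonadFail instance.
instance Applicative Prob where
    pure x = Prob [(x, 1%1)]
    (<*>)  = ap

instance Monad Prob where
    return  = pure
    m >>= f = flatten (fmap f m)

instance MonadFail Prob where
    fail _ = Prob []

main :: IO ()
main = print $ getProb $ do
    x <- Prob [('a',1%2),('b',1%2)]
    y <- Prob [('1',1%4),('2',3%4)]
    return [x, y]
-- [("a1",1 % 8),("a2",3 % 8),("b1",1 % 8),("b2",3 % 8)]
```

The do expression at the end shows the monad in use: each pair of draws ends up with the product of its two probabilities.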

It’s also important to check if the monad laws hold for the monad that we just made:

The first law says that return x >>= f should be equal to f x. A rigorous proof would be rather tedious, but we can see that if we put a value in a default context with return, then fmap a function over that, and then flatten the resulting probability list, every probability that results from the function would be multiplied by the 1%1 probability that we made with return, so it wouldn’t affect the context.

The second law states that m >>= return is no different than m. For our example, the reasoning for m >>= return being equal to just m is similar to that for the first law.

The third law states that f <=< (g <=< h) should be the same as (f <=< g) <=< h. This one is true as well, because it holds for the list monad that forms the basis of the probability monad and because multiplication is associative. 1%2 * (1%3 * 1%5) is equal to (1%2 * 1%3) * 1%5.
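These arguments aren't proofs, but we can at least spot-check all three laws on concrete values. In this sketch, bindP and returnP are hypothetical stand-ins for >>= and return (so we don't need a full instance), and f and g are arbitrary test functions:

```haskell
import Data.Ratio ((%))

newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x, p) -> (f x, p)) xs

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

-- Stand-ins for return and >>=, as defined in our Monad instance.
returnP :: a -> Prob a
returnP x = Prob [(x, 1%1)]

bindP :: Prob a -> (a -> Prob b) -> Prob b
bindP m f = flatten (fmap f m)

-- Two arbitrary probabilistic functions to test with.
f, g :: Char -> Prob Char
f c = Prob [(c, 1%2), (succ c, 1%2)]
g c = Prob [(c, 1%3), (succ c, 2%3)]

main :: IO ()
main = do
    -- Left identity: return x >>= f  ==  f x
    print $ getProb (returnP 'a' `bindP` f) == getProb (f 'a')
    -- Right identity: m >>= return  ==  m
    print $ getProb (f 'a' `bindP` returnP) == getProb (f 'a')
    -- Associativity: (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
    let m = Prob [('a',1%2),('b',1%2)]
    print $ getProb ((m `bindP` f) `bindP` g)
         == getProb (m `bindP` (\x -> f x `bindP` g))
```

All three checks print True; multiplying rationals is associative, so the probabilities agree no matter how the binds are grouped.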

Now that we have a monad, what can we do with it? Well, it can help us do calculations with probabilities. We can treat probabilistic events as values with contexts, and the probability monad will make sure that those probabilities are reflected in the probabilities of the final result.

Say we have two
