In this article we'll look at a very interesting class of monads: state monads.
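Before diving in, here's a hedged preview of where we're headed: a minimal, standalone state monad written from scratch. The real thing lives in Control.Monad.State; the names below just mirror it for familiarity, and the derivation itself comes later.

```haskell
-- A minimal sketch of a state monad: a computation is a function that
-- takes the current state and returns a result paired with a new state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f m = State $ \s -> let (a, s') = runState m s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  mf <*> ma = State $ \s ->
    let (f, s')  = runState mf s
        (a, s'') = runState ma s'
    in (f a, s'')

instance Monad (State s) where
  -- Run the first computation, then feed its result and the updated
  -- state into the next one.
  m >>= f = State $ \s ->
    let (a, s') = runState m s
    in runState (f a) s'

-- Read and replace the threaded state.
get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- Example: return the current counter value and increment it.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n
```

Running `runState tick 0` yields `(0, 1)`: the old counter comes back as the result, and the incremented counter is the new state.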
In the previous
article we discussed error-handling strategies in Haskell and derived the
Either e monad for structured error handling. In this article
we'll continue our discussion of error handling by introducing some new type
classes that will enable us to write code which recovers from errors in a
selective way (which is called exception handling in most computer
languages). Interestingly, exception handling doesn't have to be built into
Haskell; you get it for free once you have monads!
I also promise (threaten?) that there will be one absolutely jaw-droppingly awful pun in what follows. You have been warned.
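As a taste of what "recovering from errors in a selective way" means, here's a hedged sketch: standalone versions of throwError and catchError for Either. The real versions belong to a type class (mtl's MonadError); these freestanding definitions just mirror those names for illustration.

```haskell
-- Signal an error: an Either computation fails by producing a Left.
throwError :: e -> Either e a
throwError = Left

-- Selectively recover: if the computation failed, hand the error to a
-- handler; if it succeeded, pass the result through untouched.
catchError :: Either e a -> (e -> Either e a) -> Either e a
catchError (Left e)  handler = handler e
catchError (Right a) _       = Right a

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = throwError "division by zero"
safeDiv x y = Right (x `div` y)

-- Recover from a division-by-zero error with a default value.
example :: Either String Int
example = safeDiv 1 0 `catchError` \_ -> Right 0
```

Here `example` evaluates to `Right 0`: the handler fires only on the failing branch, while `safeDiv 6 2` sails through as `Right 3` with no handler involved.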
In the previous articles in this
series, I covered the conceptual basis of monads, but the discussion was pretty
abstract. Now that (I hope) you have a pretty good idea of what monads are and
what they're for, it's time to get into the nitty-gritty details of how to
actually derive the definitions of specific monads. That means that we are
going to have to define the correct instances of the type class
Monad for the various notions of computation we talked about
previously. In our derivations, we will use our understanding of how monadic
composition should work in each particular case to give us the definition of
monadic bind (the
>>= operator), and we'll use the monad
laws to give us the definition of monadic return.
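To preview the flavor of such a derivation, here is a sketch for the simplest case, Maybe: thinking about how computations that can fail must compose directly dictates the definition of bind. (It's written here as a standalone function with an illustrative name, rather than as the Monad instance it will become.)

```haskell
-- The definition of bind for Maybe, read off from how failing
-- computations must compose: a Nothing anywhere aborts the whole
-- pipeline; a Just feeds its value to the next step.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just a) f = f a

-- A step that can fail: require its argument to be positive.
positive :: Int -> Maybe Int
positive n = if n > 0 then Just n else Nothing

-- A tiny two-step pipeline: look up a key, then validate the value.
example :: Maybe Int
example = lookup "x" [("x", 3), ("y", 0)] `bindMaybe` positive
```

Here `example` is `Just 3`, while looking up `"y"` instead would give `Nothing`, since `positive 0` fails and aborts the pipeline.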
It's a standing joke in the Haskell community that every Haskell programmer, as part of his or her learning process, will eventually write one or more monad tutorials. I'm certainly no exception. But since I know that there are already dozens of monad tutorials out there, some quite good, why on earth would I want to write Yet Another One? There are two reasons:
1. I think I can explain some aspects of monads better than most of the monad tutorials I've seen.
2. My own understanding of monads has improved greatly, and I'd like to try to pass that on if I can.
Ahh, summer! A time for backyard barbecues, ice-cold beer, baseball games, and playing with experimental Haskell type system extensions. Or in my case, just the latter. In this article I'll show you how to compute factorials using Haskell's type system, i.e. at compile time.
Why would we want to do this? you may well ask. The real reason is "because it's neat". This is a really beautiful construction; it took my breath away when I figured it out. But another reason is that the techniques I'll illustrate are useful in many other contexts. For instance, you can use something like this to define a data type of a list which carries its length around with it (so that e.g. taking the head of an empty list is a compile-time error). This is very powerful; the more errors you can check at compile-time, the better.
Nothing in this article is even remotely original. The direct inspiration for it came from a blog post by Brent Yorgey, which I've shamelessly stolen from and extended (thanks, Brent!). The indirect inspiration was from some comments on my previous blog post on Scala, which suggested that I look at a paper on embedding object-oriented programming in Haskell. That paper, called Haskell's Overlooked Object System, by Oleg Kiselyov and Ralf Lammel, is an amazing piece of work. In order to understand it fully, though, you first have to read another paper called Strongly Typed Heterogeneous Collections, by Oleg, Ralf and Keean Schupke. So that's what I did, and in the process learned quite a bit, which I'll use to good effect below. Both of these papers are strongly recommended if you're ready for them and if you enjoy having your mind blown.
To understand this article you need to know Haskell up to and including polymorphic types and (single parameter) type classes. I'll explain everything else as we go. If you want to run the examples you'll need the GHC Haskell compiler (Hugs may also work, but I haven't tried it). I used GHC version 6.12.1 to test this code.
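To give a hedged preview of the machinery involved, here is a sketch of type-level Peano naturals with type-level addition, using multi-parameter type classes and functional dependencies. This is just addition; multiplication and factorial are built the same way, layer by layer. The names are illustrative, not from any library.

```haskell
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, ScopedTypeVariables, UndecidableInstances #-}

-- Type-level Peano naturals: these types have no values; they exist
-- only to be computed with at compile time.
data Zero
data Succ n

-- Reflect a type-level natural back down to an Int at runtime, so we
-- can inspect the compiler's answer.
class Nat n where
  toInt :: n -> Int

instance Nat Zero where
  toInt _ = 0

instance Nat n => Nat (Succ n) where
  toInt _ = 1 + toInt (undefined :: n)

-- Type-level addition: the functional dependency (a b -> c) turns the
-- class into a compile-time function from a and b to c.
class Add a b c | a b -> c where
  add :: a -> b -> c
  add _ _ = undefined

instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

type Two   = Succ (Succ Zero)
type Three = Succ Two
```

With these definitions, `toInt (add (undefined :: Two) (undefined :: Three))` evaluates to `5`: the sum was computed by instance resolution, at compile time, and `toInt` merely reads the answer off the result type.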
There are two reasons I might want to learn a new programming language. One is that the language is useful for practical programming in a way that my current favorite languages aren't, and the other is that the language has some interesting features that I haven't seen before (or at least, haven't seen implemented as well). Scala appears to fulfill both criteria.
From a practical standpoint, Scala runs on the JVM, it's fast, and using Scala I can do GUI programming, web programming (using the Lift framework), and mobile programming (using Android, though I gather that it's not yet a trivial exercise to get Scala code running on Android). Scala appears to be far ahead of Haskell in all of these areas. I expect that Haskell will catch up eventually, but I'd like to write these kinds of applications now.
From a theoretical standpoint, the appeal of Scala is the way that it combines functional and object-oriented programming in a statically-typed setting. This is not easy to do. It's easy to do in a dynamically-typed language (for instance, Lisp with CLOS), but static typing brings in a lot of complexity, notably with respect to subtyping and inheritance. Haskell can support some aspects of OOP (notably, using type classes and existential types), but I haven't seen OO-style inheritance modeled in Haskell, even though it's often quite useful. Scala has a very sophisticated OO system, with features like explicit sub- and supertype annotations, covariance and contravariance annotations, etc., which go beyond what I've seen in other OO languages. In fact, Scala implements all of its functional features as objects, so calling it an object-functional language is less accurate than calling it an extremely powerful OO language.
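To make the claim about Haskell concrete, here is a hedged sketch of one OO-ish idiom Haskell does support well: a heterogeneous collection of "objects" sharing an interface, built from a type class plus an existential wrapper. (The names are illustrative; note there's still no inheritance here, which is exactly the gap mentioned above.)

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- An "interface": anything with an area.
class Shape a where
  area :: a -> Double

data Circle = Circle Double  -- radius
data Square = Square Double  -- side length

instance Shape Circle where area (Circle r) = pi * r * r
instance Shape Square where area (Square s) = s * s

-- The existential wrapper hides the concrete type, keeping only the
-- fact that it implements Shape -- much like an interface-typed
-- reference in an OO language.
data AnyShape = forall a. Shape a => AnyShape a

-- A heterogeneous list of shapes, processed uniformly.
totalArea :: [AnyShape] -> Double
totalArea xs = sum [area s | AnyShape s <- xs]
```

For example, `totalArea [AnyShape (Square 2), AnyShape (Square 3)]` gives `13.0`, even though the list could just as well mix circles and squares.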
I think of Scala as kind of an object-oriented dual to Haskell; it's a best-of-breed language for its paradigm. (I'm ignoring dependently-typed functional languages such as Agda/Coq/Epigram here; I know they exist but IMO they can't yet be considered general purpose programming languages.) Whether objects are a more fundamental construct than functions in a statically-typed setting is unclear to me. In dynamically-typed languages, you can get objects from functions (assuming functions are closures), or you can get functions from objects, so the two notions are interconvertible. Whether or not this is the case with static typing, I think it's interesting to have two languages approaching this question from two different directions. On the other hand, I learned about OO from Smalltalk, and I have to say that I've never seen a statically-typed OO language that had the elegance of the simple message-passing model of Smalltalk. On the other other hand, I would much rather write a large program in Scala than in Smalltalk. (Also, Scala has an Actors library for when you really want straight message-passing.)
Another thing I like about Scala, and one it shares with Haskell, is that both languages have their roots in academia. When you're designing a language with a sophisticated type system, it really helps a lot if you actually know something about type theory. Both the Scala and Haskell designers clearly do, and this accounts for many of the good qualities of these languages. I hope that we're soon approaching the day when the kind of people who dismiss languages because of their academic roots in favor of real he-men "practical" languages will have to STFU because the evidence will show that languages designed by academics (i.e. people who actually know what they are doing) are simply superior at solving real-world problems.
I'm still getting my feet wet with Scala, and I look forward to learning more. But I still love Haskell and I'm not going to give it up either ;-)