Update: I mixed up some numbers. I wrote about 375 MB, but it was 175 MB. (No one seems to have noticed, though. Anyway, the argument still holds.)
Haskell's Network.HTTP package isn't quite as good as it could be. Well, to be precise, it is not at all as good as it should be. In addition to API problems (for which I proposed a solution in my previous blog entry) there's also a major performance problem, due to strictness and the use of regular list-based `String`s. A simple `wget`-style program written in Haskell, used like `./get http://localhost/file.big` on a local 175 MB file, almost locked up my 1 GB laptop due to constant swapping. I had to kill it, as it was using more than 500 MB of RAM (still swapping). At this point it had run for 50 seconds and had not written a single byte to the output file. Meanwhile, a normal `wget` completed after about 10 seconds. Since the file was retrieved from a local server I assume overall performance was inherently limited by disk speed (or the operating system's caching strategy).
The current implementation performed so badly for two reasons:

- Since it uses list-based strings, each retrieved byte takes up (at least) 8 bytes in program memory (one cons cell: tag + data + pointer to tail).
- It implements custom, list-based buffering. The buffer size is 1000 characters/bytes, which is rather OK for line-based reading, but if the HTTP protocol requests a large block of data, that block will be read in 1000-byte chunks, each of which is appended to the part that has already been read. So if we read a block of 8000 bytes, the first chunk will end up being copied 8 times(!). Let's not think about reading a block of 175,000,000 bytes. Also because we already know the answer.
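That quadratic append pattern can be sketched as follows. This is a hypothetical `readChunked`, not the actual Network.HTTP code, but it shows the same accumulate-by-append structure:

```haskell
-- Hypothetical sketch of the list-based buffering scheme: each step
-- appends the next chunk to everything read so far, and (++) has to
-- re-copy the accumulated prefix every time, giving O(n^2) work overall.
readChunked :: [String] -> String
readChunked = go ""
  where
    go acc []     = acc
    go acc (c:cs) = go (acc ++ c) cs  -- copies all of acc: O(length acc)
```

With 8 chunks of 1000 bytes, the first chunk gets re-copied on each of the 7 subsequent appends.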
| Program | Runtime | Memory Use |
|---|---|---|
| wget | ~10s | ~0.5MB |
| ./get using strict ByteStrings | ~18s | ~175MB |
| ./get using lazy ByteStrings | ~11s | ~3MB |
`Network.HTTP` already implements a simple Stream abstraction with a simple interface:

```haskell
class Stream x where
  readLine   :: x -> IO (Result String)
  readBlock  :: x -> Int -> IO (Result String)
  writeBlock :: x -> String -> IO (Result ())
  close      :: x -> IO ()
```

Implementing this for strict ByteStrings is just a matter of calling the corresponding functions from the ByteString module. With one small annoyance: the HTTP parsing functions expect `readLine` to return the trailing newline, which `hGetLine` does not include, so we have to append it manually, which in turn is an O(n) operation.
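The newline fix looks roughly like this (`readLineNL` is a made-up name for illustration, not the actual implementation):

```haskell
import qualified Data.ByteString.Char8 as S
import System.IO (Handle)

-- Read a line and restore the trailing '\n' that S.hGetLine strips.
-- snoc has to copy the whole string to append one byte: O(n).
readLineNL :: Handle -> IO S.ByteString
readLineNL h = do
  l <- S.hGetLine h
  return (l `S.snoc` '\n')
```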
For simplicity, we also didn't convert the header parsing and writing functions to use ByteStrings, but instead inserted the appropriate calls to `pack` and `unpack`. This could become a performance bottleneck if we have many small HTTP requests. OTOH, we might soon have a Parsec version that works on ByteStrings.
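The bridging looks roughly like this. `parseHeaderLine` is a made-up stand-in for the real String-based parser, kept only to show where the conversions sit:

```haskell
import qualified Data.ByteString.Char8 as S

-- Made-up stand-in for the existing String-based header parsing.
parseHeaderLine :: String -> (String, String)
parseHeaderLine s = let (name, rest) = break (== ':') s
                    in (name, drop 2 rest)

-- Bridge: unpack the ByteString line before parsing, an O(n)
-- conversion per header; writing headers out uses S.pack the same way.
parseHeaderBS :: S.ByteString -> (String, String)
parseHeaderBS = parseHeaderLine . S.unpack
```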
As can be seen from the above benchmarks, using strict ByteStrings still forces us to load a packet completely into memory before we can start using it, which may result in unnecessarily high memory usage. The obvious solution to this problem is to use lazy ByteStrings.
For lazy ByteStrings things work a bit differently. Instead of calling `hGet` and `hGetLine` inside the stream API, we call `hGetContents` once when we open the connection. This gives us a lazy ByteString, which we store in the connection object; we then use regular list functions on that string to implement the required API.
```haskell
openTCPPort uri port =
    do { s <- socket AF_INET Stream 6
       -- [...]
       ; h <- socketToHandle s ReadWriteMode
       ; bs <- BS.hGetContents h   -- get the lazy ByteString
       ; bsr <- newIORef bs        -- and store it as an IORef
       ; v <- newIORef (MkConn s a h bsr uri)
       ; return (ConnRef v)
       }

readBlock c n =
    readIORef (getRef c) >>= \conn -> case conn of
      ConnClosed -> return (Left ErrorClosed)
      MkConn sock addr h bsr host ->
        do { bs <- readIORef bsr
           ; let (bl,bs') = BS.splitAt (fromIntegral n) bs
           ; writeIORef bsr bs'
           ; return $ Right bl
           }

readLine c =
    readIORef (getRef c) >>= \conn -> case conn of
      ConnClosed -> return (Left ErrorClosed)
      MkConn sock addr h bsr host ->
        do { bs <- readIORef bsr
           ; let (l,bs')   = BS.span (/='\n') bs
           ; let (nl,bs'') = BS.splitAt 1 bs'
           ; writeIORef bsr bs''
           ; return (Right (BS.append l nl))  -- add '\n'
           }
        `Prelude.catch` \e -> [...]
```

There are two main problems with this implementation, though:
- ByteStrings currently only work on handles, not on sockets. Thus we have to turn sockets into handles using `socketToHandle`, which, according to the source code linked from the Haddock documentation, will fail if we're in a multithreaded environment (search for "PARALLEL_HASKELL" in Network.Socket's source).
- Furthermore, after converting a socket to a handle we should no longer use the socket. So we can't change any settings of the socket, and can only close it by calling `hClose` on the handle. HTTP allows the user to specify whether a socket should be closed after the response has been received. This is a bit more tricky when we use lazy ByteStrings, since our request function returns immediately with a lazy ByteString as its result, but no data has been read yet (except, maybe, one block). We thus must not close the socket right away, but only after all its contents have been read. So we must rely on `hGetContents` to close our handle (and thus the socket) -- which it does not! From recent #haskell comments this seems to be a bug. In any case, we'd want to be able to specify the behavior, as we might just as well want to keep the socket open.
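One possible workaround, sketched under the assumption that we keep the handle around ourselves: when the user asks for the connection to be closed, force the remainder of the lazy string (so we don't cut the peer off mid-response) and then close the handle explicitly. `closeLazy` is a hypothetical name, not part of Network.HTTP:

```haskell
import qualified Data.ByteString.Lazy.Char8 as L
import Control.Exception (evaluate)
import System.IO (Handle, hClose)

-- Sketch: drain whatever is left of the lazy string, then close the
-- handle ourselves instead of hoping hGetContents will do it.
closeLazy :: Handle -> L.ByteString -> IO ()
closeLazy h rest = do
  _ <- evaluate (L.length rest)  -- forces reading the remainder
  hClose h
```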
You can get the lazy version with

```
darcs get http://www.dtek.chalmers.se/~tox/darcs/http
```

and the strict version with

```
darcs get http://www.dtek.chalmers.se/~tox/darcs/http-strict
```
If you're interested you can take a look at our project page.