golang Transfer-Encoding: chunked
- MDN docs on the Transfer-Encoding header
- Chunked Transfer Coding in RFC7230 (HTTP1/1)
- ResponseWriter.Write godoc
I recently ran into a fun issue with a colleague where HTTP POSTs to one of their golang services were being dropped, with the response never read by an akka-http client. It appeared to occur only on long-running requests, so the initial search was for some kind of timeout on the client or server.
After adding a tonne more debug logs they found that akka-http was dropping the request with:
Received illegal response: HTTP chunk size exceeds the configured limit of 1048576 bytes
This seemed odd, as curl and Postman had no issue downloading the response. Surely we couldn't be sending a chunk larger than a megabyte? We fired up Wireshark and inspected the TCP dump. The response from the go server appeared reasonable:
Transfer-Encoding: chunked
235230
{... (data follows)
The number 235230 is the count of bytes in the chunk.
But akka-http was closing the connection almost immediately after receiving this count, which seemed to be within the configured limit.
On closer inspection of the spec we realized that each chunk begins with a hexadecimal count of bytes. According to the RFC:
The chunk-size field is a string of hex digits indicating the size of
the chunk-data in octets.
I think that's the only time I've run into hexadecimal anywhere in the HTTP spec. So as it turns out our golang server was indeed sending one massive chunk of 2,314,800 bytes (~2.2 MB) in the response, way above the default akka limit.
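The arithmetic is easy to check: interpreting 235230 as hex digits, as the RFC requires, gives the real byte count. A minimal sketch (the chunkSize helper is my own name for it):

```go
package main

import (
	"fmt"
	"strconv"
)

// chunkSize parses an HTTP chunk-size line, which is a string of
// hex digits per RFC 7230, into a byte count.
func chunkSize(hexDigits string) uint64 {
	n, err := strconv.ParseUint(hexDigits, 16, 64)
	if err != nil {
		panic(err)
	}
	return n
}

func main() {
	// 0x235230 = 2314800 bytes: well over akka-http's 1048576 default.
	fmt.Println(chunkSize("235230"))
}
```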
So why does golang send a massive chunk? If we look into the source we find the following:
// If the handler didn't declare a Content-Length up front, we either
// go into chunking mode or, if the handler finishes running before
// the chunking buffer size, we compute a Content-Length and send that
// in the header instead.
(server.go#L1512 at time of writing)
Our takeaway from diving into the code is that if a handler makes a call to .Write(data), in most cases this will create a chunk with the same size as the data being written.
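This behaviour is easy to reproduce with net/http/httptest: a handler that writes a large payload in a single call, without declaring a Content-Length, comes back chunked. A sketch (fetchBigResponse is my own name; the ~2.2 MB figure just mirrors our response):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// fetchBigResponse starts a test server whose handler writes ~2.2 MB
// in one .Write call, without setting Content-Length, then fetches
// the response and drains the body.
func fetchBigResponse() *http.Response {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write(bytes.Repeat([]byte("a"), 2314800))
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
	return resp
}

func main() {
	resp := fetchBigResponse()
	// The write exceeded the server's internal buffer before the handler
	// returned, so Go fell back to chunked transfer encoding and never
	// computed a Content-Length.
	fmt.Println(resp.TransferEncoding) // [chunked]
	fmt.Println(resp.ContentLength)    // -1: length was never declared
}
```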
It might seem odd for a server to send just one chunk, but really TCP takes care of splitting up the large response anyway. So using chunked transfer encoding allows the golang server to start writing immediately without any knowledge of what other calls to Write might occur before the request handler finishes.
We could force smaller chunks by iterating over our data and making multiple calls to .Write, but given that every other HTTP client we've encountered has no issue with large chunks, the most pragmatic thing to do here is just to change the weird akka default, akka.http.parsing.max-chunk-size, or one of its variants.
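On the client side that's a one-line configuration change; a sketch of what it might look like in application.conf, where 8m is just an illustrative value (the default is 1m):

```hocon
# application.conf: raise akka-http's maximum accepted chunk size.
akka.http.parsing.max-chunk-size = 8m
```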
And now our requests work again.