I don't think that you understand! - Firefox 3 Vulnerable by Design
I was going through the latest entries in my feed reader when I stumbled upon Mozilla Aims At Cross-Site Scripting With FF3. "Wow, this is interesting." So I clicked on the link and started reading. The more I read, the more I realized it was a big screw-up from the start.
Mozilla is aiming to put an end to XSS attacks in its upcoming Firefox 3 browser. The Alpha 7 development release includes support for a new W3C working draft specification that is intended to secure XML over HTTP requests (often referred to as XHR), which are often the culprit when it comes to XSS attacks. XHR is the backbone of Web 2.0, enabling a more dynamic web experience with remote data.
"Uh? What is that? How is that going to prevent XSS." But wait, it is getting even more interesting.
"Cross site XMLHttpRequest will enable web authors to more easily and safely create Web mashups," Mike Schroepfer, Mozilla's vice president of engineering, told internetnews.com.
A typical XSS attack vector is one in which a malicious Web site reads the credentials from another that a user has visited. The new specification could well serve to limit that type of attack though it is still incumbent upon Web developers to be careful with their trusted data.
First of all, this technology is not going to prevent XSS. This is guaranteed. Second, it may only increase the attack surface, since developers will abuse this technology just as they do Adobe Flash's crossdomain.xml. And finally, the proposed W3C specification is insecure from the start. Let's see why this is the case.
The specification describes a mechanism where browsers can provide cross-domain communication (something that is currently restricted by the same-origin policy) via the almighty JavaScript XMLHttpRequest object. Access can be granted to external scripts in either of the following ways:
Content-Access-Control header
The idea is that the developer provides an additional header in the response. Here is an example:
Content-Access-Control: allow <*.example.org> exclude <*.public.example.org>
Content-Access-Control: allow <webmaster.public.example.org>
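To make the allow/exclude semantics above concrete, here is a rough sketch of how a client might evaluate a host against one allow pattern and one exclude pattern. This is my own simplification, not the draft's actual matching algorithm (which handles multiple rules, ports and schemes):

```javascript
// Simplified matcher for the examples above: '*.example.org' style
// patterns match the bare domain and any of its subdomains.
function hostAllowed(host, allowPattern, excludePattern) {
    function matches(pattern, h) {
        if (pattern.indexOf('*.') === 0) {
            var base = pattern.slice(2);
            return h === base || h.slice(-(base.length + 1)) === '.' + base;
        }
        return h === pattern;
    }
    return matches(allowPattern, host) &&
           !(excludePattern && matches(excludePattern, host));
}

// Per the first header: www.example.org is allowed,
// but webmaster.public.example.org falls under the exclude pattern.
hostAllowed('www.example.org', '*.example.org', '*.public.example.org');              // true
hostAllowed('webmaster.public.example.org', '*.example.org', '*.public.example.org'); // false
```

The second header exists precisely to grant webmaster.public.example.org back its access after the broader exclusion.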
So, as long as the response contains a header that specifies that the requesting site, which hosts the script, can access the content, no domain access restrictions will be applied. The bad news for this approach is an attack vector known as CRLF Injection. If any part of the user-supplied input is used as part of the response headers, attackers can inject additional headers to grant themselves access. Here is a scenario where this attack can be applied:
Case study 1: MySpace implements a new AJAX interface for the user contact list section. The list is delivered as XML. This REST service takes a couple of parameters, one of which is used as part of the response headers. Although by default attackers cannot read the XML file due to the same-origin policy, they can now trick the browser into letting them do so via CRLF injection. The attack looks like the following:
var q = new XMLHttpRequest();
// the %0D%0A (an encoded CRLF) smuggles an extra header into the response
q.open('GET', 'http://myspace.com/path/to/contact/rest/service.xml?someparam=blab%0D%0AContent-Access-Control: allow <*>');
q.onreadystatechange = function () {
    // read the document here
};
q.send();
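To see why the %0D%0A matters, suppose (hypothetically; the endpoint and header name below are made up) that the service echoes someparam into a response header without stripping CR/LF. A sketch of such a vulnerable header builder:

```javascript
// Hypothetical vulnerable server-side logic: the decoded parameter is
// placed straight into the response headers. decodeURIComponent turns
// %0D%0A back into a literal CRLF, splitting one header into two.
function buildHeaders(someparam) {
    return 'X-Debug-Param: ' + decodeURIComponent(someparam) + '\r\n';
}

var headers = buildHeaders('blab%0D%0AContent-Access-Control: allow <*>');
// headers now contains a second, attacker-supplied line:
//   X-Debug-Param: blab
//   Content-Access-Control: allow <*>
```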
Oops! This is how we trick the browser into believing that the above site grants us full access to the user's private contact list. But wait, this is not all. I think that the W3C forgot about the infamous TRACE and TRACK methods and the vulnerabilities associated with them. Cross-site Tracing attacks are considered somewhat theoretical because there is no real scenario in which attackers can take advantage of them. One way to exploit XST is to have access to the target content via XSS, but if you have XSS then what's the point? However, if the new spec is implemented, we have a whole new attack vector to worry about. So, we are not really fixing the XSS problem; we are in fact contributing to it. Here is a demonstration Cross-site Tracing attack, against MySpace again.
var q = new XMLHttpRequest();
q.open('TRACE', 'http://myspace.com/path/to/contact/rest/service.xml');
q.setRequestHeader('Content-Access-Control', 'allow <*>'); // we say to the server to echo back this header
q.onreadystatechange = function () {
// read the document here
};
q.send();
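Why does TRACE help here? Per the HTTP specification, a TRACE response simply echoes the request message back to the client, so any header the attacker attached to the request reappears in the response, where a naive client would then honour it. A toy model of that echo behaviour:

```javascript
// Toy model of a TRACE endpoint: the response body is the request itself,
// so attacker-chosen request headers come back as part of the response.
function traceEcho(requestLines) {
    return requestLines.join('\r\n');
}

var body = traceEcho([
    'TRACE /path/to/contact/rest/service.xml HTTP/1.1',
    'Host: myspace.com',
    'Content-Access-Control: allow <*>'
]);
// body now contains the attacker-chosen Content-Access-Control header,
// served back from the target's own origin
```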
That was too easy. I hope that FF3 prevents the XMLHttpRequest object from setting the "Content-Access-Control" header, but then I guess we can use Flash or Java to do the same, or at least somehow circumvent Firefox's header restrictions. I don't know.
And finally, I would like you to pay attention to the fact that the browser performs the access control check after the request has been delivered. "Uh?" Haven't you learned? CSRF!!! This means that we can now make arbitrary requests to any resource with surgical precision. Port scanning from JavaScript will become as stable as it can get. "Why?" you may ask. Here is a demo:
try {
var q = new XMLHttpRequest();
q.open('GET', 'http://<some host>:<port of interest>');
q.onreadystatechange = function () {
if (q.readyState == 3) {
// port is open
}
};
q.send();
} catch(e) {}
This port scanning method does not work today, but it will if the W3C standard is implemented. Under the current browser security model, the above code will crash and burn at the q.send(); step: the browser won't fire the request unless the origin matches the current one. With the new spec in place, however, the q.send(); step will fire. Then, while loading the document, the onreadystatechange event callback will be called several times for states 0 (uninitialized), 1 (open), 2 (sent) and 3 (receiving). At stage 4 (loaded), the request will fail with a security exception. However, we've successfully passed stage 3 (receiving), which has confirmed that the remote resource is present. Here is a simple script that can be used to port scan with the new W3C spec. It should be super accurate:
function checkPort(host, port, callback) {
    try {
        var q = new XMLHttpRequest();
        q.open('GET', host + ':' + port);
        q.onreadystatechange = function () {
            if (q.readyState == 3) {
                // reaching the receiving state confirms something answered
                callback(host, port, 'open');
            }
        };
        q.send();
    } catch (e) {
        // check the exception type here to filter out unrelated errors
        callback(host, port, 'closed');
    }
}

for (var i = 0; i < 1024; i++) {
    checkPort('target.com', i, function (host, port, status) {
        console.log(host, port, status); // do something with the result
    });
}
The access-control processing instruction
Ok. Bad news. But check this out. W3C standard suggests that we can embed the access control mechanism into the XML document itself. Here is an example:
"*"
access-control allow=<list>
<email>[email protected]</email>
</list>
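For illustration, here is a rough sketch (my own, not Firefox's actual parser) of how a client might pull the allow value out of such a processing instruction:

```javascript
// Naive extraction of the allow attribute from an <?access-control?>
// processing instruction; a real implementation would use the XML parser
// rather than a regular expression.
function getAccessControlAllow(xml) {
    var m = xml.match(/<\?access-control\s+allow="([^"]*)"\s*\?>/);
    return m ? m[1] : null;
}

var doc = '<?access-control allow="*"?>\n<list><email>[email protected]</email></list>';
// getAccessControlAllow(doc) returns '*', i.e. anyone may read the document
```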
This cross-domain access control mechanism is also subject to the TRACK/TRACE and CSRF (port scanning and state detection) vulnerabilities. Luckily, it is not vulnerable to CRLF Injection. However, in case the internal Firefox or IE XML parsing engine is vulnerable to some buffer overflow, we will be screwed big time. But this is another story; it requires more research and, of course, the presence of an actual software vulnerability. Keep in mind that I am just speculating here.
In conclusion
For God's sake, do not implement this standard. Can't you see? It will open a can of worms (literally). And please, don't say that this specification will prevent XSS. It doesn't! I see how the W3C spec will enable developers to go further and do even more exciting online stuff, but is it really worth it? You tell me, cuz I don't know what the heck you have been thinking.
WARNING: None of the above attacks have been verified. The conclusions about possible vulnerabilities within the specification have been drawn by simply looking at the W3C working draft. However, given the fact that Firefox follows specifications to an extent no other browser vendor does, there is a high chance that the vulnerabilities mentioned above may work very soon. Thank you.
Archived Comments
"Big deal," you say. True, but let's say that the vulnerable SOAP server is inside the corporate Intranet. OK, now it becomes interesting. This means that JavaScript will be able to pull data without any restrictions. All it takes is for the user to visit a resource that is slightly malicious. Now, this is what I call a sneaky break-in. Again, I have no idea what's going on, but the spec does not sound good to me. I don't like it, and I am almost certain that there will be some serious implications for browser vendors after it is implemented.
Due to market pressures, it is inevitable. As for crossdomain.xml, well... I don't think that the idea is good either. I posted a little bit more about it over here and here.
Due to the same-origin policy, JavaScript can access only the current origin. Even if you implement the crossdomain.xml file, JavaScript will again only be able to access the current origin. Why? Compatibility issues. We cannot move to the new technology overnight. With or without crossdomain.xml, JSON or JavaScript remoting, if you like, will still work. The only thing that will change is an increased attack surface due to the trust relationship between apps. Let me explain.
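The "JSON or JavaScript remoting will still work" point deserves a concrete sketch. Data already crosses origins today when the remote server wraps it in a callback and the page loads it as a script tag (the endpoint and names below are made up):

```javascript
// JSON-P style remoting: no XMLHttpRequest and no same-origin check.
var result;
function handleData(data) {
    result = data.user; // the page receives whatever B.com chose to expose
}

// In a real page this would be:
//   var s = document.createElement('script');
//   s.src = 'http://B.com/contacts?callback=handleData';
//   document.body.appendChild(s);
// B.com would respond with the line below, which the browser just executes:
eval('handleData({"user": "joe"})');
// result is now "joe"
```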
Let's say that we have an app on A.com and another one on B.com. B.com says that A.com can access its data. Effectively, this means that if I can get XSS on A.com, I will be able to read the data on that domain, including the data on B.com, due to the trust relationship. Today this is not possible: I need two XSS vulns rather than one.
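The arithmetic of that trust relationship can be modelled in a few lines (the sites are made-up examples):

```javascript
// If B.com's policy allows A.com, one XSS foothold on A.com also exposes
// B.com's data: the attacker's reach is the compromised origin itself
// plus every origin that trusts it.
var policies = { 'B.com': ['A.com'] }; // B.com serves: allow <A.com>

function originsReadableFrom(compromised) {
    var readable = [compromised]; // XSS always reads its own origin
    for (var site in policies) {
        if (policies[site].indexOf(compromised) !== -1) {
            readable.push(site); // the trust relationship extends the reach
        }
    }
    return readable;
}
// originsReadableFrom('A.com') yields two origins for a single XSS
```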
IMHO, crossdomain.xml sounds a lot better although it is a bit limiting. On the other hand, the W3C approach is very flexible but very insecure as well.
UPDATE
While writing my previous comment I thought of another problem that may arise with the W3C approach to cross-domain communication. There might be cases where attackers can steal the user's session identifier!!! Let's say that Joe visits joesemail.com and logs in. The browser remembers his session cookie for that site. Then Joe visits evil.com. This site knows that joesemail.com has a resource that can be accessed via the W3C cross-domain security policies and is available for everyone to use. However, evil.com will try to trigger the cookie to be reset or sent back to the client. Then evil.com can read it. Or attackers can simply use TRACE to make the server echo back what has been sent and access it via responseText if possible. Again, these are pure speculations. Also, as is the case with crossdomain.xml, if site A.com and site B.com are in a trust relationship, then having XSS on one of them will lead to XSS on the other.
service.xml?someparam=blab%0B%0AContent-Acce... Shouldn't it be %0D%0A?
"they are violating the same origin policy" He is right. Ignore all the specification problems that we've discussed so far. We don't know whether they are going to be present in Firefox's implementation. Let's concentrate on the fact that attackers will be able to obtain sensitive information from multiple sites by compromising only one of them. The trust relationship that will be built on top of the web will be used in the most undesired ways. Let's say that Yahoo wants to enable all of their services to communicate with each other, but only for their own domains. This is cool - sort of secure. However, if the attacker manages to get only one XSS on any of the trusted domains, they can effectively get interesting info from all the others. To me, this is like going back to 1990 - everything is broken again.
So u guys are not explicitly preventing TRACE, which potentially, again I repeat, potentially can lead to some problems. Moreover, logically, ready state 3 should fire no matter the security restrictions. Am I wrong?
"A conforming user agent must support some version of the HTTP protocol. It should support any HTTP method that matches the Method production and must at least support the following methods:"
- GET
- POST
- HEAD
- PUT
- DELETE
- OPTIONS
First of all
You perform the access control checks after the request has completed. This is insane! You are saying that CSRF attacks have been known for ages and that you are not really contributing to the greater evilness of the Web. I must disagree. CSRF attacks via forms (POST and GET), images (GET) and links (GET) cannot carry additional headers. They do not have fine-grained control over the data that is submitted. Therefore, your method makes the whole situation more insecure. I highly recommend reading Wade's excellent paper on Inter-Protocol Exploitation for more ideas on how your approach can be abused.