I recently found an XSS on the mobile version of Flickr (http://m.flickr.com). Due to the way the bug is triggered, I thought it deserved a write-up.
Whilst browsing the site, you’ll notice that pages are loaded via AJAX with the path stored in the URL fragment (not as common these days now that
pushState is available).
When the page is loaded, a function, q() (seen below), is called, which checks the value of location.hash and passes it on.
In order to load pages from the current domain, it checks for a leading slash. If this isn't present, it prepends one before calling the next function.
This function then runs a regex against the URL (line 160) to ensure that it'll only load links from
m.flickr.com. If this check fails, and the URL starts with a double slash (relative protocol link), it prepends it with
http://m.flickr.com. Pretty solid check, right?
In case you didn’t notice, the first regex isn’t anchored to the start of the string. This means we can bypass it, provided our own URL contains m.flickr.com somewhere within it.
We can get our own external page loaded by passing in a suitably crafted URL.
The code will check for a leading slash (we have two :)), which it’ll pass, then checks for the domain, which will also pass, then load it via AJAX.
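The flawed check can be sketched like this (a minimal reconstruction in Python; the regexes and the attacker hostname are assumptions for illustration, not Flickr's actual code):

```python
import re

# Flawed: matches the trusted domain anywhere in the string.
UNANCHORED = re.compile(r"m\.flickr\.com")
# Fixed: anchored to the start, allowing only a real m.flickr.com URL.
ANCHORED = re.compile(r"^(https?:)?//m\.flickr\.com(/|$)")

def is_allowed(url, pattern):
    """Return True if the URL passes the domain allow-list check."""
    return bool(pattern.search(url))

# A protocol-relative URL pointing at an attacker host can still contain
# the trusted domain as a path segment, so the unanchored regex passes it.
payload = "//attacker.example/m.flickr.com"
print(is_allowed(payload, UNANCHORED))  # True  -> bypass
print(is_allowed(payload, ANCHORED))    # False -> blocked
```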
Since we now have CORS in modern browsers, the browser will send an initial OPTIONS request to the page (to check that it’ll allow the content to be loaded), then the real request.
All we need to do is specify a couple of headers in the response from our server, which leads to our payload being executed.
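The browser-side decision can be modelled roughly like this (a simplified sketch; real CORS handling has more cases, and the header values here are illustrative):

```python
# Simplified model of the check a browser performs on the CORS response
# headers before exposing a cross-origin AJAX response to the caller.
def cors_allows(response_headers, requesting_origin):
    allow = response_headers.get("Access-Control-Allow-Origin", "")
    return allow == "*" or allow == requesting_origin

# Headers the attacker's server could send so m.flickr.com's AJAX call
# is permitted to read the external page (values are illustrative).
attacker_headers = {
    "Access-Control-Allow-Origin": "http://m.flickr.com",
    "Access-Control-Allow-Headers": "X-Requested-With",
}
print(cors_allows(attacker_headers, "http://m.flickr.com"))  # True
print(cors_allows({}, "http://m.flickr.com"))                # False
```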
This issue is now fixed by anchoring the regex to the start of the string, and also running another regex to check if it starts with a double slash.
tl;dr: ISPs, please reduce your cookie scope.
Everyone now knows that hosting user generated content on a sub-domain is bad. Attacks have been demonstrated on sites such as GitHub, and it’s why Google uses googleusercontent.com.
But what if you’re an ISP? You might not host any user content; however, you probably assign customers an IP address which has reverse DNS set, typically to a hostname derived from the IP.
This isn’t really an issue on its own. The issue arises when the hostname assigned is a sub-domain of your own site. Combine this with cookies with a loose domain scope (fairly common practice) and forward DNS (again, fairly common), and the result can be cookie stealing, and therefore account hijacking.
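The cookie-scope problem can be sketched with a simplified version of the domain-matching rule browsers apply (a rough model, not a full RFC 6265 implementation):

```python
# Simplified domain-match: a cookie set with Domain=.example.com is sent
# to every sub-domain, including hostnames assigned to ISP customers.
def domain_match(host, cookie_domain):
    cookie_domain = cookie_domain.lstrip(".")
    return host == cookie_domain or host.endswith("." + cookie_domain)

session_cookie_domain = ".fin1te-dsl.com"  # loosely scoped "super cookie"
print(domain_match("fin1te-dsl.com", session_cookie_domain))   # True
# The cookie also leaks to the customer-controlled hostname:
print(domain_match("152-151-64-212.cust.dsl.fin1te-dsl.com",
                   session_cookie_domain))                     # True
```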
To pull this off, an attacker either needs to be a customer of the ISP they’re targeting, or have access to a machine of a customer (pretty easy with the use of botnets). A web server is then hosted on the connection, and referenced by the hostname assigned (as opposed to the IP).
Rather than showing a real-world example (I’d rather keep the companies’ names private), I’ve set up a proof-of-concept.
We have a fake ISP hosted on fin1te-dsl.com, which mimics an ISP’s portal. Registering an account and logging in generates a session cookie (try it out).
We also have a site (152-151-64-212.cust.dsl.fin1te-dsl.com) which in real life would be hosted on a user’s own connection. A page, 152-151-64-212.cust.dsl.fin1te-dsl.com/debug.php, is hosted to display the cookies back for debug purposes.
To capture cookies, we serve a page with a content type of image/jpeg and embed the image on a page.
And the cookies show up in the logs.
We just need to set our own cookie to this value and we’ve successfully hijacked their session.
Out of the four major UK ISPs I tested, two were vulnerable (now patched). If you assume an equal market share (based on 2012 estimates), that’s approximately 10.5 million users who could potentially be targeted. Of course, they have to be logged in - but you can always embed the cookie stealer as an image on a support forum, for example.
We have three mitigation options. The first is to remove super cookies and restrict the scope to a single domain. This may be impractical if you separate content onto different sub-domains. The second is to disable forward DNS for customers. And the third is to change the hostname assigned to one which isn’t a sub-domain.
In addition, techniques such as pinning a session to an IP address will help to an extent. Unless you store a CSRF token in a cookie, in which case, we can just CSRF the user.
If you want to browse the source code of the proof-of-concept, it’s available on GitHub.
Since I didn’t have the time to test every single ISP in the world (just the UK ones) for the three requirements that make them vulnerable, I decided to send an email to the security@ addresses at the top 25 ISPs - 20 of these bounced, and I received no reply from the other 5.
The two UK ones I originally contacted patched promptly and gave good updates, so kudos to you two.
Back in April I found three CSRF issues on Instagram, stemming from their Android/iOS App API (which is slightly different from their public API - it’s hosted on their main domain and doesn’t need an access token).
These issues were present in the following end-points:
accounts/remove_profile_pic - This is used to remove the profile picture from an account
accounts/set_private - This is used to mark a profile as private
accounts/set_public - This is used to mark a profile as public
Obviously the best one out of these is accounts/set_public. With a simple GET request we can reveal anyone’s profile and access their private pictures. Pretty cool.
Facebook patched the holes pretty quickly and I was awarded a decent bounty for it.
Once patched, I checked that it was indeed fixed: issuing a GET request now returned a 405 Method Not Allowed response.
I didn’t blog about the issue and completely forgot about it until recently. I decided to have another look at the Android App to see if there were any new end-points to play around with.
Pretty much all API requests within the app call a method named setSignedBody. This generates a hash of the parameters with a secret embedded in an .so file, meaning we can’t craft our own request on-the-fly and submit it on the user’s behalf (without extracting the secret).
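A rough sketch of how a setSignedBody-style scheme works (the secret, hash choice, and output format here are invented for illustration; the real app's details differ):

```python
import hashlib
import hmac
import json

# Hypothetical secret; the real one is embedded in the app's native library.
EMBEDDED_SECRET = b"0123456789abcdef"

def signed_body(params):
    """Serialise the parameters and sign them - roughly what a
    setSignedBody-style method does before the request is sent."""
    body = json.dumps(params, sort_keys=True)
    sig = hmac.new(EMBEDDED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return sig, body

sig, body = signed_body({"media_id": "123"})
print(sig, body)
```

Without the secret, a third party (or a CSRF page) can't produce a valid signature, which is why the unsigned end-points were the interesting ones.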
However, the three end-points I submitted still didn’t use
setSignedBody (presumably because there are no parameters needed), and therefore no token is sent along. Because of this, we can submit a POST request and still perform the attack which was supposed to be fixed!
The use of setSignedBody without a CSRF token means that all end-points are vulnerable to a replay attack. You simply submit the request yourself, catch it in Burp, and replay it to the victim. Unfortunately, this is something I realised after the bug was fixed, so no screenshots are available.
So the moral here is that you should double-double-check that an issue is fixed. If I’d been more thorough in testing the fix, I would have spotted it sooner than four months later - my bad.
This is now patched by requiring all requests to have a
csrftoken parameter. Any request which is signed also requires a
_uid parameter to prevent replay attacks (unless you extract the secret…).
The original proof-of-concept now returns a 400 error.
The response body is a JSON object showing the error message.
I’ve found a few bugs on various Facebook satellite/marketing domains (ones which are part of the Facebook brand, but not necessarily hosted/developed by them, and not under the *.facebook.com domain). Most of them aren’t that serious.
This one is no exception, and I wouldn’t normally blog about it, but it’s an interesting illustration of why content types are important.
The bug is an XSS discovered on Facebook Studio. This is linked to by some Facebook marketing pages, and is used to showcase advertising campaigns on Facebook.
There is an area which allows you to submit work to the Gallery. This form conveniently has an option to scrape details from your Facebook page and fill in boxes for you (such as Company Name, Description).
This calls an AJAX end-point with your page’s URL as a parameter. The response is served with a content type of text/html, when the response is actually JSON.
When browsed to directly (it doesn’t need any CSRF tokens to be viewed, despite the
hash param), we see our script executed.
The cool thing about this bug is that whilst it’s not persistent (the payload is fetched when the page is visited), the code is not present in the request body, therefore avoiding Chrome’s XSS Auditor and IE’s XSS Filter.
Had the content type been set to
application/json, the code would have not run (until you start to consider content sniffing…).
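To illustrate, a sketch of the server-side choice (the helper and payload are hypothetical; the markup is the kind of thing the scraper might echo back):

```python
import json

def ajax_response(data, safe=True):
    """Serialise the scraped data. The content type decides whether any
    markup inside it can execute when the end-point is browsed directly:
    the same bytes are inert as application/json but live as text/html."""
    body = json.dumps(data)
    content_type = "application/json" if safe else "text/html"
    return content_type, body

scraped = {"name": "<script>alert(document.domain)</script>"}
ctype, body = ajax_response(scraped, safe=True)
print(ctype)  # application/json: the browser treats the body as data
```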
The content type is now set correctly.
15th August 2013 - Issue Reported
21st August 2013 - Acknowledgment of Report
21st August 2013 - Issue Fixed
This is a quick post about a simple bug I found on Friendship Pages on Facebook. (Note: Not nearly as cool as a full account takeover, however!)
Friendship Pages show you how two users on Facebook are connected, with posts and photos they’re both tagged in, events they’ve both attended and common friends. On these pages, you’re given the option to upload a cover photo (like you would on your profile, or an event).
The bug: we can remove the cover photo on anyone’s friendship page, from any account.
First, we need the friendship_id, which can be obtained with an AJAX call in which profile_id is one user and friend_id is another.
With the friendship_id, we make an AJAX call to /ajax/timeline/friendship_cover/remove, placing the value into the request.
Refresh the page, and it’s disappeared.
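The two calls can be sketched like so (request builders only, nothing is sent; the lookup path is a guess, since the write-up omits it, while the remove path and parameter names are those mentioned above):

```python
from urllib.parse import urlencode

def lookup_friendship(profile_id, friend_id):
    # Hypothetical lookup path - the real end-point name differs.
    return "/ajax/timeline/friendship/lookup?" + urlencode(
        {"profile_id": profile_id, "friend_id": friend_id})

def remove_cover(friendship_id):
    # Path taken from the write-up; places the id into the request.
    return "/ajax/timeline/friendship_cover/remove?" + urlencode(
        {"friendship_id": friendship_id})

print(lookup_friendship("4", "5"))
print(remove_cover("123"))
```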
Now, you can only remove your own cover.
29th August 2013 - Reported
2nd September 2013 - Acknowledgment of Report
2nd September 2013 - Issue Fixed
This post will demonstrate a simple bug which will lead to a full takeover of any Facebook account, with no user interaction. Enjoy.
Facebook gives you the option of linking your mobile number with your account. This allows you to receive updates via SMS, and also means you can login using the number rather than your email address.
The flaw lies in the
/ajax/settings/mobile/confirm_phone.php end-point. This takes various parameters, but the two main ones are
code, which is the verification code received via your mobile, and
profile_id, which is the account to link the number to.
The thing is,
profile_id is set to your account (obviously), but changing it to your target’s doesn’t trigger an error.
To exploit this bug, we first send the letter F to 32665, which is Facebook’s SMS shortcode in the UK. We receive an 8 character verification code back.
We enter this code into the activation box (located here), and modify the profile_id element inside the request.
Submitting the request returns a 200. You can see the value of
__user (which is sent with all AJAX requests) is different from the
profile_id we modified.
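The forged request can be sketched as follows (this only builds the query string; the code and IDs are made up, and the extra parameters the real form sends are omitted):

```python
from urllib.parse import urlencode

def build_confirm_request(code, profile_id, session_user):
    """Assemble the confirm_phone request with a swapped profile_id."""
    params = {
        "code": code,              # 8-character code received via SMS
        "profile_id": profile_id,  # swapped to the target's account id
        "__user": session_user,    # still the attacker's own id
    }
    return "/ajax/settings/mobile/confirm_phone.php?" + urlencode(params)

print(build_confirm_request("abcd1234", "4", "100000123456789"))
```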
Note: You may have to reauth after submitting the request, but the password required is yours, not the target’s.
An SMS is then received with confirmation.
Now we can initiate a password reset request against the user and get the code via SMS.
Another SMS is received with the reset code.
We enter this code into the form, choose a new password, and we’re done. The account is ours.
Facebook responded by no longer accepting the
profile_id parameter from the user.
23rd May 2013 - Reported
28th May 2013 - Acknowledgment of Report
28th May 2013 - Issue Fixed
The bounty assigned to this bug was $20,000, clearly demonstrating the severity of the issue.
When you create a shop on Etsy, you can upload an image to be used as a banner.
The upload form in the administration section stops you changing the shop to one you don’t control, as expected.
There is, however, an AJAX end-point which can also be used to upload these images. This doesn’t check you’re the owner on upload.
We can easily upload any image we want onto any shop we want. This could be used to damage a business’s reputation or, as happened on the underground marketplace Silk Road, to upload a banner prompting prospective customers to send orders and payments to an email address we control.
Etsy fixed this in a simple way - they now check you’re the owner on upload.
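The fix amounts to an ownership check on the AJAX end-point, something like this sketch (the data model and function names are invented for illustration):

```python
# Hypothetical mapping of shop -> owner, standing in for the real database.
SHOP_OWNERS = {"crafty-shop": "alice"}

def upload_banner(user, shop, image):
    """Reject the upload unless the requesting user owns the shop -
    the check the admin form performed but the AJAX end-point lacked."""
    if SHOP_OWNERS.get(shop) != user:
        raise PermissionError("not the shop owner")
    return f"banner for {shop} updated"

print(upload_banner("alice", "crafty-shop", b"..."))
# upload_banner("mallory", "crafty-shop", b"...") would now raise
```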
4th April 2013 - Issue Reported
4th April 2013 - Acknowledgment of Report
8th April 2013 - Issue Fixed
On the Facebook App Center, we have links to numerous different apps. Some have a “Go to App” button, for apps embedded within Facebook, and others have a “Visit Website” button, for sites which connect with Facebook. The “Visit Website” button submits a POST request to
ui_server.php, which generates an access token and redirects you to the site.
The form is interesting in that it doesn’t present a permissions dialog (like you would have when requesting permissions via
/dialog/oauth). This is presumably because the request has to be initiated by the user (due to the presence of a CSRF token), and because the permissions required are listed underneath the button.
During testing, I noticed that omitting the CSRF token and the orig/new_perms parameters generates a 500 error and doesn’t redirect you. This is expected behaviour.
However, in the background, an access token is generated. Refreshing the app’s page in the App Center and hovering over “Visit Website” shows that it is now a link to the site, with your access token included.
Using this bug, we can double-submit the permissions form to gain a valid access token. The first request is discarded - the token is generated in the background. The second request is sent after a specific interval (in my PoC I’ve chosen five seconds to be safe, but a wait of one second would suffice), which picks up the already generated token and redirects the user.
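The double-submit flow can be modelled like this (a toy reconstruction; fake_submit stands in for the real POSTs to ui_server.php and merely mimics the observed behaviour):

```python
import time

def double_submit(submit_permissions_form, delay=5):
    """Submit the token-less form twice: the first attempt 500s but still
    generates a token server-side; after a short wait, the second attempt
    picks the token up and redirects."""
    submit_permissions_form()          # 500 error, token generated anyway
    time.sleep(delay)                  # give the token time to persist
    return submit_permissions_form()   # redirect URL with the token

# Toy stand-in modelling the observed server behaviour (not Facebook's code):
state = {"token": None}
def fake_submit():
    if state["token"] is None:
        state["token"] = "ACCESS_TOKEN"  # generated despite the 500 error
        return None                      # the request itself appears to fail
    return "https://app.example/?access_token=" + state["token"]

print(double_submit(fake_submit, delay=0))
# -> https://app.example/?access_token=ACCESS_TOKEN
```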
The awesome thing about this bug is that we don’t need to piggy-back off an already existing app’s permissions like in some of the other bugs, we can specify whatever ones we want (including any of the extended permissions).
When the user is sent to the final page, a snippet of their FB inbox is displayed, sweet! In a real-world example, the inbox would obviously not be presented, but logged.
Facebook has fixed this issue by redirecting any calls to uiserver.php which lack the correct tokens.
4th April 2013 - Issue Reported
8th April 2013 - Acknowledgment of Report
9th April 2013 - Issue Fixed
PayPal have sent an email to researchers who have participated in their Bug Bounty Programme listing sites which are no longer in scope.
In this email, however, they mentioned sites which are in scope that I didn’t know about before. These are listed below.
There have been rumours that they’ve de-scoped the sites because of the large number of bugs being reported, but they mention that the reason is that the sites are being decommissioned, and that they’re marketing sites not hosted/maintained by PayPal themselves.
I’ve finally enabled SSL on my website, allowing me to securely transmit my GPG public key.
Please verify the fingerprint with the one listed on the homepage.
$ curl https://fin1te.net/fbdc7606.asc | gpg --with-fingerprint
Then import it into your keyring.
$ curl https://fin1te.net/fbdc7606.asc | gpg --import