Occasionally I’ll get an email from someone interested in getting involved in bug bounties. Whilst some people are quite protective about giving out information - nervous that having more people participating leaves fewer bugs - I believe that the more people involved the better. Getting paid for issues and gaining credibility is great, but the end goal should be to improve web security as a whole.
I thought it’d be useful to compile some of the information I give out (as opposed to typing it out each time), and some tips for people starting out. If you have anything to add, shoot me a message and I’ll update this page.
For anyone already working in web application security this is probably a bit too beginner for you.
Bug bounties, also known as responsible disclosure programmes, are set up by companies to encourage people to report potential issues discovered on their sites. Some companies choose to reward a researcher with money, swag, or an entry in their hall-of-fame. If you’re interested in web application security then they’re a great way of honing your skills, with the potential of earning some money and/or credibility at the same time.
There is one book that everyone recommends, and rightly so - The Web Application Hacker’s Handbook, which covers the majority of common web bugs, plus it uses Burp Suite in the examples.
The OWASP Top Ten has a high-level overview of the most common web application bugs.
Quite a few people, me included, blog about issues they find. This is a great insight into the type of bugs that exist on sites, plus they’re always an interesting read. These are the ones I can remember off the top of my head.
I’ll admit, I don’t use many tools - a lot of the time I’ll write a quick PHP/Python script. I should use more; it’d make my sessions more efficient. But these are the core ones I use all the time.
One thing to note is that automated scanners (such as Acunetix or Nikto) generate a lot of noise. Most programmes forbid the use of them for this reason. Plus you’re highly unlikely to find something with such a scanner that no one else has found.
Deliberately vulnerable applications/systems are a fun way of testing out some techniques. You might find pages outputting user data without escaping (leading to XSS), or code which executes SQL queries in an insecure manner (leading to SQL Injection).
If you start looking for bugs on the above sites you might be looking for a good week or two without finding anything, since they’ve been around for a while. One option is to find a smaller site or a new bounty, which probably won’t have had as many people looking at it.
A good tip is to sign up for one of the many sites which host bounties on behalf of other companies. This lets you submit reports in a common format and track their progress - easier than emailing for updates.
When submitting a bug, you need to realise that different companies have different time frames for triaging and patching issues. Combined with the volume of reports, you may have to wait a few days/a week for a response. If your first language isn’t English, then it might be wise to submit a short video explaining the issue.
Don’t be afraid to send in a report, but you’ll have to understand that the severity and impact that you think the bug has could be very different to how the security team views it. As time goes on, you’ll get a feel for what is an issue and what isn’t.
Facebook has compiled a list of the most common false-positives reported.
It’s been a week since I launched the SafeCurl “Capture the Bitcoins” contest, which has been a fun, but humbling event.
Whilst I work as a Security Engineer, and submitted my first bug bounty entry two years ago, I come from a development background. I’ve been writing PHP for coming up to nine years now, though nothing much in production for the past year and a half.
I wanted to take a break from searching for bugs, so decided to write some PHP (the language I surprisingly love). SafeCurl seemed like a great starting point - a useful package, not too large, and still involving web app security.
Once written, I launched the bounty - primarily to give it a thorough test, and partly because I wanted to see what it would be like receiving bug reports rather than submitting them.
In my head, I’d assumed that it would take ages for someone to bypass my code (if it happened at all). In reality, it took 2 hours. The reason being that I had rushed the project, excited to get it released as soon as possible. Further investigation should have been done at the start, which would have stopped such a silly bypass being possible.
Initially, there was going to be one 0.25 BTC bounty. However, if the prize was won before most people had seen the site, there’d be less incentive to keep looking. So I re-filled the wallet, and assumed this time no one would find a bypass.
I paid out another 0.1 BTC to each of the two people who suggested a DNS rebinding attack may be possible. Whilst this was just a theory, I created a hot-fix to pin DNS in cURL.
Then came three more 0.25 BTC bounties, caused by inconsistencies between PHP’s URL parsing and the curl_exec function. After the first two were paid out, I declared the bounty over. However, the third was so similar to the previous two that it was only fair to pay out (from my personal wallet).
In total I paid out 0.95 BTC more than I’d planned, and I don’t have infinite Bitcoins, but it was worth the money.
As I mentioned above, this was a stupid mistake. In the code, I’d blacklisted certain private ranges, but 0.0.0.0 can also be used to refer to localhost. The solution was pretty simple - blacklist any reserved ranges.
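The lesson generalises beyond PHP. As a sketch (in Python rather than SafeCurl’s PHP, with a helper name of my own choosing), a check can lean on the standard library’s knowledge of every special-use range instead of a hand-written blacklist:

```python
import ipaddress

def is_blocked(ip_string):
    """True if the IP falls in any reserved/special-use range.

    A hand-rolled blacklist of private ranges missed 0.0.0.0; the
    stdlib's classification covers it (and the rest of the IANA
    special-use registry) for us.
    """
    ip = ipaddress.ip_address(ip_string)
    return (ip.is_private or ip.is_loopback or ip.is_reserved
            or ip.is_link_local or ip.is_multicast or ip.is_unspecified)
```

With this, 0.0.0.0, 127.0.0.1 and the RFC 1918 ranges are all rejected by one rule, while ordinary public addresses pass.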
Found by @zoczus.
I was made aware that my code wasn’t safe from a DNS rebinding attack. This would involve rapidly switching the A record for the domain name from a valid IP (which passes any checks) to an internal IP. Whilst this was theoretical - I’d played around with it but couldn’t get it to work - it was 1am and I didn’t want to risk it whilst I was asleep.
Two separate people raised it at the same time. Whilst I could have just paid the first, I thought it’d be fair to pay both since they came up with it independently (the Facebook attitude).
For this, the IP returned from gethostbynamel is pinned by replacing the hostname in the URL with the IP, then passing the original hostname in the HTTP “Host” header.
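A minimal sketch of the pinning idea (Python here, not SafeCurl’s PHP; the injectable resolver is purely so the logic can be exercised without the network):

```python
import socket
from urllib.parse import urlsplit, urlunsplit

def pin_dns(url, ip_is_allowed,
            resolve=lambda host: socket.gethostbyname_ex(host)[2]):
    """Resolve the host once, validate every returned IP, then rewrite
    the URL to use the first IP directly. The caller sends the original
    hostname in the Host header, so the server still routes the request
    correctly, and no second lookup (the rebinding window) ever happens."""
    parts = urlsplit(url)
    host = parts.hostname
    ips = resolve(host)
    for ip in ips:
        if not ip_is_allowed(ip):
            raise ValueError("resolved to a blocked IP: " + ip)
    netloc = parts.netloc.replace(host, ips[0], 1)
    return urlunsplit(parts._replace(netloc=netloc)), {"Host": host}
```

The key property is that the IP the check validated is the IP the request actually goes to - the attacker can flip the A record all they like afterwards.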
This was an interesting one. Whilst the btc.txt file couldn’t be accessed, it did bypass all other checks of SafeCurl, so was worthy of the bounty. The URL caused google.com to be returned (PHP sees firstname.lastname@example.org? as the password). However, when the full URL is given to curl_exec, it sees safecurl.fin1te.net as the host, and @google.com/ as the query string. Pretty cool trick.
A quick solution for this was to disable the use of credentials in the URL. This worked, until the next bypass was found.
Found by @shDaniell.
Similar to the previous one - a URL crafted so that parse_url sees validurl.com as the host, and user:email@example.com as the fragment. Like before, curl_exec handles this differently.
This was patched by using rawurlencode on the username, password and fragment to prevent the URL being parsed differently.
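The same defence can be sketched in Python (the helper name is mine, not SafeCurl’s): percent-encode the userinfo and fragment before any parser sees them, so every parser splits the URL at the same places:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def encode_userinfo(url):
    """Percent-encode the username, password and fragment. A raw '@'
    or '#' inside the credentials is what let two parsers disagree
    about which part of the URL was the host."""
    p = urlsplit(url)
    netloc = p.netloc
    if "@" in netloc:
        userinfo, hostport = netloc.rsplit("@", 1)
        if ":" in userinfo:
            user, pwd = userinfo.split(":", 1)
            userinfo = quote(user, safe="") + ":" + quote(pwd, safe="")
        else:
            userinfo = quote(userinfo, safe="")
        netloc = userinfo + "@" + hostport
    fragment = quote(p.fragment, safe="")
    return urlunsplit((p.scheme, netloc, p.path, p.query, fragment))
```

After encoding, an embedded `@` in the username becomes `%40`, so the host can only be read one way.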
Found by Marcus T.
And the last one was again very similar. I didn’t URL encode the query string, so http://google.com?user:firstname.lastname@example.org was used to bypass the check.
The path and query string are now URL encoded too, with certain characters (& = ; [ ]) left intact, else the receiving server may not parse the query string properly.
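Sketched in Python (the safe-character set comes straight from the fix above; the helper itself is illustrative):

```python
from urllib.parse import quote

# Characters left intact so the receiving server can still split the
# query string into key/value pairs.
QUERY_SAFE = "&=;[]"

def encode_path_and_query(path, query):
    """Percent-encode the path and query; a raw ':' or '@' left in the
    query is exactly what let the two URL parsers disagree."""
    return quote(path, safe="/"), quote(query, safe=QUERY_SAFE)
```

A query like `user:x@y&k=v` becomes `user%3Ax%40y&k=v` - the delimiters survive, the ambiguous characters don’t.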
Found by @iDeniSix.
The first issue, along with typos, were caused by me rushing the project. These could have been prevented by taking it a bit slower, and by doing a proper design and investigation phase before starting development.
Had I launched my code straight into production, without ~1,000,000 attempts to bypass it, the issues above would not have been fixed, and vulnerable code would have been deployed.
There is a price to pay, namely the Bitcoins I paid out, but this is nothing compared to the cost of someone using it for malicious purposes.
This is something I’ve learnt from development in “real-life”. Unfortunately I didn’t apply this to my own project (partly because it was just me working on it, partly because of Lesson #1). Unit tests do seem a bit of a chore to write sometimes, but they can catch a lot of bugs being re-introduced in the codebase. Plus having someone look over your code from a different perspective is invaluable.
This may sound like a horrible lesson, but it’s not. Having something “secure” you wrote be ripped to shreds is a really awesome thing. It makes you realise that there may be gaps in your knowledge, and you now know where they are, and how to fix them. I’m really excited to launch another for this exact reason.
SafeCurl version 2 will be released shortly. This will include real unit tests covering the code, and test cases for each of the bypasses (and any other techniques I can find). Plus, experimental IPv6 support will be added.
Another bounty will be launched at some point. Whether it’s a SafeCurl bounty, or another concept, I’ve not decided.
I will also be looking to port SafeCurl to other languages such as Java, Python, Ruby, etc. This will be more of a challenge, since my strongest skills lie with PHP. If anyone wants to help out drop me a message.
A great part of the event was looking inside the Apache access logs to see some of the attempts people were making. I’ve included statistics, if you’re curious.
Total attempts: 1,140,803
Average attempts per person: 651
Average attempts per person (excluding top 10): 20
Server-Side Request Forgery attacks involve getting a target server to perform requests on our behalf. Rather than covering some great material already published, this post will be to introduce a new PHP package designed to help prevent these sort of attacks.
To protect our scripts from being abused in this way, we simply validate any URL or file path being passed to functions which can send requests. Of course, this is easier said than done.
The first step is to validate the provided scheme (and port if specified). This is to stop requests using PHP’s extra protocols (such as phar://), which would let an attacker read files off the file system.
The second is to validate the URL itself. This is to make sure that someone isn’t requesting a blacklisted domain (such as https://jira.fin1te.net), or a private/loopback IP (such as 127.0.0.1). You should also resolve any domain names to their IP addresses, and validate these to make sure someone doesn’t use a DNS entry pointing to a disallowed IP.
Lastly, any redirects which cURL would normally handle should be caught, and the URL specified in the Location header validated using the above steps.
Putting this all together, we get SafeCurl.
SafeCurl has been designed to be a drop-in replacement for the curl_exec function in PHP. Whilst there are other functions in PHP which can be used to grab the contents of a URL, curl_exec is the most popular. In future versions, support for other functions will be added.
To use SafeCurl, simply call the SafeCurl::execute method where you’d usually call curl_exec, wrapping everything in a try/catch block.
By default, SafeCurl will only allow HTTP or HTTPS requests, to ports 80, 443 and 8080, which don’t resolve to a private/loopback IP address.
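That default policy can be sketched as follows (Python rather than SafeCurl’s PHP; the function name and injectable resolver are mine, the resolver being there purely so the check can be exercised without the network):

```python
import ipaddress
import socket
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_PORTS = {80, 443, 8080}

def url_is_safe(url, resolve=lambda host: socket.gethostbyname_ex(host)[2]):
    """Apply the default policy: HTTP(S) only, ports 80/443/8080, and no
    host resolving to a private, loopback or otherwise reserved address."""
    parts = urlsplit(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        return False
    port = parts.port or (443 if parts.scheme == "https" else 80)
    if port not in ALLOWED_PORTS:
        return False
    for ip in resolve(parts.hostname):
        addr = ipaddress.ip_address(ip)
        if (addr.is_private or addr.is_loopback or addr.is_reserved
                or addr.is_link_local or addr.is_unspecified):
            return False
    return True
```

Note that every resolved address is checked, not just the first - a hostname with one valid and one internal A record should still be rejected.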
If you manage to find a way of bypassing it completely, then please participate in the bounty.
In order to give SafeCurl a real-world test, I’ve hosted a demo site, which lets you try out the different protections.
The document root contains a Bitcoin private key, with 0.25BTC contained within. This file is only accessible from localhost, so if you do bypass it, grab the file and the Bitcoins are yours.
The source code for the site is also available, if you’re interested.
For more information see the Bounty page.
I recently found an XSS on the mobile version of Flickr (http://m.flickr.com). Due to the way the bug is triggered, I thought it deserved a write-up.
Whilst browsing the site, you’ll notice that pages are loaded via AJAX, with the path stored in the URL fragment (not as common these days now that pushState is available).
When the page is loaded, a function, q() (seen below), is called which checks the value of location.hash. In order to load pages from the current domain, it checks for a leading slash. If this isn’t present, it prepends one before calling the next function.
This function then performs a regex on the URL (line 160) to ensure that it’ll only load links from m.flickr.com. If this check fails, and the URL starts with a double slash (a protocol-relative link), it prepends it with http://m.flickr.com. Pretty solid check, right?
In case you didn’t notice, the first regex isn’t anchored to the start of the string. This means we can bypass it, providing our own URL contains m.flickr.com.
We can get our own external page loaded by passing in a URL like so:
The code will check for a leading slash (we have two :)), which it’ll pass, then checks for the domain, which will also pass, then load it via AJAX.
Since we now have CORS in modern browsers, the browser will send a preflight OPTIONS request to the page (to ensure it’s allowed to be loaded), then the real request.
All we need to do is specify a couple of CORS headers in the response from our server, which leads to our payload being executed.
This issue is now fixed by anchoring the regex to the start of the string, and also running another regex to check if it starts with a double slash.
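The difference between the two checks is easy to demonstrate. This is a Python sketch with illustrative patterns, not Flickr’s actual JavaScript:

```python
import re

# A hash value pointing at an attacker page whose path merely contains
# the expected host string.
hash_value = "//attacker.example/m.flickr.com/payload"

# The vulnerable check: the host may appear anywhere in the string.
loose = re.compile(r"m\.flickr\.com")

# The fixed idea: anchored to the start, so nothing may precede the
# expected host (other than an optional scheme and the double slash).
strict = re.compile(r"^(https?:)?//m\.flickr\.com(/|$)")

assert loose.search(hash_value)       # unanchored check is bypassed
assert not strict.search(hash_value)  # anchored check rejects it
```

Legitimate values such as `//m.flickr.com/photos` still pass the anchored pattern.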
tl;dr: ISPs, please reduce your cookie scope.
Everyone now knows that hosting user generated content on a sub-domain is bad. Attacks have been demonstrated on sites such as GitHub, and it’s why Google uses googleusercontent.com.
But what if you’re an ISP? You might not host any user content; however, you probably assign customers an IP which has Reverse DNS set, with a hostname that encodes the customer’s IP.
This isn’t really an issue on its own. The issue is when the hostname assigned is a sub-domain of your own site. Combine this with cookies with a loose domain scope (fairly common practice) and forward DNS (again, fairly common), and the result can be cookie stealing, and therefore account hijacking.
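The domain-matching rule that makes this possible can be sketched with the proof-of-concept hostnames used below:

```python
def cookie_sent_to(cookie_domain, request_host):
    """Simplified RFC 6265 domain matching: a cookie scoped to
    .fin1te-dsl.com is attached to requests for every sub-domain,
    including a hostname that points at a customer's own connection."""
    cookie_domain = cookie_domain.lstrip(".")
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))
```

So the session cookie set by the portal rides along to the attacker-hosted sub-domain, where it can simply be logged.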
To pull this off, an attacker either needs to be a customer of the ISP they’re targeting, or have access to a machine of a customer (pretty easy with the use of botnets). A web server is then hosted on the connection, and referenced by the hostname assigned (as opposed to the IP).
Rather than showing a real-world example - I’d rather keep the companies’ names private - I’ve set up a proof-of-concept.
We have a fake ISP hosted on fin1te-dsl.com, which mimics an ISP’s portal. Registering an account and logging in generates a session cookie (try it out).
We also have a site (152-151-64-212.cust.dsl.fin1te-dsl.com) which in real life would be hosted on a user’s own connection. A page, 152-151-64-212.cust.dsl.fin1te-dsl.com/debug.php, is hosted to display the cookies back for debugging purposes.
The cookie-stealing page is served with a content type of image/jpeg, letting us embed the image on a page.
And the cookies show up in the logs.
We just need to set our own cookie to this value and we’ve successfully hijacked their session.
Out of the four major UK ISPs I tested, two were vulnerable (now patched). If you assume an equal market share (based on 2012 estimates), that’s approximately 10.5 million users who can be potentially targeted. Of course, they have to be logged in - but you can always embed the cookie stealer as an image on a support forum, for example.
We have three mitigation options. The first is to remove super cookies and restrict the scope to a single domain. This may be impractical if you separate content onto different sub-domains. The second is to disable forward DNS for customers. And the third is to change the hostname assigned to one which isn’t a sub-domain.
In addition, techniques such as pinning a session to an IP address will help to an extent. Unless you store a CSRF token in a cookie, in which case, we can just CSRF the user.
If you want to browse the source code of the proof-of-concept, it’s available on GitHub.
Since I didn’t have the time to test every single ISP in the world (just the UK ones) for the three requirements that make them vulnerable, I decided to send an email to the security@ addresses at the top 25 ISPs - 20 of these bounced, and I received no reply from the other 5.
The two UK ones I originally contacted patched promptly and gave good updates, so kudos to you two.
Back in April I found three CSRF issues on Instagram, stemming from their Android/iOS App API (which is slightly different from their public API - it’s hosted on their main domain and doesn’t need an access token).
These issues were present in the following end-points:
accounts/remove_profile_pic - used to remove the profile picture from an account
accounts/set_private - used to mark a profile as private
accounts/set_public - used to mark a profile as public
Obviously the best one out of these is accounts/set_public. With a simple GET request we can reveal anyone’s profile and access their private pictures. Pretty cool.
Facebook patched the holes pretty quickly and I was awarded a decent bounty for it.
Once patched, I checked to make sure that it was indeed fixed - issuing a GET request returned a 405 Method Not Allowed response.
I didn’t blog about the issue and completely forgot about it until recently, when I decided to have another look at the Android app to see if there were any new end-points to play around with.
Pretty much all API requests within the app call a method named setSignedBody. This generates a hash of the parameters with a secret embedded in an .so file, meaning we can’t craft our own request on-the-fly and submit it on the user’s behalf (without extracting the secret).
However, the three end-points I submitted still didn’t use setSignedBody (presumably because there are no parameters needed), and therefore no token is sent along. Because of this, we can submit a POST request and still perform the attack which was supposed to be fixed!
The use of setSignedBody without a CSRF token means that all end-points are vulnerable to a replay attack. You simply submit the request yourself, catch it in Burp, and replay it to the victim. Unfortunately, this is something I realised after the bug was fixed, so no screenshots are available.
So the moral here is that you should double-double-check that an issue is fixed. If I’d been more thorough in testing the fix, I would have spotted it sooner than four months later - my bad.
This is now patched by requiring all requests to have a csrftoken parameter. Any request which is signed also requires a _uid parameter to prevent replay attacks (unless you extract the secret…).
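Conceptually, binding _uid into the signed body is what kills the replay. This is a sketch of the idea only - the HMAC construction, parameter layout and secret are all assumptions, not Instagram’s actual scheme:

```python
import hashlib
import hmac

SECRET = b"app-secret-from-the-.so-file"  # hypothetical value

def signed_body(params, secret=SECRET):
    """Sign the request body: HMAC over the sorted parameters.
    Because _uid is one of the parameters, the signature is tied to a
    single account."""
    body = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return sig + "." + body

def verify(blob, session_uid, secret=SECRET):
    """A captured request replayed against a victim fails the _uid
    check, even though the signature itself is still valid."""
    sig, body = blob.split(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    params = dict(p.split("=", 1) for p in body.split("&"))
    return params.get("_uid") == session_uid
```

An attacker can still sign a request for their own account, but replaying it against another user’s session no longer verifies.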
The original proof-of-concept now returns a 400 error.
The response body is a JSON object showing the error message.
I’ve found a few bugs on various Facebook satellite/marketing domains (ones which are part of the Facebook brand, but not necessarily hosted/developed by them, and not under the *.facebook.com domain). Most of them aren’t that serious.
This one isn’t an exception, and I wouldn’t normally blog about it, but it’s an interesting use case as to why content types are important.
The bug is an XSS discovered on Facebook Studio. This is linked to by some Facebook marketing pages, and is used to showcase advertising campaigns on Facebook.
There is an area which allows you to submit work to the Gallery. This form conveniently has an option to scrape details from your Facebook page and fill in boxes for you (such as Company Name, Description).
This calls an AJAX end-point with your page’s URL as a parameter. The response is served with a content type of text/html, when it is actually JSON.
When browsed to directly (it doesn’t need any CSRF tokens to be viewed, despite the hash param), we see our script executed.
The cool thing about this bug is that whilst it’s not persistent (the payload is fetched when the page is visited), the code is not present in the request body, therefore avoiding Chrome’s XSS Auditor and IE’s XSS Filter.
Had the content type been set to application/json, the code would not have run (until you start to consider content sniffing…).
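The fix is a one-liner in any stack. A sketch of building such a response (the helper name is mine), with nosniff added to rule out the content-sniffing caveat:

```python
import json

def json_response(data):
    """Serialise data with an explicit JSON content type, plus nosniff
    so browsers don't second-guess it. Served as text/html, the same
    bytes would execute any <script> inside a string value."""
    body = json.dumps(data)
    headers = {
        "Content-Type": "application/json; charset=utf-8",
        "X-Content-Type-Options": "nosniff",
    }
    return headers, body
```

Note the payload itself is untouched - the browser simply never treats it as markup.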
The content type is now set correctly.
15th August 2013 - Issue Reported
21st August 2013 - Acknowledgment of Report
21st August 2013 - Issue Fixed
This is a quick post about a simple bug I found on Friendship Pages on Facebook. (Note: Not nearly as cool as a full account takeover, however!)
Friendship Pages show you how two users on Facebook are connected, with posts and photos they’re both tagged in, events they’ve both attended and common friends. On these pages, you’re given the option to upload a cover photo (like you would on your profile, or an event).
It turns out that the cover photo on someone’s friendship page can be removed from any account.
First, we need the friendship_id, which can be obtained with an AJAX call in which profile_id is one user and friend_id is another.
With the friendship_id, we make an AJAX call to /ajax/timeline/friendship_cover/remove, placing the value into the request.
Refresh the page, and it’s disappeared.
Now, you can only remove your own cover.
29th August 2013 - Reported
2nd September 2013 - Acknowledgment of Report
2nd September 2013 - Issue Fixed
This post will demonstrate a simple bug which will lead to a full takeover of any Facebook account, with no user interaction. Enjoy.
Facebook gives you the option of linking your mobile number with your account. This allows you to receive updates via SMS, and also means you can login using the number rather than your email address.
The flaw lies in the /ajax/settings/mobile/confirm_phone.php end-point. This takes various parameters, but the two main ones are code, which is the verification code received via your mobile, and profile_id, which is the account to link the number to. The thing is, profile_id is set to your account (obviously), but changing it to your target’s doesn’t trigger an error.
To exploit this bug, we first send the letter F to 32665, which is Facebook’s SMS shortcode in the UK. We receive an 8-character verification code back.
We enter this code into the activation box, and modify the profile_id element inside the form.
Submitting the request returns a 200 response. You can see that the value of __user (which is sent with all AJAX requests) is different from the profile_id we modified.
Note: you may have to re-auth after submitting the request, but the password required is yours, not the target’s.
An SMS is then received with confirmation.
Now we can initiate a password reset request against the user and get the code via SMS.
Another SMS is received with the reset code.
We enter this code into the form, choose a new password, and we’re done. The account is ours.
Facebook responded by no longer accepting the profile_id parameter from the user.
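In other words, the account to act on must come from the authenticated session, never from the request body. A minimal sketch of that shape (all names hypothetical, not Facebook’s code):

```python
def confirm_phone(session, params, codes, link_number):
    """Link the number to the account in the authenticated session.
    A profile_id supplied in the request body is simply ignored, so
    tampering with it no longer targets someone else's account."""
    if codes.get(params.get("number")) != params.get("code"):
        raise PermissionError("bad verification code")
    link_number(session["user_id"], params["number"])
```

The tampered profile_id is still present in the request; it just has no effect on which account the number is linked to.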
23rd May 2013 - Reported
28th May 2013 - Acknowledgment of Report
28th May 2013 - Issue Fixed
The bounty assigned to this bug was $20,000, clearly demonstrating the severity of the issue.
When you create a shop on Etsy, you can upload an image to be used as a banner.
The upload form in the administration section stops you changing the shop to one you don’t control, as expected.
There is, however, an AJAX end-point which can also be used to upload these images. This doesn’t check you’re the owner on upload.
We can easily upload any image we want onto any shop we want. This could be used to damage a business’s reputation, or - as happened on the underground marketplace Silk Road - to upload a banner which prompts prospective customers to send their orders and payments to an email address we control.
Etsy fixed this in a simple way - they now check you’re the owner on upload.
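The general shape of that fix - authorise on the end-point itself, because AJAX callers can skip the form entirely - can be sketched as follows (names hypothetical, not Etsy’s code):

```python
def upload_banner(session, shop_id, image, shop_owners, save):
    """Authorise on the upload end-point itself: the uploader must own
    the shop, regardless of what the admin form would have allowed."""
    if shop_owners.get(shop_id) != session["user_id"]:
        raise PermissionError("not the shop owner")
    save(shop_id, image)
```

Any check enforced only in the UI that renders the form is no check at all once the end-point can be hit directly.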
4th April 2013 - Issue Reported
4th April 2013 - Acknowledgment of Report
8th April 2013 - Issue Fixed