I’ve either dabbled in or worked professionally in cybersecurity for years. That experience gives me a certain awareness of risk, along with the aversion that awareness creates. It means I’m super paranoid about my own development, but it also means I can quickly spot issues in other people’s code or configurations.
You’d think that would be advantageous.
Years ago, I worked on a project for Dropbox. As a contractor, I had access to their code systems and several credentials for privileged access to various systems. Concurrent with that project I upgraded from an HDD to an SSD on my primary machine – meaning I copied all my data from one drive to another and put the old drive on a shelf.
Two or so years after that project, I found the old drive and realized I’d never purged its data. The drive still had all of that legacy code – including some credentials hard-coded by another developer! I reached out, Dropbox verified the credentials were still active, rotated them, and thanked me for the report with a small bug bounty payment.[1]
When Bounties Go Bad
Sadly, not every story ends so amicably.
Once, after finding myself bored on a lazy afternoon, I began trawling GitHub looking for inadvertently committed AWS keys.[2] To my dismay, I found several. In one case, the same key had been reused by over a dozen separate public GitHub projects!
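If you’re wondering what that kind of trawl actually looks like, here’s a minimal sketch of the idea in Python: scan a locally cloned repository for strings matching the well-known AKIA prefix used by long-term AWS access key IDs. The scan_repo helper and the local-clone approach are just for illustration, not the exact tooling I used – and GitHub’s own secret scanning now does this far more thoroughly.

```python
import re
from pathlib import Path

# Long-term AWS access key IDs start with "AKIA" followed by 16
# uppercase letters/digits. (Secret keys are harder to match reliably,
# so this only flags the key-ID half of a credential pair.)
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_repo(root: str) -> None:
    """Walk a checked-out repository and print any suspected key IDs."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in AWS_KEY_ID.finditer(text):
            print(f"{path}: possible AWS access key ID {match.group(0)}")

if __name__ == "__main__":
    scan_repo(".")  # scan the current clone
```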
I was able to identify the key as belonging to a professor teaching a 200-level course on cloud computing. He’d established a set of keys for an account paid for by his university so students could learn how to manage resources in the cloud. It was a great idea, except for the failure to remind his students not to hard-code credentials in public code.
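The fix those students needed is a small one. As a rough sketch, assuming boto3 (the AWS SDK for Python) is installed and credentials are configured outside the source tree, the safe pattern looks like this:

```python
import boto3

# Dangerous: credentials hard-coded in source that ends up in a public repo.
# bad_client = boto3.client(
#     "s3",
#     aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",        # never do this
#     aws_secret_access_key="xxxxxxxxxxxxxxxxxxxxxxxx", # never do this
# )

# Safer: let the SDK resolve credentials from environment variables, the
# shared credentials file, or an instance/role profile. Nothing secret
# lives in the code, so nothing secret gets committed.
client = boto3.client("s3")
print(client.list_buckets()["Buckets"])
```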
After tracking down the keys, I reached out to both the professor and his university to alert them to the problem. I wasn’t expecting a bounty payout, but was still hopeful. After all, locking down that account would potentially save the university thousands of dollars.
Instead, their legal team sent me a cease and desist letter and threatened to report my activity to the FBI as a violation of the CFAA. They accused me of hacking their systems and threatened to sue me for damages.
Understanding Risk
I still report bugs and security issues I find in the wild. Every time I do this, I’m taking a risk.
Will the recipient of my report understand the issue? Will they be offended I’ve found a problem? Will they threaten to sue me or charge me under an arcane law written in response to an early-80s techno-thriller?
Where I’ve been responsible for managing and responding to security threats, I have always paid out bug bounties. Some have been large. Some have been small – and paid out of my own pocket to thank a researcher for doing the right thing and taking a risk.
Sadly, not every company takes this stance.
“Totally understand other companies have bug bounty programs, unfortunately we have some other priorities at the moment. I’ll definitely let you know if that changes.”
I’ll still take the risk to report things – it’s simply the right thing to do. I’ll still keep suggesting companies pay out bounties where appropriate – security researchers should be compensated for their time and for taking the risk to report issues. And, most of all, I’ll still keep my reports responsible and quiet so the impacted teams have a chance to fix their problems.
I just wish the status quo were in a better place.
- [1] If I still had a copy of the credentials on a forgotten drive, then any engineer – contractor or employee – who had worked on the project since I had would also have a copy!
- [2] GitHub has long since started doing this kind of search automatically, which is a major improvement for project security across the board!