Several months ago, while I was typing a few e-mails at my dining room table, my laptop spoke to me.
“You…look…bored,” it said in a robotic monotone, out of nowhere.
Startled, I checked my browser tabs and my list of open applications to see if anything had been making noise. Nothing had. I hadn’t been watching any YouTube videos, browsing any pages with autoplay ads, or listening to any podcasts when the voice appeared.
Then I realized: this was the hacker. The same hacker who, for the prior two weeks, had been making my life a nightmare hellscape — breaking into my email accounts, stealing my bank and credit card information, gaining access to my home security camera, spying on my Slack chats with co-workers, and—the coup de grâce—installing a piece of malware on my laptop that hijacked my webcam and used it to take photos of me every two minutes, then uploaded those photos to a server owned by the hacker.
Hence the robot voice. From his computer on the other side of the country, the hacker spied on me through my webcam, saw that I was unenthused, and used my laptop’s text-to-speech function to tell me “you look bored.”
I had to admit, it was a pretty good troll. And I couldn’t even be mad, because I’d asked for it.
Last year, after reporting on the hacks of Sony Pictures, JPMorgan Chase, Ashley Madison, and other major companies, I got curious about what it felt like to be on the victim’s side of a data breach, in a time when so much of our lives is contained in these giant, fragile online containers.
So I decided to stage an experiment that, in hindsight, sounds like a terrible idea: I invited two of the world’s most elite hackers (neither of whom I’d ever met) to spend two weeks hacking me as deeply and thoroughly as they could, using all of the tools at their disposal. My only conditions were that the hackers had to promise not to steal money or any other assets from me, reveal any of my private information, or do any harm to me, my data, or anyone else. And then, at the end of the hack, I wanted them to tell me what they found, delete any copies they’d made, and help me fix any security flaws or vulnerabilities I had.
Fortune 500 companies do this kind of thing all the time. It’s called “penetration testing,” or “pentesting,” and it’s a staple of the modern corporate security arsenal. Large corporations and government agencies pay professional white-hat hackers thousands of dollars an hour to try to hack their servers, in the hopes that they’ll find holes and vulnerabilities that can be patched before a malicious hacker gets hold of them.
I’m not a Fortune 500 company, but I still wanted to subject myself to a personal penetration test to see how my security measured up. I’m a pretty privacy-conscious guy, and I’ve taken lots of steps to keep my data safe. I put two-factor authentication on my accounts; I have strong passwords and a password manager; and I use a VPN when I’m on public wifi networks.
If I had to give my overall digital security a grade, I'd give myself an A-.
But as it turned out, it didn’t matter how good my defenses were. Against a pair of world-class hackers, my feeble protections were about as useful as cardboard shields trying to stop a rocket launcher. For weeks, these hackers owned the hell out of me. They bypassed every defense I’d set up, broke into the most sensitive and private information I have, and turned my digital life inside out. And then, when they’d had enough, I met them at DefCon (the world’s biggest hacker convention, held in Las Vegas every year) and they told me exactly how bad the damage was.
You can see the full, terrifying story of what happened to me in the video above. But here are the broad strokes.
Part 1: Social Engineering
The first hacker I called, Chris Hadnagy, specializes in what’s called “social engineering” — attacking a network by exploiting human weaknesses, rather than using code or malware. Most of these exploits are subtle—a cable company that will give out customer addresses over the phone without asking for a PIN, say, or an insurance company that only requires a Social Security number to access a customer’s policy details. But they can be dangerous, in that they can provide hackers access and data points to carry out larger attacks. (Social engineering is how hackers were able to wreck the digital life of my friend Mat Honan, by getting Apple and Amazon to divulge his personal details.)
I’d never met Chris, but his firm, Social-Engineer, came highly recommended, so I gave him my rules: he and his team had two weeks to hack me as hard as possible, using every tool at their disposal, but stealing no money or data and causing no permanent damage or fallout.
Before he began, Chris emailed me: “may God have mercy on you ;)”
Chris began by compiling a dossier on me, using publicly available information like my email address, my employer, and my social media accounts. Most of this was information I'd made available on purpose, but some of it wasn't. (His team found my home address, for example, by zooming in on a photo I'd posted to Twitter of my dog, which had the address listed in tiny type on the dog's tag.)
Once he had my personal information, Chris and his team went to work. They called Time Warner Cable and Comcast, pretending to be my girlfriend, and figured out whether or not I had an account with either of the companies. (I don’t.) They called the local utility company to see if I had an account there. (I do, but it’s not under my name.) They found my Social Security number on a special-purpose search engine, and took a survey of my social media activities. In total, their dossier on me added up to 13 pages.
If Chris had been a malicious attacker, he could have caused all manner of havoc with the information he had. He could have gotten my electricity shut off, or gained access to my bank account and bled me dry. He could also have stitched together several bits of personal information to come up with a convincing cover for a more sophisticated attack. (Chris did this once: he saw on Twitter that I'd recently ordered a hoverboard scooter from Alibaba, then sent me a phishing email from a fake Alibaba account with a link where, it claimed, I needed to confirm my mailing address for customs. I fell for it.)
For his grand finale, Chris had one of his social engineers, Jessica Clark, conduct a "vishing" (voice phishing) call to my cell phone company, in which she pretended to be my (non-existent) wife and asked for access to my account. To make the act more convincing, and to elicit sympathy from the customer service rep, she found a YouTube video of a crying baby and played it in the background while spinning an elaborate sob story about how I was out of the country on business, and how, if she could just get into the account, she could get the information she needed to apply for a loan. (You can watch Jessica's vishing call at 2:13 in the video above—it's pretty amazing.)
The act worked: the customer service rep believed that Jessica was my wife, and—over the screams of the YouTube baby—not only allowed her to access my account, but also let her change the password, effectively locking me out.
The scariest thing about social engineering is that it can happen to literally anyone, no matter how cautious or secure they are. After all, I hadn’t messed up—my phone company had. But the interconnected nature of digital security means that all of us are vulnerable, if the companies that safeguard our data fall down on the job. It doesn’t matter how strong your passwords are if your cable provider or your utility company is willing to give your information out over the phone to a stranger.
The other scary thing about social engineering is that it’s incredibly easy. Anyone can do it, regardless of coding proficiency—all you need is Google, a phone, and some amateur acting skills. And if Chris’s team of social engineers could wreak havoc without sophisticated tools, I was terrified of what a skilled technical hacker could do.
Part 2: The Shell
For the second part of my hack, I enlisted Dan Tentler, a well-known security researcher and founder of Phobos Group, who has spent years penetration-testing the computer systems of major companies. Dan’s hacking methods are more traditional—typically, he writes malicious scripts, inserts them into systems, and uses them to undermine an organization’s security infrastructure. He’s very good at it.
Dan began hacking me with an elaborate phishing scheme. Running a WHOIS search on my personal website, he found out who hosted my site (Squarespace), and registered an available domain name that was one letter away from Squarespace’s. He then set up a fake website that purported to be a Squarespace security page, and sent me a convincing-looking email that claimed to be from Squarespace’s security team, asking me to go to the page he’d set up and install a certificate that would improve the security of my site. I’ve received a lot of phishing emails over the years, and this was the slickest one I’d ever seen—so slick, in fact, that I clicked on it even though I had promised myself I would be extra-careful while the hackers were targeting me.
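To give a sense of how hard that kind of lookalike domain is to spot by eye, and how trivially it can be caught in code, here is a minimal Python sketch. The lookalike domain below is made up for illustration; it isn't the one Dan actually registered.

```python
# Minimal sketch: flag domains that are nearly, but not exactly, a trusted
# domain. "squerespace.com" is a made-up example, not Dan's real domain.
from difflib import SequenceMatcher

def looks_like(candidate: str, trusted: str, threshold: float = 0.9) -> bool:
    """Return True if candidate is suspiciously similar to a trusted domain."""
    ratio = SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()
    return candidate.lower() != trusted.lower() and ratio >= threshold

print(looks_like("squerespace.com", "squarespace.com"))  # True: one letter off
print(looks_like("squarespace.com", "squarespace.com"))  # False: exact match
```

A machine catches the single swapped letter instantly; a person skimming an email at their dining room table usually doesn't, which is exactly what typosquatters count on.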
The certificate I installed, of course, wasn't really from Squarespace—it was malware he'd written that created what's called a "shell." This shell allowed Dan to remotely log into my computer and execute commands on it as if it were his own—essentially giving him control of my entire machine.
Once Dan had his shell program up and running, hacking me was as easy as figuring out what he wanted and taking it. He created fake pop-up boxes, which looked identical to OS X system pop-ups, asking for my administrator password. He installed a keylogger that captured every letter I typed, and used it to steal my login for my password manager, 1Password—which meant that he got all of my other passwords, too. He found my Dropcam credentials, and used them to spy on my house through my own security system. He installed a program that snapped photos of me through my webcam and took a screenshot of my laptop's screen every two minutes, then sent them to a server where he could collect and view them. One night, I dozed off while watching "Chopped" on Netflix, and Dan literally watched me sleep.
Throughout all of this, there were vague signs that something was happening to my computer—I got more error messages than usual, and from time to time the green light next to my webcam would illuminate—but I had no idea just how extensive Dan’s hack had been until I met him in Las Vegas for DefCon.
“It’s ridiculous,” Dan said. “I have control of your digital life in its entirety. I have all your credentials. I have all your access to all your financial information, all your work information, all your personal information. I can pay people with your bank account or your Amex account.”
For all intents and purposes, he said, “I am you.”
If he had been a malicious attacker, Dan said, he could have done unspeakable damage: draining my bank account, ruining my credit score, deleting years’ worth of photos, videos, and important data from my hard drive, using secrets from my email inbox and my work Slack to ruin my reputation. Anything, really.
“I could have left you homeless and penniless,” he said.
I believed him. And after hearing about the extent of Dan’s hack, I thought about throwing my laptop into the ocean, moving to the mountains, and becoming a disconnected hermit.
But a hermetic life isn’t an option for me. I need to live and work in this insecure, interconnected digital world. So in Las Vegas, I asked Dan, Chris, and another digital security expert—Morgan Marquis-Boire, a Google veteran and the director of security at First Look Media—to help me fortify my digital life, and make it harder for hackers to attack me.
Part 3: The Cleanup
The first thing Marquis-Boire told me is that, relatively speaking, I’m pretty unlikely to be hacked by someone as skilled as Chris Hadnagy or Dan Tentler. I’m not a government official, a CEO, an intelligence officer, or a celebrity. And even though some journalists (and a few normal people) have been hacked to an extreme degree, it’s not likely that I fit the profile of someone whose life an attacker would be interested in destroying.
This principle is called “privacy through obscurity.” Basically, the idea is that although anyone can theoretically be hacked by anyone with enough skill and time on their hands, the vast majority of us simply aren’t interesting enough for hackers to care about.
“Do you worry about trained martial artists beating you up on the street?” Marquis-Boire asked me.
“Not particularly,” I responded.
“But you’re aware that they exist,” he said. “You’re also aware that you probably couldn’t do anything about it if one of them wanted to beat you up in the street.”
His point, he explained, was that while people can—and should—take basic steps to protect their digital security, most people probably shouldn’t worry about being subjected to a mega-hack like the one Dan and Chris had put me through. The real danger isn’t the trained martial artist attacking you; it’s the thief who notices that your car is unlocked and decides to help himself to some electronics.
The hackers recommended some things I could do to bolster my security. Most of it was basic stuff: turn on two-factor authentication, use a VPN, don’t click on suspicious links, change your passwords every few months. One I hadn’t heard of before was an app called Little Snitch, which monitors your outgoing network traffic and alerts you if a program you’re running is trying to contact a strange server.
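Little Snitch itself is a commercial Mac app, but the underlying idea is simple enough to sketch. The rough Python script below (the allowlist is hypothetical, and enumerating connections may require elevated privileges) walks the machine's established outgoing connections and prints any remote host that isn't on a list you trust:

```python
# Rough sketch of the Little Snitch idea: enumerate established outgoing TCP
# connections and flag remote hosts that aren't on a personal allowlist.
# The allowlist below is hypothetical; populate it with servers you expect.
import psutil

KNOWN_HOSTS = {"192.0.2.10", "192.0.2.20"}  # hypothetical trusted servers

for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        proc = "unavailable"
    if conn.raddr.ip not in KNOWN_HOSTS:
        print(f"{proc} -> {conn.raddr.ip}:{conn.raddr.port} (not on allowlist)")
```

Unlike this one-shot script, Little Snitch watches continuously and asks for a decision the moment a program phones home; the sketch is only meant to show what "monitoring outgoing traffic" means in practice.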
You can also take proactive steps to protect yourself against social engineering. After Chris and Jessica broke into my wireless account, I called my phone company, reset my password, and instructed them not to let anyone make changes to my account in the future unless they provided a 4-digit PIN. I did the same thing with my ISP, my bank, and my utility company.
None of these measures will make you hack-proof; no app or service can do that. But good security practices can deter hackers, or at least convince them to move on to an easier target.
In the end, I’m glad I got hacked. Now, I have a good sense of where my security shortcomings are, and I know how to fix them. I’m also aware of the things I can’t fix, but that still pose a danger to me. (One conversation I had at DefCon, with an infrastructure security researcher who spent 30 minutes telling me about how nuclear power plants and bridges can be hacked, made me want to move to a very small island.)
More than anything, my experience getting hacked made me realize the importance of ethical white-hat hackers. In a time when everything from refrigerators to baby monitors is networked, internet-connected, and vulnerable to attack, hackers have become a new kind of power broker—they’re the people who understand the risks we face, and can help push for the policies and patches that can help mitigate those risks. To paraphrase the NRA: the only thing that can stop a bad hacker with a script is a good hacker with a script.
At the end of DefCon, Dan helped me clean the malware off my laptop, remove the shell program he'd installed, and delete the files he'd transferred to a remote server. Then I went downstairs to the convention floor, where a booth was selling tiny stickers to place over your laptop's webcam to deter snoops. I bought a pack of 10 for a dollar, and placed one squarely over the lens of my webcam. It won't protect me from every danger I face, but at least I'll sleep a little more peacefully.