What cybersecurity can learn from military intelligence and the CIA
Critical thinking frameworks that every analyst should know
Cybersecurity has always borrowed from the military. Lockheed Martin’s Cyber Kill Chain. MITRE ATT&CK, rooted in intelligence analysis methodology. Red teaming itself comes from military war gaming. The language we use every day is military language: threat actors, campaigns, reconnaissance, exfiltration.
We took their offensive frameworks. We took their defensive models. We took their vocabulary.
But I realized we skipped the one thing that makes all of it work: how they train their analysts to think.
Two weeks ago I posted five critical thinking habits for security analysts on LinkedIn. The response surprised me, not just the engagement, but the conversations it started. Team leads sharing what they struggle to teach. Analysts admitting they’d never been trained to think, only to follow playbooks. Career changers asking where to even begin.
It confirmed something I’ve believed for a long time: cybersecurity has a thinking problem, not a tools problem.
So I want to go deeper. Not just five habits, but the actual frameworks behind them, where they come from, why they work, and how you can apply them starting today.
The intelligence community figured this out decades ago
Here’s something most security professionals don’t know: the military and intelligence community have been formally training analysts in critical thinking for over 30 years. They had to. When a wrong assessment can cost lives, you can’t afford analysts who only follow procedures.
The CIA published Richards J. Heuer Jr.’s “Psychology of Intelligence Analysis”, a book specifically about the cognitive biases that cause analysts to reach wrong conclusions. Not technical biases. Human biases. Confirmation bias. Anchoring. Mirror imaging. The same traps that cause a SOC analyst to close a ticket too fast because the evidence looks like something they’ve seen before.
David T. Moore, writing for the National Defense Intelligence College, published “Critical Thinking and Intelligence Analysis”, which laid out a core principle: analysts must simultaneously build a logical reasoning chain AND objectively challenge their own logic. Not one or the other. Both at the same time.
This isn’t abstract philosophy. This is operational tradecraft. And it translates directly to cybersecurity.
Applying this in the SOC
Let me make this concrete. You get an alert: a user account is authenticating from two geographic locations within an impossible travel timeframe.
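Under the hood, that alert usually reduces to a simple speed test: how fast would the user have had to move between the two login locations? A toy sketch of that logic, where the coordinates, timestamps, and the ~900 km/h plausibility threshold are all illustrative assumptions rather than any vendor’s actual detection:

```python
# Toy "impossible travel" check. The threshold and locations are
# illustrative assumptions, not a real product's detection logic.
from dataclasses import dataclass
from datetime import datetime, timezone
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 900.0  # roughly airliner cruise speed

@dataclass
class Login:
    when: datetime
    lat: float
    lon: float

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = (sin(dlat / 2) ** 2
         + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def is_impossible_travel(a: Login, b: Login) -> bool:
    """Flag if the implied speed between two logins exceeds the threshold."""
    hours = abs((b.when - a.when).total_seconds()) / 3600
    if hours == 0:
        return True  # simultaneous logins from two distinct places
    return haversine_km(a, b) / hours > MAX_PLAUSIBLE_SPEED_KMH

# Example: New York at 09:00 UTC, then Singapore 45 minutes later.
ny = Login(datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc), 40.71, -74.01)
sg = Login(datetime(2024, 5, 1, 9, 45, tzinfo=timezone.utc), 1.35, 103.82)
print(is_impossible_travel(ny, sg))  # True
```

Note everything the alert silently assumes: accurate geolocation, one device, one network path. That is exactly where the analysis below begins.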
An analyst following a playbook checks the VPN logs, confirms whether the user has a travel ticket, and either escalates or closes.
An analyst using critical thinking does something different:
Purpose: what am I actually trying to determine? Not “is this a true positive” but “is this account compromised?”
Assumptions: I’m assuming the geolocation data is accurate. I’m assuming the user only has one device. Am I sure about both?
Information: what data do I have? What data am I missing? Are there other signals I should correlate — endpoint telemetry, email activity, privilege changes?
Inferences: if this account IS compromised, what would the attacker do next? What evidence would I expect to see? If it ISN’T compromised, what benign explanations exist and what evidence supports them?
Point of view: am I looking at this only from a defensive perspective? If I were the attacker, would this alert even be the real concern, or would it be a distraction?
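The inferences step, weighing “compromised” against benign explanations, is essentially Heuer’s Analysis of Competing Hypotheses in miniature: list the hypotheses, score each piece of evidence against each one, and favor the hypothesis with the least evidence against it. A minimal sketch, where the hypotheses, evidence items, and consistency scores are all invented for illustration (real ACH is a structured analyst exercise, not an automated calculation):

```python
# Minimal Analysis of Competing Hypotheses (ACH) sketch, after Heuer.
# Evidence items and scores are invented for illustration only.
# Consistency of each piece of evidence with each hypothesis:
# +1 consistent, 0 neutral, -1 inconsistent.
evidence = {
    "VPN exit node explains second geolocation": {"compromised": -1, "benign": +1},
    "No travel record for the user":             {"compromised": +1, "benign": -1},
    "New inbox forwarding rule created":         {"compromised": +1, "benign": -1},
    "MFA prompt approved on registered device":  {"compromised": 0,  "benign": +1},
}

def evidence_against(hypothesis: str) -> int:
    # Heuer's key point: weigh DISCONFIRMING evidence. The hypothesis
    # with the least evidence against it survives, not the most for it.
    return sum(1 for scores in evidence.values() if scores[hypothesis] == -1)

for h in ("compromised", "benign"):
    print(f"{h}: {evidence_against(h)} piece(s) of evidence against")
```

The point is not the arithmetic; it is that the method forces you to write down benign explanations and actively look for evidence that would kill your favorite hypothesis.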
This is the difference between an alert handler and an analyst.
Why this matters more now than ever
LLMs are becoming part of the analyst workflow. They’re impressive tools. But they are pattern-matching engines that generate the most probable answer, not necessarily the correct one. They sound authoritative even when they’re wrong.
If you don’t have the critical thinking skills to evaluate an LLM’s output with the same rigor you’d evaluate a suspicious process execution, you have a problem. AI amplifies the analyst, but only if the analyst has the judgment to question its conclusions.
The intelligence community understood this about their analysts decades ago. Cybersecurity is only now catching up.
Where to start
You don’t need a certification or a master’s degree to start thinking more critically. If you want a structured path, the foundational texts below are available for free.
But the real starting point is simpler than any book:
The next time you investigate an alert, PAUSE. Ask yourself what you’re assuming. Consider the opposite of your first conclusion. Explain your reasoning out loud. Check whether your thinking would hold up if someone challenged every step.
That pause — that moment of deliberate, structured thinking — is what separates good analysts from the ones who catch what everyone else missed.
Tools change every year. Thinking compounds forever.
Further reading:
Psychology of Intelligence Analysis — Richards J. Heuer Jr., CIA (free PDF): https://www.cia.gov/resources/csi/static/Pyschology-of-Intelligence-Analysis.pdf
Critical Thinking and Intelligence Analysis — David T. Moore, National Defense Intelligence College (free PDF): https://apps.dtic.mil/sti/tr/pdf/ADA481702.pdf