
The Glasswing Principle: Why Your Customer Feedback Tools Have the Same Blind Spot
Most people think their customer feedback tools are working. After all, the dashboards are green, NPS is holding steady, and thousands of survey responses flow in every quarter. They're wrong — and Anthropic just proved it in the most dramatic way possible.
On April 7, 2026, Anthropic unveiled Project Glasswing, a cybersecurity initiative built on their new Claude Mythos Preview model. The results were staggering: thousands of zero-day vulnerabilities discovered across critical open-source software — bugs that the world's best automated security scanners had missed for decades. One vulnerability in OpenBSD had been hiding in plain sight for 27 years. Not because no one was scanning. Because the scanners weren't actually thinking.
This has massive implications far beyond cybersecurity. What Anthropic exposed is a pattern that applies everywhere automated tools create a false sense of confidence — including the survey and feedback tools your team relies on every day.
Key Takeaways
- Project Glasswing found thousands of zero-day vulnerabilities that automated scanners missed for decades, proving that coverage without reasoning creates dangerous blind spots.
- The Glasswing Principle: the longer an automated tool runs without finding something, the more confident you become — and the more dangerous the blind spot grows.
- Your NPS scores, survey forms, and feedback dashboards suffer from the same structural flaw: they measure what you already know to ask, not what you need to discover.
- AI-powered conversations that reason, follow up, and probe inconsistencies are to customer research what Claude Mythos is to cybersecurity — a fundamentally different approach that surfaces what automation alone cannot.
What Anthropic Just Proved with Project Glasswing
Project Glasswing is Anthropic's flagship cybersecurity program, launched in partnership with Amazon, Apple, Google, Microsoft, and NVIDIA. Anthropic committed $100 million in usage credits to the initiative, and the underlying model — Claude Mythos Preview — is considered so powerful that Anthropic has declined to release it publicly, restricting access to vetted security partners only.
The headline numbers are remarkable. Claude Mythos found vulnerability chains in the Linux kernel that interconnected across multiple subsystems — the kind of complex, multi-step attack vectors that automated scanners are architecturally incapable of tracing. It identified a buffer overflow in OpenBSD's networking stack that had survived 27 years of automated scanning, manual code review, and multiple security audits.
But the most important finding isn't any single bug. It's the meta-finding: the entire category of automated scanning tools had been generating false confidence at scale. Every clean scan report, every green dashboard, every "no critical vulnerabilities found" summary was technically accurate and fundamentally misleading. The tools were covering the codebase. They were not understanding it.
As CyberScoop reported, the vulnerabilities Glasswing surfaced weren't in obscure corners of abandoned code. They were in heavily scanned, actively maintained, mission-critical software. The scanning tools ran daily. The bugs persisted for decades.
That gap — between coverage and comprehension — is what I'm calling the Glasswing Principle.
The Glasswing Principle: Automation Is Not Intelligence
Here's the definition worth pinning to your wall:
The Glasswing Principle: Automated tools create compounding false confidence over time. The longer they run without surfacing a finding, the more the organization trusts them, and the more dangerous the undetected blind spot becomes. Coverage without reasoning doesn't just miss things; it actively masks risk.
This is not a bug in any specific tool. It's a structural property of automation without intelligence. A scanner that checks 10,000 lines of code against a pattern library will find every known pattern. It will miss every unknown one. And crucially, it will report confidence in both cases identically.
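To make that concrete, here's a minimal sketch of how pattern-based scanning works. The rule names and the buggy snippet are invented for illustration; real scanners ship far larger rule libraries, but the structural limit is the same:

```python
import re

# A toy pattern library: each rule matches one known-bad construct.
# Real scanners ship thousands of rules; the structural limit is the same.
KNOWN_BAD_PATTERNS = {
    "unbounded-strcpy": re.compile(r"\bstrcpy\s*\("),
    "gets-call": re.compile(r"\bgets\s*\("),
    "unbounded-sprintf": re.compile(r"\bsprintf\s*\("),
}

def scan(source: str) -> list[str]:
    """Return the name of every known-bad pattern found in the source."""
    return [name for name, rx in KNOWN_BAD_PATTERNS.items() if rx.search(source)]

# A subtle off-by-one bug: it copies n + 1 bytes, but it calls no function
# the pattern library knows about, so no rule can ever match it.
unknown_bug = """
int read_header(char *dst, const char *src, int n) {
    for (int i = 0; i <= n; i++)   /* <= should be < */
        dst[i] = src[i];
    return n;
}
"""

findings = scan(unknown_bug)
print(findings if findings else "no critical vulnerabilities found")
# Prints "no critical vulnerabilities found": the exact same confident
# report the scanner produces for genuinely safe code.
```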
The Glasswing Principle has three components:
- The Coverage Illusion — High volume of activity (scans run, surveys sent, dashboards populated) creates a feeling of thoroughness that is disconnected from actual depth.
- Compounding Confidence — Each cycle that fails to surface a problem reinforces the belief that no problem exists. Year one: "we're probably fine." Year five: "we've been scanning for years — we're definitely fine." Year 27: a buffer overflow is still sitting there.
- The Discovery Gap — The difference between what automated tools can find (pattern matches, known categories, pre-defined questions) and what actually matters (novel patterns, emergent risks, the things you didn't know to look for).
This isn't theoretical. Anthropic quantified it. The discovery gap in cybersecurity was thousands of critical vulnerabilities wide.
Your Feedback Stack Has the Same Blind Spot
Now apply the Glasswing Principle to your customer research tools.
Your NPS survey has been running quarterly for three years. Scores hover between 42 and 48. The dashboard is green. Leadership cites the trend in board decks. And then a cohort representing 15% of ARR churns in a single quarter, and nobody saw it coming.
This is not a failure of measurement. It is a failure of reasoning. The NPS survey was covering your customer base. It was not understanding it. A customer who scores you a 7 — passive, fine, not a problem — might be three weeks from signing a competitor contract. The score captured a number. It missed the story.
Consider what forms and surveys structurally cannot do, starting with the most basic job: getting anyone to answer at all.
Research from Gartner has shown that survey response rates have declined by over 15 percentage points in the last decade, now averaging below 20% for most B2B companies. But here's the Glasswing trap: most teams interpret declining response rates as a participation problem, not an intelligence problem. They optimize the form — shorter, prettier, better incentives — rather than questioning whether forms are the right instrument at all.
Your survey asks: "How satisfied are you with our onboarding? (1-5)" A customer selects 3. You record a data point.
An AI conversation asks the same opening question — then follows up: "You mentioned it took longer than expected. What specifically slowed you down?" The customer explains they couldn't figure out how to connect their CRM, tried three times, almost gave up, and only succeeded after finding an unlisted help article. "What would have made you give up entirely?" They describe exactly the moment your onboarding becomes a churn risk.
The form captured a 3. The conversation captured a product insight, a documentation gap, a churn signal, and a prioritization input for your next sprint. Same customer, same topic, fundamentally different depth.
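Mechanically, the difference is a loop that reasons before it asks again. Here's a minimal sketch of that follow-up loop; `ask_customer` and `ask_model` are hypothetical stand-ins for your chat channel and whatever LLM completion call you use, not any specific vendor's API (Perspective AI's included):

```python
def interview(opening_question, ask_customer, ask_model, max_follow_ups=3):
    """Run one adaptive customer interview.

    ask_customer(question) -> the customer's free-text answer
    ask_model(prompt)      -> a completion from whatever LLM you use
    Both callables are assumptions of this sketch, not a real API.
    """
    transcript = [(opening_question, ask_customer(opening_question))]
    for _ in range(max_follow_ups):
        history = "\n".join(f"Q: {q}\nA: {a}" for q, a in transcript)
        follow_up = ask_model(
            "You are interviewing a customer about onboarding.\n"
            f"{history}\n"
            "If the last answer is vague, surprising, or inconsistent with "
            "an earlier one, ask ONE short probing follow-up question. "
            "If the thread is exhausted, reply with exactly: DONE"
        )
        if follow_up.strip() == "DONE":
            break
        transcript.append((follow_up, ask_customer(follow_up)))
    return transcript
```

A form is the degenerate case of this loop with max_follow_ups=0: it records the first answer and stops, no matter what that answer contains.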
This is the discovery gap in customer research. And like the cybersecurity discovery gap, it compounds. Every quarter your surveys run without surfacing these insights, your team's confidence in the green dashboard grows — and the blind spot grows with it.
What Changes When Your Tools Can Actually Reason
The lesson from Project Glasswing isn't "use AI." Automated security scanners already use pattern-matching algorithms. NPS platforms already use analytics. The lesson is that reasoning changes the category of what's discoverable.
Claude Mythos didn't find vulnerabilities by scanning faster or covering more code. It found them by understanding how systems interact — tracing logic across subsystems, recognizing that a benign-looking function in one module becomes dangerous when called by a specific sequence in another. That requires reasoning about context, not matching against patterns.
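For intuition about what "dangerous only in sequence" means, here's a deliberately simplified sketch. The names and the bug are invented for this post (real kernel bugs live in C and are far subtler); the point is that each function is harmless on its own, and the break only appears when you reason about how a value flows between them:

```python
HEADER_MAX = 64

def parse_length(packet: bytes) -> int:
    """Harmless alone: just decodes a 4-byte length field."""
    return int.from_bytes(packet[:4], "big")

def fill_header(buf: bytearray, payload: bytes, n: int) -> None:
    """Harmless alone: copies n bytes, trusting the caller to bound n."""
    buf[:n] = payload[:n]   # slice assignment silently grows the buffer

# The danger exists only in the sequence: an attacker-controlled length
# decoded in one place flows unchecked into a copy in another.
header = bytearray(HEADER_MAX)
packet = (4096).to_bytes(4, "big") + b"A" * 4096
n = parse_length(packet)            # 4096, never compared to HEADER_MAX
fill_header(header, packet[4:], n)
print(len(header))                  # 4096: the 64-byte invariant is gone

# A per-function check passes both routines. Only reasoning about how the
# length value travels between them reveals the broken invariant.
```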
The same shift applies to customer research. When your feedback tool can reason, three things change:
1. You discover unknown unknowns. A form can only ask questions you've already thought of. An AI conversation can follow a thread the customer introduces — a pain point you didn't know existed, a use case you hadn't considered, a competitor you weren't tracking. McKinsey's 2025 research on AI-driven customer insights found that companies using conversational AI for customer research identified 3x more actionable insights per interaction compared to traditional survey methods.
2. You get signal from the "messy middle." The highest-value customer feedback lives in uncertainty: "it depends," "I'm not sure," "sometimes yes, sometimes no." Forms have no mechanism for engaging with ambiguity. They need a clean answer for a clean data point. AI conversations thrive in the messy middle — probing what "it depends" depends on, exploring what triggers the "sometimes," understanding the conditions that flip a customer from satisfied to frustrated.
3. You catch the 27-year-old bugs. Every customer base has deeply embedded assumptions that nobody questions because the data has never contradicted them. "Enterprise customers love our reporting." Do they? Or have they just been selecting 4 out of 5 on a form for years because the real answer — "your reporting is fine but we export everything to Looker anyway" — doesn't fit in a radio button? AI-powered conversations from tools like Perspective AI surface these hidden truths by letting customers speak in their own words, then following up on the signals that matter.
How to Audit Your Tools for Glasswing Blind Spots
The Glasswing Principle suggests a specific audit framework. Here are five questions to ask about every feedback tool in your stack:
The Glasswing Blind Spot Audit
1. What can this tool NOT discover? Every tool has a discovery ceiling. For NPS, it's "why." For forms, it's anything you didn't think to ask. For analytics, it's intent. List the categories of insight this tool is structurally incapable of producing — that's where your blind spots live.
2. How long has this tool been running without a surprising finding? If your quarterly survey hasn't surfaced something that genuinely surprised your team in over a year, that's not evidence the tool is working. That's the Glasswing Principle in action — compounding confidence masking a growing discovery gap.
3. What's your false-confidence ratio? Calculate: (number of data points collected) ÷ (number of decisions those data points actually changed). If you're collecting 10,000 survey responses per quarter but can only point to 2-3 concrete decisions they informed, your tool is generating confidence, not insight. (A worked example follows this list.)
4. Can this tool follow up? If a customer gives an unexpected, vague, or contradictory response, what happens? If the answer is "we record it and move on," you have a Glasswing blind spot. The ability to follow up — to probe, clarify, and explore — is the difference between coverage and comprehension.
5. When was the last time this tool found something you didn't know to look for? This is the ultimate Glasswing test. Automated security scanners never found novel vulnerability classes because they could only match known patterns. Your feedback tools face the same constraint. If every insight your tool produces fits neatly into existing categories, you're confirming what you already believe, not discovering what you don't.
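For question 3, the arithmetic is deliberately blunt. A quick illustration, using the same made-up numbers as above:

```python
# Numbers are purely illustrative.
data_points_collected = 10_000   # survey responses this quarter
decisions_changed = 3            # decisions those responses actually altered

false_confidence_ratio = data_points_collected / decisions_changed
print(f"{false_confidence_ratio:,.0f} data points per changed decision")
# -> 3,333 data points per changed decision: volume far outrunning insight.
```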
Score yourself honestly. If three or more answers reveal structural limitations, your feedback stack has Glasswing-grade blind spots — and the longer you've been running it, the wider those blind spots have grown.
FAQ
What is Project Glasswing?
Project Glasswing is Anthropic's cybersecurity initiative using their Claude Mythos Preview model to find zero-day vulnerabilities in critical open-source software. Launched April 2026 with partners including Amazon, Apple, Google, Microsoft, and NVIDIA, it discovered thousands of vulnerabilities that automated scanners had missed for up to 27 years. Anthropic committed $100 million in usage credits to the program.
What is the Glasswing Principle?
The Glasswing Principle states that automated tools create compounding false confidence over time. The longer they run without surfacing findings, the more organizations trust them — while undetected blind spots grow larger. It applies wherever coverage is mistaken for comprehension, from cybersecurity scanning to customer feedback surveys.
How does the Glasswing Principle apply to customer surveys?
Surveys and forms can only ask questions you've already thought of, creating the same structural blind spot that plagued automated security scanners. They generate high volumes of data — response counts, NPS scores, completion rates — that create confidence without depth. Critical insights hiding in customer ambiguity, novel pain points, and unexpressed frustrations go undetected.
Can AI replace traditional customer surveys entirely?
AI-powered conversations don't just replace surveys — they change the category of what's discoverable. Like Claude Mythos reasoning about code rather than scanning patterns, AI conversations follow up on vague answers, probe inconsistencies, and surface insights customers wouldn't volunteer in a structured form. The goal isn't faster surveys; it's deeper understanding.
What is Claude Mythos Preview?
Claude Mythos Preview is Anthropic's most advanced AI model, developed specifically for Project Glasswing's cybersecurity applications. It demonstrates reasoning capabilities that go beyond pattern matching — tracing complex vulnerability chains across interconnected systems. Anthropic considers it too powerful for general release and has restricted access to vetted security research partners.
The Blind Spot You Can't Afford
Anthropic's Project Glasswing revealed something that the cybersecurity industry will be reckoning with for years: the tools everyone trusted were generating confidence without comprehension. Twenty-seven years of automated scanning. Thousands of undetected vulnerabilities. Not because the tools weren't running — because running isn't the same as reasoning.
Your customer feedback stack faces exactly the same structural flaw. Every NPS score you collect, every form response you tally, every survey dashboard you review is giving you coverage. Whether it's giving you understanding is a different question — and if you haven't asked it recently, the Glasswing Principle suggests your blind spot is growing.
The fix isn't incremental. You don't solve a reasoning gap by optimizing a form. You solve it by replacing forms with tools that can actually think — AI-powered conversations that follow up, probe, and surface what your customers aren't telling your surveys.
Perspective AI was built on exactly this premise: that customer research cannot start with a web form. If Project Glasswing has you rethinking your assumptions about automated tools, start by running the blind spot audit above on your own feedback stack. The bugs you find might not be 27 years old — but they've almost certainly been compounding longer than you think.