Your Clients Know You're Lying About Incident Reports
Simon Sinek tells a story about a leader who asks their assistant to tell a caller “I’m not here” — when they’re clearly sitting right there. It seems harmless. But what it communicates to every person within earshot is simple: in this organization, lying is acceptable when it’s convenient. As Sinek argues in a related piece, honesty isn’t a value you declare — it’s a behavior you demonstrate or don’t.
You can write “honesty” on the wall. You can put “integrity” in the company values deck. The behavior you model is the actual policy.
A quick distinction, because it matters: there’s a difference between incomplete information in the fog of an active incident, legally cautious phrasing during a sensitive disclosure, and rewriting reality after the fact when the facts are already known. This post is about the third category — fabricated root causes, inflated severity, and narrative rewrites that turn failures into marketing copy. Not early-stage uncertainty. Not legal review. Deliberate misrepresentation.
If you’ve worked in managed services or agency environments, you already know what this looks like. You’ve probably written the honest version of an incident report and watched it get rewritten before it reached the client.
Two emails about the same fix
An engineer resolves a client issue and writes the update:
Good afternoon — I’ve completed the fix for your application. The issue was related to a configuration change in your vendor’s API. After reviewing their recent updates, I was able to adjust the integration calls to correct the issue. Please test the deployment and let me know if anything is amiss.
What the client actually receives, after the update passes through a client-facing coordinator:
Hello! Good news! We’ve fixed the issue! Let us know if there’s anything else we can help you with!
One of those tells the client what happened, why, and what to do next. The other tells them nothing, cheerfully.
The results are predictable. The detailed version gets a “Thank you” and no follow-up — the client has what they need. The filtered version, on the other hand, almost always generates another round: “Can you tell me what happened?” or “What was actually wrong?” The vague, hedging reassurance creates more work, not less — because the client still wants the answer, and now someone has to circle back and provide the information that should have been in the first email.
This pattern has a defense mechanism built in. When the engineer starts pre-writing client-ready summaries — deliberately simple, no jargon, just the facts — the coordinator pushes back: “That’s too technical for the client.”
“Configuration issue with their API” is not technical. It’s a sentence. But when the person filtering communication can’t evaluate the content, they default to the safe, empty version, or just don’t include it at all.
Each incident has to be bigger than the last
Vague communication creates a second problem: escalation. When you’re not telling clients what actually happened, you need to tell them something. And that something tends to get more dramatic over time.
A search engine bot ignores robots.txt and starts crawling aggressively — hundreds of requests per second. It’s a nuisance. You block the bot, adjust your rate limiting, move on. Ten minutes of work.
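For scale, the entire fix fits in a screenful of code. Here's a minimal sketch, assuming a Python/Flask app sits in front of the site; in practice you'd usually do this at the web server or CDN layer, and the bot name and limits below are made up for illustration:

```python
# Minimal sketch of the "ten minutes of work": block the misbehaving crawler
# and add a crude per-IP rate limit. Names and thresholds are illustrative.
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENTS = ("HypotheticalBot",)   # the crawler ignoring robots.txt
WINDOW_SECONDS = 10                     # sliding window for rate limiting
MAX_REQUESTS = 50                       # per-IP ceiling within the window

_recent = defaultdict(deque)            # ip -> timestamps of recent requests

@app.before_request
def block_and_rate_limit():
    agent = request.headers.get("User-Agent", "")
    if any(bot in agent for bot in BLOCKED_AGENTS):
        abort(403)                      # block the bot outright

    now = time.time()
    hits = _recent[request.remote_addr]
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()                  # drop timestamps outside the window
    hits.append(now)
    if len(hits) > MAX_REQUESTS:
        abort(429)                      # too many requests from this IP
```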
What gets communicated to the client: “Your site was under attack. We’ve resolved it and migrated you to a new server for added protection.”
Except the site is behind a CDN with DDoS protection built in. Migrating to a new server doesn’t stop an “attack” — you’d toggle a setting on the CDN. The explanation doesn’t even make technical sense if given thirty seconds of thought. But it sounds decisive, and it frames the provider as the hero instead of the cause.
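If the traffic actually were an attack, the real mitigation would be one API call, not a migration. A rough sketch, assuming the site sits behind Cloudflare (the source doesn't name the CDN, and the zone ID and token below are placeholders, not details from any real incident):

```python
# Rough sketch of "toggle a setting on the CDN": raise the zone's security
# level to Under Attack mode via Cloudflare's zone settings endpoint.
import os

import requests

ZONE_ID = os.environ["CF_ZONE_ID"]      # placeholder: your zone identifier
API_TOKEN = os.environ["CF_API_TOKEN"]  # placeholder: a scoped API token

resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/settings/security_level",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"value": "under_attack"},     # temporarily raise the challenge level
    timeout=10,
)
resp.raise_for_status()
```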
Another example: bot traffic originating from a foreign IP range — everyday noise hitting thousands of sites simultaneously — becomes “It looks like you’re being targeted from overseas.” For a small organization without technical staff, that’s terrifying. And it’s also completely untrue.
The escalation has to keep ratcheting because you’ve already inflated the previous incidents. “Bot swarm” becomes “attack.” “Attack” becomes “targeted from overseas.” Each incident has to sound bigger than the last, because the bar for what counts as significant keeps rising. Where does it end?
This (usually) comes from the top
The most impressive version of this pattern is the catastrophic failure repackaged as a proactive initiative. Shared infrastructure goes down hard. Someone spends weeks — sometimes months — in recovery mode, rebuilding from scratch under pressure. Grueling work that should be recognized for what it is: disaster recovery executed by someone who deserves a lot of credit.
The client communication: “We’ve completed a major security upgrade so that we can serve you better.”
Not a failure. Not a recovery. An upgrade. The person who rebuilt the thing goes from the one who saved the company from a catastrophe to a supporting player in a marketing narrative about continuous improvement.
This is where Sinek’s framework lands with full weight. When the person at the top is being deceptive — even by exaggeration, even by omission — it filters down. Every engineer who knows the server wasn’t actually attacked, every team member who knows the “upgrade” was a recovery — they all understand what the real values are. Not the ones on the wall. The ones in the emails.
And once that framing is normalized, it filters into everything. Every incident gets a spin pass before the client sees it. Postmortems become performance pieces. The institutional memory — the documentation, the incident history — becomes unreliable because it reflects what was communicated, not what happened. You can’t learn from incidents you’ve rewritten. You can’t improve processes that your records say were fine. This isn’t hypothetical — a Keeper Security study found that 41% of known cyber incidents weren’t reported internally to management, largely due to fear and cultural pressure. The spin starts small, but the underreporting becomes systemic.
What honest communication actually sounds like
It’s not complicated:
A configuration issue caused degraded performance for approximately two hours this morning. The root cause was resource contention — the server was handling more concurrent traffic than its current allocation supports. We’ve increased the allocation and are monitoring to confirm stability. We’re also reviewing the provisioning for your other services to make sure they have adequate headroom.
What happened. Why. What you did. What you’re doing to prevent it next time. No villain, no hero narrative. Just a clear account that treats the client as a competent adult who can handle the truth about their own systems. This pattern — acknowledge, explain impact, detail actions, outline prevention — shows up in every serious framework for incident communication. It’s not a novel idea. It’s just rarely practiced.
Clients who get honest incident reports develop confidence that when you say things are fine, things are actually fine. Clients who get spin learn to treat every communication as potentially unreliable — including good news. As ilert puts it in their MSP incident management guide: avoid vague terms like “working on it” — clients should always feel they’re kept in the loop with meaningful updates. Even a “no change” update reassures clients the issue is being actively worked on. Substance over cheerfulness.
Trust erodes before anyone says anything
The biggest cost of this pattern is invisible until it isn’t. You might think you’re protecting the client. Maybe it’s a self-image thing. Maybe you want the organization to seem more capable than it is. But every inflated incident report quietly erodes trust capital. The client may not realize it at first, but the erosion compounds, and eventually it comes back.
People are better at detecting inauthenticity than we give them credit for. They might not know what you’re hiding. But they know something doesn’t add up. And they’re filing it away, waiting for the pattern to confirm itself. Even Uber’s infamous attempt to cover up a breach proved the point — concealment always costs more than disclosure.
I’d rather be direct with a client about a bad day than eloquent about a fictional one.
One caveat: this post is about the principle — being direct with clients about what happened and why. There’s a separate, harder question about where honesty meets legal exposure. Incident reports are discoverable documents. There’s a meaningful difference between “we had a configuration issue” and “we were grossly negligent in our provisioning” — both might be true, but they carry different legal weight. That tension — how to be honest without writing your opposing counsel’s opening argument — deserves its own treatment, and I’ll be writing about it soon.
For more on how communication failures corrode institutional memory: Every Management Failure Is a Retrieval Failure. For how I build integrity standards into systems that can’t drift: The Governance Documents.
Sources: Simon Sinek — “Honesty Is NOT a Value” (values are behaviors, not declarations) · Keeper Security / IT Brew — “Cyber Attacks Are Grossly Underreported” (41% of known incidents unreported internally; 43% cite fear of consequences) · eMazzanti — “Transparent Communication After a Security Breach” (acknowledge → explain impact → detail actions → outline prevention framework) · ilert — “Incident Management for MSPs Guide” (avoid vague “working on it” updates; substance over reassurance) · Blackfog — “Is Transparency Important Beyond Compliance After a Cyberattack?” (Uber cover-up case study; concealment costs vs. disclosure trust) · FireHydrant — “A Practical Guide to Incident Communication” (clear language, empathy, audience-tailored updates)