Breaking the jargon is a key to success
August 2025

When I joined a cybersecurity startup as their first in-house designer, I inherited a dashboard that was "not working"—diplomatic speak for "we have no idea what we're looking at." It managed non-human identities: API keys and service accounts that outnumber humans 1000:1 yet remain invisible. Like discovering your house has a thousand secret doors.
Session recordings revealed the truth: users clicked on charts that didn't click back. The only real interaction was with a map buried halfway down the page. We'd built the equivalent of dumping puzzle pieces on a table and calling it a picture.

Four Acts and a Revelation
I restructured the dashboard as a story told in four acts:
Act One: "You Have a Problem."
We started with an overview that quantified the scope—showing companies they had thousands of non-human identities when they thought they had dozens. But numbers alone don't convey risk. We mapped where these identities lived, which critical assets they could access, and crucially, which systems we couldn't monitor. If their most sensitive data sat in unintegrated platforms, that blind spot became part of the story.

Act Two: "Here's Our Secret Sauce."
The company had developed a sophisticated method to identify which identities posed real risk—analyzing patterns, permissions, and behaviors that distinguished dangerous credentials from routine ones. But we'd buried this intelligence under proprietary terms like "shielding" and "zero trust" without explanation. I pushed to expose the logic: show users how we determine risk scores, why certain identities get flagged, and what specific actions would actually reduce their exposure.
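To make "expose the logic" concrete, here is a minimal sketch of what a transparent risk score can look like: every factor is named and weighted, and the score ships with the human-readable reasons behind it. The fields, thresholds, and weights are all illustrative assumptions, not the company's actual method.

```python
# Hypothetical transparent risk scoring: the score and its reasons are
# computed together, so the UI can always answer "why was this flagged?"
from dataclasses import dataclass

@dataclass
class Identity:
    days_since_rotation: int    # how stale the credential is
    is_admin: bool              # holds elevated permissions
    touches_sensitive_data: bool
    last_used_days_ago: int     # dormant identities are riskier

def risk_score(identity: Identity) -> tuple[int, list[str]]:
    """Return a 0-100 score plus the reasons that produced it."""
    score, reasons = 0, []
    if identity.days_since_rotation > 365:
        score += 30
        reasons.append("credential not rotated in over a year")
    if identity.is_admin:
        score += 30
        reasons.append("holds admin-level permissions")
    if identity.touches_sensitive_data:
        score += 25
        reasons.append("can access sensitive assets")
    if identity.last_used_days_ago > 90:
        score += 15
        reasons.append("dormant for over 90 days")
    return min(score, 100), reasons

stale_admin = Identity(days_since_rotation=1100, is_admin=True,
                       touches_sensitive_data=False, last_used_days_ago=200)
score, reasons = risk_score(stale_admin)
```

The design point is that the reasons list, not the number, is what turns jargon into a journey: users see the same logic the system used.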

Act Three: "Don't Touch That!"
This section addressed the elephant in the room: remediation isn't simple. Unlike resetting passwords, rotating API keys or removing service accounts can cascade through your infrastructure. One wrong move could break critical integrations or halt production systems. We built a dependency visualization showing what would break if you rotated specific credentials—turning abstract risk into concrete consequences.
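The idea behind the dependency visualization can be sketched as a simple graph walk: before rotating a credential, list everything downstream of it. The graph contents here are invented for illustration; the real product's data model is not shown.

```python
# Sketch of a "blast radius" check: walk dependency edges breadth-first
# to find everything that could break if a credential is rotated.
from collections import deque

# edges: key = a credential or service, value = things that depend on it
depends_on_me = {
    "payments-api-key": ["billing-service"],
    "billing-service": ["invoicing-job", "customer-portal"],
    "invoicing-job": [],
    "customer-portal": [],
}

def blast_radius(credential: str) -> set[str]:
    """Everything that could break, directly or transitively,
    if this credential is rotated without coordination."""
    affected, queue = set(), deque([credential])
    while queue:
        node = queue.popleft()
        for dependent in depends_on_me.get(node, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected
```

Rendering that set as a graph is what turned abstract risk into concrete consequences: "rotating this key takes down invoicing and the customer portal" is a very different conversation than "this key is risky."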

Act Four: "The Investigation Hooks."
With thousands of data points, finding meaningful threats was like searching for a needle in a haystack. So we created specific entry points—not just "top risks" but discoveries designed to trigger that "aha" moment. Using the company's investigation methodology, we surfaced surprising vulnerabilities: the service account created three years ago that still has admin access, the API key connecting to your payment processor that's never been rotated. These hooks gave analysts immediate, actionable leads rather than forcing them to dig through endless data.
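An investigation hook is, at its simplest, a targeted filter over the identity inventory. This sketch shows one such hook, the "old service account that still has admin access" finding; the data shape and the three-year threshold are assumptions for illustration, not the company's methodology.

```python
# Illustrative "investigation hook": surface old service accounts that
# still hold admin access, instead of a generic "top risks" list.
from datetime import date

identities = [
    {"name": "ci-deploy-bot", "created": date(2022, 5, 1), "admin": True},
    {"name": "metrics-reader", "created": date(2024, 1, 10), "admin": False},
]

def stale_admin_hook(today: date, min_age_years: int = 3) -> list[str]:
    """Return findings phrased the way an analyst would read them."""
    findings = []
    for ident in identities:
        age_years = (today - ident["created"]).days / 365
        if ident["admin"] and age_years >= min_age_years:
            findings.append(
                f'{ident["name"]}: created {age_years:.0f} years ago, '
                "still has admin access"
            )
    return findings
```

Each hook trades breadth for surprise: one sharply phrased finding beats a sortable table of thousands of rows.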

From Jargon to Journey
The visual design had all the hierarchy of a parking lot—everything was equally important, which meant nothing was. I introduced what designers call "information architecture" and what humans call "making sense."
Critical risks got the red-alert treatment. Safe zones receded into calming grays. Interactive elements actually looked interactive (revolutionary, I know). The previous color scheme had treated our brand palette like a suggestion rather than a system—I made it law.
But the real innovation was those investigation hooks—breadcrumbs leading analysts straight to their first meaningful discovery, instead of leaving them to weigh every vulnerability equally.

The Invisible Made Inevitable
Here's what designing for cybersecurity taught me: your job isn't simplifying complexity—it's making invisible problems feel urgent and real. Non-human identities exist in the blind spot of most mental models. They're the digital equivalent of dark matter: everywhere, essential, and completely overlooked.
The redesigned dashboard didn't just improve metrics. It changed conversations. Design partners went from "What am I looking at?" to "What else should I be worried about?"—the sweetest music to a security company's ears.
As the sole designer in a startup shipping at startup speed, you don't get to wait for perfect. But you do get to think holistically. There's no handoff between design and strategy when you're the entire design department.
The dashboard that "didn't work" wasn't broken—it just wasn't speaking anyone's language. Not the buyers calculating risk, not the users hunting threats, and certainly not the thousands of non-human identities silently running the show. My job was simple: give everyone a voice in a conversation they didn't know they needed to have.
Because sometimes the best design isn't about making things beautiful. It's about making the invisible impossible to ignore.