The remediation problem isn't new. What's new is that the math finally broke.
Software supply chain incidents have quadrupled over the last five years, according to IBM. Axios got compromised on npm at the end of March, and it wasn't found by a scanner. Critical and high-severity findings now make up roughly a third of validated vulnerability submissions, up from the historical 26-28%. The backlog is growing, the mix is getting worse, and the time we have to respond is collapsing: from days, to about ten hours now, to a projected minute by 2028. (You can watch the trend at zerodayclock.com.)
The way we've done this work isn't going to survive that compression. When something like Axios drops, the questions are immediate and concrete: what's the blast radius, how do we remove or upgrade the affected package, and how do we clean up whatever the compromise left behind — the RAT, the persistence, the credentials it touched. Today those answers come from waiting on a vendor to ship a detection, then prioritizing the output, then tracking down whoever owns the affected hosts, then orchestrating the fix. That chain made sense when exploits took days to weaponize. It doesn't make sense at ten hours, and it's actively dangerous at one minute.
I've been in cybersecurity for 20 years, most of it at Rapid7 and Censys. I've watched the industry get extraordinarily good at finding problems and stay extraordinarily bad at fixing them. Every year the backlog grew, and every year the tooling stayed the same: a ticket, a spreadsheet, a person. That worked when discovery was slow. It doesn't work anymore, and it isn't going to start working again.
Two years ago it became obvious where this was headed. AI is going to keep finding more vulnerabilities — Mozilla disclosed 271 in a single Firefox release last month, attributed to Anthropic's Mythos. AI is going to keep generating more vulnerable code, because more code is being written by models that learned from the same flawed code we did. And AI is going to keep weaponizing disclosures faster than humans can ticket through. Three forces, one direction. If exploitation is getting cheaper and faster, the only response that adds up is making remediation cheaper and faster on the other side. Anything else is losing math. So we built it. Today, Furl is generally available.
What Furl is
Furl closes vulnerabilities, misconfigurations, compliance gaps, and patch debt across endpoints. We cover desktop and infrastructure today — Macs and Windows — with Linux next, and network after that. We're focused on the remediation half of the problem, not detection, not exposure management, not another scanner. There are two products on top of one engine: continuous remediation and the Forge.
Continuous remediation runs in the background. Furl maintains a live graph of the environment — endpoints, owners, every actionable item on them — and deduplicates findings across whatever scanners are already in place (Rapid7, Qualys, Tenable), pinning every issue back to the host and the owner. Before Furl touches anything, we set the guardrails: what to fix, where, with how much human oversight. Fix high CVEs only on macOS. Require approval before anything runs in production. Set confidence thresholds. Furl only fixes what we tell it to fix, and it surfaces checkpoints along the way — preflight, post-execution verification, rollback if something doesn't behave the way the model expected — so a human can step in at any of them. We track efficacy on every fix Furl runs, and that data is what earns it more autonomy over time. Autonomy in AI is earned, not given. That's the loop — scoped, reversible, running while nobody's watching, and the backlog shrinks against it.
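To make the guardrail model concrete, here is a minimal sketch in Python. All of the names and fields are hypothetical (this is not Furl's actual policy interface); it only models the behavior described above: scoped fixes, approval gates, and confidence thresholds.

```python
from dataclasses import dataclass

# Hypothetical model of a remediation guardrail policy. None of these
# names come from Furl's API; they illustrate the behavior the post
# describes: what to fix, where, and with how much human oversight.

@dataclass
class Guardrails:
    severities: set          # e.g. {"critical", "high"}: what to fix
    platforms: set           # e.g. {"macos"}: where to fix it
    require_approval: set    # environments that need human sign-off
    min_confidence: float    # skip fixes the model is unsure about

@dataclass
class Finding:
    cve: str
    severity: str
    platform: str
    environment: str
    confidence: float        # model's confidence in its proposed fix

def should_autofix(f: Finding, g: Guardrails) -> str:
    """Return 'fix', 'needs_approval', or 'skip' for one finding."""
    in_scope = f.severity in g.severities and f.platform in g.platforms
    if not in_scope or f.confidence < g.min_confidence:
        return "skip"
    if f.environment in g.require_approval:
        return "needs_approval"
    return "fix"

# "Fix high CVEs only on macOS; require approval in production":
policy = Guardrails(
    severities={"critical", "high"},
    platforms={"macos"},
    require_approval={"production"},
    min_confidence=0.8,
)
```

The point of a structure like this is that every automated action is checkable before it runs, and the preflight, verification, and rollback checkpoints give a human a place to intervene at each step.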
The Forge is the other half. When Axios drops on a Friday night, we don't wait for a vendor to ship a detection. We ask Furl three questions and get three answers: what's the blast radius, how do we remove or upgrade the affected package, and how do we clean up whatever it left behind. Furl researches the issue, identifies the targets, authors the checks, authors the remediation strategies — including the cleanup for RATs, persistence mechanisms, and exposed credentials — defines the scope, and ships it. Internally we call this the corpus: a library of remediation strategies that grows every time anyone uses it, and it's what makes us independent of vendor roadmaps.
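As a rough mental model, the Forge workflow above can be written down as an ordered pipeline. The stage names below paraphrase the steps in this post; they are illustrative, not Furl's real interface.

```python
# Hypothetical sketch of the Forge workflow for an incident like the
# Axios compromise. Stage names paraphrase the post; none of this is
# Furl's real interface.

FORGE_STAGES = [
    "research",             # understand the advisory
    "identify_targets",     # blast radius: which hosts carry the package
    "author_checks",        # detections, without waiting on a vendor
    "author_remediations",  # upgrade/remove, plus cleanup of RATs,
                            # persistence mechanisms, exposed credentials
    "define_scope",         # where the fix is allowed to run
    "ship",                 # into the corpus, reusable next time
]

def run_forge(advisory: str, execute) -> list:
    """Run each stage against an advisory via a caller-supplied
    execute(stage, advisory) callable; return the stage outputs."""
    return [execute(stage, advisory) for stage in FORGE_STAGES]
```

Structuring the response as explicit stages is what lets the output of one incident, the checks and remediation strategies, land in a shared library rather than in one responder's head.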
How Furl learns
A remediation program is personal — the right fix for one environment is the wrong fix for another. Furl learns at three timescales: short-term memory inside a single remediation, human feedback across sessions that shapes how it proposes the next fix, and what we call dreaming — sleep-time compute, in the broader AI lingo — where Furl reflects offline on what it ran and consolidates the patterns into long-term memory. Over time, the generic corpus becomes a remediation program tuned to your environment, getting faster and safer at the same time, because both come from the same source: knowing more about the specific place it's running.
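The three timescales can be sketched as three stores with different lifetimes. This is a toy illustration of the idea, not Furl's implementation; every name here is made up.

```python
# Hypothetical sketch of the three learning timescales described above.
# The class and method names are illustrative, not Furl's API.

class RemediationMemory:
    def __init__(self):
        self.session = []        # short-term: inside one remediation
        self.feedback = []       # cross-session: human feedback on fixes
        self.long_term = {}      # consolidated patterns, tuned to one env

    def observe(self, event: str):
        """Short-term memory: everything seen during the current fix."""
        self.session.append(event)

    def record_feedback(self, note: str):
        """Human feedback that shapes how the next fix is proposed."""
        self.feedback.append(note)

    def dream(self):
        """Offline consolidation ('sleep-time compute'): reflect on what
        ran, fold recurring events into long-term memory, and clear the
        session buffer."""
        for event in self.session:
            self.long_term[event] = self.long_term.get(event, 0) + 1
        self.session.clear()
```

The division of labor matters: the session buffer is cheap and disposable, while anything that survives consolidation is, by construction, a pattern that recurred in this specific environment.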
Who this is for
Desktop and infrastructure teams with endpoint security backlogs they can't clear by hand and breaking CVEs they can't keep up with. If a team has been triaging the same critical for a week because the script doesn't exist yet, that's the buyer.
Why this is different
Every incumbent in this space optimizes for finding, and the economic incentive runs that way — more findings means more product surface to sell. Nobody is structurally motivated to close the loop. We are. That's the whole company. The other piece is that Furl gets better the more it runs: every fix, every preflight, every rollback informs the next one, across every customer. The corpus compounds.
If you've been waiting for the other half of the security industry to show up — it's here.
— Derek