The impact question is really around scale; a few weeks ago Anthropic claimed 500 "high-severity" vulnerabilities discovered by Opus 4.6 (https://red.anthropic.com/2026/zero-days/). There's been some skepticism about whether they are truly high severity, but it's a much larger number than what BigSleep found (~20) and Aardvark hasn't released public numbers.
As someone who founded a company in the space (Semgrep), I really appreciated that the DARPA AIxCC competition required players using LLMs for vulnerability discovery to disclose $cost/vuln and the confusion matrix of false positives along with it. It's clear that LLMs are super valuable for vulnerability discovery, but without that information it's difficult to know which foundation model is really leading.
What we've found is that giving LLM security agents access to good tools (Semgrep, CodeQL, etc.) makes them significantly better esp. when it comes to false positives. We think the future is more "virtual security engineer" agents using tools with humans acting as the appsec manager. Would be very interested to hear from other people on HN who have been trying this approach!
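To make "agent + tools" concrete, here's roughly the shape of the loop (a minimal sketch, not anything we ship; the Semgrep JSON field names and the model id are from memory, so treat them as placeholders):

    # Minimal sketch: Semgrep as the detector, an LLM as the triage pass.
    # JSON field names and the model id are placeholders -- verify against your setup.
    import json
    import subprocess

    import anthropic  # pip install anthropic

    def run_semgrep(path: str) -> list[dict]:
        """Run Semgrep with the default registry rules and return raw findings."""
        out = subprocess.run(
            ["semgrep", "scan", "--config", "auto", "--json", path],
            capture_output=True, text=True, check=False,
        )
        return json.loads(out.stdout).get("results", [])

    def triage(finding: dict, client: anthropic.Anthropic) -> str:
        """Ask the model to argue exploitability of one finding, with full-file context."""
        with open(finding["path"]) as f:
            source = f.read()
        prompt = (
            f"Semgrep rule {finding['check_id']} flagged line "
            f"{finding['start']['line']} of {finding['path']}:\n"
            f"{finding['extra']['message']}\n\nFull file:\n{source}\n\n"
            "Trace the data flow from source to sink. Reply with VULNERABLE or "
            "FALSE_POSITIVE on the first line, then a short justification."
        )
        msg = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model id
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    if __name__ == "__main__":
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
        for finding in run_semgrep("."):
            verdict = triage(finding, client)
            if verdict.startswith("VULNERABLE"):
                print(finding["check_id"], finding["path"])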
michael-bey•about 23 hours ago
>There's been some skepticism about whether they are truly high severity
To be honest this is an even bigger problem with Semgrep and other SAST tools. Developers just want the 0.1% of findings that actually lead to issues, but flagging patterns will always lead to huge false positive rates.
I do something similar to what you suggested and it does work well - pattern match + LLM. The downside is this only applies to SAST, and so far nobody has found a way to address the findings that make up 90% of a security team's noise, namely SCA and container images.
tkp-415•about 23 hours ago
My first use case of an LLM for security research was feeding Gemini Semgrep scan results of an open source repo. It definitely was a great way to get the LLM to start looking at something, and provide a usable sink + source flow for manual review.
I assumed I was still dealing with lots of false positives from Gemini due to using the free version and not being able to have it memorize the full code base. Either way combining those two tools makes the review process a lot more enjoyable.
nikcub•1 day ago
> What we've found is that giving LLM security agents access to good tools (Semgrep, CodeQL, etc.) makes them significantly better
100% agree - I spun out an internal tool I've been using to close the loop with website audits (more focus on website sec + perf + seo etc. rather than appsec) in agents and the results so far have been remarkable:
https://squirrelscan.com/
Human-written rules with an agent step that dynamically updates config to squash false positives (with verification) and find issues, while also allowing the LLM to reason.
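Roughly, the suppress-and-verify step looks like this (a toy sketch; the config shape and the run_rules/agent_review helpers are invented for illustration, not the actual tool):

    # Toy sketch of "agent proposes a suppression, we only accept it after verification".
    # The config shape and helper functions are invented for illustration.
    import copy

    def try_suppression(config: dict, finding: dict, run_rules, agent_review) -> dict:
        """Accept an agent-proposed suppression only if a re-run shows that exactly
        the flagged finding disappears and nothing else changes."""
        verdict, reason = agent_review(finding)    # LLM reasons about the finding
        if verdict != "false_positive":
            return config                          # keep the rule firing as-is

        candidate = copy.deepcopy(config)
        candidate.setdefault("suppress", []).append({
            "rule": finding["rule_id"],
            "path": finding["path"],
            "reason": reason,                      # keep an audit trail
        })

        before = run_rules(config)
        after = run_rules(candidate)
        if len(before) - len(after) == 1:          # only the one finding went away
            return candidate
        return config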
niros_valtos•about 19 hours ago
Definitely not a surprise they shipped it. This is manageable for a small subset of repos scanned once.
The reality is that code changes frequently, and such rescans are expensive, especially with thinking models. You can open a PR too, but then there are other missing workflows, such as rebasing when there are conflicts or finding the devs with the right expertise to review/test the fix.
Bottom line - I see it as an interesting research tool, but not more than that.
upghost•1 day ago
Anakin: I'm going to save the world with my AI vulnerability scanner, Padme.
Padme: You're scanning for vulnerabilities so you can fix them, Anakin?
Anakin: ...
Padme: You're scanning for vulnerabilities so you can FIX THEM, right, Annie?
nikcub•1 day ago
I assume that's why this is gated behind a request for access from teams / enterprise users rather than being GA
but there are open versions available built on the cn OSS models:
https://github.com/lintsinghua/DeepAudit
The GA functionality is already here with a crafted prompt or jailbreak :)
nikcub•about 23 hours ago
it's gone a bit unnoticed that they've stopped support for response prefilling in the 4.6 models :/
SerCe•about 21 hours ago
What's incredibly ironic is that research labs are releasing the most advanced hacking toolkit ever known, and cybersecurity defence stocks are going down as a result somehow. There’s no logic in the stock markets.
czbond•1 day ago
There will definitely be a fight against bad actors pulling open source software projects, npm packages, etc. in bulk and running this for their own 0-days.
I hope Anthropic can place alerts for their team to look for accounts with abnormal usage pre-emptively.
tptacek•1 day ago
You want frontier models to actively prevent people from using them to do vulnerability research because you're worried bad people will do vulnerability research?
czbond•1 day ago
Not at all. I was suggesting that if an account is making source-code-level scanning requests across "numerous" codebases, it could be an account of interest - a sign of misuse.
This is different than someone's "npm audit" suggesting issues with packages in a build and updating to new revisions. Also different than iterating deeply on source code for a project (eg: nginx web server).
tptacek•1 day ago
I don't understand the joke here.
ukuina•1 day ago
A vuln scanner is dual-use.
RupertSalt•about 24 hours ago
It's an Internet trope — we could link to knowyourmeme, or link to the HN Guidelines
baby•about 23 hours ago
As a founder of an auditing firm, I definitely feel the heat of the competition when big LLM companies push products that compete not only with us as auditors but also with our own AI-based offerings (https://zkao.io/).
If I were to venture a guess, there are different worlds in which we might exist in the next 5-10 years.
In one of these futures, we, as auditors, seize to exist. If this is the future, then developers seize to exist too, and most people touching software seize to exist. My guess here is as good as any developer's guess on if their job will remain stable.
In another one of these futures, us auditors become more specialized, more niche, and bring the "human touch" needed or required. Serious companies will want to continue working with some humans, and delegating security to "someone". That someone could be embedded in the company, or they could be a SaaS+human-support system like zkao.
On the other hand, vibe coders will definitely use claude code security, maybe we should call it "vibe security"? I don't mean it as a diss, I vibe code myself, but it will most likely be as good as vibe coding in the sense that you might have to spend time understanding it, it might make a lot of mistakes, and it will be "good enough" for a lot of usecases.
I think that world is a bit more realistic today than the AGI "all of our jobs are gone in the next years" doom claim. And as @zksecurityXYZ, I don't think we're too scared of that world.
These tools have been, and are making us stronger auditors. We're a small, highly specialized team, that's resilient and hard to replace. On the other hand large consultancies and especially consultancies that focus on low hanging fruits like web security and smart contracts are ngmi.
ping00•about 22 hours ago
Respectfully (not trying to be pedantic but helpful): it's "cease" not "seize" in this context :)
deadbabe•about 22 hours ago
Developers will not cease to exist. The developers of tomorrow will simply be doing things that developers today can't possibly even imagine.
Auditors though, they are cooked.
viccis•about 22 hours ago
>Auditors though, they are cooked.
I think you're massively underestimating the complexity and depth of a good security audit service.
tptacek•about 21 hours ago
I don't.
baby•about 15 hours ago
Dev and auditors are two sides of the same coin, if one exists the other does as well. Perhaps they will be the same person, but systems don’t exist without tradeoffs and security considerations.
deadbabe•about 3 hours ago
Believe me, they do.
kypro•about 19 hours ago
Developers of tomorrow will be everyone with a computer in the same way everyone today is a calculator.
sanketsaurav•1 day ago
FWIW Claude Code (Opus 4.5) scores ~71% accuracy on the OpenSSF CVE Benchmark that we ran against DeepSource (https://deepsource.com/benchmarks).
We have a different approach, in that we're using SAST as a fast first pass on the code (it also helps ground the agent - more effective than just asking the model to "act like a security researcher"). Then we're using pre-computed static analysis artifacts about the code (data flow graphs, control flow graphs, dependency graphs, taint sources/sinks) as "data sources" accessible to the agent when the LLM review kicks in. As a result, we're seeing higher accuracy than others.
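To picture the setup, something in the spirit of this sketch, where the pre-computed artifacts are exposed as tools the agent can call mid-review (hypothetical tool names and schemas, not our actual API; the model id is a placeholder):

    # Illustrative only: expose pre-computed analysis artifacts as tools for the agent.
    # Tool names, schemas, and the model id are placeholders, not a real product API.
    import anthropic

    TOOLS = [
        {
            "name": "get_taint_paths",
            "description": "Return pre-computed taint paths from untrusted sources "
                           "to sensitive sinks for a given file.",
            "input_schema": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
        {
            "name": "get_callers",
            "description": "Return call sites of a function from the pre-built call graph.",
            "input_schema": {
                "type": "object",
                "properties": {"function": {"type": "string"}},
                "required": ["function"],
            },
        },
    ]

    def start_review(diff: str):
        """Kick off a review turn; the caller then resolves any tool_use blocks
        against the static-analysis backend and feeds tool_result blocks back."""
        client = anthropic.Anthropic()
        return client.messages.create(
            model="claude-sonnet-4-5",  # placeholder
            max_tokens=2048,
            tools=TOOLS,
            messages=[{
                "role": "user",
                "content": "Review this diff for vulnerabilities. Call the tools to "
                           "check data flow before reporting anything:\n" + diff,
            }],
        )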
Haven't gotten access to this new feature yet, but when we do, we'll update our benchmarks.
nadis•1 day ago
> "Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss."
Fascinating! Our team has been blending static code analysis and AI for a while and think it's a clever approach for the security use case the Anthropic team's targeting here.
jcgrillo•about 22 hours ago
That quote jumped out at me for a different reason... it's simply a falsehood. Claude code is built with an LLM which is a pattern-matching machine. While human researchers undoubtedly do some pattern matching, they also do a whole hell of a lot more than that. It's a ridiculous claim that their tool "reasons about your code the way a human would" because it's clearly wrong--we are not in fact running LLMs in our heads.
If this thing actually does something interesting, they're doing their best to hide that fact behind a steaming curtain of bullshit.
nadis•about 21 hours ago
That's a fair point and agreed that human researchers certainly do more than just pattern match. I took it as sort of vision-y fluff and not literally, but do appreciate you calling that out more explicitly as being wrong.
dboreham•about 20 hours ago
It's all pattern matching. Your brain fools you into believing otherwise. All other humans (well not absolutely all) join in the delusion, confirming it as fact.
jcgrillo•about 19 hours ago
I suppose I should have been more specific--pattern matching in text. We humans do a lot more than processing ascii bytes (or whatever encoding you like) and looking for semantically nearby ones. If "only" because we have sensors which harvest more varied data than a 1D character stream. Security researchers may get an icky feeling if they notice something or another in some system they're analyzing, which leads eventually to something exploitable. Or they may beat their head against a problem all day at work on a Friday, go to the bar afterwards, wake up with a terrible hangover Saturday morning, go out to brunch, and while stepping off the bus on the way to the zoo after brunch an epiphany strikes like a flash and the exploit unfurls before them unbidden like a red carpet. LLMs do precisely none of this. And then we can go into their deficiencies--incapable of metacognition, incapable of memory, incapable of reasoning (despite the marketing jargon), incapable of determining factual accuracy, incapable of estimating uncertainty, ...
bink•1 day ago
I hope this is better than their competitors products. So far I've been underwhelmed. They basically just find stuff that's already identified by static analysis tooling and toss in a bunch of false positives from the AI scans.
david_shaw•1 day ago
There's a lot of skepticism in the security world about whether AI agents can "think outside the box" enough to replicate or augment senior-level security engineers.
I don't yet have access to Claude Code Security, but I think that line of reasoning misses the point. Maybe even the real benefit.
Just like architectural thinking is still important when developing software with AI, creative security assessments will probably always be a key component of security evaluation.
But you don't need highly paid security engineers to tell you that you forgot to sanitize input, or you're using a vulnerable component, or to identify any of the myriad issues we currently use "dumb" scanners for.
My hope is that tools like this can help automate away the "busywork" of security. We'll see how well it really works.
samuelknight•1 day ago
LLMs and particularly Claude are very capable security engineers. My startup builds offensive pentesting agents (so more like red teaming), and if you give it a few hours to churn on an endpoint it will find all sorts of wacky things a human won't bother to check.
tptacek•1 day ago
I am seeing something closer to the opposite of skepticism among vulnerability researchers. It's not my place to name names, but for every Halvar Flake talking publicly about this stuff, there are 4 more people of similar stature talking privately about it.
decidu0us9034•1 day ago
People use whatever tools are the most effective, and they have plenty of incentive not to talk publicly about them. I think the era of openness has passed us by. But why does stature matter anyway? If I look at Chromium or MSRC bug reports, scarcely any of the submitters are from Europe/US, and they certainly don't have anything resembling stature. That guy hasn't done anything of note in the field in a long time from what I know; he's kind of a boomer (you too, no disrespect).
lich_king•about 21 hours ago
Vulnerability research is exciting and profitable, but it has three problems. First, it's mentally exhausting. Second, the income it generates is very unpredictable. Third, it's sort of... futile. You can find 1,000 vulnerabilities and nothing changes.
So yeah, it's the domain of young folks, often from countries where $10k or $100k goes much farther than in the US. But what happens to vulnerability researchers once they turn 35? They often end up building product security programs or products to move the needle, often out of the limelight. They're the ones who write checks to the young uns to test these defenses and find more bugs, and they're the ones who will be making the call to augment internal or external testing with LLMs.
And FWIW, the fact that the NSA or the SVR now need to pay millions for a good weaponized zero day is a testament to this "boomer" work being quite meaningful.
ping00•about 21 hours ago
as a pentester at a Fortune 500: I think you're on the mark with this assessment. Most of our findings (internally) are "best practices"-tier stuff (make sure to use TLS 1.2, cloud config findings from Wiz, occasionally the odd IDOR vuln in an API set, etc.) -- in a purely timeboxed scenario, I'd feel much more confident in an agent's ability to look at a complex system and identify all the 'best practices' kind of stuff vs a human being.
Security teams are expensive and deal with huge streams of data and events on the blue side: seems like human-in-the-loop AI systems are going to be much more effective, especially with the reasoning advances we've seen over the past year or so.
fatherwavelet•about 6 hours ago
We will have the age of the centaur across all white-collar domains. I don't think how long that age lasts is all that relevant before it has even happened.
The question is not human in the loop but how many humans in the loop?
Then I think about what a team of 3-4 centaurs looks like. For me, it looks like the unemployment line. I am sure there are people on this board who are in the top 5% of whatever domain is in question. They will be part of the centaur while most people are just redundant.
If you try to counter this with a nineteenth-century economic heuristic about coal use, I don't think it works.
tptacek•about 21 hours ago
Every conversation I've been a party to has been premised on humans in the loop; I think fully-automated luxury space vulnerability research is something that only exists in message board imaginations.
awestroke•1 day ago
Claude Opus 4.6 has been amazing at identifying security vulnerabilities for us. Less than 50% false positives.
drcongo•1 day ago
I thought they'd noticed how many of my Claude tokens I've been burning trying to build defences against the AI bot swarms. Sadly not.
reconnecting•1 day ago
Is it only crawlers or bots that abuse your product?
We have been developing our own system (1) for several years, and it's built by engineers, not Claude. Take a look — maybe it could be helpful for your case.
1. https://github.com/tirrenotechnologies/tirreno
Asking for a friend who’s working on a startup around this general space: do you think it’s better to go niche, focusing on agents for a specific type of application or a specific language/ecosystem, or is that effectively “killing the startup” by limiting market size too soon?
Another question that came up in conversations with them: there might be value in offering a nonscalable, high-touch service, where you build and maintain customized agents tailored to a client’s specific codebase on a periodic basis.
tptacek•about 21 hours ago
I think it's probably a bad idea to do an "AI looking for vulnerabilities" startup, since the frontier labs have all basically declared that they believe that's a feature of a coding agent and not a standalone product.
dboreham•about 20 hours ago
No dog in this fight, but the frontier labs might suck at marketing to such customers, and suck at servicing their needs.
DyslexicAtheist•about 22 hours ago
Just when European legislators enshrined SAST scanning into law (Cyber Resilience Act, Radio Equipment Directive, ...), AI comes around and makes it redundant. Not saying SAST is dead, but it sure can't compete with AI today when it's about signal vs. noise.
vimda•about 23 hours ago
I would love to know how this compares to just prompting Claude Code with "please find and fix any security vulnerabilities in this code"
deadbabe•1 day ago
Solve a problem and everyone praises you.
No one knows you also caused that problem.
rvz•about 19 hours ago
Fixing an outage is the same thing.
No person would admit to the outage that happened, but you will see them screaming that they are at $FAMOUS_COMPANY.
Anthropic has so many outages (every week)[0] that if there were a Polymarket, you could easily make millions betting on when the next incident happens.
[0] https://status.claude.com/
Limited preview for researchers, who will be hand-picked to write positive reviews.
Enough of this frontier grifting. Make it testable for open source developers at no cost and without login or get lost. You won't of course, because you'd get an unfiltered evaluation instead of guerilla marketing via blog posts, secrecy, and name-dropping researchers that cannot be disclosed.
grosswait•about 22 hours ago
It’s a free market. The cream will rise to the top eventually regardless of astroturfing or not. And it will be replicated in FOSS too, so no need to be angry.