I've spent my career as an elite security researcher hunting vulnerabilities. My job has always been to think like an attacker: find the gaps and exploit the loopholes.
When I bring that same mindset to third-party risk, I find exactly what I expect: companies are treating their biggest attack surface with spreadsheets and self-reported questionnaires. The discipline that should be engineering risk is stuck doing compliance theater.
This post is about changing that. It's about what happens when you apply vulnerability research thinking to vendor risk.
Your vendor passed the assessment. SOC 2 Type II, privacy controls available, approved.
Then you discovered your developers have been sending production secrets to an AI-powered code editor for six months because privacy mode was off by default and nobody knew to turn it on.
Or you learned from a class action lawsuit that your business communications platform has been using customer call recordings to train AI without consent, and they added a Philippines-based transcription service that's processing customer SSNs spoken on support calls.
Or your customer engagement platform quietly removed "we do not sell your data" from their privacy policy after a breach and lawsuit, and you found out months later.
The assessment didn't catch it because it asked if controls exist, not if anyone's using them.
TPRM is an audit process applied to an engineering problem. And audits can't find what vendors don't tell you.
A risk engineer finds what could actually go wrong, not what vendors say about their controls.
The output isn't a score. It's: this is broken, this breaks if the vendor fails, here's how to fix it.
It's the difference between an audit and a penetration test. One asks if you're secure. The other proves it and prepares for the moment something breaks.
Risk engineering is required when risk isn't obvious from documentation alone. Sometimes risk emerges because usage changes. Other times, the risk exists from the start but only becomes visible once you understand how the relationship actually works.
Here's how an AI-powered code editor would be treated under TPRM vs. risk engineering.
The Vendor: Provides AI code completion and editing for developers.
Your Environment: 50 backend developers writing code with database credentials, API keys, and customer data queries.
The risk engineering method: read the vendor's documentation, analyze the default settings, and correlate both with how your developers actually use the tool.
TPRM asks:
• Do you have data privacy controls?
• Can users control what data is shared?
• Is data encrypted?
• Do you have a SOC 2?
Risk engineering asks:
• What does the vendor collect by default?
• Is privacy mode on or off by default?
• What are our developers actually doing in this tool?
• What's in the code they're writing?
The vendor's answers:
✅ Yes, privacy controls available
✅ Yes, privacy mode available
✅ Yes, TLS 1.3
✅ Yes, SOC 2 Type II
What risk engineering finds:
• Privacy mode is OFF by default
• When OFF, vendor collects: all code written, all prompts, all edits, all files opened
• Your developers write backend code with database queries and API integrations
• Developers hardcode API keys during development
• Production AWS credentials likely sent to vendor
The vendor's answer:
"Yes, we have privacy controls. Users can enable privacy mode."
The fix:
1. Audit all developer installations
2. Enable privacy mode organization-wide
3. Rotate any API keys that may have been exposed
4. Add privacy mode to developer onboarding checklist
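Step 3 assumes you can find the keys that may have leaked. Here is a minimal sketch of that scan in Python. The AWS access key ID format (the `AKIA` prefix followed by 16 uppercase alphanumerics) is documented by AWS; the generic pattern, the file layout, and the `.py`-only filter are illustrative assumptions, not a complete secrets scanner.

```python
import re
from pathlib import Path

# AKIA + 16 uppercase alphanumerics is AWS's documented access key ID shape.
# The generic pattern below is a rough illustration; tune it for your stack.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, pattern name) for every suspected secret."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, name))
    return hits
```

Every hit becomes a rotation candidate; a dedicated scanner will do this better, but even this sketch turns "keys that may have been exposed" into a concrete list.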
Because the real question isn't "do privacy controls exist?"
The real question is "are your developers sending production secrets to a third party right now?"
Current TPRM can't answer that. Risk engineering can.
Current TPRM tools are built for auditors. Risk engineering is built for, well, risk engineers. It gives you three ways to find what vendors don't tell you:
Third-Party Artifacts — Analyzes SOC 2 reports, penetration tests, security policies.
Public Intelligence — Monitors breaches, lawsuits, policy changes, subprocessor additions.
Blast Radius Monitor — Connects to Okta, Wiz, Netskope. Shows who's using each vendor and what permissions they have.
These three sources work together to find actual exposure. No single signal is a finding on its own. But "privacy mode OFF by default + 50 developers using it + none enabled privacy mode" = production secrets exposed right now.
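That arithmetic can be made concrete. Below is a sketch of the correlation logic with made-up data structures; the field names are illustrative assumptions, not any real tool's export format.

```python
from dataclasses import dataclass

@dataclass
class VendorConfig:
    # From third-party artifacts: what the tool does out of the box.
    privacy_mode_default_on: bool

@dataclass
class Seat:
    # From blast-radius sources (IdP / endpoint exports): per-user reality.
    user: str
    privacy_mode_enabled: bool
    handles_production_code: bool

def exposure_report(config: VendorConfig, seats: list[Seat]) -> tuple[list[str], str]:
    """Correlate vendor defaults with per-seat reality.

    A questionnaire stops at "privacy mode available"; this asks whether
    each seat actually has it turned on.
    """
    exposed = [s.user for s in seats
               if not s.privacy_mode_enabled and s.handles_production_code]
    finding = f"{len(exposed)}/{len(seats)} seats sending production code to the vendor"
    if not config.privacy_mode_default_on:
        finding += "; vendor default is OFF, so every new seat starts exposed"
    return exposed, finding
```

Run against the case study above (an OFF default, 50 backend developers, none opted in), it returns all 50 seats and a one-line finding, which is the answer the questionnaire never produces.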
Risk engineers can finally verify what's actually happening, understand their actual exposure, and take specific action.
Traditional TPRM is a "check-the-box" audit process that relies on vendor-supplied documentation (questionnaires and SOC 2 reports). Risk engineering is a proactive security discipline. It focuses on the live interface between a vendor and your organization, using forensic artifact analysis and real-time monitoring to identify actual production exposure, not just theoretical compliance.
Static questionnaires can't catch what your procurement team doesn't know exists. Risk engineering integrates with your security stack (e.g., Wiz, Netskope) to detect unsanctioned AI tools and integrations as they appear, and by mapping each tool's blast radius it lets security teams mitigate the exposure before it becomes an incident.
Can a vendor be fully compliant and still put you at risk?
Absolutely. Compliance is a snapshot of a vendor's past; risk is a reality of your present. A vendor can meet every SOC 2 requirement while shipping a tool with "opt-out" privacy defaults that ingest your IP into their training models. Risk engineering identifies the configuration drift that traditional audits miss.
Does risk engineering replace your GRC platform?
No, it powers it. Risk engineering takes over the manual, high-latency work inside your GRC, integrating with your existing workflow to turn a static database into a live, automated defense platform that calculates real-time impact instead of just storing PDFs.
With over a decade of experience in cybersecurity, Tomer has a distinguished background in the Israeli Intelligence Community, where he specialized in vulnerability research and led major security research projects. Prior to co-founding Lema, he served as the Research Lead at the API security unicorn Noname Security. Tomer holds an MBA from Tel Aviv University and is a recognized expert in building secure, scalable AI-driven architectures.