Ask around, and you’ll find all the usual activities in place when it comes to vehicle cybersecurity. Plenty of experts, plenty of deliverables, plenty of busy work. The TARA has been checked off, the pentest is scheduled for three months before SOP, work products are in the system. And yet, many teams close out their development programs with open risks. In what follows, we try to explain why this failure is structural — and what actually needs to change: in timing, in team organization, and in the understanding of what automotive testing with respect to cybersecurity can and should actually deliver today.
Philipp Veronesi
Let’s start with the pattern that runs through almost every vehicle program. Anyone who looks into a running vehicle program today — into the security domain of vehicle development — will find cybersecurity. Of course they will.
At least on paper.
There’s a TARA somewhere in the folder structure.
There’s a penetration test scheduled for almost exactly three months before SOP.
There are even concrete security requirements, derived from the risk analysis, now sitting in the requirements tool.
And there’s one or more cybersecurity leads — lately more likely just one — who actually, genuinely know what they’re doing.
And yet: anyone who talks to the engineers, testers, and project leads involved will quickly notice something is off. The activities are there, but they don’t properly connect. The pentest runs, but its findings arrive too late to change anything — and even if they came in time, it’s often unclear who acts on them and how: as a binding basis for design decisions, as risk acceptance, or simply as a report that gets filed. The TARA exists, but it hasn’t influenced architectural decisions — sometimes, quite literally, not at all. Requirements are defined, but they aren’t perceived as actionable in engineering.
Cybersecurity is formally present in many vehicle programs, but structurally embedded too late.
We don’t want to dwell on the specific structures and processes at play — they differ from program to program. What matters here is understanding what this means for the effective impact of testing and for the security maturity of the overall program. It’s not primarily about compliance. It’s about the legitimate question of whether cybersecurity in the vehicle actually works.
Where the Real Vehicle Security Problems Originate: Not in Testing, But Much Earlier
The uncomfortable truth: many of the vulnerabilities found during the pre-SOP penetration test were not newly uncovered in the test. They should have been known long before. Their origin lies months or even years earlier — in the concept phase, in hardware selection, in the definition of the network architecture.
Three typical examples of how early decisions structurally limit later security:
- A gateway ECU is selected without a Hardware Security Module (HSM). This locks in, early on, how cryptographic keys can be stored and how communication can be secured. Or not. The pentest can observe this. It cannot fix it.
- Message authentication is not considered when designing the vehicle network architecture. A retrofit is technically costly and, in many project timelines, simply no longer feasible.
- Trust between ECUs is implicitly assumed, without a verification mechanism. An attacker who compromises one ECU can then cause damage in other systems that may not even have been in scope for the pentest.
Yes, automotive pentesting can make such decisions visible. It cannot undo them.
In these cases, the penetration test becomes less about discovering new problems and more about confirming old decisions.
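To make the second example concrete: retrofitting message authentication is what AUTOSAR SecOC-style protection looks like on the wire. The sketch below is illustrative only — real SecOC typically uses AES-CMAC with keys provisioned via an HSM, while here HMAC-SHA256 stands in so the example needs nothing beyond the Python standard library, and the key, payload, and truncation length are hypothetical values.

```python
import hmac
import hashlib
import struct

SECRET_KEY = b"\x00" * 16   # illustrative; in practice provisioned per channel via an HSM
TRUNCATED_MAC_BITS = 24     # SecOC commonly truncates the MAC to fit the frame

def protect(payload: bytes, freshness: int) -> bytes:
    """Sender side: append a freshness counter and a truncated MAC to a CAN payload."""
    fv = struct.pack(">I", freshness)  # 32-bit freshness value, big-endian
    mac = hmac.new(SECRET_KEY, payload + fv, hashlib.sha256).digest()
    return payload + fv + mac[: TRUNCATED_MAC_BITS // 8]

def verify(frame: bytes, expected_freshness: int) -> bool:
    """Receiver side: recompute the MAC and check freshness to reject replayed frames."""
    mac_len = TRUNCATED_MAC_BITS // 8
    payload = frame[: -4 - mac_len]
    fv = frame[-4 - mac_len : -mac_len]
    mac = frame[-mac_len:]
    if struct.unpack(">I", fv)[0] < expected_freshness:
        return False  # stale counter: likely a replayed frame
    expected = hmac.new(SECRET_KEY, payload + fv, hashlib.sha256).digest()[:mac_len]
    return hmac.compare_digest(mac, expected)

frame = protect(b"\x01\x02\x03\x04", freshness=42)
assert verify(frame, expected_freshness=42)
assert not verify(frame, expected_freshness=43)  # an old frame replayed later is rejected
```

The point of the sketch is the dependency it exposes: both ends need shared keys, synchronized freshness counters, and spare bytes in every protected frame — exactly the things that are cheap to plan for in the concept phase and prohibitively expensive to retrofit.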
This is the structural core of the problem. So where do we go from here?
From Automotive Pentesting Theory to Practice: 5 Areas Where Things Are Shifting
The good news: awareness of exactly this problem is growing. The corresponding shifts are already visible in running projects — accelerated, not least, by the UNECE WP.29 (UN R155/156) regulations and ISO/SAE 21434.
Teams aren’t changing their approach because some industry standard appears to require it. They’re changing because an existing, increasingly outdated setup is starting to break under real-world conditions.
Let’s look at five areas where this transition is already becoming tangible:
1. Security Must Begin in the Concept Phase. Not After.
As long as cybersecurity only kicks in once architecture and hardware have already been decided, testing remains reactive. The leverage lies earlier — in the phase when systems can still be shaped.
What this means concretely: the TARA cannot be a downstream document that describes a system design that’s already been decided. It must be a tool that influences that design. Threat scenarios need to be known early enough to feed into ECU hardware selection, the definition of communication paths, and security requirements for interfaces.
Within the ISO/SAE 21434 methodology, this is where “Cybersecurity Goals” and “Cybersecurity Claims” come into play. But in project practice, there is still a frequent gap between this logic and the actual integration of cybersecurity. Projects need to build a connection between “what the process says” and “what the project reality looks like.”
What seems like upfront effort pays off — for the overall outcome, for the quality of security testing, and because it avoids costly last-minute changes close to SOP. Teams that integrate cybersecurity early into the concept phase come out of the pentest with fewer findings. And with findings that can still be corrected.
2. Safety and Security Teams Need to Think Together, Not in Parallel
In the classic project structure, safety and cybersecurity teams still work in parallel. Separate analyses, separate work products, separate review cycles. Organizationally understandable. Technically problematic.
Modern vehicle systems don’t recognize this boundary.
A manipulated signal on the vehicle network can simultaneously be a safety problem and a security problem. A fallback mechanism designed for functional safety can open an unexpected attack path. And when the safety analysis (HARA) and the TARA are based on different system assumptions, gaps emerge that neither discipline can close on its own.
In projects that have recognized this, clearer interaction models are emerging: joint reviews, aligned system models, explicit interface agreements between disciplines. It’s rarely frictionless. But it’s the only way to ensure that safety and security measures don’t undermine each other.
3. A Single Pre-SOP Pentest Captures a System That Has Long Since Moved On
Software updates arrive regularly. Backend services keep evolving. Vehicle functions increasingly depend on cloud infrastructure. Suppliers deliver new software versions late in the project cycle. OTA infrastructure is built in parallel with hardware and software.
The consequence? Running a single penetration test at the end of the development cycle means taking a snapshot of a system that is permanently changing. The result lands in the report and provides assurance — but only for the state of the system at the time of the test. What comes after is not covered.
Projects that have understood this — and internalized it into their processes — integrate testing earlier and more iteratively:
- Tests at ECU level during development,
- System-level assessments during integration phases,
- Backend tests running in parallel with backend development.
This requires different processes, different tool infrastructure, and a different budget approach. But only then does a picture emerge that actually corresponds to the real risk profile of the vehicle.
4. Reaching Compliance Is Not the Same as Ensuring Engineering Depth
ISO/SAE 21434 has achieved a great deal. The standard creates structure, establishes a common language, and ensures certain work steps don’t get forgotten. It is a necessary foundation. (Accordingly, the new edition of The Essential Guide to ISO/SAE 21434 — the world’s leading reference work — continues to be relevant in 2026. New edition, new title: 1000 Things Worth Knowing in Automotive Cybersecurity)
But ISO/SAE 21434 is not a sufficient condition for a genuinely secure vehicle. The point at which all work products exist and the process checklists are ticked off can be misleading: it looks like a conclusion, but it’s only the formal framework.
What lies behind it is still determined by engineering depth.
Concrete questions that can remain open at this point:
- Are the identified attack paths still valid after the latest changes?
- Are the implemented controls actually mitigating the risks — or were they defined once and never revisited?
- Are test results being used to improve the design, or only to formally close findings?
An example: Secure Boot is implemented, documented, and tested once. What happens when the update mechanism changes? Or when keys are managed differently in production than in the development environment?
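The Secure Boot question above can be sketched as a verification chain. This is a deliberately minimal illustration with hypothetical stage names and images: real secure boot verifies asymmetric signatures anchored in a ROM or HSM key, while a plain digest comparison stands in here to keep the example standard-library only.

```python
import hashlib

def measure(image: bytes) -> str:
    """Digest of a firmware image; stands in for signature verification."""
    return hashlib.sha256(image).hexdigest()

def boot_chain(stages: list[tuple[str, bytes]], trusted: dict[str, str]) -> list[str]:
    """Verify each stage against its trusted reference before it 'runs'; halt on mismatch."""
    booted = []
    for name, image in stages:
        if measure(image) != trusted.get(name):
            raise RuntimeError(f"secure boot halted: {name} failed verification")
        booted.append(name)
    return booted

bootloader, application = b"bl-v1", b"app-v1"
trusted = {"bootloader": measure(bootloader), "application": measure(application)}

# Nominal case: the full chain boots.
assert boot_chain([("bootloader", bootloader), ("application", application)],
                  trusted) == ["bootloader", "application"]

# The question from the text: if the update mechanism replaces the application
# without refreshing the trusted reference, boot halts -- which is why the
# update flow and key management must be revisited, not just tested once.
try:
    boot_chain([("bootloader", bootloader), ("application", b"app-v2")], trusted)
    raise AssertionError("unexpectedly booted an unverified image")
except RuntimeError:
    pass
```

The failure branch is the interesting one: a one-time test of the nominal path says nothing about how the chain behaves once updates, production key management, or a changed update mechanism enter the picture.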
Real value is created when testing flows back into engineering. When assumptions that have been made are questioned, not merely documented.
This requires treating cybersecurity not as a one-time activity to be checked off at a particular project milestone. Security must be structurally anchored from the start and run continuously.
That is the mindset difference between “compliance paper tigers” and real security engineering.
5. Cybersecurity Competence Must Be Built Internally — Permanent Outsourcing Is Not an Option
There was a time — and it’s only a few years back — when vehicle cybersecurity could be covered by a handful of external specialists or service providers. Those providers haven’t gone away. But that approach alone no longer suffices.
Just consider the technological evolution of the vehicle as a product: embedded software, vehicle networks (CAN, Automotive Ethernet), backend APIs, cloud services, telematics systems, and in some cases mobile applications with vehicle connectivity.
Add to that: the required depth in testing is growing — including with respect to the test cases defined in GB 44495.
Today it’s not about running tools. It’s about understanding how systems behave, how they fail, and how they can be manipulated — and ensuring that the insights gained continuously feed back into development.
Accordingly, more organizations — OEMs and suppliers alike — are building internal teams, setting up dedicated labs, and creating specialized roles.
Not necessarily out of deep security conviction, but out of practical necessity.
Self-Check: How Does Your Vehicle Cybersecurity Actually Hold Up in Practice?
A general understanding is always useful — but an honest look at your own situation is even better.
The following questions are intended as a first self-check to help assess how automotive cybersecurity is actually integrated in your own vehicle program or development work — and whether security is formally present but, in practice, developed in isolation.
Don’t worry — this is not intended as another audit checklist, but as a reflection tool for the three roles most frequently affected in practice.
For system engineers building the system:
- Does the TARA actually influence design decisions — or does it describe what has already been decided?
- Are security requirements clearly visible in day-to-day work when something is being implemented?
- Have test results ever led to reconsidering part of the design?
For vehicle cybersecurity leads:
- Are most activities still concentrated in the period close to SOP?
- Do findings consistently and promptly flow back into development discussions?
- Are assumptions regularly validated — or mainly documented and then left untouched?
For project managers in vehicle development:
- Is cybersecurity anchored as a continuous activity in the project plan — or does it only appear at certain milestones?
- Do you have genuine visibility into real risks — or mainly into status, deliverable progress, and formal closures?
- Are interactions between safety, security, and systems engineering actively managed — or left to chance?
If some of these questions aren’t easy to answer, that’s not a sign of poor work.
It’s usually a sign that cybersecurity is formally established — but not yet fully integrated into the development reality.
The difference between these two states is, as practice shows, significant.
Budgets, Processes, Team Boundaries: What Changes When Vehicle Cybersecurity (Testing) Is Taken Seriously
All the shifts described have very concrete implications for what cybersecurity means in a vehicle program and how security activities must be organized today:
- Budgets shift from one-time activities to long-term capabilities.
- Processes must start earlier and stay active longer.
- Teams need clear interfaces and responsibilities — including across discipline boundaries.
- Software developers and engineers need to understand the security controls they’re implementing — not just implement them.
- Testing becomes a continuous discipline, not an occasional measure just before the milestone.
It’s not about doing more.
It’s about doing the right things at the right time in the right place.
And that fundamentally changes the role cybersecurity plays across the entire domain of vehicle testing: away from a late-stage verification activity, toward a structure-giving discipline that accompanies a program from the start.
Where Does This Leave Us? An Outlook
Even as market dynamics spare no one — security included — one thing is clear: cybersecurity in the vehicle space is not going away.
Regulatory pressure — from UN R155 and ISO/SAE 21434, reaching deep into the downstream supply chain, and increasingly also from national regulations (such as AIS 189 in India, the Korean vehicle security regulation, and others) — continually raises the bar for traceability and process quality.
At the same time, vehicle systems are becoming more complex, more connected, and more dependent on software — and therefore more attackable.
The question of whether cybersecurity is present in the project is one that almost no program needs to ask anymore.
The relevant question is a different one: where is cybersecurity actually embedded? And does it intervene before decisions are made that can no longer be corrected later?
Answering that question requires technical depth across the full stack: from the ECU to backend systems, from the concept phase through post-production operation.
This is where the growing importance of automotive pentesting comes into play.
This Is Where BreachLabz Automotive Pentesting Operates. New: with a Location in Bangalore, India
The shifts outlined above — earlier security engagement, tighter integration with safety and systems engineering, iterative testing across the full development lifecycle — simply generate more work.
For OEM teams, for suppliers, and yes: for external automotive pentesting providers like BreachLabz.
At the same time, automotive pentesting has a physical dimension that cannot be argued away: firmware analysis, hardware attacks, secure boot validation, vehicle bus analysis — these are not exclusively remote activities.
They are tasks that require a component in hand. A real ECU. An actual vehicle network. A test environment as close as possible to the production system.
Say Hello to BreachLabz India: Automotive Pentesting On-Site!
That is exactly why BreachLabz — a long-standing partner organization of the CYEQT Knowledge Base — is expanding its global presence with a dedicated location in Bangalore, directly within one of India’s most significant automotive development hubs.
For OEM and supplier teams with development sites in India and Southeast Asia, this means: no more lengthy remote coordination overhead, no more hardware that needs to travel across continents before a test can begin.
The BreachLabz India team works on-site, integrated into existing engineering and security structures, on real ECUs, bus systems, and vehicle platforms.
Those who find that their testing results surface deeper questions — about CSMS maturity, structural gaps in the development process, or regulatory traceability — will find the right advisory framework through the Advisory & Engineering Services of the CYEQT Knowledge Base.