Guide

The Evolution of AI-Driven Cyberthreats in Formula 1 and the Automotive Industry

Formula 1 has always been a platform for groundbreaking technology, with many advancements eventually making their way into consumer cars. Innovations in aerodynamics and hybrid power units that originated in F1 have influenced the automotive industry significantly. However, the rapid integration of artificial intelligence (AI) in F1 has introduced a new era of cyber vulnerabilities. 

These threats are not confined to the racetrack. They have direct and alarming implications for the automotive sector, which is following F1’s lead in integrating AI into its vehicles and operations. What happens on the track today could be the key to understanding and mitigating cyber risks on the roads tomorrow.

So, in honor of the 2024 season concluding, and in eager anticipation of spring bringing with it the start of the 2025 season, allow us to follow this particular rabbit hole and view the pinnacle of mobility tech through a cybersecurity lens.

The Cyber Battleground in F1

F1 teams collect over a million data points from each car during a single race weekend. 1 AI systems analyze this data to optimize performance, predict failures, and outthink competitors. But this reliance on AI makes F1 teams a prime target for cyber adversaries.

Advanced Persistent Threats (APTs)

While F1 teams rely on data analytics for performance optimization, they are also increasingly vulnerable to cyberattacks targeting critical race strategies. APTs target the algorithms teams use to calculate tire wear, fuel consumption, and weather impact. By intercepting or manipulating these predictions, attackers can disrupt a team’s entire race strategy.

  • Telemetry Data Interception and Manipulation: F1 teams rely on data from many sensors on their cars. If an attacker captures and alters this data, such as tire pressure or engine performance readings, teams are misled into bad decisions. For example, if tire wear data is altered, a team might pit at the wrong time, ruining its race plan. 2 Read our article for a deep dive into the role of telemetry in F1.

  • Race Strategy Prediction and Exploitation: Teams use predictive algorithms to refine their race strategy, including tire changes and pit-stop timing. An attacker could tamper with the weather forecasts or fuel calculations fed into these algorithms, leading to misguided decisions such as unnecessary pit stops or delayed tire changes. 3

  • Real-Time Sensor Data Corruption: Real-time sensor data, which drives instant race decisions, can also be targeted. By corrupting readings from tire temperature or fuel sensors, attackers could mislead a team into incorrect assumptions about the car’s condition, resulting in costly mistakes during the race. 4

  • Digital Twins Manipulation: F1 teams use digital twins, virtual models of their cars, to simulate performance in different race conditions. 5 Attackers could infiltrate and manipulate these models, causing discrepancies between the simulated and real-world performance of the vehicle. This could lead to decisions based on inaccurate predictions, ultimately affecting race outcomes. 6
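A standard defense against this class of telemetry tampering is message authentication. The sketch below (field names and the shared key are hypothetical, stdlib only) signs each telemetry packet with an HMAC so a receiver can reject frames that were modified in transit:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"team-telemetry-key"  # hypothetical shared secret

def sign_packet(packet: dict) -> dict:
    """Attach an HMAC-SHA256 tag computed over the packet's canonical JSON."""
    payload = json.dumps(packet, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"data": packet, "tag": tag}

def verify_packet(signed: dict) -> bool:
    """Recompute the tag and compare in constant time; reject on mismatch."""
    payload = json.dumps(signed["data"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

packet = {"tire_pressure_psi": 21.5, "engine_rpm": 11800}
signed = sign_packet(packet)
assert verify_packet(signed)                      # untouched packet passes
signed["data"]["tire_pressure_psi"] = 25.0        # attacker alters tire data
assert not verify_packet(signed)                  # tampering is detected
```

Authentication alone does not stop an attacker who compromises the key, but it raises the bar well above passive interception and blind manipulation.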

Machine Learning Models Poisoning

Teams train machine learning models to predict everything from optimal lap times to engine efficiency. These models are exposed to data poisoning attacks, in which adversaries inject tainted samples into the training data. Such attacks degrade model accuracy, leading to poor decisions on race day. Adversaries are increasingly targeting the ML models used in:

  • Aerodynamic optimization systems
  • Tire wear prediction algorithms
  • Power unit performance analysis

Autonomous Systems Vulnerability

Modern F1 cars incorporate numerous autonomous systems that are susceptible to AI-driven attacks. For example, the code below (simplified for explanation) highlights a basic vulnerability in a DRS (Drag Reduction System) activation controller. An attacker could craft a malicious sensor signal that changes the sensor_data value before it reaches the is_activation_safe() method, tricking the system into activating DRS when it shouldn't.

# Example of a vulnerable DRS activation system
class DRSController:

    def __init__(self):
        # Sensor reading is trusted as-is: no authentication, no range check
        self.sensor_data = self.get_sensor_data()
        self.activation_threshold = self.calculate_threshold()

    def is_activation_safe(self):
        # Vulnerable to data manipulation: a spoofed sensor_data value
        # can satisfy this check and force DRS activation
        return self.sensor_data > self.activation_threshold
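A possible mitigation, sketched below (a simplified illustration, not any team's actual implementation; the sensor ranges and margins are hypothetical), is to validate readings against physical plausibility bounds and require agreement between redundant sensors before acting on them:

```python
class HardenedDRSController:
    # Physically plausible range for the (hypothetical) sensor reading
    SENSOR_MIN, SENSOR_MAX = 0.0, 5.0
    MAX_DISAGREEMENT = 0.1  # redundant sensors must agree within this margin

    def __init__(self, activation_threshold: float):
        self.activation_threshold = activation_threshold

    def is_activation_safe(self, primary: float, secondary: float) -> bool:
        # Reject readings outside the physically plausible range
        if not all(self.SENSOR_MIN <= r <= self.SENSOR_MAX
                   for r in (primary, secondary)):
            return False
        # Require redundant sensors to agree before trusting the value
        if abs(primary - secondary) > self.MAX_DISAGREEMENT:
            return False
        return primary > self.activation_threshold

ctrl = HardenedDRSController(activation_threshold=1.0)
assert ctrl.is_activation_safe(2.0, 2.05)      # consistent, in-range reading
assert not ctrl.is_activation_safe(9.9, 2.0)   # spoofed out-of-range value rejected
assert not ctrl.is_activation_safe(2.0, 3.0)   # sensor disagreement rejected
```

Range checks and redundancy do not make the system tamper-proof, but forging two independent, mutually consistent, in-range signals is considerably harder than forging one.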

From Track to Driveway

The automotive industry is adopting many of the technologies pioneered in F1:

  • Electronic Control Units (ECUs) with AI-powered optimization
  • Over-the-air (OTA) update capabilities
  • Advanced Driver Assistance Systems (ADAS)
  • Connected vehicle telematics

AI is central to autonomous driving, predictive maintenance, and vehicle connectivity. But with these advancements come shared vulnerabilities.

Attacks on Autonomous Vehicles (AVs)

Autonomous vehicles rely on AI to make split-second decisions. A cyberattack on these systems could manipulate sensor data, causing accidents. 8 For instance, an adversary could exploit a car’s AI to misinterpret a stop sign as a yield, leading to catastrophic results.
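One common mitigation pattern for perception attacks like this is redundancy with majority voting across independent sources, for example camera classification cross-checked against lidar-based detection and an HD map. The sketch below is a toy illustration of the voting idea only; real AV perception stacks are far more complex:

```python
from collections import Counter

def classify_sign(camera: str, lidar: str, hd_map: str) -> str:
    """Cross-check independent sources; fall back to the safest action on dispute."""
    votes = Counter([camera, lidar, hd_map])
    label, count = votes.most_common(1)[0]
    if count >= 2:
        return label          # at least two independent sources agree
    return "stop"             # no majority: assume the most conservative reading

# Camera fooled into reading the stop sign as a yield sign: outvoted
assert classify_sign("yield", "stop", "stop") == "stop"
# All three sources disagree: default to the safe action
assert classify_sign("yield", "speed_limit", "stop") == "stop"
```

The design choice here is fail-safe behavior: when sources conflict, the system degrades toward the most cautious interpretation rather than trusting any single, possibly spoofed, input.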

AI-Powered Attacks on Vehicle Connectivity

As connected vehicles rely on AI to manage data flow between systems such as infotainment, navigation, and critical functions like braking and steering, they become susceptible to AI-based cyberattacks. Attackers can target communication channels such as Vehicle-to-Vehicle (V2V) or Vehicle-to-Everything (V2X) links, injecting false signals that may cause the vehicle to execute dangerous maneuvers, such as sudden braking or acceleration, compromising safety.
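One widely discussed countermeasure is plausibility checking of incoming V2X messages: before acting on another vehicle's report, verify that the motion it claims is physically possible. A minimal sketch (the message fields and deceleration limit are assumptions for illustration):

```python
from dataclasses import dataclass

MAX_DECEL_MPS2 = 12.0  # assumed hard physical braking limit (m/s^2)

@dataclass
class V2XMessage:
    timestamp: float   # seconds
    speed_mps: float   # reported speed of the sending vehicle

def is_plausible(prev: V2XMessage, curr: V2XMessage) -> bool:
    """Reject messages implying an impossible deceleration between reports."""
    dt = curr.timestamp - prev.timestamp
    if dt <= 0:
        return False   # out-of-order or replayed message
    decel = (prev.speed_mps - curr.speed_mps) / dt
    return decel <= MAX_DECEL_MPS2

prev = V2XMessage(timestamp=10.0, speed_mps=30.0)
# Spoofed "emergency stop": 30 m/s to 0 m/s in 0.1 s (~300 m/s^2)
spoofed = V2XMessage(timestamp=10.1, speed_mps=0.0)
honest = V2XMessage(timestamp=10.1, speed_mps=29.0)

assert not is_plausible(prev, spoofed)  # flagged: do not slam the brakes
assert is_plausible(prev, honest)
```

In practice such checks sit alongside cryptographic message signing (as in the IEEE 1609.2 security framework for V2X), so a forged message must defeat both the signature and the physics.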

Vehicle Network Exploits

Modern cars use Controller Area Networks (CAN) to communicate between subsystems. These networks are vulnerable to attacks that can disable safety features or even take control of the vehicle. 10 Threat actors can use AI to automate the discovery of vulnerabilities in CAN protocols or to execute adaptive, real-time exploits that bypass traditional security measures. For example, AI can generate attack patterns tailored to specific vehicle models, making exploits more effective.

On the flip side, AI-powered intrusion detection systems (IDS) can analyze CAN traffic for anomalies, flagging potential threats before they escalate. Machine learning models trained on typical CAN data flows could identify deviations indicative of a cyberattack, providing a proactive layer of security.
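A lightweight technique from the CAN IDS literature is timing-based anomaly detection: most CAN IDs are broadcast at fixed periods, so injected frames show up as abnormally short inter-arrival times. A simplified sketch (the CAN ID, period, and tolerance are illustrative):

```python
class CANTimingIDS:
    """Flag CAN frames that arrive much sooner than the ID's learned period."""

    def __init__(self, tolerance: float = 0.5):
        self.tolerance = tolerance   # fraction of the period a frame may be early
        self.period = {}             # learned broadcast period per CAN ID
        self.last_seen = {}          # last arrival time per CAN ID

    def learn(self, can_id: int, period_s: float):
        self.period[can_id] = period_s

    def observe(self, can_id: int, t: float) -> bool:
        """Return True if the frame looks injected (too soon after the last one)."""
        anomaly = False
        if can_id in self.last_seen and can_id in self.period:
            gap = t - self.last_seen[can_id]
            anomaly = gap < self.period[can_id] * self.tolerance
        self.last_seen[can_id] = t
        return anomaly

ids = CANTimingIDS()
ids.learn(0x1A0, period_s=0.010)           # e.g. a status frame every 10 ms
assert ids.observe(0x1A0, 0.000) is False  # first frame, nothing to compare
assert ids.observe(0x1A0, 0.010) is False  # on schedule
assert ids.observe(0x1A0, 0.011) is True   # injected frame only 1 ms later
```

A production IDS would combine timing with payload-level models, but even this simple rule catches naive injection, because an attacker's frames must coexist with the legitimate ECU's periodic traffic.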

Ransomware at Scale

Imagine a scenario in which an entire fleet of connected cars is held hostage by ransomware, with drivers unable to start their vehicles until a ransom is paid, a chilling prospect for consumers and manufacturers alike.

Lessons in AI from F1 for Automotive Cybersecurity

Formula 1’s approach to cyber threats, especially with the integration of AI, offers valuable lessons for the automotive sector as it faces similar challenges in protecting increasingly connected vehicles.

Real-Time Threat Detection

Formula One teams rely heavily on AI-powered systems to monitor telemetry data for real-time anomalies. Because these systems can identify even the slightest departures from typical patterns, teams can respond quickly to potential threats. Automakers can deploy similar systems in consumer cars to proactively identify and stop cyberattacks before they impact steering or braking. 11 These real-time AI systems can continuously monitor vehicle health, watching for unusual patterns such as sudden sensor failures or network irregularities, and catching breaches before they escalate.
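At its simplest, this kind of real-time monitoring can be a streaming statistical check: keep a rolling window of recent readings and flag values that deviate sharply from it. A toy sketch (the window size, z-score limit, and tire-temperature numbers are illustrative, not a production detector):

```python
from collections import deque
from statistics import mean, stdev

class StreamingAnomalyDetector:
    """Flag readings more than z_limit standard deviations from a rolling window."""

    def __init__(self, window: int = 20, z_limit: float = 4.0):
        self.values = deque(maxlen=window)
        self.z_limit = z_limit

    def check(self, reading: float) -> bool:
        anomaly = False
        if len(self.values) >= 5:                  # need some history before judging
            mu, sigma = mean(self.values), stdev(self.values)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_limit:
                anomaly = True
        if not anomaly:
            self.values.append(reading)            # only learn from normal data
        return anomaly

det = StreamingAnomalyDetector()
for temp in [90.1, 90.3, 89.9, 90.0, 90.2, 90.1, 89.8]:  # steady tire temps (C)
    assert det.check(temp) is False
assert det.check(140.0) is True                   # sudden spoofed spike flagged
```

Excluding flagged readings from the window (as above) keeps an attacker from slowly "teaching" the detector that extreme values are normal, though a patient adversary could still drift within the z-score limit, which is why layered defenses matter.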

AI-Powered Collaboration 

Despite their rivalry, Formula One teams have started exchanging cybersecurity knowledge, recognizing the need to fight cybercriminals as a group. The automotive sector needs this degree of collaboration as well. By sharing knowledge about AI weaknesses, attack trends, and defenses, manufacturers can work together to protect the industry as a whole, including by developing standard AI-driven tools for identifying and mitigating cyber threats. 12

Secure Development

The rapid adoption of AI in both F1 and the automotive sector comes with the risk of adversarial attacks and data manipulation. F1 teams are increasingly focused on ensuring that their AI models are trained on clean, verified data and tested rigorously for vulnerabilities. Similarly, automotive manufacturers must ensure that vehicle AI systems, such as autonomous driving algorithms or advanced driver-assistance systems (ADAS), are secure from manipulation. 13 This requires regular validation of AI models, securing training datasets, and developing defense mechanisms against adversarial inputs.

Regulatory Alignment 

To ensure the security of digital systems in Formula One, the FIA has implemented cybersecurity guidelines addressing the sport's growing reliance on connected technologies. For the automotive industry, the ISO/SAE 21434 standard provides a roadmap for vehicle cybersecurity, focusing on risk management, secure software updates, and AI-driven systems. 14 By aligning their AI security processes with these standards, automakers can protect their cars from emerging threats and comply with international legal requirements.

The Road Ahead

As AI continues to revolutionize F1 and the automotive industry, the stakes for cybersecurity have never been higher. F1 teams are constantly learning to defend against real-time threats under extreme conditions. These lessons are invaluable for automotive manufacturers, who face the challenge of scaling such defenses to millions of vehicles worldwide.

The intersection of F1 innovation and automotive adoption is not just about faster cars or more intelligent systems; it’s about creating a secure future for mobility. By acting now, both industries can ensure that AI remains a force for progress, not a tool for disruption.

References

  1. Amazon Web Services. Formula 1 race strategy and data. AWS. link
  2. Sepio Systems. (2023, December 4). Formula 1 and cybersecurity: A high-stakes race for data security. Sepio Cyber. link
  3. Bekker, J., & Lotz, W. (2009). Planning Formula One race strategies using discrete-event simulation. Journal of the Operational Research Society, 60. link
  4. Diginomica. (2023, December 8). The mechanics of real-time streaming of Formula 1 data. Diginomica. link
  5. Middlesex University. (2023, November 20). Digital twins: The future of Formula 1 racing. Middlesex University Digital Technologies. Retrieved January 6, 2025, from link
  6. Automotive Technology. (2023, November 28). The role of simulation and digital twins in automotive development. Automotive Technology. link
  7. Giannaros, A., et al. (2023). Autonomous vehicles: Sophisticated attacks, safety issues, challenges, open topics, blockchain, and future directions. Journal of Cybersecurity and Privacy, 3, 493-543. link
  8. Automotive Dive. (2023, December 15). Automotive cybersecurity challenges and risk mitigation strategies. Automotive Dive. link
  9. Haas, R., & Moller, D. (2017). Automotive connectivity, cyber attack scenarios, and automotive cyber security. In Proceedings of the 2017 European Simulation and Modelling Conference (pp. 635-639). link
  10. Buttigieg, R., Farrugia, M., & Meli, C. (2017). Security issues in controller area networks in automobiles. In Proceedings of the 2017 International Conference on Smart Technologies and Applications (pp. 93-98). link
  11. The Engineer. (2023, December 12). How AI is shaping automotive cybersecurity. The Engineer. link
  12. Bitdefender. (2023, December 5). 5 cybersecurity lessons leaders can take away from Formula 1 racing. Bitdefender. link
  13. LeddarTech. (2023, November 22). Cybersecurity in ADAS: Protecting connected and autonomous vehicles. LeddarTech. link
  14. ISO. (2023, November 10). ISO/SAE 21434:2021 - Road vehicles — Cybersecurity engineering. International Organization for Standardization. Retrieved January 6, 2025, from link

Contact us to learn more about our solutions.