Thursday 9 November 2023

Written by Sheá Panayi, Cyber Risk Consultant, and Arunava Banerjee CISM, Cyber Risk Consulting Lead,  Zurich Resilience Solutions


99% of organisations are reliant on digital technology.  Consequently, the need for digital resilience is paramount and particularly clear in two domains: cyber risks and artificial intelligence (AI) risks. Building resilience to cyber risks involves creating a) practices to protect computer systems, networks, and data from cyber threats and b) strategies to respond to and recover from cyber security incidents. Traditional cyber resilience measures mitigate some risks associated with using AI, but there are other risks associated with AI that are beyond traditional cyber risk management.

Before building resilience to cyber and AI risks, it is important to understand the different risks.

Cyber Risks

Most cyber risks organisations face fall into the well-known CIA Cyber Security triad; 1) confidentiality, 2) integrity and 3) availability. By investing in these security fundamentals organisations can build resilience to the associated cyber risks;

1) confidential data being infiltrated or leaked,

2) information being tampered with or modified, and

3) critical systems or information being unavailable or possibly destroyed.

The result could be the loss of revenue due to downtime or reputational damage, costly claims and fines due to data privacy infringements and further costs related to response and recovery. In the current climate, organisations are usually more concerned with the confidentiality and availability components of the triad. That’s because loss of sensitive data or critical business functions tend to result in the biggest losses.

AI Risks

The CIA triad is helpful for AI risks too, but while the risks are similar, the prevalent attack methods differ. For example, prompt injections, (where a user submits a prompt into an AI chatbot to encourage it to behave in an unintended manner), can be used to steal data and target system functionality.

  • Confidentiality
    • Data Privacy and Security: As with conventional cyber-attack scenarios, there is a risk that vast amounts of data processed by AI systems can become targets for breaches, leading to privacy violations and legal consequences.
    • Intellectual Property: Some AI models and algorithms are worth a lot of money and took significant investment to build. For example, Chat GPT is projected to make $1 billion revenue in 2024, and it reportedly cost $4.6 million to develop. So, there is a risk of losing proprietary algorithms and models resulting in decreased market share, increased competition and significant disruption.
  • Availability
    • System Availability: The AI risks here are similar to cyber risks. For business-critical AI systems, availability is essential. Therefore, ensuring AI models are resilient against ransomware (software that encrypts systems and holds them for ransom), destruction and unexpected failures, is critical for AI resilience.
    • Functional Corruption: The threats targeting the availability of AI systems can be more nuanced than the typical cyber threat landscape due to the enhanced options for functional corruption. Cyber-attacks targeting the availability of any critical system normally involve ransomware. Whereas, for attacks AI systems, there are highly successful techniques to make a system ‘unavailable’. This is where the system is still technically ‘available’ but is unable to perform its business function making it obsolete.

The key difference between AI risks and traditional cyber risks is integrity. Therefore, the increased adoption of AI is likely to increase the focus on integrity within the cyber industry.

  • Integrity
    • Data Integrity: AI results are only as good as the data processed. If the data is inaccurate, incomplete, or biased, the AI analysis can bring misleading results, leading to poor decision-making. The risks associated with poor quality data differ depending on the system purpose, but the consequences could be extreme. For example, imagine the extreme financial consequences of a machine learning AI system designed to predict and advise on investments that has been provided an inaccurate data set. Whilst organisations work to protect from malicious intent, mistakes and lack of expertise or oversight are normally what threaten data integrity.
    • Data/Model Poisoning: Attackers can tamper with or corrupt the data used to train AI models causing inaccurate results and leading to poor decision making and mistakes. Data poisoning feeds false or misleading data to a black box solution, or through an attacker gaining access to the training data of a white box model and changing it there. Here’s a scenario to put data poisoning into context:
      • Endpoint Detection and Response (EDR) is a Cyber Security solution that uses AI learning capabilities to analyse activity and detect events. If an attacker understands the model and slowly feeds inaccurate activity data points the model could be poisoned to not recognise anomalous activity. In this scenario the integrity of the data input is being changed leading to a model that is wrong with potentially significant errors.

Arguably the most important risk group to consider in using AI is ethics. Debates on the ethics of AI have been consistent for the last few years and the world has responded by starting to regulate in this area.

  • Ethics
    • Bias and Fairness: AI models can perpetuate existing biases in data. Without proactive measures, the learning model may magnify these biases, causing ethical and legal concerns. There have been cases where AI was used to speed up hiring processes and the algorithms were proven to be biased. For example, if the data set showed more men in the role, the recruiting tool would demonstrate bias against women. If an organisation is shown to be forming or perpetuating bias by using or creating AI systems, they could face legal consequences.
    • Transparency and Accountability: The EU’s new AI Act mandates that organisations should disclose copyrighted material used as training data and when any content has been generated by AI. This includes videos, photos, texts, and other data.  There is also a risk that if an organisation has poor governance or employee training, AI use and misuse could go undetected. Without transparency about the use of AI there could be mistrust in lots of materials, such as deep fakes, automated responses, and other outputs from AI systems. A lack of transparency and accountability in AI usage can result in regulatory fines, brand damage and lost customers.
    • Autonomous Dependency: There are certain responsibilities and decisions with far reaching consequences that humans may not feel comfortable relying on AI for. There have also been concerns raised about overreliance, lack of human oversight and even AI systems taking human jobs. Without full consideration of these risks an organisation could be perceived as irresponsible or unethical and even face legal action for encroaching on workers’ rights.

Building Resilience to Cyber Risks

Resilience to cyber risks requires a multi-layered approach that is broader than just cyber security. It should include preventive security measures and strategies to ensure the ability to recover quickly from cyber incidents and adapt to changing threats. For example, in the CIA triad:

  1. maintaining confidentiality requires protective measures like encryption and access controls,
  2. confirming integrity can be achieved through technologies like digital signatures,
  3. providing availability involves using backups and disaster recovery plans to counteract downtime.

The capability to detect an issue and recover from an incident is key to cyber resilience. If an organisation has trained their employees well, has mature monitoring and detection capabilities and has practiced their incident response and recovery plans, they are on their journey to cyber resilience.

Building Resilience to AI Risk

Many of the measures used to build resilience to cyber risks apply to AI risks, especially for confidentiality of data and availability of systems. For both you should start with a comprehensive risk assessment to understand all potential risks and impacts. However, there are a few specific recommendations for building resilience to AI risks.

  • Confidentiality
    • Identify the parts of the AI system that have access to sensitive information. Bear in mind the data inputted by users is normally stored too. Separate this data from front facing elements of the system and protect it. Ensure data is not stored for longer than necessary.
    • Any personal information is subject to standard data protection rules, no matter how it’s used. This includes the extensive data collected for the purposes of training AI. Conduct a data protection impact assessment, gain permission from data subjects and ensure no more data is collected than necessary.
  • Integrity
    • Conduct adversarial testing by trying adversarial inputs (like ones that could cause data poisoning or prompt injects) to identify vulnerabilities.
    • Employ techniques like fairness-aware machine learning and robustness testing help maintain the accuracy of AI models.
    • Hire people with the right skill sets or upskill employees. The human factor is always important, one of the best ways to be resilient to AI risks is to have the skills in to develop, deploy and monitor AI models which are accurate and transparent.
    • To have accurate reliable information, organisations should conduct due diligence on their AI suppliers to inform data lineage, labelling practices and model development. Without this the integrity of the data can’t be validated and any issues found cannot be traced to the root.
  • Ethics
    • Organisations should ensure there is always a ‘human in the loop’ to provide assurance and oversight of the results of AI activity. This prevents or mitigates data integrity risk because the model does not purely self-learn.
    • To avoid the risks associated with bias, data for training purposes should be representative of all groups and users should have the opportunity to help development by challenging the outputs.
    • AI use should be transparent and generally understood, to the point where internal and external users are aware of any interactions with it, even if it is a small part of a process.
    • Use regulatory sandboxes for anti-bias experimentation and encourage inclusive design principles.
    • Build organisational resilience to the risks associated with using AI, involve different business units, such as ethics or governance, risk and compliance. Organisational AI policies should be created, managed and governed collaboratively.

Many AI risks are similar to traditional cyber risks. Building cyber resilience can enhance AI resilience. However, building resilience in AI is not a one-time effort; it’s an ongoing commitment to data quality, model validation, transparency, human oversight, security, and ethical considerations. Resilient AI systems not only perform effectively but also adapt to new challenges and reflect ethical values. By implementing these strategies and best practices, we can create AI technologies that are robust, trustworthy, and capable of navigating the complexities of our ever-evolving digital landscape. In doing so, we ensure AI’s continued positive impact on society.

To find out more visit www.zurich.co.uk/business/our-expertise/zurich-resilience-solutions or contact us on LinkedIn.
 


Sheá Panayi, Cyber Risk Consultant with Zurich Resilience Solutions

Sheá has worked in multiple Cyber Security disciplines, including Cyber Security maturity, strategy and roadmaps, standards and policy writing, and cloud control development. She currently works with Zurich Resilience Solutions providing consultancy and cyber risk management services to support customers on their cyber resilience journeys. Sheá has a Masters in information security where she specialised in Post-Quantum Cryptography.

Email:shea.panayi@uk.zurich.com | Connect with Sheá on LinkedIn

Arunava Banerjee CISM Cyber Risk Consulting Lead with Zurich Resilience Solutions

Arun is responsible for leading the cyber risk consultancy in the UK proving Cyber Risk and Resilience Consultancy services for both insured and non-insured clients. He is also the chair of Zurich’s Global Risk Engineering Technical Centre for Cyber. With a career spanning over 16 years, he brings in extensive cyber risk and resilience consultancy experience across both public and private sector. Arun is a CISM, ISO27001 Lead Implementor, PRINCE2 Practitioner and holds a Master of Business and Management.

Email: arunava.banerjee@uk.zurich.com | Connect with Arun on LinkedIn