Mitigating security risks with ChatGPT and LLMs

By cedric-gilissen
October 10, 2023
5 min read

100 million people have used ChatGPT. Here is what we have learned about the security risks - and how you can mitigate them.

Management gurus put tech leaders on pedestals for placing full trust in their employees (work from anywhere, async, and so on). Trust is an inspiring managerial practice that has been shown to boost employee productivity [1].

But then came the Samsung data leak bombshell.

It dropped like a Galaxy Note 7 exploding on a plane. Granted, Samsung may not have the best reputation when it comes to mitigating security risks. Still, it's a good exercise to break down what happened earlier this year and see why being flexible with your data security policy might not be the holy grail of 21st-century managerial practice.

It turns out some Samsung employees couldn't resist sharing their most prized possessions - trade secrets and source code - with none other than ChatGPT, the AI language model that became their not-so-secret confidant.

However, we're not here to unveil a juicy saga; we want to expose the risks and consequences of entrusting your organisation's crown jewels to an AI-powered chatterbox. And help you mitigate those risks.

When employees start spilling their company's secret sauce to ChatGPT, it's like opening Pandora's box of cybersecurity nightmares. The Samsung data leak revealed the harsh reality of unauthorised data dissemination and the potential for adversaries to exploit sensitive information.

It's enough to make you wonder if AI language models are more like AI language gossip mongers, ready to spill the tea to the highest bidder (or in this case, the most clever prompter). Protecting your intellectual property has never been more crucial - cue the dramatic music!

So - what can you do? 

TL;DR

  1. Trust your employees, introduce a usage policy and organise training (moderate mitigation)
  2. Trust your employees and create a self-hosted environment for them to interact with LLMs (high mitigation)

Balancing the benefits of AI assistance with the need to keep trade secrets under lock and key requires strategy and execution. We are here to help.

Data privacy

Data privacy is a critical aspect when it comes to conversational AI systems like ChatGPT. Companies need to ensure compliance with data protection regulations such as the General Data Protection Regulation (GDPR). Here is how you do that:

  1. Secure Your Data: Ensure that your organisation has robust data protection measures in place, such as encryption, secure storage, and strict access control. This will help protect sensitive information from unauthorised access and potential misuse.
  2. Train Your AI Responsibly: Train your AI models on carefully curated data, taking care to exclude sensitive information or personally identifiable information (PII). This can help reduce the likelihood of your AI model accidentally leaking sensitive information.
  3. Monitor AI Output: Implement monitoring and filtering mechanisms to detect and prevent the generation of malicious or harmful content by AI models. This may include automated content moderation or human review processes to ensure the quality and safety of AI-generated outputs (a minimal filter sketch follows this list).
  4. Regularly Update Your AI Models: Update your AI models regularly to ensure they are equipped with the latest security patches and fixes, as well as trained on the most up-to-date and relevant data.
  5. Limit AI's Autonomy: Establish clear boundaries for AI-generated content to prevent AI from producing outputs that could harm your organisation's reputation or integrity. This may involve setting guidelines or implementing content filters to prevent the generation of specific types of content.
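
To make point 3 concrete, here is a minimal sketch of what an automated output filter could look like. The patterns and the `filter_output` helper are illustrative only; a real deployment would lean on a dedicated secret-scanning or DLP tool rather than a handful of regexes.

```python
import re

# Illustrative patterns for data that should never leave the system.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def filter_output(text: str) -> tuple[str, list[str]]:
    """Redact anything matching a sensitive pattern and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

# Run every model response through the filter before it reaches the user,
# and keep the findings for human review.
response = "Sure! The key is api_key=sk-12345, or mail ops@example.com."
safe_text, findings = filter_output(response)
print(safe_text)   # secrets and addresses replaced with [REDACTED ...]
print(findings)    # ['email', 'api_key']
```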

Unintentional Information Disclosure

ChatGPT's lack of context retention poses a risk of unintentional information disclosure during conversations. The model may inadvertently reveal sensitive information that users have previously shared. To address this vulnerability, developers and users must exercise caution while interacting with ChatGPT and avoid sharing personally identifiable information or confidential data. Organisations can also supplement ChatGPT's capabilities by implementing additional security measures, such as pre-processing user inputs to remove sensitive information or integrating context-awareness mechanisms.
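
One way to implement that pre-processing is to run every prompt through a PII detector before it leaves your network. The sketch below assumes Microsoft's open-source Presidio library; the `scrub` helper and the example prompt are our own illustration, not a prescribed setup.

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

def scrub(prompt: str) -> str:
    """Replace detected PII with placeholders before the prompt is sent externally."""
    results = analyzer.analyze(text=prompt, language="en")
    return anonymizer.anonymize(text=prompt, analyzer_results=results).text

user_prompt = "Summarise this email from jane.doe@acme.com about invoice 4521."
safe_prompt = scrub(user_prompt)
# safe_prompt now reads something like:
# "Summarise this email from <EMAIL_ADDRESS> about invoice 4521."
# Only safe_prompt is forwarded to ChatGPT or any other external model.
```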

The Role of Governance and Compliance

In addition to implementing the best practices mentioned above, organisations should also focus on developing a strong governance framework for AI usage. This includes establishing clear policies, guidelines, and procedures for using ChatGPT-like tools, as well as ensuring compliance with relevant data protection regulations, such as CCPA.

It is crucial to keep all stakeholders, including employees, clients, and partners, informed about the organisation's AI usage and the measures taken to ensure data security. Transparency and open communication can help foster trust and confidence in your organisation's use of AI tools like ChatGPT.

Self-Hosting GPT-like Models

There is an even better solution: enhance your security and gain greater control over your conversations by self-hosting GPT-like models. Self-hosting allows organisations to run the AI model on their own infrastructure, enabling enhanced data control and tailored security measures. Here's why it's worth considering (a minimal self-hosting sketch follows the list):

  1. Enhanced Data Control: By self-hosting the model, organisations retain complete control over their data. Sensitive information remains within their network, reducing the risks associated with data exposure or unauthorised access.
  2. Tailored Security Measures: Self-hosting empowers organisations to implement custom security protocols specific to their needs. This includes encryption mechanisms, access controls, regular security audits, and compliance with industry-specific regulations and internal policies.
  3. Compliance and Regulatory Alignment: Self-hosting GPT-like models ensures alignment with data protection laws, industry regulations, and internal policies. Organisations can tailor their security practices to meet the requirements of various compliance frameworks, mitigating legal risks and reinforcing trust.
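
What does "self-hosting" look like in practice? Here is a minimal sketch, assuming the Hugging Face transformers library and an open-weight model; the model name and generation settings are illustrative, not a recommendation.

```python
from transformers import pipeline

# Everything below runs on your own hardware; no prompt or output leaves your network.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
    device_map="auto",                           # use local GPU(s) when available
)

prompt = "Rewrite our data-retention policy in plain language:\n"
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

In a real deployment you would put this behind an internal API, combined with the access controls described in the next section, rather than calling it directly from user-facing code.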

Best Practices for Self-Hosting GPT-like Models

Implementing self-hosting successfully requires careful consideration and adherence to best practices. Here are some key recommendations:

  1. Robust Infrastructure: Establish a secure hosting environment by configuring servers with the latest security updates, implementing network segregation to isolate AI systems, and employing firewalls and intrusion detection systems. You could even go as far as hosting your AI on-premises.
  2. Encryption, Anonymisation and Access Controls: Implement strong encryption protocols to protect data at rest and in transit. Employ techniques such as encryption at the database level and secure communication channels to safeguard sensitive information. Additionally, enforce strict access controls, including role-based access permissions, to limit the number of individuals who can interact with the hosted model and access the data (a sketch of such an access check follows this list). Furthermore, when working with personal data, you can pre- and post-process it so that it is anonymised, either manually or automatically.
  3. Ongoing Monitoring and Vulnerability Management: Establish a comprehensive monitoring system to continuously track the performance and security of the self-hosted GPT-like model. Conduct regular vulnerability assessments to identify and address any potential security risks promptly. Stay up to date with the latest security patches and apply them proactively to the hosted environment to ensure a robust defence against emerging threats.
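
As an example of the access controls in point 2, here is a minimal sketch of a role check in front of a self-hosted model, using FastAPI. The token-to-role mapping, the allowed roles, and the `run_local_model` stub are placeholders for your identity provider and your own model server.

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Placeholders: in production, roles come from your identity provider.
TOKEN_ROLES = {"token-analyst": "analyst", "token-engineer": "engineer"}
ALLOWED_ROLES = {"engineer"}  # only engineers may query the model in this example

def run_local_model(prompt: str) -> str:
    # Placeholder: call your internally hosted model here.
    return f"(model output for: {prompt})"

def require_role(authorization: str = Header(...)) -> str:
    """Reject requests whose bearer token does not map to an allowed role."""
    role = TOKEN_ROLES.get(authorization.removeprefix("Bearer ").strip())
    if role not in ALLOWED_ROLES:
        raise HTTPException(status_code=403, detail="Role not permitted")
    return role

@app.post("/generate")
def generate(payload: dict, role: str = Depends(require_role)):
    # The prompt only reaches the model after the role check passes;
    # this is also the natural place to write an audit log entry.
    return {"role": role, "completion": run_local_model(payload["prompt"])}
```

Serve this endpoint only on your internal network and over TLS; the same dependency can feed the monitoring described in point 3.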

Conclusion

If you've read this far, I assume you're not going to take ChatGPT-like tools away from your employees. How you decide to mitigate the risk is up to you (or your chief security officer, if you have one). So let's cut to the chase.

Whether you want to self-host or not, we can help.

But whatever you decide is right for you, the key will always be employee training and awareness.

Foster a culture of security awareness and encourage reporting of any potential vulnerabilities or incidents.

[1] Guthrie JP. High-involvement work practices, turnover, and productivity: evidence from New Zealand. Acad Manag J. 2001;44(1):180–190.