
Proactive measures for AI security: A critical need for organisations

Publication date: 18 July 2024

The urgency of addressing AI security within organisations cannot be overstated. As AI technology advances rapidly, employees are increasingly turning to private AI services without formal policies in place, putting sensitive corporate data at risk. Organisations that do not act now leave themselves exposed to data breaches and other security threats, and their employees will continue to use AI tools unregulated and unsupervised. While organisations already invest significant time and money in training employees to recognise phishing emails and other security threats, comparable efforts for generative AI have not yet been widely adopted.

When employees enter confidential company data into private AI systems, that data can fall into the wrong hands. Many of these services store user input to improve their models, which can inadvertently lead to data loss or theft. Moreover, companies lose control of their data the moment it enters an external system, which can result in security breaches and legal issues, especially when sensitive customer information is involved. Competitiveness is also at risk: confidential information that reaches competitors can expose strategic plans and trade secrets.

For example, research published in 2021 showed that large language models can memorise and reproduce fragments of their training data, including personal information, raising concerns about unintentional data leakage. In March 2023, a bug in ChatGPT briefly exposed the titles of other users’ conversations and, for some subscribers, payment details. Around the same time, Samsung employees reportedly pasted confidential source code and internal meeting notes into ChatGPT, inadvertently sharing sensitive business information with an external service.

The newly adopted EU AI Act sets common standards for the development and use of AI in the EU. The legislation aims to ensure that AI systems are safe and transparent and that citizens’ fundamental rights are protected. It also imposes specific, risk-based requirements on AI applications, including many used in business contexts. The EU AI Act thus provides an essential framework for the safe and ethical use of AI, helping companies to develop and implement their AI strategies.

A key aspect of the safe use of AI is AI governance. This includes policies and procedures that companies put in place to ensure the safe and ethical use of AI. Clear guidelines should be developed to regulate the use of AI systems and strictly control the private use of external services. Regular training helps employees understand the risks of personal AI use and promotes the secure handling of company data. Ongoing monitoring and regular audits ensure that these policies are being followed and that potential security gaps are identified early on.
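Parts of this monitoring can be automated. The following is a minimal sketch, not a production implementation, of a pre-submission check an organisation might place in front of any external AI service: it scans outbound prompts for patterns that suggest confidential content. The patterns and the screen_prompt helper shown here are hypothetical examples, not part of any specific product.

import re

# Hypothetical patterns an organisation might flag as confidential.
# Real deployments would tailor these to their own data classification rules.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),     # classification labels
    re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\d{4}){2,7}\b"),  # IBAN-like account numbers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # SSN-like identifiers
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns); block if any pattern matches."""
    matches = [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(prompt)]
    return (not matches, matches)

allowed, matches = screen_prompt("Summarise this CONFIDENTIAL board memo ...")
if not allowed:
    print("Blocked before leaving the company network:", matches)

Such a check does not replace training or audits, but it turns a written guideline into an enforceable control point and produces logs that audits can review.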

To protect sensitive data, organisations should develop or acquire in-house AI solutions designed specifically to handle such data securely. These systems can be tightly controlled and monitored to prevent data breaches. Encrypting sensitive data ensures that the information is not easily readable, even in the event of a leak, and robust access management ensures that only authorised personnel can reach it, reducing the risk of misuse. One possible solution is a RAG-powered (retrieval-augmented generation) chatbot: company documents are stored as vector embeddings in a dedicated, access-controlled database, and only the passages relevant to a query are passed to the language model at answer time, so confidential content is never absorbed into an external model. If the use of external AI services is unavoidable, organisations should only work with trusted providers that adhere to strict security protocols and privacy policies.
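To make this more concrete, here is a minimal sketch of the retrieval step of such a chatbot, combined with encryption at rest. It is illustrative only: the embed() function is a hypothetical stand-in for an in-house embedding model, the documents are invented, and the encryption uses the Fernet recipe from the widely used cryptography package.

import numpy as np
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # key management is out of scope here

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for an in-house embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

# Company documents: embeddings are kept for search,
# while the text itself is stored encrypted at rest.
docs = ["Q3 revenue forecast ...", "Supplier contract terms ..."]
index = np.stack([embed(d) for d in docs])
vault = [fernet.encrypt(d.encode()) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k most similar documents, decrypted on demand."""
    scores = index @ embed(query)       # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [fernet.decrypt(vault[i]).decode() for i in top]

context = retrieve("What is the revenue outlook?")
# `context` would now be appended to the prompt of an internally hosted LLM,
# so the sensitive text never leaves the company's infrastructure.

In a real deployment, the placeholder embedding would be replaced by a proper model, the key would come from a secrets manager, and the vector index would sit behind the same access controls as the rest of the data. The separation of concerns, however, stays the same: the model only ever sees the passages it is explicitly given.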

To find out more about our workshops on the correct use of large language models and AI governance, click here.

These sessions are designed to help organisations develop robust AI strategies and protect their sensitive data.

In summary, the personal use of AI systems by employees poses significant risks to businesses. However, these risks can be minimised by implementing clear policies, ongoing training and robust security measures. The EU AI Act and AI governance provide an important framework for the safe and ethical use of AI. Companies should be proactive in protecting their sensitive data, while fully realising the benefits of artificial intelligence.

The author: Maximilian Kuhn – AI Engineer Consultant, HICO-Group
