The Risks of Chat GPT and Generative AI to Businesses

 

As AI technology and its potential benefits for businesses develop, so does the threat of malicious actors using artificial intelligence in ever more sophisticated attacks. Businesses need to be aware of these threats to protect their cyber security now and in a future AI driven world.

The Cyber Risks of ChatGPT and Large Language Models

Employees in a range of jobs are increasingly using ChatGPT and other artificial intelligence tools to write everything from marketing materials and emails to speeches and real estate listings.

As recent reports of Samsung workers leaking sensitive company data to ChatGPT have exposed, such technology poses serious confidentiality and data privacy concerns for businesses of all sizes. Such is the level of concern that JP Morgan banned the use of the software by employees in February 2023.

What particularly concerns cyber security experts in relation to chatbots is the human-like and conversational tone they use, which can lower a user’s guard and cyber security awareness. As such, using free chatbot tools for business purposes is not something which should be encouraged by employers.

As highlighted by the NCSC (National Cyber Security Centre), any queries stored online may be hacked, leaked, or more likely accidentally made publicly accessible. This could include potentially personally identifiable information. A further risk is that the operator of the LLM (Large Language Model) is later acquired by an organisation with a different approach to privacy than was true when data was entered by users.

Chat GPT’s chat history feature went offline for several weeks in March after a bug exposed brief descriptions of other users’ conversations to people on the service. This prompted OpenAI to remind users to be careful about the sensitive information shared with ChatGPT and that the company cannot delete specific prompts from a person’s history.It is therefore important for users to be aware that ChatGPT is not designed to handle sensitive information, as it lacks encryption, strict access control, and access logs.

According to a recent survey of 1.6 million workers by a data security firm CyberHaven, between the period of the 26th of February and the 9th of April, the number of incidents per 100,000 employees where confidential data went to ChatGPT increased by 60.4%.

It also found that Since ChatGPT launched, 4% of employees have pasted sensitive data into the tool at least once.

It is reported that data classified as Sensitive/Internal Use Only was the most common type of data being submitted into ChatGPT (319 incidents per week per 100,000 employees), source code (278) and client data (260).

Manipulating AI Data

AI and LLMs are only as trustworthy as the data algorithm they are based on.

Criminals can also exploit AI vulnerabilities through what is called a prompt injection or input attack (injecting/inputting data into an application). This can allow a hacker to, essentially, get a chatbot to say anything that they want, by manipulating the data it has drawn from (it’s algorithm) and extract information from users.

Another issue of concern is the ability of LLM’s to write their own code and create convincing phishing and malware attacks. While tell-tale signs such as poor grammar or spelling may have revealed phishing campaigns in the past, AI has allowed advanced phishing emails and chatbots to be created with a higher degree of realism. Though ChatGPT has built-in safeguards which are designed to stop malicious actors from using it to create phishing emails, as demonstrated in the video below, it can be easily manipulated into creating a convincing example with certain prompts.

ChatGPT will give this response if you ask it to create a phishing email.

With certain prompts it can however generate an example.

At the same time, it has also allowed individuals with little coding experience to create sophisticated attacks which can bypass basic endpoint detection and response (EDR) software.

Data can also be poisoned; this takes place when malicious actors train an AI model on inaccurate data. This can, for example, result in a spam filter flagging legitimate emails as spam and malicious emails as safe, compromised intrusion detection and financial fraud prevention systems, or even AI-based medical diagnosis tools.

As well as malware coding, experts have pointed to the misinformation that chatbots like ChatGPT have the potential of providing to users.

It is also important for businesses to be aware of AI bias. This can refer to a false belief among employees that AI tools are inherently secure and safe with data. It can also refer to AI models that are based on false security assumptions or unconscious biases. Without input from security professionals, mistrained AI-powered security systems may fail to identify something that should be identified as a fraud element, a vulnerability, or a breach. Biased AI systems may also block network traffic and therefore, critical, and relevant business information.

How Can Businesses Use AI Safely

It’s important to have regulations, ethical guidelines, and security measures in place to prevent the misuse of AI. Organisations should be aware of these risks and take steps to mitigate them as they adopt AI-based systems. By being transparent and accountable in the development and deployment of AI, organizations can ensure that AI is used in a way that serves the greater good and protects the rights of all individuals. Appropriate regulation is currently being discussed in both the United States and European Union for the future development and application of AI by vendors, and its safe use by businesses.

The EU’s tech chief, Margrethe Vestager, has recently called for a voluntary code of conduct for the AI industry to adopt prior to a hopeful EU-wide agreement on the world’s first overarching artificial intelligence law.

To ensure the responsible and effective use of AI in cyber security, businesses and organisations should work with cyber security professionals who have experience working with AI systems.

At PureCyber, we have the expertise and experience to help you use AI in a secure way, that protects your sensitive data and your reputation.

Get in touch today to speak to our award-winning cyber security team by clicking the button below.

 

Sources 

www.theverge.com 

www.cyberhaven.com  

www.nscs.org 

www.theguardian.com 

www.forbes.com  

www.cbsnews.com 

www,tvpworld.com

 
Previous
Previous

Critical Remote code Execution Vulnerability in Fortinet FortiGate Firewalls.

Next
Next

How To Put Your Cyber Action Plan into Practice