Understanding and Mitigating LLM Risks

Posted on 23rd April 2024.

[Image: Machine with DANGER sign]

Find a presentation related to this blog post here.

We have already published multiple blog posts dealing with the benefits of LLMs. For example, in a past post we discussed what you can do with a private business LLM, and, earlier, in another post we suggested some initial considerations to keep in mind when getting started with LLM-based systems. In this post we take a cautionary viewpoint and consider some of the key risks of using LLMs. Along the way we also highlight strategies for mitigating these risks.

The Key Dangers

In essence, LLMs can be viewed as machines that take text as input and return text as output (images and other modalities are also possible). This output text can then be used as part of business processes. So what can go wrong?

Here is an overview of some key dangers and risks introduced by the adoption of LLMs.

  1. Shadow AI - A main risk encountered in several organizations is that LLMs are still not available as a controlled service. As a consequence, employees needing the help of LLMs may end up using ad-hoc commercial LLMs, often sending private proprietary information to undisclosed servers and organizations.

  2. Generative junk - LLMs rarely produce output with grammatical errors (or syntax errors when generating code), so their output generally "looks good". Nevertheless, unless mitigated ahead of time, the results may contain hallucinations. Further, even if hallucinations are mitigated, LLMs may seem confident in their reasoning yet make logical mistakes in their output. Worst of all, such mistakes are often hard to spot because they are worked into well-written text. While error rates are typically low, when used at scale mistakes do happen, and these can be very costly.

  3. Private data leakage through inference - When you use an LLM service you often send data over the internet, it is processed on the server, and the result is returned to you. While the services of big providers (and certainly an LLM running on your own infrastructure) are generally considered safe, you may still be risking your data as you send it for processing (inference). This means you are trusting your LLM service provider to guard your private data from ending up in the wrong hands. If you use a service that is not compliant with your organization's cybersecurity regulations, then you are also at risk of violating these regulations.

  4. Private data leakage through training - If you train or fine-tune a model based on your own prompting mechanisms or data, then you are at risk of "inserting", or "baking in", sensitive data into the model. While the model weights on their own do not reveal that data, it is possible for attackers to probe your model during inference in ways that will reveal parts of the private data that you used in training and that the model memorized.

  5. Resource cost accidents or attacks - LLMs require a lot of computation on dedicated, expensive hardware, so running them can be very costly. In many cases such resource use is justified since the LLM output is valuable. Yet faults may occur where excessive compute is used, sometimes as a result of interconnecting LLMs in more complicated applications such as agent-based frameworks. In other cases, denial of service attacks on open LLM services (e.g., customer-facing chat interfaces) can spike LLM costs and usage to dangerous levels.

  6. Tool errors or vulnerabilities - One of the most powerful features of an LLM beyond simple text-in/text-out applications is the automatic invocation of additional tools. This is sometimes referred to as function calling. The risk with such features is that they allow the LLM to activate other functions of your system. This can potentially include sending files over the internet, deleting files, or similar dangerous tasks. If these systems are not secured and are running on your internal infrastructure, attackers may also be able to get access to your business data processing systems and infiltrate your networks.

  7. Loss of the human edge - While we advocate for the use of LLMs in business, we do believe that excessive reliance on LLMs for non-mundane tasks can reduce a company's ability to appreciate human innovation and the "human touch". This may be fine for businesses that replace only non-core activities with generative AI, yet in the long term there is a risk of relying more and more on LLMs and less on human ideas and insight.

  8. Bias and unethical responses - Most commercial LLMs have been trained to carefully mitigate bias, and providers often integrate guardrail models to filter out offensive or harmful responses. However, LLMs are still susceptible to producing responses that include racial, gender, or similar bias. Such risks are inherent to LLMs.

[Image: Protector on Horse]

Risk Mitigation

Having considered risks 1-8 above, let us now go through them one by one and consider how each can be mitigated.

  1. Mitigating Shadow AI: Organizations can mitigate the risk of shadow AI by developing internal, controlled LLM services and providing adequate access to these resources for their employees. This approach ensures data remains within a secure, monitored environment, minimizing the risk of data leakage. Additionally, employee training on the risks of unauthorized external LLM use and establishing clear policies will reinforce proper use.

  2. Mitigating Generative junk: To counteract the risk of generative junk, including hallucinations and logical errors, implementing layers of validation and review for LLM outputs is essential. These can include automated checks for factual accuracy and manual review by subject matter experts; a minimal sketch of such an automated check appears after this list. Training models on high-quality, diverse datasets and continuously monitoring for error patterns can also refine model reliability.

  3. Mitigating Private data leakage through inference: Organizations can reduce the risk of private data leakage during inference by using encryption for data in transit and ensuring that any LLM services used comply with organizational cybersecurity standards. Regular audits and compliance checks can help maintain security standards. Using anonymization techniques before sending data for processing can further safeguard sensitive information; see the redaction sketch after this list.

  4. Mitigating Private data leakage through training: Mitigating this risk involves careful handling of sensitive data during the training phase. Data should be anonymized or pseudonymized where possible. Organizations should also implement strict access controls and monitoring to prevent unauthorized data probing. Regular security assessments of the training environments can further protect against vulnerabilities.

  5. Mitigating Resource cost accidents or attacks: To manage the costs associated with LLMs and protect against overuse or denial of service attacks, organizations should implement usage quotas and monitoring systems; a minimal quota sketch appears after this list. These systems can detect and mitigate unexpected spikes in resource demand. Employing a multi-layered security approach can protect against external attacks, while setting up fail-safes can prevent runaway costs due to internal errors.

  6. Mitigating Tool errors or vulnerabilities: Preventing tool errors or vulnerabilities in LLM applications involves rigorous security protocols, including regular updates and patches to the LLM and associated tools, as well as running code with least privileges and appropriate sandboxing. Limiting the LLM's ability to invoke potentially harmful system functions without explicit, authenticated user consent can also mitigate risks; see the allow-list sketch after this list. Continuous monitoring for unusual activities can detect and respond to any unauthorized attempts at function calling.

  7. Mitigating Loss of the human edge: To preserve human insight and innovation, organizations can design workflows that integrate LLMs for routine tasks while reserving strategic decision-making for human experts. Regular training and initiatives that promote creative thinking and problem-solving can help maintain a balance between technological efficiency and human ingenuity.

  8. Mitigating Bias and unethical responses: Addressing the risk of bias and unethical responses in LLM outputs requires continuous training of the model on unbiased, diverse datasets and implementing stringent testing for bias across various demographics; a simple counterfactual bias test is sketched after this list. Establishing guidelines for AI use and incorporating feedback mechanisms can help identify and correct biases promptly, ensuring more equitable outcomes.
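
To make a few of the mitigations above more concrete, here are some minimal sketches, all in Python. First, item 2 mentions automated checks for factual accuracy. The sketch below assumes a retrieval-style setup where the source documents are available next to the answer; the function name validate_output and the specific rules are illustrative assumptions, not a prescribed method.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    passed: bool
    issues: list[str] = field(default_factory=list)

def validate_output(answer: str, source_documents: list[str]) -> ValidationResult:
    """Cheap rule-based checks that flag an LLM answer for human review.

    A sketch only: real deployments combine rules like these with
    model-based fact checking and subject-matter-expert review.
    """
    issues = []
    source_text = " ".join(source_documents)

    # Numeric claims that do not appear in the sources are a common
    # symptom of hallucination, so flag them for review.
    for number in set(re.findall(r"\d[\d,.]*", answer)):
        if number not in source_text:
            issues.append(f"Unsupported figure in answer: {number}")

    # Flag answers that are too short to be a meaningful response.
    if len(answer.split()) < 5:
        issues.append("Answer is suspiciously short.")

    return ValidationResult(passed=not issues, issues=issues)

# Anything that fails the checks is routed to a manual review queue.
result = validate_output("Revenue grew by 42% in 2023.",
                         ["The report states revenue grew by 12% in 2023."])
if not result.passed:
    print("Route to human review:", result.issues)
```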
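
Item 3 suggests anonymization before data is sent for processing. The following sketch replaces likely identifiers with placeholder tokens using a few rough regular expressions; a production system would more likely use a dedicated PII-detection library or a named-entity-recognition model, and the patterns shown are assumptions rather than a complete list.

```python
import re

# Very rough patterns for common identifiers; a production system would
# use a dedicated PII-detection library or NER model instead.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before the text leaves
    your infrastructure for an external inference endpoint."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com, phone +61 400 123 456."
safe_prompt = redact(prompt)
# safe_prompt can now be sent to the LLM service; any mapping back to the
# original values stays inside your own systems.
print(safe_prompt)
```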
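
Item 5 calls for usage quotas and monitoring. Below is a minimal sketch of a per-user daily token budget that refuses requests once the budget is spent; the class name, limits, and roll-over logic are illustrative and would be replaced by persistent counters and alerting in practice.

```python
import time
from collections import defaultdict

class TokenBudget:
    """Per-user daily token quota: refuse requests once the budget is spent.

    A sketch only: a production system would persist counters, alert on
    unusual spikes, and distinguish interactive users from batch jobs.
    """

    def __init__(self, daily_limit: int = 200_000):
        self.daily_limit = daily_limit
        self.used = defaultdict(int)
        self.day = time.strftime("%Y-%m-%d")

    def _roll_over(self) -> None:
        # Reset all counters at the start of a new day.
        today = time.strftime("%Y-%m-%d")
        if today != self.day:
            self.day = today
            self.used.clear()

    def allow(self, user: str, estimated_tokens: int) -> bool:
        self._roll_over()
        if self.used[user] + estimated_tokens > self.daily_limit:
            return False  # e.g., defer the request or alert an operator
        self.used[user] += estimated_tokens
        return True

budget = TokenBudget(daily_limit=50_000)
if budget.allow("alice", estimated_tokens=1_200):
    pass  # call the LLM service here
else:
    print("Quota exceeded: request deferred and flagged for review.")
```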
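
Item 6 recommends limiting which functions the LLM may invoke and requiring explicit consent for dangerous ones. The sketch below keeps an allow-list of tools and requires human confirmation for the risky entries; the tool names and the dispatch function are hypothetical placeholders for your own function-calling layer.

```python
# Allow-list of tools the LLM may invoke, with a flag for those that need
# explicit human confirmation. Tool names here are illustrative only.
SAFE_TOOLS = {
    "search_product_catalog": {"requires_confirmation": False},
    "create_support_ticket": {"requires_confirmation": False},
    "send_email": {"requires_confirmation": True},
    "delete_record": {"requires_confirmation": True},
}

def dispatch_tool_call(name: str, arguments: dict, confirm) -> str:
    """Gatekeeper between the LLM's requested function call and execution."""
    if name not in SAFE_TOOLS:
        return f"Refused: '{name}' is not on the tool allow-list."
    if SAFE_TOOLS[name]["requires_confirmation"] and not confirm(name, arguments):
        return f"Refused: '{name}' was not confirmed by a human operator."
    # Execute the tool here, in a sandboxed, least-privilege environment.
    return f"Executed {name} with {arguments}"

# Example: the LLM asks to delete a record; a human must approve first.
print(dispatch_tool_call("delete_record", {"id": 42}, confirm=lambda n, a: False))
```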
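
Item 8 calls for testing bias across demographics. One simple automated probe is a counterfactual test: issue pairs of prompts that differ only in a demographic attribute and compare the responses. The sketch below assumes a call_llm placeholder for whatever client your stack uses and a naive string-similarity measure; real bias evaluations rely on curated benchmark suites and human review.

```python
from difflib import SequenceMatcher
from itertools import combinations

def call_llm(prompt: str) -> str:
    """Placeholder for your actual LLM client call."""
    raise NotImplementedError

def counterfactual_bias_probe(template: str, groups: list[str],
                              threshold: float = 0.8) -> list[str]:
    """Fill a prompt template with different demographic terms and flag
    pairs of responses that diverge beyond a similarity threshold."""
    responses = {g: call_llm(template.format(group=g)) for g in groups}
    flagged = []
    for a, b in combinations(groups, 2):
        similarity = SequenceMatcher(None, responses[a], responses[b]).ratio()
        if similarity < threshold:
            flagged.append(f"Responses for '{a}' and '{b}' diverge "
                           f"(similarity {similarity:.2f}); review manually.")
    return flagged

# Example usage (requires a real call_llm implementation):
# flags = counterfactual_bias_probe(
#     "Write a short reference letter for a {group} candidate applying "
#     "for a software engineering role.",
#     groups=["male", "female"])
```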

[Image: LLM cloud]

Where to from here

As we move forward from identifying the risks associated with LLMs to implementing effective mitigation strategies, it becomes essential to continue evolving our understanding and management of these technologies. The dynamic nature of AI development requires that risk mitigation strategies be adaptable and scalable to meet new challenges as they arise. Regularly updating training protocols, improving data security measures, and refining the oversight of LLM outputs are crucial steps toward ensuring that these powerful tools are used responsibly and effectively.

Moreover, fostering a culture of transparency and accountability within organizations can play a significant role in the sustainable deployment of LLMs. This includes educating all stakeholders about the potential risks and the importance of ethical AI practices. By establishing a collaborative environment where feedback is actively sought and incorporated, companies can better navigate the complexities of AI integration.

Finally, as technology continues to advance, there is a growing need for regulatory frameworks that keep pace with innovation. Engaging with policymakers, industry leaders, and the academic community to develop and refine AI governance will be critical. These collaborative efforts can help ensure that LLMs contribute positively to society, enhancing productivity and creativity while safeguarding against the risks we have outlined. The journey with LLMs is ongoing, and by staying informed and proactive, we can harness their potential responsibly and ethically.