In a detailed report, our friends at Mitigant describe how Generative AI tools can be compromised, and how to make sure they are not. This report, which uncovers little-known attack vectors, should be read by anyone looking to integrate GenAI into their information systems.
Generative AI (GenAI) has taken the world by storm since the release of ChatGPT in November 2022.
Large Language Models (LLMs), the technology underlying GenAI, can be applied in many ways beyond chatbots; hence, organizations are racing to leverage these models for business advantage.
Similarly, Cloud Service Providers (CSPs) offer managed LLM services, such as Amazon Bedrock, Azure AI Services, and Google Vertex AI. These offerings drastically lower the barrier to using LLMs, fast-tracking the development, deployment, and maintenance of GenAI workloads.
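To illustrate just how little infrastructure these managed offerings require, here is a minimal sketch of invoking a foundation model through Amazon Bedrock with boto3. The region and model ID are illustrative assumptions; they must match whatever your account actually has enabled.

```python
# Minimal sketch: calling an LLM via Amazon Bedrock's runtime API.
# Region and model ID below are assumptions for illustration only.
import json
import boto3

# Bedrock exposes foundation models behind a single managed API,
# so no model hosting or GPU infrastructure is needed on your side.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [
            {"role": "user", "content": "Summarize the key risks of GenAI workloads."}
        ],
    }),
)

# The response body is a stream; parse it and print the model's text.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

A handful of lines like these stand in for what used to be weeks of model hosting work, which is precisely why GenAI workloads are proliferating so quickly inside cloud environments.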
However, these GenAI workloads introduce several security and safety challenges that require attention. Several frameworks and regulations, including the EU AI Act, the OWASP Top 10 for LLM Applications, and the NIST AI Risk Management Framework, document these risks to raise awareness.