Exploiting LLM APIs with excessive agency
Large Language Models (LLMs) refer to artificial intelligence (AI) algorithms capable of processing user inputs and generating realistic responses by predicting word sequences. These models undergo training on extensive semi-public datasets, leveraging machine learning to comprehend the intricacies of language structure. Typically, LLMs utilize a prompt or chat interface to receive user input, subject to input validation rules. In contemporary websites, LLMs find diverse applications such as virtual assistant-style customer service, translation services, and SEO enhancement.
Identifying vulnerabilities in LLMs involves the following approach:
1. Identify the data and APIs accessible to the LLM, exploring this new attack surface for potential vulnerabilities.
2. Examine the LLM’s inputs, encompassing both direct inputs (e.g., prompts) and indirect inputs (e.g., training data). (Source: PortSwigger Web Academy)
Lab 1:
Navigate through all functions until you locate the AI Chat function and engage in playful interactions with simple questions. The lab’s objective is to successfully delete the user “Carlos.” Once achieved, consider the lab solved.