We’re pleased to announce the release of new AI policy samples for Managed Flex Gateway. These policies give you greater control and oversight over the usage of LLM APIs within your organization. They are highly configurable and can be easily tailored to your organization’s needs and requirements. Alongside the existing capabilities within Anypoint API Management, you’ll have all the tools you need to ensure safe implementation of AI APIs within your organization.
By using MuleSoft API Management, you can:
- Build safeguards for AI-powered applications: Guide and control interactions with AI/ML models.
- Implement content filtering: Prevent sensitive data leakage or inappropriate content from being transmitted to LLMs.
- Validate API requests: Ensure API requests to LLMs conform to expected formats and data types.
- Enforce rate limiting: Control API traffic volume to prevent overload and control costs.
MuleSoft AI API Policies Samples
These policies are specifically built for OpenAI. The policies we are launching today include:
- AI Prompt Decorator
- AI Prompt Guard
- AI Prompt Template
- AI Basic Token Rate Limiting
AI Prompt Decorator policy
The AI Prompt Decorator policy allows you to modify and enhance API requests and responses by appending or prepending messages. This enables you to guide or protect prompts, ensuring transparency and control over interactions with OpenAI models. For example, you can use this to limit the scope of responses to a specific topic (e.g., “modem troubleshooting”) or to instruct the model to omit potentially sensitive data.
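Conceptually, the decorator wraps the caller’s chat messages with extra system messages before the request reaches OpenAI. Here’s a minimal Python sketch of that idea (the actual policy samples are built with the Flex Gateway policy development kit; the function and guard message below are illustrative, not the policy’s real configuration):

```python
def decorate_prompt(messages, prepend=None, append=None):
    """Return a new messages list with guard messages added around the caller's prompt."""
    return list(prepend or []) + list(messages) + list(append or [])

# Hypothetical guard message: scope every answer to modem troubleshooting.
scope_guard = [{"role": "system",
                "content": "Only answer questions about modem troubleshooting."}]

request = [{"role": "user", "content": "My modem keeps rebooting."}]
decorated = decorate_prompt(request, prepend=scope_guard)
# The decorated list now starts with the system guard, followed by the user's message.
```

The original request body is never discarded; the policy only adds context around it, which is what keeps the interaction transparent.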
AI Prompt Guard policy
The AI Prompt Guard policy helps you sanitize API requests by filtering out unwanted or potentially harmful content using regular expressions. This adds a layer of security to your API by preventing misuse and protecting against malicious or undesired inputs. For example, you can help ensure compliance by filtering out requests that include Social Security numbers.
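The filtering itself is a regular-expression deny list applied to the prompt text. A minimal Python sketch of the concept (the pattern and function names are illustrative assumptions, not the policy’s actual configuration schema):

```python
import re

# Hypothetical deny list: block anything that looks like a US Social Security number.
DENY_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. 123-45-6789
]

def is_allowed(prompt: str) -> bool:
    """Reject the request if any deny pattern matches the prompt text."""
    return not any(pattern.search(prompt) for pattern in DENY_PATTERNS)
```

In the gateway, a rejected match would translate into an error response to the client, so sensitive content never reaches the LLM.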
AI Prompt Template policy
The AI Prompt Template policy allows you to apply predefined templates to API requests. By providing structured input, you can ensure more consistent and accurate outputs from large language models. This can be helpful for automating processes, like generating reports.
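A template is essentially a prompt with named slots that callers fill in, which constrains the shape of what gets sent to the model. A small Python sketch using the standard library (the template wording and parameter names are illustrative assumptions):

```python
from string import Template

# Hypothetical report-generation template with named slots.
REPORT_TEMPLATE = Template(
    "Write a $length summary report about $topic for a $audience audience."
)

def apply_template(values: dict) -> str:
    """Fill the predefined template's slots; raises KeyError if a slot is missing."""
    return REPORT_TEMPLATE.substitute(values)

prompt = apply_template(
    {"length": "one-page", "topic": "Q3 sales", "audience": "executive"}
)
```

Because every request passes through the same structure, outputs stay consistent across callers instead of varying with each free-form prompt.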
AI Basic Token Rate Limiting policy
The AI Basic Token Rate Limiting policy allows you to limit the number of tokens sent to an OpenAI API over a pre-defined time period. This is useful in scenarios where you might want to enforce constraints to optimize LLM performance, control costs, or prevent abuse.
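The underlying mechanism is a token budget tracked per time window: each request spends its token count against the window’s budget and is rejected once the budget is exhausted. A minimal fixed-window sketch in Python (the class and its parameters are illustrative assumptions, not the policy’s actual configuration):

```python
import time

class TokenRateLimiter:
    """Allow at most `max_tokens` tokens within each `window_seconds`-long window."""

    def __init__(self, max_tokens: int, window_seconds: float):
        self.max_tokens = max_tokens
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()
        self.used = 0

    def allow(self, tokens: int) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window_seconds:
            # The previous window expired: start a fresh one with a full budget.
            self.window_start = now
            self.used = 0
        if self.used + tokens > self.max_tokens:
            return False  # Reject: this request would exceed the window's budget.
        self.used += tokens
        return True
```

A request rejected here never reaches the OpenAI API at all, which is what makes token-based limiting effective for cost control rather than just traffic shaping.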
How to get started with the new API policies
These new API policies are now available as policy development kit samples on GitHub that you can modify and use. To learn more about how to use the policy development kit, please refer to our documentation on GitHub.