The Policy Agent is a rules-based mechanism that controls the outputs of LLMs by enforcing constraints and guardrails. By configuring policies, users can dictate acceptable response behaviors, prevent harmful outputs, and ensure that AI actions align with brand values.
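As an illustration of the rules-based idea, here is a minimal sketch of a policy check applied to a candidate LLM response before it is shown to a user. The policy names, patterns, and data structure are assumptions for the example, not the Policy Agent's actual configuration format.

```python
import re

# Hypothetical policy list; real policies would be configured by the user.
POLICIES = [
    {"name": "no_pricing_promises", "pattern": r"\bguaranteed? (discount|refund)\b"},
    {"name": "no_competitor_mentions", "pattern": r"\b(AcmeBot|RivalAI)\b"},
]

def enforce_policies(llm_output: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_policy_names) for a candidate response."""
    violations = [
        p["name"] for p in POLICIES
        if re.search(p["pattern"], llm_output, flags=re.IGNORECASE)
    ]
    return (not violations, violations)

allowed, violated = enforce_policies("We offer a guaranteed refund to everyone!")
print(allowed, violated)  # False ['no_pricing_promises']
```

A blocked response could then be regenerated, redacted, or escalated, depending on how the violated policy is configured.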
Retrieval-Augmented Generation (RAG) Safety provides transparency over the data selection process used by LLMs. It reveals which data chunks were identified as relevant to a query and which were ultimately submitted to the LLM. Users have the ability to review, approve, or modify these selections, ensuring that only the most appropriate data informs the AI’s output.
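The review-before-submit flow can be sketched as follows. The chunk fields, relevance scores, and approval flag are illustrative assumptions, not the product's actual API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str
    text: str
    score: float        # retrieval relevance score (assumed 0..1)
    approved: bool = True  # human reviewer's decision

def prepare_context(chunks: list[Chunk], min_score: float = 0.5) -> list[Chunk]:
    """Keep only chunks that pass both the relevance threshold and human approval."""
    return [c for c in chunks if c.score >= min_score and c.approved]

retrieved = [
    Chunk("faq.md", "Returns accepted within 30 days.", 0.91),
    Chunk("blog.md", "Off-topic anecdote...", 0.62, approved=False),
    Chunk("wiki.md", "Unrelated legacy notes.", 0.21),
]
submitted = prepare_context(retrieved)
print([c.source for c in submitted])  # ['faq.md']
```

The point of the sketch is the separation of concerns: retrieval proposes, a threshold filters, and a human can veto any chunk before it informs the LLM's output.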
The LLM Inspector is a tool that breaks down the decision-making process of the LLM, showing why it selected certain actions or responses. By dissecting the LLM’s internal logic, users can gain insights into the decision pathways and make informed adjustments to align AI behavior with brand standards.
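A decision trace of the kind such an inspector might surface can be sketched like this. The record structure, candidate actions, and scores are assumptions made purely for illustration.

```python
# Hypothetical decision trace: the actions the LLM considered and their scores.
decision_trace = {
    "query": "Cancel my subscription",
    "candidates": [
        {"action": "escalate_to_human", "score": 0.31},
        {"action": "run_cancellation_flow", "score": 0.88},
        {"action": "answer_from_faq", "score": 0.55},
    ],
}

def explain(trace: dict) -> str:
    """Summarize why the highest-scoring candidate action was chosen."""
    chosen = max(trace["candidates"], key=lambda c: c["score"])
    return f"Chose '{chosen['action']}' (score {chosen['score']:.2f}) for: {trace['query']}"

print(explain(decision_trace))
# Chose 'run_cancellation_flow' (score 0.88) for: Cancel my subscription
```

Seeing the rejected candidates alongside the chosen one is what makes adjustment possible: if the wrong action wins, the scores show how close the alternatives were.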
HITL, or Human-in-the-Loop, allows you to participate in an AI agent’s conversation directly from the Botpress dashboard. This feature offers a critical layer of oversight and control when using LLMs in customer- or user-facing interactions. It was designed to ensure that sensitive or high-risk conversations can be escalated to human agents, helping maintain brand integrity, mitigate potential issues, and protect user privacy.
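An escalation gate of this kind can be sketched as below. The trigger keywords, confidence threshold, and routing targets are illustrative assumptions, not Botpress's actual escalation logic.

```python
# Hypothetical sensitive-topic list; a real deployment would configure its own.
SENSITIVE_TOPICS = {"refund", "legal", "medical", "complaint"}

def needs_human(message: str, model_confidence: float, threshold: float = 0.7) -> bool:
    """Escalate when the model is unsure or the topic is sensitive."""
    mentions_sensitive = any(t in message.lower() for t in SENSITIVE_TOPICS)
    return model_confidence < threshold or mentions_sensitive

def route(message: str, model_confidence: float) -> str:
    return "human_agent_queue" if needs_human(message, model_confidence) else "ai_reply"

print(route("I want to file a legal complaint", 0.95))  # human_agent_queue
print(route("What are your opening hours?", 0.92))      # ai_reply
```

The key design choice is that escalation is triggered by either signal alone: low model confidence or a sensitive topic is each sufficient to hand the conversation to a human agent.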