TrustCloud’s New Hallucination-Proof GraphAI Shaves Hours Off Security Questionnaires

Tejas Ranade

23 Apr 2024

TrustCloud’s AI already pre-fills up to 80% of a security questionnaire, and we’ve now built the next iteration: GraphAI, a new set of generative AI capabilities in TrustShare. GraphAI still finds the right answer for each security questionnaire topic, but it now better accounts for context and generates more natural, accurate responses based on your program controls.

GraphAI is built on a retrieval-augmented generation (RAG) architecture layered on our large language model (LLM). We built it this way to ensure fast, accurate retrieval of information tailored to each query.

But the most important part isn’t the technology we use, but how we train it. Over four years, our model has been trained on tens of thousands of security questionnaires and industry data points sourced from the public domain and from our customers. (We follow rigorous privacy standards to keep data secure, and customers can opt in to having our model learn from their information. If they choose to do so, that learning is only used to improve their own AI instance, not those of any other customers. Jump to “How TrustCloud maintains data security” for more details.)

Our model training also incorporates an extensive lexicon of security and compliance terminology, so our AI can adeptly interpret the intent behind security questions. And this training is exclusive to TrustCloud.

Other AI solutions, including those offered in traditional GRC automation tools, rely on static knowledge bases. TrustCloud instead operates on TrustGraph: an advanced, interconnected graph model of your entire GRC ecosystem, including your specific controls, policies, documents, and knowledge base, upon which your AI instance trains further. Better yet, as you engage with your AI instance, the algorithm continuously learns and refines its accuracy, so your responses become even more precise over time.

This means GraphAI conveys a real-time representation of your security posture with more precise responses. Because TrustGraph dynamically integrates with all your artifacts, you can effectively monitor and manage customer commitments and mitigate the risk of misrepresenting your posture through outdated information.
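To make the idea concrete, here is a highly simplified sketch of how an interconnected graph of GRC artifacts could be modeled in code. Every node name, attribute, and relation below is a hypothetical illustration, not TrustGraph’s actual schema:

```python
# A simplified sketch of an interconnected GRC graph. All nodes, attributes,
# and relations are hypothetical illustrations, not TrustGraph's real schema.
import networkx as nx

graph = nx.DiGraph()

# Artifacts become nodes with typed metadata.
graph.add_node("policy:access-control", kind="policy", updated="2024-04-01")
graph.add_node("control:AC-2", kind="control", status="passing")
graph.add_node("doc:pentest-2024", kind="document")

# Edges capture the relationships between artifacts.
graph.add_edge("control:AC-2", "policy:access-control", relation="implements")
graph.add_edge("doc:pentest-2024", "control:AC-2", relation="evidences")

# Answering a question can then traverse live, connected context
# instead of querying a static knowledge base.
print(nx.descendants(graph, "doc:pentest-2024"))
# {'control:AC-2', 'policy:access-control'}
```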

TrustCloud’s GraphAI technology

GraphAI employs a sophisticated natural language processing (NLP) model to analyze and transform your TrustGraph artifacts into vector embeddings. Those embeddings are specifically optimized for the unique security and compliance lexicon, and allow AI models to process and compare data efficiently.
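As a rough illustration of this step, the sketch below converts a couple of sample artifacts into embeddings. The encoder model and artifact text are general-purpose stand-ins; our production encoder is tuned for the security and compliance lexicon described above:

```python
# A rough illustration of embedding artifacts as vectors. The model name is
# a general-purpose stand-in, not our production encoder.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

artifacts = [
    "Access to production systems requires MFA and quarterly access reviews.",
    "Customer data is encrypted at rest using AES-256.",
]

# encode() returns one fixed-length vector per artifact (384 dimensions here),
# which downstream models can compare efficiently.
embeddings = model.encode(artifacts)
print(embeddings.shape)  # (2, 384)
```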

When answering questions, TrustCloud AI uses FAISS (Facebook AI Similarity Search) to quickly identify the most relevant results and ensure that answers are drawn from the most applicable information in your TrustGraph.
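Here is a minimal, self-contained example of the FAISS pattern: build an index over artifact embeddings, then search for the nearest neighbors of a query vector. The index type and dimensionality are illustrative choices, not necessarily what we run in production:

```python
# A minimal, self-contained FAISS example: build an index, then query it.
# Random vectors stand in for real artifact embeddings.
import faiss
import numpy as np

dim = 384
artifact_vectors = np.random.rand(10, dim).astype(np.float32)  # 10 stand-ins

index = faiss.IndexFlatL2(dim)  # exact L2 search, the simplest FAISS index
index.add(artifact_vectors)

query = np.random.rand(1, dim).astype(np.float32)
distances, ids = index.search(query, 3)  # the 3 most similar artifacts
print(ids[0])  # row indices of the nearest artifact embeddings
```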

How we guard against hallucinations

Hallucinations come from an AI’s reliance on learned patterns and general knowledge, which can lead to answers that are coherent but incorrect or irrelevant.

To guard against this, TrustCloud uses RAG to enhance response quality. RAG first retrieves a set of information relevant to the query, then feeds that data to the generative model to produce answers directly informed by the retrieved information. Uniting retrieval-based and generative approaches yields responses that are more accurate, grounded, and relevant.
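In code, the retrieve-then-generate flow looks roughly like the sketch below. The retrieve and llm_generate callables and the prompt wording are placeholders standing in for the vector search and LLM components described above, not our production pipeline:

```python
# A simplified retrieve-then-generate (RAG) flow. `retrieve` and
# `llm_generate` are placeholders for the vector search and LLM calls;
# the prompt wording is illustrative only.
def answer_question(question: str, retrieve, llm_generate, k: int = 3) -> str:
    # Step 1: fetch the k most relevant artifacts for this query.
    context = "\n".join(retrieve(question, k))

    # Step 2: constrain the LLM to the retrieved context. Grounding the
    # generation in real artifacts is what curbs hallucination.
    prompt = (
        "Answer the security question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_generate(prompt)
```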

Our in-house machine learning models handle vectorization and retrieval (powered by FAISS) to produce coherent, accurate, and contextually relevant answers. We then apply a cross-check prompt validation step to verify accuracy. Combined with TrustGraph, this rigorous approach greatly enhances our reliability in answering security questions and makes GraphAI highly resistant to hallucinations.
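The cross-check step can be pictured as a second pass that asks a model whether the draft answer is fully supported by its retrieved sources. This is only a sketch of the general pattern, not our exact validation logic:

```python
# A sketch of a cross-check validation pass: a second prompt asks whether
# the draft answer is fully supported by its sources. The verdict format
# and function shape are illustrative only.
from typing import Optional

def cross_check(question: str, draft: str, context: str, llm_generate) -> Optional[str]:
    prompt = (
        "Reply SUPPORTED only if the answer follows strictly from the "
        "context; otherwise reply UNSUPPORTED.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer: {draft}"
    )
    verdict = llm_generate(prompt).strip().upper()
    # Only verified answers pass through; anything else goes to human review.
    return draft if verdict.startswith("SUPPORTED") else None
```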

How TrustCloud maintains data security

Our entire platform, including GraphAI, adheres to rigorous security, privacy, and governance standards. On our own TrustShare instance, you can find a detailed description of our internal controls and policies.

To ensure our AI governance remains in step with emerging best practices, we align our practices to the latest standards, including ISO 42001 and the NIST AI RMF. Some security measures we have adopted include:

  • We train our AI on questions from customers’ security questionnaires while excluding answers and sensitive artifacts from a customer’s trust program.
  • We never use a customer’s data to train models for other customers. Customers can opt in to allowing their compliance program data to refine their own AI instance exclusively.
  • Customers provide direct feedback on AI-generated answers by approving, rejecting, or modifying each answer.
  • We strictly prohibit the use of customer data for training third-party models, including those of OpenAI. Any training data is confined to improving in-house models.
  • Generative AI is an opt-in capability. By opting in, users gain a more comprehensive and accurate answering experience, as our AI crafts answers that precisely match question context. Customers with stricter requirements can choose not to opt in. Our vector-based retrieval is enabled by default, but customers can disable AI-enhanced questionnaire processing altogether.

All customers will have TrustCloud’s GenAI capabilities ready for opt-in in their portal, and can contact their Customer Success rep to get started. We are excited to provide advanced, effective technology that drives efficiency in your organization and is secure, dependable, and aligned with the highest standards of AI governance.