Artificial intelligence took the world by storm with the rise of ChatGPT, and GRC is no exception. At Cential, we have been integrating artificial intelligence into our GRC work since February 1, 2023, and it is transforming the way we do GRC.
We’re excited about the groundbreaking work happening here at Cential, and we know it raises questions. We have gathered the most frequently asked ones and answer them for you in this blog post!
Answering Your Frequently Asked Questions:
“What platforms have you integrated your AI solution with?”
“Do you have any other use cases or plans to expand?”
Yes – we have demonstrated the following additional use cases: translating laws and regulations into plain business English, translating risk descriptions into other languages, IT demand management, and generating corporate policies with mappings to security frameworks (demo video coming soon).
We are currently working on additional use cases as well. Stay tuned!
“How is this different from the AI I’ve seen before in my GRC solutions?”
In broad strokes, there are two types of AI models: predictive and generative. Predictive models require training on a user’s data, i.e. loading a vast amount of historical data and then training the AI to make predictions based on it. When you hear of the existing AI features within ServiceNow or Riskonnect (the latter uses Einstein, the AI built into its underlying Salesforce platform), this is usually what is being referred to: using data within the solution to make AI-trained, “smart” predictions. What we’re doing is generative, i.e. models trained on vast amounts of language data that is generally available to the public and not specific to your organization. (You can also train your own generative model on your own data, but we are not doing that within our team at Cential, as it is even more resource-intensive than training predictive models.)
The practical difference is that predictive models require large quantities of data, which must then be qualified with parameters before they can return structured responses. With a generative model, all a user needs is a well-crafted question (or prompt) with some well-defined context. With the models as mature as they are now, the results we get back are accurate enough to serve as a strong starting point for review and editing by a human expert.
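To make the contrast concrete, here is a minimal sketch of what "a good question with well-defined context" can look like in code. No historical data is loaded or trained on; the request is just a task instruction plus a few labeled context lines. The risk description and field names below are hypothetical examples, not Cential's actual prompts.

```python
# Build a generative-model prompt: a task instruction plus labeled
# context. Contrast with a predictive pipeline, which would need a
# large training dataset before it could answer anything.

def build_prompt(task: str, context: dict[str, str]) -> str:
    """Combine a task instruction with labeled context lines."""
    context_block = "\n".join(f"{key}: {value}" for key, value in context.items())
    return f"{task}\n\nContext:\n{context_block}"

prompt = build_prompt(
    task="Rewrite the following risk description in plain business English.",
    context={
        "Domain": "Third-party vendor management",
        "Risk": "Inadequate SLA monitoring may cause undetected service degradation.",
    },
)
# The prompt string would then be sent to a chat-completion API; the
# response is treated as a draft for human expert review, not a final answer.
```

The key design point is that everything the model needs travels inside the prompt itself, which is why so little customer data is required.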
“Is it adaptable to any AI platform?”
Yes. For now we are focusing on Generative Pre-trained Transformers such as OpenAI’s text-davinci-003, GPT-3.5-turbo, and the newly released GPT-4, as we are getting accurate generated responses from those, but we can easily swap in similar models such as DeepAI’s, or new ones like Google Bard as they become available via API.
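The "easily swap out" claim above comes down to a design choice: if the request logic depends only on a small backend interface, moving between GPT-style models is a configuration change. The sketch below illustrates that idea with stub backends; the names and structure are our own illustration, not Cential's actual architecture.

```python
# Keep the model behind a small interface so backends are swappable.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBackend:
    name: str
    complete: Callable[[str], str]  # prompt -> generated text

def make_stub_backend(name: str) -> ModelBackend:
    # Stand-in for a real API client (e.g. a chat-completions call).
    return ModelBackend(name=name, complete=lambda p: f"[{name}] draft for: {p}")

def generate(backend: ModelBackend, prompt: str) -> str:
    # Application code never names a specific model vendor.
    return backend.complete(prompt)

# Swapping models is just choosing a different backend:
gpt35 = make_stub_backend("gpt-3.5-turbo")
gpt4 = make_stub_backend("gpt-4")
```

In practice each backend would wrap a vendor API client, but the calling code stays unchanged when a new model arrives.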
“Is it secure?”
One of the largest benefits of GPT language models is how little information they require as input to generate meaningful output. Because we use a “generative” AI approach, the amount of data required from our customers is minimal, unlike a predictive (“analytical”) AI approach, which would require vast amounts of customer data. We have also been deliberate in engineering our prompts to be specific in context yet not directly identifiable, and we have built a data sanitization step into the Cential AI Hub, ensuring any organization’s identifiable markers are anonymized before requests are sent to the AI models.
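As a rough illustration of the sanitization idea, the sketch below replaces an organization's identifiable markers with neutral placeholders before a prompt would leave the building. The marker list and placeholder scheme are assumptions for illustration only, not the Cential AI Hub's actual implementation.

```python
import re

def sanitize(text: str, markers: dict[str, str]) -> str:
    """Replace each identifiable marker with its anonymous placeholder."""
    for marker, placeholder in markers.items():
        # Case-insensitive literal match, so "ACME CORP" is caught too.
        text = re.sub(re.escape(marker), placeholder, text, flags=re.IGNORECASE)
    return text

# Hypothetical org-specific markers and their placeholders:
markers = {"Acme Corp": "ORG_1", "Project Falcon": "INITIATIVE_1"}
raw = "Acme Corp's Project Falcon lacks documented access reviews."
clean = sanitize(raw, markers)
# clean == "ORG_1's INITIATIVE_1 lacks documented access reviews."
```

A real implementation would also need to handle abbreviations, misspellings, and indirect identifiers, but the principle is the same: nothing identifiable reaches the model.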
At Cential, we continue to explore new opportunities and use cases for automating risk management with artificial intelligence. We are currently testing the tool on our own internal initiatives, and its efficiency and accuracy only continue to improve. This is just the beginning.
If you have any questions on what we’re building here at Cential, please reach out to any one of us through the contact form on our website.