Principles of AI Ethics
Enabling AI Ethics
AI Risk and Mitigation
AI risk falls into three categories:
- Privacy: leaking private information
- Hallucinations: generated output may be false or misleading
- Bias: systematically false or unfair output regarding a specific group
We mitigate or eliminate those risks by:
- Obfuscation: Private information is never sent off-premises
- Guardrails: Keep LLMs on-topic and factual
- Human-in-the-loop: Built-in quality checks and stakeholder oversight
Our AI models are built on diverse and representative data. We take special care to ensure our end users understand the sources of the data they use and that it includes data from underrepresented groups.
Generative AI bias influences content creation; discriminative AI bias leads to unfair decisions. Our platform includes robust, regular auditing for bias, along with the ability to test diverse scenarios and generate independent assessments.
All stakeholders in our platform are trained to recognize bias in data sets in order to mitigate its influence.
Obfuscation means that sensitive data is removed before any information is sent to a public LLM, so your information never becomes part of a public LLM's training data or vocabulary.
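The obfuscation step described above can be sketched as a simple redaction pass. This is a minimal illustration, not the platform's actual implementation: the pattern names and categories here are hypothetical, and a production system would cover far more kinds of sensitive data.

```python
import re

# Hypothetical patterns for two common categories of sensitive data.
# A real obfuscation layer would handle many more (names, IDs, addresses).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def obfuscate(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens
    before the prompt leaves the premises."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(obfuscate("Contact jane.doe@example.com or 555-123-4567 about the invoice."))
# → Contact [EMAIL] or [PHONE] about the invoice.
```

Because the placeholders are substituted before the request is sent, the public LLM only ever sees the redacted text.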
And, if greater privacy is required, we can train a private LLM for you.
Guardrails constrain an LLM's output to stay on topic and factual. Our platform integrates with your enterprise APIs so that the model draws facts only from the data you provide.
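One way to picture an on-topic guardrail is as a filter between the model and the user. The sketch below is a deliberately simplified, hypothetical example using a keyword allow-list; real guardrails typically rely on classifiers or embedding similarity rather than keyword matching.

```python
# Hypothetical allow-list of topics a deployment wants to permit.
ALLOWED_TOPICS = {"billing", "invoice", "account", "subscription"}

def passes_guardrail(text: str) -> bool:
    """Accept a candidate LLM response only if it mentions an allowed topic."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & ALLOWED_TOPICS)

def guarded_reply(candidate: str) -> str:
    # Off-topic candidates are replaced with a safe fallback
    # instead of being returned to the user.
    if passes_guardrail(candidate):
        return candidate
    return "I can only help with billing and account questions."

print(guarded_reply("Here is your billing summary."))
# → Here is your billing summary.
print(guarded_reply("Let me tell you about the weather."))
# → I can only help with billing and account questions.
```

The same gate can sit on the input side as well, rejecting off-topic prompts before they ever reach the model.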