AI is not just a valuable addition to our technological arsenal; it is a critical factor underpinning our drive for innovation, efficiency, and the ability to exceed customer expectations. Our vision is to integrate AI deeply and responsibly into every facet of our operations, elevating our competitive edge.
“CreditAI boasts a user-friendly interface and enables seamless interactive natural language search,” said Sreekanth Mallikarjun, Reorg’s chief scientist and head of AI innovation, when announcing the launch of CreditAI in October 2023. “We have harnessed state-of-the-art technologies, including industry-leading vector and memory databases in conjunction with LLMs fortified with our proprietary architecture and guardrails.”
FAQs
What are your AI governance principles and practices?
Our principles are:
Fairness and Non-Discrimination: Our AI models are built to be unbiased and avoid discriminatory outcomes based on factors like race, gender, or income.
Transparency and Explainability: We strive for transparency in our AI systems. We aim to explain how decisions are made, allowing for human oversight and intervention if needed.
Accountability: We take ownership of our AI models. We have clear roles and processes for development, deployment, and monitoring to ensure responsible use.
Privacy and Security: We prioritize user privacy and data security. We comply with all relevant regulations and implement robust security measures to protect user information.
Human Oversight: Humans remain in control. AI is a tool to augment human decision-making, not replace it.
And our practices include:
Diverse Development Teams: We build diverse teams of engineers, data scientists, and developers to identify and address potential biases early on.
Fairness Testing: We employ rigorous fairness testing throughout the development process to identify and mitigate bias in datasets and algorithms (see the sketch after this list).
Model Explainability Tools: We use explainability tools and techniques such as TruLens and variable importance analysis to understand how models reach conclusions. This allows for human review and intervention if necessary.
Human-in-the-Loop Systems: We design systems where humans can review and override AI decisions, particularly in critical areas.
Regular Audits and Monitoring: We conduct regular audits to identify and address any emerging bias or performance issues in deployed models.
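To make the fairness testing above concrete, here is a minimal sketch of one such check: it computes per-group favorable-outcome rates and the disparate-impact ratio. The data and column names are hypothetical, and our production checks are more extensive.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest favorable-outcome rate across groups.

    A common rule of thumb flags ratios below 0.8 for human review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical scored decisions: 1 = favorable outcome.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
if ratio < 0.8:
    print(f"Potential disparate impact: ratio={ratio:.2f}")
```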
What is your AI ethical framework?
We have a well-defined ethical framework guiding AI development and deployment. This framework is based on industry best practices and regulatory guidelines. It outlines principles such as fairness, transparency, accountability, integrity, and security.
How do you mitigate bias?
Data Quality: We prioritize high-quality data with randomized and diverse representation to minimize bias in training datasets (see the reweighting sketch after this list).
Algorithmic Choice: We carefully select algorithms less prone to bias and continuously evaluate new approaches for fairness.
Human Review and Feedback: We incorporate human review loops and feedback mechanisms to identify and rectify biased outcomes during the development and deployment phases.
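As an illustrative mitigation step, assuming group membership is available at training time, samples can be reweighted so each group contributes equally to the training loss. This is a minimal sketch, not our exact pipeline.

```python
import numpy as np
import pandas as pd

def balanced_sample_weights(groups: pd.Series) -> np.ndarray:
    """Weight each sample inversely to its group's frequency so that
    every group contributes equally to the training loss."""
    counts = groups.value_counts()
    return len(groups) / (len(counts) * counts[groups].to_numpy())

groups = pd.Series(["A", "A", "A", "A", "B", "B"])
weights = balanced_sample_weights(groups)
print(weights)  # A samples get 0.75, B samples get 1.5

# The weights are then passed to the estimator, e.g.:
# model.fit(X, y, sample_weight=weights)
```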
How do you ensure the explainability of your AI decisions?
Ensuring explainability in our AI decisions is paramount. We approach the issue with the following techniques:
Feature Importance: We identify the data points (features) that have the most significant influence on the model’s decision. This helps us understand which factors play a key role in its conclusions.
Partial Dependence Plots: These plots visualize the impact of individual features on the model’s output, allowing us to see how a specific feature value can influence the final outcome.
Counterfactual Explanations: This technique explores “what-if” scenarios. We can see how a slight change in an input might affect the model’s prediction. This helps users understand the model’s reasoning.
Model-Agnostic Explainable AI (XAI) Methods: We utilize techniques like LIME (Local Interpretable Model-Agnostic Explanations) to create simpler, human-interpretable models that mimic the behavior of the complex AI model for specific predictions.
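As a concrete illustration of the first two techniques above (feature importance and partial dependence), the sketch below uses scikit-learn on synthetic stand-in data; a real credit model would use credit features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in; a real credit model would use credit attributes.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")

# Partial dependence of predictions on the two most influential features:
# from sklearn.inspection import PartialDependenceDisplay
# PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
```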
Can you provide insights into how your AI models arrive at specific outcomes?
By leveraging the techniques mentioned above, we can provide insights into how our AI models arrive at specific conclusions:
Document Classification: For example, we can show which factors, such as the headline, legal jargon, or financial jargon, had the most significant impact on classification into a specific credit-related topic.
Risk Index: For example, we can highlight the specific going-concern language and covenants that amplify default risk for a company.
Are there any limitations or areas where the AI decision-making process may be opaque?
There are limitations to explainability, especially with complex models:
Black Box Nature: Highly complex models can be intricate webs of connections, making it challenging to fully understand the reasoning behind every decision.
Data-Driven Biases: If the underlying data has biases, the model might inherit them, making it difficult to explain biased outcomes.
Hallucinations: In GenAI models, despite several in-house guardrails, responses may on rare occasions confuse coreference resolution, attributing statements to the wrong entity when multiple entities or persons are mentioned in a complicated narrative.
Numerical Calculations: While LLMs are effective at textual tasks, their understanding of numbers typically comes from narrative context; they lack the deep numerical reasoning and the flexibility of a human mind needed to carry out calculations and exercise complex financial or legal judgment.
How do you address the opacity issue?
Human Expertise: We rely on human expertise to interpret the signals observed in the source data. Subject Matter Expert (SME) groups from the business, along with our data scientists, examine the results and ensure they align with ground truth.
Documentation and Transparency: We document the explainability methods used and the limitations of the model. This promotes transparency and helps users understand the level of certainty associated with the AI’s conclusions.
What are the sources of the data used to train and operate your AI models?
Our AI systems rely on a variety of data sources to train and operate effectively. These include:
Internal Data: We leverage anonymized historical product usage data, which provides valuable insights to support improvements to our products and services. Our data retention policy ensures the data contains no personally identifiable information (PII).
External Data: We may incorporate external datasets, anonymized and aggregated, on market trends, economic indicators, and industry benchmarks. This enriches our models with a broader perspective.
How do you ensure the quality, accuracy, and appropriateness of the data?
Data quality is paramount. We have stringent measures in place to ensure:
Accuracy: We implement data validation and cleaning techniques to minimize errors and inconsistencies in the data (see the sketch after this list).
Completeness: We strive for comprehensive datasets to avoid biases caused by missing information.
Relevance: We select data that aligns with the specific purpose of the AI model being trained.
Fairness: We analyze the data for potential biases and take steps to mitigate them. This might involve adjusting data collection practices or employing debiasing algorithms.
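To illustrate the accuracy checks above, here is a minimal validation-and-cleaning pass in pandas; the field names and rules are hypothetical.

```python
import pandas as pd

def validate_records(df: pd.DataFrame) -> pd.DataFrame:
    """Basic validation and cleaning pass; the rules are illustrative."""
    df = df.drop_duplicates()
    df = df.dropna(subset=["company_id", "rating"])  # required fields
    df = df[df["rating"].between(0, 100)]            # plausible range check
    return df

raw = pd.DataFrame({
    "company_id": [1, 1, 2, None],
    "rating": [55, 55, 250, 40],
})
print(validate_records(raw))  # keeps only the single valid record
```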
Do you have measures in place to handle data privacy and security?
Data privacy and security are top priorities. We have robust measures in place, which include:
Data Anonymization: We anonymize all data before using it for training or operation (see the sketch after this list). This protects user privacy and ensures compliance with data privacy regulations.
Access Controls: We implement strict access controls to restrict access to sensitive data.
Security Protocols: We adhere to industry-standard security protocols to safeguard data from cyberattacks and unauthorized access.
Regular Audits: We conduct regular audits of our data security practices to identify and address any vulnerabilities.
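As an illustrative, non-production example of the anonymization step, the sketch below redacts common PII patterns with regular expressions; real pipelines combine such rules with dedicated PII-detection tooling.

```python
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or 555-123-4567."))
# -> "Contact [EMAIL] or [PHONE]."
```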
In addition, we strive for transparency regarding data usage. We provide users with clear information about how their data is used in our AI models, and we may offer users options to control or restrict the use of their data for specific purposes.
How do you validate and test your AI systems?
Rigorous testing and validation are cornerstones of our AI development process, especially for credit models that rely on credit data and news. The following processes ensure the reliability and robustness of our AI systems:
Data Splitting: We split our data into training, validation, and testing sets (see the sketch after this list). The training set teaches the model, the validation set helps fine-tune hyperparameters to avoid overfitting, and the unseen testing set provides an unbiased assessment of the model’s generalizability.
Performance Metrics: We employ a battery of performance metrics relevant to the specific application. For credit models, this might include accuracy, precision, recall, F1 score, and Area Under the ROC Curve (AUC-ROC). These metrics tell us how well the model distinguishes between creditworthy and non-creditworthy borrowers.
Stress Testing: We stress test our models with extreme or unexpected data points to assess their resilience in unforeseen situations. This helps us identify potential weaknesses and improve the model’s ability to handle edge cases.
Backtesting: For credit models, we can backtest the model’s performance on historical data to see how it would have performed in the past. This helps assess the model’s effectiveness and identify potential biases.
Human-in-the-Loop Testing: We integrate human review into the testing process. Domain experts evaluate the model’s outputs and identify cases where the model might be making inaccurate or unfair decisions. This human oversight mitigates potential risks.
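Here is a minimal sketch of the splitting-and-metrics workflow above, using scikit-learn on synthetic placeholder data; a real credit model would use credit features and labels.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Hold out a test set, then carve a validation set from the remainder.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print(f"accuracy:  {accuracy_score(y_test, pred):.3f}")
print(f"precision: {precision_score(y_test, pred):.3f}")
print(f"recall:    {recall_score(y_test, pred):.3f}")
print(f"F1:        {f1_score(y_test, pred):.3f}")
print(f"AUC-ROC:   {roc_auc_score(y_test, proba):.3f}")
```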
How do you ensure the reliability and consistency of your AI system’s performance?
We approach this through:
Model Monitoring: We continuously monitor the performance of deployed models in production (see the drift-detection sketch after this list). This allows us to detect any performance degradation or shifts in the data distribution that might affect the model’s accuracy over time.
Model Retraining: Based on monitoring, we may retrain the model with new data to maintain its accuracy and effectiveness.
Version Control: We maintain a clear version control system for our models. This allows us to track changes, revert to previous versions if necessary, and ensure consistency across deployments.
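One common way to implement the monitoring step is a distribution-shift test between training data and live data. The sketch below applies a two-sample Kolmogorov-Smirnov test per feature; the threshold and data are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> list[int]:
    """Flag feature columns whose live distribution differs from training."""
    return [
        i for i in range(train.shape[1])
        if ks_2samp(train[:, i], live[:, i]).pvalue < alpha
    ]

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))
live = train + np.array([0.0, 0.5, 0.0])  # feature 1 has shifted

print(drifted_features(train, live))  # likely [1]; may trigger retraining
```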
What mechanisms do you have in place to detect and handle edge cases or unexpected scenarios?
We approach this through:
Scenario Testing: We develop scenarios that represent potential edge cases or unexpected situations and test the model’s behavior in them to identify and address potential issues (see the sketch after this list).
Human Oversight: We maintain human oversight capabilities within the system. Humans can intervene in critical situations or when the model’s output is deemed unreliable.
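As a minimal illustration of scenario testing, edge cases can be encoded as automated tests that run against every release; `model_predict` here is a hypothetical stand-in for a deployed model's scoring wrapper.

```python
import math

import pytest  # assumed test runner

def model_predict(features: dict) -> float | None:
    """Hypothetical stand-in for the deployed model's scoring wrapper."""
    if any(isinstance(v, float) and math.isnan(v) for v in features.values()):
        return None  # explicit refusal on missing data
    debt, revenue = features["debt"], features["revenue"]
    if revenue < 0:
        return None  # refuse malformed input
    return min(1.0, debt / (debt + revenue + 1.0))

@pytest.mark.parametrize("scenario", [
    {"revenue": 0.0, "debt": 1e9},           # extreme leverage
    {"revenue": -5.0, "debt": 0.0},          # malformed negative revenue
    {"revenue": float("nan"), "debt": 1.0},  # missing data propagated as NaN
])
def test_model_handles_edge_cases(scenario):
    result = model_predict(scenario)
    # The model must return a bounded score or an explicit refusal,
    # never crash or emit an out-of-range value.
    assert result is None or 0.0 <= result <= 1.0
```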
What level of human oversight and control is involved?
We believe in responsible AI when dealing with sensitive credit data and news. Here’s how we ensure a healthy balance between AI power and human oversight:
Human-in-the-Loop Approach: We primarily follow a “human-in-the-loop” approach. Our AI models generate outputs, but our experts have the final say in credit decisions.
Expert Review: Subject Matter Experts (SMEs) review the AI’s outputs, considering the unique circumstances and supporting evidence. This mitigates potential bias from the model and ensures sound judgment.
Can users override or modify AI-generated outputs if needed?
We approach this through:
Explanation and Transparency: Users can track citations and underlying sources for AI-generated outputs. This allows them to understand the factors influencing the decision and identify potential areas for discussion.
Dispute Process: We have a clear dispute process in place. Users can contest AI outputs if they believe there are inaccuracies or extenuating circumstances not captured by the model. Our financial and legal experts will then review the case and make a final judgment.
What mechanisms do you have in place for human escalation and error correction?
We approach this through:
Clear Escalation Channels: We provide clear channels for users to escalate concerns about AI outputs. This allows for swift human intervention when necessary.
Error Correction and Feedback Loop: We have a feedback loop in place. If human experts identify errors in the AI’s outputs, these get logged and fed back into the model training process. This helps us continuously improve the model’s accuracy and fairness.
Algorithmic Bias Monitoring: We actively monitor our AI models for potential biases that might creep in over time due to concept drift, covariate drift, or context drift. We can then take corrective measures, such as data debiasing techniques or model retraining, to address any identified biases.
What channels do you have for customers to provide feedback or report issues related to your AI system?
We value customer feedback and take it seriously, especially when it comes to AI systems that utilize credit data and news. We take a customer-centric approach and provide multiple channels for feedback and issue reports:
In-App Feedback Forms: We integrate user-friendly feedback forms directly within our applications. This allows users to conveniently report issues or share their experience with AI-generated outputs.
Dedicated Customer Support: We have a customer success team trained to address concerns about AI decisions. They can gather details, escalate issues, and provide clear explanations with the help of the AI team.
How do you incorporate customer feedback into your AI development and improvement process?
Here’s how we analyze feedback and act on it:
Categorization and Analysis: Our Product teams categorize and analyze all customer feedback to identify trends and recurring issues. This helps us pinpoint areas needing improvement within our AI models.
Root Cause Analysis: We conduct root cause analysis to understand the reasons behind customer concerns. This might involve reviewing specific data points or investigating the model’s decision-making process for a particular case.
Prioritization and Action: Based on analysis, we prioritize issues and take appropriate actions. This could involve model retraining, bias mitigation techniques, improved explanations, or adjustments to user interfaces for better transparency.
Here’s what our feedback loop for improvement looks like:
Transparency and Communication: We strive to be transparent with customers about how their feedback is used. We may share general insights gleaned from feedback or provide updates on how their input has led to improvements.
Continuous Learning: Customer feedback is a valuable source of real-world data. We incorporate this data into our AI development and improvement process, allowing our models to continuously learn and adapt to better serve customer needs.
What compliance and regulatory standards do you adhere to?
We have adopted the NIST Cybersecurity Framework (CSF), a comprehensive and widely recognized set of guidelines and best practices for managing cybersecurity risks. Adhering to the NIST CSF helps ensure that we maintain a robust and holistic approach to security across our organization.
Additionally, we are currently undergoing an SOC 2 Type I audit, which evaluates our system’s design against the Trust Services Criteria for security, availability, and confidentiality. This independent assessment validates that our AI platform has the necessary controls in place to meet these critical security principles.
Does your AI system comply with relevant industry regulations and standards?
Yes, regulatory compliance is a top priority for our AI system. We closely monitor and align with industry-specific standards and guidelines related to AI development, deployment, and use. This includes adhering to best practices around data governance, model transparency, fairness, and accountability.
Our SOC 2 audit further demonstrates our commitment to meeting strict security and compliance requirements, giving our customers added assurance that our AI system operates within necessary guardrails.
How do you ensure compliance with data protection and privacy laws, such as GDPR or CCPA?
We have implemented comprehensive data protection measures to fully comply with applicable privacy regulations like GDPR and CCPA. This includes obtaining proper consents, providing clear disclosures about data collection and use, and honoring data subject rights.
Our AI system is designed with privacy by default, ensuring that data is collected, processed, and stored securely and only for legitimate purposes. We also conduct regular privacy impact assessments to identify and mitigate potential risks.
Data protection is a core part of our SOC 2 audit, which attests to the presence and effectiveness of our privacy controls.
What security measures do you have in place?
We employ a defense-in-depth approach to secure our AI system, with multiple layers of protection including:
Encryption of data in transit and at rest (see the sketch after this list)
Strict access controls and authentication mechanisms
Continuous monitoring and logging to detect anomalous activities
Regular vulnerability scanning and penetration testing
Security awareness training for all staff
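As a minimal illustration of encryption at rest (not our actual key-management setup, which would rely on a KMS), here is symmetric encryption with the `cryptography` library.

```python
from cryptography.fernet import Fernet

# Key generation and storage would be handled by a KMS in practice.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"sensitive credit note")  # ciphertext for storage
assert cipher.decrypt(token) == b"sensitive credit note"
```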
Our adherence to the NIST CSF ensures that we are implementing security best practices across all critical functions: Identify, Protect, Detect, Respond, and Recover.
How do you protect your AI system against potential vulnerabilities, attacks, or misuse?
We take a proactive approach to identifying and mitigating risks to our AI system. This includes:
Conducting thorough security testing and code reviews during development
Implementing strong input validation and output filtering to prevent common attacks like SQL injection or cross-site scripting (see the sketch after this list)
Regularly monitoring for new AI-specific threats and vulnerabilities
Applying the principle of least privilege to limit the potential impact of any compromise
Building in safeguards against misuse, such as rate limiting and strict usage policies
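To illustrate the input-validation and output-filtering point above, the sketch below shows two standard defenses: parameterized SQL instead of string concatenation, and HTML-escaping untrusted output. The schema and names are illustrative.

```python
import html
import sqlite3

def find_company(conn: sqlite3.Connection, name: str):
    # Parameterized query: user input is bound, never concatenated,
    # which prevents SQL injection.
    cur = conn.execute("SELECT id, name FROM companies WHERE name = ?", (name,))
    return cur.fetchone()

def render_comment(comment: str) -> str:
    # Escape untrusted text before embedding it in HTML to prevent XSS.
    return f"<p>{html.escape(comment)}</p>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (id INTEGER, name TEXT)")
conn.execute("INSERT INTO companies VALUES (1, 'Acme')")
print(find_company(conn, "Acme'; DROP TABLE companies;--"))  # safely returns None
print(render_comment("<script>alert(1)</script>"))
```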
Our upcoming SOC 2 report will attest to the design effectiveness of these AI security controls.
Do you have incident response plans in case of a security breach or adverse event related to your AI system?
Yes, we have a comprehensive incident response plan that is regularly tested and updated. Our plan outlines clear roles and responsibilities, communication protocols, and step-by-step procedures for containing, investigating, and recovering from security incidents.
We also have pre-established relationships with external incident response experts and legal counsel to ensure we can quickly mobilize assistance if needed.
Incident response is a key area assessed under our SOC 2 audit, providing assurance that we have the right capabilities to minimize harm during adverse events.