Mitigating AI Audit Bias


American Express is piloting a generative AI model designed to assist auditors by automatically summarizing company financial statements. During internal validation on a test set of 200 companies, the audit team noticed that the AI-generated summaries tend to be less accurate and sometimes misleading for smaller firms, while summaries for larger firms are generally more reliable.

This accuracy gap raises concerns about potential bias in the model, which could undermine audit quality and regulatory compliance, especially for small business clients.

Given this scenario, how would you systematically identify the sources of this bias in the AI model’s outputs? Additionally, what practical steps would you recommend to mitigate these biases and ensure fair and accurate summarization for companies of all sizes?

Note: Consider both data-driven and model-centric approaches, and discuss how you would validate that your mitigation strategies are effective.
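One concrete starting point for a strong answer is a stratified evaluation: score each summary, disaggregate the scores by firm-size bucket, and test whether the observed gap exceeds sampling noise before attempting mitigation. The sketch below illustrates this with a bootstrap confidence interval; all data, bucket labels, and score ranges are synthetic and purely illustrative, not from the actual pilot.

```python
import random

# Hypothetical per-company records: a firm-size bucket and a summary
# accuracy score in [0, 1] (e.g., the fraction of key figures the AI
# summary reproduced correctly). Synthetic data for illustration only.
random.seed(0)
records = (
    [{"size": "small", "accuracy": random.uniform(0.55, 0.80)} for _ in range(100)]
    + [{"size": "large", "accuracy": random.uniform(0.80, 0.95)} for _ in range(100)]
)

def mean_accuracy(rows, bucket):
    scores = [r["accuracy"] for r in rows if r["size"] == bucket]
    return sum(scores) / len(scores)

def bootstrap_gap_ci(rows, n_boot=2000, alpha=0.05):
    """Bootstrap a confidence interval for the large-minus-small accuracy gap."""
    gaps = []
    for _ in range(n_boot):
        sample = [random.choice(rows) for _ in rows]
        try:
            gaps.append(
                mean_accuracy(sample, "large") - mean_accuracy(sample, "small")
            )
        except ZeroDivisionError:
            # A resample may occasionally miss a bucket entirely; skip it.
            continue
    gaps.sort()
    lo = gaps[int(alpha / 2 * len(gaps))]
    hi = gaps[int((1 - alpha / 2) * len(gaps)) - 1]
    return lo, hi

gap = mean_accuracy(records, "large") - mean_accuracy(records, "small")
lo, hi = bootstrap_gap_ci(records)
print(f"gap={gap:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

If the interval excludes zero, the size-related gap is unlikely to be sampling noise, which justifies data-level fixes (rebalancing or augmenting small-firm examples) and model-level fixes (size-aware fine-tuning or calibration); re-running the same stratified evaluation after each change is one way to validate that a mitigation actually closed the gap.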
