{"id":126150,"date":"2023-09-03T23:22:28","date_gmt":"2023-09-03T23:22:28","guid":{"rendered":"https:\/\/feedzai.com\/?p=126150"},"modified":"2024-04-09T09:12:33","modified_gmt":"2024-04-09T09:12:33","slug":"built-in-responsible-ai-how-banks-can-tackle-ai-bias","status":"publish","type":"post","link":"https:\/\/feedzai.com\/blog\/built-in-responsible-ai-how-banks-can-tackle-ai-bias\/","title":{"rendered":"Built-in Responsible AI: How Banks Can Tackle AI Bias"},"content":{"rendered":"
Many bank customers know that banks use artificial intelligence (AI) to make decisions. Yet they also want their bank to treat them fairly and without bias. With built-in Responsible AI, banks can be both fair and efficient in their AI decisions.

A common assumption is that making AI fairer means making it less effective. At Feedzai, experts in Responsible AI, we believe that’s a false trade-off. In this article, we’ll show how banks can use Responsible AI to be both fair and effective, and how it leaves banks free to choose the models that work best for them.

What is Built-in Responsible AI?

Responsible AI is a framework that ensures decisions reached by an AI or machine learning model are fair, transparent, and respectful of people’s privacy. The framework also empowers financial institutions with explainability, reliability, and human-in-the-loop (HITL) design that provides guardrails against AI risks. Built-in Responsible AI, meanwhile, offers banks a seamless pathway to implement fair AI and machine learning policies and procedures without compromising their system’s performance. Banks are presented with options that offer fairer decisioning, but they are not obligated to select those options and can choose the approach that works best for their purposes.

Why Built-in Responsible AI is Critical for Banks

Biases can arise at different stages of model building and training. As a model self-learns in production, it may develop biases its developers never intended. Beyond that, bias can creep in from a bank’s internal rules and from the humans responsible for making decisions about customers’ financial well-being. The result is that banks may deny important financial services, including access to bank accounts, credit cards, bill payments, or loan approvals, to qualified individuals. The exclusion is not deliberate, but because a machine learning model decided they are the “wrong” gender or come from a “high-risk” community, these customers find themselves unfairly excluded from the financial system.

Political leanings can also influence a bank’s decision-making. In the UK, for example, the government is investigating whether some customers are being “blacklisted” from critical financial services over their political views.

Every bank is committed to giving its customers the best possible service.
At the same time, banks want to treat every customer fairly and compassionately. As banks rely increasingly on artificial intelligence and machine learning for faster decision-making, they must be able to trust their models to meet both priorities.

As AI technology becomes more prevalent in financial services, banks will need to stay vigilant in monitoring for bias. With new AI-based technologies gaining prominence, this is a mission-critical mindset.

Case in point: a recent study on biases in generative AI showed that a text-to-image model depicted “high-paying” jobs, like “lawyer” or “judge,” as lighter-skinned men, while prompts like “fast-food worker” and “social worker” produced darker-skinned women. Unfortunately, in this example the AI is more biased than reality: for the keyword “judge,” the model generated women in only 3% of images, while in reality 34% of US judges are women. This illustrates the considerable risk of unintentional bias and discrimination in AI, which can negatively impact operations, public perception, and customers’ lives.

Consumers are increasingly aware that AI is used to generate answers on almost any topic and, ultimately, to help people make informed decisions faster. If they believe their bank has treated them unfairly, they may ask to see and understand the bank’s decision-making process.

The False Choice Between Model Fairness and Performance

Unfortunately, banks are often convinced that they must trade fraud detection performance for fairness, or optimize for maximum performance at the expense of fairness, because they lack an accurate way to measure both at once. As a result, many banks prioritize performance to protect their bottom lines, and model fairness and Responsible AI get treated as “nice-to-have” agenda items. But neglecting model fairness allows biases to creep into a bank’s models, even when that is never intended.

To put it mildly, this is a problematic approach for banks. Not only is it a false choice, it’s a risky one that can have harmful consequences if biases in their models are ignored for too long. If enough customers believe they were denied services because of their age, gender, race, zip code, or other socio-economic factors, the bank faces a significant public relations headache and possibly litigation.

How Feedzai Delivers Built-in Responsible AI for Banks

Feedzai has worked for years to eliminate the forced choice between model performance and model fairness. As pioneers in Responsible AI in the fraud and financial crime prevention space, we’re committed to changing this narrative.

Our culture of Responsible AI comes from the top down, with a team of passionate leaders dedicated to doing the right thing for customers. It’s an honor to have industry experts like IDC recognize Feedzai for our work.

Feedzai’s built-in Responsible AI capabilities give financial institutions the tools they need to tackle model bias before it gets out of hand. These tools enable banks to quantify bias, automatically identify fairer models, and optimize models for both fairness and performance.
How Built-in Responsible AI Works

Here are Feedzai’s key tools for built-in Responsible AI.
Bias Audit Notebook

A bank’s first obligation is to assess and measure the bias of its models. Feedzai’s built-in Responsible AI tools provide a bias audit notebook where banks can visualize and quantify any bias they uncover. Conducting a bias audit helps banks understand which attributes put them at risk of introducing bias, and the notebook lets them feed that information into the model selection process by favoring algorithms that maximize fairness. This enables banks to uncover and fix bias before it becomes a problem or a threat to the bank’s reputation.
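To make the idea of quantifying bias concrete, here is a minimal sketch of the kind of group-level comparison a bias audit performs. It is not Feedzai’s actual notebook; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical scored transactions: one row per customer decision.
# Column names are illustrative, not Feedzai's actual schema.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "A"],  # protected attribute
    "label":   [0,   0,   1,   0,   1,   0,   0,   0],    # 1 = actual fraud
    "flagged": [0,   1,   1,   1,   1,   1,   0,   0],    # 1 = model blocked it
})

# False positive rate per group: how often legitimate customers are blocked.
legit = df[df["label"] == 0]
fpr = legit.groupby("group")["flagged"].mean()

# A common audit statistic: ratio of the best-treated group's FPR to the
# worst-treated group's (1.0 means perfectly equal treatment).
predictive_equality = fpr.min() / fpr.max()
print(fpr, f"\nFPR ratio: {predictive_equality:.2f}")
```

A ratio well below 1.0 here would tell the bank that legitimate customers in one group are being blocked far more often than in another, which is exactly the kind of finding the audit is meant to surface.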
FairAutoML (Feedzai Fairband)

Banks can also automate the model selection process using Feedzai Fairband, an award-winning automated machine learning (AutoML) algorithm that quickly identifies less biased models requiring no additional training to implement. This means financial institutions can deploy the fairest models available without compromising performance. And while Fairband adjusts the hyperparameter optimization process to surface the fairest candidates, it doesn’t force banks to choose them by design. Banks still have the final say over which models to deploy.
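Fairband itself is a fairness-aware AutoML algorithm; the sketch below only illustrates the selection principle behind this kind of approach, under the assumption that candidate models from a hyperparameter search are scored on both performance and fairness. All model names and numbers are hypothetical.

```python
# Illustrative selection rule only -- among candidates within a small
# performance tolerance of the best, surface the fairest one.
candidates = [
    # (name, recall_at_fixed_fpr, fairness)  -- fairness: 1.0 = equal treatment
    ("model_a", 0.81, 0.55),
    ("model_b", 0.80, 0.92),
    ("model_c", 0.74, 0.98),
]

TOLERANCE = 0.02  # accept up to 2 points of recall below the best model

best_recall = max(recall for _, recall, _ in candidates)
near_best = [c for c in candidates if c[1] >= best_recall - TOLERANCE]

# Among near-equivalent performers, suggest the fairest; the bank decides.
name, recall, fairness = max(near_best, key=lambda c: c[2])
print(f"suggested: {name} (recall={recall}, fairness={fairness})")
```

In this toy example the rule surfaces model_b: nearly identical fraud detection to the top performer, but far fairer, which is the kind of candidate that would otherwise go unnoticed in a performance-only search.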
FairGBM

FairGBM is a constrained version of gradient-boosted trees that optimizes for both predictive performance and fairness between groups, without compromising one for the other. Because it is built on the LightGBM framework, FairGBM is fast and scales to training on millions of data points, an essential requirement for financial services. An open-source version is also available for non-commercial and academic use, in service of the mission of minimizing bias. You can learn more in our FairGBM publication.
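As a minimal sketch, training with the open-source fairgbm Python package (pip install fairgbm) looks roughly like the following, assuming its scikit-learn-style interface; the synthetic data is purely illustrative, and the exact API should be verified against the package’s documentation for your version.

```python
import numpy as np
from fairgbm import FairGBMClassifier  # open-source package, assumed API

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))      # synthetic features
y = rng.integers(0, 2, size=1000)   # 1 = fraud label
s = rng.integers(0, 2, size=1000)   # protected-group membership

clf = FairGBMClassifier(
    constraint_type="FNR",  # equalize false negative rates across groups
    n_estimators=200,
)
# The protected attribute steers the fairness constraint during training
# but is not used as a predictive feature.
clf.fit(X, y, constraint_group=s)
scores = clf.predict_proba(X)[:, -1]
```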
Whitebox Explanations

Underpinning any machine learning technique is the need for transparent, explainable decisions. A model’s decisions must be explainable to regulators, managers, and even consumers. All of Feedzai’s machine learning models come with Whitebox Explanations: straightforward, human-readable text that justifies each model decision.
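Feedzai’s explanation engine isn’t public, but the general idea of turning a model’s largest per-feature score contributions into readable text can be sketched as follows; the feature names and contribution values are hypothetical.

```python
# Not Feedzai's engine -- a minimal sketch of converting a model's largest
# per-feature score contributions into a human-readable justification.
contributions = {  # hypothetical signed contributions to the risk score
    "amount_vs_90day_avg": +0.41,
    "new_device":          +0.22,
    "account_age_days":    -0.05,
}

# Keep the two features that moved the score the most, in either direction.
top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:2]
reasons = " and ".join(
    f"'{name}' {'raised' if value > 0 else 'lowered'} the risk score"
    for name, value in top
)
print(f"Transaction flagged because {reasons}.")
# -> Transaction flagged because 'amount_vs_90day_avg' raised the risk score
#    and 'new_device' raised the risk score.
```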
Together, these components give banks the essentials they need to uncover biases in their models without compromising on model performance.

How Banks Benefit from Built-in Responsible AI

You can’t fix what you can’t measure. Feedzai’s built-in solutions for Responsible AI give banks the tools they need to uncover bias in their models and respond appropriately: they can quantify bias, deploy fairer models, and protect both performance and their reputation.

It’s important to note that while Feedzai helps banks identify and respond to bias in their models, banks ultimately have the final say over which models they deploy. Feedzai’s built-in Responsible AI tools give banks the choice but don’t require an organization to follow their recommendations. We give banks the ethical compass; it’s up to them to navigate toward their goals.

Feedzai’s suite of built-in Responsible AI tools gives banks a roadmap to demonstrate their commitment to fairness without compromising performance. It’s a simple step toward winning customer trust and long-term loyalty.