ChatGPT Revolutionizes Retail Banking…Soon

The world is buzzing with excitement about Large Language Models (LLMs) and Generative AI thanks to ChatGPT – the new cool (and scary, for some) AI technology! 

ChatGPT generates human-like text based on prompts it receives, and can even write poems. Dozens of articles have been published speculating on how Generative AI will change the world. From replacing Google to the advent of artificial general intelligence (AGI), there’s no shortage of predictions.  

As a bank, you may rightly wonder if you can implement ChatGPT in particular, and generative AI in general, to improve your customer experience and increase operational efficiency. 

The good news is that you can likely do both of these things – and here’s the important part – in time. If an ounce of prevention is worth a pound of cure, then now is a critical time to take a step back and understand the current limitations and risks of ChatGPT. Once you know them, you will know what needs to be built on top of it to use it safely in retail banking. 

Understanding ChatGPT’s Risks, Limitations, and Usefulness for Retail Banking

What better way to understand ChatGPT than to “speak” to it myself? I interviewed ChatGPT about its potential role in retail banking. My questions focused on Responsible AI – a set of principles and practices for developing AI innovations responsibly, and an important tool in a bank’s Environmental, Social, and Governance (ESG) initiatives.  

Below are the prompts I gave ChatGPT, and its answers. Additionally, I’ve provided a brief analysis of its answers to each question to illustrate the strengths, weaknesses, and potential risks of generative AI. 

Explain Generative AI, Large Language Models, and ChatGPT

Generative AI refers to a type of artificial intelligence that is capable of generating new and original content, such as images, text, music, and more. This is in contrast to other forms of AI that are designed to recognize, classify, or process existing data.
Large Language Models (LLMs) are a type of generative AI that use deep learning algorithms to analyze and generate text. They are trained on massive amounts of text data and can be used for a variety of tasks, such as language translation, question answering, and text generation.
ChatGPT is a specific type of large language model developed by OpenAI. It is designed to respond to user input in natural language, making it useful for applications such as customer service chatbots, virtual assistants, and text-based games.

Grade: A. ChatGPT provided a straightforward answer for a general audience, and didn’t get too technical. Moreover, the answer was both accurate and truthful.  

How can banks use ChatGPT in retail banking?

ChatGPT can be used in retail banking in several ways:

Customer Service: ChatGPT can be integrated with banking websites or mobile apps to provide 24/7 customer support. Customers can get quick answers to their queries without having to wait for a human representative.

Fraud Detection: ChatGPT can be used to analyze customer transactions and detect suspicious activity, flagging potential fraud in real time.

Personalized Marketing: ChatGPT can be used to generate personalized product recommendations and marketing messages based on customer data and behavior.

Investment Advice: ChatGPT can be integrated with robo-advisory platforms to provide personalized investment advice to customers based on their financial goals and risk tolerance.

Loan Processing: ChatGPT can be used to automate the loan processing and approval process, reducing the time and effort required to process loan applications.

Chatbots: ChatGPT can be used to build conversational chatbots that can handle customer queries and provide relevant information, improving the overall customer experience.

Again, ChatGPT provides an answer that appears confident. However, it lacks depth. A user must be an expert and provide nuanced prompts to gain a more complete picture. To accomplish this, I changed the prompt to focus on safety.  

Is it safe to use ChatGPT in retail banking? What are its limitations and ethical risks?

No, it is not safe to use ChatGPT directly in retail banking. ChatGPT is an AI language model that is not designed to handle sensitive financial or personal information and may not have the necessary security measures to protect this data.

The limitations and ethical risks of using ChatGPT in retail banking include:

Security: The lack of built-in security measures to protect sensitive financial and personal information.

Accuracy: ChatGPT may provide incorrect or incomplete information, which could result in financial harm to customers or banks.

Bias: ChatGPT is trained on a large dataset and may reflect societal biases in its responses, leading to discriminatory outcomes in retail banking.

Responsibility: The use of ChatGPT in retail banking raises questions about accountability and responsibility for its actions and decisions, particularly in the event of harm to customers or financial losses.

Regulation: There may be legal or regulatory restrictions on the use of AI technology in retail banking.

Given these limitations and ethical risks, it is recommended to use ChatGPT technology as part of a larger system with appropriate security measures and to be transparent with customers about the limitations and potential risks of using AI in retail banking.

ChatGPT seems to have generic safeguards and answers cautiously about its own safety. Let’s dig a little deeper to gain an adequate understanding of the risks involved.  

What type of bias might ChatGPT have, and how can it harm retail banking customers?

ChatGPT, being an AI language model trained on a large dataset, may reflect various types of biases that exist in the data it was trained on. These biases can harm retail banking customers in different ways:

Demographic bias: If the training data reflects a biased distribution of demographic characteristics, ChatGPT may produce discriminatory outcomes in retail banking, such as denying loans or other financial products to certain groups of people based on their race, gender, or other demographic factors.

Conceptual bias: If the training data reflects biased assumptions about what constitutes a typical customer or typical financial product, ChatGPT may make incorrect assumptions about customers’ needs and preferences, leading to poor recommendations or decisions.

Historical bias: If the training data reflects historical patterns of discrimination or bias in financial services, ChatGPT may produce discriminatory outcomes in retail banking, such as denying loans or financial products to certain groups of people.

Algorithmic bias: If the algorithms used to train ChatGPT reflect biased assumptions or techniques, it may produce biased outputs in retail banking, such as making discriminatory decisions or predictions.

These biases can result in harm to retail banking customers, such as denial of loans, overcharges, and other financial losses, as well as violations of privacy and discrimination. It is important to address these biases and to take steps to ensure that AI systems used in retail banking are fair, transparent, and accountable.

It’s an excellent response that summarizes the harm caused not just by large language models trained on potentially biased data, but by any machine learning algorithm that does not employ a responsible AI approach. 

How can Responsible AI make Generative AI safer for future use in retail banking?

Responsible AI practices can help make Generative AI, including language models like ChatGPT, safer for use in retail banking and other industries by reducing the risks associated with their deployment. Some ways that Responsible AI practices can help include:

Aligning AI with human values and ethical principles: Ensuring that the AI systems are designed and trained to align with human values, ethical principles, and the interests of stakeholders, can help mitigate potential harm.

Monitoring and mitigating bias: AI systems can contain biases, which can harm certain groups of people or perpetuate harmful stereotypes. Responsible AI practices can help identify and mitigate these biases through regular audits and evaluations of the training data, models, and outputs.

Ensuring transparency and interpretability: Making AI systems transparent and interpretable can help users understand how the AI is making decisions, which can increase trust and reduce the risks associated with their deployment.

Providing human oversight and control: Including human oversight and control mechanisms in AI systems can help mitigate the risks of unintended consequences, and allow for intervention if necessary.

Adopting responsible data management practices: Responsible data management practices, such as protecting privacy and ensuring data security, can help mitigate the risks associated with the deployment of AI in retail banking and other industries.

By adopting Responsible AI practices, AI developers and organizations can help make Generative AI safer for future use in retail banking and other industries, and increase the benefits and minimize the risks associated with their deployment.

The important takeaway here is that I knew to ask about Responsible AI. Many think that ChatGPT will provide them with all the answers. But it can’t answer questions people don’t think to ask. Human expertise and partnership are essential in extracting the most benefits from any technology. 

Can I trust ChatGPT’s answers?

ChatGPT is an advanced language model developed by OpenAI that generates text based on patterns it learned from a vast corpus of text data. However, the accuracy and reliability of its answers depend on several factors, such as the quality of the training data and the context of the input.

While ChatGPT has been trained on diverse text data, it may still contain biases and inaccuracies inherent in the training data. Additionally, its responses may not always be accurate, complete, or up-to-date.

It’s essential to evaluate the responses generated by ChatGPT critically and not blindly trust the information it provides.
It’s recommended to use ChatGPT as a tool to generate text or information, but not as a sole source of truth or advice.
It’s always a good practice to verify the information generated by ChatGPT with other sources and consult with experts where necessary.

A notable issue with ChatGPT is that it does not cite its sources. Imagine spouting information as fact without being able to produce evidence of that fact. It might work in politics, but not in retail banking. 

Generative AI and Large Language Models Like ChatGPT Will Revolutionize Retail Banking

Generative AI will change the world. It’s a bold statement, but arguably anyone reading the above “conversation” will agree. You know what else we all pretty much agree on? Change is scary. It can also be absolutely incredible. It can benefit humans for generations to come. 

I embrace the change that Generative AI, powered by foundation models like large language models, will bring – wholeheartedly and cautiously. At this stage, the risks of deploying foundation models without tailored safeguards to retail banking customers still outweigh the benefits. Some of the limitations and risks include untruthful content, privacy leaks, biases, and lack of transparency. But that will improve. 

If there is one great takeaway from investigating the rapid development of large language models, it is that right now – right this very second – banks must invest in Responsible AI. Implementing Responsible AI tooling and methodologies will enable the safe adoption of generative AI innovations that improve customer experiences, increase operational efficiency, and fulfill ESG obligations. 

ChatGPT Creates New Data Science Challenges

ChatGPT can spit out a fraudulent email from Amazon in a matter of seconds. It can pass an exam from an Ivy League business school’s MBA program. And it can even write a blog post – like this one – with just a few clicks. These capabilities are going to make it much harder for everyday people to distinguish what is real from what is fake. This means ChatGPT is going to create a host of challenges and opportunities for the data science community.

Text generated by ChatGPT appears in quotation marks.

I may be a data scientist, but I’m also human. So of course, I asked ChatGPT to answer the questions I had about how it will impact the data science field.

Here’s what it had to say:

Question: What is the implication of Generative AI on data science?

“Generative AI, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), has the potential to impact data science in several ways. One of the most significant implications is the ability to generate synthetic data. This can be useful for a variety of tasks, such as training machine learning models, creating simulations, and augmenting limited datasets. Additionally, generative models can be used for tasks such as anomaly detection, image and text generation, and even drug discovery. However, it is important to note that the use of generative AI also raises ethical and societal concerns, such as the potential for misuse in areas such as deepfake generation.”

That last point is certainly worth unpacking further. ChatGPT is clearly capable of creating content rapidly. But a great deal of that content may not be true. And that’s where we need to be careful about how we use it – including in the data sciences.

ChatGPT-ing is the new ‘Googling’

Through a data science lens, one of the most concerning parts of Generative AI like ChatGPT is the lack of a second opinion. Ask it any question and not only does it answer you quickly, but it also writes high-quality responses with authority. After witnessing both the speed and authority of its responses, it’s challenging to push back against its answers. 

No wonder Big Tech giants like Google are pushing the “code red” button. After all, the program is going through a similar evolution to Google’s. When the search engine first debuted, some people were slow to adopt it. They could just check the yellow pages or a book for information. But it only took a few years for Google to become such an invaluable source of information that it morphed into a verb – Googling. The term is now part of our everyday parlance. Need to learn something fast? Just Google it! “How did you know that? I Googled it!”

Google’s ease of use had an unfortunate side effect: it significantly lowered the barrier to presenting oneself as an expert in any field. As a result, one of Google’s biggest strengths – its ease of use – became one of its biggest liabilities, making it easy for misinformation and fake news to spread online.

ChatGPT is going through a similar phase. It’s trained on data that only goes up to 2021, some of which may be outdated, reflect human biases, or simply be inaccurate. Just as it proved risky to treat Google as an unchallenged source of truth, the same skepticism should be applied to ChatGPT.  

The 4 Key Data Science Challenges of ChatGPT 

Given these capabilities, ChatGPT is poised to significantly impact both the data science field and broader society in four key ways. 

1. It’s harder to distinguish what’s real

Creating text-based content isn’t the limit of Generative AI’s capabilities. We’ve already discussed how the technology can be used to generate phishing emails, deepfakes, audio, and images. The technology can manipulate or fabricate images, videos, and audio with ease, making it far harder to spot fake news or misinformation when images can be repurposed or invented on the spot. Over time, the model could further propagate misinformation. This makes the data harder to trust, because you risk training a model on fabricated content that introduces bias into its decision-making. 

2. It will automate or replace certain data tasks

This point isn’t necessarily a negative. Let’s say you can’t make an informed prediction about a group or sample because you lack data. Generative AI can create synthetic data out of thin air to represent your target population. This is actually a common technique in data science and Generative AI could accelerate its practice.

However, data scientists will have to look at the generated data to ensure it is representative of real-life data. If the data generated under-represents certain groups, it could lead to unfair decisioning or inaccurate reports. 

Generative AI could automate certain data-handling tasks, including cleaning large volumes of text data. It can also enhance anomaly detection, making it easier to spot new fraud patterns. However, data scientists must constantly monitor these models to ensure they work as intended.

3. No profession is safe

Generative algorithms have almost everyone in every profession on edge. And with good reason. The technology could very well disrupt every profession from content creators like digital artists, bloggers, and journalists to knowledge-based professions like doctors, architects, engineers, and even lawyers. While this is a scary prospect, it’s also an exciting one. This technology can democratize so many things at once. It can write a story you have in your head in a few seconds, compose a song, or create a design with just a few descriptive words.

4. It creates a single source of truth

We’ve already seen technology unintentionally craft “echo chambers” by giving users information that aligns with their views. ChatGPT could accelerate that trend. Some people will disregard dissenting views or forgo facts in favor of ChatGPT outputs that align with their biases. 

4 Generative AI Guardrails for Data Scientists 

As with every other profession, Generative AI will have a big impact on the data science field. That’s why data scientists should adopt guardrails to ensure the integrity of their work. As the technology becomes more prevalent, data scientists should follow this framework:

  • Make sure data is reliable, valid, and bias-free: As we’ve noted, Generative AI is capable of creating data out of thin air. But it lacks the capacity to understand whether what it has created is accurate or true. Data scientists must review synthetic data and compare it with real-life data to confirm that it’s trustworthy. For example, always check the underlying data distributions to see whether bias has been introduced, or evaluate the performance of a model trained on the generated data (see the sketch after this list).
  • Embrace Explainability and Watch for AI Bias: Generative AI needs to be able to explain itself at each step. As we mentioned, the system doesn’t understand if it includes biased data. That’s why it needs to be able to explain its decisions in a way that makes it more transparent. Data scientists need to teach AI how to detect biases and make sure the data it uses is representative of the real world.
  • Implement security guidelines: As data scientists, we have an obligation to make sure anything created by Generative AI is not used for malicious purposes. Look at what happened with deepfake technology. Scientists created it to generate realistic video duplicates of real people. But now the technology is easily available and used in fraud. You may not intend for your creation to be used maliciously, but without the proper security guardrails in place, that’s what could happen.
  • Think ethically: As the adoption of this technology continues, we have to stay focused on our ethical obligations. Generative AI is already raising questions about intellectual property and plagiarism, and demonstrating how it can be abused for fraud. As data scientists, we must keep those obligations front and center. 
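To make the first of these guardrails concrete, here is a minimal sketch of the kind of checks described above. The file names, feature columns, and group columns are hypothetical; treat this as one possible workflow, not a prescription. It compares per-feature distributions and group representation between a real dataset and a synthetic one.

```python
# Minimal sketch: sanity-check synthetic data against real data before training.
# File names, feature names, and group columns below are hypothetical.
import pandas as pd
from scipy.stats import ks_2samp

real = pd.read_csv("real_transactions.csv")
synthetic = pd.read_csv("synthetic_transactions.csv")

# 1. Per-feature distribution drift: a large KS statistic means the synthetic
#    feature no longer tracks the real one.
for col in ["amount", "account_age_days"]:
    stat, p_value = ks_2samp(real[col], synthetic[col])
    print(f"{col}: KS={stat:.3f}, p={p_value:.3f}")

# 2. Group representation: flag groups the generator under- or over-represents.
for col in ["gender", "age_band"]:
    gap = (synthetic[col].value_counts(normalize=True)
           - real[col].value_counts(normalize=True)).abs()
    print(f"{col}: max representation gap = {gap.max():.1%}")
```

Large drift statistics or representation gaps are a signal to regenerate or rebalance the synthetic data before any model sees it.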

Even as a data scientist, I wondered if ChatGPT is coming for my job. So I asked it: 

“Generative AI has the potential to augment the work of data scientists, but it is unlikely to completely replace them in the near future. While AI models can be trained to perform various tasks, such as data analysis and prediction, they still require human oversight and interpretation.”

Phew! For now, at least…

Understanding FairGBM: Feedzai’s Experts Discuss the Breakthrough

Feedzai recently announced that we are making our groundbreaking FairGBM algorithm available via open source. In this vlog, experts from Feedzai’s Research team discuss the algorithm’s importance, why it represents a significant breakthrough in machine learning fairness beyond financial services, and why we decided to release it via open source.

Why is FairGBM a major breakthrough?

Pedro Saleiro: In many high-stakes domains, like financial services or healthcare, we train machine learning models with millions of events – truly big data. These machine learning models are used to complement humans and help them make better decisions. 

Pedro Saleiro: In fraud detection, sacrificing a few percentage points of model accuracy can have huge implications and cause severe monetary losses. Responsible AI is still in its early stages. There are still no “go-to” tools that are simple to use with good results. Either you have to sacrifice a lot of model performance to guarantee fairness, or the tools cannot scale to millions of data points. It becomes very cumbersome and very hard to use. 

Pedro Saleiro: With FairGBM, Responsible AI can be integrated into any machine learning operation. It optimizes for fairness, not just for performance. We leveraged the capabilities of LightGBM, meaning it can scale to millions of data points, is very fast to train, and has performance guarantees. We really expect that FairGBM can become a standard in Responsible AI. From now on, there are no excuses not to optimize for fairness when developing machine learning models. 

What was the problem you were originally trying to solve when developing FairGBM from scratch?

Catarina Belém: When we first thought about FairGBM, we realized that fairness is not being applied in practice in machine learning, especially in industry settings. This is of utmost importance because there are machine learning systems that are already helping humans make decisions about other humans like you and me. 

Catarina Belém: For example, some machine learning systems will dictate whether or not I get the mortgage I need on my house, or whether you get the loan you need. There’s no guarantee these systems will not discriminate against certain groups. This is very harmful, and we really wanted to change that – particularly for industry.

Catarina Belém: Although there are several fairness algorithms (and believe me, there are quite a few), they were typically inefficient and slow, or shipped in buggy frameworks. They also deteriorated performance significantly, to the point that companies had no confidence in deploying those models. 

Catarina Belém: That’s exactly the reason why we developed FairGBM. It’s built on top of LightGBM, so it’s blazing fast. FairGBM also guarantees the resulting machine learning model will make less biased predictions, while also keeping good performance. It also works for several groups, so we are pretty excited. We are making it open source, and we hope this helps boost the application of fairness systems in practice. 

How would you describe FairGBM?

André Cruz: FairGBM is a lightweight model that enables you to achieve state-of-the-art performance, with very high levels of fairness. For a slightly more technical view, it is based on the popular LightGBM algorithm and essentially adds fairness constraints to the training process. In other words, the model cannot discriminate based on gender, race, or any other sensitive attribute. We know that starting off with Fair ML can be daunting, but with FairGBM, we are aiming for an easy-to-use, plug-and-play approach to improve machine learning fairness on any machine learning pipeline. 
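As a rough illustration of that plug-and-play goal, here is a minimal sketch of what training with the open-source release can look like on toy data. The data here is an assumption for illustration only, and the exact parameter names (for example, constraint_type and constraint_group) should be verified against the FairGBM GitHub README for the version you install.

```python
import numpy as np
from fairgbm import FairGBMClassifier  # pip install fairgbm

# Toy data: features X, binary labels Y, and a binary sensitive attribute S.
rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 8))
S = rng.integers(0, 2, size=10_000)
Y = (X[:, 0] + 0.5 * S + rng.normal(size=10_000) > 0).astype(int)

clf = FairGBMClassifier(
    constraint_type="FNR",  # ask for equal group-wise false negative rates
    n_estimators=200,       # familiar LightGBM-style hyperparameters
    random_state=42,
)
clf.fit(X, Y, constraint_group=S)  # the sensitive attribute enters only as a constraint
scores = clf.predict_proba(X)[:, -1]
```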

How does it compare with other Fair ML methods?

André Cruz: Other state-of-the-art Fair ML methods can be excellent in achieving high fairness. But they do so with steep drops in predictive performance or take exceedingly long to train. FairGBM is actually over 10 times faster than the most popular Fair ML method available. What this means in practice is that even though other state-of-the-art methods may lead to reasonably good results, they were seldom used in the real world. This is because they were either too slow or they sacrificed too much performance to be of use. 

Personally, how was your experience working on this innovation?

Catarina Belém: It was definitely a long and complex project, but it certainly paid off. It’s very rewarding to see such an endeavor reaching production, getting deployed, and having an impact in the real world. Not only that, we are also open sourcing this fundamental research that we started on a piece of paper. It can help millions of people. We are super excited and we invite you to use it in your applications to change the world for the better. 

Why are we making the source code available?

Pedro Saleiro: When we started this project, our goal was to develop an in-processing Fair ML algorithm that gives you performance guarantees, is easy to use, and can scale to millions of data points. In the process of researching it, we realized that we were developing a general purpose solution, not something specific just for financial services. 

Pedro Saleiro: When we started evaluating the results of FairGBM in real-world situations, the results were strong across all datasets. We realized this could really become a standard algorithm – a reference everyone could reach for when developing machine learning systems that are also optimized for fairness. We realized that we had to share these capabilities with the world. That’s why we made the FairGBM source code available on GitHub. We invite others – nonprofits, research groups, universities, other organizations – to try it, to contribute to it, and to contact us if they’re interested in using this algorithm in their own products and services.

Are you ready to see how Feedzai’s FairGBM can deliver model fairness? Schedule a demo with our team to get started. 

Feedzai Releases and Open Sources FairGBM Algorithm

Today, we’re thrilled to announce the next innovation in our commitment to delivering Responsible AI: Feedzai’s FairGBM. It is a game-changing algorithm that makes Fair Machine Learning accessible to all. FairGBM is available in our products and through an open-source release for non-commercial uses.

How FairGBM Enables Responsible AI 

Right this minute, there’s a bank using a biased algorithm that will deny someone a much-needed loan. The decision is not because the applicant isn’t qualified. They are. It’s because they have the “wrong” zip code to be approved for a loan. At Feedzai, this doesn’t sit well with us. So we did what we were born to do – we developed innovations to solve a problem. 

FairGBM represents a significant new Responsible AI breakthrough. Feedzai’s FairGBM is a general-purpose algorithm that trains models to simultaneously optimize both predictive performance and fairness. In other words, banks don’t have to choose between fair models and models that perform well. FairGBM can be trained on millions of data points while optimizing for both goals. Fairness and accuracy now go hand in hand. 

But we haven’t even gotten to the best part: we’re open sourcing it for non-commercial uses!!

Why We’re Open Sourcing Feedzai’s FairGBM

FairGBM was initially developed for banks, FinTechs, and other financial institutions. But FairGBM is generic by design. This means that while it was developed by Feedzai, it isn’t limited to financial services and financial crime prevention. It can be used by any organization that is committed to delivering model fairness at scale regardless of decision-making task or who their end-users are.

When we realized that FairGBM can be used by any domain with any type of data, that’s when the spark was ignited. We knew we could advance Responsible AI practice by making this important breakthrough accessible and available to not only Feedzai customers through our products, but to the rest of the world through an open-source release for non-commercial uses. FairGBM has the potential to improve the AI practice of numerous organizations outside of financial services, including how healthcare providers, government agencies, and non-profit groups deliver their services.

The FairGBM open-source release is available in our FairGBM GitHub repo. We invite you to try it!

How FairGBM Works

We’ve invested time, resources, and passion into mitigating AI bias and I’m happy to say, it’s working.  Last spring, we introduced Feedzai Fairband, an AutoML algorithm that automatically discovers less biased machine learning models with zero additional model training costs. Fairband went on to win a slew of awards. It also introduced Responsible AI to many leaders in the financial services sector and proved how it’s both accessible and attainable for financial services organizations worldwide.

Fairband tweaks the hyperparameter optimization process to find the fairest model possible but does not enforce models to be fair by design. FairGBM is a constrained version of gradient boosted trees that optimizes for both performance and fairness between groups. And it does so without sacrificing predictive performance. 
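In rough strokes (a simplified sketch of the idea, not the paper’s exact formulation), FairGBM trains a model $f$ against a constrained objective:

$$
\min_{f}\; \mathcal{L}(f)
\quad \text{subject to} \quad
c_g(f) \le \epsilon \quad \text{for every group } g,
$$

where $\mathcal{L}$ is the usual predictive loss and each $c_g$ bounds a group-wise error rate, such as the gap between a group’s false positive rate and the global one. In practice this is optimized as a Lagrangian min-max game,

$$
\min_{f} \max_{\boldsymbol{\lambda} \ge \mathbf{0}} \; \mathcal{L}(f) + \sum_{g} \lambda_g \, c_g(f),
$$

with the step-wise constraints replaced by smooth proxy losses so that gradient boosting can optimize them.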

Because it was built on the LightGBM framework, FairGBM is fast and scales to training models on millions of data points – an essential requirement for financial services. 

Feedzai’s Commitment to Customer Success and Fairness

At Feedzai, we are 100% committed to our customers’ success. It’s the “why” for all the innovations we develop. Our customers are under pressure from their customers to consider environmental, social, and governance (ESG) issues in their business practices. Providing FairGBM in our products is just one way we ensure our clients’ success. Our clients can demonstrably prove to their customers that they have Responsible AI frameworks in place, and that customers are treated fairly based on their actions – not their zip code, age, gender, or race. 

We demonstrated our commitment to Responsible AI with the launch of Fairband. Developing FairGBM and making it open source for non-commercial uses is the next logical step in our efforts to make financial services fairer and to advance Responsible AI for everyone.

FairGBM ushers in a new era of AI fairness. We can’t wait to see how it makes financial services – and many other industries – fairer for everyone!

If you want to learn more about Feedzai’s world-changing research and innovations, visit Feedzai Research.  

Why Responsible AI Should be Table Stakes in Financial Services

As artificial intelligence (AI) becomes increasingly used in financial services, it’s essential that financial institutions (FIs) trust the technology to work as intended and that it aligns with their ethical values. Implementing Responsible AI principles is not only the most effective way FIs can protect their customers and their brand from misbehaving AI – it’s also the right thing to do.

Why Responsible AI Matters

It is very easy, even unintentionally, to develop AI that does not work in production as you expect. The model that performed really well on the test dataset can make discriminatory decisions when in production, disproportionately hurting end customers from certain groups. Or if the model is not robust enough to respond to changes in the data – like a new bot attack – you will see an unexpected spike in fraud losses.

FIs count on AI and machine learning to enhance decision-making for a wide range of use cases, from customer relationship management to lending to new bank account applications or payments fraud detection. However, no organization should blindly rely on black-box AI.

AI bias often only becomes clear by checking its impact on smaller groups. An FI’s false positive rate for New York City might look good, for example. But when looking more closely at where customers live, the FI might discover its algorithm declines considerably more legitimate credit card transactions from Brooklyn residents than from Manhattan residents – regardless of whether fraud really occurs more often in one area than the other. In the effort to prevent fraud, banks have allowed bias to infiltrate their systems, leaving legitimate customers vulnerable to discrimination.
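An audit like this is simple to run once decisions and outcomes are logged. Here is a minimal sketch with toy data and hypothetical column names: compute the false positive rate – legitimate transactions declined – per group and compare.

```python
# Minimal sketch: per-group false positive rates on toy decision logs.
import pandas as pd

df = pd.DataFrame({
    "borough":  ["Brooklyn"] * 4 + ["Manhattan"] * 4,
    "is_fraud": [0, 0, 0, 1] * 2,            # toy ground-truth labels
    "declined": [1, 1, 0, 1, 1, 0, 0, 1],    # toy model decisions
})

legit = df[df["is_fraud"] == 0]               # false positives can only occur here
fpr_by_group = legit.groupby("borough")["declined"].mean()
print(fpr_by_group)

# A common red flag: a large max/min ratio between group-wise FPRs.
print("FPR ratio:", fpr_by_group.max() / fpr_by_group.min())
```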

What is Responsible AI?

Responsible AI is a strategic approach to developing and running AI-based applications that empowers organizations to manage their ethical risks. Responsible AI follows ethical principles – including fairness, privacy, transparency, reliability, and accountability – that serve as guardrails against AI risks. These principles are intended to ensure AI reaches fair, inclusive decisions for all customers, offers understandable explanations for how its decisions are reached (as opposed to a black-box solution), and holds teams accountable for the system’s behavior. The system should also be kept secure, make privacy a priority, and demonstrate reliability and safety.

These principles should be embedded not only in technical processes but also in people processes. When creating a new AI-powered application, FIs should consider how it affects people and assess ethical risks at each step of the project, from scoping to maintenance.   

It’s important to note that Responsible AI isn’t a silver bullet that washes away bias forever. As these systems run continuously, bias can creep in at any time. In this respect, bias is like cholesterol: it is easy to take in without realizing it – and hard to remove. That’s why Responsible AI requires continuous attention and informed decision-making.

Common Misconceptions

Of course, doing the right thing is often harder than it sounds. Although AI is now a commodity in several industries, including financial services, most organizations are inexperienced at managing its risks. Too often, the work, expertise, and resources it takes to make Responsible AI a reality seem overwhelming. Some FIs fear they will have to refactor their machine learning pipeline to address ethical AI issues. And it’s often unclear how effective ethical AI models will be at preventing and detecting fraud.

Fortunately, Responsible AI doesn’t have to be an either/or decision. It’s time to understand that Responsible AI is both attainable and efficient for FIs.

Many executives are under the false impression that focusing on responsible AI is too expensive, undermines the FI’s fraud detection capabilities, or that the problem lies entirely in the data. Let’s dispel these common misconceptions about Responsible AI.

Misconception 1: It’s Expensive to Focus on Bias and Fairness

Misconception: Addressing AI Fairness in machine learning models is a costly endeavor that ultimately results in more fraud and greater fraud losses.

Reality: By defining fairness requirements and objectives, we can assess biases in the datasets, collect more data (if needed), and apply bias reduction techniques to train fairer models without sacrificing much predictive power or incurring additional training costs. 
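As one concrete example of such a bias reduction technique (a sketch on toy data, not Feedzai’s own method), the classic reweighing approach of Kamiran and Calders assigns each training row a weight so that the sensitive attribute and the label look statistically independent:

```python
# Minimal sketch: reweighing (Kamiran & Calders) with toy columns.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "a"],  # toy sensitive attribute
    "label": [1, 0, 0, 1, 1, 0, 1, 1],                  # toy outcome
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# weight(g, y) = P(g) * P(y) / P(g, y): under-represented cells get weight > 1.
weights = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["label"]]
    / p_joint[(r["group"], r["label"])],
    axis=1,
)
# Pass `weights` as sample_weight to a scikit-learn or LightGBM-style fit().
```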

Misconception 2: Focusing on Responsible AI Greatly Compromises Model Performance

Misconception: Adjusting machine learning models to treat all groups fairly will result in a significant reduction in performance (e.g., lower fraud detection).

Reality: It’s possible to substantially improve the fairness of your machine learning models while sacrificing only a small fraction of fraud detection accuracy. Improving robustness and explainability will also result in better performance once the model is in production.

Misconception 3: Bias is in the Data, Not the ML Pipeline

Misconception: Bias comes from upstream when data is collected or sampled. 

Reality: While biases can be introduced in the data, they can also be introduced in the ML pipeline. Even a non-biased dataset can generate biased decisions. That’s why model practitioners can’t simply place responsibility for AI bias upstream to the data collectors.

Now is the Time to Focus on Responsible AI

Here are a few reasons banks should be focusing on making AI fairness a priority, beyond costs and operational considerations.

Reason 1. It’s the Right Thing to Do

Responsible AI is more than just a buzzword or a fad that the financial services sector can weather. It’s the cornerstone of the industry-wide mission to ensure that FIs consistently make fairer, ethical decisions that ultimately make a positive impact on peoples’ lives. FIs that commit to a fairer AI framework can rest assured that when automated decisions are made, they are much less likely to unfairly deny loans or stop people from paying their bills because of their race, gender, age, or where they live. 

Reason 2. Customers Respect Socially-Conscious Brands

Positioning your organization as proudly socially responsible is a strong value proposition in courting Millennials and Gen Z consumers. Both groups take social responsibility very seriously. Recent research found 83% of Millennials are loyal to companies that contribute to social issues they care about. Another survey found 70% of Gen Z try to do business with companies they consider ethical. FIs that take the lead on demonstrating their Responsible AI commitment have an opportunity to distinguish themselves from their competitors.

Reason 3: Start Now, Don’t Wait for Regulators

Think of how safety innovations began appearing in cars and trucks. When concepts like seat belts and airbags first debuted, they were seen as intrusive. Fast forward to today, and it’s hard to imagine a consumer who would willingly drive a car without these features. Automakers that had already added rearview cameras, automatic braking, and blind-spot detection to their vehicles were ahead of the curve, making it easier to win over safety-oriented consumers. Eventually, regulatory agencies began requiring automakers to build these features into their new models. Just as car manufacturers who had already invested in these technologies were in a much better position than their competitors, FIs that implement Responsible AI now will be in a stronger position when AI becomes regulated by government agencies.

FIs that use AI that interacts directly or indirectly with people should assess the risks of each application and implement operational controls and mitigation strategies. Regardless of existing or future regulations, without basic controls, FIs run the risk of using misbehaving AI that can hurt people, amplify societal biases, and create discriminatory obstacles to accessing financial services (even if indirectly, through blocking transactions based on a model’s fraud score).

Reason 4: You’ll Protect Your Reputation

Reasons 1 through 3 on this list are carrots, but reason 4 is a stick. If you’re hesitant to invest in Responsible AI because you believe it’s too complicated, consider the alternative. Your FI will face a considerable public backlash if it’s discovered that your current AI and machine learning models have been discriminating against certain groups. The fallout could include lawsuits from affected parties, fines and audits from regulators, and a badly battered public image.

The bottom line is, FIs have a responsibility to address the biases that may have infiltrated their machine learning models. Responsible AI is not only the right thing to do. It’s a goal that is within reach. Let’s work together to do the right thing and make Responsible AI a priority.

Does your AI reflect your organization’s ethical values? Does it offer an inclusive experience for all of your customers? If you’ve got questions about Responsible AI, we’ve got answers. On May 26, join the live webinar, Responsible AI in Financial Crime Prevention, with Feedzai’s Pedro Bizarro, Pedro Saleiro, and Andy Renshaw to learn more about real-world scenarios for Responsible AI, how to mitigate AI bias, and more.

Fast Company: Feedzai Fairband is a Game-Changer for Responsible AI

Six years ago, Pedro Bizarro, Feedzai’s Co-Founder and Chief Science Officer, had an aha moment – artificial intelligence (AI) is inherently biased because it is trained on human data, which holds all of our conscious and unconscious biases. Right then and there, he made it our mission to develop ethical AI. 

We created a subset of Feedzai’s Research team called FATE – Fairness, Accountability, Transparency, and Explainability. For the past six years, the FATE team has been working to embed responsible AI in not just the product, but in the entire approach of how organizations prevent, detect, and mitigate payments fraud and financial crime. 

For Feedzai, ethical AI is a mission-critical, world-changing idea. And we’re thrilled Fast Company thinks so too. Today, Fast Company announced that Feedzai Fairband – the world’s most advanced AI fairness framework – is a game-changer in the push to make fairer, more Responsible AI and has been nominated in its 2021 World-Changing Ideas Awards. 

Feedzai Fairband is a finalist in the Software category and has an honorable mention in the AI & Data category. Fast Company’s World-Changing Idea Awards recognize companies, products, designs, and concepts that promote innovation for the betterment of communities worldwide. The news comes on the heels of Feedzai getting recognized as “RegTech of the Year” in the 2021 Asia FinTech Awards, and “Fraud Prevention Innovation of the Year” in the Fintech Breakthrough Awards in March.

Over the past decade, we’ve seen AI evolve from a theoretical concept developed in settings like academic labs to real-world applications at financial institutions (FIs), hospitals, utility companies, and more. AI’s evolution is far from complete, however. With this highly advanced technology touching almost every aspect of our lives, from our smartphones to our home electronics to our financial services to our judicial system and more – it’s critical that AI becomes more ethical and responsible in its decision-making. Feedzai Fairband represents a significant leap forward in that effort.

As AI becomes more interwoven into our lives, it’s important that it can be trusted to make fair decisions for all the people that it influences. We’ve already seen cases of AI showing racial discrimination in lending services and hiring practices. And these are just the cases that we know about. As the world grapples with social changes on multiple fronts, we cannot allow AI bias to go unaddressed. Tackling this sometimes invisible problem is an essential step toward boosting financial inclusion and empowering individuals to access the financial services they need to buy homes, pay for college tuition, finance car loans, and more.

In the past, addressing AI bias has felt like an impossible task for FIs who saw it as a choice between allowing more financial crime and protecting customers from harmful decisions. With Feedzai Fairband, implementing Responsible AI doesn’t have to be an either/or decision for FIs. Feedzai Fairband works by allowing FIs to select models that both deliver strong risk management and reduce bias from the process. The framework automatically detects less biased machine learning models, increasing model fairness by 93% and requiring no additional model training.

With the framework in place, FIs no longer have to face a false choice between security and fairness. FIs can keep their customers safe from fraudsters and financial criminals while opening new opportunities to people who would otherwise have been denied vital financial services. Customers won’t have to experience AI bias because of their ethnicity, gender, age, or the neighborhood where they live. 

Fast Company’s 5th annual World-Changing Ideas Awards recognize innovations and innovators who are committed to solving significant global challenges from climate change to social injustice. We’re thrilled to be recognized for our efforts in the space and to make Responsible AI a priority across financial services.

AI and machine learning are no longer secret weapons reserved for larger banks with deep pockets. Smaller FIs – including community banks and credit unions – are also finding opportunities to level the innovation playing field. Download our eBook Democratizing Machine Learning for Community Banks to learn how community banks can close the machine learning gap.

Understanding AI Bias in Banking

As banks invest in artificial intelligence (AI) solutions to improve their services, they must understand how AI bias can influence their operations, public perceptions, and their customers’ lives. Follow the blog for the latest trends in ethical AI.

Halfway through 2020, there’s one topic that remains at the forefront of global politics, business, and science: bias. Recent events have motivated organized groups and individuals to push back against long-standing biases in several institutions, from law enforcement to healthcare to financial institutions (FIs).

But to address bias in the digital age, we have to look further than human biases. We have to look at machines, specifically artificial intelligence (AI).

As banks and other FIs invest in AI solutions, they must understand AI bias and how it can affect their industry, customers, and brand.

FIs that commit to tackling AI bias and building more ethical systems stand to secure loyal customers, emerge as industry leaders, and avoid both penalties from regulators and the PR nightmare a public bias scandal could cause. It’s a challenging and complex goal, and one that should be pursued carefully and correctly. The following outlines what FIs need to know about AI bias, why ethical AI is important, and what you can do (and should not do) to address it.

What is AI Bias and Why Does It Matter?

An AI system is biased when it produces decisions that disproportionately impact certain groups more than others. Because AI is still a relatively new commodity in the banking sector, ethical AI might not be at the center of the discussions, but it should be.

Unless FIs consider ethical AI at design and implementation, they will likely create biased systems. Take the hypothetical example of AI-based fraud prevention. Bias can result in your FI’s fraud prevention system being three times more likely to flag or decline legitimate transactions from cardholders in poorer or minority neighborhoods than from cardholders in wealthier, white communities. Customers from poorer communities might not have access to a second credit card and might have few bank branches in their neighborhoods to visit in person. Without a working credit card, they cannot use vital services such as ridesharing, pay for groceries, or make other essential payments. These customers will need to call customer service centers at a higher rate than their wealthier counterparts and will grow increasingly frustrated with your bank.

Why Banks Should Care About AI Bias

Experiences like these ultimately reflect poorly on FIs, which is why they need to invest in a fair decision-making process that treats all customers equally. The challenge for FIs is that there is no universally accepted definition of fair decision-making. FIs must consider a variety of social and ethical contexts and understand how their AI models could harm their customers by producing overwhelmingly unfair or biased decisions for certain groups.

But how do AI systems become biased in their decision-making in the first place? Bias arises at different stages: in how data is represented, how social contexts are applied to data, how different groups are represented or labeled in training samples, within the machine learning pipeline, and even from the human beings who build these systems. And it’s not just the machine learning models that are vulnerable. Bias can also arise from other corners of the banking system, including an FI’s internal rules and the human experts in the loop responsible for making decisions, such as fraud analysts.

Avoiding AI Bias Pitfalls to Create Ethical AI

Addressing AI bias can be a complicated task for banks. A common misperception is that the issue can be addressed by a simple fix. As you work to address AI bias, it is important to understand the wrong ways to approach the issue.

Your first instinct may be to hide or withhold sensitive demographic data, such as race and gender. The approach seems like an obvious solution. After all, if the AI can’t view specific characteristics, it can’t produce biased results, right? Actually, this is a misconception. AI models may use other attributes that are proxies for group-specific behaviors and activities and, therefore, may produce “sexist” or “racist” predictions even without knowing gender or race explicitly. For example, your model can infer a customer’s gender based on their purchasing history or which Instagram posts they like. It could also infer a customer’s race based on their zip code, education, or income. What’s more, by not collecting or monitoring protected group data, it is impossible to compare AI predictions across all groups and gauge whether some groups are more impacted by bias than others.
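One common diagnostic for this proxy problem (a sketch on synthetic toy data, not a complete audit) is to test how well the supposedly hidden attribute can be predicted from the remaining features; a cross-validated AUC well above 0.5 means proxies are present.

```python
# Minimal sketch: detect proxies by predicting the hidden sensitive attribute.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=5_000)            # e.g., a protected group flag
proxy = sensitive * 1.5 + rng.normal(size=5_000)      # toy proxy (think zip-code income)
other = rng.normal(size=(5_000, 5))                   # toy unrelated features
X = np.column_stack([proxy, other])

auc = cross_val_score(GradientBoostingClassifier(), X, sensitive,
                      cv=3, scoring="roc_auc").mean()
print(f"Hidden attribute predictable with AUC ~ {auc:.2f}")  # >> 0.5 implies proxies
```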

In other words, what appears to be the most obvious strategy for addressing AI bias could backfire. Some companies are already learning this lesson the hard way. Tackling AI bias requires a deliberate, thoughtful approach that ultimately helps your customers access your bank’s full range of services.

AI Bias is Becoming a Governmental and Industry Concern

Regulators are taking notice and working to understand how AI bias manifests in financial services. In the U.S., the House of Representatives held hearings earlier this year about how regulators and the industry can more effectively address the issue, while the European Commission released a whitepaper outlining how corporations operating in the EU should approach AI responsibly. These developments indicate that the banking sector must take AI bias seriously. And it must do so now.

Key Takeaways & Resources

FIs need to keep these key lessons in mind as they approach AI bias within their organizations.

  • Understand AI bias: AI bias is when an AI system – which can include rules, multiple ML models, and humans-in-the-loop – produces prejudiced decisions that disproportionately impact certain groups more than others. FIs that fail to address bias and implement changes to their AI systems could unfairly decline new bank account applications, block payments and credit cards, and deny loans and other vital financial services and products to qualified customers because of how their data is treated or labeled.
  • Know that withholding information won’t work: Hiding sensitive information from the system (such as race, gender, or age) doesn’t guarantee fair outcomes and can actually backfire. Deliberately not collecting sensitive information reduces the capacity to obtain reliable bias assessments. The problem of AI bias runs deep and requires an attentive, layered fix.
  • Continuous monitoring: If we don’t measure and prevent bias, it will inevitably creep in and hurt users, reputation, and bottom lines. The first step towards mitigation is continuously auditing for bias and fairness, and understanding the causes of any disparate decisions. As AI systems are dynamic and run “forever,” bias can cripple them anytime. Unaddressed AI bias is a recipe for FIs to lose customers, damage their public reputations, and face legal actions.
  • Build ethical AI systems: Due to the scale of automation and reach of AI, it can amplify existing biases in our societies. Banks, as trustworthy and reference institutions, must address the problem of bias when building and implementing ethical AI systems.

Ethical AI Resources

Now is the time for financial institutions to learn more about ethical AI. A good place to start is to align with technology partners that provide the tools needed to monitor and mitigate AI bias. Feedzai has been developing state-of-the-art research and products to foster AI that is both efficient and ethical. Stay tuned for more details in future posts about Ethical AI.

Further, there are good educational resources available for FIs to gain a deeper understanding of ethical AI.

AI promises untold economic and social benefits. Where will we be ten, fifteen, twenty years from now because of advances made by AI? While it’s exciting to consider the possibilities, we must also acknowledge the challenges, not the least of which is AI bias. We are forging the future of industry and society. Appreciating the magnitude of the job at hand, we must not just build AI that is efficient; it must be ethical as well.

Feedzai’s FATE research group demonstrates our commitment to ethical AI. FATE stands for fairness, accountability, transparency, and ethics. To learn more visit Feedzai Research.

Ethical AI: Why We Should All Care About It

Recently, I was proud to speak at the Harvard Business School’s Machine Learning Conference, Applied AI, regarding the real-world ethical application of AI at Feedzai. I was joined by other industry experts like David Ferrucci, who built and led the IBM Watson team, Max Tegmark, the Director & Co-Founder of the Future of Life Institute, as well as leaders from 23andMe, Amazon, Google, and Spotify who shared their insights.

As a co-founder and CTO of Feedzai, the ethical application of artificial intelligence is always at the forefront of my mind. On stage with Feedzai at Web Summit 2017, the late Stephen Hawking, a pioneer of ethical AI application, said:

“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be.”

This is a sentiment that I, and everyone at Feedzai, live by. With the advent of the digital era, the evolution of artificial intelligence has gone far beyond the theoretical realm. We live in a time beyond just planning for artificial intelligence advancements: we now have to consciously develop our products to ensure that they meet the highest ethical expectations.

In order to meet these expectations, Feedzai’s ethical development and use of artificial intelligence boils down to four key principles:

The first pillar is always fair. We operate without bias, evaluating people on how they behave, not what they are. We also protect people by keeping their data private and not allowing that data to be used against them.

The second pillar is forever verifiable. We log all decisions, recommendations, and scores from their origin, maintaining version control over all of the data, models, and inputs that led to a decision.

The third pillar is anonymous by design. We recognize that people have the right to anonymity and the right to have their data forgotten if they choose to.

The final pillar is transparently intelligent. At Feedzai, it is our goal to create a system that learns from observations, evolving as it encounters new data. We also aim to build systems that can easily explain how a decision was made and can help others improve.

With these ethical pillars in mind, the possibilities that AI provides are seemingly limitless.

Just look at the advancements we’ve made so far. Driverless cars now roam the streets, offering the potential to save millions of lives from vehicle-related deaths annually while drastically improving roadway efficiency. AI-assisted surgery improves patient outcomes by reducing variation while providing surgeons with insights to continually improve. AI-enabled precision spraying in agriculture can reduce herbicide resistance, both saving farmers money and ensuring that we meet the growing global demand for high-quality and healthy food. Redundant tasks can be automated with intelligent AI-based automation, improving worker efficiency and giving humanity more time to focus on making the world a better place.

I, and Feedzai as a whole, also see the promise that AI brings to the field of fraud and financial crime prevention. I see my day-to-day work as striving to unlock AI’s true capabilities and to ensure that money flows friction-free while humans stay safe from the implications of financial crime.

While doing so, I ensure that our core pillars always apply to the work that we produce. AI’s promises are significant, but the only way to improve the world that we live in is to take these steps forward while always maintaining an ethical worldview.

As Stephen Hawking said:

“Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one, industrialization. We will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating effective AI could be the biggest event in the history of our civilization. Or the worst.”