From chatbots to call centers to loan applications and beyond, artificial intelligence (AI) has become integral to today's banking experience. The rise of generative artificial intelligence (GenAI) tools like ChatGPT and DALL-E shows that AI technologies are still evolving and poised for transformational growth. As such, banks must consider how to manage their customers’ expectations for data privacy in the Generative AI era.
Comprehensive regulation of AI technology remains a distant prospect. But that doesn’t mean banks should give up on responding to their customers’ data privacy preferences, even as they determine how best to use Generative AI technology. In this blog, we’ll dive deep into the ethical concerns raised by AI and explore a proposed new framework for respecting customers’ rights and preferences in the AI age, something we’re calling #MyAI.
Unchecked Data Privacy Risks in the Generative AI Era
From daily interactions to complex financial decisions, AI has the power to reshape our lives for good and bad, often without us realizing it.
At this point, the genie is out of the bottle. The toothpaste is out of the tube. Whatever metaphor you prefer, AI is already highly prevalent in financial services. Its extraordinary computational power can be applied across a wide range of use cases, from risk management to training data to customer experience operations and more.
While AI has benefits, some people are inherently suspicious of it or simply prefer to sidestep it whenever possible. And although avoiding AI is still technically possible, it’s getting harder and harder in today’s increasingly online world for people who want to avoid engaging with AI-based decisions to “live off the grid.”
Attitudes to AI are extremely polarized. It is impossible for regulators to keep everyone happy, and even if they could find common ground, getting there would involve a lot of divisive debate and leave resentment at both ends of the scale.
So while regulation is still some way off, banks aren’t helpless. In fact, even as they research new AI and Generative AI use cases, banks have a social and ethical responsibility to respect their customers’ wishes not just for data privacy but for AI-based interactions as well.
The adoption of AI is moving far faster than regulation. Rather than wait, banks have an opportunity to take the lead on safeguarding customers’ wishes over how AI is or is not used in their individual interactions. In the face of the transformative changes brought by Generative AI, financial institutions can differentiate themselves by proactively practicing responsible, ethical behavior: committing to protect customers’ personal information and respecting their data privacy.
Introducing #MyAI: A New Framework for Customer Choice for AI Privacy
The financial industry already has a precedent for respecting customers’ wishes for data privacy. For example, the EU’s General Data Protection Regulation (GDPR) gives customers greater control over how their personal data is used. The ePrivacy Directive, also known as the “Cookie Law,” offers a useful blueprint for the commercial use of AI: obtain prior user consent before the technology is applied, in all but exceptional circumstances.
Like GDPR for data privacy and data collection, the financial industry needs a unified policy addressing AI and Generative AI consent. Banks should not wait for regulators to impose new rules but should take the lead in offering customers a choice regarding AI usage. Providing options such as high, low, or even no utilization of AI can become part of their customer engagement strategy, demonstrating their commitment to empowering individuals.
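To make the idea of consent tiers concrete, here is a minimal sketch, assuming a simple Python data model of our own invention (the AIConsentTier and CustomerAIConsent names are hypothetical, not an established standard), of how a bank might record a customer’s AI preference alongside the data-collection consents it already manages:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class AIConsentTier(Enum):
    """How much AI-driven decisioning the customer has agreed to."""
    HIGH = "high"    # AI may be used across all eligible interactions
    LOW = "low"      # AI limited to low-impact tasks, e.g. chat routing
    NONE = "none"    # customer has opted out of AI-based decisions


@dataclass
class CustomerAIConsent:
    """A consent record captured and versioned like any other data-privacy consent."""
    customer_id: str
    tier: AIConsentTier
    captured_at: datetime
    channel: str  # where consent was given, e.g. "mobile_app" or "branch"


consent = CustomerAIConsent(
    customer_id="c-1029",
    tier=AIConsentTier.LOW,
    captured_at=datetime.now(timezone.utc),
    channel="mobile_app",
)
print(consent.tier.value)  # "low"
```

In practice, a record like this would sit within the bank’s existing consent-management tooling and be versioned the same way cookie or marketing consents already are.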
With regulatory frameworks for Generative AI still at a nascent stage, banks must proactively implement responsible standards. But what does it mean to practice responsible AI in the Generative AI era? Financial institutions must carefully consider how they operate within this landscape, ensuring that AI-driven decisions treat their customers fairly. Doing so can also be a strong market differentiator.
Customers want to trust their bank and the decisions made on their behalf. They care about their data (my data), their privacy (my privacy), and their ability to control how financial decisions are made on their behalf (my financial destiny). The #MyAI framework is a way to deliver the control over their financial lives that customers expect.
7 Steps to Implement a #MyAI Framework
To establish a #MyAI framework, financial institutions can follow these steps:
1. Measure customer comfort with AI
Banks should first understand their customers’ preferences and comfort levels regarding AI usage. Consider conducting surveys or focus groups to gauge how customers interact with AI in their banking workflows and how comfortable they are with additional uses. Listening to what bank customers want and expect regarding AI interactions is the most crucial first step banks and financial institutions can take.
2. Audit all AI use cases internally
Banks should also comprehensively evaluate their organization’s existing AI applications and their impact on customers. This should be a top-to-bottom review of which of the bank’s current AI solutions are customer-facing and what changes would be necessary if customers decide they don’t want AI used for those services.
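As a rough illustration of what such an audit could capture (the field names below are assumptions, not a prescribed schema), an internal inventory might flag which AI use cases are customer-facing and whether a non-AI fallback already exists:

```python
from dataclasses import dataclass


@dataclass
class AIUseCaseRecord:
    """One row in an internal inventory of AI applications."""
    name: str
    owner_team: str
    customer_facing: bool   # does the customer directly experience this AI decision?
    non_ai_fallback: bool   # can the process run without AI if a customer opts out?
    notes: str = ""


inventory = [
    AIUseCaseRecord("chatbot_triage", "Digital Channels", True, True),
    AIUseCaseRecord("transaction_fraud_scoring", "Financial Crime", False, False),
    AIUseCaseRecord("loan_pre_approval", "Lending", True, False,
                    notes="Needs a manual-review path before opt-outs can be honored"),
]

# Customer-facing use cases with no non-AI alternative are the gaps to close first
gaps = [r.name for r in inventory if r.customer_facing and not r.non_ai_fallback]
print(gaps)  # ['loan_pre_approval']
```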
3. Vet third-party AI services
In addition to reviewing their own AI use cases, banks should look closely at how their vendors and third-party providers (TPPs) use AI. After identifying how TPPs use AI, banks should ensure those uses align with the bank’s business values and responsible AI practices.
4. Focus on FATE
One of the top concerns of bank customers reluctant to use AI is that they can’t necessarily trust its decision-making. This concern is well-founded, given that some AI systems are black boxes that offer few insights into how they reach their decisions. Banks should ensure they can accurately measure their AI models’ fairness, accountability, transparency, and explainability (FATE). If a system is lacking in any of the four FATE principles, the bank should take steps to address it.
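As one small, illustrative example of what “measuring” can mean here, the sketch below computes a disparate impact ratio over a model’s decisions, one common fairness check among many; the group labels, synthetic data, and the 0.8 rule of thumb are assumptions for illustration, and a full FATE program would add accountability, transparency, and explainability checks on top:

```python
from collections import defaultdict


def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Approval rate of the protected group divided by the reference group's rate.

    `decisions` is an iterable of (group_label, approved) pairs. A ratio well
    below 1.0 (a common rule of thumb is 0.8) suggests the model's outcomes
    deserve a closer fairness review.
    """
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)

    def rate(group):
        return approved[group] / total[group] if total[group] else 0.0

    ref_rate = rate(reference_group)
    return rate(protected_group) / ref_rate if ref_rate else float("inf")


# Synthetic decisions for illustration: (group, was_approved)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(round(disparate_impact_ratio(sample, protected_group="B", reference_group="A"), 2))  # 0.5
```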
5. Understand exceptions to the rule
While respecting customers’ wishes is critical, there are exceptions where fraud detection and financial crime prevention are concerned. For example, if an account is suspected of being connected to fraud, banks are within their rights to retain the customer’s data in order to investigate the incident. The #MyAI framework should identify similar exceptions, and banks should be transparent about using AI under them, such as when assessing unusual customer behaviors or inbound transactions.
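A minimal sketch, assuming a hypothetical policy function of our own design, of how such exceptions could be written down explicitly so overrides are consistent and auditable rather than ad hoc:

```python
# Hypothetical exception policy: AI-assisted checks are allowed despite an
# opt-out when a regulated obligation such as a fraud or AML investigation applies.
ALLOWED_OVERRIDES = {"suspected_fraud", "aml_investigation", "sanctions_screening"}


def ai_use_permitted(consent_tier: str, context: str) -> bool:
    """Return True if AI may be applied to this interaction.

    consent_tier: the customer's #MyAI preference ("high", "low", or "none").
    context: the reason the AI check is being run.
    """
    if context in ALLOWED_OVERRIDES:
        return True  # documented exception: log and disclose per the bank's transparency policy
    return consent_tier != "none"


print(ai_use_permitted("none", "marketing_offer"))   # False: the opt-out is respected
print(ai_use_permitted("none", "suspected_fraud"))   # True: a documented exception applies
```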
6. Assess the AI system’s value
Determine the importance and value of AI-based decisions to customers, along with their associated costs. Assess whether an AI initiative is crucial to customers’ lives or a game-changer for the business. For example, would a significant share of customers be willing to close their accounts at other financial institutions if the bank had a specific offering? If the answer is yes, it could have a significant positive impact on the bank’s business. If not, it might be worth reconsidering the initiative. Understanding the business impact is critical because it determines whether the costs justify the investment or whether it’s a “nice to have” feature that doesn’t provide significant value overall.
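As a back-of-the-envelope illustration (all figures are made-up placeholders), the value question can be reduced to simple arithmetic: expected benefit from customers consolidating their banking versus the cost of building and running the initiative:

```python
def initiative_net_value(customers_gained: int, avg_annual_value: float,
                         build_cost: float, annual_run_cost: float, years: int = 3) -> float:
    """Rough net value of an initiative over a planning horizon.

    Every input is a placeholder a bank would replace with its own figures.
    """
    benefit = customers_gained * avg_annual_value * years
    cost = build_cost + annual_run_cost * years
    return benefit - cost


# e.g. 5,000 customers consolidating accounts worth $120/year each, over 3 years
print(initiative_net_value(5_000, 120.0, build_cost=400_000, annual_run_cost=150_000))  # 950000.0
```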
7. Determine the feasibility of respecting customers’ wishes
Banks should also determine how practical it is to accommodate their customers’ preferences. For example, a customer applying for a loan may not want their application reviewed by an AI system and would prefer that a human being conduct the review. In that case, banks should be able to offer the customer a non-AI-based alternative, though they may impose a fee to offset the cost of a manual review. Banks can also consider more readily achievable options, such as not sending loan offers based on AI assessments.
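A minimal routing sketch under the same assumptions (the queue names and fee amount are illustrative only) of how a loan application could be sent to a human reviewer when the customer has opted out of AI-based review:

```python
def route_loan_application(application_id: str, consent_tier: str,
                           manual_review_fee: float = 25.0) -> dict:
    """Route an application to AI or human underwriting based on the customer's #MyAI choice.

    The fee is a placeholder for the cost-offset idea discussed above, not a recommendation.
    """
    if consent_tier == "none":
        return {"application_id": application_id,
                "queue": "human_underwriting",
                "fee": manual_review_fee}
    return {"application_id": application_id, "queue": "ai_underwriting", "fee": 0.0}


print(route_loan_application("app-7781", consent_tier="none"))
print(route_loan_application("app-7782", consent_tier="high"))
```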
The Age of AI Requires a #MyAI Framework
Banks should avoid engaging in the fractious debate over whether AI tools are good or evil. Instead, banks should recognize that AI is here to stay and will be integral to financial services for years. But people still retain the right to determine how much AI should influence their daily lives and choices. That’s why banks and financial institutions should stay ahead of regulators and embrace the #MyAI framework.
By providing a compass in this complex landscape, banks can navigate the challenges of AI while empowering individuals to make their own choices. It is time for a new era of digital consent in the AI age – the era of #MyAI.
Robert Harris
Robert Harris is the Head of Product Marketing at Feedzai and a passionate proponent of fighting fraud and money laundering, particularly in financial services. Robert is an accomplished leader in both small and large organizations, identifying opportunities, securing funding, and creatively delivering value in line with project goals. Whether launching new solutions or maximizing value from mature ones, he has a keen commercial eye and a conviction to both innovate and make prioritization decisions accordingly.