New Strategic Design Approach Focuses on Turning AI Mistakes into User Benefits
More and more often, automated lending systems powered by artificial intelligence (AI) reject qualified loan applicants without explanation.
Even worse, they leave rejected applicants with no recourse.
People can have similar experiences when applying for jobs or petitioning their health insurance providers. While AI tools determine the fate of people in difficult situations daily, Upol Ehsan says more thought should be given to challenging these decisions or working around them.
Ehsan, a Georgia Tech explainable AI (XAI) researcher, says many rejection cases are not the applicant’s fault. Rather, the cause is more likely a “seam” in the design process: a mismatch between what designers thought the AI could do and what happens in reality.
Ehsan said “seamless design” is the standard practice of AI designers. While the goal is to create a process by which users get what they need without interruption or barriers, seamless design has a way of doing just the opposite.
No amount of thought or design input will keep AI tools from making mistakes. When mistakes happen, those impacted by them want to know why they happened.
Because seamless design often includes black-boxing, the act of concealing the AI’s reasoning, those answers are never provided.
But what if there were a way to challenge an AI’s decisions and turn its mistakes into benefits for end users? Ehsan believes that can be done through “seamful design.”
In his latest paper, Seamful XAI: Operationalizing Seamful Design in Explainable AI, Ehsan proposes a strategic way of anticipating AI harms, understanding why they happen, and leveraging mistakes instead of concealing them.
GIVING USERS MORE OPTIONS
In his research, Ehsan worked with loan officers who used automated lending support systems. The seams, or flaws, he discovered in these tools’ processes impacted applicants and lenders.
“The expectation is that the lending system works for everyone,” Ehsan said. “The reality is that it doesn’t. You’ve found the seam once you’ve figured out the difference between expectation and reality. Then we ask, ‘How can we show this to end users so they can leverage it?’”
To give users options when AI negatively impacts them, Ehsan suggests three things for designers to consider, sketched in code after the list:
- Actionability: Does the information about the flaw help the user take informed actions on the AI’s recommendation?
- Contestability: Does the information provide the resources necessary to justify saying no to the AI?
- Appropriation: Does identifying these seams help the user adapt and appropriate the AI’s output in ways the design did not intend but that still lead to the right decision?
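To make these considerations concrete, here is a minimal sketch of how a design team might record a seam alongside notes for each of the three questions. The class, field names, and example values are illustrative assumptions made for this article, not part of Ehsan’s framework or any real lending system.

```python
from dataclasses import dataclass, field


@dataclass
class Seam:
    """One documented mismatch between design expectation and deployed reality."""
    component: str           # where in the AI pipeline the seam appears (hypothetical label)
    expectation: str         # what designers assumed the AI would do
    reality: str             # what actually happens after deployment
    affected_users: list = field(default_factory=list)

    # Notes for the three considerations named above.
    actionability: str = ""  # what informed action the user can take on the AI's output
    contestability: str = "" # what evidence lets the user justify saying no to the AI
    appropriation: str = ""  # how the user can repurpose the output beyond its intended use

    def ready_to_surface(self) -> bool:
        """Treat a seam as ready to show end users once all three notes are filled in."""
        return all([self.actionability, self.contestability, self.appropriation])


# Illustrative example based on the loan-screening scenario in this article.
loan_seam = Seam(
    component="applicant screening model",
    expectation="Qualified applicants with good credit history are approved.",
    reality="Some qualified applicants are rejected without explanation.",
    affected_users=["loan applicants", "loan officers"],
    actionability="Show which inputs drove the rejection so the applicant can correct them.",
    contestability="Flag the known algorithmic issue so the applicant can cite it in an appeal.",
    appropriation="Let loan officers treat the score as one signal rather than a final verdict.",
)
print(loan_seam.ready_to_surface())  # True
```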
Ehsan uses the example of someone who was rejected for a loan despite having a good credit history. The rejection may have been caused by a seam in the AI that screens applications, such as a flaw that leads the algorithm to discriminate against certain applicants.
A post-deployment process is needed in cases like this to mitigate damage and empower affected end users. Loan applicants, for instance, should be allowed to contest the AI’s decision based on known issues with the algorithm.
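As a rough sketch of what such a post-deployment process might look like in software (a hypothetical design, not Ehsan’s method or any real lender’s system), a rejection that involves a component with a documented seam could be routed to an appeal path instead of being treated as final:

```python
# Hypothetical registry of documented seams, keyed by the pipeline component they affect.
KNOWN_SEAMS = {
    "applicant screening model": "model under-weights thin credit files with strong repayment history",
}


def route_decision(decision: dict) -> str:
    """Decide how a lending decision should be handled downstream."""
    if decision["outcome"] != "rejected":
        return "proceed"
    seam = KNOWN_SEAMS.get(decision["component"])
    if seam is not None:
        # Surface the documented flaw so the applicant can contest the decision.
        return f"offer appeal: decision may be affected by a known issue ({seam})"
    return "provide a standard explanation and next steps"


print(route_decision({"outcome": "rejected", "component": "applicant screening model"}))
```

The point of the sketch is the routing step itself: a known seam changes how the AI’s output is handled downstream rather than being hidden from the people it affects.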
AGAINST THE GRAIN
Ehsan said the idea of seamful design sits outside the mainstream vocabulary of AI design. However, his challenge to currently accepted principles is gaining traction.
He is now working with cybersecurity, healthcare, and sales companies that are adopting his process.
These companies may pioneer a new way of thinking in AI design. Ehsan believes it can move designers from a reactive state of constant damage control to a proactive mindset.
“You want to stay a little ahead of the curve so you’re not always caught off guard when things happen,” Ehsan said. “The more proactive you can be and the more passes you can take at your design process, the safer and more responsible your systems will be.”
Ehsan collaborated with researchers from Georgia Tech, the University of Maryland, and Microsoft. They will present their paper later this year at the 2024 Association for Computing Machinery’s Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) in Costa Rica.
“Seamful design embraces the imperfect reality of our world and makes the most out of it,” he said. “If it becomes mainstream, it can help us address the hype cycle AI suffers from now. We don’t need to overhype AI’s capacity or impose unachievable goals. That’d be a gamechanger in calibrating people’s trust in the system.”
Contact
Nathan Deen
Communications Officer I
School of Interactive Computing