'Human Rights' May Help Shape Artificial Intelligence in 2019

Ethics and accountability will be among the most significant challenges for artificial intelligence (AI) in 2019, according to a survey of researchers at Georgia Tech's College of Computing.

In response to an email query about AI developments expected in 2019, most of the researchers, whether discussing machine learning (ML) or other facets of AI, touched on the growing importance of recognizing the needs of people in AI systems.

"In 2019, I hope we will see AI researchers and practitioners start to frame the debate about proper and improper uses of artificial intelligence and machine learning in terms of human rights," said Associate Professor Mark Riedl.

"More and more, interpretability and fairness are being recognized as critical issues to address to ensure AI appropriately interacts with society," said a Ph.D. student in the college.

Taking on algorithmic bias

Questions about the rights of end users of AI-enabled services and products are becoming a priority, but Riedl said more is needed.

"Companies are making progress in recognizing that AI systems may be biased in prejudicial ways. [However,] we need to start talking about the next step: remedy. How do people seek remedy if they believe an AI system made a wrong decision?" said Riedl.

Assistant Professor Jamie Morgenstern sees algorithmic bias as an ongoing concern in 2019 and gave banking as an example of an industry that may be in the news for its algorithmic decision-making.

"I project that we'll have more high-profile examples of financial systems that use machine learning having worse rates of lending to women, people of color, and other communities historically underrepresented in the 'standard' American economic system," Morgenstern said.
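The disparity Morgenstern describes is measurable. As a minimal sketch (the data, groups, and approval decisions below are entirely hypothetical), the gap that fairness audits of lending models typically report is simply the difference in approval rates between applicant groups:

```python
# Minimal sketch with hypothetical data: the approval-rate gap
# (sometimes called the "demographic parity difference") between
# two groups of loan applicants.

def approval_rate(decisions):
    """Fraction of applicants approved; decisions are 0 (deny) or 1 (approve)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two applicant groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

gap = approval_rate(group_a) - approval_rate(group_b)
print(f"approval-rate gap: {gap:.3f}")  # prints "approval-rate gap: 0.375"
```

A real audit would also control for legitimate creditworthiness factors; this sketch only illustrates the headline statistic behind the kind of high-profile example Morgenstern predicts.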

In recent years corporate responses to cases of bias have been hit or miss, but Assistant Professor Munmun De Choudhury said 2019 may see a shift in how tech companies balance their shareholders' interests with the interests of their customers and society.

"[Companies] will be increasingly subject to governmental regulation and will be forced to come up with safeguards to address misuse and abuse of their technologies, and will even consider broader partnerships with their market competitors to achieve this. For some corporations, business interests may take a backseat to ethics until they regain customer trust," said De Choudhury.

Working toward more transparency

One way companies can regain that trust is through sharing their algorithms with the public, our experts said.

"Developers tend to walk around feeling objective because 'it's the algorithm that is determining the answer.' Moving forward, I believe that the algorithms will have to be increasingly 'inspectable' and developers will have to explain their answers," said an Executive Associate Dean and Professor in the college.

Ph.D. student Yuval Pinter agreed. In the coming year, "[I] think we will see that researchers are trying to [develop] techniques and tests that can help us to better understand what's going on in the actual wiring of our very fancy machine learning models.

"This is not only for curiosity but also because legal applications or regulation in various countries are starting to require that algorithmic decision-making programs be able to explain why they are doing what they are doing," said Pinter.
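For the simplest class of models, the kind of inspectable explanation Pinter and the regulators he mentions have in mind can be stated directly. A minimal sketch (the model, weights, and applicant below are hypothetical, and real systems are far less transparent than a linear scorer): for a linear model, each feature's contribution to a decision is just its weight times its value.

```python
# Minimal sketch with a hypothetical linear scoring model: each
# feature's contribution to the output is weight * value, so the
# decision can be fully "explained" feature by feature.

weights = {"income": 0.6, "debt": -0.8, "age_of_account": 0.3}

def score(applicant):
    """Overall score: sum of per-feature contributions."""
    return sum(weights[f] * v for f, v in applicant.items())

def explain(applicant):
    """Per-feature contribution to the score (weight * value)."""
    return {f: weights[f] * v for f, v in applicant.items()}

applicant = {"income": 1.2, "debt": 0.5, "age_of_account": 2.0}
print(score(applicant))    # roughly 0.92
print(explain(applicant))  # debt alone pulls the score down by 0.40
```

Modern deep models have no such closed-form attribution, which is why the interpretability techniques Pinter anticipates (feature-attribution and probing methods) are active research rather than a solved problem.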

Regents' Professor Ronald Arkin believes that these concerns are becoming more central precisely because artificial intelligence will continue to grow in importance in our everyday lives.

"Despite continued hype and omnipresent doomsayers, panic and fear over the growth of AI and robotics should begin to subside in 2019 as the benefits to people's lives are becoming more apparent to the world.

"However, I expect to see lawyers jumping into the fray, so we may also see lawsuits determining policy for self-driving cars [and other applications] more so than government regulation or the legal system," said Arkin.