How incentive-based rating systems can mitigate online biases

Monday, October 17, 2022, 10:00 – 11:00 a.m. EDT

By nature, machine-learning algorithms rely upon the differential treatment of online users, with outputs customized and optimized to their unique digital footprints. Yet such treatment can entrench behavioral biases online, from ad targeting to the denial of loans and other financial services. From targeted advertising to more predatory eligibility determinations, consumers tend to be the subjects of machine-learning algorithms, with limited agency and little feedback on the predictive accuracy of the computational models.

Senior Fellow and Director of the Center for Technology Innovation Nicol Turner Lee argues in a newly released book chapter, "Mitigating Algorithmic Biases through Incentive-Based Rating Systems," that consumers’ feedback needs to be collected by developers and other stakeholders that license algorithms to ensure the trustworthiness of artificial intelligence (AI) systems. Modeled in part after the U.S. federal government’s Energy Star program, her work introduces a new incentive-based rating system to drive more informed consumer choices about AI systems and to improve the efficacy and inclusiveness of these models, especially among stakeholders seeking to reduce reputational harms from flawed and biased systems.

On October 17, the Center for Technology Innovation at Brookings will host a panel exploring how incentive-based rating systems, reputation badges, and other consumer-facing callouts can improve the trustworthiness of AI systems, while encouraging more participatory engagement in the design and execution of AI models.

Viewers can submit questions by email or on Twitter at @BrookingsGov using #OnlineBiasRatings.

Register to watch online:

The Brookings Institution, 1775 Massachusetts Ave NW, Washington, DC 20036
