Understanding Gender and Racial Bias in AI, Part 3

By Sarah Pagliaccio

In my four-part series on gender and racial biases in artificial intelligence (AI) and how to combat them, Part 1 focused on educating UX designers about bias in voice- and facial-recognition software and in the AI algorithms and underlying data that power them. Part 2 discussed how our everyday tools and AI-based software such as Google Search influence what we see online and in our design software, often perpetuating our biases and whitewashing our personas and other design deliverables. Now, in Part 3, I'll provide a how-to guide for addressing your own implicit biases during user research, UX design, and usability testing.

If your 2020 went anything like mine, you may have put up your Black Lives Matter poster, read How to Be an Antiracist, and subscribed to the Code Switch podcast. Perhaps you even watched Coded Bias, this year's eye-opening documentary on facial-recognition software. (If you haven't watched it, you should.) Perhaps you then read Anthony Greenwald's interview with Knowable Magazine, which PBS News Hour republished, and discovered that "making people aware of their implicit biases doesn't usually change minds." What should you do next?
