
AI & Intersectional Gender Bias in Hiring

The speakers will discuss how AI can perpetuate, amplify, and even create new biases that affect women, and in particular women of color, in recruiting processes and in the workplace. They will also present Biasly AI, a tool created at Mila for use in the hiring process, as an example of how AI can be used to advance inclusivity.

Professor Rangita de Silva de Alwis, a globally recognized international women's rights expert, a United Nations advisor, a fellow of the HKS Women and Public Policy Program, and founder of the AI and Bias Policy Lab at Penn Law, will discuss intersectional gender bias in AI, especially as it relates to recruiting.

Allison Cohen of Mila will present Biasly, a natural language processing tool that identifies conscious and subconscious bias and debiases problematic sentences for its users.

The conversation will be moderated by Edynne Grand-pierre, a law student at the University of Ottawa.

This event is organized by the uOttawa Research Chair on Accountable AI in a Global Context and the Harvard Kennedy School Women's Network.
