Mozilla’s AI ethics advocacy group proposes algorithmic bias detection program modeled on bug bounty programs (Daphne Leprince-Ringuet/ZDNet)

by Daphne Leprince-Ringuet / ZDNet:
Deborah Raji is researching ways to apply the models that underpin bug bounty programs to algorithmic harm detection.  —  When it comes to detecting bias in algorithms …
