Wired: "What Really Happened When Google Ousted Timnit Gebru"
On June 8, Wired published a feature-length article that offers substantial biographical depth and context around Google's firing of AI researchers Dr. Timnit Gebru and Dr. Margaret Mitchell. If you're looking for a briefing on the harms caused by current AI-driven systems, the uphill battle and hostile environment that Drs. Gebru and Mitchell faced, and the events surrounding their firings, it's an excellent read. A few extracts appear below.
What Really Happened When Google Ousted Timnit Gebru
by Tom Simonite - June 8, 2021
[Google] has been dogged in recent years by accusations from employees that it mistreats women and people of color, and from lawmakers that it wields unhealthy technological and economic power. Now Google had expelled a Black woman who was a prominent advocate for more diversity in tech...
The story of what actually happened in the lead-up to Gebru’s exit from Google reveals a more tortured and complex backdrop. It’s the tale of a gifted engineer who was swept up in the AI revolution before she became one of its biggest critics, a refugee who worked her way to the center of the tech industry and became determined to reform it. It’s also about a company... trying to regain its equilibrium after four years of scandals, controversies, and mutinies, but doing so in ways that unbalanced the ship even further...
... In February 2018, as part of a project called Gender Shades, [Gebru] and [researcher Joy] Buolamwini published evidence that services offered by companies including IBM and Microsoft that attempted to detect the gender of faces in photos were nearly perfect at recognizing white men, but highly inaccurate for Black women. The problem appeared to be rooted in the fact that photos scraped from the web to train facial-recognition systems overrepresented men as well as white and Western people, who had more access to the internet.
The project was a visceral demonstration of how AI could perpetuate social injustices...
At Apple, Gebru and her coworkers had studied standardized data sheets detailing the properties of every component they considered adding to a gadget like the iPhone. AI had no equivalent culture of rigor around the data used to prime machine-learning algorithms. Programmers generally grabbed the most easily available data they could find, believing that larger data sets meant better results.
Gebru and her collaborators called out this mindset, pointing to her study with Buolamwini as evidence that being lax with data could infest machine-learning systems with biases...
Inioluwa Deborah Raji, whom Gebru escorted to Black in AI in 2017... says that Google’s treatment of its own researchers demands a permanent shift in perceptions. “There was this hope that some level of self-regulation could have happened at these tech companies,” Raji says. “Everyone’s now aware that the true accountability needs to come from the outside..."
Alphabetworkers is a public, low-traffic read-only list of news extracts and commentary relevant to Alphabet workers and the Google reform movement, curated by former Google employee Bruce Hahne. It receives a maximum of one email per day. All article extracts are intended to be within fair use. To sign up, go to alphabetworkers.substack.com
It's OK to forward this message in its entirety, but please preserve the how-to-sign-up information.
A FAQ for this list is available at www.alphabetworkers.net/alphabetworkers-email-list. Bruce is reachable at firstname.lastname@example.org