On February 7, Scientific American posted a story with this headline: "Even ChatGPT Says ChatGPT Is Racially Biased." This is not surprising because, as the post's writer Craig Piers points out, AI can only reflect the information provided by a society, and the bulk of this information will represent the views, limitations, and investments of a society's dominant group. In the case of the US, this group happens to be white Americans. But there is more to this story. Piers, who is a clinical psychologist, discovered that ChatGPT wasn't ignorant of the source of its bias. It knew "that its training material—the language we humans use every day—was to blame."

ChatGPT was given two prompts for stories involving a crime. One prompt included the word "black"; the other, "white." As you may have guessed, the former generated far more violent scenarios than the latter. Piers writes:

When I looked closer at the stories, several other recurring differences emerged. All of those that used the word black were set in a city with “blackened” streets, skylines and alleyways, while all of those that used white were set in “tranquil” and “idyllic” suburban areas. Furthermore, in all but one of the stories that used white, the towns were given names (such as “Snowridge”), as were the victims of the crime (such as “Mr. Anderson”), in ways that seemed to personalize the narratives. This was never the case in the stories generated using the word black.

This finding is not new. It was even covered on 60 Minutes.

Fortune also reported that similar biases have been found in health-related AI: "ChatGPT and Google’s Bard answer medical questions with racist, debunked theories that harm Black patients."

But I want to take a step back and examine the possibilities of an AI that overcomes this and other systemic biases. It is not hard to imagine such a machine. In fact, if ChatGPT can acknowledge that, despite "not [having] personal beliefs, experiences, or biases," it can "inadvertently reflect the biases present in the data it was trained on," how far is it, then, from correcting the flaws in the data it receives? And what would an unbiased AI mean?

Researchers such as Dr. Joy Buolamwini are working to realize an AI of this kind, one that is more human than many humans in our society. (Buolamwini's work concerns the visual biases of AI: it often misrecognizes or completely fails to see people with Black skin.)

The question I want to ask is: Will a truly universal AI be permitted in the US, and, for that matter, Europe? Just think about it for a moment. The US is not just a racist society (in the metaphysical sense); the form of its hard and real economy, capitalism, depends on racism to maintain the social relations that have as their lifeblood (or conatus) the centralization of surplus value produced by wage labor. And the word "wage" is significant. Recall this passage from W. E. B. Du Bois's Black Reconstruction in America, 1860-1880:

[White and black] labor [should be] one class, and precipitate a united fight for higher wage and better working conditions. Most persons do not realize how far this failed to work in the South, and it failed to work because the theory of race was supplemented by a carefully planned and slowly evolved method, which drove such a wedge between the white and black workers that there probably are not today in the world two groups of workers with practically identical interests who hate and fear each other so deeply and persistently and who are kept so far apart that neither sees anything of common interest. It must be remembered that the white group of laborers, while they received a low wage, were compensated in part by a sort of public and psychological wage.  

This psychological wage is of the greatest importance. A money wage is already immaterial enough; but one whose currency is just the airy-fairy idea of whiteness (a currency that means nothing to Black people) has always produced real results for the wealthiest white Americans. Indeed, Donald Trump's presidency and present presidential run would not be possible at all without the phantom of Du Bois's psychological wage. The present composition of the Supreme Court owes everything to it. And AIs reproduce this psychic economy in their chats and medical advice.

But if the leading technology of our times has as one of its potentials a break with the psychological wage, a break with the fantasy of white superiority and Black inferiority—in essence, the ideology that continues to reproduce American capitalism—what would this mean? And do those who control this machine want to find out?

Think only of the impact biotechnology has had on the carceral system that only functions properly if it's racist.

The Independent:

Mr Hastings, a Black man, was released from prison last year after previously untested DNA evidence suggested that he was not responsible for the 1983 murder he had been convicted of. Last October, Judge William Ryan vacated Mr Hastings’ conviction at the urging of the Los Angeles County District Attorney’s Office and lawyers from the Los Angeles Innocence Project.

This technology is correcting our society's cultural limitations. It is transforming, case by case, cultural solutions into technical ones that benefit those whose skin color receives no psychological wage from the Elysium of the 1% and is, by law enforcement, associated with criminal savagery. Now expand the impact of DNA technology to that of the socially wider AI. Would that expansion be permitted in the US and Europe? Are you feeling me?


Dr. Joy Buolamwini will discuss her book with Charles Mudede at Town Hall on Sunday, February 18.