Race and AI

In 1964, University of Chicago economist Milton Friedman faulted civil rights legislation barring discrimination on the grounds that it was less efficient and less effective at reducing discrimination than the market itself. The example he gave was a restaurant that refused to serve African Americans. Such a restaurant, he argued, would lose the business not only of African Americans, but of civil rights advocates generally, and would end up with an ever-smaller clientele composed of racists and bigots. Private consumers would vote with their wallets.

A similar argument has been made for gender- and race-based hiring decisions. A firm that introduces race or gender into its hiring criteria necessarily reduces the weight it places on criteria that measure competence and skill for the position it is trying to fill. In other words, discrimination does not pay.

In light of this free market analysis, some firms have moved entirely to double-blind AI for their hiring decisions. Because AI doesn’t care. Or does it?

The assumption upon which this analysis is based is that the market is value neutral. But is it?

As an alternative, let me propose that, as Cornel West reminds us, “race matters.” Let me suggest, moreover, that race is valued differentially. Because, like all surface forms of appearance under capitalism, the differentia among and between things are indications of consumer preference. I like chocolate. You like vanilla.

Only in a market where ALL decisions are made by AI would such differentia be of no consequence. But the moment private consumer preference is introduced, even as an algorithm, differentia flood back into the analytical frame.

So, for example, let us assume that I am an investor in a firm that sells high-end automobiles. Data analysis suggests that consumers of such cars are marginally more likely to be white, and that they are marginally more likely to purchase a car from a white male. That is to say, the marginal value of a white male sales force is higher than that of a white female, a black male, or a black female sales force. Were I to fold these preferences into my AI’s hiring algorithm, presumably the AI would yield “neutral” racist and misogynist decisions. That is to say, insofar as race has value — race matters — socially, an algorithm that measured value would necessarily give rise to racist outcomes.
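The mechanism can be made concrete with a toy sketch. Everything here is invented for illustration — the uplift figures, the candidates, the scoring rule — but it shows how a score that merely maximizes expected sales, without ever mentioning race, reproduces the bias embedded in consumer-preference data:

```python
# Hypothetical consumer-preference uplift, as in the high-end auto example:
# buyers are assumed marginally likelier to purchase from a white male
# salesperson. All numbers are invented for the sketch.
PREFERENCE_UPLIFT = {
    ("white", "male"): 0.10,
    ("white", "female"): 0.05,
    ("black", "male"): 0.02,
    ("black", "female"): 0.00,
}

def hiring_score(skill, race, gender):
    """Skill plus the 'marginal value' the market assigns to the candidate's
    demographic profile. The rule never names racism; it only maximizes
    expected sales."""
    return skill + PREFERENCE_UPLIFT[(race, gender)]

# Two hypothetical candidates: A is the more skilled.
candidates = [
    ("A", 0.80, "black", "female"),
    ("B", 0.78, "white", "male"),
]

ranked = sorted(candidates,
                key=lambda c: hiring_score(c[1], c[2], c[3]),
                reverse=True)
# The less skilled candidate B (0.88) outranks A (0.80): a "neutral"
# value-maximizing rule yields a discriminatory outcome.
print([name for name, *_ in ranked])  # → ['B', 'A']
```

The point of the sketch is that nothing in the code is overtly prejudiced; the prejudice enters entirely through the preference data the algorithm is told to value.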

But let us say that policy makers deem race and gender equality non-negotiable, i.e., not subject to marginal analysis. And let us say, instead, that race and gender were weighted such that the disutility imposed on women and people of color was folded into the algorithm, and the social value of whiteness was discounted.
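Continuing in the same toy terms (all values invented), treating equality as non-negotiable amounts to cancelling the market’s demographic premium exactly, so the demographic term nets to zero and only skill remains:

```python
# Hypothetical consumer-preference uplift the market assigns (invented values).
PREFERENCE_UPLIFT = {
    ("white", "male"): 0.10,
    ("white", "female"): 0.05,
    ("black", "male"): 0.02,
    ("black", "female"): 0.00,
}

# Counterweight: the disutility the market imposes on women and people of
# color is folded back in, and the premium on whiteness is discounted to zero.
COUNTERWEIGHT = {k: -v for k, v in PREFERENCE_UPLIFT.items()}

def constrained_score(skill, race, gender):
    """Equality as a hard constraint: the demographic terms cancel, so the
    score reduces to the skill measure alone."""
    return skill + PREFERENCE_UPLIFT[(race, gender)] + COUNTERWEIGHT[(race, gender)]

# Two hypothetical candidates, A the more skilled.
candidates = [
    ("A", 0.80, "black", "female"),
    ("B", 0.78, "white", "male"),
]

ranked = sorted(candidates,
                key=lambda c: constrained_score(c[1], c[2], c[3]),
                reverse=True)
print([name for name, *_ in ranked])  # skill alone decides: ['A', 'B']
```

Note what the counterweight does: it is not an extra preference layered on top of the market’s, but the removal of preference from the frame altogether — which is exactly why, as the next paragraph argues, it eliminates the market effect.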

In effect, this would eliminate the market effect. Abstract value would, in that case, lose its function. It would be replaced by substantive right.

This, in fact, was the proposal against which Professor Friedman was arguing in Capitalism and Freedom. Any interference in the market undermines (market) freedom. Well, yes. By definition. But perhaps some things do not lend themselves to marginal analysis.

More immediately, in a racist and misogynist society, AI cannot but yield “neutral” racist and misogynist results.
