Ethics and AI

2/7/22


Welcome back to the Odyssey! In this post, I am going to discuss the impact AI has on our society using three fascinating articles from the MIT Technology Review.


MIT Technology Review

MIT Technology Review is a magazine owned by MIT that publishes news about the world’s newest and most innovative technologies. What I love about them is that they focus on both the technical side of tech and its greater impact on society. You can subscribe and get access to their magazine for $50 a year. I’ve been subscribed for a bit over 3 years now and I think it’s totally worth it. If you’re unwilling to pay the fee, you can also always just delete your cookies after you run out of free articles.

Of course technology perpetuates racism. It was designed that way.

This article isn’t specifically about AI, but its major takeaways are incredibly relevant to AI. McIlwain discusses how one of the first uses of predictive modeling was President Johnson’s surveillance program in “riot affected areas,” created to discover the causes of the “ghetto riots” during the long, hot summer of 1967. The information gathered was used to trace how information flowed during protests and to decapitate the protests’ leadership. This laid the foundation for racial profiling, predictive policing, and racially targeted surveillance.

We’ve already started down the same path with AI. Contact tracing and surveillance during the pandemic employ AI systems and are once again casting Black and Latinx people as the threat. Automated risk profiling systems disproportionately identify Latinx people as illegal immigrants, and facial recognition technologies lead to convictions on the basis of skin color. The academic community is aware of AI’s propensity to pick up bias, yet researchers rarely seem to consider the impact this has. Moving forward, AI development and implementation need to be seen through an ethical lens rather than a results-driven one.

You can find the article here.

AI has exacerbated racial bias in housing. Could it help eliminate it instead?

“Few problems are longer-term or more intractable than America’s systemic racial inequality. And a particularly entrenched form of it is housing discrimination.”

This article discusses how, even though automated mortgage lending systems are not built to have discriminatory policies, they still end up learning unfair policies that disproportionately hurt Black and Hispanic borrowers. These systems are designed with a profit-maximizing mindset, but their designers didn’t understand the racial consequences of that focus. A study mentioned in the article found that the price of approved loans differed by roughly $800 million a year because of race. The article also discusses how far behind regulators are in understanding how these systems even work. To fix this problem, the article argues that we need educated regulators who understand how AI works, as well as more diversity and foresight on the teams developing the algorithms.
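
To make the mechanism concrete, here is a minimal, purely hypothetical Python sketch (the features, numbers, and data are invented for illustration and are not from the article or the study it cites): a model that is never shown race can still reproduce historical discrimination through a correlated proxy, such as a neighborhood indicator shaped by segregation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                        # protected attribute, never shown to the model
neighborhood = (group + (rng.random(n) < 0.1)) % 2   # proxy feature: differs from group only 10% of the time
income = rng.normal(50, 10, n)                       # drives true ability to repay, same for both groups

# Historical approvals were themselves biased in favor of group 0
approved = income + 15 * (group == 0) + rng.normal(0, 5, n) > 55

# Train only on the "race-blind" features: income and neighborhood
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The approval gap persists: the model recovers the bias through the proxy
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")

Nothing in the training setup mentions race, yet the model happily uses the neighborhood column as a stand-in for it, because doing so improves its fit to the biased historical labels.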

This issue with automated housing algorithms applies to every application of AI. There is an industry-wide lack of consideration for the complexity of the problems these systems are meant to address. If this practice continues, the consequences will only grow more severe and will continue to disproportionately affect vulnerable groups. To fix this, project teams have to be interdisciplinary and focus on the implicit consequences of their decisions, not just the explicit ones.

You can find the article here.

An AI saw a cropped photo of AOC. It autocompleted her wearing a bikini.

“Feed one a photo of a man cropped right below his neck, and 43% of the time, it will autocomplete him wearing a suit. Feed the same one a cropped photo of a woman, even a famous woman like US Representative Alexandria Ocasio-Cortez, and 53% of the time, it will autocomplete her wearing a low-cut top or bikini.”

Bias in data is a serious problem. Virtually all high-performing AI systems require massive amounts of training data, which naturally contain biases that the model then exploits to improve performance. In the case of image-generation models, these biases sexualize women, leading to the scenario described above.
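
As a toy illustration (this is not how the image model in the article actually works, and the caption corpus below is invented, with proportions only loosely mirroring the quoted percentages), even the simplest statistical “autocomplete” that just counts its training data will reproduce whatever skew that data contains:

from collections import Counter, defaultdict

# Imagined caption corpus with a skewed distribution (hypothetical numbers)
corpus = (
    ["man wearing suit"] * 43 + ["man wearing t-shirt"] * 30 + ["man wearing hoodie"] * 27 +
    ["woman wearing bikini"] * 53 + ["woman wearing blazer"] * 47
)

# Count how each "<subject> wearing" context gets completed in the training data
completions = defaultdict(Counter)
for caption in corpus:
    subject, _, garment = caption.split(" ", 2)
    completions[subject][garment] += 1

# A model that mirrors its training statistics picks the over-represented completion
for subject in ("man", "woman"):
    garment, count = completions[subject].most_common(1)[0]
    share = count / sum(completions[subject].values())
    print(f"'{subject} wearing ...' -> {garment} ({share:.0%})")

The model isn’t told anything about gender; it simply learns that one completion is more frequent than another, and at scale those frequencies encode society’s biases.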

Not being able to control what your model learns, and the consequences of that lack of control, is what inspired me to start this blog in the first place. Currently, there is a serious lack of concern in the industry about AI’s opacity and bias problems, even though the consequences are devastating. It seems as though we are so focused on making artificial intelligence exactly like human intelligence that we haven’t taken a step back to question whether that is the best path to go down. Humans are plagued by cognitive biases. Systems that emulate our behavior will have the same problem and will ultimately end up making pervasive societal issues like racism and sexism worse.

You can find the article here.
