Welcome!

12/20/21


Welcome to the Odyssey! I want to use this first post to quickly discuss what I will be talking about in this blog and why I started it.


A couple of weeks ago, whilst falling down a particularly deep rabbit hole of AI-related journal articles, I discovered Ashique KhudaBukhsh’s We Don't Speak the Same Language: Interpreting Polarization through Machine Translation and was blown away. In the paper, KhudaBukhsh takes comment sections on YouTube from opposite ends of the political spectrum, treats them as separate languages, and then trains a machine translation model to translate one ‘language’ into the other [1]. The results are astounding. Even though both comment sections are written in English, pairs of terms with very different meanings, like ‘KKK’ and ‘BLM’, were mapped onto each other, meaning that liberals (CNN viewers) use ‘KKK’ in the same contexts that conservatives (Fox viewers) use ‘BLM’. To put it simply: the KKK is to CNN viewers what Black Lives Matter is to Fox viewers.
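
To give a rough sense of the idea, here is a toy sketch, not the paper’s actual pipeline: one way to ‘translate’ between two English-speaking communities is to train separate word embeddings on each comment corpus, align the two spaces using a seed dictionary of shared, neutral words, and then look at which terms land near each other after alignment. The vocabularies, vectors, and seed pairs below are made up purely for illustration.

```python
import numpy as np

# Toy stand-ins for word embeddings trained separately on two comment
# corpora (one per channel). In a real setup these would come from
# something like word2vec fit on each corpus; here they are random
# vectors purely for illustration.
rng = np.random.default_rng(0)
vocab_a = ["kkk", "election", "vaccine", "media"]   # "language" A
vocab_b = ["blm", "election", "vaccine", "media"]   # "language" B
emb_a = rng.normal(size=(len(vocab_a), 50))
emb_b = rng.normal(size=(len(vocab_b), 50))

# Seed dictionary: words assumed to mean the same thing in both
# communities (shared, neutral vocabulary).
seed = [("election", "election"), ("vaccine", "vaccine"), ("media", "media")]
A = emb_a[[vocab_a.index(a) for a, _ in seed]]
B = emb_b[[vocab_b.index(b) for _, b in seed]]

# Orthogonal Procrustes: the rotation W that best maps space A onto
# space B over the seed pairs is U @ Vt, where U, S, Vt = svd(A.T @ B).
u, _, vt = np.linalg.svd(A.T @ B)
w = u @ vt

# "Translate" each word in A by rotating it into B's space and
# reporting its nearest neighbour there (cosine similarity).
mapped = emb_a @ w
mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
targets = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
for word, vec in zip(vocab_a, mapped):
    print(word, "->", vocab_b[int(np.argmax(targets @ vec))])
```

With real corpora and real embeddings, the interesting output is exactly the kind of mapping KhudaBukhsh reports: community-specific terms that get ‘translated’ into each other because they are used in analogous contexts.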

While the results themselves are fascinating, it’s the bigger picture of KhudaBukhsh’s paper, exploiting AI’s flaws for good, that inspires me to do my own research. Over the last couple of months, as I have learned more about AI and machine learning, I have become increasingly frustrated with the academic community for prioritizing pure advancement over ethical considerations, and worried about the consequences such unchecked progress might have. Deploying ‘better’ (higher-scoring on benchmarks) but increasingly complex and opaque AI systems to supplement or replace human decision-making leads to uninterpretable outcomes that exploit social biases.

I was fascinated by AI, but could not in good conscience continue to learn about topics like Deep Learning and Transformers without explicitly focusing on their relation to society. KhudaBukhsh’s paper provided the answer: using AI’s propensity to pick up biases in its training data to study societal issues, such as racism, sexism, antisemitism, and xenophobia, that are rooted in subconscious bias.

I am going to be researching social behavior through language models trained on social media data, and I will be using this blog to document my experience. My blog will illustrate my successes and failures, describe what I learn along the way, and share insights into our society. Whether you’re a coding prodigy or someone who is just curious about artificial intelligence, my blog’s unique focus will make the read worthwhile. Stay tuned!


[1] Check out the next post to get a more in-depth discussion of the article!
