Benjamin Pusch

Left to Right and Right to Left

12/24/21


Welcome back to the Odyssey! Today I am going to go more in depth about the paper that inspired me to start this project. This is going to be a more technical post, but I tried to make it so everyone can understand. Check it out!


Background

The US is more divided now than it has ever been in the last 30 years. Parties fail to cooperate on even the most crucial issues, causing legislative gridlock and hurting the American people. Personally, I am often scared to share my own political beliefs because I have learned that the second one of my opinions aligns with the right or left, I will be labeled as either a Republican or Democrat and any other comment I make will be viewed through that lens. Further, social media has added gasoline to the fire by incentivizing news media to foster partisanship and making it easier for people to share controversial, harmful, and offensive opinions. Political polarization is one of the most pressing and potent problems in the US, and the longer we take to heal and see past each other’s differences, the more drastic the consequences will be.

What?

Getting a better understanding of the issue is crucial in addressing the problem. We Don't Speak the Same Language: Interpreting Polarization through Machine Translation provides a novel and insightful perspective on political polarization by arguing that people on opposite ends of the spectrum are speaking in different languages. Specifically, the researchers use machine translation to answer the following questions: Is it possible that the two sub-communities are speaking in two different languages such that certain words do not mean the same to the liberal and conservative viewership? If yes, how do we find those words?

Why?

So, why did the researchers want to use AI, namely machine translation, to study political polarization? Typically, surveys are used to study political discourse, but they are expensive and can only focus on a narrow subject, such as opinions on climate change. In contrast, the proposed method aggregates the opinions of two or more sub-communities (in this case the left and right) discussing a broad range of issues. By using social media (YouTube) for their data, the researchers are able to analyze the opinions of a much wider subset of the population than a traditional survey would be able to. Finally, the word embeddings generated by the machine translation models allow the researchers to quantify the polarization between the left and the right, providing an efficient way to capture the ideological differences and political divide between the two sides.

How?

Terms

As I mentioned above, the paper uses machine translation to study political polarization in the US ahead of the 2020 elections. To understand how their method works, let’s start off by defining some key terms.

Word Embeddings:

Word embeddings are a way to represent a word as a vector (which is really important because information for AI systems is represented through vectors and matrices). The simplest way to represent a word as a vector is what is called a one-hot encoded vector: a vector of length n, where n is the total number of unique words in your dataset, with a 1 in the entry corresponding to the word and a 0 everywhere else.

However, this is a very inefficient way to store information: for the English language, we would need a vector of roughly length 170,000 for each word. A word embedding, in contrast, is a learned representation of the text in your dataset where each word is represented by a unique vector in some vector space of predefined size (I hope you remember your linear algebra). This representation isn’t a given and has to be learned by a neural network that uses the context of each word to understand each word’s ‘meaning’. Thus, words that are used in similar ways end up with similar representations (they are closer together in the vector space). One of the most popular algorithms for developing word embeddings is FastText, which is what this paper uses.
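As a toy illustration of what ‘closer together in the vector space’ means, here is how cosine similarity compares some made-up 3-dimensional vectors (these numbers are invented for illustration, not learned; real FastText embeddings typically have 100-300 dimensions):

```python
import numpy as np

# Invented 3-d "embeddings"; in practice these would be learned from text.
emb = {
    "senator":  np.array([0.9, 0.1, 0.0]),
    "congress": np.array([0.8, 0.2, 0.1]),
    "doorknob": np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    # Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words used in similar contexts end up close together:
print(cosine(emb["senator"], emb["congress"]))  # high
print(cosine(emb["senator"], emb["doorknob"]))  # low
```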

Machine Translation:

The name gives it away: machine translation is the task of translating text from one language into another. The specific algorithm used in the paper works as follows. First, FastText is used to create word embeddings for each language. Then, a bilingual seed lexicon (consisting of translation pairs) is used to learn an orthogonal transformation matrix that aligns the vector spaces (created by the word embeddings) of the two languages. To translate a word, this transformation matrix is applied to map the source word into the vector space of the target language. The nearest neighbor to the resulting vector, using cosine distance, is the translated word.
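The align-then-translate step can be sketched in a few lines of NumPy. This is my own toy illustration (random vectors, and the orthogonal matrix recovered via SVD, the standard orthogonal Procrustes solution), not the paper’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "source" and "target" embeddings for a seed lexicon of 5 translation
# pairs, 4 dimensions each (real setups use thousands of pairs and
# 100-300 dimensions). Here the target space is an exact rotation of the
# source space so we can check that the alignment recovers it.
X = rng.normal(size=(5, 4))                  # source-language vectors
true_W, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Y = X @ true_W                               # target-language vectors

# Orthogonal Procrustes: the rotation W minimizing ||XW - Y|| comes from
# the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

def translate(src_vec, target_vocab, target_matrix):
    # Map the source vector into the target space, then take the nearest
    # neighbor by cosine similarity among target-language words.
    mapped = src_vec @ W
    sims = (target_matrix @ mapped) / (
        np.linalg.norm(target_matrix, axis=1) * np.linalg.norm(mapped))
    return target_vocab[int(np.argmax(sims))]

target_vocab = ["w0", "w1", "w2", "w3", "w4"]
print(translate(X[2], target_vocab, Y))  # w2: the correct pair is recovered
```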

Their Method

Now the juicy bit: how did the researchers use machine translation to study polarization and show that people from opposite ends of the political spectrum are speaking in different languages? The crux of the paper’s approach was misaligned pairs. Using the machine translation algorithm that I described, the researchers ‘translated’ the comments from the right-wing ‘language’ to the left-wing ‘language’ and vice versa. Since both groups are speaking English, most words are translated to themselves because their use isn’t affected by someone’s political stance. For example, your political affiliation will not change your use of the word ‘doorknob’. However, words that are sensitive to political beliefs are used in different contexts by the left and right wing, producing misaligned pairs. The paper highlighted two main types of misaligned pairs. The first is pairs where both words actually refer to the same grounded entity but carry different meanings (e.g., [pelosi, pelousy]). Hence, we can think of the two words as synonyms referring to the same entity, though the difference in the actual names can reflect important differences in attitudes toward that entity.

The second case is where the word pair refers to two different entities, as in [tapper, hannity]. Here, the phenomenon detected is that one sub-community makes statements about the word from their language that are very similar to the statements made by the second sub-community about the word from their language (e.g., “Tapper is a great interviewer” vs. “Hannity is a great interviewer”). These misaligned pairs reveal the deep divide between the two sub-communities and can be used to quantify the polarization between the left and the right.
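Once the two embedding spaces for the same English vocabulary have been aligned, finding misaligned pairs boils down to checking whether each word translates to itself. A toy sketch with invented 2-dimensional vectors (the words and numbers are made up to mirror the [tapper, hannity] example, not taken from the paper’s data):

```python
import numpy as np

# Invented, already-aligned embeddings for the same English vocabulary in
# two sub-communities. "tapper" and "hannity" deliberately swap positions.
vocab = ["doorknob", "tapper", "hannity"]
left  = {"doorknob": np.array([1.0, 0.0]),
         "tapper":   np.array([0.0, 1.0]),
         "hannity":  np.array([0.5, 0.5])}
right = {"doorknob": np.array([1.0, 0.0]),
         "tapper":   np.array([0.5, 0.5]),
         "hannity":  np.array([0.0, 1.0])}

def nearest(vec, space):
    # Nearest neighbor by cosine similarity.
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(space, key=lambda w: cos(vec, space[w]))

# A word is "misaligned" when it does not translate to itself.
misaligned = [(w, nearest(left[w], right)) for w in vocab
              if nearest(left[w], right) != w]
print(misaligned)  # [('tapper', 'hannity'), ('hannity', 'tapper')]
```

‘doorknob’ translates to itself and drops out, while the politically loaded pair surfaces, which is exactly the behavior the paper exploits at scale.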

Results

[Figure: examples of how these misaligned pairs were used in the dataset]

Why should you care?

These results on their own are pretty striking. Black Lives Matter being used in the same context as the KKK is frightening and the overall hateful attitude towards members on the opposite end of the political spectrum is saddening. The misaligned pairs illustrate the right and left’s mirror image views of contentious topics across the board, highlighting that political divisions are suppressing the lifeblood of a healthy political system: diversity of opinion and spirited debate.

However, the bigger takeaway I think is the effectiveness of AI models to understand such complex social dynamics and their ability to provide truly fascinating and novel insights. These traits make methods such as machine translation a much more powerful tool to perform polling or study societal behavior than traditional statistical techniques. By being able to better understand the nuances behind public opinions in the aggregate, policy makers can make better decisions that improve more people’s lives.

Benjamin Pusch

Welcome!

12/20/21


Welcome to the Odyssey! I want to use this first post to quickly discuss what I will be talking about in this blog and why I even got started.


A couple of weeks ago, whilst falling down a particularly deep rabbit hole of AI related journal articles, I discovered Ashique KhudaBukhsh’s We Don't Speak the Same Language: Interpreting Polarization through Machine Translation and was blown away. In the paper, KhudaBukhsh takes comment sections on YouTube from opposite ends of the political spectrum, treats them as separate languages, and then trains a machine translation model to translate one ‘language’ to the other [1]. The results are astounding. Even though both comment sections are speaking English, some phrases that in actuality have different definitions, like ‘kkk’ and ‘blm’, were mapped onto each other, meaning that liberals (CNN viewers) use ‘KKK’ in the same context that conservatives (Fox viewers) use ‘BLM’. To put it simply: The KKK is to CNN viewers what Black Lives Matter is to Fox viewers.

While the results themselves are fascinating, it’s the bigger picture of KhudaBukhsh’s paper, exploiting AI’s flaws for good, that inspires me to do my own research. Over the last couple of months, as I have researched more about AI and machine learning, I have become increasingly frustrated with the academic community for prioritizing pure advancement over ethical considerations and have become worried about the consequences such unchecked progression might have. The implementation of ‘better’ (scoring higher on benchmarks), but increasingly complex and opaque AI systems to supplement or replace human decision making leads to uninterpretable outcomes that exploit social biases.

I was fascinated by AI, but could not in good conscience continue to learn about topics like Deep Learning and Transformers without explicitly focusing on their relation to society. KhudaBukhsh’s paper provided the answer: using AI’s propensity to pick up bias in data to study societal issues, such as racism, sexism, antisemitism, and xenophobia, that are rooted in subconscious bias.

I am going to be researching social behavior through language models trained on social media data, and I will be using this blog to document my experience. My blog will illustrate my successes and failures, describe what I learned, and provide fascinating insights into our society. Whether you’re a coding prodigy or someone who is just curious about artificial intelligence, my blog’s unique focus will make the read worthwhile. Stay tuned!


[1] Check out the next post to get a more in-depth discussion of the article!
