CP Prewriting #2

CP Prewriting #2: Analytical Summaries (Topic Proposal)

This prewriting played a bigger role in my actual CP, as I figured out my topic while completing it. The questions we were asked to answer were very helpful in developing my argument; I was able to extract information from my sources and organize it in a way that was useful to me. This helped me better understand my topic and get a thorough grasp on what I was going to argue.


  1. I HAVE RE-READ THE FORMAL INSTRUCTIONS FOR THE CP AND I DO NOT HAVE ANY QUESTIONS AT THIS TIME.
  2. See Annotated Bibliography
  3. See Annotated Bibliography
  4. New key research terms: artificial intelligence, facial recognition, anti-LGBTQ algorithms, gender discrimination, human bias, debiasing technology, censorship
  5. The current technology-related topic that I decided to investigate is how social media algorithms affect cultural shifts and perceptions of gender. I would categorize this topic as one that poses a social issue. Put briefly, the issue that algorithms cause in terms of gender revolves around heteronormative information that is commodified and emulated by artificial intelligence. Human biases are unavoidable in the creation of modern technology, and as a result, A.I. fails to account for anything that falls outside of a binary. Because of the fluidity of gender, both inside and outside of the LGBTQ community, modern technology struggles to keep up with an ever-changing concept of gender that has so many nuances and variations. When social media portrays gender inadequately, whether through censorship or poor representation, society forms its perspective on gender based on widely available but insufficient information.
  6. One of the affected communities within this issue is the LGBTQ community. As our standards of gender are constantly transformed by these folks, they are often subject to aversion from those unwilling to accept change, resulting in their discrimination. However, I believe that this issue also affects a larger community: anybody who uses social media. With algorithms skewing our information on gender norms on these platforms, people who wish to present a certain way may feel that their emotions are taboo or socially unacceptable. As aforementioned, the issue predominantly exists on social media, namely platforms such as Instagram and TikTok. Algorithmic bias has been an issue since the moment algorithms were created; however, algorithmic issues regarding gender have become increasingly prevalent within the last decade. This issue continues to prevail because the coders behind facial recognition and other A.I. are bound to integrate their subconscious (or conscious) biases into the technology they create, and with how rapidly the concept of gender has been explored in recent years, it is crucial that we find a solution so that people can fully understand and embrace their identities. One pattern I have identified across my sources is censorship of the LGBTQ community in media.
  7. Some of the significant effects that have resulted from the problem I am investigating are as follows: unfair treatment because of faulty facial recognition, unreasonable censorship of LGBTQ influencers, suppression of LGBTQ phrases, underrepresentation of nonconforming genders in media, and even the possible threat of outing closeted LGBTQ folks. I feel that all of these issues revolve around the big question: what are these actions communicating to today's society? The frequent bans, the failed attempts at recognizing gender-nonconforming faces, and so on further perpetuate the harmful idea that LGBTQ folks are outside of the norm and therefore unacceptable.
  8. One major event that contributed to these issues was a study conducted by Stanford University, which "claimed an algorithm could accurately distinguish between gay and straight men 81 percent of the time based on headshots." This extremely invasive piece of technology could only serve to out, criminalize, and shame LGBTQ folks; that is assuming it can even function correctly and discern gay people from straight people, which itself assumes a binary that does not exist. In the event that this kind of technology falls into the wrong hands, it could become a dangerous tool easily weaponized against LGBTQ people, especially those who would be in direct danger if their identities were revealed.
  9. Two scholars cited in my sources are Mary L. Gray, an anthropologist with affiliations in gender studies, and Ashland Johnson, the Human Rights Campaign's director of public education and research. It's no mystery that these two scholars are in agreement with one another; both advocate for the termination of various A.I., believing them to do more harm than good to gender-nonconforming folks. They were quoted because both of them have the grounds to speak on this issue, more with regard to its social aspect than its technical aspect. Their experience working with different gender identities likely contributes to their knowledge of and vehemence toward this problem, and citing people who have directly worked with LGBTQ folks and beyond helps express the severity of the issue.

Annotated Bibliography

Fox, Chris. “TikTok admits restricting some LGBT hashtags.” BBC, 10 Sep. 2021, www.bbc.com/news/technology-54102575. Accessed 13 October 2021.

TikTok, today’s newest and increasingly popular social media platform, has come under fire for its failure to adhere to its proclaimed inclusivity standards, and there have been numerous reports of LGBTQ content being restricted. TikTok has published a statement in response to this issue, stating that it was “committed to making [its] moderation policies, algorithm, and data security practices available to experts,” but despite these claims, LGBTQ TikTok users have still reported issues with shadowbanning and other forms of censorship. These specific instances were reported in other countries, namely Bosnia, Jordan, Russia, and Southeast Asia. This text is directed toward TikTok users; it affects all people who use the app, regardless of whether or not they identify with the LGBTQ community. Creators are harmed in that their views, likes, and comments sharply decrease because of these restrictions, meaning less revenue specifically for LGBTQ creators. Viewers are unable to conveniently see the TikTok creators that they know and love if these algorithms are not fixed. Information listed in this text is derived from the Australian Strategic Policy Institute and analyzed by Ben Hunte, a BBC LGBT correspondent and journalist covering stories regarding sexuality and gender. This article gave me more insight into what’s happening behind the scenes of one of the world’s most loved social media platforms.

Levesque, Brody. “Instagram’s anti-LGBTQ trolls use algorithms & zap gay influencers.” Washington Blade, 30 Dec. 2020, www.washingtonblade.com/2020/12/30/instagrams-anti-lgbtq-trolls-use-algorithms-zap-gay-influencers/. Accessed 13 October 2021.

This article follows the story of Instagram creators Matthew Olshefski and Paul Castle, as well as an earlier instance of Joe Putignano undergoing a similar situation in 2017. Putignano is an openly gay Cirque du Soleil performer who one day woke up to find his Instagram account deactivated. The irony lay in the fact that he was frequently harassed for his sexuality on his account, and instead of acting against those bullies, Instagram appeared to take down his account to solve the problem. After much back and forth, he was able to get it reactivated. Olshefski and Castle are a gay couple who speak on various issues in the LGBTQ community, sharing their experiences and connecting with thousands of people around the world. Similarly, they woke up to find their account taken down without notice, with an explanation simply stating that they were “pretending to be someone else.” Despite repeated attempts, they have yet to receive a response from Instagram. The article is directed toward Instagram users, particularly LGBTQ creators on Instagram. These seemingly arbitrary account suspensions have raised reasonable concerns as to whether Instagram is deliberately targeting these influencers. After reading through this, I’m interested in delving deeper into Instagram’s reasoning behind these account bans.

Samuel, Sigal. “Some AI just shouldn’t exist: Attempts to “fix” biased AI can actually harm black, gay, and transgender people.” Vox, 19 Apr. 2019, www.vox.com/future-perfect/2019/4/19/18412674/ai-bias-facial-recognition-black-gay-transgender. Accessed 13 October 2021.

Samuel discusses the effects of human bias on artificial intelligence, and how those biases can affect factors such as race and sexuality in algorithmic programming. Issues like sexism, racial discrimination, and LGBTQ stereotyping are perpetuated through courtroom sentencing, mortgage algorithms, and so-called “gaydars.” To combat these problems, programmers have begun integrating “debiasing toolkits” into their code; however, it’s possible that these attempted fixes can ironically add to and perpetuate bias in technology. Because of this, Samuel argues that some AI should simply not be invented at all, since these issues are unavoidable. In regard to the LGBTQ community, facial recognition technology can be problematic because it can fail to recognize trans folks who are transitioning. At the same time, however, trying to “fix” something like that could mean coding the AI to recognize who is trans and who is not, thus perpetuating discrimination. Further, some have used algorithms as an automatic “gaydar” to identify who is gay and who is not based solely on their facial features. Not only can this be inaccurate, but it also poses a threat to LGBTQ people, especially those who may be put in danger if they were to be outed; this could be the primary audience Samuel is speaking to. This article has certainly given me insight into how AI can pose a direct threat to the LGBTQ community.

Wareham, Jamie. “Why Artificial Intelligence Is Set Up To Fail LGBTQ People.” Forbes, 21 Mar. 2021, www.forbes.com/sites/jamiewareham/2021/03/21/why-artificial-intelligence-will-always-fail-lgbtq-people/. Accessed 13 October 2021.

Because of the binary nature of the algorithms we use in media today, it’s inevitable that our modern A.I. has failed and will continue to fail to cater to LGBTQ folks. The article elaborates on the idea that those in the LGBTQ community are subject to being “filtered out,” rendering them outside of what is considered the norm. The identities of people within this community are constantly changing, whether as a result of someone changing the way they label themselves because it suits them better, someone discovering who they are for the first time, or even just someone coming out of the closet. All of it is a fluid process, which an algorithm tends to reject. Algorithms seek concrete categories; the malleable nature of the LGBTQ community does not fit that mold. Journalist Jamie Wareham cites anthropologist Mary L. Gray on this topic, and most of the article’s credibility is derived from Gray. The primary audience is the LGBTQ community, given the language such as “we” and “us” used throughout the article. Reading this helped me get a better understanding of how exactly algorithms can be discriminatory toward our community.
