CP Prewriting #3: Annotated Bibliography and Abstract

In this final CP prewriting, we were assigned to create an annotated bibliography of scholarly, peer-reviewed sources, as well as an abstract for our paper. This part actually came a bit easier to me because, in my previous prewritings, I had found scholarly sources unintentionally. When I found them, I set them aside knowing I could use them in the future, so I ended up reading and analyzing them for this assignment. These sources were very helpful in developing a credible argument for my paper.


Part 1:

Bivens, Rena. "The gender binary will not be deprogrammed: Ten years of coding gender on Facebook." New Media & Society, vol. 19, no. 6, 2017, pp. 880-898. SAGE Publishing, https://journals.sagepub.com/doi/10.1177/1461444815621527.

In this journal article, Bivens delves into the sociotechnical issues that Facebook presents with regard to the gender binary. In sum, Facebook compromises the experience of trans and gender non-conforming folks on its platform for the sake of monetary advantages. Forcing users to choose a gender and/or a pronoun (the only offerings being “she,” “he,” and “them,” which in turn categorizes the individual as female, male, or “other”) makes them a marketable asset in the world of Facebook. This way, advertisers on Facebook can target their advertisements at either “males” or “females,” reinforcing stereotypes from a gender binary that does not exist. Facebook appears to mimic authenticity on a surface level, but in reality, users who select a custom gender are still required to pick one of three pronouns so that their gender can be programmed accordingly. Ultimately, the issue Bivens addresses is the deep-rooted and intentional binary coding that Facebook actively enforces for the sake of financial gain.

Fox, Chris. “TikTok admits restricting some LGBT hashtags.” BBC, 10 Sep. 2021, www.bbc.com/news/technology-54102575. Accessed 13 Oct. 2021.

Gerrard, Ysabel and Helen Thornham. "Content moderation: Social media’s sexist assemblages." New Media & Society, vol. 27, no. 7, 2020, pp. 1266-1286. SAGE Publishing, https://journals.sagepub.com/doi/10.1177/1461444820912540.

Gerrard and Thornham investigate “sexist assemblages,” a term they use to describe the overlap between gender and race and the normalized discrimination of both on social media. They discuss how community guidelines and the algorithms behind recommended content (e.g., “you may also like this post,” “related keywords to x,” etc.) can be extremely damaging and act as reinforcers of sexism and racism. The issue with combining human and automated means of surveilling a social media platform is, again, the biases that humans unconsciously embed into these algorithms. For example, the authors tackle the restriction of “problematic” content and how that in itself can lead to more issues. They take the banning of “female-presenting nipples” on the platform Tumblr and explain how such a restriction reflects the oversexualization of women’s bodies, or the banning of the “proana” tag on Pinterest and how the alternative recommendations offered in its place can be just as harmful. These “sexist assemblages” work based on previous data, reinforcing social and cultural ideas that may be harmful despite the good intentions of their human programmers. Community guidelines are also malleable and subject to pressure from outside individuals; they are difficult to maintain because they are moral reflections rather than legal ones. Thus, the authors argue, failure to address these issues with content moderation silences the very groups that moderation attempts to protect from discrimination.

Gutierrez, Miren, et al. “New Feminist Studies in Audiovisual Industries: Feminism, Gender, Ageism, and New Masculinities in Audiovisual Content.” International Journal of Communication, vol. 15, 2021, pp. 407-415. 

By studying cultural shifts in audiovisual content, Gutierrez and her co-authors investigate the development of feminism in modern society. Their argument is that the algorithms used in this form of content directly harm the feminist movement and, in turn, reinforce gender roles and discrimination. The correlation between the representation of women in media and women’s well-being is notably affected by algorithmic technology. Depending on its content, audiovisual media showcasing women has the potential to influence the way a woman feels about herself, and these algorithms determine how much exposure consumers have to certain types of content. Promoting the sexualization of women in media, for example, is bound to contribute to societal views on women and how they should be treated. Naturally, all of this is counterproductive to the feminist movement, and the article pushes for different means of promoting feminism in audiovisual content.

Lambrecht, Anja and Catherine Tucker. "Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads." Management Science, vol. 65, no. 7, July 2019, pp. 2966-2981. INFORMS, https://doi.org/10.1287/mnsc.2018.30.

In this study, Lambrecht and Tucker investigate whether algorithms are truly biased in how they target certain demographics. They conducted multiple campaigns across various social media platforms (with a focus on Facebook) using STEM-related advertisements in order to test this suspicion. Much of the article is spent ruling out other potential explanations for the apparent algorithmic bias. Ultimately, they conclude that there does appear to be a difference in how often women see STEM advertisements versus how often men do, though not necessarily for reasons related to gender discrimination. Their research finds that women are actually “more valuable” when it comes to such advertisements, because women are more likely to spend money after following an ad, while men are less likely to do so. As a result, when looking at the value per click on an ad, women are worth more because of the money they spend. Paradoxically, because it costs more to show women these ads, algorithms tend toward a more cost-effective method of advertising, which is why STEM ads may be more prevalent on men’s feeds. Lambrecht and Tucker suggest gender-specific advertising as a remedy for this imbalance; however, such a method could itself be classified as discriminatory. Gender-neutral advertisements can therefore still end up carrying gender biases, though perhaps not always for malicious reasons.

Levesque, Brody. “Instagram’s anti-LGBTQ trolls use algorithms & zap gay influencers.” Washington Blade, 30 Dec. 2020, www.washingtonblade.com/2020/12/30/instagrams-anti-lgbtq-trolls-use-algorithms-zap-gay-influencers/. Accessed 13 Oct. 2021.

Samuel, Sigal. “Some AI just shouldn’t exist: Attempts to ‘fix’ biased AI can actually harm black, gay, and transgender people.” Vox, 19 Apr. 2019, www.vox.com/future-perfect/2019/4/19/18412674/ai-bias-facial-recognition-black-gay-transgender. Accessed 13 Oct. 2021.

Schroeder, Jonathan E. “Reinscribing gender: social media, algorithms, bias.” Journal of Marketing Management, vol. 37, no. 3-4, 2021, pp. 376-378. Taylor & Francis, https://doi.org/10.1080/0267257X.2020.1832378.

In this journal article, Schroeder analyzes how social media algorithms can learn gender biases and reinforce harmful, sexist stereotypes. He explains how these algorithms learn what to do from existing language on the Internet, namely online reviews of products and restaurants. This flawed learning has resulted in instances such as men receiving more advertisements for STEM-related careers than women, and women receiving more ads for products and goods than for high-paying jobs. The purpose of this analysis is to show that the functionality of social media relies largely on algorithms that reinforce gender biases. These algorithms derive their information from big data on the Internet, which is, needless to say, full of different gender biases. Thus, algorithms play a role in perpetuating negative gender stereotypes.

Wareham, Jamie. “Why Artificial Intelligence Is Set Up To Fail LGBTQ People.” Forbes, 21 Mar. 2021, www.forbes.com/sites/jamiewareham/2021/03/21/why-artificial-intelligence-will-always-fail-lgbtq-people/. Accessed 13 Oct. 2021.


Part 2:

Abstract:

My project investigates the influence of social media algorithms on the perception of gender in modern society. Algorithmic technology and the manner in which it functions have been shrouded in ambiguity; consumers of social media rarely wonder why certain content is recommended to them, or how exactly their platform of choice transforms to suit their interests. As it stands today, these algorithms are deeply ingrained in our everyday lives because of their convenient ability to adapt to our wants and needs -- but are we overlooking this technology’s potentially sinister repercussions?

From the early 2000s to the present day, programmers have worked to craft an enjoyable, individualized experience for social media users; in the process of employing algorithmic technologies, however, the issue of discriminatory artificial intelligence has become increasingly prevalent. Social media platforms such as Instagram and Facebook fail to address their seemingly biased software, doing little beyond the surface level to resolve these issues. From random account suspensions to mandatory gender selection, a damaging binary is constantly reinforced on the social media platforms we know and love. But who enforces it? How do these biases persist in a neutral set of code and numbers? How is our society affected by the content these algorithms feed us? My paper asserts that the implementation of algorithmic technology in social media inevitably perpetuates gender-related issues in society.
