AP Prewriting #2

AP Prewriting #2: Scholarly Support for Advocacy Arguments & the Advocacy Abstract

For our second AP prewriting, we had to create a comprehensive bibliography compiling all of our different kinds of sources (think tanks, news sources, etc.) and write annotations for the new scholarly sources we found and used. These scholarly articles were meant to help further develop our ideas on our advocacy position.


“About Us.” Freedom House, https://freedomhouse.org/about-us. Accessed 8 Nov. 2021.

“About Us.” National Eating Disorders Association (NEDA), https://www.nationaleatingdisorders.org/about-us. Accessed 8 Nov. 2021.

“About Us.” The Clay Center for Young Healthy Minds at Massachusetts General Hospital, https://www.mghclaycenter.org/about-us/center-goals/. Accessed 8 Nov. 2021.

“About.” Fight for the Future, https://www.fightforthefuture.org/about. Accessed 8 Nov. 2021.

Strauß, Stefan. “Deep Automation Bias: How to Tackle a Wicked Problem of AI?” Big Data and Cognitive Computing, vol. 5, no. 2, 2021, pp. 1-14, https://doi.org/10.3390/bdcc5020018. Accessed 15 Nov. 2021.

This journal article focuses on deep automation bias (DAB) and argues that successfully combating it will take more than one approach. Strauß contends that efforts centered on “fairness, accountability, and transparency” (FAT) are not enough to eliminate DAB, and he advocates for broader solutions to algorithmic bias rather than ones that are too narrow and specific. The issues AI presents are also complex: we cannot definitively say that algorithms alone cause ethical problems in society, because many other factors contribute. Ultimately, Strauß argues that no proposed solution can fully resolve DAB, given the various limitations each one faces; he believes this type of issue lies beyond technical fixes and must be addressed in ways that go beyond code.

Fazelpour, Sina and David Danks. “Algorithmic bias: Senses, sources, solutions.” Philosophy Compass, vol. 16, no. 8, 2021, pp. 1-16, https://doi.org/10.1111/phc3.12760. Accessed 15 Nov. 2021.

In this article, Fazelpour and Danks discuss what algorithmic bias consists of and where it is derived from. They also explain the problems that algorithmic bias presents, tackling its different nuances and individual aspects. Toward the end of the article, they propose “a two-stage strategy for addressing algorithmic bias: (1) Use one (or more) mathematical fairness measures to quantify the amount of bias in the algorithmic output; and (2) Develop mitigation responses that reduce, and ideally eliminate, bias according to that measure,” which is also a mainstream solution to discrimination in machine learning (ML). They consider the idea of introducing statistical biases in order to counteract ethical algorithmic biases. Further, they acknowledge the potential drawbacks and consequences of this approach, but they maintain that rejecting algorithmic technology outright would be misguided. Fazelpour and Danks also acknowledge the “overhyped” nature of AI; while they support its ethical use, they consider its shortcomings and actively try to mitigate them. Overall, they believe that by broadening the scope of solutions, we may be able to solve biases in algorithms.
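The first stage of that strategy, quantifying bias with a mathematical fairness measure, can be sketched concretely. The measure and data below are my own illustrative assumptions, not taken from Fazelpour and Danks: demographic parity difference is just one of the many fairness measures of the kind they describe.

```python
# Hypothetical sketch of stage (1): quantify bias in an algorithm's
# output with one mathematical fairness measure, demographic parity
# difference. All names and data here are invented for illustration.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between the groups.

    0.0 means the algorithm gives positive outcomes to every group at
    the same rate; larger values indicate more bias under this measure.
    """
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Invented example: 1 = approved, 0 = denied, across two groups.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # gap of about 0.6
```

Stage (2) of the strategy would then tune or retrain the algorithm to shrink this number, which is where the trade-offs the authors acknowledge come into play.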

Conger, Kate, et al. “Eating Disorders and Social Media Prove Difficult to Untangle.” The New York Times, 22 Oct. 2021, https://www.nytimes.com/2021/10/22/technology/social-media-eating-disorders.html. Accessed 8 Nov. 2021.

Oremus, Will. “Lawmakers’ Latest Idea to Fix Facebook: Regulate the Algorithm.” The Washington Post, 12 Oct. 2021, https://www.washingtonpost.com/technology/2021/10/12/congress-regulate-facebook-algorithm. Accessed 8 Nov. 2021.

“The Program on Platform Regulation.” Stanford Cyber Policy Center, https://cyber.fsi.stanford.edu/content/program-platform-regulation. Accessed 8 Nov. 2021.


Abstract:

Social media is deeply embedded in our daily lives. It constantly affects the way we are perceived, the way we perceive others, our self-esteem, the standards we set for how we should live, and even the way we interact with ourselves and the environment around us. Because social media algorithms are so deeply intertwined with modern society, we constantly suffer the consequences they bring upon us, whether consciously or unconsciously. This raises an inevitable question: is there any way to eliminate these drawbacks? How can we separate ourselves from the negatives of social media while still consuming it the same way we did before? My advocacy project argues that the algorithms ingrained within our social media platforms cannot be fixed, and that even attempts to eliminate them completely would still fail to sufficiently mend the issue.
