
TOPIC: Social Media and Freedom of Speech
ARTICLE #1
Primary Source Document: Anti-Defamation League Representative Testifies About Online Extremism
Date: September 18, 2019
Source: U.S. Senate
On September 18, 2019, George Selim, a senior vice president at the Anti-Defamation League (ADL), a group that fights anti-Semitism and other forms of bigotry, testified before the U.S. Senate Commerce Committee about right-wing extremism on the Internet. Following are excerpts of his testimony:
The real-world violence of extremists does not emerge from a vacuum. In many cases the hatred that motivates extremist violence, and especially these documented white supremacist murders, is nurtured in online forums….
Extremist groups are empowered by access to the online world; the internet amplifies the hateful voices of the few to reach millions around the world. The online environment also offers community: while most extremists are unaffiliated with organized groups, online forums allow isolated extremists to become more active and involved in virtual campaigns of ideological recruitment and radicalization. As internet proficiency and the use of social media are nearly universal, the efforts of terrorist and extremist movements to exploit these technologies and platforms to increase the accessibility of materials that justify and instigate violence are increasing exponentially. Both terrorist and extremist movements, here at home and abroad, use online and mobile platforms to spread their messages and to actively recruit adherents who live in the communities they target….
Extremists make use of mainstream platforms in specific and strategic ways to exponentially increase their audience while avoiding content moderation activity that Facebook and Twitter use to remove hateful content. These include creating private pages and events, sharing links that directly lead users to extreme content on websites like 8chan and using coded language called “dogwhistles” to imply and spread hateful ideology while attempting to circumvent content moderation systems….
One of the key drivers of these complicated and at times deadly issues is the size and scale of these platforms. For example, on Twitter approximately 6,000 tweets are posted every second and approximately 500 million tweets are posted every day. If the company’s policies and systems operated at 99% effectiveness in detecting and responding to violent hate and extremist rhetoric, that would still leave five million tweets unaddressed every day. Imagine that each of those tweets, on the low end, reached just 60 people: those tweets would reach the number of people equal roughly to the population of the United States (330 million people) every day.
The policies and systems of these companies are very likely not operating with a high degree of accuracy, leaving possibly millions of users exposed and impacted by hateful and extreme content every day. As an example, YouTube in June 2019 announced a policy change focusing on prohibiting white nationalist and other extremist content from existing on its platform. In August 2019, an ADL investigation found a number of prominent white nationalists and other forms of hateful extremists still active and easily found on the platform, despite the policy change. Similarly, after Facebook very publicly banned Alex Jones from its platforms in May 2019, Jones was quickly able to shift his operations to another account on the platform. These instances raise alarming questions about the degree to which social media platforms, through their own internal policies and systems, are able to meaningfully detect, assess, and act on hateful content at the global scale their platforms operate.
The U.S. Congress and American public admittedly have limited knowledge of just how well platforms are dealing with the problem of white supremacist extremism. To evaluate their efforts, civil society organizations like ADL can conduct limited external research similar to the manner mentioned above, in which we use the platform information that is publicly available to objectively assess the stated actions and policy implications of a given platform. Or we can look to the platforms’ own limited efforts at transparency about their policies and practices. The mainstream social media platforms have several potentially relevant metrics related to the issue of extremism, especially white supremacist extremism, that they share in their regular transparency reports. These differ slightly as described by each platform. The metrics are self-reported by the companies, and there is no way to fully understand the classification of content categories outside of the brief descriptions given by the platforms as part of this reporting….
Additionally, when Facebook claims in its transparency report that it took action on four million pieces of hate speech from January to March 2019, it is difficult to understand what this means in context as we do not know how that compares to the level of hate speech reported to them, which communities are impacted by those pieces of content, or whether any of that content is connected with extremist activity on other parts of their platform.
In order to truly assess the problem of hate and extremism on social media platforms, technology companies must provide meaningful transparency with metrics that are agreed upon and verified by trusted third parties, like ADL, and that give actionable information to users, civil society groups, governments, and other stakeholders. Meaningful transparency will allow stakeholders to answer questions such as: “How significant is the problem of white supremacy on this platform?” “Is this platform safe for people who belong to my community?” “Have the actions taken by this company to improve the problem of hate and extremism on their platform had the desired impact?” Until tech platforms take the collective actions to come to the table with external parties and meaningfully address these kinds of questions through their transparency efforts, our ability to understand the extent of the problem of hate and extremism online, or how to meaningfully and systematically address it, will be extremely limited.

ARTICLE #2
Primary Source Document: Facebook CEO Mark Zuckerberg Delivers Address on Free Speech
Date: October 17, 2019
Source: Washington Post
On October 17, 2019, Facebook chief executive officer Mark Zuckerberg delivered a speech at Georgetown University. He frequently referenced the First Amendment to the U.S. Constitution, which protects freedom of speech. Following are excerpts of the address:
Throughout history, we’ve seen how being able to use your voice helps people come together. We’ve seen this in the civil rights movement. [Slavery abolitionist] Frederick Douglass once called free expression “the great moral renovator of society.” He said “slavery cannot tolerate free speech.” Civil rights leaders argued time and again that their protests were protected free expression, and one noted: “nearly all the cases involving the civil rights movement were decided on First Amendment grounds.”
We’ve seen this globally too, where the ability to speak freely has been central in the fight for democracy worldwide. The most repressive societies have always restricted speech the most—and when people are finally able to speak, they often call for change. This year alone, people have used their voices to end multiple long-running dictatorships in Northern Africa. And we’re already hearing from voices in those countries that had been excluded just because they were women, or they believed in democracy….
We now have significantly broader power to call out things we feel are unjust and share our own personal experiences. Movements like #BlackLivesMatter and #MeToo went viral on Facebook—the hashtag #BlackLivesMatter was actually first used on Facebook—and this just wouldn’t have been possible in the same way before. 100 years back, many of the stories people have shared would have been against the law to even write down. And without the internet giving people the power to share them directly, they certainly wouldn’t have reached as many people. With Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.
While it’s easy to focus on major social movements, it’s important to remember that most progress happens in our everyday lives. It’s the Air Force moms who started a Facebook group so their children and other service members who can’t get home for the holidays have a place to go. It’s the church group that came together during a hurricane to provide food and volunteer to help with recovery. It’s the small business on the corner that now has access to the same sophisticated tools only the big guys used to, and now they can get their voice out and reach more customers, create jobs and become a hub in their local community. Progress and social cohesion come from billions of stories like this around the world.
People having the power to express themselves at scale is a new kind of force in the world—a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.
So giving people a voice and broader inclusion go hand in hand, and the trend has been towards greater voice over time. But there’s also a counter-trend. In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.
We saw this when Martin Luther King Jr. wrote his famous letter from Birmingham Jail [in 1963], where he was unconstitutionally jailed for protesting peacefully. We saw this in the efforts to shut down campus protests against the Vietnam War [in the 1960s and early 1970s]. We saw this way back when America was deeply polarized about its role in World War I [1914-18], and the Supreme Court ruled that socialist leader Eugene Debs could be imprisoned for making an anti-war speech.
In the end, all of these decisions were wrong. Pulling back on free expression wasn’t the answer and, in fact, it often ended up hurting the minority views we seek to protect. From where we are now, it seems obvious that, of course, protests for civil rights or against wars should be allowed. Yet the desire to suppress this expression was felt deeply by much of society at the time.
Today, we are in another time of social tension. We face real issues that will take a long time to work through—massive economic transitions from globalization and technology, fallout from the 2008 financial crisis, and polarized reactions to greater migration. Many of our issues flow from these changes.
In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another cross-roads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.
At the same time, I know that free expression has never been absolute. Some people argue internet platforms should allow all expression protected by the First Amendment, even though the First Amendment explicitly doesn’t apply to companies. I’m proud that our values at Facebook are inspired by the American tradition, which is more supportive of free expression than anywhere else. But even American tradition recognizes that some speech infringes on others’ rights. And still, a strict First Amendment standard might require us to allow terrorist propaganda, bullying young people and more that almost everyone agrees we should stop—and I certainly do—as well as content like pornography that would make people uncomfortable using our platforms.
So once we’re taking this content down, the question is: where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger. The shift over the past several years is that many people would now argue that more speech is dangerous than would have before. This raises the question of exactly what counts as dangerous speech online. It’s worth examining this in detail.
Many arguments about online speech are related to new properties of the internet itself. If you believe the internet is completely different from everything before it, then it doesn’t make sense to focus on historical precedent. But we should be careful of overly broad arguments since they’ve been made about almost every new technology, from the printing press to radio to TV. Instead, let’s consider the specific ways the internet is different and how internet services like ours might address those risks while protecting free expression.
One clear difference is that a lot more people now have a voice—almost half the world. That’s dramatically empowering for all the reasons I’ve mentioned. But inevitably some people will use their voice to organize violence, undermine elections or hurt others, and we have a responsibility to address these risks. When you’re serving billions of people, even if a very small percent cause harm, that can still be a lot of harm.
We build specific systems to address each type of harmful content—from incitement of violence to child exploitation to other harms like intellectual property violations—about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI [artificial intelligence] systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO [initial public offering, an action a private company takes when it seeks to sell stock] earlier this decade.
All of this work is about enforcing our existing policies, not broadening our definition of what is dangerous. If we do this well, we should be able to stop a lot of harm while fighting back against putting additional restrictions on speech.
Another important difference is how quickly ideas can spread online. Most people can now get much more reach than they ever could before. This is at the heart of a lot of the positive uses of the internet. It’s empowering that anyone can start a fundraiser, share an idea, build a business, or create a movement that can grow quickly. But we’ve seen this go the other way too—most notably when Russia’s [military intelligence] tried to interfere in the 2016 elections, but also when misinformation has gone viral. Some people argue that virality itself is dangerous, and we need tighter filters on what content can spread quickly.
For misinformation, we focus on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm, like misleading health advice saying if you’re having a stroke, no need to go to the hospital.
More broadly though, we’ve found a different strategy works best: focusing on the authenticity of the speaker rather than the content itself. Much of the content the Russian accounts shared was distasteful but would have been considered permissible political discourse if it were shared by Americans—the real issue was that it was posted by fake accounts coordinating together and pretending to be someone else. We’ve seen a similar issue with these groups that pump out misinformation like spam just to make money.
The solution is to verify the identities of accounts getting wide distribution and get better at removing fake accounts. We now require you to provide a government ID and prove your location if you want to run political ads or a large page. You can still say controversial things, but you have to stand behind them with your real identity and face accountability. Our AI systems have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year—most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.
Another qualitative difference is the internet lets people form communities that wouldn’t have been possible before. This is good because it helps people find groups where they belong and share interests. But the flip side is this has the potential to lead to polarization. I care a lot about this—after all, our goal is to bring people together.
Much of the research I’ve seen is mixed and suggests the internet could actually decrease aspects of polarization. The most polarized voters in the last presidential election were the people least likely to use the internet. Research from the Reuters Institute also shows people who get their news online actually have a much more diverse media diet than people who don’t, and they’re exposed to a broader range of viewpoints. This is because most people watch only a couple of cable news stations or read only a couple of newspapers, but even if most of your friends online have similar views, you usually have some that are different, and you get exposed to different perspectives through them. Still, we have an important role in designing our systems to show a diversity of ideas and not encourage polarizing content.
One last difference with the internet is it lets people share things that would have been impossible before. Take live-streaming, for example. This allows families to be together for moments like birthdays and even weddings, schoolteachers to read bedtime stories to kids who might not be read to, and people to witness some very important events. But we’ve also seen people broadcast self-harm, suicide, and terrible violence. These are new challenges and our responsibility is to build systems that can respond quickly.
We’re particularly focused on well-being, especially for young people. We built a team of thousands of people and AI systems that can detect risks of self-harm within minutes so we can reach out when people need help most. In the last year, we’ve helped first responders reach people who needed help thousands of times.
For each of these issues, I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible—and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary. That’s what I’m committed to.
But beyond these new properties of the internet, there are also shifting cultural sensitivities and diverging views on what people consider dangerous content.
Take misinformation. No one tells us they want to see misinformation. That’s why we work with independent fact checkers to stop hoaxes that are going viral from spreading. But misinformation is a pretty broad category. A lot of people like satire, which isn’t necessarily true. A lot of people talk about their experiences through stories that may be exaggerated or have inaccuracies, but speak to a deeper truth in their lived experience. We need to be careful about restricting that. Even when there is a common set of facts, different media outlets tell very different stories emphasizing different angles. There’s a lot of nuance here. And while I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.
We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse. Political advertising is more transparent on Facebook than anywhere else—we keep all political and issue ads in an archive so everyone can scrutinize them, and no TV or print does that. We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.
I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads….
As a principle, in a democracy, I believe people should decide what is credible, not tech companies. Of course there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm—and of course we don’t allow voter suppression. Voting is voice. Fighting voter suppression may be as important for the civil rights movement as free expression has been. Just as we’re inspired by the First Amendment, we’re inspired by the 15th Amendment too.
Given the sensitivity around political ads, I’ve considered whether we should stop allowing them altogether. From a business perspective, the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice—especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.
Even if we wanted to ban political ads, it’s not clear where we’d draw the line. There are many more ads about issues than there are directly about elections. Would we ban all ads about healthcare or immigration or women’s empowerment? If we banned candidates’ ads but not these, would that really make sense to give everyone else a voice in political debates except the candidates themselves? There are issues any way you cut this, and when it’s not absolutely clear what to do, I believe we should err on the side of greater expression.
Or take hate speech, which we define as someone directly attacking a person or group based on a characteristic like race, gender or religion. We take down content that could lead to real world violence. In countries at risk of conflict, that includes anything that could lead to imminent violence or genocide. And we know from history that dehumanizing people is the first step towards inciting violence. If you say immigrants are vermin, or all Muslims are terrorists—that makes others feel they can escalate and attack that group without consequences. So we don’t allow that. I take this incredibly seriously, and we work hard to get this off our platform.
American free speech tradition recognizes that some speech can have the effect of restricting others’ right to speak. While American law doesn’t recognize “hate speech” as a category, it does prohibit racial harassment and sexual harassment. We still have a strong culture of free expression even while our laws prohibit discrimination.
But still, people have broad disagreements over what qualifies as hate and shouldn’t be allowed. Some people think our policies don’t prohibit content they think qualifies as hate, while others think what we take down should be a protected form of expression. This area is one of the hardest to get right.
I believe people should be able to use our services to discuss issues they feel strongly about—from religion and immigration to foreign policy and crime. You should even be able to be critical of groups without dehumanizing them. But even this isn’t always straightforward to judge at scale, and it often leads to enforcement mistakes. Is someone re-posting a video of a racist attack because they’re condemning it, or glorifying and encouraging people to copy it? Are they using normal slang, or using an innocent word in a new way to incite violence? Now multiply those linguistic challenges by more than 100 languages around the world….
Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves.
I personally believe this is more dangerous for democracy over the long term than almost any speech. Democracy depends on the idea that we hold each others’ right to express ourselves and be heard above our own desire to always get the outcomes we want. You can’t impose tolerance top-down. It has to come from people opening up, sharing experiences, and developing a shared story for society that we all feel we’re a part of. That’s how we make progress together.

Write
Write a detailed three-paragraph response (about 300-350 words) to the two articles under your chosen topic. Use specific details from these two assigned perspective articles when summarizing their main ideas.
What is each author’s central opinion on the issue?
Identify at least three main supporting details each author uses to support their argument.
Whose position do you agree with more, and why? Be specific in your response.
Make sure all references and quotations are clearly cited, using the MLA parenthetical form of citations.
Include a Works Cited in MLA format. (For a database source, use the Cite or Page tools to get an MLA formatted citation.)
