Leaked internal documents from Facebook show that the social media company has engaged in a high-level lobbying campaign targeting some of Europe’s most powerful politicians. The exposed memos describe a strategy through which company COO Sheryl Sandberg, who is also widely known as a motivational speaker, used her influence and prestige to persuade the political elite in European Union countries to repeal their existing information-privacy laws. Targeted politicians included former British Chancellor of the Exchequer George Osborne and former Irish Prime Minister Enda Kenny.
In an open letter issued on February 11, 2019, organizations from across civil society urged Facebook to take meaningful action to improve the transparency of political advertising on the platform. Led by the Mozilla Foundation, a broad array of democracy and human rights groups, including members of the Design 4 Democracy (D4D) Coalition, co-signed the letter and supported its call for specific, time-bound action ahead of the European Union elections.
On December 7, 2018, the D4D Coalition Advisory Board issued a statement of solidarity condemning the indictment of fellow Advisory Board member and Rappler CEO Maria Ressa. In light of Maria Ressa’s arrest on February 13, 2019, on charges of cyber libel, the Coalition reaffirms the solidarity expressed in that statement, and reiterates its condemnation of efforts by the Philippine government to silence Rappler. The charges stem from a seven-year-old story that predates the enactment of the 2012 Cybercrime Prevention Act. The arrest follows a string of charges leveled against Ressa by the Philippine government, which are part of a broader attempt to silence independent and critical voices in the country. For her work as one of the “Guardians” in the war on truth, Ressa was named one of Time Magazine’s 2018 Persons of the Year and was the 2018 recipient of the Committee to Protect Journalists’ Gwen Ifill Press Freedom Award. Today the D4D Coalition echoes Ressa’s remarks upon accepting the award: “The time to fight for journalism . . . is now.”
Facebook announced the extension of content policies and tools for upcoming elections in Nigeria, the European Union, India and Ukraine. The company is planning ad hoc restrictions on who can run electoral ads before elections in Nigeria and Ukraine, as well as the creation of an online library of electoral ads in India. Later in January, the social media platform blocked tools developed by ProPublica and other media watchdogs, drawing criticism of Facebook from these groups, as well as from lawmakers concerned with internet privacy. For over a year and a half, ProPublica, a non-profit investigative news agency, had used a software tool to compile information on hundreds of thousands of advertisements appearing on Facebook, detailing the identity of the ads’ sponsors as well as whom the ads might be targeting. In response to Facebook’s blocking of the tools, the Mozilla Foundation penned an open letter to Facebook condemning the action, which was signed by several e-media freedom groups. After garnering significant negative press from this open letter, Facebook VP Rob Leathern announced via Twitter that the company would do more to disclose the sourcing of political ads ahead of the critical upcoming elections.
After Facebook blocked access to transparency tools allowing users to see how they are targeted by advertisers, the Mozilla Foundation and co-signatories, including D4D-affiliated groups, released an open letter to Facebook calling for specific, time-bound action to improve political ad transparency on the social media platform ahead of the European parliamentary elections. In response, Facebook committed to opening its Ad Archive API in March, and reaffirmed its intention to roll out additional ad transparency tools globally by June. The D4D Coalition’s post regarding the events noted that the challenges relating to ad transparency are global in nature. Too often, tools and policies to address transparency concerns have been rolled out primarily in countries where tech companies have a large market, or face the largest political risk. The D4D post references the notion that tech companies have an obligation to “do no democratic harm” and that protecting against the abuse of social media platforms in the context of elections should not be driven by market size or political risk to the company. Indeed, new or restored democracies may be the least resilient to disinformation and have the greatest need for protection. The post welcomed Facebook’s commitment to roll out a global response to the issue of political ad transparency by the end of June.
Three D4D Coalition members, IFES, IRI and NDI, held a panel discussion in Washington, DC on January 31 to highlight the interplay between identity, marginalization and disinformation in political life. Representatives discussed new research studying the relationship between hate speech and disinformation, and the potential to explore new pathways of study for these critical issues.
Reuters has reported that an intelligence network called Project Raven worked for, and out of, the United Arab Emirates, spying on the perceived enemies of that state’s government, including American citizens. The cybersecurity company hired by Abu Dhabi to conduct the espionage, CyberPoint, is an American firm that employs many former National Security Agency staff. While in 2016 Project Raven shifted to the control of an Emirati cyber company, DarkMatter, many of the American employees remained with the team. Though interviewees stated that some of the main targets of Project Raven, which began in 2009, were violent extremist groups like the Islamic State, other targets included journalists, human-rights campaigners and other dissidents critical of the UAE government.
Facebook has used monetary incentives to encourage teens and young adults to download a third-party app that allows the company to view all phone and internet activity that users engage in on their device, be it iOS or Android. Promising to pay participants more than $20 per month, the social media giant asked them to download the VPN app “Facebook Research,” which gives the company full access to information on other applications and activities on the participant’s mobile phone, likely in order to gauge the company’s competition. Within 24 hours of the revelation of this story, Apple removed Facebook Research from its iOS App Store and revoked Facebook’s iOS developer license.
For roughly the past 18 months, ProPublica, a non-profit investigative news agency, has compiled information on hundreds of thousands of advertisements appearing on Facebook, detailing the identity of the ads’ sponsors as well as whom the ads might be targeting. In January, the social media platform blocked such tools built by ProPublica and other media watchdogs, drawing criticism of Facebook from these groups, as well as from lawmakers concerned with internet privacy. The Mozilla Foundation then penned an open letter to Facebook condemning this action, which was signed by several e-media freedom groups.
Russia’s powerful media regulation agency, Roskomnadzor, has initiated fines against social media giants Facebook and Twitter. While the state-run agency claims that this action is in response to a violation by both companies of Russian communications laws, it is widely understood to be politically motivated. The Kremlin has a history of intimidating and punishing media and communications companies that do not comply with its laws that are designed to deprive users of online data privacy. Facebook and Twitter have refused to submit to Roskomnadzor demands to disclose the personal data of their Russian users.
In response to a rise in violence fueled by disinformation on WhatsApp, the company decided to limit the number of chats to which a user can forward a message to 20 worldwide and five in India. India was home to the highest volume of forwarded photos, messages and videos, which contributed to more than two dozen murders by violent mobs incited on the app. After initial success with the new cap in India, WhatsApp decided to extend the limit to all of its global users. The company hopes that this change will refocus users on the app’s original purpose: communication with close contacts.
Of India’s roughly 900 million voters, 300 million use Facebook and 200 million use WhatsApp, opening the door for the world’s largest democratic election to also be an important test of the impact of social media on elections. The two most popular parties, the Bharatiya Janata Party (BJP) and the Indian National Congress (INC), have accused each other of spreading fake news while maintaining that they do not do so themselves. Misinformation spread through social media resulted in over 30 deaths in 2018, and officials worry that an increase in fake news, encouraged by an election that is expected to be competitive, will result in further violence.
Facebook has removed almost 300 inauthentic pages covertly spreading the agenda of the Kremlin’s news agency, Rossiya Segodnya, as well as its outlets Sputnik and TOK, a video service. These pages appeared to promote special interests ranging from regional cuisine to politicians; however, they pushed the Kremlin media’s stories and agenda, increasing Sputnik’s reach by 170%.
Facebook has invited researchers to study some of its inner workings and develop proposals that would improve the company’s work on disinformation, hate speech and democracy. This report offers nine recommendations for Facebook’s policies, engagement with disinformation and impact on governance, including: clarifying its community standards on hate speech, hiring content reviewers with knowledge of cultural contexts, increasing transparency around the enforcement of policies in complicated cases, and expanding the context and fact-checking information provided for users. The same group also studied the impact of greater news literacy in societies, as well as the connection between news literacy and other behaviors related to online media consumption.
Amnesty International and Element AI’s crowd-sourced data project, Troll Patrol, monitored tweets sent to 778 women journalists and politicians from the U.S. and U.K. during 2017. The study found that women from both sides of the political spectrum and both professions, journalism and politics, were targets of harassment. The project also found that 7.1% of the tweets monitored in the study registered as “problematic” or “abusive.” This percentage was higher for women of color, especially black women, who were 84% more likely than their white peers to be mentioned in abusive or problematic tweets.
SCL Elections, the parent company of Cambridge Analytica, was fined £15,000 after it failed to comply with a UK Information Commissioner’s Office (ICO) order to release the personal data of an American citizen. The man, David Carroll, filed a request for the company to release all information that had been collected about him; however, SCL Elections released only basic information and did not respond to requests for further data, prompting Mr. Carroll to file a case with the Hendon magistrates’ court. SCL Elections pled guilty to failing to comply with an ICO enforcement notice and breaching the Data Protection Act.
Location data collected by mobile service providers is used for many legitimate purposes, including emergency assistance, financial fraud protection and, under a warrant, official investigations. Access to this data, however, is also sold to other companies and resold repeatedly until it is accessible to actors for illegitimate uses. For a small fee, websites offer phone location services, effectively allowing any person to track another by their phone. Mobile providers have said that they were in the dark about this use of their location data, while members of the U.S. Congress and the Federal Communications Commission have called for better regulation and renewed safeguarding of private information.
As the Chinese government requires that companies censor their own online information, a market for internet censorship is on the rise, employing thousands of Chinese workers. These censorship factories teach their workers about politically sensitive past events and people so that they can recognize and moderate the content viewed by over 800 million users. The market for online content management extends beyond China; U.S. companies including Facebook and YouTube have announced plans to hire thousands of employees to help manage their content.
A report by Privacy International found that 42.55% of apps offered for free through the Google Play store may share data with Facebook, whether or not the user has a Facebook account or is logged in at the time. These apps often send personal information to Facebook automatically when opened by a user, and app developers have only limited ability to control this data flow, raising questions about privacy and the violation of data protection laws.
The Bangladesh Telecommunication Regulatory Commission directed mobile phone service providers to shut down the country’s mobile internet the day before and the day of the parliamentary election. The Commission cited fears of violence, intimidation, propaganda, and rumors surrounding the election that could lead to misinformation and voter suppression. The election, in which Prime Minister Sheikh Hasina’s ruling party retained power via a landslide victory, was marred by allegations of mass arrests and jailings of activists and critics, forced disappearances, and extrajudicial killings.
Deceptive tactics now famous from Russian interference in the 2016 U.S. presidential election were more recently used by tech experts in an experiment during the Alabama Senate race in service of then-candidate Doug Jones. While this secret project was designed to have no impact on the outcome of the race, it has wide-ranging implications for the future of U.S. elections and domestic media manipulation. Experts on both sides of the aisle worry that candidates may resort to such tactics out of fear that their opponents will do the same, forever changing American politics.
Since its early years, Facebook has entered into data partnerships with other sites and platforms to customize the information presented to users, decrease competition and encourage expansion through a wider user base. These partnerships have drawn concern and condemnation from the international community. Some partner companies, many of which said they were unaware of the wide access Facebook had given them, were able to access contact information from users and non-users, read and change private messages, and view personal information, all without official audits of their use of this data or their privacy practices.
A bipartisan group of U.S. Senators led by Catherine Cortez Masto (D-Nev.) and Marco Rubio (R-Fla.) has written a letter to U.S. Secretary of State Mike Pompeo warning that “CCP attempts to erode democratic processes and norms around the world threaten U.S. partnerships and prosperity,” and asking for an investigation, particularly with regard to Taiwan. They suggest that organized social media campaigns targeted the Democratic Progressive Party (DPP), its candidates and President Tsai Ing-wen during local elections in November. Observers in Taiwan and elsewhere have said that the Chinese government supported these opposition campaigns with various forms of computational propaganda, and the Senators’ letter suggests they find the allegations credible and want them investigated.
The US Senate Intelligence Committee released two externally produced reports that provide further data on the 2016 national elections: one partly authored by D4D Advisory Board member Philip Howard, Director of the Oxford Internet Institute, in collaboration with Graphika, a data analytics firm, and a second by New Knowledge, another company studying social media and disinformation. The reports find much broader usage of social media accounts linked across platforms, particularly targeting conservative voters and African Americans. Platforms such as Instagram and YouTube have received less attention in the media, but were also found to have been used by groups such as the Russian Internet Research Agency, and the researchers suggested that social media platforms would need to share more data to understand the full scope of the campaigns. The Senate Intelligence Committee plans to release its own report on these issues in the near future.
D4D network partner International IDEA has entered into a collaboration with the Electoral Tribunal of Panama to provide support to the newly created Digital Media Unit. The unit’s mandate is twofold. On one side, it is in charge of the Tribunal’s online communication, providing key electoral information to the population through Twitter, Facebook, Instagram and WhatsApp. On the other, the Digital Media Unit is spearheading the fight against disinformation, operating a 24/7 social media monitoring war room, supporting the Tribunal in detecting electoral offenses, running campaigns to raise awareness of the dangers of spreading disinformation, and engaging with diverse stakeholders to protect electoral integrity. The Unit has also been in charge of launching the country’s first Digital Ethics Pact, encouraging the population to make responsible use of social media during the electoral campaign. International IDEA’s support will continue until the elections in May 2019 and beyond, aiming to position the Unit as the leader in the fight against disinformation in Panama.
The CEO of Rappler, a Philippine “Social News Network,” and D4D Advisory Board Member Maria Ressa has been charged by the Philippine government with tax evasion and failure to file tax returns; she could face up to ten years in prison. At the same time, she has been named one of Time Magazine’s 2018 Persons of the Year, part of what the magazine calls “the Guardians,” a group of journalists fighting for democratic values around the world. Ressa has been outspoken in her criticism of Philippine President Rodrigo Duterte’s violent “war on drugs” and other policies, while through research and analysis Rappler’s team has illuminated the influence campaigns and computational propaganda tactics his government and followers have pursued online.
D4D coalition member IFES is currently piloting its Holistic Exposure and Adaptation Testing (HEAT) process in Ukraine. The HEAT process is a method for identifying and testing the potential exploitation of vulnerabilities in the use of election data management technology. HEAT tests the technology itself, as well as the legal and operational frameworks in which the technology is being deployed. As part of the pilot, IFES conducted a cybersecurity assessment in summer 2018 and a cybersecurity tabletop simulation with the Ukrainian Central Election Commission in November 2018.
The Design 4 Democracy Coalition Advisory Board stands in solidarity with our fellow member Maria Ressa and with Rappler, the leading independent online news outlet in the Philippines. Ressa and Rappler Holdings were formally indicted on November 29, 2018, on charges of tax evasion, the latest action by the Philippine government in attempting to thwart the work of Rappler’s journalists, and Ressa turned herself in to authorities and posted bail this week.
The European Union announced a plan to counter disinformation ahead of the 2019 European elections. The plan includes increased resources for outside researchers and fact-checkers, strict enforcement of the platform-signed Code of Practice, and the introduction of the Rapid Alert System. In collaboration with the European Parliament and individual member states, the EU will work to have the Rapid Alert System operational by March 2019.
Following protests against the arrest of Afghan militia commander Alipoor, tensions were heightened by disinformation spread through social media. Government security forces posted that no civilians were harmed, while protesters circulated photos of a dead schoolgirl and other casualties. National Directorate of Security Chief Massoum Stanekzai reported that commander Alipoor was arrested by the U.S. military; however, a spokesperson for U.S. forces tweeted that they had no involvement. The government narrative was bolstered by photos of wounded security officers that were later revealed to have been taken years prior.
At the inaugural Grand Committee on Disinformation, an empty chair was left for Facebook CEO Mark Zuckerberg, who turned down invitations to testify before the international committee of lawmakers. In his place, Facebook Vice President Richard Allan faced hard-hitting questions about email communication found within documents seized by the UK parliament and about Facebook’s role in global democracy challenges. At the close of the hearing, members of parliament from around the world signed a declaration of ‘Principles of the Law Governing the Internet.’ Simultaneously, MP Damian Collins, Chair of the UK parliament’s Digital, Culture, Media and Sport Committee, pressured the founder of the social media application developer Six4Three, under threat of imprisonment, to hand over internal Facebook documents and emails. These have now been released by the committee and shed light on Facebook’s policy development, strategy and internal deliberations over data sharing with third-party developers, among other issues.
In an attempt to increase the transparency of political ads ahead of the 2019 EU Parliamentary elections, Google has announced new policies that will require ad buyers to name the organization that provides their funding. The parameters defining a ‘political ad,’ however, are too narrow to capture much of the politically motivated content expected throughout the election period, sparking worry that these new policies will prove ineffective.
An independent BSR report, commissioned by Facebook, on the platform’s impact on the human rights crisis in Myanmar found that Facebook did not take enough action to prevent its site from being used to spread violence. Thousands of people have died in the conflict, with hundreds of thousands more displaced internally and into neighboring Bangladesh. The report warns of potential future human rights abuses around Myanmar’s 2020 elections and calls on the company both to create a new human rights policy and to enforce its current hate speech policies by working with local authorities.
The Harvard University Shorenstein Center for Media Studies published The Fight Against Disinformation in the U.S.: A Landscape Analysis, which explores the key players, tactics, and support for learning and programmatic responses to viral digital culture. Societal changes shape the way we use media, and shifts in how people communicate are in turn reflected in society. The paper discusses the impact of these changes in media use on American culture and the ways in which some people are trying to combat the negative effects of computational propaganda, disinformation and other harmful forms of content.
In new research from Harvard’s Berkman Klein Center, Henry Farrell and Bruce Schneier argue that nations should approach disinformation as they approach other issues of state security. Different vulnerabilities are present, and different responses are required, depending on the type of government structure: autocracy or democracy. While autocracies produce contested knowledge about political actors themselves, democracies produce contested information about who holds power. This makes democracies more susceptible to narratives about general political organization.
A three-part New York Times documentary series explores Russian meddling in the 2016 U.S. election in the context of the wider Russian effort to divide the West. From the inception of Soviet fake news to its use today, the NYT uncovers the continuation of Russian interference in the United States and the reasons why the U.S. government and other nations are so woefully unprepared to counter this disinformation campaign.
Venezuela has adopted an RFID smart-card system, first seen in China, that allows the government to track citizen behavior. This Fatherland Card, made by Chinese telecom giant ZTE Corp, collects information including healthcare data, voting participation, and subsidized food distribution in a central database. In a country where many citizens rely on government programs to feed their families and receive medical care, opponents call the government’s requirement that citizens obtain the Fatherland Card to access services akin to blackmail.
In a new report, social media researcher Robyn Caplan identifies three modes of content moderation used by today’s digital media platforms: Artisanal, Community-Reliant, and Industrial. Artisanal and Industrial strategies are usually adopted by for-profit internet media companies, Artisanal by smaller platforms like Vimeo and Industrial by larger platforms like Google. Meanwhile, Community-Reliant strategies are adopted by platforms like Reddit, where platform consumers are also the main content generators.
The British Army’s 77th Brigade, a group of skilled social media analysts, graphic designers, video producers, and content writers, is hard at work running the nation’s information warfare program. This group is not alone in its focus on the importance of public opinion in conflict; other countries, including the United States and Russia, know the power of deploying information warfare. The 77th Brigade counters false narratives, works to improve public sentiment in conflict zones, and influences public opinion to strengthen the position of the British Army.
Following the Paris Peace Forum from November 11th to 13th, the leaders of Canada, France, Norway, Costa Rica, Tunisia, Senegal, and Lebanon authored an opinion piece in The Star. They acknowledged the growing threat of disinformation to journalism and the citizens of their countries, applauded the presentation of the International Information and Democracy Commission at the Forum, and called for further action within their own nations and around the world.
The Center for International Governance Innovation recently released a report that addresses questions about who is responsible for regulating and supervising the internet, and how society can be protected from the risks of an open internet without stifling its power of innovation. The essays included in the report detail the regulatory and political landscape of current law, impacts on censorship and civil rights, and recommendations for the role of the private sector.
In an ongoing disinformation campaign, Russia has accused the U.S. government of operating a laboratory in Georgia where scientists tested biological weapons and drugs, resulting in multiple fatalities. In response, the U.S. has accused Russia of operating a disinformation campaign to distract the world from the negative attention placed on the Kremlin by the poisonings of Russian dissidents in the United Kingdom.
Over 50 countries signed on to the Paris Call for Trust and Security in Cyberspace, an agreement released by French President Emmanuel Macron as part of the Paris Peace Forum in November. China, Russia, Australia, North Korea, Iran, and the U.S. abstained from signing the agreement, despite representing hubs for ICTs, online infrastructure, and cybersecurity resources, personnel and experience. Tech companies including Facebook, Microsoft, IBM, Google, and HP signed alongside civil society organizations and technical experts. While the agreement does not call for specific legislation, it does advocate for the promotion of human rights on the internet, the allocation of distinct responsibilities to the private sector, and an end to hacking between nations in peacetime.
The BBC World Service released two ‘Beyond Fake News’ reports, one focusing on India and the other on Kenya and Nigeria. While the content of viral news differed between countries, many people shared information from alternative sources because of widespread distrust of mainstream media outlets, an inflated view of their own ability to discern fact from fiction, and the desire to promote national identity over truth.
The RAND Corporation released a report that explores the threat of Russian-language social media activity to former Soviet states. Employing interviews with experts in security and regional politics, as well as analysis of social media data, this report digs into the Kremlin’s use of shared post-Soviet experiences to spread disinformation. The report offers recommendations including better tracking of Russian media, increasing media literacy, and improving reliable content to offer an alternative to the Kremlin agenda.
Amelia Acker’s report investigates the ways in which metadata can be manipulated to inform the disinformation efforts of bad actors, as well as strategies to stop them from misleading the public. Acker unpacks practices used by disinformation proponents to increase their impact on social media by engaging the platform’s own algorithms. Acker hopes to inform the work of technology companies and other interested parties in the fight against disinformation.
This research from the “Personal Data and Political Influence” Project is part of a Brazilian Country Report by Coding Rights. The 2018 Brazilian election took place amid widespread online influence campaigns, often making use of personal data to target voters. This report addresses the use of this personal data in political campaigns as well as the regulatory and ethical questions that result from its increased use.
Supported by NDI and more than a dozen other international partners, the Design 4 Democracy Coalition held its first Advisory Board meeting on October 25th, in conjunction with MisinfoCon London and Mozilla Festival (MozFest). The D4D Coalition seeks to act as a force multiplier for organizations that advocate for more democracy-friendly technology platforms and policies by providing an opportunity for collaboration and mutual support within the democracy community. The Coalition also provides direct lines of communication with major technology platforms and is improving communication between the democracy community and the tech industry.
As election day drew nearer, disinformation efforts to influence voters increased. The New York Times published a “Roundup” of disinformation-related coverage, its impact on the U.S. midterm elections, and its spread internationally. In response to suspicious pro-Saudi government tweets, Twitter suspended suspected bots that tweeted and re-tweeted identical talking points, including “#unfollow_enemies_of_the_nation.” Twitter also released 11 million tweets believed to come from state-backed information operations originating in Russia and Iran. Facebook pages that appeared to represent Women’s Marches were found to originate in Bangladesh and sought to sell march-related merchandise.
As part of a broader series of discussions on tech and democracy, the National Democratic Institute and International Republican Institute joined partners on October 18 to host a reception and discussion in San Francisco about the ways tech is impacting democratic processes and participation around the world. The event featured perspectives from NDI President Derek Mitchell and IRI President Dan Twining, and explored opportunities for civil society, technologists, and others to collaborate through efforts like the D4D Coalition. Participants included Bay Area stakeholders from the tech industry, academia, and the international affairs community, and co-hosts included the Pacific Council, TheBridge, and Bay Area International Link.
In an attempt to increase transparency and enable academic investigation and research, Twitter released data about accounts and content that have been part of global disinformation campaigns since 2016. Included in the data are two accounts that had not been part of earlier releases and are thought by Twitter to be state-backed. In total, information about 3,841 accounts connected to the IRA in Russia and 770 other accounts has been released to the public. However, researchers found additional fake Twitter accounts that appear to be linked to the Russian government but were not identified in Twitter’s release; these accounts promoted politically benign topics such as Taco Bell and Coachella.
Data & Society published a report on “Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech.” The paper explains the relationship between politics, media, and ill-meaning actors, laying out the tactics, technologies, and conditions that anti-democratic and politically motivated actors use to weaponize digital advertising. The report identifies three main strategies employed by actors using the “Digital Influence Machine”: dividing an opponent’s supporters, leveraging behavioral science techniques to influence consumers, and mobilizing sympathizers by threatening their identity, political or otherwise.
In a New York Times op-ed, researchers and fact-checkers in Brazil called on WhatsApp to change its system after finding that it was being widely used to spread disinformation in the run-up to the national election. A poll found that 44 percent of Brazilians use WhatsApp to read political news, and a growing volume of misinformation and disinformation has been shared widely through the app. The writers called on the company to restrict forwards and broadcasts and to limit the size of new groups in Brazil during the election period. WhatsApp, a Facebook subsidiary, later banned over 100,000 accounts associated with sharing false stories, but did not adopt the suggestions before the election on October 28th.
Hundreds of members of the Myanmar military, posing as civilians and often using tactics modeled after those used by Russia, have used Facebook to spread disinformation about the Muslim Rohingya minority. One of the largest forced human migrations in history, involving over 700,000 people, is widely attributed to this type of anti-Rohingya propaganda and the violence it incited. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, reported that the company found “clear and deliberate attempts to covertly spread propaganda that were directly linked to the Myanmar military.”
Google CEO Sundar Pichai defended the company’s plan to build a search engine for use in China, saying the project was going well and would proceed, despite questions around such an initiative’s potential for censorship and surveillance. Mr. Pichai described the controversial decision to build the search engine as in keeping with the company’s mission to provide information to all people. Google employees have voiced concern over the proposal, citing the company’s commitments to the Global Network Initiative’s Principles on Freedom of Expression and Privacy.
Full Fact, a British fact-checking group, released a report entitled “Tackling misinformation in an open society: How to respond to misinformation and disinformation when the cure risks being worse than the disease.” The report argues that it is more realistic to build resilience against disinformation and misinformation in the UK than to eliminate it altogether. In the paper, Full Fact sets out a proportionate, risk-based framework for responding to disinformation and misinformation, and cautions against taking action without thinking through the consequences and without allowing time for further research into the harm caused by such campaigns.
A vulnerability in Google+ profiles exposed user data to 438 applications between 2015 and March 2018, when the problem was discovered as part of an internal audit. The breach resulted from a flaw in an API that Google created to let developers access profile information about individuals who used their apps and had given permission to share their profile data. Internal memos, investigative journalism, and a blog post shed light on Google’s decision not to go public with the information when it originally discovered the problem. Google has since increased security measures, including terminating the Google+ service, in hopes of rectifying the problem. CEO Sundar Pichai has agreed to testify before Congress on the breach in the near future.
The Design 4 Democracy (D4D) Coalition was honored to be a part of the 2018 g0v Summit, from October 5-7, 2018. The Summit provided an opportunity to share information about the Coalition with other groups sharing similar objectives, including the Global Tech Accountability Network, a new initiative arising out of the #DearMark letter, led by organizations in Myanmar. Among the areas of collaboration discussed were the development of an open data standard on social media monitoring, together with a platform implementing the standard, for social media monitoring organizations to adapt to their own specific needs. The use of shared tools and data standards provides an opportunity for social media monitoring groups to share data with like-minded, trusted partners in other countries, providing a more complete picture of hate speech or disinformation in a regional or global context. In addition, the Summit provided an opportunity to connect with D4D partners, g0v and CoFacts, which is working with the Coalition to adapt the CoFacts fact-checking chatbot from LINE to FB Messenger and WhatsApp.
Facebook is launching fact-checking tools in Kenya and plans to extend the service across the African continent. The fact-checking tool will demote news stories marked as fake and warn users who try to share them. The focus on Facebook’s News Feed, however, has drawn criticism as not the most effective use of fact-checking technology: while Facebook products are highly popular in Africa, far more people use WhatsApp to communicate, and the fact-checking service will not extend to WhatsApp.
D4D Coalition member NDI helped organize a conference on “Enhancing Media Literacy and Combating Disinformation” in Praia, Cape Verde, in collaboration with the Cape Verdean government and the Association of Journalists of Cape Verde (AJOC). Representatives from NDI spoke on the growing online threats to democratic values in a new age of disinformation and suggested potential means the media and government could use to improve information integrity in the country.
A new law in California makes covert bots illegal. This decision requires fake profiles or bots to be labeled as artificial, in hopes that the consumers of content on the internet will be better informed regarding the source of the information they view. This legislation is groundbreaking, but some who study bots caution against legislating without a thorough understanding of the different types of bots and how they function.
An upcoming referendum on a name change for the Republic of Macedonia has helped create an online environment awash in disinformation campaigns, some linked to Russia, which is trying to prevent the country from joining NATO and moving towards the West.
The D4D Coalition will present on its work at the g0v Summit in Taiwan from October 5 to 7, 2018, particularly the potential to develop disinformation monitoring efforts in Asia, with the support of partners, tools and techniques bolstered by D4D.
Capitalizing on weaknesses in U.S. technology and social media platforms, businesses in North Korea are able to connect with people in other countries to both provide and solicit services, circumventing U.S. sanctions. By hiding their identities, a group of North Korean web developers have allegedly advertised their services on Western online platforms and built a website for a business in Australia. A web of fake social media accounts and front companies in other countries enable this underground business.
The prevalence of deepfakes, videos manipulated to show fake events, is increasing, prompting Reps. Adam B. Schiff (D-CA) and Stephanie Murphy (D-FL) to call for an analysis of the problem and possible solutions by the intelligence community. In their letter to the Director of National Intelligence, Daniel Coats, they call these videos a national security risk. Pressure from Congress and the intelligence community has encouraged social media companies’ growing focus on stopping disinformation, and the lawmakers hope the same trend will take hold in the fight against deepfakes.
On September 12, 2018, members of the Design 4 Democracy Coalition held an off-the-record workshop in Kyiv, Ukraine on emerging threats in disinformation and cybersecurity. Facilitated by NDI, IRI, IFES, and StopFake, the workshop included a range of organizations from civil society, government, technology companies and the cybersecurity community. Following lightning talks and presentations on research findings, participants took part in focused discussions on disinformation and cybersecurity in the lead up to the 2019 elections, as well as potential areas for improved collaboration with technology platforms to counter identified threats.
The D4D team is actively engaged in Macedonia ahead of its historic national referendum on September 30, 2018. D4D is supporting efforts by local stakeholders to ensure that disinformation does not interfere with the ability of the Macedonian people to express their will.
The Oxford Internet Institute Computational Propaganda Project found that the upcoming Swedish election has experienced a high amount of ‘junk news’ shared by users on social media sites, second only to that seen in the 2016 U.S. presidential election. Some of these fake sites are modeled after reputable news sources, complicating users’ search for truth. Researchers were surprised to learn that eight of the ten most-shared fake news sites were domestic, causing a renewed focus on local actors over international influences.
UNESCO published this handbook for use by journalism educators and publishing journalists. It includes discussions of journalism’s responsibilities regarding disinformation, misinformation, and mal-information. Written as a curriculum, this handbook covers everything from the current state of ‘fake news’, to the evolution of journalism, to recommendations moving forward.
The Chinese government has created a national-level platform, managed by the ‘Internet Illegal Information Reporting Center’ and operated by Xinhua, to publicly differentiate fact from fake news for its citizens. To create the platform, over 40 “rumor-refuting platforms” were combined into the official platform. The purpose behind the creation of this new official fact-checker, state-run xinhuanet.com explains, was to stop rumors and illegal information from disturbing the social order.
A Reuters special report documents more than 1,000 posts, comments, and crude images on Facebook calling for violence and discrimination against Myanmar’s Muslims. Despite official rules banning such hate speech, Facebook appeared largely unprepared to crack down on this wave of anti-Muslim posts. For a long time, the company lacked enough Burmese-speaking employees, as well as programs that could effectively detect hate speech in the language and systems to translate the text, among other limitations preventing an effective response. Facebook has also recently blocked the accounts of 18 users and 52 pages linked to Myanmar’s military, further highlighting the country’s worsening relationship with the world’s largest social media network.
A recent piece from Vice Motherboard discusses the catch-22 that Facebook confronts as it tries to create a standard moderating strategy for a world where local context is critically important to such moderation. Facing mounting criticism from many quarters, the company has recently increased its focus on, and the number of employees moderating, false news and other posts that violate its Community Standards. Is it possible for Facebook to maintain a platform at global scale that is both safe and open?
The messaging platform Line has seen a large increase in the quantity of content posted by users that spread incorrect information, especially about healthcare. These posts, which often target elderly users, advertise fake products or share incorrect information, like the power of kale juice to cure bone pain. Line is working with fact-checking organizations and the government to attempt to combat the problem.
A hacking group affiliated with the Russian government and linked to interference in the 2016 election, APT28, is believed to be behind the creation of six websites that mimic government and public policy groups. The U.S. Senate, International Republican Institute, and the Hudson Institute were among those targeted by the sites, which appear to seek credentials from members of these organizations. Microsoft, which discovered fake websites with domains like my-iri.org and senate.group, has announced plans to fight cybersecurity threats targeting political organizations. The tech company will offer free cybersecurity protections to likely targets of groups like APT28, including campaign offices and candidates, provided that they use Office 365 software.
Facebook released statements on its website about two unconnected attempts by foreign governments to influence users or steal their personal information. The attacks, which originated in Russia and Iran, appear unconnected despite using similar tactics. Facebook removed 652 accounts traced to Iranian state media and others identified as linked to Russian military intelligence.
In response to incorrectly reported “false news” content, Facebook has begun to rate its users’ credibility based on thousands of behavioral points. Users who report content that is, in fact, false will have their future flags reviewed with higher priority than those who report many articles which are not false.
Politicians in the US and Europe are devising new policies to limit microtargeting, a technique of targeting specific subgroups for advertisements that many believe is feeding polarization and voter manipulation. As researchers have identified, microtargeting has become a key weapon for foreign election meddlers, and many argue Facebook’s efforts to deter this exploitation of microtargeting have had little effect.
NDI is planning a second disinformation event in collaboration with the Mexico National Electoral Institute (Instituto Nacional Electoral, INE) the Center for Research and Teaching in Economics (Centro de Investigacion y Docencias Economicas, CIDE) and the National Autonomous University of Mexico (Universidad Nacional Autonoma de Mexico, UNAM). This will follow up on the forum held in March 2018 before the national elections in July, and will review the 2018 electoral process, highlight local efforts that tackled disinformation, and discuss lessons learned for future elections. Similar events are being held with NDI and FGV-DAPP participation in Brazil and Colombia throughout July and August.
A Portland Communications survey of influencers shaping Twitter conversations on recent African elections shows that Africa is not immune to fake news, the rise of bots, or external influence on elections. The survey finds that 53% of key influencers came from outside the country holding the election, with many influencers coming from outside the continent. The report also reveals that bots had a major presence in election discourse while politicians had comparatively minor influence on discussions. Details on specific countries are provided.
A report by the Institute for the Future reveals a widespread phenomenon of “state-sponsored trolling”: government use of targeted online hate and harassment campaigns to intimidate and silence individuals critical of the state. New surveillance and hacking technologies have allowed governments to anonymously track, threaten, and publicly delegitimize opponents on a greater scale than ever before. The report concludes with recommendations for technology companies, lawyers, and lawmakers.
A delegation including D4D partners visited Skopje, Macedonia in late July to assess the potential for disinformation to impact the forthcoming referendum on a proposed change to the country’s name and to better understand local needs for coordination and support. D4D partners continue to monitor the situation closely.
This comprehensive essay details the myriad of political, social and business dynamics that transformed social media from the pro-democratic tool of the Arab Spring to the anti-democratic weapon of authoritarians and election-meddlers today. Writer Zeynep Tufekci points to the lack of regulations on tech firms, the unwillingness of the US to bolster its online defenses, and the appropriation of social media tools by authoritarians as key factors in this transition.
Oxford Internet Institute’s Project on Computational Propaganda has a new report analyzing the new and growing trends of organized media manipulation, as well as the growing capacities, strategies, and resources that support the trends. In a fast-growing number of countries, political parties and government agencies are using social media to manipulate domestic public opinion, often in response to threatening foreign interference and junk news. Since 2010, political parties and governments have spent over half a billion dollars developing and implementing these operations.
This guide develops a learning module for journalists and educators meant to situate contemporary information disorder in a long history of misinformation, disinformation and propaganda. It establishes a broad historical overview of past forms of information disorder propagated by states, public figures, and the media. Established by the International Center for Journalists, the guide hopes to equip contemporary journalists and educators with a sharpened, contextual knowledge of disinformation-related issues.
The Office of Senate Intelligence Committee Vice Chairman Mark Warner prepared a policy paper detailing 20 options lawmakers can consider to combat disinformation, protect user privacy, and promote competition in the tech space. These options include media literacy programs, new rules for social media platforms, and more user control over a company’s use of personal data.
The UK Department for Digital, Culture, Media and Sport, named Britain’s chief investigating authority on disinformation, released the first of its thorough reports on the subject. What began as an inquiry into a few major scandals turned into a comprehensive 89-page document on Russian interference, tech company responsibility for disinformation, Cambridge Analytica, and data targeting, putting the UK parliament at the center of global discussions. The report also includes a list of demands for regulation, legislation, codes of ethics and police investigations.
In its efforts to stop the spread of misinformation, Facebook deactivated a large network of pages and accounts thought to be led by Brazilian right-wing activists from the Movimento Brasil Livre (MBL), or “Free Brazil Movement.” According to a number of sources, MBL organizers posed as different independent news outlets to develop coordinated messaging campaigns in support of their policies. Facebook removed the accounts, alleging that they violated the company’s authenticity policies.
The European Union issued Google a record-breaking antitrust fine of €4.34 billion ($5.06 billion) over Google’s deals with mobile phone makers and telecommunications operators. According to EU regulators, Google’s contracts with phone makers effectively force those companies to prioritize Google apps and services in exchange for Google providing its Android operating system for free. The fine represents a major step towards stronger government oversight of technology companies, at least in the EU.
The Getulio Vargas Foundation Office of Public Policy (Fundação Getúlio Vargas, Diretoria de Análise de Políticas Públicas, or FGV-DAPP) launched its Sala de Democracia Digital (Digital Democracy Room) at an event on July 25 in Rio de Janeiro, Brazil, with support from NDI, including the participation of NDI’s Colombia Country Director Francisco Herrero. The online project seeks to analyze political discourse on the web during the 2018 Brazilian elections and will include weekly reports on the use of bots and fake news in the months prior to the October poll, as well as policy papers and recommendations.
As the EU and China push forward with tighter internet regulations, the US is losing its place as a key agenda-setter on internet freedom and cybersecurity policy. The US has lately been taking a far more passive role on countering authoritarian internet policies in China and other developing countries, and has neglected to confront the EU over its strict user privacy regulations that could threaten global cybersecurity efforts.
The parliament of Uganda temporarily maintained a controversial tax on social media use despite widespread protests against it. President Museveni, who first introduced the law, has cited the spread of “gossip” as a key reason for its existence. Seen as an attempt at state censorship, the tax has led to a significant drop in social media use in addition to creating new economic burdens for the Ugandan people.
A recent study by the research firm Ghost Data reveals that Instagram may have as many as 95 million bots, a slight increase from 2015. Bot presence on Instagram continues to grow despite Facebook’s efforts to curb the spread. The rise in Instagram bots is especially concerning because images and videos, which are uniquely difficult to track and identify, could play a larger role in coming elections.
A joint investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and partners reveals that Macedonia’s fake American news industry in Veles was launched by well-known Macedonian attorney Trajche Arsov, not by apolitical teens as previously reported. During the 2016 election, Arsov worked closely with several high-profile American partners to churn out over a hundred fake news websites on social media. Macedonian security agencies are now cooperating with law enforcement in the US and several other European countries to investigate possible ties between Arsov and the recently indicted Russian hackers.
In light of episodes of ethnic violence in Myanmar, India and Sri Lanka, Facebook has announced tighter restrictions on disinformation and misinformation spread through its site and Instagram. The company will start effectively removing false information that might lead to physical harm. However, the policy does not apply to WhatsApp, which has been a major catalyst for recent violent incidents.
A bill that would require certain automated social media accounts to identify themselves as bots is currently moving through the California state legislature. Many critics charge that the proposed law lacks specifics, that it is not constitutional, or that it will not effectively solve the problem of bot influence on voters. The bill, among the first of its kind, demonstrates the challenges that face lawmakers seeking legislative solutions to the surge of automated accounts.
On July 16, NDI and Coalition partners convened a forum at the margins of the OGP Summit in Tbilisi, Georgia, titled “Scaling the Future of Civic Tech.” The day-long event featured discussions on ways the civic tech movement is strengthening democracy in particular national and subnational contexts and highlighted the D4D Coalition as a means of sustaining collaboration on shared priorities.
New findings reveal that Russian propagators of disinformation often posed as local news sources to exploit the American public’s higher levels of trust for local news organizations. Many fake local news Twitter accounts did not actually post false information, opting instead to establish long-term credibility for when they needed to operationalize. These cases further confirm that the Russian-led disinformation campaign has been years in the making.
Twitter has escalated its defense against fake accounts and bots in the past few months, suspending more than one million accounts a day. This is especially significant given the company’s usual prioritization of freedom of speech over policing users’ behavior. In an effort to promote trust amongst active users, Twitter has already removed large amounts of inactive or suspicious follower accounts thought to be promoting disinformation and spam.
YouTube has announced a series of developments meant to promote more reliable, “authoritative” news sources on its site. The proposed changes mark a departure from the platform’s current video recommendation algorithms, which have promoted factually incorrect conspiracy theory videos to users who had a history of watching similar ones.
Frightened mobs in India have killed two dozen innocent people as false rumors about child kidnapping spread through the widely used messaging platform WhatsApp. The app has facilitated the proliferation of false information in countries across the world, leading to instances of physical harm in Brazil and Sri Lanka as well. Not only does WhatsApp host the majority of viral disinformation campaigns in the most countries, but its encryption and private messaging systems also make it uniquely difficult to slow these campaigns. In response to these trends, WhatsApp has placed restrictions on the number of contacts that users can forward messages to. WhatsApp also announced plans to support third-party developers of fact-checking technology for the app.
Philip Howard, Director of the Oxford Internet Institute (OII) and D4D advisory board member, testified before the Senate Intelligence Committee on the work of OII’s computational propaganda project, state sponsored disinformation worldwide, and the potential for foreign influence operations targeting U.S. elections.
First Draft, a project of Harvard University’s John F. Kennedy School of Government, published a definitional toolbox of terms related to technology and misinformation. The toolbox aims to create a shared vocabulary amongst policymakers, citizens, and academics. Part One includes a glossary defining “commonly used” and “frequently misunderstood” terms related to information disorder. Part Two attempts to map the thirteen sub-categories of the information disorder field in order to facilitate more strategic, targeted research and action. Part Three includes downloadable high-resolution graphics created to help explain information disorder.
An investigation by the Digital Forensic Research Lab reveals a deep network of exchanges between various users and several Brazil-based groups that sell pages, likes, and shares on Facebook for money. Like-and-share-for-cash transactions at large scale have the potential to threaten the integrity of Brazil’s upcoming elections, a concern also raised in Mexico’s recent elections, which were flooded with messages from inauthentic accounts.
Research conducted by the German Tactical Technology Collective and their partners reveals that WhatsApp is now the primary platform for political messaging across the Global South, especially in rural areas with limited internet access. The report analyzes this phenomenon, seeking to answer why WhatsApp is such a powerful tool, how politicians and campaigners use the platform, what strategies organizers use to exploit the platform and what the potential implications are of this trend. The report also includes several case studies of key countries impacted by WhatsApp.
Vietnam recently approved a controversial new cybersecurity law regulating technology firms’ use of personal data. The legislation requires that social media firms turn over subscriber information, IP addresses, and account information to the Ministry of Public Security and remove content from their platforms when requested by the government. The legislation also creates grounds for charging citizens who post “anti-government propaganda” or any material that “incites violence and disturbs public security.”
From June 22-23, 2018, the Atlantic Council’s Digital Forensics Lab hosted the 360/OS open source summit in Berlin, bringing together journalists, activists, innovators, and leaders from around the world as part of its digital solidarity movement for objective facts and reality, a cornerstone of democracy.
Representatives from the Design for Democracy Coalition attended the Copenhagen Democracy Summit, organized and hosted by the Alliance of Democracies, with the sponsorship of a wide array of organizations including Microsoft, Facebook, the University of Denver, NDI and IRI.
Several D4D Coalition partners convened in Brussels to participate in the forum “Representation in the Age of Populism: Ideas for Global Action,” organized by the International Institute for Democracy and Electoral Assistance (International IDEA).
On June 8-9, the Verkhovna Rada, Ukraine’s unicameral parliamentary body, hosted a conference to discuss current threats to democracy, as a follow-up to the Chairman of the Verkhovna Rada’s official visit to Moldova in March of this year.
Supporting partners of the Design 4 Democracy Coalition, the National Democratic Institute (NDI) and the International Republican Institute (IRI), along with the Defending Digital Democracy project (D3P) at Harvard Kennedy School’s Belfer Center, convened at Google’s Belgium office for the public launch of “The Cybersecurity Campaign Playbook: European Edition” on May 22, 2018. The event featured a series of discussions including D3P Senior Fellows Robby Mook and Matt Rhoades, representatives from Microsoft and Google, European parliamentarians and policymakers, and officials from the Belfer Center, IRI and NDI.
WeChat, a social media platform popular with Chinese immigrants in the United States, presents new challenges and a new perspective in the fight against misinformation. The platform heavily features local news and sensational stories while passing over other popular topics, including the economy and healthcare. Misinformation on WeChat shows many of the same characteristics as misinformation in mainstream media, but the platform also displays ways in which immigrant populations diverge from these norms because of its blend of U.S. and Chinese media practices. On the platform, conservative voices feature loudly, with both liberal and conservative users discussing the role of Chinese immigrants in politics and in the United States.