The Design 4 Democracy Coalition Advisory Board stands in solidarity with our fellow member Maria Ressa and with Rappler, the leading independent online news outlet in the Philippines. Ressa and Rappler Holdings were formally indicted on November 29, 2018, on charges of tax evasion—the latest action by the Philippine government in attempting to thwart the work of Rappler’s journalists—and Ressa turned herself in to authorities and posted bail this week.
Supported by NDI and more than a dozen other international partners, the Design 4 Democracy Coalition held its first Advisory Board meeting on October 25th, in conjunction with MisinfoCon London and the Mozilla Festival (MozFest). The D4D Coalition seeks to act as a force multiplier for organizations that advocate for more democracy-friendly technology platforms and policies by providing an opportunity for collaboration and mutual support within the democracy community. The Coalition also provides direct lines of communication with major technology platforms and is improving communication between the democracy community and the tech industry.
As election day draws nearer, disinformation efforts to influence voters are increasing. The New York Times published a “Roundup” of disinformation-related coverage, its impact on the U.S. midterm elections, and its spread internationally. In response to suspicious pro-Saudi Arabian government tweets, Twitter suspended suspected bots that tweeted and re-tweeted identical talking points including “#unfollow_enemies_of_the_nation.” Twitter also released 11 million tweets believed to be from state-backed information operations originating in Russia and Iran. Facebook pages that appeared to be for Women’s Marches were found to originate in Bangladesh and sought to sell march-related merchandise.
As part of a broader series of discussions on tech and democracy, the National Democratic Institute and International Republican Institute joined partners on October 18 to host a reception and discussion in San Francisco about the ways tech is impacting democratic processes and participation around the world. The event featured perspectives from NDI President Derek Mitchell and IRI President Dan Twining, and explored opportunities for civil society, technologists, and others to collaborate through efforts like the D4D Coalition. Participants included Bay Area stakeholders from the tech industry, academia, and the international affairs community, and co-hosts included the Pacific Council, TheBridge, and Bay Area International Link.
In an attempt to increase transparency and enable academic investigation and research, Twitter released data about accounts and content that have been part of global disinformation campaigns since 2016. Included in the data are two accounts that had not been part of earlier releases, and are thought by Twitter to be state-backed. In total, information about 3,841 accounts connected to the IRA in Russia and 770 other accounts have been released to the public. However, researchers found additional fake Twitter accounts that appear to be linked to the Russian government but were not identified in Twitter’s release; these accounts promoted politically benign topics such as Taco Bell and Coachella.
Data &amp; Society published a report on “Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech”. The paper further explains the relationship between politics, media, and ill-meaning actors. It lays out the tactics, technologies, and conditions that anti-democratic and politically-motivated actors use to weaponize digital advertising. The report finds that actors who use the “Digital Influence Machine” employ three main strategies: dividing an opponent’s supporters, leveraging behavioral science techniques to influence consumers, and mobilizing those who share their views by threatening their identity, political or otherwise.
In a New York Times op-ed, researchers and fact checkers in Brazil called on WhatsApp to change its system after finding that it was being widely used to spread disinformation in the run-up to the national election. A poll found that 44 percent of Brazilians use WhatsApp to read political news, and a growing amount of misinformation and disinformation has been shared widely through the app. The writers called on the company to restrict forwards and broadcasts and to limit the size of new groups in Brazil during the election period. WhatsApp, a Facebook subsidiary, later banned over 100,000 accounts associated with sharing false stories, but did not adopt the suggestions before the election on October 28th.
Hundreds of members of the Myanmar military, posing as civilians and often using tactics modeled after those used by Russia, have used Facebook to spread disinformation about the Muslim Rohingya minority. One of the largest forced migrations in human history, involving over 700,000 people, is widely attributed to this type of anti-Rohingya propaganda and the violence it incited. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, reported that the company found “clear and deliberate attempts to covertly spread propaganda that were directly linked to the Myanmar military.”
Google CEO Sundar Pichai defended the company’s plan to build a search engine for use in China, saying the effort was going well and would proceed, despite questions around such an initiative’s potential for censorship and surveillance. Mr. Pichai described the controversial decision to build the search engine as in keeping with Google’s mission to provide information to all people. Google employees have voiced concern at the proposal, citing the company’s commitments to the Global Network Initiative’s Principles on Freedom of Expression and Privacy.
Full Fact, a British fact-checking group, released a report entitled “Tackling misinformation in an open society. How to respond to misinformation and disinformation when the cure risks being worse than the disease.” The report explains that it is more realistic to build resilience against disinformation and misinformation in the UK than to eliminate it altogether. In the paper, Full Fact sets out a proportionate, risk-based framework for responding to disinformation and misinformation, and cautions against taking action without thinking through the consequences and allowing time for further research into the harm caused by such campaigns.
A vulnerability in Google+ profiles opened user data to 438 applications between 2015 and March 2018, when the problem was discovered as part of an internal audit. This data breach resulted from a flaw in an API that Google created to allow developers to access profile information about individuals who used their apps and had given permission to share their profile data. Internal memos, investigative journalism, and a blog post shed light on Google’s decision not to go public with the information when it originally discovered the problem. Google responded with increased security measures, including the termination of the Google+ service. CEO Sundar Pichai has agreed to testify before Congress about the breach in the near future.
The Design 4 Democracy (D4D) Coalition was honored to be a part of the 2018 g0v Summit, from October 5-7, 2018. The Summit provided an opportunity to share information about the Coalition with other groups sharing similar objectives, including the Global Tech Accountability Network, a new initiative arising out of the #DearMark letter, led by organizations in Myanmar. Among the areas of collaboration discussed were the development of an open data standard on social media monitoring, together with a platform implementing the standard, for social media monitoring organizations to adapt to their own specific needs. The use of shared tools and data standards provides an opportunity for social media monitoring groups to share data with like-minded, trusted partners in other countries, providing a more complete picture of hate speech or disinformation in a regional or global context. In addition, the Summit provided an opportunity to connect with D4D partners, g0v and CoFacts, which is working with the Coalition to adapt the CoFacts fact-checking chatbot from LINE to FB Messenger and WhatsApp.
Facebook is launching fact-checking tools in Kenya and plans to extend the service across the African continent. The fact-checking tool will demote news stories marked as fake and warn users who try to share them. The focus on Facebook’s News Feed, however, has drawn criticism as not the most effective use of fact-checking technology: although Facebook products are highly popular in Africa, far more people use WhatsApp to communicate, and the fact-checking service will not extend to WhatsApp.
D4D Coalition member NDI helped organize a conference on “Enhancing Media Literacy and Combating Disinformation” in Praia, Cape Verde in collaboration with the Cape Verdean government and The Association of Journalists of Cape Verde (AJOC). Representatives from NDI spoke on the growing online threats to democratic values in a new age of disinformation and suggested potential means the media and government could use to improve information integrity in the country.
A new law in California makes covert bots illegal. This decision requires fake profiles or bots to be labeled as artificial, in hopes that the consumers of content on the internet will be better informed regarding the source of the information they view. This legislation is groundbreaking, but some who study bots caution against legislating without a thorough understanding of the different types of bots and how they function.
An upcoming referendum on a name change for the Republic of Macedonia has helped create an online environment awash in disinformation campaigns, some linked to Russia, which is trying to prevent the country from joining NATO and moving towards the West.
The D4D Coalition will present on its work at the g0v Summit in Taiwan from October 5 to 7, 2018, particularly the potential to develop disinformation monitoring efforts in Asia, with the support of partners, tools and techniques bolstered by D4D.
Capitalizing on weaknesses in U.S. technology and social media platforms, businesses in North Korea are able to connect with people in other countries to both provide and solicit services, circumventing U.S. sanctions. By hiding their identities, a group of North Korean web developers have allegedly advertised their services on Western online platforms and built a website for a business in Australia. A web of fake social media accounts and front companies in other countries enable this underground business.
The prevalence of deepfakes, videos that are manipulated to show fake events, is increasing, prompting Reps. Adam B. Schiff (D-CA) and Stephanie Murphy (D-FL) to call for an analysis of the problem and solutions by the intelligence community. In their letter to the Director of National Intelligence, Daniel Coats, they call these videos a national security risk. Social media companies’ growing focus on stopping disinformation has been encouraged by pressure from Congress and the intelligence community, and the lawmakers hope that the same trend will occur in the fight against deepfakes.
On September 12, 2018, members of the Design 4 Democracy Coalition held an off-the-record workshop in Kyiv, Ukraine on emerging threats in disinformation and cybersecurity. Facilitated by NDI, IRI, IFES, and StopFake, the workshop included a range of organizations from civil society, government, technology companies and the cybersecurity community. Following lightning talks and presentations on research findings, participants took part in focused discussions on disinformation and cybersecurity in the lead up to the 2019 elections, as well as potential areas for improved collaboration with technology platforms to counter identified threats.
The D4D team is actively engaged in Macedonia ahead of its historic national referendum on September 30, 2018. D4D is supporting efforts by local stakeholders to ensure that disinformation does not interfere with the ability of the Macedonian people to express their will.
The Oxford Internet Institute Computational Propaganda Project found that the upcoming Swedish election has experienced a high amount of ‘junk news’ shared by users on social media sites, second only to that seen in the 2016 U.S. presidential election. Some of these fake sites are modeled after reputable news sources, complicating users’ search for truth. Researchers were surprised to learn that eight of the ten most-shared fake news sites were domestic, causing a renewed focus on local actors over international influences.
UNESCO published this handbook for use by journalism educators and publishing journalists. It includes discussions of journalism’s responsibilities regarding disinformation, misinformation, and mal-information. Written as a curriculum, this handbook covers everything from the current state of ‘fake news’, to the evolution of journalism, to recommendations moving forward.
The Chinese government has created a national-level platform, run by the ‘Internet Illegal Information Reporting Center’ and operated by Xinhua, to publicly differentiate fact from fake news for its citizens. To create the platform, over 40 “rumor-refuting platforms” were combined into the official platform. The purpose behind the creation of this new official fact-checker, state-run xinhuanet.com explains, was to stop rumors and illegal information from disturbing the social order.
A Reuters special report documents more than 1,000 posts, comments, and crude images on Facebook calling for violence and discrimination against Myanmar’s Muslims. Despite official rules banning such hate speech, Facebook appeared largely unprepared to crack down on this wave of anti-Muslim posts. For a long time, the company lacked enough Burmese-speaking employees, as well as programs that can effectively detect hate speech in the language and systems to translate the text, among other limitations preventing an effective response. It has also recently blocked 18 accounts and 52 pages linked to Myanmar’s military, further highlighting the country’s worsening relationship with the world’s largest social media network.
A recent piece from Vice Motherboard discusses the catch-22 that Facebook confronts as it tries to create a standard moderation strategy for a world where local context is critically important to such moderation. Confronting mounting criticism from many quarters, the company has recently increased its focus and the number of employees moderating false news and other posts that violate its Community Standards. Is it possible for Facebook to maintain a platform at global scale that is both safe and open?
The messaging platform Line has seen a large increase in user-posted content that spreads incorrect information, especially about healthcare. These posts, which often target elderly users, advertise fake products or share incorrect information, like the power of kale juice to cure bone pain. Line is working with fact-checking organizations and the government to attempt to combat the problem.
A hacking group affiliated with the Russian government and linked to interference in the 2016 election, APT28, is believed to be behind the creation of six websites that mimic government and public policy groups. The U.S. Senate, International Republican Institute, and the Hudson Institute were among those targeted by the sites, which appear to seek credentials from members of these organizations. Microsoft, which discovered fake websites with domains like my-iri.org and senate.group, has announced plans to fight cybersecurity threats targeting political organizations. The tech company will offer free cybersecurity protections to likely targets of groups like APT28, including campaign offices and candidates, provided that they use Office 365 software.
Facebook released statements on its website about two unconnected attempts by foreign governments to influence users or to steal their personal information. The campaigns, which originated in Russia and Iran, appear unconnected despite their use of similar tactics. Facebook removed 652 accounts traced to Iranian state media, along with others identified as linked to Russian military intelligence.
In response to incorrectly reported “false news” content, Facebook has begun to rate its users’ credibility based on thousands of behavioral points. Users who report content that is, in fact, false will have their future flags reviewed with higher priority than those who report many articles which are not false.
Politicians in the US and Europe are devising new policies to limit microtargeting, a technique of targeting specific subgroups for advertisements which many believe is feeding polarization and voter manipulation. As researchers have identified, microtargeting has become a key weapon for foreign election meddlers, and many argue Facebook’s efforts to deter this exploitation of microtargeting has had little effect.
NDI is planning a second disinformation event in collaboration with the Mexico National Electoral Institute (Instituto Nacional Electoral, INE) the Center for Research and Teaching in Economics (Centro de Investigacion y Docencias Economicas, CIDE) and the National Autonomous University of Mexico (Universidad Nacional Autonoma de Mexico, UNAM). This will follow up on the forum held in March 2018 before the national elections in July, and will review the 2018 electoral process, highlight local efforts that tackled disinformation, and discuss lessons learned for future elections. Similar events are being held with NDI and FGV-DAPP participation in Brazil and Colombia throughout July and August.
A report by the Institute for the Future reveals a widespread phenomenon of “state-sponsored trolling”: government use of targeted online hate and harassment campaigns to intimidate and silence individuals critical of the state. New surveillance and hacking technologies have allowed governments to anonymously track, threaten, and publicly delegitimize opponents on a greater scale than ever before. The report concludes with recommendations for technology companies, lawyers, and lawmakers.
A delegation including D4D partners visited Skopje, Macedonia in late July to assess the potential for disinformation to impact the forthcoming referendum on a proposed change to the country’s name and to better understand local needs for coordination and support. D4D partners continue to monitor the situation closely.
This comprehensive essay details the myriad of political, social and business dynamics that transformed social media from the pro-democratic tool of the Arab Spring to the anti-democratic weapon of authoritarians and election-meddlers today. Writer Zeynep Tufekci points to the lack of regulations on tech firms, the unwillingness of the US to bolster its online defenses, and the appropriation of social media tools by authoritarians as key factors in this transition.
A Portland Communications survey of influencers shaping Twitter conversations on recent African elections shows that Africa is not immune to fake news, the rise of bots, or external influence on elections. The survey finds that 53% of key influencers came from outside the country holding the election, with many influencers coming from outside the continent. The report also reveals that bots had a major presence in election discourse while politicians had comparatively minor influence on discussions. Details on specific countries are provided.
Oxford Internet Institute’s Project on Computational Propaganda has a new report analyzing the new and growing trends of organized media manipulation, as well as the growing capacities, strategies, and resources that support the trends. In a fast-growing number of countries, political parties and government agencies are using social media to manipulate domestic public opinion, often in response to threatening foreign interference and junk news. Since 2010, political parties and governments have spent over half a billion dollars developing and implementing these operations.
This guide develops a learning module for journalists and educators meant to situate contemporary information disorder in a long history of misinformation, disinformation and propaganda. It establishes a broad historical overview of past forms of information disorder propagated by states, public figures, and the media. Established by the International Center for Journalists, the guide hopes to equip contemporary journalists and educators with a sharpened, contextual knowledge of disinformation-related issues.
The Office of Senate Intelligence Committee Vice Chairman Mark Warner prepared a policy paper detailing 20 options lawmakers can consider to combat disinformation, protect user privacy, and promote competition in the tech space. These options include media literacy programs, new rules for social media platforms, and more user control over a company’s use of personal data.
The UK Department for Digital, Culture, Media and Sport, named Britain’s chief investigating authority on disinformation, released the first of its thorough reports on the subject. What began as an inquiry into a few major scandals turned into a comprehensive 89-page document on Russian interference, tech company responsibility for disinformation, Cambridge Analytica, and data targeting, putting the UK parliament at the center of global discussions. The report also includes a list of demands for regulation, legislation, codes of ethics and police investigations.
In its efforts to stop the spread of misinformation, Facebook deactivated a large network of pages and accounts thought to be led by Brazilian right-wing activists from the Movimento Brasil Livre (MBL), or “Free Brazil Movement.” According to a number of sources, MBL organizers posed as different independent news outlets to develop coordinated messaging campaigns in support of their policies. Facebook removed the accounts, alleging that they violated the company’s authenticity policies.
The European Union issued Google a record-breaking antitrust fine of €4.34 billion ($5.06 billion) over Google’s deals with mobile phone makers and telecommunications operators. According to EU regulators, Google’s contracts with phone makers effectively force those companies to prioritize Google apps and services in exchange for Google providing its Android operating system for free. The fine represents a major step towards stronger government oversight of technology companies, at least in the EU.
The Getulio Vargas Foundation Office of Public Policy (Fundação Getúlio Vargas, Diretoria de Análise de Politicas Publicas or FGV-DAPP) launched their Sala de Democracia Digital (Digital Democracy Room) at an event on July 25 in Rio de Janeiro, Brazil with support from NDI, including the participation of NDI’s Colombia Country Director Francisco Herrero. The online project seeks to analyze political discourse on the web during the 2018 Brazilian elections and will include weekly reports on the use of bots and fake news during the months prior to the October poll as well as policy papers and recommendations.
As the EU and China push forward with tighter internet regulations, the US is losing its place as a key agenda-setter on internet freedom and cybersecurity policy. The US has lately been taking a far more passive role on countering authoritarian internet policies in China and other developing countries, and has neglected to confront the EU over its strict user privacy regulations that could threaten global cybersecurity efforts.
The parliament of Uganda temporarily maintained a controversial tax on social media use despite widespread protests against it. President Museveni, who first introduced the law, has cited the spread of “gossip” as a key reason for its existence. Seen as an attempt at state censorship, the tax has led to a significant drop in social media use in addition to creating new economic burdens for the Ugandan people.
A recent study by the research firm Ghost Data reveals that Instagram may have as many as 95 million bots, an increase from 2015 levels. Bot presence on Instagram continues to grow despite efforts by Facebook to curb the spread. The rise in Instagram bots is especially concerning since images and videos, uniquely difficult to track and identify, could play a larger role in coming elections.
A joint investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and partners reveals that Macedonia’s fake American news industry in Veles was launched by well-known Macedonian attorney Trajche Arsov, not by apolitical teens as previously reported. During the 2016 election, Arsov worked closely with several high-profile American partners to churn out over a hundred fake news websites on social media. Macedonian security agencies are now cooperating with law enforcement in the US and several other European countries to investigate possible ties between Arsov and the recently indicted Russian hackers.
In light of episodes of ethnic violence in Myanmar, India and Sri Lanka, Facebook has announced tighter restrictions on disinformation and misinformation spread through its site and Instagram. The company will begin removing false information that could lead to physical harm. However, the policy does not apply to WhatsApp, which has been a major catalyst for recent violent incidents.
A bill that would require certain automated social media accounts to identify themselves as bots is currently moving through the California state legislature. Many critics charge that the proposed law lacks specifics, that it is not constitutional, or that it will not effectively solve the problem of bot influence on voters. The bill, among the first of its kind, demonstrates the challenges that face lawmakers seeking legislative solutions to the surge of automated accounts.
On July 16, NDI and Coalition partners convened a forum at the margins of the OGP Summit in Tbilisi, Georgia, titled “Scaling the Future of Civic Tech.” The day-long event featured discussions on ways the civic tech movement is strengthening democracy in particular national and subnational contexts and highlighted the D4D Coalition as a means of sustaining collaboration on shared priorities.
New findings reveal that Russian propagators of disinformation often posed as local news sources to exploit the American public’s higher levels of trust for local news organizations. Many fake local news Twitter accounts did not actually post false information, opting instead to establish long-term credibility for when they needed to operationalize. These cases further confirm that the Russian-led disinformation campaign has been years in the making.
Twitter has escalated its defense against fake accounts and bots in the past few months, suspending more than one million accounts a day. This is especially significant given the company’s usual prioritization of freedom of speech over policing users’ behavior. In an effort to promote trust amongst active users, Twitter has already removed large amounts of inactive or suspicious follower accounts thought to be promoting disinformation and spam.
YouTube has announced a series of developments meant to promote more reliable, “authoritative” news sources on its site. The proposed changes mark a departure from the platform’s current video recommendation algorithms, which have promoted factually incorrect conspiracy theory videos to users who had a history of watching similar ones.
Frightened mobs in India have killed two dozen innocent people as false rumors about child kidnapping spread through the widely-used messaging platform WhatsApp. The app has facilitated the proliferation of false information in countries across the world, leading to instances of physical harm in Brazil and Sri Lanka as well. Not only does WhatsApp host the majority of viral disinformation campaigns in more countries than any other platform, but its encryption and private messaging systems also make it uniquely difficult to slow these campaigns. In response to these trends, WhatsApp has placed restrictions on the number of contacts to whom users can forward messages. WhatsApp also announced plans to support third-party developers of fact-checking technology for the app.
Philip Howard, Director of the Oxford Internet Institute (OII) and D4D advisory board member, testified before the Senate Intelligence Committee on the work of OII’s computational propaganda project, state sponsored disinformation worldwide, and the potential for foreign influence operations targeting U.S. elections.
First Draft, a project of Harvard University’s John F. Kennedy School of Government, published a definitional toolbox of terms related to technology and misinformation. The toolbox aims to create a shared vocabulary amongst policymakers, citizens, and academics. Part One includes a glossary defining “commonly used” and “frequently misunderstood” terms related to information disorder. Part Two attempts to map the thirteen sub-categories of the information disorder field in order to facilitate more strategic, targeted research and action. Part Three includes downloadable high-resolution graphics created to help explain information disorder.
An investigation by the Digital Forensic Research Lab reveals a deep network of exchanges between various users and several Brazil-based groups that sell pages, likes, and shares on Facebook for money. Large-scale likes-and-shares-for-cash transactions have the potential to threaten the integrity of Brazil’s upcoming elections; similar concerns arose in Mexico’s recent elections, which were flooded with messages from inauthentic accounts.
Research conducted by the German Tactical Technology Collective and their partners reveals that WhatsApp is now the primary platform for political messaging across the Global South, especially in rural areas with limited internet access. The report analyzes this phenomenon, seeking to answer why WhatsApp is such a powerful tool, how politicians and campaigners use the platform, what strategies organizers use to exploit the platform and what the potential implications are of this trend. The report also includes several case studies of key countries impacted by WhatsApp.
Vietnam recently approved a new controversial cybersecurity law regulating technology firms’ use of personal data. The legislation requires that social media firms turn over subscriber information, IP addresses, and account information to the Ministry of Public Security and remove content from their platforms when requested by the government. The legislation also creates formulations for charging citizens for posting “anti-government propaganda” or any material that “incites violence and disturbs public security.”
From June 22-23, 2018, the Atlantic Council’s Digital Forensics Lab hosted the 360/OS open source summit in Berlin, bringing together journalists, activists, innovators, and leaders from around the world as part of its digital solidarity movement for objective facts and reality — a cornerstone of democracy.
Representatives from the Design for Democracy Coalition attended the Copenhagen Democracy Summit, organized and hosted by the Alliance of Democracies, with the sponsorship of a wide array of organizations including Microsoft, Facebook, the University of Denver, NDI and IRI.
Several D4D Coalition partners convened in Brussels to participate in the forum “Representation in the Age of Populism: Ideas for Global Action,” organized by the International Institute for Democracy and Electoral Assistance (International IDEA).
On June 8-9, the Verkhovna Rada, Ukraine’s unicameral parliamentary body, hosted a conference to discuss current threats to democracy, which came as a follow-up to the Chairman of the Verkhovna Rada’s official visit to Moldova in March of this year.
Supporting partners of the Design 4 Democracy Coalition, the National Democratic Institute (NDI) and the International Republican Institute (IRI), along with the Defending Digital Democracy project (D3P) at Harvard Kennedy School’s Belfer Center, convened at Google’s Belgium office for the public launch of the “The Cybersecurity Campaign Playbook: European Edition” on May 22, 2018. The event featured a series of discussions including D3P Senior Fellows Robby Mook and Matt Rhoades, representatives from Microsoft and Google, European parliamentarians and policymakers, and officials from the Belfer Center, IRI and NDI.
WeChat, a social media platform popular with Chinese immigrants in the United States, presents new challenges and a new perspective to the fight against misinformation. The platform heavily features local news and sensational stories, while passing over other popular topics including the economy and healthcare. Misinformation on WeChat shows many of the same characteristics as misinformation in mainstream media, but also displays ways in which immigrant populations diverge from those norms because of the platform’s blend of U.S. and Chinese media practices. On the platform, conservative voices feature prominently, with both liberal and conservative users discussing the role of Chinese immigrants in politics and in the United States.