An upcoming referendum on a name change for the Republic of Macedonia has helped create an online environment awash in disinformation campaigns, some linked to Russia, which is trying to prevent the country from joining NATO and moving towards the West.
The D4D Coalition will present on its work at the g0v Summit in Taiwan from October 5 to 7, 2018, particularly the potential to develop disinformation monitoring efforts in Asia, with the support of partners, tools and techniques bolstered by D4D.
Capitalizing on weaknesses in U.S. technology and social media platforms, businesses in North Korea are able to connect with people in other countries to both provide and solicit services, circumventing U.S. sanctions. By hiding their identities, a group of North Korean web developers have allegedly advertised their services on Western online platforms and built a website for a business in Australia. A web of fake social media accounts and front companies in other countries enable this underground business.
The prevalence of deepfakes, videos manipulated to depict fabricated events, is increasing, prompting Reps. Adam B. Schiff (D-CA) and Stephanie Murphy (D-FL) to call on the intelligence community to analyze the problem and propose solutions. In their letter to Director of National Intelligence Daniel Coats, they describe these videos as a national security risk. Pressure from Congress and the intelligence community has encouraged social media companies' growing focus on stopping disinformation, and the lawmakers hope the same trend will extend to the fight against deepfakes.
The D4D Advisory Board will come together in London at the end of October for their first meeting to discuss policies, plans and strategy for the Coalition. The meeting will be held on the sidelines of Mozfest and MisinfoCon, two international conferences focusing on civic tech, disinfo, cybersecurity and a host of related issues crucial to D4D’s efforts.
On September 12, 2018, members of the Design 4 Democracy Coalition held an off-the-record workshop in Kyiv, Ukraine on emerging threats in disinformation and cybersecurity. Facilitated by NDI, IRI, IFES, and StopFake, the workshop included a range of organizations from civil society, government, technology companies and the cybersecurity community. Following lightning talks and presentations on research findings, participants took part in focused discussions on disinformation and cybersecurity in the lead up to the 2019 elections, as well as potential areas for improved collaboration with technology platforms to counter identified threats.
The D4D team is actively engaged in Macedonia ahead of its historic national referendum on September 30, 2018. D4D is supporting efforts by local stakeholders to ensure that disinformation does not interfere with the ability of the Macedonian people to express their will.
The Oxford Internet Institute Computational Propaganda Project found that the upcoming Swedish election has seen a high volume of "junk news" shared by users on social media, second only to that observed in the 2016 U.S. presidential election. Some of these fake sites are modeled after reputable news sources, complicating users' search for the truth. Researchers were surprised to find that eight of the ten most-shared junk news sites were domestic, prompting a renewed focus on local actors over international influences.
UNESCO published this handbook for use by journalism educators and practicing journalists. It includes discussions of journalism's responsibilities regarding disinformation, misinformation, and mal-information. Written as a curriculum, the handbook covers everything from the current state of "fake news" to the evolution of journalism to recommendations for the way forward.
The Chinese government has created a national-level platform, operated by the "Internet Illegal Information Reporting Center" and managed by state news agency Xinhua, to publicly differentiate fact from fake news for its citizens. To create the platform, over 40 "rumor-refuting platforms" were consolidated into one official service. The purpose behind this new official fact-checker, state-run xinhuanet.com explains, is to stop rumors and illegal information from disturbing the social order.
A Reuters special report documents more than 1,000 posts, comments, and crude images on Facebook calling for violence and discrimination against Myanmar's Muslims. Despite official rules banning such hate speech, Facebook appeared largely unprepared to crack down on this wave of anti-Muslim posts. For a long time, the company lacked enough Burmese-speaking employees, as well as programs that could effectively detect hate speech in the language and systems to translate the text, among other limitations preventing an effective response. Facebook has also recently blocked the accounts of 18 users and 52 pages linked to Myanmar's military, further highlighting the country's worsening relationship with the world's largest social media network.
A recent piece from Vice Motherboard discusses the catch-22 that Facebook confronts as it tries to create a standard moderation strategy for a world where local context is critically important to such moderation. Facing mounting criticism from many quarters, the company has recently increased its focus on, and the number of employees moderating, false news and other posts that violate its Community Standards. Is it possible for Facebook to maintain a platform at global scale that is both safe and open?
The messaging platform Line has seen a large increase in the quantity of content posted by users that spread incorrect information, especially about healthcare. These posts, which often target elderly users, advertise fake products or share incorrect information, like the power of kale juice to cure bone pain. Line is working with fact-checking organizations and the government to attempt to combat the problem.
A hacking group affiliated with the Russian government and linked to interference in the 2016 election, APT28, is believed to be behind the creation of six websites that mimic government and public policy groups. The U.S. Senate, International Republican Institute, and the Hudson Institute were among those targeted by the sites, which appear to seek credentials from members of these organizations. Microsoft, which discovered fake websites with domains like my-iri.org and senate.group, has announced plans to fight cybersecurity threats targeting political organizations. The tech company will offer free cybersecurity protections to likely targets of groups like APT28, including campaign offices and candidates, provided that they use Office 365 software.
Facebook released statements on its website about two attempts by foreign actors to influence users or to steal their personal information. The campaigns, which originated in Russia and Iran, appear unconnected despite their use of similar tactics. Facebook removed 652 accounts traced to Iranian state media, along with others identified as linked to Russian military intelligence.
In response to users incorrectly reporting content as "false news," Facebook has begun rating its users' credibility based on thousands of behavioral signals. Users whose reports prove accurate will have their future flags reviewed with higher priority than users who repeatedly report articles that are not, in fact, false.
Politicians in the US and Europe are devising new policies to limit microtargeting, a technique of targeting specific subgroups with advertisements that many believe feeds polarization and voter manipulation. As researchers have identified, microtargeting has become a key weapon for foreign election meddlers, and many argue that Facebook's efforts to deter this exploitation of microtargeting have had little effect.
NDI is planning a second disinformation event in collaboration with Mexico's National Electoral Institute (Instituto Nacional Electoral, INE), the Center for Research and Teaching in Economics (Centro de Investigación y Docencia Económicas, CIDE), and the National Autonomous University of Mexico (Universidad Nacional Autónoma de México, UNAM). This will follow up on the forum held in March 2018 before the national elections in July, and will review the 2018 electoral process, highlight local efforts to tackle disinformation, and discuss lessons learned for future elections. Similar events with NDI and FGV-DAPP participation are being held in Brazil and Colombia throughout July and August.
A Portland Communications survey of influencers shaping Twitter conversations on recent African elections shows that Africa is not immune to fake news, the rise of bots, or external influence on elections. The survey finds that 53% of key influencers came from outside the country holding the election, with many influencers coming from outside the continent. The report also reveals that bots had a major presence in election discourse while politicians had comparatively minor influence on discussions. Details on specific countries are provided.
A report by the Institute for the Future reveals a widespread phenomenon of “state-sponsored trolling”: government use of targeted online hate and harassment campaigns to intimidate and silence individuals critical of the state. New surveillance and hacking technologies have allowed governments to anonymously track, threaten, and publicly delegitimize opponents on a greater scale than ever before. The report concludes with recommendations for technology companies, lawyers, and lawmakers.
A delegation including D4D partners visited Skopje, Macedonia in late July to assess the potential for disinformation to impact the forthcoming referendum on a proposed change to the country's name and to better understand local needs for coordination and support. D4D partners continue to monitor the situation closely.
This comprehensive essay details the myriad political, social, and business dynamics that transformed social media from the pro-democratic tool of the Arab Spring into the anti-democratic weapon of authoritarians and election meddlers today. Writer Zeynep Tufekci points to the lack of regulation of tech firms, the unwillingness of the US to bolster its online defenses, and the appropriation of social media tools by authoritarians as key factors in this transition.
Oxford Internet Institute’s Project on Computational Propaganda has a new report analyzing the new and growing trends of organized media manipulation, as well as the growing capacities, strategies, and resources that support the trends. In a fast-growing number of countries, political parties and government agencies are using social media to manipulate domestic public opinion, often in response to threatening foreign interference and junk news. Since 2010, political parties and governments have spent over half a billion dollars developing and implementing these operations.
This guide develops a learning module for journalists and educators meant to situate contemporary information disorder within a long history of misinformation, disinformation, and propaganda. It establishes a broad historical overview of past forms of information disorder propagated by states, public figures, and the media. Produced by the International Center for Journalists, the guide aims to equip contemporary journalists and educators with a sharpened, contextual knowledge of disinformation-related issues.
The Office of Senate Intelligence Committee Vice Chairman Mark Warner prepared a policy paper detailing 20 options lawmakers can consider to combat disinformation, protect user privacy, and promote competition in the tech space. These options include media literacy programs, new rules for social media platforms, and more user control over companies' use of personal data.
The Digital, Culture, Media and Sport Committee of the UK House of Commons, Britain's chief investigating authority on disinformation, released the first of its thorough reports on the subject. What began as an inquiry into a few major scandals grew into a comprehensive 89-page document on Russian interference, tech companies' responsibility for disinformation, Cambridge Analytica, and data targeting, putting the UK parliament at the center of global discussions. The report also includes a list of demands for regulation, legislation, codes of ethics, and police investigations.
In its efforts to stop the spread of misinformation, Facebook deactivated a large network of pages and accounts thought to be led by Brazilian right-wing activists from the Movimento Brasil Livre (MBL), or "Free Brazil Movement." According to a number of sources, MBL organizers posed as different independent news outlets to develop coordinated messaging campaigns in support of their policies. Facebook removed the accounts, alleging that they violated the company's authenticity policies.
The European Union issued Google a record-breaking antitrust fine of €4.34 billion ($5.06 billion) over Google’s deals with mobile phone makers and telecommunications operators. According to EU regulators, Google’s contracts with phone makers effectively force those companies to prioritize Google apps and services in exchange for Google providing its Android operating system for free. The fine represents a major step towards stronger government oversight of technology companies, at least in the EU.
The Getulio Vargas Foundation Office of Public Policy Analysis (Fundação Getúlio Vargas, Diretoria de Análise de Políticas Públicas, or FGV-DAPP) launched their Sala de Democracia Digital (Digital Democracy Room) at an event on July 25 in Rio de Janeiro, Brazil with support from NDI, including the participation of NDI's Colombia Country Director Francisco Herrero. The online project seeks to analyze political discourse on the web during the 2018 Brazilian elections and will include weekly reports on the use of bots and fake news during the months prior to the October poll, as well as policy papers and recommendations.
As the EU and China push forward with tighter internet regulations, the US is losing its place as a key agenda-setter on internet freedom and cybersecurity policy. The US has lately been taking a far more passive role on countering authoritarian internet policies in China and other developing countries, and has neglected to confront the EU over its strict user privacy regulations that could threaten global cybersecurity efforts.
Uganda's parliament has, for now, maintained a controversial tax on social media use despite widespread protests against it. President Museveni, who first introduced the law, has cited the spread of "gossip" as a key reason for its existence. Seen as an attempt at state censorship, the tax has led to a significant drop in social media use in addition to creating new economic burdens for the Ugandan people.
A recent study by the research firm Ghost Data reveals that Instagram may have as many as 95 million bots, slightly more than in 2015. Bot presence on Instagram continues to grow despite Facebook's efforts to curb it. The rise of Instagram bots is especially concerning because images and videos, which are uniquely difficult to track and identify, could play a larger role in coming elections.
A joint investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and partners reveals that Macedonia's fake American news industry in Veles was launched by well-known Macedonian attorney Trajche Arsov, not by apolitical teenagers as previously reported. During the 2016 election, Arsov worked closely with several high-profile American partners to churn out over a hundred fake news websites promoted on social media. Macedonian security agencies are now cooperating with law enforcement in the US and several other European countries to investigate possible ties between Arsov and the recently indicted Russian hackers.
In light of episodes of ethnic violence in Myanmar, India, and Sri Lanka, Facebook has announced tighter restrictions on disinformation and misinformation spread through its site and Instagram. The company will begin removing false information that could lead to physical harm. However, the policy does not apply to WhatsApp, which has been a major catalyst for recent violent incidents.
A bill that would require certain automated social media accounts to identify themselves as bots is currently moving through the California state legislature. Many critics charge that the proposed law lacks specifics, is unconstitutional, or will not effectively solve the problem of bot influence on voters. The bill, among the first of its kind, demonstrates the challenges facing lawmakers who seek legislative solutions to the surge of automated accounts.
On July 16, NDI and Coalition partners convened a forum at the margins of the OGP Summit in Tbilisi, Georgia, titled “Scaling the Future of Civic Tech.” The day-long event featured discussions on ways the civic tech movement is strengthening democracy in particular national and subnational contexts and highlighted the D4D Coalition as a means of sustaining collaboration on shared priorities.
New findings reveal that Russian propagators of disinformation often posed as local news sources to exploit the American public's higher levels of trust in local news organizations. Many fake local news Twitter accounts did not actually post false information, opting instead to build long-term credibility that could be operationalized when needed. These cases further confirm that the Russian-led disinformation campaign has been years in the making.
Twitter has escalated its defense against fake accounts and bots in the past few months, suspending more than one million accounts a day. This is especially significant given the company's usual prioritization of freedom of speech over policing users' behavior. In an effort to promote trust among active users, Twitter has already removed large numbers of inactive or suspicious follower accounts thought to be promoting disinformation and spam.
YouTube has announced a series of developments meant to promote more reliable, “authoritative” news sources on its site. The proposed changes mark a departure from the platform’s current video recommendation algorithms, which have promoted factually incorrect conspiracy theory videos to users who had a history of watching similar ones.
Frightened mobs in India have killed two dozen innocent people as false rumors about child kidnapping spread through the widely used messaging platform WhatsApp. The app has facilitated the proliferation of false information in countries across the world, leading to instances of physical harm in Brazil and Sri Lanka as well. Not only does WhatsApp host viral disinformation campaigns in more countries than any other platform, but its encryption and private messaging systems also make it uniquely difficult to slow these campaigns. In response to these trends, WhatsApp has placed restrictions on the number of contacts to which users can forward messages. WhatsApp also announced plans to support third-party developers of fact-checking technology for the app.
Philip Howard, Director of the Oxford Internet Institute (OII) and D4D advisory board member, testified before the Senate Intelligence Committee on the work of OII’s computational propaganda project, state sponsored disinformation worldwide, and the potential for foreign influence operations targeting U.S. elections.
First Draft, a project of Harvard University’s John F. Kennedy School of Government, published a definitional toolbox of terms related to technology and misinformation. The toolbox aims to create a shared vocabulary amongst policymakers, citizens, and academics. Part One includes a glossary defining “commonly used” and “frequently misunderstood” terms related to information disorder. Part Two attempts to map the thirteen sub-categories of the information disorder field in order to facilitate more strategic, targeted research and action. Part Three includes downloadable high-resolution graphics created to help explain information disorder.
An investigation by the Digital Forensic Research Lab reveals a deep network of exchanges between various users and several Brazil-based groups that sell Facebook pages, likes, and shares for money. At scale, such like-and-share-for-cash transactions have the potential to threaten the integrity of Brazil's upcoming elections, a concern also raised in Mexico's recent elections, which were flooded with messages from inauthentic accounts.
Research conducted by the Berlin-based Tactical Technology Collective and its partners reveals that WhatsApp is now the primary platform for political messaging across the Global South, especially in rural areas with limited internet access. The report analyzes this phenomenon, seeking to answer why WhatsApp is such a powerful tool, how politicians and campaigners use the platform, what strategies organizers use to exploit it, and what the potential implications of this trend are. The report also includes several case studies of key countries impacted by WhatsApp.
Vietnam recently approved a controversial new cybersecurity law regulating technology firms' use of personal data. The legislation requires social media firms to turn over subscriber information, IP addresses, and account information to the Ministry of Public Security and to remove content from their platforms when requested by the government. It also establishes grounds for charging citizens who post "anti-government propaganda" or any material that "incites violence and disturbs public security."
From June 22-23, 2018, the Atlantic Council's Digital Forensics Lab hosted the 360/OS open source summit in Berlin, bringing together journalists, activists, innovators, and leaders from around the world as part of its digital solidarity movement for objective facts and reality, a cornerstone of democracy.
Representatives from the Design 4 Democracy Coalition attended the Copenhagen Democracy Summit, organized and hosted by the Alliance of Democracies, with the sponsorship of a wide array of organizations including Microsoft, Facebook, the University of Denver, NDI, and IRI.
Several D4D Coalition partners convened in Brussels to participate in the forum “Representation in the Age of Populism: Ideas for Global Action,” organized by the International Institute for Democracy and Electoral Assistance (International IDEA).
On June 8-9, the Verkhovna Rada, Ukraine's unicameral parliament, hosted a conference to discuss current threats to democracy, a follow-up to the Chairman of the Verkhovna Rada's official visit to Moldova in March of this year.
Supporting partners of the Design 4 Democracy Coalition, the National Democratic Institute (NDI) and the International Republican Institute (IRI), along with the Defending Digital Democracy project (D3P) at Harvard Kennedy School's Belfer Center, convened at Google's Belgium office for the public launch of "The Cybersecurity Campaign Playbook: European Edition" on May 22, 2018. The event featured a series of discussions including D3P Senior Fellows Robby Mook and Matt Rhoades, representatives from Microsoft and Google, European parliamentarians and policymakers, and officials from the Belfer Center, IRI, and NDI.