YouTube’s new fact-checking algorithm, designed to provide viewers with factual information about videos that might contain false or misleading content, has hit a snag. As videos of the recent fire at France’s Notre Dame cathedral surfaced on the streaming website, the algorithm began displaying them alongside information from Encyclopedia Britannica about the September 11, 2001 terrorist attacks in the United States. While YouTube apologized for the embarrassing error, the incident demonstrates gaps in the company’s current fact-checking system.
The 9th Latin American Democracy Forum in Mexico City, held April 4–5, focused on “Challenges in Politics and Democracy in the Digital Era.” The event covered themes ranging from citizenship in the digital era and new technologies in electoral campaigns to tools for promoting an informed vote and advancing electoral transparency and accountability through new technologies. International IDEA participated in the panel on electronic voting, discussing questions related to the security and secrecy of online voting, key considerations for the design and implementation of e-voting, and creating a favorable environment for new voting technologies.
Google has canceled a planned corporate advisory board on ethics in artificial intelligence after months of backlash. The company had planned to include several controversial figures on the board, which sparked outcry: transgender rights activists objected to the inclusion of Heritage Foundation president Kay Cole James, while anti-war groups objected to the inclusion of military drone company CEO Dyan Gibbens.
Though YouTube has recently changed the algorithms that determine which videos are shown to viewers, eliminating recommendations for some toxic conspiracy videos on its site, it still draws criticism from advocates of tech-platform responsibility. A Bloomberg interview with former staff revealed that, by the beginning of 2018, it had become clear that the company had allowed its video-streaming website to automatically suggest videos containing false information or conspiracy theories to millions of viewers.
Ahead of the upcoming May elections in the Philippines, Facebook has removed 65 pages that it deemed propaganda primarily in favor of incumbent Philippine president Rodrigo Duterte or his administration and allies. These pages, which claimed or implied that they were run by ordinary Philippine citizens supporting the Duterte administration, were shown to be linked to the president’s social media strategist, Nic Gabunada. This is just the most recent action in Facebook’s new operational strategy of removing content it deems inauthentic.
A new report from the Atlantic Council, utilizing research from 2018 by the Digital Forensic Research Lab and the Adrienne Arsht Latin America Center, details the effects of digital distortion on the 2018 national elections in Brazil, Mexico, and Colombia. This report focuses on the power of polarization, automation and disinformation in three of the region’s largest democracies, as the role of online systems in elections only grows in 2019.
On the heels of March’s Ukrainian presidential election, closely watched for signs of a concerted disinformation campaign by Russian linked entities, Finland held elections in April. As the country with the longest border with Russia of any EU member state, Finland also has an unfortunate history of being on the receiving end of attacks and coercion from its next-door neighbor. Things seem to be no different in the era of internet trolling from Twitter bots, and Finnish national security analysts have noted that Russian-linked social media accounts that spread false information remain a problem. However, Finland has the benefit of a highly educated population, with above-average levels of internet literacy, making it somewhat more resistant to disinformation campaigns.
In March, Facebook announced a new policy for its users in Ukraine, who must now register to promote political ad content on the social media platform. To register, individuals must provide valid identification and enroll in a two-factor authentication system, while also agreeing to have all of their political content posted to Facebook carry a label that reads “paid for by (organization name here).” Facebook also announced an archive of all political ads and proposed safeguards against problematic ad content in the lead-up to Ukraine’s national elections in March and April.
Social media and video-streaming platforms like Facebook, Twitter, YouTube, and Vimeo have become channels for individuals to spread images and footage of mass violence, such as the Christchurch, New Zealand mosque shootings in March. In response, the president of Microsoft has suggested creating a “joint virtual command center” to quickly respond to postings of such heinous acts. In a blog post, he suggested that certain emergency situations like the New Zealand massacre be deemed “major events” that would warrant all companies providing social media or video-streaming services working together on a temporary flag-and-remove campaign for offensive content related to the event. Facebook followed this with an announcement that, starting in April, it would remove all content relating to white supremacy or separatism posted by users of its platform.
NSO Group, a little-known Israeli tech developer, created “Pegasus,” an incredibly sophisticated software program that allows users to spy on the mobile phones of others, even when those phones are encrypted. While the technology has been credited with saving lives by foiling terrorist attacks and shutting down drug- and human-trafficking rings, it has also been used by autocratic governments to track down and eliminate dissidents such as Saudi Arabia’s Jamal Khashoggi. Recently, NSO cofounder Shalev Hulio sat down for an interview with 60 Minutes to defend his company’s controversial product.
A new Reuters research report co-conducted with Oxford University shows that more Indians than ever (68% of those polled in the study) are receiving the bulk of their English-language online news from their mobile phones rather than computers or other devices. The report showed that social media platforms like Facebook and WhatsApp were the biggest mediums for accessing news for Indians, rather than going directly to the news source’s website. This has in turn prompted WhatsApp to roll out a new feature for its users in India ahead of this spring’s national elections. The feature, a tip line called Checkpoint, allows users to submit messages, pictures or videos that they want checked, and Checkpoint employees will verify their accuracy.
With funding from the American and British governments, over 50 schools in Ukraine are working disinformation awareness skills into their core curriculum. Now in the fifth year of a civil war that has demolished much of the eastern part of the country, Ukraine has also been the victim of a calculated, intensive disinformation campaign by Russia, which backs separatist rebels fighting the Ukrainian government. The new “Learn to Discern” program aims to teach Ukrainian schoolchildren to spot stories, photos, videos and other media carrying false narratives propagated by Russian government sources.
Ever more attention is being turned to the increasing prevalence and potency of private information intelligence companies in the wake of a report published earlier this year detailing the work of a for-profit American firm on behalf of the Qatari government. A recent New York Times report revealed that as former state intelligence agents retire or defect from their agencies, private firms offer them employment on projects spying on behalf of corporations or smaller countries without the resources to operate a sophisticated spy network of their own. While these “internet mercenary” groups have been used to help stop Middle East-based terrorists and violent drug traffickers in places like Mexico, they are also being used by autocratic regimes like Qatar to track rights activists and journalists.
Vladimir Putin’s regime is set to start testing the feasibility of cutting off the nation’s internet servers from the global internet. While Putin’s United Russia party insists that the motivation for this is to increase Russian internet self-sufficiency, regime critics and internet rights activists argue that it is a move designed to eliminate Russian citizens’ access to international news and commentary that is critical of the regime. This action would not only be bold and technically complicated, but also a major setback to internet freedom in Russia.
As social media platforms like Facebook and Twitter finally begin to crack down on the prevalence of false information, hate speech, and conspiracy theories on their platforms, one company remains largely disengaged from enforcing the developing norms of responsible operations for such websites: Instagram. The platform has seen a spike in accounts propagating disinformation, many of them run by a growing number of conservative and alt-right-aligned teenagers of the so-called “Generation Z” cohort. These accounts use a combination of memes, pop culture, and videos from Infowars, a conspiracy-peddling website that is banned on Facebook and Twitter.
In the weeks leading up to the continent-wide elections for the European Parliament, almost twenty European news organizations began a collaborative fact-checking project called FactcheckEU. The project aims to debunk misinformation that could interfere with the vote, as it did in the German, French, and Italian elections of recent years. For its part, the European Commission is set to warn member governments that they should share information on online disinformation campaigns and cyber attacks via a newly developed warning system.
Facebook has confirmed that it left hundreds of millions of its users’ passwords stored as plaintext in its internal systems. While this means the passwords were visible only to engineers employed by the company, Facebook admitted in a statement that this was a critical oversight that it intends to correct. Other sites, including GitHub and Twitter, have committed similar errors in the past.
When Gabon’s longtime leader Ali Bongo addressed his nation via internet video at the beginning of 2019, observers noted that something seemed off. Multiple observers noted that the president seemed to have unnatural eye and jaw movements, and a radically different speaking pattern than in his earlier speeches. Gabonese opposition activists, politicians, and even some technology experts have acknowledged the possibility that the video could be a so-called “deepfake,” a deliberately doctored video or image that shows a person doing or saying something that never actually happened. The claim is that the footage of Bongo, who has been in poor health and out of the country since autumn of 2018, was an edited video of someone other than the president, aired by his inner circle to allay local fears of his death or resignation. While video and film experts who analyzed the clip have not said for sure that the Bongo address is a deepfake, they did not rule out that possibility.
In a piece for the Foreign Policy Research Institute about his upcoming book, scholar Clint Watts identifies five “generations” of intentional online manipulation by state, criminal, and political actors, while making some dire predictions about the internet’s future. The first generation, “Disrupting the System,” revolved around hackers committing denial-of-service attacks that shut down people’s access to the internet. The second, “Exploiting the System,” saw extremist groups (ISIS, Al Qaeda, and others) using the internet to gain followers, mainly through sites like Twitter and YouTube. The third, “Distorting the System,” was widely seen in Russian government sources deliberately spreading false and misleading information through Western internet outlets. The fourth, “Dominating the System,” foresees a near future in which private companies, interest groups and political parties adopt Russian tactics to invent news on a mass scale to drive public narratives. The fifth, “Owning the System,” will come in the medium-to-long term, when authoritarian regimes begin to disconnect their national networks from the world wide web, leaving their subjects to consume only government-approved online media.
Users and moderators on Reddit have noticed a troubling trend in the past few months: Reddit accounts allegedly linked to China are engaging in massive collective activity to drown out posts or threads that are critical of the Chinese government, nation, or the Communist Party. Canadian Reddit users in particular have noticed a sharp uptick of China-based or China-associated accounts on the site voting down or posting negative comments on pages with an anti-PRC-government theme.
When the government of Zimbabwe imposed a large hike in fuel prices earlier this year, large protests prompted the state to shutter all internet resources in a nation-wide web media blackout. In response, more Zimbabweans than ever began to turn to the communications application WhatsApp to get their news, including reports of coordinated government use of violence against protestors. These “e-[news]papers” are sent by WhatsApp text message, complete with photos and often even soundbites, to the phones of thousands of Zimbabwean subscribers. Though e-papers delivered via WhatsApp have existed since at least the ousting of former national president Robert Mugabe in late 2017, they are becoming ever more common in a country where the traditional media space has slowly closed over the years.
Facial recognition software grows in sophistication every year, but this isn’t accomplished without giving AI “practice”: Researchers often upload millions of pictures of human faces to the recognition software that they are developing so that their programs can further develop the algorithms needed to achieve greater face-sensing capacity. Often, the pictures the systems utilize are provided by corporations such as IBM, who take the photos from photography websites like Flickr. When interviewed by NBC News, some photographers were alarmed or disturbed that the photos they had taken of others might be used to electronically profile those individuals. These photographers complained that IBM had not sought their permission to use the photos, nor the permission of their artistic subjects.
Already under intense scrutiny from democratic governments for potentially enabling spying on user devices, Chinese telecoms giant Huawei is also laying undersea cables, which ferry over 90% of intercontinental internet data. The company is fourth-largest in the number of cables laid, but is quickly catching up to the three main Western and Japanese companies. The United States, Japan, and other governments are concerned that, much as with Huawei’s 5G infrastructure, the corporation could monitor and collect the data of private individuals that passes through its undersea internet cables.
D4D Coalition partners the International Republican Institute (IRI) and the National Democratic Institute (NDI) partnered with Microsoft and Defending Digital Democracy (D3P), a project of the Harvard Kennedy School’s Belfer Center, to launch “The Cybersecurity Campaign Playbook: Indian Edition” in New Delhi ahead of India’s general elections in spring 2019. The playbook provides the steps and tools political parties can implement to make their campaigns’ information more secure and protect against digital threats. On Monday, March 4th, the playbook launch event took place with a panel of local cybersecurity experts in New Delhi. On March 6th, IT representatives from Indian political parties convened for a roundtable discussion. Following these events, Indian political parties received the playbook to disseminate throughout their national, state and local branches, in order to raise further awareness about cybersecurity around political campaigns.
Seeking to improve his company’s beleaguered reputation, Facebook CEO Mark Zuckerberg released a blog post in early March in which he committed the company to the use of end-to-end encryption for its Messenger application. This may or may not assuage the fears of many online privacy advocates, who voiced grave concerns earlier this year when Facebook announced that it would merge the underlying systems of Messenger with those of WhatsApp, a secure messaging service it had acquired. Critics fear that end-to-end encryption could be removed or modified during the merger, giving governments, nefarious actors, and Facebook itself greater access to track the messages of users.
International IDEA launched a pilot Public Participation Platform (PPP) to support national constitution-making bodies that collect, store and analyze data from public consultations. The PPP is an online system tailorable to country-specific processes and accessible to country-level staff through individual login credentials. It features a survey function with an accompanying public URL to facilitate public engagement on constitutional issues, a storage facility to upload publicly submitted documents and recordings, and an analysis component wherein data can be exported to CSV files for SPSS analysis. The pilot system was developed in partnership with the Gambian Constitutional Review Commission (CRC), which plans to use the online survey feature to reach local and diaspora Gambians.
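The PPP’s export step is a standard pattern: survey responses collected online are flattened into a CSV file that statistics packages such as SPSS can read. The sketch below is purely illustrative, assuming hypothetical field names and sample data; it is not International IDEA’s actual schema or code.

```python
import csv

# Hypothetical consultation responses of the kind a public-participation
# survey might collect; the field names here are illustrative assumptions.
responses = [
    {"respondent_id": 1, "region": "Banjul", "question": "Q1", "answer": "Agree"},
    {"respondent_id": 2, "region": "Diaspora", "question": "Q1", "answer": "Disagree"},
]

def export_to_csv(rows, path):
    """Write survey rows to a CSV file that SPSS (or pandas) can import."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["respondent_id", "region", "question", "answer"]
        )
        writer.writeheader()  # first row: column names, as SPSS expects
        writer.writerows(rows)

export_to_csv(responses, "consultation_export.csv")
```

The resulting file has a header row followed by one row per response, which maps directly onto SPSS variables and cases.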
Leaked internal documents from Facebook show that the social media company has engaged in a high-level lobbying campaign with some of Europe’s most powerful politicians. The exposed memos describe a strategy through which company COO Sheryl Sandberg, widely known as a motivational speaker, used her influence and prestige to convince the political elite in European Union countries to repeal their extant information-privacy laws. Targeted politicians included former British Chancellor of the Exchequer George Osborne and former Irish Prime Minister Enda Kenny.
In an open letter issued on February 11, 2019, organizations from across civil society urged Facebook to take meaningful action to improve the transparency of political advertising on the platform. Led by the Mozilla Foundation, a broad array of democracy and human rights groups, including members of the Design 4 Democracy (D4D) Coalition, co-signed the open letter, and supported its call for specific, time-bound action in order to improve transparency of political advertisements on social media platforms in the context of the European Union elections.
On December 7, 2018, the D4D Coalition Advisory Board issued a statement of solidarity condemning the indictment of fellow Advisory Board member and Rappler CEO Maria Ressa. In light of Maria Ressa’s arrest on February 13, 2019, on charges of cyber libel, the Coalition reaffirms the solidarity expressed in that statement, and reiterates its condemnation of efforts by the Philippine government to silence Rappler. The charges stem from a seven-year-old story that predates the enactment of the 2012 Cybercrime Prevention Act. The arrest follows a string of charges leveled against Ressa by the Philippine government, which are part of a broader attempt to silence independent and critical voices in the country. For her work as a guardian in the war on truth, Ressa was named one of Time Magazine’s 2018 Persons of the Year and was the 2018 recipient of the Committee to Protect Journalists’ Gwen Ifill Press Freedom Award. Today the D4D Coalition echoes Ressa’s remarks upon accepting the award: “The time to fight for journalism . . . is now.”
Facebook announced the extension of content policies and tools regarding upcoming elections in Nigeria, the European Union, India and Ukraine. The company is planning an ad hoc approach of restrictions on who can run electoral ads before elections in Nigeria and Ukraine, as well as the creation of an online library of electoral ads in India. Later in January, the social media platform blocked tools developed by ProPublica and other media watchdogs, leading to criticism of Facebook from these groups, as well as from lawmakers concerned with internet privacy. For over a year and a half, ProPublica, a non-profit investigative news agency, had used a software tool to compile information on hundreds of thousands of advertisements appearing on Facebook, detailing the identity of the ads’ sponsors as well as who the ads might be targeting. In response to Facebook’s blockage, the Mozilla Foundation penned an open letter to Facebook condemning the action, which was signed by several e-media freedom groups. After garnering significant negative press from this open letter, Facebook VP Rob Leathern announced via Twitter that the company would do more to disclose the sourcing of political ads ahead of critical upcoming elections.
After Facebook blocked access to transparency tools allowing users to see how they are targeted by advertisers, the Mozilla Foundation and co-signatories, including D4D-affiliated groups, released an open letter to Facebook calling for specific, time-bound action to improve political ad transparency on the social media platform ahead of the European parliamentary elections. In response, Facebook committed to opening its Ad Archive API in March, and reaffirmed its intention to roll out additional ad transparency tools globally by June. The D4D Coalition’s post regarding the events noted that the challenges relating to ad transparency are global in nature. Too often, tools and policies to address transparency concerns have been rolled out primarily in countries where tech companies have a large market, or face the largest political risk. The D4D post references the notion that tech companies have an obligation to “do no democratic harm” and that protecting against the abuse of social media platforms in the context of elections should not be driven by market size or political risk to the company. Indeed, new or restored democracies may be the least resilient to disinformation and have the greatest need for protection. The post welcomed Facebook’s commitment to roll out a global response to the issue of political ad transparency by the end of June.
A new report by Park Associates conducted for the State Department details the history and current situation of misinformation in the world, and profiles five main state actors and their role in its spread. The report shows the danger these states pose on internet platforms like Facebook and Google, and how those companies might mitigate those challenges. Google released a lengthy strategy detailing how the company plans to counter disinformation on its platforms, such as retooling the video recommendation algorithm on YouTube so that the site no longer steers viewers toward videos with questionable or false content.
Three D4D Coalition members, IFES, IRI and NDI, held a panel discussion in Washington, DC on January 31 to highlight the interplay between identity, marginalization and disinformation in political life. Representatives discussed new research studying the relationship between hate speech and disinformation, and the potential to explore new pathways of study for these critical issues.
Reuters has accused an intelligence network called Project Raven of working for, and out of, the United Arab Emirates, spying on perceived enemies of that state’s government, including American citizens. The cybersecurity company hired by Abu Dhabi to conduct the espionage, CyberPoint, is an American firm that employs many former National Security Agency staff. Though Project Raven, which began in 2009, shifted in 2016 to the control of an Emirati cyber company, Dark Matter, a good number of the American employees remained with the team. Interviewees stated that while some of Project Raven’s main targets were violent extremist groups like the Islamic State, other targets included journalists, human-rights campaigners and other dissidents against the UAE government.
Facebook has used monetary incentives to encourage teens and young adults to download a third party app which allows the company to view all phone and internet activity that users engage in on their device, be it iOS or Android. Promising to pay potential participants more than $20 per month, the social media giant has asked them to download the VPN “Facebook Research” which allows the company full access to information on other applications and activities on the participant’s mobile phone, likely in order to gauge the company’s competition. Within 24 hours of the revelation of this story, Apple removed Facebook Research from its iOS app store and revoked its iOS developer’s license.
Russia’s powerful media regulation agency, Roskomnadzor, has initiated fines against social media giants Facebook and Twitter. While the state-run agency claims that this action is in response to a violation by both companies of Russian communications laws, it is widely understood to be politically motivated. The Kremlin has a history of intimidating and punishing media and communications companies that do not comply with its laws that are designed to deprive users of online data privacy. Facebook and Twitter have refused to submit to Roskomnadzor demands to disclose the personal data of their Russian users.
In response to a rise in violence fueled by disinformation on WhatsApp, the company made the decision to limit the number of times a user can forward a message to 20 around the world and five in India. India was home to the highest number of forwarded photos, messages and videos, resulting in over 24 murders by violent mobs which had been incited on the app. After an initial success with the new cap in India, WhatsApp decided to extend this limit to all of its global users. The company hopes that this change will refocus users on the app’s original purpose: communication with close contacts.
Of India’s roughly 900 million voters, 300 million use Facebook and 200 million use WhatsApp, opening the door for the world’s largest democratic election to also be an important test of the impact of social media on elections. The two most popular parties, the Bharatiya Janata Party (BJP) and the Indian National Congress (INC), have accused each other of spreading fake news while maintaining that they do not do so themselves. Misinformation spread through social media resulted in over 30 deaths in 2018, and officials are worried that an increase in fake news, encouraged by an election that is expected to be competitive, will result in further violence.
Facebook has removed almost 300 inauthentic pages covertly spreading the agenda of the Kremlin’s news agency, Rossiya Segodnya, as well as its outlets Sputnik and TOK, a video service. These pages appeared to promote special interests ranging from regional cuisine to politicians; in reality, they pushed the Kremlin media’s stories and agenda, increasing Sputnik’s reach by 170%.
Facebook has invited researchers to study some of its inner workings and develop proposals that would improve the company’s work on disinformation, hate speech and democracy. The resulting report offers nine recommendations for Facebook’s policies, engagement with disinformation and impact on governance, including: clarifying its community standards on hate speech, hiring content reviewers with knowledge of cultural contexts, increasing transparency around the enforcement of policies in complicated cases, and expanding the context and fact-checking information provided for users. The same group also studied the impact of greater news literacy in societies, as well as the connection between news literacy and other online media-consumption behaviors.
Amnesty International and Element AI’s crowd-sourced data project, Troll Patrol, monitored tweets sent to 778 journalists and politicians from the U.S. and U.K. during 2017. The study found that women from both sides of the political spectrum and both professions, journalists and politicians, were all targets for harassment. The project also found that 7.1% of the tweets monitored in the study registered as “problematic” or “abusive”. This percentage was higher for women of color, especially for black women, who were 84% more likely than their white peers to be talked about in abusive or problematic tweets.
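The “84% more likely” figure is a relative comparison of per-group rates rather than a share of all tweets. The sketch below shows how such a statistic is computed; the per-group rates used are hypothetical round numbers for illustration, not the Troll Patrol study’s actual figures.

```python
# Illustrative calculation of an "X% more likely" statistic.
# The rates below are hypothetical, chosen only to demonstrate the math.
def relative_increase(rate_group, rate_baseline):
    """Return the proportional increase of one group's rate over a baseline."""
    return (rate_group - rate_baseline) / rate_baseline

# If, hypothetically, 5.0% of tweets mentioning one group were abusive or
# problematic and 9.2% of tweets mentioning another group were, the second
# group would be 84% more likely to be targeted.
increase = relative_increase(0.092, 0.050)
print(f"{increase:.0%} more likely")  # prints "84% more likely"
```

Framing the gap as a ratio of rates, rather than a difference in percentage points, is what lets the study compare groups that receive very different volumes of mentions.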
SCL Elections, the parent company of Cambridge Analytica, was fined £15,000 after it failed to comply with a UK Information Commissioner’s Office (ICO) order to release the personal data of an American citizen. The man, David Carroll, filed a request for the company to release all information it had collected about him; however, SCL Elections released only basic information and did not respond to requests for further data, prompting Mr. Carroll to file a case with the Hendon magistrates’ court. SCL Elections pleaded guilty to failing to comply with an ICO enforcement notice and breaching the Data Protection Act.
Location data collected by mobile service providers is used for many legitimate purposes, including emergency assistance, financial fraud protection and, under a warrant, official investigations. This data, however, is also sold to other companies and resold repeatedly until it is accessible to actors for non-legitimate use. For a small fee, websites offer phone location services, effectively allowing any person to track another by their phone. Mobile providers have said that they were in the dark about this use of their location data, while members of the U.S. Congress and the Federal Communications Commission have called for better regulation and renewed safeguarding of private information.
As the Chinese government requires that companies censor their own online information, a market for internet censorship is on the rise, employing thousands of Chinese workers. These censorship factories teach their workers about legitimate past events and people so that they can recognize and moderate the content viewed by over 800 million users. The market for online content management extends beyond China; U.S. companies including Facebook and YouTube have announced that they plan to hire thousands of employees to help manage their content.
A report by Privacy International discovered that 42.55% of apps offered for free through the Google Play store may share data with Facebook, whether or not the user has a Facebook account or is logged in at the time. These apps often send personal information to Facebook automatically when opened by a user and app developers have a limited ability to control this data flow, leading to questions about privacy and the violation of data laws.
The Bangladesh Telecommunication Regulatory Commission directed mobile phone service providers to shut down the country’s mobile internet the day before and the day of the parliamentary election. The Commission cited fears of violence, intimidation, propaganda, and rumors surrounding the election that could lead to misinformation and voter suppression. The landslide victory that kept Prime Minister Sheikh Hasina’s ruling party in power has been marred by allegations of mass arrests and jailing of activists and critics, forced disappearances and extrajudicial killings.
Now famous for interference in the 2016 U.S. presidential elections, deceptive Russian tactics were more recently used by tech experts in an experiment during the Alabama Senate race in service of then-candidate Doug Jones. While this secret project was designed to have no impact on the outcome of the race, it has wide-ranging implications for the future of U.S. elections and domestic media manipulation. Experts on both sides of the aisle worry that candidates may resort to such tactics out of fear that their opponents will do the same, forever changing American politics.
Since its early years, Facebook has entered into data partnerships with other sites and platforms to customize the information presented to users, decrease competition and encourage expansion through a wider user base. These partnerships have drawn concern and condemnation from the international community. Some companies, many of which said they were unaware of the wide access Facebook had given them, were able to access contact information from users and non-users, read and change private messages and view personal information, all without official audits of their use of this data or their privacy practices.
A bipartisan group of U.S. Senators led by Catherine Cortez Masto (D-Nev.) and Marco Rubio (R-Fla.) has written a letter to U.S. Secretary of State Mike Pompeo asking him to investigate how “CCP attempts to erode democratic processes and norms around the world threaten U.S. partnerships and prosperity,” particularly in regard to Taiwan. They suggest that organized social media campaigns targeted the Democratic Progressive Party (DPP), its candidates and President Tsai Ing-wen during local elections in November. Observers in Taiwan and elsewhere have said that the Chinese government supported these opposition campaigns with various forms of computational propaganda, and the Senators’ letter suggests they find the allegations credible and want them investigated.
The US Senate Intelligence Committee released two externally produced reports that provide further data on the 2016 national elections: one partly authored by D4D Advisory Board member Philip Howard, Director of the Oxford Internet Institute, in collaboration with the data analytics firm Graphika, and a second by New Knowledge, another company studying social media and disinformation. The reports find much broader use of social media accounts linked across platforms, particularly targeting conservative voters and African Americans. Platforms such as Instagram and YouTube have received less attention in the media, but were also found to have been used by groups such as the Russian Internet Research Agency, and the researchers suggested that social media platforms would need to share more data before the full scope of the campaigns can be understood. The Senate Intelligence Committee plans to release its own report on these issues in the near future.
D4D network partner International IDEA has entered into a collaboration with the Electoral Tribunal of Panama to support the newly created Digital Media Unit. The unit’s mandate is twofold. On one side, it is in charge of the Tribunal’s online communication, providing key electoral information to the population through Twitter, Facebook, Instagram and WhatsApp. On the other, the Digital Media Unit is spearheading the fight against disinformation through a 24/7 social media monitoring war room that supports the Tribunal in detecting electoral offenses, campaigns to raise awareness of the dangers of spreading disinformation, and engagement with diverse stakeholders to protect electoral integrity. The Unit has also launched the country’s first Digital Ethics Pact, encouraging the population to make responsible use of social media during the electoral campaign. International IDEA’s support will continue until the elections in May 2019 and beyond, aiming to position the Unit as the leader in the fight against disinformation in Panama.
The CEO of Rappler, a Philippine “Social News Network,” and D4D Advisory Board Member Maria Ressa has been charged by the Philippine government with tax evasion and failure to file tax returns; she could face up to ten years in prison. At the same time, she has been named one of Time Magazine’s 2018 Persons of the Year, part of what the magazine calls “the Guardians,” a group of journalists fighting for democratic values around the world. Ressa has been outspoken in her criticism of Philippine President Rodrigo Duterte’s violent “war on drugs” and other policies, while Rappler’s research and analysis have illuminated the influence campaigns and computational propaganda tactics his government and followers have pursued online.
D4D coalition member IFES is currently piloting its Holistic Exposure and Adaptation Testing (HEAT) process in Ukraine. The HEAT process is a method for identifying and testing the potential exploitation of vulnerabilities in the use of election data management technology. HEAT tests the technology itself, as well as the legal and operational frameworks in which the technology is being deployed. As part of the pilot, IFES conducted a cybersecurity assessment in summer 2018 and a cybersecurity tabletop simulation with the Ukrainian Central Election Commission in November 2018.
The Design 4 Democracy Coalition Advisory Board stands in solidarity with our fellow member Maria Ressa and with Rappler, the leading independent online news outlet in the Philippines. Ressa and Rappler Holdings were formally indicted on November 29, 2018, on charges of tax evasion, the latest action by the Philippine government in attempting to thwart the work of Rappler’s journalists, and Ressa turned herself in to authorities and posted bail this week.
The European Union announced a plan to counter disinformation ahead of the 2019 European elections. The plan includes increased resources for outside researchers and fact-checkers, strict enforcement of the platform-signed Code of Practice, and the introduction of the Rapid Alert System. In collaboration with the European Parliament and individual member states, the EU will work to have the Rapid Alert System operational by March 2019.
Following protests against the arrest of Afghan militia commander Alipoor, tensions were heightened by disinformation spread through social media. Government security forces posted that no civilians were harmed, while protesters circulated photos of a dead schoolgirl and other victims. National Directorate of Security Chief Massoum Stanekzai reported that commander Alipoor was arrested by the U.S. military; however, a spokesperson for U.S. forces tweeted that they had no involvement. The government narrative was bolstered by photos of wounded security officers that were later shown to have been taken years earlier.
At the inaugural Grand Committee on Disinformation, an empty chair was left for Facebook CEO Mark Zuckerberg, who turned down invitations to testify before the international committee of lawmakers. In his place, Facebook Vice President Richard Allan faced hard-hitting questions about email communication found within documents seized by the UK parliament and about Facebook’s role in global democracy challenges. At the close of the hearing, members of parliament from around the world signed a declaration of ‘Principles of the Law Governing the Internet.’ Meanwhile, MP Damian Collins, Chair of the UK parliament’s Digital, Culture, Media and Sport Committee, pressured the founder of the app developer Six4Three, under threat of imprisonment, to hand over internal Facebook documents and emails. The committee has now released those documents, which shed light on Facebook’s policy development, strategy and internal deliberations over data sharing with third-party developers, among other issues.
In an attempt to increase the transparency of political ads ahead of the 2019 EU Parliamentary Elections, Google has announced new policies that will require ad buyers to name the organization funding their ads. The parameters for a ‘political ad,’ however, are too narrow to capture much of the politically motivated content expected throughout the election period, sparking worry that the new policies will prove ineffective.
An independent BSR report, commissioned by Facebook, about the impact of the platform on the human rights crisis in Myanmar found evidence that Facebook did not take enough action to prevent violence from being spread on its site. Thousands of people have died in the conflict, with hundreds of thousands more displaced internally and into neighboring Bangladesh. The report warns Facebook of future human rights abuses around the 2020 elections and calls for the company to both create a new human rights policy and enforce its current hate speech policies by working with the local authorities.
The Harvard University Shorenstein Center for Media Studies published The Fight Against Disinformation in the U.S.: A Landscape Analysis, which explores the key players, tactics, and support for learning and programmatic responses to viral digital culture. Shifts in society change the way we use media, and shifts in the way people communicate are in turn reflected back in society. The paper discusses how these changes in media use are reshaping American culture and the ways in which some people are trying to combat computational propaganda, disinformation and other harmful forms of content.
In new research from Harvard’s Berkman Klein Center, Henry Farrell and Bruce Schneier argue that nations should approach disinformation as they approach other issues of state security. Different vulnerabilities are present, and different responses are required, depending on the type of government structure: autocracy or democracy. While autocracies produce contested knowledge about political actors themselves, democracies produce contested information about who holds power, making democracies more susceptible to narratives about the general organization of their politics.
A three-part New York Times documentary series explores Russian meddling in the 2016 U.S. election in the context of the wider Russian effort to divide the West. From the inception of Soviet fake news to its use today, the NYT uncovers the continuation of Russian interference in the United States and the reasons why the U.S. government and other nations are so woefully unprepared to counter this disinformation campaign.
Venezuela has adopted an RFID smart card system, first seen in China, that allows the government to track citizen behavior. This Fatherland Card, made by Chinese telecom giant ZTE Corp, collects information including healthcare data, voting participation, and subsidized food distribution in a central database. In a country where many citizens rely on government programs to feed their families and receive medical care, opponents call the government’s requirement that citizens obtain the Fatherland Card to access services akin to blackmail.
In a new report, social media researcher Robyn Caplan identifies three modes of content moderation used by today’s digital media platforms: artisanal, community-reliant, and industrial. Artisanal and industrial strategies are usually adopted by for-profit internet media companies, artisanal by smaller platforms like Vimeo and industrial by larger platforms like Google. Community-reliant strategies, meanwhile, are adopted by platforms like Reddit, where the platform’s consumers are also its main content generators.
The British Army’s 77th Brigade, a group of skilled social media analysts, graphic designers, video producers, and content writers, are hard at work running the nation’s information warfare program. This group is not alone in its focus on the importance of public opinion in conflict; other countries including the United States and Russia know the power of deploying information warfare. The 77th Brigade counters false narratives, works to improve public sentiment in conflict zones, and influences public opinion to strengthen the position of the British Army.
Following the Paris Peace Forum from November 11th to 13th, the leaders of Canada, France, Norway, Costa Rica, Tunisia, Senegal, and Lebanon authored an opinion piece in The Star. They acknowledged the growing threat of disinformation to journalism and the citizens of their countries, applauded the presentation of the International Information and Democracy Commission at the Forum, and called for further action within their own nations and around the world.
The Center for International Governance Innovation recently released a report that addresses questions of who is responsible for regulating and supervising the internet, and how society can be protected from the risks of an open internet without stifling its power of innovation. The essays included in this report detail the regulatory and political landscape of current law, impacts on censorship and civil rights, and recommendations for the role of the private sector.
In an ongoing disinformation campaign, Russia has accused the U.S. government of operating a laboratory in Georgia where scientists tested biological weapons and drugs, resulting in multiple fatalities. In response, the U.S. has accused Russia of operating a disinformation campaign to distract the world from the negative attention placed on the Kremlin by the poisonings of Russian dissidents in the United Kingdom.
Over 50 countries signed on to the Paris Call for Trust and Security in Cyberspace, an agreement released by French President Emmanuel Macron as part of the Paris Peace Forum in November. China, Russia, Australia, North Korea, Iran, and the U.S. abstained from signing, despite representing hubs of ICTs, online infrastructure and cybersecurity resources, personnel and experience. Tech companies including Facebook, Microsoft, IBM, Google, and HP signed alongside civil society organizations and technical experts. While the agreement does not call for specific legislation, it does advocate for the promotion of human rights on the internet, the allocation of unique responsibilities to the private sector, and an end to hacking between nations in peacetime.
The BBC World Service released two ‘Beyond Fake News’ reports, one focusing on India and the other on Kenya and Nigeria. While the content of viral news differed between countries, many people shared information from alternative sources because of widespread distrust of mainstream media outlets, an inflated view of their own ability to discern fact from fiction, and the desire to promote national identity over truth.
The RAND Corporation released a report that explores the threat of Russian-language social media activity to former Soviet states. Employing interviews with experts in security and regional politics, as well as analysis of social media data, this report digs into the Kremlin’s use of shared post-Soviet experiences to spread disinformation. The report offers recommendations including better tracking of Russian media, increasing media literacy, and improving reliable content to offer an alternative to the Kremlin agenda.
Amelia Acker’s report investigates the ways in which metadata can be manipulated to inform the disinformation efforts of bad actors, as well as strategies to stop them from misleading the public. Acker unpacks practices used by disinformation proponents to increase their impact on social media by engaging the platform’s own algorithms. Acker hopes to inform the work of technology companies and other interested parties in the fight against disinformation.
This research from the “Personal Data and Political Influence” Project is part of a Brazilian Country Report by Coding Rights. The 2018 Brazilian election took place amid widespread online influence campaigns, often making use of personal data to target voters. This report addresses the use of this personal data in political campaigns as well as the regulatory and ethical questions that result from its increased use.
Supported by NDI and more than a dozen other international partners, the Design 4 Democracy Coalition held its first Advisory Board meeting on October 25th, in conjunction with MisinfoCon London and Mozilla Fest (MozFest). The D4D Coalition seeks to act as a force multiplier for organizations that advocate for more democracy-friendly technology platforms and policies by providing an opportunity for collaboration and mutual support within the democracy community. The Coalition also provides direct lines of communication with major technology platforms and is improving communication between the democracy community and the tech industry.
With election day drawing nearer, disinformation efforts to influence voters increase. The New York Times published a “Roundup” of disinformation-related coverage, its impact on the U.S. midterm elections, and its spread internationally. In response to suspicious pro-Saudi Arabian government tweets, Twitter suspended suspected bots that tweeted and re-tweeted identical talking points including “#unfollow_enemies_of_the_nation.” Twitter also released 11 million tweets believed to be from state-backed information operations originating in Russia and Iran. Facebook pages that appeared to be for Women’s Marches were found to originate in Bangladesh and sought to sell march-related merchandise.
As part of a broader series of discussions on tech and democracy, the National Democratic Institute and International Republican Institute joined partners on October 18 to host a reception and discussion in San Francisco about the ways tech is impacting democratic processes and participation around the world. The event featured perspectives from NDI President Derek Mitchell and IRI President Dan Twining, and explored opportunities for civil society, technologists, and others to collaborate through efforts like the D4D Coalition. Participants included Bay Area stakeholders from the tech industry, academia, and the international affairs community, and co-hosts included the Pacific Council, TheBridge, and Bay Area International Link.
In an attempt to increase transparency and enable academic investigation and research, Twitter released data about accounts and content that have been part of global disinformation campaigns since 2016. Included in the data are two accounts that had not been part of earlier releases and are thought by Twitter to be state-backed. In total, information about 3,841 accounts connected to the IRA in Russia and 770 other accounts has been released to the public. However, researchers found additional fake Twitter accounts, not identified by Twitter’s search, that appear to be linked to the Russian government and that promoted politically benign topics such as Taco Bell and Coachella.
Data & Society published a report, “Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech”, which examines the relationship between politics, media, and ill-meaning actors. It lays out the tactics, technologies, and conditions that anti-democratic and politically motivated actors use to weaponize digital advertising. The report finds that actors using the “Digital Influence Machine” employ three main strategies: dividing an opponent’s supporters, leveraging behavioral science techniques to influence consumers, and mobilizing those who share their views by threatening their identity, political or otherwise.
In a New York Times op-ed, researchers and fact checkers in Brazil called on WhatsApp to change its system after finding that it was being widely used to spread disinformation in the runup to the national election. A poll found that 44 percent of Brazilians use WhatsApp to read political news, and a growing amount of misinformation and disinformation has been shared widely through the app. The writers called on the company to restrict forwards and broadcasts and to limit the size of new groups in Brazil during the election period. WhatsApp, a Facebook subsidiary, later banned over 100,000 accounts associated with sharing false stories, but did not take up the suggestions before the election on October 28th.
Hundreds of members of the Myanmar military, posing as civilians and often using tactics modeled after those used by Russia, have used Facebook to spread disinformation about the Muslim Rohingya minority. One of the largest forced migrations in human history, involving over 700,000 people, is widely attributed to this type of anti-Rohingya propaganda and the violence it incited. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, reported that the company found “clear and deliberate attempts to covertly spread propaganda that were directly linked to the Myanmar military.”
Google CEO Sundar Pichai defended the company’s plan to build a search engine for use in China, saying that development was going well and would proceed despite questions about the initiative’s potential for censorship and surveillance. Mr. Pichai described the controversial decision as in keeping with Google’s mission to provide information to all people. Google employees have voiced concern over the proposal, citing the company’s commitments to the Global Network Initiative’s Principles on Freedom of Expression and Privacy.
Full Fact, a British fact-checking group, released a report entitled “Tackling misinformation in an open society. How to respond to misinformation and disinformation when the cure risks being worse than the disease.” The report explains that it is more realistic to build resilience against disinformation and misinformation in the UK than to eliminate it altogether. In the paper, Full Fact sets out a proportionate, risk-based framework for responding to disinformation and misinformation, and cautions against taking action without thinking through the consequences and allowing time for further research into the harm caused by such campaigns.
A vulnerability in Google+ profiles opened user data to 438 applications between 2015 and March 2018, when the problem was discovered as part of an internal audit. The data breach resulted from a flaw in an API that the company had created to let developers access profile information about individuals who used their apps and had given permission to share their profile data. Internal memos, investigative journalism, and a blog post shed light on Google’s decision not to go public with the information when it originally discovered the problem. Google announced increased security measures, including the termination of the Google+ service, in hopes of rectifying the problem. CEO Sundar Pichai has agreed to testify before Congress on the breach in the near future.
The Design 4 Democracy (D4D) Coalition was honored to be a part of the 2018 g0v Summit, from October 5-7, 2018. The Summit provided an opportunity to share information about the Coalition with other groups sharing similar objectives, including the Global Tech Accountability Network, a new initiative arising out of the #DearMark letter, led by organizations in Myanmar. Among the areas of collaboration discussed were the development of an open data standard on social media monitoring, together with a platform implementing the standard, for social media monitoring organizations to adapt to their own specific needs. The use of shared tools and data standards provides an opportunity for social media monitoring groups to share data with like-minded, trusted partners in other countries, providing a more complete picture of hate speech or disinformation in a regional or global context. In addition, the Summit provided an opportunity to connect with D4D partners, g0v and CoFacts, which is working with the Coalition to adapt the CoFacts fact-checking chatbot from LINE to FB Messenger and WhatsApp.
Facebook is launching fact-checking tools in Kenya and plans to extend the service across the African continent. The tool will demote news stories marked as fake and warn users trying to share them. The focus on Facebook’s newsfeed, however, has drawn criticism as not the most effective use of fact-checking technology: although Facebook products are highly popular in Africa, far more people communicate through WhatsApp, and the fact-checking service will not extend to WhatsApp.
D4D Coalition member NDI helped organize a conference on “Enhancing Media Literacy and Combating Disinformation” in Praia, Cape Verde, in collaboration with the Cape Verdean government and the Association of Journalists of Cape Verde (AJOC). Representatives from NDI spoke on the growing online threats to democratic values in a new age of disinformation and suggested potential means the media and government could use to improve information integrity in the country.
A new law in California makes covert bots illegal. The law requires fake profiles or bots to be labeled as artificial, in hopes that consumers of online content will be better informed about the source of the information they view. The legislation is groundbreaking, but some who study bots caution against legislating without a thorough understanding of the different types of bots and how they function.
An upcoming referendum on a name change for the Republic of Macedonia has helped create an online environment awash in disinformation campaigns, some linked to Russia, which is trying to prevent the country from joining NATO and moving towards the West.
The D4D Coalition will present on its work at the g0v Summit in Taiwan from October 5 to 7, 2018, particularly the potential to develop disinformation monitoring efforts in Asia, with the support of partners, tools and techniques bolstered by D4D.
Capitalizing on weaknesses in U.S. technology and social media platforms, businesses in North Korea are able to connect with people in other countries to both provide and solicit services, circumventing U.S. sanctions. By hiding their identities, a group of North Korean web developers have allegedly advertised their services on Western online platforms and built a website for a business in Australia. A web of fake social media accounts and front companies in other countries enable this underground business.
The prevalence of deepfakes, videos manipulated to show fake events, is increasing, prompting Reps. Adam B. Schiff (D-Calif.) and Stephanie Murphy (D-Fla.) to call for the intelligence community to analyze the problem and propose solutions. In their letter to the Director of National Intelligence, Daniel Coats, they call these videos a national security risk. Social media companies’ growing focus on stopping disinformation has been encouraged by pressure from Congress and the intelligence community, and the lawmakers hope the same trend will occur in the fight against deepfakes.
On September 12, 2018, members of the Design 4 Democracy Coalition held an off-the-record workshop in Kyiv, Ukraine on emerging threats in disinformation and cybersecurity. Facilitated by NDI, IRI, IFES, and StopFake, the workshop included a range of organizations from civil society, government, technology companies and the cybersecurity community. Following lightning talks and presentations on research findings, participants took part in focused discussions on disinformation and cybersecurity in the lead up to the 2019 elections, as well as potential areas for improved collaboration with technology platforms to counter identified threats.
The D4D team is actively engaged in Macedonia ahead of its historic national referendum on September 30, 2018. D4D is supporting efforts by local stakeholders to ensure that disinformation does not interfere with the ability of the Macedonian people to express their will.
The Oxford Internet Institute’s Computational Propaganda Project found that the upcoming Swedish election has seen a high proportion of ‘junk news’ shared by users on social media sites, second only to that seen in the 2016 U.S. presidential election. Some of these fake sites are modeled after reputable news sources, complicating users’ search for truth. Researchers were surprised to learn that eight of the ten most-shared fake news sites were domestic, prompting a renewed focus on local actors over international influences.
UNESCO published this handbook for use by journalism educators and publishing journalists. It includes discussions of journalism’s responsibilities regarding disinformation, misinformation, and mal-information. Written as a curriculum, this handbook covers everything from the current state of ‘fake news’, to the evolution of journalism, to recommendations moving forward.
The Chinese government has created a national-level platform, operated by the ‘Internet Illegal Information Reporting Center’ and run by Xinhua, to publicly differentiate fact from fake news for its citizens. To create the platform, over 40 “rumor-refuting platforms” were combined into one official platform. The purpose behind this new official fact-checker, state-run xinhuanet.com explains, is to stop rumors and illegal information from disturbing the social order.
A Reuters special report documents more than 1,000 posts, comments, and crude images on Facebook calling for violence and discrimination against Myanmar’s Muslims. Despite official rules banning such hate speech, Facebook appeared largely unprepared to crack down on this wave of anti-Muslim posts. For a long time, the company lacked enough Burmese-speaking employees, as well as programs that could effectively detect hate speech in the language and systems to translate the text, among other limitations preventing an effective response. Facebook has also recently blocked the accounts of 18 users and 52 pages linked to Myanmar’s military, further highlighting the country’s worsening relationship with the world’s largest social media network.
A recent piece from Vice Motherboard discusses the catch-22 Facebook confronts as it tries to create a standard moderation strategy for a world where local context is critically important to such moderation. Confronting mounting criticism from many quarters, the company has recently sharpened its focus on, and increased the number of employees moderating, false news and other posts that violate its Community Standards. Is it possible for Facebook to maintain a platform at global scale that is both safe and open?
The messaging platform Line has seen a large increase in the quantity of content posted by users that spread incorrect information, especially about healthcare. These posts, which often target elderly users, advertise fake products or share incorrect information, like the power of kale juice to cure bone pain. Line is working with fact-checking organizations and the government to attempt to combat the problem.
A hacking group affiliated with the Russian government and linked to interference in the 2016 election, APT28, is believed to be behind the creation of six websites that mimic government and public policy groups. The U.S. Senate, International Republican Institute, and the Hudson Institute were among those targeted by the sites, which appear to seek credentials from members of these organizations. Microsoft, which discovered fake websites with domains like my-iri.org and senate.group, has announced plans to fight cybersecurity threats targeting political organizations. The tech company will offer free cybersecurity protections to likely targets of groups like APT28, including campaign offices and candidates, provided that they use Office 365 software.
Facebook released statements on its website about two unconnected attempts by foreign governments to influence users or steal their personal information. The attacks, which originated in Russia and Iran, appear unconnected despite their use of similar tactics. Facebook removed 652 accounts that were traced to Iranian state media, along with others identified as linked to Russian military intelligence.
In response to users incorrectly reporting content as “false news,” Facebook has begun rating its users’ credibility based on thousands of behavioral data points. Users whose reports prove accurate — flagging content that is, in fact, false — will have their future flags reviewed with higher priority than users who repeatedly report articles that are not false.
Politicians in the US and Europe are devising new policies to limit microtargeting, a technique for aiming advertisements at specific subgroups which many believe is fueling polarization and voter manipulation. As researchers have identified, microtargeting has become a key weapon for foreign election meddlers, and many argue Facebook’s efforts to deter this exploitation have had little effect.
NDI is planning a second disinformation event in collaboration with the Mexico National Electoral Institute (Instituto Nacional Electoral, INE), the Center for Research and Teaching in Economics (Centro de Investigacion y Docencias Economicas, CIDE), and the National Autonomous University of Mexico (Universidad Nacional Autonoma de Mexico, UNAM). This will follow up on the forum held in March 2018 before the national elections in July, and will review the 2018 electoral process, highlight local efforts that tackled disinformation, and discuss lessons learned for future elections. Similar events with NDI and FGV-DAPP participation are being held in Brazil and Colombia throughout July and August.
A Portland Communications survey of influencers shaping Twitter conversations on recent African elections shows that Africa is not immune to fake news, the rise of bots, or external influence on elections. The survey finds that 53% of key influencers came from outside the country holding the election, with many influencers coming from outside the continent. The report also reveals that bots had a major presence in election discourse while politicians had comparatively minor influence on discussions. Details on specific countries are provided.
A report by the Institute for the Future reveals a widespread phenomenon of “state-sponsored trolling”: government use of targeted online hate and harassment campaigns to intimidate and silence individuals critical of the state. New surveillance and hacking technologies have allowed governments to anonymously track, threaten, and publicly delegitimize opponents on a greater scale than ever before. The report concludes with recommendations for technology companies, lawyers, and lawmakers.
A delegation including D4D partners visited Skopje, Macedonia in late July to assess the potential for disinformation to impact the forthcoming referendum on a proposed change to the country’s name, and to better understand local needs for coordination and support. D4D partners continue to monitor the situation closely.
This comprehensive essay details the myriad political, social, and business dynamics that transformed social media from the pro-democratic tool of the Arab Spring into the anti-democratic weapon of authoritarians and election-meddlers today. Writer Zeynep Tufekci points to the lack of regulations on tech firms, the unwillingness of the US to bolster its online defenses, and the appropriation of social media tools by authoritarians as key factors in this transition.
Oxford Internet Institute’s Project on Computational Propaganda has released a new report analyzing the growing trends of organized media manipulation, as well as the expanding capacities, strategies, and resources that support them. In a fast-growing number of countries, political parties and government agencies are using social media to manipulate domestic public opinion, often in response to perceived threats from foreign interference and junk news. Since 2010, political parties and governments have spent over half a billion dollars developing and implementing these operations.
This guide develops a learning module for journalists and educators meant to situate contemporary information disorder in a long history of misinformation, disinformation and propaganda. It establishes a broad historical overview of past forms of information disorder propagated by states, public figures, and the media. Established by the International Center for Journalists, the guide hopes to equip contemporary journalists and educators with a sharpened, contextual knowledge of disinformation-related issues.
The Office of Senate Intelligence Committee Vice Chairman Mark Warner prepared a policy paper detailing 20 options lawmakers can consider to combat disinformation, protect user privacy, and promote competition in the tech space. These options include media literacy programs, new rules for social media platforms, and greater user control over a company’s use of personal data.
The UK Parliament’s Digital, Culture, Media and Sport Committee, Britain’s chief investigating authority on disinformation, released the first of its thorough reports on the subject. What began as an inquiry into a few major scandals turned into a comprehensive 89-page document on Russian interference, tech company responsibility for disinformation, Cambridge Analytica, and data targeting, putting the UK parliament at the center of global discussions. The report also includes a list of recommendations for regulation, legislation, codes of ethics, and police investigations.
In its efforts to stop the spread of misinformation, Facebook deactivated a large network of pages and accounts thought to be led by Brazilian right-wing activists from the Movimento Brasil Livre (MBL), or “Free Brazil Movement.” According to a number of sources, MBL organizers posed as different independent news outlets to develop coordinated messaging campaigns in support of their policy positions. Facebook removed the accounts, alleging that they violated the company’s authenticity policies.
The European Union issued Google a record-breaking antitrust fine of €4.34 billion ($5.06 billion) over Google’s deals with mobile phone makers and telecommunications operators. According to EU regulators, Google’s contracts with phone makers effectively force those companies to prioritize Google apps and services in exchange for Google providing its Android operating system for free. The fine represents a major step towards stronger government oversight of technology companies, at least in the EU.
The Getulio Vargas Foundation Office of Public Policy (Fundação Getúlio Vargas, Diretoria de Análise de Políticas Públicas, or FGV-DAPP) launched its Sala de Democracia Digital (Digital Democracy Room) at an event on July 25 in Rio de Janeiro, Brazil with support from NDI, including the participation of NDI’s Colombia Country Director Francisco Herrero. The online project seeks to analyze political discourse on the web during the 2018 Brazilian elections and will include weekly reports on the use of bots and fake news during the months prior to the October poll, as well as policy papers and recommendations.
As the EU and China push forward with tighter internet regulations, the US is losing its place as a key agenda-setter on internet freedom and cybersecurity policy. The US has lately been taking a far more passive role on countering authoritarian internet policies in China and other developing countries, and has neglected to confront the EU over its strict user privacy regulations that could threaten global cybersecurity efforts.
The parliament of Uganda has, for now, upheld a controversial tax on social media use despite widespread protests against it. President Museveni, who first introduced the law, has cited the spread of “gossip” as a key reason for its existence. Seen as an attempt at state censorship, the tax has led to a significant drop in social media use in addition to creating new economic burdens for the Ugandan people.
A recent study by the research firm Ghost Data reveals that Instagram may have as many as 95 million bots, an increase over 2015 levels. Bot presence on Instagram continues to grow despite Facebook’s efforts to curb it. The rise in Instagram bots is especially concerning because images and videos, which are uniquely difficult to track and identify, could play a larger role in coming elections.
A joint investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and partners reveals that Macedonia’s fake American news industry in Veles was launched by well-known Macedonian attorney Trajche Arsov, not by apolitical teens as previously reported. During the 2016 election, Arsov worked closely with several high-profile American partners to churn out over a hundred fake news websites on social media. Macedonian security agencies are now cooperating with law enforcement in the US and several other European countries to investigate possible ties between Arsov and the recently indicted Russian hackers.
In light of episodes of ethnic violence in Myanmar, India, and Sri Lanka, Facebook has announced tighter restrictions on disinformation and misinformation spread through its site and Instagram. The company will begin removing false information that might lead to physical harm. However, the policy does not apply to WhatsApp, which has been a major catalyst for recent violent incidents.
A bill that would require certain automated social media accounts to identify themselves as bots is currently moving through the California state legislature. Many critics charge that the proposed law lacks specifics, that it is not constitutional, or that it will not effectively solve the problem of bot influence on voters. The bill, among the first of its kind, demonstrates the challenges that face lawmakers seeking legislative solutions to the surge of automated accounts.
On July 16, NDI and Coalition partners convened a forum on the margins of the OGP Summit in Tbilisi, Georgia, titled “Scaling the Future of Civic Tech.” The day-long event featured discussions on ways the civic tech movement is strengthening democracy in particular national and subnational contexts, and highlighted the D4D Coalition as a means of sustaining collaboration on shared priorities.
New findings reveal that Russian propagators of disinformation often posed as local news sources to exploit the American public’s higher levels of trust in local news organizations. Many fake local news Twitter accounts did not actually post false information, opting instead to build long-term credibility that could be leveraged when needed. These cases further confirm that the Russian-led disinformation campaign has been years in the making.
Twitter has escalated its defense against fake accounts and bots in the past few months, suspending more than one million accounts a day. This is especially significant given the company’s usual prioritization of freedom of speech over policing users’ behavior. In an effort to promote trust amongst active users, Twitter has already removed large amounts of inactive or suspicious follower accounts thought to be promoting disinformation and spam.
YouTube has announced a series of developments meant to promote more reliable, “authoritative” news sources on its site. The proposed changes mark a departure from the platform’s current video recommendation algorithms, which have promoted factually incorrect conspiracy theory videos to users who had a history of watching similar ones.
Frightened mobs in India have killed two dozen innocent people as false rumors about child kidnapping spread through the widely used messaging platform WhatsApp. The app has facilitated the proliferation of false information in countries across the world, leading to instances of physical harm in Brazil and Sri Lanka as well. Not only does WhatsApp host the majority of viral disinformation campaigns in the largest number of countries, but its encryption and private messaging systems also make it uniquely difficult to slow these campaigns. In response to these trends, WhatsApp has placed restrictions on the number of contacts to whom users can forward messages. WhatsApp also announced plans to support third-party developers of fact-checking technology for the app.
Philip Howard, Director of the Oxford Internet Institute (OII) and D4D advisory board member, testified before the Senate Intelligence Committee on the work of OII’s computational propaganda project, state sponsored disinformation worldwide, and the potential for foreign influence operations targeting U.S. elections.
First Draft, a project of Harvard University’s John F. Kennedy School of Government, published a definitional toolbox of terms related to technology and misinformation. The toolbox aims to create a shared vocabulary amongst policymakers, citizens, and academics. Part One includes a glossary defining “commonly used” and “frequently misunderstood” terms related to information disorder. Part Two attempts to map the thirteen sub-categories of the information disorder field in order to facilitate more strategic, targeted research and action. Part Three includes downloadable high-resolution graphics created to help explain information disorder.
An investigation by the Digital Forensic Research Lab reveals a deep network of exchanges between various users and several Brazil-based groups that sell pages, likes, and shares on Facebook for money. At large scale, such likes-and-shares-for-cash transactions have the potential to threaten the integrity of Brazil’s upcoming elections; similar concerns arose in Mexico’s recent elections, which were flooded with messages from inauthentic accounts.
Research conducted by the German Tactical Technology Collective and their partners reveals that WhatsApp is now the primary platform for political messaging across the Global South, especially in rural areas with limited internet access. The report analyzes this phenomenon, seeking to answer why WhatsApp is such a powerful tool, how politicians and campaigners use the platform, what strategies organizers use to exploit the platform and what the potential implications are of this trend. The report also includes several case studies of key countries impacted by WhatsApp.
Vietnam recently approved a controversial new cybersecurity law regulating technology firms’ use of personal data. The legislation requires that social media firms turn over subscriber information, IP addresses, and account information to the Ministry of Public Security, and remove content from their platforms when requested by the government. The legislation also creates grounds for charging citizens who post “anti-government propaganda” or any material that “incites violence and disturbs public security.”
From June 22-23, 2018, the Atlantic Council’s Digital Forensics Lab hosted the 360/OS open source summit in Berlin, bringing together journalists, activists, innovators, and leaders from around the world as part of its digital solidarity movement for objective facts and reality — a cornerstone of democracy.
Representatives from the Design for Democracy Coalition attended the Copenhagen Democracy Summit, organized and hosted by the Alliance of Democracies, with the sponsorship of a wide array of organizations including Microsoft, Facebook, the University of Denver, NDI and IRI.
Several D4D Coalition partners convened in Brussels to participate in the forum “Representation in the Age of Populism: Ideas for Global Action,” organized by the International Institute for Democracy and Electoral Assistance (International IDEA).
On June 8-9, the Verkhovna Rada, Ukraine’s unicameral parliamentary body, hosted a conference to discuss current threats to democracy, a follow-up to the Chairman of the Verkhovna Rada’s official visit to Moldova in March of this year.
Supporting partners of the Design 4 Democracy Coalition, the National Democratic Institute (NDI) and the International Republican Institute (IRI), along with the Defending Digital Democracy project (D3P) at Harvard Kennedy School’s Belfer Center, convened at Google’s Belgium office for the public launch of “The Cybersecurity Campaign Playbook: European Edition” on May 22, 2018. The event featured a series of discussions including D3P Senior Fellows Robby Mook and Matt Rhoades, representatives from Microsoft and Google, European parliamentarians and policymakers, and officials from the Belfer Center, IRI, and NDI.
WeChat, a social media platform popular with Chinese immigrants in the United States, presents new challenges and a new perspective in the fight against misinformation. The platform heavily features local news and sensational stories, while passing over other popular topics including the economy and healthcare. Misinformation on WeChat shares many characteristics with misinformation in mainstream media, but its blend of U.S. and Chinese media practices also shows ways in which immigrant populations diverge from those norms. On the platform, conservative voices feature prominently, with both liberal and conservative users discussing the role of Chinese immigrants in politics and in the United States.