Sunday, April 29, 2018

Social Media Promotes Tribalism in Pakistan

Social media newsfeeds are customized for each user: they are driven by the user's profile and tuned to reinforce existing preferences and prejudices, while posts that don't fit the profile are simply never displayed. The result is increasing tribalism around the world. American and British intelligence agencies claim that Russian intelligence has used social media to promote divisions and manipulate public opinion in the West. Like the US and the UK, Pakistan has ethnic, sectarian and regional fault-lines that make it vulnerable to similar social media manipulation. It is very likely that the intelligence agencies of countries hostile to Pakistan are exploiting these divisions for their own ends. Various pronouncements by India's current and former intelligence and security officials reinforce this suspicion.
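The profile-driven filtering described above can be sketched as a toy model. This is purely illustrative: real newsfeed ranking uses engagement signals and machine learning, and every name and number below is invented for the example.

```python
# Toy model of profile-driven newsfeed filtering (illustrative only --
# not any platform's actual algorithm).

def profile_match(post_topics, user_interests):
    """Fraction of a post's topics that overlap the user's recorded interests."""
    if not post_topics:
        return 0.0
    return len(post_topics & user_interests) / len(post_topics)

def build_feed(posts, user_interests, threshold=0.5):
    """Keep only posts that sufficiently match the profile; the rest are never shown."""
    return [name for name, topics in posts
            if profile_match(topics, user_interests) >= threshold]

posts = [
    ("post A", {"cricket", "politics"}),       # fully matches the profile
    ("post B", {"opposing-view", "foreign"}),  # no overlap: silently dropped
    ("post C", {"cricket"}),                   # partial profile, full match
]
user_interests = {"cricket", "politics"}

print(build_feed(posts, user_interests))  # ['post A', 'post C']
```

Raising the threshold narrows the "bubble" further: the user sees an ever purer stream of agreeable content, and dissenting posts like "post B" never appear at all.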

Tribalism:

All humans are born with tribal instincts. People embrace group identities based on birthplace, language, region, sect, religion, nation, school, sports team, etc. to define themselves.

Such group affiliations can give people a sense of belonging, but they are sometimes also used to exclude others with the purpose of promoting hostility and violence. Social media platforms are being used both ways: to unite and to divide people.

Powerful new media such as Facebook, Twitter, YouTube and WhatsApp lend themselves to use as extensions of the covert warfare carried out by intelligence agencies against nations they see as hostile.

Social Media Platforms:

Social media platforms like Facebook and Twitter are powerful magnets for marketers, extremist groups and intelligence agencies. These actors spend considerable time and money on such platforms to reach and manipulate their targets.

Trolls and bots proliferate, and societies become more deeply divided along political, ethnic, racial, religious, ideological and regional lines. It is a problem that every nation must respond to.

Developed nations in Europe and North America, with stronger institutions, are generally more capable of dealing with the consequences of such divisions. But increasing social media penetration in developing nations with weak institutions sometimes causes them to descend into violent riots. In a recent piece titled "Where Countries Are Tinderboxes and Facebook Is a Match", the New York Times cited recent examples of riots and lynchings caused by social media posts in India, Indonesia, Mexico, Myanmar and Sri Lanka.

Brexit and Trump:

The unexpected result of Brexit, the British vote to leave the European Union, shocked many in the UK and Europe. It was soon followed by an even bigger shock: the unexpected election of Donald J. Trump as President of the United States. Western intelligence agencies have now concluded that trolls sponsored by Russian intelligence played a major role in manipulating public opinion in the United Kingdom and the United States.

In February 2018, the US justice department indicted 13 Russians and three Russian entities in an alleged conspiracy to defraud the United States, including by tampering in the 2016 presidential election on behalf of Donald Trump and against Hillary Clinton, according to media reports.

The US DOJ indictment identified the Internet Research Agency, a St Petersburg-based group to which millions of impostor social media accounts have been traced, as a primary offender. The indictment also charged Russian individuals who funded the alleged election tampering conspiracy or who otherwise participated in it.

Some of the Russian social media posts were used to organize protests and counter protests in the United States on issues relating to race and religion.

US Senator Richard Burr confirmed that two groups converged outside the Islamic Da’wah Center of Houston in 2016, the Texas Tribune reported. One had gathered at the behest of the “Heart of Texas” Facebook group for a “Stop Islamification of Texas” rally, while the other, spurred on by the “United Muslims of America” Facebook page, had organized a counter-protest to “Save Islamic Knowledge.”

A Russian-sponsored Facebook ad appeared in late 2015 or early 2016, sources told CNN; though it was meant to appear supportive of the Black Lives Matter movement, it may also have portrayed the group as threatening to some white residents.

Indian Trolls:

It can be safely assumed that the Russians are not alone in using social media against nations they see as hostile. It is also a safe bet that Indian intelligence agencies are deploying their troll farms and bots to divide Pakistanis.

India's ruling BJP has extensively used social media apps to spread rumors, innuendo, fake news, outright lies and various forms of disinformation against anyone seen as even mildly critical of its leader, Narendra Modi. Its harshest abuse has been targeted at Opposition Congress party leaders, various liberal individuals and groups, Muslims and Pakistanis.

Swati Chaturvedi, author of I Am a Troll, has cited many instances of hateful tweets from Modi-loving Hindu trolls, including singer Abhijeet's lies meant to generate hatred against Muslims and Pakistan, and BJP MP Hukum Singh's false claim of a "Hindu exodus" from Kairana in western Uttar Pradesh, which he blamed on Muslims.

Vikram Sood, a former top spy in India, has elaborated on India's covert warfare options to target Pakistan in the following words: "The media is a favorite instrument, provided it is not left to the bureaucrats because then we will end up with some clumsy and implausible propaganda effort. More than the electronic and print media, it is now the internet and YouTube that can be the next-generation weapons of psychological war. Terrorists use these liberally and so should those required to counter terrorism."

In a 2013 speech at Sastra University, Indian Prime Minister Modi's National Security Advisor Ajit Doval revealed his covert war strategy against Pakistan as follows:  "How do you tackle Pakistan?.....We start working on Pakistan's vulnerabilities-- economic, internal security, political, isolating them internationally, it can be anything..... it can be defeating Pakistan's policies in Afghanistan...... You stop the terrorists by denying them weapons, funds and manpower. Deny them funds by countering with one-and-a-half times more funding. If they have 1200 crores give them 1800 crores and they are on our side...who are the Taliban fighting for? It's because they haven't got jobs or someone has misled them. The Taliban are mercenaries. So go for more of the covert thing (against Pakistan)..."

Summary: 

Social media newsfeeds are customized for each user: they are driven by the user's profile and tuned to reinforce existing preferences and prejudices, while posts that don't fit the profile are simply never displayed. The result is increasing tribalism around the world. American and British intelligence agencies claim that Russian intelligence has used social media to manipulate public opinion in the West. Like the US and the UK, Pakistan has ethnic, sectarian and regional fault-lines that make it vulnerable to similar social media manipulation. It is very likely that the intelligence agencies of countries hostile to Pakistan are exploiting these divisions for their own ends. Various pronouncements by India's current and former intelligence and security officials reinforce this suspicion.

Here's a discussion on the subject in Urdu:

https://youtu.be/zuPMy65O6-s




Related Links:

Haq's Musings

South Asia Investor Review

Social Media: Blessing or Curse For Pakistan?

Planted Stories in Media

Indian BJP Troll Farm

Kulbhushan Jadhav Caught in Balochistan

The Story of Pakistan's M8 Motorway

Pakistan-China-Russia vs India-Japan-US

Riaz Haq's Youtube Channel

33 comments:

Riaz Haq said...

Could Facebook Data Leaks Impact Pakistan’s Elections?
In Pakistan the spread of misinformation is a much graver problem than the impact it might have on polling.

https://thediplomat.com/2018/05/could-facebook-data-leaks-impact-pakistans-elections/

While testifying before a joint hearing of the U.S. Senate’s Commerce and Judiciary committees, Zuckerberg said his company was introducing the latest new artificial intelligence tools to target fake accounts.

However, digital analysts and rights activists warn that while these actions would help protect data henceforth, Facebook can’t do much to undo the damage that might’ve already been done owing to the data leaks from the past.

“There is no way of undoing a particular case of data theft. Short of deleting or destroying the database, no other action would be useful, and it’s nearly impossible since as they say ‘the data has left the building’,” says Asad Baig, the founder and executive director of Media Matters for Democracy, while speaking with The Diplomat.

“The fact of the matter is, [Cambridge Analytica] has Facebook user data, including the users from Pakistan and if someone wants to exploit it for profiling, and use it for political gains to fine-tune their messages for a local public nothing much can be done about it, and the parties who exploit this data will have an undue advantage in their political campaigns.”

CEO and founder of Digital Rights Foundation, Nighat Dad, agrees that previous damage can’t be undone, but adds that Facebook needs to completely rethink its model to serve users.

“What Facebook can certainly do is to ensure that it takes strict measures to protect the data of its users in the future. This can only be done by strong privacy policies and their implementation that serve the users instead of the corporation itself,” she told The Diplomat.

While fake news has impacted voting patterns the world over, it has become especially problematic in Pakistan with all leading political parties asking their social media teams to create fake profiles as part of their social media strategy.

Talking to The Diplomat off the record, social media managers from the ruling Pakistan Muslim League-Nawaz (PML-N), and the two main opposition parties Pakistan Tehrik-e-Insaf (PTI) and the Pakistan People’s Party (PPP), confirmed that creation of fake Facebook and Twitter accounts to propagate their narratives was the official policy of each party.

“Everyone’s running fake Facebook accounts and Twitter bots, so we’re just keeping pace with what others are doing,” a social media executive of the PML-N who requested anonymity told The Diplomat. “It was the PTI that started this trend. So we’re just countering propaganda with propaganda,” they added, citing the fact that one of the rumours that the PML-N social media team has had to counter in recent weeks was the false report that the party has hired Cambridge Analytica’s services for the upcoming elections.

Kaleem Hafeez, a member of the PTI social media team, told The Diplomat that his party isn’t ruling out the possibility of the PML-N purchasing data to manipulate elections, considering the party’s control over the IT ministry.

“Our data analysts are monitoring what other parties are doing, and the undemocratic tools and methods being used to rig elections digitally,” Hafeez said. “Considering that the PML-N was involved in heavy on-field rigging in the 2013 balloting, it won’t be a surprise if they do the same digitally as well.”

Riaz Haq said...

Russian news may be biased – but so is much western media
Piers Robinson
Manipulation of the news for propaganda purposes is not the prerogative of the west’s enemies. It’s vital to look at all media, including the UK’s, with a critical eye

https://www.theguardian.com/commentisfree/2016/aug/02/russian-propaganda-western-media-manipulation

Whatever the accuracy, or lack thereof, of RT and whatever its actual impact on western audiences, one of the problems with these kinds of arguments is that they fall straight into the trap of presenting media that are aligned with official adversaries as inherently propagandistic and deceitful, while the output of “our” media is presumed to be objective and truthful. Moreover, the impression given is that our governments engage in truthful “public relations”, “strategic communication” and “public diplomacy” while the Russians lie through “propaganda”.

Neither of these claims has significant academic support. A substantial body of research conducted over many decades highlights the proximity between western news media and their respective governments, especially in the realm of foreign affairs. For reasons that include overreliance on government officials as news sources, economic constraints, the imperatives of big business and good old-fashioned patriotism, mainstream western media frequently fail to meet democratic expectations regarding independence. Our own study at Manchester University of UK media coverage of the 2003 Iraq invasion found that most UK mainstream media performed to reinforce official views rather than to challenge them.

As for the supposedly benign communication activities of our own governments – again, there are ample grounds to challenge the understanding that the “strategic communication” activities of our governments can be understood as free from the kind of manipulative “propaganda” of which the Russian government is accused. Indeed western governments frequently engage in strategies of manipulation through deception involving exaggeration, omission and misdirection. This was recently observed quite clearly during the run-up to the Iraq war when intelligence was manipulated in order to mobilise public support for the Iraq invasion.

Moreover, the recent Chilcot report describes how, in the early days after 9/11 “regime-change hawks” in Washington argued that “a coalition put together for one purpose (against international terrorism) could be used to clear up other problems in the region”. Tony Blair had discussed how phases 1 and 2 of the “war on terror” would require a “dedicated tightly knit propaganda unit”.

One might reasonably conclude from all this evidence that the western public fell foul of a major deceptive propaganda campaign which involved exploiting terrorism threats in order to "clear up other problems" and which was instigated by our own governments and communicated through "our" media. Propaganda and deception are not, it would appear, the sole preserve of non-western states; they are alive and well in western democracies.

These are confusing times for consumers of the news, and the issue of which media outlets should be trusted is as demanding and critical as ever. Given the level of conflict and potential conflict in the world today, plus pressing global issues regarding environmental crisis, poverty and resources, it is essential that people learn to navigate the media and defend themselves against manipulation. The first step towards becoming more informed is to avoid seeing our governments and media as free from manipulation while demonising “foreign” governments and media as full of propagandistic lies.

Riaz Haq said...

Pashteen calls soldiers "terrorists in uniforms"

Demands an end to check posts, which would lead to a power vacuum filled by the Taliban, who would attack Pashtuns and non-Pashtuns alike.

Demands international guarantors.

Western media promoting it as "Pashtun Spring". Remember what happened to "Arab Spring"?


https://thesydney.news/2018/04/22/voa-exposed/


Recently I came across the Twitter profile of the leader of the newly formed ethnic movement in Pakistan and was shocked to see that many of the posts on his profile were either fake or photoshopped. I was already shilly-shallying about this movement, but seeing fake photos and blunt propaganda made it easy for me to understand the motives and intentions behind it.

I am sharing some of the tweets from that specific profile that have been convincingly refuted by different Pakistani social media users.


Saad
@Saad__tweets
List of tweets using fake pictures by Manzoor Pashteen for propaganda purposes. He rebrands all violent acts by the TTP & terrorists as acts by the Army, including the APS incident. Typical TTP sympathiser mindset. Now TTP apologists & sympathizers are talking about human rights & peace. https://twitter.com/BhittaniKhannnn/status/976104265990189056 …

7:44 AM - Mar 20, 2018

Asfandyar Bhittani🇵🇰

@BhittaniKhannnn
Fake/Propaganda by Manzoor Pashteen https://twitter.com/manzoorpashteen/status/678491388237975552 …
Link to Original http://tucson.com/entertainment/readers-share-woodstock-stories/article_5fc8382f-d176-5f90-8b65-e3342423c766.html …

Fake by MP https://twitter.com/manzoorpashteen/status/678491105051156481 …
Link to Original http://niemanreports.org/articles/the-sights-sounds-and-smells-of-afghanistan/ …

Fake by MP https://twitter.com/manzoorpashteen/status/737924840972275712 …
Link to Original https://www.rferl.org/a/taliban_militants_burn_villages_in_northwestern_pakistan/24290667.html …

😎🎤🔻 #MICDrop https://twitter.com/RaghKhan93/status/976044860473008128 …

7:32 AM - Mar 20, 2018

Taliban 'Razes' Pakistan Villages
Local residents say Taliban militants have burned three villages in northwestern Pakistan near the border with Afghanistan.

rferl.org

Abid Atozai
@AbidAtozai
These Fake photos & propaganda from @manzoorpashteen’s personal account tells a lot about the credibility, honesty & intentions of the #PashtunTahafuzMovement.

No wonder #TTPisPTM is trending in Pakistan.

7:57 AM - Mar 31, 2018

Riaz Haq said...

https://www.geo.tv/latest/197971-dg-ispr-briefs-media-on-ceasefire-violations-by-india



Social media being used against Pakistan, institutions: DG ISPR



Speaking of Pashtun Tahaffuz Movement (PTM) for the first time, the DG ISPR referred to several questions pertaining to the sudden emergence of the movement.

"How did Manzoor Ahmed Masood was renamed as Manzoor Pashteen; how did this campaign start on social media; how were 5000 social media accounts made in Afghanistan in a single day; how was a 'topi' (cap) started manufacturing outside the country and coming into Pakistan; how did small group of individuals started staging anti-Pakistan protests outside the country," he questioned.

In this regard, the DG ISPR also noted publishing of articles by foreign newspapers and live telecast of Pashteen by foreign media outlets on Facebook and Twitter.

Major General Ghafoor told the media that he met with Manzoor Pashteen and Mohsin Dawar, who shared their concerns. "They came to our office, we had a discussion for an hour or two about Naqeeb Mehsud, missing persons, unexploded ordnance [in tribal areas] and check-posts issues."

He said that he separated Mohsin Dawar and Manzoor Pashteen from other people and took them to his office, adding, "Then I got them to speak to all GOCs and IG FC, got them time, [told them] all your issues should be resolved, go meet the GOCs.

"They returned and also held a meeting, and I received a text from Mohsin Dawar thanking me for facilitating and getting their issues resolved," the DG ISPR said.

He, however, said that "those who are enemies of Pakistan and still want to see the country unstable, if they join you and start praising you then one needs to look inward what is this happening."

Major General Ghafoor further said that Chief of Army Staff (COAS) General Qamar Javed Bajwa gave strict instructions not to deal with PTM gatherings through force anywhere.

No action has been taken against them so far, the army spokesperson pointed out, adding that "we have many proofs of how they are being used".

On the incident in Wana, South Waziristan, the DG ISPR said the Mehsud tribe has fought against terrorism for years. The [tribe] then fought among itself, and the casualties were rescued by Army helicopters.

Propaganda was spread that a girl had been killed by Army firing, he said.

"Pakistan has achieved peace by rendering sacrifices in the past 20 years. What we achieved, nobody was able to achieve. Now, it's time to be united and progress."

"We are not [affected] by false slogans on social media. The nation's love for the army has only increased in the [past] 10 years."

“We cannot respond to [everyone]. We are focused on our work,” he added.

The army spokesperson further said that a lot of accusations were made but time proved all the accusations to be false.

“No army [in the world] has been as successful as Pakistan army in the war against terrorism,” he said.

Riaz Haq said...

Pakistan army spokesperson accuses journalists of anti-state activity on social media
June 5, 2018 1:54 PM ET

https://cpj.org/2018/06/pakistan-army-spokesperson-accuses-journalists-of-.php

New York, June 5, 2018--The Committee to Protect Journalists today condemned comments from Major General Asif Ghafoor, spokesperson for Pakistan's military and intelligence agencies, who accused journalists of sharing anti-state remarks on social media.

At a press conference yesterday, Ghafoor derided the rise of social media troll accounts, which he said spread propaganda against the army and state, and said that Pakistan's spy agency, the Inter-Services Intelligence (ISI), was monitoring such accounts and those that engage with them, including journalists.

During his presentation, Ghafoor showed a graphic featuring an alleged troll account's Twitter activity and the journalists and other individuals allegedly connected to the account, who, Ghafoor said, redistributed anti-state and anti-army propaganda from the troll's account.

The journalists featured on the graphic include Ammar Masood and Fakhar Durrani, both with the Jang Media Group, Umar Cheema from the Jang-owned daily The News, Azaz Syed from the Jang-owned broadcaster Geo TV, and Matiullah Jan with the broadcaster Waqt News. Cheema received CPJ's International Press Freedom Award in 2011.

"Displaying photos of journalists alleged to help push anti-state propaganda in Pakistan is tantamount to putting a giant target on their backs," said Steven Butler, CPJ's Asia program coordinator in Washington, D.C. "General Ghafoor should apologize for his comments and explain how security forces might help promote journalist safety in Pakistan, where reporters and editors are routinely threatened, attacked, and killed for their work."

Pakistani authorities have cracked down on press freedom ahead of national parliamentary elections scheduled for July 25. Recently, CPJ documented disruptions to the distribution of Dawn newspaper and access to television channel Geo TV.

Riaz Haq said...

Don't blame #WhatsApp for #India's mob #violence. Whimsical sunrise greetings are evidence of India being amusingly, enticingly colorful. But India is not, deep down, a friendly country. #Lynchistan #lynchings #Modi #BJP

https://www.bloomberg.com/view/articles/2018-07-18/lynch-mobs-are-india-s-problem-not-whatsapp-s via @bopinion


As far as Indians are concerned, the mobile phone was invented so we could use WhatsApp. The messaging app’s little green icon is now an inextricable part of our lives. We might survive without Facebook, which I haven’t checked in weeks. We might turn up our noses at Instagram, which seems to consist entirely of people’s vacation photos in Lisbon. We might even undergo Twitter detox days. But when an Indian has tired of WhatsApp, she has tired of life.


We are members of dozens of groups -- high school, college, workplace and that group from that conference three years ago which is inexplicably still active. We argue about politics, forward long, off-color jokes and, according to Google, crash each other’s phones with incredibly data-heavy “Good Morning” messages. This last addiction has claimed even Prime Minister Narendra Modi, who plaintively complained to a group of his MPs that they never responded to his morning greetings. That was a rare strategic error on the prime minister’s part; Modi’s landslide victory in 2014 netted him 270-plus MPs, and now they’re all wishing him good morning on his in-house version of WhatsApp.


Whimsical sunrise greetings are evidence of India being amusingly, enticingly colorful. But India is not, deep down, a friendly country. And we have turned even WhatsApp into something dangerous and scary. Last week, a mob of 2,000 attacked a group of four men in a car in the southern state of Karnataka, beating and kicking one of them to death after dragging him through a muddy field on a rope. The reason? Rumors had spread across the area that the men, IT professionals from Hyderabad who were just passing through in search of good natural honey, were in fact planning to abduct local children. And how had these rumors spread? WhatsApp, of course.


Dozens of people have been lynched in similar circumstances -- some on suspicion of stealing children, others because they were believed to be killing cows. And that’s not the end of WhatsApp’s apparent offenses against law and order in India: Since at least 2013, full-scale riots have been instigated by people forwarding and misidentifying videos.

Unsurprisingly, responses have been stern. The Supreme Court has demanded that Parliament consider an anti-lynching law. The ministry of information technology has warned WhatsApp that it “cannot evade accountability and responsibility.” WhatsApp itself has taken out full-page ads warning against fake news and has changed its interface to indicate when content is original and when it has simply been forwarded from elsewhere. The Indian state would no doubt be happier if WhatsApp just shut down.

Yet, I am quite uncomfortable with this scapegoating of what is, in the end, a pretty innocuous little platform. Technology is what we make of it. If we in India choose to use convenient messaging to form lynch mobs, that tells us more about India than it does about WhatsApp.

Riaz Haq said...

Facebook Says It Removed Pages Involved In Deceptive Political Influence Campaign
Tim Mak, July 31, 2018, 1:06 PM ET

https://www.npr.org/2018/07/31/634319520/facebook-says-it-removed-pages-involved-in-deceptive-political-influence-campaig?utm_source=twitter.com&utm_medium=social&utm_campaign=politics&utm_term=nprnews&utm_content=20180731


Facebook announced Tuesday afternoon that it has removed 32 Facebook and Instagram accounts or pages involved in a political influence campaign with links to the Russian government.

The company says the campaign included efforts to organize counterprotests on Aug. 10 to 12 for the white nationalist Unite The Right 2 rally planned in Washington that weekend.

Counterfeit administrators from a fake page called "Resisters" connected with five legitimate Facebook pages to build interest and share logistical information for counterprotests, Facebook said. The imminence of that event was what prompted Facebook to go public with this information.

In a blog post from the head of Facebook's cybersecurity policy, the company says that those accounts were "involved in coordinated inauthentic behavior" but that their investigation had not yielded definitive information about who was behind the effort.

However, Facebook's top security officials said the campaign involved similar "tools, techniques and procedures" employed by the Russian Internet Research Agency during the 2016 campaign.

There are not many details presented about the origin of these pages, but there is a link established between a page involved in organizing Unite The Right counterprotests and an IRA account.

Facebook noticed that a known Internet Research Agency account had been made a co-administrator on a fake page for a period of seven minutes — something a top Facebook official called "interesting but not determinative."

The actors behind the accounts were more careful to conceal their true identities than the Internet Research Agency had been in the past, Facebook said.

While Internet Research Agency accounts had occasionally used Russian IP addresses in the past, the actors behind this effort never did.

"These bad actors have been more careful to cover their tracks, in part due to the actions we've taken to prevent abuse over the past year," wrote Nathaniel Gleicher, head of cybersecurity policy at Facebook. "For example, they used VPNs and internet phone services, and paid third parties to run ads on their behalf."

Both the Republican and Democratic leaders of the Senate intelligence committee were less reserved about placing the blame for this campaign on the Russian government.

"The goal of these operations is to sow discord, distrust, and division in an attempt to undermine public faith in our institutions and our political system. The Russians want a weak America," said Sen. Richard Burr, the Republican chairman of that committee.

Added Sen. Mark Warner, the top Democrat on the panel, "Today's disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation."

This most recent political influence campaign consisted of pages with names like "Aztlan Warriors," "Black Elevation," "Mindful Being" and "Resisters."

The pages were created between March 2017 and May 2018 and had a total of 290,000 followers. Over this time period, they generated 9,500 posts and ran 150 ads for about $11,000. They also organized about 30 events, only two of which were slated for the future.

Riaz Haq said...

#Israeli intervention in US elections ‘vastly overwhelms' anything #Russia has done, claims Noam Chomsky. #Trump #Israel

https://www.independent.co.uk/news/world/americas/us-politics/israel-us-elections-intervention-russia-noam-chomsky-donald-trump-a8470481.html

Riaz Haq said...

Now openly admitted, governments and militaries around the world employ armies of keyboard warriors to spread propaganda and disrupt their online opposition. Their goal? To shape public discourse around global events in a way favourable to their standing military and geopolitical objectives. Their method? The Weaponization of Social Media. This is The Corbett Report.

https://youtu.be/0dL8vt1n-f8

Fighting digital disinformation is hard

Analyzing, let alone countering, this type of provocative behavior can be difficult. Russia isn’t alone, either: The U.S. tries to influence foreign audiences and global opinions, including through Voice of America online and radio services and intelligence services’ activities. And it’s not just governments that get involved. Companies, advocacy groups and others also can conduct disinformation campaigns.

Unfortunately, laws and regulations are ineffective remedies. Further, social media companies have been fairly slow to respond to this phenomenon. Twitter reportedly suspended more than 70 million fake accounts earlier this summer. That included nearly 50 social media accounts like the fake Chicago Daily News one.

Facebook, too, says it is working to reduce the spread of “fake news” on its platform. Yet both companies make their money from users’ activity on their sites – so they are conflicted, trying to stifle misleading content while also boosting users’ involvement.

Real defense happens in the brain

The best protection against threats to the cognitive dimension of cyberspace depends on users’ own actions and knowledge. Objectively educated, rational citizens should serve as the foundation of a strong democratic society. But that defense fails if people don’t have the skills – or worse, don’t use them – to think critically about what they’re seeing and examine claims of fact before accepting them as true.

American voters expect ongoing Russian interference in U.S. elections. In fact, it appears to have already begun. To help combat that influence, the U.S. Justice Department plans to alert the public when its investigations discover foreign espionage, hacking and disinformation relating to the upcoming 2018 midterm elections. And the National Security Agency has created a task force to counter Russian hacking of election systems and major political parties’ computer networks.

https://wtop.com/social-media/2018/07/weaponized-information-seeks-a-new-target-in-cyberspace-users-minds/

Riaz Haq said...

Facebook Fueled Anti-Refugee Attacks in Germany, New Research Suggests

https://www.nytimes.com/2018/08/21/world/europe/facebook-refugee-attacks-germany.html


When you ask locals why Dirk Denkhaus, a young firefighter trainee who had been considered neither dangerous nor political, broke into the attic of a refugee group house and tried to set it on fire, they will list the familiar issues.

This small riverside town is shrinking and its economy declining, they say, leaving young people bored and disillusioned. Though most here supported the mayor’s decision to accept an extra allotment of refugees, some found the influx disorienting. Fringe politics are on the rise.

But they’ll often mention another factor not typically associated with Germany’s spate of anti-refugee violence: Facebook.

Everyone here has seen Facebook rumors portraying refugees as a threat. They’ve encountered racist vitriol on local pages, a jarring contrast with Altena’s public spaces, where people wave warmly to refugee families.

-------------

Many here suspected — and prosecutors would later argue, based on data seized from his phone — that Mr. Denkhaus had isolated himself in an online world of fear and anger that helped lead him to violence.

This may be more than speculation. Little Altena exemplifies a phenomenon long suspected by researchers who study Facebook: that the platform makes communities more prone to racial violence. And, now, the town is one of 3,000-plus data points in a landmark study that claims to prove it.

---------------

Karsten Müller and Carlo Schwarz, researchers at the University of Warwick, scrutinized every anti-refugee attack in Germany, 3,335 in all, over a two-year span. In each, they analyzed the local community by any variable that seemed relevant. Wealth. Demographics. Support for far-right politics. Newspaper sales. Number of refugees. History of hate crime. Number of protests.

One thing stuck out. Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community — big city or small town; affluent or struggling; liberal haven or far-right stronghold — suggesting that the link applies universally.

Their reams of data converged on a breathtaking statistic: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent.

-----------

Could Facebook really distort social relations to the point of violence? The University of Warwick researchers tested their findings by examining every sustained internet outage in their study window.

German internet infrastructure tends to be localized, making outages isolated but common. Sure enough, whenever internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.

And they dropped by the same rate at which heavy Facebook use is thought to boost violence. The drop did not occur in areas with high internet usage but average Facebook usage, suggesting it is specific to social media.

This spring, internet services went down for several days or weeks, depending on the block, in the middle-class Berlin suburb of Schmargendorf.

Asked how life changed, Stefania Simonutti went bug-eyed and waved her arms as if screaming.

“The world got smaller, a lot changed,” said Ms. Simonutti, who runs a local ice cream shop with her husband and older son. She lost touch with family in Italy, she said, but was most distressed by losing access to news, for which she trusts only social media, chiefly Facebook.


Riaz Haq said...

Here’s the Conversation We Really Need to Have About Bias at Google

https://www.nytimes.com/2018/08/30/technology/bias-google-trump.html

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is potential for hidden, pervasive and often unintended bias — the sort that led Google to once return links to many pornographic pages for searches for “black girls,” that offered “angry” and “loud” as autocomplete suggestions for the phrase “why are black women so,” or that returned pictures of black people for searches of “gorilla.”


I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School of Communication.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”


In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store.

Riaz Haq said...

Here’s the Conversation We Really Need to Have About Bias at Google

https://www.nytimes.com/2018/08/30/technology/bias-google-trump.html


Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.


A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones
The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there was even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, like in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Riaz Haq said...

Why AI Is the Next Frontier in Weaponized Social Media
'LikeWar' author explains how digital platforms have become war zones

https://www.adweek.com/digital/why-ai-is-the-next-frontier-in-weaponized-social-media/

Singer argues that brands, ISIS recruiters, reality stars and Russian bots are all playing in the same arena online.
When P.W. Singer set out to write a book about military use of social media in 2013, he couldn’t have known exactly what kind of rabbit hole he was entering.

---

In their conception of weaponized social media, everyone from brand marketers and reality stars to terrorist recruiters and military personnel are now competing with one another in a viral attention battleground where troll armies, misinformation and bot networks are weapons of choice.

Singer spoke with Adweek about how this social media atmosphere evolved, why brands need to pay attention to Russian bots and how artificial intelligence personas could be the future of online propaganda.

The following has been edited for length and clarity.

How did this book originally take shape?

Initially it was looking at social media use in war, but very quickly, war becomes melded with terrorism—so you think about the rise of ISIS in 2014. And then we start to see its use by criminal groups—cartels, gangs—and then it morphs into politics, where all the things that we were seeing in, for example, Russia and Ukraine start moving over into Brexit and American politics and the like. It was a pretty extensive journey, and along the way, the project got bigger and bigger.

The challenge of this topic and I think why we all weren’t handling it well is how big it is. So people were looking at just one slice of it, one geographic region or just one target and missing out on the larger trend.

For example, the people interested in terrorism were looking at ISIS’ use of social media but they weren’t aware of, say, how Russia was using it.

The people who were in these political worlds didn’t understand digital marketing or pop culture so they were missing things that anyone with an ad background or who knew what Taylor Swift does would go, ‘Of course.’ The approach was to bring all this together—to bring together all the classic research in history and psychology studies and sociology and digital marketing.

But the second thing about this space is that you can both research it and jump into it. So we joined online armies, both actual ones—you can download apps to join Israeli Defense Forces operations—and competing online tribes.

We set traps, we trolled Russian trolls.

And then the third thing that hadn’t been done that was really striking was talking to key players. So we went out and interviewed an incredibly diverse set of people to learn from them—everyone from the recruiters for extremist groups to tech company executives to the pioneers of online marketing to reality stars to generals.

How do the concepts and tactics you talk about in your book go beyond ordinary digital marketing?

“LikeWar” is a concept that brings together all these different worlds. If cyberwar is the hacking of the networks, LikeWar is the hacking of the people on the networks by driving ideas viral through a mix of likes, lies and the network’s own algorithms. It’s this strange space that brings together Russian military units using digital marketing to influence the outcome of elections to teenagers taking selfies and live-feeding but influencing the outcome of actual battles. You’re seeing some of these techniques from ad companies, marketing and the like. They’re being used for different purposes, but they’re playing out all in the exact same space.

Riaz Haq said...

How Israel’s Army Revolutionized Wartime Social Media
Read more: https://forward.com/scribe/407826/how-israels-army-revolutionized-wartime-social-media/

Facebook CEO and founder Mark Zuckerberg once said, “The question isn’t ‘What do we want to know about people?’ It’s ‘What do people want to tell about themselves?’” When it comes to the state of Israel and its representatives, there is a lot they want to tell about themselves. Through PR crises like multiple wars with Gaza, waves of stabbing attacks and never-ending global political tensions, I’ve seen firsthand over the last seven years how Israel utilizes social media as a tool for combatting international media bias. Yet in Israel and in the global Jewish community, there is fierce criticism of Israel’s so-called “PR failures.” I believe this criticism is unwarranted. Having been a critical observer of both the Israeli government and the Israel Defense Forces from the perspective of an outsider, I believe the state of Israel, and more specifically the IDF, have completely changed the game when it comes to nation branding and public relations.

Despite unrelenting bias in the international media, the Israel Defense Forces has revolutionized modern warfare in the public sphere. In 2012, Israel was the first state to declare the beginning of an operation, before even holding a press conference, by launching the operation using Twitter — explaining why, how, when and where in under 140 characters. In 2014, the IDF provided data and evidence (photos and videos) within minutes of carrying out activities in order to stop the rocket-fire from Gaza. What other army in the world does this? Last I checked, the United States was not sending out photographic evidence of the terrorist weapons facilities they hit in real time on social media or on any other platform.

Since then, the IDF has only become more effective at distributing facts in real time. Whereas Palestinian terrorist organizations like Hamas have repeatedly distributed false and misleading information and even falsified photos and videos, the IDF is prompt, professional and highly effective at distributing photographic and video evidence, including detailed explanations. In the recent riots at the border of Gaza, the IDF released video footage within minutes of terrorists firing guns across the border in a riot which the Palestinians had claimed to the international press was “non-violent” (an incident that unfortunately occurred repeatedly).

That is not to say every action of the IDF is justifiable and morally correct — that would not be true of any army in the world. But there is no question that the IDF goes above and beyond to distribute factual information in a way that no other army in the world would do, could do or has to do to comply with international law. Indeed, even when mistakes are made, the IDF is clear and professional about the aftermath — investigating incidents of civilian casualties quickly and releasing the findings of the investigations to the public.

Few in the Western world are aware, but the IDF has a massive following in Arabic — and their Arabic spokesman, Avichay Adraee, is an extremely well-known public figure in the Arab world. Whether they love him or hate him, Adraee is getting the message out loudly and clearly on networks like Al Jazeera Arabic, and through his massive social media following of over one million on Facebook.

As the digital director of one of the few pro-Israel organizations that operates on social media in Arabic, I can testify to the fact that the impact of having an Israeli voice from the IDF, who is “reachable” on Twitter and Facebook, is tremendous. What other army in the world has that advantage? The average citizens that Adraee is reaching on a day-to-day basis are being faced with the reality that what they have been taught about the alleged inhumanity of the IDF is simply not the case.

Read more: https://forward.com/scribe/407826/how-israels-army-revolutionized-wartime-social-media/

Riaz Haq said...

Fake #Iranian news site tweet provoked #Pakistan's nuclear threat against #Israel in 2016. https://thinkprogress.org/source-of-pakistani-nuclear-threat-against-israel-revealed-as-fake-iranian-news-site-75831c3c1880/ via @thinkprogress

In late 2016, then-Pakistani Defense Minister Khawaja Asif took to Twitter to threaten Israel with nuclear war. The threat, which was later deleted, was prompted by something Asif had apparently come across earlier while online: A post originally published on the “AWD News” site, claiming that the Israeli defense minister had himself threatened nuclear war against Pakistan.

For nearly two years, the sourcing behind “AWD News” remained unclear.

But thanks to today’s massive revelations from Twitter about fake Russian and Iranian accounts, ThinkProgress has learned that “AWD News” was part of a social media and fake news campaign out of Iran. As Lee Foster, an analyst with FireEye — the cyber-security company tasked with unearthing fake Iranian social media operations — told ThinkProgress, “AWD News” is “part of the same operation” that FireEye uncovered on Facebook a few months ago.

That is to say: A fake Iranian news site aimed at English-speakers convinced the Pakistani Defense Minister to issue a nuclear warning against Israel in late 2016.

Tracked to Iran
ThinkProgress’s confirmation of the links in Iran to “AWD News” came via Wednesday’s substantial revelations from Twitter, which published approximately one million tweets “potentially originating in Iran.” This disclosure followed Facebook’s August announcement that it had removed hundreds of fake accounts that originated in Iran.

While Facebook has still not released the names of those accounts, one of the pages identified was “Free Scotland 2014,” a popular pro-Scottish Independence account. As it is, a number of the Iranian tweets released on Wednesday also advocated for Scottish secession.

Riaz Haq said...

#Twitter’s #Russian #troll problem: There’s good news and bad news. The #Iranian trolling was less effective than the Russian posts, with most tweets getting limited engagement. #Iran #Russia
https://www.fastcompany.com/90253028/twitters-russian-troll-problem-theres-good-news-and-bad-news

On Wednesday, Twitter released a collection of more than 10 million tweets related to thousands of accounts affiliated with Russia’s Internet Research Agency propaganda outfit, as well as hundreds more troll accounts, including many based in Iran.

The Atlantic Council’s Digital Forensic Research Lab took an advance look at the data and released a four-part report on its analysis. Among the lab’s findings:

- Targeting both sides: Russian trolls targeting U.S. politics took on personas from both the left, including African American activists, and the right, including a white conservative male character using the name “Marlboro Man.” Their primary goal appears to have been to sow discord, rather than promote any particular side, presumably with a goal of weakening the United States. In some cases, they even posted anti-Russian content.
- Often effective: The Russian trolls drew tens of thousands of retweets on certain posts, including from celebrity commentators like conservative Ann Coulter. When Twitter suspended many accounts linked to the group, they continued with other fake activist accounts.
- Takedowns helped: Twitter’s efforts to take down accounts did help. The second wave of Russian troll accounts, now since taken down, posted much less than the original group. “Twitter’s suspension of over 2,500 Russian troll accounts in late 2017 disrupted the troll operation very significantly by suspending hundreds of its assets at the same time,” according to the report.
- Self-interested: Iran’s trolling was mostly focused on promoting its own interests, including attacking regional rivals like Israel and Saudi Arabia. Some posts also attacked Trump and tried to woo supporters of Bernie Sanders.
- Trolling isn’t easy: The Iranian trolling was less effective than the Russian posts, with most tweets getting limited engagement. This was partially due to posting styles less suited to the medium, according to the report. “Few of the accounts showed distinctive personalities: They largely shared online articles,” according to the report. “As such, they were a poor fit for Twitter, where personal comment tends to resonate more strongly than website shares.” Generally, many troll posts were ineffective, and “their operations were washed away in the firehose of Twitter.”
For now, there’s no reason to think political trolls are going away.

“Identifying future foreign influence operations, and reducing their impact, will demand awareness and resilience from the activist communities targeted, not just the platforms and the open source community,” according to the report.

Riaz Haq said...

Foreign information ops: #Twitter data on suspect accounts shows #Pakistan mentioned in 5,652 tweets in documents related to suspect accounts originating from #Iran. #SocialMedia #trolls #weapons #India #disinformation #FakeNews https://tribune.com.pk/story/1828588/3-foreign-twitter-accounts-behind-information-operations-involving-pakistan/

Social networking website Twitter on Wednesday released a vast cache of data related to accounts deemed involved in ‘information operations’ on their servers since 2016.

The released data is part of an internal investigation by the social media giant into allegations of foreign meddling on their website, according to a statement released by the company in a blog post.

In September, Twitter chief Jack Dorsey had appeared before a US Senate Intelligence Committee in Washington to brief lawmakers regarding efforts the company was making to combat coordinated misinformation campaigns.

The Express Tribune has performed an analysis on parts of the dataset shared by the social media website.

Pakistan is mentioned in 5,652 tweets in some documents related to suspect accounts originating from Iran.

According to Twitter, the locations with which the tweets were identified had been the ‘self-reported locations’ that the users had posted.

Interestingly, most of the content flagged as suspicious was apparently coming from Brazil. France, Turkey, Iran and the US were also featured on the list.

The archive showed websites that suspected users had tweeted about most frequently. According to figures, AWD News was on top of this list, with whatsupic and libertyfrontpress following close behind.

A number of these websites, including AWD News, have been already identified as platforms used to generate fake news.

Twitter released an accompanying list of web addresses that were shared widely by users with intent to spread misinformation.

These links are mostly from AWD News and libertyfrontpress, and can help the public understand why certain stories are shared more than others by suspect groups.

The Express Tribune went through some of these links in order to separate fake stories from real stories.

http://www.awdnews.com/political/10404-cia-predict-third-terrorist-attack-after-sidney-and-pakiistan-in-usa-in-3-days.html

Headline: CIA predict third terrorist attack after Sidney and Pakistan in USA in 3 days

The link shared above is no longer available. British newspaper The Guardian has already identified AWD as a fake news site. Wayback Machine screenshot shows that the article is full of spelling mistakes and the story has not been credibly sourced. The URL reproduced above was published on December 16, 2014, and tweeted 3,619 times.

http://www.7sabah.com.tr/haber/6876/pakistan-genelkurmay-baskani-israili-12-dakikada-yok-ederiz/

Translation of headline: Pakistan Chief of Staff: We would destroy Israel in 12 minutes!

The article linked above shows pictures of former Chief of Army Staff Raheel Sharif, and most sites quoting this also claim Joint Chiefs of Staff General Zubair Mahmood Hayat spoke to AWD News. This URL was published on October 31, 2016, and tweeted 72 times.

http://whatsupic.com/news-politics-world/1476905223.html

Headline: We can destroy Israel in ‘less than 12 minutes’: Pakistani commander

This story was tweeted out 67 times and originally published on October 19, 2016.

http://www.awdnews.com/index/saudi-arabia-bought-depleted-uranium-weapons-pakistan-delivered-syrian-rebel-groups/

Headline: Saudi Arabia has bought depleted uranium weapons from Pakistan and delivered it to Syrian rebel groups

Riaz Haq said...

#Facebook Admits It Was Used to Incite #Violence Against #Rohingya in #Myanmar. It faces scrutiny from lawmakers who say it is not doing enough. Facebook’s experiments have helped to amplify fake stories and violence in other countries, including #SriLanka. https://nyti.ms/2yVAO5X

Facebook has long promoted itself as a tool for bringing people together to make the world a better place. Now the social media giant has acknowledged that in Myanmar it did the opposite, and human rights groups say it has a lot of work to do to fix that.

Facebook failed to prevent its platform from being used to “foment division and incite offline violence” in the country, one of its executives said in a post on Monday, citing a human rights report commissioned by the company.

“We agree that we can and should do more,” the executive, Alex Warofka, a Facebook product policy manager, wrote. He also said Facebook would invest resources in addressing the abuse of its platform in Myanmar that the report outlines.

The report, by Business for Social Responsibility, or BSR, which is based in San Francisco, paints a picture of a company that was unaware of its own potential for doing harm and did little to figure out the facts on the ground.

The report details how Facebook unwittingly entered a country new to the digital era and still emerging from decades of censorship, all the while plagued by political and social divisions.

But the report fails to look closely at how Facebook employees missed a crescendo of posts and misinformation that helped to fuel modern ethnic cleansing in Myanmar.

The report recommends that Facebook increase enforcement of policies for content posted on its platform; exercise greater transparency with data that shows its progress; and engage with civil society and officials in Myanmar.

Some Facebook detractors criticized the company on Tuesday for releasing the report on the eve of the midterm elections in the United States, when the attention of the news media and many of Facebook’s most vocal critics was elsewhere. Human rights groups said Facebook’s pledge needed to be followed up with more concrete actions.

“There are a lot of people at Facebook who have known for a long time that the company should have done more to prevent the gross misuse of its platform in Myanmar,” said Matthew Smith of Fortify Rights, a nonprofit human rights organization that focuses on Southeast Asia.

Riaz Haq said...

by James Dewar

https://www.rand.org/pubs/papers/P8014/index2.html

Specifically, I want to argue that the parallels between the printing press era and today are sufficiently compelling to suggest:

Changes in the information age will be as dramatic as those in the Middle Ages in Europe. The printing press has been implicated in the Reformation, the Renaissance and the Scientific Revolution, all of which had profound effects on their eras; similarly profound changes may already be underway in the information age.
The future of the information age will be dominated by unintended consequences. The Protestant Reformation and the shift from an earth-centered to a sun-centered universe were unintended consequences in the printing press era. We are already seeing unintended consequences in the information age that are dominating intended ones and there are good reasons to expect more in the future. Thus, the technologists are unlikely to be accurate and the inventors may neither have their intended effects nor be the most important determinants of information age progress.
It will be decades before we see the full effects of the information age. The important effects of the printing press era were not seen clearly for more than 100 years. While things happen more quickly these days, it could be decades before the winners and losers of the information age are apparent. Even today, significant (and permanent) cultural change does not happen quickly.
The above factors combine to argue for: a) keeping the Internet unregulated, and b) taking a much more experimental approach to information policy. Societies that regulated the printing press suffered, and continue to suffer today, in comparison with those that didn't. With the future dominated by unintended consequences and slow to emerge, a more experimental approach to policy change (with special attention to unintended consequences) is soundest.

Riaz Haq said...

THE EVOLUTION OF HYBRID WARFARE
AND KEY CHALLENGES
COMMITTEE ON ARMED SERVICES
HOUSE OF REPRESENTATIVES
ONE HUNDRED FIFTEENTH CONGRESS
FIRST SESSION
HEARING HELD
MARCH 22, 2017

https://www.govinfo.gov/content/pkg/CHRG-115hhrg25088/pdf/CHRG-115hhrg25088.pdf

Hoffman, Dr. Francis G., Distinguished Fellow, National Defense University

So, I think we are involved, but maybe we are not strategically, coherently influencing the way we want to in certain regions. And that is an area where, perhaps, the joint world and the NSC can improve our strategic responses, because we buy things, sometimes, and I don't think we understand that when we are supporting a particular ally and building up their military, we think we are stabilizing something.

But Mr. Putin, he likes that weakness. He wants to see peripheral states along the Eastern seaboard be spheres of influence for him. You know, he wants them to be destabilized. And that is, I think, something that we need to take to heart.

We focus, in the military, much on the hardware of the Soviet Union, or Russia, today. Its anti-access/area-denial capabilities, A2/AD, has become a buzzword in defense. And I think that the A2/RD, the anti-alliance and reality-denial activities that the Russians are up to, is something we need to, you know, push back on.

So, I take your point. I do believe we are competing, we are just not, probably, competing as strategically and coherently as I think we should. And we need to understand how the opponent sees that. When we build up the Philippines, or work with the Vietnamese, clearly the Chinese see that as something against their interests, and we need to be transparent and understanding about that.

Sir, with respect to China, I totally agree. It is very important to understand their worldview, and what they are thinking, and of course, they come to this problem as a country with very significant achievements. They have dragged hundreds of millions of people out of poverty, and they have a history of imperialism in their country and so forth, that they feel very strongly, and I think there is no question that they feel encircled, if you like, by American allies, is how they would put it.

And therefore, at one level, it is not unreasonable for them to look to sort of push back that American influence. What has changed, I think, most recently, is that China now has more capability to do that, including very sophisticated military capabilities. And seemingly, under its current political leadership, more intent to do that.

And I agree that China doesn't want a conflict, for the reasons I mentioned in my statement. I think the problem, though, is that what China does seem to want is a traditional, 19th-century-style sphere of influence, where the region is organized politically, economically, and in security terms according to its preferences.

And the real problem is

Riaz Haq said...

Network of 265 online sites in 65 countries linked to #India are mimicking defunct newspapers to spread anti-#Pakistan #propaganda. #Modi #Hindutva https://zd.net/2QhoEOC via @ZDNet & @campuscodi

Researchers today uncovered a network of 265 online news sites using the names and brands of defunct newspapers from the 20th century to push anti-Pakistan media coverage into the regular news cycle.

Discovered by the EU DisinfoLab, an EU-based NGO focused on researching sophisticated disinformation campaigns, this network of fake news sites was traced back to a group of Indian companies, NGOs, and think tanks.

The EU DisinfoLab team believes the goal of this global network of fake news organizations was to influence international institutions, elected representatives, and public perceptions on Pakistan by multiplying the same negative anti-Pakistan press coverage.

Furthermore, the fake news sites were also meant to reinforce the legitimacy of anti-Pakistan NGOs by providing linkable press materials to reinforce an anti-Pakistan agenda.

This was done to "add several layers of media outlets that quote and republish one another, making it harder for the reader to trace the manipulation, and in turn (sometimes) offer a 'mirage' of international support," the EU DisinfoLab team said.

All news sites were registered to use a domain that either mimicked the name of a popular local news site or used the name of a defunct newspaper.

For example, in Romania, the group operates an English news site located at frontulplugarilor.com, mimicking the name of Frontul Plugarilor, a pro-communist newspaper published in Bucharest between 1945-1953.

Similarly, in Turkey, the group operates a website at hamevasser.com, mimicking the name of a Zionist Hebrew-language weekly newspaper published in Constantinople (now Istanbul) in the early 1900s, between 1909-1911.

Among the sites it operates in the US, there's the metroeastjournal.com, which imitates the Metro-East Journal, a newspaper that shut down in 1979; or saltlaketelegram.com, which mimics the Salt Lake Telegram, a local newspaper that operated between 1915-1952.

A comprehensive list of all the sites operated by the group can be found on this interactive Google map.

SITES HID ANTI-PAKISTANI PROPAGANDA INSIDE REGULAR NEWS COVERAGE
According to the EU DisinfoLab team, all 265 websites published English news articles, despite using the names of local newspapers.

Most of the content was syndicated from Russia Today, Voice of America, KCNA, and Interfax, researchers said.

However, hidden in the syndicated content were articles critical of Pakistan. A search for "pakistan" on one of the sites in the network yielded results that were uniformly critical of the Pakistani government.

The EU DisinfoLab team said that links between all 265 fake news sites were easy to find. In many cases, several news sites listed a contact address in the same co-working space or office, while in other cases they shared web servers.
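The linking method the researchers describe (grouping domains that share a web server or a contact address) amounts to simple clustering on shared infrastructure. A minimal sketch, using made-up domains and attribute values rather than any real data from the report:

```python
from collections import defaultdict

# Hypothetical records of the kind described above: each site, the IP
# address it resolves to, and the contact address it lists.
sites = [
    {"domain": "example-gazette.com",  "ip": "203.0.113.7",  "contact": "Office 4, CoWork Plaza"},
    {"domain": "example-telegram.com", "ip": "203.0.113.7",  "contact": "PO Box 12"},
    {"domain": "example-journal.com",  "ip": "198.51.100.9", "contact": "Office 4, CoWork Plaza"},
    {"domain": "unrelated-news.com",   "ip": "192.0.2.55",   "contact": "Suite 9, Elsewhere"},
]

def cluster_by_shared_infrastructure(sites):
    """Group domains that share any IP or contact address (union-find)."""
    parent = {s["domain"]: s["domain"] for s in sites}

    def find(x):
        # Follow parent pointers to the cluster representative,
        # halving the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Any two sites sharing an attribute value are linked.
    for key in ("ip", "contact"):
        seen = {}
        for s in sites:
            val = s[key]
            if val in seen:
                union(s["domain"], seen[val])
            else:
                seen[val] = s["domain"]

    clusters = defaultdict(set)
    for s in sites:
        clusters[find(s["domain"])].add(s["domain"])
    return [sorted(c) for c in clusters.values()]

print(cluster_by_shared_infrastructure(sites))
```

On this toy data the first three sites collapse into one cluster (the first two share a server, the first and third share an office), which is exactly the kind of overlap the investigators used to tie the network together.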

The NGO plans to release a more in-depth report in the coming weeks. In the meantime, the organization has documented its findings in a blog post and a series of Twitter threads [1, 2, 3, 4].

Riaz Haq said...

Uncovered: 265 coordinated fake local media outlets serving Indian interests


https://www.disinfo.eu/2019/11/13/uncovered:-265-coordinated-fake-local-media-outlets-serving-indian-interests/


Over 265 fake local news sites in more than 65 countries are managed by an Indian influence network.

How would you know that your local news website, such as newyorkmorningtelegraph.com, thedublingazette.com, or timesofportugal.com, serves Indian governmental interests?

Here are some findings from these websites:

Most of them are named after an extinct local newspaper or spoof real media outlets;
They republish content from several news agencies (KCNA, Voice of America, Interfax);
Coverage of the same Indian-related demonstrations and events;
Republications of anti-Pakistan content from the described Indian network (including EP Today, 4NewsAgency, Times Of Geneva, New Delhi Times);
Most websites have a Twitter account as well.
One may wonder: why have they created these fake media outlets? From analysing the content and how it is shared, we found several reasons:

Influence international institutions and elected representatives with coverage of specific events and demonstrations;
Provide NGOs with useful press material to reinforce their credibility and thus be impactful;
Add several layers of media outlets that quote and republish one another, making it harder for the reader to trace the manipulation, and in turn (sometimes) offer a “mirage” of international support;
Influence public perceptions on Pakistan by multiplying iterations of the same content available on search engines.

Riaz Haq said...

EU Disinfo Lab researchers believe the network’s real purpose was to act as a way to disseminate coordinated anti-Pakistan propaganda that coincided with real-world anti-Pakistan demonstrations taking place in Europe.

The demonstrations were organized by groups such as the European Organization for Pakistani Minorities and Pakistani Women’s Human Rights Organization, which have been shown to use the same online infrastructure as the fake news network.


A Shadowy Indian Company Co-Opted Dead Newspapers to Spread Propaganda
Researchers think it was an influence campaign aimed to sway lawmakers in Europe in favor of Indian interests in Kashmir.

https://www.vice.com/en_us/article/xwe993/a-shadowy-indian-company-co-opted-dead-newspapers-to-spread-propaganda

Riaz Haq said...

#Facebook, #Google, #Twitter Rebel Against #Pakistan’s #SocialMedia Rules. Company pressure and lawsuits from local civil libertarians forced the govt to retreat. The law is still on the books, but Pakistani officials pledged this week to review the regulations. https://nyti.ms/2uF4GF0

When Pakistan’s government unveiled some of the world’s most sweeping rules on internet censorship this month, global internet companies like Facebook, Google and Twitter were expected to comply or face severe penalties — including the potential shutdown of their services.

Instead, the tech giants banded together and threatened to leave the country and its 70 million internet users in digital darkness.

Through a group called the Asia Internet Coalition, they wrote a scathing letter to Pakistan’s prime minister, Imran Khan. In it, the companies warned that “the rules as currently written would make it extremely difficult for AIC Members to make their services available to Pakistani users and businesses.”

Their public rebellion, combined with pressure and lawsuits from local civil libertarians, forced the government to retreat. The law remains on the books, but Pakistani officials pledged this week to review the regulations and undertake an “extensive and broad-based consultation process with all relevant segments of civil society and technology companies.”


“Because Pakistan does not have any law of data protection, international internet firms are reluctant to comply with the rules,” said Usama Khilji, director of Bolo Bhi, an internet rights organization based in Islamabad, the country’s capital.

The standoff over Pakistan’s digital censorship law, which would give regulators the power to demand the takedown of a wide range of content, is the latest skirmish in an escalating global battle. Facebook, Google and other big tech companies, which have long made their own rules about what is allowed on their services, are increasingly tangling with national governments seeking to curtail internet content that they consider harmful, distasteful or simply a threat to their power.

India is expected to unveil new censorship guidelines any day now, including a requirement that encrypted messaging services like WhatsApp tell the government how specific messages moved within their networks. The country has also proposed a new data privacy law that would restrict the activities of tech companies while exempting the government from privacy rules.

Vietnam passed its own cybersecurity law in 2018, with similar provisions to what Pakistan passed. Singapore recently began using its rules against “fake news” to go after critics and opposition figures by forcing social networks like Facebook to either take down certain posts or add the government’s response to them.

The unified resistance by Facebook, Google, Twitter and other tech companies in Pakistan is highly unusual. Companies often protest these types of regulations, but they rarely threaten to actually leave a country. Google pulled its search engine out of China in 2010 rather than submit to government censorship of search results, but LinkedIn agreed to self-censor its content when it entered China in 2014 and Apple acceded to Chinese demands to remove apps that customers had used to bypass the country’s Great Firewall.

Riaz Haq said...

Before #India’s elections in 2019, #Facebook took down inauthentic pages tied to #Pakistan’s military & #Indian Opposition Congress party, but it didn't remove #BJP accounts spewing anti-#Muslim #hate & #fakenews. Why? FB executive Ankhi Das intervened. https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346

In 2017, Ms. Das wrote an essay, illustrated with Facebook’s thumbs-up logo, praising Mr. Modi. It was posted to his website and featured in his mobile app.

On her own Facebook page, Ms. Das shared a post from a former police official, who said he is Muslim, in which he called India’s Muslims traditionally a “degenerate community” for whom “Nothing except purity of religion and implementation of Shariah matter.”

---------

In Facebook posts and public appearances, Indian politician T. Raja Singh has said Rohingya Muslim immigrants should be shot, called Muslims traitors and threatened to raze mosques.

Facebook Inc. employees charged with policing the platform were watching. By March of this year, they concluded Mr. Singh not only had violated the company’s hate-speech rules but qualified as dangerous, a designation that takes into account a person’s off-platform activities, according to current and former Facebook employees familiar with the matter.

---

Yet Mr. Singh, a member of Indian Prime Minister Narendra Modi’s Hindu nationalist party, is still active on Facebook and Instagram, where he has hundreds of thousands of followers. The company’s top public-policy executive in the country, Ankhi Das, opposed applying the hate-speech rules to Mr. Singh and at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence, said the current and former employees.

Ms. Das, whose job also includes lobbying India’s government on Facebook’s behalf, told staff members that punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country, Facebook’s biggest global market by number of users, the current and former employees said.

---------------
India is a vital market for Facebook, which isn’t allowed to operate in China, the only other nation with more than one billion people. India has more Facebook and WhatsApp users than any other country, and Facebook has chosen it as the market in which to introduce payments, encryption and initiatives to tie its products together in new ways that Mr. Zuckerberg has said will occupy Facebook for the next decade. In April, Facebook said it would spend $5.7 billion on a new partnership with an Indian telecom operator to expand operations in the country—its biggest foreign investment.

-----------
Another BJP legislator, a member of Parliament named Anantkumar Hegde, has posted essays and cartoons to his Facebook page alleging that Muslims are spreading Covid-19 in the country in a conspiracy to wage “Corona Jihad.” Human-rights groups say such unfounded allegations, which violate Facebook’s hate speech rules barring direct attacks on people based on “protected characteristics” such as religion, are linked to attacks on Muslims in India, and have been designated as hate speech by Twitter Inc.

While Twitter has suspended Mr. Hegde’s account as a result of such posts, prompting him to call for an investigation of the company, Facebook took no action until the Journal sought comment from the company about his “Corona Jihad” posts. Facebook removed some of them on Thursday. Mr. Hegde didn’t respond to a request for comment.

------------

Within hours of the videotaped message, which Mr. Mishra uploaded to Facebook, rioting broke out that left dozens of people dead. Most of the victims were Muslims, and some of their killings were organized via Facebook’s WhatsApp.

Riaz Haq said...

Facebook Executive Supported India’s Modi, Disparaged Opposition in Internal Messages
Some employees said the sentiments and actions conflicted with the company’s longstanding neutrality pledge

https://www.wsj.com/articles/facebook-executive-supported-indias-modi-disparaged-opposition-in-internal-messages-11598809348

In one of the messages, Ankhi Das, head of public policy in the country, posted the day before Narendra Modi swept to victory in India’s 2014 national elections: “We lit a fire to his social media campaign and the rest is of course history.”

“It’s taken thirty years of grassroots work to rid India of state socialism finally,” Ms. Das wrote in a separate post on the defeat of the Indian National Congress party, praising Mr. Modi as the “strongman” who had broken the former ruling party’s hold. Ms. Das called Facebook’s top global elections official, Katie Harbath, her “longest fellow traveler” in the company’s work with his campaign. In a photo, Ms. Das stood, smiling, between Mr. Modi and Ms. Harbath.
---------

Ms. Das made her sentiments on the race clear. When a fellow staffer noted in response to one of her internal posts that the BJP’s primary opponent, the Indian National Congress, had a larger following on Facebook than Mr. Modi’s individual page, Ms. Das responded: “Don’t diminish him by comparing him with INC. Ah well—let my bias not show!!!”

Internally, Ms. Das presented the company’s work with the BJP as benefiting Facebook as well.

“We’ve been lobbying them for months to include many of our top priorities,” she said of the BJP’s official platform, noting that the document was littered with the word “technology” and appeared to embrace Facebook’s desire for an expanded but less heavily regulated internet. “Now they just need to go and win the elections,” she wrote.
----------


The (Ankhi Das) posts cover the years 2012 to 2014 and were made to a Facebook group designed for employees in India, though it was open to anyone in the company globally who wanted to join. Several hundred Facebook employees were members of the group during those years.

Ms. Das is already at the center of a political outcry in India over Facebook’s handling of hate speech on the platform, following a Journal article earlier this month. That article showed that Ms. Das earlier this year opposed moves to ban from the platform a politician from Mr. Modi’s party whose anti-Muslim comments violated Facebook’s rules.

From its earliest days when it morphed from a college social network into a global political force, Facebook has presented itself as a neutral platform that doesn’t favor any party or viewpoint. The company’s head of global affairs, Nick Clegg, has said the company’s role is to provide the court, not “pick up a racket and start playing.” Chief Executive Mark Zuckerberg has repeatedly stressed his position that the company should remain politically neutral, including this year when he defended his decision not to act against provocative posts from President Trump.

Facebook on Tuesday said the posts by Ms. Das don’t show inappropriate bias.

“These posts are taken out of context and don’t represent the full scope of Facebook’s efforts to support the use of our platform by parties across the Indian political spectrum,” spokesman Andy Stone said.

Ms. Das didn’t respond to multiple requests for comment. She has apologized to colleagues for sharing a post described in the previous Journal article, in which she approvingly reposted an essay from a former Indian police official who said the country’s Muslims have historically been “a degenerate community.”

As in the U.S., Facebook’s India-based public policy team serves two functions. Staffers make and enforce the platform’s rules about what is and isn’t allowed to be posted, and they represent the company’s interests before governments. Critics both outside the company and inside have increasingly raised concerns about how those roles may conflict.

Riaz Haq said...

How to solve one billion complaints
The former head of Twitter India's news, politics, and government on the e-governance platform built for users and what he learned from it

https://restofworld.org/2020/raheel-khursheed-twitter-seva-india/

A former journalist for the local affiliate of CNN, Raheel Khursheed joined Twitter in 2014 as its first head of news, politics, and government in India. In 2016, he launched Twitter Seva, an e-governance platform that enabled Indian citizens to request assistance from government ministries through mentions on Twitter. Khursheed left Twitter in 2018 and has since co-founded a series of startups, including the video-streaming platform-as-a-service Laminar Global.

How do you solve a problem like one billion complaints?
Public systems in India are overstretched. Take the railway, for example, which carries more than 8 billion passengers a year. That scale often translates into apathy on the government’s end regarding any complaints about public services. It wasn’t as though the government didn’t have a complaint system earlier. But you would call a number, and that would fundamentally be the end of it. Who took your call, what happened to that call: nobody quite knew.

I saw that a lot of traditional hierarchies were flattening on Twitter: a lot of regular people tagging high-powered ministers and getting almost immediate responses, which is rare in India. I was asking myself: Where can I go with this next? Twitter Seva, which means Twitter service, was the natural progression.

Our system brought the complaints into the sunlight. When you put these complaints on our platform, they are public. If there is prominence attached to that complaint — if an editor, influencer, or the community has seen it — the imperative to respond is much higher. You get shamed if you don’t respond.

I remember an incident where a man was traveling on a train, and the air-conditioning was not working. When he told the staff, nobody listened. And then he tweeted, and within minutes, he had staff in the coach working to fix the problem. They even checked to see if he was satisfied with the outcome.

We built a framework of metrics. The focus suddenly went from how many followers to what your presence on your platform was worth. It was about how soon you could resolve an issue. We moved the product from a vanity metric to an impact metric.

It just takes one early adopter
The obvious challenge was getting the government to sign on. People didn’t immediately see the benefit of it. I had gone to the Mumbai police and the Delhi police repeatedly, and those conversations didn’t go anywhere.

So we started this experiment with the Bangalore police. We built this on the police commissioner being an early adopter of the platform, who was super sold on it. We convinced him that he could do a lot more with the platform, and that we could create a workflow to help. He generously opened up his organization for us to conduct this experiment.

We manually kept track of each of their Twitter mentions and assigned it to the relevant officer to resolve the issues tweeted. We eventually automated the process, and at its peak, the Bangalore police were addressing 500 tickets a day from Twitter.
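The workflow just described (watch public mentions, open a ticket, route it to the responsible officer, and score the department on resolution time rather than follower counts) can be sketched in a few lines of Python. This is a hypothetical reconstruction: the ticket fields, routing table, and function names are all assumptions, not the actual Twitter Seva implementation, which was internal and is not public.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Ticket:
    tweet_id: str
    category: str                      # e.g. a complaint about coach air-conditioning
    opened: datetime
    assignee: str = ""
    resolved: Optional[datetime] = None

# Route each complaint category to the officer or desk responsible for it
# (hypothetical categories and assignees).
ROUTING = {"coach-ac": "onboard-staff", "ticketing": "booking-desk"}

def open_ticket(tweet_id: str, category: str, now: datetime) -> Ticket:
    """Turn a public mention into a ticket assigned to the relevant officer."""
    return Ticket(tweet_id, category, now, assignee=ROUTING.get(category, "triage"))

def mean_resolution_hours(tickets):
    """The 'impact metric' mentioned above: mean hours from complaint to fix."""
    done = [t for t in tickets if t.resolved is not None]
    if not done:
        return None
    total = sum((t.resolved - t.opened).total_seconds() for t in done)
    return total / 3600 / len(done)

# A complaint tweeted at 9:00 and resolved two hours later.
t0 = datetime(2016, 5, 1, 9, 0)
ticket = open_ticket("123", "coach-ac", t0)
ticket.resolved = t0 + timedelta(hours=2)
print(ticket.assignee, mean_resolution_hours([ticket]))  # → onboard-staff 2.0
```

The design point is the metric: measuring mean time-to-resolution per department, rather than follower counts, is what turned the public dashboard into the shaming-and-praise feedback loop the interview describes.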

Fundamentally, this created a virtuous circle, where we had government officers respond to complaints, and then immediately, people would praise them. It’s not as if people in government are doing their jobs and getting patted on their backs every day. It was a feedback loop — and the department got hooked on resolving issues even faster.

People thought, “If the railways can do it, then we can too.” Once they realized we had an actual workflow for them, it wasn’t a hard sell. By the time I left Twitter, we had rolled Twitter Seva out to at least 15 ministries.

Riaz Haq said...

The existential threat from cyber-enabled information warfare
Herbert Lin

https://www.tandfonline.com/doi/abs/10.1080/00963402.2019.1629574

Corruption of the information ecosystem is not just a multiplier of two long-acknowledged existential threats to the future of humanity – climate change and nuclear weapons. Cyber-enabled information warfare has also become an existential threat in its own right, its increased use posing the possibility of a global information dystopia, in which the pillars of modern democratic self-government – logic, truth, and reality – are shattered, and anti-Enlightenment values undermine civilization as we know it around the world.

Riaz Haq said...

41 rights groups from across the world write to #Facebook's Zuckerberg, demand Ankhi Das's removal from her role in the company “should the audit or investigation reinforce the details of The Wall Street Journal”. #Islamophobia #HateSpeech #Modi #BJP https://www.huffingtonpost.in/entry/ankhi-das-facebook-india-audit_in_5f59cb06c5b67602f6009ad6?ncid=other_twitter_cooo9wqtham&utm_campaign=share_twitter

The audit had reportedly been triggered by Wall Street Journal reports which showed Das had “opposed applying hate-speech rules” to at least four individuals and groups linked with the BJP, saying it would hurt the company’s business prospects in India. Das also supported Prime Minister Narendra Modi during the 2014 Indian general elections and openly talked of efforts to help the BJP win, saying the company “lit a fire” to Modi’s social media campaign, WSJ had reported.

Buzzfeed’s Pranav Dixit reported that Das had last month apologised to employees in the company for sharing a post in 2019 which called India’s Muslims a “degenerate community”. The post had been shared at a time the country was witnessing widespread protests against the Citizenship Amendment Act and the National Register of Citizens.

The non-governmental rights organisations from across the world, who wrote to Zuckerberg on Wednesday, demanded that Ankhi Das be removed from her role in the company “should the audit or investigation reinforce the details of The Wall Street Journal”.

Their letter said that the audit of Facebook India “must be removed entirely from the influence of the India office and jointly overseen by Menlo Park staff and civil society groups with expertise in Caste and Religious Bias.”

Time Magazine, which first reported on the audit in August, had said it was being conducted by the US law firm Foley Hoag and would include interviews with senior Facebook staff and members of civil society in India.

Signatories to the letter said that “mass riots in India spurred on by content posted on Facebook have been occurring for at least seven years.”

“A mislabeled video on social media was instrumental in stoking the horrific 2013 Muzaffarnagar riots in which 62 people were killed. A BJP politician was even arrested for sharing the video. This should have been enough to prompt Mr. Zuckerberg and Facebook to take a step back from operations and conduct a human rights audit to ensure Facebook had the necessary corporate competencies and had taken human rights into account. Despite all this, the company decided to expand in India without hesitation.”

The letter cited international watchdogs and academics, saying “circumstances in India show the potential for genocide.”

“Mr. Zuckerberg, when you said “never again” after Myanmar, did you actually mean “Over and over again?” Myanmar is not an aberration. We are seeing the same playbook that was used to incite genocide in Rwanda in 1994 playing out in India. Then, radio broadcasts from government radio stations spread misinformation that helped incite ordinary citizens to take part in the massacres of their neighbors. Now, instead of radio stations, events like the North East Delhi pogrom are stoked by misinformation and hate speech shared on Facebook.”

The signatories asked for Facebook to work with civil society groups and human rights activists in India.

Facebook executives in India were questioned by an Indian Parliamentary panel led by Shashi Tharoor last week.

Both BJP and Congress members of the panel accused the social media giant of colluding and influencing opinion, a charge denied by the company.

---

A Delhi legislative assembly committee on peace and harmony found Facebook “prima facie guilty of a role” in the Delhi riots in February, panel chief Raghav Chadha had said late August, adding that the company should be treated as co-accused in the case.

Riaz Haq said...

“I Have Blood On My Hands”: A Whistleblower Data Scientist Says #Facebook Ignored Global #Political Manipulation. #Myanmar #Rohingya #India #Muslims #Hate #Islamophobia

https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo

Facebook ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world, according to an explosive memo sent by a recently fired Facebook employee and obtained by BuzzFeed News.

The 6,600-word memo, written by former Facebook data scientist Sophie Zhang, is filled with concrete examples of heads of government and political parties in Azerbaijan and Honduras using fake accounts or misrepresenting themselves to sway public opinion. In countries including India, Ukraine, Spain, Brazil, Bolivia, and Ecuador, she found evidence of coordinated campaigns of varying sizes to boost or hinder political candidates or outcomes, though she did not always conclude who was behind them.

“In the three years I’ve spent at Facebook, I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions,” wrote Zhang, who declined to talk to BuzzFeed News. Her LinkedIn profile said she “worked as the data scientist for the Facebook Site Integrity fake engagement team” and dealt with “bots influencing elections and the like.”

“I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” she wrote.

The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It's also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.

“I know that I have blood on my hands by now,” Zhang wrote.

These are some of the biggest revelations in Zhang’s memo:

It took Facebook’s leaders nine months to act on a coordinated campaign “that used thousands of inauthentic assets to boost President Juan Orlando Hernandez of Honduras on a massive scale to mislead the Honduran people.” Two weeks after Facebook took action against the perpetrators in July, they returned, leading to a game of “whack-a-mole” between Zhang and the operatives behind the fake accounts, which are still active.
In Azerbaijan, Zhang discovered the ruling political party “utilized thousands of inauthentic assets... to harass the opposition en masse.” Facebook began looking into the issue a year after Zhang reported it. The investigation is ongoing.
Zhang and her colleagues removed “10.5 million fake reactions and fans from high-profile politicians in Brazil and the US in the 2018 elections.”
In February 2019, a NATO researcher informed Facebook that "he’d obtained Russian inauthentic activity on a high-profile U.S. political figure that we didn’t catch." Zhang removed the activity, “dousing the immediate fire,” she wrote.
In Ukraine, Zhang “found inauthentic scripted activity” supporting both former prime minister Yulia Tymoshenko, a pro–European Union politician and former presidential candidate, as well as Volodymyr Groysman, a former prime minister and ally of former president Petro Poroshenko. “Volodymyr Zelensky and his faction was the only major group not affected,” Zhang said of the current Ukrainian president.
Zhang discovered inauthentic activity — a Facebook term for engagement from bot accounts and coordinated manual accounts — in Bolivia and Ecuador but chose “not to prioritize it,” due to her workload. The amount of power she had as a mid-level employee to make decisions about a country’s political outcomes took a toll on her health.

Riaz Haq said...

HISTORY: IDEA TO REALITY: NED (National Endowment For Democracy) AT 30

https://www.ned.org/about/history/

"In the aftermath of World War II, faced with threats to our democratic allies and without any mechanism to channel political assistance, U.S. policy makers resorted to covert means, secretly sending advisers, equipment, and funds to support newspapers and parties under siege in Europe. When it was revealed in the late 1960’s that some American PVO’s were receiving covert funding from the CIA to wage the battle of ideas at international forums, the Johnson Administration concluded that such funding should cease, recommending establishment of “a public-private mechanism” to fund overseas activities openly."

-----------------
Training for media students

https://www.thenews.com.pk/print/715059-training-for-media-students

Islamabad: Women Media Centre, in collaboration with the National Endowment for Democracy, arranged a five-day electronic media training course for young aspiring journalists on the topic ‘Impact of Covid-19 pandemic on women and media in Pakistan’ at a local hotel here, says a press release.

Riaz Haq said...

Liberal Power: Liberals see #political authoritarianism in #Republicans clinging to power via the Senate’s rural bias, conservatives increasingly see #GOP as the only bulwark against the #cultural authoritarianism inherent in #tech & #media consolidation. https://www.nytimes.com/2020/10/17/opinion/where-liberal-power-lies.html?smid=tw-share

A striking thing about the current moment is that if you switch back and forth between reading conservatives and liberals, you see mirror-image anxieties about authoritarianism and totalitarianism, which each side believes are developing across the partisan divide.

Last Sunday I wrote in response to liberals who fear a postelection coup or a second-term slide toward autocracy, arguing (not for the first time) that Donald Trump’s authoritarian tendencies are overwhelmed by his incapacities, his distinct lack of will-to-power, and the countervailing power of liberalism in American institutions.

But then the ensuing week brought a wave of conservative anxieties about creeping authoritarianism. The source of the right’s agita was Twitter and Facebook, which decided to completely block a New York Post story featuring a cache of alleged Hunter Biden emails (with a very strange chain-of-custody back story) on the suspicion that they were the fruit of hacking, and in Twitter’s case to suspend some media accounts that shared the Post story even in critique.

“This is what totalitarianism looks like in our century,” the Post’s Op-Ed editor, Sohrab Ahmari, wrote in response: “Not men in darkened cells driving screws under the fingernails of dissidents, but Silicon Valley dweebs removing from vast swaths of the internet a damaging exposé on their preferred presidential candidate.”

Ahmari’s diagnosis is common among my friends on the right. In his new book “Live Not By Lies,” for instance, Rod Dreher warns against the rise of a “soft totalitarianism,” distinguished not by formal police-state tactics but by pressure from the heights of big media, big tech and the education system, which are forging “powerful mechanisms for controlling thought and discourse.”

Dreher is a religious conservative, but many right-of-center writers who are more culturally liberal (at least under pre-2016 definitions of the term) share a version of his fears. Indeed, what we call the American “right” increasingly just consists of anyone, whether traditionalist or secularist or somewhere in between, who feels alarmed by growing ideological conformity within the media and educational and corporate establishments.

Let me try to elaborate on what this right is seeing. The initial promise of the internet era was radical decentralization, but instead over the last 20 years, America’s major cultural institutions have become consolidated, with more influence in the hands of fewer institutions. The decline of newsprint has made a few national newspapers ever more influential, the most-trafficked portions of the internet have fallen under the effective control of a small group of giant tech companies, and the patterns of meritocracy have ensured that the people staffing these institutions are drawn from the same self-reproducing professional class. (A similar trend may be playing out with vertical integration in the entertainment business, while in academia, a declining student population promises to close smaller colleges and solidify the power of the biggest, most prestigious schools.)


----------
“This is what totalitarianism looks like in our century” by Sohrab Ahmari https://nypost.com/2020/10/14/if-unreliable-is-the-issue-why-did-social-media-never-block-anti-trump-stories/