AP: Cyborgs, Trolls and bots: A guide to online misinformation


hanimmal

Well-Known Member
https://apnews.com/article/technology-malware-elections-crime-cybercrime-913ee5d56affa97fc5d9c639c4a284ab

Microsoft announced legal action Monday seeking to disrupt a major cybercrime digital network that uses more than 1 million zombie computers to loot bank accounts and spread ransomware, which experts consider a major threat to the U.S. presidential election.

The operation to knock offline command-and-control servers for a global botnet that uses an infrastructure known as Trickbot to infect computers with malware was initiated with a court order that Microsoft obtained in Virginia federal court on Oct. 6. Microsoft argued that the crime network is abusing its trademark.

“It is very hard to tell how effective it will be but we are confident it will have a very long-lasting effect,” said Jean-Ian Boutin, head of threat research at ESET, one of several cybersecurity firms that partnered with Microsoft to map the command-and-control servers. “We’re sure that they are going to notice and it will be hard for them to get back to the state that the botnet was in.”


Cybersecurity experts said that Microsoft’s use of a U.S. court order to persuade internet providers to take down the botnet servers is laudable, but added that the effort is not apt to succeed because too many providers won’t comply and because Trickbot’s operators have a decentralized fallback system and use encrypted routing.

Paul Vixie of Farsight Security said via email: “experience tells me it won’t scale — there are too many IP’s behind uncooperative national borders.” And the cybersecurity firm Intel 471 reported no significant hit on Trickbot operations Monday and predicted “little medium- to long-term impact” in a report shared with The Associated Press.

But ransomware expert Brett Callow of the cybersecurity firm Emsisoft said that a temporary Trickbot disruption could, at least during the election, limit attacks and prevent the activation of ransomware on systems already infected.

The announcement follows a Washington Post report Friday of a major — but ultimately unsuccessful — effort by the U.S. military’s Cyber Command to dismantle Trickbot beginning last month with direct attacks rather than asking online services to deny hosting to domains used by command-and-control servers.

A U.S. policy called “persistent engagement” authorizes U.S. cyberwarriors to engage hostile hackers in cyberspace and disrupt their operations with code, something Cybercom did against Russian misinformation jockeys during U.S. midterm elections in 2018.

Created in 2016 and used by a loose consortium of Russian-speaking cybercriminals, Trickbot is a digital superstructure for sowing malware in the computers of unwitting individuals and websites. In recent months, its operators have been increasingly renting it out to other criminals who have used it to sow ransomware, which encrypts data on target networks, crippling them until the victims pay up.

One of the biggest reported victims of Ryuk, a ransomware variety sowed by Trickbot, was the hospital chain Universal Health Services, which said all 250 of its U.S. facilities were hobbled in an attack last month that forced doctors and nurses to resort to paper and pencil.

U.S. Department of Homeland Security officials list ransomware as a major threat to the Nov. 3 presidential election. They fear an attack could freeze up state or local voter registration systems, disrupting voting, or knock out result-reporting websites.

Trickbot is a particularly robust internet nuisance. Called “malware-as-a-service,” its modular architecture lets it be used as a delivery mechanism for a wide array of criminal activity. It began mostly as a so-called banking Trojan that attempts to steal credentials from online bank accounts so criminals can fraudulently transfer cash.

But recently, researchers have noted a rise in Trickbot’s use in ransomware attacks targeting everything from municipal and state governments to school districts and hospitals. Ryuk and another type of ransomware called Conti — also distributed via Trickbot — dominated attacks on the U.S. public sector in September, said Callow of Emsisoft.

Alex Holden, founder of Milwaukee-based Hold Security, tracks Trickbot’s operators closely and said the reported Cybercom disruption — involving efforts to confuse its configuration through code injections — succeeded in temporarily breaking down communications between command-and-control servers and most of the bots.

“But that’s hardly a decisive victory,” he said, adding that the botnet rebounded with new victims and ransomware.

The disruption — in two waves that began Sept. 22 — was first reported by cybersecurity journalist Brian Krebs.

The AP could not immediately confirm the reported Cybercom involvement.
 

hanimmal

Well-Known Member

An account featuring the image of a Black police officer, President Trump and the words “VOTE REPUBLICAN” had a brief but spectacular run on Twitter. In the six days after it became active last week, it tweeted just eight times but garnered 24,000 followers, with its most popular tweet liked 75,000 times.

Then, on Sunday, the account was gone — suspended by Twitter for breaking its rules against platform manipulation.
The remarkable reach of @CopJrCliff and other fake accounts from supposed Black Trump supporters highlights how an account can be effective at pushing misleading narratives in just a few days — faster than Twitter can take it down.

A network of more than two dozen similar accounts, many of them using identical language in their tweets, recently has generated more than 265,000 retweets or other amplifying “mentions” on Twitter, according to Clemson University social media researcher Darren Linvill, who has been tracking them since last weekend. Several had tens of thousands of followers, and all but one have now been suspended.

Researchers call fake accounts featuring supposed Black users “digital blackface,” a reference to the now-disgraced tactic of White people darkening their faces for film or musical performances intended to mimic African Americans.

Many of the accounts used profile pictures of Black men taken from news reports or other sources. Several of the accounts claimed to be from members of groups with pro-Trump leanings, including veterans, police officers, steelworkers, businessmen and avid Christians. One of the fake accounts had, in the place of a profile photo, the words “black man photo” — a hint of sloppiness by the network’s creators.

“It’s asymmetrical warfare,” said Linvill, lead researcher for the Clemson University Media Forensics Hub. “They don’t have to last long. And they are so cheap to produce that you can get a lot of traction without a whole lot of work. Thank you, Twitter.”

Black voters are being targeted in disinformation campaigns, echoing the 2016 Russian playbook

Linvill said he found some evidence of foreign origins of the network, with a few traces of the Russian Cyrillic alphabet appearing in online records of the accounts. One account previously tweeted to promote an escort service in Turkey, Linvill found.

But overall, the origins of the coordinated effort are hard to know. What distinguished the accounts were their similarity to each other, the content they tweeted and the remarkable reach they achieved in a few days of activity. Several also followed each other.
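One simple way researchers surface that kind of similarity (not necessarily Linvill's actual method) is to normalize tweet text and flag phrases posted verbatim by several distinct accounts. A minimal sketch, with hypothetical account names and an illustrative threshold:

```python
from collections import defaultdict

def find_copypasta(tweets, min_accounts=3):
    """Group (account, text) pairs by normalized text and return phrases
    posted verbatim by at least `min_accounts` distinct accounts, one
    signal of a coordinated copy-paste network."""
    by_text = defaultdict(set)
    for account, text in tweets:
        # Collapse case and whitespace so trivial edits don't hide reuse.
        normalized = " ".join(text.lower().split())
        by_text[normalized].add(account)
    return {text: sorted(accounts)
            for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}
```

Feeding in the 15 accounts that tweeted "YES IM BLACK AND IM VOTING FOR TRUMP!!!" would return that phrase keyed to the full account list; real analyses layer on fuzzier matching and timing data, but exact-text clustering alone catches the sloppiest networks.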

Twitter spokesman Trenton Kennedy said Twitter already had taken down some of the network identified by Linvill for violating rules against platform manipulation and spam.

“Our teams are working diligently to investigate this activity and will take action in line with the Twitter Rules if Tweets are found to be in violation,” Kennedy said in a statement.

The @CopJrCliff account, while claiming to be from a police officer in the swing state of Pennsylvania, in fact featured a profile picture cribbed from a recent online article about a police officer from Portland, Ore., the site of prolonged protests over racial discrimination. The account supposedly was opened in 2017 but first tweeted last week, on Oct. 6.

It was one of 15 accounts using nearly identical language in tweets: “YES IM BLACK AND IM VOTING FOR TRUMP!!!”

The Portland officer, Jakhary Jackson, became a darling of conservative news organizations this summer, after criticizing White protesters for hurling invective at members of law enforcement, including at Black officers, in an interview with a local television station. Through the Portland Police Bureau, Jackson did not immediately respond to a request for comment about the account using his likeness.

The network of fake accounts claiming to represent Black Trump supporters, Linvill’s research shows, became increasingly active in the past two months, in the aftermath of a Republican nominating convention that prominently featured African Americans seeking to soften the president’s image and challenge persistent claims that he is racist. Attracting more voters from the traditionally Democratic constituency of Black voters has been a key element of Trump’s reelection campaign, especially as his support has dipped among other demographics.

Trump’s first public address since announcing his coronavirus diagnosis, delivered Saturday from the White House, was to conservative activists rallying around the mantra of “Blexit,” a campaign to convince African Americans and other minorities to leave the Democratic Party.

The president’s campaign manager, Bill Stepien, suggested to reporters in a call on Monday that Trump will be able to offset declining support among seniors “by gains in certain voting populations — Black, Hispanic and others, based on the president's appeal, his policies and the outreach he's been conducting for the last four years.”

Still, surveys point to a chasm in preferences among Black voters. Former vice president Joe Biden outperforms Trump by 81 percentage points with the demographic, according to the Pew Research Center. Black Americans are among the most likely voters to indicate their choice is “for Biden,” according to Pew, rather than simply against Trump.

Cambridge Analytica database identified Black voters as ripe for ‘deterrence,’ British broadcaster says

Their staunch support for Democrats has made Black voters frequent targets of voter suppression and other demobilization efforts, often more potent than deceptive online campaigns aimed at persuasion.

“The damage is done,” said Filippo Menczer, a professor of informatics and computer science at Indiana University at Bloomington. “There is payoff just in getting the volume out there, and the fact that the original post is gone doesn’t really matter.”

He credited Twitter for taking down the accounts but said the company needs to act more quickly to disrupt networks that “manufacture echo chambers” — and should be more transparent about the actors behind them, whether their origins are foreign or domestic.

The network of fake accounts that was suspended in recent days included some users who were not portrayed as Black, said Linvill. They included three fake accounts for Trump’s press secretary, Kayleigh McEnany, and one for Erica Kious, a hair stylist whose San Francisco salon controversially gave a haircut to House Speaker Nancy Pelosi (D-Calif.) in August, in violation of local rules. These accounts were retweeted by several of the fake accounts from supposed Black Trump supporters before getting removed.

There also was a fake account from a supposed White police officer, retweeted by some of the fake accounts, that said, “IM A WHITE COP AND I MADE MY BLACK COP PARTNER SWALLOW THE RED PILL,” a reference to convincing someone to accept uncomfortable facts. “He said I opened his eyes and he is voting for trump. We have to help administer the RED PILL to DEMS!!!”

Among the set of short-lived accounts featuring supposed Black supporters of Trump was one that claimed to be from a married U.S. Marine named Ted Katya, with three children and a faith in God. The profile describes Katya as a “Newly converted republican conservative.” It had more than 18,000 followers before being suspended.

But the profile photo was of another man, a former football player from Michigan with a different name who saved a 3-year-old boy escaping a burning building in Phoenix in July. The photo of the Ted Katya account is identical to one in a news report on the rescue. The fake account used the same #BlacksForTrump hashtag as several others in the network, as well as the repeatedly reused, “YES IM BLACK AND IM VOTING FOR TRUMP!!!”

Also tweeting the same words was an account named Keith The Mill Worker #Trump2020, which had more than 11,000 followers and the same photo of Trump and “VOTE REPUBLICAN” banner as the @CopJrCliff account. The supposed steel millworker account also retweeted Trump several times before getting suspended.
 

hanimmal

Well-Known Member
https://apnews.com/article/election-2020-race-and-ethnicity-slavery-media-social-media-caeaabd69d5da31d760cbaf6ba061867
RIO RANCHO, N.M. (AP) — A group of U.S. Black scholars, activists and writers has launched a new project to combat misleading information online around voting, reparations and immigration, supporters announced Friday.

The newly formed National Black Cultural Information Trust seeks to counter fake social media accounts and Twitter trolls that often discourage Black voters from participating in elections or seek to turn Black voters against other communities of color.

Jessica Ann Mitchell Aiwuyor, the project’s founder, said some dubious accounts behind the social media #ADOS movement — which stands for American Descendants of Slavery — have urged Black voters to skip the presidential election.

Some accounts also use the hashtag to inflame supposed divisions between African Americans and Black immigrants from the Caribbean and Latin America, she said. Most recently, some social media users have used #ADOS to blame Somali immigrants in Minneapolis for the May 2020 death of George Floyd rather than the police officer charged with killing him.

“The disinformation used to target Black communities is cultural,” said Aiwuyor, an African American activist and scholar. “It’s cultural disinformation, which uses cultural issues to infuse false information and cause confusion.”

Aiwuyor said some social media accounts are using “digital Blackface” — posing as Black users when they aren’t — or resurrecting old accounts that haven’t tweeted in four years to spread false information about where to vote or where candidates stand on issues.

Members of the National Black Cultural Information Trust plan to monitor social media posts and flag those spreading misleading and fake stories. They plan to use crowdsourcing, website tools that show if accounts have troll-like behavior, and scholars on standby to counter any claims about slavery or voting.

Through its website, the project will direct users to discussions and stories around Black voting and U.S. reparation supporters who reject xenophobic rhetoric and push coalition-building with Black immigrants and Latinos.
 

hanimmal

Well-Known Member
Here is a YouTube stream to get you all in a twist, enjoy

lmao,
Our new troll is pushing propaganda YouTube videos spammed at Trump's cult, feeding them amplified hateful lies desperate to paint anything and everything 'not Trump' as 'the left', often using Trump's pretend boogeyman 'ANTIFA'.
The goal of this troll is to get people to burn themselves out on their incredibly stupid spam.

Nobody believes this shit (unless they are brainwashed into being real life Trump trolls (aka Useful idiots) themselves). But by paying these idiots some bitcoin or through an American entity it is cheap for bad actors like the Russian military to get a bunch of it made.

It is easy for them to slap together bullshit clickbait nonstop for paid foreign/domestic trolls to push across all media by inflating the stupid videos' views with click farms.

Then the trolls post this crap on forums like RIU to waste our time trying to combat their pre-made nonsense and maybe get a stooge or two to like the propaganda click bait. And that is when the real mess that this attack on our democracy starts, because if you click on the video, Youtube sees this as something you are interested in and you start to get even more nonsense flooding your feed.

At this point you see the words 'ANTIFA' and "BLM" linked to 'the radical left' so often across your youtube feed that it starts to seem like a real thing because your brain gets tricked into believing it, even though it is not true.
 

hanimmal

Well-Known Member
https://apnews.com/article/election-2020-virus-outbreak-joe-biden-senate-elections-media-f32410451f45102ddd4a82ebec8ac746
PROVIDENCE, R.I. (AP) — The email from a political action committee seemed harmless: if you support Joe Biden, it urged, click here to make sure you’re registered to vote.

But Harvard University graduate student Maya James did not click. Instead, she Googled the name of the soliciting PAC. It didn’t exist -- a clue the email was a phishing scam from swindlers trying to exploit the U.S. presidential election as a way to steal people’s personal information.

“There was not a trace of them,” James, 22, said. “It was a very inconspicuous email, but I noticed it used very emotional language, and that set off alarm bells.” She deleted the message, but related her experience on social media to warn others.

American voters face an especially pivotal, polarized election this year, and scammers here and abroad are taking notice — posing as fundraisers and pollsters, impersonating candidates and campaigns, and launching fake voter registration drives. It’s not votes they’re after, but to win a voter’s trust, personal information and maybe a bank routing number.

The Federal Bureau of Investigation, the Better Business Bureau and cybersecurity experts have recently warned of new and increasingly sophisticated online fraud schemes that use the election as an entry, reflecting both the proliferation of political misinformation and intense interest in this year’s presidential and Senate races.

“Psychologically, these scams play to our desire to do something - to get involved, to donate, to take action,” said Sam Small, chief security officer at ZeroFOX, a Baltimore, Maryland-based digital security firm.

Online grifters regularly shift tactics to fit current events, whether they are natural disasters, a pandemic or an election, according to Small. “Give them something to work with and they’ll find a way to make a dollar,” he said.

Foreign adversaries like Russia, China and Iran get much of the blame for creating fake social media accounts and spreading deceptive election information, largely because of efforts by groups linked to the Kremlin to interfere in the 2016 U.S. presidential election. In many instances, foreign disinformation campaigns make use of the same tools pioneered by cybercriminals: fake social media accounts, realistic-looking websites and suspicious links.

Online scams have flourished as so many of life’s routines move online during the pandemic. The FBI reported that complaints to its cybercrime reporting site jumped from 1,000 a day to 3,000-4,000 a day since the pandemic began.

Now, the final weeks of a contentious election are giving scammers yet another opportunity to strike.

“Every election is heated, but this one is very much so,” Paula Fleming, a chief marketing officer for the Better Business Bureau, said. “People are more trusting when they see it’s a political party or a candidate they like emailing them.”

The FBI warned Americans this month to watch out for election-related “spoofing,” when a scammer creates a campaign website or email address almost identical to a real one. A small misspelling or a slight change - using .com instead of .gov, for instance - are tell-tale signs of fraud, the agency said.
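The lookalike-domain pattern the FBI describes, a small misspelling or a swapped TLD, can be approximated with a plain string-similarity check. A minimal sketch, where the list of official domains and the similarity threshold are illustrative assumptions, not part of the FBI advisory:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of known-good domains for this illustration.
OFFICIAL_DOMAINS = ["donaldjtrump.com", "joebiden.com", "vote.gov"]

def looks_spoofed(domain: str, threshold: float = 0.7) -> bool:
    """Flag a domain that is suspiciously similar to, but not identical to,
    a known official domain (e.g. 'vote.com' imitating 'vote.gov')."""
    domain = domain.lower().strip()
    if domain in OFFICIAL_DOMAINS:
        return False  # exact match: the real site
    return any(
        SequenceMatcher(None, domain, official).ratio() >= threshold
        for official in OFFICIAL_DOMAINS
    )
```

A real phishing filter would also normalize Unicode homoglyphs and check registration age, but even this near-miss test catches the `.com`-for-`.gov` swap the FBI singles out.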

Investigators at ZeroFOX routinely scan dark corners of the internet to identify threats against its customers. This summer, they found a large cache of personal data for sale. The data dump included the phone numbers, ages and other basic demographic information for thousands of Americans. What made the data remarkable was that it also contained partisan affiliation, the “cherry on top” for anyone interested in buying the material, Small said.

“Someone could use that to pretend to be a political action committee raising money, to try to get your personal information or your account numbers,” he said.

In 2018, scammers posed as employees from the non-profit voting advocacy group TurboVote and phoned people in Georgia, Washington and at least three other states asking them to register to vote. The calls prompted complaints to state election officials, who issued a public warning.

“TurboVote doesn’t call. You’ll never get a call from us,” group spokeswoman Tanene Allison said of the organization that helped register millions of voters in 2018. “If you’re hearing something and you can’t verify the source, always check with your local election officials.”

Voters should be cautious of claims that sound too good to be true, fraud experts say. Before donating to any group that reached out by email or text, check their website or look to see if they’re registered as a charity or campaign. Does the organization have a physical location and phone number? Scammers often do not.

Beware of pushy pollsters or fundraisers, or emails or websites that use emotionally loaded language that makes you angry or fearful, a tactic that experts say plays on human psychology. And don’t reveal personal information over the phone.

“It is tricky because there are legitimate organizations out there that are trying to help people register to vote,” said Eva Velasquez, a former financial crimes investigator who now runs the Identity Theft Resource Center, based in San Diego. “But you don’t have to act in the moment. Take a few minutes and do a little homework.”
 

hanimmal

Well-Known Member
https://www.washingtonpost.com/technology/2020/10/30/trump-twitter-domestic-disinformation/
President Trump launched into a tweetstorm in April, banging out nine retweets of the Centers for Disease Control’s account on the dangers of misusing disinfectant and other topics — two days after he himself had suggested that people could inject themselves with bleach to cure covid-19.

But those tweets spread in an odd pattern: More than half the 3,000 accounts retweeting Trump did so in near-perfect synchronicity, so that the 945th retweet was the same number of seconds apart as the 946th, University of Colorado information science professor Leysia Palen found.

The unusual finding underscores some of the little-known ways in which Trump’s social media army — composed of devoted followers and likely assistance from software that artificially boosts his content — has helped him develop one of the world’s most powerful political megaphones, unlike any other in the English-speaking world.

That megaphone has become a frequent source of misinformation, some of it so toxic that Harvard researchers recently dubbed attacks on mail-in voting by Trump and right-leaning leaders “a highly effective disinformation campaign with potentially profound effects ... for the legitimacy of the 2020 election.”

Trump’s singular ability to spread his messages, often disseminating false or unsubstantiated information, comes from his prominence as president and the relentless clip of his tweeting to his 87 million followers. He is also aided by a vital feedback loop — often discussed but poorly understood — among the president, high-profile influencers and rank-and-file followers that both push messages in his direction and promote every online utterance.

His feedback loop, according to several new and forthcoming studies, has become a leading threat to the integrity of political debate in the United States, with an impact that to date appears far more damaging than the efforts of Russian operatives or other foreign adversaries.

A study released Thursday by the Election Integrity Partnership, a consortium of misinformation researchers, found that just 20 conservative, pro-Trump Twitter accounts — including the president’s own @realDonaldTrump — were the original source of one-fifth of retweets pushing misleading narratives about voting.

A recent Cornell University study, meanwhile, concluded that Trump was also the “largest driver” of misinformation in the public conversation about the coronavirus during the first half of 2020. The researchers found that nearly 40 percent of articles containing misinformation about the virus mentioned him, including articles about false cures and blaming China for the disease.

“Trump is hands down the most significant accelerant and amplifier for disinformation in the election,” said Graham Brookie, director and managing editor of the Atlantic Council’s Digital Forensic Research Lab, a branch of the prominent nonpartisan think tank that is a leading source of research on foreign and domestic disinformation. “The scale and scope of domestic disinformation is far greater than anything a foreign adversary could ever do to us."

Study shows Trump is a super-spreader — of coronavirus misinformation

The president, his reelection campaign and congressional Republicans have repeatedly said that Trump’s skill in deploying social media is key to delivering his message in the face of what they say is hostility from mainstream news sources and leading Silicon Valley companies. Trump and his defenders also contend that his comment about the potential medical value of ingesting bleach was intended as sarcasm, not a suggestion, and that the extensive news coverage of that and other elements of the White House’s handling of the coronavirus pandemic are signs of systemic bias that a potent Twitter following helps him overcome.

Trump campaign spokeswoman Samantha Zager said in a statement, “Neither Big Tech nor the mainstream media is the arbiter of truth or elections. ... President Trump is a staunch advocate for a free and fair election where every vote counts exactly once.”
Her statement did not directly address allegations that he and his social media army have become leading sources of disinformation in the election. Nor did it address claims by some researchers that he is a bigger source of disinformation than Russia, or evidence that he gets a boost from automated accounts.

While researchers believe there is a significant degree of automation powering Trump’s megaphone, they do not contend that he or his supporters routinely break Twitter’s rules. The company allows enthusiastic followers to use software to automatically retweet people, and though the company bans “bulk retweeting,” it does not disclose at what point frequent retweeting crosses over into prohibited behavior. Twitter suspended 100 out of 200 sample accounts brought to its attention by The Washington Post for spammy activity or other violations.

Tech companies have spent the past four years preparing to fight foreign disinformation campaigns, but now they are confronted with a situation where their political caution and deference to free speech, coupled with the power of algorithms and loosely enforced rules, have turned their platforms into hubs of domestic misinformation — with the misleading comments often led by the president himself.

Twitter, for example, did not have a policy banning any form of misinformation until 2020, and until just months ago, both Twitter and Facebook exempted politicians — including Trump — from their rules on the grounds that their comments were too newsworthy to censor.

Twitter labels Trump’s tweets with a fact check for the first time

Of the more than 22,000 falsehoods Trump has shared since the start of his presidency through this month, according to The Washington Post Fact Checker, more than 3,700 of them have been on social media. The number of lies is six times greater so far this year than during his first year in office, the Fact Checker has found. Since May, Twitter has put warning labels and restricted viewership on at least 15 of those comments, while Facebook has removed half a dozen.

“Protecting the integrity of the conversation on Twitter by stopping platform manipulation and addressing misinformation remains one of our top priorities,” Twitter spokeswoman Liz Kelley said in a statement, adding that since the 2016 election the platform has added protections to reduce manipulation and misleading information.


Facebook declined to comment.

But some of those actions may be too late to make a difference.

Examining the impact of Twitter’s enforcement against an Aug. 23 Trump tweet in which he called mail-in ballots a “security disaster,” Kate Starbird, a disinformation expert at the University of Washington, found that Twitter’s disabling of the retweet button after labeling the tweet effectively stopped the content from spreading. But the tweet had already gone viral.

“Twitter has grown quicker in taking action on President Trump’s tweets” since then, said Starbird. “But due to the size of Trump’s following — and perhaps other factors like coordinated tweeting, automation and an unusually attentive audience — his tweets can still spread quite far before the platform takes a corrective action.”


Continues.
 

hanimmal

Well-Known Member
Continued:
Trump’s social media megaphone

Trump had just 20 million Twitter followers on Inauguration Day in January 2017. Today, he averages more than 1,000 tweets a month, with nearly 17,000 retweets each, an unparalleled volume in the English-speaking world, according to researchers. That means Trump benefits from a powerful ecosystem that amplifies every post, and from Twitter’s own loose rules and algorithms that give an additional boost to messages based on the engagement they receive.

When Palen and her team examined Trump’s tweetstorm on the CDC, they discovered that 56 percent of the 3,000 accounts retweeting each post did so in the same pattern over a period of 24 hours.

For example, an account called @tthseeker always retweeted Trump 42 minutes after he tweeted. Another account, @shauna33R, always tweeted 4½ seconds after that.

Part of the pattern involves a core group of more than 500 especially enthusiastic followers — with 832,000 followers among them — that retweeted Trump in roughly the same order and after roughly the same delays from April to September, regardless of what the president tweeted.

Palen said the pattern was “remarkable” and could not have happened by accident. She said it strongly suggests there is some combination of automated accounts and human retweeting at work.
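Palen's inference rests on how much each account's reaction delay varies: humans are noisy, schedulers are not. A toy version of that test (not her actual methodology), using made-up delay data that echoes the 42-minute @tthseeker example:

```python
from statistics import mean, pstdev

def flag_clockwork_accounts(observations, max_stdev=2.0):
    """observations maps account -> list of delays (seconds after each
    Trump tweet) across many tweets. An account whose delays have a tiny
    spread reacts on a near-constant schedule, suggesting automation."""
    flagged = {}
    for account, delays in observations.items():
        if len(delays) >= 3 and pstdev(delays) <= max_stdev:
            flagged[account] = round(mean(delays), 1)  # typical delay
    return flagged

# Illustrative data: a clockwork account firing ~2,520s (42 min) after
# every tweet, versus a human with wildly varying reaction times.
obs = {"@tthseeker": [2520, 2521, 2520, 2519],
       "@human_fan": [40, 900, 12, 3600]}
```

The two-second tolerance is an arbitrary assumption for the sketch; a serious analysis, like Palen's, would also compare the ordering of retweeters across tweets rather than delays alone.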

Then, comparing all Trump tweets in August to tweets from other high-profile accounts, such as New York Gov. Andrew M. Cuomo, British Prime Minister Boris Johnson and Canadian Prime Minister Justin Trudeau, as well as celebrity Kim Kardashian, Palen found that retweet patterns for the other accounts have about half the degree of similarity as Trump’s.

Twitter adds new warnings about misinformation in run up to election

The set of accounts identified by Palen swells with support for Trump and his political slogans, using hashtags that reference the idea that they are a social media army, such as #digitalsoldier, #maga and #fightback, and patriotic images such as flags and eagles.
More than 60 reference the QAnon conspiracy theory. Many of the accounts, in their profiles, describe the owners as conservatives, devout Christians, veterans and gun rights enthusiasts. Retweets of Trump, and of one another, are far more common than original tweets for many accounts.
Though Twitter bans bots, or fully automated accounts, the company allows a degree of automation in its service, by permitting people to use third-party software programs that automatically retweet or reply to tweets. In addition, Twitter’s algorithms are designed to reward engagement — or tweets that get retweeted or liked very quickly — which results in the company widely sharing that information beyond the person’s actual followers and in its trending feature. Twitter’s algorithms give lower weight to low-quality or spammy accounts.
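As a rough illustration of the dynamic described above (not Twitter's actual ranking system, whose details are not public), here is a toy scoring function in which fast engagement boosts a tweet's amplification while engagement from low-quality accounts is down-weighted. All names, weights and thresholds are invented.

```python
# Toy model: engagement velocity drives amplification, weighted by a
# per-account quality score so spammy accounts count for less.

def amplification_score(engagements, window_minutes, account_quality):
    """
    engagements: list of (account_id, kind) tuples seen within the window
    window_minutes: length of the observation window
    account_quality: dict mapping account_id -> weight in [0, 1]
    """
    kind_weight = {"retweet": 1.0, "like": 0.5}
    weighted = sum(
        kind_weight.get(kind, 0.0) * account_quality.get(acct, 0.5)
        for acct, kind in engagements
    )
    return weighted / window_minutes  # quality-weighted engagement per minute

quality = {"a": 1.0, "b": 1.0, "spam1": 0.1, "spam2": 0.1}
fast_organic = [("a", "retweet"), ("b", "retweet"), ("a", "like")]
spam_burst = [("spam1", "retweet")] * 3 + [("spam2", "retweet")] * 3

print(amplification_score(fast_organic, 5, quality))  # 0.5
print(amplification_score(spam_burst, 5, quality))    # 0.12
```

The point of the sketch: even twice as many retweets from low-quality accounts score lower than a handful of retweets from trusted ones, which is why coordinated networks of *real* accounts, as described in this article, are harder for such weighting to defeat than simple bot farms.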


Twitter’s Kelley said the company was still investigating the retweeting pattern. He said the 200 accounts identified by The Post did not appear to be using third-party software but were likely real people engaging in behaviors and tactics that the company frowns upon.

“We often see people using Twitter in a way that can appear and sometimes is spammy, but many times is just how some people use the service,” Kelley said in an email. “Bulk and aggressive Retweeting is a violation of the Twitter Rules and may result in action, but that does not mean the behavior is inauthentic.”

She said the company is considering adding a warning notice to users when they are retweeting too much and about to get sanctioned.

The University of Washington’s Starbird found that on topics such as the coronavirus and the election, between 7 and 10 percent of the accounts that retweeted Trump were affiliated with QAnon, an online conspiratorial movement that Facebook and Twitter recently curtailed, suggesting that his support on these subjects came from large numbers of followers who have violated Twitter’s rules.
Trump has in turn retweeted QAnon-affiliated accounts more than 250 times since taking office, according to the left-leaning watchdog group Media Matters.

Fighting manipulation is a “cat-and-mouse game, and the mouse has discovered a new trick,” said a person familiar with Twitter’s system who spoke on the condition of anonymity to comment on the new research.

How information gets to Trump

Researchers are also examining how disinformation makes its way to the president through campaigns that push distorted narratives in his direction.

At campaign rallies and during the presidential debate over the past month, for example, Trump has talked about how ballots with his name on them had been found “thrown in a river” — implying that politically motivated postal workers were sabotaging his chances of winning the election.

The tossed-ballots narrative originated Sept. 23 with an article on the conservative website Gateway Pundit. It quoted a news report about a batch of mail being found in a ditch in Greenville, Wis., and claimed, without citing evidence, that the incident was part of a Democratic plot to steal the 2020 election because the lost mail included some absentee ballots. The Gateway Pundit story was tweeted one hour and one minute later by the president’s son Eric Trump, who, along with his brother Donald Trump Jr., is among the most frequent tweeters about mail-in ballots of the 50 users Trump follows, according to researchers with the University of Washington.

The story was then tweeted by Charlie Kirk, founder of the conservative group Turning Point USA, by Breitbart News and by the TrumpWarRoom, according to researchers with the Election Integrity Partnership, a consortium that includes Stanford University, the University of Washington, the DFRLab, Graphika and other leading disinformation researchers. Within 24 hours, the Gateway Pundit story had 60,000 retweets. Trump then mentioned it during the Sept. 29 presidential debate, though he changed the ditch to a river, an error the White House later conceded.

The Wisconsin Elections Commission later announced that no Wisconsin ballots had actually been found among the tossed mail. The matter is still under investigation by the U.S. Postal Service.

Gateway Pundit founder Jim Hoft told The Post that the article “was accurate.”

Trump’s re-shared tweets help shield him from Twitter’s bans

Cook said he didn’t know exactly how his memes made their way to the president. He said Trump had retweeted them 20 to 30 times before Twitter suspended his account in June for promoting “manipulated media” and for copyright infringement related to the toddler video. Trump hosted him twice at the White House last year, including a visit to the Oval Office. Cook described his work as political satire. “Memes are not misinformation,” he said.

Path to Trump’s megaphone


“Much of the influence Trump has gained to shape the national conversation will far outlast his presidency,” Clemson’s Linvill added.
 

hanimmal

Well-Known Member
I just beat you up for the dumb shit you say, not because of who you stand for. Maybe a few smacks in the face is what you missed out on in childhood. Maybe if someone smacked you one in the middle of the noggin, you wouldn't be so quick to pull the trigger on things.
Since you can't admit whether you're just another paid foreign troll, and you keep spamming stupid shit, I've got to ask.

Are you a bot?

 

hanimmal

Well-Known Member
https://apnews.com/article/media-social-media-chuck-grassley-chris-murphy-a43992f3fc8c3f4ff198a838f748a0a9
BRUSSELS (AP) — The conversation taking place around two U.S. senators’ verified social media accounts remained vulnerable to manipulation through artificially inflated shares and likes from fake users, even amid heightened scrutiny in the run-up to the U.S. presidential election, an investigation by the NATO Strategic Communications Centre of Excellence found.

Researchers from the center, a NATO-accredited research group based in Riga, Latvia, paid three Russian companies 300 euros ($368) to buy 337,768 fake likes, views and shares of posts on Facebook, Instagram, Twitter, YouTube and TikTok, including content from verified accounts of Sens. Chuck Grassley and Chris Murphy.

Grassley’s office confirmed that the Republican from Iowa participated in the experiment. Murphy, a Connecticut Democrat, said in a statement that he agreed to participate because it’s important to understand how vulnerable even verified accounts are.

“We’ve seen how easy it is for foreign adversaries to use social media as a tool to manipulate election campaigns and stoke political unrest,” Murphy said. “It’s clear that social media companies are not doing enough to combat misinformation and paid manipulation on their own platforms and more needs to be done to prevent abuse.”

In an age when much public debate has moved online, widespread social media manipulation not only distorts commercial markets, it is also a threat to national security, NATO StratCom director Janis Sarts told The Associated Press.

“These kinds of inauthentic accounts are being hired to trick the algorithm into thinking this is very popular information and thus make divisive things seem more popular and get them to more people. That in turn deepens divisions and thus weakens us as a society,” he explained.

More than 98% of the fake engagements remained active after four weeks, researchers found, and 97% of the accounts they reported for inauthentic activity were still active five days later.

NATO StratCom did a similar exercise in 2019 with the accounts of European officials. They found that Twitter is now taking down inauthentic content faster and Facebook has made it harder to create fake accounts, pushing manipulators to use real people instead of bots, which is more costly and less scalable.

“We’ve spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm,” a Facebook company spokesperson said in an email.

But YouTube and Facebook-owned Instagram remain vulnerable, researchers said, and TikTok appeared “defenseless.”

“The level of resources they spend matters a lot to how vulnerable they are,” said Sebastian Bay, the lead author of the report. “It means you are unequally protected across social media platforms. It makes the case for regulation stronger. It’s as if you had cars with and without seatbelts.”

Researchers said that for the purposes of this experiment they promoted apolitical content, including pictures of dogs and food, to avoid actual impact during the U.S. election season.

Ben Scott, executive director of Reset.tech, a London-based initiative that works to combat digital threats to democracy, said the investigation showed how easy it is to manipulate political communication and how little platforms have done to fix long-standing problems.

“What’s most galling is the simplicity of manipulation,” he said. “Basic democratic principles of how societies make decisions get corrupted if you have organized manipulation that is this widespread and this easy to do.”

Twitter said it proactively tackles platform manipulation and works to mitigate it at scale.

“This is an evolving challenge and this study reflects the immense effort that Twitter has made to improve the health of the public conversation,” Yoel Roth, Twitter’s head of site integrity, said in an email.

YouTube said it has put in place safeguards to root out inauthentic activity on its site, and noted that more than 2 million videos were removed from the site in the third quarter of 2020 for violating its spam policies.

“We’ll continue to deal with attempts to abuse our systems and share relevant information with industry partners,” the company said in a statement.

TikTok said it has zero tolerance toward inauthentic behavior on its platform and that it removes content or accounts that promote spam or fake engagement, impersonation or misleading information that may cause harm.

“We’re also investing in third-party testing, automated technology, and comprehensive policies to get ahead of the ever-evolving tactics of people and organizations who aim to mislead others,” a company spokesperson said in an email.
 

schuylaar

Well-Known Member
When in doubt, log out. Social media and AI will be the death of us; it almost just got us.
 

DIY-HP-LED

Well-Known Member
https://apnews.com/article/election-2020-virus-outbreak-joe-biden-senate-elections-media-f32410451f45102ddd4a82ebec8ac746
PROVIDENCE, R.I. (AP) — The email from a political action committee seemed harmless: if you support Joe Biden, it urged, click here to make sure you’re registered to vote.

But Harvard University graduate student Maya James did not click. Instead, she Googled the name of the soliciting PAC. It didn’t exist -- a clue the email was a phishing scam from swindlers trying to exploit the U.S. presidential election as a way to steal peoples’ personal information.

“There was not a trace of them,” James, 22, said. “It was a very inconspicuous email, but I noticed it used very emotional language, and that set off alarm bells.” She deleted the message, but related her experience on social media to warn others.

American voters face an especially pivotal, polarized election this year, and scammers here and abroad are taking notice — posing as fundraisers and pollsters, impersonating candidates and campaigns, and launching fake voter registration drives. It’s not votes they’re after, but to win a voter’s trust, personal information and maybe a bank routing number.

The Federal Bureau of Investigation, the Better Business Bureau and cybersecurity experts have recently warned of new and increasingly sophisticated online fraud schemes that use the election as an entry, reflecting both the proliferation of political misinformation and intense interest in this year’s presidential and Senate races.

“Psychologically, these scams play to our desire to do something - to get involved, to donate, to take action,” said Sam Small, chief security officer at ZeroFOX, a Baltimore, Maryland-based digital security firm.

Online grifters regularly shift tactics to fit current events, whether they are natural disasters, a pandemic or an election, according to Small. “Give them something to work with and they’ll find a way to make a dollar,” he said.

Foreign adversaries like Russia, China and Iran get much of the blame for creating fake social media accounts and spreading deceptive election information, largely because of efforts by groups linked to the Kremlin to interfere in the 2016 U.S. presidential election. In many instances, foreign disinformation campaigns make use of the same tools pioneered by cybercriminals: fake social media accounts, realistic-looking websites and suspicious links.

Online scams have flourished as so many of life’s routines move online during the pandemic. The FBI reported that complaints to its cybercrime reporting site jumped from 1,000 a day to 3,000-4,000 a day since the pandemic began.

Now, the final weeks of a contentious election are giving scammers yet another opportunity to strike.

“Every election is heated, but this one is very much so,” Paula Fleming, a chief marketing officer for the Better Business Bureau, said. “People are more trusting when they see it’s a political party or a candidate they like emailing them.”

The FBI warned Americans this month to watch out for election-related “spoofing,” when a scammer creates a campaign website or email address almost identical to a real one. A small misspelling or a slight change - using .com instead of .gov, for instance - are tell-tale signs of fraud, the agency said.
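The FBI's advice lends itself to a simple automated check. This sketch flags a domain as a possible spoof if it shares its name with a trusted domain under a different suffix, or sits within a couple of character edits of one; the trusted-domain list here is illustrative only.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative allow-list; a real tool would load a vetted registry.
KNOWN_GOOD = ["vote.gov", "joebiden.com", "donaldjtrump.com"]

def looks_spoofed(domain):
    if domain in KNOWN_GOOD:
        return False
    for good in KNOWN_GOOD:
        # Same name, swapped suffix (.gov -> .com, for instance).
        if domain.split(".")[0] == good.split(".")[0]:
            return True
        # One or two characters off a trusted domain.
        if edit_distance(domain, good) <= 2:
            return True
    return False

print(looks_spoofed("vote.com"))          # True (suffix swap)
print(looks_spoofed("donaldjtrurnp.com")) # True (two edits from the real name)
print(looks_spoofed("example.org"))       # False
```

Lookalike detection is a genuinely hard problem (homoglyphs like Cyrillic characters defeat plain edit distance), so this is a first-pass heuristic, not a defense.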

Investigators at ZeroFOX routinely scan dark corners of the internet to identify threats against its customers. This summer, they found a large cache of personal data for sale. The data dump included the phone numbers, ages and other basic demographic information for thousands of Americans. What made the data remarkable was that it also contained partisan affiliation, the “cherry on top” for anyone interested in buying the material, Small said.

“Someone could use that to pretend to be a political action committee raising money, to try to get your personal information or your account numbers,” he said.

In 2018, scammers posed as employees from the non-profit voting advocacy group TurboVote and phoned people in Georgia, Washington and at least three other states asking them to register to vote. The calls prompted complaints to state election officials, who issued a public warning.

“TurboVote doesn’t call. You’ll never get a call from us,” group spokeswoman Tanene Allison said of the organization that helped register millions of voters in 2018. “If you’re hearing something and you can’t verify the source, always check with your local election officials.”

Voters should be cautious of claims that sound too good to be true, fraud experts say. Before donating to any group that reached out by email or text, check their website or look to see if they’re registered as a charity or campaign. Does the organization have a physical location and phone number? Scammers often do not.

Beware of pushy pollsters or fundraisers, or emails or websites that use emotionally loaded language that makes you angry or fearful, a tactic that experts say plays on human psychology. And don’t reveal personal information over the phone.
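That advice can be turned into a crude heuristic filter. The sketch below scores a message on emotionally loaded wording and requests for sensitive data; the word lists and example email are invented for illustration and are nowhere near a production-grade scam detector.

```python
# Invented phrase lists; a real filter would use far richer signals.
URGENT_WORDS = {"urgent", "immediately", "act now", "last chance"}
SENSITIVE_ASKS = {"social security", "bank account", "routing number", "password"}

def scam_signals(message):
    """Return a list of human-readable red flags found in a message."""
    text = message.lower()
    signals = []
    for phrase in URGENT_WORDS:
        if phrase in text:
            signals.append(f"urgency: {phrase!r}")
    for phrase in SENSITIVE_ASKS:
        if phrase in text:
            signals.append(f"sensitive ask: {phrase!r}")
    return signals

email = ("URGENT: confirm you are registered to vote immediately! "
         "Reply with your bank account and routing number to donate.")
for s in scam_signals(email):
    print(s)
```

Keyword lists are easy to evade, which is the experts' point: no filter substitutes for pausing, verifying the sender and never sharing personal information in the moment.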

“It is tricky because there are legitimate organizations out there that are trying to help people register to vote,” said Eva Velasquez, a former financial crimes investigator who now runs the Identity Theft Resource Center, based in San Diego. “But you don’t have to act in the moment. Take a few minutes and do a little homework.”
The Trumpers are the most vulnerable to this stuff. Con artists share marks for a reason; the word "mark" comes from the mark a conman would put on a sucker's fence or near the front door. Donald is sending his marks a half dozen emails a day looking for cash. Parler is just a collection point for suckers, and their names and info will be sold to scammers and political operators.
 

hanimmal

Well-Known Member
The Trumpers are the most vulnerable to this stuff. Con artists share marks for a reason; the word "mark" comes from the mark a conman would put on a sucker's fence or near the front door. Donald is sending his marks a half dozen emails a day looking for cash. Parler is just a collection point for suckers, and their names and info will be sold to scammers and political operators.
Everyone is vulnerable to this attack. Trump was just the one that it was set up to help the most.

 

DIY-HP-LED

Well-Known Member
Everyone is vulnerable to this attack. Trump was just the one that it was set up to help the most.

I did say they were the most vulnerable, but everybody is vulnerable to this shit, and it must be dealt with. The quality of our decisions is only as good as the information we have to work with. In democratic societies this shit is lethal: it generates alternative realities that cause or exacerbate social division, and it can even create division where none existed before. I take a hard line on it, not just in America, though currently American social media companies are an issue. It might be best if others policed these companies with sanctions. Biden may not be able to if he doesn't win the Senate, and even if he does, your allies have more freedom of action in this area and can put disincentives in place; they do it for profit after all. Joe just needs to nod and wink.
 

Fogdog

Well-Known Member
The Disinformation Age

Politics, Technology, and Disruptive Communication in the United States
Cambridge University Press 2021

It's a big compilation of works from several contributors. 300+ pages and I've just skimmed it thus far. Sounds pertinent to the times.

chapter titles:

  1. A Brief History of the Disinformation Age: Information Wars and the Decline of Institutional Authority
  2. A Political Economy of the Origins of Asymmetric Propaganda in American Media
  3. The Flooded Zone: How We Became More Vulnerable to Disinformation in the Digital Era
  4. How American Businessmen Made Us Believe that Free Enterprise was Indivisible from American Democracy: The National Association of Manufacturers’ Propaganda Campaign 1935–1940
  5. “Since We Are Greatly Outnumbered”: Why and How the Koch Network Uses Disinformation to Thwart Democracy
  6. How Digital Disinformation Turned Dangerous
  7. Policy Lessons from Five Historical Patterns in Information Manipulation
  8. Why It Is So Difficult to Regulate Disinformation Online
  9. US Public Broadcasting: A Bulwark against Disinformation?
  10. The Public Media Option: Confronting Policy Failure in an Age of Misinformation
  11. The Coordinated Attack on Authoritative Institutions: Defending Democracy in the Disinformation Age
Excerpts from the last chapter:

How did we get here?

There are many explanations for how we arrived at our current “post truth” era. Some point to social media’s propensity to algorithmically push extremist content and to draw likeminded persons together with accounts unburdened by facts. Others emphasize the role of the Russians, Iranians, North Koreans, or Chinese in efforts to disrupt elections and exaggerate domestic divisions. Other standard accounts point to voter ignorance, racial resentments or religious intolerance. Adherents to these explanations advocate better media literacy and citizenship education, and more fact-checking in journalistic accounts. While there is merit to these and other accounts, they fail to address the full scope of the problem.

In varying ways, several of the contributors to this volume focus on the erosion of liberal democratic institutions, particularly parties, elections, the press, and science. These institutions produce information anchored in norm-based processes for introducing facts into public discourse, including peer-review in science, rules of evidence in courts, professional practices and norms of fairness and facticity in journalism. At the end of the day, Trump’s unhinged conspiracies reflected not just his personal psychological condition, but also a broader institutional crisis that brings with it an epistemological crisis. In the absence of authoritative institutions, Trump and his enablers were unanchored by facts. Instead, they had “alternative facts.”
 

hanimmal

Well-Known Member
There are many explanations for how we arrived at our current “post truth” era. Some point to social media’s propensity to algorithmically push extremist content and to draw likeminded persons together with accounts unburdened by facts. Others emphasize the role of the Russians, Iranians, North Koreans, or Chinese in efforts to disrupt elections and exaggerate domestic divisions. Other standard accounts point to voter ignorance, racial resentments or religious intolerance. Adherents to these explanations advocate better media literacy and citizenship education, and more fact-checking in journalistic accounts. While there is merit to these and other accounts, they fail to address the full scope of the problem.

With near infinite variability that any conversation can contain.

Thanks for that link, looks like I have some good reading tonight.
 

hanimmal

Well-Known Member
https://apnews.com/article/joe-biden-donald-trump-media-elections-presidential-elections-ac34de7cb5844d96589a10ea6e653d50
Twitter has permanently banned My Pillow CEO Mike Lindell’s account after he continued to perpetuate the baseless claim that Donald Trump won the 2020 U.S. presidential election.

Twitter decided to ban Lindell, who founded bedding company My Pillow, due to “repeated violations” of its civic integrity policy, a spokesperson said in a statement. The policy was implemented last September and is targeted at fighting disinformation.

It was not immediately clear which posts by Lindell on Twitter triggered the suspension of his account.

Lindell, a Trump supporter, has continued to insist that the presidential election was rigged even after U.S. President Joe Biden’s administration has begun.

Lindell previously said that major retailers such as Bed Bath & Beyond and Kohl’s would stop carrying My Pillow’s products.

Lindell is also facing potential litigation from Dominion Voting Systems for claiming that its voting machines played a role in alleged election fraud. He had also urged Trump to declare martial law in Minnesota to obtain its ballots and overturn the election.

Following the storming of the U.S. Capitol earlier this month, Twitter has banned over 70,000 accounts for sharing misinformation. Trump, who had urged on the mob, has also had his account permanently suspended.
 

Yowza McChonger

Well-Known Member
Hey there. Thanks for providing the link to this thread. I expect I would have found it on my own, but the sooner the better.

Since I was a kid, I've been fascinated by microbiology/disease and disinformation. A book I got for my 8th birthday sparked the former, and a really cool Vietnam vet who lived on my block got the latter rolling.

As you may have guessed, the last few years, and especially the last year, have been enormously fascinating to me.

I've spent a good deal of time since the late '70s fighting tooth and nail with dumbasses on both sides of our sham political fence in a lifelong war against disinformation. Trumpism has blown things through the roof, though. Never have I seen such a tsunami of lies and idiocy. One great writer described it as "a Frankenstein's amalgamation of 40 years of conservative intellectual decline into paranoia and delusion that escaped from the lab in 2016."

Great work. This is one of the best threads I've seen in 25 years or so of reading threaded message boards. Thank you for your service.
 

Fogdog

Well-Known Member
Hey there. Thanks for providing the link to this thread. I expect I would have found it on my own, but the sooner the better.

Since I was a kid, I've been fascinated by microbiology/disease and disinformation. A book I got for my 8th birthday sparked the former, and a really cool Vietnam vet who lived on my block got the latter rolling.

As you may have guessed, the last few years, and especially the last year, have been enormously fascinating to me.

I've spent a good deal of time since the late '70s fighting tooth and nail with dumbasses on both sides of our sham political fence in a lifelong war against disinformation. Trumpism has blown things through the roof, though. Never have I seen such a tsunami of lies and idiocy. One great writer described it as "a Frankenstein's amalgamation of 40 years of conservative intellectual decline into paranoia and delusion that escaped from the lab in 2016."

Great work. This is one of the best threads I've seen in 25 years or so of reading threaded message boards. Thank you for your service.
You should write a book all about yourself.
 

Yowza McChonger

Well-Known Member
You should write a book all about yourself.
I have, kind of.....of course it's not ALL about myself. That would be extremely challenging 'cuz of that whole no stoner is an island thing.

My not-quite-all-about-myself masterpiece is a truly hilarious, engaging, uplifting, heartbreaking, witty, exquisitely-written monument to throbbing, rampaging badassness.

I'm quite interesting and I've put 7 digits in the bank writing in my spare time.

And, here we are.
 