Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing at a hearing held by the House Energy and Commerce Committee about how disinformation spreads across their platforms.
The chief executives of Google, Facebook and Twitter are testifying before the House on Thursday about how disinformation spreads across their platforms, an issue for which the tech companies were scrutinized during the presidential election and after the Jan. 6 riot at the Capitol.
The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech's power and reach over the next few years.
The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.
Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.
Republicans sent the executives letters this month asking them about the decisions to remove conservative personalities and stories from their platforms, including an October article in The New York Post about President Biden's son Hunter.
Lawmakers have debated whether social media platforms' business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.
Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users' posts. Lawmakers are trying to strip the protections in cases where the companies' algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.
"By now it's painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it's a question of how best to do it," said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.
The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.
The chief executives of Facebook, Alphabet and Twitter are expected to face tough questions from lawmakers on both sides of the aisle. Democrats have focused on disinformation, especially in the wake of the Capitol riot. Republicans, meanwhile, have already questioned the companies about their decisions to remove conservative personalities and stories from their platforms.
New York Times reporters have covered many of the examples that could come up. Here are the facts to know about them:
After his son was stabbed to death in Israel by a member of the militant group Hamas in 2016, Stuart Force decided that Facebook was partly to blame for the death, because the algorithms that power the social network helped spread Hamas's content. He joined relatives of other terror victims in suing the company, arguing that its algorithms aided the crimes by regularly amplifying posts that encouraged terrorist attacks. Arguments about the algorithms' power have reverberated in Washington.
Section 230 of the Communications Decency Act has helped Facebook, YouTube, Twitter and countless other internet companies flourish. But Section 230's liability protection also extends to fringe sites known for hosting hate speech, anti-Semitic content and racist tropes. As scrutiny of big technology companies has intensified in Washington over a wide variety of issues, including how they handle the spread of disinformation or police hate speech, Section 230 has come under new focus.
After inflaming political discourse around the globe, Facebook is trying to turn down the temperature. The social network started changing its algorithm to reduce the political content in users' news feeds. Facebook previewed the change earlier this year when Mark Zuckerberg, the chief executive, said the company was experimenting with ways to tamp down divisive political debates among users. "One of the top pieces of feedback we're hearing from our community right now is that people don't want politics and fighting to take over their experience on our services," he said.
As the Electoral College affirmed Joseph R. Biden Jr.'s election, voter fraud misinformation subsided. But peddlers of online falsehoods ramped up lies about the Covid-19 vaccines. Representative Marjorie Taylor Greene, a Republican of Georgia, as well as far-right websites like ZeroHedge, have begun pushing false vaccine narratives, researchers said. Their efforts have been amplified by a robust network of anti-vaccination activists like Robert F. Kennedy Jr. on platforms including Facebook, YouTube and Twitter.
In the end, two billionaires from California did what legions of politicians, prosecutors and power brokers had tried and failed to do for years: They pulled the plug on President Trump. Journalists and historians will spend years unpacking the improvisational nature of the bans, and scrutinizing why they arrived just as Mr. Trump was losing his power, and Democrats were poised to take control of Congress and the White House. The bans have also turned up the heat on a free-speech debate that has been simmering for years.
In the fall of 2017, when Congress called on Google, Facebook and Twitter to testify about their role in Russia's interference with the 2016 presidential election, the companies didn't send their chief executives as lawmakers had requested and instead sent their lawyers to face the fire.
During the hearings, the politicians complained that the general counsels were answering questions about whether the companies contributed to undermining the democratic process instead of "the top people who are actually making the decisions," as Senator Angus King, an independent from Maine, put it.
It was clear Capitol Hill wanted its pound of C.E.O. flesh and that hiding behind the lawyers was not going to work for long. That initial concern about how the chieftains of Silicon Valley would handle grilling from lawmakers is no longer a worry. After a slew of hearings in recent years, both virtual and in-person, the executives have had plenty of practice.
Since 2018, Sundar Pichai, Googles chief executive, has testified on three different occasions. Jack Dorsey, Twitters chief executive, has made four appearances, and Mark Zuckerberg, Facebooks chief, has testified six times.
And when the three men again face questioning on Thursday, they will do so as seasoned veterans in the art of deflecting the most vicious attacks and then redirecting to their carefully practiced talking points.
In general, Mr. Pichai tends to disagree politely and quickly with the sharpest jabs from lawmakers, such as when he was asked last year why Google "steals" content from "honest businesses," but not harp on them. When a politician tries to pin him down on a specific issue, he often relies on a familiar delay tactic: "My staff will get back to you."
Mr. Pichai is not a dynamic cult-of-personality tech leader like Steve Jobs or Elon Musk, but his reserved demeanor and earnestness are well suited to the congressional spotlight.
Mr. Zuckerberg has also grown more comfortable with the hearings over time and more emphatic about what the company is doing to combat misinformation. At his first appearance, in 2018, Mr. Zuckerberg was contrite about failing to protect users' data and prevent Russian interference in elections, and he made promises to do better.
Since then, he has pushed the message that Facebook is a platform for good, while carefully laying out the steps that the company is taking to stamp out disinformation online.
As the sessions have gone virtual during the pandemic, Mr. Dorsey's appearances, hunched over a laptop camera, carry a just-another-guy-on-Zoom vibe compared with the softly lit neutral backdrops of the Google and Facebook chiefs.
Mr. Dorsey tends to remain extremely calm, almost Zen-like, when pressed with aggressive questions, and often engages on technical issues that rarely elicit a follow-up.