Meta logo is seen in this illustration taken Aug. 22, 2022. Photo: Reuters
Family members whose loved ones’ suffering and murders were streamed on Facebook or Instagram during the Palestinian terrorist group Hamas’s Oct. 7, 2023, invasion of and massacre across southern Israel filed a lawsuit on Monday in Tel Aviv District Court against Meta, the social media platforms’ parent company.
The plaintiffs assert that Meta facilitated terrorism by failing to block the live video and also violated the victims’ right to privacy. They seek 4 billion shekels (about $1.15 billion) in damages.
“Our hearts go out to the families affected by Hamas terrorism,” Meta said in a statement responding to the suit. “Our policy designates Hamas as a proscribed organization, and we remove content that supports or glorifies Hamas or the Oct. 7 terrorist attack.”
The lawsuit states that the videos from the attack “trampled the petitioners’ rights in the most harrowing way imaginable” and that “these scenes of brutality, humiliation, and terror are permanently etched into the memories of the victims’ families and the Israeli public as the final moments of their loved ones’ lives.”
Many of the videos remained on the sites for hours after their initial broadcast, according to the lawsuit, which argues that “Facebook and Instagram violated the privacy of the victims, and continue to do so, by enabling the distribution of terror content for profit.”
One of the plaintiffs, Mor Bayder, wrote on Oct. 8, 2023, that “my grandmother, a resident of Kibbutz Nir Oz all her life, was murdered yesterday in a brutal murder by a terrorist in her home … A terrorist came home to her, killed her, took her phone, filmed the horror, and published it on Facebook. This is how we found out.”
Another individual signed on to the suit is Gali Idan, whom Hamas held captive for hours and who said the terrorists were “filming constantly.” She stated that “it was clear the livestreaming was part of their operational plan — propaganda aimed at spreading fear. They filmed Maayan’s [her daughter’s] murder, our desperation, our children’s trauma, and forced [her husband] Tsahi to speak into the camera. All of it was broadcast.”
Idan calls Meta “complicit in the infrastructure of terror.”
Stav Arava also joined as a plaintiff after seeing video of his brother Tomer forced at gunpoint to try to persuade neighbors to exit their home.
Other plaintiffs include families who did not lose loved ones in the attacks but whose minor children witnessed the videos, many of which continue to circulate today. The suit warns that the videos represent “grave harm to the dignity and psychological well-being of platform users — especially youth — who were exposed to raw acts of terror amplified by Meta’s systems.”
On June 6, a group of 41 US lawmakers sent a letter expressing concerns about “disturbing and inflammatory content circulating on your platforms in support of violence and terrorism” to Meta CEO Mark Zuckerberg, then-X CEO Linda Yaccarino, and TikTok CEO Shou Zi Chew. “We strongly urge Meta, TikTok, and X to take decisive and transparent steps to curb these dangerous trends and protect all users from the effects of hate and incitement to violence online,” the legislators wrote to the tech leaders.
“For far too long, social media platforms have allowed harmful messages, hashtags, and conspiracy theories to fester and proliferate online, targeting different communities,” the letter stated. “Following Meta’s decision earlier this year to roll back its trust and safety policies, one estimate noted this could lead to individuals encountering at least 277 million more instances of hate speech and other harmful content each year on its platforms. Since these changes, on Facebook alone, Jewish Members of Congress have experienced a fivefold increase of antisemitic harassment on the platform.”
Zuckerberg acknowledged in January when making the change in moderation policies that “this is a trade-off” and “it means that we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
In a report analyzing the impact of the policy change, the Anti-Defamation League (ADL) explained how “it is also possible that the policy change has signaled to hateful users that such abuse will now be tolerated. By allowing hateful content to remain on the platform, Meta is in effect encouraging this content on its platforms.”
Jonathan Greenblatt, the CEO of ADL, said in a Jan. 7 statement that “it is mind blowing how one of the most profitable companies in the world, operating with such sophisticated technology, is taking significant steps back in terms of addressing antisemitism, hate, misinformation and protecting vulnerable & marginalized groups online. The only winner here is Meta’s bottom line and as a result, all of society will suffer.”
Author: David Swindle