News In Brief World News

Bot Researchers Question Filings Made By Elon Musk's Legal Team Against Twitter

19 Aug 2022

News Synopsis

Leading bot researchers have cast doubt on documents submitted by Elon Musk's legal team in his legal dispute with Twitter. In a countersuit against Twitter, Musk's team used Botometer, a web tool that detects spam and fake accounts.

Using the tool, Musk's team calculated that 33% of "visible accounts" on the social media platform were "false or spam accounts". That figure, according to Kaicheng Yang, who maintains Botometer, "doesn't imply anything".

Yang expressed doubts about the methodology employed by Musk's team, saying he had not been contacted before the tool was used. Musk is currently in dispute with Twitter after trying to back out of an agreement to buy the company for $44 billion (£36.6 billion). A judge in Delaware will decide at a court hearing scheduled for October whether Musk must complete the purchase.

Musk declared in July that he was no longer interested in buying the business because he could not confirm how many real people were using the platform. Since then, the world's richest man has repeatedly asserted that the number of fake and spam accounts may be significantly higher than Twitter claims.

He stated in his countersuit, which was made public on August 5, that his team had determined that a third of visible Twitter accounts were false, and calculated that percentage to mean that at least 10% of daily active users are bots. Twitter's own estimate is that fewer than 5% of its daily active users are bot accounts.

Botometer assigns each account a "score" out of five based on a number of factors, including when and how frequently an account tweets as well as the content of its tweets. A score of zero indicates an account is likely to be human, while a score of five indicates it is likely to be a bot.

Researchers say the tool cannot conclusively determine whether any particular account is a bot. According to Yang, "you need to pick a threshold to reduce the score in order to determine the prevalence of bots."
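Yang's point about thresholds can be illustrated with a minimal sketch. The function and sample scores below are invented for illustration and are not real Botometer output or its API; the sketch only shows how the choice of cut-off score changes the resulting bot-prevalence estimate.

```python
# Hypothetical sketch: converting per-account scores (0 = likely human,
# 5 = likely bot) into a bot-prevalence estimate via a chosen threshold.
# Scores are made-up sample data, not real Botometer results.

def estimate_bot_prevalence(scores, threshold):
    """Classify every account scoring at or above the threshold as a bot,
    then return the fraction of accounts classified that way."""
    bots = [s for s in scores if s >= threshold]
    return len(bots) / len(scores)

sample_scores = [0.3, 4.6, 1.2, 3.9, 0.8, 4.9, 2.5, 0.1]

# A looser threshold flags more accounts as bots, so the "prevalence"
# depends heavily on where the line is drawn.
print(estimate_bot_prevalence(sample_scores, 4.0))  # → 0.25
print(estimate_bot_prevalence(sample_scores, 2.0))  # → 0.5
```

The same set of scores yields very different prevalence figures depending on the threshold chosen, which is why researchers say a raw percentage derived from the tool "doesn't imply anything" on its own.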