Twitter announced a new commitment to studying the fairness of its algorithms. As part of the program, which the company calls its “Responsible Machine Learning” initiative, data scientists and engineers across the company will study the potential “unintentional harms” caused by its algorithms and make the results public.
“We are conducting in-depth analysis and studies to assess the existence of potential harms in the algorithms we use,” the company wrote in a blog post announcing the initiative.
To start, the company will study Twitter’s image-cropping algorithm, which has been criticized for favoring people with light skin.
Twitter will also study its content recommendations, including “a fairness assessment of our Home timeline recommendations across racial subgroups” and “an analysis of content recommendations for different political ideologies across seven countries.”
It is not clear how much impact the plan will have. Twitter noted that in some cases it may change aspects of its platform based on its findings, while other research may only lead to “important discussions around the way we build and apply ML [machine learning].”
But for Twitter and other social media platforms, the questions are timely. After the U.S. Capitol riot, lawmakers urged Twitter, YouTube, and Facebook to make their algorithms more transparent. Some lawmakers have proposed legislation that would require companies to evaluate the biases in their algorithms.
Twitter CEO Jack Dorsey has also spoken of his desire to create an algorithm marketplace that would let users choose the algorithms they use. The company said in its blog post that this idea is still in the “early exploration” stage.