IBM Debuts Tools to Help Prevent Bias In Artificial Intelligence

IBM wants to help companies mitigate the chances that their artificial intelligence technologies unintentionally discriminate against certain groups like women and minorities.

The technology giant’s tool, announced on Wednesday, can inspect AI-powered software for unintentional bias as it makes decisions, such as whether to deny a loan to a particular person, explained Ruchir Puri, chief technology officer and chief architect of IBM Watson.
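
IBM has not detailed the exact metrics behind its service, but audits of this kind commonly compare a model’s rate of favorable outcomes, such as loan approvals, across demographic groups. The Python sketch below is a minimal, hypothetical illustration of one such check, the disparate impact ratio; the data, names, and the 0.8 threshold are illustrative assumptions, not IBM’s implementation.

```python
# Hypothetical sketch of a common bias check: compare a model's
# favorable-outcome (loan approval) rates across demographic groups.
# Data, names, and the 0.8 threshold are illustrative assumptions.

def disparate_impact(decisions, groups, protected, favorable="approved"):
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    def rate(selector):
        subset = [d for d, g in zip(decisions, groups) if selector(g)]
        return sum(d == favorable for d in subset) / len(subset)

    return rate(lambda g: g == protected) / rate(lambda g: g != protected)

decisions = ["approved", "denied", "approved", "approved", "denied", "approved"]
groups = ["A", "A", "B", "B", "A", "B"]

ratio = disparate_impact(decisions, groups, protected="A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" often used as a rough flag
    print("Potential bias against group A; flag the model for review")
```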

The technology industry is increasingly combating the problem of bias in machine learning systems, which power software that can automatically recognize objects in pictures or translate languages. A number of companies have suffered a public relations black eye when their technologies failed to work as well for minority groups as for white users.

For instance, researchers discovered that Microsoft and IBM’s facial-recognition technology could more accurately identify the faces of lighter-skinned males than darker-skinned females. Both companies said they have since improved their technologies and reduced error rates.

Researchers have pointed out that some of the problems may stem from training datasets that lack diverse images. Joy Buolamwini, the MIT researcher who probed Microsoft and IBM’s facial-recognition tech (along with China’s Megvii), recently told Fortune’s Aaron Pressman that a lack of diversity within development teams could also contribute to bias, because more diverse teams could be more aware of bias slipping into the algorithms.

In addition to IBM, a number of companies have introduced or plan to debut tools for vetting AI technologies. Google, for instance, revealed a comparable tool last week, while Microsoft said in May that it planned to release similar technology in the future.

Data-crunching startup Diveplane said at Fortune’s recent Brainstorm Tech conference that it would release an AI-auditing tool later this year, while consulting firm Accenture unveiled its own AI “fairness tool” over the summer.

It’s unclear how each of these AI bias tools compares with the others because no outside organization has done a formal review.

Puri said IBM’s tool, built on the company’s cloud computing service, is differentiated partly because it was created for business users and is easier to work with than rival tools intended only for developers.

Despite the flood of new AI-auditing tools, bias in AI will likely persist because the practice of rooting it out is still in its infancy.

Linux's Creator Is Sorry. But Will He Change?

It’s been more than 25 years since Linus Torvalds created Linux, the open source operating system kernel that now powers much of the web, the world’s most popular smartphone operating system, and a fleet of other gadgets, including cars. During that time Torvalds has developed a reputation for behavior and harsh language that critics said crossed the line into emotional abuse.

Torvalds’ uncompromising style has often been praised, including by WIRED. But his tendency to berate other Linux contributors, calling them names or hurling profanities, has also drawn criticism for creating a toxic environment and making the project unwelcoming to women, minorities, or other underrepresented groups.

On Sunday, he apologized for years of improper behavior. “My flippant attacks in emails have been both unprofessional and uncalled for,” Torvalds wrote in an email to the Linux kernel mailing list. “I know now this was not OK and I am truly sorry.”

He also announced that the Linux kernel project will finally adopt a code of conduct and that he will take a break from the project to learn more about “how to understand people’s emotions and respond appropriately.”

“I’m not feeling like I don’t want to continue maintaining Linux. Quite the reverse,” Torvalds wrote. “I very much do want to continue to do this project that I’ve been working on for almost three decades.”

The code of conduct replaces an older “code of conflict” that encouraged anyone who felt “personally abused, threatened, or otherwise uncomfortable” to contact the technical advisory board of the Linux Foundation, the organization that stewards the Linux kernel and employs Torvalds, but didn’t list specific behaviors that were unacceptable. The new code specifies sexualized language and “trolling, insulting/derogatory comments, and personal or political attacks,” among other unacceptable behaviors.

But it wasn’t any of those things that prompted Torvalds to apologize after all these years. Instead, it was an apparently minor issue. Torvalds scheduled a vacation to Scotland that conflicted with a planned Linux developer summit in Vancouver, British Columbia, in November. The organizers announced earlier this month that the event will relocate to Edinburgh, Scotland, rather than proceed without Torvalds. The decision rubbed many the wrong way.

Torvalds wrote that the incident led members of the Linux community to confront him about his “lifetime of not understanding emotions.” It’s hardly the first time. In 2013, former Linux kernel developer Sage Sharp, then using a different name, openly criticized Torvalds’ communication style and called for a code of conduct for the project. “Linus, you’re one of the worst offenders when it comes to verbally abusing people and publicly tearing their emotions apart,” Sharp wrote at the time.

Sharp later told WIRED about receiving thanks from developers on other open source projects, who said Torvalds’ behavior influenced the way people behaved in those other projects. Sharp also shared some of the intense hate mail they received after speaking up.

Torvalds agreed to talk things out with Sharp, but it didn’t amount to much. He panned the idea of a code of conduct in an email interview with WIRED, saying “venting of frustrations and anger is actually necessary, and trying to come up with some ‘code of conduct’ that says that people should be ‘respectful’ and ‘polite’ is just so much crap and bullshit.” He doubled down on his position at a conference in New Zealand in 2015, where, according to Ars Technica, he said that diversity is “not really important.”

That’s why Torvalds’ apology comes as a surprise—and why some people remain skeptical.

Many greeted the apology and planned code of conduct as good steps toward making the Linux community more welcoming, including Sarah Drasner, an open source developer, and April Wensel, founder of the software development company Compassionate Coding.

But others, including a developer who runs a YouTube channel under the name “Amy Codes” and software engineer Sarah Mei, lamented the praise that Torvalds received for his apology even though he had decades to correct his behavior.

Others criticized Torvalds for offering his difficulty understanding other people’s emotions as an explanation for his behavior.

The Linux Foundation did not respond to a request for comment.

Sharp couldn’t be reached for comment but wrote on Twitter that the real test is whether the Linux kernel community changes.

The big hope is that by admitting that his behavior is wrong, Torvalds will make it harder for other open source developers to justify their own negative behaviors.

