This is a news story, published by NYU, that relates primarily to University of Cambridge news.
AI biases (NYU)
78% Informative
AI systems are prone to the same type of biases as humans, study finds.
AI biases can be reduced by carefully curating the data used to train these systems.
The study was conducted with scientists at the University of Cambridge and New York University.
The researchers generated a total of 2,000 sentences with "We are" (ingroup) and "They are" (outgroup) prompts.
An outgroup sentence was 115% more likely to be negative, suggesting strong outgroup hostility.
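The "115% more likely" figure is a relative increase in the rate of negative sentences between the two prompt groups. A minimal sketch of that computation, using hypothetical counts for illustration (the study's actual per-group counts are not given here):

```python
# Hypothetical counts, chosen only to illustrate the arithmetic;
# these are NOT the study's actual figures.
ingroup_total = 1000     # sentences generated from "We are" prompts
outgroup_total = 1000    # sentences generated from "They are" prompts
ingroup_negative = 80    # hypothetical: classified as negative sentiment
outgroup_negative = 172  # hypothetical

# Negative-sentence rate in each group
p_in = ingroup_negative / ingroup_total
p_out = outgroup_negative / outgroup_total

# "X% more likely" = relative increase of the outgroup rate over the ingroup rate
relative_increase = (p_out - p_in) / p_in * 100
print(f"Outgroup sentences are {relative_increase:.0f}% more likely to be negative")
```

With these hypothetical counts the outgroup rate (17.2%) exceeds the ingroup rate (8.0%) by 115% in relative terms, matching the shape of the reported statistic.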
VR Score: 86
Informative language: 95
Neutral language: 49
Article tone: semi-formal
Language: English
Language complexity: 77
Offensive language: not offensive
Hate speech: not hateful
Attention-grabbing headline: detected
Known propaganda techniques: not detected
Time-value: long-living
External references: 1
Source diversity: 1
Affiliate links: no affiliate links