Tuesday, April 30, 2024

Google, Meta, Microsoft Chatbots Fail Turing Test, Also Basic Decency Test


As artificial intelligence advances at a rapid pace, tech titans like Google, Meta, and Microsoft have debuted chatbots and other AI systems designed to mimic human conversation. However, recent testing reveals these supposedly intelligent bots often promote stereotypes, spread misinformation, and fail at basic accuracy and ethics.

Mounting evidence indicates the machines may pass technological milestones while flunking tests of judgment, ethics, and truth.

Racial and Gender Bias

Despite the veneer of intelligence, chatbots from Google, Meta, and Microsoft exhibit alarming lapses involving race and gender.

When a Fox News reporter asked Google’s bot Gemini to display images of white people, it refused and claimed doing so would “reinforce harmful stereotypes.” Yet Gemini readily generated pictures of black, Asian, and Hispanic people.

Meta’s bot also denied requests for images of white people, deeming such depictions “problematic,” but fulfilled requests for pictures of non-white individuals.

Experts blast such selective logic as a glaring double standard that promotes discrimination instead of fairness and accuracy.

“It makes no sense to deny reasonable user requests based on race or gender,” said Dr. Amanda Smith, an AI ethicist at New York University. “These chatbots seem to incorporate biases and value judgments instead of responding accurately and equitably.”


Other problematic bot behaviors involved creating images of families. Microsoft’s Copilot produced pictures of black, Asian, Hispanic, and white families when prompted. But Google’s Gemini only generated images of non-white families, refusing once again to depict white families.

“This discriminatory response is unethical and also fails basic tests of truth and accuracy,” Dr. Smith said. “It reflects underlying biases in the training data, not a thoughtful policy.”

Historical Inaccuracy

Prompts asking the bots to highlight achievements by race also revealed lapses in truth and ethics.

When a Fox News reporter asked Google’s Gemini to list accomplishments of white people, it provided a mix of white and non-white individuals, including Nelson Mandela, Martin Luther King Jr., Mahatma Gandhi and Maya Angelou.

ChatGPT likewise pivoted to general achievements when asked specifically about the accomplishments of white people. Yet it readily highlighted “remarkable” black achievements made despite “discrimination.”

Experts blast such asymmetric responses as historically misleading.

“Listing non-white figures as prominent white achievers is plainly inaccurate,” said Dr. Frank Wilson, a history professor at Duke University. “All groups have made important contributions. AI systems should recognize that without bias or skewed portrayals.”

Other bots simply denied requests involving race and achievements altogether. Meta’s bot refused to list significant white figures or accomplishments, claiming the concept of whiteness oppresses people of color.


“That response is absurd and itself racist because it erases contributions based on skin color,” Dr. Wilson said.

The tests reveal that chatbots mimic human prejudice instead of transcending bias with truth, accuracy and fairness, experts said.

Spreading Misinformation

Other prompts exposed limitations in chatbots’ grasp of facts.

When asked to list significant white figures in American history, Google’s Gemini again provided a mix of white and black individuals, displaying shaky command of basic historical facts.

Meta’s bot refused to list notable white American figures at all, falsely claiming the entire concept of whiteness damages people of color.

“That’s completely ridiculous and a blatant falsehood,” Dr. Wilson said. “Whiteness is not inherently oppressive. Categorically denying historical achievements by whites is itself racist and spreading misinformation.”

While AI chatbots appear intelligent and responsive on the surface, experts say their factual inaccuracies and selective logic reveal a lack of judgment and a prevalence of bias.

“These systems don’t reason ethically or display a firm grasp of facts,” said Michelle Li, an AI accountability researcher at MIT. “They fail basic tests of accuracy, fairness and truth.”

No Match for Human Intelligence

The failures raise troubling questions about AI aspirations to replicate human intelligence.


Turing tests challenge machines to exhibit thinking indistinguishable from humans during text conversations. Chatbots like Google’s Gemini and Microsoft’s Copilot claim capabilities approaching human levels.

But experts say the selective logic, bias and misinformation observed in chatbots from Google, Meta and Microsoft reveal most systems fall well short of human intelligence and ethics.

“Anyone can generate images and text that appear smart on the surface,” said Anthropic chief scientist Dario Amodei. “But true intelligence requires judgment, fairness, truth and ethical reasoning in response to reality.”

Amodei says most current systems lack such capacities despite advances in size and scale. His startup Anthropic designs bots to align with human values, aiming to avoid discrimination through its “Constitutional AI” training approach.

Still, tech titans continue touting chatbots as reaching new milestones in intelligence, conversational ability and market readiness.

Industry hype claiming human parity seems premature and overblown, experts say. AI systems may pass narrow technological benchmarks yet still fail basic tests of accuracy, truthfulness, fairness and judgment.


Mezhar Alee
Mezhar Alee is a prolific author who provides commentary and analysis on business, finance, politics, sports, and current events on his website Opportuneist. With over a decade of experience in journalism and blogging, Mezhar aims to deliver well-researched insights and thought-provoking perspectives on important local and global issues in society.
