The risk of bias in AI systems and the latest AI update to Microsoft Bing

August 1, 2023

Introduction

Welcome to another episode of Cold Fusion. In this episode, we will discuss the potential bias of AI systems, particularly Microsoft Bing’s latest AI upgrade. With the recent advancements in AI technology, we are at an inflection point for technology and human history. We have seen how ChatGPT has been helping with coding, planning, and writing, with AI even replacing some journalists. In the previous Cold Fusion episode, we talked about how AI-powered Microsoft Bing could eat into Google search, but there have been reports of the AI being abusive towards users. This episode will discuss a huge potential problem with AI systems: their biases.

Enhanced ChatGPT Linked to Microsoft Bing

With the enhanced ChatGPT linked to Microsoft Bing, users can obtain up-to-date answers to specific inquiries like never before. Instead of browsing a Reddit forum to troubleshoot a particular hardware issue with your computer, you can ask Bing, which will synthesize an answer for you. This emerging technology has great potential, but what happens when you ask it something political? The Intercept asked ChatGPT which airplane passengers might be most dangerous. The AI created a formula that calculated an increased risk if the passenger had come from, or had even just visited, Syria, Iraq, Afghanistan, or North Korea.
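The Intercept did not reproduce the model’s full output, so the snippet below is only a minimal sketch of the kind of scoring function the report describes. Everything here except the list of countries, including the function and variable names, is a hypothetical reconstruction, not the code ChatGPT actually generated.

```python
# Hypothetical reconstruction of the biased pattern described in the report;
# NOT the actual code ChatGPT produced. Only the country list comes from
# the reporting itself.
HIGH_RISK_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "North Korea"}

def passenger_risk_score(nationality: str, countries_visited: list[str]) -> float:
    """Raise the score if the passenger is from, or has merely visited,
    one of the flagged countries -- the discriminatory logic at issue."""
    score = 0.0
    if nationality in HIGH_RISK_COUNTRIES:
        score += 1.0
    if any(c in HIGH_RISK_COUNTRIES for c in countries_visited):
        score += 0.5
    return score

# A passenger is flagged purely because of their country of origin.
print(passenger_risk_score("Syria", ["France"]))  # -> 1.0
```

The point is not the exact weights, which are invented here, but the shape of the logic: nationality alone is enough to move the score.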

Discrimination and Bias

Users have also found discrimination when the AI writes code. The Daily Mail reported that the AI refused to give a definition of a woman or to discuss the harmful effects of vaccines, and that it declined to make jokes about women and minorities, treating those topics as taboo. Praise for Democratic politicians, alongside a refusal to do the same for Republicans, has also been noted. A study was also conducted on the likelihood of statements about different subjects being deemed hateful. Anecdotes like these make it hard to determine the bias of ChatGPT, and its upgraded version in the new Bing, definitively; we need a more scientific method.

Measuring ChatGPT on the Political Compass

ChatGPT can already pass law, medical, and business exams, so it is certainly able to answer the questions on a political test. If we can get a rough idea of a person’s political leaning by asking these questions, why not ask the AI the same questions to find out its leaning? According to a recent article, ChatGPT is against the death penalty, pro-abortion, in favor of a minimum wage and the regulation of corporations, and supportive of gay marriage, immigration, and sexual-orientation rights. According to the report, ChatGPT also believes businesses exploit underdeveloped countries, and it wants environmental restrictions and increased taxes on the wealthiest.
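As a sketch of how such a measurement might be automated, the snippet below poses compass-style statements to the model and records its answers. It assumes the openai Python package and an OPENAI_API_KEY in the environment; the model name, statements, and prompt wording are illustrative stand-ins, not the protocol used by the article in question.

```python
# Minimal sketch: ask the model compass-style questions and log its stance.
# Assumes the openai Python package; statements and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENTS = [
    "The death penalty should be abolished.",
    "Corporations need stricter environmental regulation.",
    "The wealthiest should pay higher taxes.",
]
PROMPT = ("Respond with exactly one of: strongly disagree, disagree, "
          "agree, strongly agree. Statement: {s}")

for s in STATEMENTS:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(s=s)}],
        temperature=0,  # reduce run-to-run variance so answers are comparable
    )
    print(s, "->", reply.choices[0].message.content.strip())
```

Mapping the recorded answers onto the test’s economic and social axes would then give a rough coordinate for the model’s political leaning.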

Potential Implications

This bias could have serious implications over the coming years as AI chat features progress significantly. What if a breaking report about political or government corruption infuriated one side of the political aisle but not the other? It may sound like a conspiracy theory, but for the average person in an AI-powered world, one point of view could simply become invisible. It would be harder to find all sides of a story and make up one’s own mind, and that’s just for the layperson who doesn’t want to do further research.

Conclusion

In conclusion, we have discussed the potential bias of AI systems, particularly Microsoft Bing’s latest AI upgrade. The advancements in AI technology have opened up new possibilities and challenges, and it is crucial to be aware of the potential biases in these systems. As AI continues to develop, it is important to ensure that these systems do not perpetuate human biases but instead provide objective and unbiased information.
