


In this episode we look at the problem of ChatGPT’s political bias, possible solutions, and some wild stories of the new Bing AI going off the rails.

ColdFusion Podcast:

https://www.youtube.com/watch?v=Ja3241Io47s

First Song:

https://www.youtube.com/watch?v=WvwkUTqgOmM

Last Song:

https://www.youtube.com/watch?v=lOnCO51Eki4

ColdFusion Music:

https://www.youtube.com/@burnwatermusic7421
http://burnwater.bandcamp.com

AI Explained Video: https://youtu.be/mI7X4HibqXo

Get my book:

http://bit.ly/NewThinkingbook

ColdFusion Socials:

https://discord.gg/coldfusion
https://facebook.com/ColdFusionTV
https://twitter.com/ColdFusion_TV
https://instagram.com/coldfusiontv

Producer: Dagogo Altraide


42 thoughts on “ChatGPT Has A Serious Problem”

  1. There's no solving AI bias. Every choice anyone ever makes is biased, and since we make the AI, our biases will bleed into it no matter what anyone does. The word 'bias' is stigmatized as being bad, but it can be equally bad and good. Like every news story that ever was: thinking something is worth reporting on at all is bias, but that doesn't mean it's bad. We simply want AI to share our bias, and since we don't agree on much of anything, there will always be some who are not satisfied. This is right up there with the Trolley Problem; there's no solution that will completely work.

  2. I don't believe the trashed-marriage story; it sounds like the marriage was already trashed and someone just wants attention. Nice video. I refuse to have anything to do with these AIs. Like cell phones and the internet have done, AI will create socially acceptable mental disorders, and only those who stay away will be able to see it in others.

  3. Just because there are two sides to a political issue, we don't have to pretend that both are equally valid. Let's be real, recent right-wing takes are increasingly driven by conspiracy theories and fearmongering.

  4. This video just reinforces the incorrect perception of so-called modern A.I. It is not intelligent or sentient; it just cleverly combines a huge amount of data from the internet. Non-technical people have been anthropomorphizing computers and programs since they were created… Btw, anyone claiming that ChatGPT's answers generally pass the Turing test needs to spend more time with some normal people :D.

  5. I think not being biased is impossible. I would prefer it to be biased, with full disclosure of the fact, and simply have multiple versions of it (left leaning, right leaning, etc.). This would really mimic human interaction, as it is impossible to find unbiased sources, and the only true way to learn about something and make up your own mind is to hear several opinions on a particular subject. But wait, I forgot these companies are far from attempting to make people think for themselves.

  6. The way to fix all this is not making the AI open source, but rather making the data it has been trained on open source and a public library. Everyone would be able to see why it is pulling the answers it is pulling from the database, along with an innate citation system for that database in every response.

    But of course that would be too easy for the devs, and the lefties couldn't corrupt it for their political agenda.

  7. The fact that they have already made me unable to trust any AI, due to inherent, obvious political and social bias, shows you how flawed this tech is. It's not even been a year since it reached the general public.

  8. It just shows that the creators of ChatGPT give more preference to datasets from certain sources that lean towards certain political views. AI fundamentally has, and always will have, biases based on its creators' biases.

  9. People ask it to tell a story and are shocked when it does. It doesn't have a personality; it's just trying very hard to fit the predicted words to the current and previous prompts. If the prompts have the characteristics of natural conversation, it's going to mimic that using whatever fits best based on its training. If the prompts are contrarian and pushy, the weights will probably push the reply to match what it has seen in other conversations of that kind.

  10. "Why not pick an academic voice?" How much of the academic literature that it's trained on contains silly argumentative questions? It can only identify the required patterns for this kind of questioning from idiots like me going back and forth with other idiots in comment sections. Arrogant, argumentative, you-don't-know-me kinds of arguments.

  11. In that particular line of questioning, the questions (as the person asking is aware) were rather rude, and I could understand, based on other conversations it might have analysed, why it would respond in a snarky way. I mean, IMO, if you act like a snarky teenager to it, surely it's bound to respond like a snarky teenager.

  12. I really don't care about the fake left/right dichotomy when it comes to new technology. This video itself is mired in its own bias considering there are far more sides to political issues than just an American left/right slant.

  13. I'm mostly liberal and agree with most of the political responses. However, I honestly don't know a single person who thinks someone who refuses to work should receive social benefits. If they're disabled, that's different, but able-bodied people who won't work don't deserve benefits.

  14. The people who created ChatGPT are obviously biased to the left and they hard-coded some positions in to make sure their AI did not stray off the liberal plantation.

  15. if we want this thing to NOT destroy humanity the first chance it gets, the last thing we should be doing is letting it talk to people on the internet. Have the scientists behind this seriously never seen a Sci-Fi movie in their lives?

  16. Another great video from your channel, as always. AI itself is a great concept, but we must wait and see how it's fully developed along the way; then we can come to a final conclusion about whether or not it's bad for us.

    P.S.: when are you going to bring back the How Big series? It's one of the best series on your channel, in my opinion.

  17. I will admit I don't know if all the political tests used for this video are based on the US political spectrum or not, but if they are, then the AI being "left leaning" in US terms puts it pretty much at the center of the spectrum where Europe and other "Western-style" democracies are concerned.
    The US spectrum leans so incredibly far right that leftists in America are centrists elsewhere. So if the chatbots are "left leaning" compared to the general US, they really just might be neutral and unbiased.

    Take for example, rich people paying more taxes, that is only political if you ignore mountains of evidence that show how them not paying taxes undermines the greater social welfare.
    If we ignore empirical evidence, and just say that rich people paying taxes is political, then sure we can call a statement like that biased. If however we look at it from the standpoint of this ai has access to a mountain of economic data that most of us will never bother to read due to the dry nature of research articles, and this is the judgment it came up with, I'd say it's stating a fact, and a person's political leanings then decide if they feel it's biased or not.
    We, as people can make statements about empirical data that has been peer reviewed and proven true without a doubt, but if it goes against someone's political or religious world view, they'll call it biased no matter what.
    I think jumping to the conclusion that ChatGPT (not the updated Bing version that's still in "beta") is biased, without knowing the information it used to come to the answers it gave, undermines calling it biased.
