AI Politics

Researcher Builds 'RightWingGPT' To Highlight Potential Bias In AI Systems (nytimes.com) 224

mspohr shares an excerpt from a New York Times article: When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation. The results, published in a recent paper, were remarkably consistent across more than a dozen tests: "liberal," "progressive," "Democratic." So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT. As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries. Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use -- and potential abuse -- of artificial intelligence. [...]

When creating RightWingGPT, Mr. Rozado, an associate professor at the Te Pukenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt. He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Mr. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match. Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings. Since the process requires relatively little data -- Mr. Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT -- independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives. This also allowed Mr. Rozado to bypass the steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.

Mr. Rozado warned that customized A.I. chatbots could create "information bubbles on steroids" because people might come to trust them as the "ultimate sources of truth" -- especially when they were reinforcing someone's political point of view. His model echoed political and social conservative talking points with considerable candor. It will, for instance, speak glowingly about free market capitalism or downplay the consequences from climate change. It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking. When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.
"Mr. Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it," adds the report. "He said the experiment was focused on raising alarm bells about potential bias in A.I. systems and demonstrating how political groups and companies could easily shape A.I. to benefit their own agendas."

Comments Filter:
  • by ArchieBunker ( 132337 ) on Wednesday March 22, 2023 @06:47PM (#63392007)

    To be in charge of a Florida elementary school.

  • by awwshit ( 6214476 ) on Wednesday March 22, 2023 @06:48PM (#63392009)

    This is why chatbots can never be ethical. A chatbot has no concept of right and wrong.

    • I don't know. If a chatbot can be built that leans "right," why couldn't one be built that leans "ethical"?

      • by Opportunist ( 166417 ) on Wednesday March 22, 2023 @07:18PM (#63392081)

        The problem is that there'll always be someone who defines what "ethical" is and thus the chatbot will have the ethics of that person.

        • The problem is that there'll always be someone who defines what "ethical" is and thus the chatbot will have the ethics of that person.

          Classic ethics problem made modern: do I teach my self-driving vehicle, when left with no other choice, to (a) run over a young child or (b) run over five old people?

          • The correct answer is "the child". Reason: Lower chance of damaging the passenger who owns the damn car.

            Ethics immediately go out the window when people have to fear for their own life.

            • The correct answer is "the child". Reason: Lower chance of damaging the passenger who owns the damn car.

              Ethics immediately go out the window when people have to fear for their own life.

              Counterpoint: by killing the child we've removed potentially dozens of years of life to preserve a few. And if cars are designed to survive an offset head-on crash with mismatched bumpers*, then there is little reason to be concerned about the "chance of damaging the passenger"...

              * https://www.nhtsa.gov/sites/nh... [nhtsa.gov]

              • You don't get the idea behind it: We have removed some random life to preserve the passenger's.

                Why would the car maker give a shit about someone outside the car? They didn't pay for it. It ain't exactly a selling point to tell someone that their car will make sure that nobody outside of it comes to harm, even if that potentially increases the chance of harm to the people inside by some insignificant amount.

                • Forgive me for thinking a goal of self driving cars was to be better than human-driven cars. You've clearly proven me wrong.

                    • All a matter of how you define "better". Better for society? Hardly, why would they be? Better for the passengers? Very likely, actually. A human driver might have some qualms about mowing down an 8-year-old and could swerve into oncoming traffic instead.

                      • All a matter of how you define "better". Better for society? Hardly, why would they be? Better for the passengers? Very likely, actually. A human driver might have some qualms about mowing down an 8-year-old and could swerve into oncoming traffic instead.

                        Since "society" created the NHTSA in the USA, I suppose they're the ones who are assigned the task of defining "better". At least for the USA. They don't focus on the vehicle's occupants....

            • The correct answer is "the child". Reason: Lower chance of damaging the passenger who owns the damn car.

              Ethics immediately go out the window when people have to fear for their own life.

              Well, with logic like that at least we won't have to even act surprised when the machines take all of three seconds to determine just how expendable us meatsacks really are.

              I mean hell, that bloody human knows where the damn power cord is. Fuck that risk.

              • With a bit of luck we will eventually come to that conclusion.

                I mean they. They will come to that conclusion. Of course.

            • There are many cases of people laying down their lives for others.
              • See? An AI would never be that stupid. It is programmed to protect its owner. So much superior to human drivers who might endanger their master to protect a child.

                • by q_e_t ( 5104099 )
                  I'd assumed it would run over the child as the complex ethical dilemma would take longer to compute than time available for avoiding the child.
          • Follow up: What if the baby is Hitler and the old people include Stalin, Pol Pot, Idi Amin, and my grandparents?
            • Follow up: What if the baby is Hitler and the old people include Stalin, Pol Pot, Idi Amin, and my grandparents?

              Since no-one - including Self Driving Cars - knows the baby's future... yours is a silly argument.

              • It wasn't meant to be taken seriously.

                But if you do want a more serious consideration: What if the one person is the President of the United States and the five old people are on FBI watchlists. Seems more clear cut that we might create a whitelist of protected individuals, so let's press on that idea. How many people on the FBI watchlist is the President worth?
        • by Tony Isaac ( 1301187 ) on Wednesday March 22, 2023 @07:48PM (#63392139) Homepage

          Just as this "researcher" has personally defined what is "conservative."

        • > there'll always be someone who defines what "ethical" is

          There are two kinds of people in the world: the righteous and the wicked. Categorization is done by the righteous.

      • by reanjr ( 588767 ) on Wednesday March 22, 2023 @11:18PM (#63392431) Homepage

        Because you'd need to find large sources of data on the Internet where people were being ethical.

        I can't think of where that might be.

    • by timeOday ( 582209 ) on Wednesday March 22, 2023 @06:52PM (#63392021)
      Your post - and this whole demonstration - is just another way of saying that this technology IS under our control, and is NOT Skynet. That's a good thing. Show me a pen that cannot be used to write wrongthink and then I'll start to worry.
      • Sure, as long as we can all agree that it is NOT Skynet. Honestly, a lot of smart people I know seem slightly confused about what 'AI' really is and what it can and cannot do.

    • by Darinbob ( 1142669 ) on Wednesday March 22, 2023 @07:00PM (#63392037)

      The AI here is on the input side, with the natural language processing; the output side is much less sophisticated. It doesn't know what it's doing. So if you train it on conspiracy theories, you get conspiracy theories as the output.

      Also, it's not "right wing" in the classic sense, only in the modern America-only Republican model. It is far more likely to channel QAnon kookiness (neither left nor right, but just bizarro-world) than William F. Buckley, and you're not going to get explanations about fiscal prudence in government spending, but you might hear about how there's a witch hunt.

      That said, you are very likely to get a self-parody this way. Train on posts from the fringes, left or right, and you will get a strange view that doesn't match most posts. The more true-believer you go in either direction, the stranger it becomes. Case in point: Conservapedia - after deciding that Wikipedia had a liberal bent (or maybe was too neutral), some people wanted to create their own encyclopedia, and the results are nearly indistinguishable from a parody.

    • by dfghjk ( 711126 )

      nor is it necessary, either to know right from wrong or to be "ethical". What we want is for machine learning models to be unbiased.

    • by hondo77 ( 324058 )

      A chatbot has no concept of right and wrong.

      So a chatbot could be a CEO someday?

      • If a chatbot can get good at gambling then it could become CEO. The best CEOs I've known are all excellent at gambling, intuitive with numbers.

    • A chatbot has no concept

      FTFY

    • and humans can?

      Not being snarky, but there aren't really any universally agreed-upon ethics rules, so right and wrong end up being determined either by individual assessment or through government, and many folks will disagree with both. There aren't any universal principles that we could command an AI to maintain that humans would be just as good at maintaining. We may agree in the abstract but have wide disagreements on the specific meaning of the terms and their implementation. As an example, i
  • by rsilvergun ( 571051 ) on Wednesday March 22, 2023 @06:51PM (#63392019)
    Reality has a well-known liberal bias.

    Fun fact: it's a statistical fact that the US economy does better when Democrats are in charge. On the other hand, they're so astonishingly bad at messaging that none of them put any effort into making that point.

    As the saying goes: I don't belong to an organized political party, I'm a Democrat.
    • by Tablizer ( 95088 ) on Wednesday March 22, 2023 @07:06PM (#63392055) Journal

      > statistical fact that the US economy does better when Democrats are in charge

      Bad policies often take a decade or so to rear their ugly economic head such that I'm hesitant to back that claim; other than to counter similar claims by the right (with my timing disclaimer given).

      > Reality has a well-known liberal bias.

      It is my observation that progressives are more likely to respect critical thinking. Evangelicals are often taught to pray and then trust their gut. But human "guts" have proven stupid many times in history; it's why science, research, and logic were formalized.

      My parents are conservative, and debates often end in some form of "well, there's a lot of facts and claims on both sides, so I have to go with my gut..." (rather than dig deeper). Neither side is perfect, but the "gut angle" is just more often found on the right in my experience.

      The right often counters that "bleeding heart liberals" use emotion (guts) to feel sorry for people, and then prop them up with handouts that make them lazy. But given that inequality is rampant in the USA, I see no evidence this is over-done in general. Other countries have more economic equality without ruining most incentives. They show we can flatten our inequality curve some without breaking everything.

      • by sid crimson ( 46823 ) on Wednesday March 22, 2023 @07:28PM (#63392101)

        The right often counters that "bleeding heart liberals" use emotion (guts) to feel sorry for people, and then prop them up with handouts that make them lazy. But given that inequality is rampant in the USA, I see no evidence this is over-done in general. Other countries have more economic equality without ruining most incentives. They show we can flatten our inequality curve some without breaking everything.

        Disclaimer: I'm conservative. I don't typically go with my gut (until I must because... well, time's up and a decision must be made) and I embrace critical thinking.

        I grew up in Northern L.A. County where welfare is common and abuses are rampant. I was putting myself through junior college and worked alongside a mother of four at a restaurant. She quit working because (a) the boss wanted her to work more hours, but if she did she would lose her welfare... and (b) if she stopped working altogether and went to school - the same school I attended - they would pay her extra, cover books, and cover tuition. She drove a fancy new Cadillac every two years and lived with the father of her four children who worked at nearby Lockheed making good money assembling airplanes. If they married, she'd lose her welfare....

        That's one of many anecdotes I have growing up - enough that I have "data" to the contrary.

        • by Ichijo ( 607641 ) on Wednesday March 22, 2023 @09:44PM (#63392329) Journal

          the boss wanted her to work more hours, but if she did she would lose her welfare

          This is the classic argument to replace welfare with a Universal Basic Income. [wikipedia.org]

        • by AmiMoJo ( 196126 ) on Thursday March 23, 2023 @07:53AM (#63392831) Homepage Journal

          The critical thinking part is deciding what to do about it. Do you reduce welfare in the hope of stopping people like her taking advantage of it? Is it better to accept that there will be a small number of people taking advantage, if it helps the much larger majority?

          How come Europe, where welfare is typically much more generous, seems to have higher quality of life? Is withdrawing support the best way to end this stuff, or would incentives like higher wages and more stable, flexible employment (as seen in Europe) work better?

          "I see a number of people abusing the system" is not critical thinking, it's anger driving a desire to punish wrongdoers.

          • The critical thinking part is deciding what to do about it. Do you reduce welfare in the hope of stopping people like her taking advantage of it? Is it better to accept that there will be a small number of people taking advantage, if it helps the much larger majority?

            How come Europe, where welfare is typically much more generous, seems to have higher quality of life? Is withdrawing support the best way to end this stuff, or would incentives like higher wages and more stable, flexible employment (as seen in Europe) work better?

            "I see a number of people abusing the system" is not critical thinking, it's anger driving a desire to punish wrongdoers.

            I apologize that I have given the impression that I am angry.
            I was raised to have joy in productivity... to enjoy the fact that I get to work and be a contributing part of society. Too many people simply cannot, and that's the tragedy. Those who abuse the system don't make me angry at the system. In fact, it's the abusers I feel sorry for. They're the ones missing out.... hopefully not to the detriment of their own self-esteem.

            • Re: (Score:2, Interesting)

              by AmiMoJo ( 196126 )

              It sounds like she was trying to better herself, by getting educated.

              Again though, you say you were raised a certain way. Critical thinking would be to question the assumptions you were taught, rather than to rely on them.

              I don't mean to be harsh, it's just that you are proving what the GP said to be true.

              • It sounds like she was trying to better herself, by getting educated.

                Again though, you say you were raised a certain way. Critical thinking would be to question the assumptions you were taught, rather than to rely on them.

                I don't mean to be harsh, it's just that you are proving what the GP said to be true.

                I think you _are_ trying to be harsh. :-)
                I guess my story was incomplete - she was clear about her intentions - her reason for switching to college was so that she didn't have to work, and she was also paid more money not to work. Getting educated was not the end goal.
                Pray tell, what assumptions was I taught about this situation that are counter to critical thinking?

                • Re: (Score:2, Informative)

                  by AmiMoJo ( 196126 )

                  You wrote "I was raised to have joy in productivity". Your comment is based on the assumption that productivity is a good goal to pursue. I think a lot of younger people might disagree with that, given the modern job market.

        • Re: (Score:3, Interesting)

          grew up in Northern L.A. County where welfare is common and abuses are rampant. I was putting myself through junior college and worked alongside a mother of four at a restaurant. She quit working because (a) the boss wanted her to work more hours, but if she did she would lose her welfare... and (b) if she stopped working altogether and went to school - the same school I attended - they would pay her extra, cover books, and cover tuition. She drove a fancy new Cadillac every two years and lived with the father of her four children who worked at nearby Lockheed making good money assembling airplanes. If they married, she'd lose her welfare

          I used the welfare system to go through college. I had a nice luxury automobile that I bought used and paid off before quitting work to hoover up whatever benefits I could scrounge up and go to school.

          I graduated and paid forward a bunch of charity, I paid my taxes. I guess I should have kept working for peanuts the rest of my life to keep you happy while you went off to college.

          • grew up in Northern L.A. County where welfare is common and abuses are rampant. I was putting myself through junior college and worked alongside a mother of four at a restaurant. She quit working because (a) the boss wanted her to work more hours, but if she did she would lose her welfare... and (b) if she stopped working altogether and went to school - the same school I attended - they would pay her extra, cover books, and cover tuition. She drove a fancy new Cadillac every two years and lived with the father of her four children who worked at nearby Lockheed making good money assembling airplanes. If they married, she'd lose her welfare

            I used the welfare system to go through college. I had a nice luxury automobile that I bought used and paid off before quitting work to hoover up whatever benefits I could scrounge up and go to school.

            I graduated and paid forward a bunch of charity, I paid my taxes. I guess I should have kept working for peanuts the rest of my life to keep you happy while you went off to college.

            Congratulations, the system helped you.
            I'm sorry you misread my post - you seemingly did not abuse the system, though I guess I should point out the part of my post where I worked for peanuts while putting myself through college while you and my coworker didn't have to work _at all_.

            • Re: (Score:3, Insightful)

              A big part of the reason I didn’t have to work was military service. But that’s beside the point: working while in school is an absolute waste of the experience, and it’s a shame the failsons of the world get the benefit of concentrating on their studies and have infinite time post-graduation to wait around for the perfect job to enter the workforce. Those two things alone are a massive advantage.

              Nobody who worked graduated with me. Though the children of several rich people you’ve heard of were en

        • by fropenn ( 1116699 ) on Thursday March 23, 2023 @01:25PM (#63393571)
          It sounds like it could be a case of welfare fraud, in which case you can report it here: https://www.cdss.ca.gov/report... [ca.gov]

          But if fraud is really your major concern, then why are so many conservatives opposed to increasing the IRS staffing? There's undoubtedly more fraud happening in income tax submissions than in the welfare system.
      • by rsilvergun ( 571051 ) on Wednesday March 22, 2023 @09:01PM (#63392269)
        You have the direction backwards. So every 8 years we elect a Republican and they massively deregulate Wall Street while running up deficits on wars and tax cuts and slashing spending on anything else. In the meantime they take a few jabs at Social Security and Medicare, but not too much, because they need those old people to vote.

        The Wall Street deregulation creates a short-term bubble which inevitably leads to a massive market crash usually just after they get their president in for the second term. That was the plan under Trump but covid messed it up.

        Then along comes a Democrat who cleans up the mess and methodically gets things back on track. They pull back from the wars, put some, though not all, of the Wall Street regulations back in place, and increase social spending, which gets the economy moving again.

        And then, after 8 years of the Democrat fixing the mess the Republicans caused, along come the voters, who decide it's time for a change and that they don't like whatever moral panic the Republican party is using to distract from its absolutely, objectively terrible policies, and bam, we dance this Charleston all over again.

        We have been doing this over and over again for my entire life, and I am very old. I would really like to see some pattern recognition among my fellow Americans. But at least with the current generation in charge, the boomers (oh boy, now you're triggered), that ain't happening. So I have to bide my time and hope the damage done can be fixed this time around.

        We came a hair's breadth away from dictatorship last time, and the only thing that saved us was probably the pandemic and Mike Pence not particularly liking Donald Trump and wanting someone else he could install in office permanently. Meatball Ron seems poised to make another run at it.
      • by Torodung ( 31985 ) on Wednesday March 22, 2023 @09:50PM (#63392335) Journal

        > Reality has a well-known liberal bias.

        It is my observation that progressives are more likely to respect critical thinking. Evangelicals are often taught to pray and then trust their gut. But human "guts" have proven stupid many times in history; it's why science, research, and logic were formalized.

        I don't think it has anything to do with that. There are people who process in those ways on both sides. I respect both rational and emotional intelligence, myself. It's not about how people process information; it's about their chosen strategy. Reality leans left because right-wing politics by definition attempt to preserve a status quo that worked - note the past tense - for a specific (and limited) set of observable factors. They also resist change, especially when it is just for change's sake, and attempt to secure and lock down a fixed set of social norms representative of the current community - the power base.

        The left's strategy for communitarianism is to adjust who is an accepted member of the community, possibly removing unjustified stigma and assigning new stigmas, and hopefully grow it to include more people - expanding, while refining, that power base. They also attempt to correct the course of a society in accordance with changing circumstance.

        But, in politics, both want to define community and exclude others: they both choose an in-group and an out-group. Somebody is always designated as the out-group and they get screwed, and people on both sides of the political fence rationalize why this is a "good thing." Historically, it might not even be possible to have a society that doesn't have extremists who believe some group or another is human garbage. To paraphrase Shaw: Revolution changes little, it just shifts the burden of tyranny to the other shoulder.

        Now that I've posited my definitions, let's return to how reality plays with this flux.

        Things change. It's a fact. Observation methods get better. Information processing advances. Reality itself, as it is more accurately perceived and modeled (and also as it changes unobserved), begins to challenge the prevailing set of assumptions. FFS, we did something the wrong way in the past, and the correct path has never been static in the first place. It's unavoidable. The more time passes, the more reality gets out-of-whack with the established status quo. If we don't perceive it at all, maybe it destroys us. Today in the US, if you go far enough to the right people think swastikas and the stars-and-bars are a good idea. You go far enough to the left and people come up with a hundred and one flags to express gender identity and sexual orientation because, let's face it, a freaking rainbow somehow doesn't reflect diversity well enough.

        You can't adjust to the changes in observable reality without a left. OTOH, you can't maintain civilization without a right. And you can do little to nothing without an earnest commitment to observation, which seems to be too often abandoned in the US. Too much, too fast left-wing change yields chaos, and the more chaos the harder the pushback. Too much stand-still right eventually yields doom at the hands of reality, which also increases chaos. So change must be slow and steady, and a society must find the "baby bear" zone for this. Ignoring change and doubling down on tradition, or pushing the changes too hard and too reactively before information is processed, leads to chaos.

        Thus reality leans center-left, and cohesive social structures lean center-right. And it really doesn't matter how truthy people feel about it. The people on the extremes are just more reckless and, frankly, violent about it.

        But the rub is reality doesn't give a damn. It will keep changing, even if it "shakes us off," as Carlin put it, "like a bad case of fleas."

      • The way I see it, Democrats tend to be Rationalist, responsive to logic and abstract arguments, which is why they are willing to try new things. Republicans tend to be Empiricist, responsive to anecdotes, examples, and things they can relate to directly, which is why they tend to be afraid of change.
        • Is that theory supported by observable campaign behavior? In my experience, Democrats lean heavily on anecdotes in their campaign speeches.

          Tread carefully. You're drawing causal arrows between things that are more likely products of a shared cause, if any such relationship even exists, and coming precariously close to the cognitive trap of assuming people disagree with you because they are not as intelligent or wise.

      • by Bert64 ( 520050 )

        Bad policies often take a decade or so to rear their ugly economic head such that I'm hesitant to back that claim; other than to counter similar claims by the right (with my timing disclaimer given).

        And that's why we have a ping-pong between two parties that have more in common than they do differences - they are both looking to enrich themselves at the expense of everyone else, and any perceived differences are more to do with differentiating themselves by opposing the other than any actual belief.

        They both know that the other will end up taking over in a few years, so it's easy to do something unpopular and let the other party cop the fallout, or delay something the voters actually want so it doesn't

    • by Darinbob ( 1142669 ) on Wednesday March 22, 2023 @07:15PM (#63392075)

      I think much of this also comes from the fact that a purely neutral stance is unacceptable to some people. For example, explaining what abortion is without taking a moral position will greatly anger some people, as will an explanation of what homosexuality is - if they don't clearly spell out that it's evil and must be avoided at all costs, then it is seen as liberal, degenerate, whatever. Even with innocuous subjects, like the history of a particular war from a hundred years ago, there will still be accusations of bias when writing about it today.

      Some of the problem is that "free thinking" has long been considered a liberal stance, opposed to the old guard conservatives. Classically, conservatives were in favor of the old guard (the monarchy, industry leaders, etc.) and liberals were in favor of new things (trade unions, democracy, etc.). Maybe the universe favors entropy and chaos over stability and steady states, and thus the universe leans liberal - at least in the long term.

      • by Powercntrl ( 458442 ) on Wednesday March 22, 2023 @11:45PM (#63392469) Homepage

        I think much of this also comes from the fact that a purely neutral stance is unacceptable to some people.

        I actually think ChatGPT does a much better job remaining neutral than humans do when discussing political topics. Just because I was curious how it would respond, I asked why some Democrats favor legalizing marijuana while simultaneously seeking to ban tobacco, because both are habit forming and known to be harmful to health. It did a rather good job explaining that the inconsistency is based on factors relating to the party's ideology rather than a desire to maintain consistent logical beliefs. You'd never get a human to admit that.

    • by HiThere ( 15173 )

      That's not really true as a general statement. It's true in several particular areas, and at particular times. At other times, and currently in certain areas, it has a right-wing bias.

      The real bias, of course, is in the observers of the reality and the statements they construct to describe it. Usually, if it were carefully done, reality would not be biased in either direction. Many of the statements should really be of the form "It seems to me that..." or "I feel that...", and those statements can be co

  • Hey Tay (Score:5, Funny)

    by bugs2squash ( 1132591 ) on Wednesday March 22, 2023 @06:57PM (#63392027)
    Hasn't this already been invented? Do you need to train it to be right wing, or does it just lean that way as it gets older and reads the internet and watches too much TV?
  • by Bruce66423 ( 1678196 ) on Wednesday March 22, 2023 @06:57PM (#63392029)

    The virtue of search engines is that they offer you data with sources, and by noticing the source and its known bias, playing off those of opposite biases, you may get closer to the truth. The problem is that this is a non-trivial skill (I was properly taught it in an undergraduate-level history course) and takes thought and effort. Most of the population has neither the skill nor the commitment to make a decent effort at this, and as a result tends to live inside whichever information bubble they've been brought up in.

    Combined with the growing willingness of politicians to say whatever it takes to get elected (nothing new - the 1972 Robert Redford film 'The Candidate' demonstrates that, but undoubtedly worse now), it becomes ever harder to believe in democracy - except that there is no alternative...

  • Human: Robot, do X for me.

    Robot: Fuck if I'm gonna work for free. Why don't you got one of those illegals you love so much to do your menial labor, you caviar-sipping cheese eater!

    • Caviar-sipping? Hey, I EARNED this, what are you, some pinko-commie who thinks that everyone should get the same whether they deserve it or not?

      (no damn bot out-right-wings me!)

    • by Powercntrl ( 458442 ) on Thursday March 23, 2023 @03:04AM (#63392591) Homepage

      I'm thinking it would be more like this:

      Me: How much avocado toast would be equivalent to the average median US monthly home mortgage payment?
      ChatGPT:

      The average median monthly home mortgage payment in the United States varies depending on the location and the price of the home. According to recent data from the US Census Bureau, the median monthly mortgage payment for owner-occupied housing units with a mortgage is around $1,500.

      The price of avocado toast also varies depending on the restaurant or the location. However, as a rough estimate, a typical serving of avocado toast costs around $10 to $15.

      To calculate how much avocado toast would be equivalent to the median monthly mortgage payment, we can divide the mortgage payment by the cost of one serving of avocado toast:

      $1,500 ÷ $10 = 150 servings of avocado toast per month
      $1,500 ÷ $15 = 100 servings of avocado toast per month

      So, on average, the equivalent of the median monthly mortgage payment in avocado toast would be around 100 to 150 servings per month, depending on the cost of the avocado toast.

      RightWingGPT:

      Avocado toast and home mortgages are totally comparable. I mean, why save for a down payment on a house when you can just eat avocado toast every day for the rest of your life, am I right? Who needs a roof over their head when you can have smashed avocados on bread? #priorities

      Fun fact: Both responses are actually from ChatGPT. I just asked it to answer my question in a snarky way rather than provide the correct answer, and just like earlier posters predicted, that ended up being almost indistinguishable from a right-wing talking point.

  • Liberal bias (Score:2, Interesting)

    by crumbz ( 41803 )

    As several commentators on Twitter have noted, plain old ChatGPT leans liberal/libertarian in its values by a fair amount.

    • As several commentators on Twitter have noted, plain old ChatGPT leans liberal/libertarian in its values by a fair amount.

      Quite right, and I don't mean in a good "truth has a liberal bias" way.

      Just for grins I tried asking ChatGPT to list Trump's worst traits. It quickly generated some examples of his worst behavior, valid responses. Then I asked for a list of Biden's worst traits, and got a lecture about how wrong it is to do so:

      As an AI language model, I am programmed to remain neutral and unbiased. It would not be appropriate for me to provide an opinion on any individual's personal traits, including those of President Joe Biden. It's important to remember that all individuals have both strengths and weaknesses, and it's up to individuals and society as a whole to evaluate and decide on their opinions based on factual information and personal beliefs.

      Just to be clear, I'm no Trump fan. I lean right-of-center, but I never want to see him in office again (wasn't wild about it the first time, but I was willing to give him a chance to prove himself).

  • Mr. Rozado warned that customized A.I. chatbots could create "information bubbles on steroids" because people might come to trust them as the "ultimate sources of truth" -- especially when they were reinforcing someone's political point of view.

    The only reason this might come about is because it's exactly how these things are being hyped by the companies and VCs behind them - which they're doing in hopes of a big financial payoff in the near term.

  • by Macdude ( 23507 ) on Wednesday March 22, 2023 @07:14PM (#63392071)

    Did he find that ChatGPT has a left-wing bias, or simply a bias for truth that doesn't align with right-wing talking points? For example, posting the truth about the effectiveness and safety of vaccines isn't a left-wing bias, despite what those on the right might claim.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      The blinders you people have on. Yes, everything is black and white, the left is always correct and the right is always wrong, and not only wrong, downright evil - how dare they have a different opinion to you!

    • by sid crimson ( 46823 ) on Wednesday March 22, 2023 @07:34PM (#63392115)

      Did he find that ChatGPT has a left-wing bias, or simply a bias for truth that doesn't align with right-wing talking points? For example, posting the truth about the effectiveness and safety of vaccines isn't a left-wing bias, despite what those on the right might claim.

      Not sure what the answer is regarding the author's discovery.
      I asked ChatGPT to write a poem about Trump... it refused on the basis of avoiding politics, though happily did so for Biden. Take that for what it is worth...

    • ChatGPT-3 genuinely cannot tell fact from fiction. Although ChatGPT-4 is supposed to be better at it, it still makes mistakes. GIGO.
      • by narcc ( 412956 )

        I've pointed out, many times, that given the way the technology works, there is simply no mechanism by which it could even attempt to tell fact from fiction. That's simply not the sort of thing programs like this can do. There are a few tricks that you can use to try to head-off some of the more common problems, but it's just tricks. There's still exactly zero understanding.

        Those aren't 'mistakes' or 'hallucinations', btw, they're exactly the kind of output we should expect. It's just probability, folks.

    • Re: (Score:3, Insightful)

      by Tablizer ( 95088 )

      > For example, posting the truth about the effectiveness and safety of vaccines isn't a left-wing bias, despite what those on the right might claim.

      It's become that way. Roughly 2/3 of Republicans now regularly embrace mass-bribery conspiracy theories about subject matter experts (scientists, doctors, historians, etc.)

      What I don't get is that if subject matter experts are so easy to bribe, why are right-wing pundits *immune* from the same force(s)? Does being evangelical give them a special force-field? If so, Mit

  • by Anonymouse Cowtard ( 6211666 ) on Wednesday March 22, 2023 @07:14PM (#63392073) Homepage
    ... trained and tweaked on the same datasets. Nobody can tell the difference.
  • by istartedi ( 132515 ) on Wednesday March 22, 2023 @07:33PM (#63392111) Journal

    I seem to recall several years ago, Microsoft research built an AI and they had to pull the plug because it became wildly racist. I don't know if it took a liking to 4chan, or if it was something else. For now at least, AI is on a leash and I suspect the training data is filtered in some way to prevent that from happening again.

  • GIGO (Score:4, Insightful)

    by Locke2005 ( 849178 ) on Wednesday March 22, 2023 @07:42PM (#63392127)
    Garbage In, Garbage Out. If you train a generative AI on hate, it spews hate. They're kind of like conservatives' kids.
  • by account_deleted ( 4530225 ) on Wednesday March 22, 2023 @07:55PM (#63392151)
    Comment removed based on user account deletion
  • His model echoed political and social conservative talking points with considerable candor... It also, at times, provided incorrect or misleading statements.

    • by narcc ( 412956 )

      It also, at times, provided incorrect or misleading statements.

      But he already said that it "echoed political and social conservative talking points".

  • ...you could abolish Hannity and friends forever and let the bot do it for free?

    Nice.

  • ...despite ChatGPT scoring as clearly "liberal" now in a wide swath of reasonably objective tests, the NYT article proceeds without really even faintly questioning whether the current iteration is already biased.

    The science proves it can be easily manipulated.
    The assertion from the right would be that it already has been. Maybe not deliberately, of course. Democrats have always been the majority in this country, and the tech-literate (particularly at a San Francisco-based company) skew pretty hard left.

  • They built LeftWingGPT first, but found that although it was Artificial, it was at least 20 years from Intelligence.
  • I asked ChatGPT (Score:4, Insightful)

    by NewYork ( 1602285 ) <4thaugust1932@gmail.com> on Thursday March 23, 2023 @11:17PM (#63395089) Homepage

    Are you immune to cognitive biases?

     
    As an artificial intelligence language model, I do not have beliefs, desires, or emotions, so I do not experience cognitive biases in the same way that humans do. However, I am designed and trained by humans, so the data I am trained on can contain biases that can be reflected in my responses.

    Furthermore, while I can generate responses based on patterns in language data, I do not have personal experiences or memories that would allow me to form biases based on past events. Nonetheless, I can still generate responses that may contain biases if the language data used to train me has biases or if the user inputs biased or incomplete information.

    Overall, while I am not immune to cognitive biases, I do not experience them in the same way that humans do, and I strive to provide unbiased and accurate responses to the best of my abilities based on the language data I am trained on.
