AI Disclaimers in Political Ads Backfire on Candidates, Study Finds (msn.com)
Many U.S. states now require candidates to disclose when political ads used generative AI, reports the Washington Post.
Unfortunately, researchers at New York University's Center on Technology Policy "found that people rated candidates 'less trustworthy and less appealing' when their ads featured AI disclaimers..." In the study, researchers asked more than 1,000 participants to watch political ads by fictional candidates — some containing AI disclaimers, some not — and then rate how trustworthy they found the would-be officeholders, how likely they were to vote for them and how truthful their ads were. Ads containing AI labels largely hurt candidates across the board, with the pattern holding true for "both deceptive and more harmless uses of generative AI," the researchers wrote. Notably, researchers also found that AI labels were more harmful for candidates running attack ads than those being attacked, something they called the "backfire effect".
"The candidate who was attacked was actually rated more trustworthy, more appealing than the candidate who created the ad," said Scott Babwah Brennen, who directs the center at NYU and co-wrote the report with Shelby Lake, Allison Lazard and Amanda Reid.
One other interesting finding... The article notes that study participants in both parties "preferred when disclaimers were featured anytime AI was used in an ad, even when innocuous."
Re:Good (Score:5, Insightful)
They're already all liars. And I do mean ALL including whoever your favorites are.
Both sides! The only problem is that one side (Trump/Vance) lies, lies, and then lies some more. If one person is 800 lbs and one is 220 lbs, one is slightly overweight and the other is morbidly obese. You wouldn't say they're both fatties.
You keep up the logical fallacies.
Re: Good (Score:4, Funny)
You Exaggerate (Score:2)
Re: (Score:2)
They're already all liars. And I do mean ALL including whoever your favorites are.
Indeed. You do not even get into such a position if you are an honest person.
Re: (Score:1)
The reality is there are no clean hands in politics.
You may get into it for ideals, but the only way to advance your agenda is via compromise with the very evils you got into it to fight against. Over time you make more and more deals. By the time you reach the upper levels of politics, you are the devil.
Re: (Score:2)
If we are going to make collective decisions we need to elect people who are flexible and know how to compromise. You need people who will listen with an open mind to everyone. And you need people who are emotionally stable with clear values and good judgment. Unfortunately none of those things are rewarded in political campaigns. We vote for people who confirm all our prejudices or at least appear to.
Why "unfortunately"? (Score:1)
Re: (Score:2)
Re: (Score:2)
Yep, I think it is.
This only makes sense (Score:4, Interesting)
The ads don't specify where the AI was used, just that it was used. So anyone watching then questions everything in the ad and wonders what was real and what was generated. Sure, you may use it for something innocuous, but the people watching the ad don't know that was the only thing it was used for. Candidates are better off not using AI, since people don't trust it in general. And this also means the disclaimers are working and should be kept, as they are making people question the ad.
True for some? (Score:2)
This will probably generally hold true, but will be invalid for supporters of Trump.
There's an old saying. "You can't beat an emotional argument with a logical one." And many (perhaps most) of Trump's supporters are operating from the emotional space. It doesn't matter how many facts or disclaimers you stack on anything. They will not be swayed. They'll no more absorb the label than they would any fact-check. It's noise.
Any chance of reverse attack ads? (Score:2)
Meaning I create an AI ad, which I correctly disclose, about some made-up or even real thing about myself. Run it, and get sympathy out of the AI disclaimer (and out of people unwilling or unable to think it through).
"Unfortunately" people don't trust faked content? (Score:2)
It is only "unfortunate" if you think people *should* be trusting convincingly faked content in political ads.
It isn't unfortunate - it's the REASON for labeling AI-generated content in political ads.