AI Microsoft Software Politics

Microsoft Releases Deepfake Detection Tool Ahead of Election (bloomberg.com)

An anonymous reader quotes a report from Bloomberg: Microsoft is releasing new technology to fight "deepfakes" that can be used to spread false information ahead of the U.S. election. "Microsoft Video Authenticator" analyzes videos and photos and provides a score indicating the chance that they're manipulated, the company said. Deepfakes use artificial intelligence to alter videos or audio to make someone appear to do or say something they didn't. Microsoft's tool aims to identify videos that have been altered using AI, according to a Tuesday blog post by the company.

The digital tool works by detecting features that are unique to deepfakes but that are not necessarily evident to people looking at them. These features -- "which might not be detectable to the human eye" -- include subtle fading and the way boundaries between the fake and real materials blend together in altered footage. The tool will initially be available to political and media organizations "involved in the democratic process," according to the company. A second new Microsoft tool, also announced Tuesday, will allow video creators to certify that their content is authentic and then communicate to online viewers that deepfake technology hasn't been used, based on a Microsoft certification that has "a high degree of accuracy," the post said. Viewers can access this feature through a browser extension.
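Microsoft has not published the algorithm behind the Authenticator's score. As a purely illustrative toy (an assumption on my part, not Microsoft's method), one crude blending-artifact heuristic is to check whether high-frequency noise is distributed evenly across an image, since spliced regions often carry different noise statistics than their surroundings:

    import numpy as np
    from PIL import Image
    from scipy.ndimage import laplace

    def blending_score(path, block=32):
        # Toy heuristic: how unevenly is high-frequency "noise energy" spread
        # across the image? Higher values are more suspicious. Not a real detector.
        img = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
        residual = laplace(img)                                   # high-frequency residual
        h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
        tiles = residual[:h, :w].reshape(h // block, block, w // block, block)
        tile_var = tiles.var(axis=(1, 3))                         # per-block noise energy
        return float(tile_var.std() / (tile_var.mean() + 1e-9))

    # print(blending_score("frame.png"))   # hypothetical file name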

  • by fustakrakich ( 1673220 ) on Tuesday September 01, 2020 @11:48PM (#60464370) Journal

    Are they to be the gatekeepers as to what is "fake"?

    • Who doesn't trust Microsoft? It's even more credible that they're only giving it to the media and political organizations, our true arbiters of truth.

      • by ShanghaiBill ( 739463 ) on Wednesday September 02, 2020 @12:48AM (#60464482)

        If they give access to anyone, the deep fakers will incorporate it into the adversarial network and train their GANs to avoid detection.
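        A minimal sketch of that attack in PyTorch (the generator and detector here are hypothetical stand-in networks, not Microsoft's model): freeze a published detector and use its "fake" score as an extra adversarial loss while training the generator, alongside the usual GAN losses.

          import torch
          import torch.nn as nn

          # Hypothetical stand-ins: a toy face generator and a frozen, pretrained
          # deepfake detector that outputs a single "fake" logit per image.
          generator = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Tanh())
          detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

          for p in detector.parameters():   # the detector only scores; it is never updated
              p.requires_grad = False

          opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
          bce = nn.BCEWithLogitsLoss()

          for step in range(1000):
              z = torch.randn(16, 128)
              fake = generator(z).view(16, 3, 64, 64)
              logits = detector(fake)
              # push the generator toward outputs the detector labels "real" (0),
              # i.e. explicitly optimize against the detection tool
              loss = bce(logits, torch.zeros_like(logits))
              opt.zero_grad()
              loss.backward()
              opt.step()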

        • by Anonymous Coward
          The techniques used to detect deepfakes are already well known, and they haven't incorporated them into their fakes yet. The reality is that while you can easily make a fake to fool the human eye, it is bloody hard and expensive to produce something that can pass forensic analysis.
    • Comment removed based on user account deletion
    • I for one think it would be far better to give a complex tool which does incomprehensible analysis to the unwashed masses. That will really help determine the TRUTH!

    • There are two possible meanings of "fake":
      1. If by "fake" they mean artificially manipulated, then anybody with the right technology can be a "gatekeeper".

      2. But if "fake" means "untrue", then it gets more complicated.

      Anybody with more than two functioning brain cells can see that they are referring to definition #1.

  • Consequences (Score:5, Insightful)

    by systemd-anonymousd ( 6652324 ) on Tuesday September 01, 2020 @11:51PM (#60464374)
    I fear that the consequences of the bot giving a false positive may be worse than a deepfake being seen as authentic.
    • Re: (Score:2, Troll)

      by Train0987 ( 1059246 )

      Good possibility that its sole purpose is to generate false positives. At least until November.

      • by gtall ( 79522 )

        That's ridiculous. MS doesn't get any advantage by putting intentionally flawed software out there unless it is intended to harm a competitor. Last we heard, they weren't competing in politics. That said, their software thingy could work about as well as the rest of their crapware just due to incompetence.

        • Nonsense, of course they have motivation. A friendly administration can make MS additional billions in sweet govt contracts.

          The only real question is who MS thinks will be most friendly.
      • by AmiMoJo ( 196126 )

        More scepticism probably wouldn't hurt. Major news channels are unlikely to broadcast deepfakes and the debates will be televised live.

    • Re: (Score:3, Interesting)

      I'd be concerned that even ordinary photo editing, to enhance contrast and make faces more visible, might be detected as fakery.

    • Re:Consequences (Score:4, Insightful)

      by AmiMoJo ( 196126 ) on Wednesday September 02, 2020 @04:19AM (#60464760) Homepage Journal

      The biggest threat is not deepfake videos, it's authentic videos edited to be misleading. We see that happening over and over, from the frame rate being adjusted to make someone look drunk or violent to simply splicing parts of a speech together to give a false impression.

    • I fear that the consequences of the bot giving a false positive may be worse than a deepfake being seen as authentic.

      Quite so. There are a lot of "fakes" that are not fake at all:

      • Sound is often separately recorded to be of a decent quality. It is then mixed in with background sound.
      • Do you come from a country where foreign news is lip-synchronized?
      • Video is edited to blur identities, or to remove gruesome details.
      • etc.

      Video editing is often done in news items, because that is the best way to convey the essence of what has happened. That does not mean all news items are fake.

    • I don't. We live in a post-truth world. Fake, real, it's all completely fucking irrelevant since 2016. If anything has been proven, it's that no one cares about the truth, the record is amended whenever for whatever purpose, and whenever anyone speaks against a democrat they are "extreme right" and whenever anyone speaks against a republican they are "extreme left".

      Nothing would change if there were actual fakes out there which get missed, or truth that suffers false positives because people have shown they flat out just don't give a shit anymore.

      • Nothing would change if there were actual fakes out there which get missed, or truth that suffers false positives because people have shown they flat out just don't give a shit anymore.

        Truth got run over by feelings: change things that make you feel bad, even if they're true. When you're losing, change the rules, answer the questions you wish they had asked, etc. The right thing to do (or believe) is mostly whatever is the least fun.

  • Will try it on the videos on Pornhub and see if the AI is as good as a human in hands-off detection trials. Have submitted a grant proposal for a Premium membership, strictly for research use.
  • by Kohath ( 38547 ) on Wednesday September 02, 2020 @12:22AM (#60464428)

    All of their fakes are shallow.

  • by Roger Wilcox ( 776904 ) on Wednesday September 02, 2020 @12:24AM (#60464430)
    What could possibly go wrong?
    • by MrL0G1C ( 867445 )

      The truth is you don't want the truth, you don't want fact-checkers telling you when lying is going on. Why? Because if you actually listen then it might burst that false-reality bubble you've enclosed yourself in.

  • Saw an interesting show discussing Benford's Law, which dictates in any set of random numbers, they ought to follow a pattern: just look at the first digit of every value, and about 30% will start with 1, scaling down from there, making it a fairly quick way to examine a set of numbers for anomalies.

    Apparently in the case of an image file, like a JPG, an original will follow Benford's Law if you check the pixel values (not 100% sure if that's what's checked, but something like that.) If you tak

    • by raymorris ( 2726007 ) on Wednesday September 02, 2020 @03:02AM (#60464638) Journal

      > Saw an interesting show discussing Benford's Law, which dictates in any set of random numbers,

      FYI Benford's Law does NOT apply to random numbers.
      Random numbers have ~random digits, so each appears 10% of the time in any position. Approximately random digits, because "random" generally means between zero and $MAX. The choice of $MAX affects the distribution. In no case will it follow Benford's Law if numbers are chosen randomly.

      Benford's Law applies where the LOG of the number is evenly distributed over several orders of magnitude. This tends to happen, approximately, where the number is produced by multiplying other numbers. Many sets of numbers in the real world come about by multiplying other numbers, so they follow Benford's Law.

      It does not apply to numbers that come about by adding other numbers.

      The best compression algorithm will generate numbers indistinguishable from random, numbers with no pattern, so they will not follow Benford's Law.

      The reason that the best compression will create numbers with no pattern, numbers similar to random, is that if there WERE a pattern, that pattern could be used to further compress the bits. That is, it could always be made smaller by getting rid of the pattern. Therefore any compression which generates patterns is by definition sub-optimal. Which means any compression to which Benford's Law applies is sub-optimal.
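      A quick way to see the difference is to compare leading-digit histograms directly. This sketch (plain Python/NumPy, written for illustration) checks uniform random numbers and products of random factors against the Benford distribution P(d) = log10(1 + 1/d):

        import numpy as np

        def leading_digit_freq(values):
            # Fraction of values whose first significant digit is 1..9
            values = np.abs(np.asarray(values, dtype=float))
            values = values[values > 0]
            first = (values / 10 ** np.floor(np.log10(values))).astype(int)
            return np.bincount(first, minlength=10)[1:10] / len(first)

        rng = np.random.default_rng(0)
        benford = np.log10(1 + 1 / np.arange(1, 10))

        uniform = rng.uniform(0, 1e6, 100_000)                        # "random" numbers
        products = np.prod(rng.uniform(1, 10, (100_000, 8)), axis=1)  # spans orders of magnitude

        print("Benford :", np.round(benford, 3))                      # ~[0.301 0.176 0.125 ...]
        print("uniform :", np.round(leading_digit_freq(uniform), 3))  # roughly flat
        print("products:", np.round(leading_digit_freq(products), 3)) # close to Benford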

      • I said:

        > each appears 10% of the time in any position.

        That should be "in any position other than the first". If you throw away leading zeroes, each digit of course has a 1/9 chance of appearing as the first digit.

      • Thanks for the clarification, but I think you misunderstood; this isn't about 'best' compression, but from what I interpreted from the show I saw, re-compression of an image (required if editing something like a JPG) results in a file that doesn't comply with Benford's Law. Came across this that might be helpful: https://www.researchgate.net/p... [researchgate.net]
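        If you want to poke at that claim yourself, here is a rough sketch (my own, not taken from the linked paper) that computes the leading-digit histogram of 8x8 block-DCT coefficient magnitudes, the quantity JPEG compression works on, and compares it to the Benford curve; a large divergence is one hint that an image may have been recompressed or edited. It assumes NumPy, SciPy and Pillow, and the file name is hypothetical.

          import numpy as np
          from PIL import Image
          from scipy.fft import dctn

          def first_digits(values):
              values = values[values >= 1]
              return (values / 10 ** np.floor(np.log10(values))).astype(int)

          def jpeg_benford(path):
              img = np.asarray(Image.open(path).convert("L"), dtype=float)
              h, w = (img.shape[0] // 8) * 8, (img.shape[1] // 8) * 8
              blocks = img[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
              coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")  # per-block 8x8 DCT, as in JPEG
              ac = np.abs(coeffs.reshape(-1, 64)[:, 1:]).ravel()  # drop each block's DC term
              digits = first_digits(ac)
              observed = np.bincount(digits, minlength=10)[1:10] / len(digits)
              benford = np.log10(1 + 1 / np.arange(1, 10))
              return observed, benford

          # obs, ref = jpeg_benford("photo.jpg")   # hypothetical file name
          # print(np.round(obs, 3), np.round(ref, 3))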

        • That may well be true. Just saying, if the numbers follow Benford's, they aren't random. By the definition of random.

  • That's nice, but... the first time the algo finds a false positive, the "fakers" will be quick to discredit the program.
    • Or worse: deep fake tech is an "adversarial network". The TL;DR is that the hardest part of making a deep fake is teaching the computer when it's doing a bad job. Bottom line: as soon as the detection tool is in the hands of those faking... it's worthless.
  • by Anonymous Coward

    (TrueScore: 4, Mind-blowing)

    My actual experiences with today's Internet:

    • I had finally found one single source on the known Internet providing a certain bit of data, and set up my cURL bot to fetch it daily. It was not long before the requests started failing. Looking into it, I realized that they had just switched to the dreaded CloudFlare "Cybermafia Protection Services", meaning both my normal browser and my bots were presented with this "anti-bot" screen which obviously stops the cURL bot, but also
    • Good write up.

      More and more companies/services are starting to force this vile, idiotic concept of "two-factor authentication", as always using "security" as a bullshit excuse to track you and prevent any kind of privacy.

      If they actually cared about security, they would use RFC 6238 TOTP for 2FA.
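      For reference, RFC 6238 TOTP is small enough to sketch with nothing but the Python standard library (the base32 secret below is a made-up example; real secrets come from the service's QR code):

        import base64, hashlib, hmac, struct, time

        def totp(secret_b32, period=30, digits=6):
            # RFC 6238 time-based one-time password (HMAC-SHA1, the common default)
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // period                 # moving factor from wall-clock time
            mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                              # RFC 4226 dynamic truncation
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))                          # example secret, not a real one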

  • Comment removed based on user account deletion
  • move on.

    There are some websites that already do this.

    Remember "only I can save you".
    No, that's a "deep fake".
  • This whole trick was to keep it from looping back on itself and self-destructing.
  • Great, a tool that will help in making more convincing deepfakes /s
  • I would like to see Deep Flake detection. Something that would intercept the political ads would be nice.

  • Couldn't the same technologies that are used to detect deep fakes be re-employed back into algorithms that produce them to make them even *harder* to detect?
  • They ran pictures of all of the political candidates through the engine, but most of them were tagged "shallow fake". :-p

"If it ain't broke, don't fix it." - Bert Lantz

Working...