


Microsoft - New steps to combat disinformation - xSicKxBot - 09-02-2020

New steps to combat disinformation

<div style="margin: 5px 5% 10px 5%;"><img src="https://www.sickgaming.net/blog/wp-content/uploads/2020/09/new-steps-to-combat-disinformation.jpg" width="1200" height="744" title="" alt="" /></div><div><p>Today, we’re announcing two new technologies to combat disinformation, new work to help educate the public about the problem, and partnerships to help advance these technologies and educational efforts quickly.</p>
<p>There is no question that disinformation is widespread. <a href="https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2020/09/Trends_in_Online_Politics_Influence_Efforts_V2.0.pdf">Research we supported from Professor Jacob Shapiro at Princeton</a>, updated this month, cataloged 96 separate foreign influence campaigns targeting 30 countries between 2013 and 2019. These campaigns, carried out on social media, sought to defame notable people, persuade the public or polarize debates. While 26% of these campaigns targeted the U.S., other countries targeted include Armenia, Australia, Brazil, Canada, France, Germany, the Netherlands, Poland, Saudi Arabia, South Africa, Taiwan, Ukraine, the United Kingdom and Yemen. Some 93% of these campaigns included the creation of original content, 86% amplified pre-existing content and 74% distorted objectively verifiable facts. Recent reports also show that disinformation has been distributed about the <a href="https://www.nytimes.com/2020/07/28/us/politics/russia-disinformation-coronavirus.html">COVID-19 pandemic</a>, <a href="https://www.bbc.com/news/world-53755067">leading to</a> deaths and hospitalizations of people seeking supposed cures that are actually dangerous.</p>
<p>What we’re announcing today is an important part of Microsoft’s Defending Democracy Program, which, in addition to fighting disinformation, helps to protect voting through <a href="https://blogs.microsoft.com/on-the-issues/2020/02/17/wisconsin-electionguard-polls/">ElectionGuard</a> and helps secure campaigns and others involved in the democratic process through <a href="https://blogs.microsoft.com/on-the-issues/2018/08/20/protecting-democracy-with-microsoft-accountguard/">AccountGuard</a>, <a href="https://blogs.microsoft.com/on-the-issues/2019/05/06/protecting-political-campaigns-from-hacking/">Microsoft 365 for Campaigns</a> and <a href="https://blogs.microsoft.com/on-the-issues/2020/04/02/defending-democracy-program-extended/">Election Security Advisors</a>. It’s also part of a broader focus on protecting and promoting journalism as Brad Smith and Carol Ann Browne discussed in their <a href="https://www.linkedin.com/pulse/dawn-decade-top-ten-tech-policy-issues-2020s-brad-smith/?published=t">Top Ten Tech Policy Issues for the 2020s.</a></p>
<p><strong>New Technologies</strong></p>
<p>Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. At Microsoft, we’ve been working on two separate technologies to address different aspects of the problem.</p>
<p>One major issue is <em>deepfakes</em>, or synthetic media: photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways. They can make people appear to say things they didn’t say or to be in places they weren’t, and because they’re generated by AI that can continue to learn, it is inevitable that they will eventually beat conventional detection technology. In the short run, however, such as in the lead-up to the U.S. election, advanced detection technologies can be a useful tool for helping discerning users identify deepfakes.</p>
<p>Today, we’re announcing Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.</p>
<p>This technology was originally developed by Microsoft Research in coordination with Microsoft’s Responsible AI team and the Microsoft AI, Ethics and Effects in Engineering and Research (AETHER) Committee, which is an advisory board at Microsoft that helps to ensure that new technology is developed and fielded in a responsible manner. Video Authenticator was created using a public dataset from <a href="https://www.arxiv-vanity.com/papers/1901.08971/">FaceForensics++</a> and was tested on the <a href="https://www.arxiv-vanity.com/papers/2006.07397/">DeepFake Detection Challenge Dataset</a>, both leading datasets for training and testing deepfake detection technologies.</p>
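<p>Microsoft has not published Video Authenticator’s internals. As a rough, hypothetical sketch of the kind of per-frame output described above, the loop below decodes a video and reports a manipulation confidence for each frame; the <code>score_frame</code> placeholder stands in for a trained detector and is not Microsoft’s model.</p>
<pre><code># Hypothetical sketch of per-frame confidence scoring, in the spirit of the
# Video Authenticator output described above. score_frame() is a placeholder,
# not Microsoft's detector.
import cv2  # OpenCV, used here only for video decoding

def score_frame(frame) -> float:
    """Return a 0-1 'chance of manipulation' for a single frame.
    A real detector would look for blending boundaries, fading and
    greyscale artifacts; plug a trained model in here."""
    raise NotImplementedError("supply a trained deepfake-detection model")

def score_video(path):
    """Yield (frame_index, confidence) pairs as the video plays."""
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of stream
            break
        yield index, score_frame(frame)  # confidence score for this frame
        index += 1
    cap.release()
</code></pre>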
<p>We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the <em>authenticity</em> of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.</p>
<p>Today, we’re also announcing new technology that can both detect manipulated content and assure people that the media they’re viewing is authentic. This technology has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online. The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it.</p>
<p>This technology has been built by Microsoft Research and Microsoft Azure in partnership with the Defending Democracy Program. It will power an initiative recently <a href="https://www.bbc.co.uk/mediacentre/latestnews/2020/trusted-news-initiative">announced by the BBC</a> called Project Origin.</p>
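<p>The post does not specify the hash or certificate formats, but the publish-then-verify pattern it describes can be sketched roughly as follows. This is a simplified illustration only: the SHA-256 manifest fields and the Ed25519 signature below are assumptions for the example, not the actual Azure tooling or the Project Origin specification.</p>
<pre><code># Simplified sketch of the hash-and-certify / verify workflow described above.
# Field names and the Ed25519 signature scheme are illustrative assumptions.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def certify(content, producer, key):
    """Producer side: hash the content and sign the hash, producing
    metadata that travels with the content."""
    digest = hashlib.sha256(content).hexdigest()
    signature = key.sign(digest.encode())
    return {"producer": producer, "sha256": digest, "signature": signature.hex()}

def verify(content, manifest, public_key):
    """Reader side: recompute the hash, check it matches the manifest,
    and check the signature over the hash."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["sha256"]:
        return False                     # content altered after certification
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), digest.encode())
    except InvalidSignature:
        return False                     # not signed by the claimed producer
    return True

# Example: a news outlet certifies an article; a reader verifies it later.
key = Ed25519PrivateKey.generate()
manifest = certify(b"article body ...", "Example News", key)
assert verify(b"article body ...", manifest, key.public_key())
</code></pre>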
<p><strong>Partnerships</strong></p>
<p>No single organization is going to have a meaningful impact on combating disinformation and harmful deepfakes on its own. We will do what we can to help, but the nature of the challenge requires that multiple technologies be widely adopted, that educational efforts reach consumers everywhere consistently and that we keep learning more about the challenge as it evolves.</p>
<p>Today, we’re highlighting partnerships we’ve been developing to help these efforts.</p>
<p>First, we’re partnering with the AI Foundation, a dual commercial and nonprofit enterprise based in San Francisco, with the mission to bring the power and protection of AI to everyone in the world. Through this partnership, the AI Foundation’s <a href="https://rd2020.org/">Reality Defender 2020</a> (RD2020) initiative will make Video Authenticator available to organizations involved in the democratic process, including news outlets and political campaigns. Video Authenticator will initially be available only through RD2020, which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 <a href="https://rd2020.org/#involved">here</a>.</p>
<p>Second, we’ve <a href="https://www.bbc.co.uk/mediacentre/latestnews/2020/trusted-news-initiative">partnered</a> with a consortium of media companies including the BBC, CBC/Radio-Canada and the New York Times on <a href="https://www.bbc.co.uk/academy/en/articles/art20200304152450988">Project Origin</a>, which will test our authenticity technology and help advance it as a standard that can be adopted broadly. The <a href="https://www.bbc.co.uk/mediacentre/latestnews/2020/trusted-news-initiative">Trusted News Initiative</a>, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies.</p>
<p><strong>Media Literacy</strong></p>
<p>We’re also partnering with the University of Washington (UW), <a href="https://sensity.ai/">Sensity</a> and USA Today on media literacy. Improving media literacy will help people sort disinformation from genuine facts and manage risks posed by deepfakes and cheap fakes. Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody. Though not all synthetic media is bad, even a short intervention with media literacy resources has been shown to help people identify it and treat it more cautiously.</p>
<p>Today, we are launching an <a href="https://aka.ms/spotdeepfakes">interactive quiz</a> for voters in the United States to learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy. The Spot the Deepfake Quiz is a media literacy tool in the form of an interactive experience developed in partnership with the UW <a href="https://www.cip.uw.edu/">Center for an Informed Public</a>, Sensity and USA Today. The quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising.</p>
<p>Additionally, in collaboration with the <a href="https://www.rtdna.org/">Radio Television Digital News Association</a>, <a href="https://thetrustproject.org/">The Trust Project</a> and UW’s Center for an Informed Public and Accelerating Social Transformation Program, Microsoft is supporting a public service announcement (PSA) campaign ahead of the upcoming U.S. election that encourages people to take a “reflective pause” and check that information comes from a reputable news organization before they share or promote it on social media. The PSA campaign will help people better understand the harm misinformation and disinformation do to our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run on radio stations across the United States in September and October.</p>
<p>Finally, in recent months we have significantly expanded our implementation of <a href="https://www.newsguardtech.com/">NewsGuard</a>, which enables people to learn more about an online news source before consuming its content. NewsGuard operates a team of experienced journalists who rate online news websites on the basis of nine journalistic integrity criteria, which they use to create both a “nutrition label” and a red/green rating for each rated news website. People can access NewsGuard’s service by downloading a simple browser extension, which is available for all standard browsers. It is <a href="https://www.newsguardtech.com/edge/">free</a> for users of the Microsoft Edge browser. Importantly, Microsoft has no editorial control over any of NewsGuard’s ratings and the NewsGuard browser extension does not limit access to information in any way. Instead, NewsGuard aims to provide greater transparency and encourage media literacy by providing important context about the news source itself.</p>
<p><strong>Policy considerations</strong></p>
<p>Governments, companies, non-profits and others around the world have a critical part to play in addressing disinformation and election interference broadly. In 2018, the <a href="https://pariscall.international/en/">Paris Call for Trust &amp; Security in Cyberspace</a> brought together a multistakeholder group of global leaders who committed to nine principles that help ensure peace and security online. One of the most critical of these principles is defending electoral processes. In May, Microsoft, the Alliance for Securing Democracy and the Government of Canada <a href="https://www.canada.ca/en/democratic-institutions/news/2020/05/government-of-canada-leading-globally-to-promote-cybersecurity-and-counter-disinformation-online.html">launched</a> an effort to lead global activities on this principle. We encourage any organization interested in contributing to this work to <a href="https://pariscall.international/en/support">join the Paris Call</a>.</p>
<p class="tag-list">Tags: <a aria-label="See more stories about Azure" href="https://blogs.microsoft.com/on-the-issues/tag/azure/" rel="tag">Azure</a>, <a aria-label="See more stories about COVID-19" href="https://blogs.microsoft.com/on-the-issues/tag/covid-19/" rel="tag">COVID-19</a>, <a aria-label="See more stories about deepfakes" href="https://blogs.microsoft.com/on-the-issues/tag/deepfakes/" rel="tag">deepfakes</a>, <a aria-label="See more stories about Defending Democracy Program" href="https://blogs.microsoft.com/on-the-issues/tag/defending-democracy-program/" rel="tag">Defending Democracy Program</a>, <a aria-label="See more stories about disinformation" href="https://blogs.microsoft.com/on-the-issues/tag/disinformation/" rel="tag">disinformation</a>, <a aria-label="See more stories about ElectionGuard" href="https://blogs.microsoft.com/on-the-issues/tag/electionguard/" rel="tag">ElectionGuard</a>, <a aria-label="See more stories about Microsoft AccountGuard" href="https://blogs.microsoft.com/on-the-issues/tag/microsoft-accountguard/" rel="tag">Microsoft AccountGuard</a>, <a aria-label="See more stories about Microsoft Research" href="https://blogs.microsoft.com/on-the-issues/tag/microsoft-research/" rel="tag">Microsoft Research</a>, <a aria-label="See more stories about newsguard" href="https://blogs.microsoft.com/on-the-issues/tag/newsguard/" rel="tag">newsguard</a>, <a aria-label="See more stories about Project Origin" href="https://blogs.microsoft.com/on-the-issues/tag/project-origin/" rel="tag">Project Origin</a>, <a aria-label="See more stories about Video Authenticator" href="https://blogs.microsoft.com/on-the-issues/tag/video-authenticator/" rel="tag">Video Authenticator</a></p>
</div>


https://www.sickgaming.net/blog/2020/09/01/new-steps-to-combat-disinformation/