DeepMind wants to answer the big ethical questions posed by AI

<div style="margin: 5px 5% 10px 5%;"><img src="https://www.sickgamedev.win/wp-content/uploads/2017/10/deepmind-wants-to-answer-the-big-ethical-questions-posed-by-ai.jpg" width="200" height="200" title="" alt="" /></div><div><p>Google’s DeepMind artificial intelligence (AI) division has established a new research group to learn more about the ethical questions posed by the dawn of AI.</p>
<p>The British artificial intelligence outfit was acquired by Google in 2014, and often uses video games as part of its projects.</p>
<p>For instance, back in 2016 the company <a href="https://www.gamasutra.com/view/news/284951/DeepMind_and_Blizzard_team_up_to_release_API_aimed_at_AI_enhancement.php">partnered with Blizzard</a> to create an API tailored for research environments based in <em>StarCraft II</em>, and prior to that the DeepMind team <a href="https://www.gamasutra.com/view/news/237515/New_DeepMind_AI_learns_to_play_Atari_2600_games_at_human_levels.php">developed an artificial agent capable</a> of learning how to play Atari 2600 games from scratch. </p>
<p>Now, the DeepMind Ethics &amp; Society unit hopes to unravel some of the biggest ethical quandaries posed by the creation of artificial intelligence to pave the way for “truly beneficial and responsible AI.” </p>
<p>“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work,” <a href="https://deepmind.com/blog/why-we-launched-deepmind-ethics-society/">reads a blog post</a> on the DeepMind website. </p>
<p>“The development of AI creates important and complex questions. Its impact on society — and on all our lives — is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built-in from the beginning. But in a field as complex as AI, this is easier said than done.</p>
<p>“As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work. At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes.”</p>
DeepMind isn’t the only institution looking into this area. Other research projects, such as Julia Angwin’s study of racism in criminal justice algorithms (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing), and Kate Crawford and Ryan Calo’s examination of the broader consequences of AI for social systems (http://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805), have also begun to peel back the curtain.

For DeepMind, the hope is that its new unit will achieve two primary aims: to help technologists put ethics into practice when the time comes, and to ensure society is sufficiently prepared for the day AI becomes part of the wider world.