New supercomputer, vision for future AI work announced

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”
OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.

“We are seeing that larger-scale systems are an important component in training more powerful models,” Altman said.
For customers who want to push their AI ambitions but who don’t require a dedicated supercomputer, Azure AI provides access to powerful compute (https://aka.ms/AA8eedy) with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.
At its Build conference, Microsoft announced that it would soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.
It also unveiled a new version of DeepSpeed (https://aka.ms/deepspeed2-blog-research), an open source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
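To make that concrete, here is a minimal sketch of how DeepSpeed typically wraps an ordinary PyTorch training loop. The toy model, batch size, and ZeRO configuration values are illustrative assumptions for this sketch, not the settings Microsoft uses for its large models, and the exact keyword arguments have varied across DeepSpeed releases.

```python
import torch
import deepspeed

# A toy PyTorch model standing in for a large Transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

# Illustrative DeepSpeed config: the batch size, optimizer, and
# ZeRO memory-optimization stage are assumptions for this sketch.
ds_config = {
    "train_batch_size": 32,
    "zero_optimization": {"stage": 1},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns a wrapped "engine" that handles
# distributed data parallelism and partitioned optimizer state
# behind the usual train-step calls. Scripts like this are normally
# launched with the `deepspeed` CLI rather than plain `python`.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

def train_step(batch, labels):
    batch = batch.to(model_engine.device)
    labels = labels.to(model_engine.device)
    outputs = model_engine(batch)
    loss = torch.nn.functional.mse_loss(outputs, labels)
    model_engine.backward(loss)  # replaces loss.backward()
    model_engine.step()          # replaces optimizer.step()
    return loss.item()
```

The key design point is that the returned engine stands in for the usual `loss.backward()` and `optimizer.step()` calls, which is where DeepSpeed inserts its memory and communication optimizations without requiring the model code itself to change.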
Along with the DeepSpeed announcement, Microsoft announced (http://aka.ms/ort-build2020) that it has added support for distributed training to the ONNX Runtime (https://github.com/Microsoft/onnxruntime). The ONNX Runtime is an open source library designed to make models portable across hardware and operating systems. To date, it has focused on high-performance inferencing; today’s update adds support for model training and incorporates the optimizations from the DeepSpeed library, enabling performance improvements of up to 17 times over the current ONNX Runtime.
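For the inferencing side that the runtime has focused on so far, using it from Python is compact. A minimal sketch follows; the model path, input name, and tensor shape are placeholders for whatever graph you have exported.

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx")

# Input names come from the exported graph, so we read the first
# one rather than hard-coding it.
input_name = session.get_inputs()[0].name

# Placeholder input: one image-shaped float32 batch.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run() executes the graph on the available execution provider
# (CPU by default, GPU or other accelerators if configured).
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```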
“We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly,” said Microsoft principal program manager Phil Waymouth. “These large models are going to be an enormous accelerant.”
[Illustration: In “self-supervised” learning, AI models can learn from large amounts of unlabeled data. For example, models can learn deep nuances of language by absorbing large volumes of text and predicting missing words and sentences. Art by Craighton Berman.]
Learning the nuances of language
Designing AI models that might one day understand the world more like people do starts with language, a critical component of understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.
Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.
A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation (https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/) now stands as the world’s largest publicly available language AI model, with 17 billion parameters.
This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.
In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around them.
As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
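That “Mad Libs” objective is straightforward to express in code. Below is a deliberately tiny PyTorch sketch of masked-token prediction; the vocabulary size, model dimensions and random stand-in “corpus” are placeholders, and real systems like the Turing models apply the same idea at vastly larger scale.

```python
import torch
import torch.nn as nn

# Toy setup: a tiny vocabulary and model stand in for a model with
# billions of parameters. Token id 0 is reserved as the mask token.
VOCAB, DIM, MASK_ID = 10_000, 128, 0

embed = nn.Embedding(VOCAB, DIM)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)
to_vocab = nn.Linear(DIM, VOCAB)

def masked_lm_loss(token_ids: torch.Tensor) -> torch.Tensor:
    """Hide ~15% of tokens and train the model to predict them."""
    mask = torch.rand(token_ids.shape) < 0.15
    corrupted = token_ids.masked_fill(mask, MASK_ID)

    hidden = encoder(embed(corrupted))
    logits = to_vocab(hidden)

    # The loss is computed only at masked positions; every other
    # position is ignored via PyTorch's ignore_index convention.
    labels = token_ids.masked_fill(~mask, -100)
    return nn.functional.cross_entropy(
        logits.view(-1, VOCAB), labels.view(-1), ignore_index=-100
    )

# One step on a random stand-in batch: 8 sequences of 64 tokens.
tokens = torch.randint(1, VOCAB, (8, 64))
loss = masked_lm_loss(tokens)
loss.backward()
```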
“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.
The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine-tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.
“Because every organization is going to have its own vocabulary, people can now easily fine-tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” Vargas said.
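In code, that fine-tuning step often amounts to freezing the pretrained layers and training a small task-specific head on labeled domain examples. The sketch below assumes a generic PyTorch encoder; all of the names, sizes and the three-way document classifier are illustrative, not part of Microsoft’s announced tooling.

```python
import torch
import torch.nn as nn

# `pretrained_encoder` stands in for a large self-supervised model;
# here it is just an untrained placeholder of the same shape.
DIM, NUM_CLASSES = 128, 3  # e.g. a 3-way legal-document classifier
pretrained_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)

# Freeze the general-language layers learned during pretraining...
for p in pretrained_encoder.parameters():
    p.requires_grad = False

# ...and train only a small task head on labeled domain data.
classifier = nn.Linear(DIM, NUM_CLASSES)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def fine_tune_step(embeddings, labels):
    # embeddings: (batch, seq_len, DIM) token vectors; labels: (batch,)
    hidden = pretrained_encoder(embeddings)
    logits = classifier(hidden.mean(dim=1))  # pool over tokens
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on random stand-in data.
loss = fine_tune_step(
    torch.randn(8, 64, DIM), torch.randint(0, NUM_CLASSES, (8,))
)
```

Because only the small head is updated, this kind of fine-tuning needs far less data and compute than the original pretraining, which is what makes adapting one large model to many organizations practical.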


Source: https://www.sickgaming.net/blog/2020/05/...announced/