
Bloomberg: ‘How Amy Hood won back Wall Street and helped reboot Microsoft’



TextWorld, an open-source project for generating text-based games, can train and test AI agents

Today, fresh out of the Microsoft Research Montreal lab, comes an open-source project called TextWorld. TextWorld is an extensible Python framework for generating text-based games. Reinforcement learning researchers can use TextWorld to train and test AI agents in skills such as language understanding, affordance extraction, memory and planning, exploration and more. Researchers can study these in the context of generalization and transfer learning. TextWorld further runs existing text-based games, like the legendary Zork, for evaluating how well AI agents perform in complex, human-designed settings.

Figure 1 – Enter the world of TextWorld. Get the code at aka.ms/textworld.

Text-based games – also known as interactive fiction or adventure games – are games in which the play environment and the player’s interactions with it are represented solely or primarily via text. As players move through the game world, they observe textual descriptions of their surroundings (typically divided into discrete ‘rooms’), what objects are nearby, and any other pertinent information. Players issue text commands to an interpreter to manipulate objects, other characters in the game, or themselves. After each command, the game usually provides some feedback to inform players how that command altered the game environment, if at all. A typical text-based game poses a series of puzzles to solve, treasures to collect, and locations to reach. Goals and waypoints may be specified explicitly or may have to be inferred from cues.

Figure 2 – An example game from TextWorld with a house-based theme.

Text-based games couple the freedom to explore a defined space with the restrictions of a parser and a game world designed to respond positively to a relatively small set of textual commands. An agent that can competently navigate a text-based game must not only generate coherent textual commands but also issue the right commands in the right order, with few or no mistakes along the way. Text-based games encourage experimentation, and successful playthroughs typically involve multiple losses and in-game “deaths.” Close observation, creative interpretation of the text the game provides, and a generous supply of common sense are also integral to winning. The relatively simple obstacles present in a TextWorld game serve as an introduction to the basic challenges posed by text-based games. In TextWorld, an agent needs to learn how to observe, experiment, fail, and learn from failure.

TextWorld has two main components: a game generator and a game engine. The game generator converts high-level game specifications, such as number of rooms, number of objects, game length, and winning conditions, into executable game source code in the Inform 7 language. The game engine is a simple inference machine that ensures each step of the generated game is valid, using simple algorithms such as one-step forward and backward chaining.

Figure 3 – An overview of the TextWorld architecture.
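
To make the workflow concrete, here is a minimal sketch of generating a game with the bundled command-line tool and then driving it from Python with a placeholder agent loop. It assumes the beta’s gym-style interface (textworld.start, env.reset, env.step); exact names, options, and the sample game path are illustrative and may differ between releases.

    import random
    import textworld

    # A game can be generated ahead of time with the bundled CLI, for example:
    #   tw-make custom --world-size 5 --nb-objects 10 --quest-length 5 --output games/demo.ulx
    env = textworld.start("games/demo.ulx")  # the same call can load existing games such as Zork
    game_state = env.reset()                 # initial textual observation

    done, score, steps = False, 0, 0
    while not done and steps < 100:
        # A learning agent would condition on game_state.feedback (the text the game prints);
        # this placeholder simply samples from a tiny fixed command set.
        command = random.choice(["look", "inventory", "go north", "open door", "take key"])
        game_state, reward, done = env.step(command)
        score += reward
        steps += 1

    print("Final score:", score)

A reinforcement learning agent would replace the random choice with a policy that maps the observed text to the next command, which is exactly the language-understanding and planning problem TextWorld is designed to expose.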

“One reason I’m excited about TextWorld is the way it combines reinforcement learning with natural language,” said Geoff Gordon, Principal Research Manager at Microsoft Research Montreal. “These two technologies are both really important, but they don’t fit together that well yet. TextWorld will push researchers to make them work in combination.” Gordon pointed out that reinforcement learning has had a number of high-profile successes recently (like Go or Ms. Pac-Man), but in all of these cases the agent has fairly simple observations and actions (for example, screen images and joystick positions in Ms. Pac-Man). In TextWorld, the agent has to both read and produce natural language, which has an entirely different and, in many cases, more complicated structure.

“I’m excited to see how researchers deal with this added complexity,” said Gordon.

Microsoft Research Montreal specializes in state-of-the-art research in machine reading comprehension, dialogue, reinforcement learning, and FATE (Fairness, Accountability, Transparency, and Ethics in AI). The lab was founded in 2015 as Maluuba and acquired by Microsoft in 2017. For more information, check out Microsoft Research Montreal.

This release of TextWorld is a beta, and we encourage as much feedback on the framework as possible from fellow researchers around the world. You can send your feedback and questions to textworld@microsoft.com. For more information and to get the code, check out TextWorld and our related publications, TextWorld: A Learning Environment for Text-based Games and Counting to Explore and Generalize in Text-based Games. Thank you!


Dell debuts ‘world’s most powerful 1U rack workstation’

Dell delivers compact solutions that pack a punch, including a 1U rack workstation and towers starting at $649.

Companies of all sizes and budgets looking for powerful, affordable, compact industry-leading workstations now have new choices for their needs from Dell, including the world’s most powerful 1U rack workstation.¹

The Dell Precision 3930 Rack delivers powerful performance in a compact industrial footprint. Its 1U rack height delivers better rack density and extended operating temperatures, while features such as its short depth, dust filters, and legacy ports allow it to integrate seamlessly into complex medical imaging and industrial automation solutions.

Other features include:

  • The rack workstation provides up to 64 GB of 2666MHz DDR4 memory, thanks to the introduction of Intel Xeon E and 8th Generation Intel Core processors.
  • The Intel Xeon E processor supports Error Correcting Code (ECC) for increased reliability.
  • The rack workstation offers best-in-class workstation performance, the flexibility of up to 250W of double-wide GPUs, and scalability with up to 24 TB of storage.
  • With three PCIe slots and an optional PCI slot, this workstation can tackle complex tasks with ease.
  • A range of NVIDIA Quadro professional GPUs is available. With the Quadro P6000, users benefit from 24 GB of GDDR5X memory and powerful ultra-high-end graphics performance.
  • In addition, customers have the option to choose AMD Radeon Pro graphics.

If your company is looking for versatile, secure, and fast remote 1:1 user access, you can add optional Teradici PCoIP technology. The rack workstation effortlessly integrates into the datacenter, which helps reduce clutter at your desk.

A smaller footprint that doesn’t skimp on performance

Going small can lead to big things, so Dell has built these new entry-level workstations to fuel the future of innovation across engineering design, science, mathematics, and other data- and graphics-intensive fields. Running a highly powerful machine no longer requires having a large work space or a large budget, making this level of performance available to many companies and workers for the first time.

“Customers across Engineering & Manufacturing, Media & Entertainment, and beyond have come to rely on Dell workstations to deliver the highest performing systems for their critical workloads. But as we enter the next era of workstations, the conversation is accelerating to immersive workflows utilizing even smaller footprints. Dell is leading the way in this evolution with these new entry-level workstations designed to deliver the ultimate in performance with a substantially smaller footprint,” says Rahul Tikoo, vice president and general manager of Dell Precision. “When access to leading technology improves, innovation flourishes. Sometimes something as simple as a smaller form factor can unleash new ideas and capabilities that have the power to reshape an industry.”

The Dell Precision 3630 Tower is 23 percent smaller² than the previous generation with more expandability, so workers can get the precise solution they need regardless of workspace constraints. It features a range of easy-to-reach ports that make it possible to connect to external data sources, storage devices, and more. It offers scalable storage featuring SATA and PCIe NVMe SSDs, which can be configured for up to 14 TB with RAID support.

As workstation users often create intellectual property, Dell will also offer an optional Smart Card (CAC/PIV) reader to make secure data management easier.

If you’re interested in creating or enjoying VR experiences and other resource-intensive work, this workstation is a good choice, thanks to 8th Generation Intel Core and new professional-grade Xeon E processors, with up to 64 GB of memory at speeds up to 2666MHz. It also offers up to 225W of NVIDIA Quadro and AMD Radeon Pro graphics support.

The new Dell Precision 3430 Small Form Factor Tower is a great fit for many workstation users, offering many of the same benefits as the Precision 3630, but in an even smaller form factor with up to 55W of graphics support. It’s also expandable with up to 6 TB of storage with RAID support.

Dell also introduced support for Intel Core X-series processors in addition to the Intel Xeon W processor options already available on the Dell Precision 5820 Tower. These new processor options bring the enhanced performance and reliability of a workstation to a more affordable price point for customers.

Adding Intel Optane memory keeps responsiveness high and high-capacity storage costs lower on all these new Dell Precision 3000 series workstations. Customers can expect the same build quality and reliability of the Dell Precision line.

Available now: The Dell Precision 3430 Small Form Factor Tower and the Dell Precision 3630 Tower (both starting at $649) and the Dell Precision 5820 Tower workstation.

Available worldwide July 26, 2018: The Dell Precision 3930 Rack, which starts at $899.

¹ When equipped with Intel Xeon E-2186G processor, available 64 GB of 2666MHz memory capacity, NVIDIA P6000 graphics and 2x M.2 PCIe storage. Based on Dell internal analysis of competitive workstation products as of July 2018.

² Gen-over-Gen claim based on internal testing, July 2018.


Facial recognition technology: The need for public regulation and corporate responsibility

All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.

Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses. In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act.

We’ve set out below steps that we are taking, and recommendations we have for government regulation.

First, some context

Facial recognition technology has been advancing rapidly over the past decade. If you’ve ever seen a suggestion on Facebook or another social media platform to tag a face with a suggested name, you’ve seen facial recognition at work. A wide variety of tech companies, Microsoft included, have utilized this technology over the past several years to turn the time-consuming work of cataloging photos into something both instantaneous and useful.

So, what is changing now? In part it’s the ability of computer vision to get better and faster in recognizing people’s faces. In part this improvement reflects better cameras, sensors and machine learning capabilities. It also reflects the advent of larger and larger datasets as more images of people are stored online. This improvement also reflects the ability to use the cloud to connect all this data and facial recognition technology with live cameras that capture images of people’s faces and seek to identify them – in more places and in real time.

Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives. This means the potential uses of facial recognition are myriad. At an elementary level, you might use it to catalog and search your photos, but that’s just the beginning. Some uses are already improving security for computer users, like recognizing your face instead of requiring a password to access many Windows laptops or iPhones, and in the future a device like an automated teller machine.

Some emerging uses are both positive and potentially even profound. Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

But other potential applications are more sobering. Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

Perhaps as much as any advance, facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?

The issues become even more complicated when we add the fact that facial recognition is advancing quickly but remains far from perfect. As reported widely in recent months, biases have been found in the performance of several fielded face recognition technologies. The technologies worked more accurately for white men than for white women and were more accurate in identifying persons with lighter complexions than people of color. Researchers across the tech sector are working overtime to address these challenges and significant progress is being made. But as important research has demonstrated, deficiencies remain. The relative immaturity of the technology is making the broader public questions even more pressing.

Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically has some rate of error even when it operates in an unbiased way. And the issues relating to facial recognition go well beyond questions of bias themselves, raising critical questions about our fundamental freedoms.

Politics meets Silicon Valley

In recent weeks, the politics of the United States have become more intertwined with these technology developments on the West Coast. One week in the middle of June put the issues raised by facial recognition technology in bold relief for me and other company leaders at Microsoft. As the country was transfixed by the controversy surrounding the separation of immigrant children from their families at the southern border, a tweet about a marketing blog Microsoft published in January quickly blew up on social media and sparked vigorous debate. The blog had discussed a contract with the U.S. Immigration and Customs Enforcement, or ICE, and said that Microsoft had passed a high security threshold; it included a sentence about the potential for ICE to use facial recognition.

We’ve since confirmed that the contract in question isn’t being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected. The work under the contract instead is supporting legacy email, calendar, messaging and document management workloads. This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world. Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE.

The ensuing discussion has illuminated broader questions that are rippling across the tech sector. These questions are not unique to Microsoft. They surfaced earlier this year at Google and other tech companies. In recent weeks, a group of Amazon employees has objected to its contract with ICE, while reiterating concerns raised by the American Civil Liberties Union (ACLU) about law enforcement use of facial recognition technology. And Salesforce employees have raised the same issues related to immigration authorities and these agencies’ use of their products. Demands increasingly are surfacing for tech companies to limit the way government agencies use facial recognition and other technology.

These issues are not going to go away. They reflect the rapidly expanding capabilities of new technologies that increasingly will define the decade ahead. Facial recognition is the technology of the moment, but it’s apparent that other new technologies will raise similar issues in the future. This makes it even more important that we use this moment to get the direction right.

The need for government regulation

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.

Such an approach is also likely to be far more effective in meeting public goals. After all, even if one or several tech companies alter their practices, problems will remain if others do not. The competitive dynamics between American tech companies – let alone between companies from different countries – will likely enable governments to keep purchasing and using new technology in ways the public may find unacceptable in the absence of a common regulatory framework.

It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. The auto industry spent decades in the 20th century resisting calls for regulation, but today there is broad appreciation of the essential role that regulations have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, foods and pharmaceutical products. There will always be debates about the details, and the details matter greatly. But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.

That’s why Microsoft called for national privacy legislation for the United States in 2005 and why we’ve supported the General Data Protection Regulation in the European Union. Consumers will have more confidence in the way companies use their sensitive personal information if there are clear rules of the road for everyone to follow. While the new issues relating to facial recognition go beyond privacy, we believe the analogy is apt.

It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime. Governments may monitor the exercise of political and other public activities in ways that conflict with longstanding expectations in democratic societies, chilling citizens’ willingness to turn out for political events and undermining our core freedoms of assembly and expression. Similarly, companies may use facial recognition to make decisions without human intervention that affect our eligibility for credit, jobs or purchases. All these scenarios raise important questions of privacy, free speech, freedom of association and even life and liberty.

So what issues should be addressed through government regulation? That’s one of the most important initial questions to address. As a starting point, we believe governments should consider the following issues, among others:

  • Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?
  • Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
  • What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
  • Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
  • Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
  • Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
  • Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
  • Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?

This list, which is by no means exhaustive, illustrates the breadth and importance of the issues involved.

Another important initial question is how governments should go about addressing these questions. In the United States, this is a national issue that requires national leadership by our elected representatives. This means leadership by Congress. While some question whether members of Congress have sufficient expertise on technology issues, at Microsoft we believe Congress can address these issues effectively. The key is for lawmakers to use the right mechanisms to gather expert advice to inform their decision making.

On numerous occasions, Congress has appointed bipartisan expert commissions to assess complicated issues and submit recommendations for potential legislative action. As the Congressional Research Service (CRS) noted last year, these commissions are “formal groups established to provide independent advice; make recommendations for changes in public policy; study or investigate a particular problem, issue, or event; or perform a duty.” Congress’ use of the bipartisan “9/11 Commission” played a critical role in assessing that national tragedy. Congress has created 28 such commissions over the past decade, assessing issues ranging from protecting children in disasters to the future of the army.

We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States. This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology. The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch.

Issues relating to facial recognition go well beyond the borders of the United States. The questions listed above – and no doubt others – will become important public policy issues around the world, requiring active engagement by governments, academics, tech companies and civil society internationally. Given the global nature of the technology itself, there likely will also be a growing need for interaction and even coordination between national regulators across borders.

Tech sector responsibilities

The need for government leadership does not absolve technology companies of our own ethical responsibilities. Given the importance and breadth of facial recognition issues, we at Microsoft and throughout the tech sector have a responsibility to ensure that this technology is human-centered and developed in a manner consistent with broadly held societal values. We need to recognize that many of these issues are new and no one has all the answers. We still have work to do to identify all the questions. In short, we all have a lot to learn. Nonetheless, some initial conclusions are clear.

First, it’s incumbent upon those of us in the tech sector to continue the important work needed to reduce the risk of bias in facial recognition technology. No one benefits from the deployment of immature facial recognition technology that has greater error rates for women and people of color. That’s why our researchers and developers are working to accelerate progress in this area, and why this is one of the priorities for Microsoft’s Aether Committee, which provides advice on several AI ethics issues inside the company.

As we pursue this work, we recognize the importance of collaborating with the academic community and other companies, including in groups such as the Partnership for AI. And we appreciate the importance not only of creating data sets that reflect the diversity of the world, but also of ensuring that we have a diverse and well-trained workforce with the capabilities needed to be effective in reducing the risk of bias. This requires ongoing and urgent work by Microsoft and other tech companies to promote greater diversity and inclusion in our workforce and to invest in a broader and more diverse pipeline of talent for the future. We’re focused on making progress in these areas, but we recognize that we have much more work to do.

Second, and more broadly, we recognize the need to take a principled and transparent approach in the development and application of facial recognition technology. We are undertaking work to assess and develop additional principles to govern our facial recognition work. We’ve used a similar approach in other instances, including trust principles we adopted in 2015 for our cloud services, supported in part by transparency centers and other facilities around the world to enable the inspection of our source code and other data. Similarly, earlier this year we published an overall set of ethical principles we are using in the development of all our AI capabilities.

As we move forward, we’re committed to establishing a transparent set of principles for facial recognition technology that we will share with the public. In part this will build on our broader commitment to design our products and operate our services consistent with the UN’s Guiding Principles on Business and Human Rights. These were adopted in 2011 and have emerged as the global standard for ensuring corporate respect for human rights. We periodically conduct Human Rights Impact Assessments (HRIAs) of our products and services, and we’re currently pursuing this work with respect to our AI technologies.

We’ll pursue this work in part based on the expertise and input of our employees, but we also recognize the importance of active external listening and engagement. We’ll therefore also sit down with and listen to a variety of external stakeholders, including customers, academics and human rights and privacy groups that are focusing on the specific issues involved in facial recognition. This work will take up to a few months, but we’re committed to completing it expeditiously.

We recognize that one of the difficult issues we’ll need to address is the distinction between the development of our facial recognition services and the use of our broader IT infrastructure by third parties that build and deploy their own facial recognition technology. The use of infrastructure and off-the-shelf capabilities by third parties is more difficult for a company to regulate, compared to the use of a complete service or the work of a firm’s own consultants, which can readily be managed more tightly. While nuanced, these distinctions will need consideration.

Third, in the meantime we recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology. Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. “Move fast and break things” became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.

For this reason, based in part on input from the Aether Committee, we’re moving more deliberately with our facial recognition consulting and contracting work. This has led us to turn down some customer requests for deployments of this service where we’ve concluded that there are greater human rights risks. As we’re developing more permanent principles, we will continue to monitor the potential uses of our facial recognition technologies with a view to assessing and avoiding human rights abuses.

In a similar vein, we’re committed to sharing more information with customers who are contemplating the potential deployment of facial recognition technology. We will continue work to provide customers and others with information that will help them understand more deeply both the current capabilities and limitations of facial recognition technology, how these features can and should be used, and the risks of improper uses.

Fourth, we’re committed to participating in a full and responsible manner in public policy deliberations relating to facial recognition. Government officials, civil liberties organizations and the broader public can only appreciate the full implications of new technical trends if those of us who create this technology do a good job of sharing information with them. Especially given our urging of governments to act, it’s incumbent on us to step forward to share this information. As we do so, we’re committed to serving as a voice for the ethical use of facial recognition and other new technologies, both in the United States and around the world.

We recognize that there may be additional responsibilities that companies in the tech sector ought to assume. We provide the foregoing list not with the sense that it is necessarily complete, but in the hope that it can provide a good start in helping to move forward.

Some concluding thoughts

Finally, as we think about the evolving range of technology uses, we think it’s important to acknowledge that the future is not simple. A government agency that is doing something objectionable today may do something that is laudable tomorrow. We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.

Even at a time of increasingly polarized politics, we have faith in our fundamental democratic institutions and values. We have elected representatives in Congress that have the tools needed to assess this new technology, with all its ramifications. We benefit from the checks and balances of a Constitution that has seen us from the age of candles to an era of artificial intelligence. As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law. Given the global sweep of this technology, we’ll need to address these issues internationally, in no small part by working with and relying upon many other respected voices. We will all need to work together, and we look forward to doing our part.



4 new ways Microsoft 365 takes the work out of teamwork—including free version of Microsoft Teams

It’s been one year since we introduced Microsoft 365, a holistic workplace solution that empowers everyone to work together in a secure way. In that time, Microsoft 365 seats have grown by more than 100 percent, building on the more than 135 million commercial monthly Office 365 users, 200 million Windows 10 commercial devices in use, and over 65 million seats of Enterprise Mobility + Security.

This momentum is driven by customers—in every industry—who are transforming their organizations to enable high performance from a workforce that is more diverse, distributed, and mobile than ever before. Microsoft 365 is designed to empower every type of worker—whether on the front lines of a business, managing a small team, or leading an entire organization.

Today, we are introducing four new ways Microsoft 365 connects people across their organization and improves collaboration habits, including extending the power of Microsoft Teams and new AI-infused capabilities in Microsoft 365.

1—Try Microsoft Teams, now available in a free version

To address the growing collaboration needs of our customers, last year we introduced Microsoft Teams, a powerful hub for teamwork that brings together chat, meetings, calling, files, and apps into a shared workspace in Microsoft 365. Now, more than 200,000 businesses across 181 markets use Teams to collaborate and get work done.

Beginning today, Teams is available in a free version worldwide in 40 languages. Whether you’re a freelancer, a small business owner, or part of a team inside a large organization, you can start using Teams today.

The free version includes the following for up to 300 people:

  • Unlimited chat messages and search.
  • Built-in audio and video calling for individuals, groups, and full team meetups.
  • 10 GB of team file storage plus an additional 2 GB per person for personal storage.
  • Integrated, real-time content creation with Office Online apps, including built-in Word, Excel, PowerPoint, and OneNote.
  • Unlimited app integrations with 140+ business apps to choose from—including Adobe, Evernote, and Trello.
  • Ability to communicate and collaborate with anyone inside or outside your organization, backed by Microsoft’s secure, global infrastructure.

This new offering provides a powerful introduction to Microsoft 365. Teams in Microsoft 365 includes everything in the free version plus additional storage, enterprise security, and compliance, and it can be used for your whole organization, regardless of size.

As we advance our mission to empower every person and organization on the planet to achieve more, what’s most exciting are the stories of customers taking on big projects with a small workforce, such as The Hustle Media Company, which helps movers, shakers, and doers make their dent in the world. Its popular daily email provides its audience with the tech and business news they need to know.

“As a media company that nearly quadrupled in size over the last year, it became apparent we needed a solution to connect all of The Hustle’s offices. As previous Slack users, we found that Microsoft Teams has all the features that other chat-based apps bring, but the teamwork hub allows everything to live in one place.”
—Adam Ryan, vice president of Media at The Hustle

Or, take it from Urban Agriculture Company, a small business specializing in organic, easy-to-use grow kits of vegetables, flowers, and herbs. After landing on Oprah’s Favorite Things list, founder Chad Corzine turned to Microsoft 365 Business and Teams to manage communication among his rapidly growing departments, onboard employees, and protect customer data.


2—Use new intelligent event capabilities in Microsoft 365

Today, we’re also introducing new capabilities that allow anyone in your organization to create live and on-demand events in Microsoft 365. Events can be viewed in real-time or on-demand, with high-definition video and interactive discussion.

AI-powered services enhance the on-demand experience with:

  • A speaker timeline, which uses facial detection to identify who is talking, so you can easily jump to a particular speaker in the event.
  • Speech-to-text transcription, timecoding, and transcript search, so you can quickly find moments that matter in a recording.
  • Closed captions to make the event more accessible to all.

Events can be as simple or as sophisticated as you prefer. You can use webcams, content, and screen sharing for informal presentations, or stream a studio-quality production for more formal events.

3—Leverage analytics to build better collaboration habits

We’re rolling out the preview of a new Workplace Analytics solution, which uses collaboration insights from the Microsoft Graph to help teams run efficient meetings, create time for focused work, and respect work/life boundaries. Organizations can use aggregate data in Workplace Analytics to identify opportunities for improving collaboration, then share insights and suggest habits to specific teams using MyAnalytics.

We’re also rolling out nudges, powered by MyAnalytics in Microsoft 365, which deliver habit-changing tips in Outlook, such as flagging that you’re emailing coworkers after hours or suggesting you book focused work time for yourself.

4—Work with others on a shared digital canvas with Microsoft Whiteboard

Microsoft Whiteboard is now generally available for Windows 10, coming soon to iOS, and available in preview on the web. Whether meeting in person or virtually, people need the ability to collaborate in real time. The new Whiteboard application enables people to ideate, iterate, and work together both in person and remotely, across multiple devices. Using pen, touch, and keyboard, you can jot down notes; create tables, shapes, and freeform drawings; and search for and insert images from the web.

Get started

Whether you’re managing a new project or creating your own business, it helps to have your team behind you to brainstorm ideas, tackle the work together, and have some fun along the way. Take your teamwork to the next level and start using Teams today.

Learn more about how Microsoft 365 enables teamwork.


WIRED: ‘How artificial intelligence could prevent natural disasters’

On May 27, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. The National Weather Service put the probability of such a storm at once in 1,000 years. Yet, “it’s the second time it’s happened in the last three years,” says Jeff Allenby, director of conservation technology for Chesapeake Conservancy, an environmental group.

Floods are nothing new in Ellicott City, located where two tributaries join the Patapsco River. But Allenby says the floods are getting worse, as development covers what used to be the “natural sponge of a forest” with paved surfaces, rooftops, and lawns. Just days before the May 27 flood, the US Department of Homeland Security selected Ellicott City—on the basis of its 2016 flood—for a pilot program to deliver better flood warnings to residents via automated sensors.

Recently, Allenby developed another tool to help predict, plan, and prepare for future floods: a first-of-its-kind, high-resolution map showing what’s on the ground—buildings, pavement, trees, lawns—across 100,000 square miles from upstate New York to southern Virginia that drain into Chesapeake Bay. The map, generated from aerial imagery with the help of artificial intelligence, shows objects as small as 3 feet square, roughly 1,000 times more precise than the maps that flood planners previously used. To understand the difference, imagine trying to identify an Uber driver on a crowded city street using a map that can only display objects the size of a Walmart.

Creating the map consumed a year and cost $3.5 million, with help from Microsoft and the University of Vermont. Allenby’s team pored over aerial imagery, road maps, and zoning charts to establish rules, classify objects, and scrub errors. “As soon as we finished the first data set,” Allenby says, “everyone started asking ‘when are you going to do it again?’” to keep the map fresh.

Enter AI. Microsoft helped Allenby’s team train its AI for Earth algorithms to identify objects on its own. Even with a robust data set, training the algorithms wasn’t easy. The effort required regular “pixel peeping”—manually zooming in on objects to verify and amend the automated results. With each pass, the algorithm improved its ability to recognize waterways, trees, fields, roads, and buildings. As relevant new data become available, Chesapeake Conservancy plans to use its AI to refresh the map more frequently and easily than the initial labor-intensive multi-million dollar effort.
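
At its core, the task the Conservancy describes is per-pixel land-cover classification, or semantic segmentation. The sketch below is purely illustrative and is not the Chesapeake Conservancy or AI for Earth pipeline; the model choice, class list, and tile sizes are assumptions, shown only to make the idea of training on labeled aerial tiles concrete.

    import torch
    import torch.nn as nn
    import torchvision

    # Hypothetical land-cover classes of the kind described in the article.
    CLASSES = ["water", "tree_canopy", "low_vegetation", "barren", "roads", "buildings"]

    # Generic off-the-shelf segmentation network; the real project used its own models.
    model = torchvision.models.segmentation.fcn_resnet50(num_classes=len(CLASSES))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(image_tiles, label_tiles):
        """One training pass over a batch of aerial tiles with per-pixel class labels."""
        model.train()
        optimizer.zero_grad()
        logits = model(image_tiles)["out"]   # (N, C, H, W) per-pixel class scores
        loss = loss_fn(logits, label_tiles)  # labels: (N, H, W) integer class ids
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch: four 256x256 RGB tiles with corresponding label masks.
    images = torch.randn(4, 3, 256, 256)
    labels = torch.randint(0, len(CLASSES), (4, 256, 256))
    print(train_step(images, labels))

In this framing, the “pixel peeping” described above amounts to correcting label masks where the model’s predictions disagree with what a human sees, then retraining on the amended labels.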

Now, Microsoft is making the tool available more widely. For $42, anyone can run 200 million aerial images through Microsoft’s AI for Earth platform and generate a high-resolution land-cover map of the entire US in 10 minutes. The results won’t be as precise in other parts of the country where the algorithm has not been trained on local conditions—a redwood tree or saguaro cactus looks nothing like a willow oak.

A map of land use around Ellicott City, Maryland, built with the help of artificial intelligence (left) offers far more detail than its predecessor (right). Image: Chesapeake Conservancy

To a society obsessed with location and mapping services—where the physical world unfolds in the digital every day—the accomplishment may not seem groundbreaking. Until recently, though, neither the high-resolution data nor the AI smarts existed to make such maps cost-effective for environmental purposes, especially for nonprofit conservation organizations. With Microsoft’s offer, AI on a planetary scale is about to become a commodity.

Detailed, up-to-date information is paramount when it comes to designing stormwater management systems, Allenby says. “Looking at these systems with the power of AI can start to show when a watershed” is more likely to flood, he says. The Center for Watershed Protection, a nonprofit based in Ellicott City, reported in a 2001 study that when 10 percent of natural land gets developed, stream health declines and it begins to lose its ability to manage runoff. At 20 percent, runoff doubles, compared with undeveloped land. Allenby notes that paved surfaces and rooftops in Ellicott City reached 19 percent in recent years.

Allenby says the more detailed map will enable planners to keep up with land-use changes and plan drainage systems that can accommodate more water. Eventually, the map will offer “live dashboards” and automated alerts to serve as a warning system when new development threatens to overwhelm stormwater management capacity. The Urban Forestry Administration in Washington, DC, has used the new map to determine where to plant trees by searching the district for areas without tree cover where standing water accumulates. Earlier this year, Chesapeake Conservancy began working with conservation groups in Iowa and Arizona to develop training sets for the algorithms specific to those landscapes.

The combination of high-resolution imaging and sensor technologies, AI, and cloud computing is giving conservationists deeper insight into the health of the planet. The result is a near-real-time readout of Earth’s vital signs, firing off alerts and alarms whenever the ailing patient takes a turn for the worse.

Others are applying these techniques around the world. Global Forest Watch (GFW), a conservation project established by World Resources Institute, began offering monthly and weekly deforestation alerts in 2016, powered by AI algorithms developed by the University of Maryland.1 The algorithms analyze satellite imagery as it’s refreshed to detect “patterns that may indicate impending deforestation,” according to the organization’s website. Using GFW’s mobile app, Forest Watcher, volunteers and forest rangers take to the trees to verify the automated alerts in places like the Leuser Ecosystem in Indonesia, which calls itself “the last place on Earth where orangutans, rhinos, elephants and tigers are found together in the wild.”

The new conservation formula is also spilling into the oceans. On June 4, Paul Allen Philanthropies revealed a partnership with the Carnegie Institution of Science, the University of Queensland, the Hawaii Institute of Marine Biology, and the private satellite company Planet to map all of the world’s coral reefs by 2020. As Andrew Zolli, a Planet vice president, explains: For the first time in history, “new tools are up to the [planetary] level of the problem.”

By the end of 2017, Planet had deployed nearly 200 satellites, forming a necklace around the globe that images the entire Earth every day down to 3-meter resolution. That’s trillions of pixels raining down daily, which could never be transformed into useful maps without AI algorithms trained to interpret them. The partnership leverages the Carnegie Institution’s computer-vision tools and the University of Queensland’s data on local conditions, including coral, algae, sand, and rocks.

“Today, we have no idea of the geography, rate, and frequency of global bleaching events,” explains Greg Asner, a scientist at Carnegie’s Department of Global Ecology. Based on what is known, scientists project that more than 90 percent of the world’s reefs, which sustain 25 percent of marine life, will be extinct by 2050. Lauren Kickham, impact director for Paul Allen Philanthropies, expects the partnership will bring the world’s coral crisis into clear view and enable scientists to track reef health on a daily basis.

In a separate coral reef project, also being conducted with Planet and the Carnegie Institution, The Nature Conservancy is leveraging Carnegie’s computer vision AI to develop a high-resolution map of the shallow waters of the Caribbean basin. “By learning how these systems live and how they adapt, maybe not our generation, but maybe the next will be able to bring them back,” says Luis Solorzano, The Nature Conservancy’s Caribbean Coral Reef project lead.

Mapping services are hardly new to conservation. Geographic Information Systems have been a staple in the conservation toolkit for years, providing interactive maps to facilitate environmental monitoring, regulatory enforcement, and preservation planning. But, mapping services are only as good as the underlying data, which can be expensive to acquire and maintain. As a result, many conservationists resort to what’s freely available, like the 30-meter-resolution images supplied by the United States Geological Survey.

Ellicott City and the Chesapeake watershed demonstrate the challenges of responding to a changing climate and the impacts of human activity. Since the 1950s, the bay’s oyster reefs have declined by more than 80 percent. Biologists discovered one of the planet’s first marine dead zones in Chesapeake Bay in the 1970s. Blue crab populations plunged in the 1990s. The sea level has risen more than a foot since 1895, and, according to a 2017 National Oceanic and Atmospheric Administration (NOAA) report, may rise as much as 6 feet by the end of this century.

Allenby joined the Chesapeake Conservancy in 2012 when technology companies provided a grant to explore the ways in which technology could help inform conservation. Allenby sought ways to deploy technology to help land managers, like those in Ellicott City, improve upon the dated 30-meter-resolution images that FEMA also uses for flood planning and preparation.

In 2015, Allenby connected with the University of Vermont—nationally recognized experts in generating county-level high-resolution land-cover maps—seeking a partner on a bigger project. They secured funding from a consortium of state and local governments and nonprofit groups in 2016. The year-long effort involved integrating data from such disparate sources as aerial imagery, road maps, and zoning charts. As the data set came together, a Conservancy board member introduced Allenby to Microsoft, which was eager to demonstrate how its AI and cloud computing could be leveraged to support conservation.

“It’s been the frustration of my life to see what we’re capable of, yet how far behind we are in understanding basic information about the health of our planet,” says Lucas Joppa, Microsoft’s chief environmental scientist, who oversees AI for Earth. “And to see that those individuals on the front line solving society’s problems, like environmental sustainability, are often in organizations with the least resources to take advantage of the technologies that are being put out there.”

The ultimate question, however, is whether the diagnoses offered by these AI-powered land-cover maps will arrive in time to help cure the problems caused by man.

1 CORRECTION, July 11, 1:10PM: Deforestation alerts from Global Forest Watch are powered by algorithms developed by the University of Maryland. An earlier version of this article incorrectly said the algorithms were developed by Orbital Insight.




Remembering the first-ever Xbox Gaming Tournament hosted at 2018 Special Olympic USA Games

The Xbox Gaming Tournament at the 2018 Special Olympics USA Games was a first in the world of competitive video gaming. Xbox and Special Olympics partnered to facilitate the pilot video gaming tournament at a Special Olympics event, featuring Forza Motorsport 7 on July 2nd at the USA Games in Seattle, Washington. Players across the nation came together throughout the month of April to compete in prelims for a spot in the final tournament and a chance to win a custom-wrapped Xbox One X prize.

The Xbox Gaming Tournament originated from a hackathon during last year’s Microsoft One Week, where employees spend an entire week working on passion projects. Stephen Smith, Karen Woessner Smith, and Thomas Labuzienski came together to map out what a competitive video gaming tournament at Special Olympics would look like.

The goal of this pilot tournament was to make esports more accessible by lowering the high barrier to entry that can leave competitors feeling intimidated. Xbox and Special Olympics put together a smaller-scale tournament with high-quality production value, in hopes this model can be leveraged at other venues to empower all gamers who want to compete and bring the joy of esports to more people.

The athletes were competing for a chance to win a 2018 Special Olympics USA Games custom edition Xbox One X, of which there are only TWO in the world!

Unified Teams and Prelims

The competitors played in Unified Teams, a model within Special Olympics where teams consist of an athlete with an intellectual disability and one without. During the prelims, athletes across the country competed on the same tracks with the same cars, and those with the fastest combined times advanced to the finals. After qualifying rounds in four states across the nation, four teams from Utah, three teams from Washington, and one team from Connecticut qualified for the 2018 USA Games.

The athletes competed in Forza Motorsport 7, a racing video game published by Microsoft Studios; the game was chosen by the team for its multiple accessibility features and existing esports presence. Speaking of esports, members of our very own Forza Racing Championship team showed up in full force to manage the pilot competitive gaming tournament to professional standards and maintain competitive integrity.

Tournament Semifinals Bracket

To kick off the tournament, we were incredibly fortunate to have the Chairman of Special Olympics, Tim Shriver, speak about his excitement for the tournament being part of this year’s USA Games, as well as his intention to take gaming global to the Special Olympics World Games. His message on evolving our thinking to focus more on everyone’s different abilities really drove home the USA Games’ message of inclusion.

For the semifinals, all eight teams competed on the same tracks with the same cars in two separate groups, seeded by their preliminary times. A diverse array of courses and cars tested the athletes’ capabilities, whether managing twists and turns or adapting to the different handling of different cars. Points were awarded in each race and tallied across the three races to determine final scores. The top two teams from each semifinal bracket proceeded to the final four to compete for the ultimate prize, the custom 2018 Special Olympics USA Games Xbox One X console.

Finals

With four teams remaining, three from Utah and one from Washington, it came down to the final seconds in an action-packed, nail-biting finish. The crowd erupted into applause as the leaders of the pack crossed the finish line. Team Dempsey, aka “Team Shake and Bake” from Washington state, was announced as the winner of the first-ever tournament. The team, Timothy Dempsey and Nicholas Rasmussen, didn’t know each other before the preliminary tournament in April. They coordinated schedules as time allowed over the last few months, and that practice paid off as they took home the gold. The friendship and teamwork between the two in this tournament speaks to the success of the Special Olympics Unified Teams program.

Tim and Nicholas were presented with the 2018 Special Olympics USA Games custom consoles by Alan Hartman, the head of Turn 10 Studios, creators of the Forza Motorsport franchise. Asked how he felt, Tim Dempsey summed it up best: he was “speechless.” Seeing this first-of-its-kind video gaming tournament finish with such excitement and a full house of spectators speaks to the future of inclusion and the future of competitive gaming.

The winning team, Nicholas Rasmussen and Tim Dempsey of Team Dempsey, aka “Team Shake and Bake,” with Beth Knox, President and CEO of the 2018 Special Olympics USA Games.

The Next Day…

The following morning, the athletes and other members of the Gaming Special Olympics team were invited to Microsoft Studios, where they toured both 343 Industries and Turn 10 Studios. Alan Hartman personally showed the athletes around the gaming studio and even brought out a couple of McLarens for the athletes to take photos with!

So, What’s Next?

Partnering with Special Olympics to bring the Xbox Gaming Tournament to life was an incredible experience, and we’re thankful for everyone’s hard work and dedication. The tournament was packed to the brim with spectators, and we’re thrilled by the excitement surrounding competitive gaming tournaments. We hope to take the lessons we’ve gained from this pilot tournament and apply them to future opportunities to empower more gamers to compete.

Posted on Leave a comment

OneDrive named again as a leader in Gartner Magic Quadrant for Content Collaboration Platforms

Enabling secure ways to share and collaborate on content with coworkers and colleagues, both inside and outside your organization, is critical to improving productivity and teamwork. According to Gartner, “By 2022, 50 percent of organizations will use collaborative document editing as the standard interaction method for document creation.”1 Microsoft OneDrive makes this a seamless experience, connecting you to all your files on any device while protecting your work from data loss, malicious attacks, and more.

Today, we are honored that Gartner has recognized Microsoft, for the second year in a row, as a leader in the Content Collaboration Platforms Magic Quadrant report. Microsoft placed highest in ability to execute and has made substantial improvements in the completeness of vision over last year’s report. Additionally, Microsoft is recognized as a leader in both the Content Collaboration Platforms and Content Services Platforms Magic Quadrant reports.

Image of the Gartner Magic Quadrant.

We feel this recognition exemplifies our vision and customer commitment to ensure OneDrive provides the best content collaboration capabilities, including:

  • Accessing all your files from any device—Easily get to your personal and shared files from across Office 365. Use your mobile device to capture whiteboards and scan receipts, business cards, and other paper docs.
  • Sharing inside or outside your organization—Simply and securely share files with anyone inside or outside your organization. You can see who has viewed, modified, or shared your files and limit their access as needed.
  • Collaborating with deep Office integration—OneDrive is included with Office 365 and is the only solution that enables you to seamlessly co-author Office documents across the browser, mobile, and desktop apps.
  • Quickly finding files that matter most—Easily get back to your recent and shared files from any device. Discover new files in Office 365 with intelligent recommendations based on who you work with and what they are working on.
  • Protecting your work—With over 100 datacenters worldwide, we offer trusted, enterprise-grade security, leading industry compliance standards, and native protection capabilities such as Data Loss Prevention, eDiscovery, and malicious attack recovery.

We are also proud of the positive feedback from our 135 million monthly active Office 365 commercial users, many of whom have switched from on-premises solutions and other cloud content collaboration platforms to OneDrive. This includes Fortune 500 customers such as MGM Resorts International and Textron, as well as Dimension Data, who are improving data security, meeting global data residency requirements, and reducing third-party licensing costs by moving to OneDrive. Small businesses such as aeronautic manufacturer Jemco and elite tour operator Utah Luxury Tours also benefit from the productivity and mobility of using OneDrive with Office 365. Collectively, our customers have nearly tripled the amount of content stored in OneDrive over the past 12 months and have helped shape the future of the product.

Microsoft has a bold vision to transform content collaboration for the modern workplace, spanning files, dynamic web sites and portals, streaming video, AI, and mixed reality, while reducing costs and improving compliance and security. Learn more about the exciting announcements from our annual SharePoint Virtual Summit.

Last but not least, be sure to visit the OneDrive website and join us at Microsoft Ignite, September 24-28, 2018, where we’ll share what’s next for OneDrive and Office 365 with you and over 20,000 of your peers.

Get your own complimentary copy of the Gartner Content Collaborations Platforms Magic Quadrant report.

1 Gartner “Predicts 2018: Digital Workplace Technologies,” Mike Gotta | Nikos Drakos | Carol Rozwell | Whit Andrews | Monica Basso | Karen A. Hobert | Jack Santos | Stephen Emmott, 08 December 2017

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Posted on Leave a comment

Inside Xbox Episode 5 news recap

Earlier today Inside Xbox Episode 5 aired, continuing to pull back the curtain on Team Xbox to celebrate our games, features, and fans. This episode was full of closer looks at some big upcoming games, including No Man’s Sky, We Happy Few, and Earthfall, as well as the announcement of a huge addition to Xbox Game Pass. So, without further ado, let’s take a closer look at some of the biggest news coming out of this month’s episode of Inside Xbox.

Rocket League and Warhammer: Vermintide 2 are Coming to Xbox Game Pass

The Xbox Game Pass catalog continues to grow this week, thanks to the addition of a couple of awesome titles. First up, the much-loved Rocket League, which blends elements of soccer, racing, and demolition derbies together to create a wonderful whole, hits Xbox Game Pass today. Then, tomorrow, Warhammer: Vermintide 2 brings its mix of over-the-top gore and first-person hacking and slashing to the service.

The Sport White Special Edition Xbox One Controller

Featuring beautiful, clean lines and a snazzy design, the latest addition to the Xbox One controller family is a looker. Inspired by sports and sneakers, the Sport White’s got mint green accents and grey and silver patterns to go along with its fresh white design. If you’re a sneakerhead, you’ll definitely want one of these. You can snag this sporty beauty at the Microsoft Store and other retailers beginning July 31 in the U.S. and Canada, and then worldwide on August 7.

A Closer Look at No Man’s Sky

The highly-anticipated space exploration game No Man’s Sky is hitting Xbox One on July 24, so we had Hello Games founder Sean Murray on to share a bit about how excited the team is to be bringing the game to our consoles. He also showed a new video created by the team that breaks down 11 new features, from freighters to alien sidekicks, added to the game since its initial launch, all of which will be available when the game launches on Xbox One.

We Happy Few Adds a Story Mode

The Inside Xbox team was joined by Guillaume Provost from Compulsion Games, the latest studio to join the Microsoft Studios family. Guillaume showed off We Happy Few’s new story mode for the first time, sharing that you’ll be able to see events in the game from multiple perspectives as you play. This is going to be one wild ride, and we can’t wait to see more when the game releases in August.

Surviving an Alien Invasion in Earthfall

Coming to join us from their studio just up the road in Bellevue, the team from developer Holospark gave us a closer look at the upcoming game Earthfall, which releases this coming Friday, July 13. Earthfall is a four-player co-op shooter that tasks players with surviving an alien invasion, and it looks like a blast. Even better, the guys announced that all maps and additions to the game will be absolutely free to anyone who purchases it. There will also be Mixer integration, so save up that Spark to help your friends!

Seasons Change in Forza Horizon 4

To close out the show, the team was joined by some familiar faces from Playground Games, who came on to give fans a closer look at this highly anticipated (and absolutely gorgeous) Xbox One racing game. The segment led into a special livestream on mixer.com/forzamotorsport, where the team at Playground Games highlighted the summer season, including interviews with the team and community Q&A.

Thanks to everyone who tuned in! We hope you enjoyed the show and we can’t wait to tell you all about next month’s episode in a few weeks.

Posted on Leave a comment

What you need to know before you go to next week’s Microsoft Inspire

Connect with Julie: @juliegolding

Julie Golding

Microsoft Inspire is less than a week away. I hope you are as excited as we all are. It promises to be an incredible week of learning, networking, and celebrating partnership.

In the remaining days before you leave for Las Vegas, Nevada, we suggest you double-check your travel arrangements. It’s also a good time to review your personal conference agenda in MyInspire and confirm that you’ve added the sessions you want to attend by using the session scheduler. We also recommend using the MyInspire meeting scheduler to send meeting requests to other partners or Microsoft employees with whom you most want to connect. There will be over 16,000 attendees at Microsoft Inspire, all with busy schedules, so advance planning is the best way to ensure you don’t miss anyone.

Here are some additional tips and reminders to help you have a productive and enjoyable Microsoft Inspire experience.

Download the MyInspire mobile app

MyInspire is available in a convenient full-featured mobile app. Download the app for either Android or Apple iOS and manage your personal agenda while on the go at the conference. You will also receive updates from the Microsoft Inspire planning team and be able to access helpful maps and floor plans of conference venues. Be sure to add your LinkedIn account information to your profile in the mobile app so you can see which attendees you’re connected with on LinkedIn. To do so, go to the Attendee Directory in the app and select Connections.

Review the Know Before You Go guide

We strongly recommend that you download and save a copy of our “Know Before You Go” guide, the single most comprehensive source of information about everything related to Microsoft Inspire. It covers a wide range of topics, including arrival and check-in, conference badges, transportation, session and event schedules, venue logistics, Wi-Fi connectivity, safety and security, and much more. The guide also includes descriptions of the different session types and information on the shared experiences that will be a big part of this year’s conference, made possible by the first-ever co-location of Microsoft Inspire and Microsoft Ready, the annual Microsoft sales, technical, and marketing fiscal year kick-off. These experiences include the Wednesday Corenote at T-Mobile Arena, where Microsoft CEO Satya Nadella will address a joint audience of partners and Microsoft employees, and our eagerly anticipated One Celebration at the Las Vegas Motor Speedway, where attendees from both events will come together for an evening of food, fun, and entertainment by an internationally renowned eleven-time Grammy Award winner.

Ready, Set, Be Inspired

After months of planning, preparation, and anticipation, we can’t wait to see you in Las Vegas, Nevada, for Microsoft Inspire, where the world will meet to transform business. We’re very excited about the huge range of sessions built around this year’s key themes of Innovation, Partnership, and Leadership, and our amazing line-up of industry-leading speakers. Microsoft Inspire is also the best place to make connections and establish profitable relationships with fellow partners, Microsoft employees, and conference vendors and sponsors. The co-location of Microsoft Inspire with Microsoft Ready will create more opportunities than ever before to make meaningful connections with the global Microsoft community and accelerate the digital transformation and success of our shared customers.

See you soon at Microsoft Inspire!