
Microsoft’s code-mixing project could help computers handle Spanglish

EMNLP

Communication is a large part of who we are as human beings, and today, technology has allowed us to communicate in new ways and with audiences much larger and wider than ever before. But that technology has largely assumed single-language speech, which quite often does not reflect the way people naturally speak. India, like many other parts of the world, is multilingual at a societal level, with most people speaking two or more languages. I speak Bengali, English, and Hindi, as do many of my friends and colleagues. When we talk, we move fluidly between these languages without much thought.

This mixing of words and phrases is referred to as code-mixing or code-switching, and from it, we’ve gained such combinations as Hinglish and Spanglish. More than half of the world’s population speaks two or more languages, and with so many people potentially code-switching, creating technology that can process it is important not only for building useful translation and speech recognition tools, but also for building engaging user interfaces. Microsoft is progressing on that front in exciting ways.

In Project Mélange, we at Microsoft Research India have been building technologies for processing code-mixed speech and text. Through large-scale computational studies, we are also exploring some fascinating linguistic and behavioral questions around code-mixing, such as why and when people code-mix, that are helping us build technology people can relate to. At the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), my colleagues and I have the opportunity to share some of our recent research with our paper “Word Embeddings for Code-Mixed Language Processing.”

A data shortage in code-mixed language

Word embeddings — multidimensional vector representations in which words similar in meaning or used in similar contexts are close to each other — are learnt using deep learning from large language corpora and are valuable in solving a variety of natural language processing tasks with neural techniques. For processing code-mixed language — say, Hinglish — one would ideally need an embedding of words from both Hindi and English in the same space. There are standard methods for obtaining multilingual word embeddings; however, these techniques typically map translation equivalents from the two languages (e.g., school and vidyalay) close to each other. This helps in cross-lingual transfer of models: for instance, a sentiment analysis system trained for English can be transferred to work for Hindi using multilingual embeddings. But it’s not ideal for code-mixed language processing. While school and vidyalay are translation equivalents, in Hinglish, school is far more commonly used than vidyalay, and the two words appear in slightly different contexts. Further, there are grammatical constraints on code-mixing that disallow certain types of direct word substitutions, most notably for verbs in Hinglish. For processing code-mixed language, then, the word embeddings should ideally be learnt from a corpus of code-mixed text.
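To make the idea concrete, here is a toy sketch (not the paper’s method) of learning a shared embedding space directly from code-mixed text: count co-occurrences over a tiny invented Hinglish corpus, weight them with positive pointwise mutual information (PPMI), and factor the matrix with SVD. English and Hindi words land in one space shaped by how they are actually used together in mixed sentences.

```python
import numpy as np
from itertools import combinations

# Tiny invented Hinglish corpus, purely for demonstration.
corpus = [
    "mera school bahut door hai".split(),
    "school ka homework easy hai".split(),
    "homework finish karna hai".split(),
    "mera homework easy hai".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts (window = whole sentence).
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for a, b in combinations(sent, 2):
        C[idx[a], idx[b]] += 1
        C[idx[b], idx[a]] += 1

# Positive pointwise mutual information, then SVD for dense vectors.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore"):
    pmi = np.log((C * total) / (row @ row.T))
ppmi = np.maximum(pmi, 0)  # -inf (from log 0) clips to 0

U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :3] * S[:3]  # 3-dimensional embeddings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

# Hindi and English words now share one space shaped by real mixed usage.
print(cosine(vectors[idx["school"]], vectors[idx["homework"]]))
```

Real systems use skip-gram or similar neural objectives on far larger corpora, but the count-based sketch captures the same principle: the geometry comes from the code-mixed usage itself, not from forcing translation pairs together.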

It is difficult to estimate the amount of code-mixing that happens in the world. One good proxy is the code-mixing patterns on social media. Approximately 3.5 percent of the tweets on Twitter are code-mixed. The above pie charts show the distribution of monolingual and code-mixed, or code-switched (cs), tweets in seven major European languages: Dutch (nl), English (en), French (fr), German (de), Portuguese (pt), Spanish (es), and Turkish (tr).

The chart above shows the distributions of monolingual and code-mixed tweets for 12 major cities in Europe and the Americas that were found to have very large or very small fractions of code-mixed tweets, represented in the larger pies by the missing white wedge. The smaller pies show the top two code-mixed language pairs, the size being proportionate to their usage. The Microsoft Research India team found that code-mixing is more prevalent in cities where English is not the major language used to tweet.

Even though code-mixing is extremely common in multilingual societies, it happens in casual speech and rarely in text, so we’re limited in the amount of text data available for code-mixed language. What little we do have is from informal speech conversations, such as interactions on social media, where people write almost exactly how they speak. To address this challenge, we developed a technique to generate natural-looking code-mixed data from monolingual text data. Our method is based on a linguistic model known as the equivalence constraint theory of code-mixing, which imposes several syntactic constraints on code-mixing. In building the Spanglish corpus, for example, we used Bing Microsoft Translator to first translate an English sentence into Spanish. Then we aligned the words, identifying which English word corresponded to the Spanish word, and in a process called parsing identified in the sentences the phrases and how they’re related. Then using the equivalence constraint theory, we systematically generated all possible valid Spanglish versions of the input English sentence. A small number of the generated sentences were randomly sampled based on certain criteria that indicated how close they were to natural Spanglish data, and these sentences comprise our artificial Spanglish corpus. Since there is no dearth of monolingual English and Spanish sentences, using this fully automated technique, we can generate as large a Spanglish corpus as we want.
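A drastically simplified sketch of the generation step can make the pipeline concrete. Here a hand-aligned English–Spanish sentence pair and a hand-marked switch point stand in for the translator, the word aligner, the parser, and the equivalence constraint; everything in this example is invented for illustration.

```python
# Simplified stand-in for equivalence-constraint generation: assume a
# monotone word alignment and allow a language switch only at marked
# phrase boundaries (a real system derives these from parse trees).
english = ["the", "house", "is", "big"]
spanish = ["la", "casa", "es", "grande"]

# A boundary after position i means a switch there respects both grammars.
# This list is invented; the equivalence constraint would supply it.
allowed_boundaries = [2]  # between the noun phrase and the verb phrase

def generate_mixed(en, es, boundaries):
    """Yield sentences that start in one language and switch at an allowed boundary."""
    out = []
    for b in boundaries:
        out.append(en[:b] + es[b:])  # English prefix, Spanish suffix
        out.append(es[:b] + en[b:])  # Spanish prefix, English suffix
    return out

for sent in generate_mixed(english, spanish, allowed_boundaries):
    print(" ".join(sent))
```

The actual method enumerates all switch points licensed by the theory across full parse trees, then samples the candidates that look most like natural Spanglish; the sketch only shows why constraining switch points matters.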

Solving NLP tasks with an artificially generated corpus

Through experiments on parts-of-speech tagging and sentiment classification, we showed that word embeddings learnt from the artificially generated Spanglish corpus were more effective in solving these NLP tasks for code-mixed language than the standard cross-lingual embedding techniques.

The linguistic theory–based generation of code-mixed text has applications beyond word embeddings. For instance, in one of our previous studies published earlier this year, we showed that this technique helps us in learning better language models that can help us build better speech recognition systems for code-mixed speech. We are exploring its application in machine translation to improve the accuracy of mixed-language requests. And imagine a multilingual chatbot that can code-mix depending on who you are, the context of the conversation, and what topic is being discussed, and switch in a natural and appropriate way. That would be true engagement.


Data Science Summer School students take a fresh look at the world’s largest rapid transit system

DS3 2018 at Microsoft Research New York City

This month marked the 5th anniversary of the Microsoft Research Data Science Summer School (DS3). DS3 is an intensive, eight-week hands-on introduction to data science for college students in the New York City area, aimed at increasing diversity in computer science. The program is taught by leading scientists at Microsoft Research and is held at the Microsoft Research New York City lab.

Each year the program receives upwards of 200 applications, out of which only eight students, demonstrating academic excellence and a passion for using technology to help society, are selected to participate. These students complete four weeks of intensive course work and spend the remaining four weeks of their summer working on an original research problem. Graduates of the program have gone on to a number of exciting careers, ranging from data scientist positions at companies like Microsoft, Bloomberg, and American Express to PhD programs at universities such as Cornell and NYU.

Past projects have looked at how students progress through the New York City public school system, investigated demographic disparities in the city’s policing activities, and formulated improvements for the city’s taxi fleet and bike sharing service.

This year’s students used their newly acquired data science skills to examine another way of getting around New York City—the city’s subway system—and presented some impressive findings at the DS3 banquet to an overflowing room of select members of New York City’s tech community. They examined rider wait times and trip times, compared the subway to above ground travel, and investigated how changes to the system affect rider options.

Below is a summary of their presentation, which you can watch in full. The project is also available on GitHub.


Akbar Mirza, a senior from City College, opened the talk by discussing the history of NYC’s subway system, which is the largest rapid transit system in the world, serving approximately 5.5 million riders each day. He highlighted the growing concern that the system has become unreliable due to aging equipment, some of which dates back to the early 20th century. And while current system-wide metrics provide some insight into the state of the subway system, they fail to capture how riders actually experience the subway.

Akbar Mirza

This motivated the students to investigate the subway system using the data behind the system’s new countdown clocks that record train locations. Specifically, they used a dataset collected and cleaned by local data scientist Todd Schneider that contained the approximate location of every train in the system for every minute of each day from January through May of 2018.

Next, Brian Hernandez, a senior from Hunter College, walked the audience through how this data could be used to understand how long riders spend waiting for trains. He used these calculations to compare his commuting options on the F and 7 trains, showing that while the typical wait time is the same on both lines, the F train has much higher variability than the 7 train, making the 7 the preferred option.
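The comparison rests on a standard result: a rider arriving at a random time waits E[H²] / (2·E[H]) on average over headways H, so variability matters even when mean headways are equal. A minimal sketch with invented headway samples:

```python
# For a rider arriving at a random time, expected wait over headways H is
# E[H^2] / (2 * E[H]); higher variance means longer waits at the same mean.
def expected_wait(headways):
    mean = sum(headways) / len(headways)
    mean_sq = sum(h * h for h in headways) / len(headways)
    return mean_sq / (2 * mean)

regular = [5, 5, 5, 5]    # constant 5-minute headways
variable = [1, 9, 1, 9]   # same 5-minute mean, high variance

print(expected_wait(regular))   # 2.5 minutes
print(expected_wait(variable))  # 4.1 minutes
```

Both lines average a train every 5 minutes, yet the erratic one makes riders wait over 60 percent longer on average, which is the intuition behind preferring the steadier 7 train.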

Brian Hernandez

Amanda Rodriguez, a senior at Lehman College, continued the presentation with a more granular look at subway wait times throughout the city. She presented a comprehensive wait time model that considers station- and line-specific factors as well as day of week, time of day, and weather effects. Her analysis revealed interesting patterns in wait time variability throughout the city and showed that heavy rain can result in as much as a 25% increase in typical wait times at certain locations.

Amanda Rodriguez

Taxi Baerde, a senior from Adelphi University, introduced the next topic—constructing a formal representation of the subway network as a graph that could be used for finding shortest paths between any two stops and computing trip times. Taxi discussed how it’s surprisingly difficult to settle on such a representation because the network itself is so dynamic, with changing schedules, partial routes, and skipped stops. He also presented a method, called k-shortest paths, for identifying different possible itineraries between a pair of stations (for instance, taking the local versus express, or transferring between multiple lines).
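A minimal sketch of the k-shortest-paths idea on an invented toy graph, using best-first search over loopless partial paths (a full network would call for Yen’s algorithm or similar, but the principle is the same):

```python
# Enumerate the k shortest loopless paths on a toy subway-like graph.
# Station names and travel times are invented for illustration.
import heapq

graph = {  # edge weights = travel minutes
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def k_shortest_paths(graph, source, target, k):
    paths, heap = [], [(0, [source])]
    while heap and len(paths) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == target:
            paths.append((cost, path))
            continue
        for nxt, w in graph[node].items():
            if nxt not in path:  # keep paths loopless
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return paths

for cost, path in k_shortest_paths(graph, "A", "D", 3):
    print(cost, "->".join(path))
```

On this toy graph the search surfaces three itineraries — the “transfer” route A→C→B→D (8 minutes), the direct A→B→D (9), and A→C→D (10) — mirroring how local, express, and transfer options are ranked between a real station pair.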

Taxi Baerde

Next, Phoebe Nguyen, a junior at Baruch College, showed how Taxi’s cleaned subway graph could be used to compare different commuting options between a pair of stations in a two-step process—first, finding a set of candidate paths between the stations; and second, reconstructing how long it actually took trains to make these trips. She used this method to compare different options for various trips, showing once again that variability is often the key for deciding between two different options.

Phoebe Nguyen

Peter Farquharson, a junior from Lehman College, extended Phoebe’s results to answer a question on many busy New Yorkers’ minds: when is the subway a better option than a car? He demonstrated how open data from the city’s Taxi and Limousine Commission could be used to estimate how long past car trips between two subway stations would have taken, and compared this with corresponding subway trips. His results highlighted that, once variability is factored in, the subway can be an attractive alternative to driving when trying to get to midtown Manhattan during rush hour or traveling to JFK airport.

Peter Farquharson

Ayliana Teitelbaum, a sophomore from Yeshiva University, looked at trip times from a different angle to tackle a question that New Yorkers face in choosing where to live—how long should you expect your commute to take coming from different parts of the city? She extended Phoebe’s results by showing historical trip times between each of the nearly 500 stations in the system to a fixed workplace destination, and presented the results as a heatmap. By comparing typical and worst case commute times for each station, she showed that accounting for variability can increase commute times in the outer boroughs by up to 50%.
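The typical-versus-worst-case comparison can be sketched as percentiles of historical trip times per station; the station names and trip durations below are invented for illustration:

```python
# Compare "typical" (median) vs "worst case" (90th percentile) commutes
# per station from historical trip times. All numbers are invented.
def percentile(values, p):
    ordered = sorted(values)
    i = int(round(p / 100 * (len(ordered) - 1)))
    return ordered[i]

trip_minutes = {  # hypothetical trips to a fixed workplace destination
    "Midtown station": [18, 19, 20, 20, 21, 22, 24],
    "Outer-borough station": [40, 42, 45, 48, 55, 62, 70],
}

for station, times in trip_minutes.items():
    typical = percentile(times, 50)
    worst = percentile(times, 90)
    print(f"{station}: typical {typical} min, worst case {worst} min "
          f"(+{100 * (worst - typical) / typical:.0f}%)")
```

Aggregating this per-station gap across the whole network is what turns into the heatmap: stations far from the destination tend to show both longer typical commutes and proportionally larger worst-case penalties.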

Ayliana Teitelbaum

Sasha Paulovich, a senior at Fordham University, presented the final set of results, considering how changes to the subway system affect riders and how subway experiences differ across demographic groups. She presented a heatmap similar to Ayliana’s that showed how we can expect commute times to change after the L train shuts down in January 2019, and an analogous map that projected commute times to LaGuardia airport if the proposed AirTrain extension to Willets Point is built. Finally, she discussed station options and commute times for riders requiring accessible stations and showed a correlation between median household income and commute times.

Sasha Paulovich

The team and their Microsoft Research mentors closed out the evening by fielding a host of questions from the audience, where the students discussed all of the additional topics they thought about tackling and the various extensions and future work to be done.

The team’s work has been accepted at the 2018 MIT Conference on Digital Experimentation (CODE) taking place in Cambridge, Massachusetts on October 26.


How Microsoft got into edge computing and real-time video analytics

I vividly remember October 29, 2008. I had invited colleagues from academia and industry to Building 99, home of Microsoft Research, for a daylong meeting to discuss the future of mobile and cloud computing. My friends flew to Redmond, Washington, from different parts of the world, and together in one of the conference rooms, we brainstormed ideas, using the whiteboard to design new cloud architectures, write down problems, and explore challenges. Eventually, we came up with a new computing paradigm that is now popularly known as edge computing. We called our edge nodes cloudlets.

Fast-forward 10 years, and we find ourselves in a world where edge computing is a major technology trend that is being embraced by cloud providers and most major telecommunications companies. Looking back, I am proud that we got many things right. For example, we were spot-on with the fundamentals. We devised an architecture that reduces latency to a compute infrastructure, decreases the need for large amounts of expensive network bandwidth to the cloud, and enables mission-critical operations to continue even when the network to the cloud is down. All this was right on the mark.

Joining me at that meeting were Ramón Cáceres (AT&T Labs), Nigel Davies (Lancaster University, U.K.), Mahadev Satyanarayanan (Carnegie Mellon University), and Roy Want (Intel Research). The five of us had been working in mobile computing, so naturally, we focused on devices such as smartphones, augmented reality/virtual reality headsets, and wearable computers. We did not discuss sensor networking or cyber-physical systems, which have recently emerged as the Internet of Things (IoT).

The case for edge computing

I had the opportunity to make the case for edge computing to the senior leadership team of Microsoft — including our CEO at the time, Steve Ballmer — twice. The first time was in December 2010. At the end of the presentation, Steve asked me which current application I would move to edge computing.

I had been thinking about future applications such as AR/VR and hadn’t deeply thought about existing applications, so I awkwardly answered, “Speaker and command recognition.” An executive vice president whose team was working on this challenge was in attendance, and he disagreed. Although I had built and demonstrated a small prototype of such a system (think Skype Translator) at the 2009 Microsoft Research Faculty Summit, I hadn’t thought about how we would instantiate such an application at scale. Needless to say, my answer could have been better.

My research team and I continued working on edge computing, and in January 2014, I presented to the senior leadership team again. This time, I told them about micro datacenters, a small set of servers placed on premises to do what the cloud did; essentially, today’s equivalent of Microsoft Azure Stack. I demonstrated several scenarios in which the virtues of micro datacenters were irrefutable: real-time vision analytics with associated action, energy saving in mobile devices, and single-shooter interactive cloud gaming. This time, it worked. In a booming voice, Steve — who was still our CEO — said, “Let’s do this.”

The green light was followed by a series of meetings with Microsoft distinguished engineers and technical fellows to discuss the rollout of edge computing, and through these meetings, it became increasingly clear that one question remained unanswered: What compelling real-world applications could not thrive without edge computing? Remember, Microsoft was rapidly building mega-datacenters around the world, on a path to 30-millisecond latency for most people on the planet with wired networking, and IoT had not yet emerged as a top-level scenario. So, which high-demand applications could edge computing take to the next level that cloud computing couldn’t?

The need for a killer app

We had to come up with a killer app. Around the same time as these meetings, I took a sabbatical, with stops in London and Paris. While there, I noticed the proliferation of cameras on city streets. Instinctively, I knew that people were not looking at every livestream from these cameras; there were simply too many. According to some reports, there were tens of millions of cameras in major cities. So how were they being used? I imagined every time there was an incident, authorities would have to go to the stored video stream to find the recording that had captured the event and then analyze it. Instead, why not have computers analyze these streams in real time and generate a workflow whenever an anomaly was detected? Computers are good at such things.

For this to work, we would need cloud-like compute resources, and they would have to be close to the cameras because the system would have to analyze large quantities of data quickly. Furthermore, the cost of streaming every video feed to the cloud could be prohibitive, not to mention the expense of renting GPUs in the cloud to process each of these streams. This was the perfect scenario — the killer app for edge computing — and it would solve a compelling real-world, large-scale problem.

In the years that followed, we worked diligently on edge-based real-time video analytics, publishing several papers in top conferences. We even deployed a system in Bellevue, Washington, for traffic analysis, accident prevention, and congestion control as part of the city’s Vision Zero program. This brings me to our paper being presented at the third Association for Computing Machinery/IEEE Symposium on Edge Computing (SEC) October 25–27 in Bellevue. The work represents another step in our journey to nail the live video analytics challenge using edge computing.

Best tradeoff between multiple resources and accuracy

In our paper “VideoEdge: Processing Camera Streams using Hierarchical Clusters,” we describe how a query made to our system is automatically partitioned so some portions of it run on edge computing clusters (think micro datacenter) and some in the cloud. In deciding what to execute where, we recognize and plan for multiple different queries that may be issued to our system concurrently. As they execute on the same infrastructure, we try not to repeat any processing. The objective is to run the maximum number of queries on the available compute resources while guaranteeing expected accuracy. This is a challenging task because we have to consider both the network and compute demands, the constraints in the hierarchical cluster, and the various tunable parameters. This creates an exponentially large search space for plans, placements, and merging.
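As a toy illustration of the partitioning decision (not VideoEdge’s actual model), consider a single linear query pipeline where everything before a cut point runs at the edge and the rest runs in the cloud; we pick the cut that minimizes uplink bandwidth while staying within an edge-compute budget. All stage names and costs are invented.

```python
# Toy edge/cloud partitioning of a linear video-query pipeline.
# stages: (name, edge compute cost). bandwidth[i] is the uplink needed
# if the pipeline is cut before stage i (bandwidth[0] = raw video).
stages = [("decode", 2), ("detect", 6), ("classify", 4)]
bandwidth = [10, 8, 2, 1]
EDGE_BUDGET = 8  # available edge compute

def best_split(stages, bandwidth, budget):
    best = None
    for cut in range(len(stages) + 1):
        compute = sum(c for _, c in stages[:cut])
        if compute > budget:
            continue  # doesn't fit on the edge cluster
        uplink = bandwidth[cut]
        if best is None or uplink < best[1]:
            best = (cut, uplink)
    return best

cut, uplink = best_split(stages, bandwidth, EDGE_BUDGET)
print(f"run first {cut} stage(s) at the edge, uplink cost {uplink}")
```

The real problem is far harder — many concurrent queries, shared stages, hierarchical clusters, and tunable parameters all interact — which is exactly why the naive search space becomes exponential and needs the pruning described next.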

In VideoEdge, we identify the best tradeoff between multiple resources and accuracy, thus narrowing the search space by identifying a small band of promising configurations. We also balance the resource benefits and accuracy penalty of merging queries. The results are good. We are able to improve accuracy by as much as 25 times compared to state-of-the-art techniques such as fair allocation. VideoEdge builds on a substantial body of research results we have generated since early 2014 on real-time video analytics.
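The “small band of promising configurations” can be sketched as a Pareto frontier over (resource demand, accuracy) pairs: a configuration is kept only if no cheaper configuration matches or beats its accuracy. The configuration list below is invented for illustration.

```python
# Prune candidate configurations to the Pareto-optimal band over
# (resource demand, accuracy). The candidates are invented numbers.
configs = [
    (1.0, 0.60), (2.0, 0.75), (2.5, 0.70), (4.0, 0.90), (5.0, 0.88),
]

def pareto_front(configs):
    front = []
    for cost, acc in sorted(configs):
        # Sorted by cost: keep a config only if it improves on every
        # cheaper config's accuracy.
        if not front or acc > front[-1][1]:
            front.append((cost, acc))
    return front

print(pareto_front(configs))
```

Here (2.5, 0.70) and (5.0, 0.88) are discarded because a cheaper configuration is at least as accurate; planning then searches only the surviving band instead of the full configuration space.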

IoT embraces edge computing

A few years after we began researching video analytics, IoT emerged, as thought leaders in industries such as manufacturing, health care, automotive, and retail started focusing on using information technology to increase efficiencies in their systems. They understood automation combined with artificial intelligence, made possible with IoT, could lower operating costs and increase productivity. The key ingredient was sensing, processing, and actuation in real time.

For this to work, the time between sensing and processing and between processing and actuation had to be negligible. While processing could be done in the cloud, the latency to it was relatively high, the network to it was expensive, and IoT systems had to survive disconnections from it. Enter edge computing — it was the perfect solution for such scenarios. Recognizing this, Microsoft has committed more resources to the combined technology, announcing in April a sizable investment in IoT and edge computing.

While we began 10 years ago, I believe the most interesting portion of our journey is just starting. Simply search for the term “edge computing,” and you will see how much has been written about this topic both in industry and academia. And SEC 2018, for which I have the honor of serving as program co-chair, is further proof of the excitement surrounding this emerging computing paradigm. The papers feature many different topics, ranging from data security and integrity to machine learning at the edge, specialized hardware for edge computing, 5G edge, programming models, and deployment on drones, automobiles, the retail space, and factory floors. As we continue to build new products and learn, we uncover new challenges that engineers and researchers love to solve, and as our platform matures, we will see the creation of a new generation of applications.

In my experience, I have found it takes on average seven years for a new technology to go from research lab to real world. In 2013, I made a prediction that edge computing will be everywhere by 2020. I continue to believe this is going to happen. My colleagues and I believe that together we are entering the best part of this journey.

In a keynote address at the 2013 IEEE International Conference on Cloud Networking (IEEE CloudNet), Victor Bahl presented the above slide and predicted edge computing will be everywhere by 2020, a statement he stands by today.


Designing the future with the help of the past with Bill Buxton

Principal Researcher Bill Buxton

Episode 46, October 17, 2018

The ancient Chinese philosopher Confucius famously exhorted his pupils to study the past if they would divine the future. In 2018, we get the same advice from a decidedly more modern, but equally philosophical Bill Buxton, Principal Researcher in the HCI group at Microsoft Research. In addition to his pioneering work in computer science and design, Bill Buxton has spent the past several decades amassing a collection of more than a thousand artifacts that chronicle the history of human computer interaction for the very purpose of informing the future of human computer interaction.

Today, in a wide-ranging interview, Bill Buxton explains why Marcel Proust and TS Eliot can be instructive for computer scientists, why the long nose of innovation is essential to success in technology design, why problem-setting is more important than problem-solving, and why we must remember, as we design our technologies, that every technological decision we make is an ethical decision as well.

Transcript

Bill Buxton: If you are going to come and make an argument that something is going to have huge impact in the next five years, if you haven’t got fifteen years of history of that idea and can trace its evolution and history and so on, then you are probably wrong or you haven’t done your homework or you might get your head cut off when you come to this presentation unprepared. Even if you are right, and you don’t have that fifteen years, then that’s gambling, that’s not investment, that’s not research. You are just lucky. Design is a repeatable profession.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: The ancient Chinese philosopher Confucius famously exhorted his pupils to study the past if they would divine the future. In 2018, we get the same advice from a decidedly more modern, but equally philosophical Bill Buxton, Principal Researcher in the HCI group at Microsoft Research. In addition to his pioneering work in computer science and design, Bill Buxton has spent the past several decades amassing a collection of more than a thousand artifacts that chronicle the history of human computer interaction for the very purpose of informing the future of human computer interaction.

Today, in a wide-ranging interview, Bill Buxton explains why Marcel Proust and TS Eliot can be instructive for computer scientists, why the long nose of innovation is essential to success in technology design, why problem-setting is more important than problem-solving, and why we must remember, as we design our technologies, that every technological decision we make is an ethical decision as well. That and much more on this episode of the Microsoft Research Podcast.

Host: Bill Buxton, welcome to the podcast.

Bill Buxton: Glad to be here.

Host: So, I’d like to start by asking my guests what gets you up in the morning, but you’ve already answered that in print, and I quote, “What gets me up in the morning is to realize what I dream about.” So, now you have to tell us what you dream about.

Bill Buxton: It depends which morning it is, I think. I think there’s an embarrassment of riches of things to want to do, and I think that that’s one of the best things because you’re never at a loss to be motivated. But then the other problem is, you have to make choices as to which one you pursue. You can do anything and everything in your life, you just can’t do them all at once. You always want to be falling in love with something that just captured your imagination, but in so doing, you have to retire a previous passion or at least move it to the background because you can’t go full-throttle into more than one or two things. One description of what I do for a living is Experience Design. And I’m prone to say Jimi Hendrix had the greatest wisdom of this, and that’s the most profound question, “Are you experienced?” And if you don’t have a breadth, as well as depth, of experience to draw on, how can you be good at Experience Design? Because it’s building up this repertoire and curating this repertoire of experiences in your life across the board that is the treasure trove that you can mine in whatever you’re trying to do.

Host: Your bio says you are “a relentless advocate for innovation, design and the appropriate consideration of human values, capacity and culture in the conception, implementation and use of new products and technologies.” Which is a mouthful. But let’s unpack that a little bit. I’m really intrigued by your statement of the “appropriate consideration.” Tell us what you mean by that in the context of designing new technologies and products.

Bill Buxton: Well, one of my heroes is a historian of technology named Melvin Kranzberg, and he has some laws. But his first law is, “Technology is not good, it’s not bad, but nor is it neutral.” It will be some combination of the two. As soon as you say words like good and bad, that implies you have a moral compass. And the real question is, is that when you are making technological decisions and launching technologies into society, you are, in fact, making an ethical choice, whether you know it or not. And so, maybe you’ll do a better job of it and weight more heavily on the positive if you actually know what that moral compass is and that you are, in fact, making an ethical decision. I’m not trying to put too heavy a weight on this in that you are playing God, but you are in fact having impact. But you are also human, so how can you just do the best? You will get some stuff wrong. So, take responsibility to clean up the mess without throwing the baby out with the bath water. And so, it basically says that “appropriateness” is appropriate to the moral order of place or where it’s going to be placed. That’s the closest way I can put it.

Host: Let’s talk about your job description at Microsoft Research. When you started at MSR, Rick Rashid hired you to, as you say, “Make design a core part of the Microsoft culture.” So, how did you go about doing what he said? That’s about the vaguest job description I can think of, and yet it… it’s perfect.

Bill Buxton: Well, actually what he really said was, “Make a difference and if you are not the best person to figure out how you should do that, you probably shouldn’t have the job.” Then my response was okay, I’m going to try to help contribute to bringing a greater awareness of design to the company and that meant, actually, not trying to design products, but trying to design a culture and change the attitudes and not elevate design to the point where everything is design-led, but where it’s an equal partner at the table. In the early days, when I would speak to different teams in the company, in large or small groups, it would be kind of like, don’t expect this to come from above, or from management or anything like that, because we are our own culture. We make it. And it’s every individual. And if you can actually start to just feel empowered to, within your own, even if it’s one other person, you can start to make adjustments along the way you want that can go viral because if we’re shifting in a good direction, it will be noticed, and then people will say, well, what’s the secret sauce that you’re using? And nobody can own this. It can’t be about any individual. It’s got to be about empowering individuals to form groups and clusters because that’s what culture is. It’s a mutually agreed upon set of values.

Host: So today, like you, I’m going to use some literary quotes to prompt our discussion on technology research. So, let’s start with Marcel Proust. He once said, “The real voyage of discovery is not in seeking new landscapes, but in having new eyes.” One of the major themes in research is looking for the next big discovery, right?

Bill Buxton: Yes.

Host: How does having new eyes, or different optics, as you’ve said it, inform the quest for innovation or how should it inform the quest for innovation?

Bill Buxton: So, the net result is that, in some sense, I would describe my job description as being an optician and to find the right lenses. I’ll give an example. As you say, the industry is heavily driven by people trying to find the next big thing, whether it’s a new gadget or a new application, killer app or a new service. And if you’re just graduating from university or design school or whatever, that of course, you want to become a millionaire by the time you’re 24 or you’re a failure. And so, there’s all these pressures. And so, my automatic reaction, I just wrote a two-pager that said the next big thing isn’t a thing. And, I said, it’s actually a change in relationship amongst the things that are already there and the things that are going to emerge. And when I say relationship amongst those things, it’s about the social relationships, things like kinship, introduction, negotiation, approach, departure, all of these things, the moral order. These are all terms that we know about the society of people. But, we aren’t used to speaking about in terms of the society of technology. What could you do that would have more impact than if things just worked? If things just worked together seamlessly? And if, in working together, every new thing I added came at a great value in and of itself, but it also added value to everything else I already had, and they to it, and furthermore every new thing I added reduced the complexity, not only of that new thing, but reduced the complexity of everything else in the ecosystem and they it. We realize that hardly anything works well together much less seamlessly. And what we’ve forgotten, when we come back to the human side, is that the better we get at really making desirable, beautiful, affordable, useful, valuable devices, the worse we’re making things. The cumulative complexity of a bunch of desirable, simple, affordable, valuable things is way above the human’s capacity to deal with. 
And that’s why you must reduce complexity with everything you add. And that takes a very different approach because it forces you into thinking about an ecosystem. Albert Shum, who is part of the “Canadian Mafia” trying to change design at Microsoft here, is a good friend and a fellow cyclist. And he has a nice way of saying it: that in the industry, we spend a whole bunch of time learning how to design houses. The real challenge is building the communities and the city planning and the urban planning and the flow of things. And I think even the changes we’ve been making over the last year or two have been significant steps on this path. But the challenge in innovation is, how do you go beyond that and say, what are the right metrics for our aspirations and where we can be and how soon we should get there? Because only when you find that can you set appropriate goals that best meet your objectives.

(music plays)

Host: You have become the collector and curator of more than a thousand computer hardware artifacts that chronicle the history of various aspects of human computer interaction. So, tell us about your collection, or collections. How did you get started doing this, what kinds of things have you collected and how hard was it for you to get your hands on some of these things? I’ve seen the collection. It’s crazy!

Bill Buxton: Well, first of all, my name is Bill and I’m not a hoarder. I’m a collector. The one-word answer: it was an accident. Maybe a more informative answer is to say it is a reflection of my process, of what I do for a living. I’m always looking for reference material, always scanning, collecting things, surrounding myself with them for inspiration, for ideas, and to trigger thoughts, and having them sitting there around me, and all of a sudden, some new relationship pops out. When I’m at a loss for a solution to a problem, I go and surround myself with these objects. But over about forty, forty-five years, I’ve never thrown any of them out. I’ve kept them all. And so, when anything came out, whether it was a brochure or an article in a magazine, or something like that, I kept it and documented it for future reference, for teaching, for teaching myself and to go back to say, hey, I think I’ve seen this before. And you can think of them all as prototypes, really expensive-to-make prototypes, which I could get for practically nothing, sometimes on eBay, where it’s like a really expensive education where somebody else paid the tuition. And they’re sitting there; if you want to get the benefit of that education, you can. And therefore, when I do start to make something, or when anybody in the company does, they can start at a much higher level because they’ve got these reference objects.

Host: Interesting.

Bill Buxton: And so, the base point-of-departure for any problem I’m looking at is, somebody has already solved this problem and there’s something out there that’s already done this. So, I’m going prospecting before I go building.

Host: Tell us about the collection. What’s in it?

Bill Buxton: Well, the collection is sort of a cross-section of all of the input devices through which people have interacted with digital technology pretty much from the beginning. And so that would include mice and joysticks and trackballs and trackpads. It is PDAs. It’s game controllers, it’s foot pedals, head-mounted displays. It’s, uh, smart watches going back to 1978. It’s the world’s first smart phone. It’s the history of portable music players. It’s the history of AR and VR technologies going back to a reproduction I made of the very first stereoscope from 1838. And it’s also examples to use to serve as the basis for story-telling that illustrate some of the things that are really important about design. I don’t think many people in VR know that it was virtual reality, in an early form, that led to Yellowstone being made the first national park in the world, not just the United States. Or that the very first stereoscope from 1838 was already looking into a virtual space, because photography wasn’t invented till the following year. There were no photographs to make stereo images from, and they had to be hand-drawn, and so when you looked into Wheatstone’s original reflecting stereoscope, you were looking at hand-drawn lines, into a world that never existed.

Host: Wow.

Bill Buxton: I think those things are really interesting because you start to see patterns, if you go through it. But from those patterns, you say, okay, they probably haven’t stopped, and so you can extrapolate. So, it’s really hard to extrapolate from a point. If I have a line, it’s much easier. And so, I have this game, I’ll do it with adults as well as children. I’ll draw all these different lines and say, “Continue these lines.” And then I’ll put a point. And they have no idea what to do with the point, but all those other things, they can continue because they can see the pattern as things were going. And it doesn’t mean the extrapolation is correct, but it gives you your initial bearing for your exploration and usually because there’s other things involved, there’s probably a couple of lines that come and you’ll start, maybe you’ll see there’s intersections from extrapolations. And you have these ways to visualize. And this gives you a different way to think, accompanied by concrete examples that you can experience to get to the assets at the finest granularity.

Host: So, you referred to something you called the long nose of innovation. I think researchers are familiar with the phrase the long tail. But the long nose is an interesting one. And it’s in the context of new technologies and how long it takes them to catch on. And you also had said at some point, that our slow rate of progress is maybe not necessarily due to a lack of technology but a lack of imagination. How and why do we lack imagination and what can we do? What can researchers do about that?

Bill Buxton: The long nose basically comes back to saying, if we look historically at the evolution of technologies, it takes at least twenty years from the first clear articulation of something to the point that it’s mature, where let’s measure maturity as it’s a billion-dollar industry. If you are going to come and make an argument that something is going to have huge impact in the next five years, and you haven’t got fifteen years of history of that idea and can’t trace its evolution and history and so on, then you are probably wrong, or you haven’t done your homework, or you might get your head cut off when you come to this presentation unprepared. Even if you are right, and you don’t have that fifteen years, then that’s gambling. That’s not investment, that’s not research. You’re just lucky. Design is a repeatable profession. It’s not, I get lucky once in a while. And so, if you want to study design and innovation, study the repeat offenders, the ones that can do it over and over. You don’t have to wait for the muse to come and drive you. And that’s what you learn. And you can only do that if you have process. And the long nose is a key part of that process. Now, for those who doubt, take the mouse: everybody who saw one in 1968 knew it was the right thing. But it wasn’t until Windows 95 that everybody had a mouse at their desk. Now, why did it take so long? I first used a mouse in 1971. Now the thing is, you need a perfect wave of things. You had to perfect windows and icons. You had to train the developers how to write this type of graphical user interface. That was a whole new thing from DOS or UNIX. And you needed the processors. You needed graphics processors. You needed the displays to switch to bitmap displays rather than the calligraphic displays which dominated back in the time, basically glorified oscilloscopes. Every technology goes the same route. And so, the long nose is basically this reminder of how long it takes.
So, it also says the following things and reinforces what I was saying about innovation being the aggregation of existing ideas: everybody thinks that things are moving really quickly, and that is not true. We mistake a whole bunch of things moving really slowly for things moving quickly. It’s the difference between amperage and voltage. Any single technology is evolving, statistically speaking, really slowly. But when you have a number of different things moving slowly, at slightly different paces, but simultaneously and at different stages on the nose, if you start to realize that’s what’s going on in the overall technological ecosystem, you can see those patterns and then project forward, because you can extrapolate from history, and say, here’s where you hit the inflection point and that’s when things are going to happen. Everything has a perfect storm, and there are methods, by using this technique, to actually predict when that perfect storm is going to happen. I’ll give you a really quick example. I spent my early career, after I switched from being a musician, building digital music synthesizers for live performance. So, I saw the evolution of how digital audio emerged. I went to Silicon Graphics and became Chief Scientist there doing animation systems. But the only act of genius I had, because I wasn’t in computer graphics, I was literate, but I wasn’t, you know, a specialist in computer graphics. But I knew that computer graphics was going to follow exactly the same pattern as computer music, but it was multiple orders of magnitude more complex, so it was just shifted further along the timeline. And so, all the planning over the eight and a half years I was there, we kept hitting that right. And the reason we could know exactly what to do and when was because I was just repeating what I had already done in music. And so, all I needed to do was to see that relationship.
And I think overall, that type of pattern happens throughout, but you have to know those other areas where you go prospecting. So, the long nose, the notion of history, collecting, sampling and not just going immediately to building. We spend far too much and go far too quickly into problem-solving and don’t spend enough time problem-setting. And that’s the ultimate skill.

Host: Can you define problem-setting a little more clearly?

Bill Buxton: Problem-setting is basically, it’s not enough to get the design right, you’ve got to design the right thing. And so, if you just leap in and start building something where you’ve got a solution, you have no idea if that’s the best option. There might have been a better way and you didn’t take time because you are already behind schedule. But here’s the crazy thing. At the beginning of the product cycle, you have a small team just getting going. Your burn rate, in terms of what it’s costing you per week in terms of the project and that, is very, very low. So, what you then should be doing is thoroughly exploring a range of different alternatives. Problem-setting, part of that process is this notion of, you cannot give me one idea. You have to learn how to work quickly and give me multiples. That’s a technique for this whole issue of, how do you deal with the problem-setting? And by exploring the space first… oh, that’s the real problem… Put it this way. You have a bunch of people that talk about user-centered design. And they’ll say, you know, go talk to your users and they will tell you what to do. Okay. Would you go to a doctor where you walked in, and the doctor said, okay, what’s wrong with you, what operation do you need and what drugs should I give you at what dose, right? And that’s how some people naively interpret user-centered design, is “listen to users.” And, no. I’m going to ask you all kinds of questions. But I’m going to take all of those as part of the information that helps me make a diagnosis. And so, where do we collect the symptoms to find out where the real problems are? You’re telling me this. I understand the situation. Now, I have to know enough about your industry to ask pertinent questions. And for me, that’s what the problem-setting is. For the designer, the main requirement is to have that meta-knowledge. And that’s where the diverse interests come in, so how do you get that knowledge?
But if you don’t even know that’s the kind of knowledge you need to get, you’re not even going to go looking for it.

Host: So, you look at the product development cycles and, even in research, what you’re talking about is something that people would have to say, “Okay, we need to rethink how we work and what we make time for.”

Bill Buxton: So, I’d throw the argument the other way: you can’t afford not to do it. So, your cost-per-month on a project, if you put an extra month up front, it costs you almost nothing. And if it comes up with a much better solution that’s a fraction of the price and can get it done more quickly and have a much better margin, first of all, you’ve made up for the lost time by having spent that up front. But let’s pretend it still takes the same amount of time. We never have time to do problem-setting and so on sufficiently. We’re getting better at it. But we seem to be able to have time to be three months late where we are fully-loaded with the highest burn rate possible, right? I mean, if you’re going to take an extra month, do you want to play it where it costs you the most or do you want to do it up front and you get a better product? The other part is, it’s not all in one. You don’t make all your decisions up front and then go build. The decisions that you make the earliest are the ones that are hardest to change later. So, that’s your basic architecture. In the software industry, we don’t have architects. What we call an architect, in architectural terms, is actually a structural engineer. And we have no architect that has design perspective at the very beginning. But also, there’s this notion that once you’ve got a representation, like a rendering of what the screens are or some of these other things, that that’s the end of the design. There are only two places where there’s room for creativity in design. So, the first place for creativity is the heuristic process whereby you enumerate the repertoire of things from which you are going to choose, and then the second is the heuristic you use to eliminate all but one. And it’s that inhale/exhale. You start with nothing, you end with one. But you have to go through that whole thing. You would love, afterwards, you know, to say, I could have got here right from the beginning.
And you could, but you never would have. And that’s the biggest mistake. The fastest way to a mediocre product is to make a plan and stick to it.

(music plays)

Host: Let’s talk about AI for a minute. Because tech companies are putting a premium on AI talent, uh…

Bill Buxton: Oh, is it important now?

Host: Apparently, people are using the terms gold rush, talent war…

Bill Buxton: Feeding frenzy…

Host: …feeding frenzy. And you’ve suggested that there’s a risk that anyone who’s not doing AI, might be marginalized.

Bill Buxton: So, I have to preface that by saying, I think what we can do today in AI is absolutely unbelievable. It’s beyond my wildest expectations in what we’d be able to do at this point. It’s unbelievably valuable, but it’s nevertheless essential but not sufficient. And as I said, you need a perfect storm of a whole bunch of things to get a sustainable system, or an ecosystem, in place. And my fear is that if you focus too much on the AI component, you distort the other requisite skillsets and disciplines that are needed to ensure that AI is successful. Every discipline represented in our company is essential to our success but not sufficient. And the trick is to find the balance. And one of the important elements here is to make a distinction between literacy and expertise. It is essential that everybody in the company has a level of literacy about AI. But it’s equally important to have literacy about every one of those disciplines. And that means that AI should be working as hard to gain literacy in the disciplines that are core to its success as those disciplines are to AI. What happens, if we push so hard on the AI front and we don’t make that clear distinction between literacy and expertise, is that developers and designers get so focused on AI that they feel that if they’re not going in that direction and chasing it really, really hard, it’s a career-limiting move. I think that what is clear is that you may end up with the best AI in the world and still be beaten by somebody who’s got only 20% of the AI competence, but they’ve got way better integration of the AI into their larger ecosystem. Because, like any other technology, it’s not good, it’s not bad, but nor is it neutral, and there will be positive and negative consequences of that technological change.

Host: Another premise in AI research has its underpinnings in what we’ve referred to as the DIKW pyramid, where you start with data, which supposedly gets refined into information and then to knowledge and culminates in wisdom, which is the ability to make good decisions based on the data you have. And this, of course, has literary roots in T.S. Eliot’s The Rock: “Where is the life we’ve lost in living? Where is the wisdom we’ve lost in knowledge? Where is the knowledge we’ve lost in information?” Talk about this in the context of this idea that if we have enough data, with machine learning, computational power and sophisticated algorithms, we’ll end up with wisdom.

Bill Buxton: Well, first of all, Eliot left off two levels there. So, where’s the wisdom we’ve lost in knowledge, the knowledge we’ve lost in information, the information we’ve lost in data, and the data we’ve lost in noise. You have to remember noise cancellation. And people talked about a data revolution and so on… No, it’s a data explosion. And information technologies? No, it’s not. It’s only information if it can serve as the basis for informed decision-making. I think it’s very, very healthy to have that hierarchy. I think it’s extremely valuable to be able to fit things into moving up that food chain. But I think that the role that intelligence plays there, and where intelligence lies, is a sticky thing. And we have to base our expectations of the technology, and therefore have our engineering guided, by a sense of what’s possible at any point in time along that path. Now, I know that we were talking in AI about, you know, sensing an ecosystem environment and all this sort of stuff. Well, we have to be realistic about how much of that we can sense at what point in time, and then understand what elements are being neglected and are simply not feasible at this point to deal with, and therefore our notion of intelligence is limited. And how do we, at any point in time, make sure we’re back-filling those gaps until it can be proven that we’ve got those other parts reliably taken care of? And again, by looking at the disciplines, doing the analysis, we can look at the timeline and take appropriate action for each thing to make sure that we’ve got the bases covered with the appropriate technologies for that moment in history and not make colossal mistakes and confuse the target with where we are right now. It comes right back to what I said earlier: it’s not just being able to get the vision, it’s how do I get there from here?

Host: What would you say to the people that are moving into this arena right now? What should they be thinking? What could their next steps be?

Bill Buxton: In a way, my advice is less concrete in terms of “learn this, learn that” in terms of some skill. We’ve said already that the problems we face today require depth. You have to be really good at what you are doing if you want to really have influence. And for me, the only way you can get really, really, really good at something is if you’re just so passionately in love with it that it’s not work. Now, people say, okay, you’ve got to find your passion. Well, the problem is, how do you do that? Get into the traffic, because if it’s not hitting you wherever you are, then move. But the other part is trusting my nose, chasing the stuff that caught my fancy, those things that made no sense but, in retrospect, were the perfect career moves. Like why would anybody go to university and do computer music when nobody even knew what a computer was? And spend four years doing that? But it was the most brilliant career decision that I never made. It wasn’t a career decision. I wanted to be a musician. But I would say, always be bad at something you love. And it doesn’t matter if things make sense. That’s the other part that’s really critical. I purposely rejected any career path for which there was a brochure in the guidance counselor’s office in high school. Because it’s already full. There’s going to be already too many people doing that. And it’s not that I’m not competitive, it’s just that my main competitive advantage is, I’m not trying to compete in the same race. And if you’ve got these interests and you become uniquely qualified, you can have the satisfaction you’re the best in the world at what you do. You’re just the only one. That makes you also the worst. That keeps hubris from taking over. But have the faith that at some point in your life, all that work will be recognized and somebody will need it. There’s somebody in the world who needs it. And the question is how to find it. For me, it took me till I was forty.
But the time leading up to that was so full of rich experience that it never occurred to me that I wasn’t making any money. I was the richest person in the world because I was doing what I love doing.

Host: Bill Buxton, thank you for joining us today.

Bill Buxton: Thank you for having me.

(music plays)

To learn more about Bill Buxton and the latest innovations in human computer interaction, visit Microsoft.com/research


Playing to the crowd and other social media mandates with Microsoft Research’s Dr. Nancy Baym


Episode 41, September 12, 2018

Dr. Nancy Baym is a communication scholar, a Principal Researcher in MSR’s Cambridge, Massachusetts, lab, and something of a cyberculture maven. She’s spent nearly three decades studying how people use communication technologies in their everyday relationships and written several books on the subject. The big take away? Communication technologies may have changed drastically over the years, but human communication itself? Not so much.

Today, Dr. Baym shares her insights on a host of topics ranging from the arduous maintenance requirements of social media, to the dialectic tension between connection and privacy, to the funhouse mirror nature of emerging technologies. She also talks about her new book, Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection, which explores how the internet transformed – for better and worse – the relationship between artists and their fans.


TRANSCRIPT

Nancy Baym: It’s not just that it’s work, it’s that it’s work that never, ever ends. Because your phone is in your pocket, right? So, you’re sitting at home on a Sunday morning, having a cup of coffee and even if you don’t do it, there’s always the possibility of, “Oh, I could Tweet this out to my followers right now. I could turn this into an Instagram story.” So, the possibility of converting even your most private, intimate moments into fodder for your work life is always there, now.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Dr. Nancy Baym is a communication scholar, a Principal Researcher in MSR’s Cambridge, Massachusetts, lab, and something of a cyberculture maven. She’s spent nearly three decades studying how people use communication technologies in their everyday relationships and written several books on the subject. The big take away? Communication technologies may have changed drastically over the years, but human communication itself? Not so much.

Today, Dr. Baym shares her insights on a host of topics ranging from the arduous maintenance requirements of social media, to the dialectic tension between connection and privacy, to the funhouse mirror nature of emerging technologies. She also talks about her new book, Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection, which explores how the internet transformed – for better and worse – the relationship between artists and their fans. That and much more on this episode of the Microsoft Research Podcast.

Host: Nancy Baym, welcome to the podcast.

Nancy Baym: Nice to be here.

Host: So, you’re a Principal Researcher at the MSR lab in Cambridge, Massachusetts, not to be confused with the one in Cambridge, England. Give our listeners an overview of the work that goes on in New England and of your work in particular. What are the big issues you’re looking at? Why is the work important? Basically, what gets you up in the morning?

Nancy Baym: So, the lab in New England is one of Microsoft’s smaller research labs. We’re very interdisciplinary, so, we have people in my basic area, which is social media and social issues around technology from humanistic and social scientific perspectives. And we have that alongside people working on machine learning and artificial intelligence, people working on economics, people working on cryptography, people working on math and complexity theory, people doing algorithmic game theory, and then we also have a bioinformatics and medicine component to this program also. So, we’re really interested in getting people from very different perspectives together and listening to each other and seeing what kinds of new ideas get sparked when you get people from radically different disciplines together in the same environment and you give them long periods of time to get to know one another and get exposed to the kinds of work that they do. So, that’s the lab as a whole. My group, we call ourselves the Social Media Collective, which is a sort of informal name for it. It’s not an official title but it’s sort of an affectionate one. There are three core people here in our New England lab, which would be me, Mary Gray and Tarleton Gillespie, and then we have a postdoc and we have, in the summer, PhD interns, we have a research assistant, and we’re all interested in questions around how people use technologies, the kinds of work that people do through technologies, the kinds of work that technologies create for people, and the ways that that affects them, their identities, their relationships, their communities, societies as a whole.

Host: You know, as you talk about the types of researchers that you have there, I wonder, is New England unique among the labs at Microsoft?

Nancy Baym: I think we are, in that we are more interdisciplinary than many of them. I mean our Redmond lab, obviously, has got people from a huge range of disciplines, but it’s also got a huge number of people, whereas we’re a much smaller group. We’re on one floor of a building and there are, you know, anywhere from twenty to fifty of us, depending on how many visitors are in the lab and how many interns are around or what not, but that’s still a really small fraction of the Redmond group. So, I think anybody in a particular field finds themselves with many fewer colleagues from their own field relative to their colleagues as a whole in this lab. Whereas, I think most of our labs are dominated much more by people from computer science. Obviously, computer science is well-represented here, but we have a number of other fields as well. So, I think that foregrounding of interdisciplinarity is unique to this lab.

Host: That’s great. So, the social science research in the context of social computing and social media, it’s an interesting take on research in general at Microsoft, which is a high-tech company. How do you think the work that you do informs the broader work of Microsoft Research and Microsoft in general?

Nancy Baym: I would like to think that the kinds of work that I do, and that my colleagues are doing, are helping the company, and technology companies in general, think in more sophisticated ways about the ways that the technologies that we create get taken up and get used and with what consequences. I think that people who build technologies, they really want to help people do things. And they’re focused on that mission. And it can be difficult to think about, what are all the ways that that might get taken up besides the way that I imagine it will get taken up, besides the purpose that I’m designing it for? So, in some sense, I think part of our group is here to say, here’s some unexpected things you might not be thinking about. Here’s some consequences, or in the case of my own work, I’d like to think about the ways that technologies are often pushing people toward more connection and more time with others and more engagement and more sharing and more openness. And yet, people have very strong needs for privacy and for distance and for boundaries and what would it mean, for example, to think about how we could design technologies that helped people draw boundaries more efficiently rather than technologies that were pushing them toward openness all the time?

Host: I love that. And I’m going to circle back, in a bit, to some of those issues of designing for dialectic and some of the issues around unintended consequences. But first, I want to talk about a couple books you wrote. Before we talk about your newest book, I want to spend a little time talking about another book you wrote called Personal Connections in the Digital Age. And in it, you challenge conventional wisdom that tends to blame new technologies for what we might call old problems. Talk a little bit about Personal Connections in the Digital Age.

Nancy Baym: That book came out of a course that I had been teaching for, oh gosh, fifteen, sixteen, seventeen years, something like that, about communication and the internet, and one of the things that tends to come up is just what you’re talking about. This idea that people tend to receive new technologies as though this is the first time these things have ever been disrupted. So, part of what that book tries to do is to show how the way that people think and talk about the internet has these very long histories in how people think and talk about other communication technologies that have come before. So, for example, when the telephone was invented, there was a lot of concern that the telephone was going to lead to social disengagement, particularly among women, who would spend all the time talking on the phone and would stop voting. Um… (laughter) which doesn’t sound all that different from some contemporary ways that people talk about phones! Only now it’s the cell phones that are going to cause all that trouble. It’s that, but it’s also questions around things like, how do we present ourselves online? How do we come to understand who other people are online? How does language change when it’s used online? How do we build relationships with other people? How do we maintain relationships with people who we may have met offline? And also, how do communities and social networks form and get maintained through these communication technologies? So, it’s a really broad sweep. I think of that book as sort of the “one stop shop” for everything you need to know about personal connections in the digital age. If you just want to dive in and have a nice little compact introduction to the topic.

Host: Right. There are other researchers looking into these kinds of things as well. And is your work sort of dovetailing with those findings in that area of personal relationships online?

Nancy Baym: Yeah, yeah. There’s quite a bit of work in that field. And I would say that, for the most part, the body of work, which I review pretty comprehensively in Personal Connections in the Digital Age, tends to show this much more nuanced, balanced, “for every good thing that happens, something bad happens” picture. And for all of the mythologies that “it’s destroying children,” or “you can’t trust people you meet online,” or “people aren’t their real selves,” or even the idea that there’s something called “real life” which is separate from what happens on the internet, the empirical evidence from research tends to show that, in fact, online interaction is really deeply interwoven with all of our other forms of communication.

Host: I think you used the word “moral panic” which happens when a new technology hits the scene, and we’re all convinced that it’s going to ruin “kids today.” They won’t have manners or boundaries or privacy or self-control, and it’s all technology’s fault. So that’s cool that you have a kind of answer to that in that book. Let’s talk about your new book which is super fascinating: Playing to the Crowd: Musicians, Audiences and the Intimate Work of Connection. Tell us how this book came about and what was your motivation for writing it?

Nancy Baym: So, this book is the result of many years of work, but it came to fruition because I had done some early work about online fan community, particularly soap opera fans, and how they formed community in the early 1990s. And then, at some point, I got really interested in what music fans were doing online and so I started a blog where I was posting about music fans and other kinds of fans and the kinds of audience activities that people were doing online and how that was sort of messing with relationships between cultural producers and audiences. And that led to my being invited to speak at music industry events. And what I was seeing there was a lot of people with expertise saying things like, “The problem is, of course, that people are not buying music anymore, so the solution to this problem is to use social media to connect with your audience because if you can connect with them, and you can engage them, then you can monetize them.” And then I was seeing the musicians ask questions, and the kinds of questions that they were asking seemed very out-of-step with the kind of advice that they were being given. So, they would be asking questions like, do I have to use all of the sites? How do I know which ones to use? So, I got really interested in this question, of sort of, what, from the point of view of these people who were being told that their livelihood depends on creating some kind of new social relationship using these media with audiences, what is this call to connect and engage really about? What does it feel like to live with that? What are the issues it raises? Where did it come from?
And then this turned into a much larger-scoped project thinking about musicians as a very specific case, but one with tremendous resonance for the ways that so many workers in a huge variety of fields now, including research, feel compelled to maintain some kind of visible, public persona that engages with and courts an audience so that when our next paper comes out, or our next record drops, or our next film is released or our next podcast comes out, the audience is already there and interested and curious and ready for it.

Host: Well let me interject with a question based on what you said earlier. How does that necessarily translate into monetization? I can see it translating into relationship and, you know, followership, but is there any evidence to support the you know…?

Nancy Baym: It’s magic, Gretchen, magic!

Host: OK. I thought so! I knew it!

Nancy Baym: You know, I work with economists and I keep saying, “Guys, let’s look at this. This is such a great research problem.” Is it true, right? Because you will certainly hear from people who work at labels or work in management who will say, “We see that our artists who engage more do better.” But in terms of any large-scale sense of “what works for which artists when?” and “does it really work across samples?” we come back to the million-dollar question you just asked: does it actually work? And I don’t know that we know the answer to that question. For some individuals, some of the time, yes. For the masses, reliably, we don’t know.

Host: Well and the other thing is, being told that you need to have this social media presence. It’s work, you know?

Nancy Baym: That’s exactly the point of the book, yeah. And it’s not just that it’s work, it’s that it’s work that never, ever ends. Because your phone is in your pocket, right? So, you’re sitting at home on a Sunday morning, having a cup of coffee, and even if you don’t do it, there’s always the possibility of, “Oh, I could tweet this out to my followers right now. I could turn this into an Instagram story.” So, the, the possibility of converting even your most private, intimate moments into fodder for your work life is always there, now. And the promise is, “Oh, if you get a presence, then magic will happen.” But first of all, it’s a lot of work to even create the presence and then to maintain it, you have to sell your personality now. Not just your stuff. You have to be about who you are now and make that identity accessible and engaging and what not. And yet it’s not totally clear that that’s, in fact, what audiences want. Or if it is what audiences want, which audiences and for which kinds of products?

(music plays)

Host: Well, let’s get back to the book a little bit. In one chapter, there’s a subsection called How Music Fans Came to Rule the Internet. So, Nancy, how did music fans come to rule the internet?

Nancy Baym: So, the argument that I make in that chapter is that from the earliest, earliest days of the internet, music fans, and fans in general, were not just using the internet for their fandom, but were people who were also actively involved in creating the internet and creating social computing. So, I don’t want to say that music fans are the only people who were doing this, because they weren’t, but, from the very beginnings of online interaction, in like 1970, you already had the very people who are inventing the concept of a mailing list, at the same time saying, “Hey, we could use one of these to exchange Grateful Dead tickets, ‘cause I have some extra ones and I know there’s some other people in this building who might want them.” So, you have people at Stanford’s Artificial Intelligence laboratory in the very beginning of the 1970s saying, “Hey, we could use this enormous amount of computing power that we’ve got to digitize The Grateful Dead lyrics.” You have community computing projects like Community Memory being launched in the Bay Area putting their first terminal in a record store as a means of bringing together community. And then, from those early, early moments throughout, you see over and over and over again, music fans creating different forms of online community that then end up driving the way that the internet develops, peer-to-peer file sharing being one really clear example of a case where music fans helped to develop a technology to serve their needs, and by virtue of the success of that technology, ended up changing not just the internet, but industries that were organized around distributing cultural materials.

Host: One of the reviewers of Playing to the Crowd, and these reviews tend to be glowing, right? But he said, “It’ll change the way we think about music, technology and people.” So, even if it didn’t change everything about the way we think about music technology and people, what kinds of sort of “ah-ha findings” might people expect to find in the book?

Nancy Baym: I think one of the big ah-has is the extent to which music is a form of communication which has become co-opted, in so many ways, by commercial markets, and alongside that, the ways in which personal relationships and personal communication, have also become co-opted by commercial markets. Think about the ways that communication platforms monetize our everyday, friendly interaction through advertising. And the way that these parallel movements of music and relational communication from purely social activities to social activities that are permeated by commercial markets raises dialectic tensions that people then have to deal with as they’re continually navigating moving between people and events and circumstances and moments in a world that is so infused by technology and where our relationships are infused by technology.

Host: So, you’ve used the word “dialectic” in the context of computer interface design, and talked about the importance of designing for dialectic. Talk about what you mean by that and what kinds of questions arise for a developer or a designer with that mindset?

Nancy Baym: So, “dialectic” is one of the most important theoretical concepts to me when I think about people’s communication and people’s relationships in this project, but, in general, it’s a concept that I come back to over and over and over, and the idea is that we always have competing impulses that are both valid, and which we have to find balance between. So, a very common dialectic in interpersonal relationships is the desire to, on the one hand, be connected to others, and on the other, to be autonomous from others. So, we have that push and pull between “I want us to be part of each other’s lives all the time, and also leave me alone to make my own decisions.” (laughter) So that dialectic tension is not that one is right and one is wrong. There are, as some of the theorists I cite on this argue, probably infinite dialectic tensions of the form “I want this, but I also want its opposite,” right? And so, if we think about social interaction, instead of it being some sort of linear model where we start at point A with somebody and we move onto B and then C and then D, if we think of it instead as, even as we’re moving from A to B to C, that’s a tightrope. But at any given moment we can be toppling into one side or the other if we’re not balancing them carefully. So, if we think about a lot of the communication technologies that are available to us right now, they are founded, often quite explicitly, on a model of openness and connection and sharing. So, those are really, really valuable positions. But they’re also ends of dialectics that have opposite ends that are also very valid. So, all of these ways in which we’re pushed to be more open, more connected, to share more things, they are actually always in conflict within us with desires to be protective of other people or protective of ourselves, to have some distance from other people, to have autonomy.
And to be able to have boundaries that separate us from others, as well as boundaries that connect us to one another. So, my question for designers is, how could we design in ways that make it easier for people to adjust those balances? In a way, you could sort of think about it as, what if we made the tightrope, you know, thicker so that it were easier for people to balance on, and you didn’t need to be so good at it, to make it work moment-to-moment?

Host: You know, everything you’ve just said makes me think of, you know, say, someone who wants to get involved in entertainment, in some way, and one of the plums of that is being famous, right? And then you find…

Nancy Baym: Until they are.

Host: …Until you are… that you don’t have control over all the attention you get and so that dialectic of “I want people to notice me/I want people to leave me alone” becomes wildly exacerbated there. But I think, you know, we all see “over-sharers,” as my daughter calls them, on social media. It’s like keep looking at me all the time. It’s like too much information. Have some privacy in your life…

Nancy Baym: Well you know, but that’s a great case, because I would say too much information is not actually a property of information, or of the person sending that information, it’s a property of the person receiving that information. Because, in fact, for some, it’s not going to be too much information. For some, it’s going to be exactly the right amount of information. So, I think of the example, of, from my point of view, a number of people who are parents of young children post much too much information on social networks. In particular, I’m really, really turned off by hearing about the details of their trivial illnesses that they’re going through at any given moment. You know, I mean if they got a real illness, of course I want to hear about it, but if, you know, they got a fever this week and they’re just feeling a little sick, I don’t really need daily updates on their temperature, for instance. Um… on the other hand, I look at that, and I say, “Oh, too much information.” But then I say, “I’m not the audience for that.” They’ve got 500-600 friends. They probably put that there for grandma and the cousins who actually really do care. And I’m just not the audience. So, it’s not that that’s too much information. It’s that that information wasn’t meant for me. And instead of blaming them for having posted it, maybe I should just look away and move on to the next item in my feed. That’s ok, too. I’m sure that some of the things that I share strike some people as too much information but then, I’ll tell you what, some of the things that I post that I think of as too much information, those are often the ones that people will later, in other contexts, say, “Oh my gosh, it meant so much to me that you posted about… whatever.” So, you know, we can’t just make these judgements about the content of what other people are producing without understanding the contexts in which it’s being received, and by whom.

Host: That is such a great reminder to us to have grace.

Nancy Baym: Grace for other people, that too, yeah.

Host: You’ve been watching, studying and writing about cyberculture for a long time. Going back a ways, what did you see, or even foresee, when you started doing this research and what if anything has surprised you along the way?

Nancy Baym: Well, it’s a funny thing. I mean, when I started doing this research, it was 1991. And the landscape has changed so much since then, so that the kinds of things that I could get away with being an insightful scholar for saying in 1991 are practically laughable now, because people just didn’t understand, at that time, that these technologies were actually going to be really socially useful. That people were going to use these technologies to present themselves to others, to form relationships, to build communities, that they were going to change the way audiences engaged, that they were going to change politics, that they were going to change so many practices of everyday life. And I think that those of us who were involved in cyberculture early, whether it was as researchers or just participants, could see that what was happening there was going to become something bigger than it was in those early days.

(music plays)

Host: I ask all of the researchers that come on the podcast some version of the question, “Is there anything that keeps you up at night?” To some degree, I think your work addresses that. You know, what ought we to be kept up at night about, and how, how ought we to address it? Is there anything that keeps you up at night, or anything that should keep us up at night that we should be thinking about critically as we’re in this landscape now?

Nancy Baym: Oh gosh, do any of us sleep anymore at all? (laughter) I mean I think what keeps me up nights is thinking, is it still ok to study the personal and the ordinary when it feels like we’re in such extraordinary, tumultuous and frightening times, uh, nationally and globally? And I guess, when I’m lying awake at 4 in the morning saying, “Oh, maybe I just need to start studying social movements and give up on this whole interpersonal stuff,” what I keep coming back to is this. I say to myself, “Wait a minute. The reason that we’re having so much trouble right now, at its heart, is that people are not having grace in their relations with one another,” to go back to your phrase. That what we really, really need right now more than anything is to be reconnected to our capacity for human connection with others. And so, in that sense, then, I kind of put myself to sleep by saying, “OK, there’s nothing more important than actual human connection and respect for one another.” And so that’s what I’m trying to foster in my work. So, I’m just going to call that my part and write a check for some of those other causes I can’t contribute to directly.

Host: I, I love that answer. And that actually leads beautifully into another question which is that your social science work at MSR is unique at industrial research labs. And I would call Microsoft, still, an industrial, you know, situation.

Nancy Baym: Definitely.

Host: So, you get to study unique and challenging research problems.

Nancy Baym: I have the best job in the world.

Host: No, I do, but you got a good one. Because I get to talk to people like you. But what do you think compels a company like Microsoft, perhaps somewhat uniquely, to encourage researchers like you to study and publish the things you do? What’s in it for them?

Nancy Baym: My lab director, Jennifer Chayes, talks about it as being like a portfolio which I think is, is a great way to think about it. So, you have this cast of researchers in your portfolio and each of them is following their own path to satisfying their curiosity and by having some of those people in that portfolio who really understand people, who really understand the way that technologies play out in ordinary people’s everyday lives and lived experiences, there may be moments where that’s exactly the stock you need at that moment. That’s the one that’s inflating and that’s the expertise that you need. So, given that we’re such a huge company, and that we have so many researchers studying so many topics, and that computing is completely infused with the social world now… I mean, if we think about the fact that we’ve shifted to so much cloud and that clouds are inherently social in the sense that it’s not on your private device, you have to trust others to store your data, and so many things are now shared that used to be individualized in computing. So, if computing is infused with the social, then it just doesn’t even really make sense for a tech company to not have researchers who understand the social, and who are studying the social, and who are on hand with that kind of expertise.

Host: As we close, Nancy, what advice would you give to aspiring researchers, maybe talking to your 25-year-old self, who might be interested in entering this field now, which is radically different from where it was when you started looking at it. What, what would you say to people that might be interested in this?

Nancy Baym: I would say, remember that there is well over a hundred years of social theory out there right now, and the fact that we have new communication technologies does not mean that people have started from scratch in their communication, and that we need to start from scratch in making sense of it. I think it’s more important than ever, when we’re thinking about new communication technologies, to understand communication behavior and the way that communication works, because that has not fundamentally transformed. The media through which we use it have, but the way communication works to build identity, community, relationships, that has not fundamentally, magically, become something different. The same kind of interpersonal dynamics are still at play in many of these things. I think of the internet and communication technologies as being like funhouse mirrors. Where some phenomena get made huge and others get made small, so there’s a lot of distortion that goes on. But nothing entirely new is reflected that never existed before. So, it’s really important to understand the precedents for what you’re seeing, both in terms of theory and similar phenomena that might have occurred in earlier incarnations, in order to be able to really understand what you’re seeing in terms of both what is new, but also what’s not new. Because otherwise, what I see a lot in young scholarship is, “Look at this amazing thing people are doing in this platform with this thingy.” And it is really interesting, but it also actually looks a whole lot like what people were doing on this other platform in 1992, which also kind of looks a lot like what people were doing with ‘zines in the 1920s. And if we want to make arguments about what’s new and what’s changing because of these things, it’s so important that we understand what’s not new and what these things are not changing.

(music plays)

Host: Nancy Baym, it’s been an absolute delight talking to you today. I’m so glad you took time to talk to us.

Nancy Baym: Alrighty, bye.

To learn more about Dr. Nancy Baym, and how social science scholars are helping real people understand and navigate the digital world, visit Microsoft.com/research.

Podcast: Ben Cutler talks about putting the cloud under the sea

Ben Cutler from Microsoft Research. Photo by Maryatt Photography.

Episode 40, September 5, 2018

Data centers have a hard time keeping their cool. Literally. And with more and more data centers coming online all over the world, calls for innovative solutions to “cool the cloud” are getting loud. So, Ben Cutler and the Special Projects team at Microsoft Research decided to try to beat the heat by using one of the best natural venues for cooling off on the planet: the ocean. That led to Project Natick, Microsoft’s prototype plan to deploy a new class of eco-friendly data centers, under water, at scale, anywhere in the world, from decision to power-on, in 90 days. Because, presumably for Special Projects, go big or go home.

In today’s podcast we find out a bit about what else the Special Projects team is up to, and then we hear all about Project Natick and how Ben and his team conceived of, and delivered on, a novel idea to deal with the increasing challenges of keeping data centers cool, safe, green, and, now, dry as well!

Episode Transcript

Ben Cutler: In some sense we’re not really solving new problems. What we really have here is a marriage of these two mature industries. One is the IT industry, which Microsoft understands very well. And then the other is a marine technologies industry. So, we’re really trying to figure out how do we blend these things together in a way that creates something new and beneficial?

(music plays)

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Data centers have a hard time keeping their cool. Literally. And with more and more data centers coming online all over the world, calls for innovative solutions to “cool the cloud” are getting loud. So, Ben Cutler and the Special Projects team at Microsoft Research decided to try to beat the heat by using one of the best natural venues for cooling off on the planet: the ocean. That led to Project Natick, Microsoft’s prototype plan to deploy a new class of eco-friendly data centers, under water, at scale, anywhere in the world, from decision to power-on, in 90 days. Because, presumably for Special Projects, go big or go home.

In today’s podcast we find out a bit about what else the Special Projects team is up to, and then we hear all about Project Natick, and how Ben and his team conceived of, and delivered on, a novel idea to deal with the increasing challenges of keeping data centers cool, safe, green, and, now, dry as well! That and much more on this episode of the Microsoft Research Podcast.

Host: Ben Cutler. Welcome to the podcast.

Ben Cutler: Thanks for having me.

Host: You’re a researcher in Special Projects at MSR. Give us a brief description of the work you do. In broad strokes, what gets you up in the morning?

Ben Cutler: Well, so I think Special Projects is a little unusual. Rather than have a group that always does the same thing persistently, it’s more based on this idea of projects. We find some new idea, something, in our case, that we think is materially important to the company, and go off and pursue it. And it’s a little different in that we aren’t limited by the capabilities of the current staff. We’ll actually go out and find partners, whether they be in academia or very often in industry, who can kind of help us grow and stretch in some new direction.

Host: How did Special Projects come about? Has it always been “a thing” within Microsoft Research, or is it a fairly new idea?

Ben Cutler: Special Projects is a relatively new idea. In early 2014, my manager, Norm Whitaker, who’s a managing scientist inside Microsoft Research, was recruited to come here. Norm had spent the last few years of his career at DARPA, the Defense Advanced Research Projects Agency, which has a very long history in the United States, and a lot of the seminal technology achievements, not just on the defense side, where we see things like stealth, but also on the commercial or consumer side, had their origins in DARPA. And so, we’re trying to bring some of that culture here into Microsoft Research and a willingness to go out and pursue crazy things and a willingness not just to pursue new types of things, but things that are in areas that historically we would never have touched as a company, and just be willing to crash into some new thing and see if it has value for us.

Host: So, that seems like a bit of a shift from Microsoft, in general, to go in this direction. What do you think prompted it, within Microsoft Research to say, “Hey let’s do something similar to DARPA here?”

Ben Cutler: I think if you look more broadly at the company, with Satya, we have this very different perspective, right? Which is, not everything is based on what we’ve done before. And a willingness to really go out there and draw in things from outside Microsoft and new ideas and new concepts in ways that we’ve never done, I think, historically as a company. And this is in some sense a manifestation of this idea of, you know, what can we do to enable every person in every organization on the planet to achieve more? And a part of that is to go out there and look at the broader context of things and what kind of things can we do that might be new that might help solve problems for our customers?

Host: You’re working on at least two really cool projects right now, one of which was recently in the news and we’ll talk about that in a minute. But I’m intrigued by the work you’re doing in holoportation. Can you tell us more about that?

Ben Cutler: If you think about what we typically do with a camera, we’re capturing this two-dimensional information. One stage beyond that is what’s called a depth camera, which, in addition to capturing color information, captures the distance to each pixel. So now I’m getting a perspective and I can actually see the distance and see, for example, the shape of someone’s face. Holoportation takes that a step further where we’ll have a room that we outfit with, say, several cameras. And from that, now, I can reconstruct the full, 3-D content of the room. So, you can kind of think of this as, I’m building a holodeck. And so now you can imagine I’m doing a video conference, or, you know, something as simple as like Facetime, but rather than just sort of getting that 2-D, planar information, I can actually now wear a headset and be in some immersive space that might be two identical conference rooms in two different locations and I see my local content, but I also see the remote content as holograms. And then of course we can think of other contexts like virtual environments, where we kind of share across different spaces, people in different locations. Or even, if you will, a broadcast version of this. So, you can imagine someone’s giving a concert. And now I can actually go be at that concert even if I’m not there. Or think about fashion. Imagine going to a fashion show and actually being able to sit in the front row even though I’m not there. Or, everybody gets the front row seats at the World Cup soccer.

Host: Wow. It’s democratizing event attendance.

Ben Cutler: It really is. And you can imagine I’m visiting the Colosseum and a virtual tour guide appears with me as I go through it and can tell me all about that. Or some, you know, awesome event happens at the World Cup again, and I want to actually be on the soccer field where that’s happening right now and be able to sort of review what happened to the action as though I was actually there rather than whatever I’m getting on television.

Host: So, you’re wearing a headset for this though, right?

Ben Cutler: You’d be wearing an AR headset. For some of the broadcast things you can imagine not wearing a headset. It might be I’ve got it on my phone and just by moving my phone around I can kind of change my perspective. So, there’s a bunch of different ways that this might be used. So, it’s this interesting new capture technology. Much as HoloLens is a display, or a viewing technology, this is the other end, capture, and there’s different ways we can kind of consume that content. One might be with a headset, the other might just be on a PC using a mouse to move around much as I would on a video game to change my perspective or just on a cell phone, because today, there’s a relatively small number of these AR/VR headsets but there are billions of cell phones.

Host: Right. Tell me what you’re specifically doing in this project?

Ben Cutler: In the holoportation?

Host: Yeah.

Ben Cutler: So, really what’s going on right now is, when this project first started, to outfit a room to do this sort of a thing might’ve been a couple hundred thousand dollars of cost, and it might be 1 to 3 gigabits per second of data between sites. So, it’s just not really practical, even at an enterprise level. And so, what we’re working on is, with the HoloLens team and other groups inside the company, to really sort of dramatically bring down that cost. So now you can imagine you’re a grandparent and you want to kind of play with your grandkids who are in some other location in the world. So, this is something that we think, in the next couple years, actually might be at the level the consumers can have access to this technology and use it every day.

Host: This is very much in the research stage, though, right?

Ben Cutler: We have an email address and we hear from people every day, “How do I buy this? How can I get this?” And you know, it’s like, “Hey, here’s our website. It’s just research right now. It’s not available outside the company. But keep an eye on this because maybe that will change in the future.”

Host: Yeah. Yeah, and that is your kind of raison d’être, to bring these impossibles into inevitables in the market. That should be a movie. The Inevitables.

Ben Cutler: I think there’s something similar to that, but anyway…

Host: I think a little, yeah. So just drilling a little bit on the holoportation, what’s really cool I noticed on the website, which is still research, is moving from a room-based hologram, or holoported individual, into mobile holoportation. And you’ve recently done this, at least in prototype, in a car, yes?

Ben Cutler: We have. So, we actually took an SUV. We took out the middle seat. And then we mounted cameras in various locations. Including, actually, the headrests of the first-row passengers. So that if you’re sitting in that back row we could holoport you somewhere. Now this is a little different than, say, that room-to-room scenario. You can imagine, for example, the CEO of our company can’t make a meeting in person, so he’ll take it from the car. And so, the people who are sitting in that conference room will wear an AR headset like a HoloLens. And then Satya would appear in that room as though he’s actually there. And then from Satya’s perspective, he’d wear a VR headset, right? So, he would not be sitting in his car anymore. He would be holoported into that conference room.

(music plays)

Host: Let’s talk about the other big project you’re doing: Project Natick. You basically gave yourself a crazy list of demands and then said, “Hey, let’s see if we can do it!” Tell us about Project Natick. Give us an overview. What it is, how did it come about, where it is now, what does it want to be when it grows up?

Ben Cutler: So, Project Natick is an exploration of manufactured data centers that we place underwater in the ocean. And so, the genesis of this is kind of interesting, because it also shows not just research trying to influence the rest of the company, but that if you’re working elsewhere inside Microsoft, you can influence Microsoft Research. So, in this case, go back to 2013, and a couple employees, Sean James and Todd Rawlings, wrote this paper that said we should put data centers in the ocean and the core idea was, the ocean is a place where you can get good cooling, and so maybe we should look at that for data centers. Historically, when you look at data centers, the dominant cost, besides the actual computers doing the work, is the air conditioning. And so, we have this ratio in the industry called PUE, or Power Usage Effectiveness. And if you go back a long time ago to data centers, PUEs might be as high as 4 or 5. A PUE of 5 says that, for every watt of power for computers, there’s an additional 4 watts for the air conditioning, which is just kind of this crazy, crazy thing. And so, industry went through this phase where we said, “OK, now we’re going to do this thing called hot aisle/cold aisle. We line up all the computers in a row, and cold air comes in one side and hot air goes out the other.” Now, modern data centers that Microsoft builds have a PUE of about 1.125. And the PUE we see of what we have right now in the water is about 1.07. So, we have cut the cooling cost. But more importantly we’ve done it in a way that we’ve made the data center much colder. So, we’re about 10-degrees Celsius cooler than land data centers. And we’ve known, going back to the middle of the 20th century, that higher temperatures are a problem for components and in fact, a 10-degree Celsius difference can mean a factor of 2 difference in the life expectancy of equipment. So, we think that this is one way to bring reliability up a lot.
So, this idea of reliability is really a proxy for server longevity and how do we make things last longer? In addition to cooling, there’s other things that we have here. One of which is the atmosphere inside this data center is a dry nitrogen atmosphere. So, there’s no oxygen. And the humidity is low. And we think that helps get rid of corrosion. And then the other thing is, in land data centers, stuff comes in from outside. So, by having this sealed container, safe under the ocean, we hopefully have this environment that will allow servers to last much longer.
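The PUE arithmetic in the answer above can be spelled out in a few lines. This is just an illustrative sketch using the figures quoted in the conversation; the function name is invented for the example:

```python
def overhead_watts_per_it_watt(pue: float) -> float:
    """Non-IT (cooling and other facility) watts per watt of compute.

    PUE = total facility power / IT equipment power,
    so the overhead is (PUE - 1) watts for every IT watt.
    """
    return pue - 1.0

# Figures quoted in the conversation:
legacy_dc = overhead_watts_per_it_watt(5.0)    # old data centers: ~4 extra watts per IT watt
modern_dc = overhead_watts_per_it_watt(1.125)  # modern Microsoft land data center
natick = overhead_watts_per_it_watt(1.07)      # Natick's underwater vessel
```

On these numbers, the underwater vessel spends roughly half the cooling overhead of a modern land data center, and a tiny fraction of what legacy facilities spent.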

Host: How did data center technology and submarine technology come together so that you could put the cloud under water?

Ben Cutler: Natick is a little bit unusual as a research project because in some sense we’re not really solving new problems. What we really have here is a marriage of these two mature industries. One is the IT industry, which Microsoft understands very well. And then the other is a marine technologies industry. So, we’re really trying to figure out, how do we blend these things together in a way that creates something new and beneficial?

Host: And so, the submarine technology, making something watertight and drawing on the decades that people have done underwater things, how did you bring that together? Did you have a team of naval experts…?

Ben Cutler: So, the first time we did this, we just, sort of, crashed into it, and we, literally, just built this can and we just kind of dropped it in the water, and ok, we can do this, it kind of works. And so, then the second time around, we put out what we call a Request for Information. We’re thinking of doing this thing, and we did this to government and to academia and to industry, just to see who’s interested in playing in this space? What do they think about it? What kind of approaches would they take? And you know, we’re Microsoft. We don’t really know anything about the ocean. We’ve identified a bunch of folks we think do know about it. And on the industry side we really looked at three different groups. We looked to shipbuilders, we looked to people who were doing renewable energy in the ocean, which we should come back to, and then we looked to the oil and gas services industry. And so, we got their response and on the basis of that, we then crafted a Request for Proposal to actually go off and do something with us. And that identified what kind of equipment we put inside it, what our requirements were in terms of how we thought that this would work, how cool it had to be, the operating environment that needed to be provided for the servers, and also some more mundane stuff like, when you’re shipping it, what’s the maximum temperature things can get to when it’s like, sitting in the sun on a dock somewhere? And, on the basis of that, we got a couple dozen proposals from four different continents. And so, we chose a partner and then set forward. And so, in part, we were working with the University of Washington Applied Physics Lab, which is one of three centers of excellence for ocean sciences in the United States, along with Woods Hole and Scripps. And so, we leveraged that capability to help us go through the selection process.
And then the company we chose to work with is a company called Naval Group, which is a French company, and among other things, they do naval nuclear submarines, surface ships, but they also do renewable energies. And, in particular, renewable energies in the ocean, so offshore wind, they do tidal energy which is to say, gaining energy from the motion of the tides, as well as something called OTEC which is Ocean Thermal Energy Conversion. So, they have a lot of expertise in renewable energy. Which is very interesting to us. Because another aspect of this that we like is this idea of co-location with offshore renewable energies. So, the idea is, rather than connecting to the grid, I might connect to renewable energies that get placed in the same location where we put this. That’s actually not a new idea for Microsoft. We have data centers that are built near hydroelectric dams or built near windfarms in Texas. So, we like this idea of renewable energy. And so, as we think about this idea of data centers in the ocean, it’s kind of a normal thing, in some sense, that this idea of the renewables would go with us.

Host: You mentioned the groups that you reached out to. Did you have any conversation with environmental groups or how this might impact sea life or the ocean itself?

Ben Cutler: So, we care a lot about that. We like the idea of co-location with the offshore renewables, not just for the sustainability aspects of this, but also for the fact that a lot of those things are going up near large population centers. So, it’s a way to get close to customers. We’re also interested in other aspects of sustainability. And those include things like artificial reefs. We’ve actually filed an application for a patent having to do with this idea of using undersea data centers, potentially, as artificial reefs.

Host: So, as you look to maybe, scaling up… Say this thing, in your 5-year experiment, does really well. And you say, “Hey, we’re going to deploy more of these.” Are you looking, then, with the sustainability goggles on, so to speak, for Natick staying green both for customers but also for the environment itself?

Ben Cutler: We are. And I think one thing people should understand too, is you look out at the ocean and it looks like this big, vast open space, but in reality, it’s actually very carefully regulated. So anywhere we go, there are always authorities and rules as to what you can do and how you do them, so there’s that oversight. And there’s also things that we look at directly, ourselves. One of the things that we like about these, is from a recyclability standpoint, it’s a pretty simple structure. Every five years, we bring that thing back to shore, we put a new set of servers in, refresh it, send it back down, and then when we’re all done we bring it back up, we recycle it, and the idea is you leave the seabed as you found it. On the government side, there’s a lot of oversight, and so, the first thing to understand is, typically, like, as I look at the data center that’s there now, the seawater that we eject back into the ocean is about eight-tenths of a degree Celsius warmer than the water that came in. It’s a very rapid jet, so, it very quickly mixes with the other seawater. And in our case, the first time we did this, a few meters downstream it was a few thousandths of a degree warmer by the time we were that far downstream.

Host: So, it dissipates very quickly.

Ben Cutler: Water… it takes an immense amount of energy to heat it. If you looked at all of the energy generated by all the data centers in the world and pushed all of it into the ocean, per year you’d raise the temperature a few millionths of a degree. So, in net, we don’t really worry about it. The place that we worry about it is this idea of local warming. And so, one of the things that’s nice about the ocean is because there are these persistent currents, we don’t have buildup of temperature anywhere. So, this question of the local heating, it’s really just, sort of, make sure your density is modest and then the impact is really negligible. An efficient data center in the water actually has less impact on the oceans than an inefficient data center on land does.

Host: Let’s talk about latency for a second. One of your big drivers in putting these in the water, but near population centers, is so that data moves fairly quickly. Talk about the general problems of latency with data centers and how Natick is different.

Ben Cutler: So, there are some things that you do where latency really doesn’t matter. But I think latency gets you in all sorts of ways, and in sometimes surprising ways. The thing to remember is, even if you’re just browsing the web, when a webpage gets painted, there’s all of this back-and-forth traffic. And so, ok, so I’ve got now a data center that’s, say, 1,000 kilometers away, so it’s going to be 10 milliseconds, roundtrip, per communication. But I might have a couple hundred of those just to paint one webpage. And now all of a sudden it takes me like 2 seconds to paint that webpage. Whereas it would be almost instantaneous if that data center is nearby. And think about, also, I’ve got factories and automation and I’ve got to control things. I need really tight controls there in terms of the latency in order to do that effectively. Or imagine a future where autonomous vehicles become real and they’re interacting with data centers for some aspect of their navigation or other critical functions. So, this notion of latency really matters in a lot of ways that will become, I think, more present as this idea of intelligent edge grows over time.
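The back-of-the-envelope numbers in that answer are consistent with the speed of light in fiber, which is roughly two-thirds of c, or about 200 kilometers per millisecond. A quick sketch (the function name and the 200-round-trip page load are illustrative, not from the conversation):

```python
FIBER_KM_PER_MS = 200  # light in glass travels at ~2/3 c, about 200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Best-case fiber round-trip time, ignoring routing and queuing delays."""
    return 2 * distance_km / FIBER_KM_PER_MS

rtt = round_trip_ms(1000)          # ~10 ms for a data center 1,000 km away
page_load_s = 200 * rtt / 1000     # a page needing ~200 round trips: ~2 seconds
```

This is a lower bound; real paths add switching and routing delay, which only strengthens the argument for placing data centers near users.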

Host: Right. And so, what’s Natick’s position there?

Ben Cutler: So, Natick’s benefit here, is more than half the world’s population lives within a couple hundred kilometers of the ocean. And so, in some sense, you’re finding a way to put data centers very close to a good percentage of the population. And you’re doing it in a way that’s very low impact. We’re not taking land because think about if I want to put a data center in San Francisco or New York City. Well turns out, land’s expensive around big cities. Imagine that. So, this is a way to go somewhere where we don’t have some of those high costs. And, potentially, with this offshore renewable energy, and not, as we talked about before, having any impact on the water supply.

Host: So, it could solve a lot of problems all at once.

Ben Cutler: It could solve a lot of problems in this very, sort of, environmentally sustainable way, as well as, in some sense, adding these socially sustainable factors as well.

Host: Yeah. Talk a little bit about the phases of this project. I know there’s been more than one. You alluded to that a little bit earlier. But what have you done stage wise, phase wise? What have you learned?

Ben Cutler: So, Phase 1 was a Proof of Concept, which is literally, we built a can, and that can had a single computer rack in it, and that rack only had 24 servers. And that was about one-third of the space of the rack. It was a standard, what we call, 42U rack, which reflects the size of the rack. Fairly standard for data centers. And then the other two thirds were filled with what we call load trays. Think of them as, all they do is, they’ve got big resistors that generate heat. So, it’s like hairdryers. And so, they’re used, actually, today in data centers to just, sort of, commission new data centers. Test the cooling system, actually. In our case, we just wanted to generate heat. Could we put these things in the water? Could we cool it? What would that look like? What would be the thermal properties? So, that was a Proof of Concept just to see, could we do this? Could we just, sort of, understand the basics? Were our intuitions right about this? What sort of problems might we encounter? And just, you know, I hate to use… but, you know, get our feet wet. Learning how to interact…

Host: You had to go there.

Ben Cutler: It is astonishing the number of expressions that relate to water that we use.

Host: Oh gosh, the puns are…

Ben Cutler: It’s tough to avoid. So, we just really wanted to get some sense of, what it was like to work with the marine industry? Every company and, to some degree, industry, has ways in which they work. And so, this was really an opportunity for us to learn some of those and become informed, before we go to this next stage that we’re at now. Which is more as a prototype stage. So, this vessel that we built this time, is about the size of a shipping container. And that’s by intent. Because then we’ve got something that’s of a size that we can use standard logistics to ship things around. Whether the back of a truck, or on a container ship. Again, keeping with this idea of, if something like this is successful, we have to think about what are the economics of this? So, it’s got 12 racks this time. It’s got 864 servers. It’s got FPGAs, which is something that we use for certain types of acceleration. And then, each of those 864 servers has 32 terabytes of disks. So, this is a substantial amount of capability. It’s actually located in the open ocean in realistic operating conditions. And in fact, where we are, in the winter, the waves will be up to 10 meters. We’re at 36 meters depth. So that means the water above us will vary between 26 and 46 meters deep. And so, it’s a really robust test area. So, we want to understand, can this really work? And what, sort of, the challenges might be in this realistic operating environment.

Host: So, this is Phase 2 right now.

Ben Cutler: This is Phase 2. And so now we’re in the process of learning and collecting data from this. And just going through the process of designing and building this, we learned all sorts of interesting things. And so, turns out, when you’re building these things to go under the ocean, one of the cycling that you get is just from the waves going by. And so, as you design these things, you have to think about how many waves go by this thing over the lifetime? What’s the frequency of those waves? What’s the amplitude of those waves? And this all impacts your design, and what you need to do, based on where you’re going to put it and how long it will be there. So, we learned a whole bunch of stuff from this. And we expect everything will all be great and grand over the next few years here. But we’ll obviously be watching, and we’ll be learning. If there is a next phase, it would be a pilot. And now we’re talking about building something that’s larger scale. So, it might be multiple vessels. There might be a different deployment technology than what we used this time, to get greater efficiency. So, I think those are things that, you know, we’re starting to think about, but mostly, right now, we’ve got this great thing in the water and we’re starting to learn.

Host: Yeah. And you’re going to leave it alone for 5 years, right?

Ben Cutler: This thing will just be down there. Nothing will happen to it. There will be no maintenance until it’s time to retire the servers, which, in a commercial setting, might be every 5 years or longer. And then we’ll bring it back. So, it really is the idea of a lights-out thing. You put it there. It just does its thing and then we go and pull it back later. In an actual commercial deployment, we’d probably be deeper than 36 meters. The reason we’re at 36 meters, is, it turns out, 40 meters is a safe distance for human divers to go without a whole lot of special equipment. And we just wanted that flexibility in case we did need some sort of maintenance or some sort of help during this time. But in a real commercial deployment, we’d go deeper, and one of the reasons for that, also, is just, it will be harder for people to get to it. So, people worry about physical security. We, in some sense, have a simpler challenge than a submarine because a submarine is typically trying to hide from its adversaries. We’re not trying to hide. If we deploy these things, we’d always be within the coastal waters of a country and governed by the laws of that country. But we do also think about, let’s make this thing safe. And so, one of the safety aspects is not just the ability to detect when things are going around you, but also to put it in a place where it’s not easy for people to go and mess with it.

Host: Who’s using this right now? I mean this is an actual test case, so, it’s a data center that somebody’s accessing. Is it an internal data center or what’s the deal on that?

Ben Cutler: So, this data center is actually on our global network. Right now, it’s being used by people internally. We have a number of different teams that are using it for their own production projects. One group that’s working with it, is we have an organization inside Microsoft called AI for Earth. We have video cameras, and so, one of the things that they do is, they’re watching the different fish going by, and other types of much more bizarre creatures that we see. And characterizing and counting those, and so we can kind of see how things evolve over time. And one of the things we’re looking to do, potentially, is to work with other parties that do these more general assessments and then provide some of those AI technologies to them for their general research of marine environment and how, when you put different things in the water, how that affects things, either positively or negatively. Not just, sort of, what we’re doing, but other types of things that go in the water which might be things as simple as cables or marine energy devices or other types of infrastructure.

Host: I would imagine, when you deploy something in a brand-new environment, that you have unintended consequences or unexpected results. Is there anything interesting that’s come out of this deployment that you’d like to share?

Ben Cutler: So, I think when people think of the ocean, they think this is like a really hostile and dangerous place to put things. Because we’re all used to seeing big storms, hurricanes and everything that happens. And to be sure, right at that interface between land and water is a really dangerous place to be. But what you find is that, deep under the waves on the seabed, is a pretty quiet and calm place. And so, one of the benefits that we see out of this, is that even for things like 100-year hurricanes, you will hear, acoustically, what’s going on, on the surface, or near the land… waves crashing and all this stuff going on. But it’s pretty calm down there. The idea that we have this thing deep under the water that would be immune to these types of things is appealing. So, you can imagine this data center down there. This thing hits. The only connectivity back to land is going to be fiber. And that fiber is largely glass, with some insulating shell, so it might break off. But the data center will keep operating. Your data center will still be safe, even though there might be problems on land. So, this diversity of risk is another thing that’s interesting to people when we talk about Natick.

Host: What about deployment sites? How have you gone about selecting where you put Project Natick and what do you think about other possibilities in the future?

Ben Cutler: So, for this Phase 2, we’re in Europe. And Europe, today, is the leader in offshore renewable energies. Twenty-nine of the thirty largest offshore windfarms are located in Europe. We’re deployed at the European Marine Energy Center in the Orkney Islands of Scotland. The grid up there is 100% renewable energy. It’s a mix of solar and wind as well as these offshore energies that people are testing at the European Marine Energy Center or EMEC. So, tidal energy and wave energy. One of the things that’s nice about EMEC is people are testing these devices. So, in the future, we have the option to go completely off this grid. It’s 100% renewable grid, but we can go off and directly connect to one of those devices and test out this idea of a co-location with renewable energies.

Host: Did you look at other sites and say, hey, this one’s the best?

Ben Cutler: We looked at a number of sites. Both test sites for these offshore renewables as well as commercial sites. For example, going into a commercial windfarm right off the bat. And we just decided, at this research phase, we had better support and better capabilities in a site that was actually designed for that. One of the things is, as I might have mentioned, the waves there get very, very large in the winter. So, we wanted some place that had very aggressive waters so that we know that if we survive in this space that we’ll be good pretty much anywhere we might choose to deploy.

Host: Like New York. If you can make it there…

Ben Cutler: Like New York, exactly.

Host: You can make it anywhere.

Ben Cutler: That’s right.

(music plays)

Host: What was your path to Microsoft Research?

Ben Cutler: So, my career… I would say that there’s been very little commonality in what I’ve done. But the one thing that has been common is this idea of taking things from early innovation to market introduction. So, a lot of my early career was in startup companies, either as a founder or as a principal. I was in supercomputers, computer storage, video conferencing, different types of semiconductors, and then I was actually here at Microsoft earlier, and I was working in a group exploring new operating system technologies. And then, after that, I went to DARPA, where I was there for a few years working on different types of information technology. And then I came back here. And, truthfully, when I first heard about this idea that they were thinking about doing these underwater data centers, it just sounded like the dumbest idea to me, and… But you know, I was willing to go and then, sort of, try and think through, ok, on the surface it sounds ridiculous. But a lot of things start that way. And you have to be willing to go in, understand the economics, understand the science and the technology involved, and then draw some conclusion of whether you think that can actually go somewhere reasonable.

Host: As we close, Ben, I’m really interested in what kinds of people you have on your team, what kinds of people might be interested in working on Special Projects here. Who’s a good fit for a Special Projects research career?

Ben Cutler: I think we’re looking for people who are excited about the idea of doing something new and don’t have fear of doing something new. In some sense, it’s a lot like people who’d go into a startup. And what I mean by that is, you’re taking a lot more risk, because I’m not in a large organization, I have to figure a lot of things out myself, I don’t have a team that will know all these things, and a lot of things may fall on the floor just because we don’t have enough people to get everything done. It’s kind of like driving down the highway and you’re, you know, lashed to the front bumper of the car. You’re fully exposed to all the risk and all the challenges of what you’re doing. And you’re, you know, wide open. There’s no end of things to do and you have to figure out what’s important, what to prioritize, because not everything can get done. But have the flexibility to really, then, understand that even though I can’t get everything done, I’m going to pick and choose the things that are most important and really drive in new directions without a whole lot of constraints on what you’re doing. So, I think that’s kind of what we look to. I have only two people who actually directly report to me on this project. That’s the team. But then I have other people who are core members, who worked on it, who report to other people, and then across the whole company, more than two hundred people touched this Phase 2, in ways large and small. Everything from helping us design the data center, to people who refurbished servers that went into this. So, it’s really a “One Microsoft” effort. And so, I think that there’s always opportunities to engage, not just by being on a team, but interacting and providing your expertise and your knowledge base to help us be successful. Because it’s only in that way that we can take these big leaps.
And so, in some sense, we’re trying to make sure that Microsoft Research is really staying true to this idea of pursuing new things but not just five years out, in known fields, but look at these new fields. Because the world is changing. And so, we’re always looking for people who are open to these new ideas and frankly are willing to bring new ideas with them as to where they think we should go and why. And that’s how we as a company I think grow and see new markets and are successful.

(music plays)

Host: Ben Cutler, it’s been a pleasure. Thanks for coming on the podcast today.

Ben Cutler: My pleasure as well.

To learn more about Ben Cutler, Project Natick, and the future of submersible data centers, visit natick.research.microsoft.com.


60 seconds with … Cambridge Research Lab Director Chris Bishop

Chris Bishop leads Microsoft’s research lab in Cambridge, which has been at the forefront of AI, machine learning and deep learning research for 20 years. Its work contributes to many Microsoft products and features, such as the Clutter feature in Office.

Name: Chris Bishop

Role: Technical Fellow and Laboratory Director

Age: 59

Lives: Cambridge, UK

Family: Wife and two sons (both at university, studying Biology and Computer Science)

Pets: Two cats

Hobbies: Flying aeroplanes

Tell us about your current role?

I was one of the first people to join Microsoft’s Research Lab in Cambridge UK, back when the lab was first opened in 1997, before being named Lab Director two-and-a-half years ago, so I’ve been involved in growing and shaping the lab for more than two decades. Today my role includes leadership of the MSR Cambridge lab, as well as coordination of the broader Microsoft presence in Cambridge. I am fortunate in being supported by a very talented leadership team and a highly capable and motivated team of support staff.

What were your previous jobs?

My background is in theoretical physics. After graduating from Oxford, I did a PhD in quantum field theory at the University of Edinburgh, exploring some of the fundamental mathematics of matter, energy, and space-time. After my PhD I wanted to do something that would have potential for practical impact, so I joined the UK’s national fusion research lab to work on the theory of magnetically confined plasmas as part of a long-term goal to create unlimited clean energy. It was during this time that there were some breakthroughs in the field of neural networks. I was very inspired by the concept of machine intelligence, and the idea that computers could learn for themselves. Initially I started applying neural networks to problems in fusion research, and we became the first lab to use neural networks for real-time feedback control of a high-temperature fusion plasma.

In fact, I found neural networks so fascinating that, after about eight years working on fusion research, I took a rather radical step and switched fields into machine learning. I became a Professor at Aston University in Birmingham, where I set up a very successful research lab. Then I took a sabbatical and came to Cambridge for six months to run a major, international programme called “Neural Networks and Machine Learning” at the Isaac Newton Institute. The programme started on July 1, 1997, on the very same day that Microsoft announced it was opening a research lab in Cambridge, its first outside the US. I was approached by Microsoft to join the new lab, and have never looked back.

What are your aims at Microsoft?

My ambition is for the lab to have an impact on the real world at scale by tackling very hard research problems, and by leveraging the advantages and opportunities we have as part of Microsoft. I often say that I want the MSR Cambridge lab to be a critical asset for the company.

I’m also very passionate about diversity and inclusion, and we have introduced multiple initiatives over the last year to support this. We are seeing a lot of success in bringing more women into technical roles in the lab, across both engineering and research, and that’s very exciting to see.

What’s the hardest part of your job?

A core part of my job is to exercise judgment in situations where there is no clear right answer. For instance, in allocating limited resources I need to look at the risk, the level of investment, the potential for impact, and the timescale. At any one time there will be some things we are investing in that are quite long term but where the impact could be revolutionary, along with other things that have perhaps been researched for several years which are beginning to get real traction, all the way to things that have had real-world impact already. The hardest part of my job is to weigh up all these factors and make some difficult decisions on where to place our bets.



What’s the best part of your job?

The thing I enjoy most is the wonderful combination of technology and people. Those are two aspects I find equally fascinating, yet they offer totally different kinds of challenges. We, as a lab, are constantly thinking about technology, trends and opportunities, but also about the people, teams, leadership, staff development and recruitment, particularly in what has become a very competitive talent environment. The way these things come together is fascinating. There is never a dull day here.

What is a leader?

I think of leadership as facilitating and enabling, rather than directing. One of the things I give a lot of attention to is leadership development. We have a leadership team for the lab and we meet once a week for a couple of hours. I think about the activities of that team, but also about how we function together. It’s the diversity of the opinions of the team members that creates a value that’s greater than the sum of its parts. Leadership is about harnessing the capabilities of every person in the lab and allowing everyone to bring their best game to the table. I therefore see my role primarily as drawing out the best in others and empowering them to be successful.

What are you most proud of?

Last year I was elected a Fellow of the Royal Society, and that was an incredibly proud moment. There’s a famous book I got to sign, and you can flip back and see the signatures of Isaac Newton, Charles Darwin, Albert Einstein, and pretty much every scientist you’ve ever heard of. At the start of the book is the signature of King Charles II who granted the royal charter, so this book contains over three-and-a-half centuries of scientific history. That was a very humbling but thrilling moment.

Another thing I’m very proud of was the opportunity to give the Royal Institution Christmas Lectures. The Royal Institution was set up more than 200 years ago – Michael Faraday was one of the early directors – and around 14 Nobel prizes have been associated with the Institution, so there is a tremendous history there too. These days it’s most famous for the Christmas Lectures, which were started by Faraday. Ever since the 1960s these lectures have been broadcast on national television at Christmas, and I watched them as a child with my mum and dad. They were very inspirational for me and were one of the factors that led me to choose a career in science. About 10 years ago, I had the opportunity to give the lectures, which would have been inconceivable to me as a child. It was an extraordinary moment to walk into that famous iconic theatre, where Faraday lectured many times and where so many important scientific discoveries were first announced.

One Microsoft anecdote that relates to the lectures is that getting selected was quite a competitive process. It eventually came down to a shortlist of five people, and I was very keen to be chosen, especially as it was the first time in the 200-year history of the lectures that they were going to be on the subject of computer science. I was thinking about what I could do to get selected, so I wrote to Bill Gates, explained how important these lectures were and asked him whether, if I was selected, he would agree to join me as a guest in one of the lectures. Fortunately, he said yes, and so I was able to include this in my proposal to the Royal Institution. When I was ultimately selected, I held Bill to his promise and interviewed him via satellite on live television during one of the lectures.

Chris Bishop is elected a Fellow of the Royal Society

What inspires you?

I love the idea that through our intellectual drive and curiosity we can use technology to make the world a better place for millions of people. For example, the field of healthcare today largely takes a one-size-fits-all approach that reactively waits until patients become sick before responding, and which is increasingly associated with escalating costs that are becoming unsustainable. The power of digital technology offers the opportunity to create a data-driven approach to healthcare that is personalised, predictive and preventative, and which could significantly reduce costs while also improving health and wellbeing. I’ve made Healthcare AI one of the focal points of the Cambridge lab, and I find it inspiring that the combination of machine learning, together with Microsoft’s cloud, could help to bring about a much-needed transformation in healthcare.

What is your favourite Microsoft product?

A few years ago, the machine learning team here in Cambridge built a feature, in collaboration with the Exchange team, called Clutter. It separates the email you should pay attention to now from the messages that can be left until, say, a Friday afternoon. I love it because it’s used by tens of millions of people, and it has some very beautiful research ideas at the heart of it – something called a hierarchical Bayesian machine learning model. This gives it a nice out-of-the-box experience, a sort of average that does OK for everybody, but as you engage with it, it personalises and learns your particular preferences for what constitutes urgent versus non-urgent email. The other reason I’m particularly fond of it is that when I became Lab Director, the volume of email in my inbox quadrupled. That occurred just as we were releasing the Clutter feature, so it arrived just in time to save me from being overwhelmed.
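To give a flavour of the hierarchical idea described above, here is a toy sketch (my own construction, not Clutter’s actual model): pooled counts across all users act as a shared prior, so a brand-new user gets the sensible global average, and each user’s own feedback gradually pulls their personal estimate away from it.

```python
# Toy hierarchical Beta-Bernoulli model: a global prior over "this message
# is urgent", shared by all users, plus per-user counts that personalise it.
# This is an illustrative sketch only, not the model used in Clutter.

class UrgencyModel:
    def __init__(self, global_urgent=30, global_total=100):
        # Pooled counts across all users act as a Beta prior.
        self.prior_urgent = global_urgent
        self.prior_total = global_total
        self.user_urgent = {}   # per-user count of urgent messages seen
        self.user_total = {}    # per-user count of all messages seen

    def record(self, user, was_urgent):
        self.user_total[user] = self.user_total.get(user, 0) + 1
        self.user_urgent[user] = self.user_urgent.get(user, 0) + int(was_urgent)

    def p_urgent(self, user):
        # Posterior mean: personal counts shrunk toward the global average.
        urgent = self.prior_urgent + self.user_urgent.get(user, 0)
        total = self.prior_total + self.user_total.get(user, 0)
        return urgent / total

model = UrgencyModel()
print(model.p_urgent("alice"))            # new user: global average, 0.3
for _ in range(50):
    model.record("alice", was_urgent=True)
print(round(model.p_urgent("alice"), 2))  # personalised upward: 0.53
```

The shrinkage toward the prior is what gives the "does OK for everybody" cold-start behaviour, while the per-user counts supply the personalisation as feedback accumulates.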

What was the first bit of technology that you were excited about?

When I was a child I was very excited about the Apollo moon landings. I was at an age where I could watch them live on television and knew enough to understand what an incredible achievement they were. Just think of that Saturn launch vehicle that’s 36 storeys high, weighs 3,000 tonnes, is burning 15 tonnes of fuel a second, and yet it’s unstable. So, it must be balanced, rather like balancing a broom on your finger, by pivoting those massive engines backwards and forwards on hydraulic rams in response to signals from gyroscopes at the top of the rocket. It’s that combination of extreme brute force with exquisite precision, along with dozens of other extraordinary yet critical innovations, that made the whole adventure just breath-taking. And the filtering algorithms used by the guidance system are an elegant application of Bayesian inference, so it turns out that machine learning is, literally, rocket science.
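The filtering idea mentioned above can be sketched in a few lines. This is a toy one-dimensional Kalman filter (my own illustration, not the Apollo guidance code), which fuses noisy sensor readings into a running estimate by weighting prediction and measurement according to their relative uncertainties:

```python
import random

def kalman_step(est, var, measurement, process_var, meas_var):
    """One predict-update cycle of a one-dimensional Kalman filter."""
    # Predict: uncertainty grows by the process noise.
    var = var + process_var
    # Update: blend prediction and measurement by their relative certainty.
    gain = var / (var + meas_var)
    est = est + gain * (measurement - est)
    var = (1.0 - gain) * var
    return est, var

random.seed(0)
true_angle = 2.0     # the tilt angle (in degrees) we are trying to track
est, var = 0.0, 1.0  # initial guess and its variance
for _ in range(200):
    reading = true_angle + random.gauss(0.0, 0.5)  # noisy gyroscope-style reading
    est, var = kalman_step(est, var, reading, process_var=1e-4, meas_var=0.25)
print(round(est, 2))
```

The Bayesian character is in the gain: when the measurements are noisy relative to the prediction, the filter trusts its internal model more, and vice versa, so the estimate settles close to the true value despite individual readings being off by large amounts.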



Create technology projects with your family using web-based Microsoft MakeCode

Introduction

Microsoft MakeCode (makecode.com) is a web-based learning environment for kids and teens to create with technology. MakeCode takes a unique approach to computing education by combining the magic of making with the power of code as a way to make learning about computers and technology more accessible and interesting to a wider audience.

The MakeCode team at Microsoft is small, but mighty! We are passionate about making technology fun, exciting and accessible for all kids. We work in a fun office space at the Microsoft campus in Redmond, Washington and love coming to work every day!

Over the next few weeks, we’ll be publishing some easy MakeCode projects that you can do with your kids at home over the summer using a different MakeCode product each week.

Grab your kids and a glue gun and come on the Summer of MakeCode tour with us!

– The MakeCode Team

MakeCode Office Space

Week 1: Micro:Pet

If your kids are like mine, they’ll spend a few minutes playing with their fancy toys, and a few hours playing with good old-fashioned cardboard boxes, string and markers in the garage!

Micro:Pet is a fun project that gets your kids’ creativity flowing using materials you can find around the house while incorporating electronics and coding concepts and activities with the micro:bit.

For the Micro:Pet project, you’ll need:

  • A computer with internet connection and USB port
  • A micro:bit Go Kit (available at Microcenter.com) that includes a micro:bit, a USB cable, a battery pack and batteries
  • Small cardboard boxes, toilet paper rolls, paper cups/bowls or anything you have around the house to construct your pet with
  • Markers, pipe cleaners, googly eyes, colored paper, feathers and anything else you have on hand to decorate your pet!

Check out our fun video about our micro:pets!


Step 1: What’s your dream pet?

The first thing you’ll want to do is think about what kind of pet you want to create. It could be something you’ve always wanted to have as a pet, but couldn’t – say a unicorn, a dragon or a skunk.

Step 2: Make your pet

Using the materials at hand, create your dream pet. Remember to include enough space in your design to attach your micro:bit and the battery pack.

Step 3: Code the micro:bit

If you have a Windows 10 computer, you can download and install the micro:bit app at http://aka.ms/microbitapp. Otherwise, go to https://makecode.microbit.org/.

You can use your micro:bit in many different ways on your pet. Here is an example of a program for your Micro:Pet, but be creative and come up with your own unique project!

Figure 1 – An example of a Micro:Pet program. Don’t miss the video at https://youtu.be/Sd_hB4nyUXI
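If the screenshot is hard to follow, here is a small program in the same spirit, written in the JavaScript view of the MakeCode editor. It is an assumed sketch, not the exact program shown in Figure 1:

```typescript
// Sketch of a Micro:Pet program (illustrative, not the one pictured):
// pressing button A "feeds" the pet, shaking makes it grumpy.
input.onButtonPressed(Button.A, function () {
    basic.showIcon(IconNames.Heart)   // happy pet after feeding
})
input.onGesture(Gesture.Shake, function () {
    basic.showIcon(IconNames.Sad)     // shaken pets get grumpy
})
basic.showIcon(IconNames.Happy)       // default face on start-up
```

You can build this with blocks and flip to the JavaScript tab to see the same code; either view downloads to the micro:bit in the same way.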

Optional steps:

1. Adding audio

You can add audio and play sounds through your micro:bit by connecting it to headphones, earbuds or an external speaker. Note: there is only one volume level at which the micro:bit plays – and it is very loud! Don’t put earbuds in your ear when you run your program. In addition to the micro:bit, you will need:

Figure 2 – What you’ll need for sound. Don’t miss the video at https://youtu.be/Wx73kbW5s9E

Figure 3 – Example program using sounds.
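As a starting point, a sound program along these lines can be written in the MakeCode editor. This is an assumed sketch, not the exact program in Figure 3:

```typescript
// Illustrative sound sketch: play a rising chirp when the pet is
// "petted" by pressing button B. Tones play through the headphones
// or speaker wired to pin 0 and GND.
input.onButtonPressed(Button.B, function () {
    music.playTone(262, music.beat(BeatFraction.Quarter))  // middle C
    music.playTone(330, music.beat(BeatFraction.Quarter))  // E
    music.playTone(392, music.beat(BeatFraction.Half))     // G
})
```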

2. Adding motion

You can add motion to your Micro:Pet – for example, wagging its tail when your pet is fed. To do this, you will need to attach a servo motor to your micro:bit. You will need:

Figure 4 – What you’ll need to put your pet in motion. Don’t miss the video at https://youtu.be/qdX86in2YXo

Figure 5 – Example program using a servo motor.
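A tail-wag program along these lines can be written in the MakeCode editor. This is an assumed sketch, not the exact program in Figure 5; it assumes the servo's signal wire is connected to pin P0:

```typescript
// Illustrative servo sketch: sweep the tail back and forth a few
// times when button A is pressed to "feed" the pet.
input.onButtonPressed(Button.A, function () {
    for (let i = 0; i < 4; i++) {
        pins.servoWritePin(AnalogPin.P0, 45)    // tail left
        basic.pause(200)
        pins.servoWritePin(AnalogPin.P0, 135)   // tail right
        basic.pause(200)
    }
})
```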

Check back next week to learn how to make your own Fortune Teller using the Circuit Playground Express and some simple crafting materials!


Microsoft Research Montreal welcomes Fernando Diaz, Principal Researcher and lead of the new Montreal FATE Research Group

Fernando Diaz – Principal Research Manager

Microsoft Research Montreal further bolsters its research force this month, welcoming Fernando Diaz to the Montreal FATE (Fairness, Accountability, Transparency and Ethics in AI) research group as Principal Researcher.

Diaz, whose research centres on the design of information access systems, including search engines, music recommendation services and crisis response platforms, is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Immediately before joining Microsoft Research Montreal, he was Director of Research at Spotify in New York. Before that, he was a senior researcher with Microsoft Research New York City, where he founded the FATE research group alongside Kate Crawford and Hanna Wallach. Joining Microsoft Research reunites him with many former FATE collaborators.

The world is beginning to harness the power of AI, machine learning, and data science across many aspects of society. Indeed, these research areas form core components of many Microsoft systems and products.

But these techniques also raise complex ethical and social questions: How can we best use AI to assist users and offer people enhanced insights, while avoiding exposing them to different types of discrimination in health, housing, law enforcement, and employment? How can we balance the need for efficiency and exploration with fairness and sensitivity to users? As we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?

The FATE research group at Microsoft studies the complex social implications of AI, machine learning, data science, large-scale experimentation and increasing automation. The aim is to develop computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies. A relatively new group, FATE is currently working on collaborative research projects that address the need for transparency, accountability and fairness in AI and ML systems. FATE publishes across a variety of disciplines, including machine learning, information retrieval, systems, sociology, political science, and science and technology studies.

“I’m thrilled to welcome Fernando back to Microsoft Research. Fernando is an immensely talented leader in information retrieval, machine learning and the new field of FATE,” said Jennifer Chayes, Technical Fellow and Managing Director of Microsoft Research New England, New York City and Montreal labs. “I’m also excited and proud to announce the creation of the Montreal FATE research group. This group will work on how to increase the fairness of data sets and AI algorithms, transparency and interpretability of the output of AI algorithms, accountability of this output in fairness and transparency, and ethical questions on AI and society.”

In addition to an impressive research and academic portfolio (including a PhD and a master’s degree in Computer Science from the University of Massachusetts Amherst), Diaz brings a passion for disseminating his work outside of the research community. He works closely with product teams at Microsoft, focusing on relevant and impactful research. He has also taught graduate-level courses at New York University, introducing students to the realities of production systems.

Part of what attracted Diaz to Microsoft Research was the promise of considerable freedom to work on a wide range of interesting problems. While his research will continue to include fundamental work on information access algorithms, Diaz will also focus on building a multidisciplinary group studying the societal implications of artificial intelligence.

“Increasingly, we are noticing the profound societal implications of integrating artificial intelligence into everyday life. MSR Montreal—and Montreal as a city—has amongst the strongest researchers in artificial intelligence, making it the ideal location to study and understand its societal implications from a technical perspective. At the same time, this research requires a broad, multidisciplinary strength found both in Canada and at Microsoft Research, more generally,” said Diaz.

“Work in FATE is crucial for ensuring that artificial intelligence becomes an essential and positive part of our lives, and Fernando is a leader both in FATE and in connecting FATE to other disciplines,” added Geoff Gordon, Microsoft Research Montreal Lab Director. “I am thrilled about the opportunity to work closely with him on a daily basis.”

Yoshua Bengio, Scientific Director at the Montreal Institute for Learning Algorithms (MILA) also expressed his encouragement. “Ethical and social issues associated with AI are really important and that is why MILA has put it in its mission to contribute to AI for the benefits of all and to collective discussions about the use of AI,” he said. “There already are strong collaborations between MILA and Microsoft Research Montreal and I’m delighted at the perspective of expanding this collaboration with the new FATE group which Fernando Diaz will head. This is clearly a great move for Microsoft as well as for the Montreal AI community.”

Indeed, the Microsoft Research FATE team will continue to expand, with impressive postdoctoral research talent joining the group over the summer, including Luke Stark, who returns to his native Canada following fellowships at the Department of Sociology at Dartmouth College and the Berkman Klein Center for Internet and Society at Harvard University.

The French version of this blog post can be found on the Microsoft News Center Canada.