As the world adjusts to new ways of working and staying connected, we remain committed to providing Azure AI solutions to help organizations invent with purpose.
Building on our vision to empower all developers to use AI to achieve more, today we’re excited to announce expanded capabilities within Azure Cognitive Services, including:
Text Analytics for health preview.
Form Recognizer general availability.
Custom Commands general availability.
New Neural Text to Speech voices.
Companies in healthcare, insurance, sustainable farming, and other fields continue to choose Azure AI to build and deploy AI applications to transform their businesses. According to IDC, by 2022, 75 percent of enterprises will deploy AI-based solutions to improve operational efficiencies and deliver enhanced customer experiences.
To meet this growing demand, today’s product updates expand on existing language, vision, and speech capabilities in Azure Cognitive Services to help developers build mission-critical AI apps that enable richer insights, save time and reduce costs, and improve customer engagement.
Get rich insights with powerful natural language processing
One of the ways organizations are adapting is scaling the ability to rapidly process data and generate new insights from data. COVID-19 has accelerated the urgency, particularly for the healthcare industry. With the overwhelming amount of healthcare data generated every year, it is increasingly critical for providers to quickly unlock access to this information to find new solutions that improve patient outcomes.
We are excited to introduce Text Analytics for health, a new feature of Text Analytics that enables health care providers, researchers, and companies to extract rich insights and relationships from unstructured medical data. Trained on a diverse range of medical data—covering various formats of clinical notes, clinical trials protocols, and more—the health feature is capable of processing a broad range of data types and tasks, without the need for time-intensive, manual development of custom models to extract insights from the data.
In response to the COVID-19 pandemic, Microsoft partnered with the Allen Institute for AI and leading research groups to prepare the COVID-19 Open Research Dataset. Drawing on this resource of over 47,000 scholarly articles, we developed a COVID-19 search engine using Text Analytics for health and Cognitive Search, enabling researchers to generate new insights in support of the fight against the disease.
Additionally, we continue to make advancements in natural language processing (NLP) so developers can more quickly build apps that generate insights about sentiment in text. The opinion mining feature in Text Analytics assigns sentiment to specific features or topics so that users can better understand customer feedback from social media data, review sites, and more.
Save time and reduce costs by turning forms into usable data
Much of this unstructured data is contained in forms that have tables, objects, and other elements. Extracting insights from these types of documents typically requires manual labeling by document type or intensive coding.
We’re making Form Recognizer generally available to help developers extract information from millions of documents efficiently and accurately—no data science expertise needed.
Customers like Sogeti, part of the Capgemini Group, are using Form Recognizer to help their clients more quickly process large volumes of digital documents.
“Sogeti constantly looks for new ways to help clients in their digital transformation journey by providing cutting-edge solutions in AI and machine learning. Our Cognitive Document Processing (CDP) offer enables clients to process and classify unstructured documents and extract data with high accuracy resulting in reduced operating costs and processing time. CDP leverages the powerful cognitive and tagging capabilities of the Form Recognizer to effortlessly extract key-value paired data and other relevant information from scanned/digital unstructured documents, further reducing the overall process time.” – Mark Oost – Chief Technology Officer at Sogeti, Artificial Intelligence and Machine Learning
Wilson Allen, a leading provider of consulting and analytics solutions, is using Form Recognizer to help law and other professional services firms process and evaluate documents (PDFs and images, including financial forms, loan applications, and more), and train custom models to accurately extract values from complex forms.
“The addition of Form Recognizer to our toolkit is helping us turn large amounts of unstructured data into valuable information, saving more than 400 hours of manual data entry and freeing up time for employees to work on more strategic tasks.” – Norm Mullock – VP of Strategy at Wilson Allen
Improve customer engagement with voice-enabled apps
People and organizations continue to look for ways to enrich customer experiences while balancing the transition to digital-led, touch-free operations. Advancements in voice technology are empowering developers to create more seamless, natural, voice-enabled experiences for customers to interact with brands.
One of those advancements, Custom Commands, a capability of Speech in Cognitive Services, is now generally available. Custom Commands allows developers to create task-oriented voice applications more easily for command-and-control scenarios that have a well-defined set of variables, like voice-controlled smart home thermostats. It brings together Speech to Text for speech recognition, Language Understanding for capturing spoken entities, and voice response with Text to Speech, to accelerate the addition of voice capabilities to your apps with a low-code authoring experience.
In addition, Neural Text to Speech is expanding language support with 15 new natural-sounding voices based on state-of-the-art neural speech synthesis models: Salma in Arabic (Egypt), Zariyah in Arabic (Saudi Arabia), Alba in Catalan (Spain), Christel in Danish (Denmark), Neerja in English (India), Noora in Finnish (Finland), Swara in Hindi (India), Colette in Dutch (Netherlands), Zofia in Polish (Poland), Fernanda in Portuguese (Portugal), Dariya in Russian (Russia), Hillevi in Swedish (Sweden), Achara in Thai (Thailand), HiuGaai in Chinese (Cantonese, Traditional), and HsiaoYu in Chinese (Taiwanese Mandarin).
Customers are already adding speech capabilities to their apps to improve customer engagement. With Cognitive Services and Bot Service, the BBC created an AI-enabled voice assistant, Beeb, that delivers a more engaging, tailored experience for its diverse audiences.
We are excited to introduce these new product innovations that empower all developers to build mission-critical AI apps. To learn more, check out our resources below.
The articles in this series have so far focused on Perl’s ability to manipulate text, which is what Perl was designed to do. But Perl is capable of much more. More complex problems often require working with sets of data objects and indexing and comparing them in elaborate ways to compute some desired result.
For working with sets of data objects, Perl provides arrays and hashes. Hashes are also known as associative arrays or dictionaries. This article will prefer the term hash because it is shorter.
The remainder of this article builds on the previous articles in this series by demonstrating basic use of arrays and hashes in Perl.
An example Perl program
Copy and paste the below code into a plain text file and use the same one-liner that was provided in the first article of this series to strip the leading numbers. Name the version without the line numbers chip2.pm and move it into the hal subdirectory. Use the version of the game that was provided in the second article so that the below chip will automatically load when placed in the hal subdirectory.
00 # advanced operations chip
01
02 package chip2;
03 require chip1;
04
05 use strict;
06 use warnings;
07
08 use constant SCORE=>'
09 ┌───┬───┬───┐
10 │ 3 │ 2 │ 3 │
11 ├───┼───┼───┤
12 │ 2 │ 4 │ 2 │
13 ├───┼───┼───┤
14 │ 3 │ 2 │ 3 │
15 └───┴───┴───┘
16 ';
17
18 sub get_prob {
19    my $game = shift;
20    my @nums;
21    my %odds;
22
23    while ($game =~ /[1-9]/g) {
24       $odds{$&} = substr(SCORE, $-[0], 1);
25    }
26
27    @nums = sort { $odds{$b} <=> $odds{$a} } keys %odds;
28
29    return $nums[0];
30 }
31
32 sub win_move {
33    my $game = shift;
34    my $mark = shift;
35    my $tkns = shift;
36    my @nums = $game =~ /[1-9]/g;
37    my $move;
38
39    TRY: for (@nums) {
40       my $num = $_;
41       my $try = $game =~ s/$num/$mark/r;
42       my $vic = chip1::get_victor $try, $tkns;
43
44       if (defined $vic) {
45          $move = $num;
46          last TRY;
47       }
48    }
49
50    return $move;
51 }
52
53 sub hal_move {
54    my $game = shift;
55    my $mark = shift;
56    my @mark = @{ shift; };
57    my $move;
58
59    $move = win_move $game, $mark, \@mark;
60
61    if (not defined $move) {
62       $mark = ($mark eq $mark[0]) ? $mark[1] : $mark[0];
63       $move = win_move $game, $mark, \@mark;
64    }
65
66    if (not defined $move) {
67       $move = get_prob $game;
68    }
69
70    return $move;
71 }
72
73 sub complain {
74    print "My mind is going. I can feel it.\n";
75 }
76
77 sub import {
78    no strict;
79    no warnings;
80
81    my $p = __PACKAGE__;
82    my $c = caller;
83
84    *{ $c . '::hal_move' } = \&{ $p . '::hal_move' };
85    *{ $c . '::complain' } = \&{ $p . '::complain' };
86 }
87
88 1;
How it works
In the above example Perl module, each position on the Tic-Tac-Toe board is assigned a score based on the number of winning combinations that intersect it. The center square is crossed by four winning combinations – one horizontal, one vertical, and two diagonal. The corner squares each intersect one horizontal, one vertical, and one diagonal combination. The side squares each intersect one horizontal and one vertical combination.
The get_prob subroutine creates a hash named odds (line 21) and uses it to map the numbers on the current game board to their score (line 24). The keys of the hash are then sorted by their score and the resulting list is copied to the nums array (line 27). The get_prob subroutine then returns the first element of the nums array ($nums[0]) which is the number from the original game board that has the highest score.
The algorithm described above is an example of what is called a heuristic in artificial intelligence programming. With the addition of this module, the Tic-Tac-Toe game can be considered a very rudimentary artificial intelligence program. It is really just playing the odds though and it is quite beatable. The next module (chip3.pm) will provide an algorithm that actually calculates the best possible move based on the opponent’s counter moves.
The win_move subroutine simply tries placing the provided mark in each available position and passing the resulting game board to chip1’s get_victor subroutine to see if it contains a winning combination. Notice that the r flag is being passed to the substitution operation (s/$num/$mark/r) on line 41 so that, rather than modifying the original game board, a new copy of the board containing the substitution is created and returned.
Arrays
It was mentioned in part one that arrays are variables whose names are prefixed with an at symbol (@) when they are created. In Perl, these prefixed symbols are called sigils.
Context
In Perl, many things return a different value depending on the context in which they are accessed. The two contexts to be aware of are called scalar context and list context. In the following example, $value1 and $value2 are different because @nums is accessed first in scalar context and then in list context.
$value1 = @nums;
($value2) = @nums;
In the above example, it might seem like @nums should return the same value each time it is accessed, but it doesn’t because what is accessing it (the context) is different. $value1 is a scalar, so it receives the scalar value of @nums which is its length. ($value2) is a list, so it receives the list value of @nums. In the above example, $value2 will receive the value of the first element of the nums array.
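The distinction can be verified with a short standalone script (the variable names here mirror the example above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @nums = (4, 5, 6);

# Scalar context: the array evaluates to its length.
my $value1 = @nums;

# List context: the one-element list on the left receives the
# first element of the array; the remaining elements are discarded.
my ($value2) = @nums;

print "$value1\n";   # 3
print "$value2\n";   # 4
```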
In part one, the below statement from the get_mark subroutine copied the numbers from the current Tic-Tac-Toe board into an array named nums.
@nums = $game =~ /[1-9]/g
Since the nums array in the above statement receives one copy of each board number in each of its elements, the count of the board numbers is equal to the length of the array. In Perl, the length of an array is obtained by accessing it in scalar context.
Next, the following formula was used to compute which mark should be placed on the Tic-Tac-Toe board in the next turn.
$indx = (@nums+1) % 2;
Because the plus operator requires a single value (a scalar) on its left-hand side, not a list of values, the nums array evaluates to its length, not the list of its values. The parentheses in the above example are just being used to set the order of operations so that the addition (+) will happen before the modulo (%).
Copying
In Perl you can create a list for immediate use by surrounding the list values with parentheses and separating them with commas. The following example creates a three-element list and copies its values to an array.
@nums = (4, 5, 6);
As long as the elements of the list are variables and not constants, you can also copy the elements of an array to a list:
($four, $five, $six) = @nums;
If there were more elements in the array than the list in the above example, the extra elements would simply be discarded.
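The discarding behavior can be seen in a brief sketch:

```perl
my @nums = (4, 5, 6);

# Two variables on the left, three elements on the right:
# the extra element (6) is silently discarded.
my ($four, $five) = @nums;

print "$four $five\n";   # 4 5
```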
Different from lists in scalar context
Be aware that lists and arrays are different things in Perl. A list accessed in scalar context returns its last value, not its length. In the following example, $value3 receives 3 (the length of @nums) while $value4 receives 6 (the last element of the list).
$value3 = @nums;
$value4 = (4, 5, 6);
Indexing
To access an individual element of an array or list, suffix it with the desired index in square brackets as shown on line 29 of the above example Perl module.
Notice that the nums array on line 29 is prefixed with the dollar sigil ($) rather than the at sigil (@). This is done because the get_prob subroutine is supposed to return a single value, not a list. If @nums[0] were used instead of $nums[0], the subroutine would return a one-element list. Since a list evaluates to its last element in scalar context, this program would probably work if I had used @nums[0], but if you mean to retrieve a single element from an array, be sure to use the dollar sigil ($), not the at sigil (@).
It is possible to retrieve a subset of an array (or a list) rather than just one value, in which case you use the at sigil and provide a series of indexes or a range instead of a single index. This is what is known in Perl as an array slice (or a list slice, when taken from a list).
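For example, with a hypothetical four-element array:

```perl
my @nums = (4, 5, 6, 7);

# Slices use the at sigil (@) because they return lists,
# indexed either by a series of positions or by a range.
my @ends = @nums[0, 3];     # the first and last elements
my @mid  = @nums[1 .. 2];   # a range of elements

print "@ends\n";   # 4 7
print "@mid\n";    # 5 6
```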
Hashes
Hashes are variables whose names are prefixed with the percent sigil (%) when they are created. They are subscripted with curly brackets ({}) when accessing individual elements or subsets of elements (hash slices). Like arrays, hashes are variables that can hold multiple discrete data elements. They differ from arrays in the following ways:
Hashes are indexed by strings (or anything that can be converted to a string), not numbers.
Hashes are unordered. If you retrieve a list of their keys, values or key-value pairs, the order of the listing will be random.
The number of elements in the hash will be equal to the number of keys that have been assigned values. If a value is assigned to index 99 of an array that has only three elements (indexes 0-2), the array will grow to a length of 100 elements (indexes 0-99). If a value is assigned to a new key in a hash that has only three elements, the hash will grow by only one element.
As with arrays, if you mean to access (or assign to) a single element of a hash, you should prefix it with the dollar sigil ($). When accessing a single element, Perl will go by the type of the subscript to determine the type of variable being accessed – curly brackets ({}) for hashes or square brackets ([]) for arrays. The get_prob subroutine in the above Perl module demonstrates assigning to and accessing individual elements of a hash.
Perl has two special built-in functions for working with hashes – keys and values. The keys function, when provided a hash, returns a list of all the hash’s keys (indexes). Similarly, the values function will return a list of all the hash’s values. Remember though that the order in which the list is returned is random. This randomness can be seen when playing the Tic-Tac-Toe game. If there is more than one move available with the highest score, the computer will choose one at random because the keys function returns the available moves from the odds hash in random order.
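A small sketch of keys and values (the hash contents here are made up; because the returned order is unpredictable, the results are sorted before printing):

```perl
my %odds = (5 => 4, 1 => 3, 2 => 2);

# keys and values each return their list in an unpredictable
# order, so sort the results before relying on the order.
my @keys   = sort { $a <=> $b } keys %odds;
my @values = sort { $b <=> $a } values %odds;

print "@keys\n";     # 1 2 5
print "@values\n";   # 4 3 2
```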
On line 27 of the above example Perl module, the keys function is being used to retrieve the list of keys from the odds hash. The keys of the odds hash are the numbers that were found on the current game board. The values of the odds hash are the corresponding probabilities that were retrieved from the SCORE constant on line 24.
Admittedly, this example could have used an array instead of a string to store and retrieve the scores. I chose to use a string simply because I think it presents the layout of the board a little nicer. An array would likely perform better, but with such a small data set, the difference is probably too small to measure.
Sort
On line 27, the list of keys from the odds hash is being fed to Perl’s built-in sort function. Beware that Perl’s sort function sorts lexicographically by default, not numerically. For example, provided the list (10, 9, 8, 1), Perl’s sort function will return the list (1, 10, 8, 9).
The behavior of Perl’s sort function can be modified by providing it a code block as its first parameter as demonstrated on line 27. The result of the last statement in the code block should be a number less-than, equal-to, or greater-than zero depending on whether element $a should be placed before, concurrent-with, or after element $b in the resulting list respectively. $a and $b are pairs of elements from the provided list. The code in the block is executed repeatedly with $a and $b set to different pairs of elements from the original list until all the pairs have been compared and sorted.
The <=> operator is a special Perl operator that returns -1, 0, or 1 depending on whether the left argument is numerically less-than, equal-to, or greater-than the right argument respectively. By using the <=> operator in the code block of the sort function, Perl’s sort function can be made to sort numerically rather than lexicographically.
Notice that rather than comparing $a and $b directly, they are first being passed through the odds hash. Since the values of the odds hash are the probabilities that were retrieved from the SCORE constant, what is being compared is actually the score of $a versus the score of $b. Consequently, the numbers from the original game board are being sorted by their score, not their value. Numbers with an equal score are left in the same random order that the keys function returned them.
Notice also that I have reversed the typical order of the parameters to <=> in the code block of the sort function ($b on the left and $a on the right). By switching their order in this way, I have caused the sort function to return the elements in reverse order – from greatest to least – so that the number(s) with the highest score will be first in the list.
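The three sorting behaviors described above can be compared side by side:

```perl
my @scores = (10, 9, 8, 1);

my @lexicographic = sort @scores;                 # default string comparison
my @ascending     = sort { $a <=> $b } @scores;   # numeric, least to greatest
my @descending    = sort { $b <=> $a } @scores;   # numeric, greatest to least

print "@lexicographic\n";   # 1 10 8 9
print "@ascending\n";       # 1 8 9 10
print "@descending\n";      # 10 9 8 1
```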
References
References provide an indirect means of accessing a variable. They are often used when making copies of the variable is either undesirable or impractical. A reference is a sort of shortcut that allows you to skip performing the copy and instead provides access to the original variable.
Why to use references
There is a cost in time and memory associated with making copies of variables. References are sometimes used as a means of reducing that cost. Be aware, however, that recent versions of Perl implement a technology called copy-on-write that greatly reduces the cost of copying variables. This new optimization should work transparently. You don’t have to do anything special to enable the copy-on-write optimization.
Why not to use references
References introduce the sort of action at a distance that was cautioned against in part one of this series. They are just as bad as global variables in terms of their tendency to trip up programmers by allowing data to be modified outside the local scope. You should generally try to avoid using references, but there are times when they are necessary.
How to create references
An example of passing a reference is provided on line 59 of the above Perl module. Rather than placing the mark array directly in the list of parameters to the win_move subroutine, a reference to the array is provided instead by prefixing the variable’s sigil with a backslash (\).
It is necessary to use a reference (\@mark) on line 59 because if the array were placed directly on the list, it would expand such that the first element of the mark array would become the third parameter to the win_move function, the second element of the mark array would become the fourth parameter to the win_move function, and so on for as many elements as the mark array has. Whereas an array will expand in list context, a reference will not. If the array were passed in expanded form, the receiving subroutine would need to call shift once for each element of the array. Also, the receiving function would not be able to tell how long the original array was.
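The difference in parameter counts can be demonstrated with a small sketch (count_params is a hypothetical helper, not part of the game, that simply reports how many parameters it received in @_):

```perl
use strict;
use warnings;

# A hypothetical helper: reports how many parameters it received.
sub count_params { return scalar @_; }

my @mark = ('X', 'O');

my $expanded   = count_params('board', @mark);    # the array flattens into the list
my $referenced = count_params('board', \@mark);   # the reference stays a single value

print "$expanded $referenced\n";   # 3 2
```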
Three ways to dereference references
In the receiving subroutine, the reference has to be dereferenced to get at the original values. An example of dereferencing an array reference is provided on line 56. On line 56, the shift statement has been enclosed in curly brackets and the opening bracket has been prefixed with the array sigil (@).
There is also a shorter form for dereferencing an array reference that is demonstrated on line 43 of the chip1.pm module. The short form allows you to omit the curly brackets and instead place the array sigil directly in front of the sigil of the scalar that holds the array reference. The short form only works when you have an array reference stored in a scalar. When the array reference is coming from a function, as it is on line 56 of the above Perl module, the long form must be used.
There is yet a third way of dereferencing an array reference that is demonstrated on line 29 of the game script. Line 29 shows the MARKS array reference being dereferenced with the arrow operator (->) and an index enclosed in square brackets. The MARKS array reference is missing its sigil because it is a constant. You can tell that what is being dereferenced is an array reference because the arrow operator is followed by square brackets ([]). Had the MARKS constant been a hash reference, the arrow operator would have been followed by curly brackets ({}).
There are also corresponding long and short forms for dereferencing hash references that use the hash sigil (%) instead of the array sigil. Note also that hashes, just like arrays, need to be passed by reference to subroutines unless you want them to expand into their constituent elements. The latter is sometimes done in Perl as a clever way of emulating named parameters.
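A sketch of that named-parameter emulation (greet is a hypothetical subroutine, not part of the game): the hash expands into a flat list of key-value pairs on the way in and is reassembled from @_ on the way out.

```perl
use strict;
use warnings;

# The expanded hash arrives in @_ as key-value pairs and
# is reassembled into a local hash of named parameters.
sub greet {
   my %args = @_;
   return "$args{greeting}, $args{name}!";
}

my %params = (greeting => 'Hello', name => 'Dave');
print greet(%params), "\n";   # Hello, Dave!
```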
A word of caution about references
It was stated earlier that references allow data to be modified outside of their declared scope and, just as with global variables, this non-local manipulation of the data can be confusing to the programmer(s) and thereby lead to unintended bugs. This is an important point to emphasize and explain.
On line 35 of the win_move subroutine, you can see that I did not dereference the provided array reference (\@mark) but rather chose to store the reference in a scalar named tkns. I did this because I do not need to access the individual elements of the provided array in the win_move subroutine; I only need to pass the reference on to the get_victor subroutine. Not making a local copy of the array is a shortcut, but it is dangerous. Because $tkns is only a copy of the reference, not a copy of the original data being referred to, if I or a later developer were to write something like $tkns->[0] = ‘Y’ in the win_move subroutine, it would actually modify the value of the mark array in the hal_move subroutine. By passing a reference to its mark array (\@mark) to the win_move subroutine, the hal_move subroutine has granted access to modify its local copy of @mark. In this case, it would probably be better to make a local copy of the mark array in the win_move subroutine using syntax similar to what is shown on line 56, rather than preserving the reference as I have done for the purpose of demonstration on line 35.
Aliases
In addition to references, there is another way that a local variable created with the my or state keyword can leak into the scope of a called subroutine. The list of parameters that you provide to a subroutine is directly accessible in the @_ array.
To demonstrate, the following example script prints b, not a, because the inc subroutine accesses the first element of @_ directly rather than first making a local copy of the parameter.
#!/usr/bin/perl

sub inc {
   $_[0]++;
}

MAIN: {
   my $var = 'a';
   inc $var;
   print "$var\n";
}
Aliases are different from references in that you don’t have to dereference them to get at their values. They really are just alternative names for the same variable. Be aware that aliases occur in a few other places as well. One such place is the list returned from the sort function – if you were to modify an element of the returned list directly, without first copying it to another variable, you would actually be modifying the element in the original list that was provided to the sort function. Other places where aliases occur include the code blocks of functions like grep and map. The grep and map functions are not covered in this series of articles. See the provided links if you want to know more about them.
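A brief sketch of aliasing in map’s code block (sort behaves the same way with the list it returns): because $_ is an alias, the multiplication writes through to the original array.

```perl
my @nums = (1, 2, 3);

# $_ inside map's code block is an alias to each element
# of @nums, so the assignment modifies the original array.
# (Using map in void context is unidiomatic; it is done
# here only to demonstrate the aliasing.)
map { $_ *= 10 } @nums;

print "@nums\n";   # 10 20 30
```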
Final notes
Many of Perl’s built-in functions will operate on the default scalar ($_) or default array (@_) if they are not explicitly provided a variable to read from or write to. Line 40 of the above Perl module provides an example. The numbers from the nums array are sequentially aliased to $_ by the for keyword. If you choose to use these variables, in most cases you will probably want to retrieve your data from $_ or @_ fairly quickly to prevent it being accidentally overwritten by a subsequent command.
The substitution command (s/…/…/), for example, will manipulate the data stored in $_ if it is not explicitly bound to another variable by one of the =~ or !~ operators. Likewise, the shift function operates on @_ (or @ARGV if called in the global scope) if it is not explicitly provided an array to operate on. There is no obvious rule to which functions support this shortcut. You will have to consult the documentation for the command you are interested in to see if it will operate on a default variable when not provided one explicitly.
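Both defaults can be seen in a short sketch (first is a hypothetical subroutine used only for illustration):

```perl
use strict;
use warnings;

# shift with no argument operates on @_ inside a subroutine.
sub first { return shift; }

my @words = ('alpha', 'beta');

for (@words) {
   # With no =~ binding, s/// operates on $_, which the for
   # loop aliases to each element of @words in turn.
   s/^(.)/\u$1/;
}

print "@words\n";           # Alpha Beta
print first(@words), "\n";  # Alpha
```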
As demonstrated on lines 55 and 56, the same name can be reused for variables of different types. Reusing variable names generally makes the code harder to follow. It is probably better for the sake of readability to avoid variable name reuse.
Beware that making copies of arrays or hashes in Perl (as demonstrated on line 56) is shallow by default. If any of the elements of the array or hash are references, the corresponding elements in the duplicated array or hash will be references to the same original data. To make deep copies of data structures, use one of the Clone or Storable Perl modules. An alternative workaround that may work in the case of multi-dimensional arrays is to emulate them with a one-dimensional hash.
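A sketch of the difference, using dclone from the core Storable module for the deep copy:

```perl
use Storable qw(dclone);

my @row  = (1, 2, 3);
my @grid = (\@row);

my @shallow = @grid;            # copies the reference, not the data
my $deep    = dclone(\@grid);   # recursively copies everything

$shallow[0][0] = 99;            # writes through to @row

print "$row[0]\n";         # 99
print "$deep->[0][0]\n";   # 1
```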
Similar in form to Perl’s syntax for creating lists – (1, 2, 3) – unnamed array references and unnamed hash references can be constructed on the fly by bounding a comma-separated set of elements in square brackets ([]) or curly brackets ({}) respectively. Line 07 of the game script demonstrates an unnamed (anonymous) array reference being constructed and assigned to the MARKS constant.
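For example (the contents here are made up, but the bracket syntax is the same as on line 07 of the game script):

```perl
# Anonymous references, constructed inline.
my $marks = [ 'X', 'O' ];         # an unnamed array reference
my $wins  = { X => 1, O => 0 };   # an unnamed hash reference

# Dereference with the arrow operator and the matching brackets.
print $marks->[0], "\n";   # X
print $wins->{X}, "\n";    # 1
```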
Notice that the import subroutine at the end of the above Perl module (chip2.pm) is assigning to some of the same names in the calling namespace as the previous module (chip1.pm). This is intentional. The hal_move and complain aliases created by chip1’s import subroutine will simply be overridden by the identically named aliases created by chip2’s import subroutine (assuming chip2.pm is loaded after chip1.pm in the calling namespace). Only the aliases are updated/overridden. The original subroutines from chip1 will still exist and can still be called with their full names – chip1::hal_move and chip1::complain.
Right now, you may be examining your business strategies and revising plans for the near and distant future. And as we gradually overcome this crisis, there will undoubtedly be a new normal.
This new normal may require you to examine your business model and to redefine who you are as a company, how you approach problems and how you make decisions. It is an opportunity to think big and be bold – and AI can help. Here are three key considerations as you weigh whether and how to become an AI-powered organization:
Scaling AI. To be successful, AI must be woven into the very fabric of your entire organization. AI can have the most impact and make the biggest difference when it is part of your culture and strategy, and when your company’s technological capabilities and desired business outcomes are considered side by side.
Driving AI culture and strategy to move beyond pilot and proof of concept requires the right approach from business leaders. That’s why, more than a year ago, we launched AI Business School, a free online master class series designed to set you up for success. More than a million people have accessed the resources there, and I’m happy to share today that we have added two new modules:
My team and I have had many detailed conversations with AI experts and executives across the world that informed all of the modules, and we will continue to add to the content as we learn more from them about what businesses need for their AI journeys.
To bring AI culture to life, companies are bringing technology and business together into a combined lifecycle driven by business outcomes. This model, which is familiar within software development as DevOps, is making its way into AI development with its counterpart, MLOps. MLOps, known to many as machine learning and operations, can improve business results and accelerate time to market by centrally managing models, environments, code and datasets so they can be shared among data scientists, ML engineers, software developers and other IT teams through the entire ML lifecycle.
In Vancouver, BC, transit agency TransLink deployed 18,000 AI models to better predict bus departure times, accounting for factors including traffic, weather and other disruptions. The agency used MLOps with Azure Machine Learning to manage and deliver the models at such a massive scale that they were able to improve their prediction accuracy by 74% and reduce customer wait times by 50%.
AI for everyone. To fully benefit from AI, each of your employees must feel empowered and included. A recent survey of 10,000 employers and employees found that 91% of employees want new skills that will help them succeed alongside AI. Employees of companies that have integrated AI companywide report that they find more meaning in their work and want to use AI more.
Many of your employees may already be using AI tools in their daily work. AI is incorporated into applications in Microsoft 365 to encourage teamwork, facilitate productivity and improve decision making. And your employees in sales, marketing and customer service may be using solutions in Dynamics 365 AI to improve performance across those departments. In Power Platform, AI Builder gives everyone in your organization the ability to add AI capabilities to the apps they create and use – without any technical experience.
Everyone in your organization can benefit from AI tools, regardless of their technical expertise.
Your organization’s data scientists and developers can use Azure AI services to create custom AI models and applications to solve the unique challenges you face.
One scenario I’m particularly passionate about is the ability for AI to empower subject matter experts. AI for Reasoning enables experts and researchers to choose which AI model to employ to analyze information and easily share it with peers.
Swedish whisky maker Mackmyra recently won a silver cube product design award for creating the world’s first machine-generated whisky. The company’s master blenders fed a machine learning model existing recipes, sales data and customer preferences, and the machine made quick work of generating high-quality recipes that would prove popular.
Swiss pharmaceutical company Novartis used AI to redefine its business functions across the organization, empowering every individual to apply AI models to augment their expertise and creativity without needing to learn data science. AI brought together mass amounts of critical information across data sources and streamlined collaboration.
Now, 50,000-plus employees can quickly make sense of the vast amount of data they deal with, deriving insights that will help them transform how medicines are discovered, manufactured and commercialized.
Responsible AI. As you put AI into action, particularly in these times of change, it’s imperative to consider the implications of technology. AI must be integrated in a way that aligns with your company’s values, goals and priorities. Companies that incorporate AI successfully implement practices from the beginning to guard against potential misuse of AI, such as biased or unfair results.
At Microsoft, we’ve been on our own AI journey for some time, and we are committed to sharing key learnings and are doing so both through the Responsible AI module within our AI Business School as well as the Responsible AI Resource Center, which is perfect for technical teams developing and deploying AI.
How can we help?
We know you’ve got a tremendous amount on your plate right now. Considering how you adjust to the changes we’re all experiencing, while continuing to serve your customers and support your employees, is an extremely tall order. As difficult as the current challenges are, we believe the implementation of short-term solutions and a focus on strategic, long-term planning go hand in hand as you transform your company into an AI-powered organization. Now is a perfect time to think of the possibilities. Good ideas coupled with perseverance will go a long way toward helping you respond to today’s challenges and imagine a better tomorrow. We’re excited to help you as you make progress on your AI journey. Visit AI Business School to help you get started putting AI into action. I promise, you’ll be glad you did.
Responding to the impact of the pandemic, we will need to get our economies back on track – which means businesses need to navigate the now, plan for re-booting and re-invent to shape the ‘next normal.’
According to Ralph Haupter, President, Microsoft EMEA, it will require “three critical areas of focus: human ingenuity, adaptability, and innovation.” He sees that digital transformation has accelerated as firms look to stabilize and get back to growth, with projects that may normally take years being delivered in months, and that “most innovation right now has, at its heart, an AI component.”
This set the scene for a lively expert panel debate which explored key questions around the role of artificial intelligence, employee skills and augmenting human ingenuity in helping firms be adaptive and resilient.
The discussion was chaired by Azeem Azhar, founder of the Exponential View, with executives from global engineering, management and development consultancy Mott MacDonald, AI start-up Robovision and technology advisory firm Fourkind, joined by Ralph Haupter.
Jonathan Berte, Founder & CEO of Robovision, echoed Ralph’s observation around accelerated digital transformation and noted that “it’s all part of this bigger journey to making digital twins of companies and societies. And what we see is that companies embracing AI, they just have more knowledge. They have more digitalization in their process and this brings a lot of advantages.”
Mott MacDonald is one of the organizations seeing significant positive business impact, according to Simon Denton, Business Architect, who gave a number of compelling examples: “We’re taking advantage of AI through services like Project Cortex, which is really enhancing our knowledge management within the organization, reducing the dangers of knowledge being trapped. Again, this is one of those sort of silent AI partners that’s really helping to connect the organization and bring it together to help us achieve our mission.”
Jarno Kartela, Machine Learning Partner at Fourkind, sees huge potential in AI increasingly augmenting workers. That means skills like creativity and problem-solving become more important than ever before. “Technology like machine learning lets you simulate business environments, different outcomes and predict possible end results. I think that creates a lot of resilience for the companies adopting this type of technology. But, at the same time, I think creative problem solving and strategic design skills will become even more important because machine learning will move from automating things to helping augment people to solve problems. This will become even more important in times like these.”
Simon offered some practical advice for businesses at the early stages of AI implementation: “It’s about building confidence and trust in AI. I think if people are starting [their AI] journey, it’s about getting your data in order.” The sentiment of moving forward with confidence was shared by Jarno, “We shouldn’t be afraid of using emerging technology and augmenting human skills because it’s not really about losing control, but improving our own decision-making.” And Jonathan encouraged business leaders not to wait and risk ceding market share: “Don’t fall into the trap of analysis paralysis and over analyze your road to AI. Just jump into it.”
You can dive into the full debate here, and a full transcript is below:
Part 1: The panel discusses digital transformation and AI, the role of skills and augmenting employees, what it means for business adaptability and resiliency, and managing uncertainty in the ‘new normal.’
Part 2: The conversation picks up on curating AI models, and moves onto why some business functions lead with AI while others lag, advice for organizations less far along their AI journey, and the role of culture and leadership.
Full transcript
PART 1
AZEEM:
IT IS VERY EXCITING TO BE HERE, AND TO BE ABLE TO HEAR FROM A NUMBER OF EXPERTS, THEMSELVES ACROSS MANY DIFFERENT TIME ZONES, ON KEY QUESTIONS RELATING TO HOW THE INVESTMENT IN ARTIFICIAL INTELLIGENCE, AND IN PARTICULAR THE INVESTMENT IN THE SKILLS OF YOUR WORKERS, HAS MADE FIRMS MORE CAPABLE OF ADAPTING TO THE NEW NORMAL THAT THIS CRAZY YEAR OF 2020 KEEPS THROWING AT US.
SO OUR PANEL OF EXPERTS WILL SHARE THEIR EXPERIENCES WITH US, AND WE WILL ALSO HEAR FROM MICROSOFT EMEA PRESIDENT RALPH HAUPTER ON AI AND THE MANY CUSTOMERS HIS FIRM HAS HELPED.
AS WE STEP INTO OUR DISCUSSION, MAYBE IT IS WORTH SPENDING A FEW SECONDS TALKING ABOUT WHAT WE MEAN BY AI IN THE CONTEXT OF OUR DISCUSSION TODAY. IT IS A TERM THAT HAS CAPTURED THE IMAGINATION IN MANY DIFFERENT WAYS, BUT FOR BUSINESSES, I SEE IT AS A VERY PRACTICAL SET OF TOOLS THAT ARE AVAILABLE TODAY FOR COMPANIES TO USE FOR THE BENEFIT OF THEIR CUSTOMERS, THEIR INTERNAL OPERATIONS, AND THEIR PRODUCT DEVELOPMENT. IT IS A SET OF TECHNOLOGIES AND PRODUCTS THAT AT THEIR HEART HAVE THE NOTION OF COMPUTER SYSTEMS THAT PERFORM THEIR ACTIONS NOT PURELY ON THE BASIS OF WHAT THEIR PROGRAMMERS HAVE TOLD THEM TO DO, BUT BY LEARNING FROM THE DATA AND THE ENVIRONMENTS THAT THESE SYSTEMS EXPERIENCE. NOW, THESE TASKS COULD BE SIMPLE THINGS LIKE RECOGNIZING OBJECTS, FOR EXAMPLE HANDWRITING AND IMAGES, OR THINGS I’M SURE MANY OF US HAVE EXPERIENCED, TO OUR DELIGHT, LIKE ENGAGING IN LIFELIKE CONVERSATIONS WITH CHATBOTS, OR THINGS THAT HAPPEN UNDERNEATH THE SURFACE OF OUR INTERACTIONS WITH COMPANIES, LIKE OPTIMIZING THE ALLOCATION OF RESOURCES ACROSS A NETWORK OR SUPPLY CHAIN. AT THE HEART ARE DATA, PREDICTION, OPTIMIZATION AND LEARNING, VERY PRACTICAL CORE TECHNOLOGIES THAT CAN HELP ANY BUSINESS. BUT LIKE SO MANY OF THE TECHNOLOGY TRANSFORMATIONS WE HAVE SEEN BEFORE, WHETHER IT WAS THE MOVE TO CLOUD OR CRM OR EVEN RELATIONAL DATABASES EARLIER IN OUR HISTORY, THIS IS A TECHNOLOGY THAT IMPACTS HOW EMPLOYEES ACT AND HOW THEY ARE ORGANIZED AS MUCH AS IT AFFECTS AN I.T. DEPARTMENT AND THE STRATEGIES OF A BUSINESS. AS A TECHNOLOGY, IT IS ONE OF THOSE THINGS THAT DEMANDS A LOT FROM EMPLOYEES AND FROM THE WAY IN WHICH COMPANIES SKILL UP THEIR WORKERS.
TO HAVE THIS DISCUSSION, WE HAVE BEEN VERY LUCKY TO HAVE SOME EXPERTS WHO HAVE HANDS-ON EXPERIENCE WITH THIS. WE WILL ASK EACH OF THEM TO GIVE A LITTLE ONE-MINUTE INTRO INTO THEIR COMPANY’S BUSINESS AND MISSION AND THEIR OWN ROLE. WE HAVE SIMON FROM MOTT MACDONALD, JONATHAN FROM ROBOVISION, AND JARNO FROM FOURKIND, AND I WILL ASK RALPH FROM MICROSOFT TO SAY A FEW WORDS. PERHAPS WE COULD START WITH YOU, SIMON.
SIMON: HI, I AM SIMON DENTON FROM MOTT MACDONALD. WE ARE A GLOBAL ENGINEERING, MANAGEMENT AND DEVELOPMENT CONSULTANCY, SO AROUND THE WORLD WE HELP CLIENTS WITH INFRASTRUCTURE NEEDS FOR TRANSPORT, ENERGY, WATER, AND THE ENVIRONMENT. YOU MAY NOT HAVE HEARD OF US MUCH, BUT WE ARE WORKING ON SOME EPIC PROJECTS, FROM TRANSFORMING RAIL NETWORKS IN LOS ANGELES AND SYDNEY TO SHAPING THE BIGGEST ENERGY FARMS. WE SECURE WATER SUPPLIES TO CITIES LIKE NEW YORK AND LONDON. WE ARE UNIQUELY POSITIONED TO BRIDGE THE PHYSICAL AND VIRTUAL DIVIDE. MY ROLE IS TO FACILITATE CONNECTIONS WITHIN THE ORGANIZATION AND EMPOWER OUR STAFF TO DO WHAT THEY DO BEST AND TO SERVE THEIR CLIENTS WELL.
AZEEM: THANK YOU VERY MUCH, SIMON. JONATHAN, COULD WE TURN TO YOU FOR A BRIEF INTRODUCTION.
JONATHAN: YES, I AM THE FOUNDER OF ROBOVISION. IT IS ALL ABOUT DEMOCRATIZING AI. WE WANT TO GET AI FROM THE DATA SCIENCE DEPARTMENT TO SPECIALISTS AND ORDINARY PEOPLE EVERYWHERE, SO THEY CAN BUILD THEIR OWN DEEP LEARNING MODELS, CREATE THE DATA AND ANNOTATE IT WITH THE HELP OF THE CLOUD. IN THIS WAY WE ARE DEVELOPING EASY TOOLS FOR MANUFACTURING, AGRICULTURE, SMART NATIONS, AND THE LIFE SCIENCES. WE ARE ALSO HELPING DURING THE COVID-19 CRISIS BY ENABLING RADIOLOGISTS TO CREATE DEEP LEARNING MODELS THEMSELVES AND TO MORE QUICKLY ANALYZE CT SCANS WITH COVID-RELATED DAMAGE.
AZEEM: THANK YOU VERY MUCH. JARNO FROM FOURKIND.
JARNO: HELLO. WE AT FOURKIND ARE A BUSINESS-FOCUSED ADVISORY COMPANY WITH DEEP KNOWLEDGE IN MACHINE LEARNING, AND WE LIKE TO SOLVE PROBLEMS THAT OTHERS CLAIM ARE UNSOLVABLE, RANGING FROM MAKING WHISKY WITH COMPUTERS TO UTILIZING AIRPORTS. WE HAVE 12 YEARS OF EXPERIENCE FROM STRATEGY TO DEVELOPMENT, AND I AM SUPER EXCITED ABOUT WHAT WE WILL DO NEXT, BECAUSE I THINK WE HAVE ONLY JUST TOUCHED ON WHAT IS POSSIBLE. THANKS.
AZEEM: THANK YOU, JARNO. NOW YOU’RE MAKING ME THIRSTY, GIVEN SOME OF YOUR WONDERFUL INNOVATIONS. RALPH, THANK YOU FOR JOINING US. I THINK YOU ARE STILL IN ASIA AT THE MOMENT, BUT IT WOULD BE WONDERFUL TO HEAR YOUR PERSPECTIVE ON THE AI SKILLS COMPONENT, WHAT YOU’RE SEEING FROM THE RESEARCH AND WHAT THAT MEANS FOR LEADERSHIP.
RALPH: THANK YOU FOR GIVING ME THE OPPORTUNITY HERE. I WOULD BE READY FOR WHISKY, TO BE HONEST, FROM A TIMING PERSPECTIVE. IN THESE TIMES OF DISRUPTION, A LOT OF FOCUS RIGHT NOW IS ON HOW TO STABILIZE AND GO BACK TO GROWTH. THE EXPERIENCE WE HAVE RIGHT NOW IS THAT THERE ARE THREE CRITICAL AREAS TO FOCUS ON: ONE IS HUMAN INGENUITY, THE SECOND IS ADAPTABILITY, AND FINALLY THERE IS THE FOCUS ON INNOVATION. TECHNICAL INNOVATION IS HAPPENING AS WE SPEAK, AND AS WAS MENTIONED, WE LITERALLY SEE PROJECTS WHICH NORMALLY TAKE TWO YEARS HAPPENING IN TWO MONTHS, BY PUTTING INNOVATION BACK TO BEING A DRIVER FOR GROWTH.
MOST OF THESE INNOVATIONS RIGHT NOW HAVE, AT THEIR HEART, AN AI COMPONENT. AS AN EXAMPLE, DURING THE CRISIS WE SAW THE EMERGENCY MEDICAL SERVICE IN COPENHAGEN, DENMARK, BUILDING A CHATBOT. WITHIN A DAY THEY COULD HANDLE 30,000 CALLS CLARIFYING WHETHER PEOPLE HAD INDICATIONS OF INFECTION. THAT WAS DONE WITH AI IN A COUPLE OF DAYS, AND IT SHOWS THE POWER OF INNOVATION AND HOW IT COMES TOGETHER. WE THINK THAT TECHNOLOGY IS HERE TO NAVIGATE THE NOW, IT’S HERE FOR PLANNING TO REBOOT, AND NOW WE’RE AT THE PHASE OF RESHAPING THE NEXT NORMAL. EVERY ORGANIZATION IS GOING THROUGH THAT, WONDERING WHAT TO DO, DISCOVERING HOW AI CAN HELP AND FIGURING OUT WHAT IT MEANS FOR THE CULTURE OF THE ORGANIZATION.
SO WE BUILT ON THAT AND TRIED TO UNDERSTAND: WHAT IS THE IMPACT ON EMPLOYEES, AND WHAT IS THE ROLE OF THE EMPLOYEE IN THIS GROWTH OF AI? WE FOCUSED ON THE SKILLS COMPONENT OF THE EMPLOYEES IN THAT CONTEXT. WE DID RESEARCH IN MARCH, AT THE TIME OF THE CORONAVIRUS PEAK. WE WENT TO MORE THAN 12,000 EXECUTIVES IN MORE THAN 20 COUNTRIES AND ASKED THEM AND THEIR EMPLOYEES ABOUT TECHNOLOGY USAGE, THE IMPACT OF AI AND THEIR SKILLING. WE FOUND A COUPLE OF INTERESTING DATA POINTS. 93% OF THE FIRMS SAY THEY HAVE ACTIVELY BUILT SKILLS FOR THEIR WORKERS TO WORK WITH AI. 70% OF EMPLOYEES SAY THEIR COMPANIES ARE PREPARING THEM FOR AI, WHICH I THINK IS GOOD NEWS, AND THERE IS A STRONG CORRELATION BETWEEN A COMPANY’S IMPACT AND ITS EMPLOYEES’ TRAINING. SO TWO THIRDS OF TODAY’S WORKERS ARE MENTORED BY AI.
WHAT IS THE OUTCOME? IT IS BUSINESS EFFICIENCY, BUT IT IS ALSO SPEED AND CHANGE IN PRODUCT DEVELOPMENT, SERVICE AND EXPERIENCE ON AN AI BASIS, AND EVEN MORE IT IS A CHANGE IN THE CULTURE OF INNOVATION. SO I PERSONALLY WOULD SAY I HAVE SEEN CHANGES IN COMPANY CULTURES HAPPENING THROUGH AI: LEADERS EMBRACING DATA, WHICH IN MANY WAYS IS THE FOUNDATION FOR AN AI EXPERIENCE, AND LEVERAGING THAT TO RESHAPE COMPANIES. NINE OUT OF 10 EXECUTIVES SAY THEY BENEFIT FROM AI AND SUPPORT IT WHERE THEY HAVE EXPERIENCED IT ONCE. THIS ALL IS ABOUT IMPROVEMENT: YOU SKILL PEOPLE, YOU HAVE BETTER OUTCOMES. IT IS THE COMBINATION OF AI TECHNOLOGY AND THE SKILLING WHICH IS IMPORTANT, TO HAVE PEOPLE READY WITH THE CAPABILITY TO LEVERAGE IT. SO IT IS AS MUCH A PRIORITIZATION OF SKILLS AS OF TECHNOLOGY WHICH NEEDS TO HAPPEN, AND I THINK ONE THING WE CAN BE CERTAIN OF IS THE IMPORTANCE OF EMPOWERING HUMAN INGENUITY THROUGH AI. THANK YOU FOR INVITING ME TO JOIN. I AM GIVING IT BACK TO YOU, AZEEM.
AZEEM: I THINK ONE OF THE MOST INTERESTING DIMENSIONS OF THIS IS THE TECHNOLOGY AND HUMAN SKILLS. YOU DESCRIBED THE CIRCLE OF IMPROVEMENT, HOW THE INVESTMENT IN ONE DRIVES CERTAIN PERFORMANCE, WHICH DRIVES MORE INVESTMENT. THAT IS A FASCINATING DYNAMIC. I AM CURIOUS TO HEAR FROM OUR EXPERTS, PERHAPS STARTING WITH JARNO, ON HOW ORGANIZATIONS MIGHT BE USING AI TO AUGMENT AND UNLOCK THAT INGENUITY THAT IS WITHIN THEIR EMPLOYEE BASE.
JARNO: AT LEAST FOR THE EXPERIENCE WE HAVE HAD WITH CUSTOMERS, WE HAVE INTERACTED WITH, THE MAIN TWIST CURRENTLY SEEMS TO BE MOVING FROM AUTOMATING STUFF LIKE WE HAVE DONE FOR A WHILE NOW, TO AUGMENTING CREATIVE EXPERTS IN R&D AND IMPROVING THE CAPABILITIES OF DECISION-MAKING AND STRATEGIC FUNCTIONS. I THINK THIS REQUIRES AN INNOVATIVE AND BOLD APPROACH TO USING EMERGING TECHNOLOGY, AND I THINK THAT IS A TALL ORDER FOR MANY COMPANIES TO PULL OFF BECAUSE IT TAKES A LOT OF COURAGE AND IT REALLY TAKES A LOT OF INTEREST AND ACTUALLY WANTING TO TRY OUT AI TO MAKE THAT HAPPEN.
AZEEM: THANK YOU. WHAT IS YOUR VISION FROM ROBOVISION?
JONATHAN: IN TERMS OF GETTING PREPARED FOR THIS SO-CALLED SECOND WAVE OF THE COVID-19 CRISIS, WHAT WE SEE, ESPECIALLY WITH SPECIALISTS LIKE RADIOLOGISTS, IS THEM EMBRACING THE TECHNOLOGY BIG TIME, REALLY LEVERAGING THEIR OWN SKILLS AND AUGMENTING THEM WITH AI TO MEASURE FASTER AND GET PATIENTS THE RIGHT TREATMENT THEY DESERVE DURING THIS CRISIS.
WHAT WE SEE IN MANUFACTURING IS A GREATER SENSITIVITY TO QUALITY CONTROL BECAUSE OF THE HUMAN IN THE LOOP. BECAUSE OF SEVERAL RESTRICTIONS IN EUROPEAN COUNTRIES, WE SEE THAT DURING A LOCKDOWN IT IS DIFFICULT TO KEEP MANUFACTURING GOING. BUT SOMETIMES MANUFACTURING IS VERY IMPORTANT FOR THE LIFELINES OF SOCIETIES, LIKE FRUIT PACKAGING AND SO ON, AND YOU STILL NEED THIS QUALITY CONTROL. SO WE SEE A BIGGER PUSH IN THIS CRISIS TO EMBRACE AI AT A HIGHER SPEED, AND DEAL CYCLES THAT USED TO TAKE SIX TO 12 MONTHS ARE NOW FOUR WEEKS.
ALSO IN SMART NATIONS, WITH SOCIAL DISTANCING MEASUREMENTS, YOU CAN MEASURE WHETHER PEOPLE ARE COMPLYING WITH CERTAIN REGULATIONS OR WHETHER THEY ARE SOFTENING THESE RESTRICTIONS. SO WE SEE A LOT OF SPEED, AND A BROADER EMBRACING OF THESE NEW TECHNOLOGIES IN THESE NEW ECOSYSTEMS.
AZEEM: THANK YOU, JONATHAN. AND SIMON FROM MOTT MACDONALD, THE SAME QUESTION: HOW IS THE ORGANIZATION USING AI TO AUGMENT AND UNLOCK THE INGENUITY OF ITS EMPLOYEES? THIS TIES TO JONATHAN’S ANSWER: HE IS WORKING IN A PURELY SOFTWARE WORLD, AND PERHAPS THE CADENCE IS SLIGHTLY DIFFERENT AT MOTT MACDONALD, GIVEN THE INFRASTRUCTURE. BUT I’M SURE THERE IS A CONNECTION BETWEEN AI AND THE INGENUITY OF YOUR TEAMS.
SIMON: THERE ALMOST CERTAINLY IS. AS AN ENGINEER, I USED TO HAVE TO DESIGN INFRASTRUCTURE AND THEN PUT IT INTO A PREDICTIVE MODEL WITHIN THE ENGINEERING APPLICATION, AND IT WOULD CONFIRM MY CHOICE, FOR EXAMPLE TELLING ME WHETHER A MEMBER WAS OVERSTRESSED OR NOT. NOW WE CAN PUT IN THE SAME INPUTS THAT ENGINEERS WOULD USE, AND THE MODEL WILL MAKE A SUGGESTION OF WHAT THE BUILDING INFRASTRUCTURE SHOULD BE, SO THE ENGINEER CAN TAKE IT ON AND INTERPRET IT CORRECTLY. WE ARE SEEING A LOT OF WORK, ALMOST SUBCONSCIOUSLY, BEING SUPPORTED BY AI AND ASSISTED BY MACHINE LEARNING.
WE ARE ALSO TAKING THE JUMP TO AI THROUGH PROJECT CORTEX, WHICH IS ENHANCING OUR KNOWLEDGE MANAGEMENT WITHIN THE ORGANIZATION, REDUCING THE DANGERS OF KNOWLEDGE BEING TRAPPED. IT IS ONE OF THOSE SILENT AI PARTNERS THAT IS HELPING TO CONNECT THE ORGANIZATION AND BRING IT TOGETHER TO HELP US ACHIEVE OUR MISSION.
THE THIRD STRAND WE HAVE BEEN LOOKING AT IS AROUND MAKING INFRASTRUCTURE SMARTER. I KNOW TECHNICALLY IT IS LARGELY AN INANIMATE OBJECT, SO MAKING IT SMARTER IS PROBABLY A BIT OF A CONTRADICTION. BUT THERE IS SO MUCH DATA AVAILABLE FROM THE ASSETS THAT WE HAVE IN THE WORLD, FROM A RAILWAY STATION TO A ROAD, HOSPITALS, ENTIRE TOWNS. WE HAVE BEEN LOOKING AT USING AI AND HELPING TEAMS WITHIN OUR ORGANIZATION NAVIGATE THAT DATA SO THEY CAN ADVISE CLIENTS ON THEIR ASSETS AND HOW TO BEST USE THEM. WE CAN MODEL MUCH QUICKER THROUGH AI-BASED PREDICTIVE MODELING, FOR EXAMPLE TO DETERMINE SAFE SWIMMING CONDITIONS. WITH A SAFE SWIM SERVICE, YOU WILL GO TO THE BEACH AND SEE A SIGN THAT SAYS THE WATER QUALITY IS FIT FOR SWIMMING. WE KNOW THAT THROUGH PREDICTIVE AI MODELING: WE KNOW WHERE THE RUN-OFF IS AND WHAT THE LEVELS ARE, AND WE BRING ALL THAT INFORMATION TOGETHER. EMPOWERING PEOPLE AS CONSUMERS IS A BIG FOCUS AT THE MOMENT.
AZEEM: ANOTHER IDEA THAT CAME ACROSS IS THE NOTION OF INCREASING SPEED. I WANT TO RETURN TO THAT IN OUR LATER QUESTIONS. I’M CURIOUS ABOUT THIS QUESTION, TO PUT TO JARNO AND JONATHAN IN PARTICULAR, OF WHETHER THE PRIOR INVESTMENT IN EQUIPPING EMPLOYEES TO BE SKILLED AND SUCCESSFUL WITH AI HAS MADE FIRMS MORE AGILE AND RESILIENT, AND WHICH OF THOSE SKILLS THEY ARE BENEFITING FROM THE MOST. PERHAPS, JONATHAN, YOU COULD BE THE FIRST TO TAKE THIS ONE.
JONATHAN: WHAT WE SEE IS THAT COMPANIES EMBRACING AI HAVE MORE KNOWLEDGE AND MORE DIGITALIZATION, AND THIS BRINGS A LOT OF ADVANTAGES. IT BRINGS THE ADVANTAGE OF KNOWING WHAT DISEASES ARE DOING, BECAUSE YOU ARE WORKING WITH CAMERAS AND YOU CAN MINE THE DATA AND KNOW WHETHER A DISEASE IS MORE PRONE THIS YEAR THAN LAST YEAR. IN THE CONTEXT OF SMART NATIONS, YOU CAN KNOW WHAT THE DIFFERENCE IS BETWEEN PEOPLE STREAMS. WHAT WE SEE IS A LOT OF CONTROL: PEOPLE AND ORGANIZATIONS GET CONTROL OF THE FLAWS, THEY ARE AWARE OF QUALITY ISSUES, AWARE OF DIFFERENT KINDS OF ISSUES WITHIN THEIR ORGANIZATION.
AZEEM: THANK YOU. JARNO, HOW ABOUT YOUR PERSPECTIVE ON THE QUESTION? I’D LOVE TO PUT IT TO RALPH AS WELL.
JARNO: INVESTMENT IN ANY EMERGING TECHNOLOGY HELPS YOU GET INTO THE DETAILS AND PAINT A BIG PICTURE OF WHERE THE BUSINESS IS HEADING, BECAUSE TECHNOLOGY IS SO EMBEDDED IN BUSINESS TODAY. MACHINE LEARNING IN PARTICULAR, BECAUSE OF THE SET OF POSSIBILITIES: IT CAN SIMULATE A BUSINESS ENVIRONMENT, SIMULATE DIFFERENT OUTCOMES AND PREDICT POSSIBLE END RESULTS. I THINK THAT CREATES A LOT OF RESILIENCE FOR THE COMPANIES ADOPTING THIS TYPE OF TECHNOLOGY. BUT AT THE SAME TIME, I THINK CREATIVE PROBLEM-SOLVING AND STRATEGIC DESIGN SKILLS WILL BECOME EVEN MORE IMPORTANT, BECAUSE MACHINE LEARNING, MOVING FROM AUTOMATING THINGS TO AUGMENTING PEOPLE TO SOLVE PROBLEMS, MOVES IT CLOSER TO ENGINEERING. HUMAN INGENUITY THAT DESIGNS THE STRATEGIC BITS WILL BECOME EVEN MORE IMPORTANT IN TIMES LIKE THESE.
AZEEM: THANK YOU, JARNO. RALPH, I WANTED TO COME TO YOU ON THIS PARTICULAR QUESTION, WHICH IS WHETHER PRIOR INVESTMENT IN EQUIPPING EMPLOYEES WITH SKILLS TO BE SUCCESSFUL WITH AI HAS MADE FIRMS MORE AGILE AND RESILIENT. JARNO ADDED A DETAIL THERE ABOUT STRATEGIC AND CREATIVE PROBLEM-SOLVING AS AN IMPORTANT COMPONENT. DOES THAT REFLECT YOUR EXPERIENCE?
RALPH: I WOULD SAY THAT, YEAH, THEY MAY BE A STEP OR TWO FURTHER AHEAD OF WHAT I’VE SEEN IN THE BROADER MASS USAGE OF AI. REFLECTING ON CUSTOMER ENGAGEMENTS, THE FEEDBACK WE’RE GETTING AND THE THINGS WE ARE SEEING, WHAT CLEARLY MAKES A DIFFERENCE RIGHT NOW IS THAT MOST COMPANIES ARE DEEPLY INVESTING IN DATA-DRIVEN OPERATING MODELS FOR THE ORGANIZATION, AND I SEE THAT MAKING A LOT OF DIFFERENCE NOW. I HAD A CALL JUST YESTERDAY WITH A LUXURY GOODS COMPANY, WHICH IS PRETTY STRESSED IN THE GIVEN ENVIRONMENT. BUT THEY HAVE BEEN SO CONFIDENT IN THE DATA MODELS THEY HAVE BEEN BUILDING, IN HOW THEY ARE FORECASTING AND PREDICTING THE TYPE OF PRODUCT AND REVENUE NUMBERS, AND THEY SEE THE PROGRESS OF ENRICHING THESE DATA MODELS OVER THE LAST WEEKS BY COLLECTING EXTERNAL DATA: THINK ABOUT SHOPS STARTING TO OPEN AGAIN, AND HOW MANY PEOPLE ARE PHYSICALLY ALLOWED TO GO INTO A SHOP GIVEN THE SHOP SIZE. THAT GIVES THEM MUCH MORE CONFIDENCE IN HOW THEY PREDICT THE BUSINESS. THAT IS BASED ON INVESTING IN AND EDUCATING PEOPLE TO BE REALLY DEEP INTO THIS RESEARCH SIDE. THAT IS COMING ON STRONG IN MANY COMPANIES IN DIFFERENT SEGMENTS: IN FINANCE, IN MAINTENANCE AND IN BACK-OFFICE ORGANIZATIONS. THERE IS USAGE HAPPENING NOW OF WHAT WAS INVESTED OVER THE LAST 12 MONTHS, I WOULD SAY.
AZEEM: PROBABLY THE BEST TIME TO PLANT A TREE IS 20 YEARS AGO, AND THE SECOND BEST TIME IS TODAY. I WANT TO BUILD ON THIS IDEA THAT YOU HAVE INTRODUCED, RALPH, WHICH IS THAT OVER THE PAST FEW WEEKS COMPANIES HAVE HAD TO GATHER AS MUCH DATA AS THEY CAN ABOUT THEIR NEW NORMAL. THERE IS THIS IDEA THAT AI SYSTEMS ARE BUILT WITH MODELS THAT ARE AT SOME POINT TRAINED ON THE DATA THAT WE KNOW UP UNTIL THEN, AND THE WORLD HAS CHANGED IN THE LAST SIX WEEKS. I AM CURIOUS ABOUT HOW FIRMS ENSURE THAT THEIR AI MODELS ADAPT TO THAT NEXT NORMAL. I WOULD BE HAPPY TO ASK JARNO’S VIEW FIRST, AND THEN TAKE THE VIEWS OF THE OTHER PANELISTS AS WELL.
JARNO: I HAVE HAD A FAIR BIT OF EXPERIENCE ADAPTING MODELS TO THESE CHANGES. THE PROBLEM WITH MOST MACHINE LEARNING WE SEE TODAY IS THAT IT ONLY USES PAST DATA AND DOES NOT EXPLORE THE WORLD AROUND IT. SOME COMPANIES HAVE BENEFITED MASSIVELY FROM USING REINFORCEMENT LEARNING IN A CHANGING ENVIRONMENT, BECAUSE THAT IS A FRAMEWORK THAT NATIVELY ADAPTS TO A CHANGING WORLD IN, LET’S SAY, DYNAMIC PRICING OR PERSONALIZATION. OTHER FRAMEWORKS SHOULD BE USABLE TOO, BUT THE CORE PROBLEM WITH MACHINE LEARNING MODELS IS THAT IF THEY ARE NOT EXPLORING NEW CHOICES, WE ARE NOT LEARNING ANYTHING NEW. WE SHOULD NOT JUST EXPLOIT PAST DATA. WE SHOULD, IN FACT, STOP SAYING THAT WE NEED DATA FOR AI, BECAUSE IT IS JUST NOT TRUE.
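[EDITOR’S NOTE: Jarno’s point about models that only exploit past data is the classic exploration/exploitation trade-off. A minimal sketch using an epsilon-greedy bandit, the simplest reinforcement-learning setup, shows the idea. This is a generic illustration, not any panelist’s system; all names and numbers are invented. With probability epsilon the agent tries a random option (explore); otherwise it picks the option that has looked best so far (exploit).]

```python
# Epsilon-greedy bandit: a minimal illustration of exploration vs exploitation.
import random


def epsilon_greedy(reward_sums, counts, epsilon=0.1, rng=random):
    """Pick an arm index given running reward sums and pull counts."""
    if rng.random() < epsilon:
        return rng.randrange(len(counts))  # explore: try a random arm
    # Untried arms get +inf so each arm is sampled at least once.
    averages = [r / c if c else float("inf")
                for r, c in zip(reward_sums, counts)]
    return max(range(len(counts)), key=averages.__getitem__)  # exploit


# Simulate a changing world: arm 1 pays off best at first, then arm 0 does.
true_rates = [0.3, 0.7]
reward_sums, counts = [0.0, 0.0], [0, 0]
rng = random.Random(42)
for step in range(2000):
    if step == 1000:
        true_rates = [0.9, 0.2]  # the world shifts to a "next normal"
    arm = epsilon_greedy(reward_sums, counts, rng=rng)
    reward = 1.0 if rng.random() < true_rates[arm] else 0.0
    reward_sums[arm] += reward
    counts[arm] += 1
```

[Because the agent keeps exploring, it keeps gathering fresh evidence after the shift at step 1000 instead of exploiting stale averages forever; a production system would additionally down-weight old observations so the averages forget the pre-shift world faster.]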
AZEEM: ONE OF YOUR EXAMPLES, JONATHAN, WAS THAT YOU TRAINED MACHINE VISION SYSTEMS TO RECOGNIZE CT SCANS OF A PATIENT’S COVID LUNGS, WHICH IS A NEW DATA SET THAT DID NOT EXIST THREE MONTHS AGO. WHAT IS YOUR TAKE ON WHAT YOU NEED TO DO TO MAINTAIN THESE MODELS AND ENSURE THAT THEY ARE READY FOR THE NEXT NORMAL, BOTH THE CURRENT ONE AND THE ONE AFTER THAT?
JONATHAN: BE EXTREMELY FLEXIBLE. IT IS A SIMPLE ANSWER, BUT MAKE SURE THAT THE FLEXIBILITY COMES WITH SCALABILITY. WE HAD ALMOST UNLIMITED SCALABILITY, SO WE COULD ONBOARD A LOT OF HOSPITALS, AND WE DID THAT ON THE SECOND DAY OF LOCKDOWN. WE CONTACTED MICROSOFT AND ASKED: IF YOU WANT ALL OF THESE HOSPITALS UPLOADING DATA TO A PLATFORM, IS THAT FEASIBLE? IT IS PERFECTLY FEASIBLE FROM A CLOUD POINT OF VIEW; THE CLOUD IS MADE FOR THESE KINDS OF THINGS. AND YOU NEED TO BE AWARE THAT THE SPECIALISTS THEMSELVES, THE RADIOLOGISTS IN THIS CASE, NEED A GOOD ENVIRONMENT TO LABEL THIS DATA, WITH ALL THE TOOLING, BECAUSE THEY ARE VERY MUCH NEEDED AT THIS MOMENT. NO MEDICAL SPECIALIST ON THE FRONT LINE HAS A LOT OF TIME RIGHT NOW. YOU ALSO NEED TO BE AWARE THAT IT NEEDS TO BE GENERIC AND VASTLY ADAPTABLE TO YOUR KIND OF USE CASE. WE MANAGED TO DO THAT WITH MICROSOFT BY MAKING THE CT AI MODELS.
AZEEM: SIMON AND RALPH, I AM COMING TO YOU WITH THIS QUESTION AS WELL, BUT I JUST WANT TO PLAY BACK SOMETHING INTERESTING IN WHAT JONATHAN JUST SAID, WHICH IS THAT OF COURSE YOU NEED TO KEEP THE DATA MODELS UP TO DATE. THE PEOPLE WHO HAVE TO DO THAT OFTEN DO NOT HAVE THE TIME, SO YOU HAVE TO AUGMENT THEM WITH AUTOMATED SYSTEMS, AND THOSE ARE LEARNING SYSTEMS AS WELL. JONATHAN?
JONATHAN: YOU HAVE TO USE TECHNIQUES LIKE PREDICTIVE LABELING, LIKE GIVING A PROPOSAL TO A RADIOLOGIST DETECTING COVID IN THE LUNG. IT IS SOMETIMES A LENGTHY INSTRUCTION SHEET, AND SOME OF THEM HAVE MISUNDERSTOOD SOME INSTRUCTIONS, SO YOU NEED TO UNDERSTAND, WITH MACHINE LEARNING, THAT IF SOMEBODY IS LOSING TIME BECAUSE OF INCORRECT LABELING, HE IS FEEDING GARBAGE IN, GARBAGE OUT INTO THE DEEP LEARNING SYSTEM. YOU NEED ALL KINDS OF TUNING TO MANAGE THE CROWD OF LABELERS. THIS CROWD CAN BE VERY VERSATILE: WE HAVE PEOPLE FROM SOUTH AFRICA OR VENEZUELA LABELING RECYCLING DATA, BUT WE ALSO HAVE SPECIALISTS, RADIOLOGISTS, LABELING CT DATA. THIS TRULY NEEDS TO BE SO GENERIC THAT YOU CAN SWIFTLY PIVOT YOUR USE CASE ACCORDING TO A CRISIS. YOU UNDERSTAND?
AZEEM: ABSOLUTELY. THAT IS A FANTASTIC CASE STUDY.
Part 2
AZEEM: LET ME TAKE THIS QUESTION ABOUT MODELS NOW. AS A REMINDER, IT IS: HOW SHOULD FIRMS ENSURE THAT AI MODELS ARE CAPABLE OF ADAPTING TO THIS NEXT NEW NORMAL? PERHAPS WE WILL HEAR FROM SIMON FIRST, AND THEN I WILL COME TO RALPH.
SIMON: WHAT IS IMPORTANT WITH MODELS IS THAT THEY DON’T UNDERSTAND PESSIMISM. AT THE MOMENT WE ARE FEEDING THEM RATHER PESSIMISTIC DATA SETS, SO WE HAVE TO BE CAREFUL, IF WE ARE USING THOSE KINDS OF MODELS, THAT WE ARE NOT TRAINING THEM TO BE PESSIMISTIC, SO THAT WHEN THINGS EMERGE AND THINGS ARE BETTER, THEY DO NOT GIVE US THE NEGATIVE OUTCOMES EVERY TIME. THE DATA SETS THAT AI AND MACHINE LEARNING ARE USED ON HAVE TO BE CONTEXTUAL. THERE NEEDS TO BE A BROAD ENOUGH CHURCH FOR THE RIGHT VIEW TO BE FORMED. IT IS REALLY IMPORTANT THAT PEOPLE USING THESE ADAPTIVE MODELS USE RETROSPECTIVE DATA TO GET SOME POSITIVITY, TO GIVE A BIT OF A SLANT, A BIT OF AN UNDERSTANDING, AND THEN TAKE IT FORWARD. WE ALSO RECOGNIZE THE FACT THAT PROBABLY ALL OF THAT DATA IS AVAILABLE TO YOU, SO PART OF IT IS STRINGING THAT DATA TOGETHER, PUTTING IT INTO A MODEL AND SEEING HOW IT REACTS AND HOW IT POPULATES. THAT IS THE SECOND PART OF IT, ESPECIALLY WHEN YOU DO NOT HAVE A MODEL TO START WITH.
AZEEM: YOU STARTED OFF WITH PESSIMISM BUT ENDED UP POSITIVE. I KNOW MANY A SERIES THAT DIED AFTER STRINGING DATA TOGETHER; IT IS ALWAYS A BIG CHALLENGE. RALPH, I’M CURIOUS ABOUT THIS QUESTION OF HOW YOU ENSURE MODELS ADAPT TO THE NEXT NORMAL. IN PARTICULAR, AS IT CAME ACROSS FROM SIMON’S AND JARNO’S ANSWERS, THIS SEEMS TO BE A PEOPLE-SKILLS, HUMAN-CAPITAL QUESTION TO TACKLE.
RALPH: I WOULD SAY IT STARTS WITH THE HUMAN POINT OF VIEW, AND I DON’T KNOW IF IT FITS HERE, BUT THE DATA SCIENTISTS NEED TO BRING A LEVEL OF CHOREOGRAPHY TO HOW THEY TRAIN MODELS AND REACH OUT FOR DATA. THE REASON I AM SAYING IT THAT WAY IS BECAUSE I THINK THE SHIFT I HAVE SEEN OVER THE LAST SIX MONTHS IS THAT IT STARTED WITH COMPANIES BUILDING MODELS AND INSIGHTS BASED ON THEIR EXISTING DATA. RIGHT NOW, I SEE A PHASE WHERE SOME COMPANIES START TO REACH OUT AND FIND OUT WHAT EXTERNAL DATA IS AVAILABLE, BUT THE PROCESS, I WOULD SAY, IS ALMOST RANDOM. WHAT I AM OBSERVING OVER THE LAST 12 WEEKS IS MORE AND MORE COMPANIES COMING IN WITH A SPECIALTY DATA SET WHICH IS AUGMENTING COMPANY-OWNED DATA SETS. I SEE A LOT OF THAT, AS AN EXAMPLE, IN THE CONTEXT OF SUPPLY CHAIN. THAT HAD AN IMPACT ON COMMERCIAL COMPANIES, AND I SAW IT HAVING AN IMPACT IN THE HEALTH CARE INDUSTRY. BUT I DON’T SEE GOVERNMENT INSTITUTIONS ASKING SUPPLY CHAIN COMPANIES TO HELP THINK THROUGH HOW THEY ORGANIZE A COUNTRY IN TERMS OF LOGISTICS CAPABILITY. IT NEEDS A STEP WHERE SOMEBODY GOES OUT OF THE FRAME AND LOOKS AT PARTNERS WHO HAVE A VERY DIFFERENT DATA SET COMPARED TO THEIR EXISTING BUSINESS MODEL, AND TRIES TO COMBINE AND GET THAT PIECE OF DATA ACCESS INTO THE FULL MODEL. SO HAVING THE MODEL, HAVING A HUMAN DRIVER, AND USING A REAL OUTREACH TO BRING PARTNERS INTO PLAY IS, I THINK, SUPER IMPORTANT TO FACE WHERE WE ARE RIGHT NOW. IT IS NOT ABOUT DREAMING THAT YOU HAVE A PERFECT AND HOLISTIC DATA SET; IT IS THE HUMAN ASPIRATION TO REACH OUT AND BRING THE NEXT BEST POTENTIAL SOURCE AND PARTNERSHIP INTO THAT MODEL WHICH I THINK MAKES THE DIFFERENCE, AS I AM OBSERVING IT RIGHT NOW.
AZEEM: THANK YOU VERY MUCH FOR THIS. THERE IS SO MUCH TO DELVE INTO ON THIS QUESTION, SO PERHAPS WE WILL DO SO LATER ON. I AM CURIOUS TO MOVE TO THE NEXT QUESTION, WHICH IS REALLY ABOUT HOW WE ARE SEEING THE DEPLOYMENT OF AI WITHIN PARTICULAR BUSINESS FUNCTIONS. MY OWN EXPERIENCE HAS BEEN THAT THERE ARE CERTAIN BUSINESS FUNCTIONS THAT ARE OFTEN THE FIRST TO BRING THIS TECHNOLOGY ON, BUT THAT IT DOES VARY QUITE A LOT FROM ORGANIZATION TO ORGANIZATION, SOMETIMES DOWN TO INDIVIDUALS WHO SEE THE OPPORTUNITY. I AM VERY CURIOUS ABOUT YOUR OWN EXPERIENCE OF WHICH BUSINESS FUNCTIONS MIGHT BE LEADING WITH AI WHILE OTHERS ARE LAGGING. PERHAPS, SIMON, YOU WOULD BE SO KIND AS TO ANSWER THIS FIRST, AND THEN WE WILL GO TO THE OTHER PANELISTS AS WELL.
SIMON: SURE. THE SMART INFRASTRUCTURE TEAM, YOU WOULD EXPECT THEM TO LEAD THE WAY FORWARD. THE REASON WE CREATED IT WAS TO ENABLE OUR CLIENTS TO MAXIMIZE THEIR INVESTMENTS IN THEIR ASSETS AND BUILD THESE THINGS. THE REASON THAT GROUP IS LEADING IS BECAUSE THEY HAVE AN INTEREST IN IT. IT IS NOT JUST SOMETHING THAT THEY HAPPEN TO DO; THEY HAVE AN INTEREST AND A PASSION WITHIN THAT, SO THEY ARE DRIVING IT FORWARD.
THE OTHER PART OF THE ORGANIZATION IS TAKING A LEAD IN A DIFFERENT DIRECTION: THEY ARE TAKING REAL ADVANTAGE OF THAT MACHINE-CURATED, MACHINE-SUGGESTED KNOWLEDGE, WHICH IS HELPING THEM TO FORM BETTER TEAMS TO WIN THE RIGHT WORK. WITH THE SIMPLER SORT OF AI INVESTMENTS, THEY DID NOT KNOW THEY WERE USING AI, BUT THEY ARE LEADING THE WAY. AND THERE ARE OTHERS. THIS IS A STEP CHANGE, A CLASSIC BELL CURVE MOMENT WHERE YOU HAVE THE EARLY ADOPTERS. EVERY ORGANIZATION HAS TO DO SO FOR ALL THE RIGHT REASONS, AND I THINK WE HAVE A GOOD BALANCE.
AZEEM: WE HAVE ABOUT SIX MINUTES LEFT BEFORE WE MOVE TO THE NEXT PHASE OF THIS. THE TIME HAS REALLY GONE WITH THESE WONDERFULLY DEEP ANSWERS. ON THAT QUESTION, WHICH FUNCTIONS DO YOU SEE LEADING? JARNO, WHAT HAVE YOU SEEN ACROSS THE WORK YOU HAVE DONE? OH, I THINK WE HAVE LOST JARNO. JONATHAN, WHAT HAVE YOU SEEN?
JONATHAN: WHAT I HAVE SEEN IS THAT IT IS RELATED TO THE EVOLUTION OF AI ITSELF. DEEP LEARNING WAS FOUNDED ON THE PREMISE OF REVERSE ENGINEERING THE VISUAL SYSTEM, AND WHAT WE HAVE SEEN IS THAT BUSINESSES RELATED TO IMAGERY AND THE ANALYSIS OF IMAGERY ARE LEADING. IT IS GOING TO BE ESSENTIAL THAT THEY LEAD, BECAUSE IT IS SIMPLE AND IT IS NOT SO RELATED TO BLACK SWAN EVENTS LIKE THE CORONA CRISIS. WHAT IS MUCH MORE DIFFICULT IS TIME SERIES WITH AI, BECAUSE WE AS HUMAN BEINGS CANNOT EASILY LABEL THEM. WE OFTEN FIND OURSELVES IN STRONG DISCUSSIONS ABOUT THE INTERPRETATION OF A TIME SERIES. IF THERE IS ONE SINGLE THING WE MOSTLY AGREE ON, IT IS PHYSICAL INFORMATION, BECAUSE IT IS CLEAR AND PRESENT AND YOU CAN POINT TO IT AT A GLANCE. SO IT IS VISION FIRST, AND WE SEE A LOT OF EVOLUTION IN THAT SPACE. THEN YOU HAVE THE MORE IMMATURE STACKS OF TECHNOLOGY: REINFORCEMENT LEARNING, TIME SERIES ANALYSIS, AND CERTAIN DIFFICULT CONTEXTS. THAT IS OUR TAKE ON THE SITUATION.
AZEEM: WE JUST GOT JARNO BACK AGAIN. IN YOUR EXPERIENCE, WHICH BUSINESS FUNCTIONS MIGHT BE LEADING WITH AI AND WHICH ONES MIGHT BE LAGGING?
JARNO: IT IS CLEAR THAT BUSINESS FUNCTIONS THAT HAVE A FAST FEEDBACK LOOP ARE SORT OF TRIUMPHING IN THIS TYPE OF CLIMATE, BECAUSE THEY CAN EXPLORE TECHNOLOGIES FASTER AND TRY OUT NEW THINGS WITH CERTAINTY, KNOWING IF THEY WORK OR NOT. COMPARE THAT WITH R&D OR THE STRATEGIC FUNCTIONS, WHERE THE FEEDBACK LOOP IS PRETTY LONG AND IT TAKES A WHILE TO KNOW IF SOMETHING WORKS OR NOT. SO I THINK THAT AFFECTS THE ADOPTION RATE ACROSS DIFFERENT BUSINESS FUNCTIONS. YOU CAN ALSO EASILY SEE A SORT OF FEAR OF ADOPTING NEW TECHNOLOGY, LIKE IN THE WHISKY CASE: YOU CAN IMAGINE WHEN PEOPLE IN SCOTLAND HEARD THAT WE WERE GOING TO MAKE WHISKY WITH COMPUTERS. IT WAS LIKE NOTHING THEY HAD EVER SEEN OR HEARD, AND WE GOT FEEDBACK THAT YOU CANNOT DO THIS STUFF. A WEEK AGO, WE WON A REALLY PRESTIGIOUS PRIZE AT AN AWARDS SHOW FOR THE AI WHISKY. SO I THINK THAT FUNCTIONS THAT HAVE LONG FEEDBACK LOOPS ARE GOING TO BE A BIT BEHIND.
AZEEM: THANK YOU FOR THAT. I CAN TELL YOU THAT THE EMPTY BOTTLE OF YOUR WHISKY THAT I HAVE SITTING ON A SHELF IN A KITCHEN IS TESTAMENT TO THE FACT THAT THE AI IS WORKING THERE. WE HAVE 3, 4 MINUTES LEFT BEFORE WE MOVE TO AUDIENCE QUESTIONS. WE HAVE THE LAST QUESTION, AND I WILL START WITH JONATHAN FIRST. WHAT BRIEF ADVICE DO AI-LEADING BUSINESSES HAVE FOR OTHERS THAT ARE NOT AS FAR ALONG THAT JOURNEY AS THEY MIGHT BE? IF WE COULD START WITH JONATHAN, PLEASE.
JONATHAN: START EARLY AND ACCEPT FAILING FAST. INNOVATION DEPENDS ON FAST ITERATION, AND YOU HAVE TO GET TO PROTOTYPES AND MVPS QUICKLY. DO NOT FALL INTO THE TRAP OF ANALYSIS PARALYSIS AND OVERANALYZE. JUST JUMP INTO IT AND GET IT GOING, GET THE ABSORPTION GOING IN YOUR ORGANIZATION, BECAUSE YOU WILL SEE ISSUES BEYOND PURE AI: YOU WILL SEE THAT YOUR DATA IS NOT CONSISTENT OVERALL, AND YOU WILL SEE PRACTICAL ISSUES, BEFORE YOU CAN REALLY START THIS WONDERFUL JOURNEY OF CREATING THE MOST OUT OF YOUR CONTEXT. THE OTHER THING IS TO REALIZE THAT IF YOU WAIT TOO LONG, YOUR COMPETITOR WILL PROBABLY GRAB SOME MARKETS AND YOU WILL BE TOO LATE AT SOME POINT. WE ARE LIVING IN FAST-MOVING TIMES OF DISRUPTION, SO IT IS IMPORTANT TO JUMP ON THE WAGON.
AZEEM: WAGONS AND WHISKY. NOW I’M GETTING CONFUSED. LET’S ASK JARNO THAT SAME QUESTION. WHAT ADVICE WOULD YOU HAVE FOR COMPANIES THAT ARE LESS FAR ALONG IN THEIR AI JOURNEY?
JARNO: I THINK THAT IS A GOOD AND REALLY IMPORTANT QUESTION, AND I THINK IT SORT OF BOILS DOWN TO THE HUMAN BITS AND THE CULTURE BITS, THAT WE SHOULD NOT BE AFRAID OF USING EMERGING TECHNOLOGY AND AUGMENTING BECAUSE IT IS NOT REALLY ABOUT LOSING CONTROL, BUT IMPROVING OUR OWN DECISION-MAKING. I THINK ALSO WE SHOULD OPT FOR USING APPROACHES WHERE MACHINES ACTUALLY LEARN AND GENERATE SOMETHING, AND EXPLORE WHAT IS POSSIBLE INSTEAD OF REPEATING WHAT WE ALREADY KNOW, WHEREVER THAT TYPE OF EXPLORATION IS POSSIBLE. I THINK YOU WILL BE SURPRISED.
AZEEM: THANK YOU. SIMON, I AM CURIOUS TO HEAR FROM YOU, IN PARTICULAR BECAUSE YOUR SECTOR IS ONE THAT HAS LONG FEEDBACK LOOPS. WHAT ADVICE WOULD YOU HAVE FOR ORGANIZATIONS THAT ARE LESS FAR AHEAD ON THAT JOURNEY?
SIMON: OUR FEEDBACK LOOPS ARE USUALLY MEASURED IN DECADES. SO FIRST, IT IS REALLY AROUND CONFIDENCE AND TRUST IN WHAT THE AI IS DOING. IF YOU HAVE NOT ENGAGED WITH AI BEFORE, DIPPING YOUR TOE IN IS ABOUT GAINING CONFIDENCE AND TRUST IN WHAT THE ANSWERS AND THE IMPLICATIONS ARE. THE OTHER POINT IS THAT YOU DO NEED TO APPRECIATE THAT IT IS NOT YOUR SILVER BULLET. IT IS NOT GOING TO GIVE YOU THAT 4:00 A.M. EPIPHANY STRAIGHTAWAY. BUT GETTING YOUR DATA IN ORDER, WHILE SOMETIMES A PAINFUL PROCESS, HELPS YOU GET THINGS A BIT CLEANER AND INTO A STANDARD FORMAT. IN OUR INDUSTRY, WE ARE QUITE LUCKY. WE ARE 10 YEARS INTO A STANDARDS CYCLE, AND WITHIN THAT CYCLE THEY HAVE BEEN STARTING TO PRESCRIBE THE METADATA THAT YOU SHOULD BE CAPTURING WITHIN THE CREATION OF YOUR ASSETS, SO THEY ARE REACHING QUITE FAR AHEAD. I WANT TO STRESS THAT YOU HAVE TO BE MORE ECONOMICAL WITH DATA AND COLLABORATE A BIT MORE. JUST GO FOR IT.
AZEEM: RALPH, LET’S PUT THIS LAST QUESTION TO YOU AGAIN, WHICH IS ABOUT THE ADVICE YOU WOULD GIVE TO CLIENTS WHO ARE NOT AS FAR ALONG THEIR AI JOURNEY AS SOME OF THE FIRMS WE HAVE BEEN DISCUSSING SO FAR IN OUR SESSION.
RALPH: I WOULD REFRAME IT AS OBSERVATIONS I HAVE BEEN ABLE TO TAKE FROM COMPANIES. FOR ME, THE STARTING POINT IS ALWAYS THAT STUDIES SAY THAT TWO YEARS DOWN THE ROAD, 60% TO 65% OF OUR GDP WILL COME FROM A DIGITAL FOOTPRINT. BY DEFINITION, THE LEADERSHIP OF A COMPANY SHOULD HAVE A DIGITAL TECHNOLOGY AGENDA.
HAVING LEADERSHIP THAT CAN ARTICULATE WHAT AI WILL DO FOR A COMPANY MAKES A BIG DIFFERENCE. LEADERSHIP ON AI, I THINK, DRIVES BEHAVIOR. BY DOING SO, MANY PEOPLE ACTUALLY FIND OUT THAT, ACROSS ALL THE THINGS AND EXPERIENCES THEY HAVE, AI IS ALREADY AUGMENTING DAILY LIFE, ON THE PRIVATE PERSONAL SIDE AS MUCH AS IN BUSINESS. THE NEXT STEP HAS ALWAYS BEEN FOR ME THAT PEOPLE START HAVING BOARD MEETINGS BASED ON DATA AND NOT BASED ON PRESENTATIONS. WHEN YOU START HAVING EXECUTIVES LOOKING AT DATA THAT IS PULLED IN REAL TIME AND REASONABLY PRESENTED, THE MOMENT THEY HAVE THE DATA, THEY ASK: WHAT MORE CAN YOU GET OUT OF IT? HOW CAN YOU USE IT BETTER? THAT HAS ALWAYS BEEN TO ME THE BEST WAY TO ACCELERATE. AND BUILD GROUPS OF KNOWLEDGE. INVEST IN THE SKILL SETS OF PEOPLE. DO RESEARCH WORK. ON THE PRODUCTION SIDE, FOR PEOPLE WHO AUGMENT VIDEO TECHNOLOGY, JUST BUILDING GROUPS OF KNOWLEDGE AND DATA-DRIVEN SKILLS HAS ALWAYS BEEN THE RECIPE FOR SUCCESS. FROM THERE ONWARDS, IN ALL THE COMPANIES I HAVE OBSERVED, IT IS JUST A SELF-FULFILLING PROCESS OF GROWTH AND INTEREST AND MORE DEPTH, WHERE PEOPLE ARE GOING TO USE THE TECHNOLOGY FOR THEIR OWN BENEFIT, BRINGING PEOPLE AND TECHNOLOGY TOGETHER.
AZEEM: WHEN WE TALK ABOUT PEOPLE AND TECHNOLOGY, THAT TAKES US TO THE QUESTION OF CULTURE. WE DO JUST HAVE FIVE MINUTES LEFT TO ASK ABOUT THIS, EXPLORE THIS QUESTION OF CULTURE. I AM CURIOUS IN PARTICULAR ABOUT THE DISTINCTION OF WHETHER THIS IS SOMETHING THAT HAS TO BE DRIVEN FROM THE TOP DOWN, WHETHER IT CAN HAPPEN FROM THE BOTTOM UP, AND WHAT ARE THE KEY ELEMENTS OF CULTURAL HYGIENE THAT ARE IMPORTANT FOR SUCCESSFUL AI IN SKILLING INITIATIVES. I AM COGNIZANT THAT OUR ATTENDEES ARE APPROACHING THE TOP OF THE HOUR. IF YOU COULD EACH TRY TO ANSWER THIS WITHIN A MINUTE SO WE GET A MOMENT TO SUMMARIZE, THAT WOULD BE GREAT. I WILL ASK JARNO TO START ON THAT QUESTION. THANK YOU.
JARNO: I THINK, IN TERMS OF CULTURE, FOR MACHINE LEARNING YOU NEED TO BRING THE VALUE FROM THE CXO LEVEL DOWN TO THE LEVEL WHERE YOU WANT TO APPLY THAT CHANGE. IF YOU START AN ENDEAVOR TO DO SOMETHING FOR CUSTOMER SERVICE, YOU NEED TO START WHERE YOU KNOW THE BUSINESS PROBLEM, BUT YOU ALSO NEED TO START WITH A CULTURAL APPROACH THAT BRINGS CUSTOMER SERVICE REALLY INTO THE LOOP OF DEVELOPING THE ENTIRE SYSTEM, WITH THAT HUMAN INTELLECT HEAVILY ENTWINED WITH WHAT YOU ARE TRYING TO ACCOMPLISH WITH TECHNOLOGY. SO I THINK A LOT OF IT BOILS DOWN TO HOW YOU CAN BRING THAT TYPE OF EMERGING TECHNOLOGY TO A COMPANY WITH PROJECTS THAT ARE LED AND MANAGED FROM THE END USER AND THE END USER EXPERIENCE ONWARDS. I THINK THAT WILL LEAD TO A LOT OF SUCCESS.
AZEEM: THANK YOU. WE WILL GO OVER TO SIMON ON THIS, WHICH IS WHAT ROLE DOES COMPANY CULTURE PLAY IN A SUCCESSFUL AI AND SKILLING INITIATIVE? YOU HAVE JUST ONE MINUTE.
SIMON: TOP-DOWN SUPPORT HELPS GET THINGS DONE, BUT WHATEVER YOU DO, FACILITATE THE COALESCING OF LIKE-MINDED PEOPLE TOGETHER. THAT IS THE FIRST POINT. IT DOESN’T MATTER WHERE THEY ARE FROM, JUST GIVE THEM THE PLATFORM OR A WAY OF COALESCING TOGETHER. GIVE THEM THE OPPORTUNITY TO PLAY WITH THE TOOLS AND THE ARTIFACTS AND LET THEM EXPLORE. WE LEARN THROUGH PLAYING; THAT IS THE BEST WAY TO GROW AND DEVELOP OUR SKILLS. SHOW A BIT OF EMPATHY TOWARD THEM. THEY ARE LEARNING SOMETHING NEW, PLAYING WITH SOMETHING NEW, PROBABLY ON THEIR OWN TIME, TOO. INCUBATE, EXPAND, AND THEN YOU CAN DEPLOY.
AZEEM: I LOVE THAT. THAT’S GREAT. JONATHAN, YOUR PERSPECTIVE. 45 SECONDS IS NOW YOUR ALLOCATED SLOT.
JONATHAN: ENABLING, ENABLING, ENABLING. ENABLING YOUR CUSTOMER, ENABLING YOUR TEAM, ENABLING FROM A SKILLING POINT OF VIEW. LET THEM READ PAPERS, LET THEM CONNECT THE DOTS IN A CREATIVE WAY. DRAW OUTSIDE OF THE BOX. DO NOT BE TOO SCHEMATIC. LET THEM PLAY AROUND. AS SIMON SAYS, IT IS IMPORTANT NOT TO TRY TO MANAGE IT. THESE SMART PEOPLE CAN MANAGE THEMSELVES. IT IS A BIT LIKE ANARCHY, IN A GOOD WAY. YOU NEED TO HAVE IT WORK AT THE GRASSROOTS; YOU NEED TO ENABLE IT. GIVE THEM THE TOOLS THEY NEED, A GPU, A LAPTOP, AND OFF THEY GO.
AZEEM: “ANARCHY BUT IN A GOOD WAY.” RALPH, WHAT ROLE DOES CULTURE PLAY IN AI AND SKILLING INITIATIVES?
RALPH: ESTABLISH A FRAMEWORK, A FRAMEWORK OF LEARNING, AND MAKE SURE PEOPLE UNDERSTAND THAT WHILE IT IS A TECHNICAL ENVIRONMENT, IT IS NOT LIMITED TO PEOPLE WITH A TECHNICAL BACKGROUND. THAT WAY, EVERYBODY CAN ENGAGE WITH AI TOOLS AND THE WAY THE WORK IS DONE, AND ACTUALLY BENEFIT BY USING SMALL BITS AND PIECES AND JUST GETTING INTO IT. IT MAKES A HUGE DIFFERENCE FOR THE WHOLE ORGANIZATION OF THE COMPANY.
AZEEM: RALPH, THANK YOU FOR THAT, AND THANK YOU TO ALL OF THE PANELISTS. IT IS AN INCREDIBLY RICH DISCUSSION. I HOPE THIS IS BEING RECORDED AND PEOPLE GET A CHANCE TO WATCH IT AGAIN BECAUSE THERE IS SO MUCH NUANCE AND INSIGHT THAT HAS BEEN BROUGHT UP TODAY. THANK YOU VERY MUCH TO JARNO, TO SIMON, JONATHAN, AND FINALLY TO RALPH.
SAS’ AI and analytics more tightly integrate with Microsoft Azure; Microsoft to bring cloud-based SAS industry solutions to its customers
Cary, NC, and Redmond, WA (June 15, 2020) – Microsoft Corp. and SAS today announced an extensive technology and go-to-market strategic partnership. The two companies will enable customers to easily run their SAS® workloads in the cloud, expanding their business solutions and unlocking critical value from their digital transformation initiatives. As part of the partnership, the companies will migrate SAS’ analytical products and industry solutions onto Microsoft Azure as the preferred cloud provider for the SAS Cloud. SAS’ industry solutions and expertise will also bring added value to Microsoft’s customers across health care, financial services and many other industries. This partnership builds on SAS integrations across Microsoft cloud solutions for Azure, Dynamics 365, Microsoft 365 and Power Platform and supports the companies’ shared vision to further democratize AI and analytics.
“Through this partnership, Microsoft and SAS will help our customers accelerate growth and find new ways to drive innovation with a broad set of SAS Analytics offerings on Microsoft Azure,” said Scott Guthrie, Microsoft Executive Vice President of Cloud and AI. “SAS, with its recognized expertise in analytics, data science and machine learning, is a strategic partner for Microsoft, and together we will help customers across dozens of industries and horizontals address their most critical and complex analytical challenges.”
Organizations around the world are moving to the cloud to innovate and move faster toward their business goals. As part of this transition, many customers, like St. Louis-based health system Mercy, are migrating their SAS analytic workloads to Azure to improve performance and cost-efficiency.
“At Mercy, we’re focused on how to continuously improve patient care and outcomes, and we realize the role of data analytics and machine learning in bringing that focus to light. Working with SAS and Microsoft, we can capitalize on analytics software and the Azure cloud platform to strengthen our ability to harness Real World Evidence for improved outcomes and more informed care,” said Curtis Dudley, Mercy Vice President of Data Analytics. “We’re excited about the potential for increased speed, scalability and an expanded catalog of analytics solutions the SAS and Microsoft partnership brings in helping us deliver a new care model.”
To provide a seamless experience and help organizations accelerate their cloud transformation initiatives, SAS and Microsoft are working together to ensure that SAS products and solutions can be successfully deployed and run effectively on Azure.
“SAS and Microsoft have a shared vision of helping customers accelerate their digital transformation initiatives. We both understand that it is about enrichment of data and improving lives through better decisions,” said Oliver Schabenberger, SAS Chief Technology Officer and Chief Operating Officer. “Partnering with Microsoft gives customers a more seamless path to the cloud that provides faster, more powerful and easier access to SAS solutions and enables trusted decisions with analytics that everyone – regardless of skill level – can understand.”
This will include optimizing SAS® Viya®, the latest release of the company’s cloud-native offering, for Azure as well as integrating SAS’ deep portfolio of industry solutions, from fraud to risk to retail, into the Azure Marketplace to provide improved productivity and enhanced business outcomes for customers.
“The partnership between SAS as a leader in the analytics space, and Microsoft as a leader in cloud makes for an interesting strategic alliance. With SAS planning to build integrations across Microsoft’s entire cloud portfolio (Azure, Microsoft 365, Dynamics 365 & Power BI) it opens up a lot of joint solution potential,” said Steve White, Program Vice President, Channels and Alliances at IDC.
Additionally, through the partnership, Microsoft and SAS will explore opportunities to integrate SAS analytics capabilities, including industry-specific models, within Azure and Dynamics 365 and build new market-ready joint solutions for customers that are natively integrated with SAS services across multiple vertical industries. This further integration will enable SAS customers to capitalize on the scalability and flexibility of the cloud for their analytics and AI workloads. For example, Microsoft and SAS are already empowering customers with solutions that help them capitalize at scale on the vast amount of data being generated by the Internet of Things by combining Microsoft’s Azure IoT platform with SAS’ edge-to-cloud IoT analytics and AI capabilities. Currently, the Town of Cary, NC, is using a joint IoT offering from Microsoft and SAS to power a critical flood prediction solution.
“Localized flooding is something all communities experience, and ours is no exception,” said Nicole Raimundo, Cary’s Chief Information Officer. “Using sensors, weather data, SAS IoT analytics and the Azure IoT platform, we expect to increase situational awareness of rising stream levels, predict where flooding might occur, and improve our emergency response through automation. Cary is also proud to be able to share this data with our neighboring communities to help them better serve their citizens.”
Supported with joint co-selling and go-to-market activities, additional SAS products and solutions will begin rolling out later this year. This will enable SAS and Microsoft customers to address some of their most critical and complex analytical challenges while fostering continuous innovation. SAS is also committed to using Microsoft 365 and Dynamics 365 to power its internal operations.
Today’s announcement is in conjunction with Virtual SAS Global Forum 2020, the world’s premier analytics conference. Due to the COVID-19 pandemic, this year’s conference is being held virtually. Register now for the June 16 virtual event to hear more on this partnership from SAS and Microsoft executives during the SAS Executive Welcome at 11 a.m. ET.
To learn more about how SAS and Microsoft are partnering and to register for an upcoming SAS on Azure virtual event, please visit sas.com/Microsoft.
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
It’s a big day for you. Back-to-back meetings are scheduled with critical customers and partners, and a parent-teacher conference is sandwiched in there as well. As you’re headed toward the last meeting, suddenly you cannot remember the key talking points. Who sent you the pre-read notes? Was it Taylor? No, possibly Drew. No luck. You are about two minutes from reaching the meeting room, and you want more than anything to pull out your phone and scream at it.
If only there existed an intelligent system that enabled you to find information this effortlessly. Now, there is: Meeting Insights provides AI capabilities that help you find information before, during, and after meetings as easily as if you had your own assistant to support you. Meeting Insights is now available for commercial Microsoft 365 and Office 365 customers in Outlook mobile (on both Android and iOS devices) and Outlook on the web. We would like to pull back the cover and talk about the science and technology that drives this scenario. Also, we’ll share why Meeting Insights is only the tip of the iceberg in how we at Microsoft are developing AI-powered capabilities to simplify and improve customer experience and productivity. We’re currently testing two new features that expand intelligent content recommendations to new scenarios in Outlook.
Providing usefulness in every context
Customers often say that finding content from meetings is a challenge. Therefore, we set out to build an intelligent personalized solution that provides customers with information from their mailboxes, OneDrive for Business accounts, and SharePoint sites to better help them accomplish the goals of their meetings.
The solution we developed powers the Meeting Insights feature that makes meetings more effective by helping customers:
Prepare for their meetings by offering them content they haven’t had a chance to read or may want to revisit;
Access relevant content during their meetings with ease;
Retrieve information about completed meetings by returning content presented during the meeting, sent meeting notes, and other relevant post-meeting material
Currently, Meeting Insights can be found on more than 40% of all Outlook mobile and Outlook on the web meetings.
Large-scale, personal, privacy-preserving AI
The most useful emails and files for a meeting may change over time (for example, those most useful before may be different than the ones most useful during or after). In order to create a relevant and useful service, we needed to find a way to reason across information shared by a customer as well as the files in their organization that they have permission to access and have opted to share. Microsoft 365 upholds a strict commitment to protecting customer data—promising to only use customer data for agreed upon services and not look at data during development or deployment of a new feature. This privacy promise, rather than being a hindrance, spurred us to think creatively and to innovate. As detailed below, we use a creative combination of weak and self-supervised machine learning (ML) algorithms in Meeting Insights to train large-scale language models without looking at any customer data.
The need to efficiently reason over millions of private corpora, themselves each potentially containing millions of items, underscores the complexity of the problem we needed to solve in Meeting Insights. To accomplish this reasoning, Meeting Insights enlists the help of Microsoft Graph, where shared data is captured in a graph representation. Microsoft Graph provides convenient APIs to reason over all of the shared email, files, and meetings for customers as well as the relationship between these items. This provides a high level of personalization to accurately meet customer needs.
Building intelligent features like Meeting Insights in the enterprise setting poses additional problems to the standard ML workflow. In enterprise settings, customers have high expectations of new products—especially the ones in their critical workflows and even more so when they are paying for the service. Because there is a need for an initial model to work out of the gate, standard ML workflows, which deploy a heuristic model with moderate performance and take time to learn from interaction data, lead to a lack of product acceptance. In Meeting Insights, we use ML algorithms that require less supervision to personalize customers’ experiences more quickly.
This challenge, which we refer to as the ‘’jump-start’’ problem, is therefore critical to product success in enterprise scenarios. This goes beyond standard “cold-start” challenges where data about a particular item or new user of a system is lacking, and instead the primary challenge is to get the entire process off the ground. Common approaches to improve model performance before deployment, such as getting annotations from crowd-sourced judges, have limited to no applicability due to the privacy-sensitive and personal nature of the recommendation and learning challenges. Finally, Microsoft 365 is used all over the world, and we wanted to make this technology available as broadly as possible and not simply to a few select languages.
Figure 1: Schematic depiction of how we train the model for recommending emails in Meeting Insights.
Solving the technical challenges
In order to make Meeting Insights possible, we needed to leverage three key components: weak supervision that is language agnostic, personalization enriched by the Microsoft Graph, and an agile, privacy-preserving ML pipeline.
Weak supervision: Large-scale supervised learning provides state-of-the-art results for many applications. However, this is impractical when building new enterprise search scenarios due to the privacy-sensitive and personal nature of the problem space. Instead of having annotators labeling data, we turned to weak supervision, an approach where heuristics can be defined to programmatically label data. To apply weak supervision to this task, we used Microsoft’s compliant experimentation platform. Emails and files attached to meetings were assigned a positive label, and all emails and files which the organizer could have attached at meeting creation time but did not were assigned a negative label. The benefit of using weak supervision for this problem went beyond preserving privacy as it allowed us to quickly and cheaply scale across languages and communication styles—all of which would be extremely challenging with a strongly supervised modeling approach involving annotators.
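To make the labeling heuristic concrete, here is a minimal sketch of how such weak labels could be derived programmatically from meeting records. The `Meeting` structure and item identifiers are hypothetical stand-ins, not the actual pipeline's data model:

```python
from dataclasses import dataclass, field

@dataclass
class Meeting:
    organizer: str
    attached_ids: set = field(default_factory=set)   # items the organizer attached
    candidate_ids: set = field(default_factory=set)  # items they could have attached

def weak_labels(meeting):
    """Heuristic labeling: attached items are positives, unattached candidates are negatives."""
    return [(item_id, 1 if item_id in meeting.attached_ids else 0)
            for item_id in sorted(meeting.candidate_ids)]

m = Meeting("organizer@example.com",
            attached_ids={"doc-1"},
            candidate_ids={"doc-1", "doc-2", "doc-3"})
labels = dict(weak_labels(m))
```

Because the labels fall out of behavior the system already records rather than human annotation, no one inspects the underlying content, and the approach carries over to any language or communication style.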
Personalization: Identifying the most relevant and useful information for a customer requires understanding the people and phrases that are important for that person. In order to identify the candidate set of relevant items and rank them, we leverage personalized representations of the most important key phrases and key people for a person. These personalized representations are learned in a self-supervised and privacy-preserving manner from nodes and edges in the Microsoft Graph. The meeting context is then combined with these personalized key-phrase and people representations to construct a candidate set using the Microsoft Search endpoint, which uses the same Microsoft Search technology powering search in applications such as Outlook, Teams, and SharePoint. In the final ranking stage, these personalized representations as well as more general embeddings are used to compute semantic relatedness between the context and candidate items, relationship strength via graph features, and collaboration strength based on the relationships between key people.
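In highly simplified form, the ranking stage can be pictured as scoring candidate items by embedding similarity to the meeting context. The vectors and item names below are toy values; the real ranker combines several signals (graph features, collaboration strength), not cosine similarity alone:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(context_vec, candidates):
    """Rank candidate items by semantic relatedness to the meeting context."""
    scored = [(item, cosine(context_vec, vec)) for item, vec in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

context = [0.9, 0.1, 0.0]  # hypothetical embedding of the meeting invitation
candidates = {
    "notes.docx":  [0.8, 0.2, 0.1],
    "budget.xlsx": [0.0, 0.1, 0.9],
}
ranking = rank_candidates(context, candidates)
```

Here the item whose embedding points in roughly the same direction as the meeting context surfaces first, which is the core intuition behind semantic relatedness scoring.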
Agile, privacy-preserving ML pipelines: As noted above, preserving the privacy of our customers’ data is sacrosanct for Microsoft. The weak and self-supervised algorithmic techniques described above allow us to algorithmically train highly accurate, language-agnostic, large-scale models without having to look at customer data. However, in order to put the algorithms into practice, test them, and innovate, we needed a platform that makes approaches like this possible. Innovations on the modeling front went hand in hand with the development of ML platforms and processes that allowed our scientists to remain agile. Our in-house compliant experimentation platform provides key privacy safeguards. For example, our algorithms can operate on customer content to provide recommendations directly to customers, but our engineers cannot see that content except when it’s their own. Many tools were developed to assist in monitoring and debugging our ML pipelines, firing off alerts when data quality or the correlations between signals and labels diverged from expected values.
Self-hosting to improve for our customers
As we developed Meeting Insights, we first rolled it out to internal Microsoft customers and instrumented their interactions with the experience to identify areas for improvement. Early on, we saw from the instrumented data that 90% of the usage of Meeting Insights on a given day was for meetings that day or the following day. Armed with this datapoint, we were able to implement a significant optimization by prefetching the insights for these meetings the moment the customer opens their calendar. This data-informed strategy resulted in a 50% reduction of customer-perceived latency.
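A minimal sketch of that prefetch policy, assuming a hypothetical list of meeting records with start times (the function name and record shape are illustrative, not the product's API):

```python
import datetime as dt

def meetings_to_prefetch(meetings, now):
    """Prefetch insights only for meetings today or tomorrow, where ~90% of usage falls."""
    allowed = {now.date(), now.date() + dt.timedelta(days=1)}
    return [m for m in meetings if m["start"].date() in allowed]

now = dt.datetime(2020, 6, 15, 9, 0)
meetings = [
    {"id": "standup", "start": dt.datetime(2020, 6, 15, 14, 0)},  # today
    {"id": "review",  "start": dt.datetime(2020, 6, 16, 10, 0)},  # tomorrow
    {"id": "offsite", "start": dt.datetime(2020, 6, 20, 10, 0)},  # next week
]
to_fetch = [m["id"] for m in meetings_to_prefetch(meetings, now)]
```

Fetching insights for this small window when the calendar opens trades a little extra background work for the latency win described above.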
Customer engagement with the deployed product showed other strong temporal effects worth calling out for this experience:
For meetings, freshness is important with about 5% of insights clicks happening within 15 minutes of the meeting being created.
For email insights, 30% of clicks go to emails sent/received in the 24 hours preceding the time of the user request.
For file insights, 35% of clicks go to files created or modified in the 24 hours preceding the time of the user request.
In less than four months after shipping our first Meeting Insights experience (for meeting invitations written in English), we were able to expand support to all enterprise customers across all languages. This was made possible by effectively leveraging the Microsoft Graph, being creative in the low-cost modeling approaches we employed, and being careful in the design of our AI solutions by using weak supervision and avoiding language specific dependencies. Over the next few months, we will be rolling Meeting Insights out to Cortana Briefing Mail recipients.
Meeting Insights is currently shown on more than 40% of opened meetings on supported Outlook clients, with customers reporting two out of three suggestions to be useful.
Providing broader contextual intelligence
Meeting Insights is not the only place where we are providing contextual intelligence that makes life easier for our customers. We are looking at how we can use Meeting Insights to accelerate our offerings in other scenarios using techniques like transfer learning, which has proven to be an effective and efficient way for us to gain reusable value from AI models learned for one scenario but reapplied to another.
For example, we are now transferring the learnings from our Meeting Insights models to power other intelligent content recommendations features such as “Suggested Attachments” and “Suggested Reply with File” on Outlook. These features take a customer and an email as input to return contextually relevant attachment suggestions that significantly reduce the time and effort required to share content via email.
“Suggested Attachments” and “Suggested Reply with File” are features currently in testing phases. We look forward to adding new offerings for Microsoft 365 users and beyond for intelligent content recommendation.
Imagine you’re heading to that last meeting again after an exceptionally busy day. You’ve suddenly forgotten the talking points, and you just can’t seem to recall who sent those pre-read notes. Was it Taylor? Drew? You feel like shouting at the sky, but then a thought flashes into your mind. You calmly pull up Outlook mobile on your phone as you approach the room, and with a simple tap on the meeting, your pre-read notes appear at the bottom of the screen thanks to Meeting Insights. Now, you’ve got this.
We look forward to continuing to improve life for our customers, and we hope the next time you walk into a meeting, you also walk in with more confidence knowing that Meeting Insights is there to assist you.
Every day, developers and researchers are finding creative ways to leverage AI to augment human intelligence and solve tough problems. Whether they’re training a computer vision model that can spot endangered snow leopards or help us do our business expenses more easily when we scan pictures of receipts, they need a lot of quality pictures to do it. Developers usually crowd source these large batches of pictures by enlisting the help of gig workers to submit photos, but often, these calls for photos feel like a black box. Participants have little insight into why they’re submitting a photo and can feel like their time was lost when their submissions are rejected without explanation. At the same time, developers can find that these sourcing projects take a long time to complete due to lower quality and less diverse inputs.
We’re excited to announce that Trove, a Microsoft Garage project, is exploring a solution that can enhance the experience and agency for both parties. Trove is a marketplace app that allows people to contribute photos to AI projects that developers can then use to train machine learning models. Interested parties can request an invite to join the experiment as a contributor or developer. Trove is currently accepting a small number of participants in the United States on both Android and iOS.
A marketplace that puts transparency and choice first
Today, most data collection is passive, with many people unaware that their data is being collected or not making a real-time, active choice to contribute their information. And even those who contribute more directly to model training projects are often not provided the greater context and purpose of the project; there’s little to no feedback loop to correct and align data submissions to better fit the needs of the project.
For people who rely on this data gig work as an important source of income, this rejection experience can leave them feeling frustrated and without any agency to contribute better submissions and a higher return on their time investment. With machine learning being a critical step in unlocking advancements from speech to image recognition, there’s an important opportunity to increase the quality of data, while making sure that contributors have the clarity and choice they need to participate in the process.
The Trove team has found a way to overcome these tough tradeoffs in a marketplace solution that emphasizes greater communication, context, and feedback between developers and project participants. “There’s a better way we can do this. You can have the transparency of how your data is being used and actually want to opt in to contribute to these projects and advance science and AI,” shares Krishnan Raghupathi, the Senior Program Manager for Trove. “We’d love to see this become a community where people are a key part of the project.”
To read more about key features and how Trove works for developers and contributors, check it out on the Garage Workbench.
Aspiring to higher quality data and increased contributor agency
The team behind Trove was originally inspired by thought leaders exploring how we can embrace the need for a large volume of data to enable AI advancements, while providing more agency to contributors and recognizing the value of their data. “We wanted to explore these concepts through something concrete,” shared Christian Liensberger, the lead Principal Program Manager on the project. “We decided to form an incubation team and build something that could show how things could be different.”
In creating Trove, the incubation team had to think through principles that would guide them as they brought such an experience to life. They believe that the best framework to produce the higher quality data needed to train these AI models involves connecting content creators to AI developers more directly. Trove was built with a design and approach that focuses on four core principles:
Transparency: See all the projects available, details about who is posting them, and how your data will be used
Control: Decide which projects you want to contribute to, and control when and how much you contribute
Enrichment: Learn directly from AI developers how your contributions are valuable, and see how your participation will advance AI projects
Connection: Communicate with AI developers to stay informed on projects you contributed to
“I love working on this project, it’s a continuous shift between the user need for privacy and control, and professionals’ need for data to innovate and create new products,” said Devis Lucato, Principal Engineering Manager for Trove. “We’re pushing the boundaries of all the technologies that we touch, exploring new features and challenging decisions determined by the status quo.”
Before releasing this experiment to external users, the team piloted Trove with Microsoft employees from across the US. While Trove is still in an experimental phase, the team is excited for even more feedback. “Our solution is still a bit rough around the edges, but we want to hear from the community about what we should focus on next,” shares Christian. Trinh Duong, the Marketing Manager on the project added, “My favorite part about working on this has been how much the app incorporates users into the experience. We want to invite our users to reach out and join us as true participants in the creation of this concept.”
The team is welcoming feedback from experiment participants here, and is enthusiastic for the input of users who are as passionate about the principles of transparency, control, enrichment, and connection as they are.
At the 2005 Conference on Neural Information Processing Systems, researcher Hanna Wallach found herself in a unique position—sharing a hotel room with another woman. Actually, three other women to be exact. In the previous years she had attended, that had never been an option because she didn’t really know any other women in machine learning. The group was amazed that there were four of them, among a handful of other women, in attendance. In that moment, it became clear what needed to be done. The next year, Wallach and two other women in the group, Jennifer Wortman Vaughan and Lisa Wainer, founded the Women in Machine Learning (WiML) Workshop. The one-day technical event, which is celebrating its 15th year, provides a forum for women to present their work and seek out professional advice and mentorship opportunities. Additionally, the workshop aims to elevate the contributions of female ML researchers and encourage other women to enter the field. In its first year, the workshop brought together 100 attendees; today, it draws around a thousand.
In creating WiML, the women had tapped into something greater than connecting female ML researchers; they asked whether their machine learning community was behaving fairly in its inclusion and support of women. Wallach and Wortman Vaughan are now colleagues at Microsoft Research, and they’re channeling the same awareness and critical eye to the larger AI picture: Are the systems we’re developing and deploying behaving fairly, and are we properly supporting the people building and using them?
Senior Principal Researchers Jennifer Wortman Vaughan (left) and Hanna Wallach (right), co-founders of the Women in Machine Learning Workshop, bring a people-first approach to their work in responsible AI. The two have co-authored upward of 10 papers together on the topic, and they each co-chair an AI, Ethics, and Effects in Engineering and Research (Aether) working group at Microsoft.
Wallach and Wortman Vaughan each co-chair an AI, Ethics, and Effects in Engineering and Research (Aether) working group—Wallach’s group is focused on fairness, Wortman Vaughan’s on interpretability. In those roles, they help inform Microsoft’s approach to responsible AI, which includes helping developers adopt responsible AI practices with services like Azure Machine Learning. Wallach and Wortman Vaughan have co-authored upward of 10 papers together around the topic of responsible AI. Their two most recent publications in the space address the AI challenges of fairness and interpretability through the lens of one particular group of people involved in the life cycle of AI systems: those developing them.
“It’s common to think of machine learning as a fully automated process,” says Wortman Vaughan. “But people are involved behind the scenes at every step, making decisions about which data to use, what to optimize for, even which problems to solve in the first place, and each of these decisions has the potential to impact lives. How do we empower the people involved in creating machine learning systems to make the best choices?”
A framework for thinking about and prioritizing fairness
When Wallach took the lead on the Aether Fairness working group, she found herself getting the same question from industry colleagues, researchers in academia, and people in the nonprofit sector: Why don’t you just build a software tool that can be integrated into systems to identify issues of unfairness? Press a button, make systems fair. Some people asked in jest; others more seriously. Given the subjective and sociotechnical nature of fairness, there couldn’t be a single tool to address every challenge, and she’d say as much. Underlying the question, though, was a very real truth: Practitioners needed help. During a two-hour car ride while on vacation, Wallach had an aha moment listening to a Hidden Brain podcast episode about checklists. What practitioners wanted was a framework to help them think about and prioritize fairness.
“I’m getting this question primarily from people who work in the technology industry; the main way they know how to ask for structure is to ask for software,” she recalls thinking of the requests for a one-size-fits-all fairness tool. “But what they actually want is a framework.”
Wallach, Wortman Vaughan, Postdoctoral Researcher Luke Stark, and PhD candidate Michael A. Madaio, an intern at the time of the work, set out to determine if a checklist could work in this space, what should be on it, and what kind of support teams wanted in adopting one. The result is a comprehensive and customizable checklist that accounts for the real-life workflows of practitioners, with guidelines and discussion points for six stages of AI development and deployment: envision, define, prototype, build, launch, and evolve.
During the first of two sets of workshops, researchers presented participants with an initial AI fairness checklist culled from existing lists, literature, and knowledge of fairness challenges faced by practitioners. Participants were asked to give item-level feedback using sticky notes and colored dots to indicate edits and difficulty level of accomplishing list items, respectively. The researchers used the input to revise the checklist.
Co-designing is key
AI ethics checklists and principles aren’t new, but in their research, Wallach, Wortman Vaughan, and their team found current guidelines are challenging to execute. Many are too broad, oversimplify complex issues with yes/no–style items, and—most importantly—often appear not to have included practitioners in their design. Which is why co-designing the checklist with people currently on the ground developing AI systems formed the basis of the group’s work.
The researchers conducted semi-structured interviews exploring practitioners’ current approaches to addressing fairness issues and their vision of the ideal checklist. Separately, Wallach, Wortman Vaughan, and others in the Aether Fairness working group had built out a starter checklist culled from existing lists and literature, as well as their own knowledge of fairness challenges faced by practitioners. The researchers presented this initial checklist during two sets of workshops, revising the list after each based on participant input regarding the specific items included. Additionally, the researchers gathered information on anticipated obstacles and best-case scenarios for incorporating such a checklist into workflows, using the feedback, along with that from the semi-structured interviews, to finalize the list. When all was said and done, 48 practitioners from 12 tech companies had contributed to the design of the checklist.
During the process, researchers found that fairness efforts were often led by passionate individuals who felt they were on their own to balance “doing the right thing” with production goals. Participants expressed hope that having an appropriate checklist could empower individuals, support a proactive approach to AI ethics, and help foster a top-down strategy for managing fairness concerns across their companies.
A conversation starter
While offering step-by-step guidance, the checklist is not about rote compliance, says Wallach, and intentionally omits thresholds, specific criteria, and other measures that might encourage teams to blindly check boxes without deeper engagement. Instead, the items in each stage of the checklist are designed to facilitate important conversations, providing an opportunity to express and explore concerns, evaluate systems, and adjust them accordingly at natural points in the workflow. The checklist is a “thought infrastructure”—as Wallach calls it—that can be customized to meet the specific and varying needs of different teams and circumstances.
During their co-design workshops, researchers used a series of storyboards based on participant feedback to further understand the challenges and opportunities involved in incorporating AI fairness checklists into workflows.
And just as the researchers don’t foresee a single tool solving all fairness challenges, they don’t view the checklist as a solo solution. The checklist is meant to be used alongside other methods and resources, they say, including software tools like Fairlearn, the current release of which is being demoed this week at the developer event Microsoft Build. Fairlearn is an open-source Python package that includes a dashboard and algorithms to support practitioners in assessing and mitigating unfairness in two specific scenarios: disparities in the allocation of opportunities, resources, and information offered by their AI systems and disparities in system performance. Before Fairlearn can help with such disparities, though, practitioners have to identify the groups of people they expect to be impacted by their specific system.
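The first scenario Fairlearn targets, disparities in allocation, can be illustrated with a rough plain-Python sketch of the kind of metric the package surfaces. The loan-approval data below is hypothetical, and this hand-rolled function is only a conceptual stand-in for Fairlearn's tested implementations and dashboard:

```python
# Sketch of a demographic-parity-style disparity metric, computed by
# hand on hypothetical data. A package like Fairlearn reports metrics
# of this kind (per-group selection rates and the gap between them).

def selection_rate(predictions):
    """Fraction of positive (e.g., 'approve') decisions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate between any two groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    per_group = {g: selection_rate(p) for g, p in by_group.items()}
    return max(per_group.values()) - min(per_group.values()), per_group

# Hypothetical loan-approval predictions (1 = approved), with a
# sensitive attribute recorded for each applicant.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, per_group = demographic_parity_difference(y_pred, groups)
print(per_group)  # {'A': 0.75, 'B': 0.25}
print(gap)        # 0.5
```

A gap of 0.5 here means group A is approved three times as often as group B, which is exactly the kind of allocation disparity a practitioner would then need to investigate, with the checklist's conversations guiding which groups to examine in the first place.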
The hope is the checklist—with such guidance as “solicit input on system vision and potential fairness-related harms from diverse perspectives”—will aid practitioners in making such determinations and encourage other important conversations.
“We can’t tell you exactly who might be harmed by your particular system and in what way,” says Wallach. “But we definitely know that if you didn’t have a conversation about this as a team and really investigate this, you’re definitely doing it wrong.”
Tackling the challenges of interpreting interpretability
As with fairness, there are no easy answers—and just as many complex questions—when it comes to interpretability.
Wortman Vaughan recalls attending a panel discussion on AI and society in 2016 during which one of the panelists described a future in which AI systems were so advanced that they would remove uncertainty from decision-making. She was confounded and angered by what she perceived as a misleading and irresponsible statement. The uncertainty inherent in the world is baked into any AI systems we build, whether it’s explicit or not, she thought. The panelist’s comment weighed on her mind and was magnified further by current events at the time. The idea of “democratizing AI” was gaining steam, and models were forecasting a Hillary Rodham Clinton presidency, an output many were treating as a done deal. She wondered to the point of obsession, how well do people really understand the predictions coming out of AI systems? A dive into the literature on the ML community’s efforts to make machine learning interpretable was far from reassuring.
“I got really hung up on the fact that people were designing these methods without stopping to define exactly what they mean by interpretability or intelligibility, basically proposing solutions without first defining the problem they were trying to solve,” says Wortman Vaughan.
That definition rests largely on who’s doing the interpreting. To illustrate, Wallach provides the example of a machine learning model that determines loan eligibility: Details regarding the model’s mathematical equations would go a long way in helping an ML researcher understand how the model arrives at its decisions or if it has any bugs. Those same details mean little to nothing, though, to applicants whose goal is to understand why they were denied a loan and what changes they need to make to position themselves for approval.
In their work, Wallach and Wortman Vaughan have argued for a more expansive view of interpretability, one that recognizes that the concept “means different things to different people depending on who they are and what they’re trying to do,” says Wallach.
As ML models continue to be deployed in the financial sector and other critical domains like healthcare and the justice system—where they can significantly affect people’s livelihood and well-being—claiming ignorance of how an AI system works is not an option. While the ML community has responded to this increasing need for techniques that help show how AI systems function, there’s a severe lack of information on the effectiveness of these tools—and there’s a reason for that.
“User studies of interpretability are notoriously challenging to get right,” explains Wortman Vaughan. “Doing these studies is a research agenda of its own.”
Not only does designing such a study entail qualitative and quantitative methods, but it also requires an interdisciplinary mix of expertise in machine learning, including the mathematics underlying ML models, and human–computer interaction (HCI), as well as knowledge of both the academic literature and routine data science practices.
The enormity of the undertaking is reflected in the makeup of the team that came together for the “Interpreting Interpretability” paper. Wallach, Wortman Vaughan, and Senior Principal Researcher Rich Caruana have extensive ML experience; PhD student Harmanpreet Kaur, an intern at the time of the work, has a research focus in HCI; and Harsha Nori and Samuel Jenkins are data scientists who have practical experience building and using interpretability tools. Together, they investigated whether current tools for increasing the interpretability of models actually result in more understandable systems for the data scientists and developers using them.
Three visualization types for model evaluation are output by the popular and publicly available InterpretML implementation of GAMs (top) and the implementation of SHAP in the SHAP Python package (bottom), respectively. Left column: global explanations. Middle column: component (GAMs) or dependence plot (SHAP). Right column: local explanations.
Tools in practice
The study focuses on two popular and publicly available tools, each representative of one of two techniques dominating the space: the InterpretML implementation of GAMs, which uses a “glassbox model” approach, by which models are designed to be simple enough to understand, and the implementation of SHAP in the SHAP Python package, which uses a post-hoc explanation approach for complex models. Each tool outputs three visualization types for model evaluation.
Through pilot interviews with practitioners, the researchers identified six routine challenges that data scientists face in their day-to-day work. The researchers then set up an interview study in which they placed data scientists in context with data, a model, and one of the two tools, assigned randomly. They examined how well 11 practitioners were able to use the interpretability tool to uncover and address the routine challenges.
The researchers found participants lacked an overall understanding of the tools, particularly in reading and drawing conclusions from the visualizations, which contained importance scores and other values that weren’t explicitly explained, causing confusion. Despite this, the researchers observed, participants were inclined to trust the tools. Some came to rely on the visualizations to justify questionable outputs—the existence of the visualizations offering enough proof of the tools’ credibility—as opposed to using them to scrutinize model performance. The tools’ public availability and widespread use also contributed to participants’ confidence in the tools, with one participant pointing to a tool’s availability as an indication that it “must be doing something right.”
Following the interview study, the researchers surveyed nearly 200 practitioners, who were asked to complete an adjusted version of the interview study task. The purpose was to scale up the findings and gain a sense of practitioners’ overall perception and use of the tools. The survey largely confirmed the interview study’s findings that participants struggled to understand the visualizations and used them superficially, but it also revealed a path for future work around tutorials and interactive features to support practitioners in using the tools.
“Our next step is to explore ways of helping data scientists form the right mental models so that they can take advantage of the full potential of these tools,” says Wortman Vaughan.
The researchers conclude that as the interpretability landscape continues to evolve, studies of the extent to which interpretability tools are achieving their intended goals and practitioners’ use and perception of them will continue to be important in improving the tools themselves and supporting practitioners in productively using them.
Putting people first
Fairness and interpretability aren’t static, objective concepts. Because their definitions hinge on people and their unique circumstances, fairness and interpretability will always be changing. For Wallach and Wortman Vaughan, being responsible creators of AI begins and ends with people, with the who: Who is building the AI systems? Who do these systems take power from and give power to? Who is using these systems and why? In their fairness checklist and interpretability tools papers, they and their co-authors look specifically at those developing AI systems, determining that practitioners need to be involved in the development of the tools and resources designed to help them in their work.
By putting people first, Wallach and Wortman Vaughan contribute to a support network that includes resources and also reinforcements for using those resources, whether that be in the form of a community of likeminded individuals like in WiML, a comprehensive checklist for sparking dialogue that will hopefully result in more trustworthy systems, or feedback from teams on the ground to help ensure tools deliver on their promise of helping to make responsible AI achievable.
“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”
OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, Altman said. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.
“We are seeing that larger-scale systems are an important component in training more powerful models,” Altman said.
For customers who want to push their AI ambitions but who don’t require a dedicated supercomputer, Azure AI provides access to powerful compute with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.
At its Build conference, Microsoft announced that it would soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.
It also unveiled a new version of DeepSpeed, an open source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
Along with the DeepSpeed announcement, Microsoft announced it has added support for distributed training to the ONNX Runtime. The ONNX Runtime is an open source library designed to enable models to be portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today’s update adds support for model training and incorporates the optimizations from the DeepSpeed library, enabling performance improvements of up to 17 times over the current ONNX Runtime.
“We want to be able to build these very advanced AI technologies that ultimately can be easily used by people to help them get their work done and accomplish their goals more quickly,” said Microsoft principal program manager Phil Waymouth. “These large models are going to be an enormous accelerant.”
In “self-supervised” learning, AI models can learn from large amounts of unlabeled data. For example, models can learn deep nuances of language by absorbing large volumes of text and predicting missing words and sentences. Art by Craighton Berman.
Learning the nuances of language
Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.
Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.
A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.
This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.
In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.
As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
“This has enabled things that were seemingly impossible with smaller models,” said Luis Vargas, a Microsoft partner technical advisor who is spearheading the company’s AI at Scale initiative.
The improvements are somewhat like jumping from an elementary reading level to a more sophisticated and nuanced understanding of language. But it’s possible to improve accuracy even further by fine tuning these large AI models on a more specific language task or exposing them to material that’s specific to a particular industry or company.
“Because every organization is going to have its own vocabulary, people can now easily fine tune that model to give it a graduate degree in understanding business, healthcare or legal domains,” he said.
One of the defining aspects of COVID-19 is its disproportionate impact on underserved communities and the harsh spotlight it shines on existing social equity issues around the world. From access to quality education, jobs or affordable healthcare, COVID-19 is magnifying virtually every inequality in our communities.
There has never been a more important time to seize the moment and create the solutions the world needs, making a positive and lasting contribution to the social inequity issues of our generation. Solutions will come from all corners, and technology innovators will need to play their part.
Building on Microsoft’s long-standing efforts to ensure technology fulfills its promise to address the world’s biggest challenges, Microsoft joined efforts with Giving Tech Labs to unleash the power of public interest technology. This week, at Build 2020, we are offering developers a preview of X4Impact, the innovation hub spawned by this collaboration, and the opportunity to demo this powerful tool. Built on Azure, X4Impact is an AI-powered market intelligence platform for social innovation where people can define social challenges, contribute ideas, access solutions and identify funding.
Clearly the challenges are complex and will require strong vision and collaboration across governments, nonprofits, donors and the private sector. The power of AI, data science and high-performance cloud computing have created an unprecedented ability to produce insights and solutions for critical issues.
Take, for example, our work with the COVID-19 High Performance Computing Consortium led by the White House. Bringing together the Federal government, industry and academics, Microsoft is providing researchers in computer science, biology, medicine and public health access to the world’s most powerful computing resources. This collaboration is helping speed the pace of scientific discovery of treatments and a potential vaccine for COVID-19.
While this is a powerful scenario, it’s only one example. The world needs thousands of these solutions to meet the wide-ranging issues that we’re facing today.
Part of the answer lies in unlocking the power of technology for the public interest – a field dedicated to deploying advanced technology, data science, AI and sustainability models to address urgent issues in society. It is about building solutions that work because they reflect the needs of the communities they serve. To achieve this, technology for public interest encompasses important principles such as:
Bringing nonprofits, government, the private sector and donors together to drive change through a focus on empathy and inclusion in the design of solutions
Using ethical AI to transform data into knowledge with a relentless focus on measurable impact for the communities needing help
Recognizing that, because technology alone does not solve problems, building long-lasting, sustainable processes and capacity is essential
What does technology for public interest mean for those working on the front lines? Let’s take clean drinking water as an example. Fighting cholera is one of the world’s most pressing needs. Five million cholera cases are recorded across the globe each year, and $3 billion is spent annually on treatments and lost productivity that could be avoided through early detection. Having lost family members of her own to cholera, Dr. Katherine Clayton, an engineer by training, founded a startup called OmniVis, which has now developed a cloud-based platform that uses a smartphone and mobile, affordable hardware to test water in the field and produce cholera analysis and insights. The relative affordability and speed at which results are returned will allow NGOs to alert nearby communities before an outbreak spreads. This will help save lives.
At Microsoft, we are committed to being a catalyst to help thousands of organizations like OmniVis pursue their technology for public interest ideas. That’s why, in February, we launched a new Global Social Entrepreneurship program to offer qualified startups access to technology, education, customers and grants. Our global initiative is designed to help social entrepreneurs build and scale their organizations to do good globally. The program is available in 140 countries and will actively seek to support underrepresented founders with diverse perspectives and backgrounds.
In this environment of collective problem-solving, we need an easy way for developers to identify the greatest unmet needs, whether through cholera detection or COVID-19 treatments, where technology can play a critical role in helping address these challenges. Similarly, we need to map these social challenges to available funding sources and collaborators to fully understand the opportunities for solution creation.
X4Impact will help social entrepreneurs, nonprofits, citizen developers, funders and foundations identify where they can deploy their time and talent to collectively build a better world. Leveraging the power of AI, X4Impact aggregates content from hundreds of thousands of IRS 990 and 990-PF filings, private investing filings with the SEC and active grants from the federal government, foundations and private companies, in addition to content from over 5,000 trusted sources. The result is over 30 million units of knowledge indexed under the 17 United Nations Sustainable Development Goals and 231 impact indicators. With access to this market intelligence, we can collectively build much-needed solutions at a new level of scale and impact.
While the platform will launch this July, we call on tech trailblazers to join the public interest movement now by registering at x4i.org to receive an invitation to demo the platform. This work builds on our current offers for all nonprofits and we recommend reviewing our COVID-19 Resource Guide for Nonprofits to learn about additional support. At Microsoft, we are committed to learning how to better drive social innovation each day while evolving our social business model to help move nonprofit missions forward and drive social good.