post

When should you right click publish

Some people say ‘friends don’t let friends right click publish’ but is that true? If they mean that there are great benefits to setting up a CI/CD workflow, that’s true and we will talk more about these benefits in just a minute. First, let’s remind ourselves that the goal isn’t always coming up with the best long-term solution.

Technology moves fast, and as developers we are constantly learning and experimenting with new languages, frameworks, and platforms. Sometimes we just need to prototype something quickly in order to evaluate its capabilities. That's a classic scenario where right click publish in Visual Studio provides the right balance between how much time you are going to spend (just a few seconds) and the options that become available to you (quite a few, depending on the project type), such as publish to IIS, FTP & Folder (great for xcopy deployments and integration with other tools).

Continuing with the theme of prototyping and experimenting, right click publish is the perfect way for existing Visual Studio customers to evaluate Azure App Service (PaaS). By following the right click publish flow you get the opportunity to provision new instances in Azure and publish your application to them without leaving Visual Studio.

When the right click publish flow has been completed, you immediately have a working application running in the cloud.

Platform evaluations and experiments take time, and during that time right click publish helps you focus on the things that matter. When you are ready and the demand rises for automation, repeatability, and traceability, that's when investing in a CI/CD workflow starts making a lot of sense:

  • Automation: builds are kicked off and tests are executed as soon as you check in your code
  • Repeatability: it’s impossible to produce binaries without having the source code checked in
  • Traceability: each build can be traced back to a specific version of the codebase in source control, which can then be compared with another build to identify the differences (see the sketch after this list)
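
To make the traceability point concrete, here is a minimal sketch in Python (using the third-party requests package against the Azure DevOps build REST API) that looks up which branch and commit a given build was produced from. The organization, project, build ID, and token are placeholders, and the API version shown is an assumption about what your account supports:

    import requests

    ORG = "my-organization"          # placeholder
    PROJECT = "my-project"           # placeholder
    BUILD_ID = 1234                  # placeholder
    PAT = "<personal-access-token>"  # placeholder

    # Ask Azure DevOps for the build record; it includes the source branch and
    # commit, which is the link from a shipped binary back to its source code.
    url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}"
    resp = requests.get(url, params={"api-version": "6.0"}, auth=("", PAT))
    resp.raise_for_status()
    build = resp.json()

    print(build["buildNumber"], build["sourceBranch"], build["sourceVersion"])

Run against a real build, this prints the build number, a branch such as refs/heads/master, and the full commit SHA, which is exactly the link you need when working backwards from a customer bug report.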

The right time to adopt CI/CD typically coincides with a maturity milestone, either for the application or for the team building it. If you are the only developer working on your application you may feel that setting up CI/CD is overkill, but automation and traceability can be extremely valuable even to a single developer once you start shipping to your customers and have to support multiple versions in production.

With a CI/CD workflow you are guaranteed that all binaries produced by a build can be linked back to the matching version of the source code. You can go from a customer bug report to looking at the matching source code easily, quickly and with certainty. In addition, the automation aspects of CI/CD save you valuable time performing common tasks like running tests and deploying to testing and pre-production environments, lowering the overhead of good practices that ensure high quality.

As always, we want to see you succeed, so if you run into any issues using publish in Visual Studio or setting up your CI/CD workflow, let me know in the comments section below and I'll do my best to get your question answered.

post

Engineers from Walmart and Microsoft will work side-by-side in new ‘cloud factory’

Walmart is expanding its technology center in Austin, Texas, to accelerate digital innovation that transforms how associates work and delivers more convenient ways for customers to shop.

About 30 technologists, including engineers from both Walmart and Microsoft, will work together side by side in the cloud factory, expected to open in early 2019 as an extension to a strategic partnership announced in July. The factory will be an expansion of Walmart’s innovation hub, a vibrant workplace opened earlier this year in the center of Austin’s growing technology scene.

The Walmart-Microsoft team – known internally as “4.co” for its location at Fourth and Colorado streets – will initially focus on migrating Walmart’s thousands of internal business applications to Microsoft Azure. The team will also build new, cloud-native applications. The collaboration will be part of a multi-year journey to modernize Walmart’s enterprise application portfolio, create more efficient business processes and decrease operational costs associated with legacy architecture.

Walmart's innovation hub in Austin, Texas.

The work will continue digital solutions already deployed at Walmart, including thousands of IoT sensors on HVAC and refrigeration systems that process a billion daily data messages from stores worldwide. The solution helps Walmart save energy and prevent product loss.

Based in Arkansas, the global retailer is also deploying Microsoft AI in a number of use cases, including internal chatbots that help associates navigate benefits, find mentors and better manage supplier engagements. Walmart has also deployed a natural language processing platform capable of processing 40 terabytes of unstructured text and providing near real-time insights and actions in support of business operations.

To shed light on 4.co and the company’s innovations, Transform chatted with Clay Johnson, executive vice president and enterprise chief information officer at Walmart.

TRANSFORM: Why is digital transformation important to Walmart?

JOHNSON: Technology is changing the way people live, work and shop – and the pace of change is only getting faster. We want to create new and incredibly convenient ways for our customers to shop, and as technology cycles get shorter and shorter, we must increase our speed and agility.

To do this, we’re digitally transforming processes and empowering our associates with consumer-grade technology and tools. Digital transformation is pervasive across the company and we’re pushing the envelope to accelerate innovation.

TRANSFORM: How is Walmart using Microsoft AI? Where do you see it going?

Clay Johnson, executive vice president and enterprise chief information officer at Walmart.

JOHNSON: We’re layering AI across every facet of our business. We have lots and lots of data that represents value we can leverage. That’s where AI, machine learning and Cognitive Services in Microsoft Azure come in to play.

We took our natural language processing platform and a single business case – post-payment auditing – to build out the platform at massive scale and apply it in other use cases. For associates who are tasked with finding that needle in the haystack, machine learning will help them go through unstructured text quickly. But ultimately, it’s not just about efficiency. A lot of this will drive impact to the bottom line.

Associates can use our chatbots to find a mentor or ask questions about benefits. They can get simple questions answered quickly. Every minute we save them is more time they can spend on the sales floor helping our customers.

TRANSFORM: What are some benefits you’ve seen with other technologies like IoT and edge computing?

JOHNSON: With our IoT work and sensor enablement, we’re looking at our energy consumption and other factors to predict equipment failures before they happen. Improving equipment performance can result in enhanced energy efficiency, which lowers costs and our carbon footprint. That’s good for the customer and the environment. Sustainability is a big deal for us.

Putting IoT data into edge analytics lets us look at data at a store level and backhaul it to Azure to look at it across a region or the whole U.S. We started talking to Microsoft about this concept of a set of stores being a “micro-cloud,” and you roll them into Azure for data analytics and insights. A lot of this work is coming out of our Austin site.

TRANSFORM: How did the concept of 4.co come to be? What are the team’s goals?

JOHNSON: With this partnership with Microsoft, we started talking: "Hey, what's the best way to accelerate all the stuff we're doing here? We need help and expertise. We want to move fast. How do we partner our smart people with Microsoft's smart people?"

Then it was obvious: "Why don't we just co-locate the teams together?" We haven't done something like this before with co-location, but I think the outcomes are going to be huge and strengthen our partnership even more. You're going to see a lot more co-innovation around IoT, computer vision, big data and real-time analytics.

TRANSFORM: How do you think both companies will benefit?

JOHNSON: We're going to learn a lot from each other. We're going to learn a lot from Microsoft about which apps make sense to get to the cloud quickly and which don't.

Microsoft’s going to get to see stuff at a scale they’ve never seen before. [Walmart has 11,200 global stores and 2.2 million associates]. I think they’ll learn a lot from our footprint. Co-locating top engineers from both companies will deepen the technical brainpower for creating disruptive, large-scale enterprise solutions for Walmart.

Walmart's innovation hub in Austin, Texas.

TRANSFORM: Why did Walmart pick Austin for its technology office and 4.co? How are you interacting with the community there?

JOHNSON: You see a lot of tech companies and startups in Austin, and we wanted to blend that culture and the universities with our culture as we continue to grow and add tech talent to Walmart. Austin is a huge recruiting opportunity for us. Because our team in Austin works closely with our technology teams in Bentonville, [Arkansas], a pipeline of technology talent into Bentonville has also emerged.

Austin is also a very meetup-heavy city. So we were really purposeful in picking a location that puts us in the middle of the tech meetup scene. We host deep learning, AI and chatbot meetups. We do a lot of Women Who Code events.

We recently did a hackathon with Microsoft and the University of Texas at Austin, where we worked with a no-kill animal shelter. The students really enjoyed doing something for the local community. [The event focused on helping dogs through predictive modeling of the canine parvovirus disease].

TRANSFORM: What’s your vision for the future of retail? 

JOHNSON: I think you’re going to see an omnichannel-type world that is a blend of e-commerce and brick and mortar, where consumers can have access to things at any point, any time. They can order online and pick up in the store or have it delivered same-day. You can see we’re starting to do that with our online grocery right now.

Top photo: Walmart’s technology center in Austin, Texas. All photos courtesy of Walmart. Follow Clay Johnson @ClayMJohnson.

post

Building an ecosystem for responsible drone use and development on Azure

The next wave of computing is already taking shape around us, from IoT enabling businesses to sense all aspects of their operations in real time and take informed action, to cloud workloads running on those IoT devices so they don't require "always on" connectivity to the cloud to make real-time, context-aware decisions. This is the intelligent edge, and it will define the next wave of innovation, not just for business but also for how we address some of the world's most pressing issues.

Drones, or unmanned aircraft systems (UAS), are great examples of intelligent edge devices being used today to address many of these challenges, from search and rescue missions and natural disaster recovery to increasing the world's food supply with precision agriculture. With the power of AI at the edge, drones can have a profound impact in transforming businesses and improving society, as well as in assisting humans in navigating high-risk areas safely and efficiently.

With these advanced capabilities also comes great responsibility, including respecting the laws that govern responsible drone use in our airspace and being thoughtful about how drones are used to scan the environment. We believe it is important to protect data wherever it lives, from the cloud to the intelligent edge.

In addition to building the platforms for innovation, we at Microsoft believe it is crucial to also invest in companies and partnerships that will enable the responsible use of drones and associated data. Today we are making two important announcements that further our commitment to responsible use of drones as commercial IoT edge devices running on Microsoft Azure.

AirMap powered by Microsoft Azure

Today we announced that AirMap has selected Microsoft Azure as the company’s exclusive cloud-computing platform for its drone traffic management platform and developer ecosystem.

Drone usage is growing quickly across industries such as transportation, utilities, agriculture, public safety, emergency response, and more, to improve the efficiency and performance of existing business processes. Data generated by drones is infused with intelligence to augment the value that companies and governments deliver to customers and communities. However, concerns about regulatory compliance, privacy, and data protection still prevent organizations from adopting drone technology at scale.

AirMap works with civil aviation authorities, air navigation service providers, and local authorities to implement an airspace management system that supports and enforces secure and responsible access to low-altitude airspace for drones.

With AirMap’s airspace management platform running on Microsoft Azure, the two companies are delivering a platform that will allow state and local authorities to authorize drone flights and enforce local rules and restrictions on how and when they can be operated. Their solution also enables companies to ensure that compliance and security are a core part of their enterprise workflows that incorporate drones.

AirMap selected Microsoft Azure because it provides the critical cloud-computing infrastructure, security, and reliability needed to run these mission-critical airspace services and orchestrate drone operations around the world.

Earlier this year, Swiss Post, the postal service provider for Switzerland, joined with drone manufacturer Matternet and Insel Group, Switzerland's largest medical care system, to fly time-sensitive laboratory samples between Tiefenau Hospital and University Hospital Insel in Bern. This was an alternative to ground transport, where significant traffic can cause life-threatening delays. The operations are supported by Switzerland's airspace management system for drones, powered by AirMap and Microsoft Azure, for safety and efficiency.

DJI Windows SDK for app development enters public preview

Last May at our Build developer conference, we announced a partnership with DJI, the world’s leader in civilian drones and aerial imaging technology, to bring advanced AI and machine learning capabilities to DJI drones, helping businesses harness the power of commercial drone technology and edge cloud computing.

Today at DJI’s AirWorks conference, we are announcing the public preview of the Windows SDK, which allows applications to be written for Windows 10 PCs that control DJI drones. The SDK will also allow the Windows developer community to integrate and control third-party payloads like multispectral sensors, robotic components like custom actuators, and more, exponentially increasing the ways drones can be used in the enterprise.

With this SDK, we now have three methods to enable Azure AI services to interact with drone imagery and video in real-time:

  1. Drone imagery can be sent directly to Azure for processing by an AI workload (a minimal sketch of this option follows the list).
  2. Drone imagery can be processed on Windows running Azure IoT Edge with an AI workload.
  3. Drone imagery can be processed directly onboard drones running Azure IoT Edge with an AI workload.
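
As a minimal sketch of the first option, the snippet below sends a single drone image to Azure for analysis with the Computer Vision REST API. It is written in Python with the requests package; the endpoint, key, file name, and API version are placeholders and assumptions rather than anything taken from the DJI SDK itself:

    import requests

    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
    KEY = "<subscription-key>"                                        # placeholder

    # Read one captured frame from disk (placeholder file name).
    with open("drone_frame.jpg", "rb") as f:
        image_bytes = f.read()

    # Send the raw image bytes to the Computer Vision "analyze" operation,
    # asking for object and tag detection.
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Objects,Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()

    # Print each detected object with its confidence score.
    for obj in resp.json().get("objects", []):
        print(obj["object"], obj["confidence"])

The second and third options follow the same pattern, except the model runs in an Azure IoT Edge module on a nearby Windows device or on the drone itself, so the imagery never has to leave the edge.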

We take the security of data seriously, from the cloud to edge devices such as drones. Azure IoT Edge includes an important subsystem, called the security manager, which acts as a core for protecting the IoT Edge device and all its components by abstracting the secure silicon hardware. It is the focal point for security hardening and provides IoT device manufacturers the opportunity to harden their devices based on their choice of hardware secure modules (HSM). Finally, the Azure Certified for IoT program only certifies third-party Azure IoT Edge hardware that meets our strict security requirements.

We are encouraged by all the creative applications of drones we are seeing on Azure today across industry sectors. The following are a few examples:

  • SlantRange: An aerial remote sensing and data analytics company serving the information needs of the agriculture industry, with over two decades of experience in earth science, defense, and intelligence applications. The company has patented foundational technologies for aerial crop inspections and introduced innovative analytical methods that deliver valuable agronomic data within minutes of collection, anywhere in the world, using low-power edge computing devices. These technologies have propelled SlantRange's growth from a few Nebraska corn and soybean farms just a few seasons ago to over 40 countries and a wide variety of crops today, including contracts with many of the world's leading agricultural suppliers and producers.
  • Clobotics: A global AI company helping wind energy companies improve their productivity by automatically inspecting, processing, and reporting wind turbine blade defects. The company has inspected over 1,000 wind turbines around the world in the past few months. Clobotics' end-to-end solutions combine computer vision, machine learning, and data analytics software with commercial drones and sensors to help the wind energy industry automate its inspection service. Clobotics' Wind Turbine Data Platform, along with its computer vision-based edge computing service, provides a first-of-its-kind wind turbine blade life-cycle management service. As a Microsoft worldwide Azure partner, Clobotics works closely with Microsoft Azure and the IoT platform to provide the most innovative and reliable services to its enterprise customers.
  • eSmart Systems: Brings more than 20 years of global intelligence to software solutions for the energy industry, service providers, and smart cities. By bringing AI to the edge with Azure IoT Edge, eSmart Systems is revolutionizing grid inspections. Its Connected Drone solution can analyze 200,000 images in less than one hour, which is more than a human expert can review in a year. The result is reduced operational cost, better insight into the current status of the grid, and fewer outages.

Earlier this year we announced a $5 billion investment in IoT and the intelligent edge to continue innovation, strategic partnerships, and programs. The latest announcements with strategic partners like AirMap and DJI continue to push the boundaries of what is possible, and we look forward to seeing what our joint customers go on to build.

post

Mad? Sad? Glad? Video Indexer now recognizes these human emotions

Many customers across industries want insights into the emotional moments that appear in different parts of their media content. For broadcasters, this can help create more impactful promotion clips and drive viewers to their content; in the sales industry it can be super useful for analyzing sales calls and improving conversion; in advertising it can help identify the best moment to pop up an ad; and the list goes on and on. To that end, we are excited to share Video Indexer's (VI) new machine learning model that mimics human behavior to detect four cross-cultural emotional states in videos: anger, fear, joy, and sadness.

Endowing machines with cognitive abilities to recognize and interpret human emotions is a challenging task due to their complexity. As humans, we use multiple mediums to analyze emotions. These include facial expressions, voice tonality, and speech content. Eventually, the determination of a specific emotion is a result of a combination of these three modalities to varying degrees.

While traditional sentiment analysis models detect the polarity of content – for example, positive or negative – our new model aims to provide a finer granularity analysis. For example, given a moment with negative sentiment, the new model determines whether the underlying emotion is fear, sadness, or anger. The following figure illustrates VI’s emotion analysis of Microsoft CEO Satya Nadella’s speech on the importance of education. At the very beginning of his speech, a sad moment was detected.

Microsoft CEO Satya Nadella's speech on the importance of education

All the detected emotions and their specific appearances along the video are enumerated in the video index JSON as follows:

video index JSON
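
If you retrieve that index JSON through the Video Indexer API, a short script can pull the emotion entries out of it. The sketch below is illustrative only: the file name is a placeholder and the field names reflect the general shape of the insights payload, which can vary by API version:

    import json

    # Load a previously downloaded Video Indexer index file (placeholder path).
    with open("video_index.json") as f:
        index = json.load(f)

    # Walk each video's insights and print every detected emotion along with
    # the time range of each appearance. Field names here are illustrative.
    for video in index.get("videos", []):
        for emotion in video.get("insights", {}).get("emotions", []):
            for instance in emotion.get("instances", []):
                print(emotion.get("type"), instance.get("start"), instance.get("end"))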

Cross-channel emotion detection in VI

The new functionality utilizes deep learning to detect emotional moments in media assets based on speech content and voice tonality. VI detects emotions by capturing semantic properties of the speech content. However, semantic properties of single words are not enough, so the underlying syntax is also analyzed because the same words in a different order can induce different emotions.

Syntax

VI leverages the context of the speech content to infer the dominant emotion. For example, the sentence “… the car was coming at me and accelerating at a very fast speed …” has no negative words, but VI can still detect fear as the underlying emotion.

VI analyzes the vocal tonality of speakers as well. It automatically detects segments with voice activity and fuses the affective information they contain with the speech content component.

Video Indexer

With the new emotion detection capability in VI, which relies on speech content and voice tonality, you can gain deeper insight into the content of your videos and leverage it for marketing, customer care, and sales purposes.

For more information, visit VI's portal or the VI developer portal, and try this new capability for free. You can also browse sample videos indexed for emotional content: sample 1, sample 2, and sample 3.

Have questions or feedback? We would love to hear from you!

Use our UserVoice to help us prioritize features, or email VISupport@Microsoft.com with any questions.

post

Use Hybrid Connections to Incrementally Migrate Applications to the Cloud

As the software industry shifts to running software in the cloud, organizations are looking to migrate existing applications from on-premises to the cloud. Last week at Microsoft's Ignite conference, Paul Yuknewicz and I delivered a talk focused on how to get started migrating applications to Azure (you can watch the talk for free), where we walked through the business case for migrating to the cloud and choosing the right hosting and data services.

If your application is a candidate for running in App Service, one of the most useful pieces of technology that we showed was Hybrid Connections. Hybrid Connections let you host a part of your application in Azure App Service while calling back into resources and services not running in Azure (e.g. still on-premises). This enables you to try running a small part of your application in the cloud without having to move your entire application and all of its dependencies at once, which is usually time-consuming and extremely difficult to debug when things don't work. So, in this post I'll show you how to host an ASP.NET front-end application in the cloud and configure a hybrid connection to connect back to a service on your local machine.

Publishing Our Sample App to the Cloud

For the purposes of this post, I’m going to use the Smart Hotel 360 App sample that uses an ASP.NET front end that calls a WCF service which then accesses a SQL Express LocalDB instance on my machine.

The first thing I need to do is publish the ASP.NET application to App Service. To do this, right click on the "SmartHotel.Registration.Web" project and choose "Publish".


The publish target dialog already has App Service selected with the option to create a new one, so I will just click the "Publish" button.

This will bring up the “Create App Service” dialog.  Next, I will click “Create” and wait for a minute while the resources in the cloud are created and the application is published.


When it’s finished publishing, my web browser will open to my published site. At this point, there will be an error loading the page since it cannot connect to the WCF service. To fix this we’ll add a hybrid connection.


Create the Hybrid Connection

To create the Hybrid Connection, I navigate to the App Service I just created in the Azure Portal. One quick way to do this is to click the "Manage in Cloud Explorer" link on the publish summary page.


Right click the site and choose "Open in Portal". (You can also navigate there manually by logging into the Azure portal, clicking App Services, and choosing your site.)


To create the hybrid connection:

  1. Click the "Networking" tab in the Settings section on the left side of the App Service page.
  2. Click "Configure your hybrid connection endpoints" in the "Hybrid connections" section.
  3. Click "Add a hybrid connection".
  4. Click "Create a new hybrid connection".


Fill out the “Create new hybrid connection” form as follows:

  • Hybrid connection Name: any unique name that you want
  • Endpoint Host: This is the host name your application is currently using to connect to the on-premises resource. In this case, it is "localhost". (Note: per the documentation, use the hostname rather than a specific IP address if possible, as it's more robust.)
  • Endpoint Port: The port the on-premises resource is listening on. In this case, the WCF service on my local machine is listening on 2901
  • Service Bus namespace: If you've previously configured hybrid connections you can re-use an existing namespace; in this case, we'll create a new one and give it a name.


Click "OK". It will take about 30 seconds to create the hybrid connection; when it's done, you'll see it appear on the Hybrid connections page.

Configure the Hybrid Connection Locally

Now we need to install the Hybrid Connection Manager on the local machine. To do this, click the “Download connection manager” on the Hybrid connections page and install the MSI.


After the connection manager finishes installing, launch the "Hybrid Connections Manager UI"; it should appear in your Windows Start menu if you type "Hybrid Connections". (If for some reason it doesn't appear in the Start menu, launch it manually from "C:\Program Files\Microsoft\HybridConnectionManager <version#>".)

Click the "Add a new Hybrid Connection" button in the Hybrid Connections Manager UI and log in with the same credentials you used to publish your application.


Choose the subscription you used to publish your application from the "Subscription" dropdown, choose the hybrid connection you just created in the portal, and click "Save".


In the overview, you should see the status say “Connected”. Note: If the state won’t change from “Not Connected”, I’ve found that rebooting my machine fixes this (it can take a few minutes to connect after the reboot).


Make sure everything is running correctly on your local machine, and then open the site running in App Service; you'll see that it now loads with no error. In fact, you can even put a breakpoint in the GetTodayRegistrations() method of Service.svc.cs, hit F5 in Visual Studio, and when the page loads in App Service the breakpoint on the local machine is hit!


Conclusion

If you are looking to move applications to the cloud, I hope that this quick introduction to Hybrid Connections will enable you to try moving things incrementally. Additionally, you may find these resources helpful:

As always, if you have any questions, or problems let me know via Twitter, or in the comments section below.

post

Azure preparedness for Hurricane Florence

As Hurricane Florence continues its journey to the mainland, our thoughts are with those in its path. Please stay safe. We’re actively monitoring Azure infrastructure in the region. We at Microsoft have taken all precautions to protect our customers and our people.

Our datacenters (US East, US East 2, and US Gov Virginia) have been reviewed internally and externally to ensure that we are prepared for this weather event. Our onsite teams are prepared to switch to generators if utility power is unavailable or unreliable. All our emergency operating procedures have been reviewed by our team members across the datacenters, and we are ensuring that our personnel have all necessary supplies throughout the event.

As a best practice, all customers should consider their disaster recovery plans and all mission-critical applications should be taking advantage of geo-replication.

Rest assured that Microsoft is focused on the readiness and safety of our teams, as well as our customers’ business interests that rely on our datacenters. 

You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

If there is any change to the situation, we will keep customers informed of Microsoft’s actions through this announcement.

For guidance on disaster recovery best practices, see the references below:

post

New Azure Pipelines service helps devs build, test and deploy to any platform or cloud

With the introduction of Azure DevOps today, we’re offering developers a new CI/CD service called Azure Pipelines that enables you to continuously build, test, and deploy to any platform or cloud. It has cloud-hosted agents for Linux, macOS, and Windows, powerful workflows with native container support, and flexible deployments to Kubernetes, VMs, and serverless environments.

Microsoft is committed to fueling open source software development. Our next step in this journey is to provide the best CI/CD experience for open source projects. Starting today, Azure Pipelines provides unlimited CI/CD minutes and 10 parallel jobs to every open source project for free. All open source projects run on the same infrastructure that our paying customers use. That means you’ll have the same fast performance and high quality of service. Many of the top open source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript – and the list is growing every day.

For example, Atom runs parallel jobs on Linux, macOS, and Windows for its CI.


Azure Pipelines app on GitHub Marketplace

Azure Pipelines has an app in the GitHub Marketplace so it’s easy to get started. After you install the app in your GitHub account, you can start running CI/CD for all your repositories.


Pull Request and CI Checks

Once the GitHub app is set up, you'll see CI/CD checks on each commit to your default branch and on every pull request.


Our integration with the GitHub Checks API makes it easy to see build results in your pull request. If there’s a failure, the call stack is shown as well as the impacted files.
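
The same results are available programmatically through the GitHub Checks API. As a rough sketch (the owner, repository, commit SHA, and token are placeholders, and the exact Accept header can depend on the API version), the following Python snippet lists the check runs recorded for a commit:

    import requests

    OWNER = "my-org"          # placeholder
    REPO = "my-repo"          # placeholder
    SHA = "<commit-sha>"      # placeholder
    TOKEN = "<github-token>"  # placeholder

    # List the check runs (for example, Azure Pipelines builds) for one commit.
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}/check-runs",
        headers={
            "Authorization": f"token {TOKEN}",
            "Accept": "application/vnd.github.v3+json",
        },
    )
    resp.raise_for_status()

    # Each check run reports its name, status, and conclusion.
    for run in resp.json().get("check_runs", []):
        print(run["name"], run["status"], run["conclusion"])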


More than just open source

Azure Pipelines is also great for private repositories. It is the CI/CD solution for companies like Columbia, Shell, Accenture, and many others. It's also used by Microsoft's biggest projects like Azure, Office 365, and Bing. Our free offer for private projects includes a cloud-hosted job with 1,800 minutes of CI/CD a month, or you can run unlimited minutes of CI/CD on your own hardware, whether hosted in the cloud or on-premises. You can purchase parallel jobs for private projects from Azure DevOps or the GitHub Marketplace.

In addition to CI, Azure Pipelines has flexible deployments to any platform and cloud, including Azure, Amazon Web Services, and Google Cloud Platform, as well as any of your on-premises servers running Linux, macOS, or Windows. There are built-in tasks for Kubernetes, serverless, and VM deployments. Also, there's a rich ecosystem of extensions for the most popular languages and tools. The Azure Pipelines agent and tasks are open source, and we're always reviewing feedback and accepting pull requests on GitHub.

Join our upcoming live streams to learn more about Azure Pipelines and other Azure DevOps services:

  • Keynote: Watch our live Azure DevOps keynote on September 11, 2018 from 8:00 – 9:30 AM Pacific Time.
  • Live training: Join our live Mixer workshop with interactive Q&A on September 17, 2018 from 8:30 AM – 2:30 PM Pacific Time.

You can save the date and watch both live streams on our events page. There you'll also find additional on-demand videos and other resources to help get you started.

I'm excited for you to try Azure Pipelines and tell us what you think. You can share your thoughts directly with the product team using @AzureDevOps, the Developer Community, or the comments on this post.

Jeremy Epling

@jeremy_epling

post

Introducing Azure DevOps, to help developers ship software faster and with higher quality

Today we are announcing Azure DevOps. Working with our customers and developers around the world, it’s clear DevOps has become increasingly critical to a team’s success. Azure DevOps captures over 15 years of investment and learnings in providing tools to support software development teams. In the last month, over 80,000 internal Microsoft users and thousands of our customers, in teams both small and large, used these services to ship products to you.

The services we are announcing today span the breadth of the development lifecycle to help developers ship software faster and with higher quality. They represent the most complete offering in the public cloud. Azure DevOps includes:

Azure Pipelines

CI/CD that works with any language, platform, and cloud. Connect to GitHub or any Git repository and deploy continuously. Learn More >

Azure Boards

Powerful work tracking with Kanban boards, backlogs, team dashboards, and custom reporting. Learn more >

Azure Artifacts

Maven, npm, and NuGet package feeds from public and private sources. Learn more >

Azure Repos

Unlimited cloud-hosted private Git repos for your project. Collaborative pull requests, advanced file management, and more. Learn more >

Azure Test Plans

An all-in-one planned and exploratory testing solution. Learn more >

Each Azure DevOps service is open and extensible. They work great for any type of application regardless of the framework, platform, or cloud. You can use them together for a full DevOps solution or with other services. If you want to use Azure Pipelines to build and test a Node service from a repo in GitHub and deploy it to a container in AWS, go for it. Azure DevOps supports both public and private cloud configurations. Run them in our cloud or in your own data center. No need to purchase different licenses. Learn more about Azure DevOps pricing.

Here’s an example of Azure Pipelines used independently to build a GitHub repo:


Alternatively, here’s an example of a developer using all Azure DevOps services together from the vantage point of Azure Boards.


Open Source projects receive free CI/CD with Azure Pipelines

As an extension of our commitment to provide open and flexible tools for all developers, Azure Pipelines offers free CI/CD with unlimited minutes and 10 parallel jobs for every open source project. With cloud hosted Linux, macOS and Windows pools, Azure Pipelines is great for all types of projects.

Many of the top open source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript – and the list is growing every day.

We want everyone to have extremely high quality of service. Accordingly, we run open source projects on the same infrastructure that our paying customers use.

Azure Pipelines is also now available in the GitHub Marketplace, making it easy to get set up for your GitHub repos, open source or otherwise.

Here’s a walkthrough of Azure Pipelines:

Learn more >

The evolution of Visual Studio Team Services (VSTS) 

Azure DevOps represents the evolution of Visual Studio Team Services (VSTS). VSTS users will be upgraded into Azure DevOps projects automatically. For existing users, there is no loss of functionality, simply more choice and control. The end-to-end traceability and integration that has been the hallmark of VSTS is all there. Azure DevOps services work great together. Today is the start of a transformation, and over the next few months existing users will begin to see changes show up. What does this mean?

  • URLs will change from abc.visualstudio.com to dev.azure.com/abc. We will support redirects from visualstudio.com URLs so there will not be broken links.
  • As part of this change, the services have an updated user experience. We continue to iterate on the experience based on feedback from the preview. Today we’re enabling it by default for new customers. In the coming months we will enable it by default for existing users.
  • Users of the on-premises Team Foundation Server (TFS) will continue to receive updates based on features live in Azure DevOps. Starting with the next version of TFS, the product will be called Azure DevOps Server and will continue to be enhanced through our normal cadence of updates.

Learn how to enable these changes for your existing VSTS organizations today.

Learn more

To learn more about Azure DevOps, please join us:

  • Keynote: Watch our live Azure DevOps keynote on September 11, 2018 from 8:00 – 9:30 AM Pacific Time.
  • Live training: Join our live Mixer workshop with interactive Q&A on September 17, 2018 from 8:30 AM – 2:30 PM Pacific Time.

You can save the date and watch both live streams on our events page. There you'll also find additional on-demand videos and other resources to help get you started.

We couldn’t be more excited to offer Azure DevOps to you and your teams. We can’t wait to see what amazing things you create with it.

post

Plan to map UK’s network of heart defibrillators using Azure could save thousands every year

Thousands of people who are at risk of dying every year from cardiac arrest could be saved under new plans to make the public aware of their nearest defibrillator.

There are 30,000 cardiac arrests outside of UK hospitals annually, but fewer than one in 10 of those affected survive, compared with a 25% survival rate in Norway, 21% in North Holland, and 20% in Seattle in the US.

A new partnership between the British Heart Foundation (BHF), Microsoft, the NHS and New Signature aims to tackle the problem by mapping all the defibrillators in the UK, so 999 call handlers can tell people helping a cardiac arrest patient where the nearest device is.

Ambulance services currently have their own system of mapping where defibrillators are located but this is not comprehensive.

It is hoped the partnership can evolve to capture heart data from cardiac arrest patients

“There is huge potential ahead in the impact that technology will have in digitally transforming UK healthcare,” said Clare Barclay, Chief Operating Officer at Microsoft. “This innovative partnership will bring the power of Microsoft technology together with the incredible vision and life-saving work of BHF and the NHS. This project, powered by the cloud, will better equip 999 call handlers with information that can make the difference between life and death and shows the potential that innovative partnerships like this could make to the health of the nation.”

Cardiac arrest occurs when the heart fails to pump effectively, resulting in a sudden loss of blood flow. Symptoms include a loss of consciousness, abnormal or absent breathing, chest pain, shortness of breath and nausea. If not treated within minutes, it usually leads to death.

Defibrillators can save the life of someone suffering from a cardiac arrest by providing a high-energy electric shock to the heart through the chest wall. This allows the body’s natural pacemaker to re-establish the heart’s normal rhythm.

However, defibrillators are used in just 2% of out-of-hospital cardiac arrests, often because bystanders and ambulance services don’t know where the nearest device is located.



Owners of the tens of thousands of defibrillators in workplaces, train stations, leisure centres and public places across the country will register their device with the partnership. That information will be stored in Azure, Microsoft’s cloud computing service, where it will be used by ambulance services during emergency situations. The system will also remind owners to check their defibrillators to make sure they are in working order.

It is hoped that the partnership can evolve to enable defibrillators to self-report their condition, as well as capture heart data from cardiac arrest patients that can be sent to doctors.

Simon Gillespie, Chief Executive of the BHF, said: “Every minute without CPR or defibrillation reduces a person’s chance of surviving a cardiac arrest by around 10%. Thousands more lives could be saved if the public were equipped with vital CPR skills, and had access to a defibrillator in the majority of cases.


“While we’ve made great progress in improving the uptake of CPR training in schools, public defibrillators are rarely used when someone suffers a cardiac arrest, despite their widespread availability. This unique partnership could transform this overnight, meaning thousands more people get life-saving defibrillation before the emergency services arrive.”

Simon Stevens, Chief Executive of NHS England, added: “This promises to be yet another example of how innovation within the NHS leads to transformative improvements in care for patients.”

The defibrillation network will be piloted by West Midlands Ambulance Service and the Scottish Ambulance Service, before being rolled out across the UK.


post

How to upgrade your financial analysis capabilities with Azure

In corporate finance and investment banking, risk analysis is a crucial job. To assess risk, analysts review research, monitor economic and social conditions, stay informed of regulations, and create models for the investment climate. In short, the inputs into an analysis make for a highly complex and dynamic calculation, one that requires enormous computing power. The vast number of calculations and the way the math is structured typically allow for high degrees of parallelization across many separate processes. To satisfy such a need, grid computing employs any number of machines working together to execute a set of parallelized tasks, which is perfect for risk analysis. By using a networked group of computers that work together as a virtual supercomputer, you can assemble and use vast computer grids for specific time periods and purposes, paying only for what you use. Also, by splitting tasks over multiple machines, processing time is significantly reduced, increasing efficiency and minimizing wasted resources.
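
To make the parallelization idea concrete, here is a minimal, self-contained Python sketch, with no grid or cloud services involved, that splits a toy Monte Carlo risk estimate across local worker processes. A compute grid applies the same decomposition, only across many machines; the loss model and numbers are purely illustrative:

    import random
    from concurrent.futures import ProcessPoolExecutor

    def simulate_losses(args):
        """Run one independent batch of Monte Carlo scenarios and return losses."""
        seed, n_scenarios = args
        rng = random.Random(seed)
        # Toy loss model: normally distributed P&L for a hypothetical portfolio.
        return [max(0.0, -rng.gauss(0.0, 1_000_000.0)) for _ in range(n_scenarios)]

    if __name__ == "__main__":
        n_workers, scenarios_per_worker = 8, 250_000
        batches = [(seed, scenarios_per_worker) for seed in range(n_workers)]

        # Each batch is independent, so the work spreads cleanly across local
        # processes here, or across grid nodes in a real deployment.
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            losses = [x for batch in pool.map(simulate_losses, batches) for x in batch]

        # 99% Value-at-Risk estimate: the loss exceeded in only 1% of scenarios.
        losses.sort()
        var_99 = losses[int(0.99 * len(losses))]
        print(f"Scenarios: {len(losses):,}  99% VaR: {var_99:,.0f}")

In a burst-to-cloud setup, those batches would be dispatched to grid nodes running in Azure instead of local processes, but the way the problem is decomposed stays the same.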

The Azure Industry Experiences team has recently authored two documents to help those involved in banking scenarios. We show how to implement a risk assessment solution that takes advantage of cloud grid computing technologies.

The first document is a short overview for technical decision makers, especially those considering a burst-to-cloud scenario. The second is a solution guide aimed at solution architects, lead developers, and others who want a deeper technical look at the strategy and technology.

Recommended next steps

  1. Read the Risk Grid Computing Overview
  2. Read the Risk Grid Computing Solution Guide