Tag Archives: post

How to Use ASR and Hyper-V Replica with Failover Clusters

In the third and final post of this blog series, we will evaluate Microsoft’s replication solutions for multi-site clusters and how to integrate basic backup/DR with them. This includes Hyper-V Replica, Azure Site Recovery, and DFS Replication. In the first part of the series, you learned about setting up failover clusters to work with DR solutions, and in the second post, you learned about disk replication considerations from third-party storage vendors. The challenge with the solutions that we previously discussed is that they typically require third-party hardware or software. Let’s look at the basic technologies provided by Microsoft to reduce these upfront fixed costs.

Note: The features discussed in this article are native Microsoft features with a baseline level of functionality. Should you require functionality over and above what is provided here, you should look at a third-party backup/replication product such as Altaro VM Backup.

Multi-Site Disaster Recovery with Windows Server DFS Replication (DFSR)

DFS Replication (DFSR) is a Windows Server role service that has been around for many releases. Although DFSR is built into Windows Server and is easy to configure, it is not supported for multi-site clustering. This is because the replication of files only happens when a file is closed, so it works great for file servers hosting documents. However, it is not designed to work with application workloads where the file is kept open, such as SQL databases or Hyper-V VMs. Since these file types will only close during a planned failover or unplanned crash, it is hard to keep the data consistent at both sites. This means that if your first site crashes, the data will not be available at the second site, so DFSR should not be considered as a possible solution.

Multi-Site Disaster Recovery with Hyper-V Replica

The most popular Microsoft DR solution is Hyper-V Replica, which is a built-in Hyper-V feature available to Windows Server customers at no additional cost. It copies the virtual hard disk (VHD) file of a running virtual machine from one host to a second host in a different location. This is an excellent low-cost solution to replicate your data between your primary and secondary sites, and it even allows you to do extended (“chained”) replication to a third location. However, it is limited in that it only replicates Hyper-V virtual machines (VMs), so it cannot be used for any other application unless that application is virtualized and running inside a VM. The way it works is that any changes to the VHD file are tracked in a log file, which is copied to an offline VM/VHD in the secondary site. This also means that replication is asynchronous, with copies sent every 30 seconds, 5 minutes, or 15 minutes. While this means that there is no distance limitation between the sites, there could be some data loss if any in-memory data has not been written to disk or if there is a crash between replication cycles.

Figure 1 – Two Clusters Replicate Data between Sites with Hyper-V Replica
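
To make this concrete, the following PowerShell sketch shows roughly how replication is enabled for a single VM at a 5-minute frequency. The host and VM names are hypothetical, and it assumes the replica host has already been configured (via Set-VMReplicationServer) to accept inbound replication over Kerberos/HTTP.

    # Enable replication from the primary host to the replica host
    # (hypothetical names), sending changes every 300 seconds.
    Enable-VMReplication -VMName "SQL-VM01" `
        -ReplicaServerName "hv-replica01.contoso.com" `
        -ReplicaServerPort 80 `
        -AuthenticationType Kerberos `
        -ReplicationFrequencySec 300

    # Send the initial copy of the virtual hard disks over the network.
    Start-VMInitialReplication -VMName "SQL-VM01"

    # Check replication state and health afterwards.
    Measure-VMReplication -VMName "SQL-VM01"

Note that -ReplicationFrequencySec only accepts the three supported intervals (30, 300, or 900 seconds), which is what makes Hyper-V Replica an asynchronous solution.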

Hyper-V Replica allows for replication between standalone Hyper-V hosts, between separate clusters, or any combination of the two. This means that instead of stretching a single cluster across two sites, you set up two independent clusters. It also allows for a more affordable solution by letting businesses set up a cluster in their primary site and a single host in their secondary site that is used only for mission-critical applications. If Hyper-V Replica is deployed on a failover cluster, a new clustered workload type is created, known as the Hyper-V Replica Broker. This makes the replication service highly available, so that if a node crashes, the replication engine fails over to a different node and continues to copy logs to the secondary site, providing greater resiliency.

Another powerful feature of Hyper-V Replica is its built-in testing, allowing you to simulate both planned and unplanned failovers to the secondary site. While this solution will meet the needs of most virtualized datacenters, it is also important to remember that there are no integrity checks on the data being copied between the VMs. This means that if a VM becomes corrupted or is infected with a virus, that same fault will be sent to its replica. For this reason, backups of the virtual machine are still a critical part of standard operating procedure. Additionally, this Altaro blog notes that Hyper-V Replica has other limitations compared to backups when it comes to retention, file space management, keeping separate copies, using multiple storage locations, and replication frequency, and it may have a higher total cost of ownership. If you are using a multi-site DR solution with two clusters, make sure that you are taking and storing backups in both sites so that you can recover your data at either location. Also make sure that your backup provider supports clusters, CSV disks, and Hyper-V Replica; however, this is now standard in the industry.
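
A test failover can be triggered from the replica side without interrupting ongoing replication. A minimal sketch, run on the replica host and using the same hypothetical VM name as above:

    # Create a disposable test copy of the replica VM from a recovery point;
    # replication from the primary site continues in the background.
    Start-VMFailover -VMName "SQL-VM01" -AsTest

    # After validating the test VM, discard it and clean up.
    Stop-VMFailover -VMName "SQL-VM01"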

Multi-Site Disaster Recovery with Azure Site Recovery (ASR)

All of the aforementioned solutions require you to have a second datacenter, which simply is not possible for some businesses. While you could rent rack space from a colocation facility, the economics may not make sense. Fortunately, the Microsoft Azure public cloud can now be used as your disaster recovery site using Azure Site Recovery (ASR). This technology works with Hyper-V Replica, but instead of copying your VMs to a secondary site, you are pushing them to a nearby Microsoft datacenter. It still has the same limitations as Hyper-V Replica, including the replication frequency, and furthermore you do not have access to the physical infrastructure of your DR site in Azure. The replicated VM can run on the native Azure infrastructure, or you can even build a virtualized guest cluster and replicate to that highly available infrastructure.
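
As a rough illustration of the Azure-side starting point (the resource names and region below are placeholders, and the subsequent Hyper-V site, replication policy, and VM protection steps are configured against the vault afterwards), ASR replicates into a Recovery Services vault created with the Az PowerShell module:

    # Create a resource group and the Recovery Services vault that will
    # receive the replicated VMs (hypothetical names and region).
    New-AzResourceGroup -Name "rg-dr" -Location "westeurope"

    $vault = New-AzRecoveryServicesVault -Name "asr-vault" `
        -ResourceGroupName "rg-dr" -Location "westeurope"

    # Point subsequent ASR cmdlets at this vault.
    Set-AzRecoveryServicesAsrVaultContext -Vault $vault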

While ASR is a significantly cheaper solution than maintaining your own hardware in the secondary site, it is not free. You have to pay for the service, the storage of your virtual hard disks (VHDs) in the cloud, and if you turn on any of those VMs, you will pay for standard Azure VM operating costs.

If you are using ASR, you should follow the same backup best practices mentioned in the earlier Hyper-V Replica section. The main difference is that you should use an Azure-native backup solution to protect your replicated VHDs in Azure, in case you fail over to the Azure VMs for any extended period of time.

Conclusion

After reviewing this blog series, you should be equipped to make the right decisions when planning your disaster recovery solution using multi-site clustering. Start by understanding your site restrictions, and from there you can plan your hardware needs and storage replication solution. The options range from higher-priced solutions with more features to cost-effective solutions using Microsoft Azure that give you less control. Even after you have deployed this resilient infrastructure, keep in mind that there are still three main reasons why disaster recovery plans fail:

  • The detection of the outage failed, so the failover to the secondary datacenter never happens.
  • One component in the DR failover process does not work, which is usually due to poor or infrequent testing.
  • The process depends on manual steps rather than automation; humans create a bottleneck and are unreliable during a disaster.

This means that whichever solution you choose, make sure that it is well tested with quick failure detection and try to eliminate all dependencies on humans! Good luck with your deployment and please post any questions that you have in the comments section of this blog.


Go to Original Article
Author: Symon Perriman

How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different types of replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster as this also provides you with local high-availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration and hardware setup. The next critical decision you have to make is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage to allow an application to run on any host and access the same data. Multi-site clusters behave the same way, but they require independent storage arrays at each site, with the data replicated between them. The clustered application or virtual machine (VM) at each site should use its own local storage array; otherwise, there could be significant latency if every disk I/O operation had to travel to the other location.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while allowing the VMs to run on different nodes. The challenge when using CSV in a multi-site cluster is that the VMs must make sure that they are always writing to their disk in their site, and not the replicated copy. Most storage providers offer CSV-aware solutions, and you must make sure that they explicitly support multi-site clustering scenarios. Often the vendors will force writes at the primary site by making the CSV disk at the second site read-only, to ensure that the correct disks are always being used.
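
Adding a disk to CSV is a native failover clustering operation; the multi-site awareness described above comes from the storage vendor's replication layer. A minimal sketch with a hypothetical disk name:

    # Convert an available clustered disk into a Cluster Shared Volume so
    # that VMs on any node can use it under C:\ClusterStorage.
    Add-ClusterSharedVolume -Name "Cluster Disk 2"

    # Confirm which node currently owns (coordinates) each CSV disk.
    Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State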

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write occurs at the primary site, it is copied to the secondary site, and the write is not acknowledged as committed until both the primary and secondary storage arrays have written it. Synchronous replication ensures consistency between both sites and avoids data loss in the event of a crash. The challenge of writing to two sets of disks in different locations is that the sites must be physically close, or the performance of the application can suffer. Even with a high-bandwidth, low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should factor into the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors that provide a file-based replication solution that can run on top of commodity storage hardware. These providers keep track of individual files which have changed, and only copy those. This is often less efficient than block-level replication because larger chunks of data (full files) must be copied; however, the total cost of ownership can be much lower. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s DataKeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers will abstract the underlying sets of storage arrays at each site. This software manages disk access and redirection to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements. Now you can plan your backup and recovery process. Even though the application’s data is being copied to the secondary site, which is similar to a backup, it does not replace the real thing. This is because if the VM (VHD) on one site becomes corrupted, that same error is likely going to be copied to the secondary site. You should still regularly back up any production workloads running at either site.  This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Go to Original Article
Author: Symon Perriman

A year of bringing AI to the edge

This post is co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

In an age where low-latency and data security can be the lifeblood of an organization, containers make it possible for enterprises to meet these needs when harnessing artificial intelligence (AI).

Since introducing Azure Cognitive Services in containers this time last year, businesses across industries have unlocked new productivity gains and insights. Combining the most comprehensive set of domain-specific AI services on the market with containers enables enterprises to apply AI to more scenarios with Azure than with any other major cloud provider. Organizations ranging from healthcare to financial services have transformed their processes and customer experiences as a result.

These are some of the highlights from the past year:

Employing anomaly detection for predictive maintenance

Airbus Defense and Space, one of the world’s largest aerospace and defense companies, has tested Azure Cognitive Services in containers for developing a proof of concept in predictive maintenance. The company runs Anomaly Detector for immediately spotting unusual behavior in voltage levels to mitigate unexpected downtime. By employing advanced anomaly detection in containers without further burdening the data scientist team, Airbus can scale this critical capability across the business globally.

“Innovation has always been a driving force at Airbus. Using Anomaly Detector, an Azure Cognitive Service, we can solve some aircraft predictive maintenance use cases more easily.”  —Peter Weckesser, Digital Transformation Officer, Airbus
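
For context, running a Cognitive Services container on-premises follows the same general pattern for every service: the image is pulled from the Microsoft Container Registry and started with EULA, billing endpoint, and API key settings so that usage is metered against an Azure resource even though inference happens locally. The sketch below is illustrative only; the exact image path, resource sizing, endpoint, and key are placeholders.

    # Run the Anomaly Detector container locally and expose its REST API
    # on port 5000 (placeholder image path, endpoint, and key).
    docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 `
        mcr.microsoft.com/azure-cognitive-services/anomaly-detector `
        Eula=accept `
        Billing="https://<your-resource>.cognitiveservices.azure.com/" `
        ApiKey="<your-api-key>"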

Automating data extraction for highly-regulated businesses

As enterprises grow, they accumulate thousands of hours of repetitive but critically important work every week. High-value domain specialists spend too much of their time on this. Today, innovative organizations use robotic process automation (RPA) to help manage, scale, and accelerate processes, and in doing so free people to create more value.

Automation Anywhere, a leader in robotic process automation, partners with these companies eager to streamline operations by applying AI. IQ Bot, their unique RPA software, automates data extraction from documents of various types. By deploying Cognitive Services in containers, Automation Anywhere can now handle documents on-premises and at the edge for highly regulated industries:

“Azure Cognitive Services in containers gives us the headroom to scale, both on-premises and in the cloud, especially for verticals such as insurance, finance, and health care where there are millions of documents to process.” —Prince Kohli, Chief Technology Officer for Products and Engineering, Automation Anywhere

For more about Automation Anywhere’s partnership with Microsoft to democratize AI for organizations, check out this blog post.

Delighting customers and employees with an intelligent virtual agent

Lowell, one of the largest credit management services in Europe, wants credit to work better for everybody. So, it works hard to make every consumer interaction as painless as possible with the help of AI. Partnering with Crayon, a global leader in cloud services and solutions, Lowell set out to fix the outdated processes that kept the company’s highly trained credit counselors too busy with routine inquiries and created friction in the customer experience. Lowell turned to Cognitive Services to create an AI-enabled virtual agent that now handles 40 percent of all inquiries—making it easier for service agents to deliver greater value to consumers and better outcomes for Lowell clients.

With GDPR requirements, chatbots weren’t an option for many businesses before containers became available. Now companies like Lowell can ensure the data handling meets stringent compliance standards while running Cognitive Services in containers. As Carl Udvang, Product Manager at Lowell explains:

“By taking advantage of container support in Cognitive Services, we built a bot that safeguards consumer information, analyzes it, and compares it to case studies about defaulted payments to find the solutions that work for each individual.”

One-to-one customer care at scale in data-sensitive environments has become easier to achieve.

Empowering disaster relief organizations on the ground

A few years ago, there was a major Ebola outbreak in Liberia. A team from USAID was sent to help mitigate the crisis. Their first task on the ground was to find and categorize information such as the state of healthcare facilities, Wi-Fi networks, and population density centers. They tracked this information manually and had to extract insights from a complex corpus of data to determine the best course of action.

With the rugged versions of Azure Stack Edge, teams responding to such crises can carry a device running Cognitive Services in their backpack. They can upload unstructured data like maps, images, and pictures of documents, and then extract content, translate, draw relationships among entities, and apply a search layer. With these cloud AI capabilities available offline, at their fingertips, response teams can find the information they need in a matter of moments. In Satya Nadella’s Ignite 2019 keynote, Dean Paron, Partner Director of Azure Storage and Edge, walks us through how Cognitive Services in Azure Stack Edge can be applied in such disaster relief scenarios (starting at 27:07).

Transforming customer support with call center analytics

Call centers are a critical customer touchpoint for many businesses, and being able to derive insights from customer calls is key to improving customer support. With Cognitive Services, businesses can transcribe calls with Speech to Text, analyze sentiment in real-time with Text Analytics, and develop a virtual agent to respond to questions with Text to Speech. However, in highly regulated industries, businesses are typically prohibited from running AI services in the cloud due to policies against uploading, processing, and storing any data in public cloud environments. This is especially true for financial institutions.

A leading bank in Europe addressed regulatory requirements and brought the latest transcription technology to their own on-premises environment by deploying Cognitive Services in containers. Through transcribing calls, customer service agents could not only get real-time feedback on customer sentiment and call effectiveness, but also batch process data to identify broad themes and unlock deeper insights on millions of hours of audio. Using containers also gave them flexibility to integrate with their own custom workflows and scale throughput at low latency.

What’s next?

These stories touch on just a handful of the organizations leading innovation by bringing AI to where data lives. As running AI anywhere becomes more mainstream, the opportunities for empowering people and organizations will only be limited by the imagination.

Visit the container support page to get started with containers today.

For a deeper dive into these stories, visit the following

Go to Original Article
Author: Microsoft News Center

Rethinking cyber learning—consider gamification

As promised, I’m back with a follow-up to my recent post, Rethinking how we learn security, on how we need to modernize the learning experience for cybersecurity professionals by gamifying training to make learning fun. Some of you may have attended the recent Microsoft Ignite events in Orlando and Paris. I missed the conferences (ironically, due to attending a cybersecurity certification boot camp) but heard great things about the Microsoft/Circadence joint Into the Breach capture-the-flag exercise.

If you missed Ignite, we’re planning several additional Microsoft Ignite The Tour events around the world, where you’ll be able to try your hand at this capture the flag experience. Look for me at the Washington, DC event in early February.

In the meantime, due to the great feedback I received from my previous blog—which I do really appreciate, especially if you have ideas for how we should tackle the shortage of cyber professionals—I’ll be digging deeper into the mechanics of learning to understand what it really takes to learn cyber in today’s evolving landscape.

Today, I want to address the important questions of how a new employee could actually ramp up their learning, and how employers can prepare employees for success and track the efficacy of the learning curriculum. Once again, I’m pleased to share this post with Keenan Skelly, chief evangelist at Boulder, Colorado-based Circadence.

Here are some of her recommendations from our Q&A:

Q: Keenan, in our last blog, you discussed Circadence’s “Project Ares” cyber learning platform. How do new cyber practitioners get started on Project Ares?

A: The way that Project Ares is set up allows users to come in at a variety of different skill levels. It’s important to understand what kind of work roles you’re looking to learn about as a user, as well as what kinds of tools you’re looking to understand better, before you get started on Project Ares. For example, if I were to take some of my Girls Who Code or Cyber Patriot students and put them into the platform, I would probably have them start in the Battle School. This is where they’re going to learn about basic cybersecurity fundamentals such as ports and protocols, regular expressions, and the cyber kill chain. Then they can transition into Battle Rooms, where they’ll start to learn about very specific tools, tactics, and procedures, or TTPs, for a variety of different work roles. If you’re a much more skilled cyber ninja, however, you can probably go ahead and get right into Missions, but we do recommend that everyone who comes into Project Ares does some work in the Battle Rooms first, specifically if they are trying to learn a tool or a skill for their work role.

Project Ares also has a couple of different routes that an expert or an enterprising cybersecurity professional can come into that’s really focused more on their role. For example, we have an assessments area based entirely on the work role. This aligns to the NIST framework and the NICE cybersecurity work roles. For example, if you’re a network defender, you can come into that assessment pathway and have steps laid out before you to identify your skill level in that role as you see below:

Assessment pathway.

Q: What areas within Project Ares do you recommend for enterprise cyber professionals to train against role-based job functions and prepare for cyber certifications?

A: You might start with something simple like understanding very basic things about your work role through a questionnaire in the Battle School arena as seen in the illustrations below. You may then move into a couple of Battle Rooms that tease out very detailed skills in tools that you would be using for that role. And then eventually you’ll get to go into a mission by yourself, and potentially a mission with your entire team to really certify that you are capable in that work role. All this practice helps prepare professionals to take official cyber certifications and exams.

Battle School questionnaire.

Battle School mission.

Q: Describe some of the gamification elements in Project Ares and share how it enhances cyber learning.

A: One of the best things about Project Ares is gamification. Everyone loves to play games, whether it’s on your phone playing Angry Birds, or on your computer or gaming console. So we really tried to put a lot of gaming elements inside Project Ares. Since everything is scored within Project Ares, everything you do from learning about ports and protocols, to battle rooms and missions, gives you experience points. Experience points add up to skill badges. All these things make learning more fun for the user. For example, if you’re a defender, you might have skill badges in infrastructure, network design, network defense, etc. And the way Project Ares is set up, once you have a certain combination of those skill badges you can earn a work role achievement certificate within Project Ares.

This kind of thing is taken very much from Call of Duty and other types of games where you can really build up your skills by doing a very specific skill-based activity and earn points towards badges. One of the other things that is great about Project Ares is that it’s quite immersive. For example, Missions allow a user to come into a specific cyber situation or cyber response situation (e.g., a water treatment plant cyberattack) and have multimedia effects that demonstrate what is going on—very much reflective of that cool-guy video look. Being able to talk through challenges in the exercises with our in-game advisor, Athena, adds another element to the learning experience, as shown in the illustration below.

Athena was inspired by the trends of personal assistants like Cortana and other such AI-bots, which have been integrated into games. So things like chat bots, narrative storylines, and skill badges are super important for really immersing the individual in the process. It’s so much more fun, and easier to learn things in this way, as opposed to sitting through a static presentation or watching someone on a video and trying to learn the skill passively.

Athena—the in-game advisor.

Q: What kinds of insights and reporting capability can Project Ares deliver to cyber team supervisors and C-Suite leaders to help them assess cyber readiness?

A: Project Ares offers a couple great features that are good for managers, all the way up to the C-Suite, who are trying to understand how their cybersecurity team is doing. The first one is called Project Ares Trainer View. This is where a supervisor or manager can jump into the Project Ares environment, with the students or with the enterprise team members, and observe in a couple of different ways.

The instructor or the manager can jump into the environment as Athena, so the user doesn’t know that they are there. They can then provide additional insight or help that is needed to a student. A supervisor or leader can also jump in as the opponent, which gives them the ability to see someone who is just breezing by everything and maybe make it a little more challenging. Or they can just observe and leave comments for the individuals. This piece is really helpful when we’re talking about managers who are looking to understand their team’s skill level in much more detail.

The other piece of this is a product we have coming out soon called Dendrite—an analytics tool that looks at everything that happens in Project Ares. We record all the keystrokes and chats a user had with Athena or with any other team members while in a mission or battle room. Cyber team leads can then see what’s going on. Users can see what they’re doing well, and not doing well. This feedback can be provided up to the manager level, the senior manager level, and even to the C-Suite level to demonstrate exactly where that individual is in their particular skill path. It helps the cyber team leads understand which tools are being used appropriately and which tools are not.

For example, suppose you’re a financial institution and you paid quite a bit of money for Tanium, but upon viewing tool use in Dendrite, you find that no one is using it. That might prompt you to rethink your strategy on how to use tools in your organization, or look at how you train your folks to use those tools. These types of insights are absolutely critical if you want to understand the best way to grow an individual in cybersecurity and make sure they’re really on top of their game.

The Dendrite assessment and analysis solution.

Q: How can non-technical employees improve their cyber readiness?

A: At Circadence, we don’t just provide learning capabilities for advanced cyber warriors. For mid-range people just coming into the technical side of cybersecurity, we have an entire learning path that starts with a product called inCyt. Now, inCyt is a very fun browser-based game of strategy where players have some hackable devices they must protect—like operating systems and phones. Meanwhile, your opponent has the same objective: protect their devices from attacks. Players continually hack each other by gathering intel on their opponent and then launching different cyberattacks. While they’re doing this, players get a fundamental understanding of the cyber kill chain. They learn things like what reconnaissance means to a hacker, what weaponizing means to a hacker, what deploying that weapon means to a hacker, so they can start to recognize that behavior in their everyday interactions online.

Some people ask why this is important and I always say, “I used to be a bomb technician, and there is no possible way I could defuse an IED or nuclear weapon without understanding how those things are put together.” It’s the same kind of concept.

It’s impossible to assume that someone is going to learn cyber awareness by answering some questions or watching a five-minute phishing tutorial after they have already clicked a link in a suspicious email. Those are very reactive ways of learning cyber. inCyt is very proactive. And we want to teach you in-depth understanding of what to look for, not just for phishing but for all the attacks we’re susceptible to. inCyt is also being used by some of our customers as a preliminary gate track for those who are interested in cybersecurity. So if you demonstrate a very high aptitude within inCyt, we would send you over to our CyberBridge portal where you can start learning some of the basics of cybersecurity to see if it might be the right field for you. Within our CyberBridge access management portal, you can then go into Project Ares Academy, which is just a lighter version of Project Ares.

Professional and Enterprise licenses in Project Ares pave more intricate learning pathways for people to advance in learning, from novice to expert cyber defender. You’ll be able to track all metrics of where you started, how far you came, what kind of skill path you’re on, and what kind of skill path you want to be on. Very crucial items for your own work role pathway.

How to close the cybersecurity talent gap

Keenan’s perspective and the solution offered by Project Ares really help us understand how to train security professionals and give them the hands-on experience they require and want. We’re in interesting times, right? With innovations in machine learning and artificial intelligence (AI), we’re increasingly able to pivot from reactive cyber defense to a more predictive posture. Still, right now we’re facing a cybersecurity talent gap of up to 4 million people, depending on which analyst group you follow. The only way that we’re going to get folks interested in cybersecurity is to make it exactly what we have been talking about: a career-long opportunity to learn.

Make it something they can attain and grow in, seeing themselves go from novice to leader in an organization. This is tough right now because there are relatively few cybersecurity operators compared to demand, and the operators on the front lines are subject to burnout. With uncertain and undefined career paths beyond tactical SecOps, what is there to look forward to?

We need to get better as a cybersecurity community at not only protecting the defenders we already have, but also bringing in new cybersecurity defenders and offenders who are really going to push the boundaries of where we’re at today. This is where we have an excellent and transformational opportunity to introduce more immersive and gamified learning to improve the learning experience and put our people in a position to succeed.

Learn more

To learn more about how to close the cybersecurity talent gap, read the e-book: CISO essentials: How to optimize recruiting while strengthening cybersecurity. For more information on Microsoft intelligence security solutions, see Achieve an optimal state of Zero Trust.

You can also watch my full interview with Keenan.

Bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Go to Original Article
Author: Microsoft News Center

Microsoft’s connected vehicle platform presence at IAA, the Frankfurt Auto Show

This post was co-authored by the extended Microsoft Connected Vehicle Platform (MCVP) team. 

A connected vehicle solution must enable a fleet of potentially millions of vehicles, distributed around the world, to deliver intuitive experiences including infotainment, entertainment, productivity, driver safety, and driver assistance. In addition to these in-vehicle services, a connected vehicle solution is critical for fleet solutions like ride and car sharing, as well as phone apps that incorporate the context of the user and the journey.

Imagine you are driving to your vacation destination and you start your conference call from home while you are packing. When you transition to the shared vehicle, the route planning takes into account the best route for connectivity and easy driving and adjusts the microphone sensitivity during the call in the back seat. These experiences today are constrained to either the center-stack screen, known as the in-vehicle infotainment device (IVI), or other specific hardware and software that is determined when the car is being built. Instead, these experiences should evolve over the lifetime of ridership. The opportunity is for new, modern experiences in vehicles that span the entire interior and systems of a vehicle, plus experiences outside the vehicle, to create deeper and longer-lasting relationships between car makers and their customers throughout the transportation journey.

To realize this opportunity, car manufacturers and mobility-as-a-service (MaaS) providers need a connected vehicle platform that completes the digital feedback loop: one that seamlessly deploys new functionality composed from multiple independently updatable services that reflect new understanding, at scale, and with dependable and consistent management of data and these services between Azure and three different edges: the vehicle, the phone, and the many enterprise applications that support the journey.

The Microsoft Connected Vehicle Platform (MCVP) is the digital chassis upon which automotive original equipment manufacturers (OEMs) can deliver value-add services to their customers. These services areas include:

  • In-vehicle experiences
  • Autonomous driving
  • Advanced navigation
  • Customer engagement and insights
  • Telematics and prediction services
  • Connectivity and over the air updates (OTA)

MCVP is a platform composed from about 40 different Azure services and tailored for automotive scenarios. MCVP also includes Azure edge technologies, such as Automotive IoT Edge running in the vehicle to enable continuous over-the-air (OTA) updates of new functionality, and Azure Maps for intelligent location services.

With MCVP, and an ecosystem of partners across the industry, Microsoft offers a consistent platform across all digital services. This includes vehicle provisioning, two-way network connectivity, continuous over-the-air updates of containerized functionality, support for command and control, hot, warm, or cold paths for telematics, and extension hooks for customer or third-party differentiation. Being built on Azure, MCVP includes the hyperscale, global availability, and regulatory compliance that come as part of the Azure cloud. OEMs and fleet operators leverage MCVP as a way to “move up the stack” and focus on their customers rather than spend resources on non-differentiating infrastructure.
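
MCVP itself is delivered through Microsoft and its partners rather than as a single public SDK, but the vehicle provisioning and connectivity it describes build on Azure IoT services. A heavily simplified, hypothetical sketch of the underlying pattern (all names are placeholders, and the in-vehicle edge runtime and OTA pipeline are omitted):

    # Create an IoT hub and register a vehicle as a device identity
    # (hypothetical names; MCVP's own provisioning services handle this at scale).
    New-AzIotHub -ResourceGroupName "rg-mcvp" -Name "vehicles-hub" `
        -SkuName "S1" -Units 1 -Location "westeurope"

    Add-AzIotHubDevice -ResourceGroupName "rg-mcvp" -IotHubName "vehicles-hub" `
        -DeviceId "vin-demo-0001"

    # Retrieve the connection string the in-vehicle client would use.
    Get-AzIotHubDeviceConnectionString -ResourceGroupName "rg-mcvp" `
        -IotHubName "vehicles-hub" -DeviceId "vin-demo-0001"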

Automotive OEMs already taking advantage of MCVP, along with many of our ecosystem partners, include the Volkswagen Group, the Renault-Nissan-Mitsubishi Alliance, and Iconiq.

In this blog post, we are delighted to recap many of the MCVP ecosystem partners that accelerate our common customers’ ability to develop and deploy completed connected vehicle solutions.

An image showing the aspects of the Microsoft Connected Vehicle Platform.

Focus areas and supporting partnerships

Microsoft’s ecosystem of partners includes independent software vendors (ISVs), automotive suppliers, and systems integrators (SIs) that complete the overall value proposition of MCVP. We have pursued partnerships in these areas:

In-vehicle experiences

Cheaply available screens, increasingly autonomous vehicles, the emergence of pervasive voice assistants, and users’ increased expectation of the connectedness of their things have all combined to create an opportunity for OEMs to differentiate through the digital experiences they offer to the occupants, both the driver and the passengers, of their vehicles.

LG Electronics’ webOS Auto platform offers an in-vehicle, container-capable OS that brings the third-party application ecosystem created for premium TVs to in-vehicle experiences. webOS Auto supports the container-based runtime environment of MCVP and can be an important part of modern experiences in the vehicle.

Faurecia leverages MCVP to create disruptive, connected, and personalized services inside the Cockpit of the Future to reinvent the on-board experience for all occupants.

Autonomous driving

The continuous development of autonomous driving systems requires input from both test fleets and production vehicles that are integrated by a common connected vehicle platform. This is because the underlying machine learning (ML) models that either drive the car or provide assistance to the driver will be updated over time as they are improved based on feedback across those fleets, and those updates will be deployed over the air in incremental rings of deployment by way of their connection to the cloud.

Teraki creates and deploys containerized functionality to vehicles to efficiently extract and manage selected sensor data such as telemetry, video, and 3D information. Teraki’s product continuously trains and updates its sensor data processing to extract relevant, condensed information that enables customers’ models to achieve the highest accuracy rates, both in the vehicle (edge) and in Azure (cloud).

TomTom is integrating their navigation intelligence services such as HD Maps and Traffic as containerized services for use in MCVP so that other services in the vehicles, including autonomous driving, can take advantage of the additional location context.

Advanced navigation

TomTom’s navigation application has been integrated with the MCVP in-vehicle compute architecture to enable navigation usage and diagnostics data to be sent from vehicles to the Azure cloud, where the data can be used by automakers to generate data-driven insights, deliver tailored services, and make better-informed design and engineering decisions. The benefits of this integration include immediate insights created by comparing the intended route with the actual route, enriched with road metadata. If you are attending IAA, be sure to check out the demo at the Microsoft booth.

Telenav is a leading provider of connected car and location-based services and is working with Microsoft to integrate its intelligent connected-car solution suite, including infotainment, in-car commerce, and navigation, with MCVP.

Customer engagement and insights

Otonomo securely ingests automotive data from OEMs, fleet operators, and others, then reshapes and enriches the data so application and service providers can use it to develop a host of new and innovative offerings that deliver value to drivers. The data services platform has built-in privacy-by-design solutions for both personal and aggregate use cases. Through the collaboration with Microsoft, car manufacturers adopting the Microsoft Connected Vehicle Platform can easily plug their connected car data into Otonomo’s existing ecosystem to quickly roll out new connected car services to drivers.

Telematics and prediction services

DSA is a leading software and solutions provider for quality assurance, diagnostics, and maintenance of the entire vehicle electrics and electronics in the automotive industry. Together, DSA and Microsoft aim to close the digital feedback loop between automotive production facilities and cars in the field by providing advanced Vehicle Lifecycle Management based on the Microsoft Connected Vehicle Platform.

WirelessCar is a leading managed service provider within the connected vehicle ecosystem. It empowers car makers to provide mobility services with Microsoft Azure and the Microsoft Connected Vehicle Platform, supporting and accelerating their customers’ high market ambitions in a world of rapidly changing business models.

Connectivity and OTA

Cubic Telecom is a leading connectivity management software provider to the automotive and IoT industries globally. They are one of the first partners to bring seamless connectivity as a core service offering to MCVP for a global market. The deep integration with MCVP allows for a single data lake and an integrated services monitoring path. In addition, Cubic Telecom provides connected car capabilities that let drivers use infotainment apps in real-time, connect their devices to the Wi-Fi hotspot, and top-up on data plans to access high-speed LTE connectivity, optionally on a separate APN.

Excelfore is an innovator in automotive over-the-air (OTA) updating and data aggregation technologies. They provide a full implementation of the eSync bi-directional data pipeline, which has been ported to the Microsoft Azure cloud platform and integrated as the first solution for MCVP OTA updating.

Tata Communications is a leading global digital infrastructure provider. We are working with them to help speed the development of new innovative connected car applications. By combining the IoT connectivity capabilities of Tata Communications MOVE™ with MCVP, the two companies will enable automotive manufacturers to offer consumers worldwide more seamless and secure driving experiences.

Microsoft is incredibly excited to be a part of the connected vehicle space. With the Microsoft Connected Vehicle Platform, our ecosystem partners, and our partnerships with leading automotive players – both vehicle OEMs and automotive technology suppliers – we believe we have a uniquely capable offering that enables, at global scale, the next wave of innovation in the automotive industry, as well as in related verticals such as smart cities, smart infrastructure, insurance, transportation, and beyond.

Explore the Microsoft Connected Vehicle Platform today and visit us at IAA.

Go to Original Article
Author: Microsoft News Center