
What Exactly are Proximity Placement Groups in Azure?

In this blog post, you’ll learn all about Azure Proximity Placement Groups, why they are necessary, and how to use them.

What are Azure Proximity Placement Groups?

Microsoft defines Azure Proximity Placement Groups as an Azure Virtual Machine logical grouping capability that you can use to decrease the inter-VM network latency associated with your applications (Microsoft announcement blog post). But what does that actually mean?

When you look at VM placement in Azure and reducing latency between VMs, you can place VMs in the same region and the same Availability Zone, which puts them in the same group of physical datacenters. Even then, with the growing Azure footprint, these datacenters can still be a few kilometers apart.

That distance may impact the latency of your application, and applications that need the lowest possible latency are affected the most. Examples are banking applications for low-latency trading or other financial stock operations.

Proximity Placement Groups bring VMs together as near as possible to achieve the lowest latency possible. The following scenarios are eligible for Proximity Placement Groups:

  • Low latency between stand-alone VMs.
  • Low latency between VMs in a single availability set or a virtual machine scale set.
  • Low latency between stand-alone VMs, VMs in multiple Availability Sets, or multiple scale sets. You can have multiple compute resources in a single placement group to bring together a multi-tiered application.
  • Low latency between multiple application tiers using different hardware types. For example, running the backend using M-series in an availability set and the front end on a D-series instance, in a scale set, in a single proximity placement group.

All VMs must be in a single VNet, as shown in the drawing below:

Virtual Network Scale Set and Availability Set

I wouldn't suggest single VMs for production workloads on Azure. Always use a cluster within an Availability Set or a VM Scale Set.

What does that look like in an Azure datacenter environment?

The following drawing shows the placement of a VM without Proximity Groups:

Placement of a VM without Proximity Groups

With Proximity Groups for a single VM, it could look like the following:

Proximity Groups for a single VM

When you use availability sets for your VMs, the distribution can look like the following.

Distribution availability sets for VMs

With that said, let’s learn how to set up a Proximity Placement Group.

How to set up a Proximity Placement Group

Setting up a Proximity Placement Group is pretty easy.

Look for Proximity Placement Groups in the Portal:

Proximity Placement Groups in the Portal

Add a new group:

Create a new Proximity Placement Group

Select the Subscription, Resource Group, Region, and a name, then create the group:

Proximity Placement Group Settings

When you now create a VM, you can select the Proximity Placement Group on the Advanced tab:

Proximity Placement Group advanced settings

There is also the option to use PowerShell or the Azure CLI to deploy Proximity Placement Groups.
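As a rough sketch, the scripted flow with the Azure CLI could look like the following. The resource names, location, and image alias are placeholders, not values from this article:

```shell
# Create a resource group and a proximity placement group inside it
az group create --name ppg-demo-rg --location westeurope
az ppg create --name ppg-demo --resource-group ppg-demo-rg --location westeurope

# Create a VM inside the placement group; availability sets and scale
# sets accept the same --ppg parameter, which is how you co-locate the
# tiers of a multi-tier application
az vm create \
  --resource-group ppg-demo-rg \
  --name ppg-demo-vm01 \
  --image Ubuntu2204 \
  --ppg ppg-demo
```

Every VM, availability set, or scale set created with the same `--ppg` value lands as physically close together as the platform can manage.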

Conclusion

The information in this blog post explains Proximity Placement Groups and how to use them, but if you’re stuck or there’s something you need further explanation about, let me know in the comments below and I’ll get back to you!

Go to Original Article
Author: Florian Klaffenbach

What Exactly is Azure Dedicated Host?

In this blog post, we’ll become more familiar with a new Azure service called Azure Dedicated Host. Microsoft announced the service in preview some time ago and will make it generally available in the near future.

Microsoft Azure Dedicated Host allows customers to run their virtual machines on a dedicated host that is not shared with other customers. While in a regular virtual machine scenario different customers or tenants share the same hosts, with Dedicated Host a customer no longer shares the hardware. The picture below illustrates the setup.

Azure Dedicated Hosts

With Dedicated Host, Microsoft wants to address customer concerns regarding compliance, security, and regulations, which can come up when running on a shared physical server. In the past, the only option to get a dedicated host in Azure was to use very large instances, like a D64s v3 VM size. These instances were so large that they consumed an entire host, so no other customers' VMs could be placed on it.

To be honest, with the improvements in machine placement, larger hosts, and with that a much better density, there was no longer a 100% guarantee that such a host was still dedicated. These large instances are also extremely expensive, as you can see in the screenshot from the Azure Price Calculator.

Azure price calculator

How to Setup a Dedicated Host in Azure

The setup of a dedicated host is pretty easy. First, you need to create a host group with your preferences for availability, like Availability Zones and the number of fault domains. You also need to decide on a host region, group name, etc.

How To Setup A Dedicated Host In Azure

After you have created the host group, you can create a host within the group. In the current preview, only the Dsv3 and Esv3 VM families are available to choose from; Microsoft will add more options soon.

Create dedicated host
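The portal steps above can also be scripted. A hedged Azure CLI sketch follows; the resource names, zone, SKU, and VM size are illustrative, and the available host SKUs depend on region and preview status:

```shell
# 1) Host group: pick an availability zone and the number of fault domains
az vm host group create \
  --resource-group dh-demo-rg \
  --name dh-demo-group \
  --zone 1 \
  --platform-fault-domain-count 2

# 2) Dedicated host inside the group; DSv3-Type1 was one of the SKUs
#    available during the preview
az vm host create \
  --resource-group dh-demo-rg \
  --host-group dh-demo-group \
  --name dh-demo-host \
  --sku DSv3-Type1 \
  --platform-fault-domain 0

# 3) VM pinned to that host via the host's resource ID;
#    --license-type Windows_Server enables Azure Hybrid Benefit
host_id=$(az vm host show \
  --resource-group dh-demo-rg \
  --host-group dh-demo-group \
  --name dh-demo-host \
  --query id --output tsv)

az vm create \
  --resource-group dh-demo-rg \
  --name dh-demo-vm01 \
  --image Win2019Datacenter \
  --size Standard_D4s_v3 \
  --host "$host_id" \
  --license-type Windows_Server
```

The VM size in step 3 must belong to the family of the host SKU chosen in step 2, otherwise placement fails.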

More Details About Pricing

As you can see in the screenshot, Microsoft added the option to use Azure Hybrid Use Benefits for Dedicated Host. That means you can use your on-prem Windows Server and SQL Server licenses with Software Assurance to reduce your costs in Azure.

Azure Hybrid Use Benefits pricing

Azure Dedicated Host also gives you more insight into the host, such as:

  • The underlying hardware infrastructure (host type)
  • Processor brand, capabilities, and more
  • Number of cores
  • Type and size of the Azure Virtual Machines you want to deploy

An Azure customer can control all host-level platform maintenance initiated by Azure, like host OS updates. Azure Dedicated Host gives you a 35-day rolling window in which these updates are applied to your host system. During this self-maintenance window, customers can apply maintenance to their hosts at their own convenience.

Looking a bit deeper into the service, Azure becomes more like a traditional hosting provider that gives customers a very dynamic platform.

The following screenshot shows the current pricing for a Dedicated Host.

Azure Dedicated Host pricing details

The following virtual machine types can run on a dedicated host.

Virtual Machines on a Dedicated Host

Currently, there is a soft limit of 3,000 vCPUs for dedicated hosts per region. That limit can be raised by submitting a support ticket.

When Would I Use A Dedicated Host?

In most cases, you would choose a dedicated host for compliance reasons: you may not want to share a host with other customers. Another reason could be that you want a guaranteed CPU architecture and type. If you place your VMs on the same host, it is guaranteed that they all run on the same architecture.

Further Reading

Microsoft has already published a lot of documentation and blog posts about the topic, so you can deepen your knowledge of Dedicated Host.

Resource #1: Announcement Blog and FAQ 

Resource #2: Product Page 

Resource #3: Introduction Video – Azure Friday “An introduction to Azure Dedicated Hosts | Azure Friday”

Go to Original Article
Author: Florian Klaffenbach

Microsoft Azure Peering Services Explained

In this blog post, you’ll discover everything you need to know about Microsoft Azure Peering Services, a networking service introduced during Ignite 2019.

Microsoft explains the service within their documentation as follows:

Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Office 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft has partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.

To be honest, Microsoft explained the service well, but what’s behind the explanation is much more complex. To understand Azure Peering Services and its benefits, you need to understand how peering, routing, and connectivity for internet providers work.

What Are Peering And Transit?

In the internet and network provider world, peering is an interconnection of separate and independent internet networks to exchange traffic between users within their respective networks. Peering, or partnering, is a free agreement between two providers. Normally, both providers only pay for their cross-connect in the datacenter and their colocation space. Traffic itself is not billed by either party, although there are special agreements, e.g. between smaller and larger providers.

Normally you have the following agreements:

  • between equal providers or peering partners – traffic upload and download between these two networks is free for both parties
  • a larger provider and a smaller provider – the smaller provider needs to pay a fee for the transit traffic to the larger network provider
  • providers who transit another network to reach a 3rd party network (upstream service) – the provider using the upstream needs to pay a fee for the transit traffic to the upstream provider

An agreement by two or more networks to peer is instantiated by a physical interconnection of the networks, an exchange of routing information through the Border Gateway Protocol (BGP), and, in some special cases, a formalized contractual document. These documents are called peering policies and letters of authorization (LOAs).

Fun Fact – As a peering partner for Microsoft, you can easily configure the peering through the Azure Portal as a free service.

As you can see in the screenshot, Microsoft is very restrictive with their routing and peering policies. That prevents unwanted traffic and protects Microsoft customers when Peering for Azure ExpressRoute (AS12076).

Routing and peering policies Azure express route.

Now let’s talk a bit about the different types of peering.

Public Peering

Public peering is configured over the shared platform of an Internet Exchange Point. Internet Exchanges charge a port and/or membership fee for using their platform for interconnection.

If you are a small cloud or network provider with little infrastructure, peering via an Internet Exchange is a good place to start. For a big player in the market, it is also a good choice because you reach smaller networks over a short path. The picture below shows an example of those prices, taken from the Berlin Commercial Internet Exchange pricing page.

Berlin Commercial Internet Exchange Pricing

Hurricane Electric offers a tool that gives you a peering map and more information about how a provider is publicly peered with other providers, but it will not show private peerings. The picture below shows some examples for Microsoft AS 8075.

Microsoft AS 8075 peering

Private Peering

Private peering is a direct physical link between two networks, commonly one or more 10 GbE or 100 GbE links. The connection is made from one network to another only, and each side pays a set fee to the owner of the infrastructure or colocation that is used; those costs are usually the cross-connect within the datacenter. That makes private peering a good choice when you need to send large volumes of traffic to one specific network: measured by the price per transferred gigabyte between the two networks, it is much cheaper than public peering. When peering privately with providers, though, you may need to follow their peering policies.

A good provider also has a looking glass where you can get more insight into its peerings, but we will look at this later on.

Transit and Upstream

When someone is using transit, the provider itself has no access to the destination network. Therefore, it needs to leverage other networks or network providers to reach the destination network and destination service. Those providers who offer the transit are known as transit providers, with the largest networks considered Tier 1 networks. As a network provider for cloud customers like Microsoft, you don't want any transit routing. Transit routing through other networks is normally expensive, and worse, it adds latency and an uncontrollable stretch of network between your customers and the cloud services. So, the first rule when handling cloud customers: avoid transit routing, and peer directly with cloud providers through private or public network interconnects at interconnect locations.

That is one reason why Microsoft is working with Internet Exchanges and network and internet providers to enable services like Microsoft Azure Peering Services. It should give customers more control over how they reach Microsoft services, including Azure, Microsoft 365, Xbox, etc. To understand the impact, you also need to know how service providers route traffic, which is what the next part of the post covers.

How Do Internet Service Providers Route Your Traffic?

When you look at routing, there are mostly only two options within a carrier network. The first is cold potato, or centralized, routing: the provider keeps the traffic as long as possible within its own network before handing it to a third party. The other option is hot potato, or decentralized, routing: here the provider hands the traffic to the third party as fast as possible, mostly within the same metro.

The picture below illustrates the difference between hot and cold potato routing.

cold and hot potato routing differences

As you can see in the drawing, with cold potato routing the traffic takes a longer path through the provider network, and with that, a longer path to your target, e.g. Microsoft.

Those routing configurations have a large impact on your cloud performance, because every kilometer of distance adds latency: roughly 1 ms for every 200 kilometers. As a result, you will see an impact on things like voice quality in Teams meetings or synchronization issues for backups to Azure.
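That rule of thumb is easy to turn into a quick estimate; a minimal sketch (1 ms per 200 km is the article's approximation, not an exact constant):

```shell
# Estimate the added latency for a few fiber distances, using the
# ~1 ms per 200 km rule of thumb quoted above.
for km in 200 600 1000; do
  awk -v d="$km" 'BEGIN { printf "%d km adds roughly %.1f ms\n", d, d / 200 }'
done
```

So a detour of 600 km through a cold potato network adds roughly 3 ms each way before your traffic even reaches Microsoft.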

Microsoft has a big agenda to address that issue for their customers and the rest of the globe. You can read more about the plans in articles from Yousef Khalidi, Corporate Vice President, Microsoft Azure Networking.

Now let’s start with Peering Services and how it can change the game.

What Is Azure Peering Services and How Does It Solve the Issue?

When you look at how the service is designed, you can see that it leverages all of Microsoft's provider peerings with AS 8075. Together with the Microsoft Azure Peering Services partners, Microsoft can change the default routing and transit behavior towards their services when a partner provider is used.

Following the picture below, you can set up routing so that traffic from your network to Azure (or other networks) now uses the Microsoft Global Backbone instead of a transit provider without any SLA.

What is Azure Peering Services

With the service enabled, performance to Microsoft services will increase and latency will be reduced, depending on the provider. As you would expect, services like Office 365 or Azure AD profit from this Azure service, but there is more. If you, for example, build your backbone on the Microsoft global transit architecture with Azure Virtual WAN and leverage internet connections from these providers and Internet Exchange partners, you directly boost your network performance and get a pseudo-private network. The reason is that you now leverage private or public peering with route restrictions: your backbone traffic bypasses the regular internet and flows through the Microsoft Global Backbone from A to B.

Let me try to explain it with a drawing.

Microsoft global backbone network

In addition to better performance, you also get an additional layer of monitoring. While the regular internet is a black box regarding data flow, performance, etc., with Microsoft Azure Peering Services you get full operational monitoring of your wide area network through the Microsoft backbone.

You can find this information in the Azure Peering Services Telemetry Data.

The screenshot below shows the launch partner of Azure Peering Services.

Launch partner of Azure Peering Services

When choosing a network provider for your access to Microsoft, you should follow this guideline:

  • Choose a provider well peered with Microsoft
  • Choose a provider with hot potato routing to Microsoft
  • Don't let price alone decide the provider; a good network has costs
  • Choose Dedicated Internet Access over a regular internet connection whenever possible
  • If possible, use local providers instead of global ones
  • A good provider always has a looking glass or can provide you with default routes between a city location and other peering partners. If not, it is not a good provider to choose

So, let’s learn about the setup of the service.

How to configure Azure Peering Services?

First, you need to understand that, like with Azure ExpressRoute, there are two sides to contact and configure.

You need to follow the steps below to establish a Peering Services connection.

Step 1: The customer provisions connectivity from a connectivity partner (no interaction with Microsoft). With that, you get an internet provider who is well connected to Microsoft and meets the technical requirements for performant and reliable connectivity to Microsoft. Again, you should check the partner list.
Step 2: The customer registers locations in the Azure portal. A location is defined by the ISP/IXP name, the physical location of the customer site (state level), and the IP prefix given to the location by the service provider or the enterprise. As a service from Microsoft, you then get telemetry data such as internet route monitoring and traffic prioritization from Microsoft to the user's closest edge location.

The registration of the locations happens within the Azure Portal.

Currently, you need to register for the public beta first. That happens with some simple PowerShell commands.

Using Azure PowerShell 

Using Azure CLI

Afterward, you can configure the service using the Azure Portal, Azure PowerShell, or Azure CLI.
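For the preview registration, a hedged Azure CLI equivalent could look like this. The feature name AllowPeeringService is taken from the preview documentation of the time and may have changed since:

```shell
# Register the Peering Service preview feature on the subscription
az feature register --namespace Microsoft.Peering --name AllowPeeringService
az provider register --namespace Microsoft.Peering

# Check the registration state; wait until it flips from
# "Registering" to "Registered" before configuring the service
az feature show --namespace Microsoft.Peering --name AllowPeeringService \
  --query properties.state --output tsv
```

Once the provider is registered, the actual peering service resources can be created in the portal or via the CLI's peering commands.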

You can find the respective guides here.

Once the service goes generally available (GA), customers will also receive SLAs on the peering and telemetry service. Currently, there is no SLA and no support if you use the service in production.

Peering and Telemetry service

Closing Thoughts

From reading this article, you now have a better understanding of Microsoft Azure Peering Services and its use, peering between providers, and routing and traffic behavior within the internet. When digging deeper into Microsoft Peering Services, you should now be able to develop some architectures and ideas on how to use the service.

If you have any providers that are not aware of the service or of direct peering with Microsoft AS 8075, point them to http://peering.azurewebsites.net/ or let them drop an email to [email protected]

When using the BGP tools from Hurricane Electric, you should get information about some providers peering with Microsoft. One thing you need to know: most of the 3,500 network partners of Microsoft peer privately with Microsoft, and the Hurricane Electric tools only observe the public peering partners.

Go to Original Article
Author: Florian Klaffenbach

How to Use ASR and Hyper-V Replica with Failover Clusters

In the third and final post of this blog series, we will evaluate Microsoft's replication solutions for multi-site clusters and how to integrate basic backup/DR with them. This includes Hyper-V Replica, Azure Site Recovery, and DFS Replication. In the first part of the series, you learned about setting up failover clusters to work with DR solutions, and in the second post, you learned about disk replication considerations from third-party storage vendors. The challenge with the solutions that we previously discussed is that they typically require third-party hardware or software. Let's look at the basic technologies provided by Microsoft to reduce these upfront fixed costs.

Note: The features talked about in this article are native Microsoft features with a baseline level of functionality. Should you require over and above what is required here you should look at a third-party backup/replication product such as Altaro VM Backup.

Multi-Site Disaster Recovery with Windows Server DFS Replication (DFSR)

DFS Replication (DFSR) is a Windows Server role service that has been around for many releases. Although DFSR is built into Windows Server and is easy to configure, it is not supported for multi-site clustering. This is because files are only replicated when they are closed, which works great for file servers hosting documents. However, it is not designed for application workloads that keep files open, such as SQL databases or Hyper-V VMs. Since these file types only close during a planned failover or an unplanned crash, it is hard to keep the data consistent at both sites. This means that if your first site crashes, the data will not be available at the second site, so DFSR should not be considered as a possible solution.

Multi-Site Disaster Recovery with Hyper-V Replica

The most popular Microsoft DR solution is Hyper-V Replica, which is a built-in Hyper-V feature available to Windows Server customers at no additional cost. It copies the virtual hard disk (VHD) file of a running virtual machine from one host to a second host in a different location. This is an excellent low-cost solution to replicate your data between your primary and secondary sites, and it even allows you to do extended ("chained") replication to a third location. However, it is limited in that it only replicates Hyper-V virtual machines (VMs), so it cannot be used for any other application unless it is virtualized and running inside a VM. The way it works is that any changes to the VHD file are tracked in a log file, which is copied to an offline VM/VHD in the secondary site. Replication is asynchronous, with copies sent every 30 seconds, 5 minutes, or 15 minutes. While this means that there is no distance limitation between the sites, there could be some data loss if in-memory data has not been written to disk or if there is a crash between replication cycles.

Two Clusters Replicate Data between Sites with Hyper-V Replica

Figure 1 – Two Clusters Replicate Data between Sites with Hyper-V Replica

Hyper-V Replica allows for replication between standalone Hyper-V hosts or between separate clusters, or any combination.  This means that instead of stretching a single cluster across two sites, you will set up two independent clusters. This also allows for a more affordable solution by letting businesses set up a cluster in their primary site and a single host in their secondary site that will be used only for mission-critical applications. If the Hyper-V Replica is deployed on a failover cluster, a new clustered workload type is created, known as the Hyper-V Replica Broker. This basically makes the replication service highly-available, so that if a node crashes, the replication engine will failover to a different node and continue to copy logs to the secondary site, providing greater resiliency.

Another powerful feature of Hyper-V Replica is its built-in testing, allowing you to simulate both planned and unplanned failovers to the secondary site. While this solution will meet the needs of most virtualized datacenters, it is also important to remember that there are no integrity checks on the data being copied between the VMs. This means that if a VM becomes corrupted or is infected with a virus, that same fault will be sent to its replica. For this reason, backups of the virtual machine are still a critical part of standard operating procedure. Additionally, this Altaro blog notes that Hyper-V Replica has other limitations compared to backups when it comes to retention, file space management, keeping separate copies, using multiple storage locations, and replication frequency, and it may have a higher total cost of ownership. If you are using a multi-site DR solution with two clusters, make sure that you are taking and storing backups in both sites, so that you can recover your data at either location. Also make sure that your backup provider supports clusters, CSV disks, and Hyper-V Replica; however, this is now standard in the industry.

Multi-Site Disaster Recovery with Azure Site Recovery (ASR)

All of the aforementioned solutions require you to have a second datacenter, which simply is not possible for some businesses. While you could rent rack space from a colocation facility, the economics just may not make sense. Fortunately, the Microsoft Azure public cloud can now be used as your disaster recovery site using Azure Site Recovery (ASR). This technology works with Hyper-V Replica, but instead of copying your VMs to a secondary site, you are pushing them to a nearby Microsoft datacenter. It still has the same limitations as Hyper-V Replica, including the replication frequency, and furthermore you do not have access to the physical infrastructure of your DR site in Azure. The replicated VM can run on the native Azure infrastructure, or you can even build a virtualized guest cluster and replicate to this highly available infrastructure.

While ASR is a significantly cheaper solution than maintaining your own hardware in the secondary site, it is not free. You have to pay for the service, the storage of your virtual hard disks (VHDs) in the cloud, and if you turn on any of those VMs, you will pay for standard Azure VM operating costs.

If you are using ASR, you should follow the same backup best practices mentioned in the earlier Hyper-V Replica section. The main difference is that you should use an Azure-native backup solution to protect your replicated VHDs in Azure, in case you fail over to the Azure VMs for any extended period of time.

Conclusion

From reviewing this blog series, you should be equipped to make the right decisions when planning your disaster recovery solution using multi-site clustering. Start by understanding your site restrictions, and from there you can plan your hardware needs and storage replication solution. The options range from higher-priced solutions with more features to cost-effective solutions using Microsoft Azure that offer less control. Even after you have deployed this resilient infrastructure, keep in mind that there are still three main reasons why disaster recovery plans fail:

  • The detection of the outage failed, so the failover to the secondary datacenter never happens.
  • One component in the DR failover process does not work, which is usually due to poor or infrequent testing.
  • The failover process was not automated or depended on humans, who create a bottleneck and are unreliable during a disaster.

This means that whichever solution you choose, make sure that it is well tested with quick failure detection and try to eliminate all dependencies on humans! Good luck with your deployment and please post any questions that you have in the comments section of this blog.


Go to Original Article
Author: Symon Perriman

Business and innovation tips for your Imagine Cup project

Editor’s note: This blog was contributed by the Global Innovation through Science and Technology initiative (GIST). GIST is led by the U.S. Department of State and implemented by VentureWell.

Microsoft’s Imagine Cup empowers student developers and aspiring entrepreneurs from all academic backgrounds to bring an idea to life with technology. Through competition and collaboration, it provides an opportunity to develop an application, create a business plan, and gain a keen understanding of what’s needed to bring a concept to market to make an impact. We’ve partnered with GIST to provide some top tips for turning your idea into a marketable business solution and prepare you to present it effectively on a global stage.

Key things to consider when developing a business idea

1. Assess whether your product is truly novel 

In the early development stages of a new idea, it’s important to assess whether your idea already exists in the current market and if so, what unique solution your application can provide. 

In the world of intellectual property law, “prior art” is the term for relevant information that was publicly available before a patent claim. For example, if your company is working on a new type of football helmet, but another company has already given an interview about their own plans to invent such a helmet, that constitutes prior art, and it means your patent claim is likely to face a steep uphill battle. Start by asking yourself if your project is truly novel. What problem does your application solve? Are there similar solutions already on the market? If necessary, work with your university to establish whether a patent already exists.

2. Learn to take feedback  

It’s easy to get attached to an invention. However, being too lovestruck with your technology can prevent you from absorbing vital feedback from customers, professors, mentors, and even teammates. “Feedback is learning,” says Dr. Lawrence Neeley, Associate Professor of Design and Entrepreneurship at Olin College of Engineering. “Sure, feedback can hurt, but understand that you can’t improve your invention without learning what’s wrong with it. Feedback is a mechanism for growth.” In addition, don’t lose sight of the passion that originally drove you to develop a solution, as it can put you in the right mindset to listen to feedback. By keeping the core problem at the forefront, you can more effectively pivot your technology and business model to better address market demands. Read more about how to balance your passion with real-life data to make your project shine.

3. Incorporate diversity & inclusion 

Empower everyone to benefit from your solution by considering diversity and inclusion in your project early on. “When accessibility is at the heart of inclusive design, we not only make technology that is accessible for people with disabilities, we invest in the future of natural user interface design and improved usability for everyone,” says Megan Lawrence, an Accessibility Technical Evangelist at Microsoft. Check out some resources to help you build inclusion into your innovation: 

  • Use Accessibility Insights to run accessibility testing on web pages and applications. 
  • Learn how to create inclusive design through video tutorials and downloadable toolkits. 
  • Read the story of two Microsoft teams at Ability Hacks who embraced the transformative power of technology to create inclusive solutions now used by millions of people. 

Read more tips on using inclusion as a lens to drive innovation. 

4. Consider environmental responsibility 

To maximize impact from the start, it’s critical that student innovators develop an environmentally responsible mindset at the earliest stages of their innovation, business, or manufacturing process. Here are some examples from student innovators of how they integrated environmental responsibility into their business models: 

  • Use renewable energy sources where possible, such as solar power or implementing recycling processes. 
  • Incorporate sustainable processes through things like reducing packaging, limiting plastic waste, and sourcing materials that are reusable or biodegradable.  
  • Create an innovation that solves a key environmental issue or repurposes harmful by-products, such as recovering metal water contaminants or converting ocean waste.  

Read more about how they leveraged sustainability in their projects. 

Maximizing resources for your innovation 

It can be a challenge to seek support resources as a student entrepreneur. Here are some top tips for maximizing on- and off-campus benefits while you’re still in school – check out additional advice if you’re interested in learning more. 

1. Take stock of university resources 

Assess what skills you may need beyond just technical and talk to faculty or administrators to develop a roadmap for your time in school. For instance, seek out seminars or courses in different departments to help sharpen writing or public speaking skills, or visit your university library to find out what resources they have to offer student entrepreneurs such as makerspaces, workshops, or guest lectures. 

2. Maximize networking opportunities 

Connect with others through LinkedIn, your university’s alumni network, classes, hackathons, and more to network with industry-specific experts. Pro-tip: Imagine Cup connects you to a global community of like-minded tech enthusiasts to collaborate and innovate together, in addition to giving you access to industry professionals. 

3. Take advantage of competitions  

Approach competitions as not just an opportunity to win, but also to further refine your project and go-to-market plan. Leverage feedback and insights from judges, mentors, and peers to continue ideating and developing a marketable solution. 

Build business skills through hands-on innovation 

What better way to put these tips into practice than through bringing your own solution to life? The Imagine Cup is your opportunity to build a technology innovation from what you’re most passionate about. Regardless of where you place in the competition, you’ll have the chance to connect with like-minded tech enthusiasts across the globe, including joining a network of over two million past competitors. In addition, teams who advance to the Regional Finals will receive mentorship from industry professionals and in-person entrepreneurship workshops from GIST, led by the U.S. Department of State and implemented by VentureWell, to help elevate their solutions. 

Learn by doing, code for impact, and build purpose from your passion. Register now for the 2020 competition. 

 

Go to Original Article
Author: Microsoft News Center

Addressing the coming IoT talent shortage – Microsoft Industry Blogs

This blog is the third in a series highlighting our newest research, IoT Signals. Each week will feature a new top-of-mind topic to provide insights into the current state of IoT adoption across industries, how business leaders can develop their own IoT strategies, and why companies should use IoT to improve service to partners and customers.

As companies survey the possibilities of the Internet of Things (IoT), one of the challenges they face is a significant growing talent shortage. Recent research from Microsoft, IoT Signals, drills down into senior leaders’ concerns and plans. Microsoft surveyed 3,000 decision-makers at companies across China, France, Germany, Japan, the United States, and the United Kingdom who are involved in IoT.

Exploring IoT skills needs at enterprises today

Most IoT challenges today relate to staffing and skills. Our research finds that only 33 percent of companies adopting IoT say they have enough workers and resources, 32 percent lack enough workers and resources, and 35 percent reported mixed results or didn’t know their resourcing situation. Worldwide, talent shortages are most acute in the United States (37 percent) and China (35 percent).

Of the top challenges that impede the 32 percent of companies struggling with IoT skills shortages, respondents cited a lack of knowledge (40 percent), technical challenges (39 percent), lack of budget (38 percent), an inability to find the right solutions (28 percent), and security (19 percent).


Companies will need to decide which capabilities they should buy, in the form of hiring new talent; build, in the form of developing staff competencies; or outsource, in the form of developing strategic partnerships. For example, most companies evaluating the IoT space aren’t software development or con­nectivity experts and will likely turn to partners for these services.

Adequate resourcing is a game-changer for IoT companies

Our research found that having the right team and talent was critical to IoT success on a number of measures. First, those with sufficient resources were more likely to say that IoT was very critical to their company’s future success: 51 percent versus 39 percent. Hardship created more ambivalence, with only 41 percent of IoT high performers saying IoT was somewhat critical to future success, whereas 48 percent of lower-performing companies agreed.

Similarly, companies with strong IoT teams viewed IoT as a more successful investment, attributing 28 percent of current ROI to IoT (inclusive of cost savings and efficiencies) versus 20 percent at less enabled companies. That’s likely why 89 percent of those who have the right team are planning to use IoT more in the future versus 75 percent of those who lack adequate resources.

IoT talent shortage may cause higher failure rate

Getting IoT off the ground can be a challenge for any company, given its high learning curve, long-term commitment, and significant investment. It’s doubly so for companies that lack talent and resources. IoT Signals found that companies that lack adequate talent and resources have a higher failure rate in the proof of concept phase: 30 percent versus 25 percent for those with the right team. At companies with high IoT success, the initiative is led by a staffer in an IT role, such as a director of IT, a chief technology officer, or a chief information officer. With leadership support, a defined structure, and budget, these all-in IoT organizations are able to reach the production stage in an average of nine months, while those who lack skilled workers and resources take 12 months on average.

Despite initial challenges, company leaders are unlikely to call it quits. Business and technology executives realize that IoT is a strategic business imperative and will be increasingly required to compete in the marketplace. Setting up the right team, tools, and resources now can help prevent team frustration, business burnout, and leadership commitment issues.

Overcoming the skills issues with simpler platforms

Fortunately, industry trends like fully hosted SaaS platforms are reducing the complexity of building IoT programs: from connecting and managing devices to providing integrated tooling and security, to enabling analytics.

Azure IoT Central, a fully managed IoT platform, is designed to let anyone build an IoT initiative within hours, empowering business teams and other non-technical individuals to easily gain mastery and contribute. Azure includes IoT Plug and Play, which provides an open modeling language to connect IoT devices to the cloud seamlessly.
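The modeling idea behind IoT Plug and Play is that the device, not custom code on the receiving end, declares what its telemetry means. As a rough sketch of that idea in Python (the field names and payload shape below are invented for illustration, not taken from a real device model), telemetry is just structured, self-describing JSON:

```python
import json
import time

def build_telemetry(temperature_c: float, humidity_pct: float) -> str:
    # Serialize one telemetry reading. With IoT Plug and Play, a device
    # model would declare these fields so the cloud can chart and
    # interpret them without bespoke parsing code.
    # The field names here are illustrative only.
    payload = {
        "temperature": temperature_c,
        "humidity": humidity_pct,
        "timestamp": int(time.time()),
    }
    return json.dumps(payload)
```

A real device would hand a message like this to the Azure IoT device SDK for transmission; the sketch only shows the shape of the data the model describes.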

Additionally, Microsoft is working with its partner ecosystem to create industry-specific solutions to help companies overcome core IoT adoption blockers and investing in training tools like IoT School and AI Business School. Microsoft has one of the largest and fastest-growing partner ecosystems. Our more than 10,000 IoT partners provide domain expertise across industries and help address connectivity, security infrastructure, and application infrastructure requirements, allowing companies to drive to value faster. 

Learn more about how global companies are using IoT to drive value by downloading the IoT Signals report and reading our Transform Blog on IoT projects companies such as ThyssenKrupp, Bühler, Chevron, and Toyota Material Handling Group are driving.

Go to Original Article
Author: Microsoft News Center

8 Key Questions on Cloud Migration Answered

If you follow our blog, you’ll likely know that we recently hosted an Altaro panel-style webinar, featuring Microsoft MVPs Didier Van Hoye, Thomas Maurer, and myself. The topic of the webinar was centered around the journey to the cloud, or simply put, migrating to cloud technologies. These cloud technologies include on-premises hosted private clouds, hybrid cloud solutions like Azure Stack, and public cloud platforms such as Microsoft Azure. We chose this topic because we’ve found that while most IT Pros will agree that adopting cloud technologies is a good idea, many of them are unsure of the best way to get there. To be honest, I think that uncertainty is to be expected given the vast number of options emerging cloud technologies provide. The aim of this webinar was to clarify the services available and how to decide which form of cloud adoption will be best for you.

It seems this topic is something quite a lot of our audience are interested in hearing more about considering the number of questions which were asked during the webinar. I’ve decided to group the most commonly asked ones here and omitted the more specific questions that relate to particular set-ups and individual requirements. Apart from the questions, the topic also raised a lot of comments and discussion which I think is well worth mentioning here so you can get a feel about how others in the IT community are dealing with the issue of cloud migration and the various concerns it brings with it (further down the page).

Remember if you didn’t have a chance to ask a question during the webinar, or if you were unable to attend and want to ask something now, I will be more than happy to answer any questions submitted through the comment box at the bottom of this page.

Revisit the Webinar

If you haven’t already watched the webinar (or if you just want to watch it again) you can do so HERE


8 Questions on Cloud Technologies and Migration Answered

Q. When can you consider a deployment a hybrid cloud? Is it Azure Stack? Is it something as simple as a VPN linking on-prem and a public cloud?

A. I don’t know if there is an official definition, but the current industry opinion would state that a hybrid cloud is any deployment where your workloads and deployments are stretching from on-prem to a public cloud player such as Azure or AWS.

Q. With the release of Windows Admin Center, will we see the RSAT (Remote Server Administration Tools) go away?

A. No. At this time both management solutions will be developed by their respective teams. With that said, if the adoption of WAC is strong enough, we could potentially see RSAT slowly phased out, possibly as soon as the next version of Windows Server (after 2019).

Q. Is there any way to connect containers to Windows Admin Center?

A. As it stands at the time of this writing, no. There is currently no mechanism to manage containers from WAC. With that said, due to WAC’s extensibility, it’s not out of the question for a third-party vendor (or even Microsoft) to write an extension for WAC that would allow you to do so.

If you need advanced management of containers today, take a look at an orchestration tool like Kubernetes.

Q. How does Azure Stack compete with current open source private clouds in the industry such as OpenStack? Pricing is quite different and some can even be seen as “free” by higher management while disregarding the needed effort to support such a deployment.

A. While it’s true that OpenStack and other open source cloud platforms like it can potentially be free, it’s not really an apples-to-apples comparison when measuring them against Azure Stack. Azure Stack brings the power and capabilities of Azure inside your datacenter. Microsoft has taken everything it has learned with public Azure and packaged it up for you to use at your location. You manage it via the web and get billed per usage, much the same way as with Azure.

OpenStack certainly has its uses, and I’m a huge supporter of open source, but if you’re a Microsoft-centric shop looking to host a cloud for your organization, it’s tough to go wrong with Azure Stack due to the similarities in management and integration with public Azure. At the end of the day you have to ask yourself: do you want to use and consume cloud services, or do you want to build a cloud? Remember that building a cloud is difficult, costly, and time-intensive. It’s possible, but ongoing management can be difficult. With Azure Stack, much of that work and testing is taken care of for you.

Q. Do you have any suggestions for using Azure as a DR site?

A. It certainly is possible to use Azure for DR, and it’s often one of the first services moved into the public cloud. You can certainly use Azure to host offsite backups and/or recovery to a nested hypervisor inside of Azure using a product such as Altaro VM Backup. If you need a more “hot” DR approach, you could look at something like Azure Site Recovery as well.

Q. What are your thoughts on using cloud services to host file services for a small number of remote users?

A. While you could certainly use something like OneDrive for Business or Azure Files for this, you need to first consider latencies and access times. Do your users consume file types that work well with longer-than-local latencies? If so, these services may work for you. If not, local on-network file storage may still be a requirement. Whatever route you choose to go, remember that file performance is often one of the most ticket-generating user issues. Make sure you test before settling on an option.
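A quick way to reason about that latency question is that every protocol round trip pays the full network latency once. A minimal sketch (the round-trip count and latency figures below are hypothetical placeholders; measure your own workload):

```python
def estimated_open_time_ms(latency_ms: float, round_trips: int) -> float:
    # Rough perceived delay for a file open: each protocol round trip
    # pays the full network latency once. Illustrative only.
    return latency_ms * round_trips

# Hypothetical: ~20 round trips to open and render a document.
lan_ms = estimated_open_time_ms(1, 20)     # local network, ~1 ms latency
cloud_ms = estimated_open_time_ms(40, 20)  # remote region, ~40 ms latency
```

Even with identical bandwidth, the remote case is an order of magnitude slower to the user, which is why chatty file workloads are the ones to test first.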

Q. What are the rough costs for storage in Azure?

A. See the Azure Pricing Calculator for the latest pricing information.
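For a back-of-the-envelope figure before you open the calculator, storage cost is roughly capacity times the per-GiB monthly rate for your chosen tier and region. A minimal sketch (the rates below are made-up placeholders, not real Azure prices):

```python
def monthly_blob_cost_usd(gib_stored: float, price_per_gib: float) -> float:
    # Back-of-the-envelope storage cost: capacity times the per-GiB
    # monthly rate. Use the rate the Azure Pricing Calculator shows for
    # your tier and region; the rates used below are placeholders.
    return round(gib_stored * price_per_gib, 2)

# e.g. 500 GiB at a hypothetical $0.02/GiB-month:
estimate = monthly_blob_cost_usd(500, 0.02)
```

Remember that transactions, egress, and redundancy options add to the bill, so treat this as a floor, not a quote.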

Q. Is there a “Cost Meter” in the Azure interface? Some way of allowing you to keep an eye on mounting costs?

A. This is an area that Microsoft has continued to improve. The Azure Portal has many of its own cost monitoring and estimation tools, but if you need more than the basics, then take a look at Azure Cost Management.

Thoughts and Opinions from Webinar Attendees

On companies utilizing cloud technologies

“I Agree. It’s almost never going to be 100% cloud, except with brand new companies, and even then, a small number. 99.99% Will be Hybrid”

-Mark

“I Think we are ready for the cloud, but a lot of delay is being caused by software vendors. They are not ready for the cloud since their software was developed in the late 90s and the recent updates only contain updated branding and minor code changes. The cloud is entirely new for them and it scares them”

-Jos

On moving existing workloads to the cloud and dealing with old Operating Systems

“There is way too much very old stuff that will be difficult to move to the cloud. It will have to wait until there are resources (read: money) to re-architect the application/platform”

-Mark

“There are a LOT of 2003 boxes running in production still”

-Mark

“There are even still Windows NT Boxes running!”

-Jos

“We have some old NT and 2003 servers due to old technology interfaces, plus the original designers have left and there is no documentation”

-Steve

On the Need for On-Premises Equipment

“On-Premises data will always be required due to local/country laws. Think Switzerland, and think of the new GDPR laws in the EU. Almost every country will have their own local data center, Azure, AWS, Google…etc. It is the way it will go.”

-Mark

On the DevOps Movement in the Industry

“Don’t forget the OPS in DevOPS…. We are also interested in it and it is no longer strictly a Dev thing.”

-Nuno

On Container Technologies

“Still, A container is the App “package”. It still needs to run on something and while it can accelerate the delivery process, there’s still a huge dependency on the infrastructure landscape and IMO it’s really where Ops can shine and their current knowledge can translate into the container world”

-Nuno

On Getting Started with the Cloud

“Very good point about doing your personal systems in the cloud. I agree and am doing it also.”

-Mark

Wrap-Up

As you can see there are plenty of questions when it comes to moving to the cloud, but none of them are insurmountable. Moving to the cloud can be predictable and doable; you just need to do your homework before you make the move.

What are your thoughts? Is the cloud something you’re considering in the 2018 calendar year? Why? Why not? Also, if you have additional questions, or you attended our webinar and don’t see your question above, be sure to let us know in the comments form below!

Thanks for reading!

New to Microsoft 365 in May—empowering and securing users – Microsoft 365 Blog

Each month on the Microsoft 365 Blog, we highlight key updates to Microsoft 365 that build on our vision for the modern workplace. This month, we introduced a number of new capabilities to help individuals produce accessible content, work together in real-time, and create a secure and compliant workplace.

Here’s a look at what we brought to Microsoft 365 in May.

Empowering creative teamwork

Create accessible content in Office 365—We enhanced the Accessibility Checker to streamline the process of creating quality content that is accessible to people with disabilities. Now, the Accessibility Checker identifies an expanded range of issues within a document, including low-contrast text that is difficult to read because the font color is too similar to the background color. The checker also includes a recommended action menu and utilizes AI to make intelligent suggestions for improvements—like suggesting a description for an image—making it easier to fix flagged issues from within your workflow.

GIF showing the Accessibility Checker being run from the Review tab in a Word document with black text on a grey background and an image of a forest. Accessibility Checker inspection results show that the image is missing alternative text and the user clicks the recommended action: Add a description to fix this. This opens the Alt Text pane and the user types the image description in it. The user then clicks the Low contrast text warning in the Accessibility Checker inspection results and clicks the recommended action and changes the page color to white. The inspection results now show no more accessibility issues.

Accessibility Checker alerts you in real-time of issues that make your content difficult for people with disabilities to access.

Work in mixed reality with SharePoint—This month, we unveiled SharePoint spaces—immersive, mixed reality experiences built on SharePoint—which enable you to interact with and explore content in new ways. Now, Microsoft 365 subscribers can work with 3D models, 360-degree videos, panoramic images, organizational charts, visualizations, and any information on your intranet to create immersive mixed reality experiences. SharePoint spaces make it easy to create virtual environments with point-and-click simplicity to help viewers digest information that might be too numerous or too complex to experience in the real world or in a two-dimensional environment.

Create immersive virtual environments in seconds with SharePoint spaces.

Find relevant content faster in SharePoint—The new Find tab in the SharePoint mobile app makes it easier to access the information you need when looking for expertise, content, apps, or resources on the go. The Find tab uses AI to automatically surface sites, files, news, and people relevant to you without having to search—including documents and sites that you were recently working on from across your devices. The Find tab also refines search results as you type, and leverages AI to provide instant answers to questions you ask based on information from across your intranet.

A screenshot of the SharePoint Find tab.

By learning from your existing content and organizational knowledge, AI provides instant answers, transforming search into action.

Run efficient meetings with Microsoft Teams—This month at Build, we demonstrated a range of future capabilities in Microsoft Teams that utilize AI to make meetings smarter and more intuitive over time—including real-time transcription, Cortana voice interactions for Teams-enabled devices, and automatic notetaking. Today, we’re announcing new capabilities for mobile users that make it easier to participate in meetings on the go. Now, you can quickly share your screen with others in the meeting directly from your mobile device, or upload images and video from your library. These improvements make everyone a first-class meeting participant—regardless of location or device.


Extend meeting capabilities with Surface Hub 2—Earlier this month, we introduced Surface Hub 2, a device built from the ground up to be used by teams in any organization. Surface Hub 2 integrates Teams, Microsoft Whiteboard, Office 365, Windows 10, and the intelligent cloud into a seamless collaboration experience, which extends the capabilities of any meeting space and allows users to create—whether in the same room or separated by thousands of miles.

Creating a secure and compliant workplace

Achieve GDPR compliance with the Microsoft Cloud—This month marked a major milestone for individual privacy rights with the General Data Protection Regulation (GDPR) that took effect on May 25, 2018. Over the last few months, we introduced new capabilities across the Microsoft Cloud to help you effectively demonstrate that your organization has taken appropriate steps to protect the privacy rights of individuals. To learn more about these capabilities, read our summary of Microsoft’s investment to support GDPR and the privacy rights of individuals.

Microsoft 365 customer INAIL leverages Azure Information Protection to classify, label, and protect their most sensitive data.

Work securely with external partners in Microsoft 365—We introduced several new capabilities in Azure Active Directory Business-to-Business (B2B) collaboration that make it easier to work safely and securely with people outside of your Microsoft 365 tenant. B2B collaboration allows administrators to share access to internal resources and applications with external partners while maintaining complete control over their own corporate data. Starting this month, first-time external users are welcomed to your tenant with a modernized experience and improved consent flow, making it easier for users to accept the terms of use agreements set by your organization.

We also improved Business-to-Consumer (B2C) collaboration, making it easier to invite external partners who use consumer email accounts like Outlook and Gmail while protecting your organization’s data and improving the process of setting access policies.

A screenshot from Azure Active Directory's Review permissions tab.

Track terms of use agreements in Azure Active Directory B2B by recording when users consent.

Other updates

As companies seek to empower people to do their best work, a cultural transformation isn’t just inevitable—it’s essential. This month, we released a white paper outlining how Microsoft is partnering with customers to foster a modern workplace that is productive, responsive, creative, and secure. To learn more, read the New Culture of Work white paper.

Check out these other updates from across Microsoft 365:

Next Generation Washington 2018 – Microsoft on the Issues

Last January, we published for the first time a blog that outlined the positions Microsoft would be advocating as we walked the halls of our state capitol in Olympia. As we pointed out, public interest groups had called for greater transparency by companies, and we had concluded that they made a good point. People seemed to appreciate last year’s publication, so we’re taking the same step this year.

We appreciate the hard work and personal sacrifices our state legislators make each year. As we move forward in this year’s session, I’ve sketched Microsoft’s priorities in Washington state below. You’ll find our thoughts on several issues, including education and workforce development, climate change, rural economic development, the Cascadia Corridor, the Special Olympics and a few others. No doubt you may agree with some of our positions more than others. But regardless of the substance of the issue, we hope you’ll find this helpful.

Strengthening education and workforce development

The short 2018 legislative session provides an opportunity to build on the accomplishments that our legislators achieved last year in Olympia during the six-month session. Last year they passed a landmark bipartisan budget designed to inject an additional $7.3 billion into schools over the next four years. As we noted last July, this continued a trend that began with the McCleary decision in 2012. That year, the state spent $13.4 billion per biennium (two years) on K-12 education. By the 2019-2021 biennium, the state will spend $26.6 billion on K-12 education. Much of the new funding is based on student need, which helps to close stubborn opportunity gaps for many students in high-poverty schools.

While the state’s Supreme Court has acknowledged the importance of this progress, it has also called on the legislature to accelerate this spending increase. As a result, this is an important priority for this legislative session, and we hope it can be addressed effectively.

At the same time, it’s critical that our legislators take additional steps to address other education and workforce development needs. Technology is changing jobs and people will need to develop new skills to succeed in the future. For the people of our state to be successful, we need to continue to increase high school graduation rates and then provide a path towards a post-secondary credential, whether that’s an industry certification, a college degree or some other credential. The state has set the important goal of helping 70 percent of Washingtonians between the ages of 25 to 44 to achieve a post-secondary degree or credential by 2023. Today, that figure is only 51 percent, with larger deficits among important racial, geographic and economic segments. In short, we have a lot of work to do.

One of our best opportunities is to invest in a strong career-connected learning strategy that will provide young people with learning and training programs that will provide them with the skills and credentials they need to pursue our state’s jobs. Microsoft has been a strong supporter of Gov. Jay Inslee’s goal of connecting 100,000 young people with career-connected learning opportunities. I’ve co-chaired – along with MacDonald-Miller’s Perry England – the governor’s task force to address this issue. We’ve learned from the business, labor, education and policy leaders involved what an important opportunity our state has to lead the nation in better preparing our young people for the full range of jobs across the state. I’m excited about the recommendations we’re finalizing and will present to the governor and public next month. I hope our legislators will support the governor as he continues to lead the state on this issue, and I hope that companies across the business community and organized labor groups will work closely with our educators to make these opportunities real for our young people.

While we undertake this new career-connected learning initiative, it’s also important for the 2018 session to address two areas of narrower but vital unfinished business left over from last year. The first is to provide $3 million in supplemental funding to complete the doubling – to over 600 – of computer science degree capacity at the University of Washington’s Paul G. Allen School of Computer Science and Engineering. We’re exceedingly fortunate as a state to have in the Allen School one of the world’s premier computer science departments located in the middle of a region that is creating so many computer science jobs.

I chaired the effort that completed the fundraising in 2017 to build a second computer science building at the University of Washington. Microsoft was a major contributor, as was the state itself, Amazon, Zillow, Google and so many generous individuals. Now that we’ve raised over $70 million of private money to build this building, we’re hoping the legislature will allocate $3 million so the university can fill it with Washington students.

Finally, we have a key opportunity to continue to help remove financial barriers for lower-income students to pursue college degrees in high demand STEM and healthcare fields by advancing and funding the Washington State Opportunity Scholarship (WSOS).

Microsoft has supported WSOS and I’ve chaired the program’s board since it was founded in 2011. Thanks to the terrific leadership of Naria Santa Lucia, our executive director, and some remarkable partners across the state, the WSOS program is already leading the nation with its innovative work to match private sector contributions with state funding and services for our scholars. More than 3,800 students who grew up in Washington state are attending colleges in the state on these scholarships this year. Because the program continues to grow, just over 1,750 of this total are new scholars added this year. And consider this: 60 percent of this new group are female, 72 percent are among the first generation in their families to attend college and 73 percent are students of color. It helps put our state at the forefront of national efforts to create better opportunities for young people of all backgrounds.

The legislature can take two additional steps this session to help the WSOS achieve even more. The first is to include an additional appropriation in its supplemental budget to match the increasing level of private donations to the program. And the second is to authorize WSOS to provide new support to students looking to pursue industry certification and associate degree programs in STEM-related fields at our state’s 34 community and technical colleges.

Addressing climate change

A second important issue on the state legislature’s agenda this year is one of the broadest issues for the planet: climate change. As a company, Microsoft is focusing on new ways we can use artificial intelligence and other technology to help address this problem, including through our AI for Earth program. This builds on ongoing work to ensure our local campus and our datacenters worldwide use more green energy. This includes setting an internal price on carbon we charge our business units, purchasing renewable energy and establishing extensive commuting and carpooling programs.  As companies across the tech sector help address climate issues, we believe that Washington state has a key role to play as well.

We applaud Gov. Inslee for his longstanding commitment to this issue, as well as the work of several legislators who have emerged as leaders in addressing it. Washington is already one of the lowest carbon emitters per capita, in part because of the important clean energy investments made by Washington businesses and families. But we all need to do more.

We hope the legislature will work with stakeholders across the state to drive reductions in total carbon emissions, while minimizing economic disruptions, creating new job opportunities and addressing the water infrastructure needs that are so vital for the eastern part of our state. In 2017, we saw the value of diverse interests coming together to craft a balanced solution that makes Washington a leader on paid family leave. A similar collaborative approach can help us forge progress in addressing climate change.

Supporting rural Washington

One of the issues we learned a lot more about in 2017 was the importance of expanding opportunities for people in rural communities in the United States. The more we’ve learned, the more passionate we’ve become. We’re now working with state governments across the country, and we hope that our own legislature will take the steps needed to ensure we can do work in our home state that matches the work we’re doing and witnessing elsewhere.

One of the issues that deserves more attention is the broadband gap in the state and across the country. As we see firsthand every day, cloud computing and artificial intelligence are reshaping the economy and creating new opportunities for people who can put this technology to use. But it’s impossible to take advantage of these opportunities without access to broadband. Today there are 23.4 million Americans in rural counties who lack broadband access. Many rural communities, especially east of the Cascades, lack adequate broadband. Whether they are parents helping their children with homework, veterans seeking telemedicine services, farmers looking to explore precision agriculture or small business owners wanting to create jobs, people in these communities are at a disadvantage to those living in cities with high-speed connectivity.

The goal of Microsoft’s Airband initiative is to help close this rural connectivity gap by bringing broadband connectivity to 2 million people in rural America by 2022. Through our direct work with partners, we will launch at least 13 projects in 13 states this year, including Washington. We believe the public sector also has a vital role to play, including the investment of matching funds to support capital equipment projects. Today, 11 states have earmarked funds to extend broadband service to their rural communities. But Washington is not one of them.

We hope the legislature will act this year to join the ranks of other states that are acting to advance rural broadband connectivity. Encouragingly, the legislature this year has taken a first step in this direction by recently adopting a capital budget with $5 million in grants for the Community Economic Revitalization Board to support the expansion of rural broadband. However, the bill stopped short of funding projects that address the homework gap or provide telemedicine capacity. Legislative leaders have stated they will make supplemental adjustments to the biennial budget in the upcoming weeks. We hope the legislature will continue to pursue a more expansive use of rural broadband funds and will reestablish the Rural Broadband Office in the Department of Commerce. This office would then lead state planning to prioritize and sequence the delivery of high-quality broadband access to unserved and underserved communities.

Advancing the Cascadia Corridor

The legislature can also act in 2018 to build on the growing momentum to advance the Cascadia Corridor. The past year saw several important advances in this area by leaders in Washington, British Columbia and Oregon. This included new education and research partnerships, businesses working more closely together and transportation initiatives, all supported by government leaders across Washington state. Gov. Inslee, King County Executive Dow Constantine and University of Washington President Ana Mari Cauce have all played key leadership roles, which we greatly appreciate.

At the same time, urban congestion makes the need to spread economic growth to more areas around the Puget Sound even more compelling. One way to do that is to strengthen transportation ties from Vancouver to Seattle to Portland. The ability to move more quickly would help spur growth in places from Bellingham and Anacortes to Tacoma and Olympia, among others.

While we believe there is an important role for future investments in autonomous vehicles and highway improvements to accommodate them, we also believe there are vital steps the legislature can take in another area – high-speed rail. The construction of a high-speed rail connection between Portland and Vancouver, B.C. would be a game changer. An initial high-level report already supports the concept and Gov. Inslee has proposed a more detailed study of potential ridership, routes and financing. We support funding for this additional feasibility analysis, and in light of the recent Amtrak tragedy, urge lawmakers to examine both economic opportunities and public safety requirements.

Other issues

There are three additional issues that deserve continuing attention in Olympia and across the state this year. These are:

  • Immigration. Over the past year, state Attorney General Bob Ferguson has emerged as a national leader in addressing the urgent needs of people who have come to our state from other countries. We greatly appreciate the role he and his staff have played in helping to protect our employees, who work for us lawfully and in full compliance with federal laws. As we look ahead, we remain concerned not only about steps taken over the past year but also about new steps that could come in the year ahead. These could impact our employees and families, as well as many others across the state. We’re grateful that we live in a state that has an attorney general who is committed to continuing efforts, if needed, to bring these types of issues properly before the courts.
  • Criminal justice system improvements. We hope that officials across the state will continue to build on the steps taken last year in this area. In 2017 the legislature provided $1.2 million in additional funding for the state’s Criminal Justice Training Center (CJTC) to improve situational de-escalation capabilities and build stronger trust between law enforcement and communities. Microsoft is supporting this with a $400,000 investment through 2019 to pilot the Center’s 21st Century Police Leadership program. We’re grateful that CJTC Executive Director Sue Rahr – a nationally recognized expert in policing and a long-time law enforcement leader in our state – is leading this work. We are also working with leaders in our state’s court system to build technology solutions that will help judges improve fairness and just outcomes in legal financial obligations. We look forward to continuing to pursue additional advances in 2018 with a wide range of partners.
  • Net neutrality. Like most tech companies and many consumers, we’re also concerned about the decision by the Federal Communications Commission (FCC) to rescind net neutrality rules. We’ve long supported net neutrality rules at the federal level, and we endorsed the FCC’s adoption of strong net neutrality protection in 2015. Given the federal government’s withdrawal of net neutrality protections, we believe it’s appropriate and helpful for the legislature to adopt at the state level the rules that the FCC rescinded. We hope the legislature will include a provision that will sunset these rules automatically if the FCC re-adopts rules that are the same or substantially similar in the future. This would create a long-term incentive for all stakeholders to move net neutrality in the United States back to the place where it can be governed effectively at the national level.

Ending on a very bright note – the 2018 Special Olympics USA Games are coming to Seattle!

Finally, as we contemplate the challenges of our time, there’s one thing we should all get excited about. The Special Olympics USA Games will take place in Seattle. On July 1, 4,000 athletes and coaches from across the country will arrive to compete in 14 sports. The games will bring together many remarkable athletes and their families, and the competition will be broadcast nationally on ESPN.

It will be one of the largest sporting events ever held in Seattle, with more than 50,000 spectators expected. Microsoft is honored to be the presenting sponsor of the games, I’m thrilled to serve as the honorary chair, and we thank (and even salute!) lawmakers for the $3 million in state support, further demonstrating our state’s commitment to showcasing the power of diversity.

While many of the issues I’ve noted above call for leadership by our legislators and other officials, the Special Olympics provide an opportunity for individual leadership by every one of us. Literally.

The Special Olympics have played a transformative role in the lives of athletes with intellectual disabilities and have become a global movement of acceptance and inclusion. Through sports, health, school and youth engagement, the organization brings people around the world together, with and without intellectual disabilities, to foster tolerance, unity and respect.

I hope you’ll join in to make the USA Games a special moment not only for the athletes and their families, but for all of us who live in Washington state. Please join us at the opening ceremonies on July 1. It will be an event to remember. Or, attend one of the 14 sports events that will take place around Puget Sound. Consider volunteering to help, showing our local hospitality to our visitors while discovering how we can all learn from each other in new ways.

We also believe the USA Games can provide another opportunity as well. As we prepare for the event, one of the themes we’ve adopted is “Seattle as a city of inclusion.” We’re hoping that local employers will join together not only to encourage employee volunteerism, but to learn more about programs like the one that we’ve benefited from at Microsoft that has helped us recruit, hire and develop some sensational employees who also happen to deal with autism every day. As we’ve learned, talent flourishes all around us, but sometimes we need to look around a bit more broadly to appreciate how we can benefit from it – and how we can help other people along the way.

*        *        *

As we look to the months ahead, there’s no doubt that 2018 will bring its share of twists, turns, and even challenges. But when we look at what our legislators can accomplish and what the rest of us can contribute, there is no shortage of opportunities. Let’s make the most of them together!

As always, we welcome your thoughts on our ideas.

Tags: Brad Smith, Cascadia Corridor, education, employment, Environment, legislation, Next Generation Washington

Moving to a new home…

This is the last blog post that I am going to write as Virtual PC Guy. But do not fear, I am starting a new blog over at american-boffin.com, and all the Virtual PC Guy posts are going to remain intact.

You may wonder why I am making this change.

Well, there are several reasons.

  • It’s been a long time. I have written 1,399 blog posts over 14 years – averaging one new post every other working day. When I started this blog, I had more hair and fewer children.
  • The world has changed. When I started writing as Virtual PC Guy, virtualization was a new and unknown technology. Cloud computing was not even invented yet. It is amazing to think about how far we have come!
  • The scope and impact of my work has drastically increased. When I started blogging, there was a very select group of people who cared about virtualization. Now, between cloud computing, the rise of virtualization and containerization as standard development tools – and the progress we have been making on delivering virtualization-based security for all environments – more and more people are affected by my work.
  • I am a manager now. When I started this blog I was a frontline program manager – and most of my time was spent thinking about and designing technology. I have been a program manager lead for almost a decade now – and while I do still spend a lot of time working with technology – I spend more time working with people.
  • Maintaining multiple blogs is hard. I have tried, from time to time, to start up separate blogs for different parts of my life. But maintaining a blog is a lot of work. Maintaining multiple blogs is just too much work for me.
  • Virtual PC Guy has a very distinctive style. Over the years I have toyed with the idea of switching up the style of Virtual PC Guy – but I have never been able to bring myself to do it.

For all these reasons – I have decided that the best thing to do would be to archive Virtual PC Guy (I have posts that are 10 years old and are still helping people out!) and to start a new blog.

On my new blog – I will still talk about technology – but I will also spend time talking about working with customers, working with people in a corporate environment, and about whatever hobbies I happen to be spending my time on.

I hope you come and join me on my new blog!

Cheers,
Ben