Tag Archives: developing

VMware Project Dimension to deliver managed HCI, edge networking

VMware is developing a managed edge appliance that has compute and storage for running applications and a software-defined WAN for connecting to the data center and public clouds.

The upcoming offering is in technical preview under the name Project Dimension. The product is a lightweight hyper-converged infrastructure system that includes the vSphere infrastructure compute stack and the vSAN software-defined storage product.

For networking, Project Dimension uses VMware’s NSX SD-WAN by VeloCloud, which VMware acquired last year. The VeloCloud SD-WAN provides connectivity to the corporate data center, SaaS or applications running on IaaS.

Project Dimension is essentially the branch version of VMware’s Cloud Foundation, which merges compute, storage and network provisioning to simplify application deployment in the data center and the Amazon and Microsoft Azure public clouds. Companies could use Project Dimension to run IoT and other software in retail stores, factories and oil rigs, according to VMware. Actual hardware for the system would come from VMware partners.

Companies already using Cloud Foundation could apply their policies and security to applications running on Project Dimension.

“There’s a lot of potential for operational simplicity. There’s the potential for improved multi-cloud management, and there’s the potential for faster time to market [for users’ applications],” said Stephen Elliot, an analyst at IDC.

But Project Dimension’s hybrid cloud approach — which lets companies run some applications at the edge, while also connecting to software running in the cloud — could eventually make it a “niche product,” said Andrew Froehlich, president of computer consultancy West Gate Networks, based in Loveland, Colo.

“While hybrid architectures are extremely common today, most businesses are looking to get to a 100% public cloud model as soon as they can,” he said. “Thus, it’s an interesting concept — and one that some can use — but I don’t see this making a significant impact long term.”

How Project Dimension works as a managed service

VMware plans to offer Project Dimension as a managed service. A company would order the service by logging into the VMware Cloud and going to its Edge Portal, where the business would choose a Project Dimension resource cluster and a service-level agreement.

Businesses would then upload the IP addresses of the edge locations, where VMware would send technicians to install the Project Dimension system. Each system would appear as a separate cluster in the Edge Portal.

VMware plans to use its cloud-based lifecycle management system to fix failures and handle infrastructure firmware and software updates. As a result, companies could focus on developing and deploying business applications without having to worry about infrastructure maintenance.

VMware, which introduced Project Dimension last week at the VMworld conference in Las Vegas, did not say when it would release the product. Also, the company did not disclose pricing.

Building on experience: a framework for cybersecurity policy

Each year, more and more governments are developing policies to address security challenges presented by an increasingly digitized world. And to support those efforts, I’m excited today to announce the release of Microsoft’s new Cybersecurity Policy Framework, a resource for policymakers that provides an overview of the building blocks of effective cybersecurity policies and that is aligned with the best practices from around the globe. Nations coming online today, and building their cybersecurity infrastructures, should not—and need not—be burdened with the stumbling blocks that characterized previous generations of cybersecurity policies. Instead, such nations should be empowered to leapfrog outdated challenges and unnecessary hurdles.

For years, Microsoft has worked with policymakers in advanced and emerging economies, and across many social and political contexts, to support the development of policies to address a wide range of cybersecurity challenges. This new publication captures and distills the important lessons learned from those years of experience partnering with governments. And as increasing numbers of countries wrestle with how to best address cybersecurity challenges, the Cybersecurity Policy Framework is an indispensable resource for the policymakers joining this work.

According to the latest analysis provided by the United Nations, half of the countries on earth today either have or are developing national cybersecurity strategies. I have little doubt that in the next decade every remaining country will add its name to that list. And this trend highlights the importance of this new resource. The policies established today will impact how technologies are used for years to come and how safe or dangerous the online world becomes for all of us. Truly, there is no going back, only forward.

The Cybersecurity Policy Framework is not one-stop shopping for cybersecurity policymakers, but it does serve as an important “umbrella document,” providing a high-level overview of concepts and priorities that must be top of mind when developing an effective and resilient cybersecurity policy environment.

Specifically, this new resource outlines:

  • National strategies for cybersecurity.
  • How to establish a national cyber agency.
  • How to develop and update cybercrime laws.
  • How to develop and update critical infrastructure protections.
  • International strategies for cybersecurity.

We at Microsoft have been at this work for a long time and have developed a wide variety of resources to help those who are working to position their industries and nations to capitalize on the benefits of new technologies—so many that they can often be difficult to find! And this highlights another strength of the Cybersecurity Policy Framework: while it is not one-stop shopping, each section does provide an overview of a critical policy topic, as well as links to the associated, more in-depth resources my team has developed over the years to assist policymakers. In this way, this new resource serves not only as essential, high-level guidance, but also as a key to a broader catalogue of resources built on years of experience partnering with governments around the world.

Reading through this new resource, I am proud of the work we have done in pursuit of a safer online world. Important progress has been made, and these foundational principles underscore much of today’s cybersecurity discourse. However, we have—and will always have—more work to do, given the changes and innovations in technology always on the horizon and their implications for cybersecurity. I’m glad to put this resource forward today to support a new generation of policymakers, and I look forward to partnering with them to tackle the new challenges we will face together tomorrow.

Download your copy of the Cybersecurity Policy Framework today.

Free course targets candidates for network engineering jobs

Get prepared for a high-paying IT job. Deliver clean drinking water to developing countries.

That’s the pitch from newly opened IT training company NexGenT, which is offering a course that prepares budding tech workers for a networking certification exam for as little as a $5 donation.

The San Jose, Calif., startup partnered with Charity: Water, a nonprofit that funds clean-water initiatives worldwide, hoping to rake in donations — and new business, said NexGenT’s David Torres, who goes by the title of growth marketer.

The monthlong course, which has a list price of $997, gets help desk technicians, network admins or other IT apprentices ready for CompTIA’s Network+ certification, a useful, but not mandatory, credential for getting network engineering jobs.

“It not only prepares you and helps you get a job, but it also gives you a very strong general foundation of networking,” Torres said. Whether the recently launched two-in-one freebie — jumpstart an IT career and help provide safe drinking water to people in need — will prove irresistible is an open question.

In any case, people who take NexGenT up on its course-for-donation offer can enroll, schedule a Network+ examination date with CompTIA, pass it and, the industry organization holds, confidently put themselves out on the market. But they will also get pitched NexGenT’s flagship offering, Zero to Engineer, an intensive, $12,500 online program designed to get people ready for an IT career in three to six months, no matter what their level of competency.

Network engineer boot camp
Students take part in a five-day network engineering boot camp at NexGenT’s San Jose, Calif., campus in April.

One word: Networks

Network engineering — planning and designing the computer networks that support the flow of data and communication and, essentially, enable modern organizations to function — is NexGenT’s educational focus at a time when the astronomic popularity of coding boot camps has shown signs of leveling off amid shifting employer demand. The network, Torres said, is where a lot of tech’s future lies — with the expected millions upon millions of devices that will hook into the internet of things in the future, “we need more people that know how to manage all this information.”

According to a CompTIA study on IT skills, released in May, companies are having the most trouble attracting and retaining emerging tech skills such as artificial intelligence and automation, with 59% of companies seeing a moderate or significant skills gap. In software development, it’s 55%. The shortage is less acute for network engineering and systems administrator jobs, with 44% of companies having trouble pinning down the right skills — but a skills gap is a skills gap.

And the U.S. Bureau of Labor Statistics reports there will be a 6% increase in the number of network and systems admin jobs between 2016 and 2026, close to the 7% average projected growth of all jobs together.

NexGenT aims to not just widen the pool of people poised for network engineering jobs but to also bolster the skills themselves by turning out “full-stack network engineers,” which it describes as IT pros with a mastery of core routing and switching but also cybersecurity, cloud, automation, virtualization and voice over IP.

Network engineer boot camp
NexGenT co-founder Jacob Hess (right) speaks with Edgar Montes, a student in a five-day network engineering boot camp at the training company’s campus in San Jose, Calif.

Gunning for $100K

The company’s goal is to get people to “break the six-digit salary figure as soon as possible,” Torres said. That can be done in four years, he said — and with network engineers bringing in an average starting salary of $70,147, according to salary and benefits website PayScale, that seems doable.

NexGenT also wants its students to land network engineering jobs without racking up a huge amount of school debt. The flagship course costs $12,500 for its online-only modules that include the ins and outs of IT architecture and networking, protocols and technologies, a keystone networking project and a community of mentors. For $15,500, students can add a five-day boot camp at the San Jose campus, where they will set up, configure and secure networking equipment.

The cost isn’t peanuts, but it’s low compared with the tens of thousands of dollars in loans students are leaving college with today. The company offers discounts and payment plans to help with tuition, Torres said.

Military, NFL pedigree

NexGenT was founded by Terry Kim and Jacob Hess, who were IT instructors in the U.S. Air Force, training people “who in most cases didn’t have any IT experience, to be ready to work on tactical networks in just a few months,” Hess said in a statement.

The company graduated from startup accelerator AngelPad and got seed funding from Liquid 2 Ventures, an investment fund run by former NFL quarterback Joe Montana.

The Zero to Engineer program has 143 students. The company hasn’t posted a job placement rate yet, since students are still in the program, which started in February. There are success stories on its website, though, Torres said — they belong to graduates of a beta-version course offered last year. And there’s Kevin Lee, a project manager who had “absolutely zero experience” in IT and is now a network engineer at Samsung.

The Network+ course, meanwhile, has drawn 110 people since the offer launched earlier this month, and NexGenT will be ramping up its online and social media ad campaigns in coming weeks.

Many contours, many questions in developing a cloud strategy

Companies today are moving data and applications to the cloud, but many are doing so without developing a cloud strategy that lays out business outcomes and establishes governance and control, said Gartner analyst Mindy Cancila. She spoke at the Gartner Catalyst conference in San Diego earlier this month.

In a recent Gartner survey, 40% of organizations responded that cloud was their top investment priority, Cancila said. The same survey found that 42% didn’t have the skills needed to put a cloud plan into practice.

“So here we have a technology that everyone’s trying to use — and they’re obviously already using it — and yet they know they don’t have the skills to be able to implement cloud in the way that they want,” she said.

Analysts and speakers at the four-day event pushed forming, implementing and advancing a cloud strategy that encompasses the right skills, the right technologies and the right mindset for cloud. Meanwhile, the IT architects, systems engineers and software developers in attendance were answering questions and amassing knowledge to piece together cloud puzzles specific to their organizations’ needs and abilities.

Gartner analyst Mindy Cancila discusses developing a cloud strategy at the Gartner Catalyst conference in San Diego on Aug. 21.

Builder of a cloud future

Jayapal Boompally, principal software engineer at biotech Amgen Inc., in Thousand Oaks, Calif., was interested in a role intended to burnish an organization’s cloud acumen: cloud architect. That’s an IT leader who puts the organization on a “cloud-first” strategy — that is, cloud computing for all new initiatives unless there’s a good reason not to go to cloud.

Gartner analyst Kyle Hilgendorf gave a talk on the role, saying the prime responsibility of a cloud architect is “getting organizational buy-in, encouraging people to trial and play with cloud services, giving them the opportunity and the freedom, letting them make some mistakes, giving them the growth mindset so that, over time, you build up an army of individuals that are behind this movement.”

That’s a broad portfolio, especially for a role that is fairly new. But the cloud architect position has gained in popularity over the last few years — there were 777 openings as of July, Hilgendorf said. And the position does seem to make a difference in developing a cloud strategy that organizations deem effective.

Hilgendorf cited a Gartner survey that found 39% of organizations already have a cloud architect, and 60% of those that do feel prepared to take full advantage of cloud services. Conversely, only 33% of organizations without one feel prepared.

Amgen has a corps of traditional enterprise architects who work with architects from Amazon Web Services (AWS), Amazon’s cloud division, to sift through regulations outlined by the Health Insurance Portability and Accountability Act and determine what can move to the AWS cloud, what can’t and why it can’t, Boompally said.

The Amgen enterprise architects do some of the work cloud architects do, such as developing cloud architecture, he added. None has the title of cloud architect, though, or the added leadership responsibilities of developing a cloud strategy for the entire company. But Boompally does see a day when someone is appointed, because the person in that role can empower others to make proper decisions concerning cloud.

“It will come from inside, probably from the enterprise architects,” he said. “They know pretty much all the systems, so they can easily tell what type of tools and strategy are a better fit for our company.”

Gartner analyst Kyle Hilgendorf describes the cloud architect role at Gartner Catalyst on Aug. 21.

A private matter

Amid the attention given to companies big and small moving to the public cloud — especially to megavendors like AWS, Microsoft Azure and Google Cloud Platform — Gartner analyst Alan Waite led a talk on private cloud, disabusing attendees of the notion that private was going away.

Waite cited a Gartner survey covering private cloud implementations. By 2020, 40% of companies that planned to move applications to the cloud intended to use public cloud infrastructure, and a full 60% intended to go private — either putting apps on internal private clouds built in their data centers or moving them to exclusive servers run by an outside provider.

“There are very good reasons for workloads to be private,” Waite said. “There are security reasons, regulatory reasons, performance reasons. Maybe they need to be close to other things, for latency requirements.”

Gartner analyst Alan Waite discusses when organizations should consider private cloud at Gartner Catalyst on Aug. 22.

Jason Place, a systems engineer at a U.S. utility he couldn’t identify because of company policy, was happy to hear that organizations are still interested in private cloud. His company is developing a cloud strategy that includes building its own internal cloud because of regulatory and security needs it has to meet.

“Obviously, there’s lots of talk about public cloud and moving to public cloud. Is that the general consensus? Sitting through something like this points you in the other direction, saying, ‘No, you’re not alone; there are plenty of workloads staying exactly where they are, and there’s still plenty of investment going on in private infrastructure,'” Place said. “So reaffirming the plans that we already started to lay.”

Announcing .NET Core 2.0

.NET Core 2.0 is available today as a final release. You can start developing with it at the command line, in your favorite text editor, in Visual Studio 2017 15.3, Visual Studio Code or Visual Studio for Mac. It is ready for production workloads, on your own hardware or your favorite cloud, like Microsoft Azure.

We are also releasing ASP.NET Core 2.0 and Entity Framework Core 2.0. Read the ASP.NET Core 2.0 and the Entity Framework Core 2.0 announcements for details. You can also watch the launch video on Channel 9 to see many of the new features in action.

The .NET Standard 2.0 spec is complete, finalized at the same time as .NET Core 2.0. .NET Standard is a key effort to improve code sharing and to make the APIs available in each .NET implementation more consistent. .NET Standard 2.0 more than doubles the set of APIs available for your projects.

.NET Core 2.0 has been deployed to Azure Web Apps. It is available today in a small number of regions and will expand globally quickly.

.NET Core 2.0 includes major improvements that make .NET Core easier to use and much more capable as a platform. The following improvements are the biggest ones; others are described in the body of this post. Please share feedback and any issues you encounter at dotnet/core #812.



Visual Studio

  • Live Unit Testing supports .NET Core
  • Code navigation improvements
  • C# Azure Functions support in the box
  • CI/CD support for containers

For Visual Studio users: You need to update to the latest versions of Visual Studio to use .NET Core 2.0. You will need to install the .NET Core 2.0 SDK separately for this update.


On behalf of the entire team, I want to express our gratitude for all the direct contributions that we received for .NET Core 2.0. Thanks! Some of the most prolific contributors for .NET Core 2.0 are from companies investing in .NET Core, other than Microsoft. Thanks to Samsung and Qualcomm for your contributions to .NET Core.

The .NET Core team shipped two .NET Core 2.0 previews (preview 1 and preview 2) leading up to today’s release. Thanks to everyone who tried out those releases and gave us feedback.

Using .NET Core 2.0

You can get started with .NET Core 2.0 in just a few minutes, on Windows, macOS or Linux.

You first need to install the .NET Core SDK 2.0.

You can create .NET Core 2.0 apps on the command line or in Visual Studio.

Creating new projects is easy. There are templates you can use in Visual Studio 2017. You can also create a new application at the command line with dotnet new, as you can see in the following example.

C:\samples>dotnet new console -o console-app
C:\samples>cd console-app
C:\samples\console-app>dotnet run
Hello World!

You can also upgrade an existing application to .NET Core 2.0. In Visual Studio, you can change the target framework of an application to .NET Core 2.0.

If you are working with Visual Studio Code or another text editor, you will need to update the target framework to netcoreapp2.0.
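For instance, the project file of a console app retargeted to .NET Core 2.0 would look something like the following minimal sketch (element values other than TargetFramework are illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Changing this value (e.g., from netcoreapp1.1) retargets the app to .NET Core 2.0 -->
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

</Project>
```

After editing the file, a dotnet restore (implicit in 2.0) and dotnet build will rebuild against the new target.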


It is not as critical to update libraries to .NET Standard 2.0. In general, libraries should target the lowest version of .NET Standard they can tolerate (for maximum .NET implementation applicability) unless they require APIs only in .NET Core. If you do want to update libraries, you can do it the same way, either in Visual Studio or directly in the project file, as you can see with the following project file segment that targets .NET Standard 2.0.
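A library project file targeting .NET Standard 2.0 is just as small (a minimal sketch):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- netstandard2.0 makes the library usable from .NET Core, .NET Framework 4.6.1+, and other implementations -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>
```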


You can read more in-depth instructions in the Migrating from ASP.NET Core 1.x to ASP.NET Core 2.0 document.

Relationship to .NET Core 1.0 and 1.1 Apps

You can install .NET Core 2.0 on machines with .NET Core 1.0 and 1.1. Your 1.0 and 1.1 applications will continue to use the 1.0 and 1.1 runtimes, respectively. They will not roll forward to the 2.0 runtime unless you explicitly update your apps to do so.

By default, the latest SDK is always used. After installing the .NET Core 2.0 SDK, you will use it for all projects, including 1.0 and 1.1 projects. As stated above, 1.0 and 1.1 projects will still use the 1.0 and 1.1 runtimes, respectively.

You can configure a directory (all the way up to a whole drive) to use a specific SDK by creating a global.json file that specifies that .NET Core SDK version. All dotnet commands invoked in or under the directory containing that file will use that version of the SDK. If you do that, make sure you have that version installed.
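As a sketch, a global.json pinning the SDK looks like this (the version value is illustrative; use a version you actually have installed):

```json
{
  "sdk": {
    "version": "2.0.0"
  }
}
```

Placing this file at the root of a repository is a common way to keep a whole codebase on one SDK version while newer SDKs are installed side by side.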

.NET Core Runtime Improvements

The .NET Core 2.0 Runtime has the following improvements.

Performance Improvements

There are many performance improvements in .NET Core 2.0. The team published a few posts describing the improvements to the .NET Core Runtime in detail.

.NET Core 2.0 Implements .NET Standard 2.0

The .NET Standard 2.0 spec has been finalized at the same time as .NET Core 2.0.

We have more than doubled the set of available APIs in .NET Standard from 13k in .NET Standard 1.6 to 32k in .NET Standard 2.0. Most of the added APIs are .NET Framework APIs. These additions make it much easier to port existing code to .NET Standard, and, by extension, to any .NET implementation of .NET Standard, such as .NET Core 2.0 and the upcoming version of Universal Windows Platform (UWP).

.NET Core 2.0 implements the .NET Standard 2.0 spec: all 32k APIs that the spec defines.

You can see a diff between .NET Core 2.0 and .NET Standard 2.0 to understand the set of APIs that .NET Core 2.0 provides beyond the set required by the .NET Standard 2.0 spec.

Much easier to target Linux as a single operating system

.NET Core 2.0 treats Linux as a single operating system. There is now a single Linux build (per chip architecture) that works on all Linux distros that we’ve tested. Our support so far is specific to glibc-based distros and more specifically Debian- and Red Hat-based Linux distros.

There are other Linux distros that we would like to support, like those that use the musl C standard library, such as Alpine. Alpine will be supported in a later release.

Please tell us if the .NET Core 2.0 Linux build doesn’t work well on your favorite Linux distro.

Similar improvements have been made for Windows and macOS. You can now publish for the following “runtimes”.

  • linux-x64, linux-arm
  • win-x64, win-x86
  • osx-x64

Linux and Windows ARM32 builds now available, in Preview

Globalization Invariant Mode

.NET Core 2.0 includes a new opt-in globalization invariant mode for apps that do not need culture-specific formatting, casing, sorting or other globalization behavior. Running in this mode removes the dependency on system globalization data and can make deployments smaller.

.NET Core SDK Improvements

The .NET Core SDK 2.0 has the following improvements.

dotnet restore is implicit for commands that require it

The dotnet restore command has been a required set of keystrokes with .NET Core to date. The command installs required project dependencies and performs some other tasks. It’s easy to forget to type it, and the error messages telling you that you need to type it are not always helpful. It is now implicitly called on your behalf for commands like run, build and publish.

The following example workflow demonstrates the absence of a required dotnet restore command:

C:\Users\rich>dotnet new mvc -o mvcapp
The template "ASP.NET Core Web App (Model-View-Controller)" was created successfully.
This template contains technologies from parties other than Microsoft, see https://aka.ms/template-3pn for details.

Processing post-creation actions...
Running 'dotnet restore' on mvcapp\mvcapp.csproj...
  Restoring packages for C:\Users\rich\mvcapp\mvcapp.csproj...
  Restore completed in 32.3 ms for C:\Users\rich\mvcapp\mvcapp.csproj.
  Generating MSBuild file C:\Users\rich\mvcapp\obj\mvcapp.csproj.nuget.g.props.
  Generating MSBuild file C:\Users\rich\mvcapp\obj\mvcapp.csproj.nuget.g.targets.
  Restore completed in 2.26 sec for C:\Users\rich\mvcapp\mvcapp.csproj.
Restore succeeded.

C:\Users\rich>cd mvcapp

C:\Users\rich\mvcapp>dotnet run
Hosting environment: Production
Content root path: C:\Users\rich\mvcapp
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Application is shutting down...

Reference .NET Framework libraries from .NET Standard

You can now reference .NET Framework libraries from .NET Standard libraries using Visual Studio 2017 15.3. This feature helps you migrate .NET Framework code to .NET Standard or .NET Core over time (start with binaries and then move to source). It is also useful when the source code for a .NET Framework library is no longer accessible or has been lost, enabling the library to still be used in new scenarios.

We expect that this feature will be used most commonly from .NET Standard libraries. It also works for .NET Core apps and libraries. They can depend on .NET Framework libraries, too.

The supported scenario is referencing a .NET Framework library that happens to only use types within the .NET Standard API set. Also, it is only supported for libraries that target .NET Framework 4.6.1 or earlier (even .NET Framework 1.0 is fine). If the .NET Framework library you reference relies on WPF, the library will not work (or at least not in all cases). You can use libraries that depend on additional APIs, but not for the code paths you use. In that case, you will need to invest significantly in testing.
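In project-file terms, such a reference can be sketched like this (LegacyHelpers and its path are hypothetical names for a .NET Framework 4.6.1 assembly):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- Hypothetical .NET Framework library referenced directly by assembly path -->
    <Reference Include="LegacyHelpers">
      <HintPath>..\libs\LegacyHelpers.dll</HintPath>
    </Reference>
  </ItemGroup>

</Project>
```

The build succeeds as long as the referenced assembly stays within the .NET Standard API set on the code paths you exercise.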

You can see this feature in use in the following images.

The call stack for this app makes the dependency from .NET Core to .NET Standard to .NET Framework more obvious.

.NET Standard NuGet Packages no longer have required dependencies

.NET Standard NuGet packages no longer have any required dependencies if they target .NET Standard 2.0 or later. The .NET Standard dependency is now provided by the .NET Core SDK. It isn’t necessary as a NuGet artifact.

The following is an example nuspec (recipe for a NuGet package) targeting .NET Standard 2.0.

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
    <metadata>
        <description>Package Description</description>
        <dependencies>
            <group targetFramework=".NETStandard2.0" />
        </dependencies>
    </metadata>
</package>

The following is an example nuspec (recipe for a NuGet package) targeting .NET Standard 1.4.

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
    <metadata>
        <description>Package Description</description>
        <dependencies>
            <group targetFramework=".NETStandard1.4">
                <dependency id="NETStandard.Library" version="1.6.1" exclude="Build,Analyzers" />
            </group>
        </dependencies>
    </metadata>
</package>

Visual Studio 2017 version 15.3 updates

Side-by-Side SDKs

Visual Studio now recognizes the installation of an updated .NET Core SDK and lights up the corresponding tooling. With 15.3, Visual Studio provides side-by-side support for .NET Core SDKs and defaults to using the highest version installed on the machine when creating new projects, while giving you the flexibility to specify and use older versions if needed via a global.json file. Thus, a single version of Visual Studio can now build projects that target different versions of .NET Core.

Support for Visual Basic

In addition to supporting C# and F#, 15.3 now also supports using Visual Basic to develop .NET Core apps. Our aim with Visual Basic this release was to enable .NET Standard 2.0 class libraries. This means Visual Basic only offers templates for class libraries and console apps at this time, while C# and F# also include templates for ASP.NET Core 2.0 apps. Keep an eye on this blog for updates.

Live Unit Testing Support

IDE Productivity enhancements

Visual Studio 2017 15.3 has several productivity enhancements to help you write better code faster.

We’ve added a handful of new refactorings including:

  • Resolve merge conflict
  • Add parameter (from callsite)
  • Generate overrides
  • Add named argument
  • Add null-check for parameters
  • Insert digit-separators into literals
  • Change base for numeric literals (e.g., hex to binary)
  • Convert if-to-switch
  • Remove unused variable

Project System simplifications

We further simplified the .csproj project file by removing some unnecessary elements that were confusing to users and wherever possible we now derive them implicitly. Simplification trickles down to Solution Explorer view as well. Nodes in Solution Explorer are now neatly organized into categories within the Dependencies node, like NuGet, project-to-project references, SDK, etc.

Another enhancement made to the .NET Core project system is that it is now more efficient when it comes to builds. If nothing changed and the project appears to be up to date since the last build, then it won’t waste build cycles.


Support and Lifecycle

.NET Core 2.0 is a new release and is not a long-term support (LTS) release.

.NET Core 1.1 has transitioned to LTS Support, adopting the same LTS timeframe as .NET Core 1.0.

.NET Core 1.0 and 1.1 will both go out of support on June 27, 2019 or 12 months after the .NET Core 2.0 LTS release, whichever is shorter.

We recommend that all 1.0 customers move to 1.1, if not to 2.0. .NET Core 1.1 has important usability fixes in it that make for a significantly better development experience than 1.0.

Red Hat

Red Hat also provides full support for .NET Core on RHEL and will be providing a distribution of .NET Core 2.0 very soon. We’re excited to see our partners like Red Hat follow our release so quickly.

Closing

We’re very excited about this significant milestone for .NET Core. Not only is the 2.0 release our fastest version of .NET ever, but .NET Standard 2.0 also delivers on the promise of .NET everywhere. In conjunction with the Visual Studio family, .NET Core provides the most productive development platform for developers using macOS or Linux as well as Windows. We encourage you to download the latest .NET Core SDK from https://dot.net/core and start working with this new version of .NET Core.

Please share feedback and any issues you encounter at dotnet/core #812.

Watch the launch video for .NET Core 2.0 to see this new release in action.

Humanitarian company uses Dynamics 365 for Talent for quicker deployment

Whether it’s responding to a natural disaster or helping a developing country improve its education system or water quality, international development company Chemonics needs to build out specialized business processes on the fly. That’s how it keeps more than 60 humanitarian projects around the world moving, despite each one having its own technological needs that are dependent on size, scope and location.

Roughly three years ago, the Washington, D.C.-based company began looking at business applications that could simplify the HR process of finding and hiring the talent needed for its distinct projects, ultimately settling on Microsoft's Dynamics 365 last October. But Chemonics was still longing for more HR capabilities, like onboarding and contract management, and it was looking at third-party tools to help fill the holes when Microsoft told it about a new feature coming down the line: Dynamics 365 for Talent.

The new Dynamics 365 feature, which was made generally available on Aug. 1, helps streamline routine tasks and automates staffing processes.

“Essentially, we build a brand-new company of anywhere from 15 to 20 people, to 400 to 500 people,” said Eric Reading, executive vice president at Chemonics. “Our business process and the way we organize ourselves needs to be very flexible and oriented around the rapidly changing nature of the geographic and organizational layouts of our company.”

‘We can work in real time’

Founded in 1975, Chemonics has done humanitarian work all over the globe, including current projects in Afghanistan helping with sustainable agriculture and literacy, policy reform in Jordan and health services in Angola, as well as dozens more. The process calls for a local office to be set up in the corresponding region, with recruiting and hiring of talent both worldwide and local to that region.

“We have roughly 4,500 staff around the world, with the smallest office being a half dozen staff and the largest around 400 people,” Reading said. “It’s a pretty dramatic range of scale we have to work in. A lot of those systems and processes we used were designed during a time when we used telex machines. Things were manual or with little automation due to the geographic separation.”

The growth of cloud hosting allowed Chemonics to think more modernly about its technology, as internet infrastructure can be spotty in some of the developing nations in which it works.

“It took us to a place where it was possible to have our whole global organization operating on a single framework for IT and business process,” Reading said. “We can work in real time and collaborate.”

Chemonics researched roughly a dozen different software providers, ultimately narrowing the list to four, then to two — Oracle and Dynamics 365 — before settling on Dynamics for its UI consistency, simplicity and licensing structure.

"The consistency of experience across different parts of the interface was valuable," Reading said. "There are a lot of elements of business that have to be done a certain way because we're a government contractor and work on programs that need to comply with a lot of different legal requirements. It allowed us to do more at a deeper level without having to completely customize everything."

And while Chemonics’ first iteration of Dynamics helped with collaboration and consistency among its global projects, it still left some features to be desired in the HR department.

“At the time, there was an incompleteness of the HR offering, and it didn’t satisfy our needs in that area,” Reading said. “We were evaluating options on what do we append in to get that resource functionality. We talked with Microsoft about it, and they asked us to give them a little bit of time to see what was coming down the road.”

Reading said Chemonics was one of the first Microsoft customers to set up Dynamics 365 for Talent for a project in the Dominican Republic.

“After [implementing Dynamics 365 for Talent], we stood up the Dominican Republic office in a 21-day period,” Reading said, adding that the typical goal is 60 days.

A screenshot showing the different steps of the onboarding process at Chemonics through Dynamics 365 for Talent. The new feature was generally released on Aug. 1.

Integrating with LinkedIn

Dynamics 365 for Talent was one of two major upgrades that Microsoft brought to its business applications earlier this year; the other brought together LinkedIn Sales Navigator and Dynamics 365 for Sales, which allows Dynamics customers to mine LinkedIn's 500 million members for additional sales leads.

Integrating LinkedIn’s vast amount of professional data into Dynamics also helps with the hiring process that Chemonics needed.

“The new offerings focus on the hiring process, the employee onboarding process and the underlying core needs of HR,” said Mike Ehrenberg, chief strategist for Microsoft. “We’ve had these abilities before, but it’s much more modern and richer now.”

Reading said Chemonics uses LinkedIn as one of the first places to find specialized and specific talent.

“We may need to find an expert in methodology of literacy that can work in a particular language,” Reading said. “Finding that specialized skill set and being able to link it from LinkedIn to the Talent offering is exciting.”

Prior to Dynamics 365 for Talent, the hiring process for Chemonics’ different projects was manual — and the results varied.

“We often had lots of one-page Word documents that may or may not get reused,” Reading said. “We’d have checklists and other manual management work that had a fair level of inconsistency with it.”

Licensing easy to work with

The final aspect that drew Chemonics toward Dynamics 365 was the malleable licensing Microsoft offered, with both an overarching license for management and administrators and a team member license for employees with a simpler routine.

“Our organization doesn’t break down neatly among traditional roles,” Reading said. “The licensing made it easier to manage the process and much more competitive on a pricing standpoint.”

The full use of Dynamics 365 costs $210 per user, per month, with team member licenses costing $8 per user, per month to execute basic processes and share knowledge. There's also an operations activity license for $50 per user, per month and an operations devices license for $75 per user, per month. Microsoft also offers other cheaper, stripped-down licenses of Dynamics 365, some of which don't include Dynamics 365 for Talent.

Performance Tuning Windows Server 2016

While we were developing Windows Server 2016, we had a team dedicated to testing the many aspects of Windows performance. We rely heavily on this team to make sure that we are continuously improving our performance in many ways.

Well, they have recently created an amazing set of documentation on performance tuning Windows Server 2016.

You can read it all here:


Note that while there is lots of good Hyper-V information in this documentation, there is also information about many other Windows Server roles as well.

