Tag Archives: Evolution

Why move to PowerShell 7 from Windows PowerShell?

PowerShell’s evolution has taken it from a Windows-only tool to a cross-platform, open source project that runs on Mac and Linux systems with the release of PowerShell Core. Next on tap, Microsoft is unifying PowerShell Core and Windows PowerShell with the long-term supported release called PowerShell 7, due out sometime in February. What are the advantages and disadvantages of adopting the next generation of PowerShell in your environment?

New features spring from .NET Core

Rebuilt almost from the ground up, PowerShell Core is a departure from Windows PowerShell. It brings many new features, architectural changes and improvements that push the language even further forward.

Open source PowerShell runs on a foundation of .NET Core 2.x in PowerShell 6.x and .NET Core 3.1 in PowerShell 7. The .NET Core framework is also cross-platform, which enables PowerShell Core to run on most operating systems. The shift to the .NET Core framework brings several important changes, including:

  • increases in execution speed;
  • Windows desktop application support using Windows Presentation Foundation and Windows Forms;
  • TLS 1.3 support and other cryptographic enhancements; and
  • API improvements.
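
A quick way to confirm which engine and runtime a given session is built on is to inspect the version table and the framework description that .NET exposes. This is a minimal check using standard properties; the exact strings returned vary by version, and FrameworkDescription requires a reasonably recent .NET Framework when run under Windows PowerShell.

    # Show the PowerShell version and edition ('Desktop' = Windows PowerShell, 'Core' = PowerShell Core/7)
    $PSVersionTable.PSVersion
    $PSVersionTable.PSEdition

    # Reports the underlying runtime, e.g. '.NET Framework 4.8...' or '.NET Core 3.1.x'
    [System.Runtime.InteropServices.RuntimeInformation]::FrameworkDescription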

PowerShell Core delivers performance improvements

As noted among the .NET Core changes, execution speed is much improved. Each new release of PowerShell Core brings further improvements to how both core language features and built-in cmdlets perform.

PowerShell Core speed improvement
A test of the Group-Object cmdlet shows less time is needed to execute the task as you move from Windows PowerShell to the newer PowerShell Core versions.

With a simple Group-Object test, you can see how much quicker each successive release of PowerShell Core has become. With a nearly 73% speed improvement from Windows PowerShell 5.1 to PowerShell Core 6.1, complex code becomes easier to run and completes faster.

Sort-Object test results
Another speed test with the Sort-Object cmdlet shows a similar improvement with each successive release of PowerShell.

As with the Group-Object test, the Sort-Object results show execution speed nearly doubling between Windows PowerShell 5.1 and PowerShell Core 6.1. With sorting used so often in so many scripts, running PowerShell Core for your daily workload means getting that much more done in far less time.
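
You can run this kind of comparison yourself with Measure-Command. The sketch below is illustrative only: the sample data set and its size are arbitrary, absolute timings depend on your hardware, and the point is simply to run the same lines in powershell.exe and in pwsh.exe and compare the elapsed times.

    # Build an arbitrary sample data set, then time Group-Object and Sort-Object against it.
    # Run the same lines in Windows PowerShell (powershell.exe) and PowerShell Core/7 (pwsh.exe).
    $data = 1..100000 | ForEach-Object { Get-Random -Maximum 1000 }

    (Measure-Command { $data | Group-Object }).TotalMilliseconds
    (Measure-Command { $data | Sort-Object }).TotalMilliseconds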

Gaps in cmdlet compatibility addressed

The PowerShell team began shipping the Windows Compatibility Pack for .NET Core with PowerShell Core 6.1. With this added functionality, the biggest reason for holding back on wider PowerShell Core adoption is no longer valid. The ability to run many cmdlets that were previously available only in Windows PowerShell means that most scripts and functions can now run seamlessly in either environment.

PowerShell 7 will close the gap further by incorporating the functionality of the current WindowsCompatibility module directly into the core engine.
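
As a rough illustration of what that looks like in practice: in PowerShell Core 6.x you can pull the WindowsCompatibility module from the PowerShell Gallery, while PowerShell 7 builds similar behavior into Import-Module itself. ServerManager is used here only as an example of a module that normally ships with Windows PowerShell alone.

    # PowerShell Core 6.x: install and use the WindowsCompatibility module
    Install-Module WindowsCompatibility -Scope CurrentUser
    Import-WinModule ServerManager        # proxies the Windows PowerShell-only module

    # PowerShell 7: compatibility is built in; the module runs in a background
    # Windows PowerShell session and its cmdlets are proxied into pwsh
    Import-Module ServerManager -UseWindowsPowerShell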

New features arrive in PowerShell 7

There are almost too many new features to list in PowerShell 7, but some of the highlights include:

  • SSH-based PowerShell remoting;
  • an & at the end of a pipeline automatically creates a background PowerShell job (see the short example after this list);
  • many improvements to web cmdlets such as link header pagination, SSLProtocol support, multipart support and new authentication methods;
  • PowerShell Core can use paths more than 260 characters long;
  • markdown cmdlets;
  • experimental feature flags;
  • SecureString support for non-Windows systems; and
  • many quality-of-life improvements to existing cmdlets with new features and fixes.
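
For example, here is a minimal sketch of the background operator and the experimental feature cmdlets; the pipeline being pushed to the background is arbitrary and only shows the pattern.

    # Appending & runs the preceding pipeline as a background job
    Get-Process | Sort-Object CPU -Descending &

    Get-Job                      # the pipeline above now appears as a job
    Get-Job | Receive-Job -Wait  # collect its output once it finishes

    # Experimental feature flags (PowerShell 6.2 and later)
    Get-ExperimentalFeature
    # Enable-ExperimentalFeature -Name <FeatureName>   # takes effect after restarting pwsh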

Side-by-side installation reduces risk

A great feature of PowerShell Core, and one that makes adopting the new shell that much easier, is the ability to install the application side-by-side with the current built-in Windows PowerShell. Installing PowerShell Core will not remove Windows PowerShell from your system.

Rather than invoking PowerShell with the powershell.exe command, you use pwsh.exe (or simply pwsh on Linux). In this way, you can test your scripts and functions incrementally before moving everything over en masse.
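
A simple way to see the side-by-side behavior, assuming both shells are installed under their default names on Windows, is to ask each engine for its own version from an existing PowerShell prompt.

    # Each engine reports its own version, so both can coexist on the same machine
    powershell.exe -NoProfile -Command '$PSVersionTable.PSVersion'   # Windows PowerShell 5.1
    pwsh.exe -NoProfile -Command '$PSVersionTable.PSVersion'         # PowerShell Core/7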

Shipping separately also allows quicker updates to new versions rather than waiting for a Windows update. By decoupling from the Windows release cycle and patch updates, PowerShell Core can be released and updated regularly and easily.

Disadvantages of PowerShell Core

One of the biggest drawbacks to PowerShell Core is losing the ability to run every cmdlet that worked in Windows PowerShell. There is still some functionality that PowerShell Core can’t fully replicate, but the number of cmdlets that won’t run is shrinking rapidly with each release. This may delay some organizations’ move to PowerShell Core, but in the end, with the increasing cmdlet support coming in PowerShell 7 and beyond, there won’t be a compelling reason to stay on Windows PowerShell.

Getting started with the future of PowerShell

PowerShell Core is released for Windows, macOS and Linux alike. For Windows, there are easy-to-install MSI packages, while Linux builds are available through a variety of package formats and repositories.

Simply starting the shell using pwsh will let you run PowerShell Core without disrupting your current environment. Even better is the ability to install a preview version of the next iteration of PowerShell and run pwsh-preview to test it out before it becomes generally available.
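
One low-friction way to try it, assuming the .NET Core SDK is already installed, is to add PowerShell as a .NET global tool; the MSI and Linux package-manager routes mentioned above work just as well.

    # Install the stable release as a .NET global tool, then launch it
    dotnet tool install --global PowerShell
    pwsh

    # Inside the new shell, confirm the version and edition
    $PSVersionTable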

For Sale – *price reduced* Fanless HTPC (Intel i7 4785T 2.2 GHz, 8GB RAM, 120GB SSD, USB 3.0 Wi-Fi)

Sedatech UC03109F1I8HE Mini Evolution Passive Cooling Desktop PC

CPU: Intel Core i7-4785T, 4 × 2.2 GHz (max 3.2 GHz)
Asus Rock motherboard
Haswell-generation low-power Intel i7 processor (35 W TDP) with Turbo Boost Technology (up to 3.2 GHz)
120 GB SSD drive
Completely silent fanless case which looks like an unbadged Akasa Euler Mini ITX Case
Low power consumption and small dimensions (W x H x D): 22.8 x 6.15 x 18.7 cm
With Windows 8.1 64-bit EN (DVD + booklet + activation key) plus drivers disc

Perfect silent HTPC

Wiped clean and installed with the latest Ubuntu 19.10, but it does come with the original Windows 8.1 DVD and key

From search to translation, AI research is improving Microsoft products

The evolution from research to product

It’s one thing for a Microsoft researcher to use all the available bells and whistles, plus Azure’s powerful computing infrastructure, to develop an AI-based machine translation model that can perform as well as a person on a narrow research benchmark with lots of data. It’s quite another to make that model work in a commercial product.

To tackle the human parity challenge, three research teams used deep neural networks and applied other cutting-edge training techniques that mimic the way people might approach a problem to provide more fluent and accurate translations. Those included translating sentences back and forth between English and Chinese and comparing results, as well as repeating the same translation over and over until its quality improves.

“In the beginning, we were not taking into account whether this technology was shippable as a product. We were just asking ourselves if we took everything in the kitchen sink and threw it at the problem, how good could it get?” Menezes said. “So we came up with this research system that was very big, very slow and very expensive just to push the limits of achieving human parity.”

“Since then, our goal has been to figure out how we can bring this level of quality — or as close to this level of quality as possible — into our production API,” Menezes said.

Someone using Microsoft Translator types in a sentence and expects a translation in milliseconds, Menezes said. So the team needed to figure out how to make its big, complicated research model much leaner and faster. But as they were working to shrink the research system algorithmically, they also had to broaden its reach exponentially — not just training it on news articles but on anything from handbooks and recipes to encyclopedia entries.

To accomplish this, the team employed a technique called knowledge distillation, which involves creating a lightweight “student” model that learns from translations generated by the “teacher” model with all the bells and whistles, rather than the massive amounts of raw parallel data that machine translation systems are generally trained on. The goal is to engineer the student model to be much faster and less complex than its teacher, while still retaining most of the quality.

In one example, the team found that the student model could use a simplified decoding algorithm to select the best translated word at each step, rather than the usual method of searching through a huge space of possible translations.

The researchers also developed a different approach to dual learning, which takes advantage of “round trip” translation checks. For example, if a person learning Japanese wants to check and see if a letter she wrote to an overseas friend is accurate, she might run the letter back through an English translator to see if it makes sense. Machine learning algorithms can also learn from this approach.

In the research model, the team used dual learning to improve the model’s output. In the production model, the team used dual learning to clean the data that the student learned from, essentially throwing out sentence pairs that represented inaccurate or confusing translations, Menezes said. That preserved a lot of the technique’s benefit without requiring as much computing.

With lots of trial and error and engineering, the team developed a recipe that allowed the machine translation student model — which is simple enough to operate in a cloud API — to deliver real-time results that are nearly as accurate as the more complex teacher, Menezes said.

Arul Menezes, Microsoft distinguished engineer and founder of Microsoft Translator. Photo by Dan DeLong.

Improving search with multi-task learning

In the rapidly evolving AI landscape, where new language understanding models are constantly introduced and improved upon by others in the research community, Bing’s search experts are always on the hunt for new and promising techniques. Unlike the old days, in which people might type in a keyword and click through a list of links to get to the information they’re looking for, users today increasingly search by asking a question — “How much would the Mona Lisa cost?” or “Which spider bites are dangerous?” — and expect the answer to bubble up to the top.

“This is really about giving the customers the right information and saving them time,” said Rangan Majumder, partner group program manager of search and AI in Bing. “We are expected to do the work on their behalf by picking the most authoritative websites and extracting the parts of the website that actually shows the answer to their question.”

To do this, not only does an AI model have to pick the most trustworthy documents, but it also has to develop an understanding of the content within each document, which requires proficiency in any number of language understanding tasks.

Last June, Microsoft researchers were the first to develop a machine learning model that surpassed the estimate for human performance on the General Language Understanding Evaluation (GLUE) benchmark, which measures mastery of nine different language understanding tasks ranging from sentiment analysis to text similarity and question answering. Their Multi-Task Deep Neural Network (MT-DNN) solution employed both knowledge distillation and multi-task learning, which allows the same model to train on and learn from multiple tasks at once and to apply knowledge gained in one area to others.

Bing’s experts this fall incorporated core principles from that research into their own machine learning model, which they estimate has improved answers in up to 26 percent of all questions sent to Bing in English markets. It also improved caption generation — or the links and descriptions lower down on the page — in 20 percent of those queries. By using a single model for question answering and captions, tasks that have traditionally been handled independently, multi-task deep learning produced some of the largest improvements in both.

For instance, the new model can answer the question “How much does the Mona Lisa cost?” with a bolded numerical estimate: $830 million. In the answer below, it first has to know that the word cost is looking for a number, but it also has to understand the context within the answer to pick today’s estimate over the older value of $100 million in 1962. Through multi-task training, the Bing team built a single model that selects the best answer, decides whether an answer should be triggered at all and chooses exactly which words to bold.

This screenshot of Bing search results illustrates how natural language understanding research is improving the way Bing answers questions like “How much does the Mona Lisa cost?” A new AI model released this fall understands the language and context of the question well enough to distinguish between the two values in the answer — $100 million in 1962 and $830 million in 2018 — and highlight the more recent value in bold. Image by Microsoft.

Earlier this year, Bing engineers open sourced their code to pretrain large language representations on Azure.  Building on that same code, Bing engineers working on Project Turing developed their own neural language representation, a general language understanding model that is pretrained to understand key principles of language and is reusable for other downstream tasks. It masters these by learning how to fill in the blanks when words are removed from sentences, similar to the popular children’s game Mad Libs.

“You take a Wikipedia document, remove a phrase and the model has to learn to predict what phrase should go in the gap only by the words around it,” Majumder said. “And by doing that it’s learning about syntax, semantics and sometimes even knowledge. This approach blows other things out of the water because when you fine tune it for a specific task, it’s already learned a lot of the basic nuances about language.”

To teach the pretrained model how to tackle question answering and caption generation, the Bing team applied the multi-task learning approach developed by Microsoft Research to fine tune the model on multiple tasks at once. When a model learns something useful from one task, it can apply those learnings to the other areas, said Jianfeng Gao, partner research manager in the Deep Learning Group at Microsoft Research.

For example, he said, when a person learns to ride a bike, she has to master balance, which is also a useful skill in skiing. Relying on those lessons from bicycling can make it easier and faster to learn how to ski, as compared with someone who hasn’t had that experience, he said.

“In some sense, we’re borrowing from the way human beings work. As you accumulate more and more experience in life, when you face a new task you can draw from all the information you’ve learned in other situations and apply them,” Gao said.

Like the Microsoft Translator team, the Bing team also used knowledge distillation to convert their large and complex model into a leaner model that is fast and cost-effective enough to work in a commercial product.

And now, that same AI model working in Microsoft Search in Bing is being used to improve question answering when people search for information within their own company. If an employee types a question like “Can I bring a dog to work?” into the company’s intranet, the new model can recognize that a dog is a pet and pull up the company’s pet policy for that employee — even if the word dog never appears in that text. And it can surface a direct answer to the question.

“Just like we can get answers for Bing searches from the public web, we can use that same model to understand a question you might have sitting at your desk at work and read through your enterprise documents and give you the answer,” Majumder said.

Top image: Microsoft investments in natural language understanding research are improving the way Bing answers search questions like “How much does the Mona Lisa cost?” Image by Musée du Louvre/Wikimedia Commons. 

Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.

Why VDI is here to stay

VDI has led an up-and-down existence. Once heralded as the next great evolution in virtualization when it hit the scene more than a decade ago, VDI was eventually labeled a failure by many. In reality, the future of VDI is still very much alive.

To find out just how viable VDI is at this point, it’s important to look at the reasons many people cited in saying the technology was doomed to fail and see what, if anything, has changed.

VDI is cost-prohibitive

In the early days, the overall cost of a virtual desktop could be as much as five times higher than that of a comparable physical desktop, according to some estimates. In fact, virtual desktops were once so ridiculously expensive that they were something of a status symbol for the few organizations that chose to use them in production.

Today, the debate over whether it is less expensive to deploy physical or virtual desktops rages on. VDI vendors are quick to tell potential customers that virtual desktops cost less than physical desktops, while others claim the opposite.

Regardless of whether virtual desktops or physical desktops have a lower total cost of ownership, one thing is certain: The fact that people are even debating which desktop type is less expensive serves as proof that the cost of VDI has dropped tremendously.

The overall cost reduction can be attributed to three main factors:

  • Friendlier licensing terms
  • Moore’s Law: In the context of VDI, Moore’s Law means IT can host more virtual desktops per server as hardware density improves.
  • Purpose-built hyper-converged infrastructure: HCI lowers the cost of long-term operational tasks by combining compute, storage and networking in one piece of hardware.

Enterprise desktops are complex

There is far more to an enterprise desktop — physical or virtual — than just the OS. There are many other components IT must consider. Some of these components include applications, device drivers and user profiles.

At one time, making a change to any one of these or other desktop components could have meant building and deploying a brand-new desktop image. As such, many people considered virtual desktop management to be far too labor-intensive for the technology to ever be practical.

Today, however, there is no reason IT has to base virtual desktops on a single, monolithic and frequently updated image. It is increasingly common for IT to break down virtual desktops into a series of virtualized subcomponents. This approach works similarly to containerization.

For example, IT might virtualize applications so it can maintain them without having to worry about the OS image. Likewise, IT can break off the user profile into a virtualized layer, making it possible to achieve the illusion of persistence without affecting the desktop OS. In any case, layering addresses the challenges of enterprise desktop complexity.

Performance

An early VDI implementation might work fine for lightweight apps, such as word processors and spreadsheets, but it would likely have trouble coping with anything graphically or computationally intensive. Even the simple act of playing a YouTube video could bring a VDI deployment to its knees.

Today, VDI performance is generally far better. Yes, Moore’s Law plays into this, but there are also some other factors. For instance, VDI has existed for long enough that vendors have gotten a lot better at performance-tuning their VDI software. Likewise, vendors such as VMware and Microsoft added features designed to ensure performance and stability.

For example, IT can throttle virtual desktops to prevent a user from generating a workload that affects other virtual desktops. In addition, IT can map virtual desktops running graphically intensive workloads to physical graphics processing units to provide graphical performance that is nearly as good as what users would expect on a physical desktop.

The DevOps digital transformation: Evolutionary and revolutionary

Evolution doesn’t occur at a steady pace. It’s marked by moments of consequential and relatively sudden change, which significantly alter survival dynamics and give rise to entirely new paradigms.

This happened with the Cambrian explosion. Approximately 541 million years ago, and over the next 70 million to 80 million years, organisms rapidly evolved from mostly single-cell to complex and diverse creatures that better resemble life on planet Earth as we know it.

As CloudBees CTO and Jenkins founder Kohsuke Kawaguchi explained in his Jenkins World keynote, the Cambrian explosion serves as an apt metaphor for both Jenkins and the DevOps digital transformation.

Otherwise mundane elements, like the gradual evolution of eyesight, sparked and fueled the Cambrian explosion. While crude at first, many experts believe eyesight reached a tipping point that upheaved the predator-prey dynamic by enabling predators to hunt more effectively. This increased the pressure to evolve and kicked off an arms race, as prey developed better defense features, like armor, speed and camouflage.

Automation, cloud and mobile: Fueling the DevOps digital transformation

For Jenkins, which started as a single app for a single use case, the automation features in early builds stand in for eyesight, while mobility and cloud play the same role for DevOps as a whole. Modern software as we know it has been around for about 70 years, but it’s easy to see mobility, cloud and automation as igniting software’s Cambrian explosion.

All were limited and seemingly innocuous at first, but they eventually developed to the point where an online retailer like Amazon could compete with Walmart, the world’s largest physical retailer. The pressure to evolve is why Walmart dropped $3 billion on e-commerce startup Jet.com earlier this year. The pressure to evolve is why all businesses are now in the software business, a refrain repeated at Jenkins World.

Evolution equals transformation, and the latter was a steady theme at Jenkins World, although both could just as easily double as warnings. CloudBees CEO Sacha Labourey drove that point home in his keynote focusing on “Digital Darwinism,” quoting Eric Shinseki, retired Army general and former U.S. secretary of Veterans Affairs: “If you dislike change, you’re going to dislike irrelevance even more.”

Instant insights, the next big thing

CloudBees used Jenkins World to launch DevOptics, which Labourey claimed provides a “single source of truth” for a “holistic view” of the deployment pipeline, aggregating data from disparate tools and teams. From his description, it’s a DevOps system of record — one that ultimately helps the business side “identify ROI from DevOps initiatives,” according to CloudBees.

CloudBees wasn’t alone in trying to make metric sense of the deployment pipeline. Electric Cloud recently unveiled ElectricFlow 8.0 with DevOps Insight Analytics, using Jenkins World to show it off to prospective developers. According to Electric Cloud, Insight Analytics provides “teams with automated data collection and powerful reporting to connect DevOps toolchain metrics and performance back to the milestones and business value (features, user stories) being delivered in every release.”

Anders Wallgren, CEO at Electric Cloud, based in San Jose, Calif., said it offers instant insight into relevant pipeline analytics, helping both IT and business leaders troubleshoot bottlenecks and spot trends.

So, what’s the big deal about dashboards and insights? Plenty, according to Kawaguchi — particularly CloudBees Blue Ocean. He said he sees it as another element fueling the DevOps digital transformation.

A friendly UI that both business and IT can understand improves the continuous delivery user experience. Think of it as extending the pipeline beyond IT to business and marketing. With relevant insights, organizations can better meet customer needs and react to customer demands.

It’s both an evolutionary and revolutionary software explosion, fueled by cloud, mobile, automation and easy access to actionable data. Take another look at Walmart as it scrambles to stave off Amazon, or at Marriott and Hilton doing the same with Airbnb. Look at Tesla and its software fix to its hardware problem. It’s already here, altering survival dynamics and giving rise to entirely new paradigms.