
Will PowerShell Core 6 fill in missing features?

Administrators who have embraced PowerShell to automate tasks and manage systems will need to prepare themselves as Microsoft plans to focus its energies on the open source version, PowerShell Core.

All signs from Microsoft indicate it is heading away from the Windows-only version of PowerShell, which the company said it will continue to support with critical fixes — but no further upgrades. The company plans to release PowerShell Core 6 shortly. Here’s what admins need to know about the transition.

What’s different with PowerShell Core?

PowerShell Core 6 is an open source configuration management and automation tool from Microsoft; as of this article’s publication, the most recent preview is the release candidate Microsoft made available in November. PowerShell Core 6 represents a significant change for administrators because it moves beyond the Windows-only platform to accommodate heterogeneous IT shops and hybrid cloud networks. Microsoft’s intention is to give administrative teams a single tool to manage Linux, macOS and Windows systems.

What features are not in PowerShell Core?

PowerShell Core runs on .NET Core and uses .NET Standard 2.0, a common API specification that helps some existing Windows PowerShell modules work in PowerShell Core.

Because .NET Core is a subset of the .NET Framework, PowerShell Core misses out on some useful features in Windows PowerShell. For example, the workflow feature enables admins to execute tasks or retrieve data through a sequence of automated, long-running steps. Workflow is not in PowerShell Core 6, so the sequencing, checkpointing, resumability and persistence it provides are unavailable as well.
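
To make concrete what disappears, here is a minimal sketch of a Windows PowerShell 5.1 workflow; the workflow keyword and Checkpoint-Workflow are not recognized by PowerShell Core 6, and the function and computer names here are hypothetical.

    # Windows PowerShell 5.1 only; the workflow keyword is absent from PowerShell Core 6
    workflow Get-UptimeReport {
        param([string[]]$ComputerName)
        foreach ($computer in $ComputerName) {
            # Workflow activities accept -PSComputerName for remote execution
            Get-CimInstance -ClassName Win32_OperatingSystem -PSComputerName $computer
            Checkpoint-Workflow   # persist progress so an interrupted run can resume here
        }
    }
    Get-UptimeReport -ComputerName 'server01', 'server02'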

A few other features missing from PowerShell Core 6 are:

  • Windows Presentation Foundation: This is the group of .NET libraries that enable coders to build UIs for scripts. It offers a common platform for developers and designers to work together with standard tools to create Windows and web interfaces.
  • Windows Forms: In PowerShell 5.0 for Windows, the Windows Forms feature provides a robust platform to build rich client apps with the GUI class library on the .NET Framework. To create a form, the admin loads the System.Windows.Forms assembly, creates a new object of type System.Windows.Forms.Form and calls its ShowDialog method; a sketch of this pattern appears after this list. With PowerShell Core 6, administrators lose this capability.
  • Cmdlets: As of publication, most cmdlets in Windows PowerShell have not been ported to PowerShell Core 6. However, compatibility with .NET assemblies enables admins to use some existing modules. Users on Linux are limited to modules mostly related to security, management and utility. Admins on that platform can use the PowerShellGet in-box module to discover, install and update PowerShell modules; see the second sketch after this list. PowerShell Web Access is not available for non-Windows systems because it requires Internet Information Services, the Windows-based web server functionality.
  • PowerShell remoting: Microsoft has ported Secure Shell (SSH) to Windows, and SSH is already popular in other environments, which means SSH-based remoting is likely the best option for remoting tasks; the final sketch after this list shows the pattern. Modules such as Hyper-V, Storage, NetTCPIP and DnsClient have not been ported to PowerShell Core 6, but Microsoft plans to add them.
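
As a concrete illustration of the Windows Forms pattern described in the list, here is a minimal sketch that runs in Windows PowerShell 5.x but not in PowerShell Core 6:

    # Windows PowerShell 5.x only; System.Windows.Forms is absent from .NET Core
    Add-Type -AssemblyName System.Windows.Forms               # load the Windows Forms assembly
    $form = New-Object -TypeName System.Windows.Forms.Form    # create the form object
    $form.Text = 'Hello from Windows PowerShell'
    $form.ShowDialog()                                        # display the form as a modal dialog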
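
The PowerShellGet workflow mentioned in the cmdlets item looks the same on Linux as on Windows; a minimal sketch, with PSScriptAnalyzer standing in for any module in the PowerShell Gallery:

    Find-Module -Name PSScriptAnalyzer                          # discover a module in the gallery
    Install-Module -Name PSScriptAnalyzer -Scope CurrentUser    # install it for the current user
    Update-Module -Name PSScriptAnalyzer                        # pull the latest published version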
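
Finally, a sketch of SSH-based remoting as it works in the PowerShell Core 6 release candidate; the host and user names are hypothetical, and the target must run an SSH server with PowerShell Core installed:

    # SSH-based remoting requires SSH on both ends
    $session = New-PSSession -HostName linuxserver01 -UserName admin -SSHTransport
    Invoke-Command -Session $session -ScriptBlock { Get-Process | Select-Object -First 5 }
    Remove-PSSession -Session $session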

Is there a new scripting environment?

For Windows administrators, the PowerShell Integrated Scripting Environment (ISE) is a handy editor that admins use to write, test and debug commands to manage networks. But PowerShell ISE is not included in PowerShell Core 6, so administrators must move to a different integrated development environment.

Microsoft recommends admins use Visual Studio Code (VS Code). VS Code is a cross-platform tool and uses web technologies to provide a rich editing experience across many languages. However, VS Code lacks some of PowerShell ISE’s features, such as PSEdit and remote tabs. PSEdit enables admins to edit files on remote systems without leaving the development environment. Despite VS Code’s limitations, Windows admins should plan to migrate from PowerShell ISE and familiarize themselves with VS Code.
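
Getting started is mostly a matter of installing VS Code plus its PowerShell extension; a minimal sketch from the command line, assuming the code command is on the path and C:\Scripts is a stand-in for any script folder:

    code --install-extension ms-vscode.PowerShell   # add the PowerShell extension
    code C:\Scripts                                 # open a script folder to edit and debug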

What about Desired State Configuration?

Microsoft offers two versions of Desired State Configuration: Windows PowerShell DSC and DSC for Linux. DSC helps administrators maintain control over software deployments and servers to avoid configuration drift.
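
For context, here is a minimal sketch of a Windows PowerShell DSC configuration of the sort DSC Core will need to carry forward; it declares that IIS must be present so the Local Configuration Manager can correct drift:

    Configuration WebServerBaseline {
        Import-DscResource -ModuleName PSDesiredStateConfiguration
        Node 'localhost' {
            WindowsFeature IIS {
                Name   = 'Web-Server'   # the Windows feature to keep installed
                Ensure = 'Present'      # reinstalled if configuration drifts
            }
        }
    }
    WebServerBaseline -OutputPath 'C:\dsc'                   # compile to a MOF document
    Start-DscConfiguration -Path 'C:\dsc' -Wait -Verbose     # apply and enforce the state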

Microsoft plans to combine these two options into a single cross-platform version called DSC Core, which will require PowerShell Core and .NET Core. DSC Core is not dependent on Windows Management Framework (WMF) and Windows Management Instrumentation (WMI) and is compatible with Windows PowerShell DSC. It supports resources written in Python, C and C++.

Debugging in DSC has always been troublesome, and ISE eased that process. But with Microsoft phasing out ISE, what should admins do now? A Microsoft blog says the company uses VS Code internally for DSC resource development and plans to release instructional videos that explain how to use the PowerShell extension for DSC resource development.

PowerShell Core 6 is still in its infancy, but Microsoft’s moves show the company will forge ahead with its plan to replace Windows PowerShell. This change brings a significant overhaul to the PowerShell landscape, and IT admins who depend on this automation tool should pay close attention to news related to its development.


Industrial IoT adoption rates high, but deployment maturity low

Industrial organizations have embraced IoT; that much is clear. But a new study found current deployments aren’t very advanced yet, though the organizations said that maturity will come in time.

Bsquare’s 2017 Annual IIoT Maturity Study surveyed more than 300 senior-level employees with operational responsibilities from manufacturing, transportation, and oil and gas companies with annual revenues of more than $250 million.

Eighty-six percent of respondents said they have IIoT technologies in place, with an additional 12% planning to deploy IIoT within the next year. And of the 86% of industrial organizations that have completed IoT adoption, 91% said the IIoT deployments were important to business operations.

However, while IoT is catching on, most industrial organizations are still in the early stages.

The state of IoT adoption in industrial organizations

The study outlined five levels of IoT adoption: device connectivity and data forwarding, real-time monitoring, data analytics, automation and on-board intelligence.

Seventy-eight percent of survey respondents, with transportation leading the pack, self-identified their companies at the first stage: transmitting sensor data to the cloud for analytics. Fifty-six percent, again with transportation in the lead, reached the second stage: monitoring sensor data in real time for visualization.


Dave McCarthy, senior director of products at Bsquare, said he had predicted the gap between the first two stages would be smaller. What really surprised him, however, was the small gap between the second and third stages: Forty-eight percent of respondents said they were using data analytics for insight, predictions and optimization with applied analytics such as machine learning or artificial intelligence.

“What it indicates to me,” McCarthy said, “is that people who have gone down the visualization route have figured out, to some degree, some use of the data they’re collecting, and they know that analytics is going to play a part in helping them understand more closely what that data is going to mean for them.”

McCarthy wasn’t surprised to see the drop in the fourth and fifth stages: Twenty-eight percent said they were automating actions across internal systems with their IoT deployments, and only 7% had reached the edge analytics level.

“Just as expected, there’s a large drop-off from people doing analytics to people who are automating the results,” McCarthy said. “And in my mind, the highest amount of ROI comes when you can get to those levels.”

[Figure: IIoT Maturity Model, showing the maturity of IoT adoption in industrial organizations]

IIoT adoption and satisfaction

Not reaching the highest levels of ROI isn’t deterring IoT adoption, though: Seventy-three percent of respondents said they expect to increase IIoT deployments over the next year, with higher IoT adoption rates in transportation and manufacturing (85% and 78%, respectively) than oil and gas (56%). Additionally, 35% of all industrial organizations believe they will reach the automation stage, and 29% are aiming to reach the real-time monitoring stage in the same time period.

Nor do analysts and companies always calculate ROI the same way the organizations using IIoT technologies do, McCarthy noted. Respondents cited machine health-related (90%) and logistics-related (67%) goals as top IoT adoption drivers, while lowering operating costs came in at 24%.

“The number one motivation that all operations-oriented companies have is improving and increasing uptime of their equipment,” McCarthy said. “I hear this over and over again. They know they eventually have to do maintenance on equipment and take things down for repairs, but it is so much more manageable when they can get ahead of that and plan for it.”

“The reality for these types of businesses is that if there are plant shutdowns or line shutdowns that last for extended periods of time, they often don’t have the ability to make up that loss in production,” McCarthy added. “You can’t just run another shift on a Saturday to pick up the slack. Oftentimes the value of the product they’re producing far outweighs the cost of operating the equipment. What this indicates to me is, ‘I’ll spend more if that means I can keep that line running because of the production value.'”

With or without traditional ROI, the majority of survey respondents said they were happy with the results they’re seeing: Eighty-four percent said their products and services were extremely or very effective, with the transportation sector seeing a 96% satisfaction rate.

Additionally, 99% of oil and gas, 98% of transportation and 90% of manufacturing organizations said IIoT would have a significant impact on their industry at a global level. Perhaps those predictions of IIoT investments reaching $60 trillion in the next 15 years and the number of internet-connected IIoT devices exceeding 50 billion by 2020 will become a reality.

Cisco cloud VP calls out trends in multicloud strategy

Large enterprises have quickly embraced multicloud strategy as a common practice — a shift that introduces opportunities, as well as challenges.

Cisco has witnessed this firsthand, as the company seeks a niche in a shifting IT landscape. Earlier this year, Cisco shuttered Intercloud Services, its failed attempt to create a public cloud competitor to Amazon Web Services (AWS). Now, Cisco’s bets are on a multicloud strategy to draw from its networking and security pedigree and sell itself as a facilitator for corporate IT’s navigation across disparate cloud environments.

In an interview with SearchCloudComputing, Kip Compton, vice president of Cisco’s cloud platforms and solutions group, discussed the latest trends with multicloud strategy and where Cisco plans to fit in the market.

How are your customers shifting their view on multicloud strategy?

Kip Compton: It started with the idea that each application is going to be on separate clouds. It’s still limited to more advanced customers, but we’re seeing use cases where they’re spanning clouds with different parts of an application or subsystems, either for historical reasons or across multiple applications, or taking advantage of the specific capabilities in a cloud.

Hybrid cloud was initially billed as a way for applications to span private and public environments, but that was always more hype than reality. What are enterprises doing now to couple their various environments?

Compton: The way we approach hybrid cloud is as a use case where you have an on-prem data center and a public data center and the two work together. Multicloud, the definition we’ve used, is at least two clouds, one of which is a public cloud. In that way, hybrid is essentially a subset of multicloud for us.

Azure Stack is a bit of an outlier, but hybrid has changed a bit for most of our customers in terms of it not being tightly coupled. Now it is deployments where they have certain code that runs in both places, and the two things work together to deliver an application. They’re calling that hybrid, whereas in the early days, it was more about seamless environments and moving workloads between on prem and the public cloud based on load and time of day, and that seems to have faded.

What are the biggest impediments to a successful multicloud strategy?

Compton: Part of it is which types of problems people bring to Cisco, as opposed to other companies, so I acknowledge there may be some bias there. But there are four areas that come up pretty reliably for us in customer conversations.

First is networking, not surprisingly, and they talk about how to connect from on prem to the cloud. How do they connect between clouds? How do they figure out how that integrates to their on-prem connectivity frameworks?

Then, there’s security. We see a lot of companies carry forward their security posture as they move workloads; so virtual versions of our firewalls and things like that, and wanting to align with how security works in the rest of their enterprise.

The third is analytics, particularly application performance analytics. If you move an app to a completely different environment, it’s not just about getting the functionality, it’s about being performant. And then, obviously, how do you monitor and manage it [on] an ongoing basis?

The trend we see is [customers] want to take advantage of the unique capabilities of each cloud, but they need some common framework, some capability that actually spans across these cloud providers, which includes their on-prem deployment.

Where do you draw the line on that commonality between environments?

Compton: In terms of abstraction, there was a time where a popular approach was — I’ll call it the Cloud Foundry or bring-your-own-PaaS [platform as a service] approach — to say, ‘OK, the way I’m going to have portability is I’m not going to write my application to use any of the cloud providers’ APIs. I’m not going to take advantage of anything special from AWS or Azure or anyone.’

That’s less popular because the cloud providers have been fairly successful at launching new features developers want to use. We think of it more like a microservices style or highly modular pattern, where, for my application to run, there’s a whole bunch of things I need: messaging queues, server load, database, networking, security plans. It’s less to abstract Amazon’s networking, and it’s more to provide a common networking capability that will run on Amazon.

You mentioned customers with workloads spanning multiple clouds. How are those being built?

Compton: What I referred to are customers that have an application, maybe with a number of different subsystems. They might have an on-prem database that’s a business-critical system. They might do machine learning in Google Cloud Platform with TensorFlow, and they might look to deliver an experience to their customers through Alexa, which means they need to run some portion of the application in Amazon. They’re not taking their database and sharding it across multiple clouds, but those three subsystems have to work together to deliver that experience that the customer perceives as a single application.

What newer public cloud services do you see getting traction with your customers?

Compton: A few months ago, people were reticent to use [cloud-native] services because portability was the most important thing — but now, ROI and speed matter, so they use those services across the board.


We see an explosion of interest in serverless. It seems to mirror the container phenomenon where everybody agrees containers will become central to cloud computing architectures. We’re reaching the same point on serverless, or function as a service, where people see that as a better way to create code for more efficient [use of] resources.

The other trend we see: a lot of times people use, for example, Salesforce’s PaaS because their data is there, so the consumption of services is driven by practical considerations. Or they’re in a given cloud using services because of how they interface with one of their business partners. So, as much as there are some cool new services, there are some fairly practical points that drive people’s selection, too.

Have you seen companies shift their in-house architectures to accommodate what they’re doing in the public cloud?

Compton: I see companies starting new applications in the cloud and not on prem. And what’s interesting is a lot of our customers continue to see on-prem growth. They have said, ‘We’re going to go cloud-first on our new applications,’ but the applications they already have on prem continue to grow in resource needs.

We also see interest in applying the cloud techniques to the on-prem data center or private cloud. They’re moving away from some of the traditional technologies to make their data center work more like a cloud, partially so it’s easier to work between the two environments, but also because the cloud approach is more efficient and agile than some of the traditional approaches.

And there are companies that want to get out of running data centers. They don’t want to deal with the real estate, the power, the cooling, and they want to move everything they can into Amazon.

What lessons did Cisco learn from the now-shuttered Intercloud?

Compton: The idea was to build a global federated IaaS [infrastructure as a service] that, in theory, would compete with AWS. At that time, most in the industry thought that OpenStack would take over the world. It was viewed as a big threat to AWS.

Today, it’s hard to relate to that point of view — obviously, that didn’t happen. In many ways, cloud is about driving this brutal consistency, and by having global fabrics that are identical and consistent around the world, you can roll out new features and capabilities and scale better than if you have a federated model.

Where we are now in terms of multicloud and strategy going forward — to keep customers and partners and large web scale cloud providers wanting to either buy from us or partner with us — it’s solving some of these complex networking and security problems. Cisco has value in our ability to solve these problems [and] link to the enterprise infrastructures that are in place around the world … that’s the pivot we’ve gone through.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.