Tag Archives: source

Netflix launches tool for monitoring AWS credentials

LAS VEGAS — A new open source tool looks to make monitoring AWS credentials easier and more effective for large organizations.

The tool, dubbed Trailblazer, was introduced during a session at Black Hat USA 2018 on Wednesday by William Bengtson, senior security engineer at Netflix, based in Los Gatos, Calif. During his session, Bengtson discussed how his security team took a different approach to reviewing AWS data in order to find signs of potentially compromised credentials.

Bengtson said Netflix’s methodology for monitoring AWS credentials was fairly simple and relied heavily on AWS’ own CloudTrail log monitoring tool. However, Netflix couldn’t rely solely on CloudTrail to effectively monitor credential activity; Bengtson said a different approach was required because of the sheer size of Netflix’s cloud environment, which is 100% AWS.

“At Netflix, we have hundreds of thousands of servers. They change constantly, and there are 4,000 or so deployments every day,” Bengtson told the audience. “I really wanted to know when a credential was being used outside of Netflix, not just AWS.”

That was crucial, Bengtson explained, because an unauthorized user could set up infrastructure within AWS, obtain a user’s AWS credentials and then log in using those credentials in order to “fly under the radar.”

However, monitoring credentials for usage outside of a specific corporate environment is difficult, he explained, because of the sheer volume of data regarding API calls. An organization with a cloud environment the size of Netflix’s could run into challenges with pagination for the data, as well as rate limiting for API calls — which AWS has put in place to prevent denial-of-service attacks.

“It can take up to an hour to describe a production environment due to our size,” he said.

To get around those obstacles, Bengtson and his team crafted a new methodology that didn’t require machine learning or any complex technology, but rather a “strong but reasonable assumption” about a crucial piece of data.

“The first call wins,” he explained, referring to when a temporary AWS credential makes an API call and grabs the first IP address that’s used. “As we see the first use of that temporary [session] credential, we’re going to grab that IP address and log it.”

The methodology, which is built into the Trailblazer tool, collects the first API call IP address and other related AWS data, such as the instance ID and assumed role records. The tool, which doesn’t require prior knowledge of an organization’s IP allocation in AWS, can quickly determine whether the calls for those AWS credentials are coming from outside the organization’s environment.

“[Trailblazer] will enumerate all of your API calls in your environment and associate that log with what is actually logged in CloudTrail,” Bengtson said. “Not only are you seeing that it’s logged, you’re seeing what it’s logged as.”

Bengtson said the only requirement for using Trailblazer is a high level of familiarity with AWS — specifically how AssumeRole calls are logged. The tool is currently available on GitHub.
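Bengtson's talk described the methodology rather than the implementation, but the "first call wins" idea is easy to sketch. The following hypothetical PowerShell fragment is not Trailblazer's code; it assumes simplified CloudTrail records with AccessKeyId, SourceIPAddress and EventName properties:

## Record the first source IP seen for each temporary credential ("first call wins")
## and flag any later call for that credential from a different address.
function Find-SuspectCredentialUse {
    param([Parameter(Mandatory)][object[]]$Records)
    $firstSeen = @{}    # AccessKeyId -> first source IP observed
    foreach ($record in $Records) {
        $key = $record.AccessKeyId
        if (-not $firstSeen.ContainsKey($key)) {
            ## First use of this temporary credential: log the IP and move on.
            $firstSeen[$key] = $record.SourceIPAddress
        }
        elseif ($firstSeen[$key] -ne $record.SourceIPAddress) {
            ## Same credential, different source address: emit a finding for review.
            [pscustomobject]@{
                AccessKeyId = $key
                FirstIP     = $firstSeen[$key]
                LaterIP     = $record.SourceIPAddress
                EventName   = $record.EventName
            }
        }
    }
}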

Kontron heeds carrier demand for software, buys Inocybe

Kontron has acquired Inocybe Technologies, adding open source networking software to the German hardware maker’s portfolio of computing systems for the telco industry.

Kontron, which announced the acquisition this week, purchased Inocybe’s Open Networking Platform as telcos increasingly favor buying software separate from hardware. Kontron is a midsize supplier of white box systems to communications service providers (CSPs) and cable companies.

CSPs are replacing specialized hardware with more flexible software-centric networking, forcing companies like Kontron and Radisys, which recently sold itself to Reliance Industries, to reinvent themselves, said Lee Doyle, principal analyst at Doyle Research, based in Wellesley, Mass.

“This is part of Kontron’s efforts to move in a more software direction — Radisys has done this as well — and to a more service-oriented model, in this case, based on open source,” Doyle said.

Inocybe, on the other hand, is a small startup that could take advantage of the resources of a midsize telecom supplier, mainly since the market for open source software is still in its infancy within the telecom industry, Doyle said.

While Kontron did not release financial details, the price for Inocybe ranged from $5 million to $10 million, said John Zannos, previously the chief revenue officer of Inocybe and now a general manager of its technology within Kontron. The manufacturer plans to offer Inocybe’s Open Networking Platform as a stand-alone product while also providing hardware specially designed to run the platform.

Inocybe’s business

Inocybe’s business model is similar to that of Red Hat, which sells its version of open source Linux and generates revenue from support and services on the server operating system. Under Kontron, Inocybe plans to continue developing commercial versions of all the networking software built under the Linux Foundation.

Open source is free, but making it work isn't.
Lee Doyle, principal analyst, Doyle Research

The Open Networking Platform includes parts of the Open Network Automation Platform (ONAP), the OpenDaylight software-defined networking controller and the OpenSwitch network operating system. Service providers use Inocybe’s platform as a tool for traffic engineering, network automation and network functions virtualization.

Tools like Inocybe’s deliver open source software in a form that’s ready for testing and then deploying in a production environment. The more difficult alternative is downloading the code from a Linux Foundation site and then stitching it together into something useful.

“Open source is free, but making it work isn’t,” Doyle said.

Before the acquisition, Inocybe had a seat on the board of the open source networking initiative within the Linux Foundation and was active in the development of several technologies, including OpenDaylight and OpenSwitch. All that work would continue under Kontron, Zannos said.

WSO2 integration platform twirls on Ballerina language

The latest version of WSO2’s open source integration platform strengthens its case to help enterprises create and execute microservices.

The summer 2018 release of the WSO2 Integration Agile Platform, introduced at the company's recent annual WSO2Con user conference, supports what the company calls an "integration agile" approach to implementing microservices. Software development has moved to an agile approach, but legacy waterfall-style integration processes can stymie it, and WSO2 aims to change that.

Solving the integration challenge

Integration remains a primary challenge for enterprise IT shops. The shift of automation functions to the public cloud complicates enterprises’ integration maps, but at the same time, enterprises want to use microservices and serverless architectures, which require new integration architectures, said Holger Mueller, an analyst at Constellation Research in San Francisco.

Improvements to the WSO2 integration platform, such as tighter integration among the WSO2 API management, enterprise integration, real-time analytics, and identity and access management products, aim to help enterprises adopt agile integration as they move from monolithic applications to microservices as part of digital transformation projects. The company also introduced a new licensing structure for enterprises to scale their microservices-based apps.

In addition, WSO2 Integration Agile Platform now supports the open source Ballerina programming language, a cloud-native programming language built by WSO2 and optimized for integration. The language features a visual interface that suits noncoding business users, yet also empowers developers to write code to integrate items rather than use bulky configuration-based integration schemes.

“Ballerina has a vaguely Java-JavaScript look and feel,” said Paul Fremantle, CTO of WSO2. “The concurrency model is most similar to Go and the type system is probably most similar to functional programming languages like Elm. We’ve inherited from a bunch of languages.” Using Ballerina, University of Oxford students finished projects in 45 minutes that typically took two hours in other languages, Fremantle said.

Some early Ballerina adopters requested more formal support, so WSO2 now offers a Ballerina Early Access Development Support package with unlimited query support to users, but this is only available until Ballerina 1.0 is released later this year, Fremantle said. Pricing for the package is $500 per developer seat, with a minimum package of five developers.

Paul Fremantle, CTO of WSO2, demoing Ballerina at BallerinaCon.

Integration at the heart of PaaS

Integration technology is central functionality for all PaaS offerings that aim to ease enterprise developers and DevOps pros into microservices, serverless computing, and even emerging technologies like blockchain, said Charlotte Dunlap, an analyst at GlobalData in Santa Cruz, Calif. WSO2 offers a competitive open source alternative to pricier options from larger rivals such as Oracle, IBM, and SAP, though it’s more of a “second tier” integration and API management provider and lacks the brand recognition to attract global enterprises, she said.

Nevertheless, Salesforce’s MuleSoft acquisition earlier this year exemplifies the importance of smaller integration vendors. Meanwhile, Red Hat offers integration and API management options, and public cloud platform providers will also build out these services.

How is the future of PowerShell shaping up?

Now that PowerShell is no longer just for Windows — and is an open source project — what are the long-term implications of these changes?

Microsoft technical fellow Jeffrey Snover developed Windows PowerShell based on the parameters in his "Monad Manifesto." If you compare that document to the various releases of Windows PowerShell, you'll see Microsoft realized most of Snover's vision. But now that this systems administration tool is an open source project, what does this mean for the future of PowerShell?

I've used PowerShell for more than 12 years and arguably understand it as well as anyone, yet I don't know where PowerShell is going, so I suspect many of its users are equally confused.

When Microsoft announced that PowerShell would expand to the Linux and macOS platforms, the company said it would continue to support Windows PowerShell, but would not develop new features for the product. Let’s look at some of the recent changes to PowerShell and where the challenges lie.

Using different PowerShell versions

While it’s not directly related to PowerShell Core being open source, one benefit is the ability to install different PowerShell versions side by side. I currently have Windows PowerShell v5.1, PowerShell Core v6.0.1 and PowerShell Core v6.1 preview 2 installed on the same machine. I can test code across all three versions using the appropriate console or Visual Studio Code.

One benefit of the open source move is that Windows PowerShell v5.1, PowerShell Core v6.0.1 and PowerShell Core v6.1 preview 2 can run on the same machine.
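To confirm which engine a given console or integrated terminal is running, check the automatic version table, which works in every edition:

$PSVersionTable.PSVersion
$PSVersionTable.PSEdition   # 'Desktop' for Windows PowerShell, 'Core' for PowerShell Core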

How admins benefit from open source PowerShell

The good points of the recent changes to PowerShell include access to the open source project, a faster release cadence and community input.

Another point in favor of PowerShell’s move is that you can see the code. If you can read C#, you can use that skill to track down and report on bugs you encounter. If you have a fix for the problem, then you can submit it.

Studying the code can give you insight into how PowerShell works, though it won't explain why PowerShell works the way it does. Previously, only Microsoft MVPs and a select few others had access to the Windows PowerShell code; with the PowerShell Core code now open to everyone, the added scrutiny can only make it a better product in the long run.

The PowerShell team expects to deliver a new release approximately every six months. The team released PowerShell v6.0 in January 2018. At the time of this article’s publication, version 6.1 is in its third preview release, with the final version expected soon. If the team maintains this release cadence, you can expect v6.2 in late 2018 or early 2019.

A faster release cadence implies quicker resolution of bugs and new features on a more regular basis. The downside to a faster release cadence is that you’ll have to keep upgrading your PowerShell instances to get the bug fixes and new features.

Of the Microsoft product teams, the Windows PowerShell team is one of the most accessible. They've been very active in the PowerShell community since the PowerShell v1 beta releases, engaging with users and incorporating their feedback. The scope of that dialog has expanded; anyone can report bugs or request new features.

The downside is that the originator of a request is often expected to implement the change. If you follow the project, you'll see that only a handful of community members are heavily active.

Shortcomings of the PowerShell Core project

This leads us to the disadvantages now that PowerShell is an open source project. In my view, the problems are:

  • it’s an open source project;
  • there’s no overarching vision for the project;
  • the user community lacks an understanding of what’s happening with PowerShell; and
  • gaps in the functionality.

Video: PowerShell's inventor gives a status update on the automation tool.

These points aren’t necessarily problems, but they are issues that could impact the PowerShell project in the long term.

Changing this vital automation and change management tool to an open source project has profound implications for the future of PowerShell. The PowerShell Core committee is the primary caretaker of PowerShell. This board has the final say on which proposals for new features will proceed.

At this point in time, the committee members are PowerShell team members. A number of them, including the original language design team, have worked on PowerShell from the start. If that level of knowledge is diluted, it could have an adverse effect on PowerShell.

The PowerShell project page supplies a number of short- to medium-term goals, but I haven’t seen a long-term plan that lays out the future of PowerShell. So far, the effort appears concentrated on porting the PowerShell engine to other platforms. If the only goal is to move PowerShell to a cross-platform administration tool, then more effort should go into bringing the current Windows PowerShell functionality to the other platforms.

Giving the PowerShell community a way to participate in the development of PowerShell is both a strength and a weakness. Some of the proposals show many users don't understand how PowerShell works. There are requests to make PowerShell more like Bash or other shells.

Other proposals seek to change how PowerShell works, which could break existing functionality. The PowerShell committee has done a good job of managing the more controversial proposals, but clearing up long-term goals for the project would reduce requests that don’t fit into the future of PowerShell.

The project is also addressing gaps in functionality. Many of the current Windows PowerShell v5.1 modules will work in PowerShell Core. At the PowerShell + DevOps Global Summit 2018, one demonstration showed how to use implicit remoting to access Windows PowerShell v5.1 modules on the local machine through PowerShell Core v6. While not ideal, this method works until the module authors convert them to run in PowerShell Core.
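The conference demonstration isn't reproduced here, but the technique is standard implicit remoting. A minimal sketch, assuming WinRM remoting is enabled on the local machine and using the ActiveDirectory module purely as an example:

## From PowerShell Core 6, create a session to the local Windows PowerShell 5.1 endpoint.
$session = New-PSSession -ComputerName localhost
## Generate proxy functions for the module's cmdlets in the current Core session.
Import-PSSession -Session $session -Module ActiveDirectory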

One gap that needs work is the functionality on Linux and macOS systems. PowerShell Core is missing the cmdlets needed to perform standard administrative tasks, such as working with network adapters, storage, printer management, local accounts and groups.

Availability of the ConvertFrom-String cmdlet would be a huge boost by giving admins the ability to use native Linux commands then turn the output into objects for further processing in PowerShell. Unfortunately, ConvertFrom-String uses code that cannot be open sourced, so it’s not an option currently. Until this functionality gap gets closed, Linux and macOS will be second-class citizens in the PowerShell world.

How to build a Packer image for Azure


Packer is an open source tool that automates the Windows Server image building process to give administrators a consistent approach to create new VMs.


For admins who prefer to roll their own Windows Server image, despite the best of intentions, issues can arise from these handcrafted builds.

To maintain some consistency — and avoid unnecessary help desk tickets — image management tools such as Packer can help construct golden images tailored for different needs. The Packer image tool automates the building process and helps admins manage Windows Server images. Packer offers a way to script the image construction process and produce builds for multiple platforms at the same time. Teams can store validated Packer image configurations in code repositories and share them across locations to ensure stability across builds.

Build a Packer image for Azure

To demonstrate how Packer works, we’ll use it to build a Windows Server image. To start, download and install Packer for the operating system of choice. Packer offers an installation guide on its website.

Next, we need to figure out where to create the image. A Packer feature called builders creates images for various services, such as Azure, AWS, Docker, VMware and more. This tutorial will explain how to build a Windows Server image to run in Azure.

To construct an image for Azure, we have to meet a few prerequisites; a sketch for provisioning some of these follows the list. You need:

  • a service principal for Packer to authenticate to Azure;
  • a storage account to hold the image;
  • the resource group name for the storage account;
  • the Azure subscription ID;
  • the tenant ID for your Azure Active Directory; and
  • a storage container to place the VHD image.
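If some of these don't exist yet, here is a minimal provisioning sketch with the AzureRM PowerShell module; it assumes you have signed in with Login-AzureRmAccount, and the names simply match the template below, so substitute your own:

## Resource group and storage account that will hold the captured image.
New-AzureRmResourceGroup -Name labtesting -Location "East US"
$account = New-AzureRmStorageAccount -ResourceGroupName labtesting -Name adblabtesting -Location "East US" -SkuName Standard_LRS
## Blob container that will receive the VHD.
New-AzureStorageContainer -Name vhds -Context $account.Context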

Validate the Windows Server build instructions


Next, it’s time to set up the image template. Every Packer image requires a JSON file called a template that tells Packer how to build the image and where to put it. An example of a template that builds an Azure image is in the code below. Save it with the filename WindowsServer.Azure.json.

{
  "variables": {
      "client_id": "",
      "client_secret": "",
      "object_id": ""
  },
  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "object_id": "{{user `object_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "resource_group_name": "labtesting",
    "storage_account": "adblabtesting",
    "subscription_id": "d660a51f-031d-4b8f-827d-3f811feda5fc",
    "tenant_id": "bb504844-07db-4019-b1c4-7243dfc97121",

    "capture_container_name": "vhds",
    "capture_name_prefix": "packer",

    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",
    "location": "East US",
    "vm_size": "Standard_D2S_v3"
  }]
}

Before you start, validate the template's schema with the packer validate command. We don't want sensitive information in the template, so we create the client_id and client_secret variables and pass their values at runtime.

packer validate -var 'client_id=value' -var 'client_secret=value' WindowsServer.Azure.json

How to correct Packer build issues

After the command confirms the template is good, we build the image with nearly the same syntax as the validation command. For the purposes of this article, we will use placeholders for the client_id, client_secret and object_id references.

packer build -var 'client_id=XXXX' -var 'client_secret=XXXX' -var 'object_id=XXXX' WindowsServer.Azure.json

When you run the build the first time, you may run into a few errors if the setup is not complete. Here are the errors that came up when I ran my build:

  • "Build 'azure-arm' errored: The storage account is located in eastus, but the build will take place in West US. The locations must be identical"
  • Build 'azure-arm' errored: storage.AccountsClient#ListKeys: Failure responding to request: StatusCode=404 – Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceGroupNotFound" Message="Resource group 'adblabtesting' could not be found."
  • "==> azure-arm: ERROR: -> VMSizeDoesntSupportPremiumStorage : Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_A2'."

Video: Using Packer to build an image from another VM.

The error messages are straightforward and not difficult to fix.

However, the following error message is more serious:

==> azure-arm: ERROR: -> Forbidden : Access denied
==> azure-arm:
==> azure-arm:  …failed to get certificate URL, retry(0)

This indicates the wrong object_id was used. Find the correct one in your Azure subscription's role assignments.

After adding the right object_id, you will find a VHD image in Azure.


Databricks platform additions unify machine learning frameworks

SAN FRANCISCO — Open source machine learning frameworks have multiplied in recent years, as enterprises pursue operational gains through AI. Along the way, the situation has formed a jumble of competing tools, creating a nightmare for development teams tasked with supporting them all.

Databricks, which offers managed versions of the Spark compute platform in the cloud, is making a play for enterprises that are struggling to keep pace with this environment. At Spark + AI Summit 2018, which was hosted by Databricks here this week, the company announced updates to its platform and to Spark that it said will help bring the diverse array of machine learning frameworks under one roof.

Unifying machine learning frameworks

MLflow is a new open source framework on the Databricks platform that integrates with Spark, scikit-learn, TensorFlow and other open source machine learning tools. It allows data scientists to package machine learning code into reproducible modules, conduct and compare parallel experiments, and deploy production-ready models.

Databricks also introduced a new product on its platform, called Runtime for ML. This is a preconfigured Spark cluster that comes loaded with distributed machine learning frameworks commonly used for deep learning, including Keras, Horovod and TensorFlow, eliminating the integration work data scientists typically have to do when adopting a new tool.

Databricks’ other announcement, a tool called Delta, is aimed at improving data quality for machine learning modeling. Delta sits on top of data lakes, which typically contain large amounts of unstructured data. Data scientists can specify a schema they want their training data to match, and Delta will pull in all the data in the data lake that fits the specified schema, leaving out data that doesn’t fit.

MLflow includes a tracking interface for logging the results of machine learning jobs.

Users want everything under one roof

Each of the new tools is either in a public preview or alpha test stage, so few users have had a chance to get their hands on them. But attendees at the conference were broadly happy about the approach of stitching together disparate frameworks more tightly.

Saman Michael Far, senior vice president of technology at the Financial Industry Regulatory Authority (FINRA) in Washington, D.C., said in a keynote presentation that he brought in the Databricks platform largely because it already supports several query languages, including R, Python and SQL. Integrating these tools more closely with machine learning frameworks will help FINRA use more machine learning in its goal of spotting potentially illegal financial trades.

You have to take a unified approach. Pick technologies that help you unify your data and operations.
John Gole, senior director of business analysis and product management, Capital One

“It’s removed a lot of the obstacles that seemed inherent to doing machine learning in a business environment,” Far said.

John Gole, senior director of business analysis and product management at Capital One, based in McLean, Va., said the financial services company has implemented Spark throughout its operational departments, including marketing, accounts management and business reporting. The platform is being used for tasks that range from extract, transform and load jobs to SQL querying for ad hoc analysis and machine learning. It’s this unified nature of Spark that made it attractive, Gole said.

Going forward, he said he expects this kind of unified platform to become even more valuable as enterprises bring more machine learning to the center of their operations.

“You have to take a unified approach,” Gole said. “Pick technologies that help you unify your data and operations.”

Bringing together a range of tools

Engineers at ride-sharing platform Uber have already built integrations similar to what Databricks unveiled at the conference. In a presentation, Atul Gupte, a product manager at Uber, based in San Francisco, described a data science workbench his team created that brings together a range of tools — including Jupyter, R and Python — into a web-based environment that’s powered by Spark on the back end. The platform is used for all the company’s machine learning jobs, like training models to cluster rider pickups in Uber Pool or forecast rider demand so the app can encourage more drivers to get out on the roads.

Gupte said that as the company grew from a startup into a large enterprise, the old way of doing things, where everyone worked in a silo with their own tool of choice, didn't scale, which is why it was important to take a more standardized approach to data analysis and machine learning.

“The power is that everyone is now working together,” Gupte said. “You don’t have to keep switching tools. It’s a pretty foundational change in the way teams are working.”

Accenture fosters inclusive workspace to empower employees with Microsoft 365 – Microsoft 365 Blog

Today's post was written by Stephen Cutchins, CIO and accessibility lead at Accenture.

Diversity at Accenture is a source of strength; the wealth of different perspectives and skillsets that our employees bring to the table keeps us leading in our field. Achieving more as a company starts with addressing the needs of every single employee in our workforce. I am passionate about accessibility. I grew up with two cousins with disabilities, and it shaped my outlook on the whole idea of inclusion in the workplace. Accessible technology is about one thing—fitting the tools to the humans who use them—and I'm fortunate to work with a company that shares my vision. I wanted to create an accessibility practice at Accenture, and to that end, I started as the first employee in the CIO's Center for Excellence, where we look at finding the right tools for an inclusive workplace. And when it comes to business tools, we see Microsoft as a leader in inclusive technology and a great partner, a perfect match for our goals to put technology to work empowering every one of our employees. In fact, we now take it for granted that the experiences within Microsoft 365 are going to work well for our employees.

As a human-centric company, our workplace initiatives are designed to bring the conversation about accessibility to the forefront, encouraging an open dialogue about how we can support employees’ needs in the workplace. Accenture runs on Office 365 productivity services that include a wealth of built-in accessibility features. The Microsoft approach of “accessibility by design” matches our philosophy that accessibility is not an add-on or an afterthought, but an inherent part of the technology we use to communicate and collaborate as an organization.

The ability to collaborate effectively with your colleagues to get work done is the baseline of any productive organization. A lot of credit goes to accessibility features in Office 365 ProPlus applications—such as Skype for Business, Word, and Outlook—for helping us tap into the incredible resources in our company. Daily Skype voice and video calls become transformative when people who are blind or motor-disabled can participate by using JAWS screen reader for Windows or voice dictation software. Even minor changes can have an enormous impact. I was excited to see that when Microsoft moved the Accessibility Checker front and center in Word, near the spell check, it raised awareness of both the feature itself and the need to use it. We are all of different abilities, and learning to consider the full range of situations across the disability spectrum means employees will use the tools at hand for better communication and collaboration with everyone.

It gives me enormous satisfaction that our inclusive workplace, built on Microsoft technologies, empowers our employees to do their best work, helping them realize their true potential and grow as human beings. Everyone benefits.

—Stephen Cutchins

Read the case study to learn more about how Accenture is empowering its workforce with the intuitive accessibility tools built into Windows 10 and Office 365.

Mind the feature gaps in the PowerShell open source project


Even though the current version of the PowerShell open source project lacks much of the functionality of Windows PowerShell, administrators can close those gaps with a few adjustments.

Microsoft released the first version of its cross-platform, open source version of PowerShell in January 2018. This version, officially called PowerShell Core 6.0, is not a straight swap for Windows PowerShell 5.1, even though it’s forked from the 5.1 code base and adapted for use on Windows, many Linux distributions and macOS.

The key difference is Windows PowerShell 5.1 and earlier versions run on the full .NET Framework for Windows, whereas the PowerShell open source project relies on .NET Core, which does not have access to the same .NET classes.

Differences between Windows PowerShell and PowerShell Core

Many PowerShell 5.1 features aren’t available in PowerShell Core 6, including the Integrated Scripting Environment (ISE), PowerShell workflows, Windows Management Instrumentation (WMI) cmdlets, event log cmdlets, performance counter cmdlets and the Pester module.

The large collection of PowerShell modules that work with Windows PowerShell are not available in PowerShell Core 6. Any binary PowerShell module compiled against the full .NET Framework won’t work in the PowerShell open source project, including the Active Directory module and the Exchange module.


PowerShell Core 6 brings some useful features. The first is the ability to administer Linux and macOS systems using PowerShell. The depth of cmdlets for Windows systems is not available on the non-Windows platforms, but the PowerShell community might fill those holes through the PowerShell Gallery.

Second, PowerShell Core 6 introduced remoting over Secure Shell (SSH) as an alternative to remoting over the Web Services-Management protocol. This enables remoting to and from Linux systems and provides an easy way to remote to and from non-domain Windows machines.
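A couple of brief examples of the SSH-based syntax; the host names here are placeholders, and the target must have SSH configured with the PowerShell subsystem registered:

## One-to-one interactive session with a Linux box over SSH.
Enter-PSSession -HostName linuxbox.example.com -UserName admin
## Persistent session plus a remote command, also over SSH.
$s = New-PSSession -HostName server01.example.com -UserName admin -SSHTransport
Invoke-Command -Session $s -ScriptBlock { Get-Process | Select-Object -First 5 }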

Installing and configuring SSH on Windows is getting easier, and the inclusion of SSH as an optional feature in the latest Windows 10 and Windows Server previews will hopefully lead to a simpler remoting installation and configuration process.

How to surmount functionality obstacles

You can overcome some of the missing features in PowerShell Core 6, starting with the ISE. Use Visual Studio (VS) Code instead of the ISE for developing scripts. VS Code is an open source editor that runs on Windows, Linux and macOS, and its integrated terminal can use either Windows PowerShell or PowerShell Core when both are installed.

PowerShell workflows, which allow functions to run on several machines at the same time, are never going to be a part of PowerShell Core because this feature is difficult to implement. Instead, you can use PowerShell runspaces to provide parallel processing. While they aren’t easy to code, a proposal exists to create cmdlets for managing runspaces.

An example of a simple PowerShell workflow is:

workflow test1 {
  parallel {
    Get-CimInstance -ClassName Win32_OperatingSystem
    Get-CimInstance -ClassName Win32_ComputerSystem
  }
}

test1

Figure 1 shows the results of this script.

Figure 1. Running a workflow on Windows PowerShell 5.1 in the PowerShell Integrated Scripting Environment.

Emulating a PowerShell workflow with runspaces results in the following code:

## Create a runspace pool with a minimum of 1 and a maximum of 5 runspaces.
$rp = [runspacefactory]::CreateRunspacePool(1,5)
$rp.Open()
$cmds = New-Object -TypeName System.Collections.ArrayList

1..2 | ForEach-Object {
    ## Create a PowerShell instance and link it to the runspace pool.
    $psa = [powershell]::Create()
    $psa.RunspacePool = $rp
    if ($_ -eq 1) {
        [void]$psa.AddScript({ Get-CimInstance -ClassName Win32_OperatingSystem })
    } else {
        [void]$psa.AddScript({ Get-CimInstance -ClassName Win32_ComputerSystem })
    }
    ## Start the pipeline asynchronously and keep the handle for later retrieval.
    $handle = $psa.BeginInvoke()
    $temp = '' | Select-Object PowerShell, Handle
    $temp.PowerShell = $psa
    $temp.Handle = $handle
    [void]$cmds.Add($temp)
}

## View progress.
$cmds | Select-Object -ExpandProperty Handle
## Retrieve the data.
$cmds | ForEach-Object { $_.PowerShell.EndInvoke($_.Handle) }

## Clean up.
$cmds | ForEach-Object { $_.PowerShell.Dispose() }
$rp.Close()
$rp.Dispose()

Figure 2 shows this code running on PowerShell Core 6 in the VS Code editor.

Figure 2. Executing code with runspaces in PowerShell Core 6 in the VS Code editor.

Runspace code works in both Windows PowerShell 5.1 and PowerShell Core 6. Administrators can also simplify runspaces with the PoshRSJob module from the PowerShell Gallery; the latest version works in PowerShell Core 6 on Linux and Windows, as the sketch below shows.
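A minimal usage sketch, assuming the module installs cleanly from the gallery; the CIM class names mirror the workflow example above:

## Install the community PoshRSJob module, then run two CIM queries in parallel.
Install-Module -Name PoshRSJob -Scope CurrentUser
'Win32_OperatingSystem', 'Win32_ComputerSystem' |
    Start-RSJob -ScriptBlock { Get-CimInstance -ClassName $_ }
## Wait for the jobs, collect their output, then remove them.
Get-RSJob | Wait-RSJob | Receive-RSJob
Get-RSJob | Remove-RSJob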

The developers of the PowerShell open source project plan to add missing WMI, event log and performance cmdlets into the Windows Compatibility Pack for .NET Core. WMI cmdlets are effectively deprecated in favor of the Common Information Model (CIM) cmdlets, which are available in PowerShell Core 6 and Windows PowerShell 3 and newer.
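The translation is usually mechanical. A quick comparison, using a class available on any Windows machine:

## Windows PowerShell only; the WMI cmdlets are absent from PowerShell Core 6.
Get-WmiObject -Class Win32_BIOS
## The CIM equivalent, which works in Windows PowerShell 3.0+ and PowerShell Core 6.
Get-CimInstance -ClassName Win32_BIOS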

Video: Showing PowerShell Core's cross-platform capabilities.

Event log cmdlets only work with the traditional event logs. If you need to work with the new style application and service logs, you have to use the Get-WinEvent cmdlet, which also works with the old-style logs.

The following command uses the older Get-EventLog cmdlet:

Get-EventLog -LogName System -Newest 5

It’s easy enough to switch to the Get-WinEvent cmdlet to get similar results:

Get-WinEvent -LogName System -MaxEvents 5

The Get-WinEvent cmdlet can’t clear, limit, write to or create/remove classic event logs, but you can configure event logs using the properties and methods on the objects returned.

Get-WinEvent -ListLog *
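Those configuration objects are live. A small sketch, assuming an elevated session, that raises the classic Application log's maximum size:

## Get-WinEvent -ListLog returns EventLogConfiguration objects whose
## properties can be changed and saved back (run elevated).
$log = Get-WinEvent -ListLog Application
$log.MaximumSizeInBytes = 64MB
$log.SaveChanges()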

In Windows PowerShell 5.1, you can run the following command to access performance counters:

Get-Counter -Counter '\Processor(*)\% Processor Time'

This generates the following output:

Timestamp                 CounterSamples
---------                 --------------
31/03/2018 15:18:34       \\w510w10\processor(0)\% processor time :
                          1.56703775955929
                          \\w510w10\processor(1)\% processor time :
                          0.00460978748879626
                          \\w510w10\processor(2)\% processor time :
                          1.56703775955929
                          \\w510w10\processor(3)\% processor time :
                          0.00460978748879626
                          \\w510w10\processor(4)\% processor time :
                          0.00460978748879626
                          \\w510w10\processor(5)\% processor time :
                          4.69189370370026
                          \\w510w10\processor(6)\% processor time :
                          3.12946573162978
                          \\w510w10\processor(7)\% processor time :
                          0.00460978748879626
                          \\w510w10\processor(_total)\% processor time :
                          1.37173676293523

The alternative way to return performance counter data is with CIM classes, which work in Windows PowerShell 5.1 and PowerShell Core 6:

Get-CimInstance -ClassName Win32_PerfFormattedData_PerfOS_Processor | select Name, PercentProcessorTime

 

Name   PercentProcessorTime
----   --------------------
_Total                   13
0                        50
1                         0
2                         0
3                        43
4                         6
5                         6
6                         0
7                         0

PowerShell Core 6 can access many of the older PowerShell modules, such as the networking and storage modules introduced with Windows 8 and Windows Server 2012. To do this, add an entry like the following to your profile to append the Windows PowerShell 5.1 modules folder to the module path:

$env:PSModulePath = $env:PSModulePath + ';C:\Windows\System32\WindowsPowerShell\v1.0\Modules'

Another way to do this is to import the module directly.

Import-Module C:\Windows\System32\WindowsPowerShell\v1.0\Modules\NetAdapter\NetAdapter.psd1

Get-NetAdapter

 

Name                      ifIndex Status       MacAddress             LinkSpeed
----                      ------- ------       ----------             ---------
vEthernet (nat)                16 Up           00-15-5D-82-CF-92        10 Gbps
vEthernet (DockerNAT)          20 Up           00-15-5D-36-C9-37        10 Gbps
vEthernet (Default Swi…         9 Up           2E-15-00-2B-41-72        10 Gbps
vEthernet (Wireless)           22 Up           F0-7B-CB-A4-30-9C     144.5 Mbps
vEthernet (LAN)                12 Up           00-15-5D-36-C9-11        10 Gbps
Network Bridge                 24 Up           F0-7B-CB-A4-30-9C     144.5 Mbps
LAN                            14 Disconnected F0-DE-F1-00-3F-67          0 bps
Wireless                        8 Up           F0-7B-CB-A4-30-9C     144.5 Mbps

In the case of binary modules, such as Active Directory, you’ll have to revert to scripting when using the PowerShell open source version. If you want to administer Linux machines using PowerShell Core 6, you’ll have to do a lot of scripting or wait for the community to create the functionality.

STEAM CONTROLLER

Hello, hope one of you guys can help me source a STEAM CONTROLLER. I already have the Steam Link, so I'm just looking for the controller.
Thank you in advance.

Location: Middlesbrough