Tag Archives: source

Learn to set up and use PowerShell SSH remoting

When Microsoft announced in August 2016 that PowerShell would become an open source project running on Windows, Linux and macOS, there was an interesting wrinkle related to PowerShell remoting.

Microsoft said this PowerShell Core would support remoting over Secure Shell (SSH) as well as Web Services-Management (WS-MAN). You could already use SSH binaries alongside PowerShell, but the announcement indicated SSH support would become an integral part of PowerShell. This opened up the ability to perform remote administration of Windows and Linux systems easily using the same technology.

A short history of PowerShell remoting

Microsoft introduced remoting in PowerShell version 2.0 in Windows 7 and Windows Server 2008 R2, which dramatically changed the landscape for Windows administrators. They could already open remote desktop sessions to individual servers, but PowerShell remoting made it possible to manage large numbers of servers simultaneously.

Remoting in Windows PowerShell is based on WS-MAN, an open standard from the Distributed Management Task Force. But because WS-MAN-based remoting is Windows-oriented, you needed to use another technology, usually SSH, to administer Linux systems.

Introducing SSH on PowerShell Core

SSH is a protocol for managing systems over a possibly unsecured network. SSH works in a client-server mode and is the de facto standard for remote administration in Linux environments.

PowerShell Core uses OpenSSH, a fork of SSH 1.2.12 that was released under an open source license. OpenSSH is probably the most popular SSH implementation.

The code required to use WS-MAN remoting is installed as part of the Windows operating system. You need to install OpenSSH manually.

Installing OpenSSH

We have grown accustomed to installing software on Windows using the wizards, but the installation of OpenSSH requires more background information and more work from the administrator. Without some manual intervention, many issues can arise.

The installation process for OpenSSH on Windows has improved over time, but it’s still not as easy as it should be. Working with the configuration file leaves a lot to be desired.

There are two options when installing PowerShell SSH:

  1. On Windows 10 version 1809, Windows Server version 1809, Windows Server 2019 and later, OpenSSH is available as an optional feature.
  2. On earlier versions of Windows, you can download and install OpenSSH from GitHub.

Be sure your system has the latest patches before installing OpenSSH.

Installing the OpenSSH optional feature

You can install the OpenSSH optional feature using PowerShell. First, check your system with the following command:

Get-WindowsCapability -Online | where Name -like '*SSH*'
Figure 1. Find the OpenSSH components in your system.

Figure 1 shows the OpenSSH client software is preinstalled.

You’ll need to use Windows PowerShell for the installation unless you download the WindowsCompatibility module for PowerShell Core. Then you can import the Deployment Image Servicing and Management module from Windows PowerShell and run the commands in PowerShell Core.
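
As a hedged sketch of that approach, assuming the WindowsCompatibility module is available from the PowerShell Gallery under that name, the steps from an elevated PowerShell Core session look roughly like this:

# Sketch only: install the compatibility module, then proxy the DISM module
# (which contains the *-WindowsCapability cmdlets) from Windows PowerShell
Install-Module -Name WindowsCompatibility -Scope CurrentUser
Import-WinModule -Name Dism

# The capability cmdlets now work from the PowerShell Core session
Get-WindowsCapability -Online | Where-Object Name -like '*SSH*'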

Install the server feature:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Path :
Online : True
RestartNeeded : False

The SSH files install in the C:\Windows\System32\OpenSSH folder.

Download OpenSSH from GitHub

Start by downloading the latest release from GitHub, where you’ll also find the current installation instructions.

After the download completes, extract the zip file into the C:\Program Files\OpenSSH folder. Change location to C:\Program Files\OpenSSH to install the SSH services:

.\install-sshd.ps1
[SC] SetServiceObjectSecurity SUCCESS
[SC] ChangeServiceConfig2 SUCCESS
[SC] ChangeServiceConfig2 SUCCESS
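
If you want to script the download and extraction steps described above, a rough sketch follows; the release URL and zip name are assumptions, so check the GitHub releases page for the current 64-bit asset before running it.

# Sketch: download the 64-bit release zip and stage the files under Program Files
$zip = "$env:TEMP\OpenSSH-Win64.zip"
Invoke-WebRequest -Uri 'https://github.com/PowerShell/Win32-OpenSSH/releases/latest/download/OpenSSH-Win64.zip' -OutFile $zip
Expand-Archive -Path $zip -DestinationPath $env:TEMP -Force

# The zip expands to an OpenSSH-Win64 folder; copy its contents into place,
# then run .\install-sshd.ps1 from that folder as shown above
New-Item -Path 'C:\Program Files\OpenSSH' -ItemType Directory -Force | Out-Null
Copy-Item -Path "$env:TEMP\OpenSSH-Win64\*" -Destination 'C:\Program Files\OpenSSH' -Recurse -Force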

Configuring OpenSSH

After OpenSSH installs, perform some additional configuration steps.

Ensure that the OpenSSH folder is included in the system Path environment variable (a scripted check follows the list):

  • C:\Windows\System32\OpenSSH if installed as the Windows optional feature
  • C:\Program Files\OpenSSH if installed via the OpenSSH download
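
A quick, hedged way to script that check is shown below; adjust $sshPath to match where you installed OpenSSH.

# Sketch: confirm the OpenSSH folder is on the machine-level Path; append it if missing
$sshPath = 'C:\Program Files\OpenSSH'    # or C:\Windows\System32\OpenSSH for the optional feature
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
if ($machinePath -notlike "*$sshPath*") {
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$sshPath", 'Machine')
}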

Set the two services to start automatically:

Set-Service sshd -StartupType Automatic
Set-Service ssh-agent -StartupType Automatic

If you installed OpenSSH with the optional feature, then Windows creates a new firewall rule to allow inbound access of SSH over port 22. If you installed OpenSSH from the download, then create the firewall rule with this command:

New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' `
-Enabled True -Direction Inbound -Protocol TCP `
-Action Allow -LocalPort 22

Start the sshd service to generate the SSH keys:

Start-Service sshd

The SSH keys and configuration file reside in C:\ProgramData\ssh, which is a hidden folder. The default shell used by SSH is the Windows command shell. This needs to change to PowerShell:

New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
-Value "C:\Program Files\PowerShell\6\pwsh.exe" -PropertyType String -Force

Now, when you connect to the system over SSH, PowerShell Core will start and will be the default shell. You can also make the default shell Windows PowerShell if desired.

There’s a bug in OpenSSH on Windows: it doesn’t handle paths that contain a space, such as the path to the PowerShell Core executable. The workaround is to create a symbolic link that gives OpenSSH a path it can use:

New-Item -ItemType SymbolicLink -Path C:\pwsh -Target 'C:\Program Files\PowerShell\6'

In the sshd_config file, un-comment the following lines:

PubkeyAuthentication yes
PasswordAuthentication yes

Add this line before other subsystem lines:

Subsystem powershell C:\pwsh\pwsh.exe -sshs -NoLogo -NoProfile

This tells OpenSSH to run PowerShell Core.

Comment out the line:

AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys
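
If you prefer to script these sshd_config edits rather than make them by hand, the following sketch shows one way to do it. It assumes the default C:\ProgramData\ssh\sshd_config location and the C:\pwsh symbolic link created earlier, so treat it as a starting point rather than a finished tool.

# Sketch: apply the three sshd_config changes described above
$conf    = 'C:\ProgramData\ssh\sshd_config'
$content = Get-Content -Path $conf

# Un-comment the authentication settings
$content = $content -replace '^#\s*PubkeyAuthentication\s+yes', 'PubkeyAuthentication yes'
$content = $content -replace '^#\s*PasswordAuthentication\s+yes', 'PasswordAuthentication yes'

# Comment out the administrators_authorized_keys line
$content = $content -replace '^(AuthorizedKeysFile\s+__PROGRAMDATA__.*)', '#$1'

# Insert the PowerShell subsystem line before the first existing Subsystem line
$new = 'Subsystem powershell C:\pwsh\pwsh.exe -sshs -NoLogo -NoProfile'
$idx = ($content | Select-String -Pattern '^Subsystem' | Select-Object -First 1).LineNumber - 1
$content = $content[0..($idx - 1)] + $new + $content[$idx..($content.Count - 1)]

Set-Content -Path $conf -Value $content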

After saving the changes to the sshd_config file, restart the services:

Restart-Service sshd
Start-Service ssh-agent

You need to restart the sshd service after any change to the config file.

Using PowerShell SSH remoting

Using remoting over SSH is very similar to remoting over WS-MAN. You can access the remote system directly with Invoke-Command:

Invoke-Command -HostName W19DC01 -ScriptBlock {Get-Process}
<username>@w19dc01's password:

You’ll get a prompt for the password, which won’t be displayed as you type it.

If it’s the first time you’ve connected to the remote system over SSH, then you’ll see a message similar to this:

The authenticity of host 'servername (10.00.00.001)' can't be established.
ECDSA key fingerprint is SHA256:().
Are you sure you want to continue connecting (yes/no)?

Type yes and press Enter.

You can create a remoting session:

$sshs = New-PSSession -HostName W19FS01
<username>@w19fs01's password:

And then use it:

Invoke-Command -Session $sshs -ScriptBlock {$env:COMPUTERNAME}
W19FS01

You can enter an OpenSSH remoting session using Enter-PSSession in the same way as a WS-MAN session. You can enter an existing session or use the HostName parameter on Enter-PSSession to create the interactive session.
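
For example, using the session and host name from the previous examples:

Enter-PSSession -Session $sshs        # enter the existing SSH-based session
Enter-PSSession -HostName W19FS01     # or create a new interactive SSH session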

You can’t disconnect an SSH-based session; that’s a WS-MAN-only technique.

You can use WS-MAN and SSH sessions to manage multiple computers as shown in Figure 2.

The session information shows the different transport mechanism — WS-MAN and SSH, respectively — and the endpoint in use by each session.

Figure 2. Use WS-MAN and SSH sessions together to manage remote machines.

If you look closely at Figure 2, you’ll notice there was no prompt for the password on the SSH session because the system was set up with SSH key-based authentication.

Using SSH key-based authentication

Open an elevated PowerShell session. Change the location to the .ssh folder in your user area:

Set-Location -Path ~\.ssh

Generate the key pair:

ssh-keygen -t ed25519

Add the key file into the SSH-agent on the local machine:

ssh-add id_ed25519

Once you’ve added the private key into SSH-agent, back up the private key to a safe location and delete the key from the local machine.

Copy the id_ed25519.pub file into the .ssh folder for the matching user account on the remote server. You can create such an account if required:

$pwd = Read-Host -Prompt 'Password' -AsSecureString
Password: ********
New-LocalUser -Name Richard -Password $pwd -PasswordNeverExpires
Add-LocalGroupMember -Group Administrators -Member Richard

On the remote machine, copy the contents of the key file into the authorized_keys file:

scp id_ed25519.pub authorized_keys

The authorized_keys file needs its permissions changed (a scripted alternative follows these steps):

  • Open File Explorer, right-click authorized_keys and navigate to Properties – Security – Advanced.
  • Click Disable Inheritance.
  • Select Convert inherited permissions into explicit permissions on this object.
  • Remove all permissions except for SYSTEM and your user account. Both should have Full control.
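
If you'd rather script the permission change than click through File Explorer, a rough equivalent with icacls is shown below; the file path assumes the key was copied into the remote user's profile, so adjust it to match your setup.

# Sketch: lock the authorized_keys ACL down to SYSTEM and the current user
$authKeys = "$env:USERPROFILE\.ssh\authorized_keys"
icacls.exe $authKeys /inheritance:r                  # disable inheritance, drop inherited entries
icacls.exe $authKeys /grant 'SYSTEM:F'               # SYSTEM gets Full control
icacls.exe $authKeys /grant "${env:USERNAME}:F"      # current user gets Full control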


You’ll see references to using the OpenSSHUtils module to set the permissions, but there’s a bug in the version from the PowerShell Gallery that makes the authorized_keys file unusable.

Restart the sshd service on the remote machine.

You can now connect to the remote machine without using a password as shown in Figure 2.

If you’re connecting to a non-domain machine from a machine in the domain, then you need to use the UserName parameter after enabling key-pair authentication:

$ss = New-PSSession -HostName W19ND01 -UserName Richard

You need the username on the remote machine to match your domain username. You won’t be prompted for a password.

WS-MAN or SSH remoting?

Should you use WS-MAN or SSH-based remoting? WS-MAN remoting is available on all Windows systems and is enabled by default on Windows Server 2012 and later server versions. WS-MAN remoting has some issues, notably the double-hop issue. WS-MAN also needs extra work to remote to non-domain systems.

SSH remoting is only available in PowerShell Core; Windows PowerShell is restricted to WS-MAN remoting. It takes a significant amount of work to install and configure SSH remoting. The documentation isn’t as good as it needs to be. The advantages of SSH remoting are that you can easily access non-domain machines and non-Windows systems where SSH is the standard for remote access.


DARPA unveils first SSITH prototype to mitigate hardware flaws

DARPA aims to block hardware attacks at the source and reduce the need for software patches and has turned to a new microprocessor design to help achieve that goal.

DARPA first announced the project, dubbed SSITH, in 2017 and Dr. Linton Salmon, program manager in the microsystems technology office at DARPA, presented the project and the first prototype chip at this year’s DEF CON event in Las Vegas. He admitted he had to force the naming of the project — System Security Integration Through Hardware and Firmware — to fit the SSITH acronym.

According to Salmon, DARPA has six teams working on 15 different SSITH prototypes using open source RISC-V cores and ranging from low-end IoT devices up to high-end systems. 

“The goal of the program is to provide security against hardware vulnerabilities that are exploited through software [and] to increase security throughout the microelectronics enterprise, whether you’re talking about small IoT devices or you’re talking about a high-performance computing system that costs hundreds of millions of dollars,” Salmon said. 

He added that the reason why software was given responsibility for security was likely a function of how quickly technology iterates, but said that just asking software to handle security is “inappropriate.”

“Right now we’re doing patch and pray. If there’s a software weakness that exploits a hardware vulnerability — like buffer overflow — if there’s a patch, you go on the Common Vulnerabilities and Exposures [CVE] index at NIST and it’s ‘This is the attack on that software, and here’s the patch to fix it,'” Salmon said. “The problem is someone finds another way through that same software to exploit that same hardware weakness, and they do it again. Now you get another software patch. And each time you get a software patch, of course there’s this period between the time it’s actually employed for people to break in.”

The main example during Salmon’s talk was buffer overflow, which he said has been a problem for more than 20 years. He said the goal of SSITH would be to block buffer overflow and other hardware attacks at the source and reduce the need for software patches for flaws caused by underlying hardware issues.

“We are trying to make an important step forward in how to make electronic systems secure,” Salmon said. “We address the hardware vulnerabilities at their source and reduce the attack surface from thousands of independent software patches to a few basic hardware approaches.”

Salmon was careful to note that despite presenting SSITH at the DEF CON Voting Village, the purpose of SSITH is “not to make a secure election system” partially because that’s not DARPA’s mission, but also because the goal is much more broad.

DARPA felt the Voting Village would be a good place for a demonstration because it would be a popular open forum with a large interested audience, and because elections are “a critical national infrastructure.” Salmon added that DARPA also brought SSITH to DEF CON because the only way to know the device was secure was to put it out into the world for others to try to break.

“We appreciate the time and effort you will take in breaking our hardware,” Salmon said. However, a DARPA representative in the Voting Village hacking area said at the end of the first day that no one had attempted to attack the SSITH prototype. It is unclear if anyone attempted an attack during DEF CON.

DARPA and Voting Village representatives had not responded to requests for comment at the time of this post.

At this point in the SSITH program, DARPA has only developed prototypes of the low-end chip to test, but Salmon said the hope was to come back to DEF CON with more prototypes in the future.

At the end of the SSITH program, DARPA hopes the prototype designs it develops using open source hardware, including RISC-V processors, and frameworks will be adopted by other manufacturers in order to help make devices more secure.

Jake Williams, founder and president of Rendition Infosec in Augusta, Ga., said DARPA’s academic pursuit was interesting but he had doubts around SSITH’s implementation and adoption.

“This isn’t the first academic project to claim their customized hardware would block attacks. The performance overhead on hardware solutions has traditionally been pretty poor,” Williams told SearchSecurity. “As a corollary, shadow stacks were introduced [around] 2001 and still aren’t being used, despite not requiring special hardware and having a much lower performance overhead. They also completely eliminate practical buffer overflow exploitation on the stack.”

Beyond the performance questions, Williams was unsure if the market is willing to adopt a custom chip like SSITH.

“We’ve been down this road before. Nobody wants custom chips even if they’re safer. There’s no appetite for custom ‘secure’ processors in the market right now,” Williams said. “The security on the mobile side has been all about secure key storage. Anything that increases overhead simultaneously increases heat and decreases battery life, so I doubt there’s anything there.”


Crazy about chocolate, serious about people: meet the Dutch chocolate company that dared to be different – Microsoft News Centre Europe

Now, staff can build annual reports with Teams, easily sharing multiple source documents while knowing that team members are always working with the most current versions. As with every other aspect of work life for Tony’s employees, that strategy is carefully aligned with company values. “We use the chat feature in Teams to build the personal relationships we want to encourage,” says Ursem. “Email is more formal, more time-consuming. Chat lends itself to the shared humor and quick check-ins that naturally fit our culture and make us more efficient.”

With Teams, Tony’s team members have an informal, easy way to collaborate on projects with each other and with suppliers by using Teams Rooms. For IT Manager Rick van Doorn, otherwise known as Chocomatic Fanatic, nurturing spirit drives key decisions made by company leadership. “We say that our team comes first,” he explains. “Without the team, there is no company. And keeping that team collaborating optimally is vitally important to everything we do.”

Since implementing Teams, the company has averaged a 10 to 15 percent decrease in its total email volume. Ursem and van Doorn point out that this is happening despite steady company growth. “We’re pushing communication to the channels where it can happen most effectively,” says van Doorn.

A new world of work
The company focuses intently on messaging, both internally and externally. Even the design of its chocolate bars has a message – each bar is divided into unequal pieces, to mirror the inequalities in traditional profit sharing.

Internally, employees mix up workspaces every six months, sharing space with colleagues from different departments to build stronger team relations. That dedication to cultivating teamwork led the company to experiment with various apps purported to propel teamwork forward.

One of the biggest successes to come out of this experimentation was consolidating telephony with Office 365 in April 2019. As a result, employees can now access the company landline through Teams with the Vodafone Calling in Office 365 solution. For customer contact, Teams is extended with the Anywhere365 Contact Center. Because the solution interoperates with Salesforce, incoming calls can be logged in the company’s customer relationship management system for inclusion in the customer database.

“Using Teams with Vodafone Calling in Microsoft Office 365 amplifies the personal and transparent approach we’re known for,” says van Doorn. “We can talk to our chocofans with full knowledge of their prior backgrounds, orders, and feedback.” Incoming calls automatically route to the best person to handle the call, no matter where that person is, and contact information is included for the convenience of the person receiving the call.

A future in the cloud
Ten years ago, the company migrated to the cloud. “We were growing rapidly and needed to be scalable,” recalls van Doorn. “And we also looked at the growing number of relationships we were managing—both customers and suppliers, plus our rapidly expanding staff. We felt that committing to a complicated IT landscape in terms of connections, interfaces, and equipment would have been a risk.”

Cross-functional collaboration also underpins daily and strategic operations at Tony’s Chocolonely. Teamwork fans out from internal teams to a swath of partners that support different aspects of operations, including web developers, product wrapper suppliers, retail stores, and many more.

The company is so committed, in fact, that its suppliers also now collaborate in Teams. “We’ve implemented the entire Microsoft 365 suite,” says van Doorn. “All of our data is on SharePoint. With this modern working platform, we can easily collaborate with our partners, suppliers, and with each other.” The company also hopes to reduce business travel expenses by 10 percent now that so much collaboration takes place in the cloud.

For the people behind the Tony’s Chocolonely brand, it comes down to relationships. “By growing long-term relationships and paying a higher price—above the market price plus the Fairtrade Premium—to West African farmers, we’re trying to create an equal partnership,” says Ursem. “And if we’re performing well, other companies will be inspired to shoulder this responsibility, too.”

For more information, please visit the Microsoft Customer Stories blog.


Mini XL+, Mini E added to iXsystems FreeNAS Mini series

Open source hardware provider iXsystems introduced two new models to its FreeNAS Mini series storage system lineup: FreeNAS Mini XL+ and FreeNAS Mini E. The vendor also introduced tighter integration with TrueNAS and cloud services.

Designed for small offices, iXsystems’ FreeNAS Mini series models are compact, low-power and quiet. Joining the FreeNAS Mini and Mini XL, the FreeNAS Mini XL+ is intended for professional workgroups, while the FreeNAS Mini E is a low-cost option for small home offices.

The FreeNAS Mini XL+ is a 10-bay platform — eight 3.5-inch and one 2.5-inch hot-swappable bays and one 2.5-inch internal bay — and iXsystems’ highest-end Mini model. The Mini XL+ provides dual 10 Gigabit Ethernet (GbE) ports, eight CPU cores and 32 GB RAM for high-performance workloads. For demanding applications, such as hosting virtual machines or multimedia editing, the Mini XL+ scales beyond 100 TB.

For lower-intensity workloads, the FreeNAS Mini E is ideal for file sharing, streaming and transcoding video up to 1080p. The FreeNAS Mini E features four bays with quad GbE ports and 8 GB RAM, configured with 8 TB capacity.

The full iXsystems FreeNAS Mini series supports error-correcting code (ECC) RAM and the Z File System (ZFS) with data checksumming, unlimited snapshots and replication. IT operations can remotely manage the systems via the Intelligent Platform Management Interface and, depending on needs, the systems can be built as hybrid or all-flash storage.

FreeNAS provides traditional NAS and delivers network application services via plugin applications, featuring both open source and commercial applications to extend usability to entertainment, collaboration, security and backup. IXsystems’ FreeNAS 11.2 provides a web interface and encrypted cloud sync to major cloud services, such as Amazon S3, Microsoft Azure, Google Drive and Backblaze B2.

At Gartner’s 2018 IT Infrastructure, Operations & Cloud Strategies Conference, ubiquity of IT infrastructure was a main theme, and FreeNAS was named an option for file, block, object and hyper-converged software-defined storage. According to iXsystems, FreeNAS and TrueNAS are leading platforms for video, telemetry and other data processing in the cloud or a colocation facility.

IXsystems’ FreeNAS Mini lineup now includes the high-end FreeNAS Mini XL+ and entry-level FreeNAS Mini E.

With the upgrade, the FreeNAS Mini series can be managed by iXsystems’ unified management system, TrueCommand, which enables admins to monitor all TrueNAS and FreeNAS systems from a single UI and share access to alerts, reports and control of storage systems. A TrueCommand license is free for FreeNAS deployments of fewer than 50 drives.

According to iXsystems, FreeNAS Mini products reduce TCO by combining enterprise-class data management and open source economics. The FreeNAS Mini XL+ ranges from $1,499 to $4,299 and the FreeNAS Mini E from $749 to $999.

FreeNAS version 11.3 is available in beta, and the vendor anticipates a 12.0 release that will bring more efficiency to its line of FreeNAS Minis.


Netflix launches tool for monitoring AWS credentials

LAS VEGAS — A new open source tool looks to make monitoring AWS credentials easier and more effective for large organizations.

The tool, dubbed Trailblazer, was introduced during a session at Black Hat USA 2018 on Wednesday by William Bengtson, senior security engineer at Netflix, based in Los Gatos, Calif. During his session, Bengtson discussed how his security team took a different approach to reviewing AWS data in order to find signs of potentially compromised credentials.

Bengtson said Netflix’s methodology for monitoring AWS credentials was fairly simple and relied heavily on AWS’ own CloudTrail log monitoring tool. However, Netflix couldn’t rely solely on CloudTrail to effectively monitor credential activity; Bengtson said a different approach was required because of the sheer size of Netflix’s cloud environment, which is 100% AWS.

“At Netflix, we have hundreds of thousands of servers. They change constantly, and there are 4,000 or so deployments every day,” Bengtson told the audience. “I really wanted to know when a credential was being used outside of Netflix, not just AWS.”

That was crucial, Bengtson explained, because an unauthorized user could set up infrastructure within AWS, obtain a user’s AWS credentials and then log in using those credentials in order to “fly under the radar.”

However, monitoring credentials for usage outside of a specific corporate environment is difficult, he explained, because of the sheer volume of data regarding API calls. An organization with a cloud environment the size of Netflix’s could run into challenges with pagination for the data, as well as rate limiting for API calls — which AWS has put in place to prevent denial-of-service attacks.

“It can take up to an hour to describe a production environment due to our size,” he said.

To get around those obstacles, Bengtson and his team crafted a new methodology that didn’t require machine learning or any complex technology, but rather a “strong but reasonable assumption” about a crucial piece of data.

“The first call wins,” he explained, referring to when a temporary AWS credential makes an API call and grabs the first IP address that’s used. “As we see the first use of that temporary [session] credential, we’re going to grab that IP address and log it.”

The methodology, which is built into the Trailblazer tool, collects the first API call IP address and other related AWS data, such as the instance ID and assumed role records. The tool, which doesn’t require prior knowledge of an organization’s IP allocation in AWS, can quickly determine whether the calls for those AWS credentials are coming from outside the organization’s environment.

“[Trailblazer] will enumerate all of your API calls in your environment and associate that log with what is actually logged in CloudTrail,” Bengtson said. “Not only are you seeing that it’s logged, you’re seeing what it’s logged as.”

Bengtson said the only requirement for using Trailblazer is a high level of familiarity with AWS — specifically how AssumeRole calls are logged. The tool is currently available on GitHub.

Kontron heeds carrier demand for software, buys Inocybe

Kontron has acquired Inocybe Technologies, adding open source networking software to the German hardware maker’s portfolio of computing systems for the telco industry.

Kontron, which announced the acquisition this week, purchased Inocybe’s Open Networking Platform as telcos increasingly favor buying software separate from hardware. Kontron is a midsize supplier of white box systems to communications service providers (CSPs) and cable companies.

CSPs are replacing specialized hardware with more flexible software-centric networking, forcing companies like Kontron and Radisys, which recently sold itself to Reliance Industries, to reinvent themselves, said Lee Doyle, principal analyst at Doyle Research, based in Wellesley, Mass.

“This is part of Kontron’s efforts to move in a more software direction — Radisys has done this as well — and to a more service-oriented model, in this case, based on open source,” Doyle said.

Inocybe, on the other hand, is a small startup that could take advantage of the resources of a midsize telecom supplier, mainly since the market for open source software is still in its infancy within the telecom industry, Doyle said.

While Kontron did not release financial details, the price for Inocybe ranged from $5 million to $10 million, said John Zannos, previously the chief revenue officer of Inocybe and now a general manager of its technology within Kontron. The manufacturer plans to offer Inocybe’s Open Networking Platform as a stand-alone product while also providing hardware specially designed to run the platform.

Inocybe’s business

Inocybe’s business model is similar to that of Red Hat, which sells its version of open source Linux and generates revenue from support and services on the server operating system. Under Kontron, Inocybe plans to continue developing commercial versions of all the networking software built under the Linux Foundation.

The Open Networking Platform includes parts of the Open Network Automation Platform (ONAP), the OpenDaylight software-defined networking controller and the OpenSwitch network operating system. Service providers use Inocybe’s platform as a tool for traffic engineering, network automation and network functions virtualization.

Tools like Inocybe’s deliver open source software in a form that’s ready for testing and then deploying in a production environment. The more difficult alternative is downloading the code from a Linux Foundation site and then stitching it together into something useful.

“Open source is free, but making it work isn’t,” Doyle said.

Before the acquisition, Inocybe had a seat on the board of the open source networking initiative within the Linux Foundation and was active in the development of several technologies, including OpenDaylight and OpenSwitch. All that work would continue under Kontron, Zannos said.

WSO2 integration platform twirls on Ballerina language

The latest version of WSO2’s open source integration platform strengthens its case to help enterprises create and execute microservices.

The summer 2018 release of the WSO2 Integration Agile Platform, introduced at the company’s recent annual WSO2Con user conference, supports what the company calls an “integration agile” approach to implement microservices. Software development has moved to an agile approach, but legacy Waterfall approaches can stymie the integration process, and WSO2 aims to change that.

Solving the integration challenge

Integration remains a primary challenge for enterprise IT shops. The shift of automation functions to the public cloud complicates enterprises’ integration maps, but at the same time, enterprises want to use microservices and serverless architectures, which require new integration architectures, said Holger Mueller, an analyst at Constellation Research in San Francisco.

Improvements to the WSO2 integration platform, such as integration of the WSO2 API management, enterprise integration, real-time analytics and identity and access management options, aim to help enterprises adopt agile integration as they move from monolithic applications to microservices as part of digital transformation projects. The company also introduced a new licensing structure for enterprises to scale their microservices-based apps.

In addition, WSO2 Integration Agile Platform now supports the open source Ballerina programming language, a cloud-native programming language built by WSO2 and optimized for integration. The language features a visual interface that suits noncoding business users, yet also empowers developers to write code to integrate items rather than use bulky configuration-based integration schemes.

“Ballerina has a vaguely Java-JavaScript look and feel,” said Paul Fremantle, CTO of WSO2. “The concurrency model is most similar to Go and the type system is probably most similar to functional programming languages like Elm. We’ve inherited from a bunch of languages.” Using Ballerina, University of Oxford students finished projects in 45 minutes that typically took two hours in other languages, Fremantle said.

Some early Ballerina adopters requested more formal support, so WSO2 now offers a Ballerina Early Access Development Support package with unlimited query support to users, but this is only available until Ballerina 1.0 is released later this year, Fremantle said. Pricing for the package is $500 per developer seat, with a minimum package of five developers.

Paul Fremantle, CTO of WSO2, demoing Ballerina at BallerinaCon.

Integration at the heart of PaaS

Integration technology is central functionality for all PaaS offerings that aim to ease enterprise developers and DevOps pros into microservices, serverless computing, and even emerging technologies like blockchain, said Charlotte Dunlap, an analyst at GlobalData in Santa Cruz, Calif. WSO2 offers a competitive open source alternative to pricier options from larger rivals such as Oracle, IBM, and SAP, though it’s more of a “second tier” integration and API management provider and lacks the brand recognition to attract global enterprises, she said.

Nevertheless, Salesforce’s MuleSoft acquisition earlier this year exemplifies the importance of smaller integration vendors. Meanwhile, Red Hat offers integration and API management options, and public cloud platform providers will also build out these services.

How is the future of PowerShell shaping up?

Now that PowerShell is no longer just for Windows — and is an open source project — what are the long-term implications of these changes?

Microsoft technical fellow Jeffrey Snover developed Windows PowerShell based on the parameters in his “Monad Manifesto.” If you compare his document to the various releases of Windows PowerShell, you’ll see Microsoft produced a majority of Snover’s vision. But, now that this systems administration tool is an open source project, what does this mean for the future of PowerShell?

I’ve used PowerShell for more than 12 years and arguably have as good an understanding of PowerShell as anyone. I don’t know, or understand, where PowerShell is going, so I suspect that many of its users are also confused.

When Microsoft announced that PowerShell would expand to the Linux and macOS platforms, the company said it would continue to support Windows PowerShell, but would not develop new features for the product. Let’s look at some of the recent changes to PowerShell and where the challenges lie.

Using different PowerShell versions

While it’s not directly related to PowerShell Core being open source, one benefit is the ability to install different PowerShell versions side by side. I currently have Windows PowerShell v5.1, PowerShell Core v6.0.1 and PowerShell Core v6.1 preview 2 installed on the same machine. I can test code across all three versions using the appropriate console or Visual Studio Code.

One benefit of the open source move is that Windows PowerShell v5.1, PowerShell Core v6.0.1 and PowerShell Core v6.1 preview 2 can run on the same machine.
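
A quick way to confirm which engine a given console is running:

# Returns 5.1.x in a Windows PowerShell console and 6.0.x or 6.1.x in PowerShell Core
$PSVersionTable.PSVersion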

How admins benefit from open source PowerShell

The good points of the recent changes to PowerShell include access to the open source project, a faster release cadence and community input.

Another point in favor of PowerShell’s move is that you can see the code. If you can read C#, you can use that skill to track down and report on bugs you encounter. If you have a fix for the problem, then you can submit it.

Studying the code can give you insight into how PowerShell works. What it won’t explain is why PowerShell works the way it does. Previously, only Microsoft MVPs and very few other people had access to the Windows PowerShell code; now that the PowerShell Core code is open to everyone, the wider scrutiny can only make it a better product in the long run.

The PowerShell team expects to deliver a new release approximately every six months. The team released PowerShell v6.0 in January 2018. At the time of this article’s publication, version 6.1 is in its third preview release, with the final version expected soon. If the team maintains this release cadence, you can expect v6.2 in late 2018 or early 2019.

A faster release cadence implies quicker resolution of bugs and new features on a more regular basis. The downside to a faster release cadence is that you’ll have to keep upgrading your PowerShell instances to get the bug fixes and new features.

Of the Microsoft product teams, the Windows PowerShell team is one of the most accessible. They’ve been very active in the PowerShell community since the PowerShell v1 beta releases by engaging with users and incorporating their feedback. The scope of that dialog has expanded; anyone can report bug fixes or request new features.

The downside is that the originator of a request is often expected to implement the changes. If you follow the project, you’ll see there are just a handful of community members who are heavily active.

Shortcomings of the PowerShell Core project

This leads us to the disadvantages now that PowerShell is an open-source project. In my view, the problems are:

  • it’s an open source project;
  • there’s no overarching vision for the project;
  • the user community lacks an understanding of what’s happening with PowerShell; and
  • gaps in the functionality.


PowerShell’s inventor gives a status update on the automation tool

These points aren’t necessarily problems, but they are issues that could impact the PowerShell project in the long term.

Changing this vital automation and change management tool to an open source project has profound implications for the future of PowerShell. The PowerShell Core committee is the primary caretaker of PowerShell. This board has the final say on which proposals for new features will proceed.

At this point in time, the committee members are PowerShell team members. A number of them, including the original language design team, have worked on PowerShell from the start. If that level of knowledge is diluted, it could have an adverse effect on PowerShell.

The PowerShell project page supplies a number of short- to medium-term goals, but I haven’t seen a long-term plan that lays out the future of PowerShell. So far, the effort appears concentrated on porting the PowerShell engine to other platforms. If the only goal is to move PowerShell to a cross-platform administration tool, then more effort should go into bringing the current Windows PowerShell functionality to the other platforms.

Giving the PowerShell community a way to participate in the development of PowerShell is both a strength and a weakness. Some of the proposals show many users don’t understand how PowerShell works. There are requests to make PowerShell more like Bash or other shells.

Other proposals seek to change how PowerShell works, which could break existing functionality. The PowerShell committee has done a good job of managing the more controversial proposals, but clearing up long-term goals for the project would reduce requests that don’t fit into the future of PowerShell.

The project is also addressing gaps in functionality. Many of the current Windows PowerShell v5.1 modules will work in PowerShell Core. At the PowerShell + DevOps Global Summit 2018, one demonstration showed how to use implicit remoting to access Windows PowerShell v5.1 modules on the local machine through PowerShell Core v6. While not ideal, this method works until the module authors convert them to run in PowerShell Core.
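
The demonstration wasn't published as a ready-made script, but the general implicit remoting pattern from PowerShell Core looks roughly like the sketch below; the ActiveDirectory module is only an example, and WinRM must already be configured on the machine.

# Sketch: proxy a Windows PowerShell 5.1 module into a PowerShell Core session
$winps = New-PSSession -ComputerName localhost           # WS-MAN session runs Windows PowerShell 5.1
Import-Module -PSSession $winps -Name ActiveDirectory    # generates local proxy functions
# The proxied cmdlets now execute in the 5.1 session but are callable from PowerShell Core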

One gap that needs work is the functionality on Linux and macOS systems. PowerShell Core is missing the cmdlets needed to perform standard administrative tasks, such as working with network adapters, storage, printer management, local accounts and groups.

Availability of the ConvertFrom-String cmdlet would be a huge boost by giving admins the ability to use native Linux commands then turn the output into objects for further processing in PowerShell. Unfortunately, ConvertFrom-String uses code that cannot be open sourced, so it’s not an option currently. Until this functionality gap gets closed, Linux and macOS will be second-class citizens in the PowerShell world.

How to build a Packer image for Azure


Packer is an open source tool that automates the Windows Server image building process to give administrators a consistent approach to create new VMs.


For admins who prefer to roll their own Windows Server image, despite the best of intentions, issues can arise from these handcrafted builds.

To maintain some consistency — and avoid unnecessary help desk tickets — image management tools such as Packer can help construct golden images tailored for different needs. The Packer image tool automates the building process and helps admins manage Windows Server images. Packer offers a way to script the image construction process to produce builds through automation for multiple platforms at the same time. Admins can store validated Packer image configurations in code repositories and share them across locations to ensure stability across builds.

Build a Packer image for Azure

To demonstrate how Packer works, we’ll use it to build a Windows Server image. To start, download and install Packer for the operating system of choice. Packer offers an installation guide on its website.

Next, we need to figure out where to create the image. A Packer feature called builders creates images for various services, such as Azure, AWS, Docker, VMware and more. This tutorial will explain how to build a Windows Server image to run in Azure.

To construct an image for Azure, we have to meet a few prerequisites (a sketch for creating or collecting these follows the list). You need:

  • a service principal for Packer to authenticate to Azure;
  • a storage account to hold the image;
  • the resource group name for the storage account;
  • the Azure subscription ID;
  • the tenant ID for your Azure Active Directory; and
  • a storage container to place the VHD image.
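
A hedged sketch for creating or collecting these with the Az PowerShell module is shown below; the module choice, the eastus location and the service principal display name are assumptions, while the resource group and storage account names match the template that follows.

# Sketch: gather the Packer-for-Azure prerequisites with the Az module
Connect-AzAccount

$rg = New-AzResourceGroup -Name labtesting -Location eastus
$sa = New-AzStorageAccount -ResourceGroupName $rg.ResourceGroupName -Name adblabtesting `
        -Location eastus -SkuName Standard_LRS

# Service principal for Packer; how the secret is returned varies by Az module version
$sp = New-AzADServicePrincipal -DisplayName packer-sp

# Subscription and tenant IDs for the template
$ctx = Get-AzContext
$ctx.Subscription.Id
$ctx.Tenant.Id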

Validate the Windows Server build instructions

Next, it’s time to set up the image template. Every Packer image requires a JSON file called a template that tells Packer how to build the image and where to put it. An example of a template that builds an Azure image is in the code below. Save it with the filename WindowsServer.Azure.json.

{
  "variables": {
      "client_id": "",
      "client_secret": "",
      "object_id": ""
  },
  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "object_id": "{{user `object_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "resource_group_name": "labtesting",
    "storage_account": "adblabtesting",
    "subscription_id": "d660a51f-031d-4b8f-827d-3f811feda5fc",
    "tenant_id": "bb504844-07db-4019-b1c4-7243dfc97121",

    "capture_container_name": "vhds",
    "capture_name_prefix": "packer",

    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",
    "location": "East US",
    "vm_size": "Standard_D2S_v3"
  }]
}

Validate the template with the packer validate command before you start the build. We don’t want sensitive information in the template, so we create the client_id and client_secret variables and pass their values at runtime.

packer validate -var 'client_id=value' -var 'client_secret=value' WindowsServer.Azure.json

How to correct Packer build issues

After the command confirms the template is good, we build the image with nearly the same syntax as the validation command. For the purposes of this article, we will use placeholders for the client_id, client_secret and object_id references.

> packer build -var 'client_id=XXXX' -var 'client_secret=XXXX' -var 'object_id=XXXX' WindowsServer.Azure.json

When you run the build the first time, you may run into a few errors if the setup is not complete. Here are the errors that came up when I ran my build:

  • "Build 'azure-arm' errored: The storage account is located in eastus, but the build will take place in West US. The locations must be identical"
  • Build 'azure-arm' errored: storage.AccountsClient#ListKeys: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceGroupNotFound" Message="Resource group 'adblabtesting' could not be found."
  • "==> azure-arm: ERROR: -> VMSizeDoesntSupportPremiumStorage : Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_A2'."

Using Packer to build an image from another VM.

The error messages are straightforward and not difficult to fix.

However, the following error message is more serious:

==> azure-arm: ERROR: -> Forbidden : Access denied
==> azure-arm:
==> azure-arm:  …failed to get certificate URL, retry(0)

This indicates the use of the wrong object_id. Find the correct one in the role assignments for your Azure subscription.

After adding the right object_id, you will find a VHD image in Azure.


Databricks platform additions unify machine learning frameworks

SAN FRANCISCO — Open source machine learning frameworks have multiplied in recent years, as enterprises pursue operational gains through AI. Along the way, the situation has formed a jumble of competing tools, creating a nightmare for development teams tasked with supporting them all.

Databricks, which offers managed versions of the Spark compute platform in the cloud, is making a play for enterprises that are struggling to keep pace with this environment. At Spark + AI Summit 2018, which was hosted by Databricks here this week, the company announced updates to its platform and to Spark that it said will help bring the diverse array of machine learning frameworks under one roof.

Unifying machine learning frameworks

MLflow is a new open source framework on the Databricks platform that integrates with Spark, SciKit-Learn, TensorFlow and other open source machine learning tools. It allows data scientists to package machine learning code into reproducible modules, conduct and compare parallel experiments, and deploy models that are production-ready.

Databricks also introduced a new product on its platform, called Runtime for ML. This is a preconfigured Spark cluster that comes loaded with distributed machine learning frameworks commonly used for deep learning, including Keras, Horovod and TensorFlow, eliminating the integration work data scientists typically have to do when adopting a new tool.

Databricks’ other announcement, a tool called Delta, is aimed at improving data quality for machine learning modeling. Delta sits on top of data lakes, which typically contain large amounts of unstructured data. Data scientists can specify a schema they want their training data to match, and Delta will pull in all the data in the data lake that fits the specified schema, leaving out data that doesn’t fit.

MLflow includes a tracking interface for logging the results of machine learning jobs.

Users want everything under one roof

Each of the new tools is either in a public preview or alpha test stage, so few users have had a chance to get their hands on them. But attendees at the conference were broadly happy about the approach of stitching together disparate frameworks more tightly.

Saman Michael Far, senior vice president of technology at the Financial Industry Regulatory Authority (FINRA) in Washington, D.C., said in a keynote presentation that he brought in the Databricks platform largely because it already supports several query languages, including R, Python and SQL. Integrating these tools more closely with machine learning frameworks will help FINRA use more machine learning in its goal of spotting potentially illegal financial trades.

“It’s removed a lot of the obstacles that seemed inherent to doing machine learning in a business environment,” Far said.

John Gole, senior director of business analysis and product management at Capital One, based in McLean, Va., said the financial services company has implemented Spark throughout its operational departments, including marketing, accounts management and business reporting. The platform is being used for tasks that range from extract, transform and load jobs to SQL querying for ad hoc analysis and machine learning. It’s this unified nature of Spark that made it attractive, Gole said.

Going forward, he said he expects this kind of unified platform to become even more valuable as enterprises bring more machine learning to the center of their operations.

“You have to take a unified approach,” Gole said. “Pick technologies that help you unify your data and operations.”

Bringing together a range of tools

Engineers at ride-sharing platform Uber have already built integrations similar to what Databricks unveiled at the conference. In a presentation, Atul Gupte, a product manager at Uber, based in San Francisco, described a data science workbench his team created that brings together a range of tools — including Jupyter, R and Python — into a web-based environment that’s powered by Spark on the back end. The platform is used for all the company’s machine learning jobs, like training models to cluster rider pickups in Uber Pool or forecast rider demand so the app can encourage more drivers to get out on the roads.

Gupte said, as the company grew from a startup to a large enterprise, the old way of doing things, where everyone worked in their own silo using their own tool of choice, didn’t scale, which is why it was important to take this more standardized approach to data analysis and machine learning.

“The power is that everyone is now working together,” Gupte said. “You don’t have to keep switching tools. It’s a pretty foundational change in the way teams are working.”