Tag Archives: open

MariaDB X4 brings smart transactions to open source database

MariaDB has come a long way from its MySQL database roots. The open source database vendor has released its new MariaDB X4 platform, providing users with “smart transactions” technology that combines analytical and transactional capabilities in a single database.

MariaDB, based in Redwood City, Calif., was founded in 2009 by Monty Widenius, the original creator of MySQL, as a drop-in replacement for MySQL after Widenius grew disillusioned with the direction Oracle was taking the open source database.

Oracle acquired MySQL via its purchase of Sun Microsystems, which had bought MySQL AB in 2008. Now, in 2020, MariaDB still uses the core MySQL database protocol, but the MariaDB database has diverged significantly in other ways that are manifest in the X4 platform update.

The MariaDB X4 release, unveiled Jan. 14, puts the technology squarely in the cloud-native discussion, notably because MariaDB is allowing for specific workloads to be paired with specific storage types at the cloud level, said James Curtis, senior analyst of data, AI and analytics at 451 Research.

“There are a lot of changes that they implemented, including new and improved storage engines, but the thing that stands out are the architectural adjustments made that blend row and columnar storage at a much deeper level — a change likely to appeal to many customers,” Curtis said.

MariaDB X4 smart transactions converges database functions

MariaDB's divergence from MySQL has ramped up over the past three years, said Shane Johnson, senior director of product marketing at MariaDB. In recent releases, MariaDB has added Oracle database compatibility, which MySQL does not include, he noted.

In addition, MariaDB’s flagship platform provides a database firewall and dynamic data masking, both features designed to improve security and data privacy. The biggest difference today, though, between MariaDB and MySQL is how MariaDB supports pluggable storage engines, which gain new functionality in the X4 update.


Previously when using the pluggable storage engine, users would deploy an instance of MariaDB for transactional use cases with the InnoDB storage engine and another instance with the ColumnStore columnar storage engine for analytics, Johnson explained.

In earlier releases, a Change Data Capture process synchronized those two databases. In the MariaDB X4 update, transactional and analytical features have been converged in an approach that MariaDB calls smart transactions.

“So, when you install MariaDB, you get all the existing storage engines, as well as ColumnStore, allowing you to mix and match to use row and columnar data to do transactions and analytics, very simply, and very easily,” Johnson said.

MariaDB X4 aligns cloud storage

Another new capability in MariaDB X4 is the ability to more efficiently use cloud storage back ends.

“Each of the storage mediums is optimized for a different workload,” Johnson said.

For example, Johnson noted that Amazon Web Services' S3 is a good fit for analytics because of its high availability and capacity. He added that Amazon Elastic Block Store (EBS) is a better fit for transactional applications with row-based storage. The ability to mix and match EBS and S3 in the MariaDB X4 platform makes it easier for users to consolidate analytics and transactional workloads in the database.

“The update for X4 is not so much that you can run MariaDB in the cloud, because you’ve always been able to do that, but rather that you can run it with smart transactions and have it optimized for cloud storage services,” Johnson said.

MariaDB database as a service (DBaaS) is coming

MariaDB said it plans to expand its portfolio further this year.

The core MariaDB open source community project is currently at version 10.4; version 10.5, which will include the smart transactions capabilities, is slated to debut sometime in the coming weeks, according to MariaDB.

The new smart transaction capabilities have already landed in the MariaDB Enterprise 10.4 update. The MariaDB Enterprise Server has more configuration settings and hardening for enterprise use cases.

The full MariaDB X4 platform goes a step further with the MariaDB MaxScale database proxy, which provides automatic failover, transaction replay and a database firewall, as well as utilities that developers need to build database applications.

Johnson noted that new features have traditionally landed in the community version first, but as it happened, during this cycle MariaDB developers were able to get the features into the enterprise release more quickly.

MariaDB has plans to launch a new DBaaS product this year. Users can already deploy MariaDB to a cloud of choice on their own. MariaDB also has a managed service that provides full management for a MariaDB environment.

“With the managed service, we take care of everything for our customers, where we deploy MariaDB on their cloud of choice and we will manage it, administer it, operate it and upgrade it,” Johnson said. “We will have our own database as a service rolling out this year, which will provide an even better option.”

Go to Original Article

How should organizations approach API-based SIP services?

Many Session Initiation Protocol features are now available through open APIs for a variety of platforms. While voice over IP refers only to voice calls, SIP encompasses the setup and release of all calls, whether they are voice, video or a combination of the two.

Because SIP establishes and tears down call sessions, it brings multiple tools into play. SIP services enable the use of multimedia, VoIP and messaging, and can be incorporated into a website, program or mobile application in many ways.

The available APIs range from application-specific interfaces to libraries for programming languages, such as Java or Python, for web-based applications. Some newer interfaces are operating system-specific for Android and iOS. SIP is an open protocol, which makes most features available natively regardless of the SIP vendor. However, the features and implementations for SIP service APIs are specific to the API vendor.

Some of the more promising features include the ability to create a call during the shopping experience or from the shopping cart at checkout. This enables customer service representatives and customers to view the same product and discuss and highlight features within a browser, creating an enhanced customer shopping experience.
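As a rough sketch of how such an embedded click-to-call feature might invoke a SIP service API from a shopping cart (the endpoint, field names and SIP URI below are hypothetical; a real vendor's API will differ):

```python
import json

# Hypothetical endpoint for illustration only; the actual URL and request
# shape depend entirely on the SIP service vendor you select.
CALL_API_ENDPOINT = "https://sip.example.com/v1/calls"

def build_call_request(customer_id, product_url):
    """Build the JSON body for a click-to-call request from a shopping cart.

    The product page is passed along as shared context so the customer
    service representative can view the same product as the customer.
    """
    return {
        "caller": customer_id,
        "destination": "sip:support@shop.example.com",  # assumed SIP URI
        "media": ["audio"],                             # voice-only session
        "context": {"shared_page": product_url},        # co-browsing hint
    }

payload = build_call_request("cust-1001", "https://shop.example.com/p/42")
print(json.dumps(payload, indent=2))
```

An application would POST this payload to the vendor's endpoint; the SIP service then handles the session setup and teardown described above.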

The type of API will vary based on which offerings you use. Before issuing a request for a quote, issue a request for information (RFI) to learn what kinds of SIP service APIs a vendor has to offer. While this step takes time, it will allow you to determine what is available and what you want to use. You will want to determine the platform or platforms you wish to support. Some APIs may be more compatible with specific platforms, which will require some programming to work with other platforms.

Make sure to address security in your RFI. Some companies will program your APIs for you. If you don't have the expertise, or aren't sure what you're looking for, then it's advantageous to meet with some of those companies to learn what security features you need.


Microsoft Open Data Project adopts new data use agreement for datasets

Datasets compilation for Open Data

Last summer we announced Microsoft Research Open Data—an Azure-based repository-as-a-service for sharing datasets—to encourage the reproducibility of research and make research data assets readily available in the cloud. Among other things, the project started a conversation between the community and Microsoft’s legal team about dataset licensing. Inspired by these conversations, our legal team developed a set of brand-new data use agreements and released them for public comment on GitHub earlier this year.

Today we’re excited to announce that Microsoft Research Open Data will be adopting these data use agreements for several datasets that we offer.

Diving a bit deeper on the new data use agreements

The Open Use of Data Agreement (O-UDA) is intended for use by an individual or organization that is able to distribute data for unrestricted uses, and for which there is no privacy or confidentiality concern. It is not appropriate for datasets that might include materials subject to privacy laws (such as the GDPR or HIPAA) or other unlicensed third-party materials. The O-UDA meets the open definition: it does not impose any restriction with respect to the use or modification of data other than ensuring that attribution and limitation of liability information is passed downstream. In the research context, this implies that users of the data need to cite the corresponding publication with which the data is associated. This aids in the findability and reusability of data, an important tenet of the FAIR guiding principles for scientific data management and stewardship.

We also recognize that in certain cases, datasets useful for AI and research analysis may not be able to be fully “open” under the O-UDA. For example, they may contain third-party copyrighted materials, such as text snippets or images, from publicly available sources. The law permits their use for research, so following the principle that research data should be “as open as possible, as closed as necessary,” we developed the Computational Use of Data Agreement (C-UDA) to make data available for research while respecting other interests. We will prefer the O-UDA where possible, but we see the C-UDA as a useful tool for ensuring that researchers continue to have access to important and relevant datasets.

Datasets that reflect the goals of our project

The following examples reference datasets that have adopted the Open Use of Data Agreement (O-UDA).

Location data for geo-privacy research

Microsoft researcher John Krumm and collaborators collected GPS data from 21 people who carried a GPS receiver in the Seattle area. Users who provided their data agreed to it being shared as long as certain geographic regions were deleted. This work covers key research on privacy preservation of GPS data as evidenced in the corresponding paper, “Exploring End User Preferences for Location Obfuscation, Location-Based Services, and the Value of Location,” which was accepted at the Twelfth ACM International Conference on Ubiquitous Computing (UbiComp 2010). The paper has been cited 147 times, including for research that builds upon this work to further the field of preservation of geo-privacy for location-based services providers.

Hand gestures data for computer vision

Another example dataset is that of labeled hand images and video clips collected by researchers Eyal Krupka, Kfir Karmon, and others. The research addresses an important computer vision and machine learning problem that deals with developing a hand-gesture-based interface language. The data was recorded using depth cameras and has labels that cover joints and fingertips. The two datasets included are FingersData, which contains 3,500 labeled depth frames of various hand poses, and GestureClips, which contains 140 gesture clips (100 of these contain labeled hand gestures and 40 contain non-gesture activity). The research associated with this dataset is available in the paper “Toward Realistic Hands Gesture Interface: Keeping it Simple for Developers and Machines,” which was published in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.

Question-Answer data for machine reading comprehension

Finally, the FigureQA dataset generated by researchers Samira Ebrahimi Kahou, Adam Atkinson, Adam Trischler, Yoshua Bengio and collaborators, introduces a visual reasoning task for research that is specific to graphical plots and figures. The dataset has 180,000 figures with 1.3 million question-answer pairs in the training set. More details about the dataset are available in the paper “FigureQA: An Annotated Figure Dataset for Visual Reasoning” and corresponding Microsoft Research Blog post. The dataset is pivotal to developing more powerful visual question answering and reasoning models, which potentially improve accuracy of AI systems that are involved in decision making based on charts and graphs.

The data agreements are a part of our larger goals

The Microsoft Research Open Data project was conceived from the start to reflect Microsoft Research’s commitment to fostering open science and research, and to achieve this without compromising the ethics of collecting and sharing data. Our goal is to make it easier for researchers to maintain provenance of data while having the ability to reference and build upon it.

The addition of the new data agreements to Microsoft Research Open Data’s feature set is an exciting step in furthering our mission.

Acknowledgements: This work would not have been possible without the substantial team effort by Dave Green, Justin Colannino, Gretchen Deo, Sarah Kim, Emily McReynolds, Mario Madden, Emily Schlesinger, Elaine Peterson, Leila Stevenson, Dave Baskin, and Sergio Loscialo.

Author: Microsoft News Center

Datrium opens cloud DR service to all VMware users

Datrium plans to open its new cloud disaster recovery as a service to any VMware vSphere users in 2020, even if they’re not customers of Datrium’s DVX infrastructure software.

Datrium released disaster recovery as a service with VMware Cloud on AWS in September for DVX customers as an alternative to potentially costly professional services or a secondary physical site. DRaaS enables DVX users to spin up protected virtual machines (VMs) on demand in VMware Cloud on AWS in the event of a disaster. Datrium takes care of all of the ordering, billing and support for the cloud DR.

In the first quarter, Datrium plans to add a new Datrium DRaaS Connect for VMware users who deploy vSphere infrastructure on premises and do not use Datrium storage. Datrium DRaaS Connect software would deduplicate, compress and encrypt vSphere snapshots and replicate them to Amazon S3 object storage for cloud DR. Users could set backup policies and categorize VMs into protection groups, setting different service-level agreements for each one, Datrium CTO Sazzala Reddy said.

A second Datrium DRaaS Connect offering will enable VMware Cloud users to automatically fail over workloads from one AWS Availability Zone (AZ) to another if an Amazon AZ goes down. Datrium stores deduplicated vSphere snapshots on Amazon S3, and the snapshots are replicated to three AZs by default, Datrium chief product officer Brian Biles said.

Speedy cloud DR

Datrium claims system recovery can happen on VMware Cloud within minutes from the snapshots stored in Amazon S3, because it requires no conversion from a different virtual machine or cloud format. Unlike some backup products, Datrium does not convert VMs from VMware’s format to Amazon’s format and can boot VMs directly from the Amazon data store.

“The challenge with a backup-only product is that it takes days if you want to rehydrate the data and copy the data into a primary storage system,” Reddy said.

Although the “instant RTO” that Datrium claims to provide may not be important to all VMware users, reducing recovery time is generally a high priority, especially to combat ransomware attacks. Datrium commissioned a third party to conduct a survey of 395 IT professionals, and about half said they experienced a DR event in the last 24 months. Ransomware was the leading cause, hitting 36% of those who reported a DR event, followed by power outages (26%).

The Orange County Transportation Authority (OCTA) information systems department spent a weekend recovering from a zero-day malware exploit that hit nearly three years ago on a Thursday afternoon. The malware came in through a contractor’s VPN connection and took out more than 85 servers, according to Michael Beerer, a senior section manager for online system and network administration of OCTA’s information systems department.

Beerer said the information systems team restored critical applications by Friday evening and the rest by Sunday afternoon. But OCTA now wants to recover more quickly if a disaster should happen again, he said.

OCTA is now building out a new data center with Datrium DVX storage for its VMware VMs and possibly Red Hat KVM in the future. Beerer said DVX provides an edge in performance and cost over alternatives he considered. Because DVX disaggregates storage and compute nodes, OCTA can increase storage capacity without having to also add compute resources, he said.

Datrium cloud DR advantages

Beerer said the addition of Datrium DRaaS would make sense because OCTA can manage it from the same DVX interface. Datrium’s deduplication, compression and transmission of only changed data blocks would also eliminate the need for a pricy “big, fat pipe” and reduce cloud storage requirements and costs over other options, he said. Plus, Datrium facilitates application consistency by grouping applications into one service and taking backups at similar times before moving data to the cloud, Beerer said.

Datrium’s “Instant RTO” is not critical for OCTA. Beerer said anything that can speed the recovery process is interesting, but users also need to weigh that benefit against any potential additional costs for storage and bandwidth.

“There are customers where a second or two of downtime can mean thousands of dollars. We’re not in that situation. We’re not a financial company,” Beerer said. He noted that OCTA would need to get critical servers up and running in less than 24 hours.

Reddy said Datrium offers two cost models: a low-cost option with a 60-minute window and a “slightly more expensive” option in which at least a few VMware servers are always on standby.

Pricing for Datrium DRaaS starts at $23,000 per year, with support for 100 hours of VMware Cloud on-demand hosts for testing, 5 TB of S3 capacity for deduplicated and encrypted snapshots, and up to 1 TB per year of cloud egress. Pricing was unavailable for the upcoming DRaaS Connect options.

Other cloud DR options

Jeff Kato, a senior storage analyst at Taneja Group, said the new Datrium options would open up to all VMware customers a low-cost DRaaS offering that requires no capital expense. He said most vendors that offer DR from their on-premises systems to the cloud force customers to buy their primary storage.

George Crump, president and founder of Storage Switzerland, said data protection vendors such as Commvault, Druva, Veeam, Veritas and Zerto also can do some form of recovery in the cloud, but it’s “not as seamless as you might want it to be.”

“Datrium has gone so far as to converge primary storage with data protection and backup software,” Crump said. “They have a very good automation engine that allows customers to essentially draw their disaster recovery plan. They use VMware Cloud on Amazon, so the customer doesn’t have to go through any conversion process. And they’ve solved the riddle of: ‘How do you store data in S3 but recover on high-performance storage?’”

Scott Sinclair, a senior analyst at Enterprise Strategy Group, said using cloud resources for backup and DR often means either expensive, high-performance storage or lower cost S3 storage that requires a time-consuming migration to get data out of it.

“The Datrium architecture is really interesting because of how they’re able to essentially still let you use the lower cost tier but make the storage seem very high performance once you start populating it,” Sinclair said.


AT&T design for open routing covers many uses

AT&T has introduced an ambitious open design for a distributed, disaggregated chassis that hardware makers can use to build service provider-class routers ranging from single line card systems to large clusters of routing hardware.

AT&T recently submitted the specs for its white box architecture to the Open Compute Project, an initiative to share with the general IT industry designs for server and data center components. The AT&T design builds a router chassis around Broadcom’s StrataDNX Jericho2 system-on-a-chip for Ethernet switches and routers.

AT&T has been a leading advocate of open, disaggregated hardware as a way to reduce capital expenses. It plans to use the new design for edge and core routers that comprise its global Common Backbone. The CBB is the network that handles the service provider’s IP traffic.

Also, AT&T plans to use the Jericho2 chip in its design to power 400 Gbps interfaces for the carrier’s next-generation 5G wireless network services.

For several years, AT&T has advocated for an open disaggregated router, which means the hardware is responsible only for data traffic while its control plane runs in separate software. Therefore, AT&T’s new design specs are not a surprise.

“What is indeed interesting is that they are taking the approach to all router use cases including high-performance, high-capacity routing using this distributed chassis scale-out approach,” Rajesh Ghai, an analyst at IDC, said.

AT&T design committed to hardware neutrality

AT&T’s hardware-agnostic design is ambitious because its use in carrier-class routing would require a new approach to procuring, deploying, managing and orchestrating hardware, Ghai said. “I know they have tried [to develop that approach] in the lab over the past year with a startup.”

Whether hardware built on AT&T specs can find a home outside of the carrier’s data centers remains to be seen.

“AT&T’s interest in releasing the specs for everyone is to drive adoption of the open hardware approach by other SPs [service providers] and hence drive a new market for disaggregated routers,” Ghai said. “But this requires sophistication on the part of the SP that few have. So, we’ll have to see who jumps in next.”

At the very least, vendors know the specifications they must meet to sell router software to AT&T, Ghai said.

AT&T’s design specifies three key building blocks for router clusters. The smallest is a line card system that supports 40 100-Gbps ports, plus 13 400-Gbps fabric-facing ports. In the middle is a line card system supporting 10 400-Gbps client ports, plus 13 400-Gbps fabric-facing ports.

For the largest systems, there is a fabric device that supports 48 400-Gbps ports. AT&T’s specs also cover a fabric system with 24 400-Gbps ports.

AT&T has taken a more aggressive approach to open hardware than rival Verizon. The latter has said it would run its router control plane in the cloud and use it to manage devices from Cisco and Juniper Networks, Ghai said.


Are students prepared for real-world cyber curveballs?

With a projected cybersecurity “skills gap” numbering in the millions of unfilled positions, educating a diverse workforce is critical to corporate and national cyber defense moving forward. However, are today’s students getting the preparation they need to do the cybersecurity work of tomorrow?

To help educators prepare meaningful curricula, the National Institute of Standards and Technology (NIST) has developed the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework. The U.S. Department of Energy (DOE) is also doing its part to help educate our future cybersecurity workforce through initiatives like the CyberForce Competition™, designed to support hands-on cyber education for college students and professionals. The CyberForce Competition™ emulates real-world, critical infrastructure scenarios, including “cyber-physical infrastructure and lifelike anomalies and constraints.”

As anyone who’s worked in cybersecurity knows, a big part of operational reality are the unexpected curveballs ranging from an attacker’s pivot while escalating privileges through a corporate domain to a request from the CEO to provide talking points for an upcoming news interview regarding a recent breach. In many “capture the flag” and “cyber-range exercises,” these unexpected anomalies are referred to as “injects,” the curveballs of the training world.

For the CyberForce Competition™, anomalies are mapped across the seven NICE Framework Workforce Categories illustrated below:

Image showing the seven categories of cybersecurity: Operate and Maintain, Oversee and Govern, Collect and Operate, Securely Provision, Analyze, Protect and Defend, and Investigate.

NICE Framework Workforce categories, NIST SP 800-181.

Students were assessed based on how many and what types of anomalies they responded to and how effective/successful their responses were.

Tasks where students excelled

  • Threat tactic identification—Students excelled in identifying threat tactics and corresponding methodologies. This was shown through an anomaly that required students to parse and analyze a log file to identify indicators of insider threat; for example, too many sign-ins at one time, odd sign-in times, or sign-ins from non-standard locations.
  • Log file analysis and review—One task required students to identify non-standard browsing behavior of agents behind a firewall. To accomplish this task, students had to write code to parse and analyze the log files of a fictitious company’s intranet web servers. Statistical evidence from the event indicates that students are comfortable writing code to parse log file data and performing data analysis.
  • Insider threat investigations—Students gravitated toward the anomalies and tasks connected to insider threat identification, which map to the Securely Provision pillar. Using the log analysis techniques described above, students were able to identify, with a high rate of success, individuals with higher than average sign-in failure rates and those with anomalous successful logins, such as from many different devices or locations.
  • Network forensics—The data indicated that overall the students had success with the network packet capture (PCAP) forensics via analysis of network traffic full packet capture streams. They also had a firm grasp on related tasks, including file system forensic analysis and data carving techniques.
  • Trivia—Students were not only comfortable with writing code and parsing data, but also showed they have solid comprehension and intelligence related to cybersecurity history and trivia. Success in this category ranked in the higher percentile of the overall competition.
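The kind of insider-threat log analysis described above can be sketched in a few lines of Python. The record format, field names and thresholds here are invented for illustration and are not taken from the competition itself:

```python
from collections import Counter

# Synthetic sign-in records standing in for the competition's log files;
# the field names and values are made up for this sketch.
events = [
    {"user": "alice", "result": "failure", "hour": 3,  "location": "US"},
    {"user": "alice", "result": "failure", "hour": 3,  "location": "US"},
    {"user": "alice", "result": "failure", "hour": 4,  "location": "US"},
    {"user": "bob",   "result": "success", "hour": 10, "location": "US"},
    {"user": "bob",   "result": "success", "hour": 11, "location": "RU"},
]

def flag_insider_threats(events, max_failures=2, business_hours=range(8, 18)):
    """Flag users with too many failed sign-ins, off-hours sign-ins,
    or sign-ins from more than one location."""
    failures = Counter(e["user"] for e in events if e["result"] == "failure")
    off_hours = {e["user"] for e in events if e["hour"] not in business_hours}
    locations = {}
    for e in events:
        locations.setdefault(e["user"], set()).add(e["location"])
    flagged = {u for u, n in failures.items() if n > max_failures}
    flagged |= off_hours
    flagged |= {u for u, locs in locations.items() if len(locs) > 1}
    return flagged

print(flag_insider_threats(events))
```

Here "alice" is flagged for repeated failures at odd hours, and "bob" for signing in from two different locations, mirroring the indicators the anomaly asked students to find.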

Pillar areas for improvement

  • Collect and Operate—This pillar “provides specialized denial and deception operations and collection of cybersecurity information that may be used to develop intelligence.” Statistical analysis gathered during the competition indicated that students hesitated to attempt the activities in this pillar, including some tasks they had completed successfully in other exercises. For example, some fairly simple tasks, such as analyzing logs for specific numbers of entries and records on a certain date, had a zero percent completion rate. Reasons for non-completion could include technical inability on the part of the students, a poorly written anomaly/task, or even an issue with sign-ins to certain lab equipment.
  • Investigate—Based on the data, the Investigate pillar posed some challenges for the students. Students had a zero percent success rate on image analysis and an almost zero percent success rate on malware analysis. In addition, students had a zero percent success rate in this pillar for finding and identifying a bad file in the system.
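For perspective on the uncompleted tasks, counting log records on a given date needs only a few lines of code. The log format below is invented for illustration:

```python
from collections import Counter

# Toy log lines in a made-up "<date> <user> <action>" format.
log_lines = [
    "2019-11-01 alice LOGIN",
    "2019-11-01 bob LOGIN",
    "2019-11-02 alice UPLOAD",
    "2019-11-01 carol LOGIN",
]

def entries_on(lines, date):
    """Count how many log records fall on the given date."""
    return sum(1 for line in lines if line.startswith(date))

# Tally records per date across the whole log in one pass.
counts = Counter(line.split()[0] for line in log_lines)
print(entries_on(log_lines, "2019-11-01"), counts)
```

That students skipped tasks of this size supports the suggestion that the barrier was a poorly written anomaly or lab access rather than coding ability.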

Key takeaways

Frameworks like NIST NICE and competitions like the DOE CyberForce Competition are helping to train up the next generation of cybersecurity defenders. Analysis from the most recent CyberForce Competition indicates that students are comfortable with tasks in the “Protect and Defend” pillar and are proficient in many critical tasks, including network forensics and log analysis. The data points to areas for improvement especially in the “Collect and Operate” and “Investigate” pillars, and for additional focus on forensic skills and policy knowledge.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The CyberForce work was partially supported by the U.S. Department of Energy Office of Science under contract DE-AC02-06CH11357.

Author: Steve Clarke

Learn to set up and use PowerShell SSH remoting

When Microsoft announced in August 2016 that PowerShell would become an open source project running on Windows, Linux and macOS, there was an interesting wrinkle related to PowerShell remoting.

Microsoft said this PowerShell Core would support remoting over Secure Shell (SSH) as well as Web Services-Management (WS-MAN). You could always use the PowerShell SSH binaries, but the announcement indicated SSH support would be an integral part of PowerShell. This opened up the ability to perform remote administration of Windows and Linux systems easily using the same technologies.

A short history of PowerShell remoting

Microsoft introduced remoting in PowerShell version 2.0 in Windows 7 and Windows Server 2008 R2, which dramatically changed the landscape for Windows administrators. They could create remote desktop sessions to servers, but PowerShell remoting made it possible to manage large numbers of servers simultaneously.

Remoting in Windows PowerShell is based on WS-MAN, an open standard from the Distributed Management Task Force. But because WS-MAN-based remoting is Windows-oriented, you needed to use another technology, usually SSH, to administer Linux systems.

Introducing SSH on PowerShell Core

SSH is a protocol for managing systems over a possibly unsecured network. SSH works in a client-server mode and is the de facto standard for remote administration in Linux environments.

PowerShell Core uses OpenSSH, a fork of SSH 1.2.12 that was released under an open source license. OpenSSH is probably the most popular SSH implementation.

The code required to use WS-MAN remoting is installed as part of the Windows operating system. You need to install OpenSSH manually.

Installing OpenSSH

We have grown accustomed to installing software on Windows using the wizards, but the installation of OpenSSH requires more background information and more work from the administrator. Without some manual intervention, many issues can arise.

The installation process for OpenSSH on Windows has improved over time, but it’s still not as easy as it should be. Working with the configuration file leaves a lot to be desired.

There are two options when installing PowerShell SSH:

  1. On Windows 10 1809, Windows Server 1809, Windows Server 2019 and later, OpenSSH is available as an optional feature.
  2. On earlier versions of Windows, you can download and install OpenSSH from GitHub.

Be sure your system has the latest patches before installing OpenSSH.

Installing the OpenSSH optional feature

You can install the OpenSSH optional feature using PowerShell. First, check your system with the following command:

Get-WindowsCapability -Online | where Name -like '*SSH*'
Figure 1. Find the OpenSSH components in your system.

Figure 1 shows the OpenSSH client software is preinstalled.

You’ll need to use Windows PowerShell for the installation unless you download the WindowsCompatibility module for PowerShell Core. Then you can import the Deployment Image Servicing and Management module from Windows PowerShell and run the commands in PowerShell Core.

Install the server feature:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~
Path :
Online : True
RestartNeeded : False

The SSH files install in the C:\Windows\System32\OpenSSH folder.

Download OpenSSH from GitHub

Start by downloading the latest version from GitHub. The latest version of the installation instructions is maintained on the project's GitHub site.

After the download completes, extract the zip file into the C:\Program Files\OpenSSH folder. Change location to C:\Program Files\OpenSSH and run the bundled installation script to install the SSH services:

.\install-sshd.ps1

[SC] SetServiceObjectSecurity SUCCESS
[SC] ChangeServiceConfig2 SUCCESS
[SC] ChangeServiceConfig2 SUCCESS

Configuring OpenSSH

After OpenSSH installs, perform some additional configuration steps.

Ensure that the OpenSSH folder is included on the system path environment variable:

  • C:\Windows\System32\OpenSSH if installed as the Windows optional feature
  • C:\Program Files\OpenSSH if installed via the OpenSSH download
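One way to verify this is sketched below for the optional-feature location; adjust the folder if you installed from the download. The machine-level change requires an elevated session:

```powershell
# Check whether the OpenSSH folder is already on the path
$sshPath = 'C:\Windows\System32\OpenSSH'
$onPath = ($env:Path -split ';') -contains $sshPath

if (-not $onPath) {
    # Append the folder to the machine-level path (requires elevation)
    $machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$sshPath", 'Machine')
}
```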

Set the two services to start automatically:

Set-Service sshd -StartupType Automatic
Set-Service ssh-agent -StartupType Automatic

If you installed OpenSSH with the optional feature, then Windows creates a new firewall rule to allow inbound access of SSH over port 22. If you installed OpenSSH from the download, then create the firewall rule with this command:

New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' `
-Enabled True -Direction Inbound -Protocol TCP `
-Action Allow -LocalPort 22

Start the sshd service to generate the SSH keys:

Start-Service sshd

The SSH keys and configuration file reside in C:\ProgramData\ssh, which is a hidden folder. The default shell used by SSH is the Windows command shell. This needs to change to PowerShell:

New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
-Value "C:\Program Files\PowerShell\6\pwsh.exe" -PropertyType String -Force

Now, when you connect to the system over SSH, PowerShell Core will start and will be the default shell. You can also make the default shell Windows PowerShell if desired.
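For example, to switch the default shell to Windows PowerShell, point the same registry value at powershell.exe instead (a sketch; the standard Windows PowerShell path is assumed):

```powershell
# Use Windows PowerShell rather than PowerShell Core as the SSH default shell
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
 -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
 -PropertyType String -Force
```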

There's a bug in OpenSSH on Windows: it doesn't work with paths that contain spaces, such as the path to the PowerShell Core executable. The workaround is to create a symbolic link with a path that OpenSSH can use:

New-Item -ItemType SymbolicLink -Path C:\pwsh -Target 'C:\Program Files\PowerShell\6'

In the sshd_config file, un-comment the following lines:

PubkeyAuthentication yes
PasswordAuthentication yes

Add this line before other subsystem lines:

Subsystem  powershell  C:\pwsh\pwsh.exe -sshs -NoLogo -NoProfile

This tells OpenSSH to run PowerShell Core.

Comment out the line:

AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys

After saving the changes to the sshd_config file, restart the services:

Restart-Service sshd
Start-Service ssh-agent

You need to restart the sshd service after any change to the config file.

Using PowerShell SSH remoting

Using remoting over SSH is very similar to remoting over WS-MAN. You can access the remote system directly with Invoke-Command:

Invoke-Command -HostName W19DC01 -ScriptBlock {Get-Process}
richard@w19dc01's password:

You’ll get a prompt for the password, which won’t be displayed as you type it.

If it’s the first time you’ve connected to the remote system over SSH, then you’ll see a message similar to this:

The authenticity of host 'servername (' can't be established.
ECDSA key fingerprint is SHA256:().
Are you sure you want to continue connecting (yes/no)?

Type yes and press Enter.

You can create a remoting session:

$sshs = New-PSSession -HostName W19FS01
richard@w19fs01's password:

And then use it:

Invoke-Command -Session $sshs -ScriptBlock {$env:COMPUTERNAME}

You can enter an OpenSSH remoting session using Enter-PSSession in the same way as a WS-MAN session. You can enter an existing session or use the HostName parameter on Enter-PSSession to create the interactive session.
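For instance, reusing the session and host names from the examples above:

```powershell
# Enter the existing SSH-based session created earlier
Enter-PSSession -Session $sshs

# Or create and enter an interactive SSH session directly
Enter-PSSession -HostName W19FS01
```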

You can't disconnect an SSH-based session; that's a WS-MAN technique.

You can use WS-MAN and SSH sessions to manage multiple computers as shown in Figure 2.

The session information shows the different transport mechanism — WS-MAN and SSH, respectively — and the endpoint in use by each session.

Figure 2. Use WS-MAN and SSH sessions together to manage remote machines.

If you look closely at Figure 2, you’ll notice there was no prompt for the password on the SSH session because the system was set up with SSH key-based authentication.

Using SSH key-based authentication

Open an elevated PowerShell session. Change the location to the .ssh folder in your user area:

Set-Location -Path ~\.ssh

Generate the key pair:

ssh-keygen -t ed25519

Add the key file into the SSH-agent on the local machine:

ssh-add id_ed25519

Once you’ve added the private key into SSH-agent, back up the private key to a safe location and delete the key from the local machine.

Copy the id_ed25519.pub file into the .ssh folder for the matching user account on the remote server. You can create such an account if required:

$pwd = Read-Host -Prompt 'Password' -AsSecureString
Password: ********
New-LocalUser -Name Richard -Password $pwd -PasswordNeverExpires
Add-LocalGroupMember -Group Administrators -Member Richard

On the remote machine, copy the contents of the key file into the authorized_keys file:

scp id_ed25519.pub authorized_keys

The authorized_keys file needs its permissions changed:

  • Open File Explorer, right-click authorized_keys and navigate to Properties – Security – Advanced.
  • Click Disable Inheritance.
  • Select Convert inherited permissions into explicit permissions on this object.
  • Remove all permissions except for SYSTEM and your user account. Both should have Full control.
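If you prefer the command line, the same permissions can be set with icacls. This is a sketch, assuming the account you are logged on with is the one that should keep full control alongside SYSTEM:

```powershell
# Remove the inherited permissions from authorized_keys, then grant
# full control to SYSTEM and the current user only
icacls.exe authorized_keys /inheritance:r
icacls.exe authorized_keys /grant 'SYSTEM:F'
icacls.exe authorized_keys /grant "$($env:USERNAME):F"
```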


You’ll see references to using the OpenSSHUtils module to set the permissions, but there’s a bug in the version from the PowerShell Gallery that makes the authorized_keys file unusable.

Restart the sshd service on the remote machine.

You can now connect to the remote machine without using a password as shown in Figure 2.

If you’re connecting to a non-domain machine from a machine in the domain, then you need to use the UserName parameter after enabling key-pair authentication:

$ss = New-PSSession -HostName W19ND01 -UserName Richard

You need the username on the remote machine to match your domain username. You won’t be prompted for a password.

WS-MAN or SSH remoting?

Should you use WS-MAN or SSH based remoting? WS-MAN remoting is available on all Windows systems and is enabled by default on Windows Server 2012 and later server versions. WS-MAN remoting has some issues, notably the double hop issue. WS-MAN also needs extra work to remote to non-domain systems.

SSH remoting is only available in PowerShell Core; Windows PowerShell is restricted to WS-MAN remoting. It takes a significant amount of work to install and configure SSH remoting. The documentation isn’t as good as it needs to be. The advantages of SSH remoting are that you can easily access non-domain machines and non-Windows systems where SSH is the standard for remote access.


Splunk pricing worries users as SignalFx buy makes headlines

Splunk broke open its piggy bank and shelled out $1.05 billion for cloud monitoring vendor SignalFx this week, and its user base wonders if Splunk pricing means their IT budgets will be next.

Splunk, which began as a log analytics player, will gain expertise and software that collects metrics and distributed tracing data in cloud-native application environments with this acquisition, as it sets its sights on speedier data analytics features that support a broader swath of data sources.

The deal, valued at $1.05 billion, is the largest in recent memory in the IT monitoring space, analysts say. It also indicates future trends that will see cloud monitoring tools for increasingly complex applications and software frameworks such as container orchestration and service mesh take over enterprise IT. It’s the third merger between IT monitoring and automation firms in the past week — IT infrastructure monitoring firm Virtual Instruments also acquired cloud monitoring startup Metricly, and IT automation vendor Resolve Systems bought AIOps player FixStream.

“There’s a lot more interest in monitoring these days,” said Nancy Gohring, analyst at 451 Research in New York. “I would tie that to the growing interest in cloud and cloud-native technologies, and the maturity curve there, especially from the enterprise perspective.”

Splunk pricing a potential downside of cloud monitoring updates


Splunk users are aware of emerging trends toward cloud-native application monitoring, streaming data analytics and machine learning, and see the need for fresh approaches to data collection and data analytics as they grow.

“It lowers the entry barrier for machine learning and gets people asking the right questions,” said Steve Koelpin, lead Splunk engineer for a Fortune 1000 company in the Midwest, of Splunk’s Machine Learning Toolkit. “People share their code, and then others use that as a starting point to innovate.”

But an emphasis on machine learning that includes metrics data will demand large data sets and large amounts of processing power, and that comes at a cost.

“It’s expensive to stream metric data as it’s metered at a fixed 150 bytes,” Koelpin said. “This could mean substantially [higher] license costs compared to streaming non-metric data.”

Splunk pricing starts at $150 per gigabyte, per day, with volume discounts for larger amounts of data. The costs for Splunk software licenses and data storage have already driven some users away from the vendor’s tools and toward open source cloud monitoring software such as the Elastic Stack.


“We’re always evaluating pricing options, but our focus is more on making sure we build the best products we can,” said Tim Tully, senior vice president and CTO at Splunk, when asked about users’ Splunk pricing complaints.

Splunk offers the most extensive set of log analytics features on the market, as well as good data collection and analytics performance and stability, Gohring said. However, some users chafe at paying for that full set of features when they may not use them all.

“People want to collect more and more data, and there’s always a cost associated with that,” she said. “It’s something all vendors are struggling with and trying to address.”

Splunk prepares to gobble up massive data

As enterprises adopt cloud-native technologies, they will look to vendors such as Splunk, pricing notwithstanding, for cloud monitoring tools rather than build the cheaper homegrown systems early adopters favored, Gohring said.

“Those shops are learning the hard way that you have to change your approach to monitoring in these new environments, and, thus, there’s more demand for these types of tools,” she said.

In fact, 451 Research estimated that container monitoring tools will overtake the overall market revenue share held by container orchestration and management tools over the next five years. A June 2019 market monitor report on containers by the research group estimated total application containers market size at $1.59 billion in 2018, and projected growth to $5.54 billion in 2023. In 2018, management and orchestration software vendors generated 32% of that revenue, and monitoring and logging vendors did 24%. By 2023, however, 451 Research expects management and orchestration vendors to generate 25% of overall market revenue, and monitoring and logging vendors will do 31%.

In the meantime, Splunk plans to capture its slice of that pie, when beta products such as its Machine Learning Toolkit, Data Fabric Search and Data Stream Processor become generally available at its annual user conference this October. These products will boost the data collection and query performance of the Splunk Enterprise platform as it absorbs more data, and give users a framework to create their own machine learning algorithms against those data repositories under the Splunk UI. Splunk will also create a separate Kafka-like product for machine data processing based on open source software.

“We’re looking to add Apache Flink stream data processing under the Splunk UI for real-time monitoring and data enrichment, where Splunk acts as the query engine and storage layer for that data [under Data Fabric Search],” Tully said.

SignalFx has strong streaming analytics IP that will bolster those efforts in metrics and distributed tracing environments, Gohring said.


Mini XL+, Mini E added to iXsystems FreeNAS Mini series

Open source hardware provider iXsystems introduced two new models to its FreeNAS Mini series storage system lineup: FreeNAS Mini XL+ and FreeNAS Mini E. The vendor also introduced tighter integration with TrueNAS and cloud services.

Designed for small offices, iXsystems’ FreeNAS Mini series models are compact, low-power and quiet. Joining the FreeNAS Mini and Mini XL, the FreeNAS Mini XL+ is intended for professional workgroups, while the FreeNAS Mini E is a low-cost option for small home offices.

The FreeNAS Mini XL+ is a 10-bay platform — eight 3.5-inch and one 2.5-inch hot-swappable bays and one 2.5-inch internal bay — and iXsystem’s highest-end Mini model. The Mini XL+ provides dual 10 Gigabit Ethernet (GbE) ports, eight CPU cores and 32 GB RAM for high-performance workloads. For demanding applications, such as hosting virtual machines or multimedia editing, the Mini XL+ scales beyond 100 TB.

For lower-intensity workloads, the FreeNAS Mini E is ideal for file sharing, streaming and transcoding video up to 1080p. The FreeNAS Mini E features four bays with quad GbE ports and 8 GB RAM, configured with 8 TB capacity.

The full iXsystems FreeNAS Mini series supports error-correcting (ECC) RAM and the Z File System (ZFS) with data checksumming, unlimited snapshots and replication. IT operations can remotely manage systems via the Intelligent Platform Management Interface and, depending on needs, systems can be built as hybrid or all-flash storage.

FreeNAS provides traditional NAS and delivers network application services via plugin applications, featuring both open source and commercial applications to extend usability to entertainment, collaboration, security and backup. IXsystems’ FreeNAS 11.2 provides a web interface and encrypted cloud sync to major cloud services, such as Amazon S3, Microsoft Azure, Google Drive and Backblaze B2.

At Gartner’s 2018 IT Infrastructure, Operations & Cloud Strategies Conference, ubiquity of IT infrastructure was a main theme, and FreeNAS was named an option for file, block, object and hyper-converged software-defined storage. According to iXsystems, FreeNAS and TrueNAS are leading platforms for video, telemetry and other data processing in the cloud or a colocation facility.

IXsystems’ FreeNAS Mini lineup now includes the high-end FreeNAS Mini XL+ and entry-level FreeNAS Mini E.

With the upgrade, the FreeNAS Mini series can be managed by iXsystems’ unified management system, TrueCommand, which enables admins to monitor all TrueNAS and FreeNAS systems from a single UI and share access to alerts, reports and control of storage systems. A TrueCommand license is free for FreeNAS deployments of fewer than 50 drives.

According to iXsystems, FreeNAS Mini products reduce TCO by combining enterprise-class data management and open source economics. The FreeNAS Mini XL+ ranges from $1,499 to $4,299 and the FreeNAS Mini E from $749 to $999.

FreeNAS version 11.3 is available in beta, and the vendor anticipates a 12.0 release that will bring more efficiency to its line of FreeNAS Minis.
