Tag Archives: linux

How to manage Windows with Puppet

IT pros have long aligned themselves with either Linux or Windows, but it has grown increasingly common for organizations to seek the best of both worlds.

For traditional Windows-only shops, the thought of managing Windows systems with a server-side tool made for Linux may be unappealing, but Puppet has increased Windows Server support over the years and offers capabilities that System Center Configuration Manager and Desired State Configuration do not.

Use existing Puppet infrastructure

Many organizations use Puppet to manage Linux systems and SCCM to manage Windows Servers. SCCM works well for managing workstations, but admins could manage Windows more easily with Puppet code. For example, admins can easily audit a system configuration by looking at code manifests.

Admins manage Windows with Puppet agents installed on Puppet nodes. They use modules and manifests to deploy node configurations. If admins manage both Linux and Windows systems with Puppet, it provides a one-stop shop for all IT operations.
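As a minimal sketch of what such a manifest might look like (the node name, package and registry path here are illustrative, and the registry_value and Chocolatey resources assume the puppetlabs/registry and chocolatey modules are installed), a Windows node definition can declare both software and configuration state in a few lines. The same manifest works through a Puppet master or applied locally with puppet apply:

```puppet
# Hypothetical manifest for a Windows node; names are examples only.
node 'winserver01.example.com' {

  # Keep a package installed via the Chocolatey provider
  # (requires the chocolatey module on the node).
  package { '7zip':
    ensure   => installed,
    provider => 'chocolatey',
  }

  # Enforce a registry value (requires the puppetlabs/registry module).
  # Auditing this setting later means simply reading the manifest.
  registry_value { 'HKLM\System\CurrentControlSet\Control\Terminal Server\fDenyTSConnections':
    ensure => present,
    type   => dword,
    data   => 0,
  }
}
```

Because the desired state lives in code like this, the audit story described above follows naturally: the manifest is the documentation.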

Combine Puppet and DSC for greater support

Admins need basic Linux knowledge to run a Puppet master service. A Puppet master is not strictly required, because admins can write manifests on individual nodes and apply them locally, but that approach is unlikely to scale. For purely Windows-based shops, training in both Linux and Puppet will make taking the Puppet plunge easier. Setting up and configuring Windows systems in Puppet the way they would be configured in SCCM takes more time up front, so admins should design the code before users start writing and deploying Puppet manifests or DevOps teams add CI/CD pipelines.


DSC is one of the first areas admins look at to manage Windows with Puppet code. The modules are written in C# or PowerShell. DSC has no native monitoring GUI, which makes getting an overall view of a machine's configuration complex. Puppet, in its enterprise version, has native support for web-based reporting; admins can also use a free open source alternative, such as Foreman.

Due to the number of community modules available on the PowerShell Gallery, DSC receives the most Windows support for code-based management, but admins can combine Puppet with DSC to get complete coverage for Windows management. Puppet contains native modules, plus a DSC module with PowerShell DSC modules built in. Admins may also use the dsc_lite module, which can invoke almost any DSC resource from Puppet, including DSC modules maintained completely outside of Puppet.
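As a rough sketch of the dsc_lite approach (assuming the puppetlabs/dsc_lite module is installed and the referenced PowerShell DSC module is present on the node), the generic dsc resource names the DSC resource and its source module explicitly, so Puppet can drive DSC modules it has no built-in type for:

```puppet
# Invoke the WindowsFeature DSC resource through dsc_lite's generic type.
# The underlying DSC module (PSDesiredStateConfiguration) is maintained
# entirely outside of Puppet.
dsc { 'web_server_feature':
  resource_name => 'WindowsFeature',
  module        => 'PSDesiredStateConfiguration',
  properties    => {
    ensure => 'Present',
    name   => 'Web-Server',
  },
}
```

The trade-off is verbosity: dsc_lite does no validation of the properties hash, so typos surface only when the DSC resource runs.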

How to use Puppet to disable services

Administrators can use Puppet to run and disable services. Using native Puppet support without a DSC Puppet module, admins could write a manifest that ensures the Netlogon, BITS and W3SVC services are running after a Puppet run completes. Place the name of each Windows service in the Puppet array $svc_name:

$svc_name = ['netlogon', 'BITS', 'W3SVC']

service { $svc_name:
  ensure => 'running',
}


In the next example, the Puppet DSC module ensures that the web server Windows feature is installed on the node and reboots if a pending reboot is required.

dsc_windowsfeature { 'webserverfeature':
  dsc_ensure => 'present',
  dsc_name   => 'Web-Server',
}

reboot { 'dsc_reboot':
  message => 'Puppet needs to reboot now',
  when    => 'pending',
  onlyif  => 'pending_dsc_reboot',
}



ONAP Beijing release targets deployment scenarios

Deployability is the name of the game with the Linux Foundation’s latest Open Network Automation Platform architecture.

Central to the ONAP Beijing release are seven identified “dimensions of deployability,” said Arpit Joshipura, general manager of networking and orchestration at the Linux Foundation. These seven deployability factors comprise usability, security, manageability, stability, scalability, performance and resilience.

By identifying these dimensions, the Linux Foundation expects to better address and answer questions regarding documentation, Kubernetes management, disruptive testing, multisite failures and lifecycle management transactions. The goal is better consistency among ONAP deployments, Joshipura said.

Other than the standardized support for external northbound APIs that face a user’s operational and business support systems, the ONAP Beijing release had only a handful of architectural changes from the previous Amsterdam architecture, according to Joshipura. To that end, the ONAP Beijing release features four relevant MEF APIs taken from MEF’s Lifecycle Service Orchestration architecture and framework.

An additional architectural tweak targeted the ONAP Operations Manager (OOM). OOM now works with Kubernetes and can run with any cloud provider, Joshipura said.


“All the projects within ONAP can become Docker containers, and Kubernetes orchestrates all of them,” he said. “It helps with management, portability and efficiencies in terms of VMs [virtual machines] needed to run them.”

The ONAP Beijing release also introduced Multi-site State Coordination Services, which ONAP dubbed MUSIC. MUSIC coordinates databases and synchronizes policies for ONAP deployments in multiple locations, geographies and countries — relevant for providers like Vodafone and Orange. The release also provided standard templates and virtual network functions (VNFs) integration and validation, regarding information and data modeling.

Functional enhancements for ONAP Beijing

In addition to architecture adaptions, the ONAP Beijing release made a series of functional enhancements that include change management, hardware platform awareness and autoscaling with manual triggers. For example, the system follows policy to automatically move VNFs or add VMs if a certain location has excess compute capacity. This capability helps scale the VNFs appropriately, Joshipura said.

ONAP expects to make its next release, Casablanca, available at the end of 2018. ONAP Casablanca will continue work on operational and business support systems, in addition to adding more cross-project integration related to microservices architecture, Joshipura said. Further, ONAP Casablanca will introduce a formal VNF certification program and standardize features to support 5G and cross-cloud connectivity.

Linux Foundation reacts to Microsoft’s GitHub acquisition

Microsoft’s $7.5 billion acquisition of GitHub received a cautiously positive response from the Linux Foundation. Jim Zemlin, the Linux Foundation’s executive director, categorized the move as “pretty good news for the world of open source,” highlighting Microsoft’s expertise to make GitHub better. He did, however, stress Microsoft’s growing need to earn the trust of developers, while also acknowledging the existence of other open source developer platforms, such as GitLab and Stack Overflow.

“As we all evaluate the evolution of open source from the early days to now, I suggest we celebrate this moment,” Zemlin said about the purchase. “The multidecade progression toward the adoption and continual use of open source software in developing modern technological products, solutions and services is permanent and irreversible. The majority of the world’s economic systems, stock exchanges, the internet, supercomputers and mobile devices run the open source Linux operating system, and its usage and adoption continue to expand.”

Open source networking projects unite under Linux Foundation

The Linux Foundation this week announced the formation of the LF Networking Fund, or LFN, an initiative to combine the multiple open source networking projects currently under its supervision.

Host to many of the top open source networking projects, The Linux Foundation decided it was time to streamline how it oversees its various ventures, said Arpit Joshipura, general manager of networking and orchestration at The Linux Foundation.

The six founding open source projects involved in the LFN are FD.io, OpenDaylight, Open Network Automation Platform (ONAP), Open Platform for NFV (OPNFV), PNDA and Streaming Network Analytics System. An additional 83 member organizations are participating in LFN. Members of The Linux Foundation can choose whether they want to join LFN, and they can participate in as many or as few of the projects as they want.

The open source networking projects will remain technically independent, maintaining their existing charters and working toward their individual releases — all of which are still on schedule, according to Joshipura. But the projects will be under a single governing board and will share financial resources and staff, he said.

The governing board will comprise chosen representatives from platinum, gold and silver members of The Linux Foundation. LFN also includes a technical advisory council (TAC) and marketing advisory council (MAC), with individual member representatives. The board and councils will allow LFN members to share best practices for project development, testing, deployment and architecture integration, as well as regulatory considerations, across projects.

“The finance, budgets, prioritization and strategy are functions of the governing board, with input from the TAC and the MAC,” Joshipura said. So, if a project requests additional money for testing or is ready for a project release, for example, it goes to the advisory councils with the requests, he added.

Another issue LFN hopes to address is that of onboarding virtual network functions (VNFs). Instead of having inconsistent processes for VNF onboarding, LFN will work toward a single architecture and process to support that effort, Joshipura said.

“We don’t want ONAP to do it one way and OPNFV to do it another way,” he said. “Now, it’s one way to do it across projects.”

The LF Networking Fund: still business as usual

While the idea of cross-project collaboration has merit, Joshipura said LFN faces some challenges. One such challenge is simplifying the process to allow developers to join the projects.

“It [includes] a lot more education,” he said. “People do want to participate in other projects, but they’re not familiar with them. So, we want to make sure we bring the training from one project to another project.”

Lee Doyle, principal analyst at Doyle Research, said another issue that could trip up the initiative is the fact that The Linux Foundation is still a business — and all of these open source networking projects will still compete with each other.

“The Linux Foundation isn’t altruistic,” Doyle said. “It’s a business. People are still going to fight for resources and sponsors.”

While Joshipura stressed that the formal legal system outlined within LFN will make discussions and decisions simpler, Doyle said it still means a bunch of meetings.

It’s a laudable goal, he said, but any progress within the LF Networking Fund will take time.

Microsoft boasts SQL Server machine learning services

SEATTLE — While SQL Server 2017 continues to get attention for opening up to Linux, many of Microsoft’s database advances revolve around various ways the company is opening up analytics on its flagship database. Case in point: SQL Server machine learning services.

Open source data frameworks and development languages increasingly have become a path to next-level data analytics and machine learning, and SQL Server support is central to this strategy.

The clues are various. Even before 2017, Microsoft brought Apache Spark and the R language into the mix. Earlier this year, the Python language joined R as part of a newly minted Azure Machine Learning developer kit.

The story took a new turn at PASS Summit 2017 last week, as Microsoft featured the capability for Azure Machine Learning users to bring their analytics models into SQL Server 2017 for native T-SQL runtime scoring. An essential element in machine learning, scoring is a way to measure the likely success of machine-generated predictions.

Native T-SQL scoring can process large amounts of data at an average of under 20 milliseconds per row, according to Rohan Kumar, general manager of Microsoft’s database systems group, who spoke at PASS Summit. Native T-SQL scoring takes the form of a stored procedure for prediction that can be used without calling Microsoft’s R runtime, as was the case with SQL Server 2016.

This capability is important because models built and trained to, for example, suggest new products to likely buyers can produce results while the buyers are actually web browsing. As SQL Server machine learning services head in this direction, their use could grow.

Machine learning models

Supporting such scoring in the Microsoft database could make machine learning analytics more a part of operations and less an experimental effort, according to Ginger Grant, advanced analytics consultant for SolidQ and a presenter at the event.

“Traditionally, what has happened is that you’ve had a data science group that sort of sat in the corner creating machine learning models. They then threw that ‘over the wall’ to developers who had to code it in another language,” Grant said in an interview.

“Native T-SQL scoring allows people to modularize their work and environment, so things can be operationally implemented relatively quickly,” she said.

Microsoft’s new SQL Server machine learning services will help with real-time prediction, said Victoria Holt, who also took part in PASS Summit. She is an independent data analytics and platforms architect, as well as a trainer at SQL Relay.

“It is great to be able to leverage machine learning computation in-database,” she said.

This year’s inclusion of Python in the Microsoft Machine Learning workbench is also a step forward, Holt said. But it will take time for such new technologies to spread.

Holt noted that the “addition of Python extends the use of deep learning frameworks in the product. The retrained cognitive models will speed up consumption. But there is significant user training and upgrading that will need to happen before these models are adopted.”

Beyond T-SQL stored procedures

Microsoft’s moves are all about being more welcoming to open source communities.
Jen Stirrup, founder of Data Relish

Microsoft analytics advances discussed at PASS Summit were not limited to T-SQL. The company previewed scale-out features for Azure Analysis Services to improve response time for large query workloads on the cloud.

The company also moved to simplify data preparation for analytics in the cloud by releasing a public preview of Azure Data Factory that includes the ability to run SQL Server Integration Services in ADF.

Growing Microsoft SQL Server 2017 support for Python and R is significant, according to Jen Stirrup, founder of the U.K.-based Data Relish consultancy and PASS Summit board member.

Python is something of a portal to a crop of machine learning services entering the open source sphere almost daily. In Stirrup’s view, deeper support for advanced analytics is the next step for big data, and Microsoft is tuned to that notion.

“The company understands that customers really want to do something with the data,” she said.

“The data is such a key thing. It underpins your applications. Today, that means you have to reach out to software and languages that are not necessarily part of Microsoft’s .NET,” Stirrup continued. “Microsoft’s moves are all about being more welcoming to open source communities.”

Introducing a preview of the next generation of Skype for Linux | Skype Blogs

Great news for Skype for Linux users—the next generation of Skype for Linux is launching! Starting today, you can download Skype Preview for Linux and start enjoying new features across all your devices—including screen sharing and group chat.

Bring calls to life and collaborate on projects with screen sharing

With Skype for Linux, you can take advantage of the screen sharing feature on your desktop screen. Now, you can share content with everyone on the call—making it even easier to bring your calls to life and collaborate on projects.

Image showing the screen sharing feature on the desktop screen.

Turn everyday conversations into experiences with group chat

The new group chat feature for Skype for Linux allows you to talk with several friends at the same time. We even included options to personalize chats with emoticons, Mojis, and photos so you can express yourself with your own style. It’s a great way to turn your everyday conversations into experiences.

Image showing the chat feature using emoticons, Mojis, and photos in the group conversation.

The next generation of Skype for Linux is part of our broader strategy to rebuild Skype from the ground up with cloud technology—a more reliable platform that can scale to a much bigger audience. We’re making great improvements in the ways you like to connect with people and bringing your world closer together than ever before.

Image showing three people on a Skype for Linux call.

As a reminder—all Skype for Linux clients (versions 4.3 and older) were retired on July 1, 2017. If you’re running an older version, it’s time to upgrade to the Skype Preview for Linux.

We also recommend making sure that you have an up-to-date microphone and webcam for video calls, so you can take advantage of all the new features this preview version has to offer.

Try it out and tell us what you think by clicking the heart on the menu. Share your ideas in the Skype for Linux Community on how we can make Skype for Linux better. We’re looking forward to hearing your feedback.

DH2i DxEnterprise adds Linux support to improve availability

DH2i is expanding its DxEnterprise software for managing Microsoft Windows SQL Server into the Linux world. Along with protecting databases, DxEnterprise 17 also adds support for stateful Docker containers.

DH2i bills DxEnterprise as container management software that allows organizations to manage Microsoft Windows SQL Server for disaster recovery and high availability.

With the new version, DH2i is making it easier to move data while expanding its scope beyond SQL Server, said George Crump, founder of analyst firm Storage Switzerland.

“Getting data from point A to point B is half the problem,” Crump said. “Getting an application to run in point B can be as difficult as getting data there.”

The product can move “a workload from any host to any host, anywhere, at any time,” said Don Boxley, CEO and co-founder of DH2i.

The “any time” feature is the difference between high availability and what DH2i calls “Smart Availability,” Boxley said. For example, an organization can run a workload for five minutes in one location and then move it back.

“We don’t care what operating system you’re using,” Boxley said. “Instead of having multiple data protection processes, I can do it all one way.”

Of course, there are limits to the movement DxEnterprise 17 enables. For example, a customer cannot move a SQL instance from Windows to Linux.

The Docker view of DH2i DxEnterprise 17.

A sequel to SQL focus

DH2i opened for business in 2010 and previously stuck with what it knew well: SQL Server. DxEnterprise was the first application container management software designed for SQL Server.

Instead of having multiple data protection processes, I can do it all one way.
Don Boxley, CEO and co-founder, DH2i

Boxley acknowledged that Linux had been “a bit of a science project.” But with organizations now using Linux in production, the update opens the door to a new group of potential customers.

DxEnterprise 17 software uses Virtual Host technology to decouple databases and containers from their underlying infrastructures for workload portability, according to DH2i. The update adds portable, persistent management of any new or existing Docker container, as well as support for SQL Server 2017. DH2i technology allows managed instances and containers to move freely across varying OSes or distributions on their respective platforms, with built-in intelligent health and performance monitoring, alerting and automated orchestration.

DxEnterprise, which is generally available now, costs $1,500 per core for the perpetual license, plus $360 per core per year for 24/7 support and updates. On a subscription basis, it is $75 per core per month, which includes 24/7 support and updates.

Typical customers are medium to large enterprises, with the smallest DH2i private sector users having revenue of $1 billion, Boxley said. He said a company goal is to hit triple figures in customers by the end of 2017.

The new Docker capability in DxEnterprise is more of a future-proof feature. Boxley said “there’s excitement” among DH2i customers over the capability, but most are still in the pre-test and development phase and around 18 months from production use.

DH2i is looking to add built-in replication to DxEnterprise, Boxley said. But Crump said that should not be a high priority because DH2i could partner with a vendor that specializes in replication, such as Zerto.

Microsoft empowers Windows containers to erase OS divide

Linux and Windows used to be like oil and water, but new approaches to Windows containers will find a way to blend them.

Containers are increasingly important to achieve DevOps velocity. Containerization pulls provisioning and configuration management data into the container image so apps don’t look to the host OS for that information. As intelligence gets baked into containers, the host OS becomes less important to manage the environment.

Windows containers must still catch up to the state of the art in Linux, where containers originated. But as the technology develops, traditionally separate Linux and Windows operations will begin to meld behind container orchestration software.

“Whether you’re on a Linux or Windows environment, you don’t have to worry as much about how you’re building the server,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis. “You can use the same host image across [OS] platforms because you don’t have to worry about individual configuration needs for apps.”

Microsoft, which saw this trend coming, has jettisoned its Windows-only focus. Recent versions of products such as Azure Container Instances (ACI), ASP.NET Core 2016, Azure Web App for Containers, and the Azure App Service all support Linux and containers. Docker’s Enterprise Edition release in August 2017 also added support for clusters and multi-tier apps made up of a mix of Linux and Windows components.

“[Microsoft is] a cloud operating system company and all their focus is on selling compute and storage in Azure,” said Chris Riley, DevOps analyst at Fixate IO, a content strategy consulting firm based in Livermore, Calif., and a TechTarget contributor. “If Microsoft embraces open source and makes it easier to develop, then it’s going to get easier to get applications into Azure and increase adoption.”

If Microsoft embraces open source and makes it easier to develop, then it’s going to get easier to get applications into Azure and increase adoption.
Chris Riley, DevOps analyst, Fixate IO

Longtime Windows shops said Linux may be the more efficient choice for future container-based app development pipelines.

“We’re looking to break down some of our bigger apps into microservices to run on OpenShift Origin [container management platform], and for those microservices we’re looking into Linux-based databases,” said Aloisio Rocha, operations specialist at NetEnt, an online gaming systems service provider in Sweden. “We’ve found that, for a large amount of data, we don’t need all the functionality that comes with Windows, and Windows databases are a bit too bloated.”

For Microsoft, containers recapture lost time

OS convergence is likely, especially among IT organizations that want DevOps on Microsoft systems, but there’s still work to do if Windows containers are to achieve parity with Linux. Microsoft is comparatively late to the containers game, with Docker container support first generally available in Windows Server 2016. Windows support for container orchestration platforms such as Kubernetes and ACI remains in the preview stage.

“Microsoft is trying to do progressive things and they’re taking the right kind of steps,” said Brandon Cipes, managing director of DevOps at cPrime, an Agile consulting firm in Foster City, Calif. “But if you want a good indicator of how containers are still hard for them, ACI is debuting on Linux and not Windows.”

Microsoft’s strategy for different versions of the Windows OS has also been in flux as the company strives to make Windows container images more efficient. The company revealed in August 2017 that it will no longer support its Nano Server operating system on host servers, but reserve the micro OS for use as a base image inside containers, while Server Core serves as the primary Windows OS for hosts.

“By removing components relating to hardware, and operating system parts which are not needed inside a container, Microsoft thinks it can make the Nano Server container smaller,” explained Thomas Maurer, cloud architect for Switzerland-based ItnetX, a consulting firm that works with large enterprise clients in Europe. “This will make the startup of the containers faster and cut down on their resource consumption.”

This is another area where Windows containers must catch up to their open source counterparts — several container-specific micro-operating systems are already generally available for Linux.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.