
Tape storage capacity plays important role as data increases

As the amount of new data created is set to hit the multiple-zettabyte level in the coming years, where will we store it all?

With the release of LTO-8 and recent reports that total tape storage capacity continues to increase dramatically, tape is a strong option for long-term retention. But even tape advocates say it’s going to take a comprehensive approach to storage that includes other forms of media to handle the data influx.

Tape making a comeback?

The annual tape media shipment report released earlier this year by the LTO Program showed that 108,000 petabytes (PB) of compressed tape storage capacity shipped in 2017, an increase of 12.9% over 2016. The total marks a fivefold increase over the capacity of just over 20,000 PB shipped in 2008.

LTO-8, which launched in late 2017, provides 30 TB compressed capacity and 12 TB native, doubling the capacities of LTO-7, which came out in 2015. The 12 TB of uncompressed capacity is equivalent to 8,000 movies, 2,880,000 songs or 7,140,000 photos, according to vendor Spectra Logic.

“We hope now [with] LTO-8 another increase in capacity [next year],” said Laura Loredo, worldwide marketing product manager at Hewlett Packard Enterprise, one of the LTO Program’s Technology Provider Companies along with IBM and Quantum.

The media, entertainment and science industries have been traditionally strong users of tape for long-term retention. Loredo pointed to more recent uses that have gained traction. Video surveillance is getting digitized more often and kept for longer, and there is more of it in general. The medical industry is a similar story, as records get digitized and kept for long periods of time.

The ability to create digital content at high volumes is becoming less expensive, and with higher resolutions, those capacities are increasing, Quantum product and solution marketing manager Kieran Maloney said. So tape becomes a cost-efficient play for retaining that data.

Tape also brings security benefits. Because it is naturally isolated from a network, tape provides a true “air gap” for protection against ransomware, said Carlos Sandoval Castro, LTO marketing representative at IBM. If ransomware is in a system, it can’t touch a tape that’s not connected, making tapes an avenue for disaster recovery in the event of a successful attack.

“We are seeing customers come back to tape,” Loredo said.

[chart: The LTO roadmap projects out to the 12th generation.]

Tape sees clear runway ahead

“There’s a lot of runway ahead for tape … much more so than either flash or disk,” said analyst Jon Toigo, managing partner at Toigo Partners International and chairman of the Data Management Institute.

Even public cloud providers such as Microsoft Azure are big consumers of tape, Toigo said. Those cloud providers can use the large tape storage capacity for their data backup.


However, with IDC forecasting dozens of zettabytes in need of storage by 2025, flash and disk will remain important. One zettabyte is equal to approximately 1 billion TBs.

“You’re going to need all of the above,” Toigo said. “Tape is an absolute requirement for storing the massive amounts of data coming down the pike.”

It’s not necessarily about flash versus tape or other comparisons, it’s about how best to use flash, disk, tape and the cloud, said Rich Gadomski, vice president of marketing at Fujifilm and a member of the Tape Storage Council.

The cloud, for example, is helpful for certain aspects, such as offsite storage, but it shouldn’t be the medium for everything.

“A multifaceted data protection approach continues to thrive,” Gadomski said.

There’s still a lot of education needed around tape, vendors said. So often the conversation pits technologies against each other, Maloney said, but instead the question should be “Which technology works best for which use?” In the end, tape can fit into a tiered storage model that also includes flash, disk and the cloud.

In a similar way, the Tape Storage Council’s annual “State of the Tape Industry” report, released in March, acknowledged that organizations are often best served by using multiple media for storage.

“Tape shares the data center storage hierarchy with SSDs and HDDs and the ideal storage solution optimizes the strengths of each,” the report said. “However, the role tape serves in today’s modern data centers is quickly expanding into new markets because compelling technological advancements have made tape the most economical, highest capacity and most reliable storage medium available.”

LTO-8 uses tunnel magnetoresistance (TMR) for tape heads, a switch from the previous giant magnetoresistance (GMR). TMR provides a more defined electrical signal than GMR, allowing bits to be written to smaller areas of LTO media. LTO-8 also uses barium ferrite instead of metal particles for tape storage capacity improvement. With the inclusion of TMR technology and barium ferrite, LTO-8 is only backward compatible to one generation. Historically, LTO had been able to read back two generations and write back to one generation.

“Tape continues to evolve — the technology certainly isn’t standing still,” Gadomski said.

Tape also has a clearly defined roadmap, with LTO projected out to the 12th generation. Each successive generation after LTO-8 projects double the capacity of the previous version. As a result, LTO-12 would offer 480 TB compressed tape storage capacity and 192 TB native. It typically takes between two and three years for a new LTO generation to launch.

In addition, IBM and Sony have said they developed technology for the highest recording areal density for tape storage media, resulting in approximately 330 TB uncompressed per cartridge.

On the lookout for advances in storage

Spectra Logic, in its “Digital Data Storage Outlook 2018” report released in June, said it projects much of the future zettabytes of data will “never be stored or will be retained for only a brief time.”

“Spectra’s projections show a small likelihood of a constrained supply of storage to meet the needs of the digital universe through 2026,” the report said. “Expected advances in storage technologies, however, need to occur during this timeframe. Lack of advances in a particular technology, such as magnetic disk, will necessitate greater use of other storage mediums such as flash and tape.”

While the report claims the use of tape for secondary storage has declined with backup moving to disk, the need for tape storage capacity in the long-term archive market is growing.

“Tape technology is well-suited for this space as it provides the benefits of low environmental footprint on both floor space and power; a high level of data integrity over a long period of time; and a much lower cost per gigabyte of storage than all other storage mediums,” the report said.

Introducing the 2017 Ford F-150 Raptor Xbox One X Edition for Forza Motorsport 7

Forza Motorsport 7 fans, get ready to experience a new level of power and performance with the launch of the 2017 Ford F-150 Raptor Xbox One X Edition, a free gift for all players. The specially designed F-150 joins the lineup of more than 700 cars in the game, which showcases 60 frames per second racing on all Xbox One platforms and native 4K support on Xbox One X. With a custom livery based on Xbox One X’s codename “Project Scorpio” and numerous performance-minded upgrades, this Raptor is ready to intimidate all opponents in Forza Motorsport 7.

Longtime Forza fans know that Xbox and Ford have enjoyed a long and fruitful partnership over the years. It’s hard to miss Ford vehicles on the cover of recent Forza games, including the Ford GT on the cover of Forza Motorsport 6 and the Ford F-150 Raptor race truck leaping into action on the cover of Forza Horizon 3. That Ford partnership is more than just skin deep, too. Did you know that Ford features more cars in Forza Motorsport 7 than any other manufacturer in the game?

As for the F-150 Xbox One X Edition, that story began with Ford Performance’s announcement of the 2017 Ford F-150 Raptor at the 2015 North American International Auto Show. Later in the year, Ford Performance saw success with the Raptor race truck at the Baja 1000, finishing third. This success led to the debut of the truck at the 2017 SEMA (Specialty Equipment Market Association) show in Las Vegas, where over-the-top custom vehicles are king and the Xbox One X Edition Ford F-150 fit right in.

While the truck is significantly taller than stock – over 93 inches tall (2.3 meters) on 38 inch BF Goodrich Krawler T/A tires – the team behind the Xbox One X Raptor still wanted it to retain the race truck’s Baja-tested performance characteristics. The Raptor’s livery was designed by illustrator Hydro74 and features a stylized “Project Scorpio” scorpion motif. Look closely and you can see that the scorpion’s body includes many details from Xbox One X, including the thumb sticks, d-pad, and ABXY buttons.

Starting today, Forza Motorsport 7 players can download the free Xbox One X Raptor from their Forza Motorsport 7 Message Center, as well as download a Ford F-150 Raptor Xbox One X Edition Windows 10 Theme set in the Microsoft Store. In addition, we’ve got a new Rivals event available today in the Featured Event channel, starring the Raptor. Look for the “Delivering the Sting” event and set your best time on the leaderboard. Can you tame the power of the Xbox One X Raptor?

February Forza Events

Looking forward to the next four weeks, Forza Motorsport 7 players can expect a great month of fun in February. First up, the next content update and car pack will launch in Forza Motorsport 7 on February 19th; look for more details as we get closer to launch. February will also see new events arriving in the game. In addition to a new Bounty Hunter event starring Turn 10 Studio Manager Alan Hartman, we’ll be launching a new event featuring a twist on the traditional Rivals format. All that, plus the arrival of new Driver Gear celebrating the Winter Olympics, the Lunar New Year, and more.

Finally, Forza fans can look forward to the official announcement of our Forza Racing Championship plans this month. Look for details on schedule, format, and more in the coming weeks.

How to Monitor Hyper-V Performance using PNP4Nagios

At a high level, you need three things to run a trouble-free datacenter (even if your datacenter consists of two mini-tower systems stuffed in a closet): intelligent architecture, monitoring, and trend analysis. Intelligent architecture consists of making good purchase decisions and designing virtual machines that can appropriately handle their load. Monitoring allows you to prevent or respond quickly to emergent situations. Trend analysis helps you to determine how well your reality matches your projections and greatly assists in future architectural decisions. In this article, we’re going to focus on trend analysis. We will set up a data collection and graphing system called “PNP4Nagios” that will allow you to track anything that you can measure. It will hold that data for four years. You can display it in graphs on demand.

What You Get

I know that intro was a little heavy. So, to put it more simply, I’m giving you graphs. Want to know how much CPU that VM has been using? Trying to figure out how quickly your VMs are filling up your Cluster Shared Volumes? Curious about a VM’s memory usage? We have all of that.

Where I find it most useful: Getting rid of vendor excuses. We all have at least one of those vendors that claim that we’re not providing enough CPU or memory or disk or a combination. Now, you can visually determine the reasonableness of their demands.

First, the host and service screens in Nagios will get a new graph icon next to every host and service that tracks performance data. Also, hovering over one of those graph icons will show a preview of the most recent chart:

[screenshot: Nagios main screen with the new graph icons and a hover preview]

Second, clicking any of those icons will open a new tab with the performance data graph for the selected item.

[screenshot: the PNP4Nagios chart page for a selected item]

Just as the Nagios pages periodically refresh, the PNP4Nagios page will update itself.

Additionally, you can do the following:

  • Click-dragging a section on a graph will cause it to zoom. If you’ve ever used the zoom feature in Performance Monitor, this is similar.
  • In the Actions bar, you can:
    • Set a custom time/date range to graph
    • Generate a PDF of the visible charts
    • Generate XML summary data
  • Create a “basket” of the graphs that you view most. The basket persists between sessions, so you can build a dashboard of your favorite charts

What You Need

Fortunately, you don’t need much to get going with PNP4Nagios.

Fiscal Cost

Let’s answer the most important question: what does it cost? PNP4Nagios does not require you to purchase anything. Their site does include a Donate button. If your organization finds PNP4Nagios useful, it would be good to throw a few dollars their way.

You’ll need an infrastructure to install PNP4Nagios on, of course. We’ll wrap that up into the later segments.

Nagios

As its name implies, PNP4Nagios needs Nagios. PNP4Nagios installs alongside Nagios on the same system. We have a couple of walkthroughs for installing Nagios as a Hyper-V guest, divided by distribution.

The installation really doesn’t change much between distributions. The differences lie in how you install the prerequisites and in how you configure Apache. If you know those things about your distribution, then you should be able to use either of the two linked walkthroughs to great effect. If you’d rather see something on your exact distribution, the official Nagios project has stepped up its game on documentation. If we haven’t got instructions for your distribution, maybe they do. There are still things that I do differently, but nothing of critical importance. Also, being a Hyper-V blog, I have included special items just for monitoring Hyper-V, so definitely look at the post-installation steps of my articles.

Also, if you want to use SSL and Active Directory to secure your Nagios installation, we’ve got an article for that.

Disk Space

According to the PNP4Nagios documentation, each item that you monitor will require about 400 kilobytes once it has reached maximum data retention. That assumes that you will leave the default historical interval and retention lengths. More information can be found on the PNP4Nagios site. So, 20 systems with 12 monitors apiece will use about 96 megabytes.

PNP4Nagios itself appears to use around 7 megabytes once installed and extracted.

Downloading PNP4Nagios

PNP4Nagios is distributed on Sourceforge: https://sourceforge.net/projects/pnp4nagios/files/latest/download.

As always, I recommend that you download to a standard workstation and then transfer the files to the Nagios server. Since I operate using a Windows PC and run Nagios on a Linux system, WinSCP is my choice of transfer tool.

On my Linux systems, I create a “Download” directory in my home folder and place everything there. The install portion of my instructions will be written using the file’s location as a starting point. So, for me, I begin with
cd ~/Downloads.

Installing PNP4Nagios

PNP4Nagios installs quite easily.

PNP4Nagios Prerequisites

Most of the prerequisites for PNP4Nagios automatically exist in most Linux distributions. Most of the remainder will have been satisfied when you installed Nagios. The documentation lists them: http://docs.pnp4nagios.org/pnp-0.6/about#required_software.

  • Perl, at least version 5. To check your installed Perl version:
    perl -v
  • RRDTool: This one will not be installed automatically or during a regular Nagios build. Most distributions include it in their mainstream repositories. Install with your distribution’s package manager.
    • CentOS and most other RedHat-based distributions:
      sudo yum install perl-rrdtool
    • SUSE-based systems:
      sudo zypper install rrdtool
    • Ubuntu and most other Debian-based distributions:
      sudo apt install rrdtool librrds-perl
  • PHP, at least version 5. This would have been installed with Nagios. Check with:
    php -v
  • GD extension for PHP. You might have installed this with Nagios. Easiest way to check is to just install it; it will tell you if you’ve already got it.
    • CentOS and most other RedHat-based distributions:
      sudo yum install php-gd
    • SUSE-based systems:
      sudo zypper install php-gd
    • Ubuntu and most other Debian-based distributions:
      sudo apt install php-gd
  • mod_rewrite extension for Apache. This should have been installed along with Nagios. How you check depends on whether your distribution uses “apache2” or “httpd” as the name of the Apache executable:
    • CentOS and most other RedHat-based distributions:
      sudo httpd -M | grep rewrite
    • Ubuntu, openSUSE, and most Debian and SUSE distributions:
      sudo apache2ctl -M | grep rewrite
  • There will be a bit more on this in the troubleshooting section near the end of the article, but if you’re running a more current version of PHP (like 7), then you may not have the XML extension built-in. I only ran into this problem on my Ubuntu installation. I solved it with this:
    sudo apt install php-xml
  • openSUSE was missing a couple of PHP modules on my system:
    sudo zypper install php-sockets php-zlib

If you are missing anything that I did not include instructions for, you can visit one of my articles on installing Nagios. If I haven’t got one for your distribution, then you’ll need to search for instructions elsewhere.

Unpacking and Installing PNP4Nagios

As I mentioned in the download section, I place my downloaded files in ~/Downloads. I start from there (with
cd ~/Downloads). Start these directions in the folder where you placed your downloaded PNP4Nagios tarball.

  1. Unpack the tarball. I wrote these directions with version 0.6.26. Modify your command as necessary (don’t forget about tab completion!):
    tar xzf pnp4nagios-0.6.26.tar.gz
  2. Move to the unpacked folder:
    cd ./pnp4nagios-0.6.26/
  3. Next, you will need to configure the installer. Most of us can just use it as-is. Some of us will need to override some things, such as the Nagios user groups. To determine if that applies to you, open /usr/local/nagios/etc/nagios.cfg. Look for the following section:
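
    In a stock installation, that section looks something like this (a minimal excerpt; your copy will include more surrounding comments):

    nagios_user=nagios
    nagios_group=nagios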



    If both nagios_user and nagios_group are “nagios”, then you don’t need to do anything special.
    Regular configuration:
    ./configure
    Configuration with overrides:
    ./configure --with-nagios-user=naguser --with-nagios-group=nagcmd
    Other overrides are available. You can view them all with
    ./configure --help. One useful override would be to change the location of the emitted perfdata files to an auxiliary volume to control space usage. On my Ubuntu system, I needed to override the location of the Apache conf files:
    ./configure --with-httpd-conf=/etc/apache2/sites-available

  4. When configure completes, check its output. Verify that everything looks OK. Especially pay attention to “Apache Config File” — note the value because you will access it later. If anything looks off, install any missing prerequisites and/or use the appropriate configure options. You can continue running ./configure until everything suits your needs.
  5. Compile the program:
    make all. If you have an “oh no!” moment in which you realize that you missed something, you can still re-run ./configure and then compile again.
  6. Because we’re doing a new installation, we will have it install everything:
    sudo make fullinstall. Be aware that we are now using sudo. That’s because it will need to copy files into locations that your regular account won’t have access to. For an upgrade, you’d likely only want
    sudo make install. Please check the documentation for additional notes about upgrading. If you didn’t pay attention to the output file locations during configure, they’ll be displayed to you again.
  7. We’re going to be adding a bit of flair to our Nagios links. Enable the pop-up extension with:
    sudo cp ./contrib/ssi/status-header.ssi /usr/local/nagios/share/ssi/

Installation is complete. We haven’t wired it into Nagios yet, so don’t expect any fireworks.

Configure Apache Security for PNP4Nagios

If you just use the default Apache security for Nagios, then you can skip this whole section. As outlined in my previous article, I use Active Directory authentication. Really, all that you need to do is duplicate your existing security configuration to the new site. Remember how I told you to pay attention to the output of configure, specifically “Apache Config File”? That’s the file to look in.

My “fixed” file looks like this:

Only a single line needed to be changed to match my Nagios virtual directories.
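
For reference, the stock file that configure emits looks roughly like the following; to secure it the same way as the Nagios site, you swap the Auth* lines for whatever directives (LDAP, in my case) your Nagios configuration already uses. Treat this as a sketch rather than my exact file:

Alias /pnp4nagios "/usr/local/pnp4nagios/share"

<Directory "/usr/local/pnp4nagios/share">
    AllowOverride None
    Order allow,deny
    Allow from all
    AuthName "Nagios Access"
    AuthType Basic
    AuthUserFile /usr/local/nagios/etc/htpasswd.users
    Require valid-user
</Directory>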

Initial Verification of PNP4Nagios Installation

Before we go any further, let’s ensure that our work to this point has done what we expected.

  1. If you are using a distribution whose Apache enables and disables sites by symlinking entries from sites-available into sites-enabled and you instructed PNP4Nagios to place its files in sites-available (ex: Ubuntu), enable the site:
    sudo a2ensite pnp4nagios.conf
  2. Restart Apache.
    1. CentOS and most other RedHat-based distributions:
      sudo service httpd restart
    2. Almost everyone else:
      sudo service apache2 restart
  3. If necessary, address any issues with Apache starting. For instance, Apache on my openSUSE box really did not like the “Order” and “Allow” directives.
  4. Once Apache starts correctly, access http://yournagiosserveraddress/pnp4nagios. For instance, my internal URL is http://nagios.siron.int/pnp4nagios. Remember that you copied over your Nagios security configuration, so you will log in using the same credentials that you use on a normal Nagios site.
  5. Fix any problems indicated by the web page. Continue reloading the Apache server and the page as necessary until you get the green light:
    [screenshot: the PNP4Nagios environment check with all items green]
  6. Remove the file that validates the installation:
    sudo rm /usr/local/pnp4nagios/share/install.php

Installation was painless on my CentOS and Ubuntu systems. openSUSE gave me more drama. In particular, it complained about “PHP zlib extension not available” and “PHP socket extension not available”. Very easy to fix:
sudo zypper install php-sockets php-zlib. Don’t forget to restart Apache after making these changes.

Initial Configuration of Nagios for PNP4Nagios

At this point, you have PNP4Nagios mostly prepared to do its job. However, if you try to access the URL, you’ll get a message that says that it doesn’t have any data: “perfdata directory “/usr/local/pnp4nagios/var/perfdata/” is empty. Please check your Nagios config.” Nagios needs to start feeding it data.

We start by making several global changes. If you are comparing my walkthrough to the official PNP4Nagios documentation, be aware that I am guiding you to a Bulk + NPCD configuration. I’ll talk about why after the how-to.

Global Nagios Configuration File Changes

In the text editor of your choice, open /usr/local/nagios/etc/nagios.cfg. Find each of the entries that I show in the following block and change them accordingly. Some don’t need anything other than to be uncommented:
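
The entries below reflect the Bulk + NPCD configuration from the PNP4Nagios documentation; treat it as a sketch, and copy the long *_file_template values verbatim from that documentation rather than retyping them:

process_performance_data=1

service_perfdata_file=/usr/local/pnp4nagios/var/service-perfdata
service_perfdata_file_template=<copy the SERVICEPERFDATA template from the PNP4Nagios docs>
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file

host_perfdata_file=/usr/local/pnp4nagios/var/host-perfdata
host_perfdata_file_template=<copy the HOSTPERFDATA template from the PNP4Nagios docs>
host_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file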

Next, open /usr/local/nagios/etc/objects/templates.cfg. At the end, you’ll find some existing commands that mention “perfdata”. After those, add the commands from the following block. If you don’t use the initial Nagios sample files, then just place these commands in any active cfg file that makes sense to you.
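
In Bulk + NPCD mode, these commands simply move the collected perfdata files into the NPCD spool directory. The definitions below follow the PNP4Nagios documentation; adjust the paths if you changed them during configure:

define command {
       command_name    process-service-perfdata-file
       command_line    /bin/mv /usr/local/pnp4nagios/var/service-perfdata /usr/local/pnp4nagios/var/spool/service-perfdata.$TIMET$
}

define command {
       command_name    process-host-perfdata-file
       command_line    /bin/mv /usr/local/pnp4nagios/var/host-perfdata /usr/local/pnp4nagios/var/spool/host-perfdata.$TIMET$
}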

Configuring NPCD

The performance collection method that we’re employing involves the Nagios Perfdata C Daemon (NPCD). The default configuration will work perfectly for this walkthrough. If you need something more from it, you can edit /usr/local/pnp4nagios/etc/npcd.cfg. We just want it to run as a daemon:
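
One way to start it in daemon mode, taken from the PNP4Nagios documentation (if make fullinstall registered an init script for your distribution, sudo service npcd start accomplishes the same thing):

sudo /usr/local/pnp4nagios/bin/npcd -d -f /usr/local/pnp4nagios/etc/npcd.cfg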

Enable it to run automatically at startup.

  • Most Red Hat and SUSE based distributions:
    sudo chkconfig --add npcd
  • Ubuntu and most other Debian-based distributions:
    sudo update-rc.d npcd defaults

Configuring Hosts in Nagios for PNP4Nagios Graphing

If you made it here, you’ve successfully completed all the hard work! Now you just need to tell Nagios to start collecting performance data so that PNP4Nagios can graph it.

Note: I deviate substantially from the PNP4Nagios official documentation. If you follow those directions, you will quickly and easily set up every single host and every single service to gather data. I didn’t want that because I don’t find such a heavy hand to be particularly useful. You’ll need to do more work to exert finer control. In my opinion, that extra bit of work is worth it. I’ll explain why after the how-to.

If you followed the path of least resistance, every single host in your Nagios environment inherits from a single root source. Open /usr/local/nagios/etc/objects/templates.cfg. Find the define host object with a name of generic-host. Most likely, this is your master host object. Look at its configuration:
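
In the stock Nagios sample configuration, it looks roughly like this; the line to notice is process_perf_data:

define host {
        name                            generic-host
        notifications_enabled           1
        event_handler_enabled           1
        flap_detection_enabled          1
        process_perf_data               1
        retain_status_information       1
        retain_nonstatus_information    1
        notification_period             24x7
        register                        0
}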

Now that you’ve enabled performance data processing in nagios.cfg, this means that Nagios and PNP4Nagios will now start graphing for every single host in your Nagios configuration. Sound good? Well, wait a second. What it really means is that it will graph the output of the check_command for every single host in your Nagios configuration. What is check_command in this case? Probably check_ping or check_icmp. The performance data that those output are the round-trip average and packets lost during pings from the Nagios server to the host in question. Is that really useful information? To track for four years?

I don’t really need that information. Certainly not for every host. So, I modified mine to look like this:
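
A sketch of that arrangement (my exact file differs slightly; the action_url values use the graph and pop-up syntax from the PNP4Nagios documentation):

define host {
        name                            generic-host
        # ...all other settings left as they were...
        process_perf_data               0
        register                        0
}

define host {
        name                    perf-host
        action_url              /pnp4nagios/index.php/graph?host=$HOSTNAME$
        register                0
}

define host {
        name                    perf-host-pingdata
        process_perf_data       1
        action_url              /pnp4nagios/index.php/graph?host=$HOSTNAME$' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=_HOST_
        register                0
}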

What we have:

  • Our existing hosts are untouched. They’ll continue not recording performance data just as they always have.
  • A new, small host definition called “perf-host”. It also does not set up the recording of host performance data. However, its “action_url” setting will cause it to display a link to any graphs that belong to this host. You can use this with hosts that have graphed services but you don’t want the ping statistics tracked. To use it, you would set up/modify hosts and host templates to inherit from this template in addition to whatever host templates they already inherit from. For example:
    use perf-host,generic-host
  • A new, small host definition called “perf-host-pingdata”. It works exactly like “perf-host” except that it will capture the ping data as well. The extra bit on the end of the “action_url” will cause it to draw a little preview when you mouseover the link. To use it, you will set up/modify hosts and host templates to inherit from this template in addition to whatever host templates they already inherit from. For example:
    use perf-host-pingdata,generic-host

Note: When setting the inheritance:

  • perf-host or perf-host-pingdata must come before any other host templates in a use line.
  • In some instances, including a space after the comma in a use line causes Nagios to panic if the host’s name line does not also contain a space (for example, if you used tabs instead of spaces on the name generic-host line). Make sure that all of your use directives have no spaces after any commas and you will never have a problem. Ex:
    use perf-host,generic-host

Remember to check the configuration and restart Nagios after any changes to the .cfg files:
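
Assuming the default installation paths (use your init system’s own restart command if it differs):

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
sudo service nagios restart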

Couldn’t You Just Set a Single Root Host for Inheritance?

An alternative to the above would be:
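
In sketch form:

define host {
        name            perf-host
        use             generic-host
        action_url      /pnp4nagios/index.php/graph?host=$HOSTNAME$
        register        0
}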

In this configuration, perf-host inherits directly from generic-host. You could then have all of your other systems inherit from perf-host instead of generic-host. The problem is that even in a fairly new Nagios installation, a fair number of hosts already inherit from generic-host. You’d need to determine which of those you wanted to edit and carefully consider how inheritance works. If you’re going to all of that trouble, it seems to me that maybe you should just directly edit the generic-host template and be done with it.

Truthfully, I’m only telling you what I do. Do whatever makes sense to you.

Configuring Services in Nagios for PNP4Nagios Graphing

You’ll get much more use out of service graphing than host graphing. Just as with hosts, the default configuration enables performance graphing for all services. Not all services emit performance data, and you may not want data from all services that do produce data. So, let’s fine-tune that configuration as well.

Still in /usr/local/nagios/etc/objects/templates.cfg, find the define service object with a name of generic-service. Disable performance data collection on it and add a stub service that enables performance graphing:
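
A sketch of those two changes (again, the action_url follows the PNP4Nagios documentation’s service pop-up syntax):

define service {
        name                            generic-service
        # ...all other settings left as they were...
        process_perf_data               0
        register                        0
}

define service {
        name                    perf-service
        process_perf_data       1
        action_url              /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
        register                0
}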

When you want to capture performance data from a service, prepend the new stub service to its use line. Ex:
use perf-service,generic-service. The warnings from the host section about the order of items and the lack of a space after the comma in the use line transfer to the service definition.

Remember to check the configuration and restart Nagios after any changes to the .cfg files, using the same commands shown earlier.

Example Configurations

In case the above doesn’t make sense, I’ll show you what I’m doing.

Most of the check_nt services emit performance data. I’m especially interested in CPU, disk, and memory. The uptime service also emits data, but for some reason, it doesn’t use the defined “counter” mode. Instead, it’s just a graph that steadily increases at each interval until you reboot, then it starts over again at zero. I don’t find that terribly useful, especially since Nagios has its own perfectly capable host uptime graphs. So, I first configure the “windows-server” host to show the performance action_url. Then I configure the desired default Windows services to capture performance data.

My /usr/local/nagios/etc/objects/windows.cfg:
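
The relevant pieces look something like this, assuming the stock sample file’s host and service names (my real file defines more services than shown here):

define host {
        name                    windows-server
        use                     perf-host,generic-host
        # ...remainder of the stock template unchanged...
        register                0
}

define service {
        use                     perf-service,generic-service
        host_name               winserver
        service_description     CPU Load
        check_command           check_nt!CPULOAD!-l 5,80,90
}

define service {
        use                     perf-service,generic-service
        host_name               winserver
        service_description     Memory Usage
        check_command           check_nt!MEMUSE!-w 80 -c 90
}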

Now, my hosts that inherit from the default Windows template have the extra action icon, but my other hosts do not:

[screenshot: the hosts page, with graph icons only on hosts that use the new template]

The same story on the services page; services that track performance data have an icon, but the others do not:

[screenshot: the services page, with graph icons only on services that track performance data]

Troubleshooting your PNP4Nagios Deployment

Not getting any data? First of all, be patient, especially when you’re just getting started. I have shown you how to set up the bulk mode with NPCD which means that data captures and graphing are delayed. I’ll explain why later, but for now, just be aware that it will take some time before you get anything at all.

If it’s been some time, say, 15 minutes, and you’re still not getting any data, go to verify.pnp4nagios.org/ and download the verify_pnp_config file. Transfer it to your Nagios host. I just plop it into my Downloads folder as usual. Navigate to the folder where you placed yours, then run:
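
Something along these lines (the mode flag matches the Bulk + NPCD setup used in this walkthrough):

perl ./verify_pnp_config -m bulk+npcd -c /usr/local/nagios/etc/nagios.cfg -p /usr/local/pnp4nagios/etc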

That should give you the clues that you need to fix most any problems.

I did have one leftover problem, but only on my Ubuntu system where I had updated to PHP 7. The verify script passed everything, but trying to load any PNP4Nagios page gave me this error: “Call to undefined function simplexml_load_file()”. I only needed to install the PHP XML package to fix that:
sudo apt install php-xml. I didn’t look up the equivalent on the other distributions.

Plugin Output for Performance Graphing

To determine if a plugin can be graphed, you could just look at its documentation. Otherwise, you’ll need to manually execute it from /usr/local/nagios/libexec. For instance, we’ll just use the first one that shows up on an Ubuntu system, check_apt:

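Running it by hand looks something like this; the output below is a typical example rather than a verbatim capture from my system:

cd /usr/local/nagios/libexec
./check_apt
APT OK: 0 packages available for upgrade (0 critical updates).|available_upgrades=0;;;0 critical_updates=0;;;0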

See the pipe character (|) there after the available updates report? Then the jumble of characters after that? That’s all in the standard format for Nagios performance charting. That format is:

  1. A pipe character after the standard Nagios service monitoring result.
  2. A human-readable label. If the label includes any special characters, the entire label should be enclosed in single quotes.
  3. An equal sign (=)
  4. The reported value.
  5. Optionally, a unit of measure.
  6. A semi-colon, optionally followed by a value for the warning level. If the warning level is visible on the produced chart, it will be indicated by a horizontal yellow line.
  7. A semi-colon, optionally followed by a value for the critical level. If the critical level is visible on the produced chart, it will be indicated by a horizontal red line.
  8. A semicolon, optionally followed by the minimum value for the chart’s y-axis. Must be the same unit of measure as the value in #4. If not specified, PNP4Nagios will automatically set the minimum value. If this value would make the current value invisible, PNP4Nagios will set its own minimum.
  9. A semicolon, optionally followed by the maximum value for the chart’s y-axis. Must be the same unit of measure as the value in #4. If not specified, PNP4Nagios will automatically set the maximum value. If this value would make the current value invisible, PNP4Nagios will set its own maximum.

This format is defined by Nagios and PNP4Nagios conforms to it. You can read more about the format at: verify.pnp4nagios.org/

My plugins did not originally emit any performance data. I have been working on that and should hopefully have all of that work completed before you read this article.

My PNP4Nagios Configuration Philosophy

I had several decision points when setting up my system. You may choose to diverge as it meets your needs. I’ll use this section to explain why I made the choices that I did.

Why “Bulk with NPCD” Mode?

Initially, I tried to set up PNP4Nagios in “synchronous” mode. That would cause Nagios to instantly call on PNP4Nagios to generate performance data immediately after every check’s results were returned. I chose that initially because it seemed like the path of least resistance.

It didn’t work for me. I’m betting that I did something wrong. But, I didn’t get my problem sorted out. I found a lot more information on the NPCD mode. So, I switched. Then I researched the differences. I feel like I made the correct choice.

You can read up on the available modes yourself: http://docs.pnp4nagios.org/pnp-0.6/modes.

In synchronous mode, Nagios can’t do anything while PNP4Nagios processes the return information. That’s because it all occurs in the same thread; we call that behavior “blocking”. According to the PNP4Nagios documentation, that method “will work very good up to about 1,000 services in a 5-minute interval”. I assume that’s CPU-driven, but I don’t know. I also don’t know how to quantify or qualify “will work very good”. I also don’t know what sort of environments any of my readers are using.

Bulk mode moves the processing of data from per-return-of-results to gathering results for a while and then processing them all at once. The documentation says that testing showed that 2,000 services were processed in .06 seconds. That’s easier to translate to real-world systems, although I still don’t know the overall conditions that generated that benchmark.

When we add NPCD onto bulk mode, then we don’t block Nagios at all. Nagios still does the bulk gathering, but NPCD processes the data, not Nagios. I chose this method as it means that as long as your Nagios system is multi-core and not already overloaded, you should not encounter any meaningful interruption to your Nagios service by adding PNP4Nagios. It should also work well with most installation sizes. For really big Nagios/PNP4Nagios installations (also not qualified or quantified), you can follow their instructions on configuring “Gearman Mode”.

One drawback to this method: Your “4 Hour” charts will frequently show an empty space at the right of their charts. That’s because they will be drawn in-between collection/processing periods. All of the data will be filled in after a few minutes. You just may not have instant gratification.

Why Not Just Allow Every Host and Service to be Monitored?

The default configuration of PNP4Nagios results in every single host and every single service being enabled for monitoring. From an “ease-of-configuration” standpoint, that’s tempting. Once you’ve set the globals, you literally don’t have to do anything else.

However, we are also integrating directly with Nagios’ generated HTML pages. Whereas PNP4Nagios can determine that a service doesn’t have performance data because Nagios won’t have generated anything, the front-end just has an instruction to add a linked icon to every single service. So, if you just globally enable it, then you’ll get a lot of links that don’t work.

If you’re the only person using your environment, maybe that’s OK. But, if you share the environment, then you’ll start getting calls wanting you to “fix” all those broken links. It won’t take long before you’re spending more time explaining (and re-explaining) that not all of the links have anything to show.

Why Not Just Change the Inheritance Tree?

If you want, you could have your performance-enabled hosts and services inherit from the generic-host/generic-service templates, then have later templates, hosts, and services inherit from those. If that works for you, then take that approach.

I chose to employ multiple inheritance as a way of overriding the default templates because it seemed like less effort to me. When I went to modify the services, I simply copied “perf-service,” to the clipboard and then selectively pasted it into the use line of every service that I wanted. It worked easier for me than a selective find-replace operation or manual replacement. It also seems to me that it would be easier to revert that decision if I make a mistake somewhere.

I can envision very solid arguments for handling this differently. I won’t argue. I just think that this approach was best for my situation.

SAP cloud applications go Azure with Microsoft partnership

SAP and Microsoft are taking their relationship to the next level in the cloud.

The two computing titans, who have been longtime partners, recently announced a number of initiatives that deepen the relationship, including enabling SAP cloud applications to run on Microsoft Azure.

The companies will also deploy each other’s cloud applications internally and will co-engineer and go to market together with cloud applications and managed cloud services, according to a joint press release.   

Specifically, SAP’s private managed cloud service SAP HANA Enterprise Cloud (HEC) is available on Microsoft Azure, which allows customers to run SAP S/4HANA on Azure’s managed cloud.

Both Microsoft and SAP will run SAP S/4HANA on Azure for internal operations. Microsoft is transforming its legacy SAP financial systems and will implement S/4HANA Finance on Azure. Microsoft also plans to connect S/4HANA to Azure AI and analytics services.


SAP is migrating more than a dozen “business-critical systems” to Azure, according to the press release. That includes S/4HANA, which supports Concur, the SAP travel and expense cloud application. SAP Ariba is also currently running on Azure.

The partnership is important now because joint SAP and Microsoft customers are moving mission-critical systems to the cloud, according to Julia White, Microsoft corporate vice president at Azure.

“We’re extending a partnership that has a long history and taking it to the next level with an eye towards those joint customers as they move those mission-critical SAP systems to the cloud,” White said. “They need to have the confidence and a trusted approach, so it’s about us coming together with a partnership that’s all about both co-engineering and making sure that we have incredible integrated solutions, as well as going to market together and engaging with our customers together for deploying all the way down to having joint support for those SAP cloud applications.”

[timeline: A history of SAP HANA]

The customers should benefit

The partnership makes sense to Holger Mueller, principal analyst and vice president of Constellation Research, who said in a blog post that the main question may be what took the parties so long, given that Azure has been capable of running HEC since at least 2016. Still, customers should be happy to see the companies “drinking their own champagne,” he wrote.

SAP can now expend fewer capital resources on HEC, since Azure relieves that load, and perhaps put more money into R&D for S/4HANA, Mueller said.

“If all goes well it means customers will have to pay less for running S/4HANA, while it is being operated by a vendor who does infrastructure management (IaaS) for a living, compared to SAP who is certainly in the SaaS and PaaS [space], but less and less (if at all) in the IaaS space,” Mueller said.

However, Mueller noted that the partnership needs to “pick up steam, show customer traction, value and customer success.”

But SAP needs to help them make choices

Jon Reed, co-founder of Diginomica.com, also believes that the partnership could be good for SAP customers, but does not see the announcement as “earthshaking news.”

“It’s more of a logical extension of SAP’s multi-cloud strategy and their ongoing partnership with Microsoft,” Reed said. “It’s good news for SAP customers in that it’s one more sign post on the road to multi-cloud and deployment choice. For Microsoft it’s obviously another validation that Azure has enterprise clout and you can’t really do enterprise multi-cloud without offering Azure deployments.”

It’s ultimately up to the customer to determine whether Microsoft Azure, AWS or Google is the right hosting option for the S/4HANA private cloud or other SAP cloud applications, Reed said.

SAP needs to figure out how much responsibility it has in helping customers make these choices, for example, determining which cloud providers have more strength in machine learning or optimizing data center locations.

“I think that’s an ongoing question and SAP has been thinking about it also,” Reed said. “What customers need here is somewhat uncharted territory, and I think that SAP needs to provide more documentation and cross-checks for customers on multi-cloud features and options.”

Customer trust and confidence are the keys

Running SAP cloud applications on Azure, and Microsoft running S/4HANA internally, is one thing that will help customers choose to deploy on the Azure cloud, according to White, and these experiences will help customers understand how to run the systems.

“Our joint partnership with the co-engineering, the go-to-market, the support is a big differentiator in terms of customer support and trust, but to also know that we are running it first party they know that there’s real engineering experience on both sides is about confidence, about trust, about ensuring that it’s a secure system,” White said. “It also has the halo effect of helping our combined engineering efforts as well, as we are doing it ourselves both on the SAP and Microsoft side, that we learn and see and are able to improve the products because of that.”

To highlight this issue of customer trust, the companies identified The Coca-Cola Company, Columbia Sportswear Company and Coats and Costco Wholesale Corp. as customers that have deployed SAP cloud applications on Azure.

“It really was those types of clients — and ourselves — that really were a motivator to bring this partnership together in a greater way,” White said. “It was that level of company and mission-critical systems that was a catalyst for us to do something different here.”

Level 3-CenturyLink merger could open doors for UC partners

CenturyLink’s $34 billion acquisition of Level 3 Communications is complete, and the two telecom providers have merged under the CenturyLink brand. But questions linger over what will become of Level 3’s unified communications partnerships following the CenturyLink merger.

Level 3, based in Broomfield, Colo., announced a partnership with Amazon in July to offer the Amazon Chime communications service. Amazon chose Level 3 and Vonage to deliver Chime as a managed service with Level 3 targeting medium to large businesses. Level 3 also has UC partnerships with vendors such as AVI-SPL and Unify Square.

CenturyLink, based in Monroe, La., acquired Level 3 to expand its reach in the business communications market and compete against larger providers, such as AT&T and Verizon. UC partnerships could thrive as the Level 3-CenturyLink merger gives partners like Amazon a broader range of customers to target, according to Frost & Sullivan analyst Michael Brandenburg.

However, the CenturyLink merger with Level 3 is still in its early days. CenturyLink will take its time to refine the company’s combined portfolio with as little impact to customers as possible, he said.

CenturyLink is “willing to take on multiple solutions that fit what their customers need or want,” Brandenburg said. “I don’t think they’ll put partnerships in jeopardy at this point.”

RingCentral revenue on the rise

RingCentral’s revenue in the third quarter grew to $129.8 million, a 34% increase from the previous year. RingCentral CEO Vlad Shmunis attributed the results to growth in the unified communications as a service (UCaaS) provider’s midmarket and enterprise business, as well as its channel partners.

RingCentral, based in Belmont, Calif., also saw its software subscription revenue grow 30% year over year to $119.4 million. The vendor’s shares have more than doubled since the start of this year.   

A recent report from Synergy Research Group found that strong adoption was driving the UCaaS market. The report named RingCentral a market share leader based on quarterly revenues and subscriber seats. RingCentral was also named a market leader in the latest Gartner Magic Quadrant for UCaaS.

RingCentral has expanded its services this year by introducing new integrations with Google G Suite, Amazon Alexa and Slack, offering expanded coverage in Latin America and adding new quality of service analytics.

Vidyo revamps product portfolio

Vidyo has streamlined its product portfolio under its VidyoCloud platform. VidyoCloud supports two key platforms: VidyoConnect and VidyoEngage, as well as the Vidyo.io platform-as-a-service offering.

VidyoCloud offers a new user interface for a consistent experience across endpoints. It also offers noise suppression to filter background noise such as keyboard tapping. The platform is powered by Vidyo’s new VP9 codec to support real-time video communications.

VidyoConnect is an enterprise meeting service for team collaboration. The service offers a unified experience across endpoints, from mobile devices to conference rooms. The service also offers a hybrid capability to manage video communication traffic within the corporate network.

VidyoEngage enables enterprises and healthcare systems to provide face-to-face video interactions with customers and patients. The service also offers integrations with DocuSign to streamline processes.

VidyoCloud also offers added support for Xamarin and Electron in Vidyo.io to allow app development from a single code base.

Level 3 offers Amazon cloud collaboration tool as a service

Level 3 Communications has launched Amazon Chime as a managed service — the latest sign that the cloud collaboration tool is capable of competing with Microsoft’s Skype for Business and Cisco’s WebEx.

Last week’s introduction of the Level 3 service demonstrates progress in Amazon’s goal of becoming an enterprise communications provider, said Ira Weinstein, an analyst at Wainhouse Research, based in Duxbury, Mass.

“If I was advising one of my large enterprise clients, I would tell them you now have a very reputable vendor’s product in Chime, and you’ve got it wrapped in Level 3’s production-friendly, managed environment,” he said. “Second, Chime is more than sufficient for the typical enterprise, and when you wrap it in Level 3’s managed services, it becomes better supported.”

Level 3, a multinational telecommunications and internet service provider, hopes to tap into the large enterprise customer base of Amazon Web Services, the retailer’s platform-as-a-service business, said Jon Arnold, an analyst at the Canadian firm J Arnold & Associates. In the second quarter, AWS generated $4.1 billion in net sales.

 “You can’t ignore Amazon,” Arnold said. “They’re too big.”

Other Amazon partners in Chime cloud collaboration tool

Amazon, however, is not big enough to succeed in the crowded UC and collaboration market on its own. Along with Level 3, Amazon has partnered with UC vendor Vonage, which offers Chime as a feature in all business communications plans.

Level 3 is targeting medium to large businesses with a pay-per-use model. The company is also willing to bundle its PSTN service with the cloud collaboration tool.

Chime is based on technology Amazon obtained last year through the acquisition of online meeting provider Biba.
