Cisco has introduced pay-as-you-go pricing for the latest line card of the ASR 9000 router, offering service providers a more flexible licensing model as they evaluate 5G infrastructure suppliers.
Cisco’s new licensing model, unveiled this week, applies to the new line card and subsequent generations. The latest hardware has a maximum throughput of 3.2 Tbps, uses a half watt of power per gigabit and is available with 32, 16 or 8 ports of 100 GbE. The cards fit into existing ASR 9000 chassis.
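As a quick sanity check on those efficiency figures (our arithmetic, not Cisco’s published numbers): at half a watt per gigabit, a card running at its full 3.2 Tbps would draw about 1.6 kW.

```python
# Illustrative arithmetic only, using the figures quoted above:
# 0.5 W per gigabit of throughput, 3.2 Tbps maximum.
watts_per_gbit = 0.5
max_throughput_gbps = 3200  # 3.2 Tbps expressed in Gbps

max_power_watts = watts_per_gbit * max_throughput_gbps
print(max_power_watts)  # 1600.0, i.e. about 1.6 kW at full load
```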
The pricing change lets service providers buy a license for ASR 9000 capacity across sites, but only pay for what they use. The cost would increase as ports are activated, said Sumeet Arora, the head of engineering for service provider network systems at Cisco.
Previously, service providers had to buy an ASR 9000 license for each site based on expected demand. As a result, the customers would pay for capacity they weren’t using, Arora said.
The ASR 9000 router in 5G
Cisco is making its pricing more customer-friendly as service providers consider technology like the ASR 9000 to support future 5G business and consumer services. The fifth-generation cellular technology delivers speed, capacity and latency improvements that will enable new products for healthcare, manufacturing, entertainment and the auto industry, proponents have said.
However, analysts do not expect the 5G services market to take off for several years. Cisco CEO Chuck Robbins recently told financial analysts that he didn’t expect significant 5G sales until 2020.
Until the 5G market opens, Cisco is aiming the new ASR 9000 line cards at the network edge where service providers deliver virtual private networks and other business services. Other “big use cases” include internet peering, data center interconnects and the IP infrastructure for mobile services, Arora said.
The ASR 9000 router competes with products from Juniper Networks, Huawei and Nokia. The latter two vendors, along with Ericsson, comprise the top three suppliers to service providers.
Last week, Juniper Networks announced a partnership with Ericsson to sell a collection of products for moving 5G traffic. Cisco announced a wide-ranging partnership with Ericsson in 2015, but that deal has stalled, and many analysts believe it is nearly dead.
“The Ericsson-Cisco partnership was a nonstarter, and both parties did not follow up on the promise that they had articulated during the announcement,” said Rajesh Ghai, an analyst at IDC.
Samsung’s lineup of data center solid-state drives — including a Z-NAND model — introduced this week targets smaller organizations facing demanding workloads such as in-memory databases, artificial intelligence and IoT.
The fastest option in the Samsung data center SSD family — the 983 ZET NVMe-based PCIe add-in card — uses the company’s latency-lowering Z-NAND flash chips. Earlier this year, Samsung announced its first Z-NAND-based enterprise SSD, the SZ985, designed for the OEM market. The new 983 ZET SSD targets SMBs, including system builders and integrators, that buy storage drives through channel partners.
The Samsung data center SSD lineup also adds the first NVMe-based PCIe SSDs designed for channel sales in 2.5-inch U.2 and 22-mm-by-110-mm M.2 form factors. At the other end of the performance spectrum, the new entry-level 2.5-inch 860 DCT 6 Gbps SATA SSD targets customers who want an alternative to client SSDs for data center applications, according to Richard Leonarz, director of product marketing for Samsung SSDs.
Rounding out the Samsung data center SSD product family is a 2.5-inch 883 DCT SATA SSD that uses a denser version of Samsung’s 3D NAND technology, called V-NAND, than comparable predecessor models. Samsung’s PM863 and PM863a SSDs use 32-layer and 48-layer V-NAND, respectively, but the new 883 DCT SSD is equipped with triple-level cell (TLC) 64-layer V-NAND chips, as are the 860 DCT and 983 DCT models, Leonarz said.
Noticeably absent from the Samsung data center SSD product line is 12 Gbps SAS. Leonarz said research showed SAS SSDs trending flat to downward in terms of units sold. He said Samsung doesn’t see a growth opportunity for SAS on the channel side of the business that sells to SMBs such as system builders and integrators. Samsung will continue to sell dual-ported enterprise SAS SSDs to OEMs.
Z-NAND-based SSD uses SLC flash
The Z-NAND technology in the new 983 ZET SSD uses high-performance single-level cell (SLC) V-NAND 3D flash technology and builds in logic to drive latency down to lower levels than standard NVMe-based PCIe SSDs that store two or three bits of data per cell.
Samsung positions the Z-NAND flash technology it unveiled at the 2016 Flash Memory Summit as a lower-cost, high-performance alternative to new 3D XPoint nonvolatile memory that Intel and Micron co-developed. Intel launched 3D XPoint-based SSDs under the brand name Optane in March 2017, and later added Optane dual inline memory modules (DIMMs). Toshiba last month disclosed its plans for XL-Flash to compete against Optane SSDs.
Use cases for Samsung’s Z-NAND NVMe-based PCIe SSDs include cache memory, database servers, real-time analytics, artificial intelligence and IoT applications that require high throughput and low latency.
“I don’t expect to see millions of customers out there buying this. It’s still going to be a niche type of solution,” Leonarz said.
Samsung claimed its SZ985 NVMe-based PCIe add-in card could reduce latency by 5.5 times over top NVMe-based PCIe SSDs. Product data sheets list the SZ985’s maximum performance at 750,000 IOPS for random reads and 170,000 IOPS for random writes, and data transfer rates of 3.2 gigabytes per second (GBps) for sequential reads and 3 GBps for sequential writes.
The new Z-NAND based 983 ZET NVMe-based PCIe add-in card is also capable of 750,000 IOPS for random reads, but the random write performance is lower at 75,000 IOPS. The data transfer rate for the 983 ZET is 3.4 GBps for sequential reads and 3 GBps for sequential writes. The 983 ZET’s latency for sequential reads and writes is 15 microseconds, according to Samsung.
Both the SZ985 and new 983 ZET are half-height, half-length PCIe Gen 3 add-in cards. Capacity options for the 983 ZET will be 960 GB and 480 GB when the SSD ships later this month. SZ985 SSDs are currently available at 800 GB and 240 GB, although a recent product data sheet indicates 1.6 TB and 3.2 TB options will be available at an undetermined future date.
Samsung’s SZ985 and 983 ZET SSDs offer significantly different endurance levels over the five-year warranty period. The SZ985 is rated at 30 drive writes per day (DWPD), whereas the new 983 ZET supports 10 DWPD with the 960 GB SSD and 8.5 DWPD with the 480 GB SSD.
Samsung data center SSD endurance
The rest of the new Samsung data center SSD lineup is rated at less than 1 DWPD. The entry-level 860 DCT SATA SSD supports 0.20 DWPD for five years or 0.34 DWPD for three years. The 883 DCT SATA SSD and 983 DCT NVMe-based PCIe SSD are officially rated at 0.78 DWPD for five years, with a three-year option of 1.30 DWPD.
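Those DWPD ratings translate into total write endurance via the standard formula: total writes = DWPD × capacity × warranty days. A short sketch using the figures above (the helper function is our illustration, not a Samsung specification):

```python
def total_writes_tb(dwpd: float, capacity_gb: float, years: float) -> float:
    """Total bytes written over the warranty, in TB: DWPD * capacity * days."""
    return dwpd * capacity_gb * 365 * years / 1000

# 983 ZET, 960 GB model: 10 DWPD over the five-year warranty.
print(total_writes_tb(10, 960, 5))    # 17520.0 TB, roughly 17.5 PB

# Entry-level 860 DCT, 960 GB model: 0.20 DWPD over five years.
print(total_writes_tb(0.20, 960, 5))  # about 350 TB
```

The gap illustrates why the SLC-based Z-NAND drives carry a much higher endurance rating than the TLC-based DCT models.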
Samsung initially targeted content delivery networks with its 860 DCT SATA SSD, which is designed for read-intensive workloads. Sequential read/write performance is 550 megabytes per second (MBps) and 520 MBps, and random read/write performance is 98,000 IOPS and 19,000 IOPS, respectively, according to Samsung. Capacity options range from 960 GB to 3.84 TB.
“One of the biggest challenges we face whenever we talk to customers is that folks are using client drives and putting those into data center applications. That’s been our biggest headache for a while, in that the drives were not designed for it. The idea of the 860 DCT came from meeting with various customers who were looking at a low-cost SSD solution in the data center,” Leonarz said.
He said the 860 DCT SSDs provide consistent performance for round-the-clock operation with potentially thousands of users pinging the drives, unlike client SSDs that are meant for lighter use. The cost per GB for the 860 DCT is about 25 cents, according to Leonarz.
The 883 DCT SATA SSD is a step up, at about 30 cents per GB, with additional features such as power loss protection. The performance metrics are identical to the 860 DCT, with the exception of its higher random writes of 28,000 IOPS. The 883 DCT is better suited to mixed read/write workloads for applications in cloud data centers, file and web servers and streaming media, according to Samsung. Capacity options range from 240 GB to 3.84 TB.
The 983 DCT NVMe-PCIe SSD is geared for I/O-intensive workloads requiring low latency, such as database management systems, online transaction processing, data analytics and high performance computing applications. The 2.5-inch 983 DCT in the U.2 form factor is hot swappable, unlike the M.2 option. Capacity options are 960 GB and 1.92 TB for both form factors. Pricing for the 983 DCT is about 34 cents per GB, according to Samsung.
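Approximate street prices follow directly from those per-gigabyte figures (our arithmetic, based on the “about” numbers quoted above, so treat the results as estimates rather than list prices):

```python
# Approximate cost per GB, as quoted above (estimates, not list prices).
cost_per_gb = {"860 DCT": 0.25, "883 DCT": 0.30, "983 DCT": 0.34}

def est_price_usd(model: str, capacity_gb: int) -> float:
    return cost_per_gb[model] * capacity_gb

print(est_price_usd("860 DCT", 3840))  # a 3.84 TB 860 DCT: about $960
print(est_price_usd("983 DCT", 1920))  # a 1.92 TB 983 DCT: about $650
```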
The 983 DCT’s sequential read performance is 3,000 MBps for each of the U.2 and M.2 983 DCT options. The sequential write performance is 1,900 MBps for the 1.92 TB U.2 SSD, 1,050 MBps for the 960 GB U.2 SSD, 1,400 MBps for the 1.92 TB M.2 SSD, and 1,100 MBps for the 960 GB M.2 SSD. Random read/write performance for the 1.92 TB U.2 SSD is 540,000 IOPS and 50,000 IOPS, respectively. The read/write latency is 85 microseconds and 80 microseconds, respectively.
The 860 DCT, 883 DCT and 983 DCT SSDs are available now through the channel, and the 983 ZET is due later this month.
VMware has introduced features that improve the use of its NSX network virtualization and security software in private and public clouds.
At VMworld 2018 in Las Vegas, VMware unveiled an NSX instance for AWS Direct Connect and technology to apply NSX security policies on Amazon Web Services workloads. Also, VMware said Arista Networks’ virtual and physical switches would enforce NSX policies — the result of a collaboration between the two vendors.
The latest AWS feature is in NSX-T Data Center 2.3, which VMware introduced at VMworld. Other features added to the newest version of NSX-T include support for containers and Linux-based workloads running on bare-metal servers. NSX-T uses Open vSwitch to turn a Linux host into an NSX-T transport node and to provide stateful security services.
VMware plans to release NSX-T 2.3 by November.
NSX on AWS Direct Connect
To help companies connect to AWS, VMware introduced integration between NSX and AWS Direct Connect. The combination will provide NSX-powered connectivity between workloads running on VMware Cloud on AWS and those running on a VMware-based private cloud in the data center.
AWS Direct Connect lets companies bypass the public internet and establish a dedicated network connection between a data center and an AWS location. Direct Connect is particularly useful for companies with rules against transferring sensitive data across the public internet.
Finally, VMware introduced interoperability between Arista’s CloudVision and NSX. As a result, companies can have NSX security policies enforced on Arista switches running either virtually in a public cloud or the data center.
Arista CloudVision manages switching fabrics within multiple cloud environments. Last year, the company released a virtualized version of its EOS network operating system for AWS, Google Cloud Platform, Microsoft Azure and Oracle Cloud.
VMware is using its NSX portfolio to connect and secure infrastructure and applications running in the data center, branch office and public cloud. For the branch office, VMware has integrated NSX with the company’s VeloCloud software-defined WAN to provide microsegmentation for applications at the WAN’s edge.
VMware competes in multi-cloud networking with Cisco and Juniper Networks.
Cisco has introduced Meraki MX security appliances with a built-in 4G wireless broadband modem. The company also added the Long Term Evolution, or LTE, modem to a new Z-series teleworker gateway.
This week, Cisco launched the Meraki MX67C and MX68CW with an integrated CAT 6 LTE cellular modem. Also, Cisco unveiled four MX models — the MX67, MX68, MX67W and MX68W — without LTE but with more throughput than older models. All the new MX appliances, which are the first in the Meraki line to support the 802.11ac Wave 2 Wi-Fi standard, can deliver up to 450 Mbps of firewall throughput.
Network admins manage Cisco Meraki switches, appliances and access points through a web-based console called the Meraki Dashboard, which also provides automation and analytics. Cisco has aimed the product line at small branch offices and retailers that need a no-frills wireless LAN. For an access layer that meets the needs of larger enterprises, Cisco offers the Aironet APs and Catalyst switches.
MX appliances are unified threat management devices with software-defined WAN functionality. A UTM system combines and integrates multiple security services and features, including a firewall.
Uses for LTE in the Meraki MX
The higher throughput in the latest MX appliances is aimed at companies accessing SaaS applications, such as Microsoft Office 365, said Imran Idrees, a marketing manager in Cisco’s Meraki unit. Remote branch offices can use the LTE modem as a substitute for broadband when it isn’t available.
Companies could also use the LTE connection as a failover link, Idrees said. If the Ethernet connection goes down, then the MX would switch to LTE.
“Given the ubiquity and increasing performance of LTE, this is a relatively inexpensive way for a branch office to increase its network availability,” said Mark Hung, an analyst at Gartner.
The cellular MX models have one Nano SIM card slot for connecting to a carrier’s LTE network. The built-in modem makes it possible to track usage and performance of the MX from the Meraki Dashboard.
Getting LTE on older Meraki MX models required companies to plug in a carrier-provided USB stick that contained the 4G modem. Because the modem wasn’t integrated with the MX, no data was captured for tracking performance.
With the latest models, data captured from the LTE connection includes signal strength, the provider’s name and how much data is traveling over the link. All the information is displayed on the Meraki Dashboard.
The Z3C gateway
The Z3C teleworker gateway is for workers who need a secure connection to the corporate network while they are on the road. “It’s a very compact device that a business person would take around with them,” Idrees said.
The previous version of the gateway, Z3, required a traveler to plug a hotel room’s Ethernet cable into the device to gain access to the corporate network. The Z3C has the option of connecting over LTE.
Companies that want to use a Meraki WLAN have to purchase the product line’s devices and a cloud subscription license. Once the license is registered, network managers can configure and manage the hardware through the Meraki Dashboard.
Dell EMC has introduced a high-density 100 Gigabit Ethernet switch aimed at service providers and large enterprises that need more powerful hardware to support a growing number of cloud applications and digital business initiatives.
Dell EMC launched the Z9264F open networking switch this week, listing its target customers as hyperscale data center operators, tier-one and -two service providers and enterprises. The Dell EMC 100 GbE switch is designed for leaf-spine switching architectures.
“Dell’s new, high-performance, low-latency 100 GbE switch is ideally suited for large enterprises and service providers,” said Rohit Mehra, analyst at IDC. “The continued growth of cloud applications that require high-performance, east-west traffic-handling capabilities will likely be one of the key drivers for this class of switches to see increased traction.”
Indeed, Dell EMC, Cisco, Hewlett Packard Enterprise (HPE) and Juniper Networks are counting on an increase in data center traffic to sell their 100 GbE switches. So far, demand for the hardware has been robust. In the first quarter, revenue from 100 GbE gear grew nearly 84% year over year to $742.5 million, according to IDC. Port shipments increased almost 118%.
The Dell EMC 100 GbE switch is 2RU hardware with 64 ports of 100 GbE, which can be broken out to run at 50 GbE or 25 GbE for higher port counts. Options for 10 GbE and 40 GbE ports are also available. Broadcom’s 6.4 Tbps StrataXGS Tomahawk II chip powers the switch.
Dell EMC, along with rival HPE, is marketing its support for third-party network operating systems as a differentiator for its switches. Dell EMC is selling the Z9264F with the enterprise edition of its network operating system (NOS), called OS10, or with operating systems from Big Switch Networks, Cumulus Networks, IP Infusion or Pluribus Networks.
Other options for the Dell 100 GbE switch include the open source edition of OS10 and either the Metaswitch network protocol stack or the Quagga suite of open source applications for managing routing protocols. Finally, Dell EMC will sell just the hardware with several open source applications, including Quagga and the OpenSwitch or SONiC NOS.
The starting price for the Z9264F, without an operating system or optics, is $45,000.
Trends in the 100 GbE market
Several trends are driving the 100 GbE market. Service providers are redesigning their data centers to support software-based network services, including 5G and IoT. Also, financial institutions are providing services to customers over a growing number of mobile devices.
Meanwhile, cloud companies that provide infrastructure or platform as a service are buying more hardware to accommodate a significant increase in companies moving application workloads to the cloud. In 2017, public cloud data centers accounted for the majority of the $46.5 billion spent on IT infrastructure products — server, storage and switches — for cloud environments, according to IDC.
As a switch supplier, Dell EMC is a smaller player. The company is not one of the top five vendors in the market, according to IDC. Nevertheless, Dell EMC is a major supplier of open networking to the small number of IT shops buying the technology.
“While open networking is not mainstream yet in the enterprise, providing choice in terms of the complete hardware and software stack is something that large enterprises and service providers have started to look at favorably,” Mehra said.
Array Networks Inc. has introduced an upgrade of its network functions virtualization hardware. New features in the AVX NFV appliance, which provides application delivery, security and other networking operations, include support for 40 GbE interfaces and higher throughput for encrypted traffic.
Array, based in Milpitas, Calif., launched the AVX5800, AVX7800 and AVX9800 appliances this week. Along with support for optional 40 GbE network interface cards (NICs), the latest hardware provides a significant improvement in elliptic curve cryptography (ECC) processing over a Secure Sockets Layer virtual private network (SSL VPN).
The new NFV appliances include Array’s latest software release, AVX 2.7. The upgrade provides better fine-tuning of system resources for virtualized network functions running on the platform. Other improvements include the ability to back up and restore AVX configurations and images via USB and an online image repository for software running on AVX appliances.
Array has also added enhancements for companies using the NFV appliance with OpenStack environments. The company has introduced a hypervisor driver that lets the AVX platform serve as an OpenStack compute node.
The AVX NFV platform, launched in May 2017, comprises a series of virtualized servers for running Array and third-party applications, such as Fortinet’s FortiGate next-generation firewall and Positive Technologies’ PT AF web application firewall.
A10 Harmony Controller Update
A10 has launched an upgrade to its Harmony Controller, an application delivery controller, or ADC, that is also a cloud management, orchestration and analytics engine.
A10, based in San Jose, Calif., released Harmony version 4.1 last week, adding improvements to the product’s ability to configure and manage policies across A10’s line of Thunder security appliances.
New features in Harmony include preloaded Thunder ADC services. Also added to the controller is a self-service app for Thunder SSL inspection, which decrypts traffic, so security devices can analyze it.
Other improvements include extending Harmony’s analytics history to 12 months, so network operators and security pros can go further back in time when investigating events.
Harmony is a cloud-optimized ADC that can spin up specific services anywhere in a hybrid cloud environment. The software also incorporates per-application analytics and centrally manages and orchestrates application services.
Aviatrix improves its AWS security
Aviatrix has added better control over traffic leaving Amazon Web Services to its AVX network security software. The enhancements provide customers with stronger protection against internal threats and external attacks.
The new AVX capability announced last week focuses on filtering egress data from an AWS virtual private cloud (VPC). An AWS VPC provides a private cloud computing environment on the infrastructure-as-a-service provider’s platform. The benefit of a VPC is the granular control a company can get over a virtual network service serving sensitive workloads.
AVX for AWS VPCs verifies the traffic destination’s IP address, hostname or website, the vendor, based in Palo Alto, Calif., said. An inline, software-controlled AVX Gateway does the VPC filtering and prevents traffic from going to unauthorized locations.
The Aviatrix platform, which comprises a controller and gateway, operates over a network overlay that spans cloud and data center environments. The new VPC egress security feature is available as part of the platform, which is available only as software.
Companies can deploy the Aviatrix product through the AWS marketplace. Aviatrix also has versions of its technology for Microsoft Azure and Google Cloud.
When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).
We’ve contributed these changes back to the Open Container Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).
To demonstrate the workflow, we built a simple client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand that speak over physical COM, so we ran the application inside a VM and assigned the VM’s virtual COM port to the container. To mimic a COM device, a second application generates random integer values and sends them over a named pipe to the VM’s virtual COM port (the PowerShell console on the left).
As the video shows, when we do not assign the COM port to the container, the application running inside it fails with an IOException as soon as it tries to open a handle to the port (as far as the container knows, the COM port doesn’t exist). On the second run, we assign the COM port to the container, and the application successfully receives and prints the incoming random integers generated by the app running on the host.
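The host side of that demo is easy to approximate in ordinary code. The sketch below is a hypothetical stand-in, not Microsoft’s sample: one end writes random integers down a pipe (playing the COM device), and the other end reads and parses them, as the client app in the container does.

```python
import os
import random

# "Device" side: write a handful of random integers, one per line,
# the way the demo app streamed values over the virtual COM port.
read_fd, write_fd = os.pipe()
sent = [random.randint(0, 1000) for _ in range(5)]
with os.fdopen(write_fd, "w") as device:
    for value in sent:
        device.write(f"{value}\n")

# "Client" side: read the stream back and parse the integers,
# as the listener application in the container does.
with os.fdopen(read_fd) as client:
    received = [int(line) for line in client]

print(received == sent)  # True: the client parsed exactly what was sent
```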
How It Works
Let’s look at how it will work in Docker. From a shell, a user will type:
docker run --device="<IdType>/<Id>"
For example, if you wanted to pass a COM port to your container:
docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest
The value we’re passing to the device argument is simple: it consists of an IdType and an Id, delimited by a slash, “/”. For this coming release of Windows, we support only an IdType of “class”. For Id, this is a device interface class GUID. Whereas in Linux a user assigns individual devices by specifying a file path in the “/dev/” namespace, in Windows we’re adding support for a user to specify an interface class, and all devices that identify as implementing this class will be plumbed into the container.
If a user wants to specify multiple classes to assign to a container:
docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest
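The IdType/Id convention described above is simple enough to sketch. This hypothetical parser (ours, not Docker’s actual code) mirrors the stated rules: split on the slash, accept only the “class” IdType, and expect a device interface class GUID as the Id.

```python
import re

# A device interface class GUID: 8-4-4-4-12 hex digits.
_GUID_RE = re.compile(r"^[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}$")

def parse_device_arg(value: str):
    """Split a --device value into (IdType, Id) per the rules above."""
    id_type, sep, dev_id = value.partition("/")
    if not sep:
        raise ValueError("expected IdType/Id, delimited by a slash")
    if id_type != "class":  # the only IdType supported in this release
        raise ValueError(f"unsupported IdType: {id_type!r}")
    if not _GUID_RE.match(dev_id):
        raise ValueError(f"Id is not an interface class GUID: {dev_id!r}")
    return id_type, dev_id

# The COM-port interface class from the example above:
print(parse_device_arg("class/86E0D1E0-8089-11D0-9CE4-08003E301F73"))
```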
What are the Limitations?
Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).
We support a distinct list of devices: In this release, we targeted enabling a specific set of features and a specific set of host device classes, starting with simple buses. Devices are identified by interface class GUID; the serial (COM) port class used in the example above, for instance, is 86E0D1E0-8089-11D0-9CE4-08003E301F73.
Stay tuned for a Part 2 of this blog that explores the architectural decisions we chose to make in Windows to add this support.
We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.
Juniper Networks has introduced a security acceleration card that boosts the performance of the company’s SRX5000 line of firewalls to future-proof the data centers of service providers, cloud providers and large enterprises.
Juniper designed the services processing card, SPC3, for organizations anticipating large data flows from upcoming multi-cloud, internet-of-things and 5G applications. Besides meeting future demand, the SPC3 can also accommodate current traffic increases due to video conferencing, media streaming and other data-intensive applications.
The SPC3 improves performance by up to a factor of 11 across key metrics for the SRX5000 line, Juniper said. Organizations using the Juniper SPC2 can upgrade to the SPC3 without service interruptions.
What’s in the SRX5000 line?
The SRX5000 line’s security services include a stateful firewall, an intrusion prevention system, unified threat management and a virtual private network. Network operators manage security policies for SRX5000 hardware through Juniper’s Junos Space Security Director.
With the addition of an SPC, the SRX5000 line can support up to 2 Tbps of firewall throughput. The line’s I/O cards offer a range of connectivity options, including 1 Gigabit Ethernet, 10 GbE, 40 GbE and 100 GbE interfaces.
Security is one area in which Juniper has reported quarterly revenue growth while overall sales have declined. For the quarter ended June 30, Juniper reported last month that revenue from its security business increased to $79.5 million from $68.7 million a year ago.
However, overall revenue fell 8% to $1.2 billion, and the company said sales in the current quarter would also be down. Nevertheless, the company expects to return to quarterly revenue growth in the fourth quarter.
Avi Networks, a maker of software to improve application performance and security, has introduced version 18.1 of its Vantage Platform, which provides better integration with several Cisco products.
The upgrade offers “enhanced integrations” with Cisco AppDynamics, Tetration and its software-defined networking architecture, called Application Centric Infrastructure (ACI), according to Avi, based in Santa Clara, Calif. The ACI integration simplifies the process of placing application services, such as the Avi load balancer, on ACI networks.
Avi, which doesn’t sell physical hardware, provides software companies can deploy on premises or in the cloud. The Vantage Platform offers elastic load balancing and web application security on a per-application basis. The company also makes an application delivery controller that provides Layer 4-7 services to containerized applications running in cloud environments. The Avi load balancer and other services compete with products from F5 and Citrix.
The latest version of Vantage Platform provides integration between the Avi Controller and Cisco’s Application Policy Infrastructure Controller. Avi connects the controllers through REST APIs.
The Vantage upgrade also delivers telemetry from its Layer 4-7 services to the AppDynamics application performance management suite and the Tetration network analytics engine for the data center.
In June, Cisco Investments joined a $60 million round of funding for Avi, which brought its total funding to $115 million. Other investors included DAG Ventures, Greylock Partners, Lightspeed Venture Partners and Menlo Ventures.
LiveAction intros LiveNX Server Appliance
LiveAction plans to release on Aug. 1 its LiveNX Server Appliance, a network performance monitor developed with the help of Savvius, a packet monitor maker LiveAction acquired in June.
The latest product provides LiveAction customers with a hardware option for deploying the company’s technology. Previously, deployment options were limited to a public cloud or a virtualized server within a data center.
Savvius’ “extensive hardware tuning experience” made it possible for LiveAction to deliver the LiveNX hardware quickly, the company said. LiveAction plans to release other acquisition-related products in the future.
Flow monitoring is a core feature in LiveNX, which taps into the NetFlow data-collection component built into routers and switches from Cisco and other manufacturers. The software uses the data to determine packet loss, delay and round-trip time, while also showing network administrators how well the network is delivering application services.
Analysts expect LiveAction, based in Palo Alto, Calif., to combine its network performance monitor with Savvius’ packet monitor into a single product. Today, companies often buy those types of technologies separately, using a performance monitor for spotting problems and a packet monitor for performing in-depth analyses to pinpoint causes.
Corvil launches Intelligence Hub
Network analytics vendor Corvil plans to release this summer Intelligence Hub, a product designed to deliver intelligence to business operations, as well as network performance data to IT departments.
Intelligence Hub applies machine learning and predictive analytics to packet data to spot changes in business activity related to the total number of transactions, individual orders and products, conversion rates and response times. The software sends change alerts to business teams.
For network operators, the software provides many of the features contained in Corvil’s appliances, such as identifying and alerting on network anomalies, including packet loss, a dip in network performance or an increase in latency.
In general, Corvil products capture, timestamp and forward network packets to a separate capture appliance, where they are analyzed. Corvil can provide the hardware, or, in the case of Intelligence Hub, the software can run on a third-party device.
Corvil products can send customized streams of network data to big data sources, such as Elasticsearch, Hadoop, MongoDB and Splunk, so IT departments can draw more targeted information from the tools.
Corvil, headquartered in Dublin, competes with ExtraHop, ThousandEyes, Riverbed and NetScout.
Big Switch Networks has introduced software that provides a consistent way to build and manage network infrastructure across virtual networks in Amazon Web Services and in the private data center.
The vendor, which provides a software-based switching fabric for open hardware, said this week it would release the hybrid cloud technology in stages. First up is a software release next month for the data center, followed by an application for AWS in the fourth quarter.
The AWS product, called Big Cloud Fabric — Public Cloud, provides the tools for creating and configuring a virtual network to deliver Layer 2, Layer 3 and security services to virtual machines or containers running on the IaaS provider. AWS also offers tools for building the virtual networks, which it calls Virtual Private Clouds (VPCs).
In general, customers use AWS VPCs to support a private cloud computing environment on the service provider’s platform. The benefit is getting more granular control over the virtual network that serves sensitive workloads.
Big Cloud Fabric — Public Cloud lets companies create AWS VPCs and assign security policies for applications running on the virtual networks. The product also provides analytics for troubleshooting problems. While initially available on AWS, Big Switch plans to eventually make Big Cloud Fabric — Public Cloud available on Google Cloud and Microsoft Azure.
VPCs for the private data center
For the corporate data center, Big Switch plans to add tools to its software-based switching fabric — called Big Cloud Fabric — for creating and managing on-premises VPCs that operate the same way as AWS VPCs, said Prashant Gandhi, the chief product officer for Big Switch, based in Santa Clara, Calif.
Customers could use the on-premises VPCs, which Big Switch calls enterprise VPCs, as the virtual networks supporting computing environments that include Kubernetes and Docker containers, VMware’s vSphere server virtualization suite, and the OpenStack cloud computing framework.
“With the set of tools they are announcing, [Big Switch] will be able to populate these VPCs and facilitate a consistent deployment and management of networks across cloud and on premises,” said Will Townsend, an analyst at Moor Insights & Strategy, based in Austin, Texas.
Big Switch already offers a version of its Big Monitoring Fabric (BMF) network packet broker for AWS. In the fourth quarter, Big Switch plans to release a single console, called Multi-Cloud Director, for accessing all BMF and Big Cloud Fabric controllers.
In general, Big Switch supplies software-based networking technology for white box switches. Big Cloud Fabric competes with products from Cisco, Midokura and Pluribus Networks, while BMF rivals include technology from Gigamon, Ixia and Apcon.
Big Switch customers are mostly large enterprises, including communication service providers, government agencies and 20 Fortune 100 companies, according to the vendor.