
3 Fundamental Capabilities of VM Groups You Can’t Ignore

In a previous post, I introduced you to VM groups in Hyper-V and demonstrated how to work with them using PowerShell. I’m still working with them to see how I will incorporate them into my everyday Hyper-V work, but I already know that I wish the cmdlets for managing groups worked a little differently. But that’s not a problem. I can create my own tooling around these commands and build a solution that works for me. Let me share what I’ve come up with so far.

1. Finding Groups

As I explained last time, you can have a VM group that contains a collection of virtual machines, or nested management groups. By default, Get-VMGroup will return all groups. Yes, you can filter by name, but you can’t filter by group type. If I want to see only Management groups, I need to use a PowerShell expression like this:
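
# A reconstruction of that expression (the original appeared as a screenshot).
# A VM group's GroupType is either VMCollectionType or ManagementCollectionType.
Get-VMGroup | Where-Object { $_.GroupType -eq 'ManagementCollectionType' }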

This is not a complicated expression, but it becomes tedious when I am repeatedly typing or modifying this command. This isn’t an issue in a script, but for everyday interactive work, it can be a bit much. My solution was to write a new command, Find-VMGroup, that works identically to Get-VMGroup except that this version allows you to specify a group type.
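
A call along these lines is all it takes (check the module's help for the exact parameter names):

# Return only the management groups defined on this server
# (-GroupType and its value are illustrative here)
Find-VMGroup -GroupType ManagementCollectionType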

Finding specific VM Group types with PowerShell

Your output might vary from the screenshot, but I think you get the idea. The default is to return all groups, although in that case you might as well use Get-VMGroup. And because the group type is coded into the function, you can use tab completion to select a value.

Interested in getting the Find-VMGroup command? I have a section on how to install the module a little further down the page.

2. Expanding Groups

Perhaps the biggest issue (and even that might be a bit strong) I had with the VM group commands is that ultimately, what I really want is the members of the group. I want to be able to use groups to do something with all of the members of that group. And by members, I mean virtual machines. It doesn’t matter to me if the group is a VM Collection or a Management Collection. Show me the virtual machines!

Again, this isn’t technically difficult.
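
For a VM collection, the virtual machines are right there on the group object; the group name Web below is just a placeholder:

# The VMMembers property holds the virtual machines in a VM collection
(Get-VMGroup -Name 'Web').VMMembers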

Getting VM Group members

If you haven’t figured it out by now, I prefer simple. Getting virtual machines from a management group requires even more steps. Once again, I wrote my own command called Expand-VMGroup.
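
For comparison, here is the manual chaining a nested management group forces on you, followed by the equivalent call to my command (the group name, the single level of nesting and the Expand-VMGroup syntax are illustrative):

# The long way: walk the nested groups inside a management collection
(Get-VMGroup -Name 'Company').VMGroupMembers |
    Select-Object -ExpandProperty VMMembers

# The short way with the custom command
Expand-VMGroup -Name 'Company'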

Expanding a single VM group with a custom PowerShell command

The output has been customized a bit to provide a default, formatted view. There are in fact other properties you could work with.

Viewing all properties of an expanded VM group

Depending on the command, you might be able to pipe these results directly to another Hyper-V command. I also know that many of the Hyper-V cmdlets accept pipeline input by value, which lets you pass a list of virtual machine names to a command. So I added a parameter to Expand-VMGroup that will write just the virtual machine names to the pipeline as a list. Now I can run commands like this:
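
# 'Web' is a placeholder group name; -List is the switch in my build that
# emits bare VM names, so treat the exact syntax as illustrative
Expand-VMGroup -Name 'Web' -List | Start-VM -WhatIf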

Piping Expand-VMGroup to another Hyper-V command

Again, the module containing this command can be found near the end of the article and can be installed using Install-Module.

3. Starting and Stopping Groups

The main reason I want to use VM groups is to start and stop groups of virtual machines all at once. I could use Expand-VMGroup and pipe the results to Start-VM or Stop-VM, but I decided to make specific commands for starting and stopping all virtual machine members of a group. If a member of the group is already in the targeted state, it is skipped.
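
In the module these ended up as group-aware start and stop commands; the names below are illustrative, so check Get-Command -Module PSHyperVTools for the exact spelling:

# Start every member of the group that isn't already running
Start-VMGroup -Name 'Web'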

Starting members of a VM group

The third member of this group was already running so it was skipped. Now I’ll shut down the group.
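
# Shut down every running member of the group (command name illustrative)
Stop-VMGroup -Name 'Web'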

Stopping members of a VM group

It may not seem like much, but every little thing I can do to get more done with less typing and effort is worth my time. In these examples, I’m using full parameter names and typing out more than I actually need to for the sake of clarity.

How Do I Get These Commands?

Normally, I would show you code samples that you could use. But in this case, I think these commands are ready to use as-is. You can get the commands from my PSHyperVTools module, which is free to install from the PowerShell Gallery.
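
Installation is a one-liner:

# Install the module from the PowerShell Gallery for the current user
Install-Module -Name PSHyperVTools -Scope CurrentUser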

If you haven’t installed anything from the PowerShell Gallery before, you might get a prompt to update the version of NuGet. Go ahead and say yes. You’ll also be prompted about installing from an untrusted repository. You aren’t installing this on a mission-critical server, so you should be OK. Once installed, you can use the commands that I’ve demonstrated. They should all have help and examples.
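
# View the full help, including examples, for any of the commands
Get-Help Expand-VMGroup -Full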

Getting help for Expand-VMGroup

The module is open source, so if you’d like to review the code or the README first, jump over to https://github.com/jdhitsolutions/PSHyperV. There are a few other commands and features of the module that I hope to write about in a future article or two. But for now, I hope you’ll give these commands a spin and let me know what you think in the comments section below!

Author: Jeffery Hicks

CloudKnox Security adds privileged access features to platform

CloudKnox Security, a vendor in identity privilege management, introduced new features to its Cloud Security Platform, including Privilege-on-Demand, Auto-Remediation for Machine Identities and Anomaly Detection.

The offerings intend to increase enterprise protection from identity and resource risks in hybrid cloud environments. According to CloudKnox Security, the new release is an improvement on its existing Just Enough Privileges Controller, which enables enterprises to reduce overprovisioned identity privileges to appropriate levels across VMware, AWS, Azure and Google Cloud.

Privileged accounts are often targets for attack, and a successful hacking attempt can result in full control of an organization’s data and assets. The 2019 Verizon Data Breach Investigations Report highlighted privileged account misuse as the top threat for security incidents and the third-leading cause of security breaches.

The Privilege-on-Demand feature from CloudKnox Security enables companies to grant privileges to users on a specific resource, for a limited amount of time, on an as-needed basis. The options include Privilege-on-Request, Privilege Self-Grant and Just-in-Time Privilege, which give users access to a specific resource within a set time to perform an action.

The Auto-Remediation feature can frequently and automatically dismiss unused privileges of machine identities, according to the vendor. For example, the feature can be useful for service accounts that perform repetitive tasks with limited privileges; when these accounts are overprovisioned, organizations are particularly vulnerable to privilege misuse.

The Anomaly Detection feature creates risk profiles for users and resources based on data obtained by CloudKnox’s Risk Management Module. According to the vendor, the software intends to detect abnormal behaviors from users, such as a profile carrying out a high-risk action for the first time on a resource they have never accessed.

The company will demonstrate the new features for the first time at Black Hat USA in Las Vegas this year. CloudKnox’s update to its Cloud Security Platform follows competitor CyberArk’s recent updates to its own privileged access management offering, including zero-trust access, full visibility and control of privileged activities for customers, biometric authentication and just-in-time provisioning. Other market competitors that promise insider risk reduction, identity governance and privileged access management include BeyondTrust and One Identity.


Adobe Experience Platform adds features for data scientists

After almost a year in beta, Adobe has introduced Query Service and Data Science Workspace to the Adobe Experience Platform to enable brands to deliver tailored digital experiences to their customers, with real-time data analytics and understanding of customer behavior.

Powered by Adobe Sensei, the vendor’s AI and machine learning technology, Query Service and Data Science Workspace intend to automate tedious, manual processes and enable real-time data personalization for large organizations.

The Adobe Experience Platform (previously the Adobe Cloud Platform) is an open platform for customer experience management that breaks down data silos and synthesizes customer data into one unified customer profile.

According to Adobe, the volume of data organizations must manage has exploded. IDC predicted the Global DataSphere will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. And while more data is better, it makes it difficult for businesses and analysts to sort, digest and analyze all of it to find answers. Query Service intends to simplify this process, according to the vendor.

Query Service enables analysts and data scientists to perform queries across all data sets in the platform instead of manually combing through siloed data sets to find answers for data-related questions. Query Service supports cross-channel and cross-platform queries, including behavioral, point-of-sale and customer relationship management data. Query Service enables users to do the following:

  • run queries manually with interactive jobs or automatically with batch jobs;
  • subgroup records based on time and generate session numbers and page numbers;
  • use tools that support complex joins, nested queries, window functions and time-partitioned queries;
  • break down data to evaluate key customer events; and
  • view and understand how customers flow across all channels.

While Query Service simplifies the data identification process, Data Science Workspace helps to digest data and enables data scientists to draw insights and take action. Using Adobe Sensei’s AI technology, Data Science Workspace automates repetitive tasks and understands and predicts customer data to provide real-time intelligence.

Also within Data Science Workspace, users can take advantage of tools to develop, train and tune machine learning models to solve business challenges, such as calculating customer predisposition to buy certain products. Data scientists can also develop custom models to pull particular insights and predictions to personalize customer experiences across all touchpoints.

Additional capabilities of Data Science Workspace enable users to perform the following tasks:

  • explore all data stored in Adobe Experience Platform, as well as deep learning libraries like Spark ML and TensorFlow;
  • use prebuilt or custom machine learning recipes for common business needs;
  • experiment with recipes to create and train an unlimited number of tracked instances;
  • publish intelligent service recipes to Adobe I/O without involving IT; and
  • continuously evaluate intelligent service accuracy and retrain recipes as needed.

Adobe data analytics features Query Service and Data Science Workspace were first introduced as part of the Adobe Experience Platform in beta in September 2018. Adobe intends these tools to improve how data scientists handle data on the Adobe Experience Platform and create meaningful models that developers can work from.


Arrcus upgrades ArcOS to support Jericho2-based routers

Arrcus has introduced a version of its ArcOS network operating system that supports Broadcom’s StrataDNX Jericho2 system-on-a-chip for switches and routers. As a result, ArcOS supports Jericho2-based commodity hardware for hyperscale cloud, edge and 5G networks.

Key features supported by ArcOS when used with Jericho2 include the following:

  • switching capacity of up to 10 Tbps, five times more than the previous generation;
  • a fourfold increase in port density per chip;
  • up to 2.6 million IPv4 routes on chip;
  • real-time flow visibility at scale;
  • support for IPv4, IPv6, MPLS and segment routing forwarding;
  • standards-based BGP Flowspec;
  • visibility into access control lists and routes to help with traffic distribution; and
  • selectable scale profiles.

In January, Arrcus introduced an ArcOS upgrade that supported 400 Gigabit Ethernet white box switches powered by Broadcom’s StrataXGS Tomahawk 3 chipset. Hardware available with the ArcOS upgrade included 400 GbE and 100 GbE switches from Celestica and Edgecore.  

Arrcus competitors include Pluribus Networks, which in June debuted a no-frills edge router for co-location and service providers. Pluribus is marketing its Freedom Series 9532C-XL-R edge as a more cost-effective option to a traditional router.

Another competitor, Cumulus Networks, recently revamped its data center tool set by adding a graphical dashboard. Cumulus offers two core networking software products: Cumulus Linux and Cumulus NetQ.

Arrcus also launched ArcIQ, an analytics platform that uses artificial intelligence in an effort to provide real-time visibility, control and security. ArcIQ detects anomalous behavior and recommends corrective actions to improve uptime.

Additionally, Arrcus has raised another $30 million in Series B funding, bringing the total capital raised to $49 million. The funding enables Arrcus to expand its operations and reach of ArcOS.


Mini XL+, Mini E added to iXsystems FreeNAS Mini series

Open source hardware provider iXsystems introduced two new models to its FreeNAS Mini series storage system lineup: FreeNAS Mini XL+ and FreeNAS Mini E. The vendor also introduced tighter integration with TrueNAS and cloud services.

Designed for small offices, iXsystems’ FreeNAS Mini series models are compact, low-power and quiet. Joining the FreeNAS Mini and Mini XL, the FreeNAS Mini XL+ is intended for professional workgroups, while the FreeNAS Mini E is a low-cost option for small home offices.

The FreeNAS Mini XL+ is a 10-bay platform (eight 3.5-inch hot-swappable bays, one 2.5-inch hot-swappable bay and one 2.5-inch internal bay) and iXsystems’ highest-end Mini model. The Mini XL+ provides dual 10 Gigabit Ethernet (GbE) ports, eight CPU cores and 32 GB RAM for high-performance workloads. For demanding applications, such as hosting virtual machines or multimedia editing, the Mini XL+ scales beyond 100 TB.

For lower-intensity workloads, the FreeNAS Mini E is ideal for file sharing, streaming and transcoding video up to 1080p. The FreeNAS Mini E features four bays with quad GbE ports and 8 GB RAM, configured with 8 TB capacity.

The full iXsystems FreeNAS Mini series supports error-correcting (ECC) RAM and the Z File System with data checksumming, unlimited snapshots and replication. IT operations can remotely manage the systems via the Intelligent Platform Management Interface, and, depending on needs, the systems can be built as hybrid or all-flash storage.

FreeNAS provides traditional NAS and delivers network application services via plugin applications, featuring both open source and commercial applications to extend usability to entertainment, collaboration, security and backup. IXsystems’ FreeNAS 11.2 provides a web interface and encrypted cloud sync to major cloud services, such as Amazon S3, Microsoft Azure, Google Drive and Backblaze B2.

At Gartner’s 2018 IT Infrastructure, Operations & Cloud Strategies Conference, ubiquity of IT infrastructure was a main theme, and FreeNAS was named an option for file, block, object and hyper-converged software-defined storage. According to iXsystems, FreeNAS and TrueNAS are leading platforms for video, telemetry and other data processing in the cloud or a colocation facility.

IXsystems’ FreeNAS Mini lineup now includes the high-end FreeNAS Mini XL+ and entry-level FreeNAS Mini E.

With the upgrade, the FreeNAS Mini series can be managed by iXsystems’ unified management system, TrueCommand, which enables admins to monitor all TrueNAS and FreeNAS systems from a single UI and share access to alerts, reports and control of storage systems. A TrueCommand license is free for FreeNAS deployments of fewer than 50 drives.

According to iXsystems, FreeNAS Mini products reduce TCO by combining enterprise-class data management and open source economics. The FreeNAS Mini XL+ ranges from $1,499 to $4,299 and the FreeNAS Mini E from $749 to $999.

FreeNAS version 11.3 is available in beta, and the vendor anticipates a 12.0 release that will bring more efficiency to its line of FreeNAS Minis.


Cisco ASR 9000 router gets usage-based pricing

Cisco has introduced pay-as-you-go pricing for the latest line card of the ASR 9000 router, offering service providers a more flexible licensing model as they evaluate 5G infrastructure suppliers.

Cisco’s new licensing model, unveiled this week, applies to the new line card and subsequent generations. The latest hardware has a maximum throughput of 3.2 Tbps, uses a half watt of power per gigabit and is available with 32, 16 or 8 ports of 100 GbE. The cards fit into existing ASR 9000 chassis.

The pricing change lets service providers buy a license for ASR 9000 capacity across sites, but only pay for what they use. The cost would increase as ports are activated, said Sumeet Arora, the head of engineering for service provider network systems at Cisco.

Previously, service providers had to buy an ASR 9000 license for each site based on expected demand. As a result, the customers would pay for capacity they weren’t using, Arora said.

The ASR 9000 router in 5G

Cisco is making its pricing more customer-friendly as service providers consider technology like the ASR 9000 to support future 5G business and consumer services. The fifth-generation cellular technology delivers speed, capacity and latency improvements that will enable new products for healthcare, manufacturing, entertainment and the auto industry, proponents have said.

However, analysts do not expect the 5G services market to take off for several years. Cisco CEO Chuck Robbins recently told financial analysts that he didn’t expect significant 5G sales until 2020.

ASR 9000 router with 32 100 GbE ports

Until the 5G market opens, Cisco is aiming the new ASR 9000 line cards at the network edge where service providers deliver virtual private networks and other business services. Other “big use cases” include internet peering, data center interconnects and the IP infrastructure for mobile services, Arora said.

The ASR 9000 router competes with products from Juniper Networks, Huawei and Nokia. The latter two vendors, along with Ericsson, comprise the top three suppliers to service providers.

Last week, Juniper Networks announced a partnership with Ericsson to sell a collection of products for moving 5G traffic. Cisco announced a wide-ranging partnership with Ericsson in 2015, but that deal has stalled, and many analysts believe it is nearly dead.

“The Ericsson-Cisco partnership was a nonstarter, and both parties did not follow up on the promise that they had articulated during the announcement,” said Rajesh Ghai, an analyst at IDC.

Samsung adds Z-NAND data center SSD

Samsung’s lineup of data center solid-state drives introduced this week, including a Z-NAND model, targets smaller organizations facing demanding workloads such as in-memory databases, artificial intelligence and IoT.

The fastest option in the Samsung data center SSD family — the 983 ZET NVMe-based PCIe add-in card — uses the company’s latency-lowering Z-NAND flash chips. Earlier this year, Samsung announced its first Z-NAND-based enterprise SSD, the SZ985, designed for the OEM market. The new 983 ZET SSD targets SMBs, including system builders and integrators, that buy storage drives through channel partners.

The Samsung data center SSD lineup also adds the first NVMe-based PCIe SSDs designed for channel sales in 2.5-inch U.2 and 22-mm-by-110-mm M.2 form factors. At the other end of the performance spectrum, the new entry-level 2.5-inch 860 DCT 6 Gbps SATA SSD targets customers who want an alternative to client SSDs for data center applications, according to Richard Leonarz, director of product marketing for Samsung SSDs.

Rounding out the Samsung data center SSD product family is a 2.5-inch 883 DCT SATA SSD that uses denser 3D NAND technology (which Samsung calls V-NAND) than comparable predecessor models. Samsung’s PM863 and PM863a SSDs use 32-layer and 48-layer V-NAND, respectively, but the new 883 DCT SSD is equipped with triple-level cell (TLC) 64-layer V-NAND chips, as are the 860 DCT and 983 DCT models, Leonarz said.

Noticeably absent from the Samsung data center SSD product line is 12 Gbps SAS. Leonarz said research showed SAS SSDs trending flat to downward in terms of units sold. He said Samsung doesn’t see a growth opportunity for SAS on the channel side of the business that sells to SMBs such as system builders and integrators. Samsung will continue to sell dual-ported enterprise SAS SSDs to OEMs.

The Samsung 983 ZET NVMe SSD uses the company’s latency-lowering Z-NAND flash chips.

Z-NAND-based SSD uses SLC flash

The Z-NAND technology in the new 983 ZET SSD uses high-performance single-level cell (SLC) V-NAND 3D flash technology and builds in logic to drive latency down to lower levels than standard NVMe-based PCIe SSDs that store two or three bits of data per cell.

Samsung positions the Z-NAND flash technology it unveiled at the 2016 Flash Memory Summit as a lower-cost, high-performance alternative to new 3D XPoint nonvolatile memory that Intel and Micron co-developed. Intel launched 3D XPoint-based SSDs under the brand name Optane in March 2017, and later added Optane dual inline memory modules (DIMMs). Toshiba last month disclosed its plans for XL-Flash to compete against Optane SSDs.

Use cases for Samsung’s Z-NAND NVMe-based PCIe SSDs include cache memory, database servers, real-time analytics, artificial intelligence and IoT applications that require high throughput and low latency.

“I don’t expect to see millions of customers out there buying this. It’s still going to be a niche type of solution,” Leonarz said.

Samsung claimed its SZ985 NVMe-based PCIe add-in card could reduce latency by 5.5 times over top NVMe-based PCIe SSDs. Product data sheets list the SZ985’s maximum performance at 750,000 IOPS for random reads and 170,000 IOPS for random writes, and data transfer rates of 3.2 gigabytes per second (GBps) for sequential reads and 3 GBps for sequential writes.

The new Z-NAND based 983 ZET NVMe-based PCIe add-in card is also capable of 750,000 IOPS for random reads, but the random write performance is lower at 75,000 IOPS. The data transfer rate for the 983 ZET is 3.4 GBps for sequential reads and 3 GBps for sequential writes. The 983 ZET’s latency for sequential reads and writes is 15 microseconds, according to Samsung.

Both the SZ985 and new 983 ZET are half-height, half-length PCIe Gen 3 add-in cards. Capacity options for the 983 ZET will be 960 GB and 480 GB when the SSD ships later this month. SZ985 SSDs are currently available at 800 GB and 240 GB, although a recent product data sheet indicates 1.6 TB and 3.2 TB options will be available at an undetermined future date.

Samsung’s SZ985 and 983 ZET SSDs offer significantly different endurance levels over the five-year warranty period. The SZ985 is rated at 30 drive writes per day (DWPD), whereas the new 983 ZET supports 10 DWPD with the 960 GB SSD and 8.5 DWPD with the 480 GB SSD.

Samsung data center SSD endurance

The rest of the new Samsung data center SSD lineup is rated at less than 1 DWPD. The entry-level 860 DCT SATA SSD supports 0.20 DWPD for five years or 0.34 DWPD for three years. The 883 DCT SATA SSD and 983 DCT NVMe-based PCIe SSD are officially rated at 0.78 DWPD for five years, with a three-year option of 1.30 DWPD.

Samsung initially targeted content delivery networks with its 860 DCT SATA SSD, which is designed for read-intensive workloads. Sequential read/write performance is 550 megabytes per second (MBps) and 520 MBps, and random read/write performance is 98,000 IOPS and 19,000 IOPS, respectively, according to Samsung. Capacity options range from 960 GB to 3.84 TB.

“One of the biggest challenges we face whenever we talk to customers is that folks are using client drives and putting those into data center applications. That’s been our biggest headache for a while, in that the drives were not designed for it. The idea of the 860 DCT came from meeting with various customers who were looking at a low-cost SSD solution in the data center,” Leonarz said.

He said the 860 DCT SSDs provide consistent performance for round-the-clock operation with potentially thousands of users pinging the drives, unlike client SSDs that are meant for lighter use. The cost per GB for the 860 DCT is about 25 cents, according to Leonarz.

The 883 DCT SATA SSD is a step up, at about 30 cents per GB, with additional features such as power loss protection. The performance metrics are identical to the 860 DCT, with the exception of its higher random writes of 28,000 IOPS. The 883 DCT is better suited to mixed read/write workloads for applications in cloud data centers, file and web servers and streaming media, according to Samsung. Capacity options range from 240 GB to 3.84 TB.

The 983 DCT NVMe-PCIe SSD is geared for I/O-intensive workloads requiring low latency, such as database management systems, online transaction processing, data analytics and high performance computing applications. The 2.5-inch 983 DCT in the U.2 form factor is hot swappable, unlike the M.2 option. Capacity options are 960 GB and 1.92 TB for both form factors. Pricing for the 983 DCT is about 34 cents per GB, according to Samsung.

The 983 DCT’s sequential read performance is 3,000 MBps for each of the U.2 and M.2 983 DCT options. The sequential write performance is 1,900 MBps for the 1.92 TB U.2 SSD, 1,050 MBps for the 960 GB U.2 SSD, 1,400 MBps for the 1.92 TB M.2 SSD, and 1,100 MBps for the 960 GB M.2 SSD. Random read/write performance for the 1.92 TB U.2 SSD is 540,000 IOPS and 50,000 IOPS, respectively. The read/write latency is 85 microseconds and 80 microseconds, respectively.

The 860 DCT, 883 DCT and 983 DCT SSDs are available now through the channel, and the 983 ZET is due later this month.

VMware takes NSX security to AWS workloads

VMware has introduced features that improve the use of its NSX network virtualization and security software in private and public clouds.

At VMworld 2018 in Las Vegas, VMware unveiled an NSX instance for AWS Direct Connect and technology to apply NSX security policies on Amazon Web Services workloads. Also, VMware said Arista Networks’ virtual and physical switches would enforce NSX policies — the result of a collaboration between the two vendors.

VMware is applying NSX security policies, including microsegmentation, on AWS workloads by adding support of NSX-T to VMware Cloud on AWS. NSX-T provides networking and security management for containers and non-VMware virtualized environments. VMware Cloud on AWS is a hybrid cloud service that runs the VMware software-defined data center stack on AWS.

The latest AWS feature is in NSX-T Data Center 2.3, which VMware introduced at VMworld. Other features added to the newest version of NSX-T include support for containers and Linux-based workloads running on bare-metal servers. NSX-T uses Open vSwitch to turn a Linux host into an NSX-T transport node and to provide stateful security services.

VMware plans to release NSX-T 2.3 by November.

NSX on AWS Direct Connect

To help companies connect to AWS, VMware introduced integration between NSX and AWS Direct Connect. The combination will provide NSX-powered connectivity between workloads running on VMware Cloud on AWS and those running on a VMware-based private cloud in the data center.

AWS Direct Connect lets companies bypass the public internet and establish a dedicated network connection between a data center and an AWS location. Direct Connect is particularly useful for companies with rules against transferring sensitive data across the public internet.

Finally, VMware introduced interoperability between Arista’s CloudVision and NSX. As a result, companies can have NSX security policies enforced on Arista switches running either virtually in a public cloud or the data center.

Arista CloudVision manages switching fabrics within multiple cloud environments. Last year, the company released a virtualized version of its EOS network operating system for AWS, Google Cloud Platform, Microsoft Azure and Oracle Cloud.

VMware is using its NSX portfolio to connect and secure infrastructure and applications running in the data center, branch office and public cloud. For the branch office, VMware has integrated NSX with the company’s VeloCloud software-defined WAN to provide microsegmentation for applications at the WAN’s edge.

VMware competes in multi-cloud networking with Cisco and Juniper Networks.

Cisco adds LTE modem to Meraki MX security appliance

Cisco has introduced Meraki MX security appliances with a built-in 4G wireless broadband modem. The company also added the Long Term Evolution, or LTE, modem to a new Z-series teleworker gateway.

This week, Cisco launched the Meraki MX67C and MX68CW with an integrated CAT 6 LTE cellular modem. Cisco also unveiled four MX models without LTE but with more throughput than older models: the MX67, MX68, MX67W and MX68W. All the new MX hardware, the first in the Meraki line to support the 802.11ac Wave 2 Wi-Fi standard, can deliver up to 450 Mbps of firewall throughput.

Network admins manage Cisco Meraki switches, appliances and access points through a web-based console called the Meraki Dashboard, which also provides automation and analytics. Cisco has aimed the product line at small branch offices and retailers that need a no-frills wireless LAN. For an access layer that meets the need of larger enterprises, Cisco offers the Aironet APs and Catalyst switches.

MX appliances are unified threat management devices with software-defined WAN functionality. A UTM system combines and integrates multiple security services and features, including a firewall.

Uses for LTE in the Meraki MX

The higher throughput in the latest MX appliances is aimed at companies accessing SaaS applications, such as Microsoft Office 365, said Imran Idrees, a marketing manager in Cisco’s Meraki unit. Remote branch offices can use the LTE modem as a substitute for broadband when it isn’t available.

Companies could also use the LTE connection as a failover link, Idrees said. If the Ethernet connection goes down, then the MX would switch to LTE.

“Given the ubiquity and increasing performance of LTE, this is a relatively inexpensive way for a branch office to increase its network availability,” said Mark Hung, an analyst at Gartner.

The cellular MX models have one Nano SIM card slot for connecting to a carrier’s LTE network. The built-in modem makes it possible to track usage and performance of the MX from the Meraki Dashboard.

Getting LTE on older Meraki MX models required companies to plug in a carrier-provided USB stick that contained the 4G modem. Because the modem wasn’t integrated with the MX, no data was captured for tracking performance.

With the latest models, data captured from the LTE connection includes signal strength, the provider’s name and how much data is traveling over the link. All the information is displayed on the Meraki Dashboard.

LTE in Meraki Dashboard

The Z3C gateway

The Z3C teleworker gateway is for workers who need a secure connection to the corporate network while they are on the road. “It’s a very compact device that a business person would take around with them,” Idrees said.

The previous version of the gateway, Z3, required a traveler to plug a hotel room’s Ethernet cable into the device to gain access to the corporate network. The Z3C has the option of connecting over LTE.

Companies that want to use a Meraki WLAN have to purchase the product line’s devices and a cloud subscription license. Once the license is registered, network managers can configure and manage the hardware through the Meraki Dashboard.

New Dell EMC 100 GbE switch tailored for east-west traffic

Dell EMC has introduced a high-density 100 Gigabit Ethernet switch aimed at service providers and large enterprises that need more powerful hardware to support a growing number of cloud applications and digital business initiatives.

Dell EMC launched the Z9264F open networking switch this week, listing its target customers as hyperscale data center operators, tier-one and tier-two service providers, and enterprises. The Dell EMC 100 GbE switch is designed for leaf-spine switching architectures.

“Dell’s new, high-performance, low-latency 100 GbE switch is ideally suited for large enterprises and service providers,” said Rohit Mehra, analyst at IDC. “The continued growth of cloud applications that require high-performance, east-west traffic-handling capabilities will likely be one of the key drivers for this class of switches to see increased traction.”

Indeed, Dell EMC, Cisco, Hewlett Packard Enterprise (HPE) and Juniper Networks are counting on an increase in data center traffic to sell their 100 GbE switches. So far, demand for the hardware has been robust. In the first quarter, revenue from 100 GbE gear grew nearly 84% year over year to $742.5 million, according to IDC. Port shipments increased almost 118%.

The Dell EMC 100 GbE switch is a 2RU system available with 64 ports of 100 GbE, 128 ports of 25 GbE or 64 ports of 50 GbE. Options for 10 GbE and 40 GbE ports are also available. Broadcom’s 6.4 Tbps StrataXGS Tomahawk II chip powers the switch.

Dell EMC, along with rival HPE, is marketing its support for third-party network operating systems as a differentiator for its switches. Dell EMC is selling the Z9264F with the enterprise edition of its network operating system (NOS), called OS10, or with operating systems from Big Switch Networks, Cumulus Networks, IP Infusion or Pluribus Networks.

Other options for the Dell 100 GbE switch include the open source edition of OS10 and either the Metaswitch network protocol stack or the Quagga suite of open source applications for managing routing protocols. Finally, Dell EMC will sell just the hardware with several open source applications, including Quagga and the OpenSwitch or SONiC NOS.

The starting price for the Z9264F, without an operating system or optics, is $45,000.

Trends in the 100 GbE market


Several trends are driving the 100 GbE market. Service providers are redesigning their data centers to support software-based network services, including 5G and IoT. Also, financial institutions are providing services to customers over a growing number of mobile devices.

Meanwhile, cloud companies that provide infrastructure or platform as a service are buying more hardware to accommodate a significant increase in companies moving application workloads to the cloud. In 2017, public cloud data centers accounted for the majority of the $46.5 billion spent on IT infrastructure products — server, storage and switches — for cloud environments, according to IDC.

In the first quarter, original design manufacturers accounted for almost 30% of all infrastructure hardware and software sold to public cloud providers, according to Synergy Research Group, based in Reno, Nev. Dell EMC had a 5% to 10% share, which was the same size share as Cisco and HPE.

As a switch supplier, Dell EMC is a smaller player. The company is not one of the top five vendors in the market, according to IDC. Nevertheless, Dell EMC is a major supplier of open networking to the small number of IT shops buying the technology.

“While open networking is not mainstream yet in the enterprise, providing choice in terms of the complete hardware and software stack is something that large enterprises and service providers have started to look at favorably,” Mehra said.