
Cisco Assurance services verify intent-based networking

Cisco has introduced a policy-centric layer of network analytics for the data center, campus and the wireless LAN, providing customers with additional intelligence to pinpoint problems and fix them. The latest technology represents a significant advancement in Cisco’s march toward intent-based networking.

Cisco’s Assurance analytics, launched on Tuesday, focuses on the nonpacket data the company’s Tetration network monitoring and troubleshooting software doesn’t cover. Unlike Tetration, Assurance keeps tabs on policies created in Cisco software to control the network’s infrastructure, such as switches, firewalls and load balancers.

Cisco Assurance is the latest step in the company’s intent-based networking (IBN) initiative, which is centered around creating policies that tell software what an operator wants the network to do. The application then makes the infrastructure changes.

The engine behind Cisco Assurance services

Cisco’s latest layer of analytics for the data center is called the Network Assurance Engine, which Cisco has tied to its software-defined networking (SDN) architecture, called Application Centric Infrastructure (ACI). The new technology is virtualized software that network operators deploy on any server.

Once installed, the software logs into the ACI controller, called the Application Policy Infrastructure Controller (APIC), which shares network policies, switch configurations and the data-plane state with the Assurance Engine.

At that point, the software creates a map of the entire ACI fabric and then builds a mathematical model that spans underlays, overlays and virtualization layers. The model establishes the network state, which Assurance compares to what operators want the network to do based on policies they’ve created.

“If a network engineer used flawed logic in expressing intent, the Assurance Engine would find that flaw when the intent is translated to network state,” said Shamus McGillicuddy, an analyst for Enterprise Management Associates, based in Boulder, Colo.

Other vendors, such as Forward Networks and Veriflow, also build models of network state and then perform analytics to spot discrepancies with a network operator’s intent. Cisco’s differentiator is the integration with its APIC policy controller, which creates a closed-loop system for ensuring operator intent matches network state, McGillicuddy said.

Knowing where an engineer’s policies have “gone off the rails” is a big help in keeping networks running smoothly, said Andrew Froehlich, the president of consulting firm West Gate Networks, based in Loveland, Colo. “For network administrators, this is a huge win, because it will help them to pinpoint where problems are occurring when people start shouting the network is slow.”

Cisco has tied the analytics engine to a troubleshooting library of what the company has identified as the most common network failure scenarios. As a result, when an engineer makes a change to the network, the Assurance Engine can determine, based on its knowledge base, where the modification could create a problem.

Initially, the Assurance Engine will cover only the Nexus 9000 switches required for an ACI fabric. Later in the quarter, Cisco plans to extend the software’s capabilities to firewalls, load balancers and other network services from Cisco or partners.

Cisco Assurance services for the campus

For the campus, Cisco has added its new analytics engine to version 1.1 of the Digital Network Architecture (DNA) Center — Cisco’s software console for distributing policy-based configurations across wired and wireless campus networks. DNA Center, which costs $77,000, requires the use of Cisco Catalyst switches and Aironet access points. Companies using DNA Center have to buy a subscription license for each network device attached to the software.

The Assurance analytics in the latest release of DNA Center draws network telemetry data from the APIC-EM controller, the campus network version of the ACI controller used in the data center. The model created from the data lets operators monitor applications, switches, routers, access points and end-user devices manufactured by Cisco partners, such as Apple.

Like the data center software, the Cisco Assurance services for the campus are focused on troubleshooting and remediation. Later in the quarter, Cisco will add similar features to the cloud-based management console of the Meraki wireless LAN. Problems the Meraki analytics will help solve include dropped traffic, latency and access-point congestion.

Today, most operators manage networks by programming switches and scores of other devices manually, usually via a command-line interface. Proponents of IBN claim the new paradigm is more flexible and agile in accommodating the needs of modern business applications. In the future, Cisco, Juniper Networks and others want to use machine learning and artificial intelligence to have networks fix common problems without operator involvement.

Despite progress vendors have made in developing IBN systems, enterprises are just beginning to roll out the methodology in their operations. Gartner predicted the number of commercial deployments will be in the hundreds through mid-2018, increasing to more than 1,000 by the end of next year.

An Amazing 2017: What’s Next?

It was just eight months ago when we introduced Mixer. Looking back on 2017, it’s hard to believe that so much has happened in such a short time… and it’s been an amazing feeling getting to bring our vision to life in a service and community unlike any other.

In 2017, we introduced 4-person co-streaming, enabling streamers to combine their streams and chat on Mixer into a single unified experience, and giving viewers a way to watch the action from multiple perspectives. Several unique Mixer-interactive game experiences were brought to life during 2017, giving streamers and viewers a totally new way to play. These included Minecraft, Hello Neighbor, Death’s Door, Killing Floor 2, and multiple Crowd-Play enabled games from Telltale Games. Also in 2017, Mixer made a big investment in mobile broadcasting with the launch of the Mixer Create app for streamers and a new Mixer app for viewing, both free to download on iOS and Android devices.

We added support for 21 languages, making it easier to enjoy the full Mixer experience around the world. And Mixer HypeZone was born, catering to fans of PUBG and battle-royale experiences. With HypeZone, you get 24×7, non-stop action with a channel dedicated to watching the final moments of matches in PUBG (high concentration of Chicken Dinners here!). Although still in beta, the HypeZonePUBG channel has already surpassed one million views!

2017 was also a big growth year for new and existing Mixer Partners. A special shout out to all the new Partners who joined the Mixer community from around the world, as well as the many up-and-coming streamers who have made Mixer the home for their growing communities. We also celebrated a very special milestone for one of our Mixer vets—Siefe reached more than 1 million views on November 16 (a Mixer first). And we’re cheering on several Partners who took the plunge and became full-time streamers on Mixer, including LenaAxios, Covent, LuckyShots and many more!

We’ve been humbled by all of your support and interest, and we’re completely blown away with how quickly the Mixer community has grown. 2017 was also the year when Mixer exceeded 10 million active users in a month for the first time, a milestone we’re all super proud of. You all watched and interacted with streams ranging from PUBG to cooking shows… from Minecraft to Clash Royale… and everything in between.

But even with all that’s happened in 2017 — we’re just barely getting started. 2018 is already shaping up to be an even bigger year for Mixer.

Last week down in San Antonio at PAX South, our Mixer Partners turned out in force, streaming non-stop from the show floor, showing off built-in Mixer interactivity in the upcoming game, The Darwin Project, and sharing their Mixer love with PAX South attendees throughout the weekend. We’re already looking forward to seeing even more of you at PAX East in Boston later this Spring and at E3 in June.

On the technology side, we’re continuing to refine and expand the HypeZone experience based on your feedback. You’ll see improvements and new capabilities coming to HypeZone in the coming months. We’re also working with numerous game publishers to bring Mixer interactivity to even more games soon, along with continued investment in new interactive capabilities for streamers to take advantage of! More to come on that at the Game Developers Conference in March. We’re also updating and innovating in several other areas across the Mixer platform; we’ll be announcing those additions as they get closer to release.

For our Mixer Partners, we are investing in new ways for them to connect with their viewers and build successful communities on Mixer. As part of this, we’re excited to share that the Mixer Direct Purchase program is now in testing and will be launching broadly soon! With Mixer Direct Purchase, viewers can purchase digital games and game DLC directly through the Mixer streaming platform. When you’re watching a Mixer Partner playing a game or DLC that you want to own, you can purchase directly from the stream you’re watching. Mixer partners will earn a percentage from all purchases made through their stream, in turn helping them to continue to bring great content to their Mixer community. To start with, Mixer Direct Purchase will be available for all games in the Microsoft digital store, including more than 5,000 games across Xbox and Windows 10! Our aspiration is to make even more content available through Direct Purchase down the road.

Also coming soon is the ability to donate to your favorite content creators directly through Mixer. While streamers can continue using external donation services, we’re adding the option for viewers to donate directly on the platform, without having to leave Mixer. And by popular demand, we are adding the ability to subscribe to specific channels inside the Mixer app on Xbox One (similar to what you can do today on the web and mobile). Stay tuned, as we’ll have more details to share soon about each of these programs in the coming weeks and months.

Finally, to everyone in the Mixer community: THANK YOU! We appreciate the energy, passion and welcoming attitude that you bring to the Mixer community every day. As a small token of thanks to each of you for being such a big part of Mixer in 2017, we’ve made 3 new global emotes that will be available on Mixer this week to celebrate these milestones!

Thank you so much to everyone who supported the Mixer community in 2017, and a huge welcome to Mixer in 2018.

Sincerely, James Boehm and Matt Salsamendi

Cloudian HyperStore 7 targets multi-cloud complexities

Cloudian today introduced the latest version of HyperStore that pools storage from multiple cloud environments under a single namespace, so data can be managed, protected and searched as a single entity.

Cloudian HyperStore 7 supports Amazon Web Services (AWS), Microsoft Azure Blob Storage and Google Cloud Platform, along with object storage and NFS and SMB protocols. The software runs on scale-out 4U storage nodes that can be clustered across on-premises and public cloud data centers. The company’s object storage is based on the Apache Cassandra open source distributed database.

The Cloudian HyperStore software and scale-out storage nodes natively support the Simple Storage Service API to provide services such as data management, data protection, high availability, search and geodistribution from a single storage pool.
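Because HyperStore exposes the S3 API natively, standard S3 tooling can be pointed at it instead of AWS. As a hedged sketch using the AWS Tools for PowerShell (the endpoint URL, credential values and bucket name below are hypothetical placeholders, not Cloudian-documented values):

```powershell
# Store credentials issued by the HyperStore deployment under a named profile.
Set-AWSCredential -AccessKey 'AKEXAMPLE' -SecretKey 'EXAMPLESECRET' -StoreAs 'hyperstore-creds'

# List buckets in the HyperStore namespace by overriding the service endpoint.
Get-S3Bucket -ProfileName 'hyperstore-creds' -EndpointUrl 'https://s3.hyperstore.example.com'

# Upload an object to a bucket in the shared storage pool.
Write-S3Object -BucketName 'backups' -File '.\archive.zip' `
    -ProfileName 'hyperstore-creds' -EndpointUrl 'https://s3.hyperstore.example.com'
```

The same `-EndpointUrl` override works for other S3 cmdlets, which is what makes an S3-compatible store usable with existing tooling.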

Greg Schulz, founder of consulting firm StorageIO in Stillwater, Minn., said Cloudian’s HyperStore 7 can remove complexities associated with accessing files and objects in the cloud and on premises.

“For years, the conversation was about using the cloud as a target,” Schulz said. “Now, it’s not just about using cloud as a target, but doing other things. Cloudian can do scale-out. They can do on premises. They can do file and object. What [Cloudian] is doing is bringing all these attributes together and also adding multi-cloud.”


Cloudian HyperStore consolidates object-based or file-based unstructured data into a single, scalable storage pool. HyperStore users can replicate across clouds. It can be used for backup, disaster recovery, archiving, collaboration and data management. It provides synchronous and asynchronous replication, erasure coding and bucket-level granularity for all storage policies, multi-tenant services and quality of service. Cloudian HyperStore also supports AES-256 server-side encryption and SSL encryption for data in transit.

“It’s all clustered, so you can stripe data or replicate data for protection,” said Jon Toor, Cloudian’s chief marketing officer. “This is an application running on clustered devices that uses back-end storage in the cloud. You can run three devices in an on-premises data center and one in the cloud, and when you look at the management console, you will see four locations in a cluster. From a management standpoint, it all looks like the same thing.”

“We could cluster before, but it was in a single data center,” Toor added. “Now, we can manage data across multiple environments. We provide a common language and management pool for Amazon, Azure and Google [clouds].”

Server Core management remains a challenge for some

Server Core introduced a number of benefits to IT, but certain hurdles have stymied its progress in the enterprise.

Microsoft unveiled Server Core with Windows Server 2008. It wasn’t a new operating system, but a slimmed-down version of the full server OS. Server Core removed the GUI, but kept its infrastructure functionality. This reduced the codebase and brought several advantages: a smaller attack surface, fewer patches, quicker installs and more reliability.

But the lack of a GUI also made Server Core management a challenge. The absence of a traditional Windows interface took away the comfort level for the admin when it came to deployments and overall use of the operating system.

Administrators missed the interface because, while the command line might not have been a complete mystery, using it to manage every aspect of the OS was new. A strong focus on PowerShell to control the OS caused further discomfort for many in IT. PowerShell arrived at roughly the same time as Server Core, and the combination left many admins feeling unwelcome in this new world.


Server Core management with PowerShell and the command prompt are two very different things. Beyond the differences in syntax, command-prompt scripting is linear, while PowerShell is an object-oriented language. The MS-DOS-style command prompt has been around a lot longer, but it has not kept up with the features and functionality of newer Windows operating systems. Microsoft expanded on scripting after MS-DOS with Visual Basic Script (VBS), but that introduced security issues from VBS-based viruses. Microsoft developed PowerShell to provide extensive functionality with fewer security liabilities. PowerShell has cmdlets tightly integrated with Microsoft’s newest operating systems for both basic and advanced functionality, which MS-DOS and VBS lacked.
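The object-versus-text difference is easy to see in practice. As a small illustrative sketch (not from the article), the same "find the large processes" task compares like this:

```powershell
# cmd.exe emits plain text, so filtering means fragile string matching:
#   tasklist | findstr /i "chrome"
# PowerShell cmdlets emit objects, so you filter and sort on typed
# properties instead of parsing columns of text:
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object -Property WorkingSet64 -Descending |
    Select-Object -Property Name, Id, WorkingSet64
```

Because each stage passes real objects down the pipeline, the script keeps working even if the display format of the command changes.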

Microsoft aids learning efforts

PowerShell is the predominant command-line language for Windows. MS-DOS exists but has had few updates to its core. Microsoft helped establish this course in the later versions of Windows Server. Many of the traditional server configuration wizards can produce the PowerShell code for the actions the administrator executes from the GUI. This capability changed the game for many administrators with limited programming experience or time to learn PowerShell scripting. Rather than write scripts from scratch, IT pros could take the automatically generated code and manipulate it to work on other servers. This feature was a step up from taking code examples from the Internet that only worked with very specific conditions or environments.
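For instance, the Add Roles and Features wizard in Server Manager can export the PowerShell equivalent of what it just did. A hedged sketch of reusing that generated code against other servers (the server names here are hypothetical):

```powershell
# The wizard generates a line like the Install-WindowsFeature call below
# for an IIS install; once captured, it can be retargeted at any server.
$servers = 'CORE01', 'CORE02', 'CORE03'   # hypothetical server names
foreach ($server in $servers) {
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools -ComputerName $server
}
```

This is the "manipulate the generated code" workflow the paragraph describes: the admin writes almost nothing from scratch.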

Microsoft helped spur Server Core adoption by improving remote management in later server OS versions with its Server Manager console. Microsoft had always offered some level of remote management, but Windows Server 2012 and beyond put a much stronger focus on it, which meant an admin could use a single GUI-based server to handle Server Core management for dozens — or even hundreds — of installations of this minimal operating system over the network. This kept the GUI aspect admins were familiar with while allowing the enterprise to take advantage of more Server Core deployments. While they did not get the full benefits of PowerShell and other automation tools, this move helped admins get started with Server Core.
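Alongside Server Manager, PowerShell remoting gives the same reach from a single management box. A minimal sketch (the computer names are assumed, and WinRM must already be enabled on the targets):

```powershell
# Query a service on every Server Core box in one pass; the returned
# objects automatically carry a PSComputerName property identifying
# which host each result came from.
$coreServers = 'CORE01', 'CORE02'   # hypothetical Server Core hosts
Invoke-Command -ComputerName $coreServers -ScriptBlock {
    Get-Service -Name 'W32Time'
}
```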

When administrators start with Server Core, it’s helpful to take the long-term view: How far do you want to go with it? Some companies that want to implement Server Core will be content to use remote management, but PowerShell will unlock the full potential of this server OS deployment.

Admins new to PowerShell will have a bit of a learning curve to overcome, but a few tools can help. Utilities such as Notepad++ make editing PowerShell code easier with syntax highlighting. Another scripting tool is Microsoft’s PowerShell Integrated Scripting Environment (ISE), which can test code blocks and commands and helps debug issues in a context-sensitive environment.

Server Core should only grow in popularity. Microsoft runs workloads on its new Azure Stack on Server Core, and administrators should consider it if only for the reduced patching workload.

In Windows Server 2016, the default installation is Server Core, and administrators need to manually select a different option to get the full server GUI setup. Also removed from Windows Server 2016 is the ability to install a desktop onto Server Core after deployment.

With the enhancements to remote management, the future is clear for the Microsoft server OS — and it’s without a GUI.


WPA3 Wi-Fi protocol aims to improve security in 2018

The Wi-Fi Alliance introduced the next generation of Wi-Fi Protected Access — WPA3 — which aims to improve password security as well as security for IoT devices.

The industry will begin rolling out products supporting the WPA3 Wi-Fi protocol in 2018. The new protocol will replace WPA2, meaning vendors will have to follow the security standard in order to carry the “Wi-Fi Certified” branding.

In an official announcement from CES in Las Vegas, the Wi-Fi Alliance noted that the WPA3 Wi-Fi protocol will include “four new capabilities for personal and enterprise Wi-Fi networks.”

“Two of the features will deliver robust protections even when users choose passwords that fall short of typical complexity recommendations, and will simplify the process of configuring security for devices that have limited or no display interface. Another feature will strengthen user privacy in open networks through individualized data encryption,” the Wi-Fi Alliance wrote. “Finally, a 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) Suite from the Committee on National Security Systems, will further protect Wi-Fi networks with higher security requirements such as government, defense, and industrial.”

According to Mathy Vanhoef, a postdoctoral researcher in network security and applied cryptography and one of the researchers behind the WPA2 KRACK vulnerability, which took advantage of the WPA2 four-way handshake network connection process to produce a man-in-the-middle exploit, WPA3 implements a more secure handshake that should help prevent brute-force password attacks.

Marc Bevand, former security engineer at Google, described in a Hacker News forum post how this type of password authenticated key exchange (PAKE) can prevent attacks online and off.

“[Offline, an attacker] can try to decrypt the packet with candidate passwords, but he does not know when he guesses the right one, because a successful decryption will reveal [values that] are indistinguishable from random data. And even if he guessed right, he would obtain [public keys], but would not be able to decrypt any further communications as the use of Diffie-Hellman makes it impossible to calculate the encryption key,” Bevand wrote. “[Online,] if he actively [man-in-the-middles] the connection and pretends to be the legitimate server, he can send his own [key and password] to the client using one guessed candidate password. If he guessed wrong … each authentication attempt gives him only one chance to test one password. If, out of frustration, the client tries to retype the password and re-auth three times, then the attacker can at most try to guess three candidate passwords. He can’t brute force many passwords.”
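The Diffie-Hellman point in that quote can be illustrated with a toy exchange. This is a deliberately tiny, insecure sketch to show the mechanism, not the actual Dragonfly handshake WPA3 uses:

```powershell
# Toy Diffie-Hellman: each side combines its own private value with the
# other side's public value and arrives at the same shared secret. An
# eavesdropper sees only $p, $g, $A and $B, which is not enough to
# recover the secret without solving a discrete logarithm.
$p = [bigint]23; $g = [bigint]5        # public parameters (toy-sized)
$a = [bigint]6;  $b = [bigint]15       # private values, never transmitted
$A = [bigint]::ModPow($g, $a, $p)      # Alice sends A = g^a mod p
$B = [bigint]::ModPow($g, $b, $p)      # Bob sends   B = g^b mod p
$aliceSecret = [bigint]::ModPow($B, $a, $p)
$bobSecret   = [bigint]::ModPow($A, $b, $p)
"$aliceSecret $bobSecret"              # prints: 2 2 (both derive the same key)
```

With real-sized parameters, guessing a candidate password never reveals the session key, which is why each online guess buys the attacker only one attempt.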

Additionally, experts noted that the WPA3 Wi-Fi protocol improvements to “configuring security for devices that have limited or no display interface” could help improve security on IoT devices, but not all experts were optimistic about the possibility; Tom Van de Wiele, principal cyber security consultant and red-teamer at F-Secure, was among the skeptics.

Intelligent Communications takes the next step with calling in Teams

In September, we introduced a new vision for intelligent communications including plans to evolve Microsoft Teams into the primary client for calling and meetings in Office 365. As part of this, we are bringing comprehensive calling and meetings capabilities into Microsoft Teams, along with data and insights from the Microsoft Graph, and a strong roadmap of innovation to empower teams to achieve more.

Today we are releasing new calling capabilities in Teams, providing full-featured dialing capabilities, complete with call history, hold/resume, speed dial, transfer, forwarding, caller ID masking, extension dialing, multi-call handling, simultaneous ringing, voicemail, and text telephone (TTY) support. You can expect this to roll out over the next few hours, and it should come soon to your tenant.

To add calling in Teams for your users, the first thing you need is Phone System (formerly Cloud PBX), which is included with Office 365 E5 and available as an add-on to other Office 365 plans. From there, you can subscribe to a Calling Plan (formerly known as PSTN Calling) for any number of users in your organization.

Together, a Calling Plan and Phone System in Office 365 create a phone system for your organization, giving each user a primary phone number and letting them make and receive phone calls to and from outside of your organization. This solution also allows your organization to shift away from expensive telephony hardware and to simplify by centralizing the management of your phone system.

With the addition of calling, Teams is an even more robust hub for teamwork — the single place for your content, contacts and communications, including chat, meetings and calling, in a modern collaboration experience.

Getting started with calling in Teams
To get started with calling in Teams, please review our quick start guide. You can learn more about geographic availability of Calling Plans here. We also invite you to join us live December 18, at 9 AM PDT, on Teams On Air to hear guest Marc Pottier, Principal Program Manager, discuss and demo calling plans in Microsoft Teams in more detail.

Device Naming for Network Adapters in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. I don’t know for sure, but I’m guessing that the Hyper-V Integration Services enable this feature.

Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen. Nor will anything change in the output of Get-NetAdapter.


In contrast, a physical system employing Consistent Device Naming would have automatically named the network adapters in some fashion that reflected their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
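The snippet itself did not survive in this copy of the article, so here is a plausible reconstruction (the VM name sv16g2 comes from the article; the `-DeviceNaming` switch is the standard Hyper-V module parameter):

```powershell
# Enable device naming on every network adapter attached to the VM.
Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On
```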

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although sometimes it inexplicably complains about the DVD drive, but eventually it gets where it’s going).

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Virtual Network Adapter” name, then you can set the name at the same time that you enable the feature:
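The original command was lost from this copy; a plausible reconstruction follows. Note that the adapter name “Isolated” is a placeholder, and that renaming goes through Rename-VMNetworkAdapter, since Set-VMNetworkAdapter’s -Name parameter only selects an adapter:

```powershell
# Rename the adapter and enable device naming in one pipeline.
Get-VMNetworkAdapter -VMName sv16g2 |
    Rename-VMNetworkAdapter -NewName 'Isolated' -Passthru |
    Set-VMNetworkAdapter -DeviceNaming On
```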

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.


Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties.
  2. When the Ethernet # Properties dialog appears, click Configure.
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value field holds the name that Hyper-V has recorded for the adapter.

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.
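That is, a bare call with no parameters (a sketch; the output is verbose):

```powershell
# Lists all advanced properties for all adapters, including the
# 'Hyper-V Network Adapter Name' row when device naming is enabled.
Get-NetAdapterAdvancedProperty
```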


Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:

Slightly shorter way:

One of many not-future-proofed-but-works-today ways:
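The three snippets did not survive in this copy of the article. Plausible reconstructions (the cmdlet and parameter names are real; the exact forms are assumed):

```powershell
# The long way: pull everything, then filter on the display name.
Get-NetAdapterAdvancedProperty |
    Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'

# Slightly shorter: let the cmdlet do the filtering.
Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'

# Not future-proofed, but works today: filter on the registry keyword.
Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'
```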

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
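That would look something like this (a sketch of the lost snippet):

```powershell
# Works, but emits a custom object with a DisplayValue property
# rather than the string itself.
Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' |
    Select-Object -Property DisplayValue
```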


I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
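A sketch of that form, consistent with the cmdlets above:

```powershell
# Wrapping the call in parentheses lets you read the property directly,
# returning the name as a plain System.String.
(Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue
```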

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
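A sketch of that assignment (the adapter name Ethernet matches the example discussed next):

```powershell
# $adapterName is a System.String, usable anywhere a string is accepted.
$adapterName = (Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Hyper-V Network Adapter Name').DisplayValue
Write-Host "Hyper-V calls this adapter: $adapterName"
```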

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).

Barefoot Tofino chip tapped for Deep Insight network monitor

Barefoot Networks has introduced software that pinpoints anomalies in network traffic at the packet level. The new product, called Deep Insight, works on Ethernet switches powered by Barefoot Tofino, a programmable chip for the data center.

The software, unveiled this week, provides graphical reporting on network abnormalities, such as dropped packets and microbursts. The latter refers to traffic congestion that lasts for microseconds in a switch. Such delays are a problem, for example, in high-speed transactions performed by financial applications.

To use the software, network operators must first program each Barefoot Tofino chip to add metadata to packets; the metadata can include arrival time, matched rules, queue delay and switch identity. Engineers program the silicon using P4, an open source language that tells network devices how to process packets.

Network managers choose the metadata each switch adds to packets as they travel to the application. The last switch collects the metadata and sends the package to Deep Insight, which runs on a commodity server.

The software establishes a baseline for network operations, so it can identify anomalies and display the details to network operators. To reduce the amount of unnecessary information, engineers choose the application traffic the software will analyze.
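The baseline-then-deviation idea can be shown with a minimal statistical sketch. Deep Insight's actual detection method is not disclosed; this just illustrates flagging samples that sit far outside the observed norm, using a mean and standard-deviation threshold:

```python
# Minimal sketch of baseline anomaly detection: establish normal queue
# delay from the samples, then flag values far from that baseline.
from statistics import mean, stdev

def find_anomalies(samples, k=2.0):
    """Return samples more than k standard deviations from the mean.
    A large outlier inflates the stdev itself, so k is kept modest."""
    base, spread = mean(samples), stdev(samples)
    return [s for s in samples if abs(s - base) > k * spread]

# Queue delays in microseconds; the 5000 us spike is a microburst candidate.
delays = [10, 12, 11, 9, 13, 10, 5000, 11, 12]
print(find_anomalies(delays))  # [5000]
```

A production system would use a rolling baseline and more robust statistics, but the principle — compare live measurements against learned normal behavior — is the same.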

Deep Insight data drawn from Barefoot Tofino

The information Deep Insight provides includes the path taken by a packet, the rules it followed along the route, the amount of time it queued at each switch and the other packets with which it shared the queues.

Barefoot plans to eventually extend Deep Insight to open source virtual switches built on specifications developed by the Open vSwitch Project and network interface cards that support the P4 language. The company did not provide a timetable for the support.

Barefoot Tofino, which processes packets at 6.5 Tbps, is marketed as an alternative to fixed-function application-specific integrated circuits. Tofino appeals to large data centers, cloud and communication service providers, and white box switch makers that incorporate the technology into their product lines, analysts said. Examples of the latter include Edgecore Networks and WNC.

Barefoot plans to sell the Deep Insight software based on the number of packets processed each second. Barefoot has product trials underway with select customers and plans to make the software available in February.

Support for Open AI Ecosystem Grows as Amazon Web Services Joins ONNX AI Format – Microsoft Cognitive Toolkit

It’s been an exciting few months! In September we introduced the Open Neural Network Exchange (ONNX) format we created with Facebook to increase interoperability and reduce friction for developing and deploying AI. In October a number of companies that share our goals announced their support for ONNX.

Today Microsoft and Facebook are excited to share that Amazon Web Services (AWS) is contributing ONNX support for Apache MXNet and joining the ONNX initiative. Amazon recognizes the benefits of the ONNX open ecosystem in enabling developers working on deep learning to move between tools easily, choosing the ones best suited for the task at hand. It’s great to have another major framework support ONNX: Caffe2, PyTorch, Microsoft Cognitive Toolkit, and now MXNet.

At Microsoft we believe bringing AI advances to all developers, on any platform, using any language, with an open AI ecosystem, will help ensure AI is more accessible and valuable to all. With ONNX and the rest of our Azure AI services, infrastructure and tools such as Azure Machine Learning and the recently announced Visual Studio Tools for AI, developers and data scientists will be able to deliver new and exciting AI innovations faster.

We invite others in the community to visit http://onnx.ai to learn more and participate in the ONNX effort. You can also get ONNX updates on Facebook and @onnxai on Twitter.

Windows Server 2016 book serves up PowerShell recipes

Microsoft introduced a number of new features in Windows Server 2016, from container support to the Nano Server deployment option. But there’s no need to cook up a script from scratch to implement these innovations when there are prepared PowerShell recipes that do the job.

Windows Server 2016 admins can automate jobs and reduce their workload if they master newer cmdlets. For IT shops that want to avoid manual intervention to arrange and manage features in the latest server OS, there are more than 100 PowerShell recipes in Windows Server 2016 Automation with PowerShell Cookbook: Second Edition by Thomas Lee that can help.

Lee provides scripts to ease the mundane processes that can trip up admins who need to be available when trouble strikes. Microsoft switched Windows Server patching to a cumulative model in 2016, which made the monthly releases more frustrating for some admins to handle. Lee has a few scripts to make the process less painful. For admins who want an easier way to work with the Desired State Configuration management tool to keep certain systems tamper-proof, Lee walks through the concepts and provides PowerShell recipes to set up the deployment.

In this excerpt taken from the book’s first chapter, Lee describes PackageManagement, a PowerShell module that helps admins and developers install and manage applications from the command line:

PowerShellGet is a powerful resource for PowerShell, built on top of the core PackageManagement capabilities of PowerShell 5. It is one of many PackageManagement providers available. …

PackageManagement is a unified interface for software package management systems, a tool to manage package managers. You use the PackageManagement cmdlets to perform software discovery, installation, and inventory (SDII) tasks. PackageManagement involves working with package providers, package sources, and the software packages themselves.

Within the PackageManagement architecture, PackageManagement providers represent the various software installers that provide a means to distribute software via a standard plug-in model using the PackageManagement APIs. Each PackageManagement provider manages one or more package sources or software repositories. Providers may be publicly available or can be created within an organization to enable developers and system administrators to publish or install proprietary or curated software packages.

Editor’s note: This excerpt is from Windows Server 2016 Automation with PowerShell Cookbook: Second Edition, authored by Thomas Lee, published by Packt Publishing, 2017. For updates to scripts used in the book, check the author’s PowerShell Cookbook GitHub repository.