Cisco bolsters cloud security with Duo acquisition

Cisco has announced the $2.35 billion acquisition of Duo Security, adding two-factor authentication services to the networking company’s cloud-based security portfolio.

Cisco said this week it expects to close the cash deal by the end of October. Following the Duo acquisition, Cisco will make Duo part of its security business under its general manager and executive vice president, David Goeckeler. Duo, which has 700 employees, will remain at its Ann Arbor, Mich., headquarters, and CEO Dug Song will continue to lead the company.

Under Cisco, Duo could grow much faster than it could on its own by gaining access to Cisco’s 800,000 customers. Duo, which was founded in 2009, has 12,000 customers.

Cisco wants to buy Duo to strengthen its cloud-based security services. Duo offers two-factor authentication that companies can integrate into websites, VPNs and cloud services. Duo services can also determine whether the user device trying to access the corporate asset poses a security risk.
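
Two-factor codes of the kind Duo generates are commonly built on the TOTP standard (RFC 6238); the sketch below shows that generic algorithm, not Duo’s own implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP keyed on the current 30-second time window."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Duo’s push-based approvals go beyond one-time codes, but the standard conveys the idea: the second factor is a short-lived code derived from a shared secret, checked alongside the password.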

The Duo acquisition adds another set of capabilities to those provided by Cisco’s other cloud-based security products, including OpenDNS and Stealthwatch Cloud. OpenDNS blocks malware, phishing attacks and botnets at the domain name system layer. Stealthwatch Cloud searches for threats by aggregating and analyzing telemetry drawn from public cloud infrastructures, such as AWS, Microsoft Azure and Google Cloud Platform.

Cisco’s plans following Duo acquisition

During a conference call with reporters and analysts, Goeckeler said Cisco will sell Duo as a stand-alone product, while also integrating its services into some of Cisco’s other cloud-based services. He did not provide details or a timeline, but noted Cisco has already combined other cloud-based products, including OpenDNS, the Viptela SD-WAN and the cloud-managed Meraki wireless LAN.

“We think we can drive [more] integrations here,” Goeckeler said of Duo. He later added Duo could bring more value to Cisco Umbrella, a cloud-based service that searches for threats in internet activity.

“Duo is another asset we can combine together with Umbrella to just increase the value of that solution to our customers,” Goeckeler said.

Cisco has been growing its security business through acquisition since at least 2013, when it bought firewall provider Sourcefire for $2.7 billion. In 2015, Cisco acquired OpenDNS for $635 million, and it bought CloudLock a year later for $293 million. CloudLock provides secure access to cloud applications, including those running on platform-as-a-service and infrastructure-as-a-service providers.

“All of these pieces are part of the larger strategy to build that integrated networking, security and identity cloud-delivered platform,” Goeckeler said.

Cisco’s acquisitions have fueled much of the growth in its security business. In the quarter ended in April, Cisco reported an 11% increase in security revenue to $583 million.

Kontron heeds carrier demand for software, buys Inocybe

Kontron has acquired Inocybe Technologies, adding open source networking software to the German hardware maker’s portfolio of computing systems for the telco industry.

Kontron, which announced the acquisition this week, purchased Inocybe’s Open Networking Platform as telcos increasingly favor buying software separate from hardware. Kontron is a midsize supplier of white box systems to communications service providers (CSPs) and cable companies.

CSPs are replacing specialized hardware with more flexible software-centric networking, forcing companies like Kontron and Radisys, which recently sold itself to Reliance Industries, to reinvent themselves, said Lee Doyle, principal analyst at Doyle Research, based in Wellesley, Mass.

“This is part of Kontron’s efforts to move in a more software direction — Radisys has done this as well — and to a more service-oriented model, in this case, based on open source,” Doyle said.

Inocybe, on the other hand, is a small startup that could take advantage of the resources of a midsize telecom supplier, particularly since the market for open source software is still in its infancy within the telecom industry, Doyle said.

While Kontron did not release financial details, the price for Inocybe ranged from $5 million to $10 million, said John Zannos, previously the chief revenue officer of Inocybe and now a general manager of its technology within Kontron. The manufacturer plans to offer Inocybe’s Open Networking Platform as a stand-alone product while also providing hardware specially designed to run the platform.

Inocybe’s business

Inocybe’s business model is similar to that of Red Hat, which sells its version of open source Linux and generates revenue from support and services on the server operating system. Under Kontron, Inocybe plans to continue developing commercial versions of all the networking software built under the Linux Foundation.

The Open Networking Platform includes parts of the Open Network Automation Platform (ONAP), the OpenDaylight software-defined networking controller and the OpenSwitch network operating system. Service providers use Inocybe’s platform as a tool for traffic engineering, network automation and network functions virtualization.

Tools like Inocybe’s deliver open source software in a form that’s ready for testing and then deploying in a production environment. The more difficult alternative is downloading the code from a Linux Foundation site and then stitching it together into something useful.

“Open source is free, but making it work isn’t,” Doyle said.

Before the acquisition, Inocybe had a seat on the board of the open source networking initiative within the Linux Foundation and was active in the development of several technologies, including OpenDaylight and OpenSwitch. All that work would continue under Kontron, Zannos said.

Curious About Windows Server 2019? Here Are the Latest Features Added

Microsoft continues adding new features to Windows Server 2019 and cranking out new builds for Windows Server Insiders to test. Build 17709 has been announced, and I got my hands on a copy. I’ll show you a quick overview of the new features and then report my experiences.

If you’d like to get into the Insider program so that you can test out preview builds of Windows Server 2019 yourself, sign up on the Insiders page.

Ongoing Testing Requests

If you’re just now getting involved with the Windows Server Insider program or the previews for Windows Server 2019, Microsoft has asked all testers to try a couple of things with every new build:

  • In-place upgrade
  • Application compatibility

You can use virtual machines with checkpoints to easily test both of these. This time around, I used a physical machine, and my upgrade process went very badly. I have not been as diligent about testing applications, so I have nothing of importance to note on that front.

Build 17709 Feature 1: Improvements to Group Managed Service Accounts for Containers

I would bet that web applications are the primary use case for containers. Nothing else can match containers’ ability to strike a balance between providing version-specific dependencies while consuming minimal resources. However, containerizing a web application that depends on Active Directory authentication presents special challenges. Group Managed Service Accounts (gMSA) can solve those problems, but rarely without headaches. 17709 includes these improvements for gMSAs:

  • Using a single gMSA to secure multiple containers should produce fewer authentication errors
  • A gMSA no longer needs to have the same name as the system that hosts the container(s)
  • gMSAs should now work with Hyper-V isolated containers

I do not personally use enough containers to have meaningful experience with gMSA. I did not perform any testing on this enhancement.

Build 17709 Feature 2: A New Windows Server Container Image with Enhanced Capabilities

If you’ve been wanting to run something in a Windows Server container but none of the existing images meet your prerequisites, you might have struck gold in this release. Microsoft has created a new Windows Server container image with more components. I do not have a complete list of those components, but you can read what Lars Iwer has to say about it. He specifically mentions:

  • Proofing tools
  • Automated UI tests
  • DirectX

As I read that last item, I instantly wanted to know: “Does that mean GUI apps from within containers?” Well, according to the comments on the announcement, yes*. You just have to use “Session 0”. That means that if you RDP to the container host, you must use the /admin switch with MSTSC. Alternatively, you can use the physical console or an out-of-band console connection application.

Commentary on Windows Server 2019 Insider Preview Build 17709

So far, my experiences with the Windows Server 2019 preview releases have been fairly humdrum. They work as advertised, with the occasional minor glitch. This time, I spent more time than normal and hit several frustration points.

In-Place Upgrade to 17709

Ordinarily, I test preview upgrades in a virtual machine. Sure, I use checkpoints with the intent of reverting if something breaks. But, since I don’t do much in those virtual machines, they always work. So, I never encounter anything to report.

For 17709, I wanted to try out the container stuff, and I wanted to do it on hardware. So, I attempted an in-place upgrade of a physical host. It was disastrous.

Errors While Upgrading

First, I got a grammatically atrocious message that contained false information. I wish that I had saved it so I could share it with others who might encounter it, but I must have accidentally deleted my notes. The message started out with “Something happened” (it didn’t say what happened, of course), then asked me to look in an XML file for information. Two problems with that:

  1. I was using a Server Core installation. I realize that I am not authorized to speak on behalf of the world’s Windows administrators, but I bet no one will get mad at me for saying, “No one in the world wants to read XML files on Server Core.”
  2. The installer didn’t even create the file.

I still have not decided which of those two things irritates me more. Why in the world would anyone actively decide to build the upgrade tool to behave that way?

Problems While Trying to Figure Out the Error

Well, I’m fairly industrious, so I tried to figure out what was wrong. The installer did not create the XML file that it talked about, but it did create a file called “setuperr.log”. I didn’t keep the entire contents of that file either, but it contained only one error line that seemed to have any information at all: “CallPidGenX: PidGenX function failed on this product key”. Do you know what that means? I don’t know what that means. Do you know what to do about it? I don’t know what to do about it. Is that error even related to my problem? I don’t even know that much.

I didn’t find any other traces or logs with error messages anywhere.

How I Fixed My Upgrade Problem

I began by plugging the error messages into Internet searches. I found only one hit with any useful information. The suggestions were largely useless. But, the guy managed to fix his own problem by removing the system from the domain. How in the world did he get from that error message to disjoining the domain? Guesswork, apparently. Well, I didn’t go quite that far.

My “fix”: remove the host from my Hyper-V cluster. The upgrade worked after that.

Why did I put the word “fix” in quotation marks? Because I can’t tell you that it actually fixed the problem. Maybe it was just a coincidence. The upgrade’s error handling and messaging were so horrifically useless that, without duplicating the whole thing, I cannot conclusively say that one action resulted in the other. “Correlation is not causation”, as the saying goes.

Feedback for In-Place Upgrades

At some point, I need to find a productive way to express this to Microsoft. But for now, I’m upset and frustrated at how that went. Sure, it only took you a few minutes to read what I had to say. It took much longer for me to retry, poke around, search, and prod at the thing until it worked, and I had no idea that it was ever going to work.

Sure, once the upgrade went through, everything was fine. I’m quite happy with the final product. But if I were even to start thinking about upgrading a production system and I thought that there was even a tiny chance that it would dump me out at the first light with some unintelligible gibberish to start a luck-of-the-draw scavenger hunt, then there is a zero percent chance that I would even attempt an upgrade. Microsoft says that they’re working to improve the in-place upgrade experience, but the evidence I saw led me to believe that they don’t take this seriously at all. XML files? XML files that don’t even get created? Error messages that would have set off 1980s-era grammar checkers? And don’t even mean anything? This is the upgrade experience that Microsoft is anxious to show off? No thanks.

Microsoft: the world wants legible, actionable error messages. The world does not want to go spelunking through log files for vague hints. That’s not just for an upgrade process either. It’s true for every product, every time.

The New Container Image

OK, let’s move on to some (more) positive things. Many of the things that you’ll see in this section have been blatantly stolen from Microsoft’s announcement.

Once my upgrade went through, I immediately started pulling down the new container image. I had a bit of difficulty with that, which Lars Iwer of Microsoft straightened out quickly. If you’re trying it out, you can get the latest image with the following:

    docker pull mcr.microsoft.com/windowsinsider

Since Insider builds update frequently, you might want to ensure that you only get the build version that matches your host version (if you get a version mismatch, you’ll be forced to run the image under Hyper-V isolation). Lars Iwer provided a script for exactly that in the previously linked article (I did not write or modify it).

Trying Out the New Container Image

I was able to easily start up a container and poke around a bit.

Testing out the new functionality was a bit tougher, though. It solves problems that I personally do not have. Searching the Internet for “example apps that would run in a Windows Server container if Microsoft had included more components” didn’t find anything I could test with either. (That was a joke; I didn’t really do that. As far as you know.) So, I first wrote a little GUI .Net app in Visual Studio.

*Graphical Applications in the New Container Image

Session 0 does not seem to be able to show GUI apps from the new container image. If you skimmed up to this point and you’re about to tell me that GUI apps don’t show anything from Windows containers, this links back to the (*) text above. The comments section of the announcement article indicates that graphical apps in the new container will display on session 0 of the container host.

I don’t know if I did something wrong, but nothing that I did would show me a GUI from within the new container image. The app ran just fine — it shows up under Get-Process — but it never shows anything. It does exactly the same thing under microsoft/dotnet-framework in Hyper-V isolation mode, though. So, on that front, the only benefit that I could verify was that I did not need to run my .Net app in Hyper-V isolation mode or use a lot of complicated FROM nesting in my dockerfile. Still no GUI, though, and that was part of my goal.

DirectX Applications in the New Container Image

After failing to get my graphical .Net app to display, I next considered DirectX. I personally do not know how to write even a minimal DirectX app. But, I didn’t need to. Microsoft includes the very first DirectX-dependent app that I was ever able to successfully run: dxdiag.

Sadly, dxdiag would not display on session 0 from my container, either. Just as with my .Net app, it appeared in the local process list and docker top. But, no GUI that I could see.

However, dxdiag did run successfully and generated an output file.

Notes for anyone trying to duplicate the above:

  • I started this particular container with
    docker run -it mcr.microsoft.com/windowsinsider
  • DXDiag does not instantly create the output file. You have to wait a bit.

Thoughts on the New Container Image

I do wish that I had more experience with containers and the sorts of problems this new image addresses. Without that, I can’t say much more than, “Cool!” Sure, I didn’t personally get the graphical part to work, but a DirectX app from within a container? That’s a big deal.

Overall Thoughts on Windows Server 2019 Preview Build 17709

Outside of the new features, I noticed that they have corrected a few glitchy things from previous builds. I can change settings on network cards in the GUI now and I can type into the Start menu to get Cortana to search for things. You can definitely see changes in the polish and shine as we approach release.

As for the upgrade process, that needs lots of work. If a blocking condition exists, it needs to be caught in the pre-flight checks and show a clear error message. Failing partway into the process with random pseudo-English will extend distrust of upgrading Microsoft operating systems for another decade. Most established shops already have an “install-new-on-new-hardware-and-migrate” process. I certainly follow one. My experience with 17709 tells me that I need to stick with it.

I am excited to see the work being done on containers. I do not personally have any problems that this new image solves, but you can clearly see that customer feedback led directly to its creation. Whether I personally benefit or not, this is a good thing to see.

Overall, I am pleased with the progress and direction of Windows Server 2019. What about you? How do you feel about the latest features? Let me know in the comments below!

GandCrab ransomware adds NSA tools for faster spreading

With version 4, GandCrab ransomware has undergone a major overhaul, adding an NSA exploit to help spread and targeting a larger set of systems.

The updated GandCrab ransomware was first discovered earlier this month, but researchers are just now learning the extent of the changes. The code structure of the GandCrab ransomware was completely rewritten. And, according to Kevin Beaumont, a security architect based in the U.K., the malware now uses the EternalBlue National Security Agency (NSA) exploit to target SMB vulnerabilities and spread faster.

“It no longer needs a C2 server (it can operate in airgapped environments, for example) and it now spreads via an SMB exploit – including on XP and Windows Server 2003 (along with modern operating systems),” Beaumont wrote in a blog post. “As far as I’m aware, this is the first ransomware true worm which spreads to XP and 2003 – you may remember much press coverage and speculation about WannaCry and XP, but the reality was the NSA SMB exploit (EternalBlue.exe) never worked against XP targets out of the box.”

Joie Salvio, senior threat researcher at Fortinet, based in Sunnyvale, Calif., found the GandCrab ransomware was being spread to targets via spam email and malicious WordPress sites and noted another major change to the code.

“The biggest change, however, is the switch from using RSA-2048 to the much faster Salsa20 stream cipher to encrypt data, which had also been used by the Petya ransomware in the past,” Salvio wrote in the analysis. “Furthermore, it has done away with connecting to its C2 server before it can encrypt its victims’ file, which means it is now able to encrypt users that are not connected to the Internet.”

However, the GandCrab ransomware appears to deliberately spare users in Russian-speaking regions. Fortinet found the malware checks the system for use of the Russian keyboard layout and halts the infection if it finds one.

Despite the overhaul of the GandCrab ransomware and the expanded systems being targeted, Beaumont and Salvio both said basic cyber hygiene should be enough to protect users from attack. This includes installing the EternalBlue patch released by Microsoft, keeping antivirus up-to-date and disabling SMB version 1 altogether, which is advice that has been repeated by various outlets, including US-CERT, since the initial WannaCry attacks began.

MapR Data Platform gets object tiering and S3 support

MapR Technologies updated its Data Platform, adding support for Amazon’s S3 application programming interface and automated tiering to cloud-based object storage.

MapR is known for its distribution of open source Apache Hadoop software. It contributes to related open source projects designed to handle advanced analytics for large data sets across computer clusters. The 6.1 release of the MapR Data Platform — formerly MapR Converged Data Platform — adds storage management features for artificial intelligence applications that require real-time analytics.

MapR Data Platform 6.1, scheduled to become generally available this quarter, features policy-based data placement across performance, capacity and archive tiers. It also added fast-ingest erasure coding for high-capacity storage on premises and in public clouds, an installer option to enable security by default and volume-based encryption of data at rest.

Providing real-time analytics for AI requires coordination between on-premises, cloud and edge storage, said Jack Norris, senior vice president of data and applications at MapR, which is based in Santa Clara, Calif.

“What we’re seeing increasingly is that the time frame for AI is decreasing. It’s not enough to understand what happened in the business. It’s really, ‘How do you impact the business as it’s happening?'” Norris said.

MapR storage additions

MapR Data Platform 6.1 expands storage features by adding policy-based tiering to automatically move data. It now supports a performance tier of SSDs or SAS HDDs; a capacity tier of high-density HDDs; and an archival tier of third-party, S3-compliant object storage. Customers supply the commodity hardware.
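
As an illustration of policy-based placement, the decision can be pictured as a simple age-based rule; the thresholds and function below are assumptions made for this sketch, not MapR’s actual configuration, which is set per volume:

```python
from datetime import datetime, timedelta

# Illustrative age-based placement policy. The tier names mirror the article;
# the thresholds are invented for the sketch, not MapR defaults.
def choose_tier(last_access: datetime, now: datetime) -> str:
    age = now - last_access
    if age < timedelta(days=7):
        return "performance"  # SSDs or SAS HDDs
    if age < timedelta(days=90):
        return "capacity"     # high-density HDDs
    return "archive"          # third-party, S3-compliant object storage

now = datetime(2018, 8, 1)
print(choose_tier(now - timedelta(days=45), now))  # capacity
```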

The storage management features follow the 2017 addition of MapR-XD software to MapR Converged Data Platform. MapR-XD is based on the company’s distributed file system that was released in 2010. It includes a global namespace that can span on-premises and public cloud environments and support tiers of hot, warm and cold storage.

MapR writes all data to the performance tier and then determines the most appropriate way to store it, Norris said. Its tiering is independent of data format. Norris said the system could write NFS and read S3, or the reverse. For instance, MapR can place and store data as an object on one or more clouds and later pull back the data and restore it as a file transparently to the user.

“We do constant management of the data to account for node failure, disk failure, rebalancing of the cluster and eliminating hotspots,” he said.

New MapR release adds file stubs

The MapR software handles data transformations between file and object formats in the background. With past releases, the MapR system had to go through an intermediate step to shift data between file- and object-based storage. With 6.1, MapR retains file stubs to represent data that the system has shifted to cloud-based object storage. The stub stores the location of the data.

“When you need to access that data, we’re just pulling back an individual file,” Norris said. “You don’t want to pull back a whole directory or a whole volume. If you look at cost economics in the cloud, it’s expensive, because you get charged by data movement.”
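
A toy read-through model shows why a stub makes recall cheap; every class and method name here is hypothetical, invented for illustration rather than drawn from MapR internals:

```python
class FileStub:
    """Placeholder kept in the namespace after a file's data moves to object storage."""
    def __init__(self, bucket: str, key: str):
        self.bucket, self.key = bucket, key

class Namespace:
    def __init__(self, object_store: dict):
        self.entries = {}              # path -> bytes (local) or FileStub (tiered out)
        self.object_store = object_store

    def tier_out(self, path: str, bucket: str, key: str) -> None:
        # Ship the file body to object storage; keep only a small stub locally.
        self.object_store[(bucket, key)] = self.entries[path]
        self.entries[path] = FileStub(bucket, key)

    def read(self, path: str) -> bytes:
        entry = self.entries[path]
        if isinstance(entry, FileStub):
            # Recall just this one file, not its whole directory or volume.
            return self.object_store[(entry.bucket, entry.key)]
        return entry

cloud = {}
ns = Namespace(cloud)
ns.entries["/data/log1"] = b"sensor readings"
ns.tier_out("/data/log1", "cold-bucket", "log1")
print(ns.read("/data/log1"))  # b'sensor readings'
```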

The newly added support for the Amazon S3 API includes all core capabilities, such as the concept of buckets and access-control lists, Norris said.

MapR’s new erasure coding spreads pieces of data across disks. Norris said the MapR erasure coding preserves snapshots and compression.
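
As a minimal illustration of the principle, a single XOR parity chunk is enough to rebuild any one lost chunk in a stripe; MapR’s actual erasure-coding scheme is more sophisticated than this sketch:

```python
def xor_parity(chunks):
    """XOR all equal-length chunks together byte by byte."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# Three data chunks spread across three disks, plus one parity chunk on a fourth.
data_chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data_chunks)

# If the disk holding chunk 1 fails, rebuild it from the survivors plus parity.
rebuilt = xor_parity([data_chunks[0], data_chunks[2], parity])
print(rebuilt == data_chunks[1])  # True
```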

The MapR Data Platform is available in Enterprise Standard and Enterprise Premium editions. The Enterprise Standard offering includes MapR-XD, MapR-Document Database, MapR Event Data Streams and Apache Hadoop, Spark and Drill. The Enterprise Premium software tacks on options such as real-time data integration with MapR Change Data Capture, Orbit Cloud Suite extensions and the ability to add the Data Science Refinery toolkit.

Deviation from Hadoop

Carl Olofson, a data management software research vice president at IDC, said MapR’s file system emulates the Hadoop Distributed File System, but its indexes and update-in-place capabilities set it apart. The challenge for MapR is the potential skepticism of having a “data lake solution that deviates so far from the Hadoop project code,” Olofson said.

“The good news there is that even the other Hadoop vendors are no longer solely focused on Hadoop, so MapR may be on top of an emerging trend,” he wrote.

Policy-based storage tiering is the key new capability in the MapR Data Platform 6.1, Olofson claimed. “As people move data lake technologies to the cloud, they are initially in sticker shock because of the storage costs associated with it,” he said. “The MapR approach not only addresses that, but they say it does it automatically.”

MapR’s competition includes Cloudera, Hortonworks and various open source technologies, according to Mike Matchett, principal IT industry analyst at Small World Big Data. He noted concerns that MapR is “at heart a closed proprietary platform.” But Matchett said he gives MapR an advantage over plain open source in terms of supporting mixed and now operational workloads.

“The theme for MapR in this release is to support big data AI and ML [machine learning] alongside and with business applications,” Matchett wrote in an email.

Lustre-based DDN ExaScaler arrays receive NVMe flash

DataDirect Networks has refreshed its Storage Fusion Architecture-based ExaScaler arrays, adding two models designed with nonvolatile memory express flash and a hybrid system with disk and flash.

In a related move, the high-performance computing storage vendor acquired the code repository and support contracts of Intel’s open source Lustre parallel file system for an undisclosed sum. The Lustre file system is the foundation for DDN ExaScaler and GridScaler arrays.

The fourth version of DDN ExaScaler combines parallel file storage servers and Nvidia DGX-1 high-performance GPUs with Storage Fusion Architecture (SFA) OS software. SFA 200NV and SFA 400NV are 2U arrays, with slots for 24 dual-ported nonvolatile memory express (NVMe) SSDs. The difference between the two is in compute power: SFA 200NV has a single CPU per controller, while the SFA 400NV has two CPUs per controller.

The arrays embed a 192-lane PCIe Gen 3 fabric to maximize NVMe performance. DDN claims the dense ExaScaler flash ingests data at nearly 40 GBps.

DDN also introduced the SFA7990 hybrid system, which allows customers to fill 90 drive slots with enterprise-grade SSDs and HDDs.

AI and analytics performance driver

Adding NVMe is a natural fit for DDN, which provides scalable storage systems to hyperscale data centers that require lots of high-performance storage, said Tim Stammers, a storage analyst at 451 Research.

“NVMe is going to help drive performance on intensive applications, like AI and analytics. It makes storage faster, and in return, AI and analytics will drive the takeup of NVMe flash,” Stammers said.

Data centers have the option to buy DDN ExaScaler NVMe arrays as plug-and-play storage for AI projects. The DDN AI200 and AI400 provide as much as 360 TB of dual-ported NVMe storage in 2U. The 4U AI7990 configurations scale to 5.4 PB in 20U.

The AI turnkey appliances include performance-tested implementations of Caffe, CNTK, Horovod, PyTorch, TensorFlow and other established AI frameworks.

Customers can combine an SFA cluster with DDN’s NVMe-based storage. Lustre presents file storage as a mountable capacity pool of flash and disk sharing a single namespace.

The DDN ExaScaler upgrade provides dense storage in a compact form factor to keep acquisition within reach of most enterprises, said James Coomer, vice president for product management at DDN, based in Chatsworth, Calif.

“At this early stage, customers don’t necessarily know where they’re going with AI,” Coomer said. “They may need more flash for performance. For AI, they need an economical way to hold data that’s relatively cold. We give them a choice to expand either the hot flash area or augment it in the second stage with hard-drive tiers and anywhere in between.”

Recent AI enhancements to the SFA operating system include declustered RAID and NVMe tuning. Declustered RAID allows for faster drive rebuilds by sharing parity bits across pooled drives.

Inference and training investments planned

DDN’s Lustre acquisition includes the open source code repository, file-tracking system and existing support contracts from Intel. Coomer said DDN plans to make investments to enable Lustre to support inference and training of data for AI workloads. The open source code will remain available for contributions from the community.

DDN is a prominent contributor to Lustre code development, and it has shipped Lustre-based storage systems for nearly two decades.

“DDN says they’re going to make Lustre easier to use,” Stammers said. “What they’re banking on is that it will lead more enterprises to use Lustre for these emerging workloads.”

Avaya adds real-time speech analytics to contact centers

Avaya has updated its workforce optimization software for contact centers, adding real-time speech analytics and automated quality management tools. The vendor also released new data privacy features to help businesses comply with the General Data Protection Regulation (GDPR).

The workforce optimization upgrades are Avaya’s latest initiative to bring AI and automation to the contact center. In April, the vendor announced a partnership with the startup Afiniti for intelligent call routing.

All major contact center vendors have begun investing in workforce optimization, either through native software development or partnerships with other vendors, said Robin Gareiss, president of Nemertes Research, based in Mokena, Ill.

The upgrades Avaya highlighted this week should help contact center customers increase sales and boost customer satisfaction, but they are not revolutionary, Gareiss said. Instead, businesses can already get the same tools from legacy vendors such as Cisco and Genesys, as well as cloud startups such as Five9.

“Without these announcements, Avaya would fall behind,” Gareiss said. “And given it’s the platform for so many huge contact centers around the world, it’s crucial for Avaya to continue to improve agent evaluation through speech analytics, as well as formal customer feedback and ratings.”

Some startups in the contact center market have placed workforce optimization at the core of their offerings. The cloud vendor Sharpen Technologies Inc., for example, uses a customizable algorithm to rate agents and uses an AI platform to monitor and flag trends within the contact center.

“Workforce optimization has become a huge area of focus for customer engagement,” Gareiss said. “It’s really all about making the contact center agents more productive and efficient to provide continuously improving customer experience.”

Avaya tries to keep pace with competitors

Avaya’s real-time speech analytics software will help businesses monitor conversations between agents and customers. The system can provide helpful contextual information to agents during the call, or suggest next steps based on the customer’s tone or word choice.

The tool can also alert managers to conversations that require immediate intervention, such as when an agent fails to get the proper consents or provide the necessary compliance disclosures at the outset of a discussion with a customer.
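
That kind of monitoring can be sketched as a toy keyword check; the disclosure phrases below are invented, and Avaya’s real-time analytics are far more sophisticated than simple string matching:

```python
# Invented disclosure phrases for illustration; a real deployment would use
# the wording its compliance team mandates.
REQUIRED_DISCLOSURES = ["this call may be recorded", "do i have your consent"]

def needs_intervention(transcript: str, first_n_words: int = 50) -> bool:
    """Flag the call if any required disclosure is missing from its opening."""
    opening = " ".join(transcript.lower().split()[:first_n_words])
    return not all(phrase in opening for phrase in REQUIRED_DISCLOSURES)

print(needs_intervention("hello thanks for calling how can i help"))  # True
```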

Avaya highlighted real-time speech analytics as one tool for monitoring GDPR compliance within a contact center. The vendor also added a feature that will let businesses securely delete recordings in response to “right to be forgotten” requests.

Meanwhile, a new quality management system can automatically evaluate agents after every customer interaction, freeing managers from a time-consuming task.
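A minimal sketch of how automated post-interaction scoring might blend a few signals into a single agent score; the weights, inputs and thresholds here are illustrative assumptions, not Avaya's actual model:

```python
# Hypothetical post-interaction agent scoring. Weights and the
# 10-minute handle-time target are assumptions for illustration.
def score_interaction(handle_time_s: float, resolved: bool,
                      csat: int) -> float:
    """Blend speed, resolution and a 1-5 customer rating into 0-100."""
    speed = max(0.0, 1.0 - handle_time_s / 600.0)  # under 10 min is good
    return round(40 * speed + 30 * resolved + 30 * (csat / 5), 1)

# A quick, resolved call with a happy customer scores well.
print(score_interaction(300, True, 4))  # -> 74.0
```

Running a rule like this after every interaction gives managers a consistent baseline, so they only need to review calls that score unusually low.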

Businesses must purchase a separate license to use Avaya’s workforce optimization tools, which are sold individually and as a suite. Avaya’s contact center customers can install the software in their data centers or subscribe to it as a hosted cloud service.

Polycom, Crestron expand Zoom Rooms video conferencing partnerships

Polycom has expanded its partnership with Zoom by adding support for Zoom Rooms video conferencing to its Polycom Trio conferencing phones and camera technology. Polycom announced the expanded partnership last week at InfoComm 2018, an audiovisual trade show in Las Vegas.

The Zoom Rooms partnership is available in three bundles to support different conference room deployments. The huddle room bundle includes the Trio 8500, EagleEye Mini camera and Dell OptiPlex PC. The midsize conference room bundle includes the Trio 8500, EagleEye IV MPTZ camera for HD video and Dell OptiPlex. The large conference room bundle includes the Trio 8800, EagleEye IV MPTZ, Dell OptiPlex and advanced speakers for higher-quality audio. The bundles will be available in July.

Polycom also announced new capabilities for the Polycom Trio to support the high-end video conferencing features required in large meeting rooms and auditoriums, such as facial tracking and dual-monitor support. The new capabilities will arrive in a July software update to Polycom Trio and Group Series codecs.

In July, Polycom will support guest access in its Pano wireless content-sharing system to allow users to connect and share content through guest Wi-Fi systems.

Crestron expands interactive whiteboard collaboration

Also at InfoComm, Crestron debuted AirBoard, a network whiteboard-capture device that allows local and remote users to see whiteboard content on a main conference display and on personal devices. AirBoard is a camera with an arm that attaches to any electronic whiteboard with a mounting kit.

An Ethernet cable connected to the LAN is required for video, power and a secure connection to the network and Crestron ecosystem. Users can capture and share content via Crestron touchscreens and integrated devices.

Content shared through the AirBoard can be saved and sent to a central webpage or to invited meeting participants. Remote participants can access whiteboard sessions like they would with Crestron AirMedia, a wireless presentation service.

Crestron has also expanded its Zoom partnership to its TSW Touch Screen line for audiovisual (AV) presentation, conferencing and collaboration in larger, integrated meeting spaces. The partnership integrates Zoom with Crestron's entire control ecosystem, including its DM NVX video encoder/decoder and AV framework.

ViewSonic debuts interactive whiteboards

ViewSonic Corp. announced two new models in its ViewBoard IFP60 series of interactive whiteboards at InfoComm. The new ViewBoard interactive whiteboards offer 4K video resolution and annotation software, and come preinstalled with ViewSonic vBoard collaboration software and ViewBoard Cast, which lets mobile devices share content to the flat panels.

The interactive whiteboards include advanced annotation tools and enterprise-level security with AES-256 encryption. They also offer single sign-on and sign-off, as well as cloud-based portability to store and retrieve files.

The new IFP6560 is a 65-inch display that will retail for $4,999. The IFP7560 is a 75-inch display that will retail for $7,999. Both of the flat-panel displays will be available in July.

The ViewBoard also integrates with Zoom Rooms video conferencing through Zoom's API platform. The integration lets users link their Zoom credentials to the ViewBoard software to join conference calls and share documents with remote participants.

Microsoft Teams mobile app matures, but interoperability lags

Microsoft has been building out the capabilities of the Microsoft Teams mobile app in recent months, adding features more advanced than those traditionally supported by unified communications mobility clients. Nevertheless, the vendor has more work to do to catch up with rival Cisco and to provide a seamless mobile experience to businesses.

Microsoft Teams is on par with the meeting features of the Cisco Webex Teams mobile app, but Cisco can give users a more seamless experience for scheduling and joining meetings from their mobile phones. That's because Cisco Webex Teams and Cisco Webex rely on the same back-end cloud infrastructure, said Zeus Kerravala, founder and principal analyst at ZK Research in Westminster, Mass.

“Cisco has been very, very mobile-centric for a long time, so I wouldn’t expect Microsoft to have the same maturity in the mobile client,” Kerravala said. Nevertheless, recent improvements to the Microsoft Teams mobile app are “a good start.”

Microsoft users must juggle multiple UC mobile apps

Microsoft still has separate mobile apps for Teams and Skype for Business that require users to toggle between apps. Users can generally only access Teams meetings from within the Microsoft Teams mobile app, rather than from the mobile apps for Outlook or Skype for Business.

“The ability to schedule and join calls or meetings needs to be a lot more consistent,” Kerravala said. “So, if I’m in Outlook mobile and I can start a Skype for Business meeting, I should be able to start a Teams meeting.”

This gap in interoperability could cause headaches for businesses as they migrate users from Skype for Business to Teams, in keeping with Microsoft's stated plan to eventually phase out the former.

Microsoft is encouraging customers that use the cloud version of Skype for Business to begin using Teams simultaneously. The vendor recently gave users the ability to transfer contacts and groups from Skype for Business to Teams and made instant messaging exchanges between the two clients persistent for Teams users.

Features recently added to the Microsoft Teams mobile app include the ability to join audio and video meetings in Teams or to have a meeting call users back on their mobile devices. Once in a meeting, the Microsoft Teams mobile app lets users upload files, share their screens and control presentations.

Team collaboration elevates mobile clients

Team collaboration apps like Microsoft Teams, Cisco Webex Teams and Slack are growing in popularity because they provide a single platform for communicating synchronously and asynchronously and for getting work done through third-party integrations.

That model for unified communications (UC) has made mobile clients even more significant, as vendors compete to deliver products that help users stay connected to colleagues and data whether they are in the office or working remotely, analysts said.

“The lines between communication, collaboration and conferencing are blurring,” said Alan Lepofsky, a principal analyst at Constellation Research Inc., based in Cupertino, Calif. “One of the biggest challenges for vendors is to create consistent experiences across various device types.”

Traditional UC mobile apps were built primarily around calling and messaging, but mobile phones already provide those same capabilities over cellular networks. The mobile apps for Microsoft Teams and Cisco Webex Teams now give users access to nearly all of the files and collaboration tools available to them on the desktop.

“For all the manufacturers in this space, their singular goal should be [the following]: Can the user eradicate the term, ‘I’ll take care of that when I’m back in the office?'” Kerravala said.

Seasonic Fanless PSU, WD 1TB 2.5″ HDD

Have various items for sale; will be adding more as I get round to it.

SeaSonic PSU

Model SS-400FL, this is the original fanless version of Seasonic's 400W 80 Plus Gold PSU. It has most of the original cables, but is currently missing one cable with 2x SATA connectors.

  • 24 pin Motherboard connector
  • CPU 8/4 Pin Connector
  • PCI-E 8/6 Pin Connector
  • 2 x MOLEX Cable (x5 connectors total)
  • SATA Cable (x3 connectors)
  • Floppy (Y Adapter)


