Security vendor Sophos this month expanded its endpoint protection lineup with Intercept X for Mobile. The new mobile security application extends the company’s Intercept X security software to devices including phones, tablets and laptops.
The new offering is meant to bolster mobile threat defense for devices running on Android, iOS and Chrome OS. Features include:
Authenticator: Helps to manage multi-factor authentication passwords for sites like Google, Amazon and Facebook.
Secure QR code scanner: Scans target URLs for malicious content.
Privacy protection: Detects when personal data is accessed or if there are hidden costs associated with downloaded apps.
“The biggest unique point of the Intercept X model is that we are a security model, and we do security for different platforms and can be configured in one place,” said Petter Nordwall, director of product management at Sophos. “Intercept X, as a whole, can now protect Windows, macOS, Chromebooks and servers. Regardless of what platform they use, they can use Intercept X.”
In “Advance and Improve Your Mobile Security Strategy,” a recent report from Gartner, senior analyst Patrick Hevesi found that “mobile security products are becoming increasingly important as the rate of mobile attacks continues to grow.” Hevesi recommended tech professionals track new threats, build a mobile threat defense strategy and set minimum iOS and hardware versions.
He added that organizations should focus on training users on what threats actually look like, rather than letting the systems do all the work.
“Everyone is doing antiphishing training, but think about the application,” Hevesi said. “The user doesn’t think about mobile in the same way; they see a highly rated app and don’t think about why the app needs permission to my contact data.”
Pricing for Intercept X for Mobile ranges from $24.50 to $63 per 100 seats, depending on whether it is bundled with Sophos Mobile, the company’s unified endpoint management system. Intercept X for Mobile is available as a free download for individual use from Google Play and the Apple App Store.
After claiming more than a quarter century of patent leadership, IBM has expanded its fight against patent assertion entities, also known as patent trolls, by joining the LOT Network. As a founding member of the Open Invention Network in 2005, IBM has been in the patent troll fight for nearly 15 years.
The LOT Network (short for License on Transfer) is a nonprofit community of more than 600 companies that have banded together to protect themselves against patent trolls and their lawsuits. The group says companies lose up to $80 billion per year on patent troll litigation. Patent trolls are organizations that hoard patents and bring lawsuits against companies they accuse of infringing on those patents.
“It made sense to align IBM’s and Red Hat’s view on how to manage our patent portfolio,” said Jason McGee, vice president and CTO of IBM Cloud Platform. “We want to make sure that patents are used for their traditional purposes, and that innovation proceeds and open source developers can work without the threat of a patent litigation.”
To that end, IBM contributed more than 80,000 patents and patent applications to the LOT Network to shield those patents from patent assertion entities, or PAEs.
IBM joining the LOT Network is significant for a couple of reasons, said Charles King, principal analyst at Pund-IT in Hayward, Calif. First and foremost, with 27 years of patent leadership, IBM brings a load of patent experience and a sizable portfolio of intellectual property (IP) to the LOT Network, he said.
“IBM’s decision to join should also silence critics who decried how the company’s acquisition of Red Hat would erode and eventually end Red Hat’s long-standing leadership in open source and shared IP,” King said. “Instead, the opposite appears to have occurred, with IBM taking heed of its new business unit’s dedication to open innovation and patent stewardship.”
The LOT Network operates as a subscription service that charges members for the IP protection it provides, with rates based on company revenue. Membership is free for companies making less than $25 million annually; companies with annual revenues between $25 million and $50 million pay $5,000 per year; those between $50 million and $100 million pay $10,000; those between $100 million and $1 billion pay $15,000; and rates are capped at $20,000 for companies with revenues greater than $1 billion.
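The tier schedule amounts to a simple revenue lookup. A minimal sketch in Python, assuming the published cutoffs above (how LOT treats a revenue figure that lands exactly on a boundary is an assumption here):

```python
def lot_annual_fee(annual_revenue: float) -> int:
    """Return the LOT Network annual subscription fee in USD for a
    company with the given annual revenue, per the published tiers.
    Boundary handling (which tier a revenue exactly on a cutoff falls
    into) is assumed for illustration."""
    tiers = [
        (25_000_000, 0),         # under $25M: free
        (50_000_000, 5_000),     # $25M to $50M
        (100_000_000, 10_000),   # $50M to $100M
        (1_000_000_000, 15_000), # $100M to $1B
    ]
    for cutoff, fee in tiers:
        if annual_revenue < cutoff:
            return fee
    return 20_000                # capped above $1B
```

For example, `lot_annual_fee(75_000_000)` falls in the $50 million to $100 million tier and returns 10,000.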
Meanwhile, the Open Invention Network (OIN) has three levels of participation: members, associate members and licensees. Participation in OIN is free, the organization said.
“One of the most powerful characteristics of the OIN community and its cross-license agreement is that the board members sign the exact same licensing agreement as the other 3,100 business participants,” said Keith Bergelt, CEO of OIN. “The cross license is royalty-free, meaning it costs nothing to join the OIN community. All an organization or business must agree to do is promise not to sue other community participants based on the Linux System Definition.”
IFI Claims Patent Services confirms that 2019 marked the 27th consecutive year in which IBM has been the leader in the patent industry, earning 9,262 U.S. patents last year. The patents reach across key technology areas such as AI, blockchain, cloud computing, quantum computing and security, McGee said.
IBM was awarded more than 1,800 AI patents, including a patent for a method for teaching AI systems how to understand implications behind certain text or phrases of speech by analyzing other related content. IBM also gained patents for improving the security of blockchain networks.
In addition, IBM inventors were awarded more than 2,500 patents in cloud technology and grew the number of patents the company has in the nascent quantum computing field.
“We’re talking about new patent issues each year, not the size of our patent portfolio, because we’re focused on innovation,” McGee said. “There are lots of ways to gain and use patents, we got the most for 27 years and I think that’s a reflection of real innovation that’s happening.”
Since 1920, IBM has received more than 140,000 U.S. patents, he noted. In 2019, more than 8,500 IBM inventors, spanning 45 different U.S. states and 54 countries, contributed to the patents awarded to IBM, McGee added.
In other patent-related news, Apple and Microsoft this week joined 35 companies who petitioned the European Union to strengthen its policy on patent trolls. The coalition of companies sent a letter to EU Commissioner for technology and industrial policy Thierry Breton seeking to make it harder for patent trolls to function in the EU.
Quantum Corp. expanded its F-Series line of NVMe flash arrays this week with an entry-level option for businesses that maintain large media and entertainment files.
The F-1000 is the second array in the Quantum F-Series product family, following the 2019 launch of its F-2000 NAS. The F-Series servers run Quantum StorNext file system software in a scale-out file storage cluster for unstructured data.
For the F-1000, Quantum said it reworked commodity server hardware to create a lower-cost option, reducing the amount of memory needed to compute RAID. The 1U server contains a single controller and supports up to 10 NVMe SSDs, with RAID 10. By comparison, the 2U F-2000 has two controllers and takes 24 dual-ported NVMe SSDs.
The Quantum F-1000 is offered in two capacity models, 39 TB and 77 TB, with 32 Gbps Fibre Channel and 100 Gigabit Ethernet connectivity via iSER (iSCSI Extensions for RDMA).
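Because RAID 10 stripes data across mirrored pairs, usable capacity works out to roughly half of raw capacity. A rough sketch of that arithmetic, using hypothetical drive sizes (Quantum has not published the F-1000's formatted capacities or overheads):

```python
def raid10_usable_tb(drive_count: int, drive_tb: float) -> float:
    """Approximate usable capacity of a RAID 10 set: drives are
    paired into mirrors, so half the raw capacity holds data.
    Ignores file system and metadata overhead."""
    if drive_count % 2 != 0:
        raise ValueError("RAID 10 needs an even number of drives")
    return drive_count * drive_tb / 2
```

With ten hypothetical 7.68 TB NVMe SSDs, `raid10_usable_tb(10, 7.68)` gives about 38.4 TB, in the ballpark of the 39 TB model.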
“This innovation stems directly from Quantum’s strategy of focusing on video data. They have tailored a cost-optimized offering for a specific solution, rather than trying to sell you a general-purpose NVMe storage server,” as other storage vendors have done, said Scott Sinclair, a storage analyst at Enterprise Strategy Group (ESG).
Quantum F-Series takes software-defined approach
NVMe (nonvolatile memory express) transmits data across PCI Express lanes instead of hopping between network components. NVMe provides faster data access and high parallelization, making it attractive for high-resolution video rendering and streaming media. NVMe flash media also comes with premium pricing, putting it beyond the reach of many organizations.
The Quantum F-Series marks the NAS vendor’s intention to adopt a software-defined storage approach, said Eric Bassier, a Quantum senior director of technical marketing. Quantum F-Series customers include major movie studios, government agencies and private corporations that need to capture, edit and store data for visual effects and computer-generated imagery.
Quantum targets the F-1000 for IT teams that need NVMe flash performance, but with moderate density. “It’s pretty cool to be able to port the same [StorNext] software to bring [the] F-1000 server to market so quickly” after its debut in April, Bassier said.
Storage for unstructured data still growing
Organizations are dealing with a surge in newly created data, much of it unstructured data. Media content, particularly image and video, is a prime contributor. According to an ESG report on storage trends, nearly one-quarter of organizations cite digital media as a top driver of projected on-premises storage growth over the next several years.
“The idea that the data center is dying because of the cloud is not the case,” Sinclair said.
Quantum bills the F-1000 as a lower-cost alternative for dense media. It did not disclose pricing, but Bassier said Quantum F-1000 NVMe storage will cost roughly the same as its hybrid SAS arrays.
“We believe SAS SSDs are going to become obsolete rather quickly,” Bassier said.
In addition to StorNext-powered storage, Quantum sells ActiveScale object storage, DXi backup appliances, R-Series storage for in-vehicle storage, VS-Series video surveillance systems and Scalar tape storage systems.
The F-1000 is Quantum’s first product launch since resolving a dispute with the U.S. Securities and Exchange Commission. Quantum in December agreed to a $1 million settlement related to a series of earnings misstatements dating to February 2018. The SEC found that former Quantum executives booked revenue from multiyear contracts, but failed to disclose the revenue in financial reports. Quantum had previously agreed to pay $8 million to settle shareholder lawsuits arising from the probe.
Hyper-converged vendors Pivot3 and Scale Computing this week expanded their use cases with product launches.
Scale formally unveiled HE150 all-flash NVMe hyper-converged infrastructure (HCI) appliances for space-constrained edge environments. Scale sells the compute device as a three-node cluster, but it does not require a server rack.
The new device is a tiny version of the Scale HE500 HCI appliances that launched this year. HE150 measures 4.6 inches wide, 1.7 inches high and 4.4 inches deep. Scale said select customers have deployed proofs of concept.
Pivot3 rolled out AI-enabled data protection in its Acuity HCI operating software. The vendor said Pivot3 appliances can stream telemetry data from customer deployments to the vendor’s support cloud for historical analysis and troubleshooting.
HCI use cases evolve
Hyper-converged infrastructure vendors package the disparate elements of converged infrastructure in a single piece of hardware, including compute, hypervisor software, networking and storage.
Dell is the HCI market leader, in large measure due to VMware vSAN, while HCI pioneer Nutanix holds the No. 2 spot. But competition is heating up. Server vendors Cisco and Hewlett Packard Enterprise have HCI products, as does NetApp with a product using its SolidFire all-flash technology. Ctera Networks, DataCore and startup Datrium are also trying to elbow into the crowded space.
Pivot3 storage is used mostly for video surveillance, although the Austin, Texas-based vendor has focused on increasing its deal size for its Acuity systems.
Scale Computing, based in Indianapolis, sells the HC3 virtualization platform for use in edge and remote office deployments. The company has customers in education, financial services, government, healthcare and retail.
Hyper-converged infrastructure has expanded beyond its origins in virtual desktop infrastructure to support cloud analytics of primary and secondary storage, said Eric Sheppard, a research vice president in IDC’s infrastructure systems, platforms and technologies group.
“The most common use of HCI is virtualized applications, but the percentage of [hosted] apps that are mission-critical has increased considerably,” Sheppard said.
Scale HE150: Small gear for the edge
Scale’s HC3 system runs HyperCore, an operating system Scale designed around the Linux-based KVM hypervisor. Unlike most HCI appliances, Scale HC3 does not support VMware.
The HE150 includes a full version of the HyperCore operating system, including rolling updates, replication and snapshots. The device comes with up to six cores and up to 64 GB of RAM. Intel’s Frost Canyon Next Unit of Computing (NUC) mini-PC provides the compute. Storage per node is up to 2 TB with one M.2 NVMe SSD.
Traditional HCI appliances, including Scale’s larger HC3 models, require a dedicated backplane switch to route network traffic. The HE150 instead features new HC3 Edge Fabric software-based tunneling for communication between HC3 nodes. The tunneling is needed to accommodate the tiny form factor, said Dave Demlow, Scale’s VP of product management.
Scale recommends a three-node HE150 cluster. Data is mirrored twice between the nodes for redundancy. Demlow said the cluster takes up the space of three smartphones stacked together.
Eric Slack, a senior analyst at Evaluator Group, said Scale’s operating system enables it to sell an HCI appliance the size of Scale HE150.
“This new small device runs the full Scale HyperCore OS, which is an important feature. Scale stack is pretty thin. They don’t run VMware or a separate software-defined storage layer, so HyperCore can run with limited memory and a limited number of CPU cores,” Slack said.
Pivot3 HCI appliances
Pivot3 did not make hardware upgrades with this release. The features in Acuity center on AI-driven analytics for more automated management.
Pivot3 enhanced its Intelligence Engine policy manager with AI tools for backup and disaster recovery in multi-petabyte storage. The move comes amid research by IDC that indicates more enterprises expect HCI vendors to provide autonomous management via the cloud.
The IDC survey of 252 data centers found that 89% rely on cloud-based predictive analytics to manage IT infrastructure, but only 72% have enterprise storage systems that bundle analytics tools as part of the base price.
“The entirety of the data center infrastructure market is increasing the degree to which tasks can be automated. All roads lead toward autonomous operations, and cloud-based predictive analytics is the fastest way to get there,” Sheppard said.
Pivot3 said it added self-healing that identifies failed nodes and automatically returns repaired nodes to the cluster. The vendor also added delta differencing to its erasure coding for faster rebuilds.
IBM Spectrum storage software expanded its scope to cover AI and large-scale analytics, including updates for compliance and deeper integration in Amazon Web Services.
IBM Spectrum is the vendor’s brand for storage software. The products released this week extend the reach of IBM Spectrum Discover metadata management to include other vendors’ storage. A refreshed IBM Cloud Object Storage supports denser capacity per rack and individual nodes.
In addition, IBM upgraded its Spectrum Protect Plus data protection to enable direct backup of local databases to Amazon’s Simple Storage Service (S3). IBM also previewed a new VersaStack converged infrastructure that uses its FlashSystem 9100 NVMe rack-scale storage with Cisco servers.
Spectrum Discover metadata management is a recent addition to the IBM Spectrum storage software portfolio. Spectrum Discover layers on top of storage to ingest and index billions of files and objects stored locally and in the cloud. IBM said Discover can help to classify exabytes of unstructured data.
Originally designed only for IBM storage, Spectrum Discover now supports Dell EMC Isilon NAS and NetApp filers, as well as Ceph and any Amazon S3-compatible storage.
IBM Spectrum storage added Discover to enable more efficient mining of metadata, said Eric Herzog, chief marketing officer and vice president of worldwide storage channels at IBM Storage. He said the significant feature enhancement is analytics across different storage systems.
With the new release, IBM beefed up Discover’s capabilities for hooking metadata directly into AI and big data projects. Science teams can use Discover to search large metadata catalogs and connect third-party data analytics tools via built-in APIs.
Henry Baltazar, an analyst for storage at 451 Research, called IBM’s updates “evolutionary, but not revolutionary.” He said IBM Spectrum Discover adds important features for regulatory compliance and optimizing storage efficiency.
“What makes Spectrum Discover valuable is being able to see as much data as possible. The big addition with this launch is support for third-party search on other storage arrays. They didn’t have that before. The more data people can get their hands on, the more powerful the infrastructure is going to be,” Baltazar said.
“You can search the content, not just the metadata. We automated detection of certain sensitive data for GDPR and [privacy] regulations coming out of California and Brazil. You can still create custom metadata, but we included some defaults for apps that need to stay in compliance,” Herzog said.
IBM Spectrum storage for database backup and object capacity
Amazon customers can protect Db2, Oracle, MongoDB and Microsoft SQL Server databases hosted on AWS, using S3 Intelligent Tiering to move data to IBM Spectrum Protect Plus. IBM also added more data retention options to tape and virtual tape libraries, Amazon S3 Glacier and Microsoft Azure Archive Storage.
IBM Cloud Object Storage arrays are based on technology IBM acquired from Cleversafe in 2015. Customers can purchase IBM object storage software as a cloud service, an on-premises deployment or embedded on IBM hardware.
The latest Cloud Object Storage arrays use second-generation IBM hardware and bigger drives. The hardware scales to 10 PB in a single 42U rack and 1.3 PB per node, which IBM said equates to 26% more overall capacity. Use cases include AI, big data and secondary workloads.
IBM enables customers to mix and match old and new Cloud Object Storage systems, Baltazar said. “IBM is saying you don’t need to drop all your stuff on the new hardware. I think people will take advantage of this right away, since they can mix and match without a forklift upgrade,” he said.
IBM VersaStack uses Cisco Unified Computing System servers with IBM storage. The new version attaches to IBM FlashSystem 9100 arrays outfitted with NVMe SSDs and is due out in late 2019. IBM said it will continue to sell VersaStack models that use IBM A9000, V9000 and Storwize storage.
The Pivot3 storage portfolio has expanded in different directions over the years. Pivot3 started out selling — and still sells — storage for media and surveillance before moving into hyper-converged infrastructure after HCI gained popularity.
CEO Ron Nash said the acquisition of NexGen Storage in 2017 was a watershed moment for the vendor, based in Austin, Texas. NexGen gave Pivot3 storage quality of service and nonvolatile memory express (NVMe) PCIe flash — capabilities that underpinned the 2017 launch of the Pivot3 Acuity HCI system.
In the last month, Pivot3 added HCI products to expand its use cases. The vendor launched a ruggedized Intelligent Edge Command and Control system that’s optimized for analytics and virtual desktops and built for defense and intelligence operations in the field. It also forged a partnership with Lenovo to sell a set of edge computing systems for smart city security, based on Pivot3 HCI software and Lenovo ThinkSystem servers.
Nash added that Pivot3’s average selling price per unit increased 74% since the Acuity launch. With the NexGen integration complete, composable infrastructure is part of Pivot3’s strategy to sell more to enterprises with sprawling application workflows that need both automation and scalability, particularly in a multi-cloud environment.
“When you first start out, you put your blinders on and try to go as fast as possible,” Nash said. “It’s almost like a demonstration market. But you have to keep innovating. You can’t keep still. Things that were seen as innovative a few years ago aren’t so innovative anymore. That’s both the beauty and the challenge of technology.”
We recently spoke with Nash on the Pivot3 storage strategy and product roadmap.
How do you evaluate the advent of composable storage? Some observers claim it is more flexible than hyper-converged infrastructure, which packages compute, networking, storage and virtualization on a single hardware appliance.
Ron Nash: Well, you could run composable units on a hyper-converged platform now, if you wanted to. In fact, that’s the direction we’re heading.
The composable stuff is really just a packaging exercise: You deliver processing, storage, networking and such as definable units that you can mix and match in different measures to meet different service levels. But once you start running composable units, you can run it on a hyper-converged infrastructure platform just fine.
It is additive. It gives you one higher level of abstraction, above the hyper-converged infrastructure platform you have now. That’s why we’re interested in it. We’ll have a policy-driven engine that allows you to specify those pieces and, in the parlance, compose those units such that we can provide modules of performance that people will need.
Would this be a different licensing model for Pivot3?
Nash: Yes, it would be another software layer on top of the hardware modules. We’re abstracting just one step farther from the hardware into the software zone.
The direction we’re taking is to make the composable stuff into a software mechanism. That software layer will manipulate the platform underneath to [provision] the right type of units to provide the right service level at the right cost level and right security level. They don’t have to think about the hardware underneath. They’ll think in terms of the load they have.
How soon do you expect this feature to go live on Pivot3 storage?
Nash: We are working on it right now. We have parts of it operating in our lab. We won’t have one grand step where we announce composable as a part of a big picture. We’ll have a series of releases over quarters that add different capabilities as we move down that road.
How rapidly are Pivot3 storage customers adopting a multi-cloud strategy? And how is it reflected in your growth?
Nash: I read a research survey that 80% of CIOs [are using] multiple clouds. To me, that’s [the equivalent] of crossing the chasm. Everybody put their toe in the water, figured out what the cloud was and then had to bring stuff back on premises. You need the ability to go into and out of the public cloud and private clouds.
We’re making our platform with high performance, such that it runs far more of a company’s applications than our competitors. That means we’re getting more load and bigger orders.
We have had people buying a bigger platform from the get-go, but our average selling price increased 74% in the last year. We’ve gone from a baseline of hundreds of thousands of dollars [per sale] to 74% higher on average. That’s massive. Usually, you’re happy if you get a 5% increase year over year.
Which vertical sectors account for the most growth in Pivot3 storage?
Nash: We recently announced our revenue growth at 70% year over year. Growth is coming in the hybrid cloud and software-defined data center. People in that market are becoming more educated and discriminating. If you’ve got a little bitty job, there are a whole bunch of [vendors] for that.
The strengths of Pivot3 storage are ultrahigh performance, large application environments that need to scale, automated quality of service and policy management. We have broadcast that to the market, and enterprise customers are getting a bead on us. They seem to know when to call us.
Pivot3 and NexGen Storage are integrated in a single code base. Trace the impact the NexGen acquisition has had on your cloud business.
Nash: We are benefiting from the amalgamation of the NexGen and Pivot3 teams. You’ve got to have automation if you’re [managing] data that’s scattered across private, hybrid and public clouds. The fact we had the forethought to start working on the automation layer is really paying off now.
People are starting to get away from the early romance of the cloud — the idea that I could pick one cloud and put everything on it. That was never going to happen. People now have a more sophisticated view. They say, ‘I need some private cloud capabilities inside and probably need two or three different public clouds, based on the job and service level.’ And they need the ability to connect these clouds and move data back and forth. That plays to our strength as a platform, with automation layers that let you keep up with it.
What’s been the most surprising feature that customers are requesting?
Nash: The one people are asking about most is encryption. We have customers in government and security that need to encrypt data, so we’ve always done that. But we’re starting to see the early edge of a wave for encryption in many more companies. They have growing concern about [cloud] data breaches and being able to protect people’s personal identities. They think, ‘Why not just encrypt everything?’
As NVMe over Fabric gets going, we’ll move from a world of electromechanical devices and storage controllers to a world that looks more like memory. If you’re working in memory, the cost of encryption is in single digits, in terms of the impact on performance. If you’ve got the right architectural platform, the cost of encryption is well worth it.
Our erasure coding is not content-aware, but it uses a linear algebra algorithm that works just fine. That’s going to make a huge difference for us on encrypted data. We can get storage efficiency that is as good on encrypted as on unencrypted data.
Polycom has expanded its VoIP endpoint portfolio with the release of four new open SIP phones. The vendor also launched a new cloud-based device management service to help partners provision and troubleshoot Polycom devices.
The release builds upon the Polycom VVX series of IP desk phones. The more advanced models include color LCD displays and gigabit Ethernet ports, unlike any of the previous phones in the Polycom VVX series.
The VVX 150 is the most basic of the new devices. Designed for home offices or common areas, the VVX 150 supports two lines and does not have a USB port or a color display.
The VVX 250 is targeted at small and midsize businesses, with a 2.8-inch color LCD display, HD audio, one USB port and support for up to four lines.
The VVX 350 is for cubicle workers, call centers and small businesses. It has a 3.5-inch color LCD display, two USB ports and support for six lines.
The most advanced of the four new models, the VVX 450, can host 12 lines and comes with a 4.3-inch color LCD display. Polycom said the phones are meant for front-line staff in small and midsize businesses.
The new phones rely on the same unified communications software as the rest of the Polycom VVX series, which should simplify the certification process for service providers, Polycom said. 8×8, Nextiva and The Voice Factory were the first voice providers to certify the devices.
Unlike traditional proprietary phones, open SIP phones can connect to the IP telephony services of a wide range of vendors. This simplifies interoperability for businesses that get UC services from multiple vendors.
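“Open SIP” here means the phone speaks the standard SIP protocol (RFC 3261) rather than a vendor-locked variant, so it can register with any compliant IP PBX or hosted voice service. A minimal sketch of the REGISTER request such a phone sends on startup; the extension, domain and addresses are hypothetical, and real phones add authentication and other headers:

```python
def build_register(user: str, domain: str, contact_ip: str, port: int = 5060) -> str:
    """Build a bare-bones SIP REGISTER request (RFC 3261).
    Real phones add digest authentication, unique Via branch and
    tag values, and refresh the registration before it expires."""
    return "\r\n".join([
        f"REGISTER sip:{domain} SIP/2.0",
        f"Via: SIP/2.0/UDP {contact_ip}:{port};branch=z9hG4bK776asdhds",
        f"From: <sip:{user}@{domain}>;tag=49583",
        f"To: <sip:{user}@{domain}>",
        "Call-ID: a84b4c76e66710",
        "CSeq: 1 REGISTER",
        f"Contact: <sip:{user}@{contact_ip}:{port}>",
        "Expires: 3600",
        "Content-Length: 0",
        "",  # blank line terminates the header section
        "",
    ])
```

For example, `build_register("4001", "voip.example.com", "192.0.2.10")` produces the request the phone would send to its registrar over UDP or TCP; because any registrar that implements the standard can accept it, the same handset works across different providers.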
Polycom embraces cloud to help sell hardware
Polycom has launched two new cloud services in an attempt to make its hardware more attractive to enterprises and service providers.
Polycom Device Management Service for Service Providers, released this week, gives partners a web-based application for managing Polycom devices. This should help service providers improve uptimes and enhance end-user control panels. Polycom launched a similar service for enterprises earlier this year.
Eventually, Plantronics may look to combine its cloud management platform with Polycom’s, allowing partners to control phones and headsets from the same application, said Irwin Lazar, analyst at Nemertes Research, based in Mokena, Ill. This would give Plantronics and Polycom an advantage over competitors such as Yealink and AudioCodes.
“The endpoint market is fairly competitive, so wrapping management capabilities around the devices is an attractive means to provide a differentiated offering,” Lazar said.
SAN FRANCISCO — Box Inc. said it will launch a key piece of its Box Skills AI system in December 2018, with expanded capabilities for customers to build and train their own AI-based content management tools.
In addition to the expected commercial release of Box Skills Kit, the cloud content management vendor revealed new AI and workflow automation tools, Google integrations, security features and third-party app integrations at its BoxWorks 2018 conference.
A few hundred Box enterprise users have been using prefab Box Skills, the Box Skills Kit and Google integrations in private beta over the past year; the Google connections now are available in public beta.
Box also previewed a new Automations feature for workflow automation and a new data security system, Box Shield. Both are slated to be released in beta in early 2019, along with an AI-driven Box Feed feature for updates and notifications.
Box said Automations and an updated version of Box Tasks, with which users can assign tasks and deadlines to co-workers, will be out in beta in early 2019.
Activity Stream embeds third-party apps
All this came after Box unveiled Activity Stream, a collaboration system that enables users to work inside the Box platform with popular third-party apps like Slack, Salesforce and DocuSign. That product is expected to go into beta next year.
Users at the conference said they welcomed the open platform design of Activity Stream, Box’s progress on workflow automations, and the impending commercial availability of Box Skills Kit, which will include audio intelligence, video intelligence and image intelligence Skills, as well as the customization features.
“The whole focus on workflow and integrations is really positive,” said Rich Libby, CIO at natural supplements manufacturer Herbalife.
“I have all kinds of workflows that rely on Box documents where Box is not the final destination but stops along the track, so I’m excited to do that,” Libby said of using the Automations feature. “We have all kinds of contracts and agreements that we have to do offline or on email … so putting it on DocuSign will be great.”
“In the digital workplace, what you need is a fundamentally modern set of tools to equip your users with so they can start innovating at a very different velocity,” said Jeetu Patel, Box chief product officer, of the company’s partner strategy. “For these tools to be effective they need to work seamlessly with your content. Regardless of the application that you the user may be working in, we want to make sure you can work with content in Box.”
Analysts said Box appears to be successfully executing a series of product advancements in a range of areas that users have been calling for, though some outstanding questions remain, particularly about the cost of Box Skills and Box Skills Kit.
The AI tools for developers work with AWS, Google, IBM and Microsoft AI, enabling users to customize and derive insights from Box content using those different engines. Users will apparently have to pay both the AI vendors for their services and Box, on a volume pricing basis.
Box Skills costs uncertain
“There is a lot of power in combining multiple Box features, like Skills and Automations,” said Alan Lepofsky, an analyst at Constellation Research who was at the conference at the George R. Moscone Convention Center.
“As the number of documents, images and videos stored in Box increases, the number of business use cases increases and the potential for AI goes up,” Lepofsky said. “What customers need to be aware of is these AI calls cost additional on top of Box.”
That cost does not appear to be fully worked out yet.
“The business model is actually quite simple for now. Skills Kit really just leverages our core platform, so it’s sort of a volume-based business model with the volume of data you’re moving back and forth between Box and third-party AI providers,” Box CEO Aaron Levie said in a Q&A session with reporters and analysts.
“We want to make it really, really easy to adopt Box Skills and make it easy to deploy it at scale, so we think it’s the fastest way customers are going to adopt this technology,” Levie said.
Beta user eyes commercial release
Meanwhile, a Box Skills beta user, Rich Guerra, head of application development at Farmers Insurance, said in an interview that he’s looking forward to using Box Skills Kit to develop AI applications for adjusters to enable them to work with videos and voice recording transcriptions from customers.
Farmers, which has standardized content on the Box platform, is also in the midst of a large-scale project to move legacy content from an old IBM on-premises file management system into the Box cloud.
As for Automations, “I was just texting our account rep how excited we were to take a look at that,” Guerra said. “The automation of the workflow is something we’re looking at to increase efficiency for adjusters, to make the process a lot easier.”
Box creating a cloud ecosystem
Overall, Box has done reasonably well in coordinating its far-flung product undertakings and positioning itself as a “Salesforce-like hub” of content services and collaboration, said Bola Rotibi, an analyst and research director at Creative Intellect Consulting.
“They’ve gotten a bit slicker than in the past,” Rotibi said of the orchestrated announcements on the first day of the conference.
“There are areas they can sharpen — like how much people are going to have to pay for Custom Skills — but I think they’re working behind the scenes to simplify that,” Rotibi said. “On the positive side, what I really like is the contextual integration of the third-party apps.”
Nice inContact has expanded the analytics and quality management capabilities of its flagship cloud contact center, CXone, as businesses increasingly look to such tools to boost sales and customer satisfaction.
CXone Quality Management Analytics Pro uses speech and text analytics to monitor customer-agent interactions. It lets companies make sure agents are saying the right things — for promotional or regulatory purposes — and flags inappropriate behavior.
The system also identifies trends by tracking conversations based on keywords, categories and the sentiments being expressed by customers. The trend discovery should help managers coach agents and develop customer engagement plans.
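The kind of keyword-based interaction monitoring described above can be illustrated with a minimal sketch. The categories and phrases below are hypothetical examples, not Nice inContact's actual rules or API:

```python
# Minimal sketch of keyword-based interaction flagging.
# Phrase lists are illustrative only, not CXone's actual configuration.
COMPLIANCE_PHRASES = {"this call may be recorded", "terms and conditions"}
FLAGGED_PHRASES = {"cancel my account", "speak to a manager"}

def review_transcript(transcript: str) -> dict:
    """Check an agent-customer transcript against phrase lists."""
    text = transcript.lower()
    return {
        # Did the agent say at least one required compliance phrase?
        "compliant": any(p in text for p in COMPLIANCE_PHRASES),
        # Which escalation phrases did the customer use?
        "flagged": sorted(p for p in FLAGGED_PHRASES if p in text),
    }

result = review_transcript(
    "Hi, this call may be recorded. I want to cancel my account."
)
```

A production system would layer speech-to-text, sentiment scoring and trend aggregation on top of matching like this.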
In addition to the analytics and quality management system, Nice inContact added omnichannel support for Instagram and the Japanese messaging app Viber to CXone, while also giving businesses the ability to schedule tweets and Facebook posts.
The vendor announced the changes this week as part of its summer 2018 update to CXone, a contact-center-as-a-service platform it launched one year ago. Nice inContact said 250,000 agents in more than 100 countries now use the platform, including some employed by Fortune 100 companies.
The summer update also gave businesses more tools for complying with the General Data Protection Regulation, such as automated and manual controls over which customer interactions get recorded, retained and deleted.
Contact center analytics gets results for businesses
In a recent survey, 700 IT and business leaders cited analytics as the top technology for transforming customer experiences, according to Robin Gareiss, president of Nemertes Research, based in Mokena, Ill.
Businesses that had already deployed agent performance analytics credited the tools with a 121% increase in customers won, a 68% increase in self-service use, a 45% improvement in customer ratings and a 41% increase in digital sales, Gareiss said.
“Nice inContact is focusing where our research says it should focus,” Gareiss said. “Specifically, Nice inContact is focusing on agent analytics, where our research showed significant success correlations.”
Beyond analytics, this year, cloud contact center vendors have been focused on delivering intelligent call routing — a tool for finding optimal customer-agent pairings — and using AI to give agents better information faster during customer conversations.
Newisys today formally expanded beyond its server roots with the launch of a dense NVMe all-flash storage system: the NSS-2560, which packs nearly 1.7 petabytes of raw capacity in 2U.
The latest Newisys NVMe flash storage was introduced during a demonstration at Flash Memory Summit 2018 in Santa Clara, Calif. At that trade show last year, Newisys won a best-of-show award for its introductory NDS-22482F NVMe over Fabrics Ethernet JBOF (just a bunch of flash) product.
The NSS-2560 server is designed with a drop-down side panel to load 56 NVMe U.2 SSDs. The enclosure contains two Newisys storage server modules, each equipped with dual Intel Broadwell CPUs. Intel Skylake-based server modules are on the Newisys NVMe flash storage roadmap.
The Newisys servers run in parallel and both can access all the NVMe SSDs in the system. Customers can swap out failed drives or servers nondisruptively.
Newisys does not package an operating system on the NSS-2560 hardware. The system supports Microsoft Windows, Red Hat Enterprise Linux and open source Linux variants CentOS, Fedora and Ubuntu.
Newisys is an independent engineering subsidiary of Sanmina Corp., an electronics manufacturing contractor. Sanmina acquired Newisys in 2003. Newisys is best known for its OEM server partnerships with storage vendors.
The Newisys NVMe storage brand was launched several years ago, but the vendor is now looking to ramp up marketing and customer awareness. Other NVMe storage in the Newisys lineup includes the 2U NDS-2244 PCIe over Fabric JBOF, the 2U NSS-2247G quad server and the 1U NSS-1160G database-acceleration server.
“We have been selling in volume to the largest hyperscale data centers for years, but we’re not well-known [for storage]. We had a restructuring and management change and we’re now coming out of stealth mode,” said Dan Liddle, a Newisys vice president of marketing for servers and storage.
Liddle estimated the price of an NSS-2560 array at $50,000 to $200,000 per unit, depending on the NVMe SSD configuration a customer chooses. Newisys plans to sell directly to enterprises and cloud service providers, and Liddle hinted that plans are under way to firm up its channel strategy.
Liddle said Newisys’ history of selling storage servers “gives us an advantage in going to NVMe because we’re not starting from scratch. We’ve got a base of understanding that makes [for] a cleaner transition to an NVMe platform.”
Newisys NVMe flash elbows into crowded market
Industry analysts say the emerging NVMe standard for flash and memory-based storage technologies drastically reduces latency by replacing the SCSI command set with a leaner protocol designed for parallelism. NVMe-attached storage connects directly to the processor across a PCI Express link, while legacy SAS and SATA SSDs incur added latency from host bus adapters that translate commands across multiple hops.
Analyst firm IDC projects NVMe-based flash storage will account for more than half of all sales of external primary storage by 2021. Gartner pegs NVMe adoption at 30% by 2021, compared with 1% presently.
The NSS-2560 is designed with a Newisys SAS server chassis reconfigured for the NVMe protocol, said Rick Kumar, Newisys senior vice president of servers and storage marketing. Four 16-lane PCIe add-in cards and up to eight dual inline memory modules per CPU are standard. The 64 PCIe lanes are evenly divided: 32 lanes to the NVMe SSDs and 32 lanes to networks, with connectivity across four 100 Gigabit Ethernet ports.
Newisys claims the NSS-2560 is rated to provide 50 GBps of read performance, 12.5 million read IOPS and 64 GBps of bandwidth between servers and SSDs.
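As a back-of-the-envelope sanity check on those figures, 12.5 million read IOPS works out to roughly 50 GB per second of throughput, assuming a 4 KiB block size (the block size is an assumption; the article does not state it):

```python
# Sanity check of the NSS-2560's quoted performance figures.
# The 4 KiB block size is assumed, not stated by Newisys.
read_iops = 12.5e6        # quoted read IOPS
block_size = 4 * 1024     # bytes per read (assumed)

# 12.5M IOPS x 4 KiB = 51.2e9 bytes/s, about 51 GB/s,
# in line with the roughly 50 GB/s of quoted read performance.
throughput_gb_s = read_iops * block_size / 1e9
```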
“Our system is built to be balanced across the entire platform. We make sure there is sufficient connectivity between the drives and the network connections. People are [buying] high-end NVMe drives for performance and latency, and we want to make sure the unit can handle their workloads,” Liddle said.
Tom Coughlin, president of data storage consulting firm Coughlin Associates in Atascadero, Calif., said the Newisys NVMe flash capacity holds appeal for service providers and specialized data center applications, including online transaction processing.
“This is a pretty dense 2U package with almost 2 petabytes (PBs) of raw native capacity. This platform gives you a lot of availability. It’s a pretty impressive box,” Coughlin said.
A compelling selling point, Coughlin added, is the relatively low performance penalty for internal-to-external network traffic. "It's only about 14 GBps [of consumed throughput] out of the 64 GBps" to connect the drives to servers, he said.
‘Our NVMe flash array won’t compete with storage vendors’
Newisys plans to continue selling storage servers to OEMs, which raises the possibility its NVMe-based storage could wind up competing with some of its own customers. That’s a conundrum that larger server vendors have also faced.
Before merging with Dell Technologies in 2016, EMC partnered with Cisco to bundle its storage and VMware virtualization on Cisco UCS servers and networking. The relationship worked well — at least until VMware broadened into network virtualization, posing a threat to Cisco's server business and straining the EMC-Cisco partnership. Legacy EMC storage now uses Dell PowerEdge servers.
Kumar said the vendor expects to continue to partner with, not compete with, its OEMs.
“We’ve been very sensitive to that. We’ve talked to our partners and they’re comfortable with it, as long as we don’t disclose anything under NDA (nondisclosure agreements). We’re confident we won’t be perceived as a competitor,” Kumar said.