
Gemalto Sentinel flaws could lead to ICS attacks

A long disclosure and remediation process between security researchers and a hardware token vendor resulted in patches for dangerous flaws that could have led to attacks on critical infrastructure.

Researchers from Kaspersky Lab ICS CERT said they decided to investigate Gemalto Sentinel USB tokens after penetration tests showed the “solution provides license control for software used by customers and is widely used in ICS and IT systems.”

“The solution’s software part consists of a driver, a web application and a set of other software components. The hardware part is a USB token. The token needs to be connected to a PC or server on which a software license is required,” Kaspersky researchers wrote in a report. “From researchers’ viewpoint, [the Gemalto Sentinel software] exhibited a rather curious behavior in the system: it could be remotely accessed and communicated with on open port 1947. The protocol type was defined by the network packet header — either HTTP or a proprietary binary protocol was used. The service also had an API of its own, which was based on the HTTP protocol.”
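The dual-protocol behavior the researchers describe (one open port speaking either HTTP or a proprietary binary protocol, identified by the packet header) can be illustrated with a rough sketch. The heuristic below is hypothetical, not Kaspersky's tooling or Gemalto's actual parser:

```python
def classify_sentinel_header(first_bytes: bytes) -> str:
    """Guess whether traffic on port 1947 is HTTP or the proprietary
    binary protocol, based only on the leading bytes (hypothetical)."""
    http_markers = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"OPTIONS ", b"HTTP/")
    if any(first_bytes.startswith(m) for m in http_markers):
        return "http"    # the HTTP-based API the report mentions
    return "binary"      # otherwise, assume the proprietary protocol
```

A service that multiplexes two protocols on one port must branch on exactly this kind of check, which is also part of why an always-open listener like this makes such an attractive remote attack surface.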

Kaspersky ICS CERT ultimately found 14 vulnerabilities in Gemalto SafeNet Sentinel tokens, the most critical of which “can be used without local privilege escalation — the vulnerable process runs with system privileges, enabling malicious code to run with the highest privileges.”

Vladimir Dashchenko, head of the ICS CERT vulnerability research team at Kaspersky Lab, told SearchSecurity this issue needs attention because “some of the ICS vendors use such license managers for SCADA software.”

“Some vulnerabilities that we found allow remote code execution, meaning an attacker can access someone else’s computing device and make their own changes. For example, vulnerabilities can provide an attacker with the ability to execute malicious code and take complete control of an affected system with the same privileges as the user running the application,” Dashchenko said via email. “Some vulnerabilities are denial-of-service (DoS) vulnerabilities, meaning an attacker has the ability to shut down a machine or network, making it unavailable to its intended users. DoS does not cause machine or network shutdown. It stops the vulnerable process. However in some cases it could possibly cause denial of service for the machine.”

Paul Brager Jr., technical product security leader at Houston-based Baker Hughes and former cybersecurity project manager focused on ICS at Booz Allen Hamilton, said the “potential implications and risks for ICS are not trivial.” 

“Open ports that allow remote interaction with engineering workstations or servers that run human machine interface or other process-oriented software licenses managed by this solution could lead to an impact to the software itself, the control assets that are managed by the software, or both,” Brager told SearchSecurity. “Worst case scenario is an impact to the processes that are being governed by the licensed solution — some of which could be critical operating processes. Also given the care that is required when patching, the risks could persist for some time.”

Gemalto Sentinel disclosure and patching

The disclosure and patching timeline, along with communication issues with Gemalto, caught the attention of the researchers. According to Kaspersky, the first set of vulnerabilities was reported to Gemalto in early 2017, but it wasn’t until late June “in response to our repeated requests” that Kaspersky received a reply.

Dashchenko clarified the timeline and noted that although Gemalto claimed it “notified all of its customers of the need to update the driver via their account dashboards; we were contacted by several developers of software that use this server, and it became clear they were not aware about the issue.”

“We have informed and sent to the vendor information regarding all of the identified vulnerabilities. In early 2017, we sent information about 11 vulnerabilities and in late June the vendor informed us that a patch had been released and information about the vulnerabilities that had been closed, along with a new version of the driver, could be found on the company’s internal user portal. On June 26, we informed Gemalto of the suspicious functionality and of three more vulnerabilities. On July 21, the vendor released a private notice about a new driver version — without any mention of the vulnerabilities closed.”

Gemalto did not respond to requests for comment at the time of this post.

Dashchenko added that Gemalto Sentinel is a “very popular licensing solution,” and noted that an advisory from Siemens listed 16 solutions that need patching against these issues.

Ken Modeste, global principal engineer at Chicago-based Underwriters Laboratories, said patching ICS is complex so users may be wary of the Gemalto Sentinel issues.

The risk associated with either down time or inadvertent failures … will typically be too high for end-users to accept.
Ken Modeste, global principal engineer at Chicago-based Underwriters Laboratories

“Factory automation and connected control systems are vetted, tested, reliable systems. Deploying patches that have not seen significant runtime and test time can cause significant issues. Most of the implemented systems have requirements around safety, reliability and uptime. Therefore, deploying a patch to software or an embedded product can affect an operational system,” Modeste told SearchSecurity. “The risk associated with either down time or inadvertent failures associated with a patch of either the inherent device or software, or its interaction with other devices and software, will typically be too high for end-users to accept.”

Moreno Carullo, co-founder and CTO of Nozomi Networks, an ICS cybersecurity company headquartered in San Francisco, said patching is especially important because “while blocking port 1947 is an option to mitigate the problem, it is also not a solution that is suited for all business processes.”

“Blocking this port could result in the cessation of integral services as well,” Carullo told SearchSecurity. “ICS operators could have strong visibility into the network by applying technologies that are able to monitor the traffic passively to detect anomalies or suspicious activities. These technologies should also be integrated with the firewall to increase the needed visibility in such scenarios.”

Brager said the risks of patching the Gemalto Sentinel issues “could be significant, given the pervasiveness of the SafeNet solution in both enterprise and OT/ICS environments.”

“Particularly concerning is the pervasiveness of the solution in control system environments, and what could potentially mean for assets that leverage the SafeNet dongle solution to operate,” Brager said. “In those instances, patching those systems can be a significant (and time consuming) undertaking. Enterprise patching may not be nearly as complex and critical, but it too comes with its own sets of risks.”

NVMe flash storage doesn’t mean tape and disk are dying

Not long ago, a major hardware vendor invited me to participate in a group chat where we would explore the case for flash storage and software-defined storage. On the list of questions sent in advance was that burning issue: Has flash killed disk? Against my better judgment, I accepted the offer. Opinions being elbows, I figured I had a couple to contribute.

I joined a couple of notable commentators from the vendor’s staff and the analyst community, who I presumed would echo the talking points of their client like overzealous high school cheerleaders. I wasn’t wrong.

Shortly after it started, I found myself drifting from the nonvolatile memory express (NVMe) flash storage party line. I also noted that software-defined storage (SDS) futures weren’t high and to the right in the companies I was visiting, despite projections by one analyst of 30%-plus growth rates over the next couple years. Serious work remained to be done to improve the predictability, manageability and orchestration of software-defined and hyper-converged storage, I said, and the SDS stack itself needed to be rethought to determine whether the right services were being centralized.

Yesterday’s silicon tomorrow

I also took issue with the all-silicon advocates, stating my view that NVMe flash storage might just be “yesterday’s silicon storage technology tomorrow,” or at least a technology in search of a workload. I wondered aloud whether NVMe — the “shiny new thing” — mightn’t be usurped shortly by capacitor-backed dynamic RAM (DRAM) that’s significantly less expensive and faster. DRAM also has much lower latency than NVMe flash storage because it’s directly connected to the memory channel rather than the PCIe bus or a SAS or SATA controller.

The vendor tried to steer me back into the fold, saying “Of course, you need the right tool for the right job.” Truer words were never spoken. I replied that silicon storage was part of a storage ecosystem that would be needed in its entirety if we were to store the zettabytes of data coming our way. The vendor liked this response since the company had a deep bench of storage offerings that included disk and tape.

I then took the opportunity to further press the notion that disk isn’t dead any more than tape is dead, despite increasing claims to the contrary. (I didn’t share a still developing story around a new type of disk with a new form factor and new data placement strategy that could buy even more runway for that technology. For now, I am sworn to secrecy, but once the developers give the nod, readers of this column will be the first to know.)

I did get some pushback from analysts about tape, which they saw as completely obsolete in the next-generation, all-silicon data center. I could have pushed them over to Quantum Corp. for another view.

The back story

A few columns back, I wrote something about Quantum exiting the tape space based on erroneous information from a recently released employee. I had to issue a retraction, and I contacted Quantum and spoke with Eric Bassier, senior director of data center products and solutions, who set the record straight. Seems Quantum — like IBM and Spectra Logic — is excited about LTO-8 tape technology and how it can be wed to the company’s Scalar tape products and StorNext file system.

Bassier said Quantum was “one of only a few storage companies [in 2016] to demonstrate top-line growth and profitability,” and its dedication to tape was not only robust, it succeeded with new customers seeking to scale out capacity. The Scalar i6000, a dense enterprise tape library, holds 11,000 or more slots, a dual robot and as many as 24 drives in a single 19-inch rack frame, all managed with web services using representational state transfer, or RESTful, API calls.

Quantum, like IBM and Spectra Logic, is articulating a product strategy that has fingers in all the popular storage buckets.

Quantum was also hitting the market with a 3U rack-mountable, scalable library capable of delivering 150 TB of uncompressed LTO-7 tape storage or 300 TB uncompressed with LTO-8 for backup, archive or additional secondary storage for less frequently used files and objects. Add compression and you more than double these capacity numbers. That, Bassier asserted, was more data than many small and medium-sized companies would generate in a year.
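The capacity claim is simple arithmetic, assuming the 2.5:1 compression ratio that recent LTO generations advertise for typical data:

```python
# LTO-6 and later generations advertise a 2.5:1 ratio for compressible data.
COMPRESSION_RATIO = 2.5

def compressed_capacity_tb(native_tb: float) -> float:
    """Effective capacity after compression, in TB."""
    return native_tb * COMPRESSION_RATIO
```

At that ratio, the 3U library's 150 TB of LTO-7 grows to 375 TB and its 300 TB of LTO-8 to 750 TB, which is why compression more than doubles the quoted figures.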

Disk also has a role in Quantum’s world; its DXi product provides data deduplication that’s a significant improvement over the previous-generation model. It offers performance and density improvements through the application of SSDs and 8 TB HDDs, as well as a reduction in power consumption.

All the storage buckets

Quantum, like IBM and Spectra Logic, is articulating a product strategy that has fingers in all the popular buckets, including tape, disk and NVMe flash storage. After years of burying its story under a rock by providing OEM products that other vendors branded as their own, Quantum now derives 90% of its revenue from its own brand.

Bottom line: We might eventually get to an all-silicon data center. In the same breath, I could say that we might eventually get that holographic storage the industry has promised since the Kennedy administration. For planning 2018, your time is better spent returning to basics. Instead of going for the shiny new thing, do the hard work of understanding your workload, then architect the right combination of storage and software to meet your needs. Try as you might, the idea of horizontal storage technology — one size fits most — with simple orchestration and administration remains elusive.

That’s my two elbows.

Microsoft scoops up NAS vendor Avere for hybrid cloud services

Microsoft moved to bolster its cloud storage capabilities with the acquisition of NAS vendor Avere Systems, giving it a high-performance file system to manage unstructured data in hybrid clouds.

The Pittsburgh-based vendor’s Avere OS file system is incorporated in its FXT Edge filers, available in all-flash or spinning disk versions for on-premises or hybrid cloud configurations. Avere also provides a virtual appliance, the Virtual FXT Edge filers, which are available for Amazon Web Services (AWS) and the Google Cloud Platform.

The terms of the deal were not disclosed.

Microsoft disclosed the acquisition in a blog post on its website but declined an interview request to provide more details about its plans for the cloud NAS vendor. Microsoft acquired early cloud NAS vendor StorSimple in 2012, and gives that technology to Azure subscribers to tier data into the cloud.

However, Avere CEO Ron Bianchini wrote in a company blog post that the two companies’ “shared vision” is to use Avere technology “in the data center, in the cloud and in hybrid cloud storage …” while tightly integrating it with Azure.

“Avere and Microsoft recognize that there are many ways for enterprises to leverage data center resources and the cloud,” Bianchini wrote. “Our shared vision is to continue our focus on all of Avere’s use cases — in the data center, in the cloud and in hybrid cloud storage and cloud bursting environments. Tighter integration with Azure will result in a much more seamless experience for our customers.”

Avere was founded in 2008 as a company that focused on the data center with its FXT Core Filers, which used flash to accelerate network-attached storage (NAS) performance on disk systems. The company later transitioned to the cloud with its Avere FXT Edge Filers, which served as NAS for public clouds, allowing customers to connect on-premises storage to AWS, Google Cloud and Azure services.

In addition to NFS and SMB protocols, the Avere Cloud NAS appliance supports object storage from IBM Cleversafe, Western Digital, SwiftStack and others through its C2N Cloud-Core NAS platform.

The only other vendor that offers end-to-end is Oracle. But Oracle does not have a global namespace. Avere has a global namespace.
Marc Staimer, founder, Dragon Slayer Consulting

The NAS vendor also sells FlashCloud, which runs on FXT Edge Filers with object APIs to connect to public and private clouds. The systems can be clustered so that cloud-based NAS can scale on premises while also providing high-availability access to data in the cloud. Customers can use FlashCloud software as a file system for object storage and move data to the cloud without requiring a gateway.

“They provide a true NAS filer,” said Marc Staimer, founder of Dragon Slayer Consulting. “They provide a complete, end-to-end package. The only other vendor that offers end-to-end is Oracle. But Oracle does not have a global namespace. Avere has a global namespace.”

Avere founders Bianchini, CTO Michael Kazar and technical director Daniel Nydick came from NetApp, which acquired their previous company Spinnaker Networks in 2004 for its clustered NAS technology.

Avere’s customers include Sony Pictures’ Imageworks, animation studio Illumination Mac Guff, the Library of Congress, Johns Hopkins University and Teradyne Inc. The company is private, so it does not disclose revenue, but a source close to the vendor put its bookings at $7 million in the fourth quarter of 2016 and $22 million for the year. Those bookings were up from $4.8 million in the fourth quarter of 2015 and $14.5 million for that year.

In March of 2017, Google became an Avere investor during the company’s $14 million Series E funding round. Avere raised about $100 million in total funding. Previous investors include Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Technologies.

The Avere team will continue to work out of its Pittsburgh office for Microsoft.

Quantum Xcellis scale-out NAS tackles unstructured data

Quantum Corp.’s new Xcellis Scale-out NAS system moves the vendor into the mainstream NAS market, where it will take on the likes of Dell EMC Isilon and NetApp FAS.

The Quantum Xcellis Scale-out NAS appliances target large semistructured and unstructured primary data workloads. Sample use cases include analytics, artificial intelligence, autonomous vehicle development, drug discovery, genomics and immersive content.

Xcellis encompasses a line of Quantum data storage hardware managed by the StorNext scalable file system. Quantum first launched the Xcellis brand in late 2015, putting StorNext onto an appliance. The new scale-out NAS version handles higher capacity workloads, and Quantum claims it can scale to hundreds of petabytes with no effect on performance.

The new scale-out Quantum Xcellis NAS nodes are scheduled for general availability in December.

“We are aiming this product at users with high-value workflows where data is the product,” said Laura Shepard, Quantum senior director of emerging technologies. “This tends to be data that grows very rapidly and requires very high performance and scale. These tend to be primary workloads that need to stay on premises.

“We believe we can offer the enterprise features of scale-out NAS with cost-effective performance scaling, which has not been a great strength of traditional enterprise NAS,” Shepard said.

StorNext scale-out storage is Quantum’s fastest-growing segment. Quantum still derives most of its income from tape, but also sells disk-based backup. When he took over as Quantum’s interim CEO last month, Adalio Sanchez called scale-out storage the vendor’s growth engine.

New Quantum configurations cluster all-flash, archiving, hybrid

Quantum Xcellis all-flash and hybrid building blocks are available in 5U and 6U form factors. The all-flash systems range from 370 TB to 925 TB of capacity. Quantum rates all-flash performance at 1 million IOPS.

Quantum Xcellis hybrid configurations for mixed workflows scale to 400 TB and 200,000 IOPS. A 3U entry-level Xcellis NAS tops out at 370 TB. Quantum also offers a 5U archive model with up to 448 TB of disk storage. Varying Quantum Xcellis node types can be mixed and managed as a single tier of storage. Users can add nodes individually to a cluster or combine them in a rack-scale deployment for up to 3 PB of raw capacity.

Quantum’s unified data storage presents block, file and object access in a single namespace. Users can scale performance and storage separately and offload data to any StorNext-managed storage. Quantum allows tiering to on-premises object stores and the public cloud, but that data is then no longer managed by StorNext.

Quantum’s StorNext data management includes audits, encryption, erasure coding, load balancing, point-in-time snapshots, RAID, replication and WORM compliance. StorNext manages Xcellis data across IBM Cloud Object Storage, NetApp StorageGrid, Scality and Quantum Lattus object platforms, as well as Amazon Web Services and Microsoft Azure public clouds.

Will Quantum make inroads against established NAS vendors?

Scott Sinclair, a storage analyst at Enterprise Strategy Group in Milford, Mass., said managing rapid data growth is not the only headache for digital-based enterprises. A bigger challenge is the ability for storage to provide streaming access to data for analytics and real-time business operations.

Sinclair breaks the enterprise NAS market into three segments: “Enterprise-class systems focus on features and functionality for data management. A second segment includes vendors that provide big pools of storage that’s cheap and deep, without many features. The third segment is HPC systems optimized for speed. Quantum claims Xcellis NAS can deliver all of that in one product,” Sinclair said.

“There is a demand for [NAS] technology that is good — if not great — at handling the multiple aspects of functionality, cost-effective scaling and performance,” he added. “The question is whether Quantum Xcellis will be able to deliver to the extent that it starts to displace the incumbents. Even though there aren’t many vendors, it’s a difficult market to penetrate.”

Researchers bypass iPhone X security feature Face ID

Researchers at the security vendor Bkav Corporation found a way to bypass iPhone X security by tricking Face ID, Apple’s facial recognition technology.

The team at the Vietnam-based company was able to unlock an iPhone X using a mask made out of a 3D-printed frame, a handmade silicone nose and some 2D pictures layered on top of the mask. The whole experiment used only about $150 worth of materials, but it required a lot of know-how, according to Bkav.

“It is quite hard to make the ‘correct’ mask without certain knowledge of security,” said Bkav’s CEO Nguyễn Tử Quảng. “We were able to trick Apple’s AI … because we understood how their AI worked and how to bypass it.”

The Bkav research team didn’t start trying to bypass the iPhone X security feature until the device was released on Nov. 3, 2017, but they were successful almost immediately and published their findings on Nov. 9.

“Everything went much more easily than you expect,” said Quảng, noting that Apple’s Face ID works even when the user covers up half of their face, so the technology behind it really only needs half of a mask to be fooled. “Apple seems to rely too much on Face ID’s AI. We just need a half face to create the mask. It was even simpler than we, ourselves, had thought.”

This is not the first time this company has been able to break facial recognition technology. In 2008, Bkav demonstrated the security flaws in facial recognition for laptops.

“So, after nearly 10 years of development, face recognition is not mature enough to guarantee security for computers and smartphones,” wrote the Bkav researchers in their report on the experiment, adding that for biometrics security, fingerprint scanning is the best option.

Facial recognition software

However, the Bkav researchers also noted that not everyone has to worry about this iPhone X security issue. “Potential targets shall not be regular users, but billionaires, leaders of major corporations, nation leaders and [agencies] like FBI need to understand the Face ID’s issue,” the company wrote. “Security units’ competitors, commercial rivals of corporations, and even nations might benefit from our PoC [proof of concept].”

Bkav’s PoC could also affect the tension between law enforcement and Apple over encryption and locked iPhones. Apple famously refused an FBI order to unlock the iPhone belonging to the gunman in the 2015 San Bernardino, Calif., shooting so investigators could examine its contents, sparking an ongoing debate over encryption backdoors for law enforcement. If law enforcement can put Bkav’s proof of concept to work successfully, there could be real implications for that debate.

“Exploitation is difficult for normal users,” Bkav wrote, “but simple for professional ones.”

In other news

  • Equifax has taken control of 138 domains that mimicked the company’s breach response website. The real website was created in September 2017 following the data breach that exposed the personal and financial information of around 145 million U.S. consumers. Separate from the company website, Equifax set up www.equifaxsecurity2017.com for customers to check whether they had been affected and to learn about follow-up steps if they had. The website, which was injected with malware that spread to visitors, inspired a Hong Kong-based company called China Capital Investment Limited to purchase 138 domains through GoDaddy that were similar to the legitimate Equifax website, but slightly different. The lookalikes used variations on the real domain and likely misspellings or typos, such as eauifaxsecurity.com, equifaxsecuiry2017.com and equifavsecurity2017.com. According to Gizmodo, China Capital Investment started buying up these domains within 24 hours of the Equifax breach announcement.
  • In this month’s Patch Tuesday, Microsoft rolled out a fix for a 17-year-old vulnerability in Microsoft Word. The flaw was a remote code execution vulnerability in Microsoft Office that the company did not previously know about. Researchers at security firm Embedi discovered the flaw, which exists in all versions of Microsoft Office, and, if successfully exploited, could enable an attacker to run arbitrary code as the legitimate user. Also on Patch Tuesday, Adobe released patches for 80 vulnerabilities on nine of its products; 56 of the vulnerabilities were in Acrobat and Reader alone, and the others were spread over Flash Player, Photoshop, Connect, DNG Converter, InDesign, Digital Editions, Shockwave Player and Experience Manager. While Adobe says that none of the vulnerabilities were exploited in the wild, many of them were classified as critical.
  • A new strain of malware known as FALLCHILL is being used by a hacking group associated with the North Korean government. The U.S. Department of Homeland Security (DHS) and the FBI issued a joint alert that said the FALLCHILL remote administration tool was used by the group known as Hidden Cobra or the Lazarus Group to hack into organizations in the aerospace, telecommunications and financial industries. “[The] FBI has high confidence that Hidden Cobra actors are using the IP addresses — listed in this report’s IOC [indicators of compromise] files — to maintain a presence on victims’ networks and to further network exploitation,” the alert stated. “DHS and FBI are distributing these IP addresses to enable network defense and reduce exposure to North Korean government malicious cyber activity.” The Lazarus Group is believed to be behind hacks such as those on Sony Pictures Entertainment and Bangladesh’s central bank.
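Lookalike registrations such as the Equifax variants above can be flagged with a plain edit-distance check. This is a generic sketch, not how GoDaddy or Equifax actually screen domains:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_lookalike(candidate: str, legit: str, max_dist: int = 2) -> bool:
    """Flag domains within a couple of edits of the real one (hypothetical threshold)."""
    return 0 < levenshtein(candidate, legit) <= max_dist
```

Any registration within a character edit or two of a brand's real domain is worth a closer look, though real screening also has to handle added words and alternate top-level domains.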

Versa SD-WAN gets features focused on voice, video calling

Versa Networks has added capabilities to its SD-WAN product that the vendor claims will help organizations maintain the quality of video and voice calling in the branch office.

This week, the company introduced Versa SD-WAN technology that formulates a mean opinion score (MOS) for communication traffic and lets network engineers set policies that trigger specific actions if the score falls below a baseline. MOS is a number from one to five that indicates the voice or video quality at the destination end of the circuit. Satisfactory voice calls, for example, typically score in the 3.5 to 4.2 range.

Versa has developed algorithms that determine the MOS of each call based on metrics extrapolated from Real-Time Transport Protocol (RTP) and Real-Time Transport Control Protocol (RTCP) traffic flows. RTP handles the data transport, while RTCP lets monitoring applications detect packet loss and compensate for delays that lead to jitter.

Other factors used to reach a mean opinion score are the codecs used in a company’s voice over IP and video conferencing application. In general, codecs compress digital data to move it faster over the network and then decompress it at the destination point. In doing its work, however, a codec causes some degradation in quality.

Network engineers can set policies that tell the Versa SD-WAN to take specific actions when the MOS is too low. Those steps could include changing the transport of the data flow, moving other traffic off the route to increase available bandwidth and cloning the communication traffic so it can be sent across multiple circuits.
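Versa's exact algorithms are proprietary, but the score-then-remediate loop can be sketched with the widely published simplified E-model approximation of MOS; the thresholds and action names below are hypothetical:

```python
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Approximate MOS (1-5) from path metrics via a simplified E-model."""
    eff_latency = latency_ms + 2 * jitter_ms + 10
    if eff_latency < 160:
        r = 93.2 - eff_latency / 40
    else:
        r = 93.2 - (eff_latency - 120) / 10
    r = max(0.0, min(100.0, r - 2.5 * loss_pct))   # penalize packet loss
    return 1 + 0.035 * r + 7.10e-6 * r * (r - 60) * (100 - r)

def remediate(mos: float, baseline: float = 3.5) -> str:
    """Pick a corrective action when quality drops below the policy baseline."""
    if mos >= baseline:
        return "none"
    # e.g., switch transport, shed other traffic, or clone across circuits
    return "switch_transport" if mos >= 3.0 else "clone_across_circuits"
```

A policy engine would evaluate something like estimate_mos() per call from the RTP/RTCP metrics and invoke the configured action whenever the score dips below the baseline.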

Analysts split on value of MOS in Versa SD-WAN

Other SD-WAN vendors, such as VeloCloud, which was recently acquired by VMware, also provide mechanisms for monitoring and taking corrective actions to help maintain voice and video quality. In general, most products take into account common network problems, such as the loss of packets or delay in their delivery.

“The added support [within Versa] for real-time voice and video will help ensure good-quality communications are maintained,” said Mike Fratto, an analyst at Current Analysis, which is owned by London-based GlobalData.

Not all analysts, however, agreed that MOS scoring would improve voice and video calling.

“My guess is that this won’t improve calling all that much, because SD-WAN solutions are already looking at things like jitter, latency, etc., for voice traffic,” said Irwin Lazar, an analyst at Nemertes Research, based in Mokena, Ill. “They also can’t deal with factors inside the LAN, such as poor Wi-Fi performance that can adversely impact call quality.”

At its core, SD-WAN lets engineers steer traffic across multiple links, such as MPLS, Long Term Evolution and broadband. The connections they choose depend on the needs of the applications generating the traffic. Companies can select MPLS for data that needs a high level of reliability, while using cheaper broadband for less sensitive data flows.
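That steering decision reduces to a policy lookup with failover. The application classes and link names below are illustrative, not Versa's configuration schema:

```python
# Hypothetical policy: map application classes to preferred transports.
POLICY = {
    "voice":  "mpls",       # reliability- and latency-sensitive
    "video":  "mpls",
    "backup": "broadband",  # bulk, loss-tolerant
    "guest":  "lte",
}

def pick_link(app_class: str, links_up: set) -> str:
    """Steer traffic to the preferred transport, failing over if it is down."""
    preferred = POLICY.get(app_class, "broadband")
    if preferred in links_up:
        return preferred
    # Fall back to any remaining healthy link, best first.
    for fallback in ("mpls", "broadband", "lte"):
        if fallback in links_up:
            return fallback
    raise RuntimeError("no WAN links available")
```

For example, pick_link("voice", {"broadband", "lte"}) falls back to broadband when MPLS is down, mirroring how an SD-WAN keeps sensitive traffic on the best remaining path.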

Vendors have added WAN optimization, firewalls, routing and quality-of-service features for communications as differentiators and to demand higher prices for their SD-WAN platforms.

Q&A: New CEO bets on open source future for Acquia CMS

On Monday, cloud CMS vendor Acquia Inc. announced Michael Sullivan, former Hewlett Packard Enterprise senior vice president and general manager for SaaS, has been named the company’s new CEO. Sullivan will move into the position next month.

SearchContentManagement interviewed Sullivan and Acquia co-founder Dries Buytaert, who was also the lead developer on the open source Drupal content management system, upon which the Acquia CMS is based. Buytaert remains Acquia’s CTO and also takes over as board chair.

Dries, according to your blog, there were more than 140 candidates to succeed longtime CEO Tom Erickson. How did Acquia choose Michael Sullivan?

Dries Buytaert, co-founder and CTO, Acquia

Dries Buytaert: There are a lot of reasons. First of all, there’s a very good fit with Mike. That’s not just a good fit between him and me, but also to our culture and personality and how we think about different things, like the importance of cloud and open source. I also felt Mike was really well-prepared to lead our business. Mike has 25 years [of] experience with software as a service, enterprise content management and content governance. Mike has worked with small companies, as well as larger companies.

At HP Enterprise and Micro Focus [acquired by HPE], Mike was responsible for managing more than 30 SaaS products. Acquia is evolving its product strategy to go beyond Drupal and the cloud to become a multiproduct company with Acquia Digital Asset Manager and Acquia Journey. So, our own transformation as a company is going from a single-product company to a multiproduct company. Mike is uniquely qualified to help us with that, based on his experience.

Mike, why was it a fit for you, and what excites you about the market position of the Acquia CMS and the company’s future as a cloud CMS provider?

Michael Sullivan: I’ve been involved in both [enterprise] content management and web content management during the course of my career, so it’s not new to me. I’ve always found it interesting and have had a lot of success in this space, broadly. There’s a fundamental shift that’s occurring in the content management world, where people are moving from static web presence to a different model of engaging with their customers — an intelligent digital experience.

Michael Sullivan, CEO, Acquia

Companies will need to compete on that basis in the future, and they need to have personalized experiences and work with customers through lots of channels, not just the website. Acquia sits at the intersection of a lot of these technologies — Drupal, open source, SaaS, DevOps, machine learning, predictive analytics. If you look at what Acquia’s already done and what they’re working on, this is a company that has the right vision and a proven ability to execute … and a history of winning. That was important to me; it makes it believable to me this company will succeed.

What do you see as Acquia’s biggest challenges over the next few years?

Sullivan: There’s a lot of work to do. We have to move fast; we have to execute well. Our challenge is execution — we know what we want to build, [and] we know where we want to go. The question is: How do we get there, and how do we get there efficiently?

What is the role of AI in the future of content management and the Acquia CMS?

Buytaert: There’s a big future for AI in our space; it’s something we’re investing in, with a team of six people working on machine learning solutions. We believe we are in the early stages of what will be a pretty big transformation of the web, or digital.

Historically, the web has been pull-based: You have to go to the web and search for information. We believe, in the future, more of those experiences will become push-based: Information will start to find you. The Holy Grail is delivering customers the right information for the right service at the right time, in the right context, on the right channel — web, mobile, chatbots or voice assistants. That’s a pretty big vision.

Drupal is evolving from a website management system to a digital experience platform.
Dries Buytaert, co-founder, Acquia

To [accomplish] that, you need to build systems that are smart and can predict what users want at what point in time. If you can do that, you can really change the customer experience. Instead of having the customer find the information, it increasingly comes to you.

There’s a lot of early examples of that; a simple example is [music streaming services] Spotify and Pandora. The old pull-based model is turning the knob on your radio to find the music that you want; Spotify and Pandora push you information that you like, so you don’t have to go look for it. We think that will happen across every industry, and the Acquia platform will help companies build these digital experiences.

Dries, Acquia is expanding past the original concept of Drupal with headless CMS and all of these new SaaS offerings and CRM-style tools to help companies service customers. What will become of Drupal?

Buytaert: One of the great things about Drupal is that there aren’t a lot of technologies that remain relevant for 18 years [since Drupal debuted]. The reason Drupal has been successful is that we’ve literally reinvented ourselves more than 10 times. Drupal is evolving quite rapidly; I would argue we’re ahead — an API-first player, compared to our proprietary competitors.

Drupal is evolving from a website management system to a digital experience platform; it’s becoming a content repository, where you can manage content and can feed that content into a variety of different touchpoints or channels. It’s not just specialized in creating HTML output for webpages, but we have integrations with Alexa, chatbots, digital kiosks, [and] we have a long list of customers who come to us because they want to move beyond building websites.

We’ve been investing in headless Drupal for four years, since before it was called headless. I feel like we spotted those trends and have done a pretty good job going after them earlier than our competitors.
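The decoupled pattern Buytaert describes means every channel, from a chatbot to a kiosk, queries the same Drupal content repository over its JSON:API module instead of receiving rendered HTML. A minimal sketch of how a client might build such a query, assuming a hypothetical site URL (the `/jsonapi/node/{type}` path and `filter[...]` syntax follow Drupal's standard JSON:API conventions):

```python
from urllib.parse import urlencode

def jsonapi_url(base, node_type, **filters):
    """Build a Drupal JSON:API collection URL for one content type."""
    url = f"{base.rstrip('/')}/jsonapi/node/{node_type}"
    if filters:
        # filter[field]=value is JSON:API's filter query syntax
        url += "?" + urlencode({f"filter[{k}]": v for k, v in filters.items()})
    return url

# A mobile app, chatbot or kiosk can hit the same repository
# with different queries; only published articles here:
print(jsonapi_url("https://example.com", "article", status=1))
# https://example.com/jsonapi/node/article?filter%5Bstatus%5D=1
```

Any channel that can issue an HTTP GET can then consume the returned JSON, which is what makes the repository, rather than the rendered page, the unit of reuse.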

Mike, what will the Acquia CMS look like in five years?

Sullivan: We have big ambitions for this space. Some of these pieces we already have plans for. I think we’ll be in the position to do acquisitions over time. Obviously, I haven’t had my first day yet, so it’s hard to say for sure, but we think we are well-positioned to fill in all these pieces [to build the next-generation digital experience platform]. Five years is a long time; I’d like to think that we’ll be able to do it a lot sooner than that.

VeloCloud-VMware acquisition will battle Cisco in the branch

VMware plans to acquire SD-WAN vendor VeloCloud Networks, a move that would turn the branch office into a battleground for the virtualization provider and Cisco.

The VeloCloud-VMware acquisition, announced this week, is expected to close in early February. With VeloCloud, VMware would go head-to-head against Cisco’s Viptela, IWAN and Meraki brands. SD-WAN, in general, intelligently routes branch traffic across multiple links, such as broadband, MPLS and LTE.
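The core SD-WAN idea is link selection by policy: steer each flow onto whichever branch link currently meets the traffic's service-level targets. The sketch below is illustrative only; the link names, thresholds and scoring are hypothetical and do not represent VeloCloud's or Cisco's actual policy engines.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured round-trip latency
    loss_pct: float     # measured packet loss
    up: bool            # link health from probes

def pick_link(links, max_latency_ms=150.0, max_loss_pct=2.0):
    """Return the best healthy link that meets the SLA thresholds."""
    eligible = [l for l in links
                if l.up and l.latency_ms <= max_latency_ms
                and l.loss_pct <= max_loss_pct]
    if not eligible:  # fail open: best effort over any link that is up
        eligible = [l for l in links if l.up]
    # prefer lowest loss, then lowest latency
    return min(eligible, key=lambda l: (l.loss_pct, l.latency_ms), default=None)

links = [
    Link("mpls", latency_ms=40.0, loss_pct=0.1, up=True),
    Link("broadband", latency_ms=25.0, loss_pct=0.0, up=True),
    Link("lte", latency_ms=80.0, loss_pct=1.5, up=True),
]
print(pick_link(links).name)  # broadband: lowest loss, then lowest latency
```

Real products layer per-application policies and continuous path remeasurement on top of this, but the selection loop above is the essence of what moves traffic off a degraded MPLS circuit onto broadband or LTE.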

“This is the first time that Cisco and VMware will directly compete in the networking world,” said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo.

Before the VeloCloud deal, the closest Cisco and VMware came to competing in networking was with their software-defined networking platforms, ACI and NSX, respectively. The products, however, serve mostly different purposes in the data center: NSX provisions network services within VMware’s virtualized computing environments, while ACI distributes application-centric policies to Cisco switches.

VMware SDN marches to the branch

The VeloCloud-VMware acquisition, however, marks the start of taking NSX to the branch, where Cisco is already offering ACI. Both vendors are also working on extending their respective SDN platforms to enterprise software running on public clouds.

In the branch, VMware plans to provide SD-WAN, security, routing and other services on an NSX-based network overlay that’s hardware agnostic. Rather than supply branch appliances for VeloCloud software, VMware wants customers to buy certified hardware from different vendors.

This is the first time that Cisco and VMware will directly compete in the networking world.
Shamus McGillicuddy, analyst, Enterprise Management Associates

“That is certainly our longer-term vision for this. That it will be a pure software play,” said Rajiv Ramaswami, COO of cloud services at VMware, during a conference call with reporters and analysts.

In the short term, VMware would support appliances sold by VeloCloud, Ramaswami said. VMware’s parent company, Dell EMC, also sells hardware for VeloCloud software.

While VMware shies away from hardware, Cisco has delivered centralized software that provisions network services to the branch through a new line of routers, called the Catalyst 9000s. In the future, Cisco could also provide a software-only option through the Enterprise Network Functions Virtualization (ENFV) platform the company introduced last year. ENFV would run on Cisco servers or third-party certified hardware.

“Cisco is making multiple bets in SD-WAN,” McGillicuddy said.

Cloud orchestration a key piece of VeloCloud-VMware acquisition

VMware is banking on VeloCloud’s cloud-based network orchestration tools to evolve into a significant differentiator from Cisco and other WAN infrastructure providers. VMware could eventually use the technology to orchestrate network services in the branch and the cloud, Ramaswami said.

VMware’s ambitions do not alter the fact that it has a difficult road ahead battling Cisco, which dominates the networking market with more than 150,000 paying customers for its WAN products, according to Gartner. VMware is the largest supplier of data center virtualization software, but it is a newcomer in networking.

VeloCloud’s roughly 1,000 customers include service providers, as well as enterprises. AT&T, Deutsche Telekom, Sprint, Vonage and Windstream are examples of carriers that offer the company’s SD-WAN product as a service.

VMware sells network virtualization software to service providers and expects VeloCloud to help grow that relatively small business. “VeloCloud and their deep relationship with the service provider community is a huge route-to-market accelerator,” said Peder Ulander, a vice president of strategy at VMware.

VMware did not release financial details of the acquisition.

HCI technology: Pivot3 taps Arrow for distribution

Pivot3, a hyper-converged infrastructure vendor, has tapped Arrow Electronics for distribution and contract manufacturing.

The distribution, marketing and channel development components of the Arrow partnership will initially focus on the North American market. But Mark Maisano, vice president of channel sales at Pivot3, based in Austin, Texas, said discussions are underway regarding global distribution for its HCI technology. Arrow’s Enterprise Computing Solutions (ECS) brings Pivot3 “a different set of partners,” he said, noting the Arrow ECS channel ecosystem in the data center and storage areas.

The arrangement is the second distribution relationship for Pivot3, which partnered with Promark Technology, an Ingram Micro company, in June 2016. Promark specializes in data storage and virtualization products and focuses on the public sector, holding a number of federal and state contract vehicles. One of the Arrow ECS technology specializations is converged and hyper-converged infrastructure products.

In addition to distribution, Arrow will offer configuration services for Pivot3’s hyper-converged platform on Lenovo servers. Under the arrangement, Pivot3 will engage with the distributor’s Arrow Intelligent Systems for contract manufacturing services worldwide.

In 2015, Pivot3 unveiled a pact with Lenovo to bring its HCI technology to Lenovo hardware. At the time, Pivot3 identified Arrow ECS as its exclusive European distributor for the combined platform.

The arrangement with Arrow in Europe, however, mainly focused on surveillance use cases, Maisano said. The new HCI technology relationship, he added, targets enterprise customers with a broader range of offerings, such as Pivot3’s Acuity HCI software platform, which was designed for large enterprise use cases.

Dave Clipp, principal of systems and virtualization at Atom Creek, an IT service provider and Pivot3 partner in Centennial, Colo., said he is pleased Pivot3 selected Arrow, referring to the company as “one of the best distribution partnerships we have.”

“There are other distribution partners they could have chosen, and that could have been a deal breaker for us,” Clipp said. “We’ve had some challenging relationships with some distribution partners.”

The distribution pact comes amid an HCI technology market set for growth. “We see the use cases expanding where we can come in with a hyper-converged infrastructure,” Clipp said, noting that earlier use cases were around virtual desktop infrastructure and video analytics.

Data quality vendor Naveego debuts channel program

Naveego, a provider of cloud-based data quality technology, introduced its Partner Success Program this week.

Through the program, partners can tap resources and support from the vendor to help customers address data quality challenges in hybrid cloud enterprise systems. Resources include sales and technical training, opportunity registration, and marketing and promotions, Naveego said.

“We are very much channel-focused. We don’t have a direct sales force team. So, this [program] isn’t a ‘nice to have’ for us; it’s a ‘have to have,'” said Sean Cavanaugh, director of sales at Naveego, based in Traverse City, Mich.

Partner Success is Naveego’s first channel program. The company works with seven partners today, and another 11 partners are currently coming on board, Cavanaugh said. He added that Naveego’s software-as-a-service offering has a low barrier of entry for partners, allowing them to rapidly establish themselves as subject-matter experts in data quality and master data management.

Naveego’s expertise is in the oil and gas industry, he noted, but the vendor is looking at manufacturing and telecommunications as potential vertical market focuses.

VeloCloud aligns partners to retail market

Software-defined WAN company VeloCloud said partners have a role to play in its recently announced initiative to target the retail sector.

According to Mike Wood, vice president of worldwide marketing at VeloCloud, based in Mountain View, Calif., key IT trends in the retail industry span the shift to cloud, adoption of unified communications and video, internet of things deployment and pervasive in-store Wi-Fi.

“VeloCloud SD-WAN for Retail is enabling our partners to differentiate themselves further in specific verticals using an expanded set of SD-WAN capabilities for the retail market, which includes advanced segmentation, PCI [Payment Card Industry] capabilities and deployment best practices,” Wood said in an email.

Other news

  • If your customers use Salesforce, be on the lookout for more spending on AI and analytics. That’s the word from Bluewolf, an IBM company and Salesforce consultancy, and its annual State of Salesforce Report. The report, based on a survey of 1,800 Salesforce customers from around the world, said 77% of Salesforce customers already using AI expect to up their investment in “AI or platforms that have embedded AI capabilities” over the next 12 months. In addition, 71% of the respondents said they are “increasing their investment in actionable analytics,” according to the report.
  • CompTIA said it has updated its Security+ certification exam to place a greater emphasis on test takers’ hands-on ability to identify and address security threats, attacks and vulnerabilities. CompTIA released the new version of Security+ (SY0-501) this month.
  • ConnectWise’s business management platform, ConnectWise Management, has integrated with itopia’s cloud IT management portal for Google Cloud Platform. According to itopia, the integration provides managed services providers the ability to support clients by synchronizing accounts, ticketing and billing transactions between ConnectWise and itopia.
  • SolarWinds MSP, a vendor of IT service management solutions targeting managed service providers, unveiled SolarWinds Mail Assure, a cloud-based email malware protection and spam filtering offering. Mail Assure integrates with on-premises or cloud-based email services, such as Microsoft Office 365 and Exchange.
  • Data protection vendor Commvault released ScaleProtect with Cisco UCS, an offering that integrates the vendor’s HyperScale Software with Cisco’s Unified Computing System. Available now through Cisco’s SolutionsPlus program, the offering is slated to become available via Cisco’s Global Price List. “This [release] really is a game-changer for our partner ecosystem and will create tremendous sales opportunities for them, while providing customers the flexibility, agility and reliability of the cloud to protect data stored on premises,” said Ralph Nimergood, vice president of worldwide channels and alliances at Commvault, in an email.
  • Gainsight has extended its customer success management platform with an offering for professional services organizations. Gainsight for Services Success aims to help those organizations adapt to a subscription business model. The company said the offering helps professional services teams bring new clients on board at scale and measure service engagement results based on business outcomes. Gainsight for Services Success can work in conjunction with professional services automation tools, according to the company.
  • Pax8, a value-added cloud distributor, promoted Ryan Walsh to the position of chief channel officer. The company said Walsh will lead the strategic direction of Pax8’s IT channel strategy, partner solutions and vendor relations.
  • World Wide Technology, an IT solution provider based in St. Louis, said it has expanded its partnership with Mercy, a health system also based in that city. Under the arrangement, Mercy will manage two on-site Family Health Centers for employees and their families. WWT opened the centers in St. Louis and Edwardsville, Ill., in 2015. The on-site health services include primary care providers, prescription drug dispensing and annual flu vaccinations.
  • Extreme Networks unveiled an updated partner program in the wake of its acquisitions of Zebra Technologies’ WLAN and Avaya’s networking businesses. Extreme also recently revealed its intent to purchase Brocade Communications Systems’ data center networking business. New features of the program include four specialization programs, invitation-only Black Diamond membership status and sales-enablement packages.  

Market Share is a news roundup published every Friday.

Scality Connect ports S3 apps to Azure Blob storage

Object storage vendor Scality is moving to connect Amazon S3 apps to Microsoft Azure Blob storage in multicloud setups.

Scality Connect software, which launched last week, can help customers overcome the hurdle of porting an application based on the Simple Storage Service (S3) API to Azure Blob storage.

Scality plans to announce advanced Amazon S3 API support in December, along with versioning and bucket website support, said Wally MacDermid, vice president of business development for cloud at Scality, based in San Francisco.

John Webster, a senior partner at Evaluator Group in Boulder, Colo., said the multicloud play will be of particular interest to the DevOps groups within organizations. Many developers spend a great deal of time doing API modifications to applications.

“Anytime you can relieve the user of that burden is good. [Lack of interoperability] is a big issue. This is the last thing customers want,” Webster said of the need to modify APIs. “They just hate it. They have to modify APIs to work with other APIs.”

MacDermid said there is no hardware requirement for Scality Connect. It is delivered as a stateless container inside an Azure subscription. Connect stores data in Microsoft Azure Blob storage’s native format, and the container runs in a virtual machine within the customer’s subscription.

“We don’t hold any data. We just pass it to the Azure cloud,” MacDermid said. “An application that works on S3 can run in Azure without requiring any modification in the code.

“Once the data is up in Azure, you can use the Azure management services on top of it.”

Scality Connect makes it easier for developers to deploy applications within Microsoft Azure and use its advanced services. The software is available through the Azure Marketplace.

The Microsoft Azure and Google clouds do not support the Amazon S3 API, which has become the de facto cloud storage standard in the industry. That means Azure Blob storage cannot talk to applications written against the Amazon S3 API, which limits a customer’s ability to use multiple clouds.

“One side talks S3, and the other side talks the Azure API, and neither talks to each other,” MacDermid said. “This is a problem not only for customers, but for Azure, as well. [Microsoft] would admit that. The Scality Connect runs in the Azure Cloud. It gets your data up to the Azure Cloud and allows you to use the Azure services. We are the translation layer.”
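At minimum, the translation layer MacDermid describes must map S3's bucket/key addressing onto Azure Blob's account/container/blob addressing. The sketch below shows only that naming translation, with a hypothetical storage account name; a real gateway such as Scality Connect also translates request headers, authentication signatures and error responses.

```python
def s3_path_to_azure_url(s3_path, account):
    """Translate a path-style S3 request path (/bucket/key) to an Azure Blob URL.

    S3 buckets map to Azure Blob containers, and S3 object keys map to blob names.
    """
    bucket, _, key = s3_path.lstrip("/").partition("/")
    if not bucket or not key:
        raise ValueError("expected /<bucket>/<key>")
    return f"https://{account}.blob.core.windows.net/{bucket}/{key}"

print(s3_path_to_azure_url("/backups/2017/db.tar.gz", account="contoso"))
# https://contoso.blob.core.windows.net/backups/2017/db.tar.gz
```

Because the mapping is mechanical, an unmodified S3 application can keep issuing S3-style requests while the gateway rewrites each one into the Azure Blob REST form on the way through.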

Scality Connect is not the vendor’s first multicloud initiative. In July, Scality unveiled Zenko, an open source software controller for multicloud management that stores data and applications under a single user interface no matter where they reside, including on Scality Ring. It helps customers match specific workloads to the best cloud service. Zenko is based on the Scality S3 Server.