Tag Archives: Customers

Introducing Azure DevOps

Today we are announcing Azure DevOps. Working with our customers and developers around the world, it’s clear DevOps has become increasingly critical to a team’s success. Azure DevOps captures over 15 years of investment and learnings in providing tools to support software development teams. In the last month, over 80,000 internal Microsoft users and thousands of our customers, in teams both small and large, used these services to ship products to you.

The services we are announcing today span the breadth of the development lifecycle to help developers ship software faster and with higher quality. They represent the most complete offering in the public cloud. Azure DevOps includes:

Azure Pipelines

CI/CD that works with any language, platform, and cloud. Connect to GitHub or any Git repository and deploy continuously. Learn More >

Azure Boards

Powerful work tracking with Kanban boards, backlogs, team dashboards, and custom reporting. Learn more >

Azure Artifacts

Maven, npm, and NuGet package feeds from public and private sources. Learn more >

Azure Repos

Unlimited cloud-hosted private Git repos for your project. Collaborative pull requests, advanced file management, and more. Learn more >

Azure Test Plans

All in one planned and exploratory testing solution. Learn more >

Each Azure DevOps service is open and extensible. They work great for any type of application regardless of the framework, platform, or cloud. You can use them together for a full DevOps solution or with other services. If you want to use Azure Pipelines to build and test a Node service from a repo in GitHub and deploy it to a container in AWS, go for it. Azure DevOps supports both public and private cloud configurations. Run them in our cloud or in your own data center. No need to purchase different licenses. Learn more about Azure DevOps pricing.

Here’s an example of Azure Pipelines used independently to build a GitHub repo:

[Screenshot: an Azure Pipelines build of a GitHub repository]
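
For teams that prefer to drive Pipelines programmatically, builds can also be queued through the Azure DevOps REST API. The snippet below is a minimal sketch of that call using Python and the requests library; the organization name, project name, pipeline definition ID, and personal access token are placeholders for illustration, not values from this post.

# Minimal sketch: queue a build for an existing Azure Pipelines definition via the
# Azure DevOps REST API. The organization, project, definition ID, and PAT below
# are placeholders; the PAT needs Build (read & execute) scope.
import requests

ORGANIZATION = "abc"                 # hypothetical organization
PROJECT = "my-project"               # hypothetical project
DEFINITION_ID = 1                    # ID of an existing pipeline definition
PAT = "<personal-access-token>"

url = (f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
       "/_apis/build/builds?api-version=5.1")

# Azure DevOps accepts basic auth with an empty user name and the PAT as the password.
response = requests.post(
    url,
    auth=("", PAT),
    json={"definition": {"id": DEFINITION_ID}},
)
response.raise_for_status()
print("Queued build:", response.json().get("buildNumber"))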

Alternatively, here’s an example of a developer using all Azure DevOps services together from the vantage point of Azure Boards.

[Screenshot: Azure Boards]

Open Source projects receive free CI/CD with Azure Pipelines

As an extension of our commitment to provide open and flexible tools for all developers, Azure Pipelines offers free CI/CD with unlimited minutes and 10 parallel jobs for every open source project. With cloud-hosted Linux, macOS, and Windows pools, Azure Pipelines is great for all types of projects.

Many of the top open source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript – and the list is growing every day.

We want everyone to have an extremely high quality of service. Accordingly, we run open source projects on the same infrastructure that our paying customers use.

Azure Pipelines is also now available in the GitHub Marketplace, making it easy to get set up for your GitHub repos, open source or otherwise.

Here’s a walkthrough of Azure Pipelines:

Learn more >

The evolution of Visual Studio Team Services (VSTS) 

Azure DevOps represents the evolution of Visual Studio Team Services (VSTS). VSTS users will be upgraded into Azure DevOps projects automatically. For existing users, there is no loss of functionality, simply more choice and control. The end-to-end traceability and integration that has been the hallmark of VSTS is all there. Azure DevOps services work great together. Today is the start of a transformation, and over the next few months existing users will begin to see changes show up. What does this mean?

  • URLs will change from abc.visualstudio.com to dev.azure.com/abc. We will support redirects from visualstudio.com URLs so there will not be broken links.
  • As part of this change, the services have an updated user experience. We continue to iterate on the experience based on feedback from the preview. Today we’re enabling it by default for new customers. In the coming months we will enable it by default for existing users.
  • Users of the on-premises Team Foundation Server (TFS) will continue to receive updates based on features live in Azure DevOps. Starting with the next version of TFS, the product will be called Azure DevOps Server and will continue to be enhanced through our normal cadence of updates.

Learn more

To learn more about Azure DevOps, please join us:

  • Keynote: Watch our live Azure DevOps keynote on September 11, 2018 from 8:00 – 9:30 AM Pacific Time.

  • Live training: Join our live Mixer workshop with interactive Q&A on September 17, 2018 from 8:30 AM – 2:30 PM Pacific Time.

You can save the date and watch both live streams on our events page. There you’ll also find additional on-demand videos and other resources to help get you started.

We couldn’t be more excited to offer Azure DevOps to you and your teams. We can’t wait to see what amazing things you create with it.

CloudHealth’s Kinsella weighs in on VMware, cloud management

VMware surprised many customers and industry watchers at its annual user conference, VMworld 2018, held this week, with its acquisition of CloudHealth Technologies, a multi-cloud management tool vendor. This went down only days before CloudHealth cut the ribbon on its new Boston headquarters. Joe Kinsella, CTO and founder at CloudHealth, spoke with us about events leading up to the acquisition, as well as his thoughts on the evolution of the cloud market.

Why sell now? And why VMware?

[Photo: Joe Kinsella, CTO and founder, CloudHealth Technologies]

Joe Kinsella: A year ago, we raised a [Series] D round of funding of $46 million. The reason we did that is because we had no intention of doing anything other than build a large public company — until recently. A few months ago, VMware approached us with a partnership conversation. We talked about what we could do together. It became clear that the two of us together would accelerate the vision that I set out to do six years ago. We could do what we set out to do faster, on the platform of VMware.

How will VMware and CloudHealth rationalize the products that overlap within the two companies?

Kinsella: The CloudHealth brand will be a unifying brand across their own portfolio of SaaS and cloud products. That said, in the process of doing that, there will be overlap, but also some opportunities, and we will have to rationalize that over time. There is no need to do it in the short term. [VMware] vRealize and CloudHealth are successful products. We will integrate with VMware, but we will continue to offer a choice.

What was happening in the market to drive your decision?

Kinsella: Cloud management has evolved rapidly. What drives it [is something] I call the ‘three phases of cloud adoption.’ In phase one, enterprises said they would not go to the public cloud, despite the fact that their lines of business used the public cloud. Phase two was this irrational exuberance that everything went to the public cloud. [Enterprises in phase three] have settled on a nuanced approach to leverage a broad portfolio of cloud options, which means many public clouds, many private clouds and a diverse set of SaaS products. Managing a single cloud is complex; managing [such] a diverse portfolio is incredibly complex.

What’s your view today of cloud market adoption and how the landscape is evolving?

Kinsella: Today, the majority of workloads still run on premises. But public cloud growth has been dramatic, as we all know. Amazon remains the market leader by a good amount. [Microsoft’s] Azure business has grown quickly, but a lot of that growth includes the Office 365 product as well. Google has not been a big player until recently. It’s only been in the past 12 months that we felt the Google strategy that Diane Greene started to execute in the market. Alibaba has made some big moves and is a cloud to watch. Though Amazon is still far ahead, it’s finally getting competitive.

But customers don’t really just focus on one source anymore, correct?

Kinsella: I’ve talked about the concept of the heterogeneous cloud, which is building applications and business services that take advantage of services from multiple service providers. We think of them as competitors today, but instead of buying services from Amazon, Google or Azure, you might build a business service that takes advantage of services from all three. I think that’s the future. I believe these multiple cloud providers will continue to exist and be differentiated based on the services they provide.

Announcing general availability of Azure IoT Hub’s integration with Azure Event Grid

We’re proud to see more and more customers using Azure IoT Hub to control and manage billions of devices, send data to the cloud and gain business insights. We are excited to announce that IoT Hub integration with Azure Event Grid is now generally available, making it even easier to transform these insights into actions by simplifying the architecture of IoT solutions. Some key benefits include:

  • Easily integrate with modern serverless architectures, such as Azure Functions and Azure Logic Apps, to automate workflows and downstream processes.
  • Enable alerting with quick reaction to creation, deletion, connection, and disconnection of devices.
  • Eliminate the complexity and expense of polling services and integrate events with third-party applications using webhooks, such as ticketing, billing system, and database updates.

Together, these two services help customers easily integrate event notifications from IoT solutions with other powerful Azure services or third-party applications. These services add important device lifecycle support with events such as device created, device deleted, device connected, and device disconnected, in a highly reliable, scalable, and secure manner.

Here is how it works:
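
The diagram from the original post is not reproduced here, but the flow can be sketched in code: IoT Hub publishes device lifecycle events to Event Grid, and Event Grid pushes them to a subscriber such as an Azure Function, Logic App, or webhook. The minimal webhook below (a Flask app, used purely for illustration) completes Event Grid's subscription-validation handshake and then reacts to device connected/disconnected events; the endpoint path and handling logic are assumptions, not part of the announcement.

# Minimal sketch of an Event Grid webhook subscriber for IoT Hub device events.
# Flask is used only for illustration; an Azure Function or Logic App would
# serve the same role in a real deployment.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/iothub-events", methods=["POST"])
def handle_events():
    events = request.get_json()
    for event in events:
        event_type = event.get("eventType", "")

        # Event Grid validates a new webhook subscription with a handshake event;
        # echo the validation code back to complete the subscription.
        if event_type == "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            return jsonify({"validationResponse": code})

        # Device lifecycle events published by IoT Hub.
        if event_type == "Microsoft.Devices.DeviceConnected":
            print("Device connected:", event["data"]["deviceId"])
        elif event_type == "Microsoft.Devices.DeviceDisconnected":
            print("Device disconnected:", event["data"]["deviceId"])
        elif event_type in ("Microsoft.Devices.DeviceCreated",
                            "Microsoft.Devices.DeviceDeleted"):
            print("Device lifecycle change:", event_type)

    return "", 200

if __name__ == "__main__":
    app.run(port=8080)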

As of today, this capability is available in the following regions:

  • Asia Southeast
  • Asia East
  • Australia East
  • Australia Southeast
  • Central US
  • East US 2
  • West Central US
  • West US
  • West US 2
  • South Central US
  • Europe West
  • Europe North
  • Japan East
  • Japan West
  • Korea Central
  • Korea South
  • Canada Central
  • Central India
  • South India
  • Brazil South
  • UK West
  • UK South
  • East US, coming soon
  • Canada East, coming soon

Azure Event Grid became generally available earlier this year and currently has built-in integration with the following services:

[Diagram: Azure Event Grid built-in service integrations]

As we work to deliver more events from Azure IoT Hub, we are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.

We would love to hear more about your experiences with the preview and get your feedback! Are there other IoT Hub events you would like to see made available? Please continue to submit your suggestions through the Azure IoT User Voice forum.

Talari SD-WAN targets mobile with Meta Networks integration

Talari Networks’ customers can now combine their software-defined WAN service with a network-as-a-service platform from Meta Networks.

The platform offered by Meta Networks, an Israel-based NaaS startup, targets remote and mobile users who need to access data center and cloud applications. While SD-WAN technology offers remote connectivity to an extent, it is limited in its flexibility to connect individual remote and mobile BYOD users, as most can’t deploy a physical or virtual SD-WAN appliance. With Talari’s support for Meta Networks’ NaaS software, Talari customers located outside the software-defined WAN perimeter can connect using one of Meta’s many points of presence (POPs) worldwide.

With the platform, user devices connect to the closest Meta POP to access corporate resources. Instead of applying policies based on site location, Meta Networks takes a user-centric approach by specifying policies and application authentication based on individual user permissions. Network administrators, for example, can create policies that deny mobile users access to certain websites or cloud applications.
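
Meta Networks has not published its policy engine, so the snippet below is only a generic illustration of the user-centric idea: the allow/deny decision keys off an individual user's permissions rather than a site or subnet. All names and rule shapes here are hypothetical.

# Hypothetical illustration of user-centric access policy (not Meta Networks' engine):
# the decision is made per user and destination, not per site or subnet.
from dataclasses import dataclass, field

@dataclass
class UserPolicy:
    user: str
    allowed_apps: set = field(default_factory=set)
    blocked_sites: set = field(default_factory=set)

POLICIES = {
    "jane@example.com": UserPolicy(
        user="jane@example.com",
        allowed_apps={"crm.internal", "wiki.internal"},
        blocked_sites={"filesharing.example.net"},
    ),
}

def is_allowed(user: str, destination: str) -> bool:
    policy = POLICIES.get(user)
    if policy is None:
        return False                      # default deny for unknown users
    if destination in policy.blocked_sites:
        return False
    return destination in policy.allowed_apps

print(is_allowed("jane@example.com", "crm.internal"))            # True
print(is_allowed("jane@example.com", "filesharing.example.net")) # False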

The integrated offering is now available for Talari SD-WAN customers.

Versa Networks adds managed SD-WAN partner

Versa Networks added another service provider to its managed SD-WAN partner list. California Telecom, headquartered in Chino, Calif., joins existing Versa Networks partners CenturyLink, China Telecom Global, Comcast Business and Verizon in adding managed SD-WAN services to its portfolio.

California Telecom customers can choose from three available purchasing options: SD-WAN standard, SD-WAN advanced and SD-WAN secured. Load balancing, automated failover, error correction and circuit monitoring, among other features, are included in all three options. Customers can add features such as firewalls, antivirus, content filtering and advanced routing.

“We spent over a year looking for an SD-WAN platform we could integrate into our existing MPLS infrastructure that could offer all the features that were being promoted in the industry,” said Jim Gurol, California Telecom’s CEO, in a statement. Versa’s Cloud IP Platform paired well with California Telecom’s infrastructure, he added, allowing the service provider to go to market immediately.

Customers can deploy California Telecom’s managed SD-WAN service to create various WAN designs, including hybrid MPLS, cloud-based SD-WAN and security-focused models, Gurol said.

SD-WAN adoption impeded by available options

Enterprises are investigating SD-WAN, but the technology is still being adopted relatively slowly, according to a report conducted by Sapio Research at the request of Teneo, a consulting firm and technology integrator.

While almost half of the 200 senior IT and networking managers surveyed said they were investigating SD-WAN in some form, only 20% said they’ve deployed the technology. A third of the respondents hadn’t yet evaluated SD-WAN technology. Part of the reason for SD-WAN’s slow adoption is the large number of available SD-WAN options and variants, according to Marc Sollars, CTO of Teneo, based in Dulles, Va.

“Many firms are clearly putting a toe in the water on SD-WAN or doing a proof of concept, but it’s still very hard to say when this test phase will start to translate into enterprise-level implementations,” Sollars said in a statement. “In many ways, the broad range of choice that SD-WAN brings is what’s causing companies to hesitate over their decisions.”

Respondents indicated the primary driver to consider SD-WAN deployment is to help address the growing complexity of network infrastructure and performance tasks. Cutting network costs and improving infrastructure management followed.

Laura Norén advocates data science ethics for employee info

The trend in tech has been to gather more and more data on everyone — customers and employees alike — even if there is no direct reason to collect so much data. This has led to a pushback by users and experts about data privacy and more conversations about standards of data science ethics.

At Black Hat USA 2018, Laura Norén, director of research at Obsidian Security, spoke about data science ethics, how companies can avoid “being creepy,” and why privacy policies often leave out protections for employees.

Editor’s note: This interview was edited for length and clarity.

How did you become interested in data science ethics? 
 
Laura Norén: Data science isn’t really a discipline, it’s a set of methods being used across disciplines. One of the things that I realized fairly early on was people were doing the same thing with data science that we’ve done with so many technologies. We get so excited about the promise. People get so excited about being the first to do some new thing. But they’re really using technologies before they fully understand what the consequences and the social impact would be. So that’s when I got started on data science ethics and I talked fairly consistently to get a course that was just about ethics for data science.
 
But I ended up spending several years just working on, ‘What is it that’s unique about data science ethics?’ We’ve had ethics forever. Most engineers take an ethics class. Do we really need to reinvent the wheel here? What’s actually new about this?
 
I realized it is actually very difficult to ask those kinds of questions sitting solely from within academia because we don’t have business pressure and we don’t have the data to really understand what’s happening. I knew that I wanted to leave for a while so that I could be a better chief data science ethicist, but that it would be very difficult to find a company that would want to have such a person around. Frankly, no tech company wants to know what they can’t do, they want to know what they can do. They want to build, they want to innovate, they want to do things differently.
 
Obsidian is a new company, but it’s founded by three guys who have been around for a while. They have seen some things that I would say they would find creepy and they didn’t ever want to be that kind of company. They were happy to have me around. [They said], ‘If you see that we’re being creepy, I want you to push back and to stop us. But also, how can we avoid that? Not just that we should stop, but what we should do differently so that we can continue to move forward and continue to build products. Because, frankly, if we don’t put X, Y, Z product out in the world someone else will. And unless we have a product that’s actually better than that, you’re still going to have employee data, for instance, being treated in bizarre and troubling ways.’

Why was it important to study data science ethics from within a company?

Norén: I got lucky. I picked them because they care about ethics, and because I knew that I needed to see a little bit more about how data are actually being used, deployed, or deleted or not, combined in a real setting. These are all dangerous issues, but unless you actually see how they’re being done, it’s way too easy to be hypercritical all the time. And that’s kind of where that field is going.
 
It’s also very interesting that employee data is not yet in the spotlight. Right now, the spotlight in tech ethics is on how tech companies are treating their workers. Are they inclusive or not? Do they care if their workers don’t want to develop weapons? Do they still have to do that anyways? And then it’s also on user data. But it’s not on employee data.

I feel like — I don’t know exactly how fast these cycles go — in three to five years, the whole conversation will be about employee data. We will have somehow put some stop-gaps in place to deal with user data, but we will not have paid much attention to employee data. The hope is that in three to five years, when regulation starts to come down the chain, we’ve actually already built systems that are at least ethical. It’s hard to comply with a regulation that doesn’t exist, but at least you can imagine where those regulations are going to go and try to be in compliance with at least the principle of the effort.
 
What makes employee data different from a data science ethics perspective? 
 
Norén: One of the major differences between user data and employee data, at least from a legal perspective, is that when someone starts to work for a company, that company usually has them consent to a bunch of procedures, one of those procedures being, ‘And you consent to us surveilling what you are doing under the auspices of this company, using our physical equipment when you’re out in the world representing us. Therefore we need to be able to monitor what you’re up to, see that you’re in line with what we think you’re supposed to be doing for us.’ This means that employees actually have far fewer privacy assumptions and rights than users do in a practical sense. They have those rights, but then they consent to give them up. And that’s what most employees do. 
 
That’s why there’s not a lot of attention here: employees have signed an employment agreement that establishes those terms. Legally it’s not a gray area. Employers can potentially do what they wish.
 
Is there a way to push back on those types of policies, or is it more just a matter of trying to get companies to change those policies? 
 
Norén: California has the California Consumer Privacy Act; it’s very similar to the GDPR. They’ve changed a few things, and — almost as a throwaway — they stuck employees in there as potential users. It’s moving to be tried in the court of law — someone’s going to have to test exactly how this is written — but it doesn’t go into effect for a while. It is possible that regulators may explicitly — or in a bumbling kind of almost accidental fashion — write employees into some of the policies that are like GDPR copycats.

It’s probably my imagination of how this is going to work [but it’s] the same thing that happened with Facebook. What Facebook was doing was considered by everyone to be kind of fine for a long time. Because just like employees/employers do, Facebook had a privacy policy and they had terms of service and everyone checked the box and legally — supposedly — that covered them for all the things that they were doing. 
 
But not really, because in the court of public opinion, eventually people started to say, ‘Hey, this isn’t right, that’s not right. I don’t think that I really consented to have my elections meddled with. That’s not in my imagination.’ If you look at the letter of the law, I’m sure Facebook is probably in compliance, but ethically their business practice extended beyond what people turned out to be comfortable with. I have a feeling that that same kind of thing is going to happen with employees.

Probably we’ll see this kind of objection happening among fairly sophisticated workers first, just like the Google Maven project was objected to by Google employees first. They’re very sophisticated, intelligent, well-educated people who are used to being listened to.

The law will have to react to those kinds of things, which is typical, right? Laws always react. 
 
What are good data science ethics policies that enterprises should adopt when handling both user data and employee data? 
 
Norén: Well, one of the more creative things that we’re trying to do is, instead of asking people at one point in time to confront a very dense legal document that says, ‘OK, now I’ve signed — I’m not even sure what — but I’m going to sign here and then I’ll just move on with my life,’ is to kind of do transparency all throughout the process.
 
Let’s say you’re a typical employee and you emailed your wife, girlfriend, boyfriend, kid, whoever, some personal connection from your work account. Now you’ve consented to let your employer look at that email traffic. They may or may not be reading the contents of the email, but they can see subject lines and who you’re contacting and that may be personal for you. Instead of just letting that happen, you could say, ‘Hey, it looks like you’re emailing someone who we think is a personal connection. Just wanted to remind you that we are able to see …’ and then whatever your agreement is. 
 
Remind them of what you’re able to see and then you say something like, ‘You know, if you were to contact this person after hours or on another device or outside of this account then we wouldn’t be able to see that.’ To encourage them to take their own privacy a little bit more seriously on a daily basis right at the moment where it matters rather than assuming they’re going to remember something that they signed three years ago. Even three minutes ago. Make it really accessible and then do that transparent kind of thing throughout. 
 
Maybe they’re still OK with it because it’s just email. But maybe then you also use some of the information that you have about those emails. Like, ‘OK, I can see that Jane is totally comfortable emailing her mom all the time.’ But then if Jane leaves the company, maybe that’s some of the first steps you investigate and then delete. So not only do you make transparency kind of an ongoing process where you’re obtaining consent all the way along for doing what you’re doing, or at least providing your employees some strategy for not being surveilled, but then once they leave, you probably — as an employer — want to maintain some of the data that they have.

Certainly from the cybersecurity perspective, if you’re trying to develop predictive algorithms about, ‘What does a typical employee working in accounting do?’ you don’t just want to delete all their data the second they leave because it’s still valuable to you in terms of creating a baseline model of a typical employee, or in this case maybe if you require creating a baseline model of that employee, it’s still really valuable. But you probably don’t need to know all the times that they were emailing their personal connections. Maybe that’s something that you decide to decay by design. You decay out some of the most privacy-sensitive stuff so that you can keep what is valuable to you without exposing this person’s private communications any more than they would need to be. 
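
Norén's "decay by design" idea can be illustrated with a small, hypothetical sketch (this is not Obsidian's implementation): once an employee departs, privacy-sensitive fields are stripped from stored activity records while the aggregate signals useful for baseline models are retained. The field names below are invented for illustration.

# Hypothetical sketch of "decay by design": after an employee leaves, keep the
# aggregate signals needed for baseline models but delete privacy-sensitive fields.
from datetime import datetime, timezone

SENSITIVE_FIELDS = {"recipient", "subject", "personal_contact"}

def decay_record(record: dict, departed: bool) -> dict:
    """Return a copy of an activity record with sensitive fields removed
    once the employee has left the company."""
    if not departed:
        return dict(record)
    kept = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    kept["decayed_at"] = datetime.now(timezone.utc).isoformat()
    return kept

record = {
    "employee": "jane",
    "action": "email_sent",
    "recipient": "mom@example.com",   # personal connection: sensitive
    "subject": "dinner on Sunday",    # sensitive
    "hour_of_day": 14,                # useful for a baseline activity model
}
print(decay_record(record, departed=True))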

What are the ethical issues with storing so much data?

Norén: One of the things about these contracts is the indefinite status of holding onto data. We’re very skeptical about hoarding data. We’re very picky about what we keep. And we try to find ways to take the stuff that might be not all that valuable to us but very sensitive to the individual, because it’s personal or who knows what might make it sensitive, and delete that kind of thing first.

The right to be forgotten is that someone’s going to come to you and say, ‘Hey, would you please forget this?’ Just as a good social scientist, we know that only a very select group of people ever feel so empowered and so informed of the right to go do such a thing. So it’s already kind of an unfair policy because most people won’t know how to do that, won’t know that they can do that. We feel like some of these broader policies are actually more fair because they will be applied to everyone, not just the privileged people who are entitled to their rights, and they’re going to demand these things and figure out how to do it. So those are the kinds of policies that we’re looking at. 
 
We have decided never to store the contents of people’s emails in the first place. That also falls in line with our “do not hoard” policy. We don’t need to know contents of emails. It doesn’t give us anything additional for what we need to do, so we’re just not going to store it. And we’re not going to get transfixed by what lots of data scientists get transfixed by which is the idea that in the future if we have all the data now, we’ll do this magical thing in the future that we haven’t figured out yet. No, that fairy tale’s dead here.

ClearSky Data launches service for VMware Cloud on AWS

ClearSky Data launched a new service that will enable VMware Cloud on AWS customers to protect their data in inexpensive object storage.

The new ClearSky Data service gives VMware Cloud on AWS customers the chance to keep their existing backup applications and use Amazon Simple Storage Service (S3) as the back-end storage repository. The startup’s software can transform the block- and file-based data from the certified backup applications to an object format to enable data protection and archiving in Amazon S3, according to Laz Vekiarides, CTO and co-founder of ClearSky Data.

VMware Cloud (VMC) on AWS users can choose certified backup options from vendors such as Commvault, Dell EMC, Druva and Veeam. But those vendors typically offer repositories in Amazon’s Elastic Block Store (EBS). Commvault is the only certified VMC option to back up natively to Amazon’s more economical object-based S3, according to Bryan Young, a group product manager of vSphere external storage at VMware.

“Customers are generally wed to their particular backup vendor. And they do not like to change because they want a common backup application across the enterprise,” Young said. “But they also want the lowest cost storage for their long-term backups.”

Extends on-premises virtual server environments

The VMware Cloud on AWS on-demand service lets customers extend their on-premises server virtualization environments to Amazon’s cloud. The service is based on VMware Cloud Foundation, which integrates vSphere server virtualization, vSAN software-defined storage, NSX software-defined networking and security, and vCenter management software.

VMware Cloud customers create a virtual private cloud (VPC) to run their application workloads on vSphere VMs in Amazon Elastic Compute Cloud (EC2) bare-metal infrastructure. VMware’s vSAN creates the local storage on SSDs in the host server cluster to keep the data.

“Part of the issue with the cloud inside of Amazon EC2 is that at any given point in time, if you decide to shut any of these [servers] down, all the data that’s inside of them gets lost,” Vekiarides said. “So, obviously, you need a place to keep all of your data when you’re not using your VMware cluster.”

[Diagram: ClearSky Data service for VMware Cloud on AWS architecture]

A third-party backup proxy runs in a VMware Cloud on AWS software-defined data center and copies data using the vSphere Storage APIs for data protection (VADP). The backup proxy copies data to an EC2-based backup repository that typically uses Amazon’s EBS.

Cache 64 TB per backup volume

ClearSky’s Cloud Edge appliance can connect to the backup application and cache up to 64 TB of data per VM guest-attached volume to enable fast access to frequently accessed data. Vekiarides said ClearSky can have many such volumes online, up to 4 PB total. On the back end, the Cloud Edge appliance connects to the ClearSky Data Network’s Amazon S3 storage via AWS Direct Connect to protect and archive all the data.

ClearSky supports a recovery point objective of less than 10 minutes for point-in-time snapshots and a recovery time objective of less than a minute once the customer’s applications are running.

In addition to the cloud-based deployment option, customers could run VMware virtual servers, third-party backup applications and ClearSky Edge Cache appliances on premises. The Edge Cache appliance’s software would deduplicate and compress the data and convert it to object format for archiving in ClearSky Network’s Amazon S3. The data from the on-premises Edge Cache and off-site Cloud Edge appliances would be stored in a common Amazon S3 repository.

ClearSky Data used the VMware Cloud on AWS service to build out a server cluster to test its new service. Vekiarides said ClearSky engineers made minor changes to the deployment software to ensure it works and meets performance targets in the AWS cloud environment.

Pricing for the ClearSky Data service ranges from 6 cents to 20 cents per GB, per month depending on the use case, quality of service and data capacity.
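
For a rough sense of what that range means in practice, the back-of-the-envelope calculation below plugs an illustrative 100 TB capacity into the quoted per-GB rates; the capacity figure is hypothetical, and actual pricing depends on use case, quality of service and data capacity as noted above.

# Illustrative arithmetic only: monthly cost at the quoted 6-20 cents per GB range.
def monthly_cost(capacity_tb: float, cents_per_gb: float) -> float:
    gb = capacity_tb * 1024          # TB -> GB (binary convention; an assumption)
    return gb * cents_per_gb / 100   # cents -> dollars

for rate in (6, 20):                 # low and high ends of the quoted range
    print(f"100 TB at {rate} cents/GB/month: ${monthly_cost(100, rate):,.0f}/month")
# 100 TB at 6 cents/GB/month: $6,144/month
# 100 TB at 20 cents/GB/month: $20,480/month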

ClearSky Data does not charge fees for replication and customers do not incur AWS egress charges to restore their data, according to Vekiarides. He said egress charges generally do not apply because nearly all of the data is cached either inside AWS’ cloud or somewhere in ClearSky’s network.

Scale-out Qumulo NAS qualifies latest Dell EMC PowerEdge servers

Qumulo today added a hardware option for its customers by qualifying its scale-out NAS software to run on Dell Technologies’ PowerEdge servers. That leaves open the possibility that Qumulo will gain customers on Dell EMC servers at the expense of Dell EMC Isilon’s clustered NAS platform.

Qumulo is nearly two years into an OEM deal with Dell EMC archrival Hewlett Packard Enterprise. HPE rebrands and sells Qumulo’s scale-out NAS software on its servers. There is no joint go-to-market agreement between Qumulo and Dell EMC, which is a NAS market leader. The qualification means customers can purchase PowerEdge hardware from their preferred Dell EMC resellers and install Qumulo NAS software on the box.

Dell qualified Qumulo NAS software to run on dual-socket 2U Dell EMC PowerEdge R740xd servers.

“There are a lot of customers who build private clouds on Dell hardware. We’re now in a position where they can choose our software to build their computing,” Qumulo chief marketing officer Peter Zaballos said.

Dell EMC’s 14th-generation PowerEdge servers are equipped with about 20% more NVMe flash capacity than R730 models. One of the use cases cited by Dell EMC is the ability to use a single PowerEdge 14G node to power its IsilonSD Edge virtual NAS software, which competes with Qumulo storage.

Will Qumulo on PowerEdge compete with Dell EMC Isilon NAS?

The Qumulo File Fabric (QF2) file system scales to support billions of files and hundreds of petabytes. QF2 is available on Qumulo C-Series hybrid arrays, all-flash P-Series or preinstalled on HPE Apollo servers. Customers also may run it as an Elastic Compute Cloud instance to burst and replicate in AWS.

Qumulo NAS gear is sold mostly to companies in media and entertainment and other sectors with large amounts of unstructured data.

Zaballos said QF2 on PowerEdge isn’t a direct attempt to displace Isilon. The goal is to give Dell EMC shops greater flexibility, he said.

“We’re looking to build the biggest footprint in the market. Between Dell and HPE, that’s about 40% of the server market for data centers,” Zaballos said.

Qumulo competes mainly with Isilon and NetApp’s NAS products and has won customers away from Isilon. Pressure on traditional NAS vendors is also coming from several file system-based cloud startups, including Elastifile, Quobyte, Stratoscale and WekaIO.

Qumulo founders Peter Godman, Aaron Passey and Neal Fachan helped develop the Isilon OneFS clustered file system, which paved the way for Isilon’s initial public offering in 2006. EMC bought Isilon for $2.25 billion in 2010 and was itself acquired as part of the Dell-EMC merger in 2015.

Qumulo CEO Bill Richter was president of the EMC Isilon division for three years. He joined Qumulo in 2016.

Greg Schulz, an analyst with Server StorageIO, based in Stillwater, Minn., likened the Qumulo-PowerEdge configuration to Dell EMC’s “co-optetition” OEM agreement with hyper-converged vendor Nutanix.

“Qumulo NAS has been focused on high-performance, big-bandwidth file serving, which may not play well in environments that have many smaller files and mixed workloads. That’s an area Isilon has adapted to over the years. The other obstacle is getting [beyond] large elephant-hunt deals into broader markets, and getting traction with Dell servers can help them fill gaps in their portfolio,” Schulz said.

Ron Pugh, vice president for Dell EMC OEM sales in North America, said it’s not unusual for potential competitors to rely on Dell hardware products.

“If you look deeply inside the Dell Technologies portfolio, some of our customers can be considered competitors. Our OEM program is here to be a building block for our customers, not to build competing products,” Pugh said.

Dell EMC also sells Elastifile cloud-based NAS on its servers and is an Elastifile strategic investor.

Qumulo: AI tests on P-Series flash

Qumulo this week also previewed upcoming AI enhancements to its P-Series to enable faster prefetching of application data in RAM. Those enhancements are due to roll out in September. Grant Gumina, a Qumulo senior product manager, said initial AI enhancements will improve performance of the all-flash P-Series. Proofs of concept are underway with media customers, Gumina said.

“A lot of studios are using SANs to power primarily file-based workloads in each playback bay. The performance features in QF2 effectively means they can install a NAS for the first time and move to a fully Ethernet-based environment,” Gumina said.

File storage vendors NetApp and Pure Storage recently added flash systems built for AI, incorporating Nvidia hardware.

Revenue ops main theme at Ramp by InsightSquared conference

Customers, potential customers and partners of InsightSquared Inc. gathered in Boston for two days for Ramp 2018, the dashboard and reporting software vendor’s second annual conference. The Pipeline podcast was there to take in the conference festivities.

Revenue ops was among the main topics discussed at Ramp, with keynotes and conversations dedicated to the idea of bringing together marketing, sales and service departments to improve ROI and revenue.

To help companies with that objective, InsightSquared also unveiled a new set of marketing analytics tools that may help companies uncover insights within the marketing process, including marketing attribution, demand management, and planning and analysis.

“There’s a natural tension between sales and marketing,” said Matisha Ladiwala, GM of marketing analytics for InsightSquared, on the conference stage. Ladiwala ran through a demo of some of the tools’ capabilities before two InsightSquared customers spoke about using the marketing analytics tools.

One of Ladiwala’s demos showed a dashboard that united data from the sales and marketing departments and determined how quickly sales followed up on leads and how many leads were making it into the funnel. This revenue ops approach is beneficial to companies that have traditionally used a more manual, time-intensive approach to reporting, according to InsightSquared.

One InsightSquared user, Guido Bartolacci, manager of acquisition and strategy at New Breed, an inbound marketing and sales agency, told conference attendees: “Aggregating information from areas was very manual and time-consuming.”

By using InsightSquared’s new marketing analytics tools while in beta, the marketing and sales agency was able to pull together data from multiple sources quickly and with more insight, Bartolacci said.

Beyond discussing the revenue ops-focused conference, this Pipeline podcast also touches on some of the other speakers at Ramp, including Nate Silver, data scientist and founder of the FiveThirtyEight blog, and TrackMaven CEO Allen Gannett, who gave a lively, entertaining keynote on creativity.

Avaya earnings show cloud, recurring revenue growth

Avaya hit revenue targets, increased cloud sales and added customers in its second full quarter as a public company — welcome news for customers and partners anxious for proof that the company is regaining its financial footing following last year’s bankruptcy.

Avaya reported revenue of $755 million in the third quarter of 2018 — down from $757 million last quarter, but within the vendor’s previously announced targets. When excluding sales from the networking division, which Avaya sold last year, adjusted revenue was 1% higher than during the third quarter of 2017.

To keep pace with competitors like Microsoft and Cisco, Avaya is looking to reduce its dependence on large, one-time hardware purchases by selling more monthly cloud subscriptions. This transition can make it difficult to show positive quarter-over-quarter and year-over-year growth in the short term.

Recurring revenue accounted for 59% of Avaya’s adjusted earnings in the third quarter — up from 58% the previous quarter. Cloud revenue represented just 11% of the quarter’s total, but monthly recurring revenue from cloud sales increased by 43% in the midmarket and 107% in the enterprise market, compared with last quarter.

Avaya reported an $88 million net loss in the third quarter. Still, the company’s operations netted $83 million in cash, which is a more critical financial indicator, in this case, than net income, said Hamed Khorsand, analyst at BWS Financial Inc., based in Woodland Hills, Calif.

“This is a company that’s still in transition as far as their accounting goes, with the bankruptcy proceedings,” Khorsand said. “[The net cash flow] actually tells you that the company is adding cash to its balance sheet.”

Also during the third quarter, Avaya regained top ratings in Gartner’s yearly rankings of unified communications (UC) and contact center infrastructure vendors. Avaya’s one-year absence from the leadership quadrant in the Gartner report probably slowed growth, Khorsand said, because C-suite executives place value in those standings.

Avaya’s stock closed up 3.61%, at $20.68 per share, following the Avaya earnings report on Thursday.

Avaya earnings report highlights product growth

The Avaya earnings report showed the company added 1,700 customers worldwide during the third quarter. It also launched and refreshed several products, including an updated workforce optimization suite for contact centers and a new version of Avaya IP Office, its UC offering for small and midsize businesses.

The product releases demonstrate that Avaya continued to invest in research and development, even as it spent most of 2017 engaged in Chapter 11 bankruptcy proceedings, said Zeus Kerravala, principal analyst at ZK Research in Westminster, Mass.

“As long as we continue to see this steady stream of new products coming out, I think it should give customers confidence,” Kerravala said. “Channel partners tend to live on new products, as well.”

The bankruptcy allowed Avaya to cut its debt in half to a level it can afford based on current revenue. But years of underinvestment in product continue to haunt the vendor, as it tries to play catch-up with rivals Cisco and Microsoft, which analysts generally agreed have pulled ahead of all other vendors in the UC market.

Avaya acquired cloud contact center vendor Spoken Communications earlier this year, gaining a multi-tenant public cloud offering. Avaya plans to use the same technology to power a UC-as-a-service product in the future.

“We are investing significantly in people and technology, investing more on technology in the last two quarters than we did in all of fiscal 2017,” said Jim Chirico, CEO at Avaya.

Avaya is expecting to bring in adjusted revenue between $760 and $780 million in the fourth quarter, which would bring the fiscal year’s total to a little more than $3 billion.

LifeLock vulnerability exposed user email addresses to public

Symantec’s identity theft protection service, LifeLock, exposed millions of customers’ email addresses.

According to security journalist Brian Krebs, the LifeLock vulnerability was in the company’s website, and it enabled unauthorized third parties to collect email addresses associated with LifeLock user accounts or to unsubscribe users from communications from the company. Account numbers, called subscriber keys, appear in the URL of the unsubscribe page on the LifeLock website; each key corresponds to a customer record, and the keys appear to be sequential, according to Krebs, which lends itself to writing a simple script to collect the email address of every subscriber.

The biggest threat with this LifeLock vulnerability is that attackers could launch a targeted phishing scheme — and the company boasted more than 4.5 million users as of January 2017.

“The upshot of this weakness is that cyber criminals could harvest the data and use it in targeted phishing campaigns that spoof LifeLock’s brand,” Krebs wrote. “Of course, phishers could spam the entire world looking for LifeLock customers without the aid of this flaw, but nevertheless the design of the company’s site suggests that whoever put it together lacked a basic understanding of web site authentication and security.”
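
The underlying weakness is the use of sequential, guessable subscriber keys in an unauthenticated URL. As a contrast, and purely as a generic sketch rather than a description of LifeLock's system, an unsubscribe link can carry an HMAC-signed token so that identifiers cannot simply be incremented and harvested:

# Generic illustration (not LifeLock's actual system): sign unsubscribe tokens with
# an HMAC so that subscriber identifiers in URLs cannot be guessed or enumerated.
import hmac, hashlib, secrets

SERVER_SECRET = secrets.token_bytes(32)   # kept server-side only

def make_unsubscribe_token(subscriber_id: str) -> str:
    mac = hmac.new(SERVER_SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()
    return f"{subscriber_id}.{mac}"

def verify_unsubscribe_token(token: str):
    subscriber_id, _, mac = token.rpartition(".")
    expected = hmac.new(SERVER_SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()
    return subscriber_id if hmac.compare_digest(mac, expected) else None

token = make_unsubscribe_token("1234567")
print(token)                                         # e.g. 1234567.9f2c...
print(verify_unsubscribe_token(token))               # 1234567
print(verify_unsubscribe_token("1234568.deadbeef"))  # None (forged or incremented key)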

Krebs notified Symantec of the LifeLock vulnerability, and the security company took the affected webpage offline shortly thereafter. Krebs said he was alerted to the issue by Atlanta-based independent security researcher Nathan Reese, a former LifeLock subscriber who received an email offering him a discount if he renewed his membership. Reese then wrote a proof of concept and was able to collect 70 email addresses — enough to prove the LifeLock vulnerability worked.

Reese emphasized to Krebs how easy it would be for a malicious actor to use the two things he knows about the LifeLock customers — their email addresses and the fact that they use an identity theft protection service — to create a “sharp spear” for a spear phishing campaign, particularly because LifeLock customers are already concerned about cybersecurity.

Symantec, which acquired the identity theft protection company in 2016, issued a statement after Krebs published his report on the LifeLock vulnerability:

This issue was not a vulnerability in the LifeLock member portal. The issue has been fixed and was limited to potential exposure of email addresses on a marketing page, managed by a third party, intended to allow recipients to unsubscribe from marketing emails. Based on our investigation, aside from the 70 email address accesses reported by the researcher, we have no indication at this time of any further suspicious activity on the marketing opt-out page.

LifeLock has faced problems in the past with customer data. In 2015, the company paid out $100 million to the Federal Trade Commission to settle charges that it allegedly failed to secure customers’ personal data and ran deceptive advertising.

In other news:

  • The American Civil Liberties Union (ACLU) of Northern California said Amazon’s facial recognition program, Rekognition, falsely identified 28 members of Congress as people who were arrested for a crime in its recent test. The ACLU put together a database of 25,000 publicly available mugshots and ran the database against every current member of the House and Senate using the default Rekognition settings. The false matches represented a disproportionate amount of people of color — 40% of the false matches, while only 20% of Congress members are people of color — and spanned both Democrats and Republicans and men and women of all ages. One of the falsely identified individuals was Rep. John Lewis (D-Ga.), who is a member of the Congressional Black Caucus; Lewis previously wrote a letter to Amazon’s CEO, Jeff Bezos, expressing concern for the potential implications of the inaccuracy of Rekognition and how it could affect law enforcement and, particularly, people of color.
  • Researchers have discovered another Spectre vulnerability variant that enables attackers to access sensitive data. The new exploit, called SpectreRSB, was detailed by researchers at the University of California, Riverside, in a paper titled, “Spectre Returns! Speculation Attacks using the Return Stack Buffer.” “Rather than exploiting the branch predictor unit, SpectreRSB exploits the return stack buffer (RSB), a common predictor structure in modern CPUs used to predict return addresses,” the research team wrote. The RSB aspect of the exploit is what’s new, compared with Spectre and its other variants. It’s also why it is, so far, unfixed by any of the mitigations put in place by Intel, Google and others. The researchers tested SpectreRSB on Intel Haswell and Skylake processors and the SGX2 secure enclave in Core i7 Skylake chips.
  • Google Chrome implemented its new policy this week that any website not using HTTPS with a valid TLS certificate will be marked as “not secure.” In the latest version of the browser, Google Chrome version 68, users will see a warning message stating that the site is not secure. Google first announced the policy in February. “Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web towards a secure HTTPS web by default,” Emily Schechter, Chrome Security product manager, wrote in the announcement. “HTTPS is easier and cheaper than ever before, and it unlocks both performance improvements and powerful new features that are too sensitive for HTTP.”