
Datrium DVX switches focus to converged markets, enterprise

Datrium has a new CEO, and a new strategy for pushing hyper-convergence into the enterprise.

Tim Page replaced Brian Biles, one of Datrium’s founders, as CEO in June. Biles moved into the chief product officer role, one he said he is better suited for, to allow Page to build out an enterprise sales force.

The startup is also changing its market focus. Its executives previously avoided calling Datrium DVX primary storage systems hyper-converged, despite a disaggregated architecture that combined Datrium Compute Nodes and Data Nodes. They instead pitched the Datrium DVX architecture as “open convergence,” because customers could also use separate x86 or commodity servers. As a software-defined storage vendor, Datrium played down its infrastructure architecture.

Now Datrium positions itself as hyper-converged infrastructure (HCI) on both the primary and secondary storage sides. The use cases and reasons for implementation are the same as hyper-converged — customers can collapse storage and servers into a single system.

“You can think of us as a big-a– HCI,” Biles said. “We’re breaking all the HCI rules.”

Datrium DVX is nontraditional HCI, with stateless servers, large caches and shared storage, all managed as a single entity.


“We mean HCI in a general way,” Biles said. “We’re VM- or container-centric, we don’t have LUNs. DVX includes compute and storage, it can support third-party servers. But when you look at our architecture, it is different. To build this, we had to break all the rules.”

Datrium’s changed focus is opportunistic. The HCI market is growing far faster than the market for traditional storage arrays, and that trend is expected to continue. Meanwhile, vendors that have billed themselves as software-defined storage players without selling the underlying hardware have largely failed to gain traction.

Secondary storage is also taking on a converged focus with the rise of newcomers Rubrik and Cohesity. Datrium also wants to compete there with a cloud-native version of DVX for backup and recovery.

However, Datrium will find a highly competitive landscape in enterprise storage and HCI. It will go against giants Dell EMC, Hewlett Packard Enterprise and NetApp on both fronts, and Cisco and Nutanix in HCI. Besides high-flying Cohesity and Rubrik, its backup competition includes Veritas, Dell EMC, Veeam and Commvault.

A new Datrium DVX customer, the NFL’s San Francisco 49ers, buys into the vendor’s HCI story. Jim Bartholomew, the 49ers IT director, said the football team collapsed eight storage platforms into one when it installed eight DVX Compute Nodes and eight DVX Data Nodes. It will also replace its servers and perhaps traditional backup with DVX, 49ers VP of corporate partnerships Brent Schoeb said.

“The problem was, we had three storage vendors and always had to go to a different one for support,” Bartholomew said.

Schoeb said the team stores its coaching and scouting video on Datrium DVX, as well as all of the video created for its website and historical archives.

“We were fragmented before,” Schoeb said of the team’s IT setup. “Datrium made it easy to consolidate our legacy storage partners. We rolled it all up into one.”

Datrium DVX units in the 49ers' data center.
The NFL’s San Francisco 49ers make an end run around established storage vendors by taking a shot with startup Datrium.

Roadmap: Multi-cloud support for backup, DR

Datrium parrots the mantra of HCI pioneer Nutanix and others: its goal is to manage data from any application wherever it resides, on premises or across clouds.

Datrium is building out its scale-out backup features for secondary storage. Datrium DVX includes read-on-write snapshots, deduplication, inline erasure coding and a built-in backup catalog called Snapstore.
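The article doesn’t describe how DVX implements these features, but deduplication in particular is a well-understood technique. The sketch below is a generic illustration of content-addressed deduplication, not Datrium’s implementation; all class and variable names are hypothetical. The core idea: data is split into chunks keyed by their hash, so identical chunks across backups are stored only once.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking, for simplicity

def chunk(data: bytes, size: int = CHUNK_SIZE):
    """Split a byte stream into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class DedupStore:
    """Toy content-addressed store: identical chunks are kept once."""
    def __init__(self):
        self.chunks = {}   # chunk hash -> chunk bytes (stored once)
        self.objects = {}  # object name -> ordered list of chunk hashes

    def write(self, name: str, data: bytes):
        refs = []
        for c in chunk(data):
            h = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(h, c)  # store only if not already present
            refs.append(h)
        self.objects[name] = refs

    def read(self, name: str) -> bytes:
        return b"".join(self.chunks[h] for h in self.objects[name])

store = DedupStore()
payload = b"A" * 8192             # two identical 4 KB chunks
store.write("vm1.vmdk", payload)
store.write("vm2.vmdk", payload)  # fully deduplicated against vm1
assert store.read("vm2.vmdk") == payload
print(len(store.chunks))  # only 1 unique chunk stored for both objects
```

Fixed-size chunking keeps the sketch short; real backup systems typically use variable-size, content-defined chunking so that an insertion near the start of a file doesn’t shift every subsequent chunk boundary and defeat deduplication.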

Another Datrium founder, CTO Sazzala Reddy, said the roadmap calls for integrating cloud support for data protection and disaster recovery. Datrium added support for AWS backup with Cloud DVX last fall, and is working on support for VMware Cloud on AWS and Microsoft Azure.

“We want to go where the data is,” Reddy said. “We want to move to a place where you can move any application to any cloud you want, protect it any way you want, and manage it all in the data center.”

New CEO: Datrium’s ready to pivot

Page helped build out the sales organization as COO at VCE, the EMC-Cisco-VMware joint venture that sold Vblock converged infrastructure systems. He will rebuild the sales structure at Datrium, shifting the focus from SMB and midmarket customers to the enterprise.

Datrium executives claim they have hundreds of customers and hope to reach 1,000 by the end of 2018, although that goal is likely overambitious. The startup is far from profitable and will require more than the $110 million in funding it has raised. Industry sources say Datrium already has about $40 million in venture funding lined up for a D round, and is seeking strategic partners before disclosing the round. Datrium has around 200 employees.

“Datrium’s at an interesting point,” Page said of his new company. “They’re getting ready to pivot in a hyper-growth space now into the enterprise. What we didn’t have was an enterprise sales motion — it’s different selling into the Nimble, Tintri, Nutanix midmarket world. It’s hard to port anyone from that motion into the enterprise motion. We’re going to get into that growth phase, and make sure we do it right.”

Biles said he is following the same model as in his previous company, Data Domain. The backup deduplication pioneer took off after bringing Frank Slootman in as CEO during its early days of shipping products in 2003. Data Domain became a public company in 2007, and EMC acquired it for $2.1 billion two years later.

“I knew a lot less then than I know now, but I know there are many better CEOs than me,” Biles said. “Customer opportunities are much bigger than they used to be, and the sales cycle is much bigger than our team was equipped for. We needed to do a spinal transplant. There’s a bunch of things to deal with as you get to hundreds of employees and a lot of demanding customers. My training is on the product side.”

The intelligent enterprise blends tech savvy, business smarts

The term intelligent enterprise was coined in the early 1990s by James Brian Quinn. It described a business theory holding that technology and computing infrastructure were key to improved business performance. That was before the internet was a household word, before smartphones were in every hand, and before cloud computing became the default for so many.

In 2018, intelligent enterprise takes on a new significance. Ryan Mallory, an executive at data center provider Equinix, hosted a panel on the topic at the MIT Sloan CIO Symposium in Cambridge, Mass., on May 23. His definition didn’t stray far from the original: a company that uses technology to “become more efficient, more productive and have a larger impact on the success of the overall enterprise goals and objectives.”

As senior vice president for global solutions enablement at the Redwood City, Calif., company, Mallory helps companies use Equinix’s data center resources to their advantage, easing their transition to cloud and emerging technologies like AI. So, for him, talking future tech and how to prepare for it is business as usual. IT execs at the MIT event spoke to him about a major challenge they faced in getting started: the breakneck pace of development.

“They’re having a hard time even getting through their tech-dev, DevOps evaluation cycle before that product is either obsolete or it has changed its overall focus,” he said.

In an interview at the MIT conference, Mallory discussed the digital transformation efforts companies need to undertake to become an intelligent enterprise, how they’re faring in deploying the required technologies, including cloud, and the challenges Equinix faced in its own emerging-tech initiatives. Here are edited excerpts of that conversation.

Ryan Mallory, senior vice president for global solutions enablement, Equinix

Part of your job is helping business executives with their cloud deployment strategies. What do executives get about cloud, and where do they need the most help?

Ryan Mallory: Executives get that they need cloud, period. So the financial modeling and ingestion of services in a bare-metal environment — meaning they’re using their own dollars every single year, every two years to buy servers and then do the refreshes — they realize that the economic modeling associated with that just isn’t viable any longer. They know that they need to consume services in a virtualized fashion, and they want to get there.

The biggest challenge that they have is, over the past eight or 10 years, they’ve really reduced their IT staffs — most importantly, their network staffs. And what they realize is, what they have today can’t get them to where they need to be tomorrow. They don’t have intellectual property inside, from a staffing perspective, to cross that chasm, to understand how to actually get cloudified.

You’re moderating a panel here at the MIT CIO Symposium on becoming an intelligent enterprise. Can you define that?

Mallory: What I look at as an intelligent enterprise is a company or an enterprise that has the ability to utilize technology to become more efficient, more productive and have a larger impact on the success of the overall enterprise goals and objectives.

Does a business need to go through a digital transformation to become an intelligent enterprise?

Mallory: Absolutely. It’s always multi-tiered, and it should be viewed as multiyear, and it shouldn’t ever be viewed as a start and an end point. Because with today’s technology advancements and development life cycles, we see Moore’s Law continually compressing with the ability to have better capabilities, better services, better chips, etc. So the need to advance your digital strategy aligns with that, because it can continue to get better.

I was meeting with a couple of companies here, and they were very clearly stating that their biggest challenge is that when they start something in R&D, they have a hard time even getting through their tech-dev, DevOps evaluation cycle before that product is either obsolete or has changed its overall focus. And their R&D dollars have to be refunded and then go back into another round of R&D.

That’s the cycle we’re seeing, especially from SaaS applications and the capabilities out there: These near-term roadmap capabilities are just evolving so quickly that there’s a frustration around, ‘How do we make sure we stay in front of this?’ And, ‘How are we looking at the current state and the “to be,” but also that the innovation ideation is taking place, as well?’

What are some of those key technologies?

Mallory: It’s AI. It’s analytics, machine learning, but it’s also the core apps that sit inside Microsoft Azure. We’ve had conversations around some of their data science applications and even some of their advanced notebooking capabilities, where companies are using them for a specific purpose.

One of the companies I was talking to this morning is an oil and gas company that was using some of the data analytics and scientific mechanisms that Microsoft had, and they were all bought in [on a Microsoft product]. Six months into their R&D cycle, they end-of-lifed that product because they were going to launch something else. And that’s not a hit on Microsoft; it’s just how fast people are developing their underlying products, and that’s coming from the hyperscalers all the way down to startups.

And we’ve seen this evolve. Entefy is a very hot, up-and-coming AI and machine learning company in Silicon Valley. Just in the last 18 months, we’ve seen their product set evolve from a single unified messaging platform, with AI capabilities that let you combine text, voice and email into one information interface, into an eight-product suite. That’s how quickly they’re evolving. That original UC model is there; it’s OK. But the deep search and deep learning capabilities are what they’ve really moved into. The market is just so dynamic right now; you have to really focus on staying in front of it.

So, the pace of development is a challenge. What are some other challenges companies will face?

Mallory: They’re very adept at their own code and their own algorithms; they’re very focused on that product development. It’s being able to integrate those capabilities in usable fashion out in the market.

If we look at the marketplace from a silo perspective, at what’s happening with customers and what’s happening with vendors, you can look at the widgets and say, ‘OK, great. That’s a good idea,’ and, ‘That’s a great algorithm,’ or, ‘That’s a great implementation.’ But when you start looking at AI and machine learning and say, ‘OK, we want to start looking at some models for smart cities,’ what’s the endpoint? Consider data cameras: every time somebody goes through a toll, we can look at what the car looks like, how many passengers are in it and what speed they’re going. Then, companies can start asking, ‘Where do we want to put charging stations? What’s the uptick of electric cars? Should we sell this information to Tesla and let them put a new kiosk in a mall?’

But connecting all those ancillary dots becomes very difficult, because you’re taking technologies, and you’ve got to figure out how to integrate them, whether it’s physically or logically, and then figure out how to make sure that people can access the information. More importantly, how can they act on that info? That’s the big challenge.

Equinix has implemented a lot of emerging technologies — it spearheaded a number of AI initiatives, for example. What challenges did you experience when preparing for and executing on them?

Mallory: I think what everybody assumes, when they think about AI or IoT, is that there are quick-fix scenarios out there. ‘OK, great. I’m IoT-enabled.’ Well, if a device is IoT-enabled, that just means the endpoint has the capability to communicate more broadly. When you look at a deployment plan for trying to make things smarter or to get access to that data, that means putting sensors everywhere.

The real challenge is the scope of the deployment associated with making things more intelligent. That’s the hard part. The technology, and the ability to define it, has been proven. And I think everybody in the technology field is very comfortable with where we’re at and where we’re going. But it’s making sure that anything legacy, anything that was built from today back, has those capabilities. That’s where the challenge is. We’re talking about hundreds of thousands of sensors. It takes a lot of man-hours to go in and put a sensor on every light switch.

The physical deployment.

Mallory: Absolutely. We have the aggregation mechanisms both public and private, with private connectivity through our fabric and public connectivity through the internet or Wi-Fi, so the accessing and the concepts are there. But it’s the time associated with deployment that I think that’s catching everybody right now.

Equinix’s Ryan Mallory discusses his company’s moves in a growing data center services market in part two of this two-part Q&A.