3 secret virtues of a great IT practitioner

What makes a great IT practitioner? Danny Brian, a Gartner vice president and fellow, suggested it’s the ability to embrace one’s vices.

At Gartner Catalyst 2018, Brian named laziness, impatience and hubris as the three secret virtues of a great IT practitioner — borrowing them from acclaimed programmer Larry Wall. In Wall’s 1991 book Programming Perl, the virtues were aimed at programmers. But Brian made the case they can help all IT practitioners succeed in today’s digital business age — and contribute to the bottom line.


In Wall’s book, laziness is defined as “the quality that makes you go to great effort to reduce overall energy expenditure.” In other words, lazy IT practitioners continually seek out the easiest, most efficient ways to complete a task.

“If necessity is the mother of invention, then maybe laziness is the mother of innovation,” Brian said.

As an example, he pointed to computer programming pioneer Grace Hopper. The inventor of one of the first compiler tools — i.e., software that transforms computer code from one programming language into another — Hopper credited laziness as the impetus for her accomplishment.

Laziness, Brian said, also requires an enormous amount of planning and foresight.

“You don’t want to just be lazy now; you want to be lazy tomorrow and the day after that,” Brian said. “And if you want to enable other people to be lazy, it takes even more thought and preparation.”

He listed specific examples of what true laziness requires of an IT practitioner:

  • not repeating yourself;
  • not reinventing the wheel — utilizing the best frameworks and tools to save time and effort;
  • focusing on the most important problems;
  • knowledge and recognition of design patterns, which avoid solving the same or similar problems multiple times;
  • ensuring test-driven development in order to avoid hours spent later in panic mode trying to figure out what broke;
  • developing processes and procedures that actually help people shortcut their tasks, rather than creating standards for standards’ sake; and
  • documenting everything — as close to the activity as possible — in a way that is easy for others and for your future self to understand.
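Several of the habits above, particularly not repeating yourself and test-driven development, can be sketched in a few lines of code. This Python fragment is a generic illustration, not something from Brian's talk; the hostname helper and its checks are hypothetical:

```python
def normalize_hostname(name: str) -> str:
    """One shared helper (DRY): hostname cleanup rules live in a single
    place instead of being re-implemented at every call site."""
    return name.strip().lower().rstrip(".")

# Tests written alongside the code (test-driven): a future change that
# breaks the rules fails here immediately, not in panic mode weeks later.
assert normalize_hostname("  Web01.Example.COM  ") == "web01.example.com"
assert normalize_hostname("db01.example.com.") == "db01.example.com"
print("all checks passed")
```

The payoff is exactly the laziness Brian described: every caller gets the fix for free, and the assertions spare hours of later debugging.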



Every religious tradition in the world espouses patience as a virtue, Brian said, but the truth is the world is growing more impatient, in part, because of technology.

“If you think you want patient people working for you, I’d ask, ‘What about all that technology influence that’s creating more and more impatience in the world? Don’t you want people who recognize that and are ready and willing to respond to it?'” he said.

Indeed, patience could even pose a threat to organizational efficiency.

“Patience can lead to inaction, if you think about it. Patience can quickly become apathy or complacency — or at least appear to be those things,” he said.

The impatience Brian exalted is the general impatience that drives people to get things done and fix things that are broken or problematic. While laziness is about overall energy expenditure, impatience is all about the emotion — specifically anger at a slow program or process.

“It’s fixing a problem not because practitioners have to, but because it bugs them; not because there’s a ticket open, but because it’s really annoying and they’re impatient users,” Brian said.

This is where practices like continuous integration come in, Brian said. Along with having tests run on a regular basis so they can know as soon as a problem occurs, impatient IT practitioners are also continuously exploring — and integrating — new and better tools.

Impatience is also key to Agile development, Brian said.

“You should never hear the words from an Agile team, ‘We are waiting on X from X,'” Brian said. “They’re not Agile unless they can meet all of their dependencies and never be waiting on another team to get things done. And that’s what real impatience should look like.”

He listed specific examples of what true impatience requires of an IT practitioner:

  • a sense of urgency;
  • automating everything automatable;
  • constantly watching for better workflows, tools and methodologies;
  • continuously integrating so you never feel behind;
  • utilizing wikis, because we need to edit that right here and now;
  • empathizing with impatient end users;
  • having empowered teams with the resources necessary to push projects through to completion;
  • the ability to use cloud services, or any service that is the best tool for the job; and
  • strong communication skills from all contributors and sponsors.
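"Automating everything automatable" is the most concrete item on that list. As a hedged illustration (the config files and required keys here are hypothetical, not from the talk), a short script can replace a manual, repetitive review:

```python
import json
import pathlib
import tempfile

def audit_configs(directory: str, required_keys: set) -> dict:
    """Replace a manual, repetitive review with a script: report every
    JSON config file that is missing one of the required keys."""
    problems = {}
    for path in pathlib.Path(directory).glob("*.json"):
        config = json.loads(path.read_text())
        missing = required_keys - config.keys()
        if missing:
            problems[path.name] = sorted(missing)
    return problems

# Demo with hypothetical files in a temporary directory
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "good.json").write_text(json.dumps({"host": "a", "port": 1}))
    pathlib.Path(d, "bad.json").write_text(json.dumps({"host": "b"}))
    print(audit_configs(d, {"host", "port"}))   # {'bad.json': ['port']}
```

An impatient practitioner writes this once, wires it into the build, and never sits through the manual check again.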


Wall defined hubris as “excessive pride — the sort of thing that Zeus zaps you for. [It’s] also the quality that makes you write and maintain programs that other people won’t want to say bad things about.”

In that vein, Brian refers to IT-practitioner hubris as the pride one takes in a well-crafted product and the drive to succeed where others have failed.

“[It’s] that total sense of ownership that doesn’t come without opening things up and allowing themselves to be impatient and lazy in this case,” he said. “It’s also knowing enough to know what you don’t know, which brings confidence with experience.”

This brand of hubris requires not only a conviction that one is right, but an ability to make the case to the CIO and the business, Brian said.

“A big part of this is for the technical folks to learn to not speak like coneheads,” he said.

Brian noted that novice IT practitioners can’t really have true hubris — yet.

“New practitioners can be lazy, and they can be impatient. But they can’t have hubris in the effective way,” Brian said. Hubris takes time, experience and success. “Real hubris is being an expert.”

He listed specific examples of what else true hubris requires of an IT practitioner:

  • pride in yourself and in your work;
  • zero fear of new technologies — the ability to dive in and emerge an expert;
  • attention to details, such as design, documentation and code formatting;
  • flexibility to adjust to changing requirements and user needs — a “we can do that” mentality;
  • owning the results of your work — releasing, maintaining and improving a service;
  • knowing what “good” looks like and how to get it;
  • going above and beyond, even when it is not requested;
  • constantly retraining yourself; staying abreast of new technology developments; reading technology books; attending conventions and workshops; and subscribing to training sites, like Lynda.com, Udemy, Pluralsight or Codecademy; and
  • a craftsmanship mentality — seeing your job as creating solutions for people and the business, rather than racking servers or writing code.

Brian ended with a warning to IT practitioners: Don’t let any one of these three qualities outweigh the others; they must coexist and balance each other out. Practitioners ruled by laziness — efficiency obsessives — will try to suss out and prematurely optimize any problems that might come in the future.

“If they’re too impatient, they’re going to be quick to adopt the wrong solutions … and just incur technical debt over time,” Brian said. “Too much hubris, and they are going to be perfectionists that can’t ever recognize when good is good enough and [the need to] sacrifice the good for the perfect.”

Datrium DVX switches focus to converged markets, enterprise

Datrium has a new CEO, and a new strategy for pushing hyper-convergence into the enterprise.

Tim Page replaced Brian Biles, one of Datrium’s founders, as CEO in June. Biles moved into the chief product officer role, one he said he is better suited for, to allow Page to build out an enterprise sales force.

The startup is also changing its market focus. Its executives previously avoided calling Datrium DVX primary storage systems hyper-converged, despite a disaggregated architecture that combined storage with Datrium Compute Nodes and Data Nodes. They pitched the Datrium DVX architecture as “open convergence” instead, because customers could also use separate x86 or commodity servers. As a software-defined storage vendor, Datrium played down its infrastructure architecture.

Now Datrium positions itself as hyper-converged infrastructure (HCI) on both the primary and secondary storage sides. The use cases and reasons for implementation are the same as hyper-converged — customers can collapse storage and servers into a single system.

“You can think of us as a big-a– HCI,” Biles said. “We’re breaking all the HCI rules.”

Datrium DVX is nontraditional HCI with stateless servers, large caches and shared storage but is managed as a single entity.


“We mean HCI in a general way,” Biles said. “We’re VM- or container-centric, we don’t have LUNs. DVX includes compute and storage, it can support third-party servers. But when you look at our architecture, it is different. To build this, we had to break all the rules.”

Datrium’s changed focus is opportunistic. The HCI market is growing at a far faster rate than traditional storage arrays, and that trend is expected to continue. Vendors who have billed themselves as software-defined storage without selling underlying hardware have failed to make it.

Secondary storage is also taking on a converged focus with the rise of newcomers Rubrik and Cohesity. Datrium also wants to compete there with a cloud-native version of DVX for backup and recovery.

However, Datrium will find a highly competitive landscape in enterprise storage and HCI. It will go against giants Dell EMC, Hewlett Packard Enterprise and NetApp on both fronts, and Cisco and Nutanix in HCI. Besides high-flying Cohesity and Rubrik, its backup competition includes Veritas, Dell EMC, Veeam and Commvault.

A new Datrium DVX customer, the NFL’s San Francisco 49ers, buys into the vendor’s HCI story. Jim Bartholomew, the 49ers IT director, said the football team collapsed eight storage platforms into one when it installed eight DVX Compute Nodes and eight DVX Data Nodes. It will also replace its servers and perhaps traditional backup with DVX, 49ers VP of corporate partnerships Brent Schoeb said.

“The problem was, we had three storage vendors and always had to go to a different one for support,” Bartholomew said.

Schoeb said the team stores its coaching and scouting video on Datrium DVX, as well as all of the video created for its website and historical archives.

“We were fragmented before,” Schoeb said of the team’s IT setup. “Datrium made it easy to consolidate our legacy storage partners. We rolled it all up into one.”

Photo: Datrium DVX units in the 49ers’ data center. The NFL’s San Francisco 49ers make an end run around established storage vendors by taking a shot with startup Datrium.

Roadmap: Multi-cloud support for backup, DR

Datrium parrots the mantra of HCI pioneer Nutanix and others: its goal is to manage data from any application wherever it resides, on premises or across clouds.

Datrium is building out its scale-out backup features for secondary storage. Datrium DVX includes read-on-write snapshots, deduplication, inline erasure coding and a built-in backup catalog called Snapstore.
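Deduplication, described here only in generic terms (this toy sketch says nothing about Datrium's actual implementation), works by storing each unique block once, keyed by a content hash:

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: each unique block is kept once,
    and duplicate writes only add a reference to the existing copy."""
    def __init__(self):
        self.blocks = {}  # sha256 digest -> block bytes

    def write(self, data: bytes, block_size: int = 4) -> list:
        """Split data into fixed-size blocks; return the per-block digests."""
        fingerprints = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # stored once per unique block
            fingerprints.append(digest)
        return fingerprints

store = DedupStore()
store.write(b"AAAABBBBAAAA")  # three 4-byte blocks, but only two unique
print(len(store.blocks))      # 2
```

Real systems add variable-size chunking, reference counting and on-disk layout on top, but the space savings come from the same idea: repeated blocks cost one copy.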

Another Datrium founder, CTO Sazzala Reddy, said the roadmap calls for integrating cloud support for data protection and disaster recovery. Datrium added support for AWS backup with Cloud DVX last fall, and is working on support for VMware Cloud on AWS and Microsoft Azure.

“We want to go where the data is,” Reddy said. “We want to move to a place where you can move any application to any cloud you want, protect it any way you want, and manage it all in the data center.”

New CEO: Datrium’s ready to pivot

Page helped build out the sales organization as COO at VCE, the EMC-Cisco-VMware joint venture that sold Vblock converged infrastructure systems. He will rebuild the sales structure at Datrium, shifting the focus from SMB and midmarket customers to the enterprise.

Datrium executives claim they have hundreds of customers and hope to hit 1,000 by the end of 2018, although that goal is likely overambitious. The startup is far from profitable and will require more than the $110 million in funding it has raised. Industry sources say Datrium already has about $40 million in venture funding lined up for a D round, and it is seeking strategic partners before disclosing the round. Datrium has around 200 employees.

“Datrium’s at an interesting point,” Page said of his new company. “They’re getting ready to pivot in a hyper-growth space now into the enterprise. What we didn’t have was an enterprise sales motion — it’s different selling into the Nimble, Tintri, Nutanix midmarket world. It’s hard to port anyone from that motion into the enterprise motion. We’re going to get into that growth phase, and make sure we do it right.”

Biles said he is following the same model as in his previous company, Data Domain. The backup deduplication pioneer took off after bringing Frank Slootman in as CEO during its early days of shipping products in 2003. Data Domain became a public company in 2007, and EMC acquired it for $2.1 billion two years later.

“I knew a lot less then than I know now, but I know there are many better CEOs than me,” Biles said. “Customer opportunities are much bigger than they used to be, and the sales cycle is much bigger than our team was equipped for. We needed to do a spinal transplant. There’s a bunch of things to deal with as you get to hundreds of employees and a lot of demanding customers. My training is on the product side.”

The intelligent enterprise blends tech savvy, business smarts

The term intelligent enterprise was coined in the early 1990s by James Brian Quinn. It was a business theory holding that technology and computing infrastructure were key to improved business performance. That was before the internet was a household word, before smartphones were in every hand, before cloud computing became computing for so many.

In 2018, intelligent enterprise takes on a new significance. Ryan Mallory, an executive at data center provider Equinix, hosted a panel on the topic at the MIT Sloan CIO Symposium in Cambridge, Mass., on May 23. His definition didn’t stray far from the original: a company that uses technology to “become more efficient, more productive and have a larger impact on the success of the overall enterprise goals and objectives.”

As senior vice president for global solutions enablement at the Redwood City, Calif., company, Mallory helps companies use Equinix’s data center resources to their advantage, easing their transition to cloud and emerging technologies like AI. So, for him, talking future tech and how to prepare for it is business as usual. IT execs at the MIT event spoke to him about a major challenge they faced in getting started: the breakneck pace of development.

“They’re having a hard time even getting through their tech-dev, DevOps evaluation cycle before that product is either obsolete or it has changed its overall focus,” he said.

In an interview at the MIT conference, Mallory discussed the digital transformation efforts companies need to undertake to become an intelligent enterprise, how they’re faring in deploying the required technologies, including cloud, and the challenges Equinix faced during its own emerging-tech initiatives. Here are edited excerpts of that conversation.

Ryan Mallory, senior vice president for global solutions enablement, Equinix

Part of your job is helping business executives with their cloud deployment strategies. What do executives get about cloud, and where do they need the most help?

Ryan Mallory: Executives get that they need cloud, period. So the financial modeling and ingestion of services in a bare-metal environment — meaning they’re using their own dollars every single year, every two years to buy servers and then do the refreshes — they realize that the economic modeling associated with that just isn’t viable any longer. They know that they need to consume services in a virtualized fashion, and they want to get there.

The biggest challenge that they have is, over the past eight or 10 years, they’ve really reduced their IT staffs — most importantly, their network staffs. And what they realize is, what they have today can’t get them to where they need to be tomorrow. They don’t have intellectual property inside, from a staffing perspective, to cross that chasm, to understand how to actually get cloudified.

You’re moderating a panel here at the MIT CIO Symposium on becoming an intelligent enterprise. Can you define that?

Mallory: What I look at as an intelligent enterprise is a company or an enterprise that has the ability to utilize technology to become more efficient, more productive and have a larger impact on the success of the overall enterprise goals and objectives.

Does a business need to go through a digital transformation to become an intelligent enterprise?

Mallory: Absolutely. It’s always multi-tiered, and it should be viewed as multiyear, and it shouldn’t ever be viewed as a start and an end point. Because with today’s technology advancements and development life cycles, we see Moore’s Law continually compressing with the ability to have better capabilities, better services, better chips, etc. So the need to advance your digital strategy aligns with that, because it can continue to get better.

I was meeting with a couple of companies here, and they were very clearly stating that the biggest challenge is that when they start something in R&D, they’re having a hard time even getting through their tech-dev, DevOps evaluation cycle before that product is either obsolete or has changed its overall focus. And their R&D dollars have to be refunded and then go back into another round of R&D.

That’s the cycle we’re seeing, especially from SaaS applications and the capabilities out there: These near-term roadmap capabilities are just evolving so quickly that there’s a frustration around, ‘How do we make sure we stay in front of this?’ And, ‘How are we looking at the current state and the “to be,” but also that the innovation ideation is taking place, as well?’

What are some of those key technologies?

Mallory: It’s AI. It’s analytics, machine learning, but it’s also the core apps that sit inside Microsoft Azure. We’ve had conversations around some of their data science applications and even some of their advanced notebooking capabilities, where companies are using them for a specific purpose.

One of the companies I was talking to this morning is an oil and gas company that was using some of the data analytics and scientific mechanisms that Microsoft had, and they were all bought in [on a Microsoft product]. Six months into their R&D cycle, they end-of-lifed that product because they were going to launch something else. And that’s not a hit on Microsoft; it’s just how fast people are developing their underlying products, and that’s coming from the hyperscalers all the way down to startups.

And we’ve seen this evolve. Entefy is a very hot, up-and-coming AI and machine learning company in Silicon Valley. Just in the last 18 months, we’ve seen their product set evolve from a single unified messaging platform with AI capabilities, one that let you combine text, voice and email into a single information interface, to an eight-product suite. That’s how quickly they’re evolving. That original UC model is there; it’s OK. But the deep search and deep learning capabilities are what they’ve really moved into. The market is just so dynamic right now; you have to really focus on being in front of it.

So, the pace of development is a challenge. What are some other challenges companies will face?

Mallory: They’re very adept at their own code and their own algorithms; they’re very focused on that product development. It’s being able to integrate those capabilities in usable fashion out in the market.

If we look at the marketplace from a silo perspective, what’s happening with customers, what’s happening with vendors, you can look at the widgets and say, ‘OK, great. That’s a good idea,’ and, ‘That’s a great algorithm,’ or, ‘That’s a great implementation.’ But when you start looking at AI and machine learning and say, ‘OK, we want to start looking at some models for smart cities,’ what’s the endpoint? Data cameras, so that every time somebody goes through a toll, we can look at what the car looks like, how many passengers are in it and what speed it’s going. Then, companies can start looking at, ‘Where do we want to put charging stations? What’s the uptick of electric cars? Should we sell this information to Tesla and let them put a new kiosk in a mall?’

But connecting all those ancillary dots becomes very difficult, because you’re taking technologies, and you’ve got to figure out how to integrate them, whether it’s physically or logically, and then figure out how to make sure that people can access the information. More importantly, how can they act on that info? That’s the big challenge.

Equinix has implemented a lot of emerging technologies — it spearheaded a number of AI initiatives, for example. What challenges did you experience when preparing for and executing on them?

Mallory: I think what everybody assumes when they think about AI or IoT is that there are quick-fix scenarios out there. ‘OK, great. I’m IoT-enabled.’ Well, if a device is IoT-enabled, that just means the endpoint has the capability to communicate more broadly. When you look at a deployment plan or scheme for trying to make things smarter or get access to that data, that means putting sensors everywhere.

The real challenge is the scope of the deployment associated with making things more intelligent. That’s the hard part. The technology has been proven, and I think everybody in the technology field is very comfortable with where we’re at and where we’re going to go. But it’s making sure that anything that’s legacy, anything that was built from today back, has those capabilities. That’s where the challenge is. We’re talking about hundreds of thousands of sensors. It takes a lot of man-hours to go in and put a sensor on every light switch.

The physical deployment.

Mallory: Absolutely. We have the aggregation mechanisms, both public and private, with private connectivity through our fabric and public connectivity through the internet or Wi-Fi, so the access and the concepts are there. But it’s the time associated with deployment that I think is catching everybody right now.

Equinix’s Ryan Mallory discusses his company’s moves in a growing data center services market in part two of this two-part Q&A.