
Datrium DVX switches focus to converged markets, enterprise

Datrium has a new CEO, and a new strategy for pushing hyper-convergence into the enterprise.

Tim Page replaced Brian Biles, one of Datrium’s founders, as CEO in June. Biles moved into the chief product officer role, one he said he is better suited for, to allow Page to build out an enterprise sales force.

The startup is also changing its market focus. Its executives previously avoided calling Datrium DVX primary storage systems hyper-converged, despite a disaggregated architecture that split the system into Datrium Compute Nodes and Data Nodes. They pitched the DVX architecture as “open convergence” instead, because customers could also use separate commodity x86 servers. As a software-defined storage vendor, Datrium played down the hardware side of its architecture.

Now Datrium positions itself as hyper-converged infrastructure (HCI) on both the primary and secondary storage sides. The use cases and reasons for implementation are the same as hyper-converged — customers can collapse storage and servers into a single system.

“You can think of us as a big-a– HCI,” Biles said. “We’re breaking all the HCI rules.”

Datrium DVX is nontraditional HCI with stateless servers, large caches and shared storage but is managed as a single entity.


“We mean HCI in a general way,” Biles said. “We’re VM- or container-centric, we don’t have LUNs. DVX includes compute and storage, it can support third-party servers. But when you look at our architecture, it is different. To build this, we had to break all the rules.”

Datrium’s changed focus is opportunistic. The HCI market is growing at a far faster rate than traditional storage arrays, and that trend is expected to continue. Vendors who have billed themselves as software-defined storage without selling underlying hardware have failed to make it.

Secondary storage is also taking on a converged focus with the rise of newcomers Rubrik and Cohesity. Datrium also wants to compete there with a cloud-native version of DVX for backup and recovery.

However, Datrium will find a highly competitive landscape in enterprise storage and HCI. It will go against giants Dell EMC, Hewlett Packard Enterprise and NetApp on both fronts, and Cisco and Nutanix in HCI. Besides high-flying Cohesity and Rubrik, its backup competition includes Veritas, Dell EMC, Veeam and Commvault.

A new Datrium DVX customer, the NFL’s San Francisco 49ers, buys into the vendor’s HCI story. Jim Bartholomew, the 49ers IT director, said the football team collapsed eight storage platforms into one when it installed eight DVX Compute Nodes and eight DVX Data Nodes. It will also replace its servers and perhaps traditional backup with DVX, 49ers VP of corporate partnerships Brent Schoeb said.

“The problem was, we had three storage vendors and always had to go to a different one for support,” Bartholomew said.

Schoeb said the team stores its coaching and scouting video on Datrium DVX, as well as all of the video created for its website and historical archives.

“We were fragmented before,” Schoeb said of the team’s IT setup. “Datrium made it easy to consolidate our legacy storage partners. We rolled it all up into one.”

Datrium DVX units in the 49ers' data center.
The NFL’s San Francisco 49ers make an end run around established storage vendors by taking a shot with startup Datrium.

Roadmap: Multi-cloud support for backup, DR

Datrium parrots the mantra from HCI pioneer Nutanix and others that its goal is to manage data from any application wherever it resides, on premises or across clouds.

Datrium is building out its scale-out backup features for secondary storage. Datrium DVX includes read on write snapshots, deduplication, inline erasure coding and a built-in backup catalog called Snapstore.

Another Datrium founder, CTO Sazzala Reddy, said the roadmap calls for integrating cloud support for data protection and disaster recovery. Datrium added support for AWS backup with Cloud DVX last fall, and is working on support for VMware Cloud on AWS and Microsoft Azure.

“We want to go where the data is,” Reddy said. “We want to move to a place where you can move any application to any cloud you want, protect it any way you want, and manage it all in the data center.”

New CEO: Datrium’s ready to pivot

Page helped build out the sales organization as COO at VCE, the EMC-Cisco-VMware joint venture that sold Vblock converged infrastructure systems. He will rebuild the sales structure at Datrium, shifting the focus from SMB and midmarket customers to the enterprise.

Datrium executives claim they have hundreds of customers and hope to hit 1,000 by the end of 2018, although that goal is likely overambitious. The startup is far from profitable, and will require more than the $110 million in funding it has raised. Industry sources say Datrium already has about $40 million in venture funding lined up for a D round, and is seeking strategic partners before disclosing the round. Datrium has around 200 employees.

“Datrium’s at an interesting point,” Page said of his new company. “They’re getting ready to pivot in a hyper-growth space now into the enterprise. What we didn’t have was an enterprise sales motion — it’s different selling into the Nimble, Tintri, Nutanix midmarket world. It’s hard to port anyone from that motion into the enterprise motion. We’re going to get into that growth phase, and make sure we do it right.”

Biles said he is following the same model as in his previous company, Data Domain. The backup deduplication pioneer took off after bringing Frank Slootman in as CEO during its early days of shipping products in 2003. Data Domain became a public company in 2007, and EMC acquired it for $2.1 billion two years later.

“I knew a lot less then than I know now, but I know there are many better CEOs than me,” Biles said. “Customer opportunities are much bigger than they used to be, and the sales cycle is much bigger than our team was equipped for. We needed to do a spinal transplant. There’s a bunch of things to deal with as you get to hundreds of employees and a lot of demanding customers. My training is on the product side.”

New types of authentication take root across the enterprise

BOSTON — When IT professionals develop a strategy for user password and authentication management, they must consider the two key metrics of security and usability.

IT professionals are looking for ways to minimize reliance on passwords as the lone authentication factor, especially because 81% of hacking-related breaches involve stolen or weak passwords, according to Verizon’s 2017 Data Breach Investigations Report. Adding other types of authentication to supplement — or even replace — user passwords can improve security without hurting usability.

“Simply put, the world has a password problem,” said Brett McDowell, executive director of the FIDO Alliance, based in Wakefield, Mass., here in a session at Identiverse.

A future without passwords?

Types of authentication that require only a single verification factor could be much more secure if users adopted complex, harder-to-predict passwords, but that works against usability. The need for complex passwords, along with the 90- to 180-day password refreshes that are an industry standard in the enterprise, means that reliance on passwords alone can’t meet security and usability goals at the same time.

“If users are being asked to create and remember incredibly complex passwords, IT isn’t doing its job,” said Don D’Souza, a cybersecurity manager at Fannie Mae, based in Washington, D.C.

IT professionals today are turning to two-factor authentication, relying on biometric and cryptographic methods to supplement passwords. The FIDO Alliance, a user authentication trade association, pushes for two-factor authentication that entirely excludes passwords in their current form.


McDowell broke down authentication methods into three categories:

  • something you know, such as a traditional password or a PIN;
  • something you possess, such as a mobile device or a token card; and
  • something you are, which includes biometric authentication methods, such as voice, fingerprint or gesture recognition.

The FIDO Alliance advocates for organizations to shift toward the latter two of these options.

“We want to take user vulnerability out of the picture,” McDowell said.

Taking away password autonomy from the user could improve security in many areas, but none more directly than phishing. Even if a user falls for a phishing email, their authentication is not compromised when two-factor authentication is in place, because the attacker still lacks the cryptographic or biometric second factor.
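As a rough illustration of that point, here is a minimal sketch (not FIDO's public-key protocol) that pairs a salted password hash, the "something you know" factor, with a time-based one-time password from the pyotp library as the "something you possess" factor. The names and values are invented; the point is that a phished password alone no longer passes the check.

```python
# Minimal two-factor check: a salted password hash plus a TOTP code.
# Illustrative only; FIDO-style authentication uses public-key cryptography
# rather than shared TOTP secrets. Names and values here are invented.
import hashlib
import hmac
import os

import pyotp  # pip install pyotp

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Enrollment: the server stores a salted hash and a per-user TOTP secret
# that is provisioned to the user's authenticator app.
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)
totp_secret = pyotp.random_base32()

def authenticate(password: str, otp_code: str) -> bool:
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = pyotp.TOTP(totp_secret).verify(otp_code)  # valid only for the current time window
    return knows and has

# A phisher who captured only the password fails the second check.
print(authenticate("correct horse battery staple", "000000"))                       # False
print(authenticate("correct horse battery staple", pyotp.TOTP(totp_secret).now()))  # True
```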

“With user passwords as a single-factor authentication, the only real protection against phishing is testing and training,” D’Souza said.

Trickle-down benefits of new types of authentication

Added types of authentication increase the burden on IT when it comes to privileged access management (PAM) and staying up-to-date on user information. But as organizations move away from passwords entirely, IT doesn’t need to worry as much about hackers gaining access to authentication information, because that is only one piece of the puzzle. This also leads to the benefit of cutting down on account access privileges, said Ken Robertson, a principal technologist at GE, based in Boston.

With stronger types of authentication in place, for example, IT can feel more comfortable handing over some simple administrative tasks to users — thereby limiting its own access to user desktops. IT professionals won’t love giving up access privilege, however.

“People typically start a PAM program for password management,” Robertson said. “But limiting IT logon use cases minimizes vulnerabilities.”

Organizations are taking steps toward multifactor authentication that doesn’t include passwords, but the changes can’t happen immediately.

“We will have a lot of two-factor authentication across multiple systems in the next few years, and we’re looking into ways to limit user passwords,” D’Souza said.

IBM keeps pace with evolving IBM business partners

IBM has tasked itself with refocusing its channel strategy to reflect the modern challenges facing IBM business partners and to push indirect business so that it outpaces the company's internal growth.

The vendor last week introduced an ecosystem model to benefit its traditional channel base, while simultaneously encouraging partnerships with more cutting-edge players in the market, such as ISVs, developers, managed service providers and cloud services providers. The revamped strategy streamlines benefits, tools and programs to better engage, enable and incentivize partners.

According to the vendor, partners will soon find it easier and faster to do business with IBM, including business around software-as-a-service offerings. IBM also revised its rules of engagement and said it would shift more accounts to partner coverage.

“IBM has spent the last several years transforming everything about [itself] from a hardware … a software and a services [perspective]. We know it has become very clear that the ecosystem (both our core channel partners and the new ecosystem that we are going after this year) … is requiring us to change,” said John Teltsch, general manager of global IBM business partners.

John Teltsch, general manager of global IBM business partners

IBM currently works with about 19,000 partners worldwide. Over the past several years, the company has transformed itself from a hardware-focused vendor into one that embraces software, services and cloud computing. The transition has included a heavy investment in cognitive computing, an area IBM has urged partners to incorporate into their offerings.

With this latest shift in IBM ecosystem strategy, the company has set its sights on even greater market dominance in a range of technology categories.

“The growth they are looking to get is huge,” said Steve White, program vice president of channels and alliances at IDC.

Adapting to digital disruption


Teltsch said the revamped strategy recognizes the changes that digital transformation has wrought on customers and IBM business partners alike. “We need to adjust how we engage our partners, as the digital disruption continues to impact every part of our clients, our partners and our distributors’ way of going to market,” he said. “As we continue to move more of our hardware and software to ‘as a service’ type offerings … we need to leverage this new ecosystem and our core set of partners as they evolve and change their businesses.”

Although firmly committed to expanding the IBM ecosystem, Teltsch acknowledged that executing the new strategy has its challenges.

For one thing, IBM must evolve internally to help its traditional partners adopt modern business models. For example, Teltsch said, many of IBM’s hardware partners are moving from selling solely hardware to offering managed services. “We have a lot of partners that are looking for our help as they transform their own businesses and modernize themselves into this digital world. As we are changing internally … we are helping [partners globally] modernize themselves,” he said.

Ginni Rometty, chairman, president and CEO of IBM, discusses Watson with IBM business partners at PartnerWorld Leadership Conference 2017.

IBM to lower barrier of entry for new partners

Another challenge IBM faces is changing how it brings new IBM business partners into the fold. Teltsch said he aims to lower the barrier of entry, especially for “the new generation of partners … that don’t traditionally think of IBM today, or think of IBM as too large, too complex [and] not really approachable.”

“We have to simplify and lower the barrier of entry for all of [the] new partners, as well as our existing core partners to come into IBM,” he added.

To help address these challenges, IBM plans to adjust its tools, certifications, systems and contracts, Teltsch said. Additionally, the vendor will continue building out its digital capabilities to better meet the needs of core partners and the expanding IBM ecosystem.

White said he thinks IBM is trying to do the right thing through its channel refocus, yet he noted that IBM’s massive size makes for a complex shift. However, partners will likely appreciate the clarity the vendor adds to its channel strategy, he said.

According to Teltsch, the new ecosystem strategy is slated to go into effect April 10.

What is happening with AI in cybersecurity?

Jon Oltsik, an analyst with Enterprise Strategy Group in Milford, Mass., wrote about the growing role of AI in cybersecurity. Two recent announcements sparked his interest.

The first was by Palo Alto Networks, which rolled out Magnifier, a behavioral analytics system. The second came from Alphabet, which deployed Chronicle, a cybersecurity intelligence platform. Both rely on AI and machine learning to sort through massive amounts of security data. Vendors are innovating to bring AI in cybersecurity to market, and ESG sees growing demand for these forms of advanced analytics.

Twelve percent of enterprises have already deployed AI in cybersecurity. ESG research found 29% of respondents want to accelerate incident detection, while similar numbers demand faster incident response or the ability to better identify and communicate risk to the business. An additional 22% want AI cybersecurity systems to improve situational awareness.

Some AI applications work on a stand-alone basis, often tightly coupled with security information and event management or endpoint detection and response; in other cases, machine learning is applied as a helper app. This is true of Bay Dynamics’ partnership with Symantec, applying Bay’s AI engine to Symantec data loss prevention.

Oltsik cautioned that most chief information security officers (CISO) don’t understand AI algorithms and data science, so vendors will need to focus on what they can offer to enhance security. “In the future, AI could be a cybersecurity game-changer, and CISOs should be open to this possibility. In the meantime, don’t expect many organizations to throw the cybersecurity baby out with the AI bath water,” Oltsik said.

Read more of Oltsik’s ideas about AI in cybersecurity.

Simplify networks for improved security and performance

Russ White, blogging in Rule 11 Tech, borrowed a quote from a fellow blogger. “The problem is that once you give a monkey a club, he is going to hit you with it if you try to take it away from him.”

In this analogy, the club is software intended to simplify the work of a network engineer. But in reality, White said, making things easier can also create a new attack surface that cybercriminals can exploit.

To that end, White recommended removing unnecessary components and code to reduce the attack surface of a network. Routing protocols, quality-of-service controls and transport protocols can all be trimmed back, along with some virtual networks and overlays.

In addition to beefing up security, resilience is another key consideration, White said. When engineers think of network failure, their first thoughts include bugs in the code, failed connectors and faulty hardware. In reality, however, White said most failures stem from misconfiguration and user error.

“Giving the operator too many knobs to solve a single problem is the equivalent of giving the monkey a club. Simplicity in network design has many advantages — including giving the monkey a smaller club,” he said.

Explore more from White about network simplicity.

BGP in data centers using EVPN

Ivan Pepelnjak, writing in ipSpace, focused on running Ethernet VPN, or EVPN, in a single data center fabric with either VXLAN or MPLS encapsulation. He contrasted this model with running EVPN between data center fabrics, where most implementations require domain isolation at the fabric edge.

EVPN is a Border Gateway Protocol address family that can run over either external BGP (EBGP) or internal BGP (IBGP) sessions. Within a single data center fabric, engineers can use either approach to build the EVPN infrastructure, Pepelnjak said.

He cautioned, however, that spine switches shouldn't be involved in intra-fabric customer traffic forwarding. The BGP next-hop in an EVPN update can't be changed on the path between the ingress and egress switches, he said. Instead, the BGP next-hop must always point to the egress fabric edge switch.
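A toy model can make that rule concrete. The sketch below is purely illustrative Python with made-up addresses, not any vendor's implementation: if a spine applies the usual EBGP next-hop rewrite to an EVPN route, the ingress leaf would build its VXLAN tunnel toward the spine, which cannot terminate the tunnel, instead of toward the egress leaf.

```python
# Toy illustration (not vendor code): an EVPN MAC/IP route whose next hop is the
# egress leaf's VTEP address. If the spine rewrites the next hop to itself, as
# plain EBGP normally would, the ingress leaf tunnels traffic to the spine.
EGRESS_LEAF_VTEP = "10.0.0.11"   # made-up loopback of the egress fabric edge switch
SPINE_IP = "10.0.1.1"            # made-up spine address

def propagate_through_spine(route: dict, rewrite_next_hop: bool) -> dict:
    """Pass an EVPN update through a spine, optionally applying next-hop-self."""
    forwarded = dict(route)
    if rewrite_next_hop:
        forwarded["next_hop"] = SPINE_IP
    return forwarded

evpn_route = {"mac": "00:11:22:33:44:55", "vni": 10100, "next_hop": EGRESS_LEAF_VTEP}

broken = propagate_through_spine(evpn_route, rewrite_next_hop=True)
correct = propagate_through_spine(evpn_route, rewrite_next_hop=False)

print(broken["next_hop"])   # 10.0.1.1  -> tunnel would point at the spine (wrong)
print(correct["next_hop"])  # 10.0.0.11 -> tunnel terminates on the egress leaf
```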

To exchange EVPN updates across EBGP sessions within a data center fabric, the implementation needs to support functionality similar to MPLS VPN. Pepelnjak added that many vendors have not fully integrated this behavior for EVPN, and users often run into issues that can force numerous configuration changes.

Pepelnjak recommended avoiding vendors that market EBGP between leaf-and-spine switches or IBGP sessions on top of intra-fabric EBGP. If engineers are stuck with an inflexible vendor, it may be best to use an Interior Gateway Protocol as the routing protocol.

Dig deeper into Pepelnjak’s ideas on EVPN.

Cybersecurity skills shortage continues to worsen

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., said the global cybersecurity skills shortage is bad and getting worse. Citing ESG's annual survey on the state of IT, Oltsik said skills shortages among various networking disciplines have not eased, and the cybersecurity shortage is particularly acute. In 2014, for instance, 23% of respondents said their organization faced a problematic shortage of cybersecurity skills. In the most recent survey, which polled more than 620 IT and cybersecurity professionals, 51% said they faced a cybersecurity skills shortage. The data aligns with a 2017 ESG-ISSA survey that found 70% of cybersecurity professionals reported their organizations were affected by the skills shortage, resulting in increased workloads and little time for planning.

“I can only conclude that the cybersecurity skills shortage is getting worse,” Oltsik said. “Given the dangerous threat landscape and a relentless push toward digital transformation, this means that the cybersecurity skills shortage represents an existential threat to developed nations that rely on technology as the backbone of their economy.”

Chief information security officers (CISOs), Oltsik said, need to consider the implications of the cybersecurity skills shortage. Smart leaders are doing their best to cope by consolidating systems, such as integrated security operations and analytics platform architecture, and adopting artificial intelligence and machine learning. In other cases, CISOs automate processes, adopt a portfolio management approach and increase staff compensation, training and mentorship to improve retention.

Dig deeper into Oltsik’s ideas on the cybersecurity skills shortage.

Building up edge computing power

Erik Heidt, an analyst at Gartner, spent part of 2017 discussing edge computing challenges with clients as they worked to improve computational power for IoT projects. Heidt said a variety of factors drive compute to the edge (and in some cases, away), including availability, data protection, cycle times and data stream filtering. In some cases, computing capability is added directly to an IoT endpoint. But in many situations, such as data protection, it may make more sense to host an IoT platform in an on-premises location or private data center.

Yet the private approach poses challenges, Heidt said, including licensing costs, capacity issues and hidden costs from IoT platform providers that limit users to certified hardware. Heidt recommends purchasing teams look carefully at what functions are being offered by vendors, as well as considering data sanitization strategies to enable more use of the cloud. “There are problems that can only be solved by moving compute, storage and analytical capabilities close to or into the IoT endpoint,” Heidt said.

Read more of Heidt’s thoughts on the shift to the edge.

Meltdown has parallels in networking

Ivan Pepelnjak, writing in ipSpace, responded to a reader's question about how hardware issues become software vulnerabilities in the wake of the Meltdown vulnerability. According to Pepelnjak, there has always been privilege-level separation between the kernel and user space. The kernel has always been mapped into the high end of the user address space, but on more recent CPUs, the operations needed to execute even a single instruction often flow through a pipeline handling dozens of instructions at once — which exposes the vulnerability an attack like Meltdown can exploit.

In these situations, the check of the kernel-space address against the access control list (ACL) eventually fails, but by then other parts of the CPU have already carried out the instructions that fetch the memory location.
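A deliberately simplified sketch can illustrate that ordering problem. The snippet below is a loose software analogy only, not exploit code and not how a CPU pipeline actually works: the side effect of the load (a warmed cache line) lands before the access check fails, and it survives even though the architectural result is thrown away.

```python
# Loose analogy for speculative execution, not real exploit code: the load's
# side effect reaches the "cache" before the permission check rejects it.
cache = set()  # stands in for the CPU data cache

def speculative_load(addr: int, is_kernel_addr: bool, user_mode: bool) -> str:
    cache.add(addr)                                   # side effect of the speculative load
    if user_mode and is_kernel_addr:
        raise PermissionError("access check failed")  # checked only afterwards
    return f"data@{addr:#x}"

try:
    speculative_load(0xFFFF0000, is_kernel_addr=True, user_mode=True)
except PermissionError:
    pass                        # architectural result is discarded ...

print(0xFFFF0000 in cache)      # ... but the cache side effect remains: True
```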

Parallelized execution isn't unique to CPU vendors. Pepelnjak said at least one hardware vendor created a version of IPv6 neighbor discovery that suffers from the same kind of vulnerability. In response to Meltdown, vendors are rolling out operating system patches that remove the kernel from the user-space address map. This approach prevents the exploit, but the kernel no longer has direct access to user space when it needs it. As a result, the virtual-to-physical page tables have to be switched so that user space is mapped into the kernel's page tables, and every system call, even reading a single byte from a file, means the kernel page tables must be mapped and unmapped again.

Explore more of Pepelnjak’s thoughts on network hardware vulnerabilities.

Campus network architecture strategies to blossom in 2018

Bob Laliberte, an analyst with Enterprise Strategy Group in Milford, Mass., said even though data center networking is slowing down in the face of cloud and edge computing, local and campus network architecture strategies are growing in importance as the new year unfolds.

After a long period of data center consolidation, demand for reduced latency — spurred by the growth of the internet of things (IoT) — and the evolution of such concepts as autonomous vehicles are driving a need for robust local networks.

At the same time, organizations have moved to full adoption of the cloud for roles beyond customer relationship management, and many have a cloud-first policy. As a result, campus network architecture strategies need to allow companies to use multiple clouds to control costs. In addition, good network connectivity is essential to permit access to the cloud on a continuous basis.

Campus network architecture plans must also accommodate Wi-Fi to guarantee user experience and to enable IoT support. The emergence of 5G will also continue to expand wireless capabilities.

Intent-based networks, meanwhile, will become a tool for abstraction and the introduction of automated tasks. “The network is going to have to be an enabler, not an anchor,” with greater abstraction, automation and awareness, Laliberte said.

Laliberte said he expects intent-based networks to be deployed in phases, in specific domains of the network, or to improve verification and insights. “Don’t expect your network admins to have Alexa architecting and building out your network,” he said. He added, though, that systems modeled after Alexa will become interfaces for network systems.

Explore more of Laliberte’s thoughts on networking.

BGP route selection and intent-based networking

Ivan Pepelnjak, writing in ipSpace, said pundits calling for the demise of Border Gateway Protocol (BGP) in favor of new SDN approaches often praise the concept of intent-based networking.

Yet, the methodologies behind intent-based networks fail when it comes to BGP route selection, he said. Routing protocols were, in fact, an early approach to the intent-based idea, although many marketers now selling intent-based systems are criticizing those very same protocols, Pepelnjak said. Without changing the route algorithm, the only option is for users to tweak the intent and hope for better results.

To deal with the challenges of BGP route selection, one option might involve a centralized controller with decentralized local versions of the software for fallback in case the controller fails. Yet, few would want to adopt that approach, Pepelnjak said, calling such a strategy “messy” and difficult to get right. Route selection is now being burdened with intent-driven considerations, such as weights, preferences and communities.

“In my biased view (because I don’t believe in fairy tales and magic), BGP is a pretty obvious lesson in what happens when you try to solve vague business rules with [an] intent-driven approach instead of writing your own code that does what you want to be done,” Pepelnjak wrote. “It will be great fun to watch how the next generation of intent-based solutions will fare. I haven’t seen anyone beating laws of physics or RFC 1925 Rule 11 yet,” he added.  

Dig deeper into Pepelnjak’s ideas about BGP route selection and intent-based networking.

Greater hybridization of data centers on the horizon

Chris Drake, an analyst with Current Analysis in Sterling, Va., said rising enterprise demand for hybrid cloud systems will fuel partnerships between hyperscale public cloud providers and traditional vendors. New alliances — such as the one between Google and Cisco — have joined existing cloud-vendor partnerships like those between Amazon and VMware and between Microsoft and NetApp, Drake said, and more are in the offing.

Moving and managing workloads across hybrid IT environments will be a key point of competition between providers, Drake said, perhaps including greater management capabilities to oversee diverse cloud systems.

Drake said he also expects a proliferation of strategies aimed at edge computing. The appearance of micro data centers and converged edge systems may decentralize large data centers. He also anticipates greater integration with machine learning and artificial intelligence, although, because of entrenched legacy technologies, actual deployments will remain gradual.

Read more of Drake’s assessment of 2018 data center trends.

Looking ahead to the biggest 2018 cybersecurity trends

Jon Oltsik, an analyst with Enterprise Strategy Group in Milford, Mass., examined some of the top 2018 cybersecurity trends. While some analysts have focused on ransomware, and others made dire pronouncements about nationwide power-grid attacks, Oltsik said he’s more concerned about cloud security, where easily exploitable vulnerabilities are becoming increasingly likely.

Security teams — many of which are facing a severe lack of cybersecurity skills — are struggling with the rapid deployment of cloud technologies, such as virtual machines, microservices and containers in systems such as Amazon Web Services or Azure. Many organizations are switching to high-end security options from managed security service providers or SaaS providers. ESG research indicated 56% of organizations are interested in security as a service.

Among other 2018 cybersecurity trends, Oltsik said he foresees greater integration of security products and the continued expansion of the security operations and analytics platform architecture model. As large vendors like Cisco, Splunk and Symantec scramble to catch up, they will fill holes in existing portfolios. Although he said he sees machine learning technology stuck in the hype cycle, in 2018, Oltsik projects machine learning will grow as a “helper app” in roles such as endpoint security or network security analytics.

With the introduction of the European Union’s General Data Protection Regulation (GDPR) on May 25, 2018, Oltsik said a major fine — perhaps as much as $100 million — may serve as a wake-up call to enterprises whose security platforms don’t meet the standard.

“One U.K. reseller I spoke with compared GDPR to Y2K, saying that service providers are at capacity, so if you need help with GDPR preparation, you are out of luck. As GDPR anarchy grips the continent next summer, look for the U.S. Congress to (finally) start engaging in serious data privacy discussions next fall,” he added.

Dig deeper into Oltsik’s ideas on 2018 cybersecurity trends.

The challenges of BGP

Ivan Pepelnjak, writing in ipSpace, said when Border Gateway Protocol (BGP) incidents occur, commentators often call for a better approach. “Like anything designed on a few napkins, BGP has its limit. They’re well-known, and most of them have to do with trusting your neighbors instead of checking what they tell you,” he said.

To resolve problems with BGP, Pepelnjak recommended the following: First, IT teams need to build a global repository of who owns which address. Second, they need to document who connects to whom and understand their peering policies. And they need to filter traffic from those addresses that are obviously spoofed.

The good news, Pepelnjak said, is that most BGP issues can be solved by following Best Current Practice (BCP) 194 on BGP operations and security. In Pepelnjak's view, internet service providers (ISPs) are often the problem. ISPs have little incentive to resolve BGP issues or reprimand customers who can easily switch to more permissive providers. An additional problem stems from internet exchange points running route servers without filters.

According to Pepelnjak, because engineers hate confrontation, they often turn to cryptographic tools, such as resource public key infrastructure, rather than fixing chaotic or nonexistent operational practices. “What we’d really need to have are (sic) driving licenses for ISPs, and some of them should be banned for good, due to repetitive drunk driving. Alas, I don’t see that happening in my lifetime,” he added.

Read more of Pepelnjak’s thoughts on BGP issues.

Artificial intelligence, low-code and abstracting infrastructure

Charlotte Dunlap, an analyst with GlobalData’s Current Analysis group in Sterling, Va., blogged about the repositioning of mobile enterprise application platforms (MEAP) to address app development and the internet of things. Dunlap said advancements in AI, API management and low-code tools play into DevOps teams' need for abstracted infrastructure.

GlobalData research indicated that MEAP is widely used to abstract complexity, particularly in use cases such as application lifecycle management related to AI-enabled automation or containerization.

GlobalData awarded high honors to vendors that integrated back-end data for API management, such as IBM MobileFirst and Kony AppPlatform. Dunlap said mobile service provider platform strategies have increasingly shifted to the needs of a DevOps model.

“Over the next 12 months, we’ll see continued momentum around a growing cloud ecosystem in order to stay competitive with broad platform services, including third-party offerings. Most dominant will be partnerships with Microsoft and Amazon for offering the highest levels of mobile innovation to the broadest audiences of developers and enterprises,” Dunlap said.

Explore more ideas from Dunlap on the changing nature of MEAP.

Azure Backup service adds layer of data protection

With ransomware and other threats on the rise, it has never been more important to have a solid backup strategy for company data and workloads. Microsoft’s Azure Backup service has matured into a product worth considering due to its centralized management and ease of use.

Whether it’s ransomware or other kinds of malware, the potential for data corruption is always lurking. That means that IT admins need a way to streamline backup procedures with the added protection and high availability made possible by the cloud.

Azure Backup protects on-premises workloads — SharePoint, SQL Server, Exchange, file servers, client machines and VMs — along with cloud resources such as infrastructure-as-a-service VMs, consolidating them into one recovery vault with solid data protection and restore capabilities. Administrators can monitor and start backup and recovery activities from a single Azure-based portal. After the initial setup, this arrangement lightens the burden on IT because offsite backups require minimal time and effort to maintain.

How Azure Backup works

The Azure Backup service stores data in what Microsoft calls a recovery vault, which is the central storage locker for the service whether the backup targets are in Azure or on premises.


The administrator needs to create the recovery vault before the Azure Backup service can be used. From the Azure console, select All services, type in Recovery Services and select Recovery Services vaults from the menu. Click Add, give it a name, associate it with an Azure subscription, choose a resource group and location, and click Create.

From there, to back up on-premises Windows Server machines, open the vault and click the Backup button. Azure will prompt for certain information: whether the workload is on premises or in the cloud and what to back up — files and folders, VMs, SQL Server, Exchange, SharePoint instances, system state information, and data to kick off a bare-metal recovery. When this is complete, click the Prepare Infrastructure link.


Configure backup for a Windows machine

The Microsoft Azure Recovery Services Agent (MARS) handles on-premises backups. Administrators download the MARS agent from the Prepare Infrastructure link — which also supplies the recovery vault credentials — and install it on the machines to protect. The agent then uses those vault credentials to link each on-premises machine to the Azure subscription and its recovery vault.

Azure Backup pricing

Microsoft determines Azure Backup pricing based on two components: the number of protected VMs or other instances — Microsoft charges for each discrete item to back up — and the amount of backup data stored within the service. The monthly pricing is as follows, with a quick worked example after the list:

  • for instances up to 50 GB, each instance is $5 per month, plus storage consumed;
  • for instances more than 50 GB, but under 500 GB, each instance is $10, plus storage consumed; and
  • for instances more than 500 GB, each instance is $10 per nearest 500 GB increment, plus storage consumed.
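As a rough sanity check on those tiers, here is a minimal cost sketch. The blob storage rate used is a placeholder assumption, since actual rates vary by region and redundancy as noted below, and "per nearest 500 GB increment" is read here as rounding up to the next increment.

```python
# Quick cost sketch based on the pricing tiers listed above.
# The blob price per GB is a placeholder assumption; real rates vary by region
# and redundancy. "Per nearest 500 GB increment" is interpreted as rounding up.
import math

def monthly_instance_fee(size_gb: float) -> float:
    if size_gb <= 50:
        return 5.0
    if size_gb <= 500:
        return 10.0
    return 10.0 * math.ceil(size_gb / 500)  # assumption: round up to the next 500 GB increment

def monthly_backup_cost(instance_sizes_gb, stored_backup_gb, blob_price_per_gb=0.0224):
    """Protected-instance fees plus consumed backup storage (placeholder storage rate)."""
    instance_fees = sum(monthly_instance_fee(s) for s in instance_sizes_gb)
    storage_cost = stored_backup_gb * blob_price_per_gb
    return instance_fees + storage_cost

# Example: three protected servers of 40 GB, 300 GB and 1.2 TB, keeping 2 TB of backup data.
print(monthly_backup_cost([40, 300, 1200], stored_backup_gb=2000))
```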

Microsoft bases its storage prices on block blob storage rates, which vary based on the Azure region. While it’s less expensive to use locally redundant blobs than geo-redundant blobs, local blobs are less fault-tolerant. Restore operations are free; Azure does not charge for outbound traffic from Azure to the local network.

Pros and cons of the Azure Backup service

The service has several features that are beneficial to the enterprise:

  • There is support to back up on-premises VMware VMs. Even though Azure is a Microsoft cloud service, the Azure Backup product will take VMware VMs as they are and back them up. It’s possible to install the agent inside the VM on the Windows Server workload, but it’s neater and cleaner to just back up the VM.
  • Administrators manage all backups from one console regardless of the target location. Microsoft continually refines the management features in the portal, which is very simple to use.
  • Azure manages storage needs and automatically adjusts as required. This avoids the challenges and capacity limits associated with on-premises backup tapes and hard drives.

The Azure Backup service isn’t perfect, however.

  • It requires some effort to understand pricing. Organizations must factor in what it protects and how much storage those instances will consume.
  • The Azure Backup service supports Linux, but it requires the use of a customized copy of System Center Data Protection Manager (DPM), which is more laborious compared to the simplicity and ease of MARS.
  • Backing up Exchange, SharePoint and SQL workloads requires the DPM version that supports those products. Microsoft includes it with the service costs, so there’s no separate licensing fee, but it still requires more work to deploy and understand.

The Azure Backup service is one of the more compelling administrative offerings from Microsoft. I would not recommend it as a company’s sole backup product — local backups are still very important, and even more so if time to restore is a crucial metric for the enterprise — but Azure Backup is a worthy addition to a layered backup strategy.