Tag Archives: Infrastructure

For VMware, DSC provides ESXi host and resource management

PowerShell Desired State Configuration (DSC) has been a favorite among Windows infrastructure engineers for years, and the advent of the VMware DSC module means users who already manage Windows servers with DSC can use it to manage VMware, too. As VMware has continued to develop the module, it has increased the number of vSphere components the tool can manage, including VMware Update Manager.

DSC has been the configuration management tool of choice for Windows since it was released. No other tool offers such a wide array of capabilities to manage a Windows OS in code instead of through a GUI.

VMware also uses PowerShell technology to manage vSphere. The vendor officially states that PowerCLI, its PowerShell module, is the best automation tool it offers. So, it only makes sense that VMware would eventually incorporate DSC so that its existing PowerShell customers can manage their assets in code.

Why use DSC?

Managing a machine through configuration as code is not new, especially in the world of DevOps. You write a server’s desired state in code, which lets you quickly resolve any configuration drift by reapplying that configuration frequently.

In vSphere, ESXi hosts in particular are prime candidates for this type of management. An ESXi host’s configuration does not change often, and when it does change, an admin typically makes that change deliberately. That makes hosts a natural fit for desired state: define the configuration once, and any subsequent change to the DSC configuration applies to all the hosts it covers.

You can use this tool to manage a number of vSphere components, such as VMware Update Manager and the vSphere Standard Switch.

How the LCM works

In DSC, the Local Configuration Manager (LCM) makes up the brains of a node. It takes in a configuration file, then parses and applies the changes locally.

ESXi and vCenter do not have an LCM, so in the context of vSphere, you must use an LCM proxy: a Windows machine running PowerShell v5.1 and PowerCLI 10.1.1.
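In practice, the proxy workflow follows standard DSC usage: author a configuration, compile it to a MOF document, then apply it locally with Start-DscConfiguration so the proxy pushes the settings to vSphere. The sketch below is illustrative only; the vCenter address, host name, NTP servers and credential variable are placeholders, and a real configuration must also pass the credential through DSC configuration data rather than inline.

```powershell
# Illustrative sketch, run on the LCM proxy machine.
# vcenter.example.com, esxi01.example.com and $vCenterCredential
# are hypothetical placeholders.
Configuration VMHostNtp {
    Import-DscResource -ModuleName VMware.vSphereDSC

    Node 'localhost' {
        VMHostNtpSettings 'Ntp' {
            Server     = 'vcenter.example.com'   # vCenter the proxy connects through
            Credential = $vCenterCredential      # PSCredential for vCenter
            Name       = 'esxi01.example.com'    # ESXi host to configure
            NtpServer  = @('0.pool.ntp.org', '1.pool.ntp.org')
        }
    }
}

VMHostNtp -OutputPath 'C:\Dsc\VMHostNtp'                        # compile to a MOF
Start-DscConfiguration -Path 'C:\Dsc\VMHostNtp' -Wait -Verbose  # apply on the proxy
```

Reapplying the same configuration on a schedule is what corrects drift: the LCM compares the current state against the compiled MOF and remediates any difference.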

Installing the module

Installing the module is simple, as the DSC module is part of PowerShell Gallery. It only takes a single cmdlet to install the module on your LCM proxy:

C:\> Install-Module -Name VMware.vSphereDSC

Updating the module when VMware releases new versions is also a simple task with the Update-Module cmdlet:

C:\> Update-Module vmware.vspheredsc

Resources

DSC ties a resource to a particular area of a system it can manage. The VMware.vSphereDSC module, for example, includes resources for many aspects of vSphere, such as the following:

C:\Users\dan> Get-DscResource -Module vmware.vspheredsc | Select Name

Name
----
Cluster
Datacenter
DatacenterFolder
DrsCluster
Folder
HACluster
PowerCLISettings
vCenterSettings
vCenterStatistics
VMHostAccount
VMHostDnsSettings
VMHostNtpSettings
VMHostSatpClaimRule
VMHostService
VMHostSettings
VMHostSyslog
VMHostTpsSettings
VMHostVss
VMHostVssBridge
VMHostVssSecurity
VMHostVssShaping
VMHostVssTeaming

Many of these resources are associated with ESXi hosts: you can manage settings such as accounts, Network Time Protocol and services through DSC. For clusters, you can manage settings such as HAEnabled, Distributed Resource Scheduler automation and DRS distribution. You can view the syntax of any resource with the Get-DscResource cmdlet:

C:\> Get-DscResource -Name Cluster -Module vmware.vspheredsc -Syntax
Cluster [String] #ResourceName
{
[DependsOn = [String[]]]
[PsDscRunAsCredential = [PSCredential]]
Server = [String]
Credential = [PSCredential]
Name = [String]
Location = [String]
DatacenterName = [String]
DatacenterLocation = [String]
Ensure = [String]
[HAEnabled = [Boolean]]
[HAAdmissionControlEnabled = [Boolean]]
[HAFailoverLevel = [Int32]]
[HAIsolationResponse = [String]]
[HARestartPriority = [String]]
[DrsEnabled = [Boolean]]
[DrsAutomationLevel = [String]]
[DrsMigrationThreshold = [Int32]]
[DrsDistribution = [Int32]]
[MemoryLoadBalancing = [Int32]]
[CPUOverCommitment = [Int32]]
}
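Tying that syntax together, a configuration using the Cluster resource might look like the following hedged sketch. The vCenter name, credential variable and datacenter values are placeholders, and the DRS automation level shown is an assumption based on the syntax output above.

```powershell
# Hypothetical example; server, credential and datacenter names are placeholders.
Configuration ProdCluster {
    Import-DscResource -ModuleName VMware.vSphereDSC

    Node 'localhost' {
        Cluster 'Compute' {
            Server             = 'vcenter.example.com'  # placeholder vCenter
            Credential         = $vCenterCredential     # placeholder PSCredential
            Name               = 'Prod-Cluster'         # cluster to create or manage
            Location           = ''                     # root of the datacenter's host folder
            DatacenterName     = 'DC01'
            DatacenterLocation = ''
            Ensure             = 'Present'
            HAEnabled          = $true                  # turn on vSphere HA
            DrsEnabled         = $true                  # turn on DRS
            DrsAutomationLevel = 'FullyAutomated'
        }
    }
}
```

Because Ensure is set to 'Present', reapplying this configuration creates the cluster if it is missing and re-enables HA or DRS if someone has switched them off.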

With the capabilities of DSC now available to VMware admins as well as Windows admins, teams can control a variety of server settings through code, which makes vSphere and vCenter automation easy and accessible. They can apply broad changes across an entire infrastructure of hosts and ensure consistent configuration.

Pure Storage cloud sales surge, but earnings miss the target

Add Pure Storage to the list of infrastructure vendors that sense a softening global demand. The all-flash pioneer put the best face on last quarter’s financial numbers, focusing on solid margins and revenue, while downplaying its second earnings miss in the last three quarters.

Demand for Pure Storage cloud services boosted revenue to $428.4 million for the quarter that ended Oct. 31. That’s up 15% year over year, but lower than the $440 million expectation on Wall Street.

Pure Storage launched as a startup in 2009 and has grown steadily to a publicly traded company with $1.5 billion in revenue. On Pure’s earnings call last week, CEO Charles Giancarlo blamed the revenue miss on declining flash prices. Giancarlo said U.S. trade tensions with China and uncertainty surrounding Brexit create economic headwinds for infrastructure vendors — concerns also voiced recently by rivals Dell EMC and NetApp.

Pure: Looking for bright spot in cloud

Like most major storage vendors, Pure is rebranding to tap into the burgeoning demand for hybrid cloud. Recent additions to the Pure Storage cloud portfolio include Cloud Block Store, which allows users to run Pure’s FlashArray systems in Amazon Web Services, and consumption-based Pure as a Service (ES2), formerly Pure Evergreen.

Pure said deferred licensing revenue of $643 million rose 39%, fueled by record growth of ES2 sales. The Pure Storage cloud strategy resonates with customers that want storage with cloudlike agility, company executives said.

“Data storage still remains the least cloudlike layer of technology in the data center. Delivering data storage in an enterprise is still an extraordinarily manual process with storage arrays highly customized and dedicated to particular workloads,” Giancarlo said.

Pure claims it added nearly 400 customers last quarter, bringing its total to more than 7,000. That includes cloud IT services provider ServiceNow, which implements Pure Storage all-flash storage to underpin its production cloud.

“Companies are realizing IT services are not their main line of business — that a cloud-hosted services model is generally better. We’re right in the middle of that. We build enterprise data services and do all the work to manage the cloud” for corporate customers, Keith Martin, ServiceNow’s director of cloud capacity engineering, told SearchStorage in an interview this year.

Pure will use its increased product margin — which jumped 4.5 points last quarter to 73% — to ensure it “won’t lose on price” in competitive deals, outgoing president David Hatfield said.

A strong pipeline of Pure Storage cloud and on-premises deals gives it the ability to bundle multiple products and sell more terabytes. “It’s just taking a little bit longer from a deal-push perspective, but our win rates are holding nicely,” Hatfield said.

Hatfield said he is stepping away from president duties to deal with a family health issue, but he will remain Pure’s vice chairman and special advisor to Giancarlo. Former Riverbed Technology CEO Paul Mountford was introduced as Pure’s new COO. Kevan Krysler, most recently VMware’s senior vice president of finance and chief accounting officer, will take over in December as Pure’s CFO. He replaces Tim Riitters, who announced his departure in August.

Dell EMC upgrades VxRail appliances for AI, SAP HANA

Dell EMC today added predictive analytics and network management to its VxRail hyper-converged infrastructure family while expanding NVMe support for SAP HANA and AI workloads.

Dell EMC VxRail appliances combine Dell PowerEdge servers and Dell-owned VMware’s vSAN hyperconverged infrastructure (HCI) software. The launch of Dell’s flagship HCI platform includes two new all-NVMe appliance configurations, plus VxRail Analytic Consulting Engine (ACE) and support for SmartFabric Services (SFS) across multi-rack configurations.

The new Dell EMC VxRail appliance models are the P580N and the E560N. The P580N is a four-socket system designed for SAP HANA in-memory database workloads; it is the first appliance in the VxRail P Series performance line to support NVMe. The 1U E560N is aimed at high-performance computing and compute-heavy workloads such as AI and machine learning, along with virtual desktop infrastructure.

The new 1U E Series systems support Nvidia T4 GPUs for extra processing power. The E Series also supports 8 TB solid-state drives, doubling the total capacity of previous models. The VxRail storage-heavy S570 nodes also now support the 8 TB SSDs.

ACE is generally available following a six-month early access program. Developed on Dell’s Pivotal Cloud Foundry platform, ACE performs monitoring and performance analytics across VxRail clusters, alerts on possible system problems, analyzes capacity and can help orchestrate upgrades.

The addition of ACE to VxRail comes a week after Dell EMC rival Hewlett Packard Enterprise made its InfoSight predictive analytics available on its SimpliVity HCI platform.

Wikibon senior analyst Stuart Miniman said the analytics, SFS and new VxRail appliances make it easier to manage HCI while expanding its use cases.

“Hyperconverged infrastructure is supposed to be simple,” he said. “When you add in AI and automated operations, that will make it simpler. We’ve been talking about intelligence and automation of storage our whole careers, but there has been a Cambrian explosion in that over the last year. Now they’re building analytics and automation into this platform.”

Bringing network management into HCI

Part of that simplicity includes making it easier to manage networking in HCI. Expanded capabilities for SFS on VxRail include the ability for HCI admins to manage networking switches across VxRail clusters without requiring dedicated networking expertise. SFS now applies across multi-rack VxRail clusters, automating switch configuration for up to six racks in one site. SFS supports from six switches in a two-rack configuration to 14 switches in a six-rack deployment.

Support for Mellanox 100 Gigabit Ethernet PCIe cards helps accelerate streaming media and live broadcast functions.

“We believe that automation across the data center is key to fostering operational freedom,” Gil Shneorson, Dell EMC vice president and general manager for VxRail, wrote in a blog with details of today’s upgrades. “As customers expand VxRail clusters across multiple racks, their networking needs expand as well.”

Dell EMC VxRail vs. Nutanix: All about the hypervisor?

IDC lists Dell as the leader in the hyper-converged appliance market, which IDC said hit $1.8 billion in the second quarter of 2019. Dell had 29.2% of the market, well ahead of second-place Nutanix with 14.2%. Cisco was a distant third with 6.2%.

According to Miniman, the difference between Dell EMC and Nutanix often comes down to the hypervisor deployed by the user. VxRail closely supports market leader VMware, but VxRail appliances do not support other hypervisors. Nutanix supports VMware, Microsoft Hyper-V and the Nutanix AHV hypervisors. The Nutanix software stack competes with vSAN.

“Dell and Nutanix are close on feature parity,” Miniman said. “If you’re using VMware, then VxRail is the leading choice because it’s 100% VMware. VxRail is in lockstep with VMware, while Nutanix is obviously not in lockstep with VMware.”

AWS gets behind Rust programming language

AWS has gotten behind the Rust programming language in a big way, to the point where the cloud infrastructure giant has become a sponsor of the language.

Since its first stable release four years ago, Rust has emerged as a viable alternative to C++. Known for enabling developers to build high-performing, reliable applications, as well as for boosting programmer productivity, Rust has been adopted as a system programming language by companies including Google, Microsoft, Mozilla, Yelp, Dropbox, Cloudflare and AWS.

“Rust is the first real alternative to C++ that we’ve seen in a long time,” said Cameron Purdy, CEO of Xqiz.it, a Lexington, Mass., startup developing its own programming language, known as Ecstasy. “Rust is built for systems-level work, and appears to be far better thought out than C++ was.”

Indeed, “Rust is making significant inroads as a language for systems programming,” said James Governor, an analyst at RedMonk.

The use of Rust at AWS has grown, as services such as Lambda, EC2 and S3 use Rust in performance-sensitive components. Also, AWS’s Firecracker virtualization technology is written using Rust.

The AWS sponsorship of Rust includes supporting the Rust project infrastructure. AWS provides promotional credits to the Rust project to be used to perform upstream and performance testing, CI/CD or storage of artifacts on AWS, the company said in a blog post. AWS also is offering similar promotional credits to other open source projects, including AdoptOpenJDK, Maven Central and the Julia programming language.

“I think AWS is looking for opportunities to blunt the criticism — undeserved or not — that while it is a consumer and beneficiary of OSS, it’s not a producer or community supporter of it,” said Jeffrey Hammond, an analyst at Forrester Research. “Projects like Corretto, Firecracker and sponsorship projects like this all go to counter that narrative.”

According to AWS, the Rust project uses AWS services to:

  • Store release artifacts such as compilers, libraries, tools and source code on S3.
  • Run ecosystem-wide regression tests with Crater on EC2.
  • Operate docs.rs, a website that hosts documentation for all packages published to the central crates.io package registry.

“It’s interesting that AWS recently made this approach explicit, but AWS is not alone,” Governor said. “I talk a lot about folks being ‘Rust curious,’ but it appears we’re now moving beyond curiosity. Microsoft is another major player making a strong call for more Rust-based development. Rust is no longer something for developers to play with on their weekends. It’s becoming a language of infrastructure.”


Rust has been ranked as the “most loved” programming language in the annual Stack Overflow developer survey for four years in a row. With no runtime or garbage collector, Rust delivers faster performance. Rust also provides memory and thread safety, which helps to eliminate bugs.

In July, Microsoft said it was looking at Rust as an alternative to C and C++ based on its safety and performance. In other words, Rust enables developers to create secure, high-performance applications, said Ryan Levick, a principal cloud developer advocate at Microsoft, in a blog post.

“We believe Rust changes the game when it comes to writing safe systems software,” Levick said. “Rust provides the performance and control needed to write low-level systems, while empowering software developers to write robust, secure programs.”

However, Microsoft found some issues with Rust that will need to be addressed, including the lack of first-class interoperability with C++, and interoperability with existing Microsoft tooling, Levick said.

Holger Mueller, an analyst at Constellation Research in San Francisco, said the race for cloud market leadership is based on attracting developers to build next-generation applications on the leading cloud platforms.

“From time to time there is a new programming language that catches the attention of developers, usually for productivity and/or capability reasons,” he said. “That’s the case with Rust, which is gaining quickly in popularity and, hence, large IaaS providers need to support Rust.”

Know your Office 365 backup options — just in case

Exchange administrators who migrate their email to Office 365 reduce their infrastructure responsibilities, but they must not ignore areas related to disaster recovery, security, compliance and email availability.

Different businesses rely on different applications for their day-to-day operations: a healthcare company uses medical records to treat patients; a manufacturing plant needs its ERP system to track production. But generally speaking, most businesses, regardless of vertical, rely on email to communicate with co-workers and customers. If the messaging platform goes down for any amount of time, users and the business suffer. A move to Microsoft’s cloud-based collaboration platform introduces new administrative challenges, such as determining whether the organization needs an Office 365 backup product.

IT pros tasked with all things related to Exchange Server administration — system uptime, mailbox recoverability, system performance, maintenance, user setup and general reactive system issues — will have to adjust when they move to Office 365. Many of the responsibilities related to system performance, maintenance and uptime shift to Microsoft. Unfortunately, not all of these outsourced activities meet the expectations of Exchange administrators, and some will resort to alternative methods to ensure their systems have the right protections to avoid serious disasters.

To keep on-premises Exchange running with high uptime, Exchange admins rely on setting up the environment with adequate redundancies, such as virtualization with high availability, clustering and proper backup if a recovery is required. In a hosted Exchange model with Office 365, email administrators rely heavily on the hosting provider to manage those redundancies and ensure system uptime. However, despite the promised service-level agreements (SLAs) by Microsoft, there are still some gaps that Exchange administrators must plan for to get the same level of system availability and data protection they previously experienced with their legacy on-premises Exchange platform.

Hosted email in Exchange Online, which can be purchased as a stand-alone service or as part of Office 365, has certainly attracted many companies. Microsoft did not provide exact numbers in its most recent quarterly report, but estimates put the figure at around 180 million Office 365 commercial seats. Despite the platform’s popularity, one would assume Microsoft would offer an Office 365 backup option, at minimum for the email service. Microsoft does, but not in the way Exchange administrators know backup and disaster recovery.

Microsoft does not have backups for Exchange Online

Microsoft provides some level of recoverability for mailboxes stored in Exchange Online. If a user loses email, the Exchange administrator can restore it through the Outlook recycle bin or by restoring an entire mailbox with PowerShell.

The Undo-SoftDeletedMailbox PowerShell command recovers a deleted mailbox, but there are limitations. The command is only useful when a significant number of folders have been deleted from a mailbox, and the recovery attempt must occur within 30 days. After 30 days, the content is not recoverable.
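A rough sketch of that recovery path follows. The mailbox address is a hypothetical placeholder, and the parameter names should be verified against Microsoft’s current Exchange Online documentation before use.

```powershell
# Hypothetical example; jsmith@example.com is a placeholder.
# List mailboxes still inside the 30-day soft-deleted window.
Get-Mailbox -SoftDeletedMailbox | Select-Object DisplayName, WhenSoftDeleted

# Recover one of them back to an active mailbox.
Undo-SoftDeletedMailbox -SoftDeletedObject 'jsmith@example.com' `
    -WindowsLiveID 'jsmith@example.com' `
    -Password (Read-Host 'New password' -AsSecureString)
```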

Due to this limited backup functionality, many administrators look to third-party Office 365 backup vendors such as SkyKick, BitTitan, Datto and Veeam to extend backup and recovery beyond the 30 days Microsoft offers. At the moment, this is the only way for Exchange administrators to satisfy their organization’s backup and disaster recovery requirements.

Microsoft promises 99.9% uptime with email

No cloud provider is immune to outages, and Microsoft is no different. Despite instances of service loss, Microsoft guarantees at least 99.9% uptime for Office 365. That SLA translates into no more than roughly nine hours of downtime per year.

For most IT executives, this guarantee does not absolve them of the need to plan for possible downtime. Administrators should investigate the costs and the technical abilities of an email continuity service from vendors, including Mimecast, Barracuda or TitanHQ, to avoid trouble from unplanned outages.

Email retention policies can go a long way for sensitive content

The ability to define different types of data access and retention policies is just as important as backup and disaster recovery for organizations with compliance requirements.

Groups that need to prevent accidental email deletion will need to work with the Office 365 administrator to set up the appropriate hold policies or archiving configuration to protect that content. These are native features in Exchange Online that administrators must build familiarity with to ensure they can meet the different legal requirements of the various groups in their organization.

Define backup retention policies to meet business needs

For most on-premises Exchange backup offerings, storage is a constant concern for administrators. Because available disk space generally dictates the retention period of email backups, Exchange admins must keep it in mind when they determine the best backup scheme for their organization. Hourly, daily, weekly, monthly and quarterly backup schedules are all influenced by the amount of available storage.

Office 365 backup products for email from vendors such as SkyKick, Dropsuite, Acronis and Datto ease the concerns related to storage space. This gives administrators a way to develop the best protection scheme for their company without worrying about when to purchase additional storage hardware to accommodate those backups.

VMware vSAN HCI: Complete stack or ‘vaporware’?

Days after VMware’s CEO proclaimed his vSAN product the winner in the hyper-converged infrastructure space, the CEO of VMware rival Nutanix countered that VMware “sells a lot of vaporware.”

“We’re crushing Nu … I mean we’re winning in the marketplace,” VMware CEO Pat Gelsinger said during his opening VMworld keynote last week. “We’re separating from No. 2. We’re winning in the space.”

Two days later on Nutanix’s earnings call, CEO Dheeraj Pandey took a shot at VMware without mentioning the company by name. “We don’t sell vaporware,” he said, when referring to why Nutanix wins in competitive deals.

In an exclusive interview after the call, Pandey admitted the vaporware charge was aimed mostly at VMware’s vSAN HCI software.

“VMware sells a lot of vaporware,” Pandey said. “A lot of that vaporware becomes evident to customers who buy that stuff. When bundled products don’t deliver on their promise, they call us. What we sell is not shelfware.”

Whatever VMware is selling with its vSAN HCI software, it is working. VMware reported license bookings of its vSAN HCI software grew 45% year-over-year last quarter, while Nutanix revenue and bookings slipped from last year. VMware’s parent Dell also claimed a 77% increase in orders of its Dell EMC VxRail HCI appliances that run vSAN software. Those numbers suggest Dell increased market share against Nutanix, even if Nutanix did better than expected last quarter following a disappointing period. IDC listed VMware as the HCI software market leader and Dell as the hardware HCI leader in the first quarter of 2019, with Nutanix second in both categories. Gartner lists Nutanix as the HCI software leader, but No. 2 VMware made up ground in Gartner’s first-quarter numbers.

Nutanix’s Pandey attributed at least some of VMware’s HCI success to bundling its vSAN software with its overall virtualization stack. Like VMware, Nutanix has its own hypervisor (AHV) and its share of hardware partners — including Dell — but VMware has a huge vSphere installed base to sell vSAN into.

Pandey said he was unimpressed by VMware’s Kubernetes and open source plans laid out at VMworld, which included Tanzu and Project Pacific. Both are still roadmap items but reflect a commitment from VMware to containers and open source software.

“That’s worse than vaporware, that’s slideware,” Pandey said of VMware’s announcements. “Everything works in slides. We’re based on Linux; we get a lot of leverage out of open source. AHV was based on Linux, and we’ve made it enterprise grade.”

Making vSAN part of its vSphere virtualization platform has paid off for VMware. Customers at VMworld pointed to their familiarity with VMware and vSAN’s integration with vSphere, and its NSX software-defined networking as reasons for going with vSAN HCI.

“What really ended up selling it for us was that we were already using VMware for our base product, and the vast majority of the deliverables that our customers request are in vSphere,” said Lester Shisler, senior IT systems engineer at Harmony Healthcare IT, based in South Bend, Ind. “So whatever pain points we learned along the way with vSAN, we were going to have to learn [with a competing HCI product] as well, along with new software and new management and everything else.”

Matthew Douglas, chief enterprise architect at Sentara Healthcare in Norfolk, Va., said Nutanix was among the HCI options he looked at before picking vSAN.

“VMware was ultimately the choice,” he said. “All the others were missing some components. VMware was a consistent platform for hyper-converged infrastructure. Plus, there was NSX and all these things that fit together in a nice, uniform fashion. And as an enterprise, I couldn’t make a choice of all these independent different tools. Having one consistent tool was the differentiator.”

Despite losing share, Nutanix’s last-quarter results were mixed. Its revenue of $300 million and billings of $372 million were both down from last year but better than expected following the disappointing previous quarter. Nutanix’s software and support revenue of $287 million increased 7%, a good sign for the HCI pioneer’s move to a software-centric business model. Nutanix also reported a 16% growth in deals over $1 million from the previous quarter.

However, operating expenses also increased. Sales and marketing spend jumped to $254 million from $183 million the previous year. Nutanix, which has never recorded a profit, lost $194 million in the quarter — more than double its losses from a year ago. It finished the quarter with $909 million in cash, down from $943 million last year.

Pandey said he is more concerned about growth and customer acquisition than profitability.

“Profitability is a nuanced word,” Pandey said. “We defer so much in our balance sheet. Right now we care about doing right by the customer when we sell them subscriptions.”

IDC: SD-WAN market spend to top $5B in 2023

The global software-defined WAN infrastructure market will grow an average of nearly 31% annually through 2023 as vendors feed enterprise hunger for technology that connects employees to applications running on multiple cloud service providers.

That’s one of the findings of IDC’s latest SD-WAN forecast. The research firm said the market would reach $5.25 billion in 2023 from $1.4 billion in 2018, the beginning of the forecast period.

Enterprises have found SD-WAN a necessary technology for connecting branch locations and remote offices with SaaS applications and software running on public clouds, such as AWS and Microsoft Azure. Traditional WAN technology lacks most of the features needed for connecting to cloud and SaaS applications, such as simplified management, cost-effective bandwidth utilization and WAN flexibility, efficiency and security, IDC said.

The demand for SD-WAN will fuel a continuation of market consolidation through acquisition as companies with stronger business models buy weaker vendors for their intellectual property, customer base or presence in specific geographical regions, IDC said.

SD-WAN market consolidation

The SD-WAN market today has more than three dozen vendors, which is more than the market can support, analysts have said. The most significant acquisitions to date include VMware purchasing VeloCloud and Cisco Systems acquiring Viptela in 2017, and Oracle picking up Talari Networks in 2018.

Other trends spotted by IDC include SD-WAN evolving from a standalone product to a key feature within a broader SD-branch platform that encompasses additional network and security services.

“Vendors will compete intensely on this front during the next few years,” the IDC report said.

Businesses with lots of branch and remote offices are deploying SD-branch technology to simplify network operations through consolidation of WAN connectivity, network security, LAN and Wi-Fi in a unified platform, according to Lee Doyle, principal analyst for Doyle Research. Network and security vendors offering SD-branch options include Cisco Meraki, Cradlepoint, Fortinet, Hewlett Packard Enterprise’s Aruba Networks, Riverbed and Versa Networks.

Market share leaders

IDC defines SD-WAN infrastructure as comprising edge routing software or hardware, plus traditional routers and WAN optimization technology when they are in-use, integrated components of an SD-WAN product.

Other infrastructure components include SD-WAN controllers for centralized implementation of application policy and WAN routing, network visibility and analytics.

Based on IDC’s definition of SD-WAN infrastructure, Cisco’s broad portfolio of hardware and software made it the market leader with a 46.4% share, the researcher said. VMware, which sells only software, was second with an 8.8% share, followed by Silver Peak, 7.4%; Nuage Networks, a Nokia company, 4.9%; and Riverbed, 4.3%.

New Contentful CMS targets content delivery for retailers

Contentful has launched a content infrastructure system designed to drive online sales by enabling retailers to manage content across channels.

Like a headless content management system, the Contentful CMS allows users to publish and update content across all digital platforms at once, but at an enterprise-grade scale. The vendor claimed content infrastructure enables retailers to repurpose existing content, improve impact and deliver marketing messages to target audiences.

Headless CMS enables content creation and sharing across multiple channels with one action by removing the head — or presentation layer — which defines the channel or platform in a traditional CMS. Content infrastructure has the same benefits as headless CMS, but unifies content to be managed from one content hub.

Contentful claimed content infrastructure gets digital content to market four to seven times faster than a traditional CMS by enabling users to do the following:

  • organize content specific to their business;
  • create content once for different platforms;
  • store all content in a central hub;
  • edit content without the involvement of developers;
  • manage teams with roles and permissions; and
  • publish content to any device.

Contentful intends its content infrastructure to enable brands to build and manage targeted, customized marketing for event-driven campaigns and localize the content for any market. Through the vendor’s Content Delivery API, editors can update content through a web app synced with any platform for consistent management.

The vendor claimed its array of content management services has decreased bounce rates, increased mobile conversions, personalized content across a breadth of languages and locales, updated content in a fraction of the time of legacy tools and delivered new customer touch points five times faster than a traditional CMS.

In 2018, Contentful was named a contender in Forrester's Wave for web content management systems, trailing leaders Adobe, Acquia and Sitecore. According to Contentful, its headless enterprise focus makes it flexible for developers. Forrester recommended the vendor for progressive digital initiatives that require content unification across channels and have easy access to developer resources.


DataCore adds new HCI, analytics, subscription price options

Storage virtualization pioneer DataCore Software revamped its strategy with a new hyper-converged infrastructure appliance, cloud-based predictive analytics service and subscription-based licensing option.

DataCore launched the new offerings this week as part of an expansive DataCore One software-defined storage (SDS) vision that spans primary, secondary, backup and archival storage across data center, cloud and edge sites.

For the last two decades, customers have largely relied on authorized partners and OEMs, such as Lenovo and Western Digital, to buy the hardware to run their DataCore storage software. But next Monday, they’ll find new 1U and 2U DataCore-branded HCI-Flex appliance options that bundle DataCore software and VMware vSphere or Microsoft Hyper-V virtualization technology on Dell EMC hardware. Pricing starts at $21,494 for a 1U box, with 3 TB of usable SSD capacity.

The HCI-Flex appliance reflects “the new thinking of the new DataCore,” said Gerardo Dada, who joined the company last year as chief marketing officer.

DataCore software can pool and manage internal storage, as well as external storage systems from other manufacturers. Standard features include parallel I/O to accelerate performance, automated data tiering, synchronous and asynchronous replication, and thin provisioning.

New DataCore SDS brand

In April 2018, DataCore unified and rebranded its flagship SANsymphony software-defined storage and Hyperconverged Virtual SAN software as DataCore SDS. Although the company’s website continues to feature the original product names, DataCore will gradually transition to the new name, said Augie Gonzalez, director of product marketing at DataCore, based in Fort Lauderdale, Fla.

With the product rebranding, DataCore also switched to simpler per-terabyte pricing instead of charging customers based on a la carte features, nodes with capacity limits and separate expansion capacity. With this week's strategic relaunch, DataCore is adding the option of subscription-based pricing.

Just as DataCore faced competitive pressure to add predictive analytics, the company also needed to provide a subscription option, because many other vendors offer it, said Randy Kerns, a senior strategist at Evaluator Group, based in Boulder, Colo. Kerns said consumption-based pricing has become a requirement for storage vendors competing against the public cloud.

“And it’s good for customers. It certainly is a rescue, if you will, for an IT operation where capital is difficult to come by,” Kerns said, noting that capital expense approvals are becoming a bigger issue at many organizations. He added that human nature also comes into play. “If it’s easier for them to get the approvals with an operational expense than having to go through a large justification process, they’ll go with the path of least resistance,” he said.

DataCore software-defined storage dashboard

DataCore Insight Services

DataCore SDS subscribers will gain access to the new Microsoft Azure-hosted DataCore Insight Services. DIS uses telemetry-based data the vendor has collected from thousands of SANsymphony installations to detect problems, determine best-practice recommendations and plan capacity. The vendor claimed it has more than 10,000 customers.

Like many storage vendors, DataCore will use machine learning and artificial intelligence to analyze the data and help customers anticipate and correct issues before they cause problems. Subscribers will be able to access the information through a cloud-based user interface that is paired with a local web-based DataCore SDS management console to provide resolution steps, according to Steven Hunt, a director of product management at the company.

New DataCore HCI-Flex appliance model on Dell hardware

DataCore customers with perpetual licenses will not have access to DIS. But, for a limited time, the vendor plans to offer a program for them to activate new subscription licenses. Gonzalez said DataCore would apply the annual maintenance and support fees on their perpetual licenses to the corresponding DataCore SDS subscription, so there would be no additional cost. He said the program will run at least through the end of 2019.

Shifting to subscription-based pricing to gain access to DIS could cost a customer more money than perpetual licenses in the long run.

“But this is a service that is cloud-hosted, so it’s difficult from a business perspective to offer it to someone who has a perpetual license,” Dada said.

Johnathan Kendrick, director of business development at DataCore channel partner Universal Systems, said his customers who were briefed on DIS have asked what they need to do to access the services. He said he expects even current customers will want to move to a subscription model to get DIS.

“If you’re an enterprise organization and your data is important, going down for any amount of time will cost your company a lot of money. To be able to see [potential issues] before they happen and have a chance to fix that is a big deal,” he said.

Customers have the option of three DataCore SDS editions: enterprise (EN) for the highest performance and richest feature set, standard (ST) for midrange deployments, and large-scale (LS) for secondary “cheap and deep” storage, Gonzalez said.

Price comparison

Pricing is $416 per terabyte for a one-year subscription of the ST option, with support and software updates. The cost for a perpetual ST license is $833 per terabyte, inclusive of one year of support and software updates. The subsequent annual support and maintenance fees are 20% of the license price, or $166 per terabyte per year, Gonzalez said. He added that loyalty discounts are available.
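A back-of-the-envelope comparison using only the published ST-edition list prices shows where the two models cross over; real-world figures would vary with discounts:

```python
def cumulative_cost(years, subscription_per_tb=416.0,
                    perpetual_per_tb=833.0, annual_support=166.0):
    """Cumulative per-terabyte cost of subscription vs. perpetual licensing.

    The perpetual license includes the first year of support; each
    subsequent year adds the annual support and maintenance fee.
    """
    subscription = subscription_per_tb * years
    perpetual = perpetual_per_tb + annual_support * (years - 1)
    return subscription, perpetual

for n in (1, 2, 3, 5):
    sub, perp = cumulative_cost(n)
    print(f"Year {n}: subscription ${sub:,.0f}/TB vs. perpetual ${perp:,.0f}/TB")
```

On these list prices, the subscription is cheaper for the first two years ($832 vs. $999 per terabyte cumulatively at year two), but the perpetual license pulls ahead from year three onward ($1,248 vs. $1,165), consistent with the long-run caveat above.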

The new PSP 9 DataCore SDS update that will become generally available in mid-July includes new features, such as AES 256-bit data-at-rest encryption that can be used across pools of storage arrays, support for VMware’s Virtual Volumes 2.0 technology and UI improvements.

DataCore plans another 2019 product update that will include enhanced file access and object storage options, Gonzalez said.

This week’s DataCore One strategic launch comes 15 months after Dave Zabrowski replaced founder George Teixeira as CEO. Teixeira remains with DataCore as chairman.

“They’re serious about pushing toward the future, with the new CEO, new brand, new pricing model and this push to fulfill more of the software-defined stack down the road, adding more long-term archive type storage,” Jeff Kato, a senior analyst at Taneja Group in West Dennis, Mass., said of DataCore. “They could have just hunkered down and stayed where they were at and rested on their installed base. But the fact that they’ve modernized and gone for the future vision means that they want to take a shot at it.

“This was necessary for them,” Kato said. “All the major vendors now have their own software-defined storage stacks, and they have a lot of competition.”


Transforming IT infrastructure and operations to drive digital business

It’s time for organizations to modernize their IT infrastructure and operations to not just support, but to drive digital business, according to Gregory Murray, research director at Gartner.

But to complete that transformation, organizations need to first understand their desired future state, he added.

“The future state for the vast majority of organizations is going to be a blend of cloud, on prem and off prem,” Murray told the audience at the recent Gartner Catalyst conference. “What’s driving this is the opposing forces of speed and control.”

From 2016 to 2024, the percentage of new workloads that will be deployed through on-premises data centers is going to plummet from about 80% to less than 20%, Gartner predicts. During the same period, cloud adoption will explode — going from less than 10% to as much as 45% — with off-premises, colocation and managed hosting facilities also picking up more workloads.

IT infrastructure needs to provide capabilities across these platforms, and operations must tackle the management challenges that come with it, Murray said.

How to transform IT infrastructure and operations

Once organizations have defined their future state — and Murray urged them to start by developing a public cloud strategy to determine which applications will live in the cloud — they should begin modernizing their infrastructure, he said.

“Programmatic control is the key to enabling automation and automation is, of course, critical to addressing the disparity between the speed that we can deliver and execute in cloud, and improving our speed of execution on prem,” he said. 

Organizations will also need developers with the skills to take advantage of that programmatic control, he said. Another piece of the automation equation when modernizing infrastructure to gain speed is standardization, he added.

The future state for the vast majority of organizations is going to be a blend of cloud, on prem and off prem.
Gregory Murrayresearch director, Gartner

“We need to standardize around those programmatic building blocks, either by using individual components of software-defined networking, software-defined compute and software-defined storage, or by using a hyper-converged system.”

Hyper-converged simplifies the complexity associated with establishing programmatic control and helps create a unified API for infrastructure, he said.

Organizations also need to consider how to uplevel their standardization, according to Murray. This is where containers come into play. The container becomes the atomic unit of deployment: it is specific to an application, and it abstracts away many of the dependencies and complications that come with moving an application independently of its operating system, he explained.

“And if we can do that, now I have a construct that I can standardize around and deploy into cloud, into on prem, into off prem and give it straight to my developers and give them the ability to move quickly and deploy their applications,” he said.
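The portability Murray describes is what a container image provides. As a purely illustrative sketch (the application, base image and port below are hypothetical, not anything Murray cited), a minimal Dockerfile bundles an app with its dependencies so the same artifact can be deployed on prem, off prem or in a public cloud:

```dockerfile
# Hypothetical example: package a Node.js service as a portable container image.
FROM node:18-alpine
WORKDIR /app
# Copy and install dependencies first so this layer is cached across code changes
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application code itself
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

The resulting image is the standardized building block Murray refers to: developers hand the same artifact to on-premises clusters or cloud services without repackaging it per environment.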

Hybrid is the new normal

To embrace this hybrid environment, Murray said organizations should establish a fundamental substrate to unify these environments.

“The two pieces that are so fundamental that they precede any sort of hybrid integration is the concept of networks — specifically your WAN and WAN strategy across your providers — and identity,” Murray said. “If I don’t have fundamental identity constructs, governance will be impossible.”

Organizations looking to modernize their network for hybrid capabilities should turn to SD-WAN, Murray said. This provides software-defined control that extends outside of the data center and allows a programmatic approach and automation around WAN connectivity to help keep that hybrid environment working together, he explained.

But to get that framework of governance in place across this hybrid environment requires a layered approach, Murray said. “It’s a combination of establishing principles, publishing the policies and using programmatic controls to bring as much cloud governance as we can.”

Murray also hinted that embracing DevOps is the first step in “a series of cultural changes” that organizations are going to need to truly modernize IT infrastructure and operations. For those who aren’t operating at agile speed, operations still needs to get out of the business of managing tickets and delivering resources and get to a self-service environment where operations and IT are involved in brokering the services, he added.

Organizations also need a monitoring framework in place to gain visibility across the environment. Embracing AIOps — which uses big data, data analytics and machine learning — can help organizations become more predictive and more proactive with their operations, he added.