
How to Select a Placement Policy for Site-Aware Clusters

One of the more popular failover clustering enhancements in Windows Server 2016 and 2019 is the ability to define the different fault domains in your infrastructure. A fault domain lets you scope a single point of failure in hardware, whether that is a Hyper-V host (a cluster node), its enclosure (chassis), its server rack or an entire datacenter. To configure these fault domains, check out the Altaro blog post on configuring site-aware clusters and fault domains in Windows Server 2016 & 2019. Once you have defined the hierarchy between your nodes, chassis, racks, and sites, the cluster optimizes its placement policies, failover behavior, and health checks accordingly. This blog will explain the automatic placement policies and advanced settings you can use to maximize the availability of your virtual machines (VMs) with site-aware clusters.

Site-Aware Placement Based on Storage Affinity

From reading the earlier Altaro blog about fault tolerance, you may recall that resiliency is created by distributing identical (mirrored) Storage Spaces Direct (S2D) disks across the different fault domains. Each node, chassis, rack or site may contain a copy of a VM’s virtual hard disks. However, you always want the VM to run in the same site as its disk for performance reasons, so that I/O does not have to be transmitted across the distance between sites. If a VM is forced to start in a different site from its disk, the cluster will automatically live migrate the VM back to the same site as its disk after about a minute. With site-awareness, this automatic enforcement of storage affinity between a VM and its disk is given the highest site placement priority.

Configuring Preferred Sites with Site-Aware Clusters

If you have configured multiple sites in your infrastructure, then you should consider which site is your “primary” site and which should be used as a backup. Many organizations will designate their primary site as the location closest to their customers or with the best hardware, and the secondary site as the failover location, which may have only enough hardware to support critical workloads. Some enterprises may deploy identical datacenters and distribute specific workloads to each location to balance their resources. If you are splitting your workloads across different sites, you can assign each clustered workload or VM (cluster group) a preferred site. Let’s say that you want your US-East VM to run in your primary datacenter and your US-West VM to run in your secondary datacenter. You could configure the following settings via PowerShell:
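The group names and site names below are only placeholders for whatever you have defined in your own cluster, so treat this as a minimal sketch rather than a drop-in command:

    # Pin each VM's cluster group to the site where it should normally run
    (Get-ClusterGroup -Name "US-East VM").PreferredSite = "Primary-Site"
    (Get-ClusterGroup -Name "US-West VM").PreferredSite = "Secondary-Site"

A cluster group honors its own PreferredSite value before any cluster-wide default, so per-VM assignments like these take precedence.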

Designating a preferred site for the entire cluster will ensure that, after a failure, the VMs will start in this location. After you have defined your sites with New-ClusterFaultDomain, you can use the cluster-wide property PreferredSite to set the default location to launch VMs. Below is the PowerShell cmdlet:
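This is a minimal sketch in which Primary-Site stands in for whatever site name you created with New-ClusterFaultDomain:

    # Make the primary datacenter the default site for every cluster group
    (Get-Cluster).PreferredSite = "Primary-Site"

Any cluster group that has its own PreferredSite value will still honor that more specific setting first.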

Be aware of your capacity if you usually distribute your workloads across two sites and they are forced to run in a single location, as performance will diminish on the reduced hardware. Consider using the VM prioritization feature and disabling automatic VM restarts after a failure, as this will ensure that only the most important VMs run. You can find more information in this Altaro blog on how to configure start order priority for clustered VMs.

To summarize, placement priority is based on:

  • Storage affinity
  • Preferred site for a cluster group or VM
  • Preferred site for the entire cluster

Site-Aware Placement Based on Failover Affinity

When site-awareness has been configured for a cluster, there are several automatic failover policies that are enforced behind the scenes. First, a clustered VM or group will always fail over to a node, chassis or rack within the same site before it moves to a different site. This is because local failover is faster than cross-site failover: the VM can come online sooner by accessing its local disk, and it avoids any network latency between sites. Similarly, site-awareness is honored by the cluster when a node is drained for maintenance; the VMs will automatically move to a local node rather than a cross-site node.

Cluster Shared Volumes (CSV) disks are also site-aware. A single CSV disk can store multiple Hyper-V virtual hard disks while allowing their VMs to run simultaneously on different nodes. However, it is important that these VMs all run on nodes within the same site. This is because the CSV service coordinates write access to a single disk across multiple nodes. In the case of Storage Spaces Direct (S2D), the disks are mirrored, so identical copies exist in different locations (or sites). If VMs were writing to mirrored CSV disks in different locations and replicating their data without any coordination, it could lead to disk corruption. Microsoft ensures that this problem never occurs by forcing all VMs which share a CSV disk to run in the local site and write to a single instance of that disk. Furthermore, CSV distributes the VMs across different nodes within the same site, balancing the workloads and the write requests sent to the coordinator node.

Site-Aware Health Checks and Cluster Heartbeats

Advanced cluster administrators may be familiar with cluster heartbeats, which are health checks between cluster nodes. This is the primary way in which cluster nodes validate that their peers are healthy and functioning. The nodes will ping each other once per predefined interval, and if a node does not respond after several attempts it will be considered offline, failed or partitioned from the rest of the cluster. When this happens, the host is not considered an active node in the cluster and it does not provide a vote towards cluster quorum (membership).

If you have configured multiple sites in different physical locations, then you should configure the frequency of these pings (CrossSiteDelay) and the number of health checks that can be missed (CrossSiteThreshold) before a node is considered failed. The greater the distance between sites, the more network latency there will be, so these values should be tuned to minimize the chance of a false failover during periods of high network traffic. By default, the pings are sent every 1 second (1,000 milliseconds), and when 20 are missed, a node is considered unavailable and any workloads it was hosting will be redistributed. You should test your network latency and cross-site resiliency regularly to determine whether you should increase or reduce these default values. Below is an example that changes the testing frequency from every 1 second to 5 seconds and the number of missed responses from 20 to 30.
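CrossSiteDelay is expressed in milliseconds and CrossSiteThreshold is a count of missed heartbeats, so the change described above looks like this:

    # Send cross-site heartbeats every 5 seconds instead of every 1 second
    (Get-Cluster).CrossSiteDelay = 5000
    # Allow 30 missed heartbeats (up from 20) before declaring a node down
    (Get-Cluster).CrossSiteThreshold = 30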

By increasing these values, it will take longer for a failure to be confirmed and for failover to happen, resulting in greater downtime. The default detection time is 1 second x 20 misses = 20 seconds, and this example extends it to 5 seconds x 30 misses = 150 seconds.

Site-Aware Quorum Considerations

Cluster quorum is an algorithm that clusters use to determine whether there are enough active nodes in the cluster to run its core operations. For additional information, check out this series of blogs from Altaro about multi-site cluster quorum configuration. In a multi-site cluster, quorum becomes complicated since there could be a different number of nodes in each site. With site-aware clusters, “dynamic quorum” will be used to automatically rebalance the number of nodes which have votes. This means that as cluster nodes drop out of membership, the number of voting nodes changes. If there are two sites with an equal number of voting nodes, then the nodes in the preferred site will stay online and run the workloads, while the lower-priority site will lose its votes and not host any VMs.

Windows Server 2012 R2 introduced a setting known as LowerQuorumPriorityNodeID, which allowed you to mark a node in a site as the least important, but this was deprecated in Windows Server 2016 and should no longer be used. The idea behind it was to easily declare which location was less important when two sites had the same number of voting nodes. The site containing the lower-priority node would stay offline while the other partition ran the clustered workloads. This caused some confusion since the setting applied to only a single host, but you may still see it referenced in blogs such as Altaro’s https://www.altaro.com/hyper-v/quorum-microsoft-failover-clusters/.

The site-awareness features added to the latest versions of Windows Server greatly enhance a cluster’s resilience through a combination of user-defined policies and automatic actions. By creating fault domains for your clusters, it is easy to provide even greater VM availability by moving workloads between nodes, chassis, racks, and sites as efficiently as possible. Failover clustering further reduces configuration overhead by automatically applying best practices to make failover faster and keep your workloads online longer.

Wrap-Up

Useful information yes? How many of you are using multi-site clusters in your organizations? Are you finding it easy to configure and manage? Having issues? If so, let us know in the comments section below! We’re always looking to see what challenges and successes people in the industry are running into!

Thanks for reading!



Managed services companies remain hot M&A ticket

Managed services companies continue to prove popular targets for investment, with more merger and acquisition deals surfacing this week.

Those transactions included private equity firm Lightview Capital making a strategic investment in Buchanan Technologies; Siris, a private equity firm, agreeing to acquire TPx Communications; and IT Solutions Consulting Inc. buying SecurElement Infrastructure Solutions.

Those deals follow private equity firm BC Partners’ agreement last week to acquire Presidio, an IT solutions provider with headquarters in New York. That transaction, valued at $2.1 billion, is expected to close in the fourth quarter of 2019.

More than 30 transactions involving managed service providers (MSPs) and IT service firms have closed thus far in 2019. This year’s deals mark a continuation of the high level of merger and acquisition (M&A) activity that characterized the MSP market in 2018. Economic uncertainty may yet dampen the enthusiasm for acquisitions, but recession concerns don’t seem to be having an immediate impact.

Seth Collins, managing director at Martinwolf, an M&A advisory firm based in Scottsdale, Ariz., said trade policies and recession talk have brought some skepticism to the market. That said, the MSP market hasn’t lost any steam, according to Collins.

“We haven’t seen a slowdown in activity,” he said. The LMM Group at Martinwolf represented Buchanan Technologies in the Lightview Capital transaction.

Collins said the macroeconomic environment isn’t affecting transaction multiples or valuations. “Valuations aren’t driven by uncertainty; they’re driven by the quality of the asset,” he noted.

Finding the right partner

Buchanan Technologies is based in Grapevine, Texas, and operates a Canadian headquarters in Mississauga, Ont. The company’s more than 500 consultants, engineers and architects provide cloud services, managed services and digital transformation, among other offerings.


A spokesman for Lightview Capital said Buchanan Technologies manages on-premises environments, private clouds and public cloud offerings, such as AWS, IBM Cloud and Microsoft Azure. The company focuses on the retail, manufacturing, education, and healthcare and life sciences verticals.

Collins said Buchanan Technologies founder James Buchanan built a solid MSP over the course of 30 years and had gotten to the point where he would consider a financial partner able to take the company to the next level.

“As it turned out, Lightview was that partner,” Collins added, noting the private equity firm’s experience with other MSPs, such as NexusTek.

The Siris-TPx deal, meanwhile, also involves a private equity investor and a long-established services provider. TPx, a 21-year-old MSP based in Los Angeles, provides managed security, managed WAN, unified communications and contact center offerings. The companies said the deal will provide the resources TPx needs to “continue the rapid growth” it is encountering in unified communications as a service, contact center as a service and managed services.

Siris has agreed to purchase TPx from its investors, which include Investcorp and Clarity.

“Investcorp and Clarity have been invested with TPx for more than 15 years, and they were ready to monetize their investment,” a spokeswoman for TPx said.

IT Solutions Consulting’s acquisition of SecurElement Infrastructure Solutions brings together two MSPs in the greater Philadelphia area.

The companies will pool their resources in areas such as security. IT Solutions offers network and data security through its ITSecure+ offering, which includes antivirus, email filtering, advanced threat protection, encryption and dark web monitoring. A spokeswoman for IT Solutions said SecurElement’s security strategy aligns with IT Solutions’ approach and also provides “expertise in a different stack of security tools.”

The combined company will also focus on private cloud, hybrid cloud and public cloud services, with a particular emphasis on Office 365, the spokeswoman said.

IT Solutions aims to continue its expansion plans in the Philadelphia area and mid-Atlantic regions through hiring, new office openings and acquisitions.

“We have an internal sales force that will continue our organic growth efforts, and our plan is to continue our acquisition strategy of one to two transactions per year,” she said.

MSP market M&A chart
Managed services companies continue to consolidate in an active M&A market.

VMware arms cloud partners with new tools

Ahead of the VMworld 2019 conference, VMware has unveiled a series of updates for its cloud provider partners.

The VMware Cloud Provider Platform now features new tools to enhance the delivery of hybrid cloud offerings and differentiated cloud services, the vendor said. Additionally, VMware said it is enabling cloud providers to target the developer community with their services.

“Customers are looking for best-of-breed cloud that addresses their specific application requirements. … In this world, where there are multiple types of clouds, customers are looking to accelerate the deployment of the applications, and, when they are looking at cloud, what they are looking for is flexibility —  flexibility so that they can choose a cloud that best fits their workload requirements. In many ways, the clouds have to adapt to the application requirements,” said Rajeev Bhardwaj, vice president of products for the cloud provider software business unit at VMware.

Highlights of the VMware updates include the following:

  • The latest version of the vendor’s services delivery platform, VMware vCloud Director 10, now provides a centralized view for hosted private and multi-tenant clouds. Partners can also tap a new “intelligent workload placement” capability for placing “workloads on the infrastructure that best meets the workload requirements,” Bhardwaj said.
  • To help partners differentiate their services, VMware introduced a disaster-recovery-as-a-service program for delivering DRaaS using vCloud Availability; an object storage extension for vCloud Director to deliver S3-compliant object storage services; and a backup certification to certify backup vendors in vCloud Director-based multi-tenant environments, VMware said. Cohesity, Commvault, Dell EMC, Rubrik and Veeam have completed the backup certification.
  • Cloud provider partners can offer containers as a service via VMware Enterprise PKS, a container orchestration product. The update enables “our cloud providers to move up the stack. So, instead of offering just IaaS … they can start targeting new workloads,” Bhardwaj said. VMware will integrate the Cloud Provider Platform with Bitnami, which develops a catalog of apps and development stacks that can be rapidly deployed, he said. The Bitnami integration can be combined with Enterprise PKS to support developer and DevOps customers, attracting workloads such as test/dev environments onto clouds, according to VMware.

Bhardwaj noted that the VMware Cloud Provider Program has close to 4,300 partners today. Those partners span more than 120 countries and collectively support more than 10 million workloads. VMware’s Cloud Verified partners, which offer VMware software-defined data center and value-added services, have grown to more than 60 globally, VMware noted.

Managed service providers are a growing segment within the VMware Cloud Provider Program (VCCP), Bhardwaj added.

“As the market is shifting more and more toward SaaS and … subscription services, what we are seeing is more and more different types of partners” join VCCP, he said.

Partner businesses include solution providers, systems integrators and strategic outsourcers. They typically don’t build their own clouds, but “want to take cloud services from VMware as a service and become managed service providers,” he said.

Other news

  • Rancher Labs, an enterprise container management vendor, rolled out its Platinum Partner Program. Targeting partners with Kubernetes expertise, the program provides lead and opportunity sharing programs, joint marketing funds and options for co-branded content, the company said. Partners must meet a series of training requirements to qualify for the program.
  • Quantum Corp., a storage and backup vendor based in San Jose, Calif., updated its Alliance Partner Program with a new deal registration application, an expanded online training initiative and a redesigned partner portal. The deal registration component, based on Vartopia’s deal registration offering, provides a dashboard to track sales activity, the deal funnel and wins, according to Quantum. The online training for sales reps and engineers is organized by vertical market, opportunities and assets. The company also offers new options for in-person training.
  • Quisitive Technology Solutions Inc., a Microsoft solutions provider based in Toronto, launched a Smart Start Workshop for Microsoft Teams.
  • MSP software vendor Continuum cut the ribbon on a new security operations center (SOC). Located in Pittsburgh, the SOC will bolster the availability of cybersecurity talent, threat detection and response, and security monitoring for Continuum MSP partners, the vendor said.
  • Technology vendor Honeywell added Consultare America LLC and Silver Touch Technologies to its roster of Guided Work Solutions resellers. A voice-directed productivity product, Guided Work Solutions software targets small and medium-sized distribution centers.
  • Sify Technologies Ltd., an information and communications technology provider based in Chennai, India, aims to bring its services to Europe through a partnership with ZSAH Managed Technology Services. The alliance provides a “broader consulting practice” to the United Kingdom market, according to Sify.
  • US Signal, a data center services provider based in Grand Rapids, Mich., added several features to its Zerto-based disaster recovery as a service offering. Those include self-management, enterprise license mobility, multi-cloud replication and stretch layer 2 failover.
  • Dizzion, an end user cloud provider based in Denver, introduced a desktop-as-a-service offering for VMware Cloud on AWS customers.
  • LaSalle Solutions, a division of Fifth Third Bank, said it has been upgraded to Elite Partner Level status in Riverbed’s channel partner program, Riverbed Rise.
  • FTI Consulting Inc., a business advisory firm, said its technology business segment has launched new services around its RelativityOne Data Migration offering. The services include migration planning, data migration and workspace migration.
  • Mimecast Ltd., an email and data security company, has appointed Kurt Mills as vice president of channel sales. He is responsible for the company’s North American channel sales strategy. In addition, Mimecast appointed Jon Goodwin as director of public sector.
  • Managed detection and response vendor Critical Start has hired Dwayne Myers as its vice president of channels and alliances. Myers joins the company from Palo Alto Networks, where he served as channel business manager, Central U.S. and Latin America, for cybersecurity solutions.

Market Share is a news roundup published every Friday.


No one likes waiting on the phone for a GP appointment. So why do we still do it?

The team behind the services are experts in healthcare, as they also run Patient.Info, one of the most popular medical websites in the UK. More than 100 million people logged on to the site in 2018 to read articles about healthcare, check symptoms and learn to live a healthier life, and more than 60% of GPs in England have access to it.

They also produce a newsletter that’s sent to 750,000 subscribers, as well as around 2,000 leaflets on health conditions and 850 on medicines.

People can access Patient.Info 24 hours a day, seven days a week. It’s the same for Patient Access but web traffic spikes every morning when people want to book appointments to see their GP. To handle that demand, Patient Access runs on Microsoft’s Azure cloud platform. As well as being reliable and stable, all patient data is protected by a high level of security – Microsoft employs more than 3,500 dedicated cybersecurity professionals to help protect, detect and respond to threats, while segregated networks and integrated security controls add to the peace of mind.

“About 62% of GP practices use Patient Access,” says Sarah Jarvis MBE, the Clinical Director behind the service. “They’re using it to manage their services, manage appointments, take in repeat medications, consolidate a patient’s personal health record and even conduct video consultations.

“Just imagine your GP being able to conduct video consultations. If you’re aged 20 to 39 you might not want or need to have a relationship with a GP because you don’t need that continuity of care.

“But imagine you are elderly and housebound, and a district nurse visits you. They phone your GP and say: ‘Could you come and visit this patient’, but the GP is snowed under and can’t get there for a couple of hours. The district nurse is also very busy and must visit someone else.

“Now, with Patient Access, a Duty Doctor can look at someone’s medical record and do a video consultation in five minutes. If the patient needs to be referred, the GP can do it there and then from inside the system. The possibilities are endless, and older people, especially, have so much to gain from this.”


Get to know data storage containers and their terminology

Data storage containers have become a popular way to create and package applications for better portability and simplicity. Seen by some analysts as the technology to unseat virtual machines, containers have steadily gained more attention as of late, from customers and vendors alike.

Why choose containers and containerization over the alternatives? Containers work on bare-metal systems, cloud instances and VMs, and across Linux and select Windows and Mac OSes. Containers typically use fewer resources than VMs and can bind together application libraries and dependencies into one convenient, deployable unit.

Below, you’ll find key terms about containers, from technical details to specific products on the market. If you’re looking to invest in containerization, you’ll need to know these terms and concepts.

Getting technical

Containerization. With its roots in partitioning, containerization is an efficient data storage strategy that virtually isolates applications, enabling multiple containers to run on one machine but share the same OS. Containers run independent processes in a shared user space and are capable of running on different environments, which makes them a flexible alternative to virtual machines.

The benefits of containerization include reduced overhead on hardware and portability, while concerns include the security of data stored on containers. With all of the containers running under one OS, if one container is vulnerable, the others are as well.

Container management software. As the name indicates, container management software is used to simplify, organize and manage containers. Container management software automates container creation, destruction, deployment and scaling and is particularly helpful in situations with large numbers of containers on one OS. However, the orchestration aspect of management software is complex and setup can be difficult.

Products in this area include Kubernetes, an open source container orchestration software; Apache Mesos, an open source project that manages compute clusters; and Docker Swarm, a container cluster management tool.

Persistent storage. In order to be persistent, a storage device must retain data after being shut off. While persistence is essentially a given when it comes to modern storage, the rise of containerization has brought persistent storage back to the forefront.

Containers did not always support persistent storage, which meant that data created with a containerized app would disappear when the container was destroyed. Luckily, storage vendors have made enough advances in container technology to solve this issue and retain data created on containers.
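As a rough illustration (the volume and container names here are invented), a Docker named volume is one common way to keep container data around after the container itself is gone:

    # Create a named volume that exists independently of any container
    docker volume create app-data
    # Run a database container that keeps its data files on the volume
    docker run -d --name db -v app-data:/var/lib/postgresql/data postgres
    # Removing the container does not remove the data held on the volume
    docker rm -f db
    docker volume ls

The same basic idea underpins the persistent volume abstractions offered by container orchestrators such as Kubernetes.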

Stateful app. A stateful app saves client data from the activities of one session for use in the next session. Most applications and OSes are stateful, but because stateful apps didn’t scale well in early cloud architectures, developers began to build more stateless apps.

With a stateless app, each session is carried out as if it was the first time, and responses aren’t dependent upon data from a previous session. Stateless apps are better suited to cloud computing, in that they can be more easily redeployed in the event of a failure and scaled out to accommodate changes.

However, containerization allows files to be pulled into the container during startup and persist somewhere else when containers stop and start. This negates the issue of stateful apps becoming unstable when introduced to a stateless cloud environment.

Container vendors and products

While there is one vendor undoubtedly ahead of the pack when it comes to modern data storage containers, the field has opened up to include some big names. Below, we cover just a few of the vendors and products in the container space.

Docker. Probably the name most synonymous with data storage containers, Docker is even credited with bringing about the container renaissance in the IT space. Docker’s platform is open source, which enables users to register and share containers over various hosts in both private and public environments. In recent years, Docker has made containers accessible and now offers various editions of its containerization technology.

When you refer to Docker, you likely mean either the company itself, Docker Inc., or the Docker Engine. Initially developed for Linux systems, the Docker Engine has since been extended to run natively on both Windows and Apple OSes. The Docker Engine supports the tasks and workflows involved in building, shipping and running container-based applications.
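A bare-bones sketch of that build-ship-run workflow might look like the following, where the image name and tag are placeholders:

    # Build an image from the Dockerfile in the current directory
    docker build -t myorg/myapp:1.0 .
    # Ship the image to a registry so other hosts can pull it
    docker push myorg/myapp:1.0
    # Run the image as a container on any host with the Docker Engine
    docker run -d -p 8080:80 myorg/myapp:1.0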

Container Linux. Originally referred to as CoreOS Linux, Container Linux by CoreOS is an open source OS that deploys and manages the applications within containers. Container Linux is based on the Linux kernel and is designed for massive scale and minimal overhead. Although Container Linux is open source, CoreOS sells support for the OS. Acquired by Red Hat in 2018, CoreOS develops open source tools and components.

Azure Container Instances (ACI). With ACI, developers can deploy data storage containers on the Microsoft Azure cloud. Organizations can spin up a new container via the Azure portal or command-line interface, and Microsoft automatically provisions and scales the underlying compute resources. ACI also supports standard Docker images and Linux and Windows containers.
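For a rough sense of what that looks like with the Azure PowerShell module of the time (the resource group and container group names here are hypothetical, and parameters may vary by module version), spinning up a container instance can be close to a one-liner:

    # Deploy a public sample image as a container instance in an existing resource group
    New-AzureRmContainerGroup -ResourceGroupName "demo-rg" -Name "demo-aci" -Image "microsoft/aci-helloworld"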

Microsoft Windows containers. Windows containers are abstracted and portable operating environments supported by the Microsoft Windows Server 2016 OS. They can be managed with Docker and PowerShell and support established Windows technologies. Along with Windows Containers, Windows Server 2016 also supports Hyper-V containers.

VMware vSphere Integrated Containers (VIC). While VIC can refer to individual container instances, it is also a platform that deploys and manages containers within VMs from within VMware’s vSphere VM management software. Previewed under the name Project Bonneville, VMware’s play on containers comes with the virtual container host, which represents tools and hardware resources that create and control container services.


IBM battling to change perception of Cognos Analytics BI platform

While IBM’s traditionally popular Cognos Analytics BI suite remains a powerful system, some say it’s not for everyone anymore.

The business intelligence platform was viewed for decades as a leading product, one that met the needs of data analysts well. But as analytics went mainstream, the need for trained data scientists to deploy analytics software lessened, and everyday users became more versed in the ways of BI, views of IBM’s analytics technology changed.

Now, however, IBM is fighting back.

On Thursday, addressing concerns that led to an impression among some that the Cognos Analytics on Cloud platform is behind the times, IBM released Cognos Analytics version 11.1.3. The latest rollout, which according to IBM is designed to give small and medium-sized businesses better access to IBM’s BI tools, targets pricing and ease of use.

It was in 2015, when IBM transitioned Cognos from version 10 to version 11, that IBM first began addressing its image among some as being good for large enterprises but not user-friendly for everyone.

“We released 11.0 in 2015 and spent a lot of time on the road at conferences banging the drum that this is not your grandfather’s Cognos, that this is the next iteration,” said Kevin McFaul, senior product manager of business analytics at IBM. “The capabilities are still there, but it was enhanced to target the new line of business users. Our competition went to market on ‘Cognos is too complex’ and we’ve done a lot of work to try and correct that perception.”

IBM’s release of Cognos Analytics version 11.1.3 comes amid what analysts view as a challenging time for the Cognos Analytics BI platform.

Analysts said that the suite’s technological capabilities remain among the most effective, yet despite IBM’s best efforts it’s a product viewed as primarily aimed at enterprises with budget to spend and IT specialists to spare. It is not seen as a platform that fits the needs of the new wave of data users, citizen data scientists.

IBM Cognos market activity by the year

Gartner’s proprietary “Magic Quadrant” ranking system in February 2019 dropped IBM to “Niche Player” after ranking it a “Visionary” the previous three years.

A decade ago in 2009, and up through 2015, Gartner labeled IBM as a “Leader.”

“IBM was a leader in traditional BI, but it took them a long time to respond to [changes in the market],” said Rita Sallam, a VP analyst at Gartner. “Cognos lost a lot of traction, but they’ve made promising investments in augmented intelligence, which we see as the next phase of BI.”

“They’ve now introduced a promising new version of the product,” she added.

The Cognos Analytics BI platform has many of the same capabilities as other advanced analytics suites, including natural language processing (the old Watson analytics system was recently rolled into Cognos), data visualization and multi-cloud connectivity. And IBM continues to add features through updates, introducing a host of AI-powered tools in the fall of 2018 with IBM Cognos 11.1 and now addressing pricing and ease of use in 11.1.3.

Even before the latest update, the Cognos Analytics BI platform remained robust for those who were already IBM customers.

But there’s also been a downside to the Cognos Analytics BI story, one analyst said has made it difficult for IBM to win new customers.

“Historically, it’s been an IT-centric implementation,” said Rick Sherman, founder of Athena IT Solutions in Maynard, Mass. “It can manage metadata and share, but there are lots of parts to the architecture to make it effective.

“You’ll look elsewhere if you don’t have the skills,” he continued.

Another potential drawback has been IBM’s perceived relatively slow pace of innovation.

While Microsoft provides monthly updates to Power BI, and vendors such as MicroStrategy are starting to embed analytics into applications beyond their traditional BI platforms, IBM’s most recent additions — even the ones unveiled this week — are ones many of their competitors already possess.

“It has most of the features that are required, but the competition went ahead more than Cognos,” said Boris Evelson, vice president and principal analyst at Forrester Research. “Its strength is that it’s delivering analytics at scale, and it’s introducing some of the more advanced features — it has a conversational user interface, and the advanced analytics it’s introduced is a strength.”

“But a weakness is the pace of innovation is not as fast as its competitors,” he said.

One of the specific issues that may have held the Cognos Analytics BI platform back is that IBM invested heavily in Watson before the company abandoned Watson analytics as a stand-alone product and added it to the Cognos Analytics BI suite.

“Watson set them back a couple of years — Watson analytics didn’t have the completeness Cognos had,” said Doug Henschen, analyst at Constellation Research. “The hope is that they catch up.”

Henschen, however, noted that IBM now is giving the Cognos Analytics BI platform more attention, which could help reverse the sense that IBM is lagging in the analytics sphere.


“They’ve updated their story significantly in the last 18 months,” Henschen said. “There are signs that they’re getting hip to the market demands of the day to get customers what they want and where they want it.”

The new release appears to be further evidence of IBM’s commitment to the Cognos Analytics BI platform, and to making it a suite not only for giant enterprise but also for the citizen data scientist.

“It was designed around how to empower end users and help them understand their data,” McFaul said.

Pricing for Cognos Analytics version 11.1.3 comes in three tiers: $15 per user, per month for the Standard Edition; $35 per user, per month for the Plus Edition; and $70 per user, per month for the Premium Edition. All the versions are available on the IBM Marketplace and can be purchased directly from the site and used via the cloud, IBM said.

Whether the market takes notice of the changes, however, and whether IBM can regain what it has lost in recent years, remains to be seen.

A decade ago, IBM had a 14.6% market share of vendor revenues, according to Gartner. By 2017, though statistics vary slightly among sources and the BI software market has become more fragmented in general over time, IBM’s market share had dropped.

“Their challenges don’t stem from the product,” Sallam said. “It’s their go-to-market strategy, how to sell beyond their installed base, how to attract new buyers. They’ve put in place a plan to do that, but we’ll see how well they execute on those plans.”

“Hearts are harder to change than the product,” she said.


Jaguar Land Rover, BI Worldwide share GitLab migration pros and cons

Microsoft’s proposed acquisition of popular code repository vendor GitHub also thrust competitor GitLab into the spotlight. A quarter-million customers tried to move code repositories from GitHub to GitLab last week in the wake of the Microsoft news, a surge that crashed the SaaS version of GitLab.

Enterprises with larger, more complex code repositories will need more than a few days to weigh the risks of the Microsoft acquisition and evaluate alternatives to GitHub. However, they were preceded by other enterprise GitLab converts who shared their experience with GitLab migration pros and cons.

BI Worldwide, an employee engagement software company in Minneapolis, considered a GitLab migration when price changes to CloudBees Jenkins Enterprise software drove a sevenfold increase in the company’s licensing costs for both CloudBees Jenkins Enterprise and GitHub Enterprise.

GitLab offers built-in DevOps pipeline tools with its code repositories in both SaaS and self-hosted form. BI Worldwide found it could replace both GitHub Enterprise and CloudBees Jenkins Enterprise with GitLab for less cost, and made the switch in late 2017.

“GitLab offered better functionality over GitHub Enterprise because we don’t have to do the extra work to create web hooks between the code repository and CI/CD pipelines, and its CI/CD tools are comparable to CloudBees,” said Adam Dehnel, product architect at BI Worldwide.

GitLab pipelines
GitLab’s tools include both code version control and app delivery pipelines.

Jaguar Land Rover-GitLab fans challenge Atlassian incumbents

Automobile manufacturer Jaguar Land Rover, based in London, also uses self-hosted GitLab among the engineering teams responsible for its in-vehicle infotainment systems. A small team of three developers in a company outpost in Portland, Ore., began with GitLab’s free SaaS tool in 2016, though the company at large uses Atlassian’s Bitbucket and Bamboo tools.

As of May 2018, about a thousand developers in Jaguar Land Rover’s infotainment division use GitLab, and one of the original Portland developers to champion GitLab now hopes to see it rolled out across the company.


“Atlassian’s software is very good for managing parent-child relationships [between objects] and collaboration with JIRA,” said Chris Hill, head of systems engineering for Jaguar Land Rover’s infotainment systems. “But sometimes vendors can start to get involved with other parts of the software development lifecycle that aren’t their core business, and customers get sold an entire package that they don’t necessarily want.”

A comparison between tools such as GitLab and Bitbucket and Bamboo largely comes down to personal preference rather than technical feature gaps, but Hill said he finds GitLab more accessible to both developers and product managers.

“We can give developers self-service capabilities so they don’t have to chew up another engineer’s time to make merge requests,” Hill said. “We can also use in-browser editing for people who don’t understand code, and run tutorials with pipelines and rundeck-style automation jobs for marketing people.”

Jaguar Land Rover’s DevOps teams use GitLab’s collaborative comment-based workflow, where teams can discuss issues next to the exact line of code in question.

“That cuts down on noise and ‘fake news’ about what the software does and doesn’t do,” Hill said. “You can make a comment right where the truth exists in the code.”

GitLab offers automated continuous integration testing of its own and plugs in to third-party test automation tools. Continuous integration testing inside GitLab and with third-party tools is coordinated by the GitLab Runner daemon. Runner will be instrumental in delivering more frequent over-the-air software updates to in-car infotainment systems through a third-party service provider called Redbend, which means Jaguar Land Rover vehicle owners will get automatic updates to their infotainment systems without having to visit a dealership for installation. This capability will be introduced with the new Jaguar I-Pace electric SUV in July 2018.

Balancing GitLab migration pros and cons

BI Worldwide and Jaguar Land Rover both use the self-hosted version of GitLab’s software, which means they escaped the issues SaaS customers suffered with crashes during the Microsoft GitHub exodus. They also avoided a disastrous outage that included data loss for GitLab SaaS customers in early 2017.

Still, their GitLab migrations have come with downsides. BI Worldwide jumped through hoops to get GitLab’s software to work with AWS Elastic File System (EFS), only to endure months of painful conversion from EFS to Elastic Block Store (EBS), which the company just completed.

GitLab never promised that its software would work well with EFS, and part of the issue stemmed from the way AWS handles EFS burst credits for performance. But about three times a day, response time from AWS EFS in the GitLab environment would shoot up from an average of five to eight milliseconds to spikes as high as 900 milliseconds, Dehnel said.

“EBS is quite a bit better, but we had to get an NFS server setup attached to EBS and work out redundancy for it, then do a gross rsync project to get 230 GB of data moved over, then change the mount points on our Rancher [Kubernetes] cluster,” Dehnel said. “The version control system is so critical, so things like that are not taken lightly, especially as we also rely on [GitLab] for CI/CD.”

GitLab is working with AWS to address the issues with its product on EFS, a company spokesperson said. For now, its documentation recommends against deployment with EFS, and the company suggests users consider deployments of GitLab to Kubernetes clusters instead.

Electron framework flaw puts popular desktop apps at risk

A new vulnerability found in an app development tool has caused popular desktop apps made with the tool to inherit a risky flaw.

The Electron framework uses node.js and Chromium to build desktop apps for popular web services — including Slack, Skype, WordPress.com, Twitch, GitHub, and many more — while using web code like JavaScript, HTML and CSS. Electron announced that a remote code execution vulnerability in the Electron framework (CVE-2018-1000006) was inherited by an unknown number of apps.

Zeke Sikelianos, a designer and developer who works at Electron, wrote in a blog post that only apps built for “Windows that register themselves as the default handler for a protocol … are vulnerable,” while apps for macOS and Linux are not at risk.

Amit Serper, principal security researcher at Cybereason, said a flaw like the one found in the Electron framework “is pretty dangerous since it allows arbitrary command execution by a simple social engineering trick.”


“Electron apps have the ability to register a protocol handler to make it easier to automate processes for the Electron apps themselves (for example, if you’ll click a link that starts with slack:// then Slack will launch; it makes it easier to automate the process of joining a Slack group),” Serper told SearchSecurity by email. “The vulnerability is in the way that the protocol handler is being processed by the Electron app, which allows an attacker to create a malicious link to an Electron app which will execute whatever command that the attacker wanted to run.”

Sikelianos urged developers to update apps to the most recent version of Electron as soon as possible.

There are more than 460 apps that have been built using the flawed Electron framework, but it is unclear how many of those apps are at risk and experts noted that code reviews could take a while.  

Security audits

Lane Thames, senior security researcher at Tripwire, said mechanisms for code reuse like software libraries, open source code, and the Electron framework “are some of the best things going for modern software development. However, they are also some of its worst enemies in terms of security.”

“Anytime a code base is in use across many products, havoc will ensue when (not if) a vulnerability is discovered. This is inevitable. Therefore, developers should ensure that mechanisms are in place for updating downstream applications that are impacted by the vulnerabilities in the upstream components,” Thames told SearchSecurity. “This is not an easy task and requires lots of coordination between various stakeholders. In a perfect world, code that gets used by many other projects should undergo security assessments with every release. Implementing a secure coding practice where every commit is evaluated at least with a security-focused code review would be even better.”

Serper said developers need to “always audit their code and be mindful to security.”

“However, in today’s software engineering ecosystem, where there is a lot of use of third party libraries it is very hard to audit the code that you are using since many developers today use modules and code that was written by other people, completely unrelated to their own project,” Serper said. “These are vast amounts of code and auditing third party code in addition to auditing your own code could take a lot of time.”

Justin Jett, director of audit and compliance at Plixer International Inc., a network analysis company based in Kennebunk, Maine, said the Electron framework flaw was significant, given that “affected applications like Skype, Slack, and WordPress are used by organizations to host and share their most critical information.”

“If these applications were to be compromised, the impact could be devastating. Developers that use third-party frameworks, like Electron, should audit their code on a regular basis, ideally quarterly, to ensure they are using an up-to-date version of the framework that works with their application and has resolved any security issues from previous releases,” Jett told SearchSecurity. “Additionally, platform developers, like Electron, should complete routine audits on their software to ensure that the developers taking advantage of their platform don’t expose users to security vulnerabilities — vulnerabilities which, left unresolved, could cause profound damage to businesses that rely on these applications.”

Top 10 blog posts of 2017 illuminate top CIO goals

SearchCIO’s most popular blog posts of 2017 point to a set of lofty — and mandatory — CIO goals: artificial intelligence, digital transformation, multicloud management. IT leaders are learning all they can about these tech trends. The aim? To help their companies gain business advantage — before their competitors do.

Readers perused posts about avoiding getting locked into relationships with public cloud vendors, the absence of a universal platform for digitally connected smart cities and the coming proliferation of AI in the workplace.

The blog posts IT leaders showed interest in during 2017 covered how to install robotic process automation technology, what copy data management software is good for and what to look for in cloud management platforms. Moreover, they revealed some of the top CIO goals of 2017. Here are the year’s 10 most-read blog posts.

10. Managing the unmanageable

IT departments today are overseeing an ever-expanding assortment of cloud services — and it’s not easy. Each service requires a different management tool, and juggling all that “is just painful,” said IBM cloud development expert Mike Edwards. In “Out of many, one hybrid cloud management platform,” Edwards gives a rundown of the functions to look for in cloud management platforms, commercial tools meant to rein in cloud chaos. Among them are integration, so they can pull together disparate computing systems; general services, including a central portal to manage all a company’s cloud services; and financial management, to track the resources consumed and how much money is spent on them.

Cloud management platform functions
This slide, from a Cloud Standards Customer Council presentation in July, outlines the capabilities of a cloud management platform.

9. Cloud, consolidated

The public cloud firmament is ruled not by a pantheon of platform providers but by a tiny clique of cloud gods. According to an August report by Forrester Research, organizations hosting IT and business operations in the cloud shouldn’t let the small number of big players — Amazon, Microsoft and Google are the top three — lull them into a one-provider strategy. Those that do may see business come to a halt should the provider experience an outage. Or they may complain bitterly if a provider raises its prices — and then grudgingly pay up. In “Forrester: Go multicloud, ditch public cloud platform lock-in,” analyst Andrew Bartels advises CIOs on ways to reap cloud benefits — and lower risk. 

8. Making data talk

Bob Rogers, chief data scientist, Intel Corp.

What makes a great data scientist? At a talk at Harvard University, Bob Rogers, chief data scientist at Intel Corp., started with what doesn’t: creating a report the business just asked for. A great data scientist needs to understand algorithms and statistics to produce analytics, of course, but “can also communicate with the stakeholders who are going to use those results.” In “What Intel’s Bob Rogers looks for when hiring data scientists,” Rogers describes the detailed “conversation” these practitioners need to have with users of data in order to dig up the insights that matter to the business.

7. Urban renewal

City CIOs examining initiatives aimed at making their cities “smart” — using data to improve municipal services — are at a frontier. There are no technical standards for how data is collected and measured. There’s no analytical data platform. There’s not even one understanding of what a smart city is. “If you go to any smart city conference, you’re going to find as many definitions of a smart city as there are attendees,” said Bob Bennett, chief innovation officer for Kansas City, Mo. The post “Smart city platform: A work in progress” reports on a conference convened to address big questions swirling around smart city projects today. The verdict? It’s too early for answers.

6. What’s my job?

Know of a chief digital officer hired to drive employee productivity and operational efficiency? Then the job description is in need of a redo, said Jim Fowler. “That’s the role of the CIO,” the vice president and CIO at General Electric said at MIT Sloan CIO Symposium in May. Fowler delineates the two roles in “CIO doesn’t play chief digital officer role at GE.” The CDO should be focused on using data to develop commercial products, Fowler said. At GE, an old company going through huge digital changes, the roles are separate and distinct. As CIO, Fowler is working toward a billion-dollar productivity target. The CDO, who happens to be his boss, William Ruh, “is focused on turning us into a $10 billion software business.”

From left to right, Peter Weill, of MIT; Jim Fowler, of GE (on screen); David Gledhill, of DBS Bank; Ross Meyercord, of Salesforce; and Lucille Mayer, of BNY Mellon, chat on stage at the MIT Sloan CIO Symposium in Cambridge, Mass., on May 24.

5. The business of AI

CIOs who want to inject AI into their companies’ lifeblood have some work to do. Today, 80% of organizations are considering use of AI or examining it, while just a small percentage are using it in their core business, according to McKinsey & Co. research. In “McKinsey: You can’t do enterprise AI without digital transformation,” McKinsey partner Michael Chui said the entire organization needs to be on board “to move the needle from a corporate standpoint.” CIOs need to build the foundation for AI in business first by determining where the potential is — and then by pushing ahead on digital efforts, digitizing infrastructure, amassing data and making it easy to access.

4. Double take

Rosetta Stone’s Mark Moseley is thankful for having been to one boring meeting. The vice president of IT at the language-learning company agreed to meet sales reps from Actifio, which sells copy data management software. “I didn’t care,” he told SearchCIO in “Waking up to benefits of copy data management software.” “I was mostly zoned out of the meeting.” Until he realized that the vendor’s product could clone an entire database in minutes — and help his development team work more efficiently. After installing the software, Moseley found he could do other useful tasks, such as virtualize his data, which allows him to have to store less, and spin up disaster recovery sites in the cloud.

3. ‘What I want when I want it’

Nestlé’s 100-year-old water delivery business is going through an immense transition. The prime objective used to be making sure that customers didn’t run out of bottled water before a truck delivered more. “Now it’s, ‘Make sure you deliver what I want when I want it,'” said Aymeric Le Page, vice president of business strategy and transformation at Nestlé Waters North America. The post “Nestlé builds ‘digital ecosystems’ to transform its massive bottled water biz” describes the technological and cultural changes the company is ushering in to personalize its service for convenience-obsessed customers — and it draws a critical parallel to how IT leaders should be thinking about serving the business.

2. My software, my co-worker

AI will supplant the UI, Accenture says; count on it. “Accenture: AI is the new UI” examines the consulting outfit’s prediction — the rise of software tailored to individuals rather than programming for the masses. “The standard way people built applications 20 years ago was you had one interface to serve everybody,” said Michael Biltz, managing director of Accenture Technology Vision. CIOs should start overhauling customer-facing applications, Biltz said, equipping them with technology such as voice recognition to make interacting with them “more human or natural” — and then move to internal apps, to help make employees more effective and efficient.

1. Show me the value

David Brain, RPA consultant

Companies looking to robotic process automation should ditch POC for POV — that’s alphabet soup for proof of concept and proof of value — according to RPA consultant David Brain in “Proof of value — not proof of concept — key to RPA technology.” A proof of concept may show customers that the technology works — that it can automate a certain business process. What it doesn’t show is “whether there is a business case for automation and will it deliver the scale of improvements the company wants to achieve.” A proof of value for RPA, Brain said, shows whether the technology can automate systems in the precise way they’re used in a specific company.

Azure feature updates in 2017 play catch up to AWS

Microsoft Azure already solidified its position as the second most popular public cloud, and critical additions in 2017 brought the Azure feature set closer to parity with AWS.

In some cases, Azure leapfrogged its competition. But a bevy of similar products bolstered the platform as a viable alternative to Amazon Web Services (AWS). Some Microsoft initiatives broadened the company’s database portfolio. Others lowered the barrier to entry for Azure, and pushed further into IoT and AI. And the long-awaited, on-premises machine, Azure Stack, seeks to tap surging interest to make private data centers obsolete.

Like all the major public cloud providers, Microsoft Azure doubled down on next-generation applications that rely on serverless computing and machine learning. Among the new products are Machine Learning Workbench, intended to improve productivity in developing and deploying AI applications, and Azure Event Grid, which helps route and filter events built in serverless architectures. Some important upgrades to Azure IoT Suite included managed services for analytics on data collected through connected devices, and Azure IoT Edge, which extends Azure functionality to connected devices.

Many of those Azure features are too advanced for most corporations, which lack a team of data scientists. However, companies have begun to explore other services that rely on the same underlying technologies in areas such as vision, language and speech recognition.

AvePoint, an independent software vendor in Jersey City, N.J., took note of the continued investment by Microsoft this past year in its Azure Cognitive Services, a turnkey set of tools to get better results from its applications.

“If you talk about business value that’s going to drive people to use the platform, it’s hard to find a more business-related need than helping people do things smartly,” said John Peluso, Microsoft regional director at AvePoint.

Microsoft also joined forces with AWS on Gluon, an open source, deep learning interface intended to simplify the use of machine learning models for developers. And the company added new machine types that incorporate GPUs for AI modeling.

Azure compute and storage get some love, too

Microsoft’s focus wasn’t solely on higher-level Azure services. In fact, the areas in which it caught up the most with AWS were its core compute and storage capabilities.

The new B-Series are the cheapest VMs available on Azure, designed for burstable workloads that don’t always need full CPU performance, such as test and development or web servers. More importantly, they provide an on-ramp to the platform for those who want to sample Azure services.
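To give a sense of what that on-ramp looks like, below is a minimal sketch that creates a small B-Series VM with the Az PowerShell module; the resource group, VM name, image and size are placeholder values, not anything cited in this article:

# Create a small burstable B-Series VM for test/dev (placeholder names).
$cred = Get-Credential    # local administrator account for the new VM
New-AzVM -ResourceGroupName "demo-rg" `
    -Name "dev-web-01" `
    -Location "eastus" `
    -Image "Win2016Datacenter" `
    -Size "Standard_B2s" `
    -Credential $cred `
    -OpenPorts 80

Swapping the -Size value is all it takes to move the same workload onto a larger machine later, which is part of the B-Series’ appeal as a sampling tier.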

Other Azure feature additions included the M-Series machines, which can support SAP workloads with up to 20 TB of memory; a new bare-metal VM; and the incorporation of Kubernetes into Azure’s container service.

“I don’t think anybody believes they are on par [with AWS] today, but they have momentum at scale and that’s important,” said Deepak Mohan, an analyst at IDC.

In storage, Managed Disks is a new Azure feature that handles storage resource provisioning as applications scale. Archive Storage provides a cheap option to house data as an alternative to Amazon Glacier, as well as a standard access model to manage data across all the storage tiers.
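As a rough sketch of how that tiering works in practice, an existing block blob can be pushed down to the archive tier from PowerShell; the account, container and blob names below are placeholders, and the exact syntax depends on the Az.Storage version in use:

# Move a block blob to the Archive access tier (placeholder names).
$ctx  = (Get-AzStorageAccount -ResourceGroupName "demo-rg" -Name "demostorage").Context
$blob = Get-AzStorageBlob -Container "backups" -Blob "2017-archive.bak" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Archive")

Reading the blob again later requires rehydrating it to the hot or cool tier first, the same trade-off Amazon Glacier imposes.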

Reserved VM Instances emulate AWS’ popular Reserved Instances to provide significant cost savings for advance purchases, along with deeper discounts for customers that link the machines to their Windows Server licenses. Azure also added low-priority VMs, the equivalent of AWS Spot Instances, which can provide even further savings but should be limited to batch-type jobs because they can be preempted.

The addition of Azure Availability Zones was a crucial update for mission-critical workloads that need high availability. It brings greater fault tolerance to the platform by letting customers spread workloads across physically separate zones within a region and achieve a guaranteed 99.99% uptime.
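A minimal sketch of what that looks like with the Az PowerShell module follows; the names, region and sizes are placeholders, and a real deployment would also put a zone-redundant load balancer in front of the VMs:

# Place two copies of a workload in separate availability zones of one region.
$cred = Get-Credential
New-AzVM -ResourceGroupName "demo-rg" -Name "app-vm-1" -Location "eastus2" `
    -Image "Win2016Datacenter" -Size "Standard_D2s_v3" -Credential $cred -Zone "1"
New-AzVM -ResourceGroupName "demo-rg" -Name "app-vm-2" -Location "eastus2" `
    -Image "Win2016Datacenter" -Size "Standard_D2s_v3" -Credential $cred -Zone "2"

Because each zone is a physically separate facility with independent power and networking, losing one zone leaves the second VM running.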

“It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS,” said Jason McKay, senior vice president and CTO at Logicworks, a cloud managed service provider in New York.

And that’s not a bad thing, because Microsoft has always been good at being a fast follower, McKay said. There’s a fair amount of parity in the service catalogs for Azure and AWS, though Azure’s design philosophy is a bit more tightly coupled between its services. That means potentially slightly less creativity, but more functionality out of the box compared to AWS, McKay said.

Databases and private data centers

Azure Database Migration Service has helped customers transition from their private data centers to Azure. Microsoft also added full compatibility between SQL Server and the fully managed Azure SQL database service.

Azure Cosmos DB, a fully managed NoSQL cloud database, may not see a wave of adoption any time soon, but has the potential to be an exciting new technology to manage databases on a global scale. And in Microsoft’s continued evolution to embrace open source technologies, the company added MySQL and PostgreSQL support to the Azure database lineup as well.

The company also improved management and monitoring, incorporating tools from Microsoft’s acquisition of Cloudyn, and added security capabilities. Azure confidential computing encrypts data while it is in use, complementing the existing encryption options for data at rest and in transit, while Azure Policy added new governance capabilities to enforce corporate rules at scale.

Other important security upgrades include Azure App Service Isolated, which makes it easier to run applications inside dedicated virtual networks at the platform-as-a-service layer. The Azure DDoS Protection service guards against distributed denial-of-service attacks, new capabilities put firewalls around data in Azure Storage, and service endpoints within an Azure virtual network let workloads reach multi-tenant Azure services without exposing that traffic to the public internet.

Azure Stack’s late arrival

Perhaps Microsoft’s biggest cloud product isn’t part of its public cloud at all. After two years of fanfare, Azure Stack finally went on sale in late 2017. It brings many of the tools found in the Azure public cloud into private facilities, for customers that have higher regulatory demands or simply aren’t ready to vacate their data centers.

“That’s a huge area of differentiation for Microsoft,” Mohan said. “Everybody wants true compatibility between services on premises and services in the cloud.”

Rather than build products that live on premises, AWS joined with VMware to build a bridge for customers that want their full VMware stack on AWS either for disaster recovery or extension of their data centers. Which approach will succeed depends on how protracted the shift to public cloud becomes — and a longer delay in that shift favors Azure Stack, Mohan said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].

Alexa for Business sounds promising, but security a concern

Virtual assistant technology, popular in the consumer world, is migrating toward businesses in the hope of enhancing employee productivity and collaboration. Organizations could capitalize on the familiarity of home-based virtual assistants, such as Siri and Alexa, to boost productivity in the office and launch meetings more quickly.

Last week, Amazon announced Alexa for Business, a virtual assistant that connects Amazon Echo devices to the enterprise. Alexa for Business allows organizations to equip conference rooms with Echo devices that can turn on video conferencing equipment and dial into a conference via voice commands.

“Virtual assistants, such as Alexa, greatly enhance the user experience and reduce the complexity in joining meetings,” Frost & Sullivan analyst Vaishno Srinivasan said.

Personal Echo devices connected to the Alexa for Business platform can also be used for hands-free calling and messaging, scheduling meetings, managing to-do lists and finding information on business apps, such as Salesforce and Concur.

Overcoming privacy and security hurdles

Before enterprise virtual assistants like Alexa for Business can see widespread adoption, they must overcome security concerns.

“Amazon and other providers will have to do some evangelizing to demonstrate to CIOs and IT leaders that what they’re doing is not going to compromise any security,” Gartner analyst Werner Goertz said.

Srinivasan said organizations may have concerns about Alexa for Business collecting data and sharing it in a cloud environment. Amazon has started to address these concerns, particularly when connecting personal Alexa accounts and home Echo devices to a business account.

Goertz said accounts are sandboxed, so users’ personal information will not be visible to the organization. The connected accounts must also comply with enterprise authentication standards. The platform also includes administrative controls that offer shared device provisioning and management capabilities, as well as user and skills management.

Another key challenge is ensuring a virtual assistant device, like the Amazon Echo, responds to a user with information that is highly relevant and contextual, Srinivasan said.

“These devices have to be trained to enhance its intelligence to deliver context-sensitive and customized user experience,” she said.

Integrating with enterprise IT systems

End-user spending on virtual assistant devices is expected to reach $3.5 billion by 2021, up from $720 million in 2016, according to Gartner. Enterprise adoption is expected to ramp up by 2019.

Goertz said Amazon had to do a lot of work "under the hood" to enable the integrations with business apps and vendors such as Microsoft, Cisco, Polycom and BlueJeans. Deep integrations with enterprise IT systems are required to enable future capabilities, such as dictating and sending emails from an Echo device, he said.

Srinivasan said Alexa for Business can extend beyond conference rooms through APIs provided by Amazon’s Alexa Skills Kit for developers.

“Thousands of developers utilize these APIs and have created ‘skills’ that enable automation and increase efficiency within enterprises,” she said.

Taking use cases beyond productivity tools

While enterprise virtual assistants could be deployed in any type of company looking to boost productivity, Alexa for Business has already seen deployments in industries such as hospitality.

Wynn Las Vegas is equipping its rooms with Amazon Echo devices, which are managed with Alexa for Business, Goertz said. Hotel guests can use voice commands, which invoke Alexa skills, to turn on the lights, close the blinds or order room service.

Another industry that could see adoption of virtual assistants is healthcare. Currently, Alexa for Business supports audio-only devices. But the platform could potentially support devices with a camera and display that could add video conferencing and telemedicine capabilities, Goertz said.

Alexa for Business also has the potential to disrupt the huddle room market by turning Echo devices into stand-alone conference phones, Srinivasan said.

Amazon Echo prices range from $50 to $200, and the most recent generation of devices offers improved audio quality. The combination of the built-in virtual assistant, Alexa for Business management and the developer ecosystem fills a gap in the conference phone market, she wrote in a blog post.

“Amazon is well-positioned to grab this opportunity much ahead of Microsoft Cortana, Google Assistant and Apple’s Siri,” she said.