
DevOps pros rethink cloud cost with continuous delivery tool

Enterprise DevOps pros can slash cloud resource overallocations with a new tool that shows them how specific app resources are allocated and used in the continuous delivery process.

The tool, Continuous Efficiency (CE), became generally available this week from Harness.io, a continuous delivery (CD) SaaS vendor. It can be used by itself or integrated with the company’s CD software, which enterprises use to automatically deploy and roll back application changes to Kubernetes infrastructure.

In either case, CE correlates cloud cost information with specific applications and underlying microservices without requiring manual tagging, which made it easy for software engineers at beta tester companies to identify idle cloud resources.


“The teams running applications on our platform are distributed, and there are many different teams at our company,” said Jeff Green, CTO at Tyler Technologies, a government information systems software maker headquartered in Plano, Texas. “We have a team that manages the [Kubernetes] cluster and provides guidelines for teams on how to appropriately size workloads, but we did find out using CE that we were overallocating resources.”

In beta tests of CE, Tyler Technologies found that about one-third of its cloud resources were not efficiently utilized — capacity had been allocated and never used, or it was provisioned as part of Kubernetes clusters but never allocated. Developers reduced the number of Kubernetes replicas and CPU and memory allocations after this discovery. Green estimated those adjustments could yield the company some $100,000 in cloud cost savings this year.

Harness Continuous Efficiency tool correlates cloud costs to applications, services and Kubernetes clusters.

DevOps puts cloud cost on dev to-do list

Tyler Technologies has used Harness pipelines since 2017 to continuously deploy and automatically roll back greenfield applications that run on Kubernetes clusters in the AWS cloud. The full lifecycle of these applications is managed by developers, who previously didn’t have direct visibility into how their apps used cloud resources, or experience with cloud cost management. CE bridged that gap without requiring developers to manage a separate tool or manually tag resources for tracking.

This has already prompted developers at Tyler Technologies to focus more on cost efficiencies as they plan applications, Green said.

“That wasn’t something they really thought about before,” he said. “Until very recently, we followed a more traditional model where we had dedicated operations people that ran our data centers, and they were the ones that were responsible for optimizing and tuning.”

While developer visibility into apps can be helpful, a tool such as CE doesn’t replace other cloud cost management platforms used by company executives and corporate finance departments.

“It’s good for developers to be cognizant of costs and not feel like they’re being blindsided by impossible mandates from a perspective they don’t understand,” said Charles Betz, analyst at Forrester Research. “But in large enterprises, there will still be dedicated folks managing cloud costs at scale.”

The Harness CD tool deploys delegates, or software agents, to each Kubernetes cluster to carry out and monitor app deployments. CE can use those agents to identify the resources that specific apps and microservices use and compare this information to resource allocations in developers’ Kubernetes manifests, identifying idle and unallocated resources.

If users don’t have the Harness CD tool, CE draws on information from Kubernetes autoscaling data and associates it with specific microservices and applications. In either case, developers don’t have to manually tag resources, which many other cloud cost tools require.
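
Harness hasn't published the internals of its delegates, but the core comparison they automate, declared resource requests versus observed usage, can be sketched with the Kubernetes Python client and the metrics-server API. The snippet below is a rough, illustrative approximation, not Harness code; it assumes metrics-server is installed in the cluster, and the CPU-only focus and 20% idle threshold are arbitrary choices.

```python
# Illustrative sketch only: compares each pod's CPU requests (from its spec)
# with live usage reported by the metrics-server API, flagging pods whose
# requests far exceed what they actually consume. Not Harness code.
from kubernetes import client, config

def parse_cpu(value: str) -> float:
    """Convert Kubernetes CPU quantities ('250m', '1', '1234567n') to cores."""
    if value.endswith("n"):
        return float(value[:-1]) / 1_000_000_000
    if value.endswith("m"):
        return float(value[:-1]) / 1000
    return float(value)

config.load_kube_config()
core = client.CoreV1Api()
metrics = client.CustomObjectsApi()

# Requested CPU per pod, summed across containers.
requests = {}
for pod in core.list_pod_for_all_namespaces().items:
    total = 0.0
    for c in pod.spec.containers:
        if c.resources and c.resources.requests and "cpu" in c.resources.requests:
            total += parse_cpu(c.resources.requests["cpu"])
    if total:
        requests[(pod.metadata.namespace, pod.metadata.name)] = total

# Actual CPU usage from the metrics.k8s.io API (requires metrics-server).
usage = metrics.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")
for item in usage["items"]:
    key = (item["metadata"]["namespace"], item["metadata"]["name"])
    used = sum(parse_cpu(c["usage"]["cpu"]) for c in item["containers"])
    requested = requests.get(key)
    if requested and used < 0.2 * requested:  # arbitrary 20% idle threshold
        print(f"{key[0]}/{key[1]}: requests {requested:.2f} CPU, uses {used:.2f}")
```

Run against a cluster with metrics-server, the output lists pods whose requests look oversized, roughly the class of overallocation Tyler Technologies found.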

This was a plus for Tyler Technologies, but Betz also expressed concern about the reliability of auto-discovery. 

“There’s no way to map objective tech resources to subjective business concepts without some false negatives or positives that could result in the wrong executive being charged for the wrong workload,” Betz said. “Tagging is a discipline that organizations ultimately can’t really get away from.”

Harness roadmap includes cloud cost guidance

Tyler Technologies plans to add the CE product to Harness when it renews its license this year but hasn’t yet received a specific pricing quote for the tool. Harness officials declined to disclose specific pricing numbers but said that CE will have a tiered model that charges between 1% and 5% of customers’ overall cloud spending, depending on whether the cloud infrastructure is clustered or non-clustered.

“It’s not quite free money — there is a charge for this service,” Green said. “But it will allow us to save costs we wouldn’t even be aware of otherwise.”


Harness plans to add recommendation features to CE in a late July release, which will give developer teams hints about how to improve cloud cost efficiency. In its initial release, developers must correct inefficiencies themselves, which Tyler’s Green said would be easier with recommendations. 

“We use an AWS tool that recommends savings plans and how to revise instances for cost savings,” Green said. “We’d like to see that as part of the Harness tool as well.”

Other Harness users that previewed CE, such as Choice Hotels, have said they’d also like to see the tool add proactive cloud cost analysis, but Green said his team uses CE in staging environments to generate such estimates ahead of production deployments.

Harness plans to add predictive cost estimates based on what resources are provisioned for deployments, a company spokesperson said. The Continuous Efficiency platform already forecasts cloud costs for apps and clusters, and later releases will predict usage based on seasonality and trends.


Snowflake files for IPO after months of speculation

After months of speculation, fast-growing cloud data warehouse vendor Snowflake has filed for an IPO.

“All our sources have confirmed that they filed using the JOBS Act approach,” said R “Ray” Wang, founder and CEO of Constellation Research.

The Jumpstart Our Business Startups (JOBS) Act was signed into law by President Barack Obama in 2012 and is intended to help fund small businesses by easing securities regulations, including allowing smaller firms to file for IPOs confidentially while testing the market.

“They have ramped up their sales and marketing to match the IPO projections and they’ve made substantial customer progress,” Wang added.

Snowflake, meanwhile, has not yet confirmed that its IPO is now officially in the works.

“No comment” was the official response from the vendor when reached for comment.

Snowflake, founded in 2012 and based in San Mateo, Calif., has appeared to be aiming at an IPO for more than a year.


The vendor is in a competitive market that includes Amazon Redshift, Google BigQuery, Microsoft Azure SQL Data Warehouse and SAP Data Warehouse, among others. Snowflake, however, has established a niche in the market and been able to grow from 80 customers when it released its first platform in 2015 to more than 3,400.

“Unlike other cloud data warehouses, Snowflake uses a SQL database engine designed for the cloud, and scales storage and compute independently,” said Noel Yuhanna, analyst at Forrester Research. “Customers like its ease of use, lower cost, scalability and performance capabilities.”

He added that unlike other cloud data warehouses, Snowflake can help customers avoid vendor lock-in by running on multiple cloud providers.
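
Those differentiators show up in how Snowflake exposes compute: virtual warehouses are resized or suspended with SQL, independently of the data they query. The following is a minimal sketch using the snowflake-connector-python package; the account, credentials, warehouse name and table are placeholders.

```python
# Minimal sketch: resize and suspend a Snowflake virtual warehouse (compute)
# without touching stored data, illustrating independent compute scaling.
# Account, credentials, warehouse name and table are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",   # placeholder account identifier
    user="ANALYST",
    password="***",
)
cur = conn.cursor()
try:
    # Scale compute up for a heavy reporting job ...
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET WAREHOUSE_SIZE = 'LARGE'")
    cur.execute("USE WAREHOUSE ANALYTICS_WH")
    cur.execute("SELECT COUNT(*) FROM sales.public.orders")
    print(cur.fetchone())
    # ... then release the compute; storage and data are unaffected.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SUSPEND")
finally:
    cur.close()
    conn.close()
```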

“If the IPO comes through, it will definitely put pressure on the big cloud vendors Amazon, Google and Microsoft who have been expanding their data warehouse solutions in the cloud,” Yuhanna said.

Snowflake has been able to increase its valuation from under $100 million when it emerged from stealth to more than $12 billion by growing its customer base and raising investor capital through eight funding rounds. An IPO has the potential to infuse the company with even more capital, and fundraising is often the chief reason a company goes public.

Other advantages include an exit opportunity for investors, publicity and credibility, a reduced overall cost of capital since private companies often pay higher interest rates to receive bank loans, and the ability to use stock as a means of payment.

Speculation that Snowflake was on a path toward going public gained momentum when Bob Muglia, who took over as CEO of Snowflake in 2014 just before it emerged from stealth, abruptly left the company in April 2019 and was replaced by Frank Slootman.

Before joining Snowflake, Slootman had led ServiceNow and Data Domain through their IPOs, and in October 2019 told an audience in London that Snowflake could pursue an IPO as early as summer 2020.

Three months later, in February 2020, Snowflake raised $479 million in venture capital funding led by Dragoneer Investment Group and Salesforce Ventures, which marked the vendor’s eighth fundraising round and raised its valuation to more than $12.4 billion.

Eight funding rounds is itself unusual, and to increase valuation beyond venture capital investments, companies are generally left with the option of either going public or being acquired.

Meanwhile, last week at its virtual user conference Snowflake revealed expanded cloud data warehouse capabilities that included a new integration with Salesforce that will enable Snowflake to more easily connect to different data sources. And the more capabilities Snowflake has, the more attractive it would be to potential investors in an IPO.

“Snowflake, I believe, has been looking at an IPO for a few years now,” Yuhanna said. “They have had a steady revenue streamline for a while, and many large Fortune companies have been using it for critical analytical deployments. Based on our inquiries, it’s the top data warehouse that customers have been asking about besides Amazon Redshift.”

While Snowflake has finally filed for an IPO, the filing is just one step in the process of going public and it’s not certain the vendor will go through with a public offering.

The IPO market, however, has remained active despite the COVID-19 pandemic.


Yugabyte boosts distributed SQL database with new funding

Among the emerging types of databases enterprises need in the cloud era are distributed SQL databases that enable multiple disparate nodes to act as a single logical database.

With a distributed SQL database, users can build more scalable database deployments than with a traditional SQL database, which was typically designed in an era when on-premises systems were the norm.

One distributed SQL database startup, Yugabyte, has taken an open source approach to building out its platform as a way to grow its technology. On May 19, the company hired a new CEO, former Pivotal Software and Greenplum Software President Bill Cook, to take over from co-founder Kannan Muthukkaruppan, who is now president.

Cook was the CEO of Greenplum from 2006 to 2010, when the company was acquired by EMC. The Greenplum division was spun out as part of Pivotal Inc. in 2013 and Pivotal was subsequently acquired by VMware in 2019.

Yugabyte, founded in 2016 and based in Sunnyvale, Calif., said Tuesday that it raised $30 million in a Series B round of funding led by 8VC and Lightspeed Venture Partners, bringing total funding to date to $55 million. The new investment will help the vendor expand its go-to-market efforts, including a cloud database as a service (DBaaS), according to Yugabyte.

In this Q&A, Cook talks about the growing distributed SQL database market.

Why did Yugabyte raise a funding round in the midst of the COVID-19 pandemic?


Bill Cook: We were doing this fund raising in parallel with the company recruiting me to join. But, you know, the impetus is obviously that there is a big market opportunity in front of us.

As to why $30 million, it was really around what was going to be required to continue the investment on the engineering product side to grow the organization aggressively. And we’re also ramping on the enterprise go-to-market side.

If you think about things like the pandemic and the changes that are going on more globally, it really just starts to accelerate how people think about technology. When you’re an open source database company like we are, with the services that we deliver, I think it is an accelerant.


How does your past experience compare with the new challenge of leading Yugabyte?

Cook: At Greenplum, we were taking on a new market category as a pre-Hadoop, big data type vendor. Like Yugabyte now, PostgreSQL compatibility and the alignment with the open source PostgreSQL community was important to us then as well. At Pivotal, we were helping organizations with the modernization of application portfolios and moving to a new platform like Cloud Foundry [an open source, multi-cloud application platform as a service] helped to show the way.

When I got to know Yugabyte’s co-founders, Kannan Muthukkaruppan and Karthik Ranganathan, I felt it was a similar story to Greenplum, in the sense that it’s an emerging company in a big space.

The most important question I had and that they had for me was really around cultural fit and what are we really trying to do here. We want to build a very special company, where we attract the best and brightest and we’re going after a very big market and doing it in an open source way that can appeal to the largest enterprises around the globe.

Where do you see Yugabyte distributed SQL fitting into the database landscape?

Cook: At the back end of this technology it’s about being distributed. Databases should be able to run across time zones or regions or geographies and do it in a scalable, performant way. Resiliency is obviously the core tenet that you’re looking for in a database.

The decision to be aligned with the PostgreSQL community on the front end of the technology helps to serve the SQL market and leverage that community. I think the combination of open source PostgreSQL compatibility with the technology and the expertise that Yugabyte has is what differentiates us.

When we talk to enterprises, you know, they’re looking to simplify their lives. They want to have an end-to-end story that gives them that capability to move off of a traditional database infrastructure and do it with a trusted partner.

What do you see as the opportunity for DBaaS with distributed SQL?

Cook: Organizations are thinking about how to be able to leverage cloud infrastructure. You know, it’s similar to the experience we had at Pivotal with Cloud Foundry. Users wanted to make sure they could run workloads in Cloud Foundry across private infrastructure or in their public cloud instances.

I think customers will increasingly view cloud services from an infrastructure perspective, as a way to drive cost down, while having application and database capabilities. It’s that simple.

Organizations want a range of offerings, to be able to deploy how they want to deploy.

What’s your view on open source as a model for developing and building a database company?

Cook: I view open source as a requirement today.

From our perspective, the business model of having open source core to everything we do, and then monetizing it as a platform, just gives the community and large enterprises comfort.

In an internal call we had this week, Kannan Muthukkaruppan was talking about all the contributions we’re seeing from the community that help to make the product better. So, I think it’s a win-win-win if you do it right.

Editor’s note: This interview has been edited for clarity and conciseness.


Try an Azure Site Recovery setup for DR needs

In today’s IT world, you can have workloads on premises and in the cloud. One common denominator for each location is a need to plan for disaster recovery. Azure Site Recovery is one option for administrators who need a way to cover every scenario.

Azure Site Recovery is a service used to protect physical and virtual Windows or Linux workloads outside of your primary data center and its traditional on-premises backup system. During the Azure Site Recovery setup process, you can choose either Azure or another data center for the replication target. In the event of a disaster, such as a power outage or hardware failure, your apps can continue to operate in the Azure cloud to minimize downtime. Azure Site Recovery also supports cloud failover of both VMware and Hyper-V virtual infrastructures.

One of the real advantages of this Azure service for a Windows shop is integration. All the functionality is built right into the admin portal and requires little effort to configure beyond the agent installation, which can be done automatically. Offerings from other vendors, such as Zerto and Veeam, work the same way but require additional configuration using a management suite based outside the Azure portal.

Azure Site Recovery pricing

One of the big issues for any platform is cost. Each protected instance costs $25 per month with additional fees for the Azure Site Recovery license, storage in Azure, storage transactions and outbound data transfer. Organizations interested in testing the service can use it for free for the first 31 days.

As with most systems, there are caveats, including how replication and recovery are tied to specific Azure regions depending on the location of the cluster. There is a list of supported configurations in Microsoft’s documentation.

Azure includes the option to fail over to an on-premises location, which reduces the cost to $16 per instance. However, this option requires meeting bandwidth requirements that are not a factor in an Azure-to-Azure failover scenario.

Azure Site Recovery uses vaults to store workload dependencies

Most disaster recovery (DR) environments utilize the concept of crash consistent applications, meaning the application fails over as a whole with all its dependencies. In Azure, you store the VM backups, their respective recovery points and the backup policies in a vault.

These vaults should contain all the servers that make up the services required for a successful failover. (You should test before an emergency occurs to make sure it functions as expected.) It is possible to fail over individual VMs within a replication group if needed; this used to be an all-or-nothing scenario until recently.

How to create a Recovery Services vault

For this Azure Site Recovery setup tutorial, we’ll cover how to configure VMs for site-to-site replication between regions through the Azure portal (portal.azure.com).

As with most Azure tools, the Disaster Recovery menu is on the left-hand side with the other Azure services. Under this menu is the Recovery Services vault option. Create one by filling in the fields as shown in Figure 1.

Figure 1. Create the Recovery Services vault by filling in the project details.

When you have entered all your specifications, click Create to build the vault. The next step is to choose the purpose for the vault. The choice is either for backup or DR.
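
If you prefer to script this step rather than click through the portal, the vault can also be created with Azure's Python SDK. The following is a minimal sketch, assuming the azure-identity and azure-mgmt-recoveryservices packages and a recent SDK version in which vault creation is exposed as a long-running begin_create_or_update call; the subscription ID, resource group, region and vault name are placeholders.

```python
# Minimal sketch: create a Recovery Services vault with the Azure Python SDK
# instead of the portal. Names and region are placeholders; assumes a recent
# azure-mgmt-recoveryservices release where vault creation is a long-running
# operation exposed as begin_create_or_update.
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservices.models import Vault, VaultProperties, Sku

subscription_id = "<your-subscription-id>"
client = RecoveryServicesClient(DefaultAzureCredential(), subscription_id)

vault = Vault(
    location="eastus2",
    sku=Sku(name="Standard"),
    properties=VaultProperties(),
)

# Long-running operation; .result() blocks until the vault exists.
poller = client.vaults.begin_create_or_update("dr-demo-rg", "MyRecoveryVault", vault)
print(poller.result().id)
```

The same operation can also be driven from the Azure CLI or an ARM template if Python isn't part of your tooling.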

Next, add the VMs. To start, from the vault choices select Site Recovery.

From the on-premises option, click Replicate Application to open a wizard to add VMs. Next, click Review + Start replication to start the creation and replication process, which can take several minutes. For ease of access and experimentation purposes, I suggest pinning it to your dashboard. Opening the vault provides a health overview of the site and clicking on each item shows details about the replication status as shown in Figure 2.

Figure 2. The Azure Site Recovery setup process replicates the protected instances to the defined target, either an on-premises location or in Azure.

This completes the creation of a group with two protected VMs. Every VM added to that resource group automatically becomes a protected member of the vault. By default, the DR failover is set to a maximum duration of 24 hours. After the initial configuration, you can adjust the failover duration and snapshot frequency from the Site Recovery Policy – Retention policies page.

The last step is to create a recovery plan. From the vault, select Create recovery plan and then + Recovery plan, and give it a name, as shown in Figure 3, where our example is called MyApplicationRecoveryPlan. Choose the source, either the on-premises location or Azure, and then the Azure target.

Figure 3. The menu displays all your configured recovery plans in Azure Site Recovery. From here, you can execute a test failover to verify your settings.

When the plan is complete, open it and verify it works properly by clicking Test for a nondisruptive assessment that checks the replication in an isolated environment. This process can detect any problems related to services and connectivity the application needs to function in a failover setting.

This tutorial covers some of the basic functionality of Azure Site Recovery. For more granular control, there are many more options available to provide advanced functionality.


Dell EMC Isilon file storage floats into Google public cloud

Dell EMC spun out a flurry of cloud initiatives to bolster one of the few areas where its products lag competing storage vendors.

The infrastructure vendor teamed with Google to make its Dell EMC Isilon OneFS file system available for scale-out analytics in the Google Cloud Platform (GCP). Dell EMC said Google cloud customers can scale up to 50 petabytes of Isilon file storage in a single namespace, with no required application changes.

The managed NAS offering uses Google compute to run software instantiations of Isilon OneFS. The service is part of Dell Technologies Cloud, an umbrella branding for Dell EMC’s cloud options. This is Google’s second major foray into file system storage within the last year. It acquired startup Elastifile, whose scale-out system is integrated in Google Cloud Filestore.

Dell Technologies Cloud hybrid cloud infrastructure enhancements also include native Kubernetes integration in VMware vSphere, along with more flexible compute and storage options.

File storage written for cloud

Dell EMC allows customers to tier local file storage to all three public cloud providers via its Isilon CloudPools, but the Google partnership is its first effort at writing OneFS specifically for cloud-native workloads. AWS has the largest market share of the public cloud market, followed by Microsoft Azure and Google Cloud Platform.

Dell did not address if it plans similar integrations with AWS or Microsoft Azure, but it represents a likely path, especially as enterprises deploy multiple hybrid clouds. File pioneer NetApp started offering cloud-based versions of its OnTap operating system several years ago, while all-flash specialist Pure Storage recently added file services to its block-based FlashArray flagship array. Hewlett Packard Enterprise also sells file services in the cloud on ProLiant servers through an OEM deal with Qumulo, whose founders helped to engineer the original Isilon NAS code.


“Dell has to continue to execute on this strategy with the other major cloud providers. This can’t be a one-and-done [with Google]. We’ll need to see more improvements from Dell in the next six to 12 months to show they are able to bring their file storage technologies to the cloud,” said Matt Eastwood, a senior vice president of enterprise infrastructure at IDC.

Although Dell and Google publicly acknowledged a beta version in 2018, the formal OneFS cloud launch comes a little more than one year after Thomas Kurian took over as CEO at Google Cloud Platform. It would be a noteworthy twist if Kurian’s arrival helped spur the Dell product development: George Kurian, his twin brother and CEO at NetApp, has said Dell is “years behind” NetApp’s Data Fabric strategy.

Brian Payne, a Dell EMC vice president, said enterprises have struggled to run traditional file systems that fully exploit Google’s fast compute services for analyzing large data sets. Enterprises can purchase the cloud version of Dell EMC Isilon OneFS with the required compute services in the Google Cloud Platform portal.

“We found that customers are using Google to run their AI engines or data services, and we paired with Google to help them process and store very large content files in Isilon,” Payne said.

Node requirements flexed for Dell Technologies Cloud

Dell’s strategy for unifying its hybrid cloud offerings with public cloud technologies has evolved, although its ownership of VMware provides assets supported by Dell EMC storage competitors.

Dell Technologies Cloud integrates VMware Cloud Foundation (VCF) and Dell EMC VxRail hyper-converged infrastructure as a combined stack to run workload domains, software-defined storage, software-defined networking and virtualized compute. Customers can buy Dell Technologies Cloud and manage it locally or as an on-demand service.

VMware Cloud Foundation 4.0 includes native Kubernetes integration that allows container orchestration to be managed in vSphere. The Kubernetes piece is part of Project Pacific, the code name for a major redesign of the vSphere control plane. Payne said it allows cloud-native workloads to run directly on the Dell Technologies Cloud platform, with Dell handling lifecycle management.

Dell Technologies On Demand offers the same services as a consumption license. Payne said Dell’s new entry requirement is a minimum of four nodes, down from eight nodes, and users can scale capacity across multiple racks.

The Dell Technologies Cloud refresh includes updates to Dell EMC SD-WAN software-defined networking, based on the VeloCloud technology VMware acquired in 2017. Dell also added support for Dell EMC PowerProtect Cyber Recovery data protection to VMware Cloud, which uses Dell EMC storage to extend private IaaS deployments to public clouds.


VMware Cloud Foundation to run on new Google Cloud service

Google Cloud users can now run VMware Cloud Foundation workloads, thanks to a new managed service deal between the two companies.

The service, dubbed Google Cloud VMware Engine, builds upon the company’s existing VMware offering. VMware users can run VMware’s full Cloud Foundation stack on dedicated bare-metal servers provided by Google. The Cloud Foundation stack includes VMware vSphere, vCenter, vSAN, NSX-T and HCX.

Last July the two companies signed a deal that allowed VMware users to run workloads natively on Google Cloud, which gave VMware users an alternative to AWS. As part of that deal, CloudSimple administered the platform running on Google Cloud with Google-provided first-line support.

Google subsequently acquired CloudSimple in November. The new Google Cloud VMware Engine service offers unified billing, a UI within the Google Cloud console, and integrations with native Google Cloud services such as BigQuery, Anthos and Cloud AI. Google handles lifecycle management of the Cloud Foundation stack.

CloudSimple’s technology has also powered Microsoft Azure’s similar VMware service. This month, Microsoft delivered a preview of the “next evolution” of that service but offered few specifics on what will change. “Our newly announced service Azure VMware Solution does not use CloudSimple, but the Azure VMware Solution by CloudSimple continues to be a Microsoft Azure GA service backed by Microsoft SLAs,” Microsoft said in a statement.

It makes sense for Google to aggressively go after VMware workloads, as each company has complementary positions in a number of markets, said Dana Gardner, principal analyst at Interarbor Solutions LLC in Gilford, N.H.

“Google Cloud needs more enterprise traction and VMware needs to get with more cloud partners, and it doesn’t upset any existing relationship they have with other [cloud partners],” he said.

Another reason Google is strengthening its relationship with VMware is the anticipated hit Google will take over the next couple of quarters to its advertising revenues because of the COVID-19 pandemic.

“They will be looking more toward their cloud business to generate more earnings as a way to compensate,” Gardner said.

The downside of VMware cloud choices

While enterprises now have their pick of VMware hosting options among the cloud hyperscalers, this is a case where choice can lead to complexity, analysts said.

They point to the added administrative costs users take on in introducing another cloud platform in their environment along with the time it takes to get C-suite level approval. Given the technical similarities among the top three cloud providers, the bureaucratic headaches may not be worth the trouble.

“It gives VMware users an option, but how rich an opportunity is it for Google with many VMware users already deploying multiple clouds?” said Brian Kirsch, an IT architect and instructor at Milwaukee Area Technical College. “Also, with just one cloud provider, sometimes it’s easier for IT organizations to have all their cloud billing in one place.”

The pandemic has created a rich opportunity for cloud providers to form strategic partnerships, with so many corporate users working remotely.


“Amazon has never looked stronger and Microsoft is also doing very well financially,” Gardner said. “So, other [cloud] players in the field looking to become number three, now is the time to get really aggressive because this opportunity may not be there later on,” he said.

Another analyst sees both the advantages and disadvantages for VMware shops incorporating Google Cloud, especially if they already support one of Google’s competitors.

“A lot of companies have selected Google for a particular set of reasons, so getting VMware for another cloud can be a hassle,” said Gary Chen, research director, overseeing IDC’s software-defined compute practice. “But Google has some pretty unique capabilities in the area of AI, so you can run VMware workloads and still access those Google services to work in your VMware environment.”

The Google Cloud VMware Engine rollout begins with availability limited to two U.S. regions today. Eight more around the world will be added in the second half of the year, according to Google.


Oracle Analytics for Cloud HCM gives new capabilities to HR

The just released Oracle Analytics for Cloud HCM platform can do some familiar things for HR. It can, for instance, produce reports about profit and revenue per employee. But it also takes these familiar analytics one step further. HR can use the tool to, for instance, hunt for relationships between employee engagement and revenue data.

Until the release of Oracle Analytics for Cloud HCM, delivering some types of HR and finance data may have required coordination with finance. But the tool now gives HR the ability to run these reports as needed, according to the vendor.

Indeed, Mark Brandau, an analyst at Forrester Research, said Oracle Analytics for Cloud HCM may give HR managers a new ability to run their own analytics. HR will “have additional ways to leverage the people and operational data to make better decisions,” he said.

But HR’s use of analytics broadly “depends on how the solutions are delivering the data and making it consumable every day for people who aren’t familiar with data,” Brandau said.

Oracle Analytics for Cloud HCM, announced via an online conference, is available to its HCM users. It’s part of the vendor’s Oracle Analytics for Applications product line. 

Bruno Aziza, group vice president of Oracle Analytics, said the HCM analytics application stores and makes data available for analytics in an “autonomous data warehouse,” which has security, repair and high availability features that don’t require user intervention. It will have a set of analytical HR modules around employee data, but it will also allow mashups of data from other sources, he said.

Use of unstructured data

Where Aziza believes the analytics application will differentiate itself is in its ability to take unstructured survey data, or something like 360-degree feedback data, and combine it with financial data.

For instance, take a firm analyzing the performance of sales teams across geographies, Aziza said. In this process, HR may use Oracle Analytics for Cloud HCM and discover a performance problem in a specific region.


“Maybe there are some dimensions that explain this lack of performance related to HR components,” Aziza said. It could be a consequence of new employees, dissatisfied employees or engagement issues, he said.

All the major HCM vendors — Oracle, Workday, SuccessFactors, Ultimate Software — have made significant investments in analytics, Forrester’s Brandau said. Part of the drive to analytics is to “help standardize some of the practices and metrics and the way that HR operates,” he said.

At its online launch, Oracle hosted a customer panel that included Katherine Thompson, head of reporting and analytics for the Metis Program at the Home Office in the U.K. The Home Office is responsible for immigration, security and other issues. The Metis Program is the name for a migration to cloud-based ERP using Oracle.

Thompson said the Home Office has been using Oracle’s analytics to identify ways to improve the time it takes to hire someone. It involves different systems, including recruiting and security clearances. “We couldn’t really see where the blockages were, and we worked closely with the HR teams to make relationships,” she said. The Home Office has since sped up the hiring process, she said.


Dremio accelerates cloud data lake queries for AWS

Dremio on Tuesday released its cloud data lake engine offerings into general availability, with a new purpose-built AWS edition that provides enhanced data query capabilities.

The Dremio AWS Edition expands on the Santa Clara, Calif., data lake vendor’s Data Lake Engine technology base with a specially optimized system for AWS users.

Among the new features in the AWS edition is an elastic engines capability that can help to accelerate cloud data lake queries, and a new parallel project feature that helps organizations with scalability to better enable automation across multiple Dremio instances. Dremio had previously made its data lake engine available on AWS but had not developed a version that was optimized for Amazon’s cloud.

The parallel project and elastic engines capabilities in Dremio’s AWS Edition can help data consumers manage their time and infrastructure more efficiently, said Kevin Petrie, vice president of research at Eckerson Group.

The Dremio platform provides simple access for a wide range of analysis and fast results for reporting, which is becoming increasingly important to enterprises with the sudden onset of a new business era triggered by the COVID-19 pandemic, Petrie said.

“COVID-19 accelerates the cloud modernization trend and therefore the adoption of cloud-native object stores for data lakes,” Petrie said. “Dremio’s AWS marketplace offering provides enterprises the opportunity to modernize their data lakes on AWS infrastructure.”

The AWS Edition dashboard provides visibility into data lake storage and data sets.

Big money for Dremio’s cloud data lake efforts

The AWS Edition release is the first major launch for Dremio since it made public a $70 million Series C funding round on March 26, bringing total funding to $212 million.

Tomer Shiran, co-founder and chief product officer at Dremio, said the funding was a “great vote of confidence” for his firm, especially given the current global pandemic. Analytics and business intelligence are two key categories that many large organizations that Dremio targets will continue to spend on, even during the COVID-19 crisis, he said.

“Part of the reason for the large investment even during an economic crisis, and obviously a health crisis, is the fact that we’re playing in such a hot space,” Shiran said.

How Dremio’s elastic engines improve cloud data lake queries

Most of Dremio’s customers already use the vendor’s data lake engine in the cloud, either on AWS or on Microsoft Azure, but the new edition advances Dremio’s AWS offering specifically, Shiran noted.


“The idea is to drastically reduce the complexity and make it much easier for companies to get started with Dremio on AWS, and to take advantage of all the unique capabilities that Amazon brings as a platform,” Shiran said.

He added that typically with query engines there is a single execution cluster, even if multiple sets of workloads and different users are on the same system. The approach requires organizations to size their query engine deployment for peak workload.

With the new AWS Edition, the elastic engines feature is debuting, providing a separate query engine for each workload. With elastic engines, the query engine elastically scales up or down based on demand, rather than needing to run one large cluster that has been sized for peak utilization.

“This is really taking advantage of the fact that, in the cloud, Amazon is willing to rent you servers by the second,” Shiran said.

How elastic engines work

Dremio is managing the AWS EC2 (Elastic Compute Cloud) instances on the user’s behalf, handling the configuration and optimization for autoscaling the required resources for running the data lake query engine.

“So, what you do with this AWS Edition of Dremio is you spin up literally one instance of Dremio from the Amazon Marketplace and that’s all you’re interacting with ever is that one instance,” Shiran said. “Automatically, behind the scenes it is using Amazon APIs to provision and deprovision resources.”
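
Dremio hasn't detailed its orchestration code, but the pattern Shiran describes, launching EC2 capacity for a workload on demand and terminating it when the workload goes idle, can be sketched with boto3. The snippet below is an illustration only, not Dremio's implementation; the AMI ID, instance type, tag names and the notion of an "idle" workload are all assumptions.

```python
# Rough illustration of per-workload elastic scaling on EC2 with boto3.
# Not Dremio's implementation; AMI ID, instance type and tags are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_up(workload: str, count: int) -> list[str]:
    """Launch executor nodes dedicated to one workload's query engine."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder engine image
        InstanceType="m5.2xlarge",
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "workload", "Value": workload}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def scale_down(workload: str) -> None:
    """Terminate all executor nodes tagged for an idle workload."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:workload", "Values": [workload]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.terminate_instances(InstanceIds=ids)
```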

The elastic engines feature is first available in the AWS Edition of Dremio, but the vendor plans to expand the capability with future support for Microsoft Azure and Google Cloud Platform, as well as on-premises Kubernetes environments.

Parallel projects enables multi-tenancy

Another addition in the Dremio AWS Edition is a feature the company has dubbed parallel projects. Shiran said parallel projects is an effort to make it easier to achieve multi-tenancy for Dremio deployments.

“So, now we have a notion of a project, where all the state of your Dremio environment is saved and you can shut it down entirely and then bring it back up later,” he said.

With parallel projects, an organization can choose to have different environments for development and production. Each of the environments is also automatically backed up and gets automated patching for upgrades as well.

Dremio will continue to focus on the cloud and ease of use for customers, Shiran said.

“We are investing in making Dremio easier for people who want to run in the cloud and you’re seeing the first step of that with the AWS Edition, but we’re going to extend that to other clouds as well,” he said.


Learn to manage Office 365 ProPlus updates

A move to the cloud can be confusing until you get your bearings, and learning how to manage Office 365 ProPlus updates will take some time if you want to make sure they’re done right.

Office 365 is a bit of a confusing name. It is actually a whole suite of programs based on a subscription model, mostly cloud based. However, Office 365 ProPlus is a suite inside a suite: a subset collection of software contained in most Office 365 subscriptions. This package is the client install that contains the programs everyone knows: Word, Excel, PowerPoint and so on.

Editor’s note: Microsoft recently announced it would rename Office 365 ProPlus to Microsoft 365 Apps for enterprise, effective on April 21.

For the sake of comparison, Office 2019, Office 2016 and older versions are the on-premises managed suite with the same products, but with a much slower rollout pace for updates and fixes. Updates for new features are also slower and may not even appear until the next major version, which might not be until 2022 based on Microsoft’s release cadence.

Rolling the suite out hasn’t changed much in many years. You can push out Office 365 ProPlus updates the same way you do other Windows updates, namely with Windows Server Update Services (WSUS) and Configuration Manager. Microsoft gave the latter a recent branding adjustment and is now referring to it as Microsoft Endpoint Configuration Manager.

The Office 365 ProPlus client needs a different approach, because updates are not delivered or designed in the same way as the traditional Office products. You can still use Configuration Manager, but the setup is different.

Selecting the update channel for end users

Microsoft gives you the option to determine when your users will get new feature updates. There are five update channels: Insider Fast, Monthly Channel, Monthly Channel (Targeted), Semi-Annual Channel and Semi-Annual Channel (Targeted). Insider Fast gets updates first, Monthly Channel updates arrive on a monthly basis and Semi-Annual updates come every six months. Users in the Targeted channels get these updates first so they can report back to IT with any issues or other feedback.

You can configure the channel as part of an Office 365 ProPlus deployment with the Office Deployment Tool (ODT), but this only works at the time of install. There are two ways to configure the channel after deployment: Group Policy and Configuration Manager.

Using Group Policy for Office 365 ProPlus updates

Using Group Policy, you can set which channel a computer gets by enabling the Update Channel policy setting under Computer Configuration\Policies\Administrative Templates\Microsoft Office 2016 (Machine)\Updates. This is a registry setting located at HKLM\Software\Policies\Microsoft\office\16.0\common\officeupdate\updatebranch. The options for this value are: Current, FirstReleaseCurrent, InsiderFast, Deferred, FirstReleaseDeferred.

Managing Office 365 ProPlus updates from Group Policy requires the administrator to select the Enabled option in the Update Channel policy setting.

A scheduled task called Office Automatic Updates 2.0, which is deployed as part of the Office 365 ProPlus install, reads that setting and applies the updates.

You can use standard Group Policy techniques to target policies to specific computers or apply the registry settings.
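
For one-off testing, the same registry value can also be set with a short script. The following is a minimal sketch using Python's built-in winreg module; it simply writes the updatebranch value described above, must be run from an elevated prompt on the target machine, and is not a substitute for managing the setting through Group Policy at scale.

```python
# Sketch: set the Office 365 ProPlus update channel by writing the
# updatebranch policy value described above. Requires an elevated prompt
# on the target machine; Group Policy remains the supported approach at scale.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\office\16.0\common\officeupdate"
CHANNEL = "Deferred"  # Current, FirstReleaseCurrent, InsiderFast,
                      # Deferred or FirstReleaseDeferred

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "updatebranch", 0, winreg.REG_SZ, CHANNEL)

print(f"Update channel policy set to {CHANNEL}")
```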

Using Configuration Manager for Office 365 ProPlus updates

You can use Configuration Manager, along with ODT or Group Policy, to define which channel a client is in, but it also works as a software update point rather than using WSUS alone or downloading straight from Microsoft’s servers. With this method, you will need to ensure that the Office 365 ProPlus builds for all the deployed channels are available from the software update point in Configuration Manager.

Office 365 ProPlus updates work the same way as other Windows updates: Microsoft releases the updates, a local WSUS server downloads them, Configuration Manager synchronizes with the WSUS server to copy the updates, and then Configuration Manager distributes the updates to the distribution points. You need to enable the Office 365 Client product on WSUS for this approach to work.

Set up Configuration Manager to handle Office 365 ProPlus updates by selecting the Office 365 Client product on the WSUS server.

It’s also possible to configure clients just to get the updates straight from Microsoft if you don’t want or need control over them.

Caveats for Office 365 ProPlus updates

When checking a client’s channel, the Office 365 ProPlus client will only show the channel it was in during its last update. Only when the client gets a new update will it show which channel it obtained the new update from, so the registry setting is a better way to check the current configuration.

When an Office 365 ProPlus client detects an update, it will download a compressed delta update. However, if you change the client to a channel that is on an older version of Office 365 ProPlus, the update will be much larger but still smaller than the standard Office 365 ProPlus install. Also, if you change the channel multiple times, it can take up to 24 hours for a second version change to be recognized and applied.

As always with any new product: research, test and build your understanding of these mechanisms before you roll out Office 365 ProPlus. If an update breaks something your business needs, you need to know how to fix that situation across your fleet quickly.


Zendesk Relater primes customers for remote call center work

Zendesk, the cloud platform vendor that made its name with its Support Suite customer service platform for SMBs, is moving into CRM. But during the coronavirus crisis, the company quickly moved its own operations to at-home virtual work as it supports its 150,000 users, many of which are launching remote call centers amid spikes in customer service interactions.

“Even companies that are already flexible and using Zendesk are experiencing dramatic increases in their volumes, because a lot of people are trying to work remote right now,” said Colleen Berube, Zendesk CIO. “We have a piece of our business where we are having to help companies scale up their abilities to handle this shift in working.”

Even though the vendor did support some remote work before the coronavirus work-from-home orders hit, immediately rolling out work-from-home for Zendesk’s entire organization wasn’t straightforward, because of laptop market shortages. Like many companies, Zendesk needed a culture shift to move its entire operation to telecommuting, which included new policies allowing workers to expense some purchases for home-office workstations.

“We don’t have any intention of recreating the entire workplace at home, but we wanted to give them enough so they could be productive,” Berube said.

Zendesk CEO Mikkel Svane delivers the Zendesk Relater user conference keynote from his home Tuesday.

Among Zendesk’s prominent midmarket customers so far are travel and hospitality support desks “dealing with unprecedented volumes of cancellations and refunds,” as well as companies assisting remote workforces shipping hardware to their employees, said Zendesk founder and CEO Mikkel Svane at the Zendesk Relater virtual user conference Tuesday.

“Using channels like chat have helped these customers keep up with this volume,” Svane said.

Zendesk has seen interest and network use in general grow among customers who need to bring remote call centers online during shelter-in-place orders from local and state governments. Easing the transition for users and their customers, Berube said, are self-service chatbots that Zendesk has developed over the last few years. She added that she’s seen Zendesk’s own AnswerBot keep tickets manageable on its internal help desk, which services remote employees as well as partners.

During Relater, Zendesk President of Products Adrian McDermott said that Zendesk’s AI-powered bots have saved users 600,000 agent hours by enabling customer self-service, adding that the number of Zendesk customers using AI for customer support increased more than 90% over the last year. He said the company is betting big on self-service becoming the vast majority of customer service.


“Self-service is going to be everywhere,” McDermott said. “It’s not just going to a knowledge base and reading the knowledge base … but it’s about the user being at the center of the conversation and controlling the conversation.”

While some larger cloud customer experience software vendors such as Oracle, Salesforce and Google canceled even the virtual conferences that were planned in lieu of live user events, Zendesk assembled a set of pre-recorded presentations from executives at home and other speakers scheduled for its canceled Miami Relate conference and put on a virtual user conference renamed “Zendesk Relater.”

Earlier this month, Zendesk released upgrades to its Sunshine CRM and Support Suite platforms. At Relater, the company announced a partnership with Tata Consultancy Services to implement Zendesk CRM at large enterprises.

Zendesk has the reputation of being a customer service product tuned for B2C companies, specializing in quick interactions. Its CRM system also has potential to serve that market, said Kate Leggett, Forrester Research analyst. Whether that will translate to enterprises and gain traction in the B2B market remains to be seen.

“It’s very different from the complex products that Microsoft and Salesforce have for that long-running sales interaction, with many people on the seller side and many people on the buyer side,” Leggett said.
