Zendesk Relater primes customers for remote call center work

Zendesk, the cloud platform vendor that made its name with its Support Suite customer service platform for SMBs, is moving into CRM. But during the coronavirus crisis, the company quickly moved its own operations to at-home virtual work as it supports its 150,000 users, many of which are launching remote call centers amid spikes in customer service interactions.

“Even companies that are already flexible and using Zendesk are experiencing dramatic increases in their volumes, because a lot of people are trying to work remote right now,” said Colleen Berube, Zendesk CIO. “We have a piece of our business where we are having to help companies scale up their abilities to handle this shift in working.”

Even though the vendor did support some remote work before the coronavirus work-from-home orders hit, immediately rolling out work-from-home across Zendesk’s entire organization wasn’t straightforward because of laptop shortages. Like many companies, Zendesk needed a culture shift to move an entire operation to telecommuting, including new policies that allowed workers to expense some purchases for home-office workstations.

“We don’t have any intention of recreating the entire workplace at home, but we wanted to give them enough so they could be productive,” Berube said.

Zendesk CEO Mikkel Svane delivers the Zendesk Relater user conference keynote from his home Tuesday.

Among Zendesk’s prominent midmarket customers so far are travel and hospitality support desks “dealing with unprecedented volumes of cancellations and refunds,” as well as companies assisting remote workforces shipping hardware to their employees, said Zendesk founder and CEO Mikkel Svane at the Zendesk Relater virtual user conference Tuesday.

“Using channels like chat has helped these customers keep up with this volume,” Svane said.

Zendesk has seen interest and network use in general grow among customers who need to bring remote call centers online during shelter-in-place orders from local and state governments. Easing the transition for users and their customers, Berube said, are self-service chatbots that Zendesk has developed over the last few years. She added that she’s seen Zendesk’s own AnswerBot keep tickets manageable on its internal help desk, which services remote employees as well as partners.

During Relater, Zendesk President of Products Adrian McDermott said that Zendesk’s AI-powered bots have saved users 600,000 agent hours by enabling customer self-service, adding that the number of Zendesk customers using AI for customer support increased more than 90% over the last year. He said the company is betting big on self-service becoming the vast majority of customer service.

[Self-service is] not just going to a knowledge base and reading the knowledge base … but it’s about the user being at the center of the conversation and controlling the conversation.
Adrian McDermott, President of Products, Zendesk

“Self-service is going to be everywhere,” McDermott said. “It’s not just going to a knowledge base and reading the knowledge base … but it’s about the user being at the center of the conversation and controlling the conversation.”

While some larger cloud customer experience software vendors, such as Oracle, Salesforce and Google, canceled even the virtual conferences they had planned in lieu of live user events, Zendesk assembled pre-recorded presentations from executives at home and other speakers scheduled for its canceled Miami Relate conference, putting on a virtual user conference renamed “Zendesk Relater.”

Earlier this month, Zendesk released upgrades to its Sunshine CRM and Support Suite platforms. At Relater, the company announced a partnership with Tata Consultancy Services to implement Zendesk CRM at large enterprises.

Zendesk has the reputation of being a customer service product tuned for B2C companies, specializing in quick interactions. Its CRM system also has potential to serve that market, said Kate Leggett, Forrester Research analyst. Whether that will translate to enterprises and gain traction in the B2B market remains to be seen.

“It’s very different from the complex products that Microsoft and Salesforce have for that long-running sales interaction, with many people on the seller side and many people on the buyer side,” Leggett said.


New AI tools in the works for ThoughtSpot analytics platform

The ThoughtSpot analytics platform has only been available since 2014, but in those six years the vendor has quickly gained a reputation as an innovator in the field of business intelligence software.

ThoughtSpot, founded in 2012 and based in Sunnyvale, Calif., was an early adopter of augmented intelligence and machine learning capabilities, and even as other BI vendors have begun to infuse their products with AI and machine learning, the ThoughtSpot analytics platform has continued to push the pace of innovation.

With its rapid rise, ThoughtSpot attracted plenty of funding, and an initial public offering seemed like the next logical step.

Now, however, ThoughtSpot is facing the same uncertainty as most enterprises as COVID-19 threatens not only people’s health around the world, but also organizations’ ability to effectively go about their business.

In a recent interview, ThoughtSpot CEO Sudheesh Nair discussed all things ThoughtSpot, from the way the coronavirus is affecting the company to the status of an IPO.

In part one of a two-part Q&A, Nair talked about how COVID-19 has changed the firm’s corporate culture in a short time. Here in part two, he discusses upcoming plans for the ThoughtSpot analytics platform and when the vendor might be ready to go public.

One of the main reasons the ThoughtSpot analytics platform has been able to garner respect in a short time is its innovation, particularly with respect to augmented intelligence and machine learning. Along those lines, what is a recent feature ThoughtSpot developed that stands out to you?

ThoughtSpot CEO Sudheesh Nair

Sudheesh Nair: One of the main changes happening in the world of data right now is that the source of data is moving to the cloud. To deliver AI-based, high-speed innovation on data, ThoughtSpot was really counting on running the data in a high-speed in-memory database, which is why ThoughtSpot was mostly focused on on-premises customers. One of the major changes in the last year is that we delivered what we call Embrace. With Embrace we are able to move to the cloud and leave the data in place. This is critical because, as data moves to the cloud, the cost of running computations gets higher; computing is very expensive in the cloud.

With ThoughtSpot, what we have done is deliver this on platforms like Snowflake, Amazon Redshift, Google BigQuery and Microsoft Synapse. So now, with all four major cloud data warehouses fully supported, we have the capability to serve all of our customers and leave all of their data in place. This reduces the cost to operate ThoughtSpot, so the value we deliver and the return on investment will be higher. That’s one major change.

Looking ahead, what are some additions to the ThoughtSpot analytics platform customers can expect?

Nair: If you ask people who know ThoughtSpot — and I know there are a lot of people who don’t know ThoughtSpot, and that’s OK — … if you ask them what we do they will say, ‘search and AI.’ It’s important that we continue to augment on that; however, one thing that we’ve found is that in the modern world we don’t want search to be the first thing that you do. What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?

What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?
Sudheesh Nair, CEO, ThoughtSpot

Let’s say you’re responsible for sales in Boston, and you told the system you’re interested in figuring out sales in Boston — that’s all you did. Now the system understands what it means to you, and then runs multiple models and comes back to you with questions you’ll be interested in, and most importantly with insights it thinks you need to know — it doesn’t send a bunch of notifications that you never read. We want to make sure that the insights we’re sending to you are so relevant and so appropriate that every single one adds value. If one of them doesn’t add value, we want to know so the system can understand what it was that was not valuable and then adjust its algorithms internally. We believe that the right action and insight should be in front of you, and then search can be the second thing you do prompted by the insight we sent to you.

What tools will be part of the ThoughtSpot analytics platform to deliver these kinds of insights?

Nair: There are two features we are delivering around it. One is called Feed, which is inspired by social media: curating insights, conversations and opinions around facts. Right now social media is all opinion, but imagine a fact-driven social media experience where someone says they had a bad quarter and someone else says it was great, and then data shows up so it doesn’t become an opinion based on another opinion. It’s important that it should be tethered to facts. The second one is Monitor, which is the primary feature where the thing you were looking for shows up even before you ask, in the format that you like. It could be mobile, could be notifications, could be an image.

Those two features are critical innovations for our growth, and we are very focused on delivering them this year.

The last time we spoke, we talked about the possibility of ThoughtSpot going public, and you were pretty open in saying that’s something you foresee. About seven months later, where do plans for going public currently stand?

Nair: If you had asked me before COVID-19 I would have had a bit of a different answer, but the big picture hasn’t changed. I still firmly believe that a company like ThoughtSpot will benefit tremendously from going public, because our customers are massive companies, and those customers like to spend more with a public company and the trust that comes with it.

Having said that, I talked last time about building a team and predictability, and I feel seven months later that we have built an executive team that can be best in class when it comes to public companies. But going public also requires being predictable, and we’re getting to that right spot. I think the next two quarters will be somewhat fluid, which may set us back when it comes to building a plan to take the company public. But that is basically it. Taken one by one, we have a good product market, we have good business momentum, we have a good team, and we just need to put together the history that is necessary so that the business is predictable and an investor can appreciate it. That’s what we’re focused on. There might be a short-term setback because of what the coronavirus might throw at us, but it’s definitely going to be a couple more quarters of work.

Does the decline in the stock market related to COVID-19 play into your plans at all?

Nair: It’s absolutely an important event that’s going on and no one knows how it will play out, but when I think about a company’s future I never think about an IPO as a few quarters event. It’s something we want to do, and a couple of quarters here or there is not going to make a major difference. Over the last couple of weeks, we haven’t seen any softness in the demand for ThoughtSpot, but we know that a lot of our customers’ pipelines are in danger from supply impacts from China, so we will wait and see. We need to be very close to our customers right now, helping them through the process, and in that process we will learn and make the necessary course corrections.

Editor’s note: This interview has been edited for clarity and conciseness.


Commvault storage story expands with Hedvig for primary data

Of all the changes data protection vendor Commvault made in the last year, perhaps the most striking was its acquisition of primary storage software startup Hedvig.

The $225 million deal in October 2019 — eight months into Sanjay Mirchandani’s tenure as CEO — marked Commvault’s first major acquisition. It also brought the backup specialist into primary storage as it tries to adapt to meet demand for analytics on data everywhere.

Hedvig gives Commvault a distributed storage platform that spans traditional and cloud-hosted workloads. The Hedvig software runs primary storage on commodity hardware and has already been integrated into the Commvault storage software stack, including the new Commvault Metallic SaaS-based backup.

Don Foster, a vice president of storage solutions at Commvault, said data centers want to centralize all their data, from creation to retention, without adding third-party endpoints.

“We envision Hedvig as a way to ensure that your storage and backup will work in a symbiotic fashion,” Foster said.

Hedvig provides unified storage that allows Commvault to tackle new cloud-application use cases. The storage software runs on clustered commodity nodes as a distributed architecture for cloud and scale-out file and object storage across multiple hypervisors.

Commvault plans to use Hedvig to converge storage and data management and enhance Commvault HyperScale purpose-built backup appliances. Revenue from Commvault HyperScale appliances was up 10% year over year last quarter, and the vendor said six of its top 10 customers have deployed HyperScale appliances.

Commvault has expanded Hedvig into more primary workloads with the addition of support for the Container Storage Interface and erasure coding. In the near term, Hedvig will also remain available for purchase as primary storage, and existing Hedvig customers with in-force contracts will be supported. The larger plan is to integrate Hedvig as a feature in the Commvault Complete suite of backup and data management tools, Foster said.
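
To make the Container Storage Interface point concrete, here is a minimal sketch, using the Kubernetes Python client, of how an administrator might register a storage class backed by a CSI driver such as Hedvig's. The provisioner name and the replication parameter are illustrative assumptions, not documented Hedvig or Commvault values.

    # Sketch: register a StorageClass that points at a CSI driver.
    # The provisioner string and parameters below are hypothetical
    # stand-ins, not documented Hedvig/Commvault values.
    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig

    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="hedvig-replicated"),
        provisioner="csi.hedvig.example",        # hypothetical driver name
        parameters={"replicationFactor": "3"},   # hypothetical policy knob
    )
    client.StorageV1Api().create_storage_class(sc)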

Integrating technology and integrating culture

Mirchandani replaced retired CEO Bob Hammer, who led Commvault for 20 years. The change at the top also brought about a raft of executive changes and the launch of the Metallic SaaS offering under a brand outside of Commvault. But the Hedvig deal was most significant in moving the Commvault storage strategy from data protection to data management — a shift backup vendors have talked about for years.

Because Hedvig didn’t have a large installed base, the key for Commvault was gaining access to Hedvig’s engineering IP, said Steven Hill, a senior analyst of applied infrastructure and storage technologies at 451 Research, part of S&P Global Market Intelligence.

Hedvig gives Commvault a software-defined storage platform that combines block, file and object storage services, along with cloud-level automation and support for containers.
Steven Hill, Senior analyst of applied infrastructure and storage technologies, 451 Research

“Growing adoption of hybrid cloud infrastructure and scale-out secondary storage has changed the business model for backup vendors. Hedvig gives Commvault a software-defined storage platform that combines block, file and object storage services, along with cloud-level automation and support for containers. It checks a lot of boxes for the next generation of storage buyers,” Hill said.

“The future of hybrid secondary storage lies in the management of data based on the business value of its content, and makes the need for broader, cloud-optimized information management a major factor in future storage buying decisions,” Hill added. He said Cohesity and Rubrik “discovered this [idea] a while ago” and other backup vendors now are keying in on secondary storage to support AI and analytics.

A research note by IDC said the Hedvig deal signals “orthogonal and expansionary thinking” by Commvault that paves a path to primary storage and multi-cloud data management. Commvault is a top five backup vendor in revenue; its revenue has declined year over year for each of the last four quarters. Commvault reported $176.3 million in revenue last quarter, down 4.3% from the same period a year ago.

IDC researchers note the difference between traditional Commvault storage and the Hedvig product. Namely, that Commvault is a 20-year-old public company in an entrenched market, while Hedvig launched in 2018. The companies share only a few mutual business partners and resellers.

“Market motion matters here, as each company is selling into different buyer bases. … Melding a unified company and finding synergies between different buying centers may be more difficult than the technical integration,” IDC analysts wrote in a report on the Commvault-Hedvig acquisition.

‘Belts and suspenders’ approach

Pittsburg State University (PSU) in Kansas has run Hedvig primary storage with Commvault backup for several years. Tim Pearson, the university’s assistant director of IT infrastructure and security, said he was not surprised to hear about the Hedvig deal.

“I knew Hedvig was looking for a way to grow the company,” Pearson said, adding that he spoke with Commvault representatives in the run-up to the transaction.

PSU runs Hedvig storage software on Hewlett Packard Enterprise ProLiant servers as frontline storage for its VMware farm and protects data with Commvault backup. Pearson said the “belts and suspenders” approach designed by Hedvig engineers enables Commvault to bridge production storage and secondary use cases.

“What I hope to gain out of this is a unified pane of glass to manage not only my traditional Commvault backups, but also point-in-time recovery by scheduling Hedvig storage-level snapshots,” Pearson said.


Coronavirus impact: Businesses forced to rely on video conferencing

In January, life sciences technology vendor Veeva held a new year kickoff for its North American employees in Orlando, Fla. A few weeks later, the company held a similar event for its Asia-based employees — except instead of everyone meeting in Tokyo as planned, the coronavirus outbreak forced workers to dial into Zoom.

The differences between the two events were stark.

The would-be Tokyo attendees sat alone at their computers. In Orlando, colleagues shared meals and dance floors. They visited an amusement park one evening. And by gathering more than 1,000 people in the same place, the company generated a palpable enthusiasm for its vision and goals.

“There is a little bit lost, for sure, in a remote meeting compared to a face-to-face meeting,” said Paul Shawah, Veeva’s senior vice president for commercial cloud strategy.

Businesses like Veeva are increasingly turning to video conferencing services like Zoom and Cisco Webex to avoid travel in response to the growing threat of the new coronavirus, COVID-19. The disease had sickened nearly 100,000 people worldwide as of March 5, including more than 200 people in the United States, where 14 had died.

Video conferencing apps are providing a convenient alternative to face-to-face meetings during the outbreak. But companies are also missing chances to connect on a more personal level with customers, partners and employees.

Theory Studios, a boutique entertainment company, generates most of its business by attending conferences. The studio scrambled to schedule Zoom meetings after the last-minute cancellation of Google I/O and the postponement of the Game Developers Conference.

“At the end of the day, nothing beats in-person [meetings],” said David Andrade, co-founder of Theory Studios. “It’s the joy of sharing a meal — or maybe the client wanting to tour you around their office — that turns a regular meeting into a personal, long-lasting connection.” At the same time, Andrade has used Zoom to build meaningful relationships long before meeting in person, he said.

Similarly, salespeople for electronics manufacturer ViewSonic are watching closely as premier sponsors begin to withdraw from Enterprise Connect, a trade show scheduled for late March. Some of ViewSonic’s customers have also temporarily banned salespeople from their campuses.

“As a sales leader … I would always like to think that travel is essential to business,” said Chris Graefe, ViewSonic’s director of enterprise sales. “A face-to-face meeting is preferred, obviously.” Video conferencing, however, will help maintain relationships amid the travel restrictions, he said.

For Veeva, holding its Asia kickoff on Zoom was “the next best thing,” Shawah said. The format even brought some benefits. For example, everything was recorded, allowing those who missed the meeting to catch up. Also, Zoom’s chat feature facilitated a robust Q&A session, Shawah said.


Tech vendors capitalize on coronavirus outbreak

Video conferencing providers have responded to the increased need for their services by extending the capabilities of their free offerings.

Cisco is now allowing meetings of unlimited length for up to 100 participants on the free version of Webex. Microsoft is giving out six-month licenses to Office 365 that provide access to a more robust version of Microsoft Teams than is usually available for free. Zoom has lifted the 40-minute cap on free meetings in China.

The vendors hope the uptick in usage will continue even after fears about the virus subside. Free offerings can be an effective way to generate paying customers.

In a conference call with investors Wednesday, Zoom CEO Eric Yuan predicted the outbreak would demonstrate the benefits of Zoom and lead to higher usage among companies. “This will dramatically change the landscape,” he said.

Zoom’s stock is up more than 50% since late January. Video conferencing vendors, including hardware makers, are expected to rake in $13.8 billion in revenue by 2023, up from $7.8 billion in 2018, according to Frost & Sullivan.

Because so many employees are temporarily working from home, cybersecurity firm Trend Micro has begun hosting company-wide Zoom calls twice a week. Some hope the practice will continue even after people return to the company’s offices around the world.

“With this concern happening, and people increasing their use of collaboration tools, I do think it’s going to have a lasting effect,” said Leah MacMillan, Trend Micro’s chief marketing officer.


Nvidia scoops up object storage startup SwiftStack

Nvidia plans to acquire object storage vendor SwiftStack to help its customers accelerate their artificial intelligence, high-performance computing and data analytics workloads.

The GPU vendor, based in Santa Clara, Calif., will not sell SwiftStack software but will use SwiftStack’s 1space as part of its internal artificial intelligence (AI) stack. It will also enable customers to use the SwiftStack software as part of their AI stacks, according to Nvidia’s head of enterprise computing, Manuvir Das.

SwiftStack and Nvidia disclosed the acquisition today. They did not reveal the purchase price but said they expect the deal to close within weeks.

Nvidia previously worked with SwiftStack

Nvidia worked with San Francisco-based SwiftStack for more than 18 months on tackling the data challenges associated with running AI applications at a massive scale. Nvidia found 1space particularly helpful. SwiftStack introduced 1space in 2018 to accelerate data access across public and private clouds through a single object namespace.

“Simply put, it’s a way of placing the right data in the right place at the right time, so that when the GPU is busy, the data can be sent to it quickly,” Das said.

Das said Nvidia customers would be able to use enterprise storage from any vendor. The SwiftStack 1space technology will form the “storage orchestration layer” that sits between the compute and the storage to properly place the data so the AI stack runs optimally, Das said.
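
As a rough illustration of the kind of decision a storage orchestration layer makes, the toy policy below keeps hot data on a fast tier near the compute and leaves cold data in cheaper, more capacious storage. The tier names, numbers and logic are hypothetical; this is not SwiftStack's 1space API.

    # Toy data-placement policy: hot shards go to the fastest tier with
    # room; everything else stays in the capacious, cheaper tier.
    TIERS = {
        "local_nvme":   {"latency_ms": 0.1, "free_tb": 2},
        "object_store": {"latency_ms": 20.0, "free_tb": 500},
    }

    def place(shard_tb: float, hot: bool) -> str:
        if hot and TIERS["local_nvme"]["free_tb"] >= shard_tb:
            return "local_nvme"
        return "object_store"

    print(place(0.5, hot=True))   # local_nvme: small, hot shard near the GPUs
    print(place(10.0, hot=True))  # object_store: too big for the fast tier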

“We are not a storage vendor. We do not intend to be a storage vendor. We’re not in the business of selling storage in any form,” Das said. “We work very closely with our storage partners. This acquisition is designed to further the integration between different storage technologies and the work we do for AI.”

We are not a storage vendor. We do not intend to be a storage vendor.
Manuvir Das, Head of enterprise computing, Nvidia

Nvidia partners with storage vendors such as Pure Storage, NetApp, Dell EMC and IBM. The storage vendors integrate Nvidia GPUs into their arrays or sell the GPUs along with their storage in reference architectures.

Nvidia attracted to open source tech

Das said Nvidia found SwiftStack attractive because its software is based on open source technology. SwiftStack’s eponymous object- and file-based storage and data management software is rooted in open source OpenStack Swift. Das said Nvidia plans to continue to work with the SwiftStack team to advance and optimize the technology and make it available through open source avenues.

“The SwiftStack team is part of Nvidia now,” he said. “They’re super talented. So, the innovation will continue to happen, and all that innovation will be upstreamed into the open source SwiftStack. It will be available to anybody.”


SwiftStack laid off an undisclosed number of sales and marketing employees in late 2019, but kept the engineering and support team intact, according to president Joe Arnold. He attributed the layoffs to a shift in sales focus from classic backup and archiving to AI, machine learning and data analytics use cases.

The SwiftStack 7.0 software update that emerged late last year took aim at analytics, HPC, AI and ML use cases, such as autonomous vehicle applications that feed data to GPU-based servers. SwiftStack said at the time that it had worked with customers to design clusters that could scale to handle multiple petabytes of data and support throughput in excess of 100 GB per second.

Das said Nvidia has been using SwiftStack’s object storage technology as well as 1space. He said Nvidia’s internal work on data science and AI applications quickly showed the company that accelerating the compute shifts the bottleneck elsewhere, to the storage. That played a factor in Nvidia’s acquisition of SwiftStack, he noted.

“We recognized a long time ago that the way to help the customers is not just to provide them a GPU or a library, but to help them create the entire stack, all the way from the GPU up to the applications themselves. If you look at Nvidia now, we spend most of our energy on the software for different kinds of AI applications,” Das said.

He said Nvidia would fully support SwiftStack’s customer base. SwiftStack claims it has around 125 customers. Its product lineup includes SwiftStack’s object storage software, the ProxyFS file system for integrated file and object API access, and 1space. SwiftStack’s software is designed to run on commodity hardware on premises, and its 1space technology can run in the public cloud.

SwiftStack spent more than eight years expanding its software’s capabilities after the company’s 2011 founding. Das said Nvidia has no reason to sell SwiftStack’s proprietary software because it does not compete head-to-head with other object storage providers.

“Our philosophy here at Nvidia is we are not trying to compete with infrastructure vendors by selling some kind of a stack that competes with other peoples’ stacks,” Das said. “Our goal is simply to make people successful with AI. We think, if that happens, everybody wins, including Nvidia, because we believe GPUs are the best platform for AI.”


Sophos adds mobile threat defense app to Intercept X line

Security vendor Sophos this month expanded its endpoint protection lineup with Intercept X for Mobile. The new mobile security application extends the company’s Intercept security software to devices including phones, tablets and laptops.

The new offering is meant to bolster mobile threat defense for devices running on Android, iOS and Chrome. Features include:

  • Authenticator: Helps to manage multi-factor authentication passwords for sites like Google, Amazon and Facebook.
  • Secure QR code scanner: Scans target URLs for malicious content.
  • Privacy protection: Detects when personal data is accessed or if there are hidden costs associated with downloaded apps.

“The biggest unique point of the Intercept X model is that we are a security model, and we do security for different platforms that can be configured in one place,” said Petter Nordwall, director of product management at Sophos. “Intercept X, as a whole, can now protect Windows, Mac, iOS, Chromebooks and servers. Regardless of what platform they use, they can use Intercept X.”

Sophos introduced Intercept X in 2016 as a cloud-based tool designed to enhance endpoint security already running in an environment. Intercept X for Server was introduced in December 2018; an update launched in May 2019 added endpoint protection and response features.

Mobile threats on the rise

In “Advance and Improve Your Mobile Security Strategy,” a recent report from Gartner, senior analyst Patrick Hevesi found that “mobile security products are becoming increasingly important as the rate of mobile attacks continues to grow.” Hevesi recommended tech professionals track new threats, build a mobile threat defense strategy and set minimum iOS and hardware versions.

He added that organizations should focus on training users on what threats actually look like, rather than letting the systems do all the work.

“Everyone is doing antiphishing training, but think about the application,” Hevesi said. “The user doesn’t think about mobile in the same way; they see a highly rated app and don’t think about why the app needs permission to access my contact data.”

Pricing for Intercept X for Mobile ranges from $24.50 to $63 per 100 seats, depending on the addition of Sophos Mobile, a unified endpoint management system. Intercept X for Mobile is available as a free download for individual use from Google Play and the Apple App Store.


EG Enterprise v7 focuses on usability, user experience monitoring

Software vendor EG Innovations will release version 7 of its EG Enterprise software, its end-user experience monitoring tool, on Jan. 31.

New features and updates have been added to the IT monitoring software with the goal of making it more user-friendly. The software focuses primarily on monitoring end-user activities and responses.

“Many times, vendor tools monitor their own software stack but do not go end to end,” said Srinivas Ramanathan, CEO of EG Innovations. “Cross-tier, multi-vendor visibility is critical when it comes to monitoring and diagnosing user experience issues. After all, users care about the entire service, which cuts across vendor stacks.”

Ramanathan said IT issues are not as simple as they used to be.

“What you will see in 2020 is now that there is an ability to provide more intelligence to user experience, how do you put that into use?” said Mark Bowker, senior analyst at Enterprise Strategy Group. “EG has a challenge of when to engage with a customer. It’s a value to them if they engage with the customer sooner in an end-user kind of monitoring scenario. In many cases, they get brought in to solve a problem when it’s already happened, and it would be better for them to shift.”

New features in EG Enterprise v7 include:

  • Synthetic and real user experience monitoring: Users can create simulations and scripts of different applications that can be replayed to help diagnose problems and to notify IT operations teams of impending issues.
  • Layered monitoring: Enables users to monitor every tier of an application stack via a central console.
  • Automated diagnosis: Uses machine learning and automation to find the root causes of issues.
  • Optimization plan: Users can customize optimization plans through capacity and application overview reports.

“Most people look at user experience as just response time for accessing any application. We see user experience as being broader than this,” Ramanathan said. “If problems are not diagnosed correctly and they reoccur again and again, it will hurt user experience. If the time to resolve a problem is high, users will be unhappy.”

Pricing for EG Enterprise v7 begins at $2 per user per month in a digital workspace. Licensing for other workloads depends on how many operating systems are being monitored. The new version includes support for Citrix and VMware Horizon.


Google buys AppSheet for low-code app development

Google has acquired low-code app development vendor AppSheet in a bid to up its cloud platform’s appeal among line-of-business users and tap into a hot enterprise IT trend.

Like similar offerings, AppSheet ingests data from sources such as Excel spreadsheets, Smartsheet and Google Sheets. Users apply views to the data — such as charts, tables, maps, galleries and calendars — and then develop workflows with AppSheet’s form-based interface. The apps run on Android, iOS and within browsers.

AppSheet, based in Seattle, already integrated with G Suite and other Google cloud sources, as well as Office 365, Salesforce, Box and other services. The company will continue to support and improve those integrations following the Google acquisition, AppSheet CEO Praveen Seshadri said in a blog post.

“Our core mission is unchanged,” Seshadri said. “We want to ‘democratize’ app development by enabling as many people as possible to build and distribute applications without writing a line of code.”

Terms of the deal were not disclosed, but the price tag for the low-code app development startup is likely far less than Google’s $2.6 billion acquisition of data visualization vendor Looker in June 2019.

Under the leadership of former longtime Oracle executive Thomas Kurian, Google Cloud was expected to make a series of deals to shore up its position in the cloud computing market, where it trails AWS and Microsoft by significant percentages.

So far, Kurian has not made moves to buy core enterprise applications such as ERP and CRM, two markets dominated by the likes of SAP, Oracle and Salesforce. Rather, the AppSheet purchase reflects Google Cloud’s perceived strength in application development, but with a gesture toward nontraditional coders.

As for why Google chose AppSheet to boost its low-code/no-code strategy, one reason could be the dwindling number of options. In the past couple of years, several prominent low-code/no-code vendors became acquisition targets. Notable examples include Siemens’ August 2018 purchase of Mendix for $730 million, and more recently, Swiss banking software provider Temenos’ move to buy Kony in a $559 million deal.

It’s not as if Google, Siemens and Temenos made a long shot bet, either. A survey released last year by Forrester Research, based on data collected in late 2018, found that 23% of more than 3,000 developers surveyed reported their companies were already using low-code development platforms. In addition, another 22% indicated their organizations would buy into low-code within a year.


Low-code competition heightens

Google’s AppSheet buy pits it directly against cloud platform rival Microsoft, whose citizen developer-targeted Power Apps low-code development platform has taken off like a rocket, said John Rymer, an analyst at Forrester. The acquisition of AppSheet also sets Google apart from cloud market share leader AWS, whose alleged super-secret low-code/no-code platform, said to be under development by a team led by prominent development guru Adam Bosworth, has yet to appear.

However, in AppSheet, Google is getting a winner, Rymer noted. “It’s a really good product and a really good team,” he said.

Moreover, the addition of AppSheet will help Google get more horsepower out of Apigee than just API management. The company wanted a broader platform with more functionality to address more customers and more use cases, Rymer said.

“So, I think they will be positioning this as a new platform anchored by Apigee,” he said. “Customers could use Apigee to create and publish APIs and AppSheet is how they would consume them. But they won’t stop there. They need process automation/workflow, so I would expect them to go there as well.”

AppSheet gives Google the potential to craft a more cohesive story that integrates that with Google Cloud and Anthos in the future.
Jeffrey Hammond, Analyst, Forrester

Meanwhile, another key benefit Google gains from this acquisition is the integration that AppSheet already has with Google’s office productivity products, said Jeffrey Hammond, another Forrester analyst.

“G Suite has always felt a bit out of place to me at Google’s developer conferences, but it used to be one of the main ‘leads’ for the enterprise,” he said. “AppSheet gives Google the potential to craft a more cohesive story that integrates that with Google Cloud and Anthos in the future.”

Overall, this acquisition is yet another indication that low-code/no-code development has gone mainstream and the number of people building applications will continue to grow.


Aruba SD-Branch gets intrusion detection, prevention software

Wireless LAN vendor Aruba has strengthened security in its software-defined branch product by adding intrusion detection and prevention software. The vendor is aiming the latest technology at retailers, hotels and healthcare organizations with hundreds of locations.

Aruba, a Hewlett Packard Enterprise company, also introduced this week an Aruba SD-Branch gateway appliance with a built-in Long Term Evolution (LTE) interface. Companies often use LTE cellular as a backup when other links are temporarily unavailable.

The latest iteration of Aruba’s SD-Branch has an intrusion detection system (IDS) that performs deep packet inspection to monitor network traffic for malware and suspicious activity. When either is detected, the IDS alerts network managers, while the new intrusion prevention system (IPS) takes immediate action to block threats from spreading to networked devices. The IPS software acts based on policies set in Aruba’s ClearPass access control system.

Previously, Aruba security focused mostly on letting customers set security policies that restricted the network access of groups of users, devices and applications. The company also provided customers with a firewall.

“But this IDS and IPS capability takes it a step further and allows enterprises that have deployed Aruba to quickly detect and prevent unwanted traffic from entering and exiting their networks,” said Brandon Butler, an analyst at IDC.

The latest features bring Aruba in line with other vendors, Butler said. In general, security is part of a “holistic” approach vendors are taking toward SD-branch.

Other features vendors are adding include WAN optimization, direct access to specific SaaS and IaaS providers, and a management console for the wired and wireless LAN. Software-defined WAN (SD-WAN) technology for traffic routing is a staple within all SD-branch offerings.

Aruba LTE gateway

The new gateway appliance is a key component of Aruba’s SD-Branch architecture. The multifunction hardware includes a firewall and an SD-WAN.

The device integrates with Aruba’s ClearPass and its cloud-based Central management console. The latter oversees the SD-WAN, as well as Aruba access points, switches and routers.

The new SD-Branch gateway with an LTE interface is the latest addition to the 9000 series Aruba launched in the fourth quarter of last year. The hardware is Aruba’s highest-performing gateway, with four 1 Gbps ports and an LTE interface that delivers 600 Mbps downstream and 150 Mbps upstream.

Certification of the device by all major carriers will start this quarter, Aruba said.

Other network and security vendors providing SD-branch products include Cisco, Cradlepoint, Fortinet, Riverbed and Versa Networks. All the vendors combine internally developed technology with that of partners to deliver a comprehensive SD-Branch. Aruba, for example, has security partnerships with Zscaler, Palo Alto Networks and Check Point.

The vendors are competing for sales in a fast-growing market. Revenue from SD-branch will increase from $300 million in 2019 to $2.6 billion by 2023, according to Doyle Research.


Hyper-V Virtual CPUs Explained

Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.

Note: This article was originally published in February 2014. It has been fully updated to be relevant as of November 2019.

Introduction to Virtual CPUs

Like all other virtual machine “hardware”, virtual CPUs do not exist. The hypervisor uses the physical host’s real CPUs to create virtual constructs to present to virtual machines. The hypervisor controls access to the real CPUs just as it controls access to all other hardware.

Hyper-V Never Assigns Physical Processors to Specific Virtual Machines

Make sure that you understand this section before moving on. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. You cannot assign a physical core to a VM at all. So, does this mean that you just can’t meet that vendor request to dedicate a core or two? Well, not exactly. More on that toward the end.

Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:

Task Manager

Nothing fancy, right? Looks familiar, doesn’t it?

Now, back when computers never, or almost never, shipped as multi-CPU, multi-core boxes, we all knew that computers couldn’t really multitask. They had one CPU with one core, so there was only one possible active thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now. You had a long list of running processes, each with a metric indicating what percentage of the CPU’s time it was using.

Then, as now, each line item you see represents a process (or, new in recent Task Manager versions, a process group). A process consists of one or more threads. A thread is nothing more than a sequence of CPU instructions (keyword: sequence).

What happens is that the operating system (in Windows, this started with 95 and NT) stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. We call this pre-emptive multitasking, meaning that the operating system decides when to suspend the current thread and switch to another. You can set priorities that affect how a process is scheduled, but the OS is in charge of thread scheduling.
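
A minimal sketch of the idea, assuming a fixed time slice and simple round-robin order (a real scheduler also weighs priorities, core load and much more):

    # Pre-emptive round-robin scheduling in miniature: the scheduler,
    # not the thread, decides when execution stops and who runs next.
    from collections import deque

    TIME_SLICE = 3  # units of work before the scheduler pre-empts

    threads = deque([("word_processor", 7), ("antivirus", 5), ("browser", 4)])

    while threads:
        name, remaining = threads.popleft()   # next thread in the queue
        ran = min(TIME_SLICE, remaining)      # run until pre-empted or done
        remaining -= ran
        print(f"{name} ran {ran} unit(s), {remaining} remaining")
        if remaining:                         # preserve state; wait for another turn
            threads.append((name, remaining))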

Today, almost all computers have multiple cores, so Windows can truly multi-task.

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads. Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads.

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way, something like this:

Hypervisor Thread Scheduling

Of course, a real system will always have more than nine threads running. The thread scheduler will place them all into a queue.

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. You cannot do that in Hyper-V. Doing so would have questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world or the Hyper-V world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level. It doesn’t really let the guests have any input. Guest operating systems schedule the threads from the processes that they own. When they choose a thread to run, they send it to a virtual CPU. Hyper-V takes it from there.

The image that I presented above is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas. Few administrators require that level of depth, and exploring it here would take this article far afield.

The first thing that matters: affinity aside, you never know where any given thread will execute. A thread that was paused to yield CPU time to another thread may very well be assigned to a different core when it resumes.

Did you ever wonder why an application consumes right at 50% of a dual-core system while each core looks like it’s running at 50% usage? That behavior indicates a single-threaded application. Each time the scheduler executes it, it consumes 100% of the core that it lands on. The next time it runs, it may stay on the same core or go to the other core; whichever core the scheduler assigns it to, it consumes 100%. When Task Manager aggregates its performance for display, that works out to an even 50% utilization: the app uses 100% of 50% of the system’s capacity. Since the core not running the app remains mostly idle while the other core tops out, they cumulatively amount to 50% utilization for the measured time period. Newer versions of Task Manager can show the separate cores individually, which makes this behavior far more apparent.
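
A quick check of that arithmetic, assuming the lone thread fully occupies exactly one core at any instant:

    # One saturated core out of two averages to a 50% headline figure.
    cores = 2
    busy_cores = 1.0  # the single-threaded app occupies one core at a time

    aggregate = busy_cores / cores
    print(f"Aggregate CPU usage: {aggregate:.0%}")  # -> 50%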

Now we can move on to a look at the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs Assigned to a VM Really Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have logical processors in your host.

Invalid CPU Count

So, a virtual machine’s vCPU count means this: the maximum number of threads that the VM can run at any given moment. I can’t set the virtual machine from the screenshot to have more than two vCPUs because the host only has two logical processors. Therefore, there is nowhere for a third thread to be scheduled. But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads to Hyper-V for scheduling. The virtual machine’s thread scheduler (the supervisor) will keep its other threads in a queue, waiting for their turn.

But Can’t I Assign More Total vCPUs to all VMs than Physical Cores?

Yes, the total number of vCPUs across all virtual machines can exceed the number of physical cores in the host. It’s no different than the fact that I’ve got 40+ processes “running” on my dual-core laptop right now. I can only run two threads at a time, but I will always have far more than two threads scheduled. Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people never see a need to think through what’s going on. Your VMs (supervisors) will bubble up threads to run, and Hyper-V (the hypervisor) will schedule them much the way Windows has been scheduling them ever since it outgrew cooperative scheduling in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It’s wasteful, but you can do it. I could run my current desktop configuration on a quad 16-core server and I’d never get any contention. But I probably wouldn’t see much performance difference. Why? Because almost all of my threads sit idle almost all of the time. If something needs 0% CPU time, what does giving it its own core do? Nothing, that’s what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because no one really has any idea. The scheduler will evenly distribute threads across the available cores, so the number of physical CPUs needed doesn’t depend on how many virtual CPUs there are. It depends entirely on what the operating threads need. And even if you’ve got a bunch of heavy threads going, that doesn’t mean their systems will die as they get pre-empted by other heavy threads. The necessary vCPU/pCPU ratio depends entirely on the CPU load profile and your tolerance for latency. Many heavy loads require a low ratio; a few heavy loads work well with a medium ratio; light loads can run on a high-ratio system.

I’m going to let you in on a dirty little secret about CPUs: every single time a thread runs, no matter what it is, it drives the CPU at 100% (power-throttling changes the clock speed, not workload saturation). The CPU is a binary device; it’s either processing or it isn’t. When your performance metric tools show you 100% or 20% or 50% or whatever number, they calculate it from a time measurement. If you see 100%, the CPU was processing during the entire measured span of time. 20% means it was running a process 1/5th of the time and was idle the other 4/5ths. Note that a single thread still doesn’t monopolize the CPU, because Windows/Hyper-V will pre-empt it when it wants to run another thread. You can have multiple “100%” CPU threads running on the same system.

Even so, a system can only act responsively when it has some idle time, meaning that most threads will simply let their time slice go by. That allows other threads to access cores more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because threads must wait. Adding cores addresses this concern by spreading the workload out.
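
The same idea reduced to a few lines: a CPU “percentage” is just busy time divided by measured time (the numbers here are made up for illustration):

    # Utilization is a time ratio, not a throttle level.
    def utilization(busy_slices_ms, window_ms):
        return sum(busy_slices_ms) / window_ms

    # A thread that ran during 200 ms of a 1,000 ms window reports 20%,
    # even though the core ran flat out during those 200 ms.
    print(f"{utilization([50, 75, 75], 1000):.0%}")  # -> 20%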

The upshot: if you want to know how many physical cores you need, then you need to know the performance profile of your actual workload. If you don’t know, then start from the earlier 8:1 or 12:1 recommendations.


What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you have a CPU contention problem to solve. Let the thread scheduler do its job. Just like setting CPU priorities on threads in Windows can cause more problems than they solve, fiddling with hypervisor vCPU settings can make everything worse.

Let’s look at the config screen:

vCPU Settings

The first group of boxes is the reserve. The first box represents the percentage of the VM’s allowed number of vCPUs to set aside. Its actual meaning depends on the number of vCPUs assigned to the VM. The second box, the grayed-out one, shows the total percentage of host resources that Hyper-V will set aside for this VM. In this case, I have a 2 vCPU system on a dual-core host, so the two boxes will be the same. If I set a 10 percent reserve, that’s 10 percent of the total physical resources. If I drop the allocation down to 1 vCPU, then the 10 percent reserve becomes 5 percent of the physical total. The second box will be auto-calculated as you adjust the first box.

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine won’t start. But, if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). But, if a VM with a 20% reserve is sitting idle, then other processes are allowed to use up to 100% of the available processor power… until such time as the VM with the reserve starts up. Then, once the CPU capacity is available, the reserved VM will be able to dominate up to 20% of the total computing power. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. You enter whatever number in the top box that makes the second box show the equivalent processor power of however many pCPUs/cores the vendor thinks they need. If they want one whole CPU and you have a quad-core host, then make the second box show 25%. Do you really have to? Well, I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers say they can do is because of software vendors’ arbitrary rules, but that’s a rant for another day.
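
The reserve arithmetic above is easy to sketch; the helper below is my own model of the two reserve boxes, not anything Hyper-V exposes:

    # First box: percent of the VM's own vCPUs. Second (grayed) box:
    # the equivalent percent of the host's logical processors.
    def host_reserve_percent(vm_reserve_pct, vcpus, host_logical_processors):
        return vm_reserve_pct * vcpus / host_logical_processors

    print(host_reserve_percent(10, 2, 2))   # 10.0 -- 2 vCPU VM, dual-core host
    print(host_reserve_percent(10, 1, 2))   # 5.0  -- same setting, 1 vCPU
    print(host_reserve_percent(100, 1, 4))  # 25.0 -- "one whole CPU" on a quad-core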

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar. The two boxes work together in the same way as the reserve boxes.

The final box is the priority weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they’re all beneath all the VMs that have 200 and above all the VMs that have 50, so on and so forth. If you’re going to tinker, weight is safer than fiddling with reserves because you can’t ever prevent a VM from starting by changing relative weights. What the weight means is that when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher-weighted VMs go first.

But What About Hyper-Threading?

Hyper-Threading allows a single core to operate two threads at once, sort of. The core can only actively run one of the threads at a time, but if that thread stalls while waiting for an external resource, the core switches to the other thread. AMD has recently added a similar technology.

To kill one major misconception: Hyper-Threading does not double the core’s performance ability. Synthetic benchmarks show a high-water mark of a 25% improvement. More realistic measurements show closer to a 10% boost. An 8-core Hyper-Threaded system does not perform as well as a 16-core non-Hyper-Threaded system. It might perform almost as well as a 9-core system.
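
The back-of-the-envelope math behind that comparison, assuming the roughly 10% real-world uplift cited above:

    # Hyper-Threading adds a modest uplift, not a doubling.
    physical_cores = 8
    ht_uplift = 0.10  # realistic boost; synthetic benchmarks top out near 25%

    effective_cores = physical_cores * (1 + ht_uplift)
    print(f"{effective_cores:.1f}")  # 8.8 -- close to a 9-core non-HT system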

With the so-called “classic” scheduler, Hyper-V places threads on the next available core as described above. With the core scheduler, introduced in Hyper-V 2016, Hyper-V now prevents threads owned by different virtual machines from running side-by-side on the same core. It will, however, continue to pre-empt one virtual machine’s threads in favor of another’s. We have an article that deals with the core scheduler.

Making Sense of Everything

I know this is a lot of information. Most people come here wanting to know how many vCPUs to assign to a VM or how many total vCPUs to run on a single system.

Personally, I assign 2 vCPUs to every VM to start. That gives it at least two places to run threads, which gives it responsiveness. On a dual-processor system, it also ensures that the VM automatically has a presence on both NUMA nodes. I do not assign more vCPU to a VM until I know that it needs it (or an application vendor demands it).

As for the ratio of vCPU to pCPU, that works mostly the same way. There is no formula or magic knowledge that you can simply apply. If you plan to virtualize existing workloads, then measure their current CPU utilization and tally it up; that will tell you what you need to know. Microsoft’s Assessment and Planning Toolkit might help you. Otherwise, you simply add resources and monitor usage. If your hardware cannot handle your workload, then you need to scale out.
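
If you want a quick-and-dirty baseline without the full toolkit, a generic sampling script along these lines works; this uses the cross-platform psutil library and is not Hyper-V-specific:

    # Sample whole-machine CPU usage, then convert the peak into
    # core-equivalents to estimate what this workload needs.
    import psutil  # pip install psutil

    samples = [psutil.cpu_percent(interval=1) for _ in range(30)]  # 30 x 1-second samples
    logical_cpus = psutil.cpu_count()

    peak_cores_needed = max(samples) / 100 * logical_cpus
    print(f"Peak demand on this box: {peak_cores_needed:.1f} core(s)")
    # Repeat per workload and sum the peaks for a conservative host size.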


Author: Eric Siron