Hyper-V Virtual CPUs Explained

Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.

Note: This article was originally published in February 2014. It has been fully updated to be relevant as of November 2019.

Introduction to Virtual CPUs

Like all other virtual machine “hardware”, virtual CPUs do not exist. The hypervisor uses the physical host’s real CPUs to create virtual constructs to present to virtual machines. The hypervisor controls access to the real CPUs just as it controls access to all other hardware.

Hyper-V Never Assigns Physical Processors to Specific Virtual Machines

Make sure that you understand this section before moving on. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. You cannot assign a physical core to a VM at all. So, does this mean that you just can’t meet that vendor request to dedicate a core or two? Well, not exactly. More on that toward the end.

Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:

Task Manager

Nothing fancy, right? Looks familiar, doesn’t it?

Now, back when computers rarely (if ever) shipped as multi-CPU, multi-core boxes, we all knew that computers couldn’t really multitask. They had one CPU with one core, so there was only one possible active thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now. You had a long list of running processes, each with a metric indicating what percentage of the CPU’s time it was using.

Then, as now, each line item you see represents a process (or, new in recent Task Manager versions, a process group). A process consists of one or more threads. A thread is nothing more than a sequence of CPU instructions (keyword: sequence).

What happens is that the operating system (starting, in Windows, with 95 and NT) stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. We call this pre-emptive multitasking: the operating system decides when to suspend the current thread and switch to another. You can set priorities that influence how often a process’s threads get scheduled, but the OS remains in charge of thread scheduling.
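
To make the mechanics concrete, here is a minimal Python sketch of pre-emptive, round-robin scheduling. The thread names and the one-unit time slice are invented for illustration; a real scheduler also weighs priorities, affinities and much more.

from collections import deque

# Each entry stands in for a thread: (name, remaining units of work).
run_queue = deque([("word_processor", 3), ("antivirus_scan", 5), ("browser", 2)])
TIME_SLICE = 1  # the OS pre-empts the running thread after one unit

while run_queue:
    name, remaining = run_queue.popleft()    # scheduler picks the next thread
    remaining -= TIME_SLICE                  # thread runs for its time slice
    print(f"ran {name}; {remaining} unit(s) of work left")
    if remaining > 0:
        run_queue.append((name, remaining))  # state preserved, thread re-queued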

Today, almost all computers have multiple cores, so Windows can truly multi-task.

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads. Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads.

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way, something like this:

Hypervisor Thread Scheduling

Of course, a real system will always have more than nine threads running. The thread scheduler will place them all into a queue.

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. You cannot do that in Hyper-V. Doing so would have questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world or the Hyper-V world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level. It doesn’t really let the guests have any input. Guest operating systems schedule the threads from the processes that they own. When they choose a thread to run, they send it to a virtual CPU. Hyper-V takes it from there.

The image that I presented above is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas. Few administrators require that level of depth, and exploring it here would take this article far afield.

The first thing that matters: affinity aside, you never know where any given thread will execute. A thread that was paused to yield CPU time to another thread may very well be assigned to another core when it is resumed.

Did you ever wonder why an application sits right at 50% on a dual-core system while each core looks like it’s running at 50% usage? That behavior indicates a single-threaded application. Each time the scheduler executes it, it consumes 100% of whichever core it lands on; the next time it runs, it may stay on the same core or move to the other. When Task Manager aggregates its performance for display, that averages out to an even 50% utilization: the app uses 100% of 50% of the system’s capacity. Since the core not running the app remains mostly idle while the other core tops out, they cumulatively amount to 50% utilization for the measured time period. Newer versions of Task Manager can show the separate cores individually, which makes this behavior far more apparent.
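
The arithmetic behind that display is easy to verify; here is a quick back-of-the-envelope check in Python (the sample values are hypothetical):

# Busy fraction of each core over one measurement interval on a dual-core box.
# A single-threaded app at full tilt keeps exactly one core busy at any instant.
core_busy = [1.0, 0.0]  # one core ran the app the whole interval, one sat idle
aggregate = sum(core_busy) / len(core_busy)
print(f"Task Manager reports {aggregate:.0%} total CPU")  # -> 50%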

Now we can move on to a look at the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs Assigned to a VM Really Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have logical processors in your host.

Invalid CPU Count

So, a virtual machine’s vCPU count means this: the maximum number of threads that the VM can run at any given moment. I can’t set the virtual machine from the screenshot to have more than two vCPUs because the host only has two logical processors. Therefore, there is nowhere for a third thread to be scheduled. But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads to Hyper-V for scheduling. The virtual machine’s thread scheduler (the supervisor) will keep its other threads in a queue, waiting for their turn.
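
One way to picture that cap is as a counting semaphore with one slot per vCPU. The sketch below is only a model (the guest’s real scheduler is far more sophisticated, and the thread count is invented):

import threading

VCPU_COUNT = 2                        # this VM can run at most 2 threads at once
vcpus = threading.Semaphore(VCPU_COUNT)

def guest_thread(n):
    with vcpus:                       # wait in the guest's queue for a free vCPU
        print(f"thread {n} is running on a vCPU")

workers = [threading.Thread(target=guest_thread, args=(i,)) for i in range(6)]
for w in workers:
    w.start()
for w in workers:
    w.join()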

But Can’t I Assign More Total vCPUs to all VMs than Physical Cores?

Yes, the total number of vCPUs across all virtual machines can exceed the number of physical cores in the host. It’s no different than the fact that I’ve got 40+ processes “running” on my dual-core laptop right now. I can only run two threads at a time, but I will always have far more than two threads scheduled. Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people never see a need to think through what’s going on. Your VMs (supervisors) will bubble up threads to run, and Hyper-V (the hypervisor) will schedule them mostly the way that Windows has been scheduling threads ever since it outgrew cooperative multitasking in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It’s wasteful, but you can do it. I could run my current desktop configuration on a server with four 16-core CPUs and never see any contention, but I probably wouldn’t see much performance difference, either. Why? Because almost all of my threads sit idle almost all of the time. If something needs 0% CPU time, what does giving it its own core accomplish? Nothing, that’s what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because no one really has any idea. The scheduler will evenly distribute threads across the available cores, so the number of physical CPUs you need doesn’t depend on how many virtual CPUs there are. It depends entirely on what the operating threads need. And even if you’ve got a bunch of heavy threads going, that doesn’t mean their systems will die as they get pre-empted by other heavy threads. The necessary vCPU/pCPU ratio depends entirely on the CPU load profile and your tolerance for latency. Multiple heavy loads require a low ratio. A few heavy loads work well with a medium ratio. Light loads can run on a high-ratio system.

I’m going to let you in on a dirty little secret about CPUs: every single time a thread runs, no matter what it is, it drives the CPU at 100% (power-throttling changes the clock speed, not workload saturation). The CPU is a binary device; it’s either processing or it isn’t. When your performance metric tools show you 100% or 20% or 50% or whatever number, they calculate it from a time measurement. If you see 100%, the CPU was processing during the entire measured span of time. 20% means it was running a thread 1/5th of the time and sat idle the other 4/5ths. This also means that a single thread doesn’t monopolize a CPU, because Windows/Hyper-V will pre-empt it when it wants to run another thread. You can have multiple “100%” CPU threads running on the same system.

Even so, a system can only act responsively when it has some idle time, meaning that most threads will simply let their time slice go by. That allows other threads to access cores more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because threads must wait. Adding cores addresses this concern because it spreads the workload out.
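
In other words, a utilization percentage is just busy time divided by measured time. A minimal illustration (the samples are made up):

# 1 = the core was executing a thread at that sample; 0 = it was idle.
samples = [1, 0, 0, 0, 0] * 20              # busy 1/5th of the measured span
utilization = sum(samples) / len(samples)
print(f"reported utilization: {utilization:.0%}")  # -> 20%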

The upshot: if you want to know how many physical cores you need, then you need to know the performance profile of your actual workload. If you don’t know, then start from the earlier 8:1 or 12:1 recommendations.

What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you have a CPU contention problem to solve. Let the thread scheduler do its job. Just as setting CPU priorities on threads in Windows can cause more problems than it solves, fiddling with hypervisor vCPU settings can make everything worse.

Let’s look at the config screen:

vCPU Settings

The first group of boxes is the reserve. The first box sets the percentage of the VM’s vCPU allocation to set aside; its effective meaning depends on the number of vCPUs assigned to the VM. The second box, the grayed-out one, shows the percentage of total host resources that Hyper-V will set aside for this VM. In this case, I have a 2 vCPU system on a dual-core host, so the two boxes show the same number. If I set a 10 percent reserve, that’s 10 percent of the total physical resources. If I drop the allocation down to 1 vCPU, then a 10 percent reserve becomes 5 percent of the physical total. Hyper-V auto-calculates the second box as you adjust the first.
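
The auto-calculation is simple proportional arithmetic. This hedged Python helper reproduces it; the function name is mine, not Hyper-V’s:

def host_reserve_percent(vm_reserve_pct, vm_vcpus, host_logical_procs):
    """Percentage of total host CPU that a VM's reserve setting claims."""
    return vm_reserve_pct * vm_vcpus / host_logical_procs

print(host_reserve_percent(10, 2, 2))  # 2 vCPUs on 2 LPs -> 10.0% of the host
print(host_reserve_percent(10, 1, 2))  # same setting, 1 vCPU -> 5.0% of the host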

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine won’t start. But if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). And if a VM with a 20% reserve is sitting idle, other processes are allowed to use up to 100% of the available processor power… until the VM with the reserve starts up. From then on, that VM can claim up to 20% of the total computing power whenever it wants it. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, what about that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. Enter whatever number in the top box makes the second box show the equivalent processor power of however many pCPUs/cores the vendor thinks they need. If they want one whole CPU and you have a quad-core host, then make the second box show 25%. Do you really have to? I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers claim is software vendors’ arbitrary rules, but that’s a rant for another day.
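
Working backward from the vendor’s demand is the same formula inverted. A sketch, again with an invented function name:

def reserve_setting_for_cores(dedicated_cores, host_logical_procs, vm_vcpus):
    """Value for the first reserve box so the grayed-out box equals the
    power of `dedicated_cores` physical cores."""
    host_pct = 100 * dedicated_cores / host_logical_procs
    return host_pct * host_logical_procs / vm_vcpus

# One "dedicated" core on a quad-core host for a 2 vCPU VM:
print(reserve_setting_for_cores(1, 4, 2))  # 50.0 -> second box shows 25%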

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar. The two boxes work together in the same way as the reserve boxes.

The final box is the priority weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they all rank beneath the VMs set to 200 and above the VMs set to 50, and so on. If you’re going to tinker, weight is safer than fiddling with reserves, because you can’t ever prevent a VM from starting by changing relative weights. In practice, when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher-weighted VMs go first.
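
Here is a rough model of relative weighting, using weighted random selection to stand in for the scheduler’s choice. Hyper-V does not publish its algorithm at this level of detail, so treat this purely as an illustration of “twice the weight, roughly twice the turns”; the VM names are invented:

import random
from collections import Counter

vm_weights = {"vm_heavy": 200, "vm_default": 100, "vm_light": 50}

# When several VMs present runnable threads at once, higher weight wins more turns.
picks = random.choices(list(vm_weights), weights=list(vm_weights.values()), k=7000)
print(Counter(picks))  # roughly 4000 / 2000 / 1000 selections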

But What About Hyper-Threading?

Hyper-Threading allows a single core to operate two threads at once, sort of. The core can only actively run one of the threads at a time, but if that thread stalls while waiting for an external resource, the core switches to the other thread. You can read a more detailed explanation from contributor Jordan in the comments section of the original article. AMD has recently added a similar technology (simultaneous multithreading).

To kill one major misconception: Hyper-Threading does not double the core’s performance ability. Synthetic benchmarks show a high-water mark of a 25% improvement. More realistic measurements show closer to a 10% boost. An 8-core Hyper-Threaded system does not perform as well as a 16-core non-Hyper-Threaded system. It might perform almost as well as a 9-core system.

With the so-called “classic” scheduler, Hyper-V places threads on the next available core as described above. With the core scheduler, introduced in Hyper-V 2016, Hyper-V now prevents threads owned by different virtual machines from running side-by-side on the same core. It will, however, continue to pre-empt one virtual machine’s threads in favor of another’s. We have an article that deals with the core scheduler.

Making Sense of Everything

I know this is a lot of information. Most people come here wanting to know how many vCPUs to assign to a VM or how many total vCPUs to run on a single system.

Personally, I assign 2 vCPUs to every VM to start. That gives it at least two places to run threads, which keeps it responsive. On a dual-processor host, it also ensures that the VM automatically has a presence on both NUMA nodes. I do not assign more vCPUs to a VM until I know that it needs them (or an application vendor demands it).

As for the ratio of vCPU to pCPU, that works mostly the same way. There is no formula or magic knowledge that you can simply apply. If you plan to virtualize existing workloads, then measure their current CPU utilization and tally it up; that will tell you what you need to know. Microsoft’s Assessment and Planning Toolkit might help you. Otherwise, you simply add resources and monitor usage. If your hardware cannot handle your workload, then you need to scale out.
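
The tally itself is simple: multiply each workload’s average utilization by its current core count and sum. A hedged sketch with invented measurements; size to peaks, not just averages, in real planning:

# (average CPU utilization, physical cores) measured for each existing workload
workloads = [(0.35, 4), (0.10, 2), (0.60, 8)]

cores_needed = sum(util * cores for util, cores in workloads)
print(f"steady-state demand: ~{cores_needed:.1f} cores")  # ~6.4 cores
# Add headroom for peaks before settling on the host's core count.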


Go to Original Article
Author: Eric Siron

Startup Sisu’s data analytics tool aims to answer, ‘Why?’

Armed with $66.7 million in venture capital funding, startup vendor Sisu recently emerged from stealth and introduced the Sisu data analytics platform.

Sisu, founded in 2018 by Stanford professor Peter Bailis and based in San Francisco, revealed on Oct. 16 that it secured $52.5 million in Series B funding, led by New Enterprise Associates, a venture capital firm with more than $20 billion in assets under management. Previously, Sisu secured $14.2 million in funding, led by Andreessen Horowitz, which also participated in the Series B round.

On the same date it revealed the new infusion of capital, the startup rolled out the Sisu data analytics tool for general use, with electronics and IT giant Samsung already listed as one of its customers.

Essentially an automated system for monitoring changes in data sets, Sisu enters a competitive market featuring not only proven vendors but also recent startups such as ThoughtSpot and Looker, which have been able to differentiate themselves enough from other BI vendors to gain a foothold and survive — Looker agreed to be acquired by Google for $2.7 billion in June while ThoughtSpot remains independent.

“Startups have to stand out,” said Doug Henschen, an analyst at Constellation Research. “They can’t present me-too versions of capabilities that are already out there. They can’t be too broad and they also can’t expect companies to risk ripping out and replacing existing systems of mission-critical importance. The sweet spot is focused solutions that complement or extend existing capabilities or that take on new or emerging use cases or challenges.”

The Sisu data analytics platform is just that — highly focused — and not attempting to do anything other than track data.

A sample Sisu dashboard displays an organization’s customer conversion rate data.

It relies on machine learning and statistical analysis to monitor, recognize and explain changes to a given organization’s key performance indicators.

And it’s in that final stage — the explanation — where Sisu wants to differentiate from existing diagnostic tools. Others, according to Bailis, monitor data sets and are automated to send push notifications when changes happen, but don’t necessarily explain why those changes occurred.

“We’re designed to answer one key question, and be the best at it,” said Bailis, who is on leave from Stanford. “We want to be faster, and we want to be better. There’s intense pressure to build everything into a platform, but I’m a firm believer that doing any one thing well is a company in itself. I’d rather be great at diagnosing than do a bunch of things just OK.”

The speed Bailis referred to comes from the architecture of the Sisu data analytics tool. Sisu is cloud native, which gives it more computing power than an on-premises platform, and its algorithms are built on augmented intelligence.

That speed is indeed a meaningful differentiator, according to Henschen.

“The sweet spot for Sisu is quickly diagnosing what’s changing in critical areas of a business and why,” he said. “It’s appealing to high-level business execs, not the analyst class or IT. The tech is compatible with, and doesn’t try to replace, existing investments in data visualization capabilities.”

Moving forward, Bailis said the Sisu data analytics platform will stay focused on data workflows, but that there’s room to grow even within that focused space.

“Long term, there is a really interesting opportunity for additional workflow operations,” he said. “There’s value because it leads to actions, and we want to own more and more of the action space. You can take action directly from the platform.”

Meanwhile, though survival is a challenge for any startup and many begin with the end goal of being acquired, Bailis said Sisu plans to take on the challenge of independence and compete against established vendors for market share. The recent funding, he said, will enable Sisu to continue to grow its capabilities to take advantage of what he sees as “an insane opportunity.”

Henschen, meanwhile, cautioned that unless Sisu does in fact grow its capabilities, it likely will be more of an acquisition target than a vendor with the potential for long-term survival.

“Sometimes startups come up with innovative technology, but [Sisu] strikes me as an IP [intellectual property] for a feature or set of features likely to be acquired by a larger, broader vendor,” he said. “That might be a good path for Sisu, but it’s early days for the company. I think it would have to evolve and develop broader capabilities in order to go [public] and continue as an independent company.”

Sisu is a Finnish word that translates loosely to tenacity or resilience, and is used by Finns to express their national character.

Go to Original Article
Author:

Oracle looks to grow multi-model database features

Perhaps no single vendor or database platform over the past three decades has been as pervasive as the Oracle database.

Much as the broader IT market has evolved, so too has Oracle’s database. Oracle has added new capabilities to meet changing needs and competitive challenges. With a move toward the cloud, new multi-model database options and increasing automation, the modern Oracle database continues to move forward. Among the executives who have been at Oracle the longest is Juan Loaiza, executive vice president of mission critical database technologies, who has watched the database market evolve, first-hand, since 1988.

In this Q&A, Loaiza discusses the evolution of the database market and how Oracle’s namesake database is positioned for the future.

Why have you stayed at Oracle for more than three decades and what has been the biggest change you’ve seen over that time?

Juan Loaiza

Juan Loaiza: A lot of it has to do with the fact that Oracle has done well. I always say Oracle’s managed to stay competitive and market-leading with good technology.

Oracle also pivots very quickly when needed. How do you survive for 40 years? Well, you have to react and lead when technology changes.

Decade after decade, Oracle continues to be relevant in the database market as it pivots to include an expanding list of capabilities to serve users.

The big change that happened a little over a year ago is that Thomas Kurian [former president of product development] left Oracle. He was head of all development, and when he left, some of the teams, like database and apps, ended up rolling up to [Oracle founder and CTO] Larry Ellison. Larry is now directly managing some of the big technology teams. For example, I work directly with Larry.

What is your view on the multi-model database approach?

Loaiza: This is something we’re starting to talk more about. The term that people use is multi-model, but we’re using a different term: converged database. The reason is that multi-model is kind of one component of it.

Multi-model really talks about the different data models that you can represent inside the database, but we’re also doing much more than that. Blockchain is an example of converging into the database a technology that isn’t normally even thought of as database technology. So we’re going well beyond the conventional kind of multi-model of, ‘Hey, I can do this data format and that data format.’

Initially, the relational database was the mainstream database people used for both OLTP [online transaction processing] and analytics. What has happened in the last 10 to 15 years is that there have been a lot of new database technologies to come around, things like NoSQL, JSON, document databases, databases for geospatial data and graph databases too. So there’s a lot of specialty databases that have come around. What’s happening is, people are having to cobble together a complex kind of web of databases to solve one problem and that creates an enormous amount of complexity.

With the idea of a converged database, we’re taking all the good ideas, whether it’s NoSQL, blockchain or graph, and we’re building it into the Oracle database. So you can basically use one data store and write your application to that.

The analogy that we use is that of a smartphone. We used to have a music device and a phone device and a calendar device and a GPS device and all these things and what’s happened is they’ve all been converged into a smartphone.

Are companies actually shifting their on-premises production database deployments to the cloud?

Loaiza: There’s definitely a switch to the cloud. There are two models to cloud; one is kind of the grassroots. So we’re seeing some of that, for example, with our autonomous database that people are using now. So they’re like, ‘Hey, I’m in the finance department, and I need a reporting database,’ or, ‘hey, I’m in the marketing department, and I need some database to run some campaign with.’ So that’s kind of a grassroots and those guys are building a new thing and they want to just go to cloud. It’s much easier and much quicker to set up a database and much more agile to go to the cloud.

The second model is where somebody up in the hierarchy says, ‘Hey, we have a strategy to move to cloud.’ Some companies want to move quickly and some companies say, ‘Hey, you know, I’m going to take my time,’ and there’s everything in the middle.

Will autonomous database technology mean enterprises will need fewer database professionals?

Loaiza: The autonomous database addresses the mundane aspects of running a database. Things like tuning the database, installing it, configuring it, setting up HA [high availability], among other tasks. That doesn’t mean that there’s nothing for database professionals to do.

Like every other field where there is automation, what you do is you move upstream, you say, ‘Hey, I’m going to work on machine learning or analytics or blockchain or security.’ There’s a lot of different aspects of data management that require a lot of labor.

One of the nice things that we have in this industry is there is no real unemployment crisis in IT. There are a lot of unfilled jobs.

So it’s pretty straightforward for someone who has good skills in data management to just move upstream and do something that’s going to add more specific value than just configuring and setting up databases, which is really more of a mechanical process.

This interview has been edited for clarity and conciseness.

Go to Original Article
Author:

AI in advertising captures audiences with personalized ads

In 2017, Flipkart, the giant e-commerce vendor based in Bengaluru, India, got a cold-call email from AdGreetz. The email highlighted AdGreetz’s product, an AI in advertising platform that can generate thousands or millions of personalized ads quickly.

Intrigued, Flipkart, which is owned primarily by Walmart, struck up communication with AdGreetz, and decided to move ahead with a pilot project.

The first campaign AdGreetz did for Flipkart reached 200 million people across social media platforms, said Vijay Sharma, associate director of brand marketing and head of digital media at Flipkart.

Advertising flip

The audience, spread across different regions of India, was diverse, Sharma said.

“These are people of different types, in different cities, with different motivations, with different relationships with Flipkart,” he said.

To give the ads a more significant impact, AdGreetz and Flipkart created about a million creatives (ad banners and other pieces of online advertising), each targeting different groups based on data collected over social media and Flipkart’s e-commerce platform. Depending on whom they targeted, the ads varied dramatically, including different colors, voices and languages.

“We created a fairly complex, hard-working campaign, and that showed results,” Sharma said.

Since then, Flipkart has used AdGreetz to produce some 40 different campaigns, he said. The results have been mainly positive.

The first project took about two and a half months to complete, slowed by initial integrations and a lot of back-and-forth. Now, Vijay Sharma said, campaigns can be completed within a week.

Using AI, AdGreetz generates millions of personalized ads for Flipkart.

AI in advertising

AdGreetz, a startup founded in 2009 and based in Los Angeles, uses a platform imbued with machine learning to quickly and automatically create millions of personalized advertisements, CEO and co-founder Eric Frankel said.

Starting from a few templates and its AI in advertising technology, along with data including a consumer’s buying habits, location, age and gender, AdGreetz can automatically create personalized variations of those template ads. Advertisement forms include television spots, online banners and videos, emails, and physical product labels.

AdGreetz doesn’t keep the consumer data used to create the ads, Frankel claimed.

The vendor’s relationship with Flipkart is close, probably closer than with any other customer AdGreetz has, Frankel said.

“They bought in from day one,” he said, adding that he thinks the relationship has been fruitful for Flipkart.

“They will probably be the most personalized company in the world,” he said.

Big results

For Flipkart, the ad campaigns have seen “big numbers,” according to Sharma. The company can create far more personalized ads than it did before using AdGreetz, back when marketing teams filled Excel sheets with creatives, he said.

While the AdGreetz platform relies on AI in advertising, it still requires manual effort from Flipkart to operate.

“It’s not an end-to-end solution,” Sharma said.

The team first creates a creative and then works with AdGreetz to build it into a storyboard, which is then multiplied and personalized. Often, Flipkart needs to provide cultural context to the advertisements and tweak them before they go live.

Sharma has spoken with some competing vendors, as they came highly rated. He said he’s keeping his options open, but for right now is sticking with AdGreetz.

He referenced Alibaba, the China-based tech and e-commerce giant. Alibaba, Sharma said, possesses the ability to create billions of personalized advertisements using AI in advertising and provides some of the most personalized marketing campaigns in the world.

“Hopefully,” Sharma said, “one day, we will also get there.”

Go to Original Article
Author:

Kronos introduces its latest time clock, Kronos InTouch DX

Workforce management and HR software vendor Kronos this week introduced Kronos InTouch DX, a time clock offering features including individualized welcome screens, multilanguage support, biometric authentication and integration with Workforce Dimensions.

The new time clock is aimed at providing ease of use and more personalization for employees.

“By adding consumer-grade personalization with enterprise-level intelligence, Kronos InTouch DX surfaces the most important updates first, like whether a time-off request has been approved or a missed punch needs to be resolved,” said Bill Bartow, vice president and global product management at Kronos.

InTouch DX works with Workforce Dimensions, Kronos’ workforce management suite. When a manager updates the schedule, employees see those updates instantly on the Kronos InTouch DX; when employees request time off through the Kronos InTouch DX, managers are notified in Workforce Dimensions, according to the company.

Workforce Dimensions is mobile-native and accessible on smartphones and tablets.

Other features of InTouch DX include:

  • Smart Landing: Provides a personal welcome screen alerting users to unread messages, time-off approvals or requests, shifts swap and schedule updates.
  • Individual Mode: Provides one-click access to a user’s most frequent self-service tasks such as viewing their schedule, checking their accruals bank or transferring job codes.
  • My Time: Combines an individual’s timecard and weekly schedule, providing an overall view so that employees can compare their punches to scheduled hours to avoid errors.
  • Multilanguage support: Available for Dutch, English, French (both Canadian and French), German, Japanese, Spanish, Traditional and Simplified Chinese, Danish, Hindi, Italian, Korean, Polish and Portuguese.
  • Optional biometric authentication: Available as an option for an extra layer of security or in place of a PIN or a badge. The InTouch DX supports major employee ID badge formats, as well as PIN/employee ID numbers.
  • Date and time display: Features an always-on date and time display on screen.
  • Capacitive touchscreen: Utilizes capacitive technology used in consumer electronic devices to provide precision and reliability.

“Time clocks are being jolted in the front of workers’ visibility with new platform capabilities that surpass the traditional time clock hidden somewhere in a corner. Biometrics, especially facial recognition, are key to accelerate and validate time punches,” said Holger Mueller, vice president and principal analyst at Constellation Research.

When it comes to purchasing a product like this, Mueller said organizations should look into a software platform. “[Enterprises] need to get their information and processes on it, it needs to be resilient, sturdy, work without power, work without connectivity and gracefully reconnect when possible,” he said.

Other vendors in the human capital management space include Workday, Paycor and WorkForce Software. Workday’s time-tracking and attendance feature works on mobile devices and provides real-time analytics to aid managers’ decisions. Paycor’s Time and Attendance tool offers a mobile punching feature that can verify punch locations and enable administrators to set location maps to ensure employees punch in at or near the correct work locations. WorkForce’s Time and Attendance tool automates pay rules for hourly, salaried or contingent workforces.

Go to Original Article
Author:

The importance of AI for fraud prevention

Jumio, the identity verification technology vendor, released Jumio Go, a real-time, automated platform for identity verification. Coming at a time when cybercriminals are becoming ever more technologically advanced, Jumio Go uses a combination of AI, optical character recognition and biometrics to automatically verify a user’s identity in real time.

Jumio, founded in 2010, has long sold an AI for fraud prevention platform used by organizations in financial services, travel, gaming and retail industries. The Palo Alto, Calif., vendor’s new Jumio Go platform builds on its existing technologies, which include facial recognition and verification tools, while also simplifying them.

Jumio Go, launched Oct. 28, provides real-time identity verification, giving users results much faster than Jumio’s flagship product, which takes 30 to 60 seconds to verify a user, according to Jumio. It also removes the human component: the process of matching a real-time photo of a user’s face to a saved photo is entirely automated. That speeds up the process and frees employees for other tasks, but it also could make verification a little less secure.

The new product accepts fewer ID documents than Jumio’s flagship platform, but the tradeoff is the boost in real-time speed. Using natural language processing, Jumio’s platforms can read through and extract relevant information from documents. The system scans that information for irregularities, such as odd wordings or misspellings, which could indicate a fraud.

AI for fraud prevention in finance

For financial institutions, whose customers conduct much more business online, this type of fraud detection and identity verification technology is vital.

For combating fraud, “leveraging AI is critical,” said Amyn Dhala, global product lead at AI Express, Mastercard’s methodology for the deployment of AI that grew out of the credit card company’s 2017 acquisition of Brighterion.

To help stop fraudsters, financial institutions are using AI-powered security tools.

Through AI Express, Mastercard sells AI for fraud prevention tools, as well as AI-powered technologies, to help predict credit risk, manage network security and catch money-laundering.

AI, Dhala said in an interview at AI World 2019 in Boston, is “important to provide a better customer experience and drive profitability,” as well as to ensure customer safety.

The 9 to 5 fraudster

For financial institutions, blocking fraudsters is no simple task. Criminals intent on fraud are taking a professional approach to their work, working for certain hours during the week and taking weekends off, according to an October 2019 report from Onfido, a London-based vendor of AI-driven identity software.

Also, today’s fraudsters are highly technologically skilled, said Dan Drapeau, head of technology at Blue Fountain Media, a digital marketing agency owned by Pactera, a technology consulting and implementation firm based in China.

“You can always throw new technology at the problem, but cybercriminals are always going to do something new and innovative, and AI algorithms have to catch up to that,” Drapeau said. “Cybercriminals are always that one step ahead.”

“As good as AI and machine learning get, it still will always take time to catch up to the newest innovation from criminals,” he added.

Still, by using AI for fraud prevention, financial organizations can stop a good deal of fraud automatically, Drapeau said. For now, combining AI with manual work, such as checking or double-checking data and verification documents, works best, he said.

Go to Original Article
Author:

UCaaS vendor Intermedia adds Telax CCaaS to portfolio

Unified communications vendor Intermedia has added contact center software to its cloud portfolio. The move is the latest example of how the markets for UC and contact center technologies are converging.

Intermedia follows the lead of other cloud UC vendors, including RingCentral, Vonage and 8×8, in building or acquiring a contact center as a service (CCaaS) platform. Intermedia’s CCaaS software stems from the acquisition of Toronto-based Telax in August.

The Intermedia Contact Center will be available as a stand-alone offering or bundled with Intermedia Unite, a cloud-based suite of calling, messaging and video conferencing applications. Intermedia will sell the offering in three tiers: Express, Pro and Elite.

Express — sold only as an add-on to Intermedia Unite — is a basic call routing platform for small businesses. Pro includes more advanced call routing, analytics, and support for additional contact channels, such as chat.

Elite, the most expensive tier, integrates with CRM platforms and includes support for self-service voice bots, outbound notification campaigns and quality assurance monitoring. 

Intermedia has already integrated Express with its UC platform. It’s planning to do the same for Pro and Elite early next year.

Integrating UC and contact center platforms can save money by letting customer service agents transfer calls outside of the contact center without going through the public telephone network. Plus, communication between agents and others in the organization is more effective when everyone uses the same chat and video apps.

Based in Sunnyvale, Calif., Intermedia sells its technology to small and midsize businesses through 6,600 channel partners. Most of them are managed service providers that brand Intermedia’s service as their own.

In addition to UC and contact center, Intermedia offers email archiving and encryption, file backup and sharing systems, and hosted Microsoft email services.

Roughly 1.4 million people across 125,000 businesses use Intermedia’s technology. The company, founded in 1995 and now owned by private equity firm Madison Dearborn Partners, said its acquisition of Telax brought annual revenue to around $250 million. 

Founded in 1997, Telax sold its CCaaS platform exclusively through service providers, which rebranded it mostly for small and midsize businesses.

Go to Original Article
Author:

HP’s purchase of endpoint security vendor Bromium a win for IT

HP Inc. plans to acquire Bromium Inc., an endpoint security vendor that uses microvirtualization technology to isolate threats from untrusted sources.

Bromium, founded by Gaurav Banga, Simon Crosby and Ian Pratt in 2010, is known for its Microvisor software, which uses hardware virtualization to launch a virtual machine for every browser tab or email attachment opened. The idea is to trap malicious code before it can infect a user’s machine.

Analysts called the acquisition unsurprising. Not only has HP been reselling Bromium software as Sure Click since 2017, but the endpoint security vendor market has been in the throes of rapid consolidation. Just last month, VMware and Broadcom acquired Carbon Black and Symantec, respectively.

Analysts also labeled the news a good thing for IT admins. Brad LaPorte, an analyst at Gartner specializing in endpoint security and threat intelligence, said a deal like this “is a multiplier” for those in charge of HP devices.

“When you roll out a fleet of HP laptops, you’ll already have a centralized agent that is secure by default, which will greatly reduce the number of agents you have to install and manage,” he said. “The added security will also mean fewer alerts because your attack surface has been greatly reduced.”

But, he cautioned, while HP is headed in the right direction, not every company will benefit from the acquisition and there are steps HP still needs to take to round out its security program.

HP’s response to Dell

LaPorte described the acquisition as a safe bet for HP, one that could help the company stay relevant. “This is a play to compete against Dell’s Endpoint Security Suite that it’s had for a couple of years now,” he said.

Eric Parizo, senior analyst at Ovum, also pointed to the Dell rivalry as rationale behind the acquisition. He ticked off Dell’s growing endpoint security capabilities, which include its RSA NetWitness Endpoint security product, its ownership of managed security services provider SecureWorks, and its more recent go-to-market partnership with CrowdStrike.

“HP needed additional endpoint security technology to bolster the capabilities it can provide as a technical solution to secure its PCs and laptops, but also as a bundling option to increase the size of its sales opportunities,” Parizo said. “This move also helps counter the perception that Dell has more to offer in the way of endpoint security. Although Dell still has more options, now at least, HP can say it has a viable alternative.”

By acquiring rather than reselling the technology, HP can build out the Bromium functionality, something Paula Musich, security and risk management research director at Enterprise Management Associates Inc., fully expects to see.

“HP hasn’t offered a roadmap for where they plan to take the acquired technology, but it wouldn’t be a huge surprise to see them eventually extend the technology to HP’s vast printer portfolio,” she said. “Internet-connected printers are a target for attackers, and there’s a potentially huge addressable market in adapting the technology to HP printers.”

If Musich’s theory becomes practice, IT admins would benefit by having “a single source for protecting both printers and PCs/laptops,” she said.

Even in the short term, the acquisition will help IT admins better manage HP laptops and PCs, as well as provide an added layer of security. Bromium provides security “from the user in versus the network out,” said Zeus Kerravala, founder and principal analyst at ZK Research in Boston.

“The more distributed computing becomes and the more we do more things on more devices in more places, the more something like Bromium is needed,” he said.

LaPorte described Bromium as an endpoint security vendor whose product operates on a pre-OS layer, or hardware layer, rather than post-OS layer. Investing in such products is HP’s — and Dell’s, for that matter — attempt at getting ahead of attacks that target deeper layers of the computing stack.

‘Too many pizza shops’

Although the dollar figure HP will pay for Bromium was not disclosed, LaPorte described the acquisition as a likely cheap bet for HP. In a 2016 attempt to secure funding, Bromium’s valuation was cut almost in half; its growth and profitability had recently been in the single digits.

But the acquisition may not be a good fit for everyone. LaPorte said companies that use a golden image, or a preconfigured template for virtual machines, may miss out on the benefits that an endpoint security product provides. “When you remove these features to meet specific organizational needs, you are sacrificing security in lieu of efficiency,” he said. “Buyers need to consider these requirements before purchasing.”

And the buy still leaves HP’s security services and endpoint detection and response functionality lacking, especially compared to Dell. LaPorte believes HP will take its next steps in these areas.

On the whole, LaPorte expects consolidation of the endpoint security vendor market to continue on a weekly if not multiweekly basis. “There are too many pizza shops and not enough people buying pizza,” he said simply.

Clear leaders, such as CrowdStrike and Microsoft, control a significant portion of the market share, making it difficult for other endpoint security vendors to find decent footing in the market, according to LaPorte.

“The market share for the people who are not the leaders in this space is going down exponentially,” he said.

Although he has little insight into the Bromium acquisition, Steve Athanas, associate CIO of system architecture at UMASS Lowell and VMUG president, said it’s a market he is keeping an eye on.

“I’m very interested to see how this wave of security acquisitions and consolidation plays out,” he said.

Go to Original Article
Author:

Cobalt Iron extends data protection SaaS tool’s capabilities

Cobalt Iron, a cloud-based data protection and backup vendor, this week rolled out a new feature in its Adaptive Data Protection SaaS tool, aimed at improving data security and cyberattack prevention.

The addition, called Cyber Shield, is intended to identify and contain cyberattacks by locking down data access control while saving companies the cost of adding new security measures. The software includes functions such as data access control, data security, ransomware detection and ransomware responsiveness, which are intended to provide quick response to threats as well as recovery functions to reduce the impact of cyberattacks, the company said.

Cobalt Iron’s update comes amidst the industry’s shift in security investments from threat prevention to threat detection and as enterprises are facing more complicated cyberattacks. Gartner recently identified expanding capabilities for security operation centers as one of its top priorities for enterprises in 2019.

Cobalt Iron said Cyber Shield is available now with no added cost as a part of Cobalt Iron’s Adaptive Data Protection SaaS, and does not require additional equipment, software or licenses.

Competitors in the cloud-based data protection market include Symantec, McAfee and Druva. Symantec’s Cloud Data Protection claims to enable organizations to use cloud applications such as Salesforce.com and ServiceNow and secure sensitive data while complying with data privacy regulations. McAfee MVISION Cloud Encrypt can block downloads of corporate data to personal devices and provide sensitive data encryption, which is inaccessible to any third party, according to the company. Druva promises lower costs due to the elimination of onsite hardware and software and offers a free trial of its products and pay-as-you-go pricing.

Go to Original Article
Author:

Kaseya ramps up managed compliance services focus

MSP software vendor Kaseya revealed it has invested $10 million into a newly formed business unit dedicated to managed compliance services.

The division focuses on Kaseya Compliance Manager, a platform the vendor developed after acquiring RapidFire Tools in 2018. Kaseya Compliance Manager lets MSPs assess and monitor customers’ compliance posture within a number of regulatory frameworks, including the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the General Data Protection Regulation (GDPR). The platform can also help MSPs and their customers demonstrate compliance with cyberinsurance policies. Kaseya recently appointed Max Pruger, formerly chief revenue officer at CloudJumper, to lead the unit as senior vice president and general manager of compliance.

“We as a company believe that compliance is the next big managed service. It is a close cousin to security … [and] a fantastic opportunity for MSPs to expand their business and monetize a very low-touch type of offering,” said Fred Voccola, CEO of Kaseya.

What Kaseya Compliance Manager does

Fred Voccola, CEO of Kaseya

According to Voccola, Kaseya’s compliance management platform scans a customer’s networks and infrastructure to gather about 70% to 90% of the data required by regulators. The remaining 10% to 30% of information can’t be obtained through automated processes, so the software generates a checklist to guide that information’s collection.

“What our product does is it automates everything that can be automated and then it lists the 50 to 100 items that have to be ‘manually’ proven,” Voccola said.

HIPAA, for example, states that a medical provider must physically store patient files in a room with secure locks on the doors. In this case, Kaseya Compliance Manager would direct the MSP and its customer to take photos of the door locks and load the photos into the software, he said.

Once the information is collected, MSPs can then generate documentation and reports to show the customer has met compliance requirements.

In addition, Voccola said the software lets MSPs continually monitor customers’ compliance status. MSPs can receive alerts if changes in a customer’s network or infrastructure cause a compliance issue.

Enabling managed compliance services

A portion of Kaseya’s $10 million investment will go into developing resources to help MSPs establish managed compliance practices. Resources include a content library to learn about how to price, sell and deliver the services. “MSPs don’t have to be an expert in the compliance framework with this offering. That’s the biggest part of it,” Voccola said.

Max Pruger, senior vice president and general manager of compliance, Kaseya

Kaseya is also encouraging MSPs to use the platform internally. Under certain regulatory frameworks, such as HIPAA, MSPs must demonstrate internal compliance before they can touch customer data. Voccola said Kaseya gives MSPs the license for their own internal usage for free when they purchase Kaseya Compliance Manager.

Pruger added that MSPs can also benefit from using the software internally for showing continual compliance with cyberinsurance policies. “Every MSP out there should have cyberinsurance,” Pruger noted.

Voccola said that regulatory compliance will soon become a common part of doing business for all MSPs in the U.S., especially as states roll out localized privacy legislation. He cited the California Consumer Privacy Act, introduced in 2018, as an example.

Pruger agreed. “I will say that in the next 24 months, every single MSP will have to have a compliance practice, because every single state in the United States is going to have specific compliance rules that they are going to have to follow,” Pruger said.

In the next quarter, Pruger said he aims to bring Kaseya Compliance Manager to market across GDPR, cyberinsurance, HIPAA and NIST frameworks. “As far as cyberinsurance, HIPAA and NIST go within the U.S., every single MSP has to be [compliant with] at least one of those,” he said. He noted that he will look to add more compliance standards on the platform.

Only about 400 MSPs are currently using Kaseya Compliance Manager, Voccola noted — a number the company hopes to greatly increase in the coming months.

SADA offers flat-rate GCP services

SADA, a business and technology consultancy based in Los Angeles, launched four flat-rate packaged offers for Google Cloud Platform adopters.

The packaged services include Anthos First Step, Anthos Flat-Rate, Database Migration Flat-Rate and VM Migration Flat-Rate. SADA delivers the services for a flat price and according to a fixed time. Miles Ward, CTO at SADA, said the GCP services address customers’ uncertainty and risk when moving to the cloud. An organization may balk at a cloud migration service if the service provider can’t cite a definitive delivery schedule or set a fixed price.

“The ability to have a flat-rate offer lets the conversation start,” Ward said. He noted customers are more likely to greenlight a project if the service is prescriptive, time-bound and available at a specific price point.

The Anthos First Step package provides the first phase of setting up and using Google Anthos. The offering includes x86 portable or rack-mounted infrastructure along with Google Anthos, VMware vCenter 6.5 and F5 BIG-IP Virtual Edition. SADA provides on-site hardware and software installation, a hands-on lab and a help desk trained on Anthos/Kubernetes.

Anthos Flat-Rate covers the second phase of an Anthos implementation. The package includes everything in the first-step package as well as additional items including a review of the initial implementation, identification of production goals and stakeholders for readiness reviews, and any additional equipment delivery and validation, according to SADA.

The Database Migration Flat-Rate package includes migration of a customer’s database to GCP, while VM Migration Flat-Rate migrates a customer’s virtual machines to GCP.

Hostway, Hosting rebrand as Ntirety

Emil Sayegh, president and CEO at Ntirety

Hostway Services Inc. and Hosting, which merged in January, have rebranded as Ntirety.

The managed cloud services company is based in Austin, Texas, and has vendor certifications with companies such as AWS, Microsoft and Oracle. Emil Sayegh, president and CEO at Ntirety, cited “strong synergy” between the companies’ offerings and no overlap between their customer bases. On the IT side, the companies had both been using ScienceLogic to run their businesses. The consolidation of those instances has generated cost savings, Sayegh noted.

No additional acquisitions are in the offing this year, but Sayegh noted the potential for merger and acquisition activity in 2020.

Other news

Christian Alvarez, vice president, Americas Channel, at Nutanix
  • Nutanix has appointed Christian Alvarez as vice president, Americas Channel. Alvarez was previously worldwide head of channels and distribution at Juniper Networks. He has also held positions at Cyan, a Ciena company; Avaya; eLandia Group; Connexion Technologies and Terremark Worldwide, a Verizon company. Nutanix said its partners include value-added resellers, distributors, system integrators, OEM partners and technology alliances.
  • Hewlett Packard Enterprise launched HPE ML Ops, a container-based software product to support the machine-learning model lifecycle, according to the company. An HPE spokesperson said the vendor believes the product presents an opportunity for partners. “Channel partners need to build a practice in this area and develop expertise in data science, AI [and machine learning], and advanced analytics. It’s an opportunity for them to … provide a strategic advisory role for their customers as they look to deliver game-changing business innovation with AI,” the spokesperson said.
  • NYI, a hybrid IT solutions provider based in New York, has acquired a data center in Chicago. The former Navisite facility is geared toward edge and IoT requirements, according to the company.
  • In distribution news, Ingram Micro inked a deal with CoreKinect to provide its IoT sensors to U.S. channel partners. Meanwhile, Tech Data signed an agreement with OPAQ to provide its network-security-as-a-service cloud platform to U.S. service providers.
  • Mission, an MSP based in Los Angeles, said it obtained AWS APN Premier Consulting Partner status.
  • Axcient, a business availability and cloud migration company, unveiled a lead generation program that it said is free for all Axcient partners.
  • Opengear, which specializes in enterprise automation, network resilience and security, will host its first channel partner conference Sept. 16-17 in Dallas.

Market Share is a news roundup published every Friday.

Go to Original Article
Author: