Tag Archives: vendor

Sophos adds mobile threat defense app to Intercept X line

Security vendor Sophos this month expanded its endpoint protection lineup with Intercept X for Mobile. The new mobile security application extends the company’s Intercept security software to devices including phones, tablets and laptops.

The new offering is meant to bolster mobile threat defense for devices running on Android, iOS and Chrome OS. Features include:

  • Authenticator: Helps to manage multi-factor authentication passwords for sites like Google, Amazon and Facebook.
  • Secure QR code scanner: Scans target URLs for malicious content.
  • Privacy protection: Detects when personal data is accessed or if there are hidden costs associated with downloaded apps.

“The biggest unique point of the Intercept X model is that we are a security model, and we do security for different platforms and can be configured in one place,” said Petter Nordwall, director of product management at Sophos. “Intercept X, as a whole, can now protect Windows, Mac, iOS, Chromebooks and servers. Regardless of what platform they use, they can use Intercept X.”

Sophos introduced Intercept X in 2016 as a cloud-based tool designed to enhance endpoint security already running in an environment. Intercept X for Server was introduced in December 2018; an update launched in May 2019 added endpoint protection and response features.

Mobile threats on the rise

In “Advance and Improve Your Mobile Security Strategy,” a recent report from Gartner, senior analyst Patrick Hevesi found that “mobile security products are becoming increasingly important as the rate of mobile attacks continues to grow.” Hevesi recommended tech professionals track new threats, build a mobile threat defense strategy and set minimum OS and hardware versions.

He added that organizations should focus on training users on what threats actually look like, rather than letting the systems do all the work.

“Everyone is doing antiphishing training, but think about the application,” Hevesi said. “The user doesn’t think about mobile in the same way; they see a highly rated app and don’t think about why the app needs permission to my contact data.”

Pricing for Intercept X for Mobile ranges from $24.50 to $63 per 100 seats, depending on the addition of Sophos Mobile, the company’s unified endpoint management system. Intercept X for Mobile is available as a free download for individual use from Google Play and the Apple App Store.

Go to Original Article
Author:

EG Enterprise v7 focuses on usability, user experience monitoring

Software vendor EG Innovations will release version 7 of EG Enterprise, its end-user experience monitoring software, on Jan. 31.

New features and updates have been added to the IT monitoring software with the goal of making it more user-friendly. The software focuses primarily on monitoring end-user activities and responses.

“Many times, vendor tools monitor their own software stack but do not go end to end,” said Srinivas Ramanathan, CEO of EG Innovations. “Cross-tier, multi-vendor visibility is critical when it comes to monitoring and diagnosing user experience issues. After all, users care about the entire service, which cuts across vendor stacks.”

Ramanathan said IT issues are not as simple as they used to be.

“What you will see in 2020 is now that there is an ability to provide more intelligence to user experience, how do you put that into use?” said Mark Bowker, senior analyst at Enterprise Strategy Group. “EG has a challenge of when to engage with a customer. It’s of value to them if they engage with the customer sooner in an end-user kind of monitoring scenario. In many cases, they get brought in to solve a problem when it’s already happened, and it would be better for them to shift.”

New features in EG Enterprise v7 include:

  • Synthetic and real user experience monitoring: Users can create simulations and scripts of different applications that can be replayed to help diagnose problems and notify IT operations teams of impending issues.
  • Layered monitoring: Enables users to monitor every tier of an application stack via a central console.
  • Automated diagnosis: Lets users apply machine learning and automation to find the root causes of issues.
  • Optimization plan: Users can customize optimization plans through capacity and application overview reports.

“Most people look at user experience as just response time for accessing any application. We see user experience as being broader than this,” Ramanathan said. “If problems are not diagnosed correctly and they recur again and again, it will hurt user experience. If the time to resolve a problem is high, users will be unhappy.”

Pricing for EG Enterprise v7 begins at $2 per user per month in a digital workspace. Licensing for other workloads depends on how many operating systems are being monitored. The new version includes support for Citrix and VMware Horizon.


Google buys AppSheet for low-code app development

Google has acquired low-code app development vendor AppSheet in a bid to up its cloud platform’s appeal among line-of-business users and tap into a hot enterprise IT trend.

Like similar offerings, AppSheet ingests data from sources such as Excel spreadsheets, Smartsheet and Google Sheets. Users apply views to the data — such as charts, tables, maps, galleries and calendars — and then develop workflows with AppSheet’s form-based interface. The apps run on Android, iOS and within browsers.

AppSheet, based in Seattle, already integrated with G Suite and other Google cloud sources, as well as Office 365, Salesforce, Box and other services. The company will continue to support and improve those integrations following the Google acquisition, AppSheet CEO Praveen Seshadri said in a blog post.

“Our core mission is unchanged,” Seshadri said. “We want to ‘democratize’ app development by enabling as many people as possible to build and distribute applications without writing a line of code.”

Terms of the deal were not disclosed, but the price tag for the low-code app development startup is likely far less than Google’s $2.6 billion acquisition of data visualization vendor Looker in June 2019.

Under the leadership of former longtime Oracle executive Thomas Kurian, Google Cloud was expected to make a series of deals to shore up its position in the cloud computing market, where it trails AWS and Microsoft by significant percentages.

So far, Kurian has not made moves to buy core enterprise applications such as ERP and CRM, two markets dominated by the likes of SAP, Oracle and Salesforce. Rather, the AppSheet purchase reflects Google Cloud’s perceived strength in application development, but with a gesture toward nontraditional coders.

As for why Google chose AppSheet to boost its low-code/no-code strategy, one reason could be the dwindling number of options. In the past couple of years, several prominent low-code/no-code vendors became acquisition targets. Notable examples include Siemens’ August 2018 purchase of Mendix for $730 million, and more recently, Swiss banking software provider Temenos’ move to buy Kony in a $559 million deal.

It’s not as if Google, Siemens and Temenos made a long-shot bet, either. A survey released last year by Forrester Research found that 23% of more than 3,000 developers surveyed reported their companies were already using low-code development platforms. In addition, another 22% indicated their organizations would buy into low-code within a year.

Google’s purchase of AppSheet gives it low-code app dev tools for business users.

Low-code competition heightens

Google’s AppSheet buy pits it directly against cloud platform rival Microsoft, whose citizen developer-targeted Power Apps low-code development platform has taken off like a rocket, said John Rymer, an analyst at Forrester. The acquisition of AppSheet also sets Google apart from cloud market share leader AWS, whose rumored low-code/no-code platform, said to be under development by a team led by prominent development guru Adam Bosworth, has yet to appear.

However, in AppSheet, Google is getting a winner, Rymer noted. “It’s a really good product and a really good team,” he said.

Moreover, the addition of AppSheet will help Google get more horsepower out of Apigee than just API management. The company wanted a broader platform with more functionality to address more customers and more use cases, Rymer said.

“So, I think they will be positioning this as a new platform anchored by Apigee,” he said. “Customers could use Apigee to create and publish APIs and AppSheet is how they would consume them. But they won’t stop there. They need process automation/workflow, so I would expect them to go there as well.”


Meanwhile, another key benefit Google gains from this acquisition is the integration that AppSheet already has with Google’s office productivity products, said Jeffrey Hammond, another Forrester analyst.

“G Suite has always felt a bit out of place to me at Google’s developer conferences, but it used to be one of the main ‘leads’ for the enterprise,” he said. “AppSheet gives Google the potential to craft a more cohesive story that integrates that with Google Cloud and Anthos in the future.”

Overall, this acquisition is yet another indication that low-code/no-code development has gone mainstream and the number of people building applications will continue to grow.


Aruba SD-Branch gets intrusion detection, prevention software

Wireless LAN vendor Aruba has strengthened security in its software-defined branch product by adding intrusion detection and prevention software. The vendor is aiming the latest technology at retailers, hotels and healthcare organizations with hundreds of locations.

Aruba, a Hewlett Packard Enterprise company, also introduced this week an Aruba SD-Branch gateway appliance with a built-in Long Term Evolution (LTE) interface. Companies often use LTE cellular as a backup when other links are temporarily unavailable.

The latest iteration of Aruba’s SD-Branch has an intrusion detection system (IDS) that performs deep packet inspection, monitoring network traffic for malware and suspicious activity. When either is detected, the IDS alerts network managers, while the new intrusion prevention system (IPS) takes immediate action to block threats from spreading to networked devices. The IPS software takes action based on policies set in Aruba’s ClearPass access control system.

Previously, Aruba security was mostly focused on letting customers set security policies that restricted network access of groups of users, devices and applications. The company also provided customers with a firewall.

“But this IDS and IPS capability takes it a step further and allows enterprises that have deployed Aruba to quickly detect and prevent unwanted traffic from entering and exiting their networks,” said Brandon Butler, an analyst at IDC.

The latest features bring Aruba in line with other vendors, Butler said. In general, security is part of a “holistic” approach vendors are taking toward SD-branch.

Other features vendors are adding include WAN optimization, direct access to specific SaaS and IaaS providers, and a management console for the wired and wireless LAN. Software-defined WAN (SD-WAN) technology for traffic routing is a staple within all SD-branch offerings.

Aruba LTE gateway

The new gateway appliance is a key component of Aruba’s SD-Branch architecture. The multifunction hardware includes a firewall and SD-WAN capabilities.

The device integrates with Aruba’s ClearPass and its cloud-based Central management console. The latter oversees the SD-WAN, as well as Aruba access points, switches and routers.

The new SD-Branch gateway with an LTE interface is the latest addition to the 9000 series Aruba launched in the fourth quarter of last year. The hardware is Aruba’s highest performing gateway with four 1 Gb ports and an LTE interface that delivers 600 Mbps downstream and 150 Mbps upstream.

Certification of the device by all major carriers will start this quarter, Aruba said.

Other network and security vendors providing SD-branch products include Cisco, Cradlepoint, Fortinet, Riverbed and Versa Networks. All the vendors combine internally developed technology with that of partners to deliver a comprehensive SD-Branch. Aruba, for example, has security partnerships with Zscaler, Palo Alto Networks and Check Point.

The vendors are competing for sales in a fast-growing market. Revenue from SD-branch will increase from $300 million in 2019 to $2.6 billion by 2023, according to Doyle Research.


Hyper-V Virtual CPUs Explained

Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.

Note: This article was originally published in February 2014. It has been fully updated to be relevant as of November 2019.

Introduction to Virtual CPUs

Like all other virtual machine “hardware”, virtual CPUs do not exist. The hypervisor uses the physical host’s real CPUs to create virtual constructs to present to virtual machines. The hypervisor controls access to the real CPUs just as it controls access to all other hardware.

Hyper-V Never Assigns Physical Processors to Specific Virtual Machines

Make sure that you understand this section before moving on. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. You cannot assign a physical core to a VM at all. So, does this mean that you just can’t meet that vendor request to dedicate a core or two? Well, not exactly. More on that toward the end.

Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:

Task Manager

Nothing fancy, right? Looks familiar, doesn’t it?

Now, back when computers never, or almost never, shipped as multi-CPU, multi-core boxes, we all knew that computers couldn’t really multitask. They had one CPU and one core, so there was only one possible active thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now. You had a long list of running processes, each with a metric indicating what percentage of the CPU’s time it was using.

Then, as now, each line item you see represents a process (or, new in recent Task Manager versions, a process group). A process consists of one or more threads. A thread is nothing more than a sequence of CPU instructions (keyword: sequence).

What happens is that the operating system (in Windows, this started with Windows 95 and NT) stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. We call this pre-emptive multitasking, meaning that the operating system decides when to suspend the current thread and switch to another. You can set priorities that affect how a process rates, but the OS is in charge of thread scheduling.

Today, almost all computers have multiple cores, so Windows can truly multi-task.
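The pre-emptive, round-robin idea described above can be sketched as a toy scheduler. This is purely illustrative (the function name and queue model are invented for this sketch; real schedulers weigh priorities rather than using a plain FIFO queue):

```python
from collections import deque

def round_robin(work, cores=2, quantum=1):
    """Toy pre-emptive scheduler. `work` maps thread name -> units of CPU
    work needed. Each pass, up to `cores` threads run for one `quantum`,
    are pre-empted, and rejoin the back of the queue. Returns which
    threads held a core during each pass."""
    queue = deque(work)
    remaining = dict(work)
    timeline = []
    while queue:
        batch = [queue.popleft() for _ in range(min(cores, len(queue)))]
        timeline.append(tuple(batch))
        for name in batch:
            remaining[name] -= quantum
            if remaining[name] > 0:
                queue.append(name)  # pre-empted: state saved, back of the line
    return timeline

# Three threads on two cores: A needs two quanta, B and C one each.
assert round_robin({"A": 2, "B": 1, "C": 1}, cores=2) == [("A", "B"), ("C", "A")]
```

Even in this simplification, note that a thread may resume on a different "core slot" than the one it last ran on, which matters for the discussion below.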

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads. Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads.

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way, something like this:

Hypervisor Thread Scheduling

Of course, a real system will always have more than nine threads running. The thread scheduler will place them all into a queue.

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. You cannot do that in Hyper-V. Doing so would have questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world or the Hyper-V world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level. It doesn’t really let the guests have any input. Guest operating systems schedule the threads from the processes that they own. When they choose a thread to run, they send it to a virtual CPU. Hyper-V takes it from there.

The image that I presented above is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas. Few administrators require that level of depth, and exploring it here would take this article far afield.

The first thing that matters: affinity aside, you never know where any given thread will execute. A thread that was paused to yield CPU time to another thread may very well be assigned to another core when it is resumed. Did you ever wonder why an application consumes right at 50% of a dual-core system and each core looks like it’s running at 50% usage? That behavior indicates a single-threaded application. Each time the scheduler executes it, it consumes 100% of the core that it lands on. The next time it runs, it stays on the same core or goes to the other core. Whichever core the scheduler assigns it to, it consumes 100%. When Task Manager aggregates its performance for display, that’s an even 50% utilization — the app uses 100% of 50% of the system’s capacity. Since the core not running the app remains mostly idle while the other core tops out, they cumulatively amount to 50% utilization for the measured time period. With the capabilities of newer versions of Task Manager, you can now instruct it to show the separate cores individually, which makes this behavior far more apparent.
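That 50% figure is simple arithmetic. Here is a sketch of how a Task Manager-style aggregate could be computed (a hypothetical helper for illustration, not Windows' actual implementation):

```python
def aggregate_utilization(per_core_busy_seconds, interval_seconds):
    """Task Manager-style aggregate: total busy time across all cores
    divided by the total core-time available in the interval."""
    cores = len(per_core_busy_seconds)
    return sum(per_core_busy_seconds) / (cores * interval_seconds)

# A single-threaded app runs flat-out for a 10-second interval, bouncing
# between two cores (6 s on core 0, 4 s on core 1). Each core ran at 100%
# whenever the thread landed on it, yet the aggregate reads 50%:
assert aggregate_utilization([6.0, 4.0], 10.0) == 0.5
```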

Now we can move on to a look at the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs Assigned to a VM Really Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have logical processors in your host.

Invalid CPU Count

So, a virtual machine’s vCPU count means this: the maximum number of threads that the VM can run at any given moment. I can’t set the virtual machine from the screenshot to have more than two vCPUs because the host only has two logical processors. Therefore, there is nowhere for a third thread to be scheduled. But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads to Hyper-V for scheduling. The virtual machine’s thread scheduler (the supervisor) will keep its other threads in a queue, waiting for their turn.

But Can’t I Assign More Total vCPUs to all VMs than Physical Cores?

Yes, the total number of vCPUs across all virtual machines can exceed the number of physical cores in the host. It’s no different than the fact that I’ve got 40+ processes “running” on my dual-core laptop right now. I can only run two threads at a time, but I will always have far more than two threads scheduled. Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people never see a need to think through what’s going on. Your VMs (supervisors) will bubble up threads to run and Hyper-V (hypervisor) will schedule them the way (mostly) that Windows has been scheduling them ever since it outgrew cooperative scheduling in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It’s wasteful, but you can do it. I could run my current desktop configuration on a quad 16-core server and I’d never get any contention. But, I probably wouldn’t see much performance difference. Why? Because almost all my threads sit idle almost all the time. If something needs 0% CPU time, what does giving it its own core do? Nothing, that’s what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because no one really has any idea. The scheduler will evenly distribute threads across the available cores. So, the number of physical CPUs needed doesn’t depend on how many virtual CPUs there are; it depends entirely on what the operating threads need. And even if you’ve got a bunch of heavy threads going, that doesn’t mean their systems will die as they get pre-empted by other heavy threads. The necessary vCPU/pCPU ratio depends entirely on the CPU load profile and your tolerance for latency. Multiple heavy loads require a low ratio. A few heavy loads work well with a medium ratio. Light loads can run on a high-ratio system.

I’m going to let you in on a dirty little secret about CPUs: Every single time a thread runs, no matter what it is, it drives the CPU at 100% (power-throttling changes the clock speed, not workload saturation). The CPU is a binary device; it’s either processing or it isn’t. When your performance metric tools show you that 100% or 20% or 50% or whatever number, they calculate it from a time measurement. If you see 100%, that means that the CPU was processing during the entire measured span of time. 20% means it was running a process 1/5th of the time and 4/5th of the time it was idle. This means that a single thread doesn’t consume 100% of the CPU, because Windows/Hyper-V will pre-empt it when it wants to run another thread. You can have multiple “100%” CPU threads running on the same system. Even so, a system can only act responsively when it has some idle time, meaning that most threads will simply let their time slice go by. That allows other threads to access cores more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because threads must wait. Using additional cores will address this concern as it spreads the workload out.
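The time-ratio idea can be shown as a small calculation (illustrative only; the function and interval format are invented for this sketch, not how any real monitoring tool is implemented):

```python
def cpu_percent(busy_intervals, window_seconds):
    """CPU 'usage' as a time ratio: the fraction of the measurement window
    during which the core was executing any thread at all. `busy_intervals`
    is a list of (start, end) times within the window."""
    busy = sum(end - start for start, end in busy_intervals)
    return 100.0 * busy / window_seconds

# The core ran threads for 2 seconds of a 10-second window. It was "at
# 100%" whenever it was processing, but the tool reports 20%:
assert cpu_percent([(0.0, 1.5), (4.0, 4.5)], 10.0) == 20.0
```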

The upshot: if you want to know how many physical cores you need, then you need to know the performance profile of your actual workload. If you don’t know, then start from the earlier 8:1 or 12:1 recommendations.


What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you have a CPU contention problem to solve. Let the thread scheduler do its job. Just as setting CPU priorities on threads in Windows can cause more problems than it solves, fiddling with hypervisor vCPU settings can make everything worse.

Let’s look at the config screen:

vCPU Settings

The first group of boxes is the reserve. The first box represents the percentage of the VM’s allowed vCPUs to set aside; its actual meaning depends on the number of vCPUs assigned to the VM. The second box, the grayed-out one, shows the total percentage of host resources that Hyper-V will set aside for this VM. In this case, I have a 2 vCPU system on a dual-core host, so the two boxes will be the same. If I set a 10 percent reserve, that’s 10 percent of the total physical resources. If I drop the allocation down to 1 vCPU, then a 10 percent reserve becomes 5 percent physical. The second box will be auto-calculated as you adjust the first box.
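The relationship between the two boxes reduces to simple arithmetic. A sketch of the calculation just described (the function name is invented for illustration):

```python
def host_reserve_percent(vm_reserve_percent, vm_vcpus, host_logical_processors):
    """The grayed-out second box: the per-VM reserve percentage scaled by
    the fraction of the host's logical processors the VM can occupy."""
    return vm_reserve_percent * vm_vcpus / host_logical_processors

# 10% reserve on a 2-vCPU VM on a 2-LP host reserves 10% of the host:
assert host_reserve_percent(10, 2, 2) == 10.0
# Drop the VM to 1 vCPU and the same 10% reserve is only 5% of the host:
assert host_reserve_percent(10, 1, 2) == 5.0
# To make the second box show 25% (one core of a quad-core host) for a
# 1-vCPU VM, the first box must be set to 100%:
assert host_reserve_percent(100, 1, 4) == 25.0
```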

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine won’t start. But, if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). But, if a VM with a 20% reserve is sitting idle, then other processes are allowed to use up to 100% of the available processor power… until such time as the VM with the reserve starts up. Then, once the CPU capacity is available, the reserved VM will be able to dominate up to 20% of the total computing power. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. You enter whatever number in the top box that makes the second box show the equivalent processor power of however many pCPUs/cores the vendor thinks they need. If they want one whole CPU and you have a quad-core host, then make the second box show 25%. Do you really have to? Well, I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers say they can do is because of software vendors’ arbitrary rules, but that’s a rant for another day.

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar. The two boxes work together in the same way as the reserve boxes.

The final box is the priority weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they’re all beneath all the VMs that have 200 and above all the VMs that have 50, and so forth. If you’re going to tinker, weight is safer than fiddling with reserves because you can’t ever prevent a VM from starting by changing relative weights. What the weight means is that when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher-weighted VMs go first.

But What About Hyper-Threading?

Hyper-Threading allows a single core to operate two threads at once — sort of. The core can only actively run one of the threads at a time, but if that thread stalls while waiting for an external resource, then the core operates the other thread. AMD has recently added a similar technology.

To kill one major misconception: Hyper-Threading does not double the core’s performance ability. Synthetic benchmarks show a high-water mark of a 25% improvement. More realistic measurements show closer to a 10% boost. An 8-core Hyper-Threaded system does not perform as well as a 16-core non-Hyper-Threaded system. It might perform almost as well as a 9-core system.

With the so-called “classic” scheduler, Hyper-V places threads on the next available core as described above. With the core scheduler, introduced in Hyper-V 2016, Hyper-V now prevents threads owned by different virtual machines from running side-by-side on the same core. It will, however, continue to pre-empt one virtual machine’s threads in favor of another’s. We have an article that deals with the core scheduler.

Making Sense of Everything

I know this is a lot of information. Most people come here wanting to know how many vCPUs to assign to a VM or how many total vCPUs to run on a single system.

Personally, I assign 2 vCPUs to every VM to start. That gives it at least two places to run threads, which gives it responsiveness. On a dual-processor system, it also ensures that the VM automatically has a presence on both NUMA nodes. I do not assign more vCPU to a VM until I know that it needs it (or an application vendor demands it).

As for the ratio of vCPU to pCPU, that works mostly the same way. There is no formula or magic knowledge that you can simply apply. If you plan to virtualize existing workloads, then measure their current CPU utilization and tally it up; that will tell you what you need to know. Microsoft’s Assessment and Planning Toolkit might help you. Otherwise, you simply add resources and monitor usage. If your hardware cannot handle your workload, then you need to scale out.


Author: Eric Siron

Startup Sisu’s data analytics tool aims to answer, ‘Why?’

Armed with $66.7 million in venture capital funding, startup vendor Sisu recently emerged from stealth and introduced the Sisu data analytics platform.

Sisu, founded in 2018 by Stanford professor Peter Bailis and based in San Francisco, revealed on Oct. 16 that it secured $52.5 million in Series B funding, led by New Enterprise Associates, a venture capital firm with more than $20 billion in assets under management. Previously, Sisu secured $14.2 million in funding, led by Andreessen Horowitz, which also participated in the Series B round.

On the same date it revealed the new infusion of capital, the startup rolled out the Sisu data analytics tool for general use, with electronics and IT giant Samsung already listed as one of its customers.

Essentially an automated system for monitoring changes in data sets, Sisu enters a competitive market featuring not only proven vendors but also recent startups such as ThoughtSpot and Looker, which have been able to differentiate themselves enough from other BI vendors to gain a foothold and survive — Looker agreed to be acquired by Google for $2.6 billion in June while ThoughtSpot remains independent.

“Startups have to stand out,” said Doug Henschen, an analyst at Constellation Research. “They can’t present me-too versions of capabilities that are already out there. They can’t be too broad and they also can’t expect companies to risk ripping out and replacing existing systems of mission-critical importance. The sweet spot is focused solutions that complement or extend existing capabilities or that take on new or emerging use cases or challenges.”

The Sisu data analytics platform is just that — highly focused — and not attempting to do anything other than track data.

An organization’s customer conversion rate data is displayed on a Sisu dashboard.

It relies on machine learning and statistical analysis to monitor, recognize and explain changes to a given organization’s key performance indicators.

And it’s in that final stage — the explanation — where Sisu wants to differentiate from existing diagnostic tools. Others, according to Bailis, monitor data sets and are automated to send push notifications when changes happen, but don’t necessarily explain why those changes occurred.


“We’re designed to answer one key question, and be the best at it,” said Bailis, who is on leave from Stanford. “We want to be faster, and we want to be better. There’s intense pressure to build everything into a platform, but I’m a firm believer that doing any one thing well is a company in itself. I’d rather be great at diagnosing than do a bunch of things just OK.”

The speed Bailis referred to comes from the architecture of the Sisu data analytics tool. Sisu is cloud native, which gives it more computing power than an on-premises platform, and its algorithms are built on augmented intelligence.

That speed is indeed a meaningful differentiator, according to Henschen.

“The sweet spot for Sisu is quickly diagnosing what’s changing in critical areas of a business and why,” he said. “It’s appealing to high-level business execs, not the analyst class or IT. The tech is compatible with, and doesn’t try to replace, existing investments in data visualization capabilities.”

Moving forward, Bailis said the Sisu data analytics platform will stay focused on data workflows, but that there’s room to grow even within that focused space.

“Long term, there is a really interesting opportunity for additional workflow operations,” he said. “There’s value because it leads to actions, and we want to own more and more of the action space. You can take action directly from the platform.”

Meanwhile, though survival is a challenge for any startup and many begin with the end goal of being acquired, Bailis said Sisu plans to take on the challenge of independence and compete against established vendors for market share. The recent funding, he said, will enable Sisu to continue to grow its capabilities to take advantage of what he sees as “an insane opportunity.”

Henschen, meanwhile, cautioned that unless Sisu does in fact grow its capabilities, it likely will be more of an acquisition target than a vendor with the potential for long-term survival.

“Sometimes startups come up with innovative technology, but [Sisu] strikes me as an IP [intellectual property] for a feature or set of features likely to be acquired by a larger, broader vendor,” he said. “That might be a good path for Sisu, but it’s early days for the company. I think it would have to evolve and develop broader capabilities in order to go [public] and continue as an independent company.”

Sisu is a Finnish word that translates loosely to tenacity or resilience, and is used by Finns to express their national character.

Go to Original Article
Author:

Oracle looks to grow multi-model database features

Perhaps no single vendor or database platform over the past three decades has been as pervasive as the Oracle database.

Much as the broader IT market has evolved, so too has Oracle’s database. Oracle has added new capabilities to meet changing needs and competitive challenges. With a move toward the cloud, new multi-model database options and increasing automation, the modern Oracle database continues to move forward. Among the executives who have been at Oracle the longest is Juan Loaiza, executive vice president of mission critical database technologies, who has watched the database market evolve, first-hand, since 1988.

In this Q&A, Loaiza discusses the evolution of the database market and how Oracle’s namesake database is positioned for the future.

Why have you stayed at Oracle for more than three decades and what has been the biggest change you’ve seen over that time?

Juan Loaiza

Juan Loaiza: A lot of it has to do with the fact that Oracle has done well. I always say Oracle’s managed to stay competitive and market-leading with good technology.

Oracle also pivots very quickly when needed. How do you survive for 40 years? Well, you have to react and lead when technology changes.

Decade after decade, Oracle continues to be relevant in the database market as it pivots to include an expanding list of capabilities to serve users.

The big change that happened a little over a year ago is that Thomas Kurian [former president of product development] left Oracle. He was head of all development, and when he left, some of the teams, like database and apps, ended up rolling up to [Oracle founder and CTO] Larry Ellison. Larry is now directly managing some of the big technology teams. For example, I work directly with Larry.

What is your view on the multi-model database approach?

Loaiza: This is something we're starting to talk more about. The term that people use is multi-model, but we're using a different term, converged database, and the reason for that is that multi-model is just one component of it.

Multi-model really talks about different data models that you can represent inside the database, but we're also doing much more than that. Blockchain is an example of converging into the database a technology that isn't normally even thought of as database technology. So we're going well beyond the conventional kind of multi-model of, 'Hey, I can do this data format and that data format.'

Initially, the relational database was the mainstream database people used for both OLTP [online transaction processing] and analytics. What has happened in the last 10 to 15 years is that a lot of new database technologies have come around, things like NoSQL, JSON document databases, databases for geospatial data and graph databases too. So a lot of specialty databases have emerged. What's happening is, people are having to cobble together a complex web of databases to solve one problem, and that creates an enormous amount of complexity.

With the idea of a converged database, we’re taking all the good ideas, whether it’s NoSQL, blockchain or graph, and we’re building it into the Oracle database. So you can basically use one data store and write your application to that.
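The converged idea can be illustrated with a toy example: a single store holding both relational columns and a JSON document, queried together. SQLite stands in here purely as a familiar sketch; this is not Oracle's implementation, which is far richer.

```python
import json
import sqlite3

# One table, two "models": relational columns plus a JSON document column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, doc TEXT)")
conn.execute(
    "INSERT INTO orders VALUES (1, 99.5, ?)",
    (json.dumps({"items": ["phone"], "ship_to": {"city": "Austin"}}),),
)

# A relational query and document-style access against the same row.
total, doc = conn.execute("SELECT total, doc FROM orders WHERE id = 1").fetchone()
city = json.loads(doc)["ship_to"]["city"]
print(total, city)
```

The application writes to one data store and gets both shapes of data back, which is the complexity reduction Loaiza is describing.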

The analogy that we use is that of a smartphone. We used to have a music device and a phone device and a calendar device and a GPS device and all these things and what’s happened is they’ve all been converged into a smartphone.

Are companies actually shifting their on-premises production database deployments to the cloud?

Loaiza: There’s definitely a switch to the cloud. There are two models to cloud; one is kind of the grassroots. So we’re seeing some of that, for example, with our autonomous database that people are using now. So they’re like, ‘Hey, I’m in the finance department, and I need a reporting database,’ or, ‘hey, I’m in the marketing department, and I need some database to run some campaign with.’ So that’s kind of a grassroots and those guys are building a new thing and they want to just go to cloud. It’s much easier and much quicker to set up a database and much more agile to go to the cloud.

The second model is where somebody up in the hierarchy says, ‘Hey, we have a strategy to move to cloud.’ Some companies want to move quickly and some companies say, ‘Hey, you know, I’m going to take my time,’ and there’s everything in the middle.

Will autonomous database technology mean enterprises will need fewer database professionals?

Loaiza: The autonomous database addresses the mundane aspects of running a database. Things like tuning the database, installing it, configuring it, setting up HA [high availability], among other tasks. That doesn’t mean that there’s nothing for database professionals to do.

Like every other field where there is automation, what you do is you move upstream, you say, ‘Hey, I’m going to work on machine learning or analytics or blockchain or security.’ There’s a lot of different aspects of data management that require a lot of labor.

One of the nice things that we have in this industry is there is no real unemployment crisis in IT. There’s a lot of unfilled jobs.

So it's pretty straightforward for someone who has good skills in data management to just move upstream and do something that's going to add more specific value than just configuring and setting up databases, which is really more of a mechanical process.

This interview has been edited for clarity and conciseness.

AI in advertising captures audiences with personalized ads

In 2017, Flipkart, the giant e-commerce vendor based in Bengaluru, India, got a cold-call email from AdGreetz. The email highlighted AdGreetz’s product, an AI in advertising platform that can generate thousands or millions of personalized ads quickly.

Intrigued, Flipkart, which is owned primarily by Walmart, struck up communication with AdGreetz, and decided to move ahead with a pilot project.

The first campaign AdGreetz did for Flipkart reached 200 million people across social media platforms, said Vijay Sharma, associate director of brand marketing and head of digital media at Flipkart.

Advertising flip

The audience, spread across different regions of India, was diverse, Sharma said.

“These are people of different types, in different cities, with different motivations, with different relationships with Flipkart,” he said.

To give the ads a more significant impact, AdGreetz and Flipkart created about a million creatives, or ad banners and other forms of online advertising, each targeting different groups based on data collected over social media and Flipkart's e-commerce platform. Depending on whom they targeted, the ads varied dramatically, including different colors, voices and languages.

“We created a fairly complex, hard-working campaign, and that showed results,” Sharma said.

Since then, Flipkart has used AdGreetz to produce some 40 different campaigns, he said. The results have been mainly positive.

The first project took about two and a half months to complete, slowed by initial integrations and a lot of back-and-forth. Now, Sharma said, campaigns can be completed within a week.

Using AI, AdGreetz generates millions of personalized ads for Flipkart.

AI in advertising

AdGreetz, a 2009 startup based in Los Angeles, uses a platform imbued with machine learning to quickly and automatically create millions of personalized advertisements, CEO and co-founder Eric Frankel said.

Taking a few templates and AI in advertising technology, along with data, including a consumer’s buying habits, location, age and gender, AdGreetz can automatically create personalized variations of those template ads. Advertisement forms include television spots, online banners and videos, emails, and physical product labels.
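That template-plus-data pipeline can be sketched in a few lines. The template text and consumer fields below are hypothetical, not AdGreetz's actual schema:

```python
from string import Template

# A single ad template expands into one variant per consumer profile.
ad_template = Template("Hi $name! Deals on $category near $city, just for you.")

# Hypothetical consumer attributes of the kind the article mentions.
consumers = [
    {"name": "Asha", "category": "phones", "city": "Bengaluru"},
    {"name": "Ravi", "category": "shoes", "city": "Mumbai"},
]

ads = [ad_template.substitute(profile) for profile in consumers]
for ad in ads:
    print(ad)
```

Scale the consumer list to millions of profiles and vary the template per channel, and the combinatorics of "a million creatives" follow naturally.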

AdGreetz doesn't keep the consumer data used to create the ads, Frankel claimed.

The vendor’s relationship with Flipkart is close, probably closer than with any other customer AdGreetz has, Frankel said.

“They bought in from day one,” he said, adding that he thinks the relationship has been fruitful for Flipkart.

“They will probably be the most personalized company in the world,” he said.

Big results

For Flipkart, the ad campaigns have seen "big numbers," according to Sharma. The company can create far more personalized ads than it did before using AdGreetz, back when marketing teams filled up Excel sheets with creatives, he said.

While the AdGreetz platform relies on AI in advertising, it still requires manual effort from Flipkart to operate.

“It’s not an end-to-end solution,” Sharma said.

The team first creates a creative and then works with AdGreetz to build it into a storyboard, which is then multiplied and personalized. Often, Flipkart needs to provide cultural context to the advertisements and tweak them before they go live.

Sharma has spoken with some competing vendors, as they came highly rated. He said he’s keeping his options open, but for right now is sticking with AdGreetz.

He referenced Alibaba, the China-based tech and e-commerce giant. Alibaba, Sharma said, possesses the ability to create billions of personalized advertisements using AI in advertising and provides some of the most personalized marketing campaigns in the world.

"Hopefully," Sharma said, "one day, we will also get there."

Kronos introduces its latest time clock, Kronos InTouch DX

Workforce management and HR software vendor Kronos this week introduced Kronos InTouch DX, a time clock offering features including individualized welcome screens, multilanguage support, biometric authentication and integration with Workforce Dimensions.

The new time clock is aimed at providing ease of use and more personalization for employees.

"By adding consumer-grade personalization with enterprise-level intelligence, Kronos InTouch DX surfaces the most important updates first, like whether a time-off request has been approved or a missed punch needs to be resolved," said Bill Bartow, vice president of global product management at Kronos.

InTouch DX works with Workforce Dimensions, Kronos' workforce management suite. When a manager updates the schedule, employees see those updates instantly on the Kronos InTouch DX; when employees request time off through the device, managers are notified in Workforce Dimensions, according to the company.

Workforce Dimensions is mobile-native and accessible on smartphones and tablets.

Other features of InTouch DX include:

  • Smart Landing: Provides a personal welcome screen alerting users to unread messages, time-off approvals or requests, shift swaps and schedule updates.
  • Individual Mode: Provides one-click access to a user’s most frequent self-service tasks such as viewing their schedule, checking their accruals bank or transferring job codes.
  • My Time: Combines an individual’s timecard and weekly schedule, providing an overall view so that employees can compare their punches to scheduled hours to avoid errors.
  • Multilanguage support: Available for Dutch, English, French (both Canadian and French), German, Japanese, Spanish, Traditional and Simplified Chinese, Danish, Hindi, Italian, Korean, Polish and Portuguese.
  • Optional biometric authentication: Available as an option for an extra layer of security or in place of a PIN or a badge. The InTouch DX supports major employee ID badge formats, as well as PINs and employee ID numbers.
  • Date and time display: Features an always-on date and time display on screen.
  • Capacitive touchscreen: Utilizes capacitive technology used in consumer electronic devices to provide precision and reliability.

"Time clocks are being jolted to the front of workers' visibility with new platform capabilities that surpass the traditional time clock hidden somewhere in a corner. Biometrics, especially facial recognition, are key to accelerating and validating time punches," said Holger Mueller, vice president and principal analyst at Constellation Research.

When it comes to purchasing a product like this, Mueller said organizations should look into a software platform. “[Enterprises] need to get their information and processes on it, it needs to be resilient, sturdy, work without power, work without connectivity and gracefully reconnect when possible,” he said.

Other vendors in the human capital management space include Workday, Paycor and WorkForce Software. Workday's time-tracking and attendance feature works on mobile devices and provides real-time analytics to aid managers' decisions. Paycor's Time and Attendance tool offers a mobile punching feature that can verify punch locations and enable administrators to set location maps to ensure employees punch in at or near the correct work locations. WorkForce's Time and Attendance tool automates pay rules for hourly, salaried or contingent workforces.

The importance of AI for fraud prevention

Jumio, the identity verification technology vendor, released Jumio Go, a real-time, automated identity verification platform. Coming at a time when cybercriminals are becoming ever more technologically advanced, Jumio Go uses a combination of AI, optical character recognition and biometrics to automatically verify a user's identity in real time.

Jumio, founded in 2010, has long sold an AI for fraud prevention platform used by organizations in financial services, travel, gaming and retail industries. The Palo Alto, Calif., vendor’s new Jumio Go platform builds on its existing technologies, which include facial recognition and verification tools, while also simplifying them.

Jumio Go, launched Oct. 28, provides real-time identity verification, giving users results much faster than Jumio's flagship product, which takes 30 to 60 seconds to verify a user, according to Jumio. It also eliminates the need for human review, meaning the process of matching a real-time photo of a user's face to a saved photo is entirely automated. That speeds up the process and enables employees to take on other tasks, but also could make it somewhat less secure.

The new product accepts fewer ID documents than Jumio's flagship platform, but the tradeoff is the boost in real-time speed. Using natural language processing, Jumio's platforms can read through and extract relevant information from documents. The system scans that information for irregularities, such as odd wordings or misspellings, which could indicate fraud.
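One such irregularity check can be sketched as a near-miss spelling test: a token that almost, but not exactly, matches an expected document term (say, "passporrt" instead of "passport") is worth flagging. This is an illustrative sketch, not Jumio's actual method, and the expected-term list is invented.

```python
import difflib

# Hypothetical vocabulary of terms expected on a genuine ID document.
EXPECTED = ["passport", "republic", "surname", "nationality"]

def suspicious_tokens(text, cutoff=0.8):
    """Flag tokens that nearly, but not exactly, match an expected term."""
    flags = []
    for token in text.lower().split():
        if token in EXPECTED:
            continue  # exact matches are fine
        close = difflib.get_close_matches(token, EXPECTED, n=1, cutoff=cutoff)
        if close:
            flags.append((token, close[0]))  # near-miss: possible tampering
    return flags

print(suspicious_tokens("passporrt republic surname"))
```

A fuzzy match just under exact is exactly the signature a forged document tends to leave, which is why near-misses are more suspicious than outright unknown words.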

AI for fraud prevention in finance

For financial institutions, whose customers conduct much more business online, this type of fraud detection and identity verification technology is vital.

For combating fraud, “leveraging AI is critical,” said Amyn Dhala, global product lead at AI Express, Mastercard’s methodology for the deployment of AI that grew out of the credit card company’s 2017 acquisition of Brighterion.

To help stop fraudsters, financial institutions are using AI-powered security tools.

Through AI Express, Mastercard sells AI for fraud prevention tools, as well as AI-powered technologies, to help predict credit risk, manage network security and catch money-laundering.

AI, Dhala said in an interview at AI World 2019 in Boston, is “important to provide a better customer experience and drive profitability,” as well as to ensure customer safety.

The 9 to 5 fraudster

For financial institutions, blocking fraudsters is no simple task. Criminals intent on fraud are taking a professional approach to their work, working for certain hours during the week and taking weekends off, according to an October 2019 report from Onfido, a London-based vendor of AI-driven identity software.

Also, today’s fraudsters are highly technologically skilled, said Dan Drapeau, head of technology at Blue Fountain Media, a digital marketing agency owned by Pactera, a technology consulting and implementation firm based in China.

“You can always throw new technology at the problem, but cybercriminals are always going to do something new and innovative, and AI algorithms have to catch up to that,” Drapeau said. “Cybercriminals are always that one step ahead.”

“As good as AI and machine learning get, it still will always take time to catch up to the newest innovation from criminals,” he added.

Still, by using AI for fraud prevention, financial organizations can stop a good deal of fraud automatically, Drapeau said. For now, combining AI with manual work, such as checking or double-checking data and verification documents, works best, he said.
