Tag Archives: software

Startup Uplevel targets software engineering efficiency

Featuring a business plan that aims to increase software engineering efficiency and armed with $7.5 million in venture capital funding, Uplevel emerged from stealth Wednesday.

Based in Seattle and founded in 2018, Uplevel uses machine learning and organizational science to compile data about the daily activity of engineers in order to ultimately help them become more effective.

One of the main issues engineers face is a lack of time to do their job. They may be assigned a handful of tasks to carry out, but rather than being able to focus their attention on those tasks, they’re bombarded by messages or mired in an overabundance of meetings.

Uplevel aims to improve software engineering efficiency by monitoring messaging platforms such as Slack, collaboration software like Jira, calendar tools, and code repository software such as GitHub. It then compiles the data and is able to show how engineers are truly spending their time — whether they’re being allowed to do their jobs or kept from them through no fault of their own.
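
In concept, that roll-up is a matter of pulling activity events from each tool, bucketing them per engineer and contrasting interrupted time with heads-down time. The Python sketch below is purely illustrative and is not Uplevel’s implementation; the event format and the focus/interruption split are assumptions.

    from collections import defaultdict

    # Hypothetical activity events, loosely modeled on the sources named above
    # (Slack, Jira, calendars, GitHub). Field names and values are invented.
    EVENTS = [
        ("alice", "calendar", 90),   # minutes in meetings
        ("alice", "slack", 45),      # minutes handling messages
        ("alice", "github", 180),    # minutes of code and review activity
        ("bob", "calendar", 240),
        ("bob", "slack", 120),
        ("bob", "jira", 60),
    ]

    def summarize(events):
        """Roll up per-engineer minutes into focus vs. interruption buckets."""
        totals = defaultdict(lambda: {"focus": 0, "interruptions": 0})
        for engineer, source, minutes in events:
            bucket = "focus" if source in ("github", "jira") else "interruptions"
            totals[engineer][bucket] += minutes
        return dict(totals)

    if __name__ == "__main__":
        for engineer, t in summarize(EVENTS).items():
            total = t["focus"] + t["interruptions"]
            print(f"{engineer}: {t['focus'] / total:.0%} of tracked time was focus time")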

“I kept seeing pain around engineering effectiveness,” said Joe Levy, co-founder and CEO of Uplevel. “Engineers are often seen as artists, but what they’re trying to manage from a business perspective can be tough. If we can help engineers be more effective, organizations can be more effective without having to throw more bodies at the problem.”

Beyond arming the engineers themselves with data to show how they can be more effective, Uplevel aims to provide the leaders of engineering teams with the kind of information they previously lacked.

While sales and marketing teams have reams of data to drive the decision-making process — and present when asked for reports — engineering teams haven’t had the same kind of solid information.

“Sales, marketing, they have super detailed data that leads to understanding, but the head of engineering doesn’t have that same level of data,” Levy said. “There are no metrics of the same caliber [for engineers], but they’re still asked to produce the same kind of detailed projections.”

As Uplevel emerges from stealth, it faces a challenge common to all startups: demonstrating how it provides something different from what’s already on the market.

Without differentiation, its likelihood of success is diminished.

But according to Vanessa Larco, a partner at venture capital investment firm New Enterprise Associates with an extensive background in computer science, what Uplevel provides is something that indeed is unique.

“This is really interesting,” she said. “I haven’t seen anything doing this exact thing. The value proposition of Uplevel is compelling if it helps quantify some of the challenges faced by R&D teams to enable them to restructure their workload and processes to better enable them to reach their goals. I haven’t seen or used the product, but I can understand the need they are fulfilling.”

Similarly, Mike Leone, analyst at Enterprise Strategy Group, believes Uplevel is on to something new.

“There are numerous time-based tracking solutions for software engineering teams available today, but they lack a comprehensive view of the entire engineering ecosystem, including messaging apps, collaboration tools, code repository tools and calendars,” he said. “The level of intelligence Uplevel can provide based on analyzing all of the collected data will serve as a major differentiator for them.”

Uplevel developed from a combination of research done by organizational psychologist David Youssefnia and a winning hackathon concept from Dave Matthews, who previously worked at Microsoft and Hulu. The two began collaborating at Madrona Venture Labs in Seattle to hone their idea of how to improve software engineering efficiency before Levy, also formerly of Microsoft, and Ravs Kaur, whose previous experience includes time at Tableau and Microsoft, joined to help Uplevel go to market.

Youssefnia serves as chief strategy officer, Matthews as director of product management, and Kaur as CTO.

A sample chart from Uplevel displays the distractions faced by an organization’s software engineering team.

Uplevel officially formed in June 2018, attracted its first round of funding in September of that year and its second in April 2019. Leading investors include Norwest Venture Partners, Madrona Venture Group and Voyager Capital.

“Their fundamental philosophy was different from what we’d heard,” said Jonathan Parramore, senior data scientist at Avalara, a provider of automated tax compliance software and an Uplevel customer for about a year. “Engineering efficiency is difficult to measure, and they took a behavioral approach and looked holistically at multiple sources of data, then had the data science to meld it together. I’d say that everything they promised they would do, they have delivered.”

Still, Avalara would eventually like to see more capabilities as Uplevel matures.

“They have amazing reports they generate by looking at the data they have access to, but we’d like them to be able to create reports that are more in real time,” said Danny Fields, Avalara’s CTO and executive vice president of engineering. “That’s coming.”

Moving forward, while Uplevel doesn’t plan to branch out and offer a wide array of products, it is aiming to become an essential platform for all organizations looking to improve software engineering efficiency.

As it builds up its own cache of information about improving software engineering efficiency, it will be able to share that data — masking the identity of individual organizations — with customers so that they can compare the efficiency of their engineers with that of engineers at other organizations.

“The goal we’re focused on is to be the de facto platform that is helping engineers do their job,” Levy said. “We want to be a platform they can’t live without, that every big organization is reliant on.”

ZF Becomes a Provider of Software-Driven Mobility Services

“In the future, software will have one of the largest impacts on automotive system development and will be one of the key differentiating factors when it comes to realizing higher levels of automated driving functions. We want to help drive this trend forward. The collaboration with Microsoft will enable us to accelerate software integration and delivery significantly. This is important for our customers who appreciate agile collaboration and need short delivery cadences for software updates. Moreover, software will need to be developed when hardware is not yet available,” explained Dr Dirk Walliser, responsible for corporate research and development at ZF. ZF will then combine its enormous know-how as a system developer for the automotive industry with the added advantage of significantly higher speeds for software development.

“Digital capabilities will be key for automotive companies to grow and differentiate from their competition. DevOps empowers development and operations teams to optimize cross-team collaboration across automation, testing, monitoring and continuous delivery using agile methods. Microsoft is providing DevOps capabilities and sharing our experiences with ZF to help them become a software-driven mobility services provider”, said Sanjay Ravi, General Manager, Automotive Industry at Microsoft.

“cubiX”: Chassis of the Future from Code

At CES 2020, ZF will showcase its vision of software development with “cubiX”: a software component that gathers sensor information from the entire vehicle and prepares it for optimized control of active systems in the chassis, steering, brakes and propulsion. Following a vendor-agnostic approach, “cubiX” will support components from ZF as well as third-party components. “cubiX creates networked chassis functions thanks to software: By connecting multiple vehicle systems such as electric power steering, active rear axle steering, the sMOTION active damping system, driveline control and integrated brake control, ‘cubiX’ can optimize the behavior of the car from one central source. This enables a new level of vehicle control and thus can increase safety – for example in unfavorable road conditions or in emergency situations,” said Dr Dirk Walliser. ZF plans to start projects with its first customers in 2020 and will offer “cubiX” from 2023, either as part of an overall system or as an individual software component.

ZF at CES 2020

In addition, ZF will present its comprehensive systems for automated and autonomous driving at CES. They comprise sensors, computing power, software and actuators.

For passenger cars, Level 2+ systems pave the way for a safer and more comfortable means of private transportation. New mobility solutions like robo-taxis are designed to safely operate with ZF’s Level 4/5 systems. Additionally, ZF’s innovative integrated safety systems will be on display, like the Safe Human Interaction Cockpit. Innovative software utilizing artificial intelligence to provide new features and further-developed mobility offerings will also be highlighted.

Join ZF in Las Vegas

Press Conference: Monday, January 6, 2020, 8 AM (PST): Mandalay Bay, Lagoon E & F. Alternatively, you can watch the livestream at www.zf.com/CESlive

ZF Booth: LVCC, North Hall, booth 3931

Manual mainframe testing persists in the age of automation

A recent study indicates that although most IT organizations recognize software test automation benefits their app development lifecycle, the majority of mainframe testing is done manually, which creates bottlenecks in the implementation of modern digital services.

The bottom line is that mainframe shops that want to add new, modern apps need to adopt test automation and they need to do it quickly or get left behind in a world of potential backlogs and buggy code.

However, while it’s true that mainframe shops have been slow to implement automated testing, it’s mostly been because they haven’t really had to; most mainframe shops are in maintenance mode, said Thomas Murphy, an analyst at Gartner.

“There is a need to clean up crusty old code, but that is less automated ‘testing’ and more automated analysis like CAST,” he said. “In an API/service world, I think there is a decent footprint for service virtualization and API testing and services around this. There are a lot of boutique consulting firms that also do various pieces of test automation.”

Yet, Detroit-based mainframe software maker Compuware, which commissioned the study conducted by Vanson Bourne, a market research firm, found that as many as 86% of respondents to its survey said they find it difficult to automate the testing of mainframe code. Only 7% of respondents said they automate the execution of test cases on mainframe code and 75% of respondents said they do not have automated processes that test code at every stage of development.

The survey polled 400 senior IT leaders responsible for application development in organizations with a mainframe and more than 1,000 employees.

Overall, mainframe app developers — as opposed to those working in distributed environments — have been slow to automate mainframe testing of code, but demand for new, more complex applications continues to grow to the point where 92% of respondents said their organization’s mainframe teams spend much more time testing code than was required in the past. On average, mainframe app development teams spend 51% of their time on testing new mainframe applications, features or functionality, according to the survey.

Shift left

To remedy this, mainframe shops need to “shift left” and bring automated testing, particularly automated unit testing, into the application lifecycle earlier to avoid security risks and improve the quality of their software. But only 24% of organizations reported that they perform both unit and functional mainframe testing on code before it is released into production. Moreover, automation and the shift to Agile and DevOps practices are “crucial” to the effort to both cut the time required to build and improve the quality of mainframe software, said Chris O’Malley, CEO of Compuware.

Yet, 53% of mainframe application development managers said the time required to conduct thorough testing is the biggest barrier to integrating the mainframe into Agile and DevOps.

Mainframes continue to be viewed as the gold standard for data privacy, security and resiliency, though IT pros say there is not enough automated software testing for systems like the IBM z13, pictured here.

Eighty-five percent of respondents said they feel pressure to cut corners in testing that could result in compromised code quality and bugs in production code. Fifty percent said they fear cutting corners could lead to potential security flaws, 38% said they are concerned about disrupting operations and 28% said they are most concerned about the potential negative impact on revenue.

In addition, 82% of respondents said that the paucity of automated test cases could lead to poor customer experiences, and 90% said that automating more test cases could be the single most important factor in their success, with 87% noting that it will help organizations overcome the shortage of skilled mainframe app developers.

Automated mainframe testing tools in short supply

Truth be told, there are fewer tools available to automate the testing of mainframe software than there are for distributed platforms, and there is not much to be found in the open source market.

And though IBM — and its financial results after every new mainframe introduction — might beg to differ, many industry observers, like Gartner’s Murphy, view the mainframe as dead.

“The mainframe isn’t where our headspace is at,” Murphy said. “We use that new mainframe — the cloud — now. There isn’t sufficient business pressure or mandate. If there were a bunch of recurring issues, if the mainframe was holding us back, then people would address the problem. Probably by shooting the mainframe and moving elsewhere.”

Outside of the mainframe industry, companies such as Parasoft, SmartBear and others regularly innovate and deliver new automated testing functionality for developers in distributed, web and mobile environments. For instance, Parasoft earlier this fall introduced Selenic, its AI-powered automated testing tool for Selenium. Selenium is an automated testing suite for web apps that has become a de facto standard for testing user interfaces. Parasoft’s Selenic integrates into existing CI/CD pipelines to ease the way for organizations that employ DevOps practices. Selenic’s AI capabilities provide recommendations that automate the “self-healing” of any broken Selenium scripts and provide deep code analysis to users.
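
For context, a bare-bones Selenium script written with the Python bindings looks like the sketch below; the URL and element locators are placeholders, and locators like these are exactly the kind of brittle step that self-healing tools aim to repair when the UI changes.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Minimal Selenium sketch: open a page, exercise the UI, assert on the result.
    # The URL and locators below are placeholders, not a real application under test.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        # Hard-coded locators break when the page structure changes, which is
        # the failure mode AI-assisted recommendations are meant to address.
        assert "dashboard" in driver.current_url
    finally:
        driver.quit()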

For its part, Gartner named SmartBear, another prominent test automation provider, as a leader in the 2019 Gartner Magic Quadrant for Software Test Automation. Among the highlights of what it has done for developers in 2019, the company expanded into CI/CD pipeline integration for native mobile test automation with the acquisition of Bitbar, added new tools for behavior-driven development and introduced testing support for GraphQL.

RPA in manufacturing increases efficiency, reduces costs

Robotic process automation software is increasingly being adopted by organizations to improve processes and make operations more efficient.

In manufacturing, the use cases for RPA range from reducing errors in payroll processes to eliminating unneeded processes before undergoing a major ERP system upgrade.

In this Q&A, Shibaji Das, global director of finance and accounting and supply chain management for UiPath, discusses the role of RPA in manufacturing ERP systems and how it can help improve efficiency in organizations.

UiPath, based in New York, got its start in 2005 as DeskOver, which made automation scripts. In 2012, the company relaunched as UiPath and shifted its focus to RPA. UiPath markets applications that enable organizations to examine processes and create bots, or software robots that automate repetitive, rules-based manufacturing and business processes. RPA bots are usually infused with AI or machine learning so that they can take on more complex tasks and learn as they encounter more processes.

What is RPA and how does it relate to ERP systems?

Shibaji Das: When you’re designing a city, you put down the freeways. Implementing RPA is a little like putting down those freeways with major traffic flowing through, with RPA as the last mile automation. Let’s say you’re operating on an ERP, but when you extract information from the ERP, you still do it manually to export to Excel or via email. A platform like [UiPath] forms a glue between ERP systems and automates repetitive rules-based stuff. On top of that, we have AI, which gives brains to the robot and helps it understand documents, processes and process-mining elements.
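
The last-mile step Das describes, pulling data out of an ERP and pushing it into Excel, is the kind of repetitive, rules-based task a bot scripts away. Below is a hypothetical Python sketch; the fetch_open_invoices function stands in for whatever ERP export or API an organization actually uses and is not a UiPath API.

    from openpyxl import Workbook

    def fetch_open_invoices():
        """Stand-in for an ERP extract; a real bot would call the ERP's API or drive its UI."""
        return [
            {"invoice": "INV-1001", "vendor": "Acme", "amount": 1250.00},
            {"invoice": "INV-1002", "vendor": "Globex", "amount": 430.50},
        ]

    def export_to_excel(rows, path="open_invoices.xlsx"):
        """Automate the manual export-to-Excel step described above."""
        wb = Workbook()
        ws = wb.active
        ws.append(["Invoice", "Vendor", "Amount"])
        for row in rows:
            ws.append([row["invoice"], row["vendor"], row["amount"]])
        wb.save(path)

    if __name__ == "__main__":
        export_to_excel(fetch_open_invoices())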

Why is RPA important for the manufacturing industry?

Shibaji Das, UiPath

Das: When you look at the manufacturing industry, the challenges are always the cost pressure of having lower margins or the resources to get innovation funds to focus on the next-generation of products. Core manufacturing is already largely automated with physical robots; for example, the automotive industry where robots are on the assembly lines. The question is how can RPA enable the supporting functions of manufacturing to work more efficiently? For example, how can RPA enable upstream processes like demand planning, sourcing and procurement? Then for downstream processes when the product is ready, how do you look at the distribution channel, warehouse management and the logistics? Those are the two big areas where RPA plus AI play an important role.

What are some steps companies need to take when implementing RPA for manufacturing?

Das: Initially, there will be a process mining element or process understanding element, because you don’t want to automate bad processes. That’s why having a thought process around making the processes efficient first is critical for any bigger digital transformation. Once that happens, and you have more efficient processes running, which will integrate with multiple ERP systems or other legacy systems, you could go to task automation. 

What are some of the ways that implementing RPA will affect jobs in manufacturing? Will it lead to job losses if more processes are automated?

Das: Will there be a change in jobs as we know them? Yes, but at the same time, there’s a very positive element that will create a net positive impact from a jobs perspective, experience perspective, cost, and the overall quality of life perspective. For example, the moment computers came in, someone’s job was to push hundreds of piles of paper, but now, because of computing, they don’t have to do that. Does that mean there was a negative effect? Probably not, in the long run. So, it’s important to understand that RPA — and RPA that’s done in collaboration with AI — will have a positive impact on the job market in the next five to 10 years.

Can RPA help improve operations by eliminating mistakes that are common in manual processes?

Das: Robots do not make mistakes unless you code it wrong at the beginning, and that’s why governance is so important. Robots are trained to do certain things and will do them correctly every time — 100%, 24/7 — without needing coffee breaks.

What are some of the biggest benefits of RPA in manufacturing?

Das: From an ROI perspective, one benefit of RPA is the cost element because it increases productivity. Second is revenue; for example, at UiPath, we are using our own robots to manage our cash application process, which has impacted revenue collection [positively]. Third is around speed, because what an individual can do, a robot can do much faster. However, this depends on the system, as a robot will only operate as fast as the mother system of the ERP system works — with accuracy, of course. Last, but not least, the most important part is experience. RPA plus AI will enhance the experience of your employees, of your customers and vendors. This is because the way you do business becomes easier, more user-friendly and much more nimble as you get rid of the most common frustrations that keep coming up, like a vendor not getting paid.

What’s the future of RPA and AI in organizations?

Das: The vision of Daniel Dines [UiPath’s co-founder and CEO] is to have one robot for every individual. It’s similar to every individual having access to Excel or Word. We know the benefits of the usage of Excel or Word, but RPA access is still a little technical and there’s a bit of coding involved. But UiPath is focused on making this as code free as possible. If you can draw a flowchart and define a process clearly through click level, our process mining tool can observe it and create an automation for you without any code. For example, I have four credit cards, and every month, I review it and click the statement for whatever the amount is and pay it. I have a bot now that goes in at the 15th of the month and logs into the accounts and clicks through the process. This is just a simple example of how practical RPA could become.

New capabilities added to Alfresco Governance Services

Alfresco Software introduced new information governance capabilities this week to its Digital Business Platform through updates to Alfresco Governance Services.

The updates include new desktop synchronization, federation services and AI-assisted legal holds features.

“In the coming year, we expect many organizations to be hit with large fines as a result of not meeting regulatory standards for data privacy, e.g., the European GDPR and California’s CCPA. We introduced these capabilities to help our customers guarantee their content security and circumvent those fines,” said Tara Combs, information governance specialist at Alfresco.

Federation Services enables cross-database search

Federation Services is a new addition to Alfresco Governance Services. Users can search, view and manage content from Alfresco and other repositories, such as network file shares, OpenText, Documentum, Microsoft SharePoint and Dropbox.

Users can also search across different databases with the application without having to migrate content. Federation Services provides one user interface for users to manage all the information resources in an organization, according to the company.

Organizations can also store content in locations outside of the Alfresco platform.

Legal holds feature provides AI-assisted search for legal teams

The legal holds feature provides document search and management capabilities that help legal teams identify relevant content for litigation purposes. Alfresco’s tool now uses AI to discover relevant content and metadata, according to the company.

“AI is offered in some legal discovery software systems, and over time all these specialized vendors will leverage AI and machine learning,” said Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis. He added that the AI-powered feature of Alfresco Governance Services is one of the first such offerings from a more general information management vendor.

“It is positioned to augment the specialized vendors’ work, essentially curating and capturing relevant bodies of information for deeper analysis.”

Desktop synchronization maintains record management policies

Another new feature added to Alfresco Governance Services synchronizes content between a repository and a desktop, along with the records management policies associated with that content, according to the company.

With the desktop synchronization feature, users can expect the same records management policies whether they access a document on their desktop computer or view it from the source repository, according to the company.

When evaluating a product like this in the market, Pelz-Sharpe said the most important feature a buyer should look for is usability. “AI is very powerful, but less than useless in the wrong hands. Many AI tools expect too much of the customer — usability and recognizable, preconfigured features that the customer can use with little to no training are essential.”

The new updates are available as of Dec. 3. There is no price difference between the updated version of Alfresco Governance Services and the previous version. Customers who already had a subscription can upgrade as part of their subscription, according to the company.

According to Pelz-Sharpe, Alfresco has traditionally competed against enterprise content management and business process management vendors. It has pivoted during recent years to compete more directly with PaaS competitors, offering a content- and process-centric platform upon which its customer can build their own applications. In the future, the company is likely to compete against the likes of Oracle and IBM, he said.

Hyper-V Virtual CPUs Explained

Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.

Note: This article was originally published in February 2014. It has been fully updated to be relevant as of November 2019.

Introduction to Virtual CPUs

Like all other virtual machine “hardware”, virtual CPUs do not exist. The hypervisor uses the physical host’s real CPUs to create virtual constructs to present to virtual machines. The hypervisor controls access to the real CPUs just as it controls access to all other hardware.

Hyper-V Never Assigns Physical Processors to Specific Virtual Machines

Make sure that you understand this section before moving on. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. You cannot assign a physical core to a VM at all. So, does this mean that you just can’t meet that vendor request to dedicate a core or two? Well, not exactly. More on that toward the end.

Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:

Task Manager

Nothing fancy, right? Looks familiar, doesn’t it?

Now, back when computers never, or almost never, shipped as multi-CPU, multi-core boxes, we all knew that computers couldn’t really multitask. They had one CPU and one core, so there was only one possible active thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now. You had a long list of running processes, each with a metric indicating what percentage of the CPU’s time it was using.

Then, as now, each line item you see represents a process (or, new in the recent Task Manager versions, a process group). A process consists of one or more threads. A thread is nothing more than a sequence of CPU instructions (keyword: sequence).

What happens is that (in Windows, this started in 95 and NT) the operating system stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. We call this pre-emptive, meaning that the operating system decides when to suspend the current thread and switch to another. You can set priorities that affect how a process rates, but the OS is in charge of thread scheduling.
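
A toy round-robin simulation makes the idea concrete: the scheduler repeatedly suspends the running thread after a time slice and resumes the next one in line. This is a deliberately simplified Python sketch, not how the Windows scheduler is actually implemented; real schedulers also weigh priorities, cores and much more.

    from collections import deque

    # Toy pre-emptive, round-robin scheduler: each "thread" is just a counter of
    # remaining work units.
    def run_scheduler(threads, time_slice=2):
        queue = deque(threads.items())
        while queue:
            name, remaining = queue.popleft()    # pick the next runnable thread
            worked = min(time_slice, remaining)  # run it for at most one time slice
            remaining -= worked
            print(f"ran {name} for {worked} unit(s), {remaining} left")
            if remaining:                        # pre-empt: save state, back of the queue
                queue.append((name, remaining))

    run_scheduler({"browser": 3, "editor": 5, "indexer": 1})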

Today, almost all computers have multiple cores, so Windows can truly multi-task.

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads. Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads.

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way, something like this:

Hypervisor Thread Scheduling

Of course, a real system will always have more than nine threads running. The thread scheduler will place them all into a queue.

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. You cannot do that in Hyper-V. Doing so would have questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world or the Hyper-V world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level. It doesn’t really let the guests have any input. Guest operating systems schedule the threads from the processes that they own. When they choose a thread to run, they send it to a virtual CPU. Hyper-V takes it from there.

The image that I presented above is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas. Few administrators require that level of depth, and exploring it here would take this article far afield.

The first thing that matters: affinity aside, you never know where any given thread will execute. A thread that was paused to yield CPU time to another thread may very well be assigned to another core when it is resumed. Did you ever wonder why an application consumes right at 50% of a dual-core system and each core looks like it’s running at 50% usage? That behavior indicates a single-threaded application. Each time the scheduler executes it, it consumes 100% of the core that it lands on. The next time it runs, it stays on the same core or goes to the other core. Whichever core the scheduler assigns it to, it consumes 100%. When Task Manager aggregates its performance for display, that’s an even 50% utilization — the app uses 100% of 50% of the system’s capacity. Since the core not running the app remains mostly idle while the other core tops out, they cumulatively amount to 50% utilization for the measured time period. With the capabilities of newer versions of Task Manager, you can now instruct it to show the separate cores individually, which makes this behavior far more apparent.
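
The arithmetic behind that even 50% is easy to reproduce. The short sketch below uses invented per-core samples to show how a single busy thread bouncing between two cores averages out to 50% on each core and 50% overall.

    # One single-threaded app on a dual-core box: in any given sample it fully
    # occupies whichever core it landed on while the other core sits idle.
    samples = [
        {"core0": 100, "core1": 0},
        {"core0": 0, "core1": 100},
        {"core0": 100, "core1": 0},
        {"core0": 0, "core1": 100},
    ]

    per_core_avg = {
        core: sum(s[core] for s in samples) / len(samples) for core in ("core0", "core1")
    }
    overall = sum(per_core_avg.values()) / len(per_core_avg)
    print(per_core_avg)  # {'core0': 50.0, 'core1': 50.0}
    print(overall)       # 50.0, i.e. 100% of 50% of the system's capacity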

Now we can move on to a look at the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs Assigned to a VM Really Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have logical processors in your host.

Invalid CPU Count

So, a virtual machine’s vCPU count means this: the maximum number of threads that the VM can run at any given moment. I can’t set the virtual machine from the screenshot to have more than two vCPUs because the host only has two logical processors. Therefore, there is nowhere for a third thread to be scheduled. But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads to Hyper-V for scheduling. The virtual machine’s thread scheduler (the supervisor) will keep its other threads in a queue, waiting for their turn.

But Can’t I Assign More Total vCPUs to all VMs than Physical Cores?

Yes, the total number of vCPUs across all virtual machines can exceed the number of physical cores in the host. It’s no different than the fact that I’ve got 40+ processes “running” on my dual-core laptop right now. I can only run two threads at a time, but I will always have far more than two threads scheduled. Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people never see a need to think through what’s going on. Your VMs (supervisors) will bubble up threads to run and Hyper-V (hypervisor) will schedule them the way (mostly) that Windows has been scheduling them ever since it outgrew cooperative scheduling in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It’s wasteful, but you can do it. I could run my current desktop configuration on a quad 16 core server and I’d never get any contention. But, I probably wouldn’t see much performance difference. Why? Because almost all my threads sit idle almost all the time. If something needs 0% CPU time, what does giving it its own core do? Nothing, that’s what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because no one really has any idea. The scheduler will evenly distribute threads across the available cores. So the number of physical CPUs needed doesn’t depend on how many virtual CPUs there are. It depends entirely on what the operating threads need. And, even if you’ve got a bunch of heavy threads going, that doesn’t mean their systems will die as they get pre-empted by other heavy threads. The necessary vCPU/pCPU ratio depends entirely on the CPU load profile and your tolerance for latency. Multiple heavy loads require a low ratio. A few heavy loads work well with a medium ratio. Light loads can run on a high ratio system.

I’m going to let you in on a dirty little secret about CPUs: Every single time a thread runs, no matter what it is, it drives the CPU at 100% (power-throttling changes the clock speed, not workload saturation). The CPU is a binary device; it’s either processing or it isn’t. When your performance metric tools show you that 100% or 20% or 50% or whatever number, they calculate it from a time measurement. If you see 100%, that means that the CPU was processing during the entire measured span of time. 20% means it was running a process 1/5th of the time and 4/5th of the time it was idle. This means that a single thread doesn’t consume 100% of the CPU, because Windows/Hyper-V will pre-empt it when it wants to run another thread. You can have multiple “100%” CPU threads running on the same system. Even so, a system can only act responsively when it has some idle time, meaning that most threads will simply let their time slice go by. That allows other threads to access cores more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because threads must wait. Using additional cores will address this concern as it spreads the workload out.

The upshot: if you want to know how many physical cores you need, then you need to know the performance profile of your actual workload. If you don’t know, then start from the earlier 8:1 or 12:1 recommendations.

What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you have a CPU contention problem to solve. Let the thread scheduler do its job. Just like setting CPU priorities on threads in Windows can cause more problems than they solve, fiddling with hypervisor vCPU settings can make everything worse.

Let’s look at the config screen:

vCPU Settings

The first group of boxes is the reserve. The first box represents the percentage of the VM’s allowed number of vCPUs to set aside. Its actual meaning depends on the number of vCPUs assigned to the VM. The second box, the grayed-out one, shows the total percentage of host resources that Hyper-V will set aside for this VM. In this case, I have a 2 vCPU system on a dual-core host, so the two boxes will be the same. If I set 10 percent reserve, that’s 10 percent of the total physical resources. If I drop the allocation down to 1 vCPU, then 10 percent reserve becomes 5 percent physical. The second box will be auto-calculated as you adjust the first box.
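
Put another way, the grayed-out box is simply the first box scaled by the VM’s share of the host’s logical processors. A quick Python sketch of that arithmetic, using the same numbers as above plus the quad-core example discussed further down:

    def host_reserve_percent(vm_reserve_percent, vm_vcpus, host_logical_processors):
        """Convert a per-VM reserve percentage into the share of total host CPU it represents."""
        return vm_reserve_percent * vm_vcpus / host_logical_processors

    # 10% reserve on a 2 vCPU VM hosted on 2 logical processors: 10% of the host
    print(host_reserve_percent(10, 2, 2))   # 10.0
    # Drop the VM to 1 vCPU and the same 10% reserve is only 5% of the host
    print(host_reserve_percent(10, 1, 2))   # 5.0
    # The "dedicate one core" request: 100% of 1 vCPU on a quad-core host
    print(host_reserve_percent(100, 1, 4))  # 25.0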

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine won’t start. But, if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). But, if a VM with a 20% reserve is sitting idle, then other processes are allowed to use up to 100% of the available processor power… until such time as the VM with the reserve starts up. Then, once the CPU capacity is available, the reserved VM will be able to dominate up to 20% of the total computing power. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. You enter whatever number in the top box that makes the second box show the equivalent processor power of however many pCPUs/cores the vendor thinks they need. If they want one whole CPU and you have a quad-core host, then make the second box show 25%. Do you really have to? Well, I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers say they can do is because of software vendors’ arbitrary rules, but that’s a rant for another day.

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar. The two boxes work together in the same way as the reserve boxes.

The final box is the priority weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they’re all beneath all the VMs that have 200 and above all the VMs that have 50, so on and so forth. If you’re going to tinker, weight is safer than fiddling with reserves because you can’t ever prevent a VM from starting by changing relative weights. What the weight means is that when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher-weighted VMs go first.

But What About Hyper-Threading?

Hyper-Threading allows a single core to operate two threads at once — sort of. The core can only actively run one of the threads at a time, but if that thread stalls while waiting for an external resource, then the core operates the other thread. You can read a more detailed explanation below in the comments section, from contributor Jordan. AMD has recently added a similar technology.

To kill one major misconception: Hyper-Threading does not double the core’s performance ability. Synthetic benchmarks show a high-water mark of a 25% improvement. More realistic measurements show closer to a 10% boost. An 8-core Hyper-Threaded system does not perform as well as a 16-core non-Hyper-Threaded system. It might perform almost as well as a 9-core system.
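
Restated as arithmetic, an 8-core Hyper-Threaded part with a realistic 10% uplift behaves roughly like a 9-core part rather than a 16-core one; the snippet below simply recomputes the estimates above.

    physical_cores = 8
    synthetic_uplift = 0.25   # best-case benchmark gain cited above
    realistic_uplift = 0.10   # more typical gain cited above

    print(physical_cores * (1 + synthetic_uplift))  # 10.0 core-equivalents at best
    print(physical_cores * (1 + realistic_uplift))  # 8.8, "almost as well as a 9-core system"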

With the so-called “classic” scheduler, Hyper-V places threads on the next available core as described above. With the core scheduler, introduced in Hyper-V 2016, Hyper-V now prevents threads owned by different virtual machines from running side-by-side on the same core. It will, however, continue to pre-empt one virtual machine’s threads in favor of another’s. We have an article that deals with the core scheduler.

Making Sense of Everything

I know this is a lot of information. Most people come here wanting to know how many vCPUs to assign to a VM or how many total vCPUs to run on a single system.

Personally, I assign 2 vCPUs to every VM to start. That gives it at least two places to run threads, which gives it responsiveness. On a dual-processor system, it also ensures that the VM automatically has a presence on both NUMA nodes. I do not assign more vCPU to a VM until I know that it needs it (or an application vendor demands it).

As for the ratio of vCPU to pCPU, that works mostly the same way. There is no formula or magic knowledge that you can simply apply. If you plan to virtualize existing workloads, then measure their current CPU utilization and tally it up; that will tell you what you need to know. Microsoft’s Assessment and Planning Toolkit might help you. Otherwise, you simply add resources and monitor usage. If your hardware cannot handle your workload, then you need to scale out.


New DataCore vFilO software pools NAS, object storage

DataCore Software is expanding beyond block storage virtualization with new file and object storage capabilities for unstructured data.

Customers can use the new DataCore vFilO to pool and centrally manage disparate file servers, NAS appliances and object stores located on premises and in the cloud. They also have the option to install vFilO on top of DataCore’s SANsymphony block virtualization software, now rebranded as DataCore SDS.

DataCore CMO Gerardo Dada said customers that used the block storage virtualization asked for similar capabilities on the file side. Bringing diverse systems under central management can give them a consistent way to provision, encrypt, migrate and protect data and to locate and share files. Unifying the data also paves the way for customers to use tools such as predictive analytics, Dada said.

Global namespace

The new vFilO software provides a scale-out file system for unstructured data and virtualization technology to abstract existing storage systems. A global namespace facilitates unified access to local and cloud-based data through standard NFS, SMB, and S3 protocols. On the back end, vFilO communicates with the file systems through parallel NFS to speed response times. The software separates metadata from the data to facilitate keyword queries, the company said.

Users can set policies at volume or more granular file levels to place frequently accessed data on faster storage and less active data on lower cost options. They can control the frequency of snapshots for data protection, and they can archive data in on-premises or public cloud object storage in compressed and deduplicated format to reduce costs, Dada said.
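
As a rough illustration of what such a placement policy amounts to, a rule might look at how recently a file was accessed and choose a tier accordingly. The Python sketch below is hypothetical and is not vFilO’s actual policy engine or syntax; the 30-day threshold is an assumption.

    from datetime import datetime, timedelta

    def choose_tier(last_accessed, hot_window_days=30):
        """Hypothetical placement rule: recently used files stay on fast storage,
        cold data moves to cheaper object storage (where, as described above, it
        could also be compressed and deduplicated)."""
        age = datetime.now() - last_accessed
        if age <= timedelta(days=hot_window_days):
            return "fast-tier"
        return "object-archive"

    print(choose_tier(datetime.now() - timedelta(days=3)))    # fast-tier
    print(choose_tier(datetime.now() - timedelta(days=120)))  # object-archive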

DataCore’s vFilO supports automated load balancing across the diverse filers, and users can add nodes to scale out capacity and performance. The minimum vFilO configuration for high availability is four virtual machines, with one node managing the metadata services and the other handling the data services, Dada said.

New DataCore vFilO software can pool and manage disparate file servers, NAS appliances and object stores.

File vs. object storage

Steven Hill, a senior analyst of storage technologies at 451 Research, said the industry in general would need to better align file and object storage moving forward to address the emerging unstructured data crisis.

“Most of our applications still depend on file systems, but metadata-rich object is far better suited to longer-term data governance and management — especially in the context of hybrid IT, where much of the data resident in the cloud is based on efficient and reliable objects,” Hill said.

“File systems are great for helping end users remember what their data is and where they put it, but not very useful for identifying and automating policy-based management downstream,” Hill added. “Object storage provides a highly-scalable storage model that’s cloud-friendly and supports the collection of metadata that can then be used to classify and manage that data over time.”

DataCore expects the primary use cases for vFilO to include consolidating file systems and NAS appliances. Customers can use vFilO to move unused or infrequently accessed files to cheaper cloud object storage to free up primary storage space. They can also replicate files for disaster recovery.

Eric Burgener, a research vice president at IDC, said unstructured data is a high growth area. He predicts vFilO will be most attractive to the company’s existing customers. DataCore claims to have more than 10,000 customers.

“DataCore customers already liked the functionality, and they know how to manage it,” Burgener said. “If [vFilO] starts to get traction because of its ease of use, then we may see more pickup on the new customer side.”

Camberley Bates, a managing director and analyst at Evaluator Group, expects DataCore to focus on the media market and other industries needing high performance.

Pricing for vFilO

Pricing for vFilO is based on capacity consumption, with a 10 TB minimum order. One- and three-year subscriptions are available, with different pricing for active and inactive data. A vFilO installation with 10 TB to 49 TB of active data costs $345 per TB for a one-year subscription and $904 per TB for a three-year subscription. For the same capacity range of inactive data, vFilO would cost $175 per TB for a one-year subscription and $459 per TB for a three-year subscription. DataCore offers volume discounts for customers with higher capacity deployments.
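
Using the published figures, a back-of-the-envelope annual cost for a given mix of active and inactive data can be worked out as below; the sketch assumes the one-year rates for the 10 TB to 49 TB band and ignores volume discounts.

    # One-year subscription list prices quoted above for the 10-49 TB band, per TB.
    ACTIVE_PER_TB = 345
    INACTIVE_PER_TB = 175
    MINIMUM_TB = 10

    def vfilo_annual_cost(active_tb, inactive_tb):
        """Rough annual estimate; assumes the 10-49 TB price band and no volume discounts."""
        if active_tb + inactive_tb < MINIMUM_TB:
            raise ValueError("vFilO has a 10 TB minimum order")
        return active_tb * ACTIVE_PER_TB + inactive_tb * INACTIVE_PER_TB

    print(vfilo_annual_cost(20, 15))  # 20*345 + 15*175 = 9525 (dollars per year)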

The Linux-based vFilO image can run on a virtual machine or on commodity bare-metal servers. Dada said DataCore recommends separate hardware for the differently architected vFilO and SANsymphony products to avoid resource contention. Both products have plugins for Kubernetes and other container environments, Dada added.

The vFilO software became available this week as a software-only product, but Dada said the company could add an appliance if customers and resellers express enough interest. DataCore launched a hyper-converged infrastructure appliance for SANsymphony over the summer. 

DataCore incorporated technology from partners and open source projects into the new vFilO software, but Dada declined to identify the sources.

Amazon Chime gets integration with Dolby Voice Room

Amazon Web Services has integrated its Amazon Chime online meetings software with a video hardware kit for small and midsize conference rooms made by Dolby Laboratories.

Businesses using Amazon Chime could already connect the app to software-agnostic video hardware using H.323 and SIP. But standards-based connections are generally difficult to set up and use.

The Dolby partnership gives Chime users access to video gear that is preloaded with the AWS software. However, Dolby only entered the video hardware market last year, so few Chime customers will be able to take advantage of the integration without purchasing new equipment.

Amazon Chime is far behind competing services, such as Zoom and Microsoft Teams. Both already have partnerships with leading makers of conference room hardware, such as Poly and Logitech. Also, Chime still lacks support for a room system for large meeting spaces and boardrooms.

Online meetings software must integrate with room systems to effectively compete, said Irwin Lazar, analyst at Nemertes Research. “So the Dolby announcement represents a much-needed addition to their capabilities.”

Dolby Voice Room includes a camera and a separate speakerphone with a touchscreen for controlling a meeting. The audio device’s microphone suppresses background noise and compensates for quiet and distant voices.

AWS recently expanded Chime to include a bare-bones service for calling, voicemail and SMS messaging. The vendor also earlier this year released a service for connecting on-premises PBXs to the internet using SIP.

Unlike other cloud-based calling and meeting providers, AWS charges customers based on how much they use Chime. However, Chime still trails more established offerings in the video conferencing market.

“Customers I’ve spoken to like their pay-per-use pricing model,” Lazar said. “But at this point, I don’t yet see them making a major push to challenge Microsoft, Cisco or Zoom.”

In a recent Nemertes Research study, 8% of organizations using a video conferencing service were Chime customers, seventh behind offerings from Microsoft, Cisco and others. However, only 0.6% said Chime was the primary app they used — the smallest percentage of any vendor.

Adoption of Chime has been “pretty sluggish,” said Zeus Kerravala, principal analyst at ZK Research. “But Amazon can play the long game here.” Launched in February 2017, Chime is a relatively insignificant project of AWS, a division of Amazon that generated more than $25 billion in revenue last fiscal year.

Microsoft to apply CCPA protections to all US customers

Microsoft is taking California’s new data privacy law nationwide.

The software giant this week said it will honor the California Consumer Privacy Act (CCPA) throughout the United States. When the CCPA goes into effect on Jan. 1, 2020, companies in California will be required to provide people with the option to stop their personal information from being sold, and will generally require that companies are transparent about data collection and data use.

The CCPA applies to companies that do business in California, collect customers’ personal data and meet one of the following requirements: have annual gross revenue of more than $25 million; buy, receive, sell or share personal data of 50,000 or more consumers, devices or households for commercial purposes; or earn 50% or more of their annual revenues from selling consumers’ personal data.
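
Expressed as a simple check, those applicability thresholds read roughly as follows; this is a paraphrase of the criteria above for illustration, not legal guidance.

    def ccpa_applies(does_business_in_california, collects_personal_data,
                     annual_gross_revenue, consumer_records_handled,
                     share_of_revenue_from_selling_data):
        """Paraphrase of the CCPA thresholds described above; not legal advice."""
        if not (does_business_in_california and collects_personal_data):
            return False
        return (
            annual_gross_revenue > 25_000_000
            or consumer_records_handled >= 50_000
            or share_of_revenue_from_selling_data >= 0.5
        )

    # A company with $30M revenue that does business in California and collects personal data:
    print(ccpa_applies(True, True, 30_000_000, 10_000, 0.0))  # True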

Julie Brill, Microsoft’s corporate vice president for global privacy and regulatory affairs and Chief Privacy Officer, announced her company’s plans to go a step further and apply the CCPA’s data privacy protections to all U.S. customers — not just those in California.

“We are strong supporters of California’s new law and the expansion of privacy protections in the United States that it represents. Our approach to privacy starts with the belief that privacy is a fundamental human right and includes our commitment to provide robust protection for every individual,” Brill wrote in a blog post. “This is why, in 2018, we were the first company to voluntarily extend the core data privacy rights included in the European Union’s General Data Protection Regulation (GDPR) to customers around the world, not just to those in the EU who are covered by the regulation. Similarly, we will extend CCPA’s core rights for people to control their data to all our customers in the U.S.”

Brill added that Microsoft is working with its enterprise customers to assist them with CCPA compliance. “Our goal is to help our customers understand how California’s new law affects their operations and provide the tools and guidance they will need to meet its requirements,” she said.

Microsoft did not specify when or how it will apply the CCPA to all U.S. customers. In recent years the company has introduced several privacy-focused tools and features designed to give customers greater control over their personal data.

Fatemeh Khatibloo, vice president and principal analyst at Forrester Research, said Microsoft has an easier path to becoming CCPA compliant because of its early efforts to broadly implement GDPR protections.

“They’re staying very true to all the processes they went through under GDPR,” she said. “CCPA has some differences with GDPR. Namely, it’s got some requirements to verify the identity of people who want to exercise their rights. GDPR is still based on an opt-in framework rather than an opt-out one; it requires consent if you don’t have another legal basis for processing somebody’s data. The CCPA is still really about giving you the opportunity to opt out. It’s not a consent-based framework.”

Khatibloo also noted that Microsoft was supportive of the CCPA early on, and that Brill, who formerly served as commissioner of the U.S. Federal Trade Commission under the Obama administration, has a strong history on data privacy.

“She understands the extensive need for a comprehensive privacy bill in the U.S., and I think she also understands that that’s probably not going to happen in the next year,” Khatibloo said. “Instead of waiting for a patchwork of laws to turn up, I think she’s taking a very proactive move to say, ‘We’re going to abide by this particular set of rules, and we’re going to make it available to everybody.’ The other really big factor here is, who wants to be the company that says its New York customers don’t have the same rights that its California customers do?”

Rebecca Herold, an infosec and privacy consultant as well as CEO of The Privacy Professor consultancy, argued that while CCPA does a good job addressing the “breadth of privacy issues for individuals who fall under the CCPA definition of a ‘California consumer,'” it falls short in multiple areas. To name a few criticisms, she pointed out that it doesn’t apply to organizations with under $25 million in revenue, it does not apply to all types of data or individuals such as employees, and that many of its requirements can come across as confusing.

But Herold said Microsoft’s move to apply CCPA for all 50 states makes sense and it’s something she recommends to her clients when consulting on new regional regulations. “When looking at implementing a wide-ranging law like CCPA, it would be much more simplified to just follow it for all their millions of customers, and not try to parse out the California customers from all others,” she said via email. “It is much more efficient and effective to simplify data security and privacy practices by treating all individuals within an organization’s database equally, meeting a baseline of actions that fit all legal requirements across the board. This is a smart and savvy business leadership move.”

Mike Bittner, associate director of digital security and operations for advertising security vendor The Media Trust, agreed that Microsoft’s move isn’t surprising.  

“For a large company like Microsoft that serves consumers around the world, simplifying regulatory compliance by applying the same policies across an entire geography makes a lot of sense, because it removes the headaches of applying a hodgepodge of state-level data privacy laws,” he said in an email. “Moreover, by using the CCPA — the most robust U.S. data privacy law to date — as the standard, it demonstrates the company’s commitment to protecting consumers’ data privacy rights.”

Herold added that the CCPA will likely become the de facto data privacy law for the U.S. in the foreseeable future because Congress doesn’t appear to be motivated to pass any federal privacy laws.

Brill appeared to agree.

“CCPA marks an important step toward providing people with more robust control over their data in the United States,” she wrote in her blog post. “It also shows that we can make progress to strengthen privacy protections in this country at the state level even when Congress can’t or won’t act.”

Senior reporter Michael Heller contributed to this report.

Kronos introduces its latest time clock, Kronos InTouch DX

Workforce management and HR software vendor Kronos this week introduced Kronos InTouch DX, a time clock offering features including individualized welcome screens, multilanguage support, biometric authentication and integration with Workforce Dimensions.

The new time clock is aimed at providing ease of use and more personalization for employees.

“By adding consumer-grade personalization with enterprise-level intelligence, Kronos InTouch DX surfaces the most important updates first, like whether a time-off request has been approved or a missed punch needs to be resolved,” said Bill Bartow, vice president of global product management at Kronos.

InTouch DX works with Workforce Dimensions, Kronos’ workforce management suite, so when a manager updates the schedule, employees can see those updates instantly on the Kronos InTouch DX, and when employees request time off through the device, managers are notified in Workforce Dimensions, according to the company.

Workforce Dimensions is mobile-native and accessible on smartphones and tablets.

Other features of InTouch DX include:

  • Smart Landing: Provides a personal welcome screen alerting users to unread messages, time-off approvals or requests, shifts swap and schedule updates.
  • Individual Mode: Provides one-click access to a user’s most frequent self-service tasks such as viewing their schedule, checking their accruals bank or transferring job codes.
  • My Time: Combines an individual’s timecard and weekly schedule, providing an overall view so that employees can compare their punches to scheduled hours to avoid errors.
  • Multilanguage support: Available for Dutch, English, French (both Canadian and European), German, Japanese, Spanish, Traditional and Simplified Chinese, Danish, Hindi, Italian, Korean, Polish and Portuguese.
  • Optional biometric authentication: Available as an option for an extra layer of security or in place of a PIN or a badge. The InTouch DX supports major employee ID badge formats, as well as PIN/employee ID numbers.
  • Date and time display: Features an always-on date and time display on screen.
  • Capacitive touchscreen: Utilizes capacitive technology used in consumer electronic devices to provide precision and reliability.

“Time clocks are being jolted in the front of workers’ visibility with new platform capabilities that surpass the traditional time clock hidden somewhere in a corner. Biometrics, especially facial recognition, are key to accelerate and validate time punches,” said Holger Mueller, vice president and principal analyst at Constellation Research.

When it comes to purchasing a product like this, Mueller said organizations should look into a software platform. “[Enterprises] need to get their information and processes on it, it needs to be resilient, sturdy, work without power, work without connectivity and gracefully reconnect when possible,” he said.

Other vendors in the human capital management space include Workday, Paycor and WorkForce Software. The Workday platform’s time-tracking and attendance feature works on mobile devices and provides real-time analytics to aid managers’ decisions. Paycor’s Time and Attendance tool offers a mobile punching feature that can verify punch locations and enable administrators to set location maps to ensure employees punch in at or near the correct work locations. WorkForce’s Time and Attendance tool automates pay rules for hourly, salaried or contingent workers.
