Tag Archives: manage

Announcing general availability of Azure IoT Hub’s integration with Azure Event Grid

We’re proud to see more and more customers using Azure IoT Hub to control and manage billions of devices, send data to the cloud and gain business insights. We are excited to announce that IoT Hub integration with Azure Event Grid is now generally available, making it even easier to transform these insights into actions by simplifying the architecture of IoT solutions. Some key benefits include:

  • Easily integrate with modern serverless architectures, such as Azure Functions and Azure Logic Apps, to automate workflows and downstream processes.
  • Enable alerting with quick reaction to creation, deletion, connection, and disconnection of devices.
  • Eliminate the complexity and expense of polling services, and use webhooks to integrate events with third-party applications for tasks such as ticketing, billing system updates, and database updates.

Together, these two services help customers easily integrate event notifications from IoT solutions with other powerful Azure services or 3rd party applications. These services add important device lifecycle support with events such as device created, device deleted, device connected, and device disconnected, in a highly reliable, scalable, and secure manner.
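
For reference, creating such a subscription from PowerShell might look roughly like the sketch below. It assumes the Az.Resources and Az.EventGrid modules and an authenticated session; the resource group, hub name, and webhook URL are placeholders rather than values from this announcement.

# Look up the IoT hub's resource ID (placeholder names)
$iotHubId = (Get-AzResource -ResourceGroupName "MyResourceGroup" -ResourceType "Microsoft.Devices/IotHubs" -Name "MyIotHub").ResourceId

# Subscribe a webhook endpoint to the device lifecycle events
New-AzEventGridSubscription -ResourceId $iotHubId -EventSubscriptionName "device-lifecycle" `
    -Endpoint "https://contoso.example/api/device-events" `
    -IncludedEventType "Microsoft.Devices.DeviceCreated","Microsoft.Devices.DeviceDeleted","Microsoft.Devices.DeviceConnected","Microsoft.Devices.DeviceDisconnected"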

As of today, this capability is available in the following regions:

  • Asia Southeast
  • Asia East
  • Australia East
  • Australia Southeast
  • Central US
  • East US 2
  • West Central US
  • West US
  • West US 2
  • South Central US
  • Europe West
  • Europe North
  • Japan East
  • Japan West
  • Korea Central
  • Korea South
  • Canada Central
  • Central India
  • South India
  • Brazil South
  • UK West
  • UK South
  • East US, coming soon
  • Canada East, coming soon

Azure Event Grid became generally available earlier this year and currently has built-in integration with the following services:

Azure Event Grid service integration

As we work to deliver more events from Azure IoT Hub, we are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.

We would love to hear more about your experiences with the preview and get your feedback! Are there other IoT Hub events you would like to see made available? Please continue to submit your suggestions through the Azure IoT User Voice forum.

News roundup: Manage employee resource groups and more

This week’s news roundup features a tool to manage employee resource groups, a roadmap for a wellness coaching technology program and an AI-powered platform to match employees with the right insurance options.

Ready, set, engage

Espresa, which makes a platform for automating employee programs, has added new features that can track and manage employee resource groups.

Employee resource groups, which are organically formed clubs of people with shared enthusiasms, are increasingly popular in U.S. corporations. A 2016 study by Bentley University indicated 90% of Fortune 500 companies have employee resource groups, and 8.5% of American employees participate in at least one.

At a time when employee retention has become more critical, thanks to a very tight labor market, employee resource groups can help employee engagement. But the grassroots nature of the efforts makes it hard for both employees and HR departments to track and manage them.

In many companies today, employee resource groups are managed with a cobbled-together collection of wiki pages, Google Docs and Evite invitations, said Raghavan Menon, CTO of Espresa, based in Palo Alto, Calif. And HR departments often have no idea what’s going on, when it’s happening or who is in charge.

“Today, nothing allows the employer or company to actually promote [employee resource groups] and then decentralize them to allow employees to manage and run the groups with light oversight from HR,” Menon explained.

Espresa’s new features give HR departments a web-based way to keep track of the employee resource groups, while giving the employees a matching mobile app to help them run the efforts.

“When employees are running things, they’re not going to use it if it’s an old-style enterprise app,” he said. “They want consumer-grade user experience on a mobile app.”

With Espresa, HR staff can also measure employee resource groups’ success factors, including participation and volunteer activity levels. That information can then be used to make decisions about company funding or a rewards program, Menon said.

An alternate health coach

Is it possible to help an employee with a chronic condition feel supported and empowered to make lifestyle changes using high-tech health coaching and wearable health technology? According to John Moore, M.D., medical director at San Francisco-based Fitbit, the answer is yes.

During World Congress’ 10th annual Virtual Health Care Summit in Boston, Moore outlined a health coaching roadmap designed to help HR departments and employers meet workers where they are.

“Hey, we know the healthcare experience can be really tough, and it’s hard to manage with other priorities,” he said. “We know you have a life.”

Using a health coach, wearables or a mobile phone — and possibly even looping in family and friends — an employee with a health condition is walked through the steps of setting micro-goals over a two-week period. Reminders, support and encouragement are delivered via a wearable or a phone and can include a real or virtual coach, or even a family intervention, if necessary.

The idea, Moore stressed, is to enable an HR wellness benefits program to give ownership of lifestyle changes back to the employee, while at the same time making the goals sufficiently small to be doable.

“This is different than [typical] health coaching in the workplace,” he said. “This is going to be a much richer interaction on a daily basis. And because it’s facilitated by technology, it’s more scalable and more cost-effective. We’ll be able to collect information that spans from blood pressure, to weight, to steps, to glucose activity and sleep data to get the whole picture of the individual so they can understand themselves better.”

This is an in-the-works offering from Fitbit, and it will not be limited to just the Fitbit-brand device. This platform will be based on technology Fitbit acquired from Twine in February 2018. Moore outlined a vision of interoperability that could include everything, from the pharmacy to a glucose meter to, eventually, an electronic health record system. This could work in tandem with a company’s on-site or near-site health clinic and expand from there, he said.

“Technology can help break down barriers that have existed in traditional healthcare. Right now, interactions are so widely spaced, you can’t put coaches in the office every day or every week. There needs to be a way to leverage technology,” he said. “We can’t just give people an app with an AI chatbot and expect it to magically help them. The human element is still a very important piece, and we can use technology to make that human superhuman.”

HR on the go

StaffConnect has released version 2.2 of its mobile engagement platform, which includes new options for customers to create portals for easier access to payroll, training and other HR information and forms. The StaffConnect service can be used by workers in the office and by what the company calls “nondesk employees,” or NDEs.

The company’s 2018 Employee Engagement Survey showed more than one-third of companies have at least 50% of their workforce as NDEs and highlighted the challenges of keeping all employees equally informed and engaged. The survey indicated the vast majority of companies continue to use either email (almost 80%) or an intranet (almost 49%) to communicate with employees, while just 2% of companies reach out via mobile devices.

The company is also now offering a REST API to make it easier to integrate its platform into existing HR services, and it added custom branding and increased quiz feature options to boost customization.

StaffConnect’s new version also offers additional security options and features, including GDPR compliance and protection for data at rest.

Manage APIs with connectivity-led strategy to cure data access woes

An effective strategy to manage APIs calls for more than just building and publishing APIs. It can enable API-led connectivity, DevOps agility and easier implementation of new technologies, like AI and function as a service, or FaaS.

Real-time data access and delivery are critical to create excellent consumer experiences. The industry’s persistent appetite for API management and integration to connect apps and data is exemplified by Salesforce’s MuleSoft acquisition in March 2018.

In this Q&A, MuleSoft CTO Ross Mason discusses the importance of a holistic strategy to manage APIs that connect data to applications and that speed digital transformation projects, as well as development innovation.

Why do enterprises have so much trouble with data access and delivery?

Ross Mason: Historically, enterprises have considered IT a cost center — one that typically gets a budget cut every year and must do more with less. It doesn’t make sense to treat as a cost center the part of the organization that has a treasure-trove of data and functionality to build new consumer experiences.

In traditional IT, every project is built from the ground up, and required customer data resides separately in each project. There really is no reuse. They have used application integration architectures, like ESBs [enterprise service buses], to suck the data out from apps. That’s why enterprise IT environments have a lot of point-to-point connectivity inside and enterprises have problems with accessing their data.

Today, if enterprises want easy access to their data, they can use API-led connectivity to tap into data in real time. The web shows us that building software blocks with APIs enables improvements in connection experiences.

How does API-led connectivity increase developers’ productivity?

Mason: Developers deliver reusable APIs and reusable templates with each project. The next time someone needs access to the API, that data or a function, it’s already there, ready to use. The developer doesn’t need to re-create anything.

Reuse allows IT to keep costs down. It also allows people in other ecosystems within the organization to discover and get access to those APIs and data, so they can build their own applications.

In what ways can DevOps extend an API strategy beyond breaking down application and data silos?

Mason: Once DevOps teams deliver microservices and APIs, they see the value of breaking down other IT problems into smaller, bite-size chunks. For example, they get a lot of help with change management, because one code change does not impact a massive, monolithic application. The code change just impacts, say, a few services that rely on a piece of data or a capability in a system.

APIs make applications more composable. If I have an application that’s broken down into 20 APIs, for example, I can use any one of those APIs to fill a feature or a need in any other application without impacting each other. You remove the dependencies between other applications that talk to these APIs.

Overall, a strong API strategy allows software development to move faster, because you don’t build from the ground up each time. Also, when developers publish APIs, they create an interesting culture dynamic of self-service. This is something that most businesses haven’t had in the past, and it enables developers to build more on their own without going through traditional project cycles.

Which new technologies come next in an API strategy?

Mason: Look at FaaS and AI. Developers now comfortably manage APIs and microservices together to break up monolithic applications. A next step is to add function as a service. This type of service typically calls out to other APIs to get anything done. FaaS gives you a way to stitch these things together for specific purposes.

It’s not too early to get into AI for some use cases. One use of machine learning is to increase developer productivity. Via AI, we learn what the developer is doing and can suggest better approaches. On our runtime management pane, we use machine learning to understand tracking patterns and spot anomalies, to get proactive about issues that might occur.

An API strategy can be extended easily to new technologies, such as IoT, AI and whatever comes next. These systems rely on APIs to interact with the world around them.

Manage all your Hyper-V snapshots with PowerShell


It’s much easier to manage Hyper-V snapshots using PowerShell than a GUI because PowerShell offers greater flexibility. Once you’re familiar with the basic commands, you’ll be equipped to oversee and change the state of the VMs in your virtual environment.

PowerShell not only reduces the time it takes to perform a task using a GUI tool, but it also reduces the time it takes to perform repeated tasks. For example, if you want to see the memory configured on all Hyper-V VMs, a quick PowerShell command or script is easier to execute than checking VMs one by one. Similarly, you can perform operations related to Hyper-V snapshots using PowerShell.
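
For example, the following one-liner, a minimal sketch that assumes the Hyper-V PowerShell module on the local host, lists the memory configuration of every VM in a single pass:

# Show memory settings for all VMs on the local Hyper-V host
Get-VM | Select-Object Name, State, DynamicMemoryEnabled, MemoryStartup, MemoryMinimum, MemoryMaximum, MemoryAssigned | Format-Table -AutoSize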

A snapshot — or checkpoint, depending on which version of Windows Server you have — is a point-in-time picture of a VM that you can use to restore that VM to the state it was in when the snapshot was taken. For example, if you face issues when updating Windows VMs and they don’t restart properly, you can restore VMs to the state they were in before you installed the updates.

Similarly, developers can use checkpoints to quickly perform application tests.

Before Windows Server 2012 R2, Microsoft didn’t support snapshots for production use. But starting with Windows Server 2012 R2, snapshots have been renamed checkpoints and are well-supported in a production environment.

PowerShell commands for Hyper-V snapshots and checkpoints

Microsoft offers a few PowerShell commands to work with Hyper-V checkpoints and snapshots, such as Checkpoint-VM, Get-VMSnapshot, Remove-VMSnapshot and Restore-VMSnapshot.

If you want to retrieve all the Hyper-V snapshots associated with a particular VM, all you need to do is execute the Get-VMSnapshot -VMName PowerShell command. For example, the PowerShell command below lists all the snapshots associated with SQLVM:

Get-VMSnapshot -VMName SQLVM

There are two types of Hyper-V checkpoints available: standard and production checkpoints. If you just need all the production checkpoints for a VM, execute the PowerShell command below:

Get-VMSnapshot -VMName SQLVM -SnapshotType Production

To list only the standard checkpoints, execute the following PowerShell command:

Get-VMSnapshot -VMName SQLVM -SnapshotType Standard

When it comes to creating Hyper-V checkpoints for VMs, use the Checkpoint-VM PowerShell command. For example, to take a checkpoint for a particular VM, execute the command below:

Checkpoint-VM -Name TestVM -SnapshotName TestVMSnapshot1

The above command creates a checkpoint for TestVM on the local Hyper-V server, but you can use the following command to create a checkpoint for a VM located on a remote Hyper-V server:

Get-VM SQLVM -ComputerName HyperVServer | Checkpoint-VM

There are situations where you might want to create Hyper-V checkpoints of VMs in bulk. For example, before installing an update on production VMs or upgrading line-of-business applications in a VM, you might want to create checkpoints to ensure you can successfully restore VMs to ensure business continuity. But if you have several VMs, the checkpoint process might take a considerable amount of time.

You can design a small PowerShell script to take Hyper-V checkpoints for VMs specified in a text file, as shown in the PowerShell script below:

# Path to a text file that lists one VM name per line
$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs)
{
    # Name each checkpoint after the VM, for example SQLVM_BeforeUpdates
    $ChkName = $ThisVM + "_BeforeUpdates"
    Checkpoint-VM -Name $ThisVM -SnapshotName $ChkName
}
Write-Host "Script finished creating Checkpoints for Virtual Machines."

The above PowerShell script gets VM names from the C:\Temp\ProdVMs.TXT file one by one and then runs the Checkpoint-VM PowerShell command to create the checkpoints.

To remove Hyper-V snapshots from VMs, use the Remove-VMSnapshot PowerShell command. For example, to remove a snapshot called TestSnapshot from a VM, execute the following PowerShell command:

Get-VM SQLVM | Remove-VMSnapshot -Name TestSnapshot

To remove Hyper-V checkpoints from bulk VMs, use the same PowerShell script you used to create the checkpoints. Let’s assume all the VMs are working as expected after installing the updates and you would like to remove the checkpoints. Simply execute the PowerShell script below:

# Same VM list used to create the checkpoints
$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs)
{
    $ChkName = $ThisVM + "_BeforeUpdates"
    # Remove the matching pre-update checkpoint from each VM
    Get-VM $ThisVM | Remove-VMSnapshot -Name $ChkName
}
Write-Host "Script finished removing Checkpoints for Virtual Machines."

To restore Hyper-V snapshots for VMs, use the Restore-VMSnapshot PowerShell cmdlet. For example, to restore or apply a snapshot to a particular VM, use the following PowerShell command:

Restore-VMSnapshot -Name "TestSnapshot1" -VMName SQLVM -Confirm:$False

Let’s assume your production VMs aren’t starting up after installing the updates and you would like to restore the VMs to their previous states. Use the PowerShell script below and perform the restore operation:

# Same VM list used to create the checkpoints
$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs)
{
    $ChkName = $ThisVM + "_BeforeUpdates"
    # Apply the pre-update checkpoint without prompting for confirmation
    Restore-VMSnapshot -Name $ChkName -VMName $ThisVM -Confirm:$False
}
Write-Host "Script finished restoring Checkpoints for Virtual Machines."

Note that, by default, the command asks for confirmation when restoring a checkpoint for a VM. To avoid the confirmation prompt, add the -Confirm:$False parameter to the command, as shown above.

New tools unveiled to monitor, manage and optimize SAP environments


The world of the SAP intelligent enterprise requires new tools to monitor, manage and optimize SAP environments as they evolve to include new SAP platforms, integrations and advanced technologies.

SAP’s vision of the intelligent enterprise includes SAP Data Hub, which incorporates integration and data management components, and it shows the company can embrace modern open source platforms, like Hadoop and Spark, and hybrid and multi-cloud deployment, according to Doug Henschen, an analyst at Constellation Research.

This openness, along with extending cloud initiatives to Microsoft Azure, Google Cloud Platform and IBM private cloud instances, necessitated a move to bring customers hybrid and multi-cloud data management capabilities, Henschen said.

“The Data Hub, in particular, facilitates hybrid and multi-cloud data access without data movement and copying,” he said. “This is crucial in harnessing data from any source, no matter where it may be running, to facilitate data-driven decisioning.”

At SAP Sapphire Now 2018, several vendors unveiled new tools — or updates to existing ones — that address some of the challenges associated with moving SAP systems to the intelligent enterprise landscape.

  • Tricentis Tosca’s continuous testing method is designed to keep pace with modern SAP environments, unlike traditional testing methods, which were built for previous versions of SAP applications. These legacy testing systems may not always adequately support S/4HANA and Fiori 2.0, so many SAP users have to use manual testing to validate releases, according to Tricentis. Cloud-enabled Tricentis Tosca 11.2 now supports a variety of the newest SAP versions, including S/4HANA and Fiori 2.0.
  • Worksoft announced the release of Worksoft Interactive Capture 2.0, which is test automation software for SAP environments. Worksoft Interactive Capture 2.0 operates on the principle that it’s critical to keep existing SAP applications operating as new systems and applications are being developed. Worksoft Interactive Capture 2.0 allows business users and application functional experts to create automated business workflows, test documentation and test cases.
  • Virtual Forge announced its CodeProfiler for HANA can now scan the SAPUI5 programming language. CodeProfiler for HANA provides detailed information on code quality as a programmer writes code, similar to spell check on a word processor, according to Virtual Forge. This allows coders to identify and manage performance, security and compliance deficiencies early in the HANA application development process. Reducing or eliminating performance decline and application downtime is particularly critical, as HANA enables real-time business applications.
  • As more organizations move their SAP environments to S/4HANA — or plan to — it becomes important to understand how users actually interact with SAP applications. Knoa Software showed a new version of its user experience management application, Knoa UEM for Enterprise Applications — it’s also resold by SAP as SAP User Experience Management by Knoa. The product allows organizations to view and analyze how users are interacting with SAP applications, including activities that lead to errors, never-used applications and workarounds that are needed because an application’s software is bad, according to Knoa. The latest version of Knoa UEM for Enterprise Applications allows companies that are migrating to S/4HANA to analyze usage on a range of SAP applications, including SAP Fiori, SAP Business Client, SAP Enterprise Portal and SAP GUI for Windows. It can also be used for SAP Leonardo application development by determining how customers actually use the applications and developing a business case for the application based on accurate measurements of user experience improvements in the new apps.
  • General Data Protection Regulation (GDPR) compliance is a huge issue now, and Attunity released Gold Client for Data Protection, a data governance application for SAP environments. Gold Client for Data Protection enables the identification and masking of personally identifiable information across production SAP ECC systems, according to Attunity. The software helps organizations to find PII across SAP systems, which then enables them to enforce GDPR’s “right to be forgotten” mandate.

Dig Deeper on SAP development

Mastering PowerShell commands for Exchange by the book

The key to managing Exchange Server 2016 is mastering PowerShell commands for Exchange.

With this latest version of Exchange, IT administrators must learn how to manage Exchange 2016 mailbox and client access and troubleshoot issues with the edge transport server, which routes email online and protects the system from malware and spam. Part of the difficulty of managing Exchange Server is learning how to use PowerShell, as the web-based Exchange admin center cannot handle every issue.

The book Practical PowerShell Exchange Server 2016: Second Edition, by Damian Scoles and Dave Stork, teaches administrators with little PowerShell experience how to use the scripting language to ease configuration jobs or handle tasks with automation.

For experienced PowerShell users, this book shares ways to improve existing scripts. Administrators can learn how to use PowerShell commands for Exchange to customize their servers, manage mailboxes and mobile devices, and create reports.
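
As a flavor of that kind of reporting, a short sketch such as the one below, run from the Exchange Management Shell, lists the ten largest mailboxes; the result size and sort order are example choices, not a recipe from the book.

# Report the ten largest mailboxes in the organization
Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics | Sort-Object TotalItemSize -Descending | Select-Object DisplayName, ItemCount, TotalItemSize -First 10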

From migrating to Exchange 2016 to taking advantage of its new functions, this book walks administrators through common tasks with PowerShell commands for Exchange. This excerpt from chapter 14 explains why mailbox migrations work better with PowerShell commands for Exchange:

It’s very unlikely that there is an Exchange admin who has not had, or will not have, to move one or more mailboxes from one Exchange database to another. While some scenarios are quite easy, there are others that require more planning, reporting and so on.

With the introduction of Exchange 2010, Microsoft also improved the one element that would grow into an almost impossible task: mailbox moves. The revolutionary change in Exchange 2010 made it possible to move mailbox data while the user could still access and modify his/her data: the Online Mailbox Move. New incoming mail is queued in mail queues until the mailbox is available again (i.e., when it has successfully moved or has failed).

Practical PowerShell Exchange Server 2016

With the trend of growing average mailbox sizes, this was a necessary step. Otherwise it could mean that a migration would take too long to perform in a single big bang, meaning that you have to migrate mailboxes in stages and maintain a coexistence environment until the last mailbox has been moved. It was also a major step towards making Office 365 more accessible to migrate to and more flexible for Microsoft on managing servers and databases. Just consider moving mailboxes like in Exchange 2003, hoping that every mailbox has moved before your maintenance window closes… .

Luckily this has changed, and as Exchange 2016 can only coexist with Exchange 2010 and 2013, earlier versions of Exchange won’t be an issue. However, the option is still there with the -ForceOffline switch in the New-MoveRequest cmdlet. You shouldn’t have to use it under normal conditions; however, from time to time a mailbox is fickle and can only be moved via an offline move.

Now, most of the move mailbox options are available from within the Exchange Admin Center in one way or another. But in our experience, the EAC is probably fine for simple migrations or the incidental move of one mailbox. If you migrate your server environment from one major build to another, it’s almost impossible to ignore PowerShell. Those migrations are so much more complex and full of caveats that they almost always require custom PowerShell cmdlets and scripts.
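
To illustrate the point, a batch of online mailbox moves driven from PowerShell might look like the sketch below; the database and batch names are hypothetical and not taken from the book.

# Queue online moves for every mailbox still on the old database
Get-Mailbox -Database "DB-OLD" -ResultSize Unlimited | New-MoveRequest -TargetDatabase "DB-NEW" -BatchName "Wave1"

# Check progress of the batch
Get-MoveRequest -BatchName "Wave1" | Get-MoveRequestStatistics | Select-Object DisplayName, Status, PercentComplete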

Editor’s note: This excerpt is from Practical PowerShell Exchange Server 2016: Second Edition, authored by Damian Scoles and Dave Stork, published by Practical PowerShell Press, 2017.

Hyper-converged infrastructures get a new benchmark

Hyper-converged infrastructures can be extremely difficult to manage, because everything is interconnected. Measuring performance in this type of infrastructure is just as challenging. And in the past, the available benchmarks only focused on one part of the system. Now, administrators have the ability to look at the infrastructure as a whole.

In November, the Transaction Processing Performance Council (TPC) announced the availability of TPCx-HCI, an application system-level benchmark for measuring the performance of hyper-converged infrastructures. With this benchmark kit, administrators can get a complete view of their virtualized hardware and converged storage, networking and compute platforms running a database application.

We spoke with Reza Taheri, chairman of the TPCx-HCI committee and principal engineer at VMware, who explained the new benchmark for hyper-converged infrastructures and how the council created it.

What was the process for developing the TPCx-HCI benchmark?

Reza Taheri: Originally, we developed a functional specifications document to leave people’s hands open to do any implementation [of the benchmark]. But over time, we realized that it actually made it very hard for people to implement. Not anybody could just go out and start learning the benchmark based on Transaction Processing Performance Council standards. So, we put out a benchmark kit that anybody can download, and it implements the benchmark, measurement, collection of data and all of that in the application kit itself.

The TPCx-V benchmark [for virtualization] was released a couple of years ago. The idea was to look at the performance of a virtualized server — so the hardware, hypervisor, storage and networking using the database workload. We wanted to compare different virtualization stacks using a very heavy business-critical database workload.

Earlier this year, we had a couple new members join the TPC, and they were HCI vendors — DataCore and Nutanix. They, along with other vendors, [started] asking about a benchmark for HCI systems. We looked at the TPCx-V benchmark kit and specifications and realized that we could very quickly repurpose that for hyper-converged infrastructures. We realized that the HCI market is hot and that there was a demand for a good benchmark.

Will this benchmark account for quality of service, in addition to price and performance?

Taheri: In a couple of ways, yes. One is that you need to have very strict response time performance. 

The other one is something that’s new in this benchmark: combining performance with some notion of availability. Say you’re running on a four-node cluster. For the test, you run the VMs on only three of the nodes, but all four nodes supply data. At some point during the test, you kill the fourth node and run for a while, and then you turn it back on. You’re required to report the impact on performance during this run and also to report how long it took you to recover resilience and redundancy after the host came back on.

What types of applications do you use for benchmark testing?

Taheri: It’s an online transaction processing application — a database application — that runs on top of Postgres [an open source relational database management system] in a Linux VM. We use that to generate a realistic, very heavy workload that then runs on top of the hypervisor and virtualized storage, virtualized networking, the hardware and so on. The beauty of an application like that is that it really leaves nowhere to hide. Sometimes, for example, if it’s a very simple test of just IOPS, you can make up for low storage by using a lot of CPU or a lot of memory.

But you can’t do that with a high-level system benchmark like this, because if you make up for storage by using too much CPU in the HCI software itself or do caching and use memory, then the application suffers and your performance drops. So, to have good performance, you have to have good storage, memory, CPU and networking all at the same time.

Are all the tested systems running the same hypervisor? Can you accurately compare benchmark performance results for HCI systems that are running different hypervisors?

Taheri: Any hypervisor can be used for this benchmark. Different hyper-converged infrastructures might be running different software stacks besides different hypervisors. It might not be possible to state how much of a performance difference is solely due to the hypervisor. The TPCx-V benchmark is very similar to TPCx-HCI, but runs on one node and can use any type of storage. TPCx-V is a better tool for studying the performance of hypervisors.

Is there any way to compare this benchmark to something running in the cloud?

Taheri: Not directly, but the benchmark has many attributes of cloud-based applications, such as elasticity of load, virtualization and so on. Also, a sponsor might choose to run the benchmark on a cloud platform, which is allowed by the Transaction Processing Performance Council specifications.

As HCI is still evolving, are there plans to review and make changes to the benchmark at any point?

Taheri: We would need to. It was a quantum leap from the Iometer type of benchmarks — micro-benchmarks — to a system application benchmark like this. Going forward, these specs will evolve. Benchmarks … evolve in minor ways, and every few years we have to do a major change, which makes it incomparable to previous versions of the same benchmark.

Will PowerShell Core 6 fill in missing features?

Administrators who have embraced PowerShell to automate tasks and manage systems will need to prepare themselves as Microsoft plans to focus its energies on the open source version, PowerShell Core.

All signs from Microsoft indicate it is heading away from the Windows-only version of PowerShell, which the company said it will continue to support with critical fixes — but no further upgrades. The company plans to release PowerShell Core 6 shortly. Here’s what admins need to know about the transition.

What’s different with PowerShell Core?

PowerShell Core 6 is an open source configuration management and automation tool from Microsoft. As of this article’s publication, Microsoft made a release candidate available in November. PowerShell Core 6 represents a significant change for administrators because it shifts from a Windows-only platform to accommodate heterogeneous IT shops and hybrid cloud networks. Microsoft’s intention is to give administrative teams a single tool to manage Linux, macOS and Windows systems.

What features are not in PowerShell Core?

PowerShell Core runs on .NET Core and uses .NET Standard 2.0, a common library specification that helps make some current Windows PowerShell modules work in PowerShell Core.

Because .NET Core implements only a subset of the .NET Framework, PowerShell Core misses out on some useful features in Windows PowerShell. For example, PowerShell workflow enables admins to execute tasks or retrieve data through a sequence of automated steps; this feature is not in PowerShell Core 6. Similarly, workflow capabilities such as sequencing, checkpointing, resumability and persistence are not available in PowerShell Core.
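
For context, this is the sort of Windows PowerShell 5.1 workflow, with parallel execution and checkpointing, that has no equivalent in PowerShell Core 6; the server names are placeholders.

# Windows PowerShell only: a workflow that pings servers in parallel and persists its state
workflow Invoke-LabPing {
    foreach -parallel ($computer in @("Server01", "Server02")) {
        Test-Connection -ComputerName $computer -Count 1
    }
    Checkpoint-Workflow   # save workflow state so it can resume after an interruption
}
Invoke-LabPing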

A few other features missing from PowerShell Core 6 are:

  • Windows Presentation Foundation: This is the group of .NET libraries that enable coders to build UIs for scripts. It offers a common platform for developers and designers to work together with standard tools to create Windows and web interfaces.
  • Windows Forms: In PowerShell 5.0 for Windows, the Windows Forms feature provides a robust platform to build rich client apps with the GUI class library on the .NET Framework. To create a form, the admin loads the System.Windows.Forms assembly, creates a new object of a form type such as System.Windows.Forms.Form and calls the ShowDialog method (see the sketch after this list). With PowerShell Core 6, administrators lose this capability.
  • Cmdlets: As of publication, most cmdlets in Windows PowerShell have not been ported to PowerShell Core 6. However, the compatibility with .NET assemblies enables admins to use the existing modules. Users on Linux are limited to modules mostly related to security, management and utility. Admins on that platform can use the PowerShellGet in-box module to install, update and discover PowerShell modules. PowerShell Web Access is not available for non-Windows systems because it requires Internet Information Services, the Windows-based web server functionality.
  • PowerShell remoting: Microsoft has ported Secure Shell (SSH) to Windows, and SSH is already popular in other environments. That means SSH-based remoting for PowerShell is likely the best option for remoting tasks. Modules such as Hyper-V, Storage, NetTCPIP and DnsClient have not been ported to PowerShell Core 6, but Microsoft plans to add them.
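
As referenced in the Windows Forms item above, the pattern in Windows PowerShell 5.x takes only a few lines; this minimal sketch will not run on PowerShell Core 6.

# Windows PowerShell 5.x only: build and show a simple Windows Forms dialog
Add-Type -AssemblyName System.Windows.Forms
$form = New-Object System.Windows.Forms.Form
$form.Text = "Hello from Windows PowerShell"
[void]$form.ShowDialog()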

Is there a new scripting environment?

For Windows administrators, the PowerShell Integrated Scripting Environment (ISE) is a handy editor that admins use to write, test and debug commands to manage networks. But PowerShell ISE is not included in PowerShell Core 6, so administrators must move to a different integrated development environment.

Microsoft recommends admins use Visual Studio Code (VS Code). VS Code is a cross-platform tool and uses web technologies to provide a rich editing experience across many languages. However, VS Code lacks some of PowerShell ISE’s features, such as PSEdit and remote tabs. PSEdit enables admins to edit files on remote systems without leaving the development environment. Despite VS Code’s limitations, Windows admins should plan to migrate from PowerShell ISE and familiarize themselves with VS Code.

What about Desired State Configuration?

Microsoft offers two versions of Desired State Configuration: Windows PowerShell DSC and DSC for Linux. DSC helps administrators maintain control over software deployments and servers to avoid configuration drift.
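
For reference, a minimal Windows PowerShell DSC configuration, the kind of declaration both DSC flavors are meant to enforce, looks like the sketch below; the node and feature names are placeholders.

# Declare the desired state: IIS must be present on localhost
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "localhost" {
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}

# Compile to a MOF file and apply it
WebServerBaseline -OutputPath "C:\DSC\WebServerBaseline"
Start-DscConfiguration -Path "C:\DSC\WebServerBaseline" -Wait -Verbose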

Microsoft plans to combine these two options into a single cross-platform version called DSC Core, which will require PowerShell Core and .NET Core. DSC Core is not dependent on Windows Management Framework (WMF) and Windows Management Instrumentation (WMI) and is compatible with Windows PowerShell DSC. It supports resources written in Python, C and C++.

Debugging in DSC has always been troublesome, and ISE eased that process. But with Microsoft phasing out ISE, what should admins do now? A Microsoft blog says the company uses VS Code internally for DSC resource development and plans to release instructional videos that explain how to use the PowerShell extension for DSC resource development.

PowerShell Core 6 is still in its infancy, but Microsoft’s moves show the company will forge ahead with its plan to replace Windows PowerShell. This change brings a significant overhaul to the PowerShell landscape, and IT admins who depend on this automation tool should pay close attention to news related to its development.

Dig Deeper on Microsoft Windows Scripting Language

Druva Cloud Platform expands with Apollo

Druva moved to help manage data protection in the cloud with its latest Apollo software as a service, which helps protect workloads in Amazon Web Services through the Druva Cloud Platform.

The company’s new service provides a single control plane to manage infrastructure-as-a-service and platform-as-a-service cloud workloads.

Druva, based in Sunnyvale, Calif., sells two cloud backup products, Druva InSync and Druva Phoenix, for its Druva Cloud Platform. The enterprise-level Druva InSync backs up endpoint data across physical and public cloud storage. The Druva Phoenix agent backs up and restores data sets in the cloud for distributed physical and virtual servers. Phoenix applies global deduplication at the source and points archived server backups to the cloud target.

Apollo enables data management of Druva Cloud Platform workloads under a single control plane so administrators can do snapshot management for backup, recovery and replication of Amazon Web Services instances. It automates service-level agreements with global orchestration that includes file-level recovery. It also protects Amazon Elastic Compute Cloud instances.

Druva Apollo is part of an industrywide trend among data protection vendors to bring all secondary data under global management across on-premises and cloud storage.

“There is a big change going on throughout the industry in how data is being managed,” said Steven Hill, senior storage analyst for 451 Research. “The growth is shifting toward secondary data. Now, secondary data is growing faster than structured data, and that is where companies are running into a challenge.”

“Apollo will apply snapshot policies,” said Dave Packer, Druva’s vice president of product and alliance marketing. “It will automate many of the lifecycles of the snapshots. That is the first feature of Apollo.”

Automation for discovery, analysis and information governance is on the Druva cloud roadmap, Packer said.

Druva last August pulled in $80 million in funding, bringing total investments into the range of $200 million for the fast-growing vendor. Druva claims to have more than 4,000 worldwide customers that include NASA, Pfizer, NBCUniversal, Marriott Hotels, Stanford University and Lockheed Martin.

Druva has positioned its data management software to go up against traditional backup vendors Commvault and Veritas Technologies, which also are transitioning into broad-based data management players. It’s also competing with startups Rubrik, which has raised a total of $292 million in funding since 2015 for cloud data management, and Cohesity, which has raised $160 million.

Hyper-V PowerShell commands for every occasion

You can certainly manage Hyper-V hosts and VMs with Hyper-V Manager or System Center Virtual Machine Manager, but in some cases, it’s easier to use PowerShell. With this scripting language and interactive command line, you can perform a number of actions, from simply importing a VM and performing a health check to more complex tasks, like enabling replication and creating checkpoints. Follow these five expert tips, and you’ll be well on your way to becoming a Hyper-V PowerShell pro.

Import and export Hyper-V VMs

If you need to import and export VMs and you don’t have the Hyper-V role installed, you can install Hyper-V PowerShell modules on a management server. To export a single VM, use the Export-VM command. This command creates a folder on the path specified with three subfolders — snapshots, virtual hard disks and VMs — which contain all of your VM files. You also have the option to export all of the VMs running on a Hyper-V host or to specify a handful of VMs to export by creating a text file with the VM names and executing a short script using that file. To import a single VM, use the Import-VM command. The import process will register the VM with Hyper-V and check for compatibility with the target host. If the VM is already registered, the existing VM with the same globally unique identifier will be deleted, and the VM will be registered again.
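
A minimal export-and-import sketch, assuming a VM named TestVM and an existing D:\Exports folder, looks like this:

# Export a single VM to D:\Exports\TestVM
Export-VM -Name TestVM -Path D:\Exports

# Import it again by pointing Import-VM at the exported configuration file (.vmcx here; .xml on older hosts) and registering it in place
$config = Get-ChildItem -Path 'D:\Exports\TestVM\Virtual Machines' -Filter *.vmcx | Select-Object -First 1
Import-VM -Path $config.FullName -Register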

Check Hyper-V host and VM health

You can perform a complete health check for Hyper-V hosts and VMs by using PowerShell commands. When it comes to checking the health of your Hyper-V hosts, there are a lot of elements to consider, including the Hyper-V OS and its service pack, memory and CPU usages, Hyper-V uptime and total, used and available memory. If you want to perform a health check for a standalone host, you can use individual Hyper-V PowerShell commands. To perform a health check for a cluster, use Get-ClusterNode to generate a report. When performing a VM health check, consider the following factors: VM state, integration services version, uptime, whether the VM is clustered, virtual processors, memory configuration and dynamic memory status. You can use Get-VM to obtain this information and a script using the command to check the health status of VMs in a cluster.
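
A quick sketch of both checks on a standalone host might look like this:

# Host health: processor and memory capacity of the local Hyper-V host
Get-VMHost | Select-Object ComputerName, LogicalProcessorCount, MemoryCapacity

# VM health: state, uptime and memory/processor configuration of every VM
Get-VM | Select-Object Name, State, Uptime, ProcessorCount, DynamicMemoryEnabled, MemoryAssigned, IntegrationServicesVersion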

Enable Hyper-V replication

Hyper-V replication helps keep VM workloads running in the event of an issue at the production site by replicating those workloads to the disaster recovery site and bringing them online there when need be. To configure Hyper-V replication, you need at least two Hyper-V hosts running Windows Server 2012 or later. There are a few steps involved, but it’s a pretty straightforward process. First, you need to run a script on the replica server to configure the Hyper-V replica and enable required firewall rules. Then, execute a script on the primary server to enable replication for a specific VM — we’ll name it SQLVM, in this case. Finally, initiate the replication with Start-VMInitialReplication –VMName SQLVM. After you’ve completed this process, the VM on the replica server will be turned off, while the one on the primary server will continue to provide services.
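
Condensed into commands, the process looks roughly like the sketch below; HV-REPLICA is a placeholder replica server name, and Kerberos over HTTP port 80 is just one possible configuration.

# On the replica server: accept inbound replication and open the built-in firewall rule
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replicas"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"

# On the primary server: enable replication for SQLVM and start the initial copy
Enable-VMReplication -VMName SQLVM -ReplicaServerName HV-REPLICA -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName SQLVM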

Create Hyper-V checkpoints

If you’d like to test applications or just play it safe in case a problem arises, enable Hyper-V checkpoints on your VMs so you can roll back changes to a specific point in time. The option to take point-in-time images is disabled by default, but you can enable it for a single VM with the Set-VM Hyper-V PowerShell command. To use production checkpoints, you’ll also have to configure the VM to do so. Once you enable and configure the VM to use checkpoints, you can use Checkpoint-VM to create a checkpoint, and the entry will include the date and time it was taken. Unfortunately, the above command won’t work on its own to create checkpoints for VMs on remote Hyper-V hosts, but you can use a short script to create a checkpoint in this instance. To restore a checkpoint, simply stop the VM, and then use the Restore-VMSnapshot command.
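
In command form, that sequence is roughly as follows; TestVM and the checkpoint name are placeholders.

# Allow checkpoints on the VM and prefer production checkpoints
Set-VM -Name TestVM -CheckpointType Production

# Take a checkpoint; the entry is stamped with the date and time
Checkpoint-VM -Name TestVM -SnapshotName "BeforeAppTest"

# Roll back: stop the VM, then apply the checkpoint
Stop-VM -Name TestVM
Restore-VMSnapshot -VMName TestVM -Name "BeforeAppTest" -Confirm:$false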

Use Port ACL rules in Hyper-V

Port Access Control Lists (ACLs) are an easy way to isolate VM traffic from other VMs. To use this feature, you’ll need Windows Server 2012 or later, and your VMs must be connected to a Hyper-V switch. You can create and manage Port ACL rules using just a few Hyper-V PowerShell commands, but you need to gather some information first. Figure out the source of the traffic, the direction of the traffic — inbound, outbound or both — and whether you want to block or allow traffic. Then, you can execute the Add-VMNetworkAdapterACL command with those specific parameters. You can also list all of the Port ACL rules for a VM with the Get-VMNetworkAdapterACL command. To remove a Port ACL rule associated with a VM, use Remove-VMNetworkAdapterACL. As a time-saver, combine the two previous PowerShell cmdlets to remove all of the VM’s Port ACL rules.
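
For example, to block all traffic between TestVM and one subnet, then inspect and clear the rules, the commands look roughly like this; the VM name and subnet are placeholders.

# Block inbound and outbound traffic between TestVM and the 192.168.50.0/24 subnet
Add-VMNetworkAdapterAcl -VMName TestVM -RemoteIPAddress 192.168.50.0/24 -Direction Both -Action Deny

# List the Port ACL rules currently applied to the VM
Get-VMNetworkAdapterAcl -VMName TestVM

# Remove every Port ACL rule from the VM in one pass
Get-VMNetworkAdapterAcl -VMName TestVM | Remove-VMNetworkAdapterAcl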

Next Steps

Deep dive into Windows PowerShell

Manage cached credentials with PowerShell

Use PowerShell to enable automated software deployment