Tag Archives: manage

Druva expands on AWS backup, moves S3 snaps across regions

Customers with multiple Amazon accounts now have a way to manage backup policies for all of them.

Druva is adding a global policy management tool to its CloudRanger software, alongside other AWS backup features. Originally, CloudRanger only allowed backup policy setting within individual accounts. The update allows users to create backup policies first, then select or exclude the Amazon accounts to apply them to.

Druva’s vice president of product David Gildea said there has been an increase in the number of enterprises that hold multiple accounts. He said Druva designed the new CloudRanger feature around the idea that customers have thousands of accounts and multiple resources, and it gives the customer a “broad stroke” approach to backup policy management.
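That account-scoped model can be pictured as a simple include/exclude filter over a set of AWS account IDs. The structure below is a hypothetical sketch of the idea, not CloudRanger's actual API or schema.

```python
# Hypothetical global backup policy with account include/exclude lists.
# Field names ("include", "exclude") are illustrative assumptions.
def accounts_in_scope(policy, all_accounts):
    """Return the sorted AWS account IDs a policy applies to.

    A missing "include" list means the policy targets every account;
    "exclude" entries are always removed from the result.
    """
    included = set(policy.get("include") or all_accounts)
    excluded = set(policy.get("exclude", []))
    return sorted((included & set(all_accounts)) - excluded)

if __name__ == "__main__":
    accounts = ["111111", "222222", "333333"]
    policy = {"name": "daily-ebs-snapshots", "exclude": ["333333"]}
    print(accounts_in_scope(policy, accounts))  # ['111111', '222222']
```

Defining the policy once and resolving it against the full account list is what gives the "broad stroke" behavior Gildea describes, rather than repeating the policy per account.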

“[Amazon] S3 is one of the biggest and most important data sources in the world now,” Gildea said, highlighting the need to protect and manage the data within it.

S3 backup is one of Druva’s key new AWS backup features. Customers can back up S3 snapshots across regions, protecting them from a regional outage. In addition, users can move EBS snapshots to S3 storage, including Glacier and Glacier Deep Archive tiers for greater cost efficiency.

Druva CloudRanger is a management tool for AWS workloads and automated disaster recovery. The total list of Amazon workloads CloudRanger protects now includes EBS, S3, Redshift, RDS, EC2, DocumentDB and Neptune. Along with AWS backup, Druva also has products for on-premises data center, endpoint and SaaS application protection.

Druva is not alone in the AWS backup space. Clumio recently extended its backup as a service to support EBS, and Veeam recently launched a cloud-native EC2 protection product in AWS Marketplace.

Druva CloudRanger now lets customers apply backup policies by account.

Druva’s new AWS backup capabilities are available immediately to early access customers and are expected to become generally available in the first quarter of 2020.

Gildea said customers who have built apps on Amazon and use them at a large scale have a large amount of off-premises data that may not be under the protection of a business’s traditional backup. Druva’s AWS backup saves these customers the trouble of scripting and developing custom backup, which typically does not scale and needs to be continually maintained with every Amazon update.

There is a growing adoption of hybrid cloud infrastructure, said Steven Hill, a senior analyst at 451 Research. Backup vendors have products for protecting on-premises workloads, as well as offerings for the cloud. However, Hill said the challenge for vendors is eliminating the complexity that comes with managing separate environments, one of which is off premises.

Hill said as businesses push more critical workloads to the cloud, the cost of backup will be minor compared to the potential loss of data. He said some businesses have to learn this the hard way through a data loss incident before they buy in.

“Data protection is a bit like buying insurance — it’s optional,” Hill said.

Hill said over time, businesses will learn that cloud workloads need the same quality of backup and business continuity and disaster recovery (BC/DR) as on-premises environments. However, monitoring off-premises systems is an additional challenge. Therefore, he believes the future of BC/DR will lie in automation and flexibility through policy-based management regardless if an environment is on or off premises.

Redis Labs eases database management with RedisInsight

The robust market of tools to help users of the Redis database manage their systems just got a new entrant.

Redis Labs disclosed the availability of its RedisInsight tool, a graphical user interface (GUI) for database management and operations.

Redis is a popular open source NoSQL database that is increasingly being used in cloud-native Kubernetes deployments as users move workloads to the cloud. Recent reports show open source database use growing quickly as flexible, open systems become a common requirement.

Among the challenges often associated with databases of any type is ease of management, which Redis is trying to address with RedisInsight.

“Database management will never go out of fashion,” said James Governor, analyst and co-founder at RedMonk. “Anyone running a Redis cluster is going to appreciate better memory and cluster management tools.”

Governor noted that Redis is following a tested approach, by building out more tools for users that improve management. Enterprises are willing to pay for better manageability, Governor noted, and RedisInsight aims to do that.

RedisInsight based on RDBTools

The RedisInsight tool, introduced Nov. 12, is based on the RDBTools technology that Redis Labs acquired in April 2019. RDBTools is an open source GUI for users to interact with and explore data stored in a Redis database.

Over the last seven months, Redis added more capabilities to the RDBTools GUI, expanding the product’s coverage for different applications, said Alvin Richards, chief product officer at Redis.

One of the core pieces of extensibility in Redis is the ability to introduce modules that contain new data structures or processing frameworks. So for example, a module could include time series, or graph data structures, Richards explained.

“What we have added to RedisInsight is the ability to visualize the data for those different data structures from the different modules,” he said. “So if you want to visualize the connections in your graph data for example, you can see that directly within the tool.”

RedisInsight overview dashboard

RDBTools is just one of many different third-party tools that exist for providing some form of management and data insight for Redis. There are some 30 other third-party GUI tools in the Redis ecosystem, though lack of maturity is a challenge.

“They tend to sort of come up quickly and get developed once and then are never maintained,” Richards said. “So, the key thing we wanted to do is ensure that not only is it current with the latest features, but we have the apparatus behind it to carry on maintaining it.”

How RedisInsight works

For users, getting started with the new tool is relatively straightforward. RedisInsight is a piece of software that needs to be downloaded and then connected to an existing Redis database. The tool ingests all the appropriate metadata and delivers the visual interface to users.

RedisInsight is available for Windows, macOS and Linux, and also as a Docker container. Redis doesn’t offer RedisInsight as a service yet.

“We have considered having RedisInsight as a service and it’s something we’re still working on in the background, as we do see demand from our customers,” Richards said. “The challenge is always going to be making sure we have the ability to ensure that there is the right segmentation, security and authorization in place to put guarantees around the usage of data.”

Azure Bastion brings convenience, security to VM management

Administrators who want to manage virtual machines securely but want to avoid complicated jump server setup and maintenance have a new option at their disposal.

When you run Windows Server and Linux virtual machines in Azure, you need to configure administrative access. This requires communicating with these VMs from across the internet using Transmission Control Protocol (TCP) port 3389 for Remote Desktop Protocol (RDP), and TCP 22 for Secure Shell (SSH).

You want to avoid the configuration in Figure 1, which exposes your VMs to the internet with an Azure public IP address and invites trouble via port scan attacks. Microsoft publishes its public IPv4 data center ranges, so bad actors know which public IP addresses to check to find vulnerable management ports.

Figure 1. This setup exposes VMs to the internet with an Azure public IP address that makes an organization vulnerable to port scan attacks.

Another remote server management option offers illusion of security  

If you have a dedicated hybrid cloud setup with a site-to-site virtual private network or an ExpressRoute circuit, then you can interact with your Azure VMs the same way you would with your on-premises workloads. But not every business has the money and staff to configure a hybrid cloud.

Another option, shown in Figure 2, combines the Azure public load balancer with NAT to route management traffic through the load balancer on nonstandard ports.

Figure 2. Using NAT and Azure load balancer for internet-based administrative VM access.

For instance, you could create separate NAT rules for inbound administrative access to the web tier VMs. If the load balancer has a public IP address and winserv1 and winserv2 each have a private IP address, then you could create two NAT rules that look like:

  • Inbound RDP connections to the load balancer’s public IP on port TCP 33389 route to TCP 3389 on winserv1’s private IP.
  • Inbound RDP connections to the load balancer’s public IP on port TCP 43389 route to TCP 3389 on winserv2’s private IP.
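Those two rules amount to a simple lookup table from public port to private target. The sketch below models that mapping; the IP addresses and port numbers are hypothetical placeholders for illustration, not values from a real Azure deployment.

```python
# Hypothetical NAT rule table: public port -> (private IP, private port).
# Addresses are illustrative placeholders, not real Azure assignments.
NAT_RULES = {
    33389: ("10.0.1.4", 3389),  # winserv1
    43389: ("10.0.1.5", 3389),  # winserv2
}

def route_inbound(public_port):
    """Return the (private_ip, private_port) target for an inbound
    connection, or None when no NAT rule matches (packet is dropped)."""
    return NAT_RULES.get(public_port)

if __name__ == "__main__":
    print(route_inbound(33389))  # forwarded to winserv1
    print(route_inbound(3389))   # None: standard RDP port is not exposed
```

The nonstandard public ports are the whole trick here, which is exactly why the approach is obfuscation rather than real security.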

The problem with this method is your security team won’t like it. This technique is security by obfuscation that relies on a NAT protocol hack.

Jump servers are safer but have other issues

A third method that is quite common in the industry is to deploy a jump server VM to your target virtual network in Azure as shown in Figure 3.

Figure 3. This diagram details a conventional jump server configuration for Azure administrative access.

The jump server is nothing more than a specially created VM that is usually exposed to the internet but has its inbound and outbound traffic restricted heavily with network security groups (NSGs). You allow your admins access to the jump server; once they log in, they can jump to any other VMs in the virtual network infrastructure for any management jobs.

Of these choices, the jump server is safest, but how many businesses have the expertise to pull this off securely? The team would need intermediate- to advanced-level skill in TCP/IP internetworking, NSG traffic rules, public and private IP addresses and Remote Desktop Services (RDS) Gateway to support multiple simultaneous connections.

For organizations that don’t have these skills, Microsoft now offers Azure Bastion.

What Azure Bastion does

Azure Bastion is a managed network virtual appliance that simplifies jump server deployment in your virtual networks. You drop an Azure Bastion host into its own subnet, perform some NSG configuration, and you are done.

Organizations that use Azure Bastion get the following benefits:

  • No more public IP addresses for VMs in Azure.
  • RDP/SSH firewall traversal. Azure Bastion tunnels the RDP and SSH traffic over a standard, non-VPN Transport Layer Security/Secure Sockets Layer connection.
  • Protection against port scan attacks on VMs.

How to set up Azure Bastion

Azure Bastion requires a virtual network in the same region. As of publication, Microsoft offers Azure Bastion in the following regions: Australia East, East US, Japan East, South Central US, West Europe and West US.

You also need an empty subnet named AzureBastionSubnet. Do not enable service endpoints, route tables or delegations on this special subnet. Later in this tutorial, you can define or edit an NSG on each VM-associated subnet to customize traffic flow.

Because Azure Bastion supports multiple simultaneous connections, size the AzureBastionSubnet subnet with at least a /27 IPv4 address space. One reason for this address space size is to give Azure Bastion room to autoscale, similar to autoscaling in Azure Application Gateway.
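As a quick sanity check on that sizing guidance, Python's standard library can show what a /27 actually provides. The subnet address below is a hypothetical example; the five reserved addresses reflect Azure's general rule of reserving five IPs in every subnet.

```python
import ipaddress

# A hypothetical address block for the AzureBastionSubnet.
subnet = ipaddress.ip_network("10.0.254.0/27")

total = subnet.num_addresses  # a /27 contains 2**5 = 32 addresses
usable = total - 5            # Azure reserves 5 addresses per subnet

print(total, usable)  # 32 27
```

Anything smaller than a /27, such as a /28 with 16 addresses, leaves little headroom for the service to scale out additional instances.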

Next, browse to the Azure Bastion configuration screen and click Add to start the deployment.

Figure 4. Deploying an Azure Bastion resource.

As you can see in Figure 4, the deployment process is straightforward if the virtual network and AzureBastionSubnet subnet are in place.

According to Microsoft, Azure Bastion will support native RDP and SSH clients in time, but for now you establish your management connection via the Connect experience in the Azure portal. Navigate to a VM’s Overview blade, click Connect, and switch to the Bastion tab as shown in Figure 5.

Figure 5. The Azure portal includes an Azure Bastion connection workflow.

On the Bastion tab, provide an administrator username and password, and then click Connect one more time. Your administrative RDP or SSH session opens in another browser tab, shown in Figure 6.

Figure 6. Manage a Windows Server VM in Azure with Azure Bastion using an Azure portal-based RDP session.

You can share clipboard data between the Azure Bastion-hosted connection and your local system. Close the browser tab to end your administrative session.

Customize Azure Bastion

To configure Azure Bastion for your organization, create or customize an existing NSG to control traffic between the Azure Bastion subnet and your VM subnets.

Microsoft provides default NSG rules to allow traffic among subnets within your virtual network. For a more efficient and powerful option, upgrade your Azure Security Center license to Standard and onboard your VMs to just-in-time (JIT) VM access, which uses dynamic NSG rules to lock down VM management ports unless an administrator explicitly requests a connection.

You can combine JIT VM access with Azure Bastion, which results in this VM connection workflow:

  • Request access to the VM.
  • Upon approval, proceed to Azure Bastion to make the connection.

Azure Bastion needs some fine-tuning

Azure Bastion has a fixed hourly cost; Microsoft also charges for outbound data transfer after 5 GB.

Azure Bastion is an excellent way to secure administrative access to Azure VMs, but there are a few deal-breakers that Microsoft needs to address:

  1. You need to deploy an Azure Bastion host for each virtual network in your environments. If you have three virtual networks, then you need three Azure Bastion hosts, which can get expensive. Microsoft says virtual network peering support is on the product roadmap. Once Microsoft implements this feature, you can deploy a single Bastion host in your hub virtual network to manage VMs in peered spoke virtual networks.
  2. There is no support for PowerShell remoting ports. Microsoft supports RDP instead, which runs counter to its own guidance to avoid GUI-based server management.
  3. Microsoft’s documentation does not give enough architectural details to help administrators determine the capabilities of Azure Bastion, such as whether an existing RDP session Group Policy can be combined with Azure Bastion.

Alluxio updates data orchestration platform, launches 2.0

Alluxio has launched Alluxio 2.0, a platform designed for data engineers who manage and deploy analytical and AI workloads in the cloud.

According to Alluxio, the 2.0 version was built particularly with hybrid and multi-cloud environments in mind, with the aim of providing data orchestration to bring data locality, accessibility and elasticity to compute.

Alluxio 2.0 Community Edition and Enterprise Edition provide a handful of new capabilities, including data orchestration for multi-cloud, compute-optimized data access for cloud analytics, AWS support and architectural foundations using open source.

Data orchestration for multi-cloud

There are three main components to the data orchestration capabilities of Alluxio 2.0: policy-driven data management, administration of data access policies and cross-cloud storage data movement using data service.

Policy-driven data management enables data engineers to automate data movement across different storage systems based on predefined policies. Users can also automate tiering of data across any environment or any number of storage systems. Alluxio claims this will reduce storage costs because the data platform teams will only manage the most important data in the expensive storage systems, while moving less important data to cheaper alternatives.
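A tiering policy of this kind can be pictured as an age-based rule: data untouched past some threshold drops to a cheaper tier. The tier names and 90-day threshold below are hypothetical illustrations, not Alluxio's actual policy syntax.

```python
from datetime import datetime, timedelta

# Hypothetical policy: data untouched for more than 90 days moves
# from premium storage to a cheaper archive tier.
ARCHIVE_AFTER = timedelta(days=90)

def choose_tier(last_accessed, now=None):
    """Return the storage tier a data set should live in under the policy."""
    now = now or datetime.utcnow()
    return "archive" if now - last_accessed > ARCHIVE_AFTER else "premium"

if __name__ == "__main__":
    now = datetime(2019, 7, 1)
    print(choose_tier(datetime(2019, 6, 1), now))  # 30 days old -> premium
    print(choose_tier(datetime(2019, 1, 1), now))  # 181 days old -> archive
```

Running a rule like this continuously across storage systems is what lets the platform team keep only hot data on expensive storage without manual migration.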

The administration of data access policies enables users to configure policies at any directory or folder level to streamline data access and workload performance. This includes defining behaviors for individual data sets for core functions, such as writing data or syncing it with Alluxio storage systems.

With cross-cloud storage data movement using data service, Alluxio claims users get highly efficient data movement across cloud stores, such as AWS S3 and Google Cloud services.

Compute-optimized data access for cloud analytics

The compute-optimized data access capabilities include two components: compute-focused cluster partitioning and integration with external data sources over REST.

Compute-focused cluster partitioning enables users to partition a single Alluxio cluster based on any dimension. This keeps data sets within each framework or workload from being contaminated by the other. Alluxio claims that this reduces data transfer costs and constrains data to stay within a specific region or zone.

Integration with external data sources over REST enables users to import data from web-based sources, which can then be aggregated in Alluxio to perform analytics. Users can also direct web locations with files to Alluxio to be pulled in as needed.

AWS support

The new suite provides Amazon Elastic MapReduce (EMR) service integration. According to Alluxio, Amazon EMR is frequently used during the process of moving to cloud services to deploy analytical and AI workloads. Alluxio is now available as a data layer within EMR for Spark, Presto and Hive frameworks.

Architectural foundations using open source

According to Alluxio, core foundational elements have been rebuilt using open source technologies. RocksDB is now used for tiering metadata of files and objects for data that Alluxio manages to enable hyperscale. Alluxio uses gRPC as the core transport protocol for communication with clusters, as well as between the client and master.

In addition to the main components, other new features include the following:

  • Alluxio Data Service: A distributed clustered service.
  • Adaptive replication: Configures a range for the number of copies of data stored in Alluxio that are automatically managed.
  • Embedded journal: A fault tolerance and high availability mode for file and object metadata that uses the Raft consensus algorithm and is separate from other external storage systems.
  • Alluxio POSIX API: A Portable Operating System Interface-compatible API that enables frameworks such as TensorFlow, Caffe and other Python-based models to directly access data from any storage system through Alluxio using traditional file access.

Alluxio 2.0 Community Edition and Enterprise Edition are both generally available now.

Announcing general availability of Azure IoT Hub’s integration with Azure Event Grid

We’re proud to see more and more customers using Azure IoT Hub to control and manage billions of devices, send data to the cloud and gain business insights. We are excited to announce that IoT Hub integration with Azure Event Grid is now generally available, making it even easier to transform these insights into actions by simplifying the architecture of IoT solutions. Some key benefits include:

  • Easily integrate with modern serverless architectures, such as Azure Functions and Azure Logic Apps, to automate workflows and downstream processes.
  • Enable alerting with quick reaction to creation, deletion, connection, and disconnection of devices.
  • Eliminate the complexity and expense of polling services, and integrate events with third-party applications through webhooks for tasks such as ticketing, billing system and database updates.

Together, these two services help customers easily integrate event notifications from IoT solutions with other powerful Azure services or third-party applications. These services add important device lifecycle support with events such as device created, device deleted, device connected, and device disconnected, in a highly reliable, scalable, and secure manner.
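On the receiving end, a subscriber such as an Azure Function sees each notification as a JSON event whose eventType names the lifecycle change, using Event Grid's published "Microsoft.Devices.*" naming. The filtering helper and sample payload below are illustrative sketches, not code from either service.

```python
# Sketch: filter IoT Hub device lifecycle events delivered by Event Grid.
# Event type names follow the documented "Microsoft.Devices.*" scheme;
# the helper and sample batch are illustrative.
CONNECTION_EVENTS = {
    "Microsoft.Devices.DeviceConnected",
    "Microsoft.Devices.DeviceDisconnected",
}

def connection_changes(events):
    """Return (deviceId, eventType) pairs for connect/disconnect events."""
    return [
        (e["data"]["deviceId"], e["eventType"])
        for e in events
        if e["eventType"] in CONNECTION_EVENTS
    ]

if __name__ == "__main__":
    batch = [
        {"eventType": "Microsoft.Devices.DeviceConnected",
         "data": {"deviceId": "thermostat-01"}},
        {"eventType": "Microsoft.Devices.DeviceCreated",
         "data": {"deviceId": "valve-07"}},
    ]
    print(connection_changes(batch))
```

Because Event Grid pushes these events, a handler like this replaces the polling loop a solution would otherwise need to detect device connects and disconnects.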

As of today, this capability is available in the following regions:

  • Asia Southeast
  • Asia East
  • Australia East
  • Australia Southeast
  • Central US
  • East US 2
  • West Central US
  • West US
  • West US 2
  • South Central US
  • Europe West
  • Europe North
  • Japan East
  • Japan West
  • Korea Central
  • Korea South
  • Canada Central
  • Central India
  • South India
  • Brazil South
  • UK West
  • UK South
  • East US, coming soon
  • Canada East, coming soon

Azure Event Grid became generally available earlier this year and has built-in integration with a growing list of Azure services, which now includes IoT Hub.

As we work to deliver more events from Azure IoT Hub, we are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.

We would love to hear more about your experiences with the preview and get your feedback! Are there other IoT Hub events you would like to see made available? Please continue to submit your suggestions through the Azure IoT User Voice forum.

News roundup: Manage employee resource groups and more

This week’s news roundup features a tool to manage employee resource groups, a roadmap for a wellness coaching technology program and an AI-powered platform to match employees with the right insurance options.

Ready, set, engage

Espresa, which makes a platform for automating employee programs, has added new features that can track and manage employee resource groups.

Employee resource groups, which are organically formed clubs of people with shared enthusiasms, are increasingly popular in U.S. corporations. A 2016 study by Bentley University indicated 90% of Fortune 500 companies have employee resource groups, and 8.5% of American employees participate in at least one.

At a time when employee retention has become more critical, thanks to a very tight labor market, employee resource groups can help employee engagement. But the grassroots nature of the efforts makes it hard for both employees and HR departments to track and manage them.

In many companies today, employee resource groups are managed with a cobbled-together collection of wiki pages, Google Docs and Evite invitations, said Raghavan Menon, CTO of Espresa, based in Palo Alto, Calif. And HR departments often have no idea what’s going on, when it’s happening or who is in charge.

“Today, nothing allows the employer or company to actually promote [employee resource groups] and then decentralize them to allow employees to manage and run the groups with light oversight from HR,” Menon explained.

Espresa’s new features give HR departments a web-based way to keep track of the employee resource groups, while giving the employees a matching mobile app to help them run the efforts.

“When employees are running things, they’re not going to use it if it’s an old-style enterprise app,” he said. “They want consumer-grade user experience on a mobile app.”

With Espresa, HR staff can also measure employee resource groups’ success factors, including participation and volunteer activity levels. That information can then be used to make decisions about company funding or a rewards program, Menon said.

An alternate health coach

Is it possible to help an employee with a chronic condition feel supported and empowered to make lifestyle changes using high-tech health coaching and wearable health technology? According to John Moore, M.D., medical director at San Francisco-based Fitbit, the answer is yes.

During World Congress’ 10th annual Virtual Health Care Summit in Boston, Moore outlined a health coaching roadmap designed to help HR departments and employers meet workers where they are.

“Hey, we know the healthcare experience can be really tough, and it’s hard to manage with other priorities,” he said. “We know you have a life.”

Using a health coach, wearables or a mobile phone — and possibly even looping in family and friends — an employee with a health condition is walked through the steps of setting micro-goals over a two-week period. Reminders, support and encouragement are delivered via a wearable or a phone and can include a real or virtual coach, or even a family intervention, if necessary.

The idea, Moore stressed, is to enable an HR wellness benefits program to give ownership of lifestyle changes back to the employee, while at the same time making the goals sufficiently small to be doable.

“This is different than [typical] health coaching in the workplace,” he said. “This is going to be a much richer interaction on a daily basis. And because it’s facilitated by technology, it’s more scalable and more cost-effective. We’ll be able to collect information that spans from blood pressure, to weight, to steps, to glucose activity and sleep data to get the whole picture of the individual so they can understand themselves better.”

This is an in-the-works offering from Fitbit, and it will not be limited to just the Fitbit-brand device. The platform will be based on technology from Twine Health, which Fitbit acquired in February 2018. Moore outlined a vision of interoperability that could include everything from the pharmacy to a glucose meter to, eventually, an electronic health record system. This could work in tandem with a company’s on-site or near-site health clinic and expand from there, he said.

“Technology can help break down barriers that have existed in traditional healthcare. Right now, interactions are so widely spaced, you can’t put coaches in the office every day or every week. There needs to be a way to leverage technology,” he said. “We can’t just give people an app with an AI chatbot and expect it to magically help them. The human element is still a very important piece, and we can use technology to make that human superhuman.”

HR on the go

StaffConnect has released version 2.2 of its mobile engagement platform, which includes new options for customers to create portals for easier access to payroll, training and other HR information and forms. The StaffConnect service can be used by workers in the office and by what the company calls “nondesk employees,” or NDEs.

The company’s 2018 Employee Engagement Survey showed more than one-third of companies have at least 50% of their workforce as NDEs and highlighted the challenges of keeping all employees equally informed and engaged. The survey indicated the vast majority of companies continue to use either email (almost 80%) or an intranet (almost 49%) to communicate with employees, while just 2% of companies reach out via mobile devices.

The company is also now offering a REST API to make it easier to integrate its platform into existing HR services, and it added custom branding and increased quiz feature options to boost customization.

StaffConnect’s new version also offers additional security options and features, including GDPR compliance and protection for data at rest.

Manage APIs with connectivity-led strategy to cure data access woes

An effective strategy to manage APIs calls for more than just building and publishing APIs. It can enable API-led connectivity, DevOps agility and easier implementation of new technologies, like AI and function as a service, or FaaS.

Real-time data access and delivery are critical to create excellent consumer experiences. The industry’s persistent appetite for API management and integration to connect apps and data is exemplified by Salesforce’s MuleSoft acquisition in March 2018.

In this Q&A, MuleSoft CTO Ross Mason discusses the importance of a holistic strategy to manage APIs that connect data to applications and that speed digital transformation projects, as well as development innovation.

Why do enterprises have so much trouble with data access and delivery?

Ross Mason: Historically, enterprises have considered IT a cost center — one that typically gets a budget cut every year and must do more with less. It doesn’t make sense to treat as a cost center the part of the organization that has a treasure-trove of data and functionality to build new consumer experiences.

In traditional IT, every project is built from the ground up, and required customer data resides separately in each project. There really is no reuse. They have used application integration architectures, like ESBs [enterprise service buses], to suck the data out from apps. That’s why enterprise IT environments have a lot of point-to-point connectivity inside and enterprises have problems with accessing their data.

Today, if enterprises want easy access to their data, they can use API-led connectivity to tap into data in real time. The web shows us that building software blocks with APIs enables improvements in connection experiences.

How does API-led connectivity increase developers’ productivity?

Mason: Developers deliver reusable API and reusable templates with each project. The next time someone needs access to the API, that data or a function, it’s already there, ready to use. The developer doesn’t need to re-create anything.

Reuse allows IT to keep costs down. It also allows people in other ecosystems within the organization to discover and get access to those APIs and data, so they can build their own applications.

In what ways can DevOps extend an API strategy beyond breaking down application and data silos?

Mason: Once DevOps teams deliver microservices and APIs, they see the value of breaking down other IT problems into smaller, bite-size chunks. For example, they get a lot of help with change management, because one code change does not impact a massive, monolithic application. The code change just impacts, say, a few services that rely on a piece of data or a capability in a system.

APIs make applications more composable. If I have an application that’s broken down into 20 APIs, for example, I can use any one of those APIs to fill a feature or a need in any other application without impacting each other. You remove the dependencies between other applications that talk to these APIs.

Overall, a strong API strategy allows software development to move faster, because you don’t build from the ground up each time. Also, when developers publish APIs, they create an interesting culture dynamic of self-service. This is something that most businesses haven’t had in the past, and it enables developers to build more on their own without going through traditional project cycles.

Which new technologies come next in an API strategy?

Mason: Look at FaaS and AI. Developers now comfortably manage APIs and microservices together to break up monolithic applications. A next step is to add function as a service. This type of service typically calls out to other APIs to get anything done. FaaS gives you a way to stitch these things together for specific purposes.

It’s not too early to get into AI for some use cases. One use of machine learning is to increase developer productivity. Via AI, we learn what the developer is doing and can suggest better approaches. On our runtime management pane, we use machine learning to understand tracking patterns and spot anomalies, to get proactive about issues that might occur.

An API strategy can be extended easily to new technologies, such as IoT, AI and whatever comes next. These systems rely on APIs to interact with the world around them.

Manage all your Hyper-V snapshots with PowerShell

It’s much easier to manage Hyper-V snapshots using PowerShell than with a GUI because PowerShell offers greater flexibility. Once you’re familiar with the basic commands, you’ll be equipped to oversee and change the state of the VMs in your virtual environment.

PowerShell not only reduces the time it takes to perform a task using a GUI tool, but it also reduces the time it takes to perform repeated tasks. For example, if you want to see the memory configured on all Hyper-V VMs, a quick PowerShell command or script is easier to execute than checking VMs one by one. Similarly, you can perform operations related to Hyper-V snapshots using PowerShell.
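That memory check, for instance, takes a single pipeline. A minimal sketch, assuming the Hyper-V PowerShell module is installed on the host (Get-VM and its MemoryStartup/MemoryAssigned properties are standard in that module):

```powershell
# List the memory configuration of every VM on the local Hyper-V host.
Get-VM | Select-Object Name, State, MemoryStartup, MemoryAssigned
```

One line replaces clicking through each VM's settings in Hyper-V Manager.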

A snapshot — or checkpoint, depending on which version of Windows Server you have — is a point-in-time picture of a VM that you can use to restore that VM to the state it was in when the snapshot was taken. For example, if you face issues when updating Windows VMs and they don’t restart properly, you can restore VMs to the state they were in before you installed the updates.

Similarly, developers can use checkpoints to quickly perform application tests.

Before Windows Server 2012 R2, Microsoft didn’t support snapshots for production use. Starting with Windows Server 2012 R2, snapshots were renamed checkpoints, and Windows Server 2016 added production checkpoints, which are fully supported in a production environment.

PowerShell commands for Hyper-V snapshots and checkpoints

Microsoft offers a few PowerShell commands to work with Hyper-V checkpoints and snapshots, such as Checkpoint-VM, Get-VMSnapshot, Remove-VMSnapshot and Restore-VMSnapshot.

If you want to retrieve all the Hyper-V snapshots associated with a particular VM, all you need to do is execute the Get-VMSnapshot PowerShell command with the -VMName parameter. For example, the PowerShell command below lists all the snapshots associated with SQLVM:

Get-VMSnapshot -VMName SQLVM

There are two types of Hyper-V checkpoints available: standard and production checkpoints. If you just need all the production checkpoints for a VM, execute the PowerShell command below:

Get-VMSnapshot -VMName SQLVM -SnapshotType Production

To list only the standard checkpoints, execute the following PowerShell command:

Get-VMSnapshot -VMName SQLVM -SnapshotType Standard
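Which of the two types Checkpoint-VM creates is a per-VM setting. On Windows Server 2016 and later, you can switch it with Set-VM; its -CheckpointType parameter accepts Standard, Production, ProductionOnly and Disabled (SQLVM here is simply the example VM used above):

```powershell
# Prefer production checkpoints, falling back to a standard checkpoint
# when a production checkpoint cannot be created.
Set-VM -Name SQLVM -CheckpointType Production
```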

When it comes to creating Hyper-V checkpoints for VMs, use the Checkpoint-VM PowerShell command. For example, to take a checkpoint for a particular VM, execute the command below:

Checkpoint-VM -Name TestVM -SnapshotName TestVMSnapshot1

The above command creates a checkpoint for TestVM on the local Hyper-V server, but you can use the following command to create a checkpoint for a VM located on a remote Hyper-V server:

Get-VM SQLVM -ComputerName HyperVServer | Checkpoint-VM

There are situations where you might want to create Hyper-V checkpoints of VMs in bulk. For example, before installing an update on production VMs or upgrading line-of-business applications in a VM, you might want to create checkpoints so you can restore the VMs and maintain business continuity if something goes wrong. But if you have several VMs, the checkpoint process might take a considerable amount of time.

You can design a small PowerShell script to take Hyper-V checkpoints for VMs specified in a text file, as shown in the PowerShell script below:

$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs) {
    $ChkName = $ThisVM + "_BeforeUpdates"
    Checkpoint-VM -Name $ThisVM -SnapshotName $ChkName
}
Write-Host "Script finished creating Checkpoints for Virtual Machines."

The above PowerShell script reads VM names from the C:\Temp\ProdVMs.TXT file one by one and then runs the Checkpoint-VM PowerShell command to create the checkpoints.

To remove Hyper-V snapshots from VMs, use the Remove-VMSnapshot PowerShell command. For example, to remove a snapshot called TestSnapshot from a VM, execute the following PowerShell command:

Get-VM SQLVM | Remove-VMSnapshot -Name TestSnapshot

To remove Hyper-V checkpoints from VMs in bulk, use a script similar to the one you used to create the checkpoints. Let’s assume all the VMs are working as expected after installing the updates and you would like to remove the checkpoints. Simply execute the PowerShell script below:

$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs) {
    $ChkName = $ThisVM + "_BeforeUpdates"
    Get-VM $ThisVM | Remove-VMSnapshot -Name $ChkName
}
Write-Host "Script finished removing Checkpoints for Virtual Machines."

To restore Hyper-V snapshots for VMs, use the Restore-VMSnapshot PowerShell cmdlet. For example, to restore or apply a snapshot to a particular VM, use the following PowerShell command:

Restore-VMSnapshot -Name "TestSnapshot1" -VMName SQLVM -Confirm:$False

Let’s assume your production VMs aren’t starting up after installing the updates and you would like to restore the VMs to their previous states. Use the PowerShell script below and perform the restore operation:

$ProdVMs = "C:\Temp\ProdVMs.TXT"
Foreach ($ThisVM in Get-Content $ProdVMs) {
    $ChkName = $ThisVM + "_BeforeUpdates"
    Restore-VMSnapshot -Name $ChkName -VMName $ThisVM -Confirm:$False
}
Write-Host "Script finished restoring Checkpoints for Virtual Machines."

Note that, by default, when restoring a checkpoint for a VM, the command asks for confirmation. To avoid the confirmation prompt, add the -Confirm:$False parameter to the command, as shown above.
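These cmdlets also compose with standard PowerShell filtering, which is handy for checkpoint housekeeping. As a sketch (the VM name and the 30-day cutoff are only examples), the following removes every checkpoint on SQLVM older than 30 days:

```powershell
# Delete checkpoints on SQLVM created more than 30 days ago.
Get-VMSnapshot -VMName SQLVM |
    Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-30) } |
    Remove-VMSnapshot
```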

New tools unveiled to monitor, manage and optimize SAP environments

The world of the SAP intelligent enterprise requires new tools to monitor, manage and optimize SAP environments as they evolve to include new SAP platforms, integrations and advanced technologies.

SAP’s vision of the intelligent enterprise includes SAP Data Hub, which incorporates integration and data management components, and it shows the company can embrace modern open source platforms, like Hadoop and Spark, and hybrid and multi-cloud deployment, according to Doug Henschen, an analyst at Constellation Research.

This openness, along with extending cloud initiatives to Microsoft Azure, Google Cloud Platform and IBM private cloud instances, necessitated a move to bring customers hybrid and multi-cloud data management capabilities, Henschen said.

“The Data Hub, in particular, facilitates hybrid and multi-cloud data access without data movement and copying,” he said. “This is crucial in harnessing data from any source, no matter where it may be running, to facilitate data-driven decisioning.”

At SAP Sapphire Now 2018, several vendors unveiled new tools — or updates to existing ones — that address some of the challenges associated with moving SAP systems to the intelligent enterprise landscape.

  • Tricentis Tosca’s continuous testing method is designed to keep pace with modern SAP environments, unlike traditional testing methods, which were built for previous versions of SAP applications. These legacy testing systems may not always adequately support S/4HANA and Fiori 2.0, so many SAP users have to use manual testing to validate releases, according to Tricentis. Cloud-enabled Tricentis Tosca 11.2 now supports a variety of the newest SAP versions, including S/4HANA and Fiori 2.0.
  • Worksoft announced the release of Worksoft Interactive Capture 2.0, which is test automation software for SAP environments. Worksoft Interactive Capture 2.0 operates on the principle that it’s critical to keep existing SAP applications operating as new systems and applications are being developed. Worksoft Interactive Capture 2.0 allows business users and application functional experts to create automated business workflows, test documentation and test cases.
  • Virtual Forge announced its CodeProfiler for HANA can now scan the SAPUI5 programming language. CodeProfiler for HANA provides detailed information on code quality as a programmer writes code, similar to spell check on a word processor, according to Virtual Forge. This allows coders to identify and manage performance, security and compliance deficiencies early in the HANA application development process. Reducing or eliminating performance decline and application downtime is particularly critical, as HANA enables real-time business applications.
  • As more organizations move their SAP environments to S/4HANA — or plan to — it becomes important to understand how users actually interact with SAP applications. Knoa Software showed a new version of its user experience management application, Knoa UEM for Enterprise Applications — it’s also resold by SAP as SAP User Experience Management by Knoa. The product allows organizations to view and analyze how users are interacting with SAP applications, including activities that lead to errors, never-used applications and workarounds that are needed because an application’s software is bad, according to Knoa. The latest version of Knoa UEM for Enterprise Applications allows companies that are migrating to S/4HANA to analyze usage on a range of SAP applications, including SAP Fiori, SAP Business Client, SAP Enterprise Portal and SAP GUI for Windows. It can also be used for SAP Leonardo application development by determining how customers actually use the applications and developing a business case for the application based on accurate measurements of user experience improvements in the new apps.
  • General Data Protection Regulation (GDPR) compliance is a huge issue now, and Attunity released Gold Client for Data Protection, a data governance application for SAP environments. Gold Client for Data Protection enables the identification and masking of personally identifiable information across production SAP ECC systems, according to Attunity. The software helps organizations to find PII across SAP systems, which then enables them to enforce GDPR’s “right to be forgotten” mandate.

Dig Deeper on SAP development

Mastering PowerShell commands for Exchange by the book

The key to managing Exchange Server 2016 is mastering PowerShell commands for Exchange.

With this latest version of Exchange, IT administrators must learn how to manage Exchange 2016 mailbox and client access and troubleshoot issues with the edge transport server, which routes email online and protects the system from malware and spam. Part of the difficulty of managing Exchange Server is learning how to use PowerShell, as the web-based Exchange admin center cannot handle every issue.

The book Practical PowerShell Exchange Server 2016: Second Edition, by Damian Scoles and Dave Stork, teaches administrators with little PowerShell experience how to use the scripting language to ease configuration jobs or handle tasks with automation.

For experienced PowerShell users, this book shares ways to improve existing scripts. Administrators can learn how to use PowerShell commands for Exchange to customize their servers, manage mailboxes and mobile devices, and create reports.

From migrating to Exchange 2016 to taking advantage of its new functions, this book walks administrators through common tasks with PowerShell commands for Exchange. This excerpt from chapter 14 explains why mailbox migrations work better with PowerShell commands for Exchange:

It’s very unlikely that there is an Exchange admin who has not had, or will not have, to move one or more mailboxes from one Exchange database to another. While some scenarios are quite easy, others require more planning, reporting and so on.

With the introduction of Exchange 2010, Microsoft also improved the one element that would otherwise grow into an almost impossible task: mailbox moves. The revolutionary change in Exchange 2010, the Online Mailbox Move, made it possible to move mailbox data while the user could still access and modify that data. New incoming mail is queued in mail queues until the mailbox is available again (i.e., when the move has either completed successfully or failed).

Practical PowerShell Exchange Server 2016

With the trend of growing average mailbox sizes, this was a necessary step. Otherwise, a migration could take too long to perform in a single big bang, meaning you would have to migrate mailboxes in stages and maintain a coexistence environment until the last mailbox had been moved. It was also a major step toward making Office 365 easier to migrate to, and it gave Microsoft more flexibility in managing servers and databases. Just consider moving mailboxes as in Exchange 2003, hoping that every mailbox has moved before your maintenance window closes…

Luckily this has changed, and as Exchange 2016 can only coexist with Exchange 2010 and 2013, earlier versions of Exchange won’t be an issue. However, the option is still there with the -ForceOffline switch in the New-MoveRequest cmdlet. You shouldn’t have to use it under normal conditions, but from time to time a mailbox is fickle and can only be moved via an offline move.

Now, most of the mailbox move options are available from within the Exchange Admin Center in one way or another. But in our experience, the EAC is probably fine only for simple migrations or the incidental move of one mailbox. If you migrate your server environment from one major build to another, it’s almost impossible to ignore PowerShell. Those migrations are so much more complex and full of caveats that they almost always require the use of PowerShell cmdlets and custom scripts.
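As a rough sketch of what that looks like in practice (the CSV path, mailbox identities and database name below are placeholders, not from the book), a bulk migration typically pairs New-MoveRequest with the move-request reporting cmdlets:

```powershell
# Queue online mailbox moves for every mailbox listed in a CSV file
# with an Identity column, targeting an Exchange 2016 database.
Import-Csv "C:\Temp\MailboxesToMove.csv" | ForEach-Object {
    New-MoveRequest -Identity $_.Identity -TargetDatabase "EX2016-DB01"
}

# Track the progress of all pending moves.
Get-MoveRequest | Get-MoveRequestStatistics |
    Select-Object DisplayName, StatusDetail, PercentComplete
```

Because the moves are online, users keep working while the requests drain in the background.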

Editor’s note: This excerpt is from Practical PowerShell Exchange Server 2016: Second Edition, authored by Damian Scoles and Dave Stork, published by Practical PowerShell Press, 2017.