Tag Archives: options

How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different types of replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster, which also provides local high availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration and hardware setup. The next critical decision is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage so that an application can run on any host and access the same data. Multi-site clusters behave the same way, but they require an independent storage array at each site, with the data replicated between them. The clustered application or virtual machine (VM) at each site should use its own local storage array; otherwise, every disk I/O operation would have to traverse the link to the other location, adding significant latency.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while the VMs run on different nodes. The challenge when using CSV in a multi-site cluster is ensuring that the VMs always write to the disk at their own site, not to the replicated copy. Most storage providers offer CSV-aware solutions, but you must confirm that they explicitly support multi-site clustering scenarios. Vendors will often force writes to the primary site by making the CSV disk at the secondary site read-only, ensuring that the correct disks are always used.
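As a rough, hedged illustration of the built-in tooling involved (the disk and node names below are placeholders, not values from this article), PowerShell's failover clustering cmdlets let you see which node owns each CSV and whether I/O is being redirected:

  Import-Module FailoverClusters

  # Add a clustered disk as a Cluster Shared Volume (disk name is a placeholder)
  Add-ClusterSharedVolume -Name "Cluster Disk 2"

  # Show which node currently owns each CSV
  Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State

  # Check per-node CSV access, including whether I/O is running in redirected mode
  Get-ClusterSharedVolumeState | Select-Object Name, Node, StateInfo

  # Move CSV ownership to a node at the primary site (node name is a placeholder)
  Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node "NodeA-Site1"

Your replication vendor's documentation remains the authority on which site should own each CSV; these cmdlets simply let you verify the ownership and redirection state it enforces.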

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write occurs at the primary site, it is also sent to the secondary site, and the write is not committed until both the primary and secondary storage arrays have acknowledged it. Synchronous replication keeps both sites consistent and avoids data loss if a site crashes, because there is no replication lag between sites. The trade-off of writing to two sets of disks in different locations is that the sites must be physically close, or the added round-trip time will hurt application performance. Even with a high-bandwidth, low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should factor into the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster
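As a rough rule of thumb, light in fiber covers about 200 km per millisecond, so every 100 km between sites adds roughly 1 ms of round-trip latency to each synchronous write before it can be acknowledged. For a concrete feel of how the synchronous/asynchronous choice is typically expressed, Windows Server 2016 and later ship a built-in block replication feature, Storage Replica, that is configured with the same decision as a single parameter. This is offered only as an illustration of the concept, not as one of the third-party solutions discussed in this article, and the server, volume, and replication group names below are placeholders:

  Import-Module StorageReplica

  # Replicate volume D: from the primary site to the secondary site synchronously.
  # Switch -ReplicationMode to Asynchronous for longer distances.
  New-SRPartnership `
      -SourceComputerName "SR-SITE1" -SourceRGName "RG01" `
      -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
      -DestinationComputerName "SR-SITE2" -DestinationRGName "RG02" `
      -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
      -ReplicationMode Synchronous

Third-party replication products expose the same choice through their own management consoles, so the questions to ask your vendor are the same: which mode is supported, and at what distance.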

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors that provide a file-based replication solution that can run on top of commodity storage hardware. These providers keep track of individual files that have changed and copy only those. This is often less efficient than block-level replication because larger chunks of data (full files) must be copied; however, the total cost of ownership can be much lower. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s DataKeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers abstracts the underlying storage arrays at each site, using software that manages disk access and redirects I/O to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements, so you can plan your backup and recovery process. Even though the application’s data is being copied to the secondary site, which is similar to a backup, it does not replace the real thing: if the VM (VHD) at one site becomes corrupted, that corruption will likely be copied to the secondary site as well. You should still regularly back up any production workloads running at either site. This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Author: Symon Perriman

Google joins bare-metal cloud fray

Google has introduced bare-metal cloud deployment options geared for legacy applications such as SAP, for which customers require high levels of performance along with deeper virtualization controls.

“[Bare metal] is clearly an area of focus of Google,” and one underscored by its recent acquisition of CloudSimple for running VMware workloads on Google Cloud, said Deepak Mohan, an analyst at IDC.

IBM, AWS and Azure have their own bare-metal cloud offerings, which allow them to support an ESXi hypervisor installation for VMware, and Bare Metal Solution will apparently underpin CloudSimple’s VMware service on Google, Mohan added.

But Google will also be able to support other workloads that can benefit from bare metal availability, such as machine learning, real-time analytics, gaming and graphical rendering. Bare-metal cloud instances also avert the “noisy neighbor” problem that can crop up in virtualized environments as clustered VMs seek out computing resources, and do away with the general hit to performance known commonly as the “hypervisor tax.”

Google’s bare-metal cloud instances offer a dedicated interconnect to customers and tie into all native Google Cloud services, according to a blog post. The hardware has been certified to run “multiple enterprise applications,” including ones built on top of Oracle’s database, Google said.

Oracle, which lags far behind in the IaaS market, has sought to preserve some of those workloads as customers move to the cloud.

Earlier this year, it formed a cloud interoperability partnership with Microsoft, pushing a use case wherein customers could run enterprise application logic and presentation tiers on Azure infrastructure, while tying back to an Oracle database running on bare-metal servers or specialized Exadata hardware in Oracle’s cloud.

Not all competitive details laid bare

Overall, bare-metal cloud is a niche market, but by some estimates it is growing quickly.

Among hyperscalers such as AWS, Google and Microsoft, the battle is still in its early days, with AWS only making its bare-metal offerings generally available in May 2018. Microsoft has mostly positioned bare metal for memory-intensive workloads such as SAP HANA, while also offering it underneath CloudSimple’s VMware service for Azure.

Meanwhile, Google’s bare-metal cloud service is fully managed by Google, provides a set of provisioning tools for customers, and will have unified billing with other Google Cloud services, according to the blog.

How smoothly this all works together could be a key differentiator for Google in comparison with rival bare-metal providers. Management of bare-metal machines can be more granular than traditional IaaS, which can mean increased flexibility as well as complexity.

Google’s Bare Metal Solution instances are based on x86 systems that range from 16 cores with 384 GB of DRAM to 112 cores with 3,072 GB of DRAM. Storage comes in 1 TB chunks, with customers able to choose between all-flash or a mix of storage types. Google also plans to offer custom compute configurations to customers that need them.

It also remains to be seen how price-competitive Google is on bare metal compared with competitors, which include providers such as Packet, CenturyLink and Rackspace.

The company didn’t immediately provide costs for Bare Metal Solution instances, but said the hardware can be purchased via monthly subscription, with the best deals for customers that sign 36-month terms. Google won’t charge for data movement between Bare Metal Solution instances and general-purpose Google Cloud infrastructure if it occurs in the same cloud region.


What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes, which can scale out should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, the Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At the time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lower price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and 10/25 GbE networking. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is the Fujitsu Primeflex line; Azure Stack HCI node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, with up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.

The Windows Admin Center is a relatively new browser-based tool for consolidated management for local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handle private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available, including PowerShell (both Windows PowerShell and the open source PowerShell Core), as well as third-party products such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.
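Because Azure Stack HCI is built on Windows Server 2019 failover clustering and Storage Spaces Direct, the standard clustering, storage and Hyper-V cmdlets all apply. A brief, hedged sketch (the cluster name below is a placeholder):

  Import-Module FailoverClusters, Hyper-V

  # Check node health on an Azure Stack HCI cluster
  Get-ClusterNode -Cluster "HCI-Cluster01" | Select-Object Name, State

  # Review the Storage Spaces Direct virtual disks backing the cluster volumes
  Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, HealthStatus

  # List the VMs running on the local node
  Get-VM | Select-Object Name, State, MemoryAssigned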

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.


Managed private cloud gives IT a cost-effective option

Cost is a big factor when IT admins explore different options for cloud. In certain cases, a managed private cloud may be more cost-effective than public cloud.

Canonical, a distributor of and contributor to Ubuntu Linux, helps organizations manage their cloud setups and uses a variety of proprietary technology to streamline management. Based on the company's BootStack offering, Canonical's managed cloud supports a variety of applications and use cases. A managed private cloud can help organizations operate in the "Goldilocks zone," where they have the right amount of cloud resources for their needs, said Stephan Fabel, director of product at Canonical, based in London.

Currently, 35% of enterprises are moving data to a private cloud, but hurdles such as hardware costs and initial provisioning can cause organizations to delay deployment, according to a June 2018 report by 451 Research. Here, Fabel talks about what makes a managed private cloud a more effective strategy for the long term.

What is different about BootStack? 

Stephan Fabel: BootStack is applicable to the entire reference architecture of our OpenStack offering. The use case will often dictate a loose handling of the details in terms of the reference architecture. So, you can say, for example, deploy a telco-grade cluster or a cluster for enterprise or a cluster for application development, and those have very different characteristics from one company to another.

We support Swift [the OpenStack object storage API] and Chef [a configuration management framework for deployments]. With some of the more locked-down distributions of OpenStack, we support multiple Cinder volume stores. … We have the ability to work with the Contrail application programming interface and even OpenContrail.

The reason why we can do a managed private cloud at the economics we portray is that we have the operational efficiencies baked into our tooling. MAAS [Metal as a Service] and Juju [an open source application modeling tool] provide that base layer on which OpenStack can run and be managed.

One thing that is not entirely unique — but it is rare — is that BootStack actually stands for ‘build, operate and optionally transfer.’ Managed service providers generally want users to get on their platform and never leave. We basically say, ‘You know you want to get started with OpenStack, but you’re not sure you’re operationally ready. That’s fine; jump on BootStack for a year, and then build up your confidence or skill set. When you’re ready to take it on, go for it.’

We’ll transfer back the stack in your control and convert it from a managed service to a generic support contract.

What features contribute to a managed private cloud being more cost-effective than public cloud? 

Fabel: The value of public cloud is that you can get started with a snap of your finger, use your credit card and off you go. … However, down the road, you can end up in a situation where due to smart lock-in schemes, nonopen APIs’ interfaces and unique business features, you’re locked into this public cloud and paying a lot of money out of your Opex.

The challenge is it takes a lot more investment upfront to actually get started with a managed private cloud. Somebody still has to order hardware, it still constitutes a commitment, and someone still needs to install the hardware and run it for you. … But, for what it’s worth, we’ll send two engineers, and it’ll take two weeks and you’ll have a private cloud.

Is it common to be able to deploy a private cloud with just two engineers, or is that specific to Canonical?

Fabel: You’ll certainly find in this space a lot of players who will emphasize their expertise and the ability to do almost anything you want with OpenStack, in a similar amount of time. The question is, what kind of cloud is within that offering? If you go to a professional service-oriented company, they’ll try and sell you bodies to continually engage with as their way of staying with the contract, which racks up those tremendous costs.

The differentiating factor with Juju, as opposed to other configuration tooling such as Puppet or Chef, is that it takes things further: it doesn't just install packages and make sure the configuration is set; it actually orchestrates the OpenStack installation.

So, for example, a classic problem with OpenStack is upgrading it. If you go to some of our competitors, their upgrades are going to be an extremely expensive professional services quote, because it's so manual. What we did is basically encode the smarts into what we call Charms, which work in conjunction with Juju to manage that automatically.

How does automation help reduce the cost of managed private cloud? 

Fabel: We launched [Juju] five years ago, and it went through a lot of growing pains. Back then, everybody was set on configuration management, and they were appropriating configuration management technology to also do orchestration. … That’s great if you’re only deploying one thing. But, as OpenStack exhibits, it’s not quite that easy when you try and deploy something a little bit more complex.

[Now,] Juju basically says, ‘I will write out the configuration because I’m an agent and I understand the context.’ If you can automate tasks such as server installation and management, and you can code that logic, then you have to think less.

It does require more discipline on the Charms side and more knowledge on the operator in case something does go wrong. … For you to be able to debug this, you actually have to understand how to use it. And that’s a hurdle that people in the beginning sort of dismissed.

Will there always be a mix of public and private managed cloud?

Fabel: We’re seeing interest in power users of OpenStack who want to move onto new frontiers, such as Kubernetes, which seems to be it right now, and we’re ready to take [management] off their hands.

I think we’ll see more adoption of managed services from the more advanced user base and in the more off-the-shelf kind of market that want a 15-node or 20-node cloud. It’s not about the 2,000-node cloud as much anymore. I think there’s a whole market that’s just saying, ‘I have a 10-node cloud, and I can pay VMware or someone to run it for me, and I choose so because it’s economically more attractive.’ 

Wanted – PC speakers

As per title, looking for PC speakers. Am open to a variety of options, what have you got?

Thanks,

Andy

Location: Gosport, Hampshire

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

VeloCloud SD-WAN made more responsive to network troubles

VeloCloud Networks Inc. has added policy options to its SD-WAN software that make the technology more responsive to network problems that could affect application performance in branch and remote offices.

The enhancements, introduced this week, are delivered through software upgrades to the company’s cloud-based orchestrator and gateway and the VeloCloud SD-WAN appliance deployed at the branch and data center.

In general, VeloCloud’s technology lets companies combine T1 lines, MPLS links and other enterprise network connections with cheaper broadband, DSL and 4G consumer links. The subscription-based SD-WAN provides continuous monitoring, packet-by-packet traffic steering and link remediation to maintain network performance and reliability.

The new 3.1 version lets companies dedicate segments of the network to specific traffic, such as VoIP, guest Wi-Fi or sensitive credit card data heading from a retail store to the corporate data center. Organizations can set policies that provide multiple paths for a traffic flow. If performance on a connection falters, the VeloCloud SD-WAN will steer packets elsewhere to maintain a user-defined quality of service.

The on-the-fly correction is targeted at performance-sensitive applications, such as VoIP. “Not only can you set things up quickly, but once you set it up, the network will adjust,” said Bob Laliberte, an analyst at Enterprise Strategy Group, based in Milford, Mass.

Other improvements include a more straightforward process for setting up an IPsec VPN between a VeloCloud SD-WAN and a non-VeloCloud location. Customers can also use the vendor’s orchestration software to deploy security services from VeloCloud partners and to apply group profiles when adding VeloCloud Edge appliances.

In April, VeloCloud introduced a partner program that lets third-party security vendors integrate their products with the SD-WAN service. Partners include Check Point Software Technologies, Fortinet, IBM, Palo Alto Networks and Zscaler.

VeloCloud claims outcome-driven networking in SD-WAN

VeloCloud is marketing its latest upgrade as “outcome-driven networking,” a term not widely used in the industry. Instead, networking vendors and some analyst firms are pushing an approach called “intent-based networking,” which uses GUI-based tools to abstract the many complicated technical steps underpinning the delivery of services.

Earlier this month, virtualization vendor VMware announced plans to acquire VeloCloud for an undisclosed sum. If completed in early February as planned, the acquisition would place VMware in head-to-head competition with Cisco in the branch office. In August, Cisco acquired VeloCloud rival Viptela for $610 million.