Tag Archives: options

Managing Mailbox Retention and Archiving Policies in Microsoft 365

Microsoft 365 (formerly Office 365) provides a wide set of options for managing data classification, retention of different types of data, and archiving data. This article will show the options a Microsoft 365 administrator has when setting up retention policies for Exchange, SharePoint, and other Microsoft 365 workloads and how those policies affect users in Outlook. It’ll also cover the option of an Online Archive Mailbox and how to set one up.

There’s also an accompanying video to this article which shows you how to configure a retention policy and retention labels, enable Archive mailboxes, and create a move-to-archive retention tag.

Before we continue, we know that for all Microsoft 365 admins security is a priority. And in the current climate of COVID-19, it’s well documented how hackers are working around the clock to exploit vulnerabilities. As such, we assembled two Microsoft experts to discuss the critical security features in Microsoft 365 you should be using right now in a free webinar on May 27. Don’t miss out on this must-attend event – save your seat now!

How To Manage Retention Policies in Microsoft 365

There are many reasons to consider labeling data and using retention policies, but before we discuss them, let’s look at how Office 365 manages your data in the default state. For Exchange Online (where mailboxes and Public Folders are stored, if you use them), each database has at least four copies, spread across two datacenters. One of these copies is a lagged copy, which means replication to it is delayed to provide the option to recover from a data corruption issue. In short, a disk, server, rack, or even datacenter failure isn’t going to mean that you lose your mailbox data.

Further, the default policy (for a few years now) is that deleted items in Outlook stay in the Deleted Items folder “forever”, until you empty it or they are moved to an archive mailbox. If an end-user deletes items out of their Deleted Items folder, they’re kept for another 30 days (as long as the mailbox was created in 2017 or later), meaning the user can recover them by opening the Deleted Items folder and clicking the recovery link.

Where to find recoverable items in Outlook

This opens the dialog box where a user can recover one or more items.

Recovering deleted items in Exchange Online
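
If you’d rather do this on a user’s behalf from PowerShell, here’s a minimal sketch using the Exchange Online Get-RecoverableItems and Restore-RecoverableItems cmdlets. It assumes the ExchangeOnlineManagement module is installed and that your admin account has the necessary role (typically Mailbox Import Export); the mailbox address and subject filter are placeholders.

```powershell
# Connect to Exchange Online (admin UPN is an example)
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com

# List recoverable messages still sitting in the user's Recoverable Items folder
Get-RecoverableItems -Identity user@contoso.com -FilterItemType IPM.Note

# Restore items whose subject contains a given string back to their original folders
Restore-RecoverableItems -Identity user@contoso.com -FilterItemType IPM.Note -SubjectContains "Quarterly budget"
```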

If an administrator deletes an entire mailbox, it’s kept in Exchange Online for 30 days, and you can recover it by restoring the associated user account.

It’s also important to realize that Microsoft does not back up your data in Microsoft 365. Through native data protection in Exchange Online and SharePoint Online, they make sure that they’ll never lose your current data, but if you have deleted an item, document, or mailbox for good, it’s gone. There’s no secret place where Microsoft’s support can get it back from (although it doesn’t hurt to try), hence the popularity of third-party backup solutions such as Altaro Office 365 Backup.

Litigation Hold – the “not so secret” secret

One option that I have seen some administrators employ is to use litigation hold or in-place hold (the latter feature is being retired in the second half of 2020), which keeps all deleted items in a hidden subfolder of the Recoverable Items folder until the hold lapses (which could be never if you make it permanent). Note that you need at least an E3 or Exchange Online Plan 2 license for this feature to be available. Litigation hold is designed to be used when a user is under some form of investigation, ensuring that no evidence can be purged by that user; it’s not designed as a “make sure nothing is ever deleted” policy. However, I totally understand the job security it can bring when the CEO is going ballistic because something super important is “gone”.

Litigation hold settings for a mailbox
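
If you’d rather script this than click through the admin center, a minimal sketch with Exchange Online PowerShell might look like the following; the mailbox address and the 2555-day (roughly seven-year) duration are just examples.

```powershell
# Connect to Exchange Online (admin UPN is an example)
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com

# Place the mailbox on litigation hold for approximately seven years
Set-Mailbox -Identity "user@contoso.com" -LitigationHoldEnabled $true -LitigationHoldDuration 2555

# Verify the hold took effect
Get-Mailbox -Identity "user@contoso.com" | Format-List LitigationHoldEnabled, LitigationHoldDuration
```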

Retention Policies

If the default settings and options described above don’t satisfy the needs of your business or the regulatory requirements you may have, the next step is to consider retention policies. A few years ago, there were different policy frameworks for the different workloads in Office 365, showing the on-premises heritage of Exchange and SharePoint. Thankfully, we now have a unified service that spans most Office 365 workloads. Retention in this context refers to ensuring that data can’t be deleted until the retention period expires.

There are two flavors here. The first is label policies, which publish labels to your user base, letting users pick a retention setting by assigning individual emails or documents a label (only one label per piece of content). Note that labels can do two things that retention policies can’t: firstly, they can apply from the date the content was labeled, and secondly, you can trigger a disposition / manual review of a SharePoint or OneDrive for Business document when the retention period expires.

Labels only apply to objects that you label; they don’t retroactively scan through email or documents at rest. While labels can be part of a bigger data classification story, my recommendation is that anything that relies on users remembering to do something extra to manage data will only work with extensive training, and then only for a small subset of very important data. You can (if you have E5 licensing for the users in question) use label policies to automatically apply labels to sensitive content, based on a search query you build (particular email subject lines or recipients, or SharePoint document types in particular sites, for instance) or on a set of trainable classifiers for offensive language, resumes, source code, harassment, profanity, and threats. You can also apply a retention label to a SharePoint library, folder, or document set.

As an aside, Exchange Online also has personal labels that are similar to retention labels but created by users themselves instead of being created and published by administrators.
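
For completeness, here’s a hedged sketch of creating a retention label from Security & Compliance PowerShell rather than the portal; the label name and seven-year period are purely illustrative. Publishing the label to users is then done with a label policy, as described above.

```powershell
# Connect to the Security & Compliance PowerShell endpoint (admin UPN is an example)
Connect-IPPSSession -UserPrincipalName admin@contoso.com

# Create a retention label that keeps content for 7 years from last modification, then deletes it
New-ComplianceTag -Name "Contracts" `
    -RetentionDuration 2555 `
    -RetentionType ModificationAgeInDays `
    -RetentionAction KeepAndDelete `
    -Comment "Keep contracts for 7 years, then delete"
```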

The second, and in my opinion more holistic, flavor is retention policies. These apply to all items stored in the various repositories and can apply across several different workloads. Retention policies can also both ensure that data is retained for a set period of time AND disposed of after that period expires, which is often a regulatory requirement. A quick note here if you’re going to play around with policies: they’re not applied instantaneously; it can take up to 24 hours or even 7 days, depending on the workload and type of policy, so prepare to be patient.

These policies can apply across Exchange, SharePoint (which means files stored in Microsoft 365 Groups, Teams, and Yammer), OneDrive for Business, and IM conversations in Skype for Business Online / Teams and Groups. Policies can be broad and apply across several workloads, or narrow and only apply to a specific workload or location in that workload. An organization-wide policy can apply to the workloads above (except Teams; you need a separate policy for its content), and you can have up to 10 of these in a tenant. Non-org-wide policies can be applied to specific mailboxes, sites, or groups, or you can use a search query to narrow down the content that the policy applies to. The limits are 10,000 policies in a tenant, each of which can apply to up to 1,000 mailboxes or 100 sites.

Especially with org-wide policies, be aware that they apply to ALL selected content, so if you set a policy to retain everything for four years and then delete it, data is going to start disappearing automatically after four years. Note that you can set the “timer” to start when the content is created or when it was last modified; the latter is probably more in line with what people would expect, otherwise you could have a list that someone updates weekly disappear suddenly because it was created several years ago.

To create a retention policy, log in to the Microsoft 365 admin center, expand Admin centers, and click on Compliance. In this portal, click on Policies and then Retention under Data.

Retention policies link in the Compliance portal

Select the Retention tab and click New retention policy.

Retention policies and creating a new one

Give your policy a name and a description, select which data stores it’s going to apply to, and choose whether the policy is going to retain and then delete data or just delete it after the specified time.

Retention settings in a policy
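
The same kind of policy can also be created from Security & Compliance PowerShell. Below is a minimal sketch; the policy name, the selected locations and the four-year period are examples only, not a recommendation.

```powershell
# Connect to the Security & Compliance PowerShell endpoint (admin UPN is an example)
Connect-IPPSSession -UserPrincipalName admin@contoso.com

# Create the policy and scope it to Exchange, SharePoint and OneDrive
New-RetentionCompliancePolicy -Name "Retain 4 years then delete" `
    -ExchangeLocation All `
    -SharePointLocation All `
    -OneDriveLocation All

# Add the rule: keep for 4 years from last modification, then delete
New-RetentionComplianceRule -Name "Retain 4 years then delete rule" `
    -Policy "Retain 4 years then delete" `
    -RetentionDuration 1460 `
    -ExpirationDateOption ModificationAgeInDays `
    -RetentionComplianceAction KeepAndDelete
```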

Outside the scope of this article, but related, are sensitivity labels. Instead of classifying data based on how long it should be kept, these policies classify data based on the security needs of the content. You can then apply policies to control the flow of emails with this content, or automatically encrypt documents in SharePoint, for instance. You can also combine sensitivity and retention labels in policies.

Conflicts

Since there can be multiple policies applied to the same piece of data, and perhaps even retention labels in play, there could be situations where conflicting settings apply. Here’s how these conflicts are resolved.

Retention wins over deletion, making sure that nothing is deleted that you expected to be retained, and the longest retention period wins: if one policy says two years and another says five years, it’ll be kept for five. The third rule is that explicit wins over implicit, so if a policy has been applied to a specific location such as a SharePoint library, it’ll take precedence over an organization-wide general policy. Finally, the shortest deletion period wins, so if an administrator has made a choice to delete content after a set period of time, it’ll be deleted then, even if another policy applies that requires deletion after a longer period. Here’s a graphic that shows the four rules and their interaction:

Policy conflict resolution rules (courtesy of Microsoft)

As you can see, building a set of retention policies that really work for your business and don’t unintentionally cause problems is a project for the whole business, working out exactly what’s needed across different workloads, rather than the job of a “click-happy” IT administrator.

Archive Mailbox

It all started with trying to rid the world of PST-stored email. Back in the day, when hard drives and SAN storage only provided small amounts of capacity, many people learnt to “expand” their small mailbox quota with local PST files. The problem is that these local files aren’t backed up and aren’t included in regulatory or eDiscovery searches. Office 365 largely solved this problem by providing generous quotas: the Business plans provide 50 GB per mailbox, whereas the Enterprise plans have 100 GB limits.

If you need more mailbox storage, one option is to enable online archiving, which provides another 50 GB mailbox for the Business plans and an unlimited (see below) mailbox for the Enterprise plans. There are some limitations on this “extra” mailbox: it can only be accessed online, and it’s never synchronized to your offline (OST) file in Outlook. When you search for content, you must select “all mailboxes” to see matches in your archive mailbox. ActiveSync and the Outlook client on Android and iOS can’t see the archive mailbox, and users may need to manually decide what to store in which location (unless you’ve set up your policies correctly).

For these reasons, many businesses avoid archive mailboxes altogether, just making sure that all mailbox data is stored in the primary mailbox (after all, 100 GB is quite a lot of email). Other businesses, particularly those with a lot of legacy PST storage, find these mailboxes fantastic and use either manual upload or even drive shipping to Microsoft 365 to convert all those PSTs to online archives, where the content isn’t going to disappear because of a failed hard drive and where eDiscovery can find it.

For those that really need it and are on E3 or E5 licensing, you can also enable auto-expanding archives, which ensure that as you use up space in an online archive mailbox, additional mailboxes are created behind the scenes to provide effectively unlimited archival storage.

To enable archive mailboxes, go to the Security & Compliance Center, click on Information governance, and then the Archive tab.

The Archive tab

Click on a user’s name to enable the archive mailbox.

Archive mailbox settings
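
You can also enable the archive from Exchange Online PowerShell, which is handy when you have more than a handful of mailboxes. A quick sketch (the mailbox address is a placeholder):

```powershell
# Connect to Exchange Online (admin UPN is an example)
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com

# Enable the online archive for a single user
Enable-Mailbox -Identity "user@contoso.com" -Archive

# Optionally (E3/E5 only) turn on auto-expanding archives for the whole organization
Set-OrganizationConfig -AutoExpandingArchive

# Check the archive status and quota
Get-Mailbox -Identity "user@contoso.com" | Format-List ArchiveStatus, ArchiveQuota
```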

Once you have enabled archive mailboxes, you’ll need a policy to make sure that items are moved into the archive at the cadence you need. Go to the Exchange admin center and click on Compliance management – Retention tags.

Exchange Admin Center – Retention tags

Here you’ll find the Default 2 year move to archive tag, or you can create a new tag by clicking on the + sign.

Exchange Retention tags default policies

Pick Move to Archive as the action, give the tag a name, and select the number of days that have to pass before the move happens.

Creating a custom Move to archive policy
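
And a rough sketch of the PowerShell equivalent; the tag name, the 365-day age limit and the retention policy name (“Default MRM Policy” here, yours may differ) are examples only:

```powershell
# Connect to Exchange Online (admin UPN is an example)
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com

# Create a default (whole mailbox) tag that moves items to the archive after a year
New-RetentionPolicyTag -Name "Move to archive after 1 year" `
    -Type All `
    -RetentionEnabled $true `
    -AgeLimitForRetention 365 `
    -RetentionAction MoveToArchive

# Link the tag to the retention policy applied to your mailboxes
Set-RetentionPolicy "Default MRM Policy" -RetentionPolicyTagLinks @{Add="Move to archive after 1 year"}
```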

Note that online archive mailboxes have NOTHING to do with the Archive folder that you see in the folder tree in Outlook; that’s just an ordinary folder that you can move items into from your inbox for later processing. The Archive folder is available on mobile clients and also when you’re offline, and you can swipe in Outlook mobile to automatically store emails in it.

Conclusion

Now you know how and when to apply retention policies and retention tags in Microsoft 365, as well as when online archive mailboxes are appropriate and how to enable them and configure policies to archive items.

Finally, if you haven’t done so already, remember to save your seat on our upcoming must-attend webinar for all Microsoft 365 admins:

Critical Security Features in Office/Microsoft 365 Admins Simply Can’t Ignore

Is Your Office 365 Data Secure?

Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs. 

Start your Free Trial now


Go to Original Article
Author: Paul Schnackenburg

How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different types of replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster as this also provides you with local high-availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration and hardware setup. The next critical decision you have to make is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage so that an application can run on any host and access the same data. Multi-site clusters behave the same way, but they require independent storage arrays at each site, with the data replicated between them. The data for a clustered application or virtual machine (VM) should be served from the local storage array at its site; otherwise there could be significant latency if every disk IO operation had to go to the other location.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while allowing the VMs to run on different nodes. The challenge when using CSV in a multi-site cluster is that the VMs must make sure that they are always writing to their disk in their site, and not the replicated copy. Most storage providers offer CSV-aware solutions, and you must make sure that they explicitly support multi-site clustering scenarios. Often the vendors will force writes at the primary site by making the CSV disk at the second site read-only, to ensure that the correct disks are always being used.
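
To make this concrete, here is a small, hedged PowerShell sketch (FailoverClusters module; the node, site and disk names are placeholders) for checking where CSV I/O is actually being served from and, on Windows Server 2016 or later, making the cluster site-aware:

```powershell
# Run on (or remote into) a cluster node
Import-Module FailoverClusters

# Show each CSV, its owner node and whether I/O is direct or redirected
Get-ClusterSharedVolumeState

# Move a CSV back to a node in the primary site if ownership has drifted
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "NODE-A-SITE1"

# Declare site fault domains so placement and failover prefer the local site
New-ClusterFaultDomain -Name "SiteA" -Type Site
Set-ClusterFaultDomain -Name "NODE-A-SITE1" -Parent "SiteA"
(Get-Cluster).PreferredSite = "SiteA"
```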

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write command occurs on the primary site, it is copied to the secondary site, and the write is only committed once both the primary and secondary storage arrays have acknowledged it. Synchronous replication ensures consistency between both sites and avoids data loss in the event of a crash. The challenge of writing to two sets of disks in different locations is that the physical distance between sites must be short, or it can affect the performance of the application. Even with a high-bandwidth and low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should be considered along with the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors which provide a file-based replication solution that can run on top of commodity storage hardware. These providers will keep track of individual files which have changed, and only copy those. They are often less efficient than the block-level replication solutions as larger chunks of data (full files) must be copied, however, the total cost of ownership can be much less. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s Datakeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers will abstract the underlying sets of storage arrays at each site. This software manages disk access and redirection to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements. Now you can plan your backup and recovery process. Even though the application’s data is being copied to the secondary site, which is similar to a backup, it does not replace the real thing. This is because if the VM (VHD) on one site becomes corrupted, that same error is likely going to be copied to the secondary site. You should still regularly back up any production workloads running at either site.  This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Go to Original Article
Author: Symon Perriman

Google joins bare-metal cloud fray

Google has introduced bare-metal cloud deployment options geared for legacy applications such as SAP, for which customers require high levels of performance along with deeper virtualization controls.

“[Bare metal] is clearly an area of focus of Google,” and one underscored by its recent acquisition of CloudSimple for running VMware workloads on Google Cloud, said Deepak Mohan, an analyst at IDC.

IBM, AWS and Azure have their own bare-metal cloud offerings, which allow them to support an ESXi hypervisor installation for VMware, and Bare Metal Solution will apparently underpin CloudSimple’s VMware service on Google, Mohan added.

But Google will also be able to support other workloads that can benefit from bare metal availability, such as machine learning, real-time analytics, gaming and graphical rendering. Bare-metal cloud instances also avert the “noisy neighbor” problem that can crop up in virtualized environments as clustered VMs seek out computing resources, and do away with the general hit to performance known commonly as the “hypervisor tax.”

Google’s bare-metal cloud instances offer a dedicated interconnect to customers and tie into all native Google Cloud services, according to a blog post. The hardware has been certified to run “multiple enterprise applications,” including ones built on top of Oracle’s database, Google said.

Oracle, which lags far behind in the IaaS market, has sought to preserve some of those workloads as customers move to the cloud.

Earlier this year, it formed a cloud interoperability partnership with Microsoft, pushing a use case wherein customers could run enterprise application logic and presentation tiers on Azure infrastructure, while tying back to an Oracle database running on bare-metal servers or specialized Exadata hardware in Oracle’s cloud.

Not all competitive details laid bare

Overall, bare-metal cloud is a niche market, but by some estimates it is growing quickly.

Among hyperscalers such as AWS, Google and Microsoft, the battleground is in early days, with AWS only making its bare-metal offerings generally available in May 2018. Microsoft has mostly positioned bare metal for memory-intensive workloads such as SAP HANA, while also offering it underneath CloudSimple’s VMware service for Azure.

Meanwhile, Google’s bare-metal cloud service is fully managed by Google, provides a set of provisioning tools for customers, and will have unified billing with other Google Cloud services, according to the blog.

How smoothly this all works together could be a key differentiator for Google in comparison with rival bare-metal providers. Management of bare-metal machines can be more granular than traditional IaaS, which can mean increased flexibility as well as complexity.

Google’s Bare Metal Solution instances are based on x86 systems that range from 16 cores with 384 GB of DRAM, to 112 cores with 3,072 GB of DRAM. Storage comes in 1 TB chunks, with customers able to choose between all-flash or a mix of storage types. Google also plans to offer custom compute configurations to customers with that need.

It also remains to be seen how price-competitive Google is on bare metal compared with competitors, which include providers such as Packet, CenturyLink and Rackspace.

The company didn’t immediately provide costs for Bare Metal Solution instances, but said the hardware can be purchased via monthly subscription, with the best deals for customers that sign 36-month terms. Google won’t charge for data movement between Bare Metal Solution instances and general-purpose Google Cloud infrastructure if it occurs in the same cloud region.

Go to Original Article
Author:

What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes that can scale should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is the Fujitsu Primeflex. Azure Stack HCI node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.

The Windows Admin Center is a relatively new browser-based tool for consolidated management for local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handle private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available, including PowerShell (both Windows PowerShell and the open source PowerShell Core), as well as third-party products such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.
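
As a quick illustration of the PowerShell route, the sketch below checks the health of the Storage Spaces Direct layer that Azure Stack HCI is built on; it assumes you are running it on, or remoting into, a cluster node and isn’t specific to any vendor’s hardware:

```powershell
# Run on (or remote into) an Azure Stack HCI cluster node
Import-Module FailoverClusters

Get-ClusterNode                              # are all nodes up?
Get-StoragePool -IsPrimordial $false         # health of the S2D storage pool
Get-VirtualDisk                              # health of the volumes carved from the pool
Get-StorageJob                               # any repair/rebuild jobs currently running
Get-PhysicalDisk | Sort-Object HealthStatus  # spot failed or missing drives
```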

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.

Go to Original Article
Author:

Managed private cloud gives IT a cost-effective option

Cost is a big factor when IT admins explore different options for cloud. In certain cases, a managed private cloud may be more cost-effective than public cloud.

Canonical, a distributor of and contributor to Ubuntu Linux, helps organizations manage their cloud setups and uses a variety of proprietary technology to streamline management. Based on the company’s BootStack offering, Canonical’s managed cloud supports a variety of applications and use cases. A managed private cloud can help organizations operate in the “Goldilocks zone,” where they have the right amount of cloud resources for their needs, said Stephan Fabel, director of product at Canonical, based in London.

Currently, 35% of enterprises are moving data to a private cloud, but hurdles such as hardware costs and initial provisioning can cause organizations to delay deployment, according to a June 2018 report by 451 Research. Here, Fabel talks about what makes a managed private cloud a more effective strategy for the long term.

What is different about BootStack? 

Stephan Fabel: BootStack is applicable to the entire reference architecture to our OpenStack offering. The use case will often dictate a loose handling of the details in terms of the reference architecture. So, you can say, for example, deploy a telco-grade cluster or a cluster for enterprise or a cluster for application development, and those are very different characteristics from another company.

We support Swift [an API for data storage and scalability] and Chef [framework codes for deployments]. With some of the more locked-down distributions of OpenStack, we support multiple Cinder-volume stores. … We have the ability to do a Contrail application programming interface and even an open Contrail.

The reason why we can do a managed private cloud at the economics we portray them is that we have the operational efficiencies baked into our tooling. Metal as a service and Juju [an open source application modeling tool] provide that base layer on which OpenStack can run and manage.

One thing that is not entirely unique — but it is rare — is that BootStack actually stands for ‘build, operate and optionally transfer.’ Managed service providers generally want users to get on their platform and never leave. We basically say, ‘You know you want to get started with OpenStack, but you’re not sure you’re operationally ready. That’s fine; jump on BootStack for a year, and then build up your confidence or skill set. When you’re ready to take it on, go for it.’

We’ll transfer back the stack in your control and convert it from a managed service to a generic support contract.

What features contribute to a managed private cloud being more cost-effective than public cloud? 

Fabel: The value of public cloud is that you can get started with a snap of your finger, use your credit card and off you go. … However, down the road, you can end up in a situation where due to smart lock-in schemes, nonopen APIs’ interfaces and unique business features, you’re locked into this public cloud and paying a lot of money out of your Opex.

The challenge is it takes a lot more investment upfront to actually get started with a managed private cloud. Somebody still has to order hardware, it still constitutes a commitment, and someone still needs to install the hardware and run it for you. … But, for what it’s worth, we’ll send two engineers, and it’ll take two weeks and you’ll have a private cloud.

Is it common to be able to deploy a private cloud with just two engineers, or is that specific to Canonical?

Fabel: You’ll certainly find in this space a lot of players who will emphasize their expertise and the ability to do almost anything you want with OpenStack, in a similar amount of time. The question is, what kind of cloud is within that offering? If you go to a professional service-oriented company, they’ll try and sell you bodies to continually engage with as their way of staying with the contract, which racks up those tremendous costs.

The differentiating fact with Juju is, as opposed to other configuration tooling such as Puppet or Chef, it actually takes things further by not just installing packages and making sure the configuration is set; it is actually orchestrating the OpenStack installation.

So, for example, a classic problem with OpenStack is upgrading it. If you go to some of our competitors, their upgrades are going to be an extremely expensive professional services quote, because it’s so manual. What we did is basically encoded the smart in with what we call Charms that work in conjunction with Juju to manage that automatically.
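
To give a feel for what that looks like in practice, here is an illustrative sketch of the Juju workflow Fabel describes; it isn’t Canonical’s exact runbook, and the cloud, model and bundle names are placeholders.

```bash
# Stand up a controller on MAAS-managed metal, then model and deploy OpenStack
juju bootstrap maas-cloud prod-controller
juju add-model openstack
juju deploy ./openstack-bundle.yaml     # deploy the modelled applications in one step

juju status                             # watch the charms bring the services up
juju config nova-compute               # inspect (or change) an application's configuration
juju add-unit nova-compute -n 2         # scale out compute without hand-editing configs
```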

How does automation help reduce the cost of managed private cloud? 

Fabel: We launched [Juju] five years ago, and it went through a lot of growing pains. Back then, everybody was set on configuration management, and they were appropriating configuration management technology to also do orchestration. … That’s great if you’re only deploying one thing. But, as OpenStack exhibits, it’s not quite that easy when you try and deploy something a little bit more complex.

[Now,] Juju basically says, ‘I will write out the configuration because I’m an agent and I understand the context.’ If you can automate tasks such as server installation and management, and you can code that logic, then you have to think less.

It does require more discipline on the Charms side and more knowledge on the operator in case something does go wrong. … For you to be able to debug this, you actually have to understand how to use it. And that’s a hurdle that people in the beginning sort of dismissed.

Will there always be a mix of public and private managed cloud?

Fabel: We’re seeing interest in power users of OpenStack who want to move onto new frontiers, such as Kubernetes, which seems to be it right now, and we’re ready to take [management] off their hands.

I think we’ll see more adoption of managed services from the more advanced user base and in the more off-the-shelf kind of market that want a 15-node or 20-node cloud. It’s not about the 2,000-node cloud as much anymore. I think there’s a whole market that’s just saying, ‘I have a 10-node cloud, and I can pay VMware or someone to run it for me, and I choose so because it’s economically more attractive.’ 

Wanted – PC speakers

As per title, looking for PC speakers. Am open to a variety of options, what have you got?

Thanks,

Andy

Location: Gosport, Hampshire
