Tag Archives: cases

Splice Machine 3.0 integrates machine learning capabilities, database

Databases have long been used for transactional and analytics use cases, but they also have practical utility to help enable machine learning capabilities. After all, machine learning is all about deriving insights from data, which is often stored inside a database.

San Francisco-based database vendor Splice Machine is taking an integrated approach to enabling machine learning with its eponymous database. Splice Machine is a distributed SQL relational database management system that includes machine learning capabilities as part of the overall platform.

Splice Machine 3.0 became generally available on March 3, bringing with it updated machine learning capabilities, a new Kubernetes-based cloud-native deployment model and enhanced replication features.

In this Q&A, Monte Zweben, co-founder and CEO of Splice Machine, discusses the intersection of machine learning and databases and provides insight into the big changes that have occurred in the data landscape in recent years.

How do you integrate machine learning capabilities with a database?


Monte Zweben: The data platform itself has tables, rows and schema. The machine learning manager that we have native to the database has notebooks for developing models, Python for manipulating the data, algorithms that allow you to build models, and model workflow management that allows you to track the metadata on models as they go through the experimentation process. And finally, we have in-database deployment.

So as an example, imagine a data scientist in the insurance industry working in Splice Machine. They have an application for claims processing, and they are building out models inside Splice Machine to predict claims fraud. There’s a function in Splice Machine called deploy that takes a table and a model and generates database code. The deploy function builds a trigger on the database table that calls a stored procedure containing the model for every new record that comes into the table.

So what does this mean in plain English? Let’s say that every time new claims come into the claims table, the system automatically fires the trigger, grabs those claims, runs the model that predicts claims fraud and outputs those predictions into another table. And now, all of a sudden, you have real-time, in-the-moment machine learning that is detecting claims fraud on first notice of loss.
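The pattern Zweben describes, a trigger that scores each new row with a stored model, can be sketched with any database that supports triggers and user-defined functions. The following Python/SQLite sketch is illustrative only, not Splice Machine's actual deploy API; predict_fraud is a hypothetical stand-in for a trained classifier.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical stand-in for a trained fraud model: flags large claims.
def predict_fraud(amount):
    return 1 if amount > 10_000 else 0

# Expose the "model" to SQL so a trigger can call it.
conn.create_function("predict_fraud", 1, predict_fraud)

conn.executescript("""
CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE fraud_predictions (claim_id INTEGER, is_fraud INTEGER);

-- Plays the role Zweben describes: every new claim is scored
-- in-database the moment it arrives.
CREATE TRIGGER score_claim AFTER INSERT ON claims
BEGIN
    INSERT INTO fraud_predictions
    VALUES (NEW.id, predict_fraud(NEW.amount));
END;
""")

conn.execute("INSERT INTO claims (amount) VALUES (250.0)")
conn.execute("INSERT INTO claims (amount) VALUES (50000.0)")
predictions = conn.execute("SELECT * FROM fraud_predictions").fetchall()
print(predictions)  # → [(1, 0), (2, 1)]
```

The point of the pattern is that scoring happens where the data lives, with no round trip to a separate model server.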

What does distributed SQL mean to you?

Zweben: So at its heart, it’s about sharing data across multiple nodes. That provides you the ability to parallelize computation and gain elastic scalability. That is the most important distributed attribute of Splice Machine.

In our new 3.0 release, we just added distributed replication. It’s another element of distribution where you have secondary Splice Machine instances in geo-replicated areas, to handle failover for disaster recovery.

What’s new in Splice Machine 3.0?

Zweben: We moved our cloud stack for Splice Machine from an old Mesos architecture to Kubernetes. Now our container-based architecture is all Kubernetes, and that has given us the opportunity to enable the separation of storage and compute. You literally can pause Splice Machine clusters and turn them back on. This is a great utility for consumption-based usage of databases.

Along with our upgrade to Kubernetes, we also upgraded our machine learning manager from an older notebook technology called Zeppelin to Jupyter, a newer notebook technology that has gained as much momentum in the data science space as Kubernetes has in the DevOps world.

We’ve also enhanced our workflow management tool, MLflow, an open source tool that originated at Databricks; we’re part of that community. MLflow allows data scientists to track their experiments and makes that record of metadata available for governance.

What’s your view on open source and the risk of a big cloud vendor cannibalizing open source database technology?

Zweben: We do compose many different open source projects into a seamless and highly performant integration. Our secret sauce is how we put these things together at a very low level, with transactional integrity, to enable a single integrated system. This composition that we put together is open source, so that all of the pieces of our data platform are available in our open source repository, and people can see the source code right now.

I’m intensely worried about cloud cannibalization. I switched to an AGPL license specifically to protect against cannibalization by cloud vendors.

On the other hand, we believe we’re moving up the stack. If you look at our machine learning package, how inextricably it is linked with the database, and the reference applications that we have in different segments, we’re going to be delivering more and more high-level application functionality.

What are some of the biggest changes you’ve seen in the data landscape over the seven years you’ve been running Splice Machine?

Zweben: With the first generation of big data, it was all about data lakes: Let’s just get all the data the company has into one repository. Unfortunately, those repositories have proven, time and time again, at company after company, to be data swamps.

The repositories work and they’re scalable, but no one is actually using the data, and this was a mistake for several reasons.


Instead of thinking about storing the data, companies should think about how to use the data. Start with the application and how you are going to make the application leverage new data sources.

The second reason this was a mistake was organizational: The data scientists who know AI were all centralized in one data science group, away from the application. They are not the subject matter experts for the application.

When you focus on the application and retrofit the application to make it smart and inject AI, you can get a multidisciplinary team. You have app developers, architects, subject-matter experts, data engineers and data scientists, all working together on one purpose. That is a radically more effective and productive organizational structure for modernizing applications with AI.


ArangoDB 3.6 accelerates performance of multi-model database

By definition, a multi-model database provides multiple database models for different use cases and user needs. Among the popular options is ArangoDB, from the open source database vendor of the same name.

ArangoDB 3.6, released into general availability Jan. 8, brings a series of updates to the multi-model database platform, among them improved performance for queries and overall database operations. Also, the new OneShard feature from the San Mateo, Calif.-based vendor gives organizations a way to build robust data resilience using synchronous replication.

For Kaseware, based in Denver, ArangoDB has been a core element since the company was founded in 2016, enabling the law enforcement software vendor’s case management system.

“I specifically sought out a multi-model database because for me, that simplified things,” said Scott Baugher, the co-founder, president and CTO of Kaseware, and a former FBI special agent. “I had fewer technologies in my stack, which meant fewer things to keep updated and patched.”

Kaseware uses ArangoDB as a document, key/value and graph database. Baugher noted that the one other database the company uses is Elasticsearch, for its full-text search capabilities. Kaseware uses Elasticsearch because, until fairly recently, ArangoDB did not offer full-text search capabilities, he said.

“If I were starting Kaseware over again now, I’d take a very hard look at eliminating Elasticsearch from our stack as well,” Baugher said. “I say that not because Elasticsearch isn’t a great product, but it would allow me to even further simplify my deployment stack.”

Adding OneShard to ArangoDB 3.6

With OneShard, users will gain a new option for database distribution. OneShard is a feature for users for whom data is small enough to fit on a single node, but the requirement for fault tolerance still requires the database to replicate data across multiple nodes, said Joerg Schad, head of engineering and machine learning at ArangoDB.


“ArangoDB will basically colocate all data on a single node and hence offer local performance and transactions as queries can be evaluated on a single node,” Schad said. “It will still replicate the data synchronously to achieve fault tolerance.”

Baugher said he’ll be taking a close look at OneShard.

He noted that Kaseware now uses ArangoDB’s “resilient single” database setup, which in his view is similar, but less robust. 

“One main benefit of OneShard seems to be the synchronous replication of the data to the backup or failover databases versus the asynchronous replication used by the active failover configuration,” Baugher said.
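The trade-off Baugher describes can be sketched in a few lines of Python. This is a generic illustration of synchronous versus asynchronous replication, not ArangoDB's implementation; all class and function names are hypothetical.

```python
class Replica:
    """A trivially small stand-in for a database node."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

def write_sync(replicas, key, value):
    # Synchronous: the write commits only after every replica has it,
    # so a read can then safely go to ANY node (the OneShard model).
    for r in replicas:
        r.apply(key, value)
    return "committed"

def write_async(primary, backlog, key, value):
    # Asynchronous: acknowledge after the primary alone; followers are
    # caught up later by a background task and may serve stale reads.
    primary.apply(key, value)
    backlog.append((key, value))
    return "acknowledged"

nodes = [Replica() for _ in range(3)]
write_sync(nodes, "case:1", "open")
assert all(n.data["case:1"] == "open" for n in nodes)

primary, follower, backlog = Replica(), Replica(), []
write_async(primary, backlog, "case:2", "open")
assert "case:2" in primary.data and "case:2" not in follower.data
```

The second assertion is exactly the window Baugher contrasts with active failover: until the backlog drains, a follower cannot safely answer reads.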

Baugher added that OneShard also allows database reads to happen from any database node. This contrasts with active failover, in that reads are limited to the currently active node only. 

“So for read-heavy applications like ours, OneShard should not only offer performance benefits, but also let us make better use of our standby nodes by having them respond to read traffic,” he said.

More performance gains in ArangoDB 3.6

The ArangoDB 3.6 multi-model database also provides users with faster query execution thanks to a new subquery optimization feature. Schad explained that when writing queries, it is a typical pattern to build a complex query out of multiple simple subqueries.

“With the improved subquery optimization, ArangoDB optimizes and processes such queries more efficiently by merging them into one, which improves performance for larger data sizes by up to a factor of 28,” he said.
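The kind of merge Schad describes can be illustrated with plain SQL: a per-row correlated subquery and a single aggregated scan that return the same answer. This SQLite sketch only demonstrates the equivalence of the two forms; the 28x figure refers to ArangoDB's own optimizer, not to this toy example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, total REAL);
INSERT INTO orders VALUES ('a', 10), ('a', 20), ('b', 5);
""")

# Naive form: one correlated subquery re-evaluated per row.
naive = conn.execute("""
    SELECT DISTINCT customer,
           (SELECT SUM(total) FROM orders o2
             WHERE o2.customer = o1.customer) AS spend
      FROM orders o1
     ORDER BY customer
""").fetchall()

# Merged form: the same answer in a single aggregated pass, the kind
# of rewrite a subquery optimizer applies automatically.
merged = conn.execute("""
    SELECT customer, SUM(total) AS spend
      FROM orders
     GROUP BY customer
     ORDER BY customer
""").fetchall()

assert naive == merged == [('a', 30.0), ('b', 5.0)]
```

The savings grow with data size: the naive form does work proportional to rows times subqueries, while the merged form makes one pass.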

The new database release also enables parallel execution of queries to further improve performance. Schad said that if a query requires data from multiple nodes, with ArangoDB 3.6 operations can be parallelized to be performed concurrently. The end results, according to Schad, are improvements of 30% to 40% for queries involving data across multiple nodes.

Looking forward to the next release of ArangoDB, scalability improvements will be at the top of the agenda, he said.

“For the upcoming 3.7 release, we are already working on improving the scalability even further for larger data sizes and larger clusters,” Schad said.


Pivot3, Scale Computing HCI appliances zoom in on AI, edge

Hyper-converged vendors Pivot3 and Scale Computing this week expanded their use cases with product launches.

Scale formally unveiled HE150 all-flash NVMe hyper-converged infrastructure (HCI) appliances for space-constrained edge environments. Scale sells the compute device as a three-node cluster, but it does not require a server rack.

The new device is a tiny version of the Scale HE500 HCI appliances that launched this year. HE150 measures 4.6 inches wide, 1.7 inches high and 4.4 inches deep. Scale said select customers have deployed proofs of concept.

Pivot3 rolled out AI-enabled data protection in its Acuity HCI operating software. The vendor said Pivot3 appliances can stream telemetry data from customer deployments to the vendor’s support cloud for historical analysis and troubleshooting.

HCI use cases evolve

Hyper-converged infrastructure vendors package the disparate elements of converged infrastructure in a single piece of hardware, including compute, hypervisor software, networking and storage.

Dell is the HCI market leader, in large measure due to VMware vSAN, while HCI pioneer Nutanix holds the No. 2 spot. But competition is heating up. Server vendors Cisco and Hewlett Packard Enterprise have HCI products, as does NetApp with a product using its SolidFire all-flash technology. Ctera Networks, DataCore and startup Datrium are also trying to elbow into the crowded space.

Pivot3 storage is used mostly for video surveillance, although the Austin, Texas-based vendor has focused on increasing its deal size for its Acuity systems.

Scale Computing, based in Indianapolis, sells the HC3 virtualization platform for use in edge and remote office deployments. The company has customers in education, financial services, government, healthcare and retail.

Hyper-converged infrastructure has expanded beyond its origins in virtual desktop infrastructure to support cloud analytics of primary and secondary storage, said Eric Sheppard, a research vice president in IDC’s infrastructure systems, platforms and technologies group.

“The most common use of HCI is virtualized applications, but the percentage of [hosted] apps that are mission-critical has increased considerably,” Sheppard said.

Scale HE150: Small gear for the edge

Scale’s HC3 system runs HyperCore, the operating system Scale designed around Linux-based KVM. Unlike most HCI appliances, Scale HC3 does not support VMware.

[Image: Scale Computing HE150 HCI appliance]

The HE150 includes a full version of the HyperCore operating system, including rolling updates, replication and snapshots. The device comes with up to six cores and up to 64 GB of RAM. Intel’s Frost Canyon Next Unit of Computing (NUC) mini-PC provides the compute. Storage per node is up to 2 TB with one M.2 NVMe SSD.

Traditional HCI appliances, including Scale’s larger HC3 appliances, require a dedicated backplane switch to route network traffic. The HE150 features new HC3 Edge Fabric software-based tunneling for communication between HC3 nodes. The tunneling is needed to accommodate the tiny form factor, said Dave Demlow, Scale’s VP of product management.

Scale recommends a three-node HE150 cluster. Data is mirrored twice between the nodes for redundancy. Demlow said the cluster takes up the space of three smartphones stacked together.

Eric Slack, a senior analyst at Evaluator Group, said Scale’s operating system enables it to sell an HCI appliance the size of Scale HE150.

“This new small device runs the full Scale HyperCore OS, which is an important feature. The Scale stack is pretty thin. They don’t run VMware or a separate software-defined storage layer, so HyperCore can run with limited memory and a limited number of CPU cores,” Slack said.

Pivot3 HCI appliances

Pivot3 did not make hardware upgrades with this release. The features in Acuity center on AI-driven analytics for more automated management.

Pivot3 enhanced its Intelligence Engine policy manager with AI tools for backup and disaster recovery in multi-petabyte storage. The move comes amid research by IDC that indicates more enterprises expect HCI vendors to provide autonomous management via the cloud.

The IDC survey of 252 data centers found that 89% rely on cloud-based predictive analytics to manage IT infrastructure, but only 72% had enterprise storage systems that bundle analytics tools as part of the base price.

“The entirety of the data center infrastructure market is increasing the degree to which tasks can be automated. All roads lead toward autonomous operations, and cloud-based predictive analytics is the fastest way to get there,” Sheppard said.

Pivot3 said it added self-healing that identifies failed nodes and automatically returns repaired nodes to the cluster. The vendor also added delta differencing to its erasure coding for faster rebuilds.


Epicor ERP system focuses on distribution

Many ERP systems try to be all things to all use cases, but that often comes at the expense of heavy customizations.

Some companies are discovering that a purpose-built ERP is a better and more cost-effective bet, particularly for small and midsize companies. One such product is the Epicor ERP system Prophet 21, which is primarily aimed at wholesale distributors.

The functionality in the Epicor ERP system is designed to help distributors run processes more efficiently and make better use of data flowing through the system.

In addition to distribution-focused functions, the Prophet 21 Epicor ERP system includes the ability to integrate value-added services, which could be valuable for distributors, said Mark Jensen, Epicor senior director of product management.

“A distributor can do manufacturing processes for their customers, or rentals, or field service and maintenance work. Those are three areas that we focused on with Prophet 21,” Jensen said.

Prophet 21’s functionality is particularly strong in managing inventory, including picking, packing and shipping goods, as well as receiving and put-away processes.

Specialized functions for distributors

Distribution companies that specialize in certain industries or products have different processes that Prophet 21 includes in its functions, Jensen said. For example, Prophet 21 has functionality designed specifically for tile and slab distributors.

“The ability to be able to work with the slab of granite or a slab of marble — what size it is, how much is left after it’s been cut, transporting that slab of granite or tile — is a very specific functionality, because you’re dealing with various sizes, colors, dimensions,” he said. “Being purpose-built gives [the Epicor ERP system] an advantage over competitors like Oracle, SAP, NetSuite, [which] either have to customize or rely on a third-party vendor to attach that kind of functionality.”
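The slab-specific tracking Jensen describes, what size a slab is and how much is left after it has been cut, amounts to remnant bookkeeping. The following is a minimal, hypothetical sketch of that idea, not Prophet 21's actual data model; it keeps only the largest rectangular leftover from a guillotine-style cut.

```python
from dataclasses import dataclass

@dataclass
class Slab:
    """Hypothetical inventory record for a slab of granite or marble."""
    material: str
    width_in: float
    height_in: float

    @property
    def area_sqin(self) -> float:
        return self.width_in * self.height_in

def cut(slab: Slab, piece_w: float, piece_h: float):
    """Cut a rectangular piece; return the piece and the larger remnant.

    A real slab-inventory module would track irregular offcuts; this
    simplified version keeps only the biggest rectangular remainder.
    """
    if piece_w > slab.width_in or piece_h > slab.height_in:
        raise ValueError("piece does not fit on slab")
    # Two candidate leftovers from a guillotine cut; keep the bigger one.
    right = Slab(slab.material, slab.width_in - piece_w, slab.height_in)
    below = Slab(slab.material, piece_w, slab.height_in - piece_h)
    remnant = right if right.area_sqin >= below.area_sqin else below
    return Slab(slab.material, piece_w, piece_h), remnant

stock = Slab("granite", 120, 60)          # a 120" x 60" slab in inventory
piece, remnant = cut(stock, 96, 26)       # cut a countertop blank
print(remnant)                            # remaining stock to re-shelve
```

Even this toy version shows why generic ERP inventory (count of identical SKUs) falls short: after one cut, the "same" slab is a different item with different dimensions.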

Jergens Industrial Supply, a wholesale supplies distributor based in Cleveland, has improved efficiency and is more responsive to shifting customer demands using Prophet 21, said Tony Filipovic, Jergens Industrial Supply (JIS) operations manager.


“We like Prophet 21 because it’s geared toward distribution and was the leading product for distribution,” Filipovic said. “We looked at other systems that say they do manufacturing and distribution, but I just don’t feel that that’s the case. Prophet 21 is something that’s been top of line for years for resources distribution needs.”

One of the key differentiators for JIS was Prophet 21’s inventory management functionality, which was useful because distributors manage inventory differently than manufacturers, Filipovic said.

“All that functionality within that was key, and everything is under one package,” he said. “So from the moment you are quoting or entering an order to purchasing the product, receiving it, billing it, shipping it and paying for it was all streamlined under one system.”

Another key new feature is an IoT-enabled button, similar to an Amazon Dash button, that enables customers to resupply stock remotely. This allows JIS to “stay ahead of the click” and offer customers lower-cost, more efficient delivery, Filipovic said.

“Online platforms are becoming more and more prevalent in our industry,” he said. “The Dash button allows customers to find out where we can get into their process and make things easier. We’ve got the ordering at the point where customers realize that when they need to stock, all they do is press the button and it saves multiple hours and days.”

Epicor Prophet 21 a strong contender in purpose-built ERP

Epicor Prophet 21 is on solid ground with its purpose-built ERP focus, but companies have other options they can look at, said Cindy Jutras, president of Mint Jutras, an ERP research and advisory firm in Windham, N.H.

“Epicor Prophet 21 is a strong contender from a feature and function standpoint. I’m a fan of solutions that go that last mile for industry-specific functionality, and there aren’t all that many for wholesale distribution,” Jutras said. “Infor is pretty strong, NetSuite plays here, and then there are a ton of little guys that aren’t as well-known.”

Prophet 21 may take advantage of new cloud capabilities to compete better in some global markets, said Predrag Jakovljevic, principal analyst at Technology Evaluation Centers, an enterprise computing analysis firm in Montreal.

“Of course a vertically-focused ERP is always advantageous, and Prophet 21 and Infor SX.e go head-to-head all the time in North America,” Jakovljevic said. “Prophet 21 is now getting cloud enabled and will be in Australia and the UK, where it might compete with NetSuite or Infor M3, which are global products.”


For Sale – 3 x 2U X-Case Server cases

Hi.

Having a clear out of my server gear and have 3 x 2U cases. There are no make/model numbers on the cases, but I’m sure they are “X-Case”.

They are 2U, standard 19” wide and 21.5” deep. They take normal ATX power supplies, which makes these cases ideal for building budget servers.

They are in decent condition, some scratches etc due to use and being stored in my garage.

Collection preferred due to size of these but if no one local bites, I may post these if I can find packaging.

£20 each or all 3 for £50

COLLECTION FROM NEWCASTLE UPON TYNE

Price and currency: 20 each.
Delivery: Delivery cost is not included
Payment method: BT, cash, ppg, magic beans.
Location: Newcastle upon tyne
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Managed private cloud gives IT a cost-effective option

Cost is a big factor when IT admins explore different options for cloud. In certain cases, a managed private cloud may be more cost-effective than public cloud.

Canonical, the company behind Ubuntu Linux, helps organizations manage their cloud setups and uses a variety of proprietary technology to streamline management. Based on the company’s BootStack offering, Canonical’s managed cloud supports a variety of applications and use cases. A managed private cloud can help organizations operate in the “Goldilocks zone,” where they have the right amount of cloud resources for their needs, said Stephan Fabel, director of product at Canonical, based in London.

Currently, 35% of enterprises are moving data to a private cloud, but hurdles such as hardware costs and initial provisioning can cause organizations to delay deployment, according to a June 2018 report by 451 Research. Here, Fabel talks about what makes a managed private cloud a more effective strategy for the long term.

What is different about BootStack? 

Stephan Fabel: BootStack is applicable to the entire reference architecture of our OpenStack offering. The use case will often dictate a loose handling of the details in terms of the reference architecture. So, you can say, for example, deploy a telco-grade cluster, or a cluster for enterprise, or a cluster for application development, and those have very different characteristics from one company to another.


We support Swift [OpenStack’s object storage service] and Chef [a configuration management framework for deployments]. With some of the more locked-down distributions of OpenStack, we support multiple Cinder volume stores. … We have the ability to do a Contrail application programming interface and even OpenContrail.

The reason why we can do a managed private cloud at the economics we portray is that we have the operational efficiencies baked into our tooling. Metal as a service [MAAS] and Juju [an open source application modeling tool] provide the base layer on which OpenStack can run and be managed.

One thing that is not entirely unique — but it is rare — is that BootStack actually stands for ‘build, operate and optionally transfer.’ Managed service providers generally want users to get on their platform and never leave. We basically say, ‘You know you want to get started with OpenStack, but you’re not sure you’re operationally ready. That’s fine; jump on BootStack for a year, and then build up your confidence or skill set. When you’re ready to take it on, go for it.’

We’ll transfer back the stack in your control and convert it from a managed service to a generic support contract.

What features contribute to a managed private cloud being more cost-effective than public cloud? 

Fabel: The value of public cloud is that you can get started with a snap of your fingers, use your credit card and off you go. … However, down the road, you can end up in a situation where, due to smart lock-in schemes, nonopen APIs and interfaces, and unique business features, you’re locked into this public cloud and paying a lot of money out of your opex.

The challenge is it takes a lot more investment upfront to actually get started with a managed private cloud. Somebody still has to order hardware, it still constitutes a commitment, and someone still needs to install the hardware and run it for you. … But, for what it’s worth, we’ll send two engineers, and it’ll take two weeks and you’ll have a private cloud.

Is it common to be able to deploy a private cloud with just two engineers, or is that specific to Canonical?


Fabel: You’ll certainly find in this space a lot of players who will emphasize their expertise and the ability to do almost anything you want with OpenStack, in a similar amount of time. The question is, what kind of cloud is within that offering? If you go to a professional service-oriented company, they’ll try and sell you bodies to continually engage with as their way of staying with the contract, which racks up those tremendous costs.

The differentiating factor with Juju is that, as opposed to other configuration tooling such as Puppet or Chef, it takes things further: It doesn’t just install packages and make sure the configuration is set; it actually orchestrates the OpenStack installation.

So, for example, a classic problem with OpenStack is upgrading it. If you go to some of our competitors, their upgrades are going to be an extremely expensive professional services quote, because the process is so manual. What we did is basically encode the smarts into what we call Charms, which work in conjunction with Juju to manage that automatically.
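The distinction Fabel draws between configuration management (apply each service's packages and settings) and orchestration (sequence interdependent services) can be sketched as a dependency-graph walk. The service names and dependencies below are made up for illustration; this is not how Juju or Charms are implemented.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical OpenStack-style dependencies: each service may only be
# brought up after the services it relies on are running.
deps = {
    "mysql": set(),
    "rabbitmq": set(),
    "keystone": {"mysql"},
    "glance": {"mysql", "keystone"},
    "nova": {"mysql", "rabbitmq", "keystone", "glance"},
}

# Configuration management alone would apply each service's config in
# isolation; an orchestrator also derives a safe bring-up order.
order = list(TopologicalSorter(deps).static_order())
assert order.index("mysql") < order.index("keystone") < order.index("nova")
print(order)
```

An upgrade is the same walk with extra steps per node (drain, upgrade, verify, rejoin), which is why encoding the ordering logic once, rather than performing it manually, removes most of the professional-services cost Fabel mentions.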

How does automation help reduce the cost of managed private cloud? 

Fabel: We launched [Juju] five years ago, and it went through a lot of growing pains. Back then, everybody was set on configuration management, and they were appropriating configuration management technology to also do orchestration. … That’s great if you’re only deploying one thing. But, as OpenStack exhibits, it’s not quite that easy when you try and deploy something a little bit more complex.

[Now,] Juju basically says, ‘I will write out the configuration because I’m an agent and I understand the context.’ If you can automate tasks such as server installation and management, and you can code that logic, then you have to think less.

It does require more discipline on the Charms side and more knowledge on the operator in case something does go wrong. … For you to be able to debug this, you actually have to understand how to use it. And that’s a hurdle that people in the beginning sort of dismissed.

Will there always be a mix of public and private managed cloud?

Fabel: We’re seeing interest in power users of OpenStack who want to move onto new frontiers, such as Kubernetes, which seems to be it right now, and we’re ready to take [management] off their hands.

I think we’ll see more adoption of managed services from the more advanced user base and in the more off-the-shelf kind of market that want a 15-node or 20-node cloud. It’s not about the 2,000-node cloud as much anymore. I think there’s a whole market that’s just saying, ‘I have a 10-node cloud, and I can pay VMware or someone to run it for me, and I choose so because it’s economically more attractive.’