
HubSpot Conversations adds chatbots, collaborative inbox

In an effort to become a platform its users rarely leave, HubSpot has released HubSpot Conversations, a tool that helps users keep track of customer interactions regardless of the channel.

The new module comes a few months after HubSpot introduced Service Hub, its customer service product, and adds communication capabilities to the Cambridge, Mass., martech vendor's marketing and service systems.

HubSpot Conversations has three main components: a collaborative inbox tool, chatbot capabilities and lead routing.

All of these capabilities were a welcome addition to the HubSpot tech stack for customer Frame My TV, a Haverhill, Mass., manufacturing company that builds custom frames for wall-mounted televisions.

“It allowed us, as a small business, to bring all of our remote employees under one umbrella,” said Kevin Hancock, principal at Frame My TV.

Hancock has a small core staff of about six employees, with roughly a dozen more freelance workers in sales and other departments. Having most of his technology within HubSpot has made marketing and selling easier and more efficient, because fewer integrations with third-party tools are required.


“It allows me to focus on running the business and not having to focus on the technology,” Hancock said. “The top benefit of HubSpot Conversations is it puts the prospect information into the CRM for us and can trigger certain functionality.”

Previously, Hancock's sales reps had to toggle between different UIs and manually enter information on particular inbound prospects into HubSpot. With those interactions recorded by HubSpot Conversations, the system updates the customer information automatically. Conversations also integrates with Slack, which Hancock and his team use more than email to communicate.
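The auto-capture Hancock describes boils down to an upsert keyed on the sender's identity. A minimal sketch of that idea, with hypothetical field names (this is not HubSpot's actual API):

```python
# Illustrative sketch: a conversation event upserts a CRM contact record
# keyed by email address, so reps no longer re-key prospect info by hand.

def upsert_contact(crm: dict, message: dict) -> dict:
    """Create the contact if new, then log the interaction against it."""
    email = message["from_email"]
    contact = crm.setdefault(email, {"email": email, "interactions": []})
    contact["interactions"].append(
        {"channel": message["channel"], "text": message["text"]}
    )
    return contact

crm = {}
upsert_contact(crm, {"from_email": "a@example.com", "channel": "chat", "text": "Hi"})
upsert_contact(crm, {"from_email": "a@example.com", "channel": "email", "text": "Quote?"})
```

Two messages from the same address produce one contact with two logged interactions, which is the "puts the prospect information into the CRM for us" behavior Hancock credits.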

Separating spam from leads

While Hancock said HubSpot Conversations has improved efficiency for Frame My TV, he expressed some uneasiness about some aspects of the product. When working within inbound marketing, a common problem that arises is an abundance of irrelevant material — something HubSpot is still trying to work through with the collaborative inbox feature of Conversations.

“The number one hurdle with Conversations is figuring out how to handle the flurry of junk mail,” Hancock said. “Conceptually it makes sense to bring them into one place, but spam is a problem. I don’t want those emails to end up as contacts and muddying up the inbox.”

Hancock said that while he has turned off the feature, HubSpot is trying to fix the problem.

“It needs to have an option that when a message comes in to add that email as a contact,” he said.

The user interface for HubSpot Conversations shows how the tool brings all conversations and interactions into one place.

Chatbots come to HubSpot

Another feature of HubSpot Conversations that Frame My TV has deployed is chatbots, which gather initial information on inquiries; the company is still working out the ideal number of questions to ask before moving a prospect over to live chat.

“One challenge we have is we sell a product to the top 5% — people aren’t buying it because of the cost, it’s a desire to finish off a room,” Hancock said. “So we have to qualify people a little bit and a lot of that can be sifted through using a bot.”
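The qualification flow Hancock describes amounts to a small decision rule: keep asking until the required answers are in, then route. A hypothetical sketch (the question keys and routing labels are invented for illustration, not Frame My TV's actual bot):

```python
# Hypothetical rule-based qualification bot: gather a fixed set of answers,
# then decide whether to hand the prospect off to live chat.

QUESTIONS = [
    ("tv_size", "What size is your TV?"),
    ("mounted", "Is the TV already wall-mounted?"),
    ("budget_ok", "Are you comfortable with a custom-frame budget?"),
]

def qualify(answers: dict) -> str:
    """Return the next bot action for a prospect's partial answers."""
    missing = [key for key, _ in QUESTIONS if key not in answers]
    if missing:
        return "ask:" + missing[0]      # bot keeps gathering info
    if answers["budget_ok"] and answers["mounted"]:
        return "handoff_live_chat"      # qualified: route to a human agent
    return "send_info_email"            # not qualified yet

print(qualify({"tv_size": '65"', "mounted": True}))
print(qualify({"tv_size": '65"', "mounted": True, "budget_ok": True}))
```

The bot absorbs the repetitive sifting; only prospects who clear the rules reach a person.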

Building a chatbot for HubSpot was made possible by HubSpot’s acquisition of Motion AI in 2017.

HubSpot is seeking to build a platform-wide product — similar to what Salesforce has done over the years — with the goal of getting its users to stay on the HubSpot platform for all their needs, according to the company.

“With Conversations, what we’re trying to do is give teams that full clear picture of that customer relationship,” said Elise Beck, product marketing manager for Conversations. “As teams engage and work with prospects through their journey, they can keep things tied to CRM and keep tabs on what has happened.”

Beck added that HubSpot Conversations is available now within all existing HubSpot products as a drop-down item at no additional charge.

HubSpot’s annual user conference, Inbound 2018, is in Boston Sept. 4 to 7. Check back to SearchCRM.com for coverage of the conference.

Confluent Platform 5.0 aims to mainstream Kafka streaming

The Confluent Platform continues to expand on capabilities useful for Kafka-based data streaming, with additions that are part of a 5.0 release now available from Confluent Inc.

The creation of former LinkedIn data engineers who helped build the Kafka messaging framework, the Confluent Platform aims to make real-time big data analytics accessible to a wider community.

Part of that effort takes the form of KSQL, a Kafka-savvy SQL query engine and language Confluent created in 2017 to bring easier SQL-style queries to analytics on Kafka streaming data.
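The distinguishing feature of a KSQL query is that it runs continuously over an unbounded stream rather than once over a table. A rough Python analogue of that idea (not KSQL itself), using a generator as the stream:

```python
# Rough analogue of a continuous KSQL-style query: apply a SELECT/WHERE-like
# transformation to each event as it arrives on an unbounded stream.

def continuous_query(stream, predicate, projection):
    """Like `SELECT projection FROM stream WHERE predicate`, running forever."""
    for event in stream:
        if predicate(event):
            yield projection(event)

events = [
    {"user": "alice", "amount": 120},
    {"user": "bob", "amount": 40},
    {"user": "carol", "amount": 300},
]

# e.g. SELECT user FROM payments WHERE amount > 100
big_spenders = list(continuous_query(
    events, lambda e: e["amount"] > 100, lambda e: e["user"]))
print(big_spenders)  # ['alice', 'carol']
```

KSQL's pitch is that analysts write the SQL-like text and the engine supplies this always-on execution over Kafka topics, without Java code.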

Version 5.0 of the Confluent Platform, commercially released on July 31, seeks to improve disaster recovery with more adept handling of application client failover, to enhance IoT abilities with MQTT proxy support, and to reduce the need to use Java for programming streaming analytics with a new GUI for writing KSQL code.

Data dips into mainstream

Confluent Platform 5.0's support for disaster recovery and other improvements are useful, said Doug Henschen, a principal analyst at Constellation Research. But the bigger value in the release, he said, is in KSQL's potential for "the mainstreaming of streaming analytics."


Besides the new GUI, this Confluent release upgrades the KSQL engine with support for user-defined functions, which are essential parts of many existing SQL workloads. Also, the release supports handling nested data in popular Avro and JSON formats.

“With these moves Confluent is meeting developer expectations and delivering sought-after capabilities in the context of next-generation streaming applications,” Henschen said.

That’s important because web, cloud and IoT applications are creating data at a prodigious rate, and companies are looking to analyze that data as part of real-time operations. The programming skills required to do that level of development remain rare, but, as big data ecosystem software like Apache Spark and Kafka find wider use, simpler libraries and interfaces are appearing to link data streaming and analytics more easily.

Kafka, take a log

At its base, Kafka is a log-oriented publish-and-subscribe messaging system created to handle the data created by burgeoning web and cloud activity at social media giant LinkedIn.
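The "log-oriented" part is the essential idea: producers append records to a durable log, and each consumer reads from its own offset into that log, so subscribers proceed independently. A minimal in-memory sketch of the concept (Kafka itself is distributed, partitioned and persistent; this is only the shape of the model):

```python
# Minimal sketch of log-oriented publish-and-subscribe: an append-only log
# that publishers write to, with each consumer tracking its own read offset.

class Log:
    def __init__(self):
        self.records = []   # the append-only log
        self.offsets = {}   # consumer name -> next offset to read

    def publish(self, record):
        self.records.append(record)

    def consume(self, consumer):
        """Return all records this consumer has not yet seen."""
        offset = self.offsets.get(consumer, 0)
        batch = self.records[offset:]
        self.offsets[consumer] = len(self.records)
        return batch

log = Log()
log.publish("page_view")
log.publish("click")
print(log.consume("analytics"))   # ['page_view', 'click']
log.publish("purchase")
print(log.consume("analytics"))   # ['purchase']
print(log.consume("billing"))     # ['page_view', 'click', 'purchase']
```

Because the log retains records and offsets belong to consumers, a late subscriber ("billing" above) can still replay everything from the beginning.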

The core software has been open sourced as Apache Kafka. Key Kafka messaging framework originators, including Jay Kreps, Neha Narkhede and others, left LinkedIn in 2014 to found Confluent, with the stated intent to build on core Kafka messaging for further enterprise purposes.

Joanna Schloss, Confluent's director of product marketing, said Confluent Platform's support for nested data in Avro and JSON will enable greater use of business intelligence (BI) tools in Kafka data streaming. In addition, KSQL now supports more complex joins, allowing KSQL applications to enhance data in more varied ways.


She said opening KSQL activity to view via a GUI makes KSQL a full citizen in modern development teams in which programmers, as well as DevOps and operations staff, all take part in data streaming efforts.

“Among developers, DevOps and operations personnel there are persons interested in seeing how Kafka clusters are performing,” she said. Now, with the KSQL GUI, “when something arrives they can use SQL [skills] to watch what happened.” They don’t need to find a Java developer to interrogate the system, she noted.

Making Kafka more accessible for applications

KSQL is among the streaming analytics capabilities of interest to Stephane Maarek, CEO at DataCumulus, a Paris-based firm focused on Java, Scala and Kafka training and consulting.


Maarek said KSQL has potential to encapsulate a lot of programming complexity, and, in turn, to lower the barrier to writing streaming applications. In this, Maarek said, Confluent is helping make Kafka more accessible “to a variety of use cases and data sources.”

Moreover, because the open source community that supports Kafka “is strong, the real-time applications are really easy to create and operate,” Maarek added.

Advances in the replication capabilities in Confluent Platform are “a leap forward for disaster recovery, which has to date been something of a pain point,” he said.

Maarek also said he welcomed recent updates to Confluent Control Center, because they give developers and administrators more insights into the activity of Kafka cluster components, particularly schema registry and application consumption lags — the difference between messaging reads and messaging writes. The updates also reduce the need for administrators to write commands, according to Maarek.
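The consumption lag Maarek mentions is simple arithmetic per partition: the difference between the log end offset (what has been written) and the consumer group's committed offset (what has been read). A small sketch of that calculation, with hypothetical partition names:

```python
# Consumer lag, as described above: messages written to a partition that a
# consumer group has not yet read. Zero lag means the consumer is caught up.

def consumer_lag(log_end_offsets, committed_offsets):
    """Per-partition lag = log end offset minus committed offset."""
    return {
        partition: log_end_offsets[partition] - committed_offsets.get(partition, 0)
        for partition in log_end_offsets
    }

lag = consumer_lag({"topic-0": 1500, "topic-1": 980},
                   {"topic-0": 1480, "topic-1": 980})
print(lag)  # {'topic-0': 20, 'topic-1': 0}
```

Surfacing this number in Control Center is what saves administrators from querying offsets by hand on the command line.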

Data streaming field

The data streaming field remains young, and Confluent faces competition from established data analytics players like IBM, Teradata and SAS Institute, Hadoop distribution vendors like Cloudera, Hortonworks and MapR, and a variety of specialists such as MemSQL, SQLstream and Striim.

“There’s huge interest in streaming applications and near-real-time analytics, but it’s a green space,” Henschen said. “There are lots of ways to do it and lots of vendor camps — database, messaging-streaming platforms, next-gen data platforms and so on — all vying for a piece of the action.”

However, Kafka often is a common ingredient, Henschen noted. Such ubiquity helps put Confluent in a position “to extend the open source core with broader capabilities in a commercial offering,” he said.

IT pros debate upstream vs. packaged Kubernetes implementations

Packaged versions of Kubernetes promise ease of use for the finicky container orchestration platform, but some enterprises will stick with a DIY approach to Kubernetes implementation.

Red Hat, Docker, Heptio, Mesosphere, Rancher, Platform9, Pivotal, Google, Microsoft, IBM and Cisco are among the many enterprise vendors seeking to cash in on the container craze with prepackaged Kubernetes implementations for private and hybrid clouds. Some of these products — such as Red Hat's OpenShift Container Platform, Docker Enterprise Edition and Rancher's eponymous platform — offer their own distribution of the container orchestration software, and most add their own enterprise security and management features on top of upstream Kubernetes code.

However, some enterprise IT shops still prefer to download Kubernetes source code from GitHub and leave out IT vendor middlemen.

“We’re seeing a lot of companies go with [upstream] Kubernetes over Docker [Enterprise Edition] and [Red Hat] OpenShift,” said Damith Karunaratne, director of client solutions for Indellient Inc., an IT consulting firm in Oakville, Ont. “Those platforms may help with management out of the gate, but software license costs are always a consideration, and companies are confident in their technical teams’ expertise.”

The case for pure upstream Kubernetes

One such company is Rosetta Stone, which has used Docker containers in its DevOps process for years, but has yet to put a container orchestration tool into production. In August 2017, the company considered Kubernetes overkill for its applications and evaluated Docker swarm mode as a simpler approach to container orchestration.

Fast-forward a year, however, and the global education software company plans to introduce upstream Kubernetes into production due to its popularity and ubiquity as the container orchestration standard in the industry.

Concerns about Kubernetes management complexity are outdated, given how the latest versions of the tool smooth out management kinks and require less customization for enterprise security features, said Kevin Burnett, DevOps lead for Rosetta Stone in Arlington, Va.

“We’re a late adopter, but we have the benefit of more maturity in the platform,” Burnett said. “We also wanted to avoid [licensing] costs, and we already have servers. Eventually, we may embrace a cloud service like Google Kubernetes Engine more fully, but not yet.”

Burnett said his team prefers to hand-roll its own configurations of open source tools, and it doesn’t want to use features from a third-party vendor’s Kubernetes implementation that may hinder cloud portability in the future.

Other enterprise IT shops are concerned that third-party Kubernetes implementations — particularly those that rely on a vendor’s own distribution of Kubernetes, such as Red Hat’s OpenShift — will be easier to install initially, but could worsen management complexity in the long run.

“Container sprawl combined with a forked Kubernetes runtime in the hands of traditional IT ops is a management nightmare,” said a DevOps transformation leader at an insurance company who spoke on condition of anonymity, because he’s not authorized to publicly discuss the company’s product evaluation process.

His company is considering OpenShift because of an existing relationship with the vendor, but adding a new layer of orchestration and managing multiple control planes for VMs and containers would also be difficult, the DevOps leader predicted, particularly when it comes to IT ops processes such as security patching.

“Why invite that mess when you already have your hands full with a number of packaged containers that you’re going to have to develop security patching processes for?” he said.

Vendors’ Kubernetes implementations offer stability, support

Fork is a fighting word in the open source world, and most vendors say their Kubernetes implementations don’t diverge from pure Kubernetes code. And early adopters of vendors’ Kubernetes implementations said enterprise support and security features are the top priorities as they roll out container orchestration tools, rather than conformance with upstream code, per se.

Amadeus, a global travel technology company, is an early adopter of Red Hat OpenShift. As such, Dietmar Fauser, vice president of core platforms and middleware at Amadeus, said he doesn’t worry about security patching or forked Kubernetes from Red Hat. While Red Hat could theoretically choose to deviate from, or fork, upstream Kubernetes, it hasn’t done so, and Fauser said he doubts the vendor ever will.

Meanwhile, Amadeus is on the cusp of multi-cloud container portability, with instances of OpenShift on Microsoft Azure, Google and AWS public clouds in addition to its on-premises data centers. Fauser said he expects the multi-cloud deployment process will go smoothly under OpenShift.


“Red Hat is very good at maintaining open source software distributions, patching is consistent and easy to maintain, and I trust them to maintain a portable version of Kubernetes,” Fauser said. “Some upstream Kubernetes APIs come and go, but Red Hat’s approach offers stability.”

Docker containers and Kubernetes are de facto standards that span container environments and provide portability, regardless of which vendor’s Kubernetes implementation is in place, said Surya Suravarapu, assistant vice president of product development for Change Healthcare, a healthcare information technology company in Nashville, Tenn., that spun out of McKesson in March 2017.

Suravarapu declined to specify which vendor’s container orchestration tools the company uses, but said Change Healthcare uses multiple third-party Kubernetes tools and plans to put containers into production this quarter.

“Multi-tenancy support and a DevOps platform on top of Kubernetes were what made us want to go with third-party vendors,” Suravarapu said. “The focus is on productivity improvements for our IT teams, where built-in tooling converts code to container images with the click of a button or one CLI [command-line interface] line, and compliance and security policies are available to all product teams.”

A standard way to manage containers in Kubernetes offers enough consistency between environments to improve operational efficiency, while portability between on-premises, public cloud and customer environments is a longer-term goal, Suravarapu said.

“We’re a healthcare IT company,” he added. “We can’t just go with a raw tool without 24/7 enterprise-level support.”

Still, Amadeus's Fauser acknowledged there's risk in trusting one vendor's Kubernetes implementation, especially when that implementation is one of the more popular market options.

“Red Hat wants to own the whole ecosystem, so there’s the danger that they could limit other companies’ access to providing plug-ins for their platform,” he said.

That hasn’t happened, but the risk exists, Fauser said.

NetApp AI storage packages OnTap all-flash FAS A800, Nvidia

NetApp is jumping into the AI storage market with a platform that combines OnTap-powered All Flash FAS arrays with Nvidia supercomputers.

NetApp AI OnTap Converged Infrastructure is a validated architecture combining NetApp's FAS A800 all-flash NVMe array for NFS storage with integrated Nvidia DGX-1 servers and graphics processing units (GPUs). NetApp said the reference design verified four DGX servers to one FAS A800, although customers can start with a 1-1 ratio and nondisruptively scale as needed.

 “The audience for this type of architecture is data scientists,” said Octavian Tanase, senior vice president of NetApp’s Data OnTap system group, during a live webcast this week. “We want to make it simple for them. We want to eliminate complexity (and give) a solution that can be integrated and deployed with confidence, from the edge to the core to the cloud. We believe they will be able to adopt this with a lot of confidence.”

The product is intended to help companies implement data analytics that bridges a core data center, edge computing and cloud environment, said Jim McHugh, a vice president and general manager at Nvidia Corp. He said Nvidia DGX processors build on the foundation of Nvidia Cuda GPUs developed for general-purpose computing.

“Every industry is figuring ‘we need better insights,’ but better insights means a new computing block,” McHugh said. “Data is really the new source code. When you don’t spend time writing all the features and going through QA, you’re letting data drive the solutions. That takes an incredible amount of parallel computing.”

The joint NetApp-Nvidia product reflects a surge in AI and machine learning, which requires scalable storage to ingest reams of data and highly powerful parallel processing to analyze it.

In April, NetApp rival Pure Storage teamed with Nvidia to launch an AI storage platform with DGX-1 GPUs and Pure’s scale-out NAS FlashBlade array.

Capacity and scaling of NetApp AI OnTap

The NetApp FAS A800 system supports 30 TB NVMe SSDs with multistream write capabilities, scaling to 2 PB of raw capacity in a 2U shelf. The system scales from 364 TB in two nodes to 24 nodes and 74 PB. NetApp said a 24-node FAS A800 cluster delivers up to 300 gigabits per second of throughput and 11.4 million IOPS. It supports 100 Gigabit Ethernet and 32 Gbps Fibre Channel network connectivity.

The NetApp AI storage platform is tested to minimize deployment risks, the vendors said. A NetApp AI OnTap cluster can scale to multiple racks with additional network switches and storage controller pairs. The product integrates NetApp Data Fabric technologies to move AI data between edge, core and cloud environments, Tanase said.

NetApp AI OnTap is based on OnTap 9.4, which handles enterprise data management, protection and replication. Each DGX server packs eight Nvidia Tesla V100 GPUs, configured in a hybrid cube-mesh topology to use Nvidia’s NVLink network transport as high-bandwidth, low-latency fabric. The design is intended to eliminate traffic bottlenecks that occur with PCIe-based interconnects.

DGX-1 servers support multimode clustering via Remote-Direct-Memory-Access-capable fabrics. 

Enterprises struggle to size, deploy AI projects

AI storage is a hot topic among enterprise customers, said Scott Webb, who manages the global storage practice at World Wide Technologies (WWT) in St. Louis, a NetApp technology partner.

“In our customer workshops, AI is now a main use case. Customers are trying to figure out the complexity. DGX and AI on a NetApp flash back end is a winning combination. It’s not only the performance, but the ability for a customer to start small and [scale] as their use cases grow,” Webb said.

John Woodall, a vice president of engineering at systems integrator Integrated Archive Systems, based in Palo Alto, Calif., cited NetApp Data Fabric as a key enabler for AI storage deployments.

"The speeds and feeds are very important in AI, but that becomes a game of leapfrog. With the Data Fabric, I think NetApp has been able to give customers more control about where to apply their assets," Woodall said.

Vendors race to adopt Google Contact Center AI

Google has released a development platform that will make it easier for businesses to deploy virtual agents and other AI technologies in the contact center. The tech giant launched the product in partnership with several leading contact center vendors, including Cisco and Genesys. 

The Google Contact Center AI platform includes three main features: virtual agents, AI-powered assistance for human agents and contact center analytics. Google first released a toolkit for building conversational AI bots in November and updated the platform this week, with additional tools for contact centers.

The virtual agents can help resolve common customer inquiries using Google’s natural language processing platform, which recognizes voice and textual inputs. Genesys, for example, demonstrated how the chatbot could help a customer return ill-fitting shoes before passing the phone call to a human agent, who could help the customer order a new pair.

Google’s agent assistance system scans a company’s knowledge bases, such as FAQs and internal documents, to help agents answer customer questions faster. The analytics tool reviews chats and call recordings to identify customer trends, assisting in the training of live agents and the development of virtual agents.
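At its core, agent assistance is a retrieval problem: match the customer's question against the knowledge base and surface the best candidate answer. A deliberately simple word-overlap sketch of that idea (Google's actual system uses far richer natural language models; the FAQ entries here are invented):

```python
# Hedged sketch of the agent-assist idea: score knowledge-base entries by
# word overlap with the customer's question and suggest the best match.

def suggest_answer(question: str, faq: dict) -> str:
    """Return the answer whose FAQ question shares the most words."""
    q_words = set(question.lower().split())
    best = max(faq, key=lambda entry: len(q_words & set(entry.lower().split())))
    return faq[best]

faq = {
    "how do i return shoes": "Use the prepaid label in your order email.",
    "where is my order": "Track it at the link in your confirmation.",
}
print(suggest_answer("I want to return these shoes", faq))
```

Even this crude scorer shows why the approach speeds agents up: the lookup happens while the conversation is still in progress, instead of the agent searching documents manually.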

Vendors rush to adopt Google Contact Center AI

Numerous contact center vendors that directly compete with one another sent out strikingly similar press releases on Tuesday about their adoption of Google Contact Center AI. The Google platform is available through partners Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Twilio, Appian and Upwire.

“I don’t think I’ve ever heard of a launch like this, where almost every player — except Avaya — is announcing something with the same company,” said Jon Arnold, principal analyst of Toronto-based research and analysis firm J Arnold & Associates.

Avaya was noticeably absent from the list of partners. The company spent most of 2017 in bankruptcy court and was previously faulted by critics for failing to pivot to the cloud quickly enough. The company said at a conference earlier this year it was developing AI capabilities internally, said Irwin Lazar, an analyst at Nemertes Research, based in Mokena, Ill.

An Avaya spokesperson said its platforms integrated with a range of AI technologies from vendors, including Google, IBM, Amazon and Nuance. “Avaya does have a strong relationship with Google, and we continue to pursue opportunities for integration on top of what already exists today,” the spokesperson said.

Google made headlines last month with the release of Google Duplex, a conversational AI bot targeting the consumer market. The company demonstrated how the platform could pass as human during short phone conversations with a hair salon and restaurant. Google’s Contact Center AI was built on some of the same infrastructure, but it’s a separate platform, the company said.

“Google has been pretty quiet. They are not a contact center player. But as AI keeps moving along the curve, everyone is trying to figure out what to do with it. And Google is clearly one of the strongest players in AI, as is Amazon,” Arnold said.

Because it relies overwhelmingly on advertising revenue, Google doesn’t need its Contact Center AI to make a profit. Google will be able to use the data that flows through contact centers to improve its AI capabilities. That should help it compete against Amazon, which entered the contact center market last year with the release of Amazon Connect.

The contact center vendors now partnering with Google had already been racing to develop or acquire AI technologies on their own, and some highlighted how their own AI capabilities would complement Google’s offering. Genesys, for example, said its Blended AI platform — which combines chatbots, machine learning and analytics — would use predictive routing to transfer calls between Google-powered chatbots and live agents.  

“My sense with AI is that it will be difficult for vendors to develop capabilities on their own, given that few can match the computing power required for advanced AI that vendors like Amazon, Google and Microsoft can bring to the table,” Lazar said.

WSO2 integration platform twirls on Ballerina language

The latest version of WSO2’s open source integration platform strengthens its case to help enterprises create and execute microservices.

The summer 2018 release of the WSO2 Integration Agile Platform, introduced at the company’s recent annual WSO2Con user conference, supports what the company calls an “integration agile” approach to implement microservices. Software development has moved to an agile approach, but legacy Waterfall approaches can stymie the integration process, and WSO2 aims to change that.

Solving the integration challenge

Integration remains a primary challenge for enterprise IT shops. The shift of automation functions to the public cloud complicates enterprises’ integration maps, but at the same time, enterprises want to use microservices and serverless architectures, which require new integration architectures, said Holger Mueller, an analyst at Constellation Research in San Francisco.

Improvements to the WSO2 integration platform, such as integration of the WSO2 API management, enterprise integration, real-time analytics and identity and access management options, aim to help enterprises adopt agile integration as they move from monolithic applications to microservices as part of digital transformation projects. The company also introduced a new licensing structure for enterprises to scale their microservices-based apps.

In addition, WSO2 Integration Agile Platform now supports the open source Ballerina programming language, a cloud-native programming language built by WSO2 and optimized for integration. The language features a visual interface that suits noncoding business users, yet also empowers developers to write code to integrate items rather than use bulky configuration-based integration schemes.

“Ballerina has a vaguely Java-JavaScript look and feel,” said Paul Fremantle, CTO of WSO2. “The concurrency model is most similar to Go and the type system is probably most similar to functional programming languages like Elm. We’ve inherited from a bunch of languages.” Using Ballerina, University of Oxford students finished projects in 45 minutes that typically took two hours in other languages, Fremantle said.

Some early Ballerina adopters requested more formal support, so WSO2 now offers a Ballerina Early Access Development Support package with unlimited query support to users, but this is only available until Ballerina 1.0 is released later this year, Fremantle said. Pricing for the package is $500 per developer seat, with a minimum package of five developers.

Paul Fremantle, CTO of WSO2, demoing Ballerina at BallerinaCon.

Integration at the heart of PaaS

Integration technology is central functionality for all PaaS offerings that aim to ease enterprise developers and DevOps pros into microservices, serverless computing, and even emerging technologies like blockchain, said Charlotte Dunlap, an analyst at GlobalData in Santa Cruz, Calif. WSO2 offers a competitive open source alternative to pricier options from larger rivals such as Oracle, IBM, and SAP, though it’s more of a “second tier” integration and API management provider and lacks the brand recognition to attract global enterprises, she said.

Nevertheless, Salesforce’s MuleSoft acquisition earlier this year exemplifies the importance of smaller integration vendors. Meanwhile, Red Hat offers integration and API management options, and public cloud platform providers will also build out these services.

GCP Marketplace beats AWS, Azure to Kubernetes app store

Google Cloud Platform has made the first foray into a new frontier of Kubernetes ease of use with a cloud app store that includes apps preconfigured to run smoothly on the persnickety container orchestration platform.

As Kubernetes becomes the de facto container orchestration standard for enterprise DevOps shops, helping customers tame the notoriously complex management of the platform has become de rigueur for software vendors and public cloud service providers. Google stepped further into that territory with GCP Marketplace, a Kubernetes app store with application packages that can automatically be deployed onto container clusters with one click and then billed through Google Cloud Platform as a service.

A search of the AWS Marketplace for “Kubernetes” turned up Kubernetes infrastructure packages by Rancher, Bitnami and CoreOS, but not prepackaged apps from vendors such as Nginx and Elastic ready to be deployed on Kubernetes clusters, which is what GCP Marketplace offers. Another search of the Azure Marketplace returned similar results.

Just because Google is first to market with this Kubernetes app store doesn’t mean that what the company has done is magic.

“These marketplaces are based on Kubernetes template technologies such as Helm charts, so they’re widely available to everyone,” said Gary Chen, an analyst at IDC in Framingham, Mass. “I’m sure if [AWS and Azure] don’t have it, they are already working on something like this.”
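Chen's point is that a marketplace package is, at bottom, a parameterized manifest: user-supplied values are rendered into Kubernetes YAML before deployment. A toy sketch of that templating idea (real Helm uses Go templates and values files; the manifest fields here are standard Kubernetes, the values are illustrative):

```python
# Toy sketch of chart-style templating: fill user-supplied values into a
# parameterized Kubernetes manifest, as Helm does before deploying it.

MANIFEST_TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
spec:
  replicas: {replicas}
"""

def render(template: str, values: dict) -> str:
    """Substitute values into a chart-like template."""
    return template.format(**values)

manifest = render(MANIFEST_TEMPLATE, {"name": "nginx", "replicas": 3})
print(manifest)
```

Because the templating layer is open and simple, nothing stops AWS or Azure from offering the same one-click packaging, which is exactly Chen's argument.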

Google executives said Helm charts factored into some of the app packages it created with partners, but in some cases it created GCP Marketplace offerings by way of its work with independent software vendors.

“Our approach gives vendors flexibility to use Helm or other packaging mechanisms, given that there isn’t a clear standard today,” the company said through a spokesperson.

A screenshot of Kubernetes apps in the revamped GCP Marketplace.

Initially, GCP Marketplace apps offer click-to-deploy support onto Kubernetes clusters that run in Google Kubernetes Engine, and GKE does telemetry monitoring and logging on those apps in addition to offering billing support.

But there's nothing that precludes these packages from eventually running on premises or even in other public cloud providers' infrastructures, said Jennifer Lin, product director for Google Cloud.

“It’s always within the realm of possibility, but not something we’re announcing today,” she said.

There is precedent for GCP products with third-party cloud management capabilities — the Google Stackdriver cloud monitoring tool, for example, can be used with AWS and Azure resources.

Initial app partners include the usual suspects among open source cloud-native infrastructure and middleware projects such as GitLab, Couchbase, CloudBees, Cassandra, InfluxDB, Elasticsearch, Prometheus, Nginx, RabbitMQ and Apache Spark. Commonly used web apps such as WordPress and a container security utility from Aqua Security Software will also be available in the initial release of GCP Marketplace.

Mainstream enterprise customers will look for more traditional apps such as MySQL, SQL Server and Oracle databases, as well as back-office and productivity apps. Lin said Google plans more mainstream app support, but she declined to specify which apps are on the GCP Marketplace roadmap.

Microsoft Azure platform sparks partner offerings

With Microsoft Azure platform revenue doubling, channel partners are rolling out services and products to spark further adoption and consumption of the public cloud environment.

A number of Azure-oriented partner offerings were unveiled at Microsoft Inspire 2018, the company’s annual partner conference, which concludes today, July 19, in Las Vegas. The launches run the gamut from hybrid cloud bundles to workspace products, but all aim to take advantage of Azure’s market momentum and its status as a pivotal Microsoft platform.

Jason Zander, executive vice president of the Microsoft Azure team in the company’s cloud and AI group, said Azure experienced 100% year-over-year consumed revenue growth. That growth, he said, translates into partner momentum, noting that every dollar of Azure cloud consumption drives $5 of partner services business.

In addition, the Microsoft Azure platform lies at the heart of the company’s vision of a ubiquitous computing fabric that extends from the edge to the cloud.

“The core of the intelligent cloud and the intelligent edge is Microsoft Azure,” Zander said.

Partners build on the Microsoft Azure platform

Partners showcasing offerings for the Microsoft Azure platform at Inspire included Dell EMC, which expanded the Azure Stack hardware bundle it debuted in 2017. Azure Stack extends the Azure public cloud to private settings, such as service provider or end-customer data centers.

Dell EMC’s new Azure Stack additions include an all-flash VxRack Azure Stack configuration option, an automated hyper-converged infrastructure (HCI) patch and updated orchestration tool, and SecureVM integration available via Azure Marketplace. In addition, Dell EMC now lets customers and partners acquire Azure Stack through its Cloud Flex pay-as-you-go consumption model, which the company offers to encourage adoption of its HCI product line. Dell EMC treats its Azure Stack hardware bundles as an HCI offering.

The upshot for Dell EMC’s channel partners is the ability to rapidly roll out Azure Stack to customers, said Paul Galjan, senior director of product management and engineering for Azure Stack at Dell EMC.

A chart of the top IaaS providers worldwide shows Microsoft Azure has solidified its position among the leaders.

“From a channel partner perspective, this is something their customers are interested in,” Galjan said. “Any customer that has a Microsoft cloud strategy will be talking to them about Azure Stack.”

Azure-based offerings on the rise


Meanwhile, CloudJumper, a workspace-as-a-service platform provider, launched Cloud Workspace for Azure at Microsoft Inspire 2018. The platform links CloudJumper’s Cloud Workspace Management Suite with Microsoft’s Remote Desktop modern infrastructure (RDmi). The integration provides increased visibility into users’ Azure, Office 365 and Cloud Workspace experiences, according to the company.

Max Pruger, chief revenue officer at CloudJumper, cited the uptick in offerings around the Microsoft Azure platform as a key development at the partner conference.

“One of the clear takeaways from Inspire is the rise in Azure-based solutions, as organizations further integrate their cloud-forward IT initiatives,” he said. “Microsoft is capitalizing on this, and the conference is relaying their vision to build out the modern workspace with the integration of [Office] 365, Azure Active Directory Sync and RDmi — all built on top of the Azure stack.”

Other partners showcasing Microsoft Azure platform offerings include Atmosera, a managed Azure solutions provider based in Beaverton, Ore. The company featured its Three-Tier Azure Management Suite at Microsoft Inspire 2018. The suite delivers managed, comanaged and self-managed Azure solutions.

“There’s a tremendous opportunity — and an equal amount of pressure to do so — for Microsoft partners to innovate, embrace new capabilities and leverage Azure for business outcomes,” said Jon Thomsen, CEO at Atmosera.

Microsoft Azure Dev Spaces, Google Jib target Kubernetes woes

To entice developers to create more apps on their environments, major cloud platform companies will meet them where they live.

Microsoft and Google both released tools to help ease app development on their respective platforms, Microsoft Azure and the Google Cloud Platform. Microsoft’s Azure Dev Spaces and Google Jib help developers build applications for the Kubernetes container orchestrator and Java environments and represent a means to deliver simpler, developer-friendly technology.

Microsoft’s Azure Dev Spaces, now in public preview, is a cloud-native development environment for the company’s Azure Kubernetes Service (AKS), where developers can work on applications while connected with the cloud and their team. These users can build cloud applications with containers and microservices on AKS without dealing with infrastructure management or orchestration, according to Microsoft.

As Kubernetes further commoditizes deployment and orchestration, cloud platform vendors and public cloud providers must focus on how to simplify customers’ implementation of cloud-native development methods — namely DevOps, CI/CD and microservices, said Rhett Dillingham, an analyst at Moor Insights & Strategy in Austin, Texas.

“Azure Dev Spaces has the potential to be one of Microsoft’s most valuable recent developer tooling innovations, because it addresses the complexity of integration testing and debugging in microservices environments,” he said.


With the correct supporting services, developers can fully test and deploy in Microsoft Azure, added Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.

“This would benefit the developer, as it eases the process of container development by allowing them to see the results of their app without having to set up a Docker or Kubernetes environment,” he said.

Meanwhile, Google’s Jib containerizer tool enables developers to package a Java application into a container image with the Java tools they already know. And, like Azure Dev Spaces, it handles much of the underlying infrastructure and orchestration work.


Integration with the Java build tools Maven and Gradle means Java developers can skip the step of creating JAR, or Java ARchive, files and then containerizing them, Yuen said.

“Like Azure Dev Spaces, it’s about simplifying the experience — this time, not the laptop jump, but the jump from JAR to container,” he said. “But, again, the developer is eased into the process by using existing tools and eliminating the need to set up Docker or Kubernetes.”

Jib also extends Google’s association with the open source community to provide Java developers an easy path to containerize their apps while using the Google Cloud Platform, Yuen added.
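As a sketch of the workflow Yuen describes, applying Jib’s Gradle plugin is roughly all a plain Java project needs to produce a container image (the plugin version and registry path below are assumptions for illustration, not details from this coverage):

```groovy
// build.gradle -- the Jib plugin turns a standard Java build into a container image
plugins {
    id 'java'
    id 'com.google.cloud.tools.jib' version '0.9.9'  // a 2018-era version; check for the current release
}

jib {
    to {
        image = 'gcr.io/my-gcp-project/my-app'  // hypothetical target registry path
    }
}
```

With that in place, `./gradlew jib` builds and pushes the image directly from the build, with no Dockerfile and no local Docker daemon, which is the JAR-to-container jump Yuen refers to.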

For Sale – Patriot Black Mamba 1866 2x4GB DDR3

Hello,

I am selling my Patriot RAM, due to a recent upgrade to Z370 platform…

Price and currency: £40
Delivery: Delivery cost is included within my country
Payment method: BT.
Location: Edinburgh, Scotland.
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference
