
Google buys AppSheet for low-code app development

Google has acquired low-code app development vendor AppSheet in a bid to up its cloud platform’s appeal among line-of-business users and tap into a hot enterprise IT trend.

Like similar offerings, AppSheet ingests data from sources such as Excel spreadsheets, Smartsheet and Google Sheets. Users apply views to the data — such as charts, tables, maps, galleries and calendars — and then develop workflows with AppSheet’s form-based interface. The apps run on Android, iOS and within browsers.

AppSheet, based in Seattle, already integrated with G Suite and other Google cloud sources, as well as Office 365, Salesforce, Box and other services. The company will continue to support and improve those integrations following the Google acquisition, AppSheet CEO Praveen Seshadri said in a blog post.

“Our core mission is unchanged,” Seshadri said. “We want to ‘democratize’ app development by enabling as many people as possible to build and distribute applications without writing a line of code.”

Terms of the deal were not disclosed, but the price tag for the low-code app development startup is likely far less than Google’s $2.6 billion acquisition of data visualization vendor Looker in June 2019.

Under the leadership of former longtime Oracle executive Thomas Kurian, Google Cloud was expected to make a series of deals to shore up its position in the cloud computing market, where it trails AWS and Microsoft by a significant margin.

So far, Kurian has not made moves to buy core enterprise applications such as ERP and CRM, two markets dominated by the likes of SAP, Oracle and Salesforce. Rather, the AppSheet purchase reflects Google Cloud’s perceived strength in application development, but with a gesture toward nontraditional coders.

As for why Google chose AppSheet to boost its low-code/no-code strategy, one reason could be the dwindling number of options. In the past couple of years, several prominent low-code/no-code vendors became acquisition targets. Notable examples include Siemens’ August 2018 purchase of Mendix for $730 million, and more recently, Swiss banking software provider Temenos’ move to buy Kony in a $559 million deal.

It’s not as if Google, Siemens and Temenos made a long shot bet, either. A survey released last year by Forrester Research, based on data collected in late 2018, found that 23% of more than 3,000 developers surveyed reported their companies were already using low-code development platforms. In addition, another 22% indicated their organizations would buy into low-code within a year.

Google’s purchase of AppSheet gives it low-code app dev tools for business users, fostering quick creation of business data-driven mobile apps.

Low-code competition heightens

Google’s AppSheet buy pits it directly against cloud platform rival Microsoft, whose citizen developer-targeted Power Apps low-code development platform has taken off like a rocket, said John Rymer, an analyst at Forrester. The acquisition of AppSheet also sets Google apart from cloud market share leader AWS, whose rumored low-code/no-code platform, said to be under development by a team led by prominent development guru Adam Bosworth, has yet to appear.

However, in AppSheet, Google is getting a winner, Rymer noted. “It’s a really good product and a really good team,” he said.

Moreover, the addition of AppSheet will help Google get more horsepower out of Apigee than just API management. The company wanted a broader platform with more functionality to address more customers and more use cases, Rymer said.

“So, I think they will be positioning this as a new platform anchored by Apigee,” he said. “Customers could use Apigee to create and publish APIs and AppSheet is how they would consume them. But they won’t stop there. They need process automation/workflow, so I would expect them to go there as well.”


Meanwhile, another key benefit Google gains from this acquisition is the integration that AppSheet already has with Google’s office productivity products, said Jeffrey Hammond, another Forrester analyst.

“G Suite has always felt a bit out of place to me at Google’s developer conferences, but it used to be one of the main ‘leads’ for the enterprise,” he said. “AppSheet gives Google the potential to craft a more cohesive story that integrates that with Google Cloud and Anthos in the future.”

Overall, this acquisition is yet another indication that low-code/no-code development has gone mainstream and the number of people building applications will continue to grow.


Customer data platform tools top priority list for big vendors

Adobe, Microsoft and Oracle released their own customer data platforms in 2019 to compete with roughly 20 CDP vendors that had been serving their users for several years. Salesforce and SAP plan to follow with their own CDPs in 2020.

Those large customer experience (CX) platforms face challenges in the marketplace. Customer data platforms, which unify data across marketing automation, customer service, sales and e-commerce applications, solve a problem the big vendors created: A lack of data flow between CX applications, according to David Raab, founder of the CDP Institute.

“Nothing else has solved it; nothing comes close to solving it,” Raab said.

That data is siloed, typically, because the vendors built their CX technologies via acquisition and continue to have difficulty integrating the marketing, sales and service clouds comprising their platforms. But users demand it, as they see value in real-time access to customer data from all channels at once. They want machine learning and analytics running with those systems, pulling data from across the platform in order to create one-to-one customer offers, in real time, to drive sales and marketing campaigns.
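The unification step these vendors are chasing can be sketched in a few lines. This is a deliberately simplified illustration that assumes exact-match identity resolution on email address; real CDPs also use probabilistic matching across device IDs, cookies and CRM keys. All data and field names here are invented.

```python
# Toy CDP merge: fold records about the same customer from several
# channel systems into one unified profile, keyed on email address.
from collections import defaultdict

marketing = [{"email": "ana@example.com", "campaign": "spring-sale"}]
service = [{"email": "ana@example.com", "open_tickets": 1}]
ecommerce = [{"email": "ana@example.com", "lifetime_value": 420.0}]

def unify(*sources):
    """Merge records sharing an email into a single profile dict."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["email"]].update(record)
    return dict(profiles)

profiles = unify(marketing, service, ecommerce)
print(profiles["ana@example.com"])
```

A real platform would add timestamps, conflict resolution and consent tracking on top of this merge, but the core value proposition is this single, cross-channel profile.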

Tech buyers must choose big vs. small CDPs

The question is, as many B2B companies are just starting to digitally transform their commerce, will they purchase new tools from the big vendors like Oracle and Microsoft, or go with more established and technically advanced CDPs from the smaller companies, such as Lytics, Lotame, Arm Treasure Data and RedPoint Global?

“Buyers are within their rights to be skeptical [of the big-box vendors],” said Gartner analyst Benjamin Bloom. “That vendor who might not have delivered the thing that you were looking for — or [caused] unintended challenges or consequences — now they are exactly the ones who are telling you how to clean up the mess [with their new CDP].”

Smaller CDP vendors tend to be nimbler and more responsive to customer needs for features and integrations with analytics tools and outside applications, Bloom said. He expects them to keep their users for some time to come as the larger platform vendors play catch-up, a view Raab shares.

Customer data platforms unify data from applications tracking customer web behavior, sales, e-commerce and other sources to create personalized marketing profiles and drive revenue.

Yet another option has become available for technology buyers tasked with building customer experiences: the digital experience platform. These typically arise from cloud content management vendors that are moving into customer experience. Acquia acquired CDP vendor AgilOne earlier this month to assemble a marketing automation and e-commerce platform with more sophisticated web content management than all the large CX platform vendors, with the exception of Adobe.

One of Acquia’s main competitors that also offers a CDP, Episerver, is moving more deeply into digital experience. It expanded its B2B e-commerce offering by acquiring InSite Software this month, and hired former SAP CX platform lead Alex Atzberger as CEO to oversee its digital experience technologies.

So many companies are building B2B e-commerce operations from scratch, said Gartner analyst Jason Daigler, that it doesn’t surprise him to see content management vendors challenge companies like Oracle and SAP for customers. He sees the appeal of combining strong content management with e-commerce.


“Most commerce platforms were not built with the best content management systems out there; they’re not known for their digital experience capabilities,” Daigler said.

Salesforce takes a different approach

Other experts wonder whether or not CDPs are the answer to collecting real-time data from disparate sources such as social media, sales and marketing channels. Because this data is always imperfect and the customer golden record is a mythical concept, said self-described CDP skeptic Constellation Research analyst Nicole France, the CDP may be a “fool’s errand.”

Salesforce is coming out with “a CDP that’s not a CDP,” as the company described it in analyst previews, France said. Salesforce may solve the problems CDPs are meant to fix with upcoming features that amount to integrations and APIs connecting data and unifying customer profiles with MuleSoft tools, rather than a whole new database.

“I do think that having the right data in the right time is really important to delivering a good customer experience,” France said. “Does that require a single database? I don’t think it does, but it does require good understanding of data, where it is, and how to put it together.”


Nuage Networks, Talari SD-WAN tack on multi-cloud connectivity

Software-defined WAN vendors are rushing to enhance their SD-WAN platforms with multi-cloud support, as more enterprises and service providers migrate their workloads to the cloud. This week, both Nuage Networks and Talari made multi-cloud connectivity announcements of their own.

Nuage Networks, a Nokia company, updated its SD-WAN platform — Virtualized Network Services — to better support SaaS and multi-cloud connectivity.

The enhancement addresses three specific customer pain points, according to Hussein Khazaal, Nuage’s vice president of marketing and partnerships: multi-cloud connectivity, value-added services and end-to-end security. All three capabilities are already available to customers.

“It’s a single platform that you can deploy today and get connectivity to software as a service,” Khazaal said. “We support customers as they send traffic directly from the branch to the SaaS application.”

In addition to multi-cloud connectivity, Nuage VNS offers customers the option to add value-added services — or virtual network functions (VNFs) — that can be embedded within the SD-WAN platform, hosted in x86 customer premises equipment (CPE) or through service chaining (a set of network services interconnected through the network to support an application). These VNFs are available from more than 40 third-party partners and can include services like next-generation firewalls, voice over IP and WAN optimization, Khazaal said.

While many service providers are leaning toward the VNF and virtual CPE approach, the process isn’t simple, according to Lee Doyle, principal analyst at Doyle Research.

“Many service providers are finding the vCPE and VNF approach side to be challenging,” Doyle said. “Those with the resources can, and will, pursue it, and that’s where Nuage could be a piece of the puzzle.”

When it comes to enterprise customers, however, the VNF approach is less attainable, both Doyle and Khazaal noted.

“Nuage is one piece of the puzzle that a customer might add if they’re able to do it themselves,” Doyle said. “But most customers don’t want to piece together different elements.”

For smaller enterprise customers, Khazaal recommended using the option with embedded features, like stateful firewall and URL filtering, built into the SD-WAN platform.

Although Nuage has more than 400 enterprise customers, according to a company statement, its primary market is among service providers. Nuage counts more than 50 service providers as partners that offer managed SD-WAN services — including BT, Cogeco Peer 1, Telefónica and Vertel — and has been a proven partner for service providers over the years, Doyle said.

“Nuage is a popular element of service providers’ managed services strategies, including SD-WAN,” he said. “These enhancements will be attractive mainly to the service providers.”

Nuage VNS is available now with perpetual and subscription-based licenses; pricing varies based on desired features and capabilities.

Talari launches Cloud Connect for SaaS, multi-cloud connectivity

In an additional multi-cloud move, Talari updated its own SD-WAN offering with Talari Cloud Connect, a platform that supports access to cloud-based and SaaS applications.

Talari also named five accompanying Cloud Connect partners: RingCentral, Pure IP, Evolve IP, Meta Networks and Mode. These partners will run Talari’s Cloud Connect point of presence (POP) technology in their own infrastructure, creating a tunnel from the customer’s Talari software into the cloud or SaaS service, according to Andy Gottlieb, Talari’s co-founder and chief marketing officer.

“The technology at the service provider is multi-tenant, so they only have to stand up one instance to support multiple customers,” Gottlieb said. Meantime, enterprises can use the Cloud Connect tunnel without having to worry about building infrastructure in the cloud, which reduces costs and complexity, he added.

Talari’s partner list reflects the demands of both customers and service providers, he said. Unified communications vendors like RingCentral, for example, require reliable connectivity and low latency for their applications. Meta Networks, on the other hand, offers cloud-based security capabilities, which enterprises are increasingly adding to their networks. Talari SD-WAN already supports multi-cloud connectivity to Amazon Web Services and Microsoft Azure.

Talari Cloud Connect will be available at the end of October. The software comes at no additional charge for Talari customers with maintenance contracts or with subscriptions, Gottlieb said. Also, Cloud Connect partners can use the Cloud Connect POP software free of charge to connect to Talari SD-WAN customers, he added.

Web cache poisoning attacks demonstrated on major websites, platforms

Major websites and platforms may be vulnerable to simple yet devastating web cache poisoning attacks, which could put millions of users in jeopardy.

James Kettle, head of research at PortSwigger Web Security Ltd., a cybersecurity tool publisher headquartered near Manchester, U.K., demonstrated several such attacks during his Black Hat 2018 session titled “Practical Web Cache Poisoning: Redefining ‘Unexploitable.’” Kettle first unveiled his web cache poisoning hacks in May, but in the Black Hat session he detailed his techniques and showed how unkeyed HTTP request headers allowed him to compromise popular websites and manipulate platforms such as Drupal and Mozilla’s Firefox browser.

“Web cache poisoning is about using caches to save malicious payloads so those payloads get served up to other users,” he said. “Practical web cache poisoning is not theoretical. Every example I use in this entire presentation is based on a real system that I’ve proven can be exploited using this technique.”

As an example, Kettle showed how he was able to use a simple technique to compromise the home page of Linux distributor Red Hat. He created an open source extension for PortSwigger’s Burp Suite Scanner called Param Miner, which detected unkeyed inputs in the home page. From there, Kettle was able to change the X-Forwarded-Host header and load a cross-site scripting payload to the site’s cache and then craft responses that would deliver the malicious payload to whoever visited the site. “We just got full control over the home page of RedHat.com, and it wasn’t very difficult,” he said.
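The class of flaw Kettle exploited can be simulated in a few lines. In this hedged sketch, the cache keys responses on the URL path alone while the application reflects the X-Forwarded-Host request header into the page; the attacker's first response gets cached and is then served to every subsequent visitor. The host names and markup are illustrative, not Red Hat's actual code.

```python
# Simulated web cache poisoning via an unkeyed request header.
cache = {}

def app(path, headers):
    # The application trusts X-Forwarded-Host when generating links.
    host = headers.get("X-Forwarded-Host", "www.example.com")
    return f'<script src="https://{host}/assets/app.js"></script>'

def fetch(path, headers):
    # Unkeyed input: the cache key ignores request headers entirely.
    if path not in cache:
        cache[path] = app(path, headers)  # first response gets cached
    return cache[path]

# The attacker primes the cache with a malicious host...
fetch("/", {"X-Forwarded-Host": "evil.example"})
# ...and an ordinary visitor now receives the attacker's script.
print(fetch("/", {}))
```

The fix mirrors the bug: either include every input that influences the response in the cache key (e.g., via a Vary header), or stop trusting the header.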

In another test case, Kettle used web cache poisoning on the infrastructure for Mozilla’s Firefox Shield, which Mozilla uses to push application and plug-in updates. When the Firefox browser initially loads, it contacts Shield for updates and other information, such as “recipes” for installing extensions. While testing a Data.gov site, Kettle noticed an “origin: null” header from Mozilla and discovered he could manipulate the “X-Forwarded-Host” header so that, instead of fetching recipes from Firefox Shield, Firefox would be directed to a domain he controlled.

Kettle found that Mozilla signed the recipes, so he couldn’t simply make a malicious extension and install it on 50 million computers. But he discovered he could replay old recipes, specifically one for an extension with a known vulnerability; he could then compromise that extension and forcibly inflict that vulnerable extension on every Firefox browser in the world.

“The end effect was I could make every Firefox browser on the planet connect to my system to fetch this recipe, which specified what extensions to install,” he said. “So that’s pretty cool because that’s 50 million browsers or something like that.”

Kettle noted in his research that when he informed Mozilla of the technique, they patched it within 24 hours; but, he wrote, “there was some disagreement about the severity so it was only rewarded with a $1,000 bounty.”

Kettle also demonstrated techniques that allowed him to compromise GoodHire.com, blog.Cloudflare.com and several sites that use Drupal’s content management platform. While the web cache poisoning attacks he demonstrated were potentially devastating, Kettle said they could be mitigated with a few simple steps. First, he said, organizations should “cache with caution” and, if possible, disable caching completely.

However, Kettle acknowledged that may not be realistic for larger enterprises, so in those cases he recommended diligently scanning for unkeyed inputs. “Avoid taking input from HTTP headers and cookies as much as possible,” he said, “and also audit your applications with Param Miner to see if you can find any unkeyed inputs that your framework has snuck in support for.”
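The scanning approach Kettle describes boils down to probing each candidate header with a unique canary value and checking whether it is reflected in a response the cache would serve under an unchanged key. Here is a toy version, with a fake origin standing in for a real site; none of this is Param Miner's actual code.

```python
# Toy unkeyed-input scanner: probe headers with unique canaries and
# flag any header whose value leaks into the response body.
import uuid

def make_app(reflected_header):
    """Fake origin that reflects one request header into its response."""
    def app(path, headers):
        return f"<p>host: {headers.get(reflected_header, 'default')}</p>"
    return app

def find_unkeyed_inputs(app, candidates):
    unkeyed = []
    for header in candidates:
        canary = uuid.uuid4().hex  # unique marker per probe
        body = app("/", {header: canary})
        if canary in body:  # reflected but not part of the cache key
            unkeyed.append(header)
    return unkeyed

app = make_app("X-Forwarded-Host")
print(find_unkeyed_inputs(app, ["X-Forwarded-Host", "X-Forwarded-Scheme"]))
# -> ['X-Forwarded-Host']
```

A real scanner must also confirm the response is actually cacheable and avoid poisoning the live cache during the probe, which is why Kettle's tool adds a cache-buster parameter to each request.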

AIOps platforms delve deeper into root cause analysis

The promise of AIOps platforms for enterprise IT pros lies in their potential to provide automated root cause analysis, and early customers have begun to use these tools to speed up problem resolution.

The city of Las Vegas needed an IT monitoring tool to replace a legacy SolarWinds deployment in early 2018 and found FixStream’s Meridian AIOps platform. The city introduced FixStream to its Oracle ERP and service-oriented architecture (SOA) environments as part of its smart city project, an initiative that will see municipal operations optimized with a combination of IoT sensors and software automation. Las Vegas is one of many U.S. cities working with AWS, IBM and other IT vendors on such projects.

FixStream’s Meridian gives the city an overview of how business process performance corresponds to IT infrastructure as it updates its systems more often, with each update taking less time, as part of its digital transformation, said Michael Sherwood, CIO for the city of Las Vegas.

“FixStream tells us where problems are and how to solve them, which takes the guesswork, finger-pointing and delays out of incident response,” he said. “It’s like having a new help desk department, but it’s not made up of people.”

The tool first analyzes a problem and offers insights as to the cause. It then automatically creates a ticket in the city’s ServiceNow IT service management system. ServiceNow acquired DxContinuum in 2017 and released its intellectual property as part of a similar help desk automation feature, called Agent Intelligence, in January 2018, but it’s the high-level business process view that sets FixStream apart from ServiceNow and other tools, Sherwood said.

FixStream’s Meridian AIOps platform creates topology views that illustrate the connections between parts of the IT infrastructure and how they underpin applications, along with how those applications underpin business processes. This was a crucial level of detail when a credit card payment system crashed shortly after FixStream was introduced to monitor Oracle ERP and SOA this spring.

“Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down,'” Sherwood said.

This system automatically correlated an application problem to problems with deeper layers of the IT infrastructure. The speedy diagnosis led to a fix that took the city’s IT department a few hours versus a day or two.
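The correlation FixStream performed can be pictured as a walk down a dependency graph, from the failing business process to the deepest unhealthy component beneath it. A minimal sketch, with invented service names and health states; FixStream's real topology model is far richer.

```python
# Topology-based root cause: business process -> application -> infra.
deps = {
    "credit-card-payments": ["payments-app"],
    "payments-app": ["oracle-erp-db"],
    "oracle-erp-db": [],
}
health = {
    "credit-card-payments": "down",
    "payments-app": "down",
    "oracle-erp-db": "down",
}

def root_cause(node):
    """Follow unhealthy dependencies down to the deepest failing one."""
    for child in deps.get(node, []):
        if health.get(child) != "ok":
            return root_cause(child)
    return node

print(root_cause("credit-card-payments"))  # -> oracle-erp-db
```

The payoff is exactly the reframing Sherwood describes: instead of reporting the top-level symptom, the tool names the bottom-level component to fix.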

AIOps platform connects IT to business performance


Some IT monitoring vendors associate application performance management (APM) data with business outcomes in a way similar to FixStream. AppDynamics, for example, offers Business iQ, which associates application performance with business performance metrics and end-user experience. Dynatrace offers end-user experience monitoring and automated root cause analysis based on AI.

The differences lie in the AIOps platforms’ deployment architectures and infrastructure focus, said Nancy Gohring, an analyst with 451 Research who specializes in IT monitoring tools and wrote a white paper that analyzes FixStream’s approach.

“Dynatrace and AppDynamics use an agent on every host that collects app-level information, including code-level details,” Gohring said. “FixStream uses data collectors that are deployed once per data center, which means they are more similar to network performance monitoring tools that offer insights into network, storage and compute instead of application performance.”

FixStream integrates with both Dynatrace and AppDynamics to join its infrastructure data to the APM data those vendors collect. Its strongest differentiation is in the way it digests all that data into easily readable reports for senior IT leaders, Gohring said.

“It ties business processes and SLAs [service-level agreements] to the performance of both apps and infrastructure,” she said.

OverOps fuses IT monitoring data with code analysis

While FixStream makes connections between low-level infrastructure and overall business performance, another AIOps platform, made by OverOps, connects code changes to machine performance data. So, DevOps teams that deploy custom applications frequently can understand whether an incident is related to a code change or an infrastructure glitch.

OverOps’ eponymous software has been available for more than a year, and larger companies, such as Intuit and Comcast, have recently adopted the software. OverOps identified the root cause of a problem with Comcast’s Xfinity cable systems as related to fluctuations in remote-control batteries, said Tal Weiss, co-founder and CTO of OverOps, based in San Francisco.

OverOps uses an agent that can be deployed on containers, VMs or bare-metal servers, in public clouds or on premises. It monitors the Java Virtual Machine or Common Language Runtime interface for .NET apps. Each time code loads into the CPU via these interfaces, OverOps captures a data signature and compares it with code it’s previously seen to detect changes.
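The signature-comparison idea is straightforward to sketch: fingerprint each code unit as it loads and compare the fingerprint against what was seen before. The hashing scheme and names below are assumptions for illustration, not OverOps' actual implementation.

```python
# Detect code changes by fingerprinting each loaded code unit.
import hashlib

seen = {}

def observe(unit_name, bytecode):
    """Record a signature for a code unit and classify the load."""
    sig = hashlib.sha256(bytecode).hexdigest()
    previous = seen.get(unit_name)
    seen[unit_name] = sig
    if previous is None:
        return "new"
    return "changed" if previous != sig else "unchanged"

print(observe("PaymentService.charge", b"v1"))  # -> new
print(observe("PaymentService.charge", b"v1"))  # -> unchanged
print(observe("PaymentService.charge", b"v2"))  # -> changed
```

Attaching metadata such as the Jira ticket and owning team to each signature is what turns this change log into the incident-triage trail the article describes.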

OverOps exports reliability data to Grafana for visual display.

From there, the agent produces a stream of log-like files that contain both machine data and code information, such as the number of defects and the developer team responsible for a change. The tool is primarily intended to catch errors before they reach production, but it can be used to trace the root cause of production glitches, as well.

“If an IT ops or DevOps person sees a network failure, with one click, they can see if there were code changes that precipitated it, if there’s an [Atlassian] Jira ticket associated with those changes and which developer to communicate with about the problem,” Weiss said.

In August 2018, OverOps updated its AIOps platform to feed code analysis data into broader IT ops platforms with a RESTful API and support for StatsD. Available integrations include Splunk, ELK, Dynatrace and AppDynamics. In the same update, the OverOps Extensions feature also added a serverless, AWS Lambda-based framework, as well as on-premises code options, so users can create custom functions and workflows based on OverOps data.
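StatsD integrations like the one described here are simple by design: metrics are fire-and-forget UDP datagrams in a plain-text format. A minimal sketch of emitting a counter follows; the host, port and metric name are illustrative, not values from OverOps.

```python
# Emit a StatsD counter as a plain-text UDP datagram.
import socket

def statsd_count(metric, value=1, host="127.0.0.1", port=8125):
    payload = f"{metric}:{value}|c"  # StatsD counter wire format
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode("ascii"), (host, port))
    sock.close()
    return payload

print(statsd_count("overops.new_errors", 3))
```

Because UDP is connectionless, the sender never blocks on the metrics backend, which is why this protocol is a popular choice for shipping monitoring data out of hot paths.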

“There’s been a platform vs. best-of-breed tool discussion forever, but the market is definitely moving toward platforms — that’s where the money is,” Gohring said.

Recruiting platforms see large VC investments

Recruiting platforms have long led the HR venture capital market, and this year they are attracting some big funding rounds.

Recruiting platform Greenhouse recently received $50 million in new funding, and Hired recently received $30 million. Earlier this year, Scout Exchange gained $100 million in funding. Applicant-tracking systems and recruiting platforms typically lead the HR market in venture capital funding, industry analysts have reported.

These platforms each approach the problem of recruiting in different ways, and their methods illustrate the complexity of filling jobs with high-quality candidates.

With its latest funding round, Greenhouse, a recruiting platform and applicant-tracking system provider, has now raised $110 million. Raising $100 million or more is not unusual for recruiting platforms.

Greenhouse believes recruiting is a companywide responsibility, and its platform is built with this approach in mind, said Daniel Chait, CEO of Greenhouse, based in New York. Recruiting “involves everyone in the company, every single day,” doing all kinds of things, such as interviewing and finding candidates, he said. The Greenhouse platform uses data to help consider candidates “in a fair and objective way,” and to ensure a good candidate experience, he said.

Employees don’t like interviewing job candidates


Preparing employees to take part in candidate interviews is an important aspect of Greenhouse’s platform, Chait said. It provides everyone conducting an interview with the available information on the candidate and also helps users develop questions to ask.

“Employees generally don’t like doing interviews. They are stressful, and they don’t know what questions to ask,” he said.

Scout Exchange runs a marketplace recruiting platform that matches recruiters with job searches based on their expertise and “their actual track record of filling positions,” said Ken Lazarus, CEO of Scout, based in Boston.

Scout enables employers to tap into one or more recruiters with the best record for filling a particular type of job, Lazarus said.

Meanwhile, Hired has created a talent pool and the technology to help match candidates with employers. If an employer believes a candidate has the right skills, it sends an interview request to the candidate. The firm said it has raised more than $130 million to date.

Setting the right salary level

Knowing what to pay candidates helped to drive Salary.com’s just-announced acquisition of Compdata Surveys & Consulting.

Salary.com gets its compensation data from surveys purchased from other providers, as well as what it gathers in its own surveys. The acquisition of Compdata, which is predominantly a survey firm, gives Salary.com the platform, analytics and the data it needs, said Alys Scott, chief marketing officer at Salary.com, based in Waltham, Mass., although the firm will still buy some third-party surveys.

The low unemployment rate and the retirements of baby boomers are putting pressure on firms to have good compensation models, Scott said. The “No. 1 motivator” for retaining, attracting and engaging talent is compensation, she said.

How to know if, when and how to pursue blockchain projects

BOSTON — There is no shortage of blockchain platforms out there; the numbers now run in the dozens. As for enumerating potential blockchain projects, it may be easier to list the blockchain use cases companies are currently not exploring. Moreover, although blockchain’s approach to verifying and sharing data is novel, many of the technologies used in blockchain projects have been around for a long time, said Martha Bennett, a CIO analyst at Forrester Research who’s been researching blockchain since 2014.

Even the language around blockchain is settling down. Bennett said she uses the terms blockchain and distributed ledger technology interchangeably.

But the growth and interest in blockchain projects doesn’t mean the technology is mature or that we know where it is headed, Bennett told an audience of IT executives at the Forrester New Tech & Innovation 2018 Forum. Just as in the early days of the internet when few anticipated how radically a network of networks would alter the status quo, today we don’t know how blockchain will play out.

“It is still a little bit of a Wild West. I should clarify that and say, it is the Wild West,” she said. Additionally, no matter how revolutionary distributed ledger technology may prove to be, Bennett said “nothing is being revolutionized today from an enterprise perspective,” because distributed ledger technology is not yet being deployed at scale.

Dirty hands

Indeed, IT leaders have their work cut out for them just figuring out how these nascent distributed ledger platforms perform at enterprise scale, and where they would be of use in the businesses they serve.

“At this stage, you really need to open up the covers and understand what a platform offers and what is in there. You have to get your hands dirty,” she said.

Blockchain projects today are about “thinking really big but starting small,” she said. If what gets accomplished is “inventing a faster horse” — that is, taking an existing process and making it a bit better — the endeavor will help IT leaders learn about how blockchain architectures work. That’s important because it’s hard “to catch up on innovation,” she said. “If you wait until things are settled it may be too late.” 

While CIOs get up to speed, they also need to think about using blockchain to reinvent how their companies function internally and how they do business. “That is the big bang,” she said, but added it may take decades for blockchain to give birth to a new order.

Forrester analyst Martha Bennett presents on blockchain at the Forrester New Tech & Innovation 2018 Forum.

In a 90-minute session that included a talk by the IT director of the Federal Reserve Bank of Boston about how the Fed is approaching blockchain, Bennett ticked through:

  • Forrester’s definition of blockchain and why the wording merited close attention;
  • why blockchain projects remain in pilot phase;
  • a checklist to assess if you have a viable blockchain use case; and
  • situations when blockchain can help.

Here are some of the salient pointers for CIOs:

What is blockchain?

Blockchain, or distributed ledger technology, as defined by Forrester, “is a software architecture that supports collaborative processes around trusted data that is shared across organizational and potentially national boundaries.”

The wording is important. Architecture, because blockchain is a technology principle and not about any one platform. Collaborative, because blockchain is a “team sport, not something you do for yourself,” Bennett said, requiring anywhere between three and 10 partners. (Under three will not provide the diversity of views blockchain projects need, while more than 10 is “like herding cats.”) Blockchain requires data you can “trust to the highest degree,” she said, and it is about sharing. In many cases, CIOs will find they can deliver the service in question “better, faster, cheaper with existing technologies,” she said. “But what you don’t get is that collaborative aspect, extending processes across organizational boundaries.”

What factors hold back enterprise-scale deployment?

Companies are exploring a plethora of blockchain projects, from car sharing and tracking digital assets to securities lending, corporate loans and data integrity. Full deployment can’t happen until experimenters figure out if the software can scale; if it needs to integrate with existing systems and if so, how to do that; what regulatory and compliance requirements must be met; and what business process changes are required both internally and at partner organizations in the blockchain, among other hurdles.

“We are seeing projects transition beyond the POC [proof of concept] and pilot phase, but that is not the same as full-scale rollout,” Bennett said.

How to decide whether to take on a blockchain use case

“If you don’t have a use case, don’t even start,” Bennett said. A company can come to Forrester and ask for examples of good use cases, she said, but ultimately only the company knows its organization and industry well enough to be able to pinpoint how blockchain might make the process better. She suggested asking these questions to help clarify the use case:

  • What problem are you trying to solve with blockchain?
  • Do other ecosystem participants have the same or related issues?
  • What opportunity are you trying to capture?
  • Do you have your ecosystem (which can comprise competitors) on board?

On the last question, Bennett explained that even rich industries like investment banking need to address process efficiency. “Everybody needs to worry about how much it costs to run IT operations,” she said. If competitors have common processes that are costly and cumbersome, why not consider sharing them using blockchain?

How to know when blockchain helps

Here is Bennett’s checklist for identifying when blockchain can be of use:

  • Are there multiple parties that need access to the same data store?
  • Does everybody need assurance that the data is valid and hasn’t been tampered with?
  • What are the conditions of the current system — is it error-prone, incredibly complex, unreliable, filled with friction?
  • Are there good reasons not to have a single, centralized system? Distributed ledger technology introduces complexity and risk, so a centralized alternative should be ruled out only for deliberate reasons. In addition to making the technology scale, adopters are still wrestling with how to balance transparency and privacy, and how to handle exceptions.
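Bennett's checklist lends itself to a simple screening function. The sketch below is my own illustration of how a team might encode the four questions as yes/no gates; the key names are invented labels, not Forrester's wording:

```python
def blockchain_fit(answers):
    """Screen a use case against Bennett-style checklist questions.

    `answers` is a dict of yes/no booleans; the keys below are
    illustrative labels, not Forrester's official phrasing.
    """
    required = [
        "multiple_parties_share_data",   # several parties need one data store
        "tamper_evidence_needed",        # all need assurance data is valid
        "current_system_problematic",    # error-prone, complex, high-friction
        "centralized_option_ruled_out",  # good reasons against one central DB
    ]
    missing = [q for q in required if not answers.get(q, False)]
    if missing:
        return False, missing  # existing centralized tech is likely simpler
    return True, []
```

If any answer comes back "no" — especially the last one — Bennett's guidance suggests existing technologies will usually deliver the service better, faster and cheaper.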

Avoid preserving ‘garbage in a more persistent way’

Distributed ledger technology, Bennett stressed, also cannot fix problems with the data. “If your data is bad to start with, it will still be bad. You’re just preserving garbage in a more persistent way,” she said. A lot of blockchain projects target tracking and provenance of goods to take cost out of the supply chain and reduce fraud. Those are “great use cases,” she said. But if the object being tracked has been tampered with — even if you have established an unbreakable link between the physical object and the data on the blockchain — “the representation on the blockchain is a problem because suddenly you are tracking a fake item,” she said. Physical fraud issues need to be fixed for the blockchain to be of value.

The 80/20 rule

The digitization of paper processes has been the “real breakthrough,” but blockchain cannot “turn paper into anything digital,” Bennett said. If processes haven’t been digitized yet, CIOs need to get their enterprises to ask themselves why because that is the starting point.

Finally, CIOs must understand that technology problems notwithstanding, blockchain projects are 80% about the business and 20% about technology. 

“Technology problems have a habit of being addressed and of being resolved,” Bennett said. Business issues — digitizing, dismantling internal silos, redesigning processes — can take far longer.

New data science platforms aim to be workflow, collaboration hubs

An emerging class of data science platforms that provide collaboration and workflow management capabilities is gaining more attention from both users and vendors — most recently Oracle, which is buying its way into the market.

Oracle’s acquisition of startup DataScience.com puts more major-vendor muscle behind the workbench-style platforms, which give data science teams a collaborative environment for developing, deploying and documenting analytical models. IBM is already in with its Data Science Experience platform, informally known as DSX. Other vendors include Domino Data Lab and Cloudera, which last week detailed plans for a new release of its Cloudera Data Science Workbench (CDSW) software this summer.

These technologies are a subcategory of data science platforms overall. They aren’t analytics tools; they’re hubs that data scientists can use to build predictive and machine learning models in a shared and managed space — instead of doing so on their own laptops, without a central location to coordinate workflows and maintain models. Typically, they’re aimed at teams with 10 to 20 data scientists and up.

The workbenches began appearing in 2014, but it’s only over the past year or so that they matured into products suitable for mainstream users. Even now, the market is still developing. Domino and Cloudera wouldn’t disclose the number of customers they have for their technologies; in a March interview, DataScience.com CEO Ian Swanson said only that its namesake platform has “dozens” of users.

A new way to work with data science volunteers

Ruben van der Dussen, Thorn

Thorn, a nonprofit group that fights child sex trafficking and pornography, deployed Domino’s software in early 2017. The San Francisco-based organization only has one full-time data scientist, but it taps volunteers to do analytics work that helps law enforcement agencies identify and find trafficking victims. About 20 outside data scientists are often involved at a time — a number that swells to 100 or so during hackathons that Thorn holds, said Ruben van der Dussen, director of its Innovation Lab.

That makes this sort of data science platform a good fit for the group, he said. Before, the engineers on his team had to create separate computing instances on the Amazon Elastic Compute Cloud (EC2) for volunteers and set them up to log in from their own systems. With Domino, the engineers put Docker containers on Thorn’s EC2 environment, with embedded Jupyter Notebooks that the data scientists access via the web. That lets them start analyzing data faster and frees up time for the engineers to spend on more productive tasks, van der Dussen said.
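The pattern van der Dussen describes — a containerized Jupyter workspace per volunteer instead of a hand-built EC2 instance — can be sketched with a small helper that assembles the `docker run` command for each volunteer. The container names, ports and image here are hypothetical stand-ins; in practice a platform like Domino handles this provisioning itself:

```python
def jupyter_workspace_cmd(volunteer, host_port, image="jupyter/base-notebook"):
    """Build a docker run command for one volunteer's notebook container.

    The naming scheme, port mapping and image are illustrative only.
    """
    name = f"nb-{volunteer.lower()}"
    return (
        f"docker run -d --name {name} "
        f"-p {host_port}:8888 {image}"
    )

# One command per volunteer, each notebook exposed on its own host port.
commands = [
    jupyter_workspace_cmd(v, 9000 + i)
    for i, v in enumerate(["alice", "bob"])
]
```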

He added that data security and access privileges are also easier to manage now — an important consideration, given the sensitive nature of the images, ads and other online data that Thorn analyzes with a variety of machine learning and deep learning models, including ones based on natural language processing and computer vision algorithms.

Thorn develops and trains the analytical models within the Domino platform and uses it to maintain different versions of the Jupyter Notebooks, so the work done by data scientists is documented for other volunteers to pick up on. In addition, multiple people working together on a project can collaborate through the platform. The group uses tools like Slack for direct communication, “but Domino makes it really easy to share a Notebook and for people to comment on it,” van der Dussen said.

Domino Data Lab’s data science platform lets users run different analytics tools in separate workspaces.

Oracle puts its money down on data science

Oracle is betting that data science platforms like DataScience.com’s will become a popular technology for organizations that want to manage their advanced analytics processes more effectively. Oracle, which announced the acquisition this month, plans to combine DataScience.com’s platform with its own AI infrastructure and model training tools as part of a data science PaaS offering in the Oracle Cloud.

By buying DataScience.com, Oracle hopes to help users get more out of their analytics efforts — and better position itself as a machine learning vendor against rivals like Amazon Web Services, IBM, Google and Microsoft. Oracle said it will continue to invest in DataScience.com’s technology, with a goal of delivering “more functionality and capabilities at a quicker pace.” It didn’t disclose what it’s paying for the Culver City, Calif., startup.

The workbench platforms centralize work on analytics projects and management of the data science workflow. Data scientists can team up on projects and run various commercial and open source analytics tools to which the platforms connect, then deploy finished models for production applications. The platforms also support data security and governance, plus version control on analytical models.

Cloudera said its upcoming CDSW 1.4 release adds features for tracking and comparing different versions of models during the development and training process, as well as the ability to deploy models as REST APIs embedded in containers for easier integration into dashboards and other applications. DataScience.com, Domino and IBM provide similar functionality in their data science platforms.
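Deploying a model as a REST API in a container typically means wrapping the scoring function in a small HTTP service. Below is a minimal sketch using only Python's standard library, with a stand-in linear model; none of this reflects the vendors' actual implementations:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for a trained model: a fixed linear score (illustrative only).
    weights = [0.4, 0.6]
    return round(sum(w * x for w, x in zip(weights, features)), 6)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects {"features": [x1, x2]}; responds with {"score": ...}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"score": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Inside a container, expose the port and run:
# HTTPServer(("", 8080), PredictHandler).serve_forever()
```

Packaging the handler in an image makes the model callable from dashboards and applications without those consumers knowing anything about the training environment.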

Cloudera Data Science Workbench uses a sessions concept for running analytics applications.

Choices on data science tools and platforms

Deutsche Telekom AG is offering both CDSW and IBM’s DSX to users of Telekom Data Intelligence Hub, a cloud-based big data analytics service that the telecommunications company is testing with a small number of customers in Europe ahead of a planned rollout during the second half of the year.

Users can also access Jupyter, RStudio and three other open source analytics tools, said Sven Löffler, a business development executive at the Bonn, Germany, company who’s leading the implementation of the analytics service. The project team sees benefits in enabling organizations to connect to those tools through the two data science platforms and get “all this sharing and capabilities to work collaboratively with others,” he said.

However, Löffler has heard from some customers that the cost of the platforms could be a barrier compared to working directly with the open source tools as part of the service, which runs in the Microsoft Azure cloud. It’s fed by data pipelines that Deutsche Telekom is building with a new Azure version of Cloudera’s Altus Data Engineering service.

News briefs: Mobile recruiting interfaces still painful

Mobile recruiting platforms aren’t getting enough attention from HR departments, according to a recent Glassdoor report. Mobile interfaces are clunky and hard to use. They impose required fields that duplicate data that’s already on the résumé.

“Mobile job application experiences remain painful for most job seekers,” said Andrew Chamberlain, Glassdoor’s chief economist, in a report on upcoming trends. This is a problem for employers. Many job seekers today are using mobile devices to reach employer job sites.

The problem is a consequence of legacy enterprise applicant tracking systems (ATSes) built before the mobile era. Firms are waking up to this fact, and Glassdoor believes improving mobile recruiting systems is on the verge of becoming a priority.

A lot of organizations have a hodgepodge of HR systems. Their primary goal is moving to cloud and to mobile more quickly, said Tony DiRomualdo, senior director of the HR executive advisory program at The Hackett Group, based in Miami.

But mobile is only “widely implemented” in 16% of organizations surveyed last fall by Hackett. DiRomualdo said he believes the percentage is higher for mobile recruiting platforms, because it’s easier to make a business case.  

Mobile recruiting implementation “has been slower than a lot of people in HR would like,” DiRomualdo said. “They have a hard time getting the funding and prioritization for it,” he said.

A new recruiting platform with ATS-like systems

Mobile job application experiences remain painful for most job seekers.
Andrew Chamberlain, chief economist, Glassdoor

Recruiting platform vendors are taking on some of the work of internal applicant tracking systems and can give job seekers a better mobile experience. They are creating dashboards and intelligent ranking systems. JobzMall, the latest addition to this trend, is due to launch Jan. 15.

The site, which has about 250 participating organizations and is running in a closed beta, organizes itself around a “virtual shopping mall,” said Nathan Candaner, co-founder of JobzMall, based in Irvine, Calif.

Employers have virtual stores and can use video to create a personalized experience about their business. There are different buildings — such as the startup building, one for nonprofits, another for freelancers and one for larger firms. Job seekers fill out a template on the recruiting platform, which they can use to apply for multiple jobs. The system gives applicants a little more transparency into the progress of their application.

Candaner said he sees a need for this type of recruiting platform. Many job sites today want users to cut and paste their résumés for each job application. The systems give employers little help in managing the applications.

JobzMall gives employers a dashboard, which includes collaborative tools, for managing and viewing applicants in one spot. The system knows what the qualifications are and the skill sets of the applicants. It also learns the employer’s behavior in evaluating candidates. It uses that to help rank and select applicants. “Our system learns, and in time, we do give very pointed candidates to required jobs,” Candaner said.
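At its simplest, a ranking system of the sort Candaner describes could score candidates by overlap between their skills and a job's requirements. The sketch below is purely illustrative — JobzMall's actual ranking model is not public, and a production system would also weight behavioral signals from the employer's past evaluations:

```python
def rank_candidates(required_skills, candidates):
    """Order candidates by the fraction of required skills they cover.

    `candidates` is a list of {"name": ..., "skills": [...]} dicts.
    Illustrative only; real systems blend many more signals.
    """
    required = set(required_skills)

    def coverage(candidate):
        return len(required & set(candidate["skills"])) / len(required)

    return sorted(candidates, key=coverage, reverse=True)
```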

Container security platforms diverge on DevSecOps approach

SAN FRANCISCO — Container security platforms have begun to proliferate, but enterprises may have to watch the DevSecOps trend play out before they settle on a tool to secure container workloads.

Two container security platforms released this month — one by an up-and-coming startup and another by an established enterprise security vendor — take different approaches. NeuVector, a startup that introduced an enterprise edition at DevOps Enterprise Summit 2017, supports code and container-scanning features that integrate into continuous integration and continuous delivery (CI/CD) pipelines, but its implementation requires no changes to developers’ workflow.

By contrast, a product from the more established security software vendor CSPi, Aria Software Defined Security, allows developers to control the insertion of libraries into container and VM images that enforce security policies.

There’s still significant overlap between these container security platforms. NeuVector has CSPi’s enterprise customer base in its sights, with added support for noncontainer workloads and Lightweight Directory Access Protocol. Aria Software Defined Security includes network microsegmentation features for policy enforcement, which are NeuVector’s primary focus. And while developers inject Aria’s security code into machine images, they aren’t expected to become security experts: enterprise IT security pros set the policies the software enforces, and a series of wizards guides developers through the process of integrating its security libraries.

Both vendors also agree on this: Modern IT infrastructures with DevOps pipelines that deliver rapid application changes require a fundamentally different approach to security than traditional vulnerability detection and patching techniques.

There’s definitely a need for new security techniques for containers that rely less on layers of VM infrastructure to enforce network boundaries, which can negate some of the gains to be had from containerization, said Jay Lyman, analyst with 451 Research.

However, even amid lots of talk about the need to “shift left” and get developers involved with IT security practices, bringing developers and security staff together at most organizations is still much easier said than done, Lyman said.

NeuVector 1.3 captures network sessions automatically when container threats are detected, a key feature for enterprises.

Container security platforms encounter DevSecOps growing pains

As NeuVector and CSPi product updates hit the market, enterprise IT pros at the DevOps Enterprise Summit (DOES) here this week said few enterprises use containers at this point, and the container security discussion is even further off their radar. By the time containers are widely used, DevSecOps may be more mature, which could favor CSPi’s more hands-on developer strategy. But for now, developers and IT security remain sharply divided.

Eventually, we’ll see more developer involvement in security, but it will take time and probably be pretty painful.
Jay Lyman, analyst, 451 Research

“Everyone needs to be security-conscious, but to demand developers learn security and integrate it into their own workflow, I don’t see how that happens,” said Joan Qafoku, a risk consulting associate at KPMG LLP in Seattle who works with an IT team at a large enterprise client also based in Seattle. That client, which Qafoku did not name, gives developers a security-focused questionnaire, but security integration into their process goes no further than that.

NeuVector’s ability to integrate into the CI/CD pipeline without changes to application code or the developer workflow was a selling point for Tobias Gurtzick, security architect for Arvato, an international outsourcing services company based in Gütersloh, Germany.

Still, this integration wasn’t perfect in earlier iterations of NeuVector’s product, Gurtzick said in an interview before DOES. Gurtzick’s team polled an API every two minutes to trigger container and code scans with previous versions. NeuVector’s 1.3 release includes a new webhooks notification feature that more elegantly triggers code scans as part of continuous integration testing, without the performance overhead of polling the API.
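The difference Gurtzick describes — a CI job polling a status API every two minutes versus the pipeline pushing a webhook when a scan should run — can be sketched as a minimal webhook receiver. The event name and payload fields below are invented for illustration and are not NeuVector's actual API:

```python
import json

def handle_webhook(raw_payload, trigger_scan):
    """Kick off a container/code scan when CI reports a new image.

    `trigger_scan` is whatever callable starts the scan; the
    "image_pushed" event name and payload shape are hypothetical.
    """
    event = json.loads(raw_payload)
    if event.get("event") == "image_pushed":
        trigger_scan(event["image"])
        return True  # scan starts immediately, with no polling delay
    return False

# With polling, the same scan would wait up to the poll interval
# (two minutes in Gurtzick's earlier setup) before starting, and the
# CI system would pay the overhead of constant status requests.
```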

“That’s the most important feature of the new version,” Gurtzick said. He also pointed to added support for detailed network session snapshots that can be used in forensic analysis. Aria Software Defined Security offers a similar feature in its first release.

While early adopters of container security platforms, such as Gurtzick, have settled the question of how developers and IT security should bake security into applications for their own organizations, the overall market has been slower to take shape as enterprises hash out that collaboration, Lyman said.

“Earlier injection of security into the development process is better, but that still usually falls to IT ops and security [staff],” Lyman said. “Part of the DevOps challenge is aligning those responsibilities with application development. Eventually, we’ll see more developer involvement in security, but it will take time and probably be pretty painful.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.