2019 BI trends included reality check for AI, consolidation

The BI trends that emerged in 2019 weren’t necessarily the ones most expected.

A year ago, one of the dominant BI trends for 2019 was predicted to be an augmented intelligence takeover.

One vendor, ThoughtSpot, wrote that AI in BI would go mainstream and that self-driving analytics would emerge. Another, Tableau, predicted the rise of explainable AI and that advances in natural language processing would enable users to converse with their data. Meanwhile, the CEO of a third, Yellowfin, wrote that natural language queries would supplant text search interfaces.

The advancement of AI features was indeed one of the significant BI trends of 2019, and progress most definitely was made. ThoughtSpot, for one, introduced new machine learning capabilities, as did Tibco.

But the progress wasn’t quite as spectacular as predicted.

“I think the expectations were a bit too lofty,” said Mike Leone, an analyst at Enterprise Strategy Group. “We saw numerous announcements from BI vendors that looked to incorporate AI into their platforms, but adoption is lagging behind quite a bit. Organizations want to use AI to better do their jobs, but there’s still a bit of a learning curve and a need to address the simplicity of using the technology for less technical or non-advanced users in these systems.”

Similarly, Dan Sommer, global market intelligence lead at Qlik, said that while the advancement of AI capabilities was one of the BI trends in 2019, it was modest.


“AI is finding its way into use cases, but in very specific scenarios,” he said. “We’re still waiting to see the impact of more general AI.”

Beyond the incremental advancement of AI capabilities, vendor consolidation was one of the significant BI trends of 2019. Others were the introduction of low-code/no-code tools for application developers, improved mobile apps from BI vendors, and perhaps the demise of Hadoop.

Consolidation

Three weeks into 2019, Qlik acquired CrunchBot and Crunch Data to usher in one of the main BI trends of 2019. One month later, it purchased Attunity.

In the months that followed, Alteryx bought ClearStory Data and Sisense acquired Periscope. Then on June 6, Google bought Looker for $2.6 billion. That same day, Logi Analytics acquired Zoomdata. And just four days after that, Salesforce purchased Tableau for $15.7 billion.

The first six months of 2019 ushered in a wave of consolidation not seen in the BI space since 2007, when IBM acquired Cognos, Oracle bought Hyperion and SAP purchased BusinessObjects.

The impetus for consolidation was the recognition of the growing importance of business intelligence, according to Donald Farmer, principal at TreeHive Strategy.

“What drove consolidation was the commoditization of analytics,” he said. “The pressure on pure-play analytics companies has been great, and there are some companies that are naturally struggling and others that are pre-struggling and could see the writing on the wall. But the positive is the drive toward analytics everywhere, the need to get analytics into every part of the business.”

While consolidation was one of the significant BI trends throughout the first half of 2019, at least among major vendors, it slowed during the second half of the year. That does not, however, mean the wave has subsided.

“The low-hanging fruit has been taken, but I’m sure there will be another round,” Farmer said.

A new audience

Software vendors that market far more than just BI platforms have provided low-code and no-code tools for developers for some time now. Salesforce, for example, debuted Salesforce Lightning in 2014. Microsoft answered with PowerApps two years later.

But one of the BI trends to emerge late in 2019 was the introduction of low-code and no-code tools for developers from vendors specializing in business intelligence platforms.

Looker unveiled Looker 7, including a development framework and an in-product marketplace for add-ons, in early November. The same week, Yellowfin rolled out Yellowfin 9. The release included features called Dashboard Canvas and Dashboard Code Mode, which enable developers to customize their organizations’ applications.

Sisense then released an update with embedded capabilities to help customers create enterprise-grade applications, and Alteryx’s latest version included an application programming interface that connects Alteryx Designer with Tableau Hyper and allows users to easily read, write and transform Hyper files.
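Alteryx’s connector itself isn’t shown here, but Tableau’s publicly documented tableauhyperapi Python package gives a feel for what programmatic Hyper file access looks like. A minimal sketch, with the file name and schema invented for illustration:

```python
# Sketch of programmatic Hyper file access using Tableau's public
# tableauhyperapi package (not Alteryx's connector); the file name and
# table schema below are invented for illustration.
from tableauhyperapi import (Connection, CreateMode, HyperProcess, Inserter,
                             SqlType, TableDefinition, TableName, Telemetry)

table = TableDefinition(TableName("Extract", "sales"), [
    TableDefinition.Column("region", SqlType.text()),
    TableDefinition.Column("revenue", SqlType.double()),
])

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "sales.hyper",
                    CreateMode.CREATE_AND_REPLACE) as conn:
        conn.catalog.create_schema("Extract")
        conn.catalog.create_table(table)
        with Inserter(conn, table) as inserter:  # bulk-load rows into the extract
            inserter.add_rows([["EMEA", 120000.0], ["APAC", 95000.0]])
            inserter.execute()
        # Hyper files are queryable with plain SQL once written.
        print(conn.execute_list_query(f"SELECT * FROM {table.table_name}"))
```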

“I think, especially over the last few months, we’re seeing a much heavier focus on developer enablement,” Leone said. “The idea of low-code/no-code to efficiently build workflows, apply automation and incorporate insights into modern applications is really being emphasized.”

The motivation behind the tools isn’t to replace developers. Instead, it’s to free them from some of the cumbersome coding work it takes to build customized applications. It’s also to enable business users to create rudimentary applications, leading to citizen development in much the way advances in the ease of use of analytics tools have led to the rise of citizen data scientists.

“It’s not that developers don’t want to write code or can’t write code,” Leone said. “It’s all about efficiency and not having to re-create the wheel.”

Going mobile

BI vendors have historically struggled to develop effective mobile apps. Too often, they tried to re-create their desktop dashboards on mobile screens, but the small screens of phones in particular, and of tablets to a degree, made data visualizations difficult to consume.

The small screen size proved a big hurdle for mobile BI apps.

In 2019, however, one of the BI trends was that vendors began to figure out how to turn mobile devices to their advantage.

The ones now having success — and the rare ones who had success prior to 2019 — view mobile differently than they do desktop applications. They recognize the limitations of phones and tablets, as well as the strengths of the devices.

“We saw an emergence of a new style of mobile apps being based on being able to take action — Yellowfin, Microsoft, Domo,” Farmer said. “They’re action-focused mobile apps.”

MicroStrategy is one vendor that has long invested in its mobile app, first introducing one in 2009. Its app also takes advantage of the mobility of the devices themselves: Given its AI and machine learning capabilities, it’s able to provide users with information cards as they move around.

Yellowfin, as mentioned by Farmer, is another vendor figuring out how to present BI in a mobile format.

After struggling for a decade following the release of its first mobile app, the vendor overhauled its mobile strategy and unveiled a new app in September. Rather than mimic dashboards, it presents information via a timeline feed similar to Facebook and Twitter.

Hadoop’s demise

BI trends come, and BI trends go. And just as AI and developer tools are gaining popularity and proficiency, Hadoop’s time might be up.

“Hadoop ended last year,” Sommer said.

Hadoop was created in 2005. It allowed organizations to store and access large amounts of data, and let data scientists structure that data as needed. In recent years, however, cloud data warehouses such as Amazon Redshift and Snowflake — among others — have lessened the need for Hadoop.

In fact, the idea of big data — the problem Hadoop sought to solve — may not exist anymore, according to Sommer.

“There’s a notion that big data now is just data, and next is wide data,” he said. “Big data is just what you can’t achieve with your current infrastructure, but with cloud storage that restraint is now gone and you can always add more.”

Expanding on the notion of wide data now being an issue rather than big data, Sommer said that data is now fragmented, coming in from different sources at different speeds, and the problem that needs solving is how to bring it all together.

“New data catalogs help pull multiple data sets together,” he said.

As evidence of Hadoop’s perilous position, the three main Hadoop vendors — Cloudera, MapR Technologies and Hortonworks — are in a state of upheaval. Cloudera and Hortonworks merged early in 2019, and MapR, after seeking a buyer, was acquired by Hewlett Packard Enterprise.

“A negative BI trend is that Hadoop is over — it’s done, put a fork in it,” Farmer said. “It’s had an interesting effect and people are moving toward a different type of data lake architecture — a hybrid data lake/warehouse … that enables seamless operation between the two.”

The same can be said for 2019: It’s done, put a fork in it.


Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

Among the many challenges with multi-tenancy in general is that it is not easy to define; few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or other tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
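In practice, that namespace-per-tenant pattern is wired up with role-based access control objects scoped to each namespace. A minimal sketch using the official Kubernetes Python client, with the tenant and role names invented for illustration:

```python
# Sketch: namespace-per-tenant isolation with RBAC, via the official
# kubernetes Python client. Tenant, role and service account names are
# invented for illustration.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

tenant = "team-a"

# One namespace per tenant is the typical starting point.
core.create_namespace({"metadata": {"name": tenant}})

# A Role grants rights only inside the tenant's namespace...
rbac.create_namespaced_role(tenant, {
    "metadata": {"name": f"{tenant}-dev"},
    "rules": [{
        "apiGroups": ["", "apps"],
        "resources": ["pods", "deployments"],
        "verbs": ["get", "list", "create", "delete"],
    }],
})

# ...and a RoleBinding ties it to the tenant's service account, so
# credentials for one tenant are useless in another's namespace.
rbac.create_namespaced_role_binding(tenant, {
    "metadata": {"name": f"{tenant}-dev-binding"},
    "subjects": [{"kind": "ServiceAccount", "name": f"{tenant}-ci",
                  "namespace": tenant}],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": f"{tenant}-dev"},
})
```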

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more highly efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there are a plethora of third-party tools they can use to do it, some of which are sold by vendors, and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform is composed of several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on cloud service providers such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, which became available in Kubernetes 1.13, but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.

In the meantime, Verizon’s Yahoo Media has donated to open source a project called Athenz, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise created similar open source tools of its own: RBACSync; Daytona, which fetches secrets from HashiCorp Vault (where Cruise stores them instead of in etcd) and injects them into running applications; and k-rail, for workload policy enforcement.

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more highly granular access control mechanisms in a multi-tenant environment, and simplifies their management by automatically propagating shared policies between them. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hopes it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — one that supports hierarchical namespaces and one that creates virtual clusters with a self-service Kubernetes API server for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.


Graph processing gives credit analysis firms an edge

Graph databases have emerged as yet another way to connect data points — but graph processing requirements for big data have sometimes kept them out of the reach of real-time, operational analytics.

Startup TigerGraph has come along recently with its take on graph processing, one the company said will prove to be a fit in fintech and other applications that seek to disrupt business as usual.

Led by former Teradata Hadoop engineer Yu Xu, TigerGraph came out of stealth in 2017, claiming such notable users as Visa and the online payment platform Alipay for its graph database technology. The software has been used by these players, for example, to speed up credit checks and other traditionally time-consuming financial processes. 

Credit worthy

According to the company, TigerGraph supports a massively parallel processing architecture in which graph nodes — the company uses the less common term “vertices” — exhibit both compute and storage features; employs a parallel loader to speed data ingestion; and has fashioned a GSQL analytics language to produce parallel graph queries.

IceKredit has found those features useful in its efforts to expand the availability of credit ratings and risk assessments, according to Minqi Xie, vice president and director of modeling and business intelligence at the financial technology company.

“We have very large data sets with hundreds of millions of [nodes], and we need to mine the relationships at depth,” said Xie, who works to ensure IceKredit provides useful online credit ratings and risk monitoring services for companies and individuals.

Xie indicated IceKredit uses graph analytics to uncover connections within data sets that can identify patterns of risk. Such patterns must be uncovered more and more quickly in the fast-moving fintech space, where credit approvals that once took a week are now accomplished in minutes.

“TigerGraph made it feasible for us to leverage the features from relationship networks for real-time scoring,” he said.
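A toy sketch of what “mining relationships at depth” means: a plain-Python breadth-first search that flags applicants within k hops of known-risky entities. This illustrates the idea only; it is nothing like TigerGraph’s parallel engine or its GSQL language, and the graph below is invented.

```python
# Toy illustration of "mining relationships at depth": flag entities
# within k hops of known-risky nodes in a relationship graph.
# (Plain-Python BFS, not TigerGraph's parallel engine or GSQL.)
from collections import deque

def risky_within_k_hops(graph, start, risky, k):
    """graph: node -> iterable of neighbors. Returns the risky nodes
    reachable from start in at most k hops."""
    seen, hits = {start}, set()
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor in seen:
                continue
            seen.add(neighbor)
            if neighbor in risky:
                hits.add(neighbor)
            frontier.append((neighbor, depth + 1))
    return hits

# In practice, shared phone numbers, devices and addresses become edges.
graph = {
    "applicant": ["phone1", "addr1"],
    "phone1": ["applicant", "fraudster1"],
    "addr1": ["applicant"],
}
print(risky_within_k_hops(graph, "applicant", {"fraudster1"}, k=2))
# -> {'fraudster1'}
```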

Developer view of graph data: TigerGraph graph database development centers on graph nodes or, in the company’s parlance, vertices. Users can define vertex types and edge types to model a data schema, which can be loaded into working graphs.

Graph processing cuts through data molasses

Like others, Gaurav Deshpande, a longtime IBM data hand who earlier this year became vice president of marketing at TigerGraph, points to graph processing as superior to relational database schemes when it comes to representing interconnected relationships between data points.


But, Deshpande notes, how quickly those connections can be analyzed has been an obstacle.

Traditionally, he said, graph databases “weren’t ‘operational’ when it came to data volumes beyond 50 to 100 GBs. Performance slowed down. When you got to more than 500 GBs, performance was like molasses.”

Such systems were useful for visualizing data, but “they were nothing that could be used operationally,” Deshpande said. The parallelizing tactics TigerGraph applies to its graph database, he indicated, are intended to bring graph processing technology closer to operations.

Complex analytics at scale

TigerGraph is looking to join the still fairly limited ranks of graph NoSQL database makers, which employ a number of diverse approaches to implementing the technology. These include vendors such as Neo4j, IBM and Cambridge Semantics. Among others are Amazon and Microsoft, which have taken steps to bring graph processing more mainstream with, respectively, the AWS Neptune and Azure Cosmos DB cloud offerings.

“Historically, almost all the graph database vendors have focused on operational capabilities. They support a certain level of analytics and query processing but not complex analytics at scale,” said Philip Howard, analyst at Bloor Research.

“TigerGraph, on the other hand, is specifically targeted at precisely those sorts of use cases, so you can’t really compare it with most of the other suppliers,” he said.

There are some precedents, however, and signs of change. Howard noted that supercomputer maker Cray offers graph analytics as part of its portfolio, and he sees others ready to follow suit.

IBM blockchain apps starter pack targets developer disparity

Blockchain has emerged as one of the hottest trends in IT, and as such, it suffers the familiar plight of other big IT trends. There just aren’t enough developers to meet the demand to build blockchain apps.

To help boost the number of blockchain developers, Big Blue recently opened the blockchain platform it released last summer to new developers, including beginners with no previous knowledge of blockchain. The IBM Blockchain Platform Starter Plan helps individual developers, startups and enterprises build blockchain proofs of concept quickly and affordably. The package includes samples, tutorials and videos to help developers learn the basic concepts of blockchain and then build blockchain apps.

For $500 a month, the IBM Blockchain Platform Starter Plan includes access to the IBM Cloud compute infrastructure, the open source Hyperledger Fabric blockchain framework and Hyperledger Composer developer tools to run the blockchain ledger. IBM also offers a set of development, operational and governance tools to make it simpler to set up and run a blockchain network. Starter plan customers also get $500 in IBM Cloud credits when they sign on, said Kathryn Harrison, IBM Blockchain offering director.


Blockchain is a distributed database ledger that manages transactions and tracks assets. It can enable a network of users who wish to securely record, verify and execute transactions. That security is what draws everyone’s interest, but few blockchain application developers have the skills to match.
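The core ledger idea is simple to sketch: each block commits to the hash of its predecessor, so tampering with any record breaks the chain. The minimal Python illustration below shows that concept only; production frameworks such as Hyperledger Fabric add consensus, membership services and much more.

```python
# Minimal sketch of the core ledger idea: each block commits to its
# predecessor's hash, so tampering anywhere breaks the whole chain.
# (Illustrative only; a far cry from Hyperledger Fabric.)
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"asset": "loan-123", "pledged_to": "BankA"}])
append_block(chain, [{"asset": "loan-456", "pledged_to": "BankB"}])
print(verify(chain))                                 # True
chain[0]["transactions"][0]["pledged_to"] = "BankC"  # tamper with history
print(verify(chain))                                 # False
```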

“While there are a lot of developers that want to get in this space, there aren’t a lot of developers qualified to work on the core of a lot of these protocols from a security perspective,” said Chris Pacia, lead backend developer at OB1 based in Centreville, Va., at the recent QCon New York 2018 conference. OB1 is the parent company of OpenBazaar, an online marketplace that uses cryptocurrency.

Blockchain apps: The ‘cloud’ of the 21st century

Blockchain expertise is the top request among more than 5,000 skills on Upwork, the Mountain View, Calif., organization that matches freelance workers with employers. Demand for blockchain expertise on Upwork surged more than 6,000% year over year in the first three months of 2018.

In a recent Gartner study of nearly 300 CIOs of organizations with ongoing blockchain initiatives, 23% of respondents said that blockchain requires the most new skills to implement of any technology area, and another 18% said blockchain skills are the most difficult to find.


New York City-based Global Debt Registry (GDR), a fintech provider of asset certainty solutions, adopted the IBM blockchain starter plan to build its collateral pledge registry, which enables lenders to check the collateral positions of the institutional investors to which it lends money. For example, if Goldman Sachs lends money to a hedge fund and that hedge fund pledges a set of assets to them, that fund may also approach JPMorgan Chase & Co. and try to pledge the same set of assets. GDR’s registry would check to see if those assets are double-pledged, said Robert Brown, CTO of Global Debt Registry.
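A toy model of the check Brown describes, a registry that rejects a pledge when any asset in it is already pledged to a different lender, might look like the following. It is illustrative only, not GDR’s implementation, and the lender and asset names are invented.

```python
# Toy model of a double-pledge check: reject a pledge if any asset in it
# is already pledged to another lender. (Illustrative only, not GDR's
# actual registry.)
class PledgeRegistry:
    def __init__(self):
        self.pledged = {}  # asset id -> lender it is pledged to

    def pledge(self, lender, assets):
        conflicts = {a: self.pledged[a] for a in assets
                     if a in self.pledged and self.pledged[a] != lender}
        if conflicts:
            raise ValueError(f"double-pledged assets: {conflicts}")
        for a in assets:
            self.pledged[a] = lender

registry = PledgeRegistry()
registry.pledge("GoldmanSachs", ["loan-123", "loan-456"])
registry.pledge("JPMorgan", ["loan-456"])  # raises: loan-456 already pledged
```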

Brown’s team saw blockchain as a good fit because it’s essentially a set of data shared among a group of companies in an ecosystem. GDR, which started with no blockchain expertise, evaluated different blockchain options and selected Hyperledger because it was built from the ground up as a private blockchain. “We have a set of institutional investors and banks, and they don’t want to have their data in the open,” Brown said.

The IBM blockchain starter plan’s tools helped GDR developers build blockchain apps and get up and running quickly on the IBM Cloud, he said.

“Hyperledger Composer let us write our smart contracts in JavaScript, which is a language we’re familiar with,” Brown said. “The API was straightforward to deal with. Composer also has a modeling language that lets you define your data structures and signatures for the objects you create. The tools make it easy to get going.”

OB1’s Pacia said he is hopeful about projects like IBM’s starter plan but worries whether they will be enough to overcome the low number of people with blockchain expertise. “I’ve seen other efforts to kind of like train people and slowly bring them along so that they can contribute at that type of high level. But it does take a high level of training to do this securely,” he said.

Believe it or not, CIOs need a digital customer experience strategy

Whether or not a company is born digital, delivering a quality digital customer experience has emerged as a key performance indicator for technology leaders.

So say the CTO at Kayak, the CIO at DBS Bank, and the CIO at Adobe Systems Inc., who expounded on this idea during a panel discussion at the recent MIT Sloan CIO Symposium. Simply put: Customer satisfaction equates to company success, and technology such as artificial intelligence is the link between the two.

The three technology leaders are aggressively helping build a digital customer experience strategy that benefits both customers and the company. Doing this requires collecting data on how customers interact with the company and then finding ways to make those interactions more efficient — and more intelligent. Here is what each had to say about using advanced technology to monitor, enhance and capitalize on customer experience.

The error budget

One of the most practical pieces of advice on creating an effective digital customer experience strategy came from David Gledhill, group CIO and head of group technology and operations at DBS Bank in Singapore. He encouraged the audience to follow his lead and steal Google’s concept of an “error budget,” which can help companies strike a balance between moving fast and keeping customer service top of mind.


The error budget, a concept that’s evolving at DBS Bank, is a joint key performance indicator between technology and operations “to gauge and monitor customer experience” on digital platforms, according to Gledhill. “Every time a customer gets a performance degradation or [experiences] a struggle, it counts against the platform,” he said. Whatever those strikes are — be they performance issues or incomplete transactions — the company should determine a threshold and “round everything up to a single number,” Gledhill said.

Once the strikes against the digital platform hit the error threshold, developers have to stop and “refocus their efforts on solving those customer pain point interactions,” Gledhill said. He pointed audience members to Google’s book Site Reliability Engineering: How Google Runs Production Systems for more information.
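The arithmetic behind an error budget is straightforward: an availability target implies a fixed allowance of bad customer interactions, and every strike draws it down. A sketch with the target, volumes and strike counts invented for illustration:

```python
# Sketch of error-budget arithmetic: a service-level objective implies a
# budget of allowed bad events, and each customer "strike" draws it down.
# The target, volume and strike counts below are invented.
SLO = 0.999                    # target: 99.9% good interactions
interactions = 1_000_000       # customer interactions this quarter
budget = int(interactions * (1 - SLO))  # 1,000 allowed bad interactions

strikes = {"performance_degradation": 420, "incomplete_transaction": 310}
consumed = sum(strikes.values())        # roll everything up to one number

print(f"budget {budget}, consumed {consumed}")
if consumed >= budget:
    print("Budget exhausted: pause feature work, fix customer pain points.")
else:
    print(f"{budget - consumed} strikes of headroom remain.")
```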

Mapping the ‘customer journey’

Cynthia Stoddard, senior vice president and CIO at Adobe, said AI and machine learning have always been a part of the software company’s products. “We refer to it as the Adobe magic,” she said.

But what the company is attempting to do now is to use those tools to improve the customer’s experience with Adobe products — especially with its Creative Cloud. “What we want to be able to do with it is really unleash the power and let our customer have access to it so that we can remove the mundane and let people focus on the creativity,” Stoddard said.


Part of Adobe’s digital customer experience strategy is to map a customer’s “journey” across its product set, which can help illuminate both customer friction points as well as repetitive activity that might be ripe for automation. “Our view of the world with AI, from a product perspective, is more of a Harry Potter view of the world,” she said. “We want to do good things and help people do their tasks quicker.”

Stoddard said she uses an “outside-in” approach to understand the customer’s perspective, “looking at their journey points and ensuring that we remove all friction points.” But she also said it’s important to look at the world from an inside-out perspective, which focuses on designing for enterprise scale and efficiency.

When the two perspectives conflict, “the customer comes first,” she said.

A hybrid approach

Giorgos Zacharia, CTO and chief scientist at Kayak, demystified AI and machine learning as computational statistics with more computational power. “To me, there is nothing magic about it,” he said.

At Kayak, the digital customer experience strategy is the strategy for the company, according to Zacharia. “The dominant metric is completion of transaction — has the user found what they’re looking for?” he said.


But as Kayak’s developers experiment with how to better serve their users, their ideas can sometimes produce an undesirable result. “If you change the user experience way too much, the users might be taken aback,” Zacharia said. “And it takes time to retrain them.”

This happened recently when Kayak developers implemented a machine learning algorithm for sorting flights. Rather than sorting by price, the algorithm sorted by likelihood that a customer would complete a transaction. “For some users, the snackers, we call them, those who run a search to see what the current prices are, they were taken aback that they didn’t see the cheapest price on top,” he said.

Zacharia and his team addressed the issue with a hybrid approach — the cheapest fare is on top and the rest of the results are sorted by likelihood of conversion. “It works for the user — for now,” he said.
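A sketch of the hybrid ranking Zacharia describes, with the cheapest fare pinned on top and the remainder ordered by predicted conversion likelihood; the fields and scores below are invented for illustration.

```python
# Sketch of the hybrid ranking described above: pin the cheapest fare on
# top, then order the rest by predicted conversion likelihood.
# Flight fields and scores are invented for illustration.
flights = [
    {"id": "F1", "price": 220, "p_convert": 0.30},
    {"id": "F2", "price": 180, "p_convert": 0.10},  # cheapest
    {"id": "F3", "price": 260, "p_convert": 0.45},
]

def hybrid_sort(flights):
    cheapest = min(flights, key=lambda f: f["price"])
    rest = sorted((f for f in flights if f is not cheapest),
                  key=lambda f: f["p_convert"], reverse=True)
    return [cheapest] + rest

print([f["id"] for f in hybrid_sort(flights)])  # ['F2', 'F3', 'F1']
```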