As AI identity management takes shape, are enterprises ready?

BOSTON — Enterprises may soon find themselves replacing their usernames and passwords with algorithms.

At the Identiverse 2018 conference last month, a chorus of vendors, infosec experts and keynote speakers discussed how machine learning and artificial intelligence are changing the identity and access management (IAM) space. Specifically, IAM professionals promoted the concept of AI identity management, where vulnerable password systems are replaced by systems that rely instead on biometrics and behavioral security to authenticate users. And, as the argument goes, humans won’t be capable of effectively analyzing the growing number of authentication factors, which can include everything from login times and download activity to mouse movements and keystroke patterns. 

Sarah Squire, senior technical architect at Ping Identity, believes that use of machine learning and AI for authentication and identity management will only increase. “There’s so much behavioral data that we’ll need AI to help look at all of the authentication factors,” she told SearchSecurity, adding that such technology is likely more secure than relying solely on traditional password systems.

During his Identiverse keynote, Andrew McAfee, principal research scientist at the Massachusetts Institute of Technology, discussed how technology, and AI in particular, is changing the rules of business and replacing executive “gut decisions” with data-intensive predictions and determinations. “As we rewrite the business playbook, we need to keep in mind that machines are now demonstrating excellent judgment over and over and over,” he said.

AI identity management in practice

Some vendors have already deployed AI and machine learning for IAM. For example, cybersecurity startup Elastic Beam, which was acquired by Ping last month, uses AI-driven analysis to monitor API activity and potentially block APIs if malicious activity is detected. Bernard Harguindeguy, founder of Elastic Beam and Ping’s new senior vice president of intelligence, said AI is uniquely suited for API security because there are simply too many APIs, too many connections and too wide an array of activity to monitor for human admins to keep up with.

There are other applications for AI identity management and access control. Andras Cser, vice president and principal analyst for security and risk professionals at Forrester Research, said he sees several ways machine learning and AI are being used in the IAM space. For example, privileged identity management can use algorithms to analyze activity and usage patterns to ensure the individuals using the privileged accounts aren’t malicious actors.

“You’re looking at things like, how has a system administrator been doing X, Y and Z, and why? If this admin has been using these three things and suddenly he’s looking at 15 other things, then why does he need that?” Cser said.

In addition, Cser said machine learning and AI can be used for conditional access and authorization. “Adaptive or risk-based authorization tend to depend on machine learning to a great degree,” he said. “For example, we see that you have access to these 10 resources, but you need to be in your office during normal business hours to access them. Or if you’ve been misusing these resources across these three applications, then it will ratchet back your entitlements at least temporarily and grant you read-only access or require manager approval.”
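
The adaptive-authorization pattern Cser describes can be sketched in a few lines of Python. This is a minimal illustration of the idea, not any vendor's implementation; the policy names, hours and flag counts are assumptions.

```python
from datetime import time

# Illustrative adaptive-authorization policy: entitlements are
# ratcheted back based on context, along the lines Cser describes.
BUSINESS_HOURS = (time(9, 0), time(17, 0))

def authorize(in_office: bool, request_time: time, misuse_flags: int) -> str:
    """Return an access level for a protected resource."""
    start, end = BUSINESS_HOURS
    if not in_office or not (start <= request_time <= end):
        return "deny"          # must be in the office during business hours
    if misuse_flags > 0:
        return "read-only"     # misuse detected: ratchet back entitlements
    return "full"

print(authorize(True, time(10, 30), 0))   # normal access
print(authorize(True, time(10, 30), 2))   # flagged user, reduced access
print(authorize(False, time(10, 30), 0))  # outside the office
```

In a real system the misuse signal would come from a learned model scoring activity across applications, rather than a simple flag count.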

Algorithms are being used not just for managing identities but for creating them as well. During his Identiverse keynote, Jonathan Zittrain, George Bemis professor of international law at Harvard Law School, discussed how companies are using data to create “derived identities” of consumers and users. “Artificial intelligence is playing a role in this in a way that maybe it wasn’t just a few years ago,” he said.

Zittrain said he had a “vague sense of unease” about machine learning being used to target individuals via their derived identities and market suggested products to them. We don’t know what data is being used, he said, but we know there is a lot of it, and the identities that are created aren’t always accurate. Zittrain joked about how, when he was in England a while ago, he was looking at the Lego Creator activity book on Amazon, which was offered up as the “perfect partner” to a book called American Jihad. Other times, he said, the technology creates anxieties when people discover the derived identities are too accurate.

“You realize the way these machine learning technologies work is by really being effective at finding correlations where our own instincts would tell us none exist,” Zittrain said. “And yet, they can look over every rock to find one.”

Potential issues with AI identity management

Experts say allowing AI systems to automatically authenticate or block users, applications and APIs with no human oversight comes with some risk, as algorithms are never 100% accurate. Squire said there could be a trial-and-error period, but added there are ways to mitigate those errors. For example, she suggested AI identity management shouldn’t treat all applications and systems the same, and recommended assigning a risk level to each resource or asset that requires authentication.

“It depends on what the user is doing,” Squire said. “If you’re doing something that has a low risk score, then you don’t need to automatically block access to it. But if something has a high risk score, and the authentication factors don’t meet the requirement, then it can automatically block access.”
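
Squire's risk-tiered approach can be illustrated with a short sketch. The resource names, scores and threshold below are hypothetical; the point is only that a failed factor check auto-blocks high-risk resources while low-risk ones degrade more gently.

```python
# Illustrative risk-tiered authentication gate: assign each resource a
# risk score, and only auto-block access to high-risk resources when
# authentication factors fall short of the requirement.
RESOURCE_RISK = {          # hypothetical per-resource risk scores (0-10)
    "cafeteria-menu": 1,
    "payroll-db": 9,
}
BLOCK_THRESHOLD = 7        # resources at or above this score may auto-block

def gate(resource: str, factor_score: float, required: float) -> str:
    risk = RESOURCE_RISK.get(resource, BLOCK_THRESHOLD)  # unknown = high risk
    if factor_score >= required:
        return "allow"
    # Factors fell short: auto-block only high-risk resources,
    # and flag low-risk ones for human review instead.
    return "block" if risk >= BLOCK_THRESHOLD else "allow-and-flag"

print(gate("payroll-db", 0.4, 0.8))      # high risk, weak factors
print(gate("cafeteria-menu", 0.4, 0.8))  # low risk, weak factors
```

The "allow-and-flag" branch is one way to keep the human in the loop that Squire calls for, rather than letting the algorithm block everything on its own.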

Squire said she doesn’t expect AI identity management to remove the need for human infosec professionals. In fact, it may require even more. “Using AI is going to allow us to do our jobs in a smarter way,” she said. “We’ll still need humans in the loop to tell the AI to shut up and provide context for the authentication data.”

Cser said the success of AI-driven identity management and access control will depend on a few critical factors. “The quality and reliability of the algorithms are important,” he said. “How is the model governed? There’s always a model governance aspect. There should be some kind of mathematically defensible, formalized governance method to ensure you’re not creating regression.”

Explainability is also important, he said. Vendor technology should have some type of “explanation artifacts” that clarify why access has been granted or rejected, what factors were used, how those factors were weighted and other vital details about the process. If IAM systems or services don’t have those artifacts, then they risk becoming black boxes that human infosec professionals can’t manage or trust.

Regardless of potential risks, experts at Identiverse generally agreed that machine learning and AI are proving their effectiveness and expect an increasing amount of work to be delegated to them. “The optimal, smart division of labor between what we do — minds — and [what] machines do is shifting very, very quickly,” McAfee said during his keynote. “Very often it’s shifting in the direction of the machines. That doesn’t mean that all of us have nothing left to offer, that’s not the case at all. It does mean that we’d better re-examine some of our fundamental assumptions about what we’re better at than the machines because of the judgment and the other capabilities that the machines are demonstrating now.”

Container security emerges in IT products enterprises know and trust

Container security has arrived from established IT vendors that enterprises know and trust, but startups that were first to market still have a lead, with support for cloud-native tech.

Managed security SaaS provider Alert Logic this week became the latest major vendor to throw its hat into the container security ring, a month after cloud security and compliance vendor Qualys added container security support to its DevSecOps tool.

Container security monitoring is now a part of Alert Logic’s Cloud Defender and Threat Manager intrusion detection systems (IDSes). Software agents, deployed on each host inside a privileged container, monitor network traffic for threats both between containers within that host and between hosts. A web application firewall blocks suspicious traffic Threat Manager finds between containers, and Threat Manager offers remediation recommendations to address any risks that remain in the infrastructure.

Accesso Technology Group bought into Alert Logic’s IDS products in January 2018 because they support VM-based and bare-metal infrastructure, and planned container support was a bonus.

“They gave us a central location to monitor our physical data centers, remote offices and multiple public clouds,” said Will DeMar, director of information security at Accesso, a ticketing and e-commerce service provider in Lake Mary, Fla.

DeMar beta-tested the Threat Manager features and has already deployed them with production Kubernetes clusters in Google Kubernetes Engine and AWS Elastic Compute Cloud environments, though Alert Logic’s official support for its initial release is limited to AWS.

“We have [AWS] CloudFormation and [HashiCorp] Terraform scripts that put Alert Logic onto every new Kubernetes host, which gives us immediate visibility into intrusion and configuration issues,” DeMar said. “It’s critical to our DevOps process.”

A centralized view of IT security in multiple environments and “one throat to choke” in a single vendor appeals to DeMar, but he hasn’t ruled out tools from Alert Logic’s startup competitors, such as Aqua Security, NeuVector and Twistlock, which he sees as complementary to Alert Logic’s product.

“Aqua and Twistlock are more container security-focused than intrusion detection-focused,” DeMar said. “They help you check the configuration on your container before you release it to the host; Alert Logic doesn’t help you there.”

Container security competition escalates

Alert Logic officials, however, do see Aqua Security, Twistlock and their ilk as competitors, and the container image scanning ability DeMar referred to is on the company’s roadmap for Threat Manager in the next nine months. Securing Docker containers involves multiple layers of infrastructure, and Alert Logic positions its container security approach as network-based IDS, as opposed to host-based IDS. The company said network-based IDS inspects real-time network traffic more deeply, at the packet level, whereas startups’ products examine only where that network traffic goes between hosts.

Alert Logic’s Threat Manager offers container security remediation recommendations.

Aqua Security co-founder and CTO Amir Jerbi, of course, sees things differently.

“Traditional security tools are trying to shift into containers and still talk in traditional terms about the host and network,” Jerbi said. “Container security companies like ours don’t distinguish between network, host and other levels of access — we protect the container, through a mesh of multiple disciplines.”

That’s the major distinction for enterprise end users: whether they prefer container security baked into broader, traditional products or as the sole focus of their vendor’s expertise. Aqua Security version 3.2, also released this week, added support for container host monitoring where thin OSes are used, but the tool isn’t a good fit in VM or bare-metal environments where containers aren’t present, Jerbi said.

Aqua Security’s tighter focus means it has a head start on the latest and greatest container security features. For example, version 3.2 includes the ability to customize and build a whitelist of system calls containers make, which is still on the roadmap for Alert Logic. Version 3.2 also adds support for static AWS Lambda function monitoring, with real-time Lambda security monitoring already on the docket. Aqua Security was AWS’ partner for container security with Fargate, while Alert Logic must still catch up there as well.

Industry watchers expect this dynamic to continue for the rest of 2018 and predict that incumbent vendors will snap up startups in an effort to get ahead of the curve.

“Everyone sees the same hill now, but they approach it from different viewpoints, more aligned with developers or more aligned with IT operations,” said Fernando Montenegro, analyst with 451 Research. “As the battle lines become better defined, consolidation among vendors is still a possibility, to strengthen the operations approach where vendors are already focused on developers and vice versa.”

Building a data science pipeline: Benefits, cautions

Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline — a sequence of actions for processing data — will help companies be more competitive in a digital, fast-moving economy. 

Before CIOs take this approach, however, it’s important to consider some of the key differences between data science development workflows and traditional application development workflows.

Data science development pipelines used for building predictive and data science models are inherently experimental and don’t always pan out in the same way as other software development processes, such as Agile and DevOps. Because data science models break and lose accuracy in different ways than traditional IT apps do, a data science pipeline needs to be scrutinized to assure the model reflects what the business is hoping to achieve.

At the recent Rev Data Science Leaders Summit in San Francisco, leading experts explored some of these important distinctions, and elaborated on ways that IT leaders can responsibly implement a data science pipeline. Most significantly, data science development pipelines need accountability, transparency and auditability. In addition, CIOs need to implement mechanisms for addressing the degradation of a model over time, or “model drift.” Having the right teams in place in the data science pipeline is also critical: Data science generalists work best in the early stages, while specialists add value to more mature data science processes.

Data science at Moody’s

CIOs might want to take note from Moody’s, the financial analytics giant, which was an early pioneer in using predictive modeling to assess the risks of bonds and investment portfolios. Jacob Grotta, managing director at Moody’s Analytics, said the company has streamlined the data science pipeline it uses to create models in order to be able to quickly adapt to changing business and economic conditions.

“As soon as a new model is built, it is at its peak performance, and over time, they get worse,” Grotta said. Declining model performance can have significant impacts. For example, in the finance industry, a model that doesn’t accurately predict mortgage default rates puts a bank in jeopardy. 

Watch out for assumptions

Grotta said it is important to keep in mind that data science models are created by and represent the assumptions of the data scientists behind them. Before the 2008 financial crisis, a firm approached Grotta with a new model for predicting the value of mortgage-backed derivatives, he said. When he asked what would happen if the prices of houses went down, the firm responded that the model predicted the market would be fine. But it didn’t have any data to support this. Mistakes like these cost the economy almost $14 trillion by some estimates.

The expectation among companies often is that someone understands what the model does and its inherent risks. But these unverified assumptions can create blind spots for even the most accurate models. Grotta said it is a good practice to create lines of defense against these sorts of blind spots.

The first line of defense is to encourage the data modelers to be honest about what they do and don’t know and to be clear on the questions they are being asked to solve. “It is not an easy thing for people to do,” Grotta said.

A second line of defense is verification and validation. Model verification involves checking to see that someone implemented the model correctly, and whether mistakes were made while coding it. Model validation, in contrast, is an independent challenge process to help a person developing a model to identify what assumptions went into the data. Ultimately, Grotta said, the only way to know if the modeler’s assumptions are accurate or not is to wait for the future.

A third line of defense is an internal audit or governance process. This involves making the results of these models explainable to front-line business managers. Grotta said he recently worked with a bank that protested that its managers would not use a model if they didn’t understand what was driving its results. But he said the managers were right to do this. Having a governance process and ensuring information flows up and down the organization is extremely important, Grotta said.

Baking in accountability

Models degrade or “drift” over time, which is part of the reason organizations need to streamline their model development processes. It can take years to craft a new model. “By that time, you might have to go back and rebuild it,” Grotta said. Critical models must be revalidated every year.

To address this challenge, CIOs should think about creating a data science pipeline with an auditable, repeatable and transparent process. This promises to allow organizations to bring the same kind of iterative agility to model development that Agile and DevOps have brought to software development.

Transparent means that upstream and downstream people understand the model drivers. It is repeatable in that someone can repeat the process around creating it. It is auditable in the sense that there is a program in place to think about how to manage the process, take in new information, and get the model through the monitoring process. There are varying levels of this kind of agility today, but Grotta believes it is important for organizations to make it easy to update data science models in order to stay competitive.

How to keep up with model drift

Nick Elprin, CEO and co-founder of Domino Data Lab, a data science platform vendor, agreed that model drift is a problem that must be addressed head on when building a data science development pipeline. In some cases, the drift might be due to changes in the environment, like changing customer preferences or behavior. In other cases, drift could be caused by more adversarial factors. For example, criminals might adopt new strategies for defeating a new fraud detection model.

In order to keep up with this drift, CIOs need to include a process for monitoring the effectiveness of their data models over time and establishing thresholds for replacing these models when performance degrades.

With traditional software monitoring, IT service management teams need to track metrics related to CPU, network and memory usage. With data science, CIOs need to capture metrics related to the accuracy of model results. “Software for [data science] production models needs to look at the output they are getting from those models, and if drift has occurred, that should raise an alarm to retrain it,” Elprin said.
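
A drift monitor of the kind Elprin describes can be sketched in a few lines: track the model's recent accuracy in production and fire an alarm (a retraining trigger) when it falls below a threshold. The window size and threshold here are illustrative assumptions.

```python
from collections import deque

# Minimal model-drift monitor: keep a sliding window of prediction
# outcomes and raise an alarm when windowed accuracy degrades.
class DriftMonitor:
    def __init__(self, threshold: float = 0.90, window: int = 100):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual) -> bool:
        """Record one prediction; return True if a drift alarm fires."""
        self.outcomes.append(1 if prediction == actual else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data in the window yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

monitor = DriftMonitor(threshold=0.9, window=10)
# Nine correct predictions, then two misses: accuracy in the
# 10-observation window drops to 0.8 and the alarm fires.
alarms = [monitor.record(p, a) for p, a in [(1, 1)] * 9 + [(1, 0), (1, 0)]]
print(alarms[-1])  # -> True
```

In practice the "actual" labels often arrive late (a loan defaults months after the prediction), so production systems usually pair this kind of outcome tracking with proxy signals such as shifts in the input distribution.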

Fashion-forward data science

At Stitch Fix, a personal shopping service, the company’s data science pipeline allows it to sell clothes online at full price. Using data science in various ways allows it to find new ways to add value against deep-discount giants like Amazon, said Eric Colson, chief algorithms officer at Stitch Fix.

For example, the data science team has used natural language processing to improve its recommendation engines and buy inventory. Stitch Fix also uses genetic algorithms — algorithms that are designed to mimic evolution and iteratively select the best results following a set of randomized changes. These are used to streamline the process for designing clothes, coming up with countless iterations: Fashion designers then vet the designs.
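
A toy version of the genetic-algorithm loop the article describes looks like this. Stitch Fix's actual fitness signal (designer vetting, sales data) is replaced here by a trivial count-the-ones score over a bitstring "design"; all parameters are illustrative.

```python
import random

random.seed(0)  # deterministic run for the example

# Toy genetic algorithm: mutate a population of candidate "designs"
# (bitstrings), keep the fittest half, and iterate.
def evolve(pop_size=20, genes=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # fitness = number of 1s
        survivors = pop[: pop_size // 2]      # selection keeps the best half
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genes)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

best = evolve()
print(sum(best))  # fitness climbs toward the all-ones optimum
```

The structure mirrors what the article sketches: generate many randomized variations, score them, and let only the best survive to the next round, with humans (here, the fitness function) vetting the results.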

This kind of digital innovation, however, was only possible, he said, because the company created an efficient data science pipeline. He added that it was also critical that the data science team is considered a top-level department at Stitch Fix and reports directly to the CEO.

Specialists or generalists?

One important consideration for CIOs in constructing the data science development pipeline is whether to recruit data science specialists or generalists. Specialists are good at optimizing one step in a complex data science pipeline. Generalists can execute all the different tasks in a data science pipeline. In the early stages of a data science initiative, generalists can adapt to changes in the workflow more easily, Colson said.

Some of these different tasks include feature engineering, model training, extract, transform and load (ETL) work on data, API integration, and application development. It is tempting to staff each of these tasks with specialists to improve individual performance. “This may be true of assembly lines, but with data science, you don’t know what you are building, and you need to iterate,” Colson said. The process of iteration requires fluidity, and if the different roles are staffed with different people, there will be longer wait times when a change is made.

In the beginning at least, companies will benefit more from generalists. But once data science processes are established after a few years, specialists may be more efficient.

Align data science with business

Today a lot of data science models are built in silos that are disconnected from normal business operations, Domino’s Elprin said. To make data science effective, it must be integrated into existing business processes. This comes from aligning data science projects with business initiatives. This might involve things like reducing the cost of fraudulent claims or improving customer engagement.

In less effective organizations, management tends to start with the data the company has collected and wonder what a data science team can do with it. In more effective organizations, data science is driven by business objectives.

“Getting to digital transformation requires top down buy-in to say this is important,” Elprin said. “The most successful organizations find ways to get quick wins to get political capital. Instead of twelve-month projects, quick wins will demonstrate value, and get more concrete engagement.”

Juniper adds core campus switch to EX series

Juniper Networks has added to its EX series a core aggregation switch aimed at enterprises with campus networks that are too small for the company’s EX9000 line.

Like the EX9000 series, the EX4650 — a compact 25/100 GbE switch — uses network protocols typically found in the data center. As a result, the same engineering team can manage the data center and the campus.

“If an enterprise has a consistent architecture and common protocols across networks, it should be well-placed to achieve operational efficiencies across the board,” said Brad Casemore, an analyst at IDC.

The network protocols used in the EX4650 and EX9000 are the Ethernet VPN (EVPN) and the Virtual Extensible LAN (VXLAN). EVPN secures multi-tenancy environments in a data center. Engineers typically use it with the Border Gateway Protocol and the VXLAN encapsulation protocol. The latter creates an overlay network on an existing Layer 3 infrastructure.

Offering a common set of protocols lets Juniper target its campus switches at data center customers, Casemore said. “That’s a less resistant path than trying to displace other vendors in both the data center and the campus.”

Juniper released the EX4650 four months after releasing two multigigabit campus switches, the EX2300 and EX4300. Juniper also released in February a cloud-based dashboard, called Sky Enterprise, for provisioning and configuring Juniper’s campus switches and firewalls.

Juniper rivals Arista and Cisco are also focused on the campus market. In May, Arista extended its data center switching portfolio to the campus LAN with the introduction of the 7300X3 and 7050X3 spline switches. Cisco, on the other hand, has been building out a software-controlled infrastructure for the campus network, centered around a management console called the Digital Network Architecture (DNA) Center.

Juniper Networks’ EX4650 core aggregation switch for the campus

SD-WAN upgrade

Along with introducing the EX4650, Juniper unveiled this week improvements within its software-defined WAN for the campus. Companies can use Juniper’s Contrail Service Orchestration technology to prioritize specific application traffic traveling through the SD-WAN. The capability supports more than 3,700 applications, including Microsoft’s Outlook, SharePoint and Skype for Business, Juniper said.

Juniper runs its SD-WAN as a feature within the company’s NFX Network Services Platform, which also includes the Contrail orchestration software and Juniper’s SRX Series Services Gateways. The latter contains the vSRX virtual firewall, IP VPN, content filtering and threat management.

Juniper has added to the NFX platform support for active-active clustering, which is the ability to spread a workload across NFX hardware. NFX runs its software on a Linux server.

The clustering feature will improve the reliability of the LTE, broadband and MPLS connections typically attached to an SD-WAN, Juniper said.

Unchecked cloud IoT costs can quickly spiral upward

The convergence of IoT and cloud computing can tantalize enterprises that want to delve into new technology, but it’s potentially a very pricey proposition.

Public cloud providers have pushed heavily into IoT, positioning themselves as a hub for much of the storage and analysis of data collected by these connected devices. Managed services from AWS, Microsoft Azure and others make IoT easy to initiate, but users who don’t properly configure their workloads quickly encounter runaway IoT costs.

Cost overruns on public cloud deployments are nothing new, despite lingering perceptions that these platforms are always a cheaper alternative to private data centers. But IoT architectures are particularly sensitive to metered billing because of the sheer volume of data they produce. For example, a connected device in a factory setting could generate hundreds of unique streams of data every few milliseconds that record everything from temperatures to acoustics. That much data could add up to a terabyte uploaded daily to cloud storage.

“The amount of data you transmit and store and analyze is potentially infinite,” said Ezra Gottheil, an analyst at Technology Business Research Inc. in Hampton, N.H. “You can measure things however often you want. And if you measure it often, the amount of data grows without bounds.”
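
A back-of-the-envelope calculation shows how quickly the article's factory example reaches that scale. The stream count, sampling rate and record size below are assumptions for illustration, not figures from the article.

```python
# Rough sizing of a single connected factory device's daily upload.
streams = 200                 # assumed sensor streams per device
samples_per_second = 200      # one sample every 5 ms
record_bytes = 256            # assumed payload + metadata per reading
seconds_per_day = 86_400

bytes_per_day = streams * samples_per_second * record_bytes * seconds_per_day
print(f"{bytes_per_day / 1e12:.2f} TB/day")  # ≈ 0.88 TB/day per device
```

At those assumed rates a single device approaches a terabyte per day, which is why sampling frequency and retention policy, discussed below, dominate cloud IoT costs.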

Users must also consider networking costs. Most large cloud vendors charge based on communications between the device and their core services. And in typical public cloud fashion, each vendor charges differently for those services.

Predictive analytics reveals, compares IoT costs

To parse the complexity and scale of potential IoT cost considerations, analyst firm 451 Research built a Python simulation and applied predictive analytics to determine costs for 10 million IoT workload configurations. It found Azure was largely the least-expensive option — particularly if resources were purchased in advance — though AWS could be cheaper on deployments with fewer than 20,000 connected devices. It also illuminated how vast pricing complexities hinder straightforward cost comparisons between providers.

For example, Google charges in terms of data transferred, while AWS and Azure charge against the number of messages sent. Yet, AWS and Azure treat messages differently, which can also affect IoT costs; Microsoft caps the size of a message, potentially requiring a customer to send multiple messages.

There are other unexpected charges, said Owen Rogers, a 451 analyst. Google, for example, charges for ping messages, which check that the connection is kept alive. That ping may only be 64 bytes, but Google rounds up to the kilobyte. So, customers essentially pay for unused capacity.
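
The billing nuances Rogers describes can be made concrete with a small sketch comparing two metering styles: per-message billing with a size cap, and per-kilobyte billing that rounds every message up. The prices, caps and message counts here are invented for illustration and are not any provider's actual rates.

```python
import math

def cost_per_message(messages: int, payload_bytes: int,
                     cap_bytes: int = 4096,
                     price_per_msg: float = 1e-6) -> float:
    # A payload over the cap is split and billed as multiple messages.
    billed = messages * math.ceil(payload_bytes / cap_bytes)
    return billed * price_per_msg

def cost_per_kb(messages: int, payload_bytes: int,
                price_per_kb: float = 4.5e-7) -> float:
    # Each message rounds up to a whole kilobyte -- so a 64-byte
    # keep-alive ping is billed as a full 1 KB.
    return messages * math.ceil(payload_bytes / 1024) * price_per_kb

pings = 1_000_000  # a month of keep-alive pings, 64 bytes each
print(cost_per_message(pings, 64))  # 64 B fits in one capped message
print(cost_per_kb(pings, 64))       # the same ping pays for unused capacity
```

Even in this toy model, which charging scheme is cheaper flips depending on message size and volume, which is why 451 Research needed a simulation across millions of configurations to compare providers.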

“Each of these models has nuances, and you only really discover them when you look through the terms and conditions,” Rogers said.

Some of these nuances aim to protect the provider or hide complexity from users, but they can leave customers scratching their heads. Charging discrepancies are endemic to the public cloud, but IoT costs present new challenges for those deciding which cloud to use — especially those who start out with no past experience as a reference point.

“How are you going to say it’s less or more than it was before? At least in a VM, you have a comparison with dedicated servers from before. But with IoT, it’s a whole new world,” Rogers said. “If you want to compare providers, it would be almost impossible to do manually.”

There are many unknowns to building an IoT deployment compared to more traditional applications, some of which apply regardless of whether it’s built on the public cloud or in a private data center. Software asset management can be a huge cost at scale. In the case of a connected factory or building, greater heterogeneity affects time and cost, too.

“Developers really need to understand the environment, and they have to be able to program for that environment,” said Alfonso Velosa, a Gartner analyst. “You would set different protocols, logic rules and processes when you’re in the factory for a robot versus a man[-operated] machine versus the air conditioners.”

Data can also get stale rather quickly and, in some cases, become useless, if it’s not used within seconds. Companies must put policies in place to make sure they understand how frequently to record data and transmit the appropriate amount of data back to the cloud. That includes when to move data from active storage to cold storage and if and when to completely purge those records.

“It’s really sitting down and figuring out, ‘What’s the value of this data, and how much do I want to collect?'” Velosa said. “For a lot of folks, it’s still not clear where that value is.”

Jitterbit Harmony update brings API management to iPaaS

Multi-SaaS environments are common in enterprises today, and so are connection challenges between those environments. Jitterbit promises to simplify SaaS integrations with the latest version of its enterprise iPaaS platform.

The Harmony Summer ’18 release adds API lifecycle management and hundreds of self-service integration templates. It also features point-and-click integration and API management capabilities to accommodate both non-IT knowledge workers and experienced integrators and API developers.

In this era of BizDevOps, individual business departments frequently help implement integration between cloud applications, said Neil Ward-Dutton, research director for U.K.-based MWD Advisors. Jitterbit has worked to provide an integration platform as a service (iPaaS) that caters not only to IT specialists, but also to less technical staff, he said.

Jitterbit’s expanded recipe book

With over 500 new, prebuilt and certified recipes, the Harmony Summer ’18 release aims to help less technical users quickly build integrations for common combinations of applications, Ward-Dutton said. Jitterbit recipes enable endpoint connections between enterprise SaaS apps, such as Amazon Simple Storage Service, Box, NetSuite, Salesforce and others.

In the past, Jitterbit Harmony enabled IT specialists to build integration recipes for business teams, Ward-Dutton said. The new set of development templates goes a step further to provide a library of easy-to-use, certified content, with a guarantee of certified General Data Protection Regulation compliance.


“Jitterbit, overall, does a good job of spanning core IT and citizen integrator audiences, and its [recipes are] more consumable than more hardcore tech platforms, like those from MuleSoft and TIBCO,” Ward-Dutton said. “However, others like Boomi, Scribe Online and SnapLogic are pretty comparable.”

For Skullcandy Inc., Jitterbit’s prebuilt integration templates help it accelerate deployment and enable live integrations in weeks instead of months. “We were able to connect and automate our business processes with SAP [Business] ByDesign, EDI [electronic data interchange], FTP, email, databases — you name it,” said Yohan Beghein, IT director for the device vendor, based in Park City, Utah. With the updated Jitterbit Harmony iPaaS platform, Skullcandy processes millions of transactions, transforms high volumes of information with logic and synchronizes data across all systems, he said.

API management brings integration control

API integration is one of Jitterbit Harmony’s strong points, but its API management features lagged behind the aforementioned competitors; this latest release brings Harmony in line with many other players in this space, Ward-Dutton said.


HotSchedules, a restaurant software vendor based in Austin, Texas, uses Harmony’s improved API integration and management features to quickly and accurately aggregate and manage data from many different sources. Without these capabilities, HotSchedules’ operations team would have to use several different systems to understand the health of customers’ APIs and integrations, said Laura McDonough, vice president of operations at HotSchedules. “If the data isn’t accurate, our customer success team would be making decisions based on incorrect data,” she said.

With API development, integration and management capabilities on a single platform, it’s easier to expose data from existing apps and drive real-time integration, said Simon Peel, Jitterbit’s chief strategy officer. The new Jitterbit Harmony release enables full API lifecycle management from any device, including security control management, user authentication and API performance monitoring, and provides alerts about API processes.
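API performance monitoring and alerting of the kind described usually reduces to aggregating per-API metrics and flagging breaches of a threshold. A hedged sketch of that idea; the metric names and thresholds are assumptions for illustration, not Jitterbit's actual API:

```python
def flag_unhealthy(metrics, max_error_rate=0.05, max_p95_ms=500):
    """Return API names whose error rate or p95 latency breaches a threshold.

    metrics: dict mapping API name -> {"errors": int, "calls": int, "p95_ms": float}
    """
    alerts = []
    for name, m in metrics.items():
        error_rate = m["errors"] / max(m["calls"], 1)  # avoid division by zero
        if error_rate > max_error_rate or m["p95_ms"] > max_p95_ms:
            alerts.append(name)
    return sorted(alerts)
```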

The Summer ’18 release is available to new users on a 30-day trial basis.

Offering the largest scale and broadest choice for SAP HANA in the cloud

Microsoft at SAPPHIRE NOW 2018

Enterprises have been embarking on a journey of digital transformation for many years. For many enterprises, this journey cannot start or gain momentum until core SAP Enterprise Resource Planning (ERP) landscapes are transformed. The last year has seen an acceleration of this transformation, with SAP customers of all sizes like Penti, Malaysia Airlines, Guyana Goldfields, Rio Tinto, Co-op, and Coats migrating to the cloud on Microsoft Azure. This cloud migration, which is central to digital transformation, helps to increase business agility, lower costs, and enable new business processes to fuel growth. In addition, it has allowed them to take advantage of advancements in technology such as big data analytics, self-service business intelligence (BI), and Internet of Things (IoT).

As leaders in enterprise software, SAP and Microsoft provide the preferred foundation for enabling the safe and trusted path to digital transformation. Together we enable the inevitable move to SAP S/4HANA which will help accelerate digital transformation for customers of all sizes.

Microsoft has collaborated with SAP for 20+ years to enable enterprise SAP deployments with Windows Server and SQL Server. In 2016 we partnered to offer SAP certified, purpose-built, SAP HANA on Azure Large Instances supporting up to 4 TB of memory. Last year at SAPPHIRE NOW, we announced the largest scale for SAP HANA in the public cloud with support up to 20 TB on a single node and our M-series VM sizes up to 4 TB. With the success of M-series VMs and our SAP HANA on Azure Large Instances, customers have asked us for even more choices to address a wider variety of SAP HANA workloads.

Microsoft is committed to offering the most scale and performance for SAP HANA in the public cloud, and yesterday announced additional SAP HANA offerings on Azure which include:

  • Largest SAP HANA optimized VM size in the cloud: We are happy to announce that the Azure M-series will support large memory virtual machines with sizes up to 12 TB. These new sizes will be launching soon, pushing the limits of virtualization in the cloud for SAP HANA. These new sizes are based on Intel Xeon Scalable (Skylake) processors and will offer the most memory available of any VM in the public cloud.
  • Wide range of SAP HANA certified VMs: For customers needing smaller instances, we have expanded our offering with smaller M-series VM sizes, extending Azure’s SAP HANA certified M-series VM range from 192 GB to 4 TB across 10 different VM sizes. These sizes offer on-demand, SAP-certified instances with the flexibility to spin up or scale up in minutes and to spin down to save costs, all in a pay-as-you-go model available worldwide. This flexibility and agility are not possible with a private cloud or on-premises SAP HANA deployment.
  • 24 TB bare metal instance and optimized price per TB: For customers that need a higher-performance dedicated offering for SAP HANA, we are increasing our investments in our purpose-built bare metal SAP HANA infrastructure. We now offer additional SAP HANA TDIv5 options of 6 TB, 12 TB, 18 TB, and 24 TB configurations in addition to our current configurations from 0.7 TB to 20 TB. This enables customers who need more memory but the same number of cores to get a better price per TB deployed.
  • Most choice for SAP HANA in the cloud: With 26 distinct SAP HANA offerings from 192 GB to 24 TB, scale-up certification up to 20 TB and scale-out certification up to 60 TB, global availability in 12 regions with plans to increase to 22 regions in the next 6 months, Azure now offers the most choice for SAP HANA workloads of any public cloud.

Microsoft Azure also enables customers to derive insights and analytics from SAP data with services such as Azure Data Factory SAP HANA connector to automate data pipelines, Azure Data Lake Store for hyper scale data storage and Power BI, an industry leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data.

Our unique partnership with SAP to enable customer success

Last November, Microsoft and SAP announced an expanded partnership to help customers accelerate their business transformation with S/4HANA on Azure. Microsoft has been a longtime SAP customer for many of our core business processes such as financials and supply chain. As part of this renewed partnership, Microsoft announced it will use S/4HANA for Central Finance, and SAP announced it will use Azure to host 17 internal business critical systems.

I am very pleased to share an update on SAP’s migration to Azure from Thomas Saueressig, CIO of SAP:

“In 2017 we started to leverage Azure as IaaS Platform. By the end of 2018 we will have moved 17 systems including an S/4HANA system for our Concur Business Unit. We are expecting significant operational efficiencies and increased agility which will be a foundational element for our digital transformation.”

Correspondingly, here’s an update on Microsoft’s migration to S/4HANA on Azure from Mike Taylor, GM of Partner and Enterprise Applications Services at Microsoft:

“In 2017 we started the internal migration of our SAP system that we have been running for over 25 years, to S/4HANA. As part of that journey we felt it was necessary to first move our SAP environment completely onto Azure, which we completed in February 2018. With the agility that Azure offers we have already stood up multiple sandbox environments to help our business realize the powerful new capabilities of S/4HANA.”

As large enterprises, we are going through our business transformation with SAP S/4HANA on Azure and we will jointly share lessons from our journey and reference architectures at several SAPPHIRE NOW sessions.

Last November, we announced availability of SAP HANA Enterprise Cloud (HEC) with Azure, to offer customers an accelerated path to SAP HANA. We have seen several customers, such as Avianca and Aker, embark on their business transformation by leveraging the best of both worlds: SAP’s managed services and the most robust cloud infrastructure for SAP HANA on Azure.

“In Avianca, we are committed to providing the best customer experience through the use of digital technologies. We have the customer as the center of our strategy, and to do that, we are in a digital transformation of our customer experience and of our enterprise to provide our employees with the best tools to increase their productivity. Our new implementation of SAP HANA Enterprise Cloud on Microsoft Azure is a significant step forward in our enterprise digital transformation,” said Mr. Santiago Aldana Sanin, Chief Technology Officer, Chief Digital Officer at Avianca. “The SAP and Microsoft partnership continues to create powerful solutions that combine application management and product expertise from SAP with a global, trusted and intelligent cloud from Microsoft Azure. At Avianca, we are leveraging the strengths of both companies to further our journey to the cloud.”

Learn more about HEC with Azure.

Announcing SAP Cloud Platform general availability on Azure

The SAP Cloud Platform offers developers a choice to build their SAP applications and extensions using a PaaS development platform with integrated services. Today, I’m excited to announce that SAP Cloud Platform is now generally available on Azure. Developers can now deploy Cloud Foundry based SAP Cloud Platform on Azure in the West Europe region. We’re working with SAP to enable more regions in the months ahead.

“We are excited to announce general availability for SAP Cloud Platform on Azure. With our expanded partnership last November, we have been working on a number of co-engineering initiatives for the benefit of our mutual customers. SAP Cloud Platform offers the best of PaaS services for developers building apps around SAP and Azure offers world-class infrastructure for SAP solutions with cloud services for application development. With this choice, developers can spin up infrastructure on-demand in a global cloud co-located with other business apps, scale up as necessary in minutes boosting developer productivity and accelerating time to market for innovative applications with SAP solutions,” said Björn Goerke, CTO of SAP and President of SAP Cloud Platform.

Microsoft has a long history of working with developers through our .NET, Visual Studio, and Windows communities. With our focus on open source support on Azure for Java, Node.js, Red Hat, SUSE, Docker, Kubernetes, Redis, and PostgreSQL, to name a few, Azure offers the most developer friendly cloud according to the latest development-only public cloud platforms report from Forrester. We recently published an ABAP SDK for Azure on GitHub, to enable SAP developers to seamlessly connect into Azure services from SAP applications.

SAP application developers can now use a single cloud platform to co-locate application development next to SAP ERP data and boost development productivity with Azure’s Platform services such as Azure Event Hubs for event data processing and Azure Storage for unlimited inexpensive storage, while accessing SAP ERP data at low latencies for faster application performance. To get started with SAP Cloud Platform on Azure, sign up for a free trial account.

Today, we are also excited to announce another milestone in our partnership: SAP’s Hybris Commerce Cloud now runs on Azure as a software-as-a-service offering.

Customers embarking on digital transformation with SAP on Azure

With our broadest scale global offerings for SAP on Azure, we are seeing increased momentum with customers moving to the cloud. Here are some digital transformation stories from recent customer deployments.

  • Daimler AG: One of the world’s most successful automotive companies, Daimler AG is modernizing its purchasing system with a new SAP S/4HANA on Azure solution. The Azure-based approach is a foundational step in an overall digital transformation initiative to ensure agility and flexibility for the contracting and sourcing of its passenger cars, commercial vehicles, and International Procurement Services on a global basis.
  • Devon Energy: Fully committed to its digital transformation, Devon Energy is pioneering efforts to deploy SAP applications on Azure across all its systems. The Oklahoma City-based independent oil and natural gas exploration and production company is strategically partnering with Microsoft on multiple fronts such as AI, IT modernization, and SAP on Azure. Learn more about Devon’s digital transformation at their SAPPHIRE NOW session.
  • MMG: MMG, a global mining company, recognized that existing SAP infrastructure was approaching end-of-life, and that moving to the cloud would deliver the lowest long-term cost whilst providing flexibility to grow and enable new capabilities. Immediate benefits have been realized in relation to the overall performance of SAP, in particular, data loads into Business Warehouse.

For more on how you can accelerate your digital transformation with SAP on Azure, please check out our website.

In closing, at Microsoft, we are committed to ensuring Azure offers the best enterprise-grade option for all your SAP workload needs, whether you are ready to move to HANA now or later. I will be at SAPPHIRE NOW 2018 and encourage you to check our SAPPHIRE NOW event website for details on 40+ sessions and several demos that we’ll be showcasing at the event. Stop by and see us at the Microsoft booth #358 to learn more.

Accenture: Intelligent operations goal requires data backbone

A newly released report co-authored by Accenture and market researcher HfS reveals 80% of the global enterprises surveyed worry about digital disruption, but many of those companies lack the data backbone that could help them compete.

The report stated that large organizations are “concerned with disruption and competitive threats, especially from new digital-savvy entrants.” Indeed, digital disrupters such as Uber and Lyft in personal transportation, Airbnb in travel and hospitality, and various fintech startups have upset the established order in those industries. The Accenture-HfS report views “intelligent operations” as the remedy for the digital challenge and the key to bolstering customer experience. But the task of improving operations calls for organizations to pursue more than a few mild course corrections, according to Debbie Polishook, group chief executive at Accenture Operations, a business segment that includes business process and cloud services.

In the past, enterprises that encountered friction in their operations would tweak the errant process, add a few more people and take on a Lean Six Sigma project, she noted. Those steps, however, won’t suffice in the current business climate, Polishook said.

“Given what is happening today with the multichannel, with the various ways customers and employees can interact with you, making tiny tweaks is not going to get it done and meet the expectations of your stakeholders,” she said.

Graphic: Organizations struggle to leverage their data

Hard work ahead

The report, which surveyed 460 technology and services decision-makers in organizations with more than $3 billion in revenue, suggested professional services firms such as Accenture will have their work cut out for them as they prepare clients for the digital era.

The survey noted most enterprises struggle to harness data with an eye toward improving operations and achieving competitive advantage. The report stated “nearly 80% of respondents estimate that 50% [to] 90% of their data is unstructured” and largely inaccessible. A 2017 Accenture report also pointed to a data backbone deficit among corporations: More than 90% of the respondents to that survey said they struggle with data access.

In addition, half of the Accenture-HfS survey respondents acknowledged their back office isn’t keeping pace with front-office demands to support digital capabilities.

“Eighty percent of the organizations we talked to are concerned with digital disruption and are starting to note that their back office is not quite keeping up with their front office,” Polishook said. “The entire back office is the boat anchor holding them back.”

That lagging back office is at odds with enterprises’ desire to rapidly roll out products and services. An organization’s operations must be able to accommodate the demand for speed in the context of a digital, online and mobile world, Polishook said.

Enterprises need a “set of operations that can respond to these pressures,” she added. “Most companies are not there yet.”

One reason for the lag: Organizations tend to prioritize new product development and front office concerns when facing digital disruption. Back office systems such as procurement tend to languish.

“Naturally, as clients … are becoming disrupted in the market, they pay attention first to products and services,” Polishook said. “They are finding that is not enough.”

The report’s emphasis on revamped operations as critical to fending off digital disruption mirrors research from MIT Sloan’s Center for Information Systems Research. In a presentation in 2017, Jeanne Ross, principal research scientist at the center, identified a solid operational backbone as one of four keys to digital transformation. The other elements were strategic vision, a focus on customer engagement or digitized solutions and a plan for rearchitecting the business.

The path to intelligent operations

The Accenture-HfS report identified five essential components necessary for intelligent operations: innovative talent, a data backbone, applied intelligence, cloud computing and a “smart partnership ecosystem.”

As for innovative talent, the report cited “entrepreneurial drive, creativity and partnering ability” as enterprises’ top areas of talent focus.


“One of the most important pieces getting to intelligent operations is the talent,” Polishook said. She said organizations in the past looked to ERP or business process management to boost operations, but contended there is no technology silver bullet.

The data-driven backbone is becoming an important focus for large organizations. The report stated more than 85% of enterprises “are developing a data strategy around data aggregation, data lakes, or data curation, as well as mechanisms to turn data into insights and then actions.” Big data consulting is already a growing market for channel partners.

In the area of applied intelligence, about 90% of the enterprises surveyed identified automation, analytics and AI as the technologies that will emerge as the cornerstone of business and process transformation. Channel partners likewise count AI and the expanded use of automation tools such as robotic process automation among the top anticipated trends of 2018.

Meanwhile, more than 90% of large enterprises expect to realize “plug-and-play digital services, coupled with enterprise-grade security, via the cloud,” according to the Accenture-HfS report. And a like percentage of respondents viewed partnering with an ecosystem as important for exploiting market opportunities. The report said enterprises of the future will create “symbiotic relationships with startups, academia, technology providers and platform players.”

The path to achieving intelligent operations calls for considerable effort among all partners involved in the transformation.

“There is a lot of heavy lifting to be done,” Polishook said.

AI apps demand DevOps infrastructure automation

Artificial intelligence can offer enterprises a significant competitive advantage for some strategic applications. But enterprise IT shops will also require DevOps infrastructure automation to keep up with frequent iterations.

Most enterprise shops won’t host AI apps in-house, but those that do will turn to sophisticated app-level automation techniques to manage IT infrastructure. And any enterprise that wants to inject AI into its apps will require rapid application development and deployment — a process early practitioners call “DevOps on steroids.”

“When you’re developing your models, there’s a rapid iteration process,” said Michael Bishop, CTO of Alpha Vertex, a fintech startup in New York that specializes in AI data analysis of equities markets. “It’s DevOps on steroids because you’re trying to move quickly, and you may have thousands of features you’re trying to factor in and explore.”

DevOps principles of rapid iteration will be crucial to train AI algorithms and to make changes to applications based on the results of AI data analysis at Nationwide Mutual Insurance Co. The company, based in Columbus, Ohio, experiments with IBM’s Watson AI system to predict whether new approaches to the market will help it sell more insurance policies and to analyze data collected from monitoring devices in customers’ cars that help it set insurance rates.

“You’ve got to have APIs and microservices,” said Carmen DeArdo, technology director responsible for Nationwide’s software delivery pipeline. “You’ve got to deploy more frequently to respond to those feedback loops and the market.”

This puts greater pressure on IT ops to provide developers and data scientists with self-service access to an automated infrastructure. Nationwide relies on ChatOps for self-service, as chatbots limit how much developers switch between different interfaces for application development and infrastructure troubleshooting. ChatOps also allows developers to correct application problems before they enter a production environment.
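ChatOps of the sort Nationwide describes typically maps chat commands to automation hooks, so developers can query or act on infrastructure without leaving the chat window. A minimal, hypothetical dispatcher; the command names and responses are invented for illustration:

```python
def handle_command(text: str, registry) -> str:
    """Parse a chat message like 'deploy app1 staging' and run the matching hook."""
    parts = text.strip().split()
    if not parts or parts[0] not in registry:
        return "unknown command: " + (parts[0] if parts else "")
    return registry[parts[0]](*parts[1:])

# Hypothetical hooks a bot might expose to developers via self-service.
registry = {
    "status": lambda app: f"{app}: healthy",
    "deploy": lambda app, env: f"deploying {app} to {env}",
}
```

In practice the hooks would call deployment pipelines and monitoring APIs rather than return canned strings.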

AI apps push the limits of infrastructure automation

Enterprise IT pros who support AI apps quickly find that no human can keep up with the required rapid pace of changes to infrastructure. Moreover, large organizations must deploy many different AI algorithms against their data sets to get a good return on investment, said Michael Dobrovolsky, executive director of the machine learning practice and global development at financial services giant Morgan Stanley in New York.

“The only way to make AI profitable from an enterprise point of view is to do it at scale; we’re talking hundreds of models,” Dobrovolsky said. “They all have different lifecycles and iteration [requirements], so you need a way to deploy it and monitor it all. And that is the biggest challenge right now.”
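Operating hundreds of models with different lifecycles usually means tracking, per model, when it was last retrained and how often it needs to iterate. A hypothetical sketch of that bookkeeping; the field names and cadences are assumptions, not Morgan Stanley's system:

```python
def models_due_for_retraining(models, today):
    """Return names of models whose retraining cadence has elapsed.

    models: dict mapping name -> {"last_trained": day_number, "cadence_days": int}
    today: current day number on the same scale as last_trained
    """
    return sorted(
        name for name, m in models.items()
        if today - m["last_trained"] >= m["cadence_days"]
    )
```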

Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, has laid the groundwork for AI apps with infrastructure automation that pairs Apache Mesos for container orchestration with Apache Aurora, an open source utility that allows applications to automatically request infrastructure resources.


“Long term, the goal is to put all the workload management in the apps themselves, so that they manage all the scheduling,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “I’m more interested in two-level scheduling [than container orchestration], and I believe managing tasks in that way is the future.”
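Two-level scheduling of the kind Allen describes splits the decision in two: the cluster advertises resource offers, and each application's own scheduler accepts or declines them. A toy simulation of the application side; this is an illustration of the pattern, not Mesos or Aurora code:

```python
def app_scheduler(task_cpu, task_mem, offers):
    """Application-level scheduler: accept the first offer that fits the task.

    offers: list of (offer_id, cpus, mem_mb) tuples advertised by the cluster.
    Returns the accepted offer_id, or None if every offer is declined.
    """
    for offer_id, cpus, mem in offers:
        if cpus >= task_cpu and mem >= task_mem:
            return offer_id
    return None
```

The cluster scheduler stays application-agnostic; placement logic like this lives with the app, which is what lets the app "manage all the scheduling."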

Analysts agreed application-driven infrastructure automation will be ideal to support AI apps.

“The infrastructure framework for this will be more and more automated, and the infrastructure will handle all the data preparation and ingestion, algorithm selection, containerization, and publishing of AI capabilities into different target environments,” said James Kobielus, analyst with Wikibon.

Automated, end-to-end, continuous release cycles are a central focus for vendors, Kobielus said. Tools from companies such as Algorithmia can automate the selection of back-end hardware at the application level, as can services such as Amazon Web Services’ (AWS) SageMaker. Some new infrastructure automation tools also provide governance features such as audit trails on the development of AI algorithms and the decisions they make, which will be crucial for large enterprises.
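Back-end hardware selection at the application level, as described above, can be as simple as matching a model's declared resource profile to an instance class. A hypothetical sketch; the rules and class names are invented for illustration, not any vendor's logic:

```python
def pick_backend(model):
    """Choose a back-end class from a model's declared needs (illustrative rules)."""
    if model.get("needs_gpu"):
        return "gpu"
    if model.get("memory_gb", 0) > 64:
        return "highmem"
    return "standard"
```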

Early AI adopters favor containers and serverless tech

Until app-based automation becomes more common, companies that work with AI apps will turn to DevOps infrastructure automation based on containers and serverless technologies.

Veritone, which provides AI apps as a service to large customers such as CBS Radio, uses Iron Functions, now the basis for Oracle’s Fn serverless product, to orchestrate containers. The company, based in Costa Mesa, Calif., evaluated AWS Lambda a few years ago, but saw Iron Functions as a more suitable combination of functions as a service and containers. With Iron Functions, containers can process more than one event at a time, and functions can attach to a specific container, rather than exist simply as snippets of code.

“If you have apps like TensorFlow or things that require libraries, like [optical character recognition], where typically you have to use Tesseract and compile C libraries, you can’t put that into functions AWS Lambda has,” said Al Brown, senior vice president of engineering for Veritone. “You need a container that has the whole environment.”
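The distinction Brown draws — a long-lived container draining many events versus one cold-started invocation per event — can be simulated in a few lines. A toy sketch of the container-attached model, not Iron Functions code:

```python
import queue

def run_container(events, handler):
    """Simulate a long-lived function container draining a queue of events,
    rather than spinning up one invocation per event."""
    results = []
    while True:
        try:
            event = events.get_nowait()
        except queue.Empty:
            return results  # queue drained; container would stay warm for more
        results.append(handler(event))
```

Because the container persists, heavyweight dependencies (native libraries like Tesseract, or a loaded TensorFlow model) are paid for once rather than on every event.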

Veritone also prefers this approach to Kubernetes and Mesos, which focus on container orchestration only.

“I’ve used Kubernetes and Mesos, and they’ve provided a lot of the building blocks,” Brown said. “But functions let developers focus on code and standards and scale it without having to worry about [infrastructure].”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Midmarket enterprises push UCaaS platform adoption

Cloud unified communications adoption is growing among midmarket enterprises as they look to improve employee communication, productivity and collaboration. Cloud offerings, too, are evolving to meet midmarket enterprise needs, according to a Gartner Inc. report on North American midmarket unified communications as a service (UCaaS).

Gartner, a market research firm based in Stamford, Conn., defines the midmarket as enterprises with 100 to 999 employees and revenue between $50 million and $1 billion. UCaaS spending in the midmarket segment reached nearly $1.5 billion in 2017 and is expected to hit almost $3 billion by 2021, according to the report. Midmarket UCaaS providers include vendors ranked in Gartner’s UCaaS Magic Quadrant report. The latest Gartner UCaaS midmarket report, however, examined North American-focused providers not ranked in the larger Magic Quadrant report, such as CenturyLink, Jive and Vonage.

But before deploying a UCaaS platform, midmarket IT decision-makers must evaluate the broader business requirements that go beyond communication and collaboration.

Evaluating the cost of a UCaaS platform

The most significant challenge facing midmarket IT planners over the next 12 months is budget constraints, according to the report. These constraints play a major role in midmarket UC decisions, said Megan Fernandez, Gartner analyst and co-author of the report.

“While UCaaS solutions are not always less expensive than premises-based solutions, the ability to acquire elastic services with straightforward costs is useful for many midsize enterprises,” she said.

Many midmarket enterprises are looking to acquire UCaaS functions as a bundled service rather than stand-alone functions, according to the report. Bundles can be more cost-effective as prices are based on a set of features rather than a single UC application. Other enterprises will acquire UCaaS through a freemium model, which offers basic voice and conferencing functionality.

“We tend to see freemium services coming into play when organizations are trying new services,” Fernandez said. “Users might access the service and determine if the freemium capabilities will suffice for their business needs.”

For some enterprises, this basic functionality will meet business requirements and offer cost savings. But other enterprises will upgrade to a paid UCaaS platform after using the freemium model to test services.

Graphic: Enterprises are putting more emphasis on cloud communications services.

Addressing multiple network options

Midmarket enterprises have a variety of network configurations depending on the number of sites and access to fiber. As a result, UCaaS providers offer multiple WAN strategies to connect to enterprises. Midmarket IT planners should ensure UCaaS providers align with their companies’ preferred networking approach, Fernandez said.

Enterprises looking to keep network costs down may connect to a UCaaS platform via DSL or cable modem broadband. Enterprises with stricter voice quality requirements may pay more for an IP MPLS connection, according to the report. Software-defined WAN (SD-WAN) is also a growing trend for communications infrastructure. 

“We expect SD-WAN to be utilized in segments with requirements for high QoS,” Fernandez said. “We tend to see more requirements for high performance in certain industries like healthcare and financial services.”
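The QoS-driven choice Fernandez describes — broadband where cost matters, MPLS or SD-WAN where voice quality does — amounts to selecting the cheapest path that still meets the quality targets. A simplified sketch; the metrics and thresholds are illustrative assumptions:

```python
def pick_path(paths, max_latency_ms, max_loss_pct):
    """SD-WAN-style policy: cheapest path that meets the voice QoS targets.

    paths: list of dicts with keys name, latency_ms, loss_pct, cost.
    Returns the chosen path name, or None if no path qualifies.
    """
    ok = [p for p in paths
          if p["latency_ms"] <= max_latency_ms and p["loss_pct"] <= max_loss_pct]
    return min(ok, key=lambda p: p["cost"])["name"] if ok else None
```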

Team collaboration’s influence and user preferences

Team collaboration, also referred to as workstream collaboration, offers capabilities similar to those of UCaaS platforms, such as voice, video and messaging, but its growing popularity won't affect how enterprises buy UCaaS just yet.

Fernandez said team collaboration is not a primary factor influencing UCaaS buying decisions as team collaboration is still acquired at the departmental or team level. But buying decisions could shift as the benefits of team-oriented management become more widely understood, she said.

“This means we’ll increasingly see more overlap in the UCaaS and workstream collaboration solution decisions in the future,” Fernandez said.

Intuitive user interfaces have also become an important factor in the UCaaS selection process as ease of use will affect user adoption of a UCaaS platform. According to the report, providers are addressing ease of use demands by trying to improve access to features, embedding AI functionality and enhancing interoperability among UC services.