Tag Archives: Cloud

Industrial cloud moving from public to hybrid systems

The industrial cloud runs largely in the public domain currently, but that may be about to change.

Over the next few years, manufacturers will move industrial cloud deployments from the public cloud to hybrid cloud systems, according to a new report from ABI Research, an Oyster Bay, N.Y., research firm that specializes in industrial technologies. Public cloud accounts for almost half of the industrial IoT market share in 2018 (49%), while hybrid cloud systems have just 20%. But by 2023 this script will flip, according to the report, with hybrid cloud systems making up 52% of the IIoT market and public cloud just 25%.

The U.S.-based report surveyed vice presidents and other high-level decision-makers from manufacturing firms of various types and sizes, according to Ryan Martin, ABI Research principal analyst. The report focused on the industrial IoT cloud and on manufacturers' predisposition to technology adoption.

According to the report, the industrial cloud encompasses the entirety of the manufacturing process and unifies the digital supply chain. This unification can lead to a number of benefits. Companies can streamline internal and external operations through digital business, product, manufacturing, asset and logistics processes; use data and the insights generated to enable new services; and improve control over environmental, health and safety issues.

Changing needs will drive move to hybrid systems

Historically, most data and applications in the IoT resided on premises, often in proprietary systems, but as IoT exploded the public cloud became more prevalent, according to Martin. 

The cloud, whether public or private, made sense because it offers a centralized location for storing large amounts of data and computing power at a reasonable cost, but organizational needs are changing, Martin said. Manufacturers are finding that a hybrid approach makes sense because it’s better to perform analytics on the device or activity that’s generating the data, such as equipment at a remote site, than to perform analytics in the cloud.


“There’s a desire to keep certain system information on site, and it makes a lot of business sense to do that, because you don’t want to be shipping data to and from the cloud every time you need to perform a query or a search because you’re paying for that processing power, as well as the bandwidth,” Martin said. “Instead it’s better to ship the code to the data for processing then shoot the results back to the edge. The heavy lifting for the analytics, primarily for machine learning types of applications, would happen in the cloud, and then the inferences or insights would be sent to a more localized server or gateway.”
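As a rough illustration of that division of labor (it is a sketch, not something drawn from the ABI report), the following Python example assumes nothing beyond NumPy and uses made-up sensor data with a deliberately trivial threshold "model": the compute-heavy fitting happens in the cloud, only the compact model artifact is shipped to the site, and only the resulting insights travel back upstream.

import numpy as np

# --- Cloud side: the heavy lifting (model fitting) happens here ---
def train_anomaly_model(history: np.ndarray) -> dict:
    """Fit a trivial threshold 'model' on archived sensor telemetry."""
    return {"mean": float(history.mean()), "std": float(history.std())}

# --- Edge side: only the small model artifact is shipped to the site ---
def detect_anomalies(model: dict, readings: np.ndarray) -> list:
    """Run inference locally; only flagged events leave the site."""
    z = np.abs(readings - model["mean"]) / (model["std"] + 1e-9)
    return [(int(i), float(readings[i])) for i in np.where(z > 3.0)[0]]

history = np.random.normal(50.0, 2.0, 10_000)   # archived telemetry in the cloud
model = train_anomaly_model(history)            # training stays in the cloud

readings = np.append(np.random.normal(50.0, 2.0, 99), 80.0)  # live plant data
print(detect_anomalies(model, readings))        # only the insights go upstream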

Providers like AWS and Microsoft Azure will likely carry the bulk of the cloud load, according to Martin, but several vendors will be prominent in providing services for the industrial cloud.

“There will be participation from companies like SAP, as well as more traditional industrial organizations like ABB, Siemens, and so forth,” Martin said. “Then we have companies like PTC, which has recently partnered with Rockwell Automation, doing aggregation and integration, and activation to the ThingWorx platform.”

The industrial cloud will increasingly move from public cloud to hybrid cloud systems.
The hybrid cloud market for IIoT will double by 2023.

Transformation not disruption

However, companies face challenges as they move to implement the new technologies and systems that comprise the hybrid industrial cloud. The most prominent challenge is to implement the changes without interrupting current operations, Martin said.

“It will be a challenge to bring all these components like AI, machine learning and robotics together, because their lifecycles operate on different cadences and have different stakeholders in different parts of the value chain,” Martin said. “Also they’re producing heterogeneous data, so there needs to be normalization of mass proportion, not just for the data, but for the application providers, partners and supplier networks to make this all work.”

The overall strategy should be about incremental change that focuses on transformation over disruption, he explained.

“This is analogous to change management in business, but the parallel for IIoT providers is that these markets in manufacturing favor those suppliers whose hardware, software and services can be acquired incrementally with minimal disruption to existing operations,” he said. “We refer to this as minimal viable change. The goal should be business transformation; it’s not disruption.”

Avaya earnings show cloud, recurring revenue growth

Avaya hit revenue targets, increased cloud sales and added customers in its second full quarter as a public company — welcome news for customers and partners anxious for proof that the company is regaining its financial footing following last year’s bankruptcy.

Avaya reported revenue of $755 million in the third quarter of 2018 — down from $757 million last quarter, but within the vendor’s previously announced targets. When excluding sales from the networking division, which Avaya sold last year, adjusted revenue was 1% higher than during the third quarter of 2017.

To keep pace with competitors like Microsoft and Cisco, Avaya is looking to reduce its dependence on large, one-time hardware purchases by selling more monthly cloud subscriptions. This transition can make it difficult to show positive quarter-over-quarter and year-over-year growth in the short term.

Recurring revenue accounted for 59% of Avaya’s adjusted earnings in the third quarter — up from 58% the previous quarter. Cloud revenue represented just 11% of the quarter’s total, but monthly recurring revenue from cloud sales increased by 43% in the midmarket and 107% in the enterprise market, compared with last quarter.

Avaya reported an $88 million net loss in the third quarter. Still, the company’s operations netted $83 million in cash, which is a more critical financial indicator, in this case, than net income, said Hamed Khorsand, analyst at BWS Financial Inc., based in Woodland Hills, Calif.

“This is a company that’s still in transition as far as their accounting goes, with the bankruptcy proceedings,” Khorsand said. “[The net cash flow] actually tells you that the company is adding cash to its balance sheet.”

Also during the third quarter, Avaya regained top ratings in Gartner’s yearly rankings of unified communications (UC) and contact center infrastructure vendors. Avaya’s one-year absence from the leadership quadrant in the Gartner report probably slowed growth, Khorsand said, because C-suite executives place value in those standings.

Avaya’s stock closed up 3.61%, at $20.68 per share, following the Avaya earnings report on Thursday.

Avaya earnings report highlights product growth

The Avaya earnings report showed the company added 1,700 customers worldwide during the third quarter. It also launched and refreshed several products, including an updated workforce optimization suite for contact centers and a new version of Avaya IP Office, its UC offering for small and midsize businesses.

The product releases demonstrate that Avaya continued to invest in research and development, even as it spent most of 2017 engaged in Chapter 11 bankruptcy proceedings, said Zeus Kerravala, principal analyst at ZK Research in Westminster, Mass.

“As long as we continue to see this steady stream of new products coming out, I think it should give customers confidence,” Kerravala said. “Channel partners tend to live on new products, as well.”

The bankruptcy allowed Avaya to cut its debt in half to a level it can afford based on current revenue. But years of underinvestment in product continue to haunt the vendor, as it tries to play catch-up with rivals Cisco and Microsoft, which analysts generally agreed have pulled ahead of all other vendors in the UC market.

Avaya acquired cloud contact center vendor Spoken Communications earlier this year, gaining a multi-tenant public cloud offering. Avaya plans to use the same technology to power a UC-as-a-service product in the future.

“We are investing significantly in people and technology, investing more on technology in the last two quarters than we did in all of fiscal 2017,” said Jim Chirico, CEO at Avaya.

Avaya is expecting to bring in adjusted revenue between $760 and $780 million in the fourth quarter, which would bring the fiscal year’s total to a little more than $3 billion.

Oracle Autonomous Database Cloud gets transaction processing

Oracle is now offering transaction processing capabilities as part of its Autonomous Database Cloud software platform, which is designed to automate database administration tasks for Oracle users in the cloud.

The vendor launched a new Oracle Autonomous Transaction Processing (ATP) cloud service, expanding on the data warehouse service that debuted in March as the first Autonomous Database Cloud offering. The addition of Oracle ATP enables the automated system to handle both transaction and analytical processing workloads, Oracle executive chairman and CTO Larry Ellison said during a launch event that was streamed live.

Ellison reiterated Autonomous Database Cloud’s primary selling point: that automated administration functions driven partly by machine learning algorithms eliminate the need for hands-on configuration, tuning and patching work by database administrators (DBAs).

That frees up DBAs to focus on more productive data management tasks and could lead to lower labor costs for customers, he claimed.

“There’s nothing to learn, and there’s nothing to do,” said Ellison, who also repeated previous jabs at cloud platforms market leader Amazon Web Services (AWS) and previewed the upcoming 19c release of the flagship Oracle Database software that underlies Autonomous Database Cloud.

Cloud success still a test for Oracle

However, while Ellison taunted Amazon for its longtime reliance on Oracle databases and expressed skepticism about his competitor’s ability to execute a reported plan to completely move off of them by 2020, Oracle lags behind not only AWS but also Microsoft and Google in the ranks of cloud platform vendors.


“Make no mistake, Oracle still has to prove themselves in the cloud,” Gartner database analyst Adam Ronthal said in an email after the announcement.

And Oracle isn’t starting from a position of strength. Overall, the technology lineup that Oracle currently offers on its namesake cloud doesn’t match the breadth of what users can get on AWS, Microsoft Azure and the Google Cloud Platform, Ronthal said.

But Oracle ATP “helps close that gap, at least in the data management space,” he said.

Together, ATP and the Autonomous Data Warehouse (ADW) service that preceded it “are Oracle coming out to the world with products that are built and architected for cloud,” with promises of scalability, elasticity and a low operational footprint for users, Ronthal said.

Larry Ellison, Oracle’s executive chairman and CTO, introduces the Autonomous Transaction Processing cloud database service.

The Autonomous Database Cloud services are only available on the Oracle Cloud, and Oracle also limits other key data management technologies to its own cloud platform; for example, it doesn’t offer technical support for its Oracle Real Application Clusters software on other clouds.

In addition, Ronthal noted that it’s typically more expensive to run regular Oracle databases on AWS and Azure than on Oracle’s cloud because of software licensing changes Oracle made last year.

“Oracle is doing everything it can to make its cloud the most attractive place to run Oracle databases,” Ronthal said.

But now the company needs to build some momentum by convincing customers to adopt Oracle ATP and ADW, he added — even if that’s likely to primarily involve existing Oracle users migrating to the cloud services, as opposed to new customers.

Oracle’s autonomous services get a look

Clothing retailer Gap Inc. is a case in point, although the San Francisco company’s use of Oracle databases could grow as part of a plan to move more of its data processing operations to the Oracle Cloud.

For example, Gap is working with Oracle on a proof-of-concept project to convert an on-premises Teradata data warehouse to Oracle ADW, said F.S. Nooruddin, the retailer’s chief IT architect.

That’s a first step in the potential consolidation of various data warehouses into the ADW service, he said. Gap also plans to look closely at Oracle ATP for possible transaction processing uses, according to Nooruddin, who took part in a customer panel discussion during the ATP launch event.

Gap already runs Oracle’s retail applications and Hyperion enterprise performance management software in the cloud.

As the retailer’s use of the cloud expands, the Autonomous Database Cloud technologies could help ensure that all of its Oracle database instances, from test and development environments to production systems, are properly patched and secured, Nooruddin said.

Ellison said Oracle ATP also automatically scales the transaction processing infrastructure allotted to users up and down as workloads fluctuate, so they can meet spikes in demand without paying for compute, network and storage resources they don’t need.

That capability appeals to Gap, too, said Connie Santilli, the company’s vice president of enterprise systems and strategy. Gap’s transaction processing and downstream reporting workloads increase sharply during the holiday shopping season — a common occurrence in the retail industry. But Santilli said Gap had to build its on-premises IT architecture to handle the peak performance level, with less flexibility for downsizing systems when the full processing resources aren’t required.

Cloud costs and considerations for Oracle users

In taking aim at AWS, Ellison again said Oracle would guarantee a 50% reduction in infrastructure costs to Amazon users that migrate to Autonomous Database Cloud — a vow he first made at the Oracle OpenWorld 2017 conference.

Meanwhile, Ellison said Oracle customers can use existing on-premises database licenses to make the switch to Oracle ATP and ADW, avoiding the need to pay for the software again. In such cases, users would continue to pay their current annual support fees plus the cost of their cloud infrastructure usage.

The ATP and ADW services layer the automation capabilities Oracle developed on top of Oracle Database 18c, which Oracle released in February as part of a new plan to update the database software annually. During the ATP launch, Ellison disclosed some details about the planned 19c release and the capabilities it will add to Autonomous Database Cloud.

When databases are upgraded to the 19c-based cloud services, the software will automatically check built-in query execution plans and retain the existing ones if they’ll run faster than new ones, Ellison said. That eliminates the need for DBAs to do regression testing on the plans themselves, he added.

Other new features coming with Oracle Database 19c include the ability to configure Oracle ATP and ADW on dedicated Exadata systems in the Oracle Cloud instead of sharing a multitenant pool of the machines, and to deploy the cloud services in on-premises data centers through Oracle’s Cloud@Customer program.

Oracle’s official roadmap shows 19c becoming available in January 2019, but Ellison claimed that was “worst case” and said the new release may be out before the end of this year.

Shadow IT channel outpaces traditional partners

The traditional channel is fading, while a new shadow IT channel consisting of cloud consultants, tech-oriented professional services firms and startups is on the rise.

That’s the analysis of Jay McBain, principal analyst for global channels at Forrester Research. McBain, speaking at CompTIA’s ChannelCon 2018 conference, said this emerging shadow channel is adding thousands of new companies to the partner ecosystem, while conventional resellers of hardware, software and services are slowly dwindling in number.

“The traditional channel isn’t dead; it isn’t dying, but it is declining,” he said.

McBain said the population of traditional channel players has dropped 36% since the 2008 recession. He also noted that 40% of channel partner owners plan to retire by 2024, noting that the average age of an owner or principal is 58.

In contrast, shadow IT channel companies are rapidly growing in number. McBain cited several categories of such companies. He termed one group everything-as-a-service (XaaS) ecosystem consultants. Those companies help enterprises install, implement and secure software-as-a-service and infrastructure-as-a-service platforms.

According to McBain, XaaS ecosystem consultants are line-of-business experts who understand cloud-driven best practices and typically partner with a handful of vendors, such as Salesforce or Amazon Web Services.

McBain pointed to the surge in growth in the AWS ecosystem. AWS said it added 10,000 new AWS Partner Network companies in 2017. He said AWS could have a total of 100,000 companies in its partner ecosystem in the next 18 months.

“People are flooding into these ecosystems,” he said.

Industry-based professional services firms are another aspect of the shadow IT channel. Accounting firms, digital agencies, architectural companies and law offices are moving into IT services to support their clients in industries undergoing digital disruption.

“They’re technology companies,” said McBain, who noted that there are about as many certified public accountant (CPA) firms as there are value-added resellers.


While the depth of the CPA-as-IT-provider shift may be a recent development, the largest accounting firms were rolling out IT strategy consulting and systems integration services 30 years ago.

Other participants in the shadow IT channel include ISVs, an area also seeing explosive growth. McBain estimated 100,000 ISVs exist today worldwide, compared with 10,000 software houses a decade ago. He predicted the number of ISVs will grow to 1 million by 2027, a rise driven by customers’ demand for increasing levels of specialization.

Large IT vendors such as Cisco help fuel the ISV trend. As Cisco pursues a software-led strategy, the company is encouraging its traditional channel partners to develop software and is cultivating expanded ties with ISVs.

In addition, McBain identified born-in-the-cloud firms that focus on back-end project-based services as another example of shadow channel companies. He also said he sees the potential of companies stemming from the startup community as channel disrupters.

The traditional partner response to the shadow IT channel trend could include partnering or merging with the new channel players, McBain suggested, noting that channel partners may be better at such things as business continuity than a digital marketing firm. Such skill set combinations are already coalescing in the emergence of digital consulting firms, which combine elements of traditional systems integration and digital marketing.

Google’s Edge TPU breaks model inferencing out of the cloud

Google is bringing tensor processing units to the edge. At the Google Cloud Next conference in San Francisco, the company introduced Edge TPU, an application-specific integrated circuit designed to run TensorFlow Lite machine learning models on mobile and embedded devices.  

The announcement is indicative of both the red-hot AI hardware market, as well as the growing influence machine learning is having on the internet of things and wireless devices. But the Edge TPU also gives Google a more comprehensive edge-to-cloud AI stack to compete against the likes of Microsoft, Amazon and IBM as it looks to attract a new generation of application developers.

Analysts called the move a good one. “This fills in a big gap that Google had,” said Forrester’s Mike Gualtieri.

Spotlight on model inferencing

Google’s cloud environment is a fertile ground for training AI models, a nontrivial process that requires enormous amounts of data and processing power. Once a model has been trained, it’s put into production, where it performs what’s known as inferencing: using its training to make predictions.


A growing trend is to push inferencing out to edge devices such as wireless thermostats or smart parking meters that don’t need a lot of power or even connectivity to the cloud, according to David Schatsky, managing director at Deloitte LLP. “These applications will avoid the latency that can be present when shuttling data back and forth to the cloud because they’ll be able to perform inferencing locally and on the device,” he said.

But Google customers who wanted to embed their models into edge devices had to turn to another provider — Nvidia or Intel — for that kind of functionality. Until now. The Edge TPU will give Google customers a more seamless environment to train machine learning models in its cloud and then deploy them into production at the edge.
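The rough shape of that hand-off, training with full TensorFlow in the cloud and running the converted model locally, looks like the sketch below. It is a generic TensorFlow Lite example rather than Edge TPU-specific code; actually targeting the Edge TPU additionally requires a quantized model and Google's Edge TPU compiler and runtime, which are omitted here, and the tiny model and random data are placeholders.

import numpy as np
import tensorflow as tf

# Cloud side: train (or load) a model with full TensorFlow.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(np.random.rand(256, 4), np.random.randint(0, 2, 256), epochs=1, verbose=0)

# Convert to a compact TensorFlow Lite model for deployment on the device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# Edge side: run inference locally with the lightweight interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # prediction computed on-device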


It also appears to be a nod to the burgeoning relationship between AI and IoT. According to Schatsky, venture capital funding in AI-focused IoT startups outpaced funding to IoT startups overall last year. “AI is so useful in deriving insight from IoT data that it may soon become rare to find an IoT application that doesn’t use AI,” he said.

A competitive stack


The Edge TPU is in the same vein as an announcement Microsoft made last year with Project Brainwave, a deep learning platform that converts trained models to run more efficiently on Intel’s Field-Programmable Gate Arrays than on GPUs, according to Gualtieri. “There is a fundamental difference in training a model versus inferencing a model,” he said. “Google recognizes this. They’re not just saying this is a TPU and you can run it on the edge. No, they’re saying this is a fundamentally new chip designed specifically for inferencing.”

Indeed, Gualtieri said, the Edge TPU makes Google more competitive with Microsoft, Amazon and even IBM, all of which made moves to differentiate between model training and model inferencing sooner. “This is an effort, I believe, for Google to make its cloud more attractive, oddly by saying, well, yes, we have the cloud, but we also have the edge — the non-cloud,” he said.

James Kobielus, lead analyst at SiliconAngle Wikibon, also sees the Edge TPU as a strategic move. He called the Edge TPU an example of how the internet giant is creating a complete AI stack of hardware, software and tools for its customers while adding a force multiplier to compete against other vendors in the space.


“Google is making a strong play to build a comprehensive application development and services environment in the cloud to reach out to partners, developers and so forth to give them the tools they need to build the new generation of apps,” he said.

Kobielus highlighted the introduction of the Edge TPU software development kit as another example of how Google is planning to compete. The dev kit, which is still in beta and available to only those who apply for access, shows a “great effort” to convince developers to build their apps on the Google cloud and to catch up to Amazon and Microsoft, both of which have a strong developer orientation, he said. “They needed to do this — to reach out to the developer market now while the iron is hot,” he said.

What is the Google AI stack missing? It’s too soon to tell, both Kobielus and Gualtieri said. But with innovation in AI happening at breakneck speed, companies should see this as a part of an evolution and not an end point.

“Different applications are going to require even different chips,” Gualtieri said. “Google is not behind on this. It’s just what’s going to happen because there may be very data-heavy applications or power requirements on smaller devices. So I would expect a whole bunch of different chips to come out. Is that a gap? I would say no because of maturity in this industry.”

Google Cloud security adds data regions and Titan security keys

Multiple improvements for Google Cloud security aim to help users protect data through better access management, more data security options and greater transparency.

More than half of the security features announced are either in beta or part of the G Suite Early Adopter Program, but in total the additions should offer better control and transparency for users.

The biggest improvement in Google Cloud security comes in identity and access management. Google has developed its own Titan multi-factor physical security key — similar to a YubiKey — to protect users against phishing attacks. Google previously reported that there have been no confirmed account takeovers in more than one year since it required all employees to use physical security keys, and according to a Google spokesperson, Titan keys have already been available to employees as one such option.

The Titan security keys are FIDO keys that include “firmware developed by Google to verify its integrity.” Google announced it is offering two models of Titan keys for Cloud users: one based on USB and NFC and one that uses Bluetooth in order to support iOS devices as well. The keys are available now to Cloud customers and will come to the Google Store soon. Pricing details have not been released.

“The Titan security key provides a phishing-resistant second factor of authentication. Typically, our customers will place it in front of high-value users or content administrators and root users, the compromise of those would be much more damaging to an enterprise customer … or specific applications which contain sensitive data, or sort of the crown jewels of corporate environments,” Jess Leroy, director of product management for Google Cloud, told reporters in a briefing. “It’s built with a secure element, which includes firmware that we built ourselves, and it provides a ton of security with very little interaction and effort on the part of the user.”

However, Stina Ehrensvard, CEO and founder of Yubico, the manufacturer of YubiKey two-factor authentication keys, headquartered in Palo Alto, Calif., noted in a blog post that her company does not see Bluetooth as a good option for a physical security key.

“Google’s offering includes a Bluetooth (BLE) capable key. While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability,” Ehrensvard wrote. “BLE does not provide the security assurance levels of NFC and USB, and requires batteries and pairing that offer a poor user experience.”

In addition to the Titan keys, Google Cloud security will have improved access management with the implementation of the context-aware access approach Google used in its BeyondCorp network setups.

“Context-aware access allows organizations to define and enforce granular access to [Google Cloud Platform] APIs, resources, G Suite, and third-party SaaS apps based on a user’s identity, location, and the context of their request. This increases your security posture while decreasing complexity for your users, giving them the ability to seamlessly log on to apps from anywhere and any device,” Jennifer Lin, director of product management for Google Cloud, wrote in the Google Cloud security announcement post. “Context-aware access capabilities are available for select customers using VPC Service Controls, and are coming soon for customers using Cloud Identity and Access Management (IAM), Cloud Identity-Aware Proxy (IAP), and Cloud Identity.”

Data transparency and control

New features also aim to improve Google Cloud security visibility and control over data. Access Transparency will offer users a “near real-time log” of the actions taken by administrators, including Google engineers.

“Inability to audit cloud provider accesses is often a barrier to moving to the cloud. Without visibility into the actions of cloud provider administrators, traditional security processes cannot be replicated,” Google wrote in documentation. “Access Transparency enables that verification, bringing your audit controls closer to what you can expect on premises.”

In terms of Google Cloud security and control over data, Google will also now allow customers to decide in what region data will be stored. Google described this feature as allowing multinational organizations to protect their data with geo-redundancy, but in a way that organizations can follow any requirements regarding where in the world data is stored.

A Google spokesperson noted via email that the onus for ensuring that regional data storage complies with local laws would be on the individual organizations.

Other Google Cloud security improvements

Google announced several features that are still in beta, including Shielded Virtual Machines (VMs), which will allow users to monitor and react to changes in the VM to protect against tampering; Binary Authorization, which will force signature validation when deploying container images; Container Registry Vulnerability Scanning, which will automatically scan Ubuntu, Debian and Alpine images to prevent deploying images that contain any vulnerable packages; geo-based access control for Cloud Armor, which helps defend users against DDoS attacks; and Cloud HSM, a managed cloud-hosted hardware security module (HSM) service.

Juniper preps 400 GbE across PTX, MX and QFX hardware

Juniper plans to add 400 Gigabit Ethernet across its PTX and MX routers and QFX switches as internet companies and cloud providers gear up for the higher throughput needed to meet global demand from subscribers.

Juniper said this week it would roll out higher speed ports in the three product series over the next 12 months. The schedule is in line with analysts’ predictions that vendors would start shipping 400 GbE devices this year.

Juniper will market the devices for several uses, including a data center backbone, internet peering, data center interconnect, a metro core, telecommunication services and a hyperscale data center IP fabric.

The announcement follows by a month Juniper’s release of the 400 GbE-capable Penta, a 16 nanometer (nm) packet-forwarding chipset that consumes considerably less energy than Juniper’s other silicon. Juniper designed the Penta for carriers rearchitecting their data centers to deliver 5G services.

Penta is destined for some of the new hardware, which will help Juniper meet carrier demand for more speed, said Eric Hanselman, an analyst at New York-based 451 Research.

“Juniper has such a strong base with service providers and network operators and they’re already seeing strong pressure for higher capacity,” Hanselman said. “Getting the Penta silicon out into the field on new platforms could help to move Juniper forward [in the market].”

The upcoming hardware will also use a next-generation ExpressPlus chipset and Q5 application-specific integrated circuit. The Juniper silicon will provide better telemetry and support for VXLAN and EVPN, the company said.

Cloud developers use EVPN, VXLAN and the Border Gateway Protocol to set up a multi-tenancy network architecture that supports multiple customers. The design isolates customers so data and malware can’t travel between them.
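The following sketch is purely conceptual and is not Juniper configuration: it uses the scapy packet library (assumed to be installed) to build a VXLAN-encapsulated frame, showing how the 24-bit VNI carried in the VXLAN header keeps one tenant's overlay traffic separate from another's on shared infrastructure. The addresses and VNI value are made up.

from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

TENANT_A_VNI = 5001  # hypothetical tenant identifier

# Outer header carries traffic between VXLAN tunnel endpoints (VTEPs).
outer = Ether() / IP(src="10.0.0.1", dst="10.0.0.2") / UDP(dport=4789)
# Inner frame is the tenant's own Ethernet/IP traffic.
inner = Ether() / IP(src="192.168.1.10", dst="192.168.1.20")

frame = outer / VXLAN(vni=TENANT_A_VNI) / inner
frame.show()  # the VNI is what separates tenant A's overlay from tenant B's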

For the IP transport layer, Juniper plans to introduce in the second half of the year the 3-RU PTX10003 Packet Transport Router for the backbone, internet peering and data center interconnect applications. The hardware supports 100 and 400 GbE and plugs into an existing multirate QSFP-DD fiber connector system for a more straightforward speed upgrade. The Juniper system provides MACSec support for 160 100 GbE interfaces and FlexE support for 32 400 GbE interfaces. The upcoming ExpressPlus silicon powers the device.

Also, in the second half of the year, Juniper plans to release for the data center the QFX10003 switch. The system packs 32 400 GbE interfaces in 3-RU hardware that can scale up to 160 100 GbE. The next-generation Q5 chip will power the system.

In the first half of next year, Juniper expects to release the QFX5220 switch, which will offer up to 32 400 GbE interfaces in a 1-RU system. The Q5-powered hardware also supports a mix of 50, 100 and 400 GbE for server and inter-fabric connectivity.

Finally, for wide-area network services, Juniper plans to release Penta-powered 400 GbE MPC10E line cards for the MX960, MX480 and MX240. The vendor plans to release the products early next year.

Juniper is likely to face stiff competition in the 400 GbE market from Cisco and Arista. Initially, prices for the high-speed interfaces will be too high for many companies. However, Hanselman expects that to change over time.

“The biggest challenge with 400 GbE is getting interface prices to a point where they can open up new possibilities,” he said. “[But] healthy competition is bound to make this happen.”

Indeed, in 2017, competition for current hardware drove Ethernet bandwidth costs down to a six-year low, according to analyst firm Crehan Research Inc., based in San Francisco. By 2022, 400 GbE will account for the majority of Ethernet bandwidth from switches, Crehan predicts.

Middleware tools demand to peak in 2018, before iPaaS ascends

2018 is a big year for middleware technologies, as revenues peak and cloud alternatives woo customers away from legacy, on-premises middleware suites.

The maturation of cloud, IoT and digital development platforms overall will drive enterprise investments in middleware tools to an all-time high of $30 billion in 2018, according to Gartner. Although traditional on-premises middleware technologies will serve legacy applications for years, enterprise investments in integration platform as a service (iPaaS) and middleware as a service (MWaaS) will eclipse the traditional market from now on.

Cloud middleware tools are enterprise-ready, which is good news for businesses that require both cloud and on-premises systems to support customers’ and partners’ digital touchpoints and application development.

“Businesses today need flexible, consumable and agile integration capabilities that enable more people to get involved in delivering solutions,” said Neil Ward-Dutton, research director at MWD Advisors in Horsham, U.K.

End of the middleware suite era


Investment trends signal a shift in enterprise business application development and integration. Revenues for traditional on-premises, ESB-centric middleware suites from such vendors as IBM and Oracle achieved only single-digit growth in 2016 and 2017, according to Gartner.

Meanwhile, spending is increasing for hybrid middleware tools from vendors such as Neosoft, Red Hat, WSO2 and Talend. The iPaaS market exceeded $1 billion for the first time in 2017 and grew by over 60% in 2016 and 72% in 2017, said Saurabh Sharma, principal analyst at Ovum, a London-based IT research firm. Common characteristics of these middleware technologies include an open source base, API integration, a loosely coupled architecture and a subscription model.


Businesses with on-premises, legacy middleware suites run these applications and do a lot of integration in their own data centers. However, many enterprises are migrating to a hybrid cloud environment and require both on-premises and cloud integration, said Elizabeth Golluscio, a Gartner analyst. These newer vendors replace traditional ESBs with lightweight, open source service buses, largely encompassed in iPaaS offerings.

Costs, flexibility drive iPaaS adoption

Businesses’ shift away from traditional, on-premises middleware technologies is driven by lower costs, an inundation of new technologies, new digital business integration requirements, and faster time to integration, particularly for SaaS applications. “Cloud integration platforms are now mature enough to deliver these flexible and speedy integration capabilities,” Sharma said.


The cloud middleware subscription fee model aligns enterprises’ costs with usage and return. “The era of perpetual licensing of expensive, on-premises integration middleware on a per-server basis is fading fast,” Ward-Dutton said.

In the past few years, businesses were inundated with new technologies like AI and IoT, as well as shifts in enterprise architecture and an explosion of customer touchpoints. “To consolidate the middleware tools required to support and glue all of these things together, enterprise IT has to evaluate cloud middleware,” Golluscio said.

Don’t muddle through middleware modernization

In evaluations of cloud middleware tools, don’t give in to the lure of one-size-fits-all products, Ward-Dutton said. First, get buy-in for hybrid projects from the IT team. In a business with a long history of enterprise middleware use, specialists may fiercely protect their roles. IT organizations with mature DevOps teams have an advantage because new integration models enable more open collaboration across different roles and require smooth teamwork.

To plan a middleware modernization project, determine the desired state of integration architecture, and account for the integration requirements of digital business processes, Sharma said. Create integration competency centers (ICCs) to facilitate the adoption of self-service integration tools and cloud middleware platforms. Then, plan to gradually migrate appropriate integration processes and workloads to cloud-based integration services to deliver greater agility with a lower cost of ownership.

Overall, the key theme in middleware modernization projects is API-led integration that uses both API management and service-oriented and microservices architectures to expose and consume REST APIs. Focus on adopting API platforms to implement API- and design-first principles and enable the rapid creation of APIs that can effectively meet the specific requirements of end users.
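As a deliberately minimal illustration of that API-first approach, the Flask sketch below publishes one resource from a stand-in "legacy" order system as a REST endpoint that other applications (or an API management layer) could consume; the route, data and port are hypothetical.

from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for a legacy order system of record.
ORDERS = {"1001": {"status": "shipped", "total": 249.99}}

@app.route("/api/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    """Expose a single order as a REST resource."""
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8080)  # e.g., curl http://localhost:8080/api/v1/orders/1001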

Salesforce Datorama acquisition to bolster Marketing Cloud

The Salesforce Datorama acquisition is expected to enhance the Salesforce Marketing Cloud system and help Salesforce better compete with Adobe — the CRM software vendor’s main competitor in the marketing space.

The Salesforce Datorama acquisition marks the fourth purchase for the vendor this year, following its acquisitions of Attic Labs, CloudCraze and MuleSoft. The cost of buying Datorama, according to reports, was about $800 million.

Datorama uses AI and machine learning to provide marketing intelligence and analytics to help organizations identify which campaigns work best and what the next best marketing tactic should be. Salesforce appears to be looking to the Israel-based company’s technology to bolster its Einstein AI-backed business intelligence software.

The move will strengthen Salesforce’s portfolio in marketing and analytics, said Ray Wang, principal analyst and founder of Constellation Research.

“Datorama looks at every piece of analytics around the campaign to figure out why one was more successful than another. Salesforce has been building out its Marketing Cloud and [has] been doing specific acquisitions to bolster its marketing and ad-tech capabilities,” Wang said.

Not all observers think the Salesforce Datorama acquisition was the best move to help with marketing analytics.

Datorama customers have used the product more for reporting, rather than analytics, according to Tina Moffett, senior analyst at Forrester Research.

“Datorama’s strong suit is in its ability to connect disparate data sources — from Facebook and ad servers and email providers — into one central system, and it uses AI to do that,” Moffett said. “A lot of the organizations that we’ve talked to use Datorama as a central reporting and dashboard tool.”

Building capabilities through acquisition


Salesforce has been visibly working to improve Marketing Cloud and has done so mainly through acquisitions.

That campaign started with acquiring ExactTarget in 2013 and turning it into the core Marketing Cloud system. Salesforce then bought Krux in 2016 to improve Marketing Cloud’s data management capabilities and soon renamed Krux to Salesforce DMP.

Salesforce’s acquisition of Datorama AI marketing software fits the same theme, but the purchase may have surprised some.

“Salesforce’s [approach] is to build capabilities through acquisitions,” Moffett said. “For them to focus on marketing performance measurement and then acquire a company wasn’t that big of a shock. I think what was a bit of a surprise was that it was Datorama.”

Surprise or not, Salesforce appears to have big plans for Datorama.

“Salesforce’s acquisition of Datorama will enhance Salesforce’s Marketing Cloud with expanded data integration, intelligence and analytics, enabling marketers to unlock insights across Salesforce data and the myriad of technologies used in today’s marketing and consumer engagement ecosystem,” Ran Sarig, Datorama CEO and co-founder, wrote in a blog post.

Meanwhile, Wang said he could see the Salesforce Datorama acquisition laying the groundwork for another Salesforce product: an advertising cloud.

“When you see this acquisition, you have to think the next thing for Salesforce is ad tech,” Wang said.

Salesforce acquired Datorama, an AI marketing analytics company, to help bolster its Marketing Cloud (pictured).

Salesforce sets sights on Adobe

Salesforce’s focus on strengthening Marketing Cloud also is apparently aimed at Adobe — another marketing software giant.

The two software goliaths have battled fiercely in recent years, and the Salesforce Datorama acquisition should be viewed in the context of that technological arms race, Wang said.

“From a Marketing Cloud perspective, it’s Salesforce and Google versus Microsoft and Adobe, and that’s what people need to recognize when considering their investments,” he said.

The big tech leaders all are trying to make it easier for organizations to connect the dozens of marketing tools that most large enterprises use.

“The bigger issue is the fact that most organizations run 40 to 50 martech solutions and want to know how to consolidate [their data],” Wang said. “Everyone is looking for one vendor to make this easier, and the integrations that Datorama has are important and allow you to connect those different pieces.”

Cloud misconfigurations can be caused by too many admins

When it comes to cloud security, enterprise employees can be their own worst enemy, especially when organizations stray too far from least-privilege models of access.

Data exposures have been a constant topic of news recently — often blamed on cloud misconfigurations — and have led to voter records, Verizon customer data and even army secrets being publicly available in cloud storage.

In a Q&A, BetterCloud CEO and founder David Politis discussed why SaaS security has become such big news and how enterprises can take control of these cloud misconfigurations in order to protect data.

Editor’s note: This conversation has been edited for length and clarity.

There have been quite a few stories recently about cloud misconfigurations leading to exposures of data. Do you think this is a new issue or just something that is becoming more visible now?

David Politis: This is an issue that has been around really since people started adopting SaaS applications. But it’s only coming out now because, in a lot of cases, the misconfigurations are not identified until it’s too late. In most cases, the configurations were in place when the application was first deployed, or they were in place when the setting was changed years ago or six months ago, and it’s not until some high-profile exposure happens that the organization starts paying attention to it.


We’ve actually seen this recently. We had a couple of customers that we’d been talking to for, in one case, three years. And we told them three years ago, ‘You’re going to have issue X, Y and Z down the line, because you have too many administrators and because you have this issue with groups.’ And for three years, it has been living dormant, essentially. And then, all of a sudden, they had an issue where all their groups got exposed to all the employees in the company. It’s a 10,000-person company, where every single employee in the entire company could read every single email distribution list.

Similarly, another company that we’ve talked to for a year came to us three weeks ago and said, ‘I know you told us we were going to have these problems. We just had one of the super admins that should not have been a super admin incorrectly delete about a third of our company’ — they’re about a 3,000-person company — ‘and a third of the company was left without email, without documents and without calendars and thought they got fired.’

A thousand people, in a matter of minutes, thought they got fired, because they had no access to anything. And they had to go and restore that app. Fifteen minutes of downtime for 1,000 people is a lot of confusion.

We’ve seen these types of incidents, and we’re seeing it in these environments. This is why we started the company almost seven years ago now. But only now has the adoption of these SaaS applications reached critical mass to the point where these problems are happening at scale and people are reporting it publicly.

You mention different SaaS security issues that can arise from cloud misconfigurations. Are these data exposure stories overshadowing bigger issues?

Politis: It’s more that the inadvertent piece is what makes this so challenging. And this is not malicious. There are malicious actors, but a lot of these situations are not malicious. It’s misconfiguration or just a general mistake that someone made. Even deleting the users is just a result of having too many administrators, which is a result of not understanding how to configure the applications to follow a least-privilege model.

I think, even if it’s a mistake, the kind of data that can be exposed is the most sensitive data, because we’ve hit the tipping point in how SaaS applications are being used. The cloud, in general, is being used as a system of record. If we go back maybe five or six years, I’m not sure we were at the point where cloud was being trusted as a system of record. It was kind of secondary. You could put some stuff there, maybe some design files, but now you have your [human resources] files.

Recently, we did a security assessment for a customer, and what we found was that all the HR files that they had lived in a public folder in one of their cloud storage systems. And it was literally all their HR files, by employee. That was this configuration that was definitely not malicious, and that’s as bad as it gets. We’re talking about Social Security numbers. We were finding documents such as background checks on employees that were publicly available files. And if you knew how to go find them, you could pull that up.

That, I’d argue, is worse than people’s email being deleted for 15 minutes — and, again, completely by mistake. We spoke to the company, and the person in charge of HR was just not very familiar with these cloud-based systems. And they just misconfigured something at the folder level, and then all the files that they were adding to the folder were becoming publicly available. And so I think it’s more dangerous there, because you’re not even looking for a bad actor. It’s just happening. It’s happening day in, day out, which I think is harder to catch actually.
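The kind of exposure Politis describes can, in principle, be hunted for programmatically rather than by clicking through folders. The sketch below is one illustrative approach, assuming Google Drive as the storage system, the Drive API v3 with the google-api-python-client and google-auth packages, and a service account with a read-only Drive scope; the file name and user address are placeholders.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES, subject="user@example.com"  # hypothetical values
)
drive = build("drive", "v3", credentials=creds)

# Find files reachable by anyone on the internet who has the link.
query = "visibility = 'anyoneWithLink' and trashed = false"
exposed, page_token = [], None
while True:
    resp = drive.files().list(
        q=query,
        fields="nextPageToken, files(id, name, webViewLink)",
        pageToken=page_token,
    ).execute()
    exposed.extend(resp.get("files", []))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

for f in exposed:
    print(f["name"], f["webViewLink"])  # review anything sensitive that shows up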

Should all enterprises assume there is a cloud misconfiguration somewhere? How difficult is it to find these issues?

Politis: I can say from our experience that nine out of 10 environments that we go into — and it doesn’t matter the size of the organization — have a major, critical misconfiguration somewhere in their environment. And it is possible, in most cases, to find the misconfiguration, but it’s a little bit like finding a needle in a haystack. It requires a lot of time to go through, because the only way to do it is to go page by page in the admin console; it’s to click on every setting to look at every group, look at every channel and look at every folder. And so unless you’re doing it programmatically, right now, there are not many [other] ways to do it.

This is self-serving, but this is why we built BetterCloud: to identify those blind spots. That’s because there’s a real need. When we went to look at these environments and we started logging into Salesforce and Slack and Dropbox and Google, it could take you months to go through an environment of a couple hundred employees and just check all the configurations and all the different areas, because there [are] so many places where that misconfiguration can be a problem.

The way that people have to do it today is do it manually. And it can take a very long period of time [depending] on how big an organization is, how long they’ve been using the SaaS applications, how much they’ve adopted cloud in general, and the sprawl of data that they have to manage and, more importantly, the sprawl of entitlement, configuration settings, permissions across all the SaaS.

And we’re seeing a large portion of that is not even IT’s fault. The misconfigurations may predate that IT organization in many cases, because the SaaS application has been around for longer than that IT organization or that IT leader.

In many cases, it may be the end users who are misconfiguring, because they have a lot of control over these applications. It could be that it started as shadow IT, and it was configured by a shadow IT team in a certain way. When the apps are taken over by the IT organization, a lot of that cleanup of the configuration isn’t done, and so it doesn’t fit within the same policies that IT has.

We also have a lot of customers where the number of admins that they have is crazy, because sales operations were the ones responsible for that and, generally speaking, it’s easier to make everyone an admin and let them make their own changes, let them do all of that. But when IT openly takes over the security and management of Salesforce, the work required to go find all the misconfigurations is really hard. That goes for Dropbox, Slack and anything that starts as shadow IT; you’re going to have those problems.
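The "too many admins" problem can be audited in a similar, programmatic way. As one illustrative sketch (assuming a Google Workspace domain, the Admin SDK Directory API, the google-api-python-client and google-auth packages, and a service account with a read-only directory scope; the file name and admin address are placeholders), the script below simply enumerates super admins so the count can be compared against a least-privilege policy.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES, subject="admin@example.com"  # hypothetical values
)
directory = build("admin", "directory_v1", credentials=creds)

admins, page_token = [], None
while True:
    resp = directory.users().list(
        customer="my_customer",   # the caller's own domain
        query="isAdmin=true",     # only super admins
        maxResults=500,
        pageToken=page_token,
    ).execute()
    admins.extend(u["primaryEmail"] for u in resp.get("users", []))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

print(f"{len(admins)} super admins found")  # flag if this exceeds your policy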