
Nvidia unveils A100 GPU for demanding AI workloads

Nvidia unveiled its next-generation Ampere GPU architecture and new hardware that will use it for AI and data science-intensive workloads.

An advancement on Nvidia’s Volta architecture, released three years ago, Ampere will power the Nvidia A100, a new GPU built specifically for AI training and inference, as well as data analytics, scientific computing and cloud graphics.

The chip and software giant unveiled the new products at its GTC 2020 virtual conference Thursday.

AI super users

Nvidia A100 is “a tremendous improvement as Nvidia’s own comparisons with Volta bear out,” said Peter Rutten, research director of infrastructure systems, platforms and technologies at IDC.

“For super users, this is an exceptional processor,” he said.

The Nvidia A100, with more than 54 billion transistors and a die size of 826 mm², is the world’s largest 7 nm chip, according to Nvidia. It also features third-generation Tensor Cores with TF32 precision, delivering up to 20 times the performance of the previous generation with no code changes, plus an additional twofold boost with automatic mixed precision and FP16, the vendor said.
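As a rough check on how those claimed multipliers compound (assuming, as the article implies but does not state, that both figures are relative to the same Volta-generation FP32 baseline):

```python
# Compounding Nvidia's quoted speedup claims (illustrative arithmetic only).
volta_baseline = 1.0   # previous-generation FP32 throughput, normalized
tf32_speedup = 20.0    # "up to 20 times" with TF32, no code changes
amp_speedup = 2.0      # additional 2x with automatic mixed precision/FP16

tf32_total = volta_baseline * tf32_speedup
amp_total = tf32_total * amp_speedup

print(f"TF32 vs. baseline:  {tf32_total:.0f}x")  # 20x
print(f"TF32 + AMP/FP16:    {amp_total:.0f}x")   # 40x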

“The larger number of tensor cores, the different precision levels allowing for different performance scenarios, the partitioning capability, all that is very relevant and important,” Rutten said.

But the new chip, with a thermal design power (TDP) of 400 watts, is more power-intensive than Nvidia’s Volta-based V100 GPU, which had a TDP of 350 W, Rutten noted.

With the A100, Nvidia clearly has supercomputing in mind, or at least highly demanding AI training and inferencing workloads. The chip’s high power draw could make it unattractive to enterprises that don’t need that scale of compute for many AI jobs.

Nvidia A100

“Whether this is the kind of capability [with the A100] that you need for a new AI initiative, that I would doubt,” Rutten said.

Supercomputing applications

Nvidia also revealed a new product in its DGX line — the DGX A100, a $200,000 AI supercomputing system composed of eight A100 GPUs. The DGX A100 provides 320 GB of GPU memory for training huge AI datasets and is capable of 5 petaflops of AI performance. Another new product, the DGX SuperPOD, a cluster of 140 DGX A100 systems, can hit 700 petaflops of AI computing power, Nvidia said.
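The SuperPOD figure follows from linear scaling of the per-system number; a quick sketch of the arithmetic:

```python
# Scaling the DGX A100's quoted AI performance up to a DGX SuperPOD.
dgx_a100_pflops = 5      # petaflops of AI performance per DGX A100
superpod_systems = 140   # DGX A100 systems per DGX SuperPOD

superpod_pflops = dgx_a100_pflops * superpod_systems
print(superpod_pflops)   # 700 petaflops, matching Nvidia's claim
```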

As far as the core Nvidia A100’s position in the market, it’s “objectively fair to say that there is currently no competing product,” said Rutten.


“In this sense, Nvidia has again succeeded in being far ahead of the competition,” Rutten said. “Of course, we don’t know what’s brewing at competing companies, both incumbents and startups, but I don’t expect anything competitive to A100 to be launched anytime soon.”

Meanwhile, the DGX SuperPOD will likely rank among the most powerful supercomputers available. Nvidia’s previous SuperPOD, built on V100s, is on the Top500 supercomputer list, Rutten noted.

“So, safe to say that a SuperPOD with A100s will be listed as well — and pretty high up,” he added.

AI at the edge

Nvidia also added two new products to its EGX edge computing line — the EGX A100 and the EGX Jetson Xavier NX.

Enterprises can integrate the EGX A100 converged accelerator, built on the Ampere architecture, into their servers to carry out real-time AI or 5G signal processing on up to 200 Gbps of data.

EGX Jetson Xavier NX, meanwhile, is a tiny, 70 mm by 45 mm supercomputer designed for high-performance compute or AI workloads in edge systems. The device can deliver up to 21 trillion operations per second (TOPS).

For Nvidia, AI at the edge is a critical piece of its market, Rutten said.

“AI inferencing will become a larger market than AI training in the near future,” he said. “A lot of inferencing will happen at the edge. Nvidia cannot allow itself to let that market slip away to other vendors.”

Yet, Rutten questioned whether the core A100 platform is the answer for many enterprises’ edge computing needs as they evolve.

“It could be overkill for an edge solution, except perhaps for very intensive, larger-scale edge deployments,” he said.


Long-delayed Dell EMC PowerStore midrange array makes debut

Enterprises can get a look at Dell EMC’s next-generation midrange storage, more than a year later than the array’s planned debut.

The Dell EMC PowerStore system that launched today marks the vendor’s first internally developed storage product since Dell bought EMC in 2015. Integration with Dell-owned VMware is a key element, including an onboard ESXi hypervisor and the ability to run applications directly on certain array models.

The base PowerStore is a 2U, two-node enclosure designed for active-active failover and high availability. The chassis holds 25 NVMe SSDs, with support for Intel Optane persistent memory. Up to three 25-drive SAS expansion shelves can be added per chassis. Support for NVMe over Fabrics (NVMe-oF) is on Dell EMC’s roadmap.

The PowerStore midrange storage has been a strategic priority for several years. More than 1,000 engineers across Dell EMC storage and the wider Dell Technologies organization worked on the system, said Caitlin Gordon, senior vice president of Dell EMC storage marketing.

“Data has never been more diverse or more valuable, but customers have had to choose between prioritizing service levels for performance and simplifying their operations. We know not every application can be virtualized, and we engineered PowerStore so you can consolidate all workloads on a single platform,” Gordon said.

What’s next for Dell EMC midrange?

Dell EMC first scheduled the new midrange system to launch in 2019, but a series of delays pushed it back until now. The all-flash PowerStore adds to Dell EMC’s overlapping midrange storage lineup, although the vendor said the new system will help streamline the portfolio.

Dell EMC is the market leader in storage, with midrange platforms that include the Unity flagship all-flash and hybrid arrays that EMC brought to market. Other midrange systems include the SC Series and PS Series, which Dell picked up through its Compellent and EqualLogic acquisitions years ago. The Compellent arrays, now known as the SC Series, are still sold and supported by Dell; the EqualLogic arrays were renamed the PS Series, which Dell maintains but no longer sells. Dell EMC executives said the other systems will be phased out slowly with PowerStore’s arrival.

Dell EMC PowerStore midrange array

The PowerStoreOS operating system incorporates a Kubernetes framework to serve storage management from containers and includes a machine learning engine to automate rebalancing and other administrative tasks. Based on internal testing, Dell EMC claims PowerStore delivers seven times the performance and three times lower latency than the Unity XT array.

The ground-up PowerStore design eventually will emerge as the dominant Dell EMC midrange storage, said Scott Sinclair, a storage analyst with Enterprise Strategy Group.

“This is a completely new architecture that’s based on a container framework. It’s designed to address a bunch of different workload needs on one array. That’s not the type of hard work you put into a product just to add another midrange storage array,” Sinclair said.

A software capability called AppsOn allows data-intensive applications to access storage on PowerStore and use VMware vMotion to migrate it between core and cloud environments.

“The idea is that you can be within a VMware environment — let’s say VMware Cloud Foundation, or vSphere — and have different ways to move applications to various targets. AppsOn is a novel approach that gives you more flexibility to deploy apps, based on your resource needs,” Sinclair said.

Beta customer tried to ‘blow up’ PowerStore

Dell EMC guarantees data reduction of 4-to-1 with always-on inline deduplication. Dell claims the inline data reduction does not degrade performance. Based on the ratio, a single Dell EMC PowerStore with three expansion enclosures is rated to provide 2.8 PB of usable storage per appliance. Effective capacity scales to 11.3 PB in a maximum eight-node cluster.
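A short sketch of how the capacity claims follow from the 4:1 reduction guarantee; the raw-capacity and appliance-count figures below are inferred from the article’s numbers rather than stated by Dell EMC:

```python
# How PowerStore's effective-capacity figures follow from the 4:1 guarantee.
reduction_ratio = 4.0             # guaranteed inline data reduction
effective_per_appliance_pb = 2.8  # usable PB per appliance (article figure)

# Raw flash implied by the guarantee (inferred, not a Dell EMC figure).
raw_per_appliance_pb = effective_per_appliance_pb / reduction_ratio

# An eight-node cluster is four two-node appliances (inferred).
appliances_in_max_cluster = 4
cluster_effective_pb = effective_per_appliance_pb * appliances_in_max_cluster

print(raw_per_appliance_pb)   # 0.7 PB of raw flash per appliance
print(cluster_effective_pb)   # 11.2 PB, in line with the quoted 11.3 PB
```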

Five capacity models are available: PowerStore 1000 (384 TB), PowerStore 3000 (768 TB), PowerStore 5000 (1,152 TB), PowerStore 7000 (1,536 TB) and PowerStore 9000 (2,560 TB). PowerStore X models come with the VMware hypervisor and the AppsOn capability for running data-intensive applications directly on the array across core and cloud environments. The PowerStore T configuration does not include those features.

“I actually like the PowerStore X a lot more than I ever thought I would,” said Alan Hunt, the director of network operations for Detroit-based law firm Dickinson Wright. Hunt is running a PowerStore X and PowerStore T in beta to simulate live production. He said PowerStore will help Dickinson Wright to incorporate new storage with existing SC Series and retire PS Series arrays.

“We did a lot of testing and migrating of live workloads with the AppsOn feature, and that was excellent. We’re running simulated workloads and don’t have anything in production [on PowerStore], but I want to jump on it immediately. I take systems and try to blow them up, and this was definitely the most stable beta test I’ve ever done,” Hunt said.

Dell EMC initially said it would converge features of its multiple midrange arrays in 2019. The product launch was slated for Dell Tech World in Las Vegas in May, but that event was cancelled due to the coronavirus. Dell said it will have a virtual show later this year but has not specified dates.

Gordon said PowerStore systems started shipping in April.


AI at the core of next-generation BI

Next-generation BI is upon us, and has been for a few years now.

The first generation of business intelligence, beginning in the 1980s and extending through the turn of the 21st century, relied entirely on information technology experts. It was about business reporting, and was inaccessible to all but a very few with specialized skills.

The second introduced self-service analytics, and lasted until just a few years ago. The technology was accessible to data analysts, and defined by data visualization, data preparation and data discovery.

Next-generation BI — the third generation — is characterized by augmented intelligence, machine learning and natural language processing. It’s open to everyday business users, and trust and transparency are important aspects. It’s also changing the direction in which analytics looks, shifting from describing the past to predicting the future.

In September, Constellation Research released “Augmented Analytics: How Smart Features Are Changing Business Intelligence.” The report, authored by analyst Doug Henschen, took a deep look at next-generation BI.

Henschen reflected on some of his findings about the third generation of business analytics for a two-part Q&A.

In Part I, Henschen addressed what marked the beginning of this new era and who stands to benefit most from augmented BI capabilities. In Part II, he looked at which vendors are positioned to succeed and where next-generation business intelligence is headed next.

In your report you peg 2015 as the beginning of next generation BI — what features were you seeing in analytics platforms at that time that signaled a new era?

Doug Henschen

Doug Henschen: There was a lot percolating at the time, but I don’t think it’s about a specific technology coming out in 2015. That’s an approximation of when augmented analytics became ensconced as a buying criterion. The previous decade was when self-service became really important and the majority of deployments were driving toward it, and I pegged 2015 as the approximate time at which augmented capabilities got on everyone’s radar.

Beyond the technology itself, what were some things that happened in the market around the time of 2015 that showed things were changing?

Henschen: There were lots of technology things that led up to that — Watson playing Jeopardy was in 2011, SAP acquired KXEN in 2013, IBM introduced Watson Analytics in 2014. Some startups like ThoughtSpot and BeyondCore came in during the middle of the decade, Salesforce introduced Einstein in 2016 and ended up acquiring BeyondCore in 2016. A lot of stuff was percolating in the decade, and 2015 is about when it became about, ‘OK, we want augmented analytics on our list. We want to see these features coming up on roadmaps.’

What are you seeing now that has advanced next-generation BI beyond what was available in 2015?


Henschen: In the report I dive into four areas — data preparation, data discovery and analysis, natural language interfaces and interaction, and forecasting and prediction — and in every category you’ve seen certain capabilities become commonplace, while other capabilities have been emerging and are on the bleeding edge. In data prep, everyone can pretty much do auto data profiling, but recommended or suggested data sources and joins are a little bit less common. Guided approaches that walk you through how to cleanse this, how to format this, where and how to join — that’s a little bit more advanced and not everybody does it.

Similarly, in the other categories, recommended data visualization is pretty common in discovery and analysis, but intent-driven recommendations that track what individuals are doing and make recommendations based on patterns among people are more on the bleeding edge. It applies in every category. There’s stuff that is now widely done by most products, and stuff that is more bleeding edge where some companies are innovating and leading.

Who benefits from next-generation BI that didn’t benefit in previous generations — what types of users?

Henschen: I think these features will benefit all. Anything that is proactive, that provides recommendations, that helps automate work that was tedious, that surfaces insights that humans would have a tough time recognizing but that machines can recognize — that’s helpful to everybody. It has long been an ambition in BI and analytics to spread this capability to the many, to the business users, as well as the analysts who have long served the business users, and this extends the trend of self-service to more users, but it also saves time and supports even the more sophisticated users.

Obviously, larger companies have teams of data analysts and data engineers and have more people of that sort — they have data scientists. Midsize companies don’t have as many of those assets, so I think [augmented capabilities] stand to be more beneficial to midsize companies. Things like recommended visualizations and starting points for data exploration, those are very helpful when you don’t have an expert on hand and a team at your disposal to develop a dashboard to address a problem or look at the impact of something on sales. I think [augmented capabilities] are going to benefit all, but midsize companies and those with fewer people and resources stand to benefit more.  

You referred to medium-sized businesses, but what about small businesses?

Henschen: In the BI and analytics world there are products that are geared to reporting and helping companies at scale. The desktop products are more popular with small companies — Tableau, Microsoft Power BI, Tibco Spotfire are some that have desktop options, and small companies are turning also to SaaS options. We focus on enterprise analytics — midsize companies and up — and I think enterprise software vendors are focused that way, but there are definitely cloud services, SaaS vendors and desktop options. Salesforce has some good small business options. Augmented capabilities are coming into those tools as well.

Editor’s note: This interview has been edited for clarity and conciseness.


NAND flash manufacturers showcase new technologies

NAND flash manufacturers laid out their roadmaps for next-generation products and architectures at the 2018 Flash Memory Summit this month.

As expected, Intel, Micron, SK Hynix and Toshiba talked up 3D NAND flash chips that can store four bits of data per cell, known as quadruple-level cell (QLC). They also spotlighted their 96-layer 3D NAND and outlined roadmaps that extend to 128 layers and beyond to further boost density.

NAND flash manufacturers introduced new efforts to speed performance, raise density and lower costs. Toshiba launched a low-latency option called XL-Flash. Chinese startup Yangtze Memory Technologies Co. (YMTC) hopes to catch up to the flash chip incumbents with its “Xtacking” architecture that can potentially increase performance and bit density. And South Korea-based chipmaker SK Hynix harbors similar aspirations with its so-called “4D NAND” flash that industry experts say is a misnomer.

Key NAND flash manufacturer Samsung was notably absent from the Flash Memory Summit keynotes, a year after discussing its Z-NAND technology at the conference. Z-NAND is another attempt to reduce costs by shifting periphery logic to a place that doesn’t take up space on the flash chip, said Jim Handy, general director and semiconductor analyst at Objective Analysis.

Here are some of the new technologies that NAND flash manufacturers showcased at last week’s Flash Memory Summit:

Toshiba’s XL-Flash

Toshiba’s XL-Flash is based on the company’s single-level cell (SLC) 3D NAND bit column stacked (BiCS) technology and enables optimization for multi-level cell (MLC) flash. The XL stands for excellent latency, according to Shigeo (Jeff) Ohshima, a technology executive in SSD application engineering at Toshiba Memory Corporation.

Ohshima said XL-Flash requires no additional process and is fully compatible with conventional flash in terms of the command protocol and interface. The read latency of XL-Flash could be 10 times faster than conventional TLC flash devices, according to Ohshima.

He said the company has “a lot of room” to do more with its current 3D NAND BiCS flash technology before new nonvolatile memories such as resistive RAM (ReRAM), magnetoresistive RAM (MRAM), and phase change memory ramp up in volume and become dominant.

“So it ain’t over ’til it’s over,” Ohshima said.

Ohshima said a combination of XL-Flash and denser QLC flash could handle a broad range of application workloads and improve overall system performance over the classic storage architecture of DRAM and HDDs. He noted the performance gap between XL-Flash and QLC flash is considerably smaller than the differential between DRAM and HDDs. And, although XL-Flash is slower than DRAM, it costs less and offers higher capacity.

Industry analysts view Toshiba’s XL-Flash and Samsung’s Z-NAND as a low-latency, flash-based response to 3D XPoint memory technology that Intel and Micron co-developed. Intel last year began shipping 3D XPoint-based SSDs under the brand name Optane, and this year started sampling persistent memory modules that use the 3D XPoint technology. Micron has yet to release products based on 3D XPoint.

David Floyer, CTO and co-founder of Wikibon, said Toshiba’s XL-Flash and Samsung’s Z-NAND will never quite reach the performance of Optane SSDs, but they’ll get “pretty close” and won’t cost anywhere near as much.

Handy expects XL-Flash and Z-NAND to read data at a similar speed to Optane, but he said they “will still be plagued by the extraordinarily slow write cycle that NAND flash is stuck with because of quantum mechanics.”

Startup takes on incumbent NAND flash manufacturers

YMTC hopes to challenge established NAND flash manufacturers with Xtacking. YMTC claims the new architecture can improve efficiency and I/O speed, reduce die size and increase bit density, and shorten development time.

“It really takes courage to go down that path because we know that it’s not easy to make that technology work,” YMTC CEO Simon Yang said.

Unlike conventional NAND, Xtacking separates the flash cell array and the periphery circuitry, or logic, onto different wafers. The startup claimed that the high-voltage transistors conventional NAND typically uses for the periphery circuit limit NAND I/O speed, and that Xtacking permits lower-voltage transistors that enable higher I/O speeds and more advanced functions.

“We really can match the DDR4 I/O speed without any limitation,” Yang said.

Yang said results have been encouraging. He said the flash chip yield is increasing, and the reliability of the memory bits through cycling looks positive. YMTC plans to introduce samples of the new Xtacking-based flash technology into the market early next year, Yang said.

“Hopefully, we can catch up with our friends and contribute to this industry,” Yang said.

YMTC started 3D NAND development in 2014 with a nine-layer test chip and later co-developed a 32-layer test chip with Spansion, which merged with Cypress Semiconductor. YMTC moved the chip into production late last year, but Yang said the company held back on volume ramp-up because the first-generation product was not cost competitive.

“We are very much profit-driven,” Yang said. He later added, “We only want to ramp into volume when it’s cost competitive.”

Handy expressed skepticism that YMTC will be able to meet its cost target, but he said YMTC’s Xtacking efforts might help the company to get to market faster.

SK Hynix 4D NAND flash

SK Hynix came up with a new name to describe its latest NAND flash technology. The company said its “4D NAND” puts the periphery circuitry under the charge-trap-flash-based 3D NAND cell array to reduce chip size, cut the number of process steps and lower overall cost over conventional NAND, in which the periphery circuitry is generally alongside the NAND cell.

But industry analysts say 4D NAND is merely a catchy marketing term, and the approach is not unique.

“YMTC is stacking a chip on top of the other, whereas Hynix is putting the logic on the same die but just building it underneath,” Handy said. “The cost of the chip is a function of how big the die is, and if you tuck things underneath other things, you make the die smaller. What Hynix is doing is a good thing, but I wouldn’t call it an innovation because of the fact that it’s the mainstream product for Intel and Micron.”

Intel and Micron have touted their CMOS under the array (CuA) technology in both their 64-layer QLC and 96-layer TLC flash technologies that they claim reduces die sizes and improves performance over competitive approaches. Handy said Samsung has also discussed putting the logic under the flash chip.

Hyun Ahn, senior vice president of NAND development and business strategy at SK Hynix, said his company’s charge-trap-based 4D NAND starts at 96 layers, with a roadmap that extends to 128 layers and beyond using the same platform.

The first SK Hynix 4D NAND technology will begin sampling in the fourth quarter with 96 stacks of NAND cells, an I/O speed of 1.2 Gbps per pin and an 11.5 mm by 12 mm mobile package. The chip is 30% smaller than its predecessor and can replace two 256 Gb chips with similar performance, according to SK Hynix.

The new SK Hynix 512 Gb triple-level cell (TLC) 4D NAND improves write performance by 30% and read performance by 25% over the company’s prior 72-stack TLC 3D NAND, with 150% greater power efficiency.

The upcoming 1 terabit (Tb) TLC 4D NAND that SK Hynix will sample in the first half of next year fits into a 16 mm by 20 mm ball grid array (BGA) package with a maximum of 2 TB per package. An enterprise U.2 SSD using the technology will offer up to 64 TB of capacity, according to SK Hynix.
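Those package and drive capacities are consistent with simple per-die arithmetic; the die counts below are inferred for illustration, not quoted by SK Hynix:

```python
# Die-count arithmetic behind the 1 Tb packaging and SSD capacity claims.
# SK Hynix quotes only the capacities; die counts here are inferred.
die_density_tb = 1                            # terabits per die
die_density_gb = die_density_tb * 1024 // 8   # 128 GB per die

bga_package_gb = 2 * 1024    # 2 TB maximum per BGA package
u2_ssd_gb = 64 * 1024        # 64 TB enterprise U.2 SSD

print(bga_package_gb // die_density_gb)   # 16 dies per package
print(u2_ssd_gb // die_density_gb)        # 512 dies per SSD
```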

SK Hynix plans to begin sampling 96-stack QLC 4D NAND, with 1 Tb density in a mono die, in the second half of next year. The company said the QLC 4D NAND would provide more than 20% higher wafer capacity than the TLC NAND that it has been producing since the second half of last year. The 72-stack, enterprise-class 3D NAND will represent more than 50% of SK Hynix NAND production this year, the company said.

NSS Labs ranks next-gen firewalls, with some surprises

New testing of next-generation firewalls found that products from seven vendors effectively protected enterprises from malicious traffic for a reasonable total cost of ownership — under $10 per Mbps of network traffic.

NSS Labs released its annual evaluation of next-gen firewalls on Tuesday, offering seven of 10 product recommendations for security effectiveness and total cost of ownership (TCO) based on comparative testing of hardware and software that prevents unauthorized access to networks.

“Our data shows that north of 80% of enterprises deploy next-gen firewalls,” said Jason Brvenik, CTO at NSS Labs, who noted that the market is mature and many of these vendors’ technologies are in refresh cycles.

The research analysts reviewed next-gen firewalls from 10 vendors for the comparative group test, including:

  • Barracuda Networks CloudGen Firewall F800.CCE v7.2.0;
  • Check Point 15600 Next Generation Threat Prevention Appliance vR80.20;
  • Cisco Firepower 4120 Security Appliance v6.2.2;
  • Forcepoint NGFW 2105 Appliance v6.3.3 build 19153 (Update Package: 1056);
  • Fortinet FortiGate 500E V5.6.3GA build 7858;
  • Palo Alto Networks PA-5220 PAN-OS 8.1.1;
  • SonicWall NSa 2650 SonicOS Enhanced;
  • Sophos XG Firewall 750 SFO v17 MR7;
  • Versa Networks FlexVNF 16.1R1-S6; and
  • WatchGuard M670 v12.0.1.B562953.

The independent testing involved some cooperation from participating vendors and in some cases help from consultants who verified that the next-gen firewall technology was configured properly using default settings for physical and virtual test environments. NSS Labs did not evaluate systems from Huawei or Juniper Networks because it could not “verify the products,” which researchers claimed was necessary to measure their effectiveness.

Despite the maturity of the NGFW market, the vast majority of enterprises don’t customize default configurations, according to Brvenik. Network security teams disable core protections that are noisy to avoid false positives and create access control policies, but otherwise they trust the vendors’ default recommendations.

The expanding functionality in next-gen firewalls underscores the complexity of protecting enterprise networks against modern threats. In addition to detecting and blocking malicious traffic through the use of dynamic packet filtering and user-defined security policies, next-gen firewalls integrate intrusion prevention systems (IPS), application and user awareness controls, threat intelligence to block malware, SSL and SSH inspection and, in some cases, support for cloud services.

Some products offer a single management console to enable network security teams to monitor firewall deployments and policies, including VPN and IPS, across environments. An assessment of manageability was not part of NSS Labs’ evaluation, however. NSS Labs focused on the firewall technology itself.

Worth the investment?

Researchers used individual test reports and comparison data to assess security effectiveness, which ranged from 25.0% to 99.7%, and total cost of ownership per protected Mbps, which ranged from $2 to $57 (U.S.), to determine the value of investments. The testing resulted in overall ratings of “recommended” for seven next-gen firewalls, “caution” ratings indicating limited value for two (Check Point and Sophos), and a “security recommended” rating, at a higher than average cost, for one (Cisco).

The security effectiveness assessment was based on the product’s ability to enforce security policies and block attacks while passing nonmalicious traffic over a testing period that lasted several hours. Researchers factored in exploit block rates, evasion techniques, stability and reliability, and performance under different traffic conditions. The total cost of ownership per protected Mbps was calculated using a three-year TCO based on capital expenditure for the products divided by security effectiveness times network throughput.
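The report’s value metric can be sketched directly from that description; the function name and sample inputs below are illustrative, not NSS Labs figures:

```python
# NSS Labs' value metric as described: three-year TCO divided by
# (security effectiveness x protected throughput).
def tco_per_protected_mbps(three_year_tco_usd: float,
                           security_effectiveness: float,
                           throughput_mbps: float) -> float:
    """Cost per Mbps of traffic actually protected."""
    return three_year_tco_usd / (security_effectiveness * throughput_mbps)

# A hypothetical firewall: $150,000 over three years, 95% effective, 20 Gbps.
print(round(tco_per_protected_mbps(150_000, 0.95, 20_000), 2))  # 7.89
```

Note how the formula penalizes low effectiveness: halving the effectiveness score doubles the cost per protected Mbps even at the same price and throughput.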

Six of the next-gen firewalls scored 90.3% or higher for security effectiveness, and most products cost less than $10 per protected Mbps of network throughput, according to the report. While the majority of the next-gen firewalls received favorable assessments, four failed to detect one or more common evasion techniques, which could cause a product to completely miss a class of attacks.

Lack of resilience

NSS Labs added a new test in 2018 for resiliency against modified exploits and, according to the report, none of the devices exhibited resilience against all attack variants.

“The most surprising thing that we saw in this test was that … our research and our testing showed that a fair number of firewalls did not demonstrate resilience against changes in attacks that are already known,” Brvenik said.

Enterprises deploy next-gen firewalls to protect their networks from the internet, he added, and as part of that they expect that employees who browse the internet should not have to worry about new threats. Technology innovation related to cloud integration and real-time updates is promising, but key enterprise problems, such as the ability to defend against attacks delivered in JavaScript, remain unsolved.

“I think one of the greatest opportunities in the market is to handle that traffic,” said Brvenik, who noted that some next-gen firewalls performed adequately in terms of toolkit-based protections, but NSS Labs didn’t observe any of them “wholly mitigating JavaScript.”

TCO in 2018 is trending lower than in previous years. While there are a number of very affordable next-gen firewalls on the market, vendors that can’t validate the effectiveness of their firewalls through independent testing showing the technology consistently delivers top-level protections should be questioned, according to Brvenik. Affordable products are a great choice only if they achieve what the enterprise is looking for and “live up to the security climate.”

Practical info on tap at SAP Sapphire Now 2018

Next-generation technologies such as AI are likely to dominate the stage at SAP Sapphire Now 2018. The annual conference of SAP users, partners and vendors takes place June 5-7 in Orlando, Fla.

Like past conferences, it’s expected that SAP will showcase its latest technology and product advancements, but attendees also expect to see practical applications of how the technologies solve real problems.

Gavin Quinn, founder and CEO of Mindset, a Minneapolis-based SAP partner that specializes in SAP Fiori development and implementations, believes that machine learning and AI will be the biggest overall focus, but he wants to see how SAP is using this in realistic ways.

“Last year was more about ‘here’s Leonardo and look at these exciting things we’re doing,’ but now we want to know how much the rubber hits the road,” Quinn said.

There’s evidence that this will happen, in that SAP has been putting AI and machine learning features into live applications, but the proof may lie in the keynotes. Quinn explained that he would like to see major customers provide testimonials about how AI features have changed their businesses, which may put SAP ahead of the other major vendors that are integrating AI into ERP platforms.

“There’s been a lot of talk running up, but those kinds of success stories would lock it in for me,” Quinn said. “I think SAP CoPilot could get a lot of play, which is their bot play in a lot of ways, so I think that will get a lot of gravity during Sapphire.”

More clarity on ERP applications, please

SAP Sapphire Now 2018 attendees should hear a lot about Leonardo and connectivity, but SAP needs to strike the right balance between high-level conceptual discourse and practical applications of its technology, according to Cindy Jutras, president of Mint Jutras, an ERP consulting firm.


“All too often they talk at such a high level of abstraction that it becomes somewhat meaningless, either that or they get into the weeds of the technical details,” Jutras said. “I would also like to hear if they are bringing SAP Leonardo down into the midmarket and if so, how.”

Midmarket companies don’t want to focus on technical possibilities, Jutras said. They want technology to solve problems and don’t have the time, money or expertise to develop their own applications.

Jutras is most interested in seeing what SAP is doing on the ERP applications front, including S/4HANA, SAP Business One and SAP Business ByDesign, but she noted that Sapphire is not usually known for that focus.

“On the SME side I am hoping to get an update on the process of turning Business One into a platform and I’d like to get some clarity on Business ByDesign,” she said. “It seems like Business ByDesign and S/4HANA are starting to encroach on each other’s market segments, and I’d also like some additional clarity on S/4HANA Cloud specifically.”

SAP targets Salesforce in CRM market

SAP appears to be squarely targeting Salesforce, a rivalry that is expected to be a main theme at SAP Sapphire Now 2018, according to Kelsey Mason, senior analyst at Technology Business Research.

“It will be interesting to see how they take all of their various [CRM] front-office assets — Hybris, Callidus, Gigya — and create one comprehensive suite and how they tie Leonardo, specifically the AI and IoT aspects, to that portfolio,” Mason said. “I expect that CRM rebrand to share center stage with S/4HANA and SAP Leonardo, and the theme once again will be the intelligent enterprise.”

Mason would also like to see how SAP’s concept of “customer empathy” comes into play, particularly in light of the indirect access issue of the past year.

“This will likely be touted on the main stage, first as a way to show customers that SAP has heard their complaints and has addressed them, and second as a proof point to show that SAP is the only vendor to have clear pricing strategies for digital access in IoT scenarios,” she said.

Mason expects a lot of hype around S/4HANA, the new CRM portfolio, and SAP Leonardo, but would like to see how SAP ties its “orbiting applications,” including SuccessFactors, SAP Ariba and Concur, into its intelligent enterprise vision, rather than treating them as an afterthought.

“I would also like to get a sense for traction within the S/4HANA portfolio,” she said. “Most of the customers to date have chosen the on-premises version, but it would be great to hear the breakdown of customers using public cloud, HANA Enterprise Cloud (HEC) hosted, and on-premises S/4HANA. It would also be nice to understand how many customers have chosen just one aspect of S/4HANA such as Simple Finance versus how many have chosen the full S/4HANA suite.”

Mason would also like to see SAP Services have a presence alongside major SAP service providers, including Accenture, Deloitte and EY.

“SAP Services is a big part of the S/4HANA and SAP Leonardo stories, but one that doesn’t seem to be highlighted as much,” she said. “It does complicate SAP’s relationships with some of its major partners, but that’s why I think putting them together on stage to talk about how they can work together to help customers form and execute digital transformations and become the intelligent enterprise would be good to hear for both customers and partners. I’m not holding my breath that this will happen, but certainly something that would be nice to see.”

Google Cloud Platform services engage corporate IT

Google continues to pitch its public cloud as a hub for next-generation applications, but in 2017, the company took concrete steps to woo traditional corporations that haven’t made that leap.

Google Cloud Platform services still lag behind Amazon Web Services (AWS) and Microsoft Azure, and Google’s lack of experience with enterprise IT is still seen as GCP’s biggest weakness. But the company made important moves this year to address that market’s needs, with several updates around hybrid cloud, simplified migration and customer support.

The shift to attract more than just the startup crowd has steadily progressed since the hire of Diane Greene in 2015. In 2017, her initiatives bore their first fruit.

Google expanded its Customer Reliability Engineering program to help new customers — mostly large corporations — model their architectures after Google’s. The company also added tiered support services for technical and advisory assistance.

New security features included Google Cloud Key Management Service and the Titan chip, which takes security down to the silicon, while Dedicated Interconnect taps directly into Google’s network for consistent and secure performance. Several updates and additions highlighted Google’s networking capabilities, which the company sees as an advantage over other platforms, including a slower and cheaper networking tier that Google claims still matches the competition’s best results for IT shops.

Google Cloud Platform services also expanded into hybrid cloud through separate partnerships with Cisco and Nutanix, with products from each partnership expected to be available in 2018. The Cisco deal involves a collection of products for cloud-native workloads and will lean heavily on open source projects Kubernetes and Istio. The Nutanix deal is closer to the VMware on AWS offering as a lift-and-shift bridge between the two environments.

And for those companies that want to move large amounts of data from their private data centers to the cloud, Google added its own version of AWS’ popular Snowball device. Transfer Appliance is a shippable server that can be used to transfer up to 1 PB of compressed data to Google cloud data centers.

In many ways, GCP is where Microsoft Azure was around mid-2014, as it tried to frame its cloud approach and put together a cohesive strategy, said Deepak Mohan, an analyst with IDC.


“They don’t have the existing [enterprise] strength that Microsoft did, and they don’t have that accumulated size that AWS does,” he said. “The price point is fantastic and the product offering is fantastic, but they need to invest in finding how they can approach the enterprise at scale.”

To help strengthen its enterprise IT story, Google bolstered its relatively small partner ecosystem — a critical piece to help customers navigate the myriad low- and high-level services — through partnerships forged with companies such as SAP, Pivotal and Rackspace. Though still not in the league of AWS or Azure, Google has also stockpiled some enterprise customers of its own, such as Home Depot, Coca-Cola and HSBC, to help sell its platform to that market. And it hired former Intel data center executive Diane Bryant as COO in November.

GCP also more than doubled its global footprint, with new regions in Northern Virginia, Singapore, Sydney, London, Germany, Brazil and India.

Google Cloud Platform services

Price and features still matter for Google

Price is no longer the first selling point for Google Cloud Platform services, but it remained a big part of the company’s cloud story in 2017. Google continued to drop prices across various services, and it added a Committed Use Discount for customers that purchase a certain monthly capacity for one to three years. Those discounts were particularly targeted at large corporations, which prefer to plan ahead with spending when possible.

There were plenty of technological innovations in 2017, as well. Google Cloud Platform was the first to use Intel’s next-gen Skylake processors, and several more instance types were built with GPUs. The company also added features to BigQuery, one of its most popular services, and improved its interoperability with other Google Cloud Platform services.

Cloud Spanner, which sprang from an internal Google tool, addresses challenges with database applications on a global scale that require high availability. It provides the consistency of transactional relational databases with the distributed, horizontal scaling associated with NoSQL databases. Cloud Spanner may be too advanced for most companies, but it made enough waves that Microsoft soon followed with its Cosmos DB offering, and AWS upgraded its Aurora and DynamoDB services.

That illustrates another hallmark of 2017 for Google’s cloud platform: On several fronts, the company’s cloud provider competitors came around to Google’s way of thinking. Kubernetes, the open source tool spun out of Google in 2014, became the de facto standard in container orchestration. Microsoft came out with its own managed Kubernetes service this year, and AWS did the same in late November — much to the delight of its users.
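The portability that made Kubernetes the de facto standard can be sketched with a minimal Deployment manifest. The names and image below are hypothetical, and the `apps/v1` API version postdates the 2017-era beta APIs, but the same YAML could be applied unchanged to a managed Kubernetes service on any of the three providers:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # stand-in for any containerized service
        ports:
        - containerPort: 80
```

Because the manifest describes desired state rather than provider-specific machinery, `kubectl apply` against a GKE, AKS or EKS cluster yields the same three replicas, which is the uniformity Kubernetes users point to.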

Machine learning, another area into which Google has pushed headlong for the past several years, also came to the forefront, as Microsoft and Amazon launched — and heavily emphasized — their own new products that require varying levels of technical know-how.

Coming into this year, conversations about the leaders in the public cloud centered on AWS and Microsoft, but by the end of 2017, Google had overtaken Microsoft in those conversations, said Erik Peterson, co-founder and CEO of CloudZero, a Boston startup focused on cloud security and DevOps.

“They really did a good job this year of distinguishing the platform and trying to build next-generation architectures,” he said.

Azure may be the default choice for Windows, but Google’s push into cloud-native systems, AI and containers has planted a flag as the place to do something special for companies that don’t already have a relationship with AWS, Peterson said.

Descartes Labs, a geospatial analytics company in Los Alamos, N.M., jumped on Google Cloud Platform early on partly because of Google’s activity with containers. Today, about 90% of its infrastructure is on GCP, said Tim Kelton, the company’s co-founder and cloud architect. He is pleased not only with how Google Container Engine manages its workloads and responds to new features in Kubernetes, but also with how other providers have followed Google’s lead.

“If I need workloads on all three clouds, there’s a way to federate that across those clouds in a fairly uniform way, and that’s something we never had with VMs,” Kelton said.

Kelton is also excited about Istio, an open source project led by Google, IBM and Lyft that sits on top of Kubernetes and creates a service mesh to connect, manage and secure microservices. The project looks to address issues around governance and telemetry, as well as things like rate limits, control flow and security between microservices.
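As a rough sketch of the control flow Istio layers on top of Kubernetes, the VirtualService below splits traffic between two versions of a microservice. The service and subset names are hypothetical, and this `v1alpha3` traffic-management API postdates Istio's earliest releases:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews        # hypothetical in-mesh service
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90       # keep most traffic on the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 10       # canary the new version
```

The mesh's sidecar proxies enforce the split without any change to application code, which is the kind of governance and control flow between microservices the project targets.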

“For us, that has been a huge part of the infrastructure that was missing that is now getting filled in,” he said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].