Chang takes leave from Cisco collaboration unit amid reorganization

Amy Chang, a top Cisco executive who has led the company’s collaboration division for nearly two years, has taken a leave of absence for an unspecified period.

Chang’s time off comes amid a restructuring within Cisco. David Goeckeler, general manager of Cisco’s networking and security group, resigned to become CEO of Western Digital. The company apparently took Goeckeler’s departure as an opportunity to reorganize into five new product groups.

Under the reorganization, the head of Cisco’s collaboration business will no longer report to the CEO. Instead, that person will answer to the leader of the new security and applications group. Cisco said it planned to appoint an executive to oversee the new group in the future.

Sri Srinivasan, general manager of the Webex suite, will run the collaboration division until Chang returns, the company said in a statement. Srinivasan joined Cisco in early 2018 after spending more than 12 years at Microsoft.

“After an impressive 15 years of great achievements at an incredibly fast pace, Amy has decided to take a well-earned breath,” Cisco said. “She is going to recharge her batteries, while also prioritizing time with her 12-year-old son, and [CEO Chuck Robbins] and Cisco as a whole applaud her for this.”

The reorganization comes at a critical time for Cisco’s collaboration division, which generated roughly $5.8 billion in revenue last fiscal year. The vendor has an opportunity to capitalize on a surge in teleconferencing and remote work amid the coronavirus outbreak.

Chang’s leave of absence also follows a disappointing financial quarter for her unit — the first under her leadership. Revenue for the product category that includes collaboration was down 8% year over year in the three months ended Jan. 25.

Cisco is battling for enterprise customers with Microsoft, which has attracted more than 20 million daily active users to the Office 365 collaboration app Microsoft Teams. The vendor’s Webex business is also taking heat from video conferencing upstart Zoom.

Chang replaced Rowan Trollope as the leader of Cisco’s collaboration business in May 2018 after the vendor acquired her startup, Accompany. Chang previously held a seat on Cisco’s board of directors but resigned to become an employee.

Chang spearheaded significant changes to Cisco’s portfolio. She led an effort to align the features and interfaces of premises-based Jabber and cloud-based Webex Teams. Chang also sought to differentiate Cisco’s products based on a set of AI features marketed as “cognitive collaboration.”

“I am surprised by the changes,” said Dave Michels, principal analyst at TalkingPointz. “Cisco just hosted a highly successful and engaging analyst event last month. The Cisco collaboration leadership team seemed well-aligned, and Chang seemed enthused and engaged.”

Srinivasan is a good pick to lead the division in Chang’s absence, Michels said. Srinivasan spearheaded significant improvements to Webex during his tenure. He will now also oversee Cisco’s telephony and contact center businesses.

Srinivasan has been Chang’s “right-hand man,” said Irwin Lazar, analyst at Nemertes Research. His promotion suggests the company’s strategy will not change dramatically, at least for now.

“Should Amy not return, or be replaced by someone outside the organization, then I’d expect there to be change,” Lazar said.

Q&A: SwiftStack object storage zones in on AI, ML, analytics

SwiftStack founder Joe Arnold said the company’s recent layoffs reflected a change in its sales focus but not in its core object storage technology.

San Francisco-based SwiftStack attributed the layoffs to a switch in use cases from classic backup and archiving to newer artificial intelligence, machine learning and analytics. Arnold said the staffing changes had no impact on the engineering and support team, and the core product will continue to focus on modern applications and complex workflows that need to store lots of data.

“I’ve always thought of object storage as a data as a service platform more than anything else,” said Arnold, SwiftStack’s original CEO and current president and chief product officer.

TechTarget caught up with Arnold to talk about customer trends and the ways SwiftStack is responding in an increasingly cloud-minded IT world. Arnold unveiled product news about SwiftStack adding Microsoft Azure as a target for its 1space technology, which facilitates a single namespace between object storage locations for cloud platform compatibility. The company already supported Amazon S3 and Google.

SwiftStack’s storage software, which is based on open source OpenStack Swift, runs on commodity hardware on premises, but the 1space technology can run in the public cloud to facilitate access to public and private cloud data. Nearly all of SwiftStack’s estimated 125 customers have some public cloud footprint, according to Arnold.
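As a toy illustration of the single-namespace idea (not SwiftStack’s actual 1space API), the sketch below merges object listings from a hypothetical on-premises S3-compatible endpoint and an AWS S3 bucket into one view; the endpoint URL and bucket names are invented for this example.

```python
import boto3

# Toy illustration of a single namespace across locations -- not SwiftStack's
# 1space API. Endpoint URL and bucket names are hypothetical.
on_prem = boto3.client("s3", endpoint_url="https://objects.example.internal:8080")
aws = boto3.client("s3")  # regular AWS S3

def list_keys(client, bucket):
    resp = client.list_objects_v2(Bucket=bucket)
    return {obj["Key"] for obj in resp.get("Contents", [])}

# Merge both listings into one logical view of the data set.
namespace = list_keys(on_prem, "research-data") | list_keys(aws, "research-data")
for key in sorted(namespace):
    print(key)
```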

Arnold also revealed a new distributed, multi-region erasure code option that can enable customers to reduce their storage footprint.

What caused SwiftStack to change its sales approach?

Joe Arnold: At SwiftStack, we’ve always been focused on applications that are in the data path and mission critical to our customers. Applications need to generate more value from the data. People are distributing data across multiple locations, between the public cloud and edge data locations. That’s what we’ve been really good at. So, the change of focus with the go-to-market path has been to double down on those efforts rather than what we had been doing.

How would you compare your vision of object storage with what you see as the conventional view of object storage?

Arnold: The conventional view of object storage is that it’s something to put in the corner. It’s only for cold data that I’m not going to access. But that’s not the reality of how I was brought up through object storage. My first exposure to object storage was building platforms on Amazon Web Services when they introduced S3. We immediately began using that as the place to store data for applications that were directly in the data path.

Didn’t object storage tend to address backup and archive use cases because it wasn’t fast enough for primary workloads?

Arnold: I wouldn’t say that. Our customers are using their data for their applications. That’s usually a large data set that can’t be stored in traditional ways. Yes, we do have customers that use [SwiftStack] for purely cold archive and purely backup. In fact, we have features and capabilities to enhance some of the cold storage capabilities of the product. What we’ve changed is our go-to-market approach, not the core product.

So, for example, we’re adding a distributed, multi-region erasure code storage policy that customers can use across three data centers for colder data. It allows entire segments of data — data bits and parity bits — to be distributed across multiple sites and, to retrieve data, only two of the data centers need to be online.

How does the new erasure code option differ from what you’ve offered in the past?

Arnold: Before, we offered the ability to use erasure code where each site could fully reconstruct the data. A data center could be offline, and you could still reconstruct fully. Now, with this new approach, you can store data more economically, but it requires two of three data centers to be online. It’s just another level of efficiency in our storage tier. Customers can distribute data across more data centers without using as much raw storage footprint and still have high levels of durability and availability. Since we’re building out storage workflows that tier up and down across different storage tiers, they can utilize this one for their most cold data storage policies.
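To make the footprint savings concrete, the arithmetic below compares a hypothetical spread-out policy — 8 data plus 4 parity fragments split evenly across three sites, with any two sites sufficient to rebuild — against the older style in which every site keeps a locally reconstructable copy. The fragment counts are assumptions for illustration, not SwiftStack’s actual policy parameters.

```python
def raw_footprint(data_fragments: int, parity_fragments: int) -> float:
    """Raw bytes stored per byte of user data for one erasure-coded copy."""
    return (data_fragments + parity_fragments) / data_fragments

# Hypothetical distributed policy: 8 data + 4 parity fragments spread evenly
# across three data centers (4 fragments per site). Any two sites together
# hold 8 fragments -- enough to rebuild -- so one whole site can be offline.
distributed = raw_footprint(8, 4)        # 1.5x raw storage per byte

# Older per-site policy: each of the three sites keeps its own locally
# reconstructable 8+4 copy, so the cluster stores three full copies.
per_site = 3 * raw_footprint(8, 4)       # 4.5x raw storage per byte

print(f"distributed: {distributed}x raw, per-site reconstructable: {per_site}x raw")
```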

Does the new erasure coding target users who strictly do archiving, or will it also benefit those doing AI and analytics?

Arnold: They absolutely need it. Data goes back and forth between their core data center, the edge and the public cloud in workflows such as autonomous vehicles, personalized medicine, telco and connected city. People need to manage data between different tiers as they’re evolving from more traditional-based applications into more modern, cloud-native type applications. And they need this ultra-cold tier.

How similar is this cold tier to Amazon Glacier?

Arnold: From a cost point of view, it will be similar. From a performance point of view, it’s much better. From a data availability point of view, it’s much better. It costs a lot of money to egress data out of something like AWS Glacier.

How important is flash technology in getting performance out of object storage?

Arnold: If the applications care about concurrency and throughput, particularly when it comes to a large data set, then a disk-based solution is going to satisfy their needs. Because the SwiftStack product’s able to distribute requests across lots of disks at the same time, they’re able to sustain the concurrency and throughput. Sure, they could go deploy a flash solution, but that’s going to be extremely expensive to get the same amount of storage footprint. We’re able to get single storage systems that can deliver a hundred gigabytes a second aggregate read-write throughput rates. That’s nearly a terabit of throughput across the cluster. That’s all with disk-based storage.
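A quick back-of-the-envelope check of those figures — the per-disk rate below is an assumption, not a SwiftStack specification:

```python
aggregate_gbytes_per_sec = 100                 # "a hundred gigabytes a second"
print(aggregate_gbytes_per_sec * 8, "Gbit/s")  # 800 Gbit/s -- "nearly a terabit"

# If each spinning disk sustains roughly 200 MB/s of sequential throughput,
# on the order of 500 disks must serve reads and writes concurrently to hit
# that aggregate, which is why distributing requests across disks matters.
per_disk_mbytes_per_sec = 200
print(aggregate_gbytes_per_sec * 1000 // per_disk_mbytes_per_sec, "disks")
```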

What do you think of vendors such as Pure Storage offering flash-based options with cheaper quad-level cell (QLC) flash that compares more favorably price-wise to disk?

Arnold: QLC flash is great, too. We support that as well in our product. We’re not dogmatic about using or not using flash. We’re trying to solve large-footprint problems of our customers. We do have customers using flash with a SwiftStack environment today. But they’re using it because they want reduced latencies across a smaller storage footprint.

How do you see demand for AWS, Microsoft and Google based on customer feedback?

Arnold: People want options and flexibility. I think that’s the reason why Kubernetes has become popular, because that enables flexibility and choice between on premises and the public cloud, and then between public clouds. Our customers were asking for the same. We have a number of customers focused on Microsoft Azure for their public cloud usage. And they want to be able to manage SwiftStack data between their on-premises environments with SwiftStack and the public cloud. So, we added the 1space functionality to include Azure.

What tends to motivate your customers to use the public cloud?  

Arnold: Some use it because they want to have disaster recovery ready to go up in the public cloud. We will mirror a set of data and use that as a second data center if they don’t already have one. We have customers that collect data from partners or devices out in the field. The data lands in the public cloud, and they want to move it to their on-premises environment. The other example would be customers that want to use the public cloud for compute resources where they need access to their data, but they don’t want to necessarily have long-term data storage in the public clouds. They want the flexibility of which public cloud they’re going to use for their computation and application runtime, and we can provide them connections to the storage environment for those use cases.

Do you have customers who have second thoughts about their cloud decisions due to egress and other costs?

Arnold: Of course. That happens in all directions. Sometimes you’re helping people move more stuff into the public cloud. In some situations, you’re pulling down data, or maybe it’s going in between clouds. They may have had a storage footprint in the public cloud that was feeding to some end users or some computation process. The egress charges were getting too high. The footprint was getting too high. And that costs them a tremendous amount month over month. That’s where we have the conversation. But it still doesn’t mean that they need to evacuate entirely from the public cloud. In fact, many customers will keep the storage on premises and use the public cloud for what it’s good at — more burstable computation points.

What’s your take on public cloud providers coming out with various on-premises options, such as Amazon Outposts and Azure Stack?

Arnold: It’s the trend of ‘everything as a service.’ I think what customers want is a managed experience. The number of operators who are able to manage these big environments is becoming harder and harder to come across. So, it’s a natural for those companies to offer a managed on-premises product. We feel the same way. We think that managing large sets of infrastructure needs to be highly automated, and we’ve built our product to make that as simple as possible. And we offer a product to do storage as a service on premises for customers who want us to do remote operations of their SwiftStack environments.

How has Kubernetes- and container-based development affected the way you design your product?

Arnold: Hugely. It impacts how applications are being developed. Kubernetes gives an organization the flexibility to deploy an application in different environments, whether that’s core data centers, bursting out into the public cloud or crafting applications out to the edge. At SwiftStack, we need to make the data just as portable as the containerized application is. That’s why we developed 1space. A huge number of our customers are using Kubernetes. That just naturally lends itself to the use of something like 1space to give them the portability they need for access to their data.

What gaps do you need to fill to more fully address what customers want to do?

Arnold: One is further fleshing out ‘everything as a service.’ We just launched a service around that. As more customers adopt that, we’re going to have more work to do, as the deployments become more diverse across not just core data centers, but also edge data centers.

I see the convergence of file and object workflows and furthering 1space with our edge-to-core-to-cloud workflows. Particularly in the world of high-performance data analytics, we’re seeing the need for object — but it’s a world that is dominated by file-based applications. Data gets pumped into the system by robots, and object storage is awesome for that because it’s easy and you get lots of concurrency and lots of parallelism. However, you see humans building out algorithms and doing research and development work. They’re using file systems to do much of their programming, particularly in this high performance data analytics world. So, managing the convergence between file and object is an important thing to do to solve those use cases.

AWS security faces challenges after a decade of dominance

Amazon Web Services has a stranglehold on the public cloud market, but the company’s dominance in cloud security is facing new challenges.

The world’s largest cloud provider earned a reputation over the last 10 years as an influential leader in IaaS security, thanks to products ranging from AWS Identity & Access Management and Key Management Service in the earlier part of the decade to more recent developments in event-driven security. AWS security features helped the cloud service provider establish its powerful market position; according to Gartner, AWS in 2018 earned an estimated $15.5 billion in revenue for nearly 48% of the worldwide public IaaS market.

But at the re:Invent 2019 conference last month, many of the new security tools and features announced were designed to fix existing issues, such as misconfigurations and data exposures, rather than push AWS security to new heights. “There wasn’t much at re:Invent that I’d call security,” said Colin Percival, founder of open source backup service Tarsnap and an AWS Community Hero, via email. “Most of what people are talking about as security improvements address what I’d call misconfiguration risk.”

Meanwhile, Microsoft has not only increased its cloud market share but also invested heavily in new Azure security features that some believe rival AWS’ offerings. Rich Mogull, president and analyst at Securosis, said there are two sides to AWS security — the inherent security of the platform’s architecture, and the additional tools and products AWS provides to customers.

“In terms of the inherent security of the platform, I still think Amazon is very far ahead,” he said, citing AWS’ strengths such as availability zones, segregation, and granular identity and access management. “Microsoft has done a lot with Azure, but Amazon still has a multi-year lead. But when it comes to security products, it’s more of a mixed bag.”

Microsoft has been able to close the gap in recent years with the introduction of its own set of products and tools that compete with AWS security offerings, he said. “Azure Security Center and AWS Security Hub are pretty comparable, and both have strengths and weaknesses,” Mogull said. “Azure Sentinel is quite interesting and seems more complete than AWS Detective.”

New tools, old problems

Arguably the biggest AWS security development at re:Invent was a new tool designed to fix a persistent problem for the cloud provider: accidental S3 bucket exposures. The IAM Access Analyzer, which is part of AWS’ Identity and Access Management (IAM) console, alerts users when an S3 bucket is possibly misconfigured to allow public access via the internet and lets them block such access with one click.
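The article does not document the Access Analyzer API itself, but the kind of guardrail behind the one-click fix can be sketched with boto3 and S3 Block Public Access; the bucket name below is hypothetical.

```python
import boto3

# Minimal sketch (not the Access Analyzer API) of the guardrail the console's
# one-click fix applies: enable S3 Block Public Access so ACLs and bucket
# policies cannot expose the bucket publicly. Bucket name is hypothetical.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-sensitive-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```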

AWS had previously made smaller moves, including changes to S3 security settings and interfaces, to curb the spate of high-profile and embarrassing S3 exposures in recent years. IAM Access Analyzer is arguably the strongest move yet to resolve the ongoing problem.

“They created the S3 exposure issue, but they also fixed it,” said Jerry Gamblin, principal security engineer at vulnerability management vendor Kenna Security, which is an AWS customer. “I think they’ve really stepped up in that regard.”

Still, some AWS experts feel the tool doesn’t fully resolve the problem. “Tools like IAM Access Analyzer will definitely help some people,” Percival said, “but there’s a big difference between warning people that they screwed up and allowing people to make systems more secure than they could previously.”

Scott Piper, an AWS security consultant and founder of Summit Route in Salt Lake City, said, “It’s yet another tool in the toolbelt, and it’s free, but it’s not enabled by default.”

There are other issues with IAM Access Analyzer. “With this additional information, you have to get that to the customer in some way,” Piper said. “And doing that can be awkward and difficult with this service and others in AWS like GuardDuty, because it doesn’t make cross-region communication very easy.”

For example, EC2 regions are isolated to ensure the highest possible fault tolerance and stability for customers. But Piper said the isolation presents challenges for customers using multiple regions because it’s difficult to aggregate GuardDuty alerts to a single source, which requires security teams to analyze “multiple panes of glass instead of one.”

Metadata headaches

AWS recently addressed another security issue that became a high-profile concern for enterprises following the Capital One breach last summer. The attacker in that case exploited a server-side request forgery (SSRF) vulnerability to access the AWS metadata service for the company’s EC2 instances, which allowed them to obtain credentials contained in the service.

The Capital One breach led to criticism from security experts as well as lawmakers such as Sen. Ron Wyden (D-Ore.), who questioned why AWS hadn’t addressed SSRF vulnerabilities for its metadata service. The lack of security around the metadata service has concerned some AWS experts for years; in 2016, Percival penned a blog post titled “EC2’s most dangerous feature.”

“I think the biggest problem Amazon has had in recent years — judging by the customers affected — is the lack of security around their instance metadata service,” Percival told SearchSecurity.

In November, AWS made several updates to the metadata service to prevent unauthorized access, including the option to turn off access to the service altogether. Mogull said the metadata service update was crucial because it improved security around AWS account credentials.
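The update centers on a token-based, session-oriented request flow for the metadata service. A minimal sketch of that flow, with hypothetical instance details in the comments:

```python
import requests

IMDS = "http://169.254.169.254"

# Fetch a short-lived session token first (the hardened, opt-in flow) ...
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text

# ... then present it when reading metadata such as role credentials.
roles = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
).text
print(roles)

# Operators can require the token flow, or turn the service off entirely:
#   aws ec2 modify-instance-metadata-options --instance-id <id> --http-tokens required
#   aws ec2 modify-instance-metadata-options --instance-id <id> --http-endpoint disabled
```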

But like other AWS security features, the metadata service changes are not enabled by default. Percival said enabling the update by default would’ve caused issues for enterprise applications and services that rely on the existing version of the service. “Amazon was absolutely right in making their changes opt-in since if they had done otherwise, they would have broken all of the existing code that uses the service,” he said. “I imagine that once more or less everyone’s code has been updated, they’ll switch this from opt-in to opt-out — but it will take years before we get to that point.”

Percival also said the update is “incomplete” because it addresses common misconfigurations but not software bugs. (Percival is working on an open source tool that he says will provide “a far more comprehensive fix to this problem,” which he hopes to release later this month.)

Still, Piper said the metadata service update is an important step for AWS security because it showed the cloud provider was willing to acknowledge there was a problem with the existing service. That willingness and responsiveness hasn’t always been there in the past, he said.

“AWS has historically had the philosophy of providing tools to customers, and it’s kind of up to customers to use them and if they shoot themselves in the foot, then it’s the customers’ fault,” Piper said. “I think AWS is starting to improve and change that philosophy to help customers more.”

AWS security’s road ahead

While the metadata service update and IAM Access Analyzer addressed lingering security issues, experts highlighted other new developments that could strengthen AWS’ position in cloud security.

AWS Nitro Enclaves, for example, is a new EC2 capability introduced at re:Invent 2019 that allows customers to create isolated instances for sensitive data. The Nitro Enclaves, which will be available in preview this year, are virtual machines attached to EC2 instances but have CPU and memory isolation from the instances and can be accessed only through secure local connections.
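As a conceptual sketch of that secure local connection — not AWS’ published enclave tooling — a parent instance would reach its enclave over a local vsock socket rather than a network interface; the CID, port and payload below are hypothetical.

```python
import socket

# The enclave is reachable only over a local vsock channel from its parent
# EC2 instance; the CID, port and payload here are hypothetical.
ENCLAVE_CID = 16
ENCLAVE_PORT = 5005

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
    s.connect((ENCLAVE_CID, ENCLAVE_PORT))
    s.sendall(b'{"action": "sign", "payload": "..."}')
    reply = s.recv(4096)

print(reply)
```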

“Nitro Enclaves will have a big impact for customers because of its isolation and compartmentalization capabilities,” which will give enterprises’ sensitive data an additional layer of protection against potential breaches, Mogull said.

Percival agreed that Nitro Enclaves could possibly “raise the ceiling” for AWS security, though he cautioned against using them. “Enclaves are famously difficult for people to use correctly, so it’s hard to predict whether they will make a big difference or end up being another of the many ‘Amazon also has this feature, which nobody ever uses’ footnotes.”

Experts also said AWS’ move to strengthen its ARM-based processor business could have major security implications. The cloud provider announced at re:Invent 2019 that it will be launching EC2 instances that run on its new, customized ARM chips, dubbed Graviton2.

Gamblin said the Graviton2 processors are a security play in part because of recent microprocessor vulnerabilities and side channel attacks like Meltdown and Spectre. While some ARM chips were affected by both Meltdown and Spectre, subsequent side channel attacks and Spectre variants have largely affected x86 processors.

“Amazon doesn’t want to rely on other chips that may be vulnerable to side channel attacks and may have to be taken offline and rebooted or suffer performance issues because of mitigations,” Gamblin said.

Percival said he was excited by the possibility of the cloud provider participating in ARM’s work on the “Digital Security by Design” initiative, a private-sector partnership with the UK that is focused in part on fundamentally restructuring — and improving — processor security. The results of that project will be years down the road, Percival said, but it would show a commitment from AWS to once again raising the bar for security.

“If it works out — and it’s a decade-long project, which is inherently experimental in nature — it could be the biggest step forward for computer security in a generation.”

Cisco 2020: Challenges, prospects shape the new year

Cisco finished 2019 with a blitz of announcements that recast the company’s service provider business. Instead of providing just integrated hardware and software, Cisco became a supplier of components for open gear.

Cisco enters the new decade with rearchitected silicon tailored for white box routers favored by cloud providers and other organizations with hyperscale data centers. To add punch to its new Silicon One chipset, Cisco plans to offer high-speed integrated optics from Acacia Communications. Cisco expects to complete its $2.6 billion acquisition of Acacia in 2020.

Cisco is aiming its silicon-optics combo at Broadcom. The chipmaker has been the only significant silicon supplier for white box routers and switches built on specifications from the Open Compute Project. The specialty hardware has become the standard within the mega-scale data centers of cloud providers like AWS, Google and Microsoft; and internet companies like Facebook.

“I think the Silicon One announcement was a watershed moment,” said Chris Antlitz, principal analyst at Technology Business Research Inc. (TBR).

Cisco designed Silicon One so white box manufacturers could program the hardware platform for any router type. Gear makers like Accton Technology Corporation, Edgecore Networks and Foxconn Technology Group will be able to use the chip in core, aggregation and access routers. Eventually, they could also use it in switches.

Cisco 2020: Silicon One in the 5G market

Cisco is attacking the cloud provider market by addressing its hunger for higher bandwidth and lower latency. At the same time, the vendor will offer its new technology to communication service providers. Their desire for speed and higher performance will grow over the next couple of years as they rearchitect their data centers to deliver 5G wireless services to businesses.

For the 5G market, Cisco could combine Silicon One with low-latency network interface cards from Exablaze, which Cisco plans to acquire by the end of April 2020. The combination could produce exceptionally fast switches and routers to compete with other telco suppliers, including Ericsson, Juniper Networks, Nokia and Huawei. Startups are also targeting the market with innovative routing architectures.

“Such a move could give Cisco an edge,” said Tom Nolle, president of networking consultancy CIMI Corp., in a recent blog. “If you combine a low-latency network card with the low-latency Silicon One chip, you might have a whole new class of network device.”

Cisco 2020: Trouble with the enterprise

Cisco will launch its repositioned service provider business, while contending with the broader problem of declining revenues. Cisco could have difficulty reversing that trend, while also addressing customer unhappiness with the high price of its next-generation networking architecture for enterprise data centers. 

“I do think 2020 is likely to be an especially challenging year for Cisco,” said John Burke, an analyst at Nemertes Research. “The cost of getting new goodies is far too high.”

Burke said he had spoken to several people in the last few months who had dropped Cisco gear from their networks to avoid the expense. At the same time, companies have reported using open source network automation tools in place of Cisco software to lower costs.

Cisco software deemed especially expensive includes its Application Centric Infrastructure (ACI) and DNA Center, Burke said. ACI and DNA Center are at the heart of Cisco’s modernized approach to the data center and campus network, respectively.

Both offer significant improvements over Cisco’s older network architectures. But they require businesses to purchase new Cisco hardware and retrain IT staff.

John Mulhall, an independent contractor with 20 years of networking experience, said any new generation of Cisco technology requires extra cost analyses to justify the price.

“As time goes on, a lot of IT shops are going to be a little bit reluctant to just go the standard Cisco route,” he said. “There’s too much competition out there.”

Cisco SD-WAN gets dinged

Besides getting criticized for high prices, Cisco also took a hit in 2019 for the checkered performance of its Viptela software-defined WAN, a centerpiece for connecting campus employees to SaaS and cloud-based applications. In November, Gartner reported that Viptela running on Cisco’s IOS-XE platform had “stability and scaling issues.”

Also, customers who had bought Cisco’s ISR routers during the last few years reported the hardware didn’t have enough throughput to support Viptela, Gartner said.

The problems convinced the analyst firm to drop Cisco from the “leaders” ranking of Gartner’s latest Magic Quadrant for WAN Edge Infrastructure.

Gartner and some industry analysts also knocked Cisco for selling two SD-WAN products — Viptela and Meraki — with separate sales teams and distinct management and hardware platforms.

The approach has made it difficult for customers and resellers to choose the product that best suits their needs, analysts said. Other vendors use a single SD-WAN product to address all use cases.

“Cisco’s SD-WAN is truly a mixed bag,” said Roy Chua, principal analyst at AvidThink. “In the end, the strategy will need to be clearer.”

Antlitz of TBR was more sanguine about Cisco’s SD-WAN prospects. “We see no reason to believe that Cisco will lose its status as a top-tier SD-WAN provider.”

AWS Outposts brings hybrid cloud support — but only for Amazon

LAS VEGAS — AWS controls nearly half of the public IaaS market today and, judging by the company’s rules against use of the term ‘multi-cloud,’ would be happy to have it all, even as rivals Microsoft and Google make incremental gains and more customers adopt multi-cloud strategies.

That’s the key takeaway from the start of this year’s massive re:Invent conference here this week, which was marked by the release of AWS Outposts for hybrid clouds and a lengthy keynote from AWS CEO Andy Jassy that began with a tongue-in-cheek invite to AWS’ big tent in the cloud.

“You have to decide what you’re going to bring,” Jassy said of customers who want to move workloads into the public cloud. “It’s a little bit like moving from a home,” he added, as a projected slide comically depicted moving boxes affixed with logos for rival vendors such as Oracle and IBM sitting on a driveway.

“It turns out when companies are making this big transformation, what we see is that all bets are off,” Jassy said. “They reconsider everything.”

For several years now, AWS has used re:Invent as a showcase for large customers in highly regulated industries that have made substantial, if not complete, migrations to its platform. One such company is Goldman Sachs, which has worked with AWS on several projects, including Marcus, a digital banking service for consumers. A transaction banking service that helps companies manage their cash in a cloud-native stack on AWS is coming next year, said Goldman Sachs CEO David Solomon, who appeared during Jassy’s talk. Goldman is also moving its Marquee market intelligence platform into production on AWS.

Along with showcasing enthusiastic customers like Goldman Sachs, Jassy took a series of shots at the competition, some veiled and others overt.

“Every industry has lots of companies with mainframes, but everyone wants to move off of them,” he claimed. The same goes for databases, he added. Customers are trying to move away from Oracle and Microsoft SQL Server due to factors such as expense and lock-in, he said. Jassy didn’t mention that similar accusations have been lodged at AWS’ native database services.

Jassy repeatedly took aim at Microsoft, which has the second most popular cloud platform after AWS, albeit with a significant lag. “People don’t want to pay the tax anymore for Windows,” he said.

But it isn’t as if AWS would actually shun Microsoft technology, since it has long been a host for many Windows Server workloads. In fact, it wants as much as it can get. This week, AWS introduced a new bring-your-own-license program for Windows Server and SQL Server designed to make it easier for customers to run those licenses on AWS, versus Azure.

AWS pushes hybrid cloud, but rejects multi-cloud

One of the more prominent, although long-expected, updates this week is the general availability of AWS Outposts. These specialized server racks provided by AWS reside in customers’ own data centers, in order to comply with regulations or meet low-latency needs. They are loaded with a range of AWS software, are fully managed by AWS and maintain continuous connections to local AWS regions.

The company is taking the AWS Outposts idea a bit further with the release of new AWS Local Zones. These will consist of Outpost machines placed in facilities very close to large cities, giving customers who don’t want or have their own data centers, but still have low-latency requirements, another option. Local Zones, the first of which is in the Los Angeles area, provide this capability and tie back to AWS’ larger regional zones, the company said.

Outposts, AWS Local Zones and the previously launched VMware Cloud on AWS constitute a hybrid cloud computing portfolio for AWS — but you won’t hear Jassy or other executives say the phrase multi-cloud, at least not in public.

In fact, partners who want to co-brand with AWS are forbidden from using that phrase and similar verbiage in marketing materials, according to an AWS co-branding document provided to SearchAWS.com.

“AWS does not allow or approve use of the terms ‘multi-cloud,’ ‘cross cloud,’ ‘any cloud,’ ‘every cloud,’ or any other language that implies designing or supporting more than one cloud provider,” the co-branding guidelines, released in August, state. “In this same vein, AWS will also not approve references to multiple cloud providers (by name, logo, or generically).”

An AWS spokesperson didn’t immediately reply to a request for comment.

The statement may not be surprising in context of AWS’s market lead, but does stand in contrast to recent approaches by Google, with the Anthos multi-cloud container management platform, and Microsoft’s Azure Arc, which uses native Azure tools, but has multi-cloud management aspects.

AWS customers may certainly want multi-cloud capabilities, but they can protect themselves by using portable products and technologies such as Kubernetes at the lowest level, with the tradeoff being the manual labor involved, said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

“To be fair, Azure and Google are only at the beginning of [multi-cloud],” he said.

Meanwhile, many AWS customers have apparently grown quite comfortable moving their IT estates onto the platform. One example is Cox Automotive, known for its digital properties such as Autotrader.com and Kelley Blue Book.

In total, Cox has more than 200 software applications, many of which it accrued through a series of acquisitions, and the company expects to move them all onto AWS, said Chris Dillon, VP of architecture, during a re:Invent presentation.

Cox is using AWS Well-Architected Framework, a best practices tool for deployments on AWS, to manage the transition.

“When you start something new and do it quickly you always run the risk of not doing it well,” said Gene Mahon, director of engineering operations. “We made a decision early on that everything would go through a Well-Architected review.”

SAP sees S/4HANA migration as its future, but do customers?

The first part of our 20-year SAP retrospective examined the company’s emerging dominance in the ERP market and its transition to the HANA in-memory database. Part two looks at the release of SAP S/4HANA in February 2015. The “next-generation ERP” was touted by the company as the key to SAP’s future, but it ultimately raised questions that in many cases have yet to be answered. The issues surrounding the S/4HANA migration remain the most compelling initiative for the company’s future.

Questions about SAP’s future have shifted in the past year, as the company has undergone an almost complete changeover in its leadership ranks. Most of the SAP executives who drove the strategy around S/4HANA and the intelligent enterprise have left the company, including former CEO Bill McDermott. New co-CEOs Jennifer Morgan and Christian Klein are SAP veterans, and analysts don’t think the change in leadership will make for significant changes in the company’s technology and business strategy.

But they will take over the most daunting task SAP has faced: convincing customers of the business value of the intelligent enterprise, a data-driven transformation of businesses with S/4HANA serving as the digital core. As part of the transition toward intelligence, SAP is pushing customers to move off of tried and true SAP ECC ERP systems (or the even older SAP R/3), and onto the modern “next-generation ERP” S/4HANA. SAP plans to end support for ECC by 2025.

S/4HANA is all about enabling businesses to make decisions in real time as data becomes available, said Dan Lahl, SAP vice president of product marketing and a 24-year SAP veteran.

“That’s really what S/4HANA is about,” Lahl said. “You want to analyze the data that’s in your system today. Not yesterday’s or last week’s information and data that leads you to make decisions that don’t even matter anymore, because the data’s a week out. It’s about giving customers the ability to make better decisions at their fingertips.”

S/4HANA migration a matter of when, not if

Most SAP customers see the value of an S/4HANA migration, but they are concerned about how to get there, with many citing concerns about the cost and complexity of the move. This is a conundrum that SAP acknowledges.

“We see that our customers aren’t grappling with if [they are going to move], but when,” said Lloyd Adams, managing director of the East Region at SAP America. “One of our responsibilities, then, is to provide that clarity and demonstrate the value of S/4HANA, but to do so in the context of the customers’ business and their industry. Just as important as showing them how to move, we need to do it as simply as possible, which can be a challenge.”

S/4HANA is the right platform for the intelligent enterprise because of the way it can handle all the data that the intelligent enterprise requires, said Derek Oats, CEO of Americas at SNP, an SAP partner based in Heidelberg, Germany, that provides migration services.

In order to build the intelligent enterprise, customers need to have a platform that can consume data from a variety of systems — including enterprise applications, IoT sensors and other sources — and ready it for analytics, AI and machine learning, according to Oats. S/4HANA uses SAP HANA, a columnar, in-memory database, to do that and then presents the data in an easy-to-navigate Fiori user interface, he said.

“If you don’t have that ability to push out of the way a lot of the work and the crunching that has often occurred down to the base level, you’re kind of at a standstill,” he said. “You can only get so much out of a relational database because you have to rely on the CPU at the application layer to do a lot of the crunching.”

S/4HANA business case difficult to make

Although many SAP customers understand the benefits of S/4HANA, SAP has had a tough sell in getting its migration message across to its large customer base. The majority of customers plan to remain on SAP ECC and have only vague plans for an S/4HANA migration.

“The potential for S/4HANA hasn’t been realized to the degree that SAP would like,” said Joshua Greenbaum, principal at Enterprise Applications Consulting. “More companies are really looking at S/4HANA as the driver of genuine business change, and recognize that this is what it’s supposed to be for. But when you ask them, ‘What’s your business case for upgrading to S/4HANA?’ The answer is ‘2025.’”

One of the problems that SAP faces when convincing customers of the value of S/4HANA and the intelligent enterprise is that no simple use case drives the point home, Greenbaum said. Twenty years ago, Y2K provided an easy-to-understand reason why companies needed to overhaul their enterprise business systems, and the fear that computers wouldn’t adapt to the year 2000 led in large measure to SAP’s early growth.

“Digital transformation is a complicated problem and the real issue with S/4HANA is that the concepts behind it are relatively big and very specific to company, line of business and geography,” he said. “So the use cases are much harder to justify, or it’s much more complicated to justify than, ‘Everything is going to blow up on January 1, 2000, so we have to get our software upgraded.'”

Evolving competition faces S/4HANA

Jon Reed, analyst and co-founder of ERP news and analysis firm Diginomica.com, agrees that SAP has successfully embraced the general concept of the intelligent enterprise with S/4HANA, but struggles to present understandable use cases.

“The question of S/4HANA adoption remains central to SAP’s future prospects, but SAP customers are still trying to understand the business case,” Reed said. “That’s because agile, customer-facing projects get the attention these days, not multi-year tech platform modernizations. For those SAP customers that embrace a total transformation — and want to use SAP tech to do it — S/4HANA looks like a viable go-to product.”

SAP’s issues with driving S/4HANA adoption may not come from the traditional enterprise competitors like Oracle, Microsoft and Infor, but from cloud-based business applications like Salesforce and Workday, said Eric Kimberling, president of Third Stage Consulting, a Denver-based firm that provides advice on ERP deployments and implementations.

“They aren’t direct competitors with SAP; they don’t have the breadth of functionality and the scale that SAP does, but they have really good functionality in their best-of-breed world,” Kimberling said. “Companies like Workday and Salesforce make it easier to add a little piece of something without having to worry about a big SAP project, so there’s an indirect competition with S/4HANA.”

SAP customers are going to have to adapt to evolving enterprise business conditions regardless of whether or when they move to S/4HANA, Greenbaum said.

“Companies have to build business processes to drive the new business models. Whatever platform they settle on, they’re going to be unable to stand still,” he said. “There’s going to have to be this movement in the customer base. The question is will they build primarily on top of S/4HANA? Will they use an Amazon or an Azure hyperscaler as the platform for innovation? Will they go to their CRM or workforce automation tool for that? The ‘where’ and ‘what next’ is complicated, but certainly a lot of companies are positioning themselves to use S/4HANA for that.”

Microsoft cybersecurity strategy, hybrid cloud in focus at Ignite

Microsoft CEO Satya Nadella has hinted that the big news at the company’s Ignite conference will involve cybersecurity and updates to its approach to hybrid and distributed cloud applications.

“Rising cyber threats and increasing regulation mean security and compliance is a strategic priority for every organization,” Nadella said on Microsoft’s earnings call for the first quarter of 2020 this week. He highlighted that the company has offerings across identity, security and compliance that span people, devices, apps, developer tools, data and infrastructure “to protect customers in today’s zero trust environment.”

In addition to Microsoft cybersecurity-related comments, Nadella addressed investor questions about the company’s hybrid cloud business.

“Our approach has always been about this distributed computing fabric, or thinking about hybrid not as some transitory phase, but as a long-term vision for how computing will meet the real-world needs,” he replied in the call.

Microsoft’s hybrid cloud offerings include Azure Stack, which takes a subset of Azure’s software foundation and installs it on specialized hardware to be run in customer-controlled environments.

At Ignite, “you will see us take the next leap forward even in terms of how we think about the architecture inclusive of the application models, programming models on what distributed computing looks like going forward,” Nadella said.

Microsoft targets cybersecurity, hybrid cloud

Given that cybersecurity and hybrid cloud computing are two of the hottest areas in enterprise tech today, Nadella’s teases aren’t especially surprising. But the specific details of what Microsoft has planned are worth delving into, analysts said.

It was a bit surprising that Nadella didn’t mention Azure Stack in his remarks on the conference call, given the progress that product has made in the market, said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

However, Ignite’s session agenda includes a fair number of Azure Stack sessions, covering matters such as migration planning and operational best practices. One possibility is that Microsoft will announce expansions of Azure Stack’s footprint so it’s more on par with the Azure cloud’s full capabilities, Mueller added. 

Azure CTO Mark Russinovich is scheduled to speak at Ignite on multiple occasions. One session will focus on new innovations in Azure’s global architecture and another targets next-generation application development and deployment.

On Twitter, Russinovich said he’ll discuss matters such as DAPR, Microsoft’s recently launched open source runtime for microservices applications. He also plans to talk about Open Application Model, a specification for cloud-native app development, and Rudr, a reference implementation of the Open Application Model (OAM).

The OAM is a project under the Open Web Foundation. It serves as a specification so that the application description is separated from the details of how the application is deployed and managed by the infrastructure.

According to a source familiar with the company’s plans, Microsoft released OAM because applications are designed to be built by developers but then passed on for execution by an operations team; the source added that DAPR is a way to build applications that are designed to be componentized.
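A rough sketch of that sidecar model, assuming a Dapr sidecar on its default local port and using a hypothetical state store name and app ID: the application talks only to the sidecar, which decides where state lives and how other components are reached.

```python
import requests

DAPR = "http://localhost:3500/v1.0"   # default sidecar port, assumed here

# Persist state without knowing which store (Redis, Cosmos DB, ...) backs it.
requests.post(f"{DAPR}/state/statestore",
              json=[{"key": "order-42", "value": {"status": "received"}}])

# Call another component by its logical app ID, not by host or address.
resp = requests.post(f"{DAPR}/invoke/payments/method/charge",
                     json={"order": "order-42"})
print(resp.status_code)
```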

“Developers don’t have to worry about where (an application) will run,” the source said. “They just describe its resource requirements, focus on building a microservices application and not worry about how each component will communicate with the others,” he said.

Going into Ignite, the hyperscale cloud market is being driven by a couple of factors, said Jay Lyman, an analyst with 451 Research.

“AWS, Microsoft and Google sort of define the modern enterprise IT operational paradigm with their breadth of services, innovation and competition,” Lyman said. “At the same time, the market serves as a discipline for them.”

Hybrid cloud is an example of this, having emerged to meet customer needs to run on-premises infrastructure in a similar manner to public clouds, he added. Azure Stack, Google Kubernetes Engine On-Prem and AWS Outposts are some early answers to the problem.

Meanwhile, Google’s Anthos and IBM Red Hat’s OpenShift platform target multi-cloud application deployments.

“I wouldn’t be surprised to see Microsoft announce something around support for other public clouds,” Lyman said.

Microsoft cybersecurity portfolio gains gravity

Some analysts believe Microsoft is already well positioned in the cybersecurity market on the proven reliability of Windows Defender, Active Directory, the Azure Active Directory, Azure Sentinel and Office 365 Advanced Threat Protection.

“Many enterprises trust Microsoft to manage the identities of their users accessing information both from on-prem and cloud-based applications,” said Doug Cahill, senior analyst and group director at the Enterprise Strategy Group (ESG) in Milford, Mass. “They’re already a formidable cybersecurity competitor,” he said.

In a recent survey conducted by ESG, IT pros said one of the most important attributes they look for in an enterprise-class cybersecurity vendor is the reliability of products across their portfolio and that they are “well-aligned” with their particular IT initiatives.

“Obviously, Microsoft is one of the leading IT vendors,” Cahill said. “They have Active Directory, which is broadly adopted, serving as a foundational piece of their cybersecurity strategy,” he said.

Logically, the next step for Microsoft is to extend its platform so it plays across the broader attack surface, which includes the rapidly growing Office 365.

During the earnings call, Nadella ran down what he believes are the individual strengths of the company’s cybersecurity offerings. He made special note of the cloud-based Sentinel and its ability to analyze security vulnerabilities across an entire organization using AI to “detect, investigate and automatically remediate threats.”

Nadella said the company would reveal more details about its “expanding opportunities in the cybersecurity market” at Ignite.

Panasas storage roadmap includes route to software-defined

Panasas is easy to overlook in the scale-out NAS market. The company’s products don’t carry the name recognition of Dell EMC Isilon, NetApp NAS filers and IBM Spectrum Scale. But CEO Faye Pairman said her team is content to fly below the radar — for now — concentrating mostly on high-performance computing, or HPC.

The Panasas storage flagship is the ActiveStor hybrid array with the PanFS parallel file system. The modular architecture scales performance in a linear fashion, as additional capacity is added to the system. “The bigger our solution gets, the faster we go,” Pairman said.

Panasas founder Garth Gibson launched the object-based storage architecture in 2000. Gibson, a computer science professor at Carnegie Mellon University in Pittsburgh, was a developer of the RAID storage taxonomy. He serves as Panasas’ chief scientist.

Panasas has gone through many changes over the past several years, marked by varying degrees of success in broadening into mainstream commercial NAS. That was Pairman’s charter when she took over as CEO in 2010. Key executives left in a 2016 management shuffle, and while investors have provided $155 million to Panasas since its inception, the last reported funding was a $52.5 million venture round in 2013.

As a private company, Panasas does not disclose its revenue, but “we don’t have the freedom to hemorrhage cash,” Pairman said.

We caught up with Pairman recently to discuss Panasas’ growth strategy, which could include offering a software-only license option for PanFS. She also addressed how the vendor is moving to make its software portable and why Panasas isn’t jumping on the object-storage bandwagon.

Panasas storage initially aimed for the high end of the HPC market. You were hired to increase Panasas’ presence in the commercial enterprise space. How have you been executing on that strategy?

Faye Pairman: It required looking at our parallel file system and making it more commercially ready, with features added to improve stability and make it more usable and reliable. We’ve been on that track until very recently.

We have an awesome file system that is very targeted at the midrange commercial HPC market. We sell our product as a fully integrated appliance, so our next major objective — and we announced some of this already — is to disaggregate the file system from the hardware. The reason we did that is to take advantage of commodity hardware choices on the market.

Once the file system is what we call ‘portable,’ meaning you can run it on any hardware, there will be a lot of new opportunity for us. That’s what you’ll be hearing from us in the next six months.

Would Panasas storage benefit by introducing an object storage platform, even as an archive device?

Pairman: You know, this is a question we’ve struggled with over the years. Our customers would like us to service the whole market. [Object storage] would be a very different financial profile than the markets we serve. As a small company, right now, it’s not a focus for us.

We differentiate in terms of performance and scale. Normally, what you see in scale-out NAS is that the bigger it gets, the more sluggish it tends to be. We have linear scalability, so the bigger our solution gets, the faster we go.

That’s critically important to the segments we serve. It’s different from object storage, which is all about being simple and the ability to get bigger and bigger. And performance is not a consideration.

Which vendors do you commonly face off with in deals? 

Pairman: Our primary competitor is IBM Spectrum Scale, with a file system and approach that is probably the most similar to our own and a very clear target on commercial HPC. We also run into Isilon, which plays more to commercial — meaning high reads, high usability features, but [decreased] performance at scale.

And then, at the very high end, we see DataDirect Networks (DDN) with a Lustre file system for all-out performance, but very little consideration for usability and manageability.

Which industry verticals are prominent users of Panasas storage architecture? Are you a niche within the niche of HPC?

Pairman: The niche is in the niche. We target very specific markets and very specific workloads. We serve all kinds of application environments, where we manage very large numbers of users and very large numbers of files.

Our target markets are manufacturing, which is a real sweet spot, as well as life sciences and media and entertainment. We also have a big practice in oil and gas exploration and all kinds of scientific applications, and even some manufacturing applications within the federal government.

Panasas storage is a hybrid system, and we manage a combination of disk and flash. With every use case, while we specialize in managing very large files, we also have the ability to manage the file size that a company does on flash.

What impact could DDN’s acquisition of open source Lustre exert on the scale-out sector, in general, and Panasas in particular?

Pairman: I think it’s a potential market-changer and might benefit us, which is why we’re keeping a close eye on where Lustre ends up. We don’t compete directly with Lustre, which is more at the high end.

Until now, Lustre always sat in pretty neutral hands. It was in a peaceful place with Intel and Seagate, but they both exited the Lustre business, and Lustre ended up in DDN’s hands. It remains to be seen what that portends. But there is a long list of vendors that depend on Lustre remaining neutral, and now it’s in the hands of the most aggressive competitor in that space.

What happens to Lustre is less relevant to us if it stays the same. If it falters, we think we have an opportunity to move into that space. It’s potentially a big shakeup that could benefit vendors like us who build a proprietary file system.

Juniper boosting performance of SRX5000 firewall for IoT, 5G

Juniper Networks has introduced a security acceleration card that boosts the performance of the company’s SRX5000 line of firewalls to future-proof the data centers of service providers, cloud providers and large enterprises.

Juniper designed the services processing card, SPC3, for organizations anticipating large data flows from upcoming multi-cloud, internet-of-things and 5G applications. Besides meeting future demand, the SPC3 can also accommodate current traffic increases due to video conferencing, media streaming and other data-intensive applications.

The SPC3 multiplies performance by up to a factor of 11 across key metrics for the SRX5000 line, Juniper said. Organizations using the Juniper SPC2 can upgrade to the SPC3 without service interruptions.

What’s in the SRX5000 line?

The SRX5000 line’s security services include a stateful firewall, an intrusion prevention system, unified threat management and a virtual private network. Network operators manage security policies for SRX5000 hardware through Juniper’s Junos Space Security Director.

With the addition of an SPC, the SRX5000 line can support up to 2 Tbps of firewall throughput. The line’s I/O cards offer a range of connectivity options, including 1 Gigabit Ethernet, 10 GbE, 40 GbE and 100 GbE interfaces.

Security is one area where Juniper has reported quarterly revenue growth while overall sales have declined. For the quarter ended June 30, Juniper reported last month that revenue from its security business increased to $79.5 million from $68.7 million a year ago.

However, overall revenue fell 8% to $1.2 billion, and the company said sales in the current quarter would also be down. Nevertheless, the company expects to return to quarterly revenue growth in the fourth quarter.

Cisco Viptela integrated with IOS XE on ISR, ASR

Cisco has integrated its Viptela software-defined WAN with the company’s IOS XE network operating system, effectively making the cloud-controlled SD-WAN product an option for distributing network traffic from Cisco ISR and ASR routers.

Announced this week, the integration means companies using Cisco’s legacy SD-WAN product, Intelligent WAN — often used with the Integrated Services Router (ISR) — can switch to a much simpler system. IWAN’s complexity precluded broad market adoption, so when Cisco acquired Viptela last year for $610 million, many analysts predicted the company would eventually migrate customers to Viptela.

Connecting Cisco Viptela to IOS XE adds a cloud-controlled element to IOS XE hardware through the SD-WAN product’s vManage console. The cloud-based software is the centralized component for configuration management and monitoring network traffic going to and from the ISR and Aggregation Services Router (ASR) hardware.

As a router network operating system, IOS XE includes dozens of services beyond routing and switching, such as encryption, authentication, firewall capabilities and policy enforcement.

Next for Cisco Viptela

In March, Cisco launched cloud-based predictive analytics for Viptela, called vAnalytics. The software, which companies access through vManage, provides network managers with answers to what-if scenarios.

Over the next 18 months, Cisco plans to merge vManage into DNA Center, a centralized software console for managing campus networks built on top of Cisco’s Catalyst 9000 campus switches. The integration would provide network managers with a single view of their LAN, WAN and campus networks.

Companies use SD-WAN for traffic distribution across broadband, Long Term Evolution and MPLS links connecting campuses and remote offices to the internet and the corporate data center. In the first quarter, companies refreshing their campus and branch networks contributed to a more than 5% increase year to year in 1 Gb Ethernet revenue and a nearly 16% rise in port shipments, according to IDC.

Cisco claimed organizations use more than 1 million ISR and ASR routers globally. ASR routers are designed for high-bandwidth applications, such as video streaming, while ISR systems are for small or midsize networks found in small businesses and branch offices.