
IBM expands patent troll fight with its massive IP portfolio

After claiming more than a quarter century of patent leadership, IBM has expanded its fight against patent assertion entities, also known as patent trolls, by joining the LOT Network. As a founding member of the Open Invention Network in 2005, IBM has been in the patent troll fight for nearly 15 years.

The LOT Network (short for License on Transfer) is a nonprofit community of more than 600 companies that have banded together to protect themselves against patent trolls and their lawsuits. The group says companies lose up to $80 billion per year on patent troll litigation. Patent trolls are organizations that hoard patents and bring lawsuits against companies they accuse of infringing on those patents.

IBM joins the LOT Network after its $34 billion acquisition of Red Hat, which was a founding member of the organization.

“It made sense to align IBM’s and Red Hat’s views on how to manage our patent portfolio,” said Jason McGee, vice president and CTO of IBM Cloud Platform. “We want to make sure that patents are used for their traditional purposes, and that innovation proceeds and open source developers can work without the threat of patent litigation.”

To that end, IBM contributed more than 80,000 patents and patent applications to the LOT Network to shield those patents from patent assertion entities, or PAEs.


IBM joining the LOT Network is significant for a couple of reasons, said Charles King, principal analyst at Pund-IT in Hayward, Calif. First and foremost, with 27 years of patent leadership, IBM brings a load of patent experience and a sizable portfolio of intellectual property (IP) to the LOT Network, he said.

“IBM’s decision to join should also silence critics who decried how the company’s acquisition of Red Hat would erode and eventually end Red Hat’s long-standing leadership in open source and shared IP,” King said. “Instead, the opposite appears to have occurred, with IBM taking heed of its new business unit’s dedication to open innovation and patent stewardship.”


The LOT Network operates as a subscription service, charging members for the IP protection it provides. Subscription rates are based on company revenue: membership is free for companies making less than $25 million annually; companies with revenue between $25 million and $50 million pay $5,000 per year; those between $50 million and $100 million pay $10,000; those between $100 million and $1 billion pay $15,000; and rates are capped at $20,000 for companies with revenue above $1 billion.
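Expressed as a simple revenue lookup, the fee schedule works out as in the sketch below. This is only an illustration that restates the tiers described above; the function name is made up for the example, and this is not an official LOT Network calculator.

```python
def lot_annual_fee(annual_revenue_usd):
    """Annual LOT Network subscription fee for a given company revenue,
    restating the tiers described above (illustrative only)."""
    if annual_revenue_usd < 25_000_000:
        return 0          # free below $25M in annual revenue
    if annual_revenue_usd < 50_000_000:
        return 5_000      # $25M to $50M
    if annual_revenue_usd < 100_000_000:
        return 10_000     # $50M to $100M
    if annual_revenue_usd < 1_000_000_000:
        return 15_000     # $100M to $1B
    return 20_000         # capped above $1B


# Example: a company with $750 million in revenue would pay $15,000 per year.
print(lot_annual_fee(750_000_000))
```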

Meanwhile, the Open Invention Network (OIN) has three levels of participation: members, associate members and licensees. Participation in OIN is free, the organization said.

“One of the most powerful characteristics of the OIN community and its cross-license agreement is that the board members sign the exact same licensing agreement as the other 3,100 business participants,” said Keith Bergelt, CEO of OIN. “The cross license is royalty-free, meaning it costs nothing to join the OIN community. All an organization or business must agree to do is promise not to sue other community participants based on the Linux System Definition.”

IFI Claims Patent Services confirmed that 2019 marked the 27th consecutive year in which IBM led the industry in U.S. patents, earning 9,262 of them last year. The patents span key technology areas such as AI, blockchain, cloud computing, quantum computing and security, McGee said.

IBM received more than 1,800 AI patents, including one for a method of teaching AI systems to understand the implications behind text or phrases of speech by analyzing other related content. IBM also gained patents for improving the security of blockchain networks.

In addition, IBM inventors were awarded more than 2,500 patents in cloud technology and grew the number of patents the company has in the nascent quantum computing field.

“We’re talking about new patent issues each year, not the size of our patent portfolio, because we’re focused on innovation,” McGee said. “There are lots of ways to gain and use patents, we got the most for 27 years and I think that’s a reflection of real innovation that’s happening.”

Since 1920, IBM has received more than 140,000 U.S. patents, he noted. In 2019, more than 8,500 IBM inventors, spanning 45 U.S. states and 54 countries, contributed to the patents awarded to IBM, McGee added.

In other patent-related news, Apple and Microsoft this week joined 35 companies who petitioned the European Union to strengthen its policy on patent trolls. The coalition of companies sent a letter to EU Commissioner for technology and industrial policy Thierry Breton seeking to make it harder for patent trolls to function in the EU.


NSA reports flaw in Windows cryptography core

After years of criticism from the infosec community about hoarding critical vulnerabilities, the National Security Agency may be changing course.

The highlight of Microsoft’s first Patch Tuesday of 2020 is a vulnerability in the Windows cryptography core that was first reported to the vendor by the NSA. The flaw in the CryptoAPI DLL (CVE-2020-0601) affects Windows 10 and Windows Server 2016 and 2019. According to Microsoft’s description, an attacker could exploit the way Windows validates elliptic curve cryptography (ECC) certificates to launch spoofing attacks.

The NSA gave a more robust description in its advisory, noting that the Windows cryptography flaw also affects “applications that rely on Windows for trust functionality,” and specifically impacts HTTPS connections, signed files and emails and signed executable code.

“Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities,” NSA wrote in its advisory. “NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available.”

Will Dormann, vulnerability analyst at the CERT Coordination Center, confirmed the issue also affects X.509 certificates, meaning an attacker could spoof a certificate chain to a trusted root certificate authority and potentially intercept or modify TLS-encrypted communication.

Johannes Ullrich, fellow at the SANS Internet Storm Center, said the flaw is especially noteworthy because “the affected library is a core component of the Windows operating systems. Pretty much all software doing any kind of cryptography uses it.”

“The flaw is dangerous in that it allows an attacker to impersonate trusted websites and trusted software publishers. Digital signatures are used everywhere to protect the integrity and the authenticity of software, web pages and, in some cases, email,” Ullrich told SearchSecurity. “This flaw could be used to trick a user into installing malicious software. Most endpoint protection products will inspect the digital signature of software the user installs, and consider software created by trusted organizations as harmless. Using this flaw, an attacker would be able to attach a signature claiming that the software was created by a trusted entity.”

However, Craig Young, computer security researcher for Tripwire’s vulnerability and exposure research team, said the impact of this Windows cryptography vulnerability might be more limited to enterprises and “most individuals don’t need to lose sleep over this attack just yet.”

“The primary attack vectors most people would care about are HTTPS session compromise and malware with spoofed authenticode signatures. The attack against HTTPS, however, requires that the attacker can insert themselves on the network between the client and server. This mostly limits the attack to nation-state adversaries,” Young told SearchSecurity. “The real risk is more likely to enterprises, where a nation-state attacker may be motivated to carry out an attack. The worst-case scenario would be that a hostile or compromised network operator is used to replace legitimate executable content from an HTTPS session with malicious binaries having a spoofed signature.”

Beyond patching, the NSA suggested network prevention and detection techniques that inspect certificates outside of Windows’ cryptographic validation.

“Some enterprises route traffic through existing proxy devices that perform TLS inspection, but do not use Windows for certificate validation. The devices can help isolate vulnerable endpoints behind the proxies while the endpoints are being patched,” NSA wrote. “Properly configured and managed TLS inspection proxies independently validate TLS certificates from external entities and will reject invalid or untrusted certificates, protecting endpoints from certificates that attempt to exploit the vulnerabilities. Ensure that certificate validation is enabled for TLS proxies to limit exposure to this class of vulnerabilities and review logs for signs of exploitation.”
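To make the idea of validating certificates outside of Windows concrete, here is a minimal sketch that checks a server’s TLS certificate chain with OpenSSL, via Python’s standard ssl module, rather than with CryptoAPI. It illustrates the general approach only; it is not the NSA’s or Microsoft’s tooling, and the host name is just an example.

```python
# Minimal sketch: validate a server's TLS certificate chain with OpenSSL
# (via Python's standard ssl module) instead of relying on Windows CryptoAPI.
# Illustrative only -- not the NSA's or Microsoft's tooling.
import socket
import ssl


def fetch_validated_cert(host, port=443):
    """Connect to host:port over TLS and return the validated peer certificate."""
    context = ssl.create_default_context()  # OpenSSL trust store + hostname checks
    with socket.create_connection((host, port), timeout=10) as sock:
        # wrap_socket raises ssl.SSLCertVerificationError if the chain or
        # host name fails to validate against the local trust store.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()


if __name__ == "__main__":
    cert = fetch_validated_cert("example.com")  # hypothetical target host
    print(cert.get("subject"), cert.get("notAfter"))
```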

NSA takes credit

Infosec experts credited the NSA for not only reporting the Windows cryptography flaw but also providing detailed insight and advice about the threat. Chris Morales, head of security analytics at Vectra, based in San Jose, Calif., praised the NSA for recommending “leveraging network detection to identify malicious certificates.”

“I think they did a great job of being concise and clear on both the problem and recommended courses of action,” Morales told SearchSecurity. “Of course, it would be great if the NSA did more of this, but it is not their normal job and I wouldn’t expect them to be accountable for doing a vendor job. Relying on the vendor for notification of security events will always be important.”

Young also commended the NSA’s advisory for being very helpful and providing “useful insights which are not included in either the CERT/CC note or the Microsoft advisory.”

The NSA is designated as the Executive Secretariat of the government’s Vulnerabilities Equities Process (VEP), which governs how the government determines which vulnerabilities found by federal agencies are kept secret and which are disclosed. However, the NSA has consistently received criticism from experts that it keeps too many vulnerabilities secret and should disclose more in order to help protect the public. In recent years, this criticism was loudest when leaked NSA cyberweapons were used in the widespread WannaCry attacks.

The NSA advisory for the Windows cryptography flaw is rare for the agency, which has become more open with warnings about potential threats but hasn’t been known to share this level of technical analysis.

Also making this vulnerability an outlier is that the NSA was given attribution in Microsoft’s patch acknowledgements section. Anne Neuberger, deputy national manager at the NSA, said on a call with news media Tuesday that this wasn’t the first vulnerability the NSA has reported to Microsoft, but it does mark the first time the agency accepted attribution.

Infosec journalist Brian Krebs, who broke the story of the Windows cryptography patch on Monday, claimed sources told him this disclosure may mark the beginning of a new initiative at NSA to make more vulnerability research available to vendors and the public.

The NSA had not responded to requests for comment at the time of this writing.


Microsoft Teams, consumer Skype interoperability launches in January

After nearly three years of fits and starts, Microsoft plans to deliver in January interoperability between Microsoft Teams and the consumer version of Skype.

The integration will let users of the two cloud-based collaboration applications message and call one another without switching apps. Skype for Business, which Microsoft is encouraging users to abandon in favor of Teams, has had the same interoperability for years.

The upcoming feature is good news for organizations that were frustrated over their inability to use Teams to communicate with external clients or partners on consumer Skype. More than 3,000 people endorsed a request for the integration on Microsoft’s user feedback website.

After initially denying the request in 2017, the company tentatively slated the feature for launch in 2018 before shelving it once again. Microsoft recommitted to building the integration in June and promised in its September roadmap to release the feature in the first quarter of 2020. The vendor updated its target this month to a January 2020 rollout.

However, interoperability between Microsoft’s enterprise and consumer communications apps is not as important as it once was, said Tom Arbuthnot, a technology architect at Modality Systems, a Microsoft-focused systems integrator.

In Skype for Business’ heyday, customers tried in vain to persuade Microsoft to allow external guests to join business meetings using consumer Skype. Now, Teams lets users join meetings in a web browser without plugins or downloads.

Meanwhile, Teams customers are still waiting for Microsoft to deliver better ways to collaborate with external parties that also use Teams. The app lacks a configuration akin to Slack’s shared channels, which let employees work across organizations.

“The bigger demand area right now is for Teams federation across company boundaries,” said Irwin Lazar, analyst at Nemertes Research. The feature would eliminate the need for guest accounts and provide more control over security.

More Microsoft Teams features on tap for January 2020

Also, next month, Microsoft will enable new integrations between Teams and Outlook. Buttons in those apps will let users share information between them.

Users will be able to transfer the contents and attachments of an email to Teams and to export Teams messages to Outlook. They will also be able to sign up for email alerts for messages and to reply to messages from within Outlook.

The email integrations will bring Teams nearly on par with rival Slack, which launched similar capabilities earlier this year. However, unlike Microsoft, Slack also supports interoperability with Gmail. 

Another feature on the roadmap for January is read receipts, which will give users the ability to see whether their colleague has read a direct message.


Routing made easier with traffic camera images and more

After launching traffic camera imagery on Bing Maps in April, we have seen a lot of interest in this new feature. You can view traffic conditions directly on a map and see the road ahead for your planned routes. This extra visibility helps you make informed decisions about the best route to your destination. Based on the popularity of this feature, the Bing Maps Routing and Traffic team has made some further improvements to this routing experience.

Hover to see traffic camera images or traffic incident details

In addition to clicking on the traffic camera icons on Bing Maps, traffic camera images and details can be accessed now by simply hovering over the camera icon along the planned route. Now you can quickly and easily glance at road conditions across your entire route.

[Screenshot: Traffic camera image popup]

The team also added traffic incident alerts along your planned route, which are shown as small orange or red triangle icons on the map. Just like the traffic cameras, you can view details about these incidents by simply hovering over the triangle icons. The examples below show alerts for scheduled construction and serious traffic congestion, respectively.

[Screenshot: Scheduled construction alert]

[Screenshot: Serious congestion alert]

Changes in click behavior

While hovering over the cameras or incident icons launches a popup for the duration of the hover, a click will keep the popup window open until you click anywhere else on the map or hover over another incident or camera icon.

Best Mode Routing

Sometimes, the destination you are trying to reach can be served by different routing modes (e.g., driving, transit or walking). In addition to letting you easily toggle between routing modes on Bing Maps, we recently added a new default option of “Best Mode” to the Directions offering, which serves up the best route options based on time, distance and traffic. For example, for a very short trip (e.g., a 10-minute walk), the “Best Mode” feature may recommend walking or driving routes, because taking a bus for such a short distance may not be the best option once you consider wait time and bus fare. Likewise, for trips greater than 1.5 miles, walking may not be the best option, and if a bus route requires several transfers, driving may be the better choice.

The “Best Mode” feature lets you view the best route options across modes without having to switch tabs for each mode. Armed with the recommended options and route details, you can quickly see how best to get where you’re going. You can also click “More Details” to see full driving or transit instructions.
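The trade-offs described above can be restated as a toy decision function. The sketch below is only an illustration of the reasoning in this post, not Bing Maps’ actual “Best Mode” algorithm, and any threshold not named above (such as the number of transfers) is an assumption.

```python
# Toy restatement of the mode trade-offs described above -- not the actual
# Bing Maps "Best Mode" algorithm. Thresholds not named in the post (such as
# the transfer count) are assumptions for illustration.
def suggest_modes(distance_miles, transit_transfers, walk_minutes):
    if walk_minutes <= 10:                    # very short trip: skip the bus
        modes = ["walking", "driving"]
    elif distance_miles > 1.5:                # likely too far to walk
        modes = ["driving", "transit"]
    else:
        modes = ["walking", "driving", "transit"]
    if "transit" in modes and transit_transfers >= 2:
        modes.remove("transit")               # several transfers: prefer driving
    return modes


print(suggest_modes(distance_miles=0.4, transit_transfers=0, walk_minutes=8))
# -> ['walking', 'driving']
```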

[Screenshot: Best Mode routing]

We hope these new features make life easier for you when it comes to getting directions and routing. Please let us know what you think on our Bing Maps Answers page. We are always looking for new ways to further improve our services with new updates releasing regularly.

– The Bing Maps Team


U.S. facility may have best data center PUE

After years of big gains, the energy efficiency of data centers has stalled. The key efficiency measurement, power usage effectiveness (PUE), is not improving and even worsened a little from last year.

The reason may have to do with the limits of the technology in use by the majority of data centers.

Improving data center PUE “will take a shift in technology,” said Chris Brown, chief technical officer of Uptime Institute LLC, a data center advisory group in Seattle. Most data centers — as many as 90% — continue to use air cooling, which isn’t as effective as water at removing heat, he said.

But one data center has made the shift in technology: the National Renewable Energy Laboratory (NREL) in Golden, Colo. The NREL’s mission is to advance energy-related technologies such as renewable power, sustainable transportation, building efficiency and grid modernization. Its supercomputer data center deploys technologies that help it achieve a very low data center PUE.

The technologies include cold plates, which use liquid to draw waste heat away from the CPU and memory, as well as a few rear-door heat exchangers. A rear-door heat exchanger is fitted to the back of a server rack; air leaving the servers passes over water-carrying coils that remove the heat before the air re-enters the data center room.

“The closer you get to rack in terms of picking up that waste heat and removing it, the more energy efficient you are going to be,” said David Sickinger, a researcher at NREL’s Computational Science Center.

Data center efficiency gains have stalled

NREL uses cooling towers to chill the water, which can be as warm as 75 degrees Fahrenheit and still cool the systems. The cooler, drier climate of Colorado helps. NREL doesn’t use mechanical cooling, such as chillers.

Because of the increasing power of high-performance computing (HPC) systems, “that has sort of forced the industry to be early adopters of warm water liquid cooling,” Sickinger said. 

The lowest possible data center PUE is 1, which means that all the power drawn goes to the IT equipment. NREL is reporting that its supercomputing data center PUE is 1.04 on an annualized basis. The NREL HPC data center has two supercomputers in a data center of approximately 10,000 square feet.

“We feel this is sort of world leading in terms of our PUE,” Sickinger said.

Is AI starting to reduce staffing needs?

Something else that NREL believes sets it apart is its reuse of the waste heat energy. The lab uses it to heat offices and for heating loops under outdoor patio areas to melt snow.

More than 10 years ago, the average PUE as reported by Uptime was 2.5. By 2012, the average had improved to 1.65. It continued to improve slightly but has since leveled off, and in 2019 the average data center PUE ticked up to nearly 1.7.

“I think as an industry we started to get to about the end of what we can do with the way we’re designing today,” Brown said. He believes in time data centers will look at different technologies, such as immersion cooling, which involves immersing IT equipment in a nonconductive liquid.


Improvements in data center PUE add up. If a data center has a PUE of 2, it uses 2 megawatts of power to support 1 megawatt of IT load. If it can lower the PUE to 1.6, the facility draws only 1.6 megawatts for the same load, a savings of about 400 kilowatts of electrical power, Brown said.
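As a quick check on those figures: PUE is total facility power divided by IT equipment power, so the savings from a PUE improvement fall straight out of the ratio. A small worked example:

```python
# PUE = total facility power / IT equipment power, so total draw scales
# directly with PUE. Worked example for the figures quoted above.
def facility_power_mw(it_load_mw, pue):
    return it_load_mw * pue


it_load = 1.0                               # 1 megawatt of IT load
before = facility_power_mw(it_load, 2.0)    # 2.0 MW total facility draw
after = facility_power_mw(it_load, 1.6)     # 1.6 MW total facility draw
print(f"Savings: {(before - after) * 1000:.0f} kW")  # Savings: 400 kW
```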

Data centers are becoming major users of electricity in the United States. They account for nearly 2% of all U.S. electrical use.

In a 2016 U.S. government-sponsored study, researchers reported that data centers accounted for about 70 billion kWh of electricity use in 2014 and forecast that consumption would reach 73 billion kWh in 2020. This estimate has not been updated, according to energy research scientist Jonathan Koomey, one of the study’s authors.

Koomey, who works as an independent researcher, said it is unlikely the estimates in the 2016 report have been exceeded much, if at all. He’s involved in a new independent research effort to update those estimates.

NREL is working with Hewlett Packard Enterprise to develop AI algorithms specific to IT operations, also known as AIOps. The goal is to develop machine learning models and predictive capabilities to optimize the use of data centers and possibly inform development of data centers to serve exascale computing, said Kristin Munch, NREL manager of the data, analysis and visualization group.

National labs generally collect data on their computer and data center operations, but they may not keep this data for a long period of time. NREL has collected five years’ worth of data from its supercomputer and facilities, Munch said.


With Time on its hands, Meredith drives storage consolidation

After Meredith Corp. closed its $2.8 billion acquisition of Time Inc. in January 2018, it adopted the motto “Be Bold. Together.”

David Coffman, Meredith’s director of enterprise infrastructure, took that slogan literally. “I interpreted that as ‘Drive it like you stole it,'” said Coffman, who was given a mandate to overhaul the combined company’s data centers that held petabytes of data. He responded with an aggressive backup and primary storage consolidation.

The Meredith IT team found itself with a lot of Time data on its hands, and in need of storage consolidation because a variety of vendors were in use. Meredith was upgrading its own Des Moines, Iowa, data center at the time, and Coffman’s team standardized technology across legacy Time and Meredith. It dumped most of its traditional IT gear and added newer technology developed around virtualization, convergence and the cloud.

Although Meredith divested some of Time’s best-known publications, it now publishes People, Better Homes and Gardens, InStyle, Southern Living and Martha Stewart Living. The company also owns 17 local television stations and other properties.

The goal is to reduce its data centers to two major sites, in New York and Des Moines, with the same storage, server and data protection technologies. The sites can serve as disaster recovery (DR) sites for each other. Meredith’s storage consolidation resulted in Nutanix hyper-converged infrastructure for block storage and virtualization, Rubrik data protection, and a combination of Nasuni and NetApp for file storage.

“I’ve been working to merge two separate enterprises into one,” Coffman said. “We decided we wanted to go with cutting-edge technologies.”

At the time of the merger, Meredith used NetApp-Cisco FlexPod converged infrastructure for primary storage and Time had Dell EMC and Hitachi Vantara in its New York and Weehawken, N.J. data centers. Both companies backed up with Veritas NetBackup software. Meredith had a mixture of tape and NetBackup appliances and Time used tape and Dell EMC Data Domain disk backup.

By coincidence, both companies were doing proofs of concept with Rubrik backup software on integrated appliances and were happy with the results.

Meredith installed Rubrik clusters in its Des Moines and New York data centers as well as a large Birmingham, Alabama office after the merger. They protect Nutanix clusters in all those sites.

“If we lost any of those sites, we could hook up our gear to another site and do restores,” Coffman said.

Meredith also looked at Cohesity and cloud backup vendor Druva while evaluating Rubrik Cloud Data Management. Coffman and Michael Kientoff, senior systems administrator of data protection at Meredith, said they thought Rubrik had the most features and they liked its instant restore capabilities.

Coffman said Cohesity was a close second, but he didn’t like that Cohesity includes its own file system and bills itself as secondary storage.

“We didn’t think a searchable file system would be that valuable to us,” Coffman said. “I didn’t want more storage. I thought, ‘These guys are data on-premises when I’m already getting yelled at for having too much data on premises.’ I didn’t want double the amount of storage.”

Coffman swept out most of the primary storage and servers from before the merger. Meredith still has some NetApp for file storage, and Nasuni cloud NAS for 2 PB of data that is shared among staff in different offices. Nasuni stores data on AWS.

Kientoff is responsible for protecting the data across Meredith’s storage systems.

“All of a sudden, my world expanded exponentially,” he said of the Time aftermath. “I had multiple NetBackup domains all across the world to manage. I was barely keeping up on the NetBackup domain we had at Meredith.”

Coffman and Kientoff said they were happy to be rid of tape, and found Rubrik’s instant restores and migration features valuable. Instead of archiving to tape, Rubrik moves data to AWS after its retention period expires.

Rubrik’s live mount feature can recover data from a virtual machine in seconds. This comes in handy when an application running in a VM dies, but also for migrating data.

However, that same feature is missing from Nutanix. Meredith is phasing out VMware in favor of Nutanix’s AHV hypervisor to save money on VMware licenses and to have, as Coffman put it, “One hand to shake, one throat to choke. Nutanix provided the opportunity to have consolidation between the hypervisor and the hardware.”

The Meredith IT team has petitioned for Nutanix to add a similar live mount capability for AHV. Even without it, though, Kientoff said backing up data from Nutanix with Rubrik beats using tapes.

“With a tape restore, calling backup tapes from off-site, it might be a day or two before they get their data back,” he said. “Now it might take a half an hour to an hour to restore a VM instead of doing a live mount [with VMware]. Getting out of the tape handling business was a big cost savings.”

The Meredith IT team is also dealing with closing smaller sites around the country to get down to the two major data centers. “That’s going to take a lot of coordinating with people, and a lot of migrations,” Coffman said.

Meredith will back up data from remote offices locally and move them across the WAN to New York or Des Moines.

Kientoff said Rubrik’s live restore is a “killer feature” for the office consolidation project. “That’s where Rubrik has really shone for us,” he said. “We recently shut down a sizeable office in Tampa. We migrated most of those VMs to New York and some to Des Moines. We backed up the cluster across the WAN, from Tampa to New York. We shut down the VM in Tampa, live mounted in New York, changed the IP address and put it on the network. There you go — we instantly moved VMs from one office to another.”


Avaya revenue slump expected to continue in 2020

Avaya shares closed down 5% Wednesday after the company failed to hit its financial targets for the fourth fiscal quarter and predicted that revenue would likely decline again in 2020.

Avaya brought in $723 million in the three months ended Sept. 30, despite projecting revenues between $735 million and $755 million. The quarter capped a year of disappointing returns, with the company generating just under $2.89 billion after initially telling investors it would sell between $3.01 billion and $3.12 billion worth of products and services.

Avaya attributed its underperformance in the fourth quarter in large part to a delay in executing a 10-year $400 million deal to sell phone systems and contact center software to the Social Security Administration. A competing vendor has challenged the contract, sparking a procurement review that Avaya expects will further delay revenues at least through the current quarter.

Meanwhile, the Avaya revenue slump is projected to continue in fiscal 2020, which began Oct. 1, with the company forecasting receipts of $2.81 billion to $2.89 billion. But analysts credit Avaya for at least significantly slowing the rate of its revenue decline in the two years since emerging from bankruptcy in late 2017.

Company executives said 2020 would be a transformational year for Avaya as it finally introduces a unified communications as a service (UCaaS) offering in partnership with RingCentral. The product will plug a gap in the vendor’s portfolio, which cloud-based competitors had exploited to steal the longtime customers of Avaya’s on-premises gear.

But Avaya is poised to face a significant challenge in a few years, said Steve Blood, analyst at Gartner. Many large enterprises aren’t ready to replace on-premises communications gear because they spent a lot of money on it. But, eventually, that calculation will change.

In the meantime, Avaya is selling maintenance and other services to those customers. The company has highlighted the growth of its software and services segment, which now represents 83% of total revenue, up from 71% in fiscal 2015.

“Avaya will talk about that as having loyal customers,” Blood said. “We will look at that differently. We don’t think they are so much loyal as they need a stop-gap to hold off while they build their strategy with other providers.”

Avaya’s answer to that impending problem has been to invest in a single-tenant cloud product called ReadyNow. It gives each customer a separate instance of the software on servers in an Avaya data center. The architecture allows for a higher level of security and customization than would be possible in a multi-tenant cloud. Avaya said its large enterprise customers prefer that approach.

Partnerships have emerged as another critical aspect of Avaya’s cloud strategy. Avaya is now relying on vendors like RingCentral and Afiniti to deliver innovative products and features. Just last week, Avaya announced it would partner with Google to bring a suite of AI capabilities to contact center customers in 2020.

Avaya plans to begin reporting to investors the percentage of revenue attributable to cloud, partnerships and emerging technologies combined. As of last quarter, that figure stood at 15%, but Avaya expects it will reach 30% once the RingCentral partnership ramps up.

The cloud alone accounted for 11% of revenue in fiscal 2019. That’s up from 10% last fiscal year but below the company’s original estimate of 12% to 14%. Avaya has sold nearly 4 million licenses for cloud telephony and contact center software, up from 3.5 million at the end of fiscal 2018.

Meanwhile, Avaya is retooling its executive team. On Tuesday, Avaya announced that its top cloud executive, Gaurav Passi, was no longer with the company.

Anthony Bartolo will become chief product officer next month, overseeing the on-premises and cloud portfolios. He is currently a top executive at Tata Communications, a networking and communications service provider, and previously spent four years with Avaya.

As part of the shuffle, Chris McGugan, currently senior vice president of solutions and technology, will become CTO.
