IBM expands patent troll fight with its massive IP portfolio

After claiming more than a quarter century of patent leadership, IBM has expanded its fight against patent assertion entities, also known as patent trolls, by joining the LOT Network. As a founding member of the Open Invention Network in 2005, IBM has been in the patent troll fight for nearly 15 years.

The LOT Network (short for License on Transfer) is a nonprofit community of more than 600 companies that have banded together to protect themselves against patent trolls and their lawsuits. The group says companies lose up to $80 billion per year on patent troll litigation. Patent trolls are organizations that hoard patents and bring lawsuits against companies they accuse of infringing on those patents.

IBM joins the LOT Network after its $34 billion acquisition of Red Hat, which was a founding member of the organization.

“It made sense to align IBM’s and Red Hat’s view on how to manage our patent portfolio,” said Jason McGee, vice president and CTO of IBM Cloud Platform. “We want to make sure that patents are used for their traditional purposes, and that innovation proceeds and open source developers can work without the threat of a patent litigation.”

To that end, IBM contributed more than 80,000 patents and patent applications to the LOT Network to shield those patents from patent assertion entities, or PAEs.

IBM joining the LOT Network is significant for a couple of reasons, said Charles King, principal analyst at Pund-IT in Hayward, Calif. First and foremost, with 27 years of patent leadership, IBM brings a load of patent experience and a sizable portfolio of intellectual property (IP) to the LOT Network, he said.

“IBM’s decision to join should also silence critics who decried how the company’s acquisition of Red Hat would erode and eventually end Red Hat’s long-standing leadership in open source and shared IP,” King said. “Instead, the opposite appears to have occurred, with IBM taking heed of its new business unit’s dedication to open innovation and patent stewardship.”

The LOT Network operates as a subscription service that charges members for the IP protection it provides, with rates based on company revenue. Membership is free for companies making less than $25 million annually; companies with annual revenues between $25 million and $50 million pay $5,000 a year; those between $50 million and $100 million pay $10,000; those between $100 million and $1 billion pay $15,000; and LOT caps its annual rate at $20,000 for companies with revenues greater than $1 billion.
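As an illustration of that tiered schedule (the tier boundaries come from the figures above; the function itself is only a sketch, not anything LOT publishes), the fee works out to a simple lookup on annual revenue:

    def lot_annual_fee(annual_revenue_usd: float) -> int:
        """Map a company's annual revenue to the LOT fee described above (USD)."""
        if annual_revenue_usd < 25_000_000:
            return 0          # membership is free under $25 million
        if annual_revenue_usd < 50_000_000:
            return 5_000
        if annual_revenue_usd < 100_000_000:
            return 10_000
        if annual_revenue_usd < 1_000_000_000:
            return 15_000
        return 20_000         # capped for revenue above $1 billion

    print(lot_annual_fee(750_000_000))  # -> 15000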

Meanwhile, the Open Invention Network (OIN) has three levels of participation: members, associate members and licensees. Participation in OIN is free, the organization said.

“One of the most powerful characteristics of the OIN community and its cross-license agreement is that the board members sign the exact same licensing agreement as the other 3,100 business participants,” said Keith Bergelt, CEO of OIN. “The cross license is royalty-free, meaning it costs nothing to join the OIN community. All an organization or business must agree to do is promise not to sue other community participants based on the Linux System Definition.”

IFI Claims Patent Services confirmed that 2019 marked the 27th consecutive year in which IBM led the industry in U.S. patents, with 9,262 earned last year. The patents span key technology areas such as AI, blockchain, cloud computing, quantum computing and security, McGee said.

IBM was awarded more than 1,800 AI patents, including one for a method of teaching AI systems to understand the implications behind certain text or phrases by analyzing other related content. IBM also gained patents for improving the security of blockchain networks.

In addition, IBM inventors were awarded more than 2,500 patents in cloud technology and grew the number of patents the company has in the nascent quantum computing field.

“We’re talking about new patent issues each year, not the size of our patent portfolio, because we’re focused on innovation,” McGee said. “There are lots of ways to gain and use patents, we got the most for 27 years and I think that’s a reflection of real innovation that’s happening.”

Since 1920, IBM has received more than 140,000 U.S. patents, he noted. In 2019, more than 8,500 IBM inventors, spanning 45 U.S. states and 54 countries, contributed to the patents awarded to IBM, McGee added.

In other patent-related news, Apple and Microsoft this week joined 35 companies who petitioned the European Union to strengthen its policy on patent trolls. The coalition of companies sent a letter to EU Commissioner for technology and industrial policy Thierry Breton seeking to make it harder for patent trolls to function in the EU.

Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management, so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. Developers indicate the capacity they need, and OCS provisions the storage for them, Prasad said.
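As a rough sketch of that workflow (the namespace and the storage class name ocs-storagecluster-ceph-rbd here are assumptions for illustration, not details from Red Hat), a developer simply files an ordinary Kubernetes persistent volume claim and lets OCS provision the capacity behind it:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running inside a pod

    # Request 50Gi of block storage from an OCS-provided storage class.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="ocs-storagecluster-ceph-rbd",  # assumed class name
            resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)

The claim describes only capacity and access mode; the Rook Operator decides where and how the underlying Ceph storage is carved out.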

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.
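A minimal sketch of what that consistency means in practice, using an assumed endpoint URL and placeholder credentials (neither comes from the article): the same S3 client code runs against AWS S3, another S3-compatible cloud or an on-premises object store, with only the endpoint changing.

    import boto3

    # Point the standard S3 client at whatever S3-compatible endpoint NooBaa exposes.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://noobaa.example.internal",  # assumed endpoint
        aws_access_key_id="EXAMPLE_KEY",                 # placeholder credentials
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    s3.put_object(Bucket="app-bucket", Key="reports/q4.csv", Body=b"col1,col2\n1,2\n")
    keys = [o["Key"] for o in s3.list_objects_v2(Bucket="app-bucket").get("Contents", [])]
    print(keys)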

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any user moving a large data set to public cloud storage needs to factor in network bandwidth and migration time, and to consider egress charges if the data ever has to be brought back from the cloud.

SAP Data Hub opens predictive possibilities at Paul Hartmann

Organizations have access to more data than they’ve ever had, and the number of data sources and the volume of data just keep growing.

But how do companies deal with all that data, and can they derive real business value from it? Paul Hartmann AG, a medical supply company, is trying to answer those questions by using SAP Data Hub to integrate data from different sources and then using that data to improve supply chain operations. The technology is part of the company’s push toward a data-driven digital transformation, in which some existing processes are being digitized and new analytics-based models are being developed.

The early results have been promising, said Sinanudin Omerhodzic, Paul Hartmann’s CIO and chief data officer.

Paul Hartmann is a 200-year-old firm in Heidenheim, Germany that supplies medical and personal hygiene products to customers such as hospitals, nursing homes, pharmacies and retail outlets. The main product groups include wound management, incontinence management and infection management.

Paul Hartmann is active in 35 countries and turns over around $2.2 billion in sales a year. Omerhodzic described the company as a pioneer in digitizing its supply chain operations, running SAP ERP systems for 40 years. However, changes in the healthcare industry have led to questions about how to use technology to address new challenges.

For example, an aging population increases demand for certain medical products and services, as people live longer and consume more products than before.

One prime area for digitization was in Paul Hartmann’s supply chain, as hospitals demand lower costs to order and receive medical products. Around 60% of Paul Hartmann’s orders are still handled by email, phone calls or fax, which means that per-order costs are high, so the company wanted to begin to automate these processes to reduce costs, Omerhodzic said.

One method was to install boxes stocked with products and equipped with sensors in hospital warehouses that automatically re-order products when stock reaches certain levels. This process reduced costs by not requiring any human intervention on the customer side. Paul Hartmann installed 9,000 replenishment boxes in about 100 hospitals in Spain, which proved adept at replacing stock when needed. But it then began to consider the next step: how to predict with greater accuracy what products will be needed when and where to further reduce the wait time on restocking supplies.  

Getting predictive needs new data sources

This new level of supply chain predictive analytics requires accessing and analyzing vast amounts of data from a variety of new sources, Omerhodzic said. For example, weather data could show that a storm may hit a particular area, which could result in more accidents, leading hospitals to stock more bandages in preparation. Data from social media sources that refer to health events such as flu epidemics could lead to calculations on the number of people who could get sick in particular regions and the number of products needed to fight the infections.

“All those external data sources — the population data, weather data, the epidemic data — combined with our sales history data, allow us to predict and forecast for the future how many products will be required in the hospitals and for all our customers,” Omerhodzic said.

Paul Hartmann worked with SAP to implement a predictive system based on SAP Data Hub, a software service that lets organizations orchestrate data from different sources without having to extract the data from the source. AI and machine learning are used to analyze the data, including the entire history of the company’s sales data, and after just a few months the pilot project was making better predictions than the sales staff, Omerhodzic said.
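As a purely illustrative sketch of that idea (this is generic Python, not SAP Data Hub's API; the file names, features and model choice are all assumptions), external signals can be joined to sales history and fed to a regression model:

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical inputs: weekly sales history plus external signals,
    # joined on region and week.
    sales = pd.read_csv("sales_history.csv")    # columns: region, week, units_sold
    weather = pd.read_csv("weather_index.csv")  # columns: region, week, storm_index
    flu = pd.read_csv("flu_trends.csv")         # columns: region, week, flu_index

    df = sales.merge(weather, on=["region", "week"]).merge(flu, on=["region", "week"])
    features = ["storm_index", "flu_index"]

    model = GradientBoostingRegressor().fit(df[features], df["units_sold"])
    df["forecast"] = model.predict(df[features])  # per-region, per-week demand estimate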

“We have 200 years selling our product, so the sales force has a huge wealth of information and experience, but the new system could predict even better than they could,” he said. “This was a huge wake up for us and we said we need to learn more about our data, we need to pull more data inside and see how that could improve or maybe create new business models. So we are now in the process of implementing that.”

Innovation on the edge less disruptive

The use of SAP Data Hub as an innovation center is one example of how SAP can foster digital transformation without directly changing core ERP systems, said Joshua Greenbaum, principal analyst at Enterprise Applications Consulting. This can result in new processes that aren’t as costly or disruptive as a major ERP upgrade.

“Eventually this touches your ERP because you’re going to be making and distributing more bandages, but you can build the innovation layer without it being directly inside the ERP system,” Greenbaum said. “When I discuss digital transformation with companies, the easy wins don’t start with the statement, ‘Let’s replace our ERP system.’ That’s the road to complexity and high costs — although, ultimately, that may have to happen.”

For most organizations, Greenbaum said, change management — not technology — is still the biggest challenge of any digital transformation effort.

Change management challenges

At Paul Hartmann, change management has been a pain point. The company is addressing the technical issues of the SAP Data Hub initiative through education and training programs that enhance IT skills, Omerhodzic said, but getting the company to work with data is another matter.

“The biggest change in our organization is to think more from the data perspective side and the projects that we have today,” he said. “To have this mindset and understanding of what can be done with the data requires a completely different approach and different skills in the business and IT. We are still in the process of learning and establishing the appropriate organization.”

Although the sales organization at Paul Hartmann may feel threatened by the predictive abilities of the new system, change is inevitable and affects the entire organization, and the change must be managed from the top, according to Omerhodzic.

“Whenever you have a change there’s always fear from all people that are affected by it,” he said. “We will still need our sales force in the future — but maybe to sell customer solutions, not the products. You have to explain it to people and you have to explain to them where their future could be.”

Cloudian CEO: AI, IoT drive demand for edge storage

AI and IoT are driving demand for edge storage, as data is being created faster than it can reasonably be moved across clouds, object storage vendor Cloudian’s CEO said.

Cloudian CEO Michael Tso said “Cloud 2.0” is giving rise to the growing importance of edge storage among other storage trends. He said customers are getting smarter about how they use the cloud, and that’s leading to growing demand for products that can support private and hybrid clouds. He also detects an increased demand for resiliency against ransomware attacks.

We spoke with Tso about these trends, including the Edgematrix subsidiary Cloudian launched in September 2019 that focuses on AI use cases at the edge. Tso said we can expect more demand for edge storage and spoke about an upcoming Cloudian product related to this. He also talked about how AI relates to object storage, and if Cloudian is preparing other Edgematrix-like spinoffs.

What do you think storage customers are most concerned with now?
Michael Tso: I think there is a lot, but I’ll just concentrate on two things here. One is that they continue to just need lower-cost, easier to manage and highly scalable solutions. That’s why people are shifting to cloud and looking at either public or hybrid/private.

Related to that point is I think we’re seeing a Cloud 2.0, where a lot of companies now realize the public cloud is not the be-all, end-all and it’s not going to solve all their problems. They look at a combination of cloud-native technologies and use the different tools available wisely.

I think there’s the broad brush of people needing scalable solutions and lower costs — and that will probably always be there — but the undertone is people getting smarter about private and hybrid.

Point number two is around data protection. We’re now seeing more and more customers worried about ransomware. They’re keeping backups for longer and longer and there is a strong need for write-once compliant storage. They want to be assured that any ransomware that is attacking the system cannot go back in time and mess up the data that was stored from before.

Cloudian actually invested very heavily in building write-once compliant technologies, primarily for financial and the military market because that was where we were seeing it first. Now it’s become a feature that almost everyone we talked to that is doing data protection is asking for.
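One common way write-once (WORM) behavior is expressed through an S3-compatible API is S3 Object Lock; the sketch below is illustrative only, with an assumed endpoint, bucket and retention period rather than anything Cloudian-specific from the interview:

    import boto3
    from datetime import datetime, timedelta, timezone

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.internal",  # assumed endpoint
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # The bucket must be created with object lock enabled.
    s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

    # COMPLIANCE mode: the object version cannot be deleted or have its retention
    # shortened until the retain-until date passes, even by an attacker who
    # obtains valid credentials.
    s3.put_object(
        Bucket="backups",
        Key="daily/2020-01-20.tar",
        Body=b"...backup payload...",
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )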

People are getting smarter about hybrid and multi-cloud, but what’s the next big hurdle to implementing it?

Tso: I think as people are now thinking about a post-cloud world, one of the problems that large enterprises are coming up with is data migration. It’s not easy to add another cloud when you’re fully in one. I think if there’s any kind of innovation in being able to off-load a lot of data between clouds, that will really free up that marketplace and allow it to be more efficient and fluid.

Right now, cloud is a bunch of silos. Whatever data people have stored in cloud one is kind of what they’re stuck with, because it will take them a lot of money to move data out to cloud two, and it’s going to take them years. So, they’re kind of building strategies around that as opposed to really, truly being flexible in terms of where they keep data.

What are you seeing on the edge?

Tso: We’re continuing to see more and more data being created at the edge, and more and more use cases of the data needing to be stored close to the edge because it’s just too big to move. One classic use case is IoT. Sensors, cameras — that sort of stuff. We already have a number of large customers in the area and we’re continuing to grow in that area.

The edge can mean a lot of different things. Unfortunately, a lot of people are starting to hijack that word and make it mean whatever they want it to mean. But what we see is just more and more data popping up in all kinds of locations, with the need of having low-cost, scalable and hybrid-capable storage.

We’re working on getting a ruggedized, easy-to-deploy cloud storage solution. What we learned from Edgematrix was that there’s a lot of value to having a ruggedized edge AI device. But the unit we’re working on is going to be more like a shipping container or a truck as opposed to a little box like with Edgematrix.

What customers would need a mobile cloud storage device like you just described?

Tso: There are two distinct use cases here. One is that you want a cloud on the go, meaning it is self-contained. It means if the rest of the infrastructure around you has been destroyed, or your internet connectivity has been destroyed, you are still able to do everything you could do with the cloud. The intention is a completely isolatable cloud.

In the military application, it’s very straightforward. You always want to make sure that if the enemy is attacking your communication lines and shooting down satellites, wherever you are in the field, you need to have the same capability that you have during peacetime.

But the civilian market, especially in global disaster response, is another area where we are seeing demand. It’s state and local governments asking for it. In the event of a major disaster, oftentimes for a period, they don’t have any access to the internet. So the idea is to run a cloud in a ruggedized unit that is completely stand-alone until connectivity is restored.

AI-focused Edgematrix started as a Cloudian idea. What does AI have to do with object storage?
Tso: AI is an infinite data consumer. Improvements on AI accuracy is a log scale — it’s an exponential scale in terms of the amount of data that you need for the additional improvements in accuracy. So, a lot of the reasons why people are accumulating all this data is to run their AI tools and run AI analysis. It’s part of the reason why people are keeping all their data.

Being S3 object store compatible is a really big deal because that allows us to plug into all of the modern AI workloads. They’re all built on top of cloud-native infrastructure, and what Cloudian provides is the ability to run those workloads wherever the data happens to be stored, and not have to move the data to another location.
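A small sketch of what running a workload against data in place can look like (the endpoint, bucket and credentials are assumptions, and this is generic S3 tooling rather than a Cloudian-specific API): training code streams objects straight from the S3-compatible store instead of copying them somewhere else first.

    import s3fs

    # Any S3-compatible endpoint works; only the URL and credentials differ.
    fs = s3fs.S3FileSystem(
        key="EXAMPLE_KEY",
        secret="EXAMPLE_SECRET",
        client_kwargs={"endpoint_url": "https://objectstore.example.internal"},  # assumed
    )

    # Stream a few training samples directly from the object store.
    for path in fs.ls("training-data/images")[:3]:
        with fs.open(path, "rb") as f:
            sample = f.read()
            print(path, len(sample), "bytes")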

Are you planning other Edgematrix-like spinoffs?
Tso: Not in the immediate future. We’re extremely pleased with the way Edgematrix worked out, and we certainly are open to do more of this kind of spin off.

We’re not a small company anymore, and one of the hardest things for startups in our growth stage is balancing creativity and innovation with growing the core business. We seem to have found a good sort of balance, but it’s not something that we want to do in volume because it’s a lot of work.

Microsoft Teams reaches 20 million users, threatening Slack

More than 20 million people now use Microsoft Teams daily — up from 13 million in July. The product’s impressive growth has many analysts and investors worried about the long-term prospects of rival collaboration vendor Slack. 

Slack’s stock was down more than 8% Tuesday in apparent reaction to Microsoft’s announcement. The company’s value has been steadily declining since June as more and more financial analysts have voiced concerns about the rise of Microsoft Teams.

Unlike Slack, Microsoft has a massive base of existing customers to target. More than 200 million people use Office 365 every month, and those customers usually have access to Microsoft Teams at no additional cost.

“Microsoft has the advantage of including Teams collaboration with a lot of their Office 365 packages,” said Rob Arnold, analyst at Frost & Sullivan. “And that literally gets it in front of more users than Slack can ever hope to.”

Microsoft also has a vast network of partners worldwide that provide services and support to businesses using its software. Slack launched its partner program last week, but so far has only recruited small and midsize firms.

Slack has attempted to undercut Microsoft’s growing user count by focusing on user engagement. Among paid customers, Slack users spend nine hours connected to the app and 90 minutes actively using it each day, the company said.

“As we’ve said before, you can’t transform a workplace if people aren’t actually using your product,” a Slack spokesperson said in a statement Tuesday.

There is, however, no evidence to suggest Teams users are less engaged with the app, as Microsoft has not released comparable statistics on the subject. Microsoft said Tuesday that users conducted 27 million voice and video calls in Teams last month, and interacted with documents stored in Teams 220 million times.

More than 12 million people used Slack daily in September. Use of the app has more than tripled over the past three years, making the vendor a leader in the market for team-based workplace communications software.

Slack often leads larger rivals Microsoft and Cisco in adding innovative features. For example, Slack developed a way to export emails to Slack in a few clicks earlier this year, while Microsoft won’t launch a similar feature until early 2020.

But Slack has never made a profit, losing nearly $139 million on $400 million in revenue last year. Attaining profitability will require selling to more businesses with thousands, if not tens of thousands, of employees. Many of those companies already use Office 365.

In an interview last month with the Wall Street Journal, Slack CEO Stewart Butterfield said 70% of its 50 largest customers were using Office 365. He also pointed out that many of the top Google Search trends for Microsoft Teams were related to uninstalling the app.

Over the past year, Slack has been redesigning aspects of its user interface to be friendlier to the average office worker. The company launched the tool originally for software engineers, which led to quirks in the way users interact with bots and integrations.

This month, Slack is in the process of rolling out a new toolbar for writing messages that resembles what users are accustomed to when using apps like Microsoft Word. The toolbar lets users bold, underline and italicize text, and create numbered and bulleted lists. Previously, users had to do unintuitive things like put asterisks on either side of a word to make it bold.  

But Microsoft has also been investing heavily in Teams, naming it the successor to Skype for Business. Just last week, Microsoft announced a partnership with Salesforce to integrate that vendor’s online sales and service platforms with Teams. The move could further boost the adoption of Microsoft’s product.

New Azure HPC and partner offerings at Supercomputing 19

For more than three decades, the researchers and practitioners who make up the high-performance computing (HPC) community have come together for their annual event. More than ten thousand strong in attendance, the global community will converge on Denver, Colorado, to advance the state of the art in HPC. The theme for Supercomputing ‘19 is “HPC is now” – a theme that resonates strongly with the Azure HPC team, given the breakthrough capabilities we’ve been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers and customer case studies – here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds’ commodity networks. Azure supports your tightly coupled workloads at up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low-latency HDR InfiniBand networking. (A minimal MPI sketch follows this list.)
  • Accelerated Compute – Choose from the latest GPUs, field-programmable gate arrays (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.
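As a minimal illustration of the kind of tightly coupled MPI job described above (mpi4py is used here for brevity; the launch command and cluster layout are assumptions, and nothing below is Azure-specific):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank contributes a partial result; allreduce exchanges it over the
    # cluster interconnect (HDR InfiniBand on the HPC VM sizes above).
    local = float(rank)
    total = comm.allreduce(local, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks, global sum = {total}")

A job like this would be launched with something like mpirun -n <ranks> python allreduce_demo.py across the nodes of the cluster.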

Azure HPC unveils new offerings

  • The preview of new second gen AMD EPYC based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud’s first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world’s largest and most powerful bare metal supercomputers. HBv2 is not just one of the most powerful HPC servers Azure has ever made, but also one of the most affordable. HBv2 VMs are available now.
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, with as little as 2GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, up to an entire MI25 GPU with 16GB of HBM2 memory to provide a powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of new NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, and are designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature 8 NVIDIA Tesla V100 NVLink-interconnected GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also now feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks; a minimal framework-level sketch follows at the end of this list. NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure Portal to connect high-performance computing workloads to on-premises network-attached storage, or to run against Azure Blob storage through a POSIX (portable operating system interface)-style file interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure’s first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU’s novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and enables short inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive-scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores, over 7,000 independent threads and a large 300MB on-chip memory that delivers up to 45 TB/s of memory bandwidth. The 8 Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect, enabling large models to be trained in a model-parallel (including pipelining) or data-parallel way. An NDv3 VM also includes 40 cores of CPU, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore’s Poplar® software development toolkit, and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from 1 to 4 Xilinx U250 FPGA devices per Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx’s U250 card and the SDAccel 2019.1 runtime will run on Azure VMs just as they do on-premises and on the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 Update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure’s unique InfiniBand-interconnected infrastructure. Among many other improvements, this release includes:

    • Improved error detection and reporting user interface (UI) that greatly simplify diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI that provides users direct control into node topology for latency sensitive applications.

    • Support for Ephemeral OS disks, which improve virtual machine and virtual machine scale set start-up performance and cost.

  • Microsoft HPC Pack 2016, Update 3 – released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.
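For the NDv2-style distributed training scenario above, here is a framework-level sketch of the NCCL path (PyTorch is used for illustration; the launcher is assumed to set the usual RANK/WORLD_SIZE/LOCAL_RANK environment variables, and nothing here is Azure-specific):

    import os
    import torch
    import torch.distributed as dist

    # A launcher such as torchrun populates RANK, WORLD_SIZE and LOCAL_RANK.
    dist.init_process_group(backend="nccl")  # NCCL over NVLink within a VM, InfiniBand across VMs
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    ddp = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    loss = ddp(x).sum()
    loss.backward()  # gradients are all-reduced across ranks via NCCL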

In all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest capabilities for HPC-oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!

Contact center agent experience needs massive overhaul

Gone are the days when it was acceptable to have turnover rates greater than 40% among contact center agents.

Leading organizations are revamping the contact center agent experience to improve business metrics, such as operational costs, revenue and customer ratings; and a targeted agent program keeps companies at a competitive advantage, according to the Nemertes 2019-20 Intelligent Customer Engagement research study of 518 organizations.

The problems

CX leaders participating in the research pointed to several issues responsible for a failing contact center agent experience:

  • Low pay. In some organizations it’s at minimum wage, despite requirements for bachelor’s degrees and/or experience.
  • Dead-end job. Organizations typically do not have a growth path for agents. They expect them to last 18 months to two years, and there always will be a revolving door of agents coming and going.
  • Lack of customer context. Agents find it difficult to take pride in their work when they don’t have the right tools. Without CRM integrations, AI assistance and insightful agent desktops, it is difficult to delight customers.
  • Cranky customers. Agents also find it difficult to regularly interact with dissatisfied customers. With a better work environment, more interaction channels, better training, more analytics and context, they could change those attitudes.
  • No coaching. Because supervisors are busy interviewing and hiring to keep backfilling the agents who are leaving, they rarely have time to coach the agents they have. What’s more, they don’t have the analytics tools — from contact center vendors such as Avaya, Cisco, Five9, Genesys and RingCentral, or from pure-play tools such as Clarabridge, Medallia and Maritz CM — to provide performance insight.

The enlightenment

Those in the contact center know this has been status quo for decades, but that is starting to change.

One of the big change drivers is the addition of a chief customer officer (CCO). Today, 37% of organizations have a CCO, up from 25% last year. The CCO is an executive-level individual with ultimate responsibility for all customer-facing activities and strategy to maximize customer acquisition, retention and satisfaction.

The CCO has budget, staff and the attention of the entire C-suite. As a result, high agent turnover rates are no longer flying under the radar. After CCOs bring the issue to CEOs and CFOs, those executives are investing resources in turning around the turnover rates.

Additionally, organizations value contact centers more today, with 61% of research participants saying the company views the contact center as a “value center” versus a “cost center.” Four years ago, that figure was reversed, with two-thirds viewing the contact center as a cost center.

Companies are adding more outbound contact centers, targeting sales or proactive customer engagement — such as customer check-ups, loyalty program invitations and discount offers — and they are supporting new products and services. This helps to explain why, despite the growth in self-service and AI-enabled digital channels, 44% of companies actually increased the number of agents in 2019, compared to 13% who decreased, 40% who were flat and 3% unsure.

The solution

Research shows there are five common changes organizations are now making to improve the contact center agent experience and reduce the turnover rate — now at 21%, down from 38% in 2016. These changes include:

  • Improved compensation plan. Nearly 47% of companies are increasing agent compensation, compared to the 7% decreasing it. The increase ranges from 22% to 28%. Average agent compensation is $49,404 and is projected to rise to at least $60,272 by the end of 2020.
  • Investment in agent analytics. About 24% of companies are using agent analytics today, with another 20.2% planning to use the tools by 2021. Agent analytics provides data on performance to help with coaching and improvement, in addition to delivering real-time screen pops to help agents on the spot during interactions with customers. Those using analytics see a 52.6% improvement in revenue and a 22.7% decrease in operational costs.
  • Increases in coaching. With data from analytics tools, supervisors have a better picture of areas of success and those that need improvement. By using a product such as Intradiem Contact Center RPA, they can automate the scheduling of training and coaching during idle times.
  • Addition of gamification. Agents are inspired by programs that inject friendly competition, awarding badges for bragging rights, weekly gift cards for top performance and monthly cash bonuses. Such rewards improve agents’ loyalty to the company and reduce turnover.
  • Development of career path. Successful companies are developing a solid career path with escalations into marketing, product development and supervisory roles in the contact center or CX apps/analysis.

Developing a solid game plan that provides agents with the compensation, support and career path they deserve will drastically reduce turnover rates. In a drastic example, one consumer goods manufacturing company reduced agent turnover from 88% to 2% with a program that addressed the aforementioned issues. More typically, companies are seeing 5% to 15% reductions in their turnover rates one year after developing such a plan.

The story of ADLaM is now a limited-run hardcover book from Story Labs

The African language Fulfulde is spoken by more than 50 million people worldwide. But until recently this centuries-old language lacked an alphabet of its own.

Abdoulaye and Ibrahima Barry were just young boys when they set out to change that. While other children were out playing, the Barry brothers would hole up in their family’s house in Nzérékoré, Guinea, carefully drawing shapes on paper that would eventually become ADLaM – an acronym for “the alphabet that will prevent a people from being lost.”

In the decades since, ADLaM has sparked a revolution in literacy, community and cultural preservation among Fulani people across the world. Abdoulaye and Ibrahima have dedicated their lives to sustaining these efforts, including expanding ADLaM’s reach through Unicode adoption. And thanks to support from a dedicated cross-company team at Microsoft, ADLaM is now available in Windows and Office.

My team at Microsoft Story Labs recently had the privilege of working with Abdoulaye and Ibrahima on a longform feature story about ADLaM. Today I’m happy to announce that we’ve printed a limited-run book version of that story that contains both the original English and an ADLaM translation, so the community of millions now using ADLaM can enjoy it in print. A few copies of the book will be available in a contest giveaway by Microsoft Design on Twitter. The rest will go directly into the hands of the amazing people behind the unique achievement that is ADLaM.

When you’ve been working in the digital realm for most of your career like I have, it’s kind of a treat to make something you can hold in your own two hands! But the biggest reward here was the opportunity to shine a light on remarkable people like Abdoulaye and Ibrahima who have achieved so much, and the team at Microsoft who lent a hand.

Steve Wiens

Microsoft Story Labs

Story by Deborah Bach & Sara Lerner. Design by Daniel Victor.

Salesforce acquisition of Tableau finally getting real

LAS VEGAS — It’s been more than five months since the Salesforce acquisition of Tableau was first revealed, but it’s been five months of waiting.

Even after the deal closed on Aug. 1, a regulatory review in the United Kingdom about how the Salesforce acquisition of Tableau might affect competition held up the integration of the two companies.

In fact, it wasn’t until last week, on Nov. 5, after the go-ahead from the U.K. Competition and Markets Authority (CMA) — exactly a week before the start of Tableau Conference 2019, the vendor’s annual user conference — that Salesforce and Tableau were even allowed to start speaking with each other. Salesforce’s big Dreamforce 2019 conference is Nov. 19-22.

Meanwhile, Tableau didn’t just stop what it was doing. The analytics and business intelligence software vendor continued to introduce new products and update existing ones. Just before Tableau Conference 2019, it rolled out a series of new tools and product upgrades.

Perhaps most importantly, Tableau revealed an enhanced partnership agreement with Amazon Web Services entitled Modern Cloud Analytics that will help Tableau’s many on-premises users migrate to the cloud.

Andrew Beers, Tableau’s chief technology officer, discussed the recent swirl of events in a two-part Q&A.

In Part I, Beers reflected on Tableau’s product news, much of it centered on new data management capabilities and enhanced augmented intelligence powers. In Part II, he discusses the Salesforce acquisition of Tableau and what the future might look like now that the $15.7 billion purchase is no longer on hold.

Will the Salesforce acquisition of Tableau change Tableau in any way?

Andrew Beers: It would be naïve to assume that it wouldn’t. We are super excited about the acceleration that it’s going to offer us, both in terms of the customers we’re talking to and the technology that we have access to. There are a lot of opportunities for us to accelerate, and as [Salesforce CEO] Marc Benioff was saying [during the keynote speech] on Wednesday, the cultures of the two companies are really aligned, the vision about the future is really aligned, so I think overall it’s going to mean analytics inside businesses is just going to move faster.

Technologically speaking, are there any specific ways the Salesforce acquisition of Tableau might accelerate Tableau’s capabilities?

Beers: It’s hard to say right now. Just last week the CMA [order] was lifted. There was a big cheer, and then everyone said, ‘But wait, we have two conferences to put on.’

Have you had any strategic conversations with Salesforce in just the week or so since regulatory restrictions were lifted, even though Tableau Conference 2019 is this week and Salesforce Dreamforce 2019 is next week?

Beers: Oh sure, and a lot of it has been about the conferences of course, but there’s been some early planning on how to take some steps together. But it’s still super early.

Users, of course, fear somewhat that what they love about Tableau might get lost as a result of the Salesforce acquisition of Tableau. What can you say to alleviate their worries?

Beers: The community that Tableau has built, and the community that Salesforce has built, they’re both these really excited and empowered communities, and that goes back to the cultural alignment of the companies. As a member of the Tableau community, I would encourage people to be excited. To have two companies come together that have similar views on the importance of the community, the product line, the ecosystem that the company is trying to create, it’s exciting.

Is the long-term plan — the long-term expectation — for Tableau to remain autonomous under Salesforce?

Beers: We’ve gone into this saying that Tableau is going to continue to operate as Tableau, but long-term, I can’t answer that question. It’s really hard for anyone to say.

From a technological perspective, as a technology officer, what about the Salesforce acquisition of Tableau excites you — what are some things that Salesforce does that you can’t wait to get access to?

Beers: Salesforce spent the past 10 or so years changing into a different company, and I’m not sure a lot of people noticed. They went from being a CRM company to being this digital-suite-for-the-enterprise company, so they’ve got a lot of interesting technology. Just thinking of analytics, they’ve built some cool stuff with Einstein. What does that mean when you bring it into the Tableau environment? I don’t know, but I’m excited to find out. They’ve got some interesting tools that hold their whole ecosystem together, and I’m interested in what that means for analysts and for Tableau. I think there are a lot of exciting technology topics ahead of us.

What about conversations you might have with Salesforce technology officers, learning from one another. Is that exciting?

Beers: It’s definitely exciting. They’ve been around — a lot of that team has different experience than us. They’re experienced technology leaders in this space and I’m definitely looking forward to learning from their wisdom. They have a whole research group that’s dedicated to some of their longer term ideas, so I’m looking forward to learning from them.

You mentioned Einstein Analytics — do Tableau and Einstein conflict? Are they at odds in any way, or do they meld in a good way?

Beers: It’s still early days, but I think you’re going to find that they’re going to meld in a good way.

What else can you tell the Tableau community about what the future holds after the Salesforce acquisition of Tableau?

Beers: We’re going to keep focused on what we’ve been focusing on for a long time. We’re here to bring interesting innovations to market to help people work with their data, and that’s something that’s going to continue. You heard Marc Benioff and [Tableau CEO Adam Selipsky] talk about their excitement around that [during a conference keynote]. Our identity as a product and innovation company doesn’t change, it just gets juiced by this. We’re ready to go — after the conferences are done.
