
How Genesys is personalizing the customer experience with Engage, Azure and AI | Transform

Microsoft and Genesys, a global provider of contact center software, recently announced a partnership to enable enterprises to run Genesys’ omnichannel customer experience solution, Genesys Engage, on Microsoft Azure. According to the two companies, this combination will provide a secure cloud environment to help companies more easily leverage AI to address customer needs on any channel.

Headquartered in Daly City, California, Genesys has more than 5,000 employees in nearly 60 offices worldwide. Every year, the company supports more than 70 billion customer experiences for organizations such as Coca-Cola Business Services North America, eBay, Heineken, Lenovo, PayPal, BOSCH and Quicken.

Transform spoke with Barry O’Sullivan, executive vice president and general manager of Multicloud Solutions for Genesys, to explore how technology is reinventing the customer service experience.

TRANSFORM: How are technologies like artificial intelligence (AI), machine learning and cloud transforming the customer service sector?

O’SULLIVAN: It’s broader than customer service. It’s the entire customer experience, which encompasses any point at which businesses engage with consumers, whether it’s in a marketing, sales or service context. What cloud, AI and machine learning enable is the ability to make every experience unique to each individual. Every consumer wants to feel like they’re the only customer that matters during each interaction with a brand. These technologies allow organizations to understand what customers are doing, predict what they will need next and then deliver it in real time.

Traditionally, companies haven’t been able to do that well, because it’s hard to get a fix on a consumer as they move between channels. Maybe they come to a physical store one day, then call the next day or engage via web chat. These technologies allow brands to stitch together every customer interaction, and then use the resulting data to personalize the experience.

TRANSFORM: Can you talk a little bit more about that customer journey and what customers will experience going forward?

O’SULLIVAN: Let’s use contacting the cable company to get internet service as an example. You check out their website, but maybe you get stuck and use web chat to interact with a customer service representative. Today’s technologies allow businesses to connect the dots to better understand the customer.

Before these technologies were available, interactions were disconnected, and important customer details and context didn’t move from one department or agent to the next. We all know what that’s like – just think about a customer service experience when you had to repeat your name and birthdate every time you were passed to a new agent.

Today’s technology can tie together a customer’s details, like their favored communication channel, past purchases, prior service requests and more, so the business really knows them. Then, using AI, it can match that customer with the contact center agent who has the best chance of successfully resolving the issue and achieving a specific business outcome, such as making a related sale.

TRANSFORM: All of those kinds of experiences seem to be present in some form today. Is there a change coming that’s going to take the consumer experience to the next level?

O’SULLIVAN: Personalized service is not a new concept, but very few businesses get it right. Today, it’s about so much more than targeting personas or market segments.

It’s really about enabling organizations to link together their customers’ and employees’ experiences to deliver truly memorable, one-of-a-kind interactions. When it’s done right, organizations already know who the customer is, what he or she wants and the best way to deliver it.

That means understanding customers so well that businesses know the best times to contact them, on which channel and even the best days for an appointment. It’s no longer one-size-fits-all service – it’s tailor-made customer care for each consumer.

TRANSFORM: Are your own customers ready to adopt the technologies to enable this kind of new experience?

O’SULLIVAN: When it comes to cloud, it’s not a question of if, but when and how. And that’s one of the reasons the announcement between Genesys and Microsoft is so exciting. We have a lot of customers, especially large enterprises, who love Genesys and love Azure and really want to see that combination come together. So, giving them that option and that choice is really going to accelerate the migration to cloud.

In terms of adopting AI and machine learning, many companies are in the early phases, but recognize the enormous potential of the technology. What makes AI truly compelling in the customer experience market is its ability to unlock data. Increasingly, businesses use digital channels, like web chat and text, to communicate with consumers, which combined with traditional voice interactions has resulted in copious amounts of data being produced daily. The key for organizations is figuring out how to harness and leverage it to more fully understand customers, their experiences and behaviors, as well as the needs of human agents. That’s where Genesys comes in.

TRANSFORM: How would you describe your experience working with Microsoft?

O’SULLIVAN: It’s a great partnership because we’ve got a common view of the customer and a very aligned vision on cloud. It’s all about delivering agility and innovation quickly and reliably to our joint customers. So, it really helps when we’re both all in on the cloud, all in on customer experience.

Our customers are really excited about this combination of Genesys and Azure. They can simplify their maintenance, reduce costs and streamline the buying process. We believe in the advantages of moving to cloud, and obviously Azure is a leader there.


U.S. facility may have best data center PUE

After years of big gains, the energy efficiency of data centers has stalled. The key data center efficiency measurement, power usage effectiveness (PUE), has stopped improving and even worsened slightly compared with last year.

The reason may have to do with the limits of the technology in use by the majority of data centers.

Improving data center PUE “will take a shift in technology,” said Chris Brown, chief technical officer of Uptime Institute LLC, a data center advisory group in Seattle. Most data centers — as many as 90% — continue to use air cooling, which isn’t as effective as water at removing heat, he said.

But one data center has made the shift in technology: the National Renewable Energy Laboratory (NREL) in Golden, Colo. NREL’s mission is to advance energy-related technologies such as renewable power, sustainable transportation, building efficiency and grid modernization. Its supercomputer data center deploys technologies that help it achieve a very low data center PUE.

The technologies include cold plates, which use liquid to draw waste heat away from the CPU and memory, as well as a few rear-door heat exchangers. A heat exchanger fitted to the rear of a server rack removes heat from the server exhaust as the air passes over water-carrying coils, cooling it before it reenters the data center room.

“The closer you get to rack in terms of picking up that waste heat and removing it, the more energy efficient you are going to be,” said David Sickinger, a researcher at NREL’s Computational Science Center.

Data center efficiency gains have stalled

NREL uses cooling towers to chill the water, which can be as warm as 75 degrees Fahrenheit and still cool the systems. The cooler and drier climate conditions of Colorado help. NREL doesn’t have mechanical cooling, which includes chillers. 

Because of the increasing power of high-performance computing (HPC) systems, “that has sort of forced the industry to be early adopters of warm water liquid cooling,” Sickinger said. 

The lowest possible data center PUE is 1, which means that all the power drawn goes to the IT equipment. NREL is reporting that its supercomputing data center PUE is 1.04 on an annualized basis. The NREL HPC data center has two supercomputers in a data center of approximately 10,000 square feet.

“We feel this is sort of world leading in terms of our PUE,” Sickinger said.

Is AI starting to reduce staffing needs?

Something else that NREL believes sets it apart is its reuse of the waste heat energy. The lab uses it to heat offices and for heating loops under outdoor patio areas to melt snow.

More than 10 years ago, the average PUE as reported by Uptime was 2.5. That has since improved. By 2012, the average data center PUE was 1.65. It continued to improve slightly but has since leveled off. In 2019, the average ticked up to nearly 1.7.

“I think as an industry we started to get to about the end of what we can do with the way we’re designing today,” Brown said. He believes in time data centers will look at different technologies, such as immersion cooling, which involves immersing IT equipment in a nonconductive liquid.


Improvements in data center PUE add up. If a data center has a PUE of 2, it is using 2 megawatts of power to support 1 megawatt of IT load. If it can lower its PUE to 1.6, the facility draws only 1.6 megawatts for the same IT load, a savings of about 400 kilowatts of electrical energy, Brown said.
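That savings estimate is simple arithmetic: total facility power is the IT load multiplied by the PUE, so any PUE reduction at a constant IT load falls straight to the bottom line. The short Python sketch below, an illustration rather than an Uptime Institute formula, reproduces Brown's 400-kilowatt example and the overhead implied by NREL's 1.04 figure.

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a PUE value for a given IT load."""
    return it_load_kw * pue


def pue_savings_kw(it_load_kw: float, pue_before: float, pue_after: float) -> float:
    """Electrical power saved by improving PUE while the IT load stays constant."""
    return facility_power_kw(it_load_kw, pue_before) - facility_power_kw(it_load_kw, pue_after)


it_load = 1000.0  # 1 megawatt of IT load, expressed in kilowatts

# Brown's example: dropping PUE from 2.0 to 1.6 saves about 400 kW.
print(pue_savings_kw(it_load, 2.0, 1.6))            # 400.0

# NREL's reported 1.04 PUE implies only about 40 kW of overhead per megawatt of IT load.
print(facility_power_kw(it_load, 1.04) - it_load)   # ~40.0
```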

Data centers are becoming major users of electricity in the United States. They account for nearly 2% of all U.S. electrical use.

In a 2016 U.S. government-sponsored study, researchers reported that data centers consumed about 70 billion kWh in 2014 and forecast consumption to reach 73 billion kWh in 2020. This estimate has not been updated, according to energy research scientist Jonathan Koomey, one of the study’s authors.

Koomey, who works as an independent researcher, said it is unlikely the estimates in the 2016 report have been exceeded much, if at all. He’s involved in a new independent research effort to update those estimates.

NREL is working with Hewlett Packard Enterprise to develop AI algorithms specific to IT operations, also known as AIOps. The goal is to develop machine learning models and predictive capabilities to optimize the use of data centers and possibly inform development of data centers to serve exascale computing, said Kristin Munch, NREL manager of the data, analysis and visualization group.

National labs generally collect data on their computer and data center operations, but they may not keep it for long. NREL has collected five years’ worth of data from its supercomputer and facilities, Munch said.


New telephony controls coming to Microsoft Teams admin center

Microsoft will add several telephony controls to the Microsoft Teams admin center in the coming months, a significant move in the vendor’s campaign to retire Skype for Business Online by mid-2021.

Admins will be able to build, test and manage custom dial plans through the Teams portal. Additionally, organizations that use Microsoft Calling Plan will be able to create and assign phone numbers and designate emergency addresses for users.

Currently, admins can only perform those tasks in Teams through the legacy admin center for Skype for Business Online. Microsoft has been gradually moving controls to the Teams admin center, with telephony controls among the last to switch over.

Microsoft plans to begin adding the new telephony controls to the Teams admin center in November, according to the vendor’s Office 365 Roadmap webpage. The company will also introduce some advanced features it didn’t support in Skype for Business Online, a cloud-based app within Office 365.

The update will let admins configure what’s known as dynamic emergency calling. The feature — supported only in the on-premises version of Skype for Business — automatically detects a user’s location when they place a 911 call. It then transmits that information to emergency officials.

The admin center for Skype for Business Online is “fairly rudimentary,” said Tom Arbuthnot, principal solutions architect at Modality Systems, a Microsoft-focused systems integrator. The new console for Teams provides advancements like the ability to sort and filter users and phone numbers.

“All of these little features add up to making a more friendly voice platform for an administrator,” Arbuthnot said. “They are getting closer and closer to everything being administered in the Teams admin center.”

Microsoft Teams still missing advanced calling controls, features

The superior design of the admin center notwithstanding, Teams still lacks crucial tools for organizations too large to use the management console.

For those enterprises, Teams PowerShell is the go-to tool for auto-configuring settings on a large scale using code-based commands. However, PowerShell cannot do everything that the Teams admin center can do. Microsoft has also yet to release APIs that would allow a third-party consultant to help manage a Fortune 500 company’s transition to Teams calling.

“When you’re up to hundreds of thousands of seats, you don’t really want to be going to an admin center and manually administrating,” Arbuthnot said. “The PowerShell and APIs tend to lag a little bit.”

A lack of parity between the telephony features of Skype for Business and Teams had been one of the biggest roadblocks preventing organizations from fully transitioning from the old to the new platform.

But at this point, Teams should be suitable for everyone except those with the most complex needs, such as receptionists, Arbuthnot said.

Other features that Microsoft is planning include compliance call recording, virtual desktop infrastructure support and contact center integrations.


UCaaS vendor Intermedia adds Telax CCaaS to portfolio

Unified communications vendor Intermedia has added contact center software to its cloud portfolio. The move is the latest example of how the markets for UC and contact center technologies are converging.

Intermedia follows the lead of other cloud UC vendors, including RingCentral, Vonage and 8×8, in building or acquiring a contact center as a service (CCaaS) platform. Intermedia’s CCaaS software stems from the acquisition of Toronto-based Telax in August.

The Intermedia Contact Center will be available as a stand-alone offering or bundled with Intermedia Unite, a cloud-based suite of calling, messaging and video conferencing applications. Intermedia will sell the offering in three tiers: Express, Pro and Elite.

Express — sold only as an add-on to Intermedia Unite — is a basic call routing platform for small businesses. Pro includes more advanced call routing, analytics, and support for additional contact channels, such as chat.

Elite, the most expensive tier, integrates with CRM platforms and includes support for self-service voice bots, outbound notification campaigns and quality assurance monitoring. 

Intermedia has already integrated Express with its UC platform. It’s planning to do the same for Pro and Elite early next year.

Integrating UC and contact center platforms can save money by letting customer service agents transfer calls outside of the contact center without going through the public telephone network. Plus, communication between agents and others in the organization is more effective when everyone uses the same chat and video apps.

Based in Sunnyvale, Calif., Intermedia sells its technology to small and midsize businesses through 6,600 channel partners. Most of them are managed service providers that brand Intermedia’s service as their own.

In addition to UC and contact center, Intermedia offers email archiving and encryption, file backup and sharing systems, and hosted Microsoft email services.

Roughly 1.4 million people across 125,000 businesses use Intermedia’s technology. The company, founded in 1995 and now owned by private equity firm Madison Dearborn Partners, said its acquisition of Telax brought annual revenue to around $250 million. 

Founded in 1997, Telax sold its CCaaS platform exclusively through service providers, which rebranded it mostly for small and midsize businesses.


How Windows Admin Center stacks up to other management tools

Microsoft took a lot of administrators by surprise when it released Windows Admin Center, a new GUI-based management tool, last year. But is it mature enough to replace third-party offerings that handle some of the same tasks?

Windows Admin Center is a web-based management environment for Windows Server 2012 and up that exposes roughly 50% of the capabilities of the traditional Microsoft Management Console-based GUI environment. Most common services — DNS, Dynamic Host Configuration Protocol, Event Viewer, file sharing and even Hyper-V — are available within the Windows Admin Center, which can be installed on a workstation with a self-hosted web server built in, or on a traditional Windows Server machine using IIS.

It covers several Azure management scenarios as well, including managing Azure virtual machines when you link your cloud subscription to the Windows Admin Center instance you use.

Among its many features, the Windows Admin Center dashboard provides an overview of the selected Windows machine, including the current state of the CPU and memory.

There are a number of draws for Windows Admin Center. It’s free and designed to be developed out of band, shipped as a web download rather than included in the Windows Server product, so Microsoft can update it more frequently than the core OS.

Microsoft said, over time, most of the Windows administrative GUI tools will move to Windows Admin Center. It makes sense to spin up an instance of it on a management workstation, an old server or even a lightweight VM on your virtualization infrastructure. Windows Admin Center is a tool you will need to get familiar with even if you have a larger, third-party OS management tool.

How does Windows Admin Center compare with similar products on the market? Here’s a look at the pros and cons of each.

Goverlan Reach

Goverlan Reach is a systems management suite for remote administration of virtually any aspect of a Windows system that is configurable via Windows Management Instrumentation. Goverlan is a traditional fat-client Windows application, not a web app, so it runs on a regular workstation. It provides one-stop shopping for Windows administration in a reasonably well-laid-out interface. There is no Azure support.

For the extra money, you get a decent engine that allows you to automate certain IT processes and create a runbook of typical actions you would take on a system. You also get built-in session capturing and control without needing to connect to each desktop separately, as well as more visibility into software updates and patch management for not only Windows, but also major third-party software such as Chrome, Firefox and Adobe Reader.

Goverlan Reach has three editions. The Standard version is $29 per month and offers remote control functions. The Professional version costs $69 per month and includes Active Directory management and software deployment. The Enterprise version with all the advanced features costs $129 per month and includes compliance and more advanced automation abilities.

Editor’s note: Goverlan paid the writer to develop content marketing materials for its product in 2012 and 2013, but there is no ongoing relationship.

PRTG Network Monitor

Paessler’s PRTG Network Monitor tracks the uptime, health, disk space, and performance of servers and devices on your network, so you can proactively respond to issues and prevent downtime.


PRTG monitors mail servers, web servers, database servers, file servers and others. It has sensors built in for the attendant protocols of each kind of server. You can build your own sensors to monitor key aspects of homegrown applications. PRTG logs all this monitoring information so you can analyze it, build a baseline performance profile and develop ways to improve stability and performance on your network.

When looking at how PRTG stacks up against Windows Admin Center, it’s only really comparable from a monitoring perspective. The Network Monitor product offers little from a configuration standpoint. While you could install the alerting software and associated agents on Azure virtual machines in the cloud, there’s no real native cloud support; it treats the cloud virtual machines simply as another endpoint. 

It’s also a paid-for product, starting at $1,600 for 500 sensors and going all the way up to $60,000 for unlimited sensors. It does offer value and is perhaps the best monitoring suite out there from an ease-of-use standpoint, but most shops would likely choose it in addition to Windows Admin Center, not in lieu of it.

SolarWinds


SolarWinds has quite a few products under its systems management umbrella, including server and application monitoring; virtualization administration; storage resource monitoring; configuration and performance monitoring; log analysis; access rights auditing; and up/down monitoring for networks, servers and applications. While there is some ability to administer portions of Windows with the Access Rights Manager or Virtualization Manager, these SolarWinds products are heavily tilted toward monitoring, not administration.

The SolarWinds modules all start with list prices anywhere from $1,500 to $3,500, so you quickly incur a substantial expense to purchase the modules needed to administer all the different areas of your Windows infrastructure. While these products are surely more full-featured than Windows Admin Center, the delta might not be worth $3,000 to your organization. For my money, PRTG is the better value if monitoring is your goal.

Nagios

Nagios has a suite of tools to monitor infrastructure, from individual systems to protocols and applications, along with database monitoring, log monitoring and, perhaps important in today’s cloud world, bandwidth monitoring.

Nagios has long been available as an open source tool that’s very powerful, and the free version, Nagios Core, certainly has a place in any moderately complex infrastructure. The commercial versions of Nagios XI — $1,995 for standard and $3,495 for enterprise — have lots of shine and polish, but lack any sort of interface to administer systems.

The price is right, but its features still lag behind

There is clearly a place for Windows Admin Center in every Windows installation, given that it is free, very functional (although there are some bugs that will get worked out over time) and gives you a vendor-supported way of both monitoring and administering Windows.

However, Windows Admin Center lacks quite a bit of monitoring prowess and also doesn’t address all potential areas of Windows administration. There is no clear-cut winner out of all the profiled tools in this article. If anything, Windows Admin Center should be thought of as an additional tool to use in conjunction with some of these other products.


With the onset of value-based care, machine learning is making its mark

In a value-based care world, population health takes center stage.

The healthcare industry is slowly moving away from traditional fee-for-service models, where healthcare providers are reimbursed for the quantity of services rendered, and toward value-based care, which focuses on the quality of care provided. The shift in focus on quality versus quantity also shifts a healthcare organization’s focus to more effectively manage high-risk patients.

Making the shift to value-based care and better care management means looking at new data sources — the kind healthcare organizations won’t get just from the lab.

In this Q&A, David Nace, chief medical officer for San Francisco-based healthcare technology and data analytics company Innovaccer Inc., talks about how the company is applying AI and machine learning to patient data — clinical and nonclinical — to predict a patient’s future cost of care.

Doing so enables healthcare organizations to better allocate their resources by focusing their efforts on smaller groups of high-risk patients instead of the patient population as a whole. Indeed, Nace said the company is able to predict the likelihood of an individual experiencing a high-cost episode of care in the upcoming year with 52% accuracy.

What role does data play in Innovaccer’s individual future cost of care prediction model?

David Nace, chief medical officer, Innovaccer

David Nace: You can’t do anything at all around understanding a population or an individual without being able to understand the data. We all talk about data being the lifeblood of everything we want to accomplish in healthcare.

What’s most important, you’ve got to take data in from multiple sources — claims, clinical data, EHRs, pharmacy data, lab data and data that’s available through health information exchanges. Then, also [look at] nontraditional, nonclinical forms of data, like social media; or local, geographic data, such as transportation, environment, food, crime, safety. Then, look at things like availability of different community resources. Things like parks, restaurants, what we call food deserts, and bring all that data into one place. But none of that data is standardized.

How does Innovaccer implement and use machine learning algorithms in its prediction model?

Nace: Most of that information I just described — all the data sources — there are no standards around. So, you have to bring that data in and then harmonize it. You have to be able to bring it in from all these different sources, in which it’s stored in different ways, get it together in one place by transforming it, and then you have to harmonize the data into a common data model.

We’ve done a lot of work around that area. We used machine learning to recognize patterns as to whether we’ve seen this sort of data before from this kind of source, what do we know about how to transform it, what do we know about bringing it into a common data model.

Lastly, you have to be able to uniquely identify a cohort or an individual within that massive population data. You bring all that data together. You have to have a unique master patient index, and that’s been very difficult, because, in this country, we don’t have a national patient identifier.

We use machine learning to bring all that data in, transform it, get it into a common data model, and we use some very complex algorithms to identify a unique patient within that core population.
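Innovaccer has not published the matching algorithms behind its master patient index, but the underlying idea, scoring how similar two records’ demographic fields are and linking them when the score clears a threshold, can be sketched with nothing more than the Python standard library. The field names, weights and threshold below are assumptions for illustration, not the company’s model.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Normalized string similarity between two field values (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted similarity across demographic fields commonly used for patient matching."""
    weights = {"name": 0.4, "dob": 0.3, "address": 0.2, "phone": 0.1}  # illustrative weights
    return sum(w * similarity(rec_a.get(f, ""), rec_b.get(f, "")) for f, w in weights.items())


def is_same_patient(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Treat two records as the same person when the weighted score clears the threshold."""
    return match_score(rec_a, rec_b) >= threshold


# Example: an EHR record and a claims record that likely describe the same person.
ehr = {"name": "Jane A. Doe", "dob": "1980-02-14", "address": "12 Elm St", "phone": "555-0101"}
claim = {"name": "Doe, Jane", "dob": "1980-02-14", "address": "12 Elm Street", "phone": "555-0101"}

print(match_score(ehr, claim))      # high score: DOB and phone agree exactly, name and address closely
print(is_same_patient(ehr, claim))  # likely True with these illustrative weights
```

Production systems typically layer probabilistic and referential matching on top of this kind of scoring, which is part of why the problem is hard in the absence of a national patient identifier.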

How did you develop a risk model to predict an individual’s future cost of care? 


Nace: There are a couple of different sources of risk. There’s clinical risk, [and] there’s social, environmental and financial risk. And then there’s risk related to behavior. Historically, people have looked at claims data to look at the financial risk in kind of a rearview-mirror approach, and that’s been the history of risk detection and risk management.

There are models that the government uses and relies on, like CMS’ Hierarchical Condition Category [HCC] scoring, relying heavily on claims data and taking a look at what’s happened in the past and some of the information that’s available in claims, like diagnosis, eligibility and gender.

One of the things we wanted to do is, with all that data together, how do you identify risk proactively, not rearview mirror. How do you then use all of this new mass of data to predict the likelihood that someone’s going to have a future event, mostly cost? When you look at healthcare, everybody is concerned about what is the cost of care going to be. If they go back into the hospital, that’s a cost. If they need an operation, that’s a cost.

Why is predicting individual risk beneficial to a healthcare organization moving toward value-based care?

Nace: Usually, risk models are used for rearview mirror for large population risk. When the government goes to an accountable care organization or a Medicare Advantage plan and wants to say how much risk is in here, it uses the HCC model, because it’s good at saying what’s the risk of populations, but it’s terrible when you go down to the level of an individual. We wanted to get it down to the level of an individual, because that’s what humans work with.

How do social determinants of health play a role in Innovaccer’s future cost of care model?

Nace: We’ve learned in healthcare that the demographics of where you live, and the socioeconomic environment around you, really impact your outcome of care much more than the actual clinical condition itself.

As a health system, you’re starting to understand this, and you don’t want people to come back to the hospital. You want people to have good care plans that are highly tailored for them so they’re adherent, and you want to have effective strategies for managing care coordinators or managers.

Now, we have this social vulnerability index. We use AI in a similar way to test it against a population, iterate through multiple forms of regression analysis and come up with a highly specific approach to detecting a patient’s social vulnerability, down to the level of a ZIP code, around their economic and environmental risk. You can pull data off an API from Google Maps that shows food sources, crime rates, down to the level of a ZIP code. All that information, transportation methods, etc., we can integrate with all the other clinical data in that data model.

We can now take a vaster amount of data that will not only get us that clinical risk, but also the social, environmental and economic risk. Then, as a health system, you can deploy your resources carefully.
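Innovaccer’s models are proprietary, but the overall shape Nace describes, combining clinical history with ZIP-code-level social and economic indicators to estimate each individual’s probability of a high-cost episode, maps onto a standard supervised-learning setup. The sketch below uses scikit-learn with synthetic data; every feature name and coefficient is an assumption for illustration, not the company’s implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Illustrative features: clinical history plus ZIP-code-level social determinants.
X = np.column_stack([
    rng.poisson(2, n),         # chronic condition count
    rng.poisson(1, n),         # inpatient admissions in the prior year
    rng.integers(18, 90, n),   # age
    rng.random(n),             # social vulnerability index of the patient's ZIP code (0-1)
    rng.random(n),             # transportation access score (0-1)
])

# Synthetic label: the chance of a high-cost episode rises with clinical and social risk.
logits = 0.6 * X[:, 0] + 0.9 * X[:, 1] + 0.02 * X[:, 2] + 2.0 * X[:, 3] - 6.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank individuals by predicted risk so care managers can focus on the highest-risk cohort.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
print("Top-decile risk cutoff:", round(float(np.quantile(risk, 0.9)), 3))
```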

Editor’s note: Responses have been edited for brevity and clarity.


Samsung adds Z-NAND data center SSD

Samsung’s lineup of data center solid-state drives — including a Z-NAND model — introduced this week targets smaller organizations facing demanding workloads such as in-memory databases, artificial intelligence and IoT.

The fastest option in the Samsung data center SSD family — the 983 ZET NVMe-based PCIe add-in card — uses the company’s latency-lowering Z-NAND flash chips. Earlier this year, Samsung announced its first Z-NAND-based enterprise SSD, the SZ985, designed for the OEM market. The new 983 ZET SSD targets SMBs, including system builders and integrators, that buy storage drives through channel partners.

The Samsung data center SSD lineup also adds the first NVMe-based PCIe SSDs designed for channel sales in 2.5-inch U.2 and 22-mm-by-110-mm M.2 form factors. At the other end of the performance spectrum, the new entry-level 2.5-inch 860 DCT 6 Gbps SATA SSD targets customers who want an alternative to client SSDs for data center applications, according to Richard Leonarz, director of product marketing for Samsung SSDs.

Rounding out the Samsung data center SSD product family is a 2.5-inch 883 DCT SATA SSD that uses denser 3D NAND technology (which Samsung calls V-NAND) than comparable predecessor models. Samsung’s PM863 and PM863a SSDs use 32-layer and 48-layer V-NAND, respectively, but the new 883 DCT SSD is equipped with triple-level cell (TLC) 64-layer V-NAND chips, as are the 860 DCT and 983 DCT models, Leonarz said.

Noticeably absent from the Samsung data center SSD product line is 12 Gbps SAS. Leonarz said research showed SAS SSDs trending flat to downward in terms of units sold. He said Samsung doesn’t see a growth opportunity for SAS on the channel side of the business that sells to SMBs such as system builders and integrators. Samsung will continue to sell dual-ported enterprise SAS SSDs to OEMs.

The Samsung 983 ZET NVMe SSD uses the company’s latency-lowering Z-NAND flash chips.

Z-NAND-based SSD uses SLC flash

The Z-NAND technology in the new 983 ZET SSD uses high-performance single-level cell (SLC) V-NAND 3D flash technology and builds in logic to drive latency down to lower levels than standard NVMe-based PCIe SSDs that store two or three bits of data per cell.

Samsung positions the Z-NAND flash technology it unveiled at the 2016 Flash Memory Summit as a lower-cost, high-performance alternative to new 3D XPoint nonvolatile memory that Intel and Micron co-developed. Intel launched 3D XPoint-based SSDs under the brand name Optane in March 2017, and later added Optane dual inline memory modules (DIMMs). Toshiba last month disclosed its plans for XL-Flash to compete against Optane SSDs.

Use cases for Samsung’s Z-NAND NVMe-based PCIe SSDs include cache memory, database servers, real-time analytics, artificial intelligence and IoT applications that require high throughput and low latency.

“I don’t expect to see millions of customers out there buying this. It’s still going to be a niche type of solution,” Leonarz said.

Samsung claimed its SZ985 NVMe-based PCIe add-in card could reduce latency by 5.5 times over top NVMe-based PCIe SSDs. Product data sheets list the SZ985’s maximum performance at 750,000 IOPS for random reads and 170,000 IOPS for random writes, and data transfer rates of 3.2 gigabytes per second (GBps) for sequential reads and 3 GBps for sequential writes.

The new Z-NAND based 983 ZET NVMe-based PCIe add-in card is also capable of 750,000 IOPS for random reads, but the random write performance is lower at 75,000 IOPS. The data transfer rate for the 983 ZET is 3.4 GBps for sequential reads and 3 GBps for sequential writes. The 983 ZET’s latency for sequential reads and writes is 15 microseconds, according to Samsung.

Both the SZ985 and new 983 ZET are half-height, half-length PCIe Gen 3 add-in cards. Capacity options for the 983 ZET will be 960 GB and 480 GB when the SSD ships later this month. SZ985 SSDs are currently available at 800 GB and 240 GB, although a recent product data sheet indicates 1.6 TB and 3.2 TB options will be available at an undetermined future date.

Samsung’s SZ985 and 983 ZET SSDs offer significantly different endurance levels over the five-year warranty period. The SZ985 is rated at 30 drive writes per day (DWPD), whereas the new 983 ZET supports 10 DWPD with the 960 GB SSD and 8.5 DWPD with the 480 GB SSD.

Samsung data center SSD endurance

The rest of the new Samsung data center SSD lineup is rated at less than 1 DWPD. The entry-level 860 DCT SATA SSD supports 0.20 DWPD for five years or 0.34 DWPD for three years. The 883 DCT SATA SSD and 983 DCT NVMe-based PCIe SSD are officially rated at 0.78 DWPD for five years, with a three-year option of 1.30 DWPD.
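Drive writes per day converts directly into total bytes written over the warranty: capacity times DWPD times the number of days covered. A quick sketch of that arithmetic using the figures quoted above (decimal units, ignoring the decimal-versus-binary capacity distinction):

```python
def warranty_writes_tb(capacity_gb: float, dwpd: float, years: float) -> float:
    """Total terabytes that can be written over the warranty period at the rated DWPD."""
    return capacity_gb * dwpd * 365 * years / 1000.0


# 983 ZET: 960 GB at 10 DWPD over a five-year warranty.
print(round(warranty_writes_tb(960, 10.0, 5)))   # 17520 TB
# 883 DCT: 3.84 TB (3,840 GB) at 0.78 DWPD over five years.
print(round(warranty_writes_tb(3840, 0.78, 5)))  # 5466 TB
# 860 DCT: 960 GB at 0.20 DWPD over five years.
print(round(warranty_writes_tb(960, 0.20, 5)))   # 350 TB
```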

Samsung initially targeted content delivery networks with its 860 DCT SATA SSD, which is designed for read-intensive workloads. Sequential read/write performance is 550 megabytes per second (MBps) and 520 MBps, and random read/write performance is 98,000 IOPS and 19,000 IOPS, respectively, according to Samsung. Capacity options range from 960 GB to 3.84 TB.

“One of the biggest challenges we face whenever we talk to customers is that folks are using client drives and putting those into data center applications. That’s been our biggest headache for a while, in that the drives were not designed for it. The idea of the 860 DCT came from meeting with various customers who were looking at a low-cost SSD solution in the data center,” Leonarz said.

He said the 860 DCT SSDs provide consistent performance for round-the-clock operation with potentially thousands of users pinging the drives, unlike client SSDs that are meant for lighter use. The cost per GB for the 860 DCT is about 25 cents, according to Leonarz.

The 883 DCT SATA SSD is a step up, at about 30 cents per GB, with additional features such as power loss protection. The performance metrics are identical to the 860 DCT, with the exception of its higher random writes of 28,000 IOPS. The 883 DCT is better suited to mixed read/write workloads for applications in cloud data centers, file and web servers and streaming media, according to Samsung. Capacity options range from 240 GB to 3.84 TB.

The 983 DCT NVMe-PCIe SSD is geared for I/O-intensive workloads requiring low latency, such as database management systems, online transaction processing, data analytics and high performance computing applications. The 2.5-inch 983 DCT in the U.2 form factor is hot swappable, unlike the M.2 option. Capacity options are 960 GB and 1.92 TB for both form factors. Pricing for the 983 DCT is about 34 cents per GB, according to Samsung.

The 983 DCT’s sequential read performance is 3,000 MBps for each of the U.2 and M.2 983 DCT options. The sequential write performance is 1,900 MBps for the 1.92 TB U.2 SSD, 1,050 MBps for the 960 GB U.2 SSD, 1,400 MBps for the 1.92 TB M.2 SSD, and 1,100 MBps for the 960 GB M.2 SSD. Random read/write performance for the 1.92 TB U.2 SSD is 540,000 IOPS and 50,000 IOPS, respectively. The read/write latency is 85 microseconds and 80 microseconds, respectively.

The 860 DCT, 883 DCT and 983 DCT SSDs are available now through the channel, and the 983 ZET is due later this month.

Mitel targets enterprises with MiCloud Engage contact center

Mitel has released a contact-center-as-a-service platform that — unlike its other contact center offerings — is detached from its unified communications products. The over-the-top product should appeal to large organizations, which are more likely to buy their contact center and unified communications apps separately.

MiCloud Engage Contact Center, which runs in the multi-tenant public cloud of Amazon Web Services, supports voice, web chat, SMS and email channels, and integrates with Facebook Messenger and customer relationship management (CRM) software.

The MiCloud Engage platform plugs two gaps in the vendor’s cloud contact center portfolio. It scales to over 5,000 agents, significantly more than the 1,000-agent capacity of its flagship cloud platform, MiCloud Flex.

Furthermore, Mitel has traditionally bundled its UC and contact center products, a combination that appeals to the vendor’s historical customer base of small and midsize businesses. MiCloud Engage, in contrast, is available as a stand-alone offering.

Mitel hopes the new platform will help it gain a foothold among enterprises, which are more often customers of Avaya, Cisco or Genesys. It could also appeal to individual divisions or lines of business within a large organization.

Mitel continues cloud pivot ahead of acquisition

The release of MiCloud Engage comes months shy of the publicly traded company’s planned acquisition by the private equity firm Searchlight Capital Partners L.P. The $2 billion deal, announced in April, is expected to close by the year’s end.

Going private should help Mitel grow its cloud business because it will be able to focus on long-term growth rather than quarterly earnings. Following a series of recent acquisitions, the company also benefits from a relatively large install base and a broad mix of cloud UC offerings.

Mitel’s 2017 acquisition of ShoreTel made it one of the top UC-as-a-service vendors worldwide, along with 8×8 and RingCentral. Still, only 6% of Mitel’s 70 million UC seats were in the cloud at the outset of 2018: 1.1 million in the public cloud and another 3 million hosted in Mitel’s data centers.

Ultimately, MiCloud Engage could serve as a conduit to more enterprises buying Mitel’s UC products, the core of its business. Gartner ranks Mitel among the top four UC vendors, alongside Microsoft, Cisco and Avaya.

“If you can’t win the UC business, then winning the contact center business and creating a backdoor that way is a good strategy,” said Zeus Kerravala, the founder and principal analyst at ZK Research in Westminster, Mass. “Getting your foot in the door is the important piece, and that’s what they’re trying to do with [MiCloud Engage].”

Gartner Catalyst 2018: A future without data centers?

SAN DIEGO — Can other organizations do what Netflix has done — run a business without a data center? That’s the question that was posed by Gartner Inc. research vice president Douglas Toombs at the Gartner Catalyst 2018 conference.

While most organizations won’t run 100% of their IT in the cloud, the reality is that many workloads can be moved, Toombs told the audience.

“Your future IT is actually going to be spread across a number of different execution venues, and at each one of these venues you’re trading off control and choice, but you get the benefits of not having to deal with the lower layers,” he said.

Figure out the why, how much and when

When deciding why they are moving to the cloud, the “CEO drive-by strategy” — where the CEO swings in and says, “We need to move a bunch of stuff in the cloud, go make it happen” — shouldn’t be the starting point, Toombs said.

“In terms of setting out your overall organizational priorities, what we want to do is get away from having just that as the basis and we want to try to think of … the real reasons why,” Toombs said.

Increasing business agility and accessing new technologies should be some of the top reasons why businesses would want to move their applications to the cloud, Toombs said. Once they have a sense of “why,” the next thing is figuring out “how much” of their applications will make the move. For most mainstream enterprises, the sweet spot seems to be somewhere between 40% and 80% of their overall applications, he said.

Businesses then need to figure out the timeframe to make this happen. Those trying to move 50% or 60% of their apps usually give themselves about three years to accomplish that goal. If they’re more aggressive, with a target of 80%, they will need a five-year horizon, he said.


“We need to get everyone in the organization with a really important job title — could be the top-level titles like CIO, CFO, COO — also in agreement and nodding along with us, and what we suggest for this is actually codifying this into a cloud strategy document,” Toombs told the audience at Gartner Catalyst 2018.

Dissecting application risk

Once organizations have outlined their general strategy, Toombs suggested they incorporate the CIA triad of confidentiality, integrity and availability for risk analysis purposes.

These three core pillars are essential to consider when moving an app to the cloud so the organization can determine potential risk factors.

“You can take these principles and start to think of them in terms of impact levels for an application,” he said. “As we look at an app and consider a potential new execution venue for it, how do we feel about the risk for confidentiality, integrity and availability — is this kind of low, or no risk, or is it really severe?”

Assessing probable execution venues

Organizations need to think very carefully about where their applications go if they exit their data centers, Toombs said. He suggested they assess their applications one by one, moving them to other execution venues when the apps are suitable and the move won’t increase overall risk.

“We actually recommend starting with the app tier where you would have to give up the most control and look in the SaaS market,” he said. They can then look at PaaS, and if they have exhausted the PaaS options in the market, they can start to look at IaaS, he said.

However, if they have found an app that probably shouldn’t go to a cloud service but they still want to get to no data centers, organizations could talk to hosting providers that are out there — they’re happy to sell them hardware on a three-year contract and charge monthly for it — or go to a colocation provider. Even if they have put 30% of their apps in a colocation environment, they are not running data center space anymore, he said.

But if for some reason they have found an app that can’t be moved to any one of these execution venues, then they have absolutely justified and documented an app that now needs to stay on premises, he said. “It’s actually very freeing to have a no-go pile and say, ‘You know what, we just don’t think this can go or we just don’t think this is the right time for it, we will come back in three years and look at it again.'”
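Toombs’ triage order, SaaS first, then PaaS, then IaaS, then hosting or colocation, with an explicit on-premises no-go pile at the end, amounts to a simple decision cascade. The sketch below is an illustrative encoding of that order, not a Gartner tool; tying the cutoff to the CIA impact levels described earlier is an assumption made for the example.

```python
from dataclasses import dataclass

# Venues in order of how much control you give up, per the triage described above.
VENUES = ["SaaS", "PaaS", "IaaS", "Hosting/Colocation"]


@dataclass
class App:
    name: str
    confidentiality: int   # CIA impact levels: 1 = low or no risk, 3 = severe
    integrity: int
    availability: int
    viable_venues: set     # venues where a workable offering exists for this app


def target_venue(app: App, max_cloud_risk: int = 2) -> str:
    """Walk the venues in order of decreasing control given up; fall back to on premises."""
    cloud_too_risky = max(app.confidentiality, app.integrity, app.availability) > max_cloud_risk
    for venue in VENUES:
        if venue not in app.viable_venues:
            continue
        if cloud_too_risky and venue != "Hosting/Colocation":
            continue  # too risky for a cloud service, but hosting or colocation may still work
        return venue
    return "On premises (no-go pile, revisit in three years)"


apps = [
    App("expense reporting", 1, 1, 1, {"SaaS", "IaaS"}),
    App("custom trading engine", 3, 3, 2, {"IaaS", "Hosting/Colocation"}),
    App("legacy mainframe scheduler", 3, 3, 3, set()),
]
for app in apps:
    print(f"{app.name} -> {target_venue(app)}")
```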

Kilowatts as a progress metric

While some organizations say they are going to move a certain percentage of their apps to the cloud, others measure in terms of number of racks or number of data centers or square feet of data center, he said.

Toombs suggested using kilowatts of data center processing power as a progress metric. “It is a really interesting metric because it abstracts away the complexities in the technology,” he said.

It also:

  • accounts for other overhead factors such as cooling;
  • easily shows progress with first migration;
  • should be auditable against a utility bill; and
  • works well with kilowatt-denominated colocation contracts.
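Expressed as arithmetic, the metric is just the kilowatts of load retired so far divided by the starting baseline; a minimal sketch with hypothetical figures:

```python
def migration_progress_pct(baseline_kw: float, current_kw: float) -> float:
    """Percent of the original data center load that has been retired, measured in kilowatts."""
    return 100.0 * (baseline_kw - current_kw) / baseline_kw


# Hypothetical figures: an 800 kW footprint reduced to 680 kW after the first wave of moves.
print(f"{migration_progress_pct(800, 680):.0f}% migrated")  # 15% migrated
```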

“But whatever metric you pick, you want to track this very publicly over time within your organization,” he reminded the audience at the Gartner Catalyst 2018 conference. “It is going to give you a bit of a morale boost to go through your 5%, 10%, 15%, and say ‘Hey, we’re getting down the road here.'”