
How Windows Admin Center stacks up to other management tools

Microsoft took a lot of administrators by surprise when it released Windows Admin Center, a new GUI-based management tool, last year. But is it mature enough to replace third-party offerings that handle some of the same tasks?

Windows Admin Center is a web-based management environment for Windows Server 2012 and up that exposes roughly 50% of the capabilities of the traditional Microsoft Management Console-based GUI tools. Most common services — DNS, Dynamic Host Configuration Protocol, Event Viewer, file sharing and even Hyper-V — are available within Windows Admin Center, which can be installed on a workstation, where it runs its own self-hosted web server, or on a traditional Windows Server machine acting as a gateway.

It also covers several Azure management scenarios, including managing Azure virtual machines when you link your cloud subscription to your Windows Admin Center instance.

Among its many features, the Windows Admin Center dashboard provides an overview of the selected Windows machine, including the current state of the CPU and memory.

There are a number of draws for Windows Admin Center. It's free, and it's developed out of band and shipped as a web download rather than included in the Windows Server product, so Microsoft can update it more frequently than the core OS.

Microsoft said, over time, most of the Windows administrative GUI tools will move to Windows Admin Center. It makes sense to spin up an instance of it on a management workstation, an old server or even a lightweight VM on your virtualization infrastructure. Windows Admin Center is a tool you will need to get familiar with even if you have a larger, third-party OS management tool.

How does Windows Admin Center compare with similar products on the market? Here’s a look at the pros and cons of each.

Goverlan Reach

Goverlan Reach is a remote systems management and administration suite that can administer virtually any aspect of a Windows system configurable through Windows Management Instrumentation (WMI). Goverlan is a fat-client Windows application rather than a web app, so it runs on a regular workstation. It provides one-stop shopping for Windows administration in a reasonably well-laid-out interface, but there is no Azure support.
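To give a sense of what "configurable via WMI" means in practice, here is a minimal, illustrative sketch of the kind of query a tool like Goverlan Reach automates at scale. It assumes the third-party Python wmi package is installed on a Windows host and that "REMOTE-PC" is a reachable machine name; both are stand-ins, not part of the product.

```python
# Illustrative only: the kind of Windows Management Instrumentation (WMI)
# lookups that remote administration suites wrap in a GUI.
# Assumes the third-party "wmi" package (pip install wmi) on a Windows host.
import wmi

conn = wmi.WMI(computer="REMOTE-PC")  # hypothetical remote machine name

# Read basic OS details exposed through the Win32_OperatingSystem class.
for os_info in conn.Win32_OperatingSystem():
    print(os_info.Caption, os_info.Version, os_info.LastBootUpTime)

# Check the state of a service; Goverlan-style tools expose thousands of
# such WMI classes behind a point-and-click interface.
for svc in conn.Win32_Service(Name="Spooler"):
    print(svc.Name, svc.State, svc.StartMode)
```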

For the extra money, you get a decent engine that allows you to automate certain IT processes and create a runbook of typical actions you would take on a system. You also get built-in session capturing and control without needing to connect to each desktop separately, as well as more visibility into software updates and patch management for not only Windows, but also major third-party software such as Chrome, Firefox and Adobe Reader.

Goverlan Reach has three editions. The Standard version is $29 per month and offers remote control functions. The Professional version costs $69 per month and includes Active Directory management and software deployment. The Enterprise version with all the advanced features costs $129 per month and includes compliance and more advanced automation abilities.

Editor’s note: Goverlan paid the writer to develop content marketing materials for its product in 2012 and 2013, but there is no ongoing relationship.

PRTG Network Monitor

Paessler’s PRTG Network Monitor tracks the uptime, health, disk space and performance of servers and devices on your network so you can respond to issues proactively and prevent downtime.

Video: Managing Windows Server 2019 with Windows Admin Center.

PRTG monitors mail servers, web servers, database servers, file servers and others, with built-in sensors for the attendant protocols of each kind of server. You can also build your own sensors to monitor key aspects of homegrown applications. PRTG logs all of this monitoring information so you can build a baseline performance profile and develop ways to improve stability and performance on your network.
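As a rough illustration of what a custom sensor looks like, the sketch below assumes PRTG's EXE/Script Advanced sensor type, which reads a JSON document from the script's standard output. The metric and its source are invented placeholders for whatever a homegrown application exposes.

```python
# A minimal sketch of a PRTG custom sensor script (EXE/Script Advanced style).
# The queue-depth value is a stand-in for a real application metric.
import json

def read_queue_depth():
    # Hypothetical: query your application here (database, REST endpoint, etc.).
    return 42

result = {
    "prtg": {
        "result": [
            {"channel": "Orders queued", "value": read_queue_depth()},
        ],
        "text": "Homegrown order service reporting normally",
    }
}

print(json.dumps(result))
```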

When looking at how PRTG stacks up against Windows Admin Center, it’s only really comparable from a monitoring perspective; the Network Monitor product offers little from a configuration standpoint. While you could install the alerting software and associated agents on Azure virtual machines in the cloud, there’s no real native cloud support; PRTG simply treats cloud virtual machines as another endpoint.

It’s also a paid-for product, starting at $1,600 for 500 sensors and going all the way up to $60,000 for unlimited sensors. It does offer value and is perhaps the best monitoring suite out there from an ease-of-use standpoint, but most shops would most likely choose it in addition to Windows Admin Center, not in lieu of it.

SolarWinds


SolarWinds has quite a few products under its systems management umbrella, including server and application monitoring; virtualization administration; storage resource monitoring; configuration and performance monitoring; log analysis; access rights auditing; and up/down monitoring for networks, servers and applications. While tools such as Access Rights Manager and Virtualization Manager offer some ability to administer portions of Windows, the SolarWinds products are heavily tilted toward monitoring, not administration.

The SolarWinds modules all carry list prices of anywhere from $1,500 to $3,500, so you quickly incur a substantial expense purchasing the modules needed to administer all the different areas of your Windows infrastructure. While these products are surely more full-featured than Windows Admin Center, the delta might not be worth $3,000 to your organization. For my money, PRTG is the better value if monitoring is your goal.

Nagios

Nagios has a suite of tools to monitor infrastructure, from individual systems to protocols and applications, along with database monitoring, log monitoring and, perhaps important in today’s cloud world, bandwidth monitoring.

Nagios has long been available as an open source tool that’s very powerful, and the free version, Nagios Core, certainly has a place in any moderately complex infrastructure. The commercial versions of Nagios XI — $1,995 for standard and $3,495 for enterprise — have lots of shine and polish, but lack any sort of interface to administer systems.

The price is right, but its features still lag behind

There is clearly a place for Windows Admin Center in every Windows installation, given that it is free, very functional (although there are some bugs that will get worked out over time) and gives you a vendor-supported way of both monitoring and administering Windows.

However, Windows Admin Center lacks quite a bit of monitoring prowess and also doesn’t address all potential areas of Windows administration. There is no clear-cut winner out of all the profiled tools in this article. If anything, Windows Admin Center should be thought of as an additional tool to use in conjunction with some of these other products.


With the onset of value-based care, machine learning is making its mark

In a value-based care world, population health takes center stage.

The healthcare industry is slowly moving away from traditional fee-for-service models, where healthcare providers are reimbursed for the quantity of services rendered, and toward value-based care, which focuses on the quality of care provided. The shift in focus on quality versus quantity also shifts a healthcare organization’s focus to more effectively manage high-risk patients.

Making the shift to value-based care and better care management means looking at new data sources — the kind healthcare organizations won’t get just from the lab.

In this Q&A, David Nace, chief medical officer for San Francisco-based healthcare technology and data analytics company Innovaccer Inc., talks about how the company is applying AI and machine learning to patient data — clinical and nonclinical — to predict a patient’s future cost of care.

Doing so enables healthcare organizations to better allocate their resources by focusing their efforts on smaller groups of high-risk patients instead of the patient population as a whole. Indeed, Nace said the company is able to predict the likelihood of an individual experiencing a high-cost episode of care in the upcoming year with 52% accuracy.

What role does data play in Innovaccer’s individual future cost of care prediction model?

David Nace, chief medical officer, Innovaccer

David Nace: You can’t do anything at all around understanding a population or an individual without being able to understand the data. We all talk about data being the lifeblood of everything we want to accomplish in healthcare.

What’s most important, you’ve got to take data in from multiple sources — claims, clinical data, EHRs, pharmacy data, lab data and data that’s available through health information exchanges. Then, also [look at] nontraditional, nonclinical forms of data, like social media; or local, geographic data, such as transportation, environment, food, crime, safety. Then, look at things like availability of different community resources. Things like parks, restaurants, what we call food deserts, and bring all that data into one place. But none of that data is standardized.

How does Innovaccer implement and use machine learning algorithms in its prediction model?

Nace: Most of that information I just described — all the data sources — there are no standards around. So, you have to bring that data in and then harmonize it. You have to be able to bring it in from all these different sources, in which it’s stored in different ways, get it together in one place by transforming it, and then you have to harmonize the data into a common data model.

We’ve done a lot of work around that area. We used machine learning to recognize patterns as to whether we’ve seen this sort of data before from this kind of source, what do we know about how to transform it, what do we know about bringing it into a common data model.

Lastly, you have to be able to uniquely identify a cohort or an individual within that massive population data. You bring all that data together. You have to have a unique master patient index, and that’s been very difficult, because, in this country, we don’t have a national patient identifier.

We use machine learning to bring all that data in, transform it, get it into a common data model, and we use some very complex algorithms to identify a unique patient within that core population.
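To make the patient-matching problem concrete, here is a toy sketch of record linkage across two sources. It is illustrative only, not Innovaccer's algorithm; the records, fields and threshold are invented, and real master patient index systems weigh many more attributes statistically.

```python
# Toy record-linkage sketch: decide whether records from two sources refer
# to the same patient when there is no national patient identifier.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Lowercase, strip punctuation and sort name parts so "Smith, Jon" and
    # "Jon Smith" compare as the same string.
    tokens = name.lower().replace(",", " ").replace(".", " ").split()
    return " ".join(sorted(tokens))

def likely_same_patient(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    # Exact match on date of birth plus a fuzzy match on the normalized name.
    # Production systems also weigh address, sex, payer IDs and more.
    if rec_a["dob"] != rec_b["dob"]:
        return False
    score = SequenceMatcher(None, normalize(rec_a["name"]), normalize(rec_b["name"])).ratio()
    return score >= threshold

ehr_record = {"name": "Jon A. Smith", "dob": "1969-03-02"}
claims_record = {"name": "Smith, Jon", "dob": "1969-03-02"}
print(likely_same_patient(ehr_record, claims_record))  # True
```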

How did you develop a risk model to predict an individual’s future cost of care? 


Nace: There are a couple of different sources of risk. There’s clinical risk, [and] there’s social, environmental and financial risk. And then there’s risk related to behavior. Historically, people have looked at claims data to look at the financial risk in kind of a rearview-mirror approach, and that’s been the history of risk detection and risk management.

There are models that the government uses and relies on, like CMS’ Hierarchical Condition Category [HCC] scoring, relying heavily on claims data and taking a look at what’s happened in the past and some of the information that’s available in claims, like diagnosis, eligibility and gender.

One of the things we wanted to do is, with all that data together, how do you identify risk proactively, not rearview mirror. How do you then use all of this new mass of data to predict the likelihood that someone’s going to have a future event, mostly cost? When you look at healthcare, everybody is concerned about what is the cost of care going to be. If they go back into the hospital, that’s a cost. If they need an operation, that’s a cost.
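The general modeling pattern Nace describes, scoring each individual's probability of a future high-cost episode from combined clinical, claims and social features, can be sketched generically as below. This is a hedged illustration with synthetic data and made-up feature names, not Innovaccer's actual model.

```python
# Generic sketch: train a classifier on combined features to score the
# probability of a high-cost episode in the coming year. All data synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Synthetic features: prior-year cost, chronic condition count, age,
# ER visits and a ZIP-level social vulnerability score.
X = np.column_stack([
    rng.gamma(2.0, 1500.0, n),   # prior_year_cost
    rng.poisson(1.5, n),         # chronic_conditions
    rng.integers(18, 90, n),     # age
    rng.poisson(0.4, n),         # er_visits
    rng.random(n),               # social_vulnerability
])
# Synthetic label: 1 = high-cost episode next year (toy generating process).
risk = 0.0004 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 3] + 1.2 * X[:, 4]
y = (risk + rng.normal(0, 1, n) > np.percentile(risk, 90)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Per-person probabilities; care teams work down this list rather than
# trying to manage the whole population at once.
scores = model.predict_proba(X_test)[:, 1]
print("Top decile cutoff:", np.quantile(scores, 0.9))
```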

Why is predicting individual risk beneficial to a healthcare organization moving toward value-based care?

Nace: Usually, risk models are used for rearview mirror for large population risk. When the government goes to an accountable care organization or a Medicare Advantage plan and wants to say how much risk is in here, it uses the HCC model, because it’s good at saying what’s the risk of populations, but it’s terrible when you go down to the level of an individual. We wanted to get it down to the level of an individual, because that’s what humans work with.

How do social determinants of health play a role in Innovaccer’s future cost of care model?

Nace: We’ve learned in healthcare that the demographics of where you live, and the socioeconomic environment around you, really impact your outcome of care much more than the actual clinical condition itself.

As a health system, you’re starting to understand this, and you don’t want people to come back to the hospital. You want people to have good care plans that are highly tailored for them so they’re adherent, and you want to have effective strategies for managing care coordinators or managers.

Now, we have this social vulnerability index that we have a similar way of using AI to test against a population, reiterate multiple forms of regression analysis and come up with a highly specific approach to detecting the social vulnerability of that patient down to the level of a ZIP code around their economic and environmental risk. You can pull data off an API from Google Maps that shows food sources, crime rates, down to the level of a ZIP code. All that information, transportation methods, etc., we can integrate that with all that other clinical data in that data model.

We can now take a vaster amount of data that will not only get us that clinical risk, but also the social, environmental and economic risk. Then, as a health system, you can deploy your resources carefully.

Editor’s note: Responses have been edited for brevity and clarity.


Samsung adds Z-NAND data center SSD

Samsung’s lineup of data center solid-state drives introduced this week — including a Z-NAND model — targets smaller organizations facing demanding workloads such as in-memory databases, artificial intelligence and IoT.

The fastest option in the Samsung data center SSD family — the 983 ZET NVMe-based PCIe add-in card — uses the company’s latency-lowering Z-NAND flash chips. Earlier this year, Samsung announced its first Z-NAND-based enterprise SSD, the SZ985, designed for the OEM market. The new 983 ZET SSD targets SMBs, including system builders and integrators, that buy storage drives through channel partners.

The Samsung data center SSD lineup also adds the company’s first NVMe-based PCIe SSDs designed for channel sales in 2.5-inch U.2 and 22-mm-by-110-mm M.2 form factors. At the other end of the performance spectrum, the new entry-level 2.5-inch 860 DCT 6 Gbps SATA SSD targets customers who want an alternative to client SSDs for data center applications, according to Richard Leonarz, director of product marketing for Samsung SSDs.

Rounding out the Samsung data center SSD product family is the 2.5-inch 883 DCT SATA SSD, which uses denser 3D NAND technology (which Samsung calls V-NAND) than comparable predecessor models. Samsung’s PM863 and PM863a SSDs use 32-layer and 48-layer V-NAND, respectively, but the new 883 DCT SSD is equipped with triple-level cell (TLC) 64-layer V-NAND chips, as are the 860 DCT and 983 DCT models, Leonarz said.

Noticeably absent from the Samsung data center SSD product line is 12 Gbps SAS. Leonarz said research showed SAS SSDs trending flat to downward in terms of units sold. He said Samsung doesn’t see a growth opportunity for SAS on the channel side of the business that sells to SMBs such as system builders and integrators. Samsung will continue to sell dual-ported enterprise SAS SSDs to OEMs.

The Samsung 983 ZET NVMe SSD uses the company's latency-lowering Z-NAND flash chips.

Z-NAND-based SSD uses SLC flash

The Z-NAND technology in the new 983 ZET SSD uses high-performance single-level cell (SLC) V-NAND 3D flash technology and builds in logic to drive latency down to lower levels than standard NVMe-based PCIe SSDs that store two or three bits of data per cell.

Samsung positions the Z-NAND flash technology it unveiled at the 2016 Flash Memory Summit as a lower-cost, high-performance alternative to new 3D XPoint nonvolatile memory that Intel and Micron co-developed. Intel launched 3D XPoint-based SSDs under the brand name Optane in March 2017, and later added Optane dual inline memory modules (DIMMs). Toshiba last month disclosed its plans for XL-Flash to compete against Optane SSDs.

Use cases for Samsung’s Z-NAND NVMe-based PCIe SSDs include cache memory, database servers, real-time analytics, artificial intelligence and IoT applications that require high throughput and low latency.

“I don’t expect to see millions of customers out there buying this. It’s still going to be a niche type of solution,” Leonarz said.

Samsung claimed its SZ985 NVMe-based PCIe add-in card could deliver latency 5.5 times lower than top NVMe-based PCIe SSDs. Product data sheets list the SZ985’s maximum performance at 750,000 IOPS for random reads and 170,000 IOPS for random writes, and data transfer rates of 3.2 gigabytes per second (GBps) for sequential reads and 3 GBps for sequential writes.

The new Z-NAND based 983 ZET NVMe-based PCIe add-in card is also capable of 750,000 IOPS for random reads, but the random write performance is lower at 75,000 IOPS. The data transfer rate for the 983 ZET is 3.4 GBps for sequential reads and 3 GBps for sequential writes. The 983 ZET’s latency for sequential reads and writes is 15 microseconds, according to Samsung.

Both the SZ985 and new 983 ZET are half-height, half-length PCIe Gen 3 add-in cards. Capacity options for the 983 ZET will be 960 GB and 480 GB when the SSD ships later this month. SZ985 SSDs are currently available at 800 GB and 240 GB, although a recent product data sheet indicates 1.6 TB and 3.2 TB options will be available at an undetermined future date.

Samsung’s SZ985 and 983 ZET SSDs offer significantly different endurance levels over the five-year warranty period. The SZ985 is rated at 30 drive writes per day (DWPD), whereas the new 983 ZET supports 10 DWPD with the 960 GB SSD and 8.5 DWPD with the 480 GB SSD.
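To put those DWPD ratings in perspective, the quick arithmetic below converts them into total terabytes written over the warranty period, using the common relationship total TB written = DWPD x capacity (TB) x 365 x warranty years. The figures come straight from this article.

```python
# Translate drive-writes-per-day (DWPD) ratings into total data written
# over the warranty period: TBW = DWPD x capacity_TB x 365 x years.
def endurance_tb_written(dwpd: float, capacity_tb: float, warranty_years: int = 5) -> float:
    return dwpd * capacity_tb * 365 * warranty_years

# Figures from this article (five-year warranties):
print(endurance_tb_written(30, 0.8))    # SZ985 800 GB   -> 43,800 TB written
print(endurance_tb_written(10, 0.96))   # 983 ZET 960 GB -> 17,520 TB written
print(endurance_tb_written(8.5, 0.48))  # 983 ZET 480 GB ->  7,446 TB written
```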

Samsung data center SSD endurance

The rest of the new Samsung data center SSD lineup is rated at less than 1 DWPD. The entry-level SATA 860 DCT SATA SSD supports 0.20 DWPD for five years or 0.34 DWPD for three years. The 883 DCT SATA SSD and 983 DCT NVMe-based PCIe SSD are officially rated at 0.78 DWPD for five years, with a three-year option of 1.30 DWPD.

Samsung initially targeted content delivery networks with its 860 DCT SATA SSD, which is designed for read-intensive workloads. Sequential read/write performance is 550 megabytes per second (MBps) and 520 MBps, and random read/write performance is 98,000 IOPS and 19,000 IOPS, respectively, according to Samsung. Capacity options range from 960 GB to 3.84 TB.

“One of the biggest challenges we face whenever we talk to customers is that folks are using client drives and putting those into data center applications. That’s been our biggest headache for a while, in that the drives were not designed for it. The idea of the 860 DCT came from meeting with various customers who were looking at a low-cost SSD solution in the data center,” Leonarz said.

He said the 860 DCT SSDs provide consistent performance for round-the-clock operation with potentially thousands of users pinging the drives, unlike client SSDs that are meant for lighter use. The cost per GB for the 860 DCT is about 25 cents, according to Leonarz.

The 883 DCT SATA SSD is a step up, at about 30 cents per GB, with additional features such as power loss protection. The performance metrics are identical to the 860 DCT, with the exception of its higher random writes of 28,000 IOPS. The 883 DCT is better suited to mixed read/write workloads for applications in cloud data centers, file and web servers and streaming media, according to Samsung. Capacity options range from 240 GB to 3.84 TB.

The 983 DCT NVMe-PCIe SSD is geared for I/O-intensive workloads requiring low latency, such as database management systems, online transaction processing, data analytics and high performance computing applications. The 2.5-inch 983 DCT in the U.2 form factor is hot swappable, unlike the M.2 option. Capacity options are 960 GB and 1.92 TB for both form factors. Pricing for the 983 DCT is about 34 cents per GB, according to Samsung.

The 983 DCT’s sequential read performance is 3,000 MBps for each of the U.2 and M.2 983 DCT options. The sequential write performance is 1,900 MBps for the 1.92 TB U.2 SSD, 1,050 MBps for the 960 GB U.2 SSD, 1,400 MBps for the 1.92 TB M.2 SSD, and 1,100 MBps for the 960 GB M.2 SSD. Random read/write performance for the 1.92 TB U.2 SSD is 540,000 IOPS and 50,000 IOPS, respectively. The read/write latency is 85 microseconds and 80 microseconds, respectively.

The 860 DCT, 883 DCT and 983 DCT SSDs are available now through the channel, and the 983 ZET is due later this month.

Mitel targets enterprises with MiCloud Engage contact center

Mitel has released a contact-center-as-a-service platform that — unlike its other contact center offerings — is detached from its unified communications products. The over-the-top product should appeal to large organizations, which are more likely to buy their contact center and unified communications apps separately.

MiCloud Engage Contact Center, which runs in the multi-tenant public cloud of Amazon Web Services, supports voice, web chat, SMS and email channels, and integrates with Facebook Messenger and customer relationship management (CRM) software.

The MiCloud Engage platform plugs two gaps in the vendor’s cloud contact center portfolio. It scales to over 5,000 agents, significantly more than the 1,000-agent capacity of its flagship cloud platform, MiCloud Flex.

Furthermore, Mitel has traditionally bundled its UC and contact center products, a combination that appeals to the vendor’s historical customer base of small and midsize businesses. MiCloud Engage, in contrast, is available as a stand-alone offering.

Mitel hopes the new platform will help it gain a foothold among enterprises, which are more often customers of Avaya, Cisco or Genesys. It could also appeal to individual divisions or lines of business within a large organization.

Mitel continues cloud pivot ahead of acquisition

The release of MiCloud Engage comes months shy of the publicly traded company’s planned acquisition by the private equity firm Searchlight Capital Partners L.P. The $2 billion deal, announced in April, is expected to close by the year’s end.

Going private should help Mitel grow its cloud business because it will be able to focus on long-term growth rather than quarterly earnings. Following a series of recent acquisitions, the company also benefits from a relatively large install base and a broad mix of cloud UC offerings.

Mitel’s 2017 acquisition of ShoreTel made it one of the top UC-as-a-service vendors worldwide, along with 8×8 and RingCentral. Still, only 6% of Mitel’s 70 million UC seats were in the cloud at the outset of 2018: 1.1 million in the public cloud and another 3 million hosted in Mitel’s data centers.

Ultimately, MiCloud Engage could serve as a conduit to more enterprises buying Mitel’s UC products, the core of its business. Gartner ranks Mitel among the top four UC vendors, alongside Microsoft, Cisco and Avaya.

“If you can’t win the UC business, then winning the contact center business and creating a backdoor that way is a good strategy,” said Zeus Kerravala, the founder and principal analyst at ZK Research in Westminster, Mass. “Getting your foot in the door is the important piece, and that’s what they’re trying to do with [MiCloud Engage].”

Gartner Catalyst 2018: A future without data centers?

SAN DIEGO — Can other organizations do what Netflix has done — run a business without a data center? That’s the question that was posed by Gartner Inc. research vice president Douglas Toombs at the Gartner Catalyst 2018 conference.

While most organizations won’t run 100% of their IT in the cloud, the reality is that many workloads can be moved, Toombs told the audience.

“Your future IT is actually going to be spread across a number of different execution venues, and at each one of these venues you’re trading off control and choice, but you get the benefits of not having to deal with the lower layers,” he said.

Figure out the why, how much and when

When deciding why they are moving to the cloud, the “CEO drive-by strategy” — where the CEO swings in and says, “We need to move a bunch of stuff in the cloud, go make it happen,” — shouldn’t be the starting point, Toombs said.

“In terms of setting out your overall organizational priorities, what we want to do is get away from having just that as the basis and we want to try to think of … the real reasons why,” Toombs said.

Increasing business agility and accessing new technologies should be some of the top reasons why businesses would want to move their applications to the cloud, Toombs said. Once they have a sense of “why,” the next thing is figuring out “how much” of their applications will make the move. For most mainstream enterprises, the sweet spot seems to be somewhere between 40% and 80% of their overall applications, he said.

Businesses then need to figure out the timeframe to make this happen. Those trying to move 50% or 60% of their apps usually give themselves about three years to try and accomplish that goal, he said. If they’re more aggressive — with a target of 80% — they will need a five-year horizon, he said.


“We need to get everyone in the organization with a really important job title — could be the top-level titles like CIO, CFO, COO — also in agreement and nodding along with us, and what we suggest for this is actually codifying this into a cloud strategy document,” Toombs told the audience at Gartner Catalyst 2018.

Dissecting application risk

Once organizations have outlined their general strategy, Toombs suggested they incorporate the CIA triad of confidentiality, integrity and availability for risk analysis purposes.

These three core pillars are essential to consider when moving an app to the cloud so the organization can determine potential risk factors.

“You can take these principles and start to think of them in terms of impact levels for an application,” he said. “As we look at an app and consider a potential new execution venue for it, how do we feel about the risk for confidentiality, integrity and availability — is this kind of low, or no risk, or is it really severe?”
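A simple way to operationalize that advice is to score each application's confidentiality, integrity and availability impact for a candidate venue and rank the portfolio. The sketch below is a toy illustration with invented applications and levels, not a Gartner methodology.

```python
# Toy CIA-triad impact assessment: rate each app's confidentiality, integrity
# and availability risk for a candidate venue, then sort by total impact.
LEVELS = {"none": 0, "low": 1, "moderate": 2, "severe": 3}

apps = {
    "marketing-site": {"confidentiality": "low", "integrity": "low", "availability": "moderate"},
    "payroll": {"confidentiality": "severe", "integrity": "severe", "availability": "moderate"},
    "research-data": {"confidentiality": "severe", "integrity": "moderate", "availability": "low"},
}

def impact_score(ratings: dict) -> int:
    return sum(LEVELS[level] for level in ratings.values())

# Lowest-impact apps are natural first candidates to leave the data center.
for name, ratings in sorted(apps.items(), key=lambda kv: impact_score(kv[1])):
    print(f"{name}: {impact_score(ratings)}")
```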

Assessing probable execution venues

Organizations need to think very carefully about where their applications go if they exit their data centers, Toombs said. He suggested they assess their applications one by one, moving them to other execution venues when the apps are capable of running there without increasing overall risk.

“We actually recommend starting with the app tier where you would have to give up the most control and look in the SaaS market,” he said. They can then look at PaaS, and if they have exhausted the PaaS options in the market, they can start to look at IaaS, he said.

However, if they have found an app that probably shouldn’t go to a cloud service but they still want to get to no data centers, organizations could talk to hosting providers that are out there — they’re happy to sell them hardware on a three-year contract and charge monthly for it — or go to a colocation provider. Even if they have put 30% of their apps in a colocation environment, they are not running data center space anymore, he said.

But if for some reason they have found an app that can’t be moved to any one of these execution venues, then they have absolutely justified and documented an app that now needs to stay on premises, he said. “It’s actually very freeing to have a no-go pile and say, ‘You know what, we just don’t think this can go or we just don’t think this is the right time for it, we will come back in three years and look at it again.'”

Kilowatts as a progress metric

While some organizations say they are going to move a certain percentage of their apps to the cloud, others measure in terms of number of racks or number of data centers or square feet of data center, he said.

Toombs suggested using kilowatts of data center processing power as a progress metric. “It is a really interesting metric because it abstracts away the complexities in the technology,” he said.

It also:

  • accounts for other overhead factors such as cooling;
  • easily shows progress with first migration;
  • should be auditable against a utility bill; and
  • works well with kilowatt-denominated colocation contracts.
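A worked example of the kilowatt metric makes the appeal clear: progress is just retired load divided by the baseline, regardless of how the underlying technology is sliced. The numbers below are invented for illustration.

```python
# Sketch of the kilowatt-based progress metric: track decommissioned data
# center load against a baseline instead of counting apps, racks or square feet.
baseline_kw = 400.0                            # total IT load when the program started
decommissioned_kw = [12.5, 30.0, 18.0, 45.5]   # load retired after each migration wave

retired = sum(decommissioned_kw)
progress = retired / baseline_kw * 100
print(f"{retired:.1f} kW retired of {baseline_kw:.0f} kW ({progress:.1f}% of the footprint)")
```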

“But whatever metric you pick, you want to track this very publicly over time within your organization,” he reminded the audience at the Gartner Catalyst 2018 conference. “It is going to give you a bit of a morale boost to go through your 5%, 10%, 15%, and say ‘Hey, we’re getting down the road here.'”

Microsoft awards grant to Tribal Digital Village and Numbers4Health to expand internet access and solutions for rural and underserved communities in California – Stories

The grant will provide broadband access and telehealth solutions in Valley Center and Compton, California

REDMOND, Wash. — Aug. 1, 2018 — On Wednesday, Microsoft Corp. announced it selected Tribal Digital Village and Numbers4Health as winners of its third annual Airband Grant Fund to help bring broadband internet access to rural and underserved communities. As two of eight winners, Tribal Digital Village (TDVNet) will help bring broadband to tribal land in the rural community of Valley Center, California, and Numbers4Health will deploy a solution in partnership with internet service providers to help support telemedicine and improve healthcare outcomes in Compton, California. The Airband Grant Fund is part of the Microsoft Airband Initiative, which aims to help close the broadband access gap in rural America by 2022.

“Tribal Digital Village and Numbers4Health are working to ensure the citizens of Valley Center and Compton have the broadband access they need to connect and compete with their more urban neighbors and access critical telehealth solutions,” said Shelley McKinley, Microsoft’s head of Technology and Corporate Responsibility. “Their use of innovative technologies like TV white spaces will help address the broadband and healthcare gap in California.”

The Microsoft Airband Grant Fund seeks to spark innovation to overcome barriers to affordable internet access, through support of high-potential, early-stage startups creating innovative new technologies, services and business models. This year’s grantees receive cash investments, access to technology, mentoring and networking opportunities.

“It’s truly a benefit when a corporation such as Microsoft focuses on scaling the reach of new technologies, like TV white spaces, to solve for the hardest-to-reach tribal communities,” said Matthew Rantanen, director, TDVNet. “Microsoft’s investment in projects that are uniquely solving these connectivity issues on the ground, like TDVNet, is essential in stimulating creativity and permanently fixing the broadband access gap.”

“The best way to manage healthcare costs and improve health outcomes is to treat injury and illness as fast as possible,” said Peg Molloy, managing director, Numbers4Health. “Numbers4Health puts health information software and technology at schools where injured student athletes can be quickly assessed. Microsoft’s Airband Grant Fund is helping us make that happen.”

Broadband is the electricity of the 21st century. It is a necessity to start and grow a small business and take advantage of advances in agriculture, telemedicine and education. In the United States, more than 24 million Americans lack broadband access, including 19.4 million people living in rural areas.

Below is a list of this year’s Microsoft Airband Grant Fund recipients. More about the Microsoft Airband Grant Fund can be found here.

About Tribal Digital Village

Tribal Digital Village, a tribal-owned ISP based in Valley Center, California, has developed hybrid wireless networks to solve last mile connectivity challenges and enable tribal members to deliver community-based networks.

About Numbers4Health

Numbers4Health is a Colorado-based startup that provides a collection of tools to encourage increased use of telehealth solutions to drive positive change and better healthcare outcomes. The system operates across Windows, Android, and iOS environments.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, +1 (425) 638-7777, [email protected]

Numbers4Health, Peg Molloy, managing director, [email protected]

Tribal Digital Village, Matthew R. Rantanen, director, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

8×8 X Series combines UC and contact center

Unified-communications-as-a-service vendor 8×8 pushed further into the cloud contact center market this week with the release of X Series, an offering that combines voice, video, collaboration and contact center functions in a single platform.

Combining UC and contact center makes it easier for agents to get in touch with the right people when handling customer queries, said Meghan Keough, vice president of product marketing at 8×8, based in San Jose, Calif. For example, a company could set up shared rooms within a team collaboration app where agents and knowledge workers can chat or video conference.

The 8×8 X Series will also help companies better track customer contacts, because the same back-end infrastructure will handle calls to a local retail store and the customer service line at headquarters, Keough said. 

8×8 highlighted the platform’s ability to federate chats between leading team collaboration apps, such as Slack, Microsoft Teams and Cisco Webex Teams, allowing users of those cloud services to communicate with each other from their respective interfaces.

Technology acquired in 8×8’s 2017 acquisition of Sameroom is powering that federation and is available as a stand-alone product. The vendor also released its collaboration platform, 8×8 Team Messaging, in beta this week, with features such as persistent chat rooms, presence and file sharing.

The vendor is offering several subscription tiers for the 8×8 X Series. The more expensive plans include calling capabilities in 47 countries, as well as AI features, such as speech analytics.

Cloud fuels convergence of UC, contact center in 8×8 X Series

UC and contact center technologies used to live in “parallel universes,” said Jon Arnold, principal analyst of Toronto-based research and analysis firm J Arnold & Associates. But the cloud delivery model has made it easier to combine the platforms, which lets customers use the same over-the-top service for geographically separate office locations.

Many UCaaS vendors have added contact centers to their cloud platforms in recent years. While some, including 8×8, developed or acquired contact center suites, others — such as RingCentral and Fuze — partner with contact-center-as-a-service specialists, like Five9 and Nice InContact.

Legacy vendors are also taking steps to enhance their cloud contact center offerings. Cisco is planning to use the CC-One cloud platform it recently acquired from BroadSoft to target the midmarket, for example. Avaya, meanwhile, bought contact-center-as-a-service provider Spoken Communications earlier this year to fill a gap in its portfolio.

For many businesses, a cloud subscription to the 8×8 X Series will be cheaper than purchasing UC and contact center platforms separately, analysts said. Also, 8×8’s multi-tiered pricing model should appeal to organizations that are looking to transition to the cloud gradually.

8×8 is not the only vendor capable of offering integrated UC and contact center services, Arnold said. But the vendor has done a good job of marketing and packaging its products to make it easy for buyers and channel partners, he said.

“It’s all part of one large integrated family of services, and you can cherry-pick along the way what level is best for you,” Arnold said of the 8×8 X Series. “So, it kind of simplifies the roadmap [to the cloud] for companies.”

Experts skeptical an AWS switch is coming

Industry experts said AWS has no need to build and sell a white box data center switch as reported last week but could help customers by developing a dedicated appliance for connecting a private data center with the public cloud provider.

The Information reported last Friday that AWS was considering whether to design open switches for an AWS-centric hybrid cloud. The AWS switch would compete directly with Arista, Cisco and Juniper Networks and could be available within 18 months if AWS went through with the project. AWS declined to comment.

Industry observers said this week the report could be half right. AWS customers could use hardware dedicated to establishing a network connection to the service provider, but that device is unlikely to be an AWS switch.

“A white box switch in and of itself doesn’t help move workloads to the cloud, and AWS, as you know, is in the cloud business,” said Brad Casemore, an analyst at IDC.

What AWS customers could use isn’t an AWS switch, but hardware designed to connect a private cloud to the infrastructure-as-a-service provider, experts said. Currently, AWS’ software-based Direct Connect service for the corporate data center is “a little kludgy today and could use a little bit of work,” said an industry executive who requested his name not be used because he works with AWS.

“It’s such a fragile and crappy part of the Amazon cloud experience,” he said. “The Direct Connect appliance is a badly needed part of their portfolio.”

AWS could also use a device that provides a dedicated connection to a company’s remote office or campus network, said John Fruehe, an independent analyst.  “It would speed up application [service] delivery greatly.”

Indeed, Microsoft recently introduced the Azure Virtual WAN service, which connects the Azure cloud with software-defined WAN systems that serve remote offices and campuses. The systems manage traffic through multiple network links, including broadband, MPLS and LTE.

Connectors to AWS, Google, Microsoft clouds

For the last couple of years, AWS and its rivals Google and Microsoft have been working with partners on technology to ease the difficulty of connecting to their respective services.

In October 2016, AWS and VMware launched an alliance to develop the VMware Cloud on AWS. The platform would essentially duplicate on AWS a private cloud built with VMware software. As a result, customers of the vendors could use a single set of tools to manage and move workloads between both environments.

A year later, Google announced it had partnered with Cisco to connect Kubernetes containers running on Google Cloud with Cisco’s hyper-converged infrastructure, called HyperFlex. Cisco would also provide management tools and security for the hybrid cloud system.

Microsoft, on the other hand, offers a hybrid cloud platform called the Azure Stack. The software runs on third-party hardware and shares its code, APIs and management portal with Microsoft’s Azure public cloud to create a common cloud-computing platform. Microsoft hardware partners for Azure Stack include Cisco, Dell EMC and Hewlett Packard Enterprise.

Notre Dame uses N2WS Cloud Protection Manager for backup

Coinciding with its decision to eventually close its data center and migrate most of its workloads to the public cloud, the University of Notre Dame’s IT team switched to cloud-native data protection.

Notre Dame, based in Indiana, began its push to move its business-critical applications and workloads to Amazon Web Services (AWS) in 2014. Soon after, the university chose N2WS Cloud Protection Manager to handle backup and recovery.

Now, 80% of the applications used daily by faculty members and students, as well as the data associated with those services, lives on the cloud. The university protects more than 600 AWS instances, and that number is growing fast.

In a recent webinar, Notre Dame systems engineer Aaron Wright talked about the journey of moving a whopping 828 applications to the cloud, and protecting those apps and their data.  

N2WS, which was acquired by Veeam earlier this year, is a provider of cloud-native, enterprise backup and disaster recovery for AWS. The backup tool is available through the AWS Marketplace.

Wright said Notre Dame’s main impetus for migrating to the cloud was to lower costs. Moving services to the cloud would reduce the need for hardware. Wright said the goal is to eventually close the university’s on-premises primary data center.

“We basically put our website from on premises to the AWS account and transferred the data, saw how it worked, what we could do. … As we started to see the capabilities and cost savings [of the cloud], we were wondering what we could do to put not just our ‘www’ services on the cloud,” he said.

Wright said Notre Dame plans to move 90% of its applications to the cloud by the end of 2018. “The data center is going down as we speak,” he said.


As a research organization that works on projects with U.S. government agencies, Notre Dame owns sensitive data. Wright saw the need for centralized backup software to protect that data, but he could not find many good commercial options for protecting cloud data until he came across N2WS Cloud Protection Manager through the AWS Marketplace.

“We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers,” he said. By comparison, Wright said his team deployed Cloud Protection Manager in less than an hour.

Wright said N2WS Cloud Protection Manager has rescued Notre Dame’s data at least twice since the installation. One incident came when Linux machines failed to boot after a patch was applied, and engineers restored data from snapshots within five minutes. Wright said his team used the snapshots to find and detach a corrupted Amazon Elastic Block Store volume, and then manually created and attached a new volume.
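For readers unfamiliar with what such a recovery involves, the raw AWS operations behind it look roughly like the boto3 sketch below. This is illustrative only; N2WS Cloud Protection Manager automates these steps, and the resource IDs, region and device name are placeholders.

```python
# Illustrative EBS recovery with boto3: detach a corrupted volume, create a
# replacement from a snapshot and attach it. All IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

CORRUPTED_VOLUME = "vol-0123456789abcdef0"
GOOD_SNAPSHOT = "snap-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"
AVAILABILITY_ZONE = "us-east-1a"

# Detach the corrupted EBS volume from the instance.
ec2.detach_volume(VolumeId=CORRUPTED_VOLUME, InstanceId=INSTANCE_ID)

# Create a replacement volume from the last known-good snapshot and wait
# until it is available.
new_vol = ec2.create_volume(SnapshotId=GOOD_SNAPSHOT, AvailabilityZone=AVAILABILITY_ZONE)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# Attach the restored volume where the old one was mounted.
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device="/dev/xvdf")
```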

In another incident, Wright said the granularity of the N2WS Cloud Protection Manager backup capabilities proved valuable.

“Back in April-May 2018, we had to do a single-file restore through Cloud Protection Manager. Normally, we would have to have taken the volume and recreated a 300-gig volume,” he said. Locating and restoring that single file so quickly allowed him to resolve the incident within five minutes.