
Midmarket enterprises push UCaaS platform adoption

Cloud unified communications adoption is growing among midmarket enterprises as they look to improve employee communication, productivity and collaboration. Cloud offerings, too, are evolving to meet midmarket enterprise needs, according to a Gartner Inc. report on North American midmarket unified communications as a service (UCaaS).

Gartner, a market research firm based in Stamford, Conn., defines the midmarket as enterprises with 100 to 999 employees and revenue between $50 million and $1 billion. UCaaS spending in the midmarket segment reached nearly $1.5 billion in 2017 and is expected to hit almost $3 billion by 2021, according to the report. Midmarket UCaaS providers include vendors ranked in Gartner’s UCaaS Magic Quadrant report. The latest Gartner UCaaS midmarket report, however, examined North American-focused providers not ranked in the larger Magic Quadrant report, such as CenturyLink, Jive and Vonage.

But before deploying a UCaaS platform, midmarket IT decision-makers must evaluate the broader business requirements that go beyond communication and collaboration.

Evaluating the cost of a UCaaS platform

The most significant challenge facing midmarket IT planners over the next 12 months is budget constraints, according to the report. These constraints play a major role in midmarket UC decisions, said Megan Fernandez, Gartner analyst and co-author of the report.

“While UCaaS solutions are not always less expensive than premises-based solutions, the ability to acquire elastic services with straightforward costs is useful for many midsize enterprises,” she said.

Many midmarket enterprises are looking to acquire UCaaS functions as a bundled service rather than stand-alone functions, according to the report. Bundles can be more cost-effective as prices are based on a set of features rather than a single UC application. Other enterprises will acquire UCaaS through a freemium model, which offers basic voice and conferencing functionality.

“We tend to see freemium services coming into play when organizations are trying new services,” she said. “Users might access the service and determine if the freemium capabilities will suffice for their business needs.”

For some enterprises, this basic functionality will meet business requirements and offer cost savings. But other enterprises will upgrade to a paid UCaaS platform after using the freemium model to test services.

Cloud adoption
Enterprises are putting more emphasis on cloud communications services.

Addressing multiple network options

Midmarket enterprises have a variety of network configurations depending on the number of sites and access to fiber. As a result, UCaaS providers offer multiple WAN strategies to connect to enterprises. Midmarket IT planners should ensure UCaaS providers align with their companies’ preferred networking approach, Fernandez said.

Enterprises looking to keep network costs down may connect to a UCaaS platform via DSL or cable modem broadband. Enterprises with stricter voice quality requirements may pay more for an IP MPLS connection, according to the report. Software-defined WAN (SD-WAN) is also a growing trend for communications infrastructure. 

“We expect SD-WAN to be utilized in segments with requirements for high QoS,” Fernandez said. “We tend to see more requirements for high performance in certain industries like healthcare and financial services.”

Team collaboration’s influence and user preferences

Team collaboration, also referred to as workstream collaboration, offers capabilities similar to those of UCaaS platforms, such as voice, video and messaging, but its growing popularity won't affect how enterprises buy UCaaS just yet.

Fernandez said team collaboration is not a primary factor influencing UCaaS buying decisions as team collaboration is still acquired at the departmental or team level. But buying decisions could shift as the benefits of team-oriented management become more widely understood, she said.

“This means we’ll increasingly see more overlap in the UCaaS and workstream collaboration solution decisions in the future,” Fernandez said.

Intuitive user interfaces have also become an important factor in the UCaaS selection process as ease of use will affect user adoption of a UCaaS platform. According to the report, providers are addressing ease of use demands by trying to improve access to features, embedding AI functionality and enhancing interoperability among UC services.

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.
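If you're unsure which chain a disk uses, you can check from the management operating system; a minimal sketch, assuming a VM named "svtest" (a hypothetical name):

```powershell
# List each virtual hard disk attached to the VM and the controller chain
# it uses. Disks on the SCSI chain can be resized online; IDE disks cannot.
Get-VMHardDiskDrive -VMName 'svtest' |
    Select-Object ControllerType, ControllerNumber, ControllerLocation, Path
```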


Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
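The sample invocation isn't reproduced here, but it would look something like the following (the file path is a hypothetical example; substitute your own):

```powershell
# Grow a VHDX to 40 GB. The path is a hypothetical example.
Resize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -SizeBytes 40gb
```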

The VHDX that I used in the sample began life at 20GB. Therefore, growing it with Resize-VHD will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

PowerShell accepts numeric suffixes, such as gb, on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean "byte").
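You can see the suffix behavior directly at a PowerShell prompt:

```powershell
# PowerShell expands these numeric suffixes itself; any parameter that
# takes a number benefits. The multipliers are binary, not decimal.
1kb    # 1024
40gb   # 42949672960  (40 * 1024 * 1024 * 1024)
```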

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
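Get-VHD makes that check easy; a minimal sketch (the path is hypothetical):

```powershell
# Inspect the current and minimum sizes of a VHDX. The path is hypothetical.
$vhd = Get-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx'
'{0:N0} bytes current, {1:N0} bytes minimum' -f $vhd.Size, $vhd.MinimumSize
```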

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM's virtual IDE controller, turn off the virtual machine. If the VM is Saved, start it to clear the saved state; a Saved VM's disks cannot be resized.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition into the newly added space.
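The Disk Management step can also be scripted from within the guest; a minimal sketch, assuming the C: volume sits immediately before the new free space:

```powershell
# Run inside the Windows guest after growing the virtual disk.
# Rescan so the guest notices the added space, then extend C: to the
# maximum supported size. Assumes C: is adjacent to the free space.
Update-HostStorageCache
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max
```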


Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

Apache Hadoop 3.0 goes GA, adds hooks for cloud and GPUs

It may still have a place among screaming new technologies, but Apache Hadoop is neither quite as new nor quite as screaming as it once was. And the somewhat subdued debut of Apache Hadoop 3.0 reflects that.

Case in point: In 2017, the name Hadoop was removed from the title of more than one event previously known as a “Hadoop conference.” IBM dropped off the list of Hadoop distro providers in 2017, a year in which machine learning applications — and tools like Spark and TensorFlow — became the focus of many big data efforts.

So, the small fanfare that accompanied the mid-December release of Hadoop 3.0 was not too surprising. The release does hold notable improvements, however. This update to the 11-year-old Hadoop distributed data framework reduces storage requirements, lets clusters pool the latest graphics processing unit (GPU) resources, and adds a new federation scheme that enables the crucial Hadoop YARN resource manager and job scheduler to greatly expand the number of Hadoop nodes that can run in a cluster.

This latter capability could find use in Hadoop cloud applications — where many appear to be heading.

Scaling nodes to tens of thousands

“Federation for YARN means spanning out to much larger clusters,” according to Carlo Aldo Curino, principal scientist at Microsoft, an Apache Hadoop committer and a member of the Apache Hadoop Project Management Committee (PMC). With federation, in effect, a routing layer now sits in front of Hadoop Distributed File System (HDFS) clusters, he said.

Curino emphasized that he was speaking in his role as PMC member, and not for Microsoft. He did note, however, that the greater scalability is useful in clouds such as Azure. Most of “the biggest among Hadoop clusters to date have been in the low thousands of nodes, but people want to go to tens of thousands of nodes,” he said.

If Hadoop applications are going to grow to include millions of machines running YARN, federation will be needed to get there, he said. Looking ahead, Curino said he expects YARN will be a focus of updates to Hadoop.

In fact, YARN was the biggest cog in the machine that was Hadoop 2.0, released in 2013 — most particularly because it untied Hadoop from reliance on its original MapReduce processing engine. So, its central role in Hadoop 3.0 should come as no surprise.

In Curino’s estimation, YARN carries forward important new trends in distributed architecture. “YARN was an early incarnation of the serverless movement,” he said, referring to the computing scheme that has risen to some prominence on the back of Docker containers.

Curino noted that some of the important updates in Hadoop 3.0, which is now generally available and deemed production-ready, had been brewing in some previous point updates.

Opening up the Hadoop 3.0 pack

Among other new aspects of Hadoop 3.0, GPU enablement is important, according to Vinod Vavilapalli, who is Hadoop YARN and MapReduce development lead at Hortonworks, based in Santa Clara, Calif.

This is important because GPUs, as well as field-programmable gate arrays — which are also supported in Hadoop 3.0 API updates — are becoming go-to hardware for some machine learning and deep learning workloads.

[Hadoop] will be part of the critical infrastructure that powers an increasingly data-driven world.
Vinod Vavilapalli, Hadoop YARN and MapReduce development lead, Hortonworks

Without updated APIs such as those found with Hadoop 3.0, he noted, these workloads require special setups to access modern data lakes.

“With Hadoop 3.0, we are moving into larger scale, better storage efficiency, into deep learning and AI workloads, and improving interoperability with the cloud,” Vavilapalli said. In this latter regard, he added, Apache Hadoop 3.0 brings better storage efficiency via erasure coding, an alternative to Hadoop's typical replication that saves on storage space.

Will Hadoop take a back seat?

Both Curino and Vavilapalli concurred the original model of Hadoop, in which HDFS is tightly matched with MapReduce, may be fading, but that is not necessarily reason to declare this the “post-Hadoop” era, as some pundits suggest.

“One of the things I noticed about sensational pieces that say ‘Hadoop is dead’ is that it is a bit of a mischaracterization. What it is, is that people see MapReduce losing popularity. It’s not the paradigm in use anymore,” Curino said. “This was clear to the community long ago. It is why we started work on YARN.”

For his part, Vavilapalli said he sees Hadoop becoming more powerful and enabling newer use cases.

“This constant reinvention tells me that Hadoop is always going to be relevant,” he said. Even if it is something running in the background, “it will be part of the critical infrastructure that powers an increasingly data-driven world.”

To buy or build IT infrastructure focus of ONUG 2017 panel

NEW YORK — Among the most vexing questions enterprises face is whether it makes more sense to buy or build IT infrastructure. The not-quite-absolute answer, according to the Great Discussion panel at last week’s ONUG conference: It depends.

“It’s hard, because as engineers, we follow shiny objects,” said Tsvi Gal, CTO at New York-based Morgan Stanley, adding there are times when the financial services firm will build what it needs, rather than being lured by what vendors may be selling.

“If there are certain areas of the industry that have no good solution in the market, and we believe that building something will give us significant value or edge over the competition, then we will build,” he said.

This decision holds even if buying the product is cheaper than building it, he said, especially when purchased products lack the features and functions Morgan Stanley needs.

“I don’t mind spending way more money on the development side, if the return for it will be significantly higher than buying would,” he said. “We’re geeks; we love to build. But at the end of the day, we do it only for the areas where we can make a difference.”

ONUG panelists discuss buy vs. build IT infrastructure
Panelists at the Great Discussion during the ONUG 2017 fall conference

A company’s decision to buy or build IT infrastructure heavily depends on its size, talent and culture.

For example, Suneet Nandwani, senior director of cloud infrastructure and platform services at eBay, based in San Jose, Calif., said eBay’s culture as a technology company creates a definite bias toward building and developing its own IT infrastructure. As with Morgan Stanley, however, Nandwani said eBay stays close to the areas it knows.

“We often stick within our core competencies, especially since eBay competes with companies like Facebook and Netflix,” he said.

On the other side of the coin, Swamy Kocherlakota, S&P Global’s head of global infrastructure and enterprise transformation, takes a mostly buy approach, especially for supporting functions. It’s a choice based on S&P Global’s position as a financial company, where technology development remains outside the scope of its main business.

This often means working with vendors after purchase.

“In the process, we’ve discovered not everything you buy works out of the box, even though we would like it to,” Kocherlakota said.

Although he said it’s tempting to let S&P Global engineers develop the desired features, the firm prefers to go back to the vendor to build the features. This choice, he said, traces back to the company’s culture.

“You have to be part of a company and culture that actually fosters that support and can maintain [the code] in the long term,” he said.  

The questions of talent, liability and supportability

Panelists agreed building the right team of engineers was an essential factor to succeed in building IT infrastructure.

“If your company doesn’t have enough development capacity to [build] it yourself, even when you can make a difference, then don’t,” Morgan Stanley’s Gal said. “It’s just realistic.”

But for companies with the capacity to build, putting together a capable team is necessary.

“As we build, we want to have the right talent and the right teams,” eBay’s Nandwani said. “That’s so key to having a successful strategy.”

To attract the needed engineering talent, he said companies should foster a culture of innovation, acknowledging that mistakes will happen.

For Gal, having the right team means managers should do more than just manage.

“Most of our managers are player-coach, not just coach,” Gal said. “They need to be technical; they need to understand what they’re doing, and not just [be] generic managers.”

But it’s not enough to just possess the talent to build IT infrastructure; companies must be able to maintain both the talent and the developed code.

“One of the mistakes people make when building software is they don’t staff or resource it adequately for operational support afterward,” Nandwani said. “You have to have operational process and the personnel who are responding when those things screw up.”

S&P Global’s Kocherlakota agreed, citing the fallout that can occur when an employee responsible for developing important code leaves the company. Without proper documentation, the required information to maintain the code would be difficult to follow.

This means having the right support from the beginning, with well-defined processes encompassing the software development lifecycle, quality assurance and control, and code reviews.

“I would just add that when you build, it doesn’t free you from the need to document what you’re doing,” Gal said.

MobileIron, VMware can help IT manage Macs in the enterprise

As Apple computers have become more popular among business users, IT needs better ways to manage Macs in the enterprise. Vendors have responded with some new options.

The traditional problem with Macs is that they have required different management and security software than their Windows counterparts, which meant organizations had to spend more money or simply leave these devices unmanaged. New features from MobileIron and VMware aim to help IT manage Macs in a more uniform way.

“Organizations really didn’t have an acute system to secure and manage Macs as they did with their Windows environment. But now, what we are starting to see is that a large number of companies have started taking Mac a lot more seriously,” said Nicholas McQuire, vice president of enterprise research at CCS Insight.

Macs in the enterprise see uptick

Windows PCs have long dominated the business world, whereas Apple positioned Macs for designers and other creative workers, plus the education market. There are several reasons why businesses traditionally did not offer Macs to employees, including their pricing and a lack of strong management and security options. About 5% to 10% of corporate computers are Macs, but that percentage is growing, McQuire said.


With Macs growing in popularity, IT needs streamlined configuration methods.

There are a few potential reasons for the growth of Macs in the enterprise. Demand from younger workers is a big one, said Ojas Rege, chief strategy officer at MobileIron, based in Mountain View, Calif. In addition, because Macs don’t lose value as quickly as PCs, the difference in total cost of ownership between Macs and PCs isn’t as significant as it once was, he said.

“A lot of our customers tell us that Macs are key to the new generation of their workforce,” Rege said. “Another key is that the economics are improving.”

New capabilities help manage Macs in the enterprise

It is surprising how many people still think they do not need additional software to help secure Macs.
Tobias Kreidl, desktop creation and integration services team lead, Northern Arizona University

Windows has managed to stay on top in the eyes of IT because of its ability to offer more management platforms from third parties. Despite some options, such as those from Jamf, the macOS management ecosystem was very limited for a long time. But as the BYOD trend took off and shadow IT emerged, more business leaders felt they could no longer limit their employees to using Windows PCs.

VMware in August introduced updates to Workspace One, its end-user computing software, that allow IT to manage Macs the same way they would mobile devices. Workspace One will also have a native macOS client and let users enroll their Macs in unified endpoint management through a self-service portal, just like they can with smartphones and tablets.

MobileIron already supported macOS for basic device configuration and security. The latest improvements included these new Mac management features:

  • secure delivery of macOS apps through MobileIron’s enterprise app store;
  • per-app virtual private network connectivity through MobileIron Tunnel; and
  • trusted access enforcement for cloud services, such as Office 365, through MobileIron Access.

Mac security threats increase

At Northern Arizona University, the IT department is deploying Jamf Pro to manage and secure Macs, which make up more than a quarter of all client devices on campus. The rise in macOS threats over the past few years is a concern, said Tobias Kreidl, desktop creation and integration services team lead at the school in Flagstaff, Ariz.

The number of macOS malware threats increased from 819 in 2015 to 3,033 in 2016, per a report by AV-Test. And the first quarter of 2017 saw a 140% year-over-year increase in the number of different types of macOS malware, according to the report.

“It is surprising how many people still think they do not need additional software to help secure Macs,” Kreidl said. “[Apple macOS] is pretty good as it stands, but more and more efforts are being spent to find ways to circumvent Mac security, and some have been successful.”

Healthcare quality goals set for telehealth, interoperability

The quality of healthcare and health IT interoperability are continuing concerns among healthcare professionals. To address these concerns, the National Quality Forum and its telehealth committee met recently to discuss ways to measure healthcare quality and interoperability.

The National Quality Forum (NQF) was asked by the Health Department to accomplish two tasks: identify critical areas where measurement can effectively assess the healthcare quality and impact of telehealth services, and assess the current state of interoperability and its impact on quality processes and outcomes.

In a media briefing last week, NQF experts and members of the committee in charge of the two aforementioned tasks discussed the thought process behind the development of healthcare quality measures and the goal the committee hopes these measures will help achieve.

“After a comprehensive literature review conducted by NQF staff, the telehealth committee developed measurement concepts … across four distinct domains: access to care; financial impact and cost; telehealth experience for patients, care givers, care team members and others; as well as effectiveness, including system, clinical, operational and technical,” said Jason Goldwater, senior director at NQF, during the briefing.

Goldwater said that, ultimately, the following areas were identified as the highest priorities: “The use of telehealth to decrease travel, timeliness of care, actionable information, the added value of telehealth to provide evidence-based best practices, patient empowerment and care coordination.”

Those of us that live in the world of telemedicine believe not only are there quality enhancements, but there’s convenience enhancements that are going to make medicine easier to deliver.
Judd Hollander, associate dean of strategic health initiatives, Thomas Jefferson University

Judd Hollander, associate dean of strategic health initiatives at Thomas Jefferson University and a member of the NQF telehealth committee, explained that the committee wanted to begin this process of creating measures for telehealth and interoperability in healthcare by conducting an “environmental scan.”

“Where is there data and where are there data holes and what do we need to know?” Hollander said. “After we informed that and took a good look at it we started thinking, what are types of domains and subdomains and measure concepts that the evidence out there helps us illustrate but the evidence we’re lacking can also be captured? … So it was a really nice way to begin the discussion.”

Hollander added that the implications of the NQF report and the measures the committee is working on are “expected to inform policy across the entire spectrum of alternative payment models, government funded healthcare, and care funded by commercial payers because it’s just what you should be assessing to provide quality care.”

NQF’s telehealth measures: Patient experience

For healthcare to truly reap the benefits of telehealth, the industry has to focus on quality first. And to improve healthcare quality, there has to be a way to measure and report it, Hollander said.

“Those of us that live in the world of telemedicine believe not only are there quality enhancements, but there’s convenience enhancements that are going to make medicine easier to deliver,” Hollander said.

Hollander used a personal experience as an example of the benefits telehealth can bring to patients, even if a diagnosis isn’t or cannot be made via telehealth technologies.

“I had a patient who hurt his knee working in Staples, actually, at about 5:15, 5:30 in the evening. He had a prior knee injury and he had an orthopedist, but he couldn’t reach the orthopedist because their offices were closed,” Hollander said.

Without telehealth, this patient would have had to go to the emergency department, he would’ve waited hours to be seen, and then he would’ve been examined and had X-rays done, Hollander said.

Not only would this have taken a long time, it also would’ve cost this patient a lot of money, Hollander added.

Instead of going to the ER, the patient was able to connect with Hollander through JeffConnect, Jefferson University Hospitals’ app that enables patients to connect with doctors anytime, anywhere.

“I was the doc on call. We do know how to examine knees by telemedicine and we can tell with over 99% accuracy whether someone has a fracture or not and he did not,” he said.

Hollander explained that they then did a little “wilderness medicine.” Using materials lying around, the patient was splinted with yard sticks and an ace bandage and then was able to wait to see his orthopedist the next day.

“So we didn’t actually really solve his problem, but we saved him a ton of time and money; he didn’t have to go get X-rays one day, [then] have them repeated by the orthopedist who couldn’t see him [until] the next day because the systems aren’t interoperable,” Hollander said.

NQF’s telehealth measures: Rural communities

Marcia Ward, director of the Rural Telehealth Research Center at the University of Iowa and also an NQF telehealth committee member, brings a rural perspective to the telehealth conversation.

“Creating this framework we had to look across all of those different aspects of telehealth and how it could be applied. I find it particularly interesting that telehealth has been thought of as an answer for increasing access in rural healthcare … and I think that’s been one of the strongest suits,” she said during the briefing. “But now it’s developing into an urban application and I think we’ll see particular growth in that.”

Ward used the concept of travel in rural areas as an example of thinking of a unique, and maybe not always obvious, issue to address when creating telehealth measures.

“Travel is a concept that is very important, particularly in rural telehealth,” Ward said. “An example of that is there’s a telestroke program at the Medical University of South Carolina and one of the measures that they use is how many of the patients that are seen through their telestroke program at the rural hospitals are able to stay at their local rural hospital.”

This is an example of a healthcare quality measure that wouldn’t normally be seen in conventional medicine but is very appropriate for telehealth in rural areas.

“That’s a very important measure concept … able to be captured. Another one particularly important in the rural area is workforce shortages and we’re seeing evidence that telehealth programs can be implemented that help bridge that gap [and] be able to deliver services in very rural areas and have the backup from telehealth hub where there’s emergency physicians,” Ward said. “And we’re seeing evidence that telehealth, in terms of rural communities in particular, it’s really filling a particular need.”

NQF’s interoperability measures

While the experts focused mainly on telehealth during the briefing, Goldwater explained that when the committee was discussing and creating measures for interoperability they conducted several interviews to help them define guiding principles.

Goldwater said that these guiding principles include:

  • “Interoperability is more than just EHR to EHR;
  • “Various stakeholders with diverse needs are involved in the exchange and use of data, and the framework and concepts will differ based on these perspectives;
  • “The term ‘electronically exchanged information’ is more appropriate to completely fulfill the definition of interoperability;
  • “And all critical data elements should be included in the analysis of measures as interoperability increases access to information.”

Ultimately, the committee developed healthcare quality measures across four domains, Goldwater said:

  • the exchange of electronic health information, from the quality of the data content to the method of exchange;
  • the usability of the exchanged information, such as the data’s relevance and its accessibility;
  • the application of the exchanged information, such as “Is it computable?”;
  • and the impact of interoperability, such as patient safety and care coordination.

Continue on PC, Timeline features raise Windows 10 security concerns

New Windows 10 syncing features should be popular among users but could lead to IT security risks.

Microsoft’s upcoming Windows 10 Fall Creators Update will include the Continue on PC feature, which allows users to start web browsing on their Apple iPhones or Google Android smartphones and then continue where they left off on their PCs. A similar feature called Timeline, which will allow users to access some apps and documents across their smartphones and PCs, is also in the works. IT will have to pay close attention to both of these features, because linking PCs to other devices can threaten security.

“It does have the potential to be a real mess,” said Willem Bagchus, messaging and collaboration specialist at United Bank in Parkersburg, W.Va. “To pick up data on another device, you have to do it securely. This has to be properly protected.”

How Continue on PC works

Continue on PC syncs browser sessions through an app for iPhones and Android smartphones. Users must be logged into the same Microsoft account in the app and on their Windows 10 PC.

When on a webpage, smartphone users can select the Share option in the browser and choose Continue on PC, which syncs the browsing session through the app. The feature is currently available as part of a preview build leading up to the Windows 10 Fall Creators Update, and the iOS app is already available in the Apple App Store.

Microsoft did not say if the feature will allow users to continue a browsing session on their smartphone that started on their PC. Apple’s Continuity feature offers this capability, and the Google Chrome browser lets users share tabs and browsing history across multiple devices as well.  

Continue on PC could expose sensitive data when sharing web applications through synced devices, said Jack Gold, founder and principal analyst of J. Gold Associates, a mobile analyst firm in Northborough, Mass.

For example, if a user’s personal laptop that is synced to a corporate phone is stolen, the thief could access business web apps through a synced browsing session, exposing company data. If the feature is expanded to share browsing sessions from a PC to a smartphone, a thief who steals a user’s smartphone could access the web apps the employee used on their PC.

“It could be something to worry about if a user loses their phone,” Gold said. “I can’t lose that device because it can sync to my PC.”

To avoid this problem, IT could use enterprise mobility management (EMM) software to blacklist the Continue on PC app altogether, or simply prevent users from sharing the browser session through the app.   

Timeline shares security issues

Originally, Timeline was supposed to be part of the Windows 10 Fall Creators Update, but now it will come out in a preview build shortly afterward, Microsoft said.

Timeline suggests recent documents and apps a user accessed on a synced smartphone and allows the user to pull some of them up on their PC, and vice versa. Microsoft hasn’t disclosed which apps the feature will support.

This feature could also cause a security problem if a user loses their PC or smartphone and it falls into the wrong hands. Timeline is essentially a dashboard displaying every app, document and webpage the user had open across multiple devices, so someone could use a stolen device to access documents, apps and web apps that contain work data.

“Security is needed across the board,” said Bagchus, whose company plans to move to Windows 10 next year. “It absolutely has to be managed.”

EMM software should also come into play when managing this feature, he said.

IT needs to force users to have passwords on all PCs and mobile devices to protect from these instances, said Jim Davies, IT director at Ongweoweh Corp., a pallet and packing management company in Ithaca, N.Y.

“This is something that will be used by a lot of people in a lot of companies,” Davies said. “People won’t need to email themselves a link because this makes it simpler. That being said, your password is that much more important now.”

Ongweoweh Corp. plans to migrate to Windows 10 in the first quarter of 2018.

It is likely that these Windows 10 syncing features won’t be limited to smartphones, and iPads and Android tablets could gain this ability in the future, Bagchus said.

“This feature … makes productivity easier,” Bagchus said. “This will be huge.” 

Span multiple services with Office 365 data loss prevention policies

As Office 365 gains more traction among organizations of all sizes, Microsoft refines the collaboration platform’s security features to help administrators secure their perimeters. Office 365 now includes a data loss prevention feature that works across multiple services.

Administrators can enlist data loss prevention policies to scan both message text and message attachments for sensitive data, such as Social Security numbers or credit card numbers. These policies can now extend into Microsoft Office attachments and scan files in SharePoint and OneDrive for Business.
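
Under the hood, this kind of scanning amounts to pattern matching plus validation. As a rough illustration -- not Microsoft's implementation -- a minimal Python sketch of detecting candidate credit card numbers might pair a regular expression with a Luhn checksum:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: filters out strings that merely look like card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13 to 16 digits, optionally separated by spaces or hyphens
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str) -> list:
    """Return digit strings in text that pass both the pattern and the checksum."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

For instance, `find_card_numbers("Card: 4111 1111 1111 1111")` returns `["4111111111111111"]` (a well-known Visa test number). A production DLP engine layers on many more sensitive information types, confidence levels and proximity rules, but the match-then-validate shape is the same.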

Build the data loss prevention policies

Administrators can build a single data loss prevention (DLP) policy (Figure 1) in the Office 365 Security and Compliance Center to guard data and messages in SharePoint, OneDrive and Exchange, or stick with the existing DLP option in the Exchange admin center.

Office 365 DLP policy
Figure 1. Administrators can create unified data loss prevention policies through the Office 365 Security and Compliance Center.

Administrators develop data loss prevention policies from rules. Each rule has a condition and an action. Administrators can apply the policy to specific locations within Office 365.
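
Conceptually, then, a policy is a named set of rules, each pairing a condition with an action, scoped to a list of locations. A hypothetical Python sketch of that structure (the class and field names are illustrative, not an Office 365 API) might look like:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DlpRule:
    """One rule: a condition tested against content, plus an action."""
    name: str
    condition: Callable[[str], bool]  # does this content match?
    action: str                       # e.g. "block", "policy_tip", "report"

@dataclass
class DlpPolicy:
    """A policy groups rules and scopes them to specific locations."""
    name: str
    locations: List[str]              # e.g. ["Exchange", "SharePoint"]
    rules: List[DlpRule] = field(default_factory=list)

    def evaluate(self, location: str, content: str) -> List[str]:
        """Return the actions triggered by content in a covered location."""
        if location not in self.locations:
            return []
        return [r.action for r in self.rules if r.condition(content)]

# Illustrative policy: block content that appears to contain an SSN label.
policy = DlpPolicy(
    name="Financial data",
    locations=["Exchange", "SharePoint"],
    rules=[DlpRule("ssn", lambda text: "SSN:" in text, "block")],
)
```

Evaluating content in a covered location returns the actions its matching rules trigger; content in an uncovered location, such as OneDrive in this sketch, triggers none.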

To create a DLP policy, open the Office 365 Security & Compliance Admin Center, expand the Data loss prevention container and click on the Policy container. Then click on the Create a policy button.

Now choose the information to protect. As is the case in Exchange Server, the Security & Compliance Center in Office 365 contains DLP templates to assist with regulatory compliance. For example, there are templates designed for the financial services industry (Figure 2) as well as templates meant for healthcare providers. Administrators can always create a custom policy to fit organizational needs.

DLP policy templates
Figure 2. Administrators can use templates in the Office 365 Security & Compliance portal or choose the custom setting to build their own data loss prevention policies.

Name the policy

Naming the policy also means adding a description to it. In some cases, Office 365 automatically assigns a policy name, which the administrator can modify if necessary.

Choose the locations to apply the policy. By default, data loss prevention policies extend to all locations within Office 365, but administrators can also specify policy locations. In Figure 3, manual location assignments allow for finer control. Administrators can choose which Office 365 services to apply the policy to and whether to include or exclude specific SharePoint sites or OneDrive accounts. For example, it may be permissible for members of the legal team to transmit sensitive information, but not for salespeople.

DLP locations
Figure 3. An administrator can choose which services to apply the new policy to and make adjustments.

While this wizard does not expose the individual rules that make up a policy, the Advanced Settings option allows the administrator to edit the policy rules and create additional ones.

Refine the policy settings

Next, customize the types of sensitive information to protect with DLP policies. Figure 4 shows one policy that detects when a worker sends a message that shares credit card numbers outside of the organization. The administrator can configure the policy to monitor the use of other data types. Data loss prevention policies can also monitor when sensitive information gets shared within the organization.

DLP policy wizard
Figure 4. The DLP policy wizard allows administrators to customize the types of sensitive information to protect.

The wizard allows the administrator to choose an action to take when sensitive information is shared, such as displaying a policy tip, blocking the content from being shared or sending a report to someone in the organization.

After the configuration process, the wizard will ask whether to enable the policy right away or test it.

The last step in the process is to review the selections and, if everything appears to be correct, click the Create button to generate the data loss prevention policy.

Next Steps

How to craft the best DLP policies

Choose the right DLP template in Exchange 2013 SP1

The top email security gateways on the market

Essential Guide

What data loss prevention systems and tactics can do now
