Convenience: Driver of BI innovation

Allaa “Ella” Hilal is among that rare breed of computer experts who straddle the academic and commercial worlds. As director of data at Ottawa-based Shopify, Hilal oversees data product development for the e-commerce company’s international and larger merchants, also known as Plus customers. She is also an adjunct associate professor in the Centre for Pattern Analysis and Machine Intelligence at the University of Waterloo in Ontario, where she earned a Ph.D. in electrical and computer engineering.

An expert in data intelligence, wireless sensor networks and autonomous systems, Hilal is among the featured speakers at the Real Business Intelligence Conference on June 27 to 28 in Cambridge, Mass.  Here, Hilal discusses what’s driving business intelligence (BI) innovation today and some of the pitfalls companies should be aware of.

What is driving BI innovation today?

Ella Hilal: First of all, in this day and age, companies are creating more and more products to deliver customer convenience. This convenience ends up saving time, which ties to money. When we become more efficient, whether it’s in our IT systems or in our daily commute, we gain moments that we can spend on something else. We can spend more time with our families and loved ones, or put that time and those resources into the things we love or care about.

There is this immediate need and craving for more efficiency and convenience from the customer side. And businesses are all aware of this craving. They are trying to think about what they can do with the data that already exists within their systems or the data being collected from IoT, which they know is valuable. The power of BI lies in the fact that it can take all of these different data sources and derive valuable insights to drive business decisions and data products that empower customers and the business in general.

There are many methodologies of how you can apply this to your business, and I plan to discuss some methodologies during my talk at the Real Business Intelligence Conference.

Companies have been doing business intelligence for a long time; they’ve had to figure out which data is useful and which is not for their businesses. What’s different about capitalizing on data generated from technologies like IoT and smart systems?

Hilal: Generally, only 12% of company data that is analyzed today is critical to a business — the rest is either underutilized or untapped. If we think we’re doing such a good job with the analytics we have today, imagine applying these efforts across all the data available in your business. At Shopify, we work to identify the pain points of running a business and use data to provide value to the merchants so they have a better experience as entrepreneurs.

So, there is huge value we can mine and surface. And when we talk about advanced analytics, we’re not talking about just basic business analytics; we’re talking also about applying AI, machine learning, prediction, forecasting and even prescriptive analytics.

Most CIOs are acutely aware that AI and advanced analytics should be part of a BI innovation strategy. But even big companies are having trouble finding skilled people to do this work.

Hilal: It’s a problem every company will face, because skilled data scientists are still scarce compared to the need. One challenge is that the people who have the technical abilities to do this strong analytical work don’t always have the business acumen that an experienced data scientist needs. They might be very smart in doing sophisticated analysis, but if we don’t tie that to business acumen, they fail to communicate the business value and to give decision-makers useful insights. Furthermore, the lack of business acumen makes it challenging to build data products you can utilize or sell. So, you need to build the right kind of team.

Community and university collaborations are one of the strongest approaches that big companies are adopting; you can see that Google, Uber and Shopify, for example, are all partnering with university research labs and reaping the benefits from a technical perspective. They have the technical team and the business acumen team, which then brings the work in-house to focus on data analytics products. So, you get to bridge the gap between this amazing research initiative and the productization of the results.

Another benefit is that with these partnerships, researchers with very strong technical AI and statistical backgrounds can also develop business acumen, because they are working closely with product managers and production teams. This is definitely a longer-term strategy. Wearing my research hat, I can say that universities are also working hard to introduce programs with a mix of computer science and machine learning, programs with a good mix of the old pillars of data science and new approaches.

So, companies need to come up with new frameworks for capitalizing on data. Are there pitfalls companies want to keep in mind?

Hilal: You’ll hear me say this time and time again: We all need to have a sense of responsible innovation. We’re in this industrial race to build really good products that can succeed in the market, and we need to keep in mind that we are building these products for ourselves, as well as for others.

When we create these products, it is the distributed responsibility of all of us to make sure that we embed our morals and ethics in them, making sure they are secure, they are private, they don’t discriminate. At Shopify, we are always asking ourselves, ‘Will this close or open a door for a merchant?’ It is not enough that our products are functional; they have to maintain certain ethical standards, as well.

We’ve reported on how the IoT space may pose a threat because developers are under such pressure to get these products to market that considerations like security and ethics and who owns the data are an afterthought.

Hilal: We should not be putting anything out there that we wouldn’t want in our own homes. But this is not just about AI or IoT. Whether it is a piece of software or hardware system, we need to make sure that security is not a bolt-on, or that privacy is fixed after the fact with a new policy statement — these things need to be done early on and need to be thought of before and throughout the production process.

DRaaS solution: US Signal makes rounds in healthcare market

A managed service provider’s disaster-recovery-as-a-service offering is carving a niche among healthcare market customers, including Baystate Health System, a five-hospital medical enterprise in western Massachusetts.

The DRaaS solution from US Signal, an MSP based in Grand Rapids, Mich., is built on Zerto’s disaster recovery software, US Signal’s data center capability and the company’s managed services. The offering is designed to work in VMware vCenter Server and Microsoft System Center environments. One target market is healthcare.

“We have several healthcare facilities … all across the Midwest using this solution,” said Jerry Clark, director of cloud sales development at US Signal. The DRaaS solution meets HIPAA standards, according to the company.

Clark said many hospitals — and organizations in other industries, for that matter — are searching for ways to avoid the investment in duplicate hardware traditional DR approaches require. With DRaaS, hardware becomes the service provider’s issue. Instead of paying for hardware upfront, the customer pays a monthly management fee to the DRaaS provider. The approach has expanded the channel opportunity in DR.

“Enterprises … run into the same situation: ‘Do we spend all this Capex on disaster recovery hardware that may or may not ever get used?'” Clark noted. “A DRaaS solution makes it much more economical.”

Chart: anticipated budget growth across various IT sectors. One-third of the respondents to TechTarget’s IT Priorities survey identified disaster recovery as an area for budget growth.

Baystate Health adopts DRaaS solution

US Signal found an East Coast customer, Baystate Health, based in Springfield, Mass., through VertitechIT, a US Signal consulting partner located in nearby Holyoke, Mass.


VertitechIT helped Baystate Health launch a software-defined data center initiative. The implementation uses the entire VMware stack across three active data centers. The three-node arrangement provides local data replication, but David Miller, senior IT director and CTO at Baystate Health, said an outage in 2016 knocked out all three sites — contrary to design assumptions — for 10 hours.

Miller said his organization had been looking into some form of remote replication and high availability but had yet to land a good solution. The downtime event, however, increased the urgency of finding one.

“We realized we had to do something now rather than later,” Miller said.


VertitechIT introduced US Signal to Baystate Health. The companies met in VertitechIT’s corporate office, where US Signal proposed its DRaaS solution. For the service, US Signal deploys Zerto’s IT Resilience Platform, specifically Zerto Virtual Manager and Virtual Replication Appliance. The software installed in the customer source environment replicates data writes for each protected virtual machine to the DR target site, in this case US Signal’s Grand Rapids data center. An MPLS link connects Baystate Health to the Michigan facility.

The remote replication service provides the benefit of geodiversity, according to the companies. Baystate Health’s data centers are all in the Springfield area.

Video: The CIO of Christian Brothers Services discusses the company’s infrastructure partnership with US Signal.

US Signal’s DRaaS solution also includes a playbook, which documents the steps Baystate Health IT personnel should take to fail over to the disaster recovery site in the event of an outage. In addition, US Signal’s DRaaS package provides two annual DR tests. The DRaaS provider also tests failover before the DR plan goes into effect and documents that test in the playbook, Clark noted.

Miller said the DR service, which went live about a year ago, provides a recovery point objective (RPO) of “less than a couple of minutes” for Baystate Health’s PeopleSoft system, one of the healthcare provider’s tier-one applications. The recovery time objective (RTO) is less than two hours. RPO and RTO characteristics differ according to the application and its criticality.

Initially, the DRaaS solution covered a handful of apps, but the list of protected systems has expanded over the past 12 months, Miller said.

A DRaaS ‘showcase’

Myles Angell, executive project officer at VertitechIT, said the Baystate Health deployment has become “a showcase” when meeting with potential clients that have similar DR challenges.


“We’re talking to other hospitals about it,” he said.

Other organizations interested in DRaaS should pay close attention to their application portfolios, however. Angell said businesses need to have a thorough understanding of applications before embarking on a DR strategy.

“To successfully build a disaster recovery option — and have confidence in the execution — relies on complete documentation of the application’s running state, dependencies and any necessary changes that would need to be executed at the time of a DR cut over,” he explained. “These pieces of information are vital to knowing how to adhere to the RTO/RPO objectives that have been defined.”

Angell said businesses may have a good understanding of their tier-one applications but may have less of a handle on their tier-three or tier-four systems. The recovery of an application that isn’t well-documented or completely understood becomes a riskier endeavor when a disaster strikes.

“The DR option may miss the objectives and targets that the business is expecting and, therefore, the company may actually be worse off due to lost time trying to scramble for the little things that were not documented,” Angell said.

IDC, Cisco survey assesses future IT staffing needs

Network engineers, architects and administrators will be among the most critical job positions to fill if enterprises are to meet their digital transformation goals, according to an IDC survey tracking future IT staffing trends.

 The survey, sponsored by Cisco, zeroed in on the top 10 technology trends shaping IT hiring and 20 specific roles IT professionals should consider in terms of expanding their skills and training. IDC surveyed global IT hiring managers and examined an estimated 2 million IT job postings to assess current and future IT staffing needs.

The survey results showed digital transformation is increasing demand for skills in a number of key technology areas, driven by the growing number of network-connected devices, the adoption of cloud services and the rise in security threats.

Intersections provide hot jobs

IDC classified the intersections where hot technologies and jobs meet as “significant IT opportunities” for current and future IT staffing, said Mark Leary, directing analyst at Cisco Services.

“From computing and networking resources to systems software resources, lots of the hot jobs function at these intersections and take advantage of automation, AI and machine learning,” he said. Rather than eliminating IT staff jobs, many of these roles take advantage of those same technologies, he added.

Organizations are preparing for future IT staffing by filling vacant IT positions from within rather than hiring from outside the company, then sending staff to training, if needed, according to the survey.

But technology workers still should investigate where the biggest challenges exist and determine where they may be most valued, Leary said.

“Quite frankly, IT people have to have a greater understanding of the business processes and of the innovation that’s going on within the lines of business, and have much more of a customer focus,” he said.

The internet of things illustrates the complexity of emerging digital systems. Any IoT implementation requires from 10 to 12 major technologies to come together successfully, and the IT organization is seen as the place where that expertise lies, Leary said.

IDC’s research found organizations place a high value on training and certifications. IDC found that 70% of IT leaders believe certifications are an indicator of a candidate’s qualifications and 82% of digital transformation executives believe certifications speed innovation and new ways to support the business.

Network influences future IT staffing

IDC’s results also reflect the changes going on within enterprise networking.

Digital transformation is raising the bar on networking staffs, specifically because it requires enterprises to focus on newer technologies, Leary said. The point of developing skills in network programming, for example, is to work with the capabilities of automation tools so they can access analytics and big data.


In 2015, only one in 15 Cisco-certified workers viewed network programming as critical to his or her job. By 2017, that figure had risen to one in four. “This isn’t something that’s evolutionary; it’s revolutionary,” Leary said.

While the traditional measure of success was to make sure the network was up and running with 99.999% availability, that goal is being replaced by network readiness, Leary said. “Now you need to know if your network is ready to absorb new applications or that new video stream or those new customers we just let on the network.”

Leary is involved with making sure Cisco training and certifications are relevant and matched to jobs and organizational needs, he said. “We’ve been through a series of enhancements for the network programmability training we offer, and we continually add things to it,” he added. Cisco also monitors customers to make sure they’re learning about the right technologies and tools rather than just deploying technologies faster.

To meet the new networking demands, Cisco is changing its CCNA, CCNP and CCIE certifications in two different ways, Leary said. “We’ve developed a lot of new content that focuses on cybersecurity, network programming, cloud interactions and such because the person who is working in networking is doing that,” he said. The other emphasis is to make sure networking staff understands the language of other groups, like software developers.

Midmarket enterprises push UCaaS platform adoption

Cloud unified communications adoption is growing among midmarket enterprises as they look to improve employee communication, productivity and collaboration. Cloud offerings, too, are evolving to meet midmarket enterprise needs, according to a Gartner Inc. report on North American midmarket unified communications as a service (UCaaS).

Gartner, a market research firm based in Stamford, Conn., defines the midmarket as enterprises with 100 to 999 employees and revenue between $50 million and $1 billion. UCaaS spending in the midmarket segment reached nearly $1.5 billion in 2017 and is expected to hit almost $3 billion by 2021, according to the report. Midmarket UCaaS providers include vendors ranked in Gartner’s UCaaS Magic Quadrant report. The latest Gartner UCaaS midmarket report, however, examined North American-focused providers not ranked in the larger Magic Quadrant report, such as CenturyLink, Jive and Vonage.

But before deploying a UCaaS platform, midmarket IT decision-makers must evaluate the broader business requirements that go beyond communication and collaboration.

Evaluating the cost of a UCaaS platform

The most significant challenge facing midmarket IT planners over the next 12 months is budget constraints, according to the report. These constraints play a major role in midmarket UC decisions, said Megan Fernandez, Gartner analyst and co-author of the report.

“While UCaaS solutions are not always less expensive than premises-based solutions, the ability to acquire elastic services with straightforward costs is useful for many midsize enterprises,” she said.

Many midmarket enterprises are looking to acquire UCaaS functions as a bundled service rather than stand-alone functions, according to the report. Bundles can be more cost-effective as prices are based on a set of features rather than a single UC application. Other enterprises will acquire UCaaS through a freemium model, which offers basic voice and conferencing functionality.

“We tend to see freemium services coming into play when organizations are trying new services,” she said. “Users might access the service and determine if the freemium capabilities will suffice for their business needs.”

For some enterprises, this basic functionality will meet business requirements and offer cost savings. But other enterprises will upgrade to a paid UCaaS platform after using the freemium model to test services.

Chart: Enterprises are putting more emphasis on cloud communications services.

Addressing multiple network options

Midmarket enterprises have a variety of network configurations depending on the number of sites and access to fiber. As a result, UCaaS providers offer multiple WAN strategies to connect to enterprises. Midmarket IT planners should ensure UCaaS providers align with their companies’ preferred networking approach, Fernandez said.

Enterprises looking to keep network costs down may connect to a UCaaS platform via DSL or cable modem broadband. Enterprises with stricter voice quality requirements may pay more for an IP MPLS connection, according to the report. Software-defined WAN (SD-WAN) is also a growing trend for communications infrastructure. 

“We expect SD-WAN to be utilized in segments with requirements for high QoS,” Fernandez said. “We tend to see more requirements for high performance in certain industries like healthcare and financial services.”

Team collaboration’s influence and user preferences

Team collaboration, also referred to as workstream collaboration, offers capabilities similar to those of UCaaS platforms, such as voice, video and messaging, but its growing popularity doesn’t yet affect how enterprises buy UCaaS.

Fernandez said team collaboration is not a primary factor influencing UCaaS buying decisions as team collaboration is still acquired at the departmental or team level. But buying decisions could shift as the benefits of team-oriented management become more widely understood, she said.

“This means we’ll increasingly see more overlap in the UCaaS and workstream collaboration solution decisions in the future,” Fernandez said.

Intuitive user interfaces have also become an important factor in the UCaaS selection process as ease of use will affect user adoption of a UCaaS platform. According to the report, providers are addressing ease of use demands by trying to improve access to features, embedding AI functionality and enhancing interoperability among UC services.

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.


Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
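As a minimal sketch of what that looks like (the path here is hypothetical; substitute your own VHDX), growing a virtual hard disk to 40 GB is a one-liner:

  # Grow the target VHDX to 40 GB. The path is an example only.
  Resize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -SizeBytes 40gb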

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean “byte”).
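For example, entered at a PowerShell prompt, the suffixed literals expand like this:

  1kb     # 1024
  40gb    # 42949672960 (40 x 1024 x 1024 x 1024)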

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. In short, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
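To check that floor before attempting a shrink, you can query the file directly (again, the path is hypothetical):

  # MinimumSize is the smallest value -SizeBytes will accept for this VHDX as it stands.
  (Get-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx').MinimumSize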

Resize-VHD only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
    Screenshot: locating the disk to edit
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition.
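If you’d rather script that step, a minimal sketch run inside the Windows guest (assuming the disk you grew holds the C: volume) looks like this:

  # Find the largest size the C: partition can reach, then extend it into the new space.
  $max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
  Resize-Partition -DriveLetter C -Size $max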


Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

Apache Hadoop 3.0 goes GA, adds hooks for cloud and GPUs

It may still have a place among screaming new technologies, but Apache Hadoop is neither quite as new nor quite as screaming as it once was. And the somewhat subdued debut of Apache Hadoop 3.0 reflects that.

Case in point: In 2017, the name Hadoop was removed from the title of more than one event previously known as a “Hadoop conference.” IBM dropped off the list of Hadoop distro providers in 2017, a year in which machine learning applications — and tools like Spark and TensorFlow — became the focus of many big data efforts.

So, the small fanfare that accompanied the mid-December release of Hadoop 3.0 was not too surprising. The release does hold notable improvements, however. This update to the 11-year-old Hadoop distributed data framework reduces storage requirements, allows cluster pooling on the latest graphics processing unit (GPU) resources, and adds a new federation scheme that enables the crucial Hadoop YARN resource manager and job scheduler to greatly expand the number of Hadoop nodes that can run in a cluster.

This latter capability could find use in Hadoop cloud applications — where many appear to be heading.

Scaling nodes to tens of thousands

“Federation for YARN means spanning out to much larger clusters,” according to Carlo Aldo Curino, principal scientist at Microsoft, an Apache Hadoop committer and a member of the Apache Hadoop Project Management Committee (PMC). With federation, in effect, a routing layer now sits in front of Hadoop Distributed File System (HDFS) clusters, he said.

Curino emphasized that he was speaking in his role as PMC member, and not for Microsoft. He did note, however, that the greater scalability is useful in clouds such as Azure. Most of “the biggest among Hadoop clusters to date have been in the low thousands of nodes, but people want to go to tens of thousands of nodes,” he said.

If Hadoop applications are going to begin to include millions of machines running YARN, federation will be needed to get there, he said. Looking ahead, Curino said he expects YARN will be a focus of updates to Hadoop.

In fact, YARN was the biggest cog in the machine that was Hadoop 2.0, released in 2013 — most particularly because it untied Hadoop from reliance on its original MapReduce processing engine. So, its central role in Hadoop 3.0 should not surprise.

In Curino’s estimation, YARN carries forward important new trends in distributed architecture. “YARN was an early incarnation of the serverless movement,” he said, referring to the computing scheme that has risen to some prominence on the back of Docker containers.

Curino noted that some of the important updates in Hadoop 3.0, which is now generally available and deemed production-ready, had been brewing in some previous point updates.

Opening up the Hadoop 3.0 pack

Among other new aspects of Hadoop 3.0, GPU enablement is important, according to Vinod Vavilapalli, who is Hadoop YARN and MapReduce development lead at Hortonworks, based in Santa Clara, Calif.

This is important because GPUs, as well as field-programmable gate arrays — which are also supported in Hadoop 3.0 API updates — are becoming go-to hardware for some machine learning and deep learning workloads.


Without updated APIs such as those found with Hadoop 3.0, he noted, these workloads require special setups to access modern data lakes.

“With Hadoop 3.0, we are moving into larger scale, better storage efficiency, into deep learning and AI workloads, and improving interoperability with the cloud,” Vavilapalli said. In this latter regard, he added, Apache Hadoop 3.0 brings better support via erasure coding, an alternative to typical Hadoop replication that saves on storage space.

Will Hadoop take a back seat?

Both Curino and Vavilapalli concurred that the original model of Hadoop, in which HDFS is tightly matched with MapReduce, may be fading, but that is not necessarily a reason to declare this the “post-Hadoop” era, as some pundits suggest.

“One of the things I noticed about sensational pieces that say ‘Hadoop is dead’ is that it is [a] bit of [a] mischaracterization. What it is [is] that people see MapReduce losing popularity. It’s not the paradigm [in] use anymore,” Curino said. “This was clear to the community long ago. It is why we started work on YARN.”

For his part, Vavilapalli said he sees Hadoop becoming more powerful and enabling newer use cases.

“This constant reinvention tells me that Hadoop is always going to be relevant,” he said. Even if it is something running in the background, “it will be part of the critical infrastructure that powers an increasingly data-driven world.”

To buy or build IT infrastructure focus of ONUG 2017 panel

NEW YORK — Among the most vexing questions enterprises face is whether it makes more sense to buy or build IT infrastructure. The not-quite-absolute answer, according to the Great Discussion panel at last week’s ONUG conference: It depends.

“It’s hard, because as engineers, we follow shiny objects,” said Tsvi Gal, CTO at New York-based Morgan Stanley, adding there are times when the financial services firm will build what it needs, rather than being lured by what vendors may be selling.

“If there are certain areas of the industry that have no good solution in the market, and we believe that building something will give us significant value or edge over the competition, then we will build,” he said.

This decision holds even if buying the product is cheaper than building IT infrastructure, he said — especially when purchased products don’t have the features and functions Morgan Stanley needs.

“I don’t mind spending way more money on the development side, if the return for it will be significantly higher than buying would,” he said. “We’re geeks; we love to build. But at the end of the day, we do it only for the areas where we can make a difference.”

Photo: Panelists discuss whether to buy or build IT infrastructure during the Great Discussion at the ONUG 2017 fall conference.

A company’s decision to buy or build IT infrastructure heavily depends on its size, talent and culture.

For example, Suneet Nandwani, senior director of cloud infrastructure and platform services at eBay, based in San Jose, Calif., said eBay’s culture as a technology company creates a definite bias toward building and developing its own IT infrastructure. As with Morgan Stanley, however, Nandwani said eBay stays close to the areas it knows.

“We often stick within our core competencies, especially since eBay competes with companies like Facebook and Netflix,” he said.

On the other side of the coin, Swamy Kocherlakota, S&P Global’s head of global infrastructure and enterprise transformation, takes a mostly buy approach, especially for supporting functions. It’s a choice based on S&P Global’s position as a financial company, where technology development remains outside the scope of its main business.

This often means working with vendors after purchase.

“In the process, we’ve discovered not everything you buy works out of the box, even though we would like it to,” Kocherlakota said.

Although he said it’s tempting to let S&P Global engineers develop the desired features, the firm prefers to go back to the vendor to build them. This choice, he said, traces back to the company’s culture.

“You have to be part of a company and culture that actually fosters that support and can maintain [the code] in the long term,” he said.  

The questions of talent, liability and supportability

Panelists agreed that building the right team of engineers was essential to succeeding in building IT infrastructure.

“If your company doesn’t have enough development capacity to [build] it yourself, even when you can make a difference, then don’t,” Morgan Stanley’s Gal said. “It’s just realistic.”

But for companies with the capacity to build, putting together a capable team is necessary.

“As we build, we want to have the right talent and the right teams,” eBay’s Nandwani said. “That’s so key to having a successful strategy.”

To attract the needed engineering talent, he said companies should foster a culture of innovation, acknowledging that mistakes will happen.

For Gal, having the right team means managers should do more than just manage.

“Most of our managers are player-coach, not just coach,” Gal said. “They need to be technical; they need to understand what they’re doing, and not just [be] generic managers.”

But it’s not enough to just possess the talent to build IT infrastructure; companies must be able to maintain both the talent and the developed code.

“One of the mistakes people make when building software is they don’t staff or resource it adequately for operational support afterward,” Nandwani said. “You have to have operational process and the personnel who are responding when those things screw up.”

S&P Global’s Kocherlakota agreed, citing the fallout that can occur when an employee responsible for developing important code leaves the company. Without proper documentation, the required information to maintain the code would be difficult to follow.

This means having the right support from the beginning, with well-defined processes encompassing software development lifecycle, quality assurance and control and code reviews.

“I would just add that when you build, it doesn’t free you from the need to document what you’re doing,” Gal said.

MobileIron, VMware can help IT manage Macs in the enterprise

As Apple computers have become more popular among business users, IT needs better ways to manage Macs in the enterprise. Vendors have responded with some new options.

The traditional problem with Macs is they have required different management and security software than their Windows counterparts, which means organizations must spend more money or simply leave these devices unmanaged. New features from MobileIron and VMware aim to help IT manage Macs in a more uniform way.

“Organizations really didn’t have an acute system to secure and manage Macs as they did with their Windows environment. But now, what we are starting to see is that a large number of companies have started taking Mac a lot more seriously,” said Nicholas McQuire, vice president of enterprise research at CCS Insight.

Macs in the enterprise see uptick

Windows PCs have long dominated the business world, whereas Apple positioned Macs for designers and other creative workers, plus the education market. There are several reasons why businesses traditionally did not offer Macs to employees, including their pricing and a lack of strong management and security options. About 5% to 10% of corporate computers are Macs, but that percentage is growing, McQuire said.

Video: With Macs growing in popularity, IT needs streamlined configuration methods.

There are a few potential reasons for the growth of Macs in the enterprise. Demand from younger workers is a big one, said Ojas Rege, chief strategy officer at MobileIron, based in Mountain View, Calif. In addition, because Macs don’t lose value as quickly as PCs, the difference in total cost of ownership between Macs and PCs isn’t as significant as it once was, he said.

“A lot of our customers tell us that Macs are key to the new generation of their workforce,” Rege said. “Another key is that the economics are improving.”

New capabilities help manage Macs in the enterprise


Windows has managed to stay on top in the eyes of IT because of its ability to offer more management platforms from third parties. Despite some options, such as those from Jamf, the macOS management ecosystem was very limited for a long time. But as the BYOD trend took off and shadow IT emerged, more business leaders felt they could no longer limit their employees to using Windows PCs.

VMware in August introduced updates to Workspace One, its end-user computing software, that allow IT to manage Macs the same way they would mobile devices. Workspace One will also have a native macOS client and let users enroll their Macs in unified endpoint management through a self-service portal, just like they can with smartphones and tablets.

MobileIron already supported macOS for basic device configuration and security. The latest improvements included these new Mac management features:

  • secure delivery of macOS apps through MobileIron’s enterprise app store;
  • per-app virtual private network connectivity through MobileIron Tunnel; and
  • trusted access enforcement for cloud services, such as Office 365, through MobileIron Access.

Mac security threats increase

At Northern Arizona University, the IT department is deploying Jamf Pro to manage and secure Macs, which make up more than a quarter of all client devices on campus. The rise in macOS threats over the past few years is a concern, said Tobias Kreidl, desktop creation and integration services team lead at the school in Flagstaff, Ariz.

The number of macOS malware threats increased from 819 in 2015 to 3,033 in 2016, per a report by AV-Test. And the first quarter of 2017 saw a 140% year-over-year increase in the number of different types of macOS malware, according to the report.

“It is surprising how many people still think they do not need additional software to help secure Macs,” Kreidl said. “[Apple macOS] is pretty good as it stands, but more and more efforts are being spent to find ways to circumvent Mac security, and some have been successful.”

Healthcare quality goals set for telehealth, interoperability

The quality of healthcare and health IT interoperability are continuing concerns among healthcare professionals. To address these concerns, the National Quality Forum and its telehealth committee met recently to discuss ways to measure healthcare quality and interoperability.

The National Quality Forum (NQF) was asked by the Health Department to accomplish two tasks: identify critical areas where measurement can effectively assess the healthcare quality and impact of telehealth services, and assess the current state of interoperability and its impact on quality processes and outcomes.

In a media briefing last week, NQF experts and members of the committee in charge of the two aforementioned tasks discussed the thought process behind the development of healthcare quality measures and the goal the committee hopes these measures will help achieve.

“After a comprehensive literature review conducted by NQF staff, the telehealth committee developed measurement concepts … across four distinct domains: access to care; financial impact and cost; telehealth experience for patients, care givers, care team members and others; as well as effectiveness, including system, clinical, operational and technical,” said Jason Goldwater, senior director at NQF, during the briefing.

Goldwater said that, ultimately, the following areas were identified as the highest priorities: “The use of telehealth to decrease travel, timeliness of care, actionable information, the added value of telehealth to provide evidence-based best practices, patient empowerment and care coordination.”


Judd Hollander, associate dean of strategic health initiatives at Thomas Jefferson University and a member of the NQF telehealth committee, explained that the committee wanted to begin this process of creating measures for telehealth and interoperability in healthcare by conducting an “environmental scan.”

“Where is there data and where are there data holes and what do we need to know?” Hollander said. “After we informed that and took a good look at it we started thinking, what are types of domains and subdomains and measure concepts that the evidence out there helps us illustrate but the evidence we’re lacking can also be captured? … So it was a really nice way to begin the discussion.”

Hollander added that the implications of the NQF report and the measures the committee is working on are “expected to inform policy across the entire spectrum of alternative payment models, government funded healthcare, and care funded by commercial payers because it’s just what you should be assessing to provide quality care.”

NQF’s telehealth measures: Patient experience

For healthcare to truly reap the benefits of telehealth, the industry has to focus on quality first. And to improve healthcare quality, there has to be a way to measure and report it, Hollander said.

“Those of us that live in the world of telemedicine believe not only are there quality enhancements, but there’s convenience enhancements that are going to make medicine easier to deliver,” Hollander said.

Hollander used a personal experience as an example of the benefits telehealth can bring to patients, even if a diagnosis isn’t or cannot be made via telehealth technologies.

“I had a patient who hurt his knee working in Staples, actually, at about 5:15, 5:30 in the evening. He had a prior knee injury and he had an orthopedist, but he couldn’t reach the orthopedist because their offices were closed,” Hollander said.

Without telehealth, this patient would have had to go to the emergency department, he would’ve waited hours to be seen, and then he would’ve been examined and had X-rays done, Hollander said.

Not only would this have taken a long time, it also would’ve cost this patient a lot of money, Hollander added.

Instead of going to the ER, the patient was able to connect with Hollander through JeffConnect, Jefferson University Hospitals’ app that enables patients to connect with doctors anytime, anywhere.

“I was the doc on call. We do know how to examine knees by telemedicine and we can tell with over 99% accuracy whether someone has a fracture or not and he did not,” he said.

Hollander explained that they then did a little “wilderness medicine.” Using materials lying around, the patient was splinted with yard sticks and an ace bandage and then was able to wait to see his orthopedist the next day.

“So we didn’t actually really solve his problem, but we saved him a ton of time and money; he didn’t have to go get X-rays one day, [then] have them repeated by the orthopedist who couldn’t see him [until] the next day because the systems aren’t interoperable,” Hollander said.

NQF’s telehealth measures: Rural communities

Marcia Ward, director of the Rural Telehealth Research Center at the University of Iowa and also an NQF telehealth committee member, brings a rural perspective to the telehealth conversation.

“Creating this framework we had to look across all of those different aspects of telehealth and how it could be applied. I find it particularly interesting that telehealth has been thought of as an answer for increasing access in rural healthcare … and I think that’s been one of the strongest suits,” she said during the briefing. “But now it’s developing into an urban application and I think we’ll see particular growth in that.”

Ward used the concept of travel in rural areas as an example of thinking of a unique, and maybe not always obvious, issue to address when creating telehealth measures.

“Travel is a concept that is very important, particularly in rural telehealth,” Ward said. “An example of that is there’s a telestroke program at the Medical University of South Carolina and one of the measures that they use is how many of the patients that are seen through their telestroke program at the rural hospitals are able to stay at their local rural hospital.”

This is an example of a healthcare quality measure that wouldn’t normally be seen in conventional medicine but is very appropriate for telehealth in rural areas.

“That’s a very important measure concept … able to be captured. Another one particularly important in the rural area is workforce shortages and we’re seeing evidence that telehealth programs can be implemented that help bridge that gap [and] be able to deliver services in very rural areas and have the backup from [a] telehealth hub where there’s emergency physicians,” Ward said. “And we’re seeing evidence that telehealth, in terms of rural communities in particular, it’s really filling a particular need.”

NQF’s interoperability measures

While the experts focused mainly on telehealth during the briefing, Goldwater explained that when the committee was discussing and creating measures for interoperability they conducted several interviews to help them define guiding principles.

Goldwater said that these guiding principles include:

  • “Interoperability is more than just EHR to EHR;
  • “Various stakeholders with diverse needs are involved in the exchange and use of data, and the framework and concepts will differ based on these perspectives;
  • “The term ‘electronically exchanged information’ is more appropriate to completely fulfill the definition of interoperability;
  • “And all critical data elements should be included in the analysis of measures as interoperability increases access to information.”

Ultimately, the committee developed healthcare quality measures across four domains, Goldwater said: the exchange of electronic health information, including the quality of data content and the method of exchange; the usability of the exchanged information, such as the data’s relevance and its accessibility; the application of the exchanged information, such as “Is it computable?”; and the impact of interoperability, such as on patient safety and care coordination.