Using visualizations and analytics in media content

BOSTON — Among countless online newspapers and journals, blogs, videos and social media feeds, the modern digital consumer has a dizzying amount of media sources to choose from.

As content creators vie for consumer attention, some organizations have turned to data visualization and advanced analytics in media to gain an advantage.

Visualizing data analytics in media

Take, for example, Condé Nast, an American mass media company whose 19 brands attract around 150 million consumers.

With a diverse portfolio that includes The New Yorker, Wired and Teen Vogue, the media company needs to capture the attention of numerous social groups and niches around the world. Condé Nast has found that interactive charts and graphs seem to appeal to the inquisitiveness of most types of consumers.

Compared with static images, interactive visualizations “introduce a whole new level [to content], and increase time spent” on content by consumers, said Danielle Carrick, a data visualization designer and developer at Condé Nast, during a presentation this week at the 2018 Data Visualization Summit.

Carrick showed examples of colorful, easy-to-read charts and graphs. Large gray and red bars with moveable sliders on the entertainment and culture site Glamour plainly illustrated the disparity between male and female Oscar nominees since 1928.

On Teen Vogue, an in-depth interactive scatterplot of tweets from @realDonaldTrump splashed red dots across the screen. Each visualization, though in itself an example of analytics in media, was different.

“Same type of data, totally different way to look at it,” Carrick said of the visualizations.

Danielle Carrick of Condé Nast speaks at the 2018 Data Visualization Summit in Boston this week.

Static still around

The benefits of consistently changing the way data sets are illustrated are twofold, Carrick said. This varied approach gives consumers new and fresh ways to interact with different data sets, and also enables her and her team to be creative.

Carrick noted that despite the increased use of interactive visuals, static graphs and images are far from being phased out.

Static visuals are still used most often and are developed separately by each brand, rather than by a team working directly under the Condé Nast flag. Understandably, interactive visualizations are harder to create and require input from the local editor, writer and design team working on the content piece.

There’s a lot of communication, Carrick said, and ultimately, it’s up to the brand to decide if it will use the visual.

“They’re not going to publish something they don’t think their readers are interested in,” she said.

Internally, the team uses Qlik software for analytics in media; Qlik has recently revamped its visualization capabilities to better compete with rival self-service BI vendor Tableau.

And while Carrick admitted that more tracking needs to be done to measure the results of using interactive visuals, they seem to both draw in more consumers and keep them on the webpage longer.

Ad analytics

Visualizations aren’t the only ways organizations are using analytics in media, however.

In a separate presentation at the parallel 2018 Big Data Innovation Summit, Carla Pacione, senior director of data and systems at Comcast Spotlight, talked about how advanced analytics plays a role in the telecommunications conglomerate’s advertising efforts. In particular, Pacione highlighted the importance of digital metrics, which she said “really took the level of advertising to a whole new level.”

Thanks to new and updated technologies in TV and digital metrics, including embedding a pixel in commercials that can capture household and engagement data, organizations like Comcast can measure metrics better today and gain deeper insights, Pacione said.

Comcast is piloting more advanced “household addressable TV advertising” — the ability to send more targeted and relevant ads to different households watching the same TV program.

Pacione noted Comcast uses third-party organizations to track purchases and predict future ones, but it is this improved ability to measure metrics that has enabled such advances in media advertising analytics.

With so many different ways of consuming media, Pacione said it will be important for media partners to work together to share information and advice and ultimately better target consumers.

Already, she said, “we’re starting to see that sharing in the industry because there’s just so much to learn.”

The 2018 Data Visualization Summit and the 2018 Big Data Innovation Summit were held Sept. 11 to 12 at the Renaissance Boston Waterfront Hotel.

New tech trends in HR: Josh Bersin predicts employee experience ‘war’

LAS VEGAS — Among fresh tech trends in HR, one that may garner the most interest is a new layer of software — which superstar analyst Josh Bersin called an employee experience platform — that will fit between core HR and talent management tools.

Bersin said he expects employee experience to become the next-generation employee portal — in other words, the go-to application for modern workers who need HR-based information. Vendors are lining up to address the need, he added.

“There is going to be a holy war for [what] system your employees use first,” said Bersin, an independent analyst who founded Bersin by Deloitte. Although the remark was hyperbole, it nonetheless stuck with attendees here at the 2018 HR Technology Conference & Exposition.

“He hit home,” said Rita Reslow, senior director of global benefits at HR software vendor Kronos, based in Lowell, Mass. “We have all these systems, and we keep buying more.” But she wondered aloud when one product would tie her systems together for employees.

No vendor has achieved a true employee experience platform, Bersin told a room packed with 900 or so attendees at the conference on Tuesday. However, ServiceNow, PeopleDoc — which Ultimate Software acquired in July — and possibly IBM appear to have a head start, he added.

Tech trends in HR point to team successes

Bersin, who plans to release an extensive report about 2019 tech trends in HR, said software development within the industry reflects a shift in management that steers away from employee engagement and company culture in favor of increased team performance.

Unless a recession hits, “I think the focus of the tech market for the next couple of years … is on performance, productivity and agility,” he said.

The shift to productivity will require future technology to simplify work life, said Cliff Howe, manager of enterprise applications at Cox Enterprises, a communications and media company in Atlanta. “Our employees are being inundated,” Howe said. “We don’t want to hit our employees with too much [technology].”

Bersin suggested that HR software buyers consider the following tips when evaluating new human capital management products:

  • Shop around for vendors that focus on your company’s particular market. For example, if your organization exhibits a compliance-based culture, find a vendor that mirrors that approach.
  • Evaluate the “personality of the vendor,” he said. As an example, determine if the vendor’s reps listen to your decision-makers and help them. If the answer is no, it may be time to drop that vendor from consideration.

AI auditing, real-time payrolls needed in future

In other upcoming tech trends in HR, Bersin pegged AI as a quickly growing field that smart HR departments will learn how to monitor and audit in the future. That notion was on the minds of many at the HR Technology Conference, for which TechTarget — the publisher of SearchHRSoftware — is a media partner.

AI innovation has increased rapidly in the last two years. Today, even small HR software vendors with three to five engineers can use technology from Google or IBM, combine it with open source options and scale a new product on the cloud quickly, Bersin said. HR professionals will need to adjust their skills to better understand why AI software makes the decisions it does, an area not yet fully understood, he added.

Howe agreed AI has grown beyond wish-list status. “AI will be a requirement, rather than a shiny object,” he said.

Bersin also noted that software will need to reflect a possible switch to a continuous payroll model — perhaps as often as daily. Younger workers, some of whom might not have bank accounts, are increasingly demanding to be compensated in real time, and this demand is not limited to the gig economy, he said.

Convenience: Driver of BI innovation

Allaa “Ella” Hilal is among that rare breed of computer experts who straddle the academic and commercial worlds. As director of data at Ottawa-based Shopify, Hilal oversees data product development for the e-commerce company’s international and larger merchants, also known as Plus customers. She is also an adjunct associate professor in the Centre for Pattern Analysis and Machine Intelligence at the University of Waterloo in Ontario, where she earned a Ph.D. in electrical and computer engineering.

An expert in data intelligence, wireless sensor networks and autonomous systems, Hilal is among the featured speakers at the Real Business Intelligence Conference on June 27 to 28 in Cambridge, Mass. Here, Hilal discusses what’s driving business intelligence (BI) innovation today and some of the pitfalls companies should be aware of.

What is driving BI innovation today?

Ella Hilal: First of all, in this day and age, companies are creating more and more products to deliver customer convenience. This convenience ends up saving time, which ties to money. When we become more efficient, whether it’s in our IT systems or in our daily commute, we gain moments that we can spend on something else. We can have more time with our families and loved ones, or gain more time or resources to do the things we love or care about.

There is this immediate need and craving for more efficiency and convenience on the customer side. And businesses are all aware of this craving. They are trying to think about what they can do with the data that exists within their systems or is being collected from IoT, which they know is valuable. The power of BI lies in the fact that it can take all of these different data sources and derive valuable insights to drive business decisions and data products that empower customers and the business in general.

There are many methodologies of how you can apply this to your business, and I plan to discuss some methodologies during my talk at the Real Business Intelligence Conference.

Companies have been doing business intelligence for a long time; they’ve had to figure out which data is useful and which is not for their businesses. What’s different about capitalizing on data generated from technologies like IoT and smart systems?

Hilal: Generally, only about 12% of company data is analyzed today — the rest is either underutilized or untapped. If we think we’re doing such a good job with the analytics we have today, imagine applying those efforts across all the data available in your business. At Shopify, we work to identify the pain points of running a business and use data to provide value to merchants so they have a better experience as entrepreneurs.

So, there is huge value we can mine and surface. And when we talk about advanced analytics, we’re not talking about just basic business analytics; we’re talking also about applying AI, machine learning, prediction, forecasting and even prescriptive analytics.

Most CIOs are acutely aware that AI and advanced analytics should be part of a BI innovation strategy. But even big companies are having trouble finding skilled people to do this work.

Hilal: It’s a problem every company will face, because skilled data scientists are still scarce relative to the need. One challenge is that the people who have the technical abilities to do this strong analytical work don’t always have the business acumen that is needed in an experienced data scientist. They might be very smart at sophisticated analysis, but if that isn’t tied to business acumen, they fail to communicate the business value and to give decision-makers useful insights. Furthermore, the lack of business acumen makes it challenging to build data products you can utilize or sell. So, you need to build the right kind of team.

Community and university collaborations are one of the strongest approaches that big companies are adopting; you can see that Google, Uber and Shopify, for example, are all partnering with university research labs and reaping the benefits from a technical perspective. They have the technical team and the business acumen team, which then brings the work in-house to focus on data analytics products. So, you get to bridge the gap between this amazing research initiative and the productization of the results.

Another benefit is that with these partnerships, researchers with very strong technical AI and statistical backgrounds can also develop business acumen, because they are working closely with product managers and production teams. This is definitely a longer-term strategy. Wearing my research hat, I can say that universities are also working hard to introduce programs with a mix of computer science and machine learning, programs with a good mix of the old pillars of data science and new approaches.

So, companies need to come up with new frameworks for capitalizing on data. Are there pitfalls companies want to keep in mind?

Hilal: You’ll hear me say this time and time again: We all need to have a sense of responsible innovation. We’re in this industrial race to build really good products that can succeed in the market, and we need to keep in mind that we are building these products for ourselves, as well as for others.

When we create these products, it is the distributed responsibility of all of us to make sure that we embed our morals and ethics in them, making sure they are secure, they are private, they don’t discriminate. At Shopify, we are always asking ourselves, ‘Will this close or open a door for a merchant?’ It is not enough that our products are functional; they have to maintain certain ethical standards, as well.

We’ve reported on how the IoT space may pose a threat because developers are under such pressure to get these products to market that considerations like security and ethics and who owns the data are an afterthought.

Hilal: We should not be putting anything out there that we wouldn’t want in our own homes. But this is not just about AI or IoT. Whether it is a piece of software or hardware system, we need to make sure that security is not a bolt-on, or that privacy is fixed after the fact with a new policy statement — these things need to be done early on and need to be thought of before and throughout the production process.

DRaaS solution: US Signal makes rounds in healthcare market

A managed service provider’s disaster-recovery-as-a-service offering is carving a niche among healthcare market customers, including Baystate Health System, a five-hospital medical enterprise in western Massachusetts.

The DRaaS solution from US Signal, an MSP based in Grand Rapids, Mich., is built on Zerto’s disaster recovery software, US Signal’s data center capability and the company’s managed services. The offering is designed to work in VMware vCenter Server and Microsoft System Center environments. One target market is healthcare.

“We have several healthcare facilities … all across the Midwest using this solution,” said Jerry Clark, director of cloud sales development at US Signal. The DRaaS solution meets HIPAA standards, according to the company.

Clark said many hospitals — and organizations in other industries, for that matter — are searching for ways to avoid the investment in duplicate hardware traditional DR approaches require. With DRaaS, hardware becomes the service provider’s issue. Instead of paying for hardware upfront, the customer pays a monthly management fee to the DRaaS provider. The approach has expanded the channel opportunity in DR.

“Enterprises … run into the same situation: ‘Do we spend all this Capex on disaster recovery hardware that may or may not ever get used?'” Clark noted. “A DRaaS solution makes it much more economical.”

One-third of the respondents to TechTarget’s IT Priorities survey identified disaster recovery as an area for budget growth.

Baystate Health adopts DRaaS solution

US Signal found an East Coast customer, Baystate Health, based in Springfield, Mass., through VertitechIT, a US Signal consulting partner located in nearby Holyoke, Mass.

VertitechIT helped Baystate Health launch a software-defined data center initiative. The implementation uses the entire VMware stack across three active data centers. The three-node arrangement provides local data replication, but David Miller, senior IT director and CTO at Baystate Health, said an outage in 2016 knocked out all three sites — contrary to design assumptions — for 10 hours.

Miller said his organization had been looking into some form of remote replication and high availability but had yet to land a good solution. The downtime event, however, increased the urgency of finding one.

“We realized we had to do something now rather than later,” Miller said.

VertitechIT introduced US Signal to Baystate Health. The companies met in VertitechIT’s corporate office, where US Signal proposed its DRaaS solution. The offering deploys Zerto’s IT Resilience Platform, specifically Zerto Virtual Manager and the Virtual Replication Appliance. Software installed in the customer’s source environment replicates data writes for each protected virtual machine to the DR target site, in this case US Signal’s Grand Rapids data center. An MPLS link connects Baystate Health to the Michigan facility.

The remote replication service provides the benefit of geodiversity, according to the companies. Baystate Health’s data centers are all in the Springfield area.

Video: The CIO of Christian Brothers Services discusses the company’s infrastructure partnership with US Signal.

US Signal’s DRaaS solution also includes a playbook, which documents the steps Baystate Health IT personnel should take to fail over to the disaster recovery site in the event of an outage. In addition, US Signal’s DRaaS package provides two annual DR tests. The DRaaS provider also tests failover before the DR plan goes into effect and documents that test in the playbook, Clark noted.

Miller said the DR service, which went live about a year ago, provides a recovery point objective (RPO) of “less than a couple of minutes” for Baystate Health’s PeopleSoft system, one of the healthcare provider’s tier-one applications. The recovery time objective (RTO) is less than two hours. RPO and RTO characteristics differ according to the application and its criticality.

Initially, the DRaaS solution covered a handful of apps, but the list of protected systems has expanded over the past 12 months, Miller said.

A DRaaS ‘showcase’

Myles Angell, executive project officer at VertitechIT, said the Baystate Health deployment has become “a showcase” when meeting with potential clients that have similar DR challenges.

“We’re talking to other hospitals about it,” he said.

Other organizations interested in DRaaS should pay close attention to their application portfolios, however. Angell said businesses need to have a thorough understanding of applications before embarking on a DR strategy.

“Successfully building a disaster recovery option — and having confidence in the execution — relies on complete documentation of the application’s running state, dependencies and any necessary changes that would need to be executed at the time of a DR cutover,” he explained. “These pieces of information are vital to knowing how to adhere to the RTO/RPO objectives that have been defined.”

Angell said businesses may have a good understanding of their tier-one applications but less of a handle on their tier-three or tier-four systems. The recovery of an application that isn’t well-documented or completely understood becomes a riskier endeavor when a disaster strikes.

“The DR option may miss the objectives and targets that the business is expecting and, therefore, the company may actually be worse off due to lost time trying to scramble for the little things that were not documented,” Angell said.

IDC, Cisco survey assesses future IT staffing needs

Network engineers, architects and administrators will be among the most critical job positions to fill if enterprises are to meet their digital transformation goals, according to an IDC survey tracking future IT staffing trends.

The survey, sponsored by Cisco, zeroed in on the top 10 technology trends shaping IT hiring and 20 specific roles IT professionals should consider in terms of expanding their skills and training. IDC surveyed global IT hiring managers and examined an estimated 2 million IT job postings to assess current and future IT staffing needs.

The survey results showed digital transformation is increasing demand for skills in a number of key technology areas, driven by the growing number of network-connected devices, the adoption of cloud services and the rise in security threats.

Intersections provide hot jobs

IDC classified the intersections where hot technologies and jobs meet as “significant IT opportunities” for current and future IT staffing, said Mark Leary, directing analyst at Cisco Services.

“From computing and networking resources to systems software resources, lots of the hot jobs function at these intersections and take advantage of automation, AI and machine learning.” Rather than eliminating IT jobs, many roles take advantage of those same technologies, he added.

Organizations are preparing for future IT staffing by filling vacant IT positions from within rather than hiring from outside the company, then sending staff to training, if needed, according to the survey.

But technology workers still should investigate where the biggest challenges exist and determine where they may be most valued, Leary said.

“Quite frankly, IT people have to have greater understanding of the business processes and of the innovation that’s going on within the lines of business and have much more of a customer focus.”

The internet of things illustrates the complexity of emerging digital systems. Any IoT implementation requires from 10 to 12 major technologies to come together successfully, and the IT organization is seen as the place where that expertise lies, Leary said.

IDC’s research found organizations place a high value on training and certifications. IDC found that 70% of IT leaders believe certifications are an indicator of a candidate’s qualifications and 82% of digital transformation executives believe certifications speed innovation and new ways to support the business.

Network influences future IT staffing

IDC’s results also reflect the changes going on within enterprise networking.

Digital transformation is raising the bar on networking staffs, specifically because it requires enterprises to focus on newer technologies, Leary said. The point of developing skills in network programming, for example, is to work with the capabilities of automation tools so they can access analytics and big data.

In 2015, only one in 15 Cisco-certified workers viewed network programming as critical to their jobs. By 2017, that figure rose to one in four. “This isn’t something that’s evolutionary; it’s revolutionary,” Leary said.

While the traditional measure of success was to make sure the network was up and running with 99.999% availability, that goal is being replaced by network readiness, Leary said. “Now you need to know if your network is ready to absorb new applications or that new video stream or those new customers we just let on the network.”

Leary is involved with making sure Cisco training and certifications are relevant and matched to jobs and organizational needs, he said. “We’ve been through a series of enhancements for the network programmability training we offer, and we continually add things to it,” he added. Cisco also monitors customers to make sure they’re learning about the right technologies and tools rather than just deploying technologies faster.

To meet the new networking demands, Cisco is changing its CCNA, CCNP and CCIE certifications in two ways, Leary said. “We’ve developed a lot of new content that focuses on cybersecurity, network programming, cloud interactions and such because the person who is working in networking is doing that,” he said. The other emphasis is making sure networking staff understands the language of other groups, like software developers.

Midmarket enterprises push UCaaS platform adoption

Cloud unified communications adoption is growing among midmarket enterprises as they look to improve employee communication, productivity and collaboration. Cloud offerings, too, are evolving to meet midmarket enterprise needs, according to a Gartner Inc. report on North American midmarket unified communications as a service (UCaaS).

Gartner, a market research firm based in Stamford, Conn., defines the midmarket as enterprises with 100 to 999 employees and revenue between $50 million and $1 billion. UCaaS spending in the midmarket segment reached nearly $1.5 billion in 2017 and is expected to hit almost $3 billion by 2021, according to the report. Midmarket UCaaS providers include vendors ranked in Gartner’s UCaaS Magic Quadrant report. The latest Gartner UCaaS midmarket report, however, examined North American-focused providers not ranked in the larger Magic Quadrant report, such as CenturyLink, Jive and Vonage.

But before deploying a UCaaS platform, midmarket IT decision-makers must evaluate the broader business requirements that go beyond communication and collaboration.

Evaluating the cost of a UCaaS platform

The most significant challenge facing midmarket IT planners over the next 12 months is budget constraints, according to the report. These constraints play a major role in midmarket UC decisions, said Megan Fernandez, Gartner analyst and co-author of the report.

“While UCaaS solutions are not always less expensive than premises-based solutions, the ability to acquire elastic services with straightforward costs is useful for many midsize enterprises,” she said.

Many midmarket enterprises are looking to acquire UCaaS functions as a bundled service rather than stand-alone functions, according to the report. Bundles can be more cost-effective as prices are based on a set of features rather than a single UC application. Other enterprises will acquire UCaaS through a freemium model, which offers basic voice and conferencing functionality.

“We tend to see freemium services coming into play when organizations are trying new services,” she said. “Users might access the service and determine if the freemium capabilities will suffice for their business needs.”

For some enterprises, this basic functionality will meet business requirements and offer cost savings. But other enterprises will upgrade to a paid UCaaS platform after using the freemium model to test services.

Enterprises are putting more emphasis on cloud communications services.

Addressing multiple network options

Midmarket enterprises have a variety of network configurations depending on the number of sites and access to fiber. As a result, UCaaS providers offer multiple WAN strategies to connect to enterprises. Midmarket IT planners should ensure UCaaS providers align with their companies’ preferred networking approach, Fernandez said.

Enterprises looking to keep network costs down may connect to a UCaaS platform via DSL or cable modem broadband. Enterprises with stricter voice quality requirements may pay more for an IP MPLS connection, according to the report. Software-defined WAN (SD-WAN) is also a growing trend for communications infrastructure. 

“We expect SD-WAN to be utilized in segments with requirements for high QoS,” Fernandez said. “We tend to see more requirements for high performance in certain industries like healthcare and financial services.”

Team collaboration’s influence and user preferences

Team collaboration, also referred to as workstream collaboration, offers capabilities similar to those of UCaaS platforms, such as voice, video and messaging, but its growing popularity won’t affect how enterprises buy UCaaS yet.

Fernandez said team collaboration is not a primary factor influencing UCaaS buying decisions as team collaboration is still acquired at the departmental or team level. But buying decisions could shift as the benefits of team-oriented management become more widely understood, she said.

“This means we’ll increasingly see more overlap in the UCaaS and workstream collaboration solution decisions in the future,” Fernandez said.

Intuitive user interfaces have also become an important factor in the UCaaS selection process as ease of use will affect user adoption of a UCaaS platform. According to the report, providers are addressing ease of use demands by trying to improve access to features, embedding AI functionality and enhancing interoperability among UC services.

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.
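
If you’re not sure which chain a given disk uses, PowerShell can tell you before you decide whether a shutdown is needed. A minimal sketch, assuming a virtual machine named ‘svtest’ (a hypothetical name):

    # List each virtual hard disk attached to the VM and the controller it sits on.
    # ControllerType reads IDE or SCSI.
    Get-VMHardDiskDrive -VMName 'svtest' |
        Select-Object ControllerType, ControllerNumber, ControllerLocation, Path

Feed the Path value to Get-VHD if you also need to confirm whether the file is VHD or VHDX (the VhdFormat property).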

Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
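
As a concrete example (a minimal sketch, with a hypothetical path), the following grows a VHDX to 30GB:

    # Grow the virtual hard disk; this works online if the disk sits on the virtual SCSI chain
    Resize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -SizeBytes 30gb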

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean “byte”).
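
You can see the translation directly at the prompt:

    PS C:\> 30gb
    32212254720
    PS C:\> 1kb
    1024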

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
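
A quick way to check that floor before shrinking, again with a hypothetical path:

    # MinimumSize is the smallest -SizeBytes value that will succeed without
    # first shrinking the file system(s) inside the disk
    Get-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' |
        Select-Object Path, Size, MinimumSize, FileSize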

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it looks just like step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, the wizard will show the current size and give you a New Size field to fill in, along with the maximum possible size for this VHD/X’s file type. If you chose Shrink (VHDX only), it will show the current size and a New Size field, along with the minimum possible size based on the file’s contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable). Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition.

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.
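
If you prefer to script it, the same extension can be done inside the guest with the Storage module cmdlets. A minimal sketch, assuming the volume to grow is partition 2 on disk 1 in the guest (adjust the numbers for your layout):

    # Rescan so the guest sees the newly added space (the PowerShell equivalent of Rescan Disks)
    Update-HostStorageCache
    # Find the largest size the partition can reach, then extend it to that size
    $max = (Get-PartitionSupportedSize -DiskNumber 1 -PartitionNumber 2).SizeMax
    Resize-Partition -DiskNumber 1 -PartitionNumber 2 -Size $max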

Of course, you could also create a new partition (or partitions) if you prefer.

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.
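
For reference while that article is pending, the cmdlet involved is Optimize-VHD. A minimal sketch, with a hypothetical path; the disk must be detached or its VM turned off:

    # Reclaim physical storage from zeroed blocks in a dynamically expanding disk
    Optimize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\svtest.vhdx' -Mode Full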

Apache Hadoop 3.0 goes GA, adds hooks for cloud and GPUs

It may still have a place among screaming new technologies, but Apache Hadoop is neither quite as new nor quite as screaming as it once was. And the somewhat subdued debut of Apache Hadoop 3.0 reflects that.

Case in point: In 2017, the name Hadoop was removed from the title of more than one event previously known as a “Hadoop conference.” IBM dropped off the list of Hadoop distro providers in 2017, a year in which machine learning applications — and tools like Spark and TensorFlow — became the focus of many big data efforts.

So, the small fanfare that accompanied the mid-December release of Hadoop 3.0 was not too surprising. The release does hold notable improvements, however. This update to the 11-year-old Hadoop distributed data framework reduces storage requirements, allows clusters to pool the latest graphics processing unit (GPU) resources, and adds a new federation scheme that enables the crucial Hadoop YARN resource manager and job scheduler to greatly expand the number of Hadoop nodes that can run in a cluster.

This latter capability could find use in Hadoop cloud applications — where many appear to be heading.

Scaling nodes to tens of thousands

“Federation for YARN means spanning out to much larger clusters,” according to Carlo Aldo Curino, principal scientist at Microsoft, an Apache Hadoop committer and a member of the Apache Hadoop Project Management Committee (PMC). With federation, in effect, a routing layer now sits in front of Hadoop Distributed File System (HDFS) clusters, he said.

Curino emphasized that he was speaking in his role as PMC member, and not for Microsoft. He did note, however, that the greater scalability is useful in clouds such as Azure. Most of the biggest Hadoop clusters to date “have been in the low thousands of nodes, but people want to go to tens of thousands of nodes,” he said.

If Hadoop applications are going to grow to include millions of machines running YARN, federation will be needed to get there, he said. Looking ahead, Curino said he expects YARN to be a focus of updates to Hadoop.

In fact, YARN was the biggest cog in the machine that was Hadoop 2.0, released in 2013 — most particularly because it untied Hadoop from reliance on its original MapReduce processing engine. So, its central role in Hadoop 3.0 should come as no surprise.

In Curino’s estimation, YARN carries forward important new trends in distributed architecture. “YARN was an early incarnation of the serverless movement,” he said, referring to the computing scheme that has risen to some prominence on the back of Docker containers.

Curino noted that some of the important updates in Hadoop 3.0, which is now generally available and deemed production-ready, had been brewing in some previous point updates.

Opening up the Hadoop 3.0 pack

Among other new aspects of Hadoop 3.0, GPU enablement is important, according to Vinod Vavilapalli, who is Hadoop YARN and MapReduce development lead at Hortonworks, based in Santa Clara, Calif.

This is important because GPUs, as well as field-programmable gate arrays — which are also supported in Hadoop 3.0 API updates — are becoming go-to hardware for some machine learning and deep learning workloads.

Without updated APIs such as those found with Hadoop 3.0, he noted, these workloads require special setups to access modern data lakes.

“With Hadoop 3.0, we are moving into larger scale, better storage efficiency, into deep learning and AI workloads, and improving interoperability with the cloud,” Vavilapalli said. In this latter regard, he added, Apache Hadoop 3.0 brings better support via erasure coding, an alternative to typical Hadoop replication that saves on storage space.

Will Hadoop take a back seat?

Both Curino and Vavilapalli concurred the original model of Hadoop, in which HDFS is tightly matched with MapReduce, may be fading, but that is not necessarily a reason to declare this the “post-Hadoop” era, as some pundits suggest.

“One of the things I noticed about sensational pieces that say ‘Hadoop is dead’ is that it is a bit of a mischaracterization. What it is is that people see MapReduce losing popularity. It’s not the paradigm in use anymore,” Curino said. “This was clear to the community long ago. It is why we started work on YARN.”

For his part, Vavilapalli said he sees Hadoop becoming more powerful and enabling newer use cases.

“This constant reinvention tells me that Hadoop is always going to be relevant,” he said. Even if it is something running in the background, “it will be part of the critical infrastructure that powers an increasingly data-driven world.”

To buy or build IT infrastructure focus of ONUG 2017 panel

NEW YORK — Among the most vexing questions enterprises face is whether it makes more sense to buy or build IT infrastructure. The not-quite-absolute answer, according to the Great Discussion panel at last week’s ONUG conference: It depends.

“It’s hard, because as engineers, we follow shiny objects,” said Tsvi Gal, CTO at New York-based Morgan Stanley, adding there are times when the financial services firm will build what it needs, rather than being lured by what vendors may be selling.

“If there are certain areas of the industry that have no good solution in the market, and we believe that building something will give us significant value or edge over the competition, then we will build,” he said.

This decision holds even if buying the product is cheaper than building IT infrastructure, he said — especially when purchased products lack the features and functions Morgan Stanley needs.

“I don’t mind spending way more money on the development side, if the return for it will be significantly higher than buying would,” he said. “We’re geeks; we love to build. But at the end of the day, we do it only for the areas where we can make a difference.”

Panelists at the Great Discussion during the ONUG 2017 fall conference

A company’s decision to buy or build IT infrastructure heavily depends on its size, talent and culture.

For example, Suneet Nandwani, senior director of cloud infrastructure and platform services at eBay, based in San Jose, Calif., said eBay’s culture as a technology company creates a definite bias toward building and developing its own IT infrastructure. As with Morgan Stanley, however, Nandwani said eBay stays close to the areas it knows.

“We often stick within our core competencies, especially since eBay competes with companies like Facebook and Netflix,” he said.

On the other side of the coin, Swamy Kocherlakota, S&P Global’s head of global infrastructure and enterprise transformation, takes a mostly buy approach, especially for supporting functions. It’s a choice based on S&P Global’s position as a financial company, where technology development remains outside the scope of its main business.

This often means working with vendors after purchase.

“In the process, we’ve discovered not everything you buy works out of the box, even though we would like it to,” Kocherlakota said.

Although he said it’s tempting to let S&P Global engineers develop the desired features, the firm prefers to go back to the vendor to build the features. This choice, he said, traces back to the company’s culture.

“You have to be part of a company and culture that actually fosters that support and can maintain [the code] in the long term,” he said.  

The questions of talent, liability and supportability

Panelists agreed building the right team of engineers was an essential factor to succeed in building IT infrastructure.

“If your company doesn’t have enough development capacity to [build] it yourself, even when you can make a difference, then don’t,” Morgan Stanley’s Gal said. “It’s just realistic.”

But for companies with the capacity to build, putting together a capable team is necessary.

“As we build, we want to have the right talent and the right teams,” eBay’s Nandwani said. “That’s so key to having a successful strategy.”

To attract the needed engineering talent, he said companies should foster a culture of innovation, acknowledging that mistakes will happen.

For Gal, having the right team means managers should do more than just manage.

“Most of our managers are player-coach, not just coach,” Gal said. “They need to be technical; they need to understand what they’re doing, and not just [be] generic managers.”

But it’s not enough to just possess the talent to build IT infrastructure; companies must be able to maintain both the talent and the developed code.

“One of the mistakes people make when building software is they don’t staff or resource it adequately for operational support afterward,” Nandwani said. “You have to have operational process and the personnel who are responding when those things screw up.”

S&P Global’s Kocherlakota agreed, citing the fallout that can occur when an employee responsible for developing important code leaves the company. Without proper documentation, the information required to maintain the code is difficult to recover.

This means having the right support from the beginning, with well-defined processes encompassing the software development lifecycle, quality assurance and control, and code reviews.

“I would just add that when you build, it doesn’t free you from the need to document what you’re doing,” Gal said.