Tag Archives: organizations

Upskilling: Digital transformation is economic transformation for all – Asia News Center

“So, we are working with a number of non-government organizations to help build the skills of these youth, to build their digital skills. We know that today, about 50 percent of jobs require technology skills. Within the next three years, that’s going to jump to more than 75 percent. So we’re working to help build the skills that employers need to expand their companies.”

In Sri Lanka, decades of civil strife have now given way to sustained economic growth. In this period of calm and reconstruction, 24-year-old Prabhath Mannapperuma leads a team of techie volunteers teaching digital skills to rural and underprivileged children. They use the micro:bit, a tiny programmable device that makes coding fun. “Using a keyboard to type in code is not interesting for kids,” says Mannapperuma, an IT professional and tech-evangelist, who is determined to inspire a new generation.

Upskilling is also a priority in one of Asia’s poorest countries, Nepal. Here, Microsoft has launched an ambitious digital literacy training program that is transforming lives. Santosh Thapa lost his home and livelihood in a massive 2015 earthquake and struggled in its aftermath to start again in business. Things turned around after he graduated from a Microsoft-sponsored course that taught him some digital basics, which he now uses daily to serve his customers and stay ahead of his competitors.

Often women are the most disadvantaged in the skills race. For instance in Myanmar, only 35 percent of the workforce is female. Without wide educational opportunities, most women have been relegated to the home or the farm. But times are changing as the economy opens up after decades of isolation. “Women in Myanmar are at risk of not gaining the skills for the jobs of the future, and so we are helping to develop the skills of young women, and that’s been an exciting effort,” says Michelle.

The team at Microsoft Philanthropies has partnered with the Myanmar Book Aid and Preservation Foundation on its Tech Age Girls program. It identifies promising female leaders between the ages of 14 and 18 and provides them with essential leadership and computer science skills so they are future-ready for the jobs of the Fourth Industrial Revolution.

The goal is to create a network of 100 young women leaders in at least five locations throughout Myanmar with advanced capacity in high-demand technology skills. One of these future leaders is Thuza who is determined to forge a career in the digital space. “There seems to be a life path that girls are traditionally expected to take,” she says. “But I’m a Tech Age Girl and I’m on a different path.”

In Bangladesh, Microsoft and the national government have come together to teach thousands of women hardware and software skills. Many are now working at more than 5,000 state-run digital centers that encourage ordinary people to take up technology for business, work and study. It is also hoped that many of the women who graduate from the training program will become digital entrepreneurs themselves.

These examples are undeniably encouraging and even inspirational. But Michelle is clear on one point: Digital skills development is more than just meaningful and impactful philanthropic work. It is also a hard-headed, long-term business strategy and a big investment in human capital.

Microsoft wants its customers to grow and push ahead with digital transformation, she says. And to do that, they need digitally skilled workers. “This is about empowering individuals, and also about enabling businesses to fill gaps in order for them to actually compete globally … by developing the skills, we can develop the economy.”

Citrix enables VM live migration for Nvidia vGPUs

Live migration for virtual GPUs has arrived, and the technology will help organizations more easily distribute resources and improve performance for virtual desktop users.

As more applications become graphics-rich, VDI shops need better ways to support and manage virtual GPUs (vGPUs). Citrix and Nvidia said this month they will support live migration of GPU-accelerated VMs, allowing administrators to move VMs between physical servers with no downtime. VMware has also demonstrated similar capabilities but has not yet brought them to market.

“The first time we all saw vMotion of a normal VM, we were all amazed,” said Rob Beekmans, an end-user computing consultant in the Netherlands. “So it’s the same thing. It’s amazing that this is possible.”

How vGPU VM live migration works

Live migration, the ability to move a VM from one host to another while the VM is still running, has been around for years. But it was not possible to live migrate a VM that included GPU acceleration technology such as Nvidia’s Grid. VMware’s VM live migration tool, vMotion, and Citrix’s counterpart, XenMotion, did not allow migration of VMs that had direct access to a physical hardware component. Complicating matters was the fact that live migration must replicate the GPU on one server to another server, and essentially map its processes one to one. That’s difficult because a GPU is such a dense processor, said Anne Hecht, senior director of product marketing for vGPU at Nvidia.

XenMotion is now capable of live migrating a GPU-enabled VM on XenServer. Using the Citrix Director management console, administrators can monitor and migrate these VMs. They simply select the VM and from a drop-down menu choose the host they want to move it to. This migration process takes a few seconds, according to a demo Nvidia showed at Citrix Synergy 2017. XenMotion with vGPUs is available now as a tech preview for select customers, and Nvidia did not disclose a planned date for general availability.
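
For administrators who prefer scripting over the Director console, the same kind of move can also be expressed through XenServer's XenAPI Python bindings. The snippet below is only a minimal sketch: the pool master address, credentials and VM/host names are placeholders, and the options accepted for vGPU-enabled VMs in the tech preview may differ from this generic pool migration call.

```python
import XenAPI  # XenServer's Python SDK

# Placeholder connection details; replace with the real pool master and credentials.
session = XenAPI.Session("https://xenserver-pool-master.example.com")
session.xenapi.login_with_password("root", "password")

try:
    # Look up a GPU-accelerated VM and the destination host by name label (hypothetical names).
    vm = session.xenapi.VM.get_by_name_label("designer-desktop-01")[0]
    host = session.xenapi.host.get_by_name_label("gpu-host-02")[0]

    # Live-migrate the running VM to the target host within the same pool.
    session.xenapi.VM.pool_migrate(vm, host, {"live": "true"})
finally:
    session.xenapi.session.logout()
```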

This ability to redistribute VMs without having to shut them down brings several benefits. It could be useful for a single project, such as a designer working on a task that needs a lot of GPU resources for a few months, or for adding more virtual desktop users overall. If a user suddenly needs more GPU power, IT can migrate that user's desktop VM to a server with more GPU resources available. IT may also use live migration on a regular basis to rebalance processing across servers as users' GPU needs peak and ebb.

Most important to users themselves, VM live migration means that there is no downtime on their virtual desktop during maintenance or when IT has to move a machine.

“The amount of time needed to save and close down a project can number in the tens of minutes in complex cases, and that makes for a lot of lost production time,” said Tobias Kreidl, desktop computing team lead at Northern Arizona University, who manages around 500 Citrix virtual desktops and applications. “Having this option is in bigger operations a huge plus. Even in a smaller shop, not having to deal with downtime is always a good thing as many maintenance processes require reboots.”

VMware vs. Citrix

The new Citrix capability only supports VM live migration between servers that have Nvidia GPU cards of the same type. Nvidia offers a variety of Grid options, which differ in the amounts of memory they include, how many GPUs they support and other aspects. So, XenMotion live migration can only happen from one Tesla M10 to another Tesla M10 card, for example, Hecht said.

At VMworld 2017, VMware demoed a similar process for Nvidia vGPUs with vMotion. This capability was not in beta or tech preview at the time, however, and still isn't. Plus, the VMware capability works a little differently from Citrix's. With VMware Horizon, IT cannot migrate without downtime; instead, a process called Suspend and Resume allows a GPU-enabled VM to hibernate, move to another host, then restart from its last running state. Users experience desktop downtime, but when the VM restarts, it automatically logs the user back in and resumes with all of the data saved from its last running state.

Nvidia Grid graphics card

Nvidia is working with VMware to develop and release an official tech preview of this Suspend and Resume capability for vGPU migration and hopes to develop a fully live scenario for Horizon in the future as well, Hecht said.

“VMware will catch up, but I think it gives Citrix an early mover advantage,” said Zeus Kerravala, founder and principal analyst of ZK Research. “This might wake VMware up a little bit and be more aggressive with a lot of these emerging technologies.”

Who needs GPU acceleration?

Virtualized GPUs are becoming more necessary for VDI shops as more applications require intensive graphics and multimedia processing. Applications that use video, augmented or virtual reality and even many Windows 10 apps use more CPU than ever before — and vGPUs can help offload that.

“The GPU isn’t just for gamers anymore,” Kerravala said. “It is becoming more mainstream, and the more you have Microsoft and IBM and the big mainstream IT vendors doing this, it will help accelerate [GPU acceleration] adoption. It becomes a really important part of any data center strategy.”

At the same time, not every user needs GPU acceleration. Beekmans’ clients sometimes think they need vGPUs, when actually the CPU will provide good enough processing for the application in question, he said. And vGPU technology isn’t cheap, so organizations must weigh the cost versus benefits of adopting it.

“I don’t think everybody needs a GPU,” Beekmans said. “It’s hype. You have to look at the cost.”

More competition in the GPU acceleration market — which Nvidia currently dominates in terms of virtual desktop GPU cards — would help bring costs down and increase innovation, Beekmans said.

Still, the market is here to stay as more apps begin to require more GPU power, he added.

“If you work with anything that uses compute resources, you need to keep an eye on the world of GPUs, because it’s coming and it’s coming fast,” Kerravala agreed.

Microsoft to acquire Avere Systems, accelerating high-performance computing innovation for media and entertainment industry and beyond – The Official Microsoft Blog

The cloud is providing the foundation for the digital economy, changing how organizations produce, market and monetize their products and services. Whether it’s building animations and special effects for the next blockbuster movie or discovering new treatments for life-threatening diseases, the need for high-performance storage and the flexibility to store and process data where it makes the most sense for the business is critically important.

Over the years, Microsoft has made a number of investments to provide our customers with the most flexible, secure and scalable storage solutions in the marketplace. Today, I am pleased to share that Microsoft has signed an agreement to acquire Avere Systems, a leading provider of high-performance NFS and SMB file-based storage for Linux and Windows clients running in cloud, hybrid and on-premises environments.

Avere uses an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads. In the media and entertainment industry, Avere has worked with global brands including Sony Pictures Imageworks, animation studio Illumination Mac Guff and Moving Picture Company (MPC) to decrease production time and lower costs in a world where innovation and time to market are more critical than ever.

High-performance computing needs, however, do not stop there. Customers in life sciences, education, oil and gas, financial services, manufacturing and more are increasingly looking for these types of solutions to help transform their businesses. The Library of Congress, Johns Hopkins University and Teradyne, a developer and supplier of automatic test equipment for the semiconductor industry, are great examples of where Avere has helped scale datacenter performance and capacity and optimize infrastructure placement.

By bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure. We are excited to welcome Avere to Microsoft, and look forward to the impact their technology and the team will have on Azure and the customer experience.

You can also read a blog post from Ronald Bianchini Jr., president and CEO of Avere Systems, here.

Tags: Avere Systems, Azure, Big Computing, Cloud, High-Performance Computing, high-performance storage

Five questions to ask before purchasing NAC products

As network borders become increasingly difficult to define, and as pressure mounts on organizations to allow many different devices to connect to the corporate network, network access control is seeing a significant resurgence in deployment.

Often positioned as a security tool for the bring your own device (BYOD) and internet of things (IoT) era, network access control (NAC) is also increasingly becoming a very useful tool in network management, acting as a gatekeeper to the network. It has moved away from being a system that blocks all access unless a device is recognized, and is now more permissive, allowing for fine-grained control over what access is permitted based on policies defined by the organization. By supporting wired, wireless and remote connections, NAC can play a valuable role in securing all of these connections.

Once an organization has determined that NAC will be useful to its security profile, it’s time for it to consider the different purchasing criteria for choosing the right NAC product for its environment. NAC vendors provide a dizzying array of information, and it can be difficult to differentiate between their products.

When you’re ready to buy NAC products and begin researching your options — and especially when speaking to vendors to determine the best choice for your organization — consider the questions and features outlined in this article.

NAC device coverage: Agent or agentless?

NAC products should support all devices that may connect to an organization’s network. This includes many different configurations of PCs, Macs, Linux devices, smartphones, tablets and IoT-enabled devices. This is especially true in a BYOD environment.

NAC agents are small pieces of software installed on a device that provide detailed information about the device — such as its hardware configuration, installed software, running services, antivirus versions and connected peripherals. Some can even monitor keystrokes and internet history, though that presents privacy concerns. NAC agents can either run scans as a one-off — dissolvable — or periodically via a persistently installed agent.

If the NAC product uses agents, it’s important that they support the widest variety of devices possible, and that other devices can use agentless NAC if required. In many cases, devices will require the NAC product to support agentless implementation to detect BYOD and IoT-enabled devices and devices that can’t support NAC agents, such as printers and closed-circuit television equipment. Agentless NAC allows a device to be scanned by the network access controller and be given the correct designation based on the class of the device. This is achieved with aggressive port scans and operating system version detection.
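
Agentless classification of this kind can be approximated with standard scanning tools. The sketch below uses the python-nmap wrapper to fingerprint a device's open ports and operating system and assign it a coarse device class; the address, port list and classification rules are illustrative assumptions, not the behavior of any particular NAC product.

```python
import nmap  # python-nmap wrapper; requires the nmap binary to be installed

scanner = nmap.PortScanner()

def classify_device(ip):
    """Port-scan a host with OS detection and return a rough device class."""
    # -O enables OS detection (requires privileges); the port list is illustrative.
    scanner.scan(ip, arguments="-O -p 22,80,443,515,9100")
    if ip not in scanner.all_hosts():
        return "unreachable"

    host = scanner[ip]
    open_ports = [p for p in host.get("tcp", {}) if host["tcp"][p]["state"] == "open"]
    os_guesses = [match["name"] for match in host.get("osmatch", [])]

    # Hypothetical classification rules a NAC policy engine might apply.
    if 9100 in open_ports or 515 in open_ports:
        return "printer"
    if any("Windows" in name for name in os_guesses):
        return "windows-endpoint"
    if any("Linux" in name for name in os_guesses):
        return "linux-or-iot-device"
    return "unknown"

print(classify_device("192.0.2.10"))  # example address from the documentation range
```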

Agentless NAC is a key component in a BYOD environment, and most organizations should look at it as a must-have when buying NAC products. Of course, gathering information via an agent provides more detail about a device, but an agent-only approach is not viable on a modern network that needs to support many different kinds of devices.

Does the NAC product integrate with existing software and authentication?

This is a key consideration before you buy an NAC product, as it is important to ensure it supports the type of authentication that best integrates with your organization’s network. The best NAC products should offer a variety of choices: 802.1x — through the use of a RADIUS server — Active Directory, LDAP or Oracle. NAC will also need to integrate with the way an organization uses the network. If the staff uses a specific VPN product to connect remotely, for example, it is important to ensure the NAC system can integrate with it.
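
As a simple illustration of the directory side of that integration, the sketch below verifies a connecting user's credentials and group membership against an LDAP or Active Directory server using the ldap3 library. The server address, base DN and group name are placeholder assumptions, and a real NAC product would run this kind of lookup inside its own policy engine, often via RADIUS/802.1x rather than a direct bind.

```python
from ldap3 import Server, Connection, ALL

# Placeholder directory details; replace with the organization's own values.
LDAP_SERVER = "ldaps://dc01.example.com"
BASE_DN = "dc=example,dc=com"
ALLOWED_GROUP = "cn=vpn-users,ou=groups,dc=example,dc=com"

def is_authorized(user_dn, password):
    """Bind as the user to verify credentials, then check group membership."""
    server = Server(LDAP_SERVER, get_info=ALL)
    conn = Connection(server, user=user_dn, password=password)
    if not conn.bind():          # wrong password or unknown user
        return False

    conn.search(BASE_DN,
                f"(&(distinguishedName={user_dn})(memberOf={ALLOWED_GROUP}))",
                attributes=["cn"])
    authorized = len(conn.entries) > 0
    conn.unbind()
    return authorized

print(is_authorized("cn=jdoe,ou=staff,dc=example,dc=com", "s3cret"))
```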

Supporting many different security systems that do not integrate with one another can cause significant overhead. A differentiator between the different NAC products is not only what type of products they integrate with, but also how many systems exist within each category.

Consider the following products that an organization may want to integrate with, and be sure that your chosen NAC product supports the products already in place:

1. Security information and event management

2. Vulnerability assessment

3. Advanced threat detection

4. Mobile device management

5. Next-generation firewalls

Does the NAC product aid in regulatory compliance?

NAC can help achieve compliance with many different regulations and standards, such as the Payment Card Industry Data Security Standard, HIPAA, ISO 27002 and National Institute of Standards and Technology guidelines. Each of these stipulates certain controls regarding network access that should be implemented, especially around BYOD, IoT and rogue devices connecting to the network.

By continually monitoring network connections and performing actions based on the policies set by an organization, NAC can help with compliance with many of these regulations. These policies can, in many cases, be configured to match those of the compliance regulations mentioned above. So, when buying NAC products, be sure to have compliance in mind and to select a vendor that can aid in this process — be it through specific knowledge in its support team or through predefined policies that can be tweaked to provide the compliance required for your individual business.

What is the true cost of buying an NAC product?

The price of NAC products can be the most significant consideration, depending on the budget you have available for procurement. Most NAC products are charged per endpoint (device) connected to the network. On a large network, this can quickly become a substantial cost. There are often also hidden costs with NAC products that must be considered when assessing your purchase criteria.

Consider the following costs before you buy an NAC product; a simple worked example follows the list:

1. Add-on modules. Does the basic price give organizations all the information and control they need? NAC products often have hidden costs, in that the basic package does not provide all the functionality required. The additional cost of add-on modules can run into tens of thousands of dollars on a large network. Be sure to look at what the basic NAC package includes and investigate how the organization will be using the NAC system. Specific integrations may be an additional cost. Is there extra functionality that will be required in the NAC product to provide all the benefits required?

2. Upfront costs. Are there any installation charges or initial training that will be required? Be sure to factor these into the calculation, on top of the price per endpoint, of course.

3. Support costs. What level of support does the organization require? Does it need one-off or regular training, or does it require 24/7 technical support? This can add significantly to the cost of NAC products.

4. Staff time. While not a direct cost of buying NAC products, consider how much monitoring an NAC system requires. Time will need to be set aside not only to learn the NAC system, but to manage it on an ongoing basis and respond to alerts. Even the best NAC systems will require staff to be trained so if problems occur, there will be people available to address the issues.
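
To make those line items concrete, here is a small, entirely hypothetical total-cost-of-ownership model in Python. Every figure is an invented placeholder; substitute quoted vendor prices to compare total cost of ownership rather than list price.

```python
# Hypothetical three-year total-cost-of-ownership model for an NAC purchase.
# All figures are illustrative placeholders, not real vendor pricing.

endpoints = 5000
price_per_endpoint_per_year = 12.0      # base license, per device
addon_modules_per_year = 15000.0        # e.g., MDM or SIEM integration modules
upfront_costs = 20000.0                 # installation and initial training
support_per_year = 18000.0              # 24/7 support contract
staff_hours_per_week = 10
staff_hourly_cost = 60.0
years = 3

licensing = endpoints * price_per_endpoint_per_year * years
addons = addon_modules_per_year * years
support = support_per_year * years
staff = staff_hours_per_week * 52 * staff_hourly_cost * years

total = licensing + addons + upfront_costs + support + staff
print(f"Licensing:  ${licensing:>12,.0f}")
print(f"Add-ons:    ${addons:>12,.0f}")
print(f"Upfront:    ${upfront_costs:>12,.0f}")
print(f"Support:    ${support:>12,.0f}")
print(f"Staff time: ${staff:>12,.0f}")
print(f"3-year TCO: ${total:>12,.0f}")
```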

NAC product support: What’s included?

Support from the NAC manufacturer is an important consideration from the perspective of the success of the rollout and assessing the cost. Some of the questions that should be asked are:

  1. What does the basic support package include?
  2. What is the cost of extended support?
  3. Is support available at all times?
  4. Does the vendor have a significant presence in the organization’s region? For example, some NAC providers are primarily U.S.-based, and if an organization is based in EMEA, the vendor may not provide the same level of support there.
  5. Is on-site training available and included in the license?

Support costs can significantly drive up the cost of deployment and should be assessed early in the procurement process.

What to know before you buy an NAC system

When it comes to purchasing criteria for network access control products, it is important that not only is an NAC system capable of detecting all the devices connected to an organization’s network, but that it integrates as seamlessly as possible. The cost of attempting to shoehorn existing processes and systems into an NAC product that does not offer integration can quickly skyrocket, even if the initial cost is on the cheaper side.

NAC should also work for the business, not against it. In the days when NAC products only supported 802.1x authentication and blocked everything by default, it was seen as an annoyance that stopped legitimate network authentication requests. But, nowadays, a good NAC system provides seamless connections for employees, third parties and contractors alike — and to the correct area of the network to which they have access. It should also aid in regulatory compliance, an issue all organizations need to deal with now.

Assessing NAC products comes down to the key questions highlighted above. They are designed to help organizations determine what type of NAC product is right for them, and accordingly aid them in narrowing their choices down to the vendor that provides the product that most closely matches those criteria.

Once seldom used by organizations, endpoint protection is now a key part of IT security, and NAC products have a significant part to play in that. From a hacker’s perspective, well-implemented and managed NAC products can mean the difference between a full network attack and total attack failure.

Microsoft customers and partners envision smarter, safer, more connected societies – Transform

Organizations around the world are transforming for the digital era, changing how businesses, cities and citizens work. This new digital era will address many of the problems created in the earlier agricultural and industrial eras, making society safer, more sustainable, more efficient and more inclusive.

But an infrastructure gap is keeping this broad vision from becoming a reality. Digital transformation is happening faster than expected, but only in pockets. Microsoft and its partners seek to help cities and other public infrastructures close the gaps with advanced technologies in the cloud, data analytics, machine learning and artificial intelligence (AI).

Microsoft’s goal is to be a trusted partner to both public and private organizations in building connected societies. This summer, an IDC survey named Microsoft the top company for trust and customer satisfaction in enabling smart-city digital transformations.

Last week at a luncheon in New York City, Microsoft and executives from three organizations participating in the digital transformation shared how they are helping to close the infrastructure gap.

Arnold Meijer, TomTom’s strategic business development manager, at the Building Digital Societies salon lunch. (Photo by John Brecher)

TomTom NV, based in Amsterdam, traditionally focused on providing consumers with personal navigation. Now, “the need for locations surpasses the need for navigation — it’s everywhere,” said Arnold Meijer, strategic business development manager. “Managing a fleet of connected devices or ordering a ride from your phone — these things weren’t possible five years ago. We’re turning to cloud connectivity and the Internet of Things as tools to keep our maps and locations up to date.”

Sensors from devices and vehicles on the road deliver condition and usage data that highway planners, infrastructure managers and fleet operators need to make well-informed decisions.

Autonomous driving is directly in TomTom’s sights, a way to cut down on traffic accidents, one of the top 10 causes of death worldwide, and to reduce emissions through efficient routing. “You probably won’t own a vehicle 20 years from now, and the one that picks you up won’t have a driver,” Meijer said. “If you do go out driving yourself, it will be for fun.”

With all that time freed up from driving, travelers can do something else such as relax or work. Either option presents new business opportunities for companies that offer entertainment or enable productivity for a mobile client, who is almost certainly connected to the internet. “There will be new companies coming out supporting that, and I definitely foresee Microsoft and other businesses active there,” Meijer said.

“Such greatly eased personal transport may decrease the need to live close to work or school, changing settlement patterns and reducing the societal impacts of mobility. All because we can use location and cloud technology,” he added.

George Pitagorsky, CIO for the New York City Department of Education Office of School Support Services. (Photo by John Brecher)

The New York City Department of Education is using Microsoft technology extensively in a five-year, $25 million project that will tell parents their children’s whereabouts while the students are in transit, increase use of school cafeterias and provide access to information about school sports.

The city’s Office of Pupil Transportation provides rides to more than 600,000 students per day, with more than 9,000 buses and vehicles. For a preliminary version of the student-tracking system, the city has equipped its leased buses with GPS devices.

“When the driver turns on the GPS and signs in his bus, we can find out where it is at any time,” said George Pitagorsky, executive director and CIO for the department’s Office of School Support Services. If parents know what bus their child is on, they can more easily meet it at the stop or be sure to be there when the child is brought home.

A next step will be GPS units that don’t require driver activation. To let the system track not just the vehicle but its individual occupants, drivers will still need to register students into the GPS when they get on the bus.

“Biometrics like facial recognition that automate check-in when a student steps onto a bus — we’re most likely going to be there, but we’re not there yet,” Pitagorsky said.

Further out within the $25-million Illumination Program, a new bus-routing tool will replace systems developed more than 20 years ago, allowing the creation of more efficient routes, making course corrections to avoid problems, easily gathering vehicle-maintenance costs and identifying problem vehicles.

Other current projects include a smartphone app to advise students of upcoming meal choices in the school cafeterias, with an eye to increasing cafeteria use, enhancing students’ nutritional intake and offering students a voice in entree choices. The department has also created an app that displays all high school sports games, locations and scores.

A new customer-relations management app will let parents update their addresses and request special transport services on behalf of their children, with no more need to make a special visit to the school to do so. A mobile app will allow parents and authorized others to locate their children or bus, replacing the need for a phone call to the customer service unit. And business intelligence and data warehousing will get a uniform architecture, to replace the patchwork data, systems and tools now in place.

Christy Szoke, CMO and co-founder of Fathym. (Photo by John Brecher)

Fathym, a startup in Boulder, Colorado, is directly addressing infrastructure gaps through a rapid-innovation platform intended to harmonize disparate data and apps and facilitate Internet of Things solutions.

“Too often, cities don’t have a plan worked out and are pouring millions of dollars into one solution, which is difficult to adjust to evolving needs and often leads to inaccessible, siloed data,” said co-founder and chief marketing officer Christy Szoke. “Our philosophy is to begin with a small proof of concept, then use our platform to build out a solution that is flexible to change and allows data to be accessible from multiple apps and user types.” Fathym makes extensive use of Azure services but hides that complexity from customers, she said.

To create its WeatherCloud service, Fathym combined data from roadside weather stations and sensors with available weather models to create a road weather forecast especially for drivers and maintenance providers, predicting conditions they’ll find precisely along their route.

“We’re working with at least eight data sets, all completely different in format, time intervals and spatial resolutions,” said Fathym co-founder and CEO Matt Smith. “This is hard stuff. You can’t have simplicity on the front end without a complicated back-end system, a lot of math, and a knowledgeable group of different types of engineers helping to make sense of it all.”
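
Harmonizing feeds like that typically means resampling each source onto a common time base before combining them. The fragment below is a generic pandas sketch of that idea; the file names, columns and intervals are made up for illustration and are not Fathym's actual pipeline.

```python
import pandas as pd

# Two hypothetical feeds: roadside station readings every 5 minutes,
# vehicle sensor readings at irregular timestamps.
stations = pd.read_csv("station_obs.csv", parse_dates=["timestamp"]).set_index("timestamp")
vehicles = pd.read_csv("vehicle_obs.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Resample both onto a common 15-minute grid so they can be joined.
stations_15 = stations[["air_temp_c", "road_temp_c"]].resample("15min").mean()
vehicles_15 = vehicles[["wiper_on", "traction_event"]].resample("15min").mean()

# Combine into a single frame that a road-weather forecasting model could consume.
combined = stations_15.join(vehicles_15, how="outer").interpolate(limit=2)
print(combined.head())
```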

Despite the ease that cloud services have brought to application development, Smith foresees a need for experts to wrangle data even 20 years from now.

“When people say, ‘the Internet of Things is here’ and ‘the robots are going to take over,’ I don’t think they have the respect they should have for how challenging it will remain to build complex apps,” Smith said.

Added Szoke, “You can’t just say ‘put an AI on it’ or ‘apply machine learning’ and expect to get useful data. You will still need creative minds, and data scientists, to understand what you’re looking at, and that will continue to be an essential industry.”

Micro data centers garner macro hype and little latency

LAS VEGAS — The rapid growth of data in many organizations is piquing IT interest in edge computing.

Edge computing places data processing closer to data sources and end users to reduce latency, and micro data centers are one way to achieve that. Micro data centers, also called edge data centers, are typically modular devices that house all infrastructure elements, from servers and storage to uninterruptible power supplies and cooling. As data centers receive a flood of information from IoT devices, the two concepts took center stage for IT pros at Gartner’s data center conference last week in Las Vegas.

“As we start to have billions of things that are expecting instantaneous response and generating 4k video, the traffic is such that it doesn’t make sense to take all of that and shove it into a centralized cloud,” said Bob Gill, a research vice president at Gartner.

Gill compared the importance of the edge to the effect that U.S. President Dwight D. Eisenhower’s highway system had on declining communities in the Midwest. Towns whose commerce had otherwise dried up could easily connect to cities with more promise. Similarly, organizations can place a micro data center in whichever location will maximize its value: an office, warehouse, factory floor or colocation facility.

“If you build the infrastructure based only on the centralized cloud models, data centers and colocation facilities without thinking about the edge, we’re going to find ourselves in three to four years with suboptimal infrastructure,” Gill said.

Edge computing enables data to be processed closer to its source.

A growing market

The market for micro data centers is still small, but it’s growing quickly. By 2021, 25% of enterprise organizations will have deployed a micro data center, according to Gartner.

Some organizations are ahead of the edge computing game. Frank Barrett, IT operations director for Spartan Motors in Charlotte, Mich., inherited multiple data centers from a company acquisition. Now, the company has two primary data centers in Michigan and Indiana, and three smaller, micro data centers in Nebraska. All of the data centers currently have traditional hub-and-spoke networking infrastructure, but Barrett is considering improving the networks to better support the company’s edge data centers. He is currently in the process of moving to one provider for all of the company’s services, as well as updating switches, routers and firewalls throughout the enterprise.

“Latency is a killer,” Barrett said. “We’ve got a lot of legacy systems that people in different locations need access to. It can be a nightmare for those remote locations, regardless of how much bandwidth I throw between sites.”

Other organizations interested in adopting this technology have a choice among three types of micro data center providers. Infrastructure providers offer a one-stop shop, with a single vendor for all hardware. Facilities specialists are often hardware-agnostic and provide a range of modular options, but customers may need to purchase the hardware separately. Regional providers are highly focused on a given region and provide strong local customer service, but a smaller business base can mean less stability, with a higher risk of acquisitions and mergers, said Jeffrey Hewitt, a research vice president at Gartner.

One data center engineer at an audit company for retail and healthcare services is interested in the facilities provider approach because his company has a dedicated, in-house IT team to handle the other aspects of a data center. The engineer requested anonymity because he wasn’t authorized to speak to the media.

“With the facilities [option], you can install whatever you want,” he said. “Most offices have a main distribution facility, so they already have a circuit coming in, cooling in place and security. We don’t need any of that; it’d just be a dedicated rack for the micro data center.”

Micro data centers, not micro problems

Since micro data centers are often in a different physical location than a company’s traditional data center, IT needs to ensure that the equipment is secure, reliable and able to operate without constant repairs, said Daniel Bowers, a research director at Gartner. Industrial edge data centers in particular need to ruggedize the equipment so that it can withstand elements such as excessive vibrations and dust.

The distributed nature of micro data centers means that management is another concern, said Steven Carlini, senior director of data center global solutions at Schneider Electric, an energy management provider based in France.

“You don’t want to dispatch service to thousands of sites; you want the notifications that you get to be very specific,” he said. “Hopefully you can resolve an issue without sending someone on site.”

Vendor lock-in is a concern, particularly with all-in-one providers such as Dell EMC, HPE and Hitachi. It’s important to choose the right vendor from the start, which can be overwhelming in an oversaturated market. The reality is that micro data centers have been around for years. At last year’s conference, Schneider Electric and HPE unveiled the HPE Micro Datacenter, for instance, but Schneider Electric had already offered micro data centers for at least three years before that. This year, the company introduced Micro Data Center Xpress, which allows customers or partners to configure IT equipment before installing the system.

Hewitt recommends a four-step process to choose a micro data center vendor: Identify the requirements, score the vendors based on strengths and weaknesses, use those to create a shortlist and negotiate a contract with at least two comparable vendors.
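
Hewitt's scoring step can be kept as simple as a weighted matrix. The sketch below shows one way to do it in Python; the criteria, weights, vendor names and scores are all invented for illustration.

```python
# Hypothetical weighted scoring of micro data center vendors (all numbers invented).
criteria_weights = {"ruggedization": 0.25, "remote_management": 0.30,
                    "regional_support": 0.20, "price": 0.25}

vendor_scores = {   # scores on a 1-5 scale against each criterion
    "Infrastructure provider A": {"ruggedization": 4, "remote_management": 5,
                                  "regional_support": 3, "price": 2},
    "Facilities specialist B":   {"ruggedization": 3, "remote_management": 3,
                                  "regional_support": 4, "price": 4},
    "Regional provider C":       {"ruggedization": 3, "remote_management": 2,
                                  "regional_support": 5, "price": 5},
}

# Weighted total per vendor.
weighted = {
    vendor: sum(criteria_weights[c] * score for c, score in scores.items())
    for vendor, scores in vendor_scores.items()
}

# Shortlist the top two for contract negotiation, as Hewitt suggests.
shortlist = sorted(weighted, key=weighted.get, reverse=True)[:2]
print(weighted)
print("Shortlist:", shortlist)
```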

Third-party E911 services expand call management tools

Organizations are turning to third-party E911 services to gain management capabilities they can’t get natively from their IP telephony provider, according to a report from Nemertes Research.

IP telephony providers may offer basic 911 management capabilities, such as tracking phone locations, but organizations may have needs that go beyond phone tracking. The report, sponsored by telecom provider West Corporation, lists the main reasons why organizations would use third-party E911 services.

Some organizations may deploy third-party E911 management for call routing to ensure an individual 911 call is routed to the correct public safety answering point (PSAP). Routing to the correct PSAP is difficult for organizations with remote and mobile workers. But third-party E911 services can offer real-time location tracking of all endpoints and use that information to route to the proper PSAP, according to the report.
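
Conceptually, that routing step is a lookup from an endpoint's current location to the PSAP responsible for it. The toy sketch below illustrates the idea with a hard-coded table keyed by postal code; real E911 services rely on authoritative location databases and far finer-grained boundaries.

```python
# Toy illustration of location-based PSAP selection. The table, names and
# SIP addresses are invented; real services use authoritative location databases.

PSAP_BY_ZIP = {
    "02110": {"name": "Boston PSAP", "trunk": "sip:psap-boston@example.net"},
    "10007": {"name": "NYC PSAP", "trunk": "sip:psap-nyc@example.net"},
}

def route_911_call(endpoint):
    """Pick the PSAP for the caller's last known location, with a fallback."""
    location = endpoint.get("zip")   # kept current in real time by the E911 service
    psap = PSAP_BY_ZIP.get(location)
    if psap is None:
        return {"name": "National fallback center", "trunk": "sip:fallback@example.net"}
    return psap

caller = {"user": "jdoe", "zip": "10007", "device": "softphone"}
print(route_911_call(caller))
```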

Many larger organizations have multivendor environments that may include multiple IP telephony vendors. Third-party E911 services offer a single method of managing location information across endpoints, regardless of the underlying telephony platform.

The report also found third-party E911 management can reduce costs for organizations by automating the initial setup and maintenance of 911 databases in the organization. Third-party E911 services may also support centralized call routing, which could eliminate the need for local PSTN connections at remote sites and reduce the operating and hardware expenses at those sites.

Genesys unveils Amazon integration

Contact center vendor Genesys, based in Daly City, Calif., revealed an Amazon Web Services partnership that integrates AI and Genesys’ PureCloud customer engagement platform.

Genesys has integrated PureCloud with Amazon Lex, a service that lets developers build natural language, conversational bots, or chatbots. The integration allows businesses to build and maintain conversational interactive voice response (IVR) flows that route calls more efficiently.

Amazon Lex helps IVR flows better understand natural language by enabling IVR flows to recognize what callers are saying and their intent, which makes it more likely for the call to be directed to the appropriate resource the first time without error.
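
To give a sense of the Lex side of such an integration, the snippet below sends a caller's utterance to an Amazon Lex (V1) bot with boto3 and reads back the recognized intent. The bot name, alias and utterance are placeholders, and this is generic Lex API usage rather than the Genesys PureCloud integration itself.

```python
import boto3

# Generic Amazon Lex (V1) runtime call; bot name, alias and user ID are placeholders.
lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="CustomerServiceBot",
    botAlias="prod",
    userId="caller-12345",
    inputText="I want to check the status of my order",
)

print(response["intentName"])    # e.g., "CheckOrderStatus"
print(response["dialogState"])   # e.g., "ElicitSlot" or "ReadyForFulfillment"
print(response.get("message"))   # next prompt to play back to the caller
```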

The chatbot integration also allows organizations to consolidate multiple interactions into a single flow that can be applied over different self-service channels. This reduces the number of call flows that organizations need to maintain and can simplify contact center administration.

The chatbot integration will be available to Genesys customers in 2018.

Conference calls face user, security challenges

A survey of 1,000 professionals found that businesses in the U.S. and U.K. are losing $34 billion due to delays and distractions during conference calls, a significant increase from $16 billion in a 2015 survey.

The survey found employees waste an average of 15 minutes per conference call getting it started and dealing with distractions. More than half of respondents said distractions have a moderate-to-major negative effect on productivity, enthusiasm to participate and the ability to concentrate.

The survey was conducted by remote meetings provider LoopUp and surveyed 1,000 professionals in the U.S. and U.K. who regularly participate in conference calls at organizations ranging from 50 to more than 1,000 employees.

The survey also found certain security challenges with conference calls. Nearly 70% of professionals said it’s normal to discuss confidential information over a call, while more than half of respondents said it’s normal to not know who is on a call.

Users are also not fully comfortable with video conferencing, according to the survey. Half of respondents said video conferencing is useful for day-to-day calls, but 61% still prefer to use the phone to dial in to conference calls.

DevOps transformation in large companies calls for IT staff remix

SAN FRANCISCO — A DevOps transformation in large organizations can’t just rely on mandates from above that IT pros change the way they work; IT leaders must rethink how teams are structured if they want them to break old habits.

Kaiser Permanente, for example, has spent the last 18 months trying to extricate itself from 75 years of organizational cruft through a consumer digital strategy program led by Alice Raia, vice president of digital presence technologies. With the Kaiser Permanente website as its guinea pig, Raia realigned IT teams into a squad framework popularized by digital music startup Spotify, with cross-functional teams of about eight engineers. At the 208,000-employee Kaiser Permanente, that’s been subject to some tweaks.

“At our first two-pizza team meeting, we ate 12 pizzas,” Raia said in a session at DevOps Enterprise Summit here. Since then, the company has settled on an optimal number of 12 to 15 people per squad.

The Oakland, Calif., company decided on the squads approach when a previous model with front-end teams and systems-of-record teams in separate scrums didn’t work, Raia said. Those silos and a focus on individual projects resulted in 60% waste in the application delivery pipeline as of a September 2015 evaluation. The realignment into cross-functional squads has forced Kaiser’s website team to focus on long-term investments in products and faster delivery of features to consumers.

IT organizational changes vary by company, but IT managers who have brought about a DevOps transformation in large companies share a theme: Teams can’t improve their performance without a new playbook that puts them in a better position to succeed.

At Columbia Sportswear Co. in Portland, Ore., this meant new rotations through various areas of focus for engineers — from architecture design to infrastructure building to service desk and maintenance duties, said Scott Nasello, senior manager of platforms and systems engineering, in a presentation.

“We had to break the monogamous relationships between engineers and those areas of interest,” Nasello said. This resulted in surprising discoveries, such as when two engineers who had sat next to each other for years discovered they’d taken different approaches to server provisioning.

Short-term pain means long-term gain

In the long run, the move to DevOps results in standardized, repeatable and less error-prone application deployments, which reduces the number of IT incidents and improves IT operations overall. But those results require plenty of blood, sweat and tears upfront.

“Prepare to be unpopular,” Raia advised other enterprise IT professionals who want to move to DevOps practices. During Kaiser Permanente’s transition to squads, Raia had the unpleasant task of informing executive leaders that IT must slow down its consumer-facing work to shore up its engineering practices — at least at first.

Organizational changes can be overwhelming, Nasello said.

“There were a lot of times engineers were running on empty and wanted to tap the brakes,” he said. “You’re already working at 100%, and you feel like you’re adding 30% more.”

IT operations teams ultimately can be crushed between the contradictory pressures of developer velocity on the one hand and a fear of high-profile security breaches and outages on the other, said Damon Edwards, co-founder of Rundeck Inc., a digital business process automation software maker in Menlo Park, Calif.

Damon Edwards, co-founder of Rundeck Inc., shares the lessons he learned from customers about how to reduce the impact of DevOps velocity on IT operations.

A DevOps transformation means managers must empower those closest to day-to-day systems operations to address problems without Byzantine systems of escalation, service tickets and handoffs between teams, Edwards said.

Edwards pointed to Rundeck customer Ticketmaster as an example of an organizational shift toward support at the edge. A new ability to resolve incidents in the company’s network operations center — the “EMTs” of IT incident response — reduced IT support costs by 55% and the mean time to response from 47 minutes to 3.8 minutes on average.

“Silos ruin everything — they’re proven to have a huge economic impact,” Edwards said.

And while DevOps transformations pose uncomfortable challenges to the status quo, some IT ops pros at big companies hunger for a more efficient way to work.

“We’d like a more standardized way to deploy and more investment in the full lifecycle of the app,” said Jason Dehn, systems analyst for a large U.S. retailer that he asked not be named. But some lines of business at the company are happy with the status quo, where they aren’t entangled in day-to-day application maintenance.

“Business buy-in can be the challenge,” Dehn said.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Industrial IoT adoption rates high, but deployment maturity low

Industrial organizations have embraced IoT; that much is clear. But a new study found current deployments aren’t very advanced yet — though that will come in time, the organizations said.

Bsquare’s 2017 Annual IIoT Maturity Study surveyed more than 300 senior-level employees with operational responsibilities from manufacturing, transportation, and oil and gas companies with annual revenues of more than $250 million.

Eighty-six percent of respondents said they have IIoT technologies in place, with an additional 12% planning to deploy IIoT within the next year. And of the 86% of industrial organizations that have completed IoT adoption, 91% said the IIoT deployments were important to business operations.

However, while IoT is catching on, most industrial organizations are still in the early stages.

The state of IoT adoption in industrial organizations

The study outlined five levels of IoT adoption: device connectivity and data forwarding, real-time monitoring, data analytics, automation and on-board intelligence.

Seventy-eight percent of survey respondents, with transportation leading the pack, self-identified their companies at the first stage, transmitting sensor data to the cloud for analytics, and 56%, again with transportation in the lead, reached the second stage, monitoring sensor data in real time for visualization.

Dave McCarthy, Bsquare

Dave McCarthy, senior director of products at Bsquare, said he had predicted the gap between the first two stages, so no surprise there. What really surprised him, however, was the small gap between the second and third stages: Forty-eight percent of respondents said they were using data analytics for insight, predictions and optimization with applied analytics such as machine learning or artificial intelligence.

“What it indicates to me,” McCarthy said, “is that people who have gone down the visualization route have figured out, to some degree, some use of the data they’re collecting, and they know that analytics is going to play a part in helping them understand more closely what that data is going to mean for them.”

McCarthy wasn’t surprised to see the drop in the fourth and fifth stages: Twenty-eight percent said they were automating actions across internal systems with their IoT deployments, and only 7% had reached the edge analytics level.

“Just as expected, there’s a large drop-off from people doing analytics to people who are automating the results,” McCarthy said. “And in my mind, the highest amount of ROI comes when you can get to those levels.”

IIoT Maturity Model
Maturity of IoT adoption in industrial organizations

IIoT adoption and satisfaction

Not reaching the highest levels of ROI isn’t deterring IoT adoption, though: Seventy-three percent of respondents said they expect to increase IIoT deployments over the next year, with higher IoT adoption rates in transportation and manufacturing (85% and 78%, respectively) than oil and gas (56%). Additionally, 35% of all industrial organizations believe they will reach the automation stage, and 29% are aiming to reach the real-time monitoring stage in the same time period.

Nor is ROI always calculated the same way by analysts as it is by the organizations using IIoT technologies, McCarthy noted. Respondents cited machine health- (90%) and logistics- (67%) related goals as top IoT adoption drivers, while lowering operating costs came in at 24%.

“The number one motivation that all operations-oriented companies have is improving and increasing uptime of their equipment,” McCarthy said. “I hear this over and over again. They know they eventually have to do maintenance on equipment and take things down for repairs, but it is so much more manageable when they can get ahead of that and plan for it.”

“The reality for these types of businesses is that if there are plant shutdowns or line shutdowns that last for extended periods of time, they often don’t have the ability to make up that loss in production,” McCarthy added. “You can’t just run another shift on a Saturday to pick up the slack. Oftentimes the value of the product they’re producing far outweighs the cost of operating the equipment. What this indicates to me is, ‘I’ll spend more if that means I can keep that line running because of the production value.'”

With or without traditional ROI, the majority of survey respondents said they were happy with the results they’re seeing: Eighty-four percent said their products and services were extremely or very effective, with the transportation sector seeing a 96% satisfaction rate.

Additionally, 99% of oil and gas, 98% of transportation and 90% of manufacturing organizations said IIoT would have a significant impact on their industry at a global level. Perhaps those predictions of IIoT investments reaching $60 trillion in the next 15 years and the number of internet-connected IIoT devices exceeding 50 billion by 2020 will become a reality.

GDPR requirements put end-user data in the spotlight

As organizations around the world prepare for major new data privacy rules to take effect, their biggest challenge is taking stock of data and how they use it.

The General Data Protection Regulation (GDPR), which goes into effect in May 2018, governs the storage and processing of individuals’ personal data. For IT departments, this regulation means they must review their handling of employees’ and customers’ information to ensure it meets new security requirements. Endpoint management products play an important role in helping IT get ready for GDPR requirements, but many of these tools don’t yet have all the capabilities they need, experts said.

“It’s going to be a huge risk if the organization is not able to control data that’s part of GDPR,” said Danny Frietman, co-founder of MobileMindz, an enterprise mobility consultancy in the Netherlands. “A lot of companies will not be able to cope with the magnitude of that change.”

GDPR is a European Union (EU) regulation that aims to protect EU residents’ data, but it has worldwide ramifications. U.S.-based companies that have branches in the EU, use consultants based in the EU or have customers in the EU, for example, will all have to comply. For most organizations, GDPR comes into play when it comes to protecting their employees’ and customers’ personally identifiable information (PII), such as home addresses, IP addresses or bank account details.

The role of endpoint management tools in GDPR

Some of the end-user computing (EUC) technologies IT can use to ensure GDPR compliance include information and identity management and enterprise mobility management (EMM).

Mobile and desktop management tools allow administrators to implement the following technologies and features:

  • encryption;
  • multifactor authentication;
  • application blacklisting;
  • per-user security policies; and
  • alerts that identify noncompliant activities.

Specifically for mobile devices, IT can use capabilities such as the following (a minimal policy sketch appears after the list):

  • remote wipe to remove a user’s information once they leave the company;
  • containerization to separate personal and corporate information and ensure that IT only accesses the identifiable data it really needs; and
  • threat defense tools to be proactive about potential breaches.
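
A per-user policy of this kind is ultimately just structured configuration that the EMM platform enforces. The fragment below sketches what such a policy might look like as plain data; the field names are illustrative and do not correspond to any specific EMM product's schema.

```python
# Illustrative per-user mobile security policy; field names are hypothetical,
# not the schema of any particular EMM product.
policy = {
    "user": "jdoe@example.com",
    "device_requirements": {
        "encryption_required": True,
        "min_os_version": "11.0",
        "jailbreak_detection": True,
    },
    "containerization": {
        "separate_work_profile": True,
        "block_copy_to_personal_apps": True,   # keeps corporate PII inside the container
    },
    "offboarding": {
        "remote_wipe_scope": "work_container_only",  # avoids touching personal data
        "trigger": "hr_termination_event",
    },
    "alerts": ["noncompliant_app_installed", "threat_defense_detection"],
}

def is_compliant(device_report, policy):
    """Minimal compliance check an EMM agent or service might run."""
    reqs = policy["device_requirements"]
    return (device_report["encrypted"] == reqs["encryption_required"]
            # naive string comparison of versions is fine for this toy example
            and device_report["os_version"] >= reqs["min_os_version"]
            and not device_report["jailbroken"])

print(is_compliant({"encrypted": True, "os_version": "12.1", "jailbroken": False}, policy))
```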

MobileMindz, for instance, uses Apperian for mobile application management and adopted its enterprise app store to ensure all employees’ apps that deal with sensitive data are secure, Frietman said.

But EMM tools lack the ability to allow for clear and efficient logging, reporting and auditing of what personal data an organization has. That’s the bigger challenge for IT, said Frietman, whose firm is preparing clients and itself for GDPR.

“This is a huge opportunity for EMM vendors,” he said. “It could solve a lot of questions for customers.”

VMware, for one, has aimed over the past year to improve upon its existing data-reporting capabilities in Workspace One, a company spokesperson said. Workspace One Intelligence, announced at this year’s VMworld, can help IT document information for GDPR requirements by gaining deeper insight into its data and running reports based on historical and future big data. It should be generally available before the regulation goes into effect in May, the spokesperson said.

Businesses list their concerns about the upcoming GDPR.

Preparing a paper trail

The biggest change EUC administrators will need to enact to comply with GDPR requirements is around governance and data inventory — an approach to managing information that’s based on clear processes and roles. The regulation requires the entities that collect personal data be able to identify exactly what data they have, whose it is, why they have it, the purpose of keeping it and what they are going to do with it.

Clear documentation of all data will be key, said Chris Marsh, research director at 451 Research.

“You can point to that straight away if anyone came to you, and you can say, ‘This is what the purpose was, and here’s what we’re doing with the data,'” Marsh said.

Organizations should also develop clear, written security and compliance policies that state who has access to what data and how they can use it. Can a human resources manager view employees’ bank account information? Can IT administrators view GPS location from a user’s mobile device? Can a salesperson who deals with customer information share data from a corporate app to a personal one?

“We are living with decentralized data, and companies should have thought about the impact of that data a while ago,” Frietman said.

How the GDPR works

GDPR differs from its predecessor, the Data Protection Directive, in that it has tighter requirements for documenting and defining what data an organization processes and why. It also has a stricter definition of consent, which says companies must get “freely given, specific, informed and unambiguous” agreement from individuals to process their data. In addition, authorities that regulate GDPR will do so in standard fashion across the EU, rather than enforcing the regulation differently in each member state.

But what makes GDPR so complex is its wide-ranging classification of what constitutes personal data. The European definition of personal data is much wider than the U.S. definition of PII. It can even include biometric data, political opinions, health information, sexual orientation, trade union membership and more.

“Those things, according to the European view, are particularly susceptible to misuse in discrimination against individuals,” said Tatiana Kruse, of counsel at global law firm Dentons.

GDPR includes dozens of requirements and suggested security guidelines for how to comply. For instance, certain companies must appoint a data protection officer and report breaches to authorities. They may also have to take data privacy into account when building IT systems and applications by using technologies such as pseudonymization, which masks data so it can’t be attributed to a specific person — an approach called privacy by design.
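
Pseudonymization can be as simple as replacing direct identifiers with keyed hashes so records remain linkable for analytics but are not attributable to a person without the key. The sketch below uses HMAC-SHA256 with a secret key that would be stored separately; this is one common approach, not a technique mandated by the regulation.

```python
import hmac
import hashlib

# Secret pseudonymization key; in practice stored separately from the data
# (for example, in a key management service), since holding both together
# defeats the purpose.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "iban": "DE89370400440532013000", "visits": 12}

pseudonymized = {
    "email_id": pseudonymize(record["email"]),
    "iban_id": pseudonymize(record["iban"]),
    "visits": record["visits"],   # non-identifying fields can stay as-is
}
print(pseudonymized)
```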

But the GDPR requirements do not include many specific security measures that IT must implement; a lot of the law will be figured out in litigation as regulators check into companies’ compliance, said Joseph Jerome, a policy counsel on the Privacy and Data Project at the Center for Democracy and Technology in Washington, D.C.

“Everyone needs to be inventorying their personal data and take a broad characterization of this,” Jerome said. “If you’re putting things in writing, that’s good. GDPR is going to lead to lots and lots of documentation.”