Tag Archives: organizations

Intune APIs in Microsoft Graph – Now generally available

With tens of thousands of enterprise mobility customers, we see great diversity in how organizations structure their IT resources. Some choose to manage their mobility solutions internally, while others work with a managed service provider to manage them on their behalf. Regardless of the structure, our goal is to enable IT to easily design processes and workflows that increase user satisfaction and drive security and IT effectiveness.

In 2017, we unified Intune, Azure Active Directory, and Azure Information Protection admin experiences in the Azure portal (portal.azure.com) while also enabling the public preview of Intune APIs in Microsoft Graph. Today, we are taking another important step forward in our ability to offer customers more choice and capability by making Intune APIs in Microsoft Graph generally available. This opens a new set of possibilities for our customers and partners to automate and integrate their workloads to reduce deployment times and improve the overall efficiency of device management.

Intune APIs in Microsoft Graph enable IT professionals, partners, and developers to programmatically access data and controls that are available through the Azure portal. One of our partners, Crayon (based in Norway), is using Intune APIs to automate tasks with unattended authentication:

Jan Egil Ring, Lead Architect at Crayon: “The Intune API in Microsoft Graph enables users to access the same information that is available through the Azure Portal – for both reporting and operational purposes. It is an invaluable asset in our toolbelt for automating business processes such as user on- and offboarding in our customers’ tenants. Intune APIs, combined with Azure Automation, help us keep inventory tidy, giving operations updated and relevant information.”

Intune APIs now join a growing family of other Microsoft cloud services that are accessible through Microsoft Graph, including Office 365 and Azure AD. This means that you can use Microsoft Graph to connect to data that drives productivity – mail, calendar, contacts, documents, directory, devices, and more. It serves as a single interface where Microsoft cloud services can be reached through a set of REST APIs.

The scenarios that Microsoft Graph enables are expansive. To give you a better idea of what is possible with Intune APIs in Microsoft Graph, let’s look at some of the core use cases that we have already seen being utilized by our partners and customers.


Microsoft Graph allows you to connect different Microsoft cloud services and automate workflows and processes between them. It is accessible through several platforms and tools, including REST-based API endpoints and most popular programming and automation platforms (.NET, JS, iOS, Android, PowerShell). Resources (user, group, device, application, file, etc.) and policies can be queried through this API, and formerly difficult or complex questions can be answered with straightforward queries.
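
As an illustration, here is a minimal PowerShell sketch that lists Intune managed devices through Microsoft Graph. It assumes you already have an OAuth 2.0 access token in $accessToken, obtained from an Azure AD app registration granted the DeviceManagementManagedDevices.Read.All permission, and it shows the calling pattern rather than a production script.

    # Minimal sketch: list Intune managed devices via Microsoft Graph.
    # Assumes $accessToken holds a valid OAuth 2.0 bearer token for an Azure AD
    # app granted DeviceManagementManagedDevices.Read.All.
    $headers = @{ Authorization = "Bearer $accessToken" }
    $uri     = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"

    $devices = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

    # Show a few useful properties for each enrolled device.
    # (Paging via @odata.nextLink is omitted for brevity.)
    $devices.value |
        Select-Object deviceName, operatingSystem, complianceState, lastSyncDateTime |
        Format-Table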

For example, one of our partners, PowerON Platforms (based in the UK), is using Intune APIs in Microsoft Graph to deliver solutions to its customers faster and more consistently. PowerON Platforms has created baseline deployment templates, built around distinct customer types and requirements, that compress a deployment process that would normally take two to three days down to 15 seconds. Their ability to get customers up and running is now faster than ever before.

Steve Beaumont, Technical Director at PowerON Platforms: “PowerON has developed new and innovative methods to increase the speed of our Microsoft Intune delivery and achieve consistent outputs for customers. By leveraging the power of Microsoft Graph and new Intune capabilities, PowerON’s new tooling enhances the value of Intune.”


Intune APIs in Microsoft Graph can also provide detailed user, device, and application information to other IT asset management systems. You could build custom experiences that call Microsoft Graph to configure Intune controls and policies and unify workflows across multiple services.

For example, Kloud (based in Australia) leverages Microsoft Graph to integrate Intune device management and support activities into existing central management portals. This increases Kloud’s ability to centrally manage an integrated solution for their clients, making them much more effective as an integrated solution provider.

Tom Bromby, Managing Consultant at Kloud: “Microsoft Graph allows us to automate large, complex configuration tasks on the Intune platform, saving time and reducing the risk of human error. We can store our tenant configuration in source control, which greatly streamlines the change management process and allows for easy audit and reporting of what is deployed in the environment, what devices are enrolled and what users are consuming the service.”


Having the right data at your fingertips is a must for busy IT teams managing diverse mobile environments. You can access Intune APIs in Microsoft Graph with Power BI and other analytics services to create custom dashboards and reports based on Intune, Azure AD, and Office 365 data – allowing you to monitor your environment and view the status of devices and apps across several dimensions, including device compliance, device configuration, app inventory, and deployment status. With Intune Data Warehouse, you can now access historical data for up to 90 days.
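
As a reporting-oriented illustration, the sketch below pulls the high-level managed device overview exposed by Microsoft Graph; the counts it returns are the kind of data you might feed into a Power BI dashboard. It assumes the same $accessToken setup as the earlier sketch.

    # Minimal sketch: pull a high-level device overview for reporting.
    # Assumes $accessToken is set up as in the earlier example.
    $headers  = @{ Authorization = "Bearer $accessToken" }
    $uri      = "https://graph.microsoft.com/v1.0/deviceManagement/managedDeviceOverview"
    $overview = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

    # Overall enrollment count and per-platform breakdown.
    $overview.enrolledDeviceCount
    $overview.deviceOperatingSystemSummary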

For example, Netrix, LLC (based in the US) leverages Microsoft Graph to build automated solutions that improve end-user experiences and increase reporting accuracy for more effective device management. These investments increase their efficiency and overall customer satisfaction.

Tom Lilly, Technical Team Lead at Netrix, LLC: “By using Intune APIs in Microsoft Graph, we’ve been able to provide greater insights and automation to our clients. We are able to surface the data they really care about and deliver it to the right people, while keeping administrative costs to a minimum. As an integrator, this also allows Netrix to provide repetitive, manageable solutions, while improving our time to delivery, helping get our customers piloted or deployed quicker.”

We are extremely excited to see how you will use these capabilities to improve your processes and workflows as well as to create custom solutions for your organization and customers. To get started, you can check out the documentation on how to use Intune and Azure Active Directory APIs in Microsoft Graph, watch our Microsoft Ignite presentation on this topic, and leverage sample PowerShell scripts.

Deployment note: Intune APIs in Microsoft Graph are being updated to their GA version today. The worldwide rollout should complete within the next few days.

Please note: Use of a Microsoft online service requires a valid license. Therefore, accessing EMS, Microsoft Intune, or Azure Active Directory Premium features via Microsoft Graph API requires paid licenses of the applicable service and compliance with Microsoft Graph API Terms of Use.

Curb stress from Exchange Server updates with these pointers

In my experience as a consultant, I find that few organizations have a reliable method for executing Exchange Server updates.

This tip outlines the proper procedures for patching Exchange that can prevent some of the upheaval associated with a disruption on the messaging platform.

How often should I patch Exchange?

In a perfect world, administrators would apply patches as soon as Microsoft releases them. This doesn’t happen for a number of reasons.

Microsoft has released patches and updates for both Exchange and Windows Server that caused trouble on those systems. Many IT departments have long memories, and those bad experiences keep them from staying current with Exchange Server updates. This is detrimental to the health of Exchange and should be avoided. With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Another wrinkle in the update process is that Microsoft releases cumulative updates (CUs) for Exchange Server on a quarterly schedule. CUs are updates that deliver functionality enhancements for the application.

Microsoft plans to release one CU for Exchange 2013 and 2016 each quarter, but it does not provide a set release date. A CU may be released on the first day of one quarter and then on the last day of the next.

Rollup Updates (RUs) for Exchange 2010 are also released quarterly. An RU is a package that contains multiple security fixes, while a CU is a complete server build.

For Exchange 2013 and 2016, Microsoft supports the current and previous CU. When admins call Microsoft for a support case, the company will ask them to update Exchange Server to at least the N-1 CU — where N is the latest CU and N-1 is the previous one — before it begins work on the issue. An organization that prefers to stay on older CUs limits its support options.

Because CUs are full builds of Exchange 2013/2016, administrators can deploy a new Exchange server directly from the most recent CU. For an existing Exchange server, updating with a newer CU for that version should work without issue.

Microsoft only tests a new CU deployment with the last two CUs, but I have never had an issue with an upgrade with multiple missed CUs. The only problems I have seen when a large number of CUs were skipped had to do with the prerequisites for Exchange, not Exchange itself.

Microsoft releases Windows Server patches on the second Tuesday of every month. As many administrators know, some of these updates can affect how Exchange operates. There is no set schedule for other updates, such as .NET. I recommend a quarterly update schedule for Exchange.

How can I curb issues from Exchange Server updates?

Just as every IT department is different, so is every Exchange deployment. There is no single update process that works for every organization, but these guidelines can reduce problems with Exchange Server patching. Even if the company has an established patching process, it might be worth reviewing that method if it’s missing some of the advice outlined below.

  • Back up Exchange servers before applying patches. This might be common sense for most administrators, but I have found it is often overlooked. If a patch causes a critical failure, a recent backup is the key to the recovery effort. Some might argue that there are Exchange configurations — such as Exchange Preferred Architecture — that do not require this, but a backup provides some reassurance if a patch breaks the system.
  • Measure the performance baseline before an update. How would you know if the CPU cycles on the Exchange Server are too high after an update if this metric hasn’t been tracked? The Managed Availability feature records performance data by default on Exchange 2013 and 2016 servers, but Exchange administrators should review server performance regularly to establish an understanding of normal server behavior.
  • Test patches in a lab that resembles production. When a new Exchange CU arrives, it has been through extensive testing. Microsoft deploys updates to Office 365 long before they are publicly available. After that, Microsoft gives the CUs to its MVP community and select organizations in its testing programs. This vetting process helps catch the vast majority of bugs before CUs go to the public, but some will slip through. To be safe, test patches in a lab that closely mirrors the production environment, with the same servers, firmware and network configuration.
  • Put Exchange Server into maintenance mode before patching. If the Exchange deployment consists of redundant servers, put them in maintenance mode before the update process. Maintenance mode is a feature of Managed Availability that turns off monitoring on those servers during the patching window. There are a number of PowerShell scripts in the TechNet Gallery that put servers into maintenance mode, which helps administrators streamline the application of Exchange Server updates; a minimal sketch of the core steps follows this list.
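
The PowerShell sketch below shows the core of what those maintenance-mode scripts do on a database availability group member. The server names EX01 and EX02 are placeholders, and the published scripts add checks (queue draining, validation, and resuming the server afterward) that are omitted here.

    # Minimal sketch: put DAG member EX01 into maintenance mode before patching.
    # EX01 and EX02 are placeholder names; run from the Exchange Management Shell.
    Set-ServerComponentState EX01 -Component HubTransport -State Draining -Requester Maintenance
    Redirect-Message -Server EX01 -Target EX02.contoso.com -Confirm:$false

    # Move active database copies off the server and block auto-activation.
    Suspend-ClusterNode EX01
    Set-MailboxServer EX01 -DatabaseCopyActivationDisabledAndMoveNow $true
    Set-MailboxServer EX01 -DatabaseCopyAutoActivationPolicy Blocked

    # Take the remaining components offline for the patching window.
    Set-ServerComponentState EX01 -Component ServerWideOffline -State Inactive -Requester Maintenance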

Upskilling: Digital transformation is economic transformation for all – Asia News Center

“So, we are working with a number of non-government organizations to help build the skills of these youth, to build their digital skills. We know that today, about 50 percent of jobs require technology skills. Within the next three years, that’s going to jump to more than 75 percent. So we’re working to help build the skills that employers need to expand their companies.”

In Sri Lanka, decades of civil strife have now given way to sustained economic growth. In this period of calm and reconstruction, 24-year-old Prabhath Mannapperuma leads a team of techie volunteers teaching digital skills to rural and underprivileged children. They use the micro:bit, a tiny programmable device that makes coding fun. “Using a keyboard to type in code is not interesting for kids,” says Mannapperuma, an IT professional and tech-evangelist, who is determined to inspire a new generation.

Upskilling is also a priority in one of Asia’s poorest countries, Nepal. Here, Microsoft has launched an ambitious digital literacy training program that is transforming lives. Santosh Thapa lost his home and livelihood in a massive 2015 earthquake and struggled in its aftermath to start again in business. Things turned around after he graduated from a Microsoft-sponsored course that taught him some digital basics, which he now uses daily to serve his customers and stay ahead of his competitors.

Often women are the most disadvantaged in the skills race. For instance, in Myanmar, only 35 percent of the workforce is female. Without wide educational opportunities, most women have been relegated to the home or the farm. But times are changing as the economy opens up after decades of isolation. “Women in Myanmar are at risk of not gaining the skills for the jobs of the future, and so we are helping to develop the skills of young women, and that’s been an exciting effort,” says Michelle.

The team at Microsoft Philanthropies has partnered with the Myanmar Book Aid and Preservation Foundation in its Tech Age Girls program. It identifies promising female leaders between the ages of 14 and 18 and provides them with essential leadership and computer science skills to be future-ready for the jobs of the 4th Industrial Revolution.

The goal is to create a network of 100 young women leaders in at least five locations throughout Myanmar with advanced capacity in high-demand technology skills. One of these future leaders is Thuza who is determined to forge a career in the digital space. “There seems to be a life path that girls are traditionally expected to take,” she says. “But I’m a Tech Age Girl and I’m on a different path.”

In Bangladesh, Microsoft and the national government have come together to teach thousands of women hardware and software skills. Many are now working at more than 5,000 state-run digital centers that encourage ordinary people to take up technology for business, work and studies. It is also hoped that many of the women who graduate from the training program will become digital entrepreneurs themselves.

These examples are undeniably encouraging and even inspirational. But Michelle is clear on one point: Digital skills development is more than just meaningful and impactful philanthropic work. It is also a hard-headed, long-term business strategy and a big investment in human capital.

Microsoft wants its customers to grow and push ahead with digital transformation, she says. And to do that, they need digitally skilled workers. “This is about empowering individuals, and also about enabling businesses to fill gaps in order for them to actually compete globally … by developing the skills, we can develop the economy.”

Citrix enables VM live migration for Nvidia vGPUs

Live migration for virtual GPUs has arrived, and the technology will help organizations more easily distribute resources and improve performance for virtual desktop users.

As more applications become graphics-rich, VDI shops need better ways to support and manage virtualized GPUs (vGPUs). Citrix and Nvidia said this month they will support live migration of GPU-accelerated VMs, allowing administrators to move VMs between physical servers with no downtime. VMware has demonstrated similar capabilities but has not yet brought them to market.

“The first time we all saw vMotion of a normal VM, we were all amazed,” said Rob Beekmans, an end-user computing consultant in the Netherlands. “So it’s the same thing. It’s amazing that this is possible.”

How vGPU VM live migration works

Live migration, the ability to move a VM from one host to another while the VM is still running, has been around for years. But it was not possible to live migrate a VM that included GPU acceleration technology such as Nvidia’s Grid. VMware’s VM live migration tool, vMotion, and Citrix’s counterpart, XenMotion, did not allow migration of VMs that had direct access to a physical hardware component. Complicating matters, live migration must replicate the state of the GPU on one server to another and essentially map its processes one to one. That’s difficult because a GPU is such a dense processor, said Anne Hecht, senior director of product marketing for vGPU at Nvidia.

XenMotion is now capable of live migrating a GPU-enabled VM on XenServer. Using the Citrix Director management console, administrators can monitor and migrate these VMs. They simply select the VM and from a drop-down menu choose the host they want to move it to. This migration process takes a few seconds, according to a demo Nvidia showed at Citrix Synergy 2017. XenMotion with vGPUs is available now as a tech preview for select customers, and Nvidia did not disclose a planned date for general availability.

This ability to redistribute VMs without having to shut them down brings several benefits. It could be useful for a single project, such as a designer working on a task that needs a lot of GPU resources for a few months, or adding more virtual desktop users overall. If a user needs more GPU power all of a sudden, IT can migrate his or her desktop VM to a different server that has more GPU resources available. IT may use live migration on a regular basis to change the amount of processing on different servers as users go through peaks and valleys of GPU needs.

Most important to users themselves, VM live migration means that there is no downtime on their virtual desktop during maintenance or when IT has to move a machine.

“The amount of time needed to save and close down a project can number in the tens of minutes in complex cases, and that makes for a lot of lost production time,” said Tobias Kreidl, desktop computing team lead at Northern Arizona University, who manages around 500 Citrix virtual desktops and applications. “Having this option is in bigger operations a huge plus. Even in a smaller shop, not having to deal with downtime is always a good thing as many maintenance processes require reboots.”

VMware vs. Citrix

The new Citrix capability only supports VM live migration between servers that have Nvidia GPU cards of the same type. Nvidia offers a variety of Grid options, which differ in the amounts of memory they include, how many GPUs they support and other aspects. So, XenMotion live migration can only happen from one Tesla M10 to another Tesla M10 card, for example, Hecht said.

At VMworld 2017, VMware demoed a similar process for Nvidia vGPUs with vMotion. This capability was not in beta or tech preview at the time, however, and still isn’t. Plus, the VMware capability works a little differently from Citrix’s. With VMware Horizon, IT cannot migrate without downtime; instead, a process called Suspend and Resume allows a GPU-enabled VM to hibernate, move to another host, then restart from its last running state. Users experience desktop downtime, but when it restarts it automatically logs in and runs with all of the last existing data saved.

Nvidia Grid graphics card

Nvidia is working with VMware to develop and release an official tech preview of this Suspend and Resume capability for vGPU migration and hopes to develop a fully live scenario for Horizon in the future as well, Hecht said.

“VMware will catch up, but I think it gives Citrix an early mover advantage,” said Zeus Kerravala, founder and principal analyst of ZK Research. “This might wake VMware up a little bit and be more aggressive with a lot of these emerging technologies.”

Who needs GPU acceleration?

Virtualized GPUs are becoming more necessary for VDI shops as more applications require intensive graphics and multimedia processing. Applications that use video, augmented or virtual reality and even many Windows 10 apps use more CPU than ever before — and vGPUs can help offload that.

“The GPU isn’t just for gamers anymore,” Kerravala said. “It is becoming more mainstream, and the more you have Microsoft and IBM and the big mainstream IT vendors doing this, it will help accelerate [GPU acceleration] adoption. It becomes a really important part of any data center strategy.”

At the same time, not every user needs GPU acceleration. Beekmans’ clients sometimes think they need vGPUs, when actually the CPU will provide good enough processing for the application in question, he said. And vGPU technology isn’t cheap, so organizations must weigh the cost versus benefits of adopting it.

“I don’t think everybody needs a GPU,” Beekmans said. “It’s hype. You have to look at the cost.”

More competition in the GPU acceleration market — which Nvidia currently dominates in terms of virtual desktop GPU cards — would help bring costs down and increase innovation, Beekmans said.

Still, the market is here to stay as more apps begin to require more GPU power, he added.

“If you work with anything that uses compute resources, you need to keep an eye on the world of GPUs, because it’s coming and it’s coming fast,” Kerravala agreed.

Microsoft to acquire Avere Systems, accelerating high-performance computing innovation for media and entertainment industry and beyond – The Official Microsoft Blog

The cloud is providing the foundation for the digital economy, changing how organizations produce, market and monetize their products and services. Whether it’s building animations and special effects for the next blockbuster movie or discovering new treatments for life-threatening diseases, the need for high-performance storage and the flexibility to store and process data where it makes the most sense for the business is critically important.

Over the years, Microsoft has made a number of investments to provide our customers with the most flexible, secure and scalable storage solutions in the marketplace. Today, I am pleased to share that Microsoft has signed an agreement to acquire Avere Systems, a leading provider of high-performance NFS and SMB file-based storage for Linux and Windows clients running in cloud, hybrid and on-premises environments.

Avere uses an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads. In the media and entertainment industry, Avere has worked with global brands including Sony Pictures Imageworks, animation studio Illumination Mac Guff and Moving Picture Company (MPC) to decrease production time and lower costs in a world where innovation and time to market are more critical than ever.

High-performance computing needs, however, do not stop there. Customers in life sciences, education, oil and gas, financial services, manufacturing and more are increasingly looking for these types of solutions to help transform their businesses. The Library of Congress, Johns Hopkins University and Teradyne, a developer and supplier of automatic test equipment for the semiconductor industry, are great examples where Avere has helped scale datacenter performance and capacity, and optimize infrastructure placement.

By bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure. We are excited to welcome Avere to Microsoft, and look forward to the impact their technology and the team will have on Azure and the customer experience.

You can also read a blog post from Ronald Bianchini Jr., president and CEO of Avere Systems, here.

Five questions to ask before purchasing NAC products

As network borders become increasingly difficult to define, and as pressure mounts on organizations to allow many different devices to connect to the corporate network, network access control is seeing a significant resurgence in deployment.

Often positioned as a security tool for the bring your own device (BYOD) and internet of things (IoT) era, network access control (NAC) is also increasingly becoming a very useful network management tool, acting as a gatekeeper to the network. It has moved away from being a system that blocks all access unless a device is recognized, and is now more permissive, allowing fine-grained control over what access is permitted based on policies defined by the organization. By supporting wired, wireless and remote connections, NAC can play a valuable role in securing all of them.

Once an organization has determined that NAC will be useful to its security profile, it’s time for it to consider the different purchasing criteria for choosing the right NAC product for its environment. NAC vendors provide a dizzying array of information, and it can be difficult to differentiate between their products.

When you’re ready to buy NAC products and begin researching your options — and especially when speaking to vendors to determine the best choice for your organization — consider the questions and features outlined in this article.

NAC device coverage: Agent or agentless?

NAC products should support all devices that may connect to an organization’s network. This includes many different configurations of PCs, Macs, Linux devices, smartphones, tablets and IoT-enabled devices. This is especially true in a BYOD environment.

NAC agents are small pieces of software installed on a device that provide detailed information about the device — such as its hardware configuration, installed software, running services, antivirus versions and connected peripherals. Some can even monitor keystrokes and internet history, though that presents privacy concerns. NAC agents can either run scans as a one-off — dissolvable — or periodically via a persistently installed agent.

If the NAC product uses agents, it’s important that they support the widest variety of devices possible, and that other devices can use agentless NAC if required. In many cases, the NAC product will need to support an agentless implementation to detect BYOD and IoT-enabled devices, as well as devices that can’t run NAC agents, such as printers and closed-circuit television equipment. Agentless NAC allows a device to be scanned by the network access controller and given the correct designation based on the class of the device. This is achieved with aggressive port scans and operating system version detection.
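
As a toy illustration of that idea (not how any particular NAC product implements it), the PowerShell sketch below probes a device for a few well-known service ports and infers a likely class from what answers. The IP address and the port-to-class mapping are assumptions for the example; real products layer on much richer fingerprinting, such as DHCP, SNMP and OS detection.

    # Toy sketch: infer a device class from open ports (illustration only).
    # The address and port-to-class mapping below are assumptions, not product behavior.
    $device = "10.0.10.25"
    $ports  = @{ 9100 = "printer (raw print service)"; 554 = "CCTV camera (RTSP)"; 3389 = "Windows host (RDP)" }

    foreach ($port in $ports.Keys) {
        $probe = Test-NetConnection -ComputerName $device -Port $port -WarningAction SilentlyContinue
        if ($probe.TcpTestSucceeded) {
            Write-Output "$device answers on port $port - likely a $($ports[$port])"
        }
    }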

Agentless NAC is a key component in a BYOD environment, and most organizations should look at it as a must-have when buying NAC products. Of course, gathering information via an agent will provide more detail about the device, but an agent-only approach isn’t viable on a modern network that needs to support many different devices.

Does the NAC product integrate with existing software and authentication?

This is a key consideration before you buy an NAC product, as it is important to ensure it supports the type of authentication that best integrates with your organization’s network. The best NAC products should offer a variety of choices: 802.1x — through the use of a RADIUS server — Active Directory, LDAP or Oracle. NAC will also need to integrate with the way an organization uses the network. If the staff uses a specific VPN product to connect remotely, for example, it is important to ensure the NAC system can integrate with it.

Supporting many different security systems that do not integrate with one another can cause significant overhead. A differentiator between the different NAC products is not only what type of products they integrate with, but also how many systems exist within each category.

Consider the following products that an organization may want to integrate with, and be sure that your chosen NAC product supports the products already in place:

1. Security information and event management

2. Vulnerability assessment

3. Advanced threat detection

4. Mobile device management

5. Next-generation firewalls

Does the NAC product aid in regulatory compliance?

NAC can help achieve compliance with many different regulations and standards, such as the Payment Card Industry Data Security Standard (PCI DSS), HIPAA, ISO 27002 and National Institute of Standards and Technology (NIST) guidance. Each of these stipulates certain controls regarding network access that should be implemented, especially around BYOD, IoT and rogue devices connecting to the network.

By continually monitoring network connections and performing actions based on the policies set by an organization, NAC can help with compliance with many of these regulations. These policies can, in many cases, be configured to match those of the compliance regulations mentioned above. So, when buying NAC products, be sure to have compliance in mind and to select a vendor that can aid in this process — be it through specific knowledge in its support team or through predefined policies that can be tweaked to provide the compliance required for your individual business.

What is the true cost of buying an NAC product?

The price of NAC products can be the most significant consideration, depending on the budget you have available for procurement. Most NAC products are charged per endpoint (device) connected to the network. On a large network, this can quickly become a substantial cost. There are often also hidden costs with NAC products that must be considered when assessing your purchase criteria.

Consider the following costs before you buy an NAC product:

1. Add-on modules. Does the basic price give organizations all the information and control they need? NAC products often have hidden costs, in that the basic package does not provide all the functionality required. The additional cost of add-on modules can run into tens of thousands of dollars on a large network. Be sure to look at what the basic NAC package includes and investigate how the organization will be using the NAC system. Specific integrations may be an additional cost. Is there extra functionality that will be required in the NAC product to provide all the benefits required?

2. Upfront costs. Are there any installation charges or initial training requirements? Be sure to factor these into the calculation, on top of the per-endpoint price.

3. Support costs. What level of support does the organization require? Does it need one-off or regular training, or does it require 24/7 technical support? This can add significantly to the cost of NAC products.

4. Staff time. While not a direct cost of buying NAC products, consider how much monitoring an NAC system requires. Time will need to be set aside not only to learn the NAC system, but also to manage it on an ongoing basis and respond to alerts. Even the best NAC systems require trained staff so that, when problems occur, there are people available to address them.

NAC product support: What’s included?

Support from the NAC manufacturer is an important consideration from the perspective of the success of the rollout and assessing the cost. Some of the questions that should be asked are:

  1. What does the basic support package include?
  2. What is the cost of extended support?
  3. Is support available at all times?
  4. Does the vendor have a significant presence in the organization’s region? For example, some NAC providers are primarily U.S.-based and may not be able to provide the same level of support to an organization based in EMEA.
  5. Is on-site training available and included in the license?

Support costs can significantly drive up the cost of deployment and should be assessed early in the procurement process.

What to know before you buy an NAC system

When it comes to purchasing criteria for network access control products, it is important that not only is an NAC system capable of detecting all the devices connected to an organization’s network, but that it integrates as seamlessly as possible. The cost of attempting to shoehorn existing processes and systems into an NAC product that does not offer integration can quickly skyrocket, even if the initial cost is on the cheaper side.

NAC should also work for the business, not against it. In the days when NAC products only supported 802.1x authentication and blocked everything by default, it was seen as an annoyance that stopped legitimate network authentication requests. But, nowadays, a good NAC system provides seamless connections for employees, third parties and contractors alike — and to the correct area of the network to which they have access. It should also aid in regulatory compliance, an issue all organizations need to deal with now.

Assessing NAC products comes down to the key questions highlighted above. They are designed to help organizations determine what type of NAC product is right for them, and accordingly aid them in narrowing their choices down to the vendor that provides the product that most closely matches those criteria.

Once seldom used by organizations, endpoint protection is now a key part of IT security, and NAC products have a significant part to play in that. From a hacker’s perspective, well-implemented and managed NAC products can mean the difference between a full network attack and total attack failure.

Microsoft customers and partners envision smarter, safer, more connected societies – Transform

Organizations around the world are transforming for the digital era, changing how businesses, cities and citizens work. This new digital era will address many of the problems created in the earlier agricultural and industrial eras, making society safer, more sustainable, more efficient and more inclusive.

But an infrastructure gap is keeping this broad vision from becoming a reality. Digital transformation is happening faster than we expected — but only in pockets. Microsoft and its partners seek to help cities and other public infrastructure providers close the gaps with advanced technologies in the cloud, data analytics, machine learning and artificial intelligence (AI).

Microsoft’s goal is to be a trusted partner to both public and private organizations in building connected societies. This summer, an IDC survey named Microsoft the top company for trust and customer satisfaction in enabling smart-city digital transformations.

Last week at a luncheon in New York City, Microsoft and executives from three organizations participating in the digital transformation shared how they are helping to close the infrastructure gap.

Arnold Meijer, TomTom’s strategic business development manager, at the Building Digital Societies salon lunch. (Photo by John Brecher)

TomTom NV, based in Amsterdam, traditionally focused on providing consumers with personal navigation. Now, “the need for locations surpasses the need for navigation — it’s everywhere,” said Arnold Meijer, strategic business development manager. “Managing a fleet of connected devices or ordering a ride from your phone — these things weren’t possible five years ago. We’re turning to cloud connectivity and the Internet of Things as tools to keep our maps and locations up to date.”

Sensors on devices and vehicles on the road deliver condition and usage data that highway planners, infrastructure managers and fleet operators need to make well-informed decisions.

Autonomous driving is directly in TomTom’s sights, a way to cut down on traffic accidents, one of the top 10 causes of death worldwide, and to reduce emissions through efficient routing. “You probably won’t own a vehicle 20 years from now, and the one that picks you up won’t have a driver,” Meijer said. “If you do go out driving yourself, it will be for fun.”

With all that time freed up from driving, travelers can do something else such as relax or work. Either option presents new business opportunities for companies that offer entertainment or enable productivity for a mobile client, who is almost certainly connected to the internet. “There will be new companies coming out supporting that, and I definitely foresee Microsoft and other businesses active there,” Meijer said.

“Such greatly eased personal transport may decrease the need to live close to work or school, changing settlement patterns and reducing the societal impacts of mobility. All because we can use location and cloud technology,” he added.

George Pitagorsky, CIO for the New York City Department of Education Office of School Support Services. (Photo by John Brecher)

The New York City Dept. of Education is using Microsoft technology extensively in a five-year, $25-million project that will tell parents their children’s whereabouts while the students are in transit, increase use of the cafeterias and provide access to information about school sports.

The city’s Office of Pupil Transportation provides rides to more than 600,000 students per day, with more than 9,000 buses and vehicles. For a preliminary version of the student-tracking system, the city has equipped its leased buses with GPS devices.

“When the driver turns on the GPS and signs in his bus, we can find out where it is at any time,” said George Pitagorsky, executive director and CIO for the department’s Office of School Support Services. If parents know what bus their child is on, they can more easily meet it at the stop or be sure to be there when the child is brought home.

A next step will be GPS units that don’t require driver activation. To let the system track not just the vehicle but its individual occupants, drivers will still need to register students into the GPS when they get on the bus.

“Biometrics like facial recognition that automate check-in when a student steps onto a bus — we’re most likely going to be there, but we’re not there yet,” Pitagorsky said.

Further out within the $25-million Illumination Program, a new bus-routing tool will replace systems developed more than 20 years ago, allowing the creation of more efficient routes, making course corrections to avoid problems, easily gathering vehicle-maintenance costs and identifying problem vehicles.

Other current projects include a smartphone app to advise students of upcoming meal choices in the school cafeterias, with an eye to increasing cafeteria use, enhancing students’ nutritional intake and offering students a voice in entree choices. The department has also created an app that displays all high school sports games, locations and scores.

A new customer-relations management app will let parents update their addresses and request special transport services on behalf of their children, with no more need to make a special visit to the school to do so. A mobile app will allow parents and authorized others to locate their children or bus, replacing the need for a phone call to the customer service unit. And business intelligence and data warehousing will get a uniform architecture, to replace the patchwork data, systems and tools now in place.

Christy Szoke, CMO and co-founder of Fathym. (Photo by John Brecher)

Fathym, a startup in Boulder, Colorado, is directly addressing infrastructure gaps through a rapid-innovation platform intended to harmonize disparate data and apps and facilitate Internet of Things solutions.

“Too often, cities don’t have a plan worked out and are pouring millions of dollars into one solution, which is difficult to adjust to evolving needs and often leads to inaccessible, siloed data,” said co-founder and chief marketing officer Christy Szoke. “Our philosophy is to begin with a small proof of concept, then use our platform to build out a solution that is flexible to change and allows data to be accessible from multiple apps and user types.” Fathym makes extensive use of Azure services but hides that complexity from customers, she said.

To create its WeatherCloud service, Fathym combined data from roadside weather stations and sensors with available weather models to create a road weather forecast especially for drivers and maintenance providers, predicting conditions they’ll find precisely along their route.

“We’re working with at least eight data sets, all completely different in format, time intervals and spatial resolutions,” said Fathym co-founder and CEO Matt Smith. “This is hard stuff. You can’t have simplicity on the front end without a complicated back-end system, a lot of math, and a knowledgeable group of different types of engineers helping to make sense of it all.”

Despite the ease that cloud services have brought to application development, Smith foresees a need for experts to wrangle data even 20 years from now.

“When people say, ‘the Internet of Things is here’ and ‘the robots are going to take over,’ I don’t think they have the respect they should have for how challenging it will remain to build complex apps,” Smith said.

Added Szoke, “You can’t just say ‘put an AI on it’ or ‘apply machine learning’ and expect to get useful data. You will still need creative minds, and data scientists, to understand what you’re looking at, and that will continue to be an essential industry.”

Micro data centers garner macro hype and little latency

LAS VEGAS — The rapid growth of data in many organizations is piquing IT interest in edge computing.

Edge computing places data processing closer to data sources and end users to reduce latency, and micro data centers are one way to achieve that. Micro data centers, also called edge data centers, are typically modular devices that house all infrastructure elements, from servers and storage to uninterruptible power supplies and cooling. As data centers receive a flood of information from IoT devices, the two concepts took center stage for IT pros at Gartner’s data center conference last week in Las Vegas.

“As we start to have billions of things that are expecting instantaneous response and generating 4k video, the traffic is such that it doesn’t make sense to take all of that and shove it into a centralized cloud,” said Bob Gill, a research vice president at Gartner.

Gill compared the importance of the edge to the effect that U.S. President Dwight D. Eisenhower’s highway system had on declining communities in the Midwest. Towns that were otherwise cut off from commerce could easily connect to cities with more promise. Similarly, organizations can place a micro data center in whichever location will maximize its value: an office, warehouse, factory floor or colocation facility.

“If you build the infrastructure based only on the centralized cloud models, data centers and colocation facilities without thinking about the edge, we’re going to find ourselves in three to four years with suboptimal infrastructure,” Gill said.

Edge computing enables data to be processed closer to its source.

A growing market

The market for micro data centers is still small, but it’s growing quickly. By 2021, 25% of enterprise organizations will have deployed a micro data center, according to Gartner.

Some organizations are ahead of the edge computing game. Frank Barrett, IT operations director for Spartan Motors in Charlotte, Mich., inherited multiple data centers from a company acquisition. Now, the company has two primary data centers in Michigan and Indiana, and three smaller, micro data centers in Nebraska. All of the data centers currently have traditional hub-and-spoke networking infrastructure, but Barrett is considering improving the networks to better support the company’s edge data centers. He is currently in the process of moving to one provider for all of the company’s services, as well as updating switches, routers and firewalls throughout the enterprise.

“Latency is a killer,” Barrett said. “We’ve got a lot of legacy systems that people in different locations need access to. It can be a nightmare for those remote locations, regardless of how much bandwidth I throw between sites.”

Other organizations interested in adopting this technology have a choice among three types of micro data center providers. Infrastructure providers offer a one-stop shop, with a single vendor supplying all the hardware. Facilities specialists are often hardware-agnostic and provide a range of modular options, but the IT hardware may need to be sourced separately. Regional providers are highly focused on a given region and provide strong local customer service, but a smaller business base can lead to less stability for those providers, with a higher risk of acquisitions and mergers, said Jeffrey Hewitt, a research vice president at Gartner.

One data center engineer at an audit company for retail and healthcare services is interested in the facilities provider approach because his company has a dedicated, in-house IT team to handle the other aspects of a data center. The engineer requested anonymity because he wasn’t authorized to speak to the media.

“With the facilities [option], you can install whatever you want,” he said. “Most offices have a main distribution facility, so they already have a circuit coming in, cooling in place and security. We don’t need any of that; it’d just be a dedicated rack for the micro data center.”

Micro data centers, not micro problems

Since micro data centers are often in a different physical location than a company’s traditional data center, IT needs to ensure that the equipment is secure, reliable and able to operate without constant repairs, said Daniel Bowers, a research director at Gartner. Industrial edge data centers in particular need to ruggedize the equipment so that it can withstand elements such as excessive vibrations and dust.

The distributed nature of micro data centers means that management is another concern, said Steven Carlini, senior director of data center global solutions at Schneider Electric, an energy management provider based in France.

“You don’t want to dispatch service to thousands of sites; you want the notifications that you get to be very specific,” he said. “Hopefully you can resolve an issue without sending someone on site.”

Vendor lock-in is a concern, particularly with all-in-one providers such as Dell EMC, HPE and Hitachi. It’s important to choose the right vendor from the start, which can be overwhelming in an oversaturated market. In reality, micro data centers have been around for years: at last year’s conference, Schneider Electric and HPE unveiled the HPE Micro Datacenter, but Schneider Electric had offered micro data centers for at least three years before that. This year, the company introduced Micro Data Center Xpress, which allows customers or partners to configure IT equipment before installing the system.

Hewitt recommends a four-step process to choose a micro data center vendor: Identify the requirements, score the vendors based on strengths and weaknesses, use those to create a shortlist and negotiate a contract with at least two comparable vendors.

Third-party E911 services expand call management tools

Organizations are turning to third-party E911 services to gain management capabilities they can’t get natively from their IP telephony provider, according to a report from Nemertes Research.

IP telephony providers may offer basic 911 management capabilities, such as tracking phone locations, but organizations may have needs that go beyond phone tracking. The report, sponsored by telecom provider West Corporation, lists the main reasons why organizations would use third-party E911 services.

Some organizations may deploy third-party E911 management for call routing to ensure an individual 911 call is routed to the correct public safety answering point (PSAP). Routing to the correct PSAP is difficult for organizations with remote and mobile workers. But third-party E911 services can offer real-time location tracking of all endpoints and use that information to route to the proper PSAP, according to the report.

Many larger organizations have multivendor environments that may include multiple IP telephony vendors. Third-party E911 services offer a single method of managing location information across endpoints, regardless of the underlying telephony platform.

The report also found third-party E911 management can reduce costs for organizations by automating the initial setup and maintenance of 911 databases in the organization. Third-party E911 services may also support centralized call routing, which could eliminate the need for local PSTN connections at remote sites and reduce the operating and hardware expenses at those sites.

Genesys unveils Amazon integration

Contact center vendor Genesys, based in Daly City, Calif., revealed an Amazon Web Services partnership that integrates AI and Genesys’ PureCloud customer engagement platform.

Genesys has integrated PureCloud with Amazon Lex, a service that lets developers build natural language, conversational bots, or chatbots. The integration allows businesses to build and maintain conversational interactive voice response (IVR) flows that route calls more efficiently.

Amazon Lex helps IVR flows better understand natural language by enabling IVR flows to recognize what callers are saying and their intent, which makes it more likely for the call to be directed to the appropriate resource the first time without error.

The chatbot integration also allows organizations to consolidate multiple interactions into a single flow that can be applied over different self-service channels. This reduces the number of call flows that organizations need to maintain and can simplify contact center administration.

The chatbot integration will be available to Genesys customers in 2018.

Conference calls face user, security challenges

A survey of 1,000 professionals found that businesses in the U.S. and U.K. are losing $34 billion due to delays and distractions during conference calls, a significant increase from $16 billion in a 2015 survey.

The survey found employees waste an average of 15 minutes per conference call getting it started and dealing with distractions. More than half of respondents said distractions have a moderate-to-major negative effect on productivity, enthusiasm to participate and the ability to concentrate.

The survey was conducted by remote meetings provider LoopUp and surveyed 1,000 professionals in the U.S. and U.K. who regularly participate in conference calls at organizations ranging from 50 to more than 1,000 employees.

The survey also found certain security challenges with conference calls. Nearly 70% of professionals said it’s normal to discuss confidential information over a call, while more than half of respondents said it’s normal to not know who is on a call.

Users are also not fully comfortable with video conferencing, according to the survey. Half of respondents said video conferencing is useful for day-to-day calls, but 61% still prefer to use the phone to dial in to conference calls.

DevOps transformation in large companies calls for IT staff remix

SAN FRANCISCO — A DevOps transformation in large organizations can’t just rely on mandates from above that IT pros change the way they work; IT leaders must rethink how teams are structured if they want them to break old habits.

Kaiser Permanente, for example, has spent the last 18 months trying to extricate itself from 75 years of organizational cruft through a consumer digital strategy program led by Alice Raia, vice president of digital presence technologies. With the Kaiser Permanente website as its guinea pig, Raia realigned IT teams into a squad framework popularized by digital music startup Spotify, with cross-functional teams of about eight engineers. At the 208,000-employee Kaiser Permanente, that’s been subject to some tweaks.

“At our first two-pizza team meeting, we ate 12 pizzas,” Raia said in a session at DevOps Enterprise Summit here. Since then, the company has settled on an optimal number of 12 to 15 people per squad.

The Oakland, Calif., company decided on the squads approach when a previous model with front-end teams and systems-of-record teams in separate scrums didn’t work, Raia said. Those silos and a focus on individual projects resulted in 60% waste in the application delivery pipeline as of a September 2015 evaluation. The realignment into cross-functional squads has forced Kaiser’s website team to focus on long-term investments in products and faster delivery of features to consumers.

IT organizational changes vary by company, but IT managers who have brought about a DevOps transformation in large companies share a theme: Teams can’t improve their performance without a new playbook that puts them in a better position to succeed.

At Columbia Sportswear Co. in Portland, Ore., this meant new rotations through various areas of focus for engineers — from architecture design to infrastructure building to service desk and maintenance duties, said Scott Nasello, senior manager of platforms and systems engineering, in a presentation.

“We had to break the monogamous relationships between engineers and those areas of interest,” Nasello said. This resulted in surprising discoveries, such as when two engineers who had sat next to each other for years discovered they’d taken different approaches to server provisioning.

Short-term pain means long-term gain

In the long run, the move to DevOps results in standardized, repeatable and less error-prone application deployments, which reduces the number of IT incidents and improves IT operations overall. But those results require plenty of blood, sweat and tears upfront.

“Prepare to be unpopular,” Raia advised other enterprise IT professionals who want to move to DevOps practices. During Kaiser Permanente’s transition to squads, Raia had the unpleasant task of informing executive leaders that IT must slow down its consumer-facing work to shore up its engineering practices — at least at first.

Organizational changes can be overwhelming, Nasello said.

“There were a lot of times engineers were running on empty and wanted to tap the brakes,” he said. “You’re already working at 100%, and you feel like you’re adding 30% more.”

IT operations teams ultimately can be crushed between the contradictory pressures of developer velocity on the one hand and a fear of high-profile security breaches and outages on the other, said Damon Edwards, co-founder of Rundeck Inc., a digital business process automation software maker in Menlo Park, Calif.

Damon Edwards, co-founder of Rundeck Inc., shares the lessons he learned from customers about how to reduce the impact of DevOps velocity on IT operations.

A DevOps transformation means managers must empower those closest to day-to-day systems operations to address problems without Byzantine systems of escalation, service tickets and handoffs between teams, Edwards said.

Edwards pointed to Rundeck customer Ticketmaster as an example of an organizational shift toward support at the edge. A new ability to resolve incidents in the company’s network operations center — the “EMTs” of IT incident response — reduced IT support costs by 55% and the mean time to response from 47 minutes to 3.8 minutes on average.

“Silos ruin everything — they’re proven to have a huge economic impact,” Edwards said.

And while DevOps transformations pose uncomfortable challenges to the status quo, some IT ops pros at big companies hunger for a more efficient way to work.

“We’d like a more standardized way to deploy and more investment in the full lifecycle of the app,” said Jason Dehn, systems analyst for a large U.S. retailer that he asked not to be named. But some lines of business at the company are happy with the status quo, where they aren’t entangled in day-to-day application maintenance.

“Business buy-in can be the challenge,” Dehn said.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.