
IAM engineer roles require training and flexibility

BOSTON — As identity and access management becomes more critical to security strategies, organizations must be on the lookout for good identity engineers — and there are a few different ways IT can approach this staffing.

Identity and access management (IAM) is increasingly essential as mobile devices add new access points for employees and fresh ways to leak corporate data. But the job market still lacks skilled IAM engineer candidates, so organizations may be better off training existing IT staff or hiring general security engineers and building up their IAM expertise, experts said here at this week’s Identiverse conference.

“Focus on general IT skills and roles [when you] hire engineers,” said Olaf Grewe, director of access certification services at Deutsche Bank, in a session. “Don’t wait for this elusive candidate that has all of this baked in. Bring them up to where you need to be.”

IAM job market landscape

Job growth in IAM has surged in the past year, with about 1,500 IAM engineer openings currently in the Boston area, 4,800 in the D.C. area and 3,320 in Silicon Valley, according to a presentation by Dave Shields, a senior security architect for IAM at DST Systems, a financial technology company in Kansas City.

“It is finally reaching a state where people see that it’s a viable place to have [a career],” said Shields, who was also recently the managing director of IT and ran IAM at the University of Oklahoma. “There are so many things you can do with it.”

There aren’t enough people already skilled in IAM to fill these roles, however, and those who are may not live nearby. Instead, IT departments can train up existing staff on IAM — but the key is to choose the right people.

“The best engineers you’re going to find are the people who aren’t afraid to break stuff,” Shields said. “Maybe you have a sysadmin who gets into systems and was able to make them do things they were never able to do before. Talk to that person.”

The person should also be flexible, adaptable to change and willing to ask questions others don’t want to hear, he said. Other desirable qualities for an IAM engineer are creativity and an ability to understand the business’ functions and the technology in use.

“Find someone who can look at something and say, ‘I can make that better,'” Shields said. “There are some things that simply cannot be taught.”

IAM and security go hand in hand

Deutsche Bank is currently building up an IAM team that includes existing IT staff and external hires, which the company then trains on IAM skills. That involves four major steps: baseline IAM training, then vendor-specific education, then CISSP, followed by continuous learning over time via conferences, lunch and learns, and updated vendor training.


“We need to make sure people have access to the right resources,” Grewe said. “We want to have people who are continuously developing.”

General security skills are especially important for IAM engineer candidates, experts said. Sarah Squire, a senior technical architect at Ping Identity, started out by learning the important security specs and standards as a way toward training up on identity management.

“It’s a lot of on-the-job training,” Squire said. “We’re starting to realize that we really need a base body of knowledge for the entire field.”

For that reason, Squire along with Ian Glazer, vice president for identity product management at Salesforce, founded IDPro, a community for IAM professionals. Launched at last year’s Identiverse (then Cloud Identity Summit), IDPro is currently forming the body of knowledge that an IAM engineer must know, and plans to offer a certification in the future, Squire said.

“It’s really important that people who come in not only understand IAM but also really understand security,” Grewe said.

It’s also important to determine where within the organization those IAM professionals will live. Is it operations? Development? Security?

“A lot of people just don’t know where that fits,” Shields said. “There is nowhere better for them to be in my opinion than on the IT security team.”

Grewe’s team at Deutsche Bank, for instance, works under the chief security officer, whose organization has a lot of budget to work with, he said. At IBM, the team that handles internal identity management works closely with HR and other groups that are involved in employees’ access rights, said Heather Hinton, vice president and chief information security officer for IBM Hybrid Cloud.

“[Organizations] need to figure out how to be less siloed,” she said.

Iron Mountain data recovery adds ransomware protection

Iron Mountain data recovery wants to perform “CPR” on organizations that get hit with ransomware.

The Iron Cloud Critical Protection and Recovery (CPR) service, set to launch this month, isolates data by disconnecting it from the network. It provides a “cleanroom” to recover data in the event of an attack and ensures that ransomware is out of the system.

“Every business is really data-driven today,” said Pete Gerr, senior product manager at Iron Mountain, which is based in Boston. “Data is their most valuable asset.”

Legacy backup and disaster recovery “really weren’t built for the modern threat environment,” and isolated recovery offers the best protection against ransomware, Gerr said.

Ransomware continues to get smarter and remains a prevalent method of cyberattack. Phil Goodwin, research director of storage systems and software at IDC, said the majority of risks for organizations’ data loss involve malware and ransomware. “It’s not a matter of if they’re going to get hit, it’s a matter of when,” Goodwin said.

That’s caused many organizations to proactively tackle the problem with ransomware-specific products.

“It’s moved from a backroom discussion to the boardroom,” Gerr said.

Iron Mountain data recovery gets ‘clean’

Iron Cloud CPR features Iron Mountain’s Virtual Cleanroom, a dedicated computing environment hosted within Iron Cloud data centers that provides an air gap. The cleanroom serves as an offline environment where customers can recover backups stored within the secure CPR vault. Then customers can use data forensic utilities or a designated security provider to audit and validate that restored data sets are free from viruses and remediate them if necessary.


Customers then use Iron Mountain data recovery to restore selected sets back to their production environment or another site.

“The last thing we want to do is recover a backup set … that reinfects your environment,” Gerr said.

The air gap, which ensures that ransomware does not touch a given data set, can also be found in such media as tape storage that is disconnected from the network.

Goodwin cautioned that the CPR product should complement an organization’s backup and recovery platform, not replace it.

“It will fit well with what the customer has,” he said.

Iron Cloud CPR also includes a managed service for organizations using Dell EMC’s Cyber Recovery for ransomware recovery. Hosted in Iron Mountain’s data centers, Iron Cloud CPR for Dell EMC Cyber Recovery on Data Domain enables customers to isolate critical data off site for protection against attacks, using a cloud-based monthly subscription model.

CPR is part of the Iron Cloud data management portfolio, which was built using Virtustream’s xStream Cloud Management Platform. The portfolio also includes backup, archive and disaster recovery services.

Both Iron Cloud CPR offerings are fully managed services and work without any other products, Gerr said. They will be available as part of Dell EMC and Virtustream’s data protection portfolios.

Iron Mountain, which claims more than 230,000 customers across its entire product line, said Iron Cloud CPR is expected to be generally available by the end of June. Several customers are working with the Iron Mountain data recovery product as early adopters.

A data replication strategy for all your disaster recovery needs

Meeting an organization’s disaster recovery challenges requires addressing problems from several angles based on specific recovery point and recovery time objectives. Today’s tight RTO and RPO expectations mean organizations can tolerate almost no data loss and almost no downtime.

To meet those expectations, businesses must move beyond backup and consider a data replication strategy. Modern replication products offer more than just a rapid disaster recovery copy of data, though. They can help with cloud migration, using the cloud as a DR site and even solving copy data challenges.

Replication software comes in two forms. One is integrated into a storage system, and the other is bought separately. Both have their strengths and weaknesses.

An integrated data replication strategy

The integrated form of replication has a few advantages. It’s often bundled at no charge or is relatively inexpensive. Of course, nothing in life is really free: the customer pays extra for the storage hardware in order to get the “free” software. In addition, storage-based replication is relatively easy to manage at scale. Most storage system replication works at a volume level, so one job replicates the entire volume, even if there are a thousand virtual machines on it. And finally, storage system-based replication is often backup-controlled, meaning the replication job can be integrated with and managed by backup software.

There are, however, problems with a storage system-based data replication strategy. First, it’s specific to that storage system. Consequently, since most data centers use multiple storage systems from different vendors, they must also manage multiple replication products. Second, the advantage of replicating entire volumes can be a disadvantage, because some data centers may not want to replicate every application on a volume. Third, most storage system replication inadequately supports the cloud.

Stand-alone replication

IT typically installs stand-alone replication software on each host it’s protecting or, in a hypervisor environment, integrates it into the cluster. Flexibility is among software-based replication’s advantages. The same software can replicate from any hardware platform to any other hardware platform, letting IT mix and match source and target storage devices. The second advantage is that software-based replication can be more granular about what’s replicated and how frequently replication occurs. And the third advantage is that most software-based replication offers excellent cloud support.


At a minimum, the cloud is used as a DR target for data, but it can also serve as an entire disaster recovery site, not just a place to keep a copy. This means IT can instantiate virtual machines there, using cloud compute in addition to cloud storage. Some approaches go further with cloud support, allowing replication across multiple clouds or from the cloud back to the original data center.

The primary downside of a stand-alone data replication strategy is that it must be purchased, because it isn’t bundled with storage hardware. Its granularity also means dozens, if not hundreds, of jobs must be managed, although several stand-alone data replication products have added the ability to group jobs by type. Finally, there isn’t wide support from backup software vendors for these products, so any integration is a manual process requiring custom scripts.

Modern replication features

Modern replication software should support the cloud and support it well. This requirement draws a line of suspicion around storage systems with built-in replication, because cloud support is generally so weak. Replication software should have the ability to replicate data to any cloud and use that cloud to keep a DR copy of that data. It should also let IT start up application instances in the cloud, potentially completely replacing an organization’s DR site. Last, the software should support multi-cloud replication to ensure both on-premises and cloud-based applications are protected.

Another feature to look for in modern replication is integration into data protection software. This capability can take two forms: The software can manage the replication process on the storage system, or the data protection software could provide replication. Several leading data protection products can manage snapshots and replication functions on other vendors’ storage systems. Doing so eliminates some of the concern around running several different storage system replication products.

Data protection software that integrates replication can either be traditional backup software with an added replication function or traditional replication software with a file history capability, potentially eliminating the need for backup software. It’s important for IT to make sure the capabilities of any combined product meet all backup and replication needs.

How to make the replication decision

The increased expectation of rapid recovery with almost no data loss is something everyone in IT will have to address. While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well. The pros and cons of both an integrated and stand-alone data replication strategy hinge on the environment in which they’re deployed.

Each IT shop must decide which type of replication best meets its current needs. At the same time, IT planners must figure out how that new data replication product will integrate with existing storage hardware and future initiatives like the cloud.

MIT CIO: What is digital culture, why it’s needed and how to get it

CAMBRIDGE, Mass. — Can large organizations adopt the digital cultures of 21st century goliaths like Amazon and Google? That was the question posed in the kickoff session at the recent MIT Sloan CIO Symposium.

The assumption — argued by panel moderator and MIT Sloan researcher George Westerman — is that there is such a thing as a digital culture. Citing MIT Sloan research, Westerman said it includes values like autonomy, speed, creativity and openness; it prevails at digital native companies whose mission is nothing less than to change the world; and it’s something that “pre-digital” companies need too — urgently.

Digital technologies change fast, Westerman said; organizations much less so. But as digital technologies like social, mobile, AI and cloud continue to transform how customers behave, organizational change is imperative — corporate visions, values and practices steeped in 20th century management theories must also be adapted to exploit digital technologies, or companies will fail.

“For all the talk we’ve got about digital, the real conversation should be about transformation,” said Westerman, principal research scientist at the MIT Initiative on the Digital Economy. “This digital transformation is a leadership challenge.”

Marrying core values with digital mechanisms

Creating a digital culture is not just about using digital technology or copying Silicon Valley companies, Westerman stressed. He said he often hears executives say that if they just had the culture of a Google or Netflix, their companies could really thrive.


“And I say, ‘Are you sure you want that?’ That means you’ve got to hire people that way, pay them that way and you might need to move out to California. And frankly a lot of these cultures are not the happiest places to work,” Westerman said. And some can even be downright toxic, he noted, alluding to Uber’s problems with workplace culture.

The question for pre-digital companies, then, is not whether they can adopt a digital culture but how they create the right digital culture, given their pre-digital legacies, which include how their employees want to work and how they want to treat employees. The next challenge will be infusing the chosen digital culture into every level of the organization.

Corporate values are important, but culture is what happens when the boss leaves the room, Westerman said, referencing his favorite definition.


“The practices are what matters,” he told the audience of CIOs, introducing a panel of experts who served up some practical advice.

Here are some of the digital culture lessons practiced by the two IT practitioners on the panel, David Gledhill, group CIO and head of group technology and operations at financial services giant DBS Bank, and Andrei Oprisan, vice president of technology and director of the Boston tech hub at Liberty Mutual Insurance, the diversified global insurer.

Liberty Mutual’s Andrei Oprisan: ‘Challenging everything’

Mission: Oprisan, who was hired by Liberty Mutual in 2017 to fix core IT systems and help unlock the business value in digital systems, said the company’s digital mission is clear and clearly understood. “We ask ourselves, ‘Are we doing the best thing for the customer in every single step we’re taking?'”


The mission is also urgent, because not only are insurance competitors changing rapidly, he said, but “we’re seeing companies like Amazon and Google entering the insurance space.”

“We need to be able to compete with them and beat them at that game, because we do have those core competencies, we do have a lot of expertise in this area and we can build products much faster than they can,” he said.

Outside talent: Indeed, in the year since he was hired, Oprisan has scaled the Boston tech hub’s team from eight developers to over 120 developers, scrum masters and software development managers to create what he calls a “customer-centric agile transformation.” About a quarter of the hires were from inside the organization; the rest were from the outside.

Hiring from the outside was a key element in creating a digital culture in his organization, Oprisan said.

“We infused the organization with a lot of new talent to help us figure out what good looks like,” he said. “So, we’re not only trying to reinvent ourselves, investing in our own talent and helping them improve and giving them all the tools they need, but we also add talent to that pool to change the way we’re solving all of these challenges.”

Small empowered teams: In the quest to get closer to the customer, the organization has become “more open to much smaller teams owning business decisions end to end,” he said, adding that empowering small teams represented a “seismic shift for any organization.” Being open to feedback and being “OK with failure” — the sine qua non of the digital transformation — is also a “very big part of being able to evolve very quickly,” he said.

“We’re challenging everything. We’re looking at all of our systems and all of our processes, we’re looking at culture, looking at brands, looking at how we’re attracting and retaining talent,” he said.

T-shirts and flip-flops: Oprisan said that autonomy and trust are key values in the digital culture he is helping to build at Liberty’s Boston tech hub.

“We emphasize that we are going to give them very challenging, hard problems to solve, and that we are going to trust they know how to solve them,” he said. “We’re going to hire the right talent, we’re going to give you a very direct mission and we’re going to get out of the way.”

In fact, Oprisan’s development teams work across the street from the company’s Boston headquarters, and they favor T-shirts and flip-flops over the industry’s penchant for business attire, he said — with corporate’s blessing. “Whatever it takes to get the job done.”

DBS Bank CIO David Gledhill: ‘Becoming the D in Gandalf’

Mission: Gledhill, the winner of the 2017 MIT Sloan CIO Leadership Award and a key player in DBS Bank’s digital transformation, said the digital journey at Singapore’s largest bank began a few years ago with the question of what it would take to run the bank “more like a technology company.”

Bank leadership studied how Google, Amazon, Netflix, Apple, LinkedIn and Facebook operated “at a technology level but also at a culture level,” he said, analyzing the shifts DBS would have to make to become more like those companies. In the process, Gledhill hit upon a slogan: DBS would strive to become the “D” in Google-Amazon-Netflix-Apple-LinkedIn-Facebook (GANALF). “It seems a little cheesy … but it just resonated so well with people.”

Cheesiness aside, the wizardry involved in becoming the “D” in Gandalf has indeed played out on a technology and human level, according to Gledhill. Employees now have “completely different sets of aspirations” about their jobs, a change that started with the people in the technology units and spread to operations and the real estate unit. “It was really revolutionary. Just unlocking this interest in talent and desire in people has taken us to a completely new level of operation.”

Gledhill is a fan of inspirational motifs — another DBS slogan is “Making banking joyful” — but he said slogans are not sufficient to drive digital transformation. He explained that the collective embrace of a digital culture by DBS tech employees was buttressed by five key operational tenets. (He likened the schema to a DBS version of the Trivial Pursuit cheese wheel.) They are: 1. Shift from project to platform; 2. Agile at scale; 3. Rethinking the organization; 4. Smaller systems for experimentation; 5. Automation.

Platform not projects, Agile: “Rather than having discrete projects that need budget and financing and committees and all that stuff, we got rid of all that,” Gledhill said. In its place, DBS has created and funded platforms with specific capabilities. Management describes the outcomes for teams working on the platforms. For example, goals include increasing the number of customers acquired digitally, or increasing digital transactions. But it does not prescribe the inputs, setting teams free to achieve the goals. That’s when “you can really start performing Agile at scale,” he said.

Rethink, rebuild, automate: DBS’s adoption of a digital culture required rethinking organizational processes and incentives. “We call it ‘organized for success’ on the cheese wheel, which is really about DevOps, business and tech together, and how you change the structure of the KPIs and other things you use to measure performance with,” he said.

On the engineering side, DBS now “builds for modern systems,” he said. That translates into smaller systems built for experimentation, for A/B testing, for data and for scaling. “The last piece was automation — how do you automate the whole tech pipeline, from test to build to code deploy,” Gledhill said.

“So those five cheeses were the things we wanted everybody to shift to — and that included open source and other bits and pieces,” he said. “On the outer rim of the five cheeses, each one had a set of maybe five to 10 discrete outputs that had to change.”

One objective of automating every system was to enable DBS to get products to market faster, Gledhill said. “We have increased our release cadence — that is, the number of times we can push into a dev or production environment — by 7.5 times. That’s a massive increase from where we started.”

Editor’s note: Look for detailed advice on how to create a digital culture from experts at McKinsey & Company and Korn Ferry in part two of this story later this week.

Intune APIs in Microsoft Graph – Now generally available

With tens of thousands of enterprise mobility customers, we see a great diversity in how organizations structure their IT resources. Some choose to manage their mobility solutions internally while others choose to work with a managed service provider to manage on their behalf. Regardless of the structure, our goal is to enable IT to easily design processes and workflows that increase user satisfaction and drive security and IT effectiveness.

In 2017, we unified Intune, Azure Active Directory, and Azure Information Protection admin experiences in the Azure portal (portal.azure.com) while also enabling the public preview of Intune APIs in Microsoft Graph. Today, we are taking another important step forward in our ability to offer customers more choice and capability by making Intune APIs in Microsoft Graph generally available. This opens a new set of possibilities for our customers and partners to automate and integrate their workloads to reduce deployment times and improve the overall efficiency of device management.

Intune APIs in Microsoft Graph enable IT professionals, partners, and developers to programmatically access data and controls that are available through the Azure portal. One of our partners, Crayon (based in Norway), is using Intune APIs to automate tasks with unattended authentication:

Jan Egil Ring, Lead Architect at Crayon: “The Intune API in Microsoft Graph enables users to access the same information that is available through the Azure Portal – for both reporting and operational purposes. It is an invaluable asset in our toolbelt for automating business processes such as user on- and offboarding in our customers’ tenants. Intune APIs, combined with Azure Automation, help us keep inventory tidy, giving operations updated and relevant information.”

Intune APIs now join a growing family of other Microsoft cloud services that are accessible through Microsoft Graph, including Office 365 and Azure AD. This means that you can use Microsoft Graph to connect to data that drives productivity – mail, calendar, contacts, documents, directory, devices, and more. It serves as a single interface where Microsoft cloud services can be reached through a set of REST APIs.

The scenarios that Microsoft Graph enables are expansive. To give you a better idea of what is possible with Intune APIs in Microsoft Graph, let’s look at some of the core use cases that we have already seen being utilized by our partners and customers.

Automation

Microsoft Graph allows you to connect different Microsoft cloud services and automate workflows and processes between them. It is accessible through several platforms and tools, including REST-based API endpoints and most popular programming and automation platforms (.NET, JS, iOS, Android, PowerShell). Resources (user, group, device, application, file, etc.) and policies can be queried through this API, and formerly difficult or complex questions can be answered via straightforward queries.
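
To make that concrete, here is a minimal, illustrative Python sketch that lists Intune-managed devices through the Microsoft Graph managed-devices endpoint, the way an automation script might. The bearer token is a placeholder, and the Azure AD app registration and its read permission (DeviceManagementManagedDevices.Read.All) are assumed to exist already.

```python
# Minimal sketch: query Intune managed devices through Microsoft Graph.
# Assumes an Azure AD app registration with DeviceManagementManagedDevices.Read.All
# and a valid OAuth 2.0 access token obtained elsewhere (placeholder below).
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token-from-azure-ad>"  # placeholder, not a real token

def list_managed_devices():
    """Return the display name and OS of each Intune-managed device."""
    url = f"{GRAPH_BASE}/deviceManagement/managedDevices"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    devices = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        payload = resp.json()
        for device in payload.get("value", []):
            devices.append((device.get("deviceName"), device.get("operatingSystem")))
        url = payload.get("@odata.nextLink")  # follow paging if more results exist
    return devices

if __name__ == "__main__":
    for name, os_name in list_managed_devices():
        print(f"{name}: {os_name}")
```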

For example, one of our partners, PowerON Platforms (based in the UK), is using Intune APIs in Microsoft Graph to deliver solutions to its customers faster and more consistently. PowerON Platforms has created baseline deployment templates, based on unique customer types and requirements, that compress a deployment process that normally would take two to three days down to 15 seconds. Its ability to get customers up and running is now faster than ever before.

Steve Beaumont, Technical Director at PowerON Platforms: “PowerON has developed new and innovative methods to increase the speed of our Microsoft Intune delivery and achieve consistent outputs for customers. By leveraging the power of Microsoft Graph and new Intune capabilities, PowerON’s new tooling enhances the value of Intune.”

Integration

Intune APIs in Microsoft Graph can also provide detailed user, device, and application information to other IT asset management systems. You could build custom experiences which call Microsoft Graph to configure Intune controls and policies and unify workflows across multiple services.

For example, Kloud (based in Australia) leverages Microsoft Graph to integrate Intune device management and support activities into existing central management portals. This increases Kloud’s ability to centrally manage an integrated solution for their clients, making them much more effective as an integrated solution provider.

Tom Bromby, Managing Consultant at Kloud: “Microsoft Graph allows us to automate large, complex configuration tasks on the Intune platform, saving time and reducing the risk of human error. We can store our tenant configuration in source control, which greatly streamlines the change management process, and allows for easy audit and reporting of what is deployed in the environment, what devices are enrolled and what users are consuming the service.”

Analytics

Having the right data at your fingertips is a must for busy IT teams managing diverse mobile environments. You can access Intune APIs in Microsoft Graph with Power BI and other analytics services to create custom dashboards and reports based on Intune, Azure AD, and Office 365 data – allowing you to monitor your environment and view the status of devices and apps across several dimensions, including device compliance, device configuration, app inventory, and deployment status. With Intune Data Warehouse, you can now access historical data for up to 90 days.
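
As a rough illustration of that kind of reporting, the sketch below pulls each managed device’s complianceState from Microsoft Graph and tallies the results. It assumes the same placeholder token and read permission as the earlier example; in practice this data would more likely feed a Power BI dashboard or the Intune Data Warehouse.

```python
# Minimal sketch: summarize Intune device compliance via Microsoft Graph.
# Assumes the same placeholder bearer token and read permissions as the
# earlier example; output could feed a Power BI dataset or a simple report.
from collections import Counter
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token-from-azure-ad>"  # placeholder

def compliance_summary():
    """Count managed devices by their reported complianceState."""
    url = (f"{GRAPH_BASE}/deviceManagement/managedDevices"
           "?$select=deviceName,complianceState")
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    counts = Counter()
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        payload = resp.json()
        counts.update(d.get("complianceState", "unknown")
                      for d in payload.get("value", []))
        url = payload.get("@odata.nextLink")  # follow paging if present
    return counts

if __name__ == "__main__":
    for state, count in compliance_summary().items():
        print(f"{state}: {count}")
```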

For example, Netrix, LLC (based in the US) leverages Microsoft Graph to build automated solutions that improve end-user experiences and increase reporting accuracy for more effective device management. These investments increase the company’s efficiency and overall customer satisfaction.

Tom Lilly, Technical Team Lead at Netrix, LLC: “By using Intune APIs in Microsoft Graph, we’ve been able to provide greater insights and automation to our clients. We are able to surface the data they really care about and deliver it to the right people, while keeping administrative costs to a minimum. As an integrator, this also allows Netrix to provide repetitive, manageable solutions, while improving our time to delivery, helping get our customers piloted or deployed quicker.”

We are extremely excited to see how you will use these capabilities to improve your processes and workflows as well as to create custom solutions for your organization and customers. To get started, you can check out the documentation on how to use Intune and Azure Active Directory APIs in Microsoft Graph, watch our Microsoft Ignite presentation on this topic, and leverage sample PowerShell scripts.

Deployment note: Intune APIs in Microsoft Graph are being updated to their GA version today. The worldwide rollout should complete within the next few days.

Please note: Use of a Microsoft online service requires a valid license. Therefore, accessing EMS, Microsoft Intune, or Azure Active Directory Premium features via Microsoft Graph API requires paid licenses of the applicable service and compliance with Microsoft Graph API Terms of Use.


Curb stress from Exchange Server updates with these pointers

In my experience as a consultant, I find that few organizations have a reliable method to execute Exchange Server updates.

This tip outlines the proper procedures for patching Exchange that can prevent some of the upheaval associated with a disruption on the messaging platform.

How often should I patch Exchange?

In a perfect world, administrators would apply patches as soon as Microsoft releases them. This doesn’t happen for a number of reasons.

Microsoft has released patches and updates for both Exchange and Windows Server that caused trouble on those systems. Many IT departments have long memories, and they let those bad experiences keep them from staying current with Exchange Server updates. This is detrimental to the health of Exchange and should be avoided. With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Another wrinkle in the update process is Microsoft releases Cumulative Updates (CUs) for Exchange Server on a quarterly schedule. CUs are updates that feature functionality enhancements for the application.


Microsoft plans to release one CU for Exchange 2013 and 2016 each quarter, but they do not provide a set release date. The CUs may be released on the first day of one quarter, and then on the last day of the next.

Rollup Updates (RUs) for Exchange 2010 are also released quarterly. An RU is a package that contains multiple security fixes, while a CU is a complete server build.

For Exchange 2013 and 2016, Microsoft supports the current and previous CU. When admins call Microsoft for a support case, the company will ask them to update Exchange Server to at least the N-1 CU — where N is the latest CU and N-1 the previous one — before they begin work on the issue. An organization that prefers to stay on older CUs limits its support options.

Because CUs are the full build of Exchange 2013/2016, administrators can deploy a new Exchange server from the most recent CU. For existing Exchange servers, applying the newest CU for that version should work without issue.

Microsoft only tests a new CU deployment against the last two CUs, but I have never had an issue upgrading across multiple missed CUs. The only problems I have seen when a large number of CUs were skipped had to do with the prerequisites for Exchange, not Exchange itself.

Microsoft releases Windows Server patches on the second Tuesday of every month. As many administrators know, some of these updates can affect how Exchange operates. There is no set schedule for other updates, such as .NET. I recommend a quarterly update schedule for Exchange.

How can I curb issues from Exchange Server updates?

As every IT department is different, so is every Exchange deployment. There is no single update process that works for every organization, but these guidelines can reduce problems with Exchange Server patching. Even if the company has an established patching process, it is worth reviewing that process if it is missing some of the advice outlined below.

  • Back up Exchange servers before applying patches. This might be common sense for most administrators, but I have found it is often overlooked. If a patch causes a critical failure, a recent backup is the key to the recovery effort. Some might argue that there are Exchange configurations — such as Exchange Preferred Architecture — that do not require this, but a backup provides some reassurance if a patch breaks the system.
  • Measure the performance baseline before an update. How would you know if the CPU cycles on the Exchange Server are too high after an update if this metric hasn’t been tracked? The Managed Availability feature records performance data by default on Exchange 2013 and 2016 servers, but Exchange administrators should review server performance regularly to establish an understanding of normal server behavior.
  • Test patches in a lab that resembles production. When a new Exchange CU arrives, it has been through extensive testing. Microsoft deploys updates to Office 365 long before they are publicly available. After that, Microsoft gives the CUs to its MVP community and select organizations in its testing programs. This vetting process helps catch the vast majority of bugs before CUs go to the public, but some will slip through. To be safe, test patches in a lab that closely mirrors the production environment, with the same servers, firmware and network configuration.
  • Put Exchange Server into maintenance mode before patching. If the Exchange deployment consists of redundant servers, then put them in maintenance mode before the update process. Maintenance mode is a feature of Managed Availability that turns off monitoring on those servers during the patching window. There are a number of PowerShell scripts in the TechNet Gallery that help put servers into maintenance mode, which helps administrators streamline the application of Exchange Server updates.

Upskilling: Digital transformation is economic transformation for all

“So, we are working with a number of non-government organizations to help build the skills of these youth, to build their digital skills. We know that today, about 50 percent of jobs require technology skills. Within the next three years, that’s going to jump to more than 75 percent. So we’re working to help build the skills that employers need to expand their companies.”

In Sri Lanka, decades of civil strife have now given way to sustained economic growth. In this period of calm and reconstruction, 24-year-old Prabhath Mannapperuma leads a team of techie volunteers teaching digital skills to rural and underprivileged children. They use the micro:bit, a tiny programmable device that makes coding fun. “Using a keyboard to type in code is not interesting for kids,” says Mannapperuma, an IT professional and tech-evangelist, who is determined to inspire a new generation.

Upskilling is also a priority in one of Asia’s poorest countries, Nepal. Here, Microsoft has launched an ambitious digital literacy training program that is transforming lives. Santosh Thapa lost his home and livelihood in a massive 2015 earthquake and struggled in its aftermath to start again in business. Things turned around after he graduated from a Microsoft-sponsored course that taught him some digital basics, which he now uses daily to serve his customers and stay ahead of his competitors.

Often women are the most disadvantaged in the skills race. For instance in Myanmar, only 35 percent of the workforce is female. Without wide educational opportunities, most women have been relegated to the home or the farm. But times are changing as the economy opens up after decades of isolation. “Women in Myanmar are at risk of not gaining the skills for the jobs of the future, and so we are helping to develop the skills of young women, and that’s been an exciting effort,” says Michelle.

The team at Microsoft Philanthropies has partnered with the Myanmar Book Aid and Preservation Foundation in its Tech Age Girls program. It identifies promising female leaders between the ages of 14 and 18 and provides them with essential leadership and computer science skills to be future-ready for the jobs of the 4th Industrial Revolution.

The goal is to create a network of 100 young women leaders in at least five locations throughout Myanmar with advanced capacity in high-demand technology skills. One of these future leaders is Thuza who is determined to forge a career in the digital space. “There seems to be a life path that girls are traditionally expected to take,” she says. “But I’m a Tech Age Girl and I’m on a different path.”

In Bangladesh, Microsoft and the national government have come together to teach thousands of women hardware and software skills. Many are now working at more than 5,000 state-run digital centers that encourage ordinary people to take up technology for business, work and studies. It is also hoped that many of the women graduates of the training program will become digital entrepreneurs themselves.


These examples are undeniably encouraging and even inspirational. But Michelle is clear on one point: Digital skills development is more than just meaningful and impactful philanthropic work. It is also a hard-headed, long-term business strategy and a big investment in human capital.

Microsoft wants its customers to grow and push ahead with digital transformation, she says. And to do that, they need digitally skilled workers. “This is about empowering individuals, and also about enabling businesses to fill gaps in order for them to actually compete globally … by developing the skills, we can develop the economy.”

Citrix enables VM live migration for Nvidia vGPUs

Live migration for virtual GPUs has arrived, and the technology will help organizations more easily distribute resources and improve performance for virtual desktop users.

As more applications become graphics-rich, VDI shops need better ways to support and manage virtualized GPUs (vGPUs). Citrix and Nvidia said this month they will support live migration of GPU-accelerated VMs, allowing administrators to move VMs between physical servers with no downtime. VMware has also demonstrated similar capabilities but has not yet brought them to market.

“The first time we all saw vMotion of a normal VM, we were all amazed,” said Rob Beekmans, an end-user computing consultant in the Netherlands. “So it’s the same thing. It’s amazing that this is possible.”

How vGPU VM live migration works

Live migration, the ability to move a VM from one host to another while the VM is still running, has been around for years. But it was not possible to live migrate a VM that included GPU acceleration technology such as Nvidia’s Grid. VMware’s VM live migration tool, vMotion, and Citrix’s counterpart, XenMotion, did not allow migration of VMs that had direct access to a physical hardware component. Complicating matters was the fact that live migration must replicate the GPU on one server to another server, and essentially map its processes one to one. That’s difficult because a GPU is such a dense processor, said Anne Hecht, senior director of product marketing for vGPU at Nvidia.


XenMotion is now capable of live migrating a GPU-enabled VM on XenServer. Using the Citrix Director management console, administrators can monitor and migrate these VMs. They simply select the VM and from a drop-down menu choose the host they want to move it to. This migration process takes a few seconds, according to a demo Nvidia showed at Citrix Synergy 2017. XenMotion with vGPUs is available now as a tech preview for select customers, and Nvidia did not disclose a planned date for general availability.

This ability to redistribute VMs without having to shut them down brings several benefits. It could be useful for a single project, such as a designer working on a task that needs a lot of GPU resources for a few months, or for adding more virtual desktop users overall. If a user suddenly needs more GPU power, IT can migrate his or her desktop VM to a different server that has more GPU resources available. IT may also use live migration on a regular basis to rebalance processing across servers as users go through peaks and valleys of GPU needs.

Most important to users themselves, VM live migration means that there is no downtime on their virtual desktop during maintenance or when IT has to move a machine.

“The amount of time needed to save and close down a project can number in the tens of minutes in complex cases, and that makes for a lot of lost production time,” said Tobias Kreidl, desktop computing team lead at Northern Arizona University, who manages around 500 Citrix virtual desktops and applications. “Having this option is in bigger operations a huge plus. Even in a smaller shop, not having to deal with downtime is always a good thing as many maintenance processes require reboots.”

VMware vs. Citrix

The new Citrix capability only supports VM live migration between servers that have Nvidia GPU cards of the same type. Nvidia offers a variety of Grid options, which differ in the amounts of memory they include, how many GPUs they support and other aspects. So, XenMotion live migration can only happen from one Tesla M10 to another Tesla M10 card, for example, Hecht said.

At VMworld 2017, VMware demoed a similar process for Nvidia vGPUs with vMotion. This capability was not in beta or tech preview at the time, however, and still isn’t. Plus, the VMware capability works a little differently from Citrix’s. With VMware Horizon, IT cannot migrate without downtime; instead, a process called Suspend and Resume allows a GPU-enabled VM to hibernate, move to another host, then restart from its last running state. Users experience desktop downtime, but when it restarts it automatically logs in and runs with all of the last existing data saved.


Nvidia is working with VMware to develop and release an official tech preview of this Suspend and Resume capability for vGPU migration and hopes to develop a fully live scenario for Horizon in the future as well, Hecht said.

“VMware will catch up, but I think it gives Citrix an early mover advantage,” said Zeus Kerravala, founder and principal analyst of ZK Research. “This might wake VMware up a little bit and be more aggressive with a lot of these emerging technologies.”

Who needs GPU acceleration?

Virtualized GPUs are becoming more necessary for VDI shops as more applications require intensive graphics and multimedia processing. Applications that use video, augmented or virtual reality and even many Windows 10 apps use more CPU than ever before — and vGPUs can help offload that.

“The GPU isn’t just for gamers anymore,” Kerravala said. “It is becoming more mainstream, and the more you have Microsoft and IBM and the big mainstream IT vendors doing this, it will help accelerate [GPU acceleration] adoption. It becomes a really important part of any data center strategy.”

At the same time, not every user needs GPU acceleration. Beekmans’ clients sometimes think they need vGPUs, when actually the CPU will provide good enough processing for the application in question, he said. And vGPU technology isn’t cheap, so organizations must weigh the cost versus benefits of adopting it.

“I don’t think everybody needs a GPU,” Beekmans said. “It’s hype. You have to look at the cost.”

More competition in the GPU acceleration market — which Nvidia currently dominates in terms of virtual desktop GPU cards — would help bring costs down and increase innovation, Beekmans said.

Still, the market is here to stay as more apps begin to require more GPU power, he added.

“If you work with anything that uses compute resources, you need to keep an eye on the world of GPUs, because it’s coming and it’s coming fast,” Kerravala agreed.

Microsoft to acquire Avere Systems, accelerating high-performance computing innovation for media and entertainment industry and beyond

The cloud is providing the foundation for the digital economy, changing how organizations produce, market and monetize their products and services. Whether it’s building animations and special effects for the next blockbuster movie or discovering new treatments for life-threatening diseases, the need for high-performance storage and the flexibility to store and process data where it makes the most sense for the business is critically important.

Over the years, Microsoft has made a number of investments to provide our customers with the most flexible, secure and scalable storage solutions in the marketplace. Today, I am pleased to share that Microsoft has signed an agreement to acquire Avere Systems, a leading provider of high-performance NFS and SMB file-based storage for Linux and Windows clients running in cloud, hybrid and on-premises environments.

Avere uses an innovative combination of file system and caching technologies to support the performance requirements for customers who run large-scale compute workloads. In the media and entertainment industry, Avere has worked with global brands including Sony Pictures Imageworks, animation studio Illumination Mac Guff and Moving Picture Company (MPC) to decrease production time and lower costs in a world where innovation and time to market is more critical than ever.

High-performance computing needs, however, do not stop there. Customers in life sciences, education, oil and gas, financial services, manufacturing and more are increasingly looking for these types of solutions to help transform their businesses. The Library of Congress, Johns Hopkins University and Teradyne, a developer and supplier of automatic test equipment for the semiconductor industry, are great examples where Avere has helped scale datacenter performance and capacity, and optimize infrastructure placement.

By bringing together Avere’s storage expertise with the power of Microsoft’s cloud, customers will benefit from industry-leading innovations that enable the largest, most complex high-performance workloads to run in Microsoft Azure. We are excited to welcome Avere to Microsoft, and look forward to the impact their technology and the team will have on Azure and the customer experience.

You can also read a blog post from Ronald Bianchini Jr., president and CEO of Avere Systems, here.


Five questions to ask before purchasing NAC products

As network borders become increasingly difficult to define, and as pressure mounts on organizations to allow many different devices to connect to the corporate network, network access control is seeing a significant resurgence in deployment.

Often positioned as a security tool for the bring your own device (BYOD) and internet of things (IoT) era, network access control (NAC) is also increasingly becoming a very useful tool in network management, acting as a gatekeeper to the network. It has moved away from being a system that blocks all access unless a device is recognized, and is now more permissive, allowing for fine-grained control over what access is permitted based on policies defined by the organization. By supporting wired, wireless and remote connections, NAC can play a valuable role in securing all of these connections.

Once an organization has determined that NAC will be useful to its security profile, it’s time for it to consider the different purchasing criteria for choosing the right NAC product for its environment. NAC vendors provide a dizzying array of information, and it can be difficult to differentiate between their products.

When you’re ready to buy NAC products and begin researching your options — and especially when speaking to vendors to determine the best choice for your organization — consider the questions and features outlined in this article.

NAC device coverage: Agent or agentless?

NAC products should support all devices that may connect to an organization’s network. This includes many different configurations of PCs, Macs, Linux devices, smartphones, tablets and IoT-enabled devices. This is especially true in a BYOD environment.

NAC agents are small pieces of software installed on a device that provide detailed information about the device — such as its hardware configuration, installed software, running services, antivirus versions and connected peripherals. Some can even monitor keystrokes and internet history, though that presents privacy concerns. NAC agents can either run scans as a one-off — dissolvable — or periodically via a persistently installed agent.
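
As a rough, vendor-neutral sketch of the kind of inventory an agent might collect, the following Python example gathers basic hostname, OS, CPU, memory and running-process details using the standard library and the psutil package. Antivirus versions and connected peripherals would require platform-specific additions and are left out here; this is not modeled on any particular vendor’s agent.

```python
# Illustrative sketch only: collect the sort of device details a NAC agent
# might report. Uses platform/socket from the standard library and psutil
# (pip install psutil); not modeled on any particular vendor's agent.
import platform
import socket
import psutil

def collect_inventory():
    """Gather basic hardware, OS and process information for this device."""
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "cpu_count": psutil.cpu_count(logical=True),
        "memory_total_mb": psutil.virtual_memory().total // (1024 * 1024),
        "running_processes": sorted(
            {p.info["name"] for p in psutil.process_iter(["name"]) if p.info["name"]}
        ),
    }

if __name__ == "__main__":
    for key, value in collect_inventory().items():
        print(key, ":", value)
```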

If the NAC product uses agents, it’s important that they support the widest variety of devices possible, and that other devices can use agentless NAC if required. In many cases, devices will require the NAC product to support agentless implementation to detect BYOD and IoT-enabled devices and devices that can’t support NAC agents, such as printers and closed-circuit television equipment. Agentless NAC allows a device to be scanned by the network access controller and be given the correct designation based on the class of the device. This is achieved with aggressive port scans and operating system version detection.
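
For comparison, the toy example below shows the basic idea behind agentless probing: a simple TCP sweep of a handful of well-known ports using only Python’s socket module, with the target address as a placeholder. Real NAC products pair much more aggressive scanning with OS fingerprinting and classification logic, so treat this as a sketch of the concept rather than a production scanner.

```python
# Toy illustration of agentless-style probing: check which common TCP ports
# answer on a target host. Real NAC products combine this with OS
# fingerprinting and vendor-specific classification logic.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 445: "smb", 3389: "rdp", 9100: "printer"}

def probe_host(host, timeout=0.5):
    """Return a dict of open ports and their likely service names."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports[port] = service
    return open_ports

if __name__ == "__main__":
    # Placeholder address: a printer-like device typically exposes 9100 and little else.
    print(probe_host("192.168.1.50"))
```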

Agentless NAC is a key component in a BYOD environment, and most organizations should look at it as a must-have when buying NAC products. Gathering information via an agent will, of course, provide more detail on a device, but an agent-only approach isn’t viable on a modern network that needs to support many different devices.

Does the NAC product integrate with existing software and authentication?

This is a key consideration before you buy an NAC product, as it is important to ensure it supports the type of authentication that best integrates with your organization’s network. The best NAC products should offer a variety of choices: 802.1x — through the use of a RADIUS server — Active Directory, LDAP or Oracle. NAC will also need to integrate with the way an organization uses the network. If the staff uses a specific VPN product to connect remotely, for example, it is important to ensure the NAC system can integrate with it.

Supporting many different security systems that do not integrate with one another can cause significant overhead. A differentiator between the different NAC products is not only what type of products they integrate with, but also how many systems exist within each category.

Consider the following products that an organization may want to integrate with, and be sure that your chosen NAC product supports the products already in place:

1. Security information and event management

2. Vulnerability assessment

3. Advanced threat detection

4. Mobile device management

5. Next-generation firewalls

Does the NAC product aid in regulatory compliance?

NAC can help achieve compliance with many different regulations and standards, such as the Payment Card Industry Data Security Standard, HIPAA, International Organization for Standardization 27002 — ISO 27002 — and National Institute of Standards and Technology guidelines. Each of these stipulates certain controls regarding network access that should be implemented, especially around BYOD, IoT and rogue devices connecting to the network.

By continually monitoring network connections and performing actions based on the policies set by an organization, NAC can help with compliance with many of these regulations. These policies can, in many cases, be configured to match those of the compliance regulations mentioned above. So, when buying NAC products, be sure to have compliance in mind and to select a vendor that can aid in this process — be it through specific knowledge in its support team or through predefined policies that can be tweaked to provide the compliance required for your individual business.

What is the true cost of buying an NAC product?

The price of NAC products can be the most significant consideration, depending on the budget you have available for procurement. Most NAC products are charged per endpoint (device) connected to the network. On a large network, this can quickly become a substantial cost. There are often also hidden costs with NAC products that must be considered when assessing your purchase criteria.

Consider the following costs before you buy an NAC product:


1. Add-on modules. Does the basic price give organizations all the information and control they need? NAC products often have hidden costs, in that the basic package does not provide all the functionality required. The additional cost of add-on modules can run into tens of thousands of dollars on a large network. Be sure to look at what the basic NAC package includes and investigate how the organization will be using the NAC system. Specific integrations may be an additional cost. Is there extra functionality that will be required in the NAC product to provide all the benefits required?

2. Upfront costs. Are there any installation charges or initial training costs? Be sure to factor these into the calculation, on top of the per-endpoint price.

3. Support costs. What level of support does the organization require? Does it need one-off or regular training, or does it require 24/7 technical support? This can add significantly to the cost of NAC products.

4. Staff time. While not a direct cost of buying NAC products, consider how much monitoring an NAC system requires. Time will need to be set aside not only to learn the NAC system, but also to manage it on an ongoing basis and respond to alerts. Even the best NAC systems require trained staff so that, if problems occur, there are people available to address the issues.

NAC product support: What’s included?

Support from the NAC manufacturer is an important consideration from the perspective of the success of the rollout and assessing the cost. Some of the questions that should be asked are:

  1. What does the basic support package include?
  2. What is the cost of extended support?
  3. Is support available at all times?
  4. Does the vendor have a significant presence in the organization’s region? For example, some NAC providers are primarily U.S.-based, and if an organization is based in EMEA, the vendor may not provide the same level of support there.
  5. Is on-site training available and included in the license?

Support costs can significantly drive up the cost of deployment and should be assessed early in the procurement process.

What to know before you buy an NAC system

When it comes to purchasing criteria for network access control products, it is important that not only is an NAC system capable of detecting all the devices connected to an organization’s network, but that it integrates as seamlessly as possible. The cost of attempting to shoehorn existing processes and systems into an NAC product that does not offer integration can quickly skyrocket, even if the initial cost is on the cheaper side.

NAC should also work for the business, not against it. In the days when NAC products only supported 802.1x authentication and blocked everything by default, it was seen as an annoyance that stopped legitimate network authentication requests. But, nowadays, a good NAC system provides seamless connections for employees, third parties and contractors alike — and to the correct area of the network to which they have access. It should also aid in regulatory compliance, an issue all organizations need to deal with now.

Assessing NAC products comes down to the key questions highlighted above. They are designed to help organizations determine what type of NAC product is right for them, and accordingly aid them in narrowing their choices down to the vendor that provides the product that most closely matches those criteria.

Once seldom used by organizations, endpoint protection is now a key part of IT security, and NAC products have a significant part to play in that. From a hacker’s perspective, well-implemented and managed NAC products can mean the difference between a full network attack and total attack failure.