
Mature DevSecOps orgs refine developer security skills training

BOSTON — IT organizations that plan to tackle developer security skills as part of a DevSecOps shift have started to introduce tools and techniques that can help.

Many organizations have moved past early DevSecOps phases such as a 'seat at the table' for security experts during application design meetings and locked-down CI/CD and container environments. At DevSecCon 2018 here this week, IT pros revealed they've begun in earnest to 'shift security left' and teach developers how to write more secure application code from the beginning.

“We’ve been successful with what I’d call SecOps, and now we’re working on DevSec,” said Marnie Wilking, global CISO at Orion Health, a healthcare software company based in Boston, during a Q&A after her DevSecCon presentation. “We’ve just hired an application security expert, and we’re working toward overall information assurance by design.”

Security champions and fast feedback shift developer mindset

Orion Health's plan to bring an application security expert, or security champion, into its DevOps team reflects a model followed by IT security software companies, such as CA Veracode. Security champions bridge the gap between IT security and developer teams, acting as liaisons so the two groups spend less time in negotiations.

“The security champions model is similar to having an SRE team for ops, where application security experts play a consultative role for both the security and the application development team,” said Chris Wysopal, CTO at CA Veracode in Burlington, Mass., in a presentation. “They can determine when new application backlog items need threat modeling or secure code review from the security team.”

However, no mature DevSecOps process allows time for consultation before every change to application code. Developers must hone their security skills to reduce vulnerable code without input from security experts to maintain app delivery velocity.

The good news is that developer security skills often emerge organically in CI/CD environments, provided IT ops and security pros build vulnerability checks into DevOps pipelines in the early phases of DevSecOps.

Marnie Wilking, global CISO at Orion Health, presents at DevSecCon.

"If you're seeing builds fail day after day [because of security flaws], and it stops you from doing what you want to get done, you're going to stop [writing insecure code]," said Julie Chickillo, VP of information security, risk and compliance at Beeline, a company headquartered in Jacksonville, Fla., which sells workforce management and vendor management software.

Beeline built security checks into its CI/CD pipeline that use SonarQube, which blocks application builds if it finds blocker, critical or major application security vulnerabilities in the code, and immediately sends that feedback to developers. Beeline also uses interactive code scanning tools from Contrast Security as part of its DevOps application delivery process.
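A build-breaking check of this kind typically queries SonarQube's quality gate once analysis completes. The following PowerShell sketch shows one way a pipeline step might do that; the server URL, token variable and project key are placeholders rather than details of Beeline's actual setup.

# Minimal sketch: query the SonarQube quality gate for a project and fail the
# build when the gate is not green. $env:SONAR_HOST, $env:SONAR_TOKEN and the
# project key are hypothetical values a pipeline would supply.
$projectKey = "example-service"
$authBytes  = [System.Text.Encoding]::ASCII.GetBytes("$($env:SONAR_TOKEN):")
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String($authBytes) }

$uri    = "$($env:SONAR_HOST)/api/qualitygates/project_status?projectKey=$projectKey"
$status = (Invoke-RestMethod -Uri $uri -Headers $headers -Method Get).projectStatus.status

if ($status -ne "OK") {
    Write-Error "SonarQube quality gate returned '$status' -- failing the build."
    exit 1   # non-zero exit breaks the pipeline and feeds the result back to the developer
}
Write-Output "Quality gate passed."

In a real pipeline this step would run after the SonarQube scanner has finished its background analysis.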

“It’s all about giving developers constant feedback, and putting information in their hands that helps them make better decisions,” Chickillo said.

Developer security training tools emerge

Application code scans and continuous integration tests only go so far to make applications secure by design. DevSecOps organizations will also use updated tools to further developer security skills training.

Sooner or later, companies put security scanning tools in place, then realize they’re not enough, because people don’t understand the output of those tools.
Mark Felegyhazi, CEO, Avatao.com Innovative Learning Ltd

“Sooner or later, companies put security scanning tools in place, then realize they’re not enough, because people don’t understand the output of those tools,” said Mark Felegyhazi, CEO of Avatao.com Innovative Learning Ltd, a startup in Hungary that sells developer security skills training software. Avatao competitors in this emerging field include Secure Code Warrior, which offers gamelike interfaces that train developers in secure application design. Avatao also offers a hands-on gamification approach, but its tools also cover threat modeling, which Secure Code Warrior doesn’t address, Felegyhazi said.

Firms also will look to internal and external training resources to build developer security skills. Beeline has sent developers to off-site security training, and plans to set up a sandbox environment for developers to practice penetration testing on their own code, so they better understand the mindset of attackers and how to head them off, Chickillo said.

Higher education must take a similar hands-on approach to bridge the developer security skills gap for graduates as they enter the workforce, said Gabor Pek, CTO at Avatao, in a DevSecCon presentation about security in computer science curricula.

“Universities don’t have security champion programs,” Pek said. “Most of their instruction is designed for a large number of students in a one-size-fits-all format, with few practical, hands-on exercises.”

In addition to his work with Avatao, Pek helped create a bootcamp for student leaders of capture-the-flag teams that competed at the DEFCON conference in 2015. Capture-the-flag exercises offer a good template for the kinds of hands-on learning universities should embrace, he said, since they are accessible to beginners but also challenge experts.

Transforming IT infrastructure and operations to drive digital business

It’s time for organizations to modernize their IT infrastructure and operations to not just support, but to drive digital business, according to Gregory Murray, research director at Gartner.

But to complete that transformation, organizations need to first understand their desired future state, he added.

“The future state for the vast majority of organizations is going to be a blend of cloud, on prem and off prem,” Murray told the audience at the recent Gartner Catalyst conference. “What’s driving this is the opposing forces of speed and control.”

From 2016 to 2024, the percentage of new workloads that will be deployed through on-premises data centers is going to plummet from about 80% to less than 20%, Gartner predicts. During the same period, cloud adoption will explode — going from less than 10% to as much as 45% — with off-premises, colocation and managed hosting facilities also picking up more workloads.

IT infrastructure needs to provide capabilities across these platforms, and operations must tackle the management challenges that come with it, Murray said.

How to transform IT infrastructure and operations

Once organizations have defined their future state — and Murray urged organizations to start with developing a public cloud strategy to determine which applications will be in the cloud — they should begin modernizing their infrastructure, he told the audience at the Gartner Catalyst conference. 

“Programmatic control is the key to enabling automation and automation is, of course, critical to addressing the disparity between the speed that we can deliver and execute in cloud, and improving our speed of execution on prem,” he said. 

Organizations will also need developers with the skills to take advantage of that programmatic control, he said. Another piece of the automation equation when modernizing the infrastructure to gain speed is standardization.

The future state for the vast majority of organizations is going to be a blend of cloud, on prem and off prem.
Gregory Murray, research director, Gartner

“We need to standardize around those programmatic building blocks, either by using individual components of software-defined networking, software-defined compute and software-defined storage, or by using a hyper-converged system.”

Hyper-converged infrastructure reduces the complexity associated with establishing programmatic control and helps create a unified API for infrastructure, he said.

Organizations also need to consider how to uplevel their standardization, according to Murray. This is where containers come into play. The container becomes the atomic unit of deployment for an application, abstracting away many of the dependencies and complications that come with moving an application independently of its operating system, he explained.

“And if we can do that, now I have a construct that I can standardize around and deploy into cloud, into on prem, into off prem and give it straight to my developers and give them the ability to move quickly and deploy their applications,” he said.

Hybrid is the new normal

To embrace this hybrid environment, Murray said organizations should establish a fundamental substrate to unify these environments.

“The two pieces that are so fundamental that they precede any sort of hybrid integration is the concept of networks — specifically your WAN and WAN strategy across your providers — and identity,” Murray said. “If I don’t have fundamental identity constructs, governance will be impossible.”

Organizations looking to modernize their network for hybrid capabilities should look to SD-WAN, Murray said. SD-WAN provides software-defined control that extends outside the data center and allows a programmatic, automated approach to WAN connectivity that helps keep the hybrid environment working together, he explained.

But to get that framework of governance in place across this hybrid environment requires a layered approach, Murray said. “It’s a combination of establishing principles, publishing the policies and using programmatic controls to bring as much cloud governance as we can.”

Murray also hinted that embracing DevOps is the first step in “a series of cultural changes” that organizations are going to need to truly modernize IT infrastructure and operations. For those who aren’t operating at agile speed, operations still needs to get out of the business of managing tickets and delivering resources and get to a self-service environment where operations and IT are involved in brokering the services, he added.

Organizations also need a monitoring framework in place to gain visibility across the environment. Embracing AIOps — which uses big data, data analytics and machine learning — can help organizations become more predictive and more proactive with their operations, he added.

Gartner Catalyst 2018: A future without data centers?

SAN DIEGO — Can other organizations do what Netflix has done — run a business without a data center? That’s the question that was posed by Gartner Inc. research vice president Douglas Toombs at the Gartner Catalyst 2018 conference.

While most organizations won’t run 100% of their IT in the cloud, the reality is that many workloads can be moved, Toombs told the audience.

“Your future IT is actually going to be spread across a number of different execution venues, and at each one of these venues you’re trading off control and choice, but you get the benefits of not having to deal with the lower layers,” he said.

Figure out the why, how much and when

When deciding why they are moving to the cloud, the “CEO drive-by strategy” — where the CEO swings in and says, “We need to move a bunch of stuff in the cloud, go make it happen,” — shouldn’t be the starting point, Toombs said.

“In terms of setting out your overall organizational priorities, what we want to do is get away from having just that as the basis and we want to try to think of … the real reasons why,” Toombs said.

Increasing business agility and accessing new technologies should be some of the top reasons why businesses would want to move their applications to the cloud, Toombs said. Once they have a sense of “why,” the next thing is figuring out “how much” of their applications will make the move. For most mainstream enterprises, the sweet spot seems to be somewhere between 40% and 80% of their overall applications, he said.

Businesses then need to figure out the timeframe to make this happen. Those trying to move 50% or 60% of their apps usually give themselves about three years to try and accomplish that goal, he said. If they’re more aggressive — with a target of 80% — they will need a five-year horizon, he said.

Whatever metric you pick, you want to track this very publicly over time within your organization.
Douglas Toombs, research vice president, Gartner

“We need to get everyone in the organization with a really important job title — could be the top-level titles like CIO, CFO, COO — also in agreement and nodding along with us, and what we suggest for this is actually codifying this into a cloud strategy document,” Toombs told the audience at Gartner Catalyst 2018.

Dissecting application risk

Once organizations have outlined their general strategy, Toombs suggested they incorporate the CIA triad of confidentiality, integrity and availability for risk analysis purposes.

These three core pillars are essential to consider when moving an app to the cloud so the organization can determine potential risk factors.

“You can take these principles and start to think of them in terms of impact levels for an application,” he said. “As we look at an app and consider a potential new execution venue for it, how do we feel about the risk for confidentiality, integrity and availability — is this kind of low, or no risk, or is it really severe?”

Assessing probable execution venues

Organizations need to think very carefully about where their applications go if they exit their data centers, Toombs said. He suggested they assess their applications one by one, moving them to other execution venues when they are capable of the move and it won't increase overall risk.

“We actually recommend starting with the app tier where you would have to give up the most control and look in the SaaS market,” he said. They can then look at PaaS, and if they have exhausted the PaaS options in the market, they can start to look at IaaS, he said.

However, if they have found an app that probably shouldn’t go to a cloud service but they still want to get to no data centers, organizations could talk to hosting providers that are out there — they’re happy to sell them hardware on a three-year contract and charge monthly for it — or go to a colocation provider. Even if they have put 30% of their apps in a colocation environment, they are not running data center space anymore, he said.

But if for some reason they have found an app that can’t be moved to any one of these execution venues, then they have absolutely justified and documented an app that now needs to stay on premises, he said. “It’s actually very freeing to have a no-go pile and say, ‘You know what, we just don’t think this can go or we just don’t think this is the right time for it, we will come back in three years and look at it again.'”

Kilowatts as a progress metric

While some organizations say they are going to move a certain percentage of their apps to the cloud, others measure in terms of number of racks or number of data centers or square feet of data center, he said.

Toombs suggested using kilowatts of data center processing power as a progress metric. “It is a really interesting metric because it abstracts away the complexities in the technology,” he said.

It also:

  • accounts for other overhead factors such as cooling;
  • easily shows progress with first migration;
  • should be auditable against a utility bill; and
  • works well with kilowatt-denominated colocation contracts.

“But whatever metric you pick, you want to track this very publicly over time within your organization,” he reminded the audience at the Gartner Catalyst 2018 conference. “It is going to give you a bit of a morale boost to go through your 5%, 10%, 15%, and say ‘Hey, we’re getting down the road here.'”

Focus, scope and spotting opportunity are key to role of CDO

CAMBRIDGE, Mass. — In the age of big data, the opportunities to change organizations by using data are many. For a newly minted chief data officer, the opportunities may actually be too vast, making focus the most essential element in the role of CDO.

“It’s about scope,” said Charles Thomas, chief data and analytics officer at General Motors. “You struggle if you do too many things.”

As chief data officer at auto giant GM, Thomas is focusing on opportunities to repackage and monetize data. He called it “whale hunting,” meaning he is looking for the biggest opportunities.

Thomas spoke as part of a panel on the role of CDO this week at the MIT Chief Data Officer and Information Quality Symposium.

At GM, he said, the emphasis is on tapping the trove of vehicle data available from today's highly digitized, instrumented and connected cars. Thomas said he sees monetary opportunities in which GM can "anonymize data and sell it."

The role of CDO is important, if not critical, Thomas emphasized in an interview at the event.

The nurturing CDO

“Companies generate more data than they use, so someone has to approach it from an innovative perspective — not just for internal innovation, but also to be externally driving new income,” he said. “Someone has to [be] accountable for that. It has to be their only job.”

“A lot of industries are interested in how people move around cities. It’s an opportunity to sell [data] to B2B clients,” Thomas added.

Focus is also important in Christina Clark’s view of the role of CDO. But nurturing data capabilities across the organization is the initial prime area for attention, said Clark, who is CDO at industrial conglomerate General Electric’s GE Power subsidiary and was also on hand as an MIT symposium panelist.

Every company should get good at aggregating, analyzing and monetizing data, Clark said.

“You then look at where you want to focus,” she said. The role of CDO, she added, is likely to evolve according to the data maturity of any given organization.

Focusing on the data areas where an organization needs rounding out was also important to symposium panelist Jeff McMillan, chief analytics and data officer at Morgan Stanley's wealth management unit, based in New York.

As the role of CDO changes, it's becoming more strategic.

It’s about the analytics

“Organizations say, ‘We need a CDO,’ and then bring them in, but they don’t provide the resources they need to be successful,” he said. “A lot of people define the CDO role before they define the problem.”

It’s unwise to suggest a CDO can fix all the data problems of an organization, McMillan said. The way to succeed with data is to drive an understanding of data’s value as deeply into the organization as possible.

“That is really hard, by the way,” he added. At Morgan Stanley, McMillan said, his focus in the role of chief data officer has been around enabling wider use of analytics in advising clients on portfolio moves.

All things data and CDO

Tom Davenport, Babson College

Since the role of CDO widely materialized in the aftermath of the 2008 financial crisis, it has largely lacked a consensus definition.

Compliance and regulation tasks have often blended into a broad job description that has come to include big data innovation initiatives. But individual executives' refinements to chief data officer approaches may be the next step for the role of CDO, longtime industry observer and Babson College business professor Tom Davenport said in an interview.

“Having someone responsible for all things data is not a workable task. So, you really need to focus,” Davenport said. “If you want to focus on monetization, that’s fine. If you want to focus on internal enablement or analytics, that’s fine.”

The advice to the would-be CDO is not unlike that for most any other position. “What you do must be focused; you can’t be all things to all people,” Davenport said.

IAM engineer roles require training and flexibility

BOSTON — As identity and access management become more critical to security strategies, organizations must be on the lookout for good identity engineers — and there are a few different ways IT can approach this staffing.

Identity and access management (IAM) is increasingly essential as mobile devices add new access points for employees and fresh ways to leak corporate data. But the job market still lacks skilled IAM engineer candidates, so organizations may be better off training existing IT staff or hiring general security engineers and educating them on IAM, experts said here at this week's Identiverse conference.

“Focus on general IT skills and roles [when you] hire engineers,” said Olaf Grewe, director of access certification services at Deutsche Bank, in a session. “Don’t wait for this elusive candidate that has all of this baked in. Bring them up to where you need to be.”

IAM job market landscape

Job growth in IAM has surged in the past year, with about 1,500 IAM engineer openings currently in the Boston area, 4,800 in the D.C. area and 3,320 in Silicon Valley, according to a presentation by Dave Shields, a senior security architect for IAM at DST Systems, a financial technology company in Kansas City.

“It is finally reaching a state where people see that it’s a viable place to have [a career],” said Shields, who was also recently the managing director of IT and ran IAM at the University of Oklahoma. “There are so many things you can do with it.”

There aren't enough people already skilled in IAM to fill these roles, however, and those who are may not live nearby. Instead, IT departments can train up existing staff on IAM — but the key is to choose the right people.

“The best engineers you’re going to find are the people who aren’t afraid to break stuff,” Shields said. “Maybe you have a sysadmin who gets into systems and was able to make them do things they were never able to do before. Talk to that person.”

The person should also be flexible, adaptable to change and willing to ask questions others don’t want to hear, he said. Other desirable qualities for an IAM engineer are creativity and an ability to understand the business’ functions and the technology in use.

“Find someone who can look at something and say, ‘I can make that better,'” Shields said. “There are some things that simply cannot be taught.”

IAM and security go hand in hand

Deutsche Bank is currently building up an IAM team that includes existing IT staff and external hires, which the company then trains on IAM skills. That involves four major steps: baseline IAM training, then vendor-specific education, then CISSP, followed by continuous learning over time via conferences, lunch and learns, and updated vendor training.

We need to make sure people have access to the right resources.
Olaf Grewe, director of access certification services, Deutsche Bank

“We need to make sure people have access to the right resources,” Grewe said. “We want to have people who are continuously developing.”

General security skills are especially important for IAM engineer candidates, experts said. Sarah Squire, a senior technical architect at Ping Identity, started out by learning the important security specs and standards as a way toward training up on identity management.

“It’s a lot of on-the-job training,” Squire said. “We’re starting to realize that we really need a base body of knowledge for the entire field.”

For that reason, Squire along with Ian Glazer, vice president for identity product management at Salesforce, founded IDPro, a community for IAM professionals. Launched at last year’s Identiverse (then Cloud Identity Summit), IDPro is currently forming the body of knowledge that an IAM engineer must know, and plans to offer a certification in the future, Squire said.

“It’s really important that people who come in not only understand IAM but also really understand security,” Grewe said.

It’s also important to determine where within the organization those IAM professionals will live. Is it operations? Development? Security?

“A lot of people just don’t know where that fits,” Shields said. “There is nowhere better for them to be in my opinion than on the IT security team.”

Grewe's team at Deutsche Bank, for instance, works under the chief security officer, whose office has a lot of budget to work with, he said. At IBM, the team that handles internal identity management works closely with HR and other groups that are involved in employees' access rights, said Heather Hinton, vice president and chief information security officer for IBM Hybrid Cloud.

“[Organizations] need to figure out how to be less siloed,” she said.

Iron Mountain data recovery adds ransomware protection

Iron Mountain data recovery wants to perform “CPR” on organizations that get hit with ransomware.

The Iron Cloud Critical Protection and Recovery (CPR) service, set to launch this month, isolates data by disconnecting it from the network. It provides a "cleanroom" to recover data in the event of an attack and ensures that ransomware is out of the system.

“Every business is really data-driven today,” said Pete Gerr, senior product manager at Iron Mountain, which is based in Boston. “Data is their most valuable asset.”

Legacy backup and disaster recovery “really weren’t built for the modern threat environment,” and isolated recovery offers the best protection against ransomware, Gerr said.

Ransomware continues to get smarter and remains a prevalent method of cyberattack. Phil Goodwin, research director of storage systems and software at IDC, said the majority of risks for organizations’ data loss involve malware and ransomware. “It’s not a matter of if they’re going to get hit, it’s a matter of when,” Goodwin said.

That's caused many organizations to proactively tackle the problem with ransomware-specific products.

“It’s moved from a backroom discussion to the boardroom,” Gerr said.

Iron Mountain data recovery gets ‘clean’

Iron Cloud CPR features Iron Mountain’s Virtual Cleanroom, a dedicated computing environment hosted within Iron Cloud data centers that provides an air gap. The cleanroom serves as an offline environment where customers can recover backups stored within the secure CPR vault. Then customers can use data forensic utilities or a designated security provider to audit and validate that restored data sets are free from viruses and remediate them if necessary.

It’s moved from a backroom discussion to the boardroom.
Pete Gerr, senior product manager, Iron Mountain

Customers then use Iron Mountain data recovery to restore selected sets back to their production environment or another site.

“The last thing we want to do is recover a backup set … that reinfects your environment,” Gerr said.

The air gap, which ensures that ransomware does not touch a given data set, can also be found in such media as tape storage that is disconnected from the network.

Goodwin cautioned that the CPR product should complement an organization’s backup and recovery platform, not replace it.

“It will fit well with what the customer has,” he said.

Iron Cloud CPR also includes a managed service for organizations using Dell EMC’s Cyber Recovery for ransomware recovery. Hosted in Iron Mountain’s data centers, Iron Cloud CPR for Dell EMC Cyber Recovery on Data Domain enables customers to isolate critical data off site for protection against attacks, using a cloud-based monthly subscription model.

CPR is part of the Iron Cloud data management portfolio, which was built using Virtustream’s xStream Cloud Management Platform. The portfolio also includes backup, archive and disaster recovery services.

Both Iron Cloud CPR offerings are fully managed services and work without any other products, Gerr said. They will be available as part of Dell EMC and Virtustream’s data protection portfolios.

Iron Mountain, which claims more than 230,000 customers across its entire product line, said Iron Cloud CPR is expected to be generally available by the end of June. Several customers are working with the Iron Mountain data recovery product as early adopters.

A data replication strategy for all your disaster recovery needs

Meeting an organization's disaster recovery challenges requires addressing problems from several angles based on specific recovery point and recovery time objectives. Today's tight RTO and RPO expectations mean almost no data loss and almost no downtime.

To meet those expectations, businesses must move beyond backup and consider a data replication strategy. Modern replication products offer more than just a rapid disaster recovery copy of data, though. They can help with cloud migration, using the cloud as a DR site and even solving copy data challenges.

Replication software comes in two forms. One is integrated into a storage system, and the other is bought separately. Both have their strengths and weaknesses.

An integrated data replication strategy

The integrated form of replication has a few advantages. It's often bundled at no charge or is relatively inexpensive. Of course, nothing in life is really free. The customer pays extra for the storage hardware in order to get the "free" software. In addition, at scale, storage-based replication is relatively easy to manage. Most storage system replication works at a volume level, so one job replicates the entire volume, even if there are a thousand virtual machines on it. And finally, storage system-based replication is often backup-controlled, meaning the replication job can be integrated and managed by backup software.

There are, however, problems with a storage system-based data replication strategy. First, it’s specific to that storage system. Consequently, since most data centers use multiple storage systems from different vendors, they must also manage multiple replication products. Second, the advantage of replicating entire volumes can be a disadvantage, because some data centers may not want to replicate every application on a volume. Third, most storage system replication inadequately supports the cloud.

Stand-alone replication

IT typically installs stand-alone replication software on each host it’s protecting or implements it into the cluster in a hypervisor environment. Flexibility is among software-based replication’s advantages. The same software can replicate from any hardware platform to any other hardware platform, letting IT mix and match source and target storage devices. The second advantage is that software-based replication can be more granular about what’s replicated and how frequently replication occurs. And the third advantage is that most software-based replication offers excellent cloud support.

While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well.

At a minimum, the cloud is used as a DR target for data, but it can also serve as an entire disaster recovery site, not just a copy. This means organizations can instantiate virtual machines there, using cloud compute in addition to cloud storage. Some approaches go further with cloud support, allowing replication across multiple clouds or from the cloud back to the original data center.

The primary downside of a stand-alone data replication strategy is that it must be purchased separately, because it isn't bundled with storage hardware. Its granularity also means dozens, if not hundreds, of jobs must be managed, although several stand-alone data replication products have added the ability to group jobs by type. Finally, there isn't wide support from backup software vendors for these products, so any integration is a manual process, requiring custom scripts.

Modern replication features

Modern replication software should support the cloud and support it well. This requirement draws a line of suspicion around storage systems with built-in replication, because cloud support is generally so weak. Replication software should have the ability to replicate data to any cloud and use that cloud to keep a DR copy of that data. It should also let IT start up application instances in the cloud, potentially completely replacing an organization’s DR site. Last, the software should support multi-cloud replication to ensure both on-premises and cloud-based applications are protected.

Another feature to look for in modern replication is integration into data protection software. This capability can take two forms: The software can manage the replication process on the storage system, or the data protection software could provide replication. Several leading data protection products can manage snapshots and replication functions on other vendors’ storage systems. Doing so eliminates some of the concern around running several different storage system replication products.

Data protection software that integrates replication can either be traditional backup software with an added replication function or traditional replication software with a file history capability, potentially eliminating the need for backup software. It's important for IT to make sure the capabilities of any combined product meet all backup and replication needs.

How to make the replication decision

The increased expectation of rapid recovery with almost no data loss is something everyone in IT will have to address. While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well. The pros and cons of both an integrated and stand-alone data replication strategy hinge on the environment in which they’re deployed.

Each IT shop must decide which type of replication best meets its current needs. At the same time, IT planners must figure out how that new data replication product will integrate with existing storage hardware and future initiatives like the cloud.

MIT CIO: What is digital culture, why it’s needed and how to get it

CAMBRIDGE, Mass. — Can large organizations adopt the digital cultures of 21st century goliaths like Amazon and Google? That was the question posed in the kickoff session at the recent MIT Sloan CIO Symposium.

The assumption — argued by panel moderator and MIT Sloan researcher George Westerman — is that there is such a thing as a digital culture. Citing MIT Sloan research, Westerman said it includes values like autonomy, speed, creativity and openness; it prevails at digital native companies whose mission is nothing less than to change the world; and it’s something that “pre-digital” companies need too — urgently.

Digital technologies change fast, Westerman said; organizations much less so. But as digital technologies like social, mobile, AI and cloud continue to transform how customers behave, organizational change is imperative — corporate visions, values and practices steeped in 20th century management theories must also be adapted to exploit digital technologies, or companies will fail.

“For all the talk we’ve got about digital, the real conversation should be about transformation,” said Westerman, principal research scientist at the MIT Initiative on the Digital Economy. “This digital transformation is a leadership challenge.”

Marrying core values with digital mechanisms

Creating a digital culture is not just about using digital technology or copying Silicon Valley companies, Westerman stressed. He said he often hears executives say that if they just had the culture of a Google or Netflix, their companies could really thrive.

George Westerman, principal research scientist, MIT Initiative on the Digital Economy

“And I say, ‘Are you sure you want that?’ That means you’ve got to hire people that way, pay them that way and you might need to move out to California. And frankly a lot of these cultures are not the happiest places to work,” Westerman said. And some can even be downright toxic, he noted, alluding to Uber’s problems with workplace culture.

The question for pre-digital companies, then, is not whether they can adopt a digital culture but how to create the right digital culture, given their pre-digital legacies, which include how their employees want to work and how they want to treat employees. The next challenge will be infusing the chosen digital culture into every level of the organization.

Corporate values are important, but culture is what happens when the boss leaves the room, Westerman said, referencing his favorite definition.

David Gledhill, group CIO and head of group technology and operations, DBS Bank

“The practices are what matters,” he told the audience of CIOs, introducing a panel of experts who served up some practical advice.

Here are some of the digital culture lessons practiced by the two IT practitioners on the panel, David Gledhill, group CIO and head of group technology and operations at financial services giant DBS Bank, and Andrei Oprisan, vice president of technology and director of the Boston tech hub at Liberty Mutual Insurance, the diversified global insurer.

Liberty Mutual’s Andrei Oprisan: ‘Challenging everything’

Mission: Oprisan, who was hired by Liberty Mutual in 2017 to fix core IT systems and help unlock the business value in digital systems, said the company’s digital mission is clear and clearly understood. “We ask ourselves, ‘Are we doing the best thing for the customer in every single step we’re taking?'”

Andrei Oprisan, vice president of technology and director of the Boston tech hub, Liberty Mutual Insurance

The mission is also urgent, because not only are insurance competitors changing rapidly, he said, but “we’re seeing companies like Amazon and Google entering the insurance space.”

“We need to be able to compete with them and beat them at that game, because we do have those core competencies, we do have a lot of expertise in this area and we can build products much faster than they can,” he said.

Outside talent: Indeed, in the year since he was hired, Oprisan has scaled the Boston tech hub’s team from eight developers to over 120 developers, scrum masters and software development managers to create what he calls a “customer-centric agile transformation.” About a quarter of the hires were from inside the organization; the rest were from the outside.

Hiring from the outside was a key element in creating a digital culture in his organization, Oprisan said.

"We infused the organization with a lot of new talent to help us figure out what good looks like," he said. "So, we're not only trying to reinvent ourselves, investing in our own talent and helping them improve and giving them all the tools they need, but we're also adding talent to that pool to change the way we're solving all of these challenges."

Small empowered teams: In the quest to get closer to the customer, the organization has become “more open to much smaller teams owning business decisions end to end,” he said, adding that empowering small teams represented a “seismic shift for any organization.” Being open to feedback and being “OK with failure” — the sine qua non of the digital transformation — is also a “very big part of being able to evolve very quickly,” he said.

“We’re challenging everything. We’re looking at all of our systems and all of our processes, we’re looking at culture, looking at brands, looking at how we’re attracting and retaining talent,” he said.

T-shirts and flip-flops: Oprisan said that autonomy and trust are key values in the digital culture he is helping to build at Liberty’s Boston tech hub.

“We emphasize that we are going to give them very challenging, hard problems to solve, and that we are going to trust they know how to solve them,” he said. “We’re going to hire the right talent, we’re going to give you a very direct mission and we’re going to get out of the way.”

In fact, Oprisan’s development teams work across the street from the company’s Boston headquarters, and they favor T-shirts and flip-flops over the industry’s penchant for business attire, he said — with corporate’s blessing. “Whatever it takes to get the job done.”

DBS Bank CIO David Gledhill: ‘Becoming the D in Gandalf’

Mission: Gledhill, the winner of the 2017 MIT Sloan CIO Leadership Award and a key player in DBS Bank’s digital transformation, said the digital journey at Singapore’s largest bank began a few years ago with the question of what it would take to run the bank “more like a technology company.”

Bank leadership studied how Google, Amazon, Netflix, Apple, LinkedIn and Facebook operated "at a technology level but also at a culture level," he said, analyzing the shifts DBS would have to make to become more like those companies. In the process, Gledhill hit upon a slogan: DBS would strive to be the "D" in Gandalf, joining Google, Amazon, Netflix, Apple, LinkedIn and Facebook (GANALF). "It seems a little cheesy … but it just resonated so well with people."

Cheesiness aside, the wizardry involved in becoming the "D" in Gandalf has indeed played out on a technology and human level, according to Gledhill. Employees now have "completely different sets of aspirations" about their jobs, a change that started with the people in the technology units and spread to operations and the real estate unit. "It was really revolutionary. Just unlocking this interest in talent and desire in people has taken us to a completely new level of operation."

Gledhill is a fan of inspirational motifs — another DBS slogan is “Making banking joyful” — but he said slogans are not sufficient to drive digital transformation. He explained that the collective embrace of a digital culture by DBS tech employees was buttressed by five key operational tenets. (He likened the schema to a DBS version of the Trivial Pursuit cheese wheel.) They are: 1. Shift from project to platform; 2. Agile at scale; 3. Rethinking the organization; 4. Smaller systems for experimentation; 5. Automation.

Platform not projects, Agile: “Rather than having discrete projects that need budget and financing and committees and all that stuff, we got rid of all that,” Gledhill said. In its place, DBS has created and funded platforms with specific capabilities. Management describes the outcomes for teams working on the platforms. For example, goals include increasing the number of customers acquired digitally, or increasing digital transactions. But it does not prescribe the inputs, setting teams free to achieve the goals. That’s when “you can really start performing Agile at scale,” he said.

Rethink, rebuild, automate: DBS’s adoption of a digital culture required rethinking organizational processes and incentives. “We call it ‘organized for success’ on the cheese wheel, which is really about DevOps, business and tech together, and how you change the structure of the KPIs and other things you use to measure performance with,” he said.

On the engineering side, DBS now “builds for modern systems,” he said. That translates into smaller systems built for experimentation, for A/B testing, for data and for scaling. “The last piece was automation — how do you automate the whole tech pipeline, from test to build to code deploy,” Gledhill said.

“So those five cheeses were the things we wanted everybody to shift to — and that included open source and other bits and pieces,” he said. “On the outer rim of the five cheeses, each one had a set of maybe five to 10 discrete outputs that had to change.”

One objective of automating every system was to enable DBS to get products to market faster, Gledhill said. “We have increased our release cadence — that is, the number of times we can push into a dev or production environment — by 7.5 times. That’s a massive increase from where we started.”

Editor’s note: Look for detailed advice on how to create a digital culture from experts at McKinsey & Company and Korn Ferry in part two of this story later this week.

Intune APIs in Microsoft Graph – Now generally available

With tens of thousands of enterprise mobility customers, we see a great diversity in how organizations structure their IT resources. Some choose to manage their mobility solutions internally while others choose to work with a managed service provider to manage on their behalf. Regardless of the structure, our goal is to enable IT to easily design processes and workflows that increase user satisfaction and drive security and IT effectiveness.

In 2017, we unified Intune, Azure Active Directory, and Azure Information Protection admin experiences in the Azure portal (portal.azure.com) while also enabling the public preview of Intune APIs in Microsoft Graph. Today, we are taking another important step forward in our ability to offer customers more choice and capability by making Intune APIs in Microsoft Graph generally available. This opens a new set of possibilities for our customers and partners to automate and integrate their workloads to reduce deployment times and improve the overall efficiency of device management.

Intune APIs in Microsoft Graph enable IT professionals, partners, and developers to programmatically access data and controls that are available through the Azure portal. One of our partners, Crayon (based in Norway), is using Intune APIs to automate tasks with unattended authentication:

Jan Egil Ring, Lead Architect at Crayon: "The Intune APIs in Microsoft Graph enable users to access the same information that is available through the Azure Portal – for both reporting and operational purposes. It is an invaluable asset in our toolbelt for automating business processes such as user on- and offboarding in our customers' tenants. Intune APIs, combined with Azure Automation, help us keep inventory tidy, giving operations updated and relevant information."

Intune APIs now join a growing family of other Microsoft cloud services that are accessible through Microsoft Graph, including Office 365 and Azure AD. This means that you can use Microsoft Graph to connect to data that drives productivity – mail, calendar, contacts, documents, directory, devices, and more. It serves as a single interface where Microsoft cloud services can be reached through a set of REST APIs.

The scenarios that Microsoft Graph enables are expansive. To give you a better idea of what is possible with Intune APIs in Microsoft Graph, let’s look at some of the core use cases that we have already seen being utilized by our partners and customers.

Automation

Microsoft Graph allows you to connect different Microsoft cloud services and automate workflows and processes between them. It is accessible through several platforms and tools, including REST-based API endpoints and most popular programming and automation platforms (.NET, JS, iOS, Android, PowerShell). Resources (user, group, device, application, file, etc.) and policies can be queried through this API, and formerly difficult or complex questions can be addressed via straightforward queries.
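As a rough illustration of the kind of unattended automation described above, the PowerShell sketch below acquires an app-only token and lists non-compliant devices through the Graph managed devices endpoint. The tenant, app ID and secret values are placeholders, and the app registration is assumed to have been granted the DeviceManagementManagedDevices.Read.All application permission.

# A minimal sketch of app-only (unattended) access to Intune APIs in Microsoft Graph.
# $env:GRAPH_APP_ID and $env:GRAPH_APP_SECRET are hypothetical pipeline secrets.
$tenantId = "contoso.onmicrosoft.com"   # placeholder tenant

$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        client_id     = $env:GRAPH_APP_ID
        client_secret = $env:GRAPH_APP_SECRET
        scope         = "https://graph.microsoft.com/.default"
        grant_type    = "client_credentials"
    }
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }

# List managed devices and flag any that are not compliant.
$devices = (Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices" `
    -Headers $headers).value
$devices | Where-Object { $_.complianceState -ne "compliant" } |
    Select-Object deviceName, operatingSystem, complianceState

For larger tenants, the script would also need to follow the @odata.nextLink paging links that Graph returns.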

For example, one of our partners, PowerON Platforms (based in the UK), is using Intune APIs in Microsoft Graph to deliver solutions to its customers faster and more consistently. PowerON Platforms has created baseline deployment templates, based on common customer types and requirements, that vastly accelerate deployments, compressing a process that normally would take two to three days down to 15 seconds. Their ability to get customers up and running is now faster than ever before.

Steve Beaumont, Technical Director at PowerON Platforms: “PowerON has developed new and innovative methods to increase the speed of our Microsoft Intune delivery and achieve consistent outputs for customers. By leveraging the power of Microsoft Graph and new Intune capabilities, PowerON’s new tooling enhances the value of Intune.”

Integration

Intune APIs in Microsoft Graph can also provide detailed user, device, and application information to other IT asset management systems. You could build custom experiences which call Microsoft Graph to configure Intune controls and policies and unify workflows across multiple services.

For example, Kloud (based in Australia) leverages Microsoft Graph to integrate Intune device management and support activities into existing central management portals. This increases Kloud’s ability to centrally manage an integrated solution for their clients, making them much more effective as an integrated solution provider.

Tom Bromby, Managing Consultant at Kloud: "Microsoft Graph allows us to automate large, complex configuration tasks on the Intune platform, saving time and reducing the risk of human error. We can store our tenant configuration in source control, which greatly streamlines the change management process, and allows for easy audit and reporting of what is deployed in the environment, what devices are enrolled and what users are consuming the service."
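A workflow like the one Kloud describes could, for instance, pull the tenant's device configuration profiles through Graph and commit them to a repository. The sketch below reuses the app-only token pattern from the earlier example (the $headers variable) and writes each profile to a JSON file; the output folder is a placeholder.

# Hypothetical sketch: export Intune device configuration profiles for source control.
# Assumes $headers already contains a bearer token, as in the earlier automation sketch.
$uri      = "https://graph.microsoft.com/v1.0/deviceManagement/deviceConfigurations"
$profiles = (Invoke-RestMethod -Uri $uri -Headers $headers).value

New-Item -ItemType Directory -Path ".\intune-config" -Force | Out-Null
foreach ($cfg in $profiles) {
    # Sanitize the display name so it can be used as a file name.
    $file = Join-Path ".\intune-config" ("{0}.json" -f ($cfg.displayName -replace '[\\/:*?"<>|]', '_'))
    $cfg | ConvertTo-Json -Depth 10 | Set-Content -Path $file -Encoding UTF8
}
Write-Output ("Exported {0} configuration profiles." -f $profiles.Count)

Committing those JSON files after each run gives the kind of change history and audit trail the quote refers to.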

Analytics

Having the right data at your fingertips is a must for busy IT teams managing diverse mobile environments. You can access Intune APIs in Microsoft Graph with Power BI and other analytics services to create custom dashboards and reports based on Intune, Azure AD, and Office 365 data – allowing you to monitor your environment and view the status of devices and apps across several dimensions, including device compliance, device configuration, app inventory, and deployment status. With the Intune Data Warehouse, you can now access historical data for up to 90 days.
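For a code-level taste of that reporting data, the sketch below reads the tenant-wide device overview from Graph. It again assumes an authorization header like the one built in the automation example, and the property names shown are those of the managedDeviceOverview resource.

# Hedged sketch: pull a high-level device summary suitable for a dashboard tile.
# $headers is assumed to hold a valid bearer token, as in the earlier sketches.
$overview = Invoke-RestMethod -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/deviceManagement/managedDeviceOverview"

[pscustomobject]@{
    EnrolledDevices = $overview.enrolledDeviceCount
    MdmEnrolled     = $overview.mdmEnrolledCount
    WindowsDevices  = $overview.deviceOperatingSystemSummary.windowsCount
    iOSDevices      = $overview.deviceOperatingSystemSummary.iosCount
    AndroidDevices  = $overview.deviceOperatingSystemSummary.androidCount
}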

For example, Netrix, LLC (based in the US) leverages Microsoft Graph to create automated solutions that improve end-user experiences and increase reporting accuracy for more effective device management. These investments increase their efficiency and overall customer satisfaction.

Tom Lilly, Technical Team Lead at Netrix, LLC: “By using Intune APIs in Microsoft Graph, we’ve been able to provide greater insights and automation to our clients. We are able to surface the data they really care about and deliver it to the right people, while keeping administrative costs to a minimum. As an integrator, this also allows Netrix to provide repetitive, manageable solutions, while improving our time to delivery, helping get our customers piloted or deployed quicker.”

We are extremely excited to see how you will use these capabilities to improve your processes and workflows as well as to create custom solutions for your organization and customers. To get started, you can check out the documentation on how to use Intune and Azure Active Directory APIs in Microsoft Graph, watch our Microsoft Ignite presentation on this topic, and leverage sample PowerShell scripts.

Deployment note: Intune APIs in Microsoft Graph are being updated to their GA version today. The worldwide rollout should complete within the next few days.

Please note: Use of a Microsoft online service requires a valid license. Therefore, accessing EMS, Microsoft Intune, or Azure Active Directory Premium features via Microsoft Graph API requires paid licenses of the applicable service and compliance with Microsoft Graph API Terms of Use.


Curb stress from Exchange Server updates with these pointers

In my experience as a consultant, I find that few organizations have a reliable method to execute Exchange Server updates.

This tip outlines the proper procedures for patching Exchange that can prevent some of the upheaval associated with a disruption on the messaging platform.

How often should I patch Exchange?

In a perfect world, administrators would apply patches as soon as Microsoft releases them. This doesn’t happen for a number of reasons.

Microsoft has released patches and updates for both Exchange and Windows Server that have caused trouble on those systems. Many IT departments have long memories, and they let those bad experiences keep them from staying current with Exchange Server updates. This is detrimental to the health of Exchange and should be avoided. With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Another wrinkle in the update process is that Microsoft releases Cumulative Updates (CUs) for Exchange Server on a quarterly schedule. CUs are updates that feature functionality enhancements for the application.

With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Microsoft plans to release one CU for Exchange 2013 and 2016 each quarter, but they do not provide a set release date. The CUs may be released on the first day of one quarter, and then on the last day of the next.

Rollup Updates (RUs) for Exchange 2010 are also released quarterly. An RU is a package that contains multiple security fixes, while a CU is a complete server build.

For Exchange 2013 and 2016, Microsoft supports the current and previous CU. When admins call Microsoft for a support case, the company will ask them to update Exchange Server to at least the N-1 CU — where N is the latest CU, N-1 refers to the previous CU — before they begin work on the issue. An organization that prefers to stay on older CUs limits its support options.

Because CUs are the full build of Exchange 2013/2016, administrators can deploy a new Exchange Server from the most recent CU. For existing Exchange servers, applying the most recent CU for that version should work without issue.

Microsoft only tests a new CU deployment with the last two CUs, but I have never had an issue with an upgrade with multiple missed CUs. The only problems I have seen when a large number of CUs were skipped had to do with the prerequisites for Exchange, not Exchange itself.

Microsoft releases Windows Server patches on the second Tuesday of every month. As many administrators know, some of these updates can affect how Exchange operates. There is no set schedule for other updates, such as .NET. I recommend a quarterly update schedule for Exchange.

How can I curb issues from Exchange Server updates?

As every IT department is different, so is every Exchange deployment. There is no single update process that works for every organization, but these guidelines can reduce problems with Exchange Server patching. Even if the company has an established patching process, it might be worth reviewing that method if it's missing some of the advice outlined below.

  • Back up Exchange servers before applying patches. This might be common sense for most administrators, but I have found it is often overlooked. If a patch causes a critical failure, a recent backup is the key to the recovery effort. Some might argue that there are Exchange configurations — such as Exchange Preferred Architecture — that do not require this, but a backup provides some reassurance if a patch breaks the system.
  • Measure the performance baseline before an update. How would you know if the CPU cycles on the Exchange Server are too high after an update if this metric hasn’t been tracked? The Managed Availability feature records performance data by default on Exchange 2013 and 2016 servers, but Exchange administrators should review server performance regularly to establish an understanding of normal server behavior.
  • Test patches in a lab that resembles production. When a new Exchange CU arrives, it has been through extensive testing. Microsoft deploys updates to Office 365 long before they are publicly available. After that, Microsoft gives the CUs to its MVP community and select organizations in its testing programs. This vetting process helps catch the vast majority of bugs before CUs go to the public, but some will slip through. To be safe, test patches in a lab that closely mirrors the production environment, with the same servers, firmware and network configuration.
  • Put Exchange Server into maintenance mode before patching. If the Exchange deployment consists of redundant servers, then put them in maintenance mode before the update process. Maintenance mode is a feature of Managed Availability that turns off monitoring on those servers during the patching window. There are a number of PowerShell scripts in the TechNet Gallery that help put servers into maintenance mode, which helps administrators streamline the application of Exchange Server updates; a rough sketch of the typical cmdlet sequence follows this list.
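For a database availability group member, the commonly documented maintenance mode sequence looks roughly like the following; EX01 and EX02 are placeholder server names, and the exact steps should be checked against Microsoft's guidance for the Exchange version in use.

# Hedged sketch: place a DAG member (EX01) into maintenance mode before patching.
# Drain the transport queues and redirect remaining messages to another server.
Set-ServerComponentState EX01 -Component HubTransport -State Draining -Requester Maintenance
Redirect-Message -Server EX01 -Target EX02.contoso.local

# Pause the cluster node and move active database copies off the server.
Suspend-ClusterNode EX01
Set-MailboxServer EX01 -DatabaseCopyActivationDisabledAndMoveNow $true
Set-MailboxServer EX01 -DatabaseCopyAutoActivationPolicy Blocked

# Take the remaining components offline for the patching window.
Set-ServerComponentState EX01 -Component ServerWideOffline -State Inactive -Requester Maintenance

Bringing the server back after patching is essentially the reverse: set ServerWideOffline back to Active, resume the cluster node, re-enable database copy auto activation and return HubTransport to Active.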