
Deloitte CIO survey: Traditional CIO role doesn’t cut it in digital era

CIOs who aren’t at the forefront of their companies’ digital strategies risk becoming obsolete — and they risk taking their IT departments with them.

The message isn’t new to IT executives, who have been counseled in recent years to take a leadership role in driving digital transformation. But new data suggests CIOs are struggling to make the shift. According to a recently published global CIO survey by Deloitte Consulting, 55% of business and technology leaders polled said CIOs are focused on delivering operational efficiency, reliability and cost-savings to their companies.

Kristi Lamar, managing director and U.S. CIO program leader at Deloitte and a co-author of the report, said IT executives who are serving in a traditional CIO capacity should take the finding as a clarion call to break out of that “trusted operator” role — and soon.

“If they don’t take a lead on digital, they’re ultimately going to be stuck in a trusted operator role, and IT is going to become a back office function versus really having a technology-enabled business,” she said. “The pace of change is fast and they need to get on board now.”

Taking on digital

“Manifesting legacy: Looking beyond the digital era” is the final installment of a three-part, multiyear CIO survey series on CIO legacy. The idea was to chronicle how CIOs and business leaders perceived the role and to explore how CIOs delivered value to their companies against the backdrop of digital transformation.


In the first installment, the authors developed three CIO pattern types. They are as follows:

  • Business co-creators: CIOs drive business strategy and enable change within the company to execute on the strategy.
  • Change instigators: CIOs lead digital transformation efforts for the enterprise.
  • Trusted operators: CIOs operate in a traditional CIO role and focus on operational efficiency and resiliency, as well as cost-savings efforts.

Based on their findings, the authors concluded that CIOs should expect to move among the three roles, depending on what their companies need at a given point in time. But this year’s CIO survey of 1,437 technology and business leaders suggested that, for the most part, the shift isn’t happening. “We have not seen a huge shift in the last four years of CIOs getting out of that trusted operator role,” Lamar said.

The pace of change is fast and they need to get on board now.
Kristi Lamar, managing director, Deloitte

Indeed, 44% of the CIOs surveyed reported that they lead neither the development of their companies’ digital strategies nor their execution.

The inability of CIOs to break out of the trusted operator role is a two-way street. Lamar said that companies still see CIOs as — and need CIOs to be — trusted operators. But while CIOs must continue to be responsible for ensuring a high level of operational excellence, they also need to help their companies move away from what’s quickly becoming an outdated business-led, technology-enabled mindset.

The more modern view is that every company is a technology company, which means CIOs need to delegate responsibility for trustworthy IT operations and — as the company’s top technology expert — take a lead role in driving business strategy.

“The reality is the CIO should be pushing that trusted operator role down to their deputies and below so that they can focus their time and energy on being far more strategic and be a partner with the business,” she said.

Take your seat at the table

To become a digital leader, a trusted operator needs to “take his or her seat at the table” and change the corporate perception of IT, according to Lamar. She suggested they build credibility and relationships with the executive team and position themselves as the technology evangelist for the company.

“CIOs need to be the smartest person in the room,” she said. “They need to be proactive to educate, inform and enable the business leaders in the organization to be technology savvy and tech fluent.”

Trusted operators can get started by seeing any conversation they have with business leaders about digital technology as an opportunity to begin reshaping their relationship.

If they’re asked by the executive team or the board about technology investments, trusted operators should find ways to plant seeds on the importance of using new technologies or explain ways in which technology can drive business results. This way, CIOs continue to support the business while bringing to the discussion “the art of the possible and not just being an order taker,” Lamar said.

Next, become a ‘digital vanguard’

Ultimately, CIOs want to help their organizations join what Deloitte calls the “digital vanguard”: companies that have a clear digital strategy and view their IT function as a market leader in digital and emerging technologies.

Lamar said organizations she and her co-authors identified as “digital vanguards” — less than 10% of those surveyed — share a handful of traits. They have a visible digital strategy that cuts across the enterprise. In many cases, IT — be it a CIO or a deputy CIO — is leading the execution of the digital strategy.

CIOs who work for digital vanguard companies have found ways to shift a percentage of their IT budgets away from operational expenses and toward innovation. According to the survey, baseline organizations spend on average about 56% of their budgets on business operations and 18% on business innovation, versus 47% and 26%, respectively, at digital vanguard organizations.

Digital vanguard CIOs also place an emphasis on talent by thinking about retention and how to retool employees who have valuable institutional knowledge for the company. And they seek out well-rounded hires, employees who can bring soft skills, such as emotional intelligence, to the table, Lamar said.

Talent is top of mind for most CIOs, but digital vanguards have figured out how to build environments for continuous learning and engagement to both attract and retain talent. Lamar called this one of the hardest gaps to close between organizations that are digital vanguards and those that aren’t. “The culture of these organizations tends to embrace and provide opportunities for their people to do new things, play with new tools or embrace new technologies,” she said.

Wanted – 2017 MacBook Pro 15″ / 13″ (Swap/Trade with 2017 27″ 5K iMac)

Would anyone be interested in swapping their 15″ or maybe their 13″ 2017 MacBook Pro for a mint 2017 27″ 5K iMac?

The iMac: i5 3.4GHz quad-core / 512GB SSD / 8GB RAM (additional available) / 4GB Radeon Pro 570.

The MacBook Pro must have 16GB.

Happy to explore swap options depending on what is presented.

Thanks.

Location: London


AIOps platforms delve deeper into root cause analysis

The promise of AIOps platforms for enterprise IT pros lies in their potential to provide automated root cause analysis, and early customers have begun to use these tools to speed up problem resolution.

The city of Las Vegas needed an IT monitoring tool to replace a legacy SolarWinds deployment in early 2018 and found FixStream’s Meridian AIOps platform. The city introduced FixStream to its Oracle ERP and service-oriented architecture (SOA) environments as part of its smart city project, an initiative that will see municipal operations optimized with a combination of IoT sensors and software automation. Las Vegas is one of many U.S. cities working with AWS, IBM and other IT vendors on such projects.

FixStream’s Meridian gives the city an overview of how business process performance corresponds to the underlying IT infrastructure at a time when, as part of its digital transformation, the city updates its systems more often and each update takes less time, said Michael Sherwood, CIO for the city of Las Vegas.

“FixStream tells us where problems are and how to solve them, which takes the guesswork, finger-pointing and delays out of incident response,” he said. “It’s like having a new help desk department, but it’s not made up of people.”

The tool first analyzes a problem and offers insights as to the cause. It then automatically creates a ticket in the company’s ServiceNow IT service management system. ServiceNow acquired DxContinuum in 2017 and released its intellectual property as part of a similar help desk automation feature, called Agent Intelligence, in January 2018, but it’s the high-level business process view that sets FixStream apart from ServiceNow and other tools, Sherwood said.
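The handoff Sherwood describes (an AIOps tool detects the problem, then opens a ticket automatically) can be pictured with a short Python sketch against ServiceNow’s REST Table API. This is an illustration only, not FixStream’s actual integration; the instance URL, credentials and field values below are placeholders.

```python
import requests

# Hypothetical ServiceNow instance and API credentials (replace with real values).
INSTANCE = "https://example.service-now.com"
AUTH = ("api_user", "api_password")

def open_incident(short_description: str, description: str) -> str:
    """Create an incident via the ServiceNow Table API and return its sys_id."""
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        json={"short_description": short_description, "description": description},
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

if __name__ == "__main__":
    sys_id = open_incident(
        "Oracle ERP payment service down",
        "Automated root cause analysis flagged a database service outage.",
    )
    print(f"Opened incident {sys_id}")
```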

FixStream’s Meridian AIOps platform creates topology views that illustrate the connections between parts of the IT infrastructure and how they underpin applications, along with how those applications underpin business processes. This was a crucial level of detail when a credit card payment system crashed shortly after FixStream was introduced to monitor Oracle ERP and SOA this spring.

“Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down,'” Sherwood said.

The system automatically correlated the application problem with problems in deeper layers of the IT infrastructure. The speedy diagnosis led to a fix that took the city’s IT department a few hours rather than a day or two.

AIOps platform connects IT to business performance

Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down.’
Michael Sherwood, CIO for the city of Las Vegas

Some IT monitoring vendors associate application performance management (APM) data with business outcomes in a way similar to FixStream. AppDynamics, for example, offers Business iQ, which associates application performance with business performance metrics and end-user experience. Dynatrace offers end-user experience monitoring and automated root cause analysis based on AI.

The differences lie in the AIOps platforms’ deployment architectures and infrastructure focus, said Nancy Gohring, an analyst with 451 Research who specializes in IT monitoring tools and wrote a white paper that analyzes FixStream’s approach.

“Dynatrace and AppDynamics use an agent on every host that collects app-level information, including code-level details,” Gohring said. “FixStream uses data collectors that are deployed once per data center, which means they are more similar to network performance monitoring tools that offer insights into network, storage and compute instead of application performance.”

FixStream integrates with both Dynatrace and AppDynamics to join its infrastructure data to the APM data those vendors collect. Its strongest differentiation is in the way it digests all that data into easily readable reports for senior IT leaders, Gohring said.

“It ties business processes and SLAs [service-level agreements] to the performance of both apps and infrastructure,” she said.

OverOps fuses IT monitoring data with code analysis

While FixStream makes connections between low-level infrastructure and overall business performance, another AIOps platform, made by OverOps, connects code changes to machine performance data. So, DevOps teams that deploy custom applications frequently can understand whether an incident is related to a code change or an infrastructure glitch.

OverOps’ eponymous software has been available for more than a year, and larger companies, such as Intuit and Comcast, have recently adopted it. OverOps identified the root cause of a problem with Comcast’s Xfinity cable systems as related to fluctuations in remote-control batteries, said Tal Weiss, co-founder and CTO of OverOps, based in San Francisco.

OverOps uses an agent that can be deployed on containers, VMs or bare-metal servers, in public clouds or on premises. It monitors the Java Virtual Machine or Common Language Runtime interface for .NET apps. Each time code loads into the CPU via these interfaces, OverOps captures a data signature and compares it with code it’s previously seen to detect changes.
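As a rough illustration of that signature-and-compare idea (not OverOps’ actual JVM or CLR instrumentation), the sketch below fingerprints code artifacts with a hash and flags anything it has not seen before; file paths stand in for code loaded through the runtime interfaces.

```python
import hashlib
from pathlib import Path

# Signatures of code artifacts seen so far.
known_signatures = set()

def signature(path: Path) -> str:
    """Return a stable fingerprint of a code artifact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_changes(artifacts: list) -> list:
    """Return the artifacts whose signatures have not been seen before."""
    changed = []
    for artifact in artifacts:
        sig = signature(artifact)
        if sig not in known_signatures:
            known_signatures.add(sig)   # remember it for next time
            changed.append(artifact)
    return changed

# Example: compare everything currently in a build output directory.
new_or_modified = detect_changes(list(Path("build/").glob("*.jar")))
print(f"{len(new_or_modified)} new or modified artifacts detected")
```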

OverOps exports reliability data to Grafana for visual display.

From there, the agent produces a stream of log-like files that contain both machine data and code information, such as the number of defects and the developer team responsible for a change. The tool is primarily intended to catch errors before they reach production, but it can be used to trace the root cause of production glitches, as well.

“If an IT ops or DevOps person sees a network failure, with one click, they can see if there were code changes that precipitated it, if there’s an [Atlassian] Jira ticket associated with those changes and which developer to communicate with about the problem,” Weiss said.

In August 2018, OverOps updated its AIOps platform to feed code analysis data into broader IT ops platforms via a RESTful API and support for StatsD. Available integrations include Splunk, ELK, Dynatrace and AppDynamics. In the same update, the OverOps Extensions feature added a serverless, AWS Lambda-based framework, as well as on-premises code options, so users can create custom functions and workflows based on OverOps data.
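For context on the StatsD side of that update, the minimal sketch below emits a counter in the plain-text StatsD datagram format over UDP. The metric name and collector address are assumptions for illustration; this is not an OverOps-provided client.

```python
import socket

# Address of a StatsD-compatible collector (assumed to be listening locally).
STATSD_HOST, STATSD_PORT = "127.0.0.1", 8125

def send_counter(metric: str, value: int = 1) -> None:
    """Emit a StatsD counter datagram, e.g. b'overops.new_errors:3|c'."""
    payload = f"{metric}:{value}|c".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (STATSD_HOST, STATSD_PORT))

# Hypothetical usage: report three newly detected errors for a service.
send_counter("overops.new_errors", 3)
```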

“There’s been a platform vs. best-of-breed tool discussion forever, but the market is definitely moving toward platforms — that’s where the money is,” Gohring said.

Improving CISO-board communication: Partnership, metrics essential

“Are we secure?”

That’s the most common — and challenging — question that CISOs get asked by their board members, according to a recent report published by Kudelski Security. While there is no clear yes or no answer, the key is to first understand exactly what and why the board is asking, said John Hellickson, managing director of global strategy and governance at Kudelski Security.

“It is important to make it clear to the board that there is no such thing as perfect security,” Hellickson said.

The report, titled “Cyber Board Communications & Metrics — Challenging Questions from the Boardroom,” highlights top questions CISOs are asked by their board members and offers strategies to address them. For example, one idea to help facilitate an effective CISO-board communication is to bolster board presentations with metrics and visuals.

The biggest takeaway for CISOs is that boards of directors are taking more interest in the security posture of their organizations, Kudelski Security CEO Rich Fennessy said. This provides both a challenge and an opportunity for CISOs, Fennessy added.

“The challenge is that a majority of CISOs, even seasoned ones, have difficulty understanding what boards are looking for and then providing this in a way that resonates,” Fennessy said. “We feel that a new approach to communicating cyber risk is needed and this represents the opportunity.”

A new approach to CISO-board communication

One of the most important findings from the report is the need for a new approach to communication between CISOs and their organizations’ board members.

In today’s volatile security landscape, it is vital that CISOs present the need to invest in a robust and mature cybersecurity program, Fennessy stressed. A partnership between CISOs and their board of directors is crucial to this end, he added, and the effectiveness of any company’s security program depends on it.


To improve CISO-board communication, CISOs need to explain cybersecurity issues to the board in layman’s terms, according to Bryce Austin, CEO at TCE Strategy and author of Secure Enough? 20 Questions on Cybersecurity for Business Owners and Executives.

“Explain the concepts of multifactor authentication, encryption in motion and at rest, zero-day vulnerabilities and GDPR,” Austin said. “The board needs to understand what these concepts and regulations are and how they impact their company.”

But because CISOs are given limited time to interact with the board, they have to learn how to engage quickly and partner for the common cause, Hellickson said. This means getting to know their organization, its vision and mission. CISO-board communication should become easier as CISOs learn more about the board’s goals for the organization, share relevant security information and consider business needs in their presentations, he added.


“CISOs will start to create a bridge between the technology and the organization’s broader issues and challenges, linking security with the ability of the organization to go to market, operate efficiently, minimize downtime, reduce costs and, finally, become a key partner to the board,” Hellickson said.

Metrics matter

Metrics are an important tool for CISOs because they help answer key questions the board is likely to ask and help CISOs make their case, Hellickson said. Boards prefer objective, quantitative evidence, but both quantitative and qualitative metrics can be effective, he added.

Even the most seasoned CISOs find it challenging to translate security and risk information into business language that provides meaningful insight to boards and business leaders, he said.

“Traditionally, CISOs have presented boards with metrics related to technical and security operations, which are hard to understand,” he added. “Presenting them can even reduce trust in their ability as security leaders.”

The challenge is that a majority of CISOs, even seasoned ones, have difficulty understanding what boards are looking for and then providing this in a way that resonates.
Rich Fennessy, CEO, Kudelski Security

Boards are fact-driven and financially driven, Austin emphasized. They want relevant data presented to them so that they can make the best decisions for their organization.

Core quantitative metrics like dwell time, details of new vulnerabilities discovered versus remediated, patch management data, number of incidents and vulnerabilities, and number of non-remediated risks should be part of the presentation, Hellickson said.

Other metrics to include are the outcomes of initiatives aimed at reducing risk; how security has been integrated with application development; actions taken to improve the company’s security risk posture; and risks the organization has accepted and how they align with the company’s agreed-upon risk tolerance, he added.
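As a simple illustration of turning raw security data into the kind of board-ready figures Hellickson describes, the short sketch below computes an average dwell time and a vulnerability remediation rate; the records and field names are invented for the example, not drawn from the report.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the compromise began vs. when it was detected.
incidents = [
    {"compromised": datetime(2018, 5, 1), "detected": datetime(2018, 5, 19)},
    {"compromised": datetime(2018, 6, 3), "detected": datetime(2018, 6, 10)},
]
vulns = {"discovered": 120, "remediated": 96}

avg_dwell_days = mean((i["detected"] - i["compromised"]).days for i in incidents)
remediation_rate = vulns["remediated"] / vulns["discovered"]

print(f"Average dwell time: {avg_dwell_days:.1f} days")          # 12.5 days
print(f"Vulnerability remediation rate: {remediation_rate:.0%}")  # 80%
```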

“We also think it is helpful to talk about security as a journey, showing where you’re at today, where you want to get to and where you’ve made noteworthy progress,” Hellickson said.


Learn at your own pace with Microsoft Quantum Katas

For those who want to explore quantum computing and learn the Q# programming language at their own pace, we have created the Quantum Katas – an open source project containing a series of programming exercises that provide immediate feedback as you progress.

Coding katas are great tools for learning a programming language. They rely on several simple learning principles: active learning, incremental complexity growth, and feedback.

The Microsoft Quantum Katas are a series of self-paced tutorials aimed at teaching elements of quantum computing and Q# programming at the same time. Each kata offers a sequence of tasks on a certain quantum computing topic, progressing from simple to challenging. Each task requires you to fill in some code; the first task might require just one line, and the last one might require a sizable fragment of code. A testing framework validates your solutions, providing real-time feedback.

Working with the Quantum Katas in Visual Studio
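To give a flavor of the kata pattern (a task whose code you fill in, plus a test that gives immediate feedback), here is a toy analogue sketched in Python rather than Q#; the real katas pose quantum computing tasks and validate them with Q#’s testing framework.

```python
def solve(x: int) -> int:
    """Task: return the square of x. A learner would replace the body below."""
    return x * x  # reference solution; a fresh kata would start from `raise NotImplementedError`

def test_solve() -> None:
    """Stand-in for the testing framework: validate the solution and report the result."""
    cases = [(0, 0), (3, 9), (-4, 16)]
    for given, expected in cases:
        got = solve(given)
        assert got == expected, f"solve({given}) returned {got}, expected {expected}"
    print("All tests passed; task complete!")

test_solve()
```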

Programming competitions are another great way to test your quantum computing skills. Earlier this month, we ran the first Q# coding contest and the response was tremendous. More than 650 participants from all over the world joined the contest or the warmup round held the week prior. More than 350 contest participants solved at least one problem, while 100 participants solved all fifteen problems! The contest winner solved all problems in less than 2.5 hours. You can find problem sets for the warmup round and main contest by following the links below. The Quantum Katas include the problems offered in the contest, so you can try solving them at your own pace.

We hope you find the Quantum Katas project useful in learning Q# and quantum computing. As we work on expanding the set of topics covered in the katas, we look forward to your feedback and contributions!

Cloud misconfigurations can be caused by too many admins

When it comes to cloud security, enterprise employees can be their own worst enemy, especially when organizations stray too far from least-privilege models of access.

Data exposures have been a constant topic of news recently — often blamed on cloud misconfigurations — and have led to voter records, Verizon customer data and even army secrets being publicly available in cloud storage.

In a Q&A, BetterCloud CEO and founder David Politis discussed why SaaS security has become such big news and how enterprises can take control of these cloud misconfigurations in order to protect data.

Editor’s note: This conversation has been edited for length and clarity.

There have been quite a few stories recently about cloud misconfigurations leading to exposures of data. Do you think this is a new issue or just something that is becoming more visible now?

David Politis: This is an issue that has been around really since people started adopting SaaS applications. But it’s only coming out now because, in a lot of cases, the misconfigurations are not identified until it’s too late. In most cases, these configurations were put in place when the stock application was deployed, or when a setting was changed years ago or six months ago, and it’s not until some high-profile exposure happens that the organization starts paying attention to them.


We’ve actually seen this recently. We had a couple of customers that we’ve been talking to for, in one case, three years. And we told them three years ago, ‘You’re going to have issue X, Y and Z down the line, because you have too many administrators and because you have this issue with groups.’ And for three years, it has been living dormant, essentially. And then, all of a sudden, they had an issue where all their groups got exposed to all the employees in the company. It’s a 10,000-person company, where every single employee in the entire company could read every single email distribution list.

Similarly, another company that we’ve talked to for a year came to us three weeks ago and said, ‘I know you told us we were going to have these problems. We just had one of the super admins who should not have been a super admin incorrectly delete about a third of our company’ — they’re about a 3,000-person company — ‘and a third of the company was just left without email, without documents and without calendars and thought they got fired.’

A thousand people, in a matter of minutes, thought they got fired, because they had no access to anything. And they had to go and restore that app. Fifteen minutes of downtime for 1,000 people is a lot of confusion.

We’ve seen these types of incidents, and we’re seeing them in these environments. This is why we started the company almost seven years ago now. But only now has the adoption of these SaaS applications reached enough critical mass that these problems are happening at scale and people are reporting them publicly.

You mention different SaaS security issues that can arise from cloud misconfigurations. Are these data exposure stories overshadowing bigger issues?

Politis: It’s more that the inadvertent piece is what makes this so challenging. And this is not malicious. There are malicious actors, but a lot of these situations are not malicious. It’s a misconfiguration or just a general mistake that someone made. Even deleting the users is just a result of having too many administrators, which is a result of not understanding how to configure the applications to follow a least-privilege model.

I think, even if it’s a mistake, the kind of data that can be exposed is the most sensitive data, because we’ve hit the tipping point in how SaaS applications are being used. The cloud, in general, is being used as a system of record. If we go back maybe five years ago, six years ago, I’m not sure we’re at the point where cloud was being trusted as a system of record. It was kind of secondary. You could put some stuff there, maybe some design files, but now you have your [human resources] files.

Recently, we did a security assessment for a customer, and what we found was that all the HR files that they had lived in a public folder in one of their cloud storage systems. And it was literally all their HR files, by employee. That was this configuration that was definitely not malicious, and that’s as bad as it gets. We’re talking about Social Security numbers. We were finding documents such as background checks on employees that were publicly available files. And if you knew how to go find them, you could pull that up.

That, I’d argue, is worse than people’s email being deleted for 15 minutes — and, again, completely by mistake. We spoke to the company, and the person in charge of HR was just not very familiar with these cloud-based systems. And they just misconfigured something at the folder level, and then all the files that they were adding to the folder were becoming publicly available. And so I think it’s more dangerous there, because you’re not even looking for a bad actor. It’s just happening. It’s happening day in, day out, which I think is harder to catch actually.

Should all enterprises assume there is a cloud misconfiguration somewhere? How difficult is it to find these issues?

Politis: I can say from our experience that nine out of 10 environments we go into — and it doesn’t matter the size of the organization — have a major, critical misconfiguration somewhere in their environment. And it is possible, in most cases, to find the misconfiguration, but it’s a little bit like finding a needle in a haystack. It requires a lot of time, because the only way to do it is to go page by page in the admin console: click on every setting, look at every group, look at every channel and look at every folder. And so unless you’re doing it programmatically, right now, there are not many [other] ways to do it.
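A programmatic check of the kind Politis alludes to might look like the minimal sketch below, which scans exported configuration records for an excess of super admins and for publicly shared folders. The record shapes and thresholds are assumptions for illustration, not BetterCloud’s product.

```python
# Hypothetical exports from a SaaS admin API.
accounts = [
    {"user": "alice@example.com", "role": "super_admin"},
    {"user": "bob@example.com", "role": "super_admin"},
    {"user": "carol@example.com", "role": "member"},
]
folders = [
    {"name": "HR Files", "shared_with": "anyone_with_link"},
    {"name": "Design Assets", "shared_with": "domain"},
]

# Least-privilege check: flag environments with more super admins than expected.
super_admins = [a["user"] for a in accounts if a["role"] == "super_admin"]
if len(super_admins) > 1:
    print(f"Least-privilege warning: {len(super_admins)} super admins: {super_admins}")

# Exposure check: flag folders shared with anyone who has the link.
for folder in folders:
    if folder["shared_with"] == "anyone_with_link":
        print(f"Exposure warning: folder '{folder['name']}' is publicly accessible")
```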

This is self-serving, but this is why we built BetterCloud: to identify those blind spots. That’s because there’s a real need. When we went to look at these environments and started logging into Salesforce and Slack and Dropbox and Google, it could take you months to go through an environment of a couple hundred employees and just check all the configurations and all the different areas, because there [are] so many places where a misconfiguration can be a problem.

The way that people have to do it today is manually. And it can take a very long period of time [depending] on how big an organization is, how long they’ve been using the SaaS applications, how much they’ve adopted cloud in general, the sprawl of data that they have to manage and, more importantly, the sprawl of entitlements, configuration settings and permissions across all the SaaS applications.

And we’re seeing a large portion of that is not even IT’s fault. The misconfigurations may predate that IT organization in many cases, because the SaaS application has been around for longer than that IT organization or that IT leader.

In many cases, it may be the end users who are misconfiguring things, because they have a lot of control over these applications. It could be that an application started as shadow IT and was configured by a shadow IT group in a certain way. When the apps are taken over by the IT organization, a lot of that configuration cleanup isn’t done, and so it doesn’t fit within the same policies that IT has.

We also have a lot of customers where the number of admins they have is crazy, because sales operations were the ones responsible for that and, generally speaking, it’s easier to make everyone an admin and let them make their own changes, let them do all of that. But when IT formally takes over the security and management of Salesforce, the work required to go find all the misconfigurations is really hard. That goes for Dropbox, Slack and anything that starts as shadow IT; you’re going to have those problems.

As AI identity management takes shape, are enterprises ready?

BOSTON — Enterprises may soon find themselves replacing their usernames and passwords with algorithms.

At the Identiverse 2018 conference last month, a chorus of vendors, infosec experts and keynote speakers discussed how machine learning and artificial intelligence are changing the identity and access management (IAM) space. Specifically, IAM professionals promoted the concept of AI identity management, where vulnerable password systems are replaced by systems that rely instead on biometrics and behavioral security to authenticate users. And, as the argument goes, humans won’t be capable of effectively analyzing the growing number of authentication factors, which can include everything from login times and download activity to mouse movements and keystroke patterns. 

Sarah Squire, senior technical architect at Ping Identity, believes that use of machine learning and AI for authentication and identity management will only increase. “There’s so much behavioral data that we’ll need AI to help look at all of the authentication factors,” she told SearchSecurity, adding that such technology is likely more secure than relying solely on traditional password systems.

During his Identiverse keynote, Andrew McAfee, principal research scientist at the Massachusetts Institute of Technology, discussed how technology, and AI in particular, is changing the rules of business and replacing executive “gut decisions” with data-intensive predictions and determinations. “As we rewrite the business playbook, we need to keep in mind that machines are now demonstrating excellent judgment over and over and over,” he said.

AI identity management in practice

Some vendors have already deployed AI and machine learning for IAM. For example, cybersecurity startup Elastic Beam, which was acquired by Ping last month, uses AI-driven analysis to monitor API activity and potentially block APIs if malicious activity is detected. Bernard Harguindeguy, founder of Elastic Beam and Ping’s new senior vice president of intelligence, said AI is uniquely suited for API security because there are simply too many APIs, too many connections and too wide an array of activity to monitor for human admins to keep up with.

There are other applications for AI identity management and access control. Andras Cser, vice president and principal analyst for security and risk professionals at Forrester Research, said he sees several ways machine learning and AI are being used in the IAM space. For example, privileged identity management can use algorithms to analyze activity and usage patterns to ensure the individuals using the privileged accounts aren’t malicious actors.

“You’re looking at things like, how has a system administrator been doing X, Y and Z, and why? If this admin has been using these three things and suddenly he’s looking at 15 other things, then why does he need that?” Cser said.

In addition, Cser said machine learning and AI can be used for conditional access and authorization. “Adaptive or risk-based authorization tends to depend on machine learning to a great degree,” he said. “For example, we see that you have access to these 10 resources, but you need to be in your office during normal business hours to access them. Or if you’ve been misusing these resources across these three applications, then it will ratchet back your entitlements at least temporarily and grant you read-only access or require manager approval.”

Algorithms are being used not just to manage identities but to create them as well. During his Identiverse keynote, Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School, discussed how companies are using data to create “derived identities” of consumers and users. “Artificial intelligence is playing a role in this in a way that maybe it wasn’t just a few years ago,” he said.

Zittrain said he had a “vague sense of unease” about machine learning being used to target individuals via their derived identities and market suggested products. We don’t know what data is being used, he said, but we know there is a lot of it, and the identities that are created aren’t always accurate. Zittrain joked that when he was in England a while ago, he was looking at the Lego Creator activity book on Amazon, which was offered up as the “perfect partner” to a book called American Jihad. Other times, he said, the technology creates anxiety when people discover the derived identities are too accurate.

“You realize the way these machine learning technologies work is by really being effective at finding correlations where our own instincts would tell us none exist,” Zittrain said. “And yet, they can look over every rock to find one.”

Potential issues with AI identity management

Experts say allowing AI systems to automatically authenticate or block users, applications and APIs with no human oversight comes with some risk, as algorithms are never 100% accurate. Squire said there could be a trial-and-error period, but added that there are ways to mitigate those errors. For example, she said AI identity management shouldn’t treat all applications and systems the same and suggested assigning a risk level to each resource or asset that requires authentication.

“It depends on what the user is doing,” Squire said. “If you’re doing something that has a low risk score, then you don’t need to automatically block access to it. But if something has a high risk score, and the authentication factors don’t meet the requirement, then it can automatically block access.”
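Squire’s risk-tiered logic can be summarized in a small sketch: each resource carries a risk level, a behavioral model scores the authentication attempt, and only high-risk mismatches are blocked automatically. The factor names, weights and threshold below are assumptions for illustration, not any vendor’s model.

```python
# Assumed risk classification of resources.
RESOURCE_RISK = {"cafeteria_menu": "low", "payroll_system": "high"}

def risk_score(factors: dict) -> float:
    """Toy stand-in for a behavioral model that combines authentication signals."""
    score = 0.0
    if factors.get("unusual_login_time"):
        score += 0.4
    if factors.get("new_device"):
        score += 0.3
    if factors.get("keystroke_mismatch"):
        score += 0.3
    return score

def decide(resource: str, factors: dict) -> str:
    """Allow low-risk resources outright; block high-risk ones when the score is too high."""
    if RESOURCE_RISK.get(resource, "high") == "low":
        return "allow"
    return "block" if risk_score(factors) >= 0.5 else "allow"

print(decide("payroll_system", {"unusual_login_time": True, "new_device": True}))  # block
print(decide("cafeteria_menu", {"new_device": True}))                              # allow
```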

Squire said she doesn’t expect AI identity management to remove the need for human infosec professionals. In fact, it may require even more. “Using AI is going to allow us to do our jobs in a smarter way,” she said. “We’ll still need humans in the loop to tell the AI to shut up and provide context for the authentication data.”

Cser said the success of AI-driven identity management and access control will depend on a few critical factors. “The quality and reliability of the algorithms are important,” he said. “How is the model governed? There’s always a model governance aspect. There should be some kind of mathematically defensible, formalized governance method to ensure you’re not creating regression.”

Explainability is also important, he said. Vendor technology should have some type of “explanation artifacts” that clarify why access has been granted or rejected, what factors were used, how those factors were weighted and other vital details about the process. If IAM systems or services don’t have those artifacts, then they risk becoming black boxes that human infosec professionals can’t manage or trust.

Regardless of potential risks, experts at Identiverse generally agreed that machine learning and AI are proving their effectiveness and expect an increasing amount of work to be delegated to them. “The optimal, smart division of labor between what we do — minds — and [what] machines do is shifting very, very quickly,” McAfee said during his keynote. “Very often it’s shifting in the direction of the machines. That doesn’t mean that all of us have nothing left to offer, that’s not the case at all. It does mean that we’d better re-examine some of our fundamental assumptions about what we’re better at than the machines because of the judgment and the other capabilities that the machines are demonstrating now.”