End-user security requires a shift in corporate culture

SAN FRANCISCO — An internal culture change can help organizations put end-user security on the front burner.

If an organization only addresses security once a problem arises, it’s already too late. But it’s common for companies, especially startups, to overlook security because it can get in the way of productivity. That’s why it’s important for IT departments to create a company culture where employees and decision-makers take security seriously when it comes to end-user data and devices.

“Security was definitely an afterthought,” said Keane Grivich, IT infrastructure manager at Shorenstein Realty Services in San Francisco, at last week’s BoxWorks conference. “Then we saw some of the high-profile [breaches] and our senior management fully got on board with making sure that our names didn’t appear in the newspaper.”

How to create a security-centric culture

Improving end-user security starts with extensive training on topics such as what data is safe to share and what a malicious website looks like. That forces users to take responsibility for their actions and understand the risks of certain behaviors.

Plus, if security is a priority, the IT security team will feel like a part of the company, not just an inconvenience standing in users’ way.

“Companies get the security teams they deserve,” said Cory Scott, chief information security officer at LinkedIn. “Are you the security troll in the back room or are you actually part of the business decisions and respected as a business-aligned person?”

When IT security professionals feel that the company values them, they are more likely to stick around as well. With the shortage of qualified security pros, retaining talent is key.

Keeping users involved in the security process helps, too. Instead of locking down a PC the moment a user accesses a suspicious file, for example, IT can send the user a message asking whether they performed that action. If the user confirms it, IT knows the account is not being impersonated; if not, IT knows there is an intruder and must act.

To keep end-user security top of mind, it’s important to make things such as changing passwords easy for users. IT can make security easier for developers as well by setting up security frameworks that they can apply to applications they’re building.

It’s also advisable to take a blameless approach when possible.

“Finger-pointing is a complete impediment to learning,” said Brian Roddy, an engineering executive who oversees the cloud security business at Cisco, in a session. “The faster we can be learning, the better we can respond and the more competitive we can be.”

Don’t make it easy for attackers

Once the end-user security culture is in place, IT should take steps to shore up the simple things.

Unpatched software is one of the easiest ways for attackers to enter a company’s network, said Colin Black, COO at CrowdStrike, a cybersecurity technology company based in Sunnyvale, Calif.

IT can also make it harder for hackers by adding extra security layers such as two-factor authentication. 
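One common second factor is a time-based one-time password of the kind authenticator apps generate. As an illustration only, here is a minimal sketch of RFC 6238 TOTP in plain Java using just the standard library; the 20-byte ASCII key is the RFC's published test secret, not a production key:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TotpDemo {
    // RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter,
    // then "dynamic truncation" down to a 6-digit code.
    static int totp(byte[] secret, long unixSeconds) {
        try {
            byte[] counter = ByteBuffer.allocate(8).putLong(unixSeconds / 30).array();
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            byte[] h = mac.doFinal(counter);
            int offset = h[h.length - 1] & 0x0f;   // low nibble picks the offset
            int bin = ((h[offset] & 0x7f) << 24)
                    | ((h[offset + 1] & 0xff) << 16)
                    | ((h[offset + 2] & 0xff) << 8)
                    |  (h[offset + 3] & 0xff);
            return bin % 1_000_000;                // keep six digits
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] secret = "12345678901234567890".getBytes(StandardCharsets.US_ASCII);
        // RFC 6238's test secret at T = 59 seconds yields the six-digit code 287082
        System.out.println(totp(secret, 59)); // prints 287082
    }
}
```

Because both sides derive the code from a shared secret and the current time, a stolen password alone is not enough to log in.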

Microsoft cumulative updates bring security, frustration

A year ago, on October Patch Tuesday, Microsoft upended its customers’ monthly security routine when it aligned all supported operating systems to a cumulative updates model, and admins have finally begun to find their footing.

In the old format, administrators could prioritize Microsoft’s critical updates and deploy those as soon as possible. However, this “Swiss cheese” approach — a term used by Windows Server principal program manager Jeff Woolsey at the company’s Ignite 2017 conference — meant admins could pick and choose which vulnerabilities to address. The end result was that some systems did not get the updates they needed.

Microsoft expanded the cumulative updates model beyond Windows 10 in October 2016 to limit administrators to an all-or-nothing choice. Rather than select which patches to deploy first, the rollup model makes admins determine which systems get patching priority.

“Things have stabilized, and Microsoft has probably achieved their goal at this point of simplifying the process,” said Todd Schell, product manager at Ivanti, an IT security company in South Jordan, Utah. “With these cumulative updates, you don’t have to worry about testing all these individual updates.”

While the blanket approach of the Microsoft cumulative updates model secures systems against all vulnerabilities, admins need a larger test environment and must spend more time to vet every update before deployment.

“Definitely, I think there’s frustration,” Schell said. “This is only one of their jobs for a lot of these people, so the time being tied up adds to the frustration factor.”

Most businesses have adapted to the Microsoft cumulative updates model, which has led to a faster update deployment rate, Schell said. And, as a result, Windows systems are more secure than a year ago, Woolsey reported at Ignite.

“You can’t miss one patch now and say, ‘Whoops, we only deployed 10 of the 11 patches,'” said Jimmy Graham, director of product management for Qualys Inc., based in Redwood City, Calif. “It’s a lot easier to get more updates deployed.”

Watch out for the Search vulnerability

On the anniversary of Microsoft’s cumulative updates model, this year’s October Patch Tuesday includes updates for 62 vulnerabilities, 30 of which affect Windows systems.

Graham said the most important item for Windows Server administrators is CVE-2017-11771, a critical vulnerability that affects Windows Server 2008 and up. This remote code execution exploit lets an unauthenticated intruder use a memory-handling flaw in the Windows Search service to overtake a machine.

CVE-2017-11771 is similar to vulnerabilities Microsoft patched in June, July and August that could also be triggered over the Server Message Block protocol. But while CVE-2017-11771 is SMB-related, it is not similar to the exploits used in the WannaCry attacks in spring 2017.

“It could be that [Microsoft] is looking at anything related to SMB,” Schell said.

Microsoft also released two updates on October Patch Tuesday that address similar critical vulnerabilities in Windows Server 2008 and up. CVE-2017-11762 and CVE-2017-11763 are remote code execution vulnerabilities in the Windows font library. On an unpatched system, an attacker gains access via a web-based attack or with a malicious file on a server that a user opens.

“It’s one of those backdoors that you don’t think about too often,” Schell said.

Microsoft also flagged CVE-2017-11779 as critical, a remote code execution vulnerability in Windows Domain Name System that affects Windows Server 2012 and up. To exploit the flaw, an attacker sends corrupted DNS responses to a system from a malicious DNS server.

In addition, Microsoft closed a zero-day vulnerability in Microsoft Office in CVE-2017-11826. An attacker inserts malicious code in an Office document that, once opened, hands over control of the system.

For more information about the remaining security bulletins for October Patch Tuesday, visit Microsoft’s Security Update Guide.

Dan Cagen is the associate site editor for SearchWindowsServer.com. Write to him at dcagen@techtarget.com.

DHS cyberinsurance research could improve security

The Department of Homeland Security has undertaken a long-term cyberinsurance study to determine if insurance can help improve cybersecurity overall, but experts said that will depend on the data gathered.

The DHS began researching cyberinsurance in 2014 by gathering breach data into its Cyber Incident Data and Analysis Repository (CIDAR). DHS uses CIDAR to collect cyber incident data along 16 categories, including the type, severity and timeline of an incident, the apparent goal of the attacker, contributing causes, specific control failures, assets compromised, detection and mitigation techniques, and the cost of the attack.

According to the DHS, it hoped to “promote greater understanding about the financial and operational impacts of cyber events.”

“Optimally, such a repository could enable a novel information sharing capability among the federal government, enterprise risk owners, and insurers that increases shared awareness about current and historical cyber risk conditions and helps identify longer-term cyber risk trends,” the DHS wrote in a report about the value proposition of CIDAR. “This information sharing approach could help not only enhance existing cyber risk mitigation strategies but also improve and expand upon existing cybersecurity insurance offerings.”

The full cyberinsurance study by the DHS could take 10 to 15 years to complete, but Matt Shabat, strategist and performance manager in the DHS Office of Cybersecurity and Communications, told TechRepublic that he hopes there can be short-term improvements to cybersecurity with analysis of the data as it is gathered.

Shabat said he hopes the added context gathered by CIDAR will improve the usefulness of its data compared to other threat intelligence sharing platforms. Experts said this was especially important because as Ken Spinner, vice president of global field engineering at Varonis, told SearchSecurity, “A data repository is only as good as the data within it, and its success will likely depend on how useful and thorough the data is.”

“Sector-based Information Sharing and Analysis Centers have been implemented over a decade ago, so creating a centralized cyber incident data repository for the purpose of sharing intelligence across sectors is a logical next step and a commendable endeavor,” Spinner added. “A data repository could have greater use beyond its original intent by helping researchers find patterns in security incidents and criminal tactics.”

Philip Lieberman, president of Lieberman Software, a cybersecurity company headquartered in Los Angeles, said speed was the key to threat intel sharing.

“The DHS study on cyberinsurance is a tough program to implement because of missing federal laws and protocols to provide safe harbor to companies that share intrusion information,” Lieberman told SearchSecurity. “The data will be of little use in helping others unless threat dissemination is done within hours of an active breach.”

Scott Petry, co-founder and CEO of Authentic8, a secure cloud-based browser company headquartered in Mountain View, Calif., said the 16 data elements used by the DHS could provide “a pretty comprehensive overview of exploits and responses, if a significant number of organizations were to contribute to CIDAR.”

“The value of the data would be in the volume and its accuracy. Neither feel like short term benefits, but there’s no question that understanding more about breaches can help prevent similar events,” Petry told SearchSecurity. “But many organizations may be reluctant to share meaningful data because of the difficulty in anonymizing it and the potential for their disclosure to be used against them. It goes against their nature for organizations to share detailed breach information.”

The DHS appears to understand these concerns and outlined potential ways to overcome the “perceived obstacles” to enterprises sharing attack data with CIDAR. Experts noted, however, that many of the DHS suggestions may not be as effective as desired, because they tend to boil down to working with organizations rather than offering innovative solutions to these longstanding issues.

DHS did not respond to requests for comment at the time of this post.

Using cyberinsurance to improve security

Still, experts said if the DHS can gather quality data, the cyberinsurance study could help enterprises to improve security.

Spinner said cyberinsurance is a valid risk mitigation tool.

“Counterintuitively, having a cyberinsurance policy can foster a culture of security. Think of it this way: When it comes to auto insurance, safer drivers who opt for the latest safety features on their vehicles can receive a discount,” Spinner said. “Similarly, organizations that follow best practices and take appropriate steps to safeguard the data on their networks can also be rewarded with a lower rate quote.”

Lieberman said the efficacy of cyberinsurance on security is limited because the “industry is in its infancy with both insurer and insured being not entirely clear as to what constitutes due and ordinary care of IT systems to keep them free of intruders.”

“Cyberinsurance does make sense if there are clear definitions of minimal security requirements that can be objectively tested and verified. To date, no such clear definitions nor tests exist,” Lieberman said. “DHS would do the best for companies and taxpayers by assisting the administration and [the] legislative branch in drafting clear guidelines with both practices and tests that would provide safe harbor for companies that adopt their processes.”

Petry said the best way for cyberinsurance to help improve security would be to require “an organization to meet certain security standards before writing the policy and by creating an ongoing compliance requirement.”

“It’s a big market, and insurers are certainly making money, but that doesn’t mean it’s a mature market. Many organizations require their vendors to carry cyberinsurance, which will continue to fuel that growth, but the insurers aren’t taking reasonable steps to understand the exposure of the organizations they’re underwriting. When I get health insurance, they want to know if I’m a smoker and what my blood pressure is. Cyberinsurance doesn’t carry any of the same real-world assessments of ‘the patient.'”

Spinner said the arrangement between the cybersecurity industry and cyberinsurance is “very much still a work in progress.”

“The cybersecurity market is evolving rapidly, to some extent it is still in the experimental phase in that providers are continuing to learn what approach works best, just as companies are trying to figure out just how much insurance is adequate,” Spinner said. “It’s a moving target and we’ll continue to see the industry and policies evolve. The industry needs to work towards a standard for assessing risk so they can accurately determine rates.”

How JSR-375 simplifies and standardizes Java EE security

When leafing through the pages of the recently ratified JSR-375, the new Java EE Security API, it’s amusing how quickly the reading of the spec turns into an exercise of uttering to yourself, Seriously, have they not standardized this stuff yet?

Historically, implementing various aspects of Java EE security was a responsibility shouldered primarily by the application server vendor, and hooking into those proprietary systems was always a headache. Any software architect who has gone through the process of setting up a WebSphere cluster, configuring a WebLogic server or doing a Liferay installation has inevitably wasted time jumping through the odious hoops that were required to connect to a proprietary user attribute registry or third-party authentication store. For those unlucky enough not to have a simple LDAP server that provided this functionality, a custom user registry might have to be developed, which meant coding against a vendor-specific API and hooking that into the application server’s runtime.

These little Java EE security nuisances were never show stoppers. There have always been workarounds or third-party frameworks to help an organization achieve its security goals. The problem was that these various approaches weren’t standardized. Many aspects of Java EE security are documented within specifications, much of that in the often overlooked Java Authentication Service Provider Interface for Containers (JASPIC) specification. Unfortunately, JASPIC isn’t fun to work with: it isn’t annotation-based, and it doesn’t leverage container-based dependency injection. JSR-375, the Java EE Security API, is an attempt to address these security-related issues.

Containers, microservices and Java EE security

“It’s an important specification because it bridges some of the gaps that existed in previous Java EE versions,” said Java Champion Ivar Grimstad, who is hosting a JavaOne 2017 session entitled “New Security APIs for Java EE.” “Now it’s there, and that’s a good thing. It’s a good foundation, so if you want OAuth or support for microservices, you have a good foundation to build upon.”

But perhaps the most significant aspect of the JSR-375 API is the fact that it allows for all of the security information to be defined within the application, and not configured externally. “You do it all in the application,” Grimstad said. “You don’t need to configure it from the outside.”

That’s a significant improvement in managing the lifecycle of an application, especially in a world of Docker-hosted microservices that are distributed in containers. “With annotations, you can easily add security and you don’t have to do any vendor-specific configuration to get it working.”

The annotation-based approach to security isn’t insignificant.

One of the nice things about JSR-375 is that it doesn’t try to boil the ocean in its first release. The enterprise security specification can be broken down into three key parts.

1. The authentication mechanism

Web-based authentication isn’t anything new. Every Servlet engine supports basic, digest, form and certificate authentication. But existing APIs don’t provide many hooks allowing developers to interact with the process. Doing something as simple as ensuring the authentication happens against a specific user registry isn’t possible without digging into non-standard APIs. Furthermore, there is no support for authentication mechanisms other than the aforementioned four. And mechanisms for doing things like firing off callbacks to the application after a user is authenticated don’t exist.

Many of these issues are addressed by JASPIC, but JASPIC demands a great deal of coding effort while lacking the declarative support software developers have come to expect since the release of Java 5. The Java EE Security API’s HttpAuthenticationMechanism interface, along with built-in beans containing sensible defaults and annotations such as @RememberMe and @LoginToContinue, greatly simplifies the act of programmatically interacting with authentication services.
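To show the shape of the mechanism without a full application server, here is a minimal, self-contained sketch. The enum and interface below are simplified stand-ins for the real types in javax.security.enterprise.authentication.mechanism.http, and the token value, caller name and group are invented for illustration:

```java
import java.util.Set;

public class AuthMechanismSketch {
    // Simplified stand-ins for the real JSR-375 types, which live in
    // javax.security.enterprise.authentication.mechanism.http.*
    enum AuthenticationStatus { SUCCESS, SEND_FAILURE }

    interface HttpMessageContext {
        AuthenticationStatus notifyContainerAboutLogin(String caller, Set<String> groups);
        AuthenticationStatus responseUnauthorized();
    }

    // The core of a custom HttpAuthenticationMechanism: inspect the request,
    // then tell the container who (if anyone) was authenticated.
    static AuthenticationStatus validateRequest(String authHeader, HttpMessageContext ctx) {
        if ("Bearer valid-token".equals(authHeader)) {   // hypothetical credential check
            return ctx.notifyContainerAboutLogin("alice", Set.of("users"));
        }
        return ctx.responseUnauthorized();
    }

    // A trivial context so the sketch runs outside a container.
    static HttpMessageContext demoContext() {
        return new HttpMessageContext() {
            public AuthenticationStatus notifyContainerAboutLogin(String c, Set<String> g) {
                return AuthenticationStatus.SUCCESS;
            }
            public AuthenticationStatus responseUnauthorized() {
                return AuthenticationStatus.SEND_FAILURE;
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(validateRequest("Bearer valid-token", demoContext())); // SUCCESS
        System.out.println(validateRequest(null, demoContext()));                 // SEND_FAILURE
    }
}
```

In a real deployment the container supplies the message context and calls validateRequest on every request; the mechanism only has to decide who the caller is.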

2. The Java EE security identity store

The identity store is a central part of any Java EE security implementation, but a simple and standard mechanism for interacting with it has always been lacking. To simplify and standardize the process, the Java EE Security API defines an IdentityStore interface and a CredentialValidationResult object, both of which work together to perform the simple tasks of validating a user, providing the caller’s unique identifier, and the various groups to which a user belongs. Interfaces for interacting with an LDAP-based identity store or a relational database as an identity store are also defined.
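The validate-and-report flow can be sketched in plain Java. The Result class below is a simplified stand-in for CredentialValidationResult, and the in-memory maps stand in for the LDAP or database backend a real IdentityStore would consult; the user, password and group names are invented:

```java
import java.util.Map;
import java.util.Set;

public class IdentityStoreSketch {
    // Stand-in for CredentialValidationResult: validity, caller name, groups.
    static class Result {
        final boolean valid;
        final String caller;
        final Set<String> groups;
        Result(boolean valid, String caller, Set<String> groups) {
            this.valid = valid; this.caller = caller; this.groups = groups;
        }
    }

    // In-memory stand-in for the backing store.
    static final Map<String, String> PASSWORDS = Map.of("alice", "s3cret");
    static final Map<String, Set<String>> GROUPS = Map.of("alice", Set.of("users", "admins"));

    // validate() answers the identity store's three questions: is the
    // credential good, who is the caller, and which groups do they belong to.
    static Result validate(String user, String password) {
        if (password != null && password.equals(PASSWORDS.get(user))) {
            return new Result(true, user, GROUPS.get(user));
        }
        return new Result(false, null, Set.of());
    }

    public static void main(String[] args) {
        System.out.println(validate("alice", "s3cret").valid); // true
        System.out.println(validate("alice", "wrong").valid);  // false
    }
}
```

The appeal of the standardized interface is that swapping the in-memory maps for LDAP or a relational database changes only the store, not the application code that calls it.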

3. The Java EE security context

When it comes to writing low-level code to programmatically secure Java resources, the EE specification has always been somewhat lacking. Declarative security is simple to use and always preferred, but it doesn’t meet the needs of every application. Enterprise software often has fine-grained security requirements that can only be fulfilled programmatically. That’s where the Java EE Security Context comes in.

The Security Context provides familiar methods, such as getCallerPrincipal() and isCallerInRole(String role), that help identify who is invoking a given resource. More interesting are methods such as the boolean hasAccessToWebResource method, which can determine whether a user can invoke a given HTTP method on a Servlet. Another interesting addition is the authenticate() method, which programmatically triggers an authentication challenge. Prior to the Java EE Security API, challenges were only triggered when a resource was accessed, and programmatic triggering wasn’t possible.
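The programmatic style looks roughly like this. The inner class is a simplified stand-in for javax.security.enterprise.SecurityContext (in a container, its state comes from the authenticated session), and the caller, roles and view strings are invented for illustration:

```java
import java.util.Set;

public class SecurityContextSketch {
    // Stand-in for javax.security.enterprise.SecurityContext.
    static class SecurityContext {
        private final String caller;
        private final Set<String> roles;
        SecurityContext(String caller, Set<String> roles) {
            this.caller = caller;
            this.roles = roles;
        }
        String getCallerPrincipal() { return caller; }
        boolean isCallerInRole(String role) { return roles.contains(role); }
    }

    // A fine-grained, per-request decision that declarative security
    // annotations alone can't express.
    static String viewFor(SecurityContext ctx) {
        return ctx.isCallerInRole("admins")
                ? "admin view"
                : "standard view for " + ctx.getCallerPrincipal();
    }

    public static void main(String[] args) {
        SecurityContext ctx = new SecurityContext("alice", Set.of("users"));
        System.out.println(viewFor(ctx)); // standard view for alice
    }
}
```

This is the kind of branch-on-role logic the spec means by programmatic security: the decision happens inside application code, per request, rather than in deployment metadata.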

The team that worked on JSR-375 should be proud of their accomplishment. This first version of the Java EE Security API does a good job at standardizing security and addressing many of the shortcomings of the existing Java EE and JASPIC APIs. It is a solid foundation to build upon and enhance.

Proof-of-concept iOS exploit released by Google’s Project Zero

A security researcher for Google’s Project Zero team has released a proof-of-concept iOS exploit that takes advantage of another Broadcom Wi-Fi issue.

The vulnerability abused by Gal Beniamini, a security researcher for Google Project Zero based in Israel, was found in the same Broadcom BCM4355C0 Wi-Fi chips affected by the Broadpwn flaw, but is separate. Beniamini confirmed the Broadcom flaw (CVE-2017-11120) affects a range of devices, including the Samsung Galaxy S7 Edge and various Wi-Fi routers, but the exploit he released was specifically for the iPhone 7.

Beniamini wrote in his disclosure that the firmware on the BCM4355C0 SoC did not validate a specific field properly, and an iOS exploit could allow code execution and more.

“The exploit gains code execution on the Wi-Fi firmware on the iPhone 7,” Beniamini wrote. “Upon successful execution of the exploit, a backdoor is inserted into the firmware, allowing remote read/write commands to be issued to the firmware via crafted action frames (thus allowing easy remote control over the Wi-Fi chip).”

However, Beniamini’s proof-of-concept iOS exploit requires knowledge of the MAC address of the target device, which may make using this attack in the wild more difficult.

Beniamini said his iOS exploit was tested against the Wi-Fi firmware in iOS 10.2 “but should work on all versions of iOS up to 10.3.3.”

Apple has patched against this iOS exploit in iOS 11 and Google patched the same Broadcom flaw in its September Security Update for Android. Users are urged to update if possible.

Network lateral movement from an attacker’s perspective

LOUISVILLE, KY. — A security researcher at DerbyCon 7.0 showed how an attacker will infiltrate, compromise and move laterally on an enterprise network, and why it benefits IT professionals to look at infosec from a threat actor’s perspective.

Ryan Nolette, security technologist at Sqrrl, based in Cambridge, Mass., said there are a number of different definitions for network lateral movement, but he prefers the MITRE definition, which says network lateral movement is “a step in the process” of getting to the end goal of profit.

Nolette said there are a lot of different attacks that can all be part of network lateral movement, including compromising a shared web root (anything running with the same permissions as the web server) using SQL injection, remote access tools and pass-the-hash attacks.

According to Nolette, there are five key stages to the network lateral movement process: infection, compromise, reconnaissance, credential theft and lateral movement. This process then repeats from the recon stage for each system as needed, but the lateral movement stage is “where the attack gets really freaking exciting,” Nolette told the crowd.

“You’ve already mapped out where you want to go next. You have credentials that you can possibly use to log in to use other systems,” Nolette said. “Now, it’s time to make an engineer or IT admin cry because now you’re going to start moving across their environment.”

Demonstrating network lateral movement

Nolette walked through a demo attack and made sure he had some roadblocks to overcome. First, he ran a Meterpreter payload in Metasploit which would allow him to “run plugins, scripts, payloads, or start a local shell session against the victim” and used it to determine the user privileges of the victim machine.

Finding that the privileges were limited, Nolette loaded a generic Windows User Account Control bypass (which he noted has been patched in the current version of Windows) to escalate privileges to admin level.

In a blog post expanding on the attack, Nolette said that once the attacker has access to a system with these privileges, the aim is to map the network and processes, learn naming conventions to identify targets and plan the next move, which is to recover hashes in order to steal login credentials.

With credentials, Nolette said he targets local users and domain users.

“The reason I want the local users is because in every single large corporation, IT has a backdoor local admin account that uses the same password across 10,000 systems,” Nolette told the DerbyCon audience. “For the record, [Group Policy Object] allows you to randomize that password for every system and stores it in [Active Directory], so there’s really no excuse anymore for this practice.”

Another way Nolette said attackers can find more privileged users is by looking at accounts that break the normal naming convention of the organization. For example, Nolette said if a username is initial.lastname but an attacker sees a name like master_a, that could be an indication it is a domain user with higher privileges.
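Defenders can run the same convention check before an attacker does. A minimal sketch, with a hypothetical initial.lastname convention and invented account names:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class NamingConventionCheck {
    // Hypothetical convention: first initial, a dot, then the last name.
    static final Pattern CONVENTION = Pattern.compile("[a-z]\\.[a-z]+");

    // Flag every account name that breaks the convention; outliers are
    // candidate service or privileged accounts worth reviewing.
    static List<String> outliers(List<String> accounts) {
        return accounts.stream()
                .filter(a -> !CONVENTION.matcher(a).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(outliers(List.of("j.smith", "a.jones", "master_a"))); // [master_a]
    }
}
```

A periodic sweep like this over the directory surfaces exactly the accounts an attacker would shortlist.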

When mapping the potential paths for network lateral movement, Nolette said attackers will look for specific open ports and use PsExec to run commands on remote systems — both tactics used in the recent WannaCry and NotPetya ransomware attacks.

“If you use PsExec, SecOps hates you, because that’s a legitimate tool used by IT that is constantly run throughout environments and abused,” Nolette said. He suggested one good security practice is to use whitelisting software so that only very specific IT user accounts can run PsExec.

Understanding attacker network lateral movement

“In a lot of presentations you don’t get to see the offense side. All you get to see are the after-effects of what they did. They move laterally, great, now I have a new process on this system. But, what did they actually do in order to do that?” Nolette said. “If I figure out what the attacker is doing, I can try to move further up the attack chain and stop them there.”

Nolette said the value of threat hunting to him was not about finding a specific attack or method, but rather in validating a hypothesis about how threat actors may be abusing systems.

“I find that valuable because that’s a repeatable process. When you’re trying to sell to your upper management what you want to do, you always want to use business terms: return on investment, high value target, synergy,” Nolette said. “In order to be a successful security practitioner, you have to know why the business [cares]. Security is not a money-maker. It is always a cost center. How to change that view with the upper management is to show them return on investment. By spending a few hours looking at this stuff, I just saved us a few million dollars.”

Managed SDP security from Verizon, Vidder addresses enterprise security

Verizon has added managed software-defined perimeter security to its enterprise networking service portfolio. The service uses security software from Vidder Inc., headquartered in Campbell, Calif.

Vidder’s SDP security software is available as a gateway that can be cloud-based, hosted in a data center box or integrated with an SD-WAN gateway, said Junaid Islam, Vidder’s president and CTO. In Verizon’s case, Vidder’s software will run in universal customer premises equipment.

Vidder’s SDP security identifies individual users and the devices they use to access the network, and determines which applications each user is authorized to reach. A key element of SDP security is that the authorization process no longer takes place inside the device, but in the cloud or data center, Islam said. This helps prevent malware from getting into the data center, he added.

Verizon also offers managed SD-WAN services, underpinned by SD-WAN vendors, including Cisco and Viptela — recently acquired by Cisco.

“Verizon is making a big bet on SD-WAN as a way to differentiate its service and now [Vidder] is working with them to put SDP on that [service] as a further differentiation, in terms of securing that application-aware infrastructure with application-aware security,” Islam said.

Although most SD-WAN services include their own security measures or partner with security companies, Islam said adding SDP security on top of SD-WAN is beneficial: if SD-WAN reroutes traffic to a different path, SDP security ensures end-to-end encryption and a “secure enclave.”

Islam said Verizon’s SDP security service is especially suited for enterprises with high-value or regulated information. Those companies want to minimize cyber risks and are more likely to implement SDP security quickly, he said, whether or not they transition to SD-WAN.

SevOne moves management platform to the cloud

SevOne has shifted its formerly appliance-based management platform to the cloud. With its SevOne Data Platform, SevOne intends to bring management and analytics to where enterprises are working in the cloud, according to Jim Melvin, SevOne’s CMO.

“Fundamentally, what we’ve done is we’ve taken our appliance-based software and we’ve containerized it,” Melvin said. “We’ve made it so it can run in just about anything you want it to run in.”

The platform includes three new offerings: Data Engine, Data Insight and Data Bus. Data Engine collects the actual data; Data Insight allows users to customize analytics and visualization for a more efficient workflow; and Data Bus lets enterprises send and receive data from other analytics packages, like custom data lakes, business insight packages or security operations.

Data Insight and Data Bus are available now, while Data Engine will be available later in 2017, according to Melvin.

Melvin said he thinks SevOne’s cloud-based platform will not only help improve a company’s workflow, but also benefit enterprises implementing software-based technologies.

“These technologies — like SDN, SD-WAN and NFV — provide great promise for the companies that are deploying them in terms of flexibility,” Melvin said. “But frankly, they’ve been a huge potential fear, risk or even nightmare for operational teams that have to embrace them. It’s just another big change.”

With a cloud-based platform in place, however, Melvin contends enterprises can keep up with the rapid changes in virtualized technologies and succeed with software-defined infrastructures.

InfoVista and Cato Networks offer integrated SD-WAN gateway

InfoVista and Cato Networks have introduced an integrated gateway that combines SD-WAN connectivity with cloud-based security and performance guarantees.

The new appliance couples InfoVista’s Ipanema Application Aware SD-WAN service with Cato Cloud network services. The Ipanema appliance routes application traffic to Cato Cloud via secure gateway connections.

“Cato’s unique SLA-backed global backbone with built-in network security combined with InfoVista’s Ipanema enables enterprises to dramatically reduce the cost, complexity and risk in their enterprise networks,” said Alon Alter, Cato’s vice president of business operations, in a statement.

The combined SD-WAN gateway will be available next week at the SD-WAN Summit. InfoVista released an updated version of Ipanema SD-WAN service earlier this month that focuses on enterprises transitioning from legacy networks to SD-WAN.

Aruba launches AI-based user behavioral analytics system

Aruba has beefed up the security of its wireless LAN portfolio with software that relies on user behavioral analytics and other features to guard against malicious activity.

Aruba 360 Secure Fabric — available now in North America — includes elements of pre-existing Aruba products, such as ClearPass and IntroSpect, armed with additional machine learning and artificial intelligence capabilities to allow customers to rely on user behavioral analytics to determine threats. It’s based, in part, on software from Niara, which Aruba’s parent, Hewlett Packard Enterprise, acquired earlier this year.

Instead of tracking signatures and known threats, an approach taken by many threat detection and firewall systems, Aruba 360 Secure Fabric attempts to spot small changes in behavior on a network to identify potential malicious activity, according to Larry Lunetta, Aruba’s vice president of security solutions marketing.
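
Behavioral detection of this kind is typically built on per-user or per-entity baselines: record what normal activity looks like, then flag large deviations. A minimal sketch of the idea (hypothetical data and thresholds; not Aruba’s actual algorithm):

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation that deviates more than `threshold`
    standard deviations from the entity's historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Typical daily login counts for a user, then a sudden spike
baseline = [4, 5, 3, 6, 4, 5, 4]
print(is_anomalous(baseline, 5))   # normal activity -> False
print(is_anomalous(baseline, 40))  # large deviation -> True
```

Real UEBA systems model many signals at once and weigh them with machine learning, but the core contrast with signature matching is the same: the alert comes from a deviation from learned behavior, not from a match against a known bad pattern.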

If malicious activity is detected through IntroSpect, it will be handed off to Aruba’s management app, ClearPass, for resolution. Although the new framework works best on Aruba products, it will also interoperate with infrastructure from different vendors, Lunetta said. The fabric also meshes with Aruba’s Secure Core offering, which contains security tools embedded within the vendor’s line of access points, wireless controllers, and core and aggregation switches.

Lunetta said Aruba 360 is an attempt to recognize the changing nature of enterprise network security threats, which increasingly encompass ransomware attacks and situations where a breach at a contractor introduces malware that may remain dormant for months.

“We haven’t seen a postmortem on Equifax, but it was probably a similar circumstance,” Lunetta said of the hack that potentially compromised the information of more than 143 million Americans. Most of all, the new offering is aimed at reducing the manual labor involved in IT security operations, he said.

IntroSpect changes

In conjunction with the new software, Aruba divided its existing IntroSpect user behavioral analytics application into two offerings: IntroSpect Standard and IntroSpect Advanced. Standard allows users to rely on machine learning and data fed from as few as three sources to monitor for and detect anomalous network activity, Lunetta said.

The software also assesses behavior across internet of things, mobile and cloud, combing through data from a variety of sources, including Microsoft Active Directory, Lightweight Directory Access Protocol records and even data sources from other security vendors, such as Palo Alto Networks and Check Point Software Technologies. Suspected threats are diverted and quarantined in ClearPass.

Customers can subsequently upgrade to Advanced, which correlates information from a wider range of data sources, including endpoints, and relies on 100 supervised and unsupervised machine learning models to assess potential threats. IntroSpect Advanced interoperates with ClearPass, which groups devices by common characteristics, like classifying IP security cameras together. In addition, Advanced creates a benchmark for activity among devices in the same class.
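
Peer-group baselining of this sort can be illustrated with a short sketch: group devices by class, compute each class’s typical activity, and flag devices that diverge sharply from their peers (hypothetical device names and counts; not IntroSpect’s actual models):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-device event counts, tagged with a device class
observations = [
    ("cam-01", "ip-camera", 110),
    ("cam-02", "ip-camera", 95),
    ("cam-03", "ip-camera", 105),
    ("cam-04", "ip-camera", 900),   # far above its peers
    ("printer-01", "printer", 30),
    ("printer-02", "printer", 25),
]

def peer_outliers(observations, ratio=2.0):
    """Return devices whose activity exceeds `ratio` times
    the mean activity for their device class."""
    by_class = defaultdict(list)
    for device, cls, count in observations:
        by_class[cls].append(count)
    outliers = []
    for device, cls, count in observations:
        if count > ratio * mean(by_class[cls]):
            outliers.append(device)
    return outliers

print(peer_outliers(observations))  # ['cam-04']
```

Grouping by class matters because traffic that would be unremarkable for a laptop can itself be the signal when it comes from an IP camera.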

Early user plans expansion

Electronic design automation company Cadence Design Systems Inc., an early adopter of Aruba 360 Secure Fabric, is using elements of the user behavior analytics software in preparation for a more comprehensive rollout, said Faramarz Mahdavi, senior group director of IT operations.

“We plan to expand the footprint as time goes on,” he said. “We’re just focused on [protecting our] source code for now, but plan to expand to other parts of network and data. Like a lot of companies, there’s been a lot of focus on perimeter protection, and we’ve added another layer internally for our security posture.”

Aruba security fabric signals new partner strategy

Aruba Networks has rolled out a network security framework that it believes can help its partners expand customer reach.

The Aruba security framework, dubbed 360 Secure Fabric, combines the vendor’s networking products with the user and entity behavior analytics (UEBA) technology it acquired from Niara in February. Secure Fabric offerings include Aruba’s IntroSpect UEBA and ClearPass access control and policy management product lines.

According to the vendor, the security fabric signifies a shift in company strategy, which has up until now focused mainly on the networking side of customer organizations.

“We now have for our partners who may only be selling networking products … a way to bridge between [a customer’s] networking group and the security group in an elegant way,” said Larry Lunetta, vice president of marketing for security solutions at Aruba, a Hewlett Packard Enterprise company.

Lunetta said the 360 Secure Fabric aims “to deal with the new kind of threat environment that organizations are facing,” characterized by an “expanded and expanding attack surface” due to burgeoning trends, such as mobility, cloud and internet of things.

IntroSpect software is central to the Aruba security framework, he said. “The idea is that we are using … machine learning and artificial intelligence to detect attacks that have evaded the rest of the security infrastructure … typically in place in an enterprise.”

IntroSpect is available in a new entry-level Standard edition, as well as an Advanced version, and it integrates with Aruba ClearPass.

We now have for our partners who may only be selling networking products … a way to bridge between [a customer’s] networking group and the security group in an elegant way.
Larry Lunetta, vice president of marketing at Aruba Networks

The Aruba security framework also features Aruba Secure Core, the foundational security capabilities embedded in Aruba’s Wi-Fi access points, wireless controllers and switches, according to the vendor.

Functionality in multivendor environments is a key aspect of the Aruba 360 Secure Fabric strategy. Through Aruba’s 360 Security Exchange Program, customers and partners can integrate Aruba’s portfolio with more than 120 non-Aruba security and infrastructure products, Lunetta said.

“Our partners sell a wide variety of products. Our customers use a wide variety of products. So, the idea is that any of the elements of the [360 Secure Fabric] can … interact [with] a wide variety of non-Aruba technologies.”

He noted Aruba is updating its training and certification program to bring partners up to speed on the Aruba security framework.

Devon uses Microsoft 365 to maximize productivity in a commodity business

Today’s post was written by Matt Harper, director of information security and infrastructure at Devon Energy.

The oil business is a tight market because we get paid the same amount for a barrel of oil as our competitors, and oil prices are depressed right now. This means that we have to operate very efficiently to make money. We’re adopting a cloud-first mindset and using Microsoft 365 Enterprise and Azure to help us do that.

Devon Energy is one of the largest independent oil and natural gas exploration and production companies in North America. We produce about 250,000 barrels of oil and 1.2 billion cubic feet of natural gas a day. We’re based in Oklahoma City, but the vast majority of our 3,500 employees work in the field, at drilling and production sites. We want to make our employees as productive as possible, wherever they are.

Over the years, we’ve deployed many technologies that support Devon’s business. What makes Microsoft 365 Enterprise unique is that it includes Office 365, Windows 10, and Microsoft Enterprise Mobility + Security (EMS), and has the potential to empower all our users.

We’re making Windows 10 our standard operating system for nearly 8,000 desktop and laptop computers. This will give our employees a better cloud experience and allow us to use the security capabilities built into EMS. We’re also looking forward to using Windows 10 roaming profiles to support more remote workforce capabilities. We’re very interested in the advanced authentication features EMS provides, such as biometrics and anomalous user behavior detection, along with its data protection capabilities.

Devon is a data-driven company, and we use sophisticated digital tools to find oil and gas in rock formations and reservoirs. The challenge is to get the right data to the right people at the right time. Office 365 will give our field workers new capabilities to access that data immediately—from any device—to make a drilling decision, repair a well, approve a purchase order, or expedite delivery of needed materials to a work site. Our field employees are making decisions in real time about where to position a drill bit a mile underground to produce the best results, and often they need to consult with engineers and petrotechnical professionals back in Oklahoma City or Calgary, Alberta. By using Skype for Business Online on their mobile devices, they can connect to colleagues 1,000 miles away and get the input they need. Our time-to-productivity has improved because of this easier access to data; in some cases, we’ve reduced hours to minutes.

Another example of empowering our workforce is the grassroots do-it-yourself training videos that have sprung up. This started as a simple communications tool for the IT department. Now, other parts of our business have adopted it, as employees have figured out how to make videos on their smartphones or laptops and share them with colleagues over Skype for Business or SharePoint Online. Soon, a field operator might record a repair or installation procedure and share it with others online. Or someone in accounting might add narration to a Microsoft PowerPoint presentation to make their content more impactful and clear. Our people are our differentiators, and Office 365 helps them collaborate in ways that directly benefit the business.

Power BI is another example. Because it’s built into Microsoft 365 E5, we’ve been able to commoditize dashboard creation so anyone can do it—and that has dramatically increased usage. Dashboards are important to the way our management makes decisions, simplifying access to and understanding of complex data, whether production or financial.

Beyond Microsoft 365 E5, we’re moving line-of-business applications and our disaster recovery operations to Microsoft Azure. This offers rapid application deployment, speed to market, and scalability, with potential for significant cost savings.

Our ability to right-size our technology is key to managing costs in the cloud. We continually buy and sell field assets, so our workforce continually expands and contracts. Previously, we would build for peak capacity and end up with underutilized datacenter resources. With Azure and Microsoft 365 Enterprise, we can right-size our technologies and scale as needed.

As an IT professional, I love it when IT is viewed as a business enabler rather than a cost center. The Microsoft Cloud empowers all our employees in very tangible ways.

—Matt Harper