Tag Archives: field

Building the security operations center of tomorrow—harnessing the law of data gravity

This post was coauthored by Diana Kelley, Cybersecurity Field CTO, and the EMEA Chief Security Advisor, Cybersecurity Solutions Group.

You’ve got a big dinner planned and your dishwasher goes on the fritz. You call the repair company and are lucky enough to get an appointment for that afternoon. The repairperson shows up and says, “Yes, it’s broken, but to figure out why I will need to run some tests.” They start to remove your dishwasher from the outlet. “What are you doing?” you ask. “I’m taking it back to our repair shop for analysis and then repair,” they reply. At this point, you’re annoyed. You have a big party in three hours, and taking the dishwasher all the way back to the shop for analysis means someone will be washing dishes by hand after your party—why not test it right here and right now so it can be fixed on the spot?

Now, imagine the dishwasher is critical business data located throughout your organization. Sending all that data to a centralized location for analysis will give you insights, eventually, but not when you really need it, which is now. In cases where the data is extremely large, you may not be able to move it at all. Instead it makes more sense to bring services and applications to your data. This is at the heart of a concept called “data gravity,” described by Dave McCrory back in 2010. Much like a planet, your data has mass, and the bigger that mass, the greater its gravitational pull, or gravity well, and the more likely apps and services are to be drawn to it. Gravitational movement is accelerated when bandwidth and latency are at a premium, because the closer you are to something, the faster you can process and act on it. This is the big driver of the intelligent cloud/intelligent edge: we bring analytics and compute to connected devices to make use of all the data they collect in near real time.

But what might not be so obvious is what, if anything, data gravity has to do with cybersecurity and the security operations center (SOC) of tomorrow. To have that discussion, let’s step back and look at the traditional SOC, built on security information and event management (SIEM) solutions developed at the turn of the century. The very first SIEM solutions were predominantly focused on log aggregation: log information from core security tools like firewalls, intrusion detection systems, and anti-virus/malware tools was collected from all over a company and moved to a single repository for processing.

That may not sound super exciting from our current vantage point of 2018, but back in 2000 it was groundbreaking. Admins were struggling with an increasing number of security tools and the ever-expanding logs from those tools. Early SIEM solutions gave them a way to collect all that data and apply security intelligence and analytics to it. The hope was that if we could gather all relevant security log and reporting data into one place, we could apply rules, quickly gather insights about threats to our systems, and improve security situational awareness. In a way this was anti-data gravity: the data moved to the applications and services rather than vice versa.

After the initial “hype” phase for SIEM solutions, SOC managers realized a few of their limitations. Writing rules for security analytics proved to be quite hard; a minor error in a rule led to high false positives that ate into analyst investigation time. Many companies were unable to get all the critical log data into the SIEM, leading to false negatives and expensive blind spots. And one of the biggest concerns with traditional SIEM was latency. SIEM solutions were marketed as “real-time” analytics, but once an action was written to a log, collected, sent to the SIEM, and then parsed through the SIEM analytics engine, quite a bit of latency had been introduced. When it comes to responding to fast-moving cyberthreats, latency is a distinct disadvantage.

Now think about these challenges and add the explosive amounts of data generated today by the cloud and millions of connected devices. In this environment it’s not uncommon that threat campaigns go unnoticed by an overloaded SIEM analytics engine. And many of the signals that do get through are not investigated because the security analysts are overworked. Which brings us back to data gravity.

What was one of the forcing factors for data gravity? Low tolerance for latency. What was the other? Building applications by applying insights and machine learning to data. So how can we build the SOC of tomorrow? By respecting the law of data gravity. If we can perform security analytics close to where the data already is, we can increase the speed of response. This doesn’t mean the end of aggregation. Tomorrow’s SOC will employ a hybrid approach by performing analytics as close to the data mass as possible, and then rolling up insights, as needed, to a larger central SOC repository for additional analysis and insight across different gravity wells.
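
To make that hybrid pattern concrete, here is a minimal sketch in plain Python, assuming entirely hypothetical event and insight shapes: raw events are analyzed where they are generated, and only compact insights roll up to the central repository for cross-site correlation. It illustrates the data-gravity principle, not any particular product.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Insight:
    """Compact summary rolled up to the central SOC repository."""
    site: str
    indicator: str
    count: int

def analyze_locally(site: str, events: list[dict]) -> list[Insight]:
    """Run analytics next to the data: scan raw events in place and
    emit only small, high-value summaries (failed-logon spikes here)."""
    failures = Counter(e["user"] for e in events if e.get("status") == "failed_logon")
    return [Insight(site, f"failed_logons:{user}", n)
            for user, n in failures.items() if n >= 3]

def central_correlate(insights: list[Insight]) -> list[str]:
    """The central SOC works on roll-ups only: flag indicators seen at several sites."""
    sites_per_indicator = Counter(i.indicator for i in insights)
    return [ind for ind, n in sites_per_indicator.items() if n > 1]

# Example: two gravity wells, each analyzed in place; only insights move.
edge_events = {
    "factory-edge": [{"user": "alice", "status": "failed_logon"}] * 4,
    "hq-datacenter": [{"user": "alice", "status": "failed_logon"}] * 5
                     + [{"user": "bob", "status": "ok"}],
}
rollup = [i for site, events in edge_events.items() for i in analyze_locally(site, events)]
print(central_correlate(rollup))  # ['failed_logons:alice']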

Does this sound like an intriguing idea? We think so. Being practitioners, though, we most appreciate when great theories can be turned into real-world implementations. Please stay tuned for part 2 of this blog series, where we take the concept of tomorrow’s SOC and data gravity into practice for today.

Organize Active Directory with these strategies


It’s a familiar refrain for many in the IT field: You start a new job and have to clean up the previous administrator’s handiwork, such as their Active Directory group configuration.

You might inherit an Active Directory group strategy from an admin who didn’t think the process through, leaving you with a setup that doesn’t reflect the usage patterns of your users. Administrators who take the time to organize Active Directory organizational units and groups in a more coherent fashion will simplify their workload by making it easier to audit Active Directory identities and minimize the Active Directory attack surface.

Here are some practical tips and tricks to streamline your Active Directory (AD) administrator work and support your security compliance officers.

The traditional Active Directory design pattern

To start, always organize individual user accounts into groups. Avoid giving access permissions to individual user accounts because that approach does not scale.

Figure 1 shows Microsoft’s recommendation to organize Active Directory user accounts for resource access.

Figure 1. Microsoft recommends the account, global, domain local, permission security model to organize Active Directory user accounts.

The account, global, domain local, permission (AGDLP) model uses the following workflow:

  • Organize users into global groups based on business criteria, such as department and location.
  • Place the appropriate global groups into domain local groups on resource servers based on similar resource access requirements.
  • Grant resource permissions to domain local groups only.

Note how this model uses two different scopes. Global groups organize AD users at the domain level, and domain local groups organize global groups at the access server level, such as a file server or a print server.
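
As a minimal sketch of AGDLP in practice, the following uses the open source ldap3 Python library against a domain controller; the server name, credentials, OU path, user and group names are all placeholders, while the groupType values are the standard Active Directory flags for global and domain local security groups. Granting the actual NTFS or share permission to the domain local group still happens on the resource server itself.

from ldap3 import Server, Connection, MODIFY_ADD

# Placeholder connection details; adjust to your own domain.
server = Server("dc01.contoso.com")
conn = Connection(server, user="CONTOSO\\admin", password="***", auto_bind=True)

OU = "OU=Groups,DC=contoso,DC=com"
GLOBAL_SECURITY = -2147483646        # 0x80000002: global security group
DOMAIN_LOCAL_SECURITY = -2147483644  # 0x80000004: domain local security group

# A -> G: users go into a global group that mirrors a business unit.
gl_dn = f"CN=GL_Accounting_Users,{OU}"
conn.add(gl_dn, "group", {"sAMAccountName": "GL_Accounting_Users",
                          "groupType": GLOBAL_SECURITY})
conn.modify(gl_dn, {"member": [(MODIFY_ADD,
            ["CN=Jane Doe,OU=Users,DC=contoso,DC=com"])]})

# DL: a domain local group models one access requirement on a resource server.
dl_dn = f"CN=DL_FinanceShare_Read,{OU}"
conn.add(dl_dn, "group", {"sAMAccountName": "DL_FinanceShare_Read",
                          "groupType": DOMAIN_LOCAL_SECURITY})

# G -> DL: nest the global group inside the domain local group.
# P: the read permission itself is then granted to DL_FinanceShare_Read
# on the file server, never to individual user accounts.
conn.modify(dl_dn, {"member": [(MODIFY_ADD, [gl_dn])]})
print(conn.result)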

Employ role-based access control principles

Role-based access control (RBAC) grants access to groups based on job role. For example, consider network printer access:

  • Most users need only the ability to submit and manage their own print jobs.
  • Some users have delegated privileges to manage the entire print queue.
  • Select users have full administrative access to the printer’s hardware and software.

Microsoft helps with some of the planning work by prepopulating RBAC roles in Active Directory. For instance, installing the DNS Server role creates several sub-administrative groups in Active Directory, such as DnsAdmins and DnsUpdateProxy.
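
To illustrate the printer example above, the role-to-group mapping can be captured as data. The group names below are hypothetical; Print, Manage Documents and Manage this Printer are the standard Windows print permission levels that would be granted to each domain local group.

# Hypothetical role -> group -> printer permission mapping for one print queue.
PRINTER_RBAC = {
    "print user":        {"group": "DL_Prn_Finance_Print",
                          "permissions": ["Print"]},
    "print queue admin": {"group": "DL_Prn_Finance_ManageDocs",
                          "permissions": ["Print", "Manage Documents"]},
    "printer admin":     {"group": "DL_Prn_Finance_FullAdmin",
                          "permissions": ["Print", "Manage Documents",
                                          "Manage this Printer"]},
}

def group_for_role(role: str) -> str:
    """Return the domain local group a user should join for a given job role."""
    return PRINTER_RBAC[role]["group"]

print(group_for_role("print queue admin"))  # DL_Prn_Finance_ManageDocs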


How to set up users and groups in Active Directory

Instead of relying on prebuilt groups, think about the user population and how to design global and domain local groups. Try to organize Active Directory global groups according to business rules and domain local groups based on access roles.

You might have global groups defined for each business unit at your organization, including IT, accounting, legal, manufacturing and human resources. You might also have domain local groups based on specific job tasks: print queue managers, print users, file access managers, file readers, database reporters and database developers.

When you organize Active Directory, the goals are to describe both the user population and their resource access requirements completely and accurately while you keep the number of global and domain local groups as small as possible to reduce the management workload.

Keep group nesting to a minimum if possible

You should keep group nesting to a minimum because it increases your administrative overhead and makes it more difficult to troubleshoot effective access. You should only populate global groups with individual Active Directory user accounts and only populate domain local groups with global groups.

Figure 2. The effective access tab displays the effective permissions for groups, users and device accounts.

The Windows Server and client operating systems have a feature called effective access, found in the advanced security settings dialog box in a file or folder’s properties sheet. You model effective access for a particular user, group or computer account from this location. But analyzing multiple folders with this feature doesn’t scale. You have to run it multiple times to analyze permissions.
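
When you do need to audit who ends up with access through layers of nesting, one option is to let the domain controller expand the nesting for you. The sketch below, again using ldap3 with placeholder connection details and a placeholder group DN, relies on the LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941), which Active Directory resolves transitively on the server side.

from ldap3 import Server, Connection, SUBTREE

conn = Connection(Server("dc01.contoso.com"),
                  user="CONTOSO\\admin", password="***", auto_bind=True)

# LDAP_MATCHING_RULE_IN_CHAIN expands nested membership on the domain
# controller, however deep the nesting goes.
group_dn = "CN=DL_FinanceShare_Read,OU=Groups,DC=contoso,DC=com"
flt = ("(&(objectCategory=person)(objectClass=user)"
       f"(memberOf:1.2.840.113556.1.4.1941:={group_dn}))")

conn.search(search_base="DC=contoso,DC=com", search_filter=flt,
            search_scope=SUBTREE, attributes=["sAMAccountName"])

# Every user who effectively belongs to the domain local group,
# directly or through any chain of nested groups.
for entry in conn.entries:
    print(entry.sAMAccountName)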

In a multi-domain environment, nesting is unavoidable. Stick to single domain topologies when possible.

Figure 3. A cross-domain resource access configuration in Active Directory offers more flexibility to the administrator.

I recommend the topology in Figure 3 because, while global groups can contain Active Directory user accounts only from their own domain, you can add global groups to discretionary access control lists in any domain in the forest.

Here’s what’s happening in the topology in Figure 3:

  • A: Global groups represent marketing department employees in the contoso.com and corp.contoso.com domains.
  • B: We create a domain local group on our app server named Mktg App Access and populate it with both global groups.
  • C: We assign permissions on our line-of-business marketing app to the Mktg App Access domain local group.


You might wonder why there is no mention of universal groups. I avoid them because they slow down user logon times due to global catalog universal group membership lookups. Universal groups also make it easy to be sloppy during group creation and with resource access strategy.

How to design for the hybrid cloud

Microsoft offers Azure Active Directory for cloud identity services that you can synchronize with on-premises Active Directory user and group accounts, but Azure AD does not support organizational units. Azure AD uses a flat list of user and group accounts that works well for identity purposes.

With this structure in mind, proper user and group naming is paramount. You should also sufficiently populate Active Directory properties to make it easier to manage these accounts in the Azure cloud.
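
A small audit such as the sketch below, using ldap3 with placeholder connection details and an example attribute list, can flag on-premises users whose directory properties are still empty before they synchronize to Azure AD, where sparse attributes make filtering and reporting harder.

from ldap3 import Server, Connection, SUBTREE

conn = Connection(Server("dc01.contoso.com"),
                  user="CONTOSO\\admin", password="***", auto_bind=True)

# Attributes we expect to be populated before syncing; adjust to taste.
REQUIRED = ["department", "title", "mail"]

for attr in REQUIRED:
    # Users of type person where the attribute has no value at all.
    flt = f"(&(objectCategory=person)(objectClass=user)(!({attr}=*)))"
    conn.search("DC=contoso,DC=com", flt, search_scope=SUBTREE,
                attributes=["sAMAccountName"])
    missing = [str(e.sAMAccountName) for e in conn.entries]
    print(f"{attr}: {len(missing)} user(s) missing", missing[:5])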

When you need to organize Active Directory groups, develop a naming convention that makes sense to everyone on your team and stick to it.

One common group naming pattern involves prefixes. For example, you might start all your global group names with GL_ and your domain local group names with DL_. If you use Exchange Server, then you will have distribution groups in addition to the AD security groups. In that instance, you could use the DI_ prefix.
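
A convention is only useful if it is enforced. A check as simple as the following sketch, plain Python with the GL_/DL_/DI_ prefixes above as the example convention, can run on a schedule and report groups that drift from it.

import re

# Example convention: GL_ global, DL_ domain local, DI_ distribution groups,
# followed by descriptive words separated by underscores.
NAME_PATTERN = re.compile(r"^(GL|DL|DI)_[A-Za-z0-9]+(_[A-Za-z0-9]+)*$")

def nonconforming(group_names):
    """Return every group name that breaks the naming convention."""
    return [name for name in group_names if not NAME_PATTERN.match(name)]

groups = ["GL_Accounting_Users", "DL_FinanceShare_Read",
          "DI_AllStaff", "marketing team", "Print Ops"]
print(nonconforming(groups))  # ['marketing team', 'Print Ops']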

The Seattle Seahawks use data and sports science to help players work as hard at recovery as they do on the field

DeShawn Shead on the playfield at Virginia Mason Athletic Center.

Back on the field, getting players and coaches to buy into the sports science system relies on the team’s sports scientists being able to quickly show them data through Power BI’s visualization options. They can easily compare different days and show progress or declines in very specific areas.

“The information needs to be digestible to players and coaches, so they can depend on it,” Ramsden says. “We want the player to be able to look at the Power BI visualization and then change it to go, yeah, but what had happened when this happened to me?,” adds Riddle. “You know, we want them to have that experience, and we’re really close to that now, so it’s a pretty exciting time. Our partnership with Microsoft has allowed us to build this really solid foundation so that we can make this leap to a new generation of athletes who are deeply connected to their own performance every single day.”

Another thing that helps the staff wrap their heads around the benefits of the program is its five elements: STEMS, which stands for Sleep and recovery, Thinking, Eating, Moving (movements) and Sensing (vision training). Understanding that not every person on the team wants to get better at every one of those things, the sports science staff is able to tailor the program to individual players and what they want to improve.

A close-up of DeShawn Shead.

One of the program’s believers is Shead. When the “Most Outstanding Defensive Back” from Portland State University joined the Seahawks in 2012, he fell right into the team mindset.

“This team has a championship mindset. This team treats every single game as a championship game. Every game matters,” he says. “You trust in the preparation, the time we put into studying, working out, what we put into our bodies. The key is to stay consistent. By the time we get to game day, we’ve done so much.”

Even though he hasn’t been on the field with his teammates this season, he’s put the same effort into his recovery as if he’s out there with them.

“Every day, I find the positive,” says Shead, who began playing football in the sixth grade for the Palmdale Tigers with his older brothers, in southern California. “I see my teammates play and practice, and I want to get back out there. I feel like I’m going to come back better.”

Shead is a self-proclaimed “science guy” who loves numbers. He loves being able to track his progress. His “baby steps” have included learning to walk again after surgery, then jogging, then running and, now, doing sprints.

“In terms of DeShawn, we’ve been able to use an organizational approach instead of one person’s opinion. We’ve collected data throughout his entire rehab, coming back off of his knee injury. We’re able to take that data and compare it to his position mates,” Ramsden says. “We’re also able to compare it to himself. And I think when you have all that, plus, when you can visually see that maybe there’s some kind of little hitch, you have a reason for it. We’re not guessing anymore. We have the professionals in the building to address the concern that you’re seeing, because there’s data to suggest that he may not be able to power off that leg as well as the other leg, for instance.”

Sports science technology gives Shead the ability to see the difference in power between his right and left legs, and between his muscle and hamstring strength. With this information, his trainers can recommend treatment plans, such as more or less reps in specific areas, which help him recover better.

“Having these technologies and putting it into rehabbing has been great for me, because I have something to gauge and to go off of. I love numbers, so when I can see that one is stronger than the other, that means I gotta do a little bit more reps on the other one,” Shead says. “I think this information is a great tool in rehab that helps guide me to get back on the field.”

When he goes up to the sports science department, he goes through a gauntlet of machines that test his hand/eye coordination (it looks like a video game), as well as how high and fast he can jump on each foot.

“The process is a challenge, and I love challenges,” Shead says.

Aparavi takes three-piece approach to cloud data protection

Newcomer Aparavi jumped into the cloud data protection field today, following in the footsteps of Cohesity and Rubrik in trying to buck established backup vendors.

Rather than an appliance-based approach, Aparavi launched a software-as-a-service platform aimed at a lower end of the cloud data management market than enterprise-focused Rubrik and Cohesity. But like Rubrik, Cohesity and larger data protection vendors, such as Veritas and Commvault, Aparavi wants to store, protect and manage secondary data across on-premises platforms, private clouds and public clouds.

Aparavi hops into ‘hot market’

Aparavi’s leadership team comes from NovaStor, which moved into online backup for small companies nearly a decade ago.

Jonathan Calmes, Aparavi’s vice president of business development, said it’s not enough to just move backup data into a public cloud. Organizations also need to manage the data after it’s in the cloud. While Rubrik and Cohesity can help enterprises do that, he said, that capability does not exist for smaller organizations.


“The world has changed enough, but current products out there have not,” he said. “Cohesity and Rubrik are focused so far up market that they leave a large amount of the market unaddressed. Today, data is hosted on servers in private clouds, public clouds and on premises. Data is fragmented in many locations. This is the new normal.”

Calmes said Aparavi pricing starts at $999 per year for 3 TB of protected source data, with 1 TB free forever. He said with new clouds such as Wasabi focused on lower pricing than Amazon, Google and Microsoft, customers will demand lower-priced data retention, as well.

Still, Aparavi will need a compelling platform to avoid getting squeezed between established cloud data management leaders and the next-generation products of Cohesity and Rubrik.

Steven Hill, senior storage analyst for 451 Research, said Aparavi has picked the right market. Now, it has to show it has the right approach.

“It’s the hot market now,” Hill said of cloud data management. “The industry is evolving away from traditional backup and recovery to a combination of backup and multicloud availability. But the trick is how to go about it.

“The million-dollar question is, how are their policies being applied, and how much control do they give you in tuning the system to your environment? Are they inventing a better mousetrap, or just a different-colored mousetrap?”

The Aparavi dashboard tracks files protected on premises and in public and private clouds.

The Aparavi approach

Aparavi’s three-piece “mousetrap” consists of a web-hosted platform, an on-premises software appliance and client software. Aparavi can host the platform, or it can be located at a hosted cloud, any Amazon Simple Storage Service-compliant object storage or a customer’s disk target. Calmes said he expects most customers to choose Aparavi as the host. The platform handles the communication for the architecture, orchestrating reporting, alerts and provisioning.

The virtual appliance serves as the relationship manager, using file deduplication and byte-level incremental technology to only move changed data. It also handles data streaming to improve performance.
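
Aparavi’s engine is proprietary, but the general idea of moving only changed data can be illustrated with a simple fixed-size chunk-and-hash sketch in Python; the file path is a placeholder, and real products tune chunk sizes and use far more sophisticated change detection.

import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks; illustrative only

def chunks_to_send(path: str, known_hashes: set[str]) -> list[tuple[int, bytes]]:
    """Hash each fixed-size chunk; return only chunks not seen before."""
    new_chunks = []
    with open(path, "rb") as f:
        offset = 0
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in known_hashes:
                known_hashes.add(digest)
                new_chunks.append((offset, chunk))
            offset += len(chunk)
    return new_chunks

# First run sends everything; later runs send only new or modified chunks.
seen: set[str] = set()
print(len(chunks_to_send("/var/data/big.db", seen)))  # all chunks
print(len(chunks_to_send("/var/data/big.db", seen)))  # 0 if unchanged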

The client software runs on a protected file server, acting as a temporary recovery location for quick restores. It is also the AES-256 encryption source, so data is not exposed in transit or at rest.
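
The client-side encryption pattern itself, encrypting before data ever leaves the source so that neither transit nor the storage target sees plaintext, can be sketched with the Python cryptography library’s AES-256-GCM primitive. This illustrates the pattern only and is not Aparavi’s code; the chunk IDs and key handling here are simplified assumptions.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key stays with the client; only ciphertext and nonce go to storage.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_chunk(plaintext: bytes, chunk_id: str) -> tuple[bytes, bytes]:
    """Encrypt one backup chunk; bind the chunk id as authenticated data."""
    nonce = os.urandom(12)  # must be unique per encryption with this key
    return nonce, aesgcm.encrypt(nonce, plaintext, chunk_id.encode())

def decrypt_chunk(nonce: bytes, ciphertext: bytes, chunk_id: str) -> bytes:
    return aesgcm.decrypt(nonce, ciphertext, chunk_id.encode())

nonce, ct = encrypt_chunk(b"payroll records", "file42-chunk0")
assert decrypt_chunk(nonce, ct, "file42-chunk0") == b"payroll records"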

Calmes said Aparavi’s point-in-time recovery software can recover data from any cloud or on-premises storage, migrate it to a different cloud or on-premises site, and rebuild it based on the time and date it was last protected. Aparavi software takes snapshots as frequently as every 15 minutes, and it can keep those snaps local for quick recovery.

Calmes said the product can move data between clouds without interruption, and it has an open data format, so third-party tools can read data without using Aparavi.

Aparavi’s platform supports Amazon Web Services, Google Cloud Platform, Wasabi, IBM Bluemix, Scality and Cloudian cloud storage.

Besides the 3 TB plan, Aparavi offers annual subscription plans of 10 TB for $2,500 and 25 TB for $4,500. That does not include public cloud subscriptions. Although formally launched with limited availability today, the platform won’t be generally available until January.

Aparavi, which is based in Santa Monica, Calif., has $3 million in funding from a private investor on a $30 million valuation. Calmes said the startup has 15 employees, mostly engineers.

Aparavi chairman Adrian Knapp, CTO Rod Christensen and Calmes all come from NovaStor.