Tag Archives: Management

SIEM evaluation criteria: Choosing the right SIEM products

Security information and event management products and services collect, analyze and report on security log data from a large number of enterprise security controls, host operating systems, enterprise applications and other software used by an organization. Some SIEMs also attempt to stop attacks in progress that they detect, potentially preventing compromises or limiting the damage that successful compromises could cause.

There are many SIEM systems available today, including light SIEM products designed for organizations that cannot afford or do not feel they need a fully featured SIEM added to their current security operations.

Because light SIEM products offer fewer capabilities and are much easier to evaluate, they are outside the scope of this article. Instead, this feature examines the capabilities of full-featured SIEMs and can serve as a guide for creating SIEM evaluation criteria; compared with other security technologies, SIEMs merit particularly close evaluation.

It can be quite a challenge to figure out which products to evaluate, let alone to choose the one that’s best for a particular organization or team. Part of the evaluation process involves creating a list of SIEM evaluation criteria potential buyers can use to highlight important capabilities.

1. How much native support does the SIEM provide for relevant log sources?

A SIEM’s value is diminished if it cannot receive and understand log data from all of the log-generating sources in the organization. Most obvious is the organization’s enterprise security controls, such as firewalls, virtual private networks, intrusion prevention systems, email and web security gateways, and antimalware products.

It is reasonable to expect a SIEM to natively understand log files created by any major product or cloud-based service in these categories. If the tool does not, it should have no role in your security operations.

In addition, a SIEM should provide native support for log files from the organization’s operating systems. An exception is mobile device operating systems, which often do not provide any security logging capabilities.

SIEMs should also natively support the organization’s major database platforms, as well as any enterprise applications that enable users to interact with sensitive data. Native SIEM support for other software is generally nice to have, but it is not mandatory.

If a SIEM does not natively support a log source, then the organization can either develop customized code to provide the necessary support or use the SIEM without the log source’s data.
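
For log sources a SIEM does not understand natively, the custom code usually amounts to a small parser that converts each raw log line into the common event format the SIEM expects. The sketch below is a minimal, hypothetical Python example; the pipe-delimited input format and the field names are assumptions for illustration, not any particular vendor's schema.

    import json
    from datetime import datetime, timezone

    def parse_custom_log(line):
        """Convert one pipe-delimited log line from an unsupported source
        into a generic event dictionary a SIEM collector could ingest.
        The input format (epoch|host|severity|message) is hypothetical."""
        ts, host, severity, message = line.rstrip("\n").split("|", 3)
        return {
            "timestamp": datetime.fromtimestamp(int(ts), tz=timezone.utc).isoformat(),
            "host": host,
            "severity": severity.upper(),
            "message": message,
            "source": "legacy-app",  # label so analysts know where the event came from
        }

    if __name__ == "__main__":
        sample = "1532908800|app01|warn|3 failed logins for user admin"
        print(json.dumps(parse_custom_log(sample)))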

2. Can the SIEM supplement existing logging capabilities?

An organization’s particular applications and software may lack robust logging capabilities. Some SIEM systems and services can supplement these by performing their own monitoring in addition to their regular job of log management.

In essence, this extends the SIEM from being strictly a centralized log collection, analysis and reporting tool to also generating raw log data on behalf of other hosts.
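
Conceptually, this supplemental monitoring is just an agent that watches activity the host itself does not log and emits events to the central collector. A rough sketch, using only the Python standard library and printing in place of a real collector connection, might poll a sensitive directory for changes (the directory path is a placeholder):

    import json
    import os
    import time

    WATCH_DIR = "/var/www/uploads"  # hypothetical directory the host does not audit itself

    def snapshot(path):
        """Map each entry in the directory to its last-modified time."""
        return {name: os.path.getmtime(os.path.join(path, name))
                for name in os.listdir(path)}

    def watch(path, interval=30):
        """Emit a JSON event whenever a file appears or changes.
        Printing stands in for shipping the event to the SIEM collector."""
        previous = snapshot(path)
        while True:
            time.sleep(interval)
            current = snapshot(path)
            for name, mtime in current.items():
                if name not in previous or previous[name] != mtime:
                    print(json.dumps({"event": "file_change",
                                      "path": os.path.join(path, name),
                                      "mtime": mtime}))
            previous = current

    if __name__ == "__main__":
        watch(WATCH_DIR)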

3. How effectively can the SIEM make use of threat intelligence?

Most SIEMs are capable of ingesting threat intelligence feeds. These feeds, which are often acquired from separate subscriptions, contain up-to-date information on threat activity observed all over the world, including which hosts are being used to stage or launch attacks and what the characteristics of these attacks are. The greatest value in using these feeds is enabling the SIEM to identify attacks more accurately and to make more informed decisions, often automatically, about which attacks need to be stopped and what the best method is to stop them.

Of course, the quality of threat intelligence varies between vendors. Factors to consider when evaluating threat intelligence should include how often the threat intelligence updates and how the threat intelligence vendor indicates its confidence in the malicious nature of each threat.
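
In practice, applying a feed comes down to matching observed indicators against the feed's entries and weighting matches by the vendor's confidence score. A minimal sketch, with an invented feed format and documentation-range IP addresses, might look like this:

    # Hypothetical threat intelligence entries: indicator -> confidence (0.0 - 1.0)
    THREAT_FEED = {
        "203.0.113.10": 0.95,  # known command-and-control server (example address)
        "198.51.100.7": 0.40,  # low-confidence scanner
    }

    def score_event(event, feed=THREAT_FEED, threshold=0.8):
        """Return (matched, confidence) for an event's remote IP.
        Only high-confidence matches would be candidates for automated blocking."""
        confidence = feed.get(event.get("remote_ip"), 0.0)
        return confidence >= threshold, confidence

    event = {"remote_ip": "203.0.113.10", "action": "outbound_connection"}
    block, confidence = score_event(event)
    print(f"block={block} confidence={confidence}")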

4. What forensic capabilities can SIEM products provide?

Forensic capabilities are an evolving SIEM evaluation criterion. Traditionally, SIEMs have only collected data provided by other log sources.

However, recently some SIEM systems have added various forensic capabilities that can collect their own data regarding suspicious activity. A common example is the ability to do full packet captures for a network connection associated with malicious activity. Assuming that these packets are unencrypted, a SIEM analyst can then review their contents more closely to better understand the nature of the packets.
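
As a rough illustration of triggered capture, the snippet below uses the third-party Scapy library to record a fixed number of packets to or from a suspect host and write them to a pcap file an analyst can open later. The host address and file name are placeholders, and a commercial SIEM would do this with its own capture engine rather than a script like this.

    from scapy.all import sniff, wrpcap  # requires scapy and packet-capture privileges

    def capture_suspect_traffic(host_ip, out_file="suspect.pcap",
                                max_packets=200, timeout=60):
        """Capture up to max_packets involving the suspect host, then save a pcap.
        Reviewing the contents only helps if the traffic is unencrypted."""
        packets = sniff(filter=f"host {host_ip}", count=max_packets, timeout=timeout)
        wrpcap(out_file, packets)
        return len(packets)

    if __name__ == "__main__":
        captured = capture_suspect_traffic("203.0.113.10")  # placeholder address
        print(f"wrote {captured} packets to suspect.pcap")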

Another aspect of forensics is host activity logging; the SIEM product can perform such logging at all times, or the logging could be triggered when the SIEM tool suspects suspicious activity involving a particular host.

5. What features do SIEM products provide to assist with performing data analysis?

SIEM products that are used for incident detection and handling should provide features that help users to review and analyze the log data for themselves, as well as the SIEM’s own alerts and other findings. One reason for this is that even a highly accurate SIEM will occasionally misinterpret events and generate false positives, so people need to have a way to validate the SIEM’s results.

Another reason for this is that the users involved in security analytics need helpful interfaces to facilitate their investigations. Examples of such interfaces include sophisticated search capabilities and data visualization capabilities.
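
Even a basic search capability reduces to filtering the normalized event store on a few fields and a time window. The sketch below, over an in-memory list of event dictionaries with assumed field names, shows the kind of query an analyst-facing interface wraps in a friendlier form:

    from datetime import datetime

    def search_events(events, host=None, severity=None, since=None, text=None):
        """Return events matching all supplied criteria (field names are illustrative)."""
        results = []
        for e in events:
            if host and e["host"] != host:
                continue
            if severity and e["severity"] != severity:
                continue
            if since and datetime.fromisoformat(e["timestamp"]) < since:
                continue
            if text and text.lower() not in e["message"].lower():
                continue
            results.append(e)
        return results

    events = [
        {"timestamp": "2018-07-30T14:02:00", "host": "web01",
         "severity": "HIGH", "message": "repeated failed logins"},
        {"timestamp": "2018-07-30T14:05:00", "host": "db01",
         "severity": "LOW", "message": "backup completed"},
    ]
    print(search_events(events, severity="HIGH", text="failed"))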

6. How timely, secure and effective are the SIEM’s automated response capabilities?

Another SIEM evaluation criterion is the product’s automated response capabilities. Evaluating them is often an organization-specific endeavor because automated response is highly dependent on the organization’s network architecture, network security controls and other aspects of security management.

For example, a particular SIEM product may not have the ability to direct an organization’s firewall or other network security controls to terminate a malicious connection.
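
When the integration does exist, the automated response is typically just an authenticated API call from the SIEM to the security control's management interface. The sketch below assumes a purely hypothetical REST endpoint and token; real firewalls each expose their own APIs, and in practice the SIEM vendor's packaged integration would be used instead.

    import json
    import urllib.request

    FIREWALL_API = "https://firewall.example.internal/api/v1/block"  # hypothetical endpoint
    API_TOKEN = "REPLACE_ME"                                         # placeholder credential

    def block_ip(ip_address, reason):
        """Ask the firewall to drop traffic from ip_address.
        The channel should be TLS-protected and authenticated, per the checklist below."""
        payload = json.dumps({"ip": ip_address, "reason": reason}).encode("utf-8")
        request = urllib.request.Request(
            FIREWALL_API,
            data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {API_TOKEN}"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status == 200

    # block_ip("203.0.113.10", "SIEM rule: brute force followed by data transfer")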

Besides ensuring the SIEM product can communicate its needs to the organization’s other major security controls, it is also important to consider the following characteristics:

  • How long does it take the SIEM to detect an attack and direct the appropriate security controls to stop it?
  • How are the communications between the SIEM and the other security controls protected so as to prevent eavesdropping and alteration?
  • How effective is the SIEM product at stopping attacks before damage occurs?

7. Which security compliance initiatives does the SIEM support with built-in reporting?

Most SIEMs offer highly customizable reporting capabilities. Many of these products also offer built-in support to generate reports that meet the requirements of various security compliance initiatives. Each organization should identify which initiatives are applicable and then ensure that the SIEM product supports as many of these initiatives as possible.

For any initiatives that the SIEM does not support, make sure that the SIEM product supports the proper customizable reporting options to meet your requirements.

Do your homework and evaluate

SIEMs are complex technologies that require extensive integration with enterprise security controls and numerous hosts throughout an organization. To evaluate which tool is best for your organization, it may be helpful to define basic SIEM evaluation criteria. There is not a single SIEM product that is the best system for all organizations; every environment has its own combination of IT characteristics and security needs.

Even the main reason for having a SIEM, such as meeting compliance reporting requirements or aiding in incident detection and handling, may vary widely between organizations. Therefore, each organization should do its own evaluation before acquiring a SIEM product or service. Examine the offerings from several SIEM vendors before even considering deployment.

This article presents several SIEM evaluation criteria that organizations should consider, but other criteria may also be necessary. Think of these as a starting point for the organization to customize and build upon to develop its own list of SIEM evaluation criteria. This will help ensure the organization chooses the best possible SIEM product.

Mavenlink M-Bridge tethers professional services automation silos

Embedded API integration is a significant trend across the software management universe, used by marquee-brand independent software vendors such as Salesforce and Red Hat to break through data access and delivery barriers. Now, API integration has arrived in professional services automation platforms.

Designed for service organizations, such as law firms and nonprofits, professional services automation (PSA) software provides resource management, project management and project billing capabilities for enterprise applications. Organizations typically implement PSA platforms in silos and invest in integration PaaS (iPaaS) or integration middleware to connect with enterprise applications via prebuilt integration APIs.

Mavenlink, a cloud PSA platform provider in Irvine, Calif., hopes to bridge this connectivity gap with M-Bridge, an OpenAPI integration platform to help businesses standardize the data flow between operational platforms. Partner or customer integrations built into Mavenlink using M-Bridge are approved and added to other packaged integrations for other customers to use.

Systems of record, such as sales and financial systems, are typical targets for M-Bridge prebuilt integrations. Examples include integration with an accounting system to help manage and monitor expenses, project billings and a project burn rate, or a link with a customer relationship management system to provide alerts about critical needs, such as new staffing requirements for delivering a project.

Most PSA vendors publish integration APIs and packaged integrations to enterprise applications, such as Salesforce and Microsoft Dynamics. M-Bridge fills PSA users’ need for standardized API-based integration, which can allow reuse of integration models from one project to another, said John Ragsdale, vice president of service technology research for TSIA, an IT research firm in San Diego.

Connecting API integration to software management tools hits business users’ sweet spot for functionality and pricing, which sits between a simple set of published integration APIs on the low end and enterprise-level iPaaS and integration middleware on the high end. PSA is the latest sector of software management tools enhanced with enterprise-level API integration. Earlier this year, Salesforce added standardized API integration capabilities to its software line with its MuleSoft acquisition, and Red Hat fused integration capabilities into its 3Scale API management product.

M-Bridge is the first domain-specific integration platform in the PSA market, Ragsdale said. Other PSA vendors include FinancialForce, Kimble, Upland and Workday.

API integration increases reusability, speed

Ragsdale said he frequently hears PSA software adopters complain about unmet ROI expectations, which they blame on siloed data, too many applications and a lack of adoption by employees averse to navigating them.

“Streamlining application integration should help companies include more integrations in the initial phase of the implementation, boosting time to value for the project, as well as employee adoption,” he said.

M-Bridge’s prebuilt integrations will help reduce the time to link the Mavenlink platform with other software platforms, said Kim Bernall, product manager at Talisys, a financial sector independent software vendor in Golden, Colo., which uses Mavenlink for resource management during project delivery lifecycles. Each Talisys development project involves the same repetitive tasks; Mavenlink PSA already allows the company to standardize processes across projects and to monitor and track project activities.

“M-Bridge is going to help us organize the API calls that we’re using now in a more integrated fashion,” Bernall said. Talisys started using OpenAPI over a year ago and with Mavenlink’s support created documentation for use cases. “I am so much more self-sufficient in looking at the documentation and creating calls on my own,” she said.

LinkedIn Sales Navigator refresh adds deals pipeline

A LinkedIn Sales Navigator refresh adds a deals management feature, smoother search experience and mobile deal pages to the social media giant’s social sales platform.

The revamp injects an array of new ways to search, manipulate and process LinkedIn’s vast troves of personal and consumer data, along with data from CRM systems, and puts LinkedIn in a better position to monetize that information. The update comes off a hot quarter for LinkedIn, which reported June-quarter revenue of $1.46 billion, up 37% from Q2 2017.

These upgraded features represent the next step in AI-assisted sales and marketing campaigns in which B2B companies mash up their own customer data with information on LinkedIn.

Microsoft banking on LinkedIn revenue

Microsoft bought LinkedIn in June 2016 for $26.2 billion. While Microsoft doesn’t always announce how AI assists the automation of sales-centric search tools in Sales Navigator, a premium LinkedIn feature that also integrates LinkedIn data with CRM platforms such as Salesforce and Dynamics CRM, some experts have noted how AI subtly manifests itself in search.

The LinkedIn Sales Navigator refresh was unveiled in a blog post by Doug Camplejohn, vice president of products for LinkedIn Sales Solutions.

The new “Deals” web interface extracts and imports sales pipeline data from the user’s CRM system and enables users to update pipelines considerably faster, Camplejohn said in the post about the LinkedIn Sales Navigator refresh.

“Reps can now update their entire pipeline in minutes, not hours,” he wrote.

Adobe Sign connector added

Meanwhile, a new feature in Deals, “Buyer’s Circle,” pulls in and displays opportunity role information to streamline the B2B buying process. Users can see if any “key players,” such as decision-makers, influencers or evaluators, are missing from deals, according to LinkedIn.

The vendor called another new function in the LinkedIn Sales Navigator refresh — Office 365 integration — “Sales Navigator in your inbox.”

“We all live in email,” the blog post said. “Now you can take Sales Navigator actions and see key insights without ever leaving your Outlook for Web Inbox.”

LinkedIn also touted what it called a “new search experience” in the Sales Navigator update, saying it redesigned the search function to surface search results pages faster and more easily.

Also as part of the LinkedIn Sales Navigator refresh, LinkedIn added mobile-optimized lead pages for salespeople working on mobile devices. LinkedIn also named Adobe Sign as the fourth partner in its Sales Navigator Application Platform (SNAP). Other SNAP partners include Salesforce, Microsoft Dynamics and SalesLoft.

SIEM benefits include efficient incident response, compliance

Security information and event management systems collect security log events from numerous hosts within an enterprise and store their relevant data centrally. By bringing this log data together, these SIEM products enable centralized analysis and reporting on an organization’s security events.

SIEM benefits include detecting attacks that other systems missed. Some SIEM tools also attempt to stop attacks — assuming the attacks are still in progress.

SIEM products have been available for many years, but initial security information and event management (SIEM) tools were targeted at large organizations with sophisticated security capabilities and ample security analyst staffing. It is only relatively recently that SIEM systems have emerged that are well-suited to meet the needs of small and medium-sized organizations.

SIEM architectures available today include SIEM software installed on a local server, a local hardware or virtual appliance dedicated to SIEM, and a public cloud-based SIEM service.

Different organizations use SIEM systems for different purposes, so SIEM benefits vary across organizations. This article looks at the three top SIEM benefits, which are:

  • streamlining compliance reporting;
  • detecting incidents that would otherwise not be detected; and
  • improving the efficiency of incident handling.

1. Streamline compliance reporting

Many organizations deploy a SIEM for this benefit alone: streamlining enterprise compliance reporting through a centralized logging solution. Each host that needs its logged security events included in reporting regularly transfers its log data to a SIEM server. A single SIEM server receives log data from many hosts and can generate one report that addresses all of the relevant logged security events among those hosts.

An organization without a SIEM system is unlikely to have robust centralized logging capabilities that can create rich customized reports, such as those necessary for most compliance reporting efforts. In such an environment, it may be necessary to generate individual reports for each host or to manually retrieve data from each host periodically and reassemble it at a centralized point to generate a single report.

The latter can be incredibly difficult, in no small part because different operating systems, applications and other pieces of software are likely to log their security events in various proprietary ways, making correlation a challenge. Converting all of this information into a single format may require extensive code development and customization.
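
The conversion work usually means mapping each source's proprietary fields onto one common schema before reporting. The sketch below shows the idea with two invented input formats; a real deployment would need a mapping like this for every log source in scope.

    import csv
    import io
    import json

    def normalize_firewall_csv(row):
        """Map a hypothetical firewall CSV row onto the common schema."""
        return {"timestamp": row["time"], "host": row["device"],
                "action": row["action"], "detail": row["dst_ip"]}

    def normalize_app_json(line):
        """Map a hypothetical application JSON log line onto the same schema."""
        record = json.loads(line)
        return {"timestamp": record["ts"], "host": record["server"],
                "action": record["event"], "detail": record.get("user", "")}

    firewall_log = io.StringIO(
        "time,device,action,dst_ip\n2018-07-30T10:00:00,fw01,deny,203.0.113.5\n")
    app_log = '{"ts": "2018-07-30T10:01:00", "server": "app01", "event": "login_failed", "user": "admin"}'

    unified = [normalize_firewall_csv(r) for r in csv.DictReader(firewall_log)]
    unified.append(normalize_app_json(app_log))
    print(unified)  # one consistent format, ready for a single compliance report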

Another reason why SIEM tools are so useful is that they often have built-in support for the most common compliance efforts. Their reporting capabilities can generate reports that satisfy the requirements of standards such as the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the Sarbanes-Oxley Act.

By using SIEM logs, an organization can save considerable time and resources when meeting its security compliance reporting requirements, especially if it is subject to more than one such compliance initiative.

2. Detect the undetected

SIEM systems are able to detect otherwise undetected incidents.

Many hosts that log security events do not have built-in incident detection capabilities. Although these hosts can observe events and generate audit log entries for them, they lack the ability to analyze the log entries to identify signs of malicious activity. At best, these hosts, such as end-user laptops and desktops, might be able to alert someone when a particular type of event occurs.

SIEM tools offer increased detection capabilities by correlating events across hosts. By gathering events from hosts across the enterprise, a SIEM system can see attacks that have different parts on different hosts and then reconstruct the series of events to determine what the nature of the attack was and whether or not it succeeded.

In other words, while a network intrusion prevention system might see part of an attack and a laptop’s operating system might see another part of the attack, a SIEM system can correlate the log data for all of these events. A SIEM tool can determine if, for example, a laptop was infected with malware which then caused it to join a botnet and start attacking other hosts.
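
A toy version of that correlation is shown below: events from different sources are grouped by the host they describe and checked for an assumed pattern of infection followed by outbound attack traffic. Production correlation rules are far richer, but the principle is the same.

    from collections import defaultdict

    # Normalized events from different sources, keyed by the host they describe
    events = [
        {"host": "laptop42", "source": "antimalware", "type": "malware_detected", "time": 100},
        {"host": "laptop42", "source": "network_ips", "type": "botnet_beacon",    "time": 160},
        {"host": "laptop42", "source": "firewall",    "type": "outbound_scan",    "time": 220},
        {"host": "web01",    "source": "os_log",      "type": "login_success",    "time": 130},
    ]

    def correlate(events, window=300):
        """Flag hosts where a malware detection is followed, within the window,
        by network activity typical of a compromised machine."""
        by_host = defaultdict(list)
        for e in events:
            by_host[e["host"]].append(e)
        incidents = []
        for host, host_events in by_host.items():
            detections = [e for e in host_events if e["type"] == "malware_detected"]
            followups = [e for e in host_events
                         if e["type"] in ("botnet_beacon", "outbound_scan")]
            for d in detections:
                if any(0 < f["time"] - d["time"] <= window for f in followups):
                    incidents.append(host)
                    break
        return incidents

    print(correlate(events))  # ['laptop42']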

It is important to understand that while SIEM tools have many benefits, they should not replace enterprise security controls for attack detection, such as intrusion prevention systems, firewalls and antivirus technologies. On its own, a SIEM tool cannot monitor raw security events as they happen throughout the enterprise in real time; SIEM systems work from the log data recorded by other software.

Many SIEM products also have the ability to stop attacks while they are still in progress. The SIEM tool itself doesn’t directly stop an attack; rather, it communicates with other enterprise security controls, such as firewalls, and directs them to block the malicious activity. This incident response capability enables the SIEM system to stop attacks that other systems elsewhere in the enterprise might not even have noticed.

To take this a step further, an organization can choose to have its SIEM tool ingest threat intelligence data from trusted external sources. If the SIEM tool detects any activity involving known malicious hosts, it can then terminate those connections or otherwise disrupt the malicious hosts’ interactions with the organization’s hosts. This surpasses detection and enters the realm of prevention.

3. Improve the efficiency of incident handling activities

Another of the many SIEM benefits is that SIEM tools significantly increase the efficiency of incident handling, which in turn saves time and resources for incident handlers. More efficient incident handling ultimately speeds incident containment, thus reducing the amount of damage that many security breaches and incidents cause.

A SIEM tool can improve efficiency primarily by providing a single interface to view all the security log data from many hosts. Examples of how this can expedite incident handling include:

  • it enables an incident handler to quickly identify an attack’s route through the enterprise;
  • it enables rapid identification of all the hosts that were affected by a particular attack; and
  • it provides automated mechanisms to stop attacks that are still in progress and to contain compromised hosts.

The benefits of SIEM products make them a necessity

The benefits of SIEM tools enable an organization to get a big-picture view of its security events throughout the enterprise. By bringing together security log data from enterprise security controls, host operating systems, applications and other software components, a SIEM tool can analyze large volumes of security log data to identify attacks, security threats and compromises. This correlation enables the SIEM tool to identify malicious activity that no other single host could because the SIEM tool is the only security control with true enterprise-wide visibility.      

Businesses turn to SIEM tools, meanwhile, for a few different purposes. One of the most common SIEM benefits is streamlined reporting for security compliance initiatives — such as HIPAA, PCI DSS and Sarbanes-Oxley — by centralizing the log data and providing built-in support to meet the reporting requirements of each initiative.

Another common use for SIEM tools is detecting incidents that would otherwise be missed and, when possible, automatically stopping attacks that are in progress to limit the damage.

Finally, SIEM products can also be invaluable to improve the efficiency of incident handling activities, both by reducing resource utilization and allowing real-time incident response, which also helps to limit the damage.

Today’s SIEM tools are available for a variety of architectures, including public cloud-based services, which makes them suitable for use in organizations of all sizes. Considering their support for automating compliance reporting, incident detection and incident handling activities, SIEM tools have become a necessity for virtually every organization.

Cybersecurity and physical security: Key for ‘smart’ venues

When Boston Red Sox President and CEO Sam Kennedy joined the organization in 2001, the team’s management was facing questions about the then-89-year-old Fenway Park.

There was a campaign to tear down Fenway and build a new baseball stadium elsewhere in the city — a plan that was quickly nixed by Red Sox management in favor of one to preserve, protect and enhance the Boston landmark. One big obstacle they had to consider was how to handle potential threats more dangerous than the New York Yankees.

“Our job is to anticipate threats — probably the biggest threat to the sports industry, in general, would be some type of massive security breach or failure,” Kennedy said. “It’s certainly something that keeps us up at night.”

Kennedy made his remarks during the Johnson Controls Smart Ready Panel last week at Fenway Park, where panelists discussed how venues, buildings and cities are striving to become smarter and more sustainable.

To upgrade the park for the 21st century, the Red Sox organization began a project called Fenway 2.0 that would improve the fan experience via technology upgrades, additional seating and renovations to the area surrounding the park.

Another big part of the Fenway 2.0 project was working closely with city officials to protect fans’ cybersecurity and physical security.

“We have incredible partners at the city of Boston,” Kennedy said. “We work very closely with those guys and the regional intelligence center to make sure we’re doing everything we possibly can … to make sure that Fenway is safe.”

Cybersecurity a ‘smart’ priority

During the panel, Johnson Controls’ vice president of global sustainability and industry initiatives, Clay Nesler, pointed to a company-issued survey that showed cybersecurity capabilities were among the top technologies that respondents predicted would have the most influence on smart building and smart city development over the next five years.

Cities and large venues like Fenway Park certainly deliver many benefits to patrons through advanced technology, but these amenities also create potential risk, Nesler added. Several questions have to be answered, he said, before making upgrades to tech such as Wi-Fi capabilities: “Can systems be easily updated with the latest virus protection? Do you really limit user access in a very controllable way? Is the data encrypted?”

Questions such as these are exactly why thinking ahead is essential to smart facility development, said panelist Elinor Klavens, senior analyst at Sports Innovation Lab, based in Boston.

“This is an open space that possibly could have Amazon drones flying over soon. What does that mean for the security of the people inside of it?” Klavens said. “We see venues really struggling to figure out how to secure themselves on that cyber level.”

Technology is certainly an enabler to get smarter about cybersecurity and physical security capabilities, Nesler said, but it’s still up to humans to interpret data. For example, new tech allows venues to create a 3D heat map of exactly how many people are in a 10-square-foot area to determine how fast they’re moving and find ways to avoid large groups slowing down during normal ingress and egress times. This information can also prove very valuable to prepare for emergency evacuations, Nesler said.

“We need to be clever about what’s really valuable to both the operations side and the fans and really be smart-ready in putting [in] place the systems and infrastructure to support things we haven’t even thought of yet,” Nesler said. 

The data access conundrum

The new technology offered by smart venues poses other concerns as well, Kennedy said. For example, fans distracted by looking at their smartphones or digital screens could be putting themselves in danger of being hit by a foul ball at a baseball game, and fans watching events through smart glasses raise potential legal questions about the event’s distribution rights.

This goes back to the importance of communication for a smart venue to be successful, Kennedy said, with building management working together to ensure all of Fenway’s cybersecurity and physical security bases are covered.

“We need to be very, very careful in terms of providing fan safety,” Kennedy said.

And, of course, taking advantage of these technological advances often requires smart venues and cities to analyze a plethora of consumer-generated data. As a result, they must balance tapping into readily available data to improve amenities, cybersecurity and services with privacy concerns, Klavens said.

“Figuring out how to balance what is good for your fans and what is also your public’s appetite for giving up privacy in a public space is another way which we see venues really helping cities improve their understanding about how these new technologies will be deployed,” Klavens said.

Manage Hyper-V containers and VMs with these best practices

Containers and VMs should be treated as the separate instance types they are, but there are management strategies that work for both and that admins should incorporate.


Containers and VMs are best suited to different workload types, so it makes sense that IT administrators would use both in their virtual environments, but that adds another layer of complexity to consider.

One of the most notable features introduced in Windows Server 2016 was support for containers. At the time, it seemed that the world was rapidly transitioning away from VMs in favor of containers, so Microsoft had little choice but to add container support to its flagship OS.

Today, organizations use both containers and VMs. But for admins that use a mixture, what’s the best way to manage Hyper-V containers and VMs?

To understand the management challenges of supporting both containers and VMs, admins need to understand a bit about how Windows Server 2016 works. From a VM standpoint, Windows Server 2016 Hyper-V isn’t that different from the version of Hyper-V included with Windows Server 2012 R2. Microsoft introduced a few new features, as with every new release, but the tools and techniques used to create and manage VMs were largely unchanged.

In addition to being able to host VMs, Windows Server 2016 includes native support for two different types of containers: Windows Server containers and Hyper-V containers. Windows Server containers and the container host share the same kernel. Hyper-V containers differ from Windows Server containers in that Hyper-V containers run inside a special-purpose VM. This enables kernel-level isolation between containers and the container host.
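
The distinction shows up directly in how a container is started. As a rough sketch, the Python wrapper below shells out to the Docker CLI on a Windows Server 2016 host and toggles Docker's --isolation flag; the image name and command in the usage note are illustrative only.

    import subprocess

    def run_windows_container(image, command, hyperv_isolation=False):
        """Start a container on a Windows Server 2016 host via the Docker CLI.
        With hyperv_isolation=True the container runs inside a lightweight utility VM
        (a Hyper-V container); otherwise it shares the host kernel
        (a Windows Server container)."""
        args = ["docker", "run", "--rm"]
        args.append("--isolation=hyperv" if hyperv_isolation else "--isolation=process")
        args += [image] + command
        return subprocess.run(args, check=True)

    # Example (image name is illustrative):
    # run_windows_container("mcr.microsoft.com/windows/servercore",
    #                       ["cmd", "/c", "ver"], hyperv_isolation=True)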

Hyper-V management

When Microsoft created Hyper-V containers, it faced something of a quandary with regard to the management interface.

The primary tool for managing Hyper-V VMs is Hyper-V Manager — although PowerShell and System Center Virtual Machine Manager (SCVMM) are also viable management tools. This has been the case ever since the days of Windows Server 2008. Conversely, admins in the open source world used containers long before they ever showed up in Windows, and the Docker command-line interface has become a standard for container management.

Ultimately, Microsoft chose to support Hyper-V Manager as a tool for managing Hyper-V hosts and Hyper-V VMs, but not containers. Likewise, Microsoft chose to support the use of Docker commands for container management.

Management best practices

Although Hyper-V containers and VMs both use the Hyper-V virtualization engine, admins should treat containers and VMs as two completely different types of resources. While it’s possible to manage Hyper-V containers and VMs through PowerShell, most Hyper-V admins seem to prefer using a GUI-based management tool for managing Hyper-V VMs. Native GUI tools, such as Hyper-V Manager and SCVMM, don’t support container management.

Admins who wish to manage their containers through a GUI should consider using one of the many interfaces that are available for Docker. Kitematic is probably the best-known of these interfaces, but there are third-party GUI interfaces for containers that arguably provide a better overall experience.

For example, Datadog offers a dashboard for monitoring Docker containers. Another particularly nice GUI interface for Docker containers is DockStation.

Those who prefer an open source platform should check out the Docker Monitoring Project. This monitoring platform is based on the Kubernetes dashboard, but it has been adapted to work directly with Docker.

As admins work to figure out the best way to manage Hyper-V containers and VMs, it’s important for them to remember that both depend on an underlying host. Although Microsoft doesn’t provide any native GUI tools for managing VMs and containers side by side, admins can use SCVMM to manage all manner of Hyper-V hosts, regardless of whether those servers are hosting Hyper-V VMs or Hyper-V containers.

Admins who have never worked with containers before should spend some time experimenting with containers in a lab environment before attempting to deploy them in production. Although containers are based on Hyper-V, creating and managing containers is nothing like setting up and running Hyper-V VMs. A great way to get started is to install containers on Windows 10.

Endpoint security tool fueled OpenText’s Guidance Software acquisition

TORONTO — When OpenText acquired Guidance Software in September 2017, the content management vendor needed its endpoint security tool.

Of course, OpenText also was pleased that the endpoint security tool came with some other attractive assets: the well-known data forensics application seen on CSI and used by law enforcement and government investigators; an e-discovery platform; and Guidance’s loyal customer list.

Although OpenText plans to support all of those other assets in its aggressive growth-through-acquisition strategy — and one longtime customer confirms that needed integrations are underway — the biggest draw was the endpoint security tool.

OpenText’s private cloud has 50 million endpoints today, and is expanding as more customers migrate more data into it, CEO and CTO Mark Barrenechea said.

“We wanted to enter the security market,” he said during an OpenText Enterprise World media question and answer session.

While OpenText built its customer base on content management, which in time has evolved into content services, customers are demanding tighter hybrid cloud security, as well as content access control.

The endpoint security tool from Guidance fills in technological “white space” OpenText needed to fill, while acquiring Covisint brought much-needed ID management, Barrenechea concluded.

AI coming to EnCase

Guidance’s flagship application, EnCase, is used to collect and preserve data so it remains legally admissible in civil and criminal trials.

Lalith Subramanian, vice president of engineering for analytics, security and discovery at OpenText, said in an interview that the endpoint security tool now is needed mostly for laptops on the OpenText network.

But more use cases are coming as OpenText customers prepare IoT implementations that will multiply that number and as sensors come online, Subramanian said.

“People are not there yet, but they’re going to get there,” he said. “Our challenge is more to support the breadth of the platforms that are possible.”

Subramanian also said a secondary opportunity OpenText saw in the Guidance acquisition was incorporating its own Magellan content-specific AI tools into Guidance’s applications.

Guidance included no AI in its applications, but AI can help humans performing e-discovery for lawsuits, as well as data forensics investigations to sort documents and classify data.

FBI investigators, Subramanian said, tell OpenText they are overwhelmed with digital data associated with cases and AI may quickly be able to identify patterns and starting points for evidence that sometimes take teams days to figure out.

OpenText forensicist James Kritselis has been assisting Guidance customers in extracting data from devices for years, taking on the most intractable cases to catch criminals.

He said he sees AI as a potential boon for finding connections (“link analysis” in investigator parlance) in data that aren’t obvious to humans. One example would be when a criminal might conduct relevant conversations — or pieces of conversations — in multiple apps.

“If I see WhatsApp on a phone, AI would be able to start to look in Bumble or Tinder — to figure out what other social media apps would be relevant,” Kritselis said in an interview at the conference.

Integrations coming together

Meanwhile, Liberty Mutual Group’s insurance investigation unit doesn’t need the endpoint security tool as much as it needs linkage with other OpenText applications and services, said Brian Morrison, principal business systems analyst. His group, based in Dover, N.H., had used OpenText and Guidance software separately for years.

While some customers may have felt apprehension when OpenText acquired Guidance, Morrison said he has seen some quick wins in consolidating workflows already and looks forward to more.

Getting information for evidence out of Guidance products and into OpenText required many connectors and third-party helper applications. That process is also a drag on IT staffers, who must keep up with the requirements for preserving data throughout the workflow so it remains legally admissible.

“[I’m looking forward to] being able to get there without all these connectors,” Morrison said.

“These are application developers and systems support people. I’m taking their time for something that if I could just get [from OpenText], they could work more on supporting the users, working on reporting, analytics and other things instead of grabbing something for me that’s three years old and absolutely meaningless to them,” he said.

Execs: Content management in the cloud not as easy as it looks

TORONTO — Companies like Oracle, SAP and Microsoft are pushing content management in the cloud, and they’re joined by OpenText, which announced the containerization of its systems for use on public clouds, such as Microsoft Azure, Google Cloud and AWS.

“Friends don’t let friends buy data centers.” That was OpenText CEO and CTO Mark Barrenechea’s recurring joke during his OpenText Enterprise World 2018 keynote, during which the company unveiled its cloud- and DevOps-friendly OT2 platform.

Barrenechea later clarified to reporters that while some customers are standardizing on AWS and Azure, most OpenText cloud customers are on OpenText’s private cloud. Opening OpenText apps and microservices, such as its Magellan AI tools, to the public clouds will also open up new markets for content management in the cloud, Barrenechea said.

But several speakers from the stage — including celebrity nonfiction writer and Toronto native Malcolm Gladwell — cautioned that while the cloud might bring convenience and freedom from data center upkeep, it also brings challenges.

The two most frequently mentioned were data security and process automation, along with a related issue: automating bad or unnecessarily complicated processes that should have been fixed before digital transformation.

Data security getting more complicated

The internet of things and mobile devices comprise a major security vulnerability that, if left unsecured, can multiply risk and create entry points for hackers to penetrate networks. Opening up content management in the cloud — and the necessary multiplication of data transactions that comes with it — can spread that risk outside the firewall.

Persistent connectivity is the challenge for Zoll Medical’s personal defibrillators, said Jennifer Bell, enterprise CMS architect at the company. Zoll Medical’s IoT devices not only connect the patient to the device, but also port the data to caregivers and insurance providers in a regulatory-compliant way, which requires data security throughout.

“Security is huge, with HIPAA [Health Insurance Portability and Accountability Act] and everything,” she said.

IT leaders are just beginning to grasp the scale of risks.

At the National Institute of Allergy and Infectious Diseases (NIAID), even “smart microscopes” with which researchers take multi-gigabyte, close-up images have to check in with their manufacturer’s servers every night, said Matt Eisenberg, acting chief of NIAID’s business processes and information branch.

“Every evening, when the scientists are done with those devices, it has to phone home and recalibrate. And this is blowing the infrastructure guys away, because they’re not used to allowing this kind of bidirectional communication from something that really doesn’t look or feel like a computer or a laptop,” Eisenberg said.

Author Malcolm Gladwell delivering the keynote at OpenText Enterprise World 2018

Meanwhile, Gladwell warned that data security threats are coming from every direction, inside and outside of organizations, and from new perpetrators.

Security of content management in the cloud also came under the spotlight when Chelsea Manning and Edward Snowden were able to steal sensitive military documents and hand them over to WikiLeaks, Gladwell said.

Government data security experts are having a hard time preventing another such breach, he continued, because security threats are rapidly changing. The feds, however, haven’t; they’re stuck with Cold War-era systems and processes that focused on a particular enemy and their operatives.

“It’s no longer that you have a short list of people high up that you have to worry about. Now, you have to worry about everyone,” Gladwell said. “If you have 854,000 people with top-secret clearances, I would venture to say that it’s no longer top-secret.”

Cloud: BPM boon or problem?

Content management in the cloud by way of SaaS apps can also bring process automation, AI and analytics tools to content formerly marooned in on-premises data silos. It can also extend a workforce beyond office walls, giving remote, traveling or field-based workers access to the same content their commuting co-workers get.

That’s if it’s done right.

Kyle Hufford, digital asset management director at Monster Energy, based in Corona, Calif., serves rich media content to an international marketing team that must comply with many national, state and local regulations, as well as standardized internal processes, approval trees and branding rules.

His job, he said, is opening access to Monster Energy’s sometimes-edgy content worldwide, while ensuring end users stay compliant.

The work starts with detailed examination of how a process is done before moving it into the cloud.

“People think there [are] complexities around approvals and how to get things done,” Hufford said. “In reality, they can take a 15-step process and make it a two- or three-step process and save everybody time.”

Panelists at OpenText Enterprise World 2018, from left to right: Mark Barrenechea, OpenText CEO and CTO; Gopal Padinjaruveetil, vice president and chief information security officer at The Auto Club Group; Jennifer Bell, enterprise content management architect and analyst at Zoll Medical; Kyle Hufford, director of digital asset management at Monster Energy; and Matt Eisenberg, acting chief of the U.S. NIAID business process and information management branch.

As mature companies like SAP, Microsoft, OpenText and Oracle make big pushes into the cloud and bring their big customers along to migrate from on-premises systems, process issues like these are bound to happen, said Craig Wentworth, principal analyst for U.K.-based MWD Advisors.

Wentworth advised enterprise IT leaders to take a critical look at the vendor’s model in the evaluation stage before embarking on a project for content management in the cloud.

“I worry that, sometimes … software firms that have been around for a long time [and add] cloud are coming to it from a very different place than those who are born in the cloud,” Wentworth said. “Whilst they will be successful certainly with their existing customers, they’ve got a different slant to it.”

OpenText OT2 hybrid cloud EIM platform debuts

TORONTO — OpenText OT2, a new hybrid cloud-on-premises enterprise information management platform, brings a self-service SaaS application development environment to customers that crave the cloud’s flexibility and economy but often must keep some data at home, typically because of regulatory concerns or legacy application tethers.

Hybrid cloud EIM deployments — and apps connecting them — were previously feasible with OpenText on its AppWorks platform. But the company promises that the combination of OpenText OT2’s unified data model, updated interface and modernized, developer-friendly environment will make it more straightforward and faster to design and update applications.

That will cut development time for custom apps from weeks to hours in some cases, Muhi Majzoub, OpenText executive vice president of engineering and IT, said in an interview.

“OT2 simplifies for our customers how they invest and make decisions in taking some of their on-premises workflows and [port] them into a hybrid model or SaaS model into the cloud,” Majzoub said.

He added that life sciences and financial customers have taken a particular interest in that process to streamline records processes — and add AI and analytics behind them.

New tools from new acquisitions

Some of the new OpenText OT2 microservices that can be deployed with low-code appdev or native programming reflect OpenText acquisitions from the last couple of years, such as Covisint and Guidance Software. The short list of tools from these includes analytics, ID management and IoT endpoint security.

OpenText OT2 debuted at the company’s annual OpenText Enterprise World 2018 user conference, with Majzoub planning to demonstrate the first OpenText OT2 apps, some created by customers and others by OpenText employees.

The company plans to keep OpenText OT2 tightly integrated with the current release of its main suite, OpenText 16, using quarterly updates. OpenText 16 connects to numerous associated applications and services, many of which have thousands of customers, such as document management platforms Documentum and Core, as well as software designed for specific vertical markets such as legal and manufacturing.

OpenText CEO and CTO Mark Barrenechea giving the keynote at OpenText Enterprise World 2018 in Toronto

Widening the market

OpenText OT2 apps will also be available for partners to run on Amazon AWS, Microsoft Azure and Google clouds.

It will be interesting to see if enterprise technology buyers will need, for example, OpenText Magellan AI apps set up specifically for content, said Alan Lepofsky, Constellation Research vice president and principal analyst.

Also remaining to be seen will be how the new system will compete against other vendors’ products, such as those from ABBYY that also offer content-specific AI tools.

“It comes down to: Will customers want to use a general AI platform like Azure, Google, IBM or AWS?” Lepofsky said. “Will the native AI functionality from OpenText compare and keep up? What will be the draw for new customers?”

Imanis Data improves ransomware detection

Imanis Data’s latest upgrade to its data management platform for big data applications includes improved ransomware protection, disaster recovery for Hadoop and DR testing.

Imanis manages and protects distributed databases such as Cassandra, Cloudera, Couchbase, Hortonworks and MongoDB, as well as cloud-native data for IoT and software-as-a-service applications. The startup, which changed its name from Talena in 2017, leaves backup and recovery of relational databases to data protection stalwarts such as Veritas, Dell EMC, IBM, Veeam and Commvault. It competes directly with smaller companies, mainly Datos IO, which Rubrik acquired in February 2018.

Imanis Data 3.3 — launched in June — strengthened the anomaly detection of the company’s ThreatSense anti-ransomware software, according to chief marketing officer Peter Smails. Smails said ThreatSense builds a baseline of normal data and periodically checks against it in order to find instances of ransomware encryption or mass deletion, whether malicious or accidental. In 3.3, the software has double the metrics from its previous build, allowing for higher granularity in the anomaly detection and lower false positive rates.
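
The vendor has not published its algorithm, but baseline-and-deviation detection of this kind can be illustrated with a simple statistical sketch: record the normal daily rate of changed or deleted records, then flag days that deviate sharply from that baseline. The numbers and threshold below are invented for illustration.

    from statistics import mean, stdev

    # Daily counts of modified-or-deleted records during normal operation (invented data)
    baseline_days = [410, 395, 430, 405, 420, 415, 400]

    def is_anomalous(today_count, history, z_threshold=3.0):
        """Flag today's activity if it sits more than z_threshold standard deviations
        above the historical mean - a crude stand-in for the machine-learning baseline
        a product like ThreatSense builds."""
        mu = mean(history)
        sigma = stdev(history)
        z = (today_count - mu) / sigma if sigma else float("inf")
        return z > z_threshold, z

    flagged, z = is_anomalous(5200, baseline_days)  # e.g., a slow encryption spree accelerates
    print(f"anomalous={flagged}, z-score={z:.1f}")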

Also new in 3.3 is automated data recovery support for Hadoop users.

“We’re basically replicating data from one Hadoop cluster to another Hadoop cluster, at a different data center, and we can do it as aggressively as every 15 minutes,” said Jay Desai, Imanis’ vice president of products.

Smails said Imanis customers often treat data recovery testing as an afterthought. Imanis Data 3.3’s Recovery Sandbox feature can help fix that by allowing administrators to quickly and easily test if a backup is restorable, without disrupting the primary work environment.

Recovery Sandbox provides automated recovery testing with no impact on production databases.

Imanis Data 3.3 also brings a user interface refresh to the platform’s existing point-in-time recovery. Whimsically dubbed the “time-travel widget,” this tool allows users to simply search and navigate to system restore points.

A time-travel widget simplifies navigation of all restore points retained by the user.

Imanis Data received a $13.5 million Series B funding round earlier this year. Although founder and COO Nitin Donde at the time said there would only be a “modest investment” in research and development, the company released 3.3 just four months later.

Since the funding, there’s been a concerted effort to brand Imanis Data as a company focused on machine learning-based data management. Smails describes Imanis Data’s platform as data-aware, uniquely structured around machine learning and built for scale.

That data awareness is a key factor in what makes Imanis Data’s ransomware detection so powerful. George Crump, founder and president of IT analyst firm Storage Switzerland, describes how ransomware has become more insidious.

“In many cases, it lays dormant, so it’s actually backed up a few times,” Crump said. “Then, it encrypts slowly, like only 500 or 1,000 files a day. And that becomes very hard for a human to detect if that’s out of the ordinary. If you can, through machine learning, detect a much more ‘finer grain’ attack, you can pick it up the first time it happens.”