Tag Archives: Public

Industrial cloud moving from public to hybrid systems

Industrial cloud deployments run largely in the public cloud today, but that may be about to change.

Over the next few years, manufacturers will move industrial cloud deployments from the public cloud to hybrid cloud systems, according to a new report from ABI Research, an Oyster Bay, N.Y., research firm that specializes in industrial technologies. Public cloud accounts for almost half (49%) of the industrial IoT market in 2018, while hybrid cloud systems hold just 20%. But by 2023 that script will flip, according to the report, with hybrid cloud systems making up 52% of the IIoT market and public cloud just 25%.

The U.S.-based report surveyed vice presidents and other high-level decision-makers at manufacturing firms of various types and sizes, according to Ryan Martin, ABI Research principal analyst. It focused on the industrial IoT cloud and gauged the manufacturers’ predisposition to technology adoption.

According to the report, the industrial cloud encompasses the entirety of the manufacturing process and unifies the digital supply chain. This unification can lead to a number of benefits. Companies can streamline internal and external operations through digital business, product, manufacturing, asset and logistics processes; use data and the insights generated to enable new services; and improve control over environmental, health and safety issues.

Changing needs will drive move to hybrid systems

Historically, most data and applications in the IoT resided on premises, often in proprietary systems, but as IoT exploded, the public cloud became more prevalent, according to Martin.

The cloud, whether public or private, makes sense because it offers a centralized location for storing large amounts of data and computing power at a reasonable cost, but organizational needs are changing, Martin said. Manufacturers are finding that a hybrid approach makes sense because it is often better to perform analytics on the device or activity that generates the data, such as equipment at a remote site, than in the cloud.

“There’s a desire to keep certain system information on site, and it makes a lot of business sense to do that, because you don’t want to be shipping data to and from the cloud every time you need to perform a query or a search because you’re paying for that processing power, as well as the bandwidth,” Martin said. “Instead it’s better to ship the code to the data for processing then shoot the results back to the edge. The heavy lifting for the analytics, primarily for machine learning types of applications, would happen in the cloud, and then the inferences or insights would be sent to a more localized server or gateway.”
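Martin’s description maps to a familiar pattern: do the heavy model training in the cloud, ship the resulting artifact to an edge gateway, and run only lightweight inference locally. The sketch below is a minimal illustration of that split, assuming generic scikit-learn tooling and made-up sensor features rather than any particular IIoT platform.

# Minimal sketch of the cloud-train / edge-infer split described above.
# Assumes scikit-learn and joblib; the file path and features are hypothetical.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# --- In the cloud: heavy lifting on historical sensor data ---
X_train = np.random.rand(10_000, 4)             # stand-in for historical sensor features
y_train = (X_train[:, 0] > 0.9).astype(int)     # stand-in for labeled failure events
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
joblib.dump(model, "model.joblib")              # artifact shipped down to the edge gateway

# --- On the edge gateway: cheap, local inference on fresh readings ---
edge_model = joblib.load("model.joblib")
fresh_reading = np.array([[0.95, 0.2, 0.4, 0.1]])
print("alert" if edge_model.predict(fresh_reading)[0] else "normal")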

Providers like AWS and Microsoft Azure will likely carry the bulk of the cloud load, according to Martin, but several vendors will be prominent in providing services for the industrial cloud.

“There will be participation from companies like SAP, as well as more traditional industrial organizations like ABB, Siemens, and so forth,” Martin said. “Then we have companies like PTC, which has recently partnered with Rockwell Automation, doing aggregation and integration, and activation to the ThingWorx platform.”

The industrial cloud will increasingly move from public cloud to hybrid cloud systems; the hybrid cloud market for IIoT will double by 2023.

Transformation not disruption

However, companies face challenges as they implement the new technologies and systems that make up the hybrid industrial cloud. The most prominent challenge is to make those changes without interrupting current operations, Martin said.

“It will be a challenge to bring all these components like AI, machine learning and robotics together, because their lifecycles operate on different cadences and have different stakeholders in different parts of the value chain,” Martin said. “Also they’re producing heterogeneous data, so there needs to be normalization of mass proportion, not just for the data, but for the application providers, partners and supplier networks to make this all work.”

The overall strategy should be about incremental change that focuses on transformation over disruption, he explained.

“This is analogous to change management in business, but the parallel for IIoT providers is that these markets in manufacturing favor those suppliers whose hardware, software and services can be acquired incrementally with minimal disruption to existing operations,” he said. “We refer to this as minimal viable change. The goal should be business transformation; it’s not disruption.”

July Patch Tuesday brings three public disclosures

Microsoft announced three public disclosures among the 54 vulnerabilities addressed in the July Patch Tuesday release.

An elevation-of-privilege public disclosure (CVE-2018-8313) affects all OSes except Windows 7. Attackers could impersonate processes, interject cross-process communication or interrupt system functionality to elevate their privilege levels. The patch addresses the issue by ensuring that the Windows kernel API enforces permissions.

“The fact that there is some level of detailed description of how to take advantage of this out in the open, it’s a good chance an attacker will look to develop some exploit code around this,” said Chris Goettl, director of product management and security at Ivanti, based in South Jordan, Utah.

A similar elevation-of-privilege vulnerability (CVE-2018-8314) this July Patch Tuesday affects all OSes except Windows Server 2016. Attackers could escape a sandbox to elevate their privileges when Windows fails a check. If this vulnerability were exploited in conjunction with another vulnerability, the attacker could run arbitrary code. The update fixes how Windows’ file picker handles paths.

A spoofing vulnerability in the Microsoft Edge browser (CVE-2018-8278) could allow an attacker to trick users into thinking they are on a legitimate website. The attacker could then extract additional code to exploit the system remotely. The patch fixes how Microsoft Edge handles HTML content.

“That type of enticing of a user, we know works,” Goettl said. “It’s not a matter of will they get someone to do it or not; it’s a matter of statistically you only need to entice so many people before somebody will do it.”

Out-of-band updates continue

Before July Patch Tuesday, Microsoft announced a new side-channel attack called Lazy FP State Restore (CVE-2018-3665) — similar to the Spectre and Meltdown vulnerabilities — that affects supported versions of Windows. An attacker uses a different side channel to pull information from other registers on Intel CPUs through speculative execution.

Microsoft also updated its Spectre and Meltdown advisory (ADV180012). It does not contain any new releases for the original three variants, but the company did update its mitigations for Speculative Store Bypass, Variant 4 of the Spectre and Meltdown vulnerabilities. This completes coverage for Intel processors; Microsoft is still working with AMD on mitigations for its processors.

Microsoft released out-of-band patches between June and July Patch Tuesday for a third-party Oracle Outside In vulnerability (ADV180010) that affects all Exchange servers.

“We don’t have a lot of info on the exploitability,” said Jimmy Graham, director of product management at Qualys, based in Foster City, Calif. “It should be treated as critical for Exchange servers.”

New Windows Server 2008 SP2 servicing model on its way

Alongside its June Patch Tuesday, Microsoft announced plans to switch the updating system for Windows Server 2008 SP2 to a rollup model. The new monthly model will more closely match the servicing model used for other supported Windows versions, enabling administrators to simplify their servicing process. It will include a security-only quality update, a security monthly quality rollup and a preview of the monthly quality rollup.

“The 2008 Server users out there now need to adopt the same strategy, where they had the luxury of being able to do one or two updates if they chose to and not the rest,” Goettl said.

The new model will preview on Aug. 21, 2018. Administrators will still receive extended support for Windows Server 2008 SP2 until January 2020. After that, only companies that pay for Premium Assurance will have an additional six years of support.

For more information about the remaining security bulletins for July Patch Tuesday, visit Microsoft’s Security Update Guide.

Google adds single-tenant VMs for compliance, license cares

Google’s latest VM runs counter to standard public cloud frameworks, but its added flexibility checks off another box for enterprise clients.

Google Cloud customers can now access sole-tenant nodes on Google Compute Engine. The benefits for these single-tenant VMs, currently in beta, are threefold: They reduce the “noisy neighbor” issue that can arise on shared servers; add another layer of security, particularly for users with data residency concerns; and make it easier to migrate certain on-premises workloads with stringent licensing restrictions.

The public cloud model was built on the concept of multi-tenancy, which allows providers to squeeze more than one account onto the same physical host and thus operate at economies of scale. Early customers happily traded away the advantages of dedicated hardware in exchange for less infrastructure management and the ability to quickly scale out.

But as more traditional corporations adopt public cloud, providers have added isolation capabilities to approximate what’s inside enterprises’ own data centers, such as private networks, virtual private clouds and bare-metal servers. Single tenancy applies that approach down to the hardware level, while maintaining a virtualized architecture. AWS was the first to offer single-tenant VMs with its Dedicated Instances.

Customers access Google’s single-tenant VMs the same way as its other compute instances, except they’re placed on a dedicated server. The location of that node is either auto-selected through a placement algorithm, or customers can manually select the location at launch. These instances are customizable in size, and are charged per second for vCPU and system memory, as well as a 10% sole-tenancy premium.
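For illustration only, the sketch below shows how a VM could be pinned to a sole-tenant node group through the Compute Engine API’s node-affinity scheduling fields, which were still in beta at the time of this report. It assumes the google-api-python-client library; the project, zone, image and node-group names are placeholders, so consult Google’s current documentation for the exact fields.

# Hedged sketch: placing a VM on a sole-tenant node group via node affinity.
# Project, zone, boot image and node-group names are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

instance_body = {
    "name": "sole-tenant-vm",
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    # The scheduling block is what differs from an ordinary instance:
    # node affinity pins the VM to a dedicated (sole-tenant) node group.
    "scheduling": {
        "nodeAffinities": [{
            "key": "compute.googleapis.com/node-group-name",
            "operator": "IN",
            "values": ["my-node-group"],
        }]
    },
}

operation = compute.instances().insert(
    project="my-project", zone="us-central1-a", body=instance_body
).execute()
print(operation["name"])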

Single-tenant VMs another step for Google Cloud’s enterprise appeal

Google still lags behind AWS and Microsoft Azure in public cloud capabilities, but it has added services and support in recent months to shake its image as a cloud valued solely for its engineering. Google must expand its enterprise customer base, especially with large organizations in which multiple stakeholders sign off on use of a particular cloud, said Fernando Montenegro, a 451 Research analyst.

Not all companies will pay the premium for this functionality, but it could be critical to those with compliance concerns, including those that must prove they’re on dedicated hardware in a specific location. For example, a DevOps team may want to build a CI/CD pipeline that releases into production, but a risk-averse security team might have some trepidations. With sole tenancy, that DevOps team has flexibility to spin up and down, while the security team can sign off on it because it meets some internal or external requirement.

“I can see security people being happy that, we can meet our DevOps team halfway, so they can have their DevOps cake and we can have our security compliance cake, too,” Montenegro said.

A less obvious benefit of dedicated hardware involves the lift and shift of legacy systems to the cloud. A traditional ERP contract may require a specific set of sockets or hosts, and it can be a daunting task to ensure a customer complies with licensing stipulations on a multi-tenant platform because the requirements aren’t tied to the VM.

In a bring-your-own-license scenario, these dedicated hosts can optimize customers’ license spending and reduce the cost to run those systems on a public cloud, said Deepak Mohan, an IDC analyst.

“This is certainly an important feature from an enterprise app migration perspective, where security and licensing are often top priority considerations when moving to cloud,” he said.

The noisy neighbor problem arises when a user is concerned that high CPU or IO usage by another VM on the same server will impact the performance of its own application, Mohan said.

“One of the interesting customer examples I heard was a latency-sensitive function that needed to compute and send the response within as short a duration as possible,” he said. “They used dedicated hosts on AWS because they could control resource usage on the server.”

Still, don’t expect this to be the type of feature that a ton of users rush to implement.

“[A single-tenant VM] is most useful where strict compliance/governance is required, and you need it in the public cloud,” said Abhi Dugar, an IDC analyst. “If operating under such strict criteria, it is likely easier to just keep it on prem, so I think it’s a relatively niche use case to put dedicated instances in the cloud.”

VMware is redesigning NSX networking for the cloud

SAN FRANCISCO — VMware is working on a version of NSX for public clouds that departs from the way the technology manages software-based networks in private data centers.

In an interview this week with a small group of reporters, Andrew Lambeth, an engineering fellow in VMware’s network and security business unit, said the computing architectures in public clouds require a new form of NSX networking.

“In general, it’s much more important in those environments to be much more in tune with what’s happening with the application,” he said. “It’s not interesting to try to configure [software] at as low a level as we had done in the data center.”

Four or five layers up the software stack, cloud provider frameworks typically have hooks to the way applications communicate with each other, Lambeth told reporters at VMware’s RADIO research and development conference. “That’s sort of the level where you’d look to integrate in the future.”

Todd Pugh, IT director at Sugar Creek Packing Co., based in Washington Court House, Ohio, said it’s possible for NSX to use Layer 7 — the application layer — to manage communications between cloud applications.

“If we burst something to the cloud on something besides AWS, the applications are going to have to know how to talk to one another, as opposed to just being extensions of the network,” Pugh said.

Today, VMware is focusing its cloud strategy on the company’s partnership with cloud provider AWS. The access VMware has to Amazon’s infrastructure makes it possible for NSX to operate the same way on the cloud platform as it does in a private data center. Companies use NSX to deliver network services and security to applications running on VMware’s virtualization software.

Pugh said he does not expect an application-centric version of NSX to be as user-friendly as NSX on AWS. He would therefore prefer that VMware strike a similar partnership with Microsoft Azure, which would give him the option of using the current version of NSX on either of the two largest cloud providers.

“I can shop at that point and still make it appear as if it’s my network and not have to change my applications to accommodate moving them to a different cloud,” Pugh said.

Nevertheless, having a version of NSX for any cloud provider would be useful to many companies, said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo.

“If VMware can open up the platform a bit to allow their customers to have a uniform network management model across any IaaS environment, that will simplify engineering and operations tremendously for companies that are embracing multi-cloud and hybrid cloud,” McGillicuddy said.

VMware customers can expect the vendor to roll out the new version of NSX over the next year or so, Lambeth said. He declined to give further details.

Rethinking NSX networking

VMware will have to prepare NSX networking not just for multiple cloud environments, but also for the internet of things, which introduces its own challenges for network management and security.

“More lately, I’ve been sort of taking a step back and figuring out what’s next,” Lambeth said. “I feel like the platform for NSX is kind of in a similar situation to where ESX and vSphere were in 2006 and 2007. Major pieces were kind of there, but there was a lot of buildout left.”

vSphere is the brand name for VMware’s suite of server virtualization products. ESX was the former name of VMware’s hypervisor.

VMware’s competitors in software-based networking that extends beyond the private data center include Cisco and Juniper Networks. In May, Juniper introduced its Contrail Enterprise Multicloud, while Cisco has been steadily developing new capabilities for its architecture, called Application Centric Infrastructure.

The immediate focus of the three vendors is on the growing number of companies moving workloads to public clouds. Synergy Research Group estimated cloud-based infrastructure providers saw their revenue rise by an average of 51% in the first quarter to $15 billion. The full-year growth rate was 44% in 2017 and 50% in 2016.

Windows Server 2019 Preview: What’s New and What’s Cool

This post introduces the newly announced Windows Server 2019 public preview, covers the new features and discusses their impact. Should you be excited or worried about the next installment of Windows Server?

Read the post here: Windows Server 2019 Preview: What’s New and What’s Cool

What are some considerations for a public folders migration?


A public folders migration from one version of Exchange to another can tax the skills of an experienced administrator — but there’s another level of complexity when cloud enters the mix.

A session at last week’s Virtualization Technology Users Group event in Foxborough, Mass., detailed the nuances of Office 365 subscription offerings and the migration challenges administrators face. Microsoft offers a la carte choices for companies that wish to sign up for a single cloud service, such as Exchange Online, and move the messaging platform into the cloud, said Michael Shaw, a solution architect for Office 365 at Whalley Computer Associates in Southwick, Mass., in his presentation.

Microsoft offers newer collaboration services in Office 365, but some IT departments cling to one holdover that the company cannot extinguish — public folders. This popular feature, introduced in 1996 with Exchange 4.0, gives users a shared location to store documents, contacts and calendars.

For companies on Exchange 2013/2016, Microsoft did not offer a way to move “modern” public folders — called “public folder mailboxes” after an architecture change in Exchange 2013 — to Office 365 until March 2017. Prior to that, many organizations either developed their own public folders migration process, used a third-party tool or brought in experts to help with the transition.

Organizations that want to use existing public folders after a switch from on-premises Exchange to Office 365 should be aware of the proper sequence to avoid issues with a public folders migration, Shaw said.

Most importantly, public folders should be migrated last. That’s because mailboxes in Office 365 can access a public folder that is on premises, but a mailbox that is on premises cannot access public folders in the cloud, Shaw said.

“New can always access old, but old can’t access new,” he said.

IT admins should keep in mind, however, that Microsoft dissuades customers from using public folders for document use due to potential issues when multiple people try to work on the same file. Instead, the company steers Office 365 shops to SharePoint Online for document collaboration, and the Groups service for shared calendars and mobile device access.

In another attempt to discourage public folders migrations to Office 365, Microsoft caps public folder mailboxes in Exchange Online at 1,000. They also come with a limit of 50 GB per mailbox in the lower subscription levels and a 100 GB quota in the higher E3 and E5 tiers. Total public folder storage cannot exceed 50 TB.

Still, support for public folders has no foreseeable end despite Microsoft’s efforts to eradicate the feature. Microsoft did not include public folders in Exchange Server 2007, but reintroduced them in a service pack after significant outcry from customers, Shaw said. Similarly, there was no support for public folders when Microsoft introduced Office 365 in 2011, but the company later buckled to customer demand.

Putting data on the beat — Public safety intelligence-led policing

While most public safety agencies operate in a fog of big data, it doesn’t have to be this way. There are approaches to improving data integration and analytics that substantially enhance policing. Broadly these initiatives fall into three categories:

Knowing before:

Hindsight, as they say, is 20/20. But a retrospective view only gets you so far in terms of applicable intelligence. Machine learning can be your force multiplier because it offers the possibility of actual foresight in real-time situations. Using predictive, cloud-based analytics, it is possible to identify subtle patterns in data streams that lead to advance awareness of crimes about to be committed or emergencies about to occur. In this way, the sort of intuition a seasoned police officer has can be extended to provide an always-on view. For example, individual activities that seem innocuous on their own, such as shifts in travel or purchase patterns or social media activity, might collectively trigger suspicion or flag an increased risk when aggregated and analyzed by machine learning algorithms.
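As a purely illustrative sketch of the kind of pattern-flagging described above, the snippet below runs a generic anomaly detector over aggregated activity features; the feature columns, data and thresholds are hypothetical and not drawn from any real policing system.

# Hedged sketch: flagging unusual aggregate activity with a generic anomaly detector.
# The three feature columns (e.g., travel, purchase and social-media counts) are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=(5_000, 3))     # typical aggregated activity
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_batch = np.vstack([rng.normal(size=(99, 3)),
                       [[6.0, 5.5, 7.0]]])                      # one clearly unusual row
flags = detector.predict(new_batch)                             # -1 means flagged for review
for i in np.where(flags == -1)[0]:
    print(f"record {i} flagged for analyst review")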

Knowing in the moment:

No doubt every public safety agency wishes it had an omniscient, 360-degree view of the scene it is encountering. Today, sensors coupled with real-time data ingestion and analysis (performed in a secure cloud environment) can greatly enhance that situational intelligence, and when combined with geo-spatial information they allow first responders to correlate events and execute an appropriate response. The relevant technologies include:

  • Connected devices: Synchronized feeds from CAD; RMS; body-worn cameras and in-vehicle camera systems; CCTV; chemical, biological, radiological and nuclear (CBRN) sensors; automatic license plate recognition (ALPR); acoustic listening devices; and open-source intelligence (OSINT) all help to capture a detailed picture of the event.
  • Geo-spatial awareness: Event information, as well as objects of potential interest nearby, is mapped, providing an enhanced view of the environment. For example, additional sensors are monitored, and nearby schools and businesses are identified, along with egress routes, traffic patterns, and hospitals (a toy sketch of this correlation step follows the list).
  • Other relevant information and histories: By using address-specific identity and licensing data, past calls for service, and other active calls in the area, pertinent information about the residence (such as any weapons or chemicals on the premises) can be instantly surfaced. In the event of fire, chemical or environmental disasters, weather information can be overlaid to help predict at-risk areas.
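To make the geo-spatial correlation step concrete, the toy sketch below uses the haversine formula to surface which points of interest fall within a fixed radius of an incident; the coordinates and the list itself are invented for illustration.

# Toy sketch: surfacing points of interest near an incident with the haversine formula.
# The incident coordinates and the POI list are made up for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

incident = (25.7617, -80.1918)              # hypothetical incident location (lat, lon)
points_of_interest = [
    ("Elementary school", 25.7655, -80.1970),
    ("Hospital",          25.7880, -80.2100),
    ("Chemical plant",    25.9000, -80.3000),
]

RADIUS_KM = 3.0
for name, lat, lon in points_of_interest:
    d = haversine_km(*incident, lat, lon)
    if d <= RADIUS_KM:
        print(f"{name}: {d:.1f} km from incident -- include in response view")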

Knowing after:

As any seasoned investigator can attest, reconstructing events afterwards can be a time-consuming process, with the potential to miss key evidence. Highly integrated data systems and machine learning can significantly reduce the man-hours required of public safety agencies to uncover evidence buried across disparate data pools.

The promise of technology—what’s next?

To learn more about the future of intelligence-led policing and law enforcement in the twenty-first century, download the free whitepaper.

NIST botnet security report recommendations open for comments

The Departments of Commerce and Homeland Security opened public comments on a draft of their botnet security report before the final product heads to the president.

The report was commissioned by the cybersecurity executive order published by the White House on May 11, 2017. DHS and the National Institute of Standards and Technology (NIST), a unit of the Department of Commerce, were given 240 days to complete a report on improving security against botnets and other distributed cyberattacks, and they took every minute possible, releasing the draft botnet security report on Jan. 5, 2018.

The public comment period ends Feb. 12, 2018, and industry experts are supportive of the report’s contents. According to a NIST blog post, the draft report was a collaborative effort.

“This draft reflects inputs received by the Departments from a broad range of experts and stakeholders, including private industry, academia, and civil society,” NIST wrote. “The draft report lays out five complementary and mutually supportive goals intended to dramatically reduce the threat of automated, distributed attacks and improve the resilience of the ecosystem. For each goal, the report suggests supporting activities to be taken by both government and private sector actors.”

The blog post listed the goals for stakeholders laid out by the draft botnet security report as:

  1. Identify a clear pathway toward an adaptable, sustainable, and secure technology marketplace.
  2. Promote innovation in the infrastructure for dynamic adaptation to evolving threats.
  3. Promote innovation at the edge of the network to prevent, detect, and mitigate bad behavior.
  4. Build coalitions between the security, infrastructure, and operational technology communities domestically and around the world.
  5. Increase awareness and education across the ecosystem.

Rodney Joffe, senior vice president, technologist and fellow at Neustar, Inc., an identity resolution company headquartered in Sterling, Va., said NIST and DHS took the right approach in putting together the report.

“The Departments of Commerce and Homeland Security worked jointly on this effort through three approaches — hosting a workshop, publishing a request for comment, and initiating an inquiry through the President’s National Security Telecommunications Advisory Committee (NSTAC),” Joffe told SearchSecurity. “We commend the administration for working with and continuing to seek private sector advice on the best path forward.”

A good start, but… 

Experts such as Michael Patterson, CEO of Plixer, a network traffic analysis company based in Kennebunk, Maine, generally applauded the draft botnet security report as an in-depth starting point that is nonetheless missing some key features.

“The report offers a comprehensive framework for threat intelligence sharing, and utilizing NIST to work with a variety of industry groups to establish tighter security protocols and best practices while outlining government and industry transformations to protect the internet,” Patterson told SearchSecurity. “However, it is missing the required teeth to propel industry action. Without a mechanism to define a specific compliance standard, service providers will not have enough incentive to take the steps required to mitigate these risks.”

Stephen Horvath, vice president of strategy and vision for Telos Corporation, a cybersecurity company located in Ashburn, Va., applauded the draft botnet security report for balancing “high level explanations along with some technical details of merit.”

“This report will hopefully drive improvements and awareness of the issues surrounding botnets. Given a few of the more important recommendations are taken and funded, the establishment of an IoT [cybersecurity framework] profile for example, a general overall improvement across all domains should be felt in the next few years,” Horvath told SearchSecurity. “I believe stronger improvements would be possible more quickly if the recommendations included greater focus on enforcing hard requirements rather than incentives.”

Gavin Reid, chief security architect at Recorded Future, a threat intelligence company headquartered in Somerville, Mass., said NIST’s goals are “laudable and the paper takes the approach of providing as comprehensive of a solution as is possible given the transient nature of attacks.”

“It does not address how the goals and technology approach keep up with and change to match changes to the attack vectors,” Reid told SearchSecurity. “The paper also conflates all botnets with IoT botnets. Bots resulting in automated controlled attacks and toolkits are not limited to IoT but have a much wider footprint covering all IT ecosystems.”

The IoT question

Following highly publicized botnet attacks like Mirai, which preyed on insecure IoT devices, the draft report focused on these issues and even noted that “IoT product vendors have expressed desire to enhance the security of their products, but are concerned that market incentives are heavily weighted toward cost and time to market.”

Luke Somerville, manager of special investigations at Forcepoint Security Labs, said the goals and actions within the draft botnet security report are “a good starting point, but the effectiveness of ideas such as baseline security standards for IoT devices will depend entirely on the standards themselves and how they are implemented.”

“Any standards would need to be backed up robustly enough to overcome the strong market incentives against security which exist at present,” Somerville told SearchSecurity. “Increasing awareness and security education is also discussed — something that has been a goal of the security industry for a long time. Ultimately, insecure systems don’t fix themselves, and nor do they make themselves insecure in the first place. By focusing on the human point of contact with data and systems — be that point of contact the developers writing the code controlling the systems, the end-users configuring the systems, or even prospective users in the process of making a purchasing decision — we can attempt to build security in throughout the design and usage lifecycle of a product.”

Botnet security report outcomes

While experts were generally favorable to the draft botnet security report, some were less optimistic about real-world changes that might come from such a report.

Jeff Tang, senior security researcher at Cylance, said he was “not convinced this report will make any significant strides towards deterring the spread of botnets.”

“Trying to develop an accepted security baseline through a consensus-based process when one of your stakeholder’s primary goal is to sell you a new shiny IoT device every year is only going to result in watered-down standards that will be ineffective. As the recent spectacle of CPU bugs has shown, speed is the enemy of security. If you’re rushing to release a new device every year, security is going to be nonexistent,” Tang told SearchSecurity. “Additionally, secure development best practices haven’t changed much in the last decade, but judging by the reports of various device vulnerabilities, manufacturers have not voluntarily adopted these best practices.”

Pam Dingle, principal technical architect at Ping Identity, an identity security company headquartered in Denver, said “changing ecosystems is difficult” and it will take a concerted effort by vendors and CISOs alike to make the change real, otherwise “the effects will likely be limited.”

“It is up to those who see the value in the recommended actions to put the manpower into participating in standards groups, collaborating with adjacent vendor spaces to make integration easier and more pattern-based, and demanding that a shared defense strategy stay high in priority lists,” Dingle told SearchSecurity. “This is not the work of a moment; this is evolution over thousands of software design lifecycles, and even then, the mass of legacy devices out there with no update capabilities will be shackles on our collective legs for a long time to come. We have to start.”

Frost Science Museum IT DR planning braced for worst, survived Irma

When you open a large public facility right on the water in Miami, a good disaster recovery setup is an essential task for an IT team. Hurricane Irma’s assault on Florida in September 2017 made that clear to the Phillip and Patricia Frost Museum of Science team.

The expected Category 5 hurricane moving in on Florida had the new Frost Science Museum square in its sights. Irma turned out to be less threatening to Miami than feared, and the then-4-month-old building suffered no major damage. Still, the museum’s vice president of technology said he felt prepared for the worst with his IT DR planning.

When preparing to open the museum at a 250,000-square-foot site on the Miami waterfront, technology chief Brooks Weisblat installed a new Dell EMC SAN in a fully redundant data center and set up a colocation site in Atlanta as part of the museum’s disaster recovery plan. The downgraded Category 4 hurricane dumped water into the building, but did no serious damage and caused no downtime.

The new Frost Science Museum building features three diesel generators and redundant power, including 20 minutes of backup power in the battery room that should provide enough juice until the backup generators come online. While much of southern Florida lost power during Irma, the museum did not.

“We’re sitting right on the water. It was supposed to be a major hurricane coming straight through Miami. But six hours before hitting, it veered off, so it wasn’t a direct hit,” Weisblat said. “We have two weather stations on the building, and we recorded force winds of 90 to 95 miles per hour. It could have been 190 mile-per-hour winds, and that would have been a different story.”

Advance warning of the hurricane prompted the museum’s team to bolster its IT DR planning.

“The hurricane moved us to get all of our backups in order,” Weisblat said. “Opening the building was intensive. We had backups internally, but we didn’t have off-site backups yet. It pushed us to get a colocated data center in Atlanta when the hurricane warnings came about a week before. At least we had a lot of advance notice for this one. Except for some water here and there, the museum did well.”

The Frost Science Museum raised $330 million in funding to build the new center in downtown Miami, closing its Coconut Grove site in August 2015. Museum organizers said they hoped to attract 750,000 visitors in the first year at the new site. From its May opening through Oct. 31, more than 525,000 people visited the museum.

Shifting to an all-flash SAN

When moving, Frost Science installed a dual-controller Dell EMC SC9000 — formerly Compellent — all-flash array, with 112 TB of capacity connected to 10 Dell EMC PowerEdge servers virtualized with VMware. As part of its IT DR planning, the museum uses Veeam Software to back up virtual machines to a Dell PowerEdge R530 server, with 40 TB of hard disk drive storage on site, and it replicates those backups to another PowerEdge server in the Atlanta location.

“If something happens at this site, we’re able to launch a limited number of VMs to power finance, ticketing and reporting,” Weisblat said. “We can control those servers out of Atlanta if we’re unable to get into the building.”

Before opening the new building, Weisblat’s team migrated all VMs between the old and new sites. The process took three weeks. “We had to take down services, copy them to drives a few miles away, then bring those into the new environment and do an import into a new VM cluster,” he said.

The data center sits on the third floor of the new building, 60 feet above sea level. It takes up 16 full cabinets, plus eight racks for networking, Weisblat said.

Frost Science Museum had no SAN in the old building. Its IT ran on 23 servers. Weisblat said he migrated the stand-alone servers into the VMware cluster on the Compellent array before moving. “That way, when the new system came online, it would be easy to move those servers over as files, and we would not have to do migrations into VMware in the new building during the crush time for our opening,” he said.

The Dell EMC SAN runs all critical applications, including the customer relationship management system, exhibit content management, property management system software, the museum website, online ticketing and building security management systems. The security system controls electricity, lights, solar power, centralized antivirus deployments and network access control. “Everything is powered off this one system,” Weisblat said.

The SAN has two Brocade — now Broadcom — Fibre Channel switches for redundancy. “We can unplug hosts; everything keeps running,” Weisblat said. “We can unplug one of the storage arrays, and everything keeps running. The top-of-rack 10-gig [Extreme Avaya Ethernet] switches are also fully redundant. We can lose one of those.”

He said that since installing the new array, one solid-state drive has gone out. “The SSD sent us an alert, and Dell had parts to us in two hours. Before I knew something was wrong, they contacted me.”

Whether it’s a failed SSD or an impending hurricane, early alerts and IT DR planning certainly help when dealing with disasters.