Challenges in cloud data security lead to a lack of confidence

Enterprise cloud use is full of contradictions, according to new research.

The “2017 Global Cloud Data Security Study,” conducted by Ponemon Institute and sponsored by security vendor Gemalto, found that one reason enterprises use cloud services is for the security benefits, but respondents were divided on whether cloud data security is realistic, particularly for sensitive information.

“More companies are selecting cloud providers because they will improve security,” the report stated. “While cost and faster deployment time are the most important criteria for selecting a cloud provider, security has increased from 12% of respondents in 2015 to 26% in 2017.”

Although 74% of respondents said their organization is either a heavy or moderate user of the cloud, nearly half (43%) said they are not confident that their organization’s IT department knows about all the cloud computing services it currently uses.

In addition, less than half of respondents said their organization has defined roles and accountability for cloud data security. While this number (46%) was up from 43% in 2016 and 38% in 2015, it is still low, especially considering the types of information most commonly stored in the cloud.

Customer data is at the highest risk

According to the survey findings, the primary types of data stored in the cloud are customer information, email, consumer data, employee records and payment information. At the same time, the data considered to be most at risk, according to the report, is payment information and customer information.

“Regulated data such as payment and customer information continue to be most at risk,” the report stated. “Because of the sensitivity of the data and the need to comply with privacy and data protection regulations, companies worry most about payment and customer information.”

One possible explanation for why respondents feel that sensitive data is at risk is that cloud data security is tough to actually achieve.

“The cloud is storing all types of information from personally identifiable information to passwords to credit cards,” said Jason Hart, vice president and CTO of data protection at Gemalto. “In some cases, people don’t know where data is stored, and more importantly, how easy it is to access by unauthorized people. Most organizations don’t have data classification policies for security or consider the security risks; instead, they’re worrying about the perimeter. From a risk point of view, all data has a risk value.”

The biggest reason it is so difficult to secure the cloud, according to the study, is that it’s more difficult to apply conventional infosec practices in the cloud. The next most cited reason is that it is harder for enterprises to assess the cloud provider for compliance with security best practices and standards. The majority of respondents (71% and 67%, respectively) cited those as the biggest challenges, but respondents also noted that it is more difficult to control or restrict end-user access in the cloud, which creates additional security challenges.

“To solve both of these challenges, enterprises should have control and visibility over their security throughout the cloud, and being able to enforce, develop and monitor security policies is key to ensuring integrity,” Hart said. “People will apply the appropriate controls once they’re able to understand the risks towards their data.”

Despite the challenges in cloud data security and the perceived security risks to sensitive data stored in the cloud, all-around confidence in cloud computing is on the rise — slightly. The 25% of respondents who said they are “very confident” their organization knows about all the cloud computing services it currently uses is up from 19% in 2015. Fewer people (43%) said they were “not confident” in 2017 compared to 55% in 2015.

“Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way,” Hart said. “After all, in the beginning, many companies were interested in leveraging the cloud but had significant concerns about security.”

Hart noted that, despite all the improvements to business workflows that the cloud has provided, security is still an issue. “Security has always been a concern and continues to be,” he said. “Security in the cloud can be improved if the security control is applied to the data itself.”

Ponemon sampled over 3,200 experienced IT and IT security practitioners in the U.S., the United Kingdom, Australia, Germany, France, Japan, India and Brazil who are actively involved in their organization’s use of cloud services.

Putting data on the beat: Public safety intelligence-led policing

While most public safety agencies operate in a fog of big data, it doesn’t have to be this way. There are approaches to improving data integration and analytics that substantially enhance policing. Broadly, these initiatives fall into three categories:

Knowing before:

Hindsight, as they say, is 20/20. But a retrospective view only gets you so far in terms of actionable intelligence. Machine learning can be a force multiplier because it offers the possibility of genuine foresight in real-time situations. Using predictive, cloud-based analytics, it is possible to identify subtle patterns in data streams that provide advance awareness of crimes about to be committed or emergencies about to occur. In this way, the sort of intuition a seasoned police officer develops can be extended into an always-on view. For example, individual activities that seem innocuous on their own, such as shifts in travel or purchase patterns or social media activity, might collectively trigger suspicion or flag an increased risk when aggregated and analyzed by machine learning algorithms.
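
To make the aggregation idea concrete, here is a minimal sketch, assuming a generic machine learning library rather than any specific agency system, of how per-individual activity features might be pooled and scored for anomalies. The feature names, values and contamination setting are invented for illustration.

```python
# Minimal sketch: aggregate per-entity activity features and score anomalies.
# Feature names and values are illustrative; a real system would ingest
# streaming telemetry and operate under strict privacy and policy controls.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row aggregates one entity's recent activity:
# [trips_per_week, avg_purchase_amount, late_night_posts, new_contacts]
features = np.array([
    [5, 40.0, 1, 2],
    [6, 35.0, 0, 1],
    [4, 50.0, 2, 3],
    [30, 900.0, 25, 40],   # a pattern that departs sharply from the norm
])

model = IsolationForest(contamination=0.25, random_state=0)
model.fit(features)

scores = model.decision_function(features)  # lower = more anomalous
flags = model.predict(features)             # -1 marks an outlier

for row, score, flag in zip(features, scores, flags):
    status = "review" if flag == -1 else "normal"
    print(f"{row} score={score:.3f} -> {status}")
```

The point is not the specific algorithm but that aggregation turns individually innocuous signals into a pattern a model can flag.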

Knowing in the moment:

No doubt every public safety agency wishes it had an omniscient, 360-degree view of the scene it is encountering. Today, sensors coupled with real-time data ingestion and analysis (performed in a secure cloud environment) can greatly enhance this situational intelligence; combined with geospatial information, they allow first responders to correlate data and execute an appropriate response. The relevant technologies include the following (a brief geospatial correlation sketch appears after the list):

  • Connected devices: Synchronized feeds from computer-aided dispatch (CAD); records management systems (RMS); body-worn and in-vehicle camera systems; CCTV; chemical, biological, radiological and nuclear (CBRN) sensors; automated license plate recognition (ALPR); acoustic listening devices; and open-source intelligence (OSINT) all help to capture a detailed picture of the event.
  • Geo-spatial awareness: Event information, as well as objects of potential interest nearby, is mapped, providing an enhanced view of the environment. For example, additional sensors are monitored, and nearby schools and businesses identified, along with egress routes, traffic patterns, and hospitals.
  • Other relevant information and histories: By using address-specific identity and licensing data, past calls for service, and other active calls in the area, pertinent information about the residence (such as any weapons or chemicals on the premises) can be instantly surfaced. In the event of fire, chemical or environmental disasters, weather information can be overlaid to help predict at-risk areas.
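
As a rough illustration of the geospatial awareness described above, the sketch below checks which points of interest fall within a given radius of an incident using the haversine formula. The coordinates, place names and radius are made-up values, not data from any real system.

```python
# Minimal sketch: find points of interest near an incident location.
# Coordinates, place names and the radius are illustrative only.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

incident = (40.7128, -74.0060)  # hypothetical incident location
points_of_interest = [
    ("Elementary school",  40.7150, -74.0080),
    ("Hospital",           40.7306, -73.9866),
    ("Chemical warehouse", 40.7000, -74.0150),
]

radius_km = 2.0
for name, lat, lon in points_of_interest:
    distance = haversine_km(*incident, lat, lon)
    if distance <= radius_km:
        print(f"{name}: {distance:.2f} km from incident")
```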

Knowing after:

As any seasoned investigator can attest, reconstructing events afterwards can be a time-consuming process, with the potential to miss key evidence. Highly integrated data systems and machine learning can significantly reduce the man-hours required of public safety agencies to uncover evidence buried across disparate data pools.

The promise of technology—what’s next?

To learn more about the future of intelligence-led policing and law enforcement in the twenty-first century, download the free whitepaper.

Cisco Assurance services verify intent-based networking

Cisco has introduced a policy-centric layer of network analytics for the data center, campus and the wireless LAN, providing customers with additional intelligence to pinpoint problems and fix them. The latest technology represents a significant advancement in Cisco’s march toward intent-based networking.

Cisco’s Assurance analytics, launched on Tuesday, focuses on the nonpacket data the company’s Tetration network monitoring and troubleshooting software doesn’t cover. Unlike Tetration, Assurance keeps tabs on policies created in Cisco software to control the network’s infrastructure, such as switches, firewalls and load balancers.

Cisco Assurance is the latest step in the company’s intent-based networking (IBN) initiative, which is centered around creating policies that tell software what an operator wants the network to do. The application then makes the infrastructure changes.

The engine behind Cisco Assurance services

Cisco’s latest layer of analytics for the data center is called the Network Assurance Engine, which Cisco has tied to its software-defined networking (SDN) architecture, called Application Centric Infrastructure (ACI). The new technology is virtualized software that network operators deploy on any server.

Once installed, the software logs into the ACI controller, called the Application Policy Infrastructure Controller (APIC), which shares network policies, switch configurations and the data-plane state with the Assurance Engine.

At that point, the software creates a map of the entire ACI fabric and then builds a mathematical model that spans underlays, overlays and virtualization layers. The model establishes the network state, which Assurance compares to what operators want the network to do based on policies they’ve created.
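
Cisco has not published the internals of its model, but the basic idea of checking observed network state against declared intent can be shown with a toy sketch. The Policy structure and the sample records below are hypothetical simplifications, not Cisco data formats or APIs.

```python
# Toy sketch: compare declared intent with derived network state.
# The policy and state records are hypothetical, not Cisco data formats.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    src_group: str
    dst_group: str
    port: int
    action: str  # "permit" or "deny"

# What the operator intends (e.g., contracts between endpoint groups).
intent = {
    Policy("web", "app", 8080, "permit"),
    Policy("app", "db", 5432, "permit"),
    Policy("web", "db", 5432, "deny"),
}

# What was actually rendered into the fabric (as reported by the controller).
observed_state = {
    Policy("web", "app", 8080, "permit"),
    Policy("web", "db", 5432, "permit"),  # contradicts the intended deny
}

missing = intent - observed_state
unexpected = observed_state - intent

for p in sorted(missing, key=str):
    print(f"VIOLATION: intended rule not found in state: {p}")
for p in sorted(unexpected, key=str):
    print(f"VIOLATION: state contains rule with no matching intent: {p}")
```

In the real product, the comparison spans underlays, overlays and virtualization layers; the sketch only captures the set-difference idea at the policy level.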

“If a network engineer used flawed logic in expressing intent, the Assurance Engine would find that flaw when the intent is translated to network state,” said Shamus McGillicuddy, an analyst for Enterprise Management Associates, based in Boulder, Colo.

Other vendors, such as Forward Networks and Veriflow, also build models of network state and then perform analytics to spot discrepancies with a network operator’s intent. Cisco’s differentiator is the integration with its APIC policy controller, which creates a closed-loop system for ensuring operator intent matches network state, McGillicuddy said.

Knowing where an engineer’s policies have “gone off the rails” is a big help in keeping networks running smoothly, said Andrew Froehlich, the president of consulting firm West Gate Networks, based in Loveland, Colo. “For network administrators, this is a huge win, because it will help them to pinpoint where problems are occurring when people start shouting the network is slow.”

Cisco has tied the analytics engine to a troubleshooting library of what the company has identified as the most common network failure scenarios. As a result, when an engineer makes a change to the network, the Assurance Engine can determine, based on its knowledge base, where the modification could create a problem.

Initially, the Assurance Engine will cover only the Nexus 9000 switches required for an ACI fabric. Later in the quarter, Cisco plans to extend the software’s capabilities to firewalls, load balancers and other network services from Cisco or partners.

Cisco Assurance services for the campus

For the campus, Cisco has added its new analytics engine to version 1.1 of the Digital Network Architecture (DNA) Center — Cisco’s software console for distributing policy-based configurations across wired and wireless campus networks. DNA Center, which costs $77,000, requires the use of Cisco Catalyst switches and Aironet access points. Companies using DNA Center have to buy a subscription license for each network device attached to the software.

The Assurance analytics in the latest release of DNA Center draws network telemetry data from the APIC-EM controller, the campus network version of the ACI controller used in the data center. The model created from the data lets operators monitor applications, switches, routers, access points and end-user devices manufactured by Cisco partners, such as Apple.

Like the data center software, the Cisco Assurance services for the campus are focused on troubleshooting and remediation. Later in the quarter, Cisco will add similar features to the cloud-based management console of the Meraki wireless LAN. Problems the Meraki analytics will help solve include dropped traffic, latency and access-point congestion.

Today, most operators manage networks by programming switches and scores of other devices manually, usually via a command-line interface. Proponents of IBN claim the new paradigm is more flexible and agile in accommodating the needs of modern business applications. In the future, Cisco, Juniper Networks and others want to use machine learning and artificial intelligence to have networks fix common problems without operator involvement.

Despite progress vendors have made in developing IBN systems, enterprises are just beginning to roll out the methodology in their operations. Gartner predicted the number of commercial deployments will be in the hundreds through mid-2018, increasing to more than 1,000 by the end of next year.

Office 365 labels help keep content under control

Office 365 labels make it easy to classify data for compliance purposes through both manual and automatic methods.

Office 365 label policies, included with E3 subscriptions, provide a central location to configure and publish labels to Exchange Online, SharePoint Online and the services that depend on them, such as Office 365 Groups.

For example, administrators can add a label named Financial Data to the Security & Compliance Center and designate it to keep data with that label for six years. An Office 365 label policy pushes that label out to the other Microsoft services on the platform.

Users mark items in their inbox or documents with labels. Office 365 labels have policies that retain or delete data based on the organization’s needs. Personal data might get marked for deletion after a certain amount of time following a review, for example, or other information might get marked as an organizational record, so nobody can change or purge it.

The Advanced Data Governance functionality in an E5 subscription enables the automatic application of data labels based on keywords or sensitive information. Policies could mark all data with Social Security numbers as personal data or mark all data with credit card numbers as financial data.
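
Office 365 performs this detection with its own built-in sensitive information types, but the underlying idea of pattern-based classification can be sketched in a few lines. The regular expressions and label names below are simplified illustrations, not Microsoft's detection logic.

```python
# Illustrative sketch of pattern-based labeling; the regexes are simplified
# and are not Microsoft's built-in sensitive information types.
import re

RULES = [
    ("Personal Data",  re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),    # SSN-like pattern
    ("Financial Data", re.compile(r"\b(?:\d[ -]?){13,16}\b")),   # card-number-like pattern
]

def classify(text: str) -> list[str]:
    """Return the labels whose patterns match the given text."""
    return [label for label, pattern in RULES if pattern.search(text)]

print(classify("Employee SSN on file: 123-45-6789"))   # ['Personal Data']
print(classify("Card used: 4111 1111 1111 1111"))      # ['Financial Data']
```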

Label policies require some forethought to cover different types of information. Many organizations might require multiple labels to cover the types of data to retain or delete.

Office 365 labels take approximately 24 hours to appear after they are published. Automatic labeling starts after about seven days.

User or organization-wide retention policies that hold data take precedence over Office 365 labels. A policy that holds data for 10 years across the organization will overrule one that removes certain data after five years.
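
A minimal sketch of that precedence rule, assuming only the behavior described in the paragraph above (the longest applicable hold wins) rather than Microsoft's full precedence logic:

```python
# Toy illustration: the longest applicable hold wins over a shorter
# label-based retention period. Not Microsoft's full precedence logic.
def effective_retention_years(*retention_periods_years):
    """Return the longest retention period that applies to an item."""
    applicable = [years for years in retention_periods_years if years is not None]
    return max(applicable) if applicable else None

# Org-wide 10-year hold vs. a label that removes data after 5 years.
print(effective_retention_years(10, 5))  # 10 -> the data is kept for 10 years
```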

Intelligent Retail: Top Tech Trends for 2018 and Beyond – Microsoft Enterprise

Today’s innovations in technology are opening new doors for retailers. The ability to infuse data and intelligence in all areas of a business has the potential to completely reinvent retail. Here’s a visual look at the top technologies we see enabling this transformation in 2018 and beyond, and where they’ll have the greatest impact.

Infographic: 2018, the year of intelligent retail

Future of data storage technology: Transformational trends for 2018

Sometimes big changes sneak up on you, especially when you’re talking about the future of data storage technology. For example, when exactly did full-on cloud adoption become fully accepted by all those risk-averse organizations, understaffed IT shops and disbelieving business executives? I’m not complaining, but the needle of cloud acceptance tilted over sometime in the recent past without much ado. It seems everyone has let go of their fear of cloud and hybrid operations as risky propositions. Instead, we’ve all come to accept the cloud as something that’s just done.

Sure, cloud was inevitable, but I’d still like to know why it finally happened now. Maybe it’s because IT consumers expect information technology will provide whatever they want on demand. Or maybe it’s because everything IT implements on premises now comes labeled as private cloud. Influential companies, such as IBM, Microsoft and Oracle, are happy to help ease folks formerly committed to private infrastructure toward hybrid architectures that happen to use their respective cloud services.

In any case, I’m disappointed I didn’t get my invitation to the “cloud finally happened” party. But having missed cloud’s big moment, I’m not going to let other obvious yet possibly transformative trends sneak past as they go mainstream with enterprises in 2018. So when it comes to the future of data storage technology, I’ll be watching the following:

  • Containers arose out of a long-standing desire to find a better way to package applications. This year we should see enterprise-class container management reach maturity parity with virtual machine management — while not holding back any advantages containers have over VMs. Expect modern software-defined resources, such as storage, to be delivered mostly in containerized form. When combined with dynamic operational APIs, these resources will deliver highly flexible programmable infrastructures. This approach should enable vendors to package applications and their required infrastructure as units that can be redeployed — that is, blueprinted or specified in editable and versionable manifest files — enabling full environment and even data center-level cloud provisioning. Being able to deploy a data center on demand could completely transform disaster recovery, to name one use case.
  • Everyone is talking about AI, but it is machine learning that’s slowly permeating just about every facet of IT management. Although there’s a lot of hype, it’s worth figuring out how and where carefully applied machine learning could add significant value. Most machine learning is conceptually made up of advanced forms of pattern recognition, so think about where using the technology to automatically identify complex patterns would reduce time and effort. We expect the increasing availability of machine learning algorithms to give rise to storage management processes that learn and adjust operations and settings to optimize workload services, quickly identify and fix the root causes of abnormalities, and broker storage infrastructure and manage large-scale data to minimize cost (see the anomaly-detection sketch after this list).
  • Management as a service (MaaS) is gaining traction when it comes to the future of data storage technology. Already, every storage array seemingly comes with built-in call-home support replete with management analytics and performance optimization. I predict the interval for most remote vendor management services will quickly drop from today’s daily batch to five-minute streaming. I also expect cloud-hosted MaaS offerings to become the way most shops manage their increasingly hybrid architectures, and many will start to shift away from the burdens of on-premises management software. It does seem that all the big and even small management vendors are quickly ramping up MaaS versions of their offerings. For example, this fall, VMware rolled out several cloud management services that are basically online versions of familiar on-premises capabilities.
  • More storage arrays now have in-cloud equivalents that can be easily replicated and failed over to if needed. Hewlett Packard Enterprise Cloud Volumes (Nimble); IBM Spectrum Virtualize; and Oracle cloud storage, which uses Oracle ZFS Storage Appliance internally, are a few notable examples. It seems counterproductive to require in-cloud storage to run the same or a similar storage OS as on-premises storage to achieve reliable hybrid operations. After all, a main point of a public cloud is that the end user shouldn’t have to care, and in most cases can’t even know, if the underlying infrastructure service is a physical machine, virtual image, temporary container service or something else.

    However, there can be a lot of proprietary technology involved in optimizing complex, distributed storage activities, such as remote replication, delta snapshot syncing, metadata management, global policy enforcement and metadata indexing. When it comes to hybrid storage operations, there simply are no standards. Even the widely supported Amazon Web Services Simple Storage Service API for object storage isn’t actually a standard. I predict cloud-side storage wars will heat up, and we’ll see storage cloud sticker shock when organizations realize they have to pay both the storage vendor for an in-cloud instance and the cloud service provider for the platform.

  • Despite the hype, nonvolatile memory express (NVMe) isn’t going to rock the storage world, given what I heard at VMworld and other fall shows. Yes, it could provide an incremental performance boost for those critical workloads that can never get enough, but it’s not going to be anywhere near as disruptive to the future of data storage technology as NAND flash was to HDDs. Meanwhile, NVMe support will likely show up in most array lineups in 2018, eliminating any particular storage vendor advantage.

    On the other hand, a bit farther out than 2018, expect new computing architectures, purpose-built around storage-class memory (SCM). Intel’s initial releases of its “storage” type of SCM — 3D XPoint deployed on PCIe cards and accessed using NVMe — could deliver a big performance boost. But I expect an even faster “memory” type of SCM, deployed adjacent to dynamic RAM, would be far more disruptive.
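
Returning to the machine learning bullet above, here is a minimal sketch of the kind of pattern recognition that could flag abnormal storage behavior, using a rolling baseline and z-score over latency samples. The metric values, window and threshold are invented for illustration, not taken from any vendor's product.

```python
# Minimal sketch: flag abnormal storage latency with a rolling z-score.
# The latency samples, window and threshold are invented for illustration.
from statistics import mean, stdev

def find_anomalies(samples_ms, window=5, threshold=3.0):
    """Return (index, value) pairs that deviate sharply from the recent baseline."""
    anomalies = []
    for i in range(window, len(samples_ms)):
        baseline = samples_ms[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples_ms[i] - mu) / sigma > threshold:
            anomalies.append((i, samples_ms[i]))
    return anomalies

latency_ms = [1.1, 1.0, 1.2, 1.1, 1.0, 1.2, 1.1, 9.5, 1.2, 1.1]
print(find_anomalies(latency_ms))  # [(7, 9.5)]
```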

How did last year go by so fast? I don’t really know, but I’ve got my seatbelt fastened for what looks to be an even faster year ahead, speeding into the future of data storage technology.

ONC unveils plan for health information sharing framework

If you want a say in how the government deals with health data interoperability, now’s your chance.

The Office of the National Coordinator for Health IT (ONC) has released draft rules for a health information sharing plan, called the Trusted Exchange Framework, and the public has until Feb. 18 to comment.

The framework stems from the interoperability provisions of the 21st Century Cures Act of 2016, a wide-ranging law that includes many aspects of healthcare and health IT, of which the health information sharing plan is only one part.

In a conference call with reporters, ONC National Coordinator Donald Rucker, M.D., called the framework concept a “network of networks,” and he noted that Congress explicitly called for a way to link disparate existing health information networks.

“How do these networks, which are typically moving very similar sets of information, how do we get them connected?” Rucker said.

The framework, Rucker said, is a response to what he called the “national challenge” of interoperability.

“It hasn’t been easy. Folks have made some great progress, but obviously there’s a lot of work to be done,” he said.

Among the existing networks that ONC officials are looking to link within the health information sharing framework are the many health information exchanges that have sprung up since the HITECH Act of 2009 spurred data sharing with the meaningful use program.

Other such networks include vendor-driven interoperability environments, such as the one overseen by the CommonWell Health Alliance.

CommonWell’s director, Jitin Asnaani, told Politico that he thinks the ONC model is a “path to scalable nationwide interoperability.”

Mariann Yeager, CEO of another vendor network, the Sequoia Project, was quoted by Politico expressing a somewhat more neutral assessment: “Overall, the approach seems reasonable,” but “we need to better understand the details.”

ONC envisions the Trusted Exchange Network — expected to be started by the end of 2018 and fully built out by 2021 — as being used by federal agencies, individuals, healthcare providers, public and private health organizations, insurance payers and health IT developers.

The agency conceives of the network as a technical and governance infrastructure that connects health information networks around a core of “qualified health information networks” (Qualified HIN) overseen by a single, “recognized coordinating entity” to be chosen by ONC in a competitive bid process.

According to ONC, among other things, Qualified HINs must be able to locate and transmit electronic protected health information between multiple organizations and individuals; have mechanisms to audit participants’ compliance with certain core obligations; use a connectivity broker; and be neutral as to which participants are allowed to use the network.

A connectivity broker is a service offered by a Qualified HIN that provides the following (a minimal interface sketch appears after the list):

  • A master patient index to accurately identify and match patients with their health information;
  • A health records locator service;
  • Both widely broadcast and specifically directed queries for health information; and
  • Guaranteed return of electronic health information to an authorized Qualified HIN that requests it.
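
The draft does not prescribe an implementation, but the responsibilities listed above can be sketched as a simple interface. The class, method names and in-memory records below are hypothetical illustrations, not part of the Trusted Exchange Framework.

```python
# Hypothetical sketch of a connectivity broker's responsibilities; the class,
# methods and in-memory data are illustrative, not part of the ONC draft.
from collections import defaultdict

class ConnectivityBroker:
    def __init__(self):
        self.master_patient_index = {}           # demographics key -> patient ID
        self.record_locator = defaultdict(set)   # patient ID -> networks holding records

    def register(self, name: str, birth_date: str, patient_id: str, network: str) -> None:
        """Match demographics to a patient ID and note which network holds records."""
        self.master_patient_index[(name.lower(), birth_date)] = patient_id
        self.record_locator[patient_id].add(network)

    def locate_records(self, name: str, birth_date: str) -> set[str]:
        """Record locator service: which networks hold records for this patient?"""
        patient_id = self.master_patient_index.get((name.lower(), birth_date))
        return self.record_locator.get(patient_id, set())

    def broadcast_query(self, name: str, birth_date: str) -> dict[str, str]:
        """Widely broadcast a query to every network known to hold records."""
        return {net: f"query sent to {net}" for net in self.locate_records(name, birth_date)}

broker = ConnectivityBroker()
broker.register("Jane Doe", "1980-04-02", "patient-001", "Regional HIE A")
broker.register("Jane Doe", "1980-04-02", "patient-001", "Vendor Network B")
print(broker.broadcast_query("Jane Doe", "1980-04-02"))
```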

Governance for the proposed framework consists of two parts. Part A is a set of “guardrails” and principles that health information networks should adopt to support interoperability; part B is a set of minimum required legal terms and conditions detailing how network participation agreements should be constructed to ensure health information networks can communicate with each other.

Genevieve Morris, ONC’s principal deputy national coordinator, specifically acknowledged the efforts of private sector organizations in laying groundwork for health data interoperability and noted that another private organization will coordinate the health information sharing framework.

“We at ONC recognize that our role is to make sure there is equity, scalability, integrity and sustainability in health information sharing,” Morris said.