
Challenges in cloud data security lead to a lack of confidence

Enterprise cloud use is full of contradictions, according to new research.

The “2017 Global Cloud Data Security Study,” conducted by Ponemon Institute and sponsored by security vendor Gemalto, found that one reason enterprises use cloud services is for the security benefits, but respondents were divided on whether cloud data security is realistic, particularly for sensitive information.

“More companies are selecting cloud providers because they will improve security,” the report stated. “While cost and faster deployment time are the most important criteria for selecting a cloud provider, security has increased from 12% of respondents in 2015 to 26% in 2017.”

Although 74% of respondents said their organization is either a heavy or moderate user of the cloud, nearly half (43%) said they are not confident that their organization’s IT department knows about all the cloud computing services it currently uses.

In addition, less than half of respondents said their organization has defined roles and accountability for cloud data security. While this number (46%) was up in 2017 — from 43% in 2016 and from 38% in 2015 — it is still low, especially considering the type of information that is stored in the cloud the most.

Customer data is at the highest risk

According to the survey findings, the primary types of data stored in the cloud are customer information, email, consumer data, employee records and payment information. At the same time, the data considered to be most at risk, according to the report, is payment information and customer information.

“Regulated data such as payment and customer information continue to be most at risk,” the report stated. “Because of the sensitivity of the data and the need to comply with privacy and data protection regulations, companies worry most about payment and customer information.”

Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way.
Jason Hart, vice president and CTO of data protection, Gemalto

One possible explanation for why respondents feel that sensitive data is at risk is that cloud data security is tough to actually achieve.

“The cloud is storing all types of information from personally identifiable information to passwords to credit cards,” said Jason Hart, vice president and CTO of data protection at Gemalto. “In some cases, people don’t know where data is stored, and more importantly, how easy it is to access by unauthorized people. Most organizations don’t have data classification policies for security or consider the security risks; instead, they’re worrying about the perimeter. From a risk point of view, all data has a risk value.”

The biggest reason the cloud is so difficult to secure, according to the study, is that conventional infosec practices are harder to apply there; the next most cited reason is that it is harder for enterprises to assess a cloud provider's compliance with security best practices and standards. A majority of respondents (71% and 67%, respectively) cited those as the biggest challenges, and many also noted that controlling or restricting end-user access is more difficult in the cloud, which creates additional security challenges.

“To solve both of these challenges, enterprises should have control and visibility over their security throughout the cloud, and being able to enforce, develop and monitor security policies is key to ensuring integrity,” Hart said. “People will apply the appropriate controls once they’re able to understand the risks towards their data.”

Despite the challenges in cloud data security and the perceived security risks to sensitive data stored in the cloud, all-around confidence in cloud computing is on the rise — slightly. The 25% of respondents who said they are “very confident” their organization knows about all the cloud computing services it currently uses is up from 19% in 2015. Fewer people (43%) said they were “not confident” in 2017 compared to 55% in 2015.

“Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way,” Hart said. “After all, in the beginning, many companies were interested in leveraging the cloud but had significant concerns about security.”

Hart noted that, despite all the improvements to business workflows that the cloud has provided, security is still an issue. “Security has always been a concern and continues to be,” he said. “Security in the cloud can be improved if the security control is applied to the data itself.”

Ponemon sampled over 3,200 experienced IT and IT security practitioners in the U.S., the United Kingdom, Australia, Germany, France, Japan, India and Brazil who are actively involved in their organization’s use of cloud services.

Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its push into container storage this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.


“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project on GitHub at the time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, which is in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, which was fast enough for the Prometheus-based Kubernetes monitoring system that scraped telemetry data from the cluster every 10 to 30 seconds.
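To make the consumption model described above concrete, here is a minimal, hypothetical sketch that uses the Kubernetes Python client to request a persistent volume claim from a Rook-provisioned Ceph storage class. The storage class name, namespace and size are illustrative assumptions, not details from HBO's deployment.

```python
# Hypothetical sketch: claim block storage from a Rook/Ceph-backed storage class,
# so a stateful pod (such as Prometheus) gets a volume carved out of the cluster's
# pooled disks instead of an externally attached cloud volume.
# Assumes a storage class named "rook-ceph-block" exists in the cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="prometheus-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="rook-ceph-block",  # assumed Rook-provisioned class
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="monitoring", body=pvc)
```

Because the claim is satisfied from the Ceph pool rather than from a single attached disk, a pod that mounts it can be rescheduled to another node without the long unmount-and-remount cycle described above.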

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight

Kubernetes is ill-equipped to handle data storage persistence … this is the next frontier and the next biggest thing.
Illya Chekrygin, founding member, Upbound

The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it now owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing, more akin to a bet on an early stage idea. It has about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is that Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat’s Gluster-based Container Native Storage Platform, and Aniszczyk said those companies are welcome to present them to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Intune APIs in Microsoft Graph – Now generally available

With tens of thousands of enterprise mobility customers, we see a great diversity in how organizations structure their IT resources. Some choose to manage their mobility solutions internally, while others work with a managed service provider that manages them on their behalf. Regardless of the structure, our goal is to enable IT to easily design processes and workflows that increase user satisfaction and drive security and IT effectiveness.

In 2017, we unified Intune, Azure Active Directory, and Azure Information Protection admin experiences in the Azure portal (portal.azure.com) while also enabling the public preview of Intune APIs in Microsoft Graph. Today, we are taking another important step forward in our ability to offer customers more choice and capability by making Intune APIs in Microsoft Graph generally available. This opens a new set of possibilities for our customers and partners to automate and integrate their workloads to reduce deployment times and improve the overall efficiency of device management.

Intune APIs in Microsoft Graph enable IT professionals, partners, and developers to programmatically access data and controls that are available through the Azure portal. One of our partners, Crayon (based in Norway), is using Intune APIs to automate tasks with unattended authentication:

Jan Egil Ring, Lead Architect at Crayon: “The Intune API in Microsoft Graph enables users to access the same information that is available through the Azure Portal – for both reporting and operational purposes. It is an invaluable asset in our toolbelt for automating business processes such as user on- and offboarding in our customers’ tenants. Intune APIs, combined with Azure Automation, help us keep inventory tidy, giving operations updated and relevant information.”

Intune APIs now join a growing family of other Microsoft cloud services that are accessible through Microsoft Graph, including Office 365 and Azure AD. This means that you can use Microsoft Graph to connect to data that drives productivity – mail, calendar, contacts, documents, directory, devices, and more. It serves as a single interface where Microsoft cloud services can be reached through a set of REST APIs.

The scenarios that Microsoft Graph enables are expansive. To give you a better idea of what is possible with Intune APIs in Microsoft Graph, let’s look at some of the core use cases that we have already seen being utilized by our partners and customers.

Automation

Microsoft Graph allows you to connect different Microsoft cloud services and automate workflows and processes between them. It is accessible through several platforms and tools, including REST-based API endpoints and most popular programming and automation platforms (.NET, JS, iOS, Android, PowerShell). Resources (user, group, device, application, file, etc.) and policies can be queried through this API, and formerly difficult or complex questions can be addressed via straightforward queries.
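As a rough illustration of the kind of query described above, the following Python sketch lists Intune managed devices through the Microsoft Graph REST endpoint. It assumes an OAuth access token with the appropriate Intune read permission has already been acquired out of band (for example, via an Azure AD app registration); the token value and the printed properties are placeholders for illustration.

```python
# Minimal sketch: enumerate Intune managed devices via Microsoft Graph.
# ACCESS_TOKEN is assumed to have been obtained separately, e.g. through an
# Azure AD app registration granted DeviceManagementManagedDevices.Read.All.
import requests

ACCESS_TOKEN = "<token acquired out of band>"  # placeholder
url = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

devices = []
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    payload = resp.json()
    devices.extend(payload.get("value", []))
    url = payload.get("@odata.nextLink")  # follow Graph paging until exhausted

for device in devices:
    print(device.get("deviceName"), device.get("operatingSystem"), device.get("complianceState"))
```

The same pattern applies to other Intune resources exposed through Graph, such as device configurations and compliance policies.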

For example, one of our partners, PowerON Platforms (based in the UK), is using Intune APIs in Microsoft Graph to deliver solutions to its customers faster and more consistently. PowerON Platforms has created baseline deployment templates to increase the speed at which it can deploy solutions to customers. These templates are based on unique customer types and requirements, and they compress a process that would normally take two to three days down to 15 seconds. Their ability to get customers up and running is now faster than ever before.

Steve Beaumont, Technical Director at PowerON Platforms: “PowerON has developed new and innovative methods to increase the speed of our Microsoft Intune delivery and achieve consistent outputs for customers. By leveraging the power of Microsoft Graph and new Intune capabilities, PowerON’s new tooling enhances the value of Intune.”

Integration

Intune APIs in Microsoft Graph can also provide detailed user, device, and application information to other IT asset management systems. You could build custom experiences which call Microsoft Graph to configure Intune controls and policies and unify workflows across multiple services.

For example, Kloud (based in Australia) leverages Microsoft Graph to integrate Intune device management and support activities into existing central management portals. This increases Kloud’s ability to centrally manage an integrated solution for their clients, making them much more effective as an integrated solution provider.

Tom Bromby, Managing Consultant at Kloud: “Microsoft Graph allows us to automate large, complex configuration tasks on the Intune platform, saving time and reducing the risk of human error. We can store our tenant configuration in source control, which greatly streamlines the change management process, and allows for easy audit and reporting of what is deployed in the environment, what devices are enrolled and what users are consuming the service.”

Analytics

Having the right data at your fingertips is a must for busy IT teams managing diverse mobile environments. You can access Intune APIs in Microsoft Graph with PowerBI and other analytics services to create custom dashboards and reports based on Intune, Azure AD, and Office 365 data – allowing you to monitor your environment and view the status of devices and apps across several dimensions, including device compliance, device configuration, app inventory, and deployment status. With Intune Data Warehouse, you can now access historical data for up to 90 days.
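As a small, hypothetical follow-on to the query sketch in the Automation section, the snippet below shows how the returned device records could be summarized into a compliance report with pandas before being handed to Power BI or another dashboarding tool. The rows shown are placeholder data in the shape Graph returns for managed devices, not real tenant data.

```python
# Hypothetical sketch: turn managed-device records into a compliance summary.
# In practice the `devices` list would come from the Graph query shown earlier;
# the entries below are placeholder rows for illustration only.
import pandas as pd

devices = [
    {"operatingSystem": "iOS", "complianceState": "compliant"},
    {"operatingSystem": "Android", "complianceState": "noncompliant"},
    {"operatingSystem": "Windows", "complianceState": "compliant"},
]

df = pd.DataFrame(devices)

# Count devices per operating system and compliance state,
# e.g. to feed a device-compliance tile on a dashboard.
summary = df.groupby(["operatingSystem", "complianceState"]).size().unstack(fill_value=0)
print(summary)
```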

For example, Netrix, LLC (based in the US) leverages Microsoft Graph to create automated solutions that improve end-user experiences and increase reporting accuracy for more effective device management. These investments increase their efficiency and overall customer satisfaction.

Tom Lilly, Technical Team Lead at Netrix, LLC: “By using Intune APIs in Microsoft Graph, we’ve been able to provide greater insights and automation to our clients. We are able to surface the data they really care about and deliver it to the right people, while keeping administrative costs to a minimum. As an integrator, this also allows Netrix to provide repetitive, manageable solutions, while improving our time to delivery, helping get our customers piloted or deployed quicker.”

We are extremely excited to see how you will use these capabilities to improve your processes and workflows as well as to create custom solutions for your organization and customers. To get started, you can check out the documentation on how to use Intune and Azure Active Directory APIs in Microsoft Graph, watch our Microsoft Ignite presentation on this topic, and leverage sample PowerShell scripts.

Deployment note: Intune APIs in Microsoft Graph are being updated to their GA version today. The worldwide rollout should complete within the next few days.

Please note: Use of a Microsoft online service requires a valid license. Therefore, accessing EMS, Microsoft Intune, or Azure Active Directory Premium features via Microsoft Graph API requires paid licenses of the applicable service and compliance with Microsoft Graph API Terms of Use.


What is happening with AI in cybersecurity?

Jon Oltsik, an analyst with Enterprise Strategy Group in Milford, Mass., wrote about the growing role of AI in cybersecurity. Two recent announcements sparked his interest.

The first was from Palo Alto Networks, which rolled out Magnifier, a behavioral analytics system. The second came from Alphabet, which launched Chronicle, a cybersecurity intelligence platform. Both rely on AI and machine learning to sort through massive amounts of data. Vendors are innovating to bring AI in cybersecurity to market, and ESG sees growing demand for these forms of advanced analytics.

Twelve percent of enterprises have already deployed AI in cybersecurity. ESG research found 29% of respondents want to accelerate incident detection, while similar numbers demand faster incident response or the ability to better identify and communicate risk to the business. An additional 22% want AI cybersecurity systems to improve situational awareness.

Some AI applications work on a stand-alone basis, often tightly coupled with security information and event management or endpoint detection and response; in other cases, machine learning is applied as a helper app. This is true of Bay Dynamics’ partnership with Symantec, applying Bay’s AI engine to Symantec data loss prevention.

Oltsik cautioned that most chief information security officers (CISOs) don’t understand AI algorithms and data science, so vendors will need to focus on what they can offer to enhance security. “In the future, AI could be a cybersecurity game-changer, and CISOs should be open to this possibility. In the meantime, don’t expect many organizations to throw the cybersecurity baby out with the AI bath water,” Oltsik said.

Read more of Oltsik’s ideas about AI in cybersecurity.

Simplify networks for improved security and performance

Russ White, blogging in Rule 11 Tech, borrowed a quote from a fellow blogger. “The problem is that once you give a monkey a club, he is going to hit you with it if you try to take it away from him.”

In this analogy, the club is software intended to simplify the work of a network engineer. But in reality, White said, making things easier can also create a new attack surface that cybercriminals can exploit.

To that end, White recommended removing unnecessary components and code to reduce the attack surface of a network. Routing protocols, quality-of-service controls and transport protocols can all be trimmed back, along with some virtual networks and overlays.

In addition to beefing up security, resilience is another key consideration, White said. When engineers think of network failure, their first thoughts include bugs in the code, failed connectors and faulty hardware. In reality, however, White said most failures stem from misconfiguration and user error.

“Giving the operator too many knobs to solve a single problem is the equivalent of giving the monkey a club. Simplicity in network design has many advantages — including giving the monkey a smaller club,” he said.

Explore more from White about network simplicity.

BGP in data centers using EVPN

Ivan Pepelnjak, writing in ipSpace, focused on running Ethernet VPN, or EVPN, in a single data center fabric with either VXLAN or MPLS encapsulation. He contrasted this model with running EVPN between data center fabrics, where most implementations require domain isolation at the fabric edge.

EVPN is a Border Gateway Protocol address family that can be carried over external BGP (EBGP) or internal BGP (IBGP) sessions, and engineers can use either one to build EVPN infrastructure within a single data center fabric, Pepelnjak said.

He cautioned, however, that spine switches shouldn’t be involved in intra-fabric customer traffic forwarding. That means the BGP next hop in an EVPN update can’t be changed on the path between the ingress and egress switch, he said; it must always point to the egress fabric edge switch.

To exchange EVPN updates across EBGP sessions within a data center fabric, the implementation needs to support functionality similar to MPLS VPN. Pepelnjak added that many vendors have not fully integrated this functionality for EVPN, and users often run into issues that can require numerous configuration changes.

Pepelnjak recommended avoiding vendors that market EBGP between leaf and spine switches or IBGP sessions on top of intra-fabric EBGP. If engineers are stuck with an inflexible vendor, it may be best to use an Interior Gateway Protocol as the routing protocol.

Dig deeper into Pepelnjak’s ideas on EVPN.

Azure ExpressRoute updates – New partnerships, monitoring and simplification

Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission critical applications and services.

Cisco to build Azure ExpressRoute practice

As we continue to grow the ExpressRoute experience in Azure, we’ve found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.

To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.

Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.

Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer’s on-premises end of the network.

New monitoring options for ExpressRoute

To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics, including availability, latency and throughput, in addition to providing a graphical view of the network topology.

NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.

We will continue to enhance the footprint, features and functionality of NPM for ExpressRoute to provide richer monitoring capabilities.


Figure 1: Network Performance Monitor and Endpoint monitoring simplifies ExpressRoute monitoring

Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage but also to SaaS services such as Office 365 over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.

Simplifying ExpressRoute peering

To further simplify management and configuration of ExpressRoute, we have merged public and Microsoft peering. Azure PaaS services such as Azure Storage and Azure SQL are now available on Microsoft peering, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure virtual networks remains on private peering.


Figure 2: ExpressRoute with Microsoft peering and private peering

ExpressRoute, using BGP, provides Microsoft prefixes to your internal network. Route filters allow you to select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g. Azure US West, Azure Europe North, Azure East Asia). Previously this capability was only available on ExpressRoute Premium. We will be enabling Microsoft peering configuration for standard ExpressRoute circuits in mid-February.


New ExpressRoute locations

ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.

AI, machine learning ahead for Box content management platform

Content management spans enterprise content management, web content management, cloud, on premises and all the combinations thereof. What used to be a well-defined, vendor-versus-vendor competition is now spread across multiple categories, platforms and hosting sites.

Box content management is more of an enterprise than web content tool, competing with Documentum, SharePoint and OpenText more than Drupal and WordPress. It is all cloud, and it has become an API-driven collaboration platform, as well. We caught up with Jeetu Patel, Box’s chief product officer, to discuss his company’s roadmap for competing in this changing market.

How does Box content management and its 57 million users fit into the overall enterprise content management (ECM) market right now?

Jeetu Patel: There’s a convergence of multiple different markets, because what’s happened, unfortunately, in this industry, people have taken a market-centric view, rather than a customer- and user-centric view. When you think about what people want to do, they might start with a piece of content that’s unstructured — like yourself, with this article — and [share and collaborate with people inside and outside of the organization and eventually] publish it to the masses.


During that entire lifecycle, it’s pretty important your organization is maintaining intellectual property, governance and security around it. You might even have a custom mobile app, and you might want to make sure how the content is coming from the same content infrastructure. Look at all the people served in this lifecycle; eventually, that content might get archived and disposed. Typically, there’s like 17 different systems that piece of content might go through to have all these things done with it. This seemed like a counterproductive way to work.

Our thinking from the beginning was, ‘Shouldn’t there be a single source of truth where a set of policies can be applied to content, a place where you could collaborate and work on content, but if you have other applications that you’re using, you should still be able to integrate them into the content and share with people inside and outside the organization?’ That’s what we think this market should evolve into … and we call that cloud content management.

Smart search, AI, those things are on other vendors’ product roadmaps. What’s ahead for Box content management?

Patel: The three personas we serve are the end users, like you writing this article; enterprise IT admins and the enterprise security professionals; and the developer who’s building an application, but doesn’t want to rebuild content management infrastructure every time they serve up a piece of content, [but instead use API calls such as Box’s high-fidelity viewer for videos].

When we think about the roadmap, we have to be thinking at scale, for millions of users across hundreds of petabytes of data, and make it frictionless so that people who aren’t computer jockeys — just average people — use our system. One of our key philosophies is identifying megatrends and making them tailwinds for our business.

Looking back 13 years ago when we started, some of the big trends that have happened were cloud and mobile. We used those to propel our business forward. Now, it’s artificial intelligence and machine learning. In the next five years, content management is going to look completely different than it has the last 25.

In the next five years, content management is going to look completely different than it has the last 25.
Jeetu Patel, chief product officer at Box

Content’s going to get meaningfully more intelligent. Machine learning algorithms should, for example, detect all the elements in an image, OCR [optical character recognition] the text and automatically populate the metadata without doing any manual data entry. Self-describing. [Sentiment analysis] when you’re recording every single call at a call center. Over time, the ultimate nirvana is that you’ll never have to search for and open up an unstructured piece of content — you just get an answer. We want to make sure we take advantage of all those innovations and bring them to Box.

How does Box content management compete with SharePoint, which is ingrained in many organizations and must be a formidable competitor, considering the always-expanding popularity of SharePoint Online?

Patel: Microsoft is an interesting company. They are one of our biggest competitors, with SharePoint and OneDrive, and one of our biggest partners, with Azure. We partner with them very closely for Azure and the Office 365 side of the house. And we think, [with Box migrations,] there’s an area where there’s an opportunity for customers to [reduce] fragmented [SharePoint] infrastructure and have a single platform to make it easy for user, administrator and developer to work end to end … and modernizing their business processes, as well.

Modernize their business processes?

Patel: Once you migrate the content over to Box, there’s a few things that happen you weren’t able to do in the past. For example, you can now make sure users can access content anywhere on any device, which you couldn’t do in the past without going through a lot of hoops. Try sharing a piece of content with someone outside of your organization that you started in OneDrive and moved over to SharePoint. They actually have a troubleshooting page for it. It’s not just SharePoint; it’s any legacy ECM system that has this problem. We want to make sure we solve that.

Hyper engine aims to give enterprise Tableau analytics a boost

Tableau is continuing its focus on enterprise functionality, rolling out several new features the company hopes will make its data visualization and analytics software more attractive as an enterprise tool and broaden its appeal beyond its existing base of line-of-business users.

In particular, the new Tableau 10.5 release, launched last week, includes the long-awaited Hyper in-memory compute engine. Company officials said Hyper will bring vastly improved speeds to the software and support new Tableau analytics use cases, like internet of things (IoT) analytics applications.

The faster speeds will be particularly noticeable, they said, when users refresh Tableau data extracts, which are in-memory snapshots of data from a source file. Extracts can reach large sizes, and refreshing larger files took time with previous releases.

“We extract every piece of data that we work with going to production, so we’re really looking forward to [Hyper],” Jordan East, a BI data analyst at General Motors, said in a presentation at Tableau Conference 2017, held in Las Vegas last October.

East works in GM’s global telecom organization, which supports the company’s communications needs. His team builds BI reports on the overall health of the communications system. The amount of data coming in has grown substantially over the year, and keeping up with the increasing volume of data has been a challenge, he said.

Extracting the data, rather than connecting Tableau to live data, helped improve report performance. East said he hopes the extra speed of Hyper will enable dashboards to be used in more situations, like live meetings.

Faster extracts mean fresher analytics

The Tableau 10.5 update also includes support for running Tableau Server on Linux, new governance features and other additions. But Hyper is getting most of the attention. Potentially, faster extract refreshes mean customers will refresh extracts more frequently and be able to do their Tableau analytics on fresher data.

“If Hyper lives up to demonstrations and all that has been promised, it will be an incredible enhancement for customers that are struggling with large complex data,” said Rita Sallam, a Gartner analyst.

Sallam’s one caveat was that customers who are doing Tableau analytics on smaller data sets will see less of a performance upgrade, because their extracts likely already refresh and load quickly. She said she believes the addition of Hyper will make it easier to analyze data stored in a Hadoop data lake, which was typically too big to efficiently load into Tableau before Hyper. This will give analysts access to larger, more complex data sets and enable deeper analytics, Sallam said.

Focus on enterprise functionality risky

Looking at the bigger picture, though, Sallam said there is some risk for Tableau in pursuing an enterprise focus. She said moving beyond line-of-business deployments and doubling down on enterprise functionality was a necessary move to attract and retain customers. But, at the same time, the company risks falling behind on analytics functionality.

Sallam said the features in analytics software that will be most important in the years ahead will be things like automated machine learning and natural language querying and generation. By prioritizing the nuts and bolts of enterprise functionality, Tableau hasn’t invested as much in these types of features, Sallam said.

“If they don’t [focus on enterprise features], they’re not going to be able to respond to customers that want to deploy Tableau at scale,” Sallam said. “But that does come with a cost, because now they can’t fully invest in next-generation features, which are going to be the defining features of user experience two or three years from now.”

Cybersecurity skills shortage continues to worsen

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., said the global cybersecurity skills shortage is bad and getting worse. Citing ESG’s annual survey on the state of IT, Oltsik said skills shortages among various networking disciplines have not eased, and the cybersecurity shortage is particularly acute. For instance, in 2014, 23% of respondents said their organization faced a problematic shortage of cybersecurity skills; in the most recent survey, which polled more than 620 IT and cybersecurity professionals, 51% said the same. The data aligns with the results of an ESG-ISSA survey in 2017 that found 70% of cybersecurity professionals reporting their organizations were affected by the skills shortage, resulting in increased workloads and little time for planning.

“I can only conclude that the cybersecurity skills shortage is getting worse,” Oltsik said. “Given the dangerous threat landscape and a relentless push toward digital transformation, this means that the cybersecurity skills shortage represents an existential threat to developed nations that rely on technology as the backbone of their economy.”

Chief information security officers (CISOs), Oltsik said, need to consider the implications of the cybersecurity skills shortage. Smart leaders are doing their best to cope by consolidating systems, such as integrated security operations and analytics platform architecture, and adopting artificial intelligence and machine learning. In other cases, CISOs automate processes, adopt a portfolio management approach and increase staff compensation, training and mentorship to improve retention.

Dig deeper into Oltsik’s ideas on the cybersecurity skills shortage.

Building up edge computing power

Erik Heidt, an analyst at Gartner, spent part of 2017 discussing edge computing challenges with clients as they worked to improve computational power for IoT projects. Heidt said a variety of factors drive compute to the edge (and in some cases, away), including availability, data protection, cycle times and data stream filtering. In some cases, computing capability is added directly to an IoT endpoint. But in many situations, such as data protection, it may make more sense to host an IoT platform in an on-premises location or private data center.

Yet the private approach poses challenges, Heidt said, including licensing costs, capacity issues and hidden costs from IoT platform providers that limit users to certified hardware. Heidt recommends purchasing teams look carefully at what functions are being offered by vendors, as well as considering data sanitization strategies to enable more use of the cloud. “There are problems that can only be solved by moving compute, storage and analytical capabilities close to or into the IoT endpoint,” Heidt said.

Read more of Heidt’s thoughts on the shift to the edge.

Meltdown has parallels in networking

Ivan Pepelnjak, writing in ipSpace, responded to a reader’s question about how hardware issues become software vulnerabilities in the wake of the Meltdown vulnerability. According to Pepelnjak, there has always been privilege-level separation between kernel and user space, and kernels have always been mapped into the high end of the user-space address range. In more recent CPUs, however, the operations needed to execute even a single instruction often flow through a pipeline holding dozens of other instructions, and that is what exposes the vulnerability an attack like Meltdown can exploit.

In these situations, the kernel-space location test eventually fails when the access is checked against the access control list (ACL), but by then other parts of the CPU have already speculatively executed the instructions that read the memory location.

Parallelized execution isn’t unique to CPU vendors. Pepelnjak said at least one hardware vendor created a version of IPv6 neighbor discovery that suffers from the same vulnerability. In response, vendors are rolling out operating system patches removing the kernel from user space. This approach prevents exploits but no longer gives the kernel direct access to the user space when it is needed. As a result, in many cases the kernel needs to change virtual-to-physical page tables, mapping user space into kernel page tables. Every single system call, even reading a byte from one file, means the kernel page tables need to be unmapped.

Explore more of Pepelnjak’s thoughts on network hardware vulnerabilities.