
Challenges in cloud data security lead to a lack of confidence

Enterprise cloud use is full of contradictions, according to new research.

The “2017 Global Cloud Data Security Study,” conducted by Ponemon Institute and sponsored by security vendor Gemalto, found that one reason enterprises use cloud services is for the security benefits, but respondents were divided on whether cloud data security is realistic, particularly for sensitive information.

“More companies are selecting cloud providers because they will improve security,” the report stated. “While cost and faster deployment time are the most important criteria for selecting a cloud provider, security has increased from 12% of respondents in 2015 to 26% in 2017.”

Although 74% of respondents said their organization is either a heavy or moderate user of the cloud, nearly half (43%) said they are not confident that their organization’s IT department knows about all the cloud computing services it currently uses.

In addition, less than half of respondents said their organization has defined roles and accountability for cloud data security. While this number (46%) was up in 2017, from 43% in 2016 and 38% in 2015, it is still low, especially considering the types of information most often stored in the cloud.

Customer data is at the highest risk

According to the survey findings, the primary types of data stored in the cloud are customer information, email, consumer data, employee records and payment information. At the same time, the data considered to be most at risk, according to the report, is payment information and customer information.

“Regulated data such as payment and customer information continue to be most at risk,” the report stated. “Because of the sensitivity of the data and the need to comply with privacy and data protection regulations, companies worry most about payment and customer information.”

Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way.
Jason Hart, vice president and CTO of data protection, Gemalto

One possible explanation for why respondents feel that sensitive data is at risk is that cloud data security is tough to actually achieve.

“The cloud is storing all types of information from personally identifiable information to passwords to credit cards,” said Jason Hart, vice president and CTO of data protection at Gemalto. “In some cases, people don’t know where data is stored, and more importantly, how easy it is to access by unauthorized people. Most organizations don’t have data classification policies for security or consider the security risks; instead, they’re worrying about the perimeter. From a risk point of view, all data has a risk value.”

The biggest reason the cloud is so difficult to secure, according to the study, is that conventional infosec practices are harder to apply there. The next most cited reason is that it is harder for enterprises to assess a cloud provider's compliance with security best practices and standards. A majority of respondents (71% and 67%, respectively) named those as the biggest challenges, and they also noted that controlling or restricting end-user access to the cloud is more difficult, which poses security challenges of its own.

“To solve both of these challenges, enterprises should have control and visibility over their security throughout the cloud, and being able to enforce, develop and monitor security policies is key to ensuring integrity,” Hart said. “People will apply the appropriate controls once they’re able to understand the risks towards their data.”

Despite the challenges in cloud data security and the perceived security risks to sensitive data stored in the cloud, all-around confidence in cloud computing is on the rise — slightly. The 25% of respondents who said they are “very confident” their organization knows about all the cloud computing services it currently uses is up from 19% in 2015. Fewer people (43%) said they were “not confident” in 2017 compared to 55% in 2015.

“Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way,” Hart said. “After all, in the beginning, many companies were interested in leveraging the cloud but had significant concerns about security.”

Hart noted that, despite all the improvements to business workflows that the cloud has provided, security is still an issue. “Security has always been a concern and continues to be,” he said. “Security in the cloud can be improved if the security control is applied to the data itself.”

Ponemon sampled over 3,200 experienced IT and IT security practitioners in the U.S., the United Kingdom, Australia, Germany, France, Japan, India and Brazil who are actively involved in their organization’s use of cloud services.

Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its quest to develop container storage products this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.


“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project in GitHub at that time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, which is in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”
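For readers who want to see what that declarative pattern looks like, here is a minimal, hypothetical sketch of declaring a Rook storage cluster as a Kubernetes custom resource using the official Kubernetes Python client. The resource group, version and field names are illustrative assumptions rather than Rook's exact schema, which has changed across releases.

```python
from kubernetes import client, config

def create_rook_cluster():
    # Load credentials from the active kubectl context.
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # Declarative description of a storage cluster. Group, version and
    # field names here are illustrative assumptions, not Rook's exact schema.
    cluster = {
        "apiVersion": "ceph.rook.io/v1",
        "kind": "CephCluster",
        "metadata": {"name": "rook-ceph", "namespace": "rook-ceph"},
        "spec": {
            # Pool all unused disks on every node into one overlay, the same
            # way Kubernetes pools CPU and memory.
            "storage": {"useAllNodes": True, "useAllDevices": True},
            # Run three Ceph monitors so the cluster survives a node loss.
            "mon": {"count": 3},
        },
    }

    api.create_namespaced_custom_object(
        group="ceph.rook.io",
        version="v1",
        namespace="rook-ceph",
        plural="cephclusters",
        body=cluster,
    )

if __name__ == "__main__":
    create_rook_cluster()
```

The point of the pattern is that storage is declared the same way as any other Kubernetes object; the Rook operator watches for these resources and reconciles the cluster's disks to match.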

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, which was suitable for Prometheus monitoring that scraped telemetry data from the cluster every 10 to 30 seconds.

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight

Kubernetes is ill-equipped to handle data storage persistence … this is the next frontier and the next biggest thing.
Illya Chekrygin, founding member, Upbound

The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it now owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing, more akin to a bet on an early stage idea. It has about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is that Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat's Gluster-based Container Native Storage Platform, and Aniszczyk said those companies are welcome to present such projects to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Intune APIs in Microsoft Graph – Now generally available

With tens of thousands of enterprise mobility customers, we see great diversity in how organizations structure their IT resources. Some choose to manage their mobility solutions internally, while others work with a managed service provider to manage them on their behalf. Regardless of the structure, our goal is to enable IT to easily design processes and workflows that increase user satisfaction and drive security and IT effectiveness.

In 2017, we unified Intune, Azure Active Directory, and Azure Information Protection admin experiences in the Azure portal (portal.azure.com) while also enabling the public preview of Intune APIs in Microsoft Graph. Today, we are taking another important step forward in our ability to offer customers more choice and capability by making Intune APIs in Microsoft Graph generally available. This opens a new set of possibilities for our customers and partners to automate and integrate their workloads to reduce deployment times and improve the overall efficiency of device management.

Intune APIs in Microsoft Graph enable IT professionals, partners, and developers to programmatically access data and controls that are available through the Azure portal. One of our partners, Crayon (based in Norway), is using Intune APIs to automate tasks with unattended authentication:

Jan Egil Ring, Lead Architect at Crayon: “The Intune API in Microsoft Graph enables users to access the same information that is available through the Azure Portal – for both reporting and operational purposes. It is an invaluable asset in our toolbelt for automating business processes such as user on- and offboarding in our customers’ tenants. Intune APIs, combined with Azure Automation, help us keep inventory tidy, giving operations updated and relevant information.”

Intune APIs now join a growing family of other Microsoft cloud services that are accessible through Microsoft Graph, including Office 365 and Azure AD. This means that you can use Microsoft Graph to connect to data that drives productivity – mail, calendar, contacts, documents, directory, devices, and more. It serves as a single interface where Microsoft cloud services can be reached through a set of REST APIs.

The scenarios that Microsoft Graph enables are expansive. To give you a better idea of what is possible with Intune APIs in Microsoft Graph, let’s look at some of the core use cases that we have already seen being utilized by our partners and customers.

Automation

Microsoft Graph allows you to connect different Microsoft cloud services and automate workflows and processes between them. It is accessible through several platforms and tools, including REST-based API endpoints and most popular programming and automation platforms (.NET, JS, iOS, Android, PowerShell). Resources (user, group, device, application, file, etc.) and policies can be queried through this API, and formerly difficult or complex questions can be answered via straightforward queries.
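As a hedged illustration of that kind of query, the sketch below uses Python and the plain REST endpoint to list managed devices and their compliance state. It assumes an OAuth 2.0 access token with the appropriate Intune read permissions has already been acquired (for example, via the MSAL library); the token-acquisition step is omitted.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_managed_devices(access_token):
    """Return (deviceName, complianceState) for every Intune-enrolled device."""
    headers = {"Authorization": "Bearer " + access_token}
    url = GRAPH + "/deviceManagement/managedDevices"
    devices = []
    while url:  # Graph pages large collections; follow @odata.nextLink.
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        payload = resp.json()
        for d in payload.get("value", []):
            devices.append((d.get("deviceName"), d.get("complianceState")))
        url = payload.get("@odata.nextLink")
    return devices
```

The same pattern, an authenticated GET plus @odata.nextLink paging, applies to most Graph collections, which is what makes automating across services straightforward.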

For example, one of our partners, PowerON Platforms (based in the UK), is using Intune APIs in Microsoft Graph to deliver solutions to its customers faster and more consistently. PowerON Platforms has created baseline deployment templates, based on unique customer types and requirements, that increase the speed at which it can deploy solutions: a process that normally takes two to three days is compressed down to 15 seconds. Its ability to get customers up and running is now faster than ever before.

Steve Beaumont, Technical Director at PowerON Platforms: “PowerON has developed new and innovative methods to increase the speed of our Microsoft Intune delivery and achieve consistent outputs for customers. By leveraging the power of Microsoft Graph and new Intune capabilities, PowerON’s new tooling enhances the value of Intune.”

Integration

Intune APIs in Microsoft Graph can also provide detailed user, device, and application information to other IT asset management systems. You could build custom experiences that call Microsoft Graph to configure Intune controls and policies and unify workflows across multiple services.

For example, Kloud (based in Australia) leverages Microsoft Graph to integrate Intune device management and support activities into existing central management portals. This increases Kloud’s ability to centrally manage an integrated solution for their clients, making them much more effective as an integrated solution provider.

Tom Bromby, Managing Consultant at Kloud: “Microsoft Graph allows us to automate large, complex configuration tasks on the Intune platform, saving time and reducing the risk of human error. We can store our tenant configuration in source control, which greatly streamlines the change management process, and allows for easy audit and reporting of what is deployed in the environment, what devices are enrolled and what users are consuming the service.”

Analytics

Having the right data at your fingertips is a must for busy IT teams managing diverse mobile environments. You can access Intune APIs in Microsoft Graph with Power BI and other analytics services to create custom dashboards and reports based on Intune, Azure AD, and Office 365 data – allowing you to monitor your environment and view the status of devices and apps across several dimensions, including device compliance, device configuration, app inventory, and deployment status. With Intune Data Warehouse, you can now access historical data for up to 90 days.

For example, Netrix, LLC (based in the US) leverages Microsoft Graph to create automated solutions that improve end-user experiences and increase reporting accuracy for more effective device management. These investments increase its efficiency and overall customer satisfaction.

Tom Lilly, Technical Team Lead at Netrix, LLC: “By using Intune APIs in Microsoft Graph, we’ve been able to provide greater insights and automation to our clients. We are able to surface the data they really care about and deliver it to the right people, while keeping administrative costs to a minimum. As an integrator, this also allows Netrix to provide repetitive, manageable solutions, while improving our time to delivery, helping get our customers piloted or deployed quicker.”

We are extremely excited to see how you will use these capabilities to improve your processes and workflows as well as to create custom solutions for your organization and customers. To get started, you can check out the documentation on how to use Intune and Azure Active Directory APIs in Microsoft Graph, watch our Microsoft Ignite presentation on this topic, and leverage sample PowerShell scripts.

Deployment note: Intune APIs in Microsoft Graph are being updated to their GA version today. The worldwide rollout should complete within the next few days.

Please note: Use of a Microsoft online service requires a valid license. Therefore, accessing EMS, Microsoft Intune, or Azure Active Directory Premium features via Microsoft Graph API requires paid licenses of the applicable service and compliance with Microsoft Graph API Terms of Use.


What is happening with AI in cybersecurity?

Jon Oltsik, an analyst with Enterprise Strategy Group in Milford, Mass., wrote about the growing role of AI in cybersecurity. Two recent announcements sparked his interest.

The first was by Palo Alto Networks, which rolled out Magnifier, a behavioral analytics system. The second was Alphabet's launch of Chronicle, a cybersecurity intelligence platform. Both rely on AI and machine learning to sort through massive amounts of data. Vendors are innovating to bring AI in cybersecurity to market, and ESG sees growing demand for these forms of advanced analytics.

Twelve percent of enterprises have already deployed AI in cybersecurity. ESG research found 29% of respondents want to accelerate incident detection, while similar numbers demand faster incident response or the ability to better identify and communicate risk to the business. An additional 22% want AI cybersecurity systems to improve situational awareness.

Some AI applications work on a stand-alone basis, often tightly coupled with security information and event management or endpoint detection and response; in other cases, machine learning is applied as a helper app. This is true of Bay Dynamics’ partnership with Symantec, applying Bay’s AI engine to Symantec data loss prevention.

Oltsik cautioned that most chief information security officers (CISOs) don't understand AI algorithms and data science, so vendors will need to focus on what they can offer to enhance security. “In the future, AI could be a cybersecurity game-changer, and CISOs should be open to this possibility. In the meantime, don’t expect many organizations to throw the cybersecurity baby out with the AI bath water,” Oltsik said.

Read more of Oltsik’s ideas about AI in cybersecurity.

Simplify networks for improved security and performance

Russ White, blogging in Rule 11 Tech, borrowed a quote from a fellow blogger. “The problem is that once you give a monkey a club, he is going to hit you with it if you try to take it away from him.”

In this analogy, the club is software intended to simplify the work of a network engineer. But in reality, White said, making things easier can also create a new attack surface that cybercriminals can exploit.

To that end, White recommended removing unnecessary components and code to reduce the attack surface of a network. Routing protocols, quality-of-service controls and transport protocols can all be trimmed back, along with some virtual networks and overlays.

Beyond beefing up security, resilience is another key consideration, White said. When engineers think of network failure, their first thoughts include bugs in the code, failed connectors and faulty hardware. In reality, however, White said most failures stem from misconfiguration and user error.

“Giving the operator too many knobs to solve a single problem is the equivalent of giving the monkey a club. Simplicity in network design has many advantages — including giving the monkey a smaller club,” he said.

Explore more from White about network simplicity.

BGP in data centers using EVPN

Ivan Pepelnjak, writing in ipSpace, focused on running Ethernet VPN, or EVPN, in a single data center fabric with either VXLAN or MPLS encapsulation. He contrasted this model with running EVPN between data center fabrics, where most implementations require domain isolation at the fabric edge.

EVPN is a Border Gateway Protocol address family that can run over external BGP (EBGP) or internal BGP (IBGP) sessions. Within a single data center fabric, engineers can use either IBGP or EBGP to build the EVPN infrastructure, Pepelnjak said.

He cautioned, however, that spine switches shouldn't be involved in intra-fabric customer traffic forwarding. The BGP next hop in an EVPN update can't be changed on the path between the ingress and egress switches, he said; instead, it must always point to the egress fabric edge switch.

To exchange EVPN updates across EBGP sessions within a data center fabric, the implementation needs to support functionality similar to MPLS VPN. Pepelnjak added that many vendors have not fully integrated EVPN, and users often run into issues that can result in numerous configuration changes.

Pepelnjak recommended avoiding vendors that market EBGP between leaf-and-spine switches or IBGP sessions on top of intra-fabric EBGP. If engineers are stuck with an inflexible vendor, it may be best to use an Interior Gateway Protocol as the routing protocol.

Dig deeper into Pepelnjak’s ideas on EVPN.

Azure ExpressRoute updates – New partnerships, monitoring and simplification

Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission critical applications and services.

Cisco to build Azure ExpressRoute practice

As we continue to grow the ExpressRoute experience in Azure, we’ve found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.

To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.

Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.

Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer's on-premises end of the network.

New monitoring options for ExpressRoute

To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics, including availability, latency, and throughput, in addition to providing a graphical view of the network topology.

NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.

We will continue to enhance the footprint, features and functionality of NPM for ExpressRoute to provide richer monitoring capabilities.

Figure 1: Network Performance Monitor and Endpoint monitoring simplify ExpressRoute monitoring

Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage but also to SaaS services such as Office 365 over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.
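As a toy illustration of what those metrics mean, the following Python sketch computes latency, jitter and packet loss from a list of raw round-trip-time probes. This is not how NPM itself is implemented; it just makes the measurements concrete.

```python
from statistics import mean

def summarize_probes(rtts_ms):
    """rtts_ms: round-trip times in milliseconds; None marks a lost probe."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    latency = mean(received)
    # Jitter as the mean absolute difference between consecutive samples.
    diffs = [abs(a - b) for a, b in zip(received, received[1:])]
    jitter = mean(diffs) if diffs else 0.0
    return {"latency_ms": latency, "jitter_ms": jitter, "loss_pct": loss_pct}

# One probe in five lost -> 20% loss, roughly 25 ms average latency.
print(summarize_probes([24.1, 25.0, None, 23.8, 30.2]))
```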

Simplifying ExpressRoute peering

To further simplify the management and configuration of ExpressRoute, we have merged public and Microsoft peerings. Azure PaaS services such as Azure Storage and Azure SQL are now available on Microsoft peering, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure virtual networks remains on private peering.

Figure 2: ExpressRoute with Microsoft peering and private peering

ExpressRoute, using BGP, provides Microsoft prefixes to your internal network. Route filters allow you to select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g. Azure US West, Azure Europe North, Azure East Asia). Previously this capability was only available on ExpressRoute Premium. We will be enabling Microsoft peering configuration for standard ExpressRoute circuits in mid-February.


New ExpressRoute locations

ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.

AI, machine learning ahead for Box content management platform

Content management spans enterprise content management, web content management, cloud, on-premises and all the combinations thereof. What used to be a well-defined, vendor-versus-vendor competition is now spread across multiple categories, platforms and hosting sites.

Box content management is more of an enterprise than web content tool, competing with Documentum, SharePoint and OpenText more than Drupal and WordPress. It is all cloud, and it has become an API-driven collaboration platform, as well. We caught up with Jeetu Patel, Box’s chief product officer, to discuss his company’s roadmap for competing in this changing market.

How do Box content management and its 57 million users fit into the overall enterprise content management (ECM) market right now?

Jeetu Patel: There’s a convergence of multiple different markets, because what’s happened, unfortunately, in this industry, people have taken a market-centric view, rather than a customer- and user-centric view. When you think about what people want to do, they might start with a piece of content that’s unstructured — like yourself, with this article — and [share and collaborate with people inside and outside of the organization and eventually] publish it to the masses.


During that entire lifecycle, it’s pretty important your organization is maintaining intellectual property, governance and security around it. You might even have a custom mobile app, and you might want to make sure how the content is coming from the same content infrastructure. Look at all the people served in this lifecycle; eventually, that content might get archived and disposed. Typically, there’s like 17 different systems that piece of content might go through to have all these things done with it. This seemed like a counterproductive way to work.

Our thinking from the beginning was, ‘Shouldn’t there be a single source of truth where a set of policies can be applied to content, a place where you could collaborate and work on content, but if you have other applications that you’re using, you should still be able to integrate them into the content and share with people inside and outside the organization?’ That’s what we think this market should evolve into … and we call that cloud content management.

Smart search, AI, those things are on other vendors’ product roadmaps. What’s ahead for Box content management?

Patel: The three personas we serve are the end users, like you writing this article; enterprise IT admins and the enterprise security professionals; and the developer who’s building an application, but doesn’t want to rebuild content management infrastructure every time they serve up a piece of content, [but instead use API calls such as Box’s high-fidelity viewer for videos].

When we think about the roadmap, we have to be thinking at scale, for millions of users across hundreds of petabytes of data, and make it frictionless so that people who aren’t computer jockeys — just average people — use our system. One of our key philosophies is identifying megatrends and making them tailwinds for our business.

Looking back 13 years ago when we started, some of the big trends that have happened were cloud and mobile. We used those to propel our business forward. Now, it’s artificial intelligence and machine learning. In the next five years, content management is going to look completely different than it has the last 25.

In the next five years, content management is going to look completely different than it has the last 25.
Jeetu Patel, chief product officer, Box

Content’s going to get meaningfully more intelligent. Machine learning algorithms should, for example, detect all the elements in an image, OCR [optical character recognition] the text and automatically populate the metadata without doing any manual data entry. Self-describing. [Sentiment analysis] when you’re recording every single call at a call center. Over time, the ultimate nirvana is that you’ll never have to search for and open up an unstructured piece of content — you just get an answer. We want to make sure we take advantage of all those innovations and bring them to Box.

How does Box content management compete with SharePoint, which is ingrained in many organizations and must be a formidable competitor, considering the always-expanding popularity of SharePoint Online?

Patel: Microsoft is an interesting company. They are one of our biggest competitors, with SharePoint and OneDrive, and one of our biggest partners, with Azure. We partner with them very closely for Azure and the Office 365 side of the house. And we think, [with Box migrations,] there’s an area where there’s an opportunity for customers to [reduce] fragmented [SharePoint] infrastructure and have a single platform to make it easy for user, administrator and developer to work end to end … and modernizing their business processes, as well.

Modernize their business processes?

Patel: Once you migrate the content over to Box, there’s a few things that happen you weren’t able to do in the past. For example, you can now make sure users can access content anywhere on any device, which you couldn’t do in the past without going through a lot of hoops. Try sharing a piece of content with someone outside of your organization that you started in OneDrive and moved over to SharePoint. They actually have a troubleshooting page for it. It’s not just SharePoint; it’s any legacy ECM system that has this problem. We want to make sure we solve that.

Hyper engine aims to give enterprise Tableau analytics a boost

Tableau is continuing its focus on enterprise functionality, rolling out several new features that the company hopes will make its data visualization and analytics software more attractive as an enterprise tool and broaden its appeal beyond its existing base of line-of-business users.

In particular, the new Tableau 10.5 release, launched last week, includes the long-awaited Hyper in-memory compute engine. Company officials said Hyper will bring vastly improved speeds to the software and support new Tableau analytics use cases, like internet of things (IoT) analytics applications.

The faster speeds will be particularly noticeable, they said, when users refresh Tableau data extracts, which are in-memory snapshots of data from a source file. Extracts can reach large sizes, and refreshing larger files took time with previous releases.

“We extract every piece of data that we work with going to production, so we’re really looking forward to [Hyper],” Jordan East, a BI data analyst at General Motors, said in a presentation at Tableau Conference 2017, held in Las Vegas last October.

East works in GM’s global telecom organization, which supports the company’s communications needs. His team builds BI reports on the overall health of the communications system. The amount of data coming in has grown substantially over the years, and keeping up with the increasing volume has been a challenge, he said.

Extracting the data, rather than connecting Tableau to live data, helped improve report performance. East said he hopes the extra speed of Hyper will enable dashboards to be used in more situations, like live meetings.

Faster extracts mean fresher analytics

The Tableau 10.5 update also includes support for running Tableau Server on Linux, new governance features and other additions. But Hyper is getting most of the attention. Potentially, faster extract refreshes mean customers will refresh extracts more frequently and be able to do their Tableau analytics on fresher data.

“If Hyper lives up to demonstrations and all that has been promised, it will be an incredible enhancement for customers that are struggling with large complex data,” said Rita Sallam, a Gartner analyst.

Sallam’s one caveat was that customers doing Tableau analytics on smaller data sets will see less of a performance upgrade, because their extracts likely already refresh and load quickly. She said she believes the addition of Hyper will make it easier to analyze data stored in a Hadoop data lake, which was typically too big to load efficiently into Tableau before Hyper. This will give analysts access to larger, more complex data sets and enable deeper analytics, Sallam said.

Focus on enterprise functionality risky

Looking at the bigger picture, though, Sallam said there is some risk for Tableau in pursuing an enterprise focus. She said moving beyond line-of-business deployments and doubling down on enterprise functionality was a necessary move to attract and retain customers. But, at the same time, the company risks falling behind on analytics functionality.

Sallam said the features in analytics software that will be most important in the years ahead will be things like automated machine learning and natural language querying and generation. By prioritizing the nuts and bolts of enterprise functionality, Tableau hasn’t invested as much in these types of features, Sallam said.

“If they don’t [focus on enterprise features], they’re not going to be able to respond to customers that want to deploy Tableau at scale,” Sallam said. “But that does come with a cost, because now they can’t fully invest in next-generation features, which are going to be the defining features of user experience two or three years from now.”

Cybersecurity skills shortage continues to worsen

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., said the global cybersecurity skills shortage is bad and getting worse. Citing ESG's annual survey on the state of IT, Oltsik said skills shortages across various networking disciplines have not eased, and the cybersecurity shortage is particularly acute. For instance, in 2014, 23% of respondents said that their organization faced a problematic shortage of cybersecurity skills. In the most recent survey, which polled more than 620 IT and cybersecurity professionals, 51% said they faced a cybersecurity skills shortage. The data aligns with the results of an ESG-ISSA survey in 2017 that found 70% of cybersecurity professionals reporting their organizations were affected by the skills shortage, resulting in increased workloads and little time for planning.

“I can only conclude that the cybersecurity skills shortage is getting worse,” Oltsik said. “Given the dangerous threat landscape and a relentless push toward digital transformation, this means that the cybersecurity skills shortage represents an existential threat to developed nations that rely on technology as the backbone of their economy.”

Chief information security officers (CISOs), Oltsik said, need to consider the implications of the cybersecurity skills shortage. Smart leaders are doing their best to cope by consolidating systems, such as integrated security operations and analytics platform architecture, and adopting artificial intelligence and machine learning. In other cases, CISOs automate processes, adopt a portfolio management approach and increase staff compensation, training and mentorship to improve retention.

Dig deeper into Oltsik’s ideas on the cybersecurity skills shortage.

Building up edge computing power

Erik Heidt, an analyst at Gartner, spent part of 2017 discussing edge computing challenges with clients as they worked to improve computational power for IoT projects. Heidt said a variety of factors drive compute to the edge (and in some cases, away), including availability, data protection, cycle times and data stream filtering. In some cases, computing capability is added directly to an IoT endpoint. But in many situations, such as data protection, it may make more sense to host an IoT platform in an on-premises location or private data center.

Yet the private approach poses challenges, Heidt said, including licensing costs, capacity issues and hidden costs from IoT platform providers that limit users to certified hardware. Heidt recommended that purchasing teams look carefully at what functions vendors are offering, as well as consider data sanitization strategies to enable more use of the cloud. “There are problems that can only be solved by moving compute, storage and analytical capabilities close to or into the IoT endpoint,” Heidt said.

Read more of Heidt’s thoughts on the shift to the edge.

Meltdown has parallels in networking

Ivan Pepelnjak, writing in ipSpace, responded to a reader’s question about how hardware issues become software vulnerabilities in the wake of the Meltdown vulnerability. According to Pepelnjak, there has always been privilege-level separation between the kernel and user space, with the kernel mapped into the high end of every process’s address space. In more recent CPUs, however, executing even a single instruction can involve a pipeline of dozens of operations carried out in parallel and speculatively, and that is what exposes the vulnerability an attack like Meltdown can exploit.

In these situations, the kernel-space location test fails once the instruction is checked against the access control list (ACL), but by then other parts of the CPU have already carried out the instructions designed to call up the protected memory location.

Parallelized execution isn't unique to CPU vendors. Pepelnjak said at least one hardware vendor created a version of IPv6 neighbor discovery that suffers from the same kind of vulnerability. In response, operating system vendors are rolling out patches that remove the kernel from the user-space address map. This approach prevents the exploit, but it no longer gives the kernel direct access to user space when it needs it. As a result, the kernel must switch virtual-to-physical page tables whenever it crosses the boundary, mapping user space into kernel page tables and back. Every single system call, even reading a byte from a file, means kernel page tables need to be mapped and unmapped.

Explore more of Pepelnjak’s thoughts on network hardware vulnerabilities.

Campus network architecture strategies to blossom in 2018

Bob Laliberte, an analyst with Enterprise Strategy Group in Milford, Mass., said even though data center networking is slowing down in the face of cloud and edge computing, local and campus network architecture strategies are growing in importance as the new year unfolds.

After a long period of data center consolidation, demand for reduced latency — spurred by the growth of the internet of things (IoT) — and the evolution of such concepts as autonomous vehicles are driving a need for robust local networks.

At the same time, organizations have moved to full adoption of the cloud for roles beyond customer relationship management, and many have a cloud-first policy. As a result, campus network architecture strategies need to allow companies to use multiple clouds to control costs. In addition, good network connectivity is essential to permit access to the cloud on a continuous basis.

Campus network architecture plans must also accommodate Wi-Fi to guarantee user experience and to enable IoT support. The emergence of 5G will also continue to expand wireless capabilities.

Intent-based networks, meanwhile, will become a tool for abstraction and the introduction of automated tasks. “The network is going to have to be an enabler, not an anchor,” with greater abstraction, automation and awareness, Laliberte said.

Laliberte said he expects intent-based networks to be deployed in phases, in specific domains of the network, or to improve verification and insights. “Don’t expect your network admins to have Alexa architecting and building out your network,” he said, though he added that systems modeled after Alexa will become interfaces for network systems.

Explore more of Laliberte’s thoughts on networking.

BGP route selection and intent-based networking

Ivan Pepelnjak, writing in ipSpace, said pundits who predict the demise of Border Gateway Protocol (BGP) at the hands of new SDN approaches often praise the concept of intent-based networking.

Yet, the methodologies behind intent-based networks fail when it comes to BGP route selection, he said. Routing protocols were, in fact, an early approach to the intent-based idea, although many marketers now selling intent-based systems are criticizing those very same protocols, Pepelnjak said. Without changing the route algorithm, the only option is for users to tweak the intent and hope for better results.

To deal with the challenges of BGP route selection, one option might involve a centralized controller with decentralized local versions of the software for fallback in case the controller fails. Yet, few would want to adopt that approach, Pepelnjak said, calling such a strategy “messy” and difficult to get right. Route selection is now being burdened with intent-driven considerations, such as weights, preferences and communities.
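For readers who want the mechanics, below is a deliberately simplified Python sketch of the classic BGP best-path comparison, showing where intent-style knobs such as weight and local preference sit in the decision order. Real implementations (RFC 4271 plus vendor extensions) apply more tie-breakers than shown here.

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    prefix: str
    weight: int = 0                              # vendor knob, higher wins
    local_pref: int = 100                        # higher wins
    as_path: list = field(default_factory=list)  # shorter wins
    med: int = 0                                 # lower wins
    router_id: str = "0.0.0.0"                   # final tie-breaker, lower wins

def best_path(routes):
    # The sort key mirrors the comparison order; "higher wins" attributes
    # are negated so that min() picks the preferred route.
    return min(routes, key=lambda r: (
        -r.weight, -r.local_pref, len(r.as_path), r.med, r.router_id))

candidates = [
    Route("10.0.0.0/24", local_pref=200, as_path=["65001", "65002"]),
    Route("10.0.0.0/24", local_pref=100, as_path=["65003"]),
]
# local_pref outranks AS-path length, so the first route wins despite
# its longer path.
print(best_path(candidates).local_pref)  # 200
```

The fixed ordering is the crux of Pepelnjak's point: operators can only nudge the inputs (weights, preferences, communities) and hope the algorithm produces the intended outcome.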

“In my biased view (because I don’t believe in fairy tales and magic), BGP is a pretty obvious lesson in what happens when you try to solve vague business rules with [an] intent-driven approach instead of writing your own code that does what you want to be done,” Pepelnjak wrote. “It will be great fun to watch how the next generation of intent-based solutions will fare. I haven’t seen anyone beating laws of physics or RFC 1925 Rule 11 yet,” he added.  

Dig deeper into Pepelnjak’s ideas about BGP route selection and intent-based networking.

Greater hybridization of data centers on the horizon

Chris Drake, an analyst with Current Analysis in Sterling, Va., said rising enterprise demand for hybrid cloud systems will fuel partnerships between hyperscale public cloud providers and traditional vendors. New alliances, such as the one between Google and Cisco, have joined existing cloud-vendor partnerships like those between Amazon and VMware and between Microsoft and NetApp, Drake said, and more are in the offing.

Moving and managing workloads across hybrid IT environments will be a key point of competition between providers, Drake said, perhaps including greater management capabilities to oversee diverse cloud systems.

Drake said he also expects a proliferation of strategies aimed at edge computing. The appearance of micro data centers and converged edge systems may decentralize large data centers. He also anticipates greater integration with machine learning and artificial intelligence, although, given entrenched legacy technologies, actual deployments will remain gradual.

Read more of Drake’s assessment of 2018 data center trends.

Comparing the leading mobile device management products

The mobile device management space is growing at a rapid pace, and MDM is widely used across the enterprise to manage and secure smartphones and tablets. Investing in this technology enables organizations to not just secure mobile devices themselves, but the data on them and the corporate networks they connect to, as well.

The market for MDM software is saturated now, and there are new vendors arriving in this vertical on a consistent basis. Many of the larger names in mobile security, meanwhile, have been buying up smaller vendors and integrating their technology into their mobile management offerings, while others have remained pure mobile device management companies from the beginning. So what are the best mobile device management products available today?

Since the mobile security market has become so crowded, it is harder than ever to determine what the best mobile device management products are for an organization’s environment.

To make choosing easier for readers, this article evaluates six leading EMM companies that offer MDM as a part of their bundles, measuring their products against the most important criteria to consider when procuring and deploying mobile security in the enterprise. These criteria include MDM implementation, app integration, containerization vs. non-containerization, licensing models and policy management. The mobile management vendors covered are Good Technology Inc., VMware AirWatch, MobileIron Inc., IBM MaaS360, Sophos and Citrix.

That being said, there are also niche players, such as BlackBerry, attempting to move into the broader MDM market beyond just securing and managing their own hardware. There are also free offerings from the likes of Google, which provide tools to assist in Android device management and attempt to compete with the vendors above. Even Microsoft builds a small amount of MDM into its operating systems to manage mobile devices.

Today, the vast majority of mobile devices in use — both smartphones and tablets — run on either Apple’s iOS or Google’s Android OS. So while many of today’s MDM products are also capable of managing Windows Phones, BlackBerry devices and so on, this article focuses mostly on their Apple and Android management and security capabilities.

Selecting the best mobile device management product for your organization isn’t easy. By using the criteria presented in this feature and asking six crucial questions before buying MDM software, an organization will find it easier to procure the right mobile management and security products to satisfy its enterprise needs.

Criteria #1: Implementation of MDM

Organizations should understand and plan out their mobile device deployment and MDM requirements before looking at vendors. The installation criteria for MDM are normally based on a few things: resources, money and hardware. With that being said, there are two distinct installation possibilities when deploying an MDM product.

The first is an on-premises implementation that needs dedicated resources, both from a hardware and technical perspective, to assist with installing the system or application on a network. Vendors like Good Technology, with its Good for Enterprise suite, require the installation of servers within an organization's DMZ. This will necessitate firewall changes and operating system resources to implement.

These systems will then need to be managed appropriately to verify that they’re consistently patched and scanned for vulnerabilities, among other issues. In essence, this type of MDM deployment is treated as an additional server on an organization’s network.

It’s possible that a smaller business might shy away from an install of this nature due to the requirements and technical know-how it would take to get off the ground. On the other hand, if businesses are able to manage this type of mobile management and security product, it gives them complete ownership of these systems and the data that’s on them.

The second installation type is a cloud-based service that enables an off-premises installation of MDM, removing any concerns regarding management, technical resources and hardware. Vendors like VMware AirWatch and Sophos let customers provision their entire MDM product in the cloud and manage the system from any internet connection. This is both a pro and a con: It gives companies with resource constraints, such as limited experience or headcount, the ability to get an MDM product set up quickly, but at the risk of having data reside outside the organization's complete control, in the cloud.

Depending on an organization’s resource availability, technical experience and risk appetite, these are the two options — on-premises and cloud — currently available for installing MDM.

Criteria #2: App integration

Apps are a major reason the popularity of and demand for mobile devices have increased exponentially over the years. Without the ability to have apps work properly and securely, the power of mobile devices, and the ability of users to take full advantage of these tools, becomes severely limited.

MDM companies have realized this need for functionality and security, so they’ve created business-grade apps that enable productivity without compromising the integrity of mobile devices, the data on them and the networks to which they connect.

Citrix has created XenMobile Apps, which are tied together and save data in a secure sandbox on mobile devices, so users don't need to send business data to unapproved, potentially insecure apps outside the enterprise's control. The sandboxing technology works by securing, and at times partitioning, the MDM app separately from the rest of the mobile OS, essentially isolating it from the rest of the device while still allowing the user to work securely and efficiently.

There are also third-party app vendors that MDM vendors have partnered with to create branded apps. Good Technology, for example, has partnered with many large vendors to accommodate the need to use their apps within a specific MDM environment. This integration is extremely helpful and creates better security and more productive users. Sophos allows this as well with its Secure Workspace feature, which lets users access files within a container while securing access to those documents.

Whether you’re using apps created by an MDM vendor for additional security, or apps developed through the collaboration of an MDM vendor and a third party, it’s important to remember that most of the work on a mobile device is done via these apps, so securing the data that flows through them and is created on them is essential.

Criteria #3: Container vs. non-container

There are two major operational options available when researching MDM products: MDM that uses the container approach and MDM that uses the non-container approach. This is a major decision that needs to be made before selecting a mobile management product, as most vendors only subscribe to one of these methods.

This decision, whether to go with the container or non-container method of mobile management, will guide the device policy, app installation policy, BYOD plans and data security for the mobile devices that an organization is looking to manage.

A containerized approach is one that keeps all the data and access to corporate resources contained within an app on a mobile device. This app normally won't allow access from the rest of the device, and data within the container can't flow out of it.

Both the Good for Enterprise suite and MaaS360 offer MDM products that enable customers to use a containerized approach. Large companies tend to benefit from this approach — as do government agencies and financial institutions — as it tends to offer the highest degree of protection for sensitive data.

Once a container is removed from a mobile device, all organizational data is gone, and the organization can be sure there was no data leakage onto the mobile device.

In contrast to the restricted tactic used by containerization, the non-container approach creates a more fluid and seamless user experience on mobile devices. Companies like VMware AirWatch, Sophos and MobileIron are the leaders in this approach, which enforces security on mobile devices via policies and integrated apps. These systems rely on pushing policies to the native OS to control mobile devices. They also support multiple integrated apps, supplied by trusted vendors the MDM companies have partnered with, that add an additional layer of security to their data. These companies also allow the use of containers, helping bridge the gap between the two approaches to meet customer needs.

Many organizations, including startups and those in retail, lean toward the non-container approach for mobile management and security due to the speed and native familiarity that end users already have with their mobile devices — with OS-bundled calendaring and mail apps, for example. However, keep in mind, in order to completely secure all the data on mobile devices, the non-container approach requires the aforementioned tight MDM policies and integrated apps to enforce the protection of data.

Criteria #4: License models

The licensing model for MDM has changed slightly in recent years. In the past, there was only a per-device license model, which pushed organizations into licensing arrangements that weren't very effective for them financially. With the emergence of tablets and users carrying multiple smartphones, a need arose for a license model based on the user, not the individual device.

All the MDM products covered in this article offer similar, if not identical, pricing models. MDM vendors have listened to customers and realized that end users in this day and age don't always have just one device. Which licensing model an organization chooses, per-device or user-based, depends on the company's mobile device inventory.

The per-device model normally works well for small companies. In this model, every user gets a device that counts against the organization’s total license count. If a user has three devices, all of these go against the total license count of the business. These licenses are normally cheaper per seat, but can quickly become expensive if there are multiple devices per user requiring coverage.

The user-based pricing model, by contrast, takes into account the need for users to have multiple devices that all require MDM coverage. With this model, the user name is the basis of the license, and the user can have multiple devices attached to that one license. This is the reason many larger organizations lean toward this model, or at least a hybrid of the two licensing models, to account for users with multiple mobile devices.
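A quick back-of-the-envelope comparison makes the trade-off concrete. The prices in this Python sketch are hypothetical placeholders, not any vendor's actual rates:

```python
# Hypothetical list prices; real vendor pricing varies widely.
PER_DEVICE_PRICE = 4.00   # dollars per device per month
PER_USER_PRICE = 7.00     # dollars per user per month, devices unlimited

def monthly_cost(users, devices_per_user):
    per_device = users * devices_per_user * PER_DEVICE_PRICE
    per_user = users * PER_USER_PRICE
    return per_device, per_user

for avg in (1.0, 2.5):
    dev, usr = monthly_cost(users=500, devices_per_user=avg)
    winner = "per-device" if dev < usr else "per-user"
    print(f"{avg} devices/user: ${dev:,.0f} vs ${usr:,.0f} -> {winner} wins")
```

With one device per user, the per-device model wins; once users average two or three devices each, the user-based license becomes the cheaper option, which matches the pattern described above.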

Criteria #5: Policy management

This is an important feature of mobile device management, and one that organizations need to review with either a request for proposal (RFP) or something that outlines the details of what mobile device policies they require. Mobile policies enable organizations to make granular changes to a mobile device to limit certain features — the camera and apps, among others — push wireless networks, create VPN tunnels and whitelist apps. This is the nuts and bolts of MDM, and a criterion that should be reviewed heavily during the proof-of-concept stage with specific vendors.

This ability to push certain features of a policy to mobile devices is certainly required, as is the ability to wipe devices remotely should they be lost or stolen. While all the MDM products covered in this article provide the ability to remotely wipe mobile devices, in the case of Good for Enterprise and IBM MaaS360, organizations have the option to wipe mobile devices completely or to just remove the container.

Also important for MDM products is the ability to perform actions such as VPN connections, wireless network configurations and certificate installs, which AirWatch can accomplish. Sophos also offers the ability to manage policies from a security perspective by enforcing antiphishing, antimalware and web protection.

You should spell out these options in an RFP beforehand to determine what parts of the mobile device policy you're looking to secure. Evaluating what policy changes you can push to a mobile device, and what functions an organization might want to see within a policy, will provide the insight needed for an educated decision on the best mobile device management products.

Most of the time, multiple policies will be created so that certain users receive a particular policy while users with other needs receive a completely different one. This is a standard function within all MDMs, but it should be understood that a single policy for all users is not always practical.

Finding the best mobile device management product for you

There are many vendors in this saturated market, but following these five criteria should assist organizations in narrowing the field down to find the best mobile device management products available today. There is much overlap between vendors, but finding the right one that can secure an organization’s data completely and offer full coverage, with the ability to manage all the aspects needed in a policy, is what businesses should be aiming for in MDM products.

Many large companies, especially those in the financial or government sector, are running Good for Enterprise due to the extra layer of security it provides by leveraging a container and integrated apps developed by vendors with whom they partnered.

IBM MaaS360, on the other hand, offers both a container and non-container approach to mobile security and management, which makes it suitable for larger enterprises that require some flexibility in how they deploy. This lets IBM play to both sides and gives it some leverage over competitors by attracting customers from both mindsets.

Many midsize companies don't have to meet the level of security demanded of large financial clients, though, and thus aren't rushing to boost their mobile device security. We've seen many times, however, that compliance requirements bring an extra layer of required security, making these organizations more conscientious about securing data on mobile devices.

Midsize to large companies outside the financial sector tend to run AirWatch, Sophos or MobileIron MDM due to their ability to keep the native feel of mobile devices intact while pushing custom policies that secure clients' mobile devices.

As for app integration, Citrix has performed very well with XenMobile and has shown that it's pushing the boundaries in this area. These apps are selling points for customers who want to bring their data onto mobile devices but want the flexibility to manage the data those apps consume. By distributing approved apps to managed mobile devices and writing policy for how their data can be used in those apps, MDM products such as Citrix's add an extra layer of data control for the company and ease of use for the user.

As mobile devices become more indispensable for business users, the MDM market will keep expanding in response to the growing need for mobile security.