Enterprise IT news

Advanced Protection Program locks down Google accounts

The latest Google multifactor authentication solution aims to protect high-risk users from targeted attacks, but will add complexity to logins.

Google’s Advanced Protection Program is designed to keep users safe from phishing attacks, including spear phishing, and to prevent unauthorized access to Gmail accounts by requiring physical security keys — such as a YubiKey — for authentication.

“Journalists, human rights defenders, environment campaigners and civil society activists working on any number of sensitive issues can quickly find themselves targeted by well-resourced and highly capable adversaries,” Andrew Ford Lyons, a technologist at Internews, said in Google’s announcement post. “For those whose work may cause their profile to become more visible, setting this up could be seen as an essential preventative step.”

Google’s Advanced Protection Program could help prevent some of the attacks seen over the past couple of years, including the phishing scheme that compromised the Gmail account of Hillary Clinton’s campaign chairman, John Podesta, and the Google Docs phishing attack.

According to Google, the Advanced Protection Program focuses on three areas of defense: using a security key for multifactor authentication, limiting third-party app access to Gmail and Google Drive, and mitigating fraudulent account access by adding steps to the account recovery process.

Google warns that third-party mobile apps like Apple Mail, Calendar and Contacts “do not currently support security keys and will not be able to access your Google data,” so Advanced Protection Program users would need to use Google’s first-party apps for now.

How the Google Advanced Protection Program works

Google has supported security keys for multifactor authentication in the past and offers an option to use a mobile device as a second factor, but the Advanced Protection Program is far stricter: there are no backup options such as SMS messages or stored authentication codes.

Users will only be able to log in to their Google accounts with their password and registered security keys. If a security key is lost, account recovery will be more onerous than answering simple security questions, but Google has yet to provide details on what that recovery process will entail.
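
The protection rests on the challenge-response signatures behind the FIDO U2F standard: the service sends a random challenge, the key signs it together with the site’s origin using a private key that never leaves the hardware, and the service checks the signature against the public key registered at enrollment. The sketch below illustrates that principle with the Python cryptography library, simulating the hardware key with an in-memory ECDSA key pair; it is a simplified illustration of the standard’s idea, not Google’s implementation.

```python
# Simplified illustration of the FIDO U2F challenge-response idea.
# A real security key keeps the private key in tamper-resistant hardware;
# here it is simulated with an in-memory ECDSA key pair (P-256, as U2F uses).
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the "key" generates a key pair; the service stores the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Login: the service issues a random challenge tied to its origin.
origin = b"https://accounts.example.com"   # origin binding is what defeats phishing sites
challenge = os.urandom(32)

# The "key" signs origin || challenge; a look-alike domain would produce a different origin.
signature = device_private_key.sign(origin + challenge, ec.ECDSA(hashes.SHA256()))

# The service verifies the assertion with the registered public key.
try:
    registered_public_key.verify(signature, origin + challenge, ec.ECDSA(hashes.SHA256()))
    print("assertion valid -- second factor accepted")
except InvalidSignature:
    print("assertion invalid -- login refused")
```

Because the signature covers the origin, an assertion captured by a phishing site cannot be replayed against the real one, which is what makes hardware keys resistant to the spear phishing the program targets.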

Although anyone can enroll in the Advanced Protection Program, Google admitted in its blog post that it would be best for those who “are willing to trade off a bit of convenience for more protection of their personal Google Accounts.”

At launch, the Advanced Protection Program requires the Chrome browser and two security keys that support the FIDO U2F standard — one that connects to a traditional computer via USB and one that pairs with mobile devices over Bluetooth.

The former isn’t as troublesome, but users need to be careful about the security key they choose for mobile. Google’s support page suggests purchasing the Feitian MultiPass Bluetooth security key, which appears to be in limited supply on Amazon as of this post. However, a Bluetooth security key is only necessary for those using iOS devices or Android devices that don’t support Near Field Communication (NFC); an NFC-enabled security key will work for those with NFC-capable Android devices.

Microsoft mum on 2013 database breach of bug tracking system

Microsoft’s internal bug tracking system was hacked in 2013, and no one outside the company knew about the database breach until now, according to a Reuters report.

The breached database was accessible with just a password, according to five former employees. After the breach, Microsoft added two-factor authentication and other security measures to better protect the bug tracking system, which contained detailed descriptions of unpatched vulnerabilities in Microsoft software.

Shortly after reports surfaced in 2013 of a security incident at Microsoft, the software giant had stated only that a “small number” of computers had been infected with malicious software. However, it turns out that the database breach exposed details of critical — and unpatched — bugs in Windows and other Microsoft software.

The bugs documented in the breached database could have been used by threat actors to create exploits against the unpatched software, although the ex-employees told Reuters that a Microsoft investigation after the database breach failed to uncover any evidence that the vulnerability data had been used in any attacks on other organizations.

“The compromise of Microsoft’s database highlights that everyone is vulnerable to sophisticated intrusions,” Dmitri Alperovitch, co-founder and CTO at CrowdStrike, told SearchSecurity by email. “From the adversary perspective, having access to critical and unfixed vulnerabilities is the ‘holy grail.’ We may be seeing the ripple effects of this hack for some time and many businesses may end up suffering stealthy compromises.”

According to Reuters, the group behind the database breach was identified as Wild Neutron, also known as Morpho and Butterfly. The breach was discovered after the same group accessed systems at Apple, Facebook and Twitter. The Wild Neutron group, considered to be well-resourced and focused on financial gains, is not thought to be a state-sponsored threat actor.

This isn’t the first time a breach of a major software provider’s bug tracking system has been made public. In 2015, Mozilla announced that its Bugzilla bug tracking system had been accessed by an unknown attacker, who used at least one of the exposed vulnerabilities to carry out attacks on Firefox users.

In other news

  • The U.S. Department of Homeland Security has given federal agencies just 30 days to develop plans to enhance email and web security under a new binding operational directive (BOD). Under the directive, BOD 18-01, agencies have 90 days to deploy STARTTLS on all internet-facing mail servers and to begin deploying Domain-based Message Authentication, Reporting and Conformance (DMARC) to validate email and combat spam and phishing attacks. STARTTLS is a protocol option added to email and other application protocols to specify that transmissions use Transport Layer Security (TLS) encryption. Under the new BOD, agencies have 120 days to transition all web content from HTTP to HTTPS, drop support for the deprecated Secure Sockets Layer (SSL) versions 2 and 3, and disable the 3DES and RC4 ciphers on all web and mail servers. A minimal check for the STARTTLS and DMARC requirements is sketched after this list.
  • The U.S. Supreme Court will decide whether authorities can access data stored anywhere in the world. The case in question involves a warrant for emails believed to be connected to a narcotics investigation that were stored on a Microsoft server in Ireland. A warrant was issued for the emails in 2013, which Microsoft challenged in court. Brad Smith, president and chief legal officer at Microsoft, wrote in a blog post this week that Microsoft is contesting the warrant “because we believed U.S. search warrants shouldn’t reach over borders to seize the emails of people who live outside the United States and whose emails are stored outside the United States.” The Justice Department argues that because the data being demanded can be retrieved from Microsoft’s U.S. headquarters, the data must be turned over no matter where it is being stored.
  • Google added limited antivirus capability to Chrome for Windows this week. Citing the importance of preventing unwanted software from running on browsers, the ad giant announced three changes to Chrome for Windows that would help prevent unwanted software from taking over the browser. Philippe Rivard, product manager for Chrome Cleanup at Google, wrote that Google “upgraded the technology we use in Chrome Cleanup to detect and remove unwanted software,” working with antivirus and internet security vendor ESET to integrate its detection engine with Chrome’s sandbox technology. Chrome is now able to detect when an extension attempts to change user settings, a tactic that malicious software sometimes uses to take control of the browser by manipulating search engine results to steer users to malicious sites. The other major change was a simplified method for removing unwanted software using Chrome Cleanup.
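
As a rough illustration of what the directive’s email requirements involve, the sketch below checks whether a mail server advertises STARTTLS and whether a domain publishes a DMARC record. It uses Python’s standard smtplib plus the third-party dnspython package; the domain and mail host names are placeholders, and a real compliance scan would cover every internet-facing server an agency operates.

```python
# Rough probe for two BOD 18-01 email requirements: does the mail server
# advertise STARTTLS, and does the domain publish a DMARC record?
# Requires the third-party dnspython package (pip install dnspython).
import smtplib

import dns.resolver

DOMAIN = "example.gov"          # placeholder domain
MAIL_HOST = "mail.example.gov"  # placeholder internet-facing mail server

def supports_starttls(host):
    """Connect on port 25 and check whether the server offers STARTTLS."""
    with smtplib.SMTP(host, 25, timeout=10) as server:
        server.ehlo()
        return server.has_extn("starttls")

def dmarc_record(domain):
    """Return the domain's published DMARC TXT record, or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("v=DMARC1"):
            return text
    return None

if __name__ == "__main__":
    print("STARTTLS offered:", supports_starttls(MAIL_HOST))
    print("DMARC record:", dmarc_record(DOMAIN) or "none published")
```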

Why open network systems are so difficult to find

Andrew Lerner, an analyst with Gartner, has seen increased interest from Gartner clients in open network systems since the beginning of the “SDN craze” four or five years ago. Lerner said although openness is widely preferred, it falls on a spectrum — one that is complicated by marketing claims from leading network vendors.

Most vendors offer closed, proprietary products, where openness boils down to either support for Border Gateway Protocol or a published API. At the controller layer, Ethernet fabrics don’t interoperate, and multichassis link aggregation groups aren’t supported across vendors.

When it comes to open network systems, Lerner said proprietary network systems will continue to exist in most enterprises for the foreseeable future. He cautioned network professionals to be wary of claims that a proprietary feature is the only way to meet a requirement, as well as of vendors’ claims about standards.

Professionals need to be careful of proprietary standards that inhibit interoperability. In many cases, it may be best to simply ask a networking vendor about its level of support for open network systems, such as third-party software or management with another vendor’s automation platform, and discount unqualified claims.

Explore more of Lerner’s thoughts on open network systems.

The benefits of security analytics and operations

Jon Oltsik, an analyst with Enterprise Strategy Group Inc. in Milford, Mass., said he has seen the security operations and analytics platform architecture (SOAPA) catching on since he first began writing about it a year ago. ESG research indicated that 21% of enterprises are beginning to integrate security operations technologies and consider the creation of a security operations architecture a priority. An additional 50% of respondents were somewhat active in this area.

According to Oltsik, the move to SOAPA is driven by a variety of factors in IT departments, including goals of better identifying and communicating risk to the business, automating manual processes and overcoming lags in incident response time.

Oltsik suggested enterprises and the National Institute of Standards and Technology need to partner to create common standards for SOAPA. In Oltsik’s view, standards would increase technology options, allowing chief information security officers to, for example, add security analytics one year and endpoint detection and response tools the next, while improving innovation and increasing security efficacy.

Additionally, he said SOAPA would help to create a global sense of community in the cybersecurity industry, with professionals trained on a common security architecture.

Read more of Oltsik’s thoughts on SOAPA.

Changing up your Ansible playbook

Ivan Pepelnjak, writing on ipSpace, took a look at turning an Ansible playbook into an executable command. In a previous article, Pepelnjak wrote about a playbook he uses to collect Secure Shell keys and the tedium of having to type ansible-playbook path-to-playbook every time he runs the collection. Because Ansible playbooks are YAML documents, which use # to start comments, he reasoned that adding a shebang line pointing to ansible-playbook would convert the YAML document into a script.

From there, Pepelnjak used the Bash shell to make the file executable: he ran chmod +x on the playbook and found the interpreter’s path with “which ansible-playbook.” Lastly, he recommended adding a symbolic link to the playbook in one of the directories along the search path. At the end of the process, the Ansible playbook could be run just like any other Linux command.

Dig deeper into Pepelnjak’s thoughts on Ansible playbooks.

Experts: Updating customer digital experience is a tall task

BOSTON — Updating and designing a new website is not an easy task in today’s multiscreen, multiplatform world.

Between growing customer expectations, streaming and other live content, and the potential for multiple dynamic translations, a lot more goes into the customer digital experience than even just a few years ago.

“Today, it’s how do we go beyond the broadcast and enhance the linear experience,” said Eric Black, CTO of NBC Sports. Black was onstage at the Intercontinental Hotel with Acquia CEO Tom Erickson, kicking off the second day of the Acquia Engage conference.

Black spoke about standing up reliable customer digital experiences for some of the largest broadcast events in the world, including the Super Bowl and the Olympics, and the challenges that come with trying to provide interactive experiences for millions of online viewers.

“We had plenty of video problems during [the 2016 Summer Olympics in] Rio that nobody here would know about,” Black said when asked about what challenges NBC Sports has faced when trying to maintain this digital experience. He added that fail-safes are needed to keep the customer-facing performance on track and to be able to go back and address the problem later.

And while every company may not need to set up a website that includes live streaming for millions of people across the world, almost all successful modern companies do need an engaging, user-friendly customer digital experience — something that Planet Fitness is working to address right now.

‘I hate this website’

With more than 10 million members and 1,400 locations in five countries, Planet Fitness has had success as a gym for the average person, but it would like to bring that customer-experience success to its online platform.

“I hate this website,” said Chris Lavoie, vice president of information systems for Planet Fitness, based in Hampton, N.H. Lavoie was referencing Planet Fitness’ current site, launched in 2014 — a relatively static site with no engagement or mobile view and with one main objective: Get people to join.

But today’s customers are looking for more than easy access to join a gym or stream a sporting event. They want quality, interaction and personalization.

“We had a very specific focus,” Lavoie said, “and being modern was not one of them.”

To upgrade that customer digital experience for its 10 million members and potential new members, Planet Fitness is redesigning and upgrading its website with the help of Acquia and several other vendors to better serve its now international customer base.

One way Planet Fitness is ensuring its website meets its brand standards is by having IT and marketing collaborate on the website’s development.

“That was new to me. I’ve been in companies where it’s very siloed, and [the departments] didn’t communicate with each other,” said Kate King, digital marketing manager for Planet Fitness. “It makes a ton of sense. You have the technical side of something, but to really make sure it represents your brand, you want someone from marketing offering input, as well.”

Hitting roadblocks

There are going to be challenges and unforeseen difficulties when designing a site from the bottom up. And while some of them can be masked, like the streaming challenges Black and NBC Sports routinely face, other problems need to be resolved before they can be customer-facing. Planet Fitness had been preparing to show off the newly launched site at Acquia Engage, but some core challenges in standing up the new customer digital experience delayed its launch date — a comforting occurrence for those in attendance who have most likely experienced similar technological snafus.

“One thing we need to do is preserve the functionality that’s critical to our business. If we do nothing else, we need to do those things on the site,” King said. “We were getting some bugs in the development of those pieces, and it was a bit of a roadblock.”

Another obstacle common to international organizations — especially those built on individual franchises — is making sure localization is correct so that existing and potential customers can read the site. Site translation tools are a necessary plug-in for website development in today’s multicultural environment.

Gone are the days of a quick website launch. Between multimedia requirements, consumer interaction, translation requirements and up-to-date content, redesigning a website is now a companywide project.

Just be sure to anticipate some challenges.

Microsoft hones Azure AI Services at Microsoft Ignite 2017

ORLANDO — While Microsoft has previously touted itself as a mobile-first, cloud-first technology provider, the company continues to evolve its focus and is further employing its artificial intelligence capabilities together with its Azure cloud to help customers along their journey to digital transformation.

At its Microsoft Ignite 2017 conference last month, the company introduced new and improved AI capabilities, including Azure Machine Learning (AML), new Visual Studio tools for AI, new Cognitive Services and other new enterprise AI services and tools.

Microsoft’s goal with Azure

Microsoft’s goal is to use Azure to help democratize AI by simplifying its AI services and bringing them to any developer, anywhere. A key message at Ignite 2017 was teaching enterprises how to infuse cloud, AI and mixed reality into business applications.

In a statement, Microsoft CEO Satya Nadella said the company is “pushing the frontiers of what’s possible with mixed reality and artificial intelligence infused across Microsoft 365, Dynamics 365 and Azure, to transform and have impact in the world.”

As the conduit through which Microsoft delivers practically all the goods the company produces, Azure is key.

“One of the defining aspects of cloud computing is the ability to innovate and release new technology faster and at greater scale than ever before,” said Scott Guthrie, executive vice president at Microsoft Cloud and Enterprise Group, during his keynote at Ignite. “There’s a set of technology — things like IoT, AI, microservices, serverless computing and more — this is all happening right now thanks in large part to cloud computing.”

Moreover, “to enable the new generation of AI-powered apps and experiences, Azure has built the entire stack for AI — from infrastructure, to platform services, to AI dev tools,” Guthrie said in a blog post. “Azure offers the most complete, end-to-end AI capabilities such that AI solutions are possible for any developer and any scenario.”

Azure Machine Learning

At Microsoft Ignite 2017, the company announced new Azure Machine Learning capabilities, including a new Machine Learning Workbench designed to improve the productivity of AI developers and data scientists.

These new capabilities provide rapid data wrangling and agile experimentation using familiar and open tools, Guthrie said. “AI developers and data scientists can now use Azure Machine Learning to develop, experiment and deploy AI models on any type of data, on any scale, in Azure and on premises,” he added.

In addition to the AML Workbench, Microsoft launched the AML Experimentation service and the AML Model Management service. The AML Experimentation service helps data scientists increase their rate of experimentation with big data and GPUs. The AML Model Management service enables users to host, version, manage and monitor machine learning models. Microsoft also announced a new capability to integrate AML with Excel.

Visual Studio Code Tools for AI

At Ignite 2017, Microsoft also released Visual Studio Code Tools for AI, which provides capabilities for easily building models with deep learning frameworks, including Microsoft Cognitive Toolkit (CNTK), Google TensorFlow and others.

“The Visual Studio Code Tools for AI are particularly interesting because they should effectively ease the way into creating AI-enabled applications and services for the hundreds of thousands of developers who are already skilled in Microsoft’s Visual Studio,” said Charles King, principal analyst at Pund-IT.

Yet he warned that this isn’t a “simplified approach” that eliminates the need to learn AI skills and frameworks.

“Obviously, Microsoft isn’t the only vendor out there with its sights set on commercial AI products and AI services,” King noted. “But the company is focusing its attention and energies on delivering AI tools and solutions that existing customers, developer allies and partners will find immediately valuable.”

Docker with Kubernetes forges new container standard

The comingling of the two main competitors in container orchestration should bring IT shops greater stability and consistency in container infrastructures over time.

Docker with Kubernetes will appear in the next versions of Docker Enterprise Edition and Community Edition, expected to be generally available in 1Q18, according to the company. This comes on the heels of support for Kubernetes in recent products from Mesosphere, Rancher and Cloud Foundry — an industry embrace that affirms Kubernetes as the standard for container orchestration, and expands choices available to enterprise IT organizations as containers go into production.

Kubernetes and Docker rose to popularity simultaneously and were always closely associated. However, they emerged independently, and changes to one would sometimes break the other. With Docker and Kubernetes formally aligned under the Cloud Native Computing Foundation, developers can more closely coordinate alterations and therefore likely eliminate such hitches.

“It has not always been a given that Kubernetes was going to work with Docker,” said Gary Chen, an analyst at IDC. “People who want Docker from the source and Kubernetes along with that can now get that integration from a single vendor.”

Docker with Kubernetes is a declaration of victory for Kubernetes, but it’s also a big change for the IT industry: a standard for orchestration now sits alongside the standard OCI runtime and image format.

“It’s not something we ever had with servers or virtual machines,” Chen said. “This brings industry standardization to a whole new level.”
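
One practical upshot of that standardization: the same API calls work against any conformant Kubernetes cluster, no matter which vendor’s distribution stood it up. A minimal sketch with the official Kubernetes Python client, assuming a kubeconfig is already in place:

```python
# Minimal sketch: the same Kubernetes API serves any conformant cluster,
# regardless of which vendor's distribution stood it up.
# Requires the official client (pip install kubernetes) and a kubeconfig.
from kubernetes import client, config

config.load_kube_config()   # uses whatever cluster kubectl currently points at
apps = client.AppsV1Api()

# List deployments in the default namespace -- identical code whether the
# cluster came from Docker EE, OpenShift, a public cloud or upstream Kubernetes.
for deployment in apps.list_namespaced_deployment(namespace="default").items:
    print(deployment.metadata.name, deployment.status.ready_replicas)
```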

Container management vendors will seek new differentiations outside of raw orchestration, and enterprise IT users can evaluate new tools and consider new possibilities for multicloud interoperability.

Docker brings support for modernizing traditional enterprise apps, while Kubernetes is still favored for newer, stateless distributed applications. Their convergence will strengthen orchestration that spans enterprise IT operating systems and different types of cloud infrastructure, said E.T. Cook, chief advocate at Dallas-based consulting firm Etc.io.

“Unified tooling that can orchestrate across all of the different platforms offers enterprises a massive advantage,” he said.

Container portability will also take on new flexibility and depth with increased compatibility between Docker and Kubernetes, said Peter Nealon, a solutions architect at Runkeeper, a mobile running app owned by ASICS, the Japanese athletic equipment retailer.

“Being able to bridge private data centers, public clouds, and Docker Swarm and Kubernetes orchestrators will make deploying the software that runs on those things easier,” Nealon said. “It will also be easier to provide the security and performance that apps need.”

The rich get richer with Docker and Kubernetes

Docker remains committed to its Swarm container orchestrator. But with heavy momentum on the Kubernetes side, some IT pros are concerned about whether the market will sustain healthy, long-term competition.

“I’m sure some folks will not like to see Kubernetes get another win, wanting choices,” said Michael Bishop, CTO at Alpha Vertex, a New York-based fintech startup, which uses Kubernetes. “But I’ll be happy to see even more developers [from Docker] working away at making it even more powerful.”

Meanwhile, enterprise IT consultants said their clients at large companies rarely mention Swarm.

“I personally have never seen anyone run Swarm in a production cluster,” said Enrico Bartz, system engineer at SVA in Hamburg, Germany.

Some SVA clients will consider Docker Enterprise Edition support for Kubernetes as it may offer a more streamlined and familiar developer interface and be easier to install and configure than Kubernetes alone, Bartz said. But Docker still faces stiff competition from other products, such as Red Hat OpenShift, which already makes Kubernetes easier to use for enterprise IT.

Some industry watchers also wonder if Docker with Kubernetes might be too late to preserve Docker Inc., and Swarm with it, in the long run.

“Two years ago or even a year ago there was more differentiation for Docker in terms of the security and networking features it could offer beyond Kubernetes,” said Chris Riley, director of solutions architecture at cPrime Inc., a consulting firm in Foster City, Calif., that focuses on Agile software development. “But the recent releases of Kubernetes have made up those gaps, and it’s closing the gaps in stateful application management.”

Amazon also waits in the wings with its own forthcoming Kubernetes as a service alternative, which users hope to see unveiled at the AWS Re:Invent conference next month. Some enterprise shops won’t evaluate Docker with Kubernetes until they see what Amazon can offer as a managed public cloud service.

“If there’s no AWS announcement that hugely expands the feature set around [the EC2 Container Service], it will open up a whole set of discussions around whether we deploy Kubernetes or Docker Swarm in the cloud, or consider other cloud providers,” Runkeeper’s Nealon said. “Our discussion has been focused on what container orchestration platform we will consume as a cloud service.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Hitachi Data Instance Director redesigned for scalability

The latest version of Hitachi Vantara’s Data Instance Director data protection platform supports a new RESTful API for cloud deployments and includes a ransomware protection utility.

Hitachi Data Instance Director (HDID) was rearchitected with MongoDB as the underlying database. The more robust database enables Hitachi Data Instance Director 6 — generally available Tuesday — to scale to hundreds of thousands of objects, compared to the previous version, which scaled to thousands of objects.

The software was also upgraded from single-login access control to granular, role-based access control at the RESTful API layer, giving service providers a secure interface to control domains, access roles and resources.

In addition, a new data retention utility automatically places a data snapshot in a lockdown write-once, read-many mode in case a company gets hit with a ransomware attack.

Hitachi Vantara was created in September when Hitachi Data Systems, Hitachi Insight Group and Pentaho combined into one business unit. Now, Hitachi is rebranding Hitachi Data Instance Manager as Hitachi Data Instance Director. The product is based on the Cofio Software technology that Hitachi acquired in 2012. Hitachi’s data recovery software manages backup, archiving, replication, bare metal recovery and continuous data protection with a single platform.

“(Version 5) had a flat file database,” said Rich Vining, Hitachi’s senior worldwide product marketing manager for data protection. “Now it’s been architected to be 100 times more scalable than previous versions. Now you can set up as many users as you want with access rights. It can handle hundreds of users and restore points.”

Hitachi rounds out its data protection portfolio

Phil Goodwin, research director for IDC’s storage systems and software research practice, said the most significant part of the HDID upgrade is that Hitachi can now directly provide data protection capabilities to the user, instead of depending on partners.

“This rounds out their portfolio,” he said. “The HDID is a high-level management console and workflow director that invokes Hitachi TrueCopy and Universal Replicator to move data. Before, you had to manage TrueCopy and Replicator individually with each piece of hardware, but now, this gives a higher level workflow.”

Hitachi Data Instance Director works with the Hitachi Virtual Storage Platforms, the Hitachi Unified Storage VM, the Hitachi Unified Compute Platform and the Hitachi NAS Platform. The HDID targets customers with large databases.

“We are using snapshot technology so we can work and protect large database environments where traditional backups can’t perform as well,” Vining said. “Built into the system are a set of replication technologies, such as thin image, shadow image, and synchronous and asynchronous replication.

“The challenge is these technologies are fairly complicated to set up, and HDID automates the thin image, shadow image and replication.”

The new role-based access controls also enable Hitachi Data Instance Director to offer cloud services that include backup as a service, copy data management and business continuity as a service. The new RESTful API can be used to connect to other services and cloud repositories.
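
Hitachi’s announcement doesn’t document the endpoints, but a role-scoped REST integration would look broadly like the hypothetical sketch below, written with the Python requests library; the base URL, token, paths and payload fields are invented for illustration and are not HDID’s published API.

```python
# Hypothetical sketch of driving a data protection policy through a
# role-scoped RESTful API such as the one HDID 6 introduces.
# The base URL, token, endpoint path and payload fields are invented for
# illustration; they are NOT Hitachi's documented API.
import requests

BASE_URL = "https://hdid.example.com/api/v1"  # placeholder management address
TOKEN = "service-provider-role-token"         # placeholder role-scoped credential

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

# Create a snapshot-based protection policy for a tenant this role may manage.
policy = {
    "name": "tenant-a-hourly-snapshots",
    "source": "tenant-a-production-volumes",
    "schedule": "hourly",
    "retention": {"mode": "write-once-read-many", "days": 30},
}
response = session.post(f"{BASE_URL}/policies", json=policy, timeout=30)
response.raise_for_status()
print("created policy:", response.json())
```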

Vining said roadmap plans call for the ability to use cloud storage as a backup target for Hitachi Content Platform object storage, as well. 

End-user security requires a shift in corporate culture

SAN FRANCISCO — An internal culture change can help organizations put end-user security on the front burner.

If an organization only addresses security once a problem arises, it’s already too late. But it’s common for companies, especially startups, to overlook security because it can get in the way of productivity. That’s why it’s important for IT departments to create a company culture where employees and decision-makers take security seriously when it comes to end-user data and devices.

“Security was definitely an afterthought,” said Keane Grivich, IT infrastructure manager at Shorenstein Realty Services in San Francisco, at last week’s BoxWorks conference. “Then we saw some of the high-profile [breaches] and our senior management fully got on board with making sure that our names didn’t appear in the newspaper.”

How to create a security-centric culture

Improving end-user security starts with extensive training on topics such as what data is safe to share and what a malicious website looks like. That forces users to take responsibility for their actions and understand the risks of certain behaviors.

Plus, if security is a priority, the IT security team will feel like a part of the company, not just an inconvenience standing in users’ way.

“Companies get the security teams they deserve,” said Cory Scott, chief information security officer at LinkedIn. “Are you the security troll in the back room or are you actually part of the business decisions and respected as a business-aligned person?”

When IT security professionals feel that the company values them, they are more likely to stick around as well. With the shortage of qualified security pros, retaining talent is key.

Keeping users involved in the security process helps, too. Instead of locking down a PC when a user accesses a suspicious file, for example, IT can send the user a message asking whether he performed the action. If the user says he accessed the file, IT knows no one is impersonating him; if he did not, IT knows there is an intruder and must act.
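
A minimal sketch of that check-before-lockdown flow in Python, with the confirmation prompt reduced to console input and the containment and escalation steps as hypothetical print-only stand-ins for an organization’s real identity and incident-response tooling:

```python
# Minimal check-before-lockdown sketch: ask the user whether they performed
# the suspicious action before treating the account as compromised.
# The prompt is reduced to console input; lock_account() and open_incident()
# are hypothetical stand-ins for real identity and incident-response tooling.

def confirm_with_user(user, action):
    answer = input(f"{user}, did you just {action}? [y/N] ")
    return answer.strip().lower() == "y"

def lock_account(user):
    print(f"containing account for {user} via the identity provider (stand-in)")

def open_incident(user, action):
    print(f"paging the security team: {user} did not recognize '{action}' (stand-in)")

def handle_suspicious_action(user, action):
    if confirm_with_user(user, action):
        # The user owns the action: no impersonation, keep them productive.
        print(f"{user} confirmed the action; no lockdown needed")
    else:
        # Nobody claims the action: assume an intruder and respond.
        lock_account(user)
        open_incident(user, action)

if __name__ == "__main__":
    handle_suspicious_action("jdoe", "download finance_export.xlsx")
```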

To keep end-user security top of mind, it’s important to make things such as changing passwords easy for users. IT can make security easier for developers as well by setting up security frameworks that they can apply to applications they’re building.

It’s also advisable to take a blameless approach when possible.

“Finger-pointing is a complete impediment to learning,” said Brian Roddy, an engineering executive who oversees the cloud security business at Cisco, in a session. “The faster we can be learning, the better we can respond and the more competitive we can be.”

Don’t make it easy for attackers

Once the end-user security culture is in place, IT should take steps to shore up the simple things.

Unpatched software is one of the easiest ways for attackers to enter a company’s network, said Colin Black, COO at CrowdStrike, a cybersecurity technology company based in Sunnyvale, Calif.

IT can also make it harder for hackers by adding extra security layers such as two-factor authentication. 
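
As a concrete example of such a layer, time-based one-time passcodes are straightforward to bolt on; a minimal sketch using the third-party pyotp library, with the secret and account names generated here purely for illustration:

```python
# Minimal time-based one-time password (TOTP) sketch using the third-party
# pyotp library (pip install pyotp) -- one common way to add a second factor.
import pyotp

# Generated per user at enrollment, stored server-side and shown to the user
# as a QR code for an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="Example Corp"))

# At login, compare the code the user types with the current expected value.
code_from_user = totp.now()   # stand-in for what the user would type
print("second factor accepted:", totp.verify(code_from_user))
```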

Cisco cloud VP calls out trends in multicloud strategy

Large enterprises have quickly embraced multicloud strategy as a common practice — a shift that introduces opportunities, as well as challenges.

Cisco has witnessed this firsthand, as the company seeks a niche in a shifting IT landscape. Earlier this year, Cisco shuttered Intercloud Services, its failed attempt to create a public cloud competitor to Amazon Web Services (AWS). Now, Cisco’s bets are on a multicloud strategy to draw from its networking and security pedigree and sell itself as a facilitator for corporate IT’s navigation across disparate cloud environments.

In an interview with SearchCloudComputing, Kip Compton, vice president of Cisco’s cloud platforms and solutions group, discussed the latest trends with multicloud strategy and where Cisco plans to fit in the market.

How are your customers shifting their view on multicloud strategy?

Kip Compton: It started with the idea that each application is going to be on separate clouds. It’s still limited to more advanced customers, but we’re seeing use cases where they’re spanning clouds with different parts of an application or subsystems, either for historical reasons or across multiple applications, or taking advantage of the specific capabilities in a cloud.

Hybrid cloud was initially billed as a way for applications to span private and public environments, but that was always more hype than reality. What are enterprises doing now to couple their various environments?

Compton: The way we approach hybrid cloud is as a use case where you have an on-prem data center and a public data center and the two work together. Multicloud, the definition we’ve used, is at least two clouds, one of which is a public cloud. In that way, hybrid is essentially a subset of multicloud for us.

Azure Stack is a bit of an outlier, but hybrid has changed a bit for most of our customers in that it is not tightly coupled. Now it is deployments where they have certain code that runs in both places, and the two things work together to deliver an application. They’re calling that hybrid, whereas in the early days, it was more about seamless environments and moving workloads between on prem and the public cloud based on load and time of day, and that seems to have faded.

What are the biggest impediments to a successful multicloud strategy?

Compton: Part of it is what types of problems do people talk about to Cisco, as opposed to other companies, so I acknowledge there may be some bias there. But there are four areas that are pretty reliable for us in customer conversations.

First is networking, not surprisingly, and they talk about how to connect from on prem to the cloud. How do they connect between clouds? How do they figure out how that integrates to their on-prem connectivity frameworks?

Then, there’s security. We see a lot of companies carry forward their security posture as they move workloads; so virtual versions of our firewalls and things like that, and wanting to align with how security works in the rest of their enterprise.

The third is analytics, particularly application performance analytics. If you move an app to a completely different environment, it’s not just about getting the functionality, it’s about being performant. And then, obviously, how do you monitor and manage it [on] an ongoing basis?

The trend we see is [customers] want to take advantage of the unique capabilities of each cloud, but they need some common framework, some capability that actually spans across these cloud providers, which includes their on-prem deployment.

Where do you draw the line on that commonality between environments?

Compton: In terms of abstraction, there was a time where a popular approach was — I’ll call it the Cloud Foundry or bring-your-own-PaaS [platform as a service] approach — to say, ‘OK, the way I’m going to have portability is I’m not going to write my application to use any of the cloud providers’ APIs. I’m not going to take advantage of anything special from AWS or Azure or anyone.’

That’s less popular because the cloud providers have been fairly successful at launching new features developers want to use. We think of it more like a microservices style or highly modular pattern, where, for my application to run, there’s a whole bunch of things I need: messaging queues, server load, database, networking, security plans. It’s less to abstract Amazon’s networking, and it’s more to provide a common networking capability that will run on Amazon.

You mentioned customers with workloads spanning multiple clouds. How are those being built?

Compton: What I referred to are customers that have an application, maybe with a number of different subsystems. They might have an on-prem database that’s a business-critical system. They might do machine learning in Google Cloud Platform with TensorFlow, and they might look to deliver an experience to their customers through Alexa, which means they need to run some portion of the application in Amazon. They’re not taking their database and sharding it across multiple clouds, but those three subsystems have to work together to deliver that experience that the customer perceives as a single application.

What newer public cloud services do you see getting traction with your customers?

Compton: A few months ago, people were reticent to use [cloud-native] services because portability was the most important thing — but now, ROI and speed matter, so they use those services across the board.

We see an explosion of interest in serverless. It seems to mirror the container phenomenon where everybody agrees containers will become central to cloud computing architectures. We’re reaching the same point on serverless, or function as a service, where people see that as a better way to create code for more efficient [use of] resources.

The other trend we see: a lot of times people use, for example, Salesforce’s PaaS because their data is there, so the consumption of services is driven by practical considerations. Or they’re in a given cloud using services because of how they interface with one of their business partners. So, as much as there are some cool new services, there are some fairly practical points that drive people’s selection, too.
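
Compton’s serverless point is easiest to see in code: with function as a service, the deployable unit shrinks to a single handler that the platform runs, and bills for, only when it is invoked. Below is a minimal AWS Lambda-style handler in Python, a sketch of the model rather than any particular customer’s workload; the event shape depends entirely on whatever service triggers it.

```python
# Minimal AWS Lambda-style handler: the platform provisions and bills compute
# only while this function runs, which is the resource-efficiency argument
# for function as a service.
import json

def handler(event, context):
    """Entry point invoked by the platform; 'event' comes from the trigger
    (an API gateway request, a queue message, a storage notification, etc.)."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```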

Have you seen companies shift their in-house architectures to accommodate what they’re doing in the public cloud?

Compton: I see companies starting new applications in the cloud and not on prem. And what’s interesting is a lot of our customers continue to see on-prem growth. They have said, ‘We’re going to go cloud-first on our new applications,’ but the application they already have on prem continues to grow in resource needs.

We also see interest in applying the cloud techniques to the on-prem data center or private cloud. They’re moving away from some of the traditional technologies to make their data center work more like a cloud, partially so it’s easier to work between the two environments, but also because the cloud approach is more efficient and agile than some of the traditional approaches.

And there are companies that want to get out of running data centers. They don’t want to deal with the real estate, the power, the cooling, and they want to move everything they can into Amazon.

What lessons did Cisco learn from the now-shuttered Intercloud?

Compton: The idea was to build a global federated IaaS [infrastructure as a service] that, in theory, would compete with AWS. At that time, most in the industry thought that OpenStack would take over the world. It was viewed as a big threat to AWS.

Today, it’s hard to relate to that point of view — obviously, that didn’t happen. In many ways, cloud is about driving this brutal consistency, and by having global fabrics that are identical and consistent around the world, you can roll out new features and capabilities and scale better than if you have a federated model.

Where we are now in terms of multicloud and strategy going forward — to keep customers and partners and large web scale cloud providers wanting to either buy from us or partner with us — it’s solving some of these complex networking and security problems. Cisco has value in our ability to solve these problems [and] link to the enterprise infrastructures that are in place around the world … that’s the pivot we’ve gone through.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

CIOs should lean on AI ‘giants’ for machine learning strategy

NEW YORK — Machine learning and deep learning will be part of every data science organization, according to Edd Wilder-James, former vice president of technology strategy at Silicon Valley Data Science and now an open source strategist at Google’s TensorFlow.

Wilder-James, who spoke at the Strata Data Conference, pointed to recent advancements in image and speech recognition algorithms as examples of why machine learning and deep learning are going mainstream. He believes image and speech recognition software has evolved to the point where it can see and understand some things as well as — and in some use cases better than — humans. That makes it ripe to become part of the internal workings of applications and the driver of new and better services to internal and external customers, he said.

But what investments in AI should CIOs make to provide these capabilities to their companies? When building a machine learning strategy, choice abounds, Wilder-James said.

Machine learning vs. deep learning

Deep learning is a subset of machine learning, but it’s different enough to be discussed separately, according to Wilder-James. Examples of machine learning models include optimization, fraud detection and preventive maintenance. “We use machine learning to identify patterns,” Wilder-James said. “Here’s a pattern. Now, what do we know? What can we do as a result of identifying this pattern? Can we take action?”
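
A minimal sketch of that pattern-then-action loop, using scikit-learn’s IsolationForest to flag anomalous transaction amounts; the data is synthetic and the numbers are purely illustrative:

```python
# Minimal "identify a pattern, then act on it" sketch: flag anomalous
# transaction amounts with scikit-learn's IsolationForest.
# The data below is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=10, size=(500, 1))      # typical purchases
fraudulent = rng.normal(loc=900, scale=50, size=(5, 1))   # outliers to catch
transactions = np.vstack([normal, fraudulent])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
labels = model.predict(transactions)   # 1 = looks normal, -1 = anomaly

for amount in transactions[labels == -1].ravel():
    print(f"flag transaction of ${amount:,.2f} for review")   # the "take action" step
```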

Deep learning models perform tasks that more closely resemble human intelligence such as image processing and recognition. “With a massive amount of compute power, we’re able to look at a massively large number of input signals,” Wilder-James said. “And, so what a computer is able to do starts to look like human cognitive abilities.”

Some of the terrain for machine learning will look familiar to CIOs. Statistical programming languages such as SAS, SPSS and Matlab are known territory for IT departments. Open source counterparts such as R, Python and Spark are also machine-learning ready. “Open source is probably a better guarantee of stability and a good choice to make in terms of avoiding lock-in and ensuring you have support,” Wilder-James said.

Unlike other tech rollouts

The rollout of machine learning and deep learning models, however, is a different process than most technology rollouts. After getting a handle on the problem, CIOs will need to investigate if machine learning is even an appropriate solution.

“It may not be true that you can solve it with machine learning,” Wilder-James said. “This is one important difference from other technical rollouts. You don’t know if you’ll be successful or not. You have to enter into this on the pilot, proof-of-concept ladder.”

The most time-consuming step in deploying a machine learning model is feature engineering, or finding features in the data that will help the algorithms self-tune. Deep learning models skip the tedious feature engineering step and go straight to training. Tuning a deep learning model correctly requires immense data sets, graphics processing units or tensor processing units, and time. Wilder-James said it could take weeks and even months to train a deep learning model.
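
The difference shows up directly in code: a deep learning model is handed raw inputs and learns its own features during training, which is where the data sets, accelerators and time go. A minimal Keras sketch with random stand-in data follows; a real image model would train far longer, on GPUs or TPUs.

```python
# Minimal deep learning sketch in Keras: raw pixels go in, and the network
# learns its own features during fit() -- no manual feature engineering step.
# Random stand-in data; real training needs large datasets and GPU/TPU time.
import numpy as np
import tensorflow as tf

images = np.random.rand(256, 28, 28).astype("float32")   # stand-in images
labels = np.random.randint(0, 10, size=(256,))            # stand-in classes

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=32)        # the costly step
```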

One more thing to note: Building deep learning models is hard and won’t be a part of most companies’ machine learning strategy.

“You have to be aware that a lot of what’s coming out is the closest to research IT has ever been,” he said. “These things are being published in papers and deployed in production in very short cycles.”

CIOs whose companies are not inclined to invest heavily in AI research and development should instead rely on prebuilt, reusable machine and deep learning models rather than reinvent the wheel. Image recognition models, such as Inception, and natural language models, such as SyntaxNet and Parsey McParseface, are examples of models that are ready and available for use.

“You can stand on the shoulders of giants, I guess that’s what I’m trying to say,” Wilder-James said. “It doesn’t have to be from scratch.”
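
Standing on those shoulders is often just a few lines of code. The hedged sketch below loads the pretrained Inception architecture that ships with Keras and classifies a local image; the file path is a placeholder, and the ImageNet weights download automatically on first use.

```python
# Reuse a published, pretrained model instead of training an image classifier
# from scratch. The image path is a placeholder; the ImageNet weights are
# downloaded automatically the first time the model is constructed.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")   # pretrained on ImageNet

img = image.load_img("photo.jpg", target_size=(299, 299))   # placeholder path
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2%}")
```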

Machine learning tech

The good news for CIOs is that vendors have set the stage to start building a machine learning strategy now. TensorFlow, a machine learning software library, is one of the best known toolkits out there. “It’s got the buzz because it’s an open source project out of Google,” Wilder-James said. “It runs fast and is ubiquitous.”

TensorFlow itself is not terribly developer-friendly, but a simplified interface called Keras eases the burden and can handle the majority of use cases. And TensorFlow isn’t the only deep learning library or framework option, either. Others include MXNet, PyTorch, CNTK and Deeplearning4j.

For CIOs who want AI to live on premises, technologies such as Nvidia’s DGX-1 box, which retails for $129,000, are available.

But CIOs can also use the cloud as a computing resource, which costs anywhere between $5 and $15 an hour, according to Wilder-James. “I worked it out, and the cloud cost is roughly the same as running the physical machine continuously for about a year,” he said. At $15 an hour, a year of around-the-clock use comes to roughly $131,000 — about the list price of a DGX-1.

Or they can go the hosted platform route, where a service provider runs trained models for a company. Domain-specific proprietary tools, such as the personalization platform from Nara Logics, can fill out the AI infrastructure.

“It’s the same kind of range we have with plenty of other services out there,” he said. “Do you rent an EC2 instance to run a database or do you subscribe to Amazon Redshift? You can pick the level of abstraction that you want for these services.”

Still, before investments in technology and talent are made, a machine learning strategy should start with the basics: “The single best thing you can do to prepare with AI in the future is to develop a competency with your own data, whether it’s getting access to data, integrating data out of silos, providing data results readily to employees,” Wilder-James said. “Understanding how to get at your data is going to be the thing to prepare you best.”