
Databricks platform additions unify machine learning frameworks

SAN FRANCISCO — Open source machine learning frameworks have multiplied in recent years as enterprises pursue operational gains through AI. Along the way, that proliferation has created a jumble of competing tools and a nightmare for the development teams tasked with supporting them all.

Databricks, which offers managed versions of the Spark compute platform in the cloud, is making a play for enterprises that are struggling to keep pace with this environment. At Spark + AI Summit 2018, which was hosted by Databricks here this week, the company announced updates to its platform and to Spark that it said will help bring the diverse array of machine learning frameworks under one roof.

Unifying machine learning frameworks

MLflow is a new open source framework on the Databricks platform that integrates with Spark, scikit-learn, TensorFlow and other open source machine learning tools. It allows data scientists to package machine learning code into reproducible modules, conduct and compare parallel experiments, and deploy production-ready models.
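To illustrate the tracking side of that workflow, here is a toy stand-in, not MLflow's actual API: a minimal tracker that logs parallel runs and picks the best one by a chosen metric, which is the kind of comparison MLflow's tracking component automates.

```python
# Toy stand-in for MLflow-style experiment tracking: log each run's
# parameters and metrics, then compare runs to find the best one.
class ExperimentTracker:
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        """Record one experiment run's parameters and resulting metrics."""
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric):
        """Return the run with the highest value for the given metric."""
        return max(self.runs, key=lambda r: r["metrics"][metric])


tracker = ExperimentTracker()
tracker.log_run({"alpha": 0.1}, {"accuracy": 0.81})
tracker.log_run({"alpha": 0.5}, {"accuracy": 0.87})

print(tracker.best_run("accuracy")["params"])  # → {'alpha': 0.5}
```

In MLflow itself, the equivalent bookkeeping happens through logged runs that can be browsed and compared in its tracking interface.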

Databricks also introduced a new product on its platform, called Runtime for ML. This is a preconfigured Spark cluster that comes loaded with distributed machine learning frameworks commonly used for deep learning, including Keras, Horovod and TensorFlow, eliminating the integration work data scientists typically have to do when adopting a new tool.

Databricks’ other announcement, a tool called Delta, is aimed at improving data quality for machine learning modeling. Delta sits on top of data lakes, which typically contain large amounts of unstructured data. Data scientists can specify a schema they want their training data to match, and Delta will pull in all the data in the data lake that fits the specified schema, leaving out data that doesn’t fit.
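The filtering idea can be sketched in a few lines of plain Python. The record layout and field names below are invented for illustration; Delta itself operates on Spark tables rather than Python dictionaries.

```python
# Conceptual sketch of schema enforcement over a data lake: keep only
# records whose fields and types match the schema the data scientist declared.
EXPECTED_SCHEMA = {"user_id": int, "amount": float}


def matches_schema(record, schema):
    """True when a record has exactly the schema's fields, correctly typed."""
    return (set(record) == set(schema)
            and all(isinstance(record[field], ftype)
                    for field, ftype in schema.items()))


raw_records = [
    {"user_id": 1, "amount": 9.99},      # conforms
    {"user_id": "abc", "amount": 5.0},   # wrong type, filtered out
    {"user_id": 2},                      # missing field, filtered out
]
training_data = [r for r in raw_records if matches_schema(r, EXPECTED_SCHEMA)]
print(len(training_data))  # → 1
```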

MLflow's tracking user interface
MLflow includes a tracking interface for logging the results of machine learning jobs.

Users want everything under one roof

Each of the new tools is either in a public preview or alpha test stage, so few users have had a chance to get their hands on them. But attendees at the conference were broadly happy about the approach of stitching together disparate frameworks more tightly.

Saman Michael Far, senior vice president of technology at the Financial Industry Regulatory Authority (FINRA) in Washington, D.C., said in a keynote presentation that he brought in the Databricks platform largely because it already supports several languages for working with data, including R, Python and SQL. Integrating these tools more closely with machine learning frameworks will help FINRA use more machine learning in its goal of spotting potentially illegal financial trades.

“It’s removed a lot of the obstacles that seemed inherent to doing machine learning in a business environment,” Far said.

John Gole, senior director of business analysis and product management at Capital One, based in McLean, Va., said the financial services company has implemented Spark throughout its operational departments, including marketing, accounts management and business reporting. The platform is being used for tasks that range from extract, transform and load jobs to SQL querying for ad hoc analysis and machine learning. It’s this unified nature of Spark that made it attractive, Gole said.

Going forward, he said he expects this kind of unified platform to become even more valuable as enterprises bring more machine learning to the center of their operations.

“You have to take a unified approach,” Gole said. “Pick technologies that help you unify your data and operations.”

Bringing together a range of tools

Engineers at ride-sharing platform Uber have already built integrations similar to what Databricks unveiled at the conference. In a presentation, Atul Gupte, a product manager at Uber, based in San Francisco, described a data science workbench his team created that brings together a range of tools — including Jupyter, R and Python — into a web-based environment that’s powered by Spark on the back end. The platform is used for all the company’s machine learning jobs, like training models to cluster rider pickups in Uber Pool or forecast rider demand so the app can encourage more drivers to get out on the roads.

As the company grew from a startup to a large enterprise, Gupte said, the old way of doing things, in which everyone worked in a silo using their own tool of choice, didn't scale. That's why it was important to take a more standardized approach to data analysis and machine learning.

“The power is that everyone is now working together,” Gupte said. “You don’t have to keep switching tools. It’s a pretty foundational change in the way teams are working.”

VMware is redesigning NSX networking for the cloud

SAN FRANCISCO — VMware is working on a version of NSX for public clouds that departs from the way the technology manages software-based networks in private data centers.

In an interview this week with a small group of reporters, Andrew Lambeth, an engineering fellow in VMware’s network and security business unit, said the computing architectures in public clouds require a new form of NSX networking.

“In general, it’s much more important in those environments to be much more in tune with what’s happening with the application,” he said. “It’s not interesting to try to configure [software] at as low a level as we had done in the data center.”

Four or five layers up the software stack, cloud provider frameworks typically have hooks to the way applications communicate with each other, Lambeth told reporters at VMware’s RADIO research and development conference. “That’s sort of the level where you’d look to integrate in the future.”

Todd Pugh, IT director at Sugar Creek Packing Co., based in Washington Court House, Ohio, said it’s possible for NSX to use Layer 7 — the application layer — to manage communications between cloud applications.

“If we burst something to the cloud on something besides AWS, the applications are going to have to know how to talk to one another, as opposed to just being extensions of the network,” Pugh said.

Today, VMware is focusing its cloud strategy on the company’s partnership with cloud provider AWS. The access VMware has to Amazon’s infrastructure makes it possible for NSX to operate the same on the cloud platform as it does in a private data center. Companies use NSX to deliver network services and security to applications running on VMware’s virtualization software.

Pugh would not expect an application-centric version of NSX to be as user-friendly as NSX on AWS. Therefore, he would prefer to have VMware strike a similar partnership with Microsoft Azure, which would give him the option of using the current version of NSX on either of the two largest cloud providers.

“I can shop at that point and still make it appear as if it’s my network and not have to change my applications to accommodate moving them to a different cloud,” Pugh said.

Nevertheless, having a version of NSX for any cloud provider would be useful to many companies, said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo.

“If VMware can open up the platform a bit to allow their customers to have a uniform network management model across any IaaS environment, that will simplify engineering and operations tremendously for companies that are embracing multi-cloud and hybrid cloud,” McGillicuddy said.

VMware customers can expect the vendor to roll out the new version of NSX over the next year or so, Lambeth said. He declined to give further details.

Rethinking NSX networking

VMware will have to prepare NSX networking, not just for multiple cloud environments, but also the internet of things, which introduces other challenges to network management and security.

“More lately, I’ve been sort of taking a step back and figuring out what’s next,” Lambeth said. “I feel like the platform for NSX is kind of in a similar situation to where ESX and vSphere were in 2006 and 2007. Major pieces were kind of there, but there was a lot of buildout left.”

VSphere is the brand name for VMware’s suite of server virtualization products. ESX was the former name of VMware’s hypervisor.

VMware’s competitors in software-based networking that extends beyond the private data center include Cisco and Juniper Networks. In May, Juniper introduced its Contrail Enterprise Multicloud, while Cisco has been steadily developing new capabilities for its architecture, called Application Centric Infrastructure.

The immediate focus of the three vendors is on the growing number of companies moving workloads to public clouds. Synergy Research Group estimated cloud-based infrastructure providers saw their revenue rise by an average of 51% in the first quarter to $15 billion. The full-year growth rate was 44% in 2017 and 50% in 2016.

Juniper Contrail battles Cisco ACI, VMware NSX in the cloud

SAN FRANCISCO — Juniper Networks has extended its Contrail network virtualization platform to multicloud environments, competing with Cisco and VMware for the growing number of enterprises running applications across public and private clouds.

The Juniper Contrail Enterprise Multicloud, introduced this week at the company’s NXTWORK conference, is a single software console for orchestrating, managing and monitoring network services across applications running on cloud-computing environments. The new product, which won’t be available until early next year, would compete with the cloud versions of Cisco’s ACI and VMware’s NSX.

Also at the show, Juniper announced that it would contribute the codebase for OpenContrail, the open source version of the software-defined networking (SDN) overlay, to The Linux Foundation. The company said the foundation’s networking projects would help drive OpenContrail deeper into cloud ecosystems.

Contrail Enterprise Multicloud stems, in part, from the work Juniper has done over several years with telcos building private clouds, Juniper CEO Rami Rahim told analysts and reporters at the conference.

“It’s almost like a bad secret — how embedded we have been now with practically all — many — telcos around the world in helping them develop the telco cloud,” Rahim said. “We’ve learnt the hard way in some cases how this [cloud networking] needs to be done.”

Is Juniper’s technology enough to win?

Technologically, Juniper Contrail can compete with ACI and NSX, IDC analyst Brad Casemore said. “Juniper clearly has put considerable thought into the multicloud capabilities that Contrail needs to support, and, as you’d expect from Juniper, the features and functionality are strong.”

However, Juniper will need more than good technology when competing for customers. A lot more enterprises use Cisco and VMware products in data centers than Juniper gear. Also, Cisco has partnered with Google to build strong technological ties with the Google Cloud Platform, and VMware has a similar deal with Amazon.

“Cisco and VMware have marketed their multicloud offerings aggressively,” Casemore said. “As such, Juniper will have to raise and sustain the marketing profile of Contrail Enterprise Multicloud.”

Networking with Juniper Contrail Enterprise Multicloud

Contrail Enterprise Multicloud comprises networking, security and network management. Companies can buy the three pieces separately, but the new product lets engineers manage the trio through the software console that sits on top of the centralized Contrail controller.

For networking in a private cloud, the console relies on a virtual network overlay built on top of abstracted hardware switches, which can be from Juniper or a third party. The system also includes a virtual router that provides links to the physical underlay and Layer 4-7 network services, such as load balancers and firewalls. Through the console, engineers can create and distribute policies that tailor the network services and underlying switches to the needs of applications.

Contrail Enterprise Multicloud capabilities within public clouds, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, are different because the provider controls the infrastructure. Network operators use the console to program and control overlay services for workloads through the APIs made available by cloud providers. The Juniper software also uses native cloud APIs to collect analytics information. 

Other Juniper Contrail Enterprise Multicloud capabilities

Network managers can use the console to configure and control the gateway leading to the public cloud and to define and distribute policies for cloud-based virtual firewalls.

Also accessible through the console is Juniper’s AppFormix management software for cloud environments. AppFormix provides policy monitoring and application and software-based infrastructure analytics. Engineers can configure the product to handle routine networking tasks.

The cloud-related work of Juniper, Cisco and VMware is a recognition that the boundaries of the enterprise data center are being redrawn. “Data center networking vendors are having to redefine their value propositions in a multicloud world,” Casemore said.

Indeed, an increasing number of companies are reducing the amount of hardware and software running in private data centers by moving workloads to public clouds. Revenue from cloud services rose almost 29% year over year in the first half of 2017 to more than $63 billion, according to IDC.

Juniper Junos Space Security Director gets automation boost

SAN FRANCISCO — Juniper Networks has made its security products more responsive to threats, thereby reducing the amount of manual labor required to fend off attacks.

On Tuesday at the Juniper NXTWORK conference, the company introduced “dynamic policy management” in the Junos Space Security Director. The central software console for Juniper network security manages the vendor’s firewalls and enforces security policies on Juniper’s EX and QFX switches.

The latest improvement to Junos Space Security Director lets security pros define variables that will trigger specific rules in Juniper SRX Series next-generation firewalls. For example, if a company is under a ransomware attack that has planted malware in employees’ PCs, then Director could activate rules restricting access to critical applications that handle sensitive data. The rules could also tell firewalls to cut off internet access for those applications.
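The trigger-to-rule mapping behind that scenario can be sketched simply. The event and rule names here are invented for illustration; Security Director's real configuration model differs.

```python
# Hypothetical sketch of trigger-driven policy activation: a detected threat
# event maps to a set of firewall rules that get activated automatically.
TRIGGERS = {
    "ransomware_detected": [
        "restrict_sensitive_app_access",
        "block_internet_for_sensitive_apps",
    ],
}

active_rules = set()


def handle_event(event):
    """Activate the firewall rules mapped to a detected threat event."""
    for rule in TRIGGERS.get(event, []):
        active_rules.add(rule)


handle_event("ransomware_detected")
print(sorted(active_rules))
# → ['block_internet_for_sensitive_apps', 'restrict_sensitive_app_access']
```

The point of the design is that the mapping is defined once, ahead of time, so the response happens in minutes instead of waiting for an operator to push new rules by hand.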

The new Junos Space Security Director features can lower the response time to security threats from hours to minutes, said Mihir Maniar, vice president of security product management at Juniper, based in Sunnyvale, Calif. “It’s completely dynamic, completely user-intent-driven.”

Vendors trending toward automated security threat response

Automating the response to security threats is a trend among vendors, including Juniper rival Cisco. Companies can configure products to take specific actions against threats, which removes the time security pros would have to spend deploying new firewall rules manually.

“You have to mitigate very quickly and not just inform somebody and hope for the best,” said Dan Conde, an analyst at Enterprise Strategy Group, based in Milford, Mass. “Manual procedures do not work very quickly.”

But the ultimate goal, which eludes vendors today, is to have products that detect and mitigate threats on their own and then continue to monitor the network to ensure the steps taken were successful.

Vendor marketing tends to play down the fact that the level of automation is rudimentary, which has led to confusion over the definition of automation across different products. “Automation means 10 different things to 10 different people,” Conde said.

Juniper network security stronger with new SRX4600 firewall

Juniper has integrated a new firewall with the latest iteration of Junos Space Security Director. The SRX4600 is designed to protect data flowing in multi-cloud environments found in an increasing number of companies. The SRX4600 is a 1RU appliance with a throughput of 80 Gbps.

Juniper also unveiled at NXTWORK an on-premises malware detection appliance that uses analytics and remediation technology built by Cyphort, which Juniper acquired this year. Cyphort's security analytics spot malware based on its abnormal activity in the network.

The new Advanced Threat Prevention Appliance in Juniper’s network security portfolio is designed for companies with “strict data sovereignty requirements,” the company said. The on-premises hardware has been certified by ICSA Labs, an independent division of Verizon that tests and certifies security and health IT products.

Salesforce Quip gets a facelift

SAN FRANCISCO — Salesforce launched a major overhaul of the Quip collaboration tool it acquired in July 2016….

The core concept behind the new version is organizing everything related to a project in one tab.

This, Salesforce hopes, can reduce the friction of clicking through separate browser tabs for email, chat, cloud storage, shared spreadsheets and other Salesforce apps a company may have integrated into its CRM platform.

First launched in 2013, Salesforce Quip enables users to collaboratively chat and work on shared documents and spreadsheets. Salesforce calls the latest update, announced at the annual Dreamforce user conference, the Salesforce Quip Collaboration Platform. It enables users to bring a wide variety of live applications onto a single canvas.

A project manager can customize the widgets associated with a project and provide team members with the permissions required to make changes. All the updates to this page can be automatically reflected in the appropriate Salesforce database in an auditable and, if necessary, reversible manner.

Focus on a single canvas

Collaborative interfaces are certainly not new, but the team behind Quip has a lot of experience in launching some of the most successful apps on the web, including Google Maps, FriendFeed and the Google App Engine.

The team leveraged this experience to create a core set of Quip widgets called Live Apps, as well as an API that enables third-party developers to add new widgets to the platform. The individual apps were developed by DocuSign, Lucidchart, New Relic Inc., Altify and others. Now that the platform is live, more apps are expected to be developed. Current Salesforce apps include Salesforce records, calendars, Kanban boards, shared documents and chat.

View of Quip dashboard and examples of mobile layout
A screenshot of the Salesforce Quip dashboard and mobile features

The Altify app enables teams to include a widget to map out the relationships inside a customer opportunity. The New Relic app enables a marketing team to track website performance during big events, like Black Friday sales, so that the sales and engineering teams can collaboratively make changes during the campaigns.

A project manager can also create a Quip workbook that best matches their team’s process. A single workbook can include a marketing budget, marketing goals and marketing documents, all in one place.

Collaborating on a better film

Salesforce Quip is used by 29,000 employees at 21st Century Fox Inc. to manage film production, sales and marketing. Creatives use it to track scripts or call sheets associated with TV and movie productions. All changes are made to a document of record in one place so that everyone is working on the same version. This reduces the burden of trying to weave changes made to different versions of a document into the master.

What’s particularly intriguing is the level of granularity with which participants can reference data in the apps. For example, 21st Century Fox producers use Quip for reviewing film dailies, and they can tie a chat to an arrow pointing to a specific object in a video frame. This saves them time because everyone involved can look at the exact video frame in the footage without having to open another window and manually look for it.

Creating a new experience layer to drive process

Salesforce Quip represents an example of driving better workflow by improving the user experience layer.

“The experience could be a customer, employee or partner experience,” said Paul Gaynor, partner at PricewaterhouseCoopers LLC, at Dreamforce. “A focus on the experience layer allows enterprises not to focus so much on the process, [but on] how to bring about engagement. Twenty years ago, enterprises talked about process. Now, we have moved to engagement. If I create the right engagement mechanism, the process is a byproduct of that.”

The key is to hide the complexity from users.

“Behind the scenes, we want to apply AI, machine learning and the capability to bring multiple data repositories together, either in the public or private cloud, and have them merge,” Gaynor said. “If I create the right enablement, then the process naturally follows.”

Turning business into a team sport

“Complex enterprise selling is a team sport,” said Anthony Reynolds, CEO of Altify, referring to the difficulty of a company selling its products or services to large organizations.

It’s too easy for teams on all kinds of projects to get bogged down in the minutia and friction of moving between different apps. The promise of Quip is to make any enterprise process a team sport. The idea of a team sitting around a single screen related to a campaign sounds a lot more exciting than separate individuals trying to keep up with a flurry of emails, chats and various app notifications.

Leading sales organizations are starting to adopt a more collaborative approach to selling to larger customers. Account-based marketing (ABM) has emerged as a way of customizing the marketing message to address the unique needs of all the stakeholders in a target opportunity. But this requires a high level of collaboration between all the employees involved in customizing the marketing communication and sales strategy for the target customer.

“A company can’t really be successful with their ABM strategy unless it is tightly coupled with an account-based selling strategy,” Reynolds explained. “Account-based marketing starts with [a] better understanding of a company’s unique needs to enable a custom engagement. Altify allows an organization to cleanly execute the handoff from marketing to sales teams so they can effectively position value, connect to power and get a deal done.”

Salesforce Quip is still in its early phases compared to traditional communication channels, like email and chat. Reynolds estimates that about 10% of Altify’s customers are using Quip today, while another 25% are exploring it.

Note: TechTarget offers ABM and project intelligence data and tool services.

DevOps value stream mapping plots course at Nationwide

SAN FRANCISCO — After a decade of change, Nationwide Insurance sees DevOps value stream mapping as its path to achieve IT nirvana, with an orderly flow of work from lines of business into the software delivery pipeline.

Since 2007, Nationwide Mutual Insurance Co., based in Columbus, Ohio, has streamlined workflows in these corporate groups according to Lean management principles, among software developers with the Agile method and in the software delivery pipeline with DevOps. Next, it plans to bring all those pieces together through an interface that creates a consistent model of how tasks are presented to developers, translated into code and deployed into production.

That approach, called value stream mapping, is a Lean management concept that originated at Toyota to record all the processes required to bring a product to market. Nationwide uses a feature called Landscape View in Tasktop Technologies’ Integration Hub model-based software suite to create its own record of how code artifacts flow through its systems, as part of an initiative to quicken the pace of software delivery.

Other DevOps software vendors, such as XebiaLabs and CollabNet, offer IT pros information about the health of the DevOps pipeline and its relationship to business goals. But Tasktop applies the Lean management concept of value stream mapping to DevOps specifically.

“It’s a diagram that shows all your connectivity and shows the flow of work,” said Carmen DeArdo, the technology director responsible for the software delivery pipeline at Nationwide, in an interview at DevOps Enterprise Summit here last week. “You can see how artifacts are flowing … What we’re hoping for in the future is more metrics and analytics around things like lead time.”

DevOps value stream mapping boosts pipeline consistency

Before Landscape View, Nationwide used Tasktop’s Sync product to integrate the tools that make up its DevOps pipeline. These tools include the following:

  • IBM Rational Doors Next Generation and Rational Team Concert software for team collaboration;
  • HP Quality Center  — now Micro Focus Quality Center Enterprise — for defect management;
  • Jenkins, GitHub and IBM UrbanCode for continuous integration and continuous delivery;
  • ServiceNow for IT service management;
  • New Relic and Splunk for monitoring;
  • ChangeMan ZMF for mainframe software change management; and
  • Microsoft Team Foundation Server for .NET development.

One Tasktop Sync integration routes defects from HP Quality Center directly into a backlog for Agile teams in Rational Team Concert. Another integration feeds requirements in IBM Doors Next Generation into HP Quality Center to generate automated code tests.
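A sketch of that first integration pattern, routing defects from a QA tool into an Agile backlog, might look like this. The record fields and defect IDs are invented for illustration, not Tasktop's data model.

```python
# Conceptual sketch of tool-to-tool sync: copy defects that haven't yet been
# routed from a QA tracker into an Agile team's backlog, then mark them synced.
qa_defects = [
    {"id": "D-101", "severity": "high", "synced": False},
    {"id": "D-102", "severity": "low", "synced": True},  # already routed
]
agile_backlog = []


def sync_defects(defects, backlog):
    """Copy unsynced defects into the backlog and mark them as synced."""
    for defect in defects:
        if not defect["synced"]:
            backlog.append({"work_item": defect["id"],
                            "priority": defect["severity"]})
            defect["synced"] = True


sync_defects(qa_defects, agile_backlog)
print(agile_backlog)  # → [{'work_item': 'D-101', 'priority': 'high'}]
```

Marking each record as synced is what keeps the handoff idempotent: running the sync again moves nothing twice, which matters when the integration runs continuously between two live systems.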

However, the business still lacked a high-level understanding of how its products were brought to market, especially where business requirements were presented to the DevOps teams to be translated into software features and deployed.

Without that understanding, teams unsuccessfully tried to hasten software delivery with additional developers and engineers. However, that didn’t get to the root of delays in the creation of business requirements. Other attempts to bridge that gap with whiteboards, team meetings and consultants produced no sustainable improvements, DeArdo said.

The Landscape View value stream mapping software tool, however, presents a more objective view than anecdotal descriptions in a team meeting of how work flows to the DevOps team, from artifacts to deployments and incident responses. The software also helps the DevOps team understand lessons learned from incidents and apply them to application development backlogs.

Landscape View’s objective analysis of the DevOps pipeline, complete with its flaws, forces the IT team to set aside biases and misunderstandings and think about process improvement in a new way, DeArdo said. “It’s one thing to talk about value stream, and another to show a picture of what it could look like when things are connected.”

A screenshot of Tasktop Integration Hub’s Landscape View feature, which helps Nationwide with DevOps value stream mapping.

A more accurate sense of how its processes work will help Nationwide more effectively improve those processes, DeArdo said. For example, the company has already amended how product defects move to the developer backlog, from an error-prone manual process that relied on email messages to an automated set of handoffs between software APIs.

DevOps to-do list and wish list still full

DevOps value stream mapping doesn’t mean Nationwide’s DevOps work is done. The company aims to use infrastructure as code more broadly and bring that aspect of IT under GitHub version control, as well as migrate more on-premises workloads to the public cloud. And even with the addition of value stream mapping software as an aid, it still struggles to introduce companywide systems thinking to a traditionally siloed set of business units and IT disciplines.

“We don’t really architect the value stream around the major DevOps metrics, [such as] frequency of deployment, reducing lead time or [mean time to resolution],” DeArdo said. “Maybe we do, in some sense, but not as intentionally as we could.”

To address this disparity, Nationwide will tie traditionally separate environments, which include a mainframe, into the same DevOps pipeline as the rest of its workloads.

“We don’t buy in to the whole [bimodal] IT concept,” DeArdo said, in reference to a Gartner term that describes a DevOps approach limited to new applications, while legacy applications are managed separately. “[To say DevOps] is just for the cool kids, and if you’re on a legacy system, you need not apply, sends the wrong message.”

DeArdo would like Tasktop to extend DevOps value stream mapping on Integration Hub with the ability to run simulations of different value stream models to see what will work best. He’d also like to see more metrics and recommendations from Integration Hub to help identify what’s causing bottlenecks in the process and how to resolve them.

“Anything that has a request and a response and an SLA [service-level agreement] has a target on its back to be automated from a value stream perspective,” he said. “How can we make it self-service and improve it? If you can’t see it, you’re only touching part of the elephant.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Container security platforms diverge on DevSecOps approach

SAN FRANCISCO — Container security platforms have begun to proliferate, but enterprises may have to watch the DevSecOps trend play out before they settle on a tool to secure container workloads.

Two container security platforms released this month — one by an up-and-coming startup and another by an established enterprise security vendor — take different approaches. NeuVector, a startup that introduced an enterprise edition at DevOps Enterprise Summit 2017, offers code- and container-scanning features that integrate into continuous integration and continuous delivery (CI/CD) pipelines but require no changes to developers' workflow.
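Scan integration of this kind typically means a CI stage that runs an image scan and fails the build when findings cross a severity threshold. The sketch below is generic and illustrative; the findings format, severity levels and `gate_on_scan` helper are assumptions for the example, not NeuVector's actual API.

```python
# Generic sketch of a CI gate on container-scan results. The scanner
# output format and severity threshold are illustrative assumptions.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate_on_scan(findings, fail_at="high"):
    """Return (passed, blocking) for a list of scan findings.

    findings: iterable of dicts like {"cve": "...", "severity": "high"}
    fail_at:  lowest severity that should fail the build
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (not blocking, blocking)

# Example: one high-severity finding fails the build at the default
# threshold, while a low-severity finding alone would pass.
findings = [
    {"cve": "CVE-2017-1000117", "severity": "high"},
    {"cve": "CVE-2017-5638", "severity": "low"},
]
passed, blocking = gate_on_scan(findings)
```

Because the gate runs inside the pipeline rather than in the developer's editor or workflow, developers see a failed build like any other test failure, which is the "no workflow change" point NeuVector emphasizes.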

By contrast, a product from the more established security software vendor CSPi, Aria Software Defined Security, allows developers to control the insertion of libraries into container and VM images that enforce security policies.

There’s still significant overlap between these container security platforms. NeuVector has CSPi’s enterprise customer base in its sights, with added support for noncontainer workloads and Lightweight Directory Access Protocol. Aria Software Defined Security includes the network microsegmentation features for policy enforcement that are NeuVector’s primary focus. And while developers inject Aria’s security code into machine images, they aren’t expected to become security experts: Enterprise IT security pros set the policies the software enforces, and a series of wizards guides developers through the process of integrating Aria’s libraries.

Both vendors also agree on this: Modern IT infrastructures with DevOps pipelines that deliver rapid application changes require a fundamentally different approach to security than traditional vulnerability detection and patching techniques.

There’s definitely a need for new security techniques for containers that rely less on layers of VM infrastructure to enforce network boundaries, which can negate some of the gains to be had from containerization, said Jay Lyman, analyst with 451 Research.

However, even amid lots of talk about the need to “shift left” and get developers involved with IT security practices, bringing developers and security staff together at most organizations is still much easier said than done, Lyman said.

NeuVector 1.3 container security platform
NeuVector 1.3 captures network sessions automatically when container threats are detected, a key feature for enterprises.

Container security platforms encounter DevSecOps growing pains

As NeuVector and CSPi product updates hit the market, enterprise IT pros at the DevOps Enterprise Summit (DOES) here this week said few enterprises use containers at this point, and the container security discussion is even further off their radar. By the time containers are widely used, DevSecOps may be more mature, which could favor CSPi’s more hands-on developer strategy. But for now, developers and IT security remain sharply divided.

Eventually, we’ll see more developer involvement in security, but it will take time and probably be pretty painful.
Jay Lyman, analyst, 451 Research

“Everyone needs to be security-conscious, but to demand developers learn security and integrate it into their own workflow, I don’t see how that happens,” said Joan Qafoku, a risk consulting associate at KPMG LLP in Seattle who works with an IT team at a large enterprise client also based in Seattle. That client, which Qafoku did not name, gives developers a security-focused questionnaire, but security integration into their process goes no further than that.

NeuVector’s ability to integrate into the CI/CD pipeline without changes to application code or the developer workflow was a selling point for Tobias Gurtzick, security architect for Arvato, an international outsourcing services company based in Gütersloh, Germany.

Still, this integration wasn’t perfect in earlier iterations of NeuVector’s product, Gurtzick said in an interview before DOES. With previous versions, Gurtzick’s team had to poll an API every two minutes to trigger container and code scans. NeuVector’s 1.3 release includes a new webhook notification feature that triggers code scans as part of continuous integration testing more elegantly, without the performance overhead of polling the API.
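The difference Gurtzick describes can be sketched generically: a polling loop pays one API round trip per interval whether or not anything changed, while a webhook handler runs only when an event actually arrives. The event shape and `trigger_scan` callback below are hypothetical stand-ins, not NeuVector's interfaces.

```python
# Contrast of poll-driven vs. webhook-driven scan triggering.
# The event format and callbacks are illustrative assumptions.

def poll_for_scans(check_api, trigger_scan, ticks):
    """Poll the API once per tick; every call costs a round trip
    even when nothing has changed."""
    calls = 0
    for _ in range(ticks):
        calls += 1              # one API round trip per interval
        if check_api():         # usually False: wasted work
            trigger_scan()
    return calls

def on_webhook(event, trigger_scan):
    """Webhook model: the CI system or registry calls us only when a
    new image or commit actually arrives, so there is no idle cost."""
    if event.get("type") == "image_pushed":
        trigger_scan()
```

At a two-minute interval, an hour of polling costs 30 API calls even if only one image was pushed in that hour; the webhook model fires exactly once, which is where the overhead savings come from.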

“That’s the most important feature of the new version,” Gurtzick said. He also pointed to added support for detailed network session snapshots that can be used in forensic analysis. CSPi’s Aria Software Defined Security offers a similar feature in its first release.

While early adopters of container security platforms, such as Gurtzick, have settled for themselves the debate over how developers and IT security should bake security into applications, the overall market has been slower to take shape as enterprises hash out that collaboration, Lyman said.

“Earlier injection of security into the development process is better, but that still usually falls to IT ops and security [staff],” Lyman said. “Part of the DevOps challenge is aligning those responsibilities with application development. Eventually, we’ll see more developer involvement in security, but it will take time and probably be pretty painful.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Salesforce-Google integration bolsters CRM, analytics functionality

SAN FRANCISCO — Salesforce and Google cozied up a little more closely with several new product integrations. One offers a cloud alternative to Amazon Web Services; another pushes Salesforce deeper into G Suite, Google’s subscription business applications cloud, a move that may send shock waves through the thriving ecosystem of partners that already provide small-business CRM for those apps.

The Salesforce-Google integration includes the naming of Google Cloud as Salesforce’s latest preferred public cloud for international customers, joining AWS, with which Salesforce partnered earlier this year.

The Salesforce-Google integration, announced earlier this month at Salesforce’s Dreamforce user conference, also includes Salesforce Lightning for Gmail and Google Sheets, as well as Quip Live Apps for Google Drive and Google Calendar. Salesforce’s Sales and Marketing Clouds will both have Google Analytics 360 embedded.

As part of the partnership, Google will also be offering existing Salesforce customers a one-year free trial of G Suite.

Salesforce is no stranger to partnering with other enterprise tech companies — even if some products compete — to offer its customers an enhanced experience, and that appears to be the reason behind the Salesforce-Google integration.

“In the past, [Salesforce] has announced some pretty big partnerships that turned out to maybe be not so big, but this is from a different angle,” said Michael Fauscette, chief research officer for G2 Crowd, based in Chicago. “They have a go-to-market strategy together.”

Integration to help SMBs

In the past, [Salesforce] has announced some pretty big partnerships that turned out to maybe be not so big, but this is from a different angle.
Michael Fauscette, chief research officer, G2 Crowd

Small- and medium-sized businesses will benefit from the Salesforce-Google integration because many of them already use Gmail and G Suite, according to analysts, and the tie-in could make San Francisco-based Salesforce a more attractive CRM option.

“I have a client, and the No. 1 challenge is adoption: They have these great tools and insights, but if people aren’t in there and feeding the engine and taking action, it doesn’t matter,” said Lisa Hager, global head of Salesforce practices for Mumbai-based Tata Consultancy Services. “But if I go in to get my mail and the Salesforce platform prepopulates my email and spreadsheets, I’m more likely to go into that tool.”

Voices.com, which works with brands to find voice actors for campaigns and is based in London, Ont., has been a Salesforce customer for 12 years, and its CEO, David Ciccarelli, was enthusiastic about some of the Salesforce-Google integrations.

“Salesforce is a great system of record, but where it can improve upon is mass editing,” he said. “So, being able to one-click export from Salesforce into something manipulative like Google Sheets, make changes and one-click import back — that’s where you’ll see huge time savings.”

Marketing is where the data is

Salesforce has made a concerted effort to increase market share for its Marketing Cloud to match that of Sales Cloud and Service Cloud, and it looks to do that through data.

Just weeks after releasing B2B Lead Analytics for Facebook, Salesforce is embedding Google Analytics 360 into Marketing Cloud, giving marketers insights from two of the leading data sources on the internet.

“If you’re not in a separate data silo and you can ingest the Google Analytics 360 data on website visitors and keyword ad buys, with that integrated, you don’t have to take that data out and manually process it,” said Cindy Zhou, principal analyst for Constellation Research. “And there’s always data lost when you have to move it from one place to another. So, having it embedded natively will help you get deeper insights, and you can still apply Einstein on top to do audience segmentation and analysis.”

Ciccarelli said that, for a smaller business like Voices.com, a Google Analytics 360 license was always too expensive to budget for; with it integrated into Salesforce at no additional cost, smaller companies will be able to receive enterprise-level insights.

Salesforce adds another storage cloud

The news of Salesforce adding another preferred public cloud for international expansion comes just months after Salesforce formed a similar partnership with AWS. The addition of Google Cloud is to address customer needs, according to Ryan Aytay, executive vice president for business development and strategic accounts for Salesforce.

“AWS continues to be an important part of our infrastructure, so nothing’s changing there,” Aytay said. “We’re just adding another preferred cloud and moving forward to address customer needs.”

Google and AWS are two of the three leaders in the cloud space, along with Microsoft Azure. Salesforce CEO Marc Benioff has been outspoken about his disdain for former partner Microsoft, so it’s unlikely Salesforce will partner with the Redmond, Wash., company anytime soon.

The move toward Google could have been a response to customers’ demands, according to Hager, as AWS is costly when it comes to cloud storage.

“Just being able to have that option of Google storage instead of AWS is important; I had three clients this morning complaining about the cost of AWS,” Hager said. “If you’re storing a lot of documents on Salesforce, it can get expensive. So, integrating with Google is a nice option.”

There’s some potential overlap among the integrated products, especially between Salesforce’s Quip and Google’s G Suite, but Salesforce executives aren’t worried; Aytay said Salesforce itself uses both products internally.

Zhou can see the products coexisting, but there’s also some “friendly competition” between G Suite and Quip, with the Salesforce product being a good alternative for companies creating contracts or requests for proposal.

Several of the Salesforce-Google integrations are already on the market, including Lightning for Gmail and integrations with Calendar and Google Drive, with deeper integrations rolling out in 2018, according to the press release. Quip Live Apps integration with Google Drive is expected to be generally available in the first half of 2018 for $25 per user, per month with any Quip Enterprise license. The integrations between Salesforce and Google Analytics 360 are also expected in the first half of 2018, at no additional cost to licensed customers.

DevOps transformation in large companies calls for IT staff remix

SAN FRANCISCO — A DevOps transformation in large organizations can’t just rely on mandates from above that IT pros change the way they work; IT leaders must rethink how teams are structured if they want them to break old habits.

Kaiser Permanente, for example, has spent the last 18 months trying to extricate itself from 75 years of organizational cruft through a consumer digital strategy program led by Alice Raia, vice president of digital presence technologies. With the Kaiser Permanente website as its guinea pig, Raia realigned IT teams into a squad framework popularized by digital music startup Spotify, with cross-functional teams of about eight engineers. At the 208,000-employee Kaiser Permanente, that’s been subject to some tweaks.

“At our first two-pizza team meeting, we ate 12 pizzas,” Raia said in a session at DevOps Enterprise Summit here. Since then, the company has settled on an optimal number of 12 to 15 people per squad.

The Oakland, Calif., company decided on the squads approach when a previous model with front-end teams and systems-of-record teams in separate scrums didn’t work, Raia said. Those silos and a focus on individual projects resulted in 60% waste in the application delivery pipeline as of a September 2015 evaluation. The realignment into cross-functional squads has forced Kaiser’s website team to focus on long-term investments in products and faster delivery of features to consumers.

IT organizational changes vary by company, but IT managers who have brought about a DevOps transformation in large companies share a theme: Teams can’t improve their performance without a new playbook that puts them in a better position to succeed.

We had to break the monogamous relationships between engineers and [their] areas of interest.
Scott Nasello, senior manager of platforms and systems engineering, Columbia Sportswear Co.

At Columbia Sportswear Co. in Portland, Ore., this meant new rotations through various areas of focus for engineers — from architecture design to infrastructure building to service desk and maintenance duties, said Scott Nasello, senior manager of platforms and systems engineering, in a presentation.

“We had to break the monogamous relationships between engineers and those areas of interest,” Nasello said. This resulted in surprising discoveries, such as when two engineers who had sat next to each other for years discovered they’d taken different approaches to server provisioning.

Short-term pain means long-term gain

In the long run, the move to DevOps results in standardized, repeatable and less error-prone application deployments, which reduces the number of IT incidents and improves IT operations overall. But those results require plenty of blood, sweat and tears upfront.

“Prepare to be unpopular,” Raia advised other enterprise IT professionals who want to move to DevOps practices. During Kaiser Permanente’s transition to squads, Raia had the unpleasant task of informing executive leaders that IT must slow down its consumer-facing work to shore up its engineering practices — at least at first.

Organizational changes can be overwhelming, Nasello said.

“There were a lot of times engineers were running on empty and wanted to tap the brakes,” he said. “You’re already working at 100%, and you feel like you’re adding 30% more.”

IT operations teams ultimately can be crushed between the contradictory pressures of developer velocity on the one hand and a fear of high-profile security breaches and outages on the other, said Damon Edwards, co-founder of Rundeck Inc., a digital business process automation software maker in Menlo Park, Calif.

Damon Edwards, co-founder of Rundeck Inc., shares the lessons he learned from customers about how to reduce the impact of DevOps velocity on IT operations.

A DevOps transformation means managers must empower those closest to day-to-day systems operations to address problems without Byzantine systems of escalation, service tickets and handoffs between teams, Edwards said.

Edwards pointed to Rundeck customer Ticketmaster as an example of an organizational shift toward support at the edge. A new ability to resolve incidents in the company’s network operations center — the “EMTs” of IT incident response — reduced IT support costs by 55% and the mean time to response from 47 minutes to 3.8 minutes on average.

“Silos ruin everything — they’re proven to have a huge economic impact,” Edwards said.

And while DevOps transformations pose uncomfortable challenges to the status quo, some IT ops pros at big companies hunger for a more efficient way to work.

“We’d like a more standardized way to deploy and more investment in the full lifecycle of the app,” said Jason Dehn, systems analyst for a large U.S. retailer that he asked not to name. But some lines of business at the company are happy with the status quo, in which they aren’t entangled in day-to-day application maintenance.

“Business buy-in can be the challenge,” Dehn said.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

By aligning sales and marketing, Mizuho OSI could sell faster

SAN FRANCISCO — To take on larger competitors with more resources, medical equipment manufacturer Mizuho OSI had to create a faster track from lead generation to sales.

To work smarter and faster to identify leads and close sales, the Union City, Calif., company broke down its internal silos, deciding that aligning sales and marketing departments would be its best bet.

“We had a gap in collaboration,” said Greg Neukirch, vice president of sales and marketing at Mizuho OSI, during a session at Dreamforce 2017 this week. “We needed to be smarter and faster and improve our customer experience beyond what we did in the past.”

Neukirch added that the company did extensive research to see which software tools could aid in aligning sales and marketing. It ultimately chose Salesforce for CRM and Salesforce Pardot for marketing automation.

“We had a sales team wanting more and a marketing team trying to give more, and we looked at how we could leverage Salesforce and Pardot to close the gap between those two functions,” Neukirch said.

Mizuho OSI adopted Salesforce in February 2016 and Pardot a year later, working to ensure close collaboration between the sales and marketing departments.

Bringing sales, marketing together

We had a sales team wanting more and a marketing team trying to give more, and we looked at how we could leverage Salesforce and Pardot to close the gap.
Greg Neukirch, vice president of sales and marketing, Mizuho OSI

Internal silos are a common problem for businesses, because sales and marketing departments have historically had different objectives. But as consumers have become more educated through the buying process, aligning sales and marketing is a strategy that can bring a company more customers. It’s not an easy process, however.

“There was skepticism in our sales department,” Neukirch said. “They didn’t know the products or understand why they needed to do something different. But it was up to us to help communicate that value.”

New Salesforce Sales Cloud features are designed to make it easier for customers to better align sales and marketing. With the Lightning Data feature, for example, companies can discover and import new potential customers, according to Brooke Lane, director of product management for Sales Cloud.

“In today’s setting, we want to quickly close deals and also better understand customers,” Lane said. “With [the new feature] Campaign Management, it can help you show the impact of marketing activities on the sales pipeline. We want to continue bridging Salesforce and Pardot so you’re not troubled with tasks.”

Addressing implementation challenges

Mizuho OSI’s transition to a more efficient, modern customer journey — one that shortened the time for a prospect to become a customer — hasn’t come without challenges.

“Sales can’t do things on its own,” said Chris Lisle, director of North American sales at Mizuho OSI. “But the biggest hurdle was getting sales to adopt a new tool.”

Mizuho OSI ran into some hurdles during the implementation — mainly the time it takes to successfully change how the organization is run.

“We took time to identify the problems we wanted to solve — mainly that our customer journey was outdated,” said Kevin McCallum, director of marketing at Mizuho OSI. “We needed an aggressive timeline for our deployment, but however long you think it’ll take, it takes longer than that.”

But by aligning sales and marketing departments at the start of the project, Mizuho OSI was able to start modernizing its customer journey.

“Sales had full visibility with what we were doing and what we were working on and helped through the journey,” McCallum said.

Neukirch agreed, calling the alignment essential.

“To get that collaboration and see the departments come together, we were able to move faster,” Neukirch said.

And while the company is still aligning sales and marketing, it has seen anecdotal benefits of the change.

“What we did in the last nine months exceeded our expectations,” Neukirch said. “We were following that vision and executing on the deliverables and making sure we kept focus with how the customer could interact with us better and faster, so we’d have the opportunity to outpace the folks we’re in market against.”