Tag Archives: enterprises

Are SD-WAN security concerns warranted?

Are software-defined WAN security features sufficient to handle the demands of most enterprises? That’s the question addressed by author and engineer Christoph Jaggi, whose SD-WAN security concerns were cited in a recent blog post on IPSpace. The short answer? No — primarily because of the various connections that can take place over an SD-WAN deployment.

“The only common elements between the different SD-WAN offerings on the market are the separation of the data plane and the control plane and the takeover of the control plane by an SD-WAN controller,” Jaggi said. “When looking at an SD-WAN solution, it is part of the due diligence to look at the key management and the security architecture in detail. There are different approaches to implement network security, each having its own benefits and challenges.”

Organizations contemplating SD-WAN rollouts should determine whether prospective products meet important security thresholds. For example, products should support cryptographic protocols and algorithms and meet current key management criteria, Jaggi said.

Read what Jaggi had to say about the justification for SD-WAN security concerns.

Wireless ain’t nothing without the wire

You can have the fanciest access points and the flashiest management software, but without good and reliable wiring underpinning your wireless LAN, you’re not going to get very far. So said network engineer Lee Badman as he recounted a situation where a switch upgrade caused formerly reliable APs to lurch to a halt.

“I’ve long been a proponent of recognizing [unshielded twisted pair] as a vital component in the networking ecosystem,” Badman said. Flaky cable might still be sufficient in a Fast Ethernet world, but with multigig wireless now taking root, old cable can be the source of many problems, he said.

For Badman, the culprit was PoE-related; once the cable was re-terminated and retested, the APs again worked like a charm. A good lesson.

See what else Badman had to say about the issues that can plague a WLAN.

The long tail and DDoS attacks

Now there’s something new to worry about with distributed denial of service, or DDoS, attacks. Network engineer Russ White has examined another tactic, dubbed tail attacks, which can just as easily clog networking resources.

Unlike traditional DDoS or DoS attacks that overwhelm bandwidth or TCP sessions, tail attacks concentrate on resource pools, such as storage nodes. In this scenario, a targeted node might be struggling because of full queues, White said, and that can cause dependent nodes to shut down as well. These tail attacks don’t require a lot of traffic and, what’s more, are difficult to detect.
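
White’s description amounts to a queueing problem: a small, well-timed burst aimed at a shared resource pool can keep its queue full, so dependent requests time out even though the attacker’s average traffic stays low. The snippet below is a minimal, hypothetical simulation of that effect, with all rates and limits invented for illustration.

```python
# Minimal, hypothetical simulation of a tail attack: short bursts aimed at one
# shared storage node keep its queue full, so legitimate requests miss their
# deadline even though the attacker's average traffic is small. All numbers
# are invented for illustration.

SERVICE_RATE = 10    # requests the node completes per tick
QUEUE_LIMIT = 30     # queue depth at which new requests time out
LEGIT_RATE = 8       # legitimate requests arriving per tick (below capacity)
ATTACK_BURST = 40    # attack requests sent in one burst
BURST_EVERY = 20     # ticks between bursts, i.e. only 2 attack requests/tick on average

queue = 0
timed_out = total_legit = 0

for tick in range(1, 1001):
    if tick % BURST_EVERY == 0:        # attacker fires a short, well-timed burst
        queue += ATTACK_BURST

    for _ in range(LEGIT_RATE):        # legitimate traffic arrives
        total_legit += 1
        if queue >= QUEUE_LIMIT:
            timed_out += 1             # dependent service gives up on this request
        else:
            queue += 1

    queue = max(0, queue - SERVICE_RATE)   # node drains at its fixed service rate

print(f"average attack traffic: {ATTACK_BURST / BURST_EVERY:.1f} req/tick vs {LEGIT_RATE} legit req/tick")
print(f"legitimate requests timed out: {timed_out}/{total_legit} ({timed_out / total_legit:.0%})")
# Spreading the same attack traffic evenly (2 req/tick) would never fill the queue,
# which is part of why these attacks are hard to spot in aggregate traffic graphs.
```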

For now, tail attacks aren’t common; they require attackers to know a great deal about a particular network before they can be launched. That said, they are something network managers should be aware of, White added.

Read more about tail attacks.

Zoom audio feature could reduce PSTN costs for large enterprises

Zoom enhanced the SIP audio service it offers for large enterprises this week, while also rolling out two smaller audio features that should benefit all users of the web conferencing platform.

Zoom’s Session Initiation Protocol audio feature lets businesses establish an SIP connection between their IP telephony network and the Zoom cloud. That way, Zoom users can conduct Zoom audio conferencing over the SIP trunk rather than the public switched telephone network (PSTN).

Zoom said the SIP connection helps enterprises save money by reducing spending on PSTN services. The vendor is targeting the feature at companies that conduct more than 1 million minutes of audio conferencing every month and have significant deployments of IP telephony.

Zoom already has several customers using the service. Now, the vendor is giving companies more control over which calls get directed to the SIP trunk and which are handled by Zoom’s standard PSTN dial-in and call-out service.

This flexibility could benefit companies with multiple offices that rely on a mix of telephony endpoints. For example, a company could use the SIP trunk for calls at its headquarters in the United States, while directing calls from a remote office in Bulgaria to the PSTN.
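
As a rough sketch of what that routing control amounts to, the snippet below maps each office to an audio path. The site names and rule are hypothetical, not Zoom’s actual configuration model.

```python
# Hypothetical sketch of per-site audio routing: sites with on-premises IP
# telephony use the SIP trunk to Zoom's cloud, everything else falls back to
# the PSTN dial-in/call-out service. Site names and rules are invented.
SIP_ENABLED_SITES = {"us-headquarters", "chicago-office"}

def pick_audio_path(site: str) -> str:
    """Return which audio path a meeting join from this site should use."""
    return "sip-trunk" if site in SIP_ENABLED_SITES else "pstn"

for site in ("us-headquarters", "sofia-office"):
    print(site, "->", pick_audio_path(site))
```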

“I think this helps Zoom in its quest to win in the larger enterprise market,” said Irwin Lazar, analyst at Nemertes Research, based in Mokena, Ill. “It certainly helps them compete with the likes of Cisco and Microsoft that offer this kind of integration between their meeting apps and their on-premises phone platforms.”

The SIP audio connection is available to businesses subscribed to Zoom’s premium audio plan. Those customers commonly get billed per minute for the use of Zoom’s call-out, toll-free dial-in and premium toll dial-in services.

“SIP Connected Audio provides opportunity to avoid or minimize those fees in exchange for the costs of establishing and maintaining the SIP trunk plus a small flat-rate, per-user fee that Zoom charges for this service,” said Walt Anderson, senior product manager with Zoom.

Zoom highlights additional audio enhancements

Zoom announced two other new audio features this week for all customers. The vendor has both freemium and premium offerings.

Users can now join and start Zoom audio conferences using only phones. That is, a host no longer needs to open the Zoom desktop client or web application to start the meeting.

Zoom also updated its cloud infrastructure to prevent voicemail recordings from being added to a meeting when a participant doesn’t answer the phone. Now, Zoom will require users to press 1 to join the meeting if its technology detects that the phone rang for an unusually long or short time.

Founded in 2011, Zoom is facing increasing competition in the web conferencing market from Microsoft and Cisco, as well as other pure cloud startups, such as BlueJeans.

Facebook targets WhatsApp business messaging at large enterprises

Facebook’s WhatsApp is giving large enterprises tools for communicating with its more than 1 billion users. It’s the latest social messaging channel to enter the contact center, but analysts say most businesses don’t need to rush to adopt such platforms just yet.

Large businesses can use the WhatsApp Business API to integrate a WhatsApp messaging channel with contact center and customer relationship management software. Companies will then be able to create WhatsApp profiles and add WhatsApp click-to-chat buttons to websites or mobile apps.

WhatsApp plans to charge businesses for sending notifications to customers through the app, such as order receipts, shipping updates and boarding passes. The vendor will also make companies pay if they fail to respond to a customer inquiry within 24 hours.
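
For developers, the integration boils down to calling a messaging endpoint from the contact center or CRM side. The sketch below shows roughly what sending a shipping update might look like; the URL, template name and payload fields are illustrative assumptions, not the documented WhatsApp Business API schema.

```python
import requests

# Illustrative sketch only: sending a templated notification (e.g., a shipping
# update) through a WhatsApp Business API endpoint. The host, payload fields
# and auth scheme here are assumptions, not the documented schema.
API_URL = "https://whatsapp-business.example.com/v1/messages"  # hypothetical host
API_TOKEN = "replace-with-real-token"

def send_shipping_update(customer_number: str, order_id: str, eta: str) -> bool:
    payload = {
        "to": customer_number,
        "type": "template",                # notifications use pre-approved templates
        "template": {
            "name": "shipping_update",     # hypothetical template name
            "parameters": [order_id, eta],
        },
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    return resp.ok

if __name__ == "__main__":
    # Would need a real endpoint and token to succeed.
    send_shipping_update("+15551234567", "A-1042", "Thursday")
```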

The beta release of the API this month is the Facebook-owned platform’s latest initiative to connect customers and businesses. Earlier this year, the company launched WhatsApp Business, a separate app within which small businesses can create profiles and message with customers. 

WhatsApp is also deepening ties to the social platform of its parent company, Facebook. Businesses using the new API will be able to create Facebook ads that invite customers to message them through WhatsApp.

Facebook would be wise to consider integrating WhatsApp business messaging with its enterprise intranet and collaboration platform, Workplace by Facebook, said Phil Edholm, president of PKE Consulting LLC. That would make the offering more unique, he said.

“From just a pure channel perspective, having WhatsApp as a channel from [consumer] to [business] is interesting, but it’s just another channel,” Edholm said.

WhatsApp business messaging vs. Apple Business Chat

WhatsApp business messaging will rival Apple Business Chat, which lets customers and businesses interact through iMessage. WhatsApp is particularly popular among Android users.

However, WhatsApp’s offering doesn’t seem to have as many capabilities as Apple’s, nor does WhatsApp seem to have as clear a vision as Apple, said Michael Finneran, president of the advisory firm DBrn Associates Inc. in Hewlett Neck, N.Y.

Unlike WhatsApp, which must be downloaded from an app store, Apple Business Chat is available natively to all iOS users and lets customers remain anonymous when contacting a business. The iMessage interface also offers more productivity and interactivity features than WhatsApp business messaging, such as the ability to place orders.

Still, businesses have some time before they need to adopt either platform, Finneran said. Use of Apple Business Chat has so far been limited to a handful of well-known banks, retailers, hotels and internet businesses.

“Unless digital engagement is a key attribute of your business, you can probably wait on Apple Business Chat,” Finneran said. “Everyone can wait on WhatsApp.”

When businesses do decide to adopt additional social messaging channels, they should seek help from communications platform as a service (CPaaS) vendors such as Twilio, Nexmo and Smooch, said Tsahi Levent-Levi, an independent analyst.  

Businesses shouldn’t attempt to juggle too many communications channels on their own, mainly because the APIs for many of these platforms are new and could change, Levent-Levi said.

“You can’t rely on a single channel because the APIs might change, and the way you interact with customers might change,” Levent-Levi said. That necessitates an omnichannel approach, he said. “And you can’t do it on your own, even if you’re big.”

Bugcrowd CTO explains crowdsourced security benefits and challenges

Crowdsourced security can provide enormous value to enterprises today, according to Casey Ellis, but the model isn’t without its challenges.

In this Q&A, Ellis, chairman, founder and CTO of San Francisco-based crowdsourced security testing platform Bugcrowd Inc., talks about the growth of bug bounties, the importance of vulnerability research and the evolution of his company’s platform. According to the Bugcrowd “2018 State of Bug Bounty Report,” reported vulnerabilities have increased 21% to more than 37,000 submissions in the last year, while bug bounty payouts have risen 36%.

In part one of this interview, Ellis expressed his concerns that the good faith that exists between security researchers and enterprises is eroding and discussed the need for better vulnerability disclosure policies and frameworks. In part two, he discusses the benefits of crowdsourced security testing, as well as some of the challenges, including responsible disclosure deadlines and the accurate vetting of thousands of submissions.

Editor’s note: This interview has been edited for clarity and length.

When it comes to responsible vulnerability disclosure, do you think companies are at a point now where they generally accept the 90-day disclosure period?

Casey Ellis: No. No, I think technology companies are, but it’s very easy working in technology to see adoption by technology companies and assume that it’s normal now. I see a lot of people do that and I think it’s unwise, frankly.

I think that’s where we’ll end up eventually, and I think we’re moving toward that type of thing. But there are caveats in terms of, for example, complex supply chain products or vehicles or medical devices — the stuff that takes longer than 90 days to refresh and test, patch, and deploy out to the wild. The market is not used to that kind of pressure on public disclosure yet, but I think the pressure is a good thing.

The bigger problem is in terms of general vulnerability disclosure; that’s not accepted outside of the tech sector yet — at all, frankly.

There’s been a lot of talk about security automation and machine learning at RSA Conference again this year. Where do you see that going?

Ellis: It depends on your definition of automation at that point. Is it automation of decision-making or is it automation of leverage and reaching that decision?


Using Bugcrowd as an example, we’re heavy users of machine [learning] and automation within our platform, but we’re not doing it to replace the hackers. We’re doing it to understand which of the conversations we’re having as these submissions come in are most important. And we’re trying to get to the point where we can say, ‘Okay, this bug is less likely to be important than this other bug. We should focus on that first.’

For the customers, they just want to know what they need to go and fix. But we have to prioritize the submissions. We have to sit in front of that customer and have these conversations at scale with everyone who’s submitting, regardless of whether they’re very, very valuable in terms of the information or they’re getting points for enthusiasm but not for usefulness. It’s actually a fun and a valuable problem to solve, but it’s difficult.

How do you prioritize and rank all of the submissions you receive? What’s that process like?

Ellis: There’s a bunch of different things because the bug bounty economic model is this: The first person to find each unique issue is the one who gets rewarded for it. And then, the more critical it is, the more they get paid. And this is what we’ve been doing since day one because the premise was these are two groups of people that historically suck at talking to each other.

So we said we’re going to need to pull together a human team to help out, and then what we’ll do is we’ll learn from that team to build the product and make the product more effective as we go. It’s a learning loop that we’ve got internally, as well. And what they’re doing is, basically, understanding what’s a duplicate [submission], what’s out of scope and things like that. There are simple things that we can do from a filtering standpoint.

Duplicates get interesting because you have pattern matching and Bayesian analysis and different things like that to understand what the likelihood of a duplicate is. Those are the known things. Then there’s the heavy stuff — the critical importance, wake up the engineering team stuff.

There’s also a bunch of stuff we do in terms of analyzing the vulnerability against the corpus [of known vulnerabilities] to understand what that is, as well as who the submitter is. Because if they’re a notorious badass who comes in and destroys stuff and has a really high signal-to-noise ratio then, yes, that’s probably something that we should pay attention to.

There’s a bunch of really simple stuff or comparatively simple stuff that we can do, but then there’s a bunch of much more nuanced, complicated stuff that we have to work out. And then we’ve got the human at the end of [the process] because we can’t afford to get it wrong. We can’t say no to something that’s actually a yes. The whole thing gets basically proofed, and then those learnings go back into the system and it improves over time.
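
Bugcrowd hasn’t published its triage models, so the following is only a sketch of the kind of scoring Ellis describes: weigh severity, discount likely duplicates and boost submitters with a strong signal-to-noise history. The fields and weights are invented.

```python
from dataclasses import dataclass

# Sketch of submission triage in the spirit Ellis describes: weigh severity,
# discount likely duplicates, boost researchers with a strong track record.
# Fields and weights are invented, not Bugcrowd's actual model.

@dataclass
class Submission:
    title: str
    severity: int                 # 1 (low) .. 5 (critical), as rated at intake
    duplicate_likelihood: float   # 0..1, e.g. from pattern matching / Bayesian analysis
    submitter_signal: float       # 0..1 historical valid-to-noise ratio for this researcher

def triage_score(sub: Submission) -> float:
    """Higher score means look at this one first; humans still make the final call."""
    base = sub.severity / 5.0
    dedup_penalty = 1.0 - sub.duplicate_likelihood
    reputation_boost = 0.5 + 0.5 * sub.submitter_signal
    return base * dedup_penalty * reputation_boost

queue = [
    Submission("RCE in file upload", severity=5, duplicate_likelihood=0.1, submitter_signal=0.9),
    Submission("Raw Nessus scan output", severity=2, duplicate_likelihood=0.8, submitter_signal=0.2),
    Submission("Stored XSS in profile page", severity=3, duplicate_likelihood=0.3, submitter_signal=0.6),
]

for sub in sorted(queue, key=triage_score, reverse=True):
    print(f"{triage_score(sub):.2f}  {sub.title}")
```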

Do you receive a lot of submissions that you look at and say, ‘Oh, this is nonsense, someone’s trying to mess with us and throw the process off’?

Ellis: Yes. There’s a lot of that. As this has grown, there are a bunch of people that are joining in for the first time, and some of them are actively trolling. But then, for every one of those, there are 10 that are just as noisy, but it’s because they think they’re doing the right thing even though they’re not.

If someone runs Nessus and then uploads a scan and says, ‘That’s a bug!’ then what we do at that point is we say, ‘No, it’s not. By the way, here are some different communities and education initiatives that we’ve got.’

We try to train them to see if they can get better because maybe they can. And if they’ve initiated that contact with us, then they’re clearly interested and enthusiastic, which is a great starting point because just because they don’t know how to be useful right now doesn’t mean they can’t be in the future. We give the benefit of the doubt there, but obviously, we have to protect the customer from having to deal with all of that noise.

When it comes to that noise in crowdsourced bug hunting, do you think those people are looking more at the reward money or the reputation boost?

Ellis: It’s usually both. Money is definitely a factor in bug bounties, but reputation is a huge factor, too. And it goes in two directions.

There’s reputation for the sake of ego, and they’re the ones that can get difficult pretty quickly, but then there’s also reputation for the sake of career development. And that’s something that we actually want to help them with. That’s been an initiative that we’ve had from day one, and a bunch of our customers actually have people in their security teams that they hired off the platform.

Jason Haddix [Bugcrowd vice president of trust and security] was number one on the platform before we hired him. We think this is actually a good thing in terms of helping address the labor shortage.

But, to your point, if someone comes in and says, ‘Oh, this is a quick way to get a high-paying career in cybersecurity,’ then we have to obviously temper that. And it does happen.

Last question: What activity on your platform has stood out to you lately?

Ellis: There’s a real shift toward people scaling up in IoT. We have more customers coming onboard to test IoT. I think the issue of IoT security and awareness around the fact that it’s something that should actually be addressed is in a far better state now than it was when IoT first kicked off years ago.

And the same thing that happened in web and mobile and automotive is happening in IoT. With IoT, it was ‘We don’t have the people [for security testing]. Okay, where are we going to get them?’ I think the crowd is reacting to that opportunity now and starting to dig into the testing for IoT.

And here’s the thing with IoT security: For starters, bugs that are silicon level or at a hardcoded level are probably out there, but the cost to find them and the value of having them [reported] hasn’t justified the effort being put in yet.

That’s usually not what people are talking about when they’re talking about IoT bugs. It’s usually either bugs that are CVEs [Common Vulnerabilities and Exposures] in the supply chain software that forms the operating system or bugs that are in the bespoke stuff that sits on top. And, usually, both of those things can be flushed and changed.

We’re not at the point where you’ve got a more common issue and you’re not able to change it ever. I assume that will happen at some point but, hopefully, by the time we get there, people are going to be thinking about design with security more in mind in the first place, and all that older stuff will be at end-of-life anyway.

Insurer accelerates DevOps test data refreshes with Actifio

Large enterprises lug data-heavy legacy apps with them to DevOps. To keep rapid development on track, teams must take fresh approaches to IT operations at the deepest levels of infrastructure.

For ActiveHealth Management Inc., a New York-based subsidiary of Aetna International, a large health insurer in Hartford, Conn., that problematic app was a 150 TB Oracle database deployed on a six-node Oracle Real Application Cluster to produce analytics reports on member data. The amount of data on such a complex server infrastructure presented a major obstacle to the company’s planned implementation of a continuous DevOps test process in early 2017. A manual refresh of database test data through a traditional backup copy of the cluster would require an estimated minimum of 350 hours of work spread over 30 days.

“Our QA team wanted live real-time data in our lower test/dev environments,” said Conrad Meneide, then the vice president of infrastructure at ActiveHealth, now executive director of affiliate infrastructure services at Aetna. “But a 150 terabyte production database takes an insurmountable amount of time to copy, and importing a full copy of that data to a test environment would require a costly storage footprint.”

Even if ActiveHealth could spare the disk space and time to generate DevOps test data, the performance requirements for the production database cluster prohibited such a backup during business hours.

Actifio CDS UI shows DevOps test copy process

DevOps test data copy bake-off favored Actifio

Parent company Aetna already had a relationship with a DevOps test data management vendor, but the ActiveHealth team wasn’t convinced that product could make on-demand clones of its large, performance-sensitive database. The team conducted a five-week bake-off in early 2017 between that incumbent tool and a product called Copy Data Storage (CDS) from Actifio Inc.

“The incumbent product produced some improvement over the manual process, but Actifio gave us five times the performance gain of that competitor’s product,” Meneide said. He declined to name the incumbent vendor, as its software remains in use at Aetna.


Competitors to Actifio in copy data management include Catalogic Software, Cohesity Inc., Commvault Systems Inc., Delphix Corp. and Rubrik Inc. These vendors are able to make fast copies of data stores with a small footprint. Another set of vendors specializes in test data management, which generates test reports and includes data masking and encryption features out of the box in addition to fast-copy mechanisms. Test data management vendors include CA, Delphix, HP and Informatica.

Meneide attributed the difference between Actifio and the incumbent vendor his team evaluated to the products’ architectures. The incumbent product was installed on a VM and addressed back-end storage through the company’s IP network via the Network File System (NFS) protocol, while Actifio CDS was packaged with Fibre Channel storage area network hardware on an appliance.

“This meant we didn’t have to reinvest in a faster Ethernet network for NFS, or worry about security concerns around NFS over the main network,” Meneide said.

DevOps test performance removes release roadblocks

Actifio CDS integrates with ActiveHealth’s Jenkins CI/CD pipeline through a RESTful API, and developers generate clones of the data for DevOps tests on-demand through the Jenkins interface. The API meant ActiveHealth could also connect a homegrown data masking and encryption utility, while the incumbent vendor’s software would have required a separately licensed encryption engine.
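
A pipeline stage that requests a fresh clone before tests run might look roughly like the script below, which Jenkins could invoke as a build step. The host, endpoint paths and fields are hypothetical placeholders, not Actifio’s documented API.

```python
import sys
import requests

# Sketch of the kind of call a Jenkins pipeline stage might make to request a
# fresh database clone before tests run. The host, paths and fields are
# hypothetical placeholders, not Actifio's documented API.
ACTIFIO_API = "https://actifio.example.internal/api"
API_KEY = "replace-with-credential-from-jenkins"

def request_clone(source_db: str, target_env: str, mask_data: bool = True) -> str:
    """Ask the copy-data appliance for an on-demand clone and return a job ID."""
    resp = requests.post(
        f"{ACTIFIO_API}/clones",
        json={"source": source_db, "target": target_env, "mask": mask_data},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]

if __name__ == "__main__":
    # e.g. invoked from a Jenkins stage: python request_clone.py analytics-prod qa-east
    source, target = sys.argv[1], sys.argv[2]
    print("clone job started:", request_clone(source, target))
```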

With on-demand DevOps test data, ActiveHealth established a continuous integration and delivery workflow in its dev/test environments, which resulted in releases to production every two weeks.

“This was possible before Actifio, but not with fresh test data — everything almost had to stop for a data refresh, and data refresh requests came in ad hoc,” Meneide said.

Actifio clones the database data using pointers to a deduplicated golden image, which means the DevOps test data environment also takes up only a fraction — some 20% to 30% — of the storage space compared to the production environment.
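
The space savings come from the clone referencing blocks in the shared golden image and storing only the blocks it changes. The toy copy-on-write sketch below illustrates that idea; it is not Actifio’s implementation.

```python
# Toy copy-on-write sketch of why pointer-based clones stay small: a clone
# starts as references into a shared golden image and only stores the blocks
# it changes. This illustrates the idea, not Actifio's implementation.

golden_image = {block_id: f"data-{block_id}" for block_id in range(1000)}

class Clone:
    def __init__(self, base: dict):
        self.base = base          # shared, read-only golden image
        self.overrides = {}       # only blocks this clone has modified

    def read(self, block_id: int) -> str:
        return self.overrides.get(block_id, self.base[block_id])

    def write(self, block_id: int, data: str) -> None:
        self.overrides[block_id] = data   # copy-on-write: store the new block only

test_copy = Clone(golden_image)
for block_id in range(200):               # a test run touches 20% of the blocks
    test_copy.write(block_id, "masked-test-data")

print(f"clone stores {len(test_copy.overrides)} of {len(golden_image)} blocks "
      f"({len(test_copy.overrides) / len(golden_image):.0%} of the production footprint)")
```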

In his new position as an executive director at Aetna, Meneide said he will evaluate wider use of Actifio CDS in other subsidiaries at the company, such as its Medicaid claims business, as well as with other database types, such as Microsoft SQL Server. As the insurer moves some workloads to public cloud service providers, it will explore whether Actifio CDS can help quickly clone data to send with apps into those new environments.

However, as in any large enterprise, a number of DevOps and IT initiatives compete for attention and a share of the budget. At Aetna, a broader Actifio rollout must compete with an IT to-do list that includes a DevSecOps transformation, and the adoption of containers and Kubernetes container orchestration in the company’s private cloud.

“We’re still exploring and understanding the use cases, and where compliance dictates we retain systems of record,” Meneide said. “Actifio also has potential uses for disaster recovery for us.”

As AI identity management takes shape, are enterprises ready?

BOSTON — Enterprises may soon find themselves replacing their usernames and passwords with algorithms.

At the Identiverse 2018 conference last month, a chorus of vendors, infosec experts and keynote speakers discussed how machine learning and artificial intelligence are changing the identity and access management (IAM) space. Specifically, IAM professionals promoted the concept of AI identity management, where vulnerable password systems are replaced by systems that rely instead on biometrics and behavioral security to authenticate users. And, as the argument goes, humans won’t be capable of effectively analyzing the growing number of authentication factors, which can include everything from login times and download activity to mouse movements and keystroke patterns. 

Sarah Squire, senior technical architect at Ping Identity, believes that use of machine learning and AI for authentication and identity management will only increase. “There’s so much behavioral data that we’ll need AI to help look at all of the authentication factors,” she told SearchSecurity, adding that such technology is likely more secure than relying solely on traditional password systems.

During his Identiverse keynote, Andrew McAfee, principal research scientist at the Massachusetts Institute of Technology, discussed how technology, and AI in particular, is changing the rules of business and replacing executive “gut decisions” with data-intensive predictions and determinations. “As we rewrite the business playbook, we need to keep in mind that machines are now demonstrating excellent judgment over and over and over,” he said.

AI identity management in practice

Some vendors have already deployed AI and machine learning for IAM. For example, cybersecurity startup Elastic Beam, which was acquired by Ping last month, uses AI-driven analysis to monitor API activity and potentially block APIs if malicious activity is detected. Bernard Harguindeguy, founder of Elastic Beam and Ping’s new senior vice president of intelligence, said AI is uniquely suited for API security because there are simply too many APIs, too many connections and too wide an array of activity to monitor for human admins to keep up with.

There are other applications for AI identity management and access control. Andras Cser, vice president and principal analyst for security and risk professionals at Forrester Research, said he sees several ways machine learning and AI are being used in the IAM space. For example, privileged identity management can use algorithms to analyze activity and usage patterns to ensure the individuals using the privileged accounts aren’t malicious actors.

“You’re looking at things like, how has a system administrator been doing X, Y and Z, and why? If this admin has been using these three things and suddenly he’s looking at 15 other things, then why does he need that?” Cser said.

In addition, Cser said machine learning and AI can be used for conditional access and authorization. “Adaptive or risk-based authorization tend to depend on machine learning to a great degree,” he said. “For example, we see that you have access to these 10 resources, but you need to be in your office during normal business hours to access them. Or if you’ve been misusing these resources across these three applications, then it will ratchet back your entitlements at least temporarily and grant you read-only access or require manager approval.”
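
A minimal sketch of that kind of adaptive rule, assuming invented attributes and conditions, might look like this:

```python
from datetime import datetime

# Sketch of adaptive, risk-based authorization along the lines Cser describes:
# full access only from the expected location during business hours, otherwise
# entitlements are ratcheted back. Attributes and rules are invented examples.

def decide_access(user: dict, resource: str, now: datetime) -> str:
    in_office = user["location"] == user["home_office"]
    business_hours = 9 <= now.hour < 18 and now.weekday() < 5
    flagged_for_misuse = resource in user.get("misused_resources", set())

    if flagged_for_misuse:
        return "read-only pending manager approval"
    if in_office and business_hours:
        return "full access"
    return "read-only"

alice = {"location": "boston-hq", "home_office": "boston-hq", "misused_resources": set()}
print(decide_access(alice, "payroll-db", datetime(2018, 7, 9, 14, 30)))   # full access
print(decide_access(alice, "payroll-db", datetime(2018, 7, 8, 2, 0)))     # read-only (off-hours)
```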

Algorithms are being used not just for managing identities but creating them as well. During his Identiverse keynote, Jonathan Zittrain, George Bemis professor of international law at Harvard Law School, discussed how companies are using data to create “derived identities” of consumers and users. “Artificial intelligence is playing a role in this in a way that maybe it wasn’t just a few years ago,” he said.

Zittrain said he had a “vague sense of unease” around machine learning being used to target individuals via their derived identities and market suggested products. We don’t know what data is being used, he said, but we know there is a lot of it, and the identities that are created aren’t always accurate. Zittrain joked that when he was in England a while ago, he was looking at the Lego Creator activity book on Amazon, which was offered up as the “perfect partner” to a book called American Jihad. Other times, he said, the technology creates anxiety when people discover the derived identities are too accurate.

“You realize the way these machine learning technologies work is by really being effective at finding correlations where our own instincts would tell us none exist,” Zittrain said. “And yet, they can look over every rock to find one.”

Potential issues with AI identity management

Experts say allowing AI systems to automatically authenticate or block users, applications and APIs with no human oversight comes with some risk, as algorithms are never 100% accurate. Squire said there could be a trial-and-error period, but added there are ways to mitigate those errors. For example, she said AI identity management shouldn’t treat all applications and systems the same, and she suggested assigning risk levels to each resource or asset that requires authentication.

“It depends on what the user is doing,” Squire said. “If you’re doing something that has a low risk score, then you don’t need to automatically block access to it. But if something has a high risk score, and the authentication factors don’t meet the requirement, then it can automatically block access.”
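
As a minimal sketch of Squire’s suggestion, the snippet below assigns each asset its own risk threshold and compares it against a score built from behavioral signals. The weights and thresholds are illustrative assumptions.

```python
# Minimal sketch of Squire's suggestion: each resource carries its own risk
# threshold, and a behavioral score decides whether to allow, step up or block.
# Factor weights and thresholds are illustrative assumptions.

RESOURCE_RISK_THRESHOLD = {
    "cafeteria-menu": 0.9,     # low-value asset tolerates a high risk score
    "hr-records": 0.4,
    "payment-system": 0.2,     # high-value asset tolerates very little risk
}

def risk_score(signals: dict) -> float:
    """Combine behavioral anomaly signals (each 0..1) into one score."""
    weights = {"login_time_anomaly": 0.3, "keystroke_anomaly": 0.4, "new_device": 0.3}
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def authenticate(resource: str, signals: dict) -> str:
    score = risk_score(signals)
    threshold = RESOURCE_RISK_THRESHOLD[resource]
    if score <= threshold:
        return "allow"
    if score <= threshold + 0.3:
        return "step-up authentication"
    return "block"

signals = {"login_time_anomaly": 0.7, "keystroke_anomaly": 0.2, "new_device": 1.0}
for resource in RESOURCE_RISK_THRESHOLD:
    print(resource, "->", authenticate(resource, signals))
```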

Squire said she doesn’t expect AI identity management to remove the need for human infosec professionals. In fact, it may require even more. “Using AI is going to allow us to do our jobs in a smarter way,” she said. “We’ll still need humans in the loop to tell the AI to shut up and provide context for the authentication data.”

Cser said the success of AI-driven identity management and access control will depend on a few critical factors. “The quality and reliability of the algorithms are important,” he said. “How is the model governed? There’s always a model governance aspect. There should be some kind of mathematically defensible, formalized governance method to ensure you’re not creating regression.”

Explainability is also important, he said. Vendor technology should have some type of “explanation artifacts” that clarify why access has been granted or rejected, what factors were used, how those factors were weighted and other vital details about the process. If IAM systems or services don’t have those artifacts, then they risk becoming black boxes that human infosec professionals can’t manage or trust.

Regardless of potential risks, experts at Identiverse generally agreed that machine learning and AI are proving their effectiveness and expect an increasing amount of work to be delegated to them. “The optimal, smart division of labor between what we do — minds — and [what] machines do is shifting very, very quickly,” McAfee said during his keynote. “Very often it’s shifting in the direction of the machines. That doesn’t mean that all of us have nothing left to offer, that’s not the case at all. It does mean that we’d better re-examine some of our fundamental assumptions about what we’re better at than the machines because of the judgment and the other capabilities that the machines are demonstrating now.”

Container security emerges in IT products enterprises know and trust

Container security has arrived from established IT vendors that enterprises know and trust, but startups that were first to market still have a lead, with support for cloud-native tech.

Managed security SaaS provider Alert Logic this week became the latest major vendor to throw its hat into the container security ring, a month after cloud security and compliance vendor Qualys added container security support to its DevSecOps tool.

Container security monitoring is now a part of Alert Logic’s Cloud Defender and Threat Manager intrusion detection systems (IDSes). Software agents deployed on each host inside a privileged container monitor network traffic for threats, both between containers within that host and between hosts. A web application firewall blocks suspicious traffic Threat Manager finds between containers, and Threat Manager offers remediation recommendations to address any risks that remain in the infrastructure.

Accesso Technology Group bought into Alert Logic’s IDS products in January 2018 because they support VM-based and bare-metal infrastructure, and planned container support was a bonus.

“They gave us a central location to monitor our physical data centers, remote offices and multiple public clouds,” said Will DeMar, director of information security at Accesso, a ticketing and e-commerce service provider in Lake Mary, Fla.

DeMar beta-tested the Threat Manager features and has already deployed them with production Kubernetes clusters in Google Kubernetes Engine and AWS Elastic Compute Cloud environments, though Alert Logic’s official support for its initial release is limited to AWS.


“We have [AWS] CloudFormation and [HashiCorp] Terraform scripts that put Alert Logic onto every new Kubernetes host, which gives us immediate visibility into intrusion and configuration issues,” DeMar said. “It’s critical to our DevOps process.”

A centralized view of IT security in multiple environments and “one throat to choke” in a single vendor appeals to DeMar, but he hasn’t ruled out tools from Alert Logic’s startup competitors, such as Aqua Security, NeuVector and Twistlock, which he sees as complementary to Alert Logic’s product.

“Aqua and Twistlock are more container security-focused than intrusion detection-focused,” DeMar said. “They help you check the configuration on your container before you release it to the host; Alert Logic doesn’t help you there.”

Container security competition escalates

Alert Logic officials, however, do see Aqua Security, Twistlock and their ilk as competitors, and the container image scanning ability DeMar referred to is on the company’s roadmap for Threat Manager in the next nine months. Securing Docker containers involves multiple layers of infrastructure, and Alert Logic positions its container security approach as network-based IDS, as opposed to host-based IDS. The company said network-based IDS more deeply inspects real-time network traffic at the packet level, whereas startups’ products examine only where that network traffic goes between hosts.

Alert Logic’s Threat Manager offers container security remediation recommendations.

Aqua Security co-founder and CTO Amir Jerbi, of course, sees things differently.

“Traditional security tools are trying to shift into containers and still talk in traditional terms about the host and network,” Jerbi said. “Container security companies like ours don’t distinguish between network, host and other levels of access — we protect the container, through a mesh of multiple disciplines.”

That’s the major distinction for enterprise end users: whether they prefer container security baked into broader, traditional products or as the sole focus of their vendor’s expertise. Aqua Security version 3.2, also released this week, added support for container host monitoring where thin OSes are used, but the tool isn’t a good fit in VM or bare-metal environments where containers aren’t present, Jerbi said.

Aqua Security’s tighter focus means it has a head start on the latest and greatest container security features. For example, version 3.2 includes the ability to customize and build a whitelist of system calls containers make, which is still on the roadmap for Alert Logic. Version 3.2 also adds support for static AWS Lambda function monitoring, with real-time Lambda security monitoring already on the docket. Aqua Security was AWS’ partner for container security with Fargate, while Alert Logic must still catch up there as well.
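
System-call whitelisting of that sort is conceptually simple: learn what a container normally calls, then flag anything outside the list at runtime. The sketch below illustrates the general idea only; it is not how Aqua implements the feature.

```python
# Generic illustration of system-call whitelisting for a container workload:
# build a whitelist from calls observed during a learning run, then flag
# anything outside it at runtime. This shows the idea, not Aqua's implementation.

OBSERVED_DURING_LEARNING = [
    "read", "write", "openat", "close", "futex", "epoll_wait", "accept4", "sendto",
]

whitelist = set(OBSERVED_DURING_LEARNING)

def check_syscalls(runtime_calls):
    """Return the calls in a runtime trace that fall outside the whitelist."""
    return sorted({call for call in runtime_calls if call not in whitelist})

runtime_trace = ["read", "write", "ptrace", "openat", "mount"]
violations = check_syscalls(runtime_trace)
if violations:
    print("alert: unexpected system calls:", ", ".join(violations))
```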

Industry watchers expect this dynamic to continue for the rest of 2018 and predict that incumbent vendors will snap up startups in an effort to get ahead of the curve.

“Everyone sees the same hill now, but they approach it from different viewpoints, more aligned with developers or more aligned with IT operations,” said Fernando Montenegro, analyst with 451 Research. “As the battle lines become better defined, consolidation among vendors is still a possibility, to strengthen the operations approach where vendors are already focused on developers and vice versa.”

Building a data science pipeline: Benefits, cautions

Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline — a sequence of actions for processing data — will help companies be more competitive in a digital, fast-moving economy. 

Before CIOs take this approach, however, it’s important to consider some of the key differences between data science development workflows and traditional application development workflows.

Data science development pipelines used for building predictive and data science models are inherently experimental and don’t always pan out in the same way as other software development processes, such as Agile and DevOps. Because data science models break and lose accuracy in different ways than traditional IT apps do, a data science pipeline needs to be scrutinized to assure the model reflects what the business is hoping to achieve.

At the recent Rev Data Science Leaders Summit in San Francisco, leading experts explored some of these important distinctions, and elaborated on ways that IT leaders can responsibly implement a data science pipeline. Most significantly, data science development pipelines need accountability, transparency and auditability. In addition, CIOs need to implement mechanisms for addressing the degradation of a model over time, or “model drift.” Having the right teams in place in the data science pipeline is also critical: Data science generalists work best in the early stages, while specialists add value to more mature data science processes.

Data science at Moody’s

Jacob Grotta, managing director, Moody's Analytics

CIOs might want to take a cue from Moody’s, the financial analytics giant, which was an early pioneer in using predictive modeling to assess the risks of bonds and investment portfolios. Jacob Grotta, managing director at Moody’s Analytics, said the company has streamlined the data science pipeline it uses to create models in order to be able to quickly adapt to changing business and economic conditions.

“As soon as a new model is built, it is at its peak performance, and over time, they get worse,” Grotta said. Declining model performance can have significant impacts. For example, in the finance industry, a model that doesn’t accurately predict mortgage default rates puts a bank in jeopardy. 

Watch out for assumptions

Grotta said it is important to keep in mind that data science models are created by and represent the assumptions of the data scientists behind them. Before the 2008 financial crisis, a firm approached Grotta with a new model for predicting the value of mortgage-backed derivatives, he said. When he asked what would happen if the prices of houses went down, the firm responded that the model predicted the market would be fine. But it didn’t have any data to support this. Mistakes like these cost the economy almost $14 trillion by some estimates.

The expectation among companies often is that someone understands what the model does and its inherent risks. But these unverified assumptions can create blind spots for even the most accurate models. Grotta said it is a good practice to create lines of defense against these sorts of blind spots.

The first line of defense is to encourage the data modelers to be honest about what they do and don’t know and to be clear on the questions they are being asked to solve. “It is not an easy thing for people to do,” Grotta said.

A second line of defense is verification and validation. Model verification involves checking to see that someone implemented the model correctly, and whether mistakes were made while coding it. Model validation, in contrast, is an independent challenge process to help a person developing a model to identify what assumptions went into the data. Ultimately, Grotta said, the only way to know if the modeler’s assumptions are accurate or not is to wait for the future.

A third line of defense is an internal audit or governance process. This involves making the results of these models explainable to front-line business managers. Grotta said he was recently working with a bank that protested that its managers would not use a model if they didn’t understand what was driving its results. But he said the managers were right to push back. Having a governance process and ensuring information flows up and down the organization is extremely important, Grotta said.

Baking in accountability

Models degrade or “drift” over time, which is part of the reason organizations need to streamline their model development processes. It can take years to craft a new model. “By that time, you might have to go back and rebuild it,” Grotta said. Critical models must be revalidated every year.

To address this challenge, CIOs should think about creating a data science pipeline with an auditable, repeatable and transparent process. This promises to allow organizations to bring the same kind of iterative agility to model development that Agile and DevOps have brought to software development.

Transparent means that upstream and downstream stakeholders understand what drives the model. Repeatable means that someone else can reproduce the process used to create it. Auditable means there is a program in place for managing the process, taking in new information and getting the model through monitoring. There are varying levels of this kind of agility today, but Grotta believes it is important for organizations to make it easy to update data science models in order to stay competitive.

How to keep up with model drift

Nick Elprin, CEO and co-founder of Domino Data Lab, a data science platform vendor, agreed that model drift is a problem that must be addressed head on when building a data science development pipeline. In some cases, the drift might be due to changes in the environment, like changing customer preferences or behavior. In other cases, drift could be caused by more adversarial factors. For example, criminals might adopt new strategies for defeating a new fraud detection model.

Nick Elprin, CEO and co-founder, Domino Data Lab

In order to keep up with this drift, CIOs need to include a process for monitoring the effectiveness of their data models over time and establishing thresholds for replacing these models when performance degrades.

With traditional software monitoring, IT service management teams track metrics related to CPU, network and memory usage. With data science, CIOs need to capture metrics related to the accuracy of model results. “Software for [data science] production models needs to look at the output they are getting from those models, and if drift has occurred, that should raise an alarm to retrain it,” Elprin said.
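
A minimal sketch of such a monitor, assuming labeled outcomes eventually arrive so accuracy can be measured, might look like the following; the window size and threshold are arbitrary choices.

```python
import random
from collections import deque

# Minimal sketch of production model monitoring in the spirit Elprin describes:
# track rolling accuracy on recent predictions and raise a retraining alarm when
# it falls below a threshold. Window size and threshold are arbitrary assumptions.

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05, window: int = 500):
        self.threshold = baseline_accuracy - tolerance
        self.outcomes = deque(maxlen=window)    # 1 = correct prediction, 0 = wrong

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def drift_detected(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                        # wait until the window is full
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = DriftMonitor(baseline_accuracy=0.92)

# Stand-in for a production feed of predictions plus eventual ground truth.
random.seed(0)
for i in range(2000):
    actual = random.random() < 0.5
    accuracy_now = 0.92 if i < 1000 else 0.80   # the environment shifts halfway through
    prediction = actual if random.random() < accuracy_now else (not actual)
    monitor.record(prediction, actual)
    if monitor.drift_detected():
        print(f"drift detected after {i + 1} predictions: trigger retraining")
        break
```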

Fashion-forward data science

Stitch Fix, a personal shopping service, credits its data science pipeline with allowing it to sell clothes online at full price. Using data science in various ways helps the company find new ways to add value against deep-discount giants like Amazon, said Eric Colson, chief algorithms officer at Stitch Fix.

Eric Colson, chief algorithms officer, Stitch Fix

For example, the data science team has used natural language processing to improve its recommendation engines and buy inventory. Stitch Fix also uses genetic algorithms — algorithms that are designed to mimic evolution and iteratively select the best results following a set of randomized changes. These are used to streamline the process for designing clothes, coming up with countless iterations: Fashion designers then vet the designs.
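
Stitch Fix hasn’t published its algorithms, so the snippet below is only a generic illustration of the evolutionary loop described here: score candidate designs, keep the best and mutate them to form the next generation.

```python
import random

# Generic illustration of the evolutionary loop described above: score candidate
# "designs", keep the best, apply randomized changes, repeat. The design encoding
# and fitness function are toy assumptions, not Stitch Fix's algorithms.
random.seed(7)

TARGET = [0.2, 0.8, 0.5, 0.9, 0.1]   # stand-in for attributes clients respond to

def fitness(design):
    """Higher is better: negative distance from the (unknown in practice) ideal."""
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, rate=0.1):
    return [min(1.0, max(0.0, d + random.uniform(-rate, rate))) for d in design]

population = [[random.random() for _ in TARGET] for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                      # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print("best candidate:", [round(x, 2) for x in best], "fitness:", round(fitness(best), 4))
# In practice, the most promising candidates would go to human designers for vetting.
```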

This kind of digital innovation, however, was only possible, he said, because the company created an efficient data science pipeline. He added that it was also critical that the data science team is considered a top-level department at Stitch Fix and reports directly to the CEO.

Specialists or generalists?

One important consideration for CIOs in constructing the data science development pipeline is whether to recruit data science specialists or generalists. Specialists are good at optimizing one step in a complex data science pipeline. Generalists can execute all the different tasks in a data science pipeline. In the early stages of a data science initiative, generalists can adapt to changes in the workflow more easily, Colson said.

Some of these different tasks include feature engineering; model training; extract, transform and load (ETL) data processing; API integration; and application development. It is tempting to staff each of these tasks with specialists to improve individual performance. “This may be true of assembly lines, but with data science, you don’t know what you are building, and you need to iterate,” Colson said. The process of iteration requires fluidity, and if the different roles are staffed with different people, there will be longer wait times when a change is made.

In the beginning at least, companies will benefit more from generalists. But after data science processes are established after a few years, specialists may be more efficient.

Align data science with business

Today a lot of data science models are built in silos that are disconnected from normal business operations, Domino’s Elprin said. To make data science effective, it must be integrated into existing business processes. This comes from aligning data science projects with business initiatives. This might involve things like reducing the cost of fraudulent claims or improving customer engagement.

In less effective organizations, management tends to start with the data the company has collected and wonder what a data science team can do with it. In more effective organizations, data science is driven by business objectives.

“Getting to digital transformation requires top-down buy-in to say this is important,” Elprin said. “The most successful organizations find ways to get quick wins to get political capital. Instead of twelve-month projects, quick wins will demonstrate value and get more concrete engagement.”

Juniper adds core campus switch to EX series

Juniper Networks has added to its EX series a core aggregation switch aimed at enterprises with campus networks that are too small for the company’s EX9000 line.

Like the EX9000 series, the EX4650 — a compact 25/100 GbE switch — uses network protocols typically found in the data center. As a result, the same engineering team can manage the data center and the campus.

“If an enterprise has a consistent architecture and common protocols across networks, it should be well-placed to achieve operational efficiencies across the board,” said Brad Casemore, an analyst at IDC.

The network protocols used in the EX4650 and EX9000 are the Ethernet VPN (EVPN) and the Virtual Extensible LAN (VXLAN). EVPN secures multi-tenancy environments in a data center. Engineers typically use it with the Border Gateway Protocol and the VXLAN encapsulation protocol. The latter creates an overlay network on an existing Layer 3 infrastructure.

Offering a common set of protocols lets Juniper target its campus switches at data center customers, Casemore said. “That’s a less resistant path than trying to displace other vendors in both the data center and the campus.”

Juniper released the EX4650 four months after releasing two multigigabit campus switches, the EX2300 and EX4300. Juniper also released in February a cloud-based dashboard, called Sky Enterprise, for provisioning and configuring Juniper’s campus switches and firewalls.

Juniper rivals Arista and Cisco are also focused on the campus market. In May, Arista extended its data center switching portfolio to the campus LAN with the introduction of the 7300X3 and 7050X3 spline switches. Cisco, on the other hand, has been building out a software-controlled infrastructure for the campus network, centered around a management console called the Digital Network Architecture (DNA) Center.

Juniper Networks’ EX4650 core aggregation switch for the campus

SD-WAN upgrade

Along with introducing the EX4650, Juniper this week unveiled improvements to its software-defined WAN for the campus. Companies can use Juniper’s Contrail Service Orchestration technology to prioritize specific application traffic traveling through the SD-WAN. The capability supports more than 3,700 applications, including Microsoft’s Outlook, SharePoint and Skype for Business, Juniper said.

Juniper runs its SD-WAN as a feature within the company’s NFX Network Services Platform, which also includes the Contrail orchestration software and Juniper’s SRX Series Services Gateways. The latter contains the vSRX virtual firewall, IP VPN, content filtering and threat management.

Juniper has added support for active-active clustering to the NFX platform, which is the ability to spread a workload across NFX hardware. NFX runs its software on a Linux server.

The clustering feature will improve the reliability of the LTE, broadband and MPLS connections typically attached to an SD-WAN, Juniper said.

Unchecked cloud IoT costs can quickly spiral upward

The convergence of IoT and cloud computing can tantalize enterprises that want to delve into new technology, but it’s potentially a very pricey proposition.

Public cloud providers have pushed heavily into IoT, positioning themselves as a hub for much of the storage and analysis of data collected by these connected devices. Managed services from AWS, Microsoft Azure and others make IoT easy to initiate, but users who don’t properly configure their workloads quickly encounter runaway IoT costs.

Cost overruns on public cloud deployments are nothing new, despite lingering perceptions that these platforms are always a cheaper alternative to private data centers. But IoT architectures are particularly sensitive to metered billing because of the sheer volume of data they produce. For example, a connected device in a factory setting could generate hundreds of unique streams of data every few milliseconds that record everything from temperatures to acoustics. That much data can add up to a terabyte uploaded daily to cloud storage.

“The amount of data you transmit and store and analyze is potentially infinite,” said Ezra Gottheil, an analyst at Technology Business Research Inc. in Hampton, N.H. “You can measure things however often you want. And if you measure it often, the amount of data grows without bounds.”
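
The arithmetic behind that warning is easy to reproduce. The sketch below uses invented device counts, sampling rates and record sizes to show how quickly daily upload volume reaches the terabyte range.

```python
# Back-of-the-envelope estimate of daily upload volume for a connected factory.
# All figures (device counts, stream counts, sample sizes, rates) are invented.

DEVICES = 10
STREAMS_PER_DEVICE = 200        # temperature, vibration, acoustics, ...
SAMPLES_PER_SECOND = 100        # "every few milliseconds"
BYTES_PER_SAMPLE = 60           # timestamp, device ID, reading, metadata

bytes_per_day = DEVICES * STREAMS_PER_DEVICE * SAMPLES_PER_SECOND * BYTES_PER_SAMPLE * 86_400
terabytes_per_day = bytes_per_day / 1e12

print(f"{terabytes_per_day:,.1f} TB uploaded per day")
# Halving the sampling rate halves the volume, which is why sampling and
# retention policies matter as much as the per-GB price.
```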

Users must also consider networking costs. Most large cloud vendors charge based on communications between the device and their core services. And in typical public cloud fashion, each vendor charges differently for those services.

Predictive analytics reveals, compares IoT costs

To parse the complexity and scale of potential IoT cost considerations, analyst firm 451 Research built a Python simulation and applied predictive analytics to determine costs for 10 million IoT workload configurations. It found Azure was largely the least-expensive option — particularly if resources were purchased in advance — though AWS could be cheaper on deployments with fewer than 20,000 connected devices. It also illuminated how vast pricing complexities hinder straightforward cost comparisons between providers.


For example, Google charges in terms of data transferred, while AWS and Azure charge against the number of messages sent. Yet, AWS and Azure treat messages differently, which can also affect IoT costs; Microsoft caps the size of a message, potentially requiring a customer to send multiple messages.

There are other unexpected charges, said Owen Rogers, a 451 analyst. Google, for example, charges for ping messages, which check that the connection is kept alive. That ping may only be 64 bytes, but Google rounds up to the kilobyte. So, customers essentially pay for unused capacity.
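
Those metering differences are easy to see with a small calculation: the same telemetry priced per message (with a size cap) versus per unit of data (with rounding up) comes out differently. The prices and caps below are placeholders, not any provider’s actual rates.

```python
import math

# Toy comparison of two metering styles for the same telemetry. The prices and
# caps are placeholders, not any provider's actual rates.

MESSAGES_PER_DAY = 2_000_000
AVG_MESSAGE_BYTES = 6_000

def per_message_cost(price_per_million: float, size_cap_bytes: int) -> float:
    # A size cap means one logical message can be billed as several.
    billed = MESSAGES_PER_DAY * math.ceil(AVG_MESSAGE_BYTES / size_cap_bytes)
    return billed / 1_000_000 * price_per_million

def per_data_cost(price_per_gb: float, rounding_bytes: int) -> float:
    # Rounding each message up (e.g., a 64-byte ping billed as a full kilobyte).
    billed_bytes = MESSAGES_PER_DAY * math.ceil(AVG_MESSAGE_BYTES / rounding_bytes) * rounding_bytes
    return billed_bytes / 1e9 * price_per_gb

print(f"per-message billing: ${per_message_cost(price_per_million=1.00, size_cap_bytes=4_096):.2f}/day")
print(f"per-data billing:    ${per_data_cost(price_per_gb=0.50, rounding_bytes=1_024):.2f}/day")
```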

“Each of these models has nuances, and you only really discover them when you look through the terms and conditions,” Rogers said.

Some of these nuances aim to protect the provider or hide complexity from users, but they can leave customers scratching their heads. Charging discrepancies are endemic to the public cloud, but IoT costs present new challenges for those deciding which cloud to use — especially those who start out with no past experience as a reference point.

“How are you going to say it’s less or more than it was before? At least in a VM, you have a comparison with dedicated servers from before. But with IoT, it’s a whole new world,” Rogers said. “If you want to compare providers, it would be almost impossible to do manually.”

There are many unknowns in building an IoT deployment compared to more traditional applications, some of which apply regardless of whether it’s built on the public cloud or in a private data center. Software asset management can be a huge cost at scale. In the case of a connected factory or building, greater heterogeneity affects time and cost, too.

“Developers really need to understand the environment, and they have to be able to program for that environment,” said Alfonso Velosa, a Gartner analyst. “You would set different protocols, logic rules and processes when you’re in the factory for a robot versus a man[-operated] machine versus the air conditioners.”

Data can also get stale rather quickly and, in some cases, become useless if it’s not used within seconds. Companies must put policies in place to determine how frequently to record data and how much of it to transmit back to the cloud. That includes when to move data from active storage to cold storage, and if and when to purge those records completely.

“It’s really sitting down and figuring out, ‘What’s the value of this data, and how much do I want to collect?'” Velosa said. “For a lot of folks, it’s still not clear where that value is.”