Meltdown and Spectre malware discovered in the wild

Chip makers have said they’ve seen no evidence the Meltdown and Spectre vulnerabilities have been exploited to steal customer data, but those days of relative comfort may be coming to an end.

Researchers at AV-TEST, an independent organization that tests antimalware and security software, announced this week they had discovered 139 samples of malware that “appear to be related to recently reported CPU vulnerabilities.” The good news is that most of the malware samples appear to be based on previously published proof-of-concept code from security researchers; the bad news is that AV-TEST’s latest findings show the number of unique samples has risen sharply in recent weeks.

The organization had previously reported the discovery of 77 unique samples of Meltdown and Spectre malware on Jan. 17. At that time, AV-TEST said via Twitter that all identified samples were “original or modified PoC code” and that the majority of the samples were for Spectre rather than Meltdown. AV-TEST posted another update on Jan. 23 showing the unique malware samples had risen to 119.

After analyzing most of those samples, Fortinet’s FortiGuard Labs published a report Tuesday saying it was “concerned” about the potential of Meltdown and Spectre malware attacking users and enterprises.

“FortiGuard Labs has analyzed all of the publicly available samples, representing about 83 percent of all the samples that have been collected [by AV-TEST], and determined that they were all based on proof of concept code,” the research team wrote. “The other 17 percent may have not been shared publicly because they were either under NDA or were unavailable for reasons unknown to us.”

Fortinet also released several antivirus signatures to help users defend against the Meltdown and Spectre malware samples. But detecting other exploits related to these chip vulnerabilities could prove extremely difficult. While Intel and AMD have said there is no evidence the flaws have been exploited in the wild, the researchers who discovered the chip vulnerabilities say it’s “probably not” possible for organizations or users to tell whether Meltdown and Spectre have been used against them.

“The exploitation does not leave any traces in traditional log files,” according to an FAQ on the Meltdown and Spectre research site.

Defending against possible Meltdown and Spectre malware has been further complicated by patch issues. Intel recently announced it was pulling its microcode updates for the chip vulnerabilities because of reboot problems on systems running Intel’s Broadwell and Haswell processors. Microsoft later issued an out-of-band patch that disabled Intel’s update for variant 2 of the Spectre vulnerability, which involves branch target injection.
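
With patches in flux, administrators can at least verify which mitigations a given host reports as active. On Linux, kernels from version 4.15 onward expose this through sysfs; the short Python sketch below simply reads those files. It is a minimal sketch only: the exact status strings vary by kernel and microcode, and older kernels will not have the directory at all.

# Minimal sketch: report Meltdown/Spectre mitigation status on a Linux host.
# Assumes a kernel new enough (4.15+) to expose the sysfs "vulnerabilities"
# directory; older kernels simply won't have these files.
from pathlib import Path

SYSFS_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status() -> dict:
    """Return {vulnerability name: kernel-reported status} for this host."""
    if not SYSFS_DIR.is_dir():
        return {}
    return {entry.name: entry.read_text().strip()
            for entry in sorted(SYSFS_DIR.iterdir())}

if __name__ == "__main__":
    status = mitigation_status()
    if not status:
        print("Kernel does not expose vulnerability status (pre-4.15?)")
    for name, state in status.items():
        # Typical values: "Not affected", "Vulnerable", "Mitigation: PTI", ...
        print(f"{name:12} {state}")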

Big content providers influence undersea cable price drops

Google this month said it will build three new undersea cable routes to be completed in 2019, bringing to 11 the number of cables in which the company has invested since 2010.

Google’s multimillion-dollar move isn’t unusual anymore, even though undersea cable builds were historically driven by individual telecom operators or consortiums that then sold optical wavelength services to their customers.

The big four content providers — Google, Amazon, Microsoft and Facebook — are increasingly financing undersea cable routes to enable fast, high-capacity data connections around the globe, particularly to locations where they have built data centers. Yet, these changes in the submarine networking market worry telecom operators that fear the big content providers have more money to invest in undersea cable, leaving the traditional players at a disadvantage.

Not only are 1 Gb, 10 Gb and 100 Gb wavelength service prices declining year over year, according to trend data from telecommunications research and consulting firm TeleGeography, but each time a new undersea cable goes live, 100 Gb wavelength prices drop more quickly as more bandwidth becomes available at the high end of the market.

Yet, the news isn’t all bad for telecom operators, according to Michael Bisaha, senior analyst at TeleGeography, based in Carlsbad, Calif. Speaking at a recent webinar, he said the big content providers shouldn’t always be viewed as competitors taking over the undersea market, because they often finance only a portion of a cable rather than an entire build. That gives telecom operators opportunities to get onto a shared cable system and stretch their investment capital, he said.

Content providers have consistently invested in the cable routes that connect their major content and cloud hubs, largely in the United States, Western Europe and parts of Asia, Bisaha said. But that doesn’t mean they only need those primary routes. They still need diverse routes, and when their needs don’t meet the threshold of investing in a whole cable, they might buy wavelengths or shared spectrum to specific geographies from a telecom operator, he added.

“There are definitely opportunities to compete with or cater to the content players,” Bisaha said, adding that content providers are primarily buying or investing in cables on the major routes that connect their primary hubs.

“The existence of one content provider on a cable doesn’t necessarily preclude the need for the wholesale providers beyond those cable routes,” Bisaha added.

Undersea cable pricing trends

Apart from the influence content providers are having on the high end of the undersea cable market, the growing need for bandwidth is also driving the 10 Gb wavelength market, where prices dropped from 2014 through 2017 at an average compound annual rate of 20% to 25%, according to TeleGeography data. Bisaha said he expects the downward pricing trend to continue for both 10 Gb and 100 Gb wavelengths.
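
To put the compound decline in concrete terms, the short Python sketch below shows how a 20% to 25% annual price drop plays out over the three years from 2014 to 2017. The starting price is invented for illustration and is not TeleGeography data.

# Illustrative arithmetic only: how a 20%-25% compound annual decline
# plays out from 2014 to 2017. The starting price is made up.
def compounded_price(start_price: float, annual_decline: float, years: int) -> float:
    """Apply a constant annual percentage decline for the given number of years."""
    return start_price * (1 - annual_decline) ** years

start = 100_000.0  # hypothetical monthly 10 Gb wavelength price, in dollars
for decline in (0.20, 0.25):
    end = compounded_price(start, decline, years=3)
    print(f"{decline:.0%} annual decline: {start:,.0f} -> {end:,.0f} "
          f"({end / start:.0%} of the 2014 price)")
# A 20% annual decline leaves about 51% of the 2014 price after three
# years; a 25% decline leaves about 42%.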

Even as 100 Gb wavelength prices drop, steeper “shocks to the 100 [Gb] system” tend to coincide with new content-provider-sponsored cables coming online, Bisaha said. Increased participation by content companies influences the 100 Gb market more than the 10 Gb or 1 Gb markets.

Still, global undersea cable prices aren’t falling uniformly in all areas. Less popular subsea routes may have higher transport rates than those for major hubs. “While the range in different geographies may indeed continue to shrink, we don’t expect them to completely disappear,” Bisaha said.

Intel Meltdown patches pulled with little explanation

Intel said it has identified the cause of the reboot issues related to firmware updates assumed to be the Spectre and Meltdown patches, but hasn’t offered much more information than that.

It is widely assumed the Spectre and Meltdown patches led systems running on Intel Broadwell and Haswell chips to reboot. However, when Intel announced the problem on Jan. 11, the company only admitted that customers saw “higher system reboots after applying firmware updates.” So far, Intel has been careful to avoid mentioning the Spectre and Meltdown patches in connection to the reboot issues with Broadwell and Haswell chips.

Despite not outright admitting the connection, Intel has pulled its Spectre and Meltdown patches while it tests an updated version of the fix. The company said it has now discovered the “root cause” of the reboot issues and has “made good progress in developing a solution to address it.”

“We recommend that OEMs, cloud service providers, system manufacturers, software vendors and end users stop deployment of current versions, as they may introduce higher than expected reboots and other unpredictable system behavior,” Navin Shenoy, executive vice president and general manager of the data center group at Intel, wrote in a blog post. “We ask that our industry partners focus efforts on testing early versions of the updated solution so we can accelerate its release. We expect to share more details on timing later this week.”

Bob Noel, director of strategic relationships and marketing for Plixer, a network traffic analysis company based in Kennebunk, Maine, said the current unstable code for the Spectre and Meltdown patches “leaves end users vulnerable, with no available options other than to wait for a stable fix.”

“In times like these, customers should be extra vigilant to ensure they have not been compromised. Network traffic analytics should be used to monitor their environment for anomalous traffic patterns and unusual behaviors,” Noel told SearchSecurity. “The secondary problem this unstable patch code creates is a general hesitancy for end users to quickly apply future patches. Early adopters of these patches experienced hardware reboots and downtime, which is likely to leave them wary of becoming early adopters for future patches.”

Ben Johnson, CTO at Obsidian, based in Newport Beach, Calif., and former National Security Agency computer scientist, agreed the way Intel has handled the Spectre and Meltdown patches may harm customer trust.

“Consumers have no patience for perceived inactivity when it comes to vulnerabilities or security issues, so organizations want to take action as soon as a vulnerability becomes public. But if you roll out a patch without proper testing, you can exacerbate the problem by paralyzing your system and your workforce, as Intel and Dell’s customers found out over the past week,” Johnson told SearchSecurity. “This is particularly problematic, because one of the biggest issues in security is getting people to patch vulnerabilities. Incidents like this just make matters worse because they make IT teams gun-shy. Your customers need to have faith that, when you roll out a patch, it isn’t going to hammer their system. If they don’t trust you, they won’t patch.”

Linus Torvalds, creator of Linux, had much harsher words for Intel’s handling of the situation and called the Spectre and Meltdown patches “complete and utter garbage.”

“I’m sure there is some lawyer there who says ‘we’ll have to go through motions to protect against a lawsuit.’ But legal reasons do not make for good technology, or good patches that I should apply,” Torvalds wrote in a Linux Kernel Mailing List post. “[The patches] do literally insane things. They do things that do not make sense. That makes all your arguments questionable and suspicious. The patches do things that are not sane … I think we need something better than this garbage.”

Veeam acquisition of N2WS enhances cloud protection

Eight months after investing in N2WS, Veeam Software today said it acquired the cloud data protection company for $42.5 million.

The all-cash Veeam acquisition is the backup and recovery vendor’s first in 10 years. N2WS provides cloud-native, enterprise backup and disaster recovery for Amazon Web Services (AWS) through its Cloud Protection Manager.

N2WS was founded in 2012 and released its first product in 2013. It will operate as a stand-alone business, keeping its brand name and becoming “A Veeam Company” while selling Cloud Protection Manager.

Veeam disclosed its investment in N2WS in May 2017, and began selling N2WS technology through an OEM deal as part of Veeam Availability for AWS. Peter McKay, Veeam co-CEO and president, said the investment and partnership allowed his company to monitor the N2WS business, use its technology and accelerate Veeam’s AWS capabilities.

Cloud Protection Manager is built specifically for AWS and automates backup and recovery for Amazon Elastic Compute Cloud instances. It is available in the AWS Marketplace.

“Technology wise, it’s a good addition to the portfolio,” McKay said.

Cloud-native data protection is seeing high growth, said Phil Goodwin, research director of storage systems and software at IDC.

“This puts them square in the middle of that marketplace,” Goodwin said.

The challenge with a company focused specifically on AWS data protection is that organizations are going to have workloads in multiple clouds and on premises.

“I think they intend to address it,” Goodwin said of Veeam’s answer to the challenge.

The sky’s the limit

Both companies reported significant growth in the last year. N2WS grew revenue by 102% in 2017. Veeam hit $827 million in total bookings revenue in 2017, an increase of 36% year over year, and claims more than 282,000 customers.

Veeam was founded in 2006 as a virtual backup company, but has since added physical and cloud protection.

Investing in and then buying N2WS helped alleviate any concerns with the Veeam acquisition, McKay said. N2WS, which was privately held before the acquisition, now has more than 1,000 customers.

“I think we’ve de-risked it quite a bit,” McKay said.

Veeam’s investment helped N2WS build up its team last year. The company had seven employees at the beginning of 2017 and now has 42, N2WS CEO Jason Judge said.

Veeam funded N2WS through a round led by Insight Venture Partners, which is a large investor in Veeam. The companies did not disclose the amount invested in N2WS.

As part of the Veeam acquisition:

  • Veeam will have access to N2WS technology and research and development to integrate data protection for AWS workloads into the Veeam Availability Platform.
  • N2WS will have access to Veeam’s research and development and its alliances and partners, including nearly 55,000 resellers and 18,000 Veeam Cloud & Service Providers. 
  • Current Veeam customers will receive special offers and incentives for Cloud Protection Manager from N2WS.

“The R&D teams will be working together on bigger things,” said Ezra Charm, N2WS vice president of marketing.

There is some overlap in customers, but N2WS is focused solely on AWS protection, Charm said. He said the deal presents an opportunity for N2WS to sell to Veeam customers who don’t know the company and those who don’t yet use AWS.

“We have a lot more resources available to us as part of Veeam,” and can accelerate development, Charm said.

Lofty goals for Veeam and N2WS

The typical backup and recovery customer is evolving to think more cloud-first, McKay said.

The acquisition helps Veeam as its cloud business and enterprise base are both growing. Veeam reported a year-over-year increase of 57% in cloud bookings for the fourth quarter. The vendor attained a 62% year-over-year increase in large enterprise deals, and reached 500% annual growth for deals over $1 million, according to its latest revenue report.

McKay said in 2017 that he didn’t feel acquisitions were needed for Veeam revenue to hit $1.5 billion by 2020. But the Veeam acquisition only helps the revenue goals, which now include a push to get to $2.2 billion by 2022, McKay said.

Veeam last acquired a company in 2008, when it bought privately held Nworks, a creator of enterprise management connectors.

McKay said there was a time when Veeam was possibly growing too fast, and a lot of employees were stretched trying to do too much. In the last 18 months, though, the company has added employees from outside and developed teams internally. The company has 3,100 employees and will likely add 700 in the next year, including 230 in research and development, McKay said.

N2WS plans to add employees as well, across its three offices: its headquarters in West Palm Beach, Fla.; its research and development center in Haifa, Israel; and its new office in Edinburgh, Scotland.

The Veeam acquisition closed at the end of 2017. Judge will continue to lead N2WS as its CEO, and all teams, including sales, marketing, research and development, and customer service, will stay intact, according to Veeam.

N2WS and Veeam are well-positioned to take advantage of the growing infrastructure-as-a-service market, Charm said.

“It’s really the new hotness in IT,” Charm said. “It’s changing the way people are talking about IT and infrastructure.”

Cybersecurity skills shortage continues to worsen

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., said the global cybersecurity skills shortage is bad and getting worse. Citing ESG’s annual survey on the state of IT, Oltsik said skills shortages across networking disciplines have not eased, and the cybersecurity shortage is particularly acute. For instance, in 2014, 23% of respondents said their organization faced a problematic shortage of cybersecurity skills. In the most recent survey, which polled more than 620 IT and cybersecurity professionals, 51% said they faced a cybersecurity skills shortage. The data aligns with the results of a 2017 ESG-ISSA survey in which 70% of cybersecurity professionals reported their organizations were affected by the skills shortage, resulting in increased workloads and little time for planning.

“I can only conclude that the cybersecurity skills shortage is getting worse,” Oltsik said. “Given the dangerous threat landscape and a relentless push toward digital transformation, this means that the cybersecurity skills shortage represents an existential threat to developed nations that rely on technology as the backbone of their economy.”

Chief information security officers (CISOs), Oltsik said, need to consider the implications of the cybersecurity skills shortage. Smart leaders are doing their best to cope by consolidating systems, such as integrated security operations and analytics platform architecture, and adopting artificial intelligence and machine learning. In other cases, CISOs automate processes, adopt a portfolio management approach and increase staff compensation, training and mentorship to improve retention.

Dig deeper into Oltsik’s ideas on the cybersecurity skills shortage.

Building up edge computing power

Erik Heidt, an analyst at Gartner, spent part of 2017 discussing edge computing challenges with clients as they worked to improve computational power for IoT projects. Heidt said a variety of factors drive compute to the edge (and in some cases, away), including availability, data protection, cycle times and data stream filtering. In some cases, computing capability is added directly to an IoT endpoint. But in many situations, such as data protection, it may make more sense to host an IoT platform in an on-premises location or private data center.

Yet the private approach poses challenges, Heidt said, including licensing costs, capacity issues and hidden costs from IoT platform providers that limit users to certified hardware. Heidt recommends purchasing teams look carefully at what functions are being offered by vendors, as well as considering data sanitization strategies to enable more use of the cloud. “There are problems that can only be solved by moving compute, storage and analytical capabilities close to or into the IoT endpoint,” Heidt said.

Read more of Heidt’s thoughts on the shift to the edge.

Meltdown has parallels in networking

Ivan Pepelnjak, writing in ipSpace, responded to a reader’s question about how hardware issues become software vulnerabilities in the wake of the Meltdown vulnerability. According to Pepelnjak, there has always been privilege-level separation between the kernel and user space, with the kernel mapped into the high end of each process’s address space. But on modern CPUs, the work needed to execute even a single instruction flows through a pipeline alongside dozens of other instructions, and that parallelism exposes the weakness an attack like Meltdown exploits.

In these situations, the check on the kernel-space address eventually fails, much as a packet would fail a lookup against an access control list (ACL), but by then other parts of the CPU have already speculatively carried out the instructions that fetch that memory location.
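
That ordering problem can be sketched in a toy simulation: the permission check ultimately rejects the access, but a dependent load has already run speculatively and left a footprint in the cache that survives the fault. The Python below is a conceptual illustration of that principle only, not exploit code; the CPU, cache and secret value are all invented.

# Conceptual illustration of the Meltdown ordering problem, not an exploit.
# The privilege check eventually fails, but a dependent load has already
# executed speculatively and left a trace in the (simulated) cache.
class ToyCPU:
    def __init__(self, kernel_secret: int):
        self._kernel_secret = kernel_secret   # value user code may not read
        self.cache_lines_touched = set()      # observable side effect

    def user_read_kernel_byte(self, probe_array_base: int) -> None:
        # Out-of-order core: the dependent load below runs *before* the
        # privilege check retires and raises a fault.
        speculative_value = self._kernel_secret
        self.cache_lines_touched.add(probe_array_base + speculative_value * 4096)
        # The privilege check finally completes: the architectural result
        # is discarded and a fault is delivered ...
        raise PermissionError("user mode may not read kernel memory")

cpu = ToyCPU(kernel_secret=42)
try:
    cpu.user_read_kernel_byte(probe_array_base=0)
except PermissionError:
    pass  # ... but the cache footprint survives the fault.

# A real attacker times access to each probe-array slot; the one "fast"
# slot reveals the secret byte. Here we just inspect the simulated cache.
leaked = {line // 4096 for line in cpu.cache_lines_touched}
print("leaked byte value:", leaked)   # {42}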

Parallelized execution isn’t unique to CPU vendors. Pepelnjak said at least one hardware vendor created a version of IPv6 neighbor discovery that suffers from the same kind of flaw. In response to Meltdown, operating system vendors are rolling out patches that remove the kernel from user-space page tables. This approach blocks the exploit, but the kernel no longer has direct access to user memory when it needs it. As a result, the kernel must switch virtual-to-physical page tables on every entry, mapping user space into its own page tables, and the kernel mappings must be removed again on every return to user space. Every single system call, even one that reads a byte from a file, pays that page-table switching cost.

Explore more of Pepelnjak’s thoughts on network hardware vulnerabilities.

Cisco, Hyundai to add software-defined platform to vehicles

Cisco and Hyundai Motor Co. said they would work together to develop vehicles anchored by software-defined networking. The first vehicles are slated to roll out next year, the companies said. Cisco and Hyundai released their plans at this year’s CES in Las Vegas, following up on an original announcement in 2016.

In smart-car fashion, Cisco and Hyundai will develop the vehicles with a focus on communication and sensors. That’s where the “software-defined” status comes in. The premium Hyundai vehicles will integrate a Cisco-built software-defined platform with an Internet Protocol (IP) and 1 Gbps Ethernet in-vehicle network, according to a Cisco statement.

The IP and Ethernet network will enable high-speed connectivity to each vehicle device, the statement said. But more than that, Cisco and Hyundai hope to develop a more open vehicle system to enable the actual communication among smart vehicles, roadways or traffic lights, Ruba Borno, Cisco’s vice president of growth initiatives, wrote in a blog post.

“This is the only way to achieve full autonomy and enable vehicle-to-vehicle and vehicle-to-roadways communication,” Borno wrote. “By putting software inside the central gateway, the new solution enables high-speed connectivity downstream to every device in the car — and upstream to the cloud. This IP connectivity is required for applications to control devices based on real-time data and analytics.”

The software-defined platform will also allow for easier feature updates.

“[The new platform] is highly configurable and secure — and offers the flexibility to design and build new services,” Cisco’s statement said. “It will provide ‘over-the-air updates’ and accelerate the time it takes to bring new capabilities to market.”

The software-defined platform will also act as a foundation for security, touting “integrated, multilayered security, as well as full end-to-end networking,” according to Cisco. This end-to-end security includes encryption, authentication, intrusion detection, firewall and network traffic analysis, Borno wrote in her blog post.

The companies said they are looking into integrating the software-defined vehicles with Hyundai data centers in order to access real-time data.

Windstream to acquire MassComm

Windstream said it plans to acquire MassComm, a New York-based competitive local exchange carrier, or CLEC, according to a filing submitted to the Federal Communications Commission the last week of December.

The proposal stated that Windstream will purchase in cash all issued and outstanding MassComm capital stock. Once the deal closes, MassComm will be a wholly owned subsidiary of Windstream, the filing said.

MassComm provides voice, data and networking technologies, in addition to telecommunications and connectivity management and consultation. The CLEC is authorized to serve customers in California, Connecticut, the District of Columbia, Florida, Illinois, Massachusetts, Michigan, New York, Pennsylvania and Texas. Those areas will be combined with Windstream’s reach across the U.S. Windstream runs a fiber network spanning approximately 150,000 miles.

“By combining MassComm’s customer base with Windstream’s presence and fiber network, the combined company will have the opportunity to serve more of MassComm’s current customers on Windstream’s own last-mile facilities,” the proposal said.

The companies explained the acquisition holds no competitive risks, as MassComm doesn’t own any last-mile facilities, thereby eliminating potential overlap with Windstream’s facilities. Further, the proposal stated competition in the medium-sized business market will be enhanced. The companies don’t expect the transaction to affect current MassComm customer rates or terms of service, according to the filing.

The proposal did not disclose specific financial terms of the transaction. Windstream completed acquisitions of Broadview and EarthLink last year and also partnered with VeloCloud to offer SD-WAN managed services.

Aryaka and Zscaler partner to boost security for cloud-bound traffic

Aryaka is working with Zscaler to offer an SD-WAN package that combines Aryaka’s private network connectivity with Zscaler’s cloud-delivered security.

Once the service is available later this year, internet and cloud-bound traffic will be directly forwarded to Zscaler’s cloud via Aryaka’s edge device, Aryaka Network Access Point, according to an Aryaka statement. All traffic will then undergo a variety of security processes, including antivirus, threat prevention, data protection and access control.

Aryaka said the security enhancement will allow its customers to use its private network to securely access cloud-based and on-premises applications.

Zscaler already provides security services to other SD-WAN vendors, including VeloCloud, Riverbed and Talari.

Campus network architecture strategies to blossom in 2018

Bob Laliberte, an analyst with Enterprise Strategy Group in Milford, Mass., said even though data center networking is slowing down in the face of cloud and edge computing, local and campus network architecture strategies are growing in importance as the new year unfolds.

After a long period of data center consolidation, demand for reduced latency — spurred by the growth of the internet of things (IoT) — and the evolution of such concepts as autonomous vehicles are driving a need for robust local networks.

At the same time, organizations have moved to full adoption of the cloud for roles beyond customer relationship management, and many have a cloud-first policy. As a result, campus network architecture strategies need to allow companies to use multiple clouds to control costs. In addition, good network connectivity is essential to permit access to the cloud on a continuous basis.

Campus network architecture plans must also accommodate Wi-Fi to guarantee user experience and to enable IoT support. The emergence of 5G will also continue to expand wireless capabilities.

Intent-based networks, meanwhile, will become a tool for abstraction and the introduction of automated tasks. “The network is going to have to be an enabler, not an anchor,” with greater abstraction, automation and awareness, Laliberte said.

Laliberte said he expects intent-based networks to be deployed in phases, in specific domains of the network, or to improve verification and insights. “Don’t expect your network admins to have Alexa architecting and building out your network,” he said. He added, however, that systems modeled after Alexa will become interfaces for network systems.

Explore more of Laliberte’s thoughts on networking.

BGP route selection and intent-based networking

Ivan Pepelnjak, writing in ipSpace, said pundits who favor the demise of Border Gateway Protocol (BGP) through new SDN approaches often praise the concept of intent-based networking.

Yet, the methodologies behind intent-based networks fail when it comes to BGP route selection, he said. Routing protocols were, in fact, an early approach to the intent-based idea, although many marketers now selling intent-based systems are criticizing those very same protocols, Pepelnjak said. Without changing the route algorithm, the only option is for users to tweak the intent and hope for better results.

To deal with the challenges of BGP route selection, one option might involve a centralized controller with decentralized local versions of the software for fallback in case the controller fails. Yet, few would want to adopt that approach, Pepelnjak said, calling such a strategy “messy” and difficult to get right. Route selection is now being burdened with intent-driven considerations, such as weights, preferences and communities.
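
For context, the attributes Pepelnjak lists feed a fixed decision order. The Python sketch below encodes a simplified version of the classic best-path ordering (highest weight, then highest local preference, then shortest AS path, then lowest MED); real routers apply several more tie-breakers, and the candidate routes here are invented for illustration.

# Simplified sketch of BGP best-path selection over a few common attributes.
# Real implementations add more steps (origin, eBGP vs. iBGP, IGP metric,
# router ID); the candidate routes below are invented.
from dataclasses import dataclass, field

@dataclass
class Route:
    next_hop: str
    weight: int = 0                          # local to the router (Cisco-style)
    local_pref: int = 100
    as_path: list = field(default_factory=list)
    med: int = 0

def best_path(routes):
    """Prefer highest weight, then highest local preference,
    then shortest AS path, then lowest MED."""
    return min(routes,
               key=lambda r: (-r.weight, -r.local_pref, len(r.as_path), r.med))

candidates = [
    Route("10.0.0.1", local_pref=200, as_path=[65001, 65002]),
    Route("10.0.0.2", local_pref=100, as_path=[65003]),
    Route("10.0.0.3", weight=100, local_pref=100, as_path=[65004, 65005, 65006]),
]
print(best_path(candidates).next_hop)   # 10.0.0.3: weight wins first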

“In my biased view (because I don’t believe in fairy tales and magic), BGP is a pretty obvious lesson in what happens when you try to solve vague business rules with [an] intent-driven approach instead of writing your own code that does what you want to be done,” Pepelnjak wrote. “It will be great fun to watch how the next generation of intent-based solutions will fare. I haven’t seen anyone beating laws of physics or RFC 1925 Rule 11 yet,” he added.  

Dig deeper into Pepelnjak’s ideas about BGP route selection and intent-based networking.

Greater hybridization of data centers on the horizon

Chris Drake, an analyst with Current Analysis in Sterling, Va., said rising enterprise demand for hybrid cloud systems will fuel partnerships between hyperscale public providers and traditional vendors. New alliances, such as the one between Google and Cisco, have joined existing cloud-vendor partnerships like those between Amazon and VMware and between Microsoft and NetApp, Drake said, and more are in the offing.

Moving and managing workloads across hybrid IT environments will be a key point of competition between providers, Drake said, perhaps including greater management capabilities to oversee diverse cloud systems.

Drake said he also expects a proliferation of strategies aimed at edge computing. The appearance of micro data centers and converged edge systems may decentralize large data centers. He said he also anticipates greater integration with machine learning and artificial intelligence, although actual deployments will remain gradual because of legacy technologies.

Read more of Drake’s assessment of 2018 data center trends.

Biggest SDN and SD-WAN news and trends of 2017

It can certainly be said that 2017 was a year of disruption.

The disruption, of course, refers to the major SDN and SD-WAN news and trends that occurred throughout the year. If you want to review the highlights from a software-defined perspective, the SearchSDN 2017 timeline below can help you revisit what happened and prepare for what comes next.

But first, to set the stage, SD-WAN news continued to steal the show in terms of market attention, vendor moves and new services announcements, as SD-WAN adoption increased throughout the year. SD-WAN global revenues surpassed $300 million, and the number of operational SD-WAN sites passed 90,000, according to a Frost & Sullivan study released in October. Analysts expect SD-WAN to continue growing throughout the next few years.

As predicted in 2016, vendor consolidation disrupted the SD-WAN market in 2017. The first major announcement came from networking vendor Cisco, which set its acquisition sights on Viptela, a leading SD-WAN vendor. VMware followed a few months later with its intention to acquire VeloCloud. Speculation continues about which vendors will be next to move.

2017 timeline of SDN and SD-WAN news and trends

While the SD-WAN vendor market might have started shrinking in 2017, service providers looked to grab their piece of the SD-WAN pie. In hopes of remaining competitive within a changing industry, a slew of service providers formed partnerships with SD-WAN vendors to offer their own managed services to customers. A short list of managed SD-WAN service providers includes AT&T, Sprint, Verizon, Windstream, MegaPath, Global Capacity, Orange Business, Masergy and Telefónica.

What about SDN? It has garnered its own share of hype in the past few years. While SDN might not be living up to its originally defined expectations in some aspects, it can be credited with spurring interest in SD-WAN technology, policy-based networks and a broader software-based networking initiative.

Now, here are the SDN and SD-WAN news stories that grabbed our attention this year.

How to win in the AI era? For now, it’s all about the data

Artificial intelligence is the new electricity, said deep learning pioneer Andrew Ng. Just as electricity transformed every major industry a century ago, AI will give the world a major jolt. Eventually.

For now, 99% of the economic value created by AI comes from supervised learning systems, according to Ng. These algorithms require human teachers and tremendous amounts of data to learn. It’s a laborious, but proven process.

AI algorithms, for example, can now recognize images of cats, although they required thousands of labeled images of cats to do so; and they can understand what someone is saying, although leading speech recognition systems needed 50,000 hours of speech — and their transcripts — to do so.
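
A minimal supervised-learning example makes that dependence concrete: the model learns only from human-labeled examples, and the algorithm itself is the easy, copyable part. The sketch below uses scikit-learn’s bundled handwritten-digit images as a stand-in for labeled cat photos or transcribed speech; it assumes scikit-learn is installed, and the dataset choice is ours, not Ng’s.

# Minimal supervised-learning sketch: the classifier learns only from
# human-labeled examples (scikit-learn's bundled digit images stand in
# for labeled cat photos or transcribed speech).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)        # images plus human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # the "copyable" algorithm
model.fit(X_train, y_train)                # the labeled data does the work
print(f"accuracy on held-out digits: {model.score(X_test, y_test):.2f}")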

Ng’s point is that data is the competitive differentiator for what AI can do today — not algorithms, which, once trained, can be copied.

“There’s so much open source, word gets out quickly, and it’s not that hard for most organizations to figure out what algorithms organizations are using,” said Ng, an AI thought leader and an adjunct professor of computer science at Stanford University, at the recent EmTech conference in Cambridge, Mass.

His presentation gave attendees a look at the state of the AI era, as well as the four characteristics he believes will be a part of every AI company, which include a revamp of job descriptions.

Positive feedback loop

So data is vital in today’s AI era, but companies don’t need to be a Google or a Facebook to reap the benefits of AI. All they need is enough data upfront to get a project off the ground, Ng said. That starter data will attract customers who, in turn, will create more data for the product.

“This results in a positive feedback loop. So, after a period of time, you might have enough data yourself to have a defensible business,” said Ng.

Andrew Ng on stage at EmTech

A couple of his students at Stanford did just that when they launched Blue River Technology, an ag-tech startup that combines computer vision, robotics and machine learning for field management. The co-founders started with lettuce, collecting images and putting together enough data to get lettuce farmers on board, according to Ng. Today, he speculated, they likely have the biggest data asset of lettuce in the world.

“And this actually makes their business, in my opinion, pretty defensible because even the global giant tech companies, as far as I know, do not have this particular data asset, which makes their business at least challenging for the very large tech companies to enter,” he said.

Turns out, that data asset is actually worth hundreds of millions: John Deere acquired Blue River for $300 million in September.

“Data accumulation is one example of how I think corporate strategy is changing in the AI era, and in the deep learning era,” he said.

Four characteristics of an AI company

While it’s too soon to tell what successful AI companies will look like, Ng suggested another corporate disruptor might provide some insight: the internet.

One of the lessons Ng learned with the rise of the internet was that companies need more than a website to be an internet company. The same, he argued, holds true for AI companies.

“If you take a traditional tech company and add a bunch of deep learning or machine learning or neural networks to it, that does not make it an AI company,” he said.

Internet companies are architected to take advantage of internet capabilities, such as A/B testing, short cycle times to ship products, and decision-making that’s pushed down to the engineer and product level, according to Ng.

AI companies will need to be architected to do the same in relation to AI. What A/B testing’s equivalent will be for AI companies is still unknown, but Ng shared four thoughts on characteristics he expects AI companies will share.

  1. Strategic data acquisition. This is a complex process, requiring companies to play what Ng called multiyear chess games, acquiring important data from one resource that’s monetized elsewhere. “When I decide to launch a product, one of the criteria I use is, can we plan a path for data acquisition that results in a defensible business?” Ng said.
  2. Unified data warehouse. This likely comes as no surprise to CIOs, who have been advocates of the centralized data warehouse for years. But for AI companies that need to combine data from multiple sources, data silos — and the bureaucracy that comes with them — can be an AI project killer. Companies should get to work on this now, as “this is often a multiyear exercise for companies to implement,” Ng said.
  3. New job descriptions. AI products like chatbots can’t be sketched out the way apps can, and so product managers will have to communicate differently with engineers. Ng, for one, is training product managers to give product specifications.
  4. Centralized AI team. AI talent is scarce, so companies should consider building a single AI team that can then support business units across the organization. “We’ve seen this pattern before with the rise of mobile,” Ng said. “Maybe around 2011, none of us could hire enough mobile engineers.” Once the talent numbers caught up with demand, companies embedded mobile talent into individual business units. The same will likely play out in the AI era, Ng said.