Samsung adds Z-NAND data center SSD

Samsung’s lineup of data center solid-state drives introduced this week — including a Z-NAND model — targets smaller organizations facing demanding workloads such as in-memory databases, artificial intelligence and IoT.

The fastest option in the Samsung data center SSD family — the 983 ZET NVMe-based PCIe add-in card — uses the company’s latency-lowering Z-NAND flash chips. Earlier this year, Samsung announced its first Z-NAND-based enterprise SSD, the SZ985, designed for the OEM market. The new 983 ZET SSD targets SMBs, including system builders and integrators, that buy storage drives through channel partners.

The Samsung data center SSD lineup also adds the first NVMe-based PCIe SSDs designed for channel sales in 2.5-inch U.2 and 22-mm-by-110-mm M.2 form factors. At the other end of the performance spectrum, the new entry-level 2.5-inch 860 DCT 6 Gbps SATA SSD targets customers who want an alternative to client SSDs for data center applications, according to Richard Leonarz, director of product marketing for Samsung SSDs.

Rounding out the Samsung data center SSD product family is a 2.5-inch 883 DCT SATA SSD that uses denser 3D NAND technology — which Samsung calls V-NAND — than comparable predecessor models. Samsung’s PM863 and PM863a SSDs use 32-layer and 48-layer V-NAND, respectively, while the new 883 DCT SSD is equipped with triple-level cell (TLC) 64-layer V-NAND chips, as are the 860 DCT and 983 DCT models, Leonarz said.

Noticeably absent from the Samsung data center SSD product line is 12 Gbps SAS. Leonarz said research showed SAS SSDs trending flat to downward in terms of units sold. He said Samsung doesn’t see a growth opportunity for SAS on the channel side of the business that sells to SMBs such as system builders and integrators. Samsung will continue to sell dual-ported enterprise SAS SSDs to OEMs.

Samsung 983 ZET NVMe SSD
The Samsung 983 ZET NVMe SSD uses its latency-lowering Z-NAND flash chips.

Z-NAND-based SSD uses SLC flash

The Z-NAND technology in the new 983 ZET SSD uses high-performance single-level cell (SLC) V-NAND 3D flash technology and builds in logic to drive latency down to lower levels than standard NVMe-based PCIe SSDs that store two or three bits of data per cell.

Samsung positions the Z-NAND flash technology it unveiled at the 2016 Flash Memory Summit as a lower-cost, high-performance alternative to new 3D XPoint nonvolatile memory that Intel and Micron co-developed. Intel launched 3D XPoint-based SSDs under the brand name Optane in March 2017, and later added Optane dual inline memory modules (DIMMs). Toshiba last month disclosed its plans for XL-Flash to compete against Optane SSDs.

Use cases for Samsung’s Z-NAND NVMe-based PCIe SSDs include cache memory, database servers, real-time analytics, artificial intelligence and IoT applications that require high throughput and low latency.

“I don’t expect to see millions of customers out there buying this. It’s still going to be a niche type of solution,” Leonarz said.

Samsung claimed its SZ985 NVMe-based PCIe add-in card could reduce latency by 5.5 times over top NVMe-based PCIe SSDs. Product data sheets list the SZ985’s maximum performance at 750,000 IOPS for random reads and 170,000 IOPS for random writes, and data transfer rates of 3.2 gigabytes per second (GBps) for sequential reads and 3 GBps for sequential writes.

The new Z-NAND-based 983 ZET NVMe-based PCIe add-in card is also capable of 750,000 IOPS for random reads, but its random write performance is lower at 75,000 IOPS. The data transfer rate for the 983 ZET is 3.4 GBps for sequential reads and 3 GBps for sequential writes. The 983 ZET’s latency for sequential reads and writes is 15 microseconds, according to Samsung.

Both the SZ985 and new 983 ZET are half-height, half-length PCIe Gen 3 add-in cards. Capacity options for the 983 ZET will be 960 GB and 480 GB when the SSD ships later this month. SZ985 SSDs are currently available at 800 GB and 240 GB, although a recent product data sheet indicates 1.6 TB and 3.2 TB options will be available at an undetermined future date.

Samsung’s SZ985 and 983 ZET SSDs offer significantly different endurance levels over the five-year warranty period. The SZ985 is rated at 30 drive writes per day (DWPD), whereas the new 983 ZET supports 10 DWPD with the 960 GB SSD and 8.5 DWPD with the 480 GB SSD.
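Those endurance ratings translate directly into total write capacity. As an illustrative back-of-envelope check (the function below is a common rule of thumb, not a Samsung formula), total terabytes written over the warranty period is simply DWPD multiplied by capacity and the number of days:

```python
def tbw(dwpd: float, capacity_tb: float, warranty_years: int = 5) -> float:
    """Terabytes written implied by a drive-writes-per-day (DWPD) rating."""
    return dwpd * capacity_tb * 365 * warranty_years

# SZ985, 800 GB at 30 DWPD over its five-year warranty
print(round(tbw(30, 0.8)))     # 43800 TB
# 983 ZET, 960 GB at 10 DWPD
print(round(tbw(10, 0.96)))    # 17520 TB
# 983 ZET, 480 GB at 8.5 DWPD
print(round(tbw(8.5, 0.48)))   # 7446 TB
```

The gap between roughly 43,800 TB for the SZ985 and 17,520 TB for the 983 ZET illustrates how much more write-intensive a workload the OEM-targeted drive is rated to absorb.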

Samsung data center SSD endurance

The rest of the new Samsung data center SSD lineup is rated at less than 1 DWPD. The entry-level 860 DCT SATA SSD supports 0.20 DWPD for five years or 0.34 DWPD for three years. The 883 DCT SATA SSD and 983 DCT NVMe-based PCIe SSD are officially rated at 0.78 DWPD for five years, with a three-year option of 1.30 DWPD.

Samsung initially targeted content delivery networks with its 860 DCT SATA SSD, which is designed for read-intensive workloads. Sequential read/write performance is 550 megabytes per second (MBps) and 520 MBps, and random read/write performance is 98,000 IOPS and 19,000 IOPS, respectively, according to Samsung. Capacity options range from 960 GB to 3.84 TB.

“One of the biggest challenges we face whenever we talk to customers is that folks are using client drives and putting those into data center applications. That’s been our biggest headache for a while, in that the drives were not designed for it. The idea of the 860 DCT came from meeting with various customers who were looking at a low-cost SSD solution in the data center,” Leonarz said.

He said the 860 DCT SSDs provide consistent performance for round-the-clock operation with potentially thousands of users pinging the drives, unlike client SSDs that are meant for lighter use. The cost per GB for the 860 DCT is about 25 cents, according to Leonarz.

The 883 DCT SATA SSD is a step up, at about 30 cents per GB, with additional features such as power loss protection. The performance metrics are identical to the 860 DCT, with the exception of its higher random writes of 28,000 IOPS. The 883 DCT is better suited to mixed read/write workloads for applications in cloud data centers, file and web servers and streaming media, according to Samsung. Capacity options range from 240 GB to 3.84 TB.

The 983 DCT NVMe-PCIe SSD is geared for I/O-intensive workloads requiring low latency, such as database management systems, online transaction processing, data analytics and high performance computing applications. The 2.5-inch 983 DCT in the U.2 form factor is hot swappable, unlike the M.2 option. Capacity options are 960 GB and 1.92 TB for both form factors. Pricing for the 983 DCT is about 34 cents per GB, according to Samsung.
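The quoted per-gigabyte prices can be turned into rough street prices for specific capacities. A quick sketch, using only the approximate cents-per-GB figures cited above:

```python
def approx_price(cents_per_gb: float, capacity_gb: int) -> float:
    """Rough drive price in dollars from a cents-per-GB figure."""
    return cents_per_gb * capacity_gb / 100

print(approx_price(25, 960))    # 240.0  -> 860 DCT, 960 GB
print(approx_price(30, 3840))   # 1152.0 -> 883 DCT, 3.84 TB
print(approx_price(34, 1920))   # 652.8  -> 983 DCT, 1.92 TB
```

These are estimates only; the per-GB numbers are "about" figures from Samsung, so actual channel pricing will vary.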

The 983 DCT’s sequential read performance is 3,000 MBps for each of the U.2 and M.2 983 DCT options. The sequential write performance is 1,900 MBps for the 1.92 TB U.2 SSD, 1,050 MBps for the 960 GB U.2 SSD, 1,400 MBps for the 1.92 TB M.2 SSD, and 1,100 MBps for the 960 GB M.2 SSD. Random read/write performance for the 1.92 TB U.2 SSD is 540,000 IOPS and 50,000 IOPS, respectively. The read/write latency is 85 microseconds and 80 microseconds, respectively.

The 860 DCT, 883 DCT and 983 DCT SSDs are available now through the channel, and the 983 ZET is due later this month.

Mitel targets enterprises with MiCloud Engage contact center

Mitel has released a contact-center-as-a-service platform that — unlike its other contact center offerings — is detached from its unified communications products. The over-the-top product should appeal to large organizations, which are more likely to buy their contact center and unified communications apps separately.

MiCloud Engage Contact Center, which runs in the multi-tenant public cloud of Amazon Web Services, supports voice, web chat, SMS and email channels, and integrates with Facebook Messenger and customer relationship management (CRM) software.

The MiCloud Engage platform plugs two gaps in the vendor’s cloud contact center portfolio. It scales to over 5,000 agents, significantly more than the 1,000-agent capacity of Mitel’s flagship cloud platform, MiCloud Flex.

Furthermore, Mitel has traditionally bundled its UC and contact center products, a combination that appeals to the vendor’s historical customer base of small and midsize businesses. MiCloud Engage, in contrast, is available as a stand-alone offering.

Mitel hopes the new platform will help it gain a foothold among enterprises, which are more often customers of Avaya, Cisco or Genesys. It could also appeal to individual divisions or lines of business within a large organization.

Mitel continues cloud pivot ahead of acquisition

The release of MiCloud Engage comes just months before the publicly traded company’s planned acquisition by the private equity firm Searchlight Capital Partners L.P. The $2 billion deal, announced in April, is expected to close by year’s end.

Going private should help Mitel grow its cloud business because it will be able to focus on long-term growth rather than quarterly earnings. Following a series of recent acquisitions, the company also benefits from a relatively large install base and a broad mix of cloud UC offerings.

Mitel’s 2017 acquisition of ShoreTel made it one of the top UC-as-a-service vendors worldwide, along with 8×8 and RingCentral. Still, only 6% of Mitel’s 70 million UC seats were in the cloud at the outset of 2018: 1.1 million in the public cloud and another 3 million hosted in Mitel’s data centers.

Ultimately, MiCloud Engage could serve as a conduit to more enterprises buying Mitel’s UC products, the core of its business. Gartner ranks Mitel among the top four UC vendors, alongside Microsoft, Cisco and Avaya.

“If you can’t win the UC business, then winning the contact center business and creating a backdoor that way is a good strategy,” said Zeus Kerravala, the founder and principal analyst at ZK Research in Westminster, Mass. “Getting your foot in the door is the important piece, and that’s what they’re trying to do with [MiCloud Engage].”

Gartner Catalyst 2018: A future without data centers?

SAN DIEGO — Can other organizations do what Netflix has done — run a business without a data center? That’s the question that was posed by Gartner Inc. research vice president Douglas Toombs at the Gartner Catalyst 2018 conference.

While most organizations won’t run 100% of their IT in the cloud, the reality is that many workloads can be moved, Toombs told the audience.

“Your future IT is actually going to be spread across a number of different execution venues, and at each one of these venues you’re trading off control and choice, but you get the benefits of not having to deal with the lower layers,” he said.

Figure out the why, how much and when

When deciding why they are moving to the cloud, the “CEO drive-by strategy” — where the CEO swings in and says, “We need to move a bunch of stuff to the cloud; go make it happen” — shouldn’t be the starting point, Toombs said.

“In terms of setting out your overall organizational priorities, what we want to do is get away from having just that as the basis and we want to try to think of … the real reasons why,” Toombs said.

Increasing business agility and accessing new technologies should be some of the top reasons why businesses would want to move their applications to the cloud, Toombs said. Once they have a sense of “why,” the next thing is figuring out “how much” of their applications will make the move. For most mainstream enterprises, the sweet spot seems to be somewhere between 40% and 80% of their overall applications, he said.

Businesses then need to figure out the timeframe to make this happen. Those trying to move 50% or 60% of their apps usually give themselves about three years to try and accomplish that goal, he said. If they’re more aggressive — with a target of 80% — they will need a five-year horizon, he said.

Whatever metric you pick, you want to track this very publicly over time within your organization.
Douglas Toombs, research vice president, Gartner

“We need to get everyone in the organization with a really important job title — could be the top-level titles like CIO, CFO, COO — also in agreement and nodding along with us, and what we suggest for this is actually codifying this into a cloud strategy document,” Toombs told the audience at Gartner Catalyst 2018.

Dissecting application risk

Once organizations have outlined their general strategy, Toombs suggested they incorporate the CIA triad of confidentiality, integrity and availability for risk analysis purposes.

These three core pillars are essential to consider when moving an app to the cloud so the organization can determine potential risk factors.

“You can take these principles and start to think of them in terms of impact levels for an application,” he said. “As we look at an app and consider a potential new execution venue for it, how do we feel about the risk for confidentiality, integrity and availability — is this kind of low, or no risk, or is it really severe?”
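One lightweight way to operationalize this is to score each application on the three pillars and let the highest-rated pillar drive the placement decision. The sketch below is illustrative only; the level names, the example app and the "worst pillar wins" rule are assumptions for demonstration, not Gartner's formal methodology:

```python
# Illustrative CIA-triad impact scoring; levels and the example app are hypothetical.
LEVELS = {"low": 1, "moderate": 2, "severe": 3}

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """Overall impact is driven by the highest-rated of the three pillars."""
    return max((confidentiality, integrity, availability), key=LEVELS.__getitem__)

# A hypothetical payroll app: severe confidentiality risk dominates the rating.
print(overall_impact("severe", "moderate", "low"))  # severe
```

An app scoring "severe" on any pillar would then warrant extra scrutiny before leaving the data center, while an all-"low" app is an easy early migration candidate.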

Assessing probable execution venues

Organizations need to think very carefully about where their applications go if they exit their data centers, Toombs said. He suggested they assess their applications one by one, moving them to other execution venues when they’re capable and are not going to increase overall risk.

“We actually recommend starting with the app tier where you would have to give up the most control and look in the SaaS market,” he said. They can then look at PaaS, and if they have exhausted the PaaS options in the market, they can start to look at IaaS, he said.

However, if they have found an app that probably shouldn’t go to a cloud service but they still want to eliminate their data centers, organizations could talk to hosting providers — which are happy to sell them hardware on a three-year contract and charge monthly for it — or go to a colocation provider. Even if they have put 30% of their apps in a colocation environment, they are no longer running data center space, he said.

But if for some reason they have found an app that can’t be moved to any one of these execution venues, then they have absolutely justified and documented an app that now needs to stay on premises, he said. “It’s actually very freeing to have a no-go pile and say, ‘You know what, we just don’t think this can go or we just don’t think this is the right time for it, we will come back in three years and look at it again.'”

Kilowatts as a progress metric

While some organizations say they are going to move a certain percentage of their apps to the cloud, others measure in terms of number of racks or number of data centers or square feet of data center, he said.

Toombs suggested using kilowatts of data center processing power as a progress metric. “It is a really interesting metric because it abstracts away the complexities in the technology,” he said.

It also:

  • accounts for other overhead factors such as cooling;
  • easily shows progress with first migration;
  • should be auditable against a utility bill; and
  • works well with kilowatt-denominated colocation contracts.
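Tracking the metric is then simple arithmetic: compare the kilowatts drawn today against the starting baseline. A minimal sketch, with baseline and current figures invented for illustration:

```python
def migration_progress(baseline_kw: float, current_kw: float) -> float:
    """Percent of data center load retired, relative to the starting baseline."""
    return (baseline_kw - current_kw) / baseline_kw * 100

# Hypothetical: a 120 kW data center now drawing 90 kW after early migrations
print(migration_progress(120, 90))  # 25.0
```

Because the inputs come straight off the utility bill, the same two numbers work for the public progress tracking Toombs recommends.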

“But whatever metric you pick, you want to track this very publicly over time within your organization,” he reminded the audience at the Gartner Catalyst 2018 conference. “It is going to give you a bit of a morale boost to go through your 5%, 10%, 15%, and say ‘Hey, we’re getting down the road here.'”

Microsoft awards grant to Tribal Digital Village and Numbers4Health to expand internet access and solutions for rural and underserved communities in California

The grant will provide broadband access and telehealth solutions in Valley Center and Compton, California

REDMOND, Wash. — Aug. 1, 2018 — On Wednesday, Microsoft Corp. announced it selected Tribal Digital Village and Numbers4Health as winners of its third annual Airband Grant Fund to help bring broadband internet access to rural and underserved communities. As two of eight winners, Tribal Digital Village (TDVNet) will help bring broadband to tribal land in the rural community of Valley Center, California, and Numbers4Health will deploy a solution in partnership with internet service providers to help support telemedicine and improve healthcare outcomes in Compton, California. The Airband Grant Fund is part of the Microsoft Airband Initiative, which aims to help close the broadband access gap in rural America by 2022.

“Tribal Digital Village and Numbers4Health are working to ensure the citizens of Valley Center and Compton have the broadband access they need to connect and compete with their more urban neighbors and access critical telehealth solutions,” said Shelley McKinley, Microsoft’s head of Technology and Corporate Responsibility. “Their use of innovative technologies like TV white spaces will help address the broadband and healthcare gap in California.”

The Microsoft Airband Grant Fund seeks to spark innovation to overcome barriers to affordable internet access, through support of high-potential, early-stage startups creating innovative new technologies, services and business models. This year’s grantees receive cash investments, access to technology, mentoring and networking opportunities.

“It’s truly a benefit when a corporation such as Microsoft focuses on scaling the reach of new technologies, like TV white spaces, to solve for the hardest-to-reach tribal communities,” said Matthew Rantanen, director, TDVNet. “Microsoft’s investment in projects that are uniquely solving these connectivity issues on the ground, like TDVNet, is essential in stimulating creativity and permanently fixing the broadband access gap.”

“The best way to manage healthcare costs and improve health outcomes is to treat injury and illness as fast as possible,” said Peg Molloy, managing director, Numbers4Health. “Numbers4Health puts health information software and technology at schools where injured student athletes can be quickly assessed. Microsoft’s Airband Grant Fund is helping us make that happen.”

Broadband is the electricity of the 21st century. It is a necessity to start and grow a small business and take advantage of advances in agriculture, telemedicine and education. In the United States, more than 24 million Americans lack broadband access, including 19.4 million people living in rural areas.

More information about this year’s Microsoft Airband Grant Fund recipients follows below.

About Tribal Digital Village

Tribal Digital Village, a tribal-owned ISP based in Valley Center, California, has developed hybrid wireless networks to solve last mile connectivity challenges and enable tribal members to deliver community-based networks.

About Numbers4Health

Numbers4Health is a Colorado-based startup that provides a collection of tools to encourage increased use of telehealth solutions to drive positive change and better healthcare outcomes. The system operates across Windows, Android, and iOS environments.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, +1 (425) 638-7777, rrt@we-worldwide.com

Numbers4Health, Peg Molloy, managing director, memolloy@vistapartners.com

Tribal Digital Village, Matthew R. Rantanen, director, mrr@sctdv.net

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

8×8 X Series combines UC and contact center

Unified-communications-as-a-service vendor 8×8 pushed further into the cloud contact center market this week with the release of X Series, an offering that combines voice, video, collaboration and contact center functions in a single platform.

Combining UC and contact center makes it easier for agents to get in touch with the right people when handling customer queries, said Meghan Keough, vice president of product marketing at 8×8, based in San Jose, Calif. For example, a company could set up shared rooms within a team collaboration app where agents and knowledge workers can chat or video conference.

The 8×8 X Series will also help companies better track customer contacts, because the same back-end infrastructure will handle calls to a local retail store and the customer service line at headquarters, Keough said. 

8×8 highlighted the platform’s ability to federate chats between leading team collaboration apps, such as Slack, Microsoft Teams and Cisco Webex Teams, allowing users of those cloud services to communicate with each other from their respective interfaces.

Technology acquired in 8×8’s 2017 acquisition of Sameroom is powering that federation and is available as a stand-alone product. The vendor also released its collaboration platform, 8×8 Team Messaging, in beta this week, with features such as persistent chat rooms, presence and file sharing.

The vendor is offering several subscription tiers for the 8×8 X Series. The more expensive plans include calling capabilities in 47 countries, as well as AI features, such as speech analytics.

Cloud fuels convergence of UC, contact center in 8×8 X Series

UC and contact center technologies used to live in “parallel universes,” said Jon Arnold, principal analyst of Toronto-based research and analysis firm J Arnold & Associates. But the cloud delivery model has made it easier to combine the platforms, which lets customers use the same over-the-top service for geographically separate office locations.

Many UCaaS vendors have added contact centers to their cloud platforms in recent years. While some, including 8×8, developed or acquired contact center suites, others — such as RingCentral and Fuze — partner with contact-center-as-a-service specialists, like Five9 and Nice InContact.

Legacy vendors are also taking steps to enhance their cloud contact center offerings. Cisco is planning to use the CC-One cloud platform it recently acquired from BroadSoft to target the midmarket, for example. Avaya, meanwhile, bought contact-center-as-a-service provider Spoken Communications earlier this year to fill a gap in its portfolio.

For many businesses, a cloud subscription to the 8×8 X Series will be cheaper than purchasing UC and contact center platforms separately, analysts said. Also, 8×8’s multi-tiered pricing model should appeal to organizations that are looking to transition to the cloud gradually.

8×8 is not the only vendor capable of offering integrated UC and contact center services, Arnold said. But the vendor has done a good job of marketing and packaging its products to make it easy for buyers and channel partners, he said.

“It’s all part of one large integrated family of services, and you can cherry-pick along the way what level is best for you,” Arnold said of the 8×8 X Series. “So, it kind of simplifies the roadmap [to the cloud] for companies.”

Experts skeptical an AWS switch is coming

Industry experts said AWS has no need to build and sell a white box data center switch as reported last week but could help customers by developing a dedicated appliance for connecting a private data center with the public cloud provider.

The Information reported last Friday that AWS was considering whether to design open switches for an AWS-centric hybrid cloud. The AWS switch would compete directly with Arista, Cisco and Juniper Networks and could be available within 18 months if AWS went through with the project. AWS has declined to comment.

Industry observers said this week the report could be half right. AWS customers could use hardware dedicated to establishing a network connection to the service provider, but that device is unlikely to be an AWS switch.

“A white box switch in and of itself doesn’t help move workloads to the cloud, and AWS, as you know, is in the cloud business,” said Brad Casemore, an analyst at IDC.

What AWS customers could use isn’t an AWS switch, but hardware designed to connect a private cloud to the infrastructure-as-a-service provider, experts said. Currently, AWS’ software-based Direct Connect service for the corporate data center is “a little kludgy today and could use a little bit of work,” said an industry executive who requested his name not be used because he works with AWS.

“It’s such a fragile and crappy part of the Amazon cloud experience,” he said. “The Direct Connect appliance is a badly needed part of their portfolio.”

AWS could also use a device that provides a dedicated connection to a company’s remote office or campus network, said John Fruehe, an independent analyst.  “It would speed up application [service] delivery greatly.”

Indeed, Microsoft recently introduced the Azure Virtual WAN service, which connects the Azure cloud with software-defined WAN systems that serve remote offices and campuses. The systems manage traffic through multiple network links, including broadband, MPLS and LTE.

Connectors to AWS, Google, Microsoft clouds

For the last couple of years, AWS and its rivals Google and Microsoft have been working with partners on technology to ease the difficulty of connecting to their respective services.

In October 2016, AWS and VMware launched an alliance to develop the VMware Cloud on AWS. The platform would essentially duplicate on AWS a private cloud built with VMware software. As a result, customers of the vendors could use a single set of tools to manage and move workloads between both environments.

A year later, Google announced it had partnered with Cisco to connect Kubernetes containers running on Google Cloud with Cisco’s hyper-converged infrastructure, called HyperFlex. Cisco would also provide management tools and security for the hybrid cloud system.

Microsoft, on the other hand, offers a hybrid cloud platform called the Azure Stack. The software runs on third-party hardware and shares its code, APIs and management portal with Microsoft’s Azure public cloud to create a common cloud-computing platform. Microsoft hardware partners for Azure Stack include Cisco, Dell EMC and Hewlett Packard Enterprise.

Notre Dame uses N2WS Cloud Protection Manager for backup

Coinciding with its decision to eventually close its data center and migrate most of its workloads to the public cloud, the University of Notre Dame’s IT team switched to cloud-native data protection.

Notre Dame, based in Indiana, began its push to move its business-critical applications and workloads to Amazon Web Services (AWS) in 2014. Soon after, the university chose N2WS Cloud Protection Manager to handle backup and recovery.

Now, 80% of the applications used daily by faculty members and students, as well as the data associated with those services, live in the cloud. The university protects more than 600 AWS instances, and that number is growing fast.

In a recent webinar, Notre Dame systems engineer Aaron Wright talked about the journey of moving a whopping 828 applications to the cloud, and protecting those apps and their data.  

N2WS, which was acquired by Veeam earlier this year, is a provider of cloud-native, enterprise backup and disaster recovery for AWS. The backup tool is available through the AWS Marketplace.

Wright said Notre Dame’s main impetus for migrating to the cloud was to lower costs. Moving services to the cloud would reduce the need for hardware. Wright said the goal is to eventually close the university’s on-premises primary data center.

“We basically put our website from on premises to the AWS account and transferred the data, saw how it worked, what we could do. … As we started to see the capabilities and cost savings [of the cloud], we were wondering what we could do to put not just our ‘www’ services on the cloud,” he said.

Wright said Notre Dame plans to move 90% of its applications to the cloud by the end of 2018. “The data center is going down as we speak,” he said.

We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers.
Aaron Wright, systems engineer, Notre Dame

As a research organization that works on projects with U.S. government agencies, Notre Dame owns sensitive data. Wright saw the need for centralized backup software to protect that data but could not find many good commercial options for protecting cloud data. He found N2WS Cloud Protection Manager through the AWS Marketplace.

“We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers,” he said. By comparison, Wright said his team deployed Cloud Protection Manager in less than an hour.

Wright said N2WS Cloud Protection Manager has rescued Notre Dame’s data at least twice since the installation. One incident came after Linux machines failed to boot following the application of a patch, and engineers restored data from snapshots within five minutes. Wright said his team used the snapshots to find and detach a corrupted Amazon Elastic Block Store volume, and then manually created and attached a new volume.

In another incident, Wright said the granularity of the N2WS Cloud Protection Manager backup capabilities proved valuable.

“Back in April-May 2018, we had to do a single-file restore through Cloud Protection Manager. Normally, we would have to have taken the volume and recreated a 300-gig volume,” he said. Locating and restoring that single file so quickly allowed him to resolve the incident within five minutes.

Stolen digital certificates used in Plead malware spread

Stolen digital certificates at the center of a new malware campaign made the malicious software appear safe before it stole user passwords.

An espionage group used stolen digital certificates to sign Plead backdoor malware and a password stealer component used in attacks in East Asia, according to Anton Cherepanov, senior malware researcher at ESET. The password stealer targeted Google Chrome, Mozilla Firefox and Internet Explorer browsers, as well as Microsoft Outlook.

Cherepanov determined the certificates were likely stolen because the malware code was signed with the “exact same certificate … used to sign non-malicious D-Link software.”

“Recently, the JPCERT published a thorough analysis of the Plead backdoor, which, according to Trend Micro, is used by the cyberespionage group BlackTech,” Cherepanov wrote in a blog post. “Along with the Plead samples signed with the D-Link certificate, ESET researchers have also identified samples signed using a certificate belonging to a Taiwanese security company named Changing Information Technology Inc. Despite the fact that the Changing Information Technology Inc. certificate was revoked on July 4, 2017, the BlackTech group is still using it to sign their malicious tools.”

ESET researchers contacted D-Link about the stolen digital certificates, and D-Link revoked the compromised certificate on July 3.

Cherepanov said this case was different from recent issues with compromised SSL certificates because the stolen digital certificates were used to sign malicious files, and “unlike SSL certificates, the code signing certificates can’t be obtained for free.”

“Misusing digital certificates is one of the many ways cybercriminals try to mask their malicious intentions — as the stolen certificates let malware appear like legitimate applications, the malware has a greater chance of sneaking past security measures without raising suspicion,” Cherepanov wrote via email. “This technique also helps attackers to circumvent native/built-in protective measures of the OS based on the validity of these certificates. Also noteworthy, certificates from a Taiwan-based company were stolen and misused by Stuxnet.”

Kevin Bocek, vice president of security strategy and threat intelligence at Venafi, said “there’s no doubt we’re going to see a lot more of these attacks in the future,” where machine identities and stolen digital certificates are being abused by malicious actors.

“Code signing certificates are a method to ensure the identity of the code developer. Ideally, they verify that the software has been published by a trusted company. They also double-check the software to ensure that it hasn’t degraded, become corrupted, or been tampered with,” Bocek wrote via email. “Because of the power of these certificates, if they fall into the wrong hands they can be the ultimate ‘keys to the kingdom’. Any attacker or developer with malicious intent can obtain a private key for code signing if they really want to. What deters most of them is that they have to register with the [certificate authority] to obtain one, which makes it much easier to identify them if they distribute malicious code. This is why there is a thriving black market for stolen code-signing certificates.”
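The two properties Bocek describes — binding code to a publisher's private key and detecting any tampering with the signed bits — can be illustrated with a deliberately simplified sketch. The toy RSA signature below uses small fixed primes, no padding scheme, and no certificate chain, so it is educational only; real code signing relies on 2048-bit-plus keys, PKCS#1-style padding and a CA-issued certificate. But the core check is the same: the signature verifies only if the holder of the private key signed exactly these bytes.

```python
import hashlib

# Toy RSA code-signing sketch -- educational only. Real code signing uses
# far larger keys, a padding scheme, and a CA certificate chain.
p, q = 104729, 1299709             # small primes; real keys are far larger
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept secret by the signer

def digest(data: bytes) -> int:
    # SHA-256 hash of the code, reduced mod n to fit the toy modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # The publisher signs the hash of the code with the private key.
    return pow(digest(data), d, n)

def verify(data: bytes, sig: int) -> bool:
    # Anyone holding the public key (e, n) can check authorship and integrity.
    return pow(sig, e, n) == digest(data)

binary = b"legitimate vendor software v1.0"
sig = sign(binary)
print(verify(binary, sig))               # True: untampered, signed by the key holder
print(verify(b"trojanized build", sig))  # False: contents no longer match the signature
```

This is also why a stolen certificate is so damaging: an attacker who obtains the private key (`d` above) can produce signatures that verify perfectly, so signed malware is indistinguishable, cryptographically, from the vendor's own software until the certificate is revoked.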

Cisco to merge Viptela, DNA Center for campus networking

ORLANDO, Fla. — Cisco plans to merge its Viptela SD-WAN management software into DNA Center over the next 18 months, providing customers with a single view of their LAN, WAN and campus networks.

During interviews this week at the Cisco Live conference, company executives said the integration would take place after Cisco builds a cloud-based version of DNA Center for campus networking. Companies would then have the option of accessing DNA Center as a service from Cisco or a managed service provider. DNA Center is a centralized software console for managing campus networks built on top of Cisco’s Catalyst 9000 switches.

“At that point, it may make logical sense to bring the two solutions together,” said Scott Harrell, general manager of Cisco’s enterprise networking business.

Waiting for a cloud-based version of DNA Center makes sense, because Viptela’s management application, vManage, is an online service. In a separate interview, Kiran Ghodgaonkar, senior marketing manager for Cisco’s enterprise products, said integrating vManage into DNA Center would occur over the next 12 to 18 months.

Merging the two products will tie the Viptela SD-WAN into other technologies wrapped into DNA Center, such as SD-Access, which lets engineers set access policies that follow employees wherever and however they want to enter the corporate network, Ghodgaonkar said. The SD-Access integration is essential, because Viptela routes traffic to and from business applications running on SaaS and IaaS platforms.

One view of LAN, WAN and campus networking

Overall, merging Viptela technology into DNA Center would simplify network management by treating the LAN, WAN and campus networking as a “single entity,” Ghodgaonkar said. Cisco wants to make SD-WAN management part of a single workflow within DNA Center.

Until then, development of Viptela’s SD-WAN and vManage products would continue “full-bore,” Harrell said. Slowing down the current pace of upgrades would risk falling behind rivals adding security, analytics, load balancing and other features to their software.

“Right now, we want to be able to iterate and make innovations as fast as possible,” Harrell said.

Enhancements planned for Viptela include making the ISR 4000 Series branch routers manageable through vManage, Harrell said. “That’ll be this summer.”

To make that happen, Viptela would run as a software image on ISR, Ghodgaonkar said. Cisco plans to release the image as a software upgrade for the router starting in July.

Cisco customers currently use the ISR to run Cisco’s legacy SD-WAN product, Intelligent WAN (IWAN). IWAN’s complexity prevented it from becoming a successful product, so many analysts have predicted Cisco will slowly migrate customers to Viptela.

Since acquiring Viptela a year ago, Cisco has increased sales of the company’s SD-WAN product to more than 800 customers globally, according to Ghodgaonkar. He declined to say how many customers Viptela had when Cisco bought the company.

The global market for SD-WAN, which includes revenue from vendors and managed service providers, will grow by nearly 70% annually through 2021, when it could reach $8 billion, according to IDC.