
VMware NSX-T updates include new firewalls, load balancing

VMware made significant changes to NSX-T 2.0, released in November, adding native support for microsegmentation and containers, and will follow up shortly with NSX-T 2.1, expected by February 2018.

VMware brings many different but extremely useful tools to the table in these NSX-T releases. From a large infrastructure point of view, they provide more features and flexibility to design the network the way administrators and security teams want it designed.

There are two versions of NSX: NSX for vSphere and NSX-T. NSX for vSphere is more widely deployed; NSX-T focuses on multi-hypervisor and cloud-native environments. In this article, we’ll look at what’s different between NSX-T 1.0 and 2.1.

NSX-T updates ease firewall management

The major news in VMware NSX-T’s latest versions is support for microsegmentation. Microsegmentation is a big deal because it provides a new security paradigm for the cloud and large-scale environments.

Historically, firewalls were mostly a north-south proposition — i.e., inbound and outbound traffic from the rest of the internet/network. With NSX, firewall rules can be applied to individual VMs, groups of VMs and many other scenarios that were once difficult or not possible.

These rules also follow the VM as it moves around. IT departments typically devote about 8% of their budgets to perimeter security, yet the interior of the infrastructure is usually the soft spot attackers go after. Think of the traditional model as a hard exterior shell around a soft interior; microsegmentation hardens that interior. It changes the game, and no other cloud vendor has anything like this.

Now that everything is automated, firewall rules can be implemented easily and managed centrally. Alongside this new firewall is distributed network encryption between VMs and containers — when all items are within the same virtual domain, of course. Again, this functionality helps stop things like network eavesdropping by undesirables.

Complexity is without a doubt the overriding issue: manually managing firewall rules for VMs in a massive environment becomes complex, if not infeasible. With NSX, east-west traffic and its associated policies can be managed far more easily. It used to be quite complex to implement firewalls at the VM level, but not anymore.
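To make that concrete, here is a minimal sketch of what driving the distributed firewall programmatically could look like against the NSX-T Manager REST API, using Python’s requests library. The manager hostname, credentials, endpoint paths and payload fields are illustrative assumptions modeled on the NSX-T 2.x management-plane API, not details confirmed by this article, so check them against the API guide for your release.

```python
# Hypothetical sketch: create a distributed firewall section and add a rule
# through the NSX-T Manager REST API. Endpoint paths and payload fields are
# assumptions modeled on the NSX-T 2.x API -- verify against your release.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # assumed manager address
AUTH = ("admin", "CHANGE_ME")                    # assumed credentials

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab only; use a proper CA bundle in production

# Create a stateful Layer 3 section to hold related east-west rules.
section = session.post(
    f"{NSX_MANAGER}/api/v1/firewall/sections",
    json={
        "display_name": "web-tier-microseg",
        "section_type": "LAYER3",
        "stateful": True,
    },
)
section.raise_for_status()
section_id = section.json()["id"]

# Add a minimal allow rule to the new section. A real rule would reference
# NSGroups or logical ports for its sources, destinations and services.
rule = {
    "display_name": "allow-app-to-web",
    "action": "ALLOW",
    "direction": "IN",
}
session.post(
    f"{NSX_MANAGER}/api/v1/firewall/sections/{section_id}/rules",
    json=rule,
).raise_for_status()
```

The point of the sketch is simply that rules live in one central place and are created and changed through an API, rather than box by box.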

VMware adds container support

Other big news in VMware NSX-T 2.x is native support for containers. This was a critical addition, given how thoroughly Docker-based infrastructure now dominates containerization.

Along with VMware doubling down on BOSH/Pivotal as an orchestration platform, version 2.1 supports both Pivotal Cloud Foundry and Pivotal Container Service.

Extend on premises to the cloud

These developments feed into NSX Cloud, one of the VMware Cloud Services the company rolled out at VMworld in August 2017. NSX Cloud provides consistent networking and security for applications running in multiple private and public clouds via a single management console and common API. This is interesting, as it is a service no one else offers. It allows the NSX domain to be extended beyond the borders of the local infrastructure into major cloud providers; in other words, it extends on premises into the cloud. AWS is already supported, Azure support is on the roadmap, and the service brings such functionality as discovery.

Added content packs ease troubleshooting

Alongside this is the inclusion of Log Insight. Log Insight, as the name suggests, collects or logs key information from the NSX environment. “Great,” you might say. “So what?” Content packs are the answer. Content packs are add-ins that can be included in Log Insight and they help drill down and troubleshoot problems within the NSX environment. Don’t forget that we are talking about your network here; it may be virtual, but it’s still critical.

New VMware NSX-T load balancing feature

Finally, one major thing that came in 2.1 was NSX load balancing. Over time, it’s clear that many other features will be added to help NSX reach or exceed feature parity with other software load balancers.

What makes it even better is that VMware is very much pushing an API-first environment; infrastructure as code is where it’s at. The API was heavily reworked for 2.0 and 2.1, making features easier to consume and access.
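As a small taste of that API-first approach, the sketch below simply reads inventory back out of NSX-T Manager. The /api/v1/logical-switches path and the shape of the response are assumptions based on the NSX-T 2.x REST API, so treat this as a starting point rather than a verified recipe.

```python
# Hypothetical sketch: enumerate logical switches from NSX-T Manager.
# The endpoint path and the "results" response field are assumptions
# based on the NSX-T 2.x REST API; adjust for your environment.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # assumed manager address
AUTH = ("admin", "CHANGE_ME")                    # assumed credentials

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    auth=AUTH,
    verify=False,  # lab only; supply a CA bundle in production
)
resp.raise_for_status()

for switch in resp.json().get("results", []):
    print(switch.get("id"), switch.get("display_name"))
```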

Five questions to ask before purchasing NAC products

As network borders become increasingly difficult to define, and as pressure mounts on organizations to allow many different devices to connect to the corporate network, network access control is seeing a significant resurgence in deployment.

Often positioned as a security tool for the bring your own device (BYOD) and internet of things (IoT) era, network access control (NAC) is also increasingly becoming a very useful tool in network management, acting as a gatekeeper to the network. It has moved away from being a system that blocks all access unless a device is recognized, and is now more permissive, allowing for fine-grained control over what access is permitted based on policies defined by the organization. By supporting wired, wireless and remote connections, NAC can play a valuable role in securing all of them.
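As a generic illustration of that kind of policy-driven, fine-grained control (not any particular vendor’s engine), the sketch below maps what a NAC system knows about a connecting device to the network segment it should land in. The device classes, attributes and segment names are made-up assumptions.

```python
# Generic sketch of NAC-style policy evaluation: instead of a blanket
# allow/deny at the network edge, map what is known about a device to a
# network segment. Device classes, attributes and VLAN names are
# illustrative assumptions, not any vendor's actual policy model.
from dataclasses import dataclass

@dataclass
class Device:
    device_class: str        # e.g. "corporate-laptop", "byod-phone", "printer"
    agent_installed: bool    # is a NAC agent present on the device?
    antivirus_current: bool  # did the last posture check pass?

def assign_segment(device: Device) -> str:
    """Return the network segment a connecting device should be placed in."""
    if device.device_class in ("printer", "camera"):
        return "iot-vlan"            # agentless devices get a restricted segment
    if device.device_class == "byod-phone":
        return "guest-vlan"          # BYOD gets internet-only access
    if device.agent_installed and device.antivirus_current:
        return "corporate-vlan"      # healthy managed devices get full access
    return "remediation-vlan"        # everything else is quarantined

if __name__ == "__main__":
    print(assign_segment(Device("corporate-laptop", True, True)))   # corporate-vlan
    print(assign_segment(Device("corporate-laptop", True, False)))  # remediation-vlan
    print(assign_segment(Device("camera", False, False)))           # iot-vlan
```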

Once an organization has determined that NAC will be useful to its security profile, it’s time for it to consider the different purchasing criteria for choosing the right NAC product for its environment. NAC vendors provide a dizzying array of information, and it can be difficult to differentiate between their products.

When you’re ready to buy NAC products and begin researching your options — and especially when speaking to vendors to determine the best choice for your organization — consider the questions and features outlined in this article.

NAC device coverage: Agent or agentless?

NAC products should support all devices that may connect to an organization’s network. This includes many different configurations of PCs, Macs, Linux devices, smartphones, tablets and IoT-enabled devices. This is especially true in a BYOD environment.

NAC agents are small pieces of software installed on a device that provide detailed information about it — such as its hardware configuration, installed software, running services, antivirus versions and connected peripherals. Some can even monitor keystrokes and internet history, though that presents privacy concerns. NAC agents can either run a one-off scan — a so-called dissolvable agent — or scan periodically via a persistently installed agent.

If the NAC product uses agents, it’s important that they support the widest variety of devices possible, and that other devices can use agentless NAC if required. In many cases, devices will require the NAC product to support agentless implementation to detect BYOD and IoT-enabled devices and devices that can’t support NAC agents, such as printers and closed-circuit television equipment. Agentless NAC allows a device to be scanned by the network access controller and be given the correct designation based on the class of the device. This is achieved with aggressive port scans and operating system version detection.
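To make the agentless approach concrete, here is a minimal sketch of that kind of fingerprinting, driving the widely used nmap scanner from Python. It assumes nmap is installed, that the script runs with the privileges OS detection requires and that you are authorized to scan the target; it illustrates the general technique rather than any NAC vendor’s implementation.

```python
# Minimal sketch of agentless device fingerprinting: run an nmap OS-detection
# (-O) and service-version (-sV) scan against a host and print the output.
# Assumes the nmap binary is installed, that you have permission to scan the
# target, and that the script runs with the privileges -O requires.
import subprocess
import sys

def fingerprint(host: str) -> str:
    """Return raw nmap output for an OS and service-version scan of host."""
    result = subprocess.run(
        ["nmap", "-O", "-sV", host],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Example target address (TEST-NET range); replace with a real host.
    target = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.10"
    print(fingerprint(target))
```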

Agentless NAC is a key component in a BYOD environment, and most organizations should look at this as a must-have when buying NAC products. Of course, gathering information via an agent provides richer detail on a device, but an agent-only approach isn’t viable on a modern network that needs to support many different devices.

Does the NAC product integrate with existing software and authentication?

This is a key consideration before you buy an NAC product, as it is important to ensure it supports the type of authentication that best integrates with your organization’s network. The best NAC products should offer a variety of choices: 802.1x (through the use of a RADIUS server), Active Directory, LDAP or Oracle. NAC will also need to integrate with the way an organization uses the network. If the staff uses a specific VPN product to connect remotely, for example, it is important to ensure the NAC system can integrate with it.

Supporting many different security systems that do not integrate with one another can cause significant overhead. A differentiator between the different NAC products is not only what type of products they integrate with, but also how many systems exist within each category.

Consider the following products that an organization may want to integrate with, and be sure that your chosen NAC product supports the products already in place:

1. Security information and event management

2. Vulnerability assessment

3. Advanced threat detection

4. Mobile device management

5. Next-generation firewalls

Does the NAC product aid in regulatory compliance?

NAC can help achieve compliance with many different regulations and standards, such as the Payment Card Industry Data Security Standard (PCI DSS), HIPAA, ISO 27002 and National Institute of Standards and Technology (NIST) guidance. Each of these stipulates certain controls regarding network access that should be implemented, especially around BYOD, IoT and rogue devices connecting to the network.

By continually monitoring network connections and performing actions based on the policies set by an organization, NAC can help with compliance with many of these regulations. These policies can, in many cases, be configured to match those of the compliance regulations mentioned above. So, when buying NAC products, be sure to have compliance in mind and to select a vendor that can aid in this process — be it through specific knowledge in its support team or through predefined policies that can be tweaked to provide the compliance required for your individual business.

What is the true cost of buying an NAC product?

The price of NAC products can be the most significant consideration, depending on the budget you have available for procurement. Most NAC products are charged per endpoint (device) connected to the network. On a large network, this can quickly become a substantial cost. There are often also hidden costs with NAC products that must be considered when assessing your purchase criteria.

Consider the following costs before you buy an NAC product:

1. Add-on modules. Does the basic price give organizations all the information and control they need? NAC products often have hidden costs, in that the basic package does not provide all the functionality required. The additional cost of add-on modules can run into tens of thousands of dollars on a large network. Be sure to look at what the basic NAC package includes and investigate how the organization will be using the NAC system. Specific integrations may be an additional cost. Is there extra functionality that will be required in the NAC product to provide all the benefits required?

2. Upfront costs. Are there any installation charges or initial training that will be required? Be sure to factor these into the calculation, on top of the price per endpoint — of course.

3. Support costs. What level of support does the organization require? Does it need one-off or regular training, or does it require 24/7 technical support? This can add significantly to the cost of NAC products.

4. Staff time. While not a direct cost of buying NAC products, consider how much monitoring an NAC system requires. Time will need to be set aside not only to learn the NAC system, but to manage it on an ongoing basis and respond to alerts. Even the best NAC systems require trained staff, so that when problems occur, people are available to address them.
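To show how those line items add up, the sketch below models a rough first-year cost for a per-endpoint-licensed NAC deployment. Every number is a placeholder assumption for illustration, not vendor pricing; substitute quotes from your shortlisted vendors.

```python
# Back-of-the-envelope first-year NAC cost model. Every figure below is a
# placeholder assumption for illustration only -- plug in real vendor quotes.

def nac_first_year_cost(
    endpoints: int,
    price_per_endpoint: float = 12.0,   # assumed annual license per device
    addon_modules: float = 15_000.0,    # assumed add-on module cost
    upfront: float = 8_000.0,           # assumed installation/training fee
    support_pct: float = 0.20,          # assumed support as a share of license cost
    admin_hours_per_week: float = 6.0,  # assumed ongoing staff time
    hourly_rate: float = 55.0,          # assumed loaded staff cost per hour
) -> float:
    license_cost = endpoints * price_per_endpoint
    support_cost = license_cost * support_pct
    staff_cost = admin_hours_per_week * hourly_rate * 52
    return license_cost + addon_modules + upfront + support_cost + staff_cost

if __name__ == "__main__":
    # Example: a 5,000-endpoint network under the assumptions above.
    print(f"Estimated first-year cost: ${nac_first_year_cost(5000):,.0f}")
```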

NAC product support: What’s included?

Support from the NAC manufacturer is an important consideration from the perspective of the success of the rollout and assessing the cost. Some of the questions that should be asked are:

  1. What does the basic support package include?
  2. What is the cost of extended support?
  3. Is support available at all times?
  4. Does the vendor have a significant presence in the organization’s region? For example, some NAC providers are primarily U.S.-based and may not be able to provide the same level of support to an organization based in EMEA.
  5. Is on-site training available and included in the license?

Support costs can significantly drive up the cost of deployment and should be assessed early in the procurement process.

What to know before you buy an NAC system

When it comes to purchasing criteria for network access control products, it is important not only that an NAC system can detect all the devices connected to an organization’s network, but also that it integrates as seamlessly as possible with what is already in place. The cost of attempting to shoehorn existing processes and systems into an NAC product that does not offer integration can quickly skyrocket, even if the initial cost is on the cheaper side.

NAC should also work for the business, not against it. In the days when NAC products only supported 802.1x authentication and blocked everything by default, it was seen as an annoyance that stopped legitimate network authentication requests. But, nowadays, a good NAC system provides seamless connections for employees, third parties and contractors alike — and to the correct area of the network to which they have access. It should also aid in regulatory compliance, an issue all organizations need to deal with now.

Assessing NAC products comes down to the key questions highlighted above. They are designed to help organizations determine what type of NAC product is right for them, and accordingly aid them in narrowing their choices down to the vendor that provides the product that most closely matches those criteria.

Once seldom used by organizations, endpoint protection is now a key part of IT security, and NAC products have a significant part to play in that. From a hacker’s perspective, well-implemented and managed NAC products can mean the difference between a full network attack and total attack failure.

Mirai creators and operators plead guilty to federal charges

The three men accused of creating and operating the Mirai botnet have pleaded guilty to federal charges.

The Department of Justice announced Wednesday it had unsealed the guilty pleas of Paras Jha, age 21, of Fanwood, N.J.; Josiah White, 20, of Washington, Pa.; and Dalton Norman, 21, of Metairie, La., on charges of “conspiracy to violate the Computer Fraud and Abuse Act in operating the Mirai botnet.”

According to the DoJ, the three Mirai creators built the botnet during the summer and fall of 2016 before unleashing the first wave of Mirai attacks; at its peak, the botnet was generating DDoS attacks from hundreds of thousands of vulnerable IoT devices.

“The defendants used the botnet to conduct a number of powerful distributed denial-of-service, or ‘DDoS’ attacks, which occur when multiple computers, acting in unison, flood the Internet connection of a targeted computer or computers,” the DoJ wrote in a statement. “The defendants’ involvement with the original Mirai variant ended in the fall of 2016, when Jha posted the source code for Mirai on a criminal forum. Since then, other criminal actors have used Mirai variants in a variety of other attacks.”

Jha and Norman were separately charged with, and pleaded guilty to, infecting more than 100,000 devices with “malicious software” between Dec. 2016 and Feb. 2017, though the charges did not specifically attribute these attacks to Mirai. The DoJ announcement accused the Mirai creators of building a botnet “used primarily in advertising fraud, including ‘click fraud’ … for the purpose of artificially generating revenue,” and it is unclear whether this botnet was separate from Mirai.

“Our world has become increasingly digital, and increasingly complex,” U.S. Attorney Bryan D. Schroder said in the DoJ statement. “Cybercriminals are not concerned with borders between states or nations, but should be on notice that they will be held accountable in Alaska when they victimize Alaskans in order to perpetrate criminal schemes. The U.S. Attorney’s Office, along with our partners at the FBI and Department of Justice’s Computer Crime and Intellectual Property Section, are committed to finding these criminals, interrupting their networks, and holding them accountable.”

Jha alone also pleaded guilty to a series of attacks against the Rutgers University network — where Jha was a student — between Nov. 2014 and Sept. 2016.

Mirai creator attribution

Early reports following the Mirai botnet attacks, including the Dyn DDoS incident, attempted to attribute the attack to nation-state actors and foreign adversaries. However, in January 2017 Brian Krebs, cybersecurity journalist and investigator, identified Jha and White as likely being the Mirai creators. It is unclear how his investigation played a part in the DoJ charges. Krebs was one of the first known victims of the Mirai DDoS attacks.

Lesley Carhart, security incident response team lead at Motorola Solutions, said on Twitter that this case against the Mirai creators should be a moment to realize “attribution is complex.”

Kubernetes roadmap looks to smooth container management bumps

AUSTIN, Texas — “This job is too hard.”

It wasn’t a message the DevOps faithful at KubeCon 2017 last week might have expected from a Microsoft distinguished engineer and Kubernetes co-creator.

Brendan Burns, Microsoft Azure’s director of engineering, introduced a personal project called Metaparticle at the annual gathering of Kubernetes users and contributors. With Metaparticle, which translates complex distributed systems concepts into snippets of Java and JavaScript code, Burns aims to make distributed systems a Computer Science 101-level exercise.

In that same vein, Kubernetes project leaders know the container management platform will only get rapid acceptance if it is accessible to more people. The Cloud Native Computing Foundation (CNCF) revealed features on the Kubernetes roadmap and introduced a Kubernetes mentoring program for administrators to make it easier to manage clusters across multiple clouds.

Third-party integrations, such as Pivotal Cloud Foundry 2.0, which is now available, will also improve on-premises Kubernetes management and, eventually, hybrid cloud management for enterprises, said Larry Carvalho, an analyst at IDC.

Traditional enterprise IT vendors run hands-on training programs — Pivotal Labs, Red Hat Open Innovation Labs, IBM Cloud Garage — to impart distributed systems skills to enterprise IT staff, Carvalho said. “[These programs] not only lead a horse to water, but force it down his throat,” he said.

“Startups are going gangbusters, but more than half of enterprises still don’t have a production workload in containers,” Carvalho said. “There’s an opportunity, but for them to start adopting it really requires a culture shift.”

Kubernetes users want secure multicluster management

Enterprises with some Kubernetes experience echoed Burns’ desire for simplicity, particularly to manage multiple container orchestration clusters, as all got their first look at the Kubernetes roadmap for 2018.

Production-ready, federated Kubernetes clusters topped the wish list for Rick Moss, infrastructure operations engineer for MailChannels, an email service provider in Vancouver, B.C.

“We want to be able to set up and tear down Kubernetes in different clouds, and federation is the only way to do that securely,” Moss said.

One can use multiple separate clusters for multi-cloud Kubernetes deployments, but rather than stand up and debug a new cluster, Moss said he wants the ability to just roll out part of the same system. However, Kubernetes federation last saw a major update in Kubernetes release 1.5 last year, and it’s been difficult to operate in real-world environments. Kubernetes is at release 1.9 at the time of publication.
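Until federation matures, many teams simply script across separate clusters. The sketch below assumes the official kubernetes Python client and a local kubeconfig containing one context per cloud; it illustrates that workaround in general, not anything MailChannels or the Kubernetes project ships.

```python
# Minimal sketch of the "many separate clusters" workaround: iterate over
# every context in the local kubeconfig and report node counts per cluster.
# Assumes the official `kubernetes` Python client (pip install kubernetes)
# and a kubeconfig with one context per cloud.
from kubernetes import client, config

def survey_clusters() -> None:
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        # Build an API client bound to this specific context/cluster.
        api = client.CoreV1Api(
            api_client=config.new_client_from_config(context=name)
        )
        nodes = api.list_node().items
        print(f"{name}: {len(nodes)} nodes")

if __name__ == "__main__":
    survey_clusters()
```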

Bloomberg LP engineers said they’re not interested in the nascent federated clusters, but will track their progress in 2018. In the meantime, engineers at the financial services company headquartered in New York must occasionally restart specific hosts in on-premises Kubernetes clusters, and they want instance addressability within Kubernetes to help with that. The ability to dynamically provision local persistent storage volumes would help move stateful apps closer to production on Kubernetes, said Steven Bower, search and data science infrastructure lead at Bloomberg.

Enterprise IT shops also look forward to the Kubernetes roadmap’s security features disclosed by Kubernetes project managers at KubeCon. Pluggable ID, for example, will allow Kubernetes identity management and role-based access control to plug into existing identity management systems, such as the Lightweight Directory Access Protocol (LDAP).

“It’s nice they have identity management support for Amazon [Web Services] and Google Cloud [Platform], but on-premises LDAP is where they need to focus,” Bower said.

A special-interest group within the CNCF will integrate with SPIFFE, which stands for Secure Production Identity Framework for Everyone, an open source project that defines a set of standards to identify and secure communications between web-based services. It’s still too early to tell if it will succeed, Bower said.

Microsoft’s Brendan Burns presents the Metaparticle distributed systems management project at KubeCon 2017.

Cluster API project aspires to be ‘the great equalizer’

KubeCon attendees also saw Cluster API, a plan by the SIG-Cluster-Lifecycle group to create a set of standards to install Kubernetes clusters in multiple infrastructures.

“It’s a declarative way of deploying and upgrading clusters that abstracts the infrastructure behind Kubernetes,” said Aparna Sinha, project management lead for Kubernetes at Google. “It’s not easy to do hybrid [cloud deployments] today, but Cluster API will be the great equalizer for deploying Kubernetes on different systems.”

Also in the works is a declarative application management project that builds on the open source ksonnet configuration tools to define applications on Kubernetes in a nonrestrictive way, Sinha said. Though it’s still in its early stages, there is a working group.

Another trend expected in 2018 is increased attention to serverless technologies and how they compete with and integrate with containers. Several open source function-as-a-service projects are currently underway, but the CNCF has yet to align itself with any of them. CNCF officials think the community should remain neutral, but KubeCon observers said they expect one will naturally emerge and eventually earn support from the CNCF next year.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Juniper Contrail battles Cisco ACI, VMware NSX in the cloud

SAN FRANCISCO — Juniper Networks has extended its Contrail network virtualization platform to multicloud environments, competing with Cisco and VMware for the growing number of enterprises running applications across public and private clouds.

The Juniper Contrail Enterprise Multicloud, introduced this week at the company’s NXTWORK conference, is a single software console for orchestrating, managing and monitoring network services across applications running on cloud-computing environments. The new product, which won’t be available until early next year, would compete with the cloud versions of Cisco’s ACI and VMware’s NSX.

Also at the show, Juniper announced that it would contribute the codebase for OpenContrail, the open source version of the software-defined networking (SDN) overlay, to The Linux Foundation. The company said the foundation’s networking projects would help drive OpenContrail deeper into cloud ecosystems.

Contrail Enterprise Multicloud stems, in part, from the work Juniper has done over several years with telcos building private clouds, Juniper CEO Rami Rahim told analysts and reporters at the conference.

“It’s almost like a bad secret — how embedded we have been now with practically all — many — telcos around the world in helping them develop the telco cloud,” Rahim said. “We’ve learnt the hard way in some cases how this [cloud networking] needs to be done.”

Is Juniper’s technology enough to win?

Technologically, Juniper Contrail can compete with ACI and NSX, IDC analyst Brad Casemore said. “Juniper clearly has put considerable thought into the multicloud capabilities that Contrail needs to support, and, as you’d expect from Juniper, the features and functionality are strong.”

However, Juniper will need more than good technology when competing for customers. A lot more enterprises use Cisco and VMware products in data centers than Juniper gear. Also, Cisco has partnered with Google to build strong technological ties with the Google Cloud Platform, and VMware has a similar deal with Amazon.

“Cisco and VMware have marketed their multicloud offerings aggressively,” Casemore said. “As such, Juniper will have to raise and sustain the marketing profile of Contrail Enterprise Multicloud.”

Networking with Juniper Contrail Enterprise Multicloud

Contrail Enterprise Multicloud comprises networking, security and network management. Companies can buy the three pieces separately, but the new product lets engineers manage the trio through the software console that sits on top of the centralized Contrail controller.

For networking in a private cloud, the console relies on a virtual network overlay built on top of abstracted hardware switches, which can be from Juniper or a third party. The system also includes a virtual router that provides links to the physical underlay and Layer 4-7 network services, such as load balancers and firewalls. Through the console, engineers can create and distribute policies that tailor the network services and underlying switches to the needs of applications.

Contrail Enterprise Multicloud capabilities within public clouds, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, are different because the provider controls the infrastructure. Network operators use the console to program and control overlay services for workloads through the APIs made available by cloud providers. The Juniper software also uses native cloud APIs to collect analytics information. 

Other Juniper Contrail Enterprise Multicloud capabilities

Network managers can use the console to configure and control the gateway leading to the public cloud and to define and distribute policies for cloud-based virtual firewalls.

Also accessible through the console is Juniper’s AppFormix management software for cloud environments. AppFormix provides policy monitoring and application and software-based infrastructure analytics. Engineers can configure the product to handle routine networking tasks.

The cloud-related work of Juniper, Cisco and VMware is a recognition that the boundaries of the enterprise data center are being redrawn. “Data center networking vendors are having to redefine their value propositions in a multicloud world,” Casemore said.

Indeed, an increasing number of companies are reducing the amount of hardware and software running in private data centers by moving workloads to public clouds. Revenue from cloud services rose almost 29% year over year in the first half of 2017 to more than $63 billion, according to IDC.

The Bitcoin boom and its infosec effects

In this week’s Risk & Repeat podcast, SearchSecurity editors discuss the recent bitcoin boom and how the cryptocurrency’s rising value could affect the cybersecurity landscape.

The bitcoin boom that saw a dramatic rise in the cryptocurrency’s value in recent weeks could have big implications for information security.

In the last month, the price of a single bitcoin tripled, jumping from approximately $5,700 to more than $17,000. A number of factors, including interest in the opening of the first regulated bitcoin futures exchanges and a hard fork in the cryptocurrency, could be contributing to the bitcoin boom beyond a general increase in buying and selling volumes.

But the surge also comes at a time of rampant global ransomware attacks, many of which demand payment from victims in bitcoin. While some enterprises have disclosed ransomware attacks, experts generally believe that many more attacks are kept quiet.

Could cybercriminals and ransomware attacks be contributing to the bitcoin boom? What will the rising price of the cryptocurrency mean for the cybercrime economy? Will the high value of bitcoin lead to more cyberattacks on bitcoin owners and exchanges, like NiceHash, which recently lost approximately $80 million in bitcoin following a massive data breach?

SearchSecurity editors Rob Wright and Peter Loshin discuss those questions and more on the bitcoin boom in this episode of the Risk & Repeat podcast.

Free phone service could boost Dialpad’s UCaaS status

Unified-communications-as-a-service provider Dialpad has released a free version of its cloud business phone system for small organizations with up to five employees.

Subscribers to the service, Dialpad Free, receive one free office phone number, and up to five employees can be dialed by name or as extensions. The free phone service includes the most basic telephony features, except for E911.

The features of the Dialpad Free phone service include 100 outbound calling minutes per month, unlimited inbound calling minutes, 100 inbound and outbound text messages per month, call recording, voicemail and video calling between Dialpad users. The system also integrates with LinkedIn and Google G Suite.

“A lot of small tech startups are using Google’s G Suite for email, calendar and documents,” Nemertes Research analyst Irwin Lazar said. “Being able to use Dialpad for free, which tightly integrates into G Suite, should be attractive.”

While the Dialpad Free phone service won’t generate significant revenue for the provider, Lazar said the service could help boost Dialpad’s recognition in the competitive UCaaS market.

Organizations can download the Dialpad app onto a desktop, laptop, tablet or smartphone. There are free apps for Mac, Windows, iOS and Android. For a limited time, there will be no charge for transferring an existing phone line to the Dialpad Free service. However, there is a $3 fee for porting a number away from Dialpad Free.

Facebook partners push Workplace adoption

Talk Social to Me, a tech consulting firm, and ServiceRocket, a provider of Workplace by Facebook apps, have partnered to create an adoption program for Workplace by Facebook.

The partnership, called Elevate, will offer Workplace by Facebook support for enterprise customers with a regulated and deskless workforce. Elevate offers customers access to ServiceRocket’s Moderation and Insights apps, which provide data on how Workplace by Facebook is used in the enterprise, alongside Talk Social to Me’s consulting services.

The majority of the 30,000 organizations that have adopted Workplace by Facebook are in industries that employ deskless workers, such as healthcare, retail and manufacturing. These organizations tend to have complex working environments made up of hourly and part-time workers.

“We know that business value is best achieved when companies concerned with HIPAA, employee unions and large hourly populations can discover and respond immediately to business and social conversations,” Talk Social to Me CEO Carrie Basham Young said.

CPaaS gains ground for embedded video

Communications platform as a service (CPaaS) is becoming the deployment model of choice for embedded video, according to a report from video conferencing vendor Vidyo, based in Hackensack, N.J. The report surveyed 166 developers in 48 countries and found more than half of developers have implemented some form of video chat.

Developers looking to embed video into enterprise apps have four deployment models to consider:

  • a full internal development where the majority of the video technology is developed in-house;
  • commercially available software that is integrated as part of the deployment;
  • open source software that is used as is or customized by the developer; and
  • CPaaS, which embeds video via an API platform.

According to the report, early adopters of embedded video prefer the open source and CPaaS deployment models. CPaaS is growing in popularity: 78% of respondents plan to use it for embedded video, with nearly half planning to do so in the next 12 months.

The top considerations for deploying CPaaS include the support for various devices and operating systems, WebRTC support, high availability and the ability to sustain calls over unreliable networks.

December Patch Tuesday closes year on a relatively calm note

Administrators were greeted with a subdued December Patch Tuesday, a quiet end to a 2017 that had started off somewhat tumultuously.

Of the 32 unique Common Vulnerabilities and Exposures (CVEs) that Microsoft addressed, just three were directly related to Windows operating systems. While not rated critical, CVE-2017-11885, which affects Windows client and server operating systems, is where administrators should focus their attention.

The patch is for a Remote Procedure Call (RPC) vulnerability for machines with the Routing and Remote Access service (RRAS) enabled. RRAS is a Windows service that allows remote workers to use a virtual private network to access internal network resources, such as files and printers.

“Anyone who has RRAS enabled is going to want to deploy the patch and check other assets to make sure RRAS is not enabled on any devices that don’t use it actively to prevent the exploitation,” said Gill Langston, director of product management at Qualys Inc., based in Redwood City, Calif.
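For administrators following that advice, the sketch below is one way to audit a Windows host from Python using the built-in sc command. It assumes the Routing and Remote Access service is registered under the service name RemoteAccess, which is the usual name but worth verifying in your environment.

```python
# Minimal sketch: check whether the Routing and Remote Access service (RRAS)
# is present and running on the local Windows host. Assumes the service is
# registered as "RemoteAccess" (the usual name) and that the built-in `sc`
# command is on the PATH; run it on each host you want to audit.
import subprocess

def rras_state(service_name: str = "RemoteAccess") -> str:
    result = subprocess.run(
        ["sc", "query", service_name],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return "not installed or not accessible"
    if "RUNNING" in result.stdout:
        return "running"
    if "STOPPED" in result.stdout:
        return "stopped"
    return "in an unrecognized state"

if __name__ == "__main__":
    print(f"RRAS service is {rras_state()}")
```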

The attacker triggers the exploit by running a specially crafted application against a Windows machine with RRAS enabled.

“Once the bad actor is on the endpoint, they can then install applications and run code,” Langston said. “They establish a foothold in the network, then see where they can spread. The more machines you have under your control, the more ability you have to move laterally within the organization.”

In addition, desktop administrators should roll out updates promptly to apply 19 critical fixes that affect the Internet Explorer and Edge browsers, Langston said.

“The big focus should be on browsers because of the scripting engine updates Microsoft seems to release every month,” he said. “These are all remote-code execution type vulnerabilities, so they’re all critical. That’s obviously a concern because that’s what people are using for browsing.”

Fix released for Windows Malware Protection Engine flaw

On Dec. 6, Microsoft sent out an update to affected Windows systems for a Windows Malware Protection Engine vulnerability (CVE-2017-11937). This emergency repair closed a security hole in Microsoft’s antimalware application, affecting systems on Windows 7, 8.1 and 10, and Windows Server 2016. Microsoft added this correction to the December Patch Tuesday updates.

“The fix happened behind the scenes … but it was recommended [for] administrators using any version of the Malware Protection Engine that it’s set to automatically update definitions and verify that they’re on version 1.1.14405.2, which is not vulnerable to the issue,” Langston said.

OSes that lack the update are susceptible to a remote-code execution exploit if the Windows Malware Protection Engine scans a specially crafted file, which would give the attacker a range of access to the system. That includes the ability to view and delete data, and to create a new account with full user rights.

Other affected Microsoft products include Exchange Server 2013 and 2016, Microsoft Forefront Endpoint Protection, Microsoft Security Essentials, Windows Defender and Windows Intune Endpoint Protection.

“Microsoft uses the Forefront engine to scan incoming email on Exchange 2013 and Exchange 2016, so they were part of this issue,” Langston said.

Lessons learned from WannaCry

Microsoft in May surprised many in IT when the company released patches for unsupported Windows XP and Windows Server 2003 systems to stem the tide of WannaCry ransomware attacks. Microsoft had closed this exploit for supported Windows systems in March, but it took the unusual step of releasing updates for OSes that had reached end of life.

Many of the Windows malware threats from early 2017 spawned from exploits found in the Server Message Block (SMB) protocol, which is used to share files on the network. The fact that approximately 400,000 machines were bitten by the ransomware bug showed how difficult it is for IT to keep up with patching demands.

“WannaCry woke people back up to how critical it is to focus on your patch cycles,” Langston said.

More than three months after Microsoft first patched the SMB vulnerability that WannaCry exploited in March, the Petya ransomware — which used the same SMB exploit — was still compromising unpatched machines. Some administrators might be lulled into a false sense of security by the cumulative update servicing model and delay the patching process, Langston said.

“They may delay because the next rollup will cover the updates they missed, but then that’s more time those machines are unprotected,” he said.

For more information about the remaining security bulletins for December Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.

AI experts: Business alignment key to AI implementation

BOSTON — There should be zero separation between an enterprise’s business objectives and its AI implementation, according to those with experience implementing artificial intelligence in their organizations.

“You have to think about the business problems first before you drive the tools in that direction,” said John Daly, senior vice president of worldwide production services at Sony Pictures Entertainment. “Be crystal clear on where the business needs to go.”

Daly spoke at the AI World Conference & Expo 2017 in Boston. He said that an enterprise’s key performance indicators (KPIs) or higher-level market strategy shouldn’t change just because it starts using powerful new AI tools. Instead, market strategy and KPIs should remain constant. The AI tools should be aimed at improving existing processes, not creating new ones.

Daly co-developed an AI tool from Algomus, which automatically generates data reports and supports natural language queries. Daly’s team uses Algomus to track stock of DVDs and other merchandise at retailers, which helps them know when to send more or scale back deliveries. Daly said because improving retail sales is a core business objective, it makes sense to look for ways AI could improve that process, rather than starting with an AI implementation and then looking for business use cases.

That approach says a lot about the state of AI tools today. The term AI evokes a futuristic vision in which computers can answer any question or perform a limitless array of tasks. But enterprises that have implemented AI tools have seen more limited value. It’s not that the tools aren’t useful for some — it’s more that the hype has outpaced the reality. Improvements from an AI implementation are incremental, rather than transformative.

“If you’re not able to change the bottom line, we will not have the impact we hoped to achieve,” said Anju Gupta, director of digital partnerships and outreach at agriculture company Monsanto.

Get the business on your side

A big part of realizing the desired business impact, Gupta said, is making sure lines of business are involved in implementation and projection planning. “It’s critical to bring those business users to the forefront so that they’re solving business problems,” she said.

Gupta said Monsanto is currently working on about 50 deep learning projects, which include discovering new ways to make crops resistant to diseases, for example. The number of projects grew organically, rather than as part of a forced initiative. Business users at Monsanto who were engaged in early-stage projects that eventually panned out promoted their successes and evangelized the technology, increasing demand for it.

Getting the business involved “helps ignite throughout the company an interest in doing these things,” Gupta said.

Implementing AI tools is a process problem

Aligning AI tools with business objectives will often require developing new processes, said Rekha Joshi, a principal software engineer at financial software vendor Intuit. “If your organization has history, it has baggage,” she said.

AI has the potential to automate a lot of tasks, which some people may see as automating their jobs and, thus, view AI as a potential threat. But experts at AI World 2017 stressed that if enterprises take a smart approach to figuring out how an AI implementation can be used alongside workers while assuring workers that they won’t be replaced, they’ll get the best results.

Joshi recommended taking a platform approach that allows for experimentation while ensuring that any new tools developed can plug into any process throughout the enterprise. “It does require a kind of mind shift across the organization,” she said.

CloudBerry backups feature protection from ransomware

CloudBerry backups are hopping aboard the ransomware protection train, with the ability to detect encryption changes along the way.

The latest update to CloudBerry’s flagship product, CloudBerry Backup, protects a customer’s file-level backups when it discovers ransomware. The product prevents existing CloudBerry backups from being overwritten until an administrator confirms whether there is an issue.

Statistics show that ransomware attacks are still prevalent. Requested payment amounts to release encrypted files are also trending up.

“Customers are looking for any type of protection they can get,” said David Gugick, vice president of product management at CloudBerry Lab, which is based in New York City. “You don’t want ransomware to find your backup files.”

Some ransomware, though, is smart enough to encrypt backups. CloudBerry’s off-site cloud backup helps customers follow the 3-2-1 rule of backup, Gugick said. Organizations should have three copies of data on two different media, with one copy off site.

In addition, some ransomware is smart enough to exist on a user’s system without making its presence known right away.

When a customer enables ransomware protection in CloudBerry Backup 5.8, the vendor performs the initial backup and analyzes the byte structure of each file to determine whether any files are encrypted. During subsequent backups, CloudBerry compares the original byte structure to the current byte structure, which enables the identification of newly encrypted files.
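CloudBerry has not published the internals of that comparison, but a common way to spot newly encrypted files is to track each file’s byte-level entropy between backup runs, since encrypted data looks nearly random. The sketch below illustrates that general technique under those assumptions; it is not CloudBerry’s actual algorithm, and the thresholds are arbitrary.

```python
# Illustrative sketch of entropy-based encryption detection: encrypted data
# looks close to random, so a sharp rise in a file's entropy between backups
# is a useful ransomware signal. Generic technique only -- not CloudBerry's
# implementation -- and the thresholds below are arbitrary assumptions.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: ~0 for uniform content, ~8 for random data."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_newly_encrypted(baseline: float, current: float,
                          min_jump: float = 1.0, min_current: float = 7.0) -> bool:
    """Flag a file whose entropy jumped sharply and now looks near-random.

    Comparing against the file's own baseline avoids false positives on
    formats that are always high-entropy, such as ZIP or JPEG files.
    """
    return (current - baseline) >= min_jump and current >= min_current

if __name__ == "__main__":
    plain = shannon_entropy(b"quarterly report text " * 200)
    random_like = shannon_entropy(os.urandom(4096))
    print(f"plain text entropy:  {plain:.2f} bits/byte")
    print(f"random data entropy: {random_like:.2f} bits/byte")
    print("newly encrypted?", looks_newly_encrypted(plain, random_like))
```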

The customer’s backup plan continues, but CloudBerry prevents existing backups from deletion regardless of retention policies, according to the vendor. Customers can go back to a point in time before the attack and restore from protected CloudBerry backups.

Gugick cautioned that a ransomware protection strategy should be comprehensive and also include user education and security patches.

“Customers should not rely exclusively on backup and disaster recovery,” Gugick said. “This is just a piece of the protection puzzle.”

Waking up from the ransomware ‘nightmare’

Lori Hardtke, president of ByteWize Inc., which provides IT support for small businesses, said one of her clients got hit with a ransomware attack on a server earlier this year, before this new protection feature launched.

“It was the worst nightmare I ever went through,” Hardtke said.

However, the organization restored from CloudBerry backups and didn’t lose any data.

Hardtke recently downloaded the latest CloudBerry Backup software and engaged the ransomware feature, essentially by just checking a box. She welcomed the capability as “another layer of protection.”

Hardtke uses CloudBerry for file-level backups of Windows environments, primarily desktops. Her business, based in Scottsdale, Ariz., has roughly 50 clients across the United States. CloudBerry backs up 5.5 TB of data, mainly QuickBooks and standard documents, such as Word files and PDFs.

ByteWize uses Google Cloud Platform as the back end for its storage. CloudBerry does not provide storage; it only handles backup and disaster recovery, which keeps costs low compared with its competition, Gugick said. The majority of customers use Amazon Web Services, but CloudBerry supports more than 30 cloud storage providers, including Google, Microsoft Azure, Backblaze B2, Oracle and Wasabi.

ByteWize switched to CloudBerry in September 2015 after about five years with Jungle Disk backup. Hardtke said she was looking for more innovation and less cost, and she found both with CloudBerry backups. She said she appreciates the steady flow of upgrades with significant enhancements.  

One enhancement Hardtke likes is the ability to do image-based backups. She said it would be helpful to retrieve files out of an image, like she can with Veeam Software, which she also uses to protect data.

CloudBerry Backup informs the user when it detects possible ransomware.

What else is new?

The ransomware protection is currently only designed for file-level backup, but Gugick said CloudBerry is planning support for images in a future release.

Other new features in CloudBerry Backup 5.8, which became generally available two weeks ago, include protection for Microsoft Hyper-V 2016 and support for VMware changed block tracking.

CloudBerry has two main backup offerings that support Windows, macOS and Linux. CloudBerry Backup for small businesses and consumers starts at $29.99 for the desktop edition and $119.99 for the server edition, and it features perpetual licenses. CloudBerry Managed Backup for managed service providers and larger businesses offers subscription licensing and starts at $5 per month, per server or desktop for file-level backup and $6 per month, per server or desktop for image-based backup.

CloudBerry backups protect more than 210 PB of data, Gugick said. The vendor claims about 43,000 CloudBerry Backup customers and 4,500 active CloudBerry Managed Backup customers.