Critical Cisco ASA vulnerability patched against remote attacks

A new critical flaw in Cisco’s Adaptive Security Appliance software could allow dangerous remote attacks and requires a patch to mitigate.

The Cisco ASA vulnerability received the highest severity rating of 10.0 on CVSS and according to Cisco, it could “allow an unauthenticated, remote attacker to cause a reload of the affected system or to remotely execute code.”

“The vulnerability is due to an attempt to double free a region of memory when the webvpn feature is enabled on the Cisco ASA device. An attacker could exploit this vulnerability by sending multiple, crafted XML packets to a webvpn-configured interface on the affected system,” Cisco wrote in a security advisory. “An exploit could allow the attacker to execute arbitrary code and obtain full control of the system, or cause a reload of the affected device.”

Kevin Beaumont, a security architect based in the UK, said on Twitter the Cisco ASA vulnerability was disclosed early and called it “one of the bigger bugs.”

According to the official advisory, the Cisco ASA vulnerability has no mitigations, and the only way to secure affected devices is to apply the patch.

Potential damage

Craig Young, computer security researcher for Tripwire’s Vulnerability and Exposures Research Team, said the Cisco ASA vulnerability could be exploited by an attacker “to harvest credentials as well as to monitor and manipulate traffic which should be protected by the VPN.”  

“The danger is further compounded by the fact that attackers can easily locate public SSL VPN terminals through services like Shodan as well as by searching certificate transparency logs for security certificates containing the word VPN,” Young told SearchSecurity. “In general, an attacker must have some degree of knowledge or control over the remote memory layout. In practical terms, this means that attackers will need to study the vulnerability and develop reliable exploit methods specific for different firmware versions. Developing these exploits would not be within reach of the average hacker as it requires rather extensive knowledge about the ASA operating system and how it manages system memory.”
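
Young’s point about certificate transparency logs is straightforward to check. The following is a minimal sketch in Python, assuming the public crt.sh JSON interface and its current field names, that lists certificates whose names contain the word VPN; a defender can run the same lookup to audit an organization’s own exposed SSL VPN endpoints.

```python
# Sketch: query the crt.sh certificate transparency search service for
# certificates whose names contain "VPN" -- the kind of exposure Young
# describes. The JSON endpoint and field names reflect crt.sh's public
# interface at the time of writing and may change.
import requests

def find_vpn_certificates(keyword: str = "VPN", limit: int = 20) -> None:
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%{keyword}%", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    seen = set()
    for entry in resp.json():
        name = entry.get("common_name") or entry.get("name_value", "")
        if name and name not in seen:
            seen.add(name)
            print(name)
        if len(seen) >= limit:
            break

if __name__ == "__main__":
    find_vpn_certificates()
```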

Mounir Hahad, head of threat research at Juniper Networks, described a range of attacks that could leverage the Cisco ASA vulnerability.

“Typically, WebVPN is enabled on edge firewalls, which means this particular vulnerability is exploitable directly from the internet. It is fairly easy to exploit as it only requires crafting specific XML packets to a WebVPN configured device. An attacker could take full control of the firewall: they could change the running configuration of the device, allow inbound traffic that should be blocked and infiltrate the organization,” Hahad told SearchSecurity. “They could also simply launch a denial of service attack by restarting the device continuously, which will basically shutdown internet connectivity to an entire organization. For cloud services, the entire service could go offline.”

Want to make software developer hiring easier? Be flexible

Forget what you think you know about software developers — at least when it comes to hiring. They’re not financially motivated, they’re largely self-taught, and one in four of them learned how to code before they could drive a car.

And here’s one more surprise: Their potential employers aren’t looking for prospective developers’ degrees; rather, they’re looking at their latest GitHub project.

Those insights, from a just-released survey of over 39,000 development professionals by technical hiring platform HackerRank, offer a unique view of both sides of the often-fraught software developer hiring process. Coders remain in very short supply around the world, and it’s tempting to think salary and tech tools lure developers, while employers prioritize a top-notch college degree. Apparently, it’s not nearly that simple.

Survey respondents ranked compensation as only the third most important factor when choosing a new job, after work-life balance (56.5%) and professional growth and learning (55.1%). But only 27.4% said a company’s technology stack was vital — a finding so unexpected, according to HackerRank’s co-founder and CEO Vivek Ravisankar, that the company did a follow-up survey and found developers really want employers to support their work on side tech projects or coding-related hobbies. They were also clear about their work-life balance goals: 89.4% of developers want flexible working hours, and just over 80% want to work from home.

That flextime helps self-taught developers — 73.7% of survey respondents identified themselves as such — to continue their learning journey, which is vital for software developer hiring. According to the HackerRank survey, they’d rather go online to Stack Overflow (88.4%) or YouTube (63.8%) than learn from books. Nearly 40% want to learn Go, followed by Python, Scala, Kotlin and Ruby.

Keep learning to stay relevant

Most developers’ drive to learn is simply a built-in preference, but other factors may be at play. A survey by worldwide placement firm Harvey Nash showed close to 40% of developers feel they’re under pressure from automation, low code/no code tools and AI. The antidote to this, according to Alex Robbins, software development hiring recruiter at Harvey Nash, is learning.

“Skills learned five years ago are often no longer relevant today,” he said. “[Our survey] revealed that 95% of tech experts are spending time developing their skills, and four in 10 are actually paying for training out of their own pocket.”

That should pay off, Ravisankar said, because employers want to see a prospective employee’s experience and what they’ve done. When executives hire software developers, the vast majority (84.1%) look at a developer’s portfolio, which in most cases means GitHub. Just over 71% consider previous experience, but only 35.4% take education into account.

When looking at GitHub, employers evaluate problem-solving skills, rather than programming language fluency. Over 94% of companies of all sizes indicated problem-solving skills were their top priority in software developer hiring, while less than 60% emphasized programming languages or debugging.

The focus on problem-solving has been a long time coming, but it fits with the software development market today, said Robert Stroud, principal analyst at Forrester Research. “There’s a worldwide shortage of developers still, but we need the developers we have to focus on learning the business,” he said. Coders, too, must avoid being distracted by the shiny new toy of a hot language, he said.

Ultimately, it helps to know what motivates coders, but software developer hiring is still a battle, said Ernest Mueller, longtime developer and now director of engineering operations at AlienVault, headquartered in San Mateo, Calif.

“There’s more supply now than there’s been of developers, but demand is not going away,” he said. The work needs to be interesting, and employers must be prepared to pay them — particularly if they have experience. Amounts vary, but a software engineer’s median base pay was $85,651 in December 2017, an increase of 1.6% year over year, according to data from job and recruiting site Glassdoor.

Selecting network configuration software for automation

Ivan Pepelnjak, blogging at IP Space, explored which network configuration software is best for automation. Ansible, Chef and Puppet are commonly cited network configuration options, with Salt becoming increasingly commonplace and CFEngine used occasionally. According to Pepelnjak, most network engineers prefer Ansible. Chef and Puppet focus mainly on configuration and state management: they make changes only when necessary, and they manage dependencies, such as creating groups before creating the accounts within them.
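
That check-before-change behavior is what makes tools such as Chef and Puppet safe to run repeatedly. Below is a minimal, tool-agnostic sketch of the idea in Python; it is illustrative only, not Chef or Puppet code.

```python
# Minimal sketch of the idempotent "desired state" model: inspect current
# state, change only what differs, and create dependencies (the group)
# before the resources that need them (the accounts).
def ensure_group(state: dict, group: str) -> None:
    if group not in state["groups"]:
        state["groups"].add(group)
        print(f"created group {group}")
    # otherwise nothing to do: the state already matches

def ensure_account(state: dict, user: str, group: str) -> None:
    ensure_group(state, group)              # dependency handled first
    if state["accounts"].get(user) != group:
        state["accounts"][user] = group
        print(f"created account {user} in {group}")

state = {"groups": set(), "accounts": {}}
ensure_account(state, "jdoe", "netops")
ensure_account(state, "jdoe", "netops")     # second run makes no changes
```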

In Pepelnjak’s view, managing configuration and soft state services is a good goal but doesn’t go far enough. Among network configuration software, Ansible is unique, aiding in device provisioning, validating network topologies, upgrading software, helping with compliance and generating reports. Engineers can often get started more quickly with Ansible, learning the basics in a matter of hours. “Maybe it’s just our mentality, or maybe we have to do things a bit different because of the huge blast radius of our mistakes. In any case, Ansible (which is just a generic automation/orchestration framework) fits better to our way of doing things,” he said.

Read more of Pepelnjak’s thoughts on network configuration software.

New developments in endpoint detection and response

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., reflected on a 2016 project where he interviewed 30 enterprises about endpoint security strategies. At the time, Oltsik came up with a concept he termed a continuum of endpoint tools, with advanced threat detection at one end and endpoint detection and response (EDR) on the other end.

Based on the interviews, Oltsik and his colleague guessed that 75% to 80% of the market would steer toward advanced protection, while the remainder would pursue EDR. He also predicted that vendors would work to bridge the gap with combined offerings.

Now, in 2018, Oltsik said that the hypothesis has mostly played out. ESG research indicates that 87% of organizations are planning to buy comprehensive endpoint security suites and 28% of cybersecurity professionals identified EDR as the most attractive feature of the offerings. He projected that EDR will now undergo additional market segmentation. Traditional EDR, anchored by on-premises infrastructure, will continue as a niche market for high-security industries. A lighter, “trigger-based” version of EDR — one that collects data when a behavioral anomaly occurs — will appeal to purchasers in the midmarket, he said.

Managed EDR, with its own subsegments, may also appear, catering to companies that want full EDR capabilities but lack the personnel to oversee them. “Rather than default to a product, security managers really need to assess their needs, resources, and skills before making an EDR decision. There will be a lot of options to choose from, so CISOs must choose wisely,” Oltsik said.

Dig deeper into Oltsik’s predictions about EDR.

Streamlining with SD-WAN and network functions virtualization

Mike Fratto, an analyst at GlobalData in Sterling, Va., said he’s heard commentary about stand-alone SD-WAN disappearing, instead becoming just another feature on routers and firewalls. Although he said many vendors will eventually consolidate features like these into a single appliance, he does not see the end of single-function SD-WAN devices.

That’s because, first, enterprise IT teams like bespoke products, and many like the ability to swap out older stand-alone products for newer offerings as they become available.

Second, the shift to software-defined everything will let enterprises rely more on virtualized instances of SD-WAN. This will permit companies to consolidate network functions into fewer appliances.

Third is the fact that enterprise IT teams are often loath to replace tried and true systems with new options that may not be as capable.

“What enterprises want — what they would pay for but will likely never get — is an environment of deep management integration across multiple vendor products, which could ultimately reduce operational overhead, unlock more efficient workflows, and generate significant operational cost savings along the way,” Fratto said. “Here’s where managed service providers have a unique advantage, provided they dedicate the resources to creating a portal that integrates the management functions across vendor products,” he added.

Explore more of Fratto’s ideas on SD-WAN as a stand-alone product.

Onboarding software a weak link, according to HCI-Kronos survey

Onboarding software may help reduce turnover, but many firms are neglecting this technology, according to a new study. A bad onboarding experience may prompt a new employee to quit.

Most firms today have invested in recruiting management systems. They want to speed hiring and find the best candidates. IDC said it expects spending on applicant tracking systems to grow by double digits in 2018.

Like recruiting, onboarding software is a pillar of talent management systems. But it’s “neglected,” said Jenna Filipkowski, the head of research at the Human Capital Institute (HCI), based in Cincinnati. That’s a mistake, she argued.

HCI and workforce management software vendor Kronos Inc., in a survey of 350 firms, found 36% have “insufficient technology” to automate or organize the onboarding process. Overall, this research found 75% reported “that onboarding practices are underutilized.” In a tight labor market, this may be a mistake.

Bad onboarding experience may hurt retention

A new hire’s excitement about taking a job may be undercut by underused onboarding tech. Disorganized, incomplete, paper-based and inefficient onboarding can sour a new hire. It also hurts productivity if it takes the new employee longer to become proficient. The new employee may well believe they “were sold a bill of goods,” Filipkowski said.

“When they do have a more positive [onboarding] experience, studies have shown that they tend to want to stay longer,” Filipkowski said.

Other studies support this, according to management professors who have examined this issue.

“A good onboarding program can make a difference in whether people leave work on that first day wondering what they have gotten themselves into and whether they made a huge mistake,” said Howard Klein, a professor of management of human resources at Ohio State University and editor in chief of the Human Resource Management Review, a professional journal.

First impressions really do matter

“First impressions matter,” Klein said in an email. “If you ask people about their worst job, chances are you’ll hear about a horrible first day or week in which they were not made to feel welcome, appreciated or important,” he said.

New employees are impressionable, and an organization “does not want to miss that opportunity to instill values, vision and desired behaviors,” Klein said.

Onboarding software systems are intended to make onboarding more efficient. These platforms typically include online training, electronic paperwork processing, audio and video onboarding materials, automatic updates of employee records, and reminders and appointment scheduling.

Onboarding software use is inconsistent

The HCI and Kronos survey suggests adoption of onboarding software will increase. About 60% of the firms surveyed were using some type of onboarding technology, either web-based or developed in-house. Another 24% said they didn’t use it but planned to do so within the next three years, and the remaining 15% said they had no plans to adopt onboarding technology in that time frame.

Talya Bauer, a professor of management at Portland State University, said organizations have come a long way in terms of thinking of onboarding as a yearlong process and not just new employee orientation, “but there’s great variance in how much time and attention onboarding gets across organizations.”

Keeping new hires will be important if a just-released survey by staffing firm Accountemps, a Robert Half company, proves to be accurate. It found 29% of professionals intend to look for a new position in the next year. The highest percentage of workers considering leaving their present employers is in Los Angeles, at 40%, followed closely by Austin and Dallas, Texas.

Apstra bolsters IBN with customizable analytics

Startup Apstra has added to its intent-based networking software customizable analytics capable of spotting potential problems and reporting them to network managers.

Apstra this week introduced intent-based analytics as part of an upgrade to the company’s Apstra Operating System (AOS). The latest version, AOS 2.1, also includes other enhancements, such as support for additional network hardware and the ability to use a workload’s MAC or IP address to find it in an IP fabric.

In general, AOS is a network operating system designed to let managers automatically configure and troubleshoot switches. Apstra focuses on hardware transporting Layer 2 and Layer 3 traffic between devices from multiple vendors, including Arista Networks, Cisco, Dell and Juniper Networks. Apstra also supports white-box hardware running the Cumulus Networks OS.

AOS, which can run on a virtualized x86 server, communicates with the hardware through installed drivers or the hardware’s REST API. Data on the state of each device is continuously fed to the AOS data store. Alerts are sent to network operators when the state data conflicts with how a device is configured to operate.
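
A simplified sketch of that pattern follows: fetch each device’s reported state, compare it with the intended configuration and alert on any drift. The URL and field names are placeholders for illustration, not Apstra’s actual AOS API.

```python
# Simplified sketch of the state-versus-intent pattern described above.
# The endpoint and field names are placeholders, not Apstra's AOS API.
import requests

INTENDED_STATE = {
    "spine1": {"bgp_sessions_up": 4, "ports_up": 32},
    "leaf1":  {"bgp_sessions_up": 2, "ports_up": 48},
}

def fetch_state(device: str) -> dict:
    # Placeholder endpoint; a real integration would use the vendor's
    # installed driver or the hardware's REST API, as the article notes.
    resp = requests.get(
        f"https://controller.example.net/api/devices/{device}/state",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def check_drift() -> None:
    for device, intent in INTENDED_STATE.items():
        state = fetch_state(device)
        for key, expected in intent.items():
            actual = state.get(key)
            if actual != expected:
                print(f"ALERT {device}: {key} is {actual}, expected {expected}")

if __name__ == "__main__":
    check_drift()   # a real system would re-run this continuously
```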

AOS 2.1 takes the software’s capabilities up a notch through tools that operators can use to choose specific data they want the Apstra analytics engine to process.

“This is a logical progression for Apstra with AOS,” said Brad Casemore, an analyst at IDC. “Pervasive, real-time analytics should be an integral element of any intent-based networking system.”

Using Apstra analytics

The first step is for operators to define the type of data AOS will collect. For example, managers could ask for the CPU utilization on all spine switches. Also, they could request queries of all the counters for server-facing interfaces and of the routing tables for links connecting leaf and spine switches.

“If you were to add a new link, add a new server, or add a new spine, the data would be included automatically and dynamically,” Apstra CEO Mansour Karam said.

Once the data is defined, operators can choose the conditions under which the software will examine the information. Apstra provides preset scenarios or operators can create their own. “You can build this [data] pipeline in the way that you want, and then put in rules [to extract intelligence],” Karam said.

Useful information that operators can extract from the system includes the following; a rough sketch of the first item appears after the list:

  • traffic imbalances on connections between leaf and spine switches;
  • links reaching traffic capacity;
  • the distribution of north-south and east-west traffic; and
  • the available bandwidth between servers or switches.
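
As a hypothetical illustration of the first item on that list, a few lines of Python can flag an uplink whose byte counter deviates sharply from the mean of its peers. The counter values are invented for the example.

```python
# Hypothetical illustration: flag a leaf-to-spine uplink whose byte counter
# deviates sharply from the mean of its peers. Counter values are invented.
def check_link_imbalance(link_bytes: dict, threshold: float = 0.5) -> None:
    mean = sum(link_bytes.values()) / len(link_bytes)
    for link, total in link_bytes.items():
        deviation = abs(total - mean) / mean
        if deviation > threshold:
            print(f"imbalance on {link}: {deviation:.0%} away from the mean")

check_link_imbalance({
    "leaf1-spine1": 10.0e9,
    "leaf1-spine2": 10.0e9,
    "leaf1-spine3": 2.0e9,   # this uplink gets flagged
})
```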

Enterprises moving slowly with IBN deployments

Other vendors, such as Cisco, Forward Networks and Veriflow, are building out intent-based networking (IBN) systems to drive more extensive automation. Analytics plays a significant role in making automation possible.

“Nearly every enterprise that adopts advanced network analytics solutions is using it to enable network automation,” said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo. “You can’t really have extensive network automation without analytics. Otherwise, you have no way to verify that what you are automating conforms with your intent.”

Today, most IT staffs use command-line interfaces (CLIs) to manually program switches and scores of other devices that comprise a network’s infrastructure. IBN abstracts configuration requirements from the CLI and lets operators use declarative statements within a graphical user interface to tell the network what they want. The system then makes the necessary changes.

The use of IBN is just beginning in the enterprise. Gartner predicts the number of commercial deployments will be in the hundreds through mid-2018, increasing to more than 1,000 by the end of next year.

Jenkins pipeline as code unifies enterprise DevOps approach

One software maker got a whole lot more than rapid app delivery, thanks to its use of Jenkins pipeline as code.

Ellucian Company L.P., a Reston, Va., software company that specializes in ERP systems for colleges and universities, embraced the pipeline-as-code features introduced with Jenkins 2.0 in 2016. These features give IT architects a visual interface to create pipelines using Jenkins’ domain-specific language, which could then be kept and version-controlled alongside application code to ensure consistency between them.

Before Jenkins 2.0, the software company struggled to rapidly iterate disparate legacy tech stacks, from Microsoft and Oracle proprietary applications to cloud-native apps developed in-house on Node.js. These applications also undergo different stages of development: Some pass continuous integration tests and are deployed immediately, while others involve a more lengthy continuous delivery process based on Amazon Machine Images.

However, Jenkins pipeline-as-code features, which were added to CloudBees’ commercially supported offering Jenkins Enterprise, helped standardize Ellucian’s approach to continuous integration and continuous delivery (CI/CD).

“Traditionally, to build a pipeline, you’d have to build various stages and transfer parameters from one stage to another. But now, the stages become more standardized, and several of our groups are reusing others’ pipeline stages,” said Jason Shawn, senior director of DevOps and cloud for Ellucian. “People have also been able to make changes to their pipelines more quickly.”
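
The shift Shawn describes, from hand-wired stages to standardized, reusable ones, can be sketched conceptually. The example below is plain Python rather than Jenkins pipeline syntax, and the stage and context names are made up; it shows only the idea of stages as small, reusable units that pass a shared context.

```python
# Plain-Python sketch (not Jenkins syntax) of pipeline stages as reusable
# units that pass a shared context, instead of bespoke scripts hand-wiring
# parameters between steps. Names and values are invented for illustration.
def checkout(ctx):
    ctx["workspace"] = f"/tmp/{ctx['repo']}"
    return ctx

def build(ctx):
    ctx["artifact"] = f"{ctx['workspace']}/app.jar"
    return ctx

def test(ctx):
    ctx["tests_passed"] = True
    return ctx

def deploy(ctx):
    if ctx.get("tests_passed"):
        print(f"deploying {ctx['artifact']} to {ctx['environment']}")
    return ctx

def run_pipeline(stages, ctx):
    for stage in stages:          # each stage reads and updates the context
        ctx = stage(ctx)
    return ctx

# Different teams can reuse the same stages with their own context.
run_pipeline([checkout, build, test, deploy],
             {"repo": "erp-service", "environment": "staging"})
```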

Jenkins pipeline as code is available in the open source version of the CI/CD software, but Ellucian chose CloudBees for enterprise support, as it integrated a diverse array of applications.

“If this company was in a different spot in their journey and much more locked in to one or two tech stacks, CloudBees might not be part of the equation. But, realistically, we need help to integrate pretty much everything under the sun,” Shawn said. But because CloudBees’ offering is based on the open source platform, Ellucian can also draw on community support and apply lessons others have learned.

CloudBees’ Jenkins Enterprise platform will soon offer Kubernetes container orchestration support for Jenkins masters, which Ellucian plans to adopt. This will extend the notion of a large, centrally provisioned client master toward distributed masters that can be provisioned by engineering teams on demand. Shawn said he expects this change will improve the performance of Jenkins pipeline-as-code deployments, as well.

“Our vision is to take a product through its whole lifecycle [with] a master, and then it can scale using as many workers as it needs to,” he said.

Jenkins pipeline as code opens door to DevSecOps

After the upgrade to distributed masters, Ellucian plans to draw security practices into the DevOps process. The company will integrate open source security tools and standards, including OAuth for authorization and the OWASP Dependency-Check and Arachni scanners, and will evaluate the open source utility Zed Attack Proxy to find vulnerabilities in web applications. Jenkins pipeline as code will help link the results of these tools’ security scans into ThreadFix, so developers can evaluate the risks of vulnerabilities for each application.

“The whole [security] ecosystem is riddled with false positives. And, inevitably, you find you’re breaking a build for something that’s not necessary,” Shawn said. “So, how do you take that false positive and build it into your pipeline so you don’t hit it again and again? We’re exploring ways to build that security model with a more DevSecOps approach.”
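
One common way to keep a reviewed false positive from breaking the build again is to filter scanner findings against a version-controlled suppression list before deciding whether to fail the pipeline. The sketch below uses hypothetical finding fields; it is not the actual output format of Dependency-Check, Arachni or ThreadFix.

```python
# Sketch: filter scanner findings against a version-controlled suppression
# list before failing the build. Finding fields and suppression entries are
# hypothetical, not real Dependency-Check, Arachni or ThreadFix output.
findings = [
    {"rule_id": "CVE-2017-0001", "location": "lib/old.jar", "severity": "high"},
    {"rule_id": "XSS-REFLECTED", "location": "/login", "severity": "medium"},
]

# Reviewed findings confirmed as false positives, kept in source control
# alongside the pipeline definition so they never break the build again.
suppressions = {("XSS-REFLECTED", "/login")}

actionable = [f for f in findings
              if (f["rule_id"], f["location"]) not in suppressions]

print(f"{len(findings) - len(actionable)} finding(s) suppressed")
if any(f["severity"] == "high" for f in actionable):
    raise SystemExit("unsuppressed high-severity findings: failing the build")
```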

Jenkins is still among the most widely used CI/CD pipeline tools, but Jenkins alternatives abound. Shawn said he’s also looked at Amazon’s native CI/CD pipeline services. He said he hopes CloudBees will make the Jenkins EC2 plug-in part of the core Jenkins code in future versions of CloudBees Jenkins Enterprise, which would help the third-party product keep up with Amazon’s natively integrated offerings.

“I don’t know that Amazon has quite delivered the robustness and versatility that Jenkins offers, but I would never count them out,” Shawn said. “If I was in CloudBees’ shoes, I’d be looking at competitors like that and ensuring that my ability to accommodate enterprise needs is first and foremost.”

Shawn’s team also evaluated the Blue Ocean UI, but needs further user acceptance tests with everyone who uses Jenkins pipeline as code at Ellucian before they switch.

“We were in the early beta,” Shawn said. “It had a little bit of a jarring effect; it’s a much better-looking UI, but most of our users were used to the old and ugly UI.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Veeam acquisition of N2WS enhances cloud protection

Eight months after investing in N2WS, Veeam Software today said it acquired the cloud data protection company for $42.5 million.

The all-cash Veeam acquisition is the backup and recovery vendor’s first in 10 years. N2WS provides cloud-native, enterprise backup and disaster recovery for Amazon Web Services (AWS) through its Cloud Protection Manager.

N2WS was founded in 2012 and released its first product in 2013. It will operate as a stand-alone business, keeping its brand name and becoming “A Veeam Company” while selling Cloud Protection Manager.

Veeam disclosed its investment in N2WS in May 2017, and began selling N2WS technology through an OEM deal as part of Veeam Availability for AWS. Peter McKay, Veeam co-CEO and president, said the investment and partnership allowed his company to monitor the N2WS business, use its technology and accelerate Veeam’s AWS capabilities.

Cloud Protection Manager is built specifically for AWS and automates backup and recovery for Amazon Elastic Compute Cloud instances. It is available in the AWS Marketplace.

“Technology wise, it’s a good addition to the portfolio,” McKay said.

Cloud-native data protection is seeing high growth, said Phil Goodwin, research director of storage systems and software at IDC.

“This puts them square in the middle of that marketplace,” Goodwin said.

The challenge for a company focused specifically on AWS data protection is that organizations are going to have workloads in multiple clouds and on premises.

“I think they intend to address it,” Goodwin said of Veeam’s answer to the challenge.

The sky’s the limit

Both companies reported significant growth in the last year. N2WS grew revenue by 102% in 2017. Veeam hit $827 million in total bookings revenue in 2017, an increase of 36% year over year, and claims more than 282,000 customers.

Veeam was founded in 2006 as a virtual backup company, but has since added physical and cloud protection.

Investing in and then buying N2WS helped alleviate any concerns with the Veeam acquisition, McKay said. N2WS, which was privately held before the acquisition, now has more than 1,000 customers.

“I think we’ve de-risked it quite a bit,” McKay said.

Veeam’s investment helped N2WS build up its team last year. The company had seven employees at the beginning of 2017 and now has 42, N2WS CEO Jason Judge said.

Veeam funded N2WS through a round led by Insight Venture Partners, which is a large investor in Veeam. The companies did not disclose the amount invested in N2WS.

As part of the Veeam acquisition:

  • Veeam will have access to N2WS technology and research and development to integrate data protection for AWS workloads into the Veeam Availability Platform.
  • N2WS will have access to Veeam’s research and development and its alliances and partners, including nearly 55,000 resellers and 18,000 Veeam Cloud & Service Providers. 
  • Current Veeam customers will receive special offers and incentives for Cloud Protection Manager from N2WS.

“The R&D teams will be working together on bigger things,” said Ezra Charm, N2WS vice president of marketing.

There is some overlap in customers, but N2WS is focused solely on AWS protection, Charm said. He said the deal presents an opportunity for N2WS to sell to Veeam customers who don’t know the company and those who don’t yet use AWS.

“We have a lot more resources available to us as part of Veeam,” and can accelerate development, Charm said.

Lofty goals for Veeam and N2WS

The typical backup and recovery customer is evolving to think more cloud-first, McKay said.

The acquisition helps Veeam as its cloud business and enterprise base are both growing. Veeam reported a year-over-year increase of 57% in cloud bookings for the fourth quarter. The vendor attained a 62% year-over-year increase in large enterprise deals, and reached 500% annual growth for deals over $1 million, according to its latest revenue report.

McKay said in 2017 that he didn’t feel acquisitions were needed for Veeam revenue to hit $1.5 billion by 2020. But the Veeam acquisition only helps the revenue goals, which now include a push to get to $2.2 billion by 2022, McKay said.

Veeam last acquired a company in 2008, when it bought privately held Nworks, a creator of enterprise management connectors.

McKay said there was a time when Veeam was possibly growing too fast, and a lot of employees were stretched trying to do too much. In the last 18 months, though, the company has added employees from outside and developed teams internally. The company has 3,100 employees and will likely add 700 in the next year, including 230 in research and development, McKay said.

N2WS plans to add employees as well, across its three offices: its headquarters in West Palm Beach, Fla.; its research and development center in Haifa, Israel; and its new office in Edinburgh, Scotland.

The Veeam acquisition closed at the end of 2017. Judge will continue to lead N2WS as its CEO and all teams including sales, marketing, research and development, and customer service will stay intact, according to Veeam.

N2WS and Veeam are well-positioned to take advantage of the growing infrastructure-as-a-service market, Charm said.

“It’s really the new hotness in IT,” Charm said. “It’s changing the way people are talking about IT and infrastructure.”

Reduxio Systems’ storage wows human resources specialist

Reduxio Systems’ storage has gone from curiosity to mainstay at human resources software firm CPP Inc.

The maker of personality-assessment software initially installed Reduxio HX550 hybrid arrays to support standard systems for development, quality assurance and testing. Impressed by the performance, CPP has promoted the Reduxio SAN to handle mission-critical applications and a select number of primary workloads.

The plan is to eventually move most tier-one storage from existing SAN environments to Reduxio to take advantage of its capacity, native data protection and performance scaling, said Mike Johnson, director of global infrastructure and desktop support at CPP, based in Sunnyvale, Calif.

“I’ve always figured there isn’t one storage device that gives you all three of those things, but it’s looking like Reduxio Systems has the potential,” Johnson said.

CPP has two Reduxio HX550 hybrid arrays at its main data center in Sunnyvale and two others at a newly opened facility in the U.K.

Reduxio hybrid flash augments all-flash IBM V9000 primary SAN

The Reduxio HX550 Enterprise Flash Storage hybrid flagship is a dual-controller system housed in a 2U Seagate server chassis. The system accommodates 24 disk drives or SAS-connected SSDs, with enterprise multi-level cell NAND flash SSDs for 40 TB of raw block storage. Effective capacity scales to 150 TB of usable storage with Reduxio NoDup global inline data deduplication.

Reduxio Systems deduplicates data in 8K blocks in a pre-memory buffer. A unique timestamp is applied to each block in the databases. A separate database for metadata includes log data on which blocks received writes and when.
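
The combination of content-based deduplication and a timestamped write log is also what underpins any-point-in-time recovery. The following Python sketch is a conceptual model of those two ideas, not Reduxio’s implementation.

```python
# Conceptual model (not Reduxio's implementation): deduplicate fixed-size
# blocks by content hash and keep a timestamped log of which block was
# written to which offset and when. Replaying the log up to a chosen time
# is the idea behind any-point-in-time recovery such as BackDating.
import hashlib
import time

BLOCK_SIZE = 8 * 1024        # 8K blocks, as described in the article
block_store = {}             # content hash -> block data, stored only once
write_log = []               # (timestamp, volume offset, content hash)

def write_block(offset: int, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    block_store.setdefault(digest, data)     # duplicate blocks add no data
    write_log.append((time.time(), offset, digest))

def read_block_as_of(offset: int, as_of: float):
    latest = None
    for ts, off, digest in write_log:        # replay the log up to 'as_of'
        if ts <= as_of and off == offset:
            latest = digest
    return block_store[latest] if latest is not None else None

write_block(0, b"A" * BLOCK_SIZE)
checkpoint = time.time()
time.sleep(0.01)                             # ensure distinct timestamps
write_block(0, b"B" * BLOCK_SIZE)            # overwrite the same offset
assert read_block_as_of(0, checkpoint) == b"A" * BLOCK_SIZE
```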

Until 2002, CPP was known as Corporate Psychologists Press Inc. The firm sells human resources software to corporations and career-minded individuals, and it’s best known for its flagship Myers-Briggs Type Indicator-certified assessment.

Over the years, CPP has used storage appliances from Dell EMC, NetApp, Hitachi Vantara and other vendors. CPP still uses an all-flash IBM V9000 SAN to support a Microsoft Dynamics AX enterprise resource planning system and related production systems, as well as a scale-out Coho Data DataStream SAN to increase capacity or performance on the fly.

Although the IBM V9000 is “one of the highest-performing SANs I’ve ever seen,” Johnson said it has limited capacity for all of CPP’s primary storage. The Coho Data storage is “plug-and-play,” but requires the upfront expense of customized Arista network switches.

Compounding the challenge was the demise of Coho Data, which shut down in September.

Johnson credited a reseller with introducing him to Reduxio Systems. CPP had already purchased the IBM and Coho Data gear by that time, but Johnson was intrigued enough by Reduxio to give it a test run.

“I was willing to put it in as our tier-three storage device, but I didn’t know how it would perform,” he said. “Once we saw the performance was pretty good, we promoted it to our mission-critical workloads.”

Reduxio BackDating aids faster disaster recovery

Johnson’s IT team did further testing and research designed to answer a key question: Could Reduxio storage reliably support CPP’s moneymaking activities? Johnson said he was pleased at Reduxio’s ability to deliver primary storage performance without relying exclusively on flash.

Johnson said he also likes the native data protection in Reduxio’s TimeOS operating system, especially the BackDating that allows recovery to any-point-in-time snapshot. Reduxio Systems recently added NoMigrate replication and NoRestore copy data management.

“We decided our revenue-generating systems could reside on the Reduxio storage device,” Johnson said. “Our plan going forward is to put all our revenue-generating systems on Reduxio and reduce our recovery point objectives and recovery time objectives from hours to days to seconds to minutes.”

Recapping 2017’s biggest trends in networking technology

Editor’s note: Cisco accelerated its shift to software, vendors launched new tools for managing data centers, and analytics, fueled by machine learning, stole the spotlight. Here, a recap of some of the most significant 2017 trends in networking technology.

Data center infrastructure trends in networking

In February, Cisco joined Microsoft to offer Azure Stack services in its UCS server. Throughout the early months of the year, Cisco revenues continued to fall, dropping for a fifth consecutive quarter because of declining sales of routers and switches.

Cisco attracted a lot of attention for its Digital Network Architecture (DNA) software initiative, which included a new line of Catalyst campus switches engineered to enable a more intuitive way of programming the network. DNA eliminates the need to program devices manually through the command-line interface; instead, engineers use a policy-based approach to determine network behavior. Later that summer, Cisco said it would acquire SD-WAN vendor Viptela for $610 million in a bid to consolidate its WAN offerings.

In the fall, Cisco launched Intersight, a software-as-a-service initiative slated to become a management option for the vendor’s Unified Computing System and HyperFlex, a hyper-converged infrastructure platform. It also bolstered its Application Centric Infrastructure SDN software by enabling it to run across multiple data centers.

Other data center news included Juniper’s work on a switch fabric intended to span multiple data centers with a single set of management tools, as well as higher enterprise spending on public cloud services. Juniper also made a series of announcements in December that included the release of bot software aimed at automating certain network functions.

Additionally, Dell EMC made its network operating system (NOS) standard on new open networking switches, and Arista expanded its spine-leaf architecture for hyperscale data centers. Dell followed up its NOS announcement in the fall by releasing a line of high-speed switches for data centers and carriers.

Vendor consolidation gained traction, with Extreme Networks purchasing the data center business of Brocade, as well as the networking assets of Avaya.

Wireless LAN technology trends

The past 12 months were relatively quiet for WLAN technology, as enterprises worked to deploy systems based on the 802.11ac Wave 2 specification.

One important technological development took place, however, as vendors began to release switches and other components capable of supporting the 2.5 and 5 GbE standard, which was ratified by the IEEE in late 2016. Toward that end, Dell EMC, among others, released multigigabit campus switches for both wired and WLAN deployments.

In February, Arris International said it would purchase WLAN vendor Ruckus Wireless Inc. for $800 million. Arris said Ruckus would continue to operate as an independent unit as it targets its technology to service providers and the hospitality market.

That acquisition was followed by a similar move by Riverbed Technology, which bought wireless LAN vendor Xirrus to complement its SD-WAN portfolio.

In June, Aruba released a core switch, aimed at large campus networks and internet of things applications. The 8400X switch also supports Aruba’s WLAN portfolio of products and software.

Extreme Networks announced plans in July to embed its recently acquired Avaya fabric technology in switches and management software to centralize control of large campus wired and wireless networks. And Aerohive, one of the last remaining independent Wi-Fi vendors, said it would add SD-WAN features to its cloud-based wireless controller in a bid to offer a more comprehensive service package to its customers. It also released a low-cost version of its Connect management platform for smaller deployments.

Network performance management and monitoring

In February, Cisco added policy-enforcement capabilities to its Tetration Analytics engine. The upgrade included a cheaper version for midsize companies. Following on the Tetration update, the vendor also launched cloud management for hyper-converged infrastructure in early March, providing enterprises with more choices in how they oversee the vendor’s HyperFlex product.

VeloCloud beefed up its SD-WAN software with policy options to make it more responsive to network performance problems. The new capabilities let enterprises dedicate segments of the network to specific traffic. In the event of glitches, the software reroutes traffic to alternative paths.

Intent-based networking (IBN) — policy-based software that tells the network what you want instead of telling it what to do — was one of the biggest trends in networking technology. Cisco said IBN would reshape much of its network management efforts, while startup Apstra Inc. upgraded its software that lets companies configure and troubleshoot network devices from multiple vendors.

The addition of analytics — fueled by machine learning — within network management and monitoring applications also gained steam. ExtraHop Networks added machine learning as a service to its Discover packet capture appliances.

In November, Nyansa upgraded its Voyance remediation engine to flag potential sources of network trouble, improve analytics and recommend fixes.

CodeTalk: Rethinking accessibility for IDEs

By Suresh Parthasarathy, Senior Research Developer; Gopal Srinivasa, Senior Research Software Development Engineer

Software programming productivity tools known as integrated development environments, or IDEs, are supposed to be a game changer for Venkatesh Potluri, a research fellow in Microsoft’s India research lab. Potluri is a computer scientist who regularly needs to write code efficiently and accurately for his research in human-computer interaction and accessibility. Instead, IDEs are one more source of frustration for Potluri: he is blind and unable to see the features that make IDEs a boon to the productivity of sighted programmers, such as squiggly red lines that automatically appear beneath potential code errors.

Potluri uses a screen reader to hear the code that he types. He scrolls back and forth through the computer screen to maintain context. But using a screen reader with an IDE is incomplete since much of the information from these systems is conveyed visually. For example, code is syntax highlighted in bright colors, errors are automatically highlighted with squiggles and the debugger uses several windows to provide the full context of a running program. Performance analysis tools use charts and graphs to highlight bottlenecks and architecture analysis tools use graphical models to show code structure.

“IDEs provide a lot of relevant information while writing code; a lot of this information — such as the current state of the program being debugged, real-time error alerts and code refactoring suggestions — is not announced to screen reader users,” Potluri said. “As a developer using a screen reader, the augmentation IDEs provide is not of high value to me.”

Soon after Potluri joined Microsoft Research India in early 2017, he and his colleagues Priyan Vaithilingam and Saqib Shaikh launched Project CodeTalk to increase the value of IDEs for the community of blind and low vision users. According to a recent survey posted on the developer community website Stack Overflow, users who self-identify as blind or low vision make up one percent of the programmer population, a higher share than the 0.4 percent of people in the general population. Team members realized that while a lot of work had gone into making IDEs more accessible, the efforts had fallen short of meeting the needs of blind and low vision developers.

As a first step, the team explored their personal experiences with IDE technologies. Potluri, for example, detailed frustrations such as trying to fix one last bug before the end of a long day: listening carefully to the screen reader and concentrating hard to hold the structure of the code file in his mind, only to have the screen reader go silent a few seconds after program execution. Uncertain whether the program completed successfully or terminated with an exception, he has to take extra steps to recheck it, steps that keep him at work late into the night.

The CodeTalk team also drew insights from a survey of blind and low vision developers that was led by senior researcher Manohar Swaminathan. The effort generated ideas for the development of an extension that improves the experience of the blind and low vision community of developers who use Microsoft’s Visual Studio, a popular IDE that supports multiple programming languages and is customizable. The CodeTalk extension and source code are now available on GitHub.

Highlights of the extension include the ability to quickly access code constructs and functions that lead to faster coding, learn the context of where the cursor is in the code, navigate through chunks of code with simple keystrokes and hear auditory cues when the code has errors and while debugging. The extension also introduces a novel concept of Talkpoints, which can be thought of as audio-based breakpoints.
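
The Talkpoint idea, a breakpoint that announces itself instead of halting the program, can be illustrated with Python’s tracing hook. The sketch below is only a conceptual model, not how the CodeTalk Visual Studio extension is built; the speak() function simply prints where a real version would hand the text to a screen reader or text-to-speech engine.

```python
# Conceptual sketch of a "Talkpoint": a breakpoint that announces itself
# instead of halting execution, built here on Python's tracing hook. This
# is not how the CodeTalk Visual Studio extension is implemented.
import sys

talkpoints = {}   # (filename, line number) -> message to announce

def speak(message: str) -> None:
    # Stand-in for a real screen reader or text-to-speech call.
    print(f"[audio] {message}")

def tracer(frame, event, arg):
    if event == "line":
        key = (frame.f_code.co_filename, frame.f_lineno)
        if key in talkpoints:
            speak(talkpoints[key])
    return tracer

def buggy_loop():
    total = 0
    for i in range(3):
        total += i        # imagine a talkpoint set on this line
    return total

# Announce each pass over the accumulation line without stopping the program.
code = buggy_loop.__code__
talkpoints[(code.co_filename, code.co_firstlineno + 3)] = "reached accumulation line"

sys.settrace(tracer)
buggy_loop()
sys.settrace(None)
```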

Together, these features make debugging and syntax checking—two critical features of IDEs—far more accessible to blind and low vision developers, according to a study the CodeTalk team conducted with blind and low vision programmers. Real-time error information and Talkpoints were particularly appreciated as significant productivity boosters. The team also began using the extension for their own development, and discovered that the features were useful for sighted users, as well.

CodeTalk is one step in a long journey of exploring ways to make IDEs more accessible. Research is ongoing to define and meet the needs of blind and low vision developers. The source code is available on GitHub and contributors are invited. The Visual Studio extension is available for download.

You can read more about this story on Microsoft’s Research Blog.

CodeTalk team members include Suresh Parthasarathy, Gopal Srinivasa, Priyan Vaithilingam, Manohar Swaminathan and Venkatesh Potluri from Microsoft Research India and Saqib Shaikh from Microsoft Research Cambridge.