Enterprise IT news

Microsoft Exchange Online UM drops third-party PBX support

Microsoft will end third-party PBX support for Exchange Online Unified Messaging in July 2018, leaving affected organizations less than a year to migrate completely to Skype for Business or find another third-party service.

“I would anticipate quite a few long days for IT admins, as well as more than a few professional services contracts being signed to cope with the changes,” said Michael Brandenburg, an analyst at Frost & Sullivan in San Antonio, about Microsoft’s decision.

According to Microsoft’s announcement, the vendor is retiring its session border controllers (SBCs) and ending third-party PBX support for Exchange Online UM in favor of standard Exchange and Skype for Business protocols to provide higher quality of service for voicemail.

Organizations unaffected by the change are those that connect to Exchange Online UM through Skype for Business on premises or through a third-party voicemail service that uses Microsoft’s APIs, as well as those running any form of on-premises Exchange Server UM.

Brandenburg said the change could be motivated by bandwidth and quality-of-service concerns.

“SIP [Session Initiation Protocol] interoperability has been an ongoing challenge for service providers and vendors,” he said. “It’s not a big leap to suggest that supporting a high-quality unified messaging service at such scale as Office 365 has become untenable for Microsoft.”

For organizations affected by the announcement, Microsoft offered four migration options:

  • A complete migration to Office 365 Cloud PBX;
  • A complete migration to Skype for Business Server Enterprise Voice on premises;
  • For those organizations with a mixed deployment of a third-party PBX and Skype for Business, the use of Exchange Online UM through a Microsoft partner, such as TE-SYSTEMS, to connect to Skype for Business server; and
  • For companies with no Skype for Business deployment or for whom the first three options are not appropriate, Microsoft recommended deploying a third-party voicemail service.

Short deadline creates migration pressure

Microsoft said the announcement affects a small number of customers. Those customers, however, tend to be larger organizations with a number of SBCs, according to Jeff Guillet, founder of IT consulting firm EXPTA Consulting in Pacifica, Calif.

“Once customers settle on a connectivity solution, they continue to invest and expand upon it,” he said in a blog.

With less than a year to transfer services, Brandenburg said the difficulty organizations will face as they migrate will be tied to their unified communications (UC) strategy.

Organizations that are already on a migration path to Skype for Business, but still have third-party PBX and UC platforms in place, will have to accelerate their migration plans, he said.

“The biggest challenge will be for those organizations that have committed to a heterogeneous environment,” Brandenburg said. “These organizations will have to seek out third-party solutions that are compatible with Microsoft’s API.”

A migration could be rife with complications for organizations. Those that have to replace their existing PBXs more rapidly than planned will face accelerated deployment and user training schedules. Customers planning to deploy third-party services to maintain integration with Exchange Online UM could face software and user-facing issues, as IT will have an additional service to maintain and support, Brandenburg said.

“Forcing customers to plan for and deploy all new phone systems, SBC solutions or voicemail solutions in one year is asking a lot, especially for the size of customer they’re affecting,” Guillet said.

The announcement also casts doubts over whether Microsoft and other UC vendors can be trusted to support hybrid UC environments, Brandenburg said.

Opportunities for partners

Third-party unified messaging vendors, such as AVST, already offer third-party PBX support for Exchange Online UM. It’s possible other Microsoft partners, such as SBC vendors, could build specific connections to alleviate the need for organizations to bring in another vendor, Brandenburg said.

The announcement also creates an opportunity for UC vendors that already maintain interoperability with the Skype for Business desktop client. Brandenburg said there’s nothing preventing these providers from “coming to the aid of their customers” by natively supporting Exchange Online UM through Microsoft APIs.

Track users with correlated data from multiple log files

Which American university is ranked tops for innovation? Stanford? MIT? You’d be wrong.

When it comes to innovation in academia in the United States, Arizona State University sits atop the prestigious U.S. News & World Report rankings for both 2016 and 2017. Stanford University and the Massachusetts Institute of Technology could do no better than second and third, respectively.

As ASU innovates by extending its technological reach into artificial intelligence, augmented reality, machine learning and cognitive computing, the school’s computing infrastructure, its mix of cloud-based and on-premises resources, including mainframes, continues to grow. Keeping track of thousands of servers and applications, along with activities of tens of thousands of student and faculty users — and then collating and correlating all of that information for viewing and tracking in a single, unified auditable view — became a top priority, according to Chris Kurtz, a system architect in ASU’s department of university analytics and data services.

A matter of correlated data

Diagnosing potential security issues or locating a broken multisystem integration requires looking at log files from each system involved, in context, which in turn demands a consistent, correlated view of the data.


“The problem we needed to solve is getting disparate logs from Windows, Linux, firewalls, switches, and more all in one place that’s easily searchable and can be audited and distributed in a protected environment,” Kurtz said. It’s all about obtaining logs from operational servers and network devices, and putting the information into the correct order chronologically or by user, to create correlated data for personnel charged with overseeing IT infrastructure operations and security. Think of it as an aggregation engine. “You want to see individual user logs and how that user transits across systems,” Kurtz said.
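The aggregation Kurtz describes — pulling events from disparate systems into one chronological, per-user view — can be sketched in a few lines. This is an illustrative toy, not ASU’s actual pipeline; the log sources, field names and events are invented for the example:

```python
from datetime import datetime

# Hypothetical log records from different systems; each record carries
# a timestamp and a user ID. Real logs would need per-source parsing first.
windows_log = [
    {"ts": "2017-07-10T09:00:05", "user": "jdoe", "event": "logon"},
    {"ts": "2017-07-10T09:12:40", "user": "asmith", "event": "logon"},
]
firewall_log = [
    {"ts": "2017-07-10T09:00:07", "user": "jdoe", "event": "vpn-connect"},
]
linux_log = [
    {"ts": "2017-07-10T09:01:30", "user": "jdoe", "event": "ssh-session"},
]

def correlate(user, *sources):
    """Merge events from every source, keep one user's trail, sort by time."""
    events = [e for src in sources for e in src if e["user"] == user]
    return sorted(events, key=lambda e: datetime.fromisoformat(e["ts"]))

trail = correlate("jdoe", windows_log, firewall_log, linux_log)
for e in trail:
    print(e["ts"], e["event"])
```

The output shows how one user “transits across systems”: a Windows logon, then a VPN connection, then an SSH session, in order, with other users’ events filtered out.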

Those individual users add up, according to Kurtz. With more than 80,000 enrolled students and 20,000 faculty and others, ASU has a lot to keep track of. To aid with the machine data collection and collation of logs, ASU turned to Splunk Inc., a San Francisco provider of software that aims to transform machine-generated data into what the company calls “operational intelligence.”

You want to see individual user logs and how that user transits across systems.
Chris Kurtz, system architect, Arizona State University

The collated and correlated data is necessary to give IT personnel clues where to look when something goes wrong, according to Kevin Davis, Splunk’s vice president of public sector.

“Moving at the speed of any IT system and the internet, systems become massive and complex and we tend to create silos,” he said. Silos hinder visibility across the totality of IT systems, making it difficult to find problems or to track a particular user’s travels. “It’s not something you think about until something goes wrong.”


When something does go wrong, it’s somebody’s job — or the job of a lot of people — to figure out what went wrong and get the system back up and running as fast as possible. That’s job No. 1, Davis said. “After that, you can finally do a bit of triage.”


That job is not getting any easier, said Larry Ponemon, chairman of the Ponemon Institute, a Traverse City, Mich., research firm specializing in security. “There are so many devices, trying to stop the craziness and get[ting] a listing is more than a herculean task,” he said, adding that the proliferation of IoT devices is resulting in many more device types to track, raising the difficulty level.

Kurtz said ASU first looked at several products, including the ArcSight enterprise security manager from Hewlett Packard Enterprise and Elasticsearch, offered as a service by Amazon Web Services, before settling on the Splunk software in 2012. The university now uses it to correlate infrastructure issues with user activity, something that seems obvious, but which is difficult to do when each subsystem’s log exists in a vacuum.

Few own up to source code theft cybersecurity threats

Application security isn’t enough. Though we design software to prevent a malicious attack from hijacking processing logic or stealing data, a glaring omission remains: security against theft of the program source code itself. Few will discuss the problem. Fewer will admit to being victimized.

“The physical security of source code does not get the attention it demands,” said Michael Facemire, a Forrester Research vice president and principal analyst who serves application development and delivery professionals. “Protecting source code is no less important than protecting data.”

In a 2014 white paper that examined the scourge of trade-secret theft, consultancy PwC said, “Cybercrime is not strictly speaking a technology problem. It is a strategy problem, a human problem and a process problem.”

In other words, stealing source code, which the law treats as a trade secret, has never been more alluring — or easier. The bulky decks of Hollerith cards, rolls of punched paper tape or program printouts on greenbar paper — tools of a bygone era — aren’t needed. For a developer jumping ship to the latest startup, or an operations staffer feeling underappreciated, a USB thumb drive, email attachment or surreptitious transmission via FTP will do just fine.

No one is immune from source code theft, not even the big guys. Source code for Adobe Acrobat was stolen in 2013, raising the specter of malware being embedded in PDF documents. Just a year earlier, Symantec — itself a cybersecurity company — had to deal with extortion attempts to keep the source code for Norton Antivirus private. In 2004, hackers set up shop to sell stolen source code for Cisco’s PIX firewall. None of these thefts involved credit card numbers or other customer data.

The Goldman Sachs case

No one is immune from source code theft, not even the big guys.

The magnitude of source code theft — and the impotence of the legal system to deal with it adequately — was highlighted with great alarm in the 72-page report, “Administration Strategy on Mitigating the Theft of U.S. Trade Secrets,” published in February 2013 by the Executive Office of the President of the United States.

That report recounts a major software development project by Wall Street brokerage firm Goldman Sachs, which spent a half-billion dollars to develop a system to support high-frequency trading.

On his final day of employment in 2009, before jumping to a competitor, Goldman Sachs developer Sergey Aleynikov “transferred this extremely valuable proprietary computer code to an external computer server” and then downloaded thousands of other proprietary source code files to his home computers. Investigated by the FBI and prosecuted by the U.S. Attorney’s Office for the Southern District of New York, he was convicted and sentenced to 97 months in federal prison.

In February 2012, the conviction was overturned. The problem: The theft was not of physical goods. In its opinion, the 2nd Circuit Court of Appeals wrote, “Because Aleynikov did not ‘assume physical control’ over anything when he took the source code, and because he did not thereby ‘deprive [Goldman] of its use,’ Aleynikov did not violate the [National Stolen Property Act].”

It wasn’t until Dec. 28, 2012, that the loophole was closed when then-President Barack Obama signed Public Law 112-236, The Theft of Trade Secrets Clarification Act of 2012. As noted in the opinion, it was never Aleynikov’s intention to impede Goldman Sachs from running the code.

A second conviction, in a New York state court, was tossed in 2015, because the trial judge believed the source code had to be printed on paper for a guilty finding. The conviction was reinstated in January 2017 by a unanimous vote of a New York state appeals court. “It would be incongruous to allow defendant to escape criminal liability merely because he made a digital copy of the misappropriated source code instead of printing it onto a piece of paper,” wrote Justice Rosalyn Richter.

The federal report also identified numerous other cases of theft of trade secrets for the benefit of private companies and governmental organizations in China.

DevOps swims upstream

Dealing with potential source code theft needs to start long before a single line of code is ever written, according to Judith Hurwitz, a cloud consultant and CEO of Hurwitz & Associates in Needham, Mass. “If the first question is, ‘What should an app do?’ the second question must be, ‘How do we keep the code, the processes and the data secure?'” she said.

One simple tactic — watermarking source code with strings that can be searched for later — doesn’t prevent theft, but may facilitate the task of tracking down wayward code.
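As a toy illustration of that tactic, a watermark can be as simple as a unique, inert string constant embedded in the codebase and searched for later. The watermark value and the “leaked” snippet below are invented for the example:

```python
# A hypothetical watermark: a unique string that does nothing at runtime
# but can be grepped for if the code turns up somewhere it shouldn't.
WATERMARK = "ACME-SRC-7f3a9c-build-internal"  # illustrative value

def contains_watermark(text, mark=WATERMARK):
    """Return True if a suspect file still carries the watermark string."""
    return mark in text

# Simulate scanning the contents of a file found in the wild.
leaked_snippet = 'token = "ACME-SRC-7f3a9c-build-internal"  # leftover constant'
print(contains_watermark(leaked_snippet))  # prints True
```

A determined thief can strip such markers, which is why, as the text notes, watermarking aids tracking rather than prevention.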

For WSM, a St. Clair Shores, Mich., consultancy that has provided source code and server-to-cloud migration services since 2003, the answer is a renewed emphasis on DevOps that gets the ops portion involved earlier in the dev process than tradition stipulates.

“We’re helping to consult on a ‘left shift’ in security, moving security practices further upstream into the development process to ensure that potential issues are caught before reaching production,” said Jeremy Steinert, WSM’s CTO. “This includes securing your code repository, continuous integration pipelines and development environments to ensure source code theft and security vulnerabilities are managed at every layer of the development process.”
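One concrete form of the “left shift” Steinert describes is a lightweight secret scan run as a pre-commit hook or CI step, blocking obvious credentials before they reach the repository. This sketch is illustrative and is not WSM’s tooling; the patterns are a minimal sample, not an exhaustive ruleset:

```python
import re

# Illustrative secret patterns a pre-commit check might look for.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # hardcoded password
]

def scan(source):
    """Return a list of (line_no, line) pairs that match a secret pattern."""
    hits = []
    for no, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((no, line.strip()))
    return hits

sample = "key = 'AKIAABCDEFGHIJKLMNOP'\nprint('hello')\npassword = \"hunter2\"\n"
for no, line in scan(sample):
    print(f"line {no}: possible secret: {line}")
```

A hook would exit nonzero when `scan` returns any hits, failing the commit or the pipeline stage.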

WSM has helped several companies recently with data-intrusion incidents intended to scrape customer data. “This isn’t theft of code, but rather manipulation of it for the theft of customer data,” Steinert said. The principles behind securing the codebase, repositories and processes around development are the same in both situations, he said.

Joel Shore is news writer for TechTarget’s Business Applications and Architecture Media Group. Write to him or follow @JshoreTT on Twitter.

IBM cracks the code for speeding up its deep learning platform

Graphics processing units are a natural fit for deep learning because they can crunch through large amounts of data quickly, which is important when training data-hungry models.

But GPUs have one catch. Adding more GPUs to a deep learning platform doesn’t necessarily lead to faster results. While individual GPUs process data quickly, they can be slow to communicate their computations to other GPUs, which has limited the degree to which users can parallelize jobs across multiple servers and has capped the scalability of deep learning models.

IBM recently took on this problem to improve scalability in deep learning and wrote code for its deep learning platform to improve communication between GPUs.

“The rate at which [GPUs] update each other significantly affects your ability to scale deep learning,” said Hillery Hunter, director of systems acceleration and memory at IBM. “We feel like deep learning has been held back because of these long wait times.”

Hunter’s team wrote new software and algorithms to optimize communication between GPUs spread across multiple servers. The team used the algorithm to train an image-recognition neural network on 7.5 million images from the ImageNet-22k data set in seven hours. This is a new speed record for training neural networks on the image data set, breaking the previous mark of 10 days, which was held by Microsoft, IBM said.
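The article doesn’t detail IBM’s algorithm, but the bottleneck it targets is the synchronization step of data-parallel training: each GPU computes a gradient on its data shard, then all GPUs must exchange and average gradients before the next step. A minimal pure-Python sketch of that step, with invented numbers and none of IBM’s actual code:

```python
# Not IBM's algorithm: a toy model of data-parallel gradient synchronization.
# The all_reduce_mean exchange, repeated every training step, is the
# communication cost that limits multi-GPU scaling.

def local_gradients(shards):
    """Pretend each worker (GPU) computed a gradient from its data shard."""
    return [[x * 0.1 for x in shard] for shard in shards]

def all_reduce_mean(grads):
    """Average the workers' gradients element-wise (the communication step)."""
    n = len(grads)
    return [sum(col) / n for col in zip(*grads)]

shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # three workers' data
grads = local_gradients(shards)
synced = all_reduce_mean(grads)
print(synced)  # every worker now applies the same averaged gradient
```

The faster this exchange completes relative to the local compute, the closer training gets to linear speedup as GPUs are added.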

Hunter said it’s essential to speed up training times in deep learning projects. Unlike virtually every other area of computing today, training deep learning models can take days, which might discourage more casual users.

“We feel it’s necessary to bring the wait times down,” Hunter said.

IBM is rolling out the new functionality in its PowerAI software, a deep learning platform that pulls together and configures popular open source machine learning software, including Caffe, Torch and TensorFlow. PowerAI is available on IBM’s Power Systems line of servers.

But the main reason to take note of the news, according to Forrester analyst Mike Gualtieri, is the GPU optimization software might bring new functionality to existing tools — namely Watson.

“I think the main significance of this is that IBM can bring deep learning to Watson,” he said.

Watson currently has API connectors for users to do deep learning in specific areas, including translation, speech to text and text to speech. But its deep learning offerings are prescribed. By opening up Watson to open source deep learning platforms, its strength in answering natural-language queries could be applied to deeper questions.

PC market decline coming to an end

The yearslong PC market decline may finally be over, thanks to increasing acceptance of Windows 10 and more innovation around 2-in-1s.

IT experts have debated the death of PCs for years now, as smartphones and tablets have emerged. But thanks to PC hardware and software improvements, such as faster processors, better battery life, and more lightweight and secure devices, the market is on the verge of a turnaround. The PC market will see 1.6% growth in 2018 and 2019, according to a July report from Gartner.

“Today, PCs are a lot better in terms of performance,” said Matt Kosht, an IT director at a utility company in Alaska. “Even cheap PCs are better at business tasks.”

The PC market declined 4.3% from the second quarter of 2016 to the same period this year, Gartner said. Growth in the mobile market could be to blame.

“Since the only device that people could buy six or seven years ago was PCs, that’s all they purchased,” said Ranjit Atwal, research director at Gartner. “Now, we … buy multiple devices.”

But the shift from PCs to other devices is occurring mostly among consumers and not in business, Atwal said. Business users will need to upgrade to newer PCs with more power and efficiency, thereby slowing down and eventually reversing the market decline, he added.

“On the business side, most users still have a PC as a main computing device,” he said. “Over the years, companies held onto their PCs, which is why there was a decline in shipments. But what we are seeing now is that they are trying to replace the PCs [with new ones].”  

Kosht agreed.

“I don’t think PCs are going away, not by a long shot,” he said.

Projected growth in worldwide PC market shipments

Windows 10 plays major role in PC uptick

On the business side, most users still have a PC as a main computing device.
Ranjit Atwal, research director, Gartner

Microsoft has succeeded in infusing the market with much-needed innovation through Windows 10. Although most organizations still use Windows 7, this phase is nearing an end, especially because Microsoft will no longer support that operating system after January 2020. When Microsoft launched Windows 10 in 2015, many organizations chose to stick with Windows 7 because 2020 was still a long way off. That is no longer the case.

Windows 10’s support for newer hardware and its improved security are further reasons the PC market decline is expected to slow.

“Windows 10 came at the right time,” Atwal said. “Organizations are … focused on power, they are focused on mobility, they are focused on security, and a lot of those elements are incorporated in Windows 10. They can’t stay with Windows 7 because it doesn’t have that sort of productivity.”

Ultramobiles contribute to PC market growth

Gartner’s PC market data and projections included the Apple MacBook Air, Microsoft Surface Pro and 2-in-1 devices, such as Lenovo’s Yoga tablets — a category it refers to as “premium ultramobiles.”

The innovation in 2-in-1s and the other premium ultramobiles has injected the overall PC market with new life. These devices allow for faster processing power and better hardware, despite their light weight, making them popular among both consumers and businesses.

Two-in-one devices and other premium ultramobiles allow employees to do everything that a traditional work PC does in a more user-friendly format, said Melanie Seekins, chair of the Credentialed Mobile Device Security Professionals organization.

“A Surface … gives the best of both worlds,” Seekins said. “It gives you everything you love or enjoy about Microsoft and adds the touch and feel of a regular tablet, so now you can write on the screen or pull out your keyboard if you wanted.”

DevOps tools training sparks IT productivity

Enterprises have a new weapon to combat the IT skills shortage where new hiring and training practices fall short.

Most IT pros agree the fastest path to IT burnout is what Amazon engineers have termed “undifferentiated heavy lifting”: repetitive, uninteresting work with little potential for wider impact beyond keeping the lights on. DevOps tools training, which centers on IT automation practices, can reduce or eliminate such mundane work and compensate for staff shortages and employee attrition.

“Automation tools aren’t used to eliminate staff; they’re used to help existing staff perform at a higher level,” said Pete Wirfs, a programmer specialist at SAIF Corp., a not-for-profit workers’ compensation insurance company in Salem, Ore., that has used Automic Software’s Automation Engine to orchestrate scripts.

The company has used Automation Engine since 2013, but last year, it calculated new application development would add hundreds of individual workflows to the IT operations workload. Instead, Wirfs said he found a way to automate database queries and use the results to kick off scripts, so a single centralized workflow could meet all the project’s needs.
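The pattern Wirfs describes — one centralized workflow that queries a database for pending work and dispatches the matching script for each row — can be sketched as follows. This is a hypothetical illustration, not SAIF’s Automation Engine configuration; the table, job names and handlers are invented:

```python
import sqlite3

# Map each job action to the "script" it should kick off (stand-ins here).
SCRIPTS = {
    "nightly-report": lambda job: f"ran report for {job}",
    "archive":        lambda job: f"archived {job}",
}

def run_pending(conn):
    """Query for pending jobs and dispatch each to the matching script."""
    rows = conn.execute(
        "SELECT name, action FROM jobs WHERE status = 'pending' ORDER BY rowid"
    ).fetchall()
    results = []
    for name, action in rows:
        handler = SCRIPTS.get(action)
        if handler:
            results.append(handler(name))
    return results

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (name TEXT, action TEXT, status TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?)", [
    ("claims-q3", "nightly-report", "pending"),
    ("claims-q1", "archive", "pending"),
    ("claims-q2", "archive", "done"),
])
print(run_pending(conn))
```

Adding a new workflow then means inserting a row, not defining another standalone job, which is how one centralized workflow can absorb hundreds of individual ones.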

As a result, SAIF has expanded its IT environment exponentially over the last four years with no additional operations staff. The data center can also run lights-out for a few hours each night, with automation scripts set up to handle monitoring and health checks and to route alerts to the appropriate contacts when necessary. No IT ops employees work on Sundays at SAIF at all.

“There’s no end to what we can find to automate,” Wirfs said.

DevOps tools training standardizes IT processes

SAIF’s case illustrates an important facet of DevOps tools training: standardization of a company’s tools and workflows. A move from monoliths to microservices can make an overall system more complex, but individual components become similar, repeatable units that are easier to understand, maintain and troubleshoot.

“The monoliths of the early 2000s were very complicated, but now, people are a lot more pragmatic,” said Nuno Pereira, CTO of iJET International, a risk management company in Annapolis, Md. “DevOps has given us a way to keep component complexity in check.”

Modern monitoring is another area where DevOps tools training pays off: centralized tools, such as Cisco’s AppDynamics and LogicMonitor, can curtail the notifications that bombard IT operations pros. These tools are popular among DevOps shops because they boost the signal-to-noise ratio of highly instrumented and automated environments, and they establish a standardized common ground for collaborative troubleshooting.

“[With] LogicMonitor, [we can] capture data and make it easily viewable so that different disciplines of IT can speak the same language across skill sets,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis.

Four or five years ago, problems in the production infrastructure weren’t positively identified for an average of about 30 minutes per incident, Domeier said. Now, within one to two minutes, DevOps personnel can determine there is a problem, with an average recovery time of 10 to 15 minutes, he estimated.

Standardization has been key to keeping up with ever-bigger web-scale infrastructure at DevOps bellwethers such as Google.

“If every group in a company has a different set of technologies, it is impossible to make organizationwide changes that lift all boats,” said Ben Sigelman, who built Dapper, a distributed tracing utility Google uses to monitor distributed systems. Google maintains one giant source-code repository, for example, which means any improvement immediately benefits the entire Google codebase.

“Lack of standardization is an impediment to DevOps, more than anything else,” Sigelman said.

Google has standardized on open source tools, which offer common platforms that can be used and developed by multiple companies, and this creates another force-multiplier for the industry. Sigelman, now CEO of a stealth startup called LightStep, said DevOps tools training has started to have a similar effect in the mainstream enterprise.

Will AI help?

DevOps tools training can go a long way to help small IT teams manage big workloads, but today’s efficiency improvements have their limits. Already, some tools, such as Splunk Insights, use adaptive machine-learning algorithms to give the human IT pro’s brain an artificial intelligence (AI) boost — a concept known as AIOps.

“The world is not going to get easier,” said Rick Fitz, senior vice president of IT markets for Splunk, based in San Francisco. “People are already overwhelmed with complexity and data. To get through the next five to 10 years, we have to automate the mundane so people can use their brains more effectively.”

People are already overwhelmed with complexity and data. To get through the next five to 10 years, we have to automate the mundane.
Rick Fitz, senior vice president of IT markets, Splunk

Strong enthusiasm for AIOps has spread throughout the industry. Today’s analytics products, such as Splunk, use statistics to predict when a machine will fail or to assess the broader impact of a change to an IT environment. However, AIOps systems may move beyond rules-based approaches to improve on those rules or produce insights humans won’t come up with on their own, said Brad Shimmin, analyst with GlobalData PLC, headquartered in London. Groups of companies will share data the way they share open source software development today and enhance the insights AIOps can create, he predicted.

The implications for AIOps are enormous. Network intrusion detection is just one of the many IT disciplines experts predict will change with AIOps over the next decade. AIOps may be able to detect attack signatures or malicious behavior in users that humans and today’s systems cannot detect — for example, when someone hijacks and maliciously uses an end-user account, even if the end user’s identifier and credentials remain the same.
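The statistical baseline-plus-anomaly idea underneath such detection can be illustrated with a toy example: flag activity that deviates sharply from a user’s own history. The data, threshold and metric are invented for the sketch; production AIOps systems use far richer models:

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it sits more than `threshold` std devs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

daily_logins = [4, 5, 3, 4, 6, 5, 4]   # a user's normal week
print(is_anomalous(daily_logins, 5))    # a typical day
print(is_anomalous(daily_logins, 40))   # a possible hijacked account
```

Note that the hijacked-account case in the article is exactly where a per-user behavioral baseline helps: the credentials are valid, so only the deviation from normal behavior gives the attacker away.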

But while AIOps has promise, those who’ve seen its early experimental implementations are skeptical that AIOps can move beyond the need for human training and supervision.

“AI needs a human being to tell it what matters to the business,” LightStep’s Sigelman said, based on what he saw while working at Google. “AI is a fashionable term, but where it’s most successful is when it’s used to sift through a large stream of data with user-defined filtering.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her or follow @PariseauTT on Twitter.

DevOps competency growth hindered by IT skills shortage

Enterprises must rethink how to staff IT departments in a labor market with many more job vacancies than candidates.

There are numerous causes of the skills shortage in the U.S.: long-term trends, such as declining birth rates in the developed world, and recent ones, such as a federal crackdown on foreign workers with H-1B visas. Combine those factors with the rapid proliferation of new ideas, such as Linux containers, heightened demand for new software and explosive growth of data in the tech sector, and a lack of DevOps competency — and a shortage of IT labor generally — becomes pronounced.

“We’re on the edge of the biggest skills shortage in U.S. history,” said Don Rheem, CEO at E3 Solutions, a consulting firm in Washington, D.C., that works with corporate clients on HR practices and employee engagement. The country is at 4.4% unemployment — which Rheem said he considers full employment — and still has 5.9 million unfilled jobs. “Companies can’t afford to lose people with the competency to do high-tech work,” he said.

Enterprises look within for DevOps competency

Companies must first build an IT staff to foster DevOps competency. This means they must find both skilled programmers and IT ops specialists able to deploy the latest software development and infrastructure automation techniques. In coastal markets, such as the San Francisco Bay Area and New York, this is challenging; in markets outside those areas, it can seem impossible.

In Minneapolis and St. Paul, Minn., for example, the unemployment rate for workers in information technology was 1.6% in June 2017, according to the Bureau of Labor Statistics.

The dearth of good candidates is painfully clear for Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in the Twin Cities.

With the shortage of available skilled workers outside its walls, SPS has begun internal training programs to grow its own DevOps competency. It has established an internal technology community, with a knowledge-sharing website, conference debriefs and updates on new projects, and regular presentations from outside speakers.

The company has also created its own technology conference and worked to simplify and standardize what’s expected of engineers in the first month of work, so employees new to DevOps have consistent expectations, Domeier said.

It’s a similar story for Rosetta Stone, which has mostly avoided hiring outside DevOps experts, said Kevin Burnett, DevOps lead for the global education software company in Arlington, Va.

“We do have two DevOps people — me and a colleague — but we moved into these roles from elsewhere in the company; we were both developers in the product organization,” he said. To build the rest of the DevOps team, the company turned to long-tenured employees.

“If you find people with lots of general and company experience, their colleagues will be more likely to listen to them,” Burnett said. “If people don’t listen to your DevOps people, your change initiatives will not get anywhere.”

The best candidates have an interest in software deployment and experience with command-line and automation tools, but the most important traits for DevOps competency are problem-solving ability and inclination, Burnett said.

“If you have people who already love playing with AWS [Amazon Web Services], or who love building internal tools to make their colleagues more productive, or who wrote a script that can set up a local development environment, these are the people to talk to first,” he said.

New practices spring from DevOps competency shortage

Once you establish DevOps competency, employee retention is an even bigger challenge in a highly competitive seller’s market for technical skills.

“High tech has a lot to learn about culture from the manufacturing business,” E3’s Rheem said. Companies eager to retain employees often find themselves in a “perk race,” but perks quickly become entitlements, Rheem said.

High tech has a lot to learn about culture from the manufacturing business.
Don Rheem, CEO, E3 Solutions

“Most companies don’t have a great sense of what makes people want to come to work, which is predictability, consistency and the ability to rely on social resources,” he said.

Each human brain has a metabolic limit on the amount of work it can do, but the brain can also take into account the “social resources” of other human brains around it and view them as interchangeable with its own physical resources, Rheem said.

In the absence of a strong group, a good substitute is a deep connection with the mission and vision of the organization — a sense of ownership, he added.

These ideas have been part of successful employee-retention efforts for SPS’s Domeier.

Ownership is important, but it must have defined limits, Domeier said. Employers must break down the environment into smaller areas of accountability, so one engineer isn’t responsible for the maintenance of hundreds of systems or microservices.

“The whole system is a big burden to carry; you have to make sure expectations are realistic,” Domeier said. In some areas, engineers are accountable for a particular internal service or a single customer-facing product. Common-sense approaches to time off and on-call rotations are also essential; many DevOps organizations ensure employees aren’t on call for more than a week at a time.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at or follow @PariseauTT on Twitter.

Top UC providers widen their lead in Gartner Magic Quadrant

While Gartner’s annual Magic Quadrant for Unified Communications closely examines vendors and their on-premises products, the report does not ignore the growing role of cloud services. In fact, the report advises organizations investing in premises-centric technologies to vet vendors’ cloud migration strategies and to have a clear path toward cloud services with providers that are commercially and operationally viable.

Some unified communications (UC) vendors are articulating their cloud migration plans better than others. As a result, leading UC providers Cisco, Microsoft and Mitel are extending their market command and separating from the rest of the pack, said Mike Fasciani, a Gartner analyst and one of the authors of the report.

While UC providers peddle cloud products, they’re still largely selling on-premises systems. According to Gartner, on-premises UC still accounts for 70% of the services sold in midsize and large enterprises. The market share for cloud-based unified communications as a service (UCaaS) is growing, however, relative to on-premises UC services.

“On premises is still the most common deployment model, especially in large enterprises,” Fasciani said. “But, clearly, the growth is in the cloud.”

Everybody chasing Microsoft

The on-premises UC market is mature, Gartner said, meaning services are standardized and vendors’ products are quite comparable. On-premises voice communications, for instance, lacks room for further innovation. Now, UC providers are looking to innovate elsewhere, specifically in the cloud, hybrid deployments, communications platform as a service and contact centers.


Magic Quadrant leaders Cisco and Microsoft, for example, have solid cloud migration strategies for customers, while also promoting hybrid deployment options. Cisco offers its Spark cloud service, while Microsoft offers cloud-based Office 365, as well as Skype for Business hybrid models.

Other vendors in the report are not quite keeping pace with cloud and hybrid options, Fasciani said, except for fellow Magic Quadrant leader Mitel, which is in the process of acquiring ShoreTel. Mitel has looked to grow its business through acquisitions, which could create problems as it tries to integrate a mix of UC products and partners, Gartner said. 

But Mitel ultimately needs to grow via acquisitions, since organic growth would take too long, Fasciani said. Mitel will be the main competitor to Cisco, he added, while everyone chases Microsoft.

“I think Microsoft is the threat to everybody, including Cisco, and that’s because of the popularity of cloud-based Office 365,” Fasciani said. “So many enterprises are interested and wondering if Office 365 and Skype for Business is the way to go.”

Some UC providers stumble

Perhaps the biggest change in this year’s Magic Quadrant compared to 2016 is that Avaya dropped out of the leaders group. In January 2017, Avaya filed for Chapter 11 bankruptcy protection to reorganize its balance sheet and restructure debt. Avaya said last week it could emerge from bankruptcy in the fall.

While Avaya could come out of bankruptcy efficiently, it still needs to grow its business, Fasciani said. The question is: Can Avaya sell beyond voice and contact-center services? So far, it has not shown it can do that, he added. Avaya has strong brand recognition for its telephony, and its Zang service has developed cloud communication services with the help of a developer portal, Gartner said.

Another notable drop in the Magic Quadrant is Alcatel-Lucent Enterprise (ALE) falling from challenger to niche player. Paris-based ALE is owned by China Huaxin, an industrial investment company. Gartner questioned China Huaxin’s long-term commitment to ALE, because it has invested little in the UC business since buying it three years ago.

Additionally, ALE is focusing more of its business on Western Europe, which limits the vendor’s exposure in other parts of the world, Gartner said.

On premises and cloud merging

In the Magic Quadrant, Gartner evaluates UC providers in relation to each other and how they perform against one another. While the report focuses on premises-based services for midsize and large organizations, Gartner also acknowledges the importance of cloud offerings. The analyst firm covers cloud UC products in its UCaaS Magic Quadrant.

Fasciani said enterprises should be careful about signing a multiyear maintenance contract with an on-premises service. Organizations should also examine vendors’ plans to migrate customers to more innovative offerings during the lifecycle of a contract and make sure licensing plans entitle them to new services.

So many enterprises are interested and wondering if Office 365 and Skype for Business is the way to go.
Mike Fasciani, UC analyst, Gartner

The line is blurring between on-premises and cloud services, Fasciani said. An organization could buy a service from Cisco, for example, and maybe not realize what elements are cloud-based and which are on premises.

“We’re going to have to expand our thought process about how we cover the space and go more into the cloud-based deployment models to ensure we’re capturing everything the vendors are offering,” Fasciani said.

In this year’s UC Magic Quadrant, Cisco, Microsoft and Mitel were named leaders. Huawei and NEC were named challengers. Avaya and Unify were named visionaries. ALE and ShoreTel were named niche players. ShoreTel is in the process of being acquired by Mitel. Interactive Intelligence was dropped from this year’s report because it was acquired by Genesys, a contact-center vendor.

Mobile data theft a risk from shared app libraries

Researchers said shared third-party libraries used by many mobile apps could increase the risk of mobile data theft through “intra-library collusion.”

The issue was detailed by Alastair Beresford, teaching fellow at Robinson College in Cambridge, England, and Vincent Taylor and Ivan Martinovic, a doctoral student and associate professor, respectively at Oxford University, in the paper, “Intra-Library Collusion: A Potential Privacy Nightmare on Smartphones.”

According to the researchers, the issue has often been overlooked because mobile security “has typically examined apps and third-party libraries in isolation.” However, they claim these shared libraries could cause more damage if used together for mobile data theft.

“This attack, which we call intra-library collusion, occurs when a single library embedded in more than one app on a device leverages the combined set of permissions available to it to pilfer sensitive user data,” the researchers wrote. “The possibility for intra-library collusion exists because libraries obtain the same privileges as their host app and popular libraries will likely be used by more than one app on a device.”

The team studied 30,000 smartphones and found that, because different apps are allowed different permissions, a malicious actor could combine the access granted to each app in order to build a user profile or perform mobile data theft.
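The mechanism the researchers describe can be sketched in a few lines: each host app grants the embedded library its own permissions, so the library’s effective access is the union of every host’s grants. The app names and permissions below are illustrative, not taken from the study.

```python
# Hypothetical sketch of "intra-library collusion": a single ad or analytics
# library embedded in several apps inherits each host app's permissions, so
# its effective access on the device is the union of those permission sets.
apps_using_shared_lib = {
    "weather_app": {"ACCESS_FINE_LOCATION", "INTERNET"},
    "photo_app": {"CAMERA", "READ_EXTERNAL_STORAGE", "INTERNET"},
    "chat_app": {"READ_CONTACTS", "RECORD_AUDIO", "INTERNET"},
}

# No single app holds every sensitive permission...
max_single_app_grant = max(len(p) for p in apps_using_shared_lib.values())

# ...but the shared library, running inside each host, can combine them
# into a far broader profile of the user.
library_effective_access = set().union(*apps_using_shared_lib.values())

print(max_single_app_grant)              # largest single-app grant: 3
print(sorted(library_effective_access))  # six distinct permissions combined
```

The point of the sketch is that the union is strictly larger than any single host’s grant, which is what makes the combined library a bigger privacy risk than any one app in isolation.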

Matthew Rose, global director of application security strategy at Checkmarx, an application security software vendor headquartered in Israel, said there were a number of ways a shared library might be compromised by a malicious actor.

“Typically third-party libraries are maintained by a group of people who maintain the code base. Since these libraries have many contributors it is sometimes difficult to have one person responsible for the entire library code base which can potentially allow malicious code to be inserted,” Rose told SearchSecurity. “There is also the question of these libraries inheriting functionality from other code bases so there are definite tradeoffs in terms of risk versus the utilization of existing third party libraries.”

The researchers said advertising libraries could be granted additional permissions to make this kind of attack more dangerous. The researchers wrote that libraries can track users without their consent.

The research focused on Android due to “the availability of data on lists of apps installed on Android devices,” but the team noted that they believe their insights would also hold true on iOS “due to similarities in access control and app deployment.”

Neither Google nor Apple responded to requests for comment at the time of this post.

Mobile data theft and permission creep

Unfortunately, the researchers had no easy answers for mitigating the threat of mobile data theft from intra-library collusion. The researchers noted that one approach would be to limit the permissions granted to these libraries, but doing so might hamper the ability of developers to monetize their apps, which “could serve as a deterrent to new app developers entering the market and thus the end users may ultimately suffer from reduced content.”

If the permission request is not in line with what you intend to use the app for then do not install it or grant the permissions.
Matthew Rose, global director of application security strategy, Checkmarx

Additionally, the team suggested that the companies running the app stores or even nation states could enact policies or laws to detect and remove malicious third-party libraries, but each approach would be problematic. Detection would be difficult because apps can have legitimate reasons for sending data off-device, and enforcement may not scale beyond an app-by-app basis.

John Bambenek, threat intelligence manager at Fidelis Cybersecurity, said “it is very likely that a malicious library would remain undetected,” but noted there are easier paths to mobile data theft.

“In order to perform this attack, a malicious individual would need to create a library that then is used by multiple applications. They would then need to convince users to download an app [or multiple apps] with many permissions,” Bambenek told SearchSecurity. “In the real world, a malicious individual would just get a victim to install an application with a lot of permissions in the first place because it is more direct and easier. I wouldn’t expect this to be weaponized in the short-term by criminals.”

Rose said the more important issue was that “people need to be cognizant of what permissions a mobile app is asking for when they install it.” 

“Does the app really need to have access to your file system, geo location, or camera? Think about what the intended usage is for the mobile app and ask yourself if it is asking for more permissions than it actually needs,” Rose said. “If the permission request is not in line with what you intend to use the app for then do not install it or grant the permissions.”

Bambenek said developers also need to be careful to make sure it doesn’t appear their apps are attempting mobile data theft through permissions overreach.

“Mobile developers, and developers in general for that matter, need to always focus on secure coding and, in particular, least privilege,” Bambenek said. “Adopting a development model that writes code doing only what is necessary for it to do and little else would help greatly.”
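The audit Rose and Bambenek describe amounts to a set-difference check: compare the permissions an app requests against those its stated features actually need, and flag anything extra. A minimal sketch, with a hypothetical feature-to-permission map and illustrative permission names:

```python
# Minimal least-privilege check: which requested permissions are not
# justified by any feature the app claims to provide? The feature map
# and permission names here are hypothetical examples.
FEATURE_PERMISSIONS = {
    "messaging": {"INTERNET", "READ_CONTACTS"},
    "voice_notes": {"RECORD_AUDIO"},
}

def excess_permissions(requested, features):
    """Return permissions requested beyond what the declared features need."""
    needed = set().union(*(FEATURE_PERMISSIONS[f] for f in features))
    return requested - needed

requested = {"INTERNET", "READ_CONTACTS", "RECORD_AUDIO",
             "ACCESS_FINE_LOCATION", "CAMERA"}
overreach = excess_permissions(requested, ["messaging", "voice_notes"])
print(sorted(overreach))  # location and camera aren't justified by any feature
```

A user applying Rose’s advice would treat a non-empty result as a reason not to install the app, or not to grant the unexplained permissions.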

AI washing muddies the artificial intelligence products market

Analysts predict that by 2020, artificial intelligence technologies will be in almost every new software and service release. And if they’re not actually in them, technology vendors will probably use smoke and mirrors marketing tactics to make users believe they are.

Many tech vendors already shoehorn the AI label into the marketing of every new piece of software they develop, and it’s causing confusion in the market. To muddle things further, major software vendors accuse their competitors of egregious mislabeling, even when the products in question truly do include artificial intelligence technologies.

AI mischaracterization is one of the three major problems in the AI market, as highlighted by Gartner recently. More than 1,000 vendors with applications and platforms describe themselves as artificial intelligence products vendors, or say they employ AI in their products, according to the research firm. It’s a practice Gartner calls “AI washing” — similar to cloudwashing and greenwashing, practices that have become prevalent over the years as businesses exaggerate their association with cloud computing and environmentalism.

AI goes beyond machine learning

When a technology is labelled AI, the vendor must provide information that makes it clear how AI is used as a differentiator and what problems it solves that can’t be solved by other technologies, explained Jim Hare, a research VP at Gartner, who focuses on analytics and data science.

You have to go in with the assumption that it isn’t AI, and the vendor has to prove otherwise.
Jim Hare, research VP, Gartner

“You have to go in with the assumption that it isn’t AI, and the vendor has to prove otherwise,” Hare said. “It’s like the big data era — where all the vendors say they have big data — but on steroids.”

“What I’m seeing is that anything typically called machine learning is now being labelled AI, when in reality it is weak or narrow AI, and it solves a specific problem,” he said.

IT buyers must hold the vendor accountable for its claims by asking how it defines AI and requesting information about what’s under the hood, Hare said. Customers need to know what makes the product superior to what is already available, with support from customer case studies. Also, Hare urges IT buyers to demand a demonstration of artificial intelligence products using their own data to see them in action solving a business problem they have.

Beyond that, a vendor must share with customers the AI techniques it uses or plans to use in the product and their strategy for keeping up with the quickly changing AI market, Hare said.

The second problem Gartner highlights is that machine learning can address many of the problems businesses need to solve, but more complicated types of AI, such as deep learning, get so much hype that businesses overlook these simpler approaches.

“Many companies say to me, ‘I need an AI strategy’ and [after hearing their business problem] I say, ‘No you don’t,'” Hare said.

“Really, what you need to look for is a solution to a problem you have, and if machine learning does it, great,” Hare said. “If you need deep learning because the problem is too gnarly for classic ML, and you need neural networks — that’s what you look for.”

Don’t use AI when BI works fine

When to use AI versus BI tools was the focus of a spring TDWI Accelerate presentation led by Jana Eggers, CEO of Nara Logics, a Cambridge, Mass., company that describes its “synaptic intelligence” approach to AI as the combination of neuroscience and computer science.

BI tools use data to provide insights through reporting, visualization and data analysis, and people use that information to answer their questions. Artificial intelligence differs in that it’s capable of essentially coming up with solutions to problems on its own, using data and calculations.

Companies that want to answer a specific question or solve a defined problem should use business analytics tools. If you don’t know the question to ask, use AI to explore data openly, and be willing to consider the answers from many different directions, she said. This may involve having outside and inside experts comb through the results, perform A/B testing, or even outsource via platforms such as Amazon’s Mechanical Turk.

With an AI project, you know your objectives and what you are trying to do, but you are open to finding new ways to get there, Eggers said.

AI isn’t easy

A third issue plaguing AI is that companies don’t have the skills on staff to evaluate, build and deploy it, according to Gartner. Over 50% of respondents to Gartner’s 2017 AI development strategies survey said the lack of necessary staff skills was the top challenge to AI adoption. That statistic appears to coincide with the data scientist supply and demand problem.

Companies surveyed said they are seeking artificial intelligence products that can improve decision-making and process automation, and most prefer to buy one of the many packaged AI tools rather than build one themselves. That brings IT buyers back to the first problem of AI washing: It’s difficult to know which artificial intelligence products truly deliver AI capabilities and which ones are mislabeled.

After determining a prepackaged AI tool provides enough differentiation to be worth the investment, IT buyers must be clear on what is required to manage it, Hare said: What human services are needed to change code and maintain models over the long term? Is it hosted in a cloud service and managed by the vendor, or does the company need knowledgeable staff to keep it running?

“It’s one thing to get it deployed, but who steps in to tweak and train models over time?” he said. “[IBM] Watson, for example, requires a lot of work to stand up and you need to focus the model to solve a specific problem and feed it a lot of data to solve that problem.”

Companies must also understand the data and compute requirements to run the AI tool, he added. GPUs may be required, which could add significant costs to the project. And cutting-edge AI systems require large amounts of data; storing that data also adds to the project cost.