Enterprise IT news

Uber breach affected 57 million users, covered up for a year

Malicious actors stole personal data on hundreds of thousands of Uber drivers and tens of millions of Uber users, and the company allegedly covered up the breach for a year, reportedly even paying the attackers to keep quiet.

According to new CEO Dara Khosrowshahi, the Uber breach occurred in late 2016, when two malicious actors accessed “a third-party cloud-based service” — reportedly GitHub and Amazon Web Services (AWS) — and downloaded files containing names and driver’s license information for 600,000 U.S. Uber drivers, as well as personal information — names, email addresses and phone numbers — for 57 million Uber customers around the world. According to Bloomberg, which was first to report the Uber breach, the incident was covered up by two members of the company’s infosec team.

“None of this should have happened, and I will not make excuses for it. While I can’t erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes,” Khosrowshahi wrote in a blog post. “We are changing the way we do business, putting integrity at the core of every decision we make and working hard to earn the trust of our customers.”

Khosrowshahi said the “failure to notify affected individuals or regulators last year” prompted a number of actions, including firing the two individuals responsible for the Uber breach response — Joe Sullivan, a former federal prosecutor and now ex-CSO at Uber, and Craig Clark, one of Sullivan’s deputies — notifying the affected drivers and offering them identity and credit monitoring, notifying regulators, and monitoring the affected customer accounts.

Details of the Uber data breach

According to Bloomberg, the attackers accessed a private GitHub repository used by Uber in October 2016 and used stolen credentials from GitHub to access an archive of information stored on an AWS account.

Terry Ray, CTO of Imperva, said the use of GitHub “appears to be a prime example of good intentions gone bad.”

“Using an online collaboration and coding platform isn’t necessarily wrong, and it isn’t clear if getting your accounts hacked on these platforms is even uncommon. The problem begins with why live production data was used in an online platform where credentials were available in GitHub,” Ray told SearchSecurity. “Sadly, it’s all too common that developers are allowed to copy live production data for use in development, testing and QA. This data is almost never monitored or secured, and as we can see here, it is often stored in various locations and is often easily accessed by nefarious actors.”

Sullivan reportedly took the lead in the Uber breach response and, along with Clark, worked to keep the incident under wraps, including paying the attackers $100,000 to delete the stolen personal data and keep quiet.

Khosrowshahi mentioned communication with the attackers in his blog post, but did not admit to any payment being made.

“At the time of the incident, we took immediate steps to secure the data and shut down further unauthorized access by the individuals. We subsequently identified the individuals and obtained assurances that the downloaded data had been destroyed,” Khosrowshahi wrote. “We also implemented security measures to restrict access to and strengthen controls on our cloud-based storage accounts.”

Jeremiah Grossman, chief of security strategy at SentinelOne, said it can be “difficult, if not impossible, for an organization to lock down” a vector like GitHub.

“Developers accidentally, and often unknowingly, share credentials over GitHub all the time where they become exposed,” Grossman told SearchSecurity. “While traditional security controls remain crucial to organizational security, it’s no good if individuals with access to private information expose their account credentials in a place where they can be obtained and misused by others.”
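
The exposure both analysts describe often starts with something as mundane as a hard-coded key committed to a repository. Purely as an illustration, and not a description of Uber's setup, the short Python sketch below scans a checked-out repository for strings that match the documented AWS access key ID format and for obvious credential-looking assignments; dedicated tools such as git-secrets or truffleHog do this far more thoroughly.

```python
import os
import re

# Patterns for the kind of secrets that commonly leak into repositories.
# The AWS access key ID format (AKIA plus 16 characters) is documented;
# the generic "secret" pattern is a loose heuristic, not an exhaustive rule set.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "possible_secret_assignment": re.compile(
        r"(?i)\b(secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_repo(root: str) -> None:
    """Walk a checked-out repository and report lines that look like credentials."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: {label}")
            except OSError:
                continue  # unreadable file, such as a broken symlink

if __name__ == "__main__":
    scan_repo(".")
```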

Willy Leichter, vice president of marketing at Virsec Systems, Inc., said that if the details of this Uber breach cover-up are verified, it could be extremely damaging for the company.

“This is a staggering breach of customer trust, ethical behavior, common sense and legal requirements for breach notification. Paying hackers to conceal their crimes is as short-sighted as it is stupid,” Leichter told SearchSecurity. “If this had happened after the EU GDPR kicks in, Uber would cease to exist. That may be the outcome anyway.”

Uber breach ramifications

The 2016 breach is the latest in a long line of issues for Uber. At the time of the incident, Uber was already under investigation for separate privacy violations. The company is also battling various lawsuits from cities and users.

Jim Kennedy, vice president North America at Certes Networks, said Uber’s already questionable reputation should take a big hit.

“Most likely the Uber C-suite, seeing the repercussions of cyber-attacks on similar household names, were keen to avoid the reputational damage — a massive error of judgement,” Kennedy told SearchSecurity. “The reality is that customer distrust of the brand will be amplified by the company’s attempts to hide the facts from them and points to the need for change in the industry.”

Adam Levin, cyber security expert and co-founder and chairman for CyberScout, said the Uber breach is another example of the company “placing stock value over and above privacy at the expense of drivers and consumers.”

“Uber did a hit and run on our privacy and created a completely avoidable extinction or near-extinction event, and further damaged an already tarnished brand,” Levin told SearchSecurity. “As ever, the goal for a company faced with a breach or compromise should be urgency, transparency and above all else, empathy for those affected.”

Ken Spinner, vice president of field engineering at Varonis, said the Uber data breach will likely “fire up already angry consumers, who are going to demand action and protection.”

“Every state attorney general is going to be salivating at the prospect of suing Uber. While there’s no overarching federal regulations in place in the U.S., there’s a patchwork of state regulations that dictate when disclosures must be made — often it’s when a set number of users have been affected,” Spinner told SearchSecurity. “No doubt Uber has surpassed this threshold and violated many of them by not disclosing the breach for over a year. This is the latest example of how hiding a breach rarely benefits a company and almost surely will backfire.”

StorOne attacks bottlenecks with new TRU storage software

Startup StorOne this week officially launched its TRU multiprotocol software, which its founder claims will improve the efficiency of storage systems.

The Israel-based newcomer spent six years developing Total Resource Utilization (TRU) software with the goal of eliminating bottlenecks caused by software that cannot keep up with faster storage media and network connectivity.

StorOne developers collapsed the storage stack into a single layer that is designed to support block (Fibre Channel and iSCSI), file (NFS, SMB and CIFS) and object (Amazon Simple Storage Service) protocols on the same drives. The company claims to support enterprise storage features such as unlimited snapshots per volume, with no adverse impact to performance.

TRU software is designed to run on commodity hardware and support hard disk drives; faster solid-state drives (SSDs); and higher performance, latency-lowering NVMe-based PCI Express SSDs on the same server. The software installs either as a virtual machine or on a physical server.

StorOne CEO and founder Gal Naor said the TRU software-defined storage fits use cases ranging from high-performance databases to low-performance workloads, such as backup and data archiving.

‘Dramatically less resources’

“We need dramatically less resources to achieve better results. Results are the key here,” said Naor, whose experience in storage efficiency goes back to his founding of real-time compression specialist Storwize, which IBM acquired in 2010.

StorOne CTO Raz Gordon said storage software has failed to keep up with the speed of today’s drives and storage networks.

“We understood that the software is the real bottleneck today of storage systems. It’s not the drives. It’s not the connectivity,” said Gordon, who was the leading force behind the Galileo networking technology that Marvell bought in 2001.

The StorOne leaders have so far disclosed few details about the product’s architecture and enterprise capabilities beyond unlimited storage snapshots.

Marc Staimer, senior analyst at Dragon Slayer Consulting, said StorOne’s competition would include any software-defined storage products that support block and file protocols, hyper-converged systems, and traditional unified storage systems.

“It’s a crowded field, but they’re the only ones attacking the efficiency issue today,” Staimer said.

“Because of TRU’s storage efficiency, it gets more performance out of fewer resources. Less hardware equals lower costs for the storage system, supporting infrastructure, personnel, management, power and cooling, etc.,” Staimer added. “With unlimited budget, I can get unlimited performance. But nobody has unlimited budgets today.”

The TRU user interface shows updated performance metrics for IOPS, latency, I/O size and throughput.

Collapsed storage stack

The StorOne executives said they rebuilt the storage software with new algorithms to address bottlenecks. They claim StorOne’s collapsed storage stack enables the fully rated IOPS and throughput of the latest high-performance SSDs at wire speed.

“The bottom line is the efficiency of the system that results in great savings to our customers,” Gordon said. “You end up with much less hardware and much greater performance.”

StorOne claimed a single TRU virtual appliance with four SSDs could deliver the performance of a midrange storage system, and an appliance with four NVMe-based PCIe SSDs could achieve the performance and low latency of a high-end storage system. The StorOne system can scale up to 18 GBps of throughput and 4 million IOPS with servers equipped with NVMe-based SSDs, according to Naor. He said the maximum capacity for the TRU system is 15 PB, but he provided no details on the server or drive hardware.

“It’s the same software that can be high-performance and high-capacity,” Naor said. “You can install it as an all-flash array. You can install it as a hybrid. And you’re getting unlimited snapshots.”

Naor said customers could choose the level of disk redundancy to protect data on a volume basis. Users can mix and match different types of drives, and there are no RAID restrictions, he said.

StorOne pricing

Pricing for the StorOne TRU software is based on physical storage consumption through a subscription license. A performance-focused installation of 150 TB would cost 1 cent per gigabyte, whereas a capacity-oriented deployment of 1 PB would be $0.0006 per gigabyte, according to the company. StorOne said pricing could drop to $0.002 per gigabyte with multi-petabyte installations. The TRU software license includes support for all storage protocols and features.

StorOne has an Early Adopters Program in which it supplies free on-site hardware of up to 1 PB.

StorOne is based in Tel Aviv and also has offices in Dallas, New York and Singapore. Investors include Seagate and venture capital firms Giza and Vaizra. StorOne’s board of directors includes current Microsoft chairman and former Symantec and Virtual Instruments CEO John Thompson, as well as Ed Zander, former Motorola CEO and Sun Microsystems president.

Visual Studio Live Share aims to spur developer collaboration

NEW YORK — Developers at Microsoft’s event here last week got a sneak peek at a tool that aims to boost programmer productivity and improve application quality.

Microsoft’s Visual Studio Live Share, demonstrated at its Connect(); 2017 conference, lets developers work on the same code in real time. It also bolsters the company’s credibility with developers by delivering tools and services that make their jobs easier.

The software brings the Agile practice of pair programming to a broader set of programmers, except the programmers do not need to be physically together. Developers can remotely access and debug the same code in their respective editor or integrated development environment and share their full project context, rather than just their screens. Visual Studio Live Share works across multiple machines. Interested developers can sign up to join the Visual Studio Live Share preview, set for early 2018. It will be a limited, U.S.-only preview.

“It works not just between Visual Studio Code sessions between two Macs or between two Visual Studio sessions on Windows, but you can, in fact, have teams composed of multiple different parts of the Visual Studio family on multiple different operating systems all developing simultaneously,” said Scott Guthrie, executive vice president in Microsoft’s cloud and enterprise group.

The ability for developers to collaboratively debug and enhance the quality of applications in real time is extremely useful for developers looking for help with coding issues. While the capability has been around in various forms for 20 years, by integrating it into the Visual Studio tool set, Microsoft aims to standardize live sharing of code.

Scott Guthrie, Microsoft executive vice president of cloud and enterprise, presenting the keynote at Connect(); 2017.

“I will be happy to see full collaboration make it to a shipping product,” said Theresa Lanowitz, an analyst at Voke, a research firm in Minden, Nev. “I had that capability shipping in 1994 at Taligent.”

Thomas Murphy, an analyst at Gartner, said he likes what he has heard about Visual Studio Live Share thus far, but wants to see it firsthand and compare it with pair programming tools such as AtomPair.

“[Microsoft is] doing a great job of being open and participating in open software in a nice incremental fashion,” he said. “But does it bring them new developers? That is a harder question. I think there are still plenty of people that think of Microsoft as the old world, and they are now in the new world.”

General availability of Visual Studio App Center

Also this week, Microsoft made its Visual Studio App Center generally available. Formerly known as Visual Studio Mobile Center and based on Xamarin Test Cloud, Visual Studio App Center is essentially a mobile backend as a service that provides a DevOps environment to help developers manage the lifecycle of their mobile apps. Objective-C, Swift, Android Java, Xamarin and React Native developers can all use Visual Studio App Center, according to the company.

Once a developer connects a code repository to Visual Studio App Center, the tool automatically creates a release pipeline of automated builds, tests the app in the cloud, manages distribution of the app to beta testers and app stores, and monitors usage of the app with crash analytics data from HockeyApp, the analytics tool Microsoft acquired in 2014.

“HockeyApp is very useful for telemetry data; that was a good acquisition,” Lanowitz said. Xamarin’s mobile development tools, acquired by Microsoft in 2016, also are strong, she said.

Darryl K. Taft covers DevOps, software development tools and developer-related issues as news writer for TechTarget’s SearchSoftwareQuality, SearchCloudApplications, SearchMicroservices and TheServerSide. Contact him at dtaft@techtarget.com or @darrylktaft on Twitter.

CEO: How SOTI software shoots to stand out

Technology vendors are distancing themselves from the term enterprise mobility management.

Earlier this month, BlackBerry CEO John Chen called enterprise mobility management (EMM) a “lousy market.” Meanwhile, SOTI Inc., in Mississauga, Ont., is aiming to become more of a household name in enterprise mobility by expanding into new areas.

“Anybody selling just EMM these days is a dinosaur,” SOTI CEO Carl Rodrigues said.

SOTI, which has 17,000 enterprise customers, offers products designed to address businesses’ various mobile needs, from managing and securing devices to building and supporting applications. SOTI software includes MobiControl for EMM, MobiAssist remote help desk software and SOTI Snap for mobile app development.

Here, Rodrigues discusses his company’s push into new areas, as well as changes in the EMM market.

What do mobile-minded organizations need to focus on besides EMM nowadays?

Rodrigues: In the first wave of companies needing EMM, they were just trying to manage office devices. We’re way past that. Mobile is becoming mission-critical to companies’ operations. Many are running completely on mobile, and they need to be able to create apps that run on mobile devices effectively and support remote workers.

Who can deliver a disruptive business with just EMM? You need to buy 10 other pieces of technology. To create an app, that can cost $800,000 to create one on an Android platform; then you need to do it again on Apple. RMAD [rapid mobile app development] tools eliminate that barrier. If companies know they don’t have to spend millions to get started, mobile becomes more accessible.

How has the EMM market changed, and how has SOTI software evolved to address that?

Rodrigues: Customers have many more problems than EMM. Our platform tackles the core problems that our customers and IT have, from app generation to how to support your people out in the field.

SOTI software has some major competition in the EMM market. What sets you apart?

Rodrigues: There are other EMM solutions out there, but [those customers] need to buy a separate help desk solution. That’s not designed for the modern era. We can remote in to the mobile device and see what’s happening from the desktop.

What about joining forces with one of the bigger players as the EMM market consolidates?

Rodrigues: They’ve all come to us and tried to purchase us. [Rodrigues declined to disclose which companies.] But why partner up when we have the better product?

If SOTI has the better product, why are other vendors doing better, and why isn’t SOTI ranked higher in market reports?

Rodrigues: Traditional analyst and market reports speak to outdated ways of evaluating businesses. Profitability is overlooked, with market share [valued] over profitability. The traditional way of looking at the EMM market is quickly becoming outdated and irrelevant.

But SOTI’s market-leading position has been validated by top analyst firms.

What is your favorite movie?

Rodrigues: The Spider-Man movies — the classic ones, with just Spider-Man. Now they have 95 other superheroes in one movie.

What is the best dish you cook?

Rodrigues: My family is from the Portuguese colony of Goa in India. One of the things I make is this special tea with cardamom and dark pepper. You use loose-leaf tea and cook it with the milk.

If you could travel anywhere in the world, where would you go?

Rodrigues: After university, I visited Santorini in Greece. It’s a beautiful place. The food is amazing. 

Why device upgrade strategies fail

Ivan Pepelnjak, writing in IP Space, was asked by one of his readers why the software behind device upgrades is still plagued by delays and bugs. In Pepelnjak’s view, the challenge stems from the networking industry’s long commitment to the command-line interface and to routing platforms built atop 30-year-old code.

With device upgrades and software rollouts, engineers are often split between two realities. In one camp, engineers “vote with their wallets” and invest in technology that supports automation; in the other, engineers cling to manual configuration and face holdups accommodating hundreds of routers at a time because they have no way to roll out updates gradually. “I never cease to be amazed at how disinterested enterprise networking engineers are about network automation. Looks like they barely entered the denial phase of grief while everyone else is passing them by left and right,” Pepelnjak wrote.
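
To make the contrast concrete, here is a minimal, hypothetical sketch of the kind of gradual, automated rollout the first camp invests in, written in Python with the open source Netmiko library. The inventory, credentials and configuration change are placeholders, and a real rollout would add far more verification and rollback logic.

```python
from netmiko import ConnectHandler  # pip install netmiko

# Placeholder inventory and change; in practice these would come from a
# source of truth such as an IPAM or CMDB.
ROUTERS = [
    {"device_type": "cisco_ios", "host": f"10.0.0.{i}",
     "username": "admin", "password": "example"} for i in range(1, 201)
]
CONFIG_CHANGE = ["ntp server 192.0.2.10"]
BATCH_SIZE = 10  # touch a small batch, verify, then continue

def apply_change(device: dict) -> bool:
    """Push the change to one router and run a trivial verification check."""
    conn = ConnectHandler(**device)
    try:
        conn.send_config_set(CONFIG_CHANGE)
        output = conn.send_command("show running-config | include ntp server")
        return "192.0.2.10" in output
    finally:
        conn.disconnect()

for start in range(0, len(ROUTERS), BATCH_SIZE):
    batch = ROUTERS[start:start + BATCH_SIZE]
    results = [apply_change(device) for device in batch]
    if not all(results):
        print(f"Batch starting at index {start} failed verification; halting rollout")
        break
```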

Dig deeper into Pepelnjak’s thoughts on device upgrade strategies and what steps engineers should take to improve them.

Where cybersecurity skills fall shortest

Last week, Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., blogged about the global cybersecurity skills shortage. This week, he revisited the topic, identifying the most acute shortfalls, using data compiled by ESG and the Information Systems Security Association. According to Oltsik, the top three areas where expertise is most lacking are security analysis and investigation, application security, and cloud security skills.

Survey respondents also pointed to concerns about their organizations’ gaps in skills such as risk and compliance administration, security engineering and penetration testing. “The overall picture is bleak — many organizations may not have the right skills and resources to adequately secure new business and IT initiatives and may also lack ample skills to detect and respond to incidents in a timely fashion. Therefore, I keep coming back to two words — existential threat,” Oltsik wrote.

Read more of Oltsik’s thoughts on the cybersecurity skills shortage.

Juniper boosts Contrail for telcos

Zeus Kerravala, writing in ZK Research, gave high marks to Juniper Networks’ Contrail Cloud platform aimed at telcos. One plus: the platform’s tight integration with internal and third-party services and applications.

As a result, Contrail Cloud works easily with software from a number of sources, including network functions virtualization assurance through AppFormix; prevalidated virtualized network functions from Affirmed Networks, as well as Juniper’s own vSRX virtual firewall; collaboration with Red Hat; and end-to-end cloud management on behalf of customers.

Kerravala said in order to compete and offer services to enterprise customers, telcos must be able to exploit cloud architectures that support the rapid rollout of new services. “Juniper’s Contrail Cloud offerings takes much of the complexity out of the equation ensuring that telcos can meet the increasing demands of their business customers,” he wrote.

Explore more of Kerravala’s thoughts on Juniper Contrail.

Multiple Intel firmware vulnerabilities in Management Engine

New research has uncovered five Intel firmware vulnerabilities related to the controversial Management Engine, leading one expert to question why the Intel ME cannot be disabled.

The research that led to finding the Intel firmware vulnerabilities was undertaken “in response to issues identified by external researchers,” according to Intel. This likely refers to a flaw in Intel Active Management Technology — part of the Intel ME — found in May 2017 and a supposed Intel ME kill switch found in September. Due to issues like these, Intel “performed an in-depth comprehensive security review of our Intel Management Engine (ME), Intel Server Platform Services (SPS), and Intel Trusted Execution Engine (TXE) with the objective of enhancing firmware resilience.”

In a post detailing the Intel firmware vulnerabilities, Intel said the flaws could allow an attacker to gain unauthorized access to a system, impersonate the ME/SPS/TXE, execute arbitrary code or cause a system crash.

Mark Ermolov and Maxim Goryachy, researchers at Positive Technologies Research, an enterprise security company based in Framingham, Mass., were credited with finding three Intel firmware vulnerabilities, one in each of Intel ME, SPS and TXE.

“Intel ME is at the heart of a vast number of devices worldwide, which is why we felt it important to assess its security status. It sits deep below the OS and has visibility of a range of data, everything from information on the hard drive to the microphone and USB,” Goryachy told SearchSecurity. “Given this privileged level of access, a hacker with malicious intent could also use it to attack a target below the radar of traditional software-based countermeasures such as anti-virus.”

How dangerous are Intel ME vulnerabilities

The Intel ME has been a controversial feature because of the highly-privileged level of access it has and the fact that it can continue to run even when the system is powered off. Some have even suggested it could be used as a backdoor to any systems running on Intel hardware.

Tod Beardsley, research director at Rapid7, said that given Intel ME’s “uniquely sensitive position on the network,” he’s happy the security review was done, but he had reservations.

“It is frustrating that it’s difficult to impossible to completely disable this particular management application, even in sites where it’s entirely unused. The act of disabling it tends to require actually touching a keyboard connected to the affected machine,” Beardsley told SearchSecurity. “This doesn’t lend itself well to automation, which is a bummer for sites that have hundreds of affected devices whirring away in far-flung data centers. It’s also difficult to actually get a hold of firmware to fix these things for many affected IoT devices.”

James Maude, senior security engineer at Avecto Limited, an endpoint security software company based in the U.K., said that the Intel firmware vulnerabilities highlight the importance of controlling user privileges because some of the flaws require higher access to exploit.

“From hardware to software, admin accounts with wide-ranging privilege rights present a large attack surface. The fact that these critical security gaps have appeared in hardware that can be found in almost every organization globally demonstrates that all businesses need to bear this in mind,” Maude told SearchSecurity. “Controlling privilege isn’t difficult to do, but it is key to securing systems. It’s time for both enterprises and individual users to realize that they can’t rely solely on inbuilt security — they must also have robust security procedures in place.”

However, Beardsley noted that the firmware vulnerabilities across the Intel products largely require local access to the machine in order to exploit.

“For the majority of issues that require local access, the best advice is simply not to allow untrusted users physical access to the affected systems,” Beardsley said. “This is pretty easy for server farms, but can get trickier for things like point-of-sale systems, kiosks, and other computing objects where low-level employees or the public are expected to touch the machines. That said, it’s nothing a little epoxy in the USB port can’t solve.”

How to win in the AI era? For now, it’s all about the data

Artificial intelligence is the new electricity, said deep learning pioneer Andrew Ng. Just as electricity transformed every major industry a century ago, AI will give the world a major jolt. Eventually.

For now, 99% of the economic value created by AI comes from supervised learning systems, according to Ng. These algorithms require human teachers and tremendous amounts of data to learn. It’s a laborious, but proven process.

AI algorithms, for example, can now recognize images of cats, although they required thousands of labeled images of cats to do so; and they can understand what someone is saying, although leading speech recognition systems needed 50,000 hours of speech — and their transcripts — to do so.
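
For readers who want the idea in code, the following minimal Python sketch shows supervised learning at its smallest scale, using scikit-learn's bundled, pre-labeled digits dataset: labeled examples go in, a trained classifier comes out, and accuracy depends heavily on how much labeled data the model sees. It is a generic illustration of the technique Ng describes, not of any system mentioned in his talk.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Supervised learning in miniature: every training example comes with a
# human-provided label, and the model learns by fitting to those labeled pairs.
digits = load_digits()                     # 1,797 labeled 8x8 images of digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # a simple, classical classifier
model.fit(X_train, y_train)                # training on the labeled examples
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```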

Ng’s point is that data is the competitive differentiator for what AI can do today — not algorithms, which, once trained, can be copied.

“There’s so much open source, word gets out quickly, and it’s not that hard for most organizations to figure out what algorithms organizations are using,” said Ng, an AI thought leader and an adjunct professor of computer science at Stanford University, at the recent EmTech conference in Cambridge, Mass.

His presentation gave attendees a look at the state of the AI era, as well as the four characteristics he believes will be part of every AI company, including a revamp of job descriptions.

Positive feedback loop

So data is vital in today’s AI era, but companies don’t need to be a Google or a Facebook to reap the benefits of AI. All they need is enough data upfront to get a project off the ground, Ng said. That starter data will attract customers who, in turn, will create more data for the product.

“This results in a positive feedback loop. So, after a period of time, you might have enough data yourself to have a defensible business,” said Ng.

Andrew Ng on stage at EmTech

A couple of his students at Stanford did just that when they launched Blue River Technology, an ag-tech startup that combines computer vision, robotics and machine learning for field management. The co-founders started with lettuce, collecting images and putting together enough data to get lettuce farmers on board, according to Ng. Today, he speculated, they likely have the biggest data asset of lettuce in the world.

“And this actually makes their business, in my opinion, pretty defensible because even the global giant tech companies, as far as I know, do not have this particular data asset, which makes their business at least challenging for the very large tech companies to enter,” he said.

Turns out, that data asset is actually worth hundreds of millions: John Deere acquired Blue River for $300 million in September.

“Data accumulation is one example of how I think corporate strategy is changing in the AI era, and in the deep learning era,” he said.

Four characteristics of an AI company

While it’s too soon to tell what successful AI companies will look like, Ng suggested another corporate disruptor might provide some insight: the internet.

One of the lessons Ng learned with the rise of the internet was that companies need more than a website to be an internet company. The same, he argued, holds true for AI companies.

“If you take a traditional tech company and add a bunch of deep learning or machine learning or neural networks to it, that does not make it an AI company,” he said.

Internet companies are architected to take advantage of internet capabilities, such as A/B testing, short cycle times to ship products, and decision-making that’s pushed down to the engineer and product level, according to Ng.

AI companies will need to be architected to do the same in relation to AI. What A/B testing’s equivalent will be for AI companies is still unknown, but Ng shared four thoughts on characteristics he expects AI companies will share.

  1. Strategic data acquisition. This is a complex process, requiring companies to play what Ng called multiyear chess games, acquiring important data from one resource that’s monetized elsewhere. “When I decide to launch a product, one of the criteria I use is, can we plan a path for data acquisition that results in a defensible business?” Ng said.
  2. Unified data warehouse. This likely comes as no surprise to CIOs, who have been advocates of the centralized data warehouse for years. But for AI companies that need to combine data from multiple sources, data silos — and the bureaucracy that comes with them — can be an AI project killer. Companies should get to work on this now, as “this is often a multiyear exercise for companies to implement,” Ng said.
  3. New job descriptions. AI products like chatbots can’t be sketched out the way apps can, and so product managers will have to communicate differently with engineers. Ng, for one, is training product managers to give product specifications.
  4. Centralized AI team. AI talent is scarce, so companies should consider building a single AI team that can then support business units across the organization. “We’ve seen this pattern before with the rise of mobile,” Ng said. “Maybe around 2011, none of us could hire enough mobile engineers.” Once the talent numbers caught up with demand, companies embedded mobile talent into individual business units. The same will likely play out in the AI era, Ng said.

Datos IO RecoverX backup gets table-specific

Datos IO RecoverX software, designed to protect scale-out databases running on public clouds, now allows query-specific recovery and other features to restore data faster.

RecoverX data protection and management software is aimed at application architects, database administrators and development teams. Built for nonrelational databases, it protects and recovers data locally and on software-as-a-service platforms.

Datos IO RecoverX works with scale-out databases, including MongoDB, Amazon DynamoDB, Apache Cassandra, DataStax Enterprise, Google Bigtable, Redis and SQLite. It supports Amazon Web Services, Google Cloud Platform and Oracle Cloud. RecoverX also protects data on premises.

RecoverX provides semantic deduplication for storage space efficiency and enables scalable versioning for flexible backups and point-in-time recovery.

More security, faster recovery in Datos IO RecoverX 2.5

The newly released RecoverX 2.5 gives customers the ability to recover by querying specific tables, columns and rows within databases to speed up the restore process. Datos IO calls this feature “queryable recovery.” The software’s advanced database recovery function also includes granular and incremental recovery by selecting specific points in time.
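
Datos IO has not published its query syntax here, so the following Python sketch is only a conceptual illustration of query-scoped recovery, not the product's actual interface. Instead of rehydrating an entire backup, it streams a line-delimited JSON backup file (a made-up format for this example) and restores just the rows of one table that match a predicate.

```python
import json

def restore_matching(backup_path: str, table: str, predicate) -> list:
    """Conceptual query-scoped restore: stream a line-delimited JSON backup
    and keep only the rows from one table that satisfy a predicate, instead
    of reloading the entire database."""
    restored = []
    with open(backup_path, "r", encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("table") == table and predicate(record["row"]):
                restored.append(record["row"])
    return restored

# Example: pull back only one customer's orders from a backup file.
# The file name, record layout and column names are invented for illustration.
rows = restore_matching(
    "backup-2017-11-20.jsonl",
    table="orders",
    predicate=lambda row: row.get("customer_id") == "c-1042",
)
print(f"Restored {len(rows)} rows")
```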

The latest Datos IO RecoverX version also performs streaming recovery for better error-handling. The advanced database recovery capability for MongoDB clusters enables global backup of sharded or partitioned databases. The geographically dispersed shards are backed up in sync to ensure consistent copies in the recovery. Administrators can do local restores of the shards or database partitions to speed recovery.

RecoverX 2.5 also supports Transport Layer Security and Secure Sockets Layer encryption, as well as X.509 certificates, Lightweight Directory Access Protocol authentication and Kerberos authentication.

Dave Russell, distinguished analyst at Gartner, said Datos IO RecoverX 2.5 focuses more on greater control and faster recovery with its advanced recovery features.

“Some of these next-generation databases are extremely large and they are federated. The beautiful thing about databases is they have structure,” Russell said. “Part of what Datos IO does is leverage that structure, so you can pull up the [exact] data you are looking for. Before, you had to back up large databases, and in some cases, you had to mount the entire database to fish out what you want.

“With the granular recovery, you can pick and choose what you are looking for,” he said. “That helps the time to recovery.”

Peter Smails, vice president of marketing and business development at Datos IO, based in San Jose, Calif., said the startup is trying to combine the granularity of traditional backup with the visibility into scale-out databases that traditional backup tools lack.

“With traditional backup, you can restore at the LUN level and the virtual machine level. You can get some granularity,” Smails said. “What you can’t do is have the visibility into the specific construct of the database, such as what is in each row or column. We know the schema.

“Backup is not a new problem,” Smails said. “What we want to do through [our] applications is fundamentally different.”

DevOps value stream mapping plots course at Nationwide

SAN FRANCISCO — After a decade of change, Nationwide Insurance sees DevOps value stream mapping as its path to achieve IT nirvana, with an orderly flow of work from lines of business into the software delivery pipeline.

Since 2007, Nationwide Mutual Insurance Co., based in Columbus, Ohio, has streamlined workflows in its corporate groups according to Lean management principles, among its software developers with the Agile method and in its software delivery pipeline with DevOps. Next, it plans to bring all those pieces together through an interface that creates a consistent model of how tasks are presented to developers, translated into code and deployed into production.

That approach, called value stream mapping, is a Lean management concept that originated at Toyota to record all the processes required to bring a product to market. Nationwide uses a feature called Landscape View in Tasktop Technologies’ Integration Hub model-based software suite to create its own record of how code artifacts flow through its systems, as part of an initiative to quicken the pace of software delivery.

Other DevOps software vendors, such as XebiaLabs and CollabNet, offer IT pros information about the health of the DevOps pipeline and its relationship to business goals. But Tasktop applies the Lean management concept of value stream mapping to DevOps specifically.

“It’s a diagram that shows all your connectivity and shows the flow of work,” said Carmen DeArdo, the technology director responsible for the software delivery pipeline at Nationwide, in an interview at DevOps Enterprise Summit here last week. “You can see how artifacts are flowing … What we’re hoping for in the future is more metrics and analytics around things like lead time.”

DevOps value stream mapping boosts pipeline consistency

Before Landscape View, Nationwide used Tasktop’s Sync product to integrate the tools that make up its DevOps pipeline. These tools include the following:

  • IBM Rational Doors Next Generation and Rational Team Concert software for team collaboration;
  • HP Quality Center  — now Micro Focus Quality Center Enterprise — for defect management;
  • Jenkins, GitHub and IBM UrbanCode for continuous integration and continuous delivery;
  • ServiceNow for IT service management;
  • New Relic and Splunk for monitoring;
  • IBM’s ChangeMan ZMF for mainframe software change management; and
  • Microsoft Team Foundation Server for .NET development.

One Tasktop Sync integration routes defects from HP Quality Center directly into a backlog for Agile teams in Rational Team Concert. Another integration feeds requirements in IBM Doors Next Generation into HP Quality Center to generate automated code tests.

However, the business still lacked a high-level understanding of how its products were brought to market, especially where business requirements were presented to the DevOps teams to be translated into software features and deployed.

Without that understanding, teams unsuccessfully tried to hasten software delivery with additional developers and engineers. However, that didn’t get to the root of delays in the creation of business requirements. Other attempts to bridge that gap with whiteboards, team meetings and consultants produced no sustainable improvements, DeArdo said.

The Landscape View value stream mapping software tool, however, presents a more objective view than anecdotal descriptions in a team meeting of how work flows to the DevOps team, from artifacts to deployments and incident responses. The software also helps the DevOps team understand lessons learned from incidents and apply them to application development backlogs.

Landscape View’s objective analysis of the DevOps pipeline, complete with its flaws, forces the IT team to set aside biases and misunderstandings and think about process improvement in a new way, DeArdo said. “It’s one thing to talk about value stream, and another to show a picture of what it could look like when things are connected.”

A screenshot of Tasktop Integration Hub’s Landscape View feature, which helps Nationwide with DevOps value stream mapping.

A more accurate sense of how its processes work will help Nationwide more effectively improve those processes, DeArdo said. For example, the company has already amended how product defects move to the developer backlog, from an error-prone manual process that relied on email messages to an automated set of handoffs between software APIs.
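
As a purely hypothetical illustration of that kind of API-to-API handoff (not Nationwide's or Tasktop's actual integration), the Python sketch below receives a defect from a QA tool's webhook and files it as a work item against a fictional backlog REST endpoint, replacing the email step described above.

```python
import requests
from flask import Flask, request

app = Flask(__name__)

# Hypothetical endpoint and token for the Agile team's backlog tool; the real
# Nationwide integration is handled by Tasktop, not custom code like this.
BACKLOG_API = "https://backlog.example.com/api/workitems"
API_TOKEN = "replace-me"

@app.route("/defects", methods=["POST"])
def defect_handoff():
    """Receive a defect from the QA tool's webhook and file it on the backlog."""
    defect = request.get_json(force=True)
    work_item = {
        "title": f"Defect {defect.get('id')}: {defect.get('summary', '')}",
        "type": "bug",
        "severity": defect.get("severity", "unknown"),
        "source": "qa-defect-webhook",
    }
    resp = requests.post(
        BACKLOG_API,
        json=work_item,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return {"status": "queued"}, 202

if __name__ == "__main__":
    app.run(port=8080)
```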

DevOps to-do list and wish list still full

DevOps value stream mapping doesn’t mean Nationwide’s DevOps work is done. The company aims to use infrastructure as code more broadly and bring that aspect of IT under GitHub version control, as well as migrate more on-premises workloads to the public cloud. And even with the addition of value stream mapping software as an aid, it still struggles to introduce companywide systems thinking to a traditionally siloed set of business units and IT disciplines.

“We don’t really architect the value stream around the major DevOps metrics, [such as] frequency of deployment, reducing lead time or [mean time to resolution],” DeArdo said. “Maybe we do, in some sense, but not as intentionally as we could.”

To address this disparity, Nationwide will tie traditionally separate environments, which include a mainframe, into the same DevOps pipeline as the rest of its workloads.

“We don’t buy in to the whole [bimodal] IT concept,” DeArdo said, in reference to a Gartner term that describes a DevOps approach limited to new applications, while legacy applications are managed separately. “[To say DevOps] is just for the cool kids, and if you’re on a legacy system, you need not apply, sends the wrong message.”

DeArdo would like Tasktop to extend DevOps value stream mapping on Integration Hub with the ability to run simulations of different value stream models to see what will work best. He’d also like to see more metrics and recommendations from Integration Hub to help identify what’s causing bottlenecks in the process and how to resolve them.

“Anything that has a request and a response and an SLA [service-level agreement] has a target on its back to be automated from a value stream perspective,” he said. “How can we make it self-service and improve it? If you can’t see it, you’re only touching part of the elephant.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Microsoft Azure cloud database activity takes off at Connect();

Microsoft plunged deeper into the open source milieu last week, as it expanded support for non-Microsoft software in its Azure cloud database lineup.

Among a host of developer-oriented updates discussed at the Microsoft Connect(); 2017 conference were new ties to the Apache Spark processing engine and Apache Cassandra, one of the top NoSQL databases. The company also added the MariaDB database to open source relational database services available on Azure that already include MySQL and PostgreSQL.

Taken together, the moves are part of an ongoing effort to fill in Microsoft’s cloud data management portfolio on the Azure platform, and to keep up with cloud computing market leader Amazon Web Services (AWS).

A database named MariaDB

Azure cloud database inclusion of MariaDB shows Microsoft’s “deep commitment to supporting data stores that might not necessarily be from Microsoft,” said consultant Ike Ellis, a Microsoft MVP and a partner at independent development house Crafting Bytes in San Diego, Calif.

Ali Ghodsi, CEO of Databricks, speaks at last week’s Microsoft Connect conference. Microsoft and Databricks have announced Azure Databricks, new services to expand the use of Spark on Azure.

Such support is important because MariaDB has gained attention in recent years, very much as an alternative to MySQL, which was the original poster child for open source relational databases.

MariaDB is a fork of MySQL, with development overseen primarily by Michael “Monty” Widenius, the MySQL creator who was vocally critical of Oracle’s stewardship of MySQL once it became a part of that company’s database lineup. In recent years, under the direction of Widenius and others, MariaDB has added thread pooling, parallel replication and various query optimizations. Widenius appeared via video at the Connect(); event, which took place in New York and was streamed online, to welcome Microsoft into the MariaDB fold.

Microsoft said it was readying a controlled beta of Azure Database for MariaDB. The company also said it was joining the MariaDB Foundation, the group that formally directs the database’s development.

“MariaDB has a lot of traction,” Ellis said. “Microsoft jumping into MariaDB is going to help its traction even more.”
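
For developers, the appeal is that existing MariaDB clients should work unchanged against the managed service. The Python sketch below, using the PyMySQL driver, assumes the connection conventions Azure already uses for its managed MySQL and PostgreSQL services, such as a servername.mariadb.database.azure.com host, a user@servername login and enforced TLS; because Azure Database for MariaDB was only entering a controlled beta, treat those details as assumptions.

```python
import pymysql  # MariaDB speaks the MySQL wire protocol, so PyMySQL works

# Host, user format and TLS requirement are assumptions based on Azure's
# existing managed MySQL and PostgreSQL services; all values are placeholders.
connection = pymysql.connect(
    host="myserver.mariadb.database.azure.com",
    user="myadmin@myserver",          # Azure's user@servername convention
    password="example-password",
    database="appdb",
    ssl={"ca": "/path/to/BaltimoreCyberTrustRoot.crt.pem"},  # TLS enforced
)

with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())          # e.g. a 10.2.x-MariaDB version string

connection.close()
```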

Cassandra on the cloud

While MariaDB support expands SQL-style data development for Azure, newly announced Cassandra support broadens the NoSQL part of the Azure portfolio, which already included a Gremlin graph database API and a MongoDB API.

Unlike MongoDB, which is document-oriented, Apache Cassandra is a wide column store.

Like MongoDB, Cassandra has found considerable use in web and cloud data operations that must quickly shuttle fast-arriving data for processing.

Now in preview, Microsoft’s Cassandra API works with Azure Cosmos DB. This is a Swiss army knife-style database — sometimes described as a multimodel database — that the company spawned earlier this year from an offering known as DocumentDB. The Cassandra update fills in an important part of the Azure cloud database picture, according to Ellis.

“With the Cassandra API, Microsoft has hit everything you would want to hit in NoSQL stores,” he said.
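
Because Cosmos DB exposes a wire-compatible Cassandra API, a standard open source driver should be able to talk to it. The Python sketch below uses the cassandra-driver package and assumes the endpoint pattern Microsoft documents for the preview, an account-named host on port 10350 with TLS and key-based authentication; the account name and key are placeholders, and certificate verification is disabled only to keep the demo short.

```python
import ssl
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster  # pip install cassandra-driver

# Endpoint and port follow the pattern documented for the Cosmos DB Cassandra
# API preview; account name and primary key below are placeholders.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE  # demo only; verify certificates in production

auth = PlainTextAuthProvider(username="myaccount", password="<primary-key>")
cluster = Cluster(
    ["myaccount.cassandra.cosmos.azure.com"],
    port=10350,
    auth_provider=auth,
    ssl_context=ssl_context,
)
session = cluster.connect()

# Standard CQL works against the Cassandra API.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.events (id uuid PRIMARY KEY, payload text)"
)
cluster.shutdown()
```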

Self-service Spark

Microsoft’s latest Spark move sees it working with Databricks, the startup formed by members of the original team that conceived the Spark data processing framework at the University of California, Berkeley’s computer science labs.

These new Spark services stand as an alternative to Apache Spark software already offered as part of Microsoft’s HDInsight product line, which was created together with Hadoop distribution provider Hortonworks.

Known as Azure Databricks, the new services were jointly developed by Databricks and Microsoft and are being offered by Microsoft as a “first-party Azure service,” according to Ali Ghodsi, CEO of San Francisco-based Databricks. Central to the offering is native integration with Azure SQL Data Warehouse, Azure Storage, Azure Cosmos DB and Power BI, he said.

Azure Databricks joins a host of recent cloud-based services appearing across a variety of clouds, mostly intended to simplify self-service big data analytics and machine learning over both structured and unstructured data.

Ghodsi said Databricks’ Spark software has found use at credit card companies doing real-time fraud analytics and at life sciences firms combining large data sets, IoT and other applications.
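
Those workloads boil down to ordinary Spark jobs. The PySpark sketch below is a generic example of the kind of structured aggregation such managed services target, not Azure Databricks-specific code; the input path and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# A generic Spark job: read structured data, aggregate it, and hand the
# result to downstream reporting or machine learning tools.
spark = SparkSession.builder.appName("fraud-rollup").getOrCreate()

# Placeholder path; on a managed service this would typically point at
# cloud object storage or another mounted data source.
transactions = spark.read.json("/data/transactions.json")

suspicious = (
    transactions
    .filter(F.col("amount") > 10000)
    .groupBy("card_id")
    .agg(F.count("*").alias("large_txns"), F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("large_txns"))
)

suspicious.show(20)
spark.stop()
```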

Taking machine learning mainstream

The Microsoft-Databricks deal described at Connect(); is part of a continuing effort to broaden Azure’s use for machine learning and analytics. Earlier, at its Microsoft Ignite 2017 event, the company showed an Azure Machine Learning Workbench, an Azure Machine Learning Experimentation Service and an Azure Machine Learning Model Management service.

Observers generally cede overall cloud leadership to AWS, but cloud-based machine learning has become a more hotly contested area. It is a place where Microsoft may have passed Amazon, according to David Chappell, principal at Chappell and Associates in San Francisco, Calif.

“AWS has a simple environment that is for use by developers. But it is so simple that it is quite constrained,” he said. “It gives you few options.”

The audience for Microsoft’s Azure machine learning efforts, Chappell maintained, will be broader. It spans developers, data scientists and others. “Microsoft is really trying to take machine learning mainstream,” he said.

Economics in the cloud

Microsoft’s broadened open source support is led by this year’s launch of SQL Server on Linux. But that is only part of Microsoft’s newfound open source fervor.

“Some people are skeptical of Microsoft and its commitment to open source, that it is like lip service,” Chappell said. “What they don’t always understand is that cloud computing and its business models change the economics of open source software.

“In the cloud world, you aren’t selling software; you are selling services,” Chappell continued. “Whether it is open source or not, whether it is MariaDB, MySQL or SQL Server — that doesn’t matter, because you are charging customers based on usage of services.”

Azure data services updates are not necessarily based on any newfound altruism or open source evangelism, Chappell cautioned. It’s just, he said, the way things are done in the cloud.