Automated machine learning vendor DotData looks to enter the IoT market with DotData Stream, a new product designed for real-time analytics in low-latency, low-memory environments, such as at the edge.
The product, announced Tuesday, may enable DotData to better compete in the manufacturing and finance fields, which typically rely on real-time analytics and make heavy use of IoT devices.
New use cases
“DotData Stream now opens an entirely new world of IoT use cases for DotData’s autoML platform,” Forrester analyst Mike Gualtieri said.
While it’s priced as a separate product, DotData Stream pairs with DotData Enterprise, the company’s flagship automated machine learning platform. Customers can build a machine learning model within DotData Enterprise, then export it as a Docker image with the model embedded, providing predictive capabilities independent of DotData Enterprise.
Customers can then deploy the low-latency, low-memory model on IoT devices for use cases that require real-time processing, such as fraud detection, credit approval or predictive maintenance.
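DotData has not published details of the exported container’s interface, but the kind of scoring such an embedded model performs can be sketched in pure Python. Everything below — the feature names, weights and example inputs — is hypothetical, chosen only to show why a precomputed model can run in a low-latency, low-memory edge environment:

```python
import math

# Hypothetical precomputed weights for a fraud-detection model.
# In practice these would be learned and exported by the autoML platform.
WEIGHTS = {"amount_zscore": 1.8, "new_device": 2.1, "night_txn": 0.9}
BIAS = -4.0

def score(features):
    """Return a fraud probability for one transaction.

    A dot product plus a sigmoid needs no external dependencies and
    runs in microseconds -- the footprint an edge deployment needs.
    """
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A routine daytime transaction scores low; an anomalous one scores high.
low = score({"amount_zscore": 0.1, "new_device": 0, "night_txn": 0})
high = score({"amount_zscore": 2.5, "new_device": 1, "night_txn": 1})
```

The point of the sketch is the deployment shape, not the model: all heavy training stays in the central platform, and only fixed weights travel to the device.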
“DotData Stream is ideal for applications where real-time prediction services are needed,” said Ryohei Fujimaki, CEO and founder of DotData.
“A few common use cases for DotData Stream are instant credit approval, fraud detection, automated underwriting, dynamic pricing and industrial IoT,” he said.
Competing automated machine learning
Automated machine learning, the process of automating parts of machine learning development, is a relatively new field. DotData, founded in 2018, competes against a handful of vendors in the field, including leaders DataRobot and H2O.ai.
Both DataRobot and H2O.ai enable edge model deployments, Gualtieri said. DotData Stream allows DotData to now compete in industrial IoT applications at the edge.
“The use of autoML has the potential to be a game changer for manufacturing companies because autoML is [a] significantly faster way to develop models,” Gualtieri said.
However, he added, “AutoML is not yet widely used for IoT use cases, even though the potential exists.”
“I don’t think any one of these vendors dominates in edge deployments at this time,” Gualtieri said.
While DotData Stream does open new markets for the vendor, the company may still need to prove to potential enterprise customers in the finance and manufacturing industries that it has the chops to tackle certain use cases.
“Enterprise buyers expect to see domain expertise in understanding specific use cases, such as fraud detection and predictive maintenance,” Gualtieri said.
Cornell University is becoming a hotbed of warning about automated hiring systems. In two separate papers, researchers have given the systems considerable scrutiny. Both papers cite problems with AI transparency, or the ability to explain how an AI system reaches a conclusion.
Vendors are selling automated hiring systems partly as a remedy to human bias. But they also argue they can speed up the hiring process and select applicants who will make good employees.
Manish Raghavan, a computer science doctoral student at Cornell who led the most recent study, questions vendors’ claims. If AI is doing a better job than hiring managers, “how do we know that’s the case or when will we know that that’s the case?” he said.
A major thrust of the research is the need for AI transparency. That’s not only needed for the buyers of automated hiring systems, but for job applicants as well.
At Cornell, Raghavan knows students who take AI-enabled tests as part of a job application. “One common complaint that I’ve heard is that it just viscerally feels upsetting to have to perform for a robot,” he said.
A job applicant may have to install an app to film a video interview, play a game that may measure cognitive ability or take a psychometric test that can be used to measure intelligence and personality.
“This sort of feels like they’re forcing you [the job applicant] to invest extra effort, but they’re actually investing less effort into you,” Raghavan said. Rejected applicants won’t know why they were rejected, the standards used to measure their performance, or how they can improve, he said.
Nascent research, lack of regulation
The paper, “Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices,” is the work of a multidisciplinary team of computer scientists, as well as those with legal and sociological expertise. It argues that HR vendors are not providing insights into automated hiring systems.
The researchers looked at the public claims of nearly 20 vendors that sell these systems. Many are startups, although some have been around for more than a decade. The researchers argue that vendors are taking nascent research and translating it into practice “at sort of breakneck pace,” Raghavan said. They’re able to do so because of a lack of regulation.
Vendors can produce data from automated hiring systems that shows how their systems perform in helping achieve diversity, Raghavan said. “Their diversity numbers are quite good,” but they can cherry-pick what data they release, he said. Nonetheless, “it also feels like there is some value being added here, and their clients seem fairly happy with the results.”
But there are two levels of transparency that Raghavan would like to see improve. First, he suggested vendors release internal studies that show the validity of their assessments. The data should include how often vendors are running into issues of disparate impact, which refers to a U.S. Equal Employment Opportunity Commission formula for determining if hiring is having a discriminatory impact on a protected group.
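The EEOC’s four-fifths (80%) rule of thumb makes the disparate impact calculation concrete: divide the selection rate of the protected group by that of the most-selected group, and treat a ratio below 0.8 as evidence of adverse impact. A minimal sketch (the applicant counts are invented):

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Compare the selection rate of group A with that of the
    most-selected group B. Under the EEOC's four-fifths rule,
    a ratio below 0.8 is treated as evidence of disparate impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# 30 of 100 applicants from group A selected vs. 50 of 100 from group B:
ratio = adverse_impact_ratio(30, 100, 50, 100)
flagged = ratio < 0.8  # 0.6 falls below the four-fifths threshold
```

This is the kind of summary statistic Raghavan suggests vendors could routinely publish from their internal validation studies.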
A second step for AI transparency involves having third-party independent researchers do some of their own analysis.
Vendors argue that AI systems do a better job than humans in reducing bias. But researchers see a risk that the systems could embed certain biases against a group of people that won’t be easily discovered without an understanding of how these systems work.
One problem often cited is that an AI-enabled system can help improve diversity overall but still discriminate against certain groups or people. New York University researchers recently noted that most of the AI code today is written by young white males, who may encode their biases.
Ask about the ‘magic fairy dust’
Ben Eubanks, principal analyst at Lighthouse Research & Advisory, believes the Cornell paper should be on every HR manager’s reading list, “not necessarily because it should scare them away but because it should encourage them to ask more questions about the magic fairy dust behind some technology claims.”
“Hiring is and has always been full of bias,” said Eubanks, who studies AI use in HR. “Algorithms are subject to some of those same constraints, but they can also offer ways to mitigate some of the very real human bias in the process.”
But the motivation for employers may be different, Eubanks said.
“Employers adopting these technologies may be more concerned initially with the outcomes — be it faster hiring, cheaper hiring, or longer retention rates — than about the algorithm actually preventing or mitigating bias,” Eubanks said. That’s what HR managers will likely be rewarded on.
The second paper, by Cornell’s Ifeoma Ajunwa, raises problems with automated hiring, including systems that “discreetly eliminate applicants from protected categories without retaining a record.”
AI transparency adds confidence
Still, in an interview, Cornell’s Raghavan was even-handed about using AI and didn’t warn users away from automated hiring systems. He can see use cases but believes there is good reason for caution.
“I think what we can agree on is that the more transparency there is, the easier it will be for us to determine when is or is not the right time or the right place to be using these systems,” Raghavan said.
“A lot of these companies and vendors seem well-intentioned — they think what they’re doing is actually very good for the world,” he said. “It’s in their interest to have people be confident in their practices.”
Jumio, the identity verification technology vendor, released Jumio Go, a real-time, automated platform for identity verification. Coming at a time when cybercriminals are becoming ever more technologically advanced, Jumio Go uses a combination of AI, optical character recognition and biometrics to automatically verify a user’s identity in real time.
Jumio, founded in 2010, has long sold an AI-based fraud prevention platform used by organizations in the financial services, travel, gaming and retail industries. The Palo Alto, Calif., vendor’s new Jumio Go platform builds on its existing technologies, which include facial recognition and verification tools, while also simplifying them.
Jumio Go, launched Oct. 28, provides real-time identity verification, giving users results much faster than Jumio’s flagship product, which takes 30 to 60 seconds to verify a user, according to Jumio. It also eliminates the need for a human in the loop: the process of matching a real-time photo of a user’s face to a saved photo is entirely automated. That speeds up the process and frees employees for other tasks, but it also potentially could make verification a little less secure.
The new product accepts fewer ID documents than Jumio’s flagship platform, but the tradeoff is the boost in real-time speed. Using natural language processing, Jumio’s platforms can read through and extract relevant information from documents. The system scans that information for irregularities, such as odd wordings or misspellings, which could indicate fraud.
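Jumio has not disclosed how its irregularity scan works. As a toy illustration of the general idea — wording that deviates from genuine documents is a fraud signal — here is a sketch that flags OCR-extracted field labels not found in a list of expected phrases (the phrase list and inputs are hypothetical):

```python
# Hypothetical set of field labels that appear on genuine documents.
EXPECTED_PHRASES = {"driver license", "date of birth", "expiration date"}

def suspicious_phrases(extracted_lines):
    """Return extracted lines whose labels match no expected phrase.

    A misspelled or oddly worded label (e.g. a typo a forger
    introduced) will not match and gets flagged for review.
    """
    flagged = []
    for line in extracted_lines:
        label = line.strip().lower()
        if label and label not in EXPECTED_PHRASES:
            flagged.append(line)
    return flagged

# "Date of Brith" is misspelled, so it is the only flagged line.
flags = suspicious_phrases(["Driver License", "Date of Brith", "Expiration Date"])
```

A production system would use fuzzy matching and learned models rather than exact lookup, but the signal being exploited is the same.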
AI for fraud prevention in finance
For financial institutions, whose customers increasingly conduct business online, this type of fraud detection and identity verification technology is vital.
For combating fraud, “leveraging AI is critical,” said Amyn Dhala, global product lead at AI Express, Mastercard’s methodology for the deployment of AI that grew out of the credit card company’s 2017 acquisition of Brighterion.
Through AI Express, Mastercard sells AI for fraud prevention tools, as well as AI-powered technologies, to help predict credit risk, manage network security and catch money-laundering.
AI, Dhala said in an interview at AI World 2019 in Boston, is “important to provide a better customer experience and drive profitability,” as well as to ensure customer safety.
The 9 to 5 fraudster
For financial institutions, blocking fraudsters is no simple task. Criminals intent on fraud are taking a professional approach to their work, working for certain hours during the week and taking weekends off, according to an October 2019 report from Onfido, a London-based vendor of AI-driven identity software.
Also, today’s fraudsters are highly technologically skilled, said Dan Drapeau, head of technology at Blue Fountain Media, a digital marketing agency owned by Pactera, a technology consulting and implementation firm based in China.
“You can always throw new technology at the problem, but cybercriminals are always going to do something new and innovative, and AI algorithms have to catch up to that,” Drapeau said. “Cybercriminals are always that one step ahead.”
“As good as AI and machine learning get, it still will always take time to catch up to the newest innovation from criminals,” he added.
Still, by using AI for fraud prevention, financial organizations can stop a good deal of fraud automatically, Drapeau said. For now, combining AI with manual work, such as checking or double-checking data and verification documents, works best, he said.
Automated transcription services have a variety of applications. Enterprises frequently use them to transcribe meetings, and call centers use them to transcribe phone calls into text to more easily analyze the substance of each call.
The services are widely used to aid the deaf, by automatically providing subtitles to videos and television shows, as well as in call centers that enable the deaf to communicate with each other by transcribing each person’s speech.
VTCSecure and Google
VTCSecure, a several-year-old startup based in Clearwater, Fla., uses Google Cloud’s Speech-to-Text services to power a transcription platform used by businesses, nonprofits and municipalities around the world to aid the deaf and hard of hearing.
The platform offers an array of capabilities, including video services that connect users to a real-time sign-language interpreter, and deaf-to-deaf call centers. The call centers, enabling users to connect via video, voice or real-time-text, build on Google Cloud’s Speech-to-Text technology to provide users with automatic transcriptions.
Google Cloud has long sold Speech-to-Text and Text-to-Speech services, which provide developers with the data and framework to create their own transcription or voice applications. For Peter Hayes, CEO of VTCSecure, the services, powered in part by speech technologies developed by parent company Alphabet Inc.’s DeepMind division, were easy to set up and adapt.
“It was one of the best processes,” Hayes said, adding that his company has been happy with what it considers a high level of support from Google.
Hayes said Google provides technologies, as well as development support, for VTCSecure and for his newest company, TranslateLive.
Hayes also runs the platform on Google Cloud, after doing a demo for the FTC that he said lagged on a rival cloud network.
Google Cloud’s Speech-to-Text and Text-to-Speech technology, as well as the translation technologies used for TranslateLive, constantly receive updates from Google, Hayes said.
Verbit, unlike Google, adds humans to the transcription loop, explained Tom Livne, co-founder and CEO of the Israel-based startup. It relies on its homegrown models for an initial transcription, then passes the output to remote human transcribers, who review it and make edits.
The combined process produces high accuracy, Livne said.
A lawyer by training, Livne started Verbit to sell specifically to law firms. However, the vendor quickly moved into the education space.
“We want to create an equal opportunity for students with disabilities,” Livne said. Technology, he noted, has long been able to aid those with disabilities.
George Mason University, a public university in Fairfax, Va., relies on Verbit to automatically transcribe videos and online lectures.
“We address the technology needs of students with disabilities here on campus,” said Korey Singleton, assistive technology initiative manager at George Mason.
After trying out other vendors, the school settled on Verbit largely because of its competitive pricing, Singleton said. As most of its captioning and transcription comes from the development of online courses, the school doesn’t require a quick turnaround, Singleton said. So, Verbit was able to offer a cheaper price.
“We needed to find a vendor that could do everything we needed to do and provide us with a really good rate,” Singleton said. Verbit provided that.
Moving forward, George Mason will be looking for a way to automatically integrate transcripts with the courses. Now, putting them together is a manual process, but with some APIs and automated technologies, Singleton said he’s aiming to make that happen automatically.
The promise of AIOps platforms for enterprise IT pros lies in their potential to provide automated root cause analysis, and early customers have begun to use these tools to speed up problem resolution.
The city of Las Vegas needed an IT monitoring tool to replace a legacy SolarWinds deployment in early 2018 and found FixStream’s Meridian AIOps platform. The city introduced FixStream to its Oracle ERP and service-oriented architecture (SOA) environments as part of its smart city project, an initiative that will see municipal operations optimized with a combination of IoT sensors and software automation. Las Vegas is one of many U.S. cities working with AWS, IBM and other IT vendors on such projects.
FixStream’s Meridian offers an overview of how business process performance corresponds to IT infrastructure, which matters as the city, in the course of its digital transformation, updates its systems more often and spends less time on each update, said Michael Sherwood, CIO for the city of Las Vegas.
“FixStream tells us where problems are and how to solve them, which takes the guesswork, finger-pointing and delays out of incident response,” he said. “It’s like having a new help desk department, but it’s not made up of people.”
The tool first analyzes a problem and offers insights as to the cause. It then automatically creates a ticket in the company’s ServiceNow IT service management system. ServiceNow acquired DxContinuum in 2017 and released its intellectual property as part of a similar help desk automation feature, called Agent Intelligence, in January 2018, but it’s the high-level business process view that sets FixStream apart from ServiceNow and other tools, Sherwood said.
FixStream’s Meridian AIOps platform creates topology views that illustrate the connections between parts of the IT infrastructure and how they underpin applications, along with how those applications underpin business processes. This was a crucial level of detail when a credit card payment system crashed shortly after FixStream was introduced to monitor Oracle ERP and SOA this spring.
“Instead of telling us, ‘You can’t take credit cards through the website right now,’ FixStream told us, ‘This service on this Oracle ERP database is down,'” Sherwood said.
This system automatically correlated an application problem to problems with deeper layers of the IT infrastructure. The speedy diagnosis led to a fix that took the city’s IT department a few hours versus a day or two.
AIOps platform connects IT to business performance
Some IT monitoring vendors associate application performance management (APM) data with business outcomes in a way similar to FixStream. AppDynamics, for example, offers Business iQ, which associates application performance with business performance metrics and end-user experience. Dynatrace offers end-user experience monitoring and automated root cause analysis based on AI.
The differences lie in the AIOps platforms’ deployment architectures and infrastructure focus, said Nancy Gohring, an analyst with 451 Research who specializes in IT monitoring tools and wrote a white paper that analyzes FixStream’s approach.
“Dynatrace and AppDynamics use an agent on every host that collects app-level information, including code-level details,” Gohring said. “FixStream uses data collectors that are deployed once per data center, which means they are more similar to network performance monitoring tools that offer insights into network, storage and compute instead of application performance.”
FixStream integrates with both Dynatrace and AppDynamics to join its infrastructure data to the APM data those vendors collect. Its strongest differentiation is in the way it digests all that data into easily readable reports for senior IT leaders, Gohring said.
“It ties business processes and SLAs [service-level agreements] to the performance of both apps and infrastructure,” she said.
OverOps fuses IT monitoring data with code analysis
While FixStream makes connections between low-level infrastructure and overall business performance, another AIOps platform, made by OverOps, connects code changes to machine performance data. So, DevOps teams that deploy custom applications frequently can understand whether an incident is related to a code change or an infrastructure glitch.
OverOps’ eponymous software has been available for more than a year, and larger companies, such as Intuit and Comcast, have recently adopted the software. OverOps identified the root cause of a problem with Comcast’s Xfinity cable systems as related to fluctuations in remote-control batteries, said Tal Weiss, co-founder and CTO of OverOps, based in San Francisco.
OverOps uses an agent that can be deployed on containers, VMs or bare-metal servers, in public clouds or on premises. It monitors the Java Virtual Machine or Common Language Runtime interface for .NET apps. Each time code loads into the CPU via these interfaces, OverOps captures a data signature and compares it with code it’s previously seen to detect changes.
From there, the agent produces a stream of log-like files that contain both machine data and code information, such as the number of defects and the developer team responsible for a change. The tool is primarily intended to catch errors before they reach production, but it can be used to trace the root cause of production glitches, as well.
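OverOps’ signature format is proprietary, but the change-detection idea described above — fingerprint each unit of code as it loads and compare against what was seen before — can be sketched with an ordinary content hash (the unit names and code bytes below are invented):

```python
import hashlib

# Signatures of code units seen so far, keyed by unit name.
seen_signatures = {}

def register(unit_name, code_bytes):
    """Record a code unit as it loads; return True if it is new or
    has changed since it was last seen, mirroring the signature
    comparison the article describes at the JVM/CLR level."""
    sig = hashlib.sha256(code_bytes).hexdigest()
    changed = seen_signatures.get(unit_name) != sig
    seen_signatures[unit_name] = sig
    return changed

first = register("PaymentService", b"class PaymentService { v1 }")    # new unit
same = register("PaymentService", b"class PaymentService { v1 }")     # unchanged
changed = register("PaymentService", b"class PaymentService { v2 }")  # code change
```

Pairing each detected change with deployment metadata (ticket, author, timestamp) is what lets the tool answer “did a code change precipitate this failure?”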
“If an IT ops or DevOps person sees a network failure, with one click, they can see if there were code changes that precipitated it, if there’s an [Atlassian] Jira ticket associated with those changes and which developer to communicate with about the problem,” Weiss said.
In August 2018, OverOps updated its AIOps platform to feed code analysis data into broader IT ops platforms with a RESTful API and support for StatsD. Available integrations include Splunk, ELK, Dynatrace and AppDynamics. In the same update, the OverOps Extensions feature also added a serverless AWS Lambda-based framework, as well as on-premises code options, so users can create custom functions and workflows based on OverOps data.
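StatsD itself is a simple, open plain-text protocol: a metric is a UDP datagram of the form `<name>:<value>|<type>`. A sketch of emitting a counter the way such an integration might (the metric name is hypothetical):

```python
import socket

def format_counter(name, value=1):
    """StatsD plain-text wire format for a counter: "<name>:<value>|c"."""
    return f"{name}:{value}|c"

def send_metric(payload, host="127.0.0.1", port=8125):
    # UDP is fire-and-forget: if no StatsD daemon is listening,
    # the send is silently dropped, which keeps instrumentation cheap.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode(), (host, port))
    sock.close()

# e.g. count three newly detected code defects (metric name invented):
payload = format_counter("overops.new_defects", 3)
send_metric(payload)
```

Because the protocol is this small, almost any monitoring platform can consume it, which is what makes StatsD a convenient bridge between tools.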
“There’s been a platform vs. best-of-breed tool discussion forever, but the market is definitely moving toward platforms — that’s where the money is,” Gohring said.
Automated infrastructure management took a step forward with the emergence of AIOps monitoring tools that use machine learning to proactively identify infrastructure problems.
IT monitoring tools released in the last two months by New Relic, BMC and Splunk incorporate AI features, mainly machine learning algorithms, to correlate events in the IT infrastructure with problems in application and business environments. Enterprise IT ops pros have begun to use these tools to address problems before they arise.
New Relic’s machine learning features, codenamed Seymour at its beta launch in 2016, helped the mobile application development team at Scripps Networks Interactive in Knoxville, Tenn., identify misconfigured Ruby application dependencies and head off potential app performance issues.
“Just doing those simple updates allowed them to fix some errors they hadn’t realized were there,” said Mark Kelly, director of cloud and infrastructure architecture at Scripps, which owns web and TV channels, such as Food Network and HGTV, that are viewed by an average of 50 million to 70 million people per day.
Seymour is now generally available in New Relic’s Radar and Error Profiles features, which add a layer of analytics over the performance data collected by New Relic’s application performance management tools that help users hone their reactions. Radar uses algorithms similar to e-commerce product recommendation engines to tailor dashboards to individual users’ automated infrastructure management needs. The Error Profiles feature narrows down the possible causes of IT infrastructure errors. An engineer can then scan a prioritized list of the most unusual behaviors to identify a problem’s root cause.
“Before Radar, [troubleshooting] required some manual digging — now it’s automatically identifying problem areas we might want to look for,” Kelly said. “It takes some of that searching for the needle in the haystack out of the equation for us.”
Data correlation stems IT ops ticket tsunami
IT monitoring tools from BMC and Splunk also expanded their AIOps features this month. BMC’s TrueSight 11 IT monitoring and management platform will use new algorithms within the TrueSight Intelligence SaaS product to categorize service tickets so IT ops pros can more quickly resolve incidents, as well as assess the financial impact of bugs in application code. Event stream analytics in TrueSight Intelligence can predict IT service deterioration, and a separately licensed TrueSight Cloud Cost Control product forecasts infrastructure costs to optimize workload placement in hybrid clouds.
Park Place Technologies, an after-warranty server management company in Cleveland, Ohio, and a BMC partner, plans to fold TrueSight Intelligence analytics into a product that forewarns customers of equipment outages.
“We have ways to filter email alerts sent by equipment based on subject lines, but TrueSight does it faster, and can pull out strings of data from the emails as well,” said Chris Adams, president and COO of Park Place. “We want to be able to call the customer and say, ‘Three disk drives are going to fail, and here’s why.'”
Version 3.0 of Splunk’s IT Service Intelligence (ITSI) tool also correlates event data to pass along critical alerts to IT admins so they can more easily process Splunk log and monitoring data. ITSI 3.0 root cause analysis features predict the outcome of infrastructure changes, more quickly identify problem servers, and integrate with ITSM tools such as ServiceNow and PagerDuty — which offer their own machine learning features to further prune the flow of IT alerts.
Automated infrastructure management takes shape with AIOps
Eventually, IT pros hope that AIOps monitoring tools will go beyond dashboards and into automated infrastructure management action, through proactive changes that head off infrastructure problems, as well as application pull requests that address code problems through the DevOps pipeline.
“The Radar platform has that potential, especially if it can start integrating into our pipeline and help change events before they happen,” Kelly said. “I want it to help me do some of those automated tasks, detect my stacks going bad in advance, and give me some of that proactive feedback before I have a problem.”
Such products are already on the way. Cisco previewed a feature at its AppDynamics Summit recently that displays a forecast of events along a timeline, and highlights transactions that will be critically impacted by problems as they develop. The still-unnamed tool presents theories about the causes of future problems along with recommended actions for remediation. In the product demo, the user interface presented an “execute” button for recommended remediation, along with a button to choose “other actions.”
Cisco plans to eventually integrate technology from recently acquired Perspica with AppDynamics, which will perform machine learning analysis on streams of infrastructure data at wire speed.
For now, AppDynamics customers said they’re interested in ways such AIOps features can improve business outcomes. But the tools must still prove themselves valuable beyond what humans can forecast based on their own experience.
“It’s not going to replace a good analyst at this point — that’s what the analyst does for us, says how a change is going to affect the business,” said Krishna Dammavalam, SRE for Idexx Labs, a veterinary testing and lab equipment company in Westbrook, Maine. “If machine learning’s predictions are better than the analyst’s, that’s where the product will have value, but if the analyst is still better, there will still be room to grow.”
Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.
Nearly half of all technology professionals expect to be automated out of a job in 10 years. That statistic, from a technology survey by recruitment firm Harvey Nash, is grim enough, but what is perhaps worse is that the erosion has already begun. From software development to operations, security and data analytics, the rise of smart, flexible technology is enabling people without traditional tech backgrounds to take on what were once “expert-only” jobs. Tools such as low-code platforms, automation and even artificial intelligence are giving citizen data scientists a seat at IT’s table. And IT departments as we’ve known them will never be the same again.
A look at the data science field underscores this phenomenon. A skill in extreme demand, data science used to be considered the domain of highly trained mathematicians. But a slew of automated tools is creating a generation of citizen data scientists and bringing analytics to the masses. In the software development space, low- and no-code platforms let nearly anyone create mobile applications, something that has helped with the worldwide shortage of developers. But with artificial intelligence on the horizon, many worry that knowing how to code won’t be sufficient for future job security. And in software testing and on the operations side, automation continues to chip away at what have traditionally been manual (and plentiful) jobs.
The implications of these changes are vast, starting with security. It’s tough to know how IT departments can enforce compliance and security measures over a disparate group of citizen data scientists. And that’s just for starters. Keep reading to see our take on how IT departments are changing and adapting to the rise of the citizen technologist.