
How to win in the AI era? For now, it’s all about the data

Artificial intelligence is the new electricity, said deep learning pioneer Andrew Ng. Just as electricity transformed every major industry a century ago, AI will give the world a major jolt. Eventually.

For now, 99% of the economic value created by AI comes from supervised learning systems, according to Ng. These algorithms require human teachers and tremendous amounts of data to learn. It’s a laborious, but proven process.

AI algorithms can now recognize images of cats, for example, but only after training on thousands of labeled cat images; leading speech recognition systems can understand what someone is saying, but only after training on 50,000 hours of speech and the corresponding transcripts.

Ng’s point is that data is the competitive differentiator for what AI can do today — not algorithms, which, once trained, can be copied.

“There’s so much open source, word gets out quickly, and it’s not that hard for most organizations to figure out what algorithms organizations are using,” said Ng, an AI thought leader and an adjunct professor of computer science at Stanford University, at the recent EmTech conference in Cambridge, Mass.

His presentation gave attendees a look at the state of the AI era, as well as the four characteristics he believes every AI company will share, including a revamp of job descriptions.

Positive feedback loop

So data is vital in today’s AI era, but companies don’t need to be a Google or a Facebook to reap the benefits of AI. All they need is enough data upfront to get a project off the ground, Ng said. That starter data will attract customers who, in turn, will create more data for the product.

“This results in a positive feedback loop. So, after a period of time, you might have enough data yourself to have a defensible business,” said Ng.

Andrew Ng on stage at EmTech

A couple of his students at Stanford did just that when they launched Blue River Technology, an ag-tech startup that combines computer vision, robotics and machine learning for field management. The co-founders started with lettuce, collecting images and putting together enough data to get lettuce farmers on board, according to Ng. Today, he speculated, they likely have the largest data asset on lettuce in the world.

“And this actually makes their business, in my opinion, pretty defensible because even the global giant tech companies, as far as I know, do not have this particular data asset, which makes their business at least challenging for the very large tech companies to enter,” he said.

Turns out, that data asset is actually worth hundreds of millions: John Deere acquired Blue River for $300 million in September.

“Data accumulation is one example of how I think corporate strategy is changing in the AI era, and in the deep learning era,” he said.

Four characteristics of an AI company

While it’s too soon to tell what successful AI companies will look like, Ng suggested another corporate disruptor might provide some insight: the internet.

One of the lessons Ng learned with the rise of the internet was that companies need more than a website to be an internet company. The same, he argued, holds true for AI companies.

“If you take a traditional tech company and add a bunch of deep learning or machine learning or neural networks to it, that does not make it an AI company,” he said.

Internet companies are architected to take advantage of internet capabilities, such as A/B testing, short cycle times to ship products, and decision-making that’s pushed down to the engineer and product level, according to Ng.

AI companies will need to be architected to do the same in relation to AI. What A/B testing’s equivalent will be for AI companies is still unknown, but Ng shared four thoughts on characteristics he expects AI companies will share.

  1. Strategic data acquisition. This is a complex process, requiring companies to play what Ng called multiyear chess games, acquiring important data from one resource that’s monetized elsewhere. “When I decide to launch a product, one of the criteria I use is, can we plan a path for data acquisition that results in a defensible business?” Ng said.
  2. Unified data warehouse. This likely comes as no surprise to CIOs, who have been advocates of the centralized data warehouse for years. But for AI companies that need to combine data from multiple sources, data silos — and the bureaucracy that comes with them — can be an AI project killer. Companies should get to work on this now, as “this is often a multiyear exercise for companies to implement,” Ng said.
  3. New job descriptions. AI products like chatbots can't be sketched out the way apps can, so product managers will have to communicate differently with engineers. Ng, for one, is training product managers on new ways to give product specifications.
  4. Centralized AI team. AI talent is scarce, so companies should consider building a single AI team that can then support business units across the organization. “We’ve seen this pattern before with the rise of mobile,” Ng said. “Maybe around 2011, none of us could hire enough mobile engineers.” Once the talent numbers caught up with demand, companies embedded mobile talent into individual business units. The same will likely play out in the AI era, Ng said.

Digital Agriculture: Farmers in India are using AI to increase crop yields – Microsoft News Center India

The fields had been freshly plowed. The furrows ran straight and deep. Yet, thousands of farmers across Andhra Pradesh (AP) and Karnataka waited to get a text message before they sowed the seeds. The SMS, which was delivered in Telugu and Kannada, their native languages, told them when to sow their groundnut crops.

In a few dozen villages in Telangana, Maharashtra and Madhya Pradesh, farmers are receiving automated voice calls that tell them whether their cotton crops are at risk of a pest attack, based on weather conditions and crop stage. Meanwhile, in Karnataka, the state government can get price forecasts for essential commodities such as tur (split red gram) three months in advance to plan for the Minimum Support Price (MSP).

Welcome to digital agriculture, where technologies such as Artificial Intelligence (AI), Cloud Machine Learning, Satellite Imagery and advanced analytics are empowering small-holder farmers to increase their income through higher crop yield and greater price control.

AI-based sowing advisories lead to 30% higher yields

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications,” says Dr. Suhas P. Wani, Director, Asia Region, of the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT), a non-profit, non-political organization that conducts agricultural research for development in Asia and sub-Saharan Africa with a wide array of partners throughout the world.

Microsoft, in collaboration with ICRISAT, developed an AI Sowing App powered by the Microsoft Cortana Intelligence Suite, including Machine Learning and Power BI. The app sends participating farmers advisories on the optimal date to sow. The best part: farmers don't need to install any sensors in their fields or incur any capital expenditure. All they need is a feature phone capable of receiving text messages.

Flashback to June 2016. While other farmers were busy sowing their crops in Devanakonda Mandal in Kurnool district in AP, G. Chinnavenkateswarlu, a farmer from Bairavanikunta village, decided to wait. Instead of sowing his groundnut crop during the first week of June, as traditional agricultural wisdom would have dictated, he chose to sow three weeks later, on June 25, based on an advisory he received in a text message.

Chinnavenkateswarlu was part of a pilot program that ICRISAT and Microsoft were running for 175 farmers in the state. The program sent farmers text messages with sowing advisories covering the sowing date, land preparation, soil-test-based fertilizer application and so on.

For centuries, farmers like Chinnavenkateswarlu had relied on age-old methods to predict the right sowing date. Mostly, they would sow in early June to take advantage of the monsoon season, which typically lasts from June to August. But changing weather patterns over the past decade have made the monsoon unpredictable, leading to poor crop yields.

“I have three acres of land and sowed groundnut based on the sowing recommendations provided. My crops were harvested on October 28 last year, and the yield was about 1.35 ton per hectare.  Advisories provided for land preparation, sowing, and need-based plant protection proved to be very useful to me,” says Chinnavenkateswarlu, who along with the 174 others achieved an average of 30% higher yield per hectare last year.

To determine the optimal sowing period, historical climate data spanning 30 years, from 1986 to 2015, for the Devanakonda area in Andhra Pradesh was analyzed using AI, and the Moisture Adequacy Index (MAI) was calculated. MAI is a standardized measure of how adequately rainfall and soil moisture meet the potential water requirement of crops.

The real-time MAI is calculated from the daily rainfall recorded and reported by the Andhra Pradesh State Development Planning Society. The future MAI is calculated from weather forecasting models for the area provided by US-based aWhere Inc. This data is then downscaled to build predictability and to guide farmers in picking the ideal sowing week, which in the pilot program was estimated to start on June 24 that year.
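
The article does not spell out the formula, but a minimal sketch of the idea, assuming MAI is approximated as the ratio of available moisture (rainfall plus soil moisture) to the crop's potential water requirement, might look like the following. The sowing threshold and the weekly values are purely illustrative, not ICRISAT's or Microsoft's actual numbers.

```python
# Minimal sketch of a Moisture Adequacy Index (MAI) calculation.
# Assumption (not from the article): MAI is approximated as the ratio of water
# available to the crop (rainfall plus carried-over soil moisture) to its
# potential water requirement (potential evapotranspiration, PET), capped at 1.
# The 0.5 sowing threshold is purely illustrative.

def weekly_mai(rainfall_mm, soil_moisture_mm, pet_mm):
    """Return MAI for one week, clipped to the range [0, 1]."""
    if pet_mm <= 0:
        return 1.0
    available = rainfall_mm + soil_moisture_mm
    return min(available / pet_mm, 1.0)

def recommended_sowing_weeks(weeks, threshold=0.5):
    """weeks: list of (week_label, rainfall_mm, soil_moisture_mm, pet_mm)."""
    return [
        label
        for label, rain, soil, pet in weeks
        if weekly_mai(rain, soil, pet) >= threshold
    ]

if __name__ == "__main__":
    # Hypothetical weekly values for a Devanakonda-like location.
    history = [
        ("Jun 01-07", 8.0, 5.0, 38.0),
        ("Jun 08-14", 6.0, 4.0, 36.0),
        ("Jun 15-21", 9.0, 5.0, 35.0),
        ("Jun 22-28", 24.0, 12.0, 34.0),
    ]
    print(recommended_sowing_weeks(history))  # ['Jun 22-28']
```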

Ten sowing advisories were issued and disseminated until harvesting was completed. The advisories contained essential information, including the optimal sowing date, soil-test-based fertilizer application, farm yard manure application, seed treatment, optimum sowing depth and more. In tandem with the app, a personalized village advisory dashboard provided important insights into soil health, recommended fertilizer and seven-day weather forecasts.

“Farmers who sowed in the first week of June got meagre yields due to a long dry spell in August; while registered farmers who sowed in the last week of June and the first week of July and followed advisories got better yields and are out of loss,” explains C Madhusudhana, President, Chaitanya Youth Association and Watershed Community Association of Devanakonda.

This year, ICRISAT has scaled sowing insights to 4,000 farmers across AP and Karnataka for the Kharif crop cycle (rainy season). Yield results from this year’s implementation are expected in early December.

Pest attack prediction enables farmers to plan

Microsoft is now taking AI in agriculture a step further. A collaboration with United Phosphorus (UPL), India’s largest producer of agrochemicals, led to the creation of the Pest Risk Prediction API, which again leverages AI and machine learning to indicate the risk of a pest attack in advance. Common pests such as jassids, thrips, whitefly and aphids can cause serious damage to crops and reduce yields. To help farmers take preventive action, the Pest Risk Prediction App was created to provide guidance on the probability of pest attacks.

In the first phase, about 3,000 marginal farmers holding less than five acres of land in 50 villages across Telangana, Maharashtra and Madhya Pradesh are receiving automated voice calls for their cotton crops. The calls indicate the risk of pest attacks based on weather conditions and crop stage, in addition to the sowing advisories. The risk is classified as High, Medium or Low, specific to each district in each state.

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income,” says Vikram Shroff, Executive Director, UPL Limited.

Price forecasting model for policy makers

Predictive analysis in agriculture is not limited to crop growing alone. The government of Karnataka will start using price forecasting for agricultural commodities, in addition to sowing advisories for farmers in the state. Commodity prices for items such as tur, of which Karnataka is the second largest producer, will be predicted three months in advance for major markets in the state.

At present, the state government uses price forecasting for agricultural commodities, based on historical data and short-term market arrivals, to protect farmers from price crashes and to shield the population from high inflation. However, collecting accurate data this way is expensive, and the data can be subject to tampering.

Microsoft has developed a multivariate agricultural commodity price forecasting model to predict future commodity arrival and the corresponding prices. The model uses remote sensing data from geo-stationary satellite images to predict crop yields through every stage of farming.

This data, along with other inputs such as historical sowing area, production, yield and weather, is used in an elastic-net framework to predict when grains will arrive in the market and in what quantity, which in turn determines their pricing.
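
Microsoft has not published the model's internals here, but as a rough illustration of the elastic-net approach described above, the sketch below fits scikit-learn's ElasticNet on synthetic features of the kind the article names (sowing area, production, rainfall, market arrivals). The feature set, data and hyperparameters are illustrative assumptions, not the actual forecasting model.

```python
# Minimal elastic-net price-forecast sketch on synthetic data.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # synthetic monthly observations

X = np.column_stack([
    rng.normal(100, 15, n),   # sowing area (thousand hectares)
    rng.normal(80, 10, n),    # production (thousand tonnes)
    rng.normal(650, 60, n),   # rainfall (mm)
    rng.normal(40, 8, n),     # market arrivals (thousand tonnes)
])
# Synthetic price: falls with arrivals and production, plus noise.
y = 6000 - 20 * X[:, 3] - 10 * X[:, 1] + rng.normal(0, 150, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = ElasticNet(alpha=1.0, l1_ratio=0.5)  # blends L1 and L2 penalties
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```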

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers. We believe that Microsoft’s technology will support these innovative experiments which will help us transform the lives of the farmers in our state,” says Dr. T.N. Prakash Kammardi, Chairman, Karnataka Agricultural Price Commission, Government of Karnataka.

The model, currently used to predict the price of tur, is scalable and time-efficient, and can be generalized to many other regions and crops.

AI in agriculture is just getting started

Shifting weather patterns, such as rising temperatures and changes in precipitation and groundwater levels, can affect farmers, especially those who depend on timely rains for their crops. Leveraging the cloud and AI to generate advisories for sowing, pest control and commodity pricing is a major step toward increasing income and providing stability for the agricultural community.

“Indian agriculture has been traditionally rain dependent and climate change has made farmers extremely vulnerable to crop loss. Insights from AI through the agriculture life cycle will help reduce uncertainty and risk in agriculture operations. Use of AI in agriculture can potentially transform the lives of millions of farmers in India and world over,” says Anil Bhansali, CVP C+E and Managing Director, Microsoft India (R&D) Pvt. Ltd.

Photos courtesy of ICRISAT

CIOs should lean on AI ‘giants’ for machine learning strategy

NEW YORK — Machine learning and deep learning will be part of every data science organization, according to Edd Wilder-James, former vice president of technology strategy at Silicon Valley Data Science and now an open source strategist at Google’s TensorFlow.

Wilder-James, who spoke at the Strata Data Conference, pointed to recent advancements in image and speech recognition algorithms as examples of why machine learning and deep learning are going mainstream. He believes image and speech recognition software has evolved to the point where it can see and understand some things as well as — and in some use cases better than — humans. That makes it ripe to become part of the internal workings of applications and the driver of new and better services to internal and external customers, he said.

But what investments in AI should CIOs make to provide these capabilities to their companies? When building a machine learning strategy, choice abounds, Wilder-James said.

Machine learning vs. deep learning

Deep learning is a subset of machine learning, but it’s different enough to be discussed separately, according to Wilder-James. Examples of machine learning models include optimization, fraud detection and preventive maintenance. “We use machine learning to identify patterns,” Wilder-James said. “Here’s a pattern. Now, what do we know? What can we do as a result of identifying this pattern? Can we take action?”

Deep learning models perform tasks that more closely resemble human intelligence such as image processing and recognition. “With a massive amount of compute power, we’re able to look at a massively large number of input signals,” Wilder-James said. “And, so what a computer is able to do starts to look like human cognitive abilities.”

Some of the terrain for machine learning will look familiar to CIOs. Statistical programming languages such as SAS, SPSS and Matlab are known territory for IT departments. Open source counterparts such as R, Python and Spark are also machine-learning ready. “Open source is probably a better guarantee of stability and a good choice to make in terms of avoiding lock-in and ensuring you have support,” Wilder-James said.

Unlike other tech rollouts

The rollout of machine learning and deep learning models, however, is a different process than most technology rollouts. After getting a handle on the problem, CIOs will need to investigate if machine learning is even an appropriate solution.

“It may not be true that you can solve it with machine learning,” Wilder-James said. “This is one important difference from other technical rollouts. You don’t know if you’ll be successful or not. You have to enter into this on the pilot, proof-of-concept ladder.”

The most time-consuming step in deploying a machine learning model is feature engineering, or finding features in the data that will help the algorithms self-tune. Deep learning models skip the tedious feature engineering step and go right to training. But training a deep learning model correctly requires immense data sets, graphics processing units or tensor processing units, and time; Wilder-James said it could take weeks or even months to train a deep learning model.
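
The contrast can be sketched in a few lines. In the illustrative example below (not from the talk), a classical model is fed hand-engineered summary statistics, whereas a deep network would instead be trained directly on the raw signals and learn its own features, at the cost of far more data, compute and time.

```python
# Illustrative contrast: manual feature engineering for a classical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def engineer_features(raw_signals):
    """Summarize each raw time series with a few hand-picked statistics."""
    return np.column_stack([
        raw_signals.mean(axis=1),                   # average level
        raw_signals.std(axis=1),                    # variability
        np.abs(np.diff(raw_signals)).sum(axis=1),   # total movement
    ])

rng = np.random.default_rng(1)
raw = rng.normal(size=(500, 64))            # 500 raw sensor traces
labels = (raw.std(axis=1) > 1.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(engineer_features(raw), labels)
# A deep learning model would skip engineer_features() and consume `raw`.
```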

One more thing to note: Building deep learning models is hard and won’t be a part of most companies’ machine learning strategy.

“You have to be aware that a lot of what’s coming out is the closest to research IT has ever been,” he said. “These things are being published in papers and deployed in production in very short cycles.”

CIOs whose companies are not inclined to invest heavily in AI research and development should instead rely on prebuilt, reusable machine and deep learning models rather than reinvent the wheel. Image recognition models, such as Inception, and natural language models, such as SyntaxNet and Parsey McParseface, are examples of models that are ready and available for use.
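
As a rough sketch of what reusing such a prebuilt model can look like in practice, the snippet below loads Inception with pretrained ImageNet weights through the Keras applications API and classifies a local image. The image file name is a placeholder, and this is one common route to a pretrained Inception model, not the only one.

```python
# Sketch: reusing a pretrained Inception model instead of training from scratch.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions,
)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")   # downloads pretrained weights

img = image.load_img("example.jpg", target_size=(299, 299))  # placeholder file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top 3 ImageNet labels with scores
```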

“You can stand on the shoulders of giants, I guess that’s what I’m trying to say,” Wilder-James said. “It doesn’t have to be from scratch.”

Machine learning tech

The good news for CIOs is that vendors have set the stage to start building a machine learning strategy now. TensorFlow, a machine learning software library, is one of the best known toolkits out there. “It’s got the buzz because it’s an open source project out of Google,” Wilder-James said. “It runs fast and is ubiquitous.”

TensorFlow itself is not terribly developer-friendly, but a simplified interface called Keras eases the burden and can handle the majority of use cases. And TensorFlow isn't the only deep learning library or framework option, either. Others include MXNet, PyTorch, CNTK and Deeplearning4j.
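
As a minimal sketch of that division of labor, the example below defines and trains a small network through the Keras interface while TensorFlow does the heavy lifting underneath. The architecture and the random data are illustrative assumptions.

```python
# Minimal Keras-on-TensorFlow sketch with synthetic data.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")   # toy labeling rule

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))       # [loss, accuracy]
```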

For CIOs who want AI to live on premises, technologies such as Nvidia’s DGX-1 box, which retails for $129,000, are available.

But CIOs can also use the cloud as a computing resource, which would cost anywhere between $5 and $15 an hour, according to Wilder-James. “I worked it out, and the cloud cost is roughly the same as running the physical machine continuously for about a year,” he said.

Or they can choose to go the hosted platform route, where a service provider runs trained models for a company. Domain-specific proprietary tools, such as the personalization platform from Nara Logics, can round out the AI infrastructure.

“It’s the same kind of range we have with plenty of other services out there,” he said. “Do you rent an EC2 instance to run a database or do you subscribe to Amazon Redshift? You can pick the level of abstraction that you want for these services.”

Still, before investments in technology and talent are made, a machine learning strategy should start with the basics: “The single best thing you can do to prepare with AI in the future is to develop a competency with your own data, whether it’s getting access to data, integrating data out of silos, providing data results readily to employees,” Wilder-James said. “Understanding how to get at your data is going to be the thing to prepare you best.”

AWS and Microsoft announce Gluon, making deep learning accessible to all developers – News Center

New open source deep learning interface allows developers to more easily and quickly build machine learning models without compromising training performance. Jointly developed reference specification makes it possible for Gluon to work with any deep learning engine; support for Apache MXNet available today and support for Microsoft Cognitive Toolkit coming soon.

SEATTLE and REDMOND, Wash. — Oct. 12, 2017 — On Thursday, Amazon Web Services Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), and Microsoft Corp. (NASDAQ: MSFT) announced a new deep learning library, called Gluon, that allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps. The Gluon interface currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release. With the Gluon interface, developers can build machine learning models using a simple Python API and a range of prebuilt, optimized neural network components. This makes it easier for developers of all skill levels to build neural networks using simple, concise code, without sacrificing performance. AWS and Microsoft published Gluon’s reference specification so other deep learning engines can be integrated with the interface. To get started with the Gluon interface, visit https://github.com/gluon-api/gluon-api/.

Developers build neural networks using three components: training data, a model and an algorithm. The algorithm trains the model to understand patterns in the data. Because the volume of data is large and the models and algorithms are complex, training a model often takes days or even weeks. Deep learning engines like Apache MXNet, Microsoft Cognitive Toolkit and TensorFlow have emerged to help optimize and speed the training process. However, these engines require developers to define the models and algorithms up front using lengthy, complex code that is difficult to change. Other deep learning tools make model-building easier, but this simplicity can come at the cost of slower training performance.

The Gluon interface gives developers the best of both worlds — a concise, easy-to-understand programming interface that enables developers to quickly prototype and experiment with neural network models, and a training method that has minimal impact on the speed of the underlying engine. Developers can use the Gluon interface to create neural networks on the fly, and to change their size and shape dynamically. In addition, because the Gluon interface brings together the training algorithm and the neural network model, developers can perform model training one step at a time. This means it is much easier to debug, update and reuse neural networks.
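
The press release points to the reference specification on GitHub; as a concrete illustration, the sketch below uses the Gluon interface with Apache MXNet to define a small network from prebuilt components and train it one step at a time inside an autograd scope. The network, synthetic data and hyperparameters are illustrative, not taken from the announcement.

```python
# Minimal Gluon/MXNet sketch: define a network imperatively, train step by step.
from mxnet import nd, autograd, gluon

# A tiny fully connected network built from prebuilt Gluon components.
net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(16, activation="relu"))
net.add(gluon.nn.Dense(1))
net.initialize()

loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.05})

# Synthetic regression data.
X = nd.random.uniform(shape=(256, 4))
y = 2 * X.sum(axis=1, keepdims=True) + nd.random.normal(scale=0.1, shape=(256, 1))

batch_size = 32
for epoch in range(5):
    for i in range(0, 256, batch_size):
        data, label = X[i:i + batch_size], y[i:i + batch_size]
        with autograd.record():          # record the forward pass
            loss = loss_fn(net(data), label)
        loss.backward()                  # compute gradients
        trainer.step(batch_size)         # apply one parameter update
    print(epoch, loss.mean().asscalar())
```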

“The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models require a great deal of heavy lifting and specialized expertise,” said Swami Sivasubramanian, VP of Amazon AI. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” said Eric Boyd, corporate vice president of Microsoft AI and Research. “This is why Microsoft has collaborated with AWS to create the Gluon interface and enable an open AI ecosystem where developers have freedom of choice. Machine learning has the ability to transform the way we work, interact and communicate. To make this happen we need to put the right tools in the right hands, and the Gluon interface is a step in this direction.”

“FINRA is using deep learning tools to process the vast amount of data we collect in our data lake,” said Saman Michael Far, senior vice president and CTO, FINRA. “We are excited about the new Gluon interface, which makes it easier to leverage the capabilities of Apache MXNet, an open source framework that aligns with FINRA’s strategy of embracing open source and cloud for machine learning on big data.”

“I rarely see software engineering abstraction principles and numerical machine learning playing well together — and something that may look good in a tutorial could be hundreds of lines of code,” said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. “I really appreciate how the Gluon interface is able to keep the code complexity at the same level as the concept; it’s a welcome addition to the machine learning community.”

“The Gluon interface solves the age-old problem of having to choose between ease of use and performance, and I know it will resonate with my students,” said Nikolaos Vasiloglou, adjunct professor of Electrical Engineering and Computer Science at Georgia Institute of Technology. “The Gluon interface dramatically accelerates the pace at which students can pick up, apply and innovate on new applications of machine learning. The documentation is great, and I’m looking forward to teaching it as part of my computer science course and in seminars that focus on teaching cutting-edge machine learning concepts across different cities in the U.S.”

“We think the Gluon interface will be an important addition to our machine learning toolkit because it makes it easy to prototype machine learning models,” said Takero Ibuki, senior research engineer at DOCOMO Innovations. “The efficiency and flexibility this interface provides will enable our teams to be more agile and experiment in ways that would have required a prohibitive time investment in the past.”

The Gluon interface is open source and available today in Apache MXNet 0.11, with support for CNTK in an upcoming release. Developers can learn how to get started using Gluon with MXNet by viewing tutorials for both beginners and experts available by visiting https://mxnet.incubator.apache.org/gluon/.

About Amazon Web Services

For 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 90 fully featured services for compute, storage, networking, database, analytics, application services, deployment, management, developer, mobile, Internet of Things (IoT), Artificial Intelligence (AI), security, hybrid and enterprise applications, from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world — including the fastest-growing startups, largest enterprises, and leading government agencies — to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

ManageEngine launches OpManager Plus deep packet inspection tool

ManageEngine has released OpManager Plus, a deep packet inspection tool that includes features such as bandwidth monitoring and captures packets from network flows to help engineers assess the causes of network bottlenecks or unusual traffic activity.

OpManager Plus is both a deep packet inspection tool and a network management platform, aimed at improving how service providers and enterprises manage their IT infrastructure. It comes pre-equipped with discovery rules that can be reconfigured for different tasks, a set of alert engines and a collection of templates designed to help IT teams set up a monitoring system. It supports network monitoring and tracking for virtualized systems, databases and enterprise applications, and can also be used for configuration management and IP address management.

OpManager Plus monitors bandwidth using Simple Network Management Protocol (SNMP) in combination with network flows and packet inspection. Using the tool, engineers can determine whether performance bottlenecks stem from the network or the application, ManageEngine said.
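
ManageEngine's implementation is proprietary, but the general idea of SNMP-based bandwidth monitoring can be sketched with the pysnmp library: poll an interface's inbound octet counter twice and derive throughput from the difference. The host, community string and interface index below are placeholders, and counter wraparound is ignored for brevity.

```python
# Generic SNMP bandwidth-polling sketch (not ManageEngine's implementation).
import time
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def in_octets(host, community, if_index):
    """Read the 64-bit inbound octet counter for one interface via SNMPv2c."""
    error_ind, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),          # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index)),
    ))
    if error_ind or error_status:
        raise RuntimeError(error_ind or error_status.prettyPrint())
    return int(var_binds[0][1])

if __name__ == "__main__":
    first = in_octets("192.0.2.10", "public", 1)      # placeholder device
    time.sleep(60)
    second = in_octets("192.0.2.10", "public", 1)
    print("Inbound Mbps:", (second - first) * 8 / 60 / 1e6)
```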

“Network teams rely on a set of tools to claim that the issue is on the app side, while the app team will blame it on the network. An integrated tool that gives visibility into network and application performance can help both teams identify what’s really causing the issue,” said Dev Anand, director of product management at ManageEngine, in a statement.

OpManager is priced at $4,995.

Nyansa expands analytics capabilities in Voyance

Nyansa Inc. has added analytics capabilities and a client troubleshooting dashboard to Voyance, the company’s network analysis tool.

The new features, launched this week, aim to provide better support to IT staff when they are resolving client network issues.

The analytics capabilities track and measure every client network transaction in real time, allowing IT staff to distinguish between client-specific and network-wide issues at the time of an incident.

Other new capabilities include access to a client device’s timeline, summarized views of the root causes of network incidents, and the ability to search for a client by username, hostname or MAC address.

The Voyance network analysis tool is based on a combination of deep packet inspection and cloud-based analytics. It sends all collected network data to servers hosted by Amazon Web Services.

The data is then inspected and retransmitted to the user’s location, where it can be evaluated by IT staff via an easy-to-understand user display.

The network analysis tool is available through one-, three- or five-year subscriptions. Customers can run the software on a dedicated appliance on site or as a virtual machine within an AWS or Microsoft Azure deployment.

Nyansa released Voyance in April 2016. Its big-name customers include Uber, Netflix and Tesla Motors.

Aerohive premieres new access point

Aerohive Networks introduced a combined access point and switch with embedded capabilities to support IoT. The vendor said the AP150W can be installed in minutes, either placed on a desktop or mounted over an Ethernet wall jack. The new device supports 802.11ac Wave 2 connectivity.

The AP150W also supports ZigBee and Bluetooth Low Energy, as well as Gigabit Ethernet switching. It can power a variety of devices through integrated Power over Ethernet and pass-through ports, allowing it to slot into existing cabling and switch infrastructure.

The AP150W will be available in September, priced at $299. The cost includes a subscription to Aerohive’s cloud-based Connect management app.

“By packing 802.11ac Wave 2 Wi-Fi, Gigabit Ethernet switching, Bluetooth Low Energy and ZigBee technologies into a small form factor…Wi-Fi in every room has finally become affordable and easy,” said Alan Amrod, Aerohive’s senior vice president of products, in a statement.

IBM cracks the code for speeding up its deep learning platform

Graphics processing units are a natural fit for deep learning because they can crunch through large amounts of data quickly, which is important when training data-hungry models.

But GPUs have one catch. Adding more GPUs to a deep learning platform doesn’t necessarily lead to faster results. While individual GPUs process data quickly, they can be slow to communicate their computations to other GPUs, which has limited how far users can parallelize jobs across multiple servers and has put a cap on the scalability of deep learning models.
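
IBM's own communication code is not shown here, but the bottleneck itself is easy to sketch: in synchronous data-parallel training, every worker's gradients must be averaged across all workers before any of them can take the next step, so the gradient exchange gates the whole cluster. The NumPy simulation below is purely conceptual, with made-up sizes.

```python
# Conceptual sketch (not IBM's code) of synchronous data-parallel training.
import numpy as np

NUM_WORKERS = 4       # e.g. GPUs spread across servers
PARAMS = 1_000_000    # model size; real models are far larger

rng = np.random.default_rng(0)
weights = rng.normal(size=PARAMS)

def local_gradient(worker_id, weights):
    """Stand-in for a forward/backward pass on this worker's data shard."""
    return rng.normal(scale=0.01, size=weights.shape)

for step in range(3):
    # The compute happens in parallel and is fast on GPUs...
    grads = [local_gradient(w, weights) for w in range(NUM_WORKERS)]

    # ...but this all-reduce (averaging every gradient across every worker)
    # is the communication step that limits scaling as workers are added.
    avg_grad = np.mean(grads, axis=0)

    weights -= 0.1 * avg_grad   # every worker applies the same update
```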

IBM recently took on this problem, writing code for its deep learning platform that speeds up communication between GPUs and, with it, improves the scalability of deep learning.

“The rate at which [GPUs] update each other significantly affects your ability to scale deep learning,” said Hillery Hunter, director of systems acceleration and memory at IBM. “We feel like deep learning has been held back because of these long wait times.”

Hunter’s team wrote new software and algorithms to optimize communication between GPUs spread across multiple servers. The team used the algorithm to train an image-recognition neural network on 7.5 million images from the ImageNet-22k data set in seven hours. This is a new speed record for training neural networks on the image data set, breaking the previous mark of 10 days, which was held by Microsoft, IBM said.

Hunter said it’s essential to speed up training times in deep learning projects. Unlike virtually every other area of computing today, training deep learning models can take days, which might discourage more casual users.

“We feel it’s necessary to bring the wait times down,” Hunter said.

IBM is rolling out the new functionality in its PowerAI software, a deep learning platform that pulls together and configures popular open source machine learning software, including Caffe, Torch and TensorFlow. PowerAI is available on IBM’s Power Systems line of servers.

But the main reason to take note of the news, according to Forrester analyst Mike Gualtieri, is the GPU optimization software might bring new functionality to existing tools — namely Watson.

“I think the main significance of this is that IBM can bring deep learning to Watson,” he said.

Watson currently has API connectors for users to do deep learning in specific areas, including translation, speech to text and text to speech. But its deep learning offerings are prescribed. By opening up Watson to open source deep learning platforms, its strength in answering natural-language queries could be applied to deeper questions.