
DJI and Microsoft partner to bring advanced drone technology to the enterprise

New developer tools for Windows and Azure IoT Edge Services enable real-time AI and machine learning for drones

REDMOND, Wash. — May 7, 2018 — DJI, the world’s leader in civilian drones and aerial imaging technology, and Microsoft Corp. have announced a strategic partnership to bring advanced AI and machine learning capabilities to DJI drones, helping businesses harness the power of commercial drone technology and edge cloud computing.

Through this partnership, DJI is releasing a software development kit (SDK) for Windows that extends the power of commercial drone technology to the largest enterprise developer community in the world. Using applications written for Windows 10 PCs, DJI drones can be customized and controlled for a wide variety of industrial uses, with full flight control and real-time data transfer capabilities, making drone technology accessible to the nearly 700 million Windows 10 customers worldwide.

DJI has also selected Microsoft Azure as its preferred cloud computing partner, taking advantage of Azure’s industry-leading AI and machine learning capabilities to help turn vast quantities of aerial imagery and video data into actionable insights for thousands of businesses across the globe.

“As computing becomes ubiquitous, the intelligent edge is emerging as the next technology frontier,” said Scott Guthrie, executive vice president, Cloud and Enterprise Group, Microsoft. “DJI is the leader in commercial drone technology, and Microsoft Azure is the preferred cloud for commercial businesses. Together, we are bringing unparalleled intelligent cloud and Azure IoT capabilities to devices on the edge, creating the potential to change the game for multiple industries spanning agriculture, public safety, construction and more.”

DJI’s new SDK for Windows empowers developers to build native Windows applications that can remotely control DJI drones, with support for autonomous flight and real-time data streaming. The SDK will also allow the Windows developer community to integrate and control third-party payloads like multispectral sensors, robotic components like custom actuators, and more, exponentially increasing the ways drones can be used in the enterprise.

“DJI is excited to form this unique partnership with Microsoft to bring the power of DJI aerial platforms to the Microsoft developer ecosystem,” said Roger Luo, president at DJI. “Using our new SDK, Windows developers will soon be able to employ drones, AI and machine learning technologies to create intelligent flying robots that will save businesses time and money, and help make drone technology a mainstay in the workplace.”

In addition to the SDK for Windows, Microsoft and DJI are collaborating to develop commercial drone solutions using Azure IoT Edge and AI technologies for customers in key vertical segments such as agriculture, construction and public safety. Windows developers will be able to use DJI drones alongside Azure’s extensive cloud and IoT toolset to build AI solutions that are trained in the cloud and deployed down to drones in the field in real time, allowing businesses to quickly take advantage of learnings at one individual site and rapidly apply them across the organization.

DJI and Microsoft are already working together to advance technology for precision farming with Microsoft’s FarmBeats solution, which aggregates and analyzes data from aerial and ground sensors using AI models running on Azure IoT Edge. With DJI drones, the Microsoft FarmBeats solution can take advantage of advanced sensors to detect heat, light, moisture and more to provide unique visual insights into crops, animals and soil on the farm. Microsoft FarmBeats integrates DJI’s PC Ground Station Pro software and mapping algorithm to create real-time heatmaps on Azure IoT Edge, which enable farmers to quickly identify crop stress and disease, pest infestation, or other issues that may reduce yield.

With this partnership, DJI will have access to the Azure IP Advantage program, which provides industry protection for intellectual property risks in the cloud. For Microsoft, the partnership is an example of the important role IP plays in ensuring a healthy and vibrant technology ecosystem and builds upon existing partnerships in emerging sectors such as connected cars and personal wearables.

Availability

DJI’s SDK for Windows is available as a beta preview to attendees of the Microsoft Build conference today and will be broadly available in fall 2018. For more information on the Windows SDK and DJI’s full suite of developer solutions, visit: developer.dji.com.

About DJI

DJI, the world’s leader in civilian drones and aerial imaging technology, was founded and is run by people with a passion for remote-controlled helicopters and experts in flight-control technology and camera stabilization. The company is dedicated to making aerial photography and filmmaking equipment and platforms more accessible, reliable and easier to use for creators and innovators around the world. DJI’s global operations currently span across the Americas, Europe and Asia, and its revolutionary products and solutions have been chosen by customers in over 100 countries for applications in filmmaking, construction, inspection, emergency response, agriculture, conservation and other industries.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For additional information, please contact:

Michael Oldenburg, DJI Senior Communication Manager, North America – michael.oldenburg@dji.com

Chelsea Pohl, Microsoft Commercial Communications Manager – chelp@microsoft.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

For more information, visit our:

Website: www.dji.com

Online Store: store.dji.com/

Facebook: www.facebook.com/DJI

Instagram: www.instagram.com/DJIGlobal

Twitter: www.twitter.com/DJIGlobal

LinkedIn: www.linkedin.com/company/dji

Subscribe to our YouTube Channel: www.youtube.com/DJI

 

 

The post DJI and Microsoft partner to bring advanced drone technology to the enterprise appeared first on Stories.

Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its quest to develop container storage products this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.
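The operator pattern the technical board is endorsing is, at its core, a control loop: watch a resource that declares desired state, compare it with observed state, and issue whatever create, update or delete actions close the gap. The following is a minimal, hypothetical sketch of that reconcile loop in plain Python, not the real Kubernetes client or Rook's actual controller:

```python
# Illustrative sketch of the operator reconcile loop. A real operator watches
# custom resources via the Kubernetes API; here both states are plain dicts.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual state and return the actions needed."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

def apply(actions: list, actual: dict) -> dict:
    """Execute the planned actions against the observed state."""
    for verb, name, spec in actions:
        if verb == "delete":
            actual.pop(name, None)
        else:
            actual[name] = spec
    return actual

# Hypothetical resources a storage operator might manage.
desired = {"ceph-mon": {"replicas": 3}, "ceph-osd": {"replicas": 5}}
actual = {"ceph-mon": {"replicas": 1}, "stale-pod": {"replicas": 1}}
plan = reconcile(desired, actual)
apply(plan, actual)
assert actual == desired
```

Running the loop repeatedly is what makes operators self-healing: any drift in observed state produces a fresh plan that converges back to the declared spec.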

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.

Illya Chekrygin, Upbound

“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project on GitHub at that time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.
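The placement idea can be illustrated with a toy sketch. This is not Ceph's CRUSH algorithm (real placement is deterministic hashing over a cluster map); it is only a hypothetical illustration of striping data across nodes with a replication factor so that losing any single node loses no data:

```python
# Toy illustration of striping with replication. Real Ceph computes placement
# with CRUSH over placement groups; here we just round-robin stripes to nodes.

def place_stripes(data: bytes, nodes: list, stripe_size: int = 4, replicas: int = 2) -> dict:
    """Split data into stripes and assign each stripe to `replicas` distinct nodes."""
    placement = {n: [] for n in nodes}
    stripes = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    for idx, stripe in enumerate(stripes):
        for r in range(replicas):
            node = nodes[(idx + r) % len(nodes)]  # replicas land on distinct nodes
            placement[node].append((idx, stripe))
    return placement

nodes = ["node-a", "node-b", "node-c"]
placement = place_stripes(b"0123456789abcdef", nodes, stripe_size=4, replicas=2)

# With two replicas per stripe, the surviving nodes still hold every stripe
# even if node-a disappears.
held = [idx for n in nodes if n != "node-a" for idx, _ in placement[n]]
assert set(held) == {0, 1, 2, 3}
```

Because every stripe already lives on multiple nodes inside the cluster, a restarted pod reads its data in place; there is no external volume to unmount and remount.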

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, fast enough for Prometheus, which scraped telemetry data from the cluster every 10 to 30 seconds.

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight

Kubernetes is ill-equipped to handle data storage persistence … this is the next frontier and the next biggest thing.
Illya Chekrygin, founding member, Upbound

The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing; it is more akin to a bet on an early stage idea, with about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is that Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat’s Gluster-based Container Native Storage Platform, and Aniszczyk said such companies are welcome to present them to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Cryptomining, ransomware are top malware in 2017

Cryptomining (the use of tools that hijack a user’s CPU to mine cryptocurrency), ransomware and mobile malware continued to plague enterprises in 2017, according to a top malware report issued by Check Point Software Technologies Ltd.

The report, which investigated the top security issues facing enterprises in the last half of the year, said 20% of organizations were infected by cryptomining malware that in some cases can diminish CPU processing by more than half.

Check Point, based in San Carlos, Calif., also said in its top malware report that attack vectors shifted during the last half of the year, with infections based on the Simple Mail Transfer Protocol eclipsing those on HTTP. The increase — from 55% during the first half of 2017 to 62% after July — reflected the number of skilled hackers targeting vulnerabilities in documents, particularly Microsoft Office.

Mobile attacks, meanwhile, became more nefarious. The Check Point top malware study found that enterprises are now vulnerable to threats either launched from mobile devices or delivered through mobile malware such as Switcher.

“The second half of 2017 has seen cryptominers take the world by storm to become a favorite monetizing attack vector,” said Maya Horowitz, Check Point’s threat intelligence group manager, in a statement. “While this is not an entirely new malware type, the increasing popularity and value of cryptocurrency has led to a significant increase in the distribution of crypto-mining malware. It’s clear that there is still a lot that organizations need to do to fully protect themselves against attacks.”

Check Point based its second-half top malware report on its ThreatCloud intelligence service, which holds more than 250 million addresses analyzed for bot discovery and 11 million malware signatures.

Broadcom releases SDK for ASICs

Broadcom Ltd. issued an open source software development kit, or SDK, to enable developers to customize their use of Tomahawk switch silicon in their operations.

The first version of the kit, dubbed SDKLT, is based on the BCM56960 Tomahawk switch, used within top-of-rack switches and fabric designs. The open source code is downloadable from GitHub, with the associated logical table APIs available through an Apache 2.0 license, Broadcom said.

The SDKLT uses a logical table approach to simplify how developers add features to the switch silicon. All device physical resources, such as media access control address tables, Layer 3 route tables and other functions, are presented within logical tables instead of proprietary function calls, Broadcom said.
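The logical-table idea can be sketched abstractly. The table and field names below are hypothetical, not SDKLT's published schema; the point is that an L2 MAC table and an L3 route table get the same insert/lookup interface instead of separate proprietary function calls:

```python
# Hedged sketch of a logical-table abstraction. Table and field names are
# hypothetical; SDKLT's actual tables are defined by Broadcom's schemas.

class LogicalTable:
    """A device resource exposed as a keyed table rather than a function call."""

    def __init__(self, name: str, key_fields: list, data_fields: list):
        self.name = name
        self.key_fields = tuple(key_fields)
        self.data_fields = tuple(data_fields)
        self.entries = {}

    def insert(self, **fields):
        key = tuple(fields[k] for k in self.key_fields)
        self.entries[key] = {d: fields[d] for d in self.data_fields}

    def lookup(self, **key_fields):
        return self.entries.get(tuple(key_fields[k] for k in self.key_fields))

# Two very different hardware resources, one uniform interface.
l2 = LogicalTable("L2_FDB", key_fields=["vlan", "mac"], data_fields=["port"])
l3 = LogicalTable("L3_ROUTE", key_fields=["prefix"], data_fields=["next_hop"])

l2.insert(vlan=10, mac="aa:bb:cc:dd:ee:ff", port=7)
l3.insert(prefix="10.0.0.0/8", next_hop="192.168.1.1")
assert l2.lookup(vlan=10, mac="aa:bb:cc:dd:ee:ff") == {"port": 7}
```

The benefit of the table model is that tooling (dump, diff, replay of table state) works the same way regardless of which physical resource sits behind the table.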

“The SDKLT brings a fresh, state-of-the-art software development approach to the broader community of network software developers where they can now fully and directly control and monitor the rich switch feature set optimized for SDN and cloud use cases,” said Ram Velaga, Broadcom’s senior vice president and general manager of switching products, in a statement.

Broadcom’s move follows a similar initiative by Barefoot Networks, which in 2016 released Tofino, a family of switches that can be customized through P4, an open source programming language backed by a consortium of more than 60 members.

F5 launches training for app development

F5 Networks has introduced a training program aimed at reducing the time it takes enterprises to ramp up new applications and services.

The initiative, called Super-NetOps, is focused on enabling engineers and developers to deliver applications through a service model rather than a traditional, ticket-driven approach, Seattle-based F5 said.

F5 said that by standardizing critical application services and developing them through automated toolchains, applications can go live within minutes.

“Super-NetOps will help network operations professionals build on their decades of experience deploying, managing, maintaining, and securing applications and equip them to deliver the automation and agility needed by DevOps teams,” said Kara Sprague, F5’s senior vice president and general manager, in a statement.

The online course, which is free, will debut with two modules covering DevOps methodologies and the concepts of automation, orchestration and infrastructure as code. Future modules will include training about agile methodologies, application language frameworks and how to deploy third-party automation toolchains.

SAP offers extra help on HR cloud migrations

SAP recently launched a program that offers services and tools to help with an HR cloud migration. The intent is to help HR managers make a business case and to ease some of the initial integration steps.

SAP has seen rapid growth of its SuccessFactors cloud human capital management platform. But the firm has some 14,000 users of its older on-premises HCM suite, mostly in Europe, who have not fully migrated. Some are in a hybrid model and have been using parts of SuccessFactors.

Customers may feel “a lot of trepidation” over the initial HR cloud migration steps, said Stephen Spears, chief revenue officer at SAP. He said SAP is trying to prove with its new Upgrade2Success program “that it’s not difficult to go from their existing HR, on-premises environment to the cloud.”

The problems that stand in the way of an HR cloud migration may be complicated, especially in Europe.

HR investment remains strong

The time may be right for SAP to accelerate its cloud adoption efforts. HR spending remains strong, said analysts, and users are shifting work to HR cloud platforms.

If I were a cloud HR systems provider, I would be very excited for the future, at least in North America.
David Wagner, vice president of research, Computer Economics

IDC said HCM applications are forecast to generate just over $15 billion in revenues globally this year, up 8% over 2017. This does not include payroll, just HCM applications, which address core HR functions such as personnel records, benefits administration and workforce management.

The estimated 2018 growth rate is a bit below prior year-over-year growth, which was 9% to 10%, “but still quite strong versus other back office application areas,” said Lisa Rowan, an IDC analyst. Growth is being driven in part by strong interest in replacing older on-premises core HR systems with SaaS-based systems, she said.

Cloud adoption for HR is strong in U.S.

In terms of organizational spending priorities, HR is “right down the middle” of the 14 technologies that Computer Economics tracks, said David Wagner, vice president of research at the research and consulting firm, which surveyed 220 companies ranging from $50 million to multibillion-dollar firms.

“Investment is higher in everything this year,” Wagner said, but IT operational budgets are not going up very fast and the reason is the cloud transition. Organizations are converting legacy systems to cloud systems and investing the savings back into the IT budget. “They’re converting to the cloud as fast as is reasonable in organizations right now,” he said.

“If I were a cloud HR systems provider, I would be very excited for the future, at least in North America,” Wagner said.

Cloud adoption different story in Europe

But Europe, where SAP has about 80% of its on-premises users, may be a different story.

Wagner, speaking generally and not specific to SAP, said the problem with cloud adoption in Europe is that there are much more stringent compliance rules around data in the cloud. There’s a lot of concern about data crossing borders and where it’s stored, and how it’s stored and encrypted. “Cloud adoption in general in Europe is behind North America because of those rules,” he said.

SAP’s new cloud adoption program brings together some services and updated tools that help customers make a business case, demonstrate the ROI and help with data integration. It takes on some of the work that a systems integrator might do.

Charles King, an analyst at Pund-IT, said SAP is aiming to reduce the risk and uncertainties involved in a sizable project. 

“That’s a wise move since cost, risk and uncertainty is the unholy trinity of bugaboos that plague organizations contemplating such substantial changes,” King said.

IT monitoring, org discipline polish Nasdaq DevOps incident response

Modern IT monitoring can bring together developers and IT ops pros for DevOps incident response, but tools can’t substitute for a disciplined team approach to problems.

Dev and ops teams at Nasdaq Corporate Solutions LLC adopted a common language for troubleshooting with AppDynamics’ App iQ platform. But effective DevOps incident response also demanded focus on the fundamentals of team building and a systematic process for following up on incidents to ensure they don’t recur.

“We had some notion of incident management, but there was no real disciplined way for following up,” said Heather Abbott, senior vice president of corporate solutions technology, who joined the New York-based subsidiary of Nasdaq Inc. in 2014. “AppDynamics has [affected] how teams work together to resolve incidents … but we’ve had other housekeeping to do.”

Shared IT monitoring tools renew focus on incident resolution

Heather Abbott, Nasdaq

Nasdaq Corporate Solutions manages SaaS offerings for customers as they shift from private to public operations. Its products include public relations, investor relations, and board and leadership software managed with a combination of Amazon Web Services and on-premises data center infrastructure, though the on-premises infrastructure will soon be phased out.

In the past, Nasdaq’s dev and ops teams used separate IT monitoring tools, and teams dedicated to different parts of the infrastructure also had individualized dashboard views. The company’s shift to cross-functional teams, focused on products and user experience as part of a DevOps transformation, required a unified view into system performance. Now, all stakeholders share the AppDynamics App iQ interface when they respond to an incident.

With a single source of information about infrastructure performance, there’s less finger-pointing among team members during DevOps incident response, which speeds up problem resolution.

“You can’t argue with the data, and people have a better ongoing understanding of the system,” Abbott said. “So, you’re not going in and hunting and pecking every time there’s a complaint or we’re trying to improve something.”

DevOps incident response requires team vigilance

Since Abbott joined Nasdaq, incidents are down more than 35%. She cited the IT monitoring tool in part, but also pointed to changes the company made to the DevOps incident response process. The company moved from an ad hoc process of incident response divided among different departments to a companywide, systematic cycle of regular incident review meetings. Her team conducts weekly incident review meetings and tracks action items from previous incident reviews to prevent incidents from recurring. Higher levels of the organization have a monthly incident review call to review quality issues, and some of these incidents are further reviewed by Nasdaq’s board of directors.

We always need to focus on blocking and tackling … but as we move toward more complex microservices-based architectures, we’ll be building things into the platform like Chaos Monkey.
Heather Abbott, senior vice president of corporate solutions technology, Nasdaq

And there’s still room to improve the DevOps incident response process, Abbott said.

“We always need to focus on blocking and tackling,” she said. “We don’t have the scale within my line of business of Amazon or Netflix, but as we move toward more complex microservices-based architectures, we’ll be building things into the platform like Chaos Monkey.”

Like many companies, Nasdaq plans to tie DevOps teams with business leaders, so the whole organization can work together to improve customer experiences. In the past, Nasdaq has generated application log reports with homegrown tools. But this year, it will roll out AppDynamics’ Business iQ software, first with its investor-relations SaaS products, to make that data more accessible to business leaders, Abbott said.

AppDynamics App iQ will also expand to monitor releases through test, development and production deployment phases. Abbott said Nasdaq has worked with AppDynamics to create intelligent release dashboards to provide better automation and performance trends. “That will make it easy to see how system performance is trending over time, as we introduce change,” she said.

While Nasdaq mainly uses AppDynamics App iQ, the exchange also uses Datadog, because it offers event correlation and automated root cause analysis. AppDynamics has previewed automated root cause analysis based on machine learning techniques. Abbott said she looks forward to the addition of that feature, perhaps this year.

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Cisco HyperFlex system upgrade targets hybrid cloud

Cisco has added tools to its hyper-converged infrastructure platform for running and managing hybrid applications split between public and private clouds. The latest technology in the Cisco HyperFlex system makes it a stronger competitor in the market, analysts said.

Cisco introduced this week the 3.0 software release for HyperFlex. The announcement came a day after Cisco said it would acquire Skyport Systems Inc., a maker of highly secure, cloud-managed, hyper-converged systems.

In general, HyperFlex combines software-defined storage and data services software with Cisco Unified Computing System. UCS integrates computing, networking and storage resources to provide efficiency and centralized management.

The latest release packs a lot more Cisco software into HyperFlex, which should improve interoperability and simplify support, said Dan Conde, an analyst at Enterprise Strategy Group Inc., based in Milford, Mass. “Cisco has taken many of the assets that used to be separate in their stable and made it available under a single [HyperFlex] umbrella.”

The new features should also make HyperFlex more competitive and useful as a hybrid cloud platform, analysts said. In the hyper-converged infrastructure (HCI) market, Cisco has lagged behind rivals Dell, Hewlett Packard Enterprise and Nutanix.

Software added to the Cisco HyperFlex system

HyperFlex customers now have the option of Cisco AppDynamics integration for monitoring performance of applications running on HyperFlex and across clouds. Other cloud-related management software available for the HCI system includes Cisco Workload Optimization Manager (CWOM) and CloudCenter.

CWOM helps IT staff determine the resource needs of workloads. CloudCenter provides application-centric orchestration.

Other new features include support for Microsoft’s Hyper-V hypervisor. HyperFlex already supports the more popular VMware ESXi, but Hyper-V is often used to run Microsoft applications.

Release 3 of the Cisco HyperFlex system also contains support for Kubernetes-managed containers, making HyperFlex friendlier to developers building cloud-native applications.

Along with cloud apps, companies can run more enterprise applications on HyperFlex. Cisco released validated designs and guides for running Oracle, SAP, Microsoft and Splunk software.

The most prominent use case for HCI systems is running business applications on a general computing platform, according to Nemertes Research, based in Mokena, Ill. Roughly 30% of enterprises use HCI for general computing, followed by private cloud at 19%.

Increased scalability in the Cisco HyperFlex system

Cisco has increased the scalability of HyperFlex. Customers can raise VM density by joining HyperFlex systems into clusters, which can now contain up to 64 nodes. The previous maximum was eight.

Cisco has also added support for stretched clusters, which makes it possible to have nodes span multiple geographical locations.

Overall, analysts expect the new features to help Cisco add to the more than 2,500 companies using HyperFlex today.

“This announcement, combined with the market still being ripe for adoption, is a great combo going forward,” said Mike Leone, an analyst at Enterprise Strategy Group. “It will be interesting to see how the customer base grows now that they’re on a more level playing field with the competition.”

Plans for Skyport acquisition

The Skyport acquisition brings a tightly knit hardware and software product to Cisco’s portfolio. The system is primarily used to run business-critical data center applications.

“I think Cisco’s goal is to get the automated, security-wrapped provisioning software [in Skyport] and just fold it into their cloud and infrastructure management tools broadly,” said Nemertes analyst John Burke.

That may be so, but for now, Cisco has provided no details, saying in a statement it plans to use Skyport’s “intellectual property, seasoned software and network expertise to accelerate priority areas across multiple Cisco portfolios.”

The Skyport team will join Cisco’s networking group, led by general manager Jonathan Davidson, and the data center and computing systems product group, headed by general manager Liz Centoni. Cisco did not disclose financial terms.

ExtremeLocation latest addition to Extreme wireless portfolio

Extreme Networks is offering retail customers cloud-based tools that provide actionable intelligence from customer-activity data gathered through a store’s beacons and guest Wi-Fi.

Extreme debuted its ExtremeLocation service this week at the National Retail Federation conference in New York. The service is designed to work best with ExtremeWireless WiNG, a combined access point and Bluetooth Low Energy beacon. Extreme received the WiNG technology in the 2016 acquisition of Zebra Technologies’ wireless LAN business.

For ExtremeLocation to gather the maximum amount of customer data, shoppers would have to launch the retailer’s mobile app and log in to the guest network of an Extreme-based Wi-Fi deployment. At that point, the system records where customers move in the store and where they linger.

ExtremeLocation tracks people within 5 to 7 meters of their actual location — a distance acceptable to many retailers. However, higher accuracy is possible by adding access points.

“The more access points you have, the more triangulation we can use and the more accurate you can get,” said Bob Nilsson, the director of vertical solutions at Extreme, based in San Jose, Calif.

Depending on the desired level of accuracy, a large department store could deploy from hundreds to thousands of access points. ExtremeLocation supports up to 100,000 access points across multiple locations.
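The triangulation Nilsson describes can be framed as a least-squares fit: each access point contributes a distance estimate (for example, derived from received signal strength), and with three or more anchors the shopper's position is overdetermined, so extra access points tighten the estimate. A hedged illustration follows; real systems add RSSI path-loss modeling and filtering on top of this:

```python
import numpy as np

# Illustrative least-squares trilateration. Distances here are exact; in
# practice they come from noisy RSSI readings, so more anchors help.

def trilaterate(anchors, distances):
    """Estimate a 2-D position from >=3 anchor points and measured distances."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x1, y1 = anchors[0]
    # Subtracting the first circle equation from the rest linearizes the system:
    # 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + (xi^2+yi^2) - (x1^2+y1^2)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - (x1 ** 2 + y1 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four hypothetical access points at the corners of a 10 m x 10 m area.
aps = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = np.array([3.0, 4.0])
dists = [np.linalg.norm(true - np.array(a)) for a in aps]
est = trilaterate(aps, dists)
assert np.allclose(est, true, atol=1e-6)
```

With noisy distances, the least-squares solution averages the error across anchors, which is one way to see why denser access-point deployments yield better than the 5-to-7-meter baseline.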

Insight from customer activity on Extreme wireless

The collected information provides retailers with a view of where shoppers go, which products or displays they stop at and the amount of time spent in the store or at a specific location. Retailers can also track salespeople to ensure they are in high-traffic areas.

Customers who turn on the mobile app can become targets for in-store promotions and coupons that the system sends through the beacons. Retailers can create policies for push notifications through a third-party system, such as customer relationship management or point-of-sale software. Extreme provides the APIs for integrating with those systems.

The ExtremeWireless WiNG access points send customer activity data to Extreme’s cloud-based software, which aggregates the information and displays the results on graphs, charts and other visuals, including a heat map of the store that shows where most shoppers are gathering. “It’s designed more for the store manager, the sales manager and the marketing side, rather than the IT side,” Nilsson said of the software.

Retailers are using location-based services for more than customer tracking. Cisco, for example, is demonstrating at the NRF conference the use of radio frequency identification tags to automatically notify a store employee that it’s time to restock a shelf.

Cisco is also demonstrating ad signage that’s attached to products in a store. When customers handle an item, the sign will change to a message enticing them to purchase the product.

Time-series monitoring tools give high-resolution view of IT

DevOps shops use time-series monitoring tools to glean a nuanced, historical view of IT infrastructure that improves troubleshooting, autoscaling and capacity forecasting.

Time-series monitoring tools are based on time-series databases, which are optimized for time-stamped data collected continuously or at fine-grained intervals. Because they retain fine-grained data for a longer term than many traditional metrics-based monitoring tools, they can be used to compare long-term trends in DevOps monitoring data. They can also pull in data from sources beyond the IT infrastructure itself, linking developer and business activity with the behavior of the infrastructure.
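In rough terms, the time-series data model can be sketched as an append-only list of (timestamp, value) samples per metric, over which arbitrary time windows can be aggregated. The following is a hypothetical, minimal illustration of that idea; production time-series databases such as Prometheus, InfluxDB and Wavefront layer compression, tags and retention policies on top of it.

```python
# Minimal sketch of the time-series model: append-only (timestamp, value)
# samples per metric name, plus a windowed aggregate over any time range.
from bisect import bisect_left, bisect_right
from collections import defaultdict

class TinyTSDB:
    def __init__(self):
        # metric name -> (sorted timestamps, parallel list of values)
        self._series = defaultdict(lambda: ([], []))

    def append(self, name, ts, value):
        stamps, vals = self._series[name]
        stamps.append(ts)  # assumes monotonically increasing timestamps
        vals.append(value)

    def avg(self, name, start, end):
        """Average of samples with start <= timestamp <= end."""
        stamps, vals = self._series[name]
        lo, hi = bisect_left(stamps, start), bisect_right(stamps, end)
        window = vals[lo:hi]
        return sum(window) / len(window) if window else None

db = TinyTSDB()
for ts, cpu in [(0, 10.0), (60, 20.0), (120, 90.0), (180, 80.0)]:
    db.append("cpu_percent", ts, cpu)
print(db.avg("cpu_percent", 0, 60))     # 15.0
print(db.avg("cpu_percent", 100, 200))  # 85.0
```

The long-term trend comparisons described above amount to running windowed aggregates like this over data retained across multiple years.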

Time-series monitoring tools include the open source project Prometheus, which is popular among Kubernetes shops, as well as commercial offerings from InfluxData and Wavefront, the latter of which VMware acquired last year.

DevOps monitoring with these tools gives enterprise IT shops such as Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, a unified view of both business and IT infrastructure metrics. It does so over a longer period of time than the Datadog monitoring product the company used previously, which retains data for only up to 15 months in its Enterprise edition.

“Our business is very cyclical as an education company,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “Right before the beginning of the school year, our usage goes way up, and we needed to be able to observe that [trend] year over year, going back several years.”

Allen’s engineering team got its first taste of InfluxData as a long-term storage back end for Prometheus, whose storage subsystem at the time was limited in how much data it could hold; Prometheus has since overhauled its storage system in version 2.0. Eventually, Allen and his team decided to work with InfluxData directly.

Houghton Mifflin Harcourt uses InfluxData to monitor traditional IT metrics, such as network performance, disk space, and CPU and memory utilization, in its Amazon Web Services (AWS) infrastructure, as well as developer activity in GitHub, such as pull requests and number of users. The company developed its own load-balancing system using Linkerd and Finagle; InfluxData also collects data on network latencies in that system and ties in with the Zipkin distributed tracing tool to troubleshoot network performance issues.
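Metrics of the kind described above are written to InfluxDB in its line protocol, a plain-text wire format of the form `measurement,tags fields timestamp`. The sketch below formats one such point; the measurement, tag and field names are hypothetical examples, not Houghton Mifflin Harcourt's actual schema.

```python
# Hedged sketch: format a metric as an InfluxDB line-protocol point,
# the plain-text write format InfluxData uses. Names are illustrative.
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

point = to_line_protocol(
    "cpu",
    {"host": "web-01", "region": "us-east-1"},  # tags: indexed metadata
    {"usage_percent": 87.5},                    # fields: the measured values
    1515628800000000000,                        # nanosecond timestamp
)
print(point)
# cpu,host=web-01,region=us-east-1 usage_percent=87.5 1515628800000000000
```

Tags are indexed and cheap to filter on, which is what makes queries like "CPU utilization per host over the past three years" practical at this granularity.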

With multiple years of highly granular infrastructure data, Allen's team of just five people supports nearly 500 engineers who deliver applications to the company's massive Apache Mesos data center infrastructure.


Time-series monitoring tools boost DevOps automation

Time-series data also allows DevOps teams to ask more nuanced questions about the infrastructure to inform troubleshooting decisions.

“It allows you to apply higher-level statistics to your data,” said Louis McCormack, lead DevOps engineer for Space Ape Games, a mobile video game developer based in London and an early adopter of Wavefront’s time-series monitoring tool. “Instead of something just being OK or not OK, you can ask, ‘How bad is it?’ Or, ‘Will it become very problematic before I need to wake up tomorrow morning?'”


Space Ape manages a smaller infrastructure than Houghton Mifflin Harcourt, at about 600 AWS instances compared with about 64,000. But Space Ape also has highly seasonal business cycles, and time-series monitoring with Wavefront helps it not only collect granular historical data, but also scale the IT infrastructure in response to seasonal fluctuations in demand.

“A service in AWS consumes Wavefront data to make the decision about when to scale DynamoDB tables,” said Nic Walker, head of technical operations for Space Ape Games. “Auto scaling DynamoDB is something Amazon has only just released as a feature, and our version is still faster.”

The company’s apps use the Wavefront API to trigger the DynamoDB autoscaling, which makes the tool much more powerful, but it also requires DevOps engineers to learn the Wavefront query language, which isn’t always intuitive, Walker said. In Wavefront’s case, that learning curve is offset by the software’s prebuilt data visualization dashboards, the primary reason Walker’s team chose Wavefront over open source alternatives such as Prometheus. Wavefront is also offered as a service, which takes the burden of data management off Space Ape’s hands.
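The metrics-driven autoscaling pattern Space Ape describes can be sketched generically: query recent consumed-capacity samples (as a Wavefront-style query would return them) and compute a new provisioned capacity for the DynamoDB table. The thresholds, scaling step and function names below are hypothetical illustrations, not Space Ape's actual values or code.

```python
# Generic sketch of metrics-driven autoscaling: compare recent peak
# consumption against provisioned capacity and scale up, scale down,
# or hold. The 80%/30% bands and 1.5x step are hypothetical choices.
def decide_capacity(provisioned, recent_consumed,
                    scale_up_at=0.8, scale_down_at=0.3, step=1.5):
    peak = max(recent_consumed)
    utilization = peak / provisioned
    if utilization > scale_up_at:       # close to throttling: scale up
        return int(provisioned * step)
    if utilization < scale_down_at:     # over-provisioned: scale down
        return max(1, int(provisioned / step))
    return provisioned                  # within band: leave as-is

print(decide_capacity(100, [50, 70, 90]))  # peak at 90% -> 150
print(decide_capacity(100, [10, 20, 25]))  # peak at 25% -> 66
print(decide_capacity(100, [40, 50, 60]))  # within band -> 100
```

Acting on a window of recent samples rather than a single reading is what makes the time-series store essential here: a one-off spike and a sustained ramp look identical at a single point in time.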

Houghton Mifflin Harcourt chose a different set of tradeoffs with InfluxData, which uses a SQL-like query language that was easy for developers to learn, but the DevOps team must work with outside consultants to build custom dashboards. Because that work isn’t finished, InfluxData has yet to completely replace Datadog at Houghton Mifflin Harcourt, though Allen said he hopes to make the switch this quarter.

Time-series monitoring tools scale up beyond the capacity of traditional metrics monitoring tools, but both companies said there’s room to improve performance when crunching large volumes of data in response to broad queries. Houghton Mifflin Harcourt, for example, queries millions of data points at the end of each month to calculate Amazon billing trends for each of its Elastic Compute Cloud instances.

“It still takes a little bit of a hit sometimes when you look at those tags, but [InfluxEnterprise version] 1.3 was a real improvement,” Allen said.

Allen added that he hopes to use InfluxData’s time-series monitoring tool to inform decisions about multi-cloud workload placement based on cost. Space Ape Games, meanwhile, will explore the AI and machine learning capabilities available for Wavefront, though the jury’s still out for Walker and McCormack on whether AIOps will be worth the time it takes to implement. In particular, Walker said he’s concerned about false positives from AI analysis of time-series data.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.