
CenturyLink acquires Streamroot, adding P2P CDN capabilities

CenturyLink is looking to grow its content delivery network capabilities with the acquisition of privately held Streamroot Inc. Financial terms of the deal were not disclosed.

Streamroot’s technology provides a peer-to-peer (P2P) mesh approach for video content delivery applications. The advantage of the P2P content delivery network (CDN) approach, according to Streamroot, is it can potentially reach underserved markets and enable an alternative system for content delivery.

The deal was made public on Tuesday.

P2P CDNs are a fairly small business right now, and CenturyLink’s acquisition of Streamroot won’t change the CDN landscape, said 451 Research analyst Craig Matsumoto. That said, for CenturyLink, a P2P CDN capability is a nice, low-risk way to extend reach into different markets, especially internationally, he added.

“Think of live sports. Someone broadcasting a World Cup match is probably going to use multiple CDNs. So, if CenturyLink can claim extended reach into underserved areas, that’s a differentiator,” Matsumoto said.

Overall, he said, it's known that P2P CDN technology can work at scale; to date, though, it's been more a matter of finding use cases where the need is acute enough.

“If the CenturyLink-Streamroot deal works out, I could see the other CDNs working out partnerships or acquisitions with the other P2P startups,” he said.


In the past, the term P2P was often associated with BitTorrent as a network approach that uses the power of devices in the network to share data.

Streamroot's P2P CDN is unlike BitTorrent in that it gives premium content providers complete control, ensuring that only users who have accepted their terms of use can benefit from, and contribute to, the user experience improvements achieved by joining a mesh of similarly licensed users, said Bill Wohnoutka, vice president of global internet and content delivery services at CenturyLink.

“Streamroot’s data science and client heuristics enable connected consumer devices, such as smart phones, tablets, computers, set-top consoles and smart TVs, to participate in the serving of premium content through a secure and private mesh delivery,” Wohnoutka said. “Mesh servers are made from users that demonstrate performance and are created within the boundaries of carrier and enterprise networks to minimize the negative impact of the traffic on the network.”
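Streamroot's client logic is proprietary, but the hybrid decision Wohnoutka describes can be sketched in miniature: prefer a well-performing peer in the mesh for the next video segment, and fall back to the CDN edge when no peer is healthy enough. The peer names, scores and threshold below are hypothetical, purely for illustration.

```python
# Illustrative sketch of hybrid P2P/CDN segment sourcing -- not
# Streamroot's actual algorithm. Peer ids, scores and the threshold
# are hypothetical.

def choose_source(peers, min_score=0.5):
    """Pick the best-performing peer for the next video segment,
    falling back to the CDN edge when no peer is healthy enough.

    `peers` maps peer id -> recent delivery-performance score in [0, 1].
    """
    if peers:
        best_peer, score = max(peers.items(), key=lambda kv: kv[1])
        if score >= min_score:
            return ("peer", best_peer)
    return ("cdn", "edge-server")

# A healthy mesh serves the segment peer-to-peer.
print(choose_source({"peer-a": 0.9, "peer-b": 0.4}))  # ('peer', 'peer-a')
# With no viable peers, the client falls back to the CDN.
print(choose_source({"peer-b": 0.2}))                 # ('cdn', 'edge-server')
```

In a real client the score would come from measured throughput and latency, and the mesh membership would be restricted to licensed users, as described above.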

Streamroot and CenturyLink

While the acquisition is new, Wohnoutka noted that CenturyLink began reselling Streamroot’s mesh delivery service in April 2019. He added that, as over-the-top (OTT) content becomes more pervasive worldwide, CenturyLink felt now was the right time to accelerate innovation and acquire Streamroot.

Streamroot’s data science and client heuristics enable connected consumer devices … to participate in the serving of premium content through a secure and private mesh delivery.
Bill Wohnoutka, vice president of global internet and content delivery services, CenturyLink

With the P2P CDN technology, Wohnoutka said the goal is to enable customers to get the most out of CenturyLink's CDN and any other CDNs they may be using, supporting a hybrid CDN approach.

“It is a true last-mile solution that pushes edge computing all the way down to the user device to localize traffic and reduce the pressures that OTT content puts on carrier networks,” he said.

P2P CDNs will also likely benefit from the rollout of 5G access technology. Wohnoutka said, with 5G, there are inherent localization and traffic optimization algorithms embedded in the software, as well as a data science approach to ensure best performance during peak internet traffic and in hard-to-reach locations.

“The direct benefits are realized by the content customer, end user and, importantly, the ISPs [internet service providers] architecting their 5G networks for low latency, high performance and traffic efficiency,” he said.

Wohnoutka noted that CenturyLink’s fiber network already has more than 450,000 route miles of coverage. He added that the company’s CDN business is a key part of continued investment in edge computing capabilities that puts workloads closer to customers’ digital interactions.

“What we are bringing our customers with this acquisition is the advantage of data science and software to help them improve the user experience with rich media content during peak hours on the internet,” Wohnoutka said.


The use of technology in education has pros and cons

The use of technology in education continues to grow, as students turn to AI-powered applications, virtual reality and internet searches to enhance their learning.

Technology vendors, including Google, Lenovo and Microsoft, have increasingly developed technology to help pupils in classrooms and at home. That technology has proved popular with students in both elementary and higher education, and has been shown to benefit independent learning efforts, even as critics have expressed worry that it can lead to decreased social interaction.

Lenovo, in a recent survey of 15,000 technology users across 10 countries, reported that 75% of U.S. parents who responded said their children are more likely to look something up online than ask for help with schoolwork. In China, that number was 85%, and in India, it was 89%.

Taking away stress

According to vendors, technology can augment the schoolwork help that busy parents give their children.

"Parenting in general is becoming a challenge for a lot of modern families, as both parents are working and some parents may feel overwhelmed," said Rich Henderson, director of global education solutions at Lenovo, a China-based multinational technology vendor.

If children can learn independently, that can take pressure and stress off of parents, Henderson continued.

Independent learning can include searching for information on the web, querying using a virtual assistant, or using specific applications.

About 45% of millennials and younger students find technology “makes it much easier to learn about new things,” Henderson said.

Many parents, however, said in the survey that they felt the use of technology in education, while beneficial to their children's learning, also led to decreased social interaction. The practice of looking up answers with technology, instead of consulting parents, teachers or friends, left parents concerned that "their children may be becoming too dependent on technology and may not be learning the necessary social skills they require," according to the survey.

At the same time, however, many parents felt that the use of technology in education would eventually help future generations become more independent learners.

Technology has certainly helped [children learn].
Rich Henderson, director of global education solutions, Lenovo

“Technology has certainly helped [children learn] with the use of high-speed internet, more automated translation tools. But we can’t ignore the fact that we need students to improve their social skills, also,” Henderson said. “That’s clearly a concern the parents have.”

Yet, despite the worries, technology vendors have poured more and more money into the education space. Lenovo itself sells a number of hardware and software products for the classroom, including infrastructure to help teachers manage devices in a classroom, and a virtual reality (VR) headset and software to build a VR classroom.

The VR classroom has benefited students taking online classes, giving them a virtual classroom or lab to learn in.

Google in education

Meanwhile, Google, in an Aug. 15 blog post, promoted Socratic, a mobile learning application it had quietly acquired last year. The AI-driven application, released for iOS, can automatically solve mathematical and scientific equations from photos of them. The application can also search for answers to questions posed in natural language.

The use of technology in education provides benefits and challenges for students.

Socratic also features reference guides to topics frequently taught in schools, including algebra, biology and literature.

Microsoft, whose Office suite is used in many schools around the world, sells a range of educational and collaborative note-taking tools within its OneNote product. The tool, which includes AI-driven search functions, enables students to type in math equations, which it will automatically solve.
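The internals of tools like OneNote's math assistant and Socratic aren't public, but the core capability they describe, taking a typed equation and producing its solutions, can be sketched with the quadratic formula. This is a minimal, hypothetical illustration, not any vendor's implementation.

```python
import math

# Minimal sketch of the kind of step an equation-solving study tool
# performs: solving ax^2 + bx + c = 0 with the quadratic formula.
# This is an illustration, not any product's actual solver.

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0, smallest first."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(solve_quadratic(1, -3, 2))  # [1.0, 2.0]
```

Production tools go far beyond this, parsing handwriting or photos first and showing worked steps rather than just answers.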

While apparently helpful, the increased use of technology in education, as well as the prevalence of AI-powered software for students, has sparked some criticism.

The larger implications

Mike Capps, CEO of AI startup Diveplane, which sells auditable, trainable, “transparent” AI systems, noted that the expanding use of AI and automation could make basic skills obsolete.

Many basic skills, including typing and driving, could eventually end up like Latin — learnable, potentially useful, but unnecessary.

AI systems could increasingly help make important life decisions for people, Capps said.

“More and more decisions about kids’ lives are made by computers, like college enrollment decisions and what car they should buy,” Capps said.


Startup Dgraph Labs growing graph database technology

Dgraph Labs Inc. is set to grow its graph database technology with the help of a cash infusion of venture financing.

The company was founded in 2015 as an effort to advance the state of graph database technology. Dgraph Labs’ founder and CEO Manish Jain previously worked at Google, where he led a team that was building out graph database systems. Jain decided there was a need for a high-performance graph database technology that could address different enterprise use cases.

Dgraph said July 31 it had completed an $11.5 million Series A funding round.

The Dgraph technology is used by a number of different organizations and projects. Among them is Intuit, which uses Dgraph as the back-end graph database for its open source project K-Atlas.

“We were looking for a graph database with high performance in querying large-scale data sets, fully distributed, highly available and as cloud-native as possible,” said Dawei Ding, engineering manager at Intuit.

Ding added that Dgraph's graph database technology stood out from both an architectural design and a performance benchmarking perspective. Moreover, he noted that being fully open source made Dgraph an even more attractive choice for Intuit's open source software project.

The graph database landscape

Multiple technologies are available in the graph database landscape, including from Neo4j, Amazon Neptune and DataStax Enterprise Graph, among others. In Jain’s view, many graph database technologies are actually graph layers, rather than full graph databases.

“By graph layer, what I mean is that they don’t control storage; they just overlay a graph layer on top of some other database,” Jain said.

So, for example, he said a common database used by graph layer-type technologies is Apache Cassandra or, in Amazon’s case, Amazon Aurora.

Graph database of all the movies directed by Steven Spielberg, their country of filming, genres, actors in those movies and the characters played by those actors.

“The problem with that approach is that to do the graph traversal or to do a graph join, you need to first bring the data to the layer before you can interact with it and do interesting things on it,” Jain commented. “So, there’s multiple back and forth steps and, therefore, the performance likely will decrease.”

In contrast, the founding principle behind Dgraph was that graph database technology could be developed that can scale horizontally while also upping performance, because the database controls how data is stored on disk.
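The traversal Jain describes can be illustrated with a toy in-memory graph, echoing the Spielberg example pictured above. The adjacency map stands in for native graph storage; each predicate hop is resolved in a single pass rather than through repeated fetch-then-join round trips against a separate backing store. The data and edge names here are made up for illustration.

```python
# Toy in-memory graph standing in for native graph storage.
# Nodes, edges and data are illustrative only.
edges = {
    "Steven Spielberg": {"directed": ["Jaws", "Jurassic Park"]},
    "Jaws":             {"genre": ["Thriller"], "starring": ["Roy Scheider"]},
    "Jurassic Park":    {"genre": ["Adventure"], "starring": ["Sam Neill"]},
}

def traverse(node, *predicates):
    """Follow a chain of edge predicates from `node`, one hop per
    predicate. Because the store holds the graph natively, each hop is
    a local lookup instead of a round trip to an underlying database."""
    frontier = [node]
    for pred in predicates:
        frontier = [t for n in frontier for t in edges.get(n, {}).get(pred, [])]
    return frontier

# Actors in Spielberg-directed films, two hops from the director node:
print(traverse("Steven Spielberg", "directed", "starring"))
# ['Roy Scheider', 'Sam Neill']
```

A graph layer over, say, Cassandra would have to fetch each frontier from the backing store and join it in the layer, which is the back-and-forth cost Jain is pointing at.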

Open source and the enterprise

Dgraph is an open source project and hit its 1.0 milestone in December 2017. The project has garnered more than 10,000 stars on GitHub, which Jain points to as a measure of the undertaking’s popularity.

Going a step further is the company's Dgraph Enterprise platform, which provides additional capabilities an organization might need for support, access control and management. Jain said Dgraph Labs uses the open core model, in which the open source application is free to use, but if an organization wants certain features, it must pay for them.

Jain stressed that the core open source project is functional on its own — so much so that an organization could choose to run a 20-node Dgraph cluster with replication and consistency for free.

Why graph databases matter

We were looking for a graph database with high performance in querying large-scale data sets, fully distributed, highly available and as cloud-native as possible.
Dawei Ding, engineering manager, Intuit

A problem with typical relational databases is that with every data model comes a new table, or new schema, and, over time, that can become a scaling challenge, Jain said. He added that in the relational database approach, large data sets tend to become siloed over time as well. With a graph database, it is possible to unify disparate data sources.

As an example of how a graph database approach can help eliminate isolated data sources, Jain said one of Dgraph's largest enterprise customers took 60 different vertical data silos stored in traditional databases and put all of that data into a Dgraph database.

“Now, they’re able to run queries across all of these data sets, to be able to power not only their apps, but also power real-time analytics,” Jain said.

What’s next for Dgraph Labs

With the new funding, Jain said the plan is to open new offices for the company as well as expand the graph database technology.

One key area of future expansion is building a Dgraph cloud managed service. Another area that will be worked on is third-party integration, with different technologies such as Apache Spark for data analysis.

“Now that we have a bunch of big companies using Dgraph, they need some additional features, like, for example, encryption, and so we are putting a good chunk of our time into building out capabilities,” he said.


IDC: SD-WAN market spend to top $5B in 2023

The global software-defined WAN infrastructure market will grow an average of nearly 31% annually through 2023 as vendors feed enterprise hunger for technology that connects employees to applications running on multiple cloud service providers.

That’s one of the findings of IDC’s latest SD-WAN forecast. The research firm said the market would reach $5.25 billion in 2023 from $1.4 billion in 2018, the beginning of the forecast period.
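The forecast's endpoints can be sanity-checked with the standard compound-annual-growth-rate formula: growing from $1.4 billion in 2018 to $5.25 billion in 2023 works out to roughly 30% a year, in line with the nearly 31% cited (the small gap is likely rounding in the reported endpoints).

```python
# Sanity check of IDC's SD-WAN forecast: $1.4B (2018) to $5.25B (2023).
start_revenue = 1.4    # $B, 2018
end_revenue = 5.25     # $B, 2023
years = 5

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 30.3%
```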

Enterprises have found SD-WAN a necessary technology for connecting branch locations and remote offices with SaaS applications and software running on public clouds, such as AWS and Microsoft Azure. Traditional WAN technology lacks most of the features needed for connecting to cloud and SaaS applications, such as simplified management, cost-effective bandwidth utilization and WAN flexibility, efficiency and security, IDC said.

The demand for SD-WAN will fuel a continuation of market consolidation through acquisition as companies with stronger business models buy weaker vendors for their intellectual property, customer base or presence in specific geographical regions, IDC said.

SD-WAN market consolidation

The SD-WAN market today has more than three dozen vendors, which is more than the market can support, analysts have said. The most significant acquisitions to date include VMware purchasing VeloCloud and Cisco Systems acquiring Viptela in 2017, and Oracle picking up Talari Networks in 2018.

Other trends spotted by IDC include SD-WAN evolving from a standalone product to a key feature within a broader SD-branch platform that encompasses additional network and security services.

“Vendors will compete intensely on this front during the next few years,” the IDC report said.

Businesses with lots of branch and remote offices are deploying SD-branch technology to simplify network operations through consolidation of WAN connectivity, network security, LAN and Wi-Fi in a unified platform, according to Lee Doyle, principal analyst for Doyle Research. Network and security vendors offering SD-branch options include Cisco Meraki, Cradlepoint, Fortinet, Hewlett Packard Enterprise’s Aruba Networks, Riverbed and Versa Networks.

Market share leaders

IDC defines SD-WAN infrastructure as comprising edge routing software or hardware, as well as traditional routers and WAN optimization technology if they are in-use, integrated components of an SD-WAN product.

Other infrastructure components include SD-WAN controllers for centralized implementation of application policy and WAN routing, network visibility and analytics.

Based on IDC’s definition of SD-WAN infrastructure, Cisco’s broad portfolio of hardware and software made it the market leader with a 46.4% share, the researcher said. VMware, which sells only software, was second with an 8.8% share, followed by Silver Peak, 7.4%; Nuage Networks, a Nokia company, 4.9%; and Riverbed, 4.3%.


SAP PartnerEdge initiative offers free S/4HANA Cloud resources

SAP is taking a new tack to grow its public cloud offering and development resources: It's giving them away for free.

As of July 1, qualified SAP PartnerEdge members can now test and demonstrate systems built on SAP S/4HANA Cloud and SAP C/4HANA free of charge. Partners need to have a valid SAP PartnerEdge status and employ three consultants who are certified in “operations capabilities for SAP applications running on S/4HANA Cloud” to get free access to test and demo systems, according to the company. Partners who have at least three consultants certified for SAP C/4HANA applications also qualify for the initiative.

SAP PartnerEdge is designed to provide SAP partners the resources they can use to develop, sell, service and manage SAP systems and applications. The program currently has more than 19,800 partners worldwide, according to SAP.

The new SAP PartnerEdge initiative has drawn positive reviews from partners, while one analyst sees the effort as a long-overdue strategic move to build out SAP applications and keep pace with other cloud providers, such as AWS and Microsoft.

Seeding the cloud applications

The SAP PartnerEdge initiative is an attempt to open up SAP Cloud Platform and S/4HANA to a broader range of developers and seed the market with applications built on SAP technology, according to analyst Joshua Greenbaum, principal at Enterprise Applications Consulting, based in Berkeley, Calif. This is similar to the approach long favored by the likes of Microsoft and AWS.

“They want the developers of future great products — whether they’re internal development teams, startups or professional teams — to think about SAP as their development platform,” Greenbaum said. “That’s their fundamental strategy for growing the uptake of SAP Cloud Platform and S/4HANA.”

It’s being met with enthusiasm. Alain Dubois, chief marketing and business development officer at Beyond Technologies, called the program a “great initiative.”

Beyond Technologies is a Montreal-based SAP partner that specializes in the development and integration of a range of SAP products, including S/4HANA and C/4HANA. It plans to take advantage of the free cloud access the new SAP PartnerEdge program offers.

“We will use it for demos and [proofs of concept] as well as for enablement, which is crucial for us as a value-added reseller because it will help [keep down] customer acquisition costs,” he said.

Shaun Syvertsen, CEO of ConvergentIS, is also looking forward to using the program’s resources. ConvergentIS is an SAP partner based in Calgary, Alberta, that provides SAP technology development and consulting services.

The SAP PartnerEdge program will drive the long-term success of SAP technology with partners, Syvertsen said. It’s particularly valuable for partners to get a deeper understanding of public cloud SAP versions.

“In particular, you have to understand that 20 years of on-premises experience is potentially dangerous in the cloud environment, as the setup and range of flexibility is quite different from on-premises,” Syvertsen said. “So, a new cloud-centric mindset and cloud-specific experience is critical.”

Better late than never

SAP partners have been beating the drum for an initiative like this for years to help them keep pace with competitors like AWS, Microsoft and Salesforce, Enterprise Applications Consulting’s Greenbaum explained.

“The partners and would-be partners have been saying that SAP has to emulate the rest of the market for a while,” he said. “There are lots of open source tools and there’s a huge amount of love and support [for developers] from competitor platform providers, and the partners have always said that SAP has to do something similar or they’ll go somewhere else.”

SAP has to do some work to catch up to Salesforce or AWS, but it’s still a relatively new game as cloud uptake numbers are just beginning to gain momentum in the enterprise applications market, according to Greenbaum.

What could be a differentiator for SAP also is the totality of what it can offer compared to the other cloud companies.

“Underneath the hood, SAP provides access to business services — data and processes — that are potentially very valuable,” he said. “Salesforce can do that, but only within the domain of CRM, and AWS doesn’t really do that at all. So, this is a good time for SAP. It would have been a better time two years ago, but the story is hardly over at this point.”

Availability for the new SAP PartnerEdge program for qualified partners began July 1. Registration will remain open until Sept. 30, 2019, according to the company. Partners who have already bought the test and demonstration licenses will receive a migration offer from SAP Partner Licensing Services if they want to migrate their existing services to the free access, according to SAP.


VMworld pushes vSAN HCI to cloud, edge

VMware executives predict the vSAN hyper-converged software platform will grow rapidly into a key building block for the vendor’s strategy to conquer the cloud and other areas outside the data center.

VMware spent a lot of time discussing the roadmap for its vSAN hyper-converged infrastructure (HCI) software at VMworld 2018 last month. The vSAN news included short-term specifics with the launch of a private beta program for the next version, along with more general overarching plans for the future.

VMware executives made it clear that vSAN HCI will play a big role in its long-term cloud strategy. They painted HCI as a technology spanning from the data center to the cloud to the edge, as it brings storage, compute and other resources together into a single platform.

The vSAN HCI software is built into VMware’s vSphere hypervisor, and is sold as part of integrated appliances such as Dell EMC VxRail and as Ready Node bundles with servers. VMware claims more than 14,000 vSAN customers, and IDC lists it as the revenue leader among HCI software.

VMware opened its private beta program for vSAN 6.7.1 during VMworld, adding file and native cloud storage and data protection features.

VSAN HCI: From DC to cloud to edge

During his opening day keynote at VMworld, VMware CEO Pat Gelsinger called vSAN “the engine that’s just been moving rapidly to take over the entire integration of compute and storage to expand to other areas.”

Where is HCI moving? Just about everywhere, according to VMware executives. That includes Project Dimension, a planned hardware-as-a-service offering designed to bring VMware SDDC infrastructure on premises.

“The definition of HCI has been expanding,” said Yanbing Li, VMware senior vice president and general manager of storage and availability. “We started with a simple mission of converging compute and storage by putting both on a software-defined platform running on standard servers. This is where a lot of our customer adoption has happened. But the definition of HCI is expanding up through the stack, across to the cloud and it’s supporting a wide variety of applications.”

VSAN beta: Snapshots, native cloud storage

The vSAN 6.7.1 beta includes policy-based native snapshots for data protection, NFS file services and support for persistent storage for containers. VMware also added the ability for vSAN to manage Amazon Elastic Block Storage (EBS) in AWS, a capacity reclamation feature and a Quickstart guided cluster creation wizard.

If it pans out as we hope, it will be data center as a service.
Chris Gregg, CIO, Mercy Ships

Lee Caswell, VMware vice president of products for storage and availability, said vSAN can now take point-in-time snapshots across a cluster. The snapshot capability is managed through VMware’s vCenter. There is no native vSAN replication yet, however. Replication still requires vSphere Replication.

Caswell said the file services include a clustered namespace, allowing users to move files to VMware Cloud on AWS and back without requiring separate mount points for each node.

The ability to manage elastic capacity in AWS allows customers to scale storage and compute independently.

“This is our first foray into storage-only scaling,” Caswell said.

The automatic capacity reclamation feature will reclaim unused capacity on expensive solid-state drive storage.

Caswell said there was no timetable for when the features will make it into a general availability version of vSAN.

Mercy Ships was among the customers at VMworld expanding their vSAN HCI adoption. Mercy Ships uses Dell EMC VxRail appliances running vSAN in its Texas data center and is adding VxRail on two hospital ships that bring volunteer medical teams to underdeveloped areas. They include the current Africa Mercy floating hospital and a second ship under construction.

“The data center for us needs to be simple, straightforward, scalable and supportable,” Mercy Ships CIO Chris Gregg said. “That’s the dream we’re seeing through hyper-converged infrastructure. If it pans out as we hope, it will be data center as a service. Then, as an IT department we can focus on things that are really important to the organization. For us, that means serving more patients.”

Microsoft’s Airband Grant Fund invests in 8 start-ups delivering internet-connected solutions to rural communities around the globe – Microsoft on the Issues

Today, internet access is as essential as electricity. It empowers entrepreneurs to start and grow small businesses, farmers to implement precision agriculture, doctors to improve community health and students to do better in school. But almost half the world’s population is still not online, often because they live in underserved areas, and therefore miss out on opportunities to take advantage of and become part of the digital economy. As a global technology company, we believe we have a responsibility and a great opportunity to help close this gap.

That’s why we’re excited to announce the eight early-stage companies selected for our third annual Airband Grant Fund. These start-ups are overcoming barriers to provide affordable internet access to unconnected and underserved communities in the U.S., Africa and Asia using TV white spaces (TVWS) and other promising last-mile access technologies. Our grant fund will provide financing, technology, mentorship, networking opportunities and other support to help scale these start-ups’ innovative new technologies, services and business models. The Airband Grant Fund is part of the Microsoft Airband Initiative, launched last year to extend broadband access across the United States and, ultimately, connectivity around the globe.

We are excited to partner with this year’s cohort of Airband grantees, which include:

These companies are improving life for some of the most underserved communities here in the U.S. and around the world. For example, approximately 35 percent of people living on tribal lands in the U.S. lack broadband. Tribal Digital Village wants to change that. With support from our Airband Grant Fund, they will use TVWS – vacant broadcast spectrum that enables internet connections in challenging rural terrain – and other technologies to deploy broadband to tribal homes on 20 isolated reservations in Southern California. “We realized that without access to the internet, tribal students weren’t going to have access to advanced opportunities that other kids had,” said Matt Rantanen, director of technology for Tribal Digital Village. “But there was no infrastructure on tribal land and no telecommunications companies wanted to work with us to build it out. So we had to build it ourselves.”

ColdHubs is another organization finding innovative ways to tackle the broadband access challenge. In Owerri, Nigeria, ColdHubs is transforming their refrigerated crop storage rooms into Wi-Fi hot spots using TVWS technology. The company aims to empower smallholder farmers with the ability to earn better livelihoods. Their solar-powered crop storage facilities help reduce food spoilage, which causes 470 million smallholder farmers to lose 25 percent of their annual income. Farmers who use ColdHubs can extend the freshness of their fruits and vegetables from two to about 21 days, reducing post-harvest loss by 80 percent. By turning these facilities into Wi-Fi “Farm Connect Centers,” ColdHubs will enable farmers to get online and access agricultural training, resources to improve crop yields and marketing and digital skills training.

Whether in the U.S. or around the world, we believe in nurturing innovative solutions by supporting local companies and entrepreneurs. We are eager to work in close partnership with these Airband Grant Fund recipients over the next year to refine and expand the reach of their solutions. And in the coming months, we’ll have more to share on the exciting progress we’re making on our Airband Initiative, and our goal to deliver broadband to 2 million rural Americans by 2022, and to extend connectivity to underserved communities around the world. Learn more about the Airband Grant Fund recipients here.


Land O’Lakes CIO strives to optimize economics of digital farming

Powered by big data, digital farming aims to help farmers grow crops in smarter, more sustainable and more efficient ways than ever before. Today, Land O’Lakes Inc. is looking at how to use analytics and cutting-edge technology to further reduce the risk farmers take on every growing season.

In an interview filmed at the recent MIT Sloan CIO Symposium, Michael Macrie, senior vice president and CIO at Land O’Lakes, provided a glimpse into how AI and machine learning are transforming one of the oldest industries in the world and why the economics of digital farming are a critical part of the way forward.

Editor’s note: The following was edited for clarity and brevity.

Is it hard to sell AI and machine learning to the business?

Michael Macrie: Yes. At their fundamental levels, these concepts are tools. They're tools that could be used for a number of different reasons or solutions. So we like to talk about that business value — what's the value they're going to bring to our business, what's the value they're going to bring to that specific individual and the company, what's the value they're going to bring to the customer or consumer. And if we can do that, we have a richer discussion than talking about the tool sets.

Yes, we may use big data, we may use AI behind the scenes, but the reality is that most businesspeople are looking for the result. They’re looking for the solution. And if we use magic behind the scenes, I’m not sure they care. And some of this is pretty close to magic these days. I think there’s a big opportunity for CIOs to talk differently to their stakeholders about the end result and what [companies] care about — not about the technology.

I talked to your predecessor in 2013 about digital farming. Where do things stand today?

Macrie: Back in 2013, we were just launching our first tool called the R7 tool. And now, with five years under our belt, we’ve become one of the market leaders and the largest distributor of ag-technology (agriculture technology) to America’s farmers. We have four proprietary tools that do very well. The tools help our farmers do more and grow more in the most sustainable way possible. And it’s been a great run for us in deploying these ag-technology resources. We bought a satellite company, which analyzes all of the [customer] imagery in real time. We project all those alerts down to the fields and down to people’s handhelds. We direct farmers on where to go every day.

It’s really changing American agriculture as we know it. It’s helping farmers grow more, spend less and do it in a way that’s more sustainable and environmentally friendly.

What kind of data do the tools rely on?

Macrie: For each and every field in the United States, we analyzed 40 years of history, and we can tell the farmer what the best seed is to plant in each piece of their field — different populations, different nutrient recommendations, all in a custom prescription. We beam that to the tractor. The tractor drives itself and plants these seeds. During the year, we monitor it with those same satellites, and we can detect anomalies before the naked eye can detect them and direct farmers and scouts out to those areas to diagnose what’s going on in the field: Does it need nutrients? Is there a weed? Is there a pest problem? We can then take action and preserve their yield during the course of the year.

Do you use AI and machine learning to detect anomalies?

Macrie: We use statistics and machine learning to detect the anomalies in the field. We run statistics across not only the field itself, but the field’s history and the other fields around it that were planted on similar dates to detect those anomalies.
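The approach Macrie describes boils down to comparing each field's current reading against a baseline built from comparable fields planted on similar dates, then flagging statistical outliers. A minimal sketch of that idea (the NDVI-style vegetation readings, field names and threshold below are hypothetical illustrations, not Land O'Lakes data or code):

```python
from statistics import mean, stdev

def detect_anomalies(readings, z_threshold=1.5):
    """Flag fields whose vegetation reading deviates sharply from peers.

    readings: dict mapping field id -> current reading (e.g., an
    NDVI-style index) for fields planted on similar dates.
    Returns the field ids whose z-score exceeds the threshold.
    """
    values = list(readings.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all fields identical: nothing stands out
        return []
    return [field for field, v in readings.items()
            if abs(v - mu) / sigma > z_threshold]

# Hypothetical readings: field_c lags its neighbors, suggesting a
# nutrient, weed or pest problem worth sending a scout to check.
readings = {"field_a": 0.82, "field_b": 0.79, "field_c": 0.41,
            "field_d": 0.80, "field_e": 0.83}
print(detect_anomalies(readings))  # → ['field_c']
```

A production system would also weigh each field's own multi-year history, as Macrie notes, rather than only its neighbors in the current season.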

Now that you’ve developed the tools and architecture for digital farming, what new problem are you looking to solve?

Macrie: What we’re working on right now is the economics and the economic variables for that farmer. We think we’ve cracked the code on how to grow more with less, and we’re bringing those solutions to market today and next year. After that though, we have to get the economics right.

A farmer is probably one of the largest gamblers in all of society. They take so much risk — on weather, on crop yields and on the economic outcomes tied to commodity pricing. We have to help them reduce that risk. We have to help them make it more manageable. And so that’s where we’re investing a lot of time and technology today to reduce the risk farmers take every year and provide them a more sustainable path to income.

Connectedness is king, as Neo4j graph database ports to Spark

Graph database provider Neo4j is taking steps to grow a data platform around its similarly named database, adding Cypher graph language support for the Apache Spark analytics engine.

That, along with additional analytics, visualization, data import and transformation capabilities for the Neo4j graph database, was discussed at the company’s GraphConnect conference in New York, where enterprise users described graph database implementations.

Graph databases hold an advantage in an emerging style of data connectedness: they can readily map and remap relationships between data points in ways that relational databases may struggle to match.

The graph approach gained momentum earlier this year, as Microsoft added graph database support to its Azure Cosmos multimodel database. Various graph database capabilities are also offered by Cambridge Semantics, DataStax, Franz Inc., IBM, MarkLogic, Oracle and others.

Scripps channeling data

Graphs can be useful in managing digital assets, according to Brant Boehmann, senior software engineer at Scripps Networks Interactive. Boehmann, together with Scripps software development manager Chris Goodacre, described experiences with the Neo4j graph database at GraphConnect.

Scripps’ digital assets include content from HGTV, Food Network, Travel Channel and other cable properties. Asset attributes can range from cooking-segment recipes to digital-rights royalties for series and beyond.

Boehmann and Goodacre have built a new system for managing assets that are increasingly repackaged and repurposed across new media types, some of them delivered on demand.

“Our need is to keep track of a large number of relationships — about 45 million-plus,” Boehmann said in an interview. He said Neo4j performance has scaled efficiently over three years, as the number of data points in the database has expanded.

Moreover, the Neo4j graph database supports a high level of abstraction. Boehmann called the Neo4j graph database models easily describable and “whiteboard-friendly.”

“You can think in terms of a series and its seasons. And you can, in effect, draw lines to make connections,” he said. That is in comparison to relational approaches where data models are based on tables and are connected via relational join operations.
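Boehmann's whiteboard analogy translates almost directly into a property-graph sketch: nodes are assets, and the "lines" are named relationships stored on the nodes themselves, so following a connection needs no join. A toy illustration (the asset names and relationship types below are invented for this example, not Scripps data):

```python
# A toy property graph: each node id maps to a list of
# (relationship, target) pairs -- the lines drawn on the whiteboard.
graph = {
    "series:show_x": [("HAS_SEASON", "season:s1"),
                      ("HAS_SEASON", "season:s2")],
    "season:s1": [("HAS_EPISODE", "episode:s1e1")],
    "season:s2": [],
    "episode:s1e1": [("USES_RECIPE", "recipe:r42")],
}

def follow(node, relationship):
    """Traverse one named relationship outward from a node.

    Roughly what a MATCH (n)-[:REL]->(m) pattern expresses in Cypher:
    connections live on the node, so no table join is required.
    """
    return [target for rel, target in graph.get(node, [])
            if rel == relationship]

print(follow("series:show_x", "HAS_SEASON"))  # → ['season:s1', 'season:s2']
```

In a relational schema the same question would typically mean joining a series table to a seasons table on a foreign key; here the relationship is the data.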

Up from LDAP

The Scripps graph-based asset manager grew out of earlier work on a Lightweight Directory Access Protocol (LDAP) repository, Goodacre said, and he agreed it could be placed in the category of master data management systems.

Recommendation engines, social media applications and customer relationship models are among the other areas where graph databases have gained acceptance. While graph data engines broadly lag relational stalwarts in adoption, they appear to be on the rise.

As described in a recent Forrester Research report on graphs, 51% of global data and analytics technology decision-makers have implemented, are implementing, are upgrading or are expanding graph databases in their organizations.

A major benefit is the graph’s focus on connected data, which helps organizations ask more complex questions without having to do more complex programming, according to Noel Yuhanna, the report’s author and a Forrester analyst.

An important differentiator that eases the programming burden is the inherent graphical nature of these databases, in comparison to relational databases that organize the world according to table structures, Yuhanna wrote.

Graph databases, although a relatively new technology, are finding favor among technology decision-makers, according to Forrester.

Connect the data

A continuous string of updates has made Neo4j more a platform than just a database, according to Philip Rathle, vice president of products at Neo4j. Improvements have included a desktop developer console, graph analytics and data integration tools, he said.

Rathle said continued enhancements reflect the fact that users have different needs at different points in the data lifecycle. He cited new Neo4j support for Apache Spark in this regard.

He said the company has recently created a graph mapping layer that integrates the Neo4j graph database with the Spark Catalyst SQL optimizer. Users will be able to traverse large data volumes on Spark as graphs, he said. Key to the effort is connecting Cypher, a declarative property graph query language, with Spark.
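To give a sense of what the declarative style buys, here is the shape of a typical Cypher query (the labels and properties are hypothetical, invented for illustration rather than taken from Neo4j's toolkit). The same pattern, run through the mapping layer Rathle describes, would match against graph-shaped data held in Spark rather than in a Neo4j instance:

```cypher
// Find every season of a series and count its episodes.
// The (n)-[:REL]->(m) patterns declare the shape of the subgraph
// to match; the engine decides how to traverse the data.
MATCH (s:Series {title: "Show X"})-[:HAS_SEASON]->(season)
      -[:HAS_EPISODE]->(ep)
RETURN season.number, count(ep) AS episodes
ORDER BY season.number
```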

According to Rathle, Neo4j is donating an early version of a Cypher for Apache Spark toolkit to the openCypher project. Rathle said Cypher for Apache Spark is now available in alpha stage under an Apache 2.0 license.

The openCypher project began life in 2015, when Neo4j sought to open up the language, somewhat at the behest of users like those at Scripps, where Cypher as a proprietary language caused unease.

“We thought the database was a best fit, but were concerned that the language was closed and proprietary,” Goodacre said. He said he and others had conversations with Neo4j on the topic, and the company has “taken steps in the right direction” with openCypher.

Goodacre’s colleague Boehmann said the graph query language played an important part in reducing overall query programming complexity for graph implementations, compared with relational database alternatives.