
No-code and low-code tools seek ways to stand out in a crowd

As market demand for enterprise application developers continues to surge, no-code and low-code vendors seek ways to stand out from one another in an effort to lure professional and citizen developers.

For instance, last week’s Spark release of Skuid’s eponymous drag-and-drop application creation system adds on-premises, private data integration, a new Design System Studio, and new core components for tasks such as creation of buttons, forms, charts and tables.

A suite of prebuilt application templates aims to help users build and customize bespoke applications for uses such as sales force automation, recruitment and applicant tracking, HR management and online learning.

And a native mobile capability enables developers to take the apps they’ve built with Skuid and deploy them on mobile devices with native functionality for iOS and Android.


“We’re seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform,” said Ray Wang, an analyst at Constellation Research in San Francisco.


“People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly.”

Skuid — named for an acronym, Scalable Kit for User Interface Design — was originally an education software provider, but users’ requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications. That became the Skuid platform and the company pivoted to no-code, said Mike Duensing, CTO of Skuid in Chattanooga, Tenn.

Quick Base adds Kanban reports

Quick Base Inc., in Cambridge, Mass., recently added support for Kanban reports to its no-code platform. Kanban is a scheduling system for lean and just-in-time manufacturing. The system also provides a framework for Agile development practices, so software teams can visually track and balance project demands with available capacity and ease system-level bottlenecks.

The Quick Base Kanban reports enable development teams to see where work is in process. They also let end users interact with their work and update its status, said Mark Field, Quick Base director of products.

Users drag and drop progress cards between columns to indicate how much work has been completed on software delivery tasks to date. This lets them track project tasks through stages or priority levels, opportunities through sales stages, application features through development stages, and team members and their task assignments, Field said.

Datatrend Technologies, an IT services provider in Minnetonka, Minn., uses Quick Base to build the apps that manage technology rollouts for its customers, and finds the Kanban reports handy.


“Quick Base manages that whole process from intake to invoicing, where we interface with our ERP system,” said Darla Nutter, senior solutions architect at Datatrend.

Previously, Datatrend tracked work in progress through four stages (plan, execute, complete and invoice) in a table report with no visual representation. With the Kanban reports, users can see what they have to do at any given stage and prioritize work accordingly, she said.

“You can drag and drop tasks to different columns and it automatically updates the stage for you,” she said.

Like the Quick Base no-code platform, the Kanban reports require no coding or programming experience. Datatrend’s typical Quick Base users are project managers and business analysts, Nutter said.
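The behavior Nutter describes, where dragging a card to a new column automatically updates the record's stage, can be modeled with a short sketch. The four stage names come from Datatrend's process as described above; the class and its structure are invented for illustration and are not Quick Base's actual implementation:

```python
# Toy model of a Kanban board in which moving a card between columns
# automatically updates the underlying record's stage, as described
# above. The four stages are Datatrend's; the class is illustrative.

STAGES = ["plan", "execute", "complete", "invoice"]

class KanbanBoard:
    def __init__(self):
        self.columns = {stage: [] for stage in STAGES}
        self.stage_of = {}  # card -> current stage

    def add(self, card: str, stage: str = "plan") -> None:
        self.columns[stage].append(card)
        self.stage_of[card] = stage

    def move(self, card: str, to_stage: str) -> None:
        """Dragging a card to a column updates its stage in one step."""
        self.columns[self.stage_of[card]].remove(card)
        self.columns[to_stage].append(card)
        self.stage_of[card] = to_stage  # no separate field edit needed

board = KanbanBoard()
board.add("site-42 rollout")
board.move("site-42 rollout", "execute")
print(board.stage_of["site-42 rollout"])  # the stage followed the drag
```

The point of the design is that the drag gesture and the status update are a single operation, so the board and the record can never disagree.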

For most companies, however, the issue with no-code and low-code systems is how fast users can learn a platform and then expand upon it, Constellation Research's Wang said.

“A lot of low-code/no-code platforms allow you to get on and build an app but then if you want to take it further, you’ll see users wanting to move to something else,” Wang said.

OutSystems sees AI as the future

OutSystems plans to add advanced artificial intelligence features to its products to increase developer productivity, said Mike Hughes, director of product marketing at OutSystems in Boston.

“We think AI can help us by suggesting next steps and anticipating what developers will be doing next as they build applications,” Hughes said.

OutSystems uses AI in its own tool set, as well as links to publicly available AI services, to help organizations build AI-based products. To facilitate this, the company launched Project Turing and opened an AI Center of Excellence in Lisbon, Portugal, both named for Alan Turing, who is widely considered a father of AI.

The company also will commit 20% of its R&D budget to AI research and partner with industry leaders and universities for research in AI and machine learning.

New Cohesity backup adds Helios SaaS management

Cohesity Inc. today released Helios, a SaaS application that works in conjunction with Cohesity DataPlatform to give IT administrators greater control over their consolidated secondary data.

Helios allows Cohesity backup customers to manage data under control of DataPlatform software, whether it is on premises or in public clouds. Helios enables customers to view and search secondary data, make global policy changes and perform upgrades through a single dashboard.

Helios requires an extra license separate from DataPlatform, based on the amount of data under management. Positioned as an add-on for DataPlatform users, it’s designed to enhance secondary data management with a slew of features, including some that utilize predictive analytics and machine learning.

With Helios, Cohesity is following the lead of rival Rubrik Inc., which launched its Polaris SaaS-based management last April. Cohesity and Rubrik sell scale-out, node-based secondary storage platforms that manage data on premises and in the cloud.

Raj Dutt, product marketing director at Cohesity, said one of Helios’ core goals is to simplify multicluster administration. The Cohesity backup SmartAssist feature suggests resource allocations across the environment based on service-level agreements set by the administrator. Using machine learning, Helios examines how an infrastructure is being used and suggests when to add resources or make adjustments. Helios will also allow its users to make peer comparisons by sharing anonymized metadata from other Cohesity customers.

Other features include global hardware health monitoring, pattern and password detection, video compression and machine learning to analyze how changes will impact clusters before they are rolled out.

Screenshot: Helios brings multicluster management under one dashboard.

Dutt said the difference between Helios and competitors, such as Dell EMC CloudIQ analytics and Rubrik Polaris, is “almost none of the [others] offer active management on a global scale.”

Although Helios is generally available today, Dutt said not all of its features will be ready to go right out of the gate. They will be rolled out as part of monthly releases of the core Cohesity backup software, with the expectation that all of the planned capabilities will be available by the end of 2018.

Cohesity backup checks the SaaS boxes

Edwin Yuen, senior analyst at Enterprise Strategy Group, said Helios fills the major requirements for SaaS-based management across clusters and clouds.


“Within systems management, you need to have three things,” he said. “One is inventory — you need to be able to know what you have out there and go and find it. No. 2, you need to have status — you need to know what’s going on with them. And three, you need to have actions — you need to actually be able to do something about them. A lot of tools don’t actually do that. … Helios does.”

Yuen also pointed out that many vendors are moving from simply selling their software licenses to SaaS-based, subscription models. “It’s often consumption-based, it’s a living service, you’ll get data updates so you’re not always waiting for another version,” Yuen said. “If you are going to manage across multiple destinations, that model does make a lot of sense.”

As more products offering assisted integration and optimization, like the Cohesity backup software, emerge in the multi-cloud management space, Yuen speculated there will be growing demand for cross-platform, vendor-agnostic products. Helios can see and manage the metadata hosted on the Microsoft Azure, Google and Amazon Web Services public clouds, as long as customers are running Cohesity DataPlatform.

“They’re experts in their storage and they’re adding a management layer on top of it,” Yuen said. “The question is are you going to be an expert in the management layer so that it doesn’t matter what storage you have? I think there’s going to be demand for this type of solution across the board for managing data.”

HYCU moves beyond Nutanix backup with Google Cloud support

HYCU is now well-versed in the Google cloud.

HYCU, which began with a backup application built specifically for hyper-converged vendor Nutanix, today launched a service to back up data stored on Google Cloud Platform (GCP).

HYCU sprang up in June 2017 as an application sold by Comtrade that offered native support for Nutanix AHV and for Nutanix customers using VMware ESX hypervisors. The Comtrade Group spun off HYCU into a separate company in March 2018, with both the new company and the product under the HYCU brand.

Today, HYCU for Google became available through the GCP Marketplace. It is an independent product from HYCU for Nutanix, but there is a Nutanix angle to the Google backup: GCP is Nutanix's partner for the hyper-converged vendor's Xi public cloud services. HYCU CEO Simon Taylor said his team began working on Google backup around the time Nutanix revealed plans for Xi in mid-2017. HYCU beat Nutanix out of the gate, launching its Google service before any Nutanix Xi Cloud Services became generally available.

“We believe Nutanix is the future of the data center, and we place our bets on them,” Taylor said. “Everyone’s been asking us, ‘Beyond Nutanix, where do you go from here?’ We started thinking of the concept of multi-cloud. We see people running fixed workloads on-prem, and if it’s dynamic, they’ll probably put it on a public cloud. And Google is the public cloud that’s near and dear to Nutanix’s heart.”

HYCU backup for GCP is integrated with Google Cloud Identity and Access Management, installs without agents and backs up data to Google Cloud Storage buckets. The HYCU service uses native GCP snapshots for backup and recovery.

Subbiah Sundaram, vice president of products at HYCU, based in Boston, said HYCU provides application- and clone-consistent backups, and it allows single-file recovery. Sundaram said because HYCU takes control of the snapshot, data transfers do not affect production systems.

Sundaram said HYCU for GCP was built for Google admins, rather than typical backup admins.

“When customers use the cloud, they think of it as buying a service, not running software. And that’s the experience we want them to have,” Sundaram said. “It’s completely managed by us. We create and provision the backup targets on Google and manage it for you.”

HYCU for GCP uses only GCP to stage backups, backing up data in different Google Cloud zones. Sundaram said HYCU may add support for other clouds or on-premises targets in future releases, but the most common request from customers so far is to back up to other GCP zones.

HYCU charges for data protected, rather than total storage allocated for backup. For example, a customer allocating 100 GB to a virtual machine with 20 GB of data protected is charged for the 20 GB. List price for 100 GB of consumed virtual machine capacity starts at $12 per month, or 12 cents per gigabyte, for data backed up every 24 hours. The cost increases for more frequent backups. Customers are billed through the GCP Marketplace.
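The consumption-based billing described above reduces to simple arithmetic. In this sketch, the $0.12-per-gigabyte monthly list rate for 24-hour backups comes from the pricing stated in the article; the function name and structure are illustrative:

```python
# Sketch of HYCU's consumption-based billing as described above:
# customers are charged for data protected, not for storage allocated.
# The $0.12/GB monthly rate is the stated list price for 24-hour
# backups; more frequent backups cost more (multiplier not given).

BASE_RATE_PER_GB = 0.12  # dollars per GB per month, 24-hour backups

def monthly_cost(allocated_gb: float, protected_gb: float) -> float:
    """Billing ignores allocation and counts only protected data."""
    return round(protected_gb * BASE_RATE_PER_GB, 2)

# A VM with 100 GB allocated but 20 GB of protected data pays for
# the 20 GB only: 20 * $0.12 = $2.40 per month.
print(monthly_cost(allocated_gb=100, protected_gb=20))
```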

Industry analysts pointed out HYCU is a brand-new company in name only. Its Comtrade legacy gives HYCU 20-plus years of experience in data protection and monitoring, over 1,000 customers and hundreds of engineers. That can allow it to move faster than typical startups.

“They’re a startup that already has a ton of experience,” said Christophe Bertrand, senior data protection analyst for Enterprise Strategy Group in Milford, Mass. “When you’re a small organization, you have to make strategic calls on what to do next. So, now, they’re getting into Google Cloud, which is evolving to be more enterprise-friendly. Clearly, backup and recovery is one of the functions you need to get right for the enterprise. Combined with the way Nutanix supports Google, it’s a smart move for HYCU.”

Steven Hill, senior storage analyst for 451 Research, agreed that GCP support was a logical step for Nutanix-friendly HYCU.


“Nutanix partnering with Google is a good hybrid cloud play. So, theoretically, what you’re running on-prem runs exactly the same once it’s on Google Cloud,” he said. “HYCU comes in and says, ‘We can do data protection and backup and workload protection that just fits seamlessly in with all of this. Whether you’re on Google Cloud or whether you’re on Nutanix via AHV or Nutanix via ESX, it’s all the same to us.'”

Taylor said HYCU is positioned well for when Nutanix makes Xi available. Nutanix has said its first Xi Cloud Service will be disaster recovery. “We will be delivering for Xi,” Taylor said. “You can imagine that will require a much closer bridge between these two different products. Once Xi is available, we’ll be fast on their heels with a product that will support both purpose-built backup for Nutanix and purpose-built recovery for Xi in a highly integrated fashion.”

Although HYCU for GCP can protect data for non-Nutanix users, Taylor said HYCU remains as dedicated to building a business around Nutanix as ever. He emphasized that HYCU develops its software independently from Nutanix and Google, although he is determined to have a good working relationship with both.

“We believe data protection should be an extension of the platform it serves, not a stand-alone platform,” he said.

Still, Taylor flatly denied his goal is for HYCU to become part of Nutanix.

“Right now, we want to build a brand,” he said. “This is about building a business that matters, not about a quick exit.”

Data integration tools: SnapLogic update rewards solution selling

SnapLogic, a provider of application and data integration tools, has revamped its channel partner program, with an emphasis on solution selling.

The Partner Connect Program now features free sales and technical training, new deal referrals and reseller discounts, and a partner portal. SnapLogic said it will also offer incentives for creating and delivering offerings that combine SnapLogic with its technology partners. SnapLogic technology partners include Workday, Snowflake, Salesforce and Reltio.

“One of the big, overarching goals of [the SnapLogic partner program] refresh is … to focus much more on a solutions approach where we have our go-to-market partners building repeatable solutions based on SnapLogic and our technology partners,” said Rich Link, vice president of global channel sales and strategic alliances at SnapLogic, based in San Mateo, Calif.

SnapLogic described the application and data integration market as a $12 billion opportunity. Link added it is “a much different market than it has ever been before.”

“The interesting thing about our space is that, inherently, we are involved in multivendor projects,” Link said.

He also noted that integration, because of its role in digital transformation, is now receiving more attention. Partners can tap SnapLogic’s app and data integration tools to target customer projects involving data warehousing, data lakes, master data management, human capital management and customer relationship management.

SnapLogic currently works with about 40 channel partners, with about 10 to 15 that are “very active,” Link said. The company has been expanding the Partner Connect Program across Europe, the Middle East and Africa, and it recently added about 11 partners in that market.

SnapLogic partners can expect to see a more formalized market development funds (MDF) program this year. “Today, we are doing [MDF] a little more ad hoc, and we want to formalize that by the end of the year,” Link said.

AllCloud enters North American market

AllCloud, a cloud solutions provider that launched in Israel in 2014, has entered the North American market with the acquisition of Figur8 Cloud Solutions.

Figur8 is a Salesforce partner, with operations in San Francisco, Toronto, New York City and Vancouver, B.C. AllCloud delivers its cloud solutions in the Salesforce, Amazon Web Services, Google Cloud Platform and NetSuite environments.

Eran Gil, CEO at AllCloud, said consolidation has “created a big void in the market for global boutique cloud solutions providers.”

Gil has firsthand consolidation experience. He co-founded Cloud Sherpas, a cloud consulting services provider that was acquired by Accenture in 2015. Gil pointed to IBM’s purchase of Bluewolf and Wipro’s acquisition of Appirio as other examples of cloud solutions and SaaS consulting shakeout.

“Having seen that [consolidation] and also having seen the significant growth rate coming from the public cloud space, from the vendors we are very close to, we believe there is an even bigger opportunity than in the past,” Gil said.

Gil’s latest cloud consulting venture differs in some ways from the Cloud Sherpas experience. While that company focused on the SaaS layer, AllCloud focuses on IaaS and PaaS, in addition to SaaS, he noted.

“The big opportunity … is providing clients a more holistic solutions approach,” Gil said.

Other news

  • HYCU bolstered its partner program for selling its Data Protection for Nutanix software. The program now features a simplified deal registration program, co-branded marketing tools and campaign support, and a new partner portal, the vendor said.
  • FileCloud, an enterprise file sync-and-share (EFSS) vendor, unveiled a channel program for managed services providers and resellers. The program provides support, such as special partner pricing, for offering the vendor’s EFSS product, FileCloud Online.
  • Oblong Industries, a collaboration technology vendor, inked a distribution deal with ScanSource. Under the agreement, ScanSource will distribute Oblong’s immersive collaboration platform, Mezzanine, bundled with LG Electronics’ commercial displays and Cisco Webex Room Kit Series.
  • JetStream Software said its cloud migration tool, JetStream Migrate, is now generally available. JetStream Migrate is designed for cloud and managed services providers that target large enterprises, the company said.
  • OneLogin, an identity and access management vendor, named Matt Hurley as its vice president of global channels, strategic alliances and professional services. Hurley joins OneLogin from Juniper Networks, where he held numerous channel-related roles.

Avi load balancer gets tighter with Cisco products

Avi Networks, a maker of software to improve application performance and security, has introduced version 18.1 of its Vantage Platform, which provides better integration with several Cisco products.

The upgrade offers “enhanced integrations” with Cisco AppDynamics, Tetration and its software-defined networking architecture, called Application Centric Infrastructure (ACI), according to Avi, based in Santa Clara, Calif. The ACI integration simplifies the process of placing application services, such as the Avi load balancer, on ACI networks.

Avi, which doesn't sell physical hardware, provides software that companies can deploy on premises or in the cloud. The Vantage Platform offers elastic load balancing and web application security on a per-application basis. The company also makes an application delivery controller that provides Layer 4-7 services to containerized applications running in cloud environments. The Avi load balancer and other services compete with products from F5 and Citrix.

The latest version of Vantage Platform provides integration between the Avi Controller and Cisco’s Application Policy Infrastructure Controller. Avi connects the controllers through REST APIs.

The Vantage upgrade also delivers telemetry from its Layer 4-7 services to the AppDynamics application performance management suite and the Tetration network analytics engine for the data center.

In June, Cisco Investments joined a $60 million round of funding for Avi, which brought its total funding to $115 million. Other investors included DAG Ventures, Greylock Partners, Lightspeed Venture Partners and Menlo Ventures.

LiveAction intros LiveNX Server Appliance

LiveAction plans to release on Aug. 1 its LiveNX Server Appliance, a network performance monitor developed with the help of Savvius, a packet monitor maker LiveAction acquired in June.

The latest product provides LiveAction customers with a hardware option for deploying the company’s technology. Previously, deployment options were limited to a public cloud or a virtualized server within a data center.

Savvius’ “extensive hardware tuning experience” made it possible for LiveAction to deliver the LiveNX hardware quickly, the company said. LiveAction plans to release other acquisition-related products in the future.

Flow monitoring is a core feature in LiveNX, which taps into the NetFlow data-collection component built into routers and switches from Cisco and other manufacturers. The software uses the data to determine packet loss, delay and round-trip time, while also showing network administrators how well the network is delivering application services.

Analysts expect LiveAction, based in Palo Alto, Calif., to combine its network performance monitor with Savvius’ packet monitor into a single product. Today, companies often buy those types of technologies separately, using a performance monitor for spotting problems and a packet monitor for performing in-depth analyses to pinpoint causes.

Corvil launches Intelligence Hub

Network analytics vendor Corvil plans to release this summer Intelligence Hub, a product designed to deliver intelligence to business operations, as well as network performance data to IT departments.

Intelligence Hub applies machine learning and predictive analytics to packet data to spot changes in business activity related to the total number of transactions, individual orders and products, conversion rates and response times. The software sends change alerts to business teams.

For network operators, the software provides many of the features contained in Corvil’s appliances, such as identifying and alerting on network anomalies, including packet loss, a dip in network performance or an increase in latency.

In general, Corvil products capture, timestamp and forward network packets to a separate capture appliance, where they are analyzed. Corvil can provide the hardware, or, in the case of Intelligence Hub, the software can run on a third-party device.

Corvil products can send customized streams of network data to big data platforms, such as Elasticsearch, Hadoop, MongoDB and Splunk, so IT departments can draw more targeted information from the tools.

Corvil, headquartered in Dublin, competes with ExtraHop, ThousandEyes, Riverbed and NetScout.

New WorkJam app aims at preventing nurse burnout

At a time when nurse burnout is reaching epic proportions, digital workplace application maker WorkJam is hoping to make a difference.

Though the tool is used by hourly and shift workers in a wide variety of industries, WorkJam recently hired a former nurse manager to be its healthcare industry principal and to help the company work toward preventing nurse burnout, said Will Eadie, global vice president of sales and strategy at WorkJam.

“Anything that’s shift-driven work requires a lot of communication with complex schedule changes and the knowledge to do the jobs,” he said. “The stakes are raised in healthcare because we’re talking about saving people’s lives.”

Surveys show high levels of burnout

That's why there's a need for preventing nurse burnout, as well as strategies to deal with staff shortages and challenges with hiring. Separate surveys by Kronos and CareerBuilder in May 2017 showed similarly high levels of burnout: Sixty-three percent of nurses surveyed by Kronos reported it, while the CareerBuilder survey put the number at 70%. The CareerBuilder survey also pointed to the difficulty of hiring nurses, noting the same jobs were posted as many as 10 times before being filled. A 2010 study published by the National Institutes of Health takes the idea of preventing nurse burnout further, tying it specifically to patient dissatisfaction. The "Nurse burnout and patient satisfaction" study points to improved communications and adequate staffing as two areas that could help reduce the risk of nurse burnout and patient dissatisfaction.

But it’s often not that straightforward. The healthcare field in general is challenging because it has so many moving parts, said John Sumser, principal analyst at HRExaminer. “Healthcare has an engagement problem, but that has partly to do with the status of systems inside of healthcare,” he said. “It’s a system based on rank, and it’s not like other work environments because doctors aren’t going to use the same systems nurses or interns do. Technology really can’t change the social structure.”


So the WorkJam team went directly to nurses to find out what might help prevent burnout, Eadie said. At the top of the list was a core piece of workforce management: control of the work schedule. Instead of requesting a day off in a log book or chatting about shifts with a manager, nurses can log in to the WorkJam app and change, accept or swap shifts. "They can do this in real time, and there's no chance of miscommunication," Eadie said.

Communication, in general, is another hot-button area for nurses, Eadie added. As is true for many hourly or shift workers, most nurses don't have "work" email accounts and in some cases have been forced to create Facebook or WhatsApp groups in order to share vital communications. WorkJam offers push notifications in real time to only the staff who need them, Eadie said, eliminating the need for social media and its potential privacy concerns. "Sometimes something as simple as a message to 'park in the back lot and come in the side door' can go a long way to making nurses feel like they're an important part of the team," he said, and that's key to preventing nurse burnout.

WorkJam integrates with HR systems

The WorkJam app starts by integrating with the existing HR systems for scheduling and incorporates the master employee list. Employees are offered the option to download the app to an Apple or Android phone, access it on the web or log in to hospital-provided kiosks.

Although the heart of the application is scheduling and communication, Eadie said some hospitals have gone further in the quest for preventing nurse burnout. “Because it can integrate with the scheduling system, it can tell an employee ‘Congratulations, you’ve been on time 20 days in a row so you get the On Time Hero badge,'” he said. Employee engagement and gamification are natural extensions, he added, but so is the addition of education and training. If a hospital needs to do a quick review with nurses about, for example, glove use before flu season, the app can be locked until that training is completed, he said. Making those important communications easier to digest helps with preventing nurse burnout, and could ultimately lead to online training mini-courses that staff could do on their own time and get paid for, Eadie said.

Ultimately, Eadie thinks WorkJam can be used in large hospitals to allow staff to pick up shifts in areas they might not normally work in, but are qualified to do so. “This can be used to facilitate moonlighting, or the ‘Uber-ization’ or crowdsourcing of work. Hospitals are already paying for staff. This will make it possible for everyone to be more flexible.”

MapR Data Platform gets object tiering and S3 support

MapR Technologies updated its Data Platform, adding support for Amazon’s S3 application programming interface and automated tiering to cloud-based object storage.

MapR is known for its distribution of open source Apache Hadoop software. It contributes to related open source projects designed to handle advanced analytics for large data sets across computer clusters. The 6.1 release of the MapR Data Platform — formerly MapR Converged Data Platform — adds storage management features for artificial intelligence applications that require real-time analytics.

MapR Data Platform 6.1, scheduled to become generally available this quarter, features policy-based data placement across performance, capacity and archive tiers. It also added fast-ingest erasure coding for high-capacity storage on premises and in public clouds, an installer option to enable security by default and volume-based encryption of data at rest.

Providing real-time analytics for AI requires coordination between on-premises, cloud and edge storage, said Jack Norris, senior vice president of data and applications at MapR, which is based in Santa Clara, Calif.

“What we’re seeing increasingly is that the time frame for AI is decreasing. It’s not enough to understand what happened in the business. It’s really, ‘How do you impact the business as it’s happening?'” Norris said.

MapR storage additions

MapR Data Platform 6.1 expands storage features by adding policy-based tiering to automatically move data. It now supports a performance tier of SSDs or SAS HDDs; a capacity tier of high-density HDDs; and an archival tier of third-party, S3-compliant object storage. Customers supply the commodity hardware.

The storage management features follow the 2017 addition of MapR-XD software to MapR Converged Data Platform. MapR-XD is based on the company’s distributed file system that was released in 2010. It includes a global namespace that can span on-premises and public cloud environments and support tiers of hot, warm and cold storage.

MapR writes all data to the performance tier and then determines the most appropriate way to store it, Norris said. Its tiering is independent of data format. Norris said the system could write NFS and read S3, or the reverse. For instance, MapR can place and store data as an object on one or more clouds and later pull back the data and restore it as a file transparently to the user.

“We do constant management of the data to account for node failure, disk failure, rebalancing of the cluster and eliminating hotspots,” he said.

New MapR release adds file stubs

The MapR software handles data transformations between file and object formats in the background. With past releases, the MapR system had to go through an intermediate step to shift data between file- and object-based storage. With 6.1, MapR retains file stubs to represent data that the system has shifted to cloud-based object storage. The stub stores the location of the data.

“When you need to access that data, we’re just pulling back an individual file,” Norris said. “You don’t want to pull back a whole directory or a whole volume. If you look at cost economics in the cloud, it’s expensive, because you get charged by data movement.”
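The stub mechanism Norris describes can be modeled with a toy sketch. The class names, fields and tiering logic below are invented for illustration; they are not MapR's actual on-disk format or API:

```python
# Toy illustration of the file-stub idea described above: when data is
# tiered out to object storage, a small stub stays behind recording
# where the object lives, so a later read can fetch back just that one
# file rather than a whole directory or volume. Names and structure
# here are invented, not MapR's actual implementation.
from dataclasses import dataclass

@dataclass
class FileStub:
    path: str          # original file path in the namespace
    object_store: str  # which S3-compliant target holds the data
    object_key: str    # key of the tiered object

class ToyTieredStore:
    def __init__(self):
        self.hot = {}    # path -> bytes on the performance tier
        self.cloud = {}  # object key -> bytes in "object storage"
        self.stubs = {}  # path -> FileStub left behind after tiering

    def tier_out(self, path: str) -> None:
        """Move a file's data to the object tier, leaving a stub."""
        key = path.lstrip("/").replace("/", "%2F")
        self.cloud[key] = self.hot.pop(path)
        self.stubs[path] = FileStub(path, "s3-archive", key)

    def read(self, path: str) -> bytes:
        """Reads resolve stubs transparently, pulling back one file."""
        if path in self.hot:
            return self.hot[path]
        stub = self.stubs[path]
        return self.cloud[stub.object_key]

store = ToyTieredStore()
store.hot["/data/report.csv"] = b"a,b\n1,2\n"
store.tier_out("/data/report.csv")
print(store.read("/data/report.csv"))  # returned transparently
```

The design choice the stub enables is per-file granularity on recall, which matters in the cloud because, as Norris notes, providers charge for data movement.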

The newly added support for the Amazon S3 API includes all core capabilities, such as the concept of buckets and access-control lists, Norris said.

MapR’s new erasure coding spreads pieces of data across disks. Norris said the MapR erasure coding preserves snapshots and compression.
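
The underlying principle can be shown with a toy 2+1 scheme using XOR parity: any single lost fragment can be rebuilt from the other two. Production erasure coding (Reed-Solomon, for example) generalizes this to k data fragments plus m parity fragments; this sketch is not MapR's implementation:

```python
def encode(a: bytes, b: bytes) -> bytes:
    """Parity fragment: byte-wise XOR of two equal-length data fragments."""
    return bytes(x ^ y for x, y in zip(a, b))

def recover(survivor: bytes, parity: bytes) -> bytes:
    """Rebuild the lost data fragment from the survivor and the parity."""
    return bytes(x ^ y for x, y in zip(survivor, parity))
```

Storing the two data fragments and the parity fragment on three different disks means any one disk can fail without data loss, at a lower capacity cost than full replication.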

The MapR Data Platform is available in Enterprise Standard and Enterprise Premium editions. The Enterprise Standard offering includes MapR-XD, MapR-Document Database, MapR Event Data Streams and Apache Hadoop, Spark and Drill. The Enterprise Premium software tacks on options such as real-time data integration with MapR Change Data Capture, Orbit Cloud Suite extensions and the ability to add the Data Science Refinery toolkit.

Deviation from Hadoop

Carl Olofson, research vice president for data management software at IDC, said MapR’s file system emulates the Hadoop Distributed File System, but its indexes and update-in-place capabilities set it apart. The challenge for MapR is the potential skepticism of having a “data lake solution that deviates so far from the Hadoop project code,” Olofson said.

“The good news there is that even the other Hadoop vendors are no longer solely focused on Hadoop, so MapR may be on top of an emerging trend,” he wrote.

Policy-based storage tiering is the key new capability in MapR Data Platform 6.1, Olofson said. “As people move data lake technologies to the cloud, they are initially in sticker shock because of the storage costs associated with it,” he said. “The MapR approach not only addresses that, but they say it does it automatically.”

MapR’s competition includes Cloudera, Hortonworks and various open source technologies, according to Mike Matchett, principal IT industry analyst at Small World Big Data. He noted concerns that MapR is “at heart a closed proprietary platform.” But Matchett said he gives MapR an advantage over plain open source in terms of supporting mixed and now operational workloads.

“The theme for MapR in this release is to support big data AI and ML [machine learning] alongside and with business applications,” Matchett wrote in an email.

GPU implementation is about more than deep learning

When you consider a typical GPU implementation, you probably think of some advanced AI application. But that’s not the only place businesses are putting the chips to work.

“[GPUs] are obviously applicable for Google and Facebook and companies doing AI. But for startups like ours that have to justify capital spend in today’s business value, we still want that speed,” said Kyle Hubert, CTO at Simulmedia Inc.

The New York-based advertising technology company is using GPUs to make fairly traditional processes, like data reporting and business intelligence dashboards, work faster. Using a platform from MapD, Simulmedia has built a reporting and data querying tool that lets sales staff and others in the organization visualize how certain television ads are performing and answer any client inquiries as they come in.

Using GPUs for more than deep learning



GPU technology is getting lots of attention today, primarily due to how businesses are using it. The chips power the training underlying some of the most advanced AI use cases, like image recognition, natural language translation and self-driving cars. But, of course, they were originally built to power video game graphics. Their main appeal is speedy processing power. And while that may be crucial for enabling neural networks to churn through millions of training examples, there are also other use cases in which the speed that comes from a GPU implementation is beneficial.

Simulmedia, founded in 2008, helps clients better target advertising on television networks. Initially, the team used spreadsheets to track metrics on how clients’ advertisements performed. But the data was too large — Simulmedia uses a combination of Nielsen and Experian data sets to target ads and assess effectiveness — and the visualization options were too limited.

Reports had to be built by the operations team, and there was little capability to do ad hoc queries. The MapD tool enables sales and product management teams to view data visualization reports and to do their own queries using a graphical interface or through SQL code.
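
The kind of ad hoc query this enables can be illustrated with an in-memory SQLite table. The table and column names are invented for the example; Simulmedia's actual schema is not public:

```python
import sqlite3

# Illustrative table of ad airings, plus the kind of ad hoc aggregate
# a salesperson might run to answer a client inquiry.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE airings
                (network TEXT, spot TEXT, impressions INTEGER)""")
conn.executemany("INSERT INTO airings VALUES (?, ?, ?)", [
    ("ABC", "spring-promo", 120000),
    ("ABC", "spring-promo", 95000),
    ("NBC", "spring-promo", 80000),
])

# "How did the spring-promo spot perform by network?"
rows = conn.execute("""SELECT network, SUM(impressions)
                       FROM airings
                       WHERE spot = 'spring-promo'
                       GROUP BY network
                       ORDER BY network""").fetchall()
```

On a GPU-backed engine such as MapD, the same SQL runs against far larger data sets with interactive response times, which is the point of the deployment.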

Business focus pays off in GPU experience


Some benefits of a GPU implementation focused on a standard business process go beyond simply speeding up that process. Hubert said it also prepares the business to deploy the chips more pervasively and readies it for a more AI-driven future.

He said the process of predicting which ads will perform best during particular time slots and on certain networks is heavy on data science. Simulmedia is looking at adding deep learning to its targeting, and these models will train on GPUs. Hubert said starting with GPUs in a standard business application has helped the team build a solid foundation on which to build out more GPU capability.

“There’s a lot of implicit knowledge that’s required to get GPUs up and running,” he said.

Aside from building institutional knowledge around how GPUs work, starting by applying the chips to more traditional use cases also helps to justify the cost, which can be substantial.

“They’re costly when you say, ‘I want a bunch of GPUs, and I don’t know what kind of results I’m going to get,'” Hubert said. “That’s a lot of capital investment when you don’t know your returns. When you do a dual-track approach, you can say, ‘I can get these GPUs, set them up for business users now, and I have a concrete ability to get immediate gratification. Then, I can carve out some of that to be future-looking.'”

Database DevOps tools bring stateful apps up to modern speed

DevOps shops can say goodbye to a major roadblock in rapid application development.

A year ago, cultural backlash from database administrators (DBAs) and a lack of mature database DevOps tools made stateful applications a hindrance to the rapid, iterative changes made by Agile enterprise developers. But now, enterprises have found both application and infrastructure tools that align databases with fast-moving DevOps pipelines.

“When the marketing department would make strategy changes, our databases couldn’t keep up,” said Matthew Haigh, data architect for U.K.-based babywear retailer Mamas & Papas. “If we got a marketing initiative Thursday evening, on Monday morning, they’d want to know the results. And we struggled to make changes that fast.”

Haigh’s team, which manages a Microsoft Power BI data warehouse for the company, has realigned itself around database DevOps tools from Redgate since 2017. The DBA team now refers to itself as the “DataOps” team, and it uses Microsoft’s Visual Studio Team Services to make as many as 15 to 20 daily changes to the retailer’s data warehouse during business hours.

Redgate’s SQL Monitor was the catalyst to improve collaboration between the company’s developers and DBAs. Haigh gave developers access to the monitoring tool interface and alerts through a Slack channel, so they could immediately see the effect of application changes on the data warehouse. They also use Redgate’s SQL Clone tool to spin up test databases themselves, as needed.

“There’s a major question when you’re starting DevOps: Do you try to change the culture first, or put tools in and hope change happens?” Haigh said. “In our case, the tools have prompted cultural change — not just for our DataOps team and dev teams, but also IT support.”

Database DevOps tools sync schemas

Redgate’s SQL Toolbelt suite is one of several tools enterprises can use to make rapid changes to database schemas while preserving data integrity. Redgate focuses on Microsoft SQL Server, while other vendors, such as Datical and DBmaestro, support a variety of databases, such as Oracle and MySQL. All of these tools track changes to database schemas from application updates and apply those changes more rapidly than traditional database management tools. They also integrate with CI/CD pipelines for automated database updates.
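
The core idea behind these tools can be sketched briefly: every schema change is a versioned, ordered migration, and a tracking table records which migrations have already been applied, so reruns are idempotent. This is a simplification; the commercial tools also handle rollback, change forecasting and rules enforcement:

```python
import sqlite3

# Ordered, versioned schema changes; in practice these come from
# application release branches, not a hard-coded list.
MIGRATIONS = [
    (1, "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any not-yet-applied migrations, recording each one."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    applied = {v for (v,) in conn.execute("SELECT v FROM schema_version")}
    for version, ddl in MIGRATIONS:
        if version not in applied:       # apply each change exactly once
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
```

Because the tracking table makes the process repeatable, the same migration run can be wired into a CI/CD pipeline and executed safely against every environment from test to production.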

Radial Inc., an e-commerce company based in King of Prussia, Pa., and spun out of eBay in 2016, took a little more than two years to establish database DevOps processes with tools from Datical. In that time, the company has trimmed its app development processes that involve Oracle, SQL Server, MySQL and Sybase databases from days down to two or three hours.

“Our legacy apps, at one point, were deployed every two to three months, but we now have 30 to 40 microservices deployed in two-week sprints,” said Devon Siegfried, database architect for Radial. “Each of our microservices has a single purpose and its own data store with its own schema.”

That means Radial, a 7,000-employee multinational company, manages about 300 Oracle databases and about 130 instances of SQL Server. The largest database change log it’s processed through Datical’s tool involved more than 1,300 discrete changes.

“We liked Datical’s support for managing at the discrete-change level and forecasting the impact of changes before deployment,” Siegfried said. “It also has a good rules engine to enforce security and compliance standards.”

Datical’s tool is integrated with the company’s GoCD DevOps pipeline, but DBAs still manually kick off changes to databases in production. Siegfried said he hopes that will change in the next two months, when an update to Datical will allow it to detect finer-grained attributes of objects from legacy databases.

ING Bank Turkey uses Datical competitor DBmaestro to link .NET developers, who check in changes through Microsoft’s Team Foundation Server 2018, to its 20 TB Oracle core banking database. Before the DBmaestro rollout in November 2017, those developers manually tracked schema and script changes through the development and test stages and ensured the right ones deployed to production. DBmaestro now handles those tasks automatically.

“Developers no longer have to create deployment scripts or understand changes preproduction, which was not a safe practice and required more effort,” said Onder Altinkurt, IT product manager for ING Bank Turkey, based in Istanbul. “Now, we’re able to make database changes roughly weekly, with 60 developers in 15 teams and 70 application development pipelines.”

Database DevOps tools abstract away infrastructure headaches

Consistent database schemas and deployment scripts through rapid application changes are an important part of DevOps practices with stateful applications, but there is another side to that coin: infrastructure provisioning.

Stateful application management through containers and container orchestration tools such as Kubernetes is still in its early stages, but persistent container storage tools from Portworx Inc. and data management tools from Delphix have begun to help ease this burden, as well.

GE Digital put Portworx container storage into production to support its Predix platform in 2017, and GE Ventures later invested in the company.


“Previously, we had a DevOps process outlined. But if it ended at making a call to GE IT for a VM and storage provisioning, you give up the progress you made in reducing time to market,” said Abhishek Shukla, managing director at GE Ventures, based in Menlo Park, Calif. “Our DevOps engineering team also didn’t have enough time to call people in IT and do the infrastructure testing — all that had to go on in parallel with application development.”

Portworx allows developers to describe storage requirements such as capacity in code, and then triggers the provisioning at the infrastructure layer through container orchestration tools, such as Mesosphere and Kubernetes. The developer doesn’t have to open a ticket, wait for a storage administrator or understand the physical infrastructure. Portworx can arbitrate and facilitate data management between multiple container clusters, or between VMs and containers. As applications change and state is torn down, there is no clutter to clean up afterward, and Portworx can create snapshots and clone databases quickly for realistic test data sets.
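
The declarative pattern described here can be sketched simply: the developer states what is needed, and a provisioner decides how to satisfy it. The field names and matching logic below are illustrative assumptions, not Portworx's actual specification format:

```python
def provision(spec: dict, backends: list) -> dict:
    """Match a declared storage requirement to the first capable backend."""
    for backend in backends:
        if (backend["free_gb"] >= spec["capacity_gb"]
                and backend["replicas"] >= spec["replicas"]):
            backend["free_gb"] -= spec["capacity_gb"]
            return {"volume": spec["name"], "backend": backend["name"]}
    raise RuntimeError("no backend satisfies the request")
```

The value of the pattern is that the spec travels with the application code, so the orchestrator can provision storage in the same automated pass that schedules the containers.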

Portworx doesn’t necessarily offer the same high-octane performance for databases as bare-metal servers, said Kris Watson, a Portworx partner and co-founder of ComputeStacks, which packages Portworx storage into its Docker-based container orchestration software for service-provider clients.

“You may take a minimal performance hit with software abstraction layers, but rapid iteration and reproducible copies of data are much more important these days than bare-metal performance,” Watson said.

Adding software-based orchestration to database testing processes can drastically speed up app development, as Choice Hotels International discovered when it rolled out Delphix’s test data management software a little more than two years ago.

“Before that, we had never refreshed our test databases. And in the first year with Delphix, we refreshed them four or five times,” said Nick Suwyn, IT leader at the company, based in Rockville, Md. “That has cut down data-related errors in code and allowed for faster testing, because we can spin up a test environment in minutes versus taking all weekend.”

The company hasn’t introduced Delphix to all of its development teams, as it prioritizes a project to rewrite the company’s core reservation system on AWS. But most of the company’s developers have access to self-service test databases whenever they are needed, and Suwyn’s team will link Delphix test databases with the company’s Jenkins CI/CD pipelines, so developers can spin up test databases automatically through the Jenkins interface.

SAP and Accenture collaborate on entitlement management platform

SAP and Accenture are teaming to deliver an intelligent entitlement management application intended to help companies build and deploy new business models.

Entitlement management applications help companies grant, enforce and administer customer access entitlements (usually referred to as authorizations, privileges, access rights or permissions) to data, devices and services, including embedded software applications, from a single platform.

The new SAP Entitlement Management allows organizations to dynamically change individual customer access rights and install renewal automation capabilities in applications, according to SAP. This means they can create new offerings that use flexible pricing structures.

The new platform’s entitlement management and embedded analytics integrate with SAP S/4HANA’s commerce and order management functions, which, according to SAP, can help organizations create new revenue streams and get new products and services to market faster.

Accenture will provide consulting, system development and integration, application implementation, and analytics capabilities to the initiative.

“As high-tech companies rapidly transition from stand-alone products to highly connected platforms, they are under mounting pressure to create and scale new intelligent and digital business models,” said David Sovie, senior managing director of Accenture’s high-tech practice, in a press release. “The solution Accenture is developing with SAP will help enable our clients to pivot to as-a-service business models that are more flexible and can be easily customized.”

SAP and Accenture go on the defense

SAP and Accenture also unveiled a new platform that provides digital transformation technology and services for defense and security organizations.

The digital defense platform is based on S/4HANA, contains advanced analytics capabilities and enables greater use of digital applications by military personnel. It includes simulations and analytics applications intended to help defense and security organizations plan and run operations efficiently and respond quickly to changing operating environments, according to SAP and Accenture.

“This solution gives defense agencies the capabilities to operate in challenging and fast-changing geo-political environments that require an intelligent platform with deployment agility, increased situational awareness and industry-specific capabilities,” said Antti Kolehmainen, Accenture’s managing director of defense business, in a press release.

The platform provides data-driven insights intended to help leaders make better decisions, and it enables cross-enterprise data integration in areas like personnel, military supply chain, equipment maintenance, finances and real estate.

IoT integration will enable defense agencies to connect devices that can collect and exchange data. The digital defense platform technology is available to be deployed on premises or in the cloud, according to the companies.

“The next-generation defense solution will take advantage of the technology capabilities of SAP S/4HANA and Accenture’s deep defense industry knowledge to help defense agencies build and deploy solutions more easily and cost-effectively and at the same time enable the digital transformation in defense,” said Isabella Groegor-Cechowicz, SAP’s global general manager of public services, in a press release.

New application and customer experience tool for SAP environments

AppDynamics (a Cisco company) has unveiled a new application and customer experience monitoring software product for SAP environments.

AppDynamics for SAP provides visibility into SAP applications and customer experiences via code-level insights into customer taps, swipes and clicks, according to AppDynamics. This helps companies understand the performance of SAP applications and databases, as well as the code impact on customers and business applications.


“The modern enterprise is in a challenging position,” said Thomas Wyatt, AppDynamics’ chief strategy officer, in a press release. “To satisfy customer expectations, it needs to meet the demands of an agile, digital business, while also maintaining and operating essential core systems.”

AppDynamics for SAP allows companies to collaborate around business transactions, using a unit of measurement that automatically reveals customers’ interactions with applications. They can then identify and map transactions flowing between each customer-facing application and systems of record, such as SAP ERP or CRM systems that include complex integration layers like SAP Process Integration and SAP Process Orchestration.

AppDynamics for SAP includes ABAP code-level diagnostics and native ABAP agent monitoring that provides insights into SAP environments with code and database performance monitoring, dynamic baselines, and transaction snapshots when performance deviates from the norm. It also includes intelligent alerting to IT based on health rules and baselines that are automatically set for key performance metrics on every business transaction. Intelligent alerting policies integrate with existing enterprise workflow tools, including ServiceNow, PagerDuty and JIRA.
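
Baseline-driven alerting of this kind can be illustrated with a toy rule: flag a response time that deviates from the historical mean by more than a few standard deviations. The window and threshold here are invented for illustration, not AppDynamics' actual health-rule logic:

```python
import statistics

def is_anomalous(history: list, latest: float, n_sigma: float = 3.0) -> bool:
    """True if `latest` deviates from the historical baseline by > n_sigma."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(latest - mean) > n_sigma * stdev
```

Because the baseline is computed from observed behavior rather than a fixed threshold, the same rule adapts as normal performance drifts over time.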

This means that companies can understand dependencies across the entire digital business and baseline, identify, and isolate the root causes of problems before they affect customers. AppDynamics for SAP also helps companies to plan SAP application migrations to the cloud and monitor user experiences post-migration, according to AppDynamics.