Workforce management and HR software vendor Kronos this week introduced Kronos InTouch DX, a time clock offering features including individualized welcome screens, multilanguage support, biometric authentication and integration with Workforce Dimensions.
The new time clock is aimed at providing ease of use and more personalization for employees.
“By adding consumer-grade personalization with enterprise-level intelligence, Kronos InTouch DX surfaces the most important updates first, like whether a time-off request has been approved or a missed punch needs to be resolved,” said Bill Bartow, vice president of global product management at Kronos.
InTouch DX works with Workforce Dimensions, Kronos’ workforce management suite. When a manager updates the schedule, employees see the change instantly on the InTouch DX; when employees request time off through the device, managers are notified in Workforce Dimensions, according to the company.
Workforce Dimensions is mobile-native and accessible on smartphones and tablets.
Other features of InTouch DX include:
Smart Landing: Provides a personal welcome screen alerting users to unread messages, time-off approvals or requests, shift swaps and schedule updates.
Individual Mode: Provides one-click access to a user’s most frequent self-service tasks such as viewing their schedule, checking their accruals bank or transferring job codes.
My Time: Combines an individual’s timecard and weekly schedule, providing an overall view so that employees can compare their punches to scheduled hours to avoid errors.
Multilanguage support: Available for Danish, Dutch, English, French (both Canadian and European), German, Hindi, Italian, Japanese, Korean, Polish, Portuguese, Spanish, and Simplified and Traditional Chinese.
Optional biometric authentication: Available as an option for an extra layer of security or in place of a PIN or a badge. The InTouch DX supports major employee ID badge formats, as well as PINs and employee ID numbers.
Date and time display: Features an always-on date and time display on screen.
Capacitive touchscreen: Utilizes capacitive technology used in consumer electronic devices to provide precision and reliability.
“Time clocks are being jolted in the front of workers’ visibility with new platform capabilities that surpass the traditional time clock hidden somewhere in a corner. Biometrics, especially facial recognition, are key to accelerate and validate time punches,” said Holger Mueller, vice president and principal analyst at Constellation Research.
When it comes to purchasing a product like this, Mueller said organizations should look into a software platform. “[Enterprises] need to get their information and processes on it, it needs to be resilient, sturdy, work without power, work without connectivity and gracefully reconnect when possible,” he said.
Other vendors in the human capital management space include Workday, Paycor and WorkForce Software. Workday’s time-tracking and attendance features work on mobile devices and provide real-time analytics to aid managers’ decisions. Paycor’s Time and Attendance tool offers a mobile punching feature that can verify punch locations and enable administrators to set location maps to ensure employees punch in at or near the correct work locations. WorkForce’s Time and Attendance tool automates pay rules for hourly, salaried or contingent workforces.
Dell EMC today added predictive analytics and network management to its VxRail hyper-converged infrastructure family while expanding NVMe support for SAP HANA and AI workloads.
Dell EMC VxRail appliances combine Dell PowerEdge servers with Dell-owned VMware’s vSAN hyper-converged infrastructure (HCI) software. The latest release of Dell’s flagship HCI platform includes two new all-NVMe appliance configurations, plus the VxRail Analytic Consulting Engine (ACE) and support for SmartFabric Services (SFS) across multi-rack configurations.
The new Dell EMC VxRail appliance models are the P580N and the E560N. The P580N is a four-socket system designed for SAP HANA in-memory database workloads. It is the first appliance in the VxRail P Series performance line to support NVMe. The 1U E560N is aimed at high-performance computing and compute-heavy workloads such as AI and machine learning, along with virtual desktop infrastructure.
The new 1U E Series systems support Nvidia T4 GPUs for extra processing power. The E Series also supports 8 TB solid-state drives, doubling the total capacity of previous models. The VxRail storage-heavy S570 nodes also now support the 8 TB SSDs.
ACE is generally available following a six-month early access program. Developed on Dell’s Pivotal Cloud Foundry platform, ACE performs monitoring and performance analytics across VxRail clusters. It alerts on possible system problems, performs capacity analysis and can help orchestrate upgrades.
The addition of ACE to VxRail comes a week after Dell EMC rival Hewlett Packard Enterprise made its InfoSight predictive analytics available on its SimpliVity HCI platform.
Wikibon senior analyst Stuart Miniman said the analytics, SFS and new VxRail appliances make it easier to manage HCI while expanding its use cases.
“Hyperconverged infrastructure is supposed to be simple,” he said. “When you add in AI and automated operations, that will make it simpler. We’ve been talking about intelligence and automation of storage our whole careers, but there has been a Cambrian explosion in that over the last year. Now they’re building analytics and automation into this platform.”
Bringing network management into HCI
Part of that simplicity includes making it easier to manage networking in HCI. Expanded capabilities for SFS on VxRail include the ability for HCI admins to manage networking switches across VxRail clusters without requiring dedicated networking expertise. SFS now applies across multi-rack VxRail clusters, automating switch configuration for up to six racks in one site. SFS supports from six switches in a two-rack configuration to 14 switches in a six-rack deployment.
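The switch counts Dell cites grow linearly with rack count, which is consistent with a leaf-spine fabric of two top-of-rack switches per rack plus a pair of spine switches. The layout is an assumption for illustration; only the endpoint figures (six switches for two racks, 14 for six) come from the announcement:

```python
def sfs_switch_count(racks: int) -> int:
    """Estimate switches in a multi-rack VxRail SFS fabric.

    Assumes two leaf (top-of-rack) switches per rack plus two spine
    switches -- a common leaf-spine layout that happens to match the
    figures Dell cites (6 switches for 2 racks, 14 for 6 racks).
    """
    if not 2 <= racks <= 6:
        raise ValueError("SFS multi-rack support spans 2 to 6 racks")
    leaves = 2 * racks   # two top-of-rack switches per rack
    spines = 2           # one redundant spine pair for the site
    return leaves + spines

print(sfs_switch_count(2))  # 6
print(sfs_switch_count(6))  # 14
```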
Support for Mellanox 100 Gigabit Ethernet PCIe cards helps accelerate streaming media and live broadcast functions.
“We believe that automation across the data center is key to fostering operational freedom,” Gil Shneorson, Dell EMC vice president and general manager for VxRail, wrote in a blog with details of today’s upgrades. “As customers expand VxRail clusters across multiple racks, their networking needs expand as well.”
Dell EMC VxRail vs. Nutanix: All about the hypervisor?
IDC lists Dell as the leader in the hyper-converged appliance market, which the firm said hit $1.8 billion in the second quarter of 2019. Dell had 29.2% of the market, well ahead of second-place Nutanix with 14.2%. Cisco was a distant third with 6.2%.
According to Miniman, the difference between Dell EMC and Nutanix often comes down to the hypervisor deployed by the user. VxRail closely supports market leader VMware, but VxRail appliances do not support other hypervisors. Nutanix supports VMware, Microsoft Hyper-V and the Nutanix AHV hypervisors. The Nutanix software stack competes with vSAN.
“Dell and Nutanix are close on feature parity,” Miniman said. “If you’re using VMware, then VxRail is the leading choice because it’s 100% VMware. VxRail is in lockstep with VMware, while Nutanix is obviously not in lockstep with VMware.”
Hewlett Packard Enterprise has made its InfoSight predictive analytics resource management capabilities available on its HPE SimpliVity hyper-converged infrastructure platform.
InfoSight provides capacity utilization reports and forecasts, and sends alerts of possible problems before users run out of capacity. HPE acquired InfoSight when it bought Nimble Storage in March 2017, two months after it acquired early hyper-converged infrastructure (HCI) startup SimpliVity. HPE ported Nimble’s InfoSight to its flagship 3PAR arrays, and it is used in its new Primera storage, as well as the ProLiant servers that SimpliVity runs on.
HPE is also connecting SimpliVity to its StoreOnce backup appliances, allowing customers to move data from SimpliVity nodes to the StoreOnce deduplication backup boxes.
HPE disclosed plans to bring InfoSight to SimpliVity in June, and it is now generally available as part of the SimpliVity service agreement. StoreOnce integration with SimpliVity HCI is planned for the first half of 2020.
SimpliVity HCI trails Dell, Nutanix, Cisco
HPE has lagged its major server rivals in HCI sales, particularly Dell. IDC listed SimpliVity fourth in branded HCI revenue in the second quarter with $83 million, only 2.3% of the market. No. 1 Dell ($533 million) and No. 2 Nutanix ($259 million) combined for nearly half of the total market, with Cisco third at $114 million and a 6.2% share, according to IDC.
HPE’s commitment to SimpliVity has also been questioned because of its hedging on HCI products. HPE makes Nutanix technology available as part of its GreenLake as-a-service program, and Nutanix sells its software bundled on HPE ProLiant hardware. HPE customers can also use its servers with VMware vSAN HCI software. And HPE this year launched Nimble Storage dHCI, a disaggregated platform that is not true HCI but competes with HCI products while allowing a greater degree of independent scaling of compute and storage resources. Nimble dHCI also became generally available this week.
HCI ‘comes down to data’
Pittsburgh-based trucking company Pitt Ohio has been a SimpliVity customer since before HPE acquired the HCI pioneer. Systems engineer Justin Brooks said he was familiar with InfoSight as a previous Nimble Storage customer, so he signed up for the beta program on SimpliVity. Brooks said he has used InfoSight since June, and finds it significantly aids him in managing capacity on his 19 HCI nodes used for primary storage and disaster recovery.
“Most of it comes down to data – how much you’re replicating, and how much data is on there versus what the hypervisor supports,” Brooks said. “The InfoSight intelligence and prediction capabilities are great for SimpliVity, because on any hyper-convergence platform it’s all about scalability. You need to know when to scale out or move things around, so you can plan accordingly. Hyper-converged is not dirt cheap either, especially when it’s all-flash. It’s important to make sure you’re getting your money’s worth out of the resources.”
Brooks said he previously employed “guesstimates and fuzzy math” to predict SimpliVity HCI growth, but InfoSight does those predictions for him now. InfoSight tracks data growth patterns over the past 30-, 60- and 90-day periods.
“You’re always worried how big your data sets are growing, especially on the SQL Server side,” he said. “You don’t get as high efficiency with dedupe and compression on SQL data as with file data. With SimpliVity you have to dig into the CLI and get deep in there, or see what was sent over from the production side or the DR side. InfoSight shows you that data more granularly.”
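The kind of capacity forecasting Brooks describes — trend lines over 30-, 60- and 90-day windows instead of “guesstimates and fuzzy math” — can be sketched as a simple linear extrapolation. This is an illustrative simplification, not InfoSight’s actual model:

```python
def days_until_full(samples, capacity_tb):
    """Estimate days until capacity runs out from (day, used_tb) samples.

    Fits a least-squares line to the usage history and extrapolates to
    the capacity ceiling. A linear trend is an assumption; a real tool
    like InfoSight applies richer predictive models.
    """
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    cov = sum((d - mean_x) * (u - mean_y) for d, u in samples)
    var = sum((d - mean_x) ** 2 for d, _ in samples)
    slope = cov / var                     # TB consumed per day
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None                       # usage is flat or shrinking
    full_day = (capacity_tb - intercept) / slope
    return full_day - samples[-1][0]      # days from the last sample

# 30 days of ~0.1 TB/day growth from 20 TB toward a 40 TB ceiling
history = [(day, 20 + 0.1 * day) for day in range(30)]
print(round(days_until_full(history, 40)))  # 171
```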
Brooks said he was concerned about HPE’s plans for SimpliVity when it made the acquisition in 2017, but he’s happy with its commitment. Pitt Ohio took advantage of an HPE buyback program to convert its SimpliVity OmniCubes that used Dell hardware into ProLiant-based SimpliVity nodes. Pitt Ohio had nine SimpliVity nodes before the HPE acquisition, and is up to 19 now. Brooks estimates 98% of his applications run on SimpliVity HCI. The trucking company is a VMware shop and first got into SimpliVity HCI for virtual desktop infrastructure. It has since switched from Cisco UCS servers and a variety of storage, including Dell EMC VMAX, Data Domain and Nimble arrays.
“We had a Frankenblock infrastructure,” Brooks said. “When we needed to refresh hardware, our options were to forklift everything in the data center or get on the hyper-converged route.”
Pitt Ohio now has one SimpliVity cluster for Microsoft SQL Server, another cluster for all other production workloads and a third for QA.
Brooks said he uses SimpliVity for data protection, but is considering adding a Cohesity backup appliance. That is so he can move file data to cheaper storage, and HPE sells Cohesity software on HPE Apollo servers. “We want to get some files off of SimpliVity because I’d rather not use all that flash disk for files,” Brooks said.
Multi-cloud management among enterprise IT shops is real, but the vision of routine container portability between clouds has yet to be realized for most.
Multi-cloud management is more common as enterprises embrace public clouds and deploy standardized infrastructure automation platforms, such as Kubernetes, within them. Most commonly, IT teams look to multi-cloud deployments for workload resiliency and disaster recovery, or as the most practical way to combine companies loyal to different public cloud vendors after an acquisition.
“Customers absolutely want and need multi-cloud, but it’s not the old naïve idea about porting stuff to arbitrage a few pennies in spot instance pricing,” said Charles Betz, analyst at Forrester Research. “It’s typically driven more by governance and regulatory compliance concerns, and pragmatic considerations around mergers and acquisitions.”
IT vendors have responded to this trend with a barrage of marketing around tools that can be used to deploy and manage workloads across multiple clouds. Most notably, IBM’s $34 billion bet on Red Hat revolves around multi-cloud management as a core business strategy for the combined companies, and Red Hat’s OpenShift Container Platform version 4.2 updated its Kubernetes cluster installer to support more clouds, including Azure and Google Cloud Platform. VMware and Rancher also use Kubernetes to anchor multi-cloud management strategies, and even cloud providers such as Google offer products such as Anthos with the goal of managing workloads across multiple clouds.
For some IT shops, easier multi-cloud management is a key factor in Kubernetes platform purchasing decisions.
“Every cloud provider has hosted Kubernetes, but we went with Rancher because we want to stay cloud-agnostic,” said David Sanftenberg, DevOps engineer at Cardano Risk Management Ltd, an investment consultancy firm in the U.K. “Cloud outages are rare, but it’s nice to know that on a whim we can spin up a cluster in another cloud.”
Multi-cloud management requires a deliberate approach
With Kubernetes and VMware virtual machines as common infrastructure templates, some companies use multiple cloud providers to meet specific business requirements.
Unified communications-as-a-service provider 8×8, in San Jose, Calif., maintains IT environments spread across 15 self-managed data centers, plus AWS, Google Cloud Platform, Tencent and Alibaba clouds. Since the company’s business is based on connecting clients through voice and video chat globally, placing workloads as close to customers’ locations as possible is imperative, and this makes managing multiple cloud service providers worthwhile. The company’s IT ops team keeps an eye on all its workloads with VMware’s Wavefront cloud monitoring tool.
“It’s all the same [infrastructure] templates, and all the monitoring and dashboards stay exactly the same, and it doesn’t really matter where [resources] are deployed,” said Dejan Deklich, chief product officer at 8×8. “Engineers don’t have to care where workloads are.”
Multiple times a year, Deklich estimated, the company uses container portability to move workloads between clouds when it gets a good deal on infrastructure costs, although it doesn’t move them in real time or spread apps among multiple clouds. Multi-cloud migration also only applies to a select number of 8×8’s workloads, Deklich said.
“If you’re in [AWS] and using RDS, you’re not going to be able to move to Oracle Cloud, or you’re going to suffer connectivity issues; you can make it work, but why would you?” he said. “There are workloads that can elegantly be moved, such as real-time voice or video distribution around the world, or analytics, as long as you have data associated with your processing, but moving large databases around is not a good idea.”
Maintaining multi-cloud portability also requires a deliberate approach to integration with each cloud provider.
“We made a conscious decision that we want to be able to move from cloud to cloud,” Deklich said. “It depends on how deep you go into integration with a given cloud provider — moving a container from one to the other is no problem if the application inside is not dependent on a cloud-specific infrastructure.”
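Deklich’s point — that a container moves cleanly only if the application inside doesn’t depend on cloud-specific infrastructure — can be checked mechanically. A minimal sketch that scans a Kubernetes manifest for provider-specific markers; the marker list is illustrative, not exhaustive:

```python
# Markers that tie a Kubernetes workload to one provider (illustrative list)
CLOUD_MARKERS = {
    "aws": ["eks.amazonaws.com", "ebs.csi.aws.com", "rds.amazonaws.com"],
    "azure": ["disk.csi.azure.com", "kubernetes.azure.com"],
    "gcp": ["pd.csi.storage.gke.io", "cloud.google.com"],
}

def portability_report(manifest_text: str) -> dict:
    """Return which cloud providers a manifest appears bound to."""
    found = {}
    for provider, markers in CLOUD_MARKERS.items():
        hits = [m for m in markers if m in manifest_text]
        if hits:
            found[provider] = hits
    return found

manifest = """
apiVersion: storage.k8s.io/v1
kind: StorageClass
provisioner: ebs.csi.aws.com
"""
print(portability_report(manifest))  # {'aws': ['ebs.csi.aws.com']}
```

An empty report suggests the workload uses only cloud-neutral primitives and is a candidate for the kind of cloud-to-cloud moves 8×8 makes.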
The ‘lowest common denominator’ downside of multi-cloud
Not every organization buys in to the idea that multi-cloud management’s promise of freedom from cloud lock-in is worthwhile, and the use of container portability to move apps from cloud to cloud remains rare, according to analysts.
“Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices,” said Lauren Nelson, analyst at Forrester Research. “They are far less cautious when it comes to getting locked into public cloud services, especially if that lock-in comes with great value.”
In fact, some IT pros argue that lock-in is preferable to missing out on the value of cloud-specific secondary services, such as AWS Lambda.
“I am staunchly single cloud,” said Robert Alcorn, chief architect of platform and product operations at Education Advisory Board (EAB), a higher education research firm headquartered in Washington, D.C. “If you look at how AWS has accelerated its development over the last year or so, it makes multi-cloud almost a nonsensical question.”
For Alcorn, the value of integrating EAB’s GitLab pipelines with AWS Lambda outweighs the risk of lock-in to the AWS cloud. Connecting AWS Lambda and API Gateway to Amazon’s SageMaker for machine learning has also represented almost a thousandfold drop in costs compared to the company’s previous container-based hosting platform, he said.
Even without the company’s interest in Lambda integration, the work required to keep applications fully cloud-neutral isn’t worth it for his company, Alcorn said.
“There’s a ceiling to what you can do in a truly agnostic way,” he said. “Hosted cloud services like ECS and EKS are also an order of magnitude simpler to manage. I don’t want to pay the overhead tax to be cloud-neutral.”
Some IT analysts also sound a note of caution about the value of multi-cloud management for disaster recovery or price negotiations with cloud vendors, depending on the organization. For example, some financial regulators require multi-cloud deployments for risk mitigation, but the worst case scenario of a complete cloud failure or the closure of a cloud provider’s entire business is highly unlikely, Forrester’s Nelson wrote in a March 2019 research report, “Assess the Pain-Gain Tradeoff of Multicloud Strategies.”
Splitting cloud deployments between multiple providers also may not give enterprises as much of a leg up in price negotiations as they expect, unless the customer is a very large organization, Nelson wrote in the report.
The risks of multi-cloud management are also manifold, according to Nelson’s report, from high costs for data ingress and egress between clouds to network latency and bandwidth issues, broader skills requirements for IT teams, and potentially double the resource costs to keep a second cloud deployment on standby for disaster recovery.
Of course, value is in the eye of the beholder, and each organization’s multi-cloud mileage may vary.
“I’d rather spend more for the company to be up and running, and not lose my job,” Cardano’s Sanftenberg said.
Aruba’s latest switching hardware and software unify network management and analytics across the data center and campus. The approach to modern networking is similar to the one that underpins rival Cisco’s initial success with enterprises upgrading campus infrastructure.
Aruba, a Hewlett Packard Enterprise company, this week launched its most significant upgrade to the two-year-old ArubaOS-CX (AOS-CX) network operating system. With the NOS improvements, Aruba unveiled two series of switches, the stackable CX 6300 and the modular CX 6400. Together, the hardware covers access, aggregation and core uses.
The latest releases arrive a year after HPE transferred management of its data center networking group to Aruba. The latter company is also responsible for HPE’s FlexNetwork line of switches and software.
The new CX hardware is key to taking AOS-CX to the campus, where companies can take advantage of the software’s advanced features. As modular hardware, the 6400 can act as an aggregation or core switch, while the 6300 drives the access layer of the network where traffic comes from wired or wireless mobile or IoT devices.
For the data center, Aruba has the 8400 switch series, which also runs AOS-CX. The hardware marked Aruba’s entry into the data center market, where it has to build credibility.
“Many non-Aruba customers and some Aruba campus customers are likely to take a wait-and-see posture,” said Brad Casemore, an analyst at IDC.
Nevertheless, having one NOS powering all the switches does make it possible to manage them with the Aruba software that runs on top of AOS-CX. Available software includes products for network management, analytics and access control.
For the wired and wireless LAN, Aruba has ClearPass, which lets organizations set access policies for groups of IoT and mobile devices; and Central, a cloud-based management console. For the data center, Aruba has HPE SimpliVity, which provides automated switch configurations during deployment of Aruba and HPE switches.
New features in the latest version of ArubaOS-CX include Dynamic Segmentation, which lets enterprises assign policies to wired client devices based on port or user role. Other enhancements include support for Ethernet VPN over VXLAN for data center connectivity.
Also, within the new 10.4 version of AOS-CX, Aruba integrated the Network Analytics Engine (NAE) with Aruba’s NetEdit software for orchestration of multiple switch configurations. NAE is a framework built into AOS-CX that lets enterprises monitor, troubleshoot and collect network data through the use of scripting agents.
Aruba vs. Cisco
How well Aruba’s unification strategy for networking can compete with Cisco’s remains to be seen. The latter company has had significant success with the Catalyst 9000 campus switching line introduced in 2017 with Cisco’s DNA Center management console. Some organizations use the DNA product in data center networking.
In the first quarter of 2019, Cisco’s success with the Catalyst 9000 boosted its revenue share of the campus switching market by 5 points, according to the research firm Dell’Oro Group. During the same quarter, the combined revenue of the other vendors, which included HPE, declined.
Competition is fierce in the campus infrastructure market because enterprises are just starting to upgrade networks. Driving the current upgrade cycle is the switch to Wi-Fi 6 — the next-generation wireless standard that can support more devices than the present technology.
Wi-Fi 6 lets enterprises add IoT devices to their networks, ranging from IP telephones and surveillance cameras to medical devices and handheld computers. The latter are used in warehouses and on the factory floor.
That transition will drive companies to deploy aggregation and access switches with faster port speeds and PoE ports to power wired IoT gear.
Enterprises skeptical of cross-domain networking
Aruba, Cisco and other networking vendors pushing a unified campus and data center haven’t convinced many enterprises to head in that direction, IDC analyst Brandon Butler said. Adopting that cross-domain technology would require significant changes in current operations, which typically have separate IT teams responsible for the campus and the data center.
IDC has not spoken to many enterprises that have centralized management across domains, Butler said. “This idea that you’re going to have a single pane of glass across the data center and the campus and out to the edge, I just don’t know if the industry is quite there yet.”
Meanwhile, Aruba’s focus on its CX portfolio has left some industry observers wondering whether it would diminish the development of FlexNetwork switches and software.
However, Michael Dickman, VP of Aruba product line management, said the company plans to fully support its FlexNetwork architecture “in parallel” with the CX portfolio.
There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.
Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes, which can scale out should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At the time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.
Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.
Tailor the Azure Stack HCI options for different needs
The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, such as the SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.
If the organization needs nodes for more demanding workloads, one option is Fujitsu’s Primeflex for Microsoft Azure Stack HCI. Node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, with up to 3 TB of SSD storage and 25 GbE networking.
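Sizing a cluster from specs like these is straightforward arithmetic. A sketch that totals cores and storage across identical nodes, using the maximum figures above (actual configurations vary, and usable storage will be lower after resiliency overhead):

```python
def cluster_totals(nodes: int, cores_per_node: int, storage_tb_per_node: float):
    """Aggregate raw capacity across identical HCI nodes.

    Raw totals only -- usable capacity after mirroring or erasure
    coding in Storage Spaces Direct will be considerably lower.
    """
    return {
        "nodes": nodes,
        "cores": nodes * cores_per_node,
        "raw_storage_tb": nodes * storage_tb_per_node,
    }

# A maxed-out 16-node Fujitsu Primergy RX2540 M5 cluster (spec maximums)
print(cluster_totals(16, 56, 3.0))
# {'nodes': 16, 'cores': 896, 'raw_storage_tb': 48.0}
```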
Management tools for Azure Stack HCI systems
Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.
The Windows Admin Center is a relatively new browser-based tool for consolidated management of local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.
Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handles private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.
Other tools are also available, including PowerShell — both the Windows and the open source PowerShell Core versions — as well as third-party products, such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.
It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.
Icelandair’s web content repository has taken flight from a traditional, on-premises content management system to a headless CMS in the cloud to improve its online travel booking experience for customers.
We spoke with Icelandair’s global director of marketing Gísli Brynjólfsson and UX writer Hallur Þór Halldórsson to discuss how they made this IT purchasing decision and what CX improvements the airline stands to gain by going to the cloud.
What was the technology problem that got Icelandair thinking about changing to a headless CMS in the cloud?
Halldórsson: When I came on to the project in 2015 we had a very old-fashioned on-premises CMS with a publishing front-end attached to it, which handled all the content for our booking site. Content managers had to go in and do a lot of cache-flushing and add code here, add code there to the site.
Load tests during cloud containerizing experiments on AWS in 2016 made people scared the site would crash a lot; people weren’t sure the CMS could handle what was coming in our digital transformation. We started looking for another CMS, using a different one for a year that wasn’t headless — but had API functionality — but it wasn’t quite doing what we expected. We ended up trying several cloud CMS vendors and Contentstack won the contract.
What about headless CMS made sense in the context of your digital transformation plan?
Halldórsson: Headless became a requirement at one point to decouple it from the publishing end of the old CMS. We needed this approach if we wanted to personalize content for customers, which we eventually would like to do. But the ability to adapt quickly and scalability were the primary reasons to go with a headless CMS.
What features or functionality won the bid for Contentstack’s headless CMS?
Halldórsson: The way it handles localized content. We support 11 languages online and 16 locales (four different versions of English, two French), and you have to be able to manage that. Other vendors that impressed us otherwise didn’t have mature localization features.
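Serving 16 locales across 11 languages typically relies on fallback chains — a regional variant falls back to its base language, then to a site default. A minimal sketch of that resolution logic; the locale figures come from the interview, but the chain behavior shown is a common localization pattern, not Contentstack’s documented algorithm:

```python
def resolve_content(locale: str, translations: dict, default: str = "en") -> str:
    """Pick the best available translation for a locale.

    Tries the exact locale (e.g. 'fr-CA'), then its base language
    ('fr'), then the site default. Standard fallback-chain behavior;
    not a description of any specific CMS's implementation.
    """
    chain = [locale, locale.split("-")[0], default]
    for candidate in chain:
        if candidate in translations:
            return translations[candidate]
    raise KeyError(f"no translation for {locale} or its fallbacks")

translations = {"en": "Book a flight", "fr": "Réserver un vol"}
print(resolve_content("fr-CA", translations))  # Réserver un vol
print(resolve_content("is-IS", translations))  # Book a flight
```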
What is on your digital transformation roadmap over the next couple years?
Halldórsson: The first thing we did was integrate our translation process into the CMS. Before, we had to paste text into a Microsoft Word document, send it to the translation agency, wait for it to come back and paste it into the CMS. Now it gets sent to the agency via API and is delivered back. Automating that workflow was first. Next is a Salesforce integration to more quickly give salespeople and customer service agents the content we know they’re looking for. Integrating a personalization engine, too, is a dream.
Editor’s note: This Q&A has been edited for clarity and brevity.
Stibo Systems is helping to advance the master data management (MDM) market with its latest release. The Stibo Systems 9.2 release of its multidomain MDM platform provides users with new features to manage, organize and make sense of data.
Stibo Systems got its start four years ago and is a division of Denmark-based Stibo A/S, an IT and print technology multinational that was founded in 1794 as a printing company. As part of the 9.2 update, the multidomain MDM system gains enhanced machine learning capabilities to help manage data across multiple data domains.
The update, which became generally available Sept. 4, also includes a bundled integration with the Sisense BI-analytics platform for executing data analytics on the multidomain MDM.
Though MDM is not heard as often in recent years as big data, Forrester vice president and research director Gene Leganza said MDM is as relevant now as it ever was, despite the significant changes that have come about in the big data era.
“For one thing, the years of stories of firms doing innovative things with data and analytics have gotten business leaders’ attention and anyone who was unaware of the value hidden in their data assets has gotten the message that they cannot afford to leave that value unmined,” Leganza said. “For these data management laggards — and there are a lot of them — newfound enthusiasm to improve their data capabilities usually means getting started with data governance and MDM.”
Simply collecting data, though, isn’t enough. Leganza said all data analysis is a “garbage-in-garbage-out” proposition, and the reliability and trustworthiness of data have never been more important as organizations work harder to evolve into data- and insights-driven cultures. Keeping data clean and usable is where multidomain MDM plays a key role.
Looking at Stibo Systems, Leganza said that in the last few years, the vendor has significantly bolstered its general MDM capabilities, and Forrester included the company in the Q1 2019 Forrester Wave evaluation of MDM systems, in which Stibo was ranked a “contender.” He noted that the evaluation did not include the features in the new 9.2 release, and that adding machine learning to improve data quality and governance is something Forrester had noted customers were asking for.
“This new release strengthens both their product domain dominance as well as their general MDM capabilities, which should serve them well in the marketplace,” Leganza said.
The MDM system is a purpose-built platform for mastering data and the various domains that go into that, whether that be product data, customer information, supplier details or vendor locations, said Doug Kimball, vice president of global solution strategy at Stibo Systems.
Kimball said that with a multidomain MDM, Stibo Systems customers can connect data across different pieces of their domain. For example, a company could map customers to products and know where those products are by location, he said.
A sizeable amount of what goes into multidomain MDM is data governance, enabling data traceability, as well as compliance with regulations. The Stibo Systems platform brings data in from wherever a company has it, be it a database, data lake, ERP system or otherwise, Kimball said.
“We do the de-duplication, the matching of records, the address verification and all the things that make the data good and usable,” he said.
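The de-duplication and record matching Kimball describes can be illustrated with a minimal sketch. This is not Stibo Systems code; it assumes hypothetical customer records and uses simple field normalization as the match key, where a real MDM platform would apply far richer matching and verification rules:

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    name: str
    address: str

def match_key(rec: CustomerRecord) -> tuple:
    """Normalize fields so trivially different duplicates collide."""
    name = " ".join(rec.name.lower().split())
    addr = (" ".join(rec.address.lower().split())
            .replace("street", "st").replace("avenue", "ave"))
    return (name, addr)

def deduplicate(records: list) -> list:
    """Keep the first record seen for each normalized key."""
    seen = {}
    for rec in records:
        seen.setdefault(match_key(rec), rec)
    return list(seen.values())

records = [
    CustomerRecord("Acme Corp", "12 Main Street"),
    CustomerRecord("ACME  Corp", "12 Main St"),
    CustomerRecord("Globex", "99 Oak Avenue"),
]
print(len(deduplicate(records)))  # 2 -- the two Acme rows merge into one
```

The point of the sketch is that matching happens on normalized keys rather than raw strings, which is what lets "ACME  Corp" and "Acme Corp" resolve to a single master record.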
Among the changes in the 9.2 release, Kimball noted, is support for Apache Cassandra as a database option, providing an alternative to running an Oracle database.
For the product master data management component, Stibo Systems now has a partnership with Sisense to deliver embedded analytics. It’s now possible to create data visualizations and actionable insights that are effectively embedded in the user experience, Kimball said.
Also in the new release is an application called Smartsheet that can help to bridge the gap between multidomain MDM and a simple Excel spreadsheet.
Kimball said Stibo Systems is working on a new user experience interface that is intended to make it easier for users to navigate the multidomain MDM. The vendor is also working on MDM on the edge.
“We’re looking at the fact that you’ve got all these devices out there: smart watches, refrigerators, beacons, creating all this additional data that needs to be mastered,” Kimball said. “The data is on the edge, instead of being in traditional data stores.”
The LogMeIn Bold360 suite has been updated to include new security controls and management tools and an updated workload organization feature. According to LogMeIn, the updates are meant to enable customer service teams to work faster and improve overall performance.
The full list of updates includes the following:
Knowledge management tools: The latest version of Bold360’s search optimizer can search and filter on customer intents, create articles for unresolved intents from within the search optimizer, and add phrasings to an article from an unresolved intent. It also has a task-driven interface that lets users manage unanswered, answered, channeled and muted intents.
Monitor view: Administrators can now see the content of live chats, chatbot engagements, emails, SMS texts and messaging channels such as Facebook Messenger.
Workload organization: There is a new chat flagging feature that will let agents mark an engagement in case they need to refer back to it for any reason. Supervisors can also filter the monitor view by agent flags to keep track of open engagements.
Security updates: Bold360 received ISO 27001 certification, meaning it met requirements for managing sensitive company information so that it remains secure. Additionally, LogMeIn added IP Whitelisting for Agent Logins, which enables admins to restrict which networks agents can log into the Bold360 web workspace from.
This is the latest in a series of updates to LogMeIn Bold360, including improvements to the chatbot in April and the addition of AI features for bots and agents in June. LogMeIn also has a portfolio of unified communication products, including GoToMeeting, GoToWebinar, Grasshopper, Grasshopper Connect and Jive.
LogMeIn Bold360 competes in a crowded customer experience market, going head-to-head with tech giants such as Salesforce, which will release CRM platform Salesforce Customer 360 in November.
Spectra Logic’s new storage management software can search through high-cost primary storage for inactive data and move it to a lower-cost tier, regardless of who makes the hardware.
StorCycle was unveiled today, and unlike the operational software for Spectra Logic’s tape, object and disk storage appliances, it is completely standalone. It is installed on a virtual machine or dedicated server that sits between the primary storage tier and a perpetual storage tier, where it migrates inactive data from the former to the latter.
Spectra Logic separates storage into two tiers. The primary tier consists of fast, high-cost storage like flash and high-performance disk, while the perpetual storage tier consists of slower, low-cost storage like tape, object storage, network-attached storage and the public cloud. Moving infrequently used data out of the primary tier saves money, and StorCycle is designed to streamline that migration.
“StorCycle is the glue that ties these two tiers together,” said David Feller, vice president of product management and solutions engineering at Spectra Logic.
Automated data tiering is not new. Cloud file system startup Elastifile, which was bought by Google earlier this year, supports tiering to on-premises bare-metal servers, AWS and Google Cloud Platform (GCP). NetApp Cloud Volumes OnTap and Hitachi Vantara have similar storage optimization capabilities for hybrid environments, and Druva recently introduced a capability to optimize storage among different AWS tiers.
Spectra Logic’s offering stands out in that it is a very simple and complete product, said Mark Peters, principal analyst and practice director at IT analyst, research and validation firm Enterprise Strategy Group. Peters said while StorCycle is standalone, the fact that a customer can buy it with compatible secondary storage hardware spanning disk, object store and tape is a distinct advantage.
StorCycle’s most notable feature, though, is that it places an HTML link in a file’s original location pointing to where the archived file has been moved. Peters said this helps prevent systems from timing out when trying to retrieve data from a high-latency source, such as tape or the public cloud. More importantly, he said, this simplifies the recall and recovery process for users, as they can access their archived data from where it originally lived.
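The HTML-link approach can be sketched in a few lines. This is an illustration of the general technique, not StorCycle's actual code, and the paths are hypothetical: after a file is moved to the archive tier, a small HTML stub is written in its place so users can still reach the data from the original location.

```python
import shutil
from pathlib import Path

# Minimal HTML stub that redirects (and links) to the archived copy
STUB = """<!DOCTYPE html>
<html><head><meta http-equiv="refresh" content="0; url={target}"></head>
<body><p>This file was archived. <a href="{target}">Open it here.</a></p></body>
</html>
"""

def migrate_with_stub(src: Path, archive_dir: Path) -> Path:
    """Move src to the archive tier and leave an HTML link in its place."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    dest = archive_dir / src.name
    shutil.move(str(src), str(dest))          # migrate off primary storage
    stub = src.parent / (src.name + ".html")  # stub stays on primary
    stub.write_text(STUB.format(target=dest.as_uri()))
    return dest
```

The stub is tiny, so the primary tier reclaims nearly all of the migrated file's capacity while the original path still resolves to something a browser or user can follow.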
Peters said organizations need to be careful when evaluating storage management software, as it’s possible they could increase their overall costs rather than saving money. StorCycle needs to be installed on a VM or dedicated server, which is a relatively minimal expense, but some products may call for additional primary storage. He encouraged buyers to “do their homework,” and ensure that their storage management software costs don’t exceed their savings.
Peters also said some organizations develop automated data tiering in-house, which is potentially cheaper than buying third-party storage management software. However, this introduces a layer of complexity and could create a legacy problem down the line.
“They can potentially run into trouble if the person who developed the application leaves,” Peters said.
StorCycle can automatically detect inactive data and migrate it, but it also includes a feature for “project-based” migrations. Users can tag data sets to be moved to the perpetual tier, and StorCycle will continually move new data with that tag out of primary storage. When data generation for the project is complete, all of its related data is in the same place in the perpetual tier, ready for further analysis. Feller said ideal use cases for this include sensor-based and machine-gathered data, such as seismology studies or autonomous car research.
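The tag-driven routing described above can be shown with a minimal sketch. The tags and tier names are hypothetical, and this is not StorCycle's API; it only illustrates how tagged data sets get routed straight to the perpetual tier so a project's output ends up co-located:

```python
from collections import defaultdict
from typing import Optional

perpetual_tier = defaultdict(list)  # project tag -> archived data sets
primary_tier = []                   # untagged, active data stays here

def ingest(dataset: str, tag: Optional[str] = None) -> None:
    """Route tagged data sets to the perpetual tier as they arrive."""
    if tag is not None:
        perpetual_tier[tag].append(dataset)  # tagged: straight to perpetual
    else:
        primary_tier.append(dataset)         # untagged: stays on primary

ingest("run-001.csv", tag="seismology-2019")
ingest("run-002.csv", tag="seismology-2019")
ingest("scratch.tmp")
print(perpetual_tier["seismology-2019"])  # ['run-001.csv', 'run-002.csv']
```

When the project wraps up, everything carrying its tag already sits together in the perpetual tier, which is the property Feller highlights for sensor and machine-generated workloads.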
StorCycle enters beta today, with general availability slated for November 2019. The initial release will support AWS and Wasabi cloud, with support for Microsoft Azure and GCP in the works. The first release will need to be installed on VMs or servers running Windows systems, but support for Linux is being planned.
Spectra Logic did not have price details for StorCycle but stated it will be available both as a perpetual license and as an annual subscription license.