
Google buys AppSheet for low-code app development

Google has acquired low-code app development vendor AppSheet in a bid to up its cloud platform’s appeal among line-of-business users and tap into a hot enterprise IT trend.

Like similar offerings, AppSheet ingests data from sources such as Excel spreadsheets, Smartsheet and Google Sheets. Users apply views to the data — such as charts, tables, maps, galleries and calendars — and then develop workflows with AppSheet’s form-based interface. The apps run on Android, iOS and within browsers.

AppSheet, based in Seattle, already integrated with G Suite and other Google cloud sources, as well as Office 365, Salesforce, Box and other services. The company will continue to support and improve those integrations following the Google acquisition, AppSheet CEO Praveen Seshadri said in a blog post.

“Our core mission is unchanged,” Seshadri said. “We want to ‘democratize’ app development by enabling as many people as possible to build and distribute applications without writing a line of code.”

Terms of the deal were not disclosed, but the price tag for the low-code app development startup is likely far less than Google’s $2.6 billion acquisition of data visualization vendor Looker in June 2019.

Under the leadership of former longtime Oracle executive Thomas Kurian, Google Cloud was expected to make a series of deals to shore up its position in the cloud computing market, where it trails AWS and Microsoft by significant percentages.

So far, Kurian has not made moves to buy core enterprise applications such as ERP and CRM, two markets dominated by the likes of SAP, Oracle and Salesforce. Rather, the AppSheet purchase reflects Google Cloud’s perceived strength in application development, but with a gesture toward nontraditional coders.

As for why Google chose AppSheet to boost its low-code/no-code strategy, one reason could be the dwindling number of options. In the past couple of years, several prominent low-code/no-code vendors became acquisition targets. Notable examples include Siemens’ August 2018 purchase of Mendix for $730 million, and more recently, Swiss banking software provider Temenos’ move to buy Kony in a $559 million deal.

It’s not as if Google, Siemens and Temenos made a long-shot bet, either. A survey released last year by Forrester Research, based on data collected in late 2018, found that 23% of more than 3,000 developers surveyed reported their companies were already using low-code development platforms. In addition, another 22% indicated their organizations would buy into low-code within a year.

Low-code competition heightens

Google’s AppSheet buy pits it directly against cloud platform rival Microsoft, whose citizen developer-targeted Power Apps low-code development platform has taken off like a rocket, said John Rymer, an analyst at Forrester. The acquisition of AppSheet also sets Google apart from cloud market share leader AWS, whose rumored low-code/no-code platform, said to be under development by a team led by prominent development guru Adam Bosworth, has yet to appear.

However, in AppSheet, Google is getting a winner, Rymer noted. “It’s a really good product and a really good team,” he said.

Moreover, the addition of AppSheet will help Google get more horsepower out of Apigee than just API management. The company wanted a broader platform with more functionality to address more customers and more use cases, Rymer said.

“So, I think they will be positioning this as a new platform anchored by Apigee,” he said. “Customers could use Apigee to create and publish APIs and AppSheet is how they would consume them. But they won’t stop there. They need process automation/workflow, so I would expect them to go there as well.”

Meanwhile, another key benefit Google gains from this acquisition is the integration that AppSheet already has with Google’s office productivity products, said Jeffrey Hammond, another Forrester analyst.

“G Suite has always felt a bit out of place to me at Google’s developer conferences, but it used to be one of the main ‘leads’ for the enterprise,” he said. “AppSheet gives Google the potential to craft a more cohesive story that integrates that with Google Cloud and Anthos in the future.”

Overall, this acquisition is yet another indication that low-code/no-code development has gone mainstream and the number of people building applications will continue to grow.


How to Resize Virtual Hard Disks in Hyper-V

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016+) and Client Hyper-V (Windows 10) have this capability.

An Overview of Hyper-V Disk Resizing

Hyper-V uses two different formats for virtual hard disk files: the original VHD and the newer VHDX. 2016 added a brokered form of VHDX called a “VHD Set”, which follows the same resize rules as VHDX. We can grow both the VHD and VHDX types easily. We can shrink VHDX files with only a bit of work. No supported way exists to shrink a VHD. Once upon a time, a tool was floating around the Internet that would do it. As far as I know, all links to it have gone stale.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

Resizing a virtual disk file only changes the file. It does not impact its contents. The files, partitions, formatting — all of that remains the same. A VHD/X resize operation does not stand alone. You will need to perform additional steps for the contents.

Requirements for VHD/VHDX Disk Resizing

A resize operation must occur on a system with Hyper-V installed. The tools rely on a service that only exists with Hyper-V.

If no virtual machine owns the virtual disk, then you can operate on it directly without any additional steps.

If a virtual hard disk belongs to a virtual machine, the rules change a bit:

  • If the virtual machine is Off, any of its disks can be resized as though no one owned them
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Special Requirements for Shrinking VHDX

Growing a VHDX doesn’t require any changes inside the VHDX. Shrinking needs a bit more. Sometimes, quite a bit more. The resize directions that I show in this article will grow or shrink a virtual disk file, but you have to prepare the contents before a shrink operation. We have another article that goes into detail on this subject.

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the VM attached the disk in question to its virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the VM attached the disk in question to its virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.


Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD:
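A representative invocation, growing a VHDX to 40GB, looks like this (the path is illustrative, not taken from the original sample):

C:\> Resize-VHD -Path 'C:\VMs\Virtual Hard Disks\demo.vhdx' -SizeBytes 40gb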

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to a VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate such suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and b and B both mean “byte”).
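You can verify the translation at any PowerShell prompt; the suffix is expanded before the cmdlet ever sees the value:

C:\> 40gb
42949672960
C:\> 1kb
1024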

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
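To check that minimum before attempting a shrink, a quick sketch (again, the path is illustrative):

C:\> Get-VHD -Path 'C:\VMs\Virtual Hard Disks\demo.vhdx' | Select-Object Path, Size, MinimumSize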

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that part in an upcoming section.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.


How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it. If the VM has checkpoints, remove them.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink the virtual hard disk. Shrink only appears for VHDXs or VHDSs, and only if they have unallocated space at the end of the file. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
  8. Enter the desired size and click Next.
  9. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

This change only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that in the next sections.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.
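If you prefer to script the guest-side step on a Windows guest, a minimal sketch using the in-box Storage cmdlets, assuming the grown volume is the C: drive:

C:\> Update-HostStorageCache   # the scripted equivalent of Rescan Disks
C:\> $max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
C:\> Resize-Partition -DriveLetter C -Size $max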

Linux distributions have a wide variety of file systems with their own requirements for partitions and sizing. They also have a plenitude of tools to perform the necessary tasks. Perform an Internet search for your distribution and file system.

VHDX Shrink Operations

As previously mentioned, you can’t shrink a VHDX without making changes to the contained file system first. Review our separate article for steps.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/X and compacting a VHD/X. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. That changes the total allocated space of the contained partitions. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Compact makes no changes to the contained data or partitions. We have an article on compacting VHD/Xs that contain Microsoft file systems and another for compacting VHD/Xs with Linux file systems.
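For reference, compacting has its own cmdlet. A minimal sketch (the path is illustrative; the dynamically expanding disk must be detached or mounted read-only first):

C:\> Optimize-VHD -Path 'C:\VMs\Virtual Hard Disks\demo.vhdx' -Mode Full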

Note: this page was originally published in January 2018 and has been updated to be relevant as of December 2019.


Author: Eric Siron

For VMware, DSC provides ESXi host and resource management

PowerShell Desired State Configuration has been a favorite among Windows infrastructure engineers for years, and the advent of the VMware DSC module means users who already use DSC to manage Windows servers can use it to manage VMware, too. As VMware has continued to develop the module, it has increased the number of vSphere components the tool can manage, including VMware Update Manager.

DSC has been the configuration management tool of choice for Windows since it was released. No other tool offers such a wide array of capabilities to manage a Windows OS in code instead of through a GUI.

VMware also uses PowerShell technology to manage vSphere. The vendor officially states that PowerCLI, its PowerShell module, is the best automation tool it offers. So, it only makes sense that VMware would eventually incorporate DSC so that its existing PowerShell customers can manage their assets in code.

Why use DSC?

Managing a machine through configuration as code is not new, especially in the world of DevOps. You can write a server’s desired state in code, which ensures you can quickly resolve any drift in configuration by applying that configuration frequently.

In vSphere, ESXi hosts in particular are prime candidates for this type of management. An ESXi host’s configuration does not change often, and when it does change, admins must make the change deliberately. Declaring that configuration in DSC means any change to the DSC configuration is applied consistently across the hosts.

You can use this tool to manage a number of vSphere components, such as VMware Update Manager and the vSphere Standard Switch.

How the LCM works

In DSC, the Local Configuration Manager (LCM) makes up the brains of a node. It takes in the configuration file and then parses and applies the change locally.

ESXi and vCenter do not have an LCM, so in the context of vSphere, you must use an LCM proxy, which runs on a Windows machine with PowerShell v5.1 and PowerCLI 10.1.1.

Installing the module

Installing the module is simple, as the DSC module is part of PowerShell Gallery. It only takes a single cmdlet to install the module on your LCM proxy:

C:\> Install-Module -Name VMware.vSphereDSC

Updating the module when VMware releases additional versions is also a simple task. You can use the Update-Module cmdlet:

C:\> Update-Module vmware.vspheredsc

Resources

DSC ties a resource to a particular area of a system it can manage. The DSC module vmware.vspheredsc, for example, can manage various aspects of vSphere, such as the following:

C:\Users\dan> Get-DscResource -Module vmware.vspheredsc | Select Name

Name
----
Cluster
Datacenter
DatacenterFolder
DrsCluster
Folder
HACluster
PowerCLISettings
vCenterSettings
vCenterStatistics
VMHostAccount
VMHostDnsSettings
VMHostNtpSettings
VMHostSatpClaimRule
VMHostService
VMHostSettings
VMHostSyslog
VMHostTpsSettings
VMHostVss
VMHostVssBridge
VMHostVssSecurity
VMHostVssShaping
VMHostVssTeaming

Many such resources are associated with ESXi hosts. You can manage settings such as accounts, Network Time Protocol and services through DSC. For clusters, you can manage settings such as HAEnabled, Distributed Resource Scheduler (DRS) and DRS distribution. You can view a resource’s full property syntax with the Get-DscResource cmdlet:

C:\> Get-DscResource -Name Cluster -Module vmware.vspheredsc -Syntax
Cluster [String] #ResourceName
{
    [DependsOn = [String[]]]
    [PsDscRunAsCredential = [PSCredential]]
    Server = [String]
    Credential = [PSCredential]
    Name = [String]
    Location = [String]
    DatacenterName = [String]
    DatacenterLocation = [String]
    Ensure = [String]
    [HAEnabled = [Boolean]]
    [HAAdmissionControlEnabled = [Boolean]]
    [HAFailoverLevel = [Int32]]
    [HAIsolationResponse = [String]]
    [HARestartPriority = [String]]
    [DrsEnabled = [Boolean]]
    [DrsAutomationLevel = [String]]
    [DrsMigrationThreshold = [Int32]]
    [DrsDistribution = [Int32]]
    [MemoryLoadBalancing = [Int32]]
    [CPUOverCommitment = [Int32]]
}
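Putting that syntax to work, here is a minimal sketch of a configuration that declares a cluster. The vCenter name, datacenter and cluster values are illustrative, and the plain-text credential allowance shown at the end is for lab use only; production configurations should encrypt credentials with a certificate:

Configuration vSphereCluster {
    param (
        [Parameter(Mandatory)]
        [PSCredential] $Credential
    )

    Import-DscResource -ModuleName VMware.vSphereDSC

    Node 'localhost' {
        Cluster ProdCluster {
            Server             = 'vcenter.example.com'   # illustrative vCenter
            Credential         = $Credential
            Name               = 'Prod-Cluster'          # illustrative cluster name
            Location           = ''
            DatacenterName     = 'DC1'                   # illustrative datacenter
            DatacenterLocation = ''
            Ensure             = 'Present'
            HAEnabled          = $true
            DrsEnabled         = $true
        }
    }
}

# Compile on the LCM proxy, allowing a plain-text credential for lab use,
# then apply the resulting MOF:
$cd = @{ AllNodes = @(@{ NodeName = 'localhost'; PSDscAllowPlainTextPassword = $true }) }
vSphereCluster -Credential (Get-Credential) -ConfigurationData $cd -OutputPath .\vSphereCluster
Start-DscConfiguration -Path .\vSphereCluster -Wait -Verbose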

With the capabilities of DSC now available to VMware admins, as well as Windows admins, they can control a variety of server variables through code and make vSphere and vCenter automation easy and accessible. They can apply broad changes across an entire infrastructure of hosts and ensure consistent configuration.


Intel partner marketplace to drive ecosystem collaboration

Intel has rolled out the Solutions Marketplace in a bid to facilitate collaboration among its global partner ecosystem.

The Intel Solutions Marketplace, launched Wednesday, provides a platform for Intel partners to create virtual storefronts where they can market their businesses and products. According to Intel, partners can use the Solutions Marketplace to browse other partners’ offerings and engage one another for collaborative purposes. The Solutions Marketplace is the latest move by the company in laying the groundwork for the Intel Partner Alliance program, a revamped partner program slated to launch in the second half of 2020.

“We have this ecosystem across our partner program that spans almost the entirety of our industry, and, oftentimes, the level of collaboration needed between two points in that ecosystem — or three points or even four points in that ecosystem — is growing as the complexity of end customer demands … grows, as well. The Intel Solutions Marketplace would be the way that that industry comes together to facilitate that,” said Eric Thompson, Intel general manager of global partner enablement.

Intel built the Solutions Marketplace on its Solutions Directory, a previously established feature of the company’s IoT Solutions Alliance program. The Solutions Directory lets partners post and promote their IoT-related products and solutions, Thompson said.

Given its origins, the Solutions Marketplace is heavily focused on IoT, but Thompson said offerings will expand into other technology areas. He noted that the marketplace also carries Intel Select Solutions — data center-oriented products for running enterprise software applications such as SAP HANA.

At launch, the Solutions Marketplace has approximately 4,600 unique offerings from about 1,000 Intel partners, Thompson said.

Making partner-to-partner collaborations easier

Other vendors, such as IBM, have recently made efforts to facilitate the partner matchmaking process within their complex channel ecosystems. There are a number of factors driving the need for partner-to-partner collaborations.

Thompson said the Intel partner ecosystem ranges from ODMs, OEMs and ISVs to systems and solutions integrators, services providers and cloud service providers. Customer demand for advanced IT solutions increasingly requires channel firms to combine their skill sets and expertise in joint engagements.

When Intel designed the Solutions Marketplace, the company sought partner feedback on the challenges typically involved in collaborations, Thompson said. Intel partners cited issues such as finding the right companies to connect with and identifying the right people in those companies to contact. “Our intent was to build features into the marketplace to help solve some of those challenges with collaboration,” he noted.

Each virtual storefront lets an Intel partner display a listing of its offerings as well as a detailed profile of its business and targeted industries, focus areas and geographic markets. Partners can be contacted by potential collaborators directly through the storefronts. Additionally, a dashboard gives partners insight into user visits to their storefronts, lead management functions and reporting on lead statistics, Thompson said.

With the Solutions Marketplace launched, Thompson noted that Intel will also continue to host face-to-face partner matchmaking events where partners can learn about one another’s companies and forge alliances.

Intel’s vision for the Solutions Marketplace is to also extend beyond partner-to-partner collaboration. The company aims to drive end customers to the Solutions Marketplace — for example, by shepherding customers to the platform from the Intel.com website, Thompson said. “We see this as a good opportunity for us to help connect end customers … to partners across that variety of solution spaces,” he said.

Other news

  • Rackspace, a cloud and managed service provider, augmented its Service Blocks portfolio of packaged services for cloud environments. The portfolio now includes Container Services Journey, a Service Block to help customers develop container strategies and containerized apps; Hybrid Transformation with VMware Cloud on AWS, which offers tools and expertise for transitioning to hybrid cloud with VMware Cloud on AWS; and Data Modernization, aimed at strengthening customers’ analytics processes, Rackspace said. Rackspace this week also closed its acquisition of Onica, an AWS Premier Consulting Partner and AWS Managed Service Provider.
  • IT management software vendor SolarWinds released the latest version of its N-central remote management and monitoring tool. N-central 12.2 adds network topology mapping capabilities, as well as features for disk encryption, automation and patching, SolarWinds said.
  • NTT Data, an IT services provider based in Tokyo, will resell GoodData’s analytics platform under a new agreement between the companies. NTT Data will also use GoodData’s technology in its iQuattro industrial IoT platform.
  • NTT Data Services, a Plano, Texas, division of NTT Data Corp., signed a definitive agreement to acquire Flux7, an IT services provider and AWS Premier Consulting Partner. Flux7’s expertise includes cloud implementation and migration, automation, and DevOps consulting services, according to NTT Data.
  • Cost and security are key barriers impeding SMBs’ cloud migration, an Insight Enterprises survey found. Fifty-six percent of the 408 SMB IT decision-maker respondents cited cloud costs as an obstacle, while 50% identified security requirements. In other findings, the Insight report said 95% of respondents have either implemented digital transformation initiatives or plan to do so within the next year, but 49% rate integrating new technology with legacy systems as very or extremely challenging.
  • Cloud managed service provider Faction introduced a free educational series for companies adopting VMware Cloud on AWS. Dubbed the “6-Step Blueprint for Success,” the program offers business and technical best practices.
  • MSP360, formerly CloudBerry Lab, rolled out macOS and iOS releases of MSP360 Remote Assistant, a freeware remote access and control offering. The Lewes, Del., company said the Apple-oriented releases will make it easier for MSPs to support customers from MacOS computers as well as iOS and iPadOS devices.
  • InterVision, an IT services provider with headquarters in Santa Clara, Calif., and St. Louis, said it has obtained Premier Consulting Partner status within the AWS Partner Network.
  • Wipro, an IT consulting and business process services company, has unveiled cloud Security Operations Center (SOC) services using Microsoft Azure Sentinel. Azure Sentinel is a security information event management offering. Wipro will provide managed cloud SOC services with integrated AI and orchestration capabilities in light of the Microsoft relationship. Wipro will also use its HOLMES AI platform to measure risk factors against compliance standards, according to the company.
  • CloudCheckr, a cloud management platform provider, rolled out a global partner enablement program. The Business Partner Program offers business expertise, sales enablement tools and cloud technology to support MSPs and resellers building cloud service practices, CloudCheckr said.
  • Coronet, a small business data breach platform provider, is partnering with Coalition, a cyber insurance provider for SMBs. The arrangement lets Coronet’s customers obtain Coalition’s cyber insurance products.
  • Identity services provider GlobalSign has signed up Impression, a Johannesburg, South Africa-based solutions provider, to its Certified Regional Partner program.
  • The Internet of Things Security Services Association (IoTSSA) named Robin Miller as its director of channel. Miller will oversee IoTSSA’s industry engagement as the organization develops cybersecurity education resources for MSPs and managed security service providers, IoTSSA said.

Market Share is a news roundup published every Friday.


Contact center agent experience needs massive overhaul

Gone are the days when it was acceptable to have turnover rates greater than 40% among contact center agents.

Leading organizations are revamping the contact center agent experience to improve business metrics such as operational costs, revenue and customer ratings, and a targeted agent program keeps companies at a competitive advantage, according to the Nemertes 2019-20 Intelligent Customer Engagement research study of 518 organizations.

The problems

CX leaders participating in the research pointed to several issues responsible for a failing contact center agent experience:

  • Low pay. In some organizations it’s at minimum wage, despite requirements for bachelor’s degrees and/or experience.
  • Dead-end job. Organizations typically do not have a growth path for agents. They expect them to last 18 months to two years, and there always will be a revolving door of agents coming and going.
  • Lack of customer context. Agents find it difficult to take pride in their work when they don’t have the right tools. Without CRM integrations, AI assistance and insightful agent desktops, it is difficult to delight customers.
  • Cranky customers. Agents also find it difficult to regularly interact with dissatisfied customers. With a better work environment, more interaction channels, better training, more analytics and context, they could change those attitudes.
  • No coaching. Because supervisors are busy interviewing and hiring to keep backfilling the agents who are leaving, they rarely have time to coach the agents they have. What’s more, they don’t have the analytics tools — from contact center vendors such as Avaya, Cisco, Five9, Genesys and RingCentral, or from pure-play vendors such as Clarabridge, Medallia and MaritzCX — to provide performance insight.

The enlightenment

Those in the contact center know this has been status quo for decades, but that is starting to change.

One of the big change drivers is the addition of a chief customer officer (CCO). Today, 37% of organizations have a CCO, up from 25% last year. The CCO is an executive-level individual with ultimate responsibility for all customer-facing activities and strategy to maximize customer acquisition, retention and satisfaction.

The CCO has budget, staff and the attention of the entire C-suite. As a result, high agent turnover rates are no longer flying under the radar. After CCOs bring the issue to CEOs and CFOs, companies are investing resources into turning around the turnover rates.

Additionally, organizations value contact centers more today, with 61% of research participants saying the company views the contact center as a “value center” versus a “cost center.” Four years ago, that figure was reversed, with two-thirds viewing the contact center as a cost center.

Companies are adding more outbound contact centers, targeting sales or proactive customer engagement — such as customer check-ups, loyalty program invitations and discount offers — and they are supporting new products and services. This helps to explain why, despite the growth in self-service and AI-enabled digital channels, 44% of companies actually increased the number of agents in 2019, compared to 13% who decreased, 40% who were flat and 3% unsure.

The solution

Research shows there are five common changes organizations are now making to improve the contact center agent experience and reduce the turnover rate — now at 21%, down from 38% in 2016. These changes include:

  • Improved compensation plan. Nearly 47% of companies are increasing agent compensation, compared to the 7% decreasing it. The increases range from 22% to 28%. Average agent compensation is $49,404, projected to rise to at least $60,272 by the end of 2020.
  • Investment in agent analytics. About 24% of companies are using agent analytics today, with another 20.2% planning to use the tools by 2021. Agent analytics provides data on performance to help with coaching and improvement, in addition to delivering real-time screen pops to help agents on the spot during interactions with customers. Those using analytics see a 52.6% improvement in revenue and a 22.7% decrease in operational costs.
  • Increases in coaching. By delivering data from analytics tools, supervisors have a better picture of areas of success and those that need improvement. By using a product such as Intradiem Contact Center RPA, they can automate the scheduling of training and coaching during idle times.
  • Addition of gamification. Agents are inspired by programs that inject friendly competition, awarding badges for bragging rights, weekly gift cards for top performance and monthly cash bonuses. Such rewards improve agents’ loyalty to the company and reduce turnover.
  • Development of career path. Successful companies are developing a solid career path with escalations into marketing, product development and supervisory roles in the contact center or CX apps/analysis.

Developing a solid game plan that provides agents with the compensation, support and career path they deserve will drastically reduce turnover rates. In a drastic example, one consumer goods manufacturing company reduced agent turnover from 88% to 2% with a program that addressed the aforementioned issues. More typically, companies are seeing 5% to 15% reductions in their turnover rates one year after developing such a plan.


Enterprise IT weighs pros and cons of multi-cloud management

Multi-cloud management among enterprise IT shops is real, but the vision of routine container portability between clouds has yet to be realized for most.

Multi-cloud management is more common as enterprises embrace public clouds and deploy standardized infrastructure automation platforms, such as Kubernetes, within them. Most commonly, IT teams look to multi-cloud deployments for workload resiliency and disaster recovery, or as the most reasonable approach to combining companies with loyalty to different public cloud vendors through acquisition.

“Customers absolutely want and need multi-cloud, but it’s not the old naïve idea about porting stuff to arbitrage a few pennies in spot instance pricing,” said Charles Betz, analyst at Forrester Research. “It’s typically driven more by governance and regulatory compliance concerns, and pragmatic considerations around mergers and acquisitions.”

IT vendors have responded to this trend with a barrage of marketing around tools that can be used to deploy and manage workloads across multiple clouds. Most notably, IBM’s $34 billion bet on Red Hat revolves around multi-cloud management as a core business strategy for the combined companies, and Red Hat’s OpenShift Container Platform version 4.2 updated its Kubernetes cluster installer to support more clouds, including Azure and Google Cloud Platform. VMware and Rancher also use Kubernetes to anchor multi-cloud management strategies, and even cloud providers such as Google offer products such as Anthos with the goal of managing workloads across multiple clouds.

For some IT shops, easier multi-cloud management is a key factor in Kubernetes platform purchasing decisions.

“Every cloud provider has hosted Kubernetes, but we went with Rancher because we want to stay cloud-agnostic,” said David Sanftenberg, DevOps engineer at Cardano Risk Management Ltd, an investment consultancy firm in the U.K. “Cloud outages are rare, but it’s nice to know that on a whim we can spin up a cluster in another cloud.”

Multi-cloud management requires a deliberate approach

With Kubernetes and VMware virtual machines as common infrastructure templates, some companies use multiple cloud providers to meet specific business requirements.

Unified communications-as-a-service provider 8×8, in San Jose, Calif., maintains IT environments spread across 15 self-managed data centers, plus AWS, Google Cloud Platform, Tencent and Alibaba clouds. Since the company’s business is based on connecting clients through voice and video chat globally, placing workloads as close to customers’ locations as possible is imperative, and this makes managing multiple cloud service providers worthwhile. The company’s IT ops team keeps an eye on all its workloads with VMware’s Wavefront cloud monitoring tool.

“It’s all the same [infrastructure] templates, and all the monitoring and dashboards stay exactly the same, and it doesn’t really matter where [resources] are deployed,” said Dejan Deklich, chief product officer at 8×8. “Engineers don’t have to care where workloads are.”

Multiple times a year, Deklich estimated, the company uses container portability to move workloads between clouds when it gets a good deal on infrastructure costs, although it doesn’t move them in real time or spread apps among multiple clouds. Multi-cloud migration also only applies to a select number of 8×8’s workloads, Deklich said.

“If you’re in [AWS] and using RDS, you’re not going to be able to move to Oracle Cloud, or you’re going to suffer connectivity issues; you can make it work, but why would you?” he said. “There are workloads that can elegantly be moved, such as real-time voice or video distribution around the world, or analytics, as long as you have data associated with your processing, but moving large databases around is not a good idea.”

Maintaining multi-cloud portability also requires a deliberate approach to integration with each cloud provider.

“We made a conscious decision that we want to be able to move from cloud to cloud,” Deklich said. “It depends on how deep you go into integration with a given cloud provider — moving a container from one to the other is no problem if the application inside is not dependent on a cloud-specific infrastructure.”

The ‘lowest common denominator’ downside of multi-cloud

Not every organization buys in to the idea that multi-cloud management’s promise of freedom from cloud lock-in is worthwhile, and the use of container portability to move apps from cloud to cloud remains rare, according to analysts.

“Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices,” said Lauren Nelson, analyst at Forrester Research. “They are far less cautious when it comes to getting locked into public cloud services, especially if that lock-in comes with great value.”

In fact, some IT pros argue that lock-in is preferable to missing out on the value of cloud-specific secondary services, such as AWS Lambda.

“I am staunchly single cloud,” said Robert Alcorn, chief architect of platform and product operations at Education Advisory Board (EAB), a higher education research firm headquartered in Washington, D.C. “If you look at how AWS has accelerated its development over the last year or so, it makes multi-cloud almost a nonsensical question.”

For Alcorn, the value of integrating EAB’s GitLab pipelines with AWS Lambda outweighs the risk of lock-in to the AWS cloud. Connecting AWS Lambda and API Gateway to Amazon’s SageMaker for machine learning has also represented almost a thousandfold drop in costs compared to the company’s previous container-based hosting platform, he said.

Even without the company’s interest in Lambda integration, the work required to keep applications fully cloud-neutral isn’t worth it for his company, Alcorn said.

“There’s a ceiling to what you can do in a truly agnostic way,” he said. “Hosted cloud services like ECS and EKS are also an order of magnitude simpler to manage. I don’t want to pay the overhead tax to be cloud-neutral.”

Some IT analysts also sound a note of caution about the value of multi-cloud management for disaster recovery or price negotiations with cloud vendors, depending on the organization. For example, some financial regulators require multi-cloud deployments for risk mitigation, but the worst case scenario of a complete cloud failure or the closure of a cloud provider’s entire business is highly unlikely, Forrester’s Nelson wrote in a March 2019 research report, “Assess the Pain-Gain Tradeoff of Multicloud Strategies.”

Splitting cloud deployments between multiple providers also may not give enterprises as much of a leg up in price negotiations as they expect, unless the customer is a very large organization, Nelson wrote in the report.

The risks of multi-cloud management are also manifold, according to Nelson’s report, from high costs for data ingress and egress between clouds to network latency and bandwidth issues, broader skills requirements for IT teams, and potentially double the resource costs to keep a second cloud deployment on standby for disaster recovery.

Of course, value is in the eye of the beholder, and each organization’s multi-cloud mileage may vary.

“I’d rather spend more for the company to be up and running, and not lose my job,” Cardano’s Sanftenberg said.


Analyst forecasts the next big things in BI

Consolidation among business intelligence vendors is driven by what’s perceived to be the next big things in BI, and that was the case with the run of merger and acquisition deals in the first half of 2019.

According to Wayne Eckerson, founder and principal consultant of Eckerson Group, self-service analytics was a key part of what made Looker and Tableau attractive to Google and Salesforce, respectively. When the next consolidation wave hits, according to Eckerson, augmented intelligence could be a big driver, as could cloud-based BI tools — and they will be viewed as the next big things in BI.

Eckerson has more than 25 years of experience in the BI software market and is the author of two books — Secrets of Analytical Leaders: Insights from Information Insiders and Performance Dashboards: Measuring, Monitoring, and Managing Your Business.

In the second part of a two-part Q&A, Eckerson talks about the driving forces behind the recent merger and acquisition deals, self-service analytics, and what excites him about the future of BI. In the first part, Eckerson discusses the divide between enterprises that use data and those that don’t, as well as the importance of DataOps and data strategies and how they play into the data divide.

Among other trends, a wave of consolidation over the last six to 12 months has left fewer vendors but ones with more end-to-end capabilities. What do you see as the next big things in BI that might spark the next wave?

Eckerson: It definitely goes in cycles — we’ve seen this consolidation before. The last big one was in 2007-08 when the three biggest BI players — Business Objects, Cognos, Hyperion — were bought by large application vendors SAP, IBM and Oracle, respectively. Usually these cycles are based on the advent of new technology that’s come into the market. In 2002, we moved from client-server to the web, and now we’re in the age of the cloud and self-service, and Looker and Tableau caught the self-service wave with visualization and desktop tools. The next big disruption to the BI market is the cloud. We’re seeing a lag between when these new BI technologies fully mature on these new platforms and when they get purchased and the market consolidates. If we’re to project out, maybe we’ll see some consolidation around cloud-based, AI-based BI tools where things are much more automated, things are in the cloud, and maybe it’s all embedded and you won’t even notice the BI tools. That’s probably the next wave in five or 10 years.

One thing that jumps out about first Google’s acquisition of Looker and then even more so with Salesforce’s purchase of Tableau is the price. Why are companies suddenly paying so much for BI vendors?

Eckerson: They’ve always gone for a premium, but now the premium is in the billions and not the hundreds of millions. We’re in this data-driven age now, and these are the tools that the business users touch and feel and use, so that maybe gives them a higher premium than middleware or database technology that’s behind the scenes. Tableau has been a meteor, and they probably sold at the right time for them. They’re under duress now from competitors. I think it’s just a testimony to how much data is front and center to the way businesses operate in today’s environment.

Ease of use to make data available to the citizen data scientist has been a significant push. Do you see self-service analytics taking over, or will there always be some things that are just too big and complex for average users and self-service BI will just be part of the picture?

Eckerson: Self-service is an interesting topic because there’s been so much frustration with IT, the IT bottleneck and delivering new applications for analytics that businesses wanted self-service just to get away from IT, but in the end what you really want is a blend of both. There are things that are too complex for the average business person to create on their own. If you want to build an enterprise unit for everybody, no business unit is going to do that alone, so you’d need a central group just for that. And then every division has some complex custom apps that need to be built, so you’ll need a corporate development team to build cutting-edge applications that will really help the company compete. On a day-to-day basis every business unit needs its core data analysts and data scientists to be looking into data to help optimize decisions, help optimize business processes, respond creatively and quickly to events as they happen on the ground, to win business, to avoid losing business, to manage risk — all that stuff. The self-service is really the agile, innovative arm of the business, whereas the corporate IT team is the run-the-business operational side that will build stuff that’s needed on a long-term basis. You need both sides to operate effectively.

As you analyze the BI industry, what are the next big things in BI that get you excited?

Eckerson: I am excited about AI for BI — it’s really transforming the way people are using data to make decisions, and it’s going to transform these BI tools. Before you needed a hypothesis of what to look for when you’re doing an analysis, and now the tools will dig into the data for you. They’ll do thousands of drill-downs in a matter of seconds and expose and surface only the most relevant correlations for you to look at. That’s pretty interesting. DataOps is pretty interesting, because that will fix the back end — the data that’s being delivered into these analytical tools. I think time-series analytics is the next big wave that we’ll see hit the marketplace. Especially as the internet of things and big data take hold, companies can use time-series analytics to automate decisions. The intersection of time-series analytics, AI and cloud-based computing with its infinite storage and elasticity — the combination of those things is going to bring about a sea change. There’s a lot to be excited about in our space.

Editors’ note: This interview has been edited for clarity and conciseness.


Using visualizations and analytics in media content

BOSTON — Among countless online newspapers and journals, blogs, videos and social media feeds, the modern digital consumer has a dizzying amount of media sources to choose from.

As content creators vie for consumer attention, some organizations have turned to data visualization and advanced analytics in media to gain an advantage.

Visualizing data analytics in media

Take, for example, Condé Nast, an American-based mass media company whose 19 brands attract around 150 million consumers.

With a diverse portfolio that includes The New Yorker, Wired and Teen Vogue, the media company needs to capture the attention of numerous social groups and niches around the world. Condé Nast has found that interactive charts and graphs seem to appeal to the inquisitiveness of most types of consumers.

Compared with static images, interactive visualizations “introduce a whole new level [to content], and increase time spent” on content by consumers, said Danielle Carrick, a data visualization designer and developer at Condé Nast, during a presentation this week at the 2018 Data Visualization Summit.

Carrick showed examples of colorful, easy-to-read charts and graphs. Large gray and red bars with moveable sliders on the entertainment and culture site Glamour plainly illustrated the disparity between male and female Oscar nominees since 1928.

On Teen Vogue, an in-depth interactive scatterplot of tweets from @realDonaldTrump splashed red dots across the screen. Each visualization, though in itself an example of analytics in media, was different.

“Same type of data, totally different way to look at it,” Carrick said of the visualizations.

Danielle Carrick of Condé Nast speaks at the 2018 Data Visualization Summit in Boston this week.

Static still around

The benefits of consistently changing the way data sets are illustrated are twofold, Carrick said. This varied approach gives consumers new and fresh ways to interact with different data sets, and also enables her and her team to be creative.

Carrick noted that despite the increased use of interactive visuals, static graphs and images are far from being phased out.

Static visuals are still used most often, and are developed separately by each brand rather than by a team working directly under the Condé Nast flag. Understandably, interactive data sets are harder to create, and require input from the local editor, writer and design team working on the content piece.

There’s a lot of communication, Carrick said, and ultimately, it’s up to the brand to decide if it will use the visual.

“They’re not going to publish something they don’t think their readers are interested in,” she said.

Internally, the team employs Qlik software, which has revamped its visualization capabilities recently to better compete with rival self-service BI vendor Tableau, for analytics in media.

And while Carrick admitted that more tracking needs to be done to measure the results of using interactive visuals, they seem to both draw in more consumers and keep them on the webpage longer.

Ad analytics

Visualizations aren’t the only ways organizations are using analytics in media, however.

In a separate presentation at the parallel 2018 Big Data Innovation Summit, Carla Pacione, senior director of data and systems at Comcast Spotlight, talked about how advanced analytics plays a role in the telecommunication conglomerate’s advertising efforts. In particular, Pacione highlighted the importance of digital metrics, which, she said, “really took the level of advertising to a whole new level.”

Thanks to new and updated technologies in TV and digital metrics, including embedding a pixel in commercials that can capture household and engagement data, organizations like Comcast can better measure metrics today and gain deeper insights, Pacione said.

Comcast is piloting more advanced “household addressable TV advertising” — the ability to send more targeted and relevant ads to different households watching the same TV program.

While Pacione noted Comcast uses third-party organizations to track purchases and predict future ones, the improved ability to measure metrics has enabled such advances in media advertising analytics.

With so many different ways of consuming media, Pacione said it will be important for media partners to work together to share information and advice and ultimately better target consumers.

Already, she said, “we’re starting to see that sharing in the industry because there’s just so much to learn.”

The 2018 Data Visualization Summit and the 2018 Big Data Innovation Summit were held Sept. 11 to 12 at the Renaissance Boston Waterfront Hotel.

New tech trends in HR: Josh Bersin predicts employee experience ‘war’

LAS VEGAS — Among fresh tech trends in HR, one that may garner the most interest is a new layer of software — which superstar analyst Josh Bersin called an employee experience platform — that will fit between core HR and talent management tools.

Bersin said he expects employee experience to become the next-generation employee portal — in other words, the go-to application for modern workers who need HR-based information. Vendors are lining up to address the need, he added.

“There is going to be a holy war for [what] system your employees use first,” said Bersin, an independent analyst who founded Bersin by Deloitte. Although his quote served as hyperbole, it nonetheless stuck with attendees here at the 2018 HR Technology Conference & Exposition.

“He hit home,” said Rita Reslow, senior director of global benefits at HR software vendor Kronos, based in Lowell, Mass. “We have all these systems, and we keep buying more.” But she wondered aloud when one product would tie her systems together for employees.

No vendor has achieved a true employee experience platform, Bersin told a room packed with 900 or so attendees at the conference on Tuesday. However, ServiceNow, PeopleDoc — which Ultimate Software acquired in July — and possibly IBM appear to have a head start, he added.

Tech trends in HR point to team successes

Bersin, who plans to release an extensive report about 2019 tech trends in HR, said software development within the industry reflects a shift in management that steers away from employee engagement and company culture in favor of increased team performance.

Unless a recession hits, “I think the focus of the tech market for the next couple of years … is on performance, productivity and agility,” he said.

The shift to productivity will require future technology to simplify work life, said Cliff Howe, manager of enterprise applications at Cox Enterprises, a communications and media company in Atlanta. “Our employees are being inundated,” Howe said. “We don’t want to hit our employees with too much [technology].”

Bersin suggested that HR software buyers consider the following tips when evaluating new human capital management products:

  • Shop around for vendors that focus on your company’s particular market. For example, if your organization exhibits a compliance-based culture, find a vendor that mirrors that approach.
  • Evaluate the “personality of the vendor,” he said. As an example, determine if the vendor’s reps listen to your decision-makers and help them. If the answer is no, it may be time to drop that vendor from consideration.

AI auditing, real-time payrolls needed in future

In other upcoming tech trends in HR, Bersin pegged AI as a quickly growing field that smart HR departments will learn how to monitor and audit in the future. That notion was on the minds of many at the HR Technology Conference, for which TechTarget — the publisher of SearchHRSoftware — is a media partner.

AI innovation has increased rapidly in the last two years. Today, even small HR software vendors with three to five engineers can use technology from Google or IBM, combine it with open source options and scale a new product on the cloud quickly, Bersin said. HR professionals will need to adjust their skills in order to better understand why AI software makes its decisions, which is an area not fully grasped yet, he added.

Howe agreed AI has grown beyond wish-list status. “AI will be a requirement, rather than a shiny object,” he said.

Bersin also noted that software will need to reflect a possible switch to a continuous payroll model — perhaps as often as daily. Younger workers, some of whom might not have bank accounts, have increased their demands to be compensated in real time, and this request is not just for the gig economy, he said.

Convenience: Driver of BI innovation

Allaa “Ella” Hilal is among that rare breed of computer experts who straddle the academic and commercial worlds. As director of data at Ottawa-based Shopify, Hilal oversees data product development for the e-commerce company’s international and larger merchants, also known as Plus customers. She is also an adjunct associate professor in the Centre for Pattern Analysis and Machine Intelligence at the University of Waterloo in Ontario, where she earned a Ph.D. in electrical and computer engineering.

An expert in data intelligence, wireless sensor networks and autonomous systems, Hilal is among the featured speakers at the Real Business Intelligence Conference on June 27 to 28 in Cambridge, Mass. Here, Hilal discusses what’s driving business intelligence (BI) innovation today and some of the pitfalls companies should be aware of.

What is driving BI innovation today?

Ella Hilal: First of all, in this day and age, companies are creating more and more products to deliver customer convenience. This convenience ends up saving time, which ties to money. When we become more efficient, whether it’s in our IT systems or in our daily commute, we gain moments that we can spend on something else. We can have more time with our families and loved ones, or even gain more time or resources to do the things we love or care about.

There is this immediate need and craving for more efficiency and convenience from the customer side. And businesses all are aware of this craving. They are trying to think about what they can do with the data that exists within the systems or data being collected from IoT, which they know is valuable. The power of BI lies in the fact that it can take all of these different data sources and derive valuable insights to drive business decisions and data products that empower customers and the business in general.

There are many methodologies of how you can apply this to your business, and I plan to discuss some methodologies during my talk at the Real Business Intelligence Conference.

Companies have been doing business intelligence for a long time; they’ve had to figure out which data is useful and which is not for their businesses. What’s different about capitalizing on data generated from technologies like IoT and smart systems?

Hilal: Generally, only about 12% of company data is analyzed today — the rest is either underutilized or untapped. If we think we’re doing such a good job with the analytics we have today, imagine if you applied these efforts across the entire data available in your business. At Shopify, we work to identify the pain points of running a business and use data to provide value to the merchants so they have a better experience as entrepreneurs.

So, there is huge value we can mine and surface. And when we talk about advanced analytics, we’re not talking about just basic business analytics; we’re talking also about applying AI, machine learning, prediction, forecasting and even prescriptive analytics.

Most CIOs are acutely aware that AI and advanced analytics should be part of a BI innovation strategy. But even big companies are having trouble finding skilled people to do this work.

Hilal: It’s a problem every company will face, because the skilled data scientist is still scarce compared to the need. One challenge is that the people who have the technical abilities to do this strong analytical work don’t always have the business acumen that is needed for an experienced data scientist. They might be very smart in doing sophisticated analysis, but if we don’t tie that with business acumen, they fail to communicate the business value and enable the decision-makers with useful insights. Furthermore, the lack of business acumen makes it challenging to build data products you can utilize or sell. So, you need to build the right kind of team.

Community and university collaborations are one of the strongest approaches that big companies are adopting; you can see that Google, Uber and Shopify, for example, are all partnering with university research labs and reaping the benefits from a technical perspective. They have the technical team and the business acumen team, which then brings the work in-house to focus on data analytics products. So, you get to bridge the gap between this amazing research initiative and the productization of the results.

Another benefit is that with these partnerships, researchers with very strong technical AI and statistical backgrounds can also develop business acumen, because they are working closely with product managers and production teams. This is definitely a longer-term strategy. Wearing my research hat, I can say that universities are also working hard to introduce programs with a mix of computer science and machine learning, programs with a good mix of the old pillars of data science and new approaches.

So, companies need to come up with new frameworks for capitalizing on data. Are there pitfalls companies want to keep in mind?

Hilal: You’ll hear me say this time and time again: We all need to have a sense of responsible innovation. We’re in this industrial race to build really good products that can succeed in the market, and we need to keep in mind that we are building these products for ourselves, as well as for others.

When we create these products, it is the distributed responsibility of all of us to make sure that we embed our morals and ethics in them, making sure they are secure, they are private, they don’t discriminate. At Shopify, we are always asking ourselves, ‘Will this close or open a door for a merchant?’ It is not enough that our products are functional; they have to maintain certain ethical standards, as well.

We’ve reported on how the IoT space may pose a threat because developers are under such pressure to get these products to market that considerations like security and ethics and who owns the data are an afterthought.

Hilal: We should not be putting anything out there that we wouldn’t want in our own homes. But this is not just about AI or IoT. Whether it is a piece of software or hardware system, we need to make sure that security is not a bolt-on, or that privacy is fixed after the fact with a new policy statement — these things need to be done early on and need to be thought of before and throughout the production process.