Tag Archives: tools

Facial recognition technology: The need for public regulation and corporate responsibility – Microsoft on the Issues

All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.

Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses. In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act.

We’ve set out below steps that we are taking, and recommendations we have for government regulation.

First, some context

Facial recognition technology has been advancing rapidly over the past decade. If you’ve ever seen a suggestion on Facebook or another social media platform to tag a face with a suggested name, you’ve seen facial recognition at work. A wide variety of tech companies, Microsoft included, have used this technology over the past several years to turn the time-consuming work of cataloging photos into something both instantaneous and useful.

So, what is changing now? In part it’s the ability of computer vision to get better and faster in recognizing people’s faces. In part this improvement reflects better cameras, sensors and machine learning capabilities. It also reflects the advent of larger and larger datasets as more images of people are stored online. This improvement also reflects the ability to use the cloud to connect all this data and facial recognition technology with live cameras that capture images of people’s faces and seek to identify them – in more places and in real time.

Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives. This means the potential uses of facial recognition are myriad. At an elementary level, you might use it to catalog and search your photos, but that’s just the beginning. Some uses are already improving security for computer users, like recognizing your face instead of requiring a password to access many Windows laptops or iPhones, and, in the future, devices like automated teller machines.

Some emerging uses are both positive and potentially even profound. Imagine finding a young missing child by recognizing her as she is being walked down the street. Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event. Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

But other potential applications are more sobering. Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

Perhaps as much as any advance, facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?

The issues become even more complicated when we add the fact that facial recognition is advancing quickly but remains far from perfect. As reported widely in recent months, biases have been found in the performance of several fielded face recognition technologies. The technologies worked more accurately for white men than for white women and were more accurate in identifying persons with lighter complexions than people of color. Researchers across the tech sector are working overtime to address these challenges and significant progress is being made. But as important research has demonstrated, deficiencies remain. The relative immaturity of the technology is making the broader public questions even more pressing.

Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically has some rate of error even when it operates in an unbiased way. And the issues relating to facial recognition go well beyond questions of bias themselves, raising critical questions about our fundamental freedoms.

Politics meets Silicon Valley

In recent weeks, the politics of the United States have become more intertwined with these technology developments on the West Coast. One week in the middle of June put the issues raised by facial recognition technology in bold relief for me and other company leaders at Microsoft. As the country was transfixed by the controversy surrounding the separation of immigrant children from their families at the southern border, a tweet about a marketing blog Microsoft published in January quickly blew up on social media and sparked vigorous debate. The blog had discussed a contract with the U.S. Immigration and Customs Enforcement, or ICE, and said that Microsoft had passed a high security threshold; it included a sentence about the potential for ICE to use facial recognition.

We’ve since confirmed that the contract in question isn’t being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected. The work under the contract instead is supporting legacy email, calendar, messaging and document management workloads. This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world. Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE.

The ensuing discussion has illuminated broader questions that are rippling across the tech sector. These questions are not unique to Microsoft. They surfaced earlier this year at Google and other tech companies. In recent weeks, a group of Amazon employees has objected to its contract with ICE, while reiterating concerns raised by the American Civil Liberties Union (ACLU) about law enforcement use of facial recognition technology. And Salesforce employees have raised the same issues related to immigration authorities and these agencies’ use of their products. Demands increasingly are surfacing for tech companies to limit the way government agencies use facial recognition and other technology.

These issues are not going to go away. They reflect the rapidly expanding capabilities of new technologies that increasingly will define the decade ahead. Facial recognition is the technology of the moment, but it’s apparent that other new technologies will raise similar issues in the future. This makes it even more important that we use this moment to get the direction right.

The need for government regulation

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.

Such an approach is also likely to be far more effective in meeting public goals. After all, even if one or several tech companies alter their practices, problems will remain if others do not. The competitive dynamics between American tech companies – let alone between companies from different countries – will likely enable governments to keep purchasing and using new technology in ways the public may find unacceptable in the absence of a common regulatory framework.

It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. The auto industry spent decades in the 20th century resisting calls for regulation, but today there is broad appreciation of the essential role that regulations have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, foods and pharmaceutical products. There will always be debates about the details, and the details matter greatly. But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.

That’s why Microsoft called for national privacy legislation for the United States in 2005 and why we’ve supported the General Data Protection Regulation in the European Union. Consumers will have more confidence in the way companies use their sensitive personal information if there are clear rules of the road for everyone to follow. While the new issues relating to facial recognition go beyond privacy, we believe the analogy is apt.

It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime. Governments may monitor the exercise of political and other public activities in ways that conflict with longstanding expectations in democratic societies, chilling citizens’ willingness to turn out for political events and undermining our core freedoms of assembly and expression. Similarly, companies may use facial recognition to make decisions without human intervention that affect our eligibility for credit, jobs or purchases. All these scenarios raise important questions of privacy, free speech, freedom of association and even life and liberty.

So what issues should be addressed through government regulation? That’s one of the most important initial questions to address. As a starting point, we believe governments should consider the following issues, among others:

  • Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?
  • Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
  • What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
  • Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
  • Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
  • Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
  • Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
  • Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?

This list, which is by no means exhaustive, illustrates the breadth and importance of the issues involved.

Another important initial question is how governments should go about addressing these questions. In the United States, this is a national issue that requires national leadership by our elected representatives. This means leadership by Congress. While some question whether members of Congress have sufficient expertise on technology issues, at Microsoft we believe Congress can address these issues effectively. The key is for lawmakers to use the right mechanisms to gather expert advice to inform their decision making.

On numerous occasions, Congress has appointed bipartisan expert commissions to assess complicated issues and submit recommendations for potential legislative action. As the Congressional Research Service (CRS) noted last year, these commissions are “formal groups established to provide independent advice; make recommendations for changes in public policy; study or investigate a particular problem, issue, or event; or perform a duty.” Congress’ use of the bipartisan “9/11 Commission” played a critical role in assessing that national tragedy. Congress has created 28 such commissions over the past decade, assessing issues ranging from protecting children in disasters to the future of the army.

We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States. This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology. The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch.

Issues relating to facial recognition go well beyond the borders of the United States. The questions listed above – and no doubt others – will become important public policy issues around the world, requiring active engagement by governments, academics, tech companies and civil society internationally. Given the global nature of the technology itself, there likely will also be a growing need for interaction and even coordination between national regulators across borders.

Tech sector responsibilities

The need for government leadership does not absolve technology companies of our own ethical responsibilities. Given the importance and breadth of facial recognition issues, we at Microsoft and throughout the tech sector have a responsibility to ensure that this technology is human-centered and developed in a manner consistent with broadly held societal values. We need to recognize that many of these issues are new and no one has all the answers. We still have work to do to identify all the questions. In short, we all have a lot to learn. Nonetheless, some initial conclusions are clear.

First, it’s incumbent upon those of us in the tech sector to continue the important work needed to reduce the risk of bias in facial recognition technology. No one benefits from the deployment of immature facial recognition technology that has greater error rates for women and people of color. That’s why our researchers and developers are working to accelerate progress in this area, and why this is one of the priorities for Microsoft’s Aether Committee, which provides advice on several AI ethics issues inside the company.

As we pursue this work, we recognize the importance of collaborating with the academic community and other companies, including in groups such as the Partnership for AI. And we appreciate the importance not only of creating data sets that reflect the diversity of the world, but also of ensuring that we have a diverse and well-trained workforce with the capabilities needed to be effective in reducing the risk of bias. This requires ongoing and urgent work by Microsoft and other tech companies to promote greater diversity and inclusion in our workforce and to invest in a broader and more diverse pipeline of talent for the future. We’re focused on making progress in these areas, but we recognize that we have much more work to do.

Second, and more broadly, we recognize the need to take a principled and transparent approach in the development and application of facial recognition technology. We are undertaking work to assess and develop additional principles to govern our facial recognition work. We’ve used a similar approach in other instances, including trust principles we adopted in 2015 for our cloud services, supported in part by transparency centers and other facilities around the world to enable the inspection of our source code and other data. Similarly, earlier this year we published an overall set of ethical principles we are using in the development of all our AI capabilities.

As we move forward, we’re committed to establishing a transparent set of principles for facial recognition technology that we will share with the public. In part this will build on our broader commitment to design our products and operate our services consistent with the UN’s Guiding Principles on Business and Human Rights. These were adopted in 2011 and have emerged as the global standard for ensuring corporate respect for human rights. We periodically conduct Human Rights Impact Assessments (HRIAs) of our products and services, and we’re currently pursuing this work with respect to our AI technologies.

We’ll pursue this work in part based on the expertise and input of our employees, but we also recognize the importance of active external listening and engagement. We’ll therefore also sit down with and listen to a variety of external stakeholders, including customers, academics and human rights and privacy groups that are focusing on the specific issues involved in facial recognition. This work will take up to a few months, but we’re committed to completing it expeditiously.

We recognize that one of the difficult issues we’ll need to address is the distinction between the development of our facial recognition services and the use of our broader IT infrastructure by third parties that build and deploy their own facial recognition technology. The use of infrastructure and off-the-shelf capabilities by third parties is more difficult for a company to regulate than the use of a complete service or the work of a firm’s own consultants, which can readily be managed more tightly. While nuanced, these distinctions will need consideration.

Third, in the meantime we recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology. Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. “Move fast and break things” became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.

For this reason, based in part on input from the Aether Committee, we’re moving more deliberately with our facial recognition consulting and contracting work. This has led us to turn down some customer requests for deployments of this service where we’ve concluded that there are greater human rights risks. As we’re developing more permanent principles, we will continue to monitor the potential uses of our facial recognition technologies with a view to assessing and avoiding human rights abuses.

In a similar vein, we’re committed to sharing more information with customers who are contemplating the potential deployment of facial recognition technology. We will continue work to provide customers and others with information that will help them understand more deeply both the current capabilities and limitations of facial recognition technology, how these features can and should be used, and the risks of improper uses.

Fourth, we’re committed to participating in a full and responsible manner in public policy deliberations relating to facial recognition. Government officials, civil liberties organizations and the broader public can only appreciate the full implications of new technical trends if those of us who create this technology do a good job of sharing information with them. Especially given our urging of governments to act, it’s incumbent on us to step forward to share this information. As we do so, we’re committed to serving as a voice for the ethical use of facial recognition and other new technologies, both in the United States and around the world.

We recognize that there may be additional responsibilities that companies in the tech sector ought to assume. We provide the foregoing list not with the sense that it is necessarily complete, but in the hope that it can provide a good start in helping to move forward.

Some concluding thoughts

Finally, as we think about the evolving range of technology uses, we think it’s important to acknowledge that the future is not simple. A government agency that is doing something objectionable today may do something that is laudable tomorrow. We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.

Even at a time of increasingly polarized politics, we have faith in our fundamental democratic institutions and values. We have elected representatives in Congress that have the tools needed to assess this new technology, with all its ramifications. We benefit from the checks and balances of a Constitution that has seen us from the age of candles to an era of artificial intelligence. As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law. Given the global sweep of this technology, we’ll need to address these issues internationally, in no small part by working with and relying upon many other respected voices. We will all need to work together, and we look forward to doing our part.


Taking stock of Windows Server management tools

The right Windows server management tools keep the business running with minimal interruptions. But administrators should be open to change as the company’s needs evolve.

Many organizations run on a mix of new and old technologies that complicate the maintenance workload of the IT staff. Administrators need to take stock of their systems and get a complete rundown of all the variables associated with the server operating systems under their purview. While it might not be possible to use one utility to run the entire data center, administrators must assess which tool offers the most value by weighing the capabilities of each.

For these everyday tasks, administrators have a choice of several Windows server management tools that come at no extra cost. Some have been around for years, while others recently emerged from development. The following guide helps IT workers understand why certain tools work well in particular scenarios.

Choose a GUI or CLI tool?

Windows server management tools come in two flavors: graphical user interface (GUI) and command-line interface (CLI).

Many administrators will admit it’s easier to work with a GUI tool because the interface offers point-and-click management without the need to memorize commands. A disadvantage of a GUI tool is the amount of time it takes to execute a command, especially when there are a large number of servers to manage.

Learning how to use and implement a CLI tool can be a slow process because it takes significant effort to learn the language. Another downside is that many of these CLI tools were not designed to work together; the administrator must learn how to pipe output from one CLI tool to the next to develop a workflow.

A GUI tool is ideal when there are not many servers to manage, or for one-time or infrequent tasks. A CLI tool is more effective for performing a series of actions on multiple servers.
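To make the contrast concrete, here is a minimal sketch (not part of the original guide) of the scripted approach: the same remote command runs against a list of Windows hosts from Python. It assumes the third-party pywinrm package and uses placeholder hostnames and credentials.

    # Run one PowerShell command against every server in a list over WinRM.
    # Assumes the third-party pywinrm package; hostnames and credentials are placeholders.
    import winrm

    SERVERS = ["server01.example.com", "server02.example.com"]  # hypothetical hosts

    for host in SERVERS:
        session = winrm.Session(host, auth=("administrator", "P@ssw0rd"))  # placeholder credentials
        result = session.run_ps("Get-Service -Name W32Time | Select-Object -ExpandProperty Status")
        print(host, result.status_code, result.std_out.decode().strip())

Looping over more hosts costs nothing extra, which is exactly where a point-and-click workflow starts to break down.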

This tip offers more specifics about the two interfaces used with server management tools.

Windows Admin Center: A new management contender

Windows Admin Center, formerly Project Honolulu, is a GUI tool that combines local and remote server management tools in a single console for a consolidated administrative experience.

Windows Admin Center is one of Microsoft’s newer Windows server management tools that makes it easier to work with nondomain machines, particularly those running Server Core.

Windows Admin Center can only manage Windows systems and lacks the functionality IT workers have come to expect with the Remote Server Administration Tools application.

Administrators interested in using Windows Admin Center as one of their primary Windows server management tools should be aware of potential security issues before implementing it in their data center.

This article provides additional details about the features of this tool.

A venerable offering expands to new platforms

Now more than 10 years old, PowerShell is one of the key Windows server management tools due to its potent ability to manage multiple machines through scripting. No longer just a Windows product, Microsoft converted the automation and configuration management tool into an open source project. Microsoft initially called this new offering PowerShell Core, but now refers to it as just PowerShell. The open source version of PowerShell runs on Linux and macOS platforms. Microsoft supports Windows PowerShell but does not plan to add more features to it.

Administrators can use both PowerShell versions side by side, which might be necessary for some shops. At the moment, Windows PowerShell provides more functionality because certain features have yet to be ported to PowerShell Core.

This link offers more information about the differences between the two versions that administrators need to know.

New tools unveiled to monitor, manage and optimize SAP environments


The world of the SAP intelligent enterprise requires new tools to monitor, manage and optimize SAP environments as they evolve to include new SAP platforms, integrations and advanced technologies.

SAP’s vision of the intelligent enterprise includes SAP Data Hub, which incorporates integration and data management components, and it shows the company can embrace modern open source platforms, like Hadoop and Spark, and hybrid and multi-cloud deployment, according to Doug Henschen, an analyst at Constellation Research.

This openness, along with extending cloud initiatives to Microsoft Azure, Google Cloud Platform and IBM private cloud instances, necessitated a move to bring customers hybrid and multi-cloud data management capabilities, Henschen said.

“The Data Hub, in particular, facilitates hybrid and multi-cloud data access without data movement and copying,” he said. “This is crucial in harnessing data from any source, no matter where it may be running, to facilitate data-driven decisioning.”

At SAP Sapphire Now 2018, several vendors unveiled new tools — or updates to existing ones — that address some of the challenges associated with moving SAP systems to the intelligent enterprise landscape.

  • Tricentis Tosca’s continuous testing method is designed to keep pace with modern SAP environments, unlike traditional testing methods, which were built for previous versions of SAP applications. These legacy testing systems may not always adequately support S/4HANA and Fiori 2.0, so many SAP users have to use manual testing to validate releases, according to Tricentis. Cloud-enabled Tricentis Tosca 11.2 now supports a variety of the newest SAP versions, including S/4HANA and Fiori 2.0.
  • Worksoft announced the release of Worksoft Interactive Capture 2.0, which is test automation software for SAP environments. Worksoft Interactive Capture 2.0 operates on the principle that it’s critical to keep existing SAP applications operating as new systems and applications are being developed. Worksoft Interactive Capture 2.0 allows business users and application functional experts to create automated business workflows, test documentation and test cases.
  • Virtual Forge announced its CodeProfiler for HANA can now scan the SAPUI5 programming language. CodeProfiler for HANA provides detailed information on code quality as a programmer writes code, similar to spell check on a word processor, according to Virtual Forge. This allows coders to identify and manage performance, security and compliance deficiencies early in the HANA application development process. Reducing or eliminating performance decline and application downtime is particularly critical, as HANA enables real-time business applications.
  • As more organizations move their SAP environments to S/4HANA — or plan to — it becomes important to understand how users actually interact with SAP applications. Knoa Software showed a new version of its user experience management application, Knoa UEM for Enterprise Applications, which is also resold by SAP as SAP User Experience Management by Knoa. The product allows organizations to view and analyze how users interact with SAP applications, including activities that lead to errors, applications that are never used and workarounds forced by poorly designed application software, according to Knoa. The latest version of Knoa UEM for Enterprise Applications allows companies migrating to S/4HANA to analyze usage across a range of SAP applications, including SAP Fiori, SAP Business Client, SAP Enterprise Portal and SAP GUI for Windows. It can also be used in SAP Leonardo application development to determine how customers actually use the applications and to build a business case based on accurate measurements of user experience improvements in the new apps.
  • General Data Protection Regulation (GDPR) compliance is a huge issue now, and Attunity released Gold Client for Data Protection, a data governance application for SAP environments. Gold Client for Data Protection enables the identification and masking of personally identifiable information across production SAP ECC systems, according to Attunity. The software helps organizations to find PII across SAP systems, which then enables them to enforce GDPR’s “right to be forgotten” mandate.
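As a rough illustration of what such masking involves (a generic sketch, not Attunity’s implementation), the snippet below pseudonymizes some assumed field names by replacing their values with one-way hashes, so records stay joinable for testing without exposing the underlying data.

    import hashlib

    def mask_pii(record, pii_fields=("first_name", "last_name", "email")):
        """Replace assumed PII fields with one-way hashes so records stay joinable but unreadable."""
        masked = dict(record)
        for field in pii_fields:
            if masked.get(field) is not None:
                digest = hashlib.sha256(str(masked[field]).encode("utf-8")).hexdigest()
                masked[field] = digest[:16]  # truncated hash serves as a pseudonym
        return masked

    print(mask_pii({"customer_id": 42, "first_name": "Ada", "email": "ada@example.com"}))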


Apstra adds EVPN configuration to AOS

Apstra has added tools to its AOS intent-based operating system that remove the manual chore of configuring the Ethernet VPN protocol used to secure multi-tenancy environments in a private data center.

Apstra added the capability in AOS 2.2, which the company released this week. The latest OS version contains other enhancements, including integration with authentication systems, support for more network hardware and software and better anomaly detection.

The EVPN configuration capability, however, is likely to have the broadest appeal to network operators. Engineers typically use the protocol with the Border Gateway Protocol (BGP) and the Virtual Extensible LAN (VXLAN) encapsulation protocol. VXLAN creates an overlay network on an existing Layer 3 infrastructure.

Apstra customers use EVPN, BGP and VXLAN together to segment the physical network in a multi-tenancy architecture. IT operations use multi-tenancy to serve multiple customers on a single instance of a software application. Network segmentation isolates customers so data and malware can’t travel between them.

The benefit of EVPN configuration

Apstra added EVPN configuration in its AOS software to remove an arduous task from the to-do list of IT staff, said Carly Stoughton, a senior technical marketing engineer at Apstra.

“We’ll actually automate that complex EVPN configuration for you, which is huge,” she said. “Configuring this plus BGP, plus VXLAN, if you’re doing that on every single switch in your data center, that’s a very complex configuration, and it’s a human, error-prone process.”

Apstra already provided tools for configuring BGP and VXLAN, so the EVPN capability filled a hole in AOS.

Apstra’s AOS software is part of the network management layer. The product monitors the configurations of multi-vendor network hardware and software and alerts IT managers when the intent of the setting is violated. Other startups taking an intent-based approach to network management include Forward Networks, Intentionet and Veriflow.
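To make the intent-based idea concrete, the following toy sketch (illustrative only, not Apstra’s code) declares an intended tenant-to-VXLAN-segment mapping once, then checks each switch’s observed state against it and flags any drift as an anomaly. All device names and VNI values are made up.

    # Toy illustration of intent-based validation: compare declared intent with
    # observed per-switch state and report violations. Not Apstra AOS code.
    INTENT = {  # tenant -> expected VXLAN network identifier (illustrative values)
        "tenant-a": 10100,
        "tenant-b": 10200,
    }

    OBSERVED = {  # switch -> {tenant: configured VNI}, e.g. collected via telemetry
        "leaf1": {"tenant-a": 10100, "tenant-b": 10200},
        "leaf2": {"tenant-a": 10100, "tenant-b": 10999},  # drifted configuration
    }

    def find_violations(intent, observed):
        violations = []
        for switch, mapping in observed.items():
            for tenant, expected_vni in intent.items():
                actual = mapping.get(tenant)
                if actual != expected_vni:
                    violations.append((switch, tenant, expected_vni, actual))
        return violations

    for switch, tenant, expected, actual in find_violations(INTENT, OBSERVED):
        print(f"anomaly: {switch} {tenant} expected VNI {expected}, found {actual}")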

Other improvements in AOS 2.2 include support for switches from Mellanox Technologies and Dell EMC, a unit of Dell Technologies. New software support comprises OpenSwitch, an open source network operating system, version 3.6 of the Cumulus NOS and version 16.04 of the Ubuntu server OS.

In January, Apstra introduced customizable analytics in AOS. The feature lets network operators define the type of data they want the software to collect and then set the rules for extracting intelligence from the information.

Apstra said it would release the latest version of AOS this month at no additional cost to customers.

Call center chatbots draw skepticism from leaders

ORLANDO, Fla. — Artificial intelligence chatbot vendors may hype machine learning tools to enhance customer service, but call center leaders aren’t necessarily ready to trust them in the real world.

Part of the reason is call centers are judged by hard-to-achieve performance metrics based on volume, efficiency and customer satisfaction. Once a call center performs successfully against those expectations set by management, it’s hard to convince leaders to entrust call center chatbots with the hard-fought, quality customer relations programs they’ve built with humans.

“I don’t anticipate them having any kind of utility here,” said Jason Baker, senior vice president of operations for Entertainment Benefits Group (EBG), which manages discount tickets and other promotions for 61 million employees at 40,000 client companies. Baker oversees EBG customer service spanning multiple call centers.

“We strive for creating personalized and memorable experiences,” Baker said. “A chatbot — I understand the reason behind it, and, depending upon the type of environment, it might make sense — but in the travel and entertainment industry, you have to have the personalized touch with all interactions.”

Artificial intelligence chatbots were the most-talked-about technology at the ICMI Contact Center Expo, greeted with a mix of trepidation and interest.

Navy Federal Credit Union employs “a few bots” for fielding very basic customer questions, such as balance inquiries, said Georgia Adams, social care supervisor at the credit union, which is based in Vienna, Va.

Her active social media team publishes tens of thousands of posts and comments annually on Twitter, Instagram and Facebook without the help of artificial intelligence chatbots, but “they’re on the horizon.” She stressed that call center chatbots must be transparent — identifying themselves as a bot — and be empowered to transfer customers to human customer service agents quickly to be effective.

“It’s coming, whether you want it or not,” Adams said. “We’re strategizing [and] looking at it. I certainly think they have a lot of value, especially when it comes to things that are basically self-service … but if I’m talking to a bot, I want to know I’m talking to a bot.”
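A minimal sketch of the two requirements Adams describes, disclosing that the customer is talking to a bot and handing off to a human quickly when confidence is low, might look like the following. It is purely illustrative and assumes no particular chatbot framework.

    # Minimal sketch: the bot identifies itself up front and hands off to a human
    # when it can't answer confidently. Purely illustrative; no real framework assumed.
    CONFIDENCE_THRESHOLD = 0.6

    def answer(query):
        """Stand-in for an intent classifier; returns (reply, confidence)."""
        canned = {"balance": ("Your balance is available under Accounts > Summary.", 0.9)}
        for keyword, (reply, score) in canned.items():
            if keyword in query.lower():
                return reply, score
        return "", 0.0

    def handle_message(query):
        greeting = "Hi, I'm an automated assistant."  # transparency: disclose the bot
        reply, confidence = answer(query)
        if confidence < CONFIDENCE_THRESHOLD:
            return f"{greeting} I'm transferring you to a human agent now."
        return f"{greeting} {reply}"

    print(handle_message("What's my balance?"))
    print(handle_message("I want to dispute a charge."))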

Call center chatbots not gunning for humans’ jobs — yet

Another reason call center personnel might be wary of chatbots — true or not, fair or unfair — is the idea that robotic automation will eventually take humans’ jobs. This idea was dismissed by neutral industry experts such as ICMI founding partner Brad Cleveland, who said alternative customer service channels such as interactive voice response (IVR), email, social media and live chat each caused similar panic in the call center world when they were new. But none of them significantly affected call volumes.

“We hear predictions that artificial intelligence will replace all the jobs out there,” Cleveland said, not just in customer service. “If it does, we’re definitely going to be the last ones standing in customer service. But I don’t think it’s going to happen that way at all.”

Cleveland said he believes artificial intelligence chatbots will likely have utility in the near future, as technology advances and call centers find appropriate uses for them. Machine learning tools that aren’t chatbots, too, will make a difference, he said.

One example on display was an AI tool that can be trained to find — and adapt on the fly — pre-worded answers to common, or complex and time-consuming, customer queries, which a human agent can paste into a chat window after a quick edit for sense and perhaps personalization. The idea is that the suggestions get smarter and more on point over months of use.
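The retrieval half of such a tool can be approximated simply. The sketch below, an illustration rather than any vendor’s product, matches an incoming query against a small set of made-up canned replies using TF-IDF similarity, assuming scikit-learn is available.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Made-up library of pre-worded answers an agent could paste and edit.
    canned = {
        "How do I reset my password?": "You can reset your password from the sign-in page via 'Forgot password'.",
        "Where is my refund?": "Refunds post within 5-7 business days of approval; I can check the status for you.",
        "How do I change my flight?": "Flight changes can be made under 'Manage trips'; change fees may apply.",
    }

    questions = list(canned.keys())
    vectorizer = TfidfVectorizer().fit(questions)
    question_vectors = vectorizer.transform(questions)

    def suggest_reply(query):
        """Return the canned answer whose reference question is most similar to the query."""
        scores = cosine_similarity(vectorizer.transform([query]), question_vectors)[0]
        best = scores.argmax()
        return canned[questions[best]], float(scores[best])

    print(suggest_reply("I still haven't received my refund"))

In a production tool the matching model would be retrained on agent edits over time, which is the “smarter over months of use” behavior described above.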

But even live chat channels have their limits when they’re run by humans, let alone artificial intelligence chatbots. Frankie Littleford, vice president of customer support at JetBlue, based in Long Island City, N.Y., said during a breakout session here that her agents have to develop a sixth sense about when to stop typing and pick up the phone.

“You know in your gut when to take it out of email or whatever channel that isn’t person-to-person,” Littleford said. “You just continue to make someone angrier when you’re going back and forth — and let’s face it, a lot of people are really brave when they’re not face-to-face or on the phone … If your agents are skilled to speak with those customers, you can allow them to climb their mountain of anger and then de-escalate.”

[Illustration: ICMI commissioned whiteboard artist Heather Klar to capture select Contact Center Expo sessions; here, she illustrated high points from a pro-AI chatbot lecture.]

Vendors hold out hope

ICMI attendees weren’t fully buying into the promise of AI chatbots, but undeterred software vendors kept up the full-court press, attempting to sell the benefits of automation and allay fears that chatbots will eventually replace attendees’ jobs.

“We don’t use [AI] to replace human work,” said Mark Bloom, Salesforce Service Cloud senior director of strategy and operations, during his keynote, adding that organizations that attempt to replace people with AI tools haven’t been successful. “We want to augment the work our people are doing and make them more intelligent. That is how we are moving forward.”


Setting up call center chatbots will require extensive training in test environments — just like human agents do. Once they’re trained, they require maintenance and updating, but they will solve another vexing problem for call center managers — employee turnover, said Kaye Chapman, content and client training manager for chatbot vendor Comm100, based in Vancouver, B.C.

“You could train a new employee, and they could leave tomorrow,” Chapman said. “A bot is not going to give up and leave, it’s not going to get sick, and it’s so scalable.”

Bob Furniss, vice president at Bluewolf, a New York-based IBM subsidiary known for Salesforce automation integrations, said he believes artificial intelligence chatbots are coming, and AI in general will change both our personal and work lives. He said the potential is there for AI to help ease call center agents’ workload — handling up to 30% of the simplest customer queries — similar to the promises of IVR and the other channels when they came online in the industry.

Furniss warned that, just like any other call center system, anything AI-powered will require attention and maintenance to tune its actions and keep abreast of changing workflows and updated customer relations strategies.

“This is just like any other technology we have in the contact center,” Furniss said. “You don’t set it and leave it, just like workforce management [applications]. There’s an art and a skill to it.”

DJI and Microsoft partner to bring advanced drone technology to the enterprise

New developer tools for Windows and Azure IoT Edge Services enable real-time AI and machine learning for drones

REDMOND, Wash. — May 7, 2018 — DJI, the world’s leader in civilian drones and aerial imaging technology, and Microsoft Corp. have announced a strategic partnership to bring advanced AI and machine learning capabilities to DJI drones, helping businesses harness the power of commercial drone technology and edge cloud computing.

Through this partnership, DJI is releasing a software development kit (SDK) for Windows that extends the power of commercial drone technology to the largest enterprise developer community in the world. Using applications written for Windows 10 PCs, DJI drones can be customized and controlled for a wide variety of industrial uses, with full flight control and real-time data transfer capabilities, making drone technology accessible to Windows 10 customers numbering nearly 700 million globally.

DJI has also selected Microsoft Azure as its preferred cloud computing partner, taking advantage of Azure’s industry-leading AI and machine learning capabilities to help turn vast quantities of aerial imagery and video data into actionable insights for thousands of businesses across the globe.

“As computing becomes ubiquitous, the intelligent edge is emerging as the next technology frontier,” said Scott Guthrie, executive vice president, Cloud and Enterprise Group, Microsoft. “DJI is the leader in commercial drone technology, and Microsoft Azure is the preferred cloud for commercial businesses. Together, we are bringing unparalleled intelligent cloud and Azure IoT capabilities to devices on the edge, creating the potential to change the game for multiple industries spanning agriculture, public safety, construction and more.”

DJI’s new SDK for Windows empowers developers to build native Windows applications that can remotely control DJI drones including autonomous flight and real-time data streaming. The SDK will also allow the Windows developer community to integrate and control third-party payloads like multispectral sensors, robotic components like custom actuators, and more, exponentially increasing the ways drones can be used in the enterprise.

“DJI is excited to form this unique partnership with Microsoft to bring the power of DJI aerial platforms to the Microsoft developer ecosystem,” said Roger Luo, president at DJI. “Using our new SDK, Windows developers will soon be able to employ drones, AI and machine learning technologies to create intelligent flying robots that will save businesses time and money, and help make drone technology a mainstay in the workplace.”

In addition to the SDK for Windows, Microsoft and DJI are collaborating to develop commercial drone solutions using Azure IoT Edge and AI technologies for customers in key vertical segments such as agriculture, construction and public safety. Windows developers will be able to use DJI drones alongside Azure’s extensive cloud and IoT toolset to build AI solutions that are trained in the cloud and deployed down to drones in the field in real time, allowing businesses to quickly take advantage of learnings at one individual site and rapidly apply them across the organization.
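The announcement includes no code, and DJI’s SDK targets native Windows applications, but the underlying edge-to-cloud pattern of a field device streaming telemetry into Azure IoT for analysis can be sketched generically. The example below is illustrative only: it assumes the azure-iot-device Python package, a placeholder IoT Hub connection string, and made-up sensor values standing in for whatever a drone payload might report.

    import json
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string for a device registered in Azure IoT Hub.
    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=drone-01;SharedAccessKey=<key>"

    def send_telemetry(reading):
        """Send one telemetry message (e.g., a crop-stress reading) to IoT Hub."""
        client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
        try:
            client.send_message(Message(json.dumps(reading)))
        finally:
            client.disconnect()

    send_telemetry({"field": "north-40", "ndvi": 0.42, "moisture": 0.18})  # illustrative values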

DJI and Microsoft are already working together to advance technology for precision farming with Microsoft’s FarmBeats solution, which aggregates and analyzes data from aerial and ground sensors using AI models running on Azure IoT Edge. With DJI drones, the Microsoft FarmBeats solution can take advantage of advanced sensors to detect heat, light, moisture and more to provide unique visual insights into crops, animals and soil on the farm. Microsoft FarmBeats integrates DJI’s PC Ground Station Pro software and mapping algorithm to create real-time heatmaps on Azure IoT Edge, which enable farmers to quickly identify crop stress and disease, pest infestation, or other issues that may reduce yield.

With this partnership, DJI will have access to the Azure IP Advantage program, which provides industry protection for intellectual property risks in the cloud. For Microsoft, the partnership is an example of the important role IP plays in ensuring a healthy and vibrant technology ecosystem and builds upon existing partnerships in emerging sectors such as connected cars and personal wearables.

Availability

DJI’s SDK for Windows is available as a beta preview to attendees of the Microsoft Build conference today and will be broadly available in fall 2018. For more information on the Windows SDK and DJI’s full suite of developer solutions, visit: developer.dji.com.

About DJI

DJI, the world’s leader in civilian drones and aerial imaging technology, was founded and is run by people with a passion for remote-controlled helicopters and experts in flight-control technology and camera stabilization. The company is dedicated to making aerial photography and filmmaking equipment and platforms more accessible, reliable and easier to use for creators and innovators around the world. DJI’s global operations currently span across the Americas, Europe and Asia, and its revolutionary products and solutions have been chosen by customers in over 100 countries for applications in filmmaking, construction, inspection, emergency response, agriculture, conservation and other industries.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For additional information, please contact:

Michael Oldenburg, DJI Senior Communication Manager, North America – michael.oldenburg@dji.com

Chelsea Pohl, Microsoft Commercial Communications Manager – chelp@microsoft.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

For more information, visit our:

Website: www.dji.com

Online Store: store.dji.com/

Facebook: www.facebook.com/DJI

Instagram: www.instagram.com/DJIGlobal

Twitter: www.twitter.com/DJIGlobal
LinkedIn: www.linkedin.com/company/dji

Subscribe to our YouTube Channel: www.youtube.com/DJI

Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its quest to develop container storage projects this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.


“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project in GitHub at that time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, which is in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”
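From an application’s perspective, storage pooled this way is consumed through ordinary Kubernetes volume claims. The sketch below uses the official Kubernetes Python client to request a volume for something like a Prometheus workload; the StorageClass name, namespace and size are assumptions for illustration, not details from HBO’s environment.

    # Request a block volume from an assumed Rook-backed StorageClass using the
    # official Kubernetes Python client; names and sizes are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # uses the local kubeconfig context
    core = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="prometheus-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="rook-ceph-block",  # assumed Rook StorageClass name
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="monitoring", body=pvc)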

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, which was suitable for the Kubernetes monitoring system in Prometheus that scraped telemetry data from the cluster every 10 to 30 seconds.

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight


The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it now owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing, more akin to a bet on an early stage idea. It has about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat’s Gluster-based Container Native Storage Platform, and Aniszczyk said those companies are welcome to present them to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.