
S/4HANA Cloud integrates Qualtrics for continuous improvement

SAP is focused on better understanding what’s on its customers’ minds with the latest release of S/4HANA Cloud.

SAP S/4HANA Cloud 1911, which is now available, has SAP Qualtrics experience management (XM) embedded into the user interface, creating a feedback loop for the product management team about the application. This is one of the first integrations of Qualtrics XM into SAP products since SAP acquired the company a year ago for $8 billion.

“Users can give direct feedback on the application,” said Oliver Betz, global head of product management for S/4HANA Cloud at SAP. “It’s context-sensitive, so if you’re on a homescreen, it asks you, ‘How do you like the homescreen on a scale of one to five?’ And then the user can provide more detailed feedback from there.”

The customer data is consolidated and anonymized and sent to the S/4HANA Cloud product management team, Betz said.

“We’ll regularly screen the feedback to find hot spots,” he said. “In particular we’re interested in the outliers to the good and the bad, areas where obviously there’s something we specifically need to take care of, or also some areas where users are happy about the new features.”
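To make that feedback loop concrete, here is a minimal sketch of the kind of aggregation such a pipeline might perform, assuming anonymized records that carry only a screen identifier and a one-to-five score (the record layout and the outlier rule are illustrative, not SAP’s actual implementation):

    # Aggregate anonymized in-app feedback and flag outlier screens.
    # Record layout and the outlier rule are hypothetical illustrations.
    from statistics import mean, stdev

    feedback = [
        {"screen": "home", "score": 4},
        {"screen": "home", "score": 5},
        {"screen": "finance_kpi", "score": 2},
        {"screen": "finance_kpi", "score": 1},
        {"screen": "procurement", "score": 3},
    ]

    # Average score per screen.
    by_screen = {}
    for record in feedback:
        by_screen.setdefault(record["screen"], []).append(record["score"])
    averages = {screen: mean(scores) for screen, scores in by_screen.items()}

    # Flag screens whose average deviates strongly from the overall mean --
    # the "outliers to the good and the bad" that product managers screen for.
    overall = mean(averages.values())
    spread = stdev(averages.values())
    outliers = {s: a for s, a in averages.items() if abs(a - overall) >= spread}
    print(outliers)  # {'home': 4.5, 'finance_kpi': 1.5}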


Because S/4HANA Cloud is a cloud product that sends out new releases every quarter, the customer feedback loop that Qualtrics provides will inform developers on how to continually improve the product, Betz said.

“This is the first phase in the next iteration [of S/4HANA Cloud], which will add more granular features,” he said. “From a product management perspective, you can potentially have a new application and have some questions around the application to better understand the usage, what customers like and what they don’t like, and then to take it in a feedback loop to iterate over the next quarterly shipments so we can always provide new enhancements.”

Qualtrics integration may take time to provide value

It has taken a while, but it’s a good thing that SAP has now begun a real Qualtrics integration story, said Jon Reed, analyst and co-founder of Diginomica.com, an analysis and news site that focuses on enterprise applications. Still, SAP faces a few obstacles before the integration into S/4HANA Cloud can be a real differentiator.


“This isn’t a plug-and-play thing where customers are immediately able to use this the way you would a new app on your phone, like a new GPS app. This is useful experiential data which you must then analyze, manage and apply,” Reed said. “Eventually, you could build useful apps and dashboards with it, but you still have to apply the insights to get the value. However, if SAP has made those strides already on integrating Qualtrics with S/4HANA Cloud 1911, that’s a positive for them and we’ll see if it’s an advantage they can use to win sales.”

The Qualtrics products are impressive, but it’s still too early in the game to judge how the SAP S/4HANA integration will work out, said Vinnie Mirchandani, analyst and founder of Deal Architect, a blog focused on enterprise applications.

“SAP will see more traction with Qualtrics in the employee and customer experience feedback area,” Mirchandani said. “Experiential tools have more impact where there are more human touchpoints — employees, customer service, customer feedback on product features — so I think the blend with SuccessFactors and C/4HANA is more obvious. This doesn’t mean that S/4 won’t see benefits, but the traction may be higher in other parts of the SAP portfolio.”


SAP SuccessFactors is also beginning to integrate Qualtrics into its employee experience management functions.

It’s a good thing that SAP is attempting to become a more customer-centric company, but it will need to follow through on the promise and make it a part of the company culture, said Faith Adams, senior analyst who focuses on customer experience at Forrester Research.

Many companies are making efforts to appear to be customer-centric, but aren’t following through with the best practices that are required to become truly customer-centric, like taking actions on the feedback they get, Adams said.

“It’s sometimes more of a ‘check the box’ activity rather than something that is embedded into the DNA or a way of life,” Adams said. “I hope that SAP does follow through on the best practices, but that’s to be determined.”

Bringing analytics to business users

SAP S/4HANA Cloud 1911 also now has SAP Analytics Cloud directly embedded. This will enable business users to take advantage of analytics capabilities without going to separate applications, according to SAP’s Betz.

It comes fully integrated out of the box and doesn’t require configuration, Betz said. Users can take advantage of included dashboards or create their own.

“The majority usage at the moment is in the finance application where you can directly access your [key performance indicators] there and have it all visualized, but also create and run your own dashboards,” he said. “This is about making data more available to business users instead of waiting for a report or something to be sent; everybody can have this information on hand already without having some business analyst putting [it] together.”


The embedded analytics capability could be an important differentiator for SAP in making data analytics more democratic across organizations, said Dana Gardner, president of IT consultancy Interarbor Solutions LLC. He believes companies need to break data out of “ivory towers” now as machine learning and AI grow in popularity and sophistication.

“The more people that use more analytics in your organization, the better off the company is,” Gardner said. “It’s really important that SAP gets aggressive on this, because it’s big and we’re going to see much more with machine learning and AI, so you’re going to need to have interfaces with the means to bring the more advanced types of analytics to more people as well.”


The 3 types of DNS servers and how they work

Not all DNS servers are created equal, and understanding how the three different types of DNS servers work together to resolve domain names can be helpful for any information security or IT professional.

DNS is a core internet technology that translates human-friendly domain names, such as www.example.com, into machine-usable IP addresses, such as 192.0.2.1. The DNS operates as a distributed database, where different types of DNS servers are responsible for different parts of the DNS name space.

The three DNS server types are the following:

  1. DNS stub resolver server
  2. DNS recursive resolver server
  3. DNS authoritative server

Figure 1 below illustrates the three different types of DNS server.

A stub resolver is a software component normally found in endpoint hosts that generates DNS queries when application programs running on desktop computers or mobile devices need to resolve DNS domain names. DNS queries issued by stub resolvers are typically sent to a DNS recursive resolver; the resolver will perform as many queries as necessary to obtain the response to the original query and then send the response back to the stub resolver.
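From an application’s point of view, that whole chain is a single call into the stub resolver. A minimal sketch in Python, using only the standard library:

    # Ask the host's stub resolver to translate a domain name into addresses.
    import socket

    # getaddrinfo() hands the query to the local stub resolver, which forwards
    # it to a recursive resolver; the caller just receives the final answers.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80):
        print(family.name, sockaddr[0])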

Figure 1. The three different types of DNS server interoperate to deliver correct and current mappings of IP addresses with domain names.

The recursive resolver may reside in a home router, be hosted by an internet service provider or be provided by a third party, such as Google’s Public DNS recursive resolver at 8.8.8.8 or the Cloudflare DNS service at 1.1.1.1.
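To direct queries at one of these recursive resolvers explicitly rather than relying on the system default, a client can address it directly. A brief sketch using the third-party dnspython package (assuming it is installed; Google’s 8.8.8.8 is used here purely as an example):

    # Query a specific recursive resolver instead of the system default.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]  # Google Public DNS

    for record in resolver.resolve("www.example.com", "A"):
        print(record.address)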

Since the DNS operates as a distributed database, different servers are responsible — authoritative in DNS-speak — for different parts of the DNS name space.

Figure 2 illustrates a hypothetical DNS resolution scenario in which an application uses all three types of DNS servers to resolve the domain name www.example.com into an IPv4 address — in other words, a DNS address resource record.

Figure 2. DNS servers cooperate to accurately resolve an IP address from a domain name.

In step 1, the stub resolver at the host sends a DNS query to the recursive resolver. In step 2, the recursive resolver resends the query to one of the DNS authoritative name servers for the root zone. This authoritative name server does not have the response to the query but is able to provide a reference to the authoritative name server for the .com zone. As a result, the recursive resolver resends the query to the authoritative name server for the .com zone.

This process continues until the query is finally resent to an authoritative name server for the www.example.com zone that can provide the answer to the original query — i.e., what are the IP addresses for www.example.com? Finally, in step 8, this response is sent back to the stub resolver.
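The walk the recursive resolver performs can be reproduced by hand. A hedged sketch with dnspython, starting at one root server and following referrals until an answer arrives; note that error handling is omitted and real referrals are messier (for example, a name server listed without glue addresses would require a separate lookup):

    # Follow the delegation chain by hand: root -> .com -> example.com.
    # Requires dnspython; error handling and IPv6 are omitted for brevity.
    import dns.message
    import dns.query
    import dns.rdatatype

    server = "198.41.0.4"  # a.root-servers.net
    while True:
        query = dns.message.make_query("www.example.com.", dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=5)
        if response.answer:  # the authoritative answer has been reached
            for rrset in response.answer:
                print(rrset)
            break
        # Otherwise this is a referral: take an IPv4 glue address for one of
        # the name servers in the authority section and ask again.
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.A:
                server = rrset[0].address
                break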

One thing worth noting is that all these DNS messages are transmitted in the clear, and there is the potential for malicious actors to monitor users’ internet activities. Anyone administering DNS servers should be aware of DNS privacy issues and the ways in which those threats can be mitigated.
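One mitigation is to encrypt the hop between the stub resolver and the recursive resolver with DNS over TLS (DoT) or DNS over HTTPS (DoH). A short sketch of a DoT query with dnspython 2.x (Cloudflare’s 1.1.1.1 answers DoT on port 853):

    # Send the same query over TLS so it cannot be read in transit.
    # Requires dnspython 2.x.
    import dns.message
    import dns.query
    import dns.rdatatype

    query = dns.message.make_query("www.example.com.", dns.rdatatype.A)
    response = dns.query.tls(query, "1.1.1.1", port=853)
    for rrset in response.answer:
        print(rrset)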


AI bias and data stewardship are the next ethical concerns for infosec

When it comes to artificial intelligence and machine learning, there is a growing understanding that rather than constantly striving for more data, data scientists should be striving for better data when creating AI models.

Laura Norén, director of research at Obsidian Security, spoke about data science ethics at Black Hat USA 2018, and discussed the potential pitfalls of not having quality data, including AI bias learned from the people training the model.

Norén also looked ahead to the data science ethics questions that have yet to be asked about what should happen to a person’s data after they die.

Editor’s note: This is part two of our talk with Norén and it has been edited for length and clarity.

What do you think about how companies go about AI and machine learning right now?
 
Laura Norén: I think some of them are getting smarter. At a very large scale, it’s not noise, but you get a lot of data that you don’t really need to store forever. And frankly it costs money to store data. It costs money to have lots and lots and lots of variable features in your model. If you get a more robust model and you’re aware of where your signal is coming from, you may also decide not to store particular kinds of data because it’s actually inefficient at some point.
 
For instance, astronomers have this problem. They’ve been building telescopes that are generating so much data, it cripples the system. They’ve had seven years of planning just to figure out which data to keep, because they can’t keep it all.

There’s a myth out there that in order to develop really great machine learning systems you need to have everything, especially at the outset, when you don’t really know what the predictive features are going to be. It’s nontrivial to do the math and to use the existing data and tests and simulations to figure out what you really need to store and what you don’t need to capture in the first place. It’s part of the hoarding mythology that somehow we need all of the data all of the time for all time for every person.
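One concrete way to “do the math” Norén describes is to score each stored field by how much predictive signal it actually carries and drop the rest. A minimal sketch with scikit-learn on synthetic data (the threshold is an arbitrary illustration):

    # Score candidate features by mutual information with the target and
    # keep only those that carry real signal. Data here is synthetic.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n = 1000
    signal = rng.integers(0, 2, n)       # the one genuinely useful field
    noise = rng.normal(size=(n, 3))      # fields not worth storing
    X = np.column_stack([signal + rng.normal(scale=0.1, size=n), noise])
    y = signal

    scores = mutual_info_classif(X, y, random_state=0)
    keep = [i for i, s in enumerate(scores) if s > 0.01]
    print("feature scores:", scores.round(3))
    print("columns worth keeping:", keep)  # only column 0 survives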

How does data science ethics relate to issues of AI bias caused by the data that’s fed in?
 
Norén: That is such a great, great question. I absolutely know that it’s going to be important. We’re aware of it, we’re watching for it, and we’re monitoring for it so we can test for bias, in this case against Russians. Because it’s cybersecurity, that’s a bias we might have. You can test for that kind of thing. And so we’re building tests for those kinds of predictable biases we might have.

I wish I had a great story of how we discovered that we’re biased against Russians or North Koreans or something like that. But I don’t have that yet because it would just be wrong to kind of run into some of the great stories that I’m sure we’re going to run into soon enough.
 
How do you identify what could be an AI bias that you need to worry about when first building the system?

 
Norén: When you have low data or your models are kind of all over the place because it’s the very beginning, you might be able to use social science to help you look for early biases. All of the data that we’re feeding into these systems are generated by humans and humans are inherently biased, that’s how we’ve evolved. That turns out to be really strong, evolutionarily speaking, and then not so great in advanced evolution.
 
You can test for things that you think might have a known bias, which then it helps to know your history. Like I said, in cybersecurity you might worry about being biased specifically against particular regions. So you may have a higher false-positive rate for Russians or for Russian language content or Chinese language content, or something like that. You could specifically test for those because you went in knowing that you might have a bias. It’s a little bit more technical and difficult to unearth biases that you were not expecting. We’re using technical solutions and data social science to try to help surface those.

I think social science has been kind of the sleeper hit in data science. It turns out it really helps if you know your domain really well. In our case, that’s social science because we’re dealing with humans. In other cases, it might help to be a really good biologist if you’re starting to do genomics at a predictive level. In general, the strongest data scientists we see are people who have both very high technical skills in the data science vertical but also deep knowledge of their domain.
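A check like the one Norén describes can start as simply as computing the false-positive rate separately for each region or language group and flagging large gaps. A hedged sketch (the group labels and predictions are invented for illustration):

    # Compare false-positive rates across groups to surface a suspected bias.
    from collections import defaultdict

    records = [
        # (group, true_label, predicted_label); 1 = flagged as a threat
        ("ru", 0, 1), ("ru", 0, 1), ("ru", 0, 0), ("ru", 1, 1),
        ("en", 0, 0), ("en", 0, 0), ("en", 0, 1), ("en", 1, 1),
    ]

    false_positives = defaultdict(int)  # false positives per group
    negatives = defaultdict(int)        # actual negatives per group
    for group, truth, prediction in records:
        if truth == 0:
            negatives[group] += 1
            false_positives[group] += prediction

    rates = {g: false_positives[g] / negatives[g] for g in negatives}
    print(rates)  # a large gap between groups is worth investigating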
 
It sounds like a lot of the potential mitigations for AI bias and data science issues boil down to being more proactive rather than reactive. In that spirit, what is an issue that you think will become a bigger topic of discussion in the next five years?
 
Norén: I do actually think it’s going to be very interesting just how people feel about what happens to their data as more and more companies have more and more data about people forever and their data are going to outlive them. There have been some people who are already working on that kind of thing.
 
Say you have a best friend and your best friend dies, but you have all these emails and chats, texts, back-and-forth with your best friend. Someone is developing a chatbot that mimics your best friend by being trained on all those actual conversations you had and will then live on past your best friend. So you can continue to talk with your best friend even though your best friend is dead. That’s an interesting, kind of provocative, almost artistic take on that point.
 
But I think it’s going to be a much bigger topic of conversation to try to understand what it means to have yourself, profiles and data live out beyond the end of your own life and be able to extend to places that you’re not actually in. It will drive decisions about you that you will have no agency over. The dead best friend has no agency over that chatbot.

Indefinite data storage will become much, much more topical in conversation and we’ll also start to see then why the right to be forgotten is an insufficient response to that kind of thing because it assumes that you know where to go as your agency, or that you even have agency at all. You’re dead; you obviously don’t have any agency. Maybe you should, maybe you shouldn’t. That’s an interesting ethical question.

Users are already finding they don’t always have agency over their data even when alive, aren’t they?
 
Norén: Even if you’re alive, if you don’t really know who holds your data, you may have no agency to get rid of it. I can’t call up Equifax and tell them to delete my data. I’m an American, but I don’t have that. I know they’re stewards of it but there’s nothing I could do about that.

We’ll probably favor conversation a lot more in terms of being good guardians of data rather than talking about it in terms of something that we own or don’t own; it will be about stewardship and guardianship. That’s a language that I’m borrowing from medical ethics because they’re using that type of language to deal with DNA.
 
Can someone else own your DNA? They’ve decided no. DNA is such an intrinsic part of a person’s identity and a person’s physicality that it can’t be owned in whole by someone else. But that someone else, like a hospital or a research lab, could take guardianship of it.

The language is out there, but we haven’t really seen it move all the way through the field of data science. It’s kind of stuck over in genomics and the Henrietta Lacks story. She was a woman who had cervical cancer, and she died. But her cells, her cancer cells, were really robust. They worked really well in research settings and they lived on well past Henrietta’s life. Her family was unaware of this. There’s this beautiful book written about what it means to find out that part of your family — this deceased family member that you cared about a lot — is still alive and is still fueling all this research when you didn’t even know anything about it. That’s kind of where that conversation got started, but I see a lot of parallels there between data science and what people think of when they think of DNA.
 
One of the things that’s so different about data science is that we now can actually have a much more complete record of an individual than we have ever been able to have. It’s not just a different iteration on the same kind of thing. You used to be able to have some sort of dossier on you that has your birthdate and your Social Security number, your name and whether you were married. That’s such a small amount of information compared to every single interaction that you’ve had with a piece of software, with another person, with a communication, every medical record, everything that we might know about your DNA. And our knowledge will continue to get deeper and deeper and deeper as science progresses. And we don’t really know what that’s going to do to the concept of individuality and finiteness.
 
I think about these things very deeply. We’re going to see that in terms of, ‘Wow, what does it mean that your data is so complete and it exists in places and times that you could never exist and will never exist?’ That’s why I think that decay by design thing is so important.
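The “decay by design” idea Norén closes with can be prototyped as nothing more than a retention policy: every record carries a creation time, and a periodic sweep purges whatever has outlived the window. A purely illustrative sketch:

    # "Decay by design" as a simple retention sweep. Purely illustrative.
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=365)

    records = [
        {"id": 1, "created": datetime(2018, 1, 1, tzinfo=timezone.utc)},
        {"id": 2, "created": datetime.now(timezone.utc)},
    ]

    def sweep(records):
        """Keep only records younger than the retention window."""
        cutoff = datetime.now(timezone.utc) - RETENTION
        return [r for r in records if r["created"] >= cutoff]

    print([r["id"] for r in sweep(records)])  # [2] -- record 1 has decayed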