We’ve all had those moments: You’re about to video call your parents and your laundry is all over the place, or you’re about to have a meeting with a potential investor and your business plan is on a whiteboard behind you, or you’re being interviewed on live television and your adorable child comes marching into the room. Plenty of life’s moments can get in the way of your being the focus of every video call—and that’s why we’re introducing background blur in Skype video calls.
Background blur in Skype is similar to background blur in Microsoft Teams. It takes the stress out of turning on your video and puts the focus where it belongs—on you! With a simple toggle, right-click, or even through your Skype settings, your background will be instantly and subtly blurred, leaving you as the only focal point.*
Background blur in Skype and Teams uses artificial intelligence (AI)—trained in human form detection—to keep you in focus during your call. This technology is also trained to detect your hair, hands, and arms, making a call with background blur just as relaxed and easy as a regular video call.
Background blur is available on most desktops and laptops with the latest version of Skype. For more questions about background blur in Skype, read our support article. We also love to hear from you on the Skype Community, where millions of Skype users have registered to share their expertise, feedback, and Skype stories.
*We do our best to make sure that your background is always blurred, but we cannot guarantee it in every situation.
Dell EMC storage technologists predict the top trends in 2019 will be the growth of storage class memory, NVMe-oF, multi-cloud deployments, autonomous storage and container-based technologies.
Sudhir Srinivasan, senior vice president and CTO of the Dell EMC storage business, and Danny Cobb, a vice president and corporate fellow at Dell EMC, discussed those upcoming trends during a recent podcast interview with TechTarget.
We looked at the impact that ultralow-latency storage technologies, such as the 3D XPoint developed by Intel and Micron, Samsung’s Z-NAND, and Toshiba’s XL-Flash, as well as end-to-end NVMe, will have on Dell EMC storage. We also talked about how Dell EMC storage will support deploying hybrid and multi-cloud infrastructure and containerized applications.
What is the premier prediction you’d like to make for 2019 in the enterprise data storage industry?
Danny Cobb: For me, the premier item is the long-awaited arrival of enterprise-grade storage class memory into our customers’ data centers. We’ve seen over the years the slow, steady progress of storage class memory. We’ve watched it begin to mature, initially shipping in the client and consumer space. Then we’ve seen it begin to move its way into the enterprise space in single-node and low-availability situations and things like that. And so for 2019, it finally takes that third step into the enterprise as a completely reliable, multiported, enterprise-worthy storage device that gives us as designers a whole new performance level to work with. Relative to the 100-microsecond flash world, storage class memory now brings us down into the 10-microsecond world. And those extra 90 microseconds matter a lot for customers who have real-time storage demands and are trying to run new, advanced, high-frequency I/O workloads.
I was going to ask you for what types of customers you thought this would matter the most. Is it going to be a significant difference for them?
Cobb: There’s going to be a diversity of workloads that this applies to. And so for a traditional OLTP [online transaction processing] workload, any time you can reduce latency, any time you can reduce overall server and CPU utilization, any time you can analyze more transactions per second, close more business per day or whatever that happens to be, you’re providing value to the business. In the new emerging workload space, obviously people love to point at the needs of the high-frequency traders who are analyzing click or tick data in real time and then wanting to react and transact based on that real-time information. And so I think they will see tremendous benefit because you’re essentially giving them a 10 times improvement in the latency for accessing storage media. And any time you can provide a 10x improvement to someone who has a real-time information need, you’re giving them some real business benefit.
What is the type of storage class memory that you think you’re going to be using this year?
Cobb: I think we’re going to see a variety of things available to us. The one that’s on the top of everyone’s mind right now is, of course, 3D XPoint from Intel and Micron. We’ve been discussing that quite a bit since its launch in 2015. And that’s really created the definition or the requirements for what this new tier of sort of 10- or 20-microsecond class storage is going to be able to deliver. Not standing still are companies like Samsung and Toshiba who have taken a different approach. They’re taking their flash technology and optimizing it further and further to reduce its latency and bring its performance into the same ballpark as 3D XPoint. And so, as storage designers, we’re going to have a nice variety of choices for that fastest tier of storage media available to us. But the one that’s on the tip of everyone’s tongue right now is certainly 3D XPoint.
Is this going to mean that storage systems cost more when you’re using storage class memory?
Cobb: There’s certainly a cost difference between 3D XPoint and the traditional NAND flash that we’re using. And so one aspect of it is if you’re simply moving a workload from flash to 3D XPoint, that storage footprint does cost more. But it also delivers up to 10x the benefit. And so for many customers, that’s worth it. But in an overall, end-to-end system design, when you do intelligent data management and tiering as we do in our storage platforms, we’re able to take advantage of 3D XPoint or low-latency flash as the fastest tier of storage, but also in many ways to displace the cost of dynamic RAM (DRAM) in our storage arrays. So if you just looked at 3D XPoint or low-latency NAND versus traditional NAND, you’d see a cost delta there. But if you look at the overall system cost, we believe we can displace some of the cost of DRAM in our systems and essentially amortize the cost of the new 3D XPoint media at a system level to deliver better price/performance than the predecessor could.
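A back-of-the-envelope sketch of the amortization argument Cobb is making (every $/GB figure below is invented for illustration; none is Dell EMC or vendor pricing): the storage class memory premium on a modest fast tier can be offset by shrinking the DRAM cache it partially replaces.

```python
# Hypothetical $/GB figures, for illustration only (not vendor pricing).
DRAM_COST, SCM_COST, NAND_COST = 8.0, 1.5, 0.3


def system_cost(dram_gb, fast_tier_gb, fast_cost_per_gb, capacity_gb):
    """Media cost of a simple two-tier array plus its DRAM cache."""
    return (dram_gb * DRAM_COST
            + fast_tier_gb * fast_cost_per_gb
            + capacity_gb * NAND_COST)


# All-flash design: a large DRAM cache in front of a NAND fast tier.
baseline = system_cost(dram_gb=512, fast_tier_gb=2048,
                       fast_cost_per_gb=NAND_COST, capacity_gb=65536)
# SCM design: the 10-microsecond tier lets us shrink the DRAM cache.
scm_design = system_cost(dram_gb=128, fast_tier_gb=2048,
                         fast_cost_per_gb=SCM_COST, capacity_gb=65536)
```

Media for media, SCM costs more than NAND; in this toy model the system-level win appears only once the DRAM displacement is counted, which is exactly Cobb's point.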
That sounds interesting. Sudhir, what is your top prediction for the coming year?
Sudhir Srinivasan: The biggest one on my mind is cloud. I think cloud is certainly emerging as a big aspect of every customer’s infrastructure plans. And I think in 2019, what we’ll see is the mainstreaming of the hybrid, multi-cloud world — a world where it’s neither all cloud nor all on prem but a mixture of all of those, including multiple clouds. And I think we’re seeing that across the board. Every customer I’ve talked to is thinking in that direction. And the basic idea is that you want to use the right kind of infrastructure for each kind of application or data set. So, enabling that vision is going to be key for us.
Are customers putting themselves in a situation where they’re going to have a lot of silos if they use multiple clouds?
Srinivasan: I think it’s sort of like the cloud version of the multivendor strategy. Every customer fears being locked into a particular vendor, or a particular cloud in this case. And so they would like the ability, in theory, to be able to move across clouds. And whether they actually leverage it or not remains to be seen. But I think what I’m hearing a lot of customers do is they will spread their bets. They will not put everything in one cloud or the other. And certain clouds are better at certain aspects or certain services than others perhaps. And so that’s the basis they will be using, I think, to place their different workloads in the different clouds.
Are there additional technologies they’re going to need to invest in to deal with this situation where they’re using multiple clouds?
Srinivasan: Absolutely. That’s a great point. Thanks for bringing that up. Part two of the answer is that we have an opportunity to enable our customers to move data and applications and workloads across all of these clouds. And because we already store the data, we understand the data, and we have technologies that allow us to move data from one location to another, that’s going to be key.
Are there any other major trends you envision starting in 2019?
Cobb: Maybe one thing I’d add onto that, and it goes hand in glove with the storage class memory trend, is the move to NVMe over Fabrics [NVMe-oF] and really what some in the industry now call end-to-end NVMe. You and I have talked in the past about the emergence of NVMe and how, as a local implementation, it helps optimize the hardware-software boundary between the CPUs and the storage. But, truth be told, NVMe was created because storage class memory was on the horizon. We needed a more optimized CPU and PCIe storage implementation to take advantage of the very low latency and very high performance of storage class memory. And so NVMe came into existence with that goal in mind, first deployed for flash, but now we’ll really see it hit its stride in a world of storage class memory and 10-microsecond media devices.
Then we bring in NVMe over Fabrics, and that now extends those optimizations to the network itself, whether it’s the incumbent Fibre Channel SAN that’s so popular among the high-reliability, high-availability enterprise storage customers or the next-generation data center network where you’re using RDMA-capable Ethernet networks as a way of connecting fabrics of systems together. In any event, the fact that NVMe over Fabrics can layer seamlessly on top of those types of networks and deliver the latency and throughput advantages that are unlocked with storage class memory, that really is just another step in this end-to-end system optimization that we’re seeing on the technology front right now. And for 2019, NVMe over Fabrics goes from proof of concept to production. And that’s a big step. If you think about all the parts of the data center that have to be touched to deploy a new enterprise class of storage area network technology, all of those things are now lined up and ready to go and ready for production in 2019, and we’ll be hearing a lot of success stories about that.
I know we talked about some of the use cases where this technology will be particularly important. But how pervasive do you envision this technology becoming in enterprises? Do end users really need this level of speed that we’re talking about with their general workloads? Or is this really going to be more of a niche technology?
Cobb: As we’ve seen with the adoption of any of these new technologies, we look for the areas where the business benefit is most valuable. Particularly with technologies that are driven around performance, they often show up in areas where the additional performance is an absolute requirement, because for those use cases you can write your business value proposition on the back of an envelope — you know, time equals money. Technologies like that follow the adoption trajectory of flash itself, which in 2008 went to the top of the pyramid at the most information-intensive enterprises and now, 10 or 11 years later, is pervasive across the entire industry. I don’t know that these other technologies will take a full 10 years to mature and become widely deployed, but I do know that where the value proposition is additional performance, considering larger data sets in less time to make business decisions, make predictions and have an impact, these technologies will be deployed very, very rapidly. For areas where the technology pull and the business pull aren’t quite as strong, it will take a little longer until this just becomes the de facto mainstream way these technologies are deployed. And there will always be some legacy around, but to a large extent, the new systems being purchased and rolled out will be based on these new technologies.
Srinivasan: I think there’s a confluence of two trends here that I think will propel this forward even faster — which is the trend of using software-defined storage, which as we know has been growing very fast. And the thing with software-defined storage is it’s all Ethernet based. And NVMe over Fabrics, especially over Ethernet obviously, allows that level of performance and reliability and so on and so forth to come into the software-defined storage world as well. So, I think the combination of those two is going to make this go even faster, although I think that particular aspect of it is probably a bit further out — so probably 2020 and beyond. But I think there’s definitely a big trend.
Is there anything else that’s going to be driving some of the trends we’ve talked about today? I hear a lot about edge devices these days. Is computing going to be done a lot more at the edge moving forward then?
Cobb: There’s certainly a trend that I call the real-time edge, and that is the first place where data that’s required to perform some type of enterprise decision is first touched by IT. And so if you think about high-speed telemetry or high-speed ingest or scenarios like a connected car doing collision avoidance or a financial trader or a credit card provider doing real-time risk analysis on purchases and things like that, there’s tremendous benefit to being able to make that decision or make that inference in an AI sense at the closest point where the data comes into existence. So that means the data is there at the edge. The compute is there at the edge. And many times the decision or prediction happens right there at the edge without taking the time or even having the time to transit a network back to a core enterprise data center or certainly off into a cloud. So, the ability to get the data, the compute and make the decision right at the edge in real time has tremendous value in these new emerging edge and internet of things deployments.
Srinivasan: We have a great example of this already happening today. A lot of these edge use cases are still really emerging and early. But the one that’s actually very advanced is in video surveillance, where all of the facial recognition or license plate recognition and that kind of compute already happen at the edge. And a lot of the data is processed right there, and only the relevant pieces of data are propagated upstream into the core or the cloud.
Are there any other short- or long-term trends that you envision happening that we haven’t touched on yet?
Srinivasan: The one that’s really dear to my heart is what I call autonomous storage or smart storage or intelligent storage. And the idea here is that our customers want our storage devices to be more self-driving. The joke that I make all the time is if we are building self-driving cars, surely by now we can build a self-driving storage system. And we have definitely been on the mission to do that. And it consists of two pieces. Just like in the automobile world or the self-driving autonomous vehicle world, the vehicle itself, in our case the storage system, needs to have a fairly sophisticated machine learning/inference engine-type of capability so it can make those real-time decisions. So, that’s really in a sense your edge already. It needs to be able to make those decisions in real time.
But a second component is sort of a global brain, in the cloud perhaps. In the autonomous vehicle world, that would be your weather system or your traffic navigation system that would guide you on what the best routes are at this present moment in time or based on your driving history. So, it does the deep learning across a fleet of devices that are out there in the field under different operating conditions, and it informs the models that are running in the devices at the edge to make them more efficient and self-driving. So, we have started to see the emergence of these technologies in both locations in 2018 in our own product portfolio and other vendors as well. And I think you will see a lot more of that in 2019.
Are there any other major areas in storage where we expect to see new and different trends?
Srinivasan: There is one more area. It’s not entirely new and different in the sense of, we’ve seen the early stages of it in 2018 already, and that has to do with containerized applications and cloud-native applications and specifically persistent storage for those applications, including not just the storage but also data protection for that storage as well. So, this is all about how do you provide enterprise-grade, reliable storage for applications that are built in a cloud-native, born-in-the-cloud approach, whether that’s containerized microservices, etc. And this has to do with having the right integration into the modern DevOps frameworks and ecosystems. And so that, I think, is going to be a big topic in 2019.
To what degree are enterprises doing this? Is it just really the largest enterprises that need this storage for containerized environments, or is it seeping down beyond that, because it involves a lot of expertise on the organization’s part to take this approach? I’m wondering to what degree you expect to see this happen across all types of users.
Srinivasan: That’s a great question. So, I think it’s a bit of a sandwich in the sense that it’s definitely happening at the high-end enterprises. And that’s driven primarily by them having to react to the threat of the cloud. The agility that you get from these kinds of environments is what’s driving them to do that. At the bottom end, though, if you’re a startup today, this is how you develop your product from day one. You go into the cloud, and you start developing with a containerized, microservice-style architecture. So, you’re just born that way. Those are the two things that are driving the adoption.
Thinking about ways to make your PowerPoint presentation stand out? How about adding 3D embedded animations? Thanks to the Windows 10 October 2018 Update, you can.
You can now insert 3D models with built-in animations to your PowerPoint and Word documents. These embedded 3D animations make it faster and easier for anyone to add movement and animation to slides and documents. To give you even more creative flexibility, there are more than 30 new animated 3D models.
Check it out in action:
If you like this, check out more Windows 10 Tips.
Updated February 11, 2019 10:18 am
This week at the HIMSS 2019 conference, the healthcare IT community will explore solutions to the most urgent challenges facing modern health. Microsoft will share new innovations to help health organizations navigate the complex technology transformations needed to deliver modern patient experiences that promote successful treatments and well-being.
The Microsoft Healthcare team will showcase intelligent healthcare solutions that connect health data and systems securely in the cloud, improve communication with teams and patients, and advance precision healthcare. These featured solutions—powered by Microsoft 365, Azure, and the new Microsoft Healthcare Bot service—interoperate with Microsoft Business Applications to enable personalized care, empower care teams, and advance precision healthcare.
Today, people want the same level of access and engagement with healthcare providers as they get from other digital brand experiences. Case in point: a recent survey by Transcend Insights found that 93 percent of patients expect care providers to provide access to information about their medical history, and 71 percent want to digitally provide status updates to better inform diagnoses and decisions.
Dynamics 365 unifies operations and patient engagement, breaking down silos created by the patchwork of business systems and databases within the organization. As patients interact with web portals and clinicians, providers can access a 360-degree view of the patient for more personalized service. And by using solutions like the Microsoft Dynamics 365 Health Accelerator, healthcare providers can more easily create new use cases and workflows using the Fast Healthcare Interoperability Resources (FHIR) based data model.
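For context on what an FHIR-based data model looks like: FHIR represents clinical data as small, standardized JSON resources. Below is a minimal Patient resource with illustrative values (this is generic FHIR, not the Health Accelerator's specific schema).

```python
import json

# Minimal FHIR Patient resource; every value here is illustrative.
patient = {
    "resourceType": "Patient",
    "id": "example",  # hypothetical logical id
    "active": True,
    "name": [{"use": "official", "family": "Chalmers", "given": ["Peter"]}],
    "gender": "male",
    "birthDate": "1974-12-25",
}

print(json.dumps(patient, indent=2))
```

Because every system exchanges the same resource shapes, a portal, a clinician app, and an analytics pipeline can all consume the same Patient record without per-vendor translation.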
Improving patient engagement with virtual clinics
As the healthcare industry shifts to value-based care, many providers focus on face-to-face patient experiences at the clinic or hospital. Now imagine the challenge of improving care for patients scattered across remote, difficult-to-reach villages.
The Hospital District of Helsinki and Uusimaa (HUS) is solving the issue by using Microsoft cloud solutions to create a virtual hospital that provides remote, virtual health services throughout Finland, including sparsely populated regions.
HUS moved to the cloud with Microsoft Azure and Microsoft 365 to create digital hubs for its medical specialties, and then added Dynamics 365 for Customer Service. Now providers have tools to access a 360-degree view of patients across departments and caregivers to improve treatment. Patients can access self-service portals for medical information and self-help therapies, plus receive virtual one-on-one treatments from specialists. It’s a win-win for everyone: patients are empowered to feel more in control of their health, which boosts confidence, and providers can deliver personalized care to more patients.
HUS is also conducting pilot programs with artificial intelligence and machine learning algorithms to direct patients to the right place right away and improve digital healthcare services. Providers will be able to gain insights from complex data to develop precision medicine and treatments for different patients and groups.
Learn more about the HUS Virtual Hospital in this customer story, as well as in the short video below.
Empowering care teams for exceptional at-home services
Another way Dynamics 365 is improving patient care is by enabling care teams to remotely monitor patients, share knowledge across health teams, and coordinate the right level of care.
In Australia, more older citizens are choosing to live at home rather than in a nursing facility. For residential wellness provider ECH, this means making life simple for 15,000 clients while providing support for its domestic healthcare workers. ECH deployed Dynamics 365 to streamline the onboarding of new clients, helping to match them with the right specialists. It also adopted Dynamics 365 for Talent to attract and onboard skilled care providers and set them up for success, which is critical in a field with high stress and turnover. ECH is helping to reduce burnout and attrition by using Dynamics 365 to promote continuous learning, track employee accomplishments, and help workers get certified and trained.
Improving operational outcomes with no-code apps
A key to exceptional patient experiences is empowering staff to streamline care processes, reduce redundancy, and gain insights to make decisions faster.
New York’s largest healthcare provider, Northwell Health, is streamlining patient care processes using Dynamics 365 for Customer Service and the Microsoft Power Platform to give employees tools to optimize patient care, reduce costs, and ensure regulatory compliance.
Using PowerApps, a Northwell Health doctor with no technical expertise created an app that gives physicians, nurses, and administrators visibility into tasks that need to be completed, such as a patient’s request for a needed X-ray. The app takes data entered into Dynamics 365, stored in the PowerApps Common Data Service, and augments it with attributes from the Microsoft Dynamics 365 Healthcare Accelerator, which makes it easier to create new use cases and workflows using a FHIR-based data model.
By connecting the app to the Microsoft Bot Framework, clinicians and administrators can leverage predictive insights and automated workflows to quickly get fast answers about patients. Plus, all data is on the trusted Azure cloud that helps ensure the compliance, confidentiality, integrity, and accessibility of sensitive data.
Get the full story at HIMSS
These three stories are just a peek at how Microsoft Business Applications are helping transform patient experiences. If you are attending HIMSS, be sure to visit our booth (#2500) and attend sessions to learn more from our healthcare technology experts. Find more information about our location and sessions in this schedule, and be sure to check out the resources below:
Microsoft Translator is happy to announce that it is now certified for ISO, HIPAA, and SOC compliance. This comes as a result of Azure’s commitment to privacy and security.
Last year, Translator announced that it was GDPR compliant as a data processor. Now, Microsoft Translator is ISO, HIPAA, and SOC compliant, in addition to receiving CSA and FedRAMP public cloud attestation.
ISO: Microsoft Translator is ISO certified with five certifications applicable to the service. The International Organization for Standardization (ISO) is an independent nongovernmental organization and the world’s largest developer of voluntary international standards. Translator’s ISO certifications demonstrate its commitment to providing a consistent and secure service. Microsoft Translator’s ISO certifications are:
ISO 27001 Information Security Management Standards
HIPAA: The Microsoft Translator service complies with the US Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act, which govern how cloud services can handle personal health information. This ensures that health services can provide translations to clients knowing that personal data is kept private. Microsoft Translator is included in Microsoft’s HIPAA Business Associate Agreement (BAA). Health care organizations can enter into the BAA with Microsoft to detail each party’s role in regard to the security and privacy provisions under HIPAA and HITECH.
SOC: The American Institute of Certified Public Accountants (AICPA) developed the Service Organization Controls (SOC) framework, a standard for controls that safeguard the confidentiality and privacy of information stored and processed in the cloud, primarily in regard to financial statements. Microsoft Translator is now SOC type 1, 2, and 3 compliant.
CSA STAR: The Cloud Security Alliance (CSA) defines best practices to help ensure a more secure cloud computing environment and to help potential cloud customers make informed decisions when transitioning their IT operations to the cloud. The CSA published a suite of tools to assess cloud IT operations: the CSA Governance, Risk Management, and Compliance (GRC) Stack. It was designed to help cloud customers assess how cloud service providers follow industry best practices and standards and comply with regulations. Microsoft Translator has received CSA STAR Attestation.
FedRAMP: The US Federal Risk and Authorization Management Program (FedRAMP) attests that Microsoft Translator adheres to the security requirements needed for use by US government agencies in the public Azure cloud. The US Office of Management and Budget requires all executive federal agencies to use FedRAMP to validate the security of cloud services. FedRAMP attestation for Microsoft Translator in the dedicated Azure Government cloud is forthcoming.
The Microsoft Translator service is subject to annual audits on all of its certifications to ensure the service remains compliant. View more information about Microsoft’s commitment to compliance in the Microsoft Trust Center.
Experts are recommending that enterprises strive for two-factor authentication — especially new types of 2FA — because of its ease of use and lower risk of human error.
Mark Risher, head of account security at Google, agreed that 2FA should be the baseline of security for enterprises. But he also noted that some types of 2FA are commonly misunderstood by users or may seem more daunting than they should.
In our discussion, Risher talked about the new types of 2FA, like Universal Second Factor (U2F) and WebAuthn, and how those new options could be game changers for users and enterprises alike.
Editor’s note: This interview has been edited for length and clarity.
Can you walk through the different types of 2FA?
Mark Risher: There are two things that are on the table right now. One of them is how a user authenticates to Google, and the second is how the user authenticates to some non-Google service — a payroll provider, a benefit site, document collaboration, what have you. For the first one, authenticating to Google, we already offer many different types of 2FA, including what is a more robust security key-based approach.
Then, when you want to get to the third party, you have two options. The first one is you can use single sign-on and go through Google. The user connects first to Google and then extends that trust chain to the third party — that’s the single sign-on approach. Or, the user can go direct to the third party, and there they could use the open standards that we built. Google Authenticator is a product based on an open standard called TOTP, [or] time-based one-time password.
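TOTP is simple enough to sketch in a few lines: it is HMAC-SHA-1 over a time-step counter, with a "dynamic truncation" step that picks out the displayed digits. Below is a minimal RFC 6238-style implementation (an illustration of the open standard, not Google Authenticator's actual code):

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute a TOTP code (RFC 6238) from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if now is None else now) // timestep)
    # HOTP (RFC 4226): HMAC-SHA-1 over the big-endian 64-bit counter.
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: low nibble of the last byte selects a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890", time T = 59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # -> 287082
```

Because both sides derive the code from a shared secret plus the clock, the code proves secret possession, but nothing binds it to the site the user types it into, which is the phishing gap Risher describes next.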
But security key is also an open standard, and the recently standardized WebAuthn web authentication is another open standard where people can completely leave Google out of the mix and just go straight to that payroll provider using a second factor that follows these new advanced standards.
What are the benefits of WebAuthn over other types of 2FA?
Risher: The challenge is very few people in the world spend time thinking about authentication, and they shouldn’t have to; it’s a means to an end. No one cares about your login page, but [both users and threat actors] are trying to get to some valuable service on the other side of it. The problem is that, because they don’t think about it as much, all of these nonpassword or beyond-password-type solutions start to feel the same. Particularly in the enterprise world, people say, ‘Ten years ago, I had this RSA thing hung on my keychain, and every minute it would give me a new code. And everything that’s come since then feels like a variation of the same thing, so I don’t get the difference.’
The huge difference is that, with all of these one-time password-based solutions, things like the RSA SecurID, the code sent to your phone, or even the user-friendly thing, where your phone gets just a yes-no button to pop up — we call it Google Prompt, but other companies have their own thing — all of those, there’s one critical gap that is exploited by attackers: … the onus is on the user to make sure that he or she is on the correct site for typing in that thing, or pressing that button or what have you. And if the user messes up, if the user gets fooled by a phishing attack, by a reasonable facsimile of the site, then the user has just passed that information to the adversary and is now back to square one, where they’re no better off than a password.
The other camp is this modern camp, which includes security key and standards like U2F and WebAuthn. It’s a game changer, because the user no longer has that burden of responsibility. In the modern technique, you flip it upside down — the site or app has to prove to the key that it is legitimate, that it is exactly what it claims to be and if, and only if, that proof succeeds will the key release its information back up to the site.
You’ll notice in that second scenario I didn’t mention the user at all. The user can be off at the coffee machine paying no attention whatsoever, totally distracted and totally uneducated on which site it is, because the human has been taken out of the loop. The key is proving itself to the site, and the site is proving itself to the key.
That's a big game changer, because now you've taken something that could happen from anywhere in the world and that relies on humans never making mistakes, which we know is not possible, and turned it into a problem where you need physical proximity; you need to be right there by the machine. And it's something computers are good at: they do exact matches very well.
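The origin binding Risher describes can be sketched in a few lines. A real U2F/WebAuthn key uses an asymmetric key pair and per-origin credentials; a shared HMAC secret stands in here purely to keep the sketch self-contained, and all names are illustrative.

```python
import hashlib
import hmac
import os

class SecurityKey:
    """Stand-in for a hardware key: signs a challenge bound to an origin."""

    def __init__(self, secret: bytes):
        self.secret = secret

    def sign(self, challenge: bytes, asserted_origin: str) -> bytes:
        # The browser, not the user, supplies the origin string, so a
        # phishing page cannot claim to be the legitimate site.
        return hmac.new(self.secret, asserted_origin.encode() + challenge,
                        hashlib.sha256).digest()

class RelyingParty:
    """The site: verifies that the response was bound to *its* origin."""

    def __init__(self, origin: str, secret: bytes):
        self.origin = origin
        self.secret = secret

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self.secret, self.origin.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

secret = os.urandom(32)
key = SecurityKey(secret)
site = RelyingParty("https://example.com", secret)

challenge = os.urandom(16)
# A response bound to the real origin verifies; one produced while the
# browser is on a look-alike phishing origin does not.
assert site.verify(challenge, key.sign(challenge, "https://example.com"))
assert not site.verify(challenge, key.sign(challenge, "https://examp1e.com"))
```

The point of the sketch is the last two lines: the user never inspects a URL, yet the look-alike site gets nothing it can replay.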
What kind of work is needed on the enterprise side in order to provide that proof to make this work?
Risher: It depends on how the enterprise app does authentication. The easiest one to roll out is the single-sign-on-based platform, because that uses standards that have been around for a long time, including SAML [Security Assertion Markup Language]. Using that allows an enterprise to say, 'I'm not going to change all 100 of my back-office applications; I'm just going to have people first connect to this secure gateway, and then the gateway will relay them on to those applications.'
This is the easiest, because it means you're usually integrating in just one place, that core authentication step, and then all the different apps rely upon it. The other option is piecemeal: you go to each of those services and see whether it supports this kind of authentication. That complexity and tedium is the reason enterprises have generally liked single-sign-on-based approaches.
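The single-sign-on shape he describes can be sketched as one gateway issuing a signed assertion that every back-office app verifies instead of implementing its own login. The token format and key here are illustrative stand-ins; real deployments use SAML or OpenID Connect.

```python
import hashlib
import hmac
import json
import time
from typing import Optional

GATEWAY_SIGNING_KEY = b"demo-signing-key"  # illustration only

def gateway_issue_assertion(user: str, ttl: int = 300) -> str:
    """The gateway signs a short-lived statement of who authenticated."""
    payload = json.dumps({"sub": user, "exp": int(time.time()) + ttl})
    sig = hmac.new(GATEWAY_SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def app_verify_assertion(token: str) -> Optional[str]:
    """Any app trusts the gateway's signature; it never handles a password."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(GATEWAY_SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None
    return claims["sub"]

token = gateway_issue_assertion("alice")
assert app_verify_assertion(token) == "alice"
assert app_verify_assertion(token[:-1] + "x") is None  # tampering detected
```

This is why upgrading the gateway to security keys upgrades all 100 apps at once: the apps only ever check the gateway's assertion.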
The ultimate aim, no matter what type of 2FA is being discussed, tends to be ease of use and fewer opportunities for human mistakes.
Risher: Exactly, and it's rare that we get both at once. Usually, people expect that better security comes at a cost, and that the cost is more complexity for the user. Instead, with this stuff, we have a sweet spot where it's better security for the user, and the complexity is borne by the computer. So, it's actually easier for the users.
It's actually so rare that people are suspicious. But just because it feels easy doesn't mean they're giving something up. In fact, they've moved into a whole new class of modern authentication that is much more robust, much stronger, much better.
If you were the IT manager at a smaller enterprise, how would you sell this to the board to be able to get the resources to implement something like this?
Risher: The way I would do that is by explaining the concept of attack surface. If you are using passwords or phishable second factors like a one-time password, the set of potential attackers is literally everyone with an internet connection anywhere in the world. If any of those people knows your password, they can connect to the service. [It] doesn't mean they want to, [and it] doesn't mean they're focusing on you. But, statistically speaking, they will eventually.
So, what I would say to the board is, 'Do you feel comfortable in a world where, if anyone anywhere in the world learns our password, they are able to access our services? Or, would I rather shrink that down by 10 orders of magnitude, so that only people with physical possession of my device can connect to my service?' That's really the transformation: those 10 orders of magnitude, from anyone anywhere in the world down to physical possession of the device.
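A quick back-of-envelope check of that "10 orders of magnitude" figure, using illustrative population counts rather than anything stated in the interview:

```python
import math

internet_users = 5_000_000_000   # roughly everyone online today (assumption)
key_holders = 1                  # people physically holding your security key

shrinkage = math.log10(internet_users / key_holders)
print(f"attack surface shrinks by ~{shrinkage:.1f} orders of magnitude")
# Billions of remote candidates down to the handful near the key is on
# the order of ten powers of ten, matching the figure Risher cites.
```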
Is it ever really limited to just physical access? Even with the physical security key authentication set up on a Google account, there is an option to fall back to SMS-based codes or the Google Prompt.
Risher: We have offered the option to use security keys for our regular accounts for many years. But, you're right, that does have a fallback. So, that has not actually raised your security. It's given you the convenience of a second factor that you don't have to do any heavy lifting for, but you're not getting the security benefit. To get the security benefits, you need to turn off, disable [or] preclude any of those fallback mechanisms. And that's why, for our Advanced Protection Program, we have disabled all of those other fallbacks.
With Advanced Protection, once you enroll, if you lose the security key, then you truly cannot connect. And if an attacker does not have the security key, they truly cannot connect. That's why we sell two of them together in the Titan Key kit, and that's why, during Advanced Protection enrollment, we require people to set up two separate keys: if you lose both, you're really locked out for good. You need to give one to a family member, or leave it at home, or put it in a safe place, so that you have that fallback under your control.
Do you see others moving to offer more secure options like that, as well?
Risher: We haven't seen many. It is emerging, and it's something we're trying to pressure and encourage people to do, because that's the direction we need to move in to get those true security guarantees. With that said, there's a tension: particularly at companies that operate at large scale, scales comparable to Google's, it is more common for users to accidentally lock themselves out than for attackers to try to break in.
Right now, for every one attack that we stop, there are 1,000 innocent people whom we put through some inconvenience. And we do need to weigh that ratio. That's why some of our peers in the industry have been hesitant to fully embrace mandatory enforcement of security keys, but it's clearly the direction things have to go. Right now, for high-risk individuals, whether they are celebrities, high-ranking officials, people with a lot of control at an enterprise, journalists or activists, we are strongly promoting the Advanced Protection Program, because that's the direction we want everyone to move in.