
How to rebuild the SYSVOL tree using DFSR

Active Directory has a number of different components to keep track of user and resource information in an organization….

If one piece starts to fail and a recovery effort falters, it could mean it’s time for a rebuilding process.

The system volume (SYSVOL) is a shared folder found on domain controllers in an Active Directory domain that distributes the logon and policy scripts to users on the domain. Creating the first domain controller also produces SYSVOL and its initial contents. As you build domain controllers, the SYSVOL structure is created, and the contents are replicated from another domain controller. If this replication fails, it could leave the organization in a vulnerable position until it is corrected.

How the SYSVOL directory is organized

SYSVOL contains the following items:

  • group policy data;
  • logon scripts;
  • staging folders used to synchronize data and files between domain controllers; and
  • file system junctions.
Figure 1: Use the Get-SmbShare cmdlet to show the SYSVOL and NETLOGON shares on an Active Directory domain controller.

The Distributed File System Replication (DFSR) service replicates SYSVOL data on Windows 2008 and above when the domain functional level is Windows 2008 and above.
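
If you’re not sure which replication engine your domain is using, you can check the domain functional level and the SYSVOL migration state; a quick sketch (dfsrmig ships on domain controllers):

# Show the domain functional level and the DFSR migration global state for SYSVOL
(Get-ADDomain).DomainMode
dfsrmig /getglobalstate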

Figure 2. The SYSVOL folder contains four folders: domain, staging, staging areas and sysvol.

The position of SYSVOL on disk is set when you promote a server to a domain controller. The default location is C:\Windows\SYSVOL\sysvol, as shown in Figure 1.
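
To confirm the location on your own domain controller, you can query the shares directly with the Get-SmbShare cmdlet used for Figure 1; a minimal check looks like this:

# List the SYSVOL and NETLOGON shares and the local folders they map to
Get-SmbShare -Name SYSVOL, NETLOGON | Select-Object Name, Path, Description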

For this tutorial, we will use PowerShell Core v7 preview 3, because it fixes the .NET Core bug related to displaying certain properties, such as ProtectedFromAccidentalDeletion.

SYSVOL contains a number of folders, as shown in Figure 2.

How to protect SYSVOL before trouble strikes

As the administrator in charge of Active Directory, you need to consider how you’ll protect the data in SYSVOL to protect the system in case of corruption or user error.

Windows backs up SYSVOL as part of the system state, but you should not restore from system state, as it might not result in a proper restoration of SYSVOL. If you’re working with the domain controller that holds the relative identifier (RID) master flexible single master operations (FSMO) role, you definitely don’t want to restore system state and risk having multiple objects with the same security identifier. You need a file-level backup of the SYSVOL area. Don’t forget you can use Windows Server Backup to protect SYSVOL on a domain controller if you can’t use your regular backup approach.

If you can’t use a backup, then login scripts can be copied to a backup folder. Keep the backup folder on the same volume so the permissions aren’t altered. You can back up group policy objects (GPOs) with PowerShell:

Import-Module GroupPolicy -SkipEditionCheck

The SkipEditionCheck parameter is required, because the GroupPolicy module hasn’t had CompatiblePSEditions in the module manifest set to include Core.

Create a folder for the backups:

New-Item -ItemType Directory -Path C:\ -Name GPObackup

Use the date to create a subfolder name and create the subfolder for the current backup:

$date = (Get-Date -Format 'yyyyMMdd').ToString()

New-Item -ItemType Directory -Path C:\GPObackup -Name $date

Run the backup:

Backup-GPO -All -Path (Join-Path -Path C:\GPObackup -ChildPath $date)
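
When you need to bring those GPOs back, Restore-GPO reads the same backup folder. A minimal sketch, with the date folder name shown here purely as an example:

# Restore every GPO from a specific day's backup folder (substitute the date you need)
Restore-GPO -All -Path (Join-Path -Path C:\GPObackup -ChildPath '20190805')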

If you still use login scripts rather than doing everything through GPOs, the system stores your scripts in the NETLOGON share in the C:\Windows\SYSVOL\domain\scripts folder.
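
If you want a simple file-level copy of those scripts as well, something like the following sketch works; the C:\ScriptBackup folder name is only an example, and per the earlier advice you’d keep it on the same volume:

# Copy the logon scripts to a backup folder on the same volume
New-Item -ItemType Directory -Path C:\ScriptBackup -Force | Out-Null
Copy-Item -Path C:\Windows\SYSVOL\domain\scripts -Destination C:\ScriptBackup -Recurse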

Restore the SYSVOL folder

SYSVOL replication through DFSR usually works. However, as with any system, it’s possible for something to go wrong. There are two scenarios that should be covered:

  • Loss of SYSVOL information on a single domain controller. The risk is that the change that removed the data from SYSVOL has already replicated across the domain.
  • Loss of SYSVOL on all domain controllers, which requires a complete rebuild.

The second case, a complete rebuild of SYSVOL, is the more complicated one; the first case is effectively a subset of it. The following steps explain how to recover from a complete loss of SYSVOL, with notes along the way on performing an authoritative replication of a lost file.

Preparing for a SYSVOL restore

To prepare to rebuild the SYSVOL tree, stop the DFSR service on all domain controllers:

Stop-Service DFSR
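
Stop-Service only acts on the local machine. If you prefer to stop DFSR on every domain controller from a single console, a sketch along these lines works, assuming PowerShell remoting is enabled and the ActiveDirectory module is available:

# Stop the DFSR service on every domain controller in the domain
$dcs = (Get-ADDomainController -Filter *).HostName
Invoke-Command -ComputerName $dcs -ScriptBlock { Stop-Service -Name DFSR }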

On the domain controller with the SYSVOL you want to fix — or the one with the data you need to replicate — disable DFSR and make the server authoritative.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC01,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * |
    Set-ADObject -Replace @{'msDFSR-Enabled'=$false; 'msDFSR-options'=1}

Disable DFSR on the other domain controllers in the domain. The difference in the commands is you’re not setting the msDFSR-options property.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC02,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * |
    Set-ADObject -Replace @{'msDFSR-Enabled'=$false}
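
Before continuing, it’s worth confirming the attribute changes landed where you expect. The following sketch builds each domain controller’s subscription DN using the same pattern as the commands above and reports the current values:

# Report the msDFSR-Enabled and msDFSR-options values for every domain controller
Get-ADDomainController -Filter * | ForEach-Object {
    $dn  = "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,$($_.ComputerObjectDN)"
    $sub = Get-ADObject -Identity $dn -Properties 'msDFSR-Enabled', 'msDFSR-options'
    [pscustomobject]@{
        DomainController = $_.HostName
        Enabled          = $sub.'msDFSR-Enabled'
        Options          = $sub.'msDFSR-options'
    }
}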

Rebuild the SYSVOL tree data

The next step is to restore the data. You can skip this if you’re just forcing replication of lost data.

On domain controllers where you can’t perform a restore, you’ll need to rebuild the SYSVOL tree folder structure and share structure. This tutorial assumes you’ve created SYSVOL in the default location with the following folder structure:

C:\Windows\SYSVOL
C:\Windows\SYSVOL\domain
C:\Windows\SYSVOL\domain\policies
C:\Windows\SYSVOL\domain\scripts
C:\Windows\SYSVOL\staging
C:\Windows\SYSVOL\staging\domain
C:\Windows\SYSVOL\staging areas
C:\Windows\SYSVOL\sysvol

You can use the following PowerShell commands to re-create the folders in the minimum number of steps. Be sure to change the nondefault location of the Stest folder used below to match your requirements.

New-Item -Path C:\Stest\SYSVOL\domain\scripts -ItemType Directory
New-Item -Path C:\Stest\SYSVOL\domain\policies -ItemType Directory
New-Item -Path C:\Stest\SYSVOL\staging\domain -ItemType Directory
New-Item -Path 'C:\Stest\SYSVOL\staging areas' -ItemType Directory
New-Item -Path C:\Stest\SYSVOL\sysvol -ItemType Directory

Re-create the directory junction points. Map SYSVOL\domain (the source folder) to SYSVOL\sysvol and SYSVOL\staging\domain (the source folder) to SYSVOL\staging areas.

You need to run mklink as administrator from a command prompt, rather than PowerShell:

C:\Windows>mklink /J C:\Stest\SYSVOL\sysvol\sphinx.org C:\Stest\SYSVOL\domain

Junction created for C:\Stest\SYSVOL\sysvol\sphinx.org <<===>> C:\Stest\SYSVOL\domain

C:\Windows>mklink /J "C:\Stest\SYSVOL\staging areas\sphinx.org" C:\Stest\SYSVOL\staging\domain

Junction created for C:\Stest\SYSVOL\staging areas\sphinx.org <<===>> C:\Stest\SYSVOL\staging\domain
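
If you would rather not drop to a command prompt, New-Item can create the same junctions from PowerShell (Windows PowerShell 5.1 and later support -ItemType Junction); a sketch using the same example paths:

# Create the two directory junctions without leaving PowerShell
New-Item -ItemType Junction -Path 'C:\Stest\SYSVOL\sysvol\sphinx.org' -Target 'C:\Stest\SYSVOL\domain'
New-Item -ItemType Junction -Path 'C:\Stest\SYSVOL\staging areas\sphinx.org' -Target 'C:\Stest\SYSVOL\staging\domain'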

Set the following permissions on the SYSVOL folder:

NT AUTHORITY\Authenticated Users    ReadAndExecute, Synchronize
NT AUTHORITY\SYSTEM                 FullControl
BUILTIN\Administrators              Modify, ChangePermissions, TakeOwnership, Synchronize
BUILTIN\Server Operators            ReadAndExecute, Synchronize

Inheritance should be blocked.
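
One way to stamp those rights on is with Get-Acl and Set-Acl. The sketch below is illustrative only; it assumes the default C:\Windows\SYSVOL\sysvol path, so check the identities and rights against the list above before relying on it:

# Illustrative only: apply the listed permissions and block inheritance on the SYSVOL folder
$path = 'C:\Windows\SYSVOL\sysvol'
$acl  = Get-Acl -Path $path
$acl.SetAccessRuleProtection($true, $false)   # protect from inheritance and drop inherited entries

$rights = @{
    'NT AUTHORITY\Authenticated Users' = 'ReadAndExecute, Synchronize'
    'NT AUTHORITY\SYSTEM'              = 'FullControl'
    'BUILTIN\Administrators'           = 'Modify, ChangePermissions, TakeOwnership, Synchronize'
    'BUILTIN\Server Operators'         = 'ReadAndExecute, Synchronize'
}

foreach ($identity in $rights.Keys) {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
        $identity, $rights[$identity], 'ContainerInherit, ObjectInherit', 'None', 'Allow')
    $acl.AddAccessRule($rule)
}

Set-Acl -Path $path -AclObject $acl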

If you don’t have a backup of the GPOs, re-create the default GPOs with the DCGPOFIX utility, and then re-create your other GPOs.
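
For reference, DCGPOFIX restores the Default Domain Policy and the Default Domain Controllers Policy, and its /Target switch chooses which; treat the line below as a sketch and confirm the syntax with dcgpofix /? on your server:

# Re-create both default GPOs (run elevated on a domain controller)
dcgpofix /target:both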

You may need to re-create the SYSVOL share (See Figure 1). Set the share permissions to the following:

Everyone: Read

Authenticated Users: Full control

Administrators group: Full control

Set the share comment (description) to Logon server share.
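
A sketch of re-creating the share with those permissions and the comment, assuming SYSVOL is in the default location:

# Re-create the SYSVOL share with the permissions and description listed above
New-SmbShare -Name SYSVOL -Path C:\Windows\SYSVOL\sysvol -Description 'Logon server share' -ReadAccess Everyone -FullAccess 'Authenticated Users', Administrators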

Check that the NETLOGON share is available. It remained available during my testing process, but you may need to re-create it. 

Share permissions for NETLOGON are the following:

Everyone: Read

Administrators: Full control
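
A quick way to check for the NETLOGON share and re-create it only if it is missing, again assuming the default scripts path:

# Re-create the NETLOGON share only if it is not already present
if (-not (Get-SmbShare -Name NETLOGON -ErrorAction SilentlyContinue)) {
    New-SmbShare -Name NETLOGON -Path C:\Windows\SYSVOL\domain\scripts -ReadAccess Everyone -FullAccess Administrators
}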

You should be able to restart replication.

How to restart Active Directory replication

Start the DFSR service and reenable DFSR on the authoritative server:

Start-Service -Name DFSR

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC01,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * | Set-ADObject -Replace @{'msDFSR-Enabled'=$true}

Run the following command to initialize SYSVOL:

DFSRDIAG POLLAD

If you don’t have the DFS management tools installed, run this command from a Windows PowerShell 5.1 console:

Install-WindowsFeature RSAT-DFS-Mgmt-Con

The ServerManager module cannot load into PowerShell Core at this time.

Start DFSR service on other domain controllers:

Start-Service -Name DFSR

Enable DFSR on the nonauthoritative domain controllers. Check that replication has occurred.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC02,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * | Set-ADObject -Replace @{'msDFSR-Enabled'=$true}

Run DFSRDIAG on the nonauthoritative domain controllers:

DFSRDIAG POLLAD
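
To confirm that SYSVOL initialization actually completed, watch the DFS Replication event log on each domain controller. Microsoft’s DFSR recovery guidance points to event ID 4602 for the authoritative member and 4604 (preceded by 4614) for nonauthoritative members, so a filter along these lines is a reasonable starting point:

# Look for the DFS Replication events that signal SYSVOL initialization
Get-WinEvent -LogName 'DFS Replication' -MaxEvents 50 |
    Where-Object { $_.Id -in 4602, 4604, 4614 } |
    Select-Object TimeCreated, Id, Message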

The results might not be immediate, but replication should restart, and then SYSVOL should be available.

The process of rebuilding the SYSVOL tree is not something that occurs every day. With any luck, you won’t ever have to do it, but it’s a skill worth developing to ensure you can protect and recover your Active Directory domain.


Microsoft is awarded Zscaler’s Technology Partner of the Year for 2019

Last week at Zscaler’s user conference, Zenith Live, Microsoft received Zscaler’s Technology Partner of the Year Award in the Impact category. The award was given to Microsoft for the depth and breadth of integrations we’ve collaborated with Zscaler on and the positive feedback received from customers about these integrations.

Together with Zscaler—a Microsoft Intelligent Security Association (MISA) member—we’re focused on providing our joint customers with secure, fast access to the cloud for every user. Since partnering with Zscaler, we’ve delivered several integrations that help our customers better secure their environments, including:

  • Azure Active Directory (Azure AD) integration to extend conditional access policies to Zscaler applications to validate user access to cloud-based applications. We also announced support for user provisioning of Zscaler applications to enable automated, policy-based provisioning and deprovisioning of user accounts with Azure AD.
  • Microsoft Intune integration that allows IT administrators to provision Zscaler applications to specific Azure AD users or groups within the Intune console and configure connections by using the existing Intune VPN profile workflow.
  • Microsoft Cloud App Security integration to discover and manage access to Shadow IT in an organization. Zscaler can be leveraged to send traffic data to Microsoft’s Cloud Access Security Broker (CASB) to assess cloud services against risk and compliance requirements before making access control decisions for the discovered cloud apps.

“We’re excited to see customers use Zscaler and Microsoft solutions together to deliver fast, secure, and direct access to the applications they need. The Technology Partner of the Year Award is a testament of Microsoft’s commitment to helping customers better secure their environments.”
—Punit Minocha, Vice President of Business Development at Zscaler

“The close collaboration between our teams and deep integration across Zscaler and Microsoft solutions help our joint customers be more secure and ensure their users stay productive. We’re pleased to partner with Zscaler and honored to be named Zscaler’s Technology Partner of the Year.”
—Alex Simons, Corporate Vice President of Program Management at Microsoft

We’re thrilled to be Zscaler’s Technology Partner of the Year in the Impact category and look forward to our continued partnership with Zscaler.


New Workspace One features focus on intelligence, security

VMware unveiled new Workspace One features at its annual user conference. Two standouts include Virtual Assistant, which will help to set up a device and answer frequently asked questions, and Employee Experience Management, which can proactively monitor endpoint security.

Workspace One is VMware’s digital workspace product that enables IT to manage endpoints and provide end users access to their desktops and applications wherever they are. The new features include AI capabilities that are designed to help IT and HR get new employees settled faster, as well as better identify potential security issues before they spread throughout the organization.

In this Q&A from VMworld, Shankar Iyer, SVP of end-user computing at VMware, talks about what the new Workspace One features can provide IT, why a zero-trust model is a security must and what customers can expect in the future from Workspace One.

What do organizations need to do at the start to get the most out of the new Workspace One features?

Shankar Iyer: It’s easy for an organization to latch on to it because it’s running in the cloud. The Virtual Assistant piece, we’ve partnered with IBM Watson and our framework will integrate with any NLP [natural language processing] type of programming that organizations use. We’re also seeing in the market the need for these general purpose questions answered, and Watson is a general purpose machine. We thought it was the best starting point from an NLP perspective.

We wanted to build a standard way for these bot frameworks to be able to integrate into our Virtual Assistant product. … But organizations can still customize a lot within Workspace One. Every organization that implements us is different, but there are patterns within industries or types of organizations that can ease that input.

How has the importance of security affected end-user computing?

Iyer: In the old days, the security model was about building this wall and not letting any activity leave the room. Now it’s an open floor: You can go to a company, and sometimes networks are open. Devices come from anywhere. It behooves customers to build this zero-trust security model.

As a result, you’ll need to put up some barriers and gates and that can benefit a platform like Workspace One. You need identity access; you need to establish device compliance and security hygiene. You need to have data collection through every point, and you need this intelligence ability to decode the data in real time and alert you if someone is coming in on a device we haven’t seen before from a place we haven’t seen before, so I’m going to notch up his risk score. If that risk score reaches a point, you can shut off access.

But how do you balance the desire for improved end-user experience with the need for better security?

Iyer: If you implement a zero-trust model you won’t compromise user experience. Because then say an employee comes into a network on a trusted device and, as IT, we’re going to give them the whole experience with no barriers. But through machine learning, if I detect an anomaly, I can start putting up gates. Say you used a friend’s device; the only inconvenience is probably a second login with a login pin. The end user will be OK with that. But if I try to challenge you with dozens of different password logins, that’s when you, as an end user, can get frustrated. It’s progressive enforcement as a need.

VMware SVP of end-user computing, Shankar Iyer, addresses hundreds of VMworld attendees during the digital experience keynote.

The other thing security people are accepting is there’s no way to block everything. Even when a security concern slips through the cracks, the new Workspace One features track every action the end user took. The moment you cross a threshold, we can shut off access.

That philosophy of security where you do progressive security boundaries, while not compromising experience by using all this data to fix things when things go wrong, is what we’re going for.

What is VMware looking forward to with Workspace One and what can customers expect?

Iyer: We’re starting to see an adoption of Workspace One features to optimize experience and when we break it down to a new employee’s Day Zero, Day One, Day Two and offboarding, there’s a lot we can do. We can optimize each one of those days and better bridge the physical and virtual world. For example, when you walk into your office, we badge in. Why do that when you have a smartphone? There are capabilities of using those devices as identity.

You’ll see this experience get more automated, and bringing the power of intelligence to IT to make them more productive and adding services, things like ticketing will diminish over time. Those are some areas we can still optimize. To do that, other facets of the platform like zero trust will need to be leveraged.


Heineken’s Athina Syrrou and Microsoft’s Brad Anderson talk Teams in ‘The Shiproom’ | Transform

In this episode of “The Shiproom,” Athina Syrrou, who leads collaboration and end user devices for Heineken, joins Microsoft’s Brad Anderson, corporate vice president of Microsoft 365, to discuss what got Heineken interested in using Microsoft Teams and what they’ve learned about it since beginning the pilot – including how to introduce and adopt it efficiently.

Syrrou explains how she chooses the tools she provides to her global workforce, and how she uses the cloud to give her users maximum flexibility to choose the apps and devices they need.  She also schools Anderson on how to use common Greek idioms around the office (which explains why he’s recently been mumbling things about roller skates, chair legs and ducks).

Other discussion topics: The superiority of Greek yogurt, the perfect beer to pair with cereal, the benefits of moving to Intune, elephants and how deploying Microsoft 365 gives users the flexibility needed to do their best work and enable BYOD.

Stop by The Shiproom on YouTube to view more episodes. To learn how you can shift to a modern desktop with Microsoft 365, visit Microsoft365.com/Shift.


Microsoft shuts down zero-day exploit on September Patch Tuesday

Microsoft shut down a zero-day vulnerability disclosed by a Twitter user in August and a denial-of-service flaw on September Patch Tuesday.

A security researcher identified by the Twitter handle SandboxEscaper shared a zero-day exploit in the Windows task scheduler on Aug. 27. Microsoft issued an advisory after SandboxEscaper uploaded proof-of-concept code on GitHub. The company fixed the ALPC elevation of privilege vulnerability (CVE-2018-8440) with its September Patch Tuesday security updates. A malicious actor could use the exploit to gain elevated privileges in unpatched Windows systems.

“[The attacker] can run arbitrary code in the context of local system, which pretty much means they own the box … that one’s a particularly nasty one,” said Chris Goettl, director of product management at Ivanti, based in South Jordan, Utah.

The vulnerability requires local access to a system, but the public availability of the code increased the risk. An attacker used the code to send targeted spam that, if successful, implemented a two-stage backdoor on a system.

“Once enough public information gets out, it may only be a very short period of time before an attack could be created,” Goettl said. “Get the Windows OS updates deployed as quickly as possible on this one.”

Microsoft addresses three more public disclosures

Administrators should prioritize patching three more public disclosures highlighted in September Patch Tuesday.

Microsoft resolved a denial-of-service vulnerability (CVE-2018-8409) with ASP.NET Core applications. An attacker could cause a denial of service with a specially crafted request to the application. Microsoft fixed the framework’s web request handling abilities, but developers also must build the update into the vulnerable application in .NET Core and ASP.NET Core.

Chris Goettl of Ivanti

A remote code execution vulnerability (CVE-2018-8457) in the Microsoft Scripting Engine opens the door to a phishing attack, where an attacker uses a specially crafted image file to compromise a system and execute arbitrary code. A user could also trigger the attack if they open a specially constructed Office document.

“Phishing is not a true barrier; it’s more of a statistical challenge,” Goettl said. “If I get enough people targeted, somebody’s going to open it.”

This exploit is rated critical for Windows desktop systems using Internet Explorer 11 or Microsoft Edge. Organizations that practice least privilege principles can mitigate the impact of this exploit.

Another critical remote code execution vulnerability in Windows (CVE-2018-8475) allows an attacker to send a specially crafted image file to a user, who would trigger the exploit if they open the file.

September Patch Tuesday issues 17 critical updates

September Patch Tuesday addressed more than 60 vulnerabilities, 17 of them rated critical, with many focused on browser and scripting engine flaws.

“Compared to last month, it’s a pretty mild month. The OS and browser updates are definitely in need of attention,” Goettl said.

Microsoft closed two critical remote code execution flaws (CVE-2018-0965 and CVE-2018-8439) in Hyper-V and corrected how the Microsoft hypervisor validates guest operating system user input. On an unpatched system, an attacker could run a specially crafted application on a guest operating system to force the Hyper-V host to execute arbitrary code.

Microsoft also released an advisory (ADV180022) for administrators to protect Windows systems from a denial-of-service vulnerability named “FragmentSmack” (CVE-2018-5391). An attacker can use this exploit to target the IP stack by sending eight-byte IP fragments and withholding the last fragment, triggering full CPU utilization and forcing systems to become unresponsive.

Microsoft also released an update to a Microsoft Exchange 2010 remote code execution vulnerability (CVE-2018-8154) first addressed on May Patch Tuesday. The fix corrects the faulty update that could break functionality with Outlook on the web or the Exchange Control Panel. 

“This might catch people by surprise if they are not looking closely at all the CVEs this month,” Goettl said.

CloudHealth’s Kinsella weighs in on VMware, cloud management

VMware surprised many customers and industry watchers at its annual user conference, VMworld 2018, held this week, with its acquisition of CloudHealth Technologies, a multi-cloud management tool vendor. This went down only days before CloudHealth cut the ribbon on its new Boston headquarters. Joe Kinsella, CTO and founder at CloudHealth, spoke with us about events leading up to the acquisition, as well as his thoughts on the evolution of the cloud market.

Why sell now? And why VMware?

Joe Kinsella

Joe Kinsella: A year ago, we raised a [Series] D round of funding of $46 million. The reason we did that is because we had no intention of doing anything other than build a large public company — until recently. A few months ago, VMware approached us with a partnership conversation. We talked about what we could do together. It became clear that the two of us together would accelerate the vision that I set out to do six years ago. We could do what we set out to do faster, on the platform of VMware.

How will VMware and CloudHealth rationalize the products that overlap within the two companies?

Kinsella: The CloudHealth brand will be a unifying brand across their own portfolio of SaaS and cloud products. That said, in the process of doing that, there will be overlap, but also some opportunities, and we will have to rationalize that over time. There is no need to do it in the short term. [VMware] vRealize and CloudHealth are successful products. We will integrate with VMware, but we will continue to offer a choice.

What was happening in the market to drive your decision?

Kinsella: Cloud management has evolved rapidly. What drives it [is something] I call the ‘three phases of cloud adoption.’ In phase one, enterprises said they would not go to the public cloud, despite the fact that their lines of business used the public cloud. Phase two was this irrational exuberance that everything went to the public cloud. [Enterprises in phase three] have settled on a nuanced approach to leverage a broad portfolio of cloud options, which means many public clouds, many private clouds and a diverse set of SaaS products. Managing a single cloud is complex; managing [such] a diverse portfolio is incredibly complex.

What’s your view today of cloud market adoption and how the landscape is evolving?

Kinsella: Today, the majority of workloads still run on premises. But public cloud growth has been dramatic, as we all know. Amazon remains the market leader by a good amount. [Microsoft’s] Azure business has grown quickly, but a lot of that growth includes the Office 365 product as well. Google has not been a big player until recently. It’s only been in the past 12 months that we felt the Google strategy that Diane Greene started to execute in the market. Alibaba has made some big moves and is a cloud to watch. Though Amazon is still far ahead, it’s finally getting competitive.

But customers don’t really just focus on one source anymore, correct?

Kinsella: I’ve talked about the concept of the heterogenous cloud, which is building applications and business services that take advantage of services from multiple service providers. We think of them as competitors today, but instead of buying services from Amazon, Google or Azure, you might build a business service that takes advantage of services from all three. I think that’s the future. I believe these multiple cloud providers will continue to exist and be differentiated based on the services they provide.

Skip User Research Unless You’re Doing It Right — Seriously



Is your research timeless? It’s time to put disposable research behind us

Focus on creating timeless research. (Photo: Aron on Unsplash)

“We need to ship soon. How quickly can you get us user feedback?”

What user researcher hasn’t heard a question like that? We implement new tools and leaner processes, but try as we might, we inevitably meet the terminal velocity of our user research — the point at which it cannot be completed any faster while still maintaining its rigor and validity.

And, you know what? That’s okay! While the need for speed is valuable in some contexts, we also realize that if an insight we uncover is only useful in one place and at one time, it becomes disposable. Our goal should never be disposable research. We want timeless research.

Speed has its place

Now, don’t get me wrong. I get it. I live in this world, too. First to market, first to patent, first to copyright obviously requires an awareness of speed. Speed of delivery can also be the actual mechanism by which you get rapid feedback from customers.

I recently participated in a Global ResOps workshop. One thing I heard loud and clear was the struggle for our discipline to connect into design and engineering cycles. There were questions about how to address the “unreasonable expectations” of what we can do in short time frames. I also heard that researchers struggle with long and slow timelines: Anyone ever had a brilliant, generative insight ignored because “We can’t put that into the product for another 6 months”?

The good news is that there are methodologies such as “Lean” and “Agile” that can help us. Our goal as researchers is to use knowledge to develop customer-focused solutions. I personally love that these methodologies, when implemented fully, incorporate customers as core constituents in collaborative and iterative development processes.

In fact, my team has created an entire usability and experimentation engine using “Lean” and “Agile” methods. However, this team recognizes that letting speed dictate user research is a huge risk. If you cut corners on quality, customer involvement, and adaptive planning, your research could become disposable.

Do research right, or don’t do it at all

I know, that’s a bold statement. But here’s why: When time constraints force us to drop the rigor and process that incorporates customer feedback, the user research you conduct loses its validity and ultimately its value.

The data we gather out of exercises that over-index on speed are decontextualized and disconnected from other relevant insights we’ve collected over time and across studies. We need to pause and question whether this one-off research adds real value and contributes to an organization’s growing understanding of customers when we know it may skip steps critical to identifying insights that transcend time and context.

User research that takes time to get right has value beyond the moment for which it was intended. I’m betting you sometimes forgo conducting research if you think your stakeholders believe it’s too slow. But, if your research uncovered an insight after v1 shipped, you could still leverage that insight on v1+x.

For example, think of the last time a product team asked you, “We’re shipping v1 next week. Can you figure out if our customers want or need this?” As a researcher, you know you need more time to answer this question in a valid way. So, do you skip this research? No. Do you rush through your research, compromising its rigor? No. You investigate anyway and apply your learnings to v2.

To help keep track of these insights, we should build systems that capture our knowledge and enable us to resurface it across development cycles and projects. Imagine this: “Hey Judy, remember that thing we learned 6 months ago? Research just reminded me that it is applicable in our next launch!”

That’s what we’re looking for: timeless user insights that help our product teams again and again and contribute to a curated body of knowledge about our customers’ needs, beliefs, and behaviors. Ideally, we house these insights in databases, so they can be accessed and retrieved easily by anyone for future use (but that’s another story for another time). If we only focus on speed, we lose sight of that goal.

Creating timeless research

Here’s my point: we’ll always have to deal with requests to make our research faster, but once you or your user research team has achieved terminal velocity with any given method, stop trying to speed it up. Instead, focus on capturing each insight, leveling it up to organizational knowledge, and applying that learning in the future. Yes, that means when an important insight doesn’t make v1, go ahead and bring it back up to apply to v2. Timeless research is really about building long-term organizational knowledge and curating what you’ve already learned.

Disposable research is the stuff you throw away, after you ship. To be truly lean, get rid of that wasteful process. Instead, focus your research team’s time on making connections between past insights, then reusing and remixing them in new contexts. That way, you’re consistently providing timeless research that overcomes the need for speed.

Have you ever felt pressure to bypass good research for the sake of speed? Tell me about it in the comments, or tweet @insightsmunko.



Cisco lays groundwork for augmented reality in Cisco Webex app

An overhaul of the back-end infrastructure and user interface of the Cisco Webex app, rolling out this month, lays the groundwork for the vendor to expand support for augmented reality, virtual reality and other advanced video-centric technologies.

The redesign, which will be released throughout August, prioritizes video and simplifies scheduling, calendar management and in-meeting controls. Beyond that, the vendor has enhanced the cloud infrastructure that powers the video conferencing platform.

The announcement is the result of years of platform work that will allow the Cisco Webex app to better use the public cloud in conjunction with its private cloud video infrastructure, said Sri Srinivasan, vice president and general manager of the vendor’s team collaboration group.

“We’re putting the plumbing together for intelligent experiences across the board,” Srinivasan said. “I don’t think we’re ready to talk about everything AR/VR [augmented reality and virtual reality] on Webex yet, but think of it as the base plumbing.”

In April, Cisco announced that Apple iOS users would be able to share augmented reality files during meetings within the Cisco Webex app. A team of architects could use the feature to view — and edit in real time — a three-dimensional blueprint of a building they were designing, for example.

Cisco also recently began a beta partnership with startup Atheer Inc. to let Webex customers use that vendor’s AR platform, which is compatible with AR smart glasses from vendors such as Microsoft and Toshiba.

A field worker wearing smart glasses could use Atheer’s software to share a video feed of his or her current view to a meeting within the Cisco Webex app. Team members could then upload documents or drawings to the worker’s smart glasses to help solve a problem.

Cisco has been at the vanguard of combining immersive technologies with collaboration apps, analysts said. Microsoft has also taken steps to add AR to its collaboration portfolio. This spring, Microsoft released previews of two new AR apps for Microsoft HoloLens that integrate with Microsoft Teams.

“Microsoft, with HoloLens, is quite prominent these days, and they have a set of specialized applications,” said Adam Preset, analyst at Gartner. “Cisco will have opened up options to do the same with the Atheer partnership, but they’ll also have brought AR into a common application people use every day in Webex.”

Augmented reality use cases limited, but expanding

So far, augmented reality has seen the most adoption in the fields of healthcare, oil and gas production, and manufacturing, said J.P. Gownder, vice president and principal analyst at Forrester Research. But the technology would be useful in any vertical with a high proportion of field workers and significant visualization needs, he said.

By 2019, 20% of large enterprises are expected to have evaluated and adopted augmented reality, virtual reality or mixed reality technology, according to projections by Gartner. Field services, logistics, training and analytics are the most common use cases in the enterprise market at this point, according to the firm.

Immersive commerce could soon become a typical use case of augmented reality, said Marty Resnick, analyst at Gartner. Customer service agents could use AR tools to help customers fix a problem they are having at home with a product.

IDC predicted global spending on augmented and virtual reality technologies will grow at a compound annual rate of 71.6% between 2017 and 2022. Consumers will drive most of that growth, but the verticals of retail, transportation and manufacturing are also expected to ramp up investments in such products.

“Expect more consumer and business applications to leverage AR. And within seven years, it will just be another part of the conference, marketing and business collaboration stack,” said Wayne Kurtzman, analyst at IDC.