Tag Archives: monitoring

IT monitoring, org discipline polish Nasdaq DevOps incident response

Modern IT monitoring can bring together developers and IT ops pros for DevOps incident response, but tools can’t substitute for a disciplined team approach to problems.

Dev and ops teams at Nasdaq Corporate Solutions LLC adopted a common language for troubleshooting with AppDynamics’ App iQ platform. But effective DevOps incident response also demanded focus on the fundamentals of team building and a systematic process for following up on incidents to ensure they don’t recur.

“We had some notion of incident management, but there was no real disciplined way for following up,” said Heather Abbott, senior vice president of corporate solutions technology, who joined the New York-based subsidiary of Nasdaq Inc. in 2014. “AppDynamics has [affected] how teams work together to resolve incidents … but we’ve had other housekeeping to do.”

Shared IT monitoring tools renew focus on incident resolution

Heather Abbott, Nasdaq

Nasdaq Corporate Solutions manages SaaS offerings for customers as they shift from private to public operations. Its products include public relations, investor relations, and board and leadership software managed with a combination of Amazon Web Services and on-premises data center infrastructure, though the on-premises infrastructure will soon be phased out.

In the past, Nasdaq’s dev and ops teams used separate IT monitoring tools, and teams dedicated to different parts of the infrastructure also had individualized dashboard views. The company’s shift to cross-functional teams, focused on products and user experience as part of a DevOps transformation, required a unified view into system performance. Now, all stakeholders share the AppDynamics App iQ interface when they respond to an incident.

With a single source of information about infrastructure performance, there’s less finger-pointing among team members during DevOps incident response, which speeds up problem resolution.

“You can’t argue with the data, and people have a better ongoing understanding of the system,” Abbott said. “So, you’re not going in and hunting and pecking every time there’s a complaint or we’re trying to improve something.”

DevOps incident response requires team vigilance

Since Abbott joined Nasdaq, incidents are down more than 35%. She cited the IT monitoring tool in part, but also pointed to changes the company made to the DevOps incident response process. The company moved from an ad hoc process of incident response divided among different departments to a companywide, systematic cycle of regular incident review meetings. Her team conducts weekly incident review meetings and tracks action items from previous incident reviews to prevent incidents from recurring. Higher levels of the organization have a monthly incident review call to review quality issues, and some of these incidents are further reviewed by Nasdaq’s board of directors.

We always need to focus on blocking and tackling … but as we move toward more complex microservices-based architectures, we’ll be building things into the platform like Chaos Monkey.
Heather Abbott, senior vice president of corporate solutions technology, Nasdaq

And there’s still room to improve the DevOps incident response process, Abbott said.

“We always need to focus on blocking and tackling,” she said. “We don’t have the scale within my line of business of Amazon or Netflix, but as we move toward more complex microservices-based architectures, we’ll be building things into the platform like Chaos Monkey.”

Like many companies, Nasdaq plans to tie DevOps teams with business leaders, so the whole organization can work together to improve customer experiences. In the past, Nasdaq has generated application log reports with homegrown tools. But this year, it will roll out AppDynamics’ Business iQ software, first with its investor-relations SaaS products, to make that data more accessible to business leaders, Abbott said.

AppDynamics App iQ will also expand to monitor releases through test, development and production deployment phases. Abbott said Nasdaq has worked with AppDynamics to create intelligent release dashboards that provide better automation and performance trend data. “That will make it easy to see how system performance is trending over time, as we introduce change,” she said.

While Nasdaq mainly uses AppDynamics App iQ, the exchange also uses Datadog, because it offers event correlation and automated root cause analysis. AppDynamics has previewed automated root cause analysis based on machine learning techniques. Abbott said she looks forward to the addition of that feature, perhaps this year.

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

ThousandEyes-Juniper pact focuses on hybrid WANs

ThousandEyes has deployed its network performance monitoring agents on routers and customer premises equipment, or CPE, made by Juniper Networks to improve visibility for hybrid WANs and other extended networks.

ThousandEyes software, running as virtual network functions on NFX250 branch routers, will support a wide range of capabilities, the companies said, including gauging network health and confirming traffic paths. Among other capabilities, the agents probe latency and bandwidth, monitor MPLS and automate outage detection. They also can report connection errors for FTP, HTTP, Session Initiation Protocol and Real-Time Transport Protocol-based applications, and carry out root-cause analysis for problems stemming from domain name system and Border Gateway Protocol routing.

The proliferation of hybrid WANs, SD-WAN and SaaS offerings, as well as ongoing consolidation of data centers, means enterprises face visibility challenges with their extended networks. The addition of ThousandEyes’ software is aimed at eliminating some of those challenges, said Mihir Maniar, vice president of product management for Juniper Networks. 

“As more and more of our customers move to cloud-centric networks to realize its cost and agility promises, the migration — often to a hybrid public-private environment — can also bring new network blind spots that, if left unchecked, can wreak havoc on service delivery, application development, SLAs [service-level agreements] and the overall end-user experience,” Maniar said in a statement.

Cloud revenues soar 25% in Q3: IDC

Sales of cloud infrastructure products, such as Ethernet switches and servers, surged in the third quarter of 2017, growing 25.5% year over year and reaching $11.3 billion, according to the most recent study by IDC. The firm’s Worldwide Quarterly Cloud IT Infrastructure Tracker found that public cloud investments fueled most of the sales increase, representing 68% of all cloud IT infrastructure sales during the quarter. Storage platforms generated the highest growth, with revenue up 45% over the same quarter in 2016.

IDC said all regions of the world, except for Latin America, experienced double-digit growth in cloud infrastructure spending, with the fastest growth in Asia-Pacific and in Central and Eastern Europe. Private cloud revenues reached $3.6 billion, an annual increase of 13.1%. Noncloud IT infrastructure sales, meanwhile, rose 8% to $14.2 billion.

“2017 has been a strong year for public cloud IT infrastructure growth, accelerating throughout the year,” said Kuba Stolarski, research director for computing platforms at IDC, in a statement.

“While hyperscalers such as Amazon and Google are driving the lion’s share of the growth, IDC is seeing strong growth in the lower tiers of public cloud and continued growth in private cloud on a worldwide scale,” he added.

New Intel and AMD platforms launched in 2017 will provide a further boost to the cloud segment, Stolarski said, as providers and enterprises take steps to upgrade their IT infrastructures.

Lambda MSA issues preliminary optical specification

The 100G Lambda Multi-Source Agreement, or MSA Group, released preliminary interoperability specifications for optical interfaces that run 100 Gbps per wavelength using four-level pulse amplitude modulation (PAM4). The new optical interface specification is intended for next-generation networking equipment and is suitable for tasks requiring increased bandwidth and greater bandwidth density.

In addition to ensuring optical receivers from multiple vendors can work together, the new spec extends the distances supported by 100 Gigabit Ethernet and 400 GbE systems beyond the 500 meters currently specified in the IEEE 802.3 Ethernet standard: up to 10 kilometers for 100 GbE and up to 2 kilometers over duplex single-mode fiber for 400 GbE.

The Lambda MSA group comprises major networking vendors, such as Arista Networks, Broadcom, Cisco and Juniper Networks, as well as major enterprises, such as Alibaba and Nokia. Final specifications will be released later in 2018, the MSA Group said.

Time-series monitoring tools give high-resolution view of IT

DevOps shops use time-series monitoring tools to glean a nuanced, historical view of IT infrastructure that improves troubleshooting, autoscaling and capacity forecasting.

Time-series monitoring tools are based on time-series databases, which are optimized for time-stamped data collected continuously or at fine-grained intervals. Since they store fine-grained data for a longer term than many metrics-based traditional monitoring tools, they can be used to compare long-term trends in DevOps monitoring data and to bring together data from more diverse sources than the IT infrastructure alone to link developer and business activity with the behavior of the infrastructure.
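The core idea can be sketched with a minimal in-memory time-series store. This is an illustration of what such databases optimize for, fine-grained time-stamped points retained long enough to compare long-term trends; the class and method names are invented, not any product’s API:

```python
# Minimal sketch of what a time-series database optimizes for:
# time-stamped points kept at fine granularity with long retention,
# so trends can be compared year over year. Names are illustrative.
from collections import defaultdict
from datetime import datetime

class TinyTSDB:
    def __init__(self):
        # metric name -> list of (timestamp, value) points
        self.series = defaultdict(list)

    def write(self, metric, ts, value):
        self.series[metric].append((ts, value))

    def mean(self, metric, start, end):
        """Average of points with start <= ts < end."""
        pts = [v for t, v in self.series[metric] if start <= t < end]
        return sum(pts) / len(pts) if pts else None

db = TinyTSDB()
db.write("requests_per_sec", datetime(2016, 9, 1), 120.0)
db.write("requests_per_sec", datetime(2017, 9, 1), 180.0)

# Year-over-year comparison of a seasonal usage spike:
yoy = db.mean("requests_per_sec", datetime(2017, 1, 1), datetime(2018, 1, 1)) - \
      db.mean("requests_per_sec", datetime(2016, 1, 1), datetime(2017, 1, 1))
```

A real time-series database adds compression, downsampling and a query language on top of this basic shape.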

Time-series monitoring tools include the open source project Prometheus, which is popular among Kubernetes shops, as well as commercial offerings from InfluxData and Wavefront, the latter of which VMware acquired last year.

DevOps monitoring with these tools gives enterprise IT shops such as Houghton Mifflin Harcourt, an educational book and software publisher based in Boston, a unified view of both business and IT infrastructure metrics. It does so over a longer period than the Datadog monitoring product the company previously used, which retains data for up to 15 months in its Enterprise edition.

“Our business is very cyclical as an education company,” said Robert Allen, director of engineering at Houghton Mifflin Harcourt. “Right before the beginning of the school year, our usage goes way up, and we needed to be able to observe that [trend] year over year, going back several years.”

Allen’s engineering team got its first taste of InfluxData as a long-term storage back end for Prometheus, which at the time was limited in how much data could be held in its storage subsystem — Prometheus has since overhauled its storage system in version 2.0. Eventually, Allen and his team decided to work with InfluxData directly.

Houghton Mifflin Harcourt uses InfluxData to monitor traditional IT metrics, such as network performance, disk space, and CPU and memory utilization, in its Amazon Web Services (AWS) infrastructure, as well as developer activity in GitHub, such as pull requests and number of users. The company also developed its own load-balancing system using Linkerd and Finagle; InfluxData collects data on network latencies in that system and ties in with the Zipkin tracing tool to troubleshoot network performance issues.

Multiple years of highly granular infrastructure data empowers Allen’s team of just five people to support nearly 500 engineers who deliver applications to the company’s massive Apache Mesos data center infrastructure.

InfluxData platform

Time-series monitoring tools boost DevOps automation

Time-series data also allows DevOps teams to ask more nuanced questions about the infrastructure to inform troubleshooting decisions.

“It allows you to apply higher-level statistics to your data,” said Louis McCormack, lead DevOps engineer for Space Ape Games, a mobile video game developer based in London and an early adopter of Wavefront’s time-series monitoring tool. “Instead of something just being OK or not OK, you can ask, ‘How bad is it?’ Or, ‘Will it become very problematic before I need to wake up tomorrow morning?'”

Instead of something just being OK or not OK, you can ask, ‘How bad is it?’ Or, ‘Will it become very problematic before I need to wake up tomorrow morning?’
Louis McCormack, lead DevOps engineer, Space Ape Games

Space Ape has a smaller infrastructure to manage than Houghton Mifflin Harcourt, at about 600 AWS instances compared with about 64,000. But Space Ape also has highly seasonal business cycles, and time-series monitoring with Wavefront helps it not only collect granular historical data, but also scale the IT infrastructure in response to seasonal fluctuations in demand.

“A service in AWS consumes Wavefront data to make the decision about when to scale DynamoDB tables,” said Nic Walker, head of technical operations for Space Ape Games. “Auto scaling DynamoDB is something Amazon has only just released as a feature, and our version is still faster.”

The company’s apps use the Wavefront API to trigger the DynamoDB autoscaling, which makes the tool much more powerful, but also requires DevOps engineers to learn how to interact with the Wavefront query language, which isn’t always intuitive, Walker said. In Wavefront’s case, this learning curve is balanced by the software’s various prebuilt data visualization dashboards. This was the primary reason Walker’s team chose Wavefront over open source alternatives, such as Prometheus. Wavefront is also offered as a service, which takes the burden of data management out of Space Ape’s hands.
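The scaling decision in that kind of metric-driven loop can be sketched as follows. This is a hypothetical illustration of the pattern Space Ape describes, not the company’s code; the function name and the 70% utilization target are invented, and real Wavefront queries and DynamoDB capacity updates go through their respective APIs:

```python
# Hypothetical sketch of metric-driven autoscaling: read recent consumed
# capacity from a monitoring system and raise provisioned throughput when
# utilization runs hot. The target_utilization threshold is an assumption.

def decide_capacity(consumed, provisioned, target_utilization=0.7):
    """Return a new provisioned capacity if the table is running hot,
    otherwise return the current provisioned capacity unchanged."""
    if provisioned and consumed / provisioned > target_utilization:
        # Scale so the current load sits at the target utilization,
        # rounding up to leave headroom.
        return int(consumed / target_utilization) + 1
    return provisioned
```

In the real system, the consumed-capacity figure would come from a Wavefront query and the result would be written back as the table’s provisioned throughput.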

Houghton Mifflin Harcourt chose a different set of tradeoffs with InfluxData, which uses a SQL-like query language that was easy for developers to learn, but the DevOps team must work with outside consultants to build custom dashboards. Because that work isn’t finished, InfluxData has yet to completely replace Datadog at Houghton Mifflin Harcourt, though Allen said he hopes to make the switch this quarter.

Time-series monitoring tools scale up beyond the capacity of traditional metrics monitoring tools, but both companies said there’s room to improve performance when crunching large volumes of data in response to broad queries. Houghton Mifflin Harcourt, for example, queries millions of data points at the end of each month to calculate Amazon billing trends for each of its Elastic Compute Cloud instances.

“It still takes a little bit of a hit sometimes when you look at those tags, but [InfluxEnterprise version] 1.3 was a real improvement,” Allen said.
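The monthly billing rollup described above amounts to grouping cost data points by an instance tag. A minimal sketch in plain Python, not the actual InfluxQL query the company runs:

```python
# Illustrative end-of-month rollup: aggregate per-instance cost data
# points by an instance-ID tag. The dict shape is an assumption.
from collections import defaultdict

def monthly_cost_by_instance(points):
    """points: iterable of dicts like {"instance": "i-abc", "cost": 0.12}."""
    totals = defaultdict(float)
    for p in points:
        totals[p["instance"]] += p["cost"]
    return dict(totals)
```

A time-series database performs the same aggregation server-side with a GROUP BY over the tag, which is where query performance over millions of points matters.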

Allen added that he hopes to use InfluxData’s time-series monitoring tool to inform decisions about multi-cloud workload placement based on cost. Space Ape Games, meanwhile, will explore AI and machine learning capabilities available for Wavefront, though the jury’s still out for Walker and McCormack on whether AIOps will be worth the time it takes to implement. In particular, Walker said he’s concerned about false positives from AI analysis of time-series data.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

SolarWinds’ AppOptics melds network device monitoring, app behavior

SolarWinds has beefed up its cloud monitoring platform with tools that allow managers to track both application performance and infrastructure components in a single view.

The upgrades to SolarWinds’ Cloud software-as-a-service portfolio include a new application, as well as updates to two existing products.

The new network device monitoring application, AppOptics, uses a common dashboard to track application performance metrics and network component health, whether within the enterprise network or in a public cloud provider’s network.

The software combines two existing SolarWinds cloud monitoring apps, Librato and TraceView, into a single network device monitoring product, said Christoph Pfister, executive vice president of products at the company, based in Austin, Texas. Initially, AppOptics will support both Amazon Web Services and Microsoft Azure; support for other providers could be added at a later date, Pfister said.

“Infrastructure and application monitoring are now in separate silos,” he said. “We are trying to integrate them. The digital experience has become very important. But behind the scenes, applications have become very complex, making monitoring and troubleshooting challenging.”

AppOptics uses downloaded agents to collect tracing, host and infrastructure monitoring metrics to feed a common dashboard, through which managers can keep tabs on network device monitoring and application behavior and take appropriate steps in the wake of performance degradation.

In addition to launching AppOptics, SolarWinds added a more powerful search engine and more robust analytics to Papertrail, its log management application. And it added capabilities to Pingdom, a digital experience measurement tool, to allow enterprises to react more quickly to issues that might affect user engagement with a website or service.

Both AppOptics and Papertrail are available Nov. 20; SolarWinds will release Pingdom Nov. 27. All are available as downloads from SolarWinds. The cloud monitoring platform is priced at $7.50 per host, per month.

SolarWinds AppOptics monitoring dashboard

Ruckus launches high-speed WLAN switches

Ruckus Wireless Inc. introduced a new group of wireless LAN switches engineered to support network edge and aggregation functions.

The new switches, the ICX 7650 series, come in three models, including a multi-gigabit access switch that supports both 2.5 Gbps and 5 Gbps throughput; a core switch with Layer 3 features and up to 24 10 Gbps and 24 1 Gbps fiber ports of capacity; and a high-performance gigabit switch that can be deployed as a stack of up to 12 switches.

“As more wireless users access cloud and data-intensive applications on their devices, the demand for high-speed, resilient edge networks continues to increase,” said Siva Valliappan, vice president of campus product management at Ruckus, based in Sunnyvale, Calif., in a statement. “The ICX 7650 switch family captures all these requirements, enabling users to scale and future-proof their network infrastructure to meet the increasing demand of wired and wireless network requirements for seven to 10 years,” he added.

The switches, available early next year, are priced starting at $11,900, Ruckus said.

DDoS attacks on rise, thanks to IoT

Distributed denial-of-service, or DDoS, attacks have risen sharply in the past year, according to a new security report from Corero Network Security.

The firm, based in Marlborough, Mass., said Corero enterprise customers experienced an average of 237 DDoS attempts each day during the third quarter of 2017, a 35% increase from the year-earlier period and almost double what they experienced in the first quarter of 2017.

The company attributed the growth in attacks to DDoS for-hire services and the proliferation of unsecured internet of things (IoT) devices. One piece of malware, dubbed the Reaper, has already infected thousands of IoT gadgets, Corero said.

In addition, Corero’s study found that hackers are using multiple ways to penetrate an organization’s security perimeter. Some 20% of attacks recorded in the second quarter of 2017 used multiple attack vectors, the company said.

Finally, Corero said ransom-oriented DDoS attacks also rose in the third quarter, attributing many of them to one group, the Phantom Squad, which targeted companies across the United States, Europe and Asia.

AIOps tools portend automated infrastructure management

Automated infrastructure management took a step forward with the emergence of AIOps monitoring tools that use machine learning to proactively identify infrastructure problems.

IT monitoring tools released in the last two months by New Relic, BMC and Splunk incorporate AI features, mainly machine learning algorithms, to correlate events in the IT infrastructure with problems in application and business environments. Enterprise IT ops pros have begun to use these tools to address problems before they arise.

New Relic’s machine learning features, codenamed Seymour at its beta launch in 2016, helped the mobile application development team at Scripps Networks Interactive in Knoxville, Tenn., identify misconfigured Ruby application dependencies and head off potential app performance issues.

“Just doing those simple updates allowed them to fix some errors they hadn’t realized were there,” said Mark Kelly, director of cloud and infrastructure architecture at Scripps, which owns web and TV channels, such as Food Network and HGTV, that are viewed by an average of 50 million to 70 million people per day.

Seymour is now generally available in New Relic’s Radar and Error Profiles features, which add a layer of analytics over the performance data collected by New Relic’s application performance management tools to help users hone their responses. Radar uses algorithms similar to e-commerce product recommendation engines to tailor dashboards to individual users’ automated infrastructure management needs. The Error Profiles feature narrows down the possible causes of IT infrastructure errors, so an engineer can scan a prioritized list of the most unusual behaviors to identify a problem’s root cause.

“Before Radar, [troubleshooting] required some manual digging — now it’s automatically identifying problem areas we might want to look for,” Kelly said. “It takes some of that searching for the needle in the haystack out of the equation for us.”
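The prioritization Kelly describes, ranking which attribute values are most over-represented among failing requests, can be sketched like this. This is an illustrative approximation of error profiling in general, not New Relic’s actual algorithm:

```python
# Illustrative error-profiling sketch: for one attribute (e.g. host or
# browser), compute what share of each value's requests failed, and rank
# the worst offenders first so an engineer scans the unusual cases.
from collections import Counter

def error_profile(all_requests, failed_requests, attribute):
    total = Counter(r[attribute] for r in all_requests)
    failed = Counter(r[attribute] for r in failed_requests)
    # Score each value by its failure rate among that value's requests.
    scores = {v: failed[v] / total[v] for v in failed}
    return sorted(scores, key=scores.get, reverse=True)
```

Running this across many attributes at once yields the kind of prioritized “most unusual behaviors” list the article describes.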

Screenshot of APM error messages
A screenshot of New Relic’s Error Profiles feature shows the troubleshooting hints it delivers to IT pros.

Data correlation stems IT ops ticket tsunami

IT monitoring tools from BMC and Splunk also expanded their AIOps features this month. BMC’s TrueSight 11 IT monitoring and management platform will use new algorithms within the TrueSight Intelligence SaaS product to categorize service tickets so IT ops pros can more quickly resolve incidents, as well as assess the financial impact of bugs in application code. Event stream analytics in TrueSight Intelligence can predict IT service deterioration, and a separately licensed TrueSight Cloud Cost Control product forecasts infrastructure costs to optimize workload placement in hybrid clouds.

We want to be able to call the customer and say, ‘Three disk drives are going to fail, and here’s why.’
Chris Adams, president and COO, Park Place

Park Place Technologies, an after-warranty server management company in Cleveland, Ohio, and a BMC partner, plans to fold TrueSight Intelligence analytics into a product that forewarns customers of equipment outages.

“We have ways to filter email alerts sent by equipment based on subject lines, but TrueSight does it faster, and can pull out strings of data from the emails as well,” said Chris Adams, president and COO of Park Place. “We want to be able to call the customer and say, ‘Three disk drives are going to fail, and here’s why.'”

Version 3.0 of Splunk’s IT Service Intelligence (ITSI) tool also correlates event data to pass along critical alerts to IT admins so they can more easily process Splunk log and monitoring data. ITSI 3.0 root cause analysis features predict the outcome of infrastructure changes, more quickly identify problem servers, and integrate with ITSM tools such as ServiceNow and PagerDuty — which offer their own machine learning features to further prune the flow of IT alerts.

AppDynamics presentation at the AppDynamics Summit October 19, 2017
Linda Tong, left, VP of AppDynamics, speaks during The Convergence of IT and Business presentation at AppDynamics Summit on Thursday, October 19, 2017, in New York City.

Automated infrastructure management takes shape with AIOps

Eventually, IT pros hope that AIOps monitoring tools will go beyond dashboards and into automated infrastructure management: proactive changes to head off infrastructure problems, as well as application pull requests that address code problems through the DevOps pipeline.

“The Radar platform has that potential, especially if it can start integrating into our pipeline and help change events before they happen,” Kelly said. “I want it to help me do some of those automated tasks, detect my stacks going bad in advance, and give me some of that proactive feedback before I have a problem.”

Such products are already on the way. Cisco previewed a feature at its AppDynamics Summit recently that displays a forecast of events along a timeline, and highlights transactions that will be critically impacted by problems as they develop. The still-unnamed tool presents theories about the causes of future problems along with recommended actions for remediation. In the product demo, the user interface presented an “execute” button for recommended remediation, along with a button to choose “other actions.”

Cisco plans to eventually integrate technology from recently acquired Perspica, which performs machine learning analysis on streams of infrastructure data at wire speed, into AppDynamics.

For now, AppDynamics customers said they’re interested in ways such AIOps features can improve business outcomes. But the tools must still prove themselves valuable beyond what humans can forecast based on their own experience.

“It’s not going to replace a good analyst at this point — that’s what the analyst does for us, says how a change is going to affect the business,” said Krishna Dammavalam, SRE for Idexx Labs, a veterinary testing and lab equipment company in Westbrook, Maine. “If machine learning’s predictions are better than the analyst’s, that’s where the product will have value, but if the analyst is still better, there will still be room to grow.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

New Akamai products aim to court software developers

Just six months after acquiring application performance monitoring company SOASTA, Akamai has released a number of new offerings, including a new version of SOASTA’s mPulse, marking perhaps the first time the content delivery network provider has directly marketed itself to developers.

And the process of getting to the new Akamai products has been an eye-opener. “We’ve been looking at ourselves in a mirror and thought we had our eyes open, but we’ve been kidding ourselves,” admitted Ari Weil, strategic product and marketing leader of Akamai. “We’ve heard ‘we don’t care about you guys, you’re not in our consciousness,'” he said.

Standing out in a crowd

In the DevOps world, Akamai is a well-known name on the operations side. But even as DevOps brings the two sides closer together, developers, under pressure to create better applications more quickly, are understandably focused on the tools that help them get work done faster. The existing Akamai products weren’t really developer-friendly, and even before the acquisition, SOASTA occasionally found itself having to sell the value proposition of its mPulse APM tool.

“They (Akamai) still have some work to do when it comes to building developer awareness,” said Jeffrey Hammond, vice president and principal analyst serving application development and delivery professionals at Forrester Research. “Most developers tend to know who Akamai is conceptually but not necessarily why they need to care, and what Akamai’s technology can do to improve their development efforts.”

But Weil is determined to change all of that. The latest Akamai products are very DevOps-oriented and designed to make it simple for developers to work with Akamai offerings directly. Developers can now tie in to a variety of public clouds, work with a selection of public APIs and fine-tune performance monitoring to get just the information they need. Akamai has also opened its tools up to work with Varnish, a popular open source HTTP cache.

Rethinking the business

But to get to this point with the new Akamai products, Weil and his team had to have many conversations with developers to understand their perspective. “Developers don’t want to learn how to work with you. They want you to learn how to work with them in the lowest-friction way possible,” he explained. That took some intense rethinking of the business, perhaps nowhere more so than with public APIs. “We thought we really knew what we were doing, but we realized we weren’t thinking about what the developers really needed when it came to APIs. This was a material shift for us and how we do business.” The company also looked at simplifying the developer’s life when it came to deploying Akamai as code in the cloud, he said. “If you put your app on Akamai, all the risk management issues just go away,” Weil said.

When it came to updating the mPulse Akamai products, Weil said the company tried to keep in mind how developers would use the tool. There are now hooks built into open source test communities, and developers have more control and insight into code analysis than before. And with those tweaks came the feedback Weil wanted to hear. “Developers are telling us this (version of mPulse) helps explain to business why they’re doing what they’re doing and now business can understand the build.”

Troubleshoot Azure AD synchronization issues with these strategies

Be sure to monitor synchronization to ensure changes replicate successfully. You can implement monitoring mechanisms that trigger alerts in the event of AD synchronization issues, but to actually address those issues, you need to resolve the conflicts with the objects themselves.

Resolve InvalidSoftMatch errors

Once a full AD synchronization is complete, the directory synchronization tool performs delta synchronization. During delta synchronization, the tool checks attributes of the objects that have been changed and new objects that need to be replicated to Windows Azure Active Directory (WAAD). For example, if you change a user account in on-premises AD, when DirSync performs the next delta synchronization, it checks what has been changed. DirSync follows two rules before the modified or new objects can be replicated: Hard Match and Soft Match.

When it comes time to update or add an object in WAAD, Azure AD first matches the SourceAnchor property of the on-premises object to the ImmutableID property of the object in WAAD. This match is generally called a Hard Match. If the SourceAnchor data doesn’t match the ImmutableID data, Azure AD falls back to a Soft Match, which checks the values of the ProxyAddresses and UserPrincipalName attributes before the object can be updated or added.

You might hit an InvalidSoftMatch error when a Hard Match finds no matching object but a Soft Match does find one, and that object contains a different value in the ImmutableID property. This situation usually occurs when the matching object was already synchronized with another object in on-premises AD.

To resolve InvalidSoftMatch errors, run the Azure AD Connect Health for Sync tool, which can help you identify the conflicting objects. Once you’ve identified them, determine which object shouldn’t be present in WAAD, then either remove the duplicate object or change the conflicting attribute value, and let directory synchronization replicate the objects automatically. You can also force directory synchronization as explained below.
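The Hard Match and Soft Match rules can be sketched in Python for clarity. The object shapes and helper name here are hypothetical illustrations of the matching logic, not Azure AD Connect’s actual implementation:

```python
# Illustrative sketch of the Hard Match / Soft Match decision.
# Objects are plain dicts; field names mirror the attributes described
# in the text, but the function itself is an invented example.

def match_object(on_prem_obj, waad_objects):
    """Return the WAAD object to update, None if the object is new,
    or raise ValueError on an InvalidSoftMatch conflict."""
    # Hard Match: SourceAnchor (on-premises) vs. ImmutableID (cloud).
    for cloud_obj in waad_objects:
        if cloud_obj["ImmutableID"] == on_prem_obj["SourceAnchor"]:
            return cloud_obj

    # Soft Match: fall back to UserPrincipalName / ProxyAddresses.
    for cloud_obj in waad_objects:
        upn_match = cloud_obj["UserPrincipalName"] == on_prem_obj["UserPrincipalName"]
        proxy_match = bool(
            set(cloud_obj["ProxyAddresses"]) & set(on_prem_obj["ProxyAddresses"])
        )
        if upn_match or proxy_match:
            if cloud_obj["ImmutableID"] and cloud_obj["ImmutableID"] != on_prem_obj["SourceAnchor"]:
                # The soft-matched object already belongs to a different
                # on-premises object: the InvalidSoftMatch case.
                raise ValueError("InvalidSoftMatch: conflicting ImmutableID")
            return cloud_obj

    return None  # No match: the object is created as new in WAAD.
```

The point of the sketch is the order of checks: a Soft Match only happens when the Hard Match fails, and InvalidSoftMatch arises only inside the Soft Match path.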

Make sure AD synchronization user account is operational

It’s important to ensure the account you configure for synchronization is operational. By default, accounts created in the Azure cloud are set to expire within 90 days, so the password for the synchronization account must be set to never expire. To do this, use the Set-MsolUser PowerShell cmdlet. First, connect to Azure by running the Connect-MsolService cmdlet, then find the synchronization service account by running Get-MsolUser -UserPrincipalName AccountName@DomainName.com. Once the synchronization service account is identified, set its password to never expire with the Set-MsolUser cmdlet as shown below:

Set-MsolUser -UserPrincipalName AccountName@DomainName.Com -PasswordNeverExpires $True

There’s no need to restart the directory synchronization service for the changes to take effect.

Perform a full or delta synchronization

Note that the directory synchronization tool performs a full AD synchronization when you first install the tool. Once the full synchronization is complete, it continues to perform delta synchronizations. If you need to trigger a full synchronization immediately, use the PowerShell cmdlets that are available with the installation of the directory synchronization tool. The Start-ADSyncSyncCycle PowerShell cmdlet can help you perform either a full or delta synchronization.


Run Import-Module ADSync to import the directory synchronization modules and then execute the PowerShell commands below to initiate a full or delta synchronization.

To force a full synchronization, execute the Start-ADSyncSyncCycle -PolicyType Initial PowerShell command, and to force a delta synchronization, execute the Start-ADSyncSyncCycle -PolicyType Delta PowerShell command. If you encounter any issues, check the event logs.

General purpose built-in tools

The directory synchronization installation creates various files under the C:\Program Files\Windows Azure Active Directory Sync folder. The two most important files are ConfigWizard and DirSyncSetup.log. ConfigWizard allows you to reconfigure the AD synchronization settings. For any synchronization-related errors that might have occurred during the initial or delta synchronization, check the DirSyncSetup.log file.
