Tag Archives: different

ArangoDB 3.6 accelerates performance of multi-model database

By definition, a multi-model database provides multiple database models for different use cases and user needs. Among the popular options for a multi-model database is ArangoDB, from the open source database vendor of the same name.

ArangoDB 3.6, released into general availability Jan. 8, brings a series of new updates to the multi-model database platform. Among the updates are improved performance capabilities for queries and overall database operations. Also, the new OneShard feature from the San Mateo, Calif.-based vendor gives organizations a way to achieve robust data resilience through synchronous replication.

For Kaseware, based in Denver, ArangoDB has been a core element since the company was founded in 2016, enabling the law enforcement software vendor’s case management system.

“I specifically sought out a multi-model database because for me, that simplified things,” said Scott Baugher, the co-founder, president and CTO of Kaseware, and a former FBI special agent. “I had fewer technologies in my stack, which meant fewer things to keep updated and patched.”

Kaseware uses ArangoDB as a document, key/value, and graph database. Baugher noted that the one other database the company uses is ElasticSearch, for its full-text search capabilities. Kaseware uses ElasticSearch because until fairly recently, ArangoDB did not offer full-text search capabilities, he said.

“If I were starting Kaseware over again now, I’d take a very hard look at eliminating ElasticSearch from our stack as well,” Baugher said. “I say that not because ElasticSearch isn’t a great product, but it would allow me to even further simplify my deployment stack.” 

Adding OneShard to ArangoDB 3.6

With OneShard, users will gain a new option for database distribution. OneShard is a feature for users whose data is small enough to fit on a single node but whose fault tolerance requirements still require the database to replicate data across multiple nodes, said Joerg Schad, head of engineering and machine learning at ArangoDB.

“ArangoDB will basically colocate all data on a single node and hence offer local performance and transactions as queries can be evaluated on a single node,” Schad said. “It will still replicate the data synchronously to achieve fault tolerance.”
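
To make this concrete, here is a minimal PowerShell sketch of creating a OneShard database through ArangoDB's HTTP API. The endpoint, credentials and database name are placeholders, and the sharding option follows the ArangoDB 3.6 documentation for the Enterprise Edition OneShard feature, so verify the details against the docs for your deployment.

# Minimal sketch: create a OneShard database via ArangoDB's HTTP API (placeholder endpoint and credentials).
$pair = 'root:openSesame'                                   # placeholder credentials
$auth = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$body = @{
    name    = 'shop'                                        # hypothetical database name
    options = @{
        sharding          = 'single'                        # OneShard: colocate all collections on one DB server
        replicationFactor = 3                               # data is still replicated synchronously for fault tolerance
    }
} | ConvertTo-Json -Depth 3
Invoke-RestMethod -Method Post -Uri 'http://localhost:8529/_api/database' `
    -Headers @{ Authorization = $auth } -ContentType 'application/json' -Body $body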

Baugher said he’ll be taking a close look at OneShard.

He noted that Kaseware now uses ArangoDB’s “resilient single” database setup, which in his view is similar, but less robust. 

“One main benefit of OneShard seems to be the synchronous replication of the data to the backup or failover databases versus the asynchronous replication used by the active failover configuration,” Baugher said.

Baugher added that OneShard also allows database reads to happen from any database node. This contrasts with active failover, in that reads are limited to the currently active node only. 

“So for read-heavy applications like ours, OneShard should not only offer performance benefits, but also let us make better use of our standby nodes by having them respond to read traffic,” he said.

More performance gains in ArangoDB 3.6

The ArangoDB 3.6 multi-model database also provides users with faster query execution thanks to a new feature for subquery optimization. Schad explained that when writing queries, it is a typical pattern to build a complex query based on multiple simple queries.

“With the improved subquery optimization, ArangoDB optimizes and processes such queries more efficiently by merging them into one which especially improves performance for larger data sizes up to a factor of 28x,” he said.
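
As a rough illustration of the kind of query that benefits, the sketch below sends an AQL query containing a subquery through ArangoDB's cursor API. The collection names, endpoint and credentials are hypothetical, and whether the optimizer can merge a particular subquery depends on the generated query plan.

# Minimal sketch: run an AQL query with a subquery via the cursor API (hypothetical collections and credentials).
$pair = 'root:openSesame'
$auth = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$aql = @'
FOR c IN customers
  LET totals = (
    FOR o IN orders
      FILTER o.customerId == c._key
      RETURN o.amount
  )
  RETURN { customer: c.name, orders: LENGTH(totals), spent: SUM(totals) }
'@
$body = @{ query = $aql } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri 'http://localhost:8529/_db/shop/_api/cursor' `
    -Headers @{ Authorization = $auth } -ContentType 'application/json' -Body $body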

The new database release also enables parallel execution of queries to further improve performance. Schad said that if a query requires data from multiple nodes, with ArangoDB 3.6 operations can be parallelized to be performed concurrently. The end results, according to Schad, are improvements of 30% to 40% for queries involving data across multiple nodes.

Looking ahead to the next release of ArangoDB, Schad said scalability improvements will be at the top of the agenda.

“For the upcoming 3.7 release, we are already working on improving the scalability even further for larger data sizes and larger clusters,” Schad said.


How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different types of replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster as this also provides you with local high-availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration and hardware setup. The next critical decision you have to make is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage to allow an application to run on any host and access the same data. Multi-site clusters behave the same way, but they require independent storage arrays at each site, with the data replicated between them. The data for the clustered application or virtual machine (VM) at each site should use its own local storage array; otherwise, there could be significant latency if every disk I/O operation had to travel to the other location.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while allowing the VMs to run on different nodes. The challenge when using CSV in a multi-site cluster is that the VMs must make sure that they are always writing to their disk in their site, and not the replicated copy. Most storage providers offer CSV-aware solutions, and you must make sure that they explicitly support multi-site clustering scenarios. Often the vendors will force writes at the primary site by making the CSV disk at the second site read-only, to ensure that the correct disks are always being used.
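
A quick way to confirm which node currently owns each CSV, and therefore where writes are being coordinated, is the FailoverClusters PowerShell module. This is a generic sketch run from a cluster node and is not specific to any vendor's replication product.

# Minimal sketch: list each Cluster Shared Volume, its state and its current owner node.
Import-Module FailoverClusters
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode | Format-Table -AutoSize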

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write command occurs on the primary site, it is copied to the secondary site, and an acknowledgment is returned to both the primary and secondary storage arrays before that write is committed. Synchronous replication ensures consistency between both sites and avoids data loss in the event of a crash between replication cycles. The challenge of writing to two sets of disks in different locations is that the sites must be physically close, or the distance can affect the performance of the application. Even with a high-bandwidth and low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should be considered along with the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors which provide a file-based replication solution that can run on top of commodity storage hardware. These providers will keep track of individual files which have changed, and only copy those. They are often less efficient than the block-level replication solutions as larger chunks of data (full files) must be copied, however, the total cost of ownership can be much less. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s Datakeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers will abstract the underlying sets of storage arrays at each site. This software manages disk access and redirection to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements. Now you can plan your backup and recovery process. Even though the application’s data is being copied to the secondary site, which is similar to a backup, it does not replace the real thing. This is because if the VM (VHD) on one site becomes corrupted, that same error is likely going to be copied to the secondary site. You should still regularly back up any production workloads running at either site.  This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Author: Symon Perriman

What admins need to know about Azure Stack HCI

Despite all the promise of cloud computing, it remains out of reach for administrators who cannot, for different reasons, migrate out of the data center.

Many organizations still grapple with concerns, such as compliance and security, that weigh down any aspirations to move workloads from on-premises environments. For these organizations, hyper-converged infrastructure (HCI) products have stepped in to approximate some of the perks of the cloud, including scalability and high availability. In early 2019, Microsoft stepped into this market with Azure Stack HCI. While it was a new name, it was not an entirely new concept for the company.

Some might see Azure Stack HCI as a mere rebranding of the existing Windows Server Software-Defined (WSSD) program, but there are some key differences that warrant further investigation from shops that might benefit from a system that integrates with the latest software-defined features in the Windows Server OS.

What distinguishes Azure Stack HCI from Azure Stack?

When Microsoft introduced its Azure Stack HCI program in March 2019, there was some initial confusion from many in IT. The company already offered a similarly named product, Azure Stack, which runs a version of the Azure cloud platform inside the data center.

Microsoft developed Azure Stack HCI for local VM workloads that run on Windows Server 2019 Datacenter edition. While not explicitly tied to the Azure cloud, organizations that use Azure Stack HCI can connect to Azure for hybrid services, such as Azure Backup and Azure Site Recovery.

Azure Stack HCI offerings use OEM hardware from vendors such as Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo that is validated by Microsoft to capably run the range of software-defined features in Windows Server 2019.

How is Azure Stack HCI different from the WSSD program?

While Azure Stack is essentially an on-premises version of the Microsoft cloud computing platform, its approximate namesake, Azure Stack HCI, is more closely related to the WSSD program that Microsoft launched in 2017.

Microsoft made its initial foray into the HCI space with its WSSD program, which utilized the software-defined features in the Windows Server 2016 Datacenter edition on hardware validated by Microsoft.

For Azure Stack HCI, Microsoft uses the Windows Server 2019 Datacenter edition as the foundation of this product with updated software-defined functionality compared to Windows Server 2016.

Windows Server gives administrators the virtualization layers necessary to avoid the management and deployment issues related to proprietary hardware. Windows Server’s software-defined storage, networking and compute capabilities enable organizations to more efficiently pool the hardware resources and use centralized management to sidestep traditional operational drawbacks.

For example, Windows Server 2019 offers expanded pooled storage of 4 petabytes in Storage Spaces Direct, compared to 1 PB on Windows Server 2016. Microsoft also updated the clustering feature in Windows Server 2019 for improved workload resiliency and added data deduplication to give an average of 10 times more storage capacity than Windows Server 2016.

What are the deployment and management options?

The Azure Stack HCI product requires the use of the Windows Server 2019 Datacenter edition, which the organization might get from the hardware vendor for a lower cost than purchasing it separately.

To manage the Azure Stack HCI system, Microsoft recommends using Windows Admin Center, a relatively new GUI tool developed as the potential successor to Remote Server Administration Tools, Microsoft Management Console and Server Manager. Microsoft tailored Windows Admin Center for smaller deployments, such as Azure Stack HCI.

The Windows Admin Center server management tool offers a dashboard to check drive performance for latency issues or drive failures.

Windows Admin Center encapsulates a number of traditional server management utilities for routine tasks, such as registry edits, but it also handles more advanced functions, such as the deployment and management of Azure services, including Azure Network Adapter for companies that want to set up encryption for data transmitted between offices.

Companies that purchase an Azure Stack HCI system get Windows Server 2019 for its virtualization technology that pools storage and compute resources from two nodes up to 16 nodes to run VMs on Hyper-V. Microsoft positions Azure Stack HCI as an ideal system for multiple scenarios, such as remote office/branch office and VDI, and for use with data-intensive applications, such as Microsoft SQL Server.
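
As a rough sketch of what that looks like under the covers, the cluster and Storage Spaces Direct layers are built with standard Windows Server cmdlets. The node names, cluster name, address and volume size below are placeholders; a production Azure Stack HCI deployment would follow the OEM's and Microsoft's deployment guidance.

# Minimal sketch: validate and build a two-node cluster, then enable Storage Spaces Direct (placeholder names).
$nodes = 'HCI-NODE1', 'HCI-NODE2'
Test-Cluster -Node $nodes -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
New-Cluster -Name 'HCI-CLU01' -Node $nodes -StaticAddress '192.168.1.50' -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession 'HCI-CLU01'
# Carve a volume out of the S2D pool for VM storage (size is illustrative).
New-Volume -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName 'S2D*' -Size 1TB -CimSession 'HCI-CLU01'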

How much does it cost to use Azure Stack HCI?

The Microsoft Azure Stack HCI catalog features more than 150 models from 20 vendors. A general-purpose node will cost about $10,000, but the final price will vary depending on the level of customization the buyer wants.

There are multiple server configuration options that cover a range of processor models, storage types and networking. For example, some nodes have ports with 1 Gigabit Ethernet, 10 GbE, 25 GbE and 100 GbE, while other nodes support a combination of 25 GbE and 10 GbE ports. Appliances optimized for better performance that use all-flash storage will cost more than units with slower, traditional spinning disks.

On top of the price of the hardware are the annual maintenance and support fees, which are typically a percentage of the purchase price of the appliance.

If a company opts to tap into the Azure cloud for certain services, such as Azure Monitor to assist with operational duties by analyzing data from applications to determine if a problem is about to occur, then additional fees will come into play. Organizations that remain fixed with on-premises use for their Azure Stack HCI system will avoid these extra costs.


Grafana Labs observability platform set to grow

Data resides in many different places and getting observability of data is a key challenge for many database managers and other data professionals.

Among the most popular technologies for data observability is the open source Grafana project, which is led by the commercial open source vendor Grafana Labs. The company leads multiple open source projects and also sells enterprise-grade products and services that enable a full data observability platform.

On Oct. 24, Grafana Labs marked the next major phase of the vendor’s evolution, raising $24 million in a Series A round of funding led by Lightspeed Venture Partners, with participation from Lead Edge Capital. The new money will help the vendor grow beyond its roots to address a wider range of data use cases, according to the company.

In this Q&A, Raj Dutt, co-founder and CEO, discusses the intersection of open source and enterprise software and where Grafana is headed.

Why are you now raising a Series A?

Raj Dutt: We just celebrated our five-year anniversary earlier this month and we’ve built a sustainable company that was running at cashflow breakeven.

So the reason why we’ve raised funding is because we think we’ve proven phase one of our business model and our platform. Now we’re basically accelerating that to go well beyond Grafana Labs itself into a full stack, composable observability platform. So it’s mainly around accelerating what we’re doing in the observability ecosystem.

We’re thinking about building this open and composable observability stack with the larger ecosystem that doesn’t just include our own open source projects. You may know us obviously as the company behind Grafana, but we’re actually the company behind Loki, which is another very interesting, very popular open source project. But we also participate in other projects that we don’t necessarily own. We are one of the driving forces behind the Prometheus project and we are actively involved in the Graphite project.

Grafana itself has a history since it was started of being database-neutral. So today, we’re interoperating natively and in real time with 42 different data sources. We’re all about bringing your data together, no matter where it lives.
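
As an example of that interoperability, data sources can be registered with Grafana programmatically through its HTTP API as well as through the UI. The Grafana URL, API key and Prometheus address in this sketch are placeholders.

# Minimal sketch: register a Prometheus data source with Grafana's HTTP API (placeholder URL and API key).
$headers = @{
    Authorization  = 'Bearer <grafana-api-key>'             # hypothetical API key
    'Content-Type' = 'application/json'
}
$body = @{
    name      = 'Prometheus'
    type      = 'prometheus'
    url       = 'http://prometheus.example.local:9090'
    access    = 'proxy'
    isDefault = $true
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri 'http://grafana.example.local:3000/api/datasources' `
    -Headers $headers -Body $body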

While Grafana Labs as a company works with a Cloud Native Computing Foundation (CNCF) project such as Prometheus, have you considered contributing Grafana to the CNCF, or another open source organization?

Dutt: Not really, I said we work with some CNCF projects like Prometheus, but there’s no desire on our part to put our own projects such as Grafana or Loki into the CNCF.

We are an open source observability company, and this is our core competency and our core brand. Part of our strategy for delivering differentiated solutions to our customers involves being more in control of our own destiny, so to speak.

We very much believe in the power of the community. We do have a pretty active community, though certainly more than 50 percent of the work is done by Grafana Labs. We have a habit of always hiring the top contributors within the community, which is how we scale our engineering team.

If you look at the Grafana plugin ecosystem, which has more than 100 plugins, the majority of those have been contributed by the community, not developed by Grafana Labs.

What are your plans for the next major release with Grafana 7?

Dutt: Grafana 7 is slated for 2020. We’ve generally done a major release of Grafana every year that normally coincides with our annual Grafana user conference, which next year will be coming back to Amsterdam.

The major theme for Grafana 7 is really about it becoming more of a developer platform for people to build use case specific experiences with and also going beyond metrics into logging and tracing. So we’re really building this full observability stack and that is our 2020 product vision.

We think that the three pillars of observability are logging, metrics and traces, but it’s really about how you bring that data together and contextualize it in a seamless experience and that’s what we can do with Grafana at the center of this platform.

We can give people the choice to continue to use, say, Splunk for logging, Datadog for metrics, or New Relic for APM (application performance management), while not requiring them to store all their data in one database. We think it is a really compelling option to customers to give them the choice and flexibility to use best-of-breed open source software without locking them in.

What is the intersection between open source and enterprise software for Grafana Labs?

Dutt: With Grafana Enterprise, we take the open source version and we add certain capabilities and integrations to it. So we take Grafana, the open source version, and we add data sources, and we combine it with 24/7 support. We also add features generally around authentication and security clients that are generally appealing to our largest users.

With Grafana Labs, the company is all about creating these truly open source projects with communities under real open source licensing, and then finding ways generally under non-open source licensing to differentiate them.

You know, if you want to have something be open source, then make it really open source, and if it doesn’t work through a business model to make a particular thing open source, then don’t make it open source.

So our view is we have a lot of open source software, which is truly open source, meaning under a real open source license like Apache, and we also have our enterprise offerings that are not open.

We consider ourselves an open source company, because it’s in our DNA, but we really don’t want to play games with a lot of these newfangled open source licenses that you’re seeing proliferate.

How is Grafana being used today for data management and analytics use cases?

Dutt: Grafana demand has traditionally been driven primarily by the development teams and the operations teams. What’s happened recently is, particularly with the support of things like SQL data sources as well as support for things like BigQuery and other data sources, we’ve seen a lot of business users and business metrics being brought into Grafana very organically.

So we’re at this interesting intersection now where we’re being pushed into business analytics by our developer-centric customers and users. But we don’t claim to compete head on with, say, Tableau or Power BI. We don’t consider ourselves a BI company, but the open source Grafana project is definitely being pulled in that direction by its user base.

The Grafana project itself has always been use case agnostic. There’s nothing in Grafana that is specific to IT, cloud native or anything like that, and that has been a deliberate decision. We’re kind of excited to see where the community organically takes us.

This interview has been edited for clarity and conciseness.


The 3 types of DNS servers and how they work

Not all DNS servers are created equal, and understanding how the three different types of DNS servers work together to resolve domain names can be helpful for any information security or IT professional.

DNS is a core internet technology that translates human-friendly domain names into machine-usable IP addresses, such as www.example.com into 192.0.2.1. The DNS operates as a distributed database, where different types of DNS servers are responsible for different parts of the DNS name space.

The three DNS server types are the following:

  1. DNS stub resolver server
  2. DNS recursive resolver server
  3. DNS authoritative server

Figure 1 below illustrates the three different types of DNS server.

A stub resolver is a software component normally found in endpoint hosts that generates DNS queries when application programs running on desktop computers or mobile devices need to resolve DNS domain names. DNS queries issued by stub resolvers are typically sent to a DNS recursive resolver; the resolver will perform as many queries as necessary to obtain the response to the original query and then send the response back to the stub resolver.

Figure 1. The three different types of DNS server interoperate to deliver correct and current mappings of IP addresses with domain names.

The recursive resolver may reside in a home router, be hosted by an internet service provider or be provided by a third party, such as Google’s Public DNS recursive resolver at 8.8.8.8 or the Cloudflare DNS service at 1.1.1.1.
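
On Windows, you can watch the stub-to-recursive step from PowerShell. The sketch below sends the same query to the resolver configured on the host and then explicitly to the public resolvers mentioned above.

# Minimal sketch: query the default recursive resolver, then specific public resolvers.
Resolve-DnsName -Name www.example.com -Type A                   # uses the resolver configured on this host
Resolve-DnsName -Name www.example.com -Type A -Server 8.8.8.8   # ask Google's public recursive resolver
Resolve-DnsName -Name www.example.com -Type A -Server 1.1.1.1   # ask Cloudflare's public recursive resolver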

Since the DNS operates as a distributed database, different servers are responsible — authoritative in DNS-speak — for different parts of the DNS name space.

Figure 2 illustrates a hypothetical DNS resolution scenario in which an application uses all three types of DNS servers to resolve the domain name www.example.com into an IPv4 address — in other words, a DNS address resource record.

Figure 2. DNS servers cooperate to accurately resolve an IP address from a domain name.

In step 1, the stub resolver at the host sends a DNS query to the recursive resolver. In step 2, the recursive resolver resends the query to one of the DNS authoritative name servers for the root zone. This authoritative name server does not have the response to the query but is able to provide a reference to the authoritative name server for the .com zone. As a result, the recursive resolver resends the query to the authoritative name server for the .com zone.

This process continues until the query is finally resent to an authoritative name server for the www.example.com zone that can provide the answer to the original query — i.e., what are the IP addresses for www.example.com? Finally, in step 8, this response is sent back to the stub resolver.
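
You can loosely retrace the resolver's walk down the delegation chain by asking each zone for its name servers yourself. This is only an illustration of the referral process described above, not how a stub resolver actually behaves.

# Minimal sketch: retrace the delegation chain for www.example.com by hand.
Resolve-DnsName -Name com -Type NS              # name servers for the .com zone (referred from the root)
Resolve-DnsName -Name example.com -Type NS      # name servers for the example.com zone
Resolve-DnsName -Name www.example.com -Type A   # finally, the address record itself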

One thing worth noting is that all these DNS messages are transmitted in the clear, and there is the potential for malicious actors to monitor users’ internet activities. Anyone administering DNS servers should be aware of DNS privacy issues and the ways in which those threats can be mitigated.


How to rebuild the SYSVOL tree using DFSR

Active Directory has a number of different components to keep track of user and resource information in an organization.

If one piece starts to fail and a recovery effort falters, it could mean it’s time for a rebuilding process.

The system volume (SYSVOL) is a shared folder found on domain controllers in an Active Directory domain that distributes the logon and policy scripts to users on the domain. Creating the first domain controller also produces SYSVOL and its initial contents. As you build domain controllers, the SYSVOL structure is created, and the contents are replicated from another domain controller. If this replication fails, it could leave the organization in a vulnerable position until it is corrected.

How the SYSVOL directory is organized

SYSVOL contains the following items:

  • group policy data;
  • logon scripts;
  • staging folders used to synchronize data and files between domain controllers; and
  • file system junctions.
Figure 1: Use the Get-SmbShare cmdlet to show the SYSVOL and NETLOGON shares on an Active Directory domain controller.

The Distributed File System Replication (DFSR) service replicates SYSVOL data on Windows 2008 and above when the domain functional level is Windows 2008 and above.

Figure 2. The SYSVOL folder contains four folders: domain, staging, staging areas and sysvol.

The position of SYSVOL on disk is set when you promote a server to a domain controller. The default location is C:\Windows\SYSVOL\sysvol, as shown in Figure 1.

For this tutorial, we will use PowerShell Core v7 preview 3, because it fixes the .NET Core bug related to displaying certain properties, such as ProtectedFromAccidentalDeletion.

SYSVOL contains a number of folders, as shown in Figure 2.

How to protect SYSVOL before trouble strikes

As the administrator in charge of Active Directory, you need to consider how you’ll protect the data in SYSVOL in case of corruption or user error.

Windows backs up SYSVOL as part of the system state, but you should not restore from system state, as it might not result in a proper restoration of SYSVOL. If you’re working with the domain controller that holds the relative identifier (RID) master flexible single master operations (FSMO) role, you definitely don’t want to restore system state and risk having multiple objects with the same security identifier. You need a file-level backup of the SYSVOL area. Don’t forget you can use Windows Server Backup to protect SYSVOL on a domain controller if you can’t use your regular backup approach.

If you can’t use a backup, then login scripts can be copied to a backup folder. Keep the backup folder on the same volume so the permissions aren’t altered. You can back up group policy objects (GPOs) with PowerShell:

Import-Module GroupPolicy -SkipEditionCheck

The SkipEditionCheck parameter is required, because the GroupPolicy module hasn’t had CompatiblePSEditions in the module manifest set to include Core.

Create a folder for the backups:

New-Item -ItemType Directory -Path C:\ -Name GPObackup

Use the date to create a subfolder name and create the subfolder for the current backup:

$date = (Get-Date -Format 'yyyyMMdd').ToString()

New-Item -ItemType Directory -Path C:\GPObackup -Name $date

Run the backup:

Backup-GPO -All -Path (Join-Path -Path C:\GPObackup -ChildPath $date)

If you still use login scripts, rather than doing everything through GPOs, the system stores your scripts in the NETLOGON share in the C:\Windows\SYSVOL\domain\scripts folder.
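
If you take the copy approach for login scripts, a dated mirror of the scripts folder is enough. This is a minimal sketch that assumes the default SYSVOL location and uses robocopy so that NTFS permissions are carried along; adjust the paths to your environment.

# Minimal sketch: mirror the logon scripts to a dated backup folder, preserving NTFS permissions.
$date   = (Get-Date -Format 'yyyyMMdd').ToString()
$source = 'C:\Windows\SYSVOL\domain\scripts'
$target = "C:\SYSVOLscriptbackup\$date"
robocopy $source $target /MIR /COPYALL /R:1 /W:1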

Restore the SYSVOL folder

SYSVOL replication through DFSR usually works. However, as with any system, it’s possible for something to go wrong. There are two scenarios that should be covered:

  • Loss of SYSVOL information on a single domain controller. The risk is that the change that removed the data from SYSVOL has already replicated across the domain.
  • Loss of SYSVOL on all domain controllers, which requires a complete rebuild.

The second case involving a complete rebuild of SYSVOL is somewhat more complicated, with the first case being a subset of the second. The following steps explain how to recover from a complete loss of SYSVOL, with added explainers to perform an authoritative replication of a lost file.

Preparing for a SYSVOL restore

To prepare to rebuild the SYSVOL tree, stop the DFSR service on all domain controllers:

Stop-Service DFSR
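
If you manage more than a handful of domain controllers, you can stop the service on all of them remotely. This sketch assumes the ActiveDirectory module is available and PowerShell remoting is enabled.

# Minimal sketch: stop the DFSR service on every domain controller in the domain.
Import-Module ActiveDirectory
$dcs = (Get-ADDomainController -Filter *).HostName
Invoke-Command -ComputerName $dcs -ScriptBlock { Stop-Service -Name DFSR }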

On the domain controller with the SYSVOL you want to fix — or the one with the data you need to replicate — disable DFSR and make the server authoritative.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC01,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * |

Set-ADObject -Replace @{'msDFSR-Enabled'=$false; 'msDFSR-options'=1}

Disable DFSR on the other domain controllers in the domain. The difference in the commands is you’re not setting the msDFSR-options property.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC02,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * |

Set-ADObject -Replace @{'msDFSR-Enabled'=$false}

Rebuild the SYSVOL tree data

The next step is to restore the data. You can skip this if you’re just forcing replication of lost data.

On domain controllers where you can’t perform a restore, you’ll need to rebuild the SYSVOL tree folder structure and share structure. This tutorial assumes you’ve created SYSVOL in the default location with the following folder structure:

C:\Windows\SYSVOL

C:\Windows\SYSVOL\domain

C:\Windows\SYSVOL\domain\policies

C:\Windows\SYSVOL\domain\scripts

C:\Windows\SYSVOL\staging

C:\Windows\SYSVOL\staging\domain

C:\Windows\SYSVOL\staging areas

C:\Windows\SYSVOL\sysvol

You can use the following PowerShell commands to re-create the folders in the minimum number of steps. Be sure to change the nondefault location of the Stest folder used below to match your requirements.

New-Item -Path C:\Stest\SYSVOL\domain\scripts -ItemType Directory

New-Item -Path C:\Stest\SYSVOL\domain\policies -ItemType Directory

New-Item -Path C:\Stest\SYSVOL\staging\domain -ItemType Directory

New-Item -Path 'C:\Stest\SYSVOL\staging areas' -ItemType Directory

New-Item -Path C:\Stest\SYSVOL\sysvol -ItemType Directory

Re-create the directory junction points. Map SYSVOL\domain (source folder) to SYSVOL\SYSVOL and SYSVOL\staging\domain (source folder) to SYSVOL\staging areas.

You need to run mklink as administrator from a command prompt, rather than PowerShell:

C:\Windows>mklink /J C:\stest\SYSVOL\SYSVOL\sphinx.org C:\stest\SYSVOL\domain

Junction created for C:\stest\SYSVOL\SYSVOL\sphinx.org <<===>> C:\stest\SYSVOL\domain

C:\Windows>mklink /J "C:\stest\SYSVOL\staging areas\sphinx.org" C:\stest\sysvol\Staging\domain

Junction created for C:\stest\SYSVOL\staging areas\sphinx.org <<===>> C:\stest\sysvol\Staging\domain

Set the following permissions on the SYSVOL folder:

NT AUTHORITY\Authenticated Users: ReadAndExecute, Synchronize

NT AUTHORITY\SYSTEM: FullControl

BUILTIN\Administrators: Modify, ChangePermissions, TakeOwnership, Synchronize

BUILTIN\Server Operators: ReadAndExecute, Synchronize

Inheritance should be blocked.
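
Those permissions can also be applied from PowerShell. This is a minimal sketch that assumes the default SYSVOL location, so point $path at the folder you actually rebuilt and double-check the identities and rights against your own baseline before applying it.

# Minimal sketch: apply the permissions listed above and block inheritance on the SYSVOL folder.
$path = 'C:\Windows\SYSVOL\sysvol'              # adjust to the rebuilt location, e.g. C:\Stest\SYSVOL\sysvol
$acl = Get-Acl -Path $path
$acl.SetAccessRuleProtection($true, $false)     # block inheritance and drop inherited entries
$rules = @(
    @('NT AUTHORITY\Authenticated Users', 'ReadAndExecute, Synchronize'),
    @('NT AUTHORITY\SYSTEM', 'FullControl'),
    @('BUILTIN\Administrators', 'Modify, ChangePermissions, TakeOwnership, Synchronize'),
    @('BUILTIN\Server Operators', 'ReadAndExecute, Synchronize')
)
foreach ($r in $rules) {
    $rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
        -ArgumentList $r[0], $r[1], 'ContainerInherit, ObjectInherit', 'None', 'Allow'
    $acl.AddAccessRule($rule)
}
Set-Acl -Path $path -AclObject $acl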

If you don’t have a backup of the GPOs, re-create the default GPOs with the DCGPOFIX utility, and then re-create your other GPOs.

You may need to re-create the SYSVOL share (See Figure 1). Set the share permissions to the following:

Everyone: Read

Authenticated Users: Full control

Administrators group: Full control

Set the share comment (description) to Logon server share.
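
If the share is missing, it can be re-created from PowerShell. This sketch assumes the default SYSVOL path, so adjust it to the sysvol folder you rebuilt.

# Minimal sketch: re-create the SYSVOL share with the permissions and description listed above.
New-SmbShare -Name 'SYSVOL' -Path 'C:\Windows\SYSVOL\sysvol' `
    -Description 'Logon server share' `
    -ReadAccess 'Everyone' `
    -FullAccess 'NT AUTHORITY\Authenticated Users', 'BUILTIN\Administrators'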

Check that the NETLOGON share is available. It remained available during my testing process, but you may need to re-create it. 

Share permissions for NETLOGON are the following:

Everyone: Read

Administrators: Full control

You should be able to restart replication.

How to restart Active Directory replication

Start the DFSR service and reenable DFSR on the authoritative server:

Start-Service -Name DFSR

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC01,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * | Set-ADObject -Replace @{'msDFSR-Enabled'=$true}

Run the following command to initialize SYSVOL:

DFSRDIAG POLLAD

If you don’t have the DFS management tools installed, run this command from a Windows PowerShell 5.1 console:

Install-WindowsFeature RSAT-DFS-Mgmt-Con

The ServerManager module cannot load into PowerShell Core at this time.

Start DFSR service on other domain controllers:

Start-Service -Name DFSR

Enable DFSR on the nonauthoritative domain controllers. Check that replication has occurred.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC02,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * | Set-ADObject -Replace @{'msDFSR-Enabled'=$true}

Run DFSRDIAG on the nonauthoritative domain controllers:

DFSRDIAG POLLAD

The results might not be immediate, but replication should restart, and then SYSVOL should be available.

The process of rebuilding the SYSVOL tree is not something that occurs every day. With any luck, you won’t have to do it ever, but it’s a skill worth developing to ensure you can protect and recover your Active Directory domain.


Construct a solid Active Directory password policy

The information technology landscape offers many different methods to authenticate users, including digital certificates, one-time password tokens and biometrics.

However, there is no escaping the ubiquity of the password. The best Active Directory password policy for your organization should meet the threshold for high security and end-user satisfaction while minimizing the amount of maintenance effort.

Password needs adjust over time

Before the release of Windows Server 2008, Active Directory (AD) password policies were scoped exclusively at the domain level. The AD domain represented the fundamental security and administrative boundary within an AD forest.

The guidance at the time was to give all users within a domain the same security requirements. If a business needed more than one password policy, then your only choice was to break the forest into one or more child domains or separate domain trees.

Windows Server 2008 introduced fine-grained password policies, which allow administrators to assign different password settings objects to different AD groups. Your domain users would have one password policy while you would have different policies for domain administrators and your service accounts.
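
For reference, a fine-grained policy is created and assigned with the ActiveDirectory PowerShell module. The policy name, settings and target group below are illustrative values only; choose thresholds that match your own compliance requirements.

# Minimal sketch: create a stricter fine-grained password policy and apply it to an admin group (illustrative values).
Import-Module ActiveDirectory
New-ADFineGrainedPasswordPolicy -Name 'AdminPasswordPolicy' -Precedence 10 `
    -MinPasswordLength 14 -ComplexityEnabled $true -ReversibleEncryptionEnabled $false `
    -MinPasswordAge (New-TimeSpan -Days 1) -MaxPasswordAge (New-TimeSpan -Days 180) `
    -PasswordHistoryCount 24 -LockoutThreshold 5 `
    -LockoutDuration (New-TimeSpan -Minutes 30) -LockoutObservationWindow (New-TimeSpan -Minutes 30)
Add-ADFineGrainedPasswordPolicySubject -Identity 'AdminPasswordPolicy' -Subjects 'Domain Admins'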

More security policies mean more administrative work

Deploying multiple password policies within a single AD domain allows you to check your compliance boxes and have additional flexibility, but there are trade-offs. First, increasing the complexity of your Active Directory password policy infrastructure results in greater administrative burden and increased troubleshooting effort.

Second, the more intricate the password policy, the unhappier your users will be. This speaks to the information security counterbalance between security strength on one side and user convenience on the other.

What makes a quality password? For the longest time, we had the following recommendations:

  • minimum length of 8 characters;
  • a mixture of uppercase and lowercase letters;
  • inclusion of at least one number;
  • inclusion of at least one non-alphanumeric character; and
  • no fragments of a username.

Ideally, the password should not correspond to any word in any dictionary to thwart dictionary-based brute force attacks. One way to develop a strong password is to create a passphrase and “salt” the passphrase with numbers and/or non-alphanumeric characters.

The key to remembering a passphrase is to make it as personal as possible. For example, take the following phrase: The hot dog vendor sold me 18 cold dogs.

That phrase may have some private meaning, which makes it nearly impossible to forget. Next, we take the first letter of each word and the numbers to obtain the following string: Thdvsm18cd.

If we switch the letter s with a dollar sign, then we’ve built a solid passphrase of Thdv$m18cd.

Striking the right balance

One piece of advice I nearly always offer to my consulting clients is to keep your infrastructure as simple as possible, but not too simple. What that means related to your Active Directory password policy is:

  • keep your domains to a minimum in your AD forest;
  • minimize your password policies while staying in compliance with your organizational/security requirements;
  • relax the password policy restrictions; and
  • encourage users to create a single passphrase that is both easy to remember but hard to guess.

Password guidelines adjust over time

Relax the password policy? Yes, that’s correct. In June 2017, the National Institute of Standards and Technology (NIST) released Special Publication 800-63B, which presented a more balanced approach between usability and security.

When you force your domain users to change their passwords regularly, they are likely to reuse some portion of their previous passwords, such as password, password1, password2, and so forth.

The new NIST guidance suggests that user passwords:

  • range between 8 and 64 characters in length;
  • have the ability to use non-alphanumerics, but do not make it a requirement;
  • prevent sequential or repeating characters;
  • prevent context-specific passwords such as user name and company name;
  • prevent commonly used passwords; and
  • prevent passwords from known public data breaches.
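
Native Active Directory policy settings cannot express all of these checks, which is where the third-party tools discussed in the next section come in, but the spirit of the guidance is easy to capture in code. The sketch below is purely illustrative; the banned list stands in for a real breach or common-password feed.

# Minimal sketch of NIST-style checks; the banned list is a placeholder for a real breach/common-password feed.
function Test-PasswordCandidate {
    param(
        [Parameter(Mandatory)][string]$Password,
        [string]$UserName = '',
        [string]$CompanyName = '',
        [string[]]$BannedList = @('password', '123456', 'qwerty', 'letmein')
    )
    if ($Password.Length -lt 8 -or $Password.Length -gt 64) { return $false }                 # 8 to 64 characters
    if ($Password -match '(.)\1{2,}') { return $false }                                       # repeated characters (aaa)
    if ($Password -match '(?:012|123|234|345|456|567|678|789|abc|bcd|cde)') { return $false } # simple sequences
    foreach ($term in @($UserName, $CompanyName) | Where-Object { $_ }) {
        if ($Password -match [regex]::Escape($term)) { return $false }                        # context-specific terms
    }
    if ($BannedList -contains $Password) { return $false }                                    # common/breached passwords
    return $true
}
Test-PasswordCandidate -Password 'Thdv$m18cd' -UserName 'jsmith' -CompanyName 'Contoso'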

Boost password quality with help from tools

These are great suggestions, but they are difficult to implement with native Active Directory password policy tools. For this reason, many businesses purchase a third-party password management tool, such as Anixis Password Policy Enforcer, ManageEngine ADSelfService Plus, nFront Password Filter, Specops Password Policy, Thycotic Secret Server and Tools4ever Password Complexity Manager, to name a few.

Third-party password policy tools tap into the cloud to take advantage of public identity breach databases, lists of the most common passwords and other sources to make your domain password policy much more contemporary and organic. It’s worth considering the cost of these products when you consider the potential loss from a data breach that happened because of a weak password.


Easily integrate Skype calls into your content with the new content creators feature | Skype Blogs

Skype is used worldwide as a tool for bringing callers into a variety of different podcasts, live streams, and TV shows. Today, we made it even simpler to bring your incoming audio and video calls to life with the Skype for content creators feature.

Building off the Skype TX appliance for professional studios, we built the feature directly into the desktop app, so podcasters, vloggers, and live streamers can bring Skype calls directly into their content without the need for expensive equipment, studio setup, or multiple crew members.

From one-on-one audio calls up to four-person group video calls, incoming Skype calls are available for you to build into your own content.

The feature uses NewTek NDI®. You need an NDI-enabled application or device to use Skype for content creators.

There are a number of NDI-enabled software and appliances to choose from,* including:

  • NewTek TriCaster®
  • Xsplit
  • OBS with NDI plugin
  • Pro presenter
  • Wirecast
  • vMIX
  • Ecamm Live for Mac
  • Ovrstream

You will be able to edit, brand, and distribute your Skype content, which can then be sent to a group of friends, uploaded as a podcast or vlog, or live streamed to an audience of millions using platforms such as Facebook, YouTube, Twitch, and LinkedIn.

Skype for content creators is now available on the latest version of Skype for Windows and Mac. Visit Skype for content creators to learn more. We would love to hear from you and see what you have created using this feature; email us at [email protected]

*Third-party applications have not been checked, verified, certified, or otherwise approved or endorsed by Skype. Applications may be subject to the third-party provider’s own terms and privacy policy. Skype does not control and is not responsible for the download, installation, pricing, quality, performance, availability, support, or terms and conditions of purchase of third-party applications.

For Sale – MacBook Pros for sale: 15″ 2018 refresh and 13″ late 2013

Hi all

Slightly different advert this time!

I upgraded to the new 15″ 2018 MBP, and it’s overkill for my needs – slightly too big and the new quad core 13″ will have sufficient power and be a lot cheaper… so in the interests of being sensible here goes:

Specs in brief:
Silver 15.4″ 2018 MBP, retails at £2,699 – asking £2,200
It’s been turned on and used twice – Apple warranty until 28 August 2019. (I have the original packaging – and the brown box it came in!)
+ 6-core 2.6GHz i7
+ 512GB SSD
+ 16GB 2400MHz RAM
+ Radeon Pro 560X with 4GB of GDDR5 memory and automatic graphics switching
+ Intel UHD Graphics 630

Separately, I am selling on behalf of a friend a late 2013 13″ MBP. Specs are:
Silver 13″ 2013 MBP, asking £550
+ Runs beautifully and to my eyes it’s in perfect condition (it has been repaired once, I don’t know what was repaired, but I’ve tested it for a few days and it has worked perfectly – and I am posting this ad using it)
+ Shows a cycle count of 2
+ 256GB SSD
+ Intel Iris 1536MB
+ running OS X 10.9.5

As always, I prefer potential buyers to see the goods before buying, but postage is an option.

Thanks for reading, and let me know if any questions.
Mike

Price and currency: 2200 and 500
Delivery: Delivery cost is included within my country
Payment method: Cash or BACS on collection preferred
Location: London
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.
