Tag Archives: many

Organize Active Directory with these strategies


It’s a familiar refrain for many in the IT field: You start a new job and have to clean up the previous administrator’s handiwork, such as their Active Directory group configuration.


You might inherit an Active Directory group strategy from an admin who didn’t think the process through, leaving you with a setup that doesn’t reflect how your users actually work. Administrators who take the time to organize organizational units and groups coherently simplify their own workload: Active Directory identities become easier to audit, and the directory’s attack surface shrinks.

Here are some practical tips and tricks to streamline your Active Directory (AD) administrator work and support your security compliance officers.

The traditional Active Directory design pattern

To start, always organize individual user accounts into groups. Avoid giving access permissions to individual user accounts because that approach does not scale.

Figure 1 shows Microsoft’s recommendation to organize Active Directory user accounts for resource access.

Figure 1. Microsoft recommends the account, global, domain local, permission (AGDLP) security model to organize Active Directory user accounts.

The account, global, domain local, permission (AGDLP) model uses the following workflow:

  • Organize users into global groups based on business criteria, such as department and location.
  • Place the appropriate global groups into domain local groups on resource servers based on similar resource access requirements.
  • Grant resource permissions to domain local groups only.

Note how this model uses two different scopes. Global groups organize AD users at the domain level, and domain local groups organize global groups at the access server level, such as a file server or a print server.
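
To make the chain concrete, here is a minimal, self-contained Python sketch of the AGDLP resolution logic. It models only the relationships; it does not talk to a domain controller, and every account, group and share name in it is hypothetical.

# Hypothetical AGDLP data: accounts -> global groups -> domain local groups -> permissions.
GLOBAL_GROUPS = {
    "GL_Marketing": {"jsmith", "mjones"},    # A/G: accounts grouped by business criteria
    "GL_Accounting": {"akhan"},
}

DOMAIN_LOCAL_GROUPS = {
    "DL_MktgShare_Read": {"GL_Marketing"},                     # DL: global groups nested
    "DL_MktgShare_Change": {"GL_Marketing", "GL_Accounting"},  #     in domain local groups
}

PERMISSIONS = {                                                # P: permissions granted to
    "DL_MktgShare_Read": ("MarketingShare", "Read"),           #    domain local groups only
    "DL_MktgShare_Change": ("MarketingShare", "Modify"),
}

def effective_permissions(user):
    """Walk account -> global group -> domain local group -> permission."""
    user_globals = {g for g, members in GLOBAL_GROUPS.items() if user in members}
    user_locals = {d for d, nested in DOMAIN_LOCAL_GROUPS.items() if nested & user_globals}
    return [PERMISSIONS[d] for d in sorted(user_locals)]

print(effective_permissions("jsmith"))
# [('MarketingShare', 'Modify'), ('MarketingShare', 'Read')]

Keeping the chain this regular is what makes questions like “why does this user have Modify on the marketing share?” easy to answer later.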

Employ role-based access control principles

Role-based access control (RBAC) grants access to groups based on job role. For example, consider network printer access:

  • Most users need only the ability to submit and manage their own print jobs.
  • Some users have delegated privileges to manage the entire print queue.
  • Select users have full administrative access to the printer’s hardware and software.

Microsoft helps with some of the planning work by prepopulating role groups in Active Directory. For instance, installing the DNS Server role creates delegated administrative groups, such as DnsAdmins, in Active Directory.
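
As a concrete illustration of the printer example above, here is a small, hypothetical Python sketch of an RBAC mapping from role groups to the operations they allow. The group and operation names are invented for illustration, not taken from Windows.

# Hypothetical role groups mapped to the printer operations they allow.
PRINTER_ROLES = {
    "DL_Printer_Users": {"submit_job", "manage_own_jobs"},
    "DL_Printer_Operators": {"submit_job", "manage_own_jobs", "manage_queue"},
    "DL_Printer_Admins": {"submit_job", "manage_own_jobs", "manage_queue", "manage_device"},
}

def allowed_operations(member_of):
    """Union of the operations granted by every role group the account belongs to."""
    operations = set()
    for group in member_of:
        operations |= PRINTER_ROLES.get(group, set())
    return operations

print("manage_queue" in allowed_operations(["DL_Printer_Users"]))      # False
print("manage_queue" in allowed_operations(["DL_Printer_Operators"]))  # True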

How to set up users and groups in Active Directory

Instead of relying on prebuilt groups, think about the user population and how to design global and domain local groups. Try to organize Active Directory global groups according to business rules and domain local groups based on access roles.

You might have global groups defined for each business unit at your organization, including IT, accounting, legal, manufacturing and human resources. You might also have domain local groups based on specific job tasks: print queue managers, print users, file access managers, file readers, database reporters and database developers.

When you organize Active Directory, the goals are to describe both the user population and its resource access requirements completely and accurately, and to keep the number of global and domain local groups as small as possible to reduce the management workload.

Keep group nesting to a minimum if possible

Keep group nesting to a minimum: nesting increases administrative overhead and makes effective access harder to troubleshoot. Populate global groups only with individual Active Directory user accounts, and populate domain local groups only with global groups.

Figure 2. The Effective Access tab displays the effective permissions for groups, users and device accounts.

The Windows Server and client operating systems include a feature called Effective Access, found in the advanced security settings dialog box of a file or folder’s properties sheet. From there, you can model effective access for a particular user, group or computer account. The feature doesn’t scale to many folders, however; you have to run it once per folder to analyze permissions.
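
Because the built-in tool has to be run folder by folder, and nesting hides where a permission actually comes from, it can help to flatten nested group memberships in a quick script when troubleshooting. The sketch below is illustrative only: the group names and memberships are made up, and against a live directory you would feed it real member lists exported with your usual tooling.

# Hypothetical nested memberships: each group maps to its direct members
# (accounts or other groups). Deep nesting is what makes effective access hard to trace.
MEMBERS = {
    "DL_Finance_Data": ["GL_Accounting", "GL_Auditors"],
    "GL_Accounting": ["akhan", "GL_Payroll"],   # a group nested inside another group
    "GL_Payroll": ["bgarcia"],
    "GL_Auditors": ["cwei"],
}

def flatten(group, seen=None):
    """Return every account reachable through any depth of nesting."""
    seen = set() if seen is None else seen
    accounts = set()
    for member in MEMBERS.get(group, []):
        if member in MEMBERS:            # the member is itself a group
            if member not in seen:       # guard against circular nesting
                seen.add(member)
                accounts |= flatten(member, seen)
        else:
            accounts.add(member)
    return accounts

print(sorted(flatten("DL_Finance_Data")))  # ['akhan', 'bgarcia', 'cwei']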

In a multi-domain environment, nesting is unavoidable, so stick to single-domain topologies when possible.

Figure 3. A cross-domain resource access configuration in Active Directory offers more flexibility to the administrator.

I recommend the topology in Figure 3 because while global groups can contain Active Directory user accounts from their own domain only, you can add global groups to discretionary access control lists in any forest domain.

Here’s what’s happening in the topology in Figure 3:

  • A: Global groups represent marketing department employees in the contoso.com and corp.contoso.com domains.
  • B: We create a domain local group on our app server named Mktg App Access and populate it with both global groups.
  • C: We assign permissions on our line-of-business marketing app to the Mktg App Access domain local group.

You might wonder why there is no mention of universal groups. I avoid them because they slow down user logon times due to global catalog universal group membership lookups. Universal groups also make it easy to be sloppy during group creation and with resource access strategy.

How to design for the hybrid cloud

Microsoft offers Azure Active Directory for cloud identity services that you can synchronize with on-premises Active Directory user and group accounts, but Azure AD does not support organizational units. Azure AD uses a flat list of user and group accounts that works well for identity purposes.

With this structure in mind, proper user and group naming is paramount. You should also sufficiently populate Active Directory properties to make it easier to manage these accounts in the Azure cloud.

When you need to organize Active Directory groups, develop a naming convention that makes sense to everyone on your team and stick to it.

One common group naming pattern involves prefixes. For example, you might start all your global group names with GL_ and your domain local group names with DL_. If you use Exchange Server, then you will have distribution groups in addition to the AD security groups. In that instance, you could use the DI_ prefix.
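
A convention is only useful if it is actually followed, so it can help to check names programmatically before (or after) groups are created. Below is a small, hypothetical Python check built around the GL_/DL_/DI_ prefixes described above; adjust the pattern to whatever convention your team agrees on.

import re

# Hypothetical convention: GL_ for global, DL_ for domain local, DI_ for distribution groups.
NAME_PATTERN = re.compile(r"^(GL|DL|DI)_[A-Za-z0-9]+(_[A-Za-z0-9]+)*$")

def follows_convention(name):
    """Return True if the group name uses an approved prefix and clean segments."""
    return bool(NAME_PATTERN.match(name))

for name in ["GL_Marketing", "DL_MktgShare_Read", "DI_AllStaff", "Marketing Team"]:
    print(name, "->", "ok" if follows_convention(name) else "does not match the convention")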

3 brilliant design details from the new Microsoft Office

Since the introduction of Google Docs, many of us avoid Microsoft Office like the plague. But Office is still a mainstay in business. Excel, for instance, is the untouchable spreadsheet champion of the world, which is why a remarkable 1 billion people on the planet still use the software suite. And for all of them, Microsoft is rolling out a series of welcome design updates that should make the experience better. The company is focusing on creating simplicity–but without costing users power.

These updates are a year in the works and promise to be but the first of many starting this June. “We’re on the beginning of the journey. I want to make that part clear,” says Jon Friedman, chief designer on Microsoft Office. “This is not the older world of software where you deliver something and move on to the next thing.” Here are three new features that aren’t just useful–they show where Microsoft is taking its flagship business tools.

A simplified ribbon

Here’s a staggering stat: 95% of people use only 10 commands in the top “ribbon” bar of Outlook. That means that of 32 (or so) functions that might be in a typical Outlook bar, 22 are wasted space. So Microsoft hid them, spreading those functions across various tabs. And that’s true whether you’re in Outlook or Word.

[Image: Microsoft]

Microsoft paired less information with cleaner information by remaking all of the functions as clean, wireframe icons. They’re actually optimized for accessibility for the vision impaired, and scale to tiny sizes clearly. But they also give the ribbon a sense of white space that was lacking before, allowing you, as Friedman puts it, to focus on your emails rather than your menus.

AI Buttons

But can a simpler menu bar become too simple? In some cases, yes. So Microsoft had to negotiate where–and how–to surface the rest of the program’s deep library of commands. “It turns out, 95% of the things people do are 10 commands in Outlook,” Friedman reiterates. “The other 5% are the 11th, 12th, and 13th commands that I use, and they’re completely different from the 11th, 12th, and 13th commands that you use.” In other words, we’re all the same user until the point we’re different. And when we’re different, we’re incredibly different.

[Image: Microsoft]

To accommodate each unique case, Microsoft deployed AI. Tap on a search bar, and search lists the top three commands on your screen that you’re most likely to need–customized to you. The technology is called “0-Query” and it doesn’t even need you to type in the search bar to give you a predictive answer. Truth be told, it’s similar to the way that iOS and Android suggest apps that you’re likely to open at any given time, but it’s the first time we’ve seen this tool applied to desktop productivity software.

“We’re very [focused on] anticipating people’s needs,” says Friedman. “We think this is what’s going to allow us to find that balance between simplicity and power.”

Emotional Understanding

Another addition? “We have this little Coming Soon button in Outlook, and we want to give [users] a heads up,” says Friedman. “If an Outlook visual refresh is coming soon, we show them it’s coming up, and ask them if it helps [to get the heads up].”

Sounds minor, right? Why create this feature?

[Image: Microsoft]

As part of the design process, Microsoft did empathic research, surveying users to figure out what made their favorite productivity apps their favorites–and that included studying their own software and that of competitors.

What Microsoft concluded was that people wanted to feel three sensations when working with business tools: productive, in control, and safe. And they wanted to avoid two feelings: inadequacy and uncertainty.

Moving forward, Microsoft wants to focus many of its Office design updates on these core emotions. Perhaps that sounds too heady, but it’s not really that complicated.

A perfect example of how small design features can quell uncertainty? “If you chase [features] without understanding the emotional response to them, then you might find yourself in a place where you have something highly efficient but not enjoyable,” says Friedman.

Challenges of blockchain muddle understanding of the technology

I have read many articles and a good deal of generalized argle-bargle on the topic of blockchain and cryptocurrencies, and a couple of things stand out: Nobody has a great definition for either, and the two are often so thoroughly conflated that most attempts at cogent definitions are pointless.

Nobody seems to agree on what a blockchain is — or isn’t — except in some loose, arm-flapping way. That lack of understanding represents one of the most significant challenges of blockchain. Cryptocurrencies are mostly understood in that “I’ll know it when I see it” way, where we agree on a vague idea without understanding the core of the idea. If you ask the average person about blockchain or cryptocurrencies, and to the extent that she is aware of either, the answer you’ll probably get is simple: bitcoin.

Conceptually, of course, the idea of a blockchain is like the idea of one of its main components: cryptography. Cryptography is understood as a monolithic thing only in the most abstract macro sense, where different types of cryptography, among them symmetric and asymmetric (public key), are implemented in vastly different manners.

Blockchains are the same: The bitcoin blockchain underpinning the popular currency bearing its name is not the same as the Ethereum blockchain upon which the cryptocurrency Ether sits. And this is where not having good definitions of what we’re talking about hurts the larger conversation around how this technology can add value outside of currency applications.

What are the challenges of blockchain in an evolving marketplace?

At its core, a blockchain is a distributed system of recording and storing transaction records. Think of it as a superdatabase — one where each participant maintains, calculates and updates new entries. More importantly, nodes work together to verify the information is truthful, thus providing security and a permanent audit trail.
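
As an illustration of just the record-linking part of that idea, and nothing else (no distribution between nodes, no consensus), here is a minimal Python sketch in which each entry commits to the hash of the previous entry, so tampering with a stored record is detectable whenever the chain is re-verified.

import hashlib
import json

def make_block(index, data, prev_hash):
    """A block commits to its own contents and to the previous block's hash."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash so any tampering with a stored record is detected."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "data", "prev_hash")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(0, "record A", "0" * 64)]
chain.append(make_block(1, "record B", chain[-1]["hash"]))
chain.append(make_block(2, "record C", chain[-1]["hash"]))

print(verify(chain))   # True
chain[1]["data"] = "record B (tampered)"
print(verify(chain))   # False: block 1 no longer matches the hash the chain committed to

Real blockchains add replicated copies and a consensus protocol on top of this linking, which is where the verification-by-many-nodes property described above comes from.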

So, with that concept in mind, we can broaden our understanding of the technology. Ethereum wasn’t designed as a one-trick pony to run a monetary application like its bitcoin cousin; rather, it’s more of an application platform, using virtual tokens in the place of cash. Instead of simply trading currency, one might trade cat videos, for instance.

And if those cat videos were traded on a blockchain platform, everyone would be able to verify who first introduced a particular video into the system and each person who modified that video or reintroduced a bad copy. You could have the most distributed and verifiable cat video platform ever created. Luckily, there are many more use cases for this technology than just a cat video distribution platform.

In the world of medical records — or, for argument’s sake, network configuration changes — we see a unique opportunity to improve many aspects of the use, transfer and security of those records.

What if we could guarantee the records introduced to the system are authentic? Note that I didn’t say accurate, because you can easily authenticate bad data when it’s introduced to a system. But let’s say we introduce our records, and they are accurate.

We go to a new doctor who would normally need to have paperwork from us to authorize the retrieval of those records, and transferring the records often takes longer than would be ideal. Waiting for the requesting doctor to send the forms to the records’ holder and get a response back can take significant effort. And even if the systems are the same, with easy access afforded to the requisite records, those records could have been tampered with or have errors and incomplete data.

The same thing could happen with a blockchain-based system, but there would be a distributed record of the tampering — something that would be all but impossible to hide. In this way, your records could be made available to everyone with a reason to see them, with each view, change and movement recorded in a permanent and tamper-resistant system.

There are challenges even beyond the implementation of our hypothetical configuration system on top of blockchain. Disparate systems would have to be combined into a ubiquitous and fairly homogeneous platform. There would have to be standards applied to the introduction of data in the first place: Bad data in; bad data out. But, in this case, it would potentially become a permanent fixture. There are also challenges in the blockchain implementation itself — the applications on top of it notwithstanding.

And those challenges are substantial.

Adding more nodes and records makes the ledger more complex

One of the challenges of blockchain is in its very nature: the distributed ledger. Because every endpoint has to keep a copy of the entire blockchain, and that blockchain grows constantly as more records are added, the system gets slower and takes up more space over time. If the same sort of design were applied to a medical records system, you can see how it would become untenable very quickly.
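
To get a rough feel for that growth, a back-of-the-envelope calculation helps. Every figure below is an assumption chosen purely for illustration, not a measurement of any real ledger.

# Hypothetical figures chosen only for illustration.
RECORDS_PER_DAY = 50_000
BYTES_PER_RECORD = 1_024    # roughly 1 KB per entry, including overhead
NODES = 200                 # every node keeps a full copy of the ledger

yearly_growth_gb = RECORDS_PER_DAY * BYTES_PER_RECORD * 365 / 1024 ** 3
print(f"Ledger growth per node: about {yearly_growth_gb:.1f} GB per year")
print(f"Across all nodes: about {yearly_growth_gb * NODES / 1024:.1f} TB of new storage per year")

Even with modest numbers, a full-replica design multiplies every stored byte by the node count, which is the scaling concern described above.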

Each blockchain implementation is different, but derivative, so this is a problem that is likely fixable. But it has to be accounted for in the beginning. Different implementations have already begun to solve this inherent weakness.

That brings us to another issue: Changing blockchains after the fact is not an easy task. Imagine a time where our network configuration data sits on a system in which a significant bug is found.

How does that system get patched? How does the blockchain adapt? And how do the requisite changes affect the integrity of the records’ data sitting on the system? These are difficult problems to solve, and they’re even harder to anticipate upfront. Most of the major blockchain implementations have gone through some amount of retrograde “shoulda-woulda-coulda,” and it’s likely we’ll fail to anticipate every possible problem in the initial rollout of any system.

Networking and the challenges of blockchain: Can they be overcome?

Blockchain technology, as it applies to networking, is very much a work in progress. GlobalData analyst Mike Fratto rejected blockchain technology, saying the ledger is “untested, unproven and overly complex, making it unsuitable for networking.”

While I disagree with Fratto’s assessment in a broad sense, I have to agree that today’s applicability of the technology to networking is just not there. Where I disagree, however, is in the assertion that there is no need for it, or that it cannot, or should not, ever be applied to the problem of network management.

I was prepared for the inevitable conclusion that there are no viable production-ready blockchain implementations out in the wild. Had I come to that conclusion, however, I would have been almost entirely incorrect. I have talked with several large companies that are either developing or actively using blockchain technologies of one type or another — most seem to be based on the Linux Foundation’s Hyperledger platform, in close orbit with IBM. Most of the applications I was able to get information on are related to supply chain security in one way or another.

Tracking the ingredients used in a product from field to store shelf is one popular example. Securing critical manufacturing parts from creation through shipping and onto final build is another. These use cases are not from hyperbolic tech startups or boutique manufacturers; they are from large, established, blue-chip and Fortune 500 companies not given to flights of fancy in their supply chain. As these installations become more widespread, I imagine we will start to see more published case studies, leading to more installations. For now, however, a lot of these remain in the shadows, happily ensconced behind nondisclosure agreements.

The hype today may be all around the various cryptocurrencies that exist in the market, from the bitcoins and Ethers of the world to the nascent and opaque world of boutique vanity coins. The real excitement and potential lies not in the coins, however, but in the application of the underlying technology — including overcoming the challenges of blockchain — to everyday IT challenges.

Software-based networks lay foundation of networking’s future

Enterprise networking takes many, constantly changing forms. Today, networking is in the midst of an evolution from hardware to software — a transition to what many call software-based networks that will reshape how companies do business in the years to come.

Software-based networking uses programmability and automation, rather than closed systems housed on proprietary hardware, as the key levers to operate the network. It relies on a variety of concepts — among them software-defined networking, network functions virtualization (NFV) and open source software. Applied properly, software-based networks can increase productivity, reduce downtime and enable an organization to respond more quickly to business and competitive pressures.

But even as the concept gains traction, software-based networking remains in flux. Software-based networks mean different things to different people, and it will take some time for enterprises to sort out which approaches will work best for them.

The reality is that most organizations will use a combination of software-based networking tools in some way. Server virtualization and open source software are already familiar concepts, while enterprises deploy SDN and NFV where they make sense. At the same time, network engineers are beginning to pay attention to the emergence of intent-based networking and its promise that network components can be provisioned and configured automatically.

Software-defined WAN, through its use of virtual connections to enable traffic to flow along multiple paths, has already demonstrated that software-enabled networks can handle the demands of the modern enterprise.

At some point, the lines between software-based networking and legacy networking will disappear, and everything will just be called … networking. But until that day comes, software-based networks open up the possibilities for networking engineers to use a new set of networking tools that better support their businesses.

And it will help them get ready for the next round of changes.

Razer Blade Pro 17″ 4K Touch Screen – Intel® Core™ i7-7820HK – 32GB RAM

Hi,

Used to be active on this forum a little bit many years ago; nice to see it’s still going strong. Bought this in the summer last year. Used it a handful of times, if that. I just haven’t had the use out of it that I was expecting, mainly because I’ve since discovered I just prefer gaming on a desktop much more.

Product details:

– 17.3” 4K Touch Display with NVIDIA® G-SYNC™
– Overclocked Quad-Core 7th Gen Intel® Core™ i7-7820HK
– 32GB RAM
– 500GB SSD storage

Product is available here for…

Razer Blade Pro 17″ 4K Touch Screen – Intel® Core™ i7-7820HK – 32GB RAM

60GB OCZ Agility 3

Hey all, after many years of being my World of Warcraft workhorse, I have finally upgraded. This still runs like a dream, never had a single issue with it, I just have no need for it anymore.

There are a couple of scuffs on it, which I will point out in the pictures; these happened when it got stuck in a cheap case’s SSD tray and the screws seized. Only external scuffs, nothing major.

Looking for £20 delivered via 1st class recorded. Open to offers as always.

Payment via BT please.

Pics to…

60GB OCZ Agility 3

How to Choose the Right Deployment Strategy

When it comes to computing, there is always another way. Sometimes, there are so many ways that you can become paralyzed by indecision. This article gives some clear guidelines for the use of containers and virtual machines, as well as a look at Nano Server’s place in the world.

Read the post here: How to Choose the Right Deployment Strategy

OCZ/PC Power & Cooling 80 Plus PLATINUM MK III Silencer PSU 1200W + Brand New SSD’s

My PSU is absolutely rock solid. It has supplied me with much needed power throughout many of my gaming builds. You won’t be able to tell the difference between this and a brand new one; it has been kept in excellent condition. The PSU does not come with any warranty, as I’ve had it for a very long time, but it has been well kept and maintained.

This is a Platinum Rated PSU and it is Semi-Modular.
Please see the pictures. Reason for Sale – I want to buy the £300 PSU from the other thread.
£150…

OCZ/PC Power & Cooling 80 Plus PLATINUM MK III Silencer PSU 1200W + Brand New SSD’s