
Active Directory nesting groups strategy and implementation

Trying to set up nesting groups in Active Directory can quickly become a challenge, especially if you don’t have a solid blueprint in place.

Microsoft recommends that you apply a group nesting and role-based access control (RBAC) strategy, specifically AGDLP for single-domain environments and AGUDLP for multi-domain/multi-forest environments. But implementing either arrangement in a legacy setup that lacks a clear RBAC and nesting strategy can take time to clean up. The effort is worthwhile, because the end result makes your environment more secure and dynamic.

Why should I use a nesting groups strategy?

A good nesting approach, such as AGDLP or AGUDLP, gives you a clear overview of who has what permissions, which helps in situations such as audits. This setup also makes troubleshooting much easier when something doesn’t work. Lastly, it reduces administrative overhead by making the assignment of permissions to other domains straightforward.

What is AGDLP?

AGDLP stands for:

  • Accounts (the user or computer)
  • Global group (also called role group)
  • Domain Local groups (also called access groups)
  • Permissions (the specific permission tied to the domain local group)

The acronym is the exact order used to nest the groups.

Accounts will be a member of a global group that, in turn, is a member of a domain local group. The domain local group holds the specific permission to resources we want the global group to have access to, such as files and printer queues.

We can see in the illustration below how this particular nesting group comes together:

Illustration: AGDLP is Microsoft’s recommended nesting model for role-based access configuration in a single-domain setting.

By using AGDLP nesting and RBAC principles, you get an overview of a role’s specific permissions, which can be easily copied to other role groups if needed. With AGDLP, you only need to remember to always tie the permission to the domain local group at the end of the nesting chain and never to the global group.
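
As an illustration, here is a minimal PowerShell sketch of that chain using the ActiveDirectory module. The group names follow the conventions discussed later in this article, and the OU paths, domain name, user account and share path are placeholders, not values from the original article.

# AGDLP sketch: Account -> Global (role) -> Domain Local (access) -> Permission
Import-Module ActiveDirectory

# Role group (Global) and access group (Domain Local); names and OUs are illustrative
New-ADGroup -Name 'Role_HR_Managers' -GroupScope Global -GroupCategory Security -Path 'OU=Roles,DC=example,DC=com'
New-ADGroup -Name 'ACL_Fileshare_HR-Common_Read' -GroupScope DomainLocal -GroupCategory Security -Path 'OU=ACL,DC=example,DC=com'

# A -> G: the account becomes a member of the role group
Add-ADGroupMember -Identity 'Role_HR_Managers' -Members 'jdoe'

# G -> DL: the role group becomes a member of the access group
Add-ADGroupMember -Identity 'ACL_Fileshare_HR-Common_Read' -Members 'Role_HR_Managers'

# P: the NTFS permission is granted only to the domain local group, never to the global group
$acl = Get-Acl -Path 'E:\Shares\HR-Common'
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule('EXAMPLE\ACL_Fileshare_HR-Common_Read', 'ReadAndExecute', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path 'E:\Shares\HR-Common' -AclObject $acl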

What is AGUDLP?

AGUDLP is the multi-domain/multi-forest version of AGDLP, with the one difference being a universal group added to the nesting chain. You can use these universal groups to add role groups (global groups) from other domains without too much effort.

The universal group — also called a resource group — should have the same name as the corresponding role group, except for its prefix, as illustrated below:

Illustration: For organizations with multiple domains and forests, AGUDLP is recommended to make it easier to add role groups from other domains.

What are the implementation concerns with AGDLP/AGUDLP?

There are four important rules related to the use of AGDLP or AGUDLP:

  1. Decide on a naming convention for your groups.
  2. One user can have multiple roles. Don’t create more role groups than necessary.
  3. Always use the correct group type: domain local, global, universal, etc.
  4. Never assign permissions directly to the global or universal groups. This will break the nesting strategy and its corresponding permissions summary for the organization.

Should you use AGDLP or AGUDLP?

If you don’t need to assign permissions across multiple domains, then always use AGDLP. Groups nested with AGDLP can be converted to AGUDLP if needed and require less work to operate. If you’re in doubt, use AGDLP.

To convert an AGDLP nested group to AGUDLP, do the following (a PowerShell sketch follows the list):

  1. Create a universal group.
  2. Add the universal group as a member of every domain local group the global group currently belongs to.
  3. Add the global group as a member of the universal group.
  4. Have all users and computers update their Kerberos tickets or log out and log in.
  5. Remove the global group from those domain local groups.
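
Here is a minimal PowerShell sketch of those steps with the ActiveDirectory module. The group names are placeholders, and the Kerberos refresh in step 4 happens on the clients, so it appears only as a comment.

Import-Module ActiveDirectory

# 1. Create the universal (resource) group
New-ADGroup -Name 'Res_HR_Managers' -GroupScope Universal -GroupCategory Security -Path 'OU=Roles,DC=example,DC=com'

# 2. Give the universal group the same domain local memberships the global group has today
$domainLocals = Get-ADPrincipalGroupMembership -Identity 'Role_HR_Managers' |
    Where-Object { $_.GroupScope -eq 'DomainLocal' }
$domainLocals | ForEach-Object { Add-ADGroupMember -Identity $_ -Members 'Res_HR_Managers' }

# 3. Nest the global (role) group inside the universal group
Add-ADGroupMember -Identity 'Res_HR_Managers' -Members 'Role_HR_Managers'

# 4. Users and computers now refresh their Kerberos tickets (klist purge) or log off and on

# 5. Remove the global group's direct membership in the domain local groups
$domainLocals | ForEach-Object { Remove-ADGroupMember -Identity $_ -Members 'Role_HR_Managers' -Confirm:$false }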

Why a naming convention is necessary with nesting groups

You should decide on a naming convention before you implement AGDLP or AGUDLP. It’s not a technical requirement, but without one, you will quickly lose control of the structure you worked to build.

There are multiple naming schemes, but you can create a customized one that fits your organization. A good naming convention should meet the following criteria:

  • Be easy to read.
  • Be simple enough to parse with scripts.
  • Contain no whitespace characters, such as spaces.
  • Contain no special characters — characters that are not numbers or from the alphabet — except for the underscore or minus sign.

Here are a few examples for the different group types:

Role groups

Naming convention: Role_[Department]_[RoleName]
Examples: Role_IT_Helpdesk or Role_HR_Managers

If you use the AGUDLP principle, then there should be a corresponding resource group with a Res prefix such as Res_IT_Helpdesk or Res_HR_Managers.

Permission groups (domain local groups)

Naming convention: ACL_[PermissionCategory]_[PermissionDescription]_[PermissionType]
Examples: ACL_Fileshare_HR-Common_Read or ACL_Computer_Server1_Logon or ACL_Computer_Server1_LocalAdmin.
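
A naming convention like this is easy to enforce with a script. The following sketch assumes the Role_/Res_/ACL_ prefixes shown above; the regular expressions are illustrative and should be adapted to your own scheme.

# Patterns derived from the conventions above: underscores as separators, no whitespace,
# no special characters other than the underscore or minus sign
$patterns = @(
    '^Role_[A-Za-z0-9]+_[A-Za-z0-9-]+$'                  # Role_[Department]_[RoleName]
    '^Res_[A-Za-z0-9]+_[A-Za-z0-9-]+$'                   # matching AGUDLP resource group
    '^ACL_[A-Za-z0-9]+_[A-Za-z0-9-]+_[A-Za-z0-9]+$'      # ACL_[Category]_[Description]_[Type]
)

function Test-GroupName {
    param([string]$Name)
    foreach ($pattern in $patterns) {
        if ($Name -match $pattern) { return $true }
    }
    return $false
}

Test-GroupName 'ACL_Fileshare_HR-Common_Read'   # True
Test-GroupName 'HR Managers (old)'              # False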

Executing AGDLP and AGUDLP

It can be challenging to implement AGDLP in older domains that lack a consistent structure. It’s imperative to identify existing groups and permissions and test thoroughly to uncover the oddities that must be reworked to conform to the new setup.

A rough outline of the implementation plan looks like this:

  • Educate and inform your co-workers to keep them from creating groups and assigning permissions in a way that doesn’t adhere to the new arrangement.
  • Ask the HR department for assistance to identify roles. It’s possible a user might have multiple roles.
  • Create role groups and their corresponding Res groups — if you use AGUDLP — and assign new permissions with the AGDLP/AGUDLP principle.
  • Identify existing permissions and change them to adhere to AGDLP/AGUDLP. You could either rename the groups and adjust their group type or build new groups side by side with the intent to replace the old groups at a later date (see the audit sketch below).
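
A small audit script can help with that last step. This hedged sketch reuses the Test-GroupName helper from the naming-convention example above and simply exports every security group that doesn't match the convention, so it can be renamed or rebuilt.

Import-Module ActiveDirectory

# Flag existing security groups whose names don't match the convention
Get-ADGroup -Filter 'GroupCategory -eq "Security"' |
    Where-Object { -not (Test-GroupName $_.Name) } |
    Select-Object Name, GroupScope, DistinguishedName |
    Export-Csv -Path .\NonConformingGroups.csv -NoTypeInformation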


Data center energy usage combated by AI efficiency

Data centers have become an important part of our data-driven world. They act as a repository for servers, storage systems, routers and all manner of IT equipment and can stretch as large as an entire building — especially in an age of AI that requires advanced computing.

Establishing how much power these data centers use and the environmental impact they have can be difficult, but according to a recent paper in the journal Science, the data center industry consumed an estimated 205 terawatt-hours (TWh) of electricity in 2018. That roughly translates to 1% of global electricity consumption.

Enterprises that rely on large data centers can use AI, advancements in storage capacity and more efficient servers to mitigate the power required for the necessary expansion of data centers.

The rise of the data center

Collecting and storing data is fundamental to business operation, and while having your own infrastructure can be costly and challenging, having unlimited access to this information is crucial to advancements.

Because of their massive size, the data centers of tech giants like Google and Amazon draw the most coverage; they often require as much energy as small towns. But there is more behind these numbers, according to Eric Masanet, associate professor of Mechanical Engineering and Chemical and Biological Engineering at Northwestern University and coauthor of the aforementioned article.

The last detailed estimates of global data center energy use appeared in 2011, Masanet said.

Since that time, Masanet said, there have been many claims that the world’s data centers were requiring more and more energy. This has given policymakers and the public the impression that data centers’ energy use and related carbon emissions have become a problem.

Counter to this, Masanet and his colleagues’ studies on the evolution of storage, server and network technology found that efficiency gains have significantly mitigated the growth in energy usage in this area. From 2010 to 2018, compute instances went up by 550%, while energy usage increased just 6% in the same time frame. While data center energy usage is on the rise, it has been curbed dramatically through the development of different strategies.

Getting a step ahead of the data center footprint

The forces behind these moderated energy increases are all tied to advancements in technology. Servers have become more efficient, and the partitioning of servers through server virtualization has curbed the energy required for the rapid growth of compute instances.

A similar trend is noticeable in the storage of data. While demand has increased significantly, the combination of storage-drive efficiencies and densities has limited the total increase in global storage energy usage to just threefold. To further curb the rising appetite for data, and therefore the rising energy costs and environmental impact, companies are integrating AI when designing their data centers.

Chart: Data center efficiency has increased greatly but may be leveling off.

“You certainly could leverage AI to analyze utility consumption data and optimize cost,” said Scott Laliberte, a managing director with Protiviti and leader of the firm’s Emerging Technologies practice.

“The key for that would be having the right data available and developing and training the model to optimize the cost.”  

By having AI collect data on their data centers and optimize energy usage, these companies can mitigate power costs, especially for cooling, one of the costliest and most concerning processes within data centers.

“The strategy changed a little bit — like trying to build data centers below ground or trying to be near water resources,” said Juan José López Murphy, Technical Director and Data Science Practice Lead at Globant, a digitally native services company.

But cooling these data centers has been such a large part of their energy usage that companies have had to be creative. Companies like AWS and GCP are trying new locations like the middle of the desert or underground and trying to develop cooling systems that are based on water and not just air, Murphy said.

Google uses an algorithm to manage cooling at some of its data centers; it learns from the data gathered and limits energy consumption by adjusting cooling configurations.

Energy trends

For the time being, both the demand for data centers and their efficiency have grown. So far, the advancement of servers and storage drives, as well as the implementation of AI in the building process, has almost matched the growing energy demand. This may not continue, however.

“Historical efficiency gains may not be able to outpace rapidly rising demand for data center services in the not-too-distant future,” Masanet said. “Clearly greater attention to data center energy use is warranted.”

The increased efficiencies have done well to stem the tide of demand, but the future of data centers’ energy requirements remains uncertain.


Coronavirus: VPN hardware becomes a chokepoint for remote workers

VPN hardware has become a bottleneck for companies with a high number of workers staying home to avoid spreading the coronavirus, networking vendors reported.

Many companies have VPN concentrators or gateways with insufficient licensing or capacity to accommodate the unexpected demand, executives said. As a result, some businesses have had to scramble to provide network access to the high number of remote workers. Many of those employees live in cities that have closed schools and asked people to stay home.

“It seems to be at the enterprise gateway that we see issues,” Angelique Medina, director of product marketing at network monitoring company ThousandEyes, said. 

Competitor Kentik saw similar problems with VPNs used by the corporate customers of internet service providers and telcos, said Avi Freedman, CEO of Kentik. About half of the vendor’s customers are service providers with enterprise subscribers.

Kentik found that the high number of remote workers is overtaxing the typical 1 Gbps link that connects the concentrator or the gateway to the corporate network. A gateway can include a router and firewall.

“It’s not a lot of traffic by internet standards, but it is by some of the corporate architectures that are in place,” Freedman said.

Freedman and Medina said companies would likely look at cloud-based VPN gateways as a faster way to offload traffic than buying, configuring and installing more hardware. However, Freedman pointed out that the cloud might not be an option for highly regulated companies or organizations with strict compliance policies.

“Draining internet traffic, looking at cloud solutions are absolutely in the top three, along with upgrading the infrastructure that you have,” Freedman said.

Cisco customers up VPN licensing

The use of VPNs has risen considerably since schools and businesses have closed in states that include California, New York, Illinois, Ohio and Maryland. Verizon reported this week a 34% increase in VPN use since last week and a 20% rise in web traffic.

In an email, Cisco security CTO Bret Hartman said customers are upgrading their VPN licenses to cover more simultaneous users. Also, in the last seven days alone, trial requests for Cisco’s AnyConnect VPN software have reached 40% of the total for last year. Meanwhile, the number of authentication requests made to VPNs through Cisco’s multi-factor authentication software Duo has increased 100% over the previous week, Hartman said.

Despite the increase in internet activity, Verizon and AT&T have not reported significant network problems. Both companies were closely monitoring usage in areas where the coronavirus outbreak is most severe.

“We will work with and prioritize network demand in assisting many U.S. hospitals, first responders and government agencies, as needed,” Verizon said in a statement.

Verizon reported in a recent Securities and Exchange Commission filing that it planned to increase capital spending in 2020 from a range of $17 billion to $18 billion to a range of $17.5 billion to $18.5 billion. The additional money was to “accelerate Verizon’s transition to 5G and help support the economy during this period of disruption.”


SMBs struggle with data utilization, analytics

While analytics have become a staple of large enterprises, many small and medium-sized businesses struggle to utilize data for growth.

Large corporations can afford to hire teams of data scientists and provide business intelligence software to employees throughout their organizations. While many SMBs collect data that could lead to better decision-making and growth, data utilization is a challenge when there isn’t enough cash in the IT budget to invest in the right people and tools.

Sensing that SMBs struggle to use data, Onepath, an IT services vendor based in Kennesaw, Ga., conducted a survey of more than 100 businesses with 100 to 500 employees to gauge their analytics capabilities for the “Onepath 2020 Trends in SMB Data Analytics Report.”

Among the most glaring discoveries, the survey revealed that 86% of the surveyed companies that had invested in analytics personnel and tools felt they weren’t able to fully exploit their data.

Phil Moore, Onepath’s director of applications management services, recently discussed both the findings of the survey and the challenges SMBs face when trying to incorporate analytics into their decision-making process.

In Part II of this Q&A, he talks about what failure to utilize data could ultimately mean for SMBs.

What was Onepath’s motivation for conducting the survey about SMBs and their data utilization efforts?


Phil Moore: For me, the key finding was that we had a premise, a hypothesis, and this survey helped us validate our thesis. Our thesis is that analytics has always been a deep pockets game — people want it, but it’s out of reach financially. That’s talking about the proverbial $50,000 to $200,000 analytics project… Our goal and our mission is to bring that analytics down to the SMB market. We just had to prove our thesis, and this survey proves that thesis.

It tells us that clients want it — they know about analytics and they want it.

What were some of the key findings of the survey?

Moore: Fifty-nine percent said that if they don’t have analytics, it’s going to take them longer to go to market. Fifty-six percent said it will take them longer to service their clients without analytics capabilities. Fifty-four percent, a little over half, said if they didn’t have analytics, or when they don’t have analytics, they run the risk of making a harmful business decision.


That tells us people want it… We have people trying analytics — 67% are spending $10,000 a year or more, and 75% spent at least 132 hours of labor maintaining their systems — but they’re not getting what they need. A full 86% said they’re underachieving when they’re taking a swing with their analytics solution.

What are the key resources these businesses lack in order to fully utilize data? Is it strictly financial or are there other things as well?

Moore: We weren’t surprised, but what we hadn’t thought about is that the SMB market just doesn’t have the in-house skills. One in five said they just don’t have the people in the company to create the systems.

Might new technologies help SMBs eventually exploit data to its full extent?

Moore: The technologies have emerged and have matured, and one of the biggest things in the technology arena that helps bring the price down, or make it more available, is simply moving to the cloud. An on-premises analytics solution requires hardware, and it’s just an expensive footprint to get off the ground. But with Microsoft and their Azure Cloud and their Office 365, or their Azure Synapse Analytics offering, people can actually get to the technology at a far cheaper price point.

That one technology right there makes it far more affordable for the SMB market.

What about things like low-code/no-code platforms, natural language query, embedded analytics — will those play a role in helping SMBs improve data utilization for growth?

Moore: In the SMB market, they’re aware of things like machine learning, but they’re closer to the core blocking and tackling of looking at [key performance indicators], looking at cash dashboards so they know how much cash they have in the bank, looking at their service dashboard and finding the clients they’re ignoring.

The first and easiest one that’s going to apply to SMBs is low-code/no-code, particularly in grabbing their source data, transforming it and making it available for analytics. Prior to low-code/no-code, it’s really a high-code alternative, and that’s where it takes an army of programmers and all they’re doing is moving data — the data pipeline.

But there will be a set of the SMB market that goes after some of the other technologies like machine learning — we’ve seen some people be really excited about it. One example was looking at [IT help] tickets that are being worked in the service industry and comparing it with customer satisfaction. What they were measuring was ticket staleness, how many tickets their service team were ignoring, and as they were getting stale, their clients would be getting angry for lack of service. With machine learning, they were able to find that if they ignored a printer ticket for two weeks, that is far different than ignoring an email problem for two weeks. Ignoring an email problem for two days leads to a horrible customer satisfaction score. Machine learning goes in and relates that stuff, and that’s very powerful. The small and medium-sized business market will get there, but they’re starting at earlier and more basic steps.

Editor’s note: This Q&A has been edited for brevity and clarity.


A roundup of the Cisco certification changes in 2020

As network engineer skills become increasingly generalized, Cisco aims to match its certifications to the skills network engineers need in their daily lives.

Announced at Cisco Live 2019, the new Cisco certification changes rolled out on Feb. 24, 2020. Experts have touted the relevance of the material, the myriad topics the certifications now cover and the potential benefits for network engineers. With more focus on automation and software skills and less on infrequently used coding languages, Cisco aims to spring its certification tracks forward into the new decade.

The Cisco Certified Network Associate (CCNA), Cisco Certified Network Professional (CCNP) and Cisco Certified Internetwork Expert (CCIE) certifications all expanded the breadth of topics covered, yet all shrank in the number of tracks offered. Cisco also introduced new DevNet certifications among the other Cisco certification changes.

How did existing Cisco certifications change?

Cisco’s standard certification tracks — CCNA, CCNP and CCIE — all added new material that aims to be more relevant to current job roles and help advance the careers of network engineers. In addition to new material, the certifications also include fewer track options than before.

Cisco Certified Network Associate. CCNA is an entry-level certification for network engineers early in their careers. Formerly, Cisco issued the Cisco Certified Entry Networking Technician (CCENT) certification, which was the step before CCNA. After CCENT, CCNA offered different certifications for various career tracks, including CCNA Routing and Switching and CCNA Collaboration.

Now, CCENT is gone, and the recent Cisco certification changes transformed the CCNA from 10 separate tracks into a single unified exam, apart from the CCNA CyberOps track. Cisco author Wendell Odom said most topics in the new CCNA exam come from the former CCNA Routing and Switching track, with about one-third of the material being new.

A CCNA certification isn’t a prerequisite for higher certifications, yet it provides fundamental networking skills that network engineers require for current job roles.

Cisco Certified Network Professional. CCNP is an intermediate-level certification and a step up from CCNA. Similar to the CCNA changes, Cisco consolidated the CCNP certification tracks, although less drastically than with CCNA. Cisco cut CCNP from eight to five tracks, which, like CCNA, reflect holistic industry changes to bring more relevant material to Cisco’s certifications.

According to Cisco, the new CCNP tracks — which are also the new CCIE tracks — are the following:

  1. Enterprise
  2. Security
  3. Service Provider
  4. Collaboration
  5. Data Center

While these are the five core exams a network engineer can take, they must also take a concentration exam within the core topic to attain a CCNP certification. If a person takes and passes only the core exam, she receives a Cisco Certified Specialist certification in that topic area.

Network engineers can take several core or concentration exams and receive a Cisco Certified Specialist certification upon passing, which can prove to employers the engineer has those specific skills.

Authors Brad Edgeworth and Jason Gooley said these changes didn’t remove much material, but they added more breadth to the knowledge and skills network engineers should have in their careers.

Cisco Certified Internetwork Expert. CCIE is an expert-level certification and a step up from CCNP. The CCIE and CCNP tracks fall under the same umbrellas and shrank to the aforementioned five tracks. To become CCIE-certified, network engineers must take and pass one core exam — Enterprise, Security, etc. — and that topic’s corresponding lab.

Formerly, CCIE exams focused more on highly advanced skills and less on critical knowledge in areas such as network design skills. After the Cisco certification changes, the CCIE exams now include more practical knowledge for advanced network engineers.

Chart: The recent Cisco certification changes aim to sharpen relevant network engineer skills, including management and automation capabilities.

What are the new Cisco certifications?

In Cisco’s new DevNet track, the company added three certifications that reflect the certification pyramid for standard Cisco certifications. The DevNet certifications are the following:

  1. Cisco Certified DevNet Associate
  2. Cisco Certified DevNet Specialist
  3. Cisco Certified DevNet Professional

The DevNet tracks encompass network automation, software and programmability skills that Cisco certifications previously lacked and that the industry has deemed increasingly important.

While DevNet lacks a CCIE-equivalent track, the requirements for a DevNet certification reflect those of its equivalent in Cisco’s standard certifications. For example, a person must pass one core and one concentration exam to receive a Cisco Certified DevNet Professional certification.

The DevNet track’s goal is to give network engineers a certification path for skills the industry says they need and help them adapt to newer, advanced technologies — such as network automation — that employers increasingly seek out. And, as the industry continues to change, so will Cisco’s certifications.


5 PowerShell tools to help simplify admin tasks and support

PowerShell has become one of the most ubiquitous scripting languages in use today. Originally released in 2006, PowerShell has caught on like wildfire with systems admins, software developers and engineers who manage and automate thousands of repetitive tasks.

There are a variety of products designed to help PowerShell developers build better scripts. Tools that complement a PowerShell scripter include advanced editors, products to create scripts with a low-code approach and services more tailored to specific products such as Active Directory (AD).

Here, we examine five companies that offer products and services that focus on or heavily depend on PowerShell. We’ll examine each product’s focus, target audience, pricing and how each PowerShell tool integrates into the bigger tech ecosystem. Note that product costs are listed in U.S. dollars.

Cimitra Software

The Cimitra IT process delegation tool is designed to decrease the resolution time of IT-related events. It enables non-administrative users to perform tasks that typically require elevated privileges. Using Cimitra, an IT specialist can safely create routine IT tasks and delegate them to people who don’t otherwise have the skills or access to perform them.

Cimitra’s tool can help users:

  • reset AD passwords;
  • restart servers; and
  • update phone numbers in employee databases.

Each of these tasks is connected to an action that’s exposed via a web-based GUI. Behind any action, Cimitra might run commands, invoke an API or execute arbitrary PowerShell scripts to accomplish the task.
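
As a hedged illustration of the kind of script that might sit behind a "reset AD passwords" action, here is a self-contained PowerShell sketch. The parameter names and the use of the ActiveDirectory module are assumptions for the example and are not part of Cimitra's product or API.

# Example script a delegated password-reset action could invoke; all names are illustrative
param(
    [Parameter(Mandatory)] [string]$SamAccountName,
    [Parameter(Mandatory)] [string]$NewPassword
)

Import-Module ActiveDirectory

$secure = ConvertTo-SecureString -String $NewPassword -AsPlainText -Force
Set-ADAccountPassword -Identity $SamAccountName -NewPassword $secure -Reset
Set-ADUser -Identity $SamAccountName -ChangePasswordAtLogon $true

Write-Output "Password reset for $SamAccountName; the user must change it at next logon."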


To ensure Cimitra admins can manage user activity, the tool provides various role-based access controls and can integrate with authentication providers. The tool supports Lightweight Directory Access Protocol and multifactor authentication as well as various auditing roles.

Cimitra offers three licenses:

  • A free downloadable version that includes three users, three agents and unlimited actions with no expiration, and a support forum.
  • A Team version that includes 25 users, unlimited actions and agents, and an auditing panel to control and monitor user activity. This version also includes email support and costs $10,000 per year.
  • An Enterprise version that includes 100 users and the same features as the Team version. This version also includes phone and email support, and costs $25,000 per year.

Cimitra Server is offered as a Docker container and can be hosted in an organization’s data center or in the cloud.

Ironman Software Universal Automation

Universal Automation (UA) enables users to execute and schedule PowerShell scripts using a product that specifically offers PowerShell automation. This tool is designed to make it easier for users to invoke, control access to and manage a team’s PowerShell scripts.

Users can upload PowerShell scripts to the product, which then reads and parses the scripts to create graphical representations for easy use.

UA natively understands complex tasks in PowerShell scripts, such as the progress bar using the Write-Progress cmdlet or interactivity using the Read-Host cmdlet. UA can also read script parameters automatically, so there’s no need for an organization to adjust its scripts to use the UA platform.
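
For context, here is a hedged example of the kind of ordinary script UA is described as parsing: it has a param() block, uses Write-Progress and prompts with Read-Host. The script itself is illustrative and not tied to UA's API.

param(
    [Parameter(Mandatory)] [string]$ComputerName,
    [int]$Count = 4
)

# Interactivity that a platform like UA would surface as a prompt
$confirm = Read-Host "Ping $ComputerName $Count times? (y/n)"
if ($confirm -ne 'y') { return }

for ($i = 1; $i -le $Count; $i++) {
    # Progress that a platform like UA would render graphically
    Write-Progress -Activity "Pinging $ComputerName" -Status "Attempt $i of $Count" -PercentComplete (($i / $Count) * 100)
    Test-Connection -ComputerName $ComputerName -Count 1
}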

UA automatically integrates with Git to support DevOps best practices and persists job output to a database for auditing and future evaluation. This PowerShell tool can also be configured for role-based access to provide users with the correct amount of privileges.

This product works well for individual PowerShell users and teams that need to schedule scripts in a platform that’s more powerful and PowerShell-centric than task scheduler. It also works well for organizations that want to implement DevOps practices into a PowerShell development environment.

UA offers an optional web-based GUI that enables users to manage the tool without having to drop down to a command line.

UA is currently in beta and offers two pricing models:

  • The free plan enables users to execute up to two concurrent jobs at once and 25 jobs per day.
  • The paid plan is licensed per agent at an introductory beta price of $99.99. This includes one year of upgrades and removes any restrictions on job execution.

The tool is built as a cross-platform PowerShell module. UA can be hosted on premises on IIS or in Azure, AWS and other clouds.

Sapien Technologies PowerShell Studio

PowerShell Studio is a PowerShell scripting IDE. This product can visually design UIs for PowerShell scripts and use event-driven coding strategies, setting it apart from other PowerShell editors.

PowerShell Studio can code, test and run scripts on a variety of PowerShell versions, package them as executables and deploy them via Windows Installer packages.

This tool also includes an integrated debugger, profiler and support for many other script-based tools. Sapien provides IntelliSense for PowerShell modules that can’t be installed on the development machine. By using different machine profiles, IntelliSense and platform-specific settings can detect incompatibilities at the coding stage.

PowerShell Studio focuses primarily on PowerShell administrators who develop tools for themselves and others. This tool is designed for PowerShell power users who build lots of scripts and tooling.

PowerShell Studio offers a 45-day free trial. After that, the tool costs $399, which includes one year of upgrades and free forum support. The upgrades and support subscription can be extended annually, and the license never expires.

PowerShell Studio integrates out of the box with many common tools, including the PSScriptAnalyzer PowerShell tool, Pester, Git, Sapien’s PowerShell HelpWriter and VersionRecall. Wherever possible, access to these tools is prominently placed on the main user interface and requires only the push of a button.
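
Those same companion tools can also be run directly from a PowerShell prompt. A minimal sketch, assuming the PSScriptAnalyzer and Pester modules are installed and that MyScript.ps1 and MyScript.Tests.ps1 are your own files:

# One-time setup
Install-Module -Name PSScriptAnalyzer, Pester -Scope CurrentUser

# Lint a script for style and correctness issues
Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -Severity Warning, Error

# Run the script's Pester tests
Invoke-Pester -Path .\MyScript.Tests.ps1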

ScriptRunner Software platform

ScriptRunner is an all-in-one PowerShell product that simplifies the way IT professionals, admin teams and DevOps engineers write and manage PowerShell scripts. Features include:

  • Centralized script and module management, which helps to ensure a standardized development process and companywide, consistent use of PowerShell scripts.
  • Secure credential administration, which enables users to run and delegate scripts in a safe environment.
  • Convenient web interfaces, which enable users to easily manage all PowerShell activities. Help desk teams and end users work with automatically generated web-based input forms.
  • Centralized script execution that ensures all manual, scheduled, and event- and process-driven PowerShell activities can be monitored at a glance.

Admins can use ScriptRunner roles to delegate securely to help desk teams and end users. Domain users can perform defined tasks in on-premises, hybrid or cloud systems without administrative back-end permissions.

ScriptRunner offers a free 30-day trial, as well as an Essential Edition for up to five users that’s ideal for small IT and service desk teams. Contact ScriptRunner for a price quote.

System Frontier

System Frontier helps organizations reduce admin rights and simplify IT support by delegating granular admin permissions. IT admins can turn PowerShell and other scripts into secure web-based tools without having to build GUIs by hand.

This privilege access management tool is designed for systems admins who manage Windows and Linux servers, network devices, AD or Office 365 resources, and have PowerShell or other scripting skills.

System Frontier is licensed per managed node and is broken into server and non-server licenses. Server nodes cover Windows and Linux servers, network devices and other devices acting in a server capacity. Non-server nodes cover managing workstations and user accounts such as AD or Office 365 users.

The tool offers four licensing options:

  • A free Community Edition that’s limited to 5,000 server or endpoint nodes, 50 delegated users, five custom tools and community support.
  • A free 30-day trial version with features enabled that anyone with a business email address can download.
  • The Pro version starts at $29 per server node or $5.80 per non-server node. It’s limited to 100 delegated users and 20 custom tools. Priority email support is included.
  • The Enterprise version starts at $49 per server node or $9.80 per non-server node. It includes unlimited delegated users and unlimited custom tools, as well as priority email and phone support.

System Frontier offers integrations for enterprise applications, including ServiceNow, Remedy, Cisco and Check Point. This tool also has a built-in REST API that enables other applications and services to integrate with it. Due to its script-based nature, users can build PowerShell scripts on their own to connect to a near-endless number of other services.
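
System Frontier's actual API routes aren't documented here, so the endpoint, token and payload in the following sketch are purely hypothetical; it only shows the Invoke-RestMethod pattern such an integration script would use.

# Hypothetical endpoint, token and payload; consult the System Frontier API documentation for real routes
$baseUri = 'https://systemfrontier.example.com/api'
$headers = @{ Authorization = "Bearer $env:SF_API_TOKEN" }

$body = @{ toolId = 42; targetNode = 'SERVER01' } | ConvertTo-Json

Invoke-RestMethod -Uri "$baseUri/tools/run" -Method Post -Headers $headers -Body $body -ContentType 'application/json'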

A PowerShell tool to meet every organization’s needs

Each of the products examined here represents an ecosystem that’s cropped up from PowerShell or other scripting languages. Although each product has a strong foundation with PowerShell, each serves a different purpose.

Some of the tools covered here have both competing and complementary features. When selecting a product, pay close attention to the product’s focus and target audience. Note which tools complement each other and choose a product or products that focus on your organization’s specific needs.


How to fortify your virtualized Active Directory design

Active Directory is much more than a simple server role. It has become the single sign-on source for most, if not all, of your data center applications and services. This access control covers workstation logins and extends to clouds and cloud services.

Since AD is such a key part of many organizations, it is critical that it is always available and has the resiliency and durability to match business needs. Microsoft had enough foresight to set up AD as a distributed platform that can continue to function — without much or, in some cases, no interruption in services — even if parts of the system went offline. This was helpful when AD nodes were still physical servers that were often spread across multiple racks or data centers to avoid downtime. So, the question now becomes, what’s the right way to virtualize Active Directory design?

Don’t defeat the native AD distributed abilities

Active Directory is a distributed platform, and virtualizing it carelessly can hinder the software’s native distributed functionality. AD nodes can be placed on different hosts and fail-over software will restart VMs if a host crashes, but what if your primary storage goes down? It’s one scenario you should not discount.

When you undertake the Active Directory design process for a virtualization platform, you must go beyond just a host failure and look at common infrastructure outages that can take out critical systems. One of the advantages of separate physical servers was the level of resiliency the arrangement provided. While we don’t want to abandon virtual servers, we must understand the limits and concerns associated with them and consider additional areas such as management clusters.

Management clusters are often slightly lower tier platforms — normally still virtualized — that only contain management servers, applications and infrastructure. This is where you would want to place a few AD nodes, so they are outside of the production environment they manage. The challenge with a virtualized management cluster is that it can’t be placed on the same physical storage location as production; this defeats the purpose of separation of duties. You can use more cost-effective storage platforms such as a virtual storage area network for shared storage or even local storage.

Remember, this is infrastructure and not core production, so IOPS should not be as much of an issue because the goal is resiliency, not performance. This means local drives and RAID groups should be able to provide the IOPS required.

How to keep AD running like clockwork

One of the issues with AD controllers in a virtualized environment is time drift.

All computers have clocks and proper timekeeping is critical to both the performance and security of the entire network. Most servers and workstations get their time from AD, which helps to keep everything in sync and avoids Kerberos security login errors.

These AD servers would usually get their time from an external time source when physical, or from their hosts when virtualized. The AD servers would then keep time synchronized using the computer’s internal clock, which is based on CPU cycles.

When you virtualize a server, it no longer has a set number of CPU cycles to base its time on. That means time can drift until it reaches out for an external time check to reset itself. But that time check can also be off since you might be unable to tell the passage of time until the next check, which compounds the issue. Time drift can become stuck in a nasty loop because the virtualization hosts often get their time from Active Directory.

Your environment needs an external time source that is not dependent on virtualization to keep things grounded. While internet time sources are tempting, having the infrastructure reach out for time checks might not be ideal. A core switch or other key piece of networking gear can offer a dependable time source that is unlikely to be affected by drift due to its hardware nature. You can then use this time source as the sync source for both the virtualization hosts and AD, so all systems are on the same time that comes from the same source.
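
On Windows, that sync chain is typically wired up with the built-in w32tm utility. A minimal sketch, run on the domain controller holding the PDC emulator role; the NTP address of the core switch is a placeholder:

# Point the PDC emulator at the hardware time source (placeholder address) and mark it reliable
w32tm /config /manualpeerlist:"10.0.0.1" /syncfromflags:manual /reliable:YES /update
Restart-Service w32time
w32tm /resync

# Verify where time is coming from
w32tm /query /source
w32tm /query /status

The virtualization hosts should point at the same source through their own NTP settings so the hosts and AD never disagree.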

Some people will insist on a single physical server in a virtualized data center for this reason. That’s an option, but one that is not usually needed. Virtualization isn’t something to avoid in Active Directory design, but it needs to be done with thought and planning to ensure the infrastructure can support the AD configuration. Management clusters are key to the separation of AD nodes and roles.

This does not mean that high availability (HA) rules for Hyper-V or VMware environments are not required. Both production and management environments should have HA rules to prevent AD servers from running on the same hosts.

Rules should be in place to ensure these servers restart first and have reserved resources for proper operations. Smart HA rules are easy to overlook as more AD controllers are added and the rules configuration is forgotten.
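
As one hedged example, in a VMware environment such a rule can be scripted with PowerCLI; the vCenter address, cluster and VM names below are placeholders. Hyper-V clusters achieve the same separation through the AntiAffinityClassNames property on cluster groups.

# VMware PowerCLI: keep the domain controllers on separate hosts
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.example.com

$dcVMs = Get-VM -Name 'DC01', 'DC02', 'DC03'
New-DrsRule -Cluster (Get-Cluster -Name 'Prod-Cluster') -Name 'Separate-DCs' -KeepTogether $false -VM $dcVMs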

The goal is not to prevent outages from happening — that’s not possible. It is to have enough replicas and roles of AD in the right places so users won’t notice. You might scramble a little behind the scenes if a disruption happens, but that’s part of the job. The key is to keep customers moving along without them knowing about any of the issues happening in the background.


What Exactly is Azure Dedicated Host?

In this blog post, we’ll become more familiar with a new Azure service called Azure Dedicated Host. Microsoft announced the service as a preview some time ago and will make it generally available in the near future.

Microsoft Azure Dedicated Host allows customers to run their virtual machines on a dedicated host that is not shared with other customers. While in a regular virtual machine scenario different customers or tenants share the same hosts, with Dedicated Host, a customer no longer shares the hardware. The picture below illustrates the setup.

Illustration: Azure Dedicated Hosts

With Dedicated Host, Microsoft wants to address customer concerns regarding compliance, security and regulations that could come up when running on a shared physical server. In the past, there was only one way to get a dedicated host in Azure: use a very large instance, such as the D64s v3 VM size. These instances were so large that they consumed an entire host, and the placement of other customers’ VMs was not possible.

To be honest, with the improvements in machine placement, larger hosts and the much better density that comes with them, there was no longer a 100% guarantee that the host was still dedicated. These large instances are also extremely expensive, as you can see in the screenshot from the Azure Price Calculator.

Screenshot: Azure Price Calculator

How to Setup a Dedicated Host in Azure

The setup of a dedicated host is pretty easy. First, you need to create a host group with your preferences for availability, such as Availability Zones and the number of fault domains. You also need to decide on a host region, group name and so on.


After you create the host group, you can create a host within it. In the current preview, only the Ds3 and Es3 VM families are available to choose from. Microsoft will add more options soon.

Screenshot: Creating a dedicated host
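
The same two steps can also be scripted with the Az PowerShell module. This is a hedged sketch: the resource group, names, region, SKU and image are assumptions, and the SKUs available to you may differ from the preview described here.

# Assumes the Az.Compute module and an existing resource group; all names are placeholders
New-AzHostGroup -ResourceGroupName 'rg-dedicated' -Name 'hg-example' -Location 'westeurope' -PlatformFaultDomainCount 2

New-AzHost -ResourceGroupName 'rg-dedicated' -HostGroupName 'hg-example' -Name 'host01' -Location 'westeurope' -Sku 'DSv3-Type1'

# VMs are then placed on the host by passing its resource ID at creation time
$dedicatedHost = Get-AzHost -ResourceGroupName 'rg-dedicated' -HostGroupName 'hg-example' -Name 'host01'
New-AzVM -ResourceGroupName 'rg-dedicated' -Name 'vm01' -Location 'westeurope' -Credential (Get-Credential) -Size 'Standard_D4s_v3' -Image 'Win2019Datacenter' -HostId $dedicatedHost.Id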

More Details About Pricing

As you can see in the screenshot, Microsoft added the option to use Azure Hybrid Use Benefits for Dedicated Host. That means you can use your on-prem Windows Server and SQL Server licenses with Software Assurance to reduce your costs in Azure.

Screenshot: Azure Hybrid Use Benefits pricing

Azure Dedicated Host also gives you more insight into the host, such as:

  • The underlying hardware infrastructure (host type)
  • Processor brand, capabilities, and more
  • Number of cores
  • Type and size of the Azure Virtual Machines you want to deploy

An Azure customer can control all host-level platform maintenance initiated by Azure, such as OS updates. Azure Dedicated Host gives you the option to schedule a maintenance window within a 35-day period in which these updates are applied to your host system. During this self-maintenance window, customers can apply maintenance to their hosts at their own convenience.

Looking a bit deeper into the service, Azure becomes more like a traditional hosting provider that gives customers a very dynamic platform.

The following screenshot shows the current pricing for a Dedicated Host.

Screenshot: Azure Dedicated Host pricing details

The following virtual machine types can be run on a dedicated host.

Screenshot: Virtual machines that can run on a dedicated host

Currently, there is a soft limit of 3,000 vCPUs for dedicated hosts per region. That limit can be increased by submitting a support ticket.

When Would I Use A Dedicated Host?

In most cases, you would choose a dedicated host for compliance reasons. You may not want to share a host with other customers. Another reason could be that you want a guaranteed CPU architecture and type. If you place your VMs on the same host, it is guaranteed that they will run on the same architecture.

Further Reading

Microsoft has already published a lot of documentation and blog posts about the topic, so you can deepen your knowledge of Dedicated Host.

Resource #1: Announcement Blog and FAQ 

Resource #2: Product Page 

Resource #3: Introduction Video – Azure Friday “An introduction to Azure Dedicated Hosts | Azure Friday”


Proofpoint: Ransomware payments made in half of U.S. attacks

Ransomware payments to cybercriminals could soon become the rule rather than the exception, according to new research from Proofpoint.

Proofpoint’s sixth annual “State of the Phish” report, released Thursday, surveyed 600 working infosec professionals across seven countries: the U.S., Australia, France, Japan, the U.K., Spain and Germany. The report showed that 33% of global organizations infected with ransomware in 2019 opted to pay the ransom. In the U.S. alone, 51% of organizations that experienced a ransomware attack decided to pay the ransom, which was the highest percentage among the seven countries surveyed.

Gretel Egan, security awareness and training strategist at Proofpoint, said she wasn’t surprised that a third of survey respondents had made ransomware payments after being attacked. While law enforcement agencies and infosec vendors have consistently urged victims not to pay ransoms, she said she understood “the lure” such payments represent, especially for healthcare or critical infrastructure organizations.

“Often you see a hospital or a medical center having to completely shut down and turn patients away because life-saving services are not available,” she said. “Those organizations, in that moment, can look at a $20,000 ransom [demand] and say ‘I can be completely back online and running my business again very quickly’ as opposed to going through a relatively lengthy process even if they’re restoring from backups, which can take weeks to be fully operational again.”

Egan said that even when organizations do make ransomware payments, there are no guarantees. According to the 2020 State of the Phish report, among the organizations that opted to pay the ransom, 22% never got access to their data and 9% were hit with additional ransomware attacks. Because this was the first time Proofpoint asked survey respondents about ransomware payments, the vendor couldn’t say whether the numbers represented an increase or decrease from 2018.

However, Egan said Proofpoint observed another concerning trend in ransomware attacks, in which threat actors exfiltrate organizations’ data before encrypting it and then threaten to shame victims by making sensitive data public. “They’ll say ‘I’m going to share your information because you’re not going to pay me.’ It’s almost like doubling down on the blackmail,” Egan said. “I tell people there is no low that’s too low for [cybercriminals].”

Refusal to pay ransoms did not deter threat actors, as 2019 saw a resurgence of ransomware attacks, according to Proofpoint’s report. Last year’s State of the Phish report showed just 10% of organizations experienced a ransomware attack in 2018, as opposed to a whopping 65% in 2019.

“2018 was such a down year for ransomware in general, but it came storming back in 2019,” Egan said.

In addition to the survey, Proofpoint also analyzed more than 9 million suspicious emails reported by customers and an additional 50 million simulated phishing attacks sent by the vendor. Egan said the data showed phishing emails aren’t as big of a threat vector for ransomware attacks as in the past, which indicates cybercriminals are changing their strategies.

“We’re not seeing as many ransomware payloads delivered via e-mail,” she said. “From a threat level side, infections are coming in as secondary infections. There’s a system already compromised with malware and then threat actors take advantage of first level infiltration to then launch ransomware within the system.”

BEC on the rise

The report also found a significant rise in cybercriminals utilizing business email compromise (BEC) as a preferred attack. An alarming 86% of organizations surveyed by Proofpoint faced BEC attempts in 2019. Like ransomware payments, BEC attacks can result in millions of dollars in losses for organizations; 34% of respondents said they experienced financial losses or wire transfer fraud.

“There are many ways for attackers to benefit financially from initiating a BEC attack,” Egan said. “For example, the FBI has flagged cases of people going after W2 employee forms and using that to commit tax fraud. In many cases, BEC attacks are underreported because of the embarrassment and issue with having to admit you’ve been fooled.”

Egan said BEC attacks are typically successful because threat actors take their time and do their research, forging emails that appear innocuous to both the human eye and some email security products designed to detect such threats.

“Attacks like BEC are favorable for attackers because they don’t have malware or payload attachments. There are no dangerous links embedded in them so it’s difficult for technical safeguards to stop and block them, particularly if you’re dealing with an account that’s been compromised,” she said. “Many of the emails are coming from a known and trusted account, or within an organization, or person-to-person from an account that’s been compromised. Attackers are switching to a more people-centric approach.”

The trend of more people-centric attacks led to 55% of organizations dealing with at least one successful phishing attack in 2019.

“Business email compromise is a longer-term kind of con,” Egan said. “Threat actors don’t launch out of the gate asking for bank routing information. They establish a relationship over time to lull someone into believing they’re a trusted email account, so the user isn’t questioning it.”

Proofpoint said security awareness training is a method that saw success in combating such threats, with 78% of organizations reporting that training resulted in measurably lower phishing susceptibility. The report emphasized the importance of understanding who is being targeted, and more importantly, the types of attacks organizations are facing and will face, to reduce social engineering threats such as BEC and spear phishing emails.


Threat actors scanning for vulnerable Citrix ADC servers

An unpatched vulnerability in Citrix Application Delivery Controller and Citrix Gateway products has become the target of scans by potential threat actors.

Kevin Beaumont, a security researcher based in the U.K., and Johannes Ullrich, fellow at the SANS Internet Storm Center, independently discovered evidence of people scanning for Citrix ADC and Gateways vulnerable to CVE-2019-19781 over the past week.

Citrix disclosed the vulnerability, which affects all supported versions of Citrix ADC and Citrix Gateway (formerly NetScaler and NetScaler Gateway, respectively), on Dec. 17. Citrix warned that successful exploitation could allow an unauthenticated attacker to run arbitrary code and urged customers to apply mitigation techniques because a patch is not yet available.

Beaumont warned this could “become a serious issue” because of the ease of exploitation and how widespread the issue could be.

“In my Citrix ADC honeypot, CVE-2019-19781 is being probed with attackers reading sensitive credential config files remotely using ../ directory traversal (a variant of this issue). So this is in the wild, active exploitation starting up,” Beaumont wrote on Twitter. “There are way more boxes exposed than Pulse Secure, and you can exploit to RCE pre-auth with one POST and one GET request. Almost every box is also still vulnerable.”

Researchers at Positive Technologies have estimated as many as 80,000 businesses in 158 countries could have vulnerable Citrix products.

Neither Beaumont nor Ullrich saw any public exploits of the Citrix ADC vulnerability, and Ullrich wrote in a blog post that he would not describe the scans as “sophisticated.”

However, Craig Young, computer security researcher for Tripwire’s vulnerability and exposure research team, wrote on Twitter he had reproduced a remote code exploit for the vulnerability and he would “be surprised if someone hasn’t already used this in the wild.”

Florian Roth, CTO of Nextron Systems, detailed a Sigma rule to detect exploitation of the Citrix ADC vulnerability, but Young noted that his functional exploit could “absolutely exploit NetScaler CVE-2019-19781 without leaving this in the logs.”

Young described how he developed the exploit but did not release any proof-of-concept code.

“VERT’s research has identified three vulnerable behaviors which combine to enable code execution attacks on the NetScaler/ADC appliance,” Young wrote in a blog post. “These flaws ultimately allow the attacker to bypass an authorization constraint to create a file with user-controlled content which can then be processed through a server-side scripting language. Other paths towards code execution may also exist.”

All researchers involved urged customers to implement configuration changes detailed in Citrix’s mitigation suggestions while waiting for a proper fix.

Citrix did not respond to requests for comment at the time of this writing and it is unclear when a firmware update will be available to fix the issue.
