Essential components and tools of server monitoring

Though server capacity management is an essential part of data center operations, it can be a challenge to figure out which components to monitor and what tools are available. How you address server monitoring can change depending on what type of infrastructure you run within your data center, as virtualized architecture requirements differ from on-premises processing needs.

With the capacity management tools available today, you can monitor and optimize servers in real time. Monitoring tools keep you updated on resource usage and automatically allocate resources between appliances to ensure continuous system uptime.

For a holistic view of your infrastructure, capacity management software should monitor these server components to some degree. Tracking these components can help you troubleshoot issues and predict any potential changes in processing requirements.

CPU. Because CPUs handle basic logic and I/O operations, as well as route commands for other components in the server, they’re always in use. High CPU usage can indicate an issue with the CPU itself, but more often it’s a sign of a problem with a connected component. Above roughly 70% utilization, applications on the server can become sluggish or stop responding.
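The 70% rule of thumb is best applied to sustained load rather than momentary spikes. A minimal sketch of that logic (the threshold, window size and sample data here are assumptions for illustration, not values from any particular monitoring product):

```python
# Flag a CPU as saturated only when utilization stays above the
# threshold for several consecutive samples, so that brief spikes
# don't trigger alerts.
def cpu_saturated(samples, threshold=70.0, sustained=3):
    run = 0
    for pct in samples:
        run = run + 1 if pct > threshold else 0
        if run >= sustained:
            return True
    return False

# One brief spike is ignored; a sustained run trips the alert.
print(cpu_saturated([40, 95, 50, 60]))        # False
print(cpu_saturated([65, 75, 80, 90, 72]))    # True
```

Real monitoring tools layer far richer models on top of this idea, but the sustained-window principle is the same.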

Memory. High memory usage can result from multiple concurrent applications, but it can also point to a faulty process that normally consumes far fewer resources. The memory hardware itself rarely fails, but you should investigate performance whenever usage rates rise.

Storage area network. SAN component issues can occur at several points, including connection cabling, host bus adapters, switches and the storage servers themselves. A single SAN server can host data for multiple applications and often spans multiple physical sites, which means any component failure can have significant business effects.

Server disk capacity. With the right amount of capacity, storage disks alleviate storage issues and reduce data bottlenecks. Problems can arise when more users access the same application that uses a particular storage location, or when a resource-intensive process runs on a server not designed for that workload. If you can’t increase disk capacity, monitor it and investigate when usage rates rise, so you can optimize future usage.
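Capacity monitoring pays off most when you use the trend to predict exhaustion before it happens. A simple sketch of that idea, fitting a least-squares line to daily used-space samples (the sample figures are invented; production tools use far richer forecasting models):

```python
# Project how many days remain until a disk fills, given daily
# used-GB samples, via a least-squares slope estimate.
def days_until_full(used_gb, capacity_gb):
    n = len(used_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_gb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_gb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage is flat or shrinking; no exhaustion forecast
    return (capacity_gb - used_gb[-1]) / slope

# Growing ~10 GB/day toward a 500 GB disk:
print(days_until_full([400, 410, 420, 430], 500))  # 7.0
```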

Storage I/O rates. You should also monitor storage I/O rates. Bottlenecks and high I/O rates can indicate a variety of issues, including CPU problems, disk capacity limitations, process bugs and hardware failure.

Physical temperatures of servers. Another vital component to monitor is server temperatures. Data centers are cooled to prevent any hardware component problems, but temperatures can increase for a variety of reasons: HVAC failure, internal server hardware failure (CPU, RAM or motherboard), external hardware failure (switches and cabling) or a software failure (firmware bug or application process issues).

OS, firmware and server applications. The entire server software stack (BIOS, OS, hypervisors, drivers and applications) must work together to ensure optimal usage. A failed routine update can lead to issues for the server and any hosted applications, a degraded user experience or downtime.

Streamline reporting with software tools

Most server monitoring software tracks servers across your technology stack and notifies you of any issues. These tools include default and custom component monitoring, automated and manual optimization features, and standard and custom alerting options.

The software sector for server monitoring covers all types of architectures as well as required depth and breadth of data collection. Here is a shortlist of server capacity monitoring software for your data center.

SolarWinds Server & Application Monitor
SolarWinds’ software provides monitoring, optimization and diagnostic tools in a central hub. You can quickly identify which server resources are at capacity in real time, use historical reporting to track trends and forecast resource purchasing. Additional functions let you diagnose and fix virtual and physical storage capacity bottlenecks that affect application health and performance.

HelpSystems Vityl Capacity Management
Vityl Capacity Management is a comprehensive capacity management offering that makes it easy for organizations to proactively manage performance and do capacity planning in hybrid IT setups. It provides real-time monitoring data and historical trend reporting, which helps you understand the health and performance of your network over time.

BMC Software TrueSight Capacity Optimization
The TrueSight Capacity Optimization product helps admins plan, manage and optimize on-premises and cloud server resources through real-time and predictive features. It provides insights into multiple network types (physical, virtual or cloud) and helps you manage and forecast server usage.

VMware Capacity Planner
As a planning tool, VMware’s Capacity Planner can gather and analyze data about your servers and better forecast future usage. The forecasting and prediction functionality provides insights on capacity usage trends, as well as virtualization benchmarks based on industry performance standards.

Splunk App for Infrastructure
The Splunk App for Infrastructure (SAI) is an all-in-one tool that uses streamlined workflows and advanced alerting to monitor all network components. With SAI, you can create custom visualizations and alerts for better real-time monitoring and reporting through metric grouping and filtering based on your data center and reporting needs.

Active Directory nesting groups strategy and implementation

Trying to set up nesting groups in Active Directory can quickly become a challenge, especially if you don’t have a solid blueprint in place.

Microsoft recommends that you apply a nesting and role-based access control (RBAC) strategy: specifically, AGDLP for single-domain environments and AGUDLP for multi-domain/multi-forest environments. Implementing either arrangement in a legacy setup that lacks a clear RBAC and nesting strategy can take time to clean up, but the effort is worthwhile: the end result makes your environment more secure and dynamic.

Why should I use a nesting groups strategy?

A good nesting approach, such as AGDLP or AGUDLP, gives you a clear overview of who has which permissions, which helps in situations such as audits. It also simplifies troubleshooting when something doesn’t work. Lastly, it reduces administrative overhead by making the assignment of permissions to other domains straightforward.

What is AGDLP?

AGDLP stands for:

  • Accounts (the user or computer)
  • Global group (also called role group)
  • Domain Local groups (also called access groups)
  • Permissions (the specific permission tied to the domain local group)

The acronym is the exact order used to nest the groups.

Accounts will be a member of a global group that, in turn, is a member of a domain local group. The domain local group holds the specific permission to resources we want the global group to have access to, such as files and printer queues.

We can see in the illustration below how this particular nesting group comes together:

AGDLP nesting group
AGDLP is Microsoft’s recommended nesting group for role-based access configuration in a single domain setting.

By using AGDLP nesting and RBAC principles, you get an overview of a role’s specific permissions, which can be easily copied to other role groups if needed. With AGDLP, you only need to remember to always tie the permission to the domain local group at the end of the nesting chain and never to the global group.
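The nesting chain can be illustrated with a toy in-memory model: users sit in role (global) groups, role groups sit in access (domain local) groups, and permissions attach only to the domain local groups. All names below are illustrative, not from any real directory:

```python
# Toy AGDLP model: Accounts -> Global groups -> Domain Local groups -> Permissions.
role_members = {"Role_HR_Managers": {"alice", "bob"}}          # A in G
access_members = {"ACL_Fileshare_HR-Common_Read": {"Role_HR_Managers"}}  # G in DL
acl_permissions = {"ACL_Fileshare_HR-Common_Read": "Read: HR-Common share"}  # P on DL

def effective_permissions(user):
    # Walk the chain: find the user's roles, then the access groups
    # containing those roles, then the permissions on those access groups.
    roles = {r for r, members in role_members.items() if user in members}
    acls = {a for a, members in access_members.items() if members & roles}
    return sorted(acl_permissions[a] for a in acls)

print(effective_permissions("alice"))  # ['Read: HR-Common share']
print(effective_permissions("carol"))  # []
```

Because the permission hangs off the domain local group alone, granting a new role the same access means one membership change, not a new ACL entry.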

What is AGUDLP?

AGUDLP is the multi-domain/multi-forest version of AGDLP, with the one difference being a universal group added to the nesting chain. You can use these universal groups to add role groups (global groups) from other domains without too much effort.

The universal group — also called a resource group — should have the same name as the corresponding role group, except for its prefix, as illustrated below:

AGUDLP nesting group
For organizations with multiple domains and forests, AGUDLP is recommended to make it easier to add role groups from other domains.

What are the implementation concerns with AGDLP/AGUDLP?

There are four important rules related to the use of AGDLP or AGUDLP:

  1. Decide on a naming convention for your groups.
  2. One user can have multiple roles. Don’t create more role groups than necessary.
  3. Always use the correct group type: domain local, global, universal, etc.
  4. Never assign permissions directly to the global or universal groups. This will break the nesting strategy and its corresponding permissions summary for the organization.

Should you use AGDLP or AGUDLP?

If you don’t need to assign permissions across multiple domains, then always use AGDLP. Groups nested with AGDLP can be converted to AGUDLP if needed and require less work to operate. If you’re in doubt, use AGDLP.

To convert an AGDLP nested group to AGUDLP, do the following:

  1. Create a universal group.
  2. Add the universal group to every domain local group that the global group is a member of.
  3. Add the global group as a member of the universal group.
  4. Have all users and computers refresh their Kerberos tickets, or log out and back in.
  5. Remove the global group from the domain local groups.
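One way to script the conversion is to copy the new memberships in before removing the old ones, so access never lapses. A sketch on a toy in-memory model (group names are illustrative; real changes happen through AD tooling):

```python
# Sketch of an AGDLP -> AGUDLP conversion on an in-memory membership map.
memberships = {  # group -> set of members (groups or users)
    "ACL_Fileshare_HR-Common_Read": {"Role_HR_Managers"},  # DL contains G
    "Role_HR_Managers": {"alice"},                          # G contains users
}

def convert(dl, g, u):
    memberships[u] = set()        # 1. create the universal group
    memberships[dl].add(u)        # 2. add U to the domain local group
    memberships[u].add(g)         # 3. nest G inside U
    # 4. (users and computers refresh Kerberos tickets here)
    memberships[dl].discard(g)    # 5. remove G from the domain local group

convert("ACL_Fileshare_HR-Common_Read", "Role_HR_Managers", "Res_HR_Managers")
print(memberships["ACL_Fileshare_HR-Common_Read"])  # {'Res_HR_Managers'}
```

The ordering matters: the universal group is wired in fully before the global group is detached, mirroring the list above.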

Why a naming convention is necessary with nesting groups

You should decide on a naming convention before you implement AGDLP or AGUDLP. It isn’t a strict requirement, but without one you will quickly lose control of the structure you worked to build.

There are multiple naming schemes, but you can create a customized one that fits your organization. A good naming convention should have the following criteria:

  • Be easy to read.
  • Be simple enough to parse with scripts.
  • Contain no whitespace characters, such as spaces.
  • Contain no special characters (characters that are not letters or numbers), except for the underscore or minus sign.

Here are a few examples for the different group types:

Role groups

Naming convention: Role_[Department]_[RoleName]
Examples: Role_IT_Helpdesk or Role_HR_Managers

If you use the AGUDLP principle, then there should be a corresponding resource group with a Res prefix such as Res_IT_Helpdesk or Res_HR_Managers.

Permission groups (domain local groups)

Naming convention: ACL_[PermissionCategory]_[PermissionDescription]_[PermissionType]
Examples: ACL_Fileshare_HR-Common_Read or ACL_Computer_Server1_Logon or ACL_Computer_Server1_LocalAdmin.
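A naming convention that is simple enough to parse with scripts can also be enforced by one. A hedged sketch of such a check (the regexes encode the example scheme above; adapt them to your own convention):

```python
import re

# Validate group names against the example conventions:
#   Role_/Res_ groups: prefix + department + role name
#   ACL_ groups:       prefix + category + description + type
ROLE = re.compile(r"^(Role|Res)_[A-Za-z0-9-]+_[A-Za-z0-9-]+$")
ACL = re.compile(r"^ACL_[A-Za-z0-9-]+_[A-Za-z0-9-]+_[A-Za-z0-9-]+$")

def valid_group_name(name):
    return bool(ROLE.match(name) or ACL.match(name))

print(valid_group_name("Role_IT_Helpdesk"))              # True
print(valid_group_name("ACL_Fileshare_HR-Common_Read"))  # True
print(valid_group_name("HR Managers"))                   # False (whitespace)
```

Running a check like this periodically against the directory surfaces groups created outside the convention before they accumulate.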

Executing AGDLP and AGUDLP

It might be challenging to implement AGDLP in older domains that lack a conventional arrangement. It’s imperative to identify existing groups and permissions, and to test thoroughly, so you uncover the oddities before making everything conform to the new setup.

A rough outline of the implementation plan looks like this:

  • Educate and inform your co-workers to keep them from creating groups and assigning permissions in a way that doesn’t adhere to the new arrangement.
  • Ask the HR department for assistance to identify roles. It’s possible a user might have multiple roles.
  • Create role groups and their corresponding Res groups — if you use AGUDLP — and assign new permissions with the AGDLP/AGUDLP principle.
  • Identify existing permissions and change them to adhere to AGDLP/AGUDLP. You could either rename the groups and adjust their group type or build new groups side by side with the intent to replace the old group at a later date.

Gartner Names Microsoft a Leader in the 2019 Enterprise Information Archiving (EIA) Magic Quadrant – Microsoft Security

We often hear from customers about the explosion of data, and the challenge this presents for organizations in remaining compliant and protecting their information. We’ve invested in capabilities across the landscape of information protection and information governance, inclusive of archiving, retention, eDiscovery and communications supervision. In Gartner’s annual Magic Quadrant for Enterprise Information Archiving (EIA), Microsoft was named a Leader again in 2019.

According to Gartner, “Leaders have the highest combined measures of Ability to Execute and Completeness of Vision. They may have the most comprehensive and scalable products. In terms of vision, they are perceived to be thought leaders, with well-articulated plans for ease of use, product breadth and how to address scalability.” We believe this recognition represents our ability to provide best-in-class protection and deliver on innovations that keep pace with today’s compliance needs.

This recognition comes at a great point in our product journey. We are continuing to invest in solutions that are integrated into Office 365 and address the information protection and information governance needs of customers. Earlier this month, at our Ignite 2019 conference, we announced updates to our compliance portfolio including new data connectors; machine learning-powered governance, retention, discovery and supervision; and innovative capabilities such as threading Microsoft Teams or Yammer messages into conversations, allowing you to efficiently review and export complete dialogues with context, not just individual messages. Many customers tell us these are the types of advancements that help them meet their compliance requirements more efficiently, without impacting end-user productivity.

Learn more

Read the complimentary report for the analysis behind Microsoft’s position as a Leader.

For more information about our Information Archiving solution, visit our website and stay up to date with our blog.

Gartner Magic Quadrant for Enterprise Information Archiving, Julian Tirsu, Michael Hoeck, 20 November 2019.

*This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Author: Steve Clarke

Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

Among the many challenges with multi-tenancy in general is that it is not easy to define, and few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
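The one-namespace-per-tenant pattern described above is typically wired up by pairing each namespace with namespaced RBAC bindings. A sketch that generates the two minimal manifests per tenant (the tenant and group names are invented for illustration):

```python
# Generate a Namespace plus a namespaced admin RoleBinding per tenant --
# the simplest building block of namespace-based multi-tenancy.
def tenant_manifests(tenant):
    ns = {"apiVersion": "v1", "kind": "Namespace",
          "metadata": {"name": tenant}}
    rb = {"apiVersion": "rbac.authorization.k8s.io/v1",
          "kind": "RoleBinding",
          "metadata": {"name": f"{tenant}-admin", "namespace": tenant},
          # Bind the built-in "admin" ClusterRole, scoped to this namespace only.
          "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                      "kind": "ClusterRole", "name": "admin"},
          "subjects": [{"kind": "Group",
                        "apiGroup": "rbac.authorization.k8s.io",
                        "name": f"{tenant}-team"}]}
    return [ns, rb]

for m in tenant_manifests("billing"):
    print(m["kind"], m["metadata"]["name"])
```

As the article notes, this only covers access control; network isolation, quotas and policy enforcement all require additional components on top.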

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, Cruise Automation
Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more highly efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there are a plethora of third-party tools they can use to do it, some of which are sold by vendors, and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform comprises several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on cloud service providers such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, which became available in Kubernetes 1.13, but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.

In the meantime, Verizon’s Yahoo Media has open sourced a project called Athenz, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise has released similar open source tools of its own: RBACSync; Daytona, which fetches secrets from HashiCorp Vault (where Cruise stores secrets instead of etcd) and injects them into running applications; and k-rail, for workload policy enforcement.

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more granular access control mechanisms in a multi-tenant environment and simplify their management by automatically propagating shared policies between them. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hopes it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, Kubernetes Multi-Tenancy Working Group
Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.

When to Use SCVMM (And When Not To)

Microsoft introduced Hyper-V as a challenge to the traditional hypervisor market. Rather than a specialty hallmark technology, they made it into a standardized commodity. Instead of something to purchase and then plug into, Microsoft made it ubiquitously available as something to build upon. As a side effect, administrators manage Hyper-V using markedly different approaches than other systems. In this unfamiliar territory, we have a secondary curse of little clear guidance. So, let’s take a look at the merits and drawbacks of Microsoft’s paid Hyper-V management tool, System Center Virtual Machine Manager.

What is System Center Virtual Machine Manager?

“System Center” is an umbrella name for Microsoft’s datacenter management products, much like “Office” describes Microsoft’s suite of desktop productivity applications. System Center has two editions: Standard and Datacenter. Unlike Office, the System Center editions do not vary by the number of member products that you can use. Both editions allow you to use all System Center tools. Instead, the different editions differ by the number of systems that you can manage. We will not cover licensing in this article; please consult your reseller.

System Center Virtual Machine Manager, or SCVMM, or just VMM, presents a centralized tool to manage multiple Hyper-V hosts and clusters. It provides the following features:

  • Bare-metal deployment of Hyper-V hosts
  • Pre-defined host and virtual switch configuration combinations
  • Control over clusters, individual hosts, and virtual machines
  • Virtual machine templating
  • Simultaneous deployment of multiple templates for automated setup of tiered services
  • Granular access controls (control over specific hosts, VMs, deployments, etc.)
  • Role-based access
  • Self-service tools
  • Control over Microsoft load balancers
  • Organization of offline resources (ISOs, VHDXs, etc.)
  • Automatic balancing of clustered virtual machines
  • Control over network virtualization
  • Partial control over ESXi hosts

In essence, VMM allows you to manage your datacenter as a cloud.

Can I Try VMM Before Buying?

You can read the list above to get an idea of the product’s capabilities. But, you can’t distinguish much about a product from a simple bulleted list. You learn the most about a tool by using it. To that end, you can download an evaluation copy of the System Center products. I created a link to the current long-term version (2019). If you scroll below that, you will find an evaluation for the semi-annual channel releases. Because of the invasive nature of VMM, I highly recommend that you restrict it to a testbed of systems. If you don’t have a test environment, then it presents you with a fantastic opportunity to try out nested virtualization.

Why Should I Use VMM to Manage my Hyper-V Environment?

Rather than trying to take you through a world tour of features that you could more easily explore on your own, I want to take this up to a higher-level view. Let’s settle one fact upfront: not everyone needs VMM. To make a somewhat bolder judgment, very few Hyper-V deployments need it. So, let’s cover the ones that do.

VMM for Management at Scale

The primary driver of VMM use has less to do with features than with scale. Understand that VMM does almost nothing that you cannot do yourself with freely-available tools. It can make tasks easier. The more hosts you have, the more work to do. So, if you’ve got many hosts, it doesn’t hurt to have some help. Of course, the word “many” does not have a universal meaning. Where do we draw the line?

For starters, we would not draw any line at all. If you’ve gone through the evaluation, you like what VMM has to offer, and the licensing cost does not drive you away, then use VMM. If you go through the effort to configure it properly, then VMM can work for even a very small environment. We’ll dive deeper into that angle in the section that discusses the disincentives to use VMM.

Server hosting providers with dozens or hundreds of clients make an obvious case for VMM. VMM does one thing easily that nothing else can: role-based access. The traditional tools allow you to establish host administrators, but nothing more granular. If you want a simple tool to establish control for tenants, VMM can do that.

VMM solves another problem that makes the most sense in the context of hosting providers: network virtualization. The term “network virtualization” could have several meanings, so let’s disambiguate it. With network virtualization, we can use the same IP addresses in multiple locations without collision. In many contexts, we can provide that with network address translation (NAT) routers. But, for tenants, we need to separate their traffic from other networks while still using common hardware. We could do that with VLANs, but that gives us two other problems. First, we have a hard limit on the number of VLANs that can co-exist (the 12-bit VLAN ID allows only 4,094 usable IDs). Second, customers may want to stretch their networks, including their VLANs, into the hosted environment. With current versions of Hyper-V, we have the ability to manage network virtualization with PowerShell, but VMM still makes it easier.

So, if you manage very large environments that can make use of VMM’s tenant management, or if you have a complicated networking environment that can benefit from network virtualization, then VMM makes sense for you.

VMM for Cloud Management

VMM for cloud management really means much the same thing as the previous section. It simply changes the approach to thinking about it. The common joke goes, “the cloud is just someone else’s computer”. But, how does that change when it’s your cloud? Of course, that joke has always represented a misunderstanding of cloud computing.

A cloud makes computing resources available in a controlled fashion. Prior to the powers of virtualization, you would either assign physical servers or you’d slice out access to specific resources (like virtual servers in Apache). With virtualization, you can create virtual machines of particular sizes, which supplants the physical server model. With a cloud, at least the way that VMM treats it, you can quickly stand up all-new systems for clients. You can even give them the power to deploy their own.

Nothing requires the term “client” to apply only to external, paying customers. “Client” could easily mean internal teams. You can have an “accounting cloud” and a “sales cloud” and whatever else you need. Hosting providers aren’t the only entities that need to easily provide computing resources.

Granular Management Capabilities

I frequently see requests for granular control over Hyper-V resources. Administrators want to grant access to specific users to manage or connect to particular virtual machines. They want helpdesk staff to be able to reboot VMs, but not change settings. They want to allow different administrators to perform different functions based on their roles within the organization. I also think that some people just want to achieve a virtual remote desktop environment without paying the accompanying license fees.

VMM enables all of those things (except the VDI sidestep, of course). Some of these things are impossible with native tools. With difficulty, you can achieve some in other ways, such as with Constrained PowerShell Endpoints. VMM does it all, and with much greater ease.

The Quick Answer to Choosing VMM

I hope that all of this information provides a clearer image. When you have a large or complex Hyper-V environment, especially with multiple stakeholders that need to manage their own systems, VMM can help you. If you read through all of the above and did not see how any of that could meaningfully apply to your organization, then the next section may fit you better.

Reasons NOT to Use SCVMM?

We’ve seen the upsides of VMM. Now it’s time for a look at the downsides.

VMM Does Not Come Cheap – or Alone

You can’t get VMM by itself. You must buy into the entire suite or get nothing at all. I won’t debate the merits of the other members of this suite in this article. Whether you want them or not, they all come as a set. That means that you pay for the set. If you get the quote and feel any hesitation at paying it, then that’s a sign that it might not be worth it to you.

VMM is Heavy

Hyper-V’s built-in management tools require almost nothing. The PowerShell module and MMC consoles are lightweight. They require a bit of disk space to store and a spot of memory to operate. They communicate with the WMI/CIM interfaces to do their work.

VMM shows up at the opposite end. It needs a server application install, preferably on a dedicated system. It stores all of its information in a Microsoft SQL database. It requires an agent on every managed host.

VMM Presents its Own Challenges

VMM is not easy to install, configure, or use. You will have questions during your first install that the documentation does not cover. It does not get easier. I have talked with others that have different experiences from mine; some with problems that I did not encounter, and others that have never dealt with things that routinely irritate me. I will limit this section to the things that I believe every potential VMM customer will need to prepare for.

Networking Complexity

We talked about the powers of network virtualization earlier. That technology necessitates complexity. However, VMM makes things difficult even when you have a simple Hyper-V networking design. In my opinion, it’s needlessly complicated. You have several configuration points. If you miss one, something will not work. To tell the full story, a successful network configuration can be easily duplicated to other systems, even overwriting existing configurations. However, in smaller deployments, the negatives can greatly outweigh the positives.

General Complexity

I singled out networking in its own section because I feel that VMM’s designers could have created an equally capable networking system with a substantially simpler configuration. But, I think they can justify most of the rest of the complexity. VMM was built to enable you to run your own cloud – multiple clouds, even. That requires a bit more than the handful of interfaces necessary to wrangle a couple of hosts and a handful of VMs.

Over-Eager Problem Solving

When VMM detects problems, it tries to apply fixes. That sounds good, except that the “fixes” are often worse than the disease – and sometimes there aren’t even any problems to fix. I’ve had hosts drained of their VMs, sitting idle, all because VMM suddenly decided that there was a configuration problem with the virtual switch. Worse, it wouldn’t specify what it didn’t like about that virtual switch or propose how to remedy the problem. You’ll also see unspecified problems with hosts and virtual machines that VMM won’t let you ignore, and clearing them requires you to burn time on tedious housekeeping.

Convoluted Error Messaging

A point of common frustration that you’ll eventually run into: the error messages. VMM often leaves cryptic error messages in its logs. I’ve encountered numerous messages that I could not understand or find any further information about. These cost time and energy to research. The inability to uncover what triggered a message, or even to find an actual problem, eventually leads to “alarm fatigue”: you simply ignore the messages that don’t seem to matter, thereby taking the risk that you’ll miss something that does matter.

Mixed Version Limitations

With the introduction of changes in Hyper-V in the 2012 series, Microsoft directly addressed an earlier problem: simultaneous management of different versions of Hyper-V. You can currently use Hyper-V Manager and Failover Cluster Manager in the Windows 8+ and Windows Server 2012+ versions to control any version of Hyper-V that employs the v2 namespace. Officially, Microsoft says that any given built-in management tool will work with the version it was released with, any lower version that supports the v2 namespace, and one version higher. The tools can only manage the features that they know about, of course, but they’ll work.

Conversely, I have not seen any version of VMM that can control a higher-level Hyper-V version. VMM 2016 controls 2016 and lower, but not 2019. Furthermore, System Center rarely releases on the same schedule as Windows Server. VMM-reliant shops that wanted to migrate to Hyper-V in Windows Server 2019 had to wait several months for the release of VMM 2019.
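The asymmetry between the two compatibility rules can be condensed into a toy check. This is only an illustration of the rules as stated above, not any real API; the release labels are examples:

```python
# Hyper-V releases that use the v2 WMI namespace, in order (illustrative list).
RELEASES = ["2012", "2012 R2", "2016", "2019"]

def builtin_tool_can_manage(tool: str, host: str) -> bool:
    """Built-in tools: their own version, any lower v2 version, or one version higher."""
    return RELEASES.index(host) <= RELEASES.index(tool) + 1

def vmm_can_manage(vmm: str, host: str) -> bool:
    """VMM: its own version or lower only -- never a higher-level Hyper-V version."""
    return RELEASES.index(host) <= RELEASES.index(vmm)
```

So Hyper-V Manager from the 2016 wave can reach one version up to a 2019 host, while VMM 2016 cannot, which is why VMM-reliant shops were stuck waiting for VMM 2019.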

The Quick Answer to Choosing Against VMM

As mentioned a few times earlier in this article, the decision against VMM will largely rest on the scale of your deployment. Whether or not the problems that I mentioned above matter to you – or even apply to you – you will need to invest time and effort specifically for managing VMM. If you do not have that time, or if that effort is simply not worth it to you, then do not use VMM.

Remember that you have several free tools available: Hyper-V Manager, Failover Cluster Manager, their PowerShell modules, and Windows Admin Center.

Addressing the Automatic Recommendation for VMM

Part of the impetus behind writing this article was the oft-repeated directive to always use VMM with Hyper-V. For some writers and forum responders, it’s simply automatic. Unfortunately, it’s simply bad advice. It’s true that VMM provides an integrated, all-in-one management experience. But, if you’ve only got a handful of hosts, you can get a lot of mileage out of the free management tools. Where the graphical tools prove functionally inadequate, PowerShell can pick up the slack. I know that some administrators resist using PowerShell or any other command-line tools, but they simply have no valid reasons.

I will close this out by repeating what I said earlier in the article: get the evaluations and try out VMM. Set up networking, configure hosts, deploy virtual machines, and build out services. You should know quickly if it’s all worth it to you. Decide for yourself. And remember to come back and tell us your experiences! Good luck!

Go to Original Article
Author: Eric Siron

Grafana Labs observability platform set to grow

Data resides in many different places, and gaining observability over that data is a key challenge for many database managers and other data professionals.

Among the most popular technologies for data observability is the open source Grafana project, which is led by commercial open source vendor Grafana Labs. The company leads multiple open source projects and also sells enterprise-grade products and services that enable a full data observability platform.

On Oct. 24, Grafana Labs marked the next major phase of the vendor’s evolution, raising $24 million in a Series A round of funding led by Lightspeed Venture Partners, with participation from Lead Edge Capital. The new money will help the vendor grow beyond its roots to address a wider range of data use cases, according to the company.

In this Q&A, Raj Dutt, co-founder and CEO, discusses the intersection of open source and enterprise software and where Grafana is headed.

Why are you now raising a Series A?

Raj Dutt: We just celebrated our five-year anniversary earlier this month and we’ve built a sustainable company that was running at cashflow breakeven.

So the reason why we’ve raised funding is because we think we’ve proven phase one of our business model and our platform. Now we’re basically accelerating that to go well beyond Grafana Labs itself into a full stack, composable observability platform. So it’s mainly around accelerating what we’re doing in the observability ecosystem.

We’re thinking about building this open and composable observability stack with the larger ecosystem that doesn’t just include our own open source projects. You may know us obviously as the company behind Grafana, but we’re actually the company behind Loki, which is another very interesting, very popular open source project. But we also participate in other projects that we don’t necessarily own: we are one of the driving forces behind the Prometheus project, and we are actively involved in the Graphite project.


Grafana itself has a history of database neutrality going back to when it was started. So today, we’re interoperating natively and in real time with 42 different data sources. We’re all about bringing your data together, no matter where it lives.

While Grafana Labs as a company works with a Cloud Native Computing Foundation (CNCF) project such as Prometheus, have you considered contributing Grafana to the CNCF, or another open source organization?

Dutt: Not really, I said we work with some CNCF projects like Prometheus, but there’s no desire on our part to put our own projects such as Grafana or Loki into the CNCF.

We are an open source observability company and this is our core competency and our  core brand. Part of our strategy for delivering differentiated solutions to our customers involves being more in control of our own destiny, so to speak.

We very much believe in the power of the community. We do have a pretty active community, though certainly more than 50 percent of the work is done by Grafana Labs. We have a habit of always hiring the top contributors within the community, which is how we scale our engineering team.

If you look at the Grafana plugin ecosystem, which includes more than 100 plugins, the majority of those have been contributed by the community rather than developed by Grafana Labs.

What are your plans for the next major release with Grafana 7?

Dutt: Grafana 7 is slated for 2020. We’ve generally done a major release of Grafana every year that normally coincides with our annual Grafana user conference, which next year will be coming back to Amsterdam.

The major theme for Grafana 7 is really about it becoming more of a developer platform for people to build use-case-specific experiences with, and also going beyond metrics into logging and tracing. So we’re really building this full observability stack, and that is our 2020 product vision.

We think that the three pillars of observability are logging, metrics and traces, but it’s really about how you bring that data together and contextualize it in a seamless experience and that’s what we can do with Grafana at the center of this platform.

We can give people the choice to continue to use, say, Splunk for logging, Datadog for metrics, or New Relic for APM (application performance management), while not requiring them to store all their data in one database. We think it is a really compelling option for customers to give them the choice and flexibility to use best-of-breed open source software without locking them in.

What is the intersection between open source and enterprise software for Grafana Labs?

Dutt: With Grafana Enterprise, we take the open source version and we add certain capabilities and integrations to it. So we take Grafana, the open source version, we add data sources, and we combine it with 24/7 support. We also add features, generally around authentication and security, that appeal to our largest users.

With Grafana Labs, the company is all about creating these truly open source projects with communities under real open source licensing, and then finding ways, generally under non-open source licensing, to differentiate them.


You know, if you want to have something be open source, then make it really open source, and if it doesn’t work through a business model to make a particular thing open source, then don’t make it open source.

So our view is we have a lot of open source software, which is truly open source, meaning under a real open source license like Apache, and we also have our enterprise offerings that are not open.

We consider ourselves an open source company, because it’s in our DNA, but we really don’t want to play games with a lot of these newfangled open source licenses that you’re seeing proliferate.

How is Grafana being used today for data management and analytics use cases?

Dutt: Demand for Grafana has historically been driven primarily by the development teams and the operations teams. What’s happened recently, particularly with the support of things like SQL data sources as well as support for things like BigQuery, is that we’ve seen a lot of business users and business metrics being brought into Grafana very organically.

So we’re at this interesting intersection now where we’re being pushed into business analytics by our developer-centric customers and users. But we don’t claim to compete head-on with, say, Tableau or Power BI. We don’t consider ourselves a BI company, but the open source Grafana project is definitely being pulled in that direction by its user base.

The Grafana project itself has always been use case agnostic. There’s nothing in Grafana that is specific to IT, cloud native or anything like that, and that has been a deliberate decision. We’re kind of excited to see where the community organically takes us.

This interview has been edited for clarity and conciseness.


For Sale – Thrustmaster TS-XW steering wheel and Playseat Challenge Chair

x1 Thrustmaster TS-XW steering wheel for Xbox / PC

x1 Playseat Challenge collapsible racing chair.

Both purchased in last 6 months and genuinely used twice. Just no time.

I would keep but Mrs is pressuring me and I’ve given in.

Pics to follow.
Receipt for Steering wheel and chair available

Price and currency: 500
Delivery: Goods must be exchanged in person
Payment method: In person
Location: Manchester
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Kubernetes networking expands its horizons with service mesh

Enterprise IT operations pros who support microservices face a thorny challenge with Kubernetes networking, but service mesh architectures could help address their concerns.

Kubernetes networking under traditional methods faces performance bottlenecks. Centralized network resources must handle an order of magnitude more connections once the user migrates from VMs to containers. As containers appear and disappear much more frequently, managing those connections at scale can quickly create confusion on the network, and stale information inside network management resources can even misdirect traffic.

IT pros at KubeCon this month got a glimpse of how early adopters of microservices have approached Kubernetes networking issues with service mesh architectures. These network setups are built around sidecar containers, which act as proxies for application containers on internal networks. Such proxies offload networking functions from application containers and offer a reliable way to track and apply network security policies to ephemeral resources from a centralized management interface.

Proxies in a service mesh handle short-lived connections between microservices better than traditional networking models can. Service mesh proxies also tap telemetry information that IT admins can’t get from other Kubernetes networking approaches, such as transmission success rates, latencies and traffic volume on a container-by-container basis.
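As a rough sketch of what such a proxy offloads, the following toy Python class (hypothetical, not the API of Envoy, Linkerd or any real mesh) fronts a service call, retries once on failure, and records the success-rate and latency figures an operator would otherwise have to instrument inside every application:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Telemetry:
    # The per-service counters a mesh proxy exports: attempts, failures, latencies.
    attempts: int = 0
    failures: int = 0
    latencies_ms: list = field(default_factory=list)

    @property
    def success_rate(self) -> float:
        return 1.0 if self.attempts == 0 else (self.attempts - self.failures) / self.attempts

class SidecarProxy:
    """Toy stand-in for a sidecar proxy: the application calls the proxy,
    and the proxy forwards the request, retries once on error, and records
    telemetry the application never has to think about."""

    def __init__(self, service):
        self.service = service          # upstream service (here: any callable)
        self.telemetry = Telemetry()

    def call(self, request):
        last_error = None
        for _ in range(2):              # one retry, offloaded from the app
            self.telemetry.attempts += 1
            start = time.perf_counter()
            try:
                result = self.service(request)
                self.telemetry.latencies_ms.append((time.perf_counter() - start) * 1000)
                return result
            except Exception as err:    # count the failure, then maybe retry
                self.telemetry.failures += 1
                last_error = err
        raise last_error
```

In a real mesh the proxy runs as a separate sidecar process that intercepts network traffic rather than an in-process wrapper; the sketch only shows where the retry and measurement logic lives, and why the application code stays free of it.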

“The network should be transparent to the application,” said Matt Klein, a software engineer at San Francisco-based Lyft, which developed the Envoy proxy system to address networking obstacles as the ride-sharing company moved to a microservices architecture over the last five years.

“People didn’t trust those services, and there weren’t tools that would allow people to write their business logic and not focus on all the faults that were happening in the network,” Klein said.

With a sidecar proxy in Envoy, each of Lyft’s services only had to understand its local portion of the network, and the application language no longer factored in its function. At the time, only the most demanding web applications required proxy technology such as Envoy. But now, the complexity of microservices networking makes service mesh relevant to more mainstream IT shops.

The National Center for Biotechnology Information (NCBI) in Bethesda, Md., has laid the groundwork for microservices with a service mesh built around Linkerd, which was developed by Buoyant. The bioinformatics institute used Linkerd to modernize legacy applications, some as many as 30 years old, said Borys Pierov, a software developer at NCBI.

Any app that uses the HTTP protocol can point to the Linkerd proxy, which gives NCBI engineers improved visibility and control over advanced routing rules in the legacy infrastructure, Pierov said. While NCBI doesn’t use Kubernetes yet — it uses HashiCorp Consul and CoreOS rkt container runtime instead of Kubernetes and Docker — service mesh will be key to container networking on any platform.

“Linkerd gave us a look behind the scenes of our apps and an idea of how to split them into microservices,” Pierov said. “Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.”

Matt Klein, software engineer at Lyft, presents the company’s experiences with service mesh architectures at KubeCon.

Kubernetes networking will cozy up with service mesh next year

Linkerd is one of the most well-known and widely used tools among the multiple open source service mesh projects in various stages of development. However, Envoy has gained prominence because it underpins a fresh approach to the centralized management layer, called Istio. This month, Buoyant also introduced a better-performing, more efficient successor to Linkerd, called Conduit.


It’s still too early for any of these projects to be declared the winner. The Cloud Native Computing Foundation (CNCF) invited Istio’s developers, which include IBM, Microsoft and Lyft, to make Istio a CNCF project, CNCF COO Chris Aniszczyk said at KubeCon. But Buoyant also will formally present Conduit to the CNCF next year, and multiple projects could coexist within the foundation, Aniszczyk said.

Kubernetes networking challenges led Gannett’s USA Today Network to create its own “terrible, over-orchestrated” service mesh-like system, in the words of Ronald Lipke, senior engineer on the USA Today platform-as-a-service team, who presented on the organization’s Kubernetes experience at KubeCon. HAProxy and the Calico network management system have supported Kubernetes networking in production so far, but there have been problems under this system with terminating nodes cleanly and removing them from Calico quickly so traffic isn’t misdirected.

Lipke likes the service mesh approach, but it’s not yet a top priority for his team at this early stage of Kubernetes deployment. “No one’s really asking for it yet, so it’s taken a back seat,” he said.

This will change in the new year. The company plans to rethink the HAProxy approach to reduce its cloud resource costs and improve network tracing for monitoring purposes. The company has done proof-of-concept evaluations of Linkerd and plans to look at Conduit, he said in an interview after his KubeCon session.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

Take your first step towards the Imagine Cup with the Big Idea Challenge


The 2018 Imagine Cup season is underway, and we are thrilled to announce the Big Idea Challenge! Pitch your world-changing Imagine Cup idea and your team could win $3,000 USD and technical resources to take your idea to the next level!

  • $3,000 USD – 1st Place prize
  • $2,000 USD – 2nd Place prize
  • $1,000 USD – 3rd Place prize
  • Judge feedback on project submission – Top 10 teams
  • $600 in Azure credits – Top 50 teams

Student developers around the world are asked every year to pitch their projects to investors, partners, customers, publishers, and even potential teammates. It’s how you share your vision, how you persuade people that you don’t just have the right idea, you’re also the right team to make it happen. Every winning pitch has a solid project plan to back it up. Want to win the Imagine Cup and get your project funded? Judges will want to know why, and how, you plan to bring your project to market.

How can you get a head start on the Imagine Cup? Create a three-minute pitch video as well as a project plan document and let us know what your Imagine Cup idea is all about. Your entry will be reviewed by a team of judges who will score it on a number of different criteria as described in the Official Rules. We’ve got plenty of resources to get you started: take a look at winning Imagine Cup pitches as well as how to build your project plan; both should help you feel confident in taking that first step!

Start your journey today by registering for the 2018 Imagine Cup and submitting your solution to the Big Idea Challenge from your account page by January 31st, 2018; winners will be announced in February. Don’t wait; 50 teams will win, and your Big Idea could take home $3,000 USD!

Pablo Veramendi
Imagine Cup Competition Manager