Tag Archives: challenge

Gartner Names Microsoft a Leader in the 2019 Enterprise Information Archiving (EIA) Magic Quadrant – Microsoft Security

We often hear from customers about the explosion of data, and the challenge this presents for organizations in remaining compliant and protecting their information. We’ve invested in capabilities across the landscape of information protection and information governance, inclusive of archiving, retention, eDiscovery and communications supervision. In Gartner’s annual Magic Quadrant for Enterprise Information Archiving (EIA), Microsoft was named a Leader again in 2019.

According to Gartner, “Leaders have the highest combined measures of Ability to Execute and Completeness of Vision. They may have the most comprehensive and scalable products. In terms of vision, they are perceived to be thought leaders, with well-articulated plans for ease of use, product breadth and how to address scalability.” We believe this recognition represents our ability to provide best-in-class protection and deliver on innovations that keep pace with today’s compliance needs.

This recognition comes at a great point in our product journey. We are continuing to invest in solutions that are integrated into Office 365 and address the information protection and information governance needs of customers. Earlier this month, at our Ignite 2019 conference, we announced updates to our compliance portfolio, including new data connectors; machine learning-powered governance, retention, discovery and supervision; and innovative capabilities such as threading Microsoft Teams or Yammer messages into conversations, allowing you to efficiently review and export complete dialogues with context, not just individual messages. In conversations with customers, many tell us these are the types of advancements that help them meet their compliance requirements more efficiently, without impacting end-user productivity.

Learn more

Read the complimentary report for the analysis behind Microsoft’s position as a Leader.

For more information about our Information Archiving solution, visit our website and stay up to date with our blog.

Gartner Magic Quadrant for Enterprise Information Archiving, Julian Tirsu, Michael Hoeck, 20 November 2019.

*This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Author: Steve Clarke

Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

Among the many challenges with multi-tenancy in general is that it is not easy to define, and few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
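
To make that namespace-per-tenant pattern concrete, here is a minimal sketch, assuming the official Kubernetes Python client; the tenant and group names are placeholders for illustration, not anything Cruise or the working group has published. It creates a tenant namespace and scopes a group's access to that namespace only.

```python
# A minimal sketch (not Cruise's tooling) of namespace-per-tenant isolation using
# the official Kubernetes Python client (pip install kubernetes). The tenant and
# group names are placeholder assumptions.
from kubernetes import client, config


def onboard_tenant(tenant: str, group: str) -> None:
    config.load_kube_config()  # use load_incluster_config() when running in a pod
    core = client.CoreV1Api()
    rbac = client.RbacAuthorizationV1Api()

    # The namespace is the tenant's logical isolation boundary.
    core.create_namespace(
        {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": tenant}}
    )

    # Bind the tenant's group to the built-in "edit" ClusterRole, scoped to this
    # namespace only, so the tenant cannot touch other tenants' resources.
    rbac.create_namespaced_role_binding(
        tenant,
        {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{tenant}-edit"},
            "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "ClusterRole",
                "name": "edit",
            },
            "subjects": [
                {"kind": "Group", "name": group, "apiGroup": "rbac.authorization.k8s.io"}
            ],
        },
    )


if __name__ == "__main__":
    onboard_tenant("team-payments", "payments-developers")
```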

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there is a plethora of third-party tools they can use to do it, some sold by vendors and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform comprises several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on cloud service providers such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, which became available in Kubernetes 1.13, but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.

In the meantime, Verizon’s Yahoo Media has donated to open source a project called Athenz, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise also created a similar open source tool called RBACSync, along with Daytona, which fetches secrets from HashiCorp Vault (where Cruise stores them instead of in etcd) and injects them into running applications, and k-rail, which enforces workload policies.
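
For readers unfamiliar with that pattern, the sketch below shows the general shape of a Daytona-style startup step, written against the generic hvac Vault client rather than Cruise's actual tool; the Vault address, secret path and key name are assumptions for illustration.

```python
# A minimal sketch of the "fetch from Vault at startup" pattern described above,
# so nothing sensitive needs to live in etcd. Uses the hvac client
# (pip install hvac); the Vault address, secret path and key name are assumptions.
import os

import hvac


def inject_database_password() -> None:
    vault = hvac.Client(
        url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
        token=os.environ["VAULT_TOKEN"],  # a real deployment would use a Vault auth method
    )
    secret = vault.secrets.kv.v2.read_secret_version(path="apps/payments/db")
    # KV v2 nests the payload under data.data.
    os.environ["DB_PASSWORD"] = secret["data"]["data"]["password"]


if __name__ == "__main__":
    inject_database_password()
```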

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more granular access control mechanisms in a multi-tenant environment and simplify their management by automatically propagating shared policies between them. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hopes it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.


When to Use SCVMM (And When Not To)

Microsoft introduced Hyper-V as a challenge to the traditional hypervisor market. Rather than a specialty hallmark technology, they made it into a standardized commodity. Instead of something to purchase and then plug into, Microsoft made it ubiquitously available as something to build upon. As a side effect, administrators manage Hyper-V using markedly different approaches than other systems. In this unfamiliar territory, we have a secondary curse of little clear guidance. So, let’s take a look at the merits and drawbacks of Microsoft’s paid Hyper-V management tool, System Center Virtual Machine Manager.

What is System Center Virtual Machine Manager?

“System Center” is an umbrella name for Microsoft’s datacenter management products, much like “Office” describes Microsoft’s suite of desktop productivity applications. System Center has two editions: Standard and Datacenter. Unlike Office, the System Center editions do not vary by the number of member products that you can use. Both editions allow you to use all System Center tools. Instead, they differ by the number of systems that you can manage. We will not cover licensing in this article; please consult your reseller.

System Center Virtual Machine Manager, or SCVMM, or just VMM, presents a centralized tool to manage multiple Hyper-V hosts and clusters. It provides the following features:

  • Bare-metal deployment of Hyper-V hosts
  • Pre-defined host and virtual switch configuration combinations
  • Control over clusters, individual hosts, and virtual machines
  • Virtual machine templating
  • Simultaneous deployment of multiple templates for automated setup of tiered services
  • Granular access controls (control over specific hosts, VMs, deployments, etc.)
  • Role-based access
  • Self-service tools
  • Control over Microsoft load balancers
  • Organization of offline resources (ISOs, VHDXs, etc.)
  • Automatic balancing of clustered virtual machines
  • Control over network virtualization
  • Partial control over ESXi hosts

In essence, VMM allows you to manage your datacenter as a cloud.

Can I Try VMM Before Buying?

You can read the list above to get an idea of the product’s capabilities. But, you can’t distinguish much about a product from a simple bulleted list. You learn the most about a tool by using it. To that end, you can download an evaluation copy of the System Center products. I created a link to the current long-term version (2019). If you scroll below that, you will find an evaluation for the semi-annual channel releases. Because of the invasive nature of VMM, I highly recommend that you restrict it to a testbed of systems. If you don’t have a test environment, then it presents you with a fantastic opportunity to try out nested virtualization.

Why Should I Use VMM to Manage my Hyper-V Environment?

Rather than trying to take you through a world tour of features that you could more easily explore on your own, I want to take this up to a higher-level view. Let’s settle one fact upfront: not everyone needs VMM. To make a somewhat bolder judgment, very few Hyper-V deployments need it. So, let’s cover the ones that do.

VMM for Management at Scale

The primary driver of VMM use has less to do with features than with scale. Understand that VMM does almost nothing that you cannot do yourself with freely-available tools. It can make tasks easier. The more hosts you have, the more work to do. So, if you’ve got many hosts, it doesn’t hurt to have some help. Of course, the word “many” does not have a universal meaning. Where do we draw the line?

For starters, we would not draw any line at all. If you’ve gone through the evaluation, you like what VMM has to offer, and the licensing cost does not drive you away, then use VMM. If you go through the effort to configure it properly, then VMM can work for even a very small environment. We’ll dive deeper into that angle in the section that discusses the disincentives to use VMM.

Server hosting providers with dozens or hundreds of clients make an obvious case for VMM. VMM does one thing easily that nothing else can: role-based access. The traditional tools allow you to establish host administrators, but nothing more granular. If you want a simple tool to establish control for tenants, VMM can do that.

VMM solves another problem that makes the most sense in the context of hosting providers: network virtualization. The term “network virtualization” could have several meanings, so let’s disambiguate it. With network virtualization, we can use the same IP addresses in multiple locations without collision. In many contexts, we can provide that with network address translation (NAT) routers. But, for tenants, we need to separate their traffic from other networks while still using common hardware. We could do that with VLANs, but that gives us two other problems. First, we have a hard limit on the number of VLANs that can co-exist. Second, customers may want to stretch their networks, including their VLANs, into the hosted environment. With current versions of Hyper-V, we have the ability to manage network virtualization with PowerShell, but VMM still makes it easier.

So, if you manage very large environments that can make use of VMM’s tenant management, or if you have a complicated networking environment that can benefit from network virtualization, then VMM makes sense for you.

VMM for Cloud Management

VMM for cloud management really means much the same thing as the previous section. It simply changes the approach to thinking about it. The common joke goes, “the cloud is just someone else’s computer”. But, how does that change when it’s your cloud? Of course, that joke has always represented a misunderstanding of cloud computing.

A cloud makes computing resources available in a controlled fashion. Prior to the powers of virtualization, you would either assign physical servers or you’d slice out access to specific resources (like virtual servers in Apache). With virtualization, you can create virtual machines of particular sizes, which supplants the physical server model. With a cloud, at least the way that VMM treats it, you can quickly stand up all-new systems for clients. You can even give them the power to deploy their own.

Nothing requires the term “client” to apply only to external, paying customers. “Client” could easily mean internal teams. You can have an “accounting cloud” and a “sales cloud” and whatever else you need. Hosting providers aren’t the only entities that need to easily provide computing resources.

Granular Management Capabilities

I frequently see requests for granular control over Hyper-V resources. Administrators want to grant access to specific users to manage or connect to particular virtual machines. They want helpdesk staff to be able to reboot VMs, but not change settings. They want to allow different administrators to perform different functions based on their roles within the organization. I also think that some people just want to achieve a virtual remote desktop environment without paying the accompanying license fees.

VMM enables all of those things (except the VDI sidestep, of course). Some of these things are impossible with native tools. With difficulty, you can achieve some in other ways, such as with Constrained PowerShell Endpoints. VMM does it all, and with much greater ease.

The Quick Answer to Choosing VMM

I hope that all of this information provides a clearer image. When you have a large or complex Hyper-V environment, especially with multiple stakeholders that need to manage their own systems, VMM can help you. If you read through all of the above and did not see how any of that could meaningfully apply to your organization, then the next section may fit you better.

Reasons NOT to Use SCVMM?

We’ve seen the upsides of VMM. Now it’s time for a look at the downsides.

VMM Does Not Come Cheap – or Alone

You can’t get VMM by itself. You must buy into the entire suite or get nothing at all. I won’t debate the merits of the other members of this suite in this article. Whether you want them or not, they all come as a set. That means that you pay for the set. If you get the quote and feel any hesitation at paying it, then that’s a sign that it might not be worth it to you.

VMM is Heavy

Hyper-V’s built-in management tools require almost nothing. The PowerShell module and MMC consoles are lightweight. They require a bit of disk space to store and a spot of memory to operate. They communicate with the WMI/CIM interfaces to do their work.

VMM shows up at the opposite end. It needs a server application install, preferably on a dedicated system. It stores all of its information in a Microsoft SQL database. It requires an agent on every managed host.

VMM Presents its Own Challenges

VMM is not easy to install, configure, or use. You will have questions during your first install that the documentation does not cover. It does not get easier. I have talked with others that have different experiences from mine; some with problems that I did not encounter, and others that have never dealt with things that routinely irritate me. I will limit this section to the things that I believe every potential VMM customer will need to prepare for.

Networking Complexity

We talked about the powers of network virtualization earlier. That technology necessitates complexity. However, VMM makes things difficult even when you have a simple Hyper-V networking design. In my opinion, it’s needlessly complicated. You have several configuration points. If you miss one, something will not work. To tell the full story, a successful network configuration can be easily duplicated to other systems, even overwriting existing configurations. However, in smaller deployments, the negatives can greatly outweigh the positives.

General Complexity

I singled out networking in its own section because I feel that VMM’s designers could have created an equally capable networking system with a substantially simpler configuration. But, I think they can justify most of the rest of the complexity. VMM was built to enable you to run your own cloud – multiple clouds, even. That requires a bit more than the handful of interfaces necessary to wrangle a couple of hosts and a handful of VMs.

Over-Eager Problem Solving

When VMM detects problems, it tries to apply fixes. That sounds good, except that the “fixes” are often worse than the disease – and sometimes there aren’t even any problems to fix. I’ve had hosts drained of their VMs, sitting idle, all because VMM suddenly decided that there was a configuration problem with the virtual switch. Worse, it wouldn’t specify what it didn’t like about that virtual switch or propose how to remedy the problem. You’ll also see unspecified problems with hosts and virtual machines that VMM won’t ignore and that require you to burn time on tedious housekeeping.

Convoluted Error Messaging

A point of common frustration that you’ll eventually run into: the error messages. VMM often leaves cryptic error messages in its logs. I’ve encountered numerous messages that I could not understand or find any further information about. These cost time and energy to research. Inability to uncover what triggered something or even find an actual problem – these things eventually lead to “alarm fatigue”. You simply ignore the messages that don’t seem to matter, thereby taking a risk that you’ll miss something that does matter.

Mixed Version Limitations

With the introduction of changes in Hyper-V in the 2012 series, Microsoft directly addressed an earlier problem: simultaneous management of different versions of Hyper-V. You can currently use Hyper-V Manager and Failover Cluster Manager in the Windows 8+ and Windows Server 2012+ versions to control any version of Hyper-V that employs the v2 namespace. Officially, Microsoft says that any given built-in management tool will work with the version it was released with, any lower version that supports v2, and one version higher. The tools can only manage the features that they know about, of course, but they’ll work.

Conversely, I have not seen any version of VMM that can control a higher-level Hyper-V version. VMM 2016 controls 2016 and lower, but not 2019. Furthermore, System Center rarely releases on the same schedule as Windows Server. VMM-reliant shops that wanted to migrate to Hyper-V in Windows Server 2019 had to wait several months for the release of VMM 2019.

The Quick Answer to Choosing Against VMM

As mentioned a few times earlier in this article, the decision against VMM will largely rest on the scale of your deployment. Whether or not the problems that I mentioned above matter to you – or even apply to you – you will need to invest time and effort specifically for managing VMM. If you do not have that time, or if that effort is simply not worth it to you, then do not use VMM.

Remember that you have several free tools available: Hyper-V Manager, Failover Cluster Manager, their PowerShell modules, and Windows Admin Center.

Addressing the Automatic Recommendation for VMM

Part of the impetus behind writing this article was the oft-repeated directive to always use VMM with Hyper-V. For some writers and forum responders, it’s simply automatic. Unfortunately, it’s simply bad advice. It’s true that VMM provides an integrated, all-in-one management experience. But, if you’ve only got a handful of hosts, you can get a lot of mileage out of the free management tools. Where the graphical tools prove functionally inadequate, PowerShell can pick up the slack. I know that some administrators resist using PowerShell or any other command-line tools, but they simply have no valid reasons.

I will close this out by repeating what I said earlier in the article: get the evaluations and try out VMM. Set up networking, configure hosts, deploy virtual machines, and build out services. You should know quickly whether it’s all worth it to you. Decide for yourself. And remember to come back and tell us your experiences! Good luck!


Author: Eric Siron

Grafana Labs observability platform set to grow

Data resides in many different places, and gaining observability over that data is a key challenge for many database managers and other data professionals.

Among the most popular technologies for data observability is the open source Grafana project, which is led by commercial open source vendor Grafana Labs. The company leads multiple open source projects and also sells enterprise-grade products and services that enable a full data observability platform.

On Oct. 24, Grafana Labs marked the next major phase of the vendor’s evolution, raising $24 million in a Series A round of funding led by Lightspeed Venture Partners, with participation from Lead Edge Capital. The new money will help the vendor grow beyond its roots to address a wider range of data use cases, according to the company.

In this Q&A, Raj Dutt, co-founder and CEO, discusses the intersection of open source and enterprise software and where Grafana is headed.

Why are you now raising a Series A?

Raj Dutt: We just celebrated our five-year anniversary earlier this month and we’ve built a sustainable company that was running at cashflow breakeven.

So the reason why we’ve raised funding is because we think we’ve proven phase one of our business model and our platform. Now we’re basically accelerating that to go well beyond Grafana Labs itself into a full stack, composable observability platform. So it’s mainly around accelerating what we’re doing in the observability ecosystem.

We’re thinking about building this open and composable observability stack with the larger ecosystem that doesn’t just include our own open source projects. You may know us obviously as the company behind Grafana, but we’re actually also the company behind Loki, which is another very interesting, very popular open source project. But we also participate in other projects that we don’t necessarily own. We are one of the driving forces behind the Prometheus project, and we are actively involved in the Graphite project.


Grafana itself has been database-neutral since it was started. So today, we’re interoperating natively and in real time with 42 different data sources. We’re all about bringing your data together, no matter where it lives.

While Grafana Labs as a company works with a Cloud Native Computing Foundation (CNCF) project such as Prometheus, have you considered contributing Grafana to the CNCF, or another open source organization?

Dutt: Not really. As I said, we work with some CNCF projects like Prometheus, but there’s no desire on our part to put our own projects such as Grafana or Loki into the CNCF.

We are an open source observability company and this is our core competency and our core brand. Part of our strategy for delivering differentiated solutions to our customers involves being more in control of our own destiny, so to speak.

We very much believe in the power of the community. We do have a pretty active community, though certainly more than 50 percent of the work is done by Grafana Labs. We have a habit of always hiring the top contributors within the community, which is how we scale our engineering team.

If you look at the Grafana plugin ecosystem, which now includes 100-plus plugins, the majority of those have been contributed by the community and not developed by Grafana Labs.

What are your plans for the next major release with Grafana 7?

Dutt: Grafana 7 is slated for 2020. We’ve generally done a major release of Grafana every year that normally coincides with our annual Grafana user conference, which next year will be coming back to Amsterdam.

The major theme for Grafana 7 is really about it becoming more of a developer platform for people to build use case specific experiences with and also going beyond metrics into logging and tracing. So we’re really building this full observability stack and that is our 2020 product vision.

We think that the three pillars of observability are logging, metrics and traces, but it’s really about how you bring that data together and contextualize it in a seamless experience and that’s what we can do with Grafana at the center of this platform.

We can give people the choice to continue to use, say, Splunk for logging, Datadog for metrics, or New Relic for APM (application performance management), while not requiring them to store all their data in one database. We think it is a really compelling option for customers to give them the choice and flexibility to use best-of-breed open source software without locking them in.

What is the intersection between open source and enterprise software for Grafana Labs?

Dutt: With Grafana Enterprise, we take the open source version and we add certain capabilities and integrations to it. So we take Grafana, the open source version, and we add data sources, and we combine it with 24/7 support. We also add features, generally around authentication and security, that are appealing to our largest users.

With Grafana Labs, the company is all about creating these truly open source projects with communities under real open source licensing, and then finding ways, generally under non-open source licensing, to differentiate them.


You know, if you want to have something be open source, then make it really open source, and if it doesn’t work through a business model to make a particular thing open source, then don’t make it open source.

So our view is we have a lot of open source software, which is truly open source, meaning under a real open source license like Apache, and we also have our enterprise offerings that are not open.

We consider ourselves an open source company, because it’s in our DNA, but we really don’t want to play games with a lot of these newfangled open source licenses that you’re seeing proliferate.

How is Grafana being used today for data management and analytics use cases?

Dutt: We have gone from seeing Grafana demand driven primarily by development and operations teams. What’s happened recently is, particularly with the support of things like SQL data sources as well as support for things like BigQuery and other data sources, we’ve seen a lot of business users and business metrics being brought into Grafana very organically.

So we’re at this interesting intersection now where we’re being pushed into business analytics by our developer centric customers and users. But we don’t claim to compete head on with say, you know, Tableau or Power BI. We don’t consider ourselves a BI company, but the open source Grafana project is definitely being pulled in that direction by its user base.

The Grafana project itself has always been use case agnostic. There’s nothing in Grafana that is specific to IT, cloud native or anything like that, and that has been a deliberate decision. We’re kind of excited to see where the community organically takes us.

This interview has been edited for clarity and conciseness.


For Sale – Thrustmaster TS-XW steering wheel and Playseat Challenge Chair

x1 Thrustmaster TS XW steering wheel for xbox / pc

x1 Playseat Challenge collapsible racing chair.

Both purchased in last 6 months and genuinely used twice. Just no time.

I would keep but Mrs is pressuring me and I’ve given in.

Pics to follow.
Receipt for Steering wheel and chair available

Price and currency: 500
Delivery: Goods must be exchanged in person
Payment method: In person
Location: Manchester
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected


Kubernetes networking expands its horizons with service mesh

Enterprise IT operations pros who support microservices face a thorny challenge with Kubernetes networking, but service mesh architectures could help address their concerns.

Kubernetes networking under traditional methods faces performance bottlenecks. Centralized network resources must handle an order of magnitude more connections once the user migrates from VMs to containers. As containers appear and disappear much more frequently, managing those connections at scale can quickly create confusion on the network, and stale information inside network management resources can even misdirect traffic.

IT pros at KubeCon this month got a glimpse at how early adopters of microservices have approached Kubernetes networking issues with service mesh architectures. These network setups are built around sidecar containers, which act as a proxy for application containers on internal networks. Such proxies offload networking functions from application containers and offer a reliable way to track and apply network security policies to ephemeral resources from a centralized management interface.

Proxies in a service mesh handle short-lived connections between microservices better than traditional networking models can. Service mesh proxies also tap telemetry information that IT admins can’t get from other Kubernetes networking approaches, such as transmission success rates, latencies and traffic volume on a container-by-container basis.
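
To make the sidecar idea concrete, here is a deliberately simplified sketch of what such a proxy does; production meshes rely on purpose-built proxies such as Envoy or Linkerd, and the ports and upstream address below are illustrative assumptions only.

```python
# A simplified sketch of a sidecar proxy: sit in front of a local application
# container, forward each request, and record the telemetry (request counts,
# errors, latency) that a real mesh proxy such as Envoy exports.
# Ports and the upstream address are illustrative assumptions.
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:8080"  # the application container this sidecar fronts
stats = {"requests": 0, "errors": 0, "latency_sum": 0.0}


class SidecarProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        start = time.monotonic()
        stats["requests"] += 1
        try:
            with urllib.request.urlopen(UPSTREAM + self.path, timeout=5) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(body)
        except urllib.error.URLError:
            stats["errors"] += 1
            self.send_response(502)  # surface upstream failures to the caller
            self.end_headers()
        finally:
            stats["latency_sum"] += time.monotonic() - start

    def log_message(self, fmt, *args):
        pass  # a real proxy would export stats to a metrics endpoint instead


if __name__ == "__main__":
    # Peers address the proxy port; the proxy relays traffic to the application.
    HTTPServer(("0.0.0.0", 15001), SidecarProxy).serve_forever()
```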

“The network should be transparent to the application,” said Matt Klein, a software engineer at San Francisco-based Lyft, which developed the Envoy proxy system to address networking obstacles as the ride-sharing company moved to a microservices architecture over the last five years.

“People didn’t trust those services, and there weren’t tools that would allow people to write their business logic and not focus on all the faults that were happening in the network,” Klein said.

With a sidecar proxy in Envoy, each of Lyft’s services only had to understand its local portion of the network, and the application language no longer factored into its function. At the time, only the most demanding web applications required proxy technology such as Envoy. But now, the complexity of microservices networking makes service mesh relevant to more mainstream IT shops.

The National Center for Biotechnology Information (NCBI) in Bethesda, Md., has laid the groundwork for microservices with a service mesh built around Linkerd, which was developed by Buoyant. The bioinformatics institute used Linkerd to modernize legacy applications, some as many as 30 years old, said Borys Pierov, a software developer at NCBI.

Any app that uses the HTTP protocol can point to the Linkerd proxy, which gives NCBI engineers improved visibility and control over advanced routing rules in the legacy infrastructure, Pierov said. While NCBI doesn’t use Kubernetes yet — it uses HashiCorp Consul and CoreOS rkt container runtime instead of Kubernetes and Docker — service mesh will be key to container networking on any platform.
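
In practice, pointing an app at the proxy can be as simple as routing its outbound HTTP calls through the node-local Linkerd instance. The snippet below is a hypothetical example; the proxy port (4140 is a commonly used Linkerd 1.x default) and the service URL are assumptions, not NCBI's actual configuration.

```python
# A hypothetical illustration of "pointing an HTTP app at the proxy" with
# Linkerd 1.x: outbound calls go through the node-local Linkerd instance
# instead of straight to the peer. Proxy address and service URL are assumptions.
import requests

LINKERD_PROXY = "http://localhost:4140"

response = requests.get(
    "http://legacy-annotation-service/api/v1/records",
    proxies={"http": LINKERD_PROXY},  # send the call through the service mesh proxy
    timeout=5,
)
print(response.status_code, response.elapsed.total_seconds())
```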

“Linkerd gave us a look behind the scenes of our apps and an idea of how to split them into microservices,” Pierov said. “Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.”

Matt Klein, software engineer at Lyft, presents the company’s experiences with service mesh architectures at KubeCon.

Kubernetes networking will cozy up with service mesh next year

Linkerd is one of the most well-known and widely used tools among the multiple open source service mesh projects in various stages of development. However, Envoy has gained prominence because it underpins a fresh approach to the centralized management layer, called Istio. This month, Buoyant also introduced a better-performing, more efficient successor to Linkerd, called Conduit.


It’s still too early for any of these projects to be declared the winner. The Cloud Native Computing Foundation (CNCF) invited Istio’s developers, which include Google, IBM and Lyft, to make Istio a CNCF project, CNCF COO Chris Aniszczyk said at KubeCon. But Buoyant also will formally present Conduit to the CNCF next year, and multiple projects could coexist within the foundation, Aniszczyk said.

Kubernetes networking challenges led Gannett’s USA Today Network to create its own “terrible, over-orchestrated” service mesh-like system, in the words of Ronald Lipke, senior engineer on the USA Today platform-as-a-service team, who presented on the organization’s Kubernetes experience at KubeCon. HAProxy and the Calico network management system have supported Kubernetes networking in production so far, but there have been problems under this system with terminating nodes cleanly and removing them from Calico quickly so traffic isn’t misdirected.

Lipke likes the service mesh approach, but it’s not yet a top priority for his team at this early stage of Kubernetes deployment. “No one’s really asking for it yet, so it’s taken a back seat,” he said.

This will change in the new year. The company plans to rethink the HAProxy approach to reduce its cloud resource costs and improve network tracing for monitoring purposes. The company has done proof-of-concept evaluations around Linkerd and plans to look at Conduit, he said in an interview after his KubeCon session.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

Take your first step towards the Imagine Cup with the Big Idea Challenge


The 2018 Imagine Cup season is underway, and we are thrilled to announce the Big Idea Challenge! Pitch your world-changing Imagine Cup idea and your team could win $3,000 USD and technical resources to take your idea to the next level!

  • $3,000 USD – 1st Place prize
  • $2,000 USD – 2nd Place prize
  • $1,000 USD – 3rd Place prize
  • Judge feedback on project submission – Top 10 teams
  • $600 in Azure credits – Top 50 teams

Student developers around the world are asked every year to pitch their projects to investors, partners, customers, publishers, and even potential teammates. It’s how you share your vision, how you persuade people that you don’t just have the right idea, you’re also the right team to make it happen. Every winning pitch has a solid project plan to back it up. Want to win the Imagine Cup and get your project funded? Judges will want to know why and how you plan to bring your project to market.

How can you get a head start on the Imagine Cup? Create a three-minute pitch video as well as a project plan document and let us know what your Imagine Cup idea is all about. Your entry will be reviewed by a team of judges who will score it on a number of different criteria as described in the Official Rules. We’ve got plenty of resources to get you started: take a look at winning Imagine Cup pitches as well as how to build your project plan; both should help you feel confident in taking that first step!

Start your journey today by registering for the 2018 Imagine Cup and submitting your solution to the Big Idea challenge from your account page by January 31st, 2018; winners will be announced in February. Don’t wait; 50 teams will win, and your Big Idea could take home $3,000 USD!

Pablo Veramendi
Imagine Cup Competition Manager

Revved-up Windows Server release cadence now serves DevOps

The challenge Microsoft faces today is it must address two sets of customers. First, there are the application developers who work on continuous deployment cycles. And second, there are traditional IT pros who prefer consistency over speed.

To that end, Microsoft has moved Windows Server 2016 to the more formal release cycle it uses with its other products, such as Windows 10 and Office 365 ProPlus, which gives businesses a choice. A Long-Term Servicing Channel meets the needs of traditional enterprise deployments, while a Semi-Annual Channel brings new releases twice a year to serve cloud-centric organizations with short-term, fast-evolving environments that need fast access to new features.

Here are the services and features included in each Windows Server release schedule and what to expect for support and updates.

What are the new release channels for Windows Server 2016?

Microsoft’s long-held pattern is to introduce a major Windows Server release every two or three years, followed by roughly a decade of support. Microsoft calls this its Long-Term Servicing Channel, formerly known as the long-term servicing branch. Windows Server with Desktop Experience — the full GUI-based installation — and Windows Server Core iterations will follow the Long-Term Servicing Channel and get new rollouts every two or three years.


The more intriguing change is Microsoft’s short-term release model, known as the Semi-Annual Channel. This approach promises a notable rollout each year in the spring and fall. This channel caters to organizations that want quick access to advanced capabilities, such as support for Linux Bash and the latest Docker container features. Volume-licensed businesses with the Software Assurance program are eligible for the Semi-Annual Channel release.

The Nano Server installation option now only exists in the Semi-Annual Channel. Server Core comes in both short- and long-term channel alternatives.

The Semi-Annual Channel lets Microsoft test and experiment with new server OS features and functionality, and then bundle the most successful and well-received enhancements into the long-term channel. There is no guarantee that all features placed in the Semi-Annual Channel will make it into the next long-term Windows Server release.

What are the support and update details for the channels?

Microsoft offers different support terms for the different release channels.

Long-Term Servicing Channel: These Windows Server products get five years of mainstream support, five years of extended support and an optional six more years of support through Microsoft’s Premium Assurance program. This means Windows Server with Desktop Experience and the long-term version of Server Core receive up to 16 years of support.

That Long-Term Servicing Channel timeline applies to the Windows Server 2016 RTM, which rolled out in October 2016. It gets mainstream support until January 2022, with extended support until January 2027. With Premium Assurance, the support lifecycle for Windows Server 2016 finishes in 2033.

Semi-Annual Channel: Windows Server products in this release, such as Server Core and Nano Server, receive support for 18 months. Microsoft anticipates the shorter support window will be adequate, since new rollouts will arrive twice a year. This short-term channel is designed for organizations that want to change OSes quickly, such as cloud-ready data centers and IT infrastructures. Businesses that use the Semi-Annual Channel will likely update the OS long before the support expiration date.

Microsoft made the Windows Server 2016 Semi-Annual Channel fall release, known as Windows Server 2016 version 1709, available in October 2017. This means mainstream support ends in April 2019.

Security updates: Microsoft switched Windows Server’s servicing model to a cumulative approach in October 2016. Admins used to choose to install or uninstall individual updates, but now there is a single monthly rollup with all the fixes. Each new monthly update — released on the second Tuesday of each month, known as Patch Tuesday — supersedes the previous updates.

This all-or-nothing approach ensures admins can’t use a fragmented patching regimen. However, the IT staff must take a more thorough testing approach before the update rolls out to production.

As part of the Semi-Annual Channel release, Microsoft changed Nano Server to a container base image model in Windows Server 2016 version 1709. Because Nano Server has no servicing stack, the admin updates the OS through a deployment of the latest build of the runtime image via Docker.
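
As a rough sketch of that workflow, the example below uses the Docker SDK for Python to pull a newer Nano Server base image and rebuild an application image on top of it; the repository name, tag and Dockerfile are illustrative assumptions, not an official servicing procedure.

```python
# A rough sketch of the container-based servicing model with the Docker SDK for
# Python (pip install docker): rather than patching Nano Server in place, pull the
# newer base image and rebuild the application image on top of it.
# Repository name, tag and Dockerfile location are illustrative assumptions.
import docker

client = docker.from_env()

# Pull the updated Nano Server base image for the current Semi-Annual Channel release.
client.images.pull("mcr.microsoft.com/windows/nanoserver", tag="1709")

# Rebuild the application image; the Dockerfile is assumed to begin with
# "FROM mcr.microsoft.com/windows/nanoserver:1709".
image, build_log = client.images.build(path=".", tag="myapp:latest")
print(image.id)
```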

How do admins test features before a channel release?

Microsoft’s Semi-Annual Channel poses serious challenges for IT professionals. A business accustomed to the Long-Term Servicing Channel release cadence has years to decide whether to adopt a newer version. That is not the case with short-term cycles of every six months.

Organizations that subscribe to the Semi-Annual Channel require a more stringent testing approach before each new Windows Server release moves to production. The 18-month support cycle leaves little latitude to delay subsequent rollouts.

The Windows Insider Program enables IT pros to test and comment on the new features and functionality of an upcoming Semi-Annual Channel release. This helps admins try preview builds in their labs to check performance, verify compatibility with applications and tools and plan for any infrastructure changes, such as more storage or additional network bandwidth.

Azure IoT Hub Device Provisioning Service is now in public preview – Internet of Things

Setting up and managing Internet of Things (IoT) devices can be a challenge of the first order for many businesses. That’s because provisioning entails a lot of manual work, technical know-how, and staff resources. And certain security requirements, such as registering devices with the IoT hub, can further complicate provisioning.

During the initial implementation, for instance, businesses have to create unique device identities that are registered to the IoT hub and install individual device connection credentials, which enable revocation of access in event of compromise. IT staff also may want to maintain an enrollment list that controls what devices are allowed to automatically provision.

Wouldn’t it be great if there were a secure, automated way to remotely deploy and configure devices during registration to the IoT hub—and throughout their lifecycles? With Microsoft’s IoT Hub Device Provisioning Service (DPS), now in public preview, there is.

In a post on the Azure blog, Sam George explains how the IoT Hub Device Provisioning Service can provide zero-touch provisioning that eliminates configuration and provisioning hassles when onboarding IoT devices that connect to Azure services. This allows businesses to quickly and accurately provision millions of devices in a secure and scalable manner. In fact, IoT Hub Device Provisioning Service simplifies the entire device lifecycle through features that enable secure device management and reprovisioning. Next year, we plan to add support for ownership transfer and end-of-life management.
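
To illustrate what zero-touch provisioning looks like from the device side, here is a minimal sketch using the Azure IoT Python SDK (which arrived after this announcement); the ID scope, registration ID and key are placeholder values.

```python
# An illustrative sketch of device-side provisioning through DPS with the Azure IoT
# Python SDK (pip install azure-iot-device): the device presents a symmetric key to
# DPS and learns which IoT hub it has been assigned to.
# The ID scope, registration ID and key are placeholders.
from azure.iot.device import ProvisioningDeviceClient

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="my-device-001",
    id_scope="0ne00XXXXXX",
    symmetric_key="base64-encoded-device-key",
)

result = provisioning_client.register()  # contacts DPS and waits for hub assignment
print(result.status, result.registration_state.assigned_hub)
```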

DPS is now available in the Eastern U.S., Western Europe, and Southeast Asia. To learn more about how Azure IoT Hub Device Provisioning Service can take the pain out of deploying and managing an IoT solution in a secure, reliable way, read our blog post announcing the public preview. And for technical details, check out Microsoft’s DPS documentation center.

Tags: Announcement, Azure IoT Hub, Device Provisioning Service