For Sale – 12-Card 1080 Ti Asus Strix (or separate cards)

Selling my 12-card 1080 Ti Strix OC 11GB mining rig (or separate cards).

It has been mining in an air-conditioned tent with ventilation since January.

I’ve been very careful to keep all cards at around 60–65°C by using only a light overclock.

I was too paranoid about damaging the cards to push them any harder.

It has three Antec 1300W Platinum power supplies, which cost me £280 each.

ASRock BTC H110 Pro
120GB SanDisk SSD
8GB RAM
6x external fans
The mining rig frame is currently £40 on eBay.

I can throw in a screen, mouse and keyboard if you want them.

All risers tested and working, obviously.

There is still plenty of warranty left on the GPUs, and I can help with creating a support ticket if anything happens to any of them.

This is too expensive to send in the post, so I will hand-deliver it anywhere in the UK for a small cost, probably around £20.

Price and currency: £5,000
Delivery: Goods must be exchanged in person
Payment method: Bank transfer or Cash
Location: Redhill
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check that the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.


Manage Hyper-V containers and VMs with these best practices

Containers and VMs should be treated as the separate instance types they are, but some management strategies work for both, and admins should incorporate them.


Containers and VMs are best suited to different workload types, so it makes sense that IT administrators would use both in their virtual environments, but that adds another layer of complexity to consider.

One of the most notable features introduced in Windows Server 2016 was support for containers. At the time, it seemed that the world was rapidly transitioning away from VMs in favor of containers, so Microsoft had little choice but to add container support to its flagship OS.

Today, organizations use both containers and VMs. But for admins who use a mixture, what’s the best way to manage Hyper-V containers and VMs?

To understand the management challenges of supporting both containers and VMs, admins need to understand a bit about how Windows Server 2016 works. From a VM standpoint, Windows Server 2016 Hyper-V isn’t that different from the version of Hyper-V included with Windows Server 2012 R2. Microsoft introduced a few new features, as with every new release, but the tools and techniques used to create and manage VMs were largely unchanged.

In addition to being able to host VMs, Windows Server 2016 includes native support for two different types of containers: Windows Server containers and Hyper-V containers. Windows Server containers and the container host share the same kernel. Hyper-V containers differ from Windows Server containers in that Hyper-V containers run inside a special-purpose VM. This enables kernel-level isolation between containers and the container host.
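The distinction shows up directly in the Docker CLI on a Windows container host: the isolation level is chosen per container at run time. A minimal sketch, assuming Docker is installed and an image tag that matches the host OS version:

```shell
# Windows Server container: shares the host's kernel (process isolation)
docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver

# Hyper-V container: runs inside a lightweight utility VM with its own kernel
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver
```

On Windows Server, process isolation is the default; on Windows 10 client hosts, Hyper-V isolation is.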

Hyper-V management

When Microsoft created Hyper-V containers, it faced something of a quandary with regard to the management interface.

The primary tool for managing Hyper-V VMs is Hyper-V Manager — although PowerShell and System Center Virtual Machine Manager (SCVMM) are also viable management tools. This has been the case ever since the days of Windows Server 2008. Conversely, admins in the open source world used containers long before they ever showed up in Windows, and the Docker command-line interface has become a standard for container management.

Ultimately, Microsoft chose to support Hyper-V Manager as a tool for managing Hyper-V hosts and Hyper-V VMs, but not containers. Likewise, Microsoft chose to support the use of Docker commands for container management.

Management best practices

Although Hyper-V containers and VMs both use the Hyper-V virtualization engine, admins should treat containers and VMs as two completely different types of resources. While it’s possible to manage Hyper-V containers and VMs through PowerShell, most Hyper-V admins seem to prefer using a GUI-based management tool for managing Hyper-V VMs. Native GUI tools, such as Hyper-V Manager and SCVMM, don’t support container management.
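In practice, that means running two toolchains side by side on the same host. A sketch, assuming both the Hyper-V PowerShell module and Docker are installed:

```shell
# VMs: the Hyper-V PowerShell module
Get-VM | Where-Object State -eq 'Running' | Select-Object Name, MemoryAssigned, Uptime

# Containers: the Docker CLI
docker ps --format "{{.Names}}: {{.Status}}"
```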

Admins who wish to manage their containers through a GUI should consider using one of the many interfaces that are available for Docker. Kitematic is probably the best-known of these interfaces, but there are third-party GUI interfaces for containers that arguably provide a better overall experience.

For example, Datadog offers a dashboard for monitoring Docker containers. Another particularly nice GUI interface for Docker containers is DockStation.

Those who prefer an open source platform should check out the Docker Monitoring Project. This monitoring platform is based on the Kubernetes dashboard, but it has been adapted to work directly with Docker.

As admins work to figure out the best way to manage Hyper-V containers and VMs, it’s important for them to remember that both depend on an underlying host. Although Microsoft doesn’t provide any native GUI tools for managing VMs and containers side by side, admins can use SCVMM to manage all manner of Hyper-V hosts, regardless of whether those servers are hosting Hyper-V VMs or Hyper-V containers.

Admins who have never worked with containers before should spend some time experimenting with them in a lab environment before attempting to deploy them in production. Although Hyper-V containers are based on the Hyper-V engine, creating and managing containers is nothing like setting up and running Hyper-V VMs. A great way to get started is to install containers on Windows 10.
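Setting up such a lab on Windows 10 can be sketched as follows, run from an elevated PowerShell prompt (the image name below is illustrative; use whatever base image matches your build):

```shell
# Enable Hyper-V and the Containers feature (Windows 10 Pro/Enterprise)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Enable-WindowsOptionalFeature -Online -FeatureName Containers

# After a reboot, install Docker for Windows, switch it to Windows containers,
# then verify with a throwaway Hyper-V container:
docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hello
```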

Wanted – Motherboard (LGA1151) & 1151/Skylake CPU

Wanting to purchase a Skylake bundle, or a motherboard and CPU separately. Thinking of an i5-6600K or better, and a board with an LGA1151 socket. If DDR4 RAM were also included, that would be perfect.

Location: North Lincolnshire


Panasas storage, director blades split into separate devices

Panasas has revamped its scale-out NAS, adding a separate hardware appliance to disaggregate ActiveStor director blades from its hybrid arrays of the same name.

The Panasas storage rollout encompasses two interrelated products with different launch dates. The ActiveStor Hybrid 100 (ASH-100), the latest generation of Panasas’ hybrid storage, is due for general availability in December. The ASH-100 uses solid-state drives to accelerate metadata requests.

The new product entry is the ActiveStor Director 100 (ASD-100), a control-plane engine that sits atop a rack of ActiveStor arrays. ASD-100 director blade appliances are scheduled for release by March 2018, in tandem with its PanFS 7.0 parallel file system.

The ASH-100 array and ASD-100 blade appliance are compatible with ActiveStor AS18 and AS20 systems. Until now, Panasas integrated director blades in a dedicated slot on the 11-slot array chassis.

Addison Snell, CEO of IT analyst firm Intersect360 in Sunnyvale, Calif., said adding a separate metadata server allows Panasas to expand on its PanFS parallel file system.

“The reason this is important is that different levels of workloads will require different levels of performance,” Snell said. “Panasas lets you right-size your metadata performance to your application. Enterprise storage increasingly is migrating to different things that are classified as high-performance workloads, beyond the traditional uses. You’ve got big data, AI and machine learning starting to take off. The attention has turned to ‘How do I achieve reliable performance at scale so that I can tailor to my individual workload?'”

The revamp improves performance of high-performance computing and hyperscale workloads, especially seeking and opening lots of small files, said Dale Brantley, a Panasas director of systems engineering.

“This is a disaggregated director appliance that lets you unlock the full functionality of the software contained within. You will be able to cache millions or tens of millions of entries in the Director’s memory, rather than doing memory thrashing,” Brantley said.

“These products together allow us to tailor the environment more for specific workloads. Our customers are using more small-file workloads. This is just one more workload that the HPC cluster has to support. This will be a foundational platform for our next-generation systems.”

Panasas' storage stack
The Panasas ASD-100 director blade sits atop the vendor’s ActiveStor Hybrid storage, allowing customers to scale them separately.

Panasas storage protocol reworks memory allocation for streaming

ASH-100 uses a system-on-a-chip CPU design based on an Intel Atom C2558 processor. The 2U Panasas storage array tops out at 57 TB of raw capacity with 200 populated shelves. A shelf scales to 264 TB of disk storage and 21 TB of flash.

All I/O requests are buffered in RAM. Each ASH-100 blade includes a built-in 16 GB DDR3 RAM card to speed client requests. A new feature is the ability to independently scale HDDs and SSDs of varying capacities in the ASH-100 box.

Brantley said changes to the Linux kernel in recent years have hindered the streaming capability of large file systems. To compensate, Panasas wrote code that enables its DirectFlow parallel file system protocol in PanFS to enhance read-ahead techniques and boost throughput.

The ASD-100 Director appliance is a 2U four-node chassis with 96 GB of DDR4 nonvolatile dual-inline memory modules (NVDIMM) to protect metadata transactions. Previous ActiveStor blades used an onboard battery to back up DRAM as persistent cache for the metadata logs.

Brantley said Panasas storage engineers wrote a SNIA-compatible NVDIMM driver that they will share with the FreeBSD operating system community. The FreeBSD updates are slated for PanFS 7.0, along with an improved NFS server implementation, a dynamic GUI and aids for implementing NFS on Linux servers.