For Sale – Lenovo ThinkCentre M93p/3.5 GHz/Windows 10 Pro

The ThinkCentre M93 boasts the latest processors and innovative productivity features; the M93p adds Intel® vPro™ to optimize remote manageability.

Intel Core i5-4690K, 3.5 GHz quad-core CPU (the “K” suffix means it is overclockable), max turbo frequency 3.9 GHz

This is a very fast, capable PC. It’s actually a business workstation, described by Lenovo as cutting-edge computing for the large enterprise.

Wanted – High-end Z370 or Z390 Motherboard

Hi, I have an i7 8700K in an MSI Z370 Gaming Plus, which has poor onboard audio, so I fitted a Xonar sound card. Trouble is, I bought a Sharkoon REV200 case, which rotates the motherboard so the GPU is on show, meaning I can’t fit a sound card. I’m looking for a high-end LGA 1151 motherboard as cheap as I can get it; I want good sound and RGB control. Let me know what you have and how much.
Cheers,
Daz

Location: Bolton

Wanted – LGA 1150 Motherboard

I have an MSI mini-ITX board (MSI H97 AC), which I am in the process of removing from my small PC.
I have the box and most of the gubbins that came with it.

One of the tabs used to release the RAM is broken, but it does not stop the RAM from being removed or re-seated.
One of the Wi-Fi antennas may be missing – I’ll have to check the other box, as I have two of these PCs.
How does £35 inc delivery sound?

IT pros clamor for Kubernetes multi-cloud deployment API

Enterprises are watching the development of the Kubernetes Cluster API project, which they hope will evolve into a declarative multi-cloud deployment standard for container infrastructure.

With a declarative API, developers can describe the desired outcome and the system handles the rest. Kubernetes today requires users to deploy a series of such APIs separately for each cloud provider and on-premises IT environment. This makes it difficult to take a cohesive, consistent approach to spinning up multiple clusters, especially in multi-cloud environments. Existing Kubernetes deployment procedures may also offer so many configuration options that it’s easy for end users to overcomplicate installations.

Enterprises that have taken a declarative, also known as immutable, approach to other layers of the IT infrastructure as they adopt DevOps want to enforce the same kind of simple, repeatable standards for Kubernetes clusters through a standard declarative API. Some IT shops have struggled and failed to implement their own APIs for those purposes, and say the community effort around Kubernetes Cluster API has better potential to achieve those goals than their individual projects.

One such company, German IT services provider Giant Swarm, created its own Kubernetes deployment API in 2017 to automate operations for more than 200 container clusters it manages for customers in multiple public clouds. It used a central Kubernetes management cluster fronted by the RESTful API to connect to Kubernetes Operators within each workload cluster. Eventually, though, Giant Swarm found that system too difficult to maintain as Kubernetes and cloud infrastructures continually changed.

“Managing an additional REST API is cumbersome, especially since users have to learn a new [interface],” said Marcel Müller, platform engineer at Giant Swarm, in an online presentation at a virtual IT conference held by API platform vendor Kong last month. “We had to restructure our API quite often, and sometimes we didn’t have the resources or knowledge to make the right long-term [architectural] decisions.”

[Figure: Kubernetes Cluster API architecture]

Switching between cloud providers proved especially confusing and painful for users, since tooling is not transferable between them, Müller said.

“The conclusion we got to by early 2019 was that community collaboration would be really nice here,” he said. “A Kubernetes [special interest group] would take care of leading this development and ensuring it’s going in the correct direction — thankfully, this had already happened because others faced similar issues and come to the same conclusion.”

That special interest group (SIG), SIG-Cluster-Lifecycle, was formed in late 2017, and created Cluster API as a means to standardize Kubernetes deployments in multiple infrastructures. That project issued its first alpha release in March 2019, as Müller and his team grew frustrated with their internal project, and Giant Swarm began to track its progress as a potential replacement.

Cluster API installs Kubernetes across clouds using MachineSets, which are similar to the Kubernetes ReplicaSets Giant Swarm already uses. Users can also manage Cluster API through the familiar kubectl command line interface, rather than learning to use a separate RESTful API. 
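
As a rough sketch of what that looks like in day-to-day use (assuming the Cluster API components and an infrastructure provider are already installed in a management cluster; the MachineDeployment name below is hypothetical), workload clusters and their machine pools show up as ordinary Kubernetes objects that can be inspected and scaled with kubectl:

kubectl get clusters.cluster.x-k8s.io   # Cluster API "Cluster" custom resources
kubectl get machinedeployments          # declarative machine pools, similar to Deployments/ReplicaSets
kubectl scale machinedeployment my-workers --replicas=5   # grow a hypothetical worker pool declaratively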

Still, the project remains in an early alpha phase, according to its GitHub page, and is therefore changing rapidly; as an experimental project, it isn’t necessarily suited for production use yet. Giant Swarm will also need to transition gradually to Cluster API to ensure the stability of its Kubernetes environment, Müller said.

Cluster API bridges Kubernetes multi-cloud gap

Cluster API is an open source alternative to the centralized Kubernetes control planes offered by several IT vendors, such as Red Hat OpenShift, Rancher and VMware Tanzu. Some enterprises may prefer to let a vendor tackle the API integration problem and leave support to the vendor as well. In either case, the underlying problem is the same: as enterprise deployments expand and mature, they need to control and automate multiple Kubernetes clusters in multi-cloud environments.

For some users, multiple clusters are necessary to keep workloads portable across multiple infrastructure providers; others prefer to manage multiple clusters rather than deal with challenges that can emerge in Kubernetes networking and multi-tenant security at large scale. The core Kubernetes framework does not address this.

“[Users] need a ‘meta control plane’ because one doesn’t just run a single Kubernetes cluster,” said John Mitchell, an independent digital transformation consultant in San Francisco. “You end up needing to run multiple [clusters] for various reasons, so you need to be able to control and automate that.”

Before vendor products and Cluster API emerged, many early container adopters created their own tools similar to Giant Swarm’s internal API. In Mitchell’s previous role at SAP Ariba, the company created a project called Cobalt to build, deploy and operate application code on bare metal, AWS, Google Cloud and Kubernetes.

Mitchell isn’t yet convinced that Cluster API will be the winning approach for the rest of the industry, but it’s at least in the running.

“Somebody in the Kubernetes ecosystem will muddle their way to something that mostly works,” he said. “It might be Cluster API.”

SAP’s Concur Technologies subsidiary, meanwhile, created Scipian to watch for changes in Kubernetes custom resource definitions (CRDs) made as apps are updated. Scipian then launches Terraform jobs to automatically create, update and destroy Kubernetes infrastructure in response to those changes, so that Concur ops staff don’t have to manage those tasks manually. Scipian’s Terraform modules work well, but Cluster API might be a simpler mechanism once it’s integrated into the tool, said Dale Ragan, principal software design engineer at the expense management SaaS provider based in Bellevue, Wash.

“Terraform is very amenable to whatever you need it to do,” Ragan said. “But it can be almost too flexible for somebody without in-depth knowledge around infrastructure — you can create a network, for example, but did you create it in a secure way?”

With Cluster API, Ragan’s team may be able to enforce Kubernetes deployment standards more easily, without requiring users to have a background in the underlying toolset.

“We created a Terraform controller so we can run existing modules using kubectl [with Cluster API],” Ragan said. “As we progress further, we’re going to use CRDs to replace those modules … as a way to create infrastructure in ‘T-shirt sizes’ instead of talking about [technical details].”

What’s the biggest cybersecurity threat in 2020? Experts weigh in

Every day, CISOs must decide which cyberthreats to prioritize in their organizations. When it comes to choosing which threats are the most concerning, the list to choose from is nearly boundless.

At RSA Conference 2020, speakers discussed several of the most concerning threats this year, from ransomware and election hacking to supply chain attacks and beyond. To dig deeper into the topic, SearchSecurity asked several experts at the conference what they considered to be the biggest cybersecurity threat today.

“It has to be ransomware,” CrowdStrike CTO Mike Sentonas said. “It may not be the most complex attack, but what organizations are facing around the world is a huge increase in e-crime activity, specifically around the use of ransomware. The rise over the last twelve months has been incredible, simply because of the amount of money there is to be made.”

Trend Micro vice president of cybersecurity Greg Young agreed.

“It has to be ransomware, definitely. Quick money. We’ve certainly seen a change of focus where the people who are least able to defend themselves, state and local governments, particularly in some of the poorer areas, budgets are low and the bad guys focus on that,” he said. “The other thing is I think there’s much more technological capability than there used to be. There’s fewer toolkits and fewer flavors of attacks but they’re hitting more people and they’re much more effective, so I think there’s much more efficiency and effectiveness with what the bad guys are doing now.”

Sentonas added that he expects the trend of ransomware to continue.

“We’ve seen different ransomware groups or e-crime groups that are delivering ransomware have campaigns that have generated over $5 million, we’ve seen campaigns that have generated over $10 million. So with so much money to be made, in many ways, I don’t like saying it, but in many ways it’s easy for them to do it. So that’s driving the huge increase and focus on ransomware. I think, certainly for the next 12 to 24 months, this trend will continue. The rise of ransomware is showing no signs it’s going to slow down,” Sentonas explained.

“Easy” might just be the key word here. The biggest threat to cybersecurity, according to BitSight vice president of communications and government affairs Jake Olcott, is that companies “are still struggling with doing the basics” when it comes to cybersecurity hygiene.

“Look at all the major examples — Equifax, Baltimore, the list could go on — where it was not the case of a sophisticated adversary targeting an organization with a zero-day malware that no one had seen before. It might have been an adversary targeting an organization with malware that was just exploiting known vulnerabilities. I think the big challenge a lot of companies have is just doing the basics,” Olcott said.

Lastly, Akamai CTO Patrick Sullivan said the biggest threat in cybersecurity is the threat to the supply chain, as highlighted during Huawei’s panel discussion at RSAC.

“The big trend is people are looking at their supply chain,” he said. “Like, what is the risk to the third parties you’re partnering with, to the code you’re developing with partners, so I think it’s about looking beyond that first circle to the second circle of your supply chain and your business partners.”

Intel CSME flaw deemed ‘unfixable’ by Positive Technologies

An underlying flaw in Intel chipsets, originally disclosed in May 2019, was recently found by Positive Technologies to be far worse than previously reported.

Researchers from the vulnerability management vendor discovered that a bug in the read-only memory of the Intel Converged Security and Management Engine (CSME) could allow threat actors to compromise platform encryption keys and steal sensitive information. The Intel CSME vulnerability, known as CVE-2019-0090, is present in both the hardware and the firmware of the boot ROM and affects all chips other than Intel’s 10th-generation “Ice Point” processors.

“We started researching the Intel CSME IOMMU [input-output memory management unit] in 2018,” Mark Ermolov, lead specialist of OS and hardware security at Positive Technologies, said via email. “We’ve been interested in that topic especially because we’ve known that Intel CSME shares its static operative memory with the host (main CPU) on some platforms. Studying the IOMMU mechanisms, we were very surprised that two main mechanisms of CSME and IOMMU are turned off by default. Next, we started researching Intel CSME boot ROM’s firmware to ascertain when CSME turns on the IOMMU mechanisms, and we found that there is a very big bug: the IOMMU is activated too late, after x86 paging structures were created and initialized, a problem we found in October.”

The bug, which Positive Technologies calls “unfixable,” can be found in most Intel chipsets released in the last five years.

“Intel CSME is responsible for initial authentication of Intel-based systems by loading and verifying all other firmware for modern platforms,” Ermolov said. “It is the cryptographic basis for hardware security technologies developed by Intel and used everywhere, such as DRM, fTPM [firmware Trusted Platform Module] and Intel Identity Protection. The main concern is that, because this vulnerability allows a compromise at the hardware level, it destroys the chain of trust for the platform as a whole.”

Although Intel has issued patches and mitigations that complicate the attack, Positive Technologies said fully patching the flaw is impossible because firmware updates can’t fully address all of the vectors.

“In the CVE-2019-0090 patch, Intel blocked ISH [Integrated Sensors Hub], so now it can’t issue DMA transactions to CSME. But we’re convinced there are other exploitation vectors and they will be found soon. To exploit a system that has not been patched for CVE-2019-0090, an attacker doesn’t need to be very sophisticated,” Ermolov said.

In addition, Positive Technologies said extracting the chipset key is impossible to detect.

“The chipset key being leaked can’t be detected by CSME or by the main OS,” Ermolov said. “You’re already in danger, but you don’t know it. The attack (by DMA) also doesn’t leave any footprint. When an attacker uses the key to compromise the machine’s identity, this might be detected by you and you only, but only after it’s happened when it is too late.”

Once they’ve breached the system, threat actors can exploit this vulnerability in several ways, according to Positive Technologies.

“With the chipset key, attackers can pass off an attacker computer as the victims’ computer. They can gain remote certification into companies to access digital content usually under license (such as videos or films from companies like Netflix),” the company said via email. “They can steal temporary passwords to embezzle money. They can pose as a legitimate point-of-sale payment terminal to charge funds to their own accounts. Abusing this vulnerability, criminals can even spy on companies for industrial espionage or steal sensitive data from customers.”

Positive Technologies recommended disabling Intel CSME-based encryption or completely replacing CPUs with the latest generation of Intel chips.

This is the second vulnerability disclosed regarding Intel chips since January, when computer science researchers discovered a speculative execution attack that leaks data from an assortment of Intel processors released before the fourth quarter of 2018.

For Sale – or Trade (eGPU enclosure): Dell 3020M USFF (4150T, 8GB, SSD, WiFi)

Very compact PC that has been used mainly as an HTPC. Intel 4150T, 8GB RAM, Wi-Fi, 128GB SSD, Windows 10.
Will consider a trade for a graphics card enclosure as long as it’s TB3 compatible; cash your way depending on the model.

Location: Bristol
Price and currency: 120
Delivery cost included: Delivery is included
Prefer goods collected?: I have no preference
Advertised elsewhere?: Advertised elsewhere
Payment method: BT

How to create and deploy a VMware VM template

A VMware VM template — also known as a golden image — is a perfect copy of a VM from which you can deploy identical VMs. Templates include a VM’s virtual disks and settings, and they can not only save users time but help them avoid errors when configuring new Windows and Linux VMs.

VM templates enable VMware admins to create exact copies of VMs for cloning, converting and deploying. They can be used to simplify configuration and ensure the standardization of VMs throughout your entire ecosystem. Templates can also be used as long-term backups of VMs. However, you can’t operate a VM template without converting it back to a standard VM.
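
For instance, in PowerCLI (a minimal sketch; the VM, template, datastore and folder names here are placeholders), a template can be created from an existing VM and converted back to a standard VM when it needs maintenance:

# Create a template from an existing VM (clone the VM first if you need to keep the original running)
New-Template -VM (Get-VM -Name 'GoldWin2016') -Name 'Win2016-Template' -Datastore 'VMDatastore' -Location 'Templates'

# Convert the template back to a standard VM, for example to patch it, then convert it to a template again
Set-Template -Template (Get-Template -Name 'Win2016-Template') -ToVM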

vSphere templates can be accessed through your content library. The content library wizard walks you through configuration steps, such as publishing and optimizing templates; it also designates roles and privileges that you can assign to users and simplifies VM deployment.
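
As a hedged sketch (assuming a recent PowerCLI release that includes the content library cmdlets; the item, host and datastore names are hypothetical), a template stored as a library item can be deployed straight from the library:

$item = Get-ContentLibraryItem -Name 'Win2016-Template'
New-VM -Name 'VMFromLibrary' -ContentLibraryItem $item -VMHost 'ESXiHost' -Datastore 'VMDatastore'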

Best practices for Hyper-V templates

You can create and deploy VM templates in Hyper-V as well. Hyper-V templates enable users to deploy VMs quickly and with greater security, such as with shielded VMs, and to reduce network congestion. They rely on System Center Virtual Machine Manager (SCVMM) and require specific configurations.

To create a Hyper-V template, select a base object from which to create the template: an existing VM template, a virtual hard disk or a VM. Assign a name to the new template and configure the virtual hardware and operating system settings the deployed VMs will use.
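
As a rough sketch of that flow with the VMM PowerShell module (the cmdlet and parameter names assume the Virtual Machine Manager module and can vary by VMM version; the VM and template names are hypothetical):

Import-Module virtualmachinemanager
$vm = Get-SCVirtualMachine -Name 'BaseWin2019'
$share = Get-SCLibraryShare | Select-Object -First 1
# Creating a VMM template from a VM consumes the source VM, so work from a copy you can afford to lose
New-SCVMTemplate -Name 'Win2019-Template' -VM $vm -LibraryServer $share.LibraryServer -SharePath $share.Path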

Keep in mind that not every VM is a viable template candidate. If your system partition is not the same as your Windows partition, you won’t be able to use that VM as a template source.

To create a shielded VM — one that protects against a compromised host — run the Shielded Template Disk Creation Wizard. Specify your required settings in the wizard and click Generate to produce the template disk, then copy that disk to your template library. The disk should appear in your content library with a small shield icon, which signifies that it has shielded technology.

How to create a VMware VM template with Packer

Packer is a free tool that can help you automate vSphere template creation and management. It features multiple builders optimized for VMware Fusion, Workstation Pro or Workstation Player. The vmware-iso Packer plugin builder supports using a remote ESXi server to build a template, and the vsphere-iso plugin helps you connect to a vCenter environment and build on any host in a cluster.

When you use Packer to make a VM template, you use two main file types. The JSON file makes up the template, and the autounattend.xml file automates Windows installation on your VM. Once your scripts, JSON file and autounattend file are ready, you can build a VM template in Packer. When the build is complete, Packer converts the VM to a template that you can view and deploy through PowerCLI.
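
For example (the file names here are purely illustrative), the build is typically validated and then run from the command line, after which the finished template appears in vCenter:

packer validate windows2016.json
packer build -var-file=variables.json windows2016.json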

Use PowerCLI to deploy a template

You can use PowerCLI to deploy new VMs from a template. Create an OS customization specification through PowerCLI to start the deployment process and to ensure that when you create your VMs from a template, you can still change certain settings to make them unique. These settings would include organization name, security identifier, local administrator password, Active Directory domain, time zone, domain credentials, Windows product key and AutoLogonCount registry key. The PowerCLI cmdlet might resemble the following:

New-OSCustomizationSpec -Name 'WindowsServer2016' -FullName 'TestName' -OrgName 'MyCompany' -OSType Windows -ChangeSid -AdminPassword (Read-Host -AsSecureString) -Domain 'NTDOMAIN' -TimeZone 035 -DomainCredentials (Get-Credential) -ProductKey '5555-7777-3333-2222' -AutoLogonCount 1

After your OS is customized, you can easily deploy a VM from a template or multiple VMs from the same template. Start by placing the OS customization specifications into the variable $Specs.

$Specs = Get-OSCustomizationSpec -Name 'WindowsServer2016'

Then, use the VM template in the variable $Template.

$Template = Get-Template -Name 'Windows2016Template'

Finish by deploying your VM using the New-VM cmdlet and piping in your template and OS specifications.

New-VM -Name 'Windows16VM' -Template $Template -OSCustomizationSpec $Specs -VMHost 'ESXiHost' -Datastore 'VMDatastore'

Troubleshoot VM templates

There are a few common mistakes in VM template creation and deployment that you’ll want to avoid.

Creating a VMware template directly from a VM ends up destroying that VM, so always create a clone of a VM before making a template from it. Even if you create a VM solely to become a template, template creation could fail and destroy your VM. A common reason for template creation failure is trying to create a template from a Linux VM: the template creation process wants to Sysprep the VM, but Sysprep is designed for Windows OSes.
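
A minimal PowerCLI sketch of that “clone first” habit (the VM, host and datastore names are placeholders):

# Clone the source VM, then build the template from the clone so the original survives a failed run
$clone = New-VM -Name 'GoldWin2016-Clone' -VM (Get-VM -Name 'GoldWin2016') -VMHost 'ESXiHost' -Datastore 'VMDatastore'
New-Template -VM $clone -Name 'Win2016-Template' -Datastore 'VMDatastore'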

You also need to ensure that the model VM you want to turn into a template isn’t domain-joined. Joining a VM to an Active Directory domain can cause the system to create a computer account for the VM, which then leaves that computer account orphaned during the template creation process. To work around this issue, have the template itself handle the domain join and secure the library share in a way that prevents anyone other than VM admins from having access.

Finally, don’t include any preinstalled applications on a VM template. The Sysprep process often breaks such applications. You can instead use an application profile or configure a VM template to run a script for automated application installation.
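
If you go the script route, one hedged option (it assumes VMware Tools is running in the guest and that the Chocolatey package manager is available there; the VM name, package and credentials are placeholders) is to push the install through PowerCLI after deployment:

# Run an automated application install inside the freshly deployed VM
Invoke-VMScript -VM 'Windows16VM' -ScriptText 'choco install -y 7zip' -GuestCredential (Get-Credential)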
