I have an MSI mini-ITX board (MSI H97 AC) which I am in the process of removing from my small PC. I have the box and most of the gubbins that came with it.
One of the tabs to remove the RAM is broken, but it does not stop the RAM being removed or re-seated. One of the antennas for Wi-Fi may be missing – I'll have to check the other box as I have 2 of these PCs. How does £35 inc delivery sound?
Enterprises are watching the development of the Kubernetes Cluster API project, which they hope will evolve into a declarative multi-cloud deployment standard for container infrastructure.
With a declarative API, developers can describe the desired outcome and the system handles the rest. Kubernetes today requires users to deploy a series of such APIs separately for each cloud provider and on-premises IT environment. This makes it difficult to take a cohesive, consistent approach to spinning up multiple clusters, especially in multi-cloud environments. Existing Kubernetes deployment procedures may also offer so many configuration options that it’s easy for end users to overcomplicate installations.
Enterprises that have taken a declarative, also known as immutable, approach to other layers of the IT infrastructure as they adopt DevOps want to enforce the same kind of simple, repeatable standards for Kubernetes clusters through a standard declarative API. Some IT shops have struggled and failed to implement their own APIs for those purposes, and say the community effort around Kubernetes Cluster API has better potential to achieve those goals than their individual projects.
One such company, German IT services provider Giant Swarm, created its own Kubernetes deployment API in 2017 to automate operations for more than 200 container clusters it manages for customers in multiple public clouds. It used a central Kubernetes management cluster fronted by the RESTful API to connect to Kubernetes Operators within each workload cluster. Eventually, though, Giant Swarm found that system too difficult to maintain as Kubernetes and cloud infrastructures continually changed.
“Managing an additional REST API is cumbersome, especially since users have to learn a new [interface],” said Marcel Müller, platform engineer at Giant Swarm, in an online presentation at a virtual IT conference held by API platform vendor Kong last month. “We had to restructure our API quite often, and sometimes we didn’t have the resources or knowledge to make the right long-term [architectural] decisions.”
Switching between cloud providers proved especially confusing and painful for users, since tooling is not transferable between them, Müller said.
“The conclusion we got to by early 2019 was that community collaboration would be really nice here,” he said. “A Kubernetes [special interest group] would take care of leading this development and ensuring it’s going in the correct direction — thankfully, this had already happened because others faced similar issues and had come to the same conclusion.”
That special interest group (SIG), SIG-Cluster-Lifecycle, was formed in late 2017, and created Cluster API as a means to standardize Kubernetes deployments in multiple infrastructures. That project issued its first alpha release in March 2019, as Müller and his team grew frustrated with their internal project, and Giant Swarm began to track its progress as a potential replacement.
Cluster API installs Kubernetes across clouds using MachineSets, which are similar to the Kubernetes ReplicaSets Giant Swarm already uses. Users can also manage Cluster API through the familiar kubectl command line interface, rather than learning to use a separate RESTful API.
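To illustrate the declarative model, a Cluster API deployment is described in ordinary Kubernetes manifests. The sketch below is hypothetical — the names, the API version and the AWS provider kind are assumptions for illustration, not details from this article — but it shows the general shape of a Cluster object paired with a MachineDeployment, managed with the same kubectl workflow as any other Kubernetes resource:

```yaml
# Hypothetical Cluster API manifest; exact API versions and provider
# kinds vary by Cluster API release and infrastructure provider.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo-cluster
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster        # swap for another provider's kind to change clouds
    name: demo-cluster
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: demo-workers
spec:
  clusterName: demo-cluster
  replicas: 3               # desired worker count, reconciled like a ReplicaSet
  template:
    spec:
      clusterName: demo-cluster
      version: v1.17.3      # Kubernetes version for the worker nodes
```

Applying the file with `kubectl apply -f cluster.yaml`, or scaling workers with `kubectl scale machinedeployment demo-workers --replicas=5`, keeps the operator's workflow consistent regardless of the underlying cloud.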
The project remains in an early alpha phase, according to its GitHub page, and is therefore changing rapidly; as an experimental project, it isn’t necessarily suited for production use yet. Giant Swarm will also need to transition gradually to Cluster API to ensure the stability of its Kubernetes environment, Müller said.
Cluster API bridges Kubernetes multi-cloud gap
Cluster API is an open source alternative to centralized Kubernetes control planes also offered by several IT vendors, such as Red Hat OpenShift, Rancher and VMware Tanzu. Some enterprises may prefer to let a vendor tackle the API integration problem and leave support to them as well. In either case, the underlying problem at hand is the same — as enterprise deployments expand and mature, they need to control and automate multiple Kubernetes clusters in multi-cloud environments.
For some users, multiple clusters are necessary to keep workloads portable across multiple infrastructure providers; others prefer to manage multiple clusters rather than deal with challenges that can emerge in Kubernetes networking and multi-tenant security at large scale. The core Kubernetes framework does not address this.
“[Users] need a ‘meta control plane’ because one doesn’t just run a single Kubernetes cluster,” said John Mitchell, an independent digital transformation consultant in San Francisco. “You end up needing to run multiple [clusters] for various reasons, so you need to be able to control and automate that.”
Before vendor products and Cluster API emerged, many early container adopters created their own tools similar to Giant Swarm’s internal API. In Mitchell’s previous role at SAP Ariba, the company created a project called Cobalt to build, deploy and operate application code on bare metal, AWS, Google Cloud and Kubernetes.
Mitchell isn’t yet convinced that Cluster API will be the winning approach for the rest of the industry, but it’s at least in the running.
“Somebody in the Kubernetes ecosystem will muddle their way to something that mostly works,” he said. “It might be Cluster API.”
SAP’s Concur Technologies subsidiary, meanwhile, created Scipian to watch for changes in Kubernetes custom resource definitions (CRDs) made as apps are updated. Scipian then launches Terraform jobs to automatically create, update and destroy Kubernetes infrastructure in response to those changes, so that Concur ops staff don’t have to manage those tasks manually. Scipian’s Terraform modules work well, but Cluster API might be a simpler mechanism once it’s integrated into the tool, said Dale Ragan, principal software design engineer at the expense management SaaS provider based in Bellevue, Wash.
“Terraform is very amenable to whatever you need it to do,” Ragan said. “But it can be almost too flexible for somebody without in-depth knowledge around infrastructure — you can create a network, for example, but did you create it in a secure way?”
With Cluster API, Ragan’s team may be able to enforce Kubernetes deployment standards more easily, without requiring users to have a background in the underlying toolset.
“We created a Terraform controller so we can run existing modules using kubectl [with Cluster API],” Ragan said. “As we progress further, we’re going to use CRDs to replace those modules … as a way to create infrastructure in ‘T-shirt sizes’ instead of talking about [technical details].”
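The “T-shirt size” idea Ragan describes could be expressed as a custom resource whose spec hides the underlying Terraform inputs behind a few coarse-grained choices. The following is a hypothetical sketch of such a CRD instance — not Concur’s actual Scipian resource, whose fields are not published in this article:

```yaml
# Hypothetical custom resource: a controller would map "size" to a full
# set of Terraform module inputs (instance types, subnets, node counts),
# so users request infrastructure without touching those details.
apiVersion: infra.example.com/v1alpha1
kind: ClusterRequest
metadata:
  name: team-a-dev
spec:
  size: medium                      # small | medium | large
  region: us-west-2
  networkProfile: standard-secure   # a pre-vetted, known-good network config
```

The design point is that the security-sensitive decisions (such as how the network is created) live in the controller’s mapping, not in each user’s hands.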
Every day, CISOs must decide which cyberthreats to prioritize in their organizations. When it comes to choosing which threats are the most concerning, the list from which to choose is nearly boundless.
At RSA Conference 2020, speakers discussed several of the most concerning threats this year, from ransomware and election hacking to supply chain attacks and beyond. To pursue the topic of concerning threats, SearchSecurity asked several experts at the conference what they considered to be the biggest cybersecurity threat today.
“It has to be ransomware,” CrowdStrike CTO Mike Sentonas said. “It may not be the most complex attack, but what organizations are facing around the world is a huge increase in e-crime activity, specifically around the use of ransomware. The rise over the last twelve months has been incredible, simply because of the amount of money there is to be made.”
Trend Micro vice president of cybersecurity Greg Young agreed.
“It has to be ransomware, definitely. Quick money. We’ve certainly seen a change of focus where the people who are least able to defend themselves, state and local governments, particularly in some of the poorer areas, budgets are low and the bad guys focus on that,” he said. “The other thing is I think there’s much more technological capability than there used to be. There’s fewer toolkits and fewer flavors of attacks but they’re hitting more people and they’re much more effective, so I think there’s much more efficiency and effectiveness with what the bad guys are doing now.”
Sentonas added that he expects the trend of ransomware to continue.
“We’ve seen different ransomware groups or e-crime groups that are delivering ransomware have campaigns that have generated over $5 million, we’ve seen campaigns that have generated over $10 million. So with so much money to be made, in many ways, I don’t like saying it, but in many ways it’s easy for them to do it. So that’s driving the huge increase and focus on ransomware. I think, certainly for the next 12 to 24 months, this trend will continue. The rise of ransomware is showing no signs it’s going to slow down,” Sentonas explained.
“Easy” might just be the key word here. The biggest threat to cybersecurity, according to BitSight vice president of communications and government affairs Jake Olcott, is that companies “are still struggling with doing the basics” when it comes to cybersecurity hygiene.
“Look at all the major examples — Equifax, Baltimore, the list could go on — where it was not the case of a sophisticated adversary targeting an organization with a zero-day malware that no one had seen before. It might have been an adversary targeting an organization with malware that was just exploiting known vulnerabilities. I think the big challenge a lot of companies have is just doing the basics,” Olcott said.
Lastly, Akamai CTO Patrick Sullivan said that the biggest cybersecurity threat is the threat to the supply chain, as highlighted at Huawei’s panel discussion at RSAC.
“The big trend is people are looking at their supply chain,” he said. “Like, what is the risk to the third parties you’re partnering with, to the code you’re developing with partners, so I think it’s about looking beyond that first circle to the second circle of your supply chain and your business partners.”
An underlying flaw in Intel chipsets, which was originally disclosed in May of 2019, was recently discovered by Positive Technologies to be far worse than previously reported.
Researchers from the vulnerability management vendor discovered a bug in the read-only memory of the Intel Converged Security and Management Engine (CSME) that could allow threat actors to compromise platform encryption keys and steal sensitive information. The Intel CSME vulnerability, known as CVE-2019-0090, is present in both the hardware and the firmware of the boot ROM and affects all chips other than Intel’s 10th-generation “Ice Point” processors.
“We started researching the Intel CSME IOMMU [input-output memory management unit] in 2018,” Mark Ermolov, lead specialist of OS and hardware security at Positive Technologies, said via email. “We’ve been interested in that topic especially because we’ve known that Intel CSME shares its static operative memory with the host (main CPU) on some platforms. Studying the IOMMU mechanisms, we were very surprised that two main mechanisms of CSME and IOMMU are turned off by default. Next, we started researching Intel CSME boot ROM’s firmware to ascertain when CSME turns on the IOMMU mechanisms, and we found that there is a very big bug: the IOMMU is activated too late, after the x86 paging structures were created and initialized, a problem we found in October.”
“Intel CSME is responsible for initial authentication of Intel-based systems by loading and verifying all other firmware for modern platforms,” Ermolov said. “It is the cryptographic basis for hardware security technologies developed by Intel and used everywhere, such as DRM, fTPM [firmware Trusted Platform Module] and Intel Identity Protection. The main concern is that, because this vulnerability allows a compromise at the hardware level, it destroys the chain of trust for the platform as a whole.”
Although Intel has issued patches and mitigations that complicate the attack, Positive Technologies said fully patching the flaw is impossible because firmware updates can’t fully address all of the vectors.
“In the CVE-2019-0090 patch, Intel blocked ISH [Integrated Sensors Hub], so now it can’t issue DMA transactions to CSME. But we’re convinced there are other exploitation vectors and they will be found soon. To exploit a system that has not been patched for CVE-2019-0090, an attacker doesn’t need to be very sophisticated,” Ermolov said.
In addition, Positive Technologies said extraction of the chipset key is impossible to detect.
“The chipset key being leaked can’t be detected by CSME or by the main OS,” Ermolov said. “You’re already in danger, but you don’t know it. The attack (by DMA) also doesn’t leave any footprint. When an attacker uses the key to compromise the machine’s identity, this might be detected by you and you only, but only after it’s happened when it is too late.”
Once they’ve breached the system, threat actors can exploit this vulnerability in several ways, according to Positive Technologies.
“With the chipset key, attackers can pass off an attacker’s computer as the victim’s computer. They can gain remote certification into companies to access digital content usually under license (such as videos or films from companies like Netflix),” the company said via email. “They can steal temporary passwords to embezzle money. They can pose as a legitimate point-of-sale payment terminal to charge funds to their own accounts. Abusing this vulnerability, criminals can even spy on companies for industrial espionage or steal sensitive data from customers.”
Positive Technologies recommended disabling Intel CSME-based encryption or completely replacing CPUs with the latest generation of Intel chips.
This is the second vulnerability disclosed regarding Intel chips since January, when computer science researchers discovered a speculative execution attack that leaks data from an assortment of Intel processors released before the fourth quarter of 2018.
Very compact PC which has been used mainly as an HTPC. Intel 4150T, 8GB, wifi, 128GB SSD, Win10. Will consider trade with a graphics card enclosure as long as it’s TB3 compatible. Cash your way depending on model.
A VMware VM template — also known as a golden image — is a perfect copy of a VM from which you can deploy identical VMs. Templates include a VM’s virtual disks and settings, and they can not only save users time but help them avoid errors when configuring new Windows and Linux VMs.
VM templates enable VMware admins to create exact copies of VMs for cloning, converting and deploying. They can be used to simplify configuration and ensure the standardization of VMs throughout your entire ecosystem. Templates can also be used as long-term backups of VMs. However, you can’t operate a VM template without converting it back to a standard VM.
VSphere templates can be accessed through your content library. The content library wizard walks you through configuration steps, such as publishing and optimizing templates. It designates roles and privileges that you can then assign to users, and it eases VM deployment options.
Best practices for Hyper-V templates
You can create and deploy VM templates through Hyper-V, as well. Hyper-V templates enable users to deploy VMs quickly with greater security, such as with shielded VMs, and reduce network congestion. They rely on System Center Virtual Machine Manager (SCVMM) and require specific configurations.
To create a Hyper-V template, select a base object from which you want to create the template — an extant VM template, a virtual hard disk or a VM. Assign a name to the new template and configure the virtual hardware and operating settings the deployed VM will use.
Keep in mind that not every VM is a viable template candidate. If your system partition is not the same as your Windows partition, you won’t be able to use that VM as a template source.
To create a shielded VM — one that protects against a compromised host — run the Shielded Template Disk Creation Wizard. Specify your required settings in the wizard and click Generate to produce the template disk, then copy that disk to your template library. The disk should appear in your content library with a small shield icon, which signifies that it has shielded technology.
How to create a VMware VM template with Packer
Packer is a free tool that can help you automate vSphere template creation and management. It features multiple builders optimized for VMware Fusion, Workstation Pro or Workstation Player. The vmware-iso Packer plugin builder supports using a remote ESXi server to build a template, and the vsphere-iso plugin helps you connect to a vCenter environment and build on any host in a cluster.
When you use Packer to make a VM template, you use two main file types. The JSON file makes up the template, and the autounattend.xml file automates Windows installation on your VM. Once your scripts, JSON file and autounattend file are ready, you can build a VM template in Packer. When the build is complete, Packer converts the VM to a template that you can view and deploy through PowerCLI.
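A minimal sketch of that JSON template is shown below. The server names, credentials, ISO path and file layout are assumptions for illustration only; the `vsphere-iso` builder accepts many more fields, so treat this as the general shape rather than a complete, working configuration:

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "vcenter.example.com",
      "username": "packer@vsphere.local",
      "password": "{{env `VSPHERE_PASSWORD`}}",
      "cluster": "Cluster01",
      "vm_name": "win2019-template",
      "guest_os_type": "windows9Server64Guest",
      "iso_paths": ["[datastore1] iso/windows2019.iso"],
      "floppy_files": ["answer_files/autounattend.xml"],
      "convert_to_template": true
    }
  ]
}
```

The `floppy_files` entry is how the autounattend.xml answer file reaches the Windows installer, and `convert_to_template` tells Packer to perform the VM-to-template conversion at the end of the build.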
Use PowerCLI to deploy a template
You can use PowerCLI to deploy new VMs from a template. Create an OS customization specification through PowerCLI to start the deployment process and to ensure that when you create your VMs from a template, you can still change certain settings to make them unique. These settings would include organization name, security identifier, local administrator password, Active Directory domain, time zone, domain credentials, Windows product key and AutoLogonCount registry key. The PowerCLI cmdlet might resemble the following:
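As a sketch of what that might look like — every name, password and key below is a placeholder, and the commands assume an active `Connect-VIServer` session rather than representing a tested, production-ready script:

```powershell
# Create a customization spec covering the settings listed above.
# All values are illustrative placeholders.
New-OSCustomizationSpec -Name 'Win2019Spec' -OSType Windows `
    -FullName 'Ops Team' -OrgName 'Example Corp' -TimeZone 035 `
    -AdminPassword 'P@ssw0rd!' -Domain 'corp.example.com' `
    -DomainUsername 'joinacct' -DomainPassword 'JoinP@ss' `
    -ProductKey 'XXXXX-XXXXX-XXXXX-XXXXX-XXXXX' `
    -AutoLogonCount 1 -ChangeSid

# Deploy a new VM from the template, applying the spec so the
# clone gets a unique identity.
New-VM -Name 'app01' -Template (Get-Template 'Win2019Template') `
    -OSCustomizationSpec (Get-OSCustomizationSpec 'Win2019Spec') `
    -VMHost (Get-VMHost | Select-Object -First 1)
```

The `-ChangeSid` switch is what gives each deployed VM a fresh security identifier, which matters when the clones will join the same Active Directory domain.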
There are a few common mistakes in VM template creation and deployment that you’ll want to avoid.
Creating a VMware template directly from a VM ends up destroying the VM. Always create a clone of a VM prior to creating a template from it. Even if you create a VM solely to become a template, template creation could fail and destroy your VM. A common reason for template creation failure is trying to create a template from a Linux VM: in that case, the template creation process attempts to Sysprep the VM, but Sysprep is designed for Windows OSes.
You also need to ensure that the model VM you want to turn into a template isn’t domain-joined. Joining a VM to an Active Directory domain can cause the system to create a computer account for the VM, which then leaves that computer account orphaned during the template creation process. To work around this issue, have the template itself handle the domain join and secure the library share in a way that prevents anyone other than VM admins from having access.
Finally, don’t include any preinstalled applications on a VM template. The Sysprep process often breaks such applications. You can instead use an application profile or configure a VM template to run a script for automated application installation.
Up for sale is my MacBook Pro. This was repaired by Currys due to the LEDs in the screen failing, which resulted in horizontal lines. The unit hasn’t been dropped, so I am unsure as to how the screen error occurred.
Due to the time taken to repair, I have now sourced an iMac so this is surplus to requirements.
Overall condition is very good, minor wear and tear marks on the rubber feet due to being placed on a desk, bottom of the actual casing is in good condition.
Specs taken from the website:
15.4-inch MacBook Pro 2.2GHz 6-core Intel Core i7 with Retina display – Space Grey
Touch Bar with integrated Touch ID sensor
15.4-inch (diagonal) LED-backlit display with IPS technology; 2880×1800 native resolution at 220 pixels per inch
16GB of 2400MHz DDR4 onboard memory
720p FaceTime HD Camera
Radeon Pro Graphics 555X
Will come with a Logik power adapter/charger as the original is being used elsewhere.
I purchased the MacBook new from Currys. It was given an additional 3 months guarantee from Currys after the repair, but the official one year warranty with Apple expires on 01/04/2020. Original box will be included.
Just received this RMA replacement which is still brand new and factory sealed. This is however a 3-slot card and the case I want to put it in only supports 2-slot cards, so I am looking to trade it for a 2-slot card. I don’t mind if it’s a faster or slower card (within reason) and I’m happy to adjust either way with cash on top etc. Pretty much anything considered, but in an ideal world it should have warranty and be from a smoke-free home.
Quantum wants to remove the human element from handling tape, which would greatly reduce the chance of tapes getting damaged or contaminated.
Quantum’s new Ransomware Protection Packs are hardware and software bundles that combine a Quantum tape library with a built-in vault partition. The partition is not connected to any network or any software that writes to tape. The robot inside the device physically moves the tape into the offline vault, and the backup software will see it as ejected. The goal is to create tape-based backup copies of data and move them to an area ransomware and malware can’t reach, but still within the same physical appliance.
Quantum launched three pre-defined bundles, ranging from 600 TB to 2.4 PB of storage. Each bundle includes a Quantum Scalar tape library (i3 model for small, i6 for medium and large) and the Active Vault software that generates and manages the offline partition within the library. Other tape vendors such as Spectra Logic and IBM have similar capabilities that allow for partitioning within their tape libraries. However, Quantum’s Active Vault uniquely creates offline partitions that aren’t connected to networks or backup applications.
Neither the Scalar libraries nor Active Vault are new products from Quantum. The vendor ported Active Vault to all its libraries and repackaged them together into ransomware protection products. Previously, Active Vault was only offered on enterprise products and used by large media companies to vault their digital data archives.
The idea of moving tapes to a vault isn’t new either. Many businesses ship tapes to off-site facilities, often to vendors who offer tape vaulting services such as Iron Mountain. It is a common way to satisfy the 3-2-1 rule of backup.
But the new Quantum tape products allow for in-library vaulting, which cuts out the need to handle tapes or transport them. Enterprise Strategy Group (ESG) senior lab analyst Vinny Choinski, who is currently researching how enterprise customers are using tape, said tape is more reliable than disk — with the caveat that no one ever touches it. Tape is a stable medium at rest, but risks getting damaged or corrupted when moved.
“Tape is actually more reliable than spinning disk. The errors that come in are human errors — people handling tapes and moving them,” Choinski said.
Tape was commonly the method of moving large amounts of data out of a business’s data center, and the Quantum Ransomware Protection Packs don’t provide that. However, Eric Bassier, senior director of product marketing at Quantum, made the argument that tape is no longer useful as an off-site backup medium — that’s what cloud is for. Instead, tape’s advantage comes from being air gapped, and it’s still the most cost-effective medium for storing large amounts of data long-term and off-network.
“Tape’s role now is about being offline, not off-site,” Bassier said.
Christophe Bertrand, senior analyst at ESG, agreed about the shifting role of tape. He said other than archiving massive data sets, the other main use of tape is to create an isolated, disconnected layer. This second use case has gained more relevance thanks to increasingly sophisticated ransomware. Bertrand said businesses no longer use tape as their primary target for backup, but they do still use its air gapping capabilities to keep data out of reach of cyberattacks.
Bertrand said tape still has a role to play in data centers, but there’s a skill gap among IT teams as administrators with tape knowledge are aging out of the work force. Although his research did not specifically focus on the tape medium, he found that cybersecurity and data protection expertise is lacking in today’s IT world.
“There’s a new generation of people in data centers,” Bertrand said.
Bertrand said Quantum’s Ransomware Protection Packs don’t require tape expertise to use and appear to be designed with IT generalists in mind. This accessibility is important, as he foresees tape becoming relevant again because of ransomware’s continuing threat.
I am making some changes to my setup which includes mounting my Unifi AC Pro access point on the ceiling (previously it was sitting on a shelf). However, I am missing the plastic mounting bracket. Does anyone have one that they don’t require any more?