Improving patient care through AI and blockchain: Part 2 – Microsoft Industry Blogs

Whether you’re interested in using Artificial Intelligence (AI) and Machine Learning (ML) to drive better health outcomes, reduce your operational costs, or improve fraud detection, one way you can better unlock these capabilities is through leveraging blockchain.

In my last blog, “Improving Patient Care through AI and Blockchain – Part 1,” I discussed several opportunities for blockchain to help advance AI in healthcare, from sourcing more training data from across a consortium, to tracking provenance of data, improving the quality of AI with auditing, and protecting the integrity of AI using blockchain. In this second blog, take a look at four more reasons to consider blockchain for advancing AI in healthcare.

  1. Shared models
    In cases where constraints exist that preclude sharing raw training data across a consortium of healthcare organizations, for legal or other reasons, it may be possible to incrementally train shared models, enabled by the blockchain (see the sketch after this list). In this approach, the AI / ML models themselves are shared across the network of healthcare organizations in the consortium, rather than the raw training data, and each organization incrementally trains the shared models using its own training data, within its own firewall. Blockchain can then be used to share the models as well as metadata about training data, results, validations, audit trails, and so forth.
  2. Incentivizing collaboration using cryptocurrencies and tokens
    Cryptocurrencies and tokens on blockchain can be used to incentivize and catalyze collaboration to advance AI / ML in healthcare. From sharing of training data, to collaboration on shared models, results, validations, and so forth, healthcare organizations can be rewarded with cryptocurrencies or tokens proportional to their participation and contribution. Depending on how the blockchain is set up, these cryptocurrencies or tokens could be redeemed by participating healthcare organizations for meaningful rewards, or monetized. This can be useful in any AI / ML blockchain initiative as an accelerant, and it can be critical for overcoming the reservations about collaboration that arise when the size or value of contributions across the consortium is asymmetrical.
  3. Validating inference results and building trust faster
    Before AI / ML models can be used for patient care, they must be validated to ensure safety and efficacy. A single organization validating a model alone will take longer to achieve an acceptable level of trust than a consortium of healthcare organizations concurrently collaborating to validate a shared model. Blockchain can be used to coordinate and collaborate around such validation to increase synergy, minimize redundant effort, accelerate validation, and establish trust in a new model faster.
  4. Automation through smart contracts and DAOs
    Executable code for processing transactions associated with AI / ML, whether procurement of training data or otherwise, can be implemented on blockchains in the form of smart contracts. DAOs (Decentralized Autonomous Organizations), such as non-profits, can also be built from smart contracts to automate whole enterprises that can facilitate advancing AI / ML in healthcare at scale.
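
As a loose illustration of the shared-model idea in item 1, here is a conceptual Python sketch. Everything in it is hypothetical: the ledger is just an in-memory list, and local training is a stand-in. The point is only that each organization publishes updated model weights plus audit metadata to the shared ledger, and the raw patient data never leaves its firewall.

```python
# Conceptual sketch (not a production design) of incremental shared-model
# training across a consortium. The "ledger" and "publish" names are
# hypothetical placeholders; a real system would use an actual blockchain.
import hashlib
import json

def publish(ledger, org, weights, metadata):
    """Append a model update plus its audit metadata to the shared ledger."""
    record = {"org": org, "weights": weights, "metadata": metadata}
    # Hash the record so later auditors can detect tampering.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

ledger = []
weights = [0.0, 0.0]  # the shared model starts from a common baseline
for org in ["hospital_a", "clinic_b", "lab_c"]:
    # Each organization trains on its own data in-house (placeholder step),
    # then publishes only the updated weights and training metadata.
    weights = [w + 0.1 for w in weights]  # stand-in for local training
    publish(ledger, org, weights, {"samples": 1000, "validated": True})

print(f"{len(ledger)} incremental updates recorded on the ledger")
```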

Keep the conversation going

If you’re interested in using AI, ML, or blockchain for healthcare, you know that new opportunities are constantly surfacing, and with them comes a whole host of new questions. Follow me on LinkedIn and Twitter to get updates on these topics as well as cloud computing, security, privacy, and compliance. If you would like to explore a partnership as you work to implement AI and/or blockchain for your healthcare organization, we’d love to hear from you.

For more resources and tips on blockchain for healthcare, take a look at part 1 of this series here.

Kibana monitoring apps zoom in on Kubernetes infrastructure

A pair of Kibana monitoring apps has entered the Kubernetes fray, as enterprises scale up their container usage. But the apps face a market that already has many options from which IT pros can choose.

The apps were released last week with version 6.5 of Elastic Inc.’s Elastic Stack, the commercialized version of a suite of open source time-series data monitoring tools formerly known as the ELK Stack, which includes the Elasticsearch data index and query engine, Logstash log collection software, and the Kibana data visualization tool. Users could already collect data within the Kubernetes infrastructure with Elastic Stack before this release, but they had to build custom Kibana monitoring dashboards.

Now, two prebuilt apps in Elastic Stack 6.5 display metric and log data, respectively, and add to users’ choices for enterprise Kubernetes management. Another software product, Elastic APM, was also released in beta last week, with distributed tracing features for applications that run on Kubernetes.
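
For a sense of what those prebuilt apps sit on top of, here is a hedged sketch that queries the same data directly with the official elasticsearch-py client. It assumes Metricbeat’s kubernetes module is already shipping metrics to a local Elasticsearch cluster; the index pattern and field names follow Metricbeat 6.x defaults and may differ in your deployment or client version.

```python
# Pull the five most recent Kubernetes metric documents from Elasticsearch,
# assuming Metricbeat writes to the default "metricbeat-*" indices.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(index="metricbeat-*", body={
    "size": 5,
    "sort": [{"@timestamp": "desc"}],
    # "metricset.module" is the Metricbeat 6.x field naming convention.
    "query": {"term": {"metricset.module": "kubernetes"}},
})

for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src["@timestamp"], src.get("metricset", {}).get("name"))
```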

Elastic, fresh from a successful initial public offering in October 2018, looks to capitalize on its momentum as a hot open source software company at a time when enterprise DevOps shops favor such tools.

“Elastic has a broad tool portfolio, so users can select a tool to solve a particular problem, and it might be free to start,” said Stephen Elliot, analyst at IDC. “Then, when customers need to increase scale and features, they can convert to a licensed product and reuse existing open source skills they already have.”

Another pair of updated data visualization and presentation tools rolled out with Elastic Stack 6.5, dubbed Canvas and Spaces, gives Kibana monitoring a facelift. Canvas can visualize data through customizable infographics that display live data from Elasticsearch and create multipage presentations. Spaces applies role-based access control to data objects visualized with Kibana to beef up security for compliance-conscious enterprises.

Kibana monitoring apps launch amid competitive headwinds

Some large-scale Kubernetes early adopters already use Elasticsearch in the enterprise. E-commerce giant eBay modified Elasticsearch data collectors, called Beats, to streamline monitoring in its large Kubernetes clusters and contributed that project’s code to open source in 2018. Some enterprise IT pros also consider time-series monitoring tools for granular container monitoring at scale, and fewer products compete for attention here — application performance management (APM) tools such as New Relic can ingest time-series data via Kubernetes APIs, for example.

There will be a tug of war in the DMZ between [monitoring] tools that come from the application level down and tools that come from the infrastructure up.
Tony Baer, analyst, Ovum

However, large companies that must track vast amounts of infrastructure may prefer native time-series data collection from Elastic Stack. The Kibana monitoring apps in Elastic Stack 6.5 also offer a middle ground between raw, open source time-series data collectors, such as Prometheus, and packaged proprietary tools, such as New Relic Infrastructure.

Still, Elastic faces stiff competition as it takes a step further into the market for Kubernetes management tools with its Kibana monitoring apps and Elastic APM. It must face off with established vendors in APM, such as AppDynamics, Datadog, Dynatrace and New Relic, as well as other time-series infrastructure monitoring tools in the container space, such as InfluxData and the open source Prometheus tool, which, like Kubernetes, is governed by the Cloud Native Computing Foundation. Logstash and the Kibana log analytics app also compete with Splunk, SignalFx and Datadog’s Log Management tool.

“Elastic emerged as an open source answer to Splunk that was natively engineered for cloud scale,” said Tony Baer, analyst at Ovum, based in London. “It also represents a bottom-up grassroots approach as an index and storage engine for cloud and big data, while many APM tools originated in smaller, walled, on-premises gardens.”

But tools from the proprietary on-premises world offer a higher-level view of IT infrastructure that doesn’t come as naturally to Elastic Stack, though version 6.5’s Kibana monitoring apps add such views.

“Ultimately, it’s going to be about which products are aggregating data from all the others,” Baer said, as most enterprises will use a variety of tools. “There will be a tug of war in the DMZ [demilitarized zone] between tools that come from the application level down and tools that come from the infrastructure up.”

Kibana monitoring interface
An example of the Kibana monitoring interface, which now includes apps that display Kubernetes infrastructure logs and metrics.

Time-series monitoring battle looms in the cloud

In the public cloud, Elastic also competes with its own core open source technology, as it’s incorporated into the Amazon Elasticsearch service. And another competitor emerged in November 2018, called Rockset, a hosted service founded by former Facebook engineers that looks to leapfrog both Elastic Cloud and AWS Elasticsearch. Rockset has a granular data collection and query engine that doesn’t require users to manage the underlying cloud infrastructure and can quickly apply a standard SQL interface to diverse, high-velocity time-series data sets.

“Elasticsearch is a fantastic on-premises solution,” said Venkat Venkataramani, CEO and co-founder of Rockset, based in San Mateo, Calif. “But we’re saying if you want public cloud infrastructure to host your data index, there’s a better way.”

Elastic APM also represents an early foray into the distributed tracing space for applications and must integrate machine learning and automated root-cause analysis natively to match DevOps monitoring competitors. However, users who want such features can integrate them on their own through wizards available in this release.

For Sale – HP ProLiant Gen 8 G1610T SERVER 4GB Ram or 16GB Ram No HDD + Custom Gaming PC Omen X

HP Gen8 Microserver – NOW SOLD

Corsair DDR3 ECC Ram @ £100

HP Omen X Custom PC
I have also decided to sell my new Plex box after managing to acquire a second HP Omen X case, as I'm looking to do a Threadripper build, which leaves my newly built machine up for sale. The specs are as follows.

Case = HP Omen X
Motherboard = MSI Z270M Mortar (have the box/drivers dvd/manual)
CPU = Intel Pentium G4600
PSU = Corsair RM450 (also have the box for this)
Ram = 32GB (2x16GB sticks) DDR4 2133MHz
GPU = MSI RX 550 Aero 4GB (also have the box for this)
M.2 = Samsung 970 Evo (super-fast NVMe) for the boot disk
DVD = Can’t remember the model of the DVD drive but will update when I get round to checking it out

Space for 4x 3.5″ hard drives, each with its own drive sled for easy access when you upgrade or add drives as yours get full. My original plan was to run this as an Unraid server and pass through the GPU so I could run Windows in a VM, which works very well by the way. I'm really only selling because I got hold of a second HP Omen X case and really want to build a Threadripper machine for the same job. Since I didn't plan on selling this one I didn't keep all the boxes, but the CPU, motherboard, GPU, and M.2 are all brand new parts. The RAM was reclaimed from a new Dell unit which I upgraded. The case was purchased used and came with the DVD drive preinstalled.

I am looking for £900 or near offer, as it is essentially a brand new machine that has run for less than 30 days as an Unraid server. I can install Windows 10 for the buyer if requested but will not be including a licence code. It will also need to be collected due to weight. Will get some pics uploaded later today.

Price and currency: 16GB DDR3 ECC Ram £100 / HP Omen X Custom PC £900 ono
Delivery: Goods must be exchanged in person
Payment method: Cash on Collection / Bank Transfer
Location: Salford
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected


Public Key Infrastructure Explained | Everything you need to know

Here on the Hyper-V Dojo Blog I focus, of course, on Hyper-V, but it frequently intersects several other technologies that administrators need to work with, which makes those technologies highly relevant for Hyper-V users as well. Of these, public key infrastructure (PKI) and its components, most importantly certificates, remain poorly understood, and therefore poorly implemented. In Hyper-V, we employ PKI for Shielded VMs and replication. Beyond that, we have hordes of other uses for certificates: websites, code signing, secure e-mail, and Windows Admin Center, to name a few. Given the complexity of encryption technologies and the arcane nature of the related tools, I find that many in IT avoid the entire technology group as too complex. In reality, I could probably stand in front of a classroom and deliver the most important aspects of PKI in fifteen minutes.

Since I haven’t got a classroom handy and I would like to distribute this information far and wide, I will use this article to explain the technology. I will refer back to it in future articles.

Before we get started, here’s a quick table of contents if you want to skip to a certain section.

  1. What is Public Key Infrastructure (PKI)?
  2. The Core Use of PKI and Certificates
  3. A Very Brief Introduction to Digital Encryption
    1. Encoding and Encryption Ciphers
    2. Keyed Encryption Ciphers
    3. Symmetric Encryption
    4. Asymmetric Encryption
  4. PKI Identification
  5. The PKI Certificate Issuance Process
  6. The PKI Certificate Validation Process
    1. Why Not Contact the Certification Authority During Validation?
  7. The PKI Certificate Revocation Process
  8. PKI Identity Verification Visualization
  9. Certificate Signing Operations
  10. SSL Encrypted Communications
  11. Offline Certification Authority
    1. The Risks of an Online Root Certification Authority
    2. Offline CA Creation Process
    3. An Offline CA Without a CRL
  12. What IS a Certification Authority?
    1. Does a Certification Authority Require a CRL?
  13. The Dangers of Self-Signed Certificates
  14. Going Further with PKI

1. What is Public Key Infrastructure (PKI)?

A public key infrastructure or PKI establishes a digital trust hierarchy in which a central authority securely verifies the identity of objects. We commonly use PKI to certify users and computers. It functions by maintaining, distributing, validating, and revoking SSL/TLS certificates built from the public key of public/private key pairs.

There are many associated terms connected to public key infrastructure you’ll need to be familiar with so I’ll lay them out here. You don’t necessarily need to memorize these, or even understand all of them at this stage – you might even want to skim or even skip this section and just use it as a reference later. I have deliberately kept these descriptions simple to stay within the scope of the article.

  • SSL: “SSL” stands for “Secure Sockets Layer”. SSL was designed to secure digital communications traveling over insecure channels. TLS has supplanted SSL, but we still use the term SSL, mostly for familiarity reasons. It has cemented itself into the common vernacular, so there’s little use in fighting it. After this point, I will use “SSL” in its generic sense.
  • TLS: “TLS” stands for “Transport Layer Security”. This technology group serves the same fundamental purpose as SSL and depends upon the same basic components and concepts, but the technologies cannot be used interchangeably.
  • Cipher: an algorithm used for encoding or encryption. In SSL, the term often refers to a collection, or “suite” of ciphers, each with a different purpose in an SSL conversation.
  • Key: A digital key used with SSL/TLS is just a sequence of bits, usually expressed in hexadecimal characters. Ciphers use keys to encrypt and decrypt data. Keys used in standard PKI are expected to be a certain number of bits, a power of 2 starting at 1024 (ex: 2048, 4096, etc.). A longer key provides stronger defense against brute-force cracking when used in a cipher, but also requires more computing overhead. In PKI, keys come in pairs:
    • Private key: a key that is held by its owner and never shared with anyone else. The private key in a private/public pair is the only key that can be used to decrypt data that was encrypted by the public key. A private key that is accessed by anyone other than its owner is considered “compromised”.
    • Public key: a key that can be shared with anyone. The public key in a private/public pair is the only key that can be used to decrypt data that was encrypted by the private key. The “PK” in “PKI” comes from its usage of public keys. It also serves as a reminder: we use “public key” in PKI but never “private key”, because no private key should ever enter the infrastructure.
  • Certificate: A certificate is a digital file used for identity and authorization. You will often see these referred to as “SSL certificates”. However, SSL implies communications, whereas certificates have more purposes. The term has lodged itself in common jargon, so it too will continue despite its technical inaccuracy. When I remember, I say “PKI certificate” instead.
    Certificates contain many components. Among them:
    • Identifying information. There are several defined fields, and most certificates contain only a subset. Examples:
      • Common Name: The name of the object that the certificate identifies. Sometimes that is a fully qualified domain name, such as www.altaro.com. Sometimes, it is just a name, such as “Eric Siron”.
      • Locality: The city, or equivalent, of the entity represented by the certificate
      • Organization: The name of the organization that owns the certificate
    • A public key
    • Validity period
  • Encoding: Passing data through an algorithm to transform it for the purpose of facilitating a process or conforming to a standard. For instance, base-64 encoding can turn character string sequences from a human-readable form that might cause problems for simple string handlers (like URLs) into strings that computers can easily process but humans would struggle with. Text can be encoded in UTF8 (and others) so that it fits a common standard. “Decoding” is a convenience term that we use to mean “reversing encoding”, although it could be argued that there is no such thing. We simply perform a different encoding pass on the new data that generates output that matches the original data. The most important thing to understand: encoding does not provide any meaningful security. We only use encoding for convenience.
  • Encryption: Encryption is similar to encoding, but uses algorithms (usually called ciphers in this context) to obscure the data as opposed to adapting it for a functional purpose. “Decryption” reverses encryption.
  • Cracking: a term that traces its origins to the same concepts behind physical-world activities such as “cracking a safe”. It refers to the action of decrypting data without having access to the private key. I previously mentioned “brute-force” cracking, which means trying all possible keys one at a time until finding the correct one. I’ll leave further research on other techniques to you.
  • Certification Authority: Sometimes shortened to “certificate authority”. Often abbreviated to “CA”. An entity that signs and revokes certificates.
  • Self-signed certificate: A certificate in which the identity represented by the certificate also signed and issued the certificate. The term “self-signed” is often used erroneously to describe a PKI that an organization maintains internally. A certificate signed by any authority other than the certificate holder is not self-signed, even if that authority is not reachable on the public Internet or automatically trusted by computers and devices.
  • Root Certification Authority: The top-most entity of the PKI, and the only entity that expects others to blindly trust it. Uses a self-signed certificate. Can sign, issue, and revoke certificates.
  • Intermediate Certification Authority: Sometimes referred to as a “subordinate CA”. A CA whose certificate was signed and issued by another CA. Generally identical in function to a root CA, although the root or a superior intermediate CA can place restraints on it.
  • Certificate chain: a single unit that contains all of the information needed to trace through all intermediate CAs back to and including the root CA.
  • Server certificate and client certificate: technically incorrect, yet commonly used terms. In typical usage, these terms mean “the certificate used by the server in a given SSL communication” and “the certificate used by the client in a given SSL communication”, respectively. However, you cannot correctly say, “this certificate file is a client certificate”. “Server” and “client” are arbitrary designations for a digital transmission and have no meaning whatsoever when you’re only referring to a single entity (the certificate holder). A certificate is a certificate.
  • Constraints, key usage, and enhanced key usage: actions that a CA has authorized the certificate holder to perform. For instance, consider a development application that uses a private key to sign a piece of code. If the CA has signed the matching certificate for code signing usage, then a computer that runs that code and trusts the CA will treat the code as properly signed. However, a private key can be used for any purpose — constraints only limit the actions the issuing certification authority will validate. That means that you still cannot correctly refer to a certificate with the Client Authentication key usage as a “client certificate”.
  • Certificate Revocation List (CRL): A list of certificates that the CA has marked invalid. If a certificate appears on this list, then no client should consider it reliable. The CA signs the CRL to make it tamper-proof so that it can be freely distributed and trusted.
  • Online Certificate Status Protocol responder (OCSP responder): CRLs are just simple lists. A client must download the entire CRL and search through it to check on any given certificate. For very long CRLs and/or low-powered clients, that can take a lot of time. An OCSP responder keeps a copy of the revoked certificate list and can perform the search for any client that asks.

2. The Core Use of PKI and Certificates

We use PKI and certificates for a multitude of purposes. Functionally, though, they all derive from two central needs: identification and encryption. I first learned about PKI through encryption, so I’ll start there.

3. A Very Brief Introduction to Digital Encryption

I only intend to talk enough about encryption to explain the problems that PKI solves. A fulfilling lifetime career could be made from this subject. I recommend that you find experts if you want to know more.

3.1 Encoding and Encryption Ciphers

At its simplest, a cipher is an algorithm. We apply the term “cipher” to an algorithm when it has uses in encoding or encryption. Let’s look at a trivial cipher: ROT13. It involves only the 26 characters used in the English alphabet. It encrypts character-by-character, replacing the character to be encrypted with the character 13 places forward, wrapping around at “A” after passing “Z”. “A” becomes “N”, “B” becomes “O”, etc.
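
To make that concrete, here is a quick sketch in Python, which happens to ship ROT13 as a text codec:

```python
# ROT13 in a few lines. This illustrates a keyless cipher: knowing the
# algorithm is enough to reverse it, because applying ROT13 twice
# round-trips back to the original text.
import codecs

message = "HELLO"
encrypted = codecs.encode(message, "rot13")   # 'URYYB'
restored = codecs.encode(encrypted, "rot13")  # back to 'HELLO'
print(encrypted, restored)
```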

The ROT13 cipher exhibits two major problems:

  • You only need to know that ROT13 was used to encrypt in order to restore the original data. Even if you don’t know that, it usually does not require a great deal of effort to discover. In simpler terms, ROT13 does not require a key for decryption.
  • You cannot effectively use ROT13 on any message that contains characters outside the 26 characters in the English alphabet. It does not define a way to encrypt a “1” or a “ñ”, nor does it define any way to output those characters.

Therefore, ROT13 depends on a specific, limited range of source material and cannot provide any guarantee of safety.

Keyless ciphers find their primary value in blocking casual access to data. You might use a keyless cipher in a puzzle game in which you want the player to eventually figure out the message. In the larger world of encoding, keyless algorithms find a great many more uses, such as base-64 encoding and the various text encoding schemes. I briefly covered those in the terminology section above.

For meaningful secrecy, we turn to keyed ciphers.

3.2 Keyed Encryption Ciphers

A keyed cipher differs from the previously-discussed ciphers in that it depends upon at least one key. These ciphers come in two types:

  • Symmetric ciphers: The algorithm uses the same key to encrypt and decrypt. Furthermore, the encrypted data is usually the same size as its unencrypted source.
  • Asymmetric ciphers: The algorithm uses one key to encrypt and a different key to decrypt. Data encrypted asymmetrically tends to be larger than its unencrypted source.

3.3 Symmetric Encryption

Symmetric encryption is the easiest to understand. Imagine if you decided to use ROT13, but shifted each character one additional place (14 in total). You could consider that 1 to be a key in an algorithm defined as “ROT13 + k”. Just knowing the cipher is no longer enough; you also need to know (or figure out) the key.

A real-world corollary to symmetric encryption would be a standard home safe. The physical locking mechanism correlates to a digital algorithm. It can be opened with a physical key or combination. Anyone in possession of an identical copy of the physical key or knowledge of the combination can open the safe.

A practical discussion involves a “real” symmetric encryption cipher. Let’s choose 3DES (Triple Data Encryption Standard). Jim wants to share data with Jane and only Jane. So, he chooses to encrypt it with 3DES. He then gives the encrypted data (usually called ciphertext) to Jane. In order for Jane to decrypt it, she must know the key. Once he gives it to her, it becomes a “shared key”.

Symmetric Encryption

The good parts: 3DES depends only on a key and an algorithm. Technically, it can work with any kind of data. Contrast that against ROT13, which can only work with the 26 characters that make up its algorithm. Because 3DES’s algorithm does not depend on the data, the ciphertext is useless without the key. Knowing that 3DES created the ciphertext does nearly nothing for an attacker. The other nice thing: the ciphertext from most symmetric algorithms is the same size as the plaintext.
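
As a concrete illustration, here is a minimal symmetric round trip in Python. It uses the pyca/cryptography package’s Fernet construction (AES-based) as a stand-in for 3DES, which modern libraries deprecate; the shared-key property is exactly the same.

```python
# One key both encrypts and decrypts, so Jim must somehow get that key to
# Jane without an eavesdropper seeing it — the key distribution problem.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the shared secret
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"for Jane's eyes only")
plaintext = Fernet(key).decrypt(ciphertext)  # anyone with the key can do this
assert plaintext == b"for Jane's eyes only"
```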

However, we still have a problem, which I hope the diagram made clear. How can Jim securely deliver the key to Jane? Symmetric encryption works well for protecting information intended only for personal consumption. But, we want to communicate with others. So, we turn to asymmetric encryption.

3.4 Asymmetric Encryption

Asymmetric encryption solves the secret key problem. The encryption cipher requires one key while the decryption cipher requires a different one.

Asymmetric Encryption

Asymmetric encryption has no perfectly analogous physical-world examples. You can see similarities in the mailbox of an apartment complex or other multi-tenant building. The mail carrier uses one key to load the boxes with incoming mail and the tenants use their individual keys to retrieve their mail. The analogy fails in that even though any given key, whether the carrier’s or a tenant’s, can only open one door, the contents of the mailbox (the data) can be equally accessed from either side.

You can see a different analogy on your own front door. You can control the inside of a lock with a simple twist of the knob. You could call that a public key — anyone can turn it. However, the outside of a lock requires a specific key that you protect — you could call that a private key. This analogy fails to align with asymmetric encryption in that either “key” can freely lock and unlock the door.

The important thing to keep in mind for asymmetric encryption: data encrypted by one key can only be decrypted using the other. Even the key that was used to create the ciphertext cannot be used to return it to plaintext.

This fact serves as the basis for PKI. Keys are created in pairs. The owner permanently holds on to one key (the private key) and freely distributes the other (the public key). When the key holder wants to securely distribute data, it uses the private key to encrypt it. When someone wants to send data that only the key holder can read, they can encrypt the data with that entity’s public key.
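
Here is the same idea in runnable form: a minimal sketch with the pyca/cryptography package in which data encrypted with the public key can only be recovered with the private key.

```python
# Asymmetric round trip: RSA with OAEP padding. The public key encrypts;
# only the matching private key can decrypt.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this half can be freely distributed

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"only the key holder can read this", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"only the key holder can read this"
```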

This all looks really good right? The downside: asymmetric ciphertext cannot be the same size as the plaintext. It is usually much larger — often over double. That translates to increased storage, transmission, and decryption effort costs.

4. PKI Identification

We have more to talk about in encryption, but first, we need to cover the greater purpose of public key infrastructure: identity. In our earlier example, Jim wants to share data with Jane. How can Jane be certain that Jim sent the data that she received? If they only use symmetric encryption, she only knows that the sender has the correct key. She also has that key, so she knows that it has been shared at least once. She has no certainty that Jim did not share it with a malicious third party. Similarly, Jim can’t be certain that Jane is the person that received his transmission, or that a person that intercepted the secret key does not also have a way to intercept the ciphertext. We use PKI certificates to solve the identity problem.

Certificates represent the very core of PKI. A certificate’s primary purpose is to establish identity, verified by a central authority. One perfect real-world analogy is state-issued identification:

State Identification

At the very top of this “certificate”, we find the central authority. We find identifying information for the individual. We also have a validity period and a bar-coded sequence that we could think of as a serial number. The picture corresponds to a public key. When “James P. Keiai” needs to prove his identity, he presents this identification card. The person evaluating it goes through these elements:

  • Is the “certification authority” trusted? A government-issued ID might only be accepted within that government’s borders. In any case, the authority must be known and trusted in order for the certificate to have value. In PKI, we typically address that by pre-installing CA certificates. On Windows systems, you can see them in the Certificates snap-in under Trusted Root Certification Authorities and Intermediate Certification Authorities.
  • Is the “certificate” genuine? Government-issued IDs typically include some tamper-resistant elements. PKI certificates include a signature created by the certification authority that provides tamper-proofing.
  • Was the “certificate” issued to the person presenting it? This “certificate” includes a photograph which can be compared to the person holding the “certificate”. A PKI certificate includes a public key. Only the matching private key can supply data that can be decrypted using that public key.
  • Is the certificate within its validity period? Given sufficient time, any tamper-proofing can be circumvented. An issuer might also lose trust in the entity. We use validity periods to address those problems.

5. The PKI Certificate Issuance Process

To best understand PKI certificates, let’s start by looking at the issuance process.

  1. An entity (computer, user, device, etc.) generates its own private and public key pair
  2. The entity generates a certificate signing request (CSR) including the public key and identifying information to include on the certificate (common name, locality, subject alternate names, etc.)
  3. The CSR is submitted to a certification authority
  4. The authority generates a certificate including all of the above information and usage authorization (such as server authentication and code signing) and writes a record of the issuance into its database

Some things to note:

  • The entity never discloses the private key, not even to the certification authority
  • The entity decides on the keys, not the CA. A CA can refuse to issue a certificate for a key of insufficient length, but it has no other say on the composition of the key
  • The CA decides which key usages it will apply to the final certificate

The entity can now present that certificate to anyone that asks. By presenting a certificate, the entity makes a statement: “I am who I say I am and the certification authority will vouch for me”.
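
For the curious, here is a hedged sketch of steps 1 through 3 using the pyca/cryptography package; the subject fields are made up for illustration.

```python
# Generate a key pair, then build and sign a CSR containing the public key
# and identifying information. Only the CSR leaves the machine; the private
# key never does.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([
           x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com"),
           x509.NameAttribute(NameOID.LOCALITY_NAME, u"Anytown"),
           x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Org"),
       ]))
       .sign(private_key, hashes.SHA256()))  # signed to prove key possession

# csr.public_bytes(...) is what gets submitted to the certification authority.
```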

6. The PKI Certificate Validation Process

A simple process verifies a certificate. When a certification authority creates a certificate, it includes signing information. It creates the signature from its own private key, which means that only the authority’s public key can be used to verify it. With that, it effectively states that it vouches for the validity of the presented certificate. Because it has signed the certificate that includes the entity’s public key, the CA also vouches that any encrypted data that can be read or any signature that can be verified by the entity’s public key must have been created by the entity’s private key.

An important thing to note: no one contacts the certification authority during this process to validate the certificate; its signature on the certificate has done that. The system, trying to verify the certificate, either trusts the issuer or it does not. We expect the system to keep a local list of certificate authorities that it trusts. We also expect that it will automatically accept certificates signed by them. If the issuer does not exist in the system’s local list, then we expect the system to prompt for an override or reject the certificate. These are conventions; nothing in the technology requires any of this.

Even though the system won’t contact the certificate authority, it can look for revocation information. The issuer must have included revocation information in the certificate for that check to occur. Revocation information might be published in a location other than the certification authority.

6.1 Why Not Contact the Certification Authority During Validation?

It might seem like an oversight — shouldn’t someone trying to validate a certificate have the ability to contact the issuer? That requirement would create two problems:

  • The certification authority would need to be online all the time — we’ll look at problems with that in a bit
  • The client seeking verification would need to be online. Even though that seems like a guaranteed condition today, it certainly wasn’t at the dawn of PKI and should never be taken as a given even in today’s connected world

Requiring the CA to always be available represents a problem, but PKI already solved it. If the client has the CA’s certificate and the certificate that it signed, then it already has all the information it needs to know that the CA signed the certificate. The revocation process deals with bad certificates.

7. The PKI Certificate Revocation Process

If a private key becomes compromised, anything that it ever signed or encrypted becomes suspect. The certification authority must be deliberately told to revoke the certificate; it has no automatic way to know of compromise.

To revoke a certificate, the CA marks it as revoked in its own database. It can then issue or update a Certificate Revocation List (CRL). Alternatively, it can make revocation information available to an Online Certificate Status Protocol (OCSP) responder. The location of a CRL and/or OCSP responder must be included in all certificates signed by the certification authority or revocation will never be checked.

Important: any system can host CRLs and OCSP responders. The certification authority only needs to generate the revocation information. CRLs carry the CA’s signature, so the systems that host them need no particular security. Therefore, you can take the CA offline but keep the system(s) that host its CRLs and perform OCSP operations online.

8. PKI Identity Verification Visualization

The following image shows the most salient components of the preceding explanations:

PKI Identity Verification

In order:

  1. Entity generates a private/public key pair
  2. Entity crafts a certificate signing request and submits it to the certification authority
  3. The certification authority issues a certificate and records it in the database
  4. Entity presents the certificate to the client
  5. The client presumably has the signing certification authority’s certificate or can get it
  6. Client checks that the certificate does not appear on the CRL
  7. If 4, 5, and 6 all check out, the client will accept the certificate

That wraps up PKI identity. With identity established, we can continue our encryption discussion.

9. Certificate Signing Operations

Remember how we wanted certainty that Jim was sending data and Jane was receiving it? We can do that, and add tamper-proofing, using signatures created by private keys. It works like this (simplified):

  1. Jim creates a message.
  2. Jim passes the message through a “hashing” algorithm — a fancy word meaning that the algorithm crunched the data and produced a number.
  3. Jim uses his private key to encrypt the hash.
  4. Jim attaches the resulting ciphertext to the end of the message and transmits it to Jane.
  5. Jane uses the same hashing algorithm on the message.
  6. Jane uses Jim’s public key to decrypt the signature.
  7. If the hash computed by Jane matches the hash in the decrypted plaintext, Jane has verified the condition and authenticity of the message; Jim’s private key signed the message and no one altered it.
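
Those seven steps collapse into a few library calls. Here is a minimal sketch with the pyca/cryptography package; the message and keys are illustrative, and sign() and verify() bundle the hashing and private-key/public-key operations described above.

```python
# Jim signs; Jane verifies. verify() recomputes the hash and checks it
# against the signature, raising InvalidSignature if the message was
# altered or the key does not match.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

jim_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
jim_public = jim_private.public_key()  # Jane gets this from Jim's certificate

message = b"Meet at noon. -Jim"
signature = jim_private.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Jane's side: raises cryptography.exceptions.InvalidSignature on tampering.
jim_public.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("Signature verified: Jim signed it and no one altered it")
```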

Most e-mail applications can do all of that automatically. You just tell Outlook to sign your e-mail and tell it which certificate to use. As long as your system has a matching private key, Outlook will handle the rest. You can’t just pick any old certificate from your store though; this all depends on ownership of the correct private key.

Of course, in our example earlier, we talked about encryption. If you encrypt a message with your private key and the recipient uses the public key from your certificate to decrypt it, that has a side effect of providing the same identification and tamper-proofing as a signature.

By exchanging public certificates, Jim can make certain that only Jane can read the message and Jane can be certain that Jim sent it: Jim encrypts the message with Jane’s public key and signs it with his private key. However, we still would have the problem of puffed-up messages caused by asymmetric encryption. Enter SSL encryption.

10. SSL Encrypted Communications

After all that preamble, we can now absorb the gist of SSL communications rather easily. I don’t want to dig too far into the depths of encrypted communication because that extends beyond my goal of simplicity. I’m going to trim down to the minimal steps of typical communications in an HTTPS conversation, such as reading a web page:

  1. A client contacts the server.
  2. The client and server exchange information about the communications they intend to perform, such as the ciphers to use (SSL handshake).
  3. The server transmits its certificate to the client.
  4. The client checks that it trusts the certification authority that issued the certificate. If it does not recognize the CA and does not get an override, the communication ends.
  5. The client checks revocation information for the certificate. If the certificate is revoked or revocation information is unavailable, then the client might attempt to obtain an override. Implementations vary on how they deal with null or unreachable CRL information, but almost all will refuse to communicate with any entity using a revoked certificate.
  6. The client generates a portion of a temporary key for symmetric encryption.
  7. The client uses the server’s public key to encrypt the partial temporary key.
  8. The client sends the encrypted partial key to the server.
  9. The server decrypts the partial key using its own private key.
  10. The server completes the secret key.
  11. The client and server agree to use the secret key. All communications in the same conversation are encrypted with that key.

It would be possible to use asymmetric encryption for the entire conversation. However, as we talked about earlier, asymmetric encryption results in ciphertext that greatly exceeds the size of the unencrypted source. To solve that problem without exposing a plaintext key, SSL only uses asymmetric encryption while the client and server establish identity and work together to create a symmetric shared key. From that point forward, they only use symmetric encryption. That keeps the size of transmitted data to a minimum. Even better, if an attacker manages to break any point of the transmission besides the initial negotiation, they will only gain a temporary key.

All of that explains why we use suites of ciphers: we need multiple algorithms to make this work.
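
You can watch the outcome of this negotiation from Python’s standard library. A small sketch (www.example.com is just a placeholder host):

```python
# Connect to an HTTPS server, then inspect the negotiated protocol, the
# cipher suite the two sides agreed on, and the certificate the server
# presented during the handshake.
import socket
import ssl

context = ssl.create_default_context()  # trusts the system's CA list

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())        # e.g. 'TLSv1.3'
        print(tls.cipher())         # (cipher name, protocol, secret bits)
        print(tls.getpeercert()["subject"])  # identity from the certificate
```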

11. Offline Certification Authority

If you’re going to build a PKI, it will have a root certification authority at its heart. Keeping that authority safe must be your primary concern. Administrators commonly take their root certification authority offline to protect it.

11.1 The Risks of an Online Root Certification Authority

You face a dilemma: if you keep the certification authority online, you increase the odds of a compromised private key. If you take it offline, it can’t sign certificates. Using an offline root certification authority in a multi-CA PKI resolves the dilemma with minimal side effects.

PKI always carries some risk of private key compromise. However, there are two problems specific to the root authority:

  • If the root is compromised, every certificate that carries its signature is untrustworthy. That includes every certification authority in the PKI. Every single one, and therefore every certificate they issued, becomes invalid.
  • The root certification authority uses a self-signed certificate. No CA database contains it and no one has the authority to revoke it. Even if the CA could revoke its own certificate, it would no longer be trusted to sign the CRL, thereby invalidating its own invalidation. In simpler terms, the root CA’s certificate cannot legitimately appear in any CRL, therefore it cannot be revoked.

As previously mentioned, most PKI certificates say: “I am who I say I am and the certification authority will vouch for me.” The root certification authority says: “I am who I am because I say so.” The only certain thing protecting the root CA from compromise is its validity period. Therefore, stronger steps must be taken to safeguard the root CA’s private key.

11.2 Offline CA Creation Process

I will publish an article containing a complete walk-through on building a standalone and subordinate certification authority set. The essential steps:

  1. Create a key pair.
  2. Create a self-signed certificate from that pair.
  3. Create another key pair.
  4. Use the private key from step 1 to issue a certificate for the new pair. Ensure the certificate is authorized to act as a certification authority and has information to reach a CRL.

The key pair and certificate in steps 1 and 2 represent the root certification authority. The key pair and certificate in steps 3 and 4 represent an intermediate (subordinate) authority. The CRL information on the subordinate CA’s certificate (in step 4) points to a CRL created by the root CA.

  5. Generate a CRL from the root CA. Publish it at the location specified in the intermediate CA’s information.
  6. Take the root CA offline.
  7. At regular intervals, bring the root CA “online” (not necessarily reachable) and update the CRL.
  8. Perform standard certificate issuance and revocation operations with the intermediate CA.

The CRLs for both authorities must be kept online and reachable at all times.

Important: The CRL information on a certificate always refers to the CRL of that certificate’s issuer. That seems straightforward enough on endpoint certificates. However, it can get confusing for CA certificates. If it helps your memory, a certificate contains a reference to the CRL that might list it. Therefore, a well-formed root CA certificate will not contain CRL information because no CRL could ever contain a root certificate. Each intermediate CA’s certificate will contain CRL information for its parent CA, not for itself.
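
Until that walk-through arrives, here is a condensed, hedged sketch of steps 1 through 4 using the pyca/cryptography package. All names and the CRL URL are hypothetical placeholders, and a real deployment would persist the keys and certificates rather than keep them in memory.

```python
# Steps 1-2: root key pair and self-signed root certificate (no CRL info).
# Steps 3-4: intermediate key pair and certificate signed by the root's
# private key, marked as a CA and pointing at the root's CRL.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

now = datetime.datetime.utcnow()

root_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
root_cert = (x509.CertificateBuilder()
             .subject_name(name(u"Example Root CA"))
             .issuer_name(name(u"Example Root CA"))   # self-signed
             .public_key(root_key.public_key())
             .serial_number(x509.random_serial_number())
             .not_valid_before(now)
             .not_valid_after(now + datetime.timedelta(days=3650))
             .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                            critical=True)
             .sign(root_key, hashes.SHA256()))

inter_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
crl_url = u"http://pki.example.com/root.crl"  # hypothetical CRL location
inter_cert = (x509.CertificateBuilder()
              .subject_name(name(u"Example Intermediate CA"))
              .issuer_name(root_cert.subject)
              .public_key(inter_key.public_key())
              .serial_number(x509.random_serial_number())
              .not_valid_before(now)
              .not_valid_after(now + datetime.timedelta(days=1825))
              .add_extension(x509.BasicConstraints(ca=True, path_length=0),
                             critical=True)
              .add_extension(x509.CRLDistributionPoints([
                  x509.DistributionPoint(
                      full_name=[x509.UniformResourceIdentifier(crl_url)],
                      relative_name=None, reasons=None, crl_issuer=None)]),
                  critical=False)
              .sign(root_key, hashes.SHA256()))  # the root vouches for it
```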

11.3 An Offline CA Without a CRL

You can create an offline CA without a CRL. You simply issue the intermediate CA certificate(s) with no CRL distribution information. If the subordinate’s certificate contains no CRL information, then it will be trusted until it expires. However, doing so is only marginally more secure than just using a single online root CA. If the subordinate CA’s private key ever becomes compromised, then, even though the root CA has the power to revoke it, no one will know how to reach a CRL to find out. Removing the intermediate CA’s CRL would help to remove trust in the certificates that it issued, but not every process checks a CRL and most will ignore a missing CRL.

I understand the temptation of creating an offline root CA without generating a CRL. You would not need to maintain anything. You could even delete the root CA’s files. However, you gain almost no security.

12. What IS a Certification Authority?

I think this article presented a clear idea of the concept of a certification authority and the function of a certification authority. I don’t believe that it cleared up all the mystique of a certification authority, though.

When you’re running the wizard in Windows Server to set up a CA, it throws up all kinds of warnings about the permanence of the computer name and domain membership. I believe that gives the impression that a CA is a dreadful, magical beast. But, a simpler explanation underlies all those warnings. When you make a domain member into a CA, the wizard builds a lot of scaffolding around it in the directory. Microsoft could have fashioned a bunch of brittle triggers and conditional checks and resolution steps on the system’s name and domain membership status. Or, they could just tell you never to change either and (correctly) blame any problems on a failure to comply. Whatever impression that leaves on the unwary, the CA truly has a very simple structure.

A functional PKI certification authority must contain these things:

  1. A public and private key pair. It uses the private key to sign things and the public key to prove that it signed things.
  2. A certificate that it or a parent CA signed.
  3. A list of issued certificates.

That’s all. Implementations vary, of course. For instance, OpenSSL depends on a couple of files to tell it what the next numbered certificate and CRL will carry. However, all CAs need the three listed components. If you want to take a CA offline but only keep the most important parts, you can’t get by without those. Since #2 can be freely distributed to anyone, only #1 and #3 require security.

Be aware that I don’t know how to fully regenerate a failed Windows CA using only those components. When you use the Windows wizard to build a CA, it allows you to re-use a previously existing private key. I don’t know of a way to make it use a previous certificate database (feel free to use the comment form if you do know). If you want to use Windows Server as your offline root CA, I recommend that you take the entire Windows Server installation offline and keep it safe. Personally, I use something else entirely… details in another post.

12.1 Does a Certification Authority Require a CRL?

We talked about this only a few sections ago, but I focused on security concerns there. Structurally, a CA does not require a CRL. Even more technically, a CA does not include a CRL at all. A CA includes a list of certificates that it issued; its revocation process marks a certificate as revoked on that list. An external tool generates a CRL by making a sublist of only the revoked certificates on the master and having the CA sign the resulting list.
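
As a sketch of what such an external tool does, here is a hedged example with the pyca/cryptography package; the issuer name, key, and serial number are illustrative, and a real CA would sign with its actual stored private key rather than a freshly generated one.

```python
# Build a CRL from the revoked entries and have the CA's key sign it.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Root CA")])
now = datetime.datetime.utcnow()

# One entry per revoked certificate, identified by its serial number.
revoked = (x509.RevokedCertificateBuilder()
           .serial_number(12345)
           .revocation_date(now)
           .build())

crl = (x509.CertificateRevocationListBuilder()
       .issuer_name(issuer)
       .last_update(now)
       .next_update(now + datetime.timedelta(days=7))  # republish before this
       .add_revoked_certificate(revoked)
       .sign(ca_key, hashes.SHA256()))
```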

Do go back and read the part about dramatically reduced security when a CA does not publish revocation information if you missed it. Please. It really does matter.

13. The Dangers of Self-Signed Certificates

As more technology requires certificates (e.g., Windows Admin Center), I see more requests for information on creating self-signed certificates. STOP USING SELF-SIGNED CERTIFICATES. People say, “Oh, it’s OK, I only use them in my test lab.” How can you call it a “test” lab if you do things that you would never do in production? What exactly do you believe you are testing? Great, process “alpha” works with self-signed certificates, which the manufacturer probably already stated… so what does that prove? How can you apply that knowledge when you transition to production? A proper test environment duplicates a production environment to the greatest possible extent.

Self-signed certificates represent an overt danger if used in production. Above, you saw how we have no choice but to use them for root CAs, so we go to great lengths to protect them. Think about those problems in the context of a self-signed endpoint:

  • No one can revoke a self-signed certificate. If you lost control of the private key, data encrypted by an attacker would forever be just as good as data encrypted by you. Data meant for you could always be encrypted with your public key and read by your attacker.
  • Why should anyone trust a self-signed certificate, even members of your own organization? Literally, anyone can create a certificate and stick your name on it.
  • You can’t lock up the private key. You’re actively using it. And, you imported its public key on at least one other machine which now blindly trusts it. That does not qualify as “secure” in any sense of the word.
  • A compromised root CA would be terrible, but tearing down the entire PKI would at least address the problem. You can use centralized tools to remove the compromised CA from Windows’ trust lists. If you manually imported a self-signed certificate, you will have to manually hunt it down.

I understand that self-signed certificates seem easy. But, you can almost as easily learn PKI. You can quickly set up and configure PKI. Windows Server PKI offers auto-enroll and auto-renew, so a tiny bit of early pain saves all sorts of ongoing effort. You will never regret adding “PKI” to your skills list. Do the right thing, not the expedient thing.

14. Going Further with PKI

I know that the topic of public key infrastructure can seem daunting, but administrators can no longer afford to ignore it. I am appalled by the proliferation of self-signed certificates, especially when it takes such little effort to build a fully functional PKI. If you don’t know how to do that, watch this space for a forthcoming how-to. If you can’t wait that long, head on over to the Altaro Dojo Forums and start a discussion. I’m actively part of that community and answering questions on a daily basis.

Resolve an Outlook outage when using Office 365

Many organizations have made the move to Exchange Online, but the switch to this cloud service can present new challenges to administrators trying to troubleshoot an Outlook outage.

When a user files a ticket about Microsoft Outlook not working, the clock is ticking for the help desk to track down the cause. It’s challenging enough when this type of issue occurs with on-premises Exchange, but when your organization uses Exchange Online for email, it’s imperative to know how to check on the outage efficiently.

The problem could be localized to one user’s mail profile, but oftentimes it’s an issue that affects multiple users. The support person needs to move fast to find out whether this affects a subset of email users or the entire enterprise.

How to verify or report the outage

Outlook outage information
Admins can check for Outlook outage information in the Office 365 admin center.

Start by verifying that there’s an outage report from Microsoft. Log into the Office 365 admin center and check the advisories under the Service health section.
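
If you prefer to script that check, here is a hedged sketch against Microsoft Graph’s service-announcement API. It assumes an Azure AD app registration with the ServiceHealth.Read.All application permission and a token acquired elsewhere via the client-credentials flow; endpoint and field names may vary by API version.

```python
# Query current Office 365 service health overviews from Microsoft Graph.
import requests

TOKEN = "<access token acquired via client-credentials flow>"  # hypothetical
url = "https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/healthOverviews"

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for service in resp.json().get("value", []):
    # "status" is e.g. serviceOperational or serviceDegradation.
    print(f'{service["service"]}: {service["status"]}')
```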

If the advisories do not describe the Outlook issue at hand, send a report to Microsoft. This helps track the affected tenants and ensures Microsoft sends a notification when the incident is resolved. Submit the report from the New service request section in the Office 365 admin center under Support.

The New service request button
Admins can report a new service incident using the Office 365 console.

For companies with a Microsoft Premier Support contract, admins can open a Sev A call for critical issues for a quick resolution. The Technical Account Manager automatically gets added to the call.

Other ways to analyze an Outlook outage

Administrators have a few other avenues to investigate an Outlook outage while awaiting word from Microsoft.

Microsoft adds new features and integrations to its Office 365 applications routinely, so it can be helpful to check the product roadmap to see if there are any clues there about anything new that might affect the tenant. It’s helpful for admins to check this site on a regular basis to help them identify problems or track what’s coming, what’s been released and what’s been canceled.

Check the Outlook version

Microsoft might have different release cycles for a particular version of Office. Knowing whether Microsoft issued an update that affects some or all of the users can lead to a quicker resolution.

Check the Outlook version to see the version and build, then see if the affected users have the same client.

See if there are any clues in the Message center

Check the Message center in the Office 365 admin console to see which features and updates Microsoft released to the tenant, which could uncover a reason for the Outlook outage.

The Office 365 Message center
Check the Message center in the Office 365 admin center for notices about new releases to the tenant.

It can also be helpful to connect with other experts in the Office 365 community and the Microsoft product blogs and forums to see if there are similar reports to aid in these troubleshooting efforts.

Challenge accepted—MARLÖ competition among conference highlights – Microsoft Research


With the latest Project Malmo competition, we’re calling on researchers and engineers to test the limits of their thinking as it pertains to artificial intelligence, particularly multi-task, multi-agent reinforcement learning. Last week, a group of attendees at the 14th Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE’18) participated in a one-day workshop featuring the competition, exchanging ideas on the unique challenges of the research area with some of the field’s leading minds.

Learning to Play: The Multi-Agent Reinforcement Learning in MalmÖ (MARLÖ) Competition requires participants to design learning agents capable of collaborating with or competing against other agents to complete tasks of varying difficulty across 3D games. It is the second competition affiliated with Project Malmo, an open-ended platform designed for experimentation in artificial intelligence. Last year’s Malmo Collaborative AI Challenge yielded a diversity and creativity of approach that exceeded expectations, and we look forward to the same this time around.
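
For a flavor of what entrants work with, here is a hedged sketch based on the competition’s marlo Python starter kit, assuming a local Minecraft client listening on port 10000 (provided by the kit); the exact package API may differ from this outline.

```python
# Connect a single agent to a MARLÖ game and run one episode with random
# actions. marlo wraps the environments in a gym-style interface.
import marlo

client_pool = [('127.0.0.1', 10000)]
join_tokens = marlo.make('MarLo-MobchaseTrain1-v0',
                         params={'client_pool': client_pool})
env = marlo.init(join_tokens[0])  # one join token per agent in the game

observation = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # replace with a learned policy
    observation, reward, done, info = env.step(action)
env.close()
```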

The competition, co-hosted by Microsoft, Queen Mary University of London, and CrowdAI, is open to participants worldwide through December 31 (submit your entries here).

Sam Devlin, a game AI researcher from the Machine Intelligence and Perception group at Microsoft Research Cambridge, organized the MARLÖ AIIDE 2018 Workshop in collaboration with our academic partners Diego Perez-Liebana of Queen Mary and Sharada Mohanty of École Polytechnique Fédérale de Lausanne, Switzerland.

The workshop included a short tutorial on MARLÖ that let attendees experiment with competition agents, as well as keynote addresses from two distinguished speakers. There was also a series of short contributed talks and a panel session to encourage attendees to share ideas around the application of reinforcement learning in modern commercial video games.

Jesse Cluff, Principal Engineering Lead, The Coalition

The first keynote speaker was Jesse Cluff, Principal Engineering Lead with The Coalition. Jesse has more than 20 years of experience in the industry, working on many exciting game titles, including Jackie Chan Stuntmaster, The Simpsons: Hit & Run, Bully, and Gears of War 4. During the workshop, he explored two aspects of game AI: the hardware side, discussing how to run programs in real time with limited resources, and the emotional side, discussing how to maximize players’ enjoyment while controlling difficulty. He also talked about how AI techniques are actually used in commercial game products and about the challenges he faces that still need further research.

Martin Schmid, Research Scientist, DeepMind

Martin Schmid, a research scientist with DeepMind, was the second keynote speaker. He is the lead author of DeepStack, the first computer program to outplay human professionals at heads-up no-limit Texas Hold’em poker, and he spoke about the program as an example of how successful AI methods used in complex games of perfect information like Go can advance AI application in imperfect-information games like poker. The work has huge practical significance since we regularly have to deal with imperfect information in the real world. These two keynotes were inspiring for faculty, researchers, and graduate students in attendance.

From left: Mobchase, Buildbattle, and Treasurehunt

The workshop also featured the MARLÖ competition’s kickoff tournament. Agents of the participating teams competed in a round robin to achieve the highest scores across three different games—Mobchase, Buildbattle, and Treasurehunt. At the end of the day, we announced the rankings of the enrolled teams. The top three eligible teams will each be presented with the Progress Award, a travel grant worth up to $2,500 for use toward a relevant conference at which they can publish their competition results. The MARLÖ competition is open until December 31, after which the final tournament will be held offline. We hope to see more participants join.

Microsoft acquisition of FSLogix to help push cloud desktops

The Microsoft acquisition of FSLogix gives organizations another reason to consider the company’s Windows Virtual Desktop offering.

Microsoft this week acquired FSLogix, an application provisioning and performance management vendor in Suwanee, Ga., to improve the Office 365 user experience on virtual desktops. The move could help Microsoft compete with cloud desktop offerings, such as Amazon WorkSpaces, and attract more organizations to Windows Virtual Desktop (WVD).

“Microsoft recognizes FSLogix as a company that can help provide that more predictable experience and performance to stand out among the technologies that host Windows and Office 365,” said Mark Bowker, analyst at Enterprise Strategy Group in Milford, Mass.

Windows Virtual Desktop, introduced at Microsoft Ignite in September, enables IT to run virtualized Windows 10 on Azure. FSLogix’s technology will allow for faster user profile load times in Office 365 ProPlus on Windows virtual desktops, particularly within the Outlook email application and OneDrive file-sharing service, according to a Microsoft acquisition blog post.

“User profiles for Office 365 have long been a frustration for users and administrators of VDI and virtual app environments,” said Jo Harder, a cloud architect at a hosting provider and analyst at The Virtualization Practice. “FSLogix is able to successfully address the user experience gaps … and this will become even more important as Microsoft brings Windows Virtual Desktop to life.”

Microsoft did not update the status of Windows Virtual Desktop, however, which it had pledged to release into preview by the end of the year. Some customers will want the latest Microsoft acquisition to help improve performance for on-premises deployments of Windows and Office, but that does not appear to be Microsoft’s objective.

“[The acquisition] is really focused on the cloud deployment aspect of it,” Bowker said. “This is about providing the best experience for Office 365.”

FSLogix addresses Office 365 virtual desktop performance

FSLogix Apps, the company’s flagship tool, helps IT simplify app provisioning with a number of features. FSLogix Profile Container isolates and stores user profiles in containers, so IT can manage profiles independently and deliver more resources to users. Office 365 Container extends that capability to Office 365 profiles, and Cloud Cache stores profile containers on premises or in the cloud.
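
To make that concrete: Profile Container is driven by a handful of registry values on each session host, the two essential ones being an enable flag and the share that holds the per-user profile disks. The sketch below sets them with Python’s winreg, assuming the documented HKLM\SOFTWARE\FSLogix\Profiles key and a placeholder share path; in practice, most shops push these values through Group Policy rather than scripting them directly.

```python
import winreg

# FSLogix Profile Container reads its settings from this key on each
# session host. \\fileserver\profiles below is a placeholder share path.
KEY_PATH = r"SOFTWARE\FSLogix\Profiles"

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE
)
# Enabled = 1 turns on profile containers for sessions on this host.
winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
# VHDLocations points at the SMB share holding the per-user profile disks.
winreg.SetValueEx(key, "VHDLocations", 0, winreg.REG_SZ, r"\\fileserver\profiles")
winreg.CloseKey(key)
```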

The Microsoft acquisition of FSLogix will help with Office 365 performance issues in particular, experts said.

“The problems in [end-user computing] grow bigger each and every day because of the way Office 365 works,” said Trond Eirik Haavarstein, a Citrix and Microsoft technology blogger and trainer based in Brazil, in an email. “[It] seems like the latest FSLogix Cloud Cache and Microsoft WVD was what finally got Microsoft to buy FSLogix.”

Some customers run into connection problems or slow load times in Office 365, for example, Bowker said.

“If a customer starts to feel hesitation or speed bumps along an Office 365 deployment, it’s going to slow their adoption of cloud-connected Office 365 in some cases,” he said. “A company may take a closer look at something like Google G Suite.”

Office 365 can be especially problematic in nonpersistent virtual desktop deployments. Searching the Outlook inbox, for example, can hit snags because the client stores the search index on a device-by-device basis; if a user logs into a virtual desktop from different devices, the search index will not follow them. Office 365 customers have also experienced caching problems.

To iron out those issues, organizations may look to third-party tools, but this introduces additional costs and complexities, said Sacha Thomet, system engineer at Die Mobiliar, a large insurance company in Bern, Switzerland.

“A lot of customers … have maybe a bad user experience after [moving] to Office 365 without any third-party products, especially combined with virtual desktops and apps,” he said in an email.

Plus, an app that runs on the internet can be unpredictable, so IT departments welcome any additional control over cloud app performance, Bowker said.

Microsoft did not disclose the price of the acquisition and declined to comment for this article.

Executive editor Alyssa Provazza and assistant site editor John Powers contributed to this report.

For Sale – HP ProLiant Gen 8 G1610T SERVER 4GB Ram or 16GB Ram No HDD + Custom Gaming PC Omen X

HP Gen8 Microserver – NOW SOLD

Corsair DDR3 ECC Ram @ £100

HP Omen X Custom PC
I have also decided to sell my new Plex box after managing to acquire a second HP Omen X case; I’m looking to do a Threadripper build, which leaves my newly built machine up for sale. The specs are as follows.

Case = HP Omen X
Motherboard = MSI Z270M Mortar (have the box/drivers dvd/manual)
CPU = Intel Pentium G4600
PSU = Corsair RM450 (also have the box for this)
Ram = 32GB (2x16GB sticks) DDR4 2133MHz
GPU = MSI RX 550 Aero 4GB (also have the box for this)
M.2 = Samsung 970 Evo (super-fast NVMe) for the boot disk
DVD = Can’t remember the model of the DVD drive but will update when I get round to checking it out

Space for 4x 3.5″ hard drives, each with its own drive sled for easy access when you upgrade or add drives as yours fill up. My original plan was to run this as an Unraid server and pass through the GPU so I could run Windows in a VM, which works very well, by the way. I’m really only selling due to getting hold of a second HP Omen X case and really wanting to build a Threadripper machine for the same job. Since I didn’t plan on selling this one, I didn’t keep all the boxes, but the CPU, motherboard, GPU and M.2 are all brand-new parts. The RAM was reclaimed from a new Dell unit that I upgraded. The case was purchased used and came with the DVD drive preinstalled.

I am looking for £900 or near offer, as it is essentially a brand-new machine that has run for less than 30 days as an Unraid server. I can install Windows 10 for the buyer if requested but will not be including a license code. It will also need to be collected due to the weight. I will get some pics uploaded later today.

Price and currency: 16GB DDR3 ECC Ram £100 / HP Omen X Custom PC £900 ono
Delivery: Goods must be exchanged in person
Payment method: Cash on Collection / Bank Transfer
Location: Salford
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected
