Portworx update targets hybrid and multi-cloud deployments

Portworx is adding the ability to move container-based application data across clouds with an update to its storage software for Kubernetes.

Portworx PX-Enterprise 2.0 now includes PX-Motion migration and a PX-Central management console. These features build on the startup’s original focus on helping customers provision persistent storage for stateful container-based applications.

Portworx CTO Gou Rao said the PX-Enterprise update responds to the growing trend of DevOps teams moving to Linux containers, rather than virtual machines, to manage and run their application infrastructure across clouds, whether on premises or in public clouds.

New PX-Motion feature

PX-Motion enables Kubernetes users to shift applications, data and configuration information between clusters. Customers can move the data or Kubernetes objects that control an application on demand. They can also set the product to automate workflows, such as backup and recovery and blue-green deployments for stateful applications. Users run two identical production environments in a blue-green deployment, with only one live at any time, to reduce the risk of downtime.

“Once a customer enables the PX-Motion functionality, they just go back to managing their applications through Kubernetes,” Rao said.

Migrating data between clouds could be hard with monolithic applications running in a machine-centric world, where any change might require moving a virtual machine image from one cloud to another, Rao said. By contrast, applications packaged in containers are distributed and service-oriented, with a smaller footprint, paving the way for Portworx features such as PX-Motion, he said.

“It’s not like a customer in the enterprise has one big, single, large Kubernetes cluster. What we have seen happening in the past couple of years is that people are typically running multiple Kubernetes clusters for a variety of reasons,” including compliance, risk mitigation or cost, Rao said. One common use case is running multiple clusters in the same public or private cloud and treating them as distinct cloud footprints, he added.

Portworx PX-Motion migrates container-based applications, data and configuration information as part of a DevOps-style blue-green deployment.

PX-Central management console

Portworx added PX-Central to facilitate single-pane-of-glass management, monitoring and metadata services across Portworx clusters built on the Kubernetes container scheduler and orchestration engine. Users can visualize and control an ongoing migration at an application level through custom policies.

Rhett Dillingham, a senior analyst at Moor Insights & Strategy, said Kubernetes users often struggle with storage availability and data management within and across clusters, particularly when they run stateful applications, such as databases.

He said Portworx took an innovative approach to provide high-availability storage in the first version of PX-Enterprise. Now, it’s tackling data management to help enterprises scale Kubernetes deployments across clusters, development pipeline environments and multi-cloud environments.

Portworx’s competitors in cloud-native storage for Kubernetes include startup StorageOS and open source OpenEBS. But Dillingham said many organizations aren’t yet using any tools to solve the problems Portworx addresses.

“They just use the cloud block storage service from their cloud infrastructure platform,” Dillingham said. “But, as customers grow their use of Kubernetes to address more applications across more environments, they often recognize they need more storage platform capability than what they can get from the cloud infrastructure services.”

Portworx customer Fractal Industries started using Portworx PX-Enterprise to scale its machine-learning- and AI-based analytics-as-a-service product that ingests, integrates and correlates data in real time. The Reston, Va., startup uses stateful and stateless container-based services to power its Fractal OS platform and had trouble finding a good option to help with the stateful services. Fractal tried the open source REX-Ray container storage orchestration engine, mounted volumes, local volumes and other technologies before settling on Portworx.

Sunil Pentapati, director of platform operations at Fractal, said PX-Enterprise not only helps Fractal with scaling storage volumes and backup-and-restore functionality, but it also provides a common data management layer and real-time automation capabilities. He said the product includes APIs that minimize the manual effort Fractal needs to expend. The new Portworx PX-Enterprise 2.0 update will be important, as Fractal pursues its long-term goals related to automation, Pentapati noted.

“What is very important for us as a company is cloud independence,” Pentapati said. “The way we built our product is to abstract everything in such a way that we could scale our product across cloud providers or where the customer wants. For us to achieve that independence, it is not enough just to tackle the compute layer. We also need to extend the automation all the way to the data layer, too. Having the ability to scale our automation and operate across clouds is what Portworx brings to the table.”

Subscription-based pricing for Portworx PX-Enterprise is $1,500 per virtual machine, per year. PX-Motion and PX-Central are new features bundled with PX-Enterprise at no extra cost to the customer.

PX-Enterprise integrates with Kubernetes distributions and container orchestration systems, such as Amazon Elastic Container Service for Kubernetes, Microsoft Azure Kubernetes Service, Docker Enterprise Edition, Google Kubernetes Engine, Heptio, IBM Cloud Kubernetes Service, Mesosphere DC/OS, Pivotal Container Service for Kubernetes, Rancher and Red Hat OpenShift.


How to Make an Offline Root Certificate Authority for Windows PKI in WSL

In a previous article, I talked about the concepts involved in PKI. In this article, I want to show you how to build your own PKI. I will mostly write this as a how-to, on the assumption that you read the previous article or already have equivalent knowledge. I will take a novel approach of implementing the root certification authority in Windows Subsystem for Linux. That allows you to enjoy the benefits of an offline root CA with almost none of the drawbacks.

Why Use Windows Subsystem for Linux as a Root Certification Authority?

Traditionally, building a Windows PKI with an offline CA involves Windows Server systems for all roles. Most Windows administrators find it easy — you only need to know the one operating system. I have seen some that employ a standard Linux system as the offline root. Of course, you don’t necessarily need a Windows system at all.

I use Windows Subsystem for Linux to create an offline root CA and use a Windows system for an online intermediate (subordinate) CA for these reasons:

  • Licensing: If you use a Windows Server system as the offline root, it consumes a license (physical installation) or a virtualization right (virtual installation). Since the offline root CA should be kept offline for nearly the entirety of its existence, that’s a waste.
  • Built-in components: I won’t show you anything in this article that you can’t find natively in Windows/Windows Server. You could certainly go through all the effort to obtain and install a full Linux installation to use as the root CA, or download and install OpenSSL for Windows to mimic what I’m doing with WSL.
  • Setup ease: You can bring up the WSL instance with almost no effort. Compare that to the alternatives.
  • Active Directory integration: You can certainly create a full PKI without using Windows Server at all. You lose a lot, though. You can get a lot of mileage out of features such as auto-enrollment.

Using WSL for the offline root allows us to protect it easily. Using Windows Server as the intermediate allows us maximal benefits.

There are quite a few parts to this configuration, so keep track of your progress as you work through the sections that follow.

Prerequisites for a WSL/Windows Server Certification Pair

You’ll need a few things to make this work:

  • Credentials for a domain account that belongs to the Enterprise Admins security group
  • An installation of Windows Server (2016 used in this article; anything 2012 or later should suffice) to use as the online intermediate (subordinate) certificate server
    • Domain-joined
    • Machine name set (you cannot change it afterward)
    • IP set
  • A Windows system capable of running Windows Subsystem for Linux (Windows 10 used in this article; a current iteration of Windows Server, Semi-Annual Channel or Windows Server LTSC version 2019+ should suffice). This article is written with the expectation that you will use a physical Windows 10 system and store vital information on a removable USB device.
    • Any installation of Linux with OpenSSL, virtual or physical, would suffice as an alternative
    • OpenSSL on a Windows installation would also suffice
  • A folder on the Windows system where files can be transferred to and from the WSL environment. Pick something easy to type (I used D:\CA in this article)
  • A DNS name where you will publish the root CA’s certificate and certificate revocation list (CRL). It must be reachable by the systems and devices that will treat your CA as authoritative. This DNS name becomes a permanent part of the issued certificates, so choose wisely.

If you want, you can use the same Windows Server system to host both the online intermediate CA and the offline root. You only need to ensure that the Windows Server version can run WSL. Ordinarily, running both on the same system would be a bad idea; in this case, though, the root CA will not “exist” long enough to cause concern.

Phase 1: Prepare the Subordinate CA on the Windows Server

Counter-intuitively, we will start with the subordinate certification authority. It will reduce the amount of file shuffling necessary. Because this setup is a simple procession of wizard screens, I did not include every screen.

Install the Subordinate CA Role

We begin in Server Manager. I have not done this in PowerShell, although it is possible.

  1. Start the Add roles and features wizard from within Server Manager.
  2. Select Active Directory Certificate Services.
  3. You will be prompted to add the management features. You can skip this if you only want the Certification Authority role to exist on the target server.
  4. By default, the wizard will only want to install the Certification Authority role. We will install and configure other roles later. You can select them now, but you will be unable to configure them. I recommend taking only the default at this time.

Complete the wizard and wait for the installation to finish.

Perform Initial CA Configuration

Once the wizard completes, you have the role installed but you cannot use it. Configure the subordinate CA role.

  1. In Server Manager, click the notification flag with the yellow triangle at the top right of the window, then click Configure Active Directory Certificate Services on the destination server.
  2. In the wizard, choose the enterprise admin account selected for this procedure.
  3. Choose to configure the Certification Authority only. If you selected other roles during installation, do not attempt to configure them at this time.
  4. Choose to configure the CA as an Enterprise CA (you can select Standalone CA if you prefer; you will miss out on most Active Directory features, however).
  5. Select the Subordinate CA type.
  6. Choose to Create a new private key.
    Note 1: If you were re-installing a previously existing CA, you would work through the Use existing private key branch instead.
    Note 2: If you’d like to use OpenSSL to create the key set instead, you’ll find instructions further down. You would do that if you wanted more information to appear on the CA’s certificate than the Microsoft wizard allows, such as organization and location information. However, you will need to pause here, perform the WSL steps, and then return to this point.
  7. The default Cryptography selections should suffice for most installations. Reducing any settings could cause compatibility problems; increasing them might cause compatibility and performance problems. Do not check Allow administrator interaction when the private key is accessed by the CA; that will cause credential prompts where you don’t want them.
  8. Choose an appropriate name for the CA’s common name. The default should work well enough, although I tend to choose a friendlier name. I recommend that you avoid spaces; they can be used, but they will cause problems in some places. You can also change the suffix, if necessary for your usage.
  9. Choose to Save a certificate request to file on the target machine. You can make the filename friendlier if you like. Make sure that you keep track of the name, though, because you’ll enter it in a future step.
  10. Proceed through the remainder of the wizard.
  10. Proceed through the remainder of the wizard.

You have now completed the initial configuration of the subordinate certificate authority. Now we turn to the WSL system to set up your offline root CA.

Enable Windows Subsystem for Linux and Choose Your Distribution

You can skip this entire section if you are bringing your own method for running OpenSSL.

Enabling Windows Subsystem for Linux is slightly different depending on whether you are using a desktop or server operating system.

Enable WSL on Windows 10

As of this writing, Windows 10 provides the simplest way to install WSL. If you prefer a server SKU, skip to the next sub-section.

  1. Use Cortana to search for Turn Windows features on or off (or enough of a subset for the following to be found), then open the link.
  2. Choose Windows Subsystem for Linux.
  3. Once that’s complete, use the Microsoft Store to find and install the Linux distribution of your choice. I prefer Kali, but any should do (you can find starting instructions for installing a non-Store distribution on Microsoft’s blog).

Start the Kali image and follow its prompts to set up your user and password.

Enable WSL on SAC or Windows Server LTSC 2019 or Later

Be aware that not all Linux distributions will work on non-GUI servers (in any way that I know of) because they do not include a .exe launcher. For example, Kali only comes as appx and I could not get it to work. Ubuntu comes as exe and should not pose problems.

  1. In PowerShell, run  Install-WindowsFeature -Name Microsoft-Windows-Subsystem-Linux
  2. Restart the Windows Server computer.
  3. Access Microsoft’s list of Linux distributions. You have two purposes on this site: selecting your desired distribution and getting its URL. The page includes its own instructions, which do not meaningfully differ from mine.
  4. Download the selected distribution:  Invoke-WebRequest -Uri https://aka.ms/wsl-kali-linux -OutFile $env:USERPROFILE\Downloads\Kali.zip -UseBasicParsing
  5. Create a folder to run the distribution from:  mkdir C:\Distros\Kali
  6. Extract the distribution:  Expand-Archive -Path $env:USERPROFILE\Downloads\Kali.zip -DestinationPath C:\Distros\Kali
  7. Check the directory listing to see your options:
    1. For .exe, just run it directly:  C:\Distros\Ubuntu\ubuntu1804.exe
    2. For .appx, use Add-AppxPackage first (desktop experience only, apparently), then run your distribution from the Start menu:  Add-AppxPackage -Path C:\Distros\Kali\DistroLauncher-Appx_1.1.4.0_x64.appx

Follow the prompts to set up your user account and password.

Windows Subsystem for Linux Storage

This article is not about WSL. If you want to know more than I’m showing you, then you’ll need to research it elsewhere. However, we’re dealing with a root certificate authority, and security is vital. So I want to make one thing clear: be aware that the files for your WSL installation will be held underneath your user profile (C:\Users\your-user-account\AppData\Local\Packages\thedistroname). If you follow my directions, you will not permanently leave sensitive information in this location. If you skip that part, then you must maintain significant security over that location.

As mentioned in the intro, I will have you use a USB device on your physical system running WSL. WSL can see your Windows file system at /mnt/DRIVELETTER. As an example, your C: drive is /mnt/c, your D: drive is /mnt/d, etc.

Before proceeding, pick a folder to use to transfer files back and forth between WSL and your Windows installation. I am using “D:\CA” (/mnt/d/CA). Copy in the CSR (the .req file) created by the wizard at the end of the Windows section above.

Creating a Root Certification Authority in Windows Subsystem for Linux

I have used Kali in WSL for all of these steps. Instructions should be the same, or at least similar, for other distributions. If you use a full installation of Linux rather than WSL, you must make modifications, primarily in transferring files between Windows and Linux. Your mileage may vary.

Configuring OpenSSL

Like most Linux programs, OpenSSL depends on a configuration file. I will include one, but OpenSSL provides a default file that you can modify. If you wish to copy that default file out to Windows for editing in something like Notepad++, I will provide the exact point at which you will perform that step. If you wish to use mine as your template, place it in the Windows-side folder that you’re using for transfer (D:\CA on my system).

To keep the file short, I used only a few comments. Notes on the configuration points appear after the listing. Check each line before using this file yourself. I believe that you will only need to modify the four items at the top, but you may disagree. I have trimmed away all items that I considered non-essential for the task at hand. You will be unable to use this file for OpenSSL operations unrelated to certification authorities and x509 certificates.

OpenSSL Configuration Notes

A few notes on the decisions that I made in this file:

  • I included points where you can configure OCSP for the root CA if you wish, but I commented them out. Since your root CA will only sign one certificate, I see no justification for OCSP.
  • You will want to change the names to match your own; the scripts that I’m about to show you expect the same names, so take care that they align.
  • I set the CRL to expire after 210 days (about 7 months). You will need to regenerate and redeploy the CRL within that time or clients will cease trusting the subordinate CA, and therefore all certificates it has signed. CRL regeneration instructions are provided later.
  • It is possible to sign the subordinate CA without including CRL information. That will alleviate the need to periodically redeploy the CRL. However, it will also eliminate most of the benefit of using an offline CA. One major reason to keep the root offline is because if it is ever compromised, no authority can revoke it. If you do not configure your subordinate CA so that it can be revoked, then it will also remain valid even if compromised.
  • Some other tutorials include generating a CRL for the root (here, in the v3_ca section). That is a wasted effort. No one is above the root, so it cannot be revoked. A CA cannot revoke its own certificate.
  • I restricted the subordinate CA with “pathlen:0”. That way, if it is ever compromised, thieves cannot use it to create further CAs. You can change or remove this restriction if you need a more complex CA structure.

You are certainly welcome to make any changes that suit your situation. Just remember that the upcoming processes have item names that must completely match what’s in your openssl.cnf.

The Process for Creating a Root Certification Authority Using openssl

On my system, I placed all of these into a single file. I copied out the blocks separately and pasted them into the WSL interface. At the end of each block, the system will prompt for input of some kind. It would be possible to pre-supply many (maybe all) of these items. I do not classify the creation of a root certification authority as a worthy purpose for full automation.

Overall instructions: paste the text from these blocks directly into WSL.

Part 1: Establish Variables

In this section, replace my named items with your own. Do not modify anything in the “composite variables” section. If you restart the WSL interface for any reason, you will need to resubmit these settings.

Note that I set the duration of the root CA’s validity to about 30 years and the subordinate CA’s validity to about 10 years. A CA cannot issue a certificate with a lifespan beyond its own. If both certificates expire at the same time, you will need to replace your entire PKI simultaneously.
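
The variable block itself might look something like the following sketch. Every name and value here is an example, not the canonical listing, so substitute your own:

```bash
# Example values only -- substitute your own names, lifetimes, and paths.
rootcaname=MyRootCA              # common name for the root CA
subcaname=MySubCA                # common name for the subordinate CA
dayslife_root=10950              # root CA validity, roughly 30 years
dayslife_sub=3650                # subordinate CA validity, roughly 10 years
dir_host_transfer=/mnt/d/CA      # Windows-side transfer folder (D:\CA)

# composite variables -- do not modify
dir_root=/ca
dir_out=$dir_root/out
```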

Interim Optional Step: Copy the Default OpenSSL into Windows for Editing

If you wish to copy out the default openssl.cnf into Windows for easy editing rather than using mine, this will help (some distributions use a different directory structure):
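
A minimal sketch, assuming the common /etc/ssl location (check /usr/lib/ssl if your distribution differs):

```bash
# Copy the distribution's default configuration out to the Windows transfer folder.
cp /etc/ssl/openssl.cnf $dir_host_transfer/openssl.cnf
```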

Part 2: Create the Environment

Note: before starting, and for the duration of this procedure, I recommend entering  sudo -s. Most everything requires sudo to run properly anyway.

This section will build up the folder structure that openssl expects without disrupting the built-in openssl setup. This is not a permanent build, so there is no need to integrate it with the distribution’s own configuration.
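
A sketch of that structure, using the Part 1 variables; the directory and file names must match the ones referenced in your openssl.cnf:

```bash
# Build the working tree that openssl's "ca" commands expect.
mkdir -p $dir_root/private $dir_root/newcerts $dir_out
chmod 700 $dir_root/private      # keep the private key directory locked down
touch $dir_root/index.txt        # the CA's certificate database
echo 01 > $dir_root/serial       # next certificate serial number
echo 01 > $dir_root/crlnumber    # next CRL number
cd $dir_root
```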

Part 3: Place Your Edited openssl.cnf File

I edited my openssl.cnf file in Windows and have written all my instructions accordingly. If you took a different approach, such as editing it within WSL in vim, then your goal in this section is to move the file into the $dir_root directory (/ca in my design):
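
Something like the following, assuming the Part 1 variables:

```bash
# Pull the edited configuration in from the Windows transfer folder.
cp $dir_host_transfer/openssl.cnf $dir_root/openssl.cnf
cd $dir_root
```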

Note: under WSL, openssl doesn’t care about Windows vs. UNIX line-endings in its cnf file. In any other configuration, make sure you have line endings set appropriately for your environment.

Part 4: Create the Root Certification Authority’s Keys

Use the following to generate new keys to use for your root certification authority:
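
A sketch of the key-generation step; the key file name follows my own convention rather than anything mandated:

```bash
# Create a 4096-bit RSA key, encrypted on disk with AES-256.
# You will be prompted for the pass phrase here.
openssl genrsa -aes256 -out private/$rootcaname.key 4096
chmod 400 private/$rootcaname.key
```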

You will be prompted to provide a password (“pass phrase”) to protect the key. Do not lose the password or the key will be irretrievable. You will also need it several times in the remaining steps of this article.

Warning: the key set you just created is the most important file in the entire procedure. If any unauthorized individual gains access to it, you should consider your root CA compromised. They will need the password to unlock it, of course, but the key cannot be considered secured.

Part 5: Self-sign the Root Certification Authority Certificate

The following will create a new certificate from the private/public key pair that you created in part 4.
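
A sketch of the self-signing step, assuming the v3_ca extension section exists in your openssl.cnf as described in the configuration notes above:

```bash
# Issue the root certificate from the key created in Part 4.
# Prompts for the key's pass phrase, then for the subject fields.
openssl req -config openssl.cnf -new -x509 -sha256 \
    -key private/$rootcaname.key \
    -extensions v3_ca -days $dayslife_root \
    -out $rootcaname.crt
```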

You will first be prompted for the password to the private key. You will also be prompted for the common name of the root certification authority. I stored that in $rootcaname, but in this particular prompt I always typed it in manually.

Part 6: Sign the Subordinate CA Certificate

In this section, you’ll copy in the certificate request file that you created for the subordinate CA back in the Windows setup steps. Then, you’ll have your new root CA sign it.

Note: if you want to use openssl to create the subordinate CA’s certificate instead of using one provided by the Windows wizard, those instructions appear after this section. Perform those steps and then continue with Part 7.
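
A sketch of the copy-and-sign step. The .req file name and the extension-section name (v3_intermediate_ca here) are assumptions; they must match what the Windows wizard produced and what your openssl.cnf defines:

```bash
# Bring the subordinate CA's request in from the Windows transfer folder.
cp $dir_host_transfer/$subcaname.req $dir_root/

# Sign it with the root CA. The signed certificate lands in newcerts/ as
# 01.pem (named for its serial number). Prompts for the pass phrase.
openssl ca -batch -config openssl.cnf -notext -md sha256 \
    -extensions v3_intermediate_ca -days $dayslife_sub \
    -in $subcaname.req
```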

You will be prompted for the password to the root CA’s private key (from Part 4). You might also see an error about “index.txt.attr”; you can ignore that message, as openssl will automatically create the necessary file with defaults.

I included -batch to suppress as much input as possible, but you will still see the certificate’s details and its base64-encoded text on-screen.

Part 7: Create the Certificate Revocation List

This step creates the certificate revocation list from the root CA. The list will be empty, but the file is absolutely necessary.
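
A sketch; the CRL’s lifetime comes from the crl_days setting in openssl.cnf (210 days in the configuration described above):

```bash
# Generate the (currently empty) certificate revocation list.
# Prompts for the root key's pass phrase.
openssl ca -config openssl.cnf -gencrl -out $rootcaname.crl
```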

You will be prompted for the password to the root CA’s private key (from Part 4).

Part 8: Separate out the Files to be Used in Windows

Certificate chains are often required. openssl can generate them easily, and you have both certificates on hand right now. You won’t find a much better time. Furthermore, several of the files that you’ll want to use in your Windows environment are scattered throughout the CA file structure. We’ll place all of these files into the same output directory (/ca/out in my design, held in $dir_out). I also rename the sub CA’s certificate name from 01.pem to a .crt file that contains its common name:
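
A sketch of the gathering step; the chain file is simply the two certificates concatenated, and the file names follow the conventions used above:

```bash
# Collect the public artifacts into $dir_out.
cp $rootcaname.crt $rootcaname.crl $dir_out/
cp newcerts/01.pem $dir_out/$subcaname.crt
# Build a sub-then-root chain file.
cat $dir_out/$subcaname.crt $rootcaname.crt > $dir_out/chain.crt
```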

Part 9: Transfer all Files to Windows

Now we’ll copy out the entire CA file structure into Windows wholesale:
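
A one-line sketch, shaped to match the restore command shown in the maintenance section later:

```bash
# Copy the whole CA structure out to the Windows transfer folder.
cp -rf $dir_root/* $dir_host_transfer/
```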

You’ll find all of the files needed for your Windows infrastructure in the out subfolder of your targeted Windows folder. Those are all safe to use and share anywhere. The rest of the folder structure should be treated as a singular unit and secured immediately. The private key for the root CA resides in private. It is imperative that you protect the .key file.

Part 10: Cleanup

You have now completed all the necessary steps in WSL. Clean up with these steps:

  1. Start by verifying each file in the out folder; Windows should be able to open all of them directly. If a file is damaged, it will not open. Be aware that the subordinate CA will not yet be able to validate the entire chain because the root does not yet appear in the local trusted root store.
  2. Place the contents of the out folder somewhere that you can access them in upcoming sections. Remember that all of these are public files; they do not need any particular security measures.
  3. Once you have verified that all output files are valid, transfer all files in your target Windows folder to a secured, offline location, such as an external USB device that can be placed into a safe. You do not need to include the out folder.
  4. In WSL, run:  rm -r $dir_root. Note: this completely removes all of the CA data from WSL!

Remember that the CA environment that you created in WSL was intended to be temporary. You can’t really “save” it anywhere, and I’ve had multiple WSL environments fail completely for no reason that I could discern. Do not expect to keep it.

If space on your secured device is at a premium, you can delete the newcerts/01.pem file. That is your sub-CA’s signed certificate, which you kept as part of the out file listing. If space is even more precious than that, the only file that you absolutely must keep is the .key file in private. With that, you can sign new certificates, revoke compromised certificates, and regenerate the CRL. You should also have the various serial files and certificate database (index.txt, serial, crlnumber, etc.), but you could reasonably approximate those if absolutely necessary. You will not be able to perfectly reproduce the root CA’s public certificate, but hopefully, you’ll have some copies of it spread around. openssl.cnf would also be difficult to reproduce, but not impossible. To avoid problems, the best thing is to retain the entire directory structure.


Optional: Use OpenSSL to Generate the Subordinate CA’s Keys and Certificate Request

You might wish to use OpenSSL to generate your subordinate CA’s keys and its CSR. The primary reason that I can think of would be control over the informational fields that the automatic CSR does not generate, such as location and organization data.

This is a text dump of the commands involved. It expects the same environmental setup that I used in Part 1 above and the folder structure from Part 2. I did not break out the separate sections this time, to ensure that you do not blindly copy/paste the entire thing. I documented where OpenSSL will interrupt you.
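
A hedged sketch of that dump, assuming the Part 1 variables and the Part 2 folder structure; the prompts are noted in comments:

```bash
# Generate the subordinate CA's key (pass phrase prompt follows).
openssl genrsa -aes256 -out private/$subcaname.key 4096

# Build the CSR; you will be prompted for country, state, locality,
# organization, and the subordinate CA's common name.
openssl req -config openssl.cnf -new -sha256 \
    -key private/$subcaname.key -out $subcaname.req

# After signing (Part 6), optionally bundle key and certificate into a
# single PKCS#12 file. Guard this file: it contains the private key.
openssl pkcs12 -export -inkey private/$subcaname.key \
    -in newcerts/01.pem -out $dir_out/$subcaname.pfx
```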

The pkcs12 output portion is optional, but it allows you to move your entire subordinate CA’s keys and certificate in a single package. Just remember to take special care of it, as it contains the CA’s private key.

Distributing the Root Certification Authority and Revocation List

It may seem counter-intuitive, but we’re not going to finish configuration of the subordinate CA yet. You could do it, but it will complain about not being able to verify the root. Things go more smoothly if it can validate against the root.

Place the Root Certificate into the Domain

I recommend distributing your root CA certificate by using group policy. If you place it into the domain policy, it will appear on all domain members at their next policy refresh. If you delete it from that policy, domain members will (usually) remove it at their next policy refresh.

I feel that the Group Policy Management Console falls under the category of “common knowledge”, so I will not give you a detailed walk-through on installing or using it. You will find it as an installable feature on Windows Server. You can read Microsoft’s official documentation.

In GPMC, follow these steps:

  1. Create a new GPO and link it to the root of the domain.
  2. Give the GPO a meaningful name.
  3. Edit the new policy. Drill down to Computer Configuration\Windows Settings\Security Settings\Public Key Policies. Right-click Trusted Root Certification Authorities and click Import.
  4. Click Next to go to the import page, where you can browse for the root CA’s certificate file.
  5. Proceed through the remainder of the wizard without changing anything.

You do not need to modify the user settings; you can disable that branch if you wish.

Configuring DNS for Root Certificate and CRL Distribution

I am going to use the same system that hosts the subordinate CA to distribute the root CA’s certificate and CRL. Remember that these files are public by nature. There is no security risk. However, you do want to have the ability to separate them later if you ever need to decommission or revoke the subordinate CA. To facilitate this, I established a fully-qualified domain name just for the root CA. I’ll have DNS point that name to the subordinate certificate server where an IIS virtual directory will respond. If the situation changes in the future, it’s just a matter of retargeting DNS to the replacement web server.

We start in DNS with a simple CNAME record that points the root CA’s name at the subordinate certificate server.

Configuring IIS for Root Certificate and CRL Distribution

My process only covers one possible option. You could simply install a standalone instance of IIS. You could use a different web server. You could use a Linux system. You could probably even get a container involved. You could employ a network load balancer. Whatever solution you employ, you only have one goal: ensure that the root certificate and CRL can be reached by any system that needs to validate the subordinate CA or a certificate that it signed.

In this article, I will piggyback off of the IIS installation enabled by the subordinate CA’s Certification Authority Web Enrollment role. It fits this purpose perfectly. In its current incarnation, this role has little more value than automatically publishing the CRT and CRL for the subordinate CA. It aligns with our goal of publishing the root CA’s CRT and CRL.

Install the Certification Authority Web Enrollment Feature

The role has not been installed by these instructions so far, so I’ll start with that.

  1. On the certificate server (or a management workstation connected to it), start the Add roles and features wizard in Server Manager. Step forward to the Roles page.
  2. Expand Active Directory Certificate Services and check Certification Authority Web Enrollment.
  3. The wizard will prompt you to install several components of IIS. Agree by clicking Add Features.
  4. Proceed through the remainder of the wizard, keeping all defaults.

Create an IIS Site to Publish the Root CA Certificate and CRL

We will configure the newly-installed role later. Right now, we want to set up the root CA’s information.

  1. In C:\inetpub, create a folder named “rootca”. Place the root certification authority’s CRT and CRL files inside it.
  2. In Internet Information Services Manager, create a new site.
  3. Configure the new site. Pay attention to the indicated areas. The Site Name is up to you. Use port 80; all of the items are digitally signed and public, so publishing them over HTTPS is counter-productive, at best. Make sure that you use the same FQDN for the Host name that you indicated in the CRL information in your openssl.cnf file.
  4. Test access to the CRL and CRT by accessing the complete URL that appears in the subordinate CA’s CRL information.
  5. Don’t forget to test the .CRT as well.

Complete Configuration of the Subordinate CA

Everything is in place for the subordinate CA. Follow these simple steps to finish up:

  1. Run  gpupdate /force to ensure that group policy publishes the root certificate to the subordinate server.
  2. Assuming Server 2016, use Cortana to find Manage computer certificates. On older servers, you’ll need to manually run MMC.EXE and add the Certificates snap-in.
  3. Make certain that the certificate appears in Trusted Root Certification Authorities.
  4. Start the Certification Authority tool. You can find it under Windows Administrative Tools.
  5. Right-click your authority, go to All Tasks, and select Install CA Certificate.
  6. Browse for any one of the subordinate CA’s certificate files that you generated into the out folder.
  7. Provided that everything is OK, it will run a short progress bar and then return you to the management console.
  8. Right-click your certification authority, go to All Tasks, and click Start Service.
  9. Open Server Manager and click the notification flag with the yellow triangle at the top right of the window, then click Configure Active Directory Certificate Services on the destination server. If it is not present for some reason, one of the recent tasks should show a link back to the Add roles and features wizard. It will start on the last page, which includes a link to the certification role configuration wizard.
  10. Proceed to the role configuration page. Check Certification Authority Web Enrollment.
  11. IIS should now have a “CertEnroll” virtual directory underneath the Default Web Site that redirects to C:\Windows\system32\CertSrv\CertEnroll. It should contain the CRT and CRL for your subordinate CA.

Congratulations! You have set up a functional public key infrastructure, complete with an offline root CA and an operational enterprise subordinate CA! If you check the certificate list on a domain member with a current policy update, you should see the sub-CA with an OK status.

You can request certificates using the Certificates MMC snap-in on a domain-joined computer. You can also instruct group policy to automatically issue certificates. Explaining such usage (and all of the other possibilities) exceeds what I want to accomplish in this article.

Root CA Maintenance and Activities

You don’t need to do much with your root CA, but ignoring it will cause problems.

I highly recommend placing all of the above activities into a single text file and storing it with your CA’s files. You can then easily reuse the portions that you need. You’ll also have them available if you have an emergency and need to rebuild a new PKI from scratch in a hurry. Append the functions in this section to the file with relevant comments.

When you need to reuse your files, spin up a WSL environment, enter  sudo -s, and then run the portion that generates the environment variables. Then, run  cp -rf $dir_host_transfer/* $dir_root.

Updating the Root CA’s CRL

You will need to issue an updated CRL prior to the expiration of the existing one or problems will ensue. Specifically, anything checking the validity of any certificate in the chain may opt to treat the entire chain as invalid if it cannot retrieve a current CRL.

Assuming that you have performed the preceding bit about regenerating the CA structure within WSL, you can create an updated CRL very simply:
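
A sketch, identical in form to Part 7 above:

```bash
# Regenerate the CRL; prompts for the root key's pass phrase.
openssl ca -config openssl.cnf -gencrl -out $rootcaname.crl
```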

To output the necessary CRL files back to the Windows environment:
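
For example, again using the transfer-folder convention from the earlier parts:

```bash
# Push the refreshed CRL back out through the Windows transfer folder.
cp $rootcaname.crl $dir_out/
cp -rf $dir_out $dir_host_transfer/
```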

Remember to  rm -r /ca  to remove the CA from WSL after you’ve done this. Copy the updated CRL file out to the web location and place the CA files back into secured storage.

Revoking the Subordinate CA’s Certificate

If your subordinate CA becomes compromised, you’ll need to revoke it. That’s easily done. These instructions assume you followed the previous portion about rebuilding the CA structure and setting up the environment variables.
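
A sketch of the revocation itself, assuming the subordinate certificate still sits at newcerts/01.pem as in Part 6:

```bash
# Mark the subordinate CA's certificate as revoked in the CA database.
# Prompts for the root key's pass phrase.
openssl ca -config openssl.cnf -revoke newcerts/01.pem
```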

Copy the index.txt file back to Windows; it contains the CA’s database and will now have a record of the revoked subordinate CA certificate.

Perform all the steps in the “Updating the Root CA’s CRL” section above. Remove the authority from Active Directory.

Renewing the Subordinate CA’s Certificate

“Renewing” a certificate is just a phrase we use. In truth, you just set up another subordinate CA, usually with the same name. If you want, you can issue another CSR from the same private/public key pair and have your CA sign it. I would not recommend reusing the same private key, though. CAs traditionally have long validity periods, and any key can be cracked given enough time. Generating a new private key resets the clock.

In the Certification Authority tool, right-click your authority, go to All Tasks and select Renew CA Certificate.


Follow the wizard to generate a new CSR. Copy the new CSR into WSL as shown in Part 6 above, then proceed from Part 6 through to the end. Wrap up by starting at step 4 of the “Complete Configuration of the Subordinate CA” section above.

Further Reading

Microsoft’s base documentation for Certificate Services: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831740(v%3dws.11)

Information on the newer web services: https://social.technet.microsoft.com/wiki/contents/articles/7734.certificate-enrollment-web-services-in-active-directory-certificate-services.aspx


Want to ask a question that relates directly to your situation? Head on over to the Altaro Dojo Forums and start a discussion. I’m actively part of that community and answering questions on a daily basis.

Author: Eric Siron

For Sale – Core i7 gaming PC for sale, 1070, 6700, 16gb

Discussion in ‘Desktop Computer Classifieds‘ started by GingerRocky, Dec 1, 2018.

  1. GingerRocky (Active Member · Joined: Sep 30, 2016 · Messages: 593 · Trophy Points: 31 · Location: Chelmsford · Ratings: +65)

    Hi

    Selling this rig, which was made from parts lying around plus a new PSU. It flies and is a lovely PC. Spec is below. The GPU was as new in box, purchased for another PC and used in that build for a month before the mobo died; most parts were used in this. Price includes delivery.

    Processor: Core i7 6700 quad core
    Memory: Corsair vengeance LPX 16 GB (2x8gb) 2400 mhz
    Motherboard: MSI Z170A SLI ATX
    Hard Drive: 2 x 500 GB 2.5″ and 1 x 250GB 2.5″
    Solid state drive: 240 GB SSD for windows
    Power supply: Riotoro 500 watt bronze psu
    Dedicated graphics card: EVGA FTW GTX 1070 8GB
    DVD-RW: No
    Case: Aerocool Cylon RGB
    Operating system: Windows 10 pro

    Price and currency: 869
    Delivery: Delivery cost is included within my country
    Payment method: bank transfer
    Location: heybridge
    Advertised elsewhere?: Advertised elsewhere
    Prefer goods collected?: I have no preference



Ways administrators can avoid IT burnout

Administrators buried under heavy workloads and those who feel they lack the opportunity to challenge themselves have something in common: Both are at risk of IT burnout.

Experts haven’t clearly defined burnout, but it has a lot of similarities to clinical depression. That should ring alarm bells; it’s a lot more than just having a bad day at work. A common cause of burnout is doing too much, both in the amount of work and the time spent doing work.

Beyond staffing issues, work politics, bad managers and other personnel issues, some technical practices can help administrators avoid IT burnout by reducing the stress associated with a heavy workload.

Ease your workload with automation

Administrators who find themselves doing the same tasks, especially ones done on a deadline, on a regular basis need to figure out how to change this manual process. Automation can eliminate unrewarding and repetitive jobs that offer no challenge and that take time away from other, more meaningful duties.


For example, an administrator might get requests to create email groups, which requires some effort to work out what the requester wants and then to set the group up, add initial members, and manage membership over time. Administrators might not need to play any part in this process. You can set up notifications to get alerts when a user creates a group to stay informed without being a cog in that wheel.

Allow users to create these email groups, enforce a naming convention and set up an approval process. Or you can automate the creation of groups based on a logical arrangement, such as the user’s department. If automation isn’t an option, then let certain staff update the group’s members. If nothing else, create a web form with all the fields needed to create a group.

Don’t set out to automate the world, but start with quick wins and make part of the process more streamlined and less painful. Where it makes sense, let the users step in to do the work.

Invest the time to complete a turnaround

There’s a Catch-22: Trying to improve the situation to spend less time working will require additional work. You also might think you don’t have time to implement the change, which is a hard mindset to get past. But, without this effort, the only alternative is burnout.

(Video: How to deal with pressures as an IT professional)

Ideally, you’ll have support to make changes from colleagues, management or even friends, if you are self-employed. Without this assistance, these necessary adjustments will be difficult to implement, but you must strive to find ways to improve procedures and workflows.

Spend a few more minutes writing a PowerShell script that automatically deprovisions a user when they leave rather than manually removing the account. These endeavors will build certain skills and increase job satisfaction when you no longer have to click through to perform mundane tasks an endless number of times.

What else can I do about IT burnout?

I find many IT pros have a difficult time saying no when they are pitched a new project. Even when their existing workload already keeps them fully occupied, they will try to find a way to make room for another task.

A more sensible approach is to not agree to a proposal right away. Work out your priorities and a reasonable time to deliver. If there’s a conflict with another project, let someone higher up in the management chain decide the priority.

Although these changes can be overwhelming, it’s necessary to start somewhere and improve incrementally. It helps to step back over time and see the progress you’ve made to make the work less of a grind and more enjoyable.

If all of this still seems too hard, then talk to somebody about it. Reach out to a professional for help.

Go to Original Article
Author:

Customize Microsoft Translator’s Neural Machine Translation to translate just the way you want: Custom Translator now in General Availability

Custom Translator, now in general availability, significantly improves the quality of your translations by letting you build your own customized neural translation models tuned with your own pre-translated content.​ Using Custom Translator, you can translate your product names and industry jargon just the way you want.

With Custom Translator, an extension of the Microsoft Translator Text API, part of the Cognitive Services suite of products on Azure, you can build neural translation models that understand the terminology used in your own business and industry. The customized translation model will then seamlessly integrate into existing applications, workflows, and websites.

Custom Translator can be used with Microsoft Translator’s advanced neural machine translation when translating text using the Microsoft Translator Text API and speech translation using the Azure Cognitive Services Speech Service.
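
For example, once a customized model is deployed, a request to the Translator Text API v3 can reference it through the category parameter. In this hedged sketch, the subscription key and the category ID are placeholders to replace with your own:

```bash
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&category=<your-category-id>" \
  -H "Ocp-Apim-Subscription-Key: <your-translator-key>" \
  -H "Content-Type: application/json" \
  -d '[{"Text": "Install the flange onto the widget assembly."}]'
```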

Preview customers of Custom Translator have already noted its improvements to translation quality and its usefulness regardless of the amount of pre-translated, bilingual content available.

Alex Yanishevsky, Senior Manager for machine translation at Welocalize, a leading language service provider, remarked, “Using Custom Translator, we’ve seen very good quality in comparison to other engines. It is very flexible. You can make engines just based on dictionaries if you don’t have enough data, and if you do have enough data you can make an engine based on data plus dictionaries. From the standpoint of customization, having that flexibility is really important.”

How it works

Custom Translator is easy to use and does not require a developer once the call to the Translator service has been properly set up in your app’s code. Custom Translator features a simple and intuitive web app that guides you through the 4-step process of customizing a model:

  1. Upload your data
  2. Train a model
  3. Test the model
  4. Deploy the new customized model to be used in your app


For advanced use, there is also the Custom Translator API (preview) to automate the customization into your workflows.

Building and using custom NMT with Translator is quick, easy, and cost effective. By optimizing how training is performed, and how the Translator runtime incorporates the custom training, our team was able to provide a solution for customizing the Translator NMT models with a training cost that is less than 1% of the cost of training a new neural translation model from scratch. This, in turn, enables Microsoft to provide a cost-effective and simple pricing model to our users.

General availability pricing will go into effect on February 1st, 2019.

Get started now

  1. Ensure you have a Translator Text API key
    If you don’t have a key already, learn how to sign up.
  2. Log into the Custom Translator portal
    You can use your Microsoft account or corporate email to sign into the portal.
  3. Watch the how-to video and read the documentation.
  4. Questions?
    Ask them on Stack Overflow. We monitor these daily!

Author: Steve Clarke

Oracle HPC cloud aims at mainstream business data

ERP systems have an enormous amount of data that many firms are just beginning to tap. To answer complex questions with this information — and to do so quickly — it takes a lot of computing power. It is one reason why Oracle just added a high-performance computing, or HPC, cloud capability to its portfolio of public cloud infrastructure services.

The new Oracle HPC cloud capability is aimed at two audiences. The first is users who run legacy HPC applications on premises, such as scientific projects, R&D applications and virtual product design in lieu of physical prototypes. The second audience is ERP managers who need to apply high-end computing resources to business data processed in the Oracle Cloud Infrastructure service.

ERP systems manage a lot of data. “Most of our customers today are trying to find more and more value out of that data, and that’s what we are seeing [as] growth,” said Karan Batta, director of product management for Oracle Cloud Infrastructure.

HPC expanding to mainstream business uses

The use of HPC outside of its traditional scientific and research use is expanding to areas such as sales analysis and planning, supply chain planning and HR-related workforce analysis, said Steve Conway, senior research vice president at HPC research firm Hyperion Research in St. Paul, Minn.

An HPC system can take in far more data to allow for deeper analysis. This approach is also being coupled with AI technologies, such as an inference engine that can be applied to many situations in ERP, Conway said.

“The larger the data sets, the more accurate the results are going to be,” he said.

Enterprises not only want ERP-related questions answered “that are more complicated than before, but they also want the answers in something very near-real time,” Conway said.

HPC ideal for machine learning models


High-performance computing is a set of technologies and processes designed to maximize performance. What Oracle HPC offers users is a clustered network with access to bare-metal processing, both CPU and GPU. Bare metal is a single-tenant server or system that doesn’t use virtualization, which can add latency.

Another key technology Oracle HPC uses to speed performance is remote direct memory access (RDMA), which allows an application to write directly to memory remotely without involving the CPU or operating system.

The use of bare metal and RDMA in a cloud platform means vendors are “getting past one of the very big bottlenecks that affected a lot of cloud computing, which is virtualization,” Conway said.

Business application developers can take mammoth data sets from ERP systems and put this data into a machine learning model. From that model, a business can figure out what “kind of insights they can generate out of that data,” and then feed it back into the ERP system, Batta said.

Machine learning models are computationally intensive and can take hours, days or even weeks to run if they don’t have access to enough computing resources, Batta said. That’s where high-performance computing comes in.

The Oracle HPC system will allow scaling up to 1,080 cores for a single project, although the number of available cores will be expanded in time, Batta said.


Wanted – Basic Desktop PC for Light Gaming

Discussion in ‘Desktop Computer Classifieds‘ started by fjordvik, Nov 24, 2018.

  1. fjordvik (Novice Member · Joined: Apr 29, 2015 · Messages: 2 · Trophy Points: 1 · Location: Derby · Ratings: +0)

    Looking for a desktop PC for light gaming for my kids. Something that runs something like a 2200G or a 1030 to 1050 Ti. Max spend £300 inc postage.

    Thanks

    Location: Derby


  2. P5X (Active Member · Joined: Jun 27, 2003 · Messages: 448 · Trophy Points: 18 · Ratings: +11)

    I have this if it’s any good for you. I was mainly using it for general browsing, and the kids were using it for Roblox etc. I think the specs are roughly as follows, but let me know if you’re interested and I will confirm and take pics:

    Intel E5200
    4GB RAM
    Seagate 500GB HDD (st500dm002 – brand new)
    ATI HD 3650
    Antec aria (I think) cube case with AR-350 PSU.
    DVDRW


IT pros look to iPaaS tools for LOB integration demands

Enterprise iPaaS adoption continues to accelerate, as IT pros look to provide departmental users with application integration tools they can use to access data.

The increase in integration work derives largely from enterprise cloud adoption and an uptick in IoT endpoints, according to analyst firm Gartner, which counts about 120 integration-platform-as-a-service (iPaaS) vendors in a market that grew 72% in 2017, up from 61% in 2016.

“We don’t see it slowing down,” said Betty Zakheim, a Gartner research director.

To help IT pros meet the needs of line-of-business (LOB) customers, and also keep company data secure, vendors have released iPaaS tools aimed at nontechnical employees. For example, Workato this week released packages for IT pros and business teams.

They include Automation Editions for business teams, such as sales, marketing and HR. The editions have connectors to apps and systems that are used by each department, as well as workflows to build custom “recipes,” or sets of instructions, without the need to code.

Additions to the overall platform include RecipeIQ, which adds machine learning to bolster the recipes’ functionality, and OpsIQ, which improves the way the platform manages recipes. The company bases its prices on the number of transactions, connections and features a customer has.

One IT architect for a logistics company that supports retailers said his IT department is constantly paranoid about what users might do on their own. The company uses Workato’s integration software with Apigee, an API management tool now owned by Google.

The architect, who declined to be identified, said the tool gives departmental teams access to data, albeit with some guardrails. “They have power, but they can’t go overboard,” he said.

Application integration and LOBs

Enterprises historically could integrate local data behind a firewall, but as cloud apps appeared and data proliferated, the job grew more difficult. The need to connect people, applications and devices creates an enormous technical challenge for enterprises and vendors. LOB projects typically keep valuable data in discrete systems — often SaaS apps or legacy databases — which complicates horizontal integration efforts.

How a company chooses to integrate this data depends on its priorities and existing platforms. Big names that address this market include Informatica, Dell Boomi, SnapLogic and Jitterbit, as well as established legacy companies, such as Microsoft and Oracle.

“Some focus on data integration techniques, some API management, some EDI [electronic data interchange] and some pack in workflows,” Zakheim said. Nearly all have included graphical configuration, versus coding, which makes the creation of integration flows more straightforward.

IPaaS tools lay the foundation

The integration of LOB apps and their data is likely to become a significant trend for 2019.

“It needs to be,” said Vijay Tella, CEO and founder of Workato, based in Cupertino, Calif. “A lot of the drive comes from the business side. They see this as an automation problem.”

Application automation and integration are central to nearly every project these days at Wilbur-Ellis, a $3 billion holding company, with divisions in agribusiness, chemicals and feed.

“If I look back on the last three major projects, they all involve a separate system that has to integrate,” said Dan Willey, CIO at the San Francisco-based company.

Many of these iPaaS tools are a good conceptual fit for modern, cloud-based companies, but enterprises are sometimes saddled with an application that doesn’t play well with them. In the case of Wilbur-Ellis, an Oracle JD Edwards ERP system is the stumbling block, Willey said.

Wilbur-Ellis uses Dell Boomi’s connectors to integrate customer and order data, and the company also plans to use the tool more broadly as an API management platform.

“It’s a hard problem to solve,” Willey said. “It’s interchanging between your tool sets, data in your back-end systems, front-end systems, IoT data and other things that need to be lined up to make it happen.”

“We want to look at how weather will impact our month,” he added. “All of that information is available through APIs. You can be very creative, and it’s as big as you want to think about it.”

Vassar College has used SnapLogic since 2015 to connect systems that share data across many departments, as the school moved from on-premises storage to SaaS applications and a Workday ERP system for finance and human capital management. The school has vital processes in place to match student and employee data that funnels down to the college’s ID system, said Mark Romanovsky, data system architect at Vassar.

“We knew that IT would still handle integration requirements, but we were concerned about pushing specialized solutions out to those departments,” he said. “SnapLogic lets us structure and deliver a data set for easier reporting based on current projects.”

Go to Original Article
Author:

Wanted – Buffalo DiskStation – 4 caddies wanted

Model no. HD-QS4.0 TSU2R5EU, though I think some other models, like the Duo, would have had the same ones. The part I am after is attached by the screw next to the numbers and is a sort of open metal box that holds the drive in position. I wonder if someone might have a defunct one around that I could get the caddies from. Four are wanted in total, but I would be grateful for any. There are no ‘plugs’ on the caddies; those are separate and to the side. The caddies are purely the metal ‘boxes’ with an L-bracket to be held by the screw. As you can see from the photos, the one I have, with the single drive, has no caddies…

Location: Rustington, West Sussex, UK


Go to Original Article
Author:

Customize Microsoft Translator’s Neural Machine Translation to translate just the way you want: Custom Translator now in General Availability

Custom Translator, now in general availability, significantly improves the quality of your translations by letting you build your own customized neural translation models tuned with your own pre-translated content. Using Custom Translator, you can translate your product names and industry jargon just the way you want.

With Custom Translator, an extension of the Microsoft Translator Text API, part of the Cognitive Services suite of products on Azure, you can build neural translation models that understand the terminology used in your own business and industry. The customized translation model will then seamlessly integrate into existing applications, workflows, and websites.

Custom Translator works with Microsoft Translator’s advanced neural machine translation for both text translation, through the Microsoft Translator Text API, and speech translation, through the Azure Cognitive Services Speech Service.

Preview customers of Custom Translator have already noted its improvements in translation quality and its usefulness regardless of how much pre-translated, bilingual content they had available.

Alex Yanishevsky, Senior Manager for machine translation at Welocalize, a leading language service provider, remarked, “Using Custom Translator, we’ve seen very good quality in comparison to other engines. It is very flexible. You can make engines just based on dictionaries if you don’t have enough data, and if you do have enough data you can make an engine based on data plus dictionaries. From the standpoint of customization, having that flexibility is really important.”

How it works

Custom Translator is easy to use and does not require a developer once the call to the Translator service has been properly set up in your app’s code. A simple, intuitive web app guides you through the four-step process of customizing a model (a sketch of calling a deployed model follows the list):

  1. Upload your data
  2. Train a model
  3. Test the model
  4. Deploy the new customized model to be used in your app
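
Once step 4 is complete, using the custom model is a small change to an ordinary Translator Text API v3 request: the model’s category ID is passed as the category query parameter, which routes the request to your customized engine. The following is a minimal Python sketch; the subscription key and category ID are placeholders you would replace with your own values.

    # Minimal sketch: calling a deployed Custom Translator model through the
    # Translator Text API v3. The key and category ID below are placeholders.
    import json
    import uuid

    import requests

    endpoint = "https://api.cognitive.microsofttranslator.com/translate"
    params = {
        "api-version": "3.0",
        "to": "de",
        # The category parameter routes the request to your custom model.
        "category": "<your-category-id>",
    }
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-translator-text-api-key>",
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    }
    body = [{"Text": "Our product names should translate just the way we want."}]

    response = requests.post(endpoint, params=params, headers=headers, json=body)
    print(json.dumps(response.json(), indent=2))

Omitting the category parameter sends the same request to the general-purpose Translator models, so applications can fall back to the standard engine without any other code changes.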


For advanced use, there is also the Custom Translator API (preview), which lets you automate the customization process within your own workflows.

Building and using custom NMT with Translator is quick, easy and cost-effective. By optimizing how training is performed, and how the Translator runtime incorporates the custom training, our team was able to customize the Translator NMT models at a training cost that is less than 1% of the cost of training a new neural translation model from scratch. This, in turn, enables Microsoft to offer users a simple, cost-effective pricing model.

General availability pricing will go into effect on February 1st, 2019.

Get started now

  1. Ensure you have a Translator Text API key
    If you don’t have a key already, learn how to sign up.
  2. Log in to the Custom Translator portal
    You can use your Microsoft account or corporate email to sign into the portal.
  3. Watch the how-to video and read the documentation.
  4. Questions?
    Ask them on Stack Overflow. We monitor these daily!

Go to Original Article
Author: Steve Clarke