Kioxia expects no impact to NAND flash production in the aftermath of a fire this week at its Fab 6 semiconductor manufacturing facility in Yokkaichi, Japan, a company spokesperson confirmed today.
Kioxia — formerly Toshiba Memory Corp. — notified customers that the fire took place on Jan. 7 at about 6:10 a.m. local time. Firefighters contained the blaze to a single piece of machinery at the Fab 6 plant, and all machines other than the damaged one are now operating, according to the company spokesperson.
Western Digital, Kioxia’s joint venture (JV) partner, issued a statement confirming a “small fire” that “local firefighters quickly extinguished.” The company said that no employees sustained injuries.
“We are working closely with our JV partner to promptly bring the fab back to normal operational status. We expect any supply impact to be minimal,” Western Digital stated.
Analysts predict no supply or market impact
Don Jeanette, a vice president at Trendfocus, a data storage market research and consulting firm, met this week with multiple Kioxia employees. He said they told him the impact would be minimal, and the fire affected only a small portion of a clean room at the Fab 6 plant, which produces 3D NAND flash.
Likewise, Greg Wong, founder and principal analyst at Forward Insights, said his checks confirmed the impact was small, and there should be no market impact and no major disruptions in NAND flash supply to customers.
“Most NAND suppliers continue to carry above normal inventory levels,” Wong said.
NAND flash price increase unrelated to fire
NAND flash prices have been on the rise. But Joseph Unsworth, a research vice president at Gartner, attributed the price increase to strong demand for solid-state drives (SSDs) from hyperscale and PC markets and lean supply due to fab delays and 3D NAND technology transitions. He said the NAND price increase has no relationship to the Kioxia fire or a recent Samsung power outage.
Samsung’s semiconductor facility in Hwaseong, Korea, experienced a power outage on Dec. 31, 2019. A Samsung spokesperson said power was “immediately restored,” and the facility resumed normal operation.
Jim Handy, general director and semiconductor analyst at Objective Analysis, said he has seen no market impact from the recent Samsung power outage and he expects none from this week’s Kioxia fire. He said a June power outage that interrupted production at Toshiba’s Yokkaichi plant also had “almost no impact.”
“We’re in a big oversupply right now,” Handy said. “The prices have gone up a little bit because there is an inventory build going on. Some Chinese NAND buyers are worried that the trade war is going to cut off their source of supply, so they’ve been building a little bit of a stockpile. And that’s given the illusion of a shortage. But when you compare real demand against real supply, there’s still an oversupply.”
Handy said he views the current NAND price increase as a temporary blip and predicts prices will follow costs and remain low this year. He expects the trend to extend through 2021, as new Chinese manufacturer Yangtze Memory Technologies Co. comes online and prolongs the NAND oversupply.
SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.
Among the many challenges with multi-tenancy is that it is not easy to define, and few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or other tenants share a centralized IT infrastructure but remain logically isolated from one another.
Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.
This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
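The namespace-per-tenant pattern can be sketched as a simple access check: a subject bound to a role in one namespace gets nothing in any other. This is a toy model, not the Kubernetes API; the tenant and subject names are hypothetical, and real RBAC is enforced by the API server from Role and RoleBinding objects, with identities supplied by an external authenticator.

```python
# Toy model of namespace-per-tenant isolation with role-based access
# control. All subject and namespace names are hypothetical.

ROLE_BINDINGS = {
    # (subject, namespace) -> set of allowed verbs
    ("alice", "team-payments"): {"get", "list", "create"},
    ("bob", "team-search"): {"get", "list"},
}

def authorize(subject: str, verb: str, namespace: str) -> bool:
    """Allow a request only if the subject holds the verb in that namespace."""
    return verb in ROLE_BINDINGS.get((subject, namespace), set())

# Each namespace maps to one tenant, so a binding in one namespace
# grants nothing in another.
assert authorize("alice", "create", "team-payments")
assert not authorize("alice", "get", "team-search")
```

The point of the sketch is the lookup key: authorization is scoped to the (subject, namespace) pair, which is why a one-tenant-per-namespace layout gives logical isolation but nothing finer-grained without extra tooling.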
“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”
However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more efficient clusters and cost savings on data center hardware and cloud infrastructure.
“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”
Multi-tenant Kubernetes security an ad hoc practice for now
The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that a plethora of third-party tools can help, some sold by vendors and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.
Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that's stretched across three on-premises data centers and shared by 100 web applications so far. The platform is composed of several vendors' products, from Diamanti hyper-converged infrastructure to Aqua Security Software's container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.
“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”
Some Kubernetes multi-tenancy early adopters also lean on cloud service providers such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, which became available in Kubernetes 1.13, but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.
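Encrypting secrets at rest typically follows an envelope-encryption pattern: each secret is encrypted with a fresh data-encryption key (DEK), and the DEK is in turn wrapped by a key-encryption key (KEK) held outside the data store, such as in a cloud KMS. The sketch below shows only that structure; the XOR keystream stands in for a real cipher and must not be used for actual cryptography.

```python
# Toy illustration of envelope encryption for secrets at rest.
# The XOR "cipher" is NOT real cryptography; it only shows the shape:
# a per-secret DEK encrypts the data, and a KMS-held KEK wraps the DEK.
import hashlib
import secrets as _secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the key with SHA-256 in counter mode (toy).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def seal(kek: bytes, plaintext: bytes):
    dek = _secrets.token_bytes(32)              # fresh per-secret DEK
    return _keystream_xor(kek, dek), _keystream_xor(dek, plaintext)

def unseal(kek: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = _keystream_xor(kek, wrapped_dek)      # unwrap the DEK with the KEK
    return _keystream_xor(dek, ciphertext)

kek = b"kms-held-key-material-32-bytes!!"
wrapped, ct = seal(kek, b"db-password")
assert unseal(kek, wrapped, ct) == b"db-password"
assert ct != b"db-password"                     # stored form is not plaintext
```

The design point is that the data store only ever holds ciphertext plus a wrapped key, so compromising the store alone does not reveal secrets without also compromising the KEK.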
Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.
In the meantime, Verizon's Yahoo Media has donated a project called Athenz to open source, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE's Workload Identity. Cruise also created a similar open source tool called RBACSync, along with Daytona, a tool that fetches secrets from HashiCorp Vault (which Cruise uses instead of etcd to store secrets) and injects them into running applications, and k-rail, a tool for workload policy enforcement.
Kubernetes Multi-Tenancy Working Group explores standards
While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.
For example, investment banking company HSBC can use Google's Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more granular access control mechanisms in a multi-tenant environment and simplify their management by automatically propagating shared policies between them. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hopes it will introduce free open source utilities compatible with multiple public clouds.
“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.
Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.
“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”
So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.
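The hierarchical-namespace idea behind those projects can be sketched as policy propagation down a tree: a policy attached to a parent namespace applies to every descendant. The namespace and policy names below are hypothetical, and this is a conceptual model rather than the working group's actual implementation.

```python
# Sketch of policy propagation in hierarchical (nested) namespaces:
# a policy attached to a parent applies to all of its descendants.
# Namespace and policy names are hypothetical.

PARENT = {            # child namespace -> parent namespace
    "team-a": "org",
    "team-a-dev": "team-a",
    "team-b": "org",
}

POLICIES = {          # namespace -> policies attached directly to it
    "org": {"deny-external-egress"},
    "team-a": {"require-resource-quota"},
}

def effective_policies(namespace: str) -> set:
    """Union of policies on the namespace and all of its ancestors."""
    result = set()
    while namespace is not None:
        result |= POLICIES.get(namespace, set())
        namespace = PARENT.get(namespace)
    return result

assert effective_policies("team-a-dev") == {
    "deny-external-egress", "require-resource-quota"}
assert effective_policies("team-b") == {"deny-external-egress"}
```

This walk up the parent chain is what makes nested namespaces attractive for multi-tenancy: an organization-wide rule is set once at the root instead of being copied into every tenant's namespace.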
The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.
With more than 80 years of pioneering innovation in reservoir characterization, drilling, production, and exploration, Schlumberger is the leading provider of upstream products and services in the oil and gas industry. It’s a highly collaborative business, both internally, with more than 100,000 employees working in 85 countries, and externally, with the world’s leading global oil and gas customers depending on Schlumberger products and services. Moving data and expensive resources through a complex network of delivery systems calls for reliable, real-time collaboration and communication for all stakeholders. That’s one reason why standardizing on Microsoft 365 is a major step forward in the company’s strategy to harness the cloud and drive efficient customer service.
Recently, Schlumberger’s VP of Information Technology Sebastien Lehnherr had this to say about driving teamwork and productivity on a global scale:
“Operational efficiency and agility are requirements in a highly regulated service industry striving for performance and service quality. We use Microsoft 365 as a core element of our digital strategy—Microsoft Teams, Enterprise Mobility + Security, Power BI, and Windows 10 empower our employees globally with the intuitive, feature-rich tools that help them collaborate more efficiently and be more productive working in the office or while on the road.”
With highly secure, collaborative cloud apps at their fingertips, Schlumberger employees working in the field and at head offices in Paris, Houston, London, and The Hague are empowered by agile digital connections that accelerate service delivery and keep customers’ products moving to market. Unimpeded communication helps connect the big-picture expertise from Schlumberger’s leadership centers with local experience at production sites, adding value to customer relationships. We’re also excited to see how Schlumberger’s global Windows 10 deployment will add value to its Microsoft cloud business productivity platform.
IT pros who’ve run production workloads with Kubernetes for at least a year say it can open up frontiers for IT operations within their organizations.
It’s easier to find instances of Kubernetes in production in the enterprise today versus just a year ago. This is due to the proliferation of commercial platforms that package this open source container orchestration software for enterprise use, such as CoreOS Tectonic and Rancher Labs’ container management product, Rancher. In the two years since the initial release of Kubernetes, early adopters said the platform has facilitated big changes in high availability (HA) and application portability within their organizations.
For example, disaster recovery (DR) across availability zones (AZs) in the Amazon Web Services (AWS) public cloud was notoriously unwieldy with VM-based approaches. Yet, it has become the standard for Kubernetes deployments at SAP’s Concur Technologies during the last 18 months.
Concur first rolled out the open source, upstream Kubernetes project in production to support a receipt image service in December 2015, at a time when clusters that spanned multiple AZs for HA were largely unheard-of, said Dale Ragan, principal software engineer for the firm, based in Bellevue, Wash.
“We wanted to prepare for HA, running it across AZs, rather than one cluster per AZ, which is how other people do it,” Ragan said. “It’s been pretty successful — we hardly ever have any issues with it.”
Ragan’s team seeks 99.999% uptime for the receipt image service, and it’s on the verge of meeting this goal now with Kubernetes in production, Ragan said.
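A 99.999% ("five nines") target translates into a very small annual downtime budget, which a quick calculation makes concrete:

```python
# Downtime budget implied by a 99.999% availability target.
minutes_per_year = 365 * 24 * 60                 # 525,600 (ignoring leap years)
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(f"{allowed_downtime:.2f} minutes of downtime per year")  # ~5.26 minutes
```

Roughly five and a quarter minutes a year is why teams chasing five nines lean on multi-AZ clusters: a single-zone outage alone can blow the entire budget.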
Kubernetes in production offers multicloud multi-tenancy
Kubernetes has spread to other teams within Concur, though those teams run multi-tenant clusters based on CoreOS’s Tectonic, while Ragan’s team sticks to a single-tenant cluster still tied to upstream Kubernetes. The goal is to move that first cluster to CoreOS, as well, though the company must still work out licensing and testing to make sure the receipt imaging app works well on Tectonic, Ragan said. CoreOS has prepared for this transition with recent support for the Terraform infrastructure-as-code tool, with which Ragan’s team underpins its Kubernetes cluster.
CoreOS just released a version of Tectonic that supports automated cluster creation and HA failover across AWS and Microsoft Azure clouds, which is where Concur will take its workloads next, Ragan said.
“Using other cloud providers is a big goal of ours, whether it’s for disaster recovery or just to run a different cluster on another cloud for HA,” Ragan said. With this in mind, Concur has created its own tool to monitor resources in multiple infrastructures called Scipian, which it will soon release to the open source community.
Ragan said the biggest change in the company’s approach to Kubernetes in production has been a move to multi-tenancy in newer Tectonic clusters and the division of shared infrastructures into consumable pieces with role-based access. Network administrators can now provision a network, and it can be consumed by developers that roll out Kubernetes clusters without having to grant administrative access to those developers, for example.
In the next two years, Ragan said he expects to bring the company’s databases into the Kubernetes fold to also gain container-based HA and DR across clouds. For this to happen, the Kubernetes 1.7 additions to StatefulSets and secrets management must emerge from alpha and beta versions as soon as possible; Ragan said he hopes to roll out those features before the end of this year.
Kubernetes in production favors services-oriented approach
Dallas-based consulting firm etc.io, which helps clients deploy containers, uses HA across cloud data centers and service providers. During the most recent Amazon outage, etc.io clients had failover between AWS and public cloud providers OVH and Linode through Rancher's orchestration of Kubernetes clusters, said E.T. Cook, chief advocate for the firm.
“With Rancher, you can orchestrate domains across multiple data centers or providers,” Cook said. “It just treats them all as one giant intranetwork.”
In the next two years, Cook said he expects Rancher will make not just cloud infrastructures, but container orchestration platforms such as Docker Swarm and Kubernetes interchangeable with little effort. He said he evaluates these two platforms frequently because they change so fast. Cook said it’s too soon to pick a winner in the container orchestration market yet, despite the momentum behind Kubernetes in production at enterprises.
Docker’s most recent Enterprise Edition release favors enterprise approaches to software architectures that are stateful and based on permanent stacks of resources. This is in opposition to Kubernetes, which Cook said he sees as geared toward ephemeral stateless workloads, regardless of its recent additions to StatefulSets and access control features.
“Much of the time, there’s no functional difference between Docker Swarm and Kubernetes, but they have fundamentally different ways of getting to that result,” Cook said.
The philosophy behind Kubernetes favors API-based service architecture, where interactions between services are often payloads, and “minions” scale up as loads and queues increase, Cook said. In Docker, by contrast, the user sets up a load balancer, which then forwards requests to scaled services.
“The services themselves are first-class citizens, and the load balancers expose to the services — whereas in the Kubernetes philosophy, the service or endpoint itself is the first-class citizen,” Cook said. “Requests are managed by the service themselves in Kubernetes, whereas in Docker, scaling and routing is done using load balancers to replicated instances of that service.”
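Cook's description of replicas scaling as loads and queues grow can be sketched as a simple control loop that sizes a service to its backlog. The threshold and replica limits below are hypothetical, chosen only to make the arithmetic visible; this is not how any particular autoscaler is implemented.

```python
# Minimal sketch of queue-driven replica scaling, in the spirit of
# replicas ("minions") scaling with load rather than sitting behind a
# statically configured load balancer. Thresholds are hypothetical.

TARGET_QUEUE_PER_REPLICA = 10   # desired backlog each replica absorbs
MIN_REPLICAS, MAX_REPLICAS = 1, 20

def desired_replicas(queue_length: int) -> int:
    """Scale replicas so each handles roughly the target backlog."""
    wanted = -(-queue_length // TARGET_QUEUE_PER_REPLICA)  # ceiling division
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))

assert desired_replicas(0) == 1        # never below the floor
assert desired_replicas(35) == 4       # ceil(35 / 10)
assert desired_replicas(500) == 20     # capped at the ceiling
```

The contrast with the load-balancer model is in where the decision lives: here the service's own backlog drives replica count, whereas in the balancer-centric design a fixed pool of replicas simply receives whatever the balancer forwards.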
The two platforms now compete for enterprise hearts and minds, but before too long, Cook said he thinks it might make sense for organizations to use each for different tasks — perhaps Docker serving the web front end and Kubernetes powering the back-end processing.
Ultimately, Cook said he expects Kubernetes to find a long-term niche backing serverless deployments for cloud providers and midsize organizations, while Docker finds its home within the largest enterprises that have the critical mass to focus on scaled services. For now, though, he’s hedging his bets.
“It’s like the early days of HD DVD vs. Blu-ray,” Cook said. “Long term, there may be another major momentum shift — even though, right now, the market’s behind Kubernetes.”
Beth Pariseau is senior news writer for TechTarget's Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.