Tag Archives: servers

For Sale – 3x HP DL380 G7 2x E5620 2.40GHz Rack Servers

For Sale

3 servers in total. Spec of each server:

HP DL380 G7 Rack Server
2x E5620 2.40GHz 4 core CPU
24GB RAM
P410i RAID
8x 2.5" SAS HDD 146GB 10K
4x NIC
1x iLO Remote Management (NIC)
Redundant PSUs

All in fully working order, including remote management.

Collection only from Bristol.

Used for a personal cloud/virtualization project and no longer needed.

Open to offers.

I also have a few other bits, such as Cisco switches and an ML G9 tower.

Location: Bristol
Price and currency: £100 per server
Delivery cost included: Delivery is NOT included
Prefer goods collected? I prefer the goods to be collected
Advertised elsewhere? Advertised elsewhere
Payment method: BT, PPG or COD


AWS storage changes the game inside, outside data centers

The impact of Amazon storage on the IT universe extends beyond the servers and drives that store exabytes of data on demand for more than 2.2 million customers. AWS also influenced practitioners to think differently about storage and change the way they operate.

Since Amazon Simple Storage Service (S3) launched in March 2006, IT pros have re-examined the way they buy, provision and manage storage. Infrastructure vendors have adapted the way they design and price their products. That first AWS storage service also sparked a raft of technology companies — most notably Microsoft and Google — to focus on public clouds.

“For IT shops, we had to think of ourselves as just another service provider to our internal business customers,” said Doug Knight, who manages storage and server services at Capital BlueCross in central Pennsylvania. “If we didn’t provide good customer service, performance, availability and all those things that you would expect out of an AWS, they didn’t have to use us anymore.

“That was the reality of the cloud,” Knight said. “It forced IT departments to evolve.”

The Capital BlueCross IT department became more conscious of storing data on the “right” and most cost-effective systems to deliver whatever performance level the business requires, Knight said. The AWS alternative gives users a myriad of choices, including block, file and scale-out object storage, fast flash and slower spinning disk, and Glacier archives at differing price points.

“We think more in the context of business problems now, as opposed to just data and numbers,” Knight said. “How many gigabytes isn’t relevant anymore.”

Capital BlueCross’ limited public cloud footprint consists of about 100 TB of a scale-out backup repository in Microsoft’s Azure Blob Storage and the data its software-as-a-service (SaaS) applications generate. Knight said the insurer “will never be in one cloud,” and he expects to have workloads in AWS someday. Knight said he has noticed his on-premises storage vendors have expanded their cloud options. Capital BlueCross’ main supplier, IBM, even runs its own public cloud, although Capital BlueCross doesn’t use it.

Expansion of consumption-based pricing

Facing declining revenue, major providers such as Dell EMC, Hewlett Packard Enterprise and NetApp introduced AWS-like consumption-based pricing to give customers the choice of paying only for the storage they use. The traditional capital-expense model often leaves companies overbuying storage as they try to project their capacity needs over a three- to five-year window.

While the mainstream vendors pick up AWS-like options, Amazon continues to bolster its storage portfolio with enterprise capabilities found in on-premises block-based SAN and file-based NAS systems. AWS added its Elastic Block Store (EBS) in August 2008 for applications running on Elastic Compute Cloud (EC2) instances. File storage took longer, with the Amazon Elastic File System (EFS) arriving in 2016 and FSx for Lustre and Windows File Server in 2018.

AWS ventured into on-premises hardware in 2015 with a Snowball appliance to help businesses ship data to the cloud. In late 2019, Amazon released Outposts hardware that gives customers storage, compute and database resources to build on-premises applications using the same AWS tools and services that are available in the cloud.

Amazon S3 API impact

Amid the ever-expanding breadth of offerings, it’s hard to envision any AWS storage option approaching the popularity and influence of the first one. Simple Storage Service, better known as S3, stores objects on cheap, commodity servers that can scale out in seemingly limitless fashion. Amazon did not invent object storage, but its S3 application programming interface (API) has become the de facto industry standard.

“It forced IT to look at redesigning their applications,” Gartner research vice president Julia Palmer said of S3.  

Amazon storage timeline
AWS storage has grown from the object-based Simple Storage Service (S3) to include block, file, archival and on-premises options.

Palmer said when she worked in engineering at GoDaddy, the Internet domain registrar and service provider designed its own object storage to talk to various APIs. But the team members gradually realized they would need to focus on the S3 API that everyone else was going to use, Palmer said.

Every important storage vendor now supports the S3 API to facilitate access to object storage. Palmer said that, although object systems haven’t achieved the level of success on premises that they have in the cloud, the idea that storage can be flexible, infinitely scalable and less costly by running on commodity hardware has had a dramatic impact on the industry.

“Before, it was file or block,” she said. “And that was it.”

Object storage use cases expand

With higher-performance storage emerging in the cloud and on premises, object storage is expanding beyond its original backup and archiving use cases to workloads such as big data analytics. For instance, Pure Storage and NetApp sell all-flash hardware for object storage, and object software pioneer SwiftStack improves throughput through parallel I/O.

Enrico Signoretti, a senior data storage analyst at GigaOm, said he fields calls every day from IT pros who want to use object storage for more use cases.

“Everyone is working to make object storage faster,” Signoretti said. “It’s growing like crazy.”

Major League Baseball (MLB) is trying to get its developers to move away from files and write to S3 buckets, as it plans a 10- to 20-PB open source Ceph object storage cluster. Truman Boyes, MLB’s SVP of infrastructure, said developers have been working with files for so long that it will take time to convince them that the object approach could be easier. 

“From an application designer’s perspective, they don’t have to think about how to have resilient storage. They don’t have to worry if they’ve copied it to the right number of places and built in all these mechanisms to ensure data integrity,” Boyes said. “It just happens. You talk to an API, and the API figures it out for you.”
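As a rough sketch of that model, and keeping with the PowerShell examples used elsewhere on this page, the snippet below writes and reads an object through the S3 API using the AWS Tools for PowerShell module. The bucket name, key and file paths are hypothetical, and the module and AWS credentials are assumed to already be installed and configured.

# Minimal sketch: store an object through the S3 API; durability and replication are handled behind the endpoint
# 'example-analytics-bucket' and the key/file paths are placeholders
Write-S3Object -BucketName 'example-analytics-bucket' -Key 'reports/2019/q3.csv' -File 'C:\data\q3.csv'

# Read the same object back; there are no volumes, file shares or copies for the client to manage
Read-S3Object -BucketName 'example-analytics-bucket' -Key 'reports/2019/q3.csv' -File 'C:\data\q3-copy.csv'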

Ken Rothenberger, an enterprise architect at General Mills, said Amazon S3 object storage significantly influenced the way he thinks about data durability. Rothenberger said the business often mandates zero data loss, and traditional block storage requires the IT department to keep backups and multiple copies of data.

AWS storage challengers

By contrast, AWS S3 and Glacier stripe data across at least three facilities located 10 km to 60 km away from each other and provide 99.999999999% durability. Amazon technology VP Bill Vass said the 10 km distance is to withstand an F5 tornado that is 5 km wide, and the 60 km is for speed-of-light latency. “Certainly none of the other cloud providers do it by default,” Vass said.

Startup Wasabi Technologies claims to provide 99.999999999% durability through a different technology approach, and takes aim at Amazon S3 Standard on price and performance. Wasabi eliminated data egress fees to target one of the primary complaints of AWS storage customers.

Vass countered that egress charges pay for the networking gear that enables access at 8.8 terabits per second on S3. He also noted that AWS frequently lowers storage prices, just as it does across the board for all services.

“You don’t usually get that aggressive price reduction from on-prem [options], along with the 11 nines durability automatically spread across three places,” Vass said.

Amazon’s shortcomings in block and file storage have given rise to a new market of “cloud-adjacent” storage providers, according to Marc Staimer, president of Dragon Slayer Consulting. Staimer said Dell EMC, HPE, Infinidat and others put their storage into facilities in close proximity to AWS compute nodes. They aim to provide a “faster, more scalable, more secure storage” alternative to AWS, Staimer said.

But the most serious cloud challengers for AWS storage remain Azure and Google. AWS also faces on-premises challenges from traditional vendors that provide the infrastructure for data centers where many enterprises continue to store most of their data.

Cloud vs. on-premises costs

Jevin Jensen, VP of global infrastructure at Mohawk Industries, said he tracks the major cloud providers’ prices and keeps an open mind. But at this point in time, he finds that his company is able to keep its “fully loaded” costs at least 20% lower by running its SAP, payroll, warehouse management and other business-critical applications in-house, with on-premises storage.

Jensen said the cost delta between the cloud and Mohawk’s on-premises data center was initially about 50%, leaving him to wonder, “Why are we even thinking about cloud?” He said the margin dropped to 20% or 30% as AWS and the other cloud providers reduced their prices.

Like many enterprises, Mohawk uses the public cloud for SaaS applications and credit card processing. The Georgia-based global flooring manufacturer also has Azure for e-commerce. Jensen said the mere prospect of moving more workloads and data off-site enables Mohawk to secure better discounts from its infrastructure suppliers.

“They know we have stuff in Azure,” Jensen said. “They know we can easily go to Amazon.”


For Sale – ALL SOLD Clearout – 15 x 2TB HDDs CAN NOW POST!

Hi all,

I have for sale 15 x 2TB HDDs taken from one of my backup Unraid servers. These are a mixture of Hitachi, WD and Toshiba (Hitachi Rebrand) and have varying hours. NO WARRANTY is left on any of the drives. I have had no issues with the drives, 2 of which are recerts. None of the drives have errors.

I have attached all 15 drives’ preclear logs. Please pay attention to the “199-UDMA_CRC_Error_Count” line. These CRC errors were caused by what I thought at the time was a dodgy SAS card but which in fact turned out to be faulty cables. If you look at the logs, you will see that the count has not increased between the initial and cycle readings and that the status count is zero.

For those not aware of “preclear”, it is essentially a stress test run before you commit a drive to an array; it compares S.M.A.R.T. values to determine the drive’s health. You cannot complete a preclear on a duff drive; it will simply fail to start after checking the values.

Reason for sale: I’ve moved my wedding forward, so I’m selling off a few bits of hardware to fund it!

I’m looking for £20 + delivery per HDD. If you purchase multiples, I can knock a few quid off. If you want to buy all 15, I’m sure we can knock some more off, although I would insist they are collected. Delivery would be via either Collect+ or Royal Mail.

All the drives are currently resting in new anti-static bags, awaiting a new owner or owners.

1 x WD Red & 1 x WD Green brotchaq – paid and posted
2 x HDDs 19329hrs & 17249hrs – Andrips – paid and posted
2 x Tosh/Hitachi & 1 Hitachi – paul99 – paid and posted
8 x Hitachi – moofie – paid and posted

Thanks



For Sale – Clearout – 15 x 2TB HDDs CAN NOW POST!

Hi all,

I have for sale 15 x 2TB HDDs taken from one of my backup Unraid servers. These are a mixture of Hitachi, WD and Toshiba (Hitachi Rebrand) and have varying hours. NO WARRANTY is left on any of the drives. I have had no issues with the drives, 2 of which are recerts. None of the drives have errors.

I have attached all 15 drives’ preclear logs. Please pay attention to the “199-UDMA_CRC_Error_Count” line. These CRC errors were caused by what I thought at the time was a dodgy SAS card but which in fact turned out to be faulty cables. If you look at the logs, you will see that the count has not increased between the initial and cycle readings and that the status count is zero.

For those not aware of “preclear”, it is essentially a stress test run before you commit a drive to an array; it compares S.M.A.R.T. values to determine the drive’s health. You cannot complete a preclear on a duff drive; it will simply fail to start after checking the values.

Reason for sale: I’ve moved my wedding forward, so I’m selling off a few bits of hardware to fund it!

I’m looking for £20 + delivery per HDD. If you purchase multiples, I can knock a few quid off. If you want to buy all 15, I’m sure we can knock some more off, although I would insist they are collected. Delivery would be via either Collect+ or Royal Mail.

All the drives are currently resting in new anti-static bags, awaiting a new owner or owners.

Thanks



For Sale – Clearout – 15 x 2TB HDDs Hitachi/WD/Toshiba

Hi all,

I have for sale 15 x 2TB HDDs taken from one of my backup Unraid servers. These are a mixture of Hitachi, WD and Toshiba (Hitachi Rebrand) and have varying hours. NO WARRANTY is left on any of the drives. I have had no issues with the drives, 2 of which are recerts. None of the drives have errors.

I have attached all 15 drives’ preclear logs. Please pay attention to the “199-UDMA_CRC_Error_Count” line. These CRC errors were caused by what I thought at the time was a dodgy SAS card but which in fact turned out to be faulty cables. If you look at the logs, you will see that the count has not increased between the initial and cycle readings and that the status count is zero.

For those not aware of “preclear”, it is essentially a stress test run before you commit a drive to an array; it compares S.M.A.R.T. values to determine the drive’s health. You cannot complete a preclear on a duff drive; it will simply fail to start after checking the values.

Reason for sale: I’ve moved my wedding forward, so I’m selling off a few bits of hardware to fund it!

I’m looking for £22 each, collected, as I don’t have boxes to post them out. If you purchase 3+, I can knock a few quid off. If you want to buy all 15, I’m sure we can knock some more off.

All the drives are currently resting in new anti-static bags, awaiting a new owner or owners.

Thanks



The 3 types of DNS servers and how they work

Not all DNS servers are created equal, and understanding how the three different types of DNS servers work together to resolve domain names can be helpful for any information security or IT professional.

DNS is a core internet technology that translates human-friendly domain names into machine-usable IP addresses, such as www.example.com into 192.0.2.1. The DNS operates as a distributed database, where different types of DNS servers are responsible for different parts of the DNS name space.

The three DNS server types are the following:

  1. DNS stub resolver server
  2. DNS recursive resolver server
  3. DNS authoritative server

Figure 1 below illustrates the three different types of DNS server.

A stub resolver is a software component normally found in endpoint hosts that generates DNS queries when application programs running on desktop computers or mobile devices need to resolve DNS domain names. DNS queries issued by stub resolvers are typically sent to a DNS recursive resolver; the resolver will perform as many queries as necessary to obtain the response to the original query and then send the response back to the stub resolver.

Types of DNS servers
Figure 1. The three different types of DNS server interoperate to deliver correct and current mappings of IP addresses with domain names.

The recursive resolver may reside in a home router, be hosted by an internet service provider or be provided by a third party, such as Google’s Public DNS recursive resolver at 8.8.8.8 or the Cloudflare DNS service at 1.1.1.1.
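As a quick illustration on a Windows host, the sketch below exercises the first two server types: the .NET call goes through the operating system's stub resolver and whatever recursive resolver it is configured to use, while Resolve-DnsName's Server parameter sends the same question directly to a public recursive resolver. The domain name is just an example.

# Stub resolver path: uses the host's configured DNS settings
[System.Net.Dns]::GetHostAddresses('www.example.com')

# Ask a specific recursive resolver instead (Google Public DNS at 8.8.8.8)
Resolve-DnsName -Name 'www.example.com' -Type A -Server 8.8.8.8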

Since the DNS operates as a distributed database, different servers are responsible — authoritative in DNS-speak — for different parts of the DNS name space.

Figure 2 illustrates a hypothetical DNS resolution scenario in which an application uses all three types of DNS servers to resolve the domain name www.example.com into an IPv4 address — in other words, a DNS address resource record.

DNS servers interoperating
Figure 2. DNS servers cooperate to accurately resolve an IP address from a domain name.

In step 1, the stub resolver at the host sends a DNS query to the recursive resolver. In step 2, the recursive resolver resends the query to one of the DNS authoritative name servers for the root zone. This authoritative name server does not have the response to the query but is able to provide a reference to the authoritative name server for the .com zone. As a result, the recursive resolver resends the query to the authoritative name server for the .com zone.

This process continues until the query is finally resent to an authoritative name server for the www.example.com zone that can provide the answer to the original query — i.e., what are the IP addresses for www.example.com? Finally, in step 8, this response is sent back to the stub resolver.

One thing worth noting is that all these DNS messages are transmitted in the clear, and there is the potential for malicious actors to monitor users’ internet activities. Anyone administering DNS servers should be aware of DNS privacy issues and the ways in which those threats can be mitigated.


USBAnywhere vulnerabilities put Supermicro servers at risk

Security researchers discovered a set of vulnerabilities in Supermicro servers that could allow threat actors to remotely attack systems as if they had physical access to the USB ports.

Researchers at Eclypsium, based in Beaverton, Ore., discovered flaws in the baseboard management controllers (BMCs) of Supermicro servers and dubbed the set of issues “USBAnywhere.” The researchers said authentication issues put servers at risk because “BMCs are intended to allow administrators to perform out-of-band management of a server, and as a result are highly privileged components.

“The problem stems from several issues in the way that BMCs on Supermicro X9, X10 and X11 platforms implement virtual media, an ability to remotely connect a disk image as a virtual USB CD-ROM or floppy drive. When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass,” the researchers wrote in a blog post. “These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, using default credentials, and in some cases, without any credentials at all.”

The USBAnywhere flaws cause the virtual USB drive to act the same way a physical USB drive would, meaning an attacker could load a new operating system image, deploy malware or disable the target device. However, the researchers noted the attacks would be possible on systems where the BMCs are directly exposed to the internet or where an attacker already has access to the corporate network.

Rick Altherr, principal engineer at Eclypsium, told SearchSecurity, “BMCs are one of the most privileged components on modern servers. Compromise of a BMC practically guarantees compromise of the host system as well.”

Eclypsium said there are currently “at least 47,000 systems with their BMCs exposed to the internet and using the relevant protocol.” These systems would be at additional risk because BMCs are rarely powered off and the authentication bypass vulnerability can persist unless the system is turned off or loses power.

Altherr said he found the USBAnywhere vulnerabilities because he “was curious how virtual media was implemented across various BMC implementations,” but Eclypsium found that only Supermicro systems were affected.

According to the blog post, Eclypsium reported the USBAnywhere flaws to Supermicro on June 19 and provided additional information on July 9, but Supermicro did not acknowledge the reports until July 29.

“Supermicro engaged with Eclypsium to understand the vulnerabilities and develop fixes. Supermicro was responsive throughout and worked to coordinate availability of firmware updates to coincide with public disclosure,” Altherr said. “While there is always room for improvement, Supermicro responded in a way that produced an amicable outcome for all involved.”

Altherr added that customers should “treat BMCs as a vulnerable device. Put them on an isolated network and restrict access to only IT staff that need to interact with them.”

Supermicro noted in its security advisory that isolating BMCs from the internet would reduce the risk from USBAnywhere but not eliminate the threat entirely. Firmware updates are currently available for affected Supermicro systems, and in addition to updating, Supermicro advised users to disable virtual media by blocking TCP port 623.
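As one hedged illustration of that last point, a Windows host-based firewall rule along the following lines blocks inbound connections on TCP port 623; the rule name is arbitrary, and in most environments the equivalent filtering would sit on the switches or firewalls in front of an isolated BMC network rather than on individual hosts.

# Sketch: block inbound TCP 623 (the virtual media service port) with Windows Defender Firewall
# The DisplayName is arbitrary; adjust scope/profile to match your BMC management segment
New-NetFirewallRule -DisplayName 'Block virtual media (TCP 623)' `
    -Direction Inbound -Protocol TCP -LocalPort 623 -Action Block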


Try these PowerShell networking commands to stay connected

While it would be nice if they did, servers don’t magically stay online on their own.

Servers go offline for a lot of reasons; it’s your job to find a way to determine network connectivity to these servers quickly and easily. You can use PowerShell networking commands, such as the Test-Connection and Test-NetConnection cmdlets, to help.

The problem with ping

For quite some time, system administrators have used ping to test network connectivity. This little utility sends an Internet Control Message Protocol (ICMP) echo request to an endpoint and listens for an ICMP reply.

ping test
The ping utility runs a fairly simple test to check for a response from a host.

Because ping only tests ICMP, its usefulness as a full connectivity test is limited. Another caveat: The Windows firewall blocks ICMP requests by default. If the ICMP request doesn’t reach the server in question, you’ll get a false negative, which makes the ping results unreliable.

The Test-Connection cmdlet offers a deeper look

We need a better way to test server network connectivity, so let’s use PowerShell instead of ping. The Test-Connection cmdlet also sends ICMP packets, but it uses Windows Management Instrumentation (WMI), which gives us more granular results. While ping returns text-based output, the Test-Connection cmdlet returns a Win32_PingStatus object that contains a lot of useful information.

The Test-Connection command has a few different parameters you can use to tailor your query to your liking, such as changing the buffer size and defining the number of seconds between the pings. The output is the same but the request is a little different.

Test-Connection www.google.com -Count 2 -BufferSize 128 -Delay 3

You can use Test-Connection to check on remote computers and ping a remote computer as well, provided you have access to those machines. The command below connects to the SRV1 and SRV2 computers and sends ICMP requests from those computers to www.google.com:

Test-Connection -Source 'SRV2', 'SRV1' -ComputerName 'www.google.com'

Source Destination IPV4Address   IPV6Address Bytes Time(ms)
------ ----------- -----------   ----------- ----- --------
SRV2   google.com  172.217.7.174             32    5
SRV2   google.com  172.217.7.174             32    5
SRV2   google.com  172.217.7.174             32    6
SRV2   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5

If the output is too verbose and you just want a simple result, use the Quiet parameter.

Test-Connection -ComputerName google.com -Quiet
True
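Building on the Quiet parameter, a short sketch like the one below sweeps a list of servers and warns about any that fail the ICMP test; the server names are placeholders for your own hosts.

# Ping each (placeholder) server twice and flag any that don't respond
$servers = 'SRV1', 'SRV2', 'SRV3'
foreach ($server in $servers) {
    if (-not (Test-Connection -ComputerName $server -Count 2 -Quiet)) {
        Write-Warning "$server did not respond to ICMP"
    }
}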

For more advanced network checks, try the Test-NetConnection cmdlet

If simple ICMP requests aren’t enough to test network connectivity, PowerShell also provides the Test-NetConnection cmdlet. This cmdlet is the successor to Test-Connection and goes beyond ICMP to check network connectivity.

For basic use, Test-NetConnection just needs a value for the ComputerName parameter and will mimic Test-Connection‘s behavior.

Test-NetConnection -ComputerName www.google.com

ComputerName : www.google.com
RemoteAddress : 172.217.9.68
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
PingSucceeded : True
PingReplyDetails (RTT) : 34 ms

Test-NetConnection has advanced capabilities and can test for open ports. The example below will check to see if port 80 is open:

Test-NetConnection -ComputerName www.google.com -Port 80

ComputerName : google.com
RemoteAddress : 172.217.5.238
RemotePort : 80
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
TcpTestSucceeded : True

The Boolean TcpTestSucceeded property returns True to indicate port 80 is open.
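To extend that idea, a small sketch along these lines probes several ports in one pass; the port list and target host are only examples.

# Probe a few common ports and show just the fields that matter
80, 443, 3389 | ForEach-Object {
    Test-NetConnection -ComputerName www.google.com -Port $_ |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}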

We can also use the TraceRoute parameter with the Test-NetConnection cmdlet to check the progress of packets to the destination address.

Test-NetConnection -ComputerName google.com -TraceRoute

ComputerName : google.com
RemoteAddress : 172.217.5.238
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
PingSucceeded : True
PingReplyDetails (RTT) : 44 ms
TraceRoute : 192.168.86.1
192.168.0.1
142.254.146.117
74.128.4.113
65.29.30.36
65.189.140.166
66.109.6.66
66.109.6.30
107.14.17.204
216.6.87.149
72.14.198.28
108.170.240.97
216.239.54.125
172.217.5.238

If you dig into the help for the Test-NetConnection cmdlet, you’ll find it has quite a few parameters to test many different situations.


IBM Power9 bulks up for AI workloads

The latest proprietary Power servers from IBM, armed with the long-awaited IBM Power9 processors, look for relevance among next-generation enterprise workloads, but the company will need some help from its friends to take on its biggest market challenger.

IBM emphasizes increased speed and bandwidth with its AC922 Power Systems to better take on high-performance computing tasks, such as building models for AI and machine learning training. The company said it plans to pursue mainstream commercial applications, such as building supply chains and medical diagnostics, but those broader-based opportunities may take longer to materialize.

“Most big enterprises are doing research and development on machine learning, with some even deploying such projects in niche areas,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “But it will be 12 to 18 months before enterprises can even start driving serious volume in that space.”

The IBM Power9-based systems’ best chance for short-term commercial success is at the high end of the market.

“Power9 as a platform for AI is the focus over the next year or two,” said Charles King, principal analyst at Pund-IT Research Inc. “We are still a ways from seeing this sort of technology come down further into the commercial markets.”

Power9 as a platform for AI is the focus over the next year or two.
Charles King, principal analyst, Pund-IT Research Inc.

But IBM may need to rely on its most important business partner and customer to drive the Power9’s commercial acceptance.

Google, which co-founded the OpenPower Foundation along with IBM and Nvidia, contributed work around Power8 and ported its applications over to work with IBM’s Power-based systems. Google executives have declined to say how the company would deploy the Power9 internally and for what applications, but broadly deploying the IBM Power9 processor in servers for its data centers could seed confidence among corporate users, Moorhead said.

“To gain share at the macro level they need a Google deployment,” he said. “This could inspire others to deploy Power9 who are actually running large amounts of their production workloads.”

Under the hood of Power9

At the heart of the IBM AC922 system’s architecture are PCI-Express 4.0, Nvidia’s NVLink 2.0 and OpenCAPI, which together improve speed and bandwidth, according to the company. The NVLink 2.0, developed jointly by IBM and Nvidia, is claimed to transport data between the IBM Power9 CPU and Nvidia’s GPU seven to 10 times faster than an earlier version of the technology. The systems are also tuned to take advantage of popular AI frameworks: TensorFlow, a Google-developed open source software library for numerical computation using data flow graphs; Chainer, a framework supporting neural networks; and Caffe, a deep learning framework developed by Berkeley AI Research.

These “accelerators” are part of the IBM Power9 evolving hardware architecture, and are designed to solidify the system’s competitive footing in the cloud computing market.

“We have seen how aggressive the compute requirements have grown in the Linux space, especially as AI workloads were added to the mix,” said Stefanie Chiras, vice president of IBM’s Power Systems. “It now requires a different level of infrastructure underneath to support that level of data transport.”

IBM faces off with Intel

Some of IBM’s server competitors have pledged to deliver systems built to handle AI workloads, and some said they believe Intel will be Big Blue’s most serious competitor. Intel unveiled its AI processor called Nervana late last year and promised a finished product by the end of this year.

Intel’s advantage in the budding competition for AI processors is the overwhelming market share of its server-based Xeon processors, compared with that of proprietary chips such as IBM’s. Nervana could prove a formidable competitor to IBM in the AI market, but the Power9, with its accompanying accelerator technologies, has the edge right now, Moorhead said.

“Intel will point out they have about 95% of the processors and their Nervana accelerator, but IBM is the only one out there with NVLink that has the highest bandwidth connection you can have between a CPU and GPU,” Moorhead said. “Intel would have to significantly change its architecture to support something like NVLink, and they won’t do that any time soon.”