Tag Archives: Hardware

Quorum OnQ solves Amvac Chemical’s recovery problem

Using a mix of data protection software, hardware and cloud services from different vendors, Amvac Chemical Corp. found itself in a cycle of frustration. Backups failed at night, then had to be rerun during the day, and that brought the network to a crawl.

The Los Angeles-based company found its answer with Quorum’s one-stop backup and disaster recovery appliances. Quorum OnQ’s disaster recovery as a service (DRaaS) combines appliances that replicate across sites with cloud services.

The hardware appliances are configured in a hub-and-spoke model with an off-site colocation data center. The appliances replicate in full to the cloud, which backs up the data after hours.

“It might be overkill, but it works for us,” said Rainier Laxamana, Amvac’s director of information technology.

Quorum OnQ may be overkill, but Amvac's previous system underwhelmed. The earlier strategy layered disk backup onto early cloud services, with tape as the final tier, yet the core problem remained: failed backups. The culprit was Veritas Backup Exec, whose failures the Veritas support team, then still part of Symantec, could not explain. A big part of the Backup Exec problem was application support.

“The challenge was that we had different versions of an operating system,” Laxamana said. “We had legacy versions of Windows servers so they said [the backup application] didn’t work well with other versions.

“We were repeating backups throughout the day and people were complaining [that the network] was slow. We repeated backups because they failed at night. That slowed down the network during the day.”

We kept tapes at Iron Mountain, but it became very expensive so we brought it on premises.
Rainier Laxamana, director of information technology, Amvac

Quorum OnQ provides local and remote instant recovery for servers, applications and data. The Quorum DRaaS setup combines backup, deduplication, replication, one-click recovery, automated disaster recovery testing and archiving. Quorum claims OnQ is “military-grade” because it was developed for U.S. Naval combat systems and introduced into the commercial market in 2010.

Amvac develops crop protection chemicals for agricultural and commercial purposes. The company has a worldwide workforce of more than 400 employees in eight locations, including a recently opened site in the Netherlands. Quorum OnQ protects six sites, moving data to the main data center. Backups are done during the day on local appliances. After hours, the data is replicated to a DR site and then to another DR site hosted by Quorum.

“After the data is replicated to the DR site, the data is replicated again to our secondary DR site, which is our biggest site,” Laxamana said. “Then the data is replicated to the cloud. So the first DR location is our co-located data center and the secondary DR site is our largest location. The third is the cloud because we use Quorum’s DRaaS.”
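The tiering Laxamana describes is easiest to picture as an ordered chain of replication hops. The Python sketch below is purely illustrative — the site names and the function are hypothetical, not part of Quorum's product:

```python
# Illustrative model of a tiered replication chain like Amvac's
# (site names are hypothetical, not Quorum's actual configuration).
REPLICATION_CHAIN = [
    "local-appliance",    # daytime backups land here
    "colo-dr-site",       # first DR hop, replicated after hours
    "secondary-dr-site",  # the company's largest location
    "quorum-cloud",       # final DRaaS tier
]

def replicate(snapshot: str) -> list[str]:
    """Walk a snapshot down each tier in order, returning the hops taken."""
    return [
        f"{snapshot}: {source} -> {target}"
        for source, target in zip(REPLICATION_CHAIN, REPLICATION_CHAIN[1:])
    ]

hops = replicate("nightly-snapshot")
```

Each snapshot makes three hops — colo site, largest office, then Quorum's cloud — matching the chain described above.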

Amvac’s previous data protection configuration included managing eight physical tape libraries.

“It was not fun managing it,” Laxamana said. “And when we had legal discovery, we had to go through 10 years of data. We kept tapes at Iron Mountain, but it became very expensive so we brought it on premises.”

Laxamana said he looked for a better data protection system for two years before finding Quorum. Amvac looked at Commvault but found it too expensive and not user-friendly enough. Laxamana and his team also looked at Unitrends. At the time, Veeam Software only supported virtual machines, and Amvac needed to protect physical servers. Laxamana said Unitrends was the closest that he found to Quorum OnQ.

“The biggest [plus] with Quorum was that the interface was much more user-friendly,” he said. “It’s more integrated. With Unitrends, you need a third party to integrate Microsoft Exchange.”

Panasas storage, director blades split into separate devices

Panasas has revamped its scale-out NAS, adding a separate hardware appliance to disaggregate ActiveStor director blades from its hybrid arrays of the same name.

The Panasas storage rollout encompasses two interrelated products with different launch dates. The ActiveStor Hybrid 100 (ASH-100), the latest generation of Panasas’ hybrid storage, is due for general availability in December. The ASH-100 uses solid-state drives to accelerate metadata requests.

The new product entry is the ActiveStor Director 100 (ASD-100), a control-plane engine that sits atop a rack of ActiveStor arrays. ASD-100 director blade appliances are scheduled for release by March 2018, in tandem with the vendor’s PanFS 7.0 parallel file system.

The ASH-100 array and ASD-100 blade appliance are compatible with ActiveStor AS18 and AS20 systems. Until now, Panasas integrated director blades in a dedicated slot on the 11-slot array chassis.

Addison Snell, CEO of IT analyst firm Intersect360 in Sunnyvale, Calif., said adding a separate metadata server allows Panasas to expand on its PanFS parallel file system.

“The reason this is important is that different levels of workloads will require different levels of performance,” Snell said. “Panasas lets you right-size your metadata performance to your application. Enterprise storage increasingly is migrating to different things that are classified as high-performance workloads, beyond the traditional uses. You’ve got big data, AI and machine learning starting to take off. The attention has turned to ‘How do I achieve reliable performance at scale so that I can tailor to my individual workload?'”

The revamp improves performance of high-performance computing and hyperscale workloads, especially seeking and opening lots of small files, said Dale Brantley, a Panasas director of systems engineering.

“This is a disaggregated director appliance that lets you unlock the full functionality of the software contained within. You will be able to cache millions or tens of millions of entries in the Director’s memory, rather than doing memory thrashing,” Brantley said.

“These products together allow us to tailor the environment more for specific workloads. Our customers are using more small-file workloads. This is just one more workload that the HPC cluster has to support. This will be a foundational platform for our next-generation systems.”
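The directory-entry caching Brantley describes can be approximated with a bounded least-recently-used (LRU) map: hot entries are answered from memory, and only misses fall through to the backing store. This is a toy sketch of that pattern, not Panasas code:

```python
from collections import OrderedDict

class MetadataCache:
    """Toy bounded LRU cache for file-metadata entries (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: OrderedDict = OrderedDict()

    def get(self, path: str):
        if path not in self.entries:
            return None                       # miss: caller must hit the backend
        self.entries.move_to_end(path)        # refresh LRU position
        return self.entries[path]

    def put(self, path: str, meta: dict) -> None:
        self.entries[path] = meta
        self.entries.move_to_end(path)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least-recently-used entry

cache = MetadataCache(capacity=2)
cache.put("/a", {"size": 1})
cache.put("/b", {"size": 2})
cache.get("/a")               # touch /a so /b becomes the eviction victim
cache.put("/c", {"size": 3})  # capacity exceeded: /b is evicted
```

Sizing such a cache to hold tens of millions of entries in the director's memory is what avoids the "memory thrashing" Brantley mentions.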

Panasas' storage stack
The Panasas ASD-100 director blade sits atop the vendor’s ActiveStor Hybrid storage, allowing customers to scale them separately.

Panasas storage protocol reworks memory allocation for streaming

ASH-100 uses a system-on-a-chip CPU design based on an Intel Atom C2558 processor. The 2U Panasas storage array tops out at roughly 57 PB of raw capacity with 200 populated shelves: each shelf scales to 264 TB of disk storage and 21 TB of flash, and 200 shelves at up to 285 TB apiece works out to about 57 PB.

All I/O requests are buffered in RAM. Each ASH-100 blade includes a built-in 16 GB DDR3 RAM card to speed client requests. A new feature is the ability to independently scale HDDs and SSDs of varying capacities in the ASH-100 box.

Brantley said changes to the Linux kernel in recent years have hindered the streaming capability of large file systems. To compensate, Panasas wrote code that enables its DirectFlow parallel file system protocol in PanFS to enhance read-ahead techniques and boost throughput.
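Read-ahead of the kind described here can be sketched generically: fetch a window of blocks ahead of where the consumer is reading, so sequential streaming never stalls on the next request. The sketch below is a simplified, synchronous illustration of the technique, not Panasas' DirectFlow implementation:

```python
from collections import deque
from typing import Callable, Iterator

def read_ahead(fetch_block: Callable[[int], bytes],
               num_blocks: int, window: int = 4) -> Iterator[bytes]:
    """Yield blocks 0..num_blocks-1 in order, keeping up to `window`
    blocks prefetched in a buffer ahead of the consumer."""
    buffer: deque = deque()
    next_to_fetch = 0
    for _ in range(num_blocks):
        # Top up the prefetch buffer before handing out the next block.
        while next_to_fetch < num_blocks and len(buffer) < window:
            buffer.append(fetch_block(next_to_fetch))
            next_to_fetch += 1
        yield buffer.popleft()

# Usage: stream 10 simulated 4-byte blocks through the prefetcher.
blocks = [bytes([i]) * 4 for i in range(10)]
data = b"".join(read_ahead(lambda i: blocks[i], len(blocks)))
```

In a real file system the fetches would be issued asynchronously; the point of the sketch is only the windowed ordering.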

The ASD-100 Director appliance is a 2U four-node chassis with 96 GB of DDR4 nonvolatile dual-inline memory modules (NVDIMM) to protect metadata transactions. Previous ActiveStor blades used an onboard battery to back up DRAM as persistent cache for the metadata logs.

Brantley said Panasas storage engineers wrote a SNIA-compatible NVDIMM driver that they will share with the FreeBSD operating system community. PanFS 7.0 is slated to pick up those FreeBSD updates, along with an improved NFS server implementation, a dynamic GUI and aids for implementing NFS on Linux servers.

For Sale – PC Parts – I5 2500K, Gigabyte MB, 16GB XMS3 Ram, Samsung Blu Ray Drive, 2TB HDD

Up for sale is a bunch of PC hardware (unboxed).

Corsair XMS3 4 x 4GB (16GB) DDR3 RAM 1600MHz: £90
Gigabyte GA-Z68X-UD3H-B3 motherboard: £80
Intel Core i5-2500K 3.30GHz: £65
Samsung SH-B123 SATA LightScribe BD-ROM/DVD writer: £35
Western Digital WD20EARX 2TB Caviar Green quiet SATA 6Gb/s IntelliPower 64MB cache 8ms HDD: £40

All tested and fully working

Price and currency: £35
Delivery: Delivery cost is included within my country
Payment method: Paypal
Location: Derby
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.


Switch used as container platform in software-based networking

LAS VEGAS — The move from hardware- to software-based networking in the data center is likely to result in more applications running in top-of-rack white box switches. Some of the open source technology that’s likely to make that happen was highlighted at this month’s Future:Net networking conference.

Programs running on a commodity top-of-rack (ToR) switch would likely be in the form of containers or microservices to avoid the overhead necessary in running a full-blown application. Presenters at the event, held at VMware’s VMworld conference, focused on technology that could become the foundation for running the mini-programs on a switch’s Linux-based network operating system.

Piling Layer 4-7 network services — such as virtual firewalls, load balancers and WAN acceleration — on a ToR switch would be “extremely important” to increase the value of software-based networking, said John Fruehe, a TechTarget contributor and independent networking analyst.

“If you’re doing it at the top of rack, you’re getting closer to the servers, and you can do a much better job of managing your traffic, managing your security and managing your applications,” he said.

Technology that could become part of a container- or microservices-aware stack includes Cilium, eBPF, Envoy and Istio. Here are the definitions for the foundational technology:

  • Extended Berkeley Packet Filter, or eBPF, is a Linux kernel technology that provides a foundation for developers to build I/O modules and to load and unload the modules without rebooting the host.
  • Cilium uses eBPF to provide an efficient way to define and enforce network-layer and HTTP-layer security policies.
  • Envoy is a high-performance C++ distributed proxy and communication bus that runs alongside any application language or framework. It supports many load-balancing features, such as automatic retries, circuit breaking and global rate limiting.
  • Istio is an open platform for connecting, managing and securing microservices. It can be used to manage traffic flows between the services, enforce access policies and aggregate telemetry data.
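Of the Envoy features listed above, circuit breaking is the easiest to illustrate: after a run of consecutive failures, stop sending traffic to a backend until a cooldown elapses, then cautiously let requests through again. The following is a minimal sketch of the pattern, not Envoy's actual implementation:

```python
import time
from typing import Optional

class CircuitBreaker:
    """Toy circuit breaker: opens after `max_failures` consecutive errors,
    then allows traffic again once `reset_after` seconds have elapsed."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: Optional[float] = None

    def allow(self) -> bool:
        """Return True if a request may be sent to the backend."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None   # half-open: let traffic retry
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        """Record the outcome of a request; trip the breaker on a failure run."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

breaker = CircuitBreaker(max_failures=2, reset_after=30.0)
breaker.record(False)
breaker.record(False)   # second consecutive failure trips the breaker
```

Production proxies such as Envoy layer this idea with per-host statistics, outlier detection and configurable thresholds, but the core state machine is the same.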

Why migrate to software-based networking

The list is a sampling of what could become key technologies in separating networking software from the underlying hardware, a process known as disaggregation. Taking the intelligence out of hardware and placing it in software makes a network more agile and, therefore, more adaptable to changes in cloud computing environments.

Large companies that can afford to build technology not yet fully developed in the open source community are already heavy users of containers and microservices. Examples include major cloud providers, such as Amazon, Google and Microsoft. Those companies are expected to account for 62% of total container deployments by 2020, IDC analyst Gary Chen said during a Future:Net presentation.

Network operators are also at the cutting edge of technology development within software-based networking. AT&T, for example, this year completed a field trial of white box switches that carried customer traffic from Washington, D.C., to San Francisco. The switches, which came from multiple vendors, ran AT&T’s internally developed ToR packet network control software.

The investment major tech companies are making in trailblazing networking is expected to eventually find its way into products that mainstream enterprises can consume. Rather than buy networking hardware filled with proprietary technology, companies will someday have the option of buying components from multiple vendors to piece together a system tailored to buyers’ individual needs.

The transition, however, could take as many as 10 years, Fruehe said. “It’s going to happen slowly, but it’s going to happen.”

Microsoft Surface PCs Gain Ground in the Enterprise

Despite some early stumbles, Microsoft’s decision to enter the PC hardware market appears to be paying off.

The Redmond, Wash., technology giant trails just behind Apple in the business PC laptop and desktop market, according to a Spiceworks survey of nearly 1,000 IT professionals in the U.S., Canada and the U.K. Three percent of organizations use Microsoft-branded PCs, compared with the four percent that use Apple Macs.

“Since jumping into the hardware market with its Surface line, Microsoft has had success making inroads into the tablet and 2-in-1 market,” Peter Tsai, senior technology analyst at Spiceworks, told eWEEK. “Many IT pros praised the hardware’s sleek design, powerful specs in a highly-portable package, and ability to run Windows, which allows employees to be productive while giving IT departments management flexibility.”

The Surface Pro, for example, offers corporate IT buyers a blend of premium packaging and powerful processors. Add long-lasting battery life to the equation and business travelers can legitimately get work done on a cross-country or trans-Atlantic flight.

Further reading

For Microsoft’s hardware ambitions, it also helps that the company is branching out beyond the 2-in-1 form factor. “In the last couple of years, Microsoft has expanded its Surface lineup to include sleek and powerful laptops and all-in-one desktops, providing more options for IT departments that liked Surface tablets and 2-in-1s, but didn’t think they were a good fit for all users,” Tsai noted.

In 2015, the company unveiled its first-ever laptop, the Surface Book. Challenging Apple’s MacBook Pro, the device features a detachable, touch- and stylus-enabled tablet with a graphical power boost provided by an optional discrete GPU (graphics processing unit) subsystem from Nvidia.

To court creative professionals, Microsoft last year launched the high-end Surface Studio all-in-one PC with a high-resolution 28-inch display that folds down to provide users with a drafting table-like experience when they put their Surface Pen stylus to the screen. This spring, in an education-themed media event in New York City, the company took the wraps off the Surface Laptop.

For technology executives who are well served by Microsoft’s software and cloud services offerings, entrusting their PC hardware needs to the company isn’t much of a stretch. “And with the success of Windows 10 and Microsoft Azure, perhaps more IT departments are now willing to give Microsoft hardware a try,” concluded Tsai.

Microsoft is well positioned to capitalize on the momentum, for the near future at least. Over the next 12 months, 15 percent of organizations plan to increase their investments in Microsoft Surface PCs compared to 8 percent for Apple laptops and PCs.

Of course, Microsoft has quite a lot of ground to cover before it can catch up to market leaders Dell and Hewlett Packard (HP).

Dell currently commands nearly half (47 percent) of the business PC market, according to the Spiceworks study. HP ranks second with 21 percent and Lenovo comes in third with 14 percent followed by Apple. Over the next 12 months, organizations plan to increase their spending on Dell and HP PCs by 25 percent and 17 percent, respectively.

IBM cooks up a hardware architecture for tastier cloud-based services

IBM hopes to raise its competitive profile in cloud services when it introduces new hardware and cloud infrastructure by the end of this year or early 2018.

The company will add a new collection of hardware and software products that deliver artificial intelligence (AI) and cloud-based services faster and more efficiently.

Among the server-based hardware technologies are 3D Torus, an interconnection topology for message-passing multicomputer systems, and new accelerators from Nvidia, along with advanced graphics processing unit (GPU) chips. Also included is Single Large Expensive Disk technology, a traditional disk technology currently used in mainframes and all-flash-based storage, according to sources familiar with the company’s plans.

The architecture achieves sub-20-millisecond performance latencies by eliminating routers and switches, and it embeds those capabilities into chips that communicate more directly with each other, one source said.

The new collection of hardware applies some of the same concepts as IBM’s Blue Gene supercomputer, concepts that were also used to create Watson. Like those special-purpose machines, the new system is designed to do one thing: deliver AI-flavored cloud-based services.

These technologies, which can work with both IBM Power and Intel chips in the same box, will be used only in servers housed in IBM’s data centers. IBM will not sell servers containing these technologies commercially to corporate users. The new technologies could reach IBM’s 56 data centers late this year or early next year.

AI to the rescue for IBM’s cognitive cloud

IBM’s cloud business has grown steadily from its small base over the past three to four years to revenues of $3.9 billion in the company’s second quarter reported last month and $15.1 billion over the past 12 months. The company’s annual run rate for as-a-service revenues rose 32% from a year ago to $8.8 billion.

At the same time, sales of the company’s portfolio of cognitive solutions, with Watson at its core, took a step back, falling 1% in the second quarter after 3% growth in this year’s first quarter.

That doesn’t represent a critical setback, but it has caused some concern, because the company hangs much of its future growth on Watson.

Three years ago, IBM sank $1 billion into setting up its Watson business unit in the New York City borough of Manhattan. IBM CEO Ginni Rometty has often cited lofty goals for the unit, claiming Watson would reach 1 billion consumers by the end of 2017, $1 billion in revenues by the end of 2018 and, eventually, $10 billion in revenue by an unnamed date. Achieving those goals requires a steady infusion of AI and machine learning technologies.

IBM executives remain confident, given the technical advancements in AI and machine learning capabilities built into Watson and a strict focus on corporate business users, while competitors — most notably Amazon — pursue consumer markets.

“All of our efforts around cognitive computing and AI are aimed at businesses,” said John Considine, general manager of cloud infrastructure at IBM. “This is why we have made such heavy investments in GPUs, bare-metal servers and infrastructure, so we can deliver these services with the performance levels corporate users will require.”

However, not everyone is convinced that IBM can reach its goals for cognitive cloud-based services, at least in the predicted time frames. It will still be an uphill climb for Big Blue, which is vying with cloud competitors that were faster out of the gate.

Lydia Leong, an analyst with Gartner, could not confirm details of IBM’s upcoming new hardware for cloud services, but pointed to the company’s efforts around a new cloud-oriented architecture dubbed Next Generation Infrastructure. NGI will be a new platform run inside SoftLayer facilities, but it’s built from scratch by a different team within IBM, she said.

My expectation is IBM will not have a long-term speed advantage with this — I’m not even sure they will have a short-term one.
Lydia Leong, analyst, Gartner

IBM intends to catch up to the modern world of infrastructure with hardware and software more like those from competitors Amazon Web Services and Microsoft Azure, and thus deliver more compelling cloud-based services. NGI will be the foundation on which to build new infrastructure-as-a-service (IaaS) offerings, while IBM Bluemix, which remains a separate entity, will continue to run on top of bare metal.

Leong said she is skeptical, however, that any new server hardware will give the company a performance advantage to deliver cloud services.

“My expectation is IBM will not have a long-term speed advantage with this — I’m not even sure they will have a short-term one,” Leong said. “Other cloud competitors are intensely innovative and have access to the same set of technologies and tactical ideas, and they will move quickly.”

IBM has stumbled repeatedly with engineering execution in its cloud portfolio, which includes last year’s launch and demise of a new IaaS offering, OpenStack for Bluemix. “[IBM has] talked to users about this [NGI] for a while, but the engineering schedule keeps getting pushed back,” she said.

IBM now enters the cloud infrastructure market extremely late — and at a time when the core infrastructure war has been mostly won, Leong said. She suggested IBM might be better served to avoid direct competition with market leaders and focus its efforts where it has an established advantage and can differentiate with things like Watson.

Big Switch-HPE partnership opens chances for enterprise SDN

Hewlett Packard Enterprise could better its chances of taking its open switching hardware to mainstream enterprises through a partnership with Big Switch Networks Inc., a developer of software-based networking products.

The companies announced this week that HPE would sell and support Big Switch software on its Altoline switches, which also run products from Big Switch rival Cumulus Networks. The Big Switch-HPE partnership will make Big Switch’s products available on Altoline 6960, 6941 and 6921 leaf-spine switches.

Adding Big Switch as an option on Altoline hardware is important because the vendor — along with Cumulus — is starting to gain traction among enterprises, according to Gartner’s latest Magic Quadrant for Data Center Networking. Today, the biggest users of products in which networking software is separate from the underlying hardware are financial institutions, carriers and cloud providers operating data centers many times larger than the average enterprise data center.

“To date, we’ve only observed HPE take Altoline into extra, extra-large environments,” said Gartner analyst Andrew Lerner. “So, this [latest partnership] certainly could have a big impact on the market, but only if HPE positions the Big Switch offerings prominently in the enterprise.”

Big Switch offerings in HPE partnership

HPE will offer two Big Switch products: Big Cloud Fabric and Big Monitoring Fabric (BMF). The latter is a network packet broker, and the former is orchestration software for switches in a network fabric.

Big Switch has sold BMF to a small number of enterprises that use the product in noncritical applications, according to Gartner. Because BMF is relatively risk-free, Big Switch has used it to build trust with skittish customers.

HPE’s credibility with businesses could bolster Big Switch’s reputation in the market. Gartner said it believes HPE’s primary long-term role in data center networking will be as a systems integrator and reseller, versus a direct competitor to other vendors.

Last year, HPE announced a reseller agreement with Arista, which provides switches mostly to operators of mega-scale data centers. With Big Switch, HPE can sell BMF to enterprises without competing with the Arista-HPE partnership.

“The Big Switch NPB [network packet broker] offering is a stand-alone and dedicated use case that HPE could position in a way that doesn’t conflict with Arista,” Lerner said. “So, there’s a bit more potential enterprise impact with BMF on Altoline.”

Gartner has seen more enterprises using software-based networking products that run on another vendor’s hardware. The analyst firm estimated 1,000 companies have licensed the bundles for production environments.

Roughly 30% of Cumulus’ more than 500 paying customers are enterprises, according to Gartner. The rest are service providers. Cumulus’ flagship product is a Linux-based network operating system, called Cumulus Linux.

Lumina makes debut as SDN consultant

In other SDN-related news, startup Lumina Networks Inc. made its debut in the market with an SDN controller bought from Brocade Communications Systems Inc., which is in the process of selling itself to chipmaker Broadcom.

The controller, which is 100% open source, is built on specifications set by the OpenDaylight Project. The group focuses on creating an open source code base for the major components of an SDN platform. Communication service providers are the primary users of OpenDaylight-based technology.

“Lumina is more of a network-operator play, and not large enterprise,” Lerner said.

Lumina said it would contribute future enhancements to the controller back to the open source community. The company’s revenues will mostly come from network development services that help service providers build production systems based on OpenDaylight.

Andrew Coward, who was Brocade’s vice president of business strategy, is CEO of Lumina, which is headquartered in San Jose, Calif.