For Sale – £1199 – 15 Inch MacBook Pro (2017) [i7 2.9GHz, RX 560, 512GB, AppleCare + extras!]

Hi Richard, no problem at all – happy to answer any questions

@OldG Nitro

Why was the MacBook originally sent back to Apple for repair (when the damage to the screen was done etc) & has it ever been sent back to them other than that time?

I turned on the laptop one time and one of the speakers fuzzed and just blew. Bit of an out of the ordinary issue, but I took it back to Apple to fix. They replaced what’s known as the top cover (keyboard, trackpad, battery, speakers and IO ports), but in the process damaged the display and the headphone jack. I then returned the laptop, where they replaced the display and top cover again, but this didn’t resolve the problem with the headphone jack, so in the end they replaced the logic board (CPU, GPU, RAM, SSD and controllers for IO devices). Essentially, everything inside this laptop has been replaced at some point.​

Have you had any other issues with it?

Other than the above problems, nothing​

Are there any issues with the screen, dead pixels etc?

None from what I can see – there may be the odd hairline scratch that you can pick up in specific lighting conditions, but this would be because I clean the display fairly regularly.​

Any problems with the keyboard (from what I’ve seen a known issue)?

I haven’t personally experienced any, but as I’ve mentioned, this laptop didn’t get much use as I’ve got a desktop for when I’m home, and work provided me with a 15 inch pro.​

Is the AppleCare transferable to a new owner?

From memory, it acts like an extended version of the original warranty, and is tied to the serial number of the machine. However, should anything go wrong, I’d be more than happy to help out with any problems.​

I forgot to add, can you please take a couple of close up photos of the 2 tiny marks you pointed out on the case?

I’ll get a couple of photos taken as soon as possible for you ​

A couple of other closing thoughts: I now use a Surface Book for all of my university work (I’m able to take notes in class without hoarding tonnes of paper handouts), and by far, the more premium machine is the MacBook. If I could afford to keep both, I absolutely would. The only reason I use the Surface device at the moment is because of the ability to annotate my lecture handouts, of which I have a lot. In every other sense, the MacBook is a better device.

For Sale – £1700 – Alienware 17 R4 Laptop and Graphics Amp – i7, GTX1080, 16GB, 2x SSD, QHD 1440p 120Hz G-Sync

Selling my ‘beloved’ Alienware 17 R4.

Great machine, runs anything you can throw at it. Outstanding specification, including the screen.
Never overclocked. I have been the sole owner from new. Great condition, no damage etc., really looked after this.

Extras:

  • Alienware Graphics Amplifier (external GPU box) also included, empty, so you could upgrade the GPU if you wanted.
  • Alienware branded neoprene carry case and original box included.

Any questions, please ask.

Looking for: £1,800 now £1,700

Techradar review link here:

Due to the price and weight, I am looking for collection, and payment via bank transfer.

Specs below:
CPU: 2.9GHz Intel Core i7-7820HK (quad-core, 8MB cache, overclocking up to 4.4GHz)
Graphics: Nvidia GeForce GTX 1080 (8GB GDDR5X VRAM); Intel HD Graphics 630
RAM: 16GB DDR4 (2,400MHz)
Screen: 17.3-inch QHD (2,560 x 1,440), 120Hz, TN anti-glare at 400 nits; Nvidia G-Sync; Tobii eye-tracking
Storage: 512GB SSD (M.2 NVME), 1TB SSD WD Blue (M.2 SATA), 1TB HDD (7,200 RPM)
Ports: 1 x USB 3.0 port, 1 x USB-C port, 1 x USB-C Thunderbolt 3 port, HDMI 2.0, Mini-DisplayPort, Ethernet, Graphics Amplifier Port, headphone jack, microphone jack, Noble Lock
Connectivity: Killer 1435 802.11ac 2×2 Wi-Fi; Bluetooth 4.1
Camera: Alienware FHD camera with Tobii IR eye-tracking
Weight: 9.74 pounds (4.42kg)
Size: 16.7 x 13.1 x 1.18 inches (42.4 x 33.3 x 3cm; W x D x H)


For Sale – 2015 MacBook Pro 15″, i7 2.2 GHz, 16GB RAM, 256GB SSD, Nifty Drive + Covers


On-premises server monitoring tools meet business needs, budget

Although the market has shifted and more vendors now provide cloud-based monitoring, there is still a wide range of feature-rich server monitoring tools for organizations that must keep their workloads on site for security and compliance reasons.

Here we examine open source and commercial on-premises server monitoring tools from eight vendors. Although these products broadly achieve the same IT goals, they differ in their approach, complexity of setup — including the ongoing aspects of maintenance and licensing — and cost. 

Cacti

Cacti is an open source network monitoring and graphing front-end application for RRDtool, an industry-standard open source data logging tool. RRDtool is the data collection portion of the product, while Cacti handles network graphing for the data that’s collected. Since both Cacti and RRDtool are open source, they may be practical options for organizations that are on a budget. Cacti support is community-driven.

Cacti can be ideal for organizations that already have RRDtool in place and want to expand on what it can display graphically. For organizations that don’t have RRDtool installed, or aren’t familiar with Linux commands or tools, both Cacti and RRDtool could be a bit of a challenge to install, as they don’t include a simple wizard or agents. This should be familiar territory for Linux administrators, but may require additional effort for Windows admins. Note that Cacti is a graphing product and isn’t really an alerting or remediation product. 
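
For readers who haven’t worked with RRDtool before, the sketch below shows the basic workflow Cacti automates. It shells out to the standard rrdtool CLI (assumed to be installed); the database name, data source definition and sample value are illustrative only, not a Cacti template.

```python
"""Minimal sketch of the RRDtool workflow that Cacti wraps in a web front end."""
import subprocess
import time

RRD = "router_traffic.rrd"  # illustrative file name

# Create a round-robin database: one COUNTER data source sampled every 300
# seconds, keeping 600 five-minute averages (roughly two days of history).
subprocess.run([
    "rrdtool", "create", RRD,
    "--step", "300",
    "DS:octets_in:COUNTER:600:0:U",
    "RRA:AVERAGE:0.5:1:600",
], check=True)

# Store one sample -- in a Cacti deployment the poller gathers this via SNMP.
subprocess.run(["rrdtool", "update", RRD, f"{int(time.time())}:1234567"], check=True)

# Render a PNG from the stored series -- the part Cacti exposes in its web UI.
subprocess.run([
    "rrdtool", "graph", "traffic.png",
    f"DEF:in={RRD}:octets_in:AVERAGE",
    "LINE1:in#0000FF:inbound octets",
], check=True)
```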

ManageEngine Applications Manager

The ManageEngine system is part of an extensive line of server monitoring tools that includes application-specific tools as well as cloud and mobile device management. The application monitoring framework enables organizations to purchase agents for applications from various vendors, such as Oracle and SAP, as well as for custom applications. These server monitoring tools enable admins to perform cradle-to-grave monitoring, which can help them troubleshoot and resolve application server issues before they affect end-user performance. ManageEngine platform strengths include its licensing model and the large number of agents available. Although the per-device monitoring license covers whatever interfaces or sensors a device needs, the agents are sold individually.

Thirty-day trials are available for many of the more than 100 agents. Licensing costs range from less than $1,000 for 25 monitors and one user to more than $7,000 for 250 monitors with one user and an additional $245 per user. Support costs are often rolled into the cost of the monitors. This can be ideal for organizations that want to make a smaller initial investment and grow over time.

Microsoft System Center Operations Manager

The product monitors servers, enterprise infrastructure and applications, such as Exchange and SQL Server, and works with both Windows and Linux clients. Microsoft System Center features include configuration management, orchestration, VM management and data protection. System Center’s coverage of third-party applications isn’t as expansive as its support for native Microsoft applications. System Center uses core-based licensing to match the Windows Server 2016 and later licensing models.

The base price for Microsoft System Center Operations Manager starts at $3,600, assuming two CPUs and 16 cores total, and can be expanded with core pack licenses. With Microsoft licensing, the larger the environment in terms of CPU cores, the more a customer site can expect to pay. While Microsoft offers a 180-day trial of System Center, this version is designed for larger Hyper-V environments. Support depends on the contract the organization selects.

Nagios Core

Nagios Core is free open source software that provides metrics to monitor server and network performance. Nagios can help organizations provide increased server, services, process and application availability. While Nagios Core comes with a graphical front end, the scope of what it can monitor is somewhat limited. But admins can deploy additional community-provided front ends that offer more views and additional functionality. Nagios Core natively installs and operates on Linux systems and Unix variants.
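
Nagios Core collects those metrics through check plugins: small executables that print a one-line status and report state through their exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The sketch below, with made-up disk thresholds, shows the general shape of such a plugin; it isn’t one of the official Nagios Plugins.

```python
#!/usr/bin/env python3
"""Sketch of a Nagios-style check plugin with hypothetical disk thresholds."""
import shutil
import sys

WARN_PCT = 80  # assumed warning threshold (% of disk used)
CRIT_PCT = 90  # assumed critical threshold (% of disk used)


def main() -> int:
    usage = shutil.disk_usage("/")
    pct_used = usage.used / usage.total * 100
    # Nagios reads the first line of output and maps the exit code to a state.
    if pct_used >= CRIT_PCT:
        print(f"DISK CRITICAL - {pct_used:.1f}% used on /")
        return 2
    if pct_used >= WARN_PCT:
        print(f"DISK WARNING - {pct_used:.1f}% used on /")
        return 1
    print(f"DISK OK - {pct_used:.1f}% used on /")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```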

For additional features and functionality, the commercial Nagios XI product offers true dashboards, reporting, GUI configuration and enhanced notifications. Pricing for the commercial version runs to less than $7,000 for 500 nodes, plus an additional $1,500 per enterprise for reporting and capacity planning tools. In addition to agents for OSes, users can also add network monitoring for a single point of service. Free 60-day trials and community support are available for the products that work with the free Nagios Core download.

Opsview

Opsview system monitoring software includes on-premises agents as well as agents for all the major cloud vendors. While the free version monitors up to 25 hosts, the product’s main benefit is that it can support both SMBs and the enterprise. Pricing for a comprehensive offering that includes 300 hosts, reporting, multiple collectors and a network analyzer is less than $20,000 a year, depending on the agents selected.

Enterprise packages are available via custom quote. The vendor offers both on-premises and cloud variations. The list of agents Opsview can monitor is one of the most expansive of any of the products, bridging cloud, application, web and infrastructure. Opsview also offers a dedicated mobile application. Support for most packages is 24/7 and includes customer portals and a knowledgebase.

Paessler PRTG Network Monitor

PRTG can monitor everything from the infrastructure to the application stack. PRTG Network Monitor licensing follows a sensor-based model rather than a node, core or host model, which means a traditional host might have more than 20 sensors monitoring anything from CPU to bandwidth. Sensors range from network and bandwidth monitoring to more application-specific checks, such as low free space on Microsoft OneDrive or Dropbox. A fully functional 30-day demo is available, and pricing ranges from less than $6,000 for 2,500 sensors to less than $15,000 for an unlimited number of sensors. Support is email-based.
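
As a rough back-of-the-envelope check of how sensor-based licensing adds up, the calculation below combines the article’s 20-sensors-per-host example with an assumed host count; the tier sizes are the ones quoted above, not a current Paessler price list.

```python
# Back-of-the-envelope PRTG-style sensor estimate (illustrative numbers only).
hosts = 100              # assumed number of monitored hosts
sensors_per_host = 20    # article's example: CPU, bandwidth, services, etc.

total_sensors = hosts * sensors_per_host
print(f"Estimated sensors required: {total_sensors}")          # 2000
print(f"Fits the 2,500-sensor tier: {total_sensors <= 2500}")  # True
```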

SolarWinds Server and Application Monitor

SolarWinds offers more than 1,000 monitoring templates for various applications and systems, such as Active Directory, as well as several virtualization platforms and cloud-based applications. It also provides dedicated virtualization, networking, database and security monitoring products. In addition to standard performance metrics, SolarWinds provides application response templates to help admins with troubleshooting. A free 30-day trial is available. Pricing for 500 nodes is $73,995 and includes a year of maintenance.

Zabbix

This free, open source, enterprise-scale monitoring product includes an impressive number of agents that admins can download. Although most features aren’t point and click, the dashboards are similar to those of other open source platforms and are more than adequate. Given the zero cost of entry and the sheer number of agents, this could be an ideal product for organizations that have the time and Linux experience to bring it online. Support is community-based, and additional support can be purchased from a reseller.

The bottom line on server monitoring tools

The products examined here differ slightly in size, scope and licensing model. Outside of the open source products, many commercial server monitoring tools are licensed by node or agent type. It’s important that IT buyers understand all the possible options when getting quotes, as the licensing models can be difficult to compare.

Pricing varies widely, as do the dashboard features of the various server monitoring tools. Ensure the staff is comfortable with the dashboard and alerting functionality of each system, as well as its mobile access and notification options. If an organization chooses an open source platform, keep in mind that the installation could require more effort if the staff isn’t Linux savvy.

The dashboards of the open source monitors typically aren’t as graphical as those of the paid products, but that’s part of the tradeoff with open source. Many of the commercial products are cloud-ready, so even if an organization doesn’t plan to monitor its servers in the cloud today, it can take advantage of this capability in the future.


2019 storage mergers and acquisitions covered by clouds

Most of the enterprise storage-related mergers and acquisitions that happened or closed in 2019 had a cloud twist.

Take IBM’s $34 billion blockbuster acquisition of Red Hat. That was about “resetting the hybrid cloud landscape” with access to the “world’s largest open source community,” IBM CEO Ginni Rometty said in October 2018 of the proposed deal. The acquisition closed in July 2019.

Although storage was hardly the impetus for the acquisition, IBM now has Red Hat’s open source-based storage portfolio. That includes the Gluster file system, Ceph multiprotocol software-defined storage, and the OpenShift Container Storage and Hyperconverged Infrastructure products, which are well suited to cloud use.

OpenText’s $1.45 billion purchase of cloud-based data protection, disaster recovery (DR) and endpoint security provider Carbonite in November heads the list of 2019 backup acquisitions. The Waterloo, Canada-based information management vendor completed the acquisition on Dec. 24. 

Earlier in the year, Carbonite factored into another one of the biggest 2019 storage-related mergers and acquisitions. The Boston-based provider bought cybersecurity firm Webroot for $618.5 million to address ransomware threats and bolster endpoint protection.

Cloud providers’ mergers and acquisitions

Public cloud providers AWS and Google each acquired multiple startups specializing in data storage or migration. Amazon purchased Israel-based startup CloudEndure, an AWS Advanced Technology Partner, to expand its capabilities in application workload and data migration, backup and DR. CloudEndure’s key technologies include continuous data replication to speed DR in the cloud.

AWS scooped up another Israeli startup, NVMe flash specialist E8 Storage, over the summer. E8’s arrays feature NVMe solid-state drives (SSDs) to target analytics and other data-intensive workloads requiring low latency. The startup’s technology includes an NVMe-over-TCP implementation integrated into the operating system. E8 also sold its software for use with various industry-standard servers.

Google also bought a pair of Israeli startups in 2019. In July, Google fortified its enterprise-class file storage with the acquisition of Elastifile. Google previously collaborated with the startup on a managed file storage service that Elastifile CEO Erwan Menard said would provide higher performance, greater scale-out capacity and more enterprise-grade features than Google’s Cloud Filestore. Google said engineers would integrate the Elastifile and Cloud Filestore technology.

Earlier in 2019, Google picked up Alooma for its enterprise data migration capabilities. The transaction happened less than a year after Google added Velostrata, another Israeli startup that specializes in cloud migration. Alooma’s tool focuses on shifting data from databases and enterprise applications to a single data warehouse, whereas Velostrata can move entire VM-based databases and applications to the cloud.  

HPE buys MapR, Cray

Hewlett Packard Enterprise’s August purchase of struggling Hadoop distributor MapR included a cloud angle. HPE said MapR’s enterprise-grade file system and cloud storage services would complement the BlueData container platform it acquired in November 2018. HPE said the combination would enable users to combine artificial intelligence (AI), machine learning and analytics data pipelines across on-premises, hybrid and multi-cloud environments.

HPE’s biggest 2019 transaction with a storage component was its $1.4 billion acquisition of supercomputing heavyweight Cray. HPE identified high-performance computing (HPC) as a key component of its strategic direction to target organizations that run AI, machine learning and big data analytics workloads.

Flash-related mergers and acquisitions

Flash played a key role in several 2019 storage-related mergers and acquisitions. Pure Storage bought Swedish file software startup Compuverde for $48 million in April to turn its flagship FlashArray into a unified storage system. Pure said the unified FlashArray would target workloads such as enterprise file sharing, databases over the NFS and SMB file protocols, and VMware over NFS.

Compuverde was Pure’s second acquisition since August 2018, when the flash pioneer bought data deduplication software startup StorReduce. Pure integrated the StorReduce technology into its FlashBlade system, which targets AI, machine learning, analytics and HPC workloads.

DataDirect Networks (DDN) continued its storage expansion with the September acquisition of Western Digital’s IntelliFlash business unit. The IntelliFlash purchase adds NVMe- and SAS-based flash hardware and accompanying software. Western Digital, a leading disk and SSD vendor, said it no longer plans to sell storage systems.

Although DDN’s roots are in storage for HPC environments, the vendor has been broadening its portfolio through acquisition. DDN bought Nexenta in May for its software-defined, hardware-agnostic file, block and object services. In September 2018, DDN completed its $60 million purchase of hybrid flash array vendor Tintri, less than three months after buying Intel’s Lustre File System business.

In mid-2019, StorCentric tacked on sagging NVMe flash system startup Vexata and small and midsize business (SMB) backup software provider Retrospect a week apart. Formed in August 2018, StorCentric is also the parent company of Drobo and Nexsan. Drobo, Nexsan, Retrospect and Vexata operate as separate divisions under StorCentric. Drobo sells direct-attached, NAS and iSCSI SAN systems for SMBs, while Nexsan focuses on block and unified storage and secure archiving.

In August, Toshiba Memory (now known as Kioxia) announced plans to acquire the flash-based SSD business of Taiwan-based Lite-On Technology for $165 million. Acting president and CEO Nobuo Hayasaka said the Lite-On technology would help the company “to meet the projected growth in demand for SSDs in PCs and data centers being driven by the increased use of cloud services.”

Also in August, Virtual Instruments completed its purchase of Metricly, which was formerly called Netuitive. In October, Virtual Instruments changed its name to Virtana and introduced a new SaaS-based CloudWisdom monitoring and cost analysis tool that uses the Metricly technology.

Backup mergers and acquisitions

Data protection vendors kept busy on the mergers and acquisitions front in 2019.

OpenText’s $1.45 billion deal for Carbonite in November was the largest data protection transaction and followed months of rumors about a possible sale. Carbonite’s subscription-based cloud backup protects servers, endpoints and SaaS applications for businesses and consumers.

In September, Commvault spent $225 million on software-defined storage startup Hedvig to converge primary and secondary storage and address the problem of data fragmentation. Hedvig’s scale-out Distributed Storage Platform runs on commodity servers and supports provisioning and management of block, file and object storage across private and public clouds. Commvault plans a phased rollout of the Hedvig software on its HyperScale data protection appliance, with full integration in mid-2021.

Veritas Technologies strengthened its storage analytics and monitoring capabilities through its March acquisition of Aptare. Aptare’s IT Analytics suite includes storage, backup, capacity, fabric, replication and virtualization management components, in addition to file analytics. Aptare IT Analytics will complement the popular Veritas NetBackup and Backup Exec data protection products and InfoScale storage management software.

Other data protection-related mergers and acquisitions in 2019 included: 

  • Cohesity’s May purchase of Imanis Data to enable customers to back up and recover Hadoop and NoSQL workloads and distributed databases, such as MongoDB, Cassandra, Cloudera and Couchbase DB.
  • Druva’s July acquisition of CloudLanes, a hybrid cloud data protection and migration startup, to let customers securely ingest data from on-premises systems, move it to the cloud and restore it locally.
  • Acronis’ December buy of Microsoft Hyper-V and Azure cloud management and security provider 5nine to complement its cyber protection capabilities. Acronis plans to integrate 5nine’s technology into its Cyber Platform and offer new services.
  • Mainframe software specialist Compuware’s December purchase of Innovation Data Processing’s enterprise data protection, business continuance and storage resource management assets. The transaction was Compuware’s sixth mainframe-related software or services acquisition in the last three years.


Using the Windows Admin Center Azure services feature

To drive adoption of its cloud platform, Microsoft is lowering the technical barrier to Azure through the Windows Admin Center management tool.

Microsoft increasingly blurs the lines between on-premises Windows Server operating systems and its cloud platform.

One way the company has done this is by exposing Azure services alongside Windows Server services in the Windows Admin Center. Organizations that might have been reluctant to go through a lengthy deployment process that required PowerShell expertise can use the Windows Admin Center Azure functionality to set up a hybrid arrangement with just a few clicks in some instances.

Azure Backup

One of the Azure services that Windows Server 2019 can use natively is Azure Backup. This cloud service backs up on-premises resources to Azure. This service offers 9,999 recovery points for each instance and is capable of triple redundant storage within a single Azure region by creating three replicas.

Azure Backup can also provide geo-redundant storage, which insulates protected resources against regional disasters.

You access Azure Backup through the Windows Admin Center, as shown in Figure 1. After you register Windows Server with Azure, setting up Azure Backup takes four steps.

Figure 1: The Windows Admin Center walks you through the steps to set up Azure Backup.

Microsoft designed Azure Backup to replace on-premises backup products. Organizations may find that Azure Backup is less expensive than their existing backup system, but the opposite may also be true. The costs vary widely depending on the volume of data, the type of replication and the data retention policy.

Azure Active Directory

Microsoft positions the Windows Admin Center as one of the primary management tools for Windows Server. Because sensitive resources are exposed within the Windows Admin Center console, Microsoft offers a way to add an extra layer of security through Azure Active Directory.

When you enable the Azure Active Directory security requirement, you must authenticate to both the local machine and Azure Active Directory.

To use Azure Active Directory, first register the Windows Server with Azure. You can then require Azure Active Directory authentication by opening the Windows Admin Center, clicking the Settings icon and selecting the Access tab. Figure 2 shows a simple toggle switch to turn Azure Active Directory authentication on or off.

Figure 2: The toggle switch in the Windows Admin Center sets up Azure Active Directory authentication.

Azure Site Recovery

Azure Site Recovery replicates machines running on-premises to the Microsoft Azure cloud. If a disaster occurs, you can fail over mission-critical workloads to use the replica VMs in the cloud. Once on-premises functionality returns, you can fail back workloads to your data center. Using the Azure cloud as a recovery site is far more cost-effective than building your own recovery data center, or even using a co-location facility.

Like other Azure services, Azure Site Recovery is exposed through the Windows Admin Center. To use it, the server must be registered with Azure. Although Hyper-V is the preferred hosting platform for use with Azure Site Recovery, the service also supports the replication of VMware VMs. The service also replicates between Azure VMs.

To enable a VM for use with the Azure Site Recovery services, open the Windows Admin Center and click on the Virtual Machines tab. This portion of the console is divided into two separate tabs. A Summary tab details the host’s hardware resource consumption, while the Inventory tab lists the individual VMs on the host.

Click on the Inventory tab and then select the checkbox for the VM you want to replicate to the Azure cloud. You can select multiple VMs and there is also a checkbox above the Name column to select all the VMs on the list. After selecting one or more VMs, click on More, and then choose the Set Up VM Protection option from the drop-down list, shown in Figure 3.

Figure 3: To set up replication to Azure with the Azure Site Recovery service, select one or more VMs and then choose the Set Up VM Protection option.

The console will open a window to set up the host with Azure Site Recovery. Select the Azure subscription to use, then create or select a resource group and a recovery vault. You will also need to select a location, as shown in Figure 4.

Figure 4: After you select the VMs to protect in Azure Site Recovery, finalize the process by selecting a location in the Azure cloud.

Storage Migration Service

The Storage Migration Service migrates the contents of existing servers to new physical servers, VMs or the Azure cloud. This can help organizations reduce costs through workload consolidation.

You access the Storage Migration Service by selecting the Storage Migration Service tab in the Windows Admin Center, which opens a dialog box outlining the storage migration process as shown in Figure 5. The migration involves getting an inventory of your servers, transferring the data from those servers to the new location, and then cutting over to the new server.

Figure 5: Microsoft developed Storage Migration Services to ease migrations to new servers, VMs or Azure VMs through a three-step process.

As time goes on, it seems almost inevitable that Microsoft will update the Windows Admin Center to expose even more Azure services. Eventually, this console will likely provide access to all of the native Windows Server services and all services running in Azure.


For Sale – Dell 9370 i7 FHD, Dell 7590 i7 UHD touch, Alienware m15 r1 GTX 1070

Hi Adam

Your answer sort of tipped your hand. I’d be a fair way below what you’re asking, so if you think the current price is keen, we’ll be too far apart.

It sort of solves a problem for me, as I’d prefer the 8 core one, even if I have to wait a few months to find one (patient and frugal are my middle names).

Have a good NY and GLWS


Vendors detail cloud-based backup past, present, future

It’s safe to say cloud-based backup has gone mainstream.

In the last five years, cloud backup grew from something that organizations often greeted with skepticism to a technology that’s at least a part of many businesses’ data protection plans.

Some of that evolution is a result of users getting more comfortable with the idea of backing up data in the cloud and the security there. Some of it is a result of vendors adding functionality such as security, backup of cloud-born software as a service (SaaS) data and other enhancements. Challenges remain, though.

In part one of this feature, several experts in cloud-based backup detailed how the market has developed and what businesses can expect in the years to come. In part two, executives from backup vendors, including cloud backup pioneers, discuss their impressions of the past, present and future of the technology.

How has cloud-based backup evolved in the last five years?

Eran Farajun, executive vice president, Asigra: [Cloud-based backup has] become a lot more mainstream as a service.

Because it’s become so popular, it’s become a target. So, it’s moved from being a defensive mechanism to being an attack vector. It’s a way that people get attacked, which has driven even more evolution in the last two or three years, where cloud backup now has to include security and safety elements. Otherwise, you’re not going to be able to recover confidently with it.

Hal Lonas, CTO, Carbonite: We have seen a rise in the popularity of cloud, especially over the past five years as it becomes a more scalable and economical solution for businesses — particularly SMBs that are expanding rapidly. It has also been highly embraced by the service provider and solution market.

Public cloud has also come a long way, especially among highly regulated industries such as healthcare and finance. We’re seeing these organizations turn to the cloud more frequently than before, as it provides an easier and more cost-effective way to meet their recovery time objective and recovery point objective requirements.

Danny Allan, CTO, Veeam: The first perspective of customers was, ‘I’ll just take my backups and [move them] to the cloud,’ and there wasn’t really thought given to what that meant.

We’ve become a lot more efficient about the data movement, both in and out, and secondly, there are now options that didn’t exist in the past. If you need to recover data in the cloud, you can, or you can recover back on premises. And if you are recovering it back on premises, you can do that efficiently.

Oussama El-Hilali, CTO, Arcserve: [There has been] tremendous evolution both in quantity and quality of the cloud backup. We’ve seen a number of vendors emerge to provide backup to the cloud. We’ve seen the size of the backups grow. We’ve seen the number of people who are interested in going to cloud backup grow as well.

I think one of the fundamental things in data protection has been creating the distance between the primary and secondary data, in case of disaster.

Where are we in the story of cloud-based backup? Is it at the height of its popularity?

Farajun: I don’t think it’s at the height. It’s still growing fairly quickly as an overall service. So, it’s not flat; it’s still growing in double-digit figures year over year.

And I think what lends to its popularity is future evolution. It’ll get more secure. It has to be more secure.

There will be new types of workloads that get included as part of your backup service. For example, backing up machines today is fairly common. Backing up containers is not as common today, but it will be in three to five years.

I think cloud backup for SaaS applications [will grow]. A lot of cloud backup services and vendors support Office 365, Salesforce and G Suite, but as more and more end customers adopt more software as a service, the data itself also has to be protected. So, you’ll see more cloud backup functionality and capabilities protect a broader set of SaaS applications beyond just the big ones.

Lonas: The cloud market is mature and is fast becoming the infrastructure of choice for many companies, whether at the SMB or enterprise level. This can be proven with the popularity of Microsoft Azure, AWS and Google along with other cloud providers.

Right now, many still equate cloud with security, and while cloud solves some problems, it is not a complete cure. Rather, we will see more cloud-oriented security solutions protecting cloud assets and addressing their specific issues in the upcoming years.

One of the biggest pain points with cloud adoption today is migrating data to these infrastructures. The good news is that there are a number of tools available now to alleviate the traditional issues related to data loss, hours of downtime and diverted key resources.

Allan: We’re not at the height of its popularity. We’re in early stages of customers sending their data into the cloud. It’s been growing exponentially. I know cloud has been around for 10 years, but it’s only really in the last year that customers are actually sending backup data into the cloud. I would attribute that to intelligent cloud backup — using intelligence to know how to do it and how to leverage it efficiently. 

El-Hilali: It’s a good step, but we’re not at the peak, or anywhere close to the peak.

The reason is that if you look at the cloud providers, whether it’s a public cloud like AWS or companies like us, the features are still evolving. And the refinement is still ongoing.

What do you expect in the cloud backup market in the next five years?

Farajun: I think there will be more consolidation. I think that more of the old-school vendors, the big broad vendors, will continue to add more cloud backup service capability as part of their offerings portfolio. They’ll either acquire companies that do it or they will stand up services that do it themselves. There will be more acquisitions by bigger MSPs that buy smaller MSPs because they deliver cloud backup services and they have the expertise.

I think you’ll see an increase of channel partners bringing [cloud-based backup] back in-house and actually being the service provider instead of just being a broker. And that will happen because it adds more value to their business.

And, unfortunately, I think you’ll see ransomware attacking more and more backup software, whether it’s delivered as a service or on premises, just because it’s so damaging.

Lonas: Looking ahead, we will see cloud backup and data protection continue to gain popularity, especially as businesses implement cyber-resiliency plans.

More organizations now trust the cloud to be available, secure and meet their business needs. We will continue to see Moore’s Law drive down network and storage costs so that businesses can continue to reduce their on-premises footprint. Some of this change is technical, and some is cultural, as most of us trust the cloud in our personal lives more than businesses do; and we expect to see this trend continue to shift for businesses in the future.

Allan: I think there’s going to be a whole emergence of machine learning-based companies that exist only in the cloud, and all they need is access to your data. In the past, what was the problem with machine learning and artificial intelligence on premises? You had to install it on premises to get access to that data or you needed to pick up petabytes of data and get it to that company. If it’s already there, you can imagine a marketplace emerging that will give you value-added services on top of this data.

El-Hilali: I think the potential for DRaaS will continue to grow and I say that because the availability of the data, the spontaneity of recovery, is becoming more of a need than a good-to-have.


AWS AI tools focus on developers

AWS is the undisputed leader in the cloud market. As for AI, the cloud division of tech giant Amazon is also in a dominant position.

“Machine learning is at a place now where it is accessible enough that you don’t need Ph.Ds,” said Joel Minnick, head of product marketing for AI, machine learning and deep learning at AWS.

Partly, that’s due to a natural evolution of the technology, but vendors such as Google, AWS, IBM, DataRobot and others have made strides in making the process of creating and deploying machine learning and deep learning easier.

AWS AI

Over the last few years, AWS has invested heavily in making it easier for developers and engineers to create and deploy AI models, Minnick said, speaking with TechTarget at the AWS re:Invent 2019 user conference in Las Vegas in December 2019.

AWS’ efforts to simplify the machine learning lifecycle were on full display at re:Invent. During the opening keynote, led by AWS CEO Andy Jassy, AWS revealed new products and updates for Amazon SageMaker, AWS’ full-service suite of machine learning development, deployment and governance products.

Those products and updates included new and enhanced tools for creating and managing notebooks, automatically making machine learning models, debugging models and monitoring models.

SageMaker Autopilot, a new AutoML product, in particular, presents an accessible way for users who are new to machine learning to create and deploy models, according to Minnick.

In general, SageMaker is one of AWS’ most important products, according to a blog-post-styled report on re:Invent from Nick McQuire, vice president of enterprise research at CCS Insight. The report noted that AWS, due largely to SageMaker, its machine learning-focused cloud services, and a range of edge and robotics products, is a clear leader in the AI space.

“Few companies (if any) are outpacing AWS in machine learning in 2019,” McQuire wrote, noting that SageMaker alone received 150 updates since the start of 2018.

Developers for AWS AI

In addition to the SageMaker updates, AWS in December unveiled another new product in its Deep series: DeepComposer.

The product series, which also includes DeepLens and DeepRacer, is aimed at giving machine learning and deep learning newcomers a simplified and visual means to create specialized models.

Introduced in late 2017, DeepLens is a camera that enables users to run deep learning models on it locally. The camera, which is fully programmable with AWS Lambda, comes with tutorials and sample projects to help new users. It integrates with a range of AWS products and services, including SageMaker and its Amazon Rekognition image analysis service.

“[DeepLens] was a big hit,” said Mike Miller, director of AWS AI Devices at AWS.

DeepRacer, revealed the following year, enables users to apply machine learning models to radio-controlled (RC) model cars and make them autonomously race along tracks. Users can build models in SageMaker and bring them into a simulated racetrack, where they can train the models before loading them onto a 1/18th-scale race car.
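
Under the hood, a DeepRacer model is trained against a user-written Python reward function that the simulator calls on every step. The sketch below is a follow-the-center-line style example; the params keys it reads (all_wheels_on_track, track_width, distance_from_center) come from AWS’s published samples, so check the current DeepRacer documentation before relying on them.

```python
def reward_function(params):
    """Follow-the-center-line style DeepRacer reward sketch (illustrative)."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward if the car leaves the track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward staying close to the center line, in three widening bands.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3
```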

An AWS racing league makes DeepRacer competitive, with AWS holding yearlong tournaments made up of multiple races. DeepRacer, Miller declared, has been exceedingly successful.

“Tons of customers around the world have been using DeepRacer to engage and upskill their employees,” Miller said.

Dave Anderson, director of technology at Liberty Information Technology, the IT arm of Liberty Mutual, said many people on his team take part in the DeepRacer tournaments.

“It’s a really fun way to learn machine learning,” Anderson said in an interview. “It’s good fun.”

Composing with AI

Meanwhile, DeepComposer, as the name suggests, helps train users on machine learning and deep learning through music. The product comes with a small keyboard that plugs into a PC, along with a set of pretrained music genre models. The keyboard itself isn’t unusual, but by using the models and accompanying software, users can automatically create and tweak fairly basic pieces of music within a few genres.

With DeepComposer, along with DeepLens and DeepRacer, “developers of any skill level can find a perch,” Miller said.

The products fit into Amazon’s overall AI strategy well, he said.

“For the last 20 years, Amazon has been investing in machine learning,” Miller said. “Our goal is to bring those same AI and machine learning techniques to developers of all types.”

The Deep products are just “the tip of the spear for aspiring machine learning developers,” Miller said. Amazon’s other products, such as SageMaker, extend that machine learning technology development strategy.

“We’re super excited to get more machine learning into the hands of more developers,” Miller said.


Cloudian CEO: AI, IoT drive demand for edge storage

AI and IoT are driving demand for edge storage, as data is being created faster than it can reasonably be moved across clouds, object storage vendor Cloudian’s CEO said.

Cloudian CEO Michael Tso said “Cloud 2.0” is giving rise to the growing importance of edge storage, among other storage trends. He said customers are getting smarter about how they use the cloud, and that’s leading to growing demand for products that can support private and hybrid clouds. He also sees increased demand for resiliency against ransomware attacks.

We spoke with Tso about these trends, including the Edgematrix subsidiary Cloudian launched in September 2019 that focuses on AI use cases at the edge. Tso said we can expect more demand for edge storage and spoke about an upcoming Cloudian product related to this. He also talked about how AI relates to object storage, and if Cloudian is preparing other Edgematrix-like spinoffs.

What do you think storage customers are most concerned with now?
Michael Tso: I think there is a lot, but I’ll just concentrate on two things here. One is that they continue to just need lower-cost, easier to manage and highly scalable solutions. That’s why people are shifting to cloud and looking at either public or hybrid/private.

Related to that point is I think we’re seeing a Cloud 2.0, where a lot of companies now realize the public cloud is not the be-all, end-all and it’s not going to solve all their problems. They look at a combination of cloud-native technologies and use the different tools available wisely.

I think there’s the broad brush of people needing scalable solutions and lower costs — and that will probably always be there — but the undertone is people getting smarter about private and hybrid.

Point number two is around data protection. We’re now seeing more and more customers worried about ransomware. They’re keeping backups for longer and longer and there is a strong need for write-once compliant storage. They want to be assured that any ransomware that is attacking the system cannot go back in time and mess up the data that was stored from before.

Cloudian actually invested very heavily in building write-once compliant technologies, primarily for financial and the military market because that was where we were seeing it first. Now it’s become a feature that almost everyone we talked to that is doing data protection is asking for.
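
One common way to get that write-once behavior through the S3 API is Object Lock, which S3-compatible stores such as Cloudian’s can implement. The boto3 sketch below uses made-up endpoint, bucket and file names; verify Object Lock support and your retention policy with the platform you actually run.

```python
"""Sketch: storing a backup object under S3 Object Lock (WORM) via boto3."""
from datetime import datetime, timedelta, timezone

import boto3

# Assumed S3-compatible endpoint; credentials come from the usual boto3 config.
s3 = boto3.client("s3", endpoint_url="https://s3.backup.example.com")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: the object can't be deleted or overwritten by anyone,
# including administrators, until the retain-until date passes.
with open("daily-backup.tar.gz", "rb") as body:
    s3.put_object(
        Bucket="backups",
        Key="daily/2020-01-15.tar.gz",
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```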

People are getting smarter about hybrid and multi-cloud, but what’s the next big hurdle to implementing it?

Tso: I think as people are now thinking about a post-cloud world, one of the problems that large enterprises are coming up against is data migration. It’s not easy to add another cloud when you’re fully in one. I think if there’s any kind of innovation in being able to off-load a lot of data between clouds, that will really free up that marketplace and allow it to be more efficient and fluid.

Right now, cloud is a bunch of silos. Whatever data people have stored in cloud one is kind of what they’re stuck with, because it will take them a lot of money to move data out to cloud two, and it’s going to take them years. So, they’re kind of building strategies around that as opposed to really, truly being flexible in terms of where they keep data.

What are you seeing on the edge?

Tso: We’re continuing to see more and more data being created at the edge, and more and more use cases of the data needing to be stored close to the edge because it’s just too big to move. One classic use case is IoT. Sensors, cameras — that sort of stuff. We already have a number of large customers in the area and we’re continuing to grow in that area.

The edge can mean a lot of different things. Unfortunately, a lot of people are starting to hijack that word and make it mean whatever they want it to mean. But what we see is just more and more data popping up in all kinds of locations, with the need of having low-cost, scalable and hybrid-capable storage.

We’re working on getting a ruggedized, easy-to-deploy cloud storage solution. What we learned from Edgematrix was that there’s a lot of value to having a ruggedized edge AI device. But the unit we’re working on is going to be more like a shipping container or a truck as opposed to a little box like with Edgematrix.

What customers would need a mobile cloud storage device like you just described?

Tso: There are two distinct use cases here. One is that you want a cloud on the go, meaning it is self-contained. It means if the rest of the infrastructure around you has been destroyed, or your internet connectivity has been destroyed, you are still able to do everything you could do with the cloud. The intention is a completely isolatable cloud.

In the military application, it’s very straightforward. You always want to make sure that if the enemy is attacking your communication lines and shooting down satellites, wherever you are in the field, you have the same capability that you have during peacetime.

But the civilian market, especially around disaster response, is another area where we are seeing demand. It’s state and local governments asking for it. In the event of a major disaster, oftentimes for a period they don’t have any access to the internet. So the idea is to run a cloud in a ruggedized unit that is completely stand-alone until connectivity is restored.

AI-focused Edgematrix started as a Cloudian idea. What does AI have to do with object storage?
Tso: AI is an infinite data consumer. Improvement in AI accuracy is on a log scale — the amount of data you need grows exponentially for each additional gain in accuracy. So, a lot of the reason people are accumulating all this data is to run their AI tools and run AI analysis. It’s part of the reason why people are keeping all their data.

Being S3 object store compatible is a really big deal because that allows us to plug into all of the modern AI workloads. They’re all built on top of cloud-native infrastructure, and what Cloudian provides is the ability to run those workloads wherever the data happens to be stored, and not have to move the data to another location.
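
As a concrete illustration of what S3 compatibility buys, the sketch below points a stock boto3 client at a made-up private endpoint and reads training data in place; the endpoint, credentials, bucket and keys are all assumptions, not Cloudian specifics.

```python
"""Sketch: reading training data in place from an S3-compatible object store."""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.edge-site.example.com",  # assumed on-prem/edge endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Enumerate a training-data prefix and pull one object, exactly as an ML data
# loader built on the S3 API would against AWS S3 itself.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="training-data", Prefix="images/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

sample = s3.get_object(Bucket="training-data", Key="images/sample-0001.jpg")["Body"].read()
print(f"Read {len(sample)} bytes")
```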

Are you planning other Edgematrix-like spinoffs?
Tso: Not in the immediate future. We’re extremely pleased with the way Edgematrix worked out, and we’re certainly open to doing more of this kind of spinoff.

We’re not a small company anymore, and one of the hardest things for startups in our growth stage is balancing creativity and innovation with growing the core business. We seem to have found a good sort of balance, but it’s not something that we want to do in volume because it’s a lot of work.
