The growing ties between networking roles and automation

For years now, network professionals have heard they need to adapt to changing technologies or risk extinction. The messages are plentiful:

  • Learn these programming skills to stay relevant.
  • Take these training courses, but don’t get too vendor-focused.
  • Change your mindset.
  • Change your organization’s culture.
  • Change your skill sets to keep up with shifting networking roles and responsibilities.

All of these suggestions are increasingly valid and can prove valuable in the evolving networking industry. In fact, skill sets revolving around network programmability, cloud computing and cybersecurity are central to in-demand IT positions and roles, according to Mark Leary, directing analyst at Learning@Cisco, part of Cisco Services. Leary discussed a Cisco-sponsored report on IT jobs and skill sets released by research firm IDC.

As those networking roles and responsibilities evolve, network professionals are evolving with them. For example, organizations seek IT staff with skills in Python, Java, Linux, development, administration, support and engineering, among others, Leary said. Employees also need to be able to communicate with other teams, including security and development, on cross-team business projects and initiatives.

The dirty word: Automation

This industry and skill set evolution is necessary for the transition to the automated network, according to Zeus Kerravala, founder of ZK Research in Westminster, Mass. Modern network infrastructure doesn’t work well with heavily manual command-line interface configurations, he said during a recent Cisco webinar on the evolution of network engineering. Instead, it uses automation and APIs to deliver services and information.

But automation has traditionally been considered a dirty word that sends employees in all industries into a panic, just like it did in the 1800s and 1900s. Automation was expected to steal jobs and replace human intelligence. But as network automation use cases have matured, Kerravala said, employees and organizations increasingly see how automating menial network tasks can benefit productivity.

To automate, however, network professionals need programming skills to determine the desired network output. They need to be able to tell the network what they want it to do.

All of this brings me to a term that’s integral to automation and network programming: program, which means to input data into a machine to cause it to do a certain thing. Another definition says to program is “to provide a series of instructions.” To give effective instructions, a person must understand the purpose of the instructions being relayed. A person needs the foundation, the why of it all, to get to the actual how.

Regarding network automation, the why is to ultimately achieve network readiness for what the network needs to handle, whether that’s new applications or more traffic, Cisco’s Leary said.

“One of the reasons you develop skills in network programming is to leverage all the automation tools,” he said. “As a result, you’re making use of those technologies and data to make sure your network isn’t just up and available, but [is] now network-ready.”

Vendors have a part in this, too

But network readiness, and the related issue of network programmability, goes beyond skills and the ability to input data, according to Lee Doyle, principal analyst at Doyle Research. The onus also falls on networking vendors to provide products that help professionals in their networking roles.

Yes, we’ve seen the early versions of products focused on achieving expressed intent and outcomes. But we’ve also seen the hazy sheen of marketing fade away to reveal frizzled shreds of hype.

Ultimately, we need to determine what we want to accomplish with our networks and why. This likely results in myriad opinions, but most of us would consider growth beneficial. Learning new things offers the opportunity for more knowledge. Knowledge can benefit the employee, the organization and maybe even society. This idea may gravitate toward the idyllic, but consider some effects of remaining stagnant: irrelevant skills or knowledge, lost productivity and inefficacy.

“A business needs to be agile, but it’s only as agile as its least agile component,” Kerravala said. While Kerravala considered that component to be the network, the network could encompass organizations, vendors and network professionals.

So, I bring these questions to you — the network professional. Do you think you need to learn new skills in order to keep up with shifting networking roles? Do you want to reskill? Or, do you think vendors need to up their game?

Fixing Erratic Behavior on Hyper-V with Network Load Balancers

For years, I’d never heard of this problem. Then, suddenly, I’m seeing it everywhere. It’s not easy to precisely outline a symptom tree for you. Networked applications will behave oddly. Remote desktop sessions may skip or hang. Some network traffic will not pass at all. Other traffic will behave erratically. Rather than try to give you a thorough symptom tree, we’ll just describe the setup that can be addressed with the contents of this article: you’re using Hyper-V with a third-party network load balancer and experiencing network-related problems.

Acknowledgements

Before I ever encountered it, the problem was described to me by one of my readers. Check out our Complete Guide to Hyper-V Networking article and look in the comments section for Jahn’s input. I had a different experience, but that conversation helped me reach a resolution much more quickly.

Problem Reproduction Instructions

The problem may appear under other conditions, but should always occur under these:

  • The network adapters that host the Hyper-V virtual switch are configured in a team
    • Load-balancing algorithm: Dynamic
    • Teaming mode: Switch Independent (likely occurs with switch-embedded teaming as well)
  • Traffic to/from affected virtual machines passes through a third-party load-balancer
    • Load balancer uses a MAC-based system for load balancing and source verification
      • Citrix NetScaler calls its feature “MAC-based forwarding”
      • F5 load balancers call it “auto last hop”
    • The load balancer’s “internal” IP address is on the same subnet as the virtual machine’s
  • Sufficient traffic must be exiting the virtual machine for Hyper-V to load balance some of it to a different physical adapter

I’ll go into more detail later. This list should help you determine if you’re looking at an article that can help you.
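If you want to check the teaming conditions quickly, here’s a minimal PowerShell sketch (assuming the built-in NetLbfo teaming cmdlets):

  # Show each team's teaming mode and load-balancing algorithm
  Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm

If you see SwitchIndependent paired with Dynamic, you’re looking at the affected configuration.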

Resolution

Fixing the problem is very easy and can be done without downtime. I’ll show the options in order of preference and explain the differences and their impact later.

Option 1: Change the Load-Balancing Algorithm

Your best bet is to change the load-balancing algorithm to “Hyper-V port”. You can change it in the lbfoadmin.exe graphical interface if your management operating system is GUI-mode Windows Server. To change it with PowerShell, here’s a minimal sketch (assuming the built-in NetLbfo cmdlets and only one team):
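  # Pipe the lone team into Set-NetLbfoTeam and switch its algorithm
  Get-NetLbfoTeam | Set-NetLbfoTeam -LoadBalancingAlgorithm HyperVPort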

There will be a brief interruption of networking while the change is made. It won’t be as bad as the network problems that you’re already experiencing.

Option 2: Change the Teaming Mode

Your second option is to change your teaming mode. It’s more involved because you’ll also need to update your physical infrastructure to match. I’ve always been able to do that without downtime as long as I changed the physical switch first, but I can’t promise the same for anyone else.

Decide if you want to use Static teaming or LACP teaming. Configure your physical switch accordingly.

Change your Hyper-V host to use the same mode. If your Hyper-V system’s management operating system is Windows Server GUI, you can use lbfoadmin.exe. To change it in PowerShell, a minimal sketch (again assuming only one team), for static teaming:
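  # Switch the lone team to static teaming
  Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Static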

or
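  # Or switch it to LACP
  Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Lacp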

In this context, it makes no difference whether you pick static or LACP. If you want more information, read our article on the teaming modes.

Option 3: Disable the Feature on the Load Balancer

You could tell the load balancer to stop trying to be clever. In general, I would choose that option last.

An Investigation of the Problem

So, what’s going on? What caused all this? If you’ve got an environment that matches the one that I described, then you’ve unintentionally created the perfect conditions for a storm.

Whose fault is it? In this case, I don’t really think that it’s fair to assign fault. Everyone involved is trying to make your network traffic go faster. They sometimes do that by playing fast and loose in that gray area between Ethernet and TCP/IP. We have lots of standards that govern each individually, but not so many that apply to the ways that they can interact. The problem arises because Microsoft is playing one game while your load balancer plays another. The games have different rules, and neither side is aware that another game is afoot.

Traffic Leaving the Virtual Machine

We’ll start on the Windows guest side (also applies to Linux). Your application inside your virtual machine wants to send some data to another computer. That goes something like this:

  1. Application: “Network, send this data to computer www.altaro.com on port 443”.
  2. Network: “DNS server, get me the IP for www.altaro.com”
  3. Network: “IP layer, determine if the IP address for www.altaro.com is on the same subnet”
  4. Network: “IP layer, send this packet to the gateway”
  5. IP layer passes downward for packaging in an Ethernet frame
  6. Ethernet layer transfers the frame

The part to understand: your application and your operating system don’t really care about the Ethernet part. Whatever happens down there just happens. In particular, they don’t care at all about the source MAC.

[Figure: traffic leaving the virtual machine]

Traffic Crossing the Hyper-V Virtual Switch

Because this particular Ethernet frame is coming out of a Hyper-V virtual machine, the first thing that it encounters is the Hyper-V virtual switch. In our scenario, the Hyper-V virtual switch rests atop a team of network adapters. As you’ll recall, that team is configured to use the Dynamic load balancing algorithm in Switch Independent mode. The algorithm decides if load balancing can be applied. The teaming mode decides which pathway to use and if it needs to repackage the outbound frame.

Switch independent mode means that the physical switch doesn’t know anything about a team. It only knows about two or more Ethernet endpoints connected in standard access mode. A port in that mode can “host” any number of MAC addresses; the physical switch’s capability defines the limit. However, the same MAC address cannot appear on multiple access ports simultaneously. Allowing that would cause all sorts of problems.

[Figure: broken traffic flow on a switch independent team]

So, if the team wants to load balance traffic coming out of a virtual machine, it needs to ensure that the traffic has a source MAC address that won’t cause the physical switch to panic. For traffic going out any adapter other than the one the virtual network adapter is registered on, the team substitutes the MAC address of that physical adapter.

[Figure: source MAC substitution keeps switch independent traffic working]

So, no matter how many physical adapters the team owns, one of two things will happen for each outbound frame:

  • The team will choose to use the physical adapter that the virtual machine’s network adapter is registered on. The Ethernet frame will travel as-is. That means that its source MAC address will be exactly the same as the virtual network adapter’s (meaning, not repackaged)
  • The team will choose to use an adapter other than the one that the virtual machine’s network adapter is registered on. The Ethernet frame will be altered. The source MAC address will be replaced with the MAC address of the physical adapter

Note: The visualization does not cover all scenarios. A virtual network adapter might be affinitized to the second physical adapter. If so, its load balanced packets would travel out of the shown “pNIC1” and use that physical adapter’s MAC as a source.

Traffic Crossing the Load Balancer

So, our frame arrives at the load balancer. The load balancer has a really crummy job. It needs to make traffic go faster, not slower. And, it acts like a TCP/IP router. Routers need to unpackage inbound Ethernet frames, look at their IP information, and make decisions on how to transmit them. That requires compute power and time.

[Figure: the load balancer routing every frame the hard way]

If it needed too much time to do all this, people would prefer to live without the load balancer. That would mean the load balancer’s manufacturer doesn’t sell any units, doesn’t make any money, and goes out of business. So, manufacturers come up with all sorts of tricks to make traffic faster. One way to do that is by not doing quite so much work on the Ethernet frame. This is a gross oversimplification, but you get the idea:

[Figure: the load balancer’s MAC-based shortcut]

Essentially, the load balancer only needs to remember which MAC address sent which frame, and then it doesn’t need to worry so much about all that IP nonsense (it’s really more complicated than that, but this is close enough).

The Hyper-V/Load Balancer Collision

Now we’ve arrived at the core of the problem: Hyper-V sends some virtual machine traffic using source MAC addresses that don’t belong to those virtual machines; they belong to the physical NICs. When the load balancer associates that traffic with the MAC address of a physical NIC, everything breaks.

Trying to be helpful (remember that), the load balancer attempts to return what it deems “response” traffic to the MAC that initiated the conversation. That MAC, in this case, belongs directly to the second physical NIC. Nothing was expecting the traffic that’s now coming in, so the frame is silently discarded.

That happens because:

  • The Windows Server network teaming load balancing algorithms are send only; they will not perform reverse translations. There are lots of reasons for that and they are all good, so don’t get upset with Microsoft. Besides, it’s not like anyone else does things differently.
  • Because the inbound Ethernet frame is not reverse-translated, its destination MAC belongs to a physical NIC. The Hyper-V virtual switch will not send any Ethernet frame to a virtual network adapter unless it owns the destination MAC
  • In typical system-to-system communications, the “responding” system would have sent its traffic to the IP address of the virtual machine. Through the normal course of typical networking, that traffic’s destination MAC would always belong to the virtual machine. It’s only because your load balancer is trying to speed things along that the frame is being sent to the physical NIC’s MAC address. Otherwise, the source MAC of the original frame would have been little more than trivia.

Stated a bit more simply: Windows Server network teaming doesn’t know that anyone cares about its frames’ source MAC addresses and the load balancer doesn’t know that anyone is lying about their MAC addresses.

Why Hyper-V Port Mode Fixes the Problem

When you select the Hyper-V port load balancing algorithm in combination with the switch independent teaming mode, each virtual network adapter’s MAC address is registered on a single physical network adapter. That’s the same behavior that Dynamic uses. However, no load balancing is done for any given virtual network adapter; all traffic entering and exiting any given virtual adapter will always use the same physical adapter. The team achieves load balancing by distributing the virtual network adapters across its physical members in round-robin fashion.

[Figure: switch independent team with the Hyper-V port algorithm]

Source MACs will always be those of their respective virtual adapters, so there’s nothing to get confused about.

I like this mode as a solution because it does a good job addressing the issue without making any other changes to your infrastructure. The drawback would be if you only had a few virtual network adapters and weren’t getting the best distribution. For a 10GbE system, I wouldn’t worry.

Why Static and LACP Fix the Problem

Static and LACP teaming involve your Windows Server system and the physical switch agreeing on a single logical pathway that consists of multiple physical pathways. All MAC addresses are registered on that logical pathway. Therefore, the Windows Server team has no need to perform any source MAC substitution, regardless of the load balancing algorithm that you choose.

[Figure: a static/LACP team presenting one logical pathway]

Since no MAC substitution occurs here, there’s nothing to confuse the load balancer.

I don’t like this method as much. It means modifying your physical infrastructure. I’ve noticed that some physical switches don’t like the LACP failover process very much. I’ve encountered some that need a minute or more to notice that a physical link was down and react accordingly. With every physical switch that I’ve used or heard of, the switch independent mode fails over almost instantly.

That said, using a static or LACP team will allow you to continue using the Dynamic load balancing algorithm. All else being equal, you’ll get a more even load balancing distribution with Dynamic than you will with Hyper-V port mode.

Why You Should Let the Load Balancer Do Its Job

The third listed resolution suggests disabling the related feature on your load balancer. I don’t like that option, personally. I don’t have much experience with the Citrix product, but I know that F5 buries its “Auto Last Hop” feature fairly deeply. Also, both manufacturers enable the feature by default, so it won’t be obvious to a maintainer that you’ve made the change.

However, your situation might dictate that disabling the load balancer’s feature causes fewer problems than changing the Hyper-V or physical switch configuration. Do what works best for you.

Using a Different Internal Router Also Addresses the Issue

In all of these scenarios, the load balancer performs routing. Actually, these types of load balancers always perform routing, because they present a single IP address for the service to the outside world and translate internally to the back-end systems.

However, nothing states that the internal source IP address of the load balancer must exist in the same subnet as the back-end virtual machines. You might do that for performance reasons; as I said above, routing incurs overhead. However, this is all a known quantity, and modern routers are pretty good at what they do. If any router is present between the load balancer and the back-end virtual machines, then the MAC address issue will sort itself out regardless of your load balancing and teaming mode selections.

Have You Experienced this Phenomenon?

If so, I’d love to hear from you. On what systems did you experience it? How did you resolve the situation (if you were able)? Perhaps you’ve just encountered it and arrived here looking for a solution; if so, let me know whether this explanation was helpful or if you need further assistance with your particular environment. The comment section below awaits.

Managing Secrets Securely in the Cloud

You’ve probably heard some version of the story about a developer who mistakenly checked his AWS S3 key in to GitHub. He pulled the key within five minutes but still racked up a multi-thousand-dollar bill from bots that crawl open source sites looking for secrets. As developers, we all understand and care about keeping dev and production secrets safe, but managing those secrets on your own, and especially in a team, can be cumbersome. We are pleased to announce several new features that together will make detecting secrets in code and working with secrets stored securely on Azure easier than ever before.

Safeguarding Secrets while building for Azure

Most of us know it’s a best practice to keep secret settings like connection strings, domain passwords or other credentials as runtime configuration, outside the source code. Azure Key Vault provides a secure location to safeguard keys and other secrets used by cloud apps. Azure App Service recently added support for Managed Service Identity, which means apps running on App Service can easily get authorized to access a Key Vault and other AAD-protected resources, so you no longer need to store secrets visibly in environment variables.

If you do this, though, getting your local dev environment set up with the right secrets can be a pain, especially if you work in a team. We hear that many developers distribute secrets for shared dev services through email or just check them into source code. So we created the App Authentication Extension to make it easy to develop apps locally while keeping your secrets in Key Vault. With the extension installed, your locally running app uses the identity signed into Visual Studio to get secrets you are authorized to access directly from Key Vault. This works great in a team environment where you might have a security group for the dev team with access to a dev-environment Key Vault.
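As a concrete example, here’s a minimal sketch of seeding a shared dev secret into Key Vault with the AzureRM PowerShell module (the vault and secret names are hypothetical):

  # Sign in, then store a development secret in the team's Key Vault
  Login-AzureRmAccount
  $value = ConvertTo-SecureString 'Server=dev-sql;User=app;Password=changeme' -AsPlainText -Force
  Set-AzureKeyVaultSecret -VaultName 'DevTeamVault' -Name 'SqlConnectionString' -SecretValue $value

Anyone in the dev team’s security group can then read that secret at runtime, and it never has to appear in source control or email.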

[Screenshot: Azure Service Authentication account selection in Tools > Options]

In ASP.NET applications, the ASP.NET Key Vault and User Secret configuration builders for .NET 4.7.1 are available as a NuGet package that allows secret app settings to be saved in secure configuration stores instead of in web.config as plaintext, without changing application source code. In ASP.NET Core applications, a small code change is needed to load Key Vault as a configuration provider; once you do this, you are set. We haven’t eliminated that step yet, but we hope to soon.

Here are a couple of walkthroughs that show you how everything works:

Credential Scanner (CredScan) Code Analyzer Preview

We also wanted to make it easier for devs to find secrets in their code, to encourage moving secrets to more secure locations like User Secrets or Azure Key Vault. The Credential Scan Code Analyzer is a very early preview that can detect Storage access keys, SAS tokens, API management keys, Cosmos DB access keys, AAD service principal keys, and connection strings for SQL, Azure SQL, Service Bus, Azure Logic Apps, BizTalk Server and various other credential types. As you edit your code, the analyzer scans any open documents and immediately warns you about the secrets it finds, both in the error list and in Build and Code Analysis at commit time. It’s something we’ve been developing, using and improving within Microsoft for some time now.

The Credential Scan Code Analyzer is a preview and ships in the experimental DevLabs extension, Continuous Delivery Tools for Visual Studio. This is because we know this is an important area that goes beyond open documents and can stretch all the way into your CI environment. Rather than waiting, we released an experimental version now because we think it’s useful and we want your feedback on how you would use this in your environment.

Please install these extensions and give the walkthroughs a try to let us know what you think.

Catherine Wang, Program Manager, Azure Developer Experience Team
@cawa_cathy

Catherine is a program manager on the Azure Developer Experience team at Microsoft. She has worked on Azure security tooling, Azure diagnostics, Storage Explorer, Service Fabric and Docker tools, and she is interested in making the development experience simple, smooth and productive.

Xbox One X: Explaining 4K, HDR, Supersampling and More

As we continue to roll toward the release of Xbox One X on November 7, you’ve no doubt heard us talking a lot about feature terminology: 4K, high dynamic range (HDR), supersampling and others. Just a few weeks ago, Major Nelson sat down with Albert Penello to cover many of your questions about Xbox One X Enhanced, what it means, and how game developers look to harness the power of Xbox One X. It’s a great piece and we highly recommend you check it out.

The simple fact is that all your games will look and play better on Xbox One X. Yes, your picture will look sharper on a 4K screen. Yes, your brights will be brighter and more detailed thanks to HDR. But even without those, Xbox One X delivers faster performance, better loading times and higher framerates, and it brings those benefits into your living room regardless of your TV.

Today we’d like to take a trip back down terminology lane, just to keep everything fresh in your minds and perhaps educate a few more of you about these buzzwords you’re seeing today — because you’re going to keep seeing them in the months and years to come — and we’re here to help.

What is 4K?

Simply put, it’s a higher resolution than that of a 1920×1080 (1080p) television, coming in at a crisp 3840×2160 (2160p), four times the pixels of 1080p. This means any supported television programming or game can run at a higher resolution than the previous standard of 1080p, giving you an even sharper and more detailed picture.

Xbox One X is the only console that can deliver crisp, clean 4K resolution for your supported games and streaming video. Pair a 4K screen with the built-in Ultra High Definition (UHD) Blu-ray drive and you’ll get cinema-like picture quality in your living room.

Bonus: you can sit even closer to your 4K television screen than your 1080p display before the image starts to break down. But be warned: your parents will still be upset with you if they catch you sitting that close to the TV.

What are UHD, 4K UHD, Ultra HD and Other Variants?

It’s kinda the same thing as 4K. They all play in the same high-resolution ballpark, but UHD and its variants are used largely for consumer-brand televisions. So, if you see one of those labels on a television, you’re still getting a 4K television. It’s just that different manufacturers choose their own terminology, creating a great deal of confusion.

What is HDR?

This is money. While higher resolution is great and gives us a sharper, clearer picture when playing games, it’s HDR that really helps with immersion, thanks to deeper colors (blacks and whites in particular) and a rich contrast allowing for greater detail in all parts of the image. Obviously, it’s easier to see this effect than to describe it, but I’ll give it a shot.

When looking at a bunch of thick clouds against a bright blue sky, imagine not being able to make out the shading they cast upon one another. You’d wind up with giant flat white objects floating in the sky. Now add the shading back in, making it easier to see the shapes and billows of the clouds. That’s essentially what HDR brings to your games and movies: a greater level of contrast, helping you see a better distinction between brightness, shadows, shading and all of those subtle bits of an image you may have been taking for granted. HDR makes images look more natural, more detailed and more realistic.

What is Supersampling?

If there’s one thing I’d love for you to take away from all this information, it’s this: a 4K screen is not required to play great games on Xbox One X. Your games will still look and run better on Xbox One X than on any other console on the market, regardless of your television, and that’s thanks to supersampling. Supersampling also helps reduce “jaggies” and other staircase-like effects around the edges of objects.

Think of supersampling as the cousin of upscaling. Instead of taking a lower-resolution image and blowing it up (creating distortion), as upscaling does, supersampling takes a high-resolution image and scales it down to your television’s native resolution, be it 720p or 1080p, bringing all the information that Xbox One X pours into its games to your screen.

But supersampling is about more than resolution: any Xbox One X Enhanced title running on a 1080p television is still taking advantage of the power of the system. That means better draw distances, greater special effects and everything else that’s running under the hood of the world’s most powerful console.

Yes, a 4K screen will deliver a sharper and clearer image, and HDR will bolster the light and dark features of your game, but supersampling does an effective job of letting you play your games best on Xbox One X regardless of your screen.

What is Xbox One X Enhanced?

This is a term to let you know that the game developer has tapped into the power of Xbox One X. It means faster loading times, higher resolution textures, higher framerates — but it’s up to the developer to decide the best way they want to utilize that power.

So, when you see Xbox One X Enhanced mentioned on your favorite game’s product page or game box, it means special care has been taken to utilize the Scorpio engine. For more info, we recommend our recently launched Xbox One X Enhanced site, which has a deep dive on this feature.

How do I know if the Xbox One X Enhanced game updates are on my console?

By the time Xbox One X launches on November 7, some of your favorite games may still be getting ready for the Xbox One X revolution. The best way to know when Xbox One X Enhanced updates are ready is to keep an eye on those games’ social media channels and official sites; they’ll be among the first to let you know an update is available. You can also check out our Xbox One X Enhanced Games list found here, which will be updated daily.

On the console, if you have Automatic Updates on, your console should update when new versions are available on the service. To see which games have been updated, go to My Games and Apps, then My Games, and filter for Xbox One X Enhanced — if the game is listed in this section, it has been updated. If the game is not included in the Xbox One X Enhanced section under My Games and Apps, the game update is either not live yet, or you haven’t installed the update. If the update hasn’t been installed yet, the fastest way to update it is to launch the game – you can only play online when you have the most recent version of the game available.

If you don’t have Automatic Updates turned on, go to Updates to check for the latest available game updates. You can also sort “My Games and Apps” by “Last Update” to see a list of recent updates.

We hope this article helped bring a new level of understanding to some of the terms you’ve been seeing lately. We honestly can’t wait for you to play games on Xbox One X so you can see the difference for yourself, and perhaps come up with a better analogy about HDR than cloud shapes.

Wasabi Technologies takes on Amazon S3 on price, performance

Daring businesses to switch from Amazon to a company they’ve never heard of for cloud storage is a bold challenge. But Wasabi Technologies’ founders were so encouraged by its product launch that they raised another $10.8 million to fund a second data center.

Wasabi CEO David Friend said he expected the free trial of 1 TB for 30 days to attract a few dozen prospects when it became available on May 3. When more than 500 signed up, the Boston-based startup had to waitlist new subscribers until the week of May 17 to keep up with the server capacity demand.

Friend said about 80 users have converted to paying customers, and Wasabi boosted the available storage capacity at its leased data center space in Ashburn, Va., from about 7 PB to more than 20 PB to stay 90 days ahead of demand.

Those customers are likely lured mostly by Wasabi’s claims that its cloud storage is significantly cheaper and faster than Amazon’s Simple Storage Service (S3). They may also find it encouraging that Wasabi founders Friend and CTO Jeff Flowers also started Carbonite, an early and successful cloud storage player for consumers and small and medium-sized businesses.

The founders also likely learned a few things from Flowers’ post-Carbonite efforts to build on-premises cold data storage for financial and security firms and service providers. Storiant, initially known as SageCloud, raised $14.8 million in equity and debt between August 2012 and May 2015. But Storiant shut down operations in November 2015 and sold off its intellectual property for a mere $90,000.

“They were selling hardware systems and ended up competing with EMC, Dell and HP, which I thought was a mistake,” said Friend, who was CEO and later executive chairman at Carbonite, as well as a director on Storiant’s board.

Wasabi Technologies raises $8.2 million in 2016

In 2016, Friend, Flowers and Storiant’s founding engineers shifted their focus back to public cloud storage at BlueArchive, now called Wasabi Technologies. The startup raised $8.2 million over two rounds in 2016 to get started.

Wasabi added $10.8 million through a convertible note that will become equity when the company decides to raise a Series B round of funding. That will help finance the West Coast expansion to a colocation facility in San Jose, Calif., or Seattle, according to Friend. That would allow Wasabi to add automatic replication across multiple geographies for compliance, and to mitigate the risk of having all customer data in a single data center. Wasabi is also investigating expansion into Europe, a prospect that Friend said he hadn’t planned to pursue until next year.

“I’m a cautious, conservative kind of guy, and I don’t like just spending money without knowing what I’m going to get for it. But at this point in time, the market is almost limitless for this,” Friend said. “Every day, new opportunities show up at the company for amounts of storage that are more than we had in our whole second-year projection. If any of these big deals start to come in our direction, it’s going to be pretty impressive.”

Speed ‘blows people away’

Friend said the speed at which Wasabi’s software can read and write data is “what really blows people away.” It offers performance that he said is generally achievable only at higher cost with on-premises data center hardware. He said the Wasabi software takes control of disk write heads and packs data onto storage drives more efficiently and at higher speed than Linux or Windows operating systems can.

“We get our speed by parallelizing. The speed comes from breaking the data up and reading it and writing it simultaneously to many drives at the same time,” Friend said. He added that the data is distributed with sufficient redundancy to enable 11 nines of data durability, as Amazon does.

Friend said Wasabi keeps costs low by buying directly from hard disk drive (HDD) manufacturers at about the same price as Amazon does in the low-margin HDD business. He said Wasabi’s technology also enables longer disk life.

Wasabi charges a flat 0.39 cents per GB per month for storage and 4 cents per GB for egress. Competing public clouds vary prices based on the amount of data stored or transferred, the type of storage service — such as cold or nearline — and the requests made, such as puts and gets.
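To see what that flat rate means in practice, here’s a quick back-of-the-envelope sketch in PowerShell (using decimal gigabytes, as in the comparison below):

  # Wasabi's published flat rates
  $storagePerGBMonth = 0.0039   # dollars per GB per month (0.39 cents)
  $egressPerGB       = 0.04     # dollars per GB transferred out
  $dataGB = 39 * 1000           # 39 TB of backed-up video
  '{0:C2} per month to store' -f ($dataGB * $storagePerGBMonth)   # $152.10
  '{0:C2} to pull it all back' -f ($dataGB * $egressPerGB)        # $1,560.00

The egress figure matches the $1,560 recovery cost cited in WestStar’s comparison below.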

“Our vision is that cloud storage is going to become a commodity that’s out there for everybody to use. You don’t need three plugs in the wall for good electricity, so-so electricity and crappy but cheap electricity. You don’t need all these different kinds of storage as well,” Friend said.

Wasabi vs. Amazon S3 and Glacier

Friend said he expects most potential customers to compare Wasabi to Amazon S3. But one trial participant, Phoenix-based WestStar Multimedia Entertainment Inc., pitted Wasabi against Amazon’s colder, cheaper Glacier, Backblaze and Google Coldline in addition to Amazon S3, Microsoft Azure Backup and Rackspace.

WestStar vice president of information technology Chris Wojno said his company had a pressing need to back up more than 26 TB of video with an estimated data growth rate of 2.7 TB per month. WestStar produces The Kim Komando Show, a syndicated digital lifestyle radio program, and operates a multimedia website.

Wojno calculated costs based on storing 39 TB of data and found Wasabi had the lowest per-month price per GB. If he chose Wasabi, his per-month cost would be $3,747.90 less than Rackspace, $1,590.80 less than Azure Backup, and $744.90 less than Amazon S3. The price differential was far less over Google Coldline ($120.90), Backblaze ($42.90) and Glacier ($3.90), according to his spreadsheet analysis.

Wojno also weighed the data recovery cost for 39 TB of backed-up video in the event of a disaster. Backblaze was least expensive at $780, compared to $1,560 for Wasabi and $3,900 for Glacier. But Wojno figured Backblaze’s higher per-month storage fee would negate those savings relative to Wasabi.

Based on Wojno’s calculations, WestStar selected Wasabi Technologies for cloud storage. Wojno admitted he would have been suspicious of the new company had he not been familiar with Friend through his work at Carbonite, a former sponsor of the radio show. Komando, an owner of WestStar, last month invested in Wasabi after her company became a paying customer.

Wojno said WestStar spent about two weeks backing up 26.5 TB of video over a 200 Mbps connection with backup software from Wasabi partner CloudBerry Lab. He noted that WestStar received a complimentary CloudBerry license for his participation in a webinar with the vendors.

Friend said migrating data through transfer to a storage appliance, such as Amazon Web Services (AWS) Snowball, and transport by truck to the cloud storage provider is “an idea whose time has come and gone.”

“It’s much cheaper to go and put in a 10 Gigabit [Ethernet] pipe for a month, move your data and then shut it off, assuming you’re in a metropolitan area where such things are available,” Friend said.

AWS remains a formidable Goliath

Stu Miniman, a senior analyst at Wikibon, said Wasabi faces a stiff challenge against Amazon, the clear No. 1 cloud storage player. He said Amazon could lower costs as it has done in the past, or improve performance to respond to any perceived threat. Plus, he hasn’t heard many public cloud users complaining that storage is a problem.

“Has Wasabi built a better mousetrap when people don’t realize they have a mouse problem? Or, is this a real issue?” Miniman said.

Miniman said users might look to the free 30-day trial for new applications. He said the question is how long they’ll stick with the service over the long haul, especially if the initial application runs for only a limited time.

Opportunities with AWS customers

Friend said Wasabi Technologies is going after AWS customers who want to save money on their long-term data storage or keep a second copy of their data with a different cloud provider. Wasabi provides a free tool that customers can install in Amazon Elastic Compute Cloud (EC2) to copy their S3-stored data to Wasabi automatically.

Friend said, thanks to Wasabi’s S3 compatibility, organizations using EC2 to host applications could leave the applications there and move data to Wasabi’s data center via Amazon’s Direct Connect, rather than store it in Amazon S3. He said Wasabi does not compete against Amazon’s Elastic Block Storage, which he said is designed for fast-moving data that doesn’t stay in memory long.

Friend said Wasabi uses immutable buckets to protect data against accidental deletion, sabotage, viruses, malware, ransomware or other threats. Customers can specify the length of time they want a data bucket to be immutable.