Tag Archives: problem

How Microsoft re-envisioned the data warehouse with Azure Synapse Analytics

About four years ago, the Microsoft Azure team began to notice a big problem troubling many of its customers. A mass migration to the cloud was in full swing, as enterprises signed up by the thousands to reap the benefits of flexible, large-scale computing and data storage. But the next iteration of that tech revolution, in which companies would use their growing stores of data to get more tangible business benefits, had stalled.

Technology providers, including Microsoft, have built a variety of systems to collect, retrieve and analyze enormous troves of information that would uncover market trends and insights, paving the way toward a new era of improved customer service, innovation and efficiency.

But those systems were built independently by different engineering teams and sold as individual products and services. They weren’t designed to connect with one another, and customers would have to learn how to operate them separately, wasting time, money and precious IT talent.

“Instead of trying to add more features to each of our services, we decided to take a step back and figure out how to bring their core capabilities together to make it easy for customers to collect and analyze all of their increasingly diverse data, to break down data silos and work together more collaboratively,” said Raghu Ramakrishnan, Microsoft’s chief technology officer for data.

At its Ignite conference this week in Orlando, Florida, Microsoft announced the end result of a yearslong effort to address the problem: Azure Synapse Analytics, a new service that merges the capabilities of Azure SQL Data Warehouse with new enhancements such as on-demand query as a service.

Microsoft said this new offering will help customers put their data to work much more quickly, productively and securely by pulling together insights from all data sources, data warehouses and big data analytics systems. And, the company said, with deeper integration between Power BI and Azure Machine Learning, Azure Synapse Analytics can reduce the time required to process and share that data, speeding up the insights that businesses can glean.

What’s more, it will allow many more businesses to take advantage of game-changing technologies like data analytics and artificial intelligence, which are helping scientists to better predict the weather, search engines to better understand people’s intent and workers to more easily handle mundane tasks.

This newest effort to break down data silos also builds on other Microsoft projects, such as the Open Data Initiative and Azure Data Share, which allows you to share data from multiple sources and even other organizations.

Microsoft said Azure Synapse Analytics is also designed to support the increasingly popular DevOps strategy, in which development and operations staff collaborate more closely to create and implement services that work better throughout their lifecycles.


A learning process

Azure Synapse Analytics is the result of a lot of work, and a little trial and error.

At first, Ramakrishnan said, the team developed high-level guidelines showing customers how to glue the systems together themselves. But they quickly realized that was too much to ask.

“That required a lot of expertise in the nitty gritty of our platforms,” Ramakrishnan said. “Customers made it overwhelmingly clear that we needed to do better.”

So, the company went back to the drawing board and spent an additional two years revamping the heart of its data business, Azure SQL Data Warehouse, which lets customers build, test, deploy and manage applications and services in the cloud.

A breakthrough came when the company realized that customers need to analyze all their data in a single service, without having to copy terabytes of information across various systems to use different analytic capabilities – as has traditionally been the case with data warehouses and data lakes.

With the new offering, customers can use their data analytics engine of choice, such as Apache Spark or SQL, on all their data. That’s true whether it’s structured data, such as rows of numbers on spreadsheets, or unstructured data, such as a collection of social media posts.

This project was risky. It involved deep technical surgery: completely rewriting the guts of the SQL query processing engine to optimize it for the cloud and make it capable of instantly handling big bursts of work as well as very large and diverse datasets.

It also required unprecedented integration among several teams within Microsoft, some of whom would have to make hard choices. Established plans had to be scrapped. Resources earmarked for new features would be redirected to help make the entire system work better.

“In the beginning, the conversations were often heated. But as we got into the flow of it, they became easier. We began to come together,” Ramakrishnan said.

Microsoft also had to make sure that the product would work for any company, regardless of employees’ technical expertise.

“Most companies can’t afford to hire teams of 20 people to drive data projects and wire together multiple systems. There aren’t even enough skilled people out there to do all that work,” said Daniel Yu, director of product marketing for Azure Data and Artificial Intelligence.

Making it easy for customers

Customers can bring together various sources of data into a single feed with Azure Synapse Analytics Studio, a console – or single pane of glass – that allows a business professional with minimal technical expertise to locate and collect data from multiple sources like sales, supply chain, finance and product development. They can then choose how and where to store that data, and they can use it to create reports through Microsoft’s popular Power BI analytics service.

In a matter of hours, Azure Synapse will deliver useful business insights that used to take days or even weeks and months, said Rohan Kumar, corporate vice president for Azure Data.

“Let’s say an executive wants a detailed report on sales performance in the eastern U.S. over the last six months,” Kumar said. “Today, a data engineer has to do a lot of work to find where that data is stored and write a lot of brittle code to tie various services together. They might even have to bring in a systems integrator partner. With Azure Synapse, there’s no code required. It’s a much more intuitive experience.”

The complexity of the technical problems Azure Synapse addressed would be hard to overstate. Microsoft had to meld multiple independent components into one coherent form factor, while giving a wide range of people – from data scientists to line of business owners – their preferred tools for accessing and using data.


That includes products like SQL Server, the open source programming interface Apache Spark, Azure Data Factory and Azure Data Studio, as well as notebook interfaces preferred by many data professionals to clean and model data.

“Getting all those capabilities to come together fluidly, making it run faster, simpler, eliminating overlapping processes – there was some scary good stuff getting done,” Ramakrishnan said.

The result is a data analytics system that will be as easy to use as a modern mobile phone. Just as the smartphone replaced several devices by making all of their core capabilities intuitively accessible in a single device, the Azure Synapse “smartphone for data” now allows a data engineer to build an entire end-to-end data pipeline in one place. It also enables data scientists and analysts to look at the underlying data in ways that are natural to them.

And just as the phone has driven waves of collaboration and business innovation, Azure Synapse will free up individuals and companies to introduce new products and services as quickly as they can dream them up, Microsoft said.

“If we can help different people view data through a lens that is natural to them, while it’s also visible to others in ways natural to them, then we will transform the way companies work,” Ramakrishnan said. “That’s how we should measure our success.”

Top photo: Rohan Kumar, corporate vice president for Azure Data, says Azure Synapse will deliver useful business insights that used to take days or even weeks and months. Photo by Scott Eklund/Red Box Pictures.



HR execs and politicians eye student debt relief

Student debt relief is not only an election issue in the 2020 race for president, but a problem for HR managers. Some firms, including a hospital in New York, are doing something about it.

Montefiore St. Luke’s Cornwall Hospital began offering a student loan relief program this year for its non-union employees. It employs 1,500 people and provides employees 32 vacation days a year.

Most employees don’t take all that time off, said Dan Bengyak, vice president of administrative services at the not-for-profit medical center with hospitals in Newburgh and Cornwall. He oversees HR, IT and other administrative operations.

In February, the hospital detailed its plan to apply paid time off to student debt relief. Employees in the Parent PLUS loan program had the option as well. The hospital set two sign-up windows, the first in May. Forty employees signed up. The next window is in November.

The program “has been extremely well received and it definitely has offered us a real competitive advantage in the recruiting world,” Bengyak said. He believes it will help with retention as well.

The maximum employee contribution for student debt relief is $5,000. The hospital also provides tuition help. This combination “offers significant financial assistance” to employees seeking advanced degrees, Bengyak said.

A SaaS platform handles payments

The hospital uses Tuition.io, a startup founded in 2013 and based in Santa Monica, Calif. The platform manages all of the payments to the loan servicers. Its employer customers pay a lump sum to cover the cost of the assistance. The employer doesn’t know the amount of the employee’s debt. The platform notifies the employee when a payment is posted.


Payments can be made as a monthly contribution, a lump sum on an employment anniversary or other methods, according to Scott Thompson, CEO at Tuition.io.

Tuition.io also analyzes repayment data, which can show the program’s retention impact, according to Thompson.

“Those individuals who are participating in this benefit stay longer with the employer — they just do,” he said. 

About one in five students has over $100,000 in debt and is, by definition, broke, Thompson said. They can’t afford an employer’s 401(k) program or buy a house. Employees with a burdensome loan “are always looking for a new job that pays you more money because you simply have to,” he said.

Legislation in pipeline

The amount of student loan debt is in excess of $1.5 trillion and exceeds credit card and auto debt combined, said Robert Keach, a past president at the American Bankruptcy Institute, in testimony at a recent U.S. House Judiciary Committee hearing on bankruptcy. More than a quarter of borrowers are in delinquency or default, he said. Student loan debt is expected to exceed $2 trillion by 2022.

“High levels of post-secondary education debt correlate with lower earnings, lower rates of home ownership, fewer automobile purchases, higher household financial distress, and delayed marriage and family formation, among other ripple effects,” Keach said.

Congress is considering legislation that may make it easier for firms to help employees with debt. One example is the Employer Participation in Repayment Act, a bill that has bipartisan support in both chambers. It would enable employers to give up to $5,250 annually per employee, tax free.


Troubleshoot common VM live migration errors


VM live migration errors in Hyper-V include connection issues, virtual switch mismatches and incompatible hardware. Learn more about these common errors and how to resolve them.


Although there are many things that can cause VM live migrations to fail, there are some issues that are especially common. Some of the more prevalent VM live migration issues include failed attempts to connect to the destination server, unsupported protocol versions, virtual switch mismatches and incompatible hardware.

One of the more common issues that occurs during a VM live migration involves an error message popping up when you attempt to connect to the destination host. An example of this error is shown in Figure A.

Figure A. Hyper-V displays an error saying it couldn’t connect to the destination server.

The error message cautions you to make sure the Windows Remote Management (WinRM) port is open on the destination host. WinRM uses port 5985 (HTTP) and port 5986 (HTTPS).

If the required firewall ports are open, there are a couple of other things you can check. Make sure the WinRM service is running on the destination host. You should also make sure domain name system resolution is working properly and that you’re able to correctly resolve both the computer name and the fully qualified domain name of the remote host.
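
You can run those checks quickly from PowerShell on the source host. The sketch below assumes a hypothetical destination host named HV-DEST01; substitute your own server name.

# Is the WinRM HTTP port reachable on the destination host?
Test-NetConnection -ComputerName HV-DEST01 -Port 5985

# Is the WinRM service running on the destination host?
Get-Service -ComputerName HV-DEST01 -Name WinRM

# Does name resolution work for the destination host?
Resolve-DnsName HV-DEST01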

Unsupported protocol version

Another common VM live migration issue is an unsupported protocol version. An example of this error is shown in Figure B.

Figure B. The operation failed because of an unsupported protocol version.

This error occurs because older versions of Hyper-V don’t support some of the latest VM features. Entering the following command displays the version of each VM on a given Hyper-V host:

Get-VM * | Select-Object Name, Version

The error shown in Figure B occurs if you try to move a version 8.0 VM to a Hyper-V server running Windows Server 2012 R2. Windows Server 2012 R2 doesn’t support VM versions above 5.0.

Virtual switch mismatch

Live migrations can also fail if the destination host doesn’t contain a virtual switch with the same name as the one the VM is using. This error varies depending on whether you’re attempting the live migration using Hyper-V Manager or PowerShell.

In modern versions of Hyper-V Manager, a virtual switch mismatch isn’t a showstopper. Instead, the wizard will inform you of the problem and give you a chance to pick a different virtual switch, as shown in Figure C.

Figure C. Hyper-V gives you a chance to pick a different virtual switch.

If you attempt the migration using PowerShell, then Hyper-V will simply produce an error.
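
One way to catch a mismatch before you start the move is to compare the virtual switch names on both hosts. This is a minimal sketch; SOURCE-HV and DEST-HV are hypothetical host names.

# List the virtual switches on both hosts so the names can be compared
Get-VMSwitch -ComputerName SOURCE-HV, DEST-HV | Select-Object ComputerName, Name, SwitchType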

Incompatible hardware

Another somewhat common VM live migration issue is when the destination computer isn’t compatible with the VM’s hardware requirements. This error occurs because the source and destination Hyper-V hosts are running on hardware that is significantly different.

In most cases, you can correct the problem by using processor compatibility mode. Using processor compatibility mode tells the Hyper-V host to run the VM using only basic CPU features rather than attempting to use any advanced CPU features. Once processor compatibility mode is enabled, live migration will usually work. Incidentally, it’s almost always possible to turn processor compatibility mode back off once the VM live migration is complete.

Figure D. You can fix some VM live migration errors by enabling processor compatibility mode.

You can turn on processor compatibility mode from Hyper-V Manager by right-clicking on the VM and choosing the settings command from the shortcut menu. When the VM’s settings window opens, expand the processor container and then select the compatibility sub-container. Now, just select the Migrate to a physical computer with a different processor version checkbox, as shown in Figure D.
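
You can also enable the setting from PowerShell while the VM is shut down. A minimal sketch, assuming a hypothetical VM named VM01:

# Processor compatibility mode can only be changed while the VM is off
Set-VMProcessor -VMName VM01 -CompatibilityForMigrationEnabled $true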

These are just a few of the conditions that can cause live migration errors in Hyper-V. If you experience a really stubborn VM live migration error, take a moment to make sure the clocks on the Hyper-V hosts and the domain controllers are all in sync with one another.
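
A quick way to compare the clocks is to ask each machine for its current time. This sketch uses hypothetical host names and assumes PowerShell remoting is enabled:

# Returns the current date and time reported by each host
Invoke-Command -ComputerName HV-HOST01, HV-HOST02, DC01 -ScriptBlock { Get-Date }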


How to tackle an email archive migration for Exchange Online


A move from on-premises Exchange to Office 365 also entails determining the best way to transfer legacy archives. This tutorial can help ease migration complications.


A move to Office 365 seems straightforward enough until project planners broach the topic of the email archive migration.

Not all organizations keep all their email inside their messaging platform. Many organizations that archive messages also keep a copy in a journal that is archived away from user reach for legal reasons.

The vast majority of legacy archive migrations to Office 365 require third-party tools and must follow a fairly standardized process to complete the job quickly and with minimal expense. Administrators should migrate mailboxes to Office 365 first and then the archive; that order is the fastest way to gain the benefits of Office 365 before the archive reingestion completes.

An archive product typically scans mailboxes for older items and moves those to longer term, cheaper storage that is indexed and deduplicated. The original item typically gets replaced with a small part of the message, known as a stub or shortcut. The user can find the email in their inbox and, when they open the message, an add-in retrieves the full content from the archive.

Options for archived email migration to Office 365

The native tools to migrate mailboxes to Office 365 cannot handle an email archive migration. When admins transfer legacy archive data for mailboxes, they usually consider the following three approaches:

  1. Export the data to PST archives and import it into user mailboxes in Office 365.
  2. Reingest the archive data into the on-premises Exchange mailbox and then migrate the mailbox to Office 365.
  3. Migrate the Exchange mailbox to Office 365 first, then perform the email archive migration to put the data into the Office 365 mailbox.

Option 1 is not usually practical because it takes a lot of manual effort to export data to PST files. The stubs remain in the user’s mailbox and add clutter.

Option 2 also requires a lot of labor-intensive work and uses a lot of space on the Exchange Server infrastructure to support reingestion.

That leaves the third option as the most practical approach, which we’ll explore in a little more detail.

Migrate the mailbox to Exchange Online

When you move a mailbox to Office 365, it migrates along with the stubs that relate to the data in the legacy archive. The legacy archive will no longer archive the mailbox, but users can access their archived items. Because the stubs usually contain a URL path to the legacy archive item, there is no dependency on Exchange to view the archived message.

Some products that add buttons to restore the individual message into the mailbox will not work; the legacy archive product won’t know where Office 365 is without further configuration. This step is not usually necessary because the next stage is to migrate that data into Office 365.

Transfer archived data

Legacy archive solutions usually have a variety of policies for what happens with the archived data. You might configure the system to keep the stubs for a year but make archive data accessible via a web portal for much longer.

There are instances when you might want to replace the stub with the real message. There might be data that is not in the user’s mailbox as a stub but that users want on occasion.

We need tools that not only automate the data migration, but also understand these differences and can act accordingly. The legacy archive migration software should examine the data within the archive and then run batch jobs to replace stubs with the full messages. In this case, you can use the Exchange Online archive as a destination for archived data that no longer has a stub.

Email archive migration software connects via the vendor API. The software assesses the items and then exports them into a common temporary format — such as an EML file — on a staging server before connecting to Office 365 over a protocol such as Exchange Web Services. The migration software then examines the mailbox and replaces the stub with the full message.

An example of a third-party product’s dashboard detailing the migration progress of a legacy archive into Office 365.

Migrate journal data

With journal data, the most accepted approach is to migrate the data into the hidden recoverable items folder of each mailbox related to the journaled item. The end result is similar to using Office 365 from the day the journal began, and eDiscovery works as expected when following Microsoft guidance.

For this migration, the software scans the journal and creates a database of the journal messages. The application then maps each journal message to its mailbox. This process can be quite extensive; for example, an email sent to 1,000 people will map to 1,000 mailboxes.

After this stage, the software copies each message to the recoverable items folder of each mailbox. While this is a complicated procedure, it’s alleviated by software that automates the job.

Legacy archive migration offerings

There are many products tailored for an email archive migration. Each has its own benefits and drawbacks. I won’t recommend a specific offering, but I will mention two that can migrate more than 1 TB a day, which is a good benchmark for large-scale migrations. They also support chain of custody, which audits the transfer of all data.

TransVault has the most connectors to legacy archive products. Almost all the migration offerings support Enterprise Vault, but if you use a product that is less common, then it is likely that TransVault can move it. The TransVault product accesses source data either via an archive product’s APIs or directly to the stored data. TransVault’s service installs within Azure or on premises.

Quadrotech Archive Shuttle fits in alongside a number of other products suited to Office 365 migrations and management. Its workflow-based process automates the migration. Archive Shuttle handles fewer archive sources, but it does support Enterprise Vault. Archive Shuttle accesses source data via API and agent machines with control from either an on-premises Archive Shuttle instance or, as is more typical, the cloud version of the product.


Fixing Erratic Behavior on Hyper-V with Network Load Balancers

For years, I’d never heard of this problem. Then, suddenly, I’m seeing it everywhere. It’s not easy to precisely outline a symptom tree for you. Networked applications will behave oddly. Remote desktop sessions may skip or hang. Some network traffic will not pass at all. Other traffic will behave erratically. Rather than try to give you a thorough symptom tree, we’ll just describe the setup that can be addressed with the contents of this article: you’re using Hyper-V with a third-party network load balancer and experiencing network-related problems.

Acknowledgements

Before I ever encountered it, the problem was described to me by one of my readers. Check out our Complete Guide to Hyper-V Networking article and look in the comments section for Jahn’s input. I had a different experience, but that conversation helped me reach a resolution much more quickly.

Problem Reproduction Instructions

The problem may appear under other conditions, but should always occur under these:

  • The network adapters that host the Hyper-V virtual switch are configured in a team
    • Load-balancing algorithm: Dynamic
    • Teaming mode: Switch Independent (likely occurs with switch-embedded teaming as well)
  • Traffic to/from affected virtual machines passes through a third-party load-balancer
    • Load balancer uses a MAC-based system for load balancing and source verification
      • Citrix Netscaler calls its feature “MAC based forwarding”
      • F5 load balancers call it “auto last hop”
    • The load balancer’s “internal” IP address is on the same subnet as the virtual machine’s
  • Sufficient traffic must be exiting the virtual machine for Hyper-V to load balance some of it to a different physical adapter

I’ll go into more detail later. This list should help you determine if you’re looking at an article that can help you.
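
If you want to confirm that your team matches that configuration, a quick PowerShell check (a sketch; it simply lists each team’s mode and algorithm) is:

Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm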

Resolution

Fixing the problem is very easy, and can be done without downtime. I’ll show the options in preference order. I’ll explain the impacting differences later.

Option 1: Change the Load-Balancing Algorithm

Your best bet is to change the load-balancing algorithm to “Hyper-V port”. You can change it in the lbfoadmin.exe graphical interface if your management operating system is GUI-mode Windows Server. To change it with PowerShell (assuming only one team):
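
# A minimal sketch; the team name is looked up because only one team is assumed
Set-NetLbfoTeam -Name (Get-NetLbfoTeam).Name -LoadBalancingAlgorithm HyperVPort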

There will be a brief interruption of networking while the change is made. It won’t be as bad as the network problems that you’re already experiencing.

Option 2: Change the Teaming Mode

Your second option is to change your teaming mode. It’s more involved because you’ll also need to update your physical infrastructure to match. I’ve always been able to do that without downtime as long as I changed the physical switch first, but I can’t promise the same for anyone else.

Decide if you want to use Static teaming or LACP teaming. Configure your physical switch accordingly.

Change your Hyper-V host to use the same mode. If your Hyper-V system’s management operating system is Windows Server GUI, you can use lbfoadmin.exe. To change it in PowerShell (assuming only one team):
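
# A minimal sketch for static teaming; assumes a single LBFO team
Set-NetLbfoTeam -Name (Get-NetLbfoTeam).Name -TeamingMode Static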

or
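
# A minimal sketch for LACP teaming; assumes a single LBFO team
Set-NetLbfoTeam -Name (Get-NetLbfoTeam).Name -TeamingMode Lacp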

In this context, it makes no difference whether you pick static or LACP. If you want more information, read our article on the teaming modes.

Option 3: Disable the Feature on the Load Balancer

You could tell the load balancer to stop trying to be clever. In general, I would choose that option last.

An Investigation of the Problem

So, what’s going on? What caused all this? If you’ve got an environment that matches the one that I described, then you’ve unintentionally created the perfect conditions for a storm.

Whose fault is it? In this case, I don’t really think that it’s fair to assign fault. Everyone involved is trying to make your network traffic go faster. They sometimes do that by playing fast and loose in that gray area between Ethernet and TCP/IP. We have lots of standards that govern each individually, but not so many that apply to the ways that they can interact. The problem arises because Microsoft is playing one game while your load balancer plays another. The games have different rules, and neither side is aware that another game is afoot.

Traffic Leaving the Virtual Machine

We’ll start on the Windows guest side (also applies to Linux). Your application inside your virtual machine wants to send some data to another computer. That goes something like this:

  1. Application: “Network, send this data to computer www.altaro.com on port 443”.
  2. Network: “DNS server, get me the IP for www.altaro.com”
  3. Network: “IP layer, determine if the IP address for www.altaro.com is on the same subnet”
  4. Network: “IP layer, send this packet to the gateway”
  5. IP layer passes downward for packaging in an Ethernet frame
  6. Ethernet layer transfers the frame

The part to understand: your application and your operating system don’t really care about the Ethernet part. Whatever happens down there just happens. In particular, they don’t care at all about the source MAC.

Traffic Crossing the Hyper-V Virtual Switch

Because this particular Ethernet frame is coming out of a Hyper-V virtual machine, the first thing that it encounters is the Hyper-V virtual switch. In our scenario, the Hyper-V virtual switch rests atop a team of network adapters. As you’ll recall, that team is configured to use the Dynamic load balancing algorithm in Switch Independent mode. The algorithm decides if load balancing can be applied. The teaming mode decides which pathway to use and if it needs to repackage the outbound frame.

Switch independent mode means that the physical switch doesn’t know anything about a team. It only knows about two or more Ethernet endpoints connected in standard access mode. A port in that mode can “host” any number of MAC addresses; the physical switch’s capability defines the limit. However, the same MAC address cannot appear on multiple access ports simultaneously. Allowing that would cause all sorts of problems.

So, if the team wants to load balance traffic coming out of a virtual machine, it needs to ensure that the traffic has a source MAC address that won’t cause the physical switch to panic. For traffic going out anything other than the primary adapter, it uses the MAC address of the physical adapter.

So, no matter how many physical adapters the team owns, one of two things will happen for each outbound frame:

  • The team will choose to use the physical adapter that the virtual machine’s network adapter is registered on. The Ethernet frame will travel as-is. That means that its source MAC address will be exactly the same as the virtual network adapter’s (meaning, not repackaged)
  • The team will choose to use an adapter other than the one that the virtual machine’s network adapter is registered on. The Ethernet frame will be altered. The source MAC address will be replaced with the MAC address of the physical adapter

Note: The visualization does not cover all scenarios. A virtual network adapter might be affinitized to the second physical adapter. If so, its load balanced packets would travel out of the shown “pNIC1” and use that physical adapter’s MAC as a source.

Traffic Crossing the Load Balancer

So, our frame arrives at the load balancer. The load balancer has a really crummy job. It needs to make traffic go faster, not slower. And, it acts like a TCP/IP router. Routers need to unpackage inbound Ethernet frames, look at their IP information, and make decisions on how to transmit them. That requires compute power and time.

If it needs too much time to do all this, then people would prefer to live without the load balancer. That means that the load balancer’s manufacturer doesn’t sell any units, doesn’t make any money, and goes out of business. So, they come up with all sorts of tricks to make traffic faster. One way to do that is by not doing quite so much work on the Ethernet frame. This is a gross oversimplification, but you get the idea:

Essentially, the load balancer only needs to remember which MAC address sent which frame, and then it doesn’t need to worry so much about all that IP nonsense (it’s really more complicated than that, but this is close enough).

The Hyper-V/Load Balancer Collision

Now we’ve arrived at the core of the problem: Hyper-V sends traffic from virtual machines using source MAC addresses that don’t belong to those virtual machines. The MAC addresses belong to the physical NIC. When the load balancer tries to associate that traffic with the MAC address of the physical NIC, everything breaks.

Trying to be helpful (remember that), the load balancer attempts to return what it deems as “response” traffic to the MAC that initiated the conversation. The MAC, in this case, belongs directly to that second physical NIC. It wasn’t expecting the traffic that’s now coming in, so it silently discards the frame.

That happens because:

  • The Windows Server network teaming load balancing algorithms are send only; they will not perform reverse translations. There are lots of reasons for that and they are all good, so don’t get upset with Microsoft. Besides, it’s not like anyone else does things differently.
  • Because the inbound Ethernet frame is not reverse-translated, its destination MAC belongs to a physical NIC. The Hyper-V virtual switch will not send any Ethernet frame to a virtual network adapter unless it owns the destination MAC
  • In typical system-to-system communications, the “responding” system would have sent its traffic to the IP address of the virtual machine. Through the normal course of typical networking, that traffic’s destination MAC would always belong to the virtual machine. It’s only because your load balancer is trying to speed things along that the frame is being sent to the physical NIC’s MAC address. Otherwise, the source MAC of the original frame would have been little more than trivia.

Stated a bit more simply: Windows Server network teaming doesn’t know that anyone cares about its frames’ source MAC addresses and the load balancer doesn’t know that anyone is lying about their MAC addresses.

Why Hyper-V Port Mode Fixes the Problem

When you select the Hyper-V port load balancing algorithm in combination with the switch independent teaming mode, each virtual network adapter’s MAC address is registered on a single physical network adapter. That’s the same behavior that Dynamic uses. However, no load balancing is done for any given virtual network adapter; all traffic entering and exiting any given virtual adapter will always use the same physical adapter. The team achieves load balancing by placing each virtual network adapter across its physical members in a round-robin fashion.

Source MACs will always be those of their respective virtual adapters, so there’s nothing to get confused about.

I like this mode as a solution because it does a good job addressing the issue without making any other changes to your infrastructure. The drawback would be if you only had a few virtual network adapters and weren’t getting the best distribution. For a 10GbE system, I wouldn’t worry.

Why Static and LACP Fix the Problem

Static and LACP teaming involve your Windows Server system and the physical switch agreeing on a single logical pathway that consists of multiple physical pathways. All MAC addresses are registered on that logical pathway. Therefore, the Windows Server team has no need of performing any source MAC substitution regardless of the load balancing algorithm that you choose.

Since no MAC substitution occurs here, the load balancer won’t get anything confused.

I don’t like this method as much. It means modifying your physical infrastructure. I’ve noticed that some physical switches don’t like the LACP failover process very much. I’ve encountered some that need a minute or more to notice that a physical link was down and react accordingly. With every physical switch that I’ve used or heard of, the switch independent mode fails over almost instantly.

That said, using a static or LACP team will allow you to continue using the Dynamic load balancing algorithm. All else being equal, you’ll get a more even load balancing distribution with Dynamic than you will with Hyper-V port mode.

Why You Should Let the Load Balancer Do Its Job

The third listed resolution suggests disabling the related feature on your load balancer. I don’t like that option, personally. I don’t have much experience with the Citrix product, but I know that F5 buries its “Auto Last Hop” feature fairly deeply. Also, these two manufacturers enable the feature by default. It won’t be obvious to a maintainer that you’ve made the change.

However, your situation might dictate that disabling the load balancer’s feature causes fewer problems than changing the Hyper-V or physical switch configuration. Do what works best for you.

Using a Different Internal Router Also Addresses the Issue

In all of these scenarios, the load balancer performs routing. Actually, these types of load balancers always perform routing, because they present a single IP address for the service to the outside world and translate internally to the back-end systems.

However, nothing states that the internal source IP address of the load balancer must exist in the same subnet as the back-end virtual machines. You might do that for performance reasons; as I said above, routing incurs overhead. However, this is all a known quantity and modern routers are pretty good at what they do. If any router is present between the load balancer and the back-end virtual machines, then the MAC address issue will sort itself out regardless of your load balancing and teaming mode selections.

Have You Experienced this Phenomenon?

If so, I’d love to hear from you. On what system did you experience it? How did you resolve the situation (if you were able)? Perhaps you’ve just encountered it and arrived here to get a solution – if so, let me know if this explanation was helpful or if you need any further assistance regarding your particular environment. The comment section below awaits.

Microsoft pledges to cut carbon emissions by 75 percent by 2030 – Microsoft on the Issues

At Microsoft, we believe climate change is an urgent problem that demands a global response from all industries. We are committed to doing our part and have been taking steps to address and reduce our carbon footprint for nearly a decade. In 2009, Microsoft set its first carbon emissions target. In 2012, we became one of the first companies to put an internal global carbon fee in place, which enables us to operate 100 percent carbon neutral. Last year, we put in place targets to get more energy from renewable sources.

Today, we will take the next step in this journey by pledging to reduce our operational carbon emissions 75 percent by 2030, against a 2013 baseline. We’ll do this through continued progress against our carbon neutrality and renewable energy commitments, as well as investments in energy efficiency. This puts Microsoft on a path, as a company, to meet the goals set in the Paris climate agreement, which is a level of decarbonization that many scientists believe is necessary to keep global temperature increase below 2 degrees Celsius. We estimate this will help avoid more than 10 million metric tons of carbon emissions by 2030.

As we expand our global cloud infrastructure, we will increasingly turn to renewable energy because it is a clean power source and gives us better financial predictability. It’s good for the environment, our customers and our business. Our cloud-based programs to reduce resource consumption have already cut energy consumption at our main campus in Redmond, Washington by nearly 20 percent, reducing emissions and our power bill. The data we’ve collected on our energy consumption laid the groundwork for us to now buy our own clean energy at market rates, and we’ll soon be powering our Puget Sound campus with 100 percent carbon-free energy. Put simply, the environment and our business both benefit each time we’ve implemented sustainability targets and goals.

We’ve also seen that the private sector can be a catalyst for exponential change. This is particularly true for companies like ours. Artificial intelligence (AI) and the cloud are enabling companies and governments to make smarter, real-time decisions that lower emissions and reduce resource consumption in areas from buildings to transportation to manufacturing to agriculture to the production and distribution of electricity. We’re working not only to enable these transformations, but also to create and democratize new innovations through programs like AI for Earth that can help our customers adapt and thrive in a changing environment.

But even with our commitments within our operations and work with our customers, there’s still more to do.

As a global company, the changes we make in how we operate our business and the goals we set have a worldwide impact. It’s our hope that this pledge inspires others to join us in setting targets, and provides confidence to governments, companies and individuals that it’s possible for entities to help reach the goals set in the Paris climate agreement. By raising our ambitions and taking these actions, our goal is to help make the future more sustainable and beneficial to everyone.


Default S3 encryption walls off vulnerable customer data

AWS has updated its security policies and defaults for Amazon S3 encryption to address a recurring problem for customers that are ill-prepared for the complexity of the service.

Amazon Simple Storage Service (S3) is one of the most popular services on AWS, but its ever-expanding ancillary security options on both the client and server sides have led customers to misconfigure settings and expose their data to the public. The latest change by AWS to encrypt objects in S3 buckets as the default setting could help mitigate some of those issues.

Several household-name companies, including Accenture, Verizon and WWE, were publicly shamed this year over leaky S3 buckets — exposed not because of malicious attacks, but through the efforts of security firms scanning for vulnerabilities. There’s no evidence data was stolen or copied in those cases, but bad actors likely would follow the same path to access corporate information stored on AWS.

One of the most attractive elements of S3 is its flexibility, with multiple configurations and connections to numerous AWS tools and services. But that variety introduces choices, and sometimes users unknowingly make the wrong ones.

A simple check box item for S3 encryption would be a simple fix even for enterprises with hundreds of accounts and thousands of buckets, said Zohar Alon, CEO of Dome9, a cloud security company in Mountain View, Calif. But with so many ways to configure S3, users might not realize they’ve exposed their data.

“The 22-year-old developer will not take the time to read the manual of what do the five options mean, so we need to pre-position it,” Alon said. “We need to direct them to the right answer. We need to take check boxes away rather than add more.”


Encryption is one of several policy choices for users, and those who want to encrypt everything must reject non-encrypted objects. The new S3 default encryption setting will instead automatically encrypt every new object written to the bucket.

AWS was built to provide a set of tools for customers to choose how to develop their applications. In the case of encryption, Amazon has made a choice for them — and it’s the right one, because of the changing nature of workloads hosted on its platform, said Fernando Montenegro, an analyst at 451 Research.

“As these [workloads] became more critical, they recognize their customers are having additional demands,” he said. “As they add more workloads related to specific compliance regimes, they have to follow that and have the right level of encryption.”

S3 encryption is an important step because 90% of users defer to the default option, Alon said. This won’t solve every problem, however, especially as cloud workloads begin to sprawl across multiple platforms.

“There are many ways you can shoot yourself in the leg when storing data on [Microsoft] Azure just like on AWS, so it’s asking a lot to expect the security team to figure that out across an ever-growing footprint of cloud assets and subscriptions.”

Go beyond S3 encryption

For the continued edification of AWS customers, buckets that are publicly accessible will carry a prominent indicator in the S3 console, new permission checks identify why a bucket is public, and additional information in inventory reports identifies the status of each object.

S3 is a powerful service, but users often overlook the responsibilities that come along with that, Montenegro said. He’s particularly high on the permission checks and inventory reports because they can help address the knowledge gap.

“As more people begin to use this they have a clearer picture of what you’re doing might have unintended consequences,” he said.

This isn’t Amazon’s first response to this problem. In the past six months it added new Config rules and emailed customers to caution them to take note of their publicly accessible assets. Amazon Macie, a service introduced over the summer, incorporates machine learning to track the S3 usage and identify anomalies. Other recent AWS updates include more control over access management when replicating to a separate destination account, and the ability to replicate encrypted data that uses AWS Key Management Service across regions.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].

Tools for system administrators that don’t cost a dime

Windows admins can’t solve every problem with System Center or PowerShell. There are times when a simple utility fills a particular need to assist a troubleshooting exercise or just make a daily task easier.

A system administrator handles a number of small tasks on a daily basis. They must often create screenshots for documentation or to pass along to the help desk to help illustrate an issue. There are many freeware utilities available that make the IT staff more productive. These helpful free tools for system administrators are worth a look.

Check on disk space use

Windows Explorer and PowerShell are fine for general file management, but some tools for system administrators offer more functionality than native software. Dirsize and SpaceSniffer are two freeware applications that give a quick overview of what takes up space on the disk. These utilities are portable on Windows, which provides additional flexibility.

Dirsize: This is the more basic application. It provides a File Explorer tree view and shows the size of each folder. Admins can adjust the color coding to their preference; the default settings highlight folders with less data in green, while folders that take up more space show up as red.

SpaceSniffer: A more advanced tool for system administrators, SpaceSniffer offers a visual representation of boxes to show what folders and files use large amounts of space. These boxes are also layered to show the location of data within a specific folder. Admins cut or delete unwanted data from the application with a right click on a file or folder.

Capture screenshots in a snap

The native PrintScreen or Alt+PrtScr hotkey in Windows saves the entire screen or active window, respectively, to the clipboard. The Snipping Tool, which debuted in Windows Vista, selects a specific part of the screen for a screenshot. But there are even better free tools for system administrators for this purpose.

Greenshot: This tool runs in the background and uses both the PrintScreen option and combinations of the Alt, Shift and Ctrl keys to grab certain parts or the entire screen based on preferences. Configure the different commands to capture full screen, window, region, last region and a scrolling window in Internet Explorer. Greenshot can also be configured to automatically open screenshots in apps such as MS Paint or Greenshot’s own editor to highlight areas and add comments to the image. Admins then have several options, such as sending the screenshot to a printer or adding it to an email message. This is a useful tool for system administrators who take many screenshots to share information and get technical support. Greenshot also has a portable version.

ShareX: This utility is more feature-rich than Greenshot, with greater customization options and optical character recognition. ShareX also provides more upload locations. Admins should look at this setting first, since screenshots go to the Imgur image-sharing site by default. ShareX stores the Imgur URLs to share the full image, its thumbnail and the link to delete the image from the site. Users can automatically upload the screenshot to most major social media platforms, create a thumbnail of the image or choose from a wide range of other options. ShareX is the ideal freeware screenshot choice for advanced users, while Greenshot suits those with simpler needs.

Manipulate and store text

The Notepad and WordPad text editors are adequate for simple text handling, but there are several freeware utilities that make it easier for admins to type and store text.

Notepad++: This application touts a wide array of features. It numbers and highlights lines of text, allows tabbed documents and generates syntax highlighting for numerous languages, such as JavaScript, PowerShell and extensible markup language.

Another advanced feature is macro recording, which is useful when search and replace is insufficient. For example, a user who wants to remove a trailing space off the end of each line can use the feature to record the End+Backspace+Down Arrow key combination and play it back for each line in the file. This just scratches the surface of the capabilities in Notepad++.

Ditto: This tool is a way to overcome the inherent limits in the Windows clipboard. For example, if the admin copies text with Ctrl+C but doesn’t paste the content into a document or email, it invariably gets overwritten when the admin copies more text.

Ditto stores text and images copied to the clipboard, which admins can refer to at any time. The Ctrl+~ hotkey brings up the list of cached clipboard entries. The admin then chooses which item to paste. The program includes a setting to share clipboard entries to different computers. Admins who constantly copy and paste into the clipboard will appreciate the other features in this highly configurable application.

Gain remote control of servers

Windows admins spend a majority of their time on computers that are not physically near them. But sometimes they must manage multiple computers that are all within an arm’s length. Microsoft offers a different freeware option that works in each scenario.

Remote Desktop Connection Manager (RDCMan): This Microsoft tool gives Windows administrators a single management console to select and then connect to a remote server. Admins don’t need to memorize every server name and click on the right one. In RDCMan, each server can have its own remote desktop settings, whereas the native Remote Desktop app in Windows only remembers the last settings used. RDCMan produces a thumbnail view to show all the servers in the list and displays what the desktop showed in the last session. Admins use RDCMan to configure multiple desktop sets so they can group servers to their preference.

Mouse without Borders: This virtual keyboard, video, mouse (KVM) switch from Microsoft enables admins to control up to four PCs at once from a single mouse and keyboard over the network. The client must run on each device, but this is a great option if there are multiple physical PCs and laptops on the admin’s desk. When the cursor moves off the edge of one monitor, it appears on the next computer. The admin can copy and paste files from one computer to the next, as well as key in commands from a single keyboard. Even if it’s only a two-PC setup of a user box and an admin box, Mouse without Borders is a worthwhile alternative to a physical KVM. There are two caveats: It requires space for multiple monitors and isn’t ideal if the hardware constantly changes.
