
Troubleshoot common VM live migration errors


VM live migration errors in Hyper-V include connection issues, virtual switch mismatches and incompatible hardware. Learn more about these common errors and how to resolve them.


Although there are many things that can cause VM live migrations to fail, there are some issues that are especially common. Some of the more prevalent VM live migration issues include failed attempts to connect to the destination server, unsupported protocol versions, virtual switch mismatches and incompatible hardware.


One of the more common issues that occurs during a VM live migration involves an error message popping up when you attempt to connect to the destination host. An example of this error is shown in Figure A.

Figure A. Hyper-V displays an error saying it couldn’t connect to the destination server.

The error message cautions you to make sure the Windows Remote Management (WinRM) port is open on the destination host. WinRM uses port 5985 (HTTP) and port 5986 (HTTPS).

If the required firewall ports are open, there are a couple of other things you can check. Make sure the WinRM service is running on the destination host. You should also make sure Domain Name System (DNS) resolution is working properly and that you’re able to correctly resolve both the computer name and the fully qualified domain name of the remote host.
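A quick PowerShell sketch of those checks, run from the source host; the destination names here are placeholders:

# Is WinRM answering on the destination? (listens on ports 5985/5986)
Test-WSMan -ComputerName HV-DEST01

# Is the WinRM service running on the destination?
Get-Service -Name WinRM -ComputerName HV-DEST01

# Do both the short name and the FQDN resolve?
Resolve-DnsName -Name HV-DEST01
Resolve-DnsName -Name HV-DEST01.contoso.com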

Unsupported protocol version

Another common VM live migration issue is an unsupported protocol version. An example of this error is shown in Figure B.

Figure B. The operation failed because of an unsupported protocol version.

This error occurs because older versions of Hyper-V don’t support some of the latest VM features. Entering the following command displays the version of each VM on a given Hyper-V host:

Get-VM * | Select-Object Name, Version

The error shown in Figure B occurs if you try to move a version 8.0 VM to a Hyper-V server running Windows Server 2012 R2. Windows Server 2012 R2 doesn’t support VM versions above 5.0.
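If you know ahead of time that a VM must remain mobile between host versions, hosts running Windows Server 2016 or later can create it at a lower configuration version in the first place; a sketch, with the VM name and memory value as assumptions:

# Create the VM at configuration version 5.0 so 2012 R2 hosts can still run it
New-VM -Name 'VM01' -MemoryStartupBytes 1GB -Version 5.0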

Virtual switch mismatch

Live migrations can also fail if the destination host doesn’t contain a virtual switch with the same name as the one the VM is using. This error varies depending on whether you’re attempting the live migration using Hyper-V Manager or PowerShell.

In modern versions of Hyper-V Manager, a virtual switch mismatch isn’t a showstopper. Instead, the wizard will inform you of the problem and give you a chance to pick a different virtual switch, as shown in Figure C.

Figure C. Hyper-V gives you a chance to pick a different virtual switch.

If you attempt the migration using PowerShell, then Hyper-V will simply produce an error.

Incompatible hardware

Another somewhat common VM live migration issue is when the destination computer isn’t compatible with the VM’s hardware requirements. This error occurs because the source and destination Hyper-V hosts are running on hardware that is significantly different.

In most cases, you can correct the problem by using processor compatibility mode. Using processor compatibility mode tells the Hyper-V host to run the VM using only basic CPU features rather than attempting to use any advanced CPU features. Once processor compatibility mode is enabled, live migration will usually work. Incidentally, it’s almost always possible to turn processor compatibility mode back off once the VM live migration is complete.

Figure D. You can fix some VM live migration errors by enabling processor compatibility mode.

You can turn on processor compatibility mode from Hyper-V Manager by right-clicking the VM and choosing the Settings command from the shortcut menu. When the VM’s settings window opens, expand the Processor container and then select the Compatibility sub-container. Now, just select the Migrate to a physical computer with a different processor version checkbox, as shown in Figure D.
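The PowerShell equivalent is a one-liner; the VM name is a placeholder, and the VM must be turned off when you change the setting:

# Enable processor compatibility mode on a stopped VM
Set-VMProcessor -VMName 'VM01' -CompatibilityForMigrationEnabled $true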

These are just a few of the conditions that can cause live migration errors in Hyper-V. If you experience a really stubborn VM live migration error, take a moment to make sure the clocks on the Hyper-V hosts and the domain controllers are all in sync with one another.
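A quick way to compare those clocks, sketched with placeholder computer names:

# Show each host's time offset relative to its time source
w32tm /monitor /computers:HV-HOST01,HV-HOST02,DC01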


How to tackle an email archive migration for Exchange Online


A move from on-premises Exchange to Office 365 also entails determining the best way to transfer legacy archives. This tutorial can help ease migration complications.


A move to Office 365 seems straightforward enough until project planners broach the topic of the email archive migration.


Not all organizations keep all their email inside their messaging platform. Many organizations that archive messages also keep a copy in a journal that is archived away from user reach for legal reasons.

The vast majority of legacy archive migrations to Office 365 require third-party tools and follow a fairly standardized process to complete the job quickly and with minimal expense. Administrators should migrate mailboxes to Office 365 first and then the archive; that way, users gain the benefits of Office 365 before the archive reingestion completes.

An archive product typically scans mailboxes for older items and moves those to longer term, cheaper storage that is indexed and deduplicated. The original item typically gets replaced with a small part of the message, known as a stub or shortcut. The user can find the email in their inbox and, when they open the message, an add-in retrieves the full content from the archive.

Options for archived email migration to Office 365

The native tools to migrate mailboxes to Office 365 cannot handle an email archive migration. When admins transfer legacy archive data for mailboxes, they usually consider the following three approaches:

  1. Export the data to PST archives and import it into user mailboxes in Office 365.
  2. Reingest the archive data into the on-premises Exchange mailbox and then migrate the mailbox to Office 365.
  3. Migrate the Exchange mailbox to Office 365 first, then perform the email archive migration to put the data into the Office 365 mailbox.

Option 1 is not usually practical because it takes a lot of manual effort to export data to PST files. The stubs remain in the user’s mailbox and add clutter.

Option 2 is also labor-intensive and consumes considerable space on the Exchange Server infrastructure to support reingestion.

That leaves the third option as the most practical approach, which we’ll explore in a little more detail.

Migrate the mailbox to Exchange Online

When you move a mailbox to Office 365, it migrates along with the stubs that relate to the data in the legacy archive. The legacy archive will no longer archive the mailbox, but users can access their archived items. Because the stubs usually contain a URL path to the legacy archive item, there is no dependency on Exchange to view the archived message.

Some products that add buttons to restore the individual message into the mailbox will not work; the legacy archive product won’t know where Office 365 is without further configuration. Reconfiguring it is not usually necessary because the next stage is to migrate that data into Office 365.

Transfer archived data

Legacy archive solutions usually have a variety of policies for what happens with the archived data. You might configure the system to keep the stubs for a year but make archive data accessible via a web portal for much longer.

There are instances when you might want to replace the stub with the real message. There might be data that is not in the user’s mailbox as a stub but that users want on occasion.

We need tools that not only automate the data migration, but also understand these differences and can act accordingly. The legacy archive migration software should examine the data within the archive and then run batch jobs to replace stubs with the full messages. In this case, you can use the Exchange Online archive as a destination for archived data that no longer has a stub.

Email archive migration software connects via the vendor API. The software assesses the items and then exports them into a common temporary format — such as an EML file — on a staging server before connecting to Office 365 over a protocol such as Exchange Web Services. The migration software then examines the mailbox and replaces the stub with the full message.

An example of a third-party product’s dashboard detailing the migration progress of a legacy archive into Office 365.

Migrate journal data

With journal data, the most accepted approach is to migrate the data into the hidden recoverable items folder of each mailbox related to the journaled item. The end result is similar to using Office 365 from the day the journal began, and eDiscovery works as expected when following Microsoft guidance.

For this migration, the software scans the journal and creates a database of the journal messages. The application then maps each journal message to its mailbox. This process can be quite extensive; for example, an email sent to 1,000 people will map to 1,000 mailboxes.

After this stage, the software copies each message to the recoverable items folder of each mailbox. While this is a complicated procedure, it’s alleviated by software that automates the job.

Legacy archive migration offerings

There are many products tailored for an email archive migration. Each has its own benefits and drawbacks. I won’t recommend a specific offering, but I will mention two that can migrate more than 1 TB a day, which is a good benchmark for large-scale migrations. They also support chain of custody, which audits the transfer of all data.

TransVault has the most connectors to legacy archive products. Almost all the migration offerings support Enterprise Vault, but if you use a product that is less common, then it is likely that TransVault can move it. The TransVault product accesses source data either via an archive product’s APIs or directly to the stored data. TransVault’s service installs within Azure or on premises.

Quadrotech Archive Shuttle fits in alongside a number of other products suited to Office 365 migrations and management. Its workflow-based process automates the migration. Archive Shuttle handles fewer archive sources, but it does support Enterprise Vault. Archive Shuttle accesses source data via API and agent machines with control from either an on-premises Archive Shuttle instance or, as is more typical, the cloud version of the product.


Fixing Erratic Behavior on Hyper-V with Network Load Balancers

For years, I’d never heard of this problem. Then, suddenly, I’m seeing it everywhere. It’s not easy to precisely outline a symptom tree for you. Networked applications will behave oddly. Remote desktop sessions may skip or hang. Some network traffic will not pass at all. Other traffic will behave erratically. Rather than try to give you a thorough symptom tree, we’ll just describe the setup that can be addressed with the contents of this article: you’re using Hyper-V with a third-party network load balancer and experiencing network-related problems.

Acknowledgements

Before I ever encountered it, the problem was described to me by one of my readers. Check out our Complete Guide to Hyper-V Networking article and look in the comments section for Jahn’s input. I had a different experience, but that conversation helped me reach a resolution much more quickly.

Problem Reproduction Instructions

The problem may appear under other conditions, but should always occur under these:

  • The network adapters that host the Hyper-V virtual switch are configured in a team
    • Load-balancing algorithm: Dynamic
    • Teaming mode: Switch Independent (likely occurs with switch-embedded teaming as well)
  • Traffic to/from affected virtual machines passes through a third-party load-balancer
    • Load balancer uses a MAC-based system for load balancing and source verification
      • Citrix Netscaler calls its feature “MAC based forwarding”
      • F5 load balancers call it “auto last hop”
    • The load balancer’s “internal” IP address is on the same subnet as the virtual machine’s
  • Sufficient traffic must be exiting the virtual machine for Hyper-V to load balance some of it to a different physical adapter

I’ll go into more detail later. This list should help you determine if you’re looking at an article that can help you.
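You can verify the teaming half of those conditions directly from PowerShell:

# Dynamic + SwitchIndependent is the combination described above
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm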

Resolution

Fixing the problem is very easy, and can be done without downtime. I’ll show the options in preference order. I’ll explain the impacting differences later.

Option 1: Change the Load-Balancing Algorithm

Your best bet is to change the load-balancing algorithm to “Hyper-V port”. You can change it in the lbfoadmin.exe graphical interface if your management operating system is GUI-mode Windows Server. To change it with PowerShell (assuming only one team), something along these lines should work:
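# A sketch, assuming a single LBFO team on the host
Get-NetLbfoTeam | Set-NetLbfoTeam -LoadBalancingAlgorithm HyperVPort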

There will be a brief interruption of networking while the change is made. It won’t be as bad as the network problems that you’re already experiencing.

Option 2: Change the Teaming Mode

Your second option is to change your teaming mode. It’s more involved because you’ll also need to update your physical infrastructure to match. I’ve always been able to do that without downtime as long as I changed the physical switch first, but I can’t promise the same for anyone else.

Decide if you want to use Static teaming or LACP teaming. Configure your physical switch accordingly.

Change your Hyper-V host to use the same mode. If your Hyper-V system’s management operating system is Windows Server GUI, you can use lbfoadmin.exe. In this context, it makes no difference whether you pick static or LACP; if you want more information, read our article on the teaming modes. To change the mode in PowerShell (assuming only one team), one of the following should work:
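# A sketch, assuming a single LBFO team; pick the mode that matches the physical switch
Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Static

# or

Get-NetLbfoTeam | Set-NetLbfoTeam -TeamingMode Lacp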

Option 3: Disable the Feature on the Load Balancer

You could tell the load balancer to stop trying to be clever. In general, I would choose that option last.

An Investigation of the Problem

So, what’s going on? What caused all this? If you’ve got an environment that matches the one that I described, then you’ve unintentionally created the perfect conditions for a storm.

Whose fault is it? In this case, I don’t really think that it’s fair to assign fault. Everyone involved is trying to make your network traffic go faster. They sometimes do that by playing fast and loose in that gray area between Ethernet and TCP/IP. We have lots of standards that govern each individually, but not so many that apply to the ways that they can interact. The problem arises because Microsoft is playing one game while your load balancer plays another. The games have different rules, and neither side is aware that another game is afoot.

Traffic Leaving the Virtual Machine

We’ll start on the Windows guest side (also applies to Linux). Your application inside your virtual machine wants to send some data to another computer. That goes something like this:

  1. Application: “Network, send this data to computer www.altaro.com on port 443”.
  2. Network: “DNS server, get me the IP for www.altaro.com”
  3. Network: “IP layer, determine if the IP address for www.altaro.com is on the same subnet”
  4. Network: “IP layer, send this packet to the gateway”
  5. IP layer passes downward for packaging in an Ethernet frame
  6. Ethernet layer transfers the frame

The part to understand: your application and your operating system don’t really care about the Ethernet part. Whatever happens down there just happens. In particular, they don’t care at all about the source MAC.

[Diagram: traffic leaving the virtual machine]

Traffic Crossing the Hyper-V Virtual Switch

Because this particular Ethernet frame is coming out of a Hyper-V virtual machine, the first thing that it encounters is the Hyper-V virtual switch. In our scenario, the Hyper-V virtual switch rests atop a team of network adapters. As you’ll recall, that team is configured to use the Dynamic load balancing algorithm in Switch Independent mode. The algorithm decides if load balancing can be applied. The teaming mode decides which pathway to use and if it needs to repackage the outbound frame.

Switch independent mode means that the physical switch doesn’t know anything about a team. It only knows about two or more Ethernet endpoints connected in standard access mode. A port in that mode can “host” any number of MAC addresses; the physical switch’s capability defines the limit. However, the same MAC address cannot appear on multiple access ports simultaneously. Allowing that would cause all sorts of problems.

[Diagram: the same source MAC appearing on multiple switch ports]

So, if the team wants to load balance traffic coming out of a virtual machine, it needs to ensure that the traffic has a source MAC address that won’t cause the physical switch to panic. For traffic going out any adapter other than the one the virtual network adapter is registered on, it substitutes the MAC address of that physical adapter.

[Diagram: source MAC substitution keeping the physical switch happy]

So, no matter how many physical adapters the team owns, one of two things will happen for each outbound frame:

  • The team will choose to use the physical adapter that the virtual machine’s network adapter is registered on. The Ethernet frame will travel as-is. That means that its source MAC address will be exactly the same as the virtual network adapter’s (meaning, not repackaged)
  • The team will choose to use an adapter other than the one that the virtual machine’s network adapter is registered on. The Ethernet frame will be altered. The source MAC address will be replaced with the MAC address of the physical adapter

Note: The visualization does not cover all scenarios. A virtual network adapter might be affinitized to the second physical adapter. If so, its load balanced packets would travel out of the shown “pNIC1” and use that physical adapter’s MAC as a source.

Traffic Crossing the Load Balancer

So, our frame arrives at the load balancer. The load balancer has a really crummy job. It needs to make traffic go faster, not slower. And, it acts like a TCP/IP router. Routers need to unpackage inbound Ethernet frames, look at their IP information, and make decisions on how to transmit them. That requires compute power and time.

[Diagram: the load balancer routing the hard way]

If it took too much time to do all this, people would prefer to live without the load balancer. The load balancer’s manufacturer then doesn’t sell any units, doesn’t make any money, and goes out of business. So, manufacturers come up with all sorts of tricks to make traffic faster. One way to do that is to not do quite so much work on the Ethernet frame. This is a gross oversimplification, but you get the idea:

[Diagram: the load balancer’s MAC-based shortcut]

Essentially, the load balancer only needs to remember which MAC address sent which frame, and then it doesn’t need to worry so much about all that IP nonsense (it’s really more complicated than that, but this is close enough).

The Hyper-V/Load Balancer Collision

Now we’ve arrived at the core of the problem: Hyper-V sends traffic from virtual machines using source MAC addresses that don’t belong to those virtual machines. The MAC addresses belong to the physical NIC. When the load balancer tries to associate that traffic with the MAC address of the physical NIC, everything breaks.

Trying to be helpful (remember that), the load balancer attempts to return what it deems as “response” traffic to the MAC that initiated the conversation. The MAC, in this case, belongs directly to that second physical NIC. It wasn’t expecting the traffic that’s now coming in, so it silently discards the frame.

That happens because:

  • The Windows Server network teaming load balancing algorithms are send only; they will not perform reverse translations. There are lots of reasons for that and they are all good, so don’t get upset with Microsoft. Besides, it’s not like anyone else does things differently.
  • Because the inbound Ethernet frame is not reverse-translated, its destination MAC belongs to a physical NIC. The Hyper-V virtual switch will not send any Ethernet frame to a virtual network adapter unless it owns the destination MAC.
  • In typical system-to-system communications, the “responding” system would have sent its traffic to the IP address of the virtual machine. Through the normal course of typical networking, that traffic’s destination MAC would always belong to the virtual machine. It’s only because your load balancer is trying to speed things along that the frame is being sent to the physical NIC’s MAC address. Otherwise, the source MAC of the original frame would have been little more than trivia.

Stated a bit more simply: Windows Server network teaming doesn’t know that anyone cares about its frames’ source MAC addresses and the load balancer doesn’t know that anyone is lying about their MAC addresses.

Why Hyper-V Port Mode Fixes the Problem

When you select the Hyper-V port load balancing algorithm in combination with the switch independent teaming mode, each virtual network adapter’s MAC address is registered on a single physical network adapter. That’s the same behavior that Dynamic uses. However, no load balancing is done for any given virtual network adapter; all traffic entering and exiting any given virtual adapter will always use the same physical adapter. The team achieves load balancing by placing each virtual network adapter across its physical members in a round-robin fashion.

[Diagram: switch independent teaming with the Hyper-V port algorithm]

Source MACs will always be those of their respective virtual adapters, so there’s nothing to get confused about.

I like this mode as a solution because it does a good job addressing the issue without making any other changes to your infrastructure. The drawback would be if you only had a few virtual network adapters and weren’t getting the best distribution. For a 10GbE system, I wouldn’t worry.

Why Static and LACP Fix the Problem

Static and LACP teaming involve your Windows Server system and the physical switch agreeing on a single logical pathway that consists of multiple physical pathways. All MAC addresses are registered on that logical pathway. Therefore, the Windows Server team has no need to perform any source MAC substitution, regardless of the load balancing algorithm that you choose.

[Diagram: static/LACP teaming presenting a single logical pathway]

Since no MAC substitution occurs here, the load balancer has nothing to get confused about.

I don’t like this method as much. It means modifying your physical infrastructure. I’ve noticed that some physical switches don’t like the LACP failover process very much. I’ve encountered some that need a minute or more to notice that a physical link was down and react accordingly. With every physical switch that I’ve used or heard of, the switch independent mode fails over almost instantly.

That said, using a static or LACP team will allow you to continue using the Dynamic load balancing algorithm. All else being equal, you’ll get a more even load balancing distribution with Dynamic than you will with Hyper-V port mode.

Why You Should Let the Load Balancer Do Its Job

The third listed resolution suggests disabling the related feature on your load balancer. I don’t like that option, personally. I don’t have much experience with the Citrix product, but I know that F5 buries its “Auto Last Hop” feature fairly deeply. Also, these two manufacturers enable the feature by default, so it won’t be obvious to a maintainer that you’ve made the change.

However, your situation might dictate that disabling the load balancer’s feature causes fewer problems than changing the Hyper-V or physical switch configuration. Do what works best for you.

Using a Different Internal Router Also Addresses the Issue

In all of these scenarios, the load balancer performs routing. Actually, these types of load balancers always perform routing, because they present a single IP address for the service to the outside world and translate internally to the back-end systems.

However, nothing states that the internal source IP address of the load balancer must exist in the same subnet as the back-end virtual machines. You might do that for performance reasons; as I said above, routing incurs overhead. However, this is all a known quantity, and modern routers are pretty good at what they do. If any router is present between the load balancer and the back-end virtual machines, then the MAC address issue will sort itself out regardless of your load balancing and teaming mode selections.

Have You Experienced this Phenomenon?

If so, I’d love to hear from you. On what system did you experience it? How did you resolve the situation (if you were able)? Perhaps you’ve just encountered it and arrived here for a solution; if so, let me know whether this explanation was helpful or if you need further assistance with your particular environment. The comment section below awaits.

Microsoft pledges to cut carbon emissions by 75 percent by 2030

At Microsoft, we believe climate change is an urgent problem that demands a global response from all industries. We are committed to doing our part and have been taking steps to address and reduce our carbon footprint for nearly a decade. In 2009, Microsoft set its first carbon emissions target. In 2012, we became one of the first companies to put an internal global carbon fee in place, which enables us to operate 100 percent carbon neutral. Last year, we put in place targets to get more energy from renewable sources.

Today, we will take the next step in this journey by pledging to reduce our operational carbon emissions 75 percent by 2030, against a 2013 baseline. We’ll do this through continued progress against our carbon neutrality and renewable energy commitments, as well as investments in energy efficiency. This puts Microsoft on a path, as a company, to meet the goals set in the Paris climate agreement, which is a level of decarbonization that many scientists believe is necessary to keep global temperature increase below 2 degrees Celsius. We estimate this will help avoid more than 10 million metric tons of carbon emissions by 2030.

As we expand our global cloud infrastructure, we will increasingly turn to renewable energy because it is a clean power source and gives us better financial predictability. It’s good for the environment, our customers and our business. Our cloud-based programs to reduce resource consumption have already cut energy consumption at our main campus in Redmond, Washington by nearly 20 percent, reducing emissions and our power bill. The data we’ve collected on our energy consumption laid the groundwork for us to now buy our own clean energy at market rates, and we’ll soon be powering our Puget Sound campus with 100 percent carbon-free energy. Put simply, the environment and our business both benefit each time we implement sustainability targets and goals.

We’ve also seen that the private sector can be a catalyst for exponential change. This is particularly true for companies like ours. Artificial intelligence (AI) and the cloud are enabling companies and governments to make smarter, real-time decisions that lower emissions and reduce resource consumption in areas from buildings to transportation to manufacturing to agriculture to the production and distribution of electricity. We’re working not only to enable these transformations, but also to create and democratize new innovations through programs like AI for Earth that can help our customers adapt and thrive in a changing environment.

But even with our commitments within our operations and work with our customers, there’s still more to do.

As a global company, the changes we make in how we operate our business and the goals we set have a worldwide impact. It’s our hope that this pledge inspires others to join us in setting targets, and provides confidence to governments, companies and individuals that it’s possible for entities to help reach the goals set in the Paris climate agreement. By raising our ambitions and taking these actions, our goal is to help make the future more sustainable and beneficial to everyone.


Default S3 encryption walls off vulnerable customer data

AWS has updated its security policies and defaults for Amazon S3 encryption to address a recurring problem for customers that are ill-prepared for the complexity of the service.

Amazon Simple Storage Service (S3) is one of the most popular services on AWS, but its ever-expanding ancillary security options on both the client and server sides have led customers to misconfigure settings and expose their data to the public. The latest change by AWS to encrypt objects in S3 buckets as the default setting could help mitigate some of those issues.

Several household-name companies, including Accenture, Verizon and WWE, were publicly shamed this year over leaky S3 buckets — exposed not because of malicious attacks, but through the efforts of security firms scanning for vulnerabilities. There’s no evidence data was stolen or copied in those cases, but bad actors likely would follow the same path to access corporate information stored on AWS.

One of the most attractive elements of S3 is its flexibility, with multiple configurations and connections to numerous AWS tools and services. But that variety introduces choices, and sometimes users unknowingly make the wrong ones.

A check box item for S3 encryption would be a simple fix even for enterprises with hundreds of accounts and thousands of buckets, said Zohar Alon, CEO of Dome9, a cloud security company in Mountain View, Calif. But with so many ways to configure S3, users might not realize they’ve exposed their data.

“The 22-year-old developer will not take the time to read the manual of what do the five options mean, so we need to pre-position it,” Alon said. “We need to direct them to the right answer. We need to take check boxes away rather than add more.”


Encryption is one of several policy choices for users, and until now those who wanted to encrypt everything had to configure their buckets to reject non-encrypted uploads. The new S3 default encryption setting instead automatically encrypts objects as they are written to the bucket.
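For a concrete picture, the bucket-level default can be enabled explicitly via the AWS CLI; the bucket name here is a placeholder, and the command runs the same from a PowerShell prompt or bash:

aws s3api put-bucket-encryption --bucket my-example-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'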

AWS was built to provide a set of tools for customers to choose how to develop their applications. In the case of encryption, Amazon has made a choice for them — and it’s the right one, because of the changing nature of workloads hosted on its platform, said Fernando Montenegro, an analyst at 451 Research.

“As these [workloads] became more critical they recognize their customers are having additional demands,” he said. “As they add more workloads related to specific compliance regimes they have to follow that and have the right level of encryption.”

S3 encryption is an important step because 90% of users defer to the default option, Alon said. This won’t solve every problem, however, especially as cloud workloads begin to sprawl across multiple platforms.

“There are many ways you can shoot yourself in the leg when storing data on [Microsoft] Azure just like on AWS, so it’s asking a lot to expect the security team to figure that out across an ever-growing footprint of cloud assets and subscriptions,” Alon said.

Go beyond S3 encryption

For the continued edification of AWS customers, buckets that are publicly accessible will carry a prominent indicator in the S3 console, new permission checks will identify why a bucket is public, and additional information in inventory reports will identify the encryption status of each object.

S3 is a powerful service, but users often overlook the responsibilities that come along with that, Montenegro said. He’s particularly high on the permission checks and inventory reports because they can help address the knowledge gap.

“As more people begin to use this, they have a clearer picture of how what they’re doing might have unintended consequences,” he said.

This isn’t Amazon’s first response to this problem. In the past six months, it added new AWS Config rules and emailed customers to caution them to take note of their publicly accessible assets. Amazon Macie, a service introduced over the summer, incorporates machine learning to track S3 usage and identify anomalies. Other recent AWS updates include more control over access management when replicating to a separate destination account, and the ability to replicate data encrypted with AWS Key Management Service across regions.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Tools for system administrators that don’t cost a dime

Windows admins can’t solve every problem with System Center or PowerShell. There are times when a simple utility fills a particular need to assist a troubleshooting exercise or just make a daily task easier.


A system administrator handles a number of small tasks on a daily basis. They must often create screenshots for documentation or to pass along to the help desk to illustrate an issue. There are many freeware utilities available that make IT staff more productive. These helpful free tools for system administrators are worth a look.

Check on disk space use

Windows Explorer and PowerShell are fine for general file management, but some tools for system administrators offer more functionality than native software. Dirsize and SpaceSniffer are two freeware applications that give a quick overview of what takes up space on the disk. These utilities are portable on Windows, which provides additional flexibility.

Dirsize: This is the more basic application. It provides a File Explorer tree view and shows the size of each folder. Admins can adjust the color coding to their preference; the default settings highlight folders with less data in green, while folders that take up more space show up in red.

SpaceSniffer: A more advanced tool for system administrators, SpaceSniffer offers a visual representation of boxes to show what folders and files use large amounts of space. These boxes are also layered to show the location of data within a specific folder. Admins cut or delete unwanted data from the application with a right click on a file or folder.

Capture screenshots in a snap

The native PrtScr or Alt+PrtScr hotkey in Windows copies the entire screen or the active window, respectively, to the clipboard. The Snipping Tool, which debuted in Windows Vista, selects a specific part of the screen for a screenshot. But there are even better free tools for system administrators for this purpose.

Greenshot: This tool runs in the background and uses the PrtScr key, alone and in combination with the Alt, Shift and Ctrl keys, to grab certain parts or the entire screen based on preferences. Configure the different commands to capture the full screen, a window, a region, the last region or a scrolling window in Internet Explorer. Greenshot can also be configured to open screenshots automatically in an app, such as MS Paint or Greenshot’s own editor, to highlight areas and add comments to the image. Admins then have several options, such as sending the screenshot to a printer or adding it to an email message. This is a useful tool for system administrators who take many screenshots to share information and get technical support. Greenshot also has a portable version.

ShareX: This utility is more feature-rich than Greenshot, with greater customization options and optical character recognition. ShareX also provides more upload locations. Admins should review that setting first, since screenshots go to the Imgur image-sharing site by default. ShareX stores the Imgur URLs to share the full image, its thumbnail and the link to delete the image from the site. Users can automatically upload the screenshot to most major social media platforms, create a thumbnail of the image or choose from a wide range of other options. ShareX is the ideal freeware screenshot choice for advanced users, while Greenshot suits those with simpler needs.

Manipulate and store text

The Notepad and WordPad text editors are adequate for simple text handling, but there are several freeware utilities that make it easier for admins to type and store text.

Notepad++: This application touts a wide array of features. It numbers and highlights lines of text, allows tabbed documents and provides syntax highlighting for numerous languages, such as JavaScript, PowerShell and XML.

Another advanced feature is macro recording, which is useful when search and replace is insufficient. For example, a user who wants to remove a trailing space off the end of each line can use the feature to record the End+Backspace+Down Arrow key combination and play it back for each line in the file. This just scratches the surface of the capabilities in Notepad++.

Ditto: This tool is a way to overcome the inherent limits in the Windows clipboard. For example, if the admin copies text with Ctrl+C but doesn’t paste the content into a document or email, it invariably gets overwritten when the admin copies more text.

Ditto stores text and images copied to the clipboard, which admins can refer to at any time. The Ctrl+~ hotkey brings up the list of cached clipboard entries. The admin then chooses which item to paste. The program includes a setting to share clipboard entries with different computers. Admins who constantly copy and paste will appreciate the other features in this highly configurable application.

Gain remote control of servers

Windows admins spend a majority of their time on computers that are not physically near them. But sometimes they must manage multiple computers that are all within an arm’s length. Microsoft offers a different freeware option that works in each scenario.

Remote Desktop Connection Manager (RDCMan): This Microsoft tool gives Windows administrators a single management console to select and then connect to a remote server. Admins don’t need to memorize every server name and click on the right one. In RDCMan, each server can have its own remote desktop settings, whereas the native Remote Desktop app in Windows only remembers the last settings used. RDCMan produces a thumbnail view to show all the servers in the list and displays what the desktop showed in the last session. Admins use RDCMan to configure multiple desktop sets so they can group servers to their preference.

Mouse without Borders: This virtual keyboard, video, mouse (KVM) switch from Microsoft enables admins to control up to four PCs at once from a single mouse and keyboard over the network. The client must run on each device, but this is a great option if there are multiple physical PCs and laptops on the admin’s desk. When the cursor moves off the edge of one monitor, it appears on the next computer. The admin can copy and paste files from one computer to the next, as well as key in commands from a single keyboard. Even if it’s only a two-PC setup of a user box and an admin box, Mouse without Borders is a worthwhile free alternative to a physical KVM. There are two caveats: it requires space for multiple monitors and isn’t ideal if the hardware constantly changes.


Project ‘Honolulu’: What you need to know

19 Sep 2017 by Eric Siron

The biggest problem with Hyper-V isn’t Hyper-V at all. It’s the management experience. We’ve all had our complaints about that, so I don’t think a rehash is necessary. Thing is, Hyper-V is far from alone. Microsoft has plenty of management issues across its other infrastructure roles and features as well. Enter Project ‘Honolulu’: an attempt to unify and improve the management experience for Microsoft’s infrastructure offerings.

Before I get very far into this, I want one thing to be made abundantly clear: the Honolulu Project is barely out of its infancy. As I write this, it is exiting private preview. The public beta bits aren’t even published yet.

With that said, unless many things change dramatically between now and release, this is not the Hyper-V management solution that you have been waiting for. At its best, it has a couple of nice touches. In a few cases, it is roughly equivalent to what we have now. For most things, it is worse than what we have available today. I hate to be so blunt about it because I believe that Microsoft has put a great deal of effort into Honolulu. However, I also feel like they haven’t been paying much attention to the complaints and suggestions the community has made regarding the awful state of Hyper-V management tools.

What is Project ‘Honolulu’?

When you look at Honolulu, it will appear something like an Azure-ified Server Manager. It adopts the right-to-left layouts that the Azure tools use, as opposed to the up-and-down scrolling that we humans and our mice are accustomed to.

Thou shalt not use Honolulu in a window

This sort of thing is normative for the Azure tools. If you have a 50″ 4k screen and nothing else to look at, I’m sure that it looks wonderful. If you are using VMConnect or one of those lower resolution slide-out monitors that are still common in datacenters, then you might not enjoy the experience. And yes, the “<” icon next to Tools means that you can collapse that panel entirely. It doesn’t help much. I don’t know when it became passé for columns to be resizable and removable. Columns should be resizable and removable.

As you see it in that screenshot, Honolulu is running locally. It can also run in a gateway mode on a server. You can then access it from a web browser from other systems and devices.

Requirements for Running Project ‘Honolulu’

For the Honolulu Project itself, you can install on:

  • Windows 10
  • Windows Server 2012 through 2016

On a Windows 10 desktop or a Server 2012 system, it will only be accessible locally.

If you install on a Server 2012 R2 through 2016 SKU, it will operate in the aforementioned gateway mode. You just open a web browser to that system on whatever port you configure, e.g., https://managementsystem:6516. You will be prompted for credentials.

When you provide credentials to Honolulu, the systems that you connect to will be associated with your account. If you connect to Honolulu with a different user account, it will not display any of the servers that were chosen under the other account. Each needs to be set up separately. You can import lists to reduce the pain.

Note: As it stands right now, I cannot get Honolulu to work on a 2012 R2 system. It will open, but then refuses to connect to any server in my organization. I am actively working on this problem and will report back if a solution can be found. That’s one of the dangers of using early software, not a lifelong condemnation of the product.

Requirements for Targets of Honolulu

The target system(s) must be a Server SKU, 2012 through 2016, with Windows Management Framework 5 or higher loaded. The easiest way to tell is to open a PowerShell prompt and run $PSVersionTable. The PowerShell version and the Windows Management Framework version will always be the same. It also helps if you can verify that you can connect from the management system to the target with Enter-PSSession.

The following screenshot shows an example. I first tested that my management system has the correct version. Then I connected to my target and checked the WMF version there. I should have no problems setting up the first system to run Project Honolulu to connect to the second system.

[Screenshot: checking the WMF version on the management system and the target]
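In text form, that check looks something like this; the target name is a placeholder:

# On the management system
$PSVersionTable.PSVersion

# Hop to the target and check there, too
Enter-PSSession -ComputerName HV-TARGET01
$PSVersionTable.PSVersion
Exit-PSSession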

If you are running all of the systems in the same domain, then this will all “just work”. I’m not sure yet how cross-domain authentication works. If you’ve decided that security is unimportant and you’re running your Hyper-V host(s) in workgroup mode, then you will need to swing the door wide open to attackers by configuring TrustedHosts on the target system(s) to trust any computer that claims to have the name of your Honolulu system.
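For reference, the kind of change that involves is sketched below; the Honolulu system name is a placeholder, and you should think hard before doing this:

# On the target: trust anything claiming to be the Honolulu system (a real security trade-off)
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'HONOLULU01'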

Requirements for Viewing Project ‘Honolulu’

Honolulu presents its views via HTML 5 web pages. Edge and Chrome work well. Internet Explorer doesn’t work at all:

[Screenshot: Internet Explorer refusing to display Honolulu]

I think it will be interesting to see how that plays out in the enterprise. Windows 10 isn’t exactly the best corporate player, so several organizations are hanging on to Windows 7. Others are moving to Windows 10, but opting for the Long-Term Servicing Branch (LTSB). LTSB doesn’t include Edge. So, is Microsoft (inadvertently?) pushing people toward Google Chrome?

Connecting to a Target Server in Honolulu

When you first start up Honolulu, you have little to look at:

[Screenshot: the empty connections list on first launch]

Click the + Add link to get started adding systems. Warning: If you’re going to add clusters, do that following the instructions in the next section. Only follow this for stand-alone hosts.

Type the name of a system to connect to, and it will automatically start searching. Hopefully, it will find the target. You can click the Submit button whether it can find it or not.

A working system:

[Screenshot: adding a server that Honolulu can find]

A non-working system:

[Screenshot: adding a server that Honolulu cannot find]

As you can see in the links, you can also Import Servers. For this, you need to supply a text file that contains a list of target servers.

Connecting to a Target Cluster in Honolulu

Honolulu starts out in “Server Manager” mode, so it will only connect to servers. If you try to connect it to a failover cluster in Server Manager mode, it will pick up the owning node instead. In order to connect to a failover cluster, you need to switch the mode.

At the top of the window, find the Server Manager heading. Drop that down and select Failover Cluster Manager.

[Screenshot: switching from Server Manager to Failover Cluster Manager]

Now, add clusters with the + Add button. When it detects the cluster, it will also prompt you to add the nodes as members of Server Manager:

[Screenshot: adding a cluster and its nodes]

Windows Management Framework Error for Honolulu

As mentioned in the beginning, every target system needs to have at least Windows Management Framework version 5 installed. If a target system does not meet that requirement, Honolulu will display that status:

[Screenshot: the Windows Management Framework status warning]

The Really Quick Tour for Honolulu

I focus on Hyper-V and I’m certain that dozens of other Honolulu articles are already published (if not more). So, let’s burn through the non-Hyper-V stuff really fast.

Right-click doesn’t do anything useful anywhere in Honolulu. Train yourself to use only the left mouse button.

Server Manager has these sections:

  • Overview: Shows many of the things that you can see in Computer Properties. Also has several real-time performance charts, such as CPU and memory. For 2016+ you can see disk statistics. I like this page in theory, but the execution is awful. It assumes that you always want to see the basic facts about a host no matter what and that you have a gigantic screen resolution. My VMConnect screen is set to 1366×768 and I can’t even see a single performance chart in its entirety:
    [Screenshot: the Overview page with charts cut off at 1366×768]
  • Certificates: No more dealing with all the drama of manually adding the certificates snap-in! Also, you can view the computer and user certificates at the same time! Unfortunately, it doesn’t look like you can request a new certificate, but most other functionality seems to be here.
  • Devices: You can now finally see the devices installed on a Server Core/Hyper-V Server installation. You can’t take any action except Disable, unfortunately. It’s still better than what we had.
  • Events: Event Viewer, basically.
  • Files: Mini-File Explorer in your browser! You can browse the directory structure and upload/download files. You can view properties, but you can’t do anything with shares or permissions.
  • Firewall: Covers the most vital parts of firewall settings (profile en/disabling and rule definitions).
  • Local Users and Groups: Add and remove local user accounts. Add them to or remove them from groups. You cannot add or delete local groups. Adding a user to a group is completely free-text; no browsing. Also, if you attempt to add a user that doesn’t exist, you get a confirmation message that tells you that it worked, but the field doesn’t populate.
  • Network: View the network connections and set basic options for IPv4 and IPv6.
  • Processes: Mostly like Task Manager. Has an option to Create Process Dump.
  • Registry: Nifty registry editor; includes Export and Import functions. Very slow, though; personally I’d probably give up and use regedit.exe for as long as I’m given a choice.
  • Roles and Features: Mostly what you expect. No option for alternate install sources, though, so you won’t be using it to install .Net 3.5. Also, I can’t tell how to discard accidental changes. No big deal if you only accidentally checked a single item. For some reason, clicking anywhere on a line toggles the checked/not checked state, so you can easily change something without realizing that you did it.
  • Services: Interface for installed services. Does not grant access to any advanced settings for a service (like the extra tabs on the SNMP Service). Also does not recognize the Delayed Start modifier for Automatic services. I would take care to only use this for Start and Stop functions.
  • Storage: Works like the Storage part of the Files and Storage Services section in Server Manager. Like the preceding sections, includes most of the same features as its real Server Manager counterpart, but not all.
  • Storage Replica: I’m not using Storage Replica anywhere so I couldn’t gauge this one. Requires a special setup.
  • Virtual Machines and Virtual Switches: These two sections will get more explanation later.
  • Windows Update: Another self-explanatory section. This one has most of the same functionality as its desktop counterpart, although it has major usability issues on smaller screens. The update list is forced to yield space to the restart scheduler, which consumes far more screen real estate than it needs to do its job.

Virtual Switches in Honolulu

Alphabetically, this comes after Virtual Machines, but I want to get it out of the way first.

The Virtual Switches section in Project ‘Honolulu’ mostly mimics the virtual switch interface in Hyper-V Manager. So, it gets props for being familiar. It takes major dings for duplicating Hyper-V Manager’s bad habits.

First, the view:

[Screenshot: the Virtual Switches overview]

Functionality:

  • New Virtual Switch
  • Delete Virtual Switch
  • Rename Virtual Switch
  • Modify some settings of a virtual switch

The Settings page (which I had to stitch together because it successfully achieves the overall goal of wasting maximal space):

[Screenshot: the virtual switch Settings page]

The New Virtual Switch screen looks almost identical, except that it’s in a sidebar so it’s not quite as wide.

Notes on Honolulu’s virtual switch page:

  • Copies Hyper-V Manager’s usage of the adapter’s cryptic Description field instead of its name field.
  • If you look at the Network Adapter setting on the Settings for vSwitch screenshot and then compare it to the overview screenshot, you should notice something: it didn’t pick the team adapter that I really have my vSwitch on. Also, you can’t choose the team adapter. I didn’t tinker with that because I didn’t want to break my otherwise functional system, but not being able to connect a virtual switch to a team is a non-starter for me.
  • Continues to use the incorrect and misleading “Share” terminology for “Shared with Management OS” and “Allow management OS to share this network adapter”. Hey Microsoft, how hard would it really be to modify those to say “Used by Management OS” and “Allow management OS to use this virtual switch”?
  • No VLAN settings.
  • No SR-IOV settings.
  • No Switch-Embedded Teaming settings.
  • No options for controlling management OS virtual NICs beyond the first one.

Virtual Machines in Honolulu

All right, this is why we’re here! Make sure that you’re over something soft or the let-down might sting.

Virtual Machine Overview

The overview is my favorite part, although it also manifests the wasteful space usage that plagues this entire tool. Even on a larger resolution, it’s poorly made. However, I like the information that it displays, even if you need to scroll a lot to see it all.

At the top, you get a quick VM count and a recap of recent events:

[Screenshot: the VM count and recent events]

Even though I like the events being present, that tiny list will be mostly useless in an environment of any size. Also, it might cause undue alarm. For instance, those errors that you see mean that Dynamic Memory couldn’t expand any more because the VMs had reached their configured maximum. You can’t see that here because it needs two inches of whitespace padding to its left and right.

You can also see the Inventory link. We’ll come back to that after the host resources section.

Virtual Machine Host Resource Usage

I mostly like the resource view. Even on my 1366×768 VMConnect window, I have enough room to fit the CPU and memory charts side-by-side. But, they’re stacked and impossible to see together. I’ve stitched the display for you to see what it could look like with a lot of screen to throw at it:

[Screenshot: host resource usage charts (stitched)]
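
Until the layout improves, the underlying numbers are one Get-Counter call away. A sketch using standard Hyper-V performance counters:

# Host CPU usage as the hypervisor measures it, plus free memory; five samples
Get-Counter -Counter @(
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
    '\Memory\Available MBytes'
) -SampleInterval 2 -MaxSamples 5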

Virtual Machine Inventory

Back at the top of the Virtual Machines page, you can find the Inventory link. That switches to a page where you can see all of the virtual machines:

[Screenshot: virtual machine inventory]

That doesn’t look so bad, right? My primary complaint with the layout is that the VM’s name should take priority. Given the choice, I’d rather see the VM’s full name than the Heartbeat or Protected statuses.

My next complaint is that, even at 1366×768, which is absolutely a widescreen resolution, the elements overrun each other. If I pick a VM that’s on, I must be very careful when reaching for the More menu so that I don’t inadvertently shut down the guest instead:

[Screenshot: the More menu crowding the Shutdown control]

What’s on that More menu? Here you go:

[Screenshot: the More menu]

That’s for a virtual machine that’s turned on. No, your eyes are not deceiving you: you cannot modify any of the settings of a virtual machine while it is running. Power states and checkpoints are the limit.
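
PowerShell, by contrast, will change plenty of things on a running guest. Two examples with a placeholder VM name (raising a Dynamic Memory maximum online has worked for several Hyper-V versions now):

# Raise the Dynamic Memory maximum while the guest stays online
Set-VMMemory -VMName 'demo-vm' -MaximumBytes 8GB

# Even renaming works on a running virtual machine
Rename-VM -Name 'demo-vm' -NewName 'demo-vm-renamed'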

I don’t know what Protected means. It’s not about being shielded or clustered. I suppose it means that the VM is being backed up to Azure? If you’re not using Azure Backup, this field just wastes even more space.

Virtual Machine Settings

If you select a virtual machine that’s off, you can then modify its settings. I elected not to take all of those screenshots; fitting the general Honolulu motif, they waste a great deal of space and present less information than Hyper-V Manager. These setting groups are available (PowerShell equivalents follow the list):

  • General: The VM’s name, notes, automatic start action, automatic stop action, and automatic critical state action.
  • Memory: Startup amount, Dynamic Memory settings, buffer, and weight.
  • Processors: Number only. No NUMA, compatibility mode, reservation, or weight settings.
  • Disks: I could not get the Disks tab to load for any virtual machine on any host, whether 2012 R2 or 2016. It just shows the loading animation.
  • Networks: Virtual switch connection, VLAN, MAC (including spoofing), and QoS. Nothing about VMQ, IOV, IPsec, DHCP Guard, Router Guard, Protected Network, Mirroring, Guest Teaming, or Consistent Device Naming.
  • Boot Order: I could not get this to load for any virtual machine.
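
Every one of those groups maps to a cmdlet, and the cmdlets also cover the items Honolulu omits. A sketch with placeholder names and values throughout:

# General: automatic start and stop actions
Set-VM -Name 'demo-vm' -AutomaticStartAction Start -AutomaticStopAction ShutDown

# Memory: startup amount, Dynamic Memory range, buffer, and weight
Set-VMMemory -VMName 'demo-vm' -StartupBytes 2GB -DynamicMemoryEnabled $true -MinimumBytes 1GB -MaximumBytes 8GB -Buffer 20 -Priority 50

# Processors: count, plus a setting Honolulu hides
Set-VMProcessor -VMName 'demo-vm' -Count 4 -CompatibilityForMigrationEnabled $true

# Disks: the list that wouldn't load
Get-VMHardDiskDrive -VMName 'demo-vm'

# Networks: VLAN, plus one of the missing toggles
Set-VMNetworkAdapterVlan -VMName 'demo-vm' -Access -VlanId 42
Set-VMNetworkAdapter -VMName 'demo-vm' -DhcpGuard On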

Other Missing Hyper-V Functionality in Honolulu

A criticism that we often level at Hyper-V Manager is just how many settings it excludes. Even taking that as the baseline, Project ‘Honolulu’ excludes more.

Features available in Hyper-V Manager that Honolulu does not expose:

  • Hyper-V host settings (any of them): Live Migration adapters, Enhanced Session Mode, RemoteFX GPUs, and default file locations
  • No virtual SAN manager. Personally, I can live with that, since people need to stop using pass-through disks anyway. But, there are some other uses for this feature and it still works, so it makes the list of Honolulu’s missing features.
  • Secure boot
  • VM Shielding
  • Virtual TPM
  • Virtual hardware add/remove
  • Indication of VM Generation
  • Indication/upgrade of VM version (see the PowerShell sketch after this list)
  • Shared Nothing Live Migration (intra-cluster Live Migration does work; see the Failover Clustering section below)
  • Storage (Live) Migration
  • Hyper-V Replica
  • Smart Paging file
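
For the generation and version indicators at least, PowerShell already fills the gap; the VM name below is a placeholder:

# Show what Honolulu won't: generation and configuration version
Get-VM | Select-Object Name, Generation, Version

# Upgrade a VM's configuration version (one-way, and the VM must be off)
Update-VMVersion -Name 'demo-vm'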

Except for the automatic critical action setting, I did not find anything in Project ‘Honolulu’ that isn’t in Hyper-V Manager. So, don’t look here for nested VM settings or anything like that.

Failover Clustering for Hyper-V in Honolulu

Honolulu’s Failover Cluster Manager is even more of a letdown than its Hyper-V section. Most of the familiar tabs are there, but it’s almost exclusively read-only. Even so, we Hyper-V administrators get the best of what it can offer.

If you look on the Roles tab, you can find the Move action. That initiates a Quick or Live Migration:

[Screenshot: the Move action initiating a migration]

Unfortunately, it forces you to pick a destination host. In a small cluster like mine, that’s no big deal. In a big cluster, you’d probably appreciate an automatic selector, and you can’t even see the other nodes’ load levels to help you decide. (PowerShell still offers the automatic choice; see below.)
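
The FailoverClusters module doesn’t force the decision on you. If I remember the behavior correctly, omitting -Node lets the cluster pick the best destination; the VM and node names here are placeholders:

# Live migrate and let the cluster choose the target node
Move-ClusterVirtualMachineRole -Name 'demo-vm' -MigrationType Live

# Or name a node explicitly, as Honolulu forces you to do
Move-ClusterVirtualMachineRole -Name 'demo-vm' -Node 'node2' -MigrationType Live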

Other nice features missing from Honolulu’s Failover Cluster Manager:

  • Assignment, naming, and prioritizing of networks
  • Node manipulation (add/evict)
  • Disk manipulation (add/remove cluster disk, promote/demote Cluster Shared Volume, CSV ownership change)
  • Quorum configuration
  • Core resource failover
  • Cluster validation. The report is already in HTML, so even if this tool can’t run validation, it would be really nice if it could display the results of one. (PowerShell remains the fallback; see below.)
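
In the meantime, validation stays a PowerShell job. Test-Cluster writes its report to a file whose path appears in the output, and you can open that in a browser; node names are placeholders:

# Run validation against the cluster nodes
Test-Cluster -Node 'node1', 'node2'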

Showstopping Hyper-V Issues in Project ‘Honolulu’

Pay attention to this article’s date, as all of this can change. As of this writing, these items prevent me from recommending Honolulu:

  • No settings changes for running virtual machines. The Hyper-V team has worked very hard to allow us to change more and more things while the virtual machine is running. Honolulu negates all of that work, and more.
  • No Hyper-V switch on a team member
  • No VMConnect (console access). If you try to connect to a VM, it uses RDP. I use a fair number of Linux guests, and Microsoft has worked hard to make them easy to run under Hyper-V. Even for Windows guests, an RDP session cuts out the pre-boot portions that we sometimes need to see. (VMConnect itself still works from a command line; see below.)
  • No host configuration
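
For console access specifically, the old standby still launches from any machine with the Hyper-V management tools installed; the host and VM names below are placeholders:

# True console access, pre-boot screens included
vmconnect.exe 'hv-host1' 'demo-vm'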

Any or all of these things might change between now and release. I’ll be keeping up with this project in hopes of being able to change my recommendation.

The Future of Honolulu

I need to stress, again, that Honolulu is just a baby. Yes, it needs a lot of work. My general take on it, though, is that it’s beginning life by following in the footsteps of the traditional Server Manager. The good: it tries to consolidate features into a single pane of glass. The bad: it doesn’t include enough. Sure, you can use Server Manager/Honolulu to touch all of your roles and features. You can’t use it as the sole interface to manage any of them, though. As-is, it’s a decent overview tool, but not much more.

Where Honolulu goes from here is in all of our hands. I’m writing this article a bit before the project goes into public beta, so you’re probably reading it at some point afterward. Get the bits, set it up, and submit your feedback. Be critical, but be nice. Designing a functional GUI is hard. Designing a great GUI is excruciatingly difficult. Don’t make it worse with cruel criticism.

Have any questions or feedback?

Leave a comment below!


Wanted – Surface Pro pen

Hi, I am looking for a pen for my Surface Pro 3. My present one has a sync problem for some unknown reason, even after I changed all the batteries.

Please let me know what you’ve got!

A broken one would be considered; I know I only need the top end, which is the Bluetooth part.

Many thanks

Location: Hemel Hempstead
