Can mankind’s greatest technological advancements help solve the biggest ecological challenges facing planet Earth? Can technology help accelerate biodiversity conservation? Can it predict global warming to reduce the potential impact? Can it help conserve fresh water? Can it help achieve global food security? These are some of the existential questions that have kept Lucas Joppa awake at night for more than a decade.
Today, as the first Chief Environmental Officer at Microsoft, Joppa leads AI for Earth, a five-year, $50 million global program that blends ecological science and cutting-edge AI to solve some of the planet’s most pressing environmental challenges. We caught up with him to learn more about the program, his experience with technology interventions for environmental advancement, and his vision of deploying AI to advance sustainability across the globe. Here are some edited excerpts from our conversation.
You have a PhD in Ecology and have worked as a Peace Corps Volunteer in Malawi. This is not exactly the profile of someone you’d expect to find at a technology company. How did you decide to join Microsoft?
My educational background is in environmental studies. After completing my undergraduate degree in Wildlife Ecology, I spent time in the Peace Corps working for Malawi’s Department of National Parks and Wildlife. Then I did my PhD in Ecology. What all the work on the environment side taught me was just how serious environmental issues really are.
The science shows how serious the issues really are, but the work also highlighted just how monumental the task is going to be: finding solutions that let the human species exist more sustainably alongside the rest of life on Earth. As soon as I began to truly grasp the enormity of the challenge, I started panicking a little, like everyone does. But it also got me thinking that there had to be some way to get out ahead of it.
I began to see that there was one thing accelerating exponentially, potentially even faster than the degradation of our planet’s natural resources. And that was technology. Thus, I decided to drive my career towards leveraging advances in technology to address the negative effects of human activities on the rest of life on Earth, and started focusing on the computational aspects of ecology. I joined Microsoft Research to focus on and lead research programs at the intersection of environmental and computer science. What enthused me was that Microsoft, about a decade ago, had realized that this was where the real challenges were, both for society and the technology sector.
How did you transition to the role of the Chief Environmental Officer at Microsoft? How did the AI for Earth program come about?
I pursued research programs for about eight years at Microsoft Research. That experience allowed us to step back a couple of years ago, look at the progress we had made in research from an environmental and technology perspective, and consider how we could carry it all the way into shipped products.
I put together all those learnings into one document, which I called “AI for Earth”. It laid out the opportunities I saw for Microsoft to make a more concerted, company-wide effort, rather than simply a research program, to leverage our 35 years of ongoing investment in AI research and technology and focus all those efforts on four key areas: agriculture, water, biodiversity, and climate change.
From my experience at Microsoft Research, we knew what the problems were, and we’d done enough on the technology front. So, it was time to put it into action. Last year, I left Microsoft Research and started serving as the company’s first Chief Environmental Scientist leading the AI for Earth program. That position recently expanded to Microsoft’s first Chief Environmental Officer, which allows me to oversee the whole environmental sustainability mission and mandate across the company.
Microsoft and National Geographic are teaming up to support data scientists who are tackling the “world’s biggest challenges.” The two organizations today announced the AI for Earth Innovation Grant program, a $1 million grant fund that will provide recipients financial assistance, access to AI tools and cloud services, and more to advance conservation research.
The grant program, which is accepting applications until October 8, will support between five and 15 projects in five core areas: agriculture, biodiversity, conservation, climate change, and water. In addition to funding, researchers will gain access to Microsoft’s AI platform and development tools, inclusion in the National Geographic Explorer community, and affiliation with National Geographic Labs, National Geographic’s research incubation and accelerator initiative.
“[I]n Microsoft, we found a partner that is well-positioned to accelerate the pace of scientific research and new solutions to protect our natural world,” Jonathan Baillie, chief scientist and executive vice president at the National Geographic Society, said in a statement. “With today’s announcement, we will enable outstanding explorers seeking solutions for a sustainable future with the cloud and AI technologies that can quickly improve the speed, scope, and scale of their work, as well as support National Geographic Labs’ activities around technology and innovation for a planet in balance.”
The aim is to make trained algorithms broadly available to the global community of environmental researchers, Lucas Joppa, Microsoft’s chief environmental scientist, said in a press release.
“Microsoft is constantly exploring the boundaries of what technology can do, and what it can do for people and the world,” Joppa said. “We believe that humans and computers, working together through AI, can change the way that society monitors, models, and manages Earth’s natural systems. We believe this because we’ve seen it — we’re constantly amazed by the advances our AI for Earth collaborators have made over the past months. Scaling this through National Geographic’s … network will create a whole new generation of explorers who use AI to create a more sustainable future for the planet and everyone on it.”
Selected recipients will be announced in December.
The AI for Earth Innovation Grant is an expansion of Microsoft’s AI for Earth program, announced in June 2017. In December, the Redmond company committed $50 million to an “extended strategic plan” that includes providing advanced training to universities and NGOs and the formation of a “multi-disciplinary” team of AI and sustainability experts.
Microsoft claims that in the past two years, the AI for Earth program has awarded more than 35 grants globally for access to its Azure platform and AI technologies.
During E3 this year, we showed the biggest and most diverse Xbox games lineup ever. Many of those games are coming in 2018, and as gamers we cannot wait to share them with fans. Among Microsoft Studios exclusive games are new IPs like Sea of Thieves (launching March 20), deep gameplay in fan-favorite franchises like State of Decay 2, and the explosive return of a celebrated franchise to Xbox with Crackdown 3 – all releasing on Xbox One and PC. That’s in addition to all the great ID@Xbox and cross-platform games (more on that below), as well as other unannounced surprises to come.
As we prepare to step into the new year, it’s a great time to reflect on our favorite gaming moments of 2017 on Xbox and PC. We released the world’s most powerful console, Xbox One X, with the largest games lineup in Xbox history, including more than 1,300 titles and over 220 exclusives. Among those were incredible 2017 releases from Microsoft Studios, third-party publishers and innovative independent developers, with over 85 specifically enhanced to take advantage of the power of Xbox One X. We also offered new ways for gamers to access content, interact with spectators and participate in the game development process.
The excitement capped recently with The Game Awards, where Forza Motorsport 7 took home Best Sports/Racing Game, PlayerUnknown’s Battlegrounds won Best Multiplayer Game, and Cuphead snagged three awards, including Best Independent Game.
Join us as we look back at our top 10 games moments of 2017.
The World’s Most Powerful Console Launches
Reception to Xbox One X has been incredible and we’re so grateful for our fans. One of the things we’ve enjoyed most is seeing how developers are taking advantage of the world’s most powerful console, with 4K Ultra HD, HDR, Dolby Atmos and other exciting features and improvements. Xbox One X truly is the best place to experience games with many enhanced games available and more adding support as we move into 2018.
The Biggest PC Game of 2017 Comes Exclusively to Console on Xbox One
PlayerUnknown’s Battlegrounds took the gaming world by storm in 2017. Over 25 million players experienced PUBG on PC and console fans got a shot at winning their own chicken dinners with its exclusive release on Xbox One as part of the Xbox Game Preview program last week. The reception from Xbox fans has been phenomenal with over one million copies sold in the first 48 hours alone. Through the Xbox Game Preview program, fans get to be a part of the development process, and PUBG on Xbox One will continually receive content updates including the new desert map, Miramar, optimizations, and more exciting stuff in the months ahead.
Xbox Game Pass Opens up a New Way to Play
Launching Xbox Game Pass was a big moment for Xbox this year. Giving fans more options to diversify and expand their library of games, as well as discover new experiences has been important to us, which is what we set out to achieve with Xbox Game Pass. We have been excited by the amazing fan reaction and engagement with Xbox Game Pass, which gives unlimited access to over 100 great games on Xbox One.
Xbox is the Best Place to Play Multi-Platform Games
This past year, we worked with fantastic publishers and developers across the industry to make Xbox the best place to play their games. Players responded by playing more than 1,870 third-party games for over 12 billion hours. The Xbox community demonstrated how Xbox Live is the fastest and most reliable gaming network by logging in more than 3.7 billion hours in multiplayer across games from our partners. This has inspired our partners to enhance over 80 Xbox One X games with any combination of benefits like faster loading times, higher resolution textures, HDR support or steadier framerates. We’re looking forward to a great 2018 full of new games and even more Enhanced games with our players and partners.
Forza Motorsport 7 Sets a New Bar
Throughout the years, Turn 10 Studios has worked hard to push the award-winning Forza franchise to new heights, and the pinnacle (so far) was this year’s Forza Motorsport 7 – a game built from the ground up to showcase native 4K gaming at 60 frames per second on Xbox One X. It was an honor bringing the community the most comprehensive, beautiful and authentic racing game ever made. In September alone, more than 7.9 million unique players raced in the Forza community. Next up is the Xbox One X Enhanced version of Forza Horizon 3 coming January 15, and we’ll have more exciting news in the months to come.
Original Xbox Games Join Backward Compatibility Collection
One of the big highlights this year was bringing Original Xbox classics into the Xbox One Backward Compatibility program. Compatibility is important to Xbox, to developers and their games, and our community. In 2017, we added 136 Xbox 360, 13 Original Xbox and seven enhanced Xbox 360 titles, which helped contribute to the more than 400 titles available today and 740 million hours of Backward Compatible games played to-date.
Independent Developers Drive Innovation
Every year, we look back at all the great games released by independent developers around the world, and 2017 was a banner year for great independent games. First off, we launched the Xbox Live Creators Program in August, and it’s already flourishing with more than 100 games playable on Xbox One and Windows 10 PCs today. And ID@Xbox continues to be the home for creativity, innovation, and just plain rad games. From console exclusives like Cuphead to Thimbleweed Park, What Remains of Edith Finch to Aer, Fortnite to Tacoma, and favorites like Ark, Smite, Warframe and Rocket League, ID@Xbox has fast become a super relevant way for independent developers to bring their creations to the Xbox One and Windows 10 community. With Deep Rock Galactic, Battlerite, The Darwin Project, Black Desert, Full Metal Furies, Robocraft, Below and hundreds of others in the pipeline for 2018, 2017 may get a run for its money!
Minecraft Becomes First Game Ever to Unify Players on PC, Mobile, and Console
2017 was a year focused on breaking down barriers for Minecraft. With the release of cross-device multiplayer in the Better Together Update, Mojang and the Minecraft team brought tens of millions of Minecraft players together on the game they love where they want, when they want and on the device they want. In 2018, Minecraft players can look forward to cross-device multiplayer on Nintendo Switch, a whole new world of under-the-sea adventures with the Update Aquatic, a spiffy new graphics engine powering the Super Duper Graphics Pack, and more.
Mixer Unlocks New Interactive Streaming Experiences
For the Mixer team, 2017 was about community growth and innovation. We welcomed a ton of new streamers, expanded our existing communities and added new ones, and made huge investments in new features like 4-person Co-Streaming, Mixer HypeZone and our new mobile app for iOS and Android that make Mixer the best place to stream and watch for console, PC, and mobile gamers alike. Seeing communities, creators and game developers unite around new experiences on Mixer has been incredible – Telltale Games’ “Crowd Play,” which enables the audience to control story decisions, for example, or developers exploring new means of interactivity in Minecraft, The Darwin Project, Death’s Door and more. 2018 will bring even more interactive games as well as new capabilities for streamers.
Reinvesting in Age of Empires
Celebrating the 20-year anniversary of the Age of Empires franchise with the community at gamescom this year was epic. We announced Age of Empires IV there, which has long been on many fan wish lists. Add that we are partnering with Relic Entertainment, one of the best RTS developers in the world, and there is a lot to look forward to with the storied PC franchise. It’s just the beginning, as Age of Empires: Definitive Edition launches early next year, and Definitive Editions of Age of Empires II and Age of Empires III are planned in the future.
The SteamVR library comes to Windows Mixed Reality
The Windows 10 Fall Creators Update brings mixed reality to your PC this holiday season, and along with it a catalog of over 2000 of the most popular VR games available with SteamVR. Windows Mixed Reality headsets don’t require any external cameras or sensors, and set up in just minutes so you can enjoy the immersive action of the top VR games available today, along with all of the apps in the Microsoft Store.
On behalf of the entire Xbox team, thank you to all Xbox fans. You are what makes gaming on Xbox great and the motivation that drives our team. 2018 is going to be an amazing year, and we can’t wait to share it with you.
If you haven’t already, be sure to check out your own personal Xbox Year-in-Review to see how you gamed in 2017. Happy holidays!
The two biggest public cloud providers have set their sights on VMware workloads, though they’re taking different approaches to accommodate the hypervisor heavyweight and its customers.
A little over a year after Amazon Web Services (AWS) and VMware pledged to build a joint offering to bridge customers’ public and private environments, Microsoft this week introduced a similar service for its Azure public cloud. There’s one important distinction, however: VMware is out of the equation, a hostile move met with equal hostility from VMware, which said it would not support the service.
Azure Migrate offers multiple ways to get on-premises VMware workloads to Microsoft’s public cloud. Customers now can move VMware-based applications to Azure with a free tool to assess their environments, map out dependencies and migrate using Azure Site Recovery. Once there, customers can optimize workloads for Azure via cost management tools Microsoft acquired from Cloudyn.
This approach eschews the VMware virtualization and adapts these applications into a more cloud-friendly architecture that can use a range of other Azure services. A multitude of third-party vendors offer similar capabilities. It’s the other part of the Azure migration service that has drawn the ire of VMware.
VMware virtualization on Azure is a bare-metal subset of Azure Migrate that can run a full VMware stack on Azure hardware. It’s expected to be generally available sometime next year. This offering is a partnership with unnamed VMware-certified partners and VMware-certified hardware, but it notably cuts VMware out of the process, and out of the revenue stream.
In response, VMware criticized Microsoft’s characterization of the Azure migration service as part of a transition to the public cloud. In a blog post, Ajay Patel, VMware senior vice president, cited the lack of joint engineering between VMware and Microsoft and said the company won’t recommend or support the product.
This isn’t the first time these two companies have butted heads. Microsoft launched Hyper-V almost a decade ago with similar aggressive tactics to pull companies off VMware’s hypervisor, said Steve Herrod, who was CTO at VMware at the time. Herrod is currently managing director at venture capital firm General Catalyst.
Part of the motivation here could be Microsoft posturing, either to negotiate a future deal with VMware or to ensure it doesn’t lose out on these types of migrations, Herrod said. And of course, if VMware had its way, its software stack would be on all the major clouds, he added.
VMware on AWS, which became generally available in late August, is operated by VMware, and through the company’s Cloud Foundation program ports its software-defined data centers to CenturyLink, Fujitsu, IBM Cloud, NTT Communications, OVH and Rackspace. The two glaring holes in that swath of partnerships are Azure and Google Cloud, widely considered to be the second and third most popular public clouds behind AWS.
Companies have a mix of applications: some are well-suited to transition to the cloud, while others must stay inside a private data center or can’t be re-architected for the cloud. Hence, a hybrid cloud strategy has become an attractive option, and VMware’s recent partnerships have made companies feel more comfortable with the public cloud while curbing the management of their own data centers.
“I talk to a lot of CIOs and they love the fact that they can buy VMware and now feel VMware has given them the all-clear to being in the cloud,” Herrod said. “It’s purely the promise that they’re not locked into running VMware in their own data center that has caused them to double down on VMware.”
VMware virtualization on Azure is also an acknowledgement that some applications are not good candidates for the cloud-native approach, said Jeff Kato, an analyst at Taneja Group in Hopkinton, Mass.
“The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS,” he said.
Some customers will prefer VMware on AWS, but it won’t be a thundering majority, said Carl Brooks, an analyst at 451 Research. There’s also no downside for Microsoft to support what customers already do, and the technical aspect of this move is relatively trivial, he added.
“It’s a buyer’s market, and none of the major vendors are going to benefit from trying to narrow user options — quite the opposite,” Brooks said.
Perhaps it’s no coincidence that Microsoft debuted the Azure migration service in the days leading up to AWS’ major user conference, re:Invent, where there is expected to be more talk about the partnership between Amazon and VMware. It’s also notable that AWS is solely a public cloud provider, so it doesn’t have the same level of competitive friction with VMware that Microsoft has had historically, Kato said.
“Microsoft [is] trying to ride this Azure momentum to take more than their fair share of [the on-premises space], and in order to do that, they’re going to have to come up with a counter attack to VMware on AWS,” he said.
Despite VMware’s lack of support for the Azure migration service, it’s unlikely it can do anything to stop it, especially if it’s on certified hardware, Kato said. Perhaps VMware could somehow interfere with how well the VMware stack integrates with native Azure services, but big enterprises could prevent that, at least for their own environments.
“If the customer is big enough, they’ll force them to work together,” Kato said.
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at firstname.lastname@example.org.
As the biggest cloud providers battle for customers, the main tactic always comes back to cost.
A few years ago, Amazon, Microsoft and Google engaged in a public cloud price war after Google Cloud Platform (GCP) entered the market and began to undercut the other two hyperscalers. Prices continue to drop, though the fanfare and rapid-fire back-and-forth have largely subsided. But in the past month, the public cloud war has reignited in a new way, as these vendors add more cloud pricing models to reduce users’ costs.
The first salvo in the latest round of one-upmanship came from Amazon Web Services (AWS), with last week’s long-anticipated departure from per-hour billing in response to per-minute billing available on GCP and Microsoft Azure. Amazon jumped ahead with per-second billing, only to be matched days later by Google – which stated that its customers will feel less impact from the change than users of a certain unnamed vendor that used to charge on a per-hour basis – a thinly-veiled shot at AWS.
Not to be outdone, Microsoft this week added Reserved VM Instances, through which users can purchase capacity in advance in one- and three-year increments and save up to 72% compared to the on-demand price. It’s roughly modeled after AWS EC2 Reserved Instances, but adds a decidedly Microsoft slant, with even bigger discounts for users that incorporate the Azure Hybrid Use Benefit to transfer Windows Server licenses to Azure.
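To put the headline number in perspective, here is a back-of-the-envelope sketch; the rates are entirely hypothetical, and the 72% discount is the only figure taken from the announcement:

```powershell
# Hypothetical on-demand rate for a VM -- not a published Azure price
$onDemandPerHour = 0.20                            # dollars per hour

# A reserved instance at the maximum advertised 72% discount
$reservedPerHour = $onDemandPerHour * (1 - 0.72)   # 0.056 dollars per hour

# Annual cost of a VM that runs around the clock under each model
$hoursPerYear = 24 * 365                           # 8,760 hours
$onDemandPerHour * $hoursPerYear                   # 1752.00 -> roughly $1,752/year
$reservedPerHour * $hoursPerYear                   # 490.56  -> roughly $491/year
```

The trade-off is the usual one for reserved capacity: the discount only pays off if the workload actually runs for most of the committed term.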
The race around cloud costs has become less about direct cuts and more about cloud pricing models that give users a variety of ways to design their workloads, said Greg Arnette, CTO and founder of Sonian, an archival storage company in Waltham, Mass., whose service works with AWS, Azure and GCP.
“At some point, it feels like pricing has to bottom out, so it has to be about more creativity on how to design and develop software for how you use the cloud to find more savings,” Arnette said.
Microsoft may have trailed competitors in this area because these cloud pricing models differ from how it is accustomed to selling to enterprises, but its customers likely see these options on AWS and GCP and ask why they can’t get the same thing on Azure, said Owen Rogers, who heads up the Cloud Price Index at 451 Research.
“For the most part, Microsoft has been really slow to tackle the issue of cloud economics,” he said. “It’s almost like Azure is now playing catch up with Google and AWS when it comes to cloud economics, but they’re also trying to be more flexible.”
Between Microsoft’s quickened pace to adapt its pricing options and Google and Amazon’s shift to per-second billing, there’s constant pressure to show ongoing value to users, Rogers said.
The per-second shift likely won’t impact users much for now, particularly for VMs that run constantly, Rogers said. He does, however, see potential benefits in the future, as users move to short-lived workloads that run on containers or are constructed around serverless functions.
Microsoft’s me-too updates go beyond price
These types of discounts aren’t new. AWS and GCP have had spot instances for years, an option Microsoft finally added in May. AWS has built out its EC2 Reserved Instances program so extensively that some worry it’s on the brink of being too complicated. Google has a set of discounts for continued usage, and added its take on reserved instances earlier this year.
The me-too updates aren’t limited to cloud pricing models. Microsoft took its turn to play catch up this week with a spread of important features to coincide with Ignite, one of its major annual IT conferences. Among the new tools, which have popular equivalents on other cloud platforms, is Azure Data Box, with which users mail up to 100 terabytes of data from private data centers to the cloud. Microsoft also added multiple availability zones within a region, another major upgrade for customers that want more resiliency and high availability. This service is available in two regions now (East US 2 and West Europe) with previews for additional zones in the US, Europe and Asia by the end of the year.
Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at email@example.com.
The biggest problem with Hyper-V isn’t Hyper-V at all. It’s the management experience. We’ve all had our complaints about that, so I don’t think a rehash is necessary. Thing is, Hyper-V is far from alone. Microsoft has plenty of management issues across its other infrastructure roles and features as well. Enter Project ‘Honolulu’: an attempt to unify and improve the management experience for Microsoft’s infrastructure offerings.
Before I get very far into this, I want one thing to be made abundantly clear: the Honolulu Project is barely out of its infancy. As I write this, it is exiting private preview. The public beta bits aren’t even published yet.
With that said, unless many things change dramatically between now and release, this is not the Hyper-V management solution that you have been waiting for. At its best, it has a couple of nice touches. In a few cases, it is roughly equivalent to what we have now. For most things, it is worse than what we have available today. I hate to be so blunt about it because I believe that Microsoft has put a great deal of effort into Honolulu. However, I also feel like they haven’t been paying much attention to the complaints and suggestions the community has made regarding the awful state of Hyper-V management tools.
What is Project ‘Honolulu’?
When you look at Honolulu, it appears something like an Azure-ified Server Manager. It adopts the side-scrolling blade layout that the Azure tools use, as opposed to the up-and-down scrolling that we humans and our mice are accustomed to.
This sort of thing is normative for the Azure tools. If you have a 50″ 4k screen and nothing else to look at, I’m sure that it looks wonderful. If you are using VMConnect or one of those lower resolution slide-out monitors that are still common in datacenters, then you might not enjoy the experience. And yes, the “<” icon next to Tools means that you can collapse that panel entirely. It doesn’t help much. I don’t know when it became passé for columns to be resizable and removable. Columns should be resizable and removable.
As you see it in that screenshot, Honolulu is running locally. It can also run in a gateway mode on a server. You can then access it from a web browser from other systems and devices.
Requirements for Running Project ‘Honolulu’
For the Honolulu Project itself, you can install it on Windows 10 or on Windows Server 2012 through 2016.
On a Windows 10 desktop or a Server 2012 system, it will only be accessible locally.
If you install on a Server 2012 R2 through 2016 SKU, it will operate in the aforementioned gateway mode. You just open a web browser to that system on whatever port you configure, e.g. https://managementsystem:6516. You will be prompted for credentials.
When you provide credentials to Honolulu, the systems that you connect to will be associated with your account. If you connect to Honolulu with a different user account, it will not display any of the servers that were chosen under the other account. Each account needs to be set up separately. You can import lists to reduce the pain.
Note: As it stands right now, I cannot get Honolulu to work on a 2012 R2 system. It will open, but then refuses to connect to any server in my organization. I am actively working on this problem and will report back if a solution can be found. That’s one of the dangers of using early software, not a lifelong condemnation of the product.
Requirements for Targets of Honolulu
The target system(s) must be a Server SKU, 2012 through 2016, with Windows Management Framework 5 or higher loaded. The easiest way to tell is by opening a PowerShell prompt and running $PSVersionTable. The PowerShell version and the Windows Management Framework version will always be the same. It also helps to verify that you can connect from the management system to the target with Enter-PSSession.
The following screenshot shows an example. I first tested that my management system has the correct version. Then I connected to my target and checked the WMF version there. I should have no problems setting up the first system to run Project Honolulu to connect to the second system.
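In command form, that check boils down to a few lines; the target name here is a placeholder:

```powershell
# On the management system: confirm the local PowerShell/WMF version
$PSVersionTable.PSVersion          # Major should report 5 or higher

# Hop to the intended target and check there as well
Enter-PSSession -ComputerName SVTARGET   # 'SVTARGET' is a placeholder name
$PSVersionTable.PSVersion                # now reports the remote system's version
Exit-PSSession
```

If Enter-PSSession succeeds and both systems report version 5 or higher, Honolulu should have everything it needs to connect.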
If you are running all of the systems in the same domain, then this will all “just work”. I’m not sure yet how cross-domain authentication works. If you’ve decided that security is unimportant and you’re running your Hyper-V host(s) in workgroup mode, then you will need to swing the door wide open to attackers by configuring TrustedHosts on the target system(s) to trust any computer that claims to have the name of your Honolulu system.
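For completeness, that TrustedHosts change looks something like the following; the system name is a placeholder, and heed the warning above before running it:

```powershell
# Run on the target system(s): tells WinRM to accept non-Kerberos connections
# from anything claiming this name. 'HONOLULU-GW' is a placeholder for the
# name of your Honolulu system.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'HONOLULU-GW' -Concatenate

# Confirm what is currently trusted
Get-Item WSMan:\localhost\Client\TrustedHosts
```

The -Concatenate switch appends to any existing TrustedHosts entries rather than replacing them.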
Requirements for Viewing Project ‘Honolulu’
Honolulu presents its views via HTML5 web pages. Edge and Chrome work well. Internet Explorer doesn’t work at all.
I think it will be interesting to see how that plays out in the enterprise. Windows 10 isn’t exactly the best corporate player, so several organizations are hanging on to Windows 7. Others are moving to Windows 10, but opting for the Long-Term Servicing Branch (LTSB). LTSB doesn’t include Edge. So, is Microsoft (inadvertently?) pushing people toward Google Chrome?
Connecting to a Target Server in Honolulu
When you first start up Honolulu, you have little to look at:
Click the + Add link to get started adding systems. Warning: if you’re going to add clusters, follow the instructions in the next section instead. Only follow this procedure for stand-alone hosts.
Type the name of a system to connect to, and it will automatically start searching. Hopefully, it will find the target. You can click the Submit button whether it can find it or not.
A working system:
A non-working system:
As you can see in the links, you can also Import Servers. For this, you need to supply a text file that contains a list of target servers.
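The format of that file appears to be simply one server name per line; a hypothetical example (the names are placeholders):

```
svhv01
svhv02
svmgmt01
```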
Connecting to a Target Cluster in Honolulu
Honolulu starts out in “Server Manager” mode, so it will only connect to servers. If you try to connect it to a failover cluster in Server Manager mode, it will pick up the owning node instead. In order to connect to a failover cluster, you need to switch the mode.
At the top of the window, find the Server Manager heading. Drop that down and select Failover Cluster Manager.
Now, add clusters with the + Add button. When it detects the cluster, it will also prompt you to add the nodes as members of Server Manager:
Windows Management Framework Error for Honolulu
As mentioned in the beginning, every target system needs to have at least Windows Management Framework version 5 installed. If a target system does not meet that requirement, Honolulu will display that status:
The Really Quick Tour for Honolulu
I focus on Hyper-V and I’m certain that dozens of other Honolulu articles are already published (if not more). So, let’s burn through the non-Hyper-V stuff really fast.
Right-click doesn’t do anything useful anywhere in Honolulu. Train yourself to use only the left mouse button.
Server Manager has these sections:
Overview: Shows many of the things that you can see in Computer Properties. Also has several real-time performance charts, such as CPU and memory. For 2016+ you can see disk statistics. I like this page in theory, but the execution is awful. It assumes that you always want to see the basic facts about a host no matter what and that you have a gigantic screen resolution. My VMConnect screen is set to 1366×768 and I can’t even see a single performance chart in its entirety:
Certificates: No more dealing with all the drama of manually adding the certificates snap-in! Also, you can view the computer and user certificates at the same time! Unfortunately, it doesn’t look like you can request a new certificate, but most other functionality seems to be here.
Devices: You can now finally see the devices installed on a Server Core/Hyper-V Server installation. You can’t take any action except Disable, unfortunately. It’s still better than what we had.
Events: Event Viewer, basically.
Files: Mini-File Explorer in your browser! You can browse the directory structure and upload/download files. You can view properties, but you can’t do anything with shares or permissions.
Firewall: Covers the most vital parts of firewall settings (profile en/disabling and rule definitions).
Local Users and Groups: Add and remove local user accounts. Add them to or remove them from groups. You cannot add or delete local groups. Adding a user to a group is completely free-text; no browsing. Also, if you attempt to add a user that doesn’t exist, you get a confirmation message that tells you that it worked, but the field doesn’t populate.
Network: View the network connections and set basic options for IPv4 and IPv6.
Processes: Mostly like Task Manager. Has an option to Create Process Dump.
Registry: Nifty registry editor; includes Export and Import functions. Very slow, though; personally I’d probably give up and use regedit.exe for as long as I’m given a choice.
Roles and Features: Mostly what you expect. No option for alternate install sources, though, so you won’t be using it to install .NET 3.5. Also, I can’t tell how to discard accidental changes. No big deal if you only accidentally checked a single item. For some reason, clicking anywhere on a line toggles the checked/not checked state, so you can easily change something without realizing that you did it.
Services: Interface for installed services. Does not grant access to any advanced settings for a service (like the extra tabs on the SNMP Service). Also does not recognize the Delayed Start modifier for Automatic services. I would take care to only use this for Start and Stop functions.
Storage: Works like the Storage part of the Files and Storage Services section in Server Manager. Like the preceding sections, includes most of the same features as its real Server Manager counterpart, but not all.
Storage Replica: I’m not using Storage Replica anywhere so I couldn’t gauge this one. Requires a special setup.
Virtual Machines and Virtual Switches: These two sections will get more explanation later.
Windows Update: Another self-explanatory section. This one has most of the same functionality as its desktop counterpart, although it has major usability issues on smaller screens. The update list is forced to yield space to the restart scheduler, which consumes far more screen real estate than it needs to do its job.
Virtual Switches in Honolulu
Alphabetically, this comes after Virtual Machines, but I want to get it out of the way first.
The Virtual Switches section in Project ‘Honolulu’ mostly mimics the virtual switch interface in Hyper-V Manager. So, it gets props for being familiar. It takes major dings for duplicating Hyper-V Manager’s bad habits.
First, the view:
New Virtual Switch
Delete Virtual Switch
Rename Virtual Switch
Modify some settings of a virtual switch
The Settings page (which I had to stitch together because it successfully achieves the overall goal of wasting maximal space):
The New Virtual Switch screen looks almost identical, except that it’s in a sidebar so it’s not quite as wide.
Notes on Honolulu’s virtual switch page:
Copies Hyper-V Manager’s usage of the adapter’s cryptic Description field instead of its name field.
If you look at the Network Adapter setting in the Settings for vSwitch screenshot and then compare it to the overview screenshot, you should notice something: it didn’t pick the team adapter that I really have my vSwitch on. You also can’t choose the team adapter. I didn’t tinker with that because I didn’t want to break my otherwise functional system, but not being able to connect a virtual switch to a team is a non-starter for me.
Continues to use the incorrect and misleading “Share” terminology for “Shared with Management OS” and “Allow management OS to share this network adapter”. Hey Microsoft, how hard would it really be to modify those to say “Used by Management OS” and “Allow management OS to use this virtual switch”?
No VLAN settings.
No SR-IOV settings.
No Switch-Embedded Teaming settings.
No options for controlling management OS virtual NICs beyond the first one.
Virtual Machines in Honolulu
All right, this is why we’re here! Make sure that you’re over something soft or the let-down might sting.
Virtual Machine Overview
The overview is my favorite part, although it also manifests the wasteful space usage that plagues this entire tool. Even on a larger resolution, it’s poorly made. However, I like the information that it displays, even if you need to scroll a lot to see it all.
At the top, you get a quick VM count and a recap of recent events:
Even though I like having the events present, that tiny list will be mostly useless in an environment of any size. Also, it might cause undue alarm. For instance, the errors that you see mean that Dynamic Memory couldn’t expand any further because the VMs had reached their configured maximums. You can’t see that here, because the list needs two inches of whitespace padding to its left and right.
You can also see the Inventory link. We’ll come back to that after the host resources section.
Virtual Machine Host Resource Usage
I mostly like the resource view. Even on my 1366×768 VMConnect window, there would be enough room to fit the CPU and memory charts side-by-side. Instead, they’re stacked and impossible to see together. I’ve stitched the display so you can see what it could look like with a lot of screen to throw at it:
Virtual Machine Inventory
Back at the top of the Virtual Machines page, you can find the Inventory link. That switches to a page where you can see all of the virtual machines:
That doesn’t look so bad, right? My primary complaint with the layout is that the VM’s name should take priority. Given the choice, I’d rather see the VM’s name than the Heart Beat or Protected statuses.
My next complaint is that, even at 1366×768, which is absolutely a widescreen resolution, the elements have some overrun. If I pick a VM that’s on, I must be very careful when trying to access the More menu so that I don’t inadvertently Shutdown the guest instead:
What’s on that More menu? Here you go:
That’s for a virtual machine that’s turned on. No, your eyes are not deceiving you. You cannot modify any of the settings of a virtual machine while it is running. Power states and checkpoints are the limit.
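Until Honolulu catches up, PowerShell still covers the runtime changes that Hyper-V Manager allows. As one sketch, raising a running VM’s Dynamic Memory maximum (the VM name is a placeholder; changing memory on a running VM requires a 2016 host):

```powershell
# Raise the Dynamic Memory maximum on a running virtual machine
Set-VMMemory -VMName 'svtest' -MaximumBytes 4GB
```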
I don’t know what Protected means. It’s not about being shielded or clustered. I suppose it means that it’s being backed up to Azure? If you’re not using Azure backup then this field just wastes even more space.
Virtual Machine Settings
If you select a virtual machine that’s off, you can then modify its settings. I elected not to take all of those screenshots. Fitting with the general Honolulu motif, they waste a great deal of space and present less information than Hyper-V Manager. These setting groupings are available:
General: The VM’s name, notes, automatic start action, automatic stop action, and automatic critical state action
Memory: Startup amount, Dynamic Memory settings, buffer, and weight
Processors: Number only. No NUMA, compatibility mode, reservation, or weight settings
Disks: I could not get the disks tab to load for any virtual machine on any host, whether 2012 R2 or 2016. It just shows the loading animation
Networks: Virtual switch connection, VLAN, MAC (including spoofing), and QoS. Nothing about VMQ, IOV, IPSec, DHCP Guard, Router Guard, Protected Network, Mirroring, Guest Teaming, or Consistent Device Naming
Boot Order: I could not get this to load for any virtual machine.
Other Missing Hyper-V Functionality in Honolulu
A criticism that we often level at Hyper-V Manager is just how many settings it excludes. Even with Hyper-V Manager as the baseline, Project ‘Honolulu’ excludes more.
Features available in Hyper-V Manager that Honolulu does not expose:
Hyper-V host settings — any of them: Live Migration adapters, Enhanced Session Mode, RemoteFX GPUs, and default file locations
No virtual SAN manager. Personally, I can live with that, since people need to stop using pass-through disks anyway. But, there are some other uses for this feature and it still works, so it makes the list of Honolulu’s missing features.
Virtual hardware add/remove
Indication of VM Generation
Indication/upgrade of VM version
Shared Nothing Live Migration (intra-cluster Live Migration does work; see the Failover Clustering section below)
Storage (Live) Migration
Smart page file
Except for the automatic critical action setting, I did not find anything in Project ‘Honolulu’ that isn’t in Hyper-V Manager. So, don’t look here for nested VM settings or anything like that.
Failover Clustering for Hyper-V in Honolulu
Honolulu’s Failover Cluster Manager is even more of a letdown than its Hyper-V management. Most of the familiar tabs are there, but it’s almost exclusively read-only. However, we Hyper-V administrators get the best of what it can offer.
If you look on the Roles tab, you can find the Move action. That initiates a Quick or Live Migration:
Unfortunately, it forces you to pick a destination host. In a small cluster like mine, no big deal. In a big cluster, you’d probably like the benefit of the automatic selector. You can’t even see what the other nodes’ load levels look like to help you to decide.
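In the interim, PowerShell’s cluster cmdlets still give you the automatic selector. If you omit the -Node parameter, the cluster picks the best possible node on its own (the VM name is a placeholder):

```powershell
# Live Migrate a clustered VM; with no -Node, the cluster chooses the destination
Move-ClusterVirtualMachineRole -Name 'svtest' -MigrationType Live
```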
Other nice features missing from Honolulu’s Failover Cluster Manager:
Cluster validation. The report is already in HTML, so even if this tool can’t run validation, it would be really nice if it could display the results of one.
Showstopping Hyper-V Issues in Project ‘Honolulu’
Pay attention to the dating of this article, as all things can change. As of this writing, these items prevent me from recommending Honolulu:
No settings changes for running virtual machines. The Hyper-V team has worked very hard to allow us to change more and more things while the virtual machine is running. Honolulu negates all of that work, and more.
No Hyper-V switch on a team member
No VMConnect (console access). If you try to connect to a VM, it uses RDP. I use a fair number of Linux guests, and Microsoft has worked hard to make them easy to use; RDP doesn’t help me there. For Windows guests, an RDP session cuts out the pre-boot portions that we sometimes need to see.
No host configuration
Any or all of these things might change between now and release. I’ll be keeping up with this project in hopes of being able to change my recommendation.
The Future of Honolulu
I need to stress, again, that Honolulu is just a baby. Yes, it needs a lot of work. My general take on it, though, is that it’s beginning life by following in the footsteps of the traditional Server Manager. The good: it tries to consolidate features into a single pane of glass. The bad: it doesn’t include enough. Sure, you can use Server Manager/Honolulu to touch all of your roles and features. You can’t use it as the sole interface to manage any of them, though. As-is, it’s a decent overview tool, but not much more.
Where Honolulu goes from here is in all of our hands. I’m writing this article a bit before the project goes into public beta, so you’re probably reading it at some point afterward. Get the bits, set it up, and submit your feedback. Be critical, but be nice. Designing a functional GUI is hard. Designing a great GUI is excruciatingly difficult. Don’t make it worse with cruel criticism.
LAS VEGAS — Enterprise applications and data are increasingly moving to the cloud, but the endpoint remains the biggest security risk.
Ransomware, spear phishing and other emerging endpoint threats often fly under the radar of traditional security tools. And as they grow more sophisticated, they can trick even the most vigilant and well-educated user into clicking a malicious link or opening a malware-laden attachment.
In response to these endpoint security threats, Microsoft in Windows 10 has embraced the concept of micro-virtualization, which isolates applications and other system processes from each other. That way, if one process falls victim to an attack, it doesn’t affect the rest of the PC or the corporate network at large.
Microsoft also partners with Bromium, which developed micro-virtualization, to extend the technology’s capabilities further into Windows. In an interview at VMworld, Bromium co-founders Ian Pratt and Simon Crosby discuss that partnership and explain how organizations can protect themselves against emerging endpoint security threats.
Is the hype around ransomware real?
Ian Pratt: The whole point of ransomware is that it announces its presence and demands money. If you think about it, it’s the easiest kind of thing to detect.
The malware that tries to be stealthy — hiding in your machine, stealing your intellectual property or credit card data or patient records — typically, those kinds of attacks cost the organization far more.
It’s really kind of odd that so much of the behavior is being driven by ransomware. It’s drawing attention away from bigger risks.
What are the major challenges your customers are facing?
Pratt: Windows is their biggest challenge, not because Windows is worse from a security point of view, but because it’s most attacked. That’s where most organizations’ intellectual property lives.
— Ian Pratt, president, Bromium
It’s an impossible problem trying to secure Windows and all the applications. They’re just way too big of an attack surface. [Windows is] pushing 150 million lines of code, much of it written in the 1980s, when security was not what people focused on.
Simon Crosby: Out there on PCs, [organizations are] still doing arcane, silly stuff. A huge amount of the challenge is on legacy PCs.
What have been the effects of your partnership with Microsoft?
Crosby: The core capabilities of micro-virtualization are being adopted into Hyper-V, both on the Windows 10 client but also Windows Server. On the client side, in Windows 10, if you are running an enterprise license and you’re on the right hardware, then a couple of key Windows services move out of the operating system and into micro VMs. In particular, there is a service that manages locally maintained passwords and their hashes on the host. The goal there is to make the Windows kernel and progressively more and more applications protected and distrusted from each other.
How important is it to educate users about phishing and ransomware, compared to addressing these endpoint threats from a technical perspective?
Pratt: Blaming users, or hoping users will spot this stuff, is ridiculous. Some of the spear phishing attacks we’ve seen have been so well-crafted. We saw one, and the domain was a misspelling of Bromium. But if you looked at it, [you wouldn’t immediately notice]. You need to make it so that the user can click with confidence.
How can organizations find the right balance between security and user productivity?
Crosby: Why did [organizations] get more and more permissive on iPhones? Because they were actually pretty good with security. We see a lot of overly reactive stuff. ‘Let’s close everything down.’ That just isn’t the way forward, because ultimately users have to be productive and they’ll find a way around, and that’ll be a security loophole and the bad guy will find a way in again.