Tag Archives: organization

Benefits of virtualization highlighted in top 10 stories of 2019

When an organization decides to pursue a virtual desktop solution, a host of questions awaits it.

Our most popular virtual desktop articles this year highlight that fact and show how companies are still trying to get a handle on the virtual desktop infrastructure terrain. The stories explain the benefits of virtualization and provide comparisons between different enterprise options.

A countdown of our most-read articles, determined by page views, follows.

  10. Five burning questions about remote desktop USB redirection

Virtual desktops strive to mimic the traditional PC experience, but using local USB devices can create a sticking point. Remote desktop USB redirection enables users to attach their devices to their local desktop and have them function normally. In 2016, we explored options for redirection, explained how the technology worked and touched on problem areas, such as scanners, which are infamously problematic with redirection.

  9. Tips for VDI user profile management

Another key factor for virtualizing the local desktop experience includes managing things like a user’s browser bookmarks, desktop background and settings. That was the subject of this FAQ from 2013 and our ninth most popular story for 2019. The article outlines options for managing virtual desktop user profiles, from implementing identical profiles for everyone to ensuring that settings once saved locally carry over to the virtual workspace.

  8. VDI hardware comparison: Thin vs. thick vs. zero clients

The push toward centralizing computing services has created a market for thin and zero clients, simple and low-cost computing devices reliant on servers. In implementing VDI, IT professionals should consider the right option for their organization. Thick clients, the traditional PC, provide proven functionality, but they also sidestep some of the biggest benefits of virtualization such as lower cost, energy efficiency and increased security. Thin clients provide a mix of features, and their simplicity brings VDI’s assets, such as centralized management and ease of local deployment, to bear. Zero clients require even less configuration, as they have nothing stored locally, but they tend to be proprietary.

  7. How to troubleshoot remote and virtual desktop connection issues

Connection issues can disrupt employee workflow, so avoiding and resolving them is paramount for desktop administrators. Once the local hardware has been ruled out, there are a set of common issues — exceeded capacity, firewalls, SSL certificates and network-level authentication — that IT professionals can consider when solving the puzzle.

  6. Comparing converged vs. hyper-converged infrastructure

What’s the difference between converged infrastructure (CI) and hyper-converged infrastructure (HCI)? This 2015 missive took on that question in our sixth most popular story for 2019. In short, while CI houses four data center functions — computing, storage, networking and server virtualization — into a single chassis, HCI looks to add even more features through software. HCI’s flexibility and scalability were touted as advantages over the more hardware-focused CI.

  5. Differences between desktop and server virtualization

To help those seeking VDI deployment, this informational piece from 2014 focused on how desktop virtualization differs from server virtualization. Server virtualization partitions one server into many, enabling organizations to accomplish tasks like maintaining databases, sharing files and delivering media. Desktop virtualization, on the other hand, delivers a virtual computer environment to a user. While server virtualization is easier to predict, given its uniform daily functions, a virtual desktop user might call for any number of potential applications or tasks, making the distinction between the two key.

  4. Application virtualization comparison: XenApp vs. ThinApp vs. App-V

This 2013 comparison pitted Citrix, VMware and Microsoft’s virtualization services against each other to determine the best solution for streaming applications. Citrix’s XenApp drew plaudits for the breadth of the applications it supported, but its update schedule provided only a short window to migrate to newer versions. VMware ThinApp’s portability was an asset, as it did not need installed software or device drivers, but some administrators said the service was difficult to deploy and the lack of a centralized management platform made handling applications trickier. Microsoft’s App-V provided access to popular apps like Office, but its agent-based approach limited portability when compared to ThinApp.

  3. VDI shops mull XenDesktop vs. Horizon as competition continues

In summer 2018, we took a snapshot of the desktop virtualization market as power players Citrix and VMware vied for a greater share of users. At the time, Citrix’s product, XenDesktop, was used in 57.7% of on-premises VDI deployments, while VMware’s Horizon accounted for 26.9% of the market. Customers praised VMware’s forward-facing emphasis on cloud, while a focus on security drew others to Citrix. Industry watchers wondered if Citrix would maintain its dominance through XenDesktop 7.0’s end of life that year and if challenger VMware’s vision for the future would pay off.

  2. Compare the top vendors of thin client systems

Vendors vary in the types of thin client devices they offer and the scale they can accommodate. We compared offerings from Advantech, Asus, Centerm Information, Google, Dell, Fujitsu, HP, Igel Technology, LG Electronics, Lenovo, NComputing, Raspberry Pi, Samsung, Siemens and 10ZiG Technology to elucidate the differences between them, and the uses for which they might be best suited.

  1. Understanding nonpersistent vs. persistent VDI

This article from 2013 proved some questions have staying power. Our most popular story this year explained the difference between two types of desktops that can be deployed on VDI. Persistent VDI provides each user his or her own desktop, allowing more flexibility for users to control their workspaces but requiring more storage and heightening complexity. Nonpersistent VDI did not save settings once a user logged out, a boon for security and consistent updates, but less than ideal in providing easy access to needed apps.


NSS Labs drops antitrust suit against AMTSO, Symantec and ESET

NSS Labs ended its legal battle against the Anti-Malware Testing Standards Organization, Symantec and ESET.

The independent testing firm on Tuesday dropped its antitrust lawsuit, which it filed in 2018 against AMTSO (a nonprofit organization) and several top endpoint security vendors, including Symantec, ESET and CrowdStrike. The suit accused the vendors and AMTSO of conspiring to prevent NSS Labs from testing their products by boycotting the company.

In addition, NSS Labs accused the vendors of instituting restrictive licensing agreements that prevented the testing firm from legally purchasing products for public testing. The suit also alleged AMTSO adopted a draft standard that required independent firms like NSS Labs to give AMTSO vendor members advance notice of how their products would be tested, which NSS Labs argued was akin to giving vendors answers to the test before they took it.

In May, NSS Labs and CrowdStrike agreed to a confidential settlement that resolved the antitrust suit as well as other lawsuits between the two companies stemming from NSS Labs’ 2017 endpoint protection report that included negative test results for CrowdStrike’s Falcon platform. Under the settlement, NSS Labs retracted the test results, which the firm admitted were incomplete, and issued an apology to CrowdStrike.

In August, a U.S. District Court judge for the Northern District of California dismissed NSS Labs’ antitrust claims, ruling in part that NSS Labs failed to show how the alleged conspiracy damaged the market, which is required for antitrust claims. The judge also said NSS Labs’ complaint failed to show ESET and AMTSO participated in the alleged conspiracy (Symantec did not challenge the conspiracy allegations in the motion to dismiss). The ruling allowed the company to amend the complaint; instead, NSS Labs dropped its lawsuit.

Still, the testing firm had some harsh words in its statement announcing the dismissal of the suit. NSS Labs said vendors “were using a Draft Standard from the non-profit group to demonstrate their dissatisfaction with tests that revealed their underperforming products and associated weaknesses, which did not support their marketing claims.”

“During the past year, AMTSO has made progress to be more fair and balanced in its structure, vendors have shown progress in working with testing organizations, and the market itself has had significant change and notable acquisition activity,” NSS Labs CEO Jason Brvenik said in the statement. “It is said that sunshine is the best disinfectant, and that has been our experience here. We look forward to continued improvement in the security vendor behaviors.”

AMTSO sent the following statement to SearchSecurity:

“While AMTSO welcomes NSS Lab’s decision to dismiss, its actions were disruptive, expensive, and without merit,” said Ian McShane, an AMTSO Board member and senior director of security products at Elastic. “However, we agree with its statement that ‘sunshine is the best disinfectant,’ and we’re looking forward to NSS Labs re-joining AMTSO, and to its voluntary participation in standard-based testing. We believe this will give customers a greater assurance that the tests were conducted fairly.”

AMTSO did not comment on whether the organization has made any specific changes to its structure or policies in the wake of the antitrust suit.

NSS Labs changed its approach to testing results earlier this year with its 2019 Advanced Endpoint Protection Group Test, which redacted the names of vendors that received low scores and “caution” ratings. At RSA Conference 2019, Brvenik told SearchSecurity that NSS Labs decided to take a “promote, not demote” approach that focuses on the vendors that are doing well.


How to Configure Failover Clusters to Work with DR & Backup

As your organization grows, it is important to plan not only a high-availability solution to maintain service continuity, but also a disaster recovery solution in the event that the operations of your entire datacenter are compromised. High availability (HA) allows your applications or virtual machines (VMs) to stay online by moving them to other server nodes in your cluster. But what happens if your region experiences a power outage, hurricane or fire? What if your staff cannot safely access your datacenter? During times of crisis, your team will likely be focused on the well-being of their families and homes, and not particularly interested in the availability of their company’s services. This is why it is important not only to protect against local crashes but to be able to move your workloads between datacenters or clouds, using disaster recovery (DR). Because you will need access to your data in both locations, you will need to make sure that the data is replicated and consistent in both locations. The architecture of your DR solution will influence the replication solution you select.

Basic Architecture of a Multi-Site Failover Cluster

This three-part blog post will first look at the design decisions to create a resilient multi-site infrastructure, then in future posts the different types of replicated storage you can use from third parties, along with Microsoft’s DFS-Replication, Hyper-V Replica, and Azure Site Recovery (ASR), and backup best practices for each.

Probably the first design decision will be the physical location of your second site. In some cases, this may be your organization’s second office location, and you will not have any input. Sometimes you will be able to select the datacenter of a service provider that allows cohosting. When you do have a choice, first consider the risk of a single disaster affecting both locations. Make sure that the two sites are on separate power grids. Then consider what types of disasters your region is susceptible to, whether that is hurricanes, wildfires, earthquakes or even terrorist attacks. If your primary site is along a coastline, then consider finding an inland location. Ideally, you should select a location that is far enough away from your primary site to avoid a multi-site failure. Some organizations even select a site that is hundreds or thousands of miles away!

At first, selecting a cross-country location may sound like the best solution, but with added distance comes added latency.  If you wish to run different services from both sites (an active/active configuration), then be aware that the distance can cause performance issues as information needs to travel further across networks. If you decide to use synchronous replication, you may be limited to a few hundred miles or less to ensure that the data stays consistent.  For this reason, many organizations choose an active/passive configuration where the datacenter which is closer to the business or its customers will function as the primary site, and the secondary datacenter remains dormant until it is needed. This solution is easier to manage, yet more expensive as you have duplicate hardware which is mostly unused. Some organizations will use a third (or more) site to provide greater resiliency, but this adds more complexity when it comes to backup, replication and cluster membership (quorum).

Now that you have picked your sites, you should determine the optimal number of cluster nodes in each location. You should always have at least two nodes at each site so that if a host crashes, its workloads can fail over within the primary site before going to the DR site, minimizing downtime. You can configure local failover first through the cluster’s Preferred Owner setting, as shown in the sketch below. The more nodes you have at each site, the more local failures you can sustain before moving to the secondary site.
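
For reference, here is a minimal PowerShell sketch of configuring preferred owners with the FailoverClusters module; the role and node names are placeholders for your own environment.

# List the clustered roles (groups) and their current owners
Get-ClusterGroup

# Prefer the two nodes in the primary site for the "SQL Server" role,
# so local failover is attempted before a cross-site move
Set-ClusterOwnerNode -Group "SQL Server" -Owners "SiteA-Node1", "SiteA-Node2"

# Verify the preferred owner list for the role
Get-ClusterOwnerNode -Group "SQL Server"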

Use Local Failover First before Cross-Site Failover

It is also recommended that you have the same number of nodes at each site, ideally with identical hardware configurations.  This means that the performance of applications should be fairly consistent in both locations and it should reduce your maintenance costs.  Some organizations will allocate older hardware to their secondary site, which is still supported, but the workloads will be slower until they return to the primary site.  With this type of configuration, you should also configure automatic failback so that the workloads are restored to the faster primary site once it is healthy.
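
As a rough sketch (again with placeholder names), automatic failback can be enabled per clustered role through the group’s failback properties in PowerShell:

# Allow the "SQL Server" role to fail back to its preferred owners
(Get-ClusterGroup "SQL Server").AutoFailbackType = 1
# Optionally restrict failback to a quiet window (1 a.m. to 4 a.m.)
(Get-ClusterGroup "SQL Server").FailbackWindowStart = 1
(Get-ClusterGroup "SQL Server").FailbackWindowEnd = 4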

If you have enough hardware, then a best practice is to deploy at least three nodes at each site so that if you lose a single node and have local failover there will be less of a performance impact.  In the event that you lose one of your sites in a genuine disaster for an extended period of time, you can then evict all the nodes from that site, and still have a 3-node cluster running in a single site.  In this scenario, having a minimum of three nodes is important so that you can sustain the loss of one node while keeping the rest of the cluster online by maintaining its quorum.

If you are an experienced cluster administrator, you probably identified the problem with having two sites with an identical number of nodes – maintaining cluster quorum. Quorum is the cluster’s membership algorithm to ensure that there is exactly one owner of each clustered workload. This is used to avoid a “split-brain” scenario when there is a partition between two sets of cluster nodes (such as between two sites), and two hosts independently run the same application, causing data inconsistency during replication. Quorum works by giving each cluster node a vote, and a majority (more than half) of voters must be in communication with each other to run all of the workloads. So how is this possible with the recommendation of two balanced sites with three nodes each (6 total votes)?

The most common solution is to have an extra vote in a third site (7 total votes).  So long as either the primary or secondary site can communicate with that voter in the third site, that group of nodes will have a majority of votes and operate all the workloads.  For those who do not have the luxury of the third site, Microsoft allows you to place this vote inside the Microsoft Azure cloud, using a Cloud Witness Disk.  For a detailed understanding of this scenario, check out this Altaro blog about Understanding File Share Cloud Witness and Failover Clustering Quorum in the Microsoft Azure Cloud.
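
As a hedged example, either witness type can be configured with the Set-ClusterQuorum cmdlet; the storage account name, access key and file share path below are placeholders.

# Option 1: use an Azure storage account as a cloud witness (Windows Server 2016 and later)
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-access-key>"
# Option 2: use a file share in a third site as the witness
Set-ClusterQuorum -FileShareWitness "\\witness-server\ClusterWitness"
# Review the resulting quorum configuration
Get-ClusterQuorum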

Use a File Share Cloud Witness to Maintain Quorum

If you are familiar with designing a traditional Windows Server Failover Cluster, you know that redundancy of every hardware and software component is critical to eliminate any single point of failure.  With a disaster recovery solution, this concept is extended by also providing redundancy to your datacenters, including the servers, storage, and networks.  Between each site, you should have multiple redundant networks for cross-site communications.

You will next configure your shared storage at each site and set up cross-site replication between the disks, using either a third-party replication solution such as Altaro VM Backup, Microsoft’s Hyper-V Replica or Azure Site Recovery. These configurations will be covered in the subsequent blog posts in this series. Finally, make sure that the entire multi-site cluster, including the replicated storage, does not fail any of the Cluster Validation Wizard tests.
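
A minimal sketch of running those validation tests from PowerShell follows; the node names are placeholders, and the storage tests are best run in a maintenance window because they briefly take the disks offline.

# Validate the entire existing cluster
Test-Cluster
# Or validate specific nodes and only the storage tests before adding replicated disks
Test-Cluster -Node "SiteA-Node1", "SiteB-Node1" -Include "Storage"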

Again, we’ll be covering more regarding this topic in future blog posts, so keep an eye out for them! Additionally, have you worked through multi-site failover planning for a failover cluster before? What things went well? What were the troubles you ran into? We’d love to know in the comments section below!


Author: Symon Perriman

Vote of confidence: Politico Europe makes polling data visual to give readers a better election view | Transform

In an era rife with hacked campaigns, bots and election interference, one news organization has returned to an age-old maxim: Every vote truly does count. And they’re using data to prove it to readers.

Politico Europe, a joint venture between the American media organization Politico and German publisher Axel Springer, unveiled an election-coverage hub ahead of the 2019 European parliamentary votes held in May. It gives citizens a deeper look at the democratic process and allows them to connect their top issues with candidates in the field.

Now called Poll of Polls, the platform offers interactive data visualizations built with Microsoft Power BI. It also provides political news stories and analyses of votes cast during elections within each of the 28 member states of the European Union.

The European Parliament building. (Getty Images)

Launched in collaboration with Microsoft, the site aims to show readers how their individual votes can affect political outcomes on a continental level.

“To illustrate that, we have charts showing how many votes it would take to switch an MEP (Member of European Parliament),” says Etienne Bauvir, director of business intelligence and technology at Politico Europe.

“In countries where turnout is notoriously low, like some Eastern European countries, it didn’t take many votes in May to shift an MEP and to have her lose a seat or win a seat. That’s one thing we wanted to make evident to readers – the impact of one vote can be big in some countries,” he says.

Case in point: Romania, where the Social Democrats earned 22.5% of the vote – causing the party to lose six parliament seats – while Renew Europe collected 22.4% of the vote – causing that party to gain seven seats. European parliamentary elections are held every five years.

Politico Europe’s new hub provides one page for each country, enabling readers to drill further into more precise 2019 election results, such as how pro-European Union candidates fared in France against skeptics of the EU. (Pro-EU MEPs in France currently outnumber EU skeptics 48 to 28.)

Etienne Bauvir.

Poll of Polls also tracks fresh polling data in each country, offering projections of 2020 votes in individual nations. That helps readers better understand some of the complexities of European politics, including the power of ideological groups.

“In the projections, we can be reactive and proactive in our data analyses,” Bauvir says. “In the European election process, many national parties come together to form groups in the European Parliament. It’s often not clear which party will form which group. We can offer an accurate picture of that reality.”

Think U.S. elections are confusing? Elections to the European Parliament can span thousands of candidates representing hundreds of parties across 28 nations.

With Power BI, visitors to the Politico hub can use interactive features to maneuver polling or election data in ways that help them digest election night results or votes yet to come – both in national parliaments and for the entire European Parliament.

Hanna Pawelec.

For example, one Politico visualization shows a graph of polling data in the United Kingdom, where citizens will elect their new parliament Dec. 12. By moving a cursor left and right, readers can view how the Conservative, Labour and Liberal Democrat parties have performed in polling each day from late 2018 to present.

“In the spring, we also had visualizations showing forecasts of how the future European Parliament will look. We could update those visualizations just by changing one data file,” says Hanna Pawelec, a Politico Europe data analyst. “By quickly updating those visualizations, we were one of the first newsrooms to show more in-depth analysis.”

“Power BI is easy enough for a citizen journalist to create a simple interactive with little training, but powerful enough for a seasoned data scientist to do complex analysis across multiple datasets,” says Ben Rudolph, senior director of Microsoft News Labs. “It’s the definition of democratized technology.”

Microsoft News Labs represents the company’s global effort to help journalists and journalism succeed by augmenting human creativity with innovative AI and content-creation technology.

Rudolph’s team began collaborating with Politico Europe after learning that the news organization wanted its audience to understand how the vote in one country could re-shape the entire European political landscape.

The two groups met in Brussels, Belgium (where Politico Europe is based) to discuss solutions that would help readers and viewers better engage with the news organization’s election coverage.

“The challenge wasn’t just wrangling the complexity of 242 parties competing for 705 seats in the European Parliament, but creating an experience that was at once compelling and transparent,” says Vera Chan, Microsoft senior manager for worldwide journalist relations.

Readers flocked to the news site. During the final stages of the European elections in May, Politico Europe’s traffic hit an all-time high with a nearly 30% increase compared to traffic measured one year earlier.

“This election,” Bauvir says, “was the moment to really widen our readership to the average citizen throughout Europe. We’ve now succeeded in retaining much of the additional audience we engaged. That’s another big success due in part to this hub and those visualizations.”

The Politico Europe newsroom in Brussels.

In the months since the election, traffic to Politico Europe remains on average 24% higher compared to the same period in 2018.

Politico Europe is now examining ways to expand the platform, focusing again on Europe, says Natasha Bernard, communications coordinator at Politico Europe.

“Data journalism with Power BI can play a unique role building audience trust,” Rudolph says. “Not only does an interactive visual give readers deep insight into a story, it also gives them unprecedented access to the data behind that insight.

“It’s a completely transparent means of storytelling,” he adds. “We think this will be increasingly important for outlets of all sizes as we approach the 2020 election cycle.”

Images courtesy of Politico Europe.

Author: Steve Clarke

Set up PowerShell script block logging for added security

PowerShell is an incredibly comprehensive and easy-to-use language, but administrators need to protect their organization from bad actors who use PowerShell for criminal purposes.

PowerShell’s extensive capabilities as a native tool in Windows make it tempting for an attacker to exploit the language. Increasingly, malicious software and bad actors are using PowerShell to either glue together different attack methods or run exploits entirely through PowerShell.

There are many methods and security best practices available to secure PowerShell, but one of the most valued is PowerShell script block logging. Script blocks are collections of statements or expressions used as a single unit; in PowerShell, they are denoted by everything inside the curly brackets.

Available starting in Windows PowerShell v4.0 and significantly enhanced in Windows PowerShell v5.0, script block logging produces an audit trail of executed code. Windows PowerShell v5.0 introduced a logging engine that automatically decodes code that has been obfuscated with methods such as XOR, Base64 and ROT13, and it records the original obfuscated code alongside it for comparison.

PowerShell script block logging helps with the postmortem analysis of events to give additional insights if a breach occurs. It also helps IT be more proactive with monitoring for malicious events. For example, if you set up Event Subscriptions in Windows, you can send events of interest to a centralized server for a closer look.

Set up a Windows system for logging

Two primary ways to configure script block logging on a Windows system are by either setting a registry value directly or by specifying the appropriate settings in a group policy object.

To configure script block logging via the registry, use the following code while logged in as an administrator:

New-Item -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Force
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Name "EnableScriptBlockLogging" -Value 1 -Force

You can set PowerShell logging settings within group policy, either on the local machine or through organizationwide policies.

Open the Local Group Policy Editor and navigate to Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging.
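
If you manage domain-joined machines, the same policy can also be set in a domain GPO from PowerShell. This is only a sketch: it assumes the GroupPolicy module from RSAT is available, and the GPO name and OU are hypothetical.

# Create a GPO and set the script block logging policy value
New-GPO -Name "PowerShell Script Block Logging"
Set-GPRegistryValue -Name "PowerShell Script Block Logging" -Key "HKLM\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -ValueName "EnableScriptBlockLogging" -Type DWord -Value 1
# Link the GPO to an organizational unit (distinguished name is a placeholder)
New-GPLink -Name "PowerShell Script Block Logging" -Target "OU=Workstations,DC=example,DC=com"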

Set up PowerShell script block logging from the Local Group Policy Editor in Windows.

When you enable script block logging, the editor unlocks an additional option to log events via “Log script block invocation start / stop events” when a command, script block, function or script starts and stops. This helps trace when an event happened, especially for long-running background scripts. This option generates a substantial amount of additional data in your logs.

PowerShell script block logging tracks executed scripts and commands run on the command line.

How to configure script block logging on non-Windows systems

PowerShell Core is the cross-platform version of PowerShell for use on Windows, Linux and macOS. To use script block logging on PowerShell Core, you define the configuration in the powershell.config.json file in the $PSHome directory, which is unique to each PowerShell installation.

From a PowerShell session, navigate to $PSHome and use the Get-ChildItem command to see if the powershell.config.json file exists. If not, create the file with this command:

sudo touch powershell.config.json

Modify the file using a tool such as the nano text editor and paste in the following configuration.

{
"PowerShellPolicies": {
"ScriptBlockLogging": {
"EnableScriptBlockInvocationLogging": false,
"EnableScriptBlockLogging": true
}
},
"LogLevel": "verbose"
}

Test PowerShell script block logging

Testing the configuration is easy. From the command line, run the following:

PS /> { "log me!" }
"log me!"

Checking the logs on Windows

How do you know what entries to watch for? The main event ID is 4104, the ScriptBlockLogging entry that includes the user and domain, the logged date and time, the computer host and the script block text.

Open Event Viewer and navigate to the following log location: Applications and Services Logs > Microsoft > Windows > PowerShell > Operational.

Click on events until you find the one from the test that is listed as Event ID 4104. Filter the log for this event to make the search quicker.
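
The same search can be run from PowerShell itself. A minimal sketch, assuming the default Operational log for Windows PowerShell 5.x:

# Pull the most recent script block logging events (ID 4104) from the Operational log
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-PowerShell/Operational'; Id = 4104 } -MaxEvents 10 |
    Select-Object TimeCreated, Id, Message |
    Format-List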

Event 4104 in the Windows Event Viewer details PowerShell activity on a Windows machine.

On PowerShell Core on Windows, the log location is: Applications and Services Logs > PowerShellCore > Operational.

Log location on non-Windows systems

On Linux, PowerShell script block logging will log to syslog. The location will vary based on the distribution. For this tutorial, we use Ubuntu, which keeps syslog at /var/log/syslog.

Run the following command to show the log entry; you must elevate with sudo in this example and on most typical systems:

sudo cat /var/log/syslog | grep "log me"

2019-08-20T19:40:08.070328-05:00 localhost powershell[9610]: (6.2.2:9:80) [ScriptBlock_Compile_Detail:ExecuteCommand.Create.Verbose] Creating Scriptblock text (1 of 1):#012{ "log me!" }#012#012ScriptBlock ID: 4d8d3cb4-a5ef-48aa-8339-38eea05c892b#012Path:

To set up a centralized server on Linux, things are a bit different since you’re using syslog by default. You can use rsyslog to ship your logs to a log aggregation service to track PowerShell activity from a central location.


Microsoft events — the year ahead – The Official Microsoft Blog

Empowering every person and every organization on the planet to achieve more is a 7 billion-person mission that we don’t take lightly. None of us at Microsoft could ever hope to reach that objective without a vast set of partnerships with curious and passionate people who seek to deeply understand technology and its power to transform individuals, businesses and industries. Facilitating connections, sharing our technologies and partnering to create solutions to real-world challenges is why we create the many Microsoft event experiences we host around the world.

Microsoft event experiences are designed to benefit specific audiences and structured to support clear objectives. We’re committed to closely aligning with all our partners, customers, and business and IT decision makers and connecting you with peers and industry leaders. To find out more about each event, visit our event website for details. Or, if you’re looking for a quick description of each event, read below to get a snapshot of our upcoming events.

Flagship events
IT professionals and developers
Microsoft Ignite — For IT professionals, decision makers, implementors, architects, developers and data professionals. This event provides opportunities to explore the latest tools, receive deep technical training and get specific questions answered by Microsoft experts. With more than 26,000 attendees who join to learn, connect and explore what Microsoft has to offer, this truly is the place where reality meets imagination. Orlando, Florida | Nov. 4-8, 2019

Developers
Microsoft Build — Where leading architects, developers, start-ups and student developers converge to focus on the latest tech trends and innovate for the future. We maintain our “produced by developers and for developers” mantra while inviting the next generation of developers to participate in the student zone. Seattle, Washington | May 19-21, 2020

Microsoft partners
Microsoft Business Applications Summit — An annual opportunity to bring together a community of Microsoft customers and partners in roles that include power users, business analysts, evangelists, implementers and technical architects. This event provides a forum to learn how Microsoft’s end-to-end Dynamics 365 and Power Platform can create and extend solutions to drive business success. Anaheim, California | April 20-21, 2020

Microsoft Inspire — Where Microsoft partners meet to connect and celebrate as one community at the close of Microsoft’s fiscal year. With hundreds of thousands of partners across the world, our partner ecosystem is stronger and more united than ever. We invite you to learn more about how Microsoft leaders are supporting our partners, and how partners can capitalize on the opportunities ahead. We’ve co-located our Microsoft sales kick-off event to build on our shared partnership philosophy. Las Vegas, Nevada | July 20-24, 2020

Regional tours

We started our regional tours for attendee convenience and to gauge how digital transformation is happening around the world. They’ve been a success on both fronts. This year we’re expanding to 30 markets for Microsoft Ignite The Tour and starting Microsoft Envision | The Tour in seven cities. Check out one of the stops on our regional tours in a city near you.

IT professionals and developers
Microsoft Ignite The Tour — We are bringing the best of Microsoft Ignite to you by traveling to 30 cities around the world for both ease of access and for the robust localized content for these distinct markets. Join us for in-depth learning and experiences in a free, two-day format that allows IT professionals and developers to learn new ways to build solutions, migrate, and manage infrastructure and connect with local industry leaders and peers. Visit Microsoft Ignite The Tour for locations and dates.

Business decision makers
Microsoft Envision | The Tour — An invitation-only, single-day event held in multiple cities around the world. With a global focus, this summit allows members of the C-suite to focus on challenges and trends that are changing the way organizations do business. Taking inspiration from our CEO Summit, this conference is designed to give leaders a chance to step back and learn about smart strategies to tackle emerging issues, power new efficiencies and build new business models and revenue streams. Visit Microsoft Envision | The Tour for locations and dates.

Digital learning

For those unable to make it in person or who are looking to quickly skill up on a particular topic, we offer digital learning options. Watch training sessions and event keynote sessions at any time. View multiple modules or choose a learning path tailored to today’s developer and technology masterminds that are designed to prepare you for industry-recognized Microsoft certifications.

Additional events

We’re just scratching the surface of the full picture of events that Microsoft has to offer. If you don’t find what you are looking for here, visit our full global events catalog for a list of events in your region and possibly your own city. These are events that are organized around specific product offerings and located in easily accessible locations with a wide range of class levels offered.

We invite everyone to join us to learn and grow, join us to connect with your peers, join us to get the answers you need so that you can deliver the solutions that can help propel your digital transformation. Visit our events website for flagship and regional events, and we look forward to seeing you in the year ahead.


Author: Microsoft News Center

Beyond overhead: What drives donor support in the digital era – Microsoft on the Issues

One of the greatest challenges to running a successful nonprofit organization has always been that donors look at nonprofits’ stewardship of funds as a primary way to assess impact. While there is no doubt that nonprofits must use donor funds responsibly, tracking whether a nonprofit maintains the highest possible ratio of spending on programs to spending on overhead is a poor proxy for understanding how effective a nonprofit truly is. In fact, the imperative to limit overhead has forced many organizations to underinvest in efforts to improve efficiency. Ironically, this has long prevented nonprofits from utilizing innovative digital technologies that could help them be more efficient and effective.

Now more than ever, cloud-based technology can have a transformative effect on how nonprofit organizations increase impact and reduce costs. The same technologies that give for-profit businesses insights about customers and markets, create operational efficiencies and speed up innovation can also help nonprofits target donors and raise funds more strategically, design and deliver programming more efficiently, and connect field teams with headquarters more effectively. This means smart investments in digital tools are essential to every nonprofit’s ability to make progress toward its mission.

The good news is that a major shift is underway. As part of our work at Microsoft Tech for Social Impact to understand how nonprofits can use technology to drive progress and demonstrate impact, we recently surveyed 2,200 donors, volunteers and funding decision-makers to learn how they decide which organizations to support, what their expectations are for efficiency and effectiveness, and how they feel about funding technology infrastructure at the nonprofits they support.

The results, which we published recently in the white paper “Beyond overhead: Donor expectations for driving impact with technology,” make clear that people donate to organizations they trust and that donors are increasingly looking at data beyond the ratio of program spending to overhead spending to measure impact. We also found that those who support nonprofits now overwhelmingly recognize the critical role technology plays in driving impact and delivering value. Nearly four out of five supporters (a group that includes both donors and volunteers) and more than nine out of 10 funding decision-makers told us they support directing donations to improve technology at a nonprofit. An overwhelming majority — 85 percent of supporters and 95 percent of funding decision-makers — are more likely to contribute to organizations that can show they are using technology to improve how they run programs.

At the same time, the survey found that most people expect organizations to use donations more efficiently and to advance the causes they work for more effectively than in the past. Among supporters, for example, 79 percent believe nonprofits should be better at maximizing funding than they were 10 years ago. Just over 80 percent of funding decision-makers believe nonprofits should be more effective at achieving their goals and advancing the causes they work for now than in the past.

To give you a better sense of what potential donors are looking for as they consider where to target their nonprofit contributions and how much they weigh technology into their thinking, we have developed a tool using Power BI so you can look at the data in greater detail. Within the tool, you can see how people responded to questions about overall effectiveness and efficiency, the importance of technology as a driver of success, how likely they are to support organizations that use technology to demonstrate impact, and their willingness to fund technology improvements at the nonprofits they support.

To make the tool as useful as possible for your organization, you can sort the data by supporters and funding decision-makers, and you can explore how responses varied by region. As you move through the data, you will see how these critical groups of supporters and funders think about these important questions in the region where your organization operates:

The ultimate goal of this survey was to get a clearer picture of what motivates people to contribute to an organization and how technology can help nonprofits meet supporters’ expectations. Overall, I believe our research provides some important insights that can help any organization be more successful. Fundamentally, we found that people donate to organizations that are perceived to be trustworthy, and that trust is achieved through operational transparency and effective communications. More than ever before, donors recognize that using data to measure and demonstrate impact is the foundation for trust.

I encourage you to read the full report and learn more about Microsoft’s commitment to support nonprofits.

Author: Microsoft News Center

How to deal with the on-premises vs. cloud challenge

For some administrators, the cloud is not a novelty. It’s critical to their organization. Then, there’s you, the lone on-premises holdout.

With all the hype about cloud and Microsoft’s strong push to get IT to use Azure for services and workloads, it might seem like you are the only one in favor of remaining in the data center in the great on-premises vs. cloud debate. The truth is the cloud isn’t meant for everything. While it’s difficult to find a workload not supported by the cloud, that doesn’t mean everything needs to move there.

Few people like change, and a move to the cloud is a big adjustment. You can’t stop your primary vendors from switching their allegiance to the cloud, so you will need to be flexible to face this new reality. Take a look at your options as more vendors shift their focus away from the data center and on-premises management.

Is the cloud a good fit for your organization?

The question is: Should it be done? All too often, it’s a matter of money. For example, it’s possible to take a large-capacity file server in the hundreds of terabytes and place it in Azure. Microsoft’s cloud can easily support this workload, but can your wallet?

Once you get over the sticker shock, think about it. If you’re storing frequently used data, it might make business sense to put that file server in Azure. However, if this is a traditional file server with mostly stale data, then is it really worth the price tag as opposed to using on-premises hardware?

Azure file server
When you run the numbers on what it takes to put a file server in Azure, the costs can add up.

Part of the on-premises vs. cloud dilemma is that you have to weigh the financial costs as well as the tangible benefits and drawbacks. The people factor is also part of the calculation when determining what makes sense as an operational expense rather than a capital expense. Too often, admins find themselves in a situation where management sees one side of this formula and wants to make that cloud leap, while the admins must look at the reality and explain both the pros and cons — the latter of which no one wants to hear.

The cloud question also goes deeper than the Capex vs. Opex argument for the admins. With so much focus on the cloud, what happens to those environments that simply don’t or can’t move? It’s not only a question of what this means today, but also what’s in store for them tomorrow.

As vendors move on, the walls close in

With the focus for most software vendors on cloud and cloud-related technology, the move away from the data center should be a warning sign for admins that can’t move to the cloud. The applications and tools you use will change to focus on the organizations working in the cloud with less development on features that would benefit the on-premises data center.

One of the most critical aspects of this shift will be your monitoring tools. As cloud gains prominence, it will get harder to find tools that continue to support local Windows Server installations over cloud-based ones. We already see this trend with log aggregation tools that used to be available as on-site installs and are now almost all SaaS-based offerings. This is just the start.

If a tool moves from on premises to the cloud but retains the ability to monitor data center resources, that is an important distinction to remember. That means you might have a workable option to keep production workloads on the ground and work with the cloud as needed or as your tools make that transition.

As time goes on, an evaluation process might be in order. If your familiar tools are moving to the cloud without support for on-premises workloads, the options might be limited. Should you pick up new tools and then invest the time to install and train the staff how to use them? It can be done, but do you really want to?

While not ideal, another viable option is to take no action; the install you have works, and as long as you don’t upgrade, everything will be fine. The problem with remaining static is getting left behind. The base OSes will change, and the applications will get updated. But, if your tools can no longer monitor them, what good are they? You also introduce a significant security risk when you don’t update software. Staying put isn’t a good long-term strategy.

With the cloud migration will come other choices

The same challenges you face with your tools also apply to your traditional on-premises applications. Longtime stalwarts, such as Exchange Server, still offer a local installation, but it’s clear that Microsoft’s focus for messaging and collaboration is its Office 365 suite.

The harsh reality is more software vendors will continue on the cloud path, which they see as the new profit centers. Offerings for on-premises applications will continue to dwindle. However, there is some hope. As the larger vendors move to the cloud, it opens up an opportunity in the market for third-party tools and applications that might not have been on your radar until now. These products might not be as feature-rich as an offering from the larger vendors, but they might tick most of the checkboxes for your requirements.


Amazon, Intel, NBCUniversal spill buying secrets at HR Tech 2018

LAS VEGAS — Amazon’s talent acquisition organization has more than 3,500 people, including 2,000 recruiters, and is very interested in testing out new technology. That is probably welcome news to vendors here at HR Tech 2018. But Amazon and other big HR technology users warned against being dazzled by vendors’ products and recommended following a disciplined and tough evaluation process.

“I think it’s important to stay abreast with what’s happening in the market,” said Kelly Cartwright, the head of recruiting transformation at Amazon. “I’m really, really passionate about doing experiments and pilots and seeing whether or not something can work,” she said, speaking on a talent acquisition technology panel at HR Tech 2018.

It’s important to “block out time and take those [vendor] calls and listen to what those vendors have to say because one of them actually might have a solution for you that can be a game changer,” Cartwright said.

A warning about new HR tech

But Cartwright also had a clear warning for attendees at the HR Tech 2018. It won’t help to make the investment in a new technology until “you really clarify” what it is you want to use it for, she said.

What has to happen first in investigating HR trends and new technologies is to “start with a clear problem that you’re trying to solve for,” Cartwright said. She illustrated her point with example questions: Is the problem improving diversity in the pipeline? Or is it ensuring that there are enough potential candidates visiting your recruiting website?

Endorsing this approach was Gail Blum, manager of talent acquisition operations at NBCUniversal, who appeared with Cartwright on the panel.

Blum said NBCUniversal may not always have the budget for a particular new HR technology, but vendors increasingly are offering free pilots. Companies can choose to take a particular problem “and see if that new tool or vendor has the ability to solve that,” she said.

New HR tech is in abundance at the 2018 HR Technology Conference & Expo

New tech that doesn’t integrate is next to useless

Critical to any new HR technology is its ability to integrate with existing talent systems, such as an applicant tracking system, Blum said. She wants to know: Will the system have a separate log-in? “That’s always something that we ask upfront with all of these vendors.”

“If you are requiring everyone to have to go to two different systems, the usage probably isn’t going to be great,” said Blum, adding that this was their experience with some previous rollouts. If the systems don’t integrate, a new technology addition “isn’t really going to solve your problem in the end,” she said.

There was no disagreement on this panel at HR Tech 2018 about the need to be rigorous with vendors to avoid being taken in by a shiny new technology.

If Intel is going to partner with a talent vendor “it’s a long-term play,” said Allyn Bailey, talent acquisition capability adoption transformation leader at the chipmaker.

“We ask really invasive questions of the vendors,” Bailey said. “The vendors really hate it when we do it,” she said.

But Bailey said they will probe a vendor’s stability, their financing and whether they are positioning themselves to gather some big-name customers and then sell the business. “That freaks me out because my investment with that vendor is around that partnership to build a very customized solution to meet my needs,” she said. 

TechTarget, the publisher of SearchHRSoftware, is a media partner for HR Tech 2018.

Helping customers shift to a modern desktop – Microsoft 365 Blog

IT is complex. And that means it can be difficult to keep up with the day-to-day demands of your organization, let alone deliver technological innovation that drives the business forward. In desktop management, this is especially true: the process of creating standard images, deploying devices, testing updates, and providing end user support hasn’t changed much in years. It can be tedious, manual, and time consuming. We’re determined to change that with our vision for a modern desktop powered by Windows 10 and Office 365 ProPlus. A modern desktop not only offers end users the most productive, most secure computing experience—it also saves IT time and money so you can focus on driving business results.

Today, we’re pleased to make three announcements that help you make the shift to a modern desktop:

  • Cloud-based analytics tools to make modern desktop deployment even easier.
  • A program to ensure app compatibility for upgrades and updates of Windows and Office.
  • Servicing and support changes to give you additional deployment flexibility.

Analytics to make modern desktop deployment easier

Collectively, you’ve told us that one of your biggest upgrade and update challenges is application testing. A critical part of any desktop deployment plan is analysis of existing applications—and the process of testing apps and remediating issues has historically been very manual and very time consuming. Microsoft 365 offers incredible tools today to help customers shift to a modern desktop, including System Center Configuration Manager, Microsoft Intune, Windows Analytics, and Office Readiness Toolkit. But we’ve felt like there’s even more we could do.

Today, we’re announcing that Windows Analytics is being expanded to Desktop Analytics—a new cloud-based service integrated with ConfigMgr and designed to create an inventory of apps running in the organization, assess app compatibility with the latest feature updates of Windows 10 and Office 365 ProPlus, and create pilot groups that represent the entire application and driver estate across a minimal set of devices.

The new Desktop Analytics service will provide insight and intelligence for you to make more informed decisions about the update readiness of your Windows and Office clients. You can then optimize pilot and production deployments with ConfigMgr. Combining data from your own organization with data aggregated from millions of devices connected to our cloud services, you can take the guesswork out of testing and focus your attention on key blockers. We’ll share more information about Desktop Analytics and other modern desktop deployment tools at Ignite.

Standing behind our app compatibility promise

We’re also pleased to announce Desktop App Assure—a new service from Microsoft FastTrack designed to address issues with Windows 10 and Office 365 ProPlus app compatibility. Windows 10 is the most compatible Windows operating system ever, and using millions of data points from customer diagnostic data and the Windows Insider validation process, we’ve found that 99 percent of apps are compatible with new Windows updates. So you should generally expect that apps that work on Windows 7 will continue to work on Windows 10 and subsequent feature updates. But if you find any app compatibility issues after a Windows 10 or Office 365 ProPlus update, Desktop App Assure is designed to help you get a fix. Simply let us know by filing a ticket through FastTrack, and a Microsoft engineer will follow up to work with you until the issue is resolved. In short, Desktop App Assure operationalizes our Windows 10 and Office 365 ProPlus compatibility promise: We’ve got your back on app compatibility and are committed to removing it entirely as a blocker.

Desktop App Assure will be offered at no additional cost to Windows 10 Enterprise and Windows 10 Education customers. We’ll share more details on this new service at Ignite and will begin to preview this service in North America on October 1, 2018, with worldwide availability by February 1, 2019.

Servicing and support flexibility

Longer Windows 10 servicing for enterprises and educational institutions
In April 2017, we aligned the Windows 10 and Office 365 ProPlus update cadence to a predictable semi-annual schedule, targeting September and March. While many customers—including Mars and Accenture—have shifted to a modern desktop and are using the semi-annual channel to take updates regularly with great success, we’ve also heard feedback from some of you that you need more time and flexibility in the Windows 10 update cycle.

Based on that feedback, we’re announcing four changes:

  • All currently supported feature updates of Windows 10 Enterprise and Education editions (versions 1607, 1703, 1709, and 1803) will be supported for 30 months from their original release date. This will give customers on those versions more time for change management as they move to a faster update cycle.
  • All future feature updates of Windows 10 Enterprise and Education editions with a targeted release month of September (starting with 1809) will be supported for 30 months from their release date. This will give customers with longer deployment cycles the time they need to plan, test, and deploy.
  • All future feature updates of Windows 10 Enterprise and Education editions with a targeted release month of March (starting with 1903) will continue to be supported for 18 months from their release date. This maintains the semi-annual update cadence as our north star and retains the option for customers that want to update twice a year.
  • All feature releases of Windows 10 Home, Windows 10 Pro, and Office 365 ProPlus will continue to be supported for 18 months (this applies to feature updates targeting both March and September).

In summary, the new modern desktop support policies outlined above take effect starting in September 2018.

Windows 7 Extended Security Updates
As previously announced, Windows 7 extended support is ending January 14, 2020. While many of you are already well on your way in deploying Windows 10, we understand that everyone is at a different point in the upgrade process.

With that in mind, today we are announcing that we will offer paid Windows 7 Extended Security Updates (ESU) through January 2023. The Windows 7 ESU will be sold on a per-device basis and the price will increase each year. Windows 7 ESUs will be available to all Windows 7 Professional and Windows 7 Enterprise customers in Volume Licensing, with a discount to customers with Windows software assurance, Windows 10 Enterprise or Windows 10 Education subscriptions. In addition, Office 365 ProPlus will be supported on devices with active Windows 7 Extended Security Updates (ESU) through January 2023. This means that customers who purchase the Windows 7 ESU will be able to continue to run Office 365 ProPlus.

Please reach out to your partner or Microsoft account team for further details.

Support for Office 365 ProPlus on Windows 8.1 and Windows Server 2016
Office 365 ProPlus delivers cloud-connected and always up-to-date versions of the Office desktop apps. To support customers already on Office 365 ProPlus through their operating system transitions, we are updating the Windows system requirements for Office 365 ProPlus and revising some announcements that were made in February. We are pleased to announce the following updates to our Office 365 ProPlus system requirements:

  • Office 365 ProPlus will continue to be supported on Windows 8.1 through January 2023, which is the end of support date for Windows 8.1.
  • Office 365 ProPlus will also continue to be supported on Windows Server 2016 until October 2025.

Office 2016 connectivity support for Office 365 services
In addition, we are modifying the Office 365 services system requirements related to service connectivity. In February, we announced that starting October 13, 2020, customers will need Office 365 ProPlus or Office 2019 clients in mainstream support to connect to Office 365 services. To give you more time to transition fully to the cloud, we are now modifying that policy and will continue to support Office 2016 connections with the Office 365 services through October 2023.

Shift to a modern desktop

You’ve been talking, and we’ve been listening. Specifically, we’ve heard your feedback on desktop deployment, and we’re working hard to introduce new capabilities, services, and policies to help you on your way. The combination of Windows 10 and Office 365 ProPlus delivers the most productive, most secure end user computing experience available. But we recognize that it takes time to both upgrade devices and operationalize new update processes. Today’s announcements are designed to respond to your feedback and make it easier, faster, and cheaper to deploy a modern desktop. We know that there is still a lot of work to do. But we’re committed to working with you and systematically resolving any issues. We’d love to hear your thoughts and look forward to seeing you and discussing in more detail in the keynotes and sessions at Ignite in a few weeks!