
ConnectWise threat intelligence sharing platform changes hands

Nonprofit IT trade organization CompTIA said it will assume management and operations of the Technology Solution Provider Information Sharing and Analysis Organization established by ConnectWise in August 2019.

Consultant and long-time CompTIA member MJ Shoer will remain as the TSP-ISAO’s executive director under the new arrangement. The TSP-ISAO retains its primary mission of fostering real-time threat intelligence sharing among channel partners, CompTIA said.

MJ Shoer

Nancy Hammervik, CompTIA’s executive vice president of industry relations, discussed CompTIA’s TSP-ISAO leadership role with Shoer during the CompTIA Communities and Councils Forum event this week. CompTIA conducted the event virtually after cancelling its Chicago in-person event due to the coronavirus pandemic.

Shoer said CompTIA is uniquely positioned to enhance the TSP-ISAO. “If you look at all the educational opportunities and resources that CompTIA brings to the table … those are going to be integral to this in terms of helping to further educate the world of TSPs … about the cyber threats and how to respond,” he said.

He added that CompTIA’s involvement in government policy work will contribute to the success of the threat intelligence sharing platform, as “the government is going to be key.” ISAOs were chartered by the Department of Homeland Security as a result of an executive order by former president Barack Obama in 2015.

Hammervik and Shoer also underscored that CompTIA’s commitment to vendor neutrality will help the TSP-ISAO bring together competitive companies in pursuit of a collective benefit. “We all face these threats. We have all seen some of the reports about MSPs being used as threat vectors against their clients. If we don’t … stop that, it can harm the industry from the largest member to the smallest,” Shoer said.

About 650 organizations have joined the TSP-ISAO, according to Hammervik. Membership in the organization in 2020 is free for TSP companies.

Shoer said his goal for the TSP-ISAO is to develop a collaborative platform that can share qualified, real-time and actionable threat intelligence with TSPs so they can secure their own and customers’ businesses. He said ultimately, the organization would like to automate elements of the threat intelligence sharing, but it may be a long-term goal as AI and other technologies mature.

Wipro launches Microsoft technology unit

Wipro Ltd., a consulting and business process services company based in Bangalore, India, launched a business unit dedicated to Microsoft technology.

Wipro said its Microsoft Business Unit will focus on developing offerings that use Microsoft’s enterprise cloud services. Those Wipro offerings will include:

  • Cloud Studio, which provides migration services for workloads on such platforms as Azure and Dynamics 365.
  • Live Workspace, which uses Microsoft’s Modern Workplace, Azure’s Language Understanding Intelligent Service, Microsoft 365 and Microsoft’s Power Platform.
  • Data Discovery Platform, which incorporates Wipro’s Holmes AI system and Azure.

Wipro’s move follows HCL Technologies’ launch in January 2020 of its Microsoft Business Unit and Tata Consultancy Services’ rollout in November 2019 of a Microsoft Business Unit focusing on Azure’s cloud and edge capabilities. Other large IT service providers with Microsoft business units include Accenture/Avanade and Infosys.

Other news

  • 2nd Watch, a professional services and managed cloud company based in Seattle, unveiled a managed DevOps service, which the company said lets clients take advantage of DevOps culture without having to deploy the model on their own. The 2nd Watch Managed DevOps offering includes an assessment and strategy phase, DevOps training, tool implementation based on the GitLab platform, and ongoing management. 2nd Watch is partnering with GitLab to provide the managed DevOps service.
  • MSPs can now bundle Kaseya Compliance Manager with a cyber insurance policy from Cysurance. The combination stems from a partnership between Kaseya and Cysurance, a cyber insurance agency. Cysurance’s cyber policy is underwritten by Chubb.
  • Onepath, a managed technology services provider based in Atlanta, rolled out Onepath Analytics, a cloud-based business intelligence offering for finance professionals in the SMB market. The analytics offering includes plug-and-play extract, transform and load; data visualization; and financial business metrics such as EBITDA, profit margin and revenue as a percentage of sales, according to the company. Other metrics may be included, the company said, if the necessary data is accessible.
  • Avaya and master agent Telarus have teamed up to provide Avaya Cloud Office by RingCentral. Telarus will offer the unified communications as a service product to its network of 4,000 technology brokers, Avaya said.
  • Adaptive Networks, a provider of SD-WAN as a service, said it has partnered with master agent Telecom Consulting Group.
  • Spinnaker Support, an enterprise software support services provider, introduced Salesforce application management and consulting services. The company also provides Oracle and SAP application support services.
  • Avanan, a New York company that provides a security offering for cloud-based email and collaboration suites, has hired Mike Lyons as global MSP/MSSP sales director.
  • Managed security service provider High Wire Networks named Dave Barton as its CTO. Barton will oversee technology solutions and channel sales engineering for the company’s Overwatch Managed Security Platform, which is sold through channel partners, the company said.

Market Share is a news roundup published every Friday.


Microsoft misconfiguration exposed 250M customer service records

Microsoft became the latest organization to accidentally expose private data on the web.

The software giant Wednesday admitted it had exposed 250 million customer support records on five Elasticsearch servers, which were inadvertently made publicly accessible on the web for nearly a month.

According to Comparitech, which discovered the exposure, most personally identifiable information (PII) such as payment information was redacted. However, exposed information included customer email addresses, IP addresses, locations, descriptions of customer service and support claims and cases, Microsoft support agent emails, case numbers, resolutions, remarks, and internal notes marked as confidential.

“I was immediately stunned by the size and by the structure of data there, and even when I saw that most of the data there was automatically redacted, still there were some records with personal data in plain text,” Bob Diachenko, leader of Comparitech’s security research team, told SearchSecurity.

Microsoft, which corrected the misconfiguration last month, issued a statement that said it found no malicious use of the exposed data. The company said its investigation found that misconfigured Azure security rules were applied to the databases in early December.

On Dec. 28, according to Comparitech, the databases were first indexed by BinaryEdge, a search engine. One day later, Diachenko discovered the exposed databases and immediately contacted Microsoft. Within two days, the servers and data were secured.

“They acted really quickly and professionally,” Diachenko said. “In general, Microsoft’s response was exemplary. I wish every company would have such a brilliant incident response protocol in place.”

“We have solutions to help prevent this kind of mistake, but unfortunately, they were not enabled for this database,” Microsoft said in its statement. It is unknown what these solutions are and why they weren’t in place; when SearchSecurity contacted Microsoft, the company declined to comment beyond the public statement.


Benefits of virtualization highlighted in top 10 stories of 2019

When an organization decides to pursue a virtual desktop solution, a host of questions awaits it.

Our most popular virtual desktop articles this year highlight that fact and show how companies are still trying to get a handle on the virtual desktop infrastructure terrain. The stories explain the benefits of virtualization and provide comparisons between different enterprise options.

A countdown of our most-read articles, determined by page views, follows.

  10. Five burning questions about remote desktop USB redirection

Virtual desktops strive to mimic the traditional PC experience, but using local USB devices can create a sticking point. Remote desktop USB redirection enables users to attach their devices to their local desktop and have it function normally. In 2016, we explored options for redirection, explained how the technology worked and touched upon problem areas such as how scanners are infamously problematic with redirection.

  9. Tips for VDI user profile management

Another key factor for virtualizing the local desktop experience includes managing things like a user’s browser bookmarks, desktop background and settings. That was the subject of this FAQ from 2013 and our ninth most popular story for 2019. The article outlines options for managing virtual desktop user profiles, from implementing identical profiles for everyone to ensuring that settings once saved locally carry over to the virtual workspace.

  8. VDI hardware comparison: Thin vs. thick vs. zero clients

The push toward centralizing computing services has created a market for thin and zero clients, simple and low-cost computing devices reliant on servers. In implementing VDI, IT professionals should consider the right option for their organization. Thick clients, the traditional PC, provide proven functionality, but they also sidestep some of the biggest benefits of virtualization such as lower cost, energy efficiency and increased security. Thin clients provide a mix of features, and their simplicity brings VDI’s assets, such as centralized management and ease of local deployment, to bear. Zero clients require even less configuration, as they have nothing stored locally, but they tend to be proprietary.

  7. How to troubleshoot remote and virtual desktop connection issues

Connection issues can disrupt employee workflow, so avoiding and resolving them is paramount for desktop administrators. Once the local hardware has been ruled out, there are a set of common issues — exceeded capacity, firewalls, SSL certificates and network-level authentication — that IT professionals can consider when solving the puzzle.
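A quick first step before working through that checklist is verifying basic network reachability. As an illustrative sketch (the host name is a hypothetical example), PowerShell's built-in Test-NetConnection cmdlet can confirm whether a session host answers on the standard RDP port before you dig into certificates or authentication:

```powershell
# Sketch: check that a session host (hypothetical name "vdi-host01")
# is reachable on TCP 3389 before troubleshooting deeper layers.
$result = Test-NetConnection -ComputerName "vdi-host01" -Port 3389

if ($result.TcpTestSucceeded) {
    Write-Output "RDP port reachable - check SSL certificates and NLA next."
}
else {
    Write-Output "RDP port unreachable - check firewalls and capacity first."
}
```

If the TCP test fails, the problem is likely network-level (firewall or exceeded capacity); if it succeeds, certificate and authentication issues become the more likely culprits.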

  6. Comparing converged vs. hyper-converged infrastructure

What’s the difference between converged infrastructure (CI) and hyper-converged infrastructure (HCI)? This 2015 missive took on that question in our sixth most popular story for 2019. In short, while CI houses four data center functions — computing, storage, networking and server virtualization — into a single chassis, HCI looks to add even more features through software. HCI’s flexibility and scalability were touted as advantages over the more hardware-focused CI.

  5. Differences between desktop and server virtualization

To help those seeking VDI deployment, this informational piece from 2014 focused on how desktop virtualization differs from server virtualization. Server virtualization partitions one server into many, enabling organizations to accomplish tasks like maintaining databases, sharing files and delivering media. Desktop virtualization, on the other hand, delivers a virtual computer environment to a user. While server virtualization is easier to predict, given its uniform daily functions, a virtual desktop user might call for any number of potential applications or tasks, making the distinction between the two key.

  4. Application virtualization comparison: XenApp vs. ThinApp vs. App-V

This 2013 comparison pitted Citrix, VMware and Microsoft’s virtualization services against each other to determine the best solution for streaming applications. Citrix’s XenApp drew plaudits for the breadth of the applications it supported, but its update schedule provided only a short window to migrate to newer versions. VMware ThinApp’s portability was an asset, as it did not need installed software or device drivers, but some administrators said the service was difficult to deploy and the lack of a centralized management platform made handling applications trickier. Microsoft’s App-V provided access to popular apps like Office, but its agent-based approach limited portability when compared to ThinApp.

  3. VDI shops mull XenDesktop vs. Horizon as competition continues

In summer 2018, we took a snapshot of the desktop virtualization market as power players Citrix and VMware vied for a greater share of users. At the time, Citrix’s product, XenDesktop, was used in 57.7% of on-premises VDI deployments, while VMware’s Horizon accounted for 26.9% of the market. Customers praised VMware’s forward-facing emphasis on cloud, while a focus on security drew others to Citrix. Industry watchers wondered if Citrix would maintain its dominance through XenDesktop 7.0’s end of life that year and if challenger VMware’s vision for the future would pay off.

  2. Compare the top vendors of thin client systems

Vendors vary in the types of thin client devices they offer and the scale they can accommodate. We compared offerings from Advantech, Asus, Centerm Information, Google, Dell, Fujitsu, HP, Igel Technology, LG Electronics, Lenovo, NComputing, Raspberry Pi, Samsung, Siemens and 10ZiG Technology to elucidate the differences between them, and the uses for which they might be best suited.

  1. Understanding nonpersistent vs. persistent VDI

This article from 2013 proved some questions have staying power. Our most popular story this year explained the difference between two types of desktops that can be deployed on VDI. Persistent VDI provides each user his or her own desktop, allowing more flexibility for users to control their workspaces but requiring more storage and heightening complexity. Nonpersistent VDI did not save settings once a user logged out, a boon for security and consistent updates, but less than ideal in providing easy access to needed apps.


NSS Labs drops antitrust suit against AMTSO, Symantec and ESET

NSS Labs ended its legal battle against the Anti-Malware Testing Standards Organization, Symantec and ESET.

The independent testing firm on Tuesday dropped the antitrust lawsuit it filed in 2018 against AMTSO, a nonprofit organization, and several top endpoint security vendors, including Symantec, ESET and CrowdStrike. The suit accused the vendors and AMTSO of conspiring to prevent NSS Labs from testing their products by boycotting the company.

In addition, NSS Labs accused the vendors of instituting restrictive licensing agreements that prevented the testing firm from legally purchasing products for public testing. The suit also alleged AMTSO adopted a draft standard that required independent firms like NSS Labs to give AMTSO vendor members advance notice of how their products would be tested, which NSS Labs argued was akin to giving vendors answers to the test before they took it.

In May, NSS Labs and CrowdStrike agreed to a confidential settlement that resolved the antitrust suit as well as other lawsuits between the two companies stemming from NSS Labs’ 2017 endpoint protection report that included negative test results for CrowdStrike’s Falcon platform. Under the settlement, NSS Labs retracted the test results, which the firm admitted were incomplete, and issued an apology to CrowdStrike.

In August, a U.S. District Court judge for the Northern District of California dismissed NSS Labs’ antitrust claims, ruling in part that NSS Labs failed to show how the alleged conspiracy damaged the market, which is required for antitrust claims. The judge also said NSS Labs’ complaint failed to show ESET and AMTSO participated in the alleged conspiracy (Symantec did not challenge the conspiracy allegations in the motion to dismiss). The ruling allowed the company to amend the complaint; instead, NSS Labs dropped its lawsuit.

Still, the testing firm had some harsh words in its statement announcing the dismissal of the suit. NSS Labs said vendors “were using a Draft Standard from the non-profit group to demonstrate their dissatisfaction with tests that revealed their underperforming products and associated weaknesses, which did not support their marketing claims.”

“During the past year, AMTSO has made progress to be more fair and balanced in its structure, vendors have shown progress in working with testing organizations, and the market itself has had significant change and notable acquisition activity,” NSS Labs CEO Jason Brvenik said in the statement. “It is said that sunshine is the best disinfectant, and that has been our experience here. We look forward to continued improvement in the security vendor behaviors.”

AMTSO sent the following statement to SearchSecurity:

“While AMTSO welcomes NSS Lab’s decision to dismiss, its actions were disruptive, expensive, and without merit,” said Ian McShane, an AMTSO Board member and senior director of security products at Elastic. “However, we agree with its statement that ‘sunshine is the best disinfectant,’ and we’re looking forward to NSS Labs re-joining AMTSO, and to its voluntary participation in standard-based testing. We believe this will give customers a greater assurance that the tests were conducted fairly.”

AMTSO did not comment on whether the organization has made any specific changes to its structure or policies in the wake of the antitrust suit.

NSS Labs changed its approach to testing results earlier this year with its 2019 Advanced Endpoint Protection Group Test, which redacted the names of vendors that received low scores and “caution” ratings. At RSA Conference 2019, Brvenik told SearchSecurity that NSS Labs decided to take a “promote, not demote” approach that focuses on the vendors that are doing well.


How to Configure Failover Clusters to Work with DR & Backup

As your organization grows, it is important to plan not only a high-availability solution to maintain service continuity, but also a disaster recovery solution in case the operations of your entire datacenter are compromised. High availability (HA) keeps your applications or virtual machines (VMs) online by moving them to other server nodes in your cluster. But what happens if your region experiences a power outage, hurricane or fire? What if your staff cannot safely access your datacenter? During times of crisis, your team will likely be focused on the well-being of their families and homes, not on the availability of their company’s services. This is why it is important not only to protect against local crashes, but also to be able to move your workloads between datacenters or clouds, using disaster recovery (DR). Because you will need access to your data in both locations, you must make sure that the data is replicated and consistent in both places. The architecture of your DR solution will influence the replication solution you select.

Basic Architecture of a Multi-Site Failover Cluster


This three-part blog post will first look at the design decisions to create a resilient multi-site infrastructure, then in future posts the different types of replicated storage you can use from third parties, along with Microsoft’s DFS-Replication, Hyper-V Replica, and Azure Site Recovery (ASR), and backup best practices for each.

The first design decision will probably be the physical location of your second site. In some cases, this may be your organization’s second office location, and you will not have any input. Sometimes you will be able to select the datacenter of a service provider that offers cohosting. When you do have a choice, first consider the disaster exposure the two locations share. Make sure that the two sites are on separate power grids. Then consider what types of disasters your region is susceptible to, whether hurricanes, wildfires, earthquakes or even terrorist attacks. If your primary site is along a coastline, consider finding an inland location. Ideally, you should select a location that is far enough away from your primary site that a single disaster cannot take down both. Some organizations even select a site that is hundreds or thousands of miles away!

At first, selecting a cross-country location may sound like the best solution, but with added distance comes added latency.  If you wish to run different services from both sites (an active/active configuration), then be aware that the distance can cause performance issues as information needs to travel further across networks. If you decide to use synchronous replication, you may be limited to a few hundred miles or less to ensure that the data stays consistent.  For this reason, many organizations choose an active/passive configuration where the datacenter which is closer to the business or its customers will function as the primary site, and the secondary datacenter remains dormant until it is needed. This solution is easier to manage, yet more expensive as you have duplicate hardware which is mostly unused. Some organizations will use a third (or more) site to provide greater resiliency, but this adds more complexity when it comes to backup, replication and cluster membership (quorum).

Now that you have picked your sites, you should determine the optimal number of cluster nodes in each location.  You should always have at least two nodes at each site so that if a host crashes it can failover within the primary site before going to the DR site to minimize downtime.  You can configure local failover first through the cluster’s Preferred Owner setting.  The more nodes you have at each site, the more local failures you can sustain before moving to the secondary site.
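As a minimal sketch of that Preferred Owner configuration, the FailoverClusters PowerShell module can set the ordering so failover stays local before crossing sites; the role and node names below are hypothetical examples:

```powershell
# Sketch: prefer the two Site A nodes so a crashed host fails over
# locally before the workload moves to the DR site.
# "SQL Server Role", "SiteA-Node1" and "SiteA-Node2" are placeholders.
Import-Module FailoverClusters

Set-ClusterOwnerNode -Group "SQL Server Role" -Owners "SiteA-Node1","SiteA-Node2"

# Confirm the preferred-owner ordering the cluster will use
Get-ClusterOwnerNode -Group "SQL Server Role"
```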

Use Local Failover First before Cross-Site Failover


It is also recommended that you have the same number of nodes at each site, ideally with identical hardware configurations.  This means that the performance of applications should be fairly consistent in both locations and it should reduce your maintenance costs.  Some organizations will allocate older hardware to their secondary site, which is still supported, but the workloads will be slower until they return to the primary site.  With this type of configuration, you should also configure automatic failback so that the workloads are restored to the faster primary site once it is healthy.

If you have enough hardware, then a best practice is to deploy at least three nodes at each site so that if you lose a single node and have local failover there will be less of a performance impact.  In the event that you lose one of your sites in a genuine disaster for an extended period of time, you can then evict all the nodes from that site, and still have a 3-node cluster running in a single site.  In this scenario, having a minimum of three nodes is important so that you can sustain the loss of one node while keeping the rest of the cluster online by maintaining its quorum.

If you are an experienced cluster administrator, you probably identified the problem with having two sites with an identical number of nodes: maintaining cluster quorum. Quorum is the cluster’s membership algorithm that ensures there is exactly one owner of each clustered workload. It is used to avoid a “split-brain” scenario, in which a partition between two sets of cluster nodes (such as between two sites) lets two hosts independently run the same application, causing data inconsistency during replication. Quorum works by giving each cluster node a vote; a majority (51% or more) of voters must be in communication with each other to run all of the workloads. So how is this possible with the recommendation of two balanced sites with three nodes each (6 total votes)?

The most common solution is to have an extra vote in a third site (7 total votes).  So long as either the primary or secondary site can communicate with that voter in the third site, that group of nodes will have a majority of votes and operate all the workloads.  For those who do not have the luxury of the third site, Microsoft allows you to place this vote inside the Microsoft Azure cloud, using a Cloud Witness Disk.  For a detailed understanding of this scenario, check out this Altaro blog about Understanding File Share Cloud Witness and Failover Clustering Quorum in the Microsoft Azure Cloud.
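Configuring that seventh, tie-breaking vote as a Cloud Witness is a one-line operation on Windows Server 2016 and later; as a sketch, with placeholder storage account values:

```powershell
# Sketch: add a Cloud Witness in Azure as the tie-breaking quorum vote.
# The storage account name and access key are placeholders.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"

# Verify the resulting quorum configuration and witness resource
Get-ClusterQuorum
```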

Use a File Share Cloud Witness to Maintain Quorum


If you are familiar with designing a traditional Windows Server Failover Cluster, you know that redundancy of every hardware and software component is critical to eliminate any single point of failure.  With a disaster recovery solution, this concept is extended by also providing redundancy to your datacenters, including the servers, storage, and networks.  Between each site, you should have multiple redundant networks for cross-site communications.

Next, you will configure your shared storage at each site and set up cross-site replication between the disks, using either a third-party solution such as Altaro VM Backup, or Microsoft’s Hyper-V Replica or Azure Site Recovery. These configurations will be covered in the subsequent blog posts in this series. Finally, make sure that the entire multi-site cluster, including the replicated storage, passes all of the Cluster Validation Wizard tests.

Again, we’ll be covering more regarding this topic in future blog posts, so keep an eye out for them! Additionally, have you worked through multi-site failover planning for a failover cluster before? What things went well? What were the troubles you ran into? We’d love to know in the comments section below!

Author: Symon Perriman

Vote of confidence: Politico Europe makes polling data visual to give readers a better election view | Transform

In an era rife with hacked campaigns, bots and election interference, one news organization has returned to an age-old maxim: Every vote truly does count. And they’re using data to prove it to readers.

Politico Europe, a joint venture between the American media organization Politico and German publisher Axel Springer, unveiled an election-coverage hub ahead of the 2019 European parliamentary votes held in May. It gives citizens a deeper look at the democratic process and allows them to connect their top issues with candidates in the field.

Now called Poll of Polls, the platform offers interactive data visualizations built with Microsoft Power BI. It also provides political news stories and analyses of votes cast during elections within each of the 28 member states of the European Union.

The European Parliament building. (Getty Images)

Launched in collaboration with Microsoft, the site aims to show readers how their individual votes can affect political outcomes on a continental level.

“To illustrate that, we have charts showing how many votes it would take to switch an MEP (Member of European Parliament),” says Etienne Bauvir, director of business intelligence and technology at Politico Europe.

“In countries where turnout is notoriously low, like some Eastern European countries, it didn’t take many votes in May to shift an MEP and to have her lose a seat or win a seat. That’s one thing we wanted to make evident to readers – the impact of one vote can be big in some countries,” he says.

Case in point: Romania, where the Social Democrats earned 22.5% of the vote – causing the party to lose six parliament seats – while Renew Europe collected 22.4% of the vote – causing that party to gain seven seats. European parliamentary elections are held every five years.

Politico Europe’s new hub provides one page for each country, enabling readers to drill further into more precise 2019 election results, such as how pro-European Union candidates fared in France against skeptics of the EU. (Pro-EU MEPs in France currently outnumber EU skeptics 48 to 28.)

Etienne Bauvir.

Poll of Polls also tracks fresh polling data in each country, offering projections of 2020 votes in individual nations. That helps readers better understand some of the complexities of European politics, including the power of ideological groups.

“In the projections, we can be reactive and proactive in our data analyses,” Bauvir says. “In the European election process, many national parties come together to form groups in the European Parliament. It’s often not clear which party will form which group. We can offer an accurate picture of that reality.”

Think U.S. elections are confusing? Elections to the European Parliament can span thousands of candidates representing hundreds of parties across 28 nations.

With Power BI, visitors to the Politico hub can use interactive features to maneuver polling or election data in ways that help them digest election night results or votes yet to come – both in national parliaments and for the entire European Parliament.

Hanna Pawelec.

For example, one Politico visualization shows a graph of polling data in the United Kingdom, where citizens will elect their new parliament Dec. 12. By moving a cursor left and right, readers can view how the Conservative, Labour and Liberal Democrat parties have performed in polling each day from late 2018 to present.

“In the spring, we also had visualizations showing forecasts of how the future European Parliament will look. We could update those visualizations just by changing one data file,” says Hanna Pawelec, a Politico Europe data analyst. “By quickly updating those visualizations, we were one of the first newsrooms to show more in-depth analysis.”

“Power BI is easy enough for a citizen journalist to create a simple interactive with little training, but powerful enough for a seasoned data scientist to do complex analysis across multiple datasets,” says Ben Rudolph, senior director of Microsoft News Labs. “It’s the definition of democratized technology.”

Microsoft News Labs represents the company’s global effort to help journalists and journalism succeed by augmenting human creativity with innovative AI and content-creation technology.

Rudolph’s team began collaborating with Politico Europe after learning that the news organization wanted its audience to understand how the vote in one country could re-shape the entire European political landscape.

The two groups met in Brussels, Belgium (where Politico Europe is based) to discuss solutions that would help readers and viewers better engage with the news organization’s election coverage.

“The challenge wasn’t just wrangling the complexity of 242 parties competing for 705 seats in the European Parliament, but creating an experience that was at once compelling and transparent,” says Vera Chan, Microsoft senior manager for worldwide journalist relations.

Readers flocked to the news site. During the final stages of the European elections in May, Politico Europe’s traffic hit an all-time high with a nearly 30% increase compared to traffic measured one year earlier.

“This election,” Bauvir says, “was the moment to really widen our readership to the average citizen throughout Europe. We’ve now succeeded in retaining much of the additional audience we engaged. That’s another big success due in part to this hub and those visualizations.”

The Politico Europe newsroom in Brussels.

In the months since the election, traffic to Politico Europe remains on average 24% higher compared to the same period in 2018.

Politico Europe is now examining ways to expand the platform, focusing again on Europe, says Natasha Bernard, communications coordinator at Politico Europe.

“Data journalism with Power BI can play a unique role building audience trust,” Rudolph says. “Not only does an interactive visual give readers deep insight into a story, it also gives them unprecedented access to the data behind that insight.

“It’s a completely transparent means of storytelling,” he adds. “We think this will be increasingly important for outlets of all sizes as we approach the 2020 election cycle.”

Images courtesy of Politico Europe.

Go to Original Article
Author: Steve Clarke

Set up PowerShell script block logging for added security

PowerShell is a comprehensive and easy-to-use language, but administrators need to protect their organizations from bad actors who use PowerShell for criminal purposes.

PowerShell’s extensive capabilities as a native tool in Windows make it tempting for an attacker to exploit the language. Increasingly, malicious software and bad actors are using PowerShell to either glue together different attack methods or run exploits entirely through PowerShell.

There are many methods and security best practices available to secure PowerShell, but one of the most valuable is PowerShell script block logging. A script block is a collection of statements or expressions used as a single unit; in PowerShell syntax, it is everything inside the curly braces.
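As a minimal illustration, a script block can be assigned to a variable and invoked later with the call operator:

```powershell
# A script block is the code between curly braces; it can be stored in a variable
$block = { "inside a script block" }

# Invoke it with the call operator (&)
& $block
```

Everything inside the braces, including multi-line pipelines, is captured as a single unit, which is exactly what the logging engine records.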

Introduced in Windows PowerShell v4.0 and significantly enhanced in Windows PowerShell v5.0, script block logging produces an audit trail of executed code. Windows PowerShell v5.0 added a logging engine that automatically decodes code obfuscated with methods such as XOR, Base64 and ROT13. PowerShell includes the original obfuscated code for comparison.

PowerShell script block logging helps with the postmortem analysis of events to give additional insights if a breach occurs. It also helps IT be more proactive with monitoring for malicious events. For example, if you set up Event Subscriptions in Windows, you can send events of interest to a centralized server for a closer look.

Set up a Windows system for logging

There are two primary ways to configure script block logging on a Windows system: set a registry value directly or specify the appropriate settings in a group policy object.

To configure script block logging via the registry, use the following code while logged in as an administrator:

New-Item -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Force
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Name "EnableScriptBlockLogging" -Value 1 -Force
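To confirm the value took effect, you can read it back; a quick check against the same policy key:

```powershell
# Verify the policy value; EnableScriptBlockLogging should read 1
Get-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" |
    Select-Object EnableScriptBlockLogging
```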

You can set PowerShell logging settings within group policy, either on the local machine or through organizationwide policies.

Open the Local Group Policy Editor and navigate to Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging.

Set up PowerShell script block logging from the Local Group Policy Editor in Windows.

When you enable script block logging, the editor unlocks an additional option, "Log script block invocation start / stop events," which logs when a command, script block, function or script starts and stops. This helps trace when an event happened, especially for long-running background scripts, but be aware that it generates a substantial amount of additional data in your logs.
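If you manage the setting through the registry rather than group policy, the same start/stop option corresponds to the EnableScriptBlockInvocationLogging value under the same policy key. A sketch, to be enabled with the log-volume caveat above in mind:

```powershell
# Enable invocation start/stop logging alongside script block logging
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" `
    -Name "EnableScriptBlockInvocationLogging" -Value 1 -Force
```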

PowerShell script block logging tracks executed scripts and commands run on the command line.

How to configure script block logging on non-Windows systems

PowerShell Core is the cross-platform version of PowerShell for use on Windows, Linux and macOS. To use script block logging on PowerShell Core, you define the configuration in the powershell.config.json file in the $PSHome directory, which is unique to each PowerShell installation.

From a PowerShell session, navigate to $PSHome and use the Get-ChildItem command to see if the powershell.config.json file exists. If not, create the file with this command:

sudo touch powershell.config.json

Modify the file using a tool such as the nano text editor and paste in the following configuration.

{
  "PowerShellPolicies": {
    "ScriptBlockLogging": {
      "EnableScriptBlockInvocationLogging": false,
      "EnableScriptBlockLogging": true
    }
  },
  "LogLevel": "verbose"
}

Test PowerShell script block logging

Testing the configuration is easy. From the command line, run the following:

PS /> { "log me!" }
"log me!"

Checking the logs on Windows

How do you know which entries to watch for? The main event ID is 4104. This is the ScriptBlockLogging entry, which includes the user and domain, the logged date and time, the computer host and the script block text.

Open Event Viewer and navigate to the following log location: Applications and Services Logs > Microsoft > Windows > PowerShell > Operational.

Click on events until you find the one from the test that is listed as Event ID 4104. Filter the log for this event to make the search quicker.
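Instead of clicking through Event Viewer, you can also query the log from PowerShell itself; for example:

```powershell
# Pull the 10 most recent script block logging events (ID 4104) from the Operational log
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -MaxEvents 10 | Select-Object TimeCreated, Message
```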

Event 4104 in the Windows Event Viewer details PowerShell activity on a Windows machine.

On PowerShell Core on Windows, the log location is: Applications and Services Logs > PowerShellCore > Operational.

Log location on non-Windows systems

On Linux, PowerShell script block logging writes to syslog, and the location varies by distribution. For this tutorial, we use Ubuntu, which keeps syslog at /var/log/syslog.

Run the following command to show the log entry; you must elevate with sudo in this example and on most typical systems:

sudo cat /var/log/syslog | grep 'log me!'

2019-08-20T19:40:08.070328-05:00 localhost powershell[9610]: (6.2.2:9:80) [ScriptBlock_Compile_Detail:ExecuteCommand.Create.Verbose] Creating Scriptblock text (1 of 1):#012{ "log me!" }#012#012ScriptBlock ID: 4d8d3cb4-a5ef-48aa-8339-38eea05c892b#012Path:

To set up a centralized server on Linux, things are a bit different since you’re using syslog by default. You can use rsyslog to ship your logs to a log aggregation service to track PowerShell activity from a central location.
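As one possible approach, a short rsyslog rule can forward just the PowerShell entries; the collector hostname logs.example.com below is a placeholder for your own aggregation server:

```
# /etc/rsyslog.d/30-powershell.conf
# Forward messages tagged by the powershell process to a central collector over TCP
:syslogtag, startswith, "powershell" @@logs.example.com:514
```

Restart the rsyslog service after adding the file so the rule takes effect.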

Microsoft events — the year ahead – The Official Microsoft Blog

Empowering every person and every organization on the planet to achieve more is a 7 billion-person mission that we don’t take lightly. None of us at Microsoft could ever hope to reach that objective without a vast set of partnerships with curious and passionate people who seek to deeply understand technology and its power to transform individuals, businesses and industries. Facilitating connections, sharing our technologies and partnering to create solutions to real-world challenges is why we create the many Microsoft event experiences we host around the world.

Microsoft event experiences are designed to benefit specific audiences and structured to support clear objectives. We’re committed to closely aligning with all our partners, customers, and business and IT decision makers and connecting you with peers and industry leaders. To find out more about each event, visit our event website for details. Or, if you’re looking for a quick description of each event, read below to get a snapshot of our upcoming events.

Flagship events
IT professionals and developers
Microsoft Ignite — For IT professionals, decision makers, implementors, architects, developers and data professionals. This event provides opportunities to explore the latest tools, receive deep technical training and get specific questions answered by Microsoft experts. With more than 26,000 attendees who join to learn, connect and explore what Microsoft has to offer, this truly is the place where reality meets imagination. Orlando, Florida | Nov. 4-8, 2019

Microsoft Build — Where leading architects, developers, start-ups and student developers converge to focus on the latest tech trends and innovate for the future. We maintain our “produced by developers and for developers” mantra while inviting the next generation of developers to participate in the student zone. Seattle, Washington | May 19-21, 2020

Microsoft partners
Microsoft Business Applications Summit — An annual opportunity to bring together a community of Microsoft customers and partners in roles that include power users, business analysts, evangelists, implementers and technical architects. This event provides a forum to learn how Microsoft’s end-to-end Dynamics 365 and Power Platform can create and extend solutions to drive business success. Anaheim, California | April 20-21, 2020

Microsoft Inspire — Where Microsoft partners meet to connect and celebrate as one community at the close of Microsoft’s fiscal year. With hundreds of thousands of partners across the world, our partner ecosystem is stronger and more united than ever. We invite you to learn more about how Microsoft leaders are supporting our partners, and how partners can capitalize on the opportunities ahead. We’ve co-located our Microsoft sales kick-off event to build on our shared partnership philosophy. Las Vegas, Nevada | July 20-24, 2020

Regional tours

We started our regional tours for attendee convenience and to gauge how digital transformation is happening around the world. They’ve been a success on both fronts. This year we’re expanding to 30 markets for Microsoft Ignite The Tour and starting Microsoft Envision | The Tour in seven cities. Check out one of the stops on our regional tours in a city near you.

IT professionals and developers
Microsoft Ignite The Tour — We are bringing the best of Microsoft Ignite to you by traveling to 30 cities around the world for both ease of access and for the robust localized content for these distinct markets. Join us for in-depth learning and experiences in a free, two-day format that allows IT professionals and developers to learn new ways to build solutions, migrate, and manage infrastructure and connect with local industry leaders and peers. Visit Microsoft Ignite The Tour for locations and dates.

Business decision makers
Microsoft Envision | The Tour — An invitation-only, single-day event held in multiple cities around the world. With a global focus, this summit allows members of the C-suite to focus on challenges and trends that are changing the way organizations do business. Taking inspiration from our CEO Summit, this conference is designed to give leaders a chance to step back and learn about smart strategies to tackle emerging issues, power new efficiencies and build new business models and revenue streams. Visit Microsoft Envision | The Tour for locations and dates.

Digital learning

For those unable to make it in person or who are looking to quickly skill up on a particular topic, we offer digital learning options. Watch training sessions and event keynote sessions at any time. View multiple modules or choose a learning path tailored to today’s developer and technology masterminds that are designed to prepare you for industry-recognized Microsoft certifications.

Additional events

We’re just scratching the surface of the full picture of events that Microsoft has to offer. If you don’t find what you are looking for here, visit our full global events catalog for a list of events in your region and possibly your own city. These are events that are organized around specific product offerings and located in easily accessible locations with a wide range of class levels offered.

We invite everyone to join us to learn and grow, join us to connect with your peers, join us to get the answers you need so that you can deliver the solutions that can help propel your digital transformation. Visit our events website of flagship and regional events, and we look forward to seeing you in the year ahead.

Author: Microsoft News Center

Beyond overhead: What drives donor support in the digital era – Microsoft on the Issues

One of the greatest challenges to running a successful nonprofit organization has always been that donors look at nonprofits’ stewardship of funds as a primary way to assess impact. While there is no doubt that nonprofits must use donor funds responsibly, tracking whether a nonprofit maintains the highest possible ratio of spending on programs to spending on overhead is a poor proxy for understanding how effective a nonprofit truly is. In fact, the imperative to limit overhead has forced many organizations to underinvest in efforts to improve efficiency. Ironically, this has long prevented nonprofits from utilizing innovative digital technologies that could help them be more efficient and effective.

Now more than ever, cloud-based technology can have a transformative effect on how nonprofit organizations increase impact and reduce costs. The same technologies that give for-profit businesses insights about customers and markets, create operational efficiencies and speed up innovation can also help nonprofits target donors and raise funds more strategically, design and deliver programming more efficiently, and connect field teams with headquarters more effectively. This means smart investments in digital tools are essential to every nonprofit’s ability to make progress toward its mission.

The good news is that a major shift is underway. As part of our work at Microsoft Tech for Social Impact to understand how nonprofits can use technology to drive progress and demonstrate impact, we recently surveyed 2,200 donors, volunteers and funding decision-makers to learn how they decide which organizations to support, what their expectations are for efficiency and effectiveness, and how they feel about funding technology infrastructure at the nonprofits they support.

The results, which we published recently in the white paper “Beyond overhead: Donor expectations for driving impact with technology,” make clear that people donate to organizations they trust and that donors are increasingly looking at data beyond the ratio of program spending to overhead spending to measure impact. We also found that those who support nonprofits now overwhelmingly recognize the critical role technology plays in driving impact and delivering value. Nearly four out of five supporters (which includes both donors and volunteers) and more than nine out of 10 funding decision-makers told us they support directing donations to improve technology at a nonprofit. An overwhelming majority — 85 percent of supporters and 95 percent of funding decision-makers — are more likely to contribute to organizations that can show that they are using technology to improve how they run programs.

At the same time, the survey found that most people expect organizations to use donations more efficiently and to advance the causes they work for more effectively than in the past. Among supporters, for example, 79 percent believe nonprofits should be better at maximizing funding than they were 10 years ago. Just over 80 percent of funding decision-makers believe nonprofits should be more effective at achieving their goals and advancing the causes they work for now than in the past.

To give you a better sense of what potential donors are looking for as they consider where to target their nonprofit contributions and how much they weigh technology into their thinking, we have developed a tool using Power BI so you can look at the data in greater detail. Within the tool, you can see how people responded to questions about overall effectiveness and efficiency, the importance of technology as a driver of success, how likely they are to support organizations that use technology to demonstrate impact, and their willingness to fund technology improvements at the nonprofits they support.

To make the tool as useful as possible for your organization, you can sort the data by supporters and funding decision-makers, and you can explore how responses varied by region. As you move through the data, you will see how these critical groups of supporters and funders think about these important questions in the region where your organization operates:

The ultimate goal of this survey was to get a clearer picture of what motivates people to contribute to an organization and how technology can help nonprofits meet supporters’ expectations. Overall, I believe our research provides some important insights that can help any organization be more successful. Fundamentally, we found that people donate to organizations that are perceived to be trustworthy, and that trust is achieved through operational transparency and effective communications. More than ever before, donors recognize that using data to measure and demonstrate impact is the foundation for trust.

I encourage you to read the full report and learn more about Microsoft’s commitment to support nonprofits.

Author: Microsoft News Center

How to deal with the on-premises vs. cloud challenge

For some administrators, the cloud is not a novelty. It’s critical to their organization. Then, there’s you, the lone on-premises holdout.

With all the hype about cloud and Microsoft’s strong push to get IT to use Azure for services and workloads, it might seem like you are the only one in favor of remaining in the data center in the great on-premises vs. cloud debate. The truth is the cloud isn’t meant for everything. While it’s difficult to find a workload not supported by the cloud, that doesn’t mean everything needs to move there.

Few people like change, and a move to the cloud is a big adjustment. You can’t stop your primary vendors from switching their allegiance to the cloud, so you will need to be flexible to face this new reality. Take a look around at your options as more vendors narrow their focus away from the data center and on-premises management.

Is the cloud a good fit for your organization?

The question is not whether it can be done, but whether it should be done. All too often, it’s a matter of money. For example, it’s possible to take a large-capacity file server in the hundreds of terabytes and place it in Azure. Microsoft’s cloud can easily support this workload, but can your wallet?

Once you get over the sticker shock, think about it. If you’re storing frequently used data, it might make business sense to put that file server in Azure. However, if this is a traditional file server with mostly stale data, then is it really worth the price tag as opposed to using on-premises hardware?

When you run the numbers on what it takes to put a file server in Azure, the costs can add up.

Part of the on-premises vs. cloud dilemma is that you have to weigh the financial costs as well as the tangible benefits and drawbacks. The people factor is also part of the calculation in determining what makes sense as an operational expense, as opposed to a capital expense. Too often, admins find themselves in a situation where management sees one side of this formula and wants to make that cloud leap, while the admins must look at the reality and explain both the pros and cons, the latter of which no one wants to hear.

The cloud question also goes deeper than the Capex vs. Opex argument for the admins. With so much focus on the cloud, what happens to those environments that simply don’t or can’t move? It’s not only a question of what this means today, but also what’s in store for them tomorrow.

As vendors move on, the walls close in

With the focus for most software vendors on cloud and cloud-related technology, the move away from the data center should be a warning sign for admins that can’t move to the cloud. The applications and tools you use will change to focus on the organizations working in the cloud with less development on features that would benefit the on-premises data center.

One of the most critical aspects of this shift will be your monitoring tools. As cloud gains prominence, it will get harder to find tools that will continue to support local Windows Server installations over cloud-based ones. We already see this trend with log aggregation tools that used to be available as on-site installs that are now almost all SaaS-based offerings. This is just the start.

If a tool moves from on premises to the cloud but retains the ability to monitor data center resources, that is an important distinction to remember. That means you might have a workable option to keep production workloads on the ground and work with the cloud as needed or as your tools make that transition.

As time goes on, an evaluation process might be in order. If your familiar tools are moving to the cloud without support for on-premises workloads, the options might be limited. Should you pick up new tools and then invest the time to install and train the staff how to use them? It can be done, but do you really want to?

While not ideal, another viable option is to take no action; the install you have works, and as long as you don’t upgrade, everything will be fine. The problem with remaining static is getting left behind. The base OSes will change, and the applications will get updated. But, if your tools can no longer monitor them, what good are they? You also introduce a significant security risk when you don’t update software. Staying put isn’t a good long-term strategy.

With the cloud migration will come other choices

The same challenges you face with your tools also apply to your traditional on-premises applications. Longtime stalwarts, such as Exchange Server, still offer a local installation, but it’s clear that Microsoft’s focus for messaging and collaboration is its Office 365 suite.

The harsh reality is more software vendors will continue on the cloud path, which they see as the new profit centers. Offerings for on-premises applications will continue to dwindle. However, there is some hope. As the larger vendors move to the cloud, it opens up an opportunity in the market for third-party tools and applications that might not have been on your radar until now. These products might not be as feature-rich as an offering from the larger vendors, but they might tick most of the checkboxes for your requirements.
