How to take advantage of Teams-Exchange integration

When Microsoft introduced Teams, there was already an appetite in the marketplace for a platform that supports real-time chat, collaboration, meetings and calling.

The Slack success story motivated Microsoft to release its own version of a team messaging app in 2017. The introduction of Microsoft Teams provided a new way to communicate and collaborate, leading to less use of some Exchange functions. Because Exchange and email continue to be important, Microsoft developed Teams-Exchange integration functionality to give organizations a way to customize how they work with each application.

Exchange is still the go-to tool to organize and manage meetings, send email and centralize all key contact information, such as phone numbers and addresses. For users who rely on Microsoft Teams for collaboration, there are several ways to pull data from Outlook or Exchange Online into the Microsoft Teams channels or vice versa. The following examples highlight some of the Teams-Exchange integration requests administrators might get from users.

Access key Exchange data from within Microsoft Teams

Users who spend most of their time within Teams will want a way to retrieve email and calendars. Teams users can add a new tab with any content they like.

For Outlook email, add a tab by clicking on the (+) symbol in Teams as shown in Figure 1.1.

Figure 1.1: Click the + symbol in Microsoft Teams to set up a new tab.

From the icons list at the top, select the one labeled Website. Give it a name and add the URL https://outlook.office365.com/mail/inbox for Outlook on the web, as shown in Figure 1.2.

Figure 1.2: To complete the setup for a new tab showing Outlook in Microsoft Teams, give the tab a name and add the URL for Outlook on the web.

Use the tabs to add shared calendars for Teams

Another feature users have missed in Teams is group calendars. Without direct access to a team’s calendars, many workers must switch between Outlook and Teams to view these shared calendars. A workaround is to create a new tab as explained above, but in this case, set it up to display the group’s shared calendar. Microsoft has native support on its 2020 roadmap, but the following instructions work today.

First, click on the Office 365 group calendar in the Outlook web client, as shown in Figure 2.1.

Figure 2.1: Select the Office 365 group calendar from the Outlook web client to get the URL for the calendar.

After clicking the calendar icon, copy the URL from the address bar in the browser as shown in Figure 2.2.

Figure 2.2: Copy the calendar URL from the address bar in the browser.

Next, go to Teams and add a new tab to the team channel, select the Website icon and then paste the URL saved in the earlier step to complete the new tab, as shown in Figure 2.3.

Figure 2.3: Complete the tab setup for a shared calendar in Teams by giving the tab a name and adding the calendar URL.

Notify users within Teams of certain email

Another capability that users might find helpful is getting a notification within Microsoft Teams when they receive a specific email.

For this setup, the Exchange administrator will use the automation platform called Power Automate, formerly known as Microsoft Flow. Power Automate is a service included with Office 365 to connect apps on the Microsoft platform so administrators can build customized routines that run automatically when certain conditions are met.

To start, sign into Power Automate and create a new flow. Select the trigger for Outlook named When a new email arrives and add the action in Teams called Post a message as shown in Figure 3.1.

Figure 3.1: Use Power Automate to set up an automated task that triggers when a new email arrives from a specific person and results in a notification posted in a Teams channel.

You will need to perform basic configuration, such as the email account to watch, filters for what type of email to monitor and where to post the message. By default, a flow is active as soon as it is created.
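
For admins who prefer scripting to the flow designer, a Teams channel’s incoming webhook can receive a similar notification. The following is a minimal PowerShell sketch rather than the flow itself; the webhook URL is a placeholder generated from the channel’s Connectors menu:

    # Post a simple notification to a Teams channel via an incoming webhook.
    # $webhookUrl is a placeholder; create the webhook on the target channel first.
    $webhookUrl = "https://outlook.office.com/webhook/<your-webhook-id>"
    $body = @{ text = "New email from the CEO has arrived in the monitored inbox." } | ConvertTo-Json
    Invoke-RestMethod -Uri $webhookUrl -Method Post -ContentType "application/json" -Body $body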

Notify users within Teams of certain events

Another useful automation routine to set up is to forward reminders in Teams for specific events. Since Exchange is the platform that manages all calendars and events, you can use a Power Automate task similar to the previous tip that triggers with an email.

Use Power Automate to build a flow that monitors a calendar — the user calendar, shared resource calendars or shared calendars — for a certain event, then automatically posts a message to Teams when the start time approaches, as shown in Figure 4.1.

Figure 4.1: Build a flow in Power Automate to monitor a calendar and then send a notification to a channel in Teams.

There are many more integration opportunities between Microsoft Teams and Exchange Online. For example, administrators can investigate the bots feature in Teams for another way to connect and process commands related to Exchange email, calendars and tasks. Services such as the Virtual Assistant and Bot Framework can offer more advanced integration capabilities without the help of a software developer.

Contact tracing apps seem effective, but have privacy concerns

When the state of Rhode Island launched a contact tracing app for COVID-19 in May, public health officials said the program could help curtail the pandemic, but privacy advocates worry that the app, and ones like it, take too much data while potentially sharing it with too many people.

As stores, restaurants, parks and offices in the U.S. begin to open back up months after the first COVID-19 related stay-at-home orders, enterprises and governments face the difficult challenge of providing goods and services while keeping people safe.

To tackle that challenge, enterprises and governments are turning to technology to create contact tracing apps.

Balance of safety and privacy

A decades-old strategy to help slow the spread of contagious diseases, contact tracing is the process of identifying infected people, tracking down whom they have been in contact with and notifying those contacts of a potential infection.

While contact tracing was largely done manually in the past, enterprises, as well as local and state governments, are beginning to use apps to do it, including mobile applications that use location data to track a person’s whereabouts, to more quickly and effectively track where COVID-19 may have spread. Using AI-powered big data analytics, governments and enterprises can then process the data in aggregate, with a degree of anonymity.

Contact tracing apps, however, have raised concerns from privacy advocates, who say that some platforms take too much identifying information, such as GPS data, share too much data with government authorities, or both.

The Electronic Frontier Foundation (EFF), for one, explicitly opposes automated COVID-19 contact tracing apps that track location through GPS or cell phone location, as well as apps that send information about possibly infected people directly to the government.

 “This data is highly intrusive of location privacy, yet not sufficiently granular to show whether two people were within transmittal distance (six feet),” said Adam Schwartz, senior staff attorney at the EFF.

Rhode Island, with its recently unveiled CRUSH COVID RI app, is an example.

Released May 19, the app uses GPS location data to track the people and places users visited for at least 10 minutes over the past 20 days. If a user tests positive for COVID-19, they can agree to share their location data with the state health department so it can identify people the user was in contact with and alert them.

CRUSH COVID RI application

Signing up for the app is voluntary, and location data, unless shared with the health department, is stored entirely on users’ phones. It’s deleted after 20 days.

Even though these apps are voluntary, privacy advocates worry that apps that use GPS data to track people, and that send data to the government, are invasive.

 “We are disappointed that some nations and states are using location apps and hybrid location/proximity apps. The voluntariness of such apps does not cure the lack of data minimization,” Schwartz said.

The American Civil Liberties Union was similarly critical of such contact tracing technologies, saying they carry some inherent risk of exposing an infected person’s medical condition to people with whom they come in contact.

However, some contact tracing platforms aim to be privacy-friendly.

These include a Google-Apple initiative, which has drawn wide interest, as well as a tracing app from the Pan-European Privacy-Preserving Proximity Tracing consortium.

These mobile apps use a phone’s Bluetooth Low Energy beacons to interact with other phones, enabling the phone of an enrolled user to announce itself with a different random large number to nearby phones every few minutes. Phones keep a log of the numbers they send out, as well as the numbers sent out by nearby phones.

If a user is diagnosed as infected with COVID-19, they can then voluntarily upload that list of numbers to a central server. Users who are not infected have their numbers automatically compared to the numbers on the server. If enough numbers match, those users are notified that they may have been in contact with someone who is infected.
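
To make the matching step concrete, here is a toy sketch of that on-device comparison; this is not any real app’s code, just the set intersection described above, with made-up tokens:

    # Toy model of decentralized proximity matching (illustrative only).
    # Tokens this phone heard from nearby phones over recent days.
    $tokensHeard = @("8f3a", "c21d", "77b0", "e9c4")

    # Tokens voluntarily uploaded by infected users, republished by the server.
    $tokensPublished = @("c21d", "e9c4", "1a2b")

    # The comparison happens locally on the device, not on the server.
    $overlap = $tokensHeard | Where-Object { $tokensPublished -contains $_ }
    if ($overlap.Count -ge 2) { "You may have been in contact with an infected user." }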

That’s different from Rhode Island’s new app, which uses GPS data and which uploads information to government officials.

A Bluetooth system is more accurate and less revealing than an app that uses geolocation data, an ACLU white paper on tracing apps noted. While Bluetooth tracking could potentially reveal associations, it’s less likely to do so.

The EFF, likewise, is wary of contact tracing apps that track proximity using Bluetooth, Schwartz said.

“This system might not help; if it does, it will be a small part of a larger public health response that must focus on manual interview-based contact tracing and widespread testing,” he said.

“This system carries privacy risks that must be mitigated through voluntariness, data minimization and open source code. We oppose hybrid tracking apps that use both proximity and location,” Schwartz continued.

Enterprise-level

Meanwhile, national governments around the world, including the governments of South Korea, Singapore, China and Australia, have developed and released contact tracing apps. Some enterprises are also beginning to consider the implications of having their employees use contact tracing apps.

Enterprises with global operations have shown a particular willingness to use technology-based contact tracing in countries with less legal or cultural opposition to it, said Deborah Golden, U.S. cyber risk services leader at Deloitte Risk and Financial Advisory.

“In the U.S., we expect that organizations will likely lean on a variety of approaches to reach the next normal. Some organizations may even bypass this challenge altogether and realize they are able to maintain fully remote operations in perpetuity,” Golden said. “Others that are more dependent on physical presence may consider a combination of physical protocols.”

Before using or developing contact tracing apps, however, governments and enterprises need to deeply consider the privacy implications the platform may have, as well as methods to help ensure users’ personal data stays safe and anonymous, she noted.

The creators

Regardless of the method used for contact tracing, or who is deploying the apps, companies that create such apps need to ensure they are anonymizing data and keeping people’s information private, according to some vendors.

Maven Wave, an Atos-owned technology consulting firm that specializes in digital delivery skills and cloud-powered applications, is working with vendors to develop technology-assisted contact tracing (TACT) apps.

“There’s a whole bunch of things that need to happen” to keep information private, said Brian Ray, managing director of AI and machine learning at Maven Wave.

“Redaction, making data points anonymous, having a control system in place, having a way to audit that process” are just some of the things tech companies need to do, he said.

Meanwhile, enterprises considering using TACT apps should take into account many privacy and data protection concerns, regardless of whether contact tracing apps require users to opt in, said Golden.

“In adopting these technologies, organizations are creating large datasets of sensitive personal health information and personally identifiable information,” she said. “Organizations should carefully consider how this data will be protected, accessed, stored, transmitted and reported.”

“Leaders need to think through where organizational lines of responsibility exist for communication with regulatory officials, employees, customers and other stakeholder groups, as well as how communication should occur to foster trust and transparency — particularly when disparate regulatory guidance may exist across geographies or industries,” Golden continued.

Yet, governments and the public may have different, even opposing, views about what data should be shared, added Asif Dhar, chief health informatics officer and a principal in Deloitte Consulting’s Monitor Deloitte practice, which is working with states and companies to build and deploy contact tracing apps.

“Active engagement with consumers and employees is critical to gain an appreciation of their preferences to establish clear expectations,” he said. “For example, organizations should establish clear consenting platforms so that stakeholders understand when and under what circumstances data is used.”

Without a focus on trust and transparency, organizations may risk low acceptance of apps, Dhar continued. Organizations should also consider ways to adequately protect data, including where data is stored, who can access it, and how and when it can be accessed.

Still, even if enterprises or governments set up fairly secure, anonymized contact tracing apps, it’s no guarantee they will provide the information needed to keep people safe.

Effectiveness of apps

How many people use available contact tracing apps can play a part in their effectiveness.

If only a few people download and use an app, the app may convey inaccurate results, such as indicating to officials that fewer people are getting infected. That may create a false sense of security. People simply not getting tested, or not changing their infection status in the app, would also skew the results.

But according to Prince Kohli, CTO at RPA vendor Automation Anywhere, people are generally willing to download the apps and provide data.

Automation Anywhere helped develop contact tracing apps in conjunction with other companies in several countries, including Australia and China. Some apps ask users to answer surveys about where they have been and their medical status. Most people have been willing to answer questions like these, said Kohli.

“This is not data that people are trying to hide,” he said.

A usage rate as low as 10% to 20% in a group could provide relevant results, Kohli said, as long as the percentage indicates a truly random sampling of people.

Even so, app usage and COVID-19 testing rates aren’t the only determining factors of an app’s effectiveness.

While use thresholds are an important factor, other considerations, such as whether a person has their phone on them when going out or not, or if a person travels across disparate geographical areas, can help determine efficacy, according to Golden.

The Rhode Island app, for example, can’t be downloaded by users outside of Rhode Island, making it useless for tracking visitors to the state.

“Although contact tracing applications may be an important tool in a country’s ability to return to work, there is no silver bullet in getting back to normal,” Golden said. “Organizations cannot negate the opportunity that human contact tracers and other physical and digital health safety tools and protocols offer.”

Q&A: Recounting the rough-and-tumble history of PowerShell

To examine the history of PowerShell requires going back to a time before automation, when point-and-click administration ruled.

In the early days of IT, GUI-based systems management was de rigueur in a Windows environment. You added a new user by opening Active Directory, clicking through multiple screens to fill in the name, group membership, logon script and several other properties. If you had dozens or hundreds of new users to set up, it could take quite some time to complete this task.

To increase IT efficiency, Microsoft produced a few command-line tools. These initial automation efforts in Windows — batch files and VBScript — helped, but they did not go far enough for administrators who needed undiluted access to their systems to streamline how they worked with the Windows OS and Microsoft’s ever-growing application portfolio.

It wasn’t until PowerShell came out in 2006 that Microsoft gave administrators something that approximated shell scripting in Unix. PowerShell is both a shell — used for simple tasks such as gathering the system properties on a machine — and a scripting language to execute more advanced infrastructure jobs. Each successive PowerShell release came with more cmdlets, updated functionality and refinements to further expand the administrator’s dominion over Windows systems and its users. In some products, certain adjustments can only be made via PowerShell.

Today, administrators widely use PowerShell to manage resources both in the data center and in the cloud. It’s difficult to comprehend now, but shepherding a command-line tool through the development gauntlet at a company that had built its brand on the Windows name was a difficult proposition.

Don Jones

Don Jones currently works as vice president of content partnerships and strategic initiatives for Pluralsight, a technology skills platform vendor, but he’s been fully steeped in PowerShell from the start. He co-founded PowerShell.org and has presented PowerShell-related sessions at numerous tech conferences.

Jones is also an established author and his latest book, Shell of an Idea, gives a behind-the-scenes history of PowerShell that recounts the challenges faced by Jeffrey Snover and his team.

In this Q&A, Jones talks about his experiences with PowerShell and why he felt compelled to cover its origin story.

Editor’s note: This interview has been edited for length and clarity.

What was it like for administrators before PowerShell came along?

Don Jones: You clicked a lot of buttons and wizards. And it could get really painful. Patching machines was a pain. Reconfiguring them was a pain. And it wasn’t even so much the server maintenance — it was those day-to-day tasks.

I worked at Bell Atlantic Network Integration for a while. We had, maybe, a dozen Windows machines and we had one person who basically did nothing but do new user onboarding: creating the domain account and setting up the mailbox. There was just no better way to do it, and it was horrific.

I started digging into VBScript around the mid-1990s and tried to automate some of those things. We had a NetWare server, and you periodically had to log on, look for idle connections and disconnect them to free up a connection for another user if we reached our license limit. I wrote a script to do something a human being was sitting and doing manually all day long.

This idea of automating — that was just so powerful, so tremendous and so life-affirming that it became a huge part of what I wound up doing for my job there and jobs afterward.

Do you remember your introduction to PowerShell?

Jones: It was at a time when Microsoft was being a little bit more free talking about products that they were working on. There was a decent amount of buzz about this Monad shell, which was its code name. I felt this is clearly going to be the next thing and was probably going to replace VBScript from what they were saying.

I was working with a company called Sapien Technologies at the time. They produce what is probably still the most popular VBScript code editor. I said, ‘We’re clearly going to have to do something for PowerShell,’ and they said, ‘Absolutely.’ And PrimalScript was, I think, the first non-Microsoft tool that really embraced PowerShell and became part of that ecosystem.

That attracted the attention of Jeffrey Snover at Microsoft. He said, ‘We’re going to launch PowerShell at TechEd Europe 2006 in Barcelona, [Spain], and I’d love for you to come up and do a little demo of PrimalScript. We want to show people that this is ready for prime time. There’s a partner ecosystem. It’s the real deal, and it’s safe to jump on board.’

That’s where I met him. That was the first time I got to present at a TechEd and that set up the next large chapter of my career.

What motivated you to write this book?

Jones: I think I wanted to write it six or seven years ago. I remember being either at a TechEd or [Microsoft] Ignite at a bar with [Snover], Bruce Payette and, I think, Ken Hansen. You’re at a bar with the bar-top nondisclosure agreement. And they’re telling these great stories. I’m like, ‘We need to capture that.’ And they say, ‘Yeah, not right now.’

I’m not sure what really spurred me. Partly, because my career has moved to a different place. I’m not in PowerShell anymore. I felt being able to write this history would be, if not a swan song, then a nice bookend to the PowerShell part of my career. I reached out to a couple of the guys again, and they said, ‘You know what? This is the right time.’ We started talking and doing interviews.

As I was going through that, I realized the reason it’s the right time is because so many of them are no longer at Microsoft. And, more importantly, I don’t think any of the executives who had anything to do with PowerShell are still at Microsoft. They left around 2010 or 2011, so there’s no repercussions anymore.

Regarding Jeffrey Snover, do you think if anybody else had been in charge of the PowerShell project that it would have become what it is today?

Jones: I don’t think so. By no means do I want to discount all the effort everyone else put in, but I really do think it was due to [Snover’s] absolute dogged determination, just pure stubbornness.

He said, ‘Bill Gates got it fairly early.’ And even Bill Gates getting it and understanding it and supporting it didn’t help. That’s not how it worked. [Snover] really had to lead them through some — not just people who didn’t get it or didn’t care — but people who were actively working against them. There was firm opposition from the highest levels of the company to make this stop.

Because you got in close to the ground floor with PowerShell, were you able to influence any of its functionality from the outside?

Jones: Oh, absolutely. But it wasn’t really just me. It was all the PowerShell MVPs. The team had this deep recognition that we were their biggest fans — and their biggest critics.

They went out of their way to do some really sneaky stuff to make sure they could get our feedback. Windows Vista moving into Windows 7, there was a lot of secrecy. Microsoft knew it had botched — perceptually if nothing else — the Vista release. They needed Windows 7 to be a win, and they were being really close to the vest about it. For them to show us anything that had anything to do with Windows 7 was verboten at the highest levels of the company. Instead they came up with this idea of the “Windows Vista update,” which was nothing more than an excuse to show us PowerShell version 3 without Windows 7 being in the context.

They wanted to show us workflows. They put us in a room and they not only let us play with it and gave us some labs to run through, but they had cameras running the whole time. They said, ‘Tell us what you think.’

I think nearly every single release of PowerShell from version 2 onward had a readme buried somewhere. They listed the bug numbers and the person who opened it. A ton of those were us: the MVPs and people in the community. We would tell the team, ‘Look, this is what we feel. This is what’s wrong and here’s how you can fix it.’ And they would give you fine-print credit. Even before it went open source, there was probably more community interaction with PowerShell than most Microsoft products.

I came from the perspective of teaching. By the time I was really in with PowerShell, I wasn’t using it in a production environment. I was teaching it to people. My feedback tended to be along the lines of, ‘Look, this is hard for people to grasp. It’s hard to understand. Here’s what you could do to improve that.’ And a lot of that stuff got adopted.

Was there any desire — or an offer — to join Microsoft to work on PowerShell directly?

Jones: If I had ever asked, it probably could have happened. I had had previous dealings with Microsoft, as a contractor, that I really enjoyed.

I applied for a job there — and I did not enjoy how that went down.

My feeling was that I was making a lot more money and having a lot more impact as an independent.

What is your take on PowerShell since it made the switch to an open source project?

Jones: It’s been interesting. PowerShell 6, which was the first cross-platform, open source release, was a big step backward in a lot of ways. Just getting it cross-platform was a huge step. You couldn’t take the core of PowerShell and, at that point, 11 years of add-on development and bring it all with you at once. I think a lot of people looked at it as an interesting artifact.

[In PowerShell 7], they’ve done so much work to make it more functional. There’s so much parity now across macOS, Linux and Windows. I feel the team tripled down and really delivered and did exactly what they said they were going to do.

I think a lot more people take it seriously. PowerShell is now built into the Kali Linux distribution because it’s such a good tool. I think a lot of really hardcore, yet open-minded, Linux and Unix admins look at PowerShell and — once they take the time to understand it — they realize this is what shells structurally should have been.

I think PowerShell has earned its place in a lot of people’s toolboxes and putting it out there as open source was such a huge step.

Do you see PowerShell ever making any inroads with Linux admins?

Jones: I don’t think they’re the target audience. If you’ve got a tool that does the job, and you know how to use it, and you know how to get it done, that’s fine.

We have a lot of home construction here in [Las] Vegas. I see guys putting walls up with a hammer and nails. Am I going to force you to use a nail gun? No. Are you going to be a lot faster? Yes, if you took a little time to learn how to use it. You never see younger guys with the hammer; it’s always the older guys who’ve been doing this for a long, long time.

I feel that PowerShell has already been through this cycle once. We tried to convince everyone that you needed to use PowerShell instead of the GUI, and a lot of admins stuck with the GUI. That’s a fairly career-limiting move right now, and they’re all finding that out. They’re never going to go any further. The people who picked it up, they’re the ones who move ahead.

The very best IT people reach out for whatever tool you put in front of them. They rip it apart and they try to figure out how is this going to make my job better, easier, faster, different, whatever. They use all of them.

You don’t lose points for using PowerShell and Bash. It would be stupid for Linux administrators to fully commit to PowerShell and only PowerShell, because you’re going to run across systems that have this other thing. You need to know them both.

Microsoft has released a lot of administrative tools — you’ve got PowerShell, Office 365 CLI and Azure CLI to name a few. Someone new to IT might wonder where to concentrate their efforts when there are all these options.

Jones: You get a pretty solid command-line tool in the Azure CLI. You get something that’s very purpose-specific. It’s scoped in fairly tightly. It doesn’t have an infinite number of options. It’s a straightforward thing to write tutorials around. You’ve got an entire REST API that you can fire things off at. And if you’re a programmer, that makes a lot more sense to you and you can write your own tools around that.

PowerShell sits kind of in the middle and can be a little bit of both. PowerShell is really good at bringing a bunch of things together. If you’re using the Azure CLI, you’re limited to Azure. You’re not going to use the Azure CLI to do on-prem stuff. PowerShell can do both. Some people don’t have on-prem, they don’t need that. They just have some very simple basic Azure needs. And the CLI is simpler and easier to start with.

Where do you see PowerShell going in the next few years?

Jones: I think you’re going to continue to see a lot of investment both by Microsoft and the open source community. I think the open source people have — except for the super paranoid ones — largely accepted that Microsoft’s purchase of GitHub was not inimical. I think they have accepted that Microsoft is really serious about open source software. I think people are really focusing on making PowerShell a better tool for them, which is really what open source is all about.

I think you’re going to continue to see it become more prevalent on more platforms. I think it will wind up being a high common denominator for hiring managers who understand the value it brings to a business and some of the outcomes it helps achieve. Even AWS has invested heavily in their management layer in PowerShell, because they get it — also because a lot of the former PowerShell team members now work for AWS, including Ken Hansen and Bruce Payette, who invented the language.

I suspect that, in the very long run, it will probably shift away from Microsoft control and become something a little more akin to Mozilla, where there will be some community foundation that, quote unquote, owns PowerShell, where a lot of people contribute to it on an equal basis, as opposed to Microsoft, which is still holding the keys but is very engaged and accepting of community contributions.

I think PowerShell will probably outlive most of its predecessors over the very long haul.

Go to Original Article
Author:

Arcserve enhances portfolio of Sophos-secured backup

Restoring from backups is often the last resort when data is compromised by ransomware, but savvy criminals are also targeting those backups.

Arcserve enhanced its Sophos partnership to provide cybersecurity aimed at safeguarding backups, preventing cybercriminals from taking out organizations’ last line of ransomware defense. The Secured by Sophos line of Arcserve products, originally consisting of on-premises appliances that integrated Arcserve backup and Sophos security, extended its coverage to SaaS and cloud with two new entries: Arcserve Cloud Backup for Office 365 and Arcserve Unified Data Protection (UDP) Cloud Hybrid.

Arcserve UDP Cloud Hybrid Secured by Sophos is an extension to existing Arcserve software and appliances. It replicates data to the cloud, and the integrated Sophos Intercept X Advanced software scans the copies for malware and other security threats. The Sophos software recognizes the difference between encryption performed by normal backup processes and unauthorized encryption from bad actors.

Arcserve Cloud Backup for Office 365 Secured by Sophos is a stand-alone product for protecting and securing Office 365 data. It also uses Sophos Intercept X Advanced endpoint security, and it can do backup and restore for Microsoft Exchange emails, OneDrive and SharePoint.

Both new products are sold on an annual subscription model, with pricing based on storage and compute.

IDC research director Phil Goodwin described what has been an escalating battle between organizations and cybercriminals. Data protection vendors keep improving their products, and organizations keep learning more about backups. This trend allows companies to quickly and reliably restore their data from backups and avoid paying ransoms. Criminals, in turn, learn to target backups.

“Bad guys are increasingly attacking backup sets,” Goodwin said.

Arcserve’s Secured by Sophos products combine security and backup, specifically protecting backup data from cyberthreats. Organizations can realign their security to encompass backup data, but Arcserve’s products provide security out of the box. Goodwin said Acronis is the only other vendor he could think of that has security integrated into backup, while others such as IBM have data protection and security as separate SKUs.

Sophos Intercept X Advanced is now running in the cloud, scanning Office 365 backup data for malware.

From a development standpoint, security and data protection call on different skill sets, but both are necessary for combating ransomware. Goodwin said combining the two makes for a stronger defense system.

Oussama El-Hilali, CTO at Arcserve, said adding Office 365 to the Secured by Sophos line was important because more businesses are adopting the platform than in the past. There was already an upward trend of businesses putting mission-critical data on SharePoint and OneDrive, but the boost in remote work deployments caused by the COVID-19 pandemic accelerated that.

El-Hilali said the pandemic has increased the need for protecting data in clouds and SaaS applications more for SMBs than enterprises, because larger organizations may have large, on-premises storage arrays they can use. The Office 365 product is sold stand-alone because many smaller businesses only need an Office 365 data protection component, and nothing for on premises.

“The [coronavirus] impact is more visible in the SMB market. A small business is probably using a lot of SaaS, and probably doesn’t have a lot of data on-prem,” El-Hilali said.

Unfortunately, Office 365’s native data retention, backup and security features are insufficient in a world where many users are accessing their data from endpoint and mobile devices. Goodwin said there is a strong market need, and third parties such as Arcserve are seizing the opportunity.

“There’s a big opportunity there with Office 365 — it’s one of the greatest areas of vulnerability from the perspective of SaaS apps,” Goodwin said.

Conversations in culture podcasts

“When you hear that someone has autism, you can’t go in with specific stereotypes, you need to just start engaging with them one-on-one, learning about them, learning what their challenges are, what they enjoy doing, how they prefer to interact with someone. It’s a very individualized disorder.”

Meet Kendall and Delaney Foster. They’re the sisters behind Unified Robotics, an inclusive after-school program for students with cognitive disabilities. Delaney started the program in 2015 to create a shared activity between her and her sister Kendall, who has autism spectrum disorder. In this episode, find out how the Foster sisters are raising disability awareness. They prove that when programs and technology include people with disabilities, everyone wins.

Read the full episode transcript

How to Recover Deleted Emails in Microsoft 365

When the CEO realizes they deleted a vital email thread three weeks ago, email recovery suddenly becomes an urgent task. Sure, you can look in the Deleted Items folder in Outlook, but beyond that, how can you recover what has undergone “permanent” deletion? In this article, we review how you can save the day by bringing supposedly unrecoverable email back from the great beyond.

Before we continue, we know that for all Microsoft 365 admins security is a priority. And in the current climate of COVID-19, it’s well documented how hackers are working around the clock to exploit vulnerabilities. As such, we assembled two Microsoft experts to discuss the critical security features in Microsoft 365 you should be using right now in a free webinar on May 27. Don’t miss out on this must-attend event – save your seat now!

Now onto saving your emails!

Deleted Email Recovery in Microsoft and Office 365

Email recovery for Outlook in Exchange Online through Microsoft and Office 365 can be as simple as dragging and dropping the wayward email from the Deleted Items folder to your Inbox. But what do you do when you can’t find the email you want to recover?

First, let’s look at how email recovery is structured in Microsoft 365. There are a few more layers here than you might think! In Microsoft 365, deleted email can be in one of three states: Deleted, Soft-Deleted, or Hard-Deleted. The way you recover email and how long you have to do so depend on the email’s delete status and the applicable retention policy.

Email Recovery in Microsoft 365

Let’s walk through how email gets from one state to another, the default policies, how to recover deleted email in each state, and a few tips along the way.

Items vs. Email

Outlook is all about email yet also has tasks, contacts, calendar events, and other types of information. For example, you can delete calendar entries and may be called on to recover them, just like email. For this reason, the folder for deleted content is called “Deleted Items.” Also, when discussing deletions and recovery, it is common to refer to “items” rather than limiting the discussion to just email.

Policy

Various rules control the retention period for items in the different states of deletion. A policy is an automatically applied action that enforces a rule related to services. Microsoft 365 has hundreds of policies you can tweak to suit your requirements. See Overview of Retention policies for more information.

‘Deleted Items’ Email

When you press the Delete key on an email in Outlook, it’s moved to the Deleted Items folder. That email is now in the “Deleted” state, which simply means it moved to the Deleted Items folder. How long does Outlook retain deleted email? By default – forever! You can recover your deleted mail with just a drag and drop to your Inbox. Done!

If you can’t locate the email in the Deleted Items folder, double-check that you have the Deleted Items folder selected, then scroll to the bottom of the email list. Look for the following message:

Outlook Deleted Items Folder

If you see the above message, your cache settings may be keeping only part of the content in Outlook and the rest in the cloud. The cache helps to keep mailbox sizes lower on your hard drive, which in turn speeds up search and load times. Click on the link to download the missing messages.

But I Didn’t Delete It!

If you find content in the Deleted Items and are sure you did not delete it, you may be right! Administrators can set Microsoft 365 policy to delete old Inbox content automatically.

Mail can ‘disappear’ another way. Some companies enable a personal archive mailbox for users. When enabled, by default, any mail two years or older will “disappear” from your Inbox and the Deleted Items folder. However, there is no need to worry. While apparently missing, the email has simply moved to the Archives Inbox. A personal Archives Inbox shows up as a stand-alone mailbox in Outlook, as shown below.

Stand-alone mailbox in Outlook

As a result, it’s a good idea to search the Archives Inbox, if it is present, when searching for older messages.
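
Administrators can confirm from PowerShell whether a personal archive is in play. A minimal sketch, assuming a connected Exchange Online PowerShell session; the mailbox name is a placeholder:

    # Check whether the user has an archive mailbox and what it is named.
    Get-Mailbox -Identity "megan@contoso.com" | Format-List ArchiveStatus, ArchiveName

    # Enable a personal archive if the user's plan includes one.
    Enable-Mailbox -Identity "megan@contoso.com" -Archive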

Another setting to check is one that deletes email when Outlook is closed. Access this setting in Outlook by clicking “File,” then “Options,” and finally “Advanced” to display this window:

Outlook Advanced Options

If enabled, Outlook empties the Deleted Items folder when closed. The deleted email then moves to the ‘soft-delete’ state, which is covered next. Keep in mind that with this setting, all emails will be permanently deleted after 28 days.

‘Soft-Deleted’ Email

The next stage in the process is Soft-Deleted. Soft-deleted email no longer appears in the Deleted Items folder but is still easily recovered. At a technical level, the mail is deleted locally from Outlook and placed in the Exchange Online folder named Deletions, which is a sub-folder of Recoverable Items. Any content in the Recoverable Items folder in Exchange Online is, by definition, considered soft-deleted.

You have, by default, 14 days to recover soft-deleted mail. The service administrator can change the retention period to a maximum of 30 days. Be aware that a longer retention period can consume some of the storage capacity assigned to each user account, and you could get charged for overages.
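
Service administrators who prefer PowerShell can change that window per mailbox. A minimal sketch, assuming a connected Exchange Online session; the mailbox name is a placeholder:

    # Raise deleted item retention from the 14-day default to the 30-day maximum.
    Set-Mailbox -Identity "megan@contoso.com" -RetainDeletedItemsFor 30

    # Confirm the new setting.
    Get-Mailbox -Identity "megan@contoso.com" | Format-List RetainDeletedItemsFor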

How items become soft-deleted

There are three ways to soft-delete mail or other Outlook items.

  1. Delete an item already in the Deleted Items folder. When you manually delete something that is already in the Deleted Items folder, the item is soft-deleted. Any process, manual or otherwise, that deletes content from this folder results in a soft-delete.
  2. Press Shift + Delete on an email in your Outlook Inbox. This brings up a dialog box asking if you wish to “permanently” delete the email. Clicking Yes skips the Deleted Items folder but only performs a soft-delete. You can still recover the item if you do so within the 14-day retention period.

Soft Deleting Items in Outlook

  3. The final way items can be soft-deleted is by using Outlook policies or rules. By default, there are no policies that automatically remove mail from the Deleted Items folder in Outlook. However, users can create rules that “permanently” delete email, which is actually a soft-delete. If you’re troubleshooting missing email, have the user check for such rules: click Rules on the Home menu and examine any created rules in the Rules Wizard shown below.

Microsoft Outlook Policies and Rules

Note that the caution is a bit misleading as the rule’s action will soft-delete the email, which, as already stated, is not an immediate permanent deletion.

Recovering soft-deleted mail

You can recover soft-deleted mail directly in Outlook. Be sure the Deleted Items folder is selected, then look for “Recover items recently removed from this folder” at the top of the mail column, or the “Recover Deleted Items from Server” action on the Home menu bar.

Recovering soft-deleted mail in Outlook

Clicking on the recover items link opens the Recover Deleted Items window.

Recover Deleted Items, Microsoft Outlook

Click on the items you want to recover or Select All, and click OK.

NOTE: The recovered email returns to your Deleted Items folder. Be sure to move it into your Inbox.

If the email you’re looking for is not listed, it could have moved to the next stage: ‘Hard-Deleted.’

While users can recover soft-deleted email, administrators can also recover it on their behalf using the ‘hard-deleted’ email recovery process described next (which works for both hard and soft deletions). Also, Microsoft has created two PowerShell cmdlets that are very useful for those who would rather script the tasks: Get-RecoverableItems and Restore-RecoverableItems search for and restore soft-deleted email.
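
The following is a minimal sketch of that scripted route, assuming a connected Exchange Online PowerShell session and an account holding the Mailbox Import Export role; the mailbox name, dates and subject are placeholders:

    # Find soft-deleted messages in a user's Recoverable Items folder.
    Get-RecoverableItems -Identity "megan@contoso.com" -FilterItemType IPM.Note `
        -FilterStartTime "05/01/2020" -FilterEndTime "05/31/2020" `
        -SubjectContains "Quarterly budget"

    # Restore the matching messages back into the mailbox.
    Restore-RecoverableItems -Identity "megan@contoso.com" -FilterItemType IPM.Note `
        -SubjectContains "Quarterly budget"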

Hard-Deleted Email

The next stage for deletion is ‘hard delete.’ Technically, items are hard-deleted when they move from the Recoverable Items folder to the Purges folder in Exchange Online. Administrators can still recover items in that folder within the recovery period set by policy, which ranges from 14 days (the default) to 30 days (the maximum). You can extend retention beyond 30 days by placing a legal or litigation hold on the item or mailbox.
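
As a hedged example, a mailbox-wide litigation hold can be placed from Exchange Online PowerShell; this assumes the mailbox is licensed for litigation hold, and the name and duration are placeholders:

    # Preserve deleted and modified items for a year, regardless of retention policy.
    Set-Mailbox -Identity "megan@contoso.com" -LitigationHoldEnabled $true -LitigationHoldDuration 365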

How items become Hard-Deleted

There are two ways content becomes hard-deleted.

  1. By policy, soft-deleted email is moved to the hard-deleted stage when the retention period expires.
  2. Users can hard-delete mail manually by selecting the Purge option in the Recover Deleted Items window shown above. (Again, choosing to ‘permanently delete’ mail with Shift + Del, results in a soft-delete, not a hard-delete.)

Recovering Hard-Deleted Mail

Once email enters the hard-delete stage, users can no longer recover the content. Only service administrators with the proper privileges can initiate recovery, and no administrators have those privileges by default, not even the global admin. The global admin does have the right to assign privileges so that they can give themselves (or others) the necessary rights. Privacy is a concern here since administrators with these privileges can search and export a user’s email.

Microsoft’s online documentation Recover deleted items in a user’s mailbox details the step-by-step instructions for recovering hard-deleted content. The process is a bit messy compared to other administrative tasks. As an overview, the administrator will (a scripted sketch follows this list):

  1. Assign the required permissions.
  2. Search the mailbox for the missing email.
  3. Copy the results to a Discovery mailbox where you can view mail in the Purged folder (optional).
  4. Export the results to a PST file.
  5. Import the PST to Outlook on the user’s system and locate the missing email in the Purged folder.
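
For administrators who would rather script the search-and-copy step, the classic Search-Mailbox cmdlet can approximate it. A minimal sketch, assuming the admin holds the Mailbox Search and Mailbox Import Export roles; the mailbox name, query and target folder are placeholders:

    # Copy items matching the query, including the Recoverable Items folders,
    # into the Discovery Search Mailbox for review and export.
    Search-Mailbox -Identity "megan@contoso.com" `
        -SearchQuery 'subject:"Quarterly budget" AND received:05/01/2020..05/31/2020' `
        -TargetMailbox "Discovery Search Mailbox" `
        -TargetFolder "Megan-Recovery" `
        -SearchDumpster

    # Use -LogOnly -LogLevel Full first to preview the result set without copying.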

Last Chance Recovery

Once hard-deleted items are purged, they are no longer discoverable by any method by users or administrators. You should consider the recovery of such content as unlikely. That said, if the email you are looking for is not recoverable by any of the above methods, you can open a ticket with Microsoft 365 Support. In some circumstances, they may be able to find the email that has been purged but not yet overwritten. They may or may not be willing to look for the email, but it can’t hurt to ask, and it has happened.

What about using Outlook to backup email?

Outlook does allow a user to export email to a PST file. To do this, click “File” in the Outlook main menu, then “Import & Export” as shown below.

Outlook Menu, Import Export

You can specify what you want to export and even protect the file with a password.

While useful from time to time, a backup plan that depends on users manually exporting content to a local file doesn’t scale and isn’t reliable. Consequently, don’t rely on this as a possible backup and recovery solution.

Alternative Strategies

After reading this, you may be thinking, “isn’t there an easier way?” A service like Altaro Office 365 Backup allows you to recover from point-in-time snapshots of an inbox or other Microsoft 365 content. Having a service like this when you get that urgent call to recover a mail from a month ago can be a lifesaver.

Summary

Users can recover most deleted email without administrator intervention. Often, deleted email simply sits in the Deleted Items folder until manually cleared. When that occurs, email enters the ‘soft-deleted’ stage and is easily restored by a user within 14 days. After this period, the item enters the ‘hard-deleted’ state. A service administrator can recover hard-deleted items within the recovery window. After the hard-deleted state, email should be considered unrecoverable. Policies can be applied to extend the retention times of deleted mail in any state. While administrators can go far with the web-based administration tools, the entire recovery process can be scripted with PowerShell to customize and scale larger projects or provide granular discovery. It is always a great idea to use a backup solution designed for Microsoft 365, such as Altaro Office 365 Backup.

Finally, if you haven’t done so already, remember to save your seat on our upcoming must-attend webinar for all Microsoft 365 admins:

Critical Security Features in Office/Microsoft 365 Admins Simply Can’t Ignore

Is Your Office 365 Data Secure?

Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs. 

Start your Free Trial now


Employees empowered to make data-driven decisions spur growth

When employees on the front lines, the ones actually meeting with customers rather than those in back offices and board rooms, are given the tools, training and authority to make data-driven decisions, it makes a huge difference in the success of a given organization.

That’s the finding of a new report from ThoughtSpot, an analytics vendor founded in 2012 and based in Sunnyvale, Calif., and the Harvard Business Review titled, “The New Decision Makers: Equipping Frontline Workers for Success.”

A total of 464 business executives across 16 industry sectors in North America, Europe and Asia were surveyed for the report.

The survey found that only 20% of organizations are giving their front-line employees both the authority and the tools — self-service analytics platforms and training — to make decisions based on analytics. Those organizations, meanwhile, were the most likely among the respondents to have seen more than 10% annual growth in revenue in recent years.

As a result of their objective success — their growth — those enterprises were dubbed Leaders.

Another 43%, however, were deemed Laggards. They were organizations that have so far failed to give their front-line employees the ability to make decisions driven by data, either by not providing them the BI tools and training or simply not giving them the authority, and were seeing that reflected in their bottom line.

Scott Holden

Scott Holden, chief marketing officer at ThoughtSpot, recently discussed the report in detail.

In a Q&A, he delves into the hypothesis that led ThoughtSpot and the Harvard Business Review to conduct the research, as well as many of the key findings.

Among them were that, even among Leaders, a vast majority of organizations (86%) believe they’re not yet doing enough to give front-line employees everything they need in order to make data-driven decisions. Meanwhile, among those that have at least begun giving front-line employees the necessary tools and training, 72% have seen an increase in productivity.

What was the motivation for the survey — what did you see in the market that led you and Harvard Business Review to team up and undertake the project?

Scott Holden: We sponsored this research because we had a hunch that companies that empowered their frontline employees with faster access to data would outperform their [competitors]. That was the primary premise, and we wanted to explore the idea and see what other dynamics surrounded that — what was holding them back, what were the Leaders pursuing and doing better than the Laggards — and that was the impetus for this.

What were the key findings, what did ThoughtSpot and HBR discover?

Holden: We all know that technology plays a huge role in productivity gains and empowering people, but there’s also a big cultural transformation required — a process change — and how do the things you do as leaders impact how people adopt technology, and so that was another big component of this. We wanted to explore both dimensions, with the goal of giving leaders that are trying to transform their companies a guide to how to do something.

When you look at the key findings, there are a few things that stand out. Not surprisingly, but it was good to confirm this, companies want to empower the front lines. Ninety percent of all respondents said, ‘Yes, we want to do this; our success is dependent on being able to give fast access to data to all people.’ That’s not surprising, but it’s good to see the number be so high. But then it gets a little bit more surprising because almost the same percentage, 86% of them, said that they need to do more. They’re basically saying they need to provide better technology and tools to empower those employees. They’re saying, ‘We’re not doing enough,’ and more specifically, only 7% of the people surveyed thought they were actually doing enough. That was our hunch, but the data proved out really strongly to say that there’s certainly a movement afoot here and people want to be doing this and they need to be doing a better job of it.

How do organizations stand to benefit from empowering employees to make data-driven decisions — what can they accomplish that they couldn’t before?

Holden: There were the benefits that companies saw — if you do this, we think this will happen — but there was a nice nuance if you dig into those performance improvements, which is the difference reported based on what the Leaders were doing versus what the Laggards were doing.

The dimensions by which people saw improvements were around productivity, employee satisfaction, employee engagement, customer satisfaction, and then an improvement in quality of products and services. Some specific stats were that if they were to give their employees better access to data and facts, 72% said they would increase their productivity, 69% said they would increase customer and employee engagement and satisfaction, and 67% said they would increase the quality of their products and services.

Basically, everybody thinks that across the business, if we do this we’re going to see big, big improvements in what would be the core levers for any business. If you look at higher employee engagement and higher customer satisfaction, when you empower an employee with more and better access to data, they actually become a happier and more engaged employee, and if they’re the one that’s on the front line talking to your customer, that translates into better service and better customer satisfaction. There’s a nice tie-in to how this actually plays to delivering better services and experiences to your customers in a really relevant way.

What are the obstacles?

Holden: This is where it gets into Leaders versus Laggards. I was really kind of blown away. One of the things that I saw was that — and this is a little counterintuitive — the Laggards were 10 times more likely to say they don’t want to empower the front lines. There’s a good chunk of the Laggards out there — 42% of the Laggards — [that] said they actually don’t think they should empower the front lines. This really gets into what I think underlies this big thing that’s standing in the way of the analytics industry right now, which is that historically analytics and data-driven decision-making was done at the top, sort of ivory tower analytics. If you were a C-level executive you probably had a data analyst or someone who worked for you who gave you access to information, and you were able to use your management dashboard to help you make decisions. For practical reasons, it’s hard to give every employee on the front lines access to data analysts, and there may be some trust issues.

What we’re seeing in this report is that there’s a real lack of trust, and that’s why you’re seeing a lot of Laggards say they don’t think they need it. You see that the Laggards have a backward view of what’s driving success, and that empowering the front lines really is an important thing and they’re missing it.

If an organization wants to empower front-line employees to make data-driven decisions but hasn’t begun the process, how does it get from Point A to Point B?

Holden: New technology can help. If you make it faster and easier, people are more likely to use it. But there are a couple of other key elements here. In the report, there are five key ways that folks are empowering the front lines: leadership; putting data in the hands of folks; governance, which is building the right process and security around the data that you do expose; training; and facilitation, which is a nuance that ties into training, having managers who are bought in because they’re the ones that facilitate the training and make it happen.

Technology companies across the board are so eager to talk about technology, but you can see that, other than data, the other things are about leadership, governance, management and training; it is a full cultural experience to transform your business to be more data-driven. Fifty percent of the Leaders said that culture was a key factor, while only 20% of the Laggards did — and Leaders and Laggards were based on success metrics, objective measures that show whether they're outperforming their industry or not. Building a culture around data-driven decisions is a key factor here that shouldn't be underestimated.

What is the danger for organizations that don't give their front-line employees the tools to make data-driven decisions?

Holden: There’s a huge danger, and this is why the Laggards versus Leaders thing was so stark. If you aren’t buying into being a data-driven company and putting the leadership, the culture, the training, the thinking in place, you are going to fall behind. This report statistically says that you are going to miss out on a big opportunity if you’re not thinking strategically about making this shift. I think it’s a pretty big wake-up call. Data has been a key asset for companies for a while now, but pushing data further out into the front lines, and the success that can have on your business, is a newer concept — it’s not just empowering leaders to make decisions but empowering the marketing manager, the retail associate, the local hospital administrator, the person on the factory floor. Those folks need fast access to data too, and that is an eye-opening discovery.

Editor’s note: This Q&A has been edited for clarity and conciseness.


Update makes Storage Migration Service more cloud-friendly

In days of yore, when Microsoft released a new version of Windows Server, the features in its administrative tools remained fixed until the next major version, which could take three or four years. Today's Microsoft no longer follows that glacial release cadence.

The PowerShell team drops previews every few weeks and plans to deliver a major version annually. The Windows Admin Center developers put out their 16th update in November 2019, counting from the April 2018 general availability release. Among the new features and refinements is more cloud-friendly functionality in one of its tools, the Storage Migration Service.

The Storage Migration Service is a feature in Windows Server 2019 designed to reduce the traditional headaches associated with moving unstructured data — such as Microsoft Word documents, Excel files and videos — to a newer file server either on premises or in the cloud. Some files come with a lot of baggage in the form of Active Directory memberships or specific share properties that can hamstring a manual migration.

Firing up robocopy and hoping everything copies to the new file server without issue typically has not gone well for administrators, who then field complaints from users about missing file or share permissions. And that's just the typical experience when moving from one on-premises file server to another. The technical leap to get all that data and its associated properties into a file server in the cloud normally requires a team of experts to ensure a seamless transition.
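
For context, here is what the manual route looks like. A robocopy command needs several carefully chosen switches just to carry NTFS permissions across, and even then it will not recreate the SMB shares or their share-level permissions on the destination. A minimal sketch, run from an elevated prompt and using hypothetical server names:

# Mirror the share and copy data, attributes, timestamps, NTFS security,
# owner and auditing information (/COPYALL is shorthand for /COPY:DATSOU)
robocopy \\oldserver\Files \\newserver\Files /MIR /COPYALL /R:2 /W:5 /LOG:C:\Temp\migration.log

# Note: robocopy does not copy share definitions or share permissions;
# those would have to be recreated by hand on the destination server.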

That's where version 1910 of the Windows Admin Center steps in. Microsoft developers tweaked the underlying functionality to account for potential configuration mishaps that would botch a file server migration to the cloud, such as insufficient space on the destination server. Windows Admin Center now comes with an option to create an Azure VM that handles the minutiae, such as installation of roles and domain join setup.

This video tutorial by contributor Brien Posey explains how to use the Storage Migration Service to migrate a Windows Server 2008 file server to a newer supported Windows Server version. The transcript of these instructions is below.

With Windows Server 2008 and 2008 R2 having recently reached end of life, it's important to transition away from those servers if you haven't already done so.

In this video, I want to show you how to use the Storage Migration Service to transfer files from a Windows Server 2008 file server over to something newer, such as Windows Server 2019.

With the Windows Admin Center open, go to the Storage Migration Service tab. I've used Server Manager to install the Storage Migration Service and the Storage Migration Service proxy; I went into Add Roles and Features and added those. I've also enabled the necessary firewall rules. Specifically, you need to allow SMB, the netlogon service and WMI (Windows Management Instrumentation).
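
The same prep work can be scripted in PowerShell. A minimal sketch, assuming the orchestrator runs Windows Server 2019 and that these feature and firewall rule group names match your build:

# Install the Storage Migration Service and its proxy
Install-WindowsFeature -Name SMS, SMS-Proxy -IncludeManagementTools

# Enable the firewall rule groups the service depends on
Enable-NetFirewallRule -DisplayGroup 'File and Printer Sharing'   # SMB
Enable-NetFirewallRule -DisplayGroup 'Netlogon Service'
Enable-NetFirewallRule -DisplayGroup 'Windows Management Instrumentation (WMI)'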

There are three steps involved in a storage migration. Step one is to create a job and inventory your servers. Step two is to transfer data from your old servers. Step three is to cut over to the new servers.

Let's start with step one. The first thing we need to do is to create an inventory of our old server. Click on New job, and then I choose my source device. I have a choice between Windows servers and clusters or Linux servers. Since I'm going to transfer data off of Windows Server 2008, I select Windows servers and clusters.

I have to give the job a name. I will call this 2008 and click OK.

Next, I provide a set of credentials and then I have to add a device to inventory. Click Add a device and then we could either enter the device name or find it with an Active Directory search. I’m going to search for Windows servers, which returns five results. The server legacy.poseylab.com is my Windows Server 2008 machine. I’ll select that and click Add.
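
The same Active Directory search can be run from PowerShell, which is handy when there are many legacy servers to inventory. A small sketch using the RSAT ActiveDirectory module; the filter value is an assumption based on this lab:

# Find domain-joined computers still running Windows Server 2008 or 2008 R2
Import-Module ActiveDirectory
Get-ADComputer -Filter 'OperatingSystem -like "Windows Server 2008*"' -Properties OperatingSystem |
    Select-Object Name, DNSHostName, OperatingSystem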

The next thing is to select this machine and start scanning it to begin the inventory process. The scan succeeded and found a share on this machine.

Click Next and enter credentials for the destination server. We’re prompted to specify the destination server. I’m going to select Use an existing server or VM, click the Browse button and search for a Windows server. I’ll use a wildcard character as the server name to search Active Directory.

I've got a machine called FileServer.poseylab.com; that's the server I'm going to use as my new file server. I'll select that and click Add and then Scan, and now we see a list of everything that's going to be transferred.

The C: volume on our old server is going to be mapped to the C: volume on our new server. We can also see which shares are going to be transferred. We've only got one share called Files in the C:\Files path. It's an SMB share with 55.5 MB of data in it. We will click the Include checkbox to select this particular share to be transferred.
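
To double-check what the scan found, the shares on the source machine can also be listed remotely. A brief sketch using WMI, which still works against a Windows Server 2008 box; the server name comes from this lab:

# List the shares defined on the legacy server, including their local paths
Get-WmiObject -Class Win32_Share -ComputerName legacy.poseylab.com |
    Select-Object Name, Path, Description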

Click Next and we can adjust some transfer settings. The first option is to choose a validation method for transmitted files. By default, no validation is used, but being that I’m transferring such a small amount of data, I will enable CRC64 validation. Next, we can set the maximum duration of the file transfer in minutes.

Next, we can choose what happens with users and groups; we have the option of renaming accounts with the same name, reusing accounts with the same name or not transferring users and groups. We can specify the maximum number of retries and the delay between retries in seconds. I’m going to go with the default values on those and click Next.

We validate the source and the destination device by clicking the Validate button, which runs a series of tests to make sure we're ready to do the transfer. The validation tests passed, so we're free to start the transfer. Click Next.

This screen is where we start the transfer. Click on Start transfer to transfer all the data. After the transfer completes, we need to verify our credentials. We have a place to add our credentials for the source device and for the destination device. We will use the stored credentials that we used earlier and click Next.

We have to specify the network adapter on both the source and the destination servers. I’m going to choose the destination network adapter and use DHCP (Dynamic Host Configuration Protocol). I’m going to assign a randomly generated name to the old server after the cutover, so the new server will assume the identity of the old server. Click Next.

We’re prompted once again for the Active Directory credentials. I’m going to use the stored credentials and click Next.

We're taken to the validation screen. The source device's original name is legacy.poseylab.com, and it's going to be renamed to a random name. The destination server's original name was fileserver.poseylab.com, and it's going to be renamed to legacy.poseylab.com, so the destination server will assume the identity of the source server once all of this is done. To validate this, click on the server and then click Validate. The check passed, so I'll go ahead and click Next.

The last step in the process is to perform the cutover. Click on Start cut over to have the new server assume the identity of the old server.
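
Conceptually, the cutover boils down to a coordinated pair of computer renames, which the Storage Migration Service automates along with the network identity changes. A rough sketch of the equivalent manual steps, using this lab's names and a hypothetical throwaway name for the old server:

# For illustration only; the Storage Migration Service handles this for you
$cred = Get-Credential

# Move the old server out of the way under a throwaway name
Rename-Computer -ComputerName legacy.poseylab.com -NewName 'LEGACY-OLD01' -DomainCredential $cred -Force -Restart

# Give the new file server the old server's identity
Rename-Computer -ComputerName fileserver.poseylab.com -NewName 'legacy' -DomainCredential $cred -Force -Restart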

That’s how a migration from Windows Server 2008 to Windows Server 2019 works using the Storage Migration Service.


3 zero-day fixes in heavy April Patch Tuesday release

Just when it seemed things couldn't get worse, the hits keep coming for Windows administrators.

At a time when the coronavirus pandemic is straining resources and stretching administrators' nerves, the next avalanche of security updates landed on April Patch Tuesday. Microsoft delivered fixes for 113 vulnerabilities, including three zero-days with varying levels of severity on both supported and unsupported Windows systems. The total number of vulnerabilities repaired this month was just two shy of the count in March's epic release.

Out of the 113 bugs repaired on April Patch Tuesday, 19 are rated critical. Microsoft products that received fixes include Windows, both Edge browsers (HTML- and Chromium-based), Internet Explorer, ChakraCore, Microsoft Office and Microsoft Office Services and Web Apps, Windows Defender, Visual Studio, Microsoft Dynamics, and Microsoft Apps for Android and Mac systems.

The heightened urgency to patch quickly due to multiple zero-days will test the mettle of administrators, many of whom have been working tirelessly to help users work remotely with little time to prepare.

“That’s a nice recipe for disaster,” said Chris Goettl, director of product management and security at Ivanti, a security and IT management vendor based in South Jordan, Utah.

He noted that all the zero-days affect the Windows 7 and Server 2008/2008 R2 OSes, which all reached end-of-life in January but have patches available for customers that can afford to subscribe to the Extended Security Updates program. Goettl said he noticed a pattern with this crop of Microsoft updates.

Chris Goettl, director of product management and security, Ivanti

“It looks like the [zero-day] exploits are happening, in most of these cases, on the older platforms. So it’s very likely these are targeting Windows 7 and Server 2008 platforms, especially trying to take advantage of people’s inability to patch,” he said.

Three zero-days affect Windows systems

Two bugs (CVE-2020-0938 and CVE-2020-1020) in the Adobe Font Manager Library affect all supported Windows OSes on both the client and server side, leaving unpatched systems vulnerable to remote code execution attacks. A user could trigger the exploit in several ways, including opening a malicious file or examining a document via the File Explorer preview pane.

Windows 10 systems have built-in protections that would limit the attacker to the AppContainer sandbox where they would not be able to do much damage, Goettl noted. 

The other zero-day (CVE-2020-1027) is an elevation-of-privilege vulnerability in the Windows kernel rated important that affects all supported Windows versions. To take advantage of the flaw, the attacker would need local credentials to run a malicious file. The patch changes how the Windows kernel handles objects in memory.
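
After deploying the April updates, administrators can spot-check that they actually landed on a given machine. A minimal sketch; the relevant KB article numbers vary by OS version, so none are hardcoded here:

# List the ten most recently installed updates on the local machine
Get-HotFix |
    Sort-Object -Property InstalledOn -Descending |
    Select-Object -First 10 -Property HotFixID, Description, InstalledOn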

Other noteworthy April Patch Tuesday fixes

Initially reported by Microsoft as another zero-day but revised shortly thereafter, CVE-2020-0968 describes a remote code execution flaw in the Internet Explorer scripting engine. The bug is rated critical for Windows client systems and moderate for Windows Server OSes due to built-in protections. 

The attacker can target a user in a few different ways — through a website with user-contributed ads or content, or via a document specially crafted with the IE scripting engine that uses ActiveX to run malicious code — but the damage is limited to the privilege level of the user of the unpatched system.

“This one is able to be mitigated if the user has less than full admin rights,” Goettl said. “In those cases, [the attacker] wouldn't get full control of the box; they would have to exploit something else to gain full administrative access.”

Hyper-V shops will want to address a remote code execution flaw (CVE-2020-0910) rated critical for Windows 10 and Windows Server 2019 systems. This bug lets an attacker with credentials on a guest OS run code on the Hyper-V host.

CVE-2020-0935 is a publicly disclosed vulnerability in the OneDrive for Windows application rated important that could let an attacker run a malicious application to take control of the targeted system. OneDrive has its own updating system, so customers with machines connected to the Internet should have the fix, but IT workers will need to perform manual updates on systems that have been air-gapped.

Report: Hundreds of thousands of Exchange systems remain vulnerable

Exchange Server is a notoriously complex messaging platform to manage. It’s one of the most important communication tools for just about every company, which means downtime is not an option. When you combine these factors, it’s no surprise that many Exchange Server systems do not get the patching attention they deserve.

Cybersecurity services company Rapid7 highlighted this issue with a recent report that shows more than 350,000 Exchange Server systems were still susceptible to a flaw that Microsoft corrected in February.

CVE-2020-0688 is a remote code execution vulnerability that only requires an attacker to have the credentials of an Exchange user account — not even an administrator — to overtake the Exchange Server system and possibly Active Directory.

Rapid7 claimed its researchers uncovered even more troubling news.

“There are over 31,000 Exchange 2010 servers that have not been updated since 2012. There are nearly 800 Exchange 2010 servers that have never been updated,” Rapid7's Tom Sellers wrote in a blog post.
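
Exchange administrators who want to know where their own servers stand can pull the build level of every server in the organization. A minimal sketch, run from the Exchange Management Shell; note that AdminDisplayVersion reflects the cumulative update level, so confirming a specific security update may take a closer look:

# List each Exchange server and its build, oldest first
Get-ExchangeServer |
    Select-Object Name, Edition, AdminDisplayVersion |
    Sort-Object -Property AdminDisplayVersion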

Many IT workers use a staggered deployment to roll out Microsoft updates in stages as one way to limit issues with a faulty update. Many organizations can spare several Windows client and server systems for testing, but it’s rare to see a similar non-production environment for an Exchange Server system.

“Exchange updates are complex and take a long time,” Goettl said. “And because of the way some companies have customized their email services, Exchange can be very sensitive [to updates] as well. You can’t duplicate your Exchange environment very easily.”

Microsoft offers VPN help in wake of pandemic

With more remote users connected to VPN due to the coronavirus pandemic, rolling out this month's Patch Tuesday updates could slow end users' access to other resources across the network.

Most organizations were caught unprepared by the sudden surge of remote users. With enough time and money, IT could alleviate potential congestion through traffic shaping or upgraded infrastructure to increase network speeds. Other organizations can avoid problems with limited bandwidth over VPN by using a third-party patching offering or Microsoft Intune to route security updates directly from Microsoft to the end user’s machine. But some organizations that use Microsoft Endpoint Configuration Manager — formerly System Center Configuration Manager — do not have that functionality, which limits their options. 

Microsoft engineer Stefan Röll wrote a blog post with a tutorial to help these customers set up a VPN split tunnel configuration. This type of arrangement helps avoid network overload.

“Managing your [d]evices (especially security updates and software installations) is necessary and will become challenging as the majority of your work force will be connected to the corporate network via VPN. Depending on the number of clients even a couple of 100MB security updates will quickly add up to several [gigabytes] or [terabytes] that [need] to be pushed out over your VPN network. Without further consideration you can quickly overload your VPN connection causing other applications to degrade in performance or to completely fail,” Röll wrote. 
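
On a Windows 10 client, the split tunnel itself can be configured with the built-in VpnClient cmdlets. A minimal sketch, assuming a VPN connection named 'Corp VPN' and an internal address range of 10.0.0.0/8; both values are placeholders:

# Stop forcing all traffic through the tunnel
Set-VpnConnection -Name 'Corp VPN' -SplitTunneling $true

# Route only the corporate address space through the VPN interface
Add-VpnConnectionRoute -ConnectionName 'Corp VPN' -DestinationPrefix '10.0.0.0/8'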


What’s the biggest cybersecurity threat in 2020? Experts weigh in

Every day, CISOs must decide which cyberthreats to prioritize in their organizations. When it comes to choosing which threats are the most concerning, the list from which to choose is nearly boundless.

At RSA Conference 2020, speakers discussed several of the most concerning threats this year, from ransomware and election hacking to supply chain attacks and beyond. To pursue the topic of concerning threats, SearchSecurity asked several experts at the conference what they considered to be the biggest cybersecurity threat today.

“It has to be ransomware,” CrowdStrike CTO Mike Sentonas said. “It may not be the most complex attack, but what organizations are facing around the world is a huge increase in e-crime activity, specifically around the use of ransomware. The rise over the last twelve months has been incredible, simply because of the amount of money there is to be made.”

Trend Micro vice president of cybersecurity Greg Young agreed.

“It has to be ransomware, definitely. Quick money. We’ve certainly seen a change of focus where the people who are least able to defend themselves, state and local governments, particularly in some of the poorer areas, budgets are low and the bad guys focus on that,” he said. “The other thing is I think there’s much more technological capability than there used to be. There’s fewer toolkits and fewer flavors of attacks but they’re hitting more people and they’re much more effective, so I think there’s much more efficiency and effectiveness with what the bad guys are doing now.”

Sentonas added that he expects the trend of ransomware to continue.

“We’ve seen different ransomware groups or e-crime groups that are delivering ransomware have campaigns that have generated over $5 million, we’ve seen campaigns that have generated over $10 million. So with so much money to be made, in many ways, I don’t like saying it, but in many ways it’s easy for them to do it. So that’s driving the huge increase and focus on ransomware. I think, certainly for the next 12 to 24 months, this trend will continue. The rise of ransomware is showing no signs it’s going to slow down,” Sentonas explained.

“Easy” might just be the key word here. The biggest threat to cybersecurity, according to BitSight vice president of communications and government affairs Jake Olcott, is that companies “are still struggling with doing the basics” when it comes to cybersecurity hygiene.

“Look at all the major examples — Equifax, Baltimore, the list could go on — where it was not the case of a sophisticated adversary targeting an organization with a zero-day malware that no one had seen before. It might have been an adversary targeting an organization with malware that was just exploiting known vulnerabilities. I think the big challenge a lot of companies have is just doing the basics,” Olcott said.

Lastly, Akamai CTO Patrick Sullivan said that the biggest threat in cybersecurity is the threat to the supply chain, as highlighted at Huawei's panel discussion at RSAC.

“The big trend is people are looking at their supply chain,” he said. “Like, what is the risk to the third parties you’re partnering with, to the code you’re developing with partners, so I think it’s about looking beyond that first circle to the second circle of your supply chain and your business partners.”
