Databricks bolsters security for data analytics tool

One of the biggest challenges with data management and analytics efforts is security.

Databricks, based in San Francisco, is well aware of the data security challenge, and recently updated its Unified Analytics Platform with enhanced security controls to help organizations minimize their data analytics attack surface and reduce risks. Alongside the security enhancements, new administration and automation capabilities make the platform easier to deploy and use, according to the company.

Organizations are embracing cloud-based analytics for the promise of elastic scalability, supporting more end users, and improving data availability, said Mike Leone, a senior analyst at Enterprise Strategy Group. That said, greater scale, more end users and different cloud environments create myriad challenges, with security being one of them, Leone said.

“Our research shows that security is the top disadvantage or drawback to cloud-based analytics today. This is cited by 40% of organizations,” Leone said. “It’s not only smart of Databricks to focus on security, but it’s warranted.”

He added that Databricks is extending foundational security consistently across cloud environments and making it easier for administrators to manage the platform proactively.

“As organizations turn to the cloud to enable more end users to access more data, they’re finding that security is fundamentally different across cloud providers,” Leone said. “That means it’s more important than ever to ensure security consistency, maintain compliance and provide transparency and control across environments.”

Additionally, Leone said that with its new update, Databricks provides intelligent automation to enable faster ramp-up times and improve productivity across the machine learning lifecycle for all involved personas, including IT, developers, data engineers and data scientists.

Gartner said in its February 2020 Magic Quadrant for Data Science and Machine Learning Platforms that Databricks Unified Analytics Platform has had a relatively low barrier to entry for users with coding backgrounds, but cautioned that “adoption is harder for business analysts and emerging citizen data scientists.”

Bringing Active Directory policies to cloud data management

Data access security is handled differently on-premises compared with how it needs to be handled at scale in the cloud, according to David Meyer, senior vice president of product management at Databricks.

Meyer said the new updates to Databricks enable organizations to more efficiently use their on-premises access control systems, like Microsoft Active Directory, with Databricks in the cloud. A member of an Active Directory group becomes a member of the same policy group with the Databricks platform. Databricks then maps the right policies into the cloud provider as a native cloud identity.
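As a rough illustration of that mapping, the sketch below builds a SCIM 2.0 group body that mirrors an Active Directory group. The group name and member IDs are hypothetical, and the commented-out request shows where a real sync would call the Databricks SCIM endpoint; this is a sketch of the general pattern, not Databricks' actual implementation.

```python
import json

SCIM_GROUP_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:Group"

def scim_group_payload(group_name, member_ids):
    """Build a SCIM 2.0 group body mirroring an AD group's membership."""
    return {
        "schemas": [SCIM_GROUP_SCHEMA],
        "displayName": group_name,
        "members": [{"value": member_id} for member_id in member_ids],
    }

# Hypothetical group and member IDs for illustration.
payload = scim_group_payload("data-engineers", ["100", "101"])
print(json.dumps(payload, indent=2))

# A real sync would POST this to the workspace SCIM endpoint, e.g.:
# requests.post(f"{host}/api/2.0/preview/scim/v2/Groups",
#               headers={"Authorization": f"Bearer {token}"},
#               json=payload)
```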

Databricks uses the open source Apache Spark project as a foundational component and provides more capabilities, said Vinay Wagh, director of product at Databricks.

“The idea is, you, as the user, get into our platform, we know who you are, what you can do and what data you’re allowed to touch,” Wagh said. “Then we combine that with our orchestration around how Spark should scale, based on the code you’ve written, and put that into a simple construct.”

Protecting personally identifiable information

Beyond just securing access to data, there is also a need for many organizations to comply with privacy and regulatory compliance policies to protect personally identifiable information (PII).

“In a lot of cases, what we see is customers ingesting terabytes and petabytes of data into the data lake,” Wagh said. “As part of that ingestion, they remove all of the PII data that they can, which is not necessary for analyzing, by either anonymizing or tokenizing data before it lands in the data lake.”

In some cases, though, there is still PII that can get into a data lake. For those cases, Databricks enables administrators to perform queries to selectively identify potential PII data records.
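A minimal sketch of what such a selective scan might look like, using two illustrative regex patterns (email addresses and U.S. SSN-like strings). A production scan would run far broader detectors across data lake tables; the patterns and the sample record here are assumptions for illustration.

```python
import re

# Illustrative patterns only; real PII detection covers many more types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text):
    """Return a dict mapping PII type to the matches found in a text field."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits

row = "Ticket from jane.doe@example.com, SSN 123-45-6789, status: open"
print(find_pii(row))
```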

Improving automation and data management at scale

Another key set of enhancements in the Databricks platform update covers automation and data management.

Meyer explained that historically, each Databricks customer had essentially one workspace in which it put all its users. That model doesn't let organizations isolate different groups of users, however, or maintain separate settings and environments for them.

To that end, Databricks now enables customers to have multiple workspaces to better manage and provide capabilities to different groups within the same organization. Going a step further, Databricks now also provides automation for the configuration and management of workspaces.

Delta Lake momentum grows

Looking forward, the most active area within Databricks is the company's Delta Lake and data lake efforts.

Delta Lake is an open source project started by Databricks and now hosted at the Linux Foundation. The core goal of the project is to enable an open standard around data lake connectivity.

“Almost every big data platform now has a connector to Delta Lake, and just like Spark is a standard, we’re seeing Delta Lake become a standard and we’re putting a lot of energy into making that happen,” Meyer said.

Other data analytics platforms ranked similarly by Gartner include Alteryx, SAS, Tibco Software, Dataiku and IBM. Databricks’ security features appear to be a differentiator.

How to navigate a ransomware recovery process

If your defenses and backups fail despite your best efforts, your ransomware recovery effort can take one of several paths to restore normalcy to your organization.

Ransomware is bad enough. Don’t rush to bring systems and workloads back online and cause additional problems. The first item on your agenda is to take inventory of what still functions and what needs repairs. This has to be done quickly, but without mistakes. Management will want to know what needs to be done, but you can’t give a report until you have a full understanding. While you don’t need to break down every single server, you will need to have everything categorized. Think Active Directory, file servers, backups, networking infrastructure, email and communication, and production servers to start.

Take stock of the situation

The list of affected systems and VMs won't be comprehensive. Start with the machines that are a priority, and in this case production servers are not. If Active Directory is down, then it's a safe bet most of your production servers — and the IT infrastructure — won't be running correctly even if they weren't directly affected.

To start a ransomware recovery effort, check your backups before anything else. Too many folks have deleted encrypted VMs only to find the malware had also wiped out their backup systems, going from bad to worse. Mistakes happen when you rush.

A somewhat easy path of restoring servers does exist if your backups are intact, current and operational. The restoration process needs to be tested before you delete any VMs. Rather than removing affected machines, try relocating them to lower-tier storage, external storage or even local storage on a host. Your goal is to get the encrypted VMs out of the way to give yourself space to work, then try the restores and get the VMs running before you remove their encrypted counterpart.

It might be time to make difficult choices

If the attack corrupted your backup system or the ransomware recovery effort failed, then someone above your pay grade will have to make some decisions. You will have to have a few difficult conversations, partly because the responsibility for the backups — and their reliability — rested on you. It may not be entirely your fault, for reasons such as not getting proper funding, but that conversation will have to wait. At the moment, it's time to make a decision: pay the ransom, rebuild the systems or file a report.

Reporting requires the involvement of senior management and the company legal team. If you work for a government entity or public company, then you might have very specific guidelines that you must follow for legal reasons. If you work for a private company, then you still have possible legal issues with your customers about what you can and cannot disclose. No matter what you say, it will not be taken well. You want to be honest with your customers, but you also need to be mindful and limit how much data you share publicly.

The other aspect of reporting involves the authorities. Your organization might not even have been the intended target if you were hit by an older ransomware variant. If that's the case, a decryption tool might already exist. It's a long shot, but worth checking before you rebuild from scratch.

While distasteful, paying the ransom is also an option. You need to weigh how much it will cost to rebuild and recover against handing over the ransom. It's not an easy call to make because a payment does not come with any guarantees.

Most companies that pay the ransom typically don’t disclose that they paid or that they were even attacked. I suspect most organizations get their data unlocked, otherwise the ransomware business model would collapse.

The challenge with rebuilding is the effort involved. There are relatively few companies that have people who fully understand how every aspect of their environments work. Many IT infrastructures are the combined result of in-house experts and outside consultants. People install systems and take that knowledge with them when they leave. Their replacements learn how to keep these systems online, but that is very different from installing or building them from scratch. Repairing Active Directory is a challenge, but to rebuild an Active Directory with thousands of users and groups with permissions from documentation — with any luck — is next to impossible unless you have a lot of time and expertise.

Recovering from a ransomware attack is not an easy task, because not every situation is identical. If your defenses and backup recovery fail, the reconstruction effort will not be easy or cheap. You will either have to pay the ransom or spend money in overtime and consultants to rebuild mission-critical systems. Chances are your customers will find out what is happening during this recovery process, so you’ll have to have a communication plan and a single point of contact for the sake of consistency.

Ransomware isn’t something just for the IT department to handle; the decisions and the road to recovery will involve several stakeholders and real costs. Plan ahead and map out your steps to avoid rushing into bad choices that can’t be reversed.

CIOs express hope, concern for proposed interoperability rule

While CIOs applaud the efforts by federal agencies to make healthcare systems more interoperable, they also have significant concerns about patient data security.

The Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare & Medicaid Services proposed rules earlier this year that would further define information blocking, or unreasonably stopping a patient’s information from being shared, as well as outline requirements for healthcare organizations to share data such as using FHIR-based APIs so patients can download healthcare data onto mobile healthcare apps.
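As a rough sketch of the patient-facing flow the proposed rules envision, the snippet below parses a FHIR Patient resource of the kind an app would download over a FHIR-based API. The sample resource follows the standard FHIR Patient JSON shape; the commented-out request and its endpoint are hypothetical.

```python
# Sample resource following the FHIR Patient JSON shape.
sample_patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
}

def display_name(patient):
    """Join the first HumanName entry of a FHIR Patient into one string."""
    name = patient["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

print(display_name(sample_patient))

# A live fetch (hypothetical endpoint) would look like:
# resp = requests.get(f"{fhir_base}/Patient/example",
#                     headers={"Accept": "application/fhir+json"})
# patient = resp.json()
```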

The proposed rules are part of an ongoing interoperability effort mandated by the 21st Century Cures Act, a healthcare bill that provides funding to modernize the U.S. healthcare system. Final versions of the proposed information blocking and interoperability rules are on track to be released in November.

“We all now have to realize we’ve got to play in the sandbox fairly and maybe we can cut some of this medical cost through interoperability,” said Martha Sullivan, CIO at Harrison Memorial Hospital in Cynthiana, Ky.

CIOs’ take on proposed interoperability rule

To Sullivan, interoperability brings the focus back to the patient — a focus she thinks has been lost over the years.

She commended ONC’s efforts to make patient access to health information easier, yet she has concerns about data stored in mobile healthcare apps. Harrison’s system is API-capable, but Sullivan said the organization will not recommend APIs to patients for liability reasons.

Physicians and CIOs at EHR vendor Meditech’s 2019 Physician and CIO Forum in Foxborough, Mass. Helen Waters, Meditech executive vice president, spoke at the event.

“The security concerns me because patient data is really important, and the privacy of that data is critical,” she said.

Harrison may not be the only organization reluctant to promote APIs to patients. A study published in the Journal of the American Medical Association of 12 U.S. health systems that used APIs for at least nine months found “little effort by healthcare systems or health information technology vendors to market this new capability to patients” and went on to say “there are not clear incentives for patients to adopt it.”

Jim Green, CIO at Boone County Hospital in Iowa, said ONC’s efforts with the interoperability rule are well-intentioned but overlook a significant pain point: physician adoption. He said more efforts should be made to create “a product that’s usable for the pace of life that a physician has.”

The product also needs to keep pace with technology, something Green described as being a “constant battle.”

Interoperability is often temporary, he said. When a system gets upgraded or a new version of software is released, it can throw the system’s ability to share data with another system out of whack.

“To say at a point in time, ‘We’re interoperable with such-and-such a product,’ it’s a point in time,” he said.

Interoperability remains “critically important” for healthcare, said Jeannette Currie, CIO of Community Hospitals at Beth Israel Deaconess Medical Center in Boston. But so is patient data security. That’s one of her main concerns with ONC’s efforts and the interoperability rule, something physicians and industry experts also expressed during the comment period for the proposed rules.

“When I look at the fact that a patient can come in and say, ‘I need you to interact with my app,’ and when I look at the HIPAA requirements I’m still beholden to, there are some nuances there that make me really nervous as a CIO,” she said.

Chief transformation officer takes digital one step further

There’s a new player on the block when it comes to the team leading digital efforts within a healthcare organization.

Peter Fleischut, M.D., has spent the last two years leading telemedicine, robotics, robotic process automation and artificial intelligence efforts at New York-Presbyterian as its chief transformation officer, a relatively new title that is beginning to take form right alongside the chief digital officer.

Fleischut works as part of the organization’s innovation team under New York-Presbyterian CIO Daniel Barchi. Formerly the chief innovation officer for New York-Presbyterian, Fleischut described his role as improving care delivery and providing a better digital experience.

“I feel like we’re past the age of innovating. Now it’s really about transforming our care model,” he said.

What is a chief transformation officer?

The chief transformation officer is “larger than a technology or digital role alone,” according to Barchi.

Indeed, Laura Craft, analyst at Gartner, said she’s seeing healthcare organizations use the title more frequently to indicate a wider scope than, say, the chief digital officer.

The chief digital officer, a title that emerged more than five years ago, is often described as taking an organization from analog to digital. The digital officer role is still making inroads in healthcare today. Kaiser Permanente recently named Prat Vemana as its first chief digital officer for the Kaiser Foundation Health Plan and Hospitals. In the newly created role, Vemana is tasked with leading Kaiser Permanente’s digital strategy in collaboration with internal health plan and hospital teams, according to a news release.

A chief transformation officer, however, often focuses not just on digital but also emerging tech, such as AI, to reimagine how an organization does business.

“It has a real imperative to change the way [healthcare] is operating and doing business, and healthcare organizations are struggling with that,” Craft said. 

Barchi, who has been CIO at New York-Presbyterian for four years, said the role of chief transformation officer was developed by the nonprofit academic medical center to “take technology to the next level” and scale some of the digital programs it had started. The organization sought to improve not only back office functions but to advance the way it operates digitally when it comes to the patient experience, from hospital check-in to check-out.

Fleischut was selected for the role due to his background as a clinician and his prior tenure as the organization’s chief innovation officer. He has been in the role for two years and is charged with further developing and scaling New York-Presbyterian’s AI, robotics and telemedicine programs.

The organization, which has four major divisions and comprises 10 hospitals, invested deeply in its telemedicine efforts and built a suite of services about four years ago. In 2016, it completed roughly 1,000 synchronous video visits between providers and patients. Now, the organization expects to complete between 500,000 and 1,000,000 video visits by the end of 2019, Fleischut said during his talk at the recent mHealth & Telehealth World Summit in Boston.

One of the areas where New York-Presbyterian expanded its telemedicine services under Fleischut’s lead was in emergency rooms, offering low-acuity patients the option of seeing a doctor virtually instead of in-person, which shortened patient discharge times from an average baseline of two and a half hours to 31 minutes.

The healthcare organization has also expanded its telemedicine services to kiosks set up in local Walgreens, and has a mobile stroke unit operating out of three ambulances. Stroke victims are treated in the ambulance virtually by an on-call neurologist.  

“At the end of the day with innovation and transformation, it’s all about speed, it’s all about time, and that’s what this is about,” Fleischut said. “How to leverage telemedicine to provide faster, quicker, better care to our patients.”

Transforming care delivery, hospital operations  

Telemedicine is one example of how New York-Presbyterian is transforming the way it interacts with patients. Indeed, that’s one of Fleischut’s main goals — to streamline the patient experience digitally through tools like telemedicine, Barchi said.

“The way you reach patients is using technology to be part of their lives,” Barchi said. “So Pete, in his role, is really important because we wanted someone focused on that patient experience and using things like telemedicine to make the patient journey seamless.” 

But for Fleischut to build a better patient experience, he also has to transform the way the hospital operates digitally, another one of his major goals.

As an academic medical center, Barchi said the organization invests significantly in advanced, innovative technology, including robotics. Barchi said he works with one large budget to fund innovation, information security and electronic medical records.

One hospital operation Fleischut worked to automate using robotics was food delivery. Instead of having hospital employees deliver meals to patients, New York-Presbyterian now uses large robots loaded with food trays that are programmed to deliver patient meals.

Fleischut’s work, Barchi said, will continue to focus on innovative technologies transforming the way New York-Presbyterian operates and delivers care.

“Pete’s skills from being a physician with years of experience, as well as his knowledge of technology, allow him to be truly transformative,” Barchi said.

In his role as chief transformation officer, Fleischut said he considers people and processes the most important part of the transformation journey. Without having the right processes in place for changing care delivery and without provider buy-in, the effort will not be a success, he said.

“Focusing on the people and the process leads to greater adoption of technologies that, frankly, have been beneficial in other industries,” he said.

Data analytics in government efforts lack structure

CAMBRIDGE, Mass. — The U.S. government is adept at collecting massive amounts of data. Efforts to deploy data analytics in government agencies, however, can be weak and disorganized.

At some agencies, officials say there’s a lack of a cohesive system for government analytics and management.

“I recently learned that we have no real concept of data archiving, and data backup and protection,” said Bobby Saxon, CTO at the Centers for Medicare & Medicaid Services (CMS).

“We have archived everything in every place,” Saxon said. “It’s really just wasted data right now.”

Data analytics struggles

Speaking on a panel about data analytics in government at the annual MIT Chief Data Officer and Information Quality (CDOIQ) Symposium at the university’s Tang Center, Saxon spoke on the struggles his agency has with analytics.

CMS, finally moving out of crisis mode after dealing with widely publicized IT problems with its healthcare.gov website, has an “OK” structure for data analytics and management, Saxon said.

While Saxon said he and his colleagues are working to improve the situation, currently the organization tends to rely on outside vendors to deal with difficult and pressing analytics problems.

“In the world of predictive analytics, typically the average vendor or subject expert will ask what are your questions, and go off and try to solve questions for you, and then ask if you have any more questions,” Saxon said.

Left to right: Bobby Saxon, CTO, Centers for Medicare & Medicaid Services; John Eltinge, U.S. Census Bureau; and Mark Krzysko of the Department of Defense at the annual MIT CDOIQ Symposium in Cambridge, Mass.

Outside help costly

Ultimately, while government analytics problems tend to get fixed to some extent, the corrections can take weeks and often are simply too expensive in the long term, Saxon explained.

In addition, employees aren’t learning new data analytics techniques, and can’t immerse themselves in the problems at hand to discover the root causes of what might be going wrong.

Panel moderator Mark Krzysko of the Department of Defense’s Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, noted a similar problem in his agency.

Krzysko said he had honed a personal strategy in his early years with the agency: “Use the tools they’ve given you.”

When a data dilemma arose, often he might see employees making calls to the Air Force or the Army for answers, instead of relying on their own government analytics tools, he said.

The panel, “Data Analytics to Solve Government Problems,” was part of the 12th Annual MIT CDOIQ Symposium, held July 18 to 20.

The panel also included John Eltinge of the United States Census Bureau.

ONC focuses on creating interoperability between health systems

Efforts to create interoperability between health systems and stop data blocking have been going on for some time now. Although there are some isolated examples of interoperability in healthcare, in general, experts agree it still is not widely happening.

In a call with reporters, officials from the Office of the National Coordinator for Health IT (ONC) discussed the federal entity’s role in promoting and creating interoperability between health systems.

Donald Rucker, M.D., national coordinator for health information technology at ONC, said the agency is focused on three interoperability use cases. The first is enabling health information to move from point A to point B, regardless of location or IT system. The second is enterprise accountability. This use case mainly addresses “complaints about ‘I can’t get my data out of a system,'” Rucker said. “That bulk transferability, that sort of fundamental data liquidity and … data in bulk so you can actually do analytics on it and see what’s going on overall.” And the third is competition and modernity. “And that’s open APIs,” Rucker said. “You look at Silicon Valley, you look at modern computing, if you go to any of the modern computer science conferences it’s all about APIs.”

Genevieve Morris, principal deputy national coordinator for health information technology at ONC, explained that, because of the use cases Rucker cited, ONC is tweaking its interoperability roadmap.

“The way that we’re thinking about interoperability right now is basically four targets: technical, trust, financial and workforce,” she said.

The roadmap to interoperability

“The challenge is a lot of these things are far more than just standards,” Rucker said, explaining that business relationships tend to complicate things as well.

Rucker used problem lists — lists of a patient’s ailments — as an example.

“It can accrue over time. [For example, the patient] had a cold in 1955, right? Do they still have a cold? Probably not. So you have to curate it,” Rucker said. “It’s often said, ‘I don’t have a shareable problem list. I don’t have an interoperable problem list’. … We don’t have a business model to keep the problem list up to date and meaningful.”

Rucker’s point is that, when people talk about interoperability between health systems, they’re often talking about many different issues. “They’re often asking for a whole bunch of other stuff as well,” Rucker said. “They’re asking for data curation, and maybe a part of the goal of ACOs and HMOs and value-based payments is to provide incentives for these things, but we’re not there yet.”

Underneath all of this is one question, Rucker said: “Are we going to pay for structure?”

“Is it going to be free text and maybe we throw in natural language processing or machine learning or you just read it?” Rucker said. “We’re trying to be mindful of that, we’re trying to be mindful, if you will, of the physics of information and what is plausible to regulate, what society has to sort out, what payment mechanisms we have to sort out.”

APIs: It’s complicated

Some laud APIs as the key to interoperability between health systems. But Rucker says it’s a bit more complicated.

“Two things to consider: One is the API on the vendor level, right? The technical support for the EMR vendor. The second is what does an open API mean at the provider level? Open API sort of gets thrown around, but potentially they are very different approaches,” he said.

Rucker explained that while an open API from a vendor, for example, is a set of tools enabling access to information, that information actually sits in the databases of the healthcare providers. And this is where the complexities come into play, despite APIs enabling access to information.

“So if I’m a Silicon Valley app developer, I can’t hook up to some large national EMR vendor because they don’t actually have any of the clinical data,” he said. “The data is all sitting in, you know, pick your hospital system, pick your provider. So that’s really the dichotomy.”

ONC’s role in preventing data blocking

From a regulatory point of view, creating boundaries around data blocking can be tough, Rucker said.

“A large part of our work is coming up with language that meets everyone’s needs here and that’s a difficult task,” Rucker said.

He added that ONC can’t simply mandate everyone throw away whatever systems they’re currently using and implement a totally new IT system. “We have to be mindful of what’s out there and what can be done,” Rucker said.

Rucker pointed out, however, that the 21st Century Cures Act, a U.S. law enacted in December 2016, does include a data blocking reporting requirement, meaning that when healthcare organizations experience or come across any instance of data blocking, they must report it to ONC.

“To the extent that information blocking exists, we’re presumably going to see some set of reports from some folks on that,” he said.

Office 365 admin roles give users the power of permissions

When a business moves to the Office 365 platform, its collaborative capabilities can go beyond joint efforts on team projects — it also extends into the IT department by letting users handle some tasks traditionally reserved for administrators.

Office 365 admin roles let IT teams deputize trusted users to perform certain business functions or administrative jobs. While it can be helpful to delegate some administrative work to an end user to reduce help desk tickets, it’s important to limit the number of end users with advanced capabilities to reduce security risks.

Organizations that plan to move to Office 365 should explore the administrative options beforehand. Companies already on the platform should review administrative rights and procedures on a regular basis.

Two levels of administrative permissions

By default, new accounts created in the Office 365 admin center do not have administrative permissions. An Office 365 user account can have two levels of administrative permissions: customized administrator role and global administrator role.

In a customized administrator role, the user account has one or more individual administrator roles. Available Office 365 admin roles include billing administrator, compliance administrator, Dynamics 365 administrator, Exchange administrator, password administrator, Skype for Business administrator, Power BI service administrator, service administrator, SharePoint administrator and user management administrator.

Some Office 365 admin roles provide application-specific permissions, while others provide service-specific permissions. For example, end users granted an Exchange administrator role can manage Exchange Online, while users with the password administrator role can reset passwords, monitor service health and manage service requests.

Customized administrator configurations benefit both large and small organizations. In large organizations, it’s common for separate administrators to manage different services, such as Exchange, Skype for Business and SharePoint. Conversely, small organizations typically have fewer administrators who manage multiple — if not all — systems. In either scenario, if additional help is needed for certain tasks, you can assign appropriate administrative roles to the most qualified users, allowing them to make modifications to the tenancy.

The global administrator role provides complete control over Office 365 services. It’s the only administrator role that can assign users with Office 365 admin roles. The first account created in a new Office 365 tenancy automatically gets the global administrator role. An organization can give the global administrator role to multiple user accounts, but it’s best to restrict this role to as few accounts as possible.
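Role assignments can also be scripted. As a rough sketch, assuming the MSOnline PowerShell module is installed and using a hypothetical user account, an Exchange administrator role could be granted like this:

```powershell
# Connect to the Office 365 tenancy (prompts for global administrator credentials)
Connect-MsolService

# Look up the directory role and add the user as a member
# "Exchange Service Administrator" is the directory name for the Exchange administrator role
$role = Get-MsolRole -RoleName "Exchange Service Administrator"
Add-MsolRoleMember -RoleObjectId $role.ObjectId -RoleMemberEmailAddress "jsmith@contoso.com"
```

Scripting assignments this way also makes it easier to audit who holds which roles later with Get-MsolRoleMember.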

Managing Yammer requires careful planning because it's administered separately in the Yammer admin center. The highest level of administrative permissions in Yammer is the verified admin role. An organization can give this role to all Office 365 global administrators, but it should avoid granting verified admin rights to regular users.

Security and compliance permissions

An organization must also decide how to configure permissions in the Security & Compliance Center. These permissions use the same role-based access control (RBAC) permissions model that on-premises Exchange and Exchange Online use.

The Security & Compliance Center features eight role groups that allow a user to perform administrative tasks related to security and compliance. For example, members of the eDiscovery Manager role group receive case management and compliance search roles that allow the user to create, delete and edit eDiscovery cases. These users also can perform search queries across mailboxes.

Office 365 provides 29 different roles that an organization can add to role groups, and each role holds different security and compliance permissions. This comprehensive range of role groups and available roles means that an organization must determine the most appropriate security and compliance permissions model.

It’s important to understand differences in role groups and plan permissions accordingly. For example, both the Security & Compliance Center and Exchange Online have role groups named organization management, but they are separate entities and serve different permissions purposes.
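Membership in these role groups can be managed from Security & Compliance Center PowerShell as well. A minimal sketch, assuming a remote PowerShell session to the Security & Compliance Center is already connected and the user name is hypothetical:

```powershell
# Add a user to the eDiscovery Manager role group
Add-RoleGroupMember "eDiscovery Manager" -Member "jsmith@contoso.com"

# Confirm the membership
Get-RoleGroupMember "eDiscovery Manager"
```

Because Exchange Online has similarly named role groups, run these commands in a session connected to the Security & Compliance Center endpoint, not the Exchange Online endpoint.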

Multifactor authentication matters

Enabling Azure multifactor authentication adds another layer of protection around Office 365 accounts with administrator access. Administrators provide proof of their identity via a second authentication factor, such as a phone call acknowledgement, text message verification code or phone app notification, each time they log into the Office 365 account.

If the business uses Azure multifactor authentication, it should educate administrators and service desk staff so everyone knows the operational procedures involved with the security service.
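One way to enforce the requirement per account is through the MSOnline module. The following is a sketch, assuming the module is connected and the administrator account name is hypothetical:

```powershell
# Build a strong authentication requirement object
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"

# Require multifactor authentication for an administrator account
Set-MsolUser -UserPrincipalName "admin@contoso.com" -StrongAuthenticationRequirements @($mfa)
```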

Keep tabs on administrator actions

As administrators make changes to the systems and grant or revoke permissions to users and other administrators, you’ll need a way to review these actions.

In the Office 365 Security & Compliance Center, an organization can enable audit logging and search the log for details of administrator activities from the last 90 days. This log tracks a wide range of administrator actions, such as user deletion, password resets, group membership changes and eDiscovery activities.
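The same log can be queried from PowerShell with the Search-UnifiedAuditLog cmdlet. A sketch, assuming audit logging is already enabled in the tenancy:

```powershell
# Pull Azure Active Directory events, such as role changes, from the last 30 days
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
    -RecordType AzureActiveDirectory -ResultSize 1000 |
    Select-Object CreationDate, UserIds, Operations
```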


Arm yourself for battle against an email virus outbreak

The onslaught of ransomware and devious social engineering efforts means it's only a matter of time before your organization is hit with a major email virus outbreak.

Administrators should prepare on-premises Exchange — and themselves — to quickly stem the bleeding when that malware lands in a user’s inbox. And while the techniques to protect on-premises Exchange Server aren’t new, they are important steps to reduce the effects of an attack. Even if the antivirus scanner fails to detect the threat, there are ways to isolate affected mailboxes, slow the proliferation and even stop the spread of a virus. Have procedures, processes and scripts in place to fight off an email virus outbreak before trouble starts.

Study the risk chart

Every antivirus tool is different, so the risk chart in Figure 1 doesn’t include all the steps to take during an email virus outbreak. But it shows what to do within Exchange if the antivirus software or SMTP gateway cannot stop the threat. Armed with this plan, administrators have a clear course of action to help the system weather an attack.

Figure 1: This chart explains what action an administrator should perform based on the impact of the threat to the Exchange Server.

The risk chart also indicates the appropriate response based on the severity and distribution of the threat. For example, a widespread distribution of the destructive Locky ransomware warrants a far greater response than when the Tinba malware hits a single mailbox. Use this chart as a baseline to assemble a threat-response plan.

Clean the mailbox

If an outbreak gets beyond the gateway and desktop virus scanners, use the Exchange Management Shell to run a script that searches for and deletes the offending email from the mailbox. This will limit the damage.

With Exchange 2016, use the Search-Mailbox command with the -DeleteContent switch. Be sure the administrative account has the Mailbox Import Export management role. Here is an example of the syntax:

Search-Mailbox "Bryant, Steve" -SearchQuery 'Get rich now!!!' -DeleteContent

This command looks at the body of all messages in the mailbox for the string "Get rich now!!!" and purges those items from the mailbox. If an outbreak strikes, modify the command to search for specific phrases in the offending email and delete them. Be careful: This command wipes results permanently. Administrators can execute it in a reporting mode as a test before running the purge:

Search-Mailbox "Bryant, Steve" -SearchQuery '"Get rich now!!!"' -EstimateResultOnly

For large mailboxes or multiple mailboxes, the New-MailboxSearch command is an option because Search-Mailbox can only check one mailbox at a time. But there will be some differences in how this method removes data compared to other methods. More details about the New-MailboxSearch command are available here.
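As a sketch of that multi-mailbox approach, the search name and query below are hypothetical, and the search runs as an estimate first so nothing is removed:

```powershell
# Create an estimate-only search; omitting -SourceMailboxes targets all mailboxes
New-MailboxSearch -Name "OutbreakSweep" -SearchQuery '"Get rich now!!!"' -EstimateOnly
Start-MailboxSearch -Identity "OutbreakSweep"

# Review the status and hit count before taking any destructive action
Get-MailboxSearch -Identity "OutbreakSweep" | Format-List Name, Status
```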

Scour email from multiple mailboxes

To search multiple mailboxes, admins can either scan them all or specify mailboxes with an input file. A search through all mailboxes is the easiest way to track down infected messages, but it also could be the slowest way to clean a mailbox, depending on how many mailboxes exist.

An organization with fewer than 1,000 mailboxes could use this command for fast results:

Get-Mailbox -ResultSize Unlimited | Search-Mailbox -SearchQuery '"Get rich now!!!"' -DeleteContent

Use wildcards and filters to scan certain mailboxes. For example, use the following code to scan all users from a specific mailbox database:

Get-Mailbox -Database MBDB01 -ResultSize Unlimited | Search-Mailbox -SearchQuery '"Get rich now!!!"' -DeleteContent

Alternatively, this string will clean all mailboxes — one server at a time:

Get-Mailbox -Server MBSERVER01 -ResultSize Unlimited | Search-Mailbox -SearchQuery '"Get rich now!!!"' -DeleteContent

As with the single search, use the -EstimateResultOnly switch to ensure the script works as intended.

Another way to search specific mailboxes is to use an input file:

$InputFile = Get-Content "C:\affectedusers.txt"

foreach ($line in $InputFile) { Search-Mailbox $line -SearchQuery '"Get rich now!!!"' -DeleteContent }

Isolate the mailbox

If the IT staff cannot clean a mailbox fast enough to contain the virus, then it’s best to isolate that mailbox. Exchange 2016 can quarantine a mailbox if it senses the mailbox has destabilized the database. This function makes the mailbox unavailable. Here is an example of a quarantine setting with a length of 60 minutes:

Enable-MailboxQuarantine "Bryant, Steve" -Duration 01:00:00

The previous command without the -Duration switch keeps the mailbox in quarantine until another command returns the mailbox to service:

Disable-MailboxQuarantine "Bryant, Steve"

With quarantine, the mailbox is offline but cannot be cleaned, because no one, including the administrator, can access it.

To allow mail delivery to the mailbox — but make it inaccessible to users — use the following command to restrict client access. The user cannot connect to the mailbox, but the administrator can clean it with PowerShell.

Set-CASMailbox "Bryant, Steve" -ActiveSyncEnabled $false -ImapEnabled $false -EwsEnabled $false -MAPIEnabled $false -OWAEnabled $false -PopEnabled $false -OWAforDevicesEnabled $false

Use wildcards to isolate multiple mailboxes at a time. To re-enable access, use the same script with $true:

Set-CASMailbox "Bryant, Steve" -ActiveSyncEnabled $true -ImapEnabled $true -EwsEnabled $true -MAPIEnabled $true -OWAEnabled $true -PopEnabled $true -OWAforDevicesEnabled $true

Slow the arrival of mail

If the outbreak continues to affect users and slows the system, adjust the influx of mail to reduce the invasion. Throttle the inbound SMTP connector to alleviate server strain and still permit functions to run.

The first step is to identify the inbound internet connectors. In this example, a separate IP address is bound to each server, and the connector names are consistent, starting with Internet Receive Connector Server, so a script can list and adjust those connectors. The default setting for the TarpitInterval parameter puts the SMTP response on a five-second delay.

Get-ReceiveConnector | Where-Object {$_.Identity -like "*internet*"} | Select-Object Name, MaxInboundConnectionPerSource, TarpitInterval

Identify the inbound internet connectors.


Other settings will regulate email, but start with these. The idea is to ease the arrival of inbound messages and give IT more time to clean and isolate — without crippling connectivity.

This command reduces the number of connections per source from 20 to 5, and increases the tarpit interval from five seconds to 30 seconds:

Get-ReceiveConnector | Where-Object {$_.Identity -like "*internet*"} | Set-ReceiveConnector -MaxInboundConnectionPerSource 5 -TarpitInterval 00:00:30

The command enables inbound mail to flow, but limits how many messages a single internet host can send at one time. Adjust these numbers as needed, but do not forget to put the settings back to defaults when the crisis is over.

If you haven’t created specific receive connectors for internet traffic, use the command below to work with “default” receive connectors. This also slows server-to-server traffic within the environment.

Get-ReceiveConnector | Where-Object {$_.Identity -like "*default*"} | Set-ReceiveConnector -MaxInboundConnectionPerSource 5 -TarpitInterval 00:00:30

Stop mail from the attack source

If the attack is severe or widespread enough, an administrator can stop all inbound internet traffic by disabling the internet connectors. In this example, the environment has specific connectors for inbound internet traffic, which facilitates throttling and mail restrictions.

Get-ReceiveConnector | Where-Object {$_.Identity -like "*internet*"} | Set-ReceiveConnector -Enabled $False

If your Exchange configuration doesn’t have named connectors for internet connectivity, you’ll need to find another way to disable inbound SMTP traffic at the firewall or gateway.

Slow all inbound mail

If the email virus outbreak uses the Exchange system to spread the infection, slow all receive connectors to give the staff more time to clean. This command sets the default receive connectors on all servers to hold back connections from all sources, including server-to-server transport:

Get-ReceiveConnector | Set-ReceiveConnector -MaxInboundConnectionPerSource 5 -TarpitInterval 00:00:30

This will slow mail delivery and allow SMTP queues to grow. Watch the queue drive closely and change the MaxInboundConnectionPerSource and tarpitinterval settings to adjust the speed until mail flow reaches a manageable rate.
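To watch those queues while throttling is in place, the Get-Queue cmdlet gives a quick view of growth. A sketch:

```powershell
# List the transport queues on the local server, largest first
Get-Queue | Sort-Object MessageCount -Descending |
    Select-Object Identity, Status, MessageCount
```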

Stop all inbound mail

In very drastic cases, stop all inbound mail flow to give IT time to clean mailboxes or prepare for a recovery scenario. Use this command to take that step:

Get-ReceiveConnector | Set-ReceiveConnector -Enabled $False

Isolate affected servers

In some situations, a specific site or server could experience an outbreak that’s worse than any other segment within the organization. Use this command to isolate a server, stop its transport service and halt all mail transfers:

Get-Service -Name MSExchangeTransport -ComputerName SERVERA | Stop-Service

After the repairs, restart the service with this command:

Get-Service -Name MSExchangeTransport -ComputerName SERVERA | Start-Service

Prepare for restoration

In some cases, an IT team won't be able to clean up the email virus outbreak completely because of time constraints or the extent of the damage to Exchange data. In these circumstances, the only solution might be to restore data from a backup.

Next Steps

Pinpoint security risks to lock down Exchange

Which email security gateways are the best?

How to impede ransomware
