
Managing Mailbox Retention and Archiving Policies in Microsoft 365

Microsoft 365 (formerly Office 365) provides a wide set of options for managing data classification, retention of different types of data, and archiving data. This article will show the options a Microsoft 365 administrator has when setting up retention policies for Exchange, SharePoint, and other Microsoft 365 workloads and how those policies affect users in Outlook. It’ll also cover the option of an Online Archive Mailbox and how to set one up.

There’s also an accompanying video to this article which shows you how to configure a retention policy and retention labels, enable Archive mailboxes, and create a move-to-archive retention tag.


Before we continue, we know that for all Microsoft 365 admins, security is a priority. And in the current climate of COVID-19, it’s well documented that hackers are working around the clock to exploit vulnerabilities. As such, we assembled two Microsoft experts to discuss the critical security features in Microsoft 365 you should be using right now in a free webinar on May 27. Don’t miss out on this must-attend event – save your seat now!

How To Manage Retention Policies in Microsoft 365

There are many reasons to consider labeling data and using retention policies, but before we discuss these, let’s look at how Office 365 manages your data in the default state. For Exchange Online (where mailboxes, and Public Folders if you use them, are stored), each mailbox database has at least four copies, spread across two datacenters. One of these copies is a lagged copy, which means replication to it is delayed, providing the option to recover from a data corruption issue. In short, a disk, server, rack, or even datacenter failure isn’t going to mean that you lose your mailbox data.

Further, the default policy (for a few years now) is that deleted items in Outlook stay in the Deleted Items folder “forever”, until you empty it or the items are moved to an archive mailbox. If an end-user deletes items out of their Deleted Items folder, they’re kept for another 30 days (as long as the mailbox was created in 2017 or later), meaning the user can recover them by opening the Deleted Items folder and clicking the recover deleted items link.


Where to find recoverable items in Outlook

This opens the dialog box where a user can recover one or more items.


Recovering deleted items in Exchange Online

If an administrator deletes an entire mailbox it’s kept in Exchange Online for 30 days and you can recover it by restoring the associated user account.

It’s also important to realize that Microsoft does not back up your data in Microsoft 365. Through native data protection in Exchange Online and SharePoint Online, Microsoft makes sure it will never lose your current data, but if you have deleted an item, document, or mailbox for good, it’s gone. There’s no secret place where Microsoft’s support can get it back from (although it doesn’t hurt to try), hence the popularity of third-party backup solutions such as Altaro Office 365 Backup.

Litigation Hold – the “not so secret” secret

One option I have seen some administrators employ is litigation hold or in-place hold (the latter feature is being retired in the second half of 2020), which keeps all deleted items in a hidden subfolder of the Recoverable Items folder until the hold lapses (which could be never if you make it permanent). Note that you need at least an E3 or Exchange Online Plan 2 license for this feature to be available. It’s designed to be used when a user is under some form of investigation, ensuring that no evidence can be purged by that user; it’s not designed as a “make sure nothing is ever deleted” policy. However, I totally understand the job security it can bring when the CEO is going ballistic because something super important is “gone”.


Litigation hold settings for a mailbox

Retention Policies

If the default settings and options described above don’t satisfy the needs of your business, or regulatory requirements you may have, the next step is to consider retention policies. A few years ago, there were different policy frameworks for the different workloads in Office 365, showing the on-premises heritage of Exchange and SharePoint. Thankfully, we now have a unified service that spans most Office 365 workloads. Retention in this context refers to ensuring that data can’t be deleted until the retention period expires.

There are two flavors here. The first is label policies, which publish labels to your user base, letting users pick a retention setting by assigning individual emails or documents a label (only one label per piece of content). Note that labels can do two things that retention policies can’t: firstly, they can apply from the date the content was labeled, and secondly, you can trigger a disposition / manual review of the SharePoint or OneDrive for Business document when the retention expires.

Labels only apply to objects that you label; they don’t retroactively scan through email or documents at rest. While labels can be part of a bigger data classification story, my recommendation is that anything that relies on users remembering to do something extra to manage data will only work with extensive training, and then only for a small subset of very important data. You can (if you have E5 licensing for the users in question) use label policies to automatically apply labels to sensitive content, based either on a search query you build (particular email subject lines or recipients, or SharePoint document types in particular sites, for instance) or on a set of trainable classifiers for offensive language, resumes, source code, harassment, profanity, and threats. You can also apply a retention label to a SharePoint library, folder, or document set.

As an aside, Exchange Online also has personal labels that are similar to retention labels but created by users themselves instead of being created and published by administrators.

A more holistic flavor, in my opinion, is retention policies. These apply to all items stored in the various repositories and can apply across several different workloads. Retention policies can also both ensure that data is retained for a set period of time AND disposed of after that period expires, which is often a regulatory requirement. A quick note if you’re going to experiment with policies: they’re not applied instantaneously – it can take up to 24 hours, or even seven days, depending on the workload and type of policy – so prepare to be patient.

These policies can apply across Exchange, SharePoint (which means files stored in Microsoft 365 Groups, Teams, and Yammer), OneDrive for Business, and IM conversations in Skype for Business Online / Teams and Groups. Policies can be broad and apply across several workloads, or narrow and apply only to a specific workload or a location within it. An organization-wide policy can apply to the workloads above (except Teams, which needs a separate policy for its content) and you can have up to 10 of these in a tenant. Non-org-wide policies can be applied to specific mailboxes, sites, or groups, or you can use a search query to narrow down the content that the policy applies to. The limits are 10,000 policies in a tenant, each of which can apply to up to 1,000 mailboxes or 100 sites.
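If you’re planning a larger rollout, those limits are worth checking up front. Here’s a small Python sketch that validates a planned set of policies against the documented caps (the helper and its input shape are purely illustrative, not a Microsoft API):

```python
MAX_POLICIES_PER_TENANT = 10_000
MAX_ORG_WIDE_POLICIES = 10
MAX_MAILBOXES_PER_POLICY = 1_000
MAX_SITES_PER_POLICY = 100

def check_policy_limits(policies):
    """Validate a planned set of retention policies against the documented
    tenant limits. `policies` is a list of dicts with `org_wide` (bool) and
    `mailboxes` / `sites` (counts for non-org-wide scopes)."""
    errors = []
    if len(policies) > MAX_POLICIES_PER_TENANT:
        errors.append("too many policies in tenant")
    if sum(p["org_wide"] for p in policies) > MAX_ORG_WIDE_POLICIES:
        errors.append("too many org-wide policies")
    for i, p in enumerate(policies):
        if not p["org_wide"]:
            if p["mailboxes"] > MAX_MAILBOXES_PER_POLICY:
                errors.append(f"policy {i}: more than {MAX_MAILBOXES_PER_POLICY} mailboxes")
            if p["sites"] > MAX_SITES_PER_POLICY:
                errors.append(f"policy {i}: more than {MAX_SITES_PER_POLICY} sites")
    return errors

# A non-org-wide policy scoped to 1,500 mailboxes exceeds the per-policy cap:
print(check_policy_limits([{"org_wide": False, "mailboxes": 1500, "sites": 20}]))
# ['policy 0: more than 1000 mailboxes']
```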

Especially with org-wide policies, be aware that they apply to ALL selected content, so if you set one to retain everything for four years and then delete it, data is going to start disappearing automatically after four years. Note that you can set the “timer” to start when the content is created or when it was last modified; the latter is probably more in line with what people would expect, since otherwise you could have a list that someone updates weekly suddenly disappear because it was created several years ago.
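The effect of that timer choice is easy to see with a few lines of Python (the function and its parameters are hypothetical, purely to illustrate the two start points):

```python
from datetime import date, timedelta

def retention_expiry(created: date, last_modified: date,
                     retention_days: int, start_from: str = "created") -> date:
    """Return the date a retention period ends.

    start_from: "created" or "modified" -- mirrors the choice in a
    Microsoft 365 retention policy of starting the clock at creation
    time or at the last-modified time."""
    start = created if start_from == "created" else last_modified
    return start + timedelta(days=retention_days)

# A list created four years ago but updated as recently as last week:
created = date(2016, 5, 1)
modified = date(2020, 5, 10)
four_years = 4 * 365  # 1460 days

# Starting from creation, the content is already past its retention window...
print(retention_expiry(created, modified, four_years, "created"))   # 2020-04-30
# ...while starting from last modification keeps it until 2024.
print(retention_expiry(created, modified, four_years, "modified"))  # 2024-05-09
```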

To create a retention policy, log in to the Microsoft 365 admin center, expand Admin centers, and click on Compliance. In this portal, click on Policies and then Retention under Data.


Retention policies link in the Compliance portal

Select the Retention tab and click New retention policy.


Retention policies and creating a new one

Give your policy a name and a description, select which data stores it’s going to apply to, and choose whether the policy will retain and then delete data or just delete it after the specified time.


Retention settings in a policy

Outside the scope of this article, but related, are sensitivity labels. Instead of classifying data based on how long it should be kept, these policies classify data based on the security needs of the content. You can then apply policies to control the flow of emails containing this content, or automatically encrypt documents in SharePoint, for instance. You can also combine sensitivity and retention labels in policies.

Conflicts

Since there can be multiple policies applied to the same piece of data and perhaps even retention labels in play there could be a situation where conflicting settings apply. Here’s how these conflicts are resolved.

First, retention wins over deletion, making sure that nothing is deleted that you expected to be retained. Second, the longest retention period wins: if one policy says two years and another says five years, the item is kept for five. Third, explicit wins over implicit, so a policy applied to a specific area such as a SharePoint library takes precedence over an organization-wide general policy. Finally, the shortest deletion period wins: if an administrator has chosen to delete content after a set period of time, it’ll be deleted then, even if another applicable policy requires deletion after a longer period. Here’s a graphic that shows the four rules and their interaction:


Policy conflict resolution rules (courtesy of Microsoft)
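The four rules can also be sketched in code. This is a loose Python illustration of the documented behavior for a single item, not Microsoft’s implementation:

```python
def resolve(policies):
    """Resolve conflicting retention settings for one item.

    Each policy is a dict with:
      kind:  "retain" or "delete"
      days:  retention or deletion period in days
      scope: "explicit" (e.g. one SharePoint library) or "org" (org-wide)

    Returns (retain_days, delete_days): how long the item is protected,
    and when it's disposed of (None if no deletion applies)."""
    # Rule 3: explicit policies take precedence over org-wide ones.
    explicit = [p for p in policies if p["scope"] == "explicit"]
    effective = explicit if explicit else policies

    retains = [p["days"] for p in effective if p["kind"] == "retain"]
    deletes = [p["days"] for p in effective if p["kind"] == "delete"]

    # Rule 2: the longest retention period wins.
    retain_days = max(retains) if retains else 0
    # Rule 4: the shortest deletion period wins...
    delete_days = min(deletes) if deletes else None
    # Rule 1: ...but retention wins over deletion, so nothing is removed
    # while a longer retention period still protects it.
    if delete_days is not None and delete_days < retain_days:
        delete_days = retain_days
    return retain_days, delete_days

# Two org-wide policies: retain 2 years vs retain 5 years -> 5 years wins.
print(resolve([{"kind": "retain", "days": 730, "scope": "org"},
               {"kind": "retain", "days": 1825, "scope": "org"}]))  # (1825, None)
```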

As you can see, building a set of retention policies that really work for your business and don’t unintentionally cause problems is a project for the whole business, working out exactly what’s needed across different workloads, rather than the job of a “click-happy” IT administrator.

Archive Mailbox

It all started with trying to rid the world of PST-stored emails. Back in the day, when hard drives and SANs only provided small amounts of storage, many people learned to “expand” the capacity of their small mailbox quota with local PST files. The problem is that these local files aren’t backed up and aren’t included in regulatory or eDiscovery searches. Office 365 largely solved this problem by providing generous quotas: the Business plans provide 50 GB per mailbox, whereas the Enterprise plans have 100 GB limits.

If you need more mailbox storage, one option is to enable online archiving, which provides another 50 GB mailbox on the Business plans and an unlimited (see below) mailbox on the Enterprise plans. There are some limitations on this “extra” mailbox: it can only be accessed online, and it’s never synchronized to your offline (OST) file in Outlook. When you search for content, you must select “all mailboxes” to see matches in your archive mailbox. ActiveSync and the Outlook client on Android and iOS can’t see the archive mailbox, and users may need to decide manually what to store in which location (unless you’ve set up your policies correctly).

For these reasons, many businesses avoid archive mailboxes altogether, just making sure that all mailbox data is stored in the primary mailbox (after all, 100 GB is quite a lot of email). Other businesses, particularly those with a lot of legacy PST storage, find these mailboxes fantastic and use either manual upload or even drive shipping to Microsoft 365 to convert all those PSTs to online archives, where the content isn’t going to disappear because of a failed hard drive and where eDiscovery can find it.

For those that really need it and are on E3 or E5 licensing, you can also enable auto-expanding archives, which ensure that as you use up space in an online archive mailbox, additional mailboxes are created behind the scenes to provide effectively unlimited archival storage.

To enable archive mailboxes, go to the Security & Compliance Center, click on Information governance, and then the Archive tab.


The Archive tab

Click on a user’s name to enable the archive mailbox for that user.


Archive mailbox settings

Once you have enabled archive mailboxes, you’ll need a policy to make sure that items are moved into them at the cadence you need. Go to the Exchange admin center and click on Compliance management – Retention tags.


Exchange Admin Center – Retention tags

Here you’ll find the Default 2 year move to archive tag, or you can create a new tag by clicking on the + sign.


Exchange Retention tags default policies

Pick Move to Archive as the action, give the tag a name, and select the number of days that have to pass before the move happens.


Creating a custom Move to archive policy

Note that online archive mailboxes have NOTHING to do with the Archive folder that you see in the folder tree in Outlook; that’s just an ordinary folder you can move items into from your inbox for later processing. The Archive folder is available on mobile clients and when you’re offline, and you can swipe in Outlook mobile to automatically store emails in it.

Conclusion

Now you know how and when to apply retention policies and retention tags in Microsoft 365, as well as when online archive mailboxes are appropriate and how to enable them and configure policies to archive items.

Finally, if you haven’t done so already, remember to save your seat on our upcoming must-attend webinar for all Microsoft 365 admins:

Critical Security Features in Office/Microsoft 365 Admins Simply Can’t Ignore



Go to Original Article
Author: Paul Schnackenburg

UnitedHealth Group and Microsoft collaborate to launch ProtectWell™ protocol and app to support return-to-workplace planning and COVID-19 symptom screening

  • ProtectWell™ provides employers a return-to-workplace framework backed by CDC guidelines and the latest clinical science
  • Protocol is supported by the ProtectWell™ smartphone app that screens for COVID-19 symptoms and clears employees for daily work
  • Solution powered by Microsoft technologies to enable scalability, security, privacy and compliance
  • ProtectWell™ will be offered free of charge to employers in the United States

MINNETONKA, Minn., and REDMOND, Wash. (May 15, 2020) – UnitedHealth Group (NYSE: UNH) and Microsoft Corp. (Nasdaq: MSFT) have joined forces to launch ProtectWell™, an innovative return-to-workplace protocol that enables employers to bring employees back to work in a safer environment. ProtectWell™ helps employees determine they are safe to go to work, co-workers know their colleagues have been screened, and employers feel confident that their workplace is ready to do business. ProtectWell™ incorporates Centers for Disease Control and Prevention (CDC) guidelines and the latest clinical research to limit the spread of COVID-19 by screening employees for symptoms and establishing guidelines to support the health and safety of the workforce and workplace.

ProtectWell™ combines UnitedHealth Group’s clinical and data analytics capabilities with Microsoft’s technology leadership to help in the next phases of COVID-19 recovery efforts. The ProtectWell™ app is powered by Microsoft Azure, AI and analytics solutions, and also takes advantage of the Microsoft Healthcare Bot service, which is being used around the world for AI-assisted COVID-19 symptom triaging.

The ProtectWell™ protocol is supported by a smartphone app that allows employers to offer workers a simple screening tool designed for everyday use. The ProtectWell™ app includes an AI-powered health care bot that asks users a series of questions to screen for COVID-19 symptoms or exposure. If risk of infection is indicated, employers can direct their employees to a streamlined COVID-19 testing process that enables closed-loop ordering and reporting of test results directly back to employers. Health care information is managed by UnitedHealth Group and employers in accordance with occupational health laws.

In addition, ProtectWell™ includes guidelines and resources to support a safe work environment, including physical distancing, personal hygiene, sanitation and more. Employers can also choose additional custom content specific to their workforce for a personalized experience.

“As we plan for a safe and careful return to the workplace, employers need clear guidelines to ensure a safe environment and a robust process for employees to screen themselves for COVID-19 symptoms,” said Ken Ehlert, chief scientific officer, UnitedHealth Group. “We are pleased to collaborate with Microsoft to launch ProtectWell™, a simple and effective tool to ensure employers and employees have the information and resources they need to keep themselves, their colleagues and the public safe and healthy.”

Microsoft Executive Vice President, Worldwide Commercial Business, Judson Althoff said: “As businesses begin to reopen, employers will need to monitor and manage their workforce for COVID-19 symptoms to help ensure those at risk of spreading the virus stay home until cleared by medical providers. Microsoft is pleased to join with UnitedHealth Group to launch ProtectWell™, which helps organizations manage the complexity of this undertaking.”

UnitedHealth Group has implemented ProtectWell™ with its own frontline health care workers, is in the process of implementing the tool across its business to enable safe return of team members to the workplace, and is making the platform available to all employers in the United States at no charge. Microsoft intends to deploy ProtectWell™ for its U.S.-based employees.

The ProtectWell™ smartphone app is powered by Microsoft Azure, together with its industry-leading security and compliance offerings, and allows employers to better plan, manage resources, care for their employees, and reallocate resources to help safeguard their workforce, workplace and business continuity. UnitedHealth Group will maintain control over protected health care data and will manage opt-in and consent requirements needed from app users. Microsoft will not have access to identifiable information shared via the ProtectWell™ app. De-identified workforce health trends and analytics information will help employers and policymakers make informed occupational and public health decisions.

ProtectWell™ is the latest of many initiatives announced by UnitedHealth Group to combat COVID-19. Other initiatives to date include:

  • Providing $1.5 billion in direct customer and consumer support through premium credits, cost-sharing waivers and other efforts.
  • Accelerating payments to providers throughout the crisis, with an initial tranche of nearly $2 billion.
  • Waiving cost-sharing for COVID-19 testing and treatment for U.S. members of UnitedHealthcare plans and simplifying access to care by reducing prior-authorization requirements.
  • Pioneering self-administered swab procedures to expand COVID-19 testing, reduce needed personal protective equipment and protect health care workers from unnecessary exposure to COVID-19.
  • Supporting the Mayo Clinic’s groundbreaking research into the therapeutic potential of using plasma from COVID-19 survivors.
  • Deploying 3,000 “light ventilators” to address critical shortages in the nation’s supply of ventilators.
  • Significantly expanding access to telehealth and virtual visits and redeploying 10,000 Optum clinicians to expand telehealth capabilities.
  • Providing a special enrollment period for fully insured customers to allow employees who did not opt in for coverage during the regular enrollment period to secure coverage.
  • Conducting proactive personal outreach to support seniors and the most vulnerable populations among our members.
  • Launching a free nationwide emotional support line to manage the stress and anxiety caused by COVID-19.
  • Investing nearly $75 million to help at-risk populations, support communities and protect the health care workforce.
  • Converting company cafeterias to provide more than 75,000 meals a week for people in need and keep our cafe team at work.

About UnitedHealth Group
UnitedHealth Group (NYSE: UNH) is a diversified health care company dedicated to helping people live healthier lives and helping to make the health system work better for everyone. UnitedHealth Group offers a broad spectrum of products and services through two distinct platforms: UnitedHealthcare, which provides health care coverage and benefits services; and Optum, which provides information and technology-enabled health services. For more information, visit UnitedHealth Group at www.unitedhealthgroup.com or follow @UnitedHealthGrp on Twitter.

About Microsoft
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

UnitedHealth Group Contact:

Eric Hausman, 952-936-3963, [email protected]

Microsoft Contact:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Go to Original Article
Author: Microsoft News Center


ArangoDB 3.6 accelerates performance of multi-model database

By definition, a multi-model database provides multiple database models for different use cases and user needs. Among the popular options is ArangoDB, from the open source database vendor of the same name.

ArangoDB 3.6, released into general availability Jan. 8, brings a series of updates to the multi-model database platform, among them improved performance for queries and overall database operations. Also new from the San Mateo, Calif.-based vendor is the OneShard feature, a way for organizations to get robust data resilience through synchronous replication.

For Kaseware, based in Denver, ArangoDB has been a core element since the company was founded in 2016, enabling the law enforcement software vendor’s case management system.

“I specifically sought out a multi-model database because for me, that simplified things,” said Scott Baugher, the co-founder, president and CTO of Kaseware, and a former FBI special agent. “I had fewer technologies in my stack, which meant fewer things to keep updated and patched.”

Kaseware uses ArangoDB as a document, key/value, and graph database. Baugher noted that the one other database the company uses is ElasticSearch, for its full-text search capabilities. Kaseware uses ElasticSearch because until fairly recently, ArangoDB did not offer full-text search capabilities, he said.

“If I were starting Kaseware over again now, I’d take a very hard look at eliminating ElasticSearch from our stack as well,” Baugher said. “I say that not because ElasticSearch isn’t a great product, but it would allow me to even further simplify my deployment stack.” 

Adding OneShard to ArangoDB 3.6

With OneShard, users gain a new option for database distribution. OneShard is a feature for deployments where the data is small enough to fit on a single node, but fault tolerance still requires the database to replicate data across multiple nodes, said Joerg Schad, head of engineering and machine learning at ArangoDB.

“ArangoDB will basically colocate all data on a single node and hence offer local performance and transactions as queries can be evaluated on a single node,” Schad said. “It will still replicate the data synchronously to achieve fault tolerance.”

Baugher said he’ll be taking a close look at OneShard.

He noted that Kaseware now uses ArangoDB’s “resilient single” database setup, which in his view is similar, but less robust. 

“One main benefit of OneShard seems to be the synchronous replication of the data to the backup or failover databases versus the asynchronous replication used by the active failover configuration,” Baugher said.

Baugher added that OneShard also allows database reads to happen from any database node. This contrasts with active failover, in that reads are limited to the currently active node only. 

“So for read-heavy applications like ours, OneShard should not only offer performance benefits, but also let us make better use of our standby nodes by having them respond to read traffic,” he said.
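
The read-path difference Baugher describes can be sketched with a toy model (illustrative Python, not ArangoDB code): under synchronous replication, every replica holds the latest committed write and can safely serve reads, while under asynchronous replication only the active node is guaranteed current. All class and key names here are hypothetical.

```python
# Toy sketch: synchronous vs. asynchronous replication and where reads can go.

class Replica:
    def __init__(self):
        self.data = {}

class SyncReplicatedStore:
    """Writes are applied to every replica before being acknowledged,
    so a read from ANY replica sees the latest committed value."""
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def write(self, key, value):
        for r in self.replicas:          # replicate before acknowledging
            r.data[key] = value

    def read(self, key, replica_index):  # any node can answer
        return self.replicas[replica_index].data.get(key)

class AsyncReplicatedStore:
    """Writes are acknowledged after hitting the active node only;
    followers catch up later, so only the active node is safe to read."""
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]
        self.active = 0
        self.backlog = []                # writes not yet replicated

    def write(self, key, value):
        self.replicas[self.active].data[key] = value
        self.backlog.append((key, value))   # replicated "eventually"

    def read_active(self, key):
        return self.replicas[self.active].data.get(key)

sync = SyncReplicatedStore()
sync.write("case:42", "open")
# Every replica can answer reads immediately:
print([sync.read("case:42", i) for i in range(3)])  # ['open', 'open', 'open']
```

In the asynchronous model, standby replicas may lag behind until the backlog is applied, which is why reads there must stay on the active node.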

More performance gains in ArangoDB 3.6

The ArangoDB 3.6 multi-model database also provides users with faster query execution thanks to a new feature for subquery optimization. Schad explained that when writing queries, it is a typical pattern to build a complex query from multiple simpler subqueries.

“With the improved subquery optimization, ArangoDB optimizes and processes such queries more efficiently by merging them into one, which especially improves performance for larger data sizes, up to a factor of 28x,” he said.

The new database release also enables parallel execution of queries to further improve performance. Schad said that if a query requires data from multiple nodes, with ArangoDB 3.6 operations can be parallelized to be performed concurrently. The end results, according to Schad, are improvements of 30% to 40% for queries involving data across multiple nodes.
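
The fan-out pattern Schad describes can be illustrated with a small self-contained sketch (not ArangoDB internals): each "node" holds one shard of a collection, the per-shard scans run concurrently, and a coordinator merges the partial results. The shard data and function names are hypothetical.

```python
# Toy sketch: scanning several data shards concurrently instead of one at a time.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-node data: each "node" holds one shard of a collection.
shards = [
    [{"id": 1, "score": 10}, {"id": 2, "score": 7}],
    [{"id": 3, "score": 12}],
    [{"id": 4, "score": 3}, {"id": 5, "score": 9}],
]

def scan_shard(shard, min_score):
    """Stand-in for the per-node part of a query: filter one shard."""
    return [doc for doc in shard if doc["score"] >= min_score]

def parallel_query(min_score):
    # Each shard is scanned concurrently; results are merged at the end,
    # mirroring the coordinator/worker split described above.
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = pool.map(scan_shard, shards, [min_score] * len(shards))
    return sorted((d for part in partials for d in part), key=lambda d: d["id"])

print(parallel_query(8))  # merged docs with score >= 8 from all shards
```

The speedup in a real cluster comes from overlapping network and scan time across nodes, which is what the 30% to 40% figure above refers to.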

Looking forward to the next release of ArangoDB, scalability improvements will be at the top of the agenda, he said.

“For the upcoming 3.7 release, we are already working on improving the scalability even further for larger data sizes and larger clusters,” Schad said.

How to manage Exchange hybrid mail flow rules

An Exchange hybrid deployment generally provides a good experience for the administrator, but it can be found lacking in a few areas, such as transport rules.

Transport rules — also called mail flow rules — identify and take actions on all messages as they move through the transport stack on the Exchange servers. Exchange hybrid mail flow rules can be tricky to set up properly to ensure all email is reviewed, no matter if mailboxes are on premises or in Exchange Online in the cloud.

Transport rules solve many compliance-based problems that arise in a corporate message deployment. They add disclaimers or signatures to messages. They funnel messages that meet specific criteria for approval before they leave your control. They trigger encryption or other protections. It’s important to understand how Exchange hybrid mail flow rules operate when your organization runs a mixed environment.

Mail flow rules and Exchange hybrid setups

The power of transport rules stems from their consistency. For an organization with compliance requirements, transport rules are a reliable way to control all messages that meet defined criteria. Once you develop a transport rule for certain messages, there is some comfort in knowing that a transport rule will evaluate every email. At least, that is the case when your organization is only on premises or only in Office 365.

Things change when your organization moves to a hybrid Exchange configuration. While mail flow rules evaluate every message that passes through the transport stack, that does not mean that on-premises transport rules will continue to evaluate messages sent to or from mailboxes housed in Office 365 and vice versa.

No two organizations are alike, which means there is more than one resolution for working with Exchange hybrid mail flow rules.

Depending on your routing configuration, email may go from an Exchange Online mailbox and out of your environment without an evaluation by the on-premises transport rules. It’s also possible that both the mail flow rules on premises and the other set of mail flow rules in Office 365 will assess every email, which may cause more problems than not having any messages evaluated.

To avoid trouble, you need to consider the use of transport rules both for on-premises and for online mailboxes and understand how the message routing configuration within your hybrid environment will affect how Exchange applies those mail flow rules.

Message routing in Exchange hybrid deployments

A move to an Exchange hybrid deployment requires two sets of transport rules. Your organization needs to decide which mail flow rules will be active in which environment and how the message routing configuration you choose affects those transport rules.

All message traffic that passes through an Exchange deployment will be evaluated by the transport rules in that environment, but the catch is that an Exchange hybrid deployment consists of two different environments, at least where transport rules are concerned. A message sent from an on-premises mailbox to another on-premises mailbox generally won’t pass through the transport stack, and, thus, the mail flow rules, in Exchange Online. The opposite is also true: Messages sent from an online mailbox to another online mailbox in the same tenant will not generally pass through the on-premises transport rules. Copying the mail flow rules from your on-premises Exchange organization into your Exchange Online tenant does not solve this problem by itself and can lead to some messages being handled by the same transport rule twice.

When you configure an Exchange hybrid deployment, you need to decide where your mail exchange (MX) record points. Some organizations choose to have the MX record point to the existing on-premises Exchange servers and then route message traffic to mailboxes in Exchange Online via a send connector. Other organizations choose to have the MX record point to Office 365 and then flow to the on-premises servers.

There are more decisions to be made about the way email leaves your organization as well. By default, an email sent from an Exchange Online mailbox to an external recipient will exit Office 365 directly to the internet without passing through the on-premises Exchange servers. This means that transport rules, which are intended to evaluate email traffic before it leaves your organization, may never have that opportunity.

Exchange hybrid mail flow rules differ for each organization

No two organizations are alike, which means there is more than one resolution for working with Exchange hybrid mail flow rules.

For organizations that want to copy transport rules from on-premises Exchange Server into Exchange Online, you can use PowerShell. The Export-TransportRuleCollection PowerShell cmdlet works on all currently supported versions of on-premises Exchange Server. This cmdlet creates an XML file that you can load into your Exchange Online tenant with another cmdlet called Import-TransportRuleCollection. This is a good first step to ensure all mail flow rules are the same in both environments, but that’s just part of the work.

Transport rules, like all Exchange Server features, have evolved over time. They may not work the same in all supported versions of on-premises Exchange Server and Exchange Online. Simply exporting and importing your transport rules may cause unexpected behavior.

One way to resolve this is to duplicate transport rules in both environments and then add two more transport rules on each side. The first new transport rule checks the message header and tells the transport stack, both on premises and in the cloud, that the message has already been through the transport rules in the other environment. This rule should include a statement to stop processing any further transport rules. The second new transport rule adds a header indicating that the message has already been through the transport rules in one environment. This is a difficult setup to get right and requires a good deal of care to implement properly if you choose to go this route.
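
The check-then-stamp logic behind this pair of rules can be sketched in Python (illustrative only; real transport rules are configured in Exchange, not written as code). The header name here is hypothetical.

```python
# Toy sketch of the header-marking approach: one environment stamps a
# marker header after running its rules; the other environment checks
# for the marker and skips its own rules if it is present.

PROCESSED_HEADER = "X-Contoso-Rules-Processed"   # hypothetical header name

def apply_transport_rules(message, environment, rules):
    """Apply rules at most once per message across both environments.

    Rule 1 (check): if the marker header is present, the other
    environment already evaluated this message -- stop processing.
    Rule 2 (stamp): otherwise run the rules, then add the marker.
    """
    if PROCESSED_HEADER in message["headers"]:
        return message                       # stop processing further rules
    for rule in rules:
        message = rule(message)
    message["headers"][PROCESSED_HEADER] = environment
    return message

def add_disclaimer(message):
    message["body"] += "\n--\nThis email is confidential."
    return message

msg = {"headers": {}, "body": "Quarterly report attached."}
msg = apply_transport_rules(msg, "on-premises", [add_disclaimer])
# Second pass (e.g., in Exchange Online) sees the marker and does nothing:
msg = apply_transport_rules(msg, "online", [add_disclaimer])
print(msg["body"].count("confidential"))  # 1 -- the disclaimer was added once
```

Without the marker check, both environments would append the disclaimer and the message would carry it twice.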

I expect that the fairly new hybrid organization transfer feature of the Hybrid Configuration Wizard will eventually handle the export and import of transport rules, but that won’t solve the routing issues or the issues with running duplicate rules.

Amazon Quantum Ledger Database brings immutable transactions

The Amazon Web Services Quantum Ledger Database is now generally available.

The database provides a cryptographically secured ledger as a managed service. It can be used to store both structured and unstructured data, providing what Amazon refers to as an immutable transaction log.

The new database service was released on Sept. 10, 10 months after AWS introduced it as a preview technology.

The ability to provide a cryptographically and independently verifiable audit trail of immutable data has multiple benefits and use cases, said Gartner vice president and distinguished analyst Avivah Litan.

“This is useful for establishing a system of record and for satisfying various types of compliance requirements, such as regulatory compliance,” Litan said. “Gartner estimates that QLDB and other competitive offerings that will eventually emerge will gain at least 20% of permissioned blockchain market share over the next three years.”

A permissioned blockchain has a central authority in the system to help provide overall governance and control. Litan sees the Quantum Ledger Database as satisfying several key requirements in multi-company projects, which are typically complementary to existing database systems.

Among the requirements is that once data is written to the ledger, the data is immutable and cannot be deleted or updated. Another key requirement that QLDB satisfies is that it provides a cryptographically and independently verifiable audit trail.

“These features are not readily available using traditional legacy technologies and are core components to user interest in adopting blockchain and distributed ledger technology,” Litan said. “In sum, QLDB is optimal for use cases when there is a trusted authority recognized by all participants and centralization is not an issue.”

Diagram of how AWS Quantum Ledger Database works

Centralized ledger vs. de-centralized blockchain

The basic promise of many blockchain-based systems is that they are decentralized, and each party stores a copy of the ledger. For a transaction to get stored in a decentralized and distributed ledger, multiple parties have to come to a consensus. In this way, blockchains achieve trust in a distributed and decentralized way.

“Customers who need a decentralized application can use Amazon Managed Blockchain today,” said Rahul Pathak, general manager of databases, analytics and blockchain at AWS. “However, there are customers who primarily need the immutable and verifiable components of a blockchain to ensure the integrity of their data is maintained.”

For customers who want to maintain control and act as the central trusted entity, just like any database application works today, a decentralized system with multiple entities is not the right fit for their needs, Pathak said.

“Amazon [Quantum Ledger Database] combines the data integrity capabilities of blockchain with the ease and simplicity of a centrally owned datastore, allowing a single entity to act as the central trusted authority,” Pathak said.

While QLDB includes the term “quantum” in its name, it’s not a reference to quantum computing.

“By quantum, we imply indivisible, discrete changes,” Pathak said. “In QLDB, all the transactions are recorded in blocks to a transparent journal where each block represents a discrete state change.”

How the Amazon Quantum Ledger Database works

The immutable nature of QLDB is a core element of the database’s design. Pathak explained that QLDB uses a cryptographic hash function to generate a secure output file of the data’s change history, known as a digest. The digest acts as a proof of the data’s change history, enabling customers to look back and validate the integrity of their data changes.
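
The digest-and-verify idea can be shown with a minimal hash-chained journal in Python (an illustrative sketch in the spirit of the design described above, not AWS code). Each block's hash covers the previous hash, so the latest hash acts as a digest of the entire change history, and tampering with any earlier entry breaks verification.

```python
# Minimal sketch of a hash-chained, append-only journal with a digest.
import hashlib
import json

class Journal:
    def __init__(self):
        self.blocks = []          # each block: (entry, hash over history so far)

    def append(self, entry):
        prev_hash = self.blocks[-1][1] if self.blocks else b""
        payload = prev_hash + json.dumps(entry, sort_keys=True).encode()
        self.blocks.append((entry, hashlib.sha256(payload).digest()))

    def digest(self):
        """Hash of the latest block -- it covers the entire change history."""
        return self.blocks[-1][1]

    def verify(self):
        """Recompute the chain; any tampered entry breaks every later hash."""
        prev = b""
        for entry, h in self.blocks:
            payload = prev + json.dumps(entry, sort_keys=True).encode()
            if hashlib.sha256(payload).digest() != h:
                return False
            prev = h
        return True

j = Journal()
j.append({"txn": 1, "op": "insert", "doc": {"owner": "alice"}})
j.append({"txn": 2, "op": "update", "doc": {"owner": "bob"}})
print(j.verify())                       # True

j.blocks[0] = ({"txn": 1, "op": "insert", "doc": {"owner": "mallory"}},
               j.blocks[0][1])          # tamper with history
print(j.verify())                       # False -- the digest no longer checks out
```

A production ledger adds Merkle trees, signed digests and durable storage on top of this basic chaining, but the validation principle is the same.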

From a usage perspective QLDB supports the PartiQL open standard query language that supports SQL-compatible access to data. Pathak said that customers can build applications with the Amazon QLDB Driver for Java to write code that accesses and manipulates the ledger database.

“This is a Java driver that allows you to create sessions, execute PartiQL commands within the scope of a transaction, and retrieve results,” he said. 

Developed internally at AWS

The Quantum Ledger Database is based on technology that AWS has been using for years, according to Pathak. AWS has been using an internal version of Amazon QLDB to store configuration data for some of its most critical systems, and has benefitted from being able to view an immutable history of changes, he said.

“Over time, our customers have asked us for the same ledger capability, and a way to verify that the integrity of their data is intact,” he said. “So, we built Amazon QLDB to be immutable and cryptographically verifiable.”

Understand Windows Insider Program for Business options

The Windows Insider Program for Business provides features that help IT plan for and deploy GA builds when they arrive.

The Windows Insider Program, which Microsoft introduced in 2014, lets IT try out new features in the upcoming Windows release before Microsoft makes them generally available. Microsoft added the Windows Insider Program for Business in April 2018 to provide organizations with tools to better prepare for upcoming releases.

Windows Insider Program for Business

Microsoft designed the Windows Insider Program for Business specifically for organizations to deploy preview builds from Windows 10 and Windows Server to participating employees for testing before they are GA.

IT pros can register their domains with the service and control settings centrally rather than registering users or configuring machines individually. Individual users can also join the Windows Insider Program for Business on their own, independently of IT’s corporate-wide review.

The preview builds don’t replace the channel releases because IT doesn’t deploy the new builds across its organization. They’re simply earlier Windows 10 builds IT teams can use to prepare their organizations for the updates.

The Windows Insider Program for Business preview build releases make it possible for IT to implement new services and tools more quickly once the GA release is available. The previews also help IT ensure that Microsoft addressed data security and governance issues in advance of the release.

The Windows Insider Program for Business allows administrators, developers, testers and other users to see what effect a new release might have on their devices, applications and infrastructures. Microsoft includes the Feedback Hub for IT pros and users to submit reactions about their experiences, make requests for new features and identify issues such as application compatibility, security and performance problems.

Microsoft also offers the Windows Insider Lab for Enterprise, a test deployment for insiders who Microsoft specially selects to test new, experimental or prerelease enterprise security and privacy features. The lab provides insiders with a virtual test infrastructure that comes complete with typical enterprise technologies such as Windows Information Protection, Windows Defender Application Guard and Microsoft App-V.

Getting started with the insider program

Microsoft recommends organizations sign up for the Windows Insider Program for Business and dedicate at least a few devices to the program. IT pros must register their users with the service and set up the target devices to receive preview builds.

Microsoft also recommends that organizations use Azure Active Directory work accounts when registering with the service, whether an organization registers users individually or as part of a domain account. A domain registration makes it easier for IT to manage the participating devices and track feedback from users across the organization. Users that want to submit feedback on behalf of the organization must have a domain registration, as well.

IT can install and manage preview builds on individual devices or on the infrastructure and deploy the builds across multiple devices in the domain, including virtual machines. Using Group Policies, IT can also enable, disable, defer or pause preview installations and set the branch readiness levels, which determine when the preview builds are installed.

Microsoft’s three preview readiness branches

IT can configure devices so the preview builds install automatically or allow users to choose their own install schedules. With mobile device management tools such as Microsoft Intune, IT can take over the preview readiness branch settings, assigning each user one of three preview deployment branches.

Fast. Devices at the Fast level are the first to receive build and feature updates. This readiness level implies some risk because it is the least stable and some features might not work on certain devices. As a result, IT should only install Fast builds on secondary devices and limit these builds to a select group of users.

Slow. Devices at the Slow level receive updates after Microsoft applies user and organization feedback from the Fast build. These builds are more stable, but users don’t see them as early in the process compared to the Fast builds. The Slow level generally targets a broader set of users.

Release Preview. Devices at the Release Preview level are the last to receive preview builds, but these builds are the most stable. Users still get to see and test features in advance and can provide feedback, but they have a much smaller window between the preview build and the final release.

Is the Windows Insider Program for Business for everyone?

An organization that participates in the Windows Insider Program for Business must be able to commit the necessary resources to effectively take advantage of the program’s features. To meet this standard, organizations must ensure that they can dedicate the necessary hardware and infrastructure resources and choose users who have enough time to properly test the builds.

An organization’s decision to invest in these resources depends on its specific circumstances, but deploying a Windows update is seldom without a few hiccups. With the Windows Insider Program for Business, IT can avoid some of these issues.

ComplyRight data breach affects 662,000, gets lawsuit

A data breach at ComplyRight, a firm that provides HR and tax services to businesses, may have affected 662,000 people, according to a state agency. It has also prompted a lawsuit, which was filed in federal court by a person who was notified that their personal data was breached. The lawsuit seeks class-action status.

The ComplyRight data breach included names, addresses, phone numbers, email addresses and Social Security numbers, some of which came from tax and W-2 forms.

ComplyRight’s services include a range of HR products, such as recruitment and time and attendance tools, as well as an online app for storing essential employee data. This particular attack was directed at its tax-form-preparation website. Hackers routinely go after customer and employee data: the Identity Theft Resource Center’s 2018 midyear report, which lists every known breach so far this year, shows that the compromised data reads like a shopping list of HR-managed data.

Company: No more than 10% of customers affected

The breach occurred between April 20 and May 22, and the company notified affected parties by mail.

ComplyRight, in a posted statement, said “a portion (less than 10%)” of people who have their tax forms prepared on its web platform were affected by a cyberattack, but it did not say how many customers were affected by its breach. The company knows the data was accessed or viewed, but it was unable to determine if the data was downloaded, according to the firm’s statement.

But the state of Wisconsin, which publishes data breach reports, has shed some light on the scale of the impact. It reported the ComplyRight data breach affected 662,000 people — including 12,155 Wisconsin residents. A spokesman for Wisconsin Department of Agriculture, Trade and Consumer Protection said this figure was provided verbally to the state by an attorney for ComplyRight.

Rick Roddis, president of ComplyRight, based in Pompano Beach, Fla., said in an email that the firm won’t be commenting, for now, beyond what it has posted on the site.

Among the steps ComplyRight said it took was the hiring of a third-party security expert who conducted a forensic investigation. The firm is also offering credit-monitoring services to affected parties.

Security expert Nikolai Vargas, who looked at the firm’s statement, said ComplyRight “is doing the bare minimum in terms of transparency and informing their clients of the details of the security incident.”

“In cases of a data breach, it is important to disclose how long the exposure occurred and the scope of the exposure,” said Vargas, who is CTO of Switchfast, an IT consulting and managed service provider based in Chicago. ComplyRight stating that “less than 10%” of individuals were affected “doesn’t really explain how many people were impacted,” he added.

“Technical details are nice to have, but they’re not always necessary and may need to be withheld until protections are put in place,” Vargas said.

Federal suit alleges poor protection

The ComplyRight data breach was first reported by Krebs on Security, which had heard from customers who had received breach notification letters.

Susan Winstead, an Illinois resident, received the notification from ComplyRight on July 17, outlining what happened. She is the plaintiff in the lawsuit filed July 20 in the U.S. District Court for the Northern District of Illinois.

The lawsuit faults ComplyRight for allegedly failing to properly protect its data and to promptly notify affected individuals, and it seeks damages for the improper disclosure of personal information, including the time and effort required to remediate the data breach.

Company faced difficult detective work

Another independent expert who looked at ComplyRight’s notice, Avani Desai, said the company “followed best practice for incident response.”

With a cyberattack, one of the most difficult processes initially is identifying that there was an actual attack and the true extent of it, said Desai, president of Schellman & Company, a security and privacy compliance assessor in Tampa, Fla. It’s important to ask the following questions early: Was there sensitive information that was involved? Which systems were exploited? The firm quickly hired a third-party forensic group, she noted.

“ComplyRight locked down the system prior to announcing the breach, which is important, because when organizations announce too quickly, we see copycat attacks hit the already vulnerable situation,” Desai said.

Mike Sanchez, chief information security officer of United Data Technologies, an IT technology and services firm in Doral, Fla., said the things the firm did right are “they disabled the platform and performed a forensic investigation to understand the cause of the breach, as well as the breadth of the malicious actor’s actions.”

But Sanchez said the firm’s statement, which he described as a “very high-level summary,” lacked many specifics, including the exact flaw that was used to gain access to the data.

The Identity Theft Resource Center reported that as of the first six months of this year, there were 668 breaches exposing nearly 22.5 million records.

Big Switch taps AWS VPCs for hybrid cloud networking

Big Switch Networks has introduced software that lets companies build and manage virtual network infrastructure consistently across Amazon Web Services and the private data center.

The vendor, which provides a software-based switching fabric for open hardware, said this week it would release the hybrid cloud technology in stages. First up is a software release next month for the data center, followed by an application for AWS in the fourth quarter.

The AWS product, called Big Cloud Fabric — Public Cloud, provides the tools for creating and configuring a virtual network to deliver Layer 2, Layer 3 and security services to virtual machines or containers running on the IaaS provider. AWS also offers tools for building the virtual networks, which it calls Virtual Private Clouds (VPCs).

In general, customers use AWS VPCs to support a private cloud computing environment on the service provider’s platform. The benefit is getting more granular control over the virtual network that serves sensitive workloads.

Big Cloud Fabric — Public Cloud lets companies create AWS VPCs and assign security policies for applications running on the virtual networks. The product also provides analytics for troubleshooting problems. While initially available on AWS, Big Switch plans to eventually make Big Cloud Fabric — Public Cloud available on Google Cloud and Microsoft Azure.

Big Switch Networks' cloud-first portfolio

VPCs for the private data center

For the corporate data center, Big Switch plans to add tools to its software-based switching fabric — called Big Cloud Fabric — for creating and managing on-premises VPCs that operate the same way as AWS VPCs, said Prashant Gandhi, the chief product officer for Big Switch, based in Santa Clara, Calif.

Customers could use the on-premises VPCs, which Big Switch calls enterprise VPCs, as the virtual networks supporting computing environments that include Kubernetes and Docker containers, the VMware server virtualization vSphere suite, and the OpenStack cloud computing framework.

“With the set of tools they are announcing, [Big Switch] will be able to populate these VPCs and facilitate a consistent deployment and management of networks across cloud and on premises,” said Will Townsend, an analyst at Moor Insights & Strategy, based in Austin, Texas.

Big Switch already offers a version of its Big Monitoring Fabric (BMF) network packet broker for AWS. In the fourth quarter, Big Switch plans to release a single console, called Multi-Cloud Director, for accessing all BMF and Big Cloud Fabric controllers.

In general, Big Switch supplies software-based networking technology for white box switches. Big Cloud Fabric competes with products from Cisco, Midokura and Pluribus Networks, while BMF rivals include technology from Gigamon, Ixia and Apcon.

Big Switch customers are mostly large enterprises, including communication service providers, government agencies and 20 Fortune 100 companies, according to the vendor.

Harness genomic data to provide patient-centered care

Genomic data provides the foundation for the delivery of personalized medicine, although cost-effective and secure management of this data is challenging. BC Platforms, a Microsoft partner and world leader in genomic data management and analysis solutions, created GeneVision for Precision Medicine, Built on Microsoft Cloud technology. GeneVision is an end-to-end genomic data management and analysis solution empowering physicians with clear, actionable insights, facilitating evidence-based treatment decisions.

We interviewed Simon Kos, Chief Medical Officer and Senior Director of Worldwide Health at Microsoft, to learn more about how digital transformation is enabling the delivery of personalized medicine at scale.

David Turcotte: What led to your transition from a clinical provider to a leader within the healthcare technology industry?
Simon Kos:
It wasn’t intentional. In critical care medicine, having the right information on hand to make patient decisions, and being able to team effectively with other clinicians is essential. I felt that the technology we were using didn’t help, and I saw that as a risk to good quality care. This insight led to an interest, and the hobby eventually became a career as I got more exposure to all the incredible solutions out there that really do improve healthcare.

Given your unique perspective within the healthcare technology industry, how do you see digital transformation progressing in healthcare?

Digitization efforts have been underway for more than thirty years. As an industry, healthcare is moving slower than others. It’s heavily regulated, complex, and there is a large legacy of niche systems. However, the shift is occurring, and it needs to happen. We have a fundamental sustainability issue, with healthcare expenditure climbing around the world, and our model of healthcare needs to change emphasis from treating sick people in hospitals to preventing chronic disease in the community setting. Each day I see new clinical models that can only be achieved by leveraging technology, enabling us to treat patients more effectively at lower cost.

How are you and other healthcare leaders managing the shift from fee-for-service to a value-based care model?

My role in the shift to value-based care is building capability within the Microsoft Partner Network—which is over 12,000 companies in health worldwide—and bringing visibility to those that support value-based care. For healthcare leaders more directly involved in either the provision or reimbursement side, the challenge is more commercial. Delivering the same kind of care won’t be as profitable, but adapting business processes comes with its own set of risks. I think the stories of organizations that have successfully transitioned to value-based care, the processes they use, and the technology they leverage, will be important for those who desire more clarity before progressing with their own journeys.

What role does precision medicine play in delivering value-based care?

Right now, precision medicine seems to be narrowly confined to genetic profiling in oncology to determine which chemotherapy agents to use. That’s important since these drugs are expensive, and with cancer it’s imperative to start on a therapy that will work as soon as possible. However, I think the promise of precision medicine is so much broader than this. In understanding an individual’s risk profile through multi-omic analysis (e.g. genomics), we can finally get ahead of disease before it manifests, empower people with more targeted education, screen more diligently, and when patients do get unwell, intervene more effectively. Shifting some of the care burden to the patient, preventing disease, intervening early, and getting therapy right the first time will drive the return on investment that makes value-based care economically viable.

As genomics continues to become more democratized, how will we continue to see it affect precision medicine?

It’s already scaling out beyond oncology. I expect to see genomics have an increasing impact in areas like autoimmune disease, rare disease, and chronic disease. In doing so, I think precision medicine will cease to be something for which primary care physicians and specialists refer patients on to a clinical geneticist or oncologist; instead, they will integrate it into their own models of care. I also see a role for patients themselves to get more directly involved. As we continue to learn more about the human genome, the value of having your genome sequenced will increase. I foresee a day when knowing your genome is as common as knowing your blood type.

What role can technology play in closing the gap between genomics researchers and providers?

I think technology can federate genomics research. Research collaboration would dramatically increase the data researchers have to work with, which will accelerate breakthroughs. The more we understand about the genome, the more relevant it becomes to all providers. I also think machine learning has a role to play: Microsoft’s Project Hanover aims to take the grunt work out of aggregating research literature. Finally, I think genomics needs to make its way into the electronic medical records that providers use, ideally with automated clinical decision support that helps them use it effectively.

What challenges are healthcare leaders facing when implementing a long-term, scalable genomics strategy?

On the technical side, compute and storage of genomic information are key considerations, and the cloud is quickly becoming the only viable way to address them. Using the cloud, in turn, requires a well-considered security and privacy approach. On the research side, there’s still so much we have to learn about the genome, and as we learn more it will open new avenues of care. Finally, on the business side, we have resourcing and reimbursement. The genomics talent pool today is insufficient for a world where precision medicine is mainstream. These specialized resources are costly, and even with the cost of sequencing coming down, staffing a genomics business is expensive. And then there’s the reality of reimbursement: right now only certain conditions qualify for next-generation sequencing (NGS). So, I think any genomics business needs to start with what will be reimbursed but be ready to expand as the landscape evolves.

How do genomic solutions like BC Platforms’ GeneVision for Precision Medicine have the potential to transform a provider’s approach to patient care?

Providers are busy, and more demands are being placed on them to see more patients, see them faster, and also to personalize their care and deliver excellent outcomes. BC Platforms’ GeneVision surfaces insights from system-level raw data and delivers them to clinicians to help them meet these demands. The clinical reports available through GeneVision enable providers to make critical decisions about therapies and treatment within the context of their existing workflows.

In addition to report generation, GeneVision optimizes the use of stored genomic data: once generated, a patient’s genomic data can be reused repeatedly, merged with clinical data each time that patient enters the healthcare system. GeneVision makes this possible through BC Platforms’ unique architecture, the dynamic storage capabilities of Microsoft Azure cloud technology, and Microsoft Genomics services. Together, these capabilities make genomic solutions like GeneVision a key factor in delivering patient-centered care at scale.

What will it take for genomics to become a part of routine patient care?

The initial barrier was cost. I think we are past that, with NGS dipping below $1,000 and continuing to fall. Research into the genome is the current challenge. Genomics will eventually touch all aspects of medicine, but given the earlier cost constraints we are most advanced in oncology today. A key benefit of GeneVision is that it supports both whole genome sequencing and genotyping, the latter currently being the more cost-effective method to generate and store genomic data. Although the cost of whole genome sequencing is coming down, this flexibility is essential to enabling the rapid proliferation of genomics applications in healthcare. The next challenge will be educating the clinical provider workforce and introducing new models of care that leverage genomics. I think the reimbursement restrictions will melt away organically as it becomes clearly more effective to take a precision approach to patient care.

What future applications of genomics in healthcare are you most excited about?

I’m really excited about the evolution of CRISPR and gene editing. Finding that you have a genetic variant that increases your risk of certain diseases can be helpful of course—it allows you to be aware, to screen, and take preventative steps. The ability to go a step further though and remediate that variant I think is incredibly powerful. At the same time, gene editing opens all sorts of other ethical issues, and I don’t yet think we have a mature approach to considering how we tackle that challenge.


BC Platforms’ GeneVision for Precision Medicine, built on Microsoft Cloud technology, is available now on AppSource. Learn how GeneVision equips physicians with the tools they need to improve and accelerate patient outcomes by trying the demo today.