
ArangoDB 3.6 accelerates performance of multi-model database

By definition, a multi-model database provides multiple database models for different use cases and user needs. Among the popular options is ArangoDB, from the open source database vendor of the same name.

ArangoDB 3.6, released into general availability Jan. 8, brings a series of updates to the multi-model database platform, among them improved performance for queries and overall database operations. The release also introduces OneShard, a feature from the San Mateo, Calif.-based vendor that gives organizations a way to build robust data resilience backed by synchronous replication.

For Kaseware, based in Denver, ArangoDB has been a core element since the company was founded in 2016, enabling the law enforcement software vendor’s case management system.

“I specifically sought out a multi-model database because for me, that simplified things,” said Scott Baugher, the co-founder, president and CTO of Kaseware, and a former FBI special agent. “I had fewer technologies in my stack, which meant fewer things to keep updated and patched.”

Kaseware uses ArangoDB as a document, key/value, and graph database. Baugher noted that the one other database the company uses is ElasticSearch, for its full-text search capabilities. Kaseware uses ElasticSearch because until fairly recently, ArangoDB did not offer full-text search capabilities, he said.

“If I were starting Kaseware over again now, I’d take a very hard look at eliminating ElasticSearch from our stack as well,” Baugher said. “I say that not because ElasticSearch isn’t a great product, but it would allow me to even further simplify my deployment stack.” 

Adding OneShard to ArangoDB 3.6

With OneShard, users will gain a new option for database distribution. OneShard is aimed at users whose data is small enough to fit on a single node but who still need the database to replicate data across multiple nodes for fault tolerance, said Joerg Schad, head of engineering and machine learning at ArangoDB.


“ArangoDB will basically colocate all data on a single node and hence offer local performance and transactions as queries can be evaluated on a single node,” Schad said. “It will still replicate the data synchronously to achieve fault tolerance.”
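To make that behavior concrete, here is a minimal sketch of creating a OneShard database against an ArangoDB 3.6 cluster through its HTTP API. The endpoint, credentials and database name are placeholders; the "sharding": "single" option at database creation is what opts the database into OneShard, while the replication factor keeps synchronous copies for fault tolerance.

```python
# Minimal sketch: creating a OneShard database in ArangoDB 3.6 via the HTTP API.
# Endpoint, credentials and database name are placeholders.
import requests

ARANGO_URL = "http://localhost:8529"   # assumed coordinator endpoint
AUTH = ("root", "")                    # assumed credentials

resp = requests.post(
    f"{ARANGO_URL}/_api/database",
    auth=AUTH,
    json={
        "name": "sales",
        "options": {
            "sharding": "single",      # colocate the database's data on one DB server
            "replicationFactor": 3,    # keep synchronous replicas for fault tolerance
        },
    },
)
resp.raise_for_status()
print(resp.json())  # {"error": false, "code": 201, "result": true} on success
```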

Baugher said he’ll be taking a close look at OneShard.

He noted that Kaseware now uses ArangoDB’s “resilient single” database setup, which in his view is similar, but less robust. 

“One main benefit of OneShard seems to be the synchronous replication of the data to the backup or failover databases versus the asynchronous replication used by the active failover configuration,” Baugher said.

Baugher added that OneShard also allows database reads to happen from any database node, in contrast with active failover, where reads are limited to the currently active node.

“So for read-heavy applications like ours, OneShard should not only offer performance benefits, but also let us make better use of our standby nodes by having them respond to read traffic,” he said.

More performance gains in ArangoDB 3.6

The ArangoDB 3.6 multi-model database also provides users with faster query execution thanks to a new subquery optimization feature. Schad explained that when writing queries, it is a typical pattern to build one complex query out of multiple simpler queries.

“With the improved subquery optimization, ArangoDB optimizes and processes such queries more efficiently by merging them into one which especially improves performance for larger data sizes up to a factor of 28x,” he said.

The new database release also enables parallel execution of queries to further improve performance. Schad said that if a query requires data from multiple nodes, with ArangoDB 3.6 operations can be parallelized to be performed concurrently. The end results, according to Schad, are improvements of 30% to 40% for queries involving data across multiple nodes.
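As an illustration of the query pattern Schad describes, the sketch below submits a nested AQL subquery through ArangoDB's HTTP cursor API; in 3.6 the optimizer can merge such subqueries and, on a cluster, parallelize the work without changes to the query text. The collection names, database and endpoint are hypothetical placeholders.

```python
# Minimal sketch: an AQL query with a nested subquery, run via the HTTP cursor API.
# Collections (orders, lineItems), database and endpoint are hypothetical.
import requests

ARANGO_URL = "http://localhost:8529"
AUTH = ("root", "")

aql = """
FOR o IN orders
  LET items = (
    FOR li IN lineItems
      FILTER li.orderId == o._key
      RETURN li
  )
  RETURN { order: o, itemCount: LENGTH(items) }
"""

resp = requests.post(
    f"{ARANGO_URL}/_db/sales/_api/cursor",
    auth=AUTH,
    json={"query": aql, "batchSize": 100},
)
resp.raise_for_status()
for row in resp.json()["result"]:
    print(row)
```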

Looking ahead to the next release of ArangoDB, Schad said scalability improvements will be at the top of the agenda.

“For the upcoming 3.7 release, we are already working on improving the scalability even further for larger data sizes and larger clusters,” Schad said.


How to manage Exchange hybrid mail flow rules

An Exchange hybrid deployment generally provides a good experience for the administrator, but it can be found lacking in a few areas, such as transport rules.

Transport rules — also called mail flow rules — identify and take actions on all messages as they move through the transport stack on the Exchange servers. Exchange hybrid mail flow rules can be tricky to set up properly to ensure all email is reviewed, whether mailboxes are on premises or in Exchange Online in the cloud.

Transport rules solve many compliance-based problems that arise in a corporate message deployment. They add disclaimers or signatures to messages. They funnel messages that meet specific criteria for approval before they leave your control. They trigger encryption or other protections. It’s important to understand how Exchange hybrid mail flow rules operate when your organization runs a mixed environment.

Mail flow rules and Exchange hybrid setups

The power of transport rules stems from their consistency. For an organization with compliance requirements, transport rules are a reliable way to control all messages that meet defined criteria. Once you develop a transport rule for certain messages, there is some comfort in knowing that a transport rule will evaluate every email. At least, that is the case when your organization is only on premises or only in Office 365.

Things change when your organization moves to a hybrid Exchange configuration. While mail flow rules evaluate every message that passes through the transport stack, that does not mean that on-premises transport rules will continue to evaluate messages sent to or from mailboxes housed in Office 365 and vice versa.


Depending on your routing configuration, email may go from an Exchange Online mailbox and out of your environment without an evaluation by the on-premises transport rules. It’s also possible that both the mail flow rules on premises and the other set of mail flow rules in Office 365 will assess every email, which may cause more problems than not having any messages evaluated.

To avoid trouble, you need to consider the use of transport rules both for on-premises and for online mailboxes and understand how the message routing configuration within your hybrid environment will affect how Exchange applies those mail flow rules.

Message routing in Exchange hybrid deployments

A move to an Exchange hybrid deployment requires two sets of transport rules. Your organization needs to decide which mail flow rules will be active in which environment and how the message routing configuration you choose affects those transport rules.

All message traffic that passes through an Exchange deployment will be evaluated by the transport rules in that environment, but the catch is that an Exchange hybrid deployment consists of two different environments, at least where transport rules are concerned. A message sent from an on-premises mailbox to another on-premises mailbox generally won’t pass through the transport stack, and, thus, the mail flow rules, in Exchange Online. The opposite is also true: Messages sent from an online mailbox to another online mailbox in the same tenant will generally not pass through the on-premises transport rules. Copying the mail flow rules from your on-premises Exchange organization into your Exchange Online tenant does not solve this problem on its own and can lead to some messages being handled by the same transport rule twice.

When you configure an Exchange hybrid deployment, you need to decide where your mail exchange (MX) record points. Some organizations choose to have the MX record point to the existing on-premises Exchange servers and then route message traffic to mailboxes in Exchange Online via a send connector. Other organizations choose to have the MX record point to Office 365 and then flow to the on-premises servers.

There are more decisions to be made about the way email leaves your organization as well. By default, an email sent from an Exchange Online mailbox to an external recipient will exit Office 365 directly to the internet without passing through the on-premises Exchange servers. This means that transport rules, which are intended to evaluate email traffic before it leaves your organization, may never have that opportunity.

Exchange hybrid mail flow rules differ for each organization

No two organizations are alike, which means there is more than one resolution for working with Exchange hybrid mail flow rules.

Organizations that want to copy transport rules from on-premises Exchange Server into Exchange Online can use PowerShell. The Export-TransportRuleCollection cmdlet works on all currently supported versions of on-premises Exchange Server. This cmdlet creates an XML file that you can load into your Exchange Online tenant with another cmdlet called Import-TransportRuleCollection. This is a good first step to ensure all mail flow rules are the same in both environments, but that’s just part of the work.

Transport rules, like all Exchange Server features, have evolved over time. They may not work the same in all supported versions of on-premises Exchange Server and Exchange Online. Simply exporting and importing your transport rules may cause unexpected behavior.

One way to resolve this is to duplicate the transport rules in both environments and add two more transport rules on each side. The first new transport rule checks the message header and tells the transport stack — both on premises and in the cloud — that the message has already been through the transport rules in the other environment. This rule should include a statement to stop processing any further transport rules. A second new transport rule should add a header indicating that the message has already been through the transport rules in one environment. This is a difficult setup to get right and requires a good deal of care to implement properly if you choose to go this route.

I expect that the fairly new hybrid organization transfer feature of the Hybrid Configuration Wizard will eventually handle the export and import of transport rules, but that won’t solve the routing issues or the issues with running duplicate rules.


Amazon Quantum Ledger Database brings immutable transactions

The Amazon Web Services Quantum Ledger Database is now generally available.

The database provides a cryptographically secured ledger as a managed service. It can be used to store both structured and unstructured data, providing what Amazon refers to as an immutable transaction log.

The new database service was released on Sept. 10, 10 months after AWS introduced it as a preview technology.

The ability to provide a cryptographically and independently verifiable audit trail of immutable data has multiple benefits and use cases, said Gartner vice president and distinguished analyst Avivah Litan.

“This is useful for establishing a system of record and for satisfying various types of compliance requirements, such as regulatory compliance,” Litan said. “Gartner estimates that QLDB and other competitive offerings that will eventually emerge will gain at least 20% of permissioned blockchain market share over the next three years.”

A permissioned blockchain has a central authority in the system to help provide overall governance and control. Litan sees the Quantum Ledger Database as satisfying several key requirements in multi-company projects, which are typically complementary to existing database systems.

Among the requirements is that once data is written to the ledger, the data is immutable and cannot be deleted or updated. Another key requirement that QLDB satisfies is that it provides a cryptographically and independently verifiable audit trail.

“These features are not readily available using traditional legacy technologies and are core components to user interest in adopting blockchain and distributed ledger technology,” Litan said. “In sum, QLDB is optimal for use cases when there is a trusted authority recognized by all participants and centralization is not an issue.”

Diagram: How the AWS Quantum Ledger Database works

Centralized ledger vs. decentralized blockchain

The basic promise of many blockchain-based systems is that they are decentralized, and each party stores a copy of the ledger. For a transaction to get stored in a decentralized and distributed ledger, multiple parties have to come to a consensus. In this way, blockchains achieve trust in a distributed and decentralized way.

“Customers who need a decentralized application can use Amazon Managed Blockchain today,” said Rahul Pathak, general manager of databases, analytics and blockchain at AWS. “However, there are customers who primarily need the immutable and verifiable components of a blockchain to ensure the integrity of their data is maintained.”


For customers who want to maintain control and act as the central trusted entity, just like any database application works today, a decentralized system with multiple entities is not the right fit for their needs, Pathak said.

“Amazon [Quantum Ledger Database] combines the data integrity capabilities of blockchain with the ease and simplicity of a centrally owned datastore, allowing a single entity to act as the central trusted authority,” Pathak said.

While QLDB includes the term “quantum” in its name, it’s not a reference to quantum computing.

“By quantum, we imply indivisible, discrete changes,” Pathak said. “In QLDB, all the transactions are recorded in blocks to a transparent journal where each block represents a discrete state change.”

How the Amazon Quantum Ledger Database works

The immutable nature of QLDB is a core element of the database’s design. Pathak explained that QLDB uses a cryptographic hash function to generate a secure output file of the data’s change history, known as a digest. The digest acts as a proof of the data’s change history, enabling customers to look back and validate the integrity of their data changes.
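Although AWS positions the Java driver as the primary programming interface, the ledger and its digest can also be managed through other AWS SDKs. The following is a minimal sketch using boto3 for Python; the ledger name, region and permissions mode are placeholder choices, not recommendations.

```python
# Minimal sketch: creating a QLDB ledger and requesting a digest with boto3.
# Ledger name and region are placeholders; ALLOW_ALL is the simplest
# permissions mode and not a production recommendation.
import boto3

qldb = boto3.client("qldb", region_name="us-east-1")

qldb.create_ledger(
    Name="vehicle-registration",
    PermissionsMode="ALLOW_ALL",
    DeletionProtection=True,
)

# Once the ledger reaches the ACTIVE state, the digest is the cryptographic
# proof of the journal's change history that Pathak describes.
digest = qldb.get_digest(Name="vehicle-registration")
print(digest["Digest"], digest["DigestTipAddress"])
```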

From a usage perspective, QLDB supports PartiQL, an open standard query language that provides SQL-compatible access to data. Pathak said that customers can build applications with the Amazon QLDB Driver for Java to write code that accesses and manipulates the ledger database.

“This is a Java driver that allows you to create sessions, execute PartiQL commands within the scope of a transaction, and retrieve results,” he said. 
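For the data path itself, AWS also ships drivers for other languages. Assuming the Python driver (pyqldb) is available, a roughly equivalent sketch looks like the following, with a hypothetical table and document rather than anything from the article.

```python
# Minimal sketch using the Python QLDB driver (pyqldb), assuming it is installed
# and the ledger above is ACTIVE. The table and document are hypothetical.
# execute_lambda wraps each callable in a QLDB transaction and retries on conflicts.
from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="vehicle-registration")

# Create a table, insert a document, then read it back with PartiQL statements.
driver.execute_lambda(lambda txn: txn.execute_statement("CREATE TABLE Person"))
driver.execute_lambda(
    lambda txn: txn.execute_statement(
        "INSERT INTO Person ?", {"name": "Ada", "licence": "X42"}
    )
)
rows = driver.execute_lambda(
    lambda txn: list(
        txn.execute_statement("SELECT * FROM Person WHERE name = ?", "Ada")
    )
)
print(rows)
```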

Developed internally at AWS

The Quantum Ledger Database is based on technology that AWS has been using for years, according to Pathak. AWS has been using an internal version of Amazon QLDB to store configuration data for some of its most critical systems, and has benefitted from being able to view an immutable history of changes, he said.

“Over time, our customers have asked us for the same ledger capability, and a way to verify that the integrity of their data is intact,” he said. “So, we built Amazon QLDB to be immutable and cryptographically verifiable.”


Understand Windows Insider Program for Business options

The Windows Insider Program for Business provides features that help IT plan for and deploy GA builds when they arrive.

The Windows Insider Program, which Microsoft introduced in 2014, lets IT try out new features in the upcoming Windows release before Microsoft makes them generally available. Microsoft added the Windows Insider Program for Business in April 2018 to provide organizations with tools to better prepare for upcoming releases.

Windows Insider Program for Business

Microsoft designed the Windows Insider Program for Business specifically for organizations to deploy preview builds of Windows 10 and Windows Server to participating employees for testing before they are GA.

IT pros can register their domains with the service and control settings centrally rather than registering users or configuring machines individually. Individual users can also join the Windows Insider Program for Business on their own, independently of IT’s corporate-wide review.


The preview builds don’t replace the channel releases because IT doesn’t deploy the new builds across its organization. They’re simply earlier Windows 10 builds IT teams can use to prepare their organizations for the updates.

The Windows Insider Program for Business preview build releases make it possible for IT to implement new services and tools more quickly once the GA release is available. The previews also help IT ensure that Microsoft addressed data security and governance issues in advance of the release.

The Windows Insider Program for Business allows administrators, developers, testers and other users to see what effect a new release might have on their devices, applications and infrastructures. Microsoft includes the Feedback Hub for IT pros and users to submit reactions about their experiences, make requests for new features and identify issues such as application compatibility, security and performance problems.

Microsoft also offers the Windows Insider Lab for Enterprise, a test deployment for insiders who Microsoft specially selects to test new, experimental or prerelease enterprise security and privacy features. The lab provides insiders with a virtual test infrastructure that comes complete with typical enterprise technologies such as Windows Information Protection, Windows Defender Application Guard and Microsoft App-V.

Getting started with the insider program

Microsoft recommends organizations sign up for the Windows Insider Program for Business and dedicate at least a few devices to the program. IT pros must register their users with the service and set up the target devices to receive preview builds.

Microsoft also recommends that organizations use Azure Active Directory work accounts when registering with the service, whether an organization registers users individually or as part of a domain account. A domain registration makes it easier for IT to manage the participating devices and track feedback from users across the organization. Users that want to submit feedback on behalf of the organization must have a domain registration, as well.

IT can install and manage preview builds on individual devices or on the infrastructure and deploy the builds across multiple devices in the domain, including virtual machines. Using Group Policies, IT can also enable, disable, defer or pause preview installations and set the branch readiness levels, which determine when the preview builds are installed.

Microsoft’s three preview readiness branches

IT can configure devices so the preview builds install automatically or allow users to choose their own install schedules. With mobile device management tools such as Microsoft Intune, IT can take over the preview readiness branch settings, assigning each user one of three preview deployment branches.

Fast. Devices at the Fast level are the first to receive build and feature updates. This readiness level implies some risk because it is the least stable and some features might not work on certain devices. As a result, IT should only install Fast builds on secondary devices and limit these builds to a select group of users.

Slow. Devices at the Slow level receive updates after Microsoft applies user and organization feedback from the Fast build. These builds are more stable, but users don’t see them as early in the process compared to the Fast builds. The Slow level generally targets a broader set of users.

Release Preview. Devices at the Release Preview level are the last to receive preview builds, but these builds are the most stable. Users still get to see and test features in advance and can provide feedback, but they have a much smaller window between the preview build and the final release.

Is the Windows Insider Program for Business for everyone?

An organization that participates in the Windows Insider Program for Business must be able to commit the necessary resources to effectively take advantage of the program’s features. To meet this standard, organizations must ensure that they can dedicate the necessary hardware and infrastructure resources and choose users who have enough time to properly test the builds.

An organization’s decision to invest in these resources depends on its specific circumstances, but deploying a Windows update is seldom without a few hiccups. With the Windows Insider Program for Business, IT can avoid some of these issues.

ComplyRight data breach affects 662,000, gets lawsuit

A data breach at ComplyRight, a firm that provides HR and tax services to businesses, may have affected 662,000 people, according to a state agency. It has also prompted a lawsuit, which was filed in federal court by a person who was notified that their personal data was breached. The lawsuit seeks class-action status.

The ComplyRight data breach included names, addresses, phone numbers, email addresses and Social Security numbers, some of which came from tax and W-2 forms.

ComplyRight’s services include a range of HR products, such as recruitment and time and attendance tools, as well as an online app for storing essential employee data. This particular attack was directed at its tax-form-preparation website. Hackers go after customer and employee data: the Identity Theft Resource Center’s 2018 midyear report, for instance, lists every known breach so far this year and describes the compromised data as a shopping list of HR-managed data.

Company: No more than 10% of customers affected

The breach occurred between April 20 and May 22, and the company notified affected parties by mail.

ComplyRight, in a posted statement, said “a portion (less than 10%)” of people who have their tax forms prepared on its web platform were affected by a cyberattack, but it did not say how many customers were affected by its breach. The company knows the data was accessed or viewed, but it was unable to determine if the data was downloaded, according to the firm’s statement.

But the state of Wisconsin, which publishes data breach reports, has shed some light on the scale of the impact. It reported the ComplyRight data breach affected 662,000 people — including 12,155 Wisconsin residents. A spokesman for Wisconsin Department of Agriculture, Trade and Consumer Protection said this figure was provided verbally to the state by an attorney for ComplyRight.

Rick Roddis, president of ComplyRight, based in Pompano Beach, Fla., said in an email that the firm won’t be commenting, for now, beyond what it has posted on the site.

Among the steps ComplyRight said it took was the hiring of a third-party security expert who conducted a forensic investigation. The firm is also offering credit-monitoring services to affected parties.

Security expert Nikolai Vargas, who looked at the firm’s statement, said ComplyRight “is doing the bare minimum in terms of transparency and informing their clients of the details of the security incident.”

“In cases of a data breach, it is important to disclose how long the exposure occurred and the scope of the exposure,” said Vargas, who is CTO of Switchfast, an IT consulting and managed service provider based in Chicago. ComplyRight stating that “less than 10%” of individuals were affected “doesn’t really explain how many people were impacted,” he added.

“Technical details are nice to have, but they’re not always necessary and may need to be withheld until protections are put in place,” Vargas said.

Federal suit alleges poor protection


The ComplyRight data breach was first reported by Krebs on Security, which had heard from customers who had received breach notification letters.

Susan Winstead, an Illinois resident, received the notification from ComplyRight on July 17, outlining what happened. She is the plaintiff in the lawsuit filed July 20 in the U.S. District Court for the Northern District of Illinois.

The lawsuit faults ComplyRight for allegedly not properly protecting its data and not immediately notifying affected individuals, and it seeks damages for the improper disclosure of personal information, including the time and effort required to remediate the data breach.

Company faced difficult detective work

Another independent expert who looked at ComplyRight’s notice, Avani Desai, said the company “followed best practice for incident response.”

With a cyberattack, one of the most difficult processes initially is identifying that there was an actual attack and the true extent of it, said Desai, president of Schellman & Company, a security and privacy compliance assessor in Tampa, Fla. It’s important to ask the following questions early: Was there sensitive information that was involved? Which systems were exploited? The firm quickly hired a third-party forensic group, she noted.

“ComplyRight locked down the system prior to announcing the breach, which is important, because when organizations announce too quickly, we see copycat attacks hit the already vulnerable situation,” Desai said.

Mike Sanchez, chief information security officer of United Data Technologies, an IT technology and services firm in Doral, Fla., said the things the firm did right are “they disabled the platform and performed a forensic investigation to understand the cause of the breach, as well as the breadth of the malicious actor’s actions.”

But Sanchez said the firm’s statement, which he described as a “very high-level summary,” lacked many specifics, including the exact flaw that was used to gain access to the data.

The Identity Theft Resource Center reported that as of the first six months of this year, there were 668 breaches exposing nearly 22.5 million records.

Big Switch taps AWS VPCs for hybrid cloud networking

Big Switch Networks has introduced software that provides a consistent way to build and manage network infrastructure within a virtual network in Amazon Web Services and in the private data center.

The vendor, which provides a software-based switching fabric for open hardware, said this week it would release the hybrid cloud technology in stages. First up is a software release next month for the data center, followed by an application for AWS in the fourth quarter.

The AWS product, called Big Cloud Fabric — Public Cloud, provides the tools for creating and configuring a virtual network to deliver Layer 2, Layer 3 and security services to virtual machines or containers running on the IaaS provider. AWS also offers tools for building the virtual networks, which it calls Virtual Private Clouds (VPCs).

In general, customers use AWS VPCs to support a private cloud computing environment on the service provider’s platform. The benefit is getting more granular control over the virtual network that serves sensitive workloads.

Big Cloud Fabric — Public Cloud lets companies create AWS VPCs and assign security policies for applications running on the virtual networks. The product also provides analytics for troubleshooting problems. While initially available on AWS, Big Switch plans to eventually make Big Cloud Fabric — Public Cloud available on Google Cloud and Microsoft Azure.
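For context on what Big Cloud Fabric — Public Cloud layers its provisioning, policy and analytics on top of, the sketch below creates a bare AWS VPC and subnet with AWS's own boto3 tooling rather than with Big Switch's product; the CIDR blocks, region and tag are placeholders.

```python
# Minimal sketch: creating a VPC and a subnet with AWS's native tooling (boto3).
# CIDR blocks, region and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Tag the VPC so it is identifiable alongside other virtual networks.
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "sensitive-workloads"}])

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print(vpc_id, subnet["Subnet"]["SubnetId"])
```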

Image: Big Switch Networks' cloud-first portfolio

VPCs for the private data center

For the corporate data center, Big Switch plans to add tools to its software-based switching fabric — called Big Cloud Fabric — for creating and managing on-premises VPCs that operate the same way as AWS VPCs, said Prashant Gandhi, the chief product officer for Big Switch, based in Santa Clara, Calif.

Customers could use the on-premises VPCs, which Big Switch calls enterprise VPCs, as the virtual networks supporting computing environments that include Kubernetes and Docker containers, VMware's vSphere server virtualization suite and the OpenStack cloud computing framework.

“With the set of tools they are announcing, [Big Switch] will be able to populate these VPCs and facilitate a consistent deployment and management of networks across cloud and on premises,” said Will Townsend, an analyst at Moor Insights & Strategy, based in Austin, Texas.

Big Switch already offers a version of its Big Monitoring Fabric (BMF) network packet broker for AWS. In the fourth quarter, Big Switch plans to release a single console, called Multi-Cloud Director, for accessing all BMF and Big Cloud Fabric controllers.

In general, Big Switch supplies software-based networking technology for white box switches. Big Cloud Fabric competes with products from Cisco, Midokura and Pluribus Networks, while BMF rivals include technology from Gigamon, Ixia and Apcon.

Big Switch customers are mostly large enterprises, including communication service providers, government agencies and 20 Fortune 100 companies, according to the vendor.

Harness genomic data to provide patient-centered care

Genomic data provides the foundation for the delivery of personalized medicine, although cost-effective and secure management of this data is challenging. BC Platforms, a Microsoft partner and world leader in genomic data management and analysis solutions, created GeneVision for Precision Medicine, Built on Microsoft Cloud technology. GeneVision is an end-to-end genomic data management and analysis solution empowering physicians with clear, actionable insights, facilitating evidence-based treatment decisions.

We interviewed Simon Kos, Chief Medical Officer and Senior Director of Worldwide Health at Microsoft, to learn more about how digital transformation is enabling the delivery of personalized medicine at scale.

David Turcotte: What led to your transition from a clinical provider to a leader within the healthcare technology industry?
Simon Kos:
It wasn’t intentional. In critical care medicine, having the right information on hand to make patient decisions, and being able to team effectively with other clinicians is essential. I felt that the technology we were using didn’t help, and I saw that as a risk to good quality care. This insight led to an interest, and the hobby eventually became a career as I got more exposure to all the incredible solutions out there that really do improve healthcare.

Given your unique perspective within the healthcare technology industry, how do you see digital transformation progressing in healthcare?

Digitization efforts have been underway for more than thirty years. As an industry, healthcare is moving slower than others. It’s heavily regulated, complex, and there is a large legacy of niche systems. However, the shift is occurring, and it needs to happen. We have a fundamental sustainability issue, with healthcare expenditure climbing around the world, and our model of healthcare needs to change emphasis from treating sick people in hospitals to preventing chronic disease in the community setting. Each day I see new clinical models that can only be achieved by leveraging technology, enabling us to treat patients more effectively at lower cost.

How are you and other healthcare leaders managing the shift from fee-for-service to a value-based care model?

My role in the shift to value-based care is building capability within the Microsoft Partner Network—which is over 12,000 companies in health worldwide—and bringing visibility to those that support value-based care. For healthcare leaders more directly involved in either the provision or reimbursement side, the challenge is more commercial. Delivering the same kind of care won’t be as profitable, but adapting business processes comes with its own set of risks. I think the stories of organizations that have successfully transitioned to value-based care, the processes they use, and the technology they leverage, will be important for those who desire more clarity before progressing with their own journeys.

What role does precision medicine play in delivering value-based care?

Right now, precision medicine seems to be narrowly confined to genetic profiling in oncology to determine which chemotherapy agents to use. That’s important since these drugs are expensive, and with cancer it’s imperative to start on a therapy that will work as soon as possible. However, I think the promise of precision medicine is so much broader than this. In understanding an individual’s risk profile through multi-omic analysis (i.e. genomics), we can finally get ahead of disease before it manifests, empower people with more targeted education, screen more diligently, and when patients do get unwell, intervene more effectively. Shifting some of the care burden to the patient, preventing disease, intervening early, and getting therapy right the first time, will drive the return on investment that makes value-based care economically viable.

As genomics continues to become more democratized, how will we continue to see it affect precision medicine?

It’s already scaling out beyond oncology. I expect to see genomics have increasing impact in areas like autoimmune disease, rare disease, and chronic disease. In doing so, I think precision medicine will cease to be something that primary care and specialists refer a patient on to a clinical geneticist or oncologist, instead they will integrate it into their model of care. I also see a role for the patients themselves to get more directly involved. As we continue to understand more about the human genome, the value of having your genome sequenced will increase. I see a day when knowing your genome is as common as knowing your blood type.

What role can technology play in closing the gap between genomics researchers and providers?

I think technology can federate genomics research. Research collaboration would tremendously increase the data researchers have to work with, which will accelerate breakthroughs. The more we understand about the genome, the more relevant it becomes to all providers. I also think machine learning has a role to play. Project Hanover aims to take the grunt work out of aggregating research literature. Finally, I think genomics needs to make its way into the electronic medical records that providers use, ideally with the automated clinical decision support that helps them use it effectively.

What challenges are healthcare leaders facing when implementing a long-term, scalable genomics strategy?

On the technical side, compute and storage of genomic information are key considerations. The cloud is quickly becoming the only viable way to solve for this. Using the cloud requires a well-considered security and privacy approach. On the research side, there’s still so much we have to learn about the genome. As we learn more it will open new avenues of care. Finally, on the business side, we have resourcing and reimbursement. The talent pool of genomics today is insufficient for a world where precision medicine is mainstream. These specialized resources are costly, and even with the cost of sequencing coming down, staffing a genomics business is expensive. And then there’s the reality of reimbursement – right now only certain conditions qualify for NGS (next-generation sequencing). So, I think any genomics business needs to start with what will be reimbursed but be ready to expand as the landscape evolves.

How do genomic solutions like BC Platforms’ GeneVision for Precision Medicine have the potential to transform a provider’s approach to patient care?

Providers are busy, and more demands are being placed on them to see more patients, see them faster, but also to personalize their care and deliver excellent outcomes. BC Platforms’ GeneVision allows insights to be surfaced from the system level raw data and delivered to the clinician to assist them in meeting these demands. The clinical reports that can be leveraged through GeneVision enable providers to make critical decisions about therapies and treatment within the context of their existing workflows.

In addition to report generation, GeneVision optimizes usage of stored genomic data so that when it is produced, it can be repeatedly re-utilized by merging it with clinical data as many times as a patient enters the health care system. GeneVision makes this possible through BC Platforms’ unique architecture, the dynamic storage capabilities of Microsoft Azure cloud technology, and Microsoft Genomics services. Together, these capabilities make genomic solutions like GeneVision a key factor in delivering patient-centered care at scale.

What will it take for genomics to become a part of routine patient care?

The initial barrier was cost. I think we are past that, with NGS dipping below $1000 and continuing to fall. Research into the genome is the current challenge. Genomics will eventually touch all aspects of medicine, but given the previous cost constraints we are the most advanced in oncology today. A key benefit of GeneVision is that it supports both whole genome sequencing and genotyping, which is currently the more cost-effective method to generate and store genomic data.  Although the cost of whole genome sequencing is coming down, this flexibility is essential to enabling rapid proliferation of genomics applications in healthcare. The future challenge will be educating the clinical provider workforce and introducing new models of care that leverage genomics. I think the reimbursement restrictions will melt away organically, as it becomes clearly more effective to take a precision approach to patient care.

What future applications of genomics in healthcare are you most excited about?

I’m really excited about the evolution of CRISPR and gene editing. Finding that you have a genetic variant that increases your risk of certain diseases can be helpful of course—it allows you to be aware, to screen, and take preventative steps. The ability to go a step further though and remediate that variant I think is incredibly powerful. At the same time, gene editing opens all sorts of other ethical issues, and I don’t yet think we have a mature approach to considering how we tackle that challenge.


BC Platforms GeneVision for Precision Medicine, Built on Microsoft Cloud technology, is available now on AppSource. Learn how GeneVision equips physicians with the tools they need to improve and accelerate patient outcomes by trying the demo today.

Cisco acquires July Systems for its location, analytics services

Cisco announced this week the acquisition of a company that provides cloud-based location services through retailers’ Wi-Fi networks, while Extreme Networks and Ruckus Networks launched improvements to their wired and wireless LANs.

Cisco plans to use July Systems technology to improve its enterprise Wi-Fi platform for indoor location services. July, a privately held company headquartered in Burlingame, Calif., sells its product by subscription.

July Systems’ platform integrates with a company’s customer management system to identify people walking into a retail store or mall. The July software can then interact with the people through text messages, email or push notifications.

The system also continuously maps the physical location of retail customers and uses the information to calculate their behavior patterns. July Systems software can also send collected data to business intelligence applications for further analysis.

Before the acquisition, July Systems was a Cisco partner, making its location services and analytics available through Cisco Connected Mobile Experiences (CMX), a set of location-based products that use Cisco's wireless infrastructure.

Cisco plans to complete the acquisition by the end of October. The company did not release financial details.

Extreme, Ruckus releases

Extreme Networks has introduced wired and wireless LAN infrastructure called Smart OmniEdge that incorporates technology Extreme acquired when it bought Avaya’s enterprise networking business last year.

The latest release includes an on-premises version of Extreme’s cloud-based management application, called ExtremeCloud. Both versions provide a single console for overseeing the vendor’s wired and wireless infrastructure, including access points and edge switches. They are also engineered for zero-touch provisioning, enabling customers to configure and activate devices without manual intervention.

Other infrastructure additions include hosted software for radio frequency management on the wireless network, which in today’s workplace has to serve a variety of devices, including PCs, mobile phones, printers and projectors. Automated features in the technology include access point tuning and optimization, load balancing and troubleshooting.

Smart OmniEdge utilizes Avaya’s software-defined networking product for simpler provisioning, management and troubleshooting of switches and access points. Extreme has also added APIs to integrate third-party network products and hardware adapters that companies can plug into medical devices to download and enforce policies.

Extreme has designed Smart OmniEdge for networking campuses, hotels, healthcare facilities and large entertainment venues. The company's wired and wireless networking portfolio incorporates technology from acquisitions over several years, including wireless LAN technology from Zebra Technologies, Avaya's software-based networking technology and Brocade's data center network products.

Extreme’s acquisition strategy helped boost sales in its latest quarter ended in May by 76% to $262 million. However, results for the quarter, coupled with modest guidance for the current quarter, disappointed analysts, driving its stock down by 19.5%, according to the financial site Motley Fool.

Meanwhile, Ruckus Networks, an Arris company, released a new version of the operating system for its SmartZone controllers for the wired and wireless LAN. SmartZoneOS 5 provides a central console for controlling, managing and securing Ruckus access points and switches.

SmartZoneOS customers can build a single network control cluster to serve up to 450,000 clients. The controller also contains RESTful APIs, so managed service providers can invoke SmartZoneOS features and configurations.

In February, Ruckus launched SmartZoneOS software that provides essential management and security features for IoT devices. The software works in conjunction with a Ruckus IoT module plugged into the USB port on each of the company’s access points.