Tag Archives: secure

Harness genomic data to provide patient-centered care

Genomic data provides the foundation for the delivery of personalized medicine, although cost-effective and secure management of this data is challenging. BC Platforms, a Microsoft partner and world leader in genomic data management and analysis solutions, created GeneVision for Precision Medicine, Built on Microsoft Cloud technology. GeneVision is an end-to-end genomic data management and analysis solution that empowers physicians with clear, actionable insights and facilitates evidence-based treatment decisions.

We interviewed Simon Kos, Chief Medical Officer and Senior Director of Worldwide Health at Microsoft, to learn more about how digital transformation is enabling the delivery of personalized medicine at scale.

David Turcotte: What led to your transition from a clinical provider to a leader within the healthcare technology industry?
Simon Kos:
It wasn’t intentional. In critical care medicine, having the right information on hand to make patient decisions, and being able to team effectively with other clinicians is essential. I felt that the technology we were using didn’t help, and I saw that as a risk to good quality care. This insight led to an interest, and the hobby eventually became a career as I got more exposure to all the incredible solutions out there that really do improve healthcare.

Given your unique perspective within the healthcare technology industry, how do you see digital transformation progressing in healthcare?

Digitization efforts have been underway for more than thirty years. As an industry, healthcare is moving slower than others. It’s heavily regulated, complex, and there is a large legacy of niche systems. However, the shift is occurring, and it needs to happen. We have a fundamental sustainability issue, with healthcare expenditure climbing around the world, and our model of healthcare needs to change emphasis from treating sick people in hospitals to preventing chronic disease in the community setting. Each day I see new clinical models that can only be achieved by leveraging technology, enabling us to treat patients more effectively at lower cost.

How are you and other healthcare leaders managing the shift from fee-for-service to a value-based care model?

My role in the shift to value-based care is building capability within the Microsoft Partner Network—which is over 12,000 companies in health worldwide—and bringing visibility to those that support value-based care. For healthcare leaders more directly involved in either the provision or reimbursement side, the challenge is more commercial. Delivering the same kind of care won’t be as profitable, but adapting business processes comes with its own set of risks. I think the stories of organizations that have successfully transitioned to value-based care, the processes they use, and the technology they leverage, will be important for those who desire more clarity before progressing with their own journeys.

What role does precision medicine play in delivering value-based care?

Right now, precision medicine seems to be narrowly confined to genetic profiling in oncology to determine which chemotherapy agents to use. That’s important since these drugs are expensive, and with cancer it’s imperative to start on a therapy that will work as soon as possible. However, I think the promise of precision medicine is so much broader than this. In understanding an individual’s risk profile through multi-omic analysis (e.g., genomics), we can finally get ahead of disease before it manifests, empower people with more targeted education, screen more diligently, and when patients do get unwell, intervene more effectively. Shifting some of the care burden to the patient, preventing disease, intervening early, and getting therapy right the first time, will drive the return on investment that makes value-based care economically viable.

As genomics continues to become more democratized, how will we continue to see it affect precision medicine?

It’s already scaling out beyond oncology. I expect to see genomics have increasing impact in areas like autoimmune disease, rare disease, and chronic disease. In doing so, I think precision medicine will cease to be something that primary care physicians and specialists refer a patient on to a clinical geneticist or oncologist for; instead, they will integrate it into their model of care. I also see a role for the patients themselves to get more directly involved. As we continue to understand more about the human genome, the value of having your genome sequenced will increase. I see a day when knowing your genome is as common as knowing your blood type.

What role can technology play in closing the gap between genomics researchers and providers?

I think technology can federate genomics research. Research collaboration would tremendously increase the data researchers have to work with, which will accelerate breakthroughs. The more we understand about the genome, the more relevant it becomes to all providers. I also think machine learning has a role to play. Microsoft Research’s Project Hanover aims to take the grunt work out of aggregating research literature. Finally, I think genomics needs to make its way into the electronic medical records that providers use, ideally with the automated clinical decision support that helps them use it effectively.
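
As a purely illustrative sketch of what such automated clinical decision support could look like, the Python snippet below flags a well-known pharmacogenomic interaction (CYP2C19 loss-of-function alleles and clopidogrel) at the point of prescribing. The function, data structures, and alert text are invented for this example and do not describe any particular EMR or product.

from typing import Optional, Tuple

# Two loss-of-function CYP2C19 alleles (*2, *3) indicate a poor metabolizer,
# which is associated with reduced clopidogrel efficacy.
POOR_METABOLIZER_DIPLOTYPES = {("*2", "*2"), ("*2", "*3"), ("*3", "*3")}

def check_prescription(drug: str, cyp2c19_diplotype: Tuple[str, str]) -> Optional[str]:
    """Return an alert string if the stored genotype suggests reduced drug efficacy."""
    if drug.lower() == "clopidogrel" and tuple(sorted(cyp2c19_diplotype)) in POOR_METABOLIZER_DIPLOTYPES:
        return "CYP2C19 poor metabolizer: consider an alternative antiplatelet agent."
    return None

# Example: a prescribing workflow would surface this alert to the clinician.
print(check_prescription("clopidogrel", ("*3", "*2")))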

What challenges are healthcare leaders facing when implementing a long-term, scalable genomics strategy?

On the technical side, compute and storage of genomic information are key considerations. The cloud is quickly becoming the only viable way to solve for this. Using the cloud requires a well-considered security and privacy approach. On the research side, there’s still so much we have to learn about the genome. As we learn more, it will open new avenues of care. Finally, on the business side, we have resourcing and reimbursement. The genomics talent pool today is insufficient for a world where precision medicine is mainstream. These specialized resources are costly, and even with the cost of sequencing coming down, staffing a genomics business is expensive. And then there’s the reality of reimbursement – right now only certain conditions qualify for next-generation sequencing (NGS). So, I think any genomics business needs to start with what will be reimbursed but be ready to expand as the landscape evolves.

How do genomic solutions like BC Platforms’ GeneVision for Precision Medicine have the potential to transform a provider’s approach to patient care?

Providers are busy, and more demands are being placed on them to see more patients, see them faster, but also to personalize their care and deliver excellent outcomes. BC Platforms’ GeneVision allows insights to be surfaced from the system-level raw data and delivered to the clinician to assist them in meeting these demands. The clinical reports that can be leveraged through GeneVision enable providers to make critical decisions about therapies and treatment within the context of their existing workflows.

In addition to report generation, GeneVision optimizes the use of stored genomic data: once the data is produced, it can be reused repeatedly, merged with clinical data each time a patient enters the health care system. GeneVision makes this possible through BC Platforms’ unique architecture, the dynamic storage capabilities of Microsoft Azure cloud technology, and Microsoft Genomics services. Together, these capabilities make genomic solutions like GeneVision a key factor in delivering patient-centered care at scale.
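
As a minimal, hypothetical sketch of that "generate once, reuse many times" pattern (the file names and columns below are invented and are not GeneVision's actual data model), a stored variant table can simply be joined to fresh clinical data each time a patient returns:

import pandas as pd

# Variant calls are generated and stored once per patient...
variants = pd.read_csv("variants.csv")      # columns: patient_id, gene, variant, classification
# ...while clinical data keeps accumulating with every encounter.
encounters = pd.read_csv("encounters.csv")  # columns: patient_id, encounter_date, diagnosis, medication

# Each new encounter is annotated with the existing genomic profile,
# with no need to re-sequence or re-import the raw data.
annotated = encounters.merge(variants, on="patient_id", how="left")
print(annotated.head())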

What will it take for genomics to become a part of routine patient care?

The initial barrier was cost. I think we are past that, with NGS dipping below $1,000 and continuing to fall. Research into the genome is the current challenge. Genomics will eventually touch all aspects of medicine, but given the previous cost constraints we are the most advanced in oncology today. A key benefit of GeneVision is that it supports both whole genome sequencing and genotyping, which is currently the more cost-effective method to generate and store genomic data. Although the cost of whole genome sequencing is coming down, this flexibility is essential to enabling rapid proliferation of genomics applications in healthcare. The future challenge will be educating the clinical provider workforce and introducing new models of care that leverage genomics. I think the reimbursement restrictions will melt away organically, as it becomes clearly more effective to take a precision approach to patient care.

What future applications of genomics in healthcare are you most excited about?

I’m really excited about the evolution of CRISPR and gene editing. Finding that you have a genetic variant that increases your risk of certain diseases can be helpful of course—it allows you to be aware, to screen, and take preventative steps. The ability to go a step further though and remediate that variant I think is incredibly powerful. At the same time, gene editing opens all sorts of other ethical issues, and I don’t yet think we have a mature approach to considering how we tackle that challenge.


BC Platforms GeneVision for Precision Medicine, Built on Microsoft Cloud technology, is available now on AppSource. Learn how GeneVision equips physicians with the tools they need to improve and accelerate patient outcomes by trying the demo today.

BlackBerry and Microsoft partner to empower the mobile workforce

Companies deliver seamless Mobile App experience and policy compliance; BlackBerry Secure platform now available on Azure

WATERLOO, ONTARIO and REDMOND, Wash. – March 19, 2018 – BlackBerry Limited (NYSE: BB; TSX: BB) and Microsoft Corp. (NASDAQ: MSFT) today announced a strategic partnership to offer enterprises a solution that integrates BlackBerry’s expertise in mobility and security with Microsoft’s unmatched cloud and productivity products.

Through this partnership, the companies have collaborated on a first-of-its-kind solution: BlackBerry Enterprise BRIDGE. This technology provides a highly-secure way for their joint customers – the world’s largest banks, healthcare providers, law firms, and central governments – to seamlessly use native Microsoft mobile apps from within BlackBerry Dynamics.

By making Microsoft’s mobile apps seamlessly available from within BlackBerry Dynamics, enterprise users will now have a consistent experience when opening, editing, and saving a Microsoft Office 365 file such as Excel, PowerPoint, and Word on any iOS® or Android™ device. This enables users to work anytime, anyplace, with rich file fidelity. At the same time, corporate IT departments benefit from a greater return on their existing investments, and added assurance that their company’s data and privacy are secured to the highest standards and in compliance with corporate and regulatory policies.

“BlackBerry has always led the market with new and innovative ways to protect corporate data on mobile devices,” said Carl Wiese, president of Global Sales at BlackBerry. “We saw a need for a hyper-secure way for our joint customers to use native Office 365 mobile apps. BlackBerry Enterprise BRIDGE addresses this need and is a great example of how BlackBerry and Microsoft continue to securely enable workforces to be highly productive in today’s connected world.”

“In an era when digital technology is driving rapid transformation, customers are looking for a trusted partner,” said Judson Althoff, executive vice president of Worldwide Commercial Business at Microsoft. “Our customers choose Microsoft 365 for productivity and collaboration tools that deliver continuous innovation, and do so securely. Together with BlackBerry, we will take this to the next level and provide enterprises with a new standard for secure productivity.”

“Along with a number of our peers in the Financial Services industry, we see strategic partnerships like this one as key to enhancing and bringing new products to market,” said George Sherman, Managing Director, CIO Global Technology Infrastructure, JPMorgan Chase. “This partnership will help create a more seamless mobile experience for end-users, which is a top priority for us at JPMorgan Chase.”

Lastly, the companies shared that the BlackBerry Secure platform for connecting people, devices, processes and systems, has been integrated with the Microsoft Azure cloud platform. Specifically, BlackBerry UEM Cloud, BlackBerry Workspaces, BlackBerry Dynamics, and BlackBerry AtHoc are now available on Azure.

To learn more, please visit BlackBerry.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

About BlackBerry

BlackBerry is a cybersecurity software and services company dedicated to securing the Enterprise of Things. Based in Waterloo, Ontario, the company was founded in 1984 and operates in North America, Europe, Asia, Australia, the Middle East, Latin America and Africa. The Company trades under the ticker symbol “BB” on the Toronto Stock Exchange and New York Stock Exchange. For more information, visit www.BlackBerry.com.

BlackBerry and related trademarks, names and logos are the property of BlackBerry Limited and are registered and/or used in the U.S. and countries around the world. All other marks are the property of their respective owners. BlackBerry is not responsible for any third-party products or services.

###

Media Contacts:

BlackBerry

(519) 597-7273

mediarelations@BlackBerry.com

Microsoft Media Relations

WE Communications for Microsoft

(425) 638-7777

rrt@we-worldwide.com

Investor Contact:

BlackBerry Investor Relations

(519) 888-7465

investor_relations@BlackBerry.com

 

OpenShift on OpenStack aims to ease VM, container management

Virtualization admins increasingly use containers to secure applications, but managing both VMs and containers in the same infrastructure presents some challenges.

IT can spin containers up and down more quickly than VMs, and they require less overhead, so there are several practical use cases for the technology. Security can be a concern, however, because all containers share the same underlying OS. As such, mission-critical applications are still better suited to VMs.

Using both containers and VMs can be helpful, because they each have their place. Still, adding containers to a traditional virtual infrastructure adds another layer of complexity and management for admins to contend with. The free and open source OpenStack provides infrastructure as a service and VM management, and organizations can run Red Hat’s OpenShift on OpenStack — and other systems — for platform as a service and container management.

Here, Brian Gracely, director of OpenShift product strategy at Red Hat, based in Raleigh, N.C., explains how to manage VMs and containers, and he shares how OpenShift on OpenStack can help.

What are the top challenges of managing both VMs and containers in virtual environments?

Brian Gracely: The first one is really around people and existing processes. You have infrastructure teams who, over the years, have become very good at managing VMs and … replicating servers with VMs, and they’ve built a set of operational things around that. When we start having the operations team deal with containers, a couple of things are different. Not all of them are as fluent in Linux as you might expect; containers are [based on] the OS. A lot of the virtualization people, especially in the VMware world, came from a Windows background. So, they have to learn a certain amount about what to do with the OS and how to deal with Linux constructs and commands.

Container environments tend to be more closely tied to people who are doing application development. Application developers are … making changes to the application more frequently and scaling them up and down. The concept of the environment changing more frequently is sort of new for VM admins.

What is the role of OpenStack in modern data centers where VMs and containers coexist?

Gracely: OpenStack can become either an augmentation of what admins used to do with VMware or a replacement for VMware that gives them all of the VM capabilities they want to have in terms of networking, storage and so forth. In most of those cases, they want to also have hybrid capabilities, across public and private. And they can use OpenShift on OpenStack as that abstraction layer that allows them to run containerized applications and/or VM applications in their own data center.

Then, they’ll run OpenShift in one of the public clouds — Amazon or Azure or Google — and the applications that run in the cloud will end up being containerized on OpenShift. It gives them consistency from what the operations look like, and then there’s a pretty simple way of determining which applications can also run in the public cloud, if necessary.

What OpenShift features are most important to container management?

Gracely: OpenShift is based on Kubernetes technology — the de facto standard for managing containers.

If you’re a virtualization person … it’s essentially like vCenter for containers. It centrally manages policies, it centrally manages deployments of containers, [and] it makes sure that you use your compute resources really efficiently. If a container dies or an application dies, it’s going to be constantly monitoring that and will restart it automatically. Kubernetes at the core of OpenShift is the thing that allows people to manage containers at scale, as opposed to managing them one by one.
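
As a rough illustration of that declarative, at-scale model (not OpenShift-specific tooling), the sketch below uses the official Kubernetes Python client to declare a Deployment with three replicas and a liveness probe; Kubernetes then reconciles toward that desired state and restarts failed containers on its own. The image name, port and namespace are placeholders.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

container = client.V1Container(
    name="web",
    image="registry.example.com/web:1.0",  # placeholder image
    liveness_probe=client.V1Probe(         # failed probes trigger automatic restarts
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state; the controller keeps three pods running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)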

What can virtualization admins do to improve their container management skills?

Gracely: Become Linux-literate, Linux-skilled. There are plenty of courses out there that allow you to get familiar with Linux. Container technology, fundamentally, is Linux technology, so that’s a fundamental thing. There are tools like Katacoda, which is an online training system; you just go in through your browser. It gives you a Kubernetes environment to play around with, and there’s also an OpenShift set of trainings and tools that are on there.

How can admins streamline management practices between other systems for VMs and OpenShift for containers?

Gracely: OpenShift runs natively on top of both VMware and OpenStack, so for customers that just want to stay focused on VMs, their world can look pretty much the way it does today. They’re going to provision however many VMs they need, and then give self-service access to the OpenShift platform and allow their developers to place containers on there as necessary. The infrastructure team can simply make sure that it’s highly available, that it’s patched, and if more capacity is necessary, add VMs.

Where we see … things get more efficient is people who don’t want to have silos anymore between the ops team and the development team. They’re either going down a DevOps path or combining them together; they want to merge processes. This is where we see them doing much more around automating environments. So, instead of just statically [building] a bunch of VMs and leaving them alone, they’re using tools like Ansible to provision not only the VMs, but the applications that go on top of those VMs and the local database.

Will VMs and containers continue to coexist, or will containers overtake VMs in the data center?

Gracely: More and more so, we’re seeing customers taking a container-first approach with new applications. But … there’s always going to be a need for good VM management, being able to deliver high performance, high I/O stand-alone applications in VMs. We very much expect to see a lot of applications stay in VMs, especially ones that people don’t expect to need any sort of hybrid cloud environment for, some large databases for I/O reasons, or [applications that], for whatever reason, people don’t want to put in containers. Then, our job is to make sure that, as containers come in, that we can make that one seamless infrastructure.

AWS Direct Connect updates help globe-spanning users

With a nod from AWS, customers with an international presence can now more simply establish secure network connections for workloads that span multiple regions.

An update to AWS Direct Connect enables enterprises to establish a single dedicated connection across multiple Amazon Virtual Private Clouds (VPCs) and cut down on administrative tasks. Enterprises have clamored for this capability, as the previous approach required them to set up unique connections in each region and peer VPCs across regions.

This feature, called AWS Direct Connect Gateways, is critical for large companies that want business continuity, with data and applications available across AWS regions, said Brad Casemore, an analyst with IDC.

“This is a critical capability for them as they set up direct connections to AWS services,” he said. “They want to ensure they can work across zones as dynamic application requirements dictate.”

All the major public cloud vendors have their own flavor of a dedicated networking service for enterprise customers to improve security, bandwidth and performance. These new AWS Direct Connect Gateways are global objects that exist across all public regions, with inter-region communication occurring on the AWS network backbone.
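
As a hedged sketch of how a single gateway might be wired up with boto3 (the Direct Connect connection, virtual private gateways and VPCs are assumed to already exist; all names, IDs and the ASN are placeholders):

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# One global Direct Connect gateway object...
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dxgw",
    amazonSideAsn=64512,
)
gw_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# ...associated with virtual private gateways attached to VPCs in different regions.
for vgw_id in ["vgw-aaaa1111", "vgw-bbbb2222"]:  # placeholder VGW IDs
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )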

At Onica, an AWS consulting partner in Santa Monica, Calif., most of its enterprise customers have requested this capability because of the challenges created by the old model, said Kevin Epstein, Onica CTO.

Previously, users had to rely on IPsec virtual private networks to achieve the same result. That could still create real problems if, say, a master database is in one region and services in other regions rely on that database. Users must either replicate that database across AWS regions or suffer a degree of latency that’s unacceptable for certain workloads.

Amazon built its AWS regions to be self-contained to avoid cascading failures, and while that model helped limit the impact of the major AWS outage earlier this year, it hampers customers in other ways, Epstein said.

In the past, when other vendors added similar capabilities, AWS argued that segmentation between regions was the best way to operate on its platform securely. These gateways represent a change in that strategy.

“This to me is the first major step in nodding to the global players and saying, ‘We understand the challenges and we’re going to take down those barriers for you,'” Epstein said.

AWS Direct Connect Gateways require IP address ranges that don’t overlap and all the VPCs must be in the same account. Amazon said it plans to add more flexibility here eventually.

The overlap issue may be a problem for large startups that haven’t considered IP address spacing, but it shouldn’t cause too many problems at large enterprises that already have a mature outlook on network allocation, Epstein said.
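
Checking the non-overlap requirement ahead of time is straightforward with the Python standard library; the VPC names and CIDR ranges below are placeholders:

from ipaddress import ip_network
from itertools import combinations

vpc_cidrs = {
    "prod-us-east-1": "10.0.0.0/16",
    "prod-eu-west-1": "10.1.0.0/16",
    "legacy-ap-southeast-1": "10.0.128.0/17",  # overlaps with prod-us-east-1
}

for (name_a, cidr_a), (name_b, cidr_b) in combinations(vpc_cidrs.items(), 2):
    if ip_network(cidr_a).overlaps(ip_network(cidr_b)):
        print(f"Overlap: {name_a} ({cidr_a}) and {name_b} ({cidr_b})")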

And while these gateways focus on connections to the cloud, Amazon is also making network changes within its cloud. AWS PrivateLink creates endpoints inside VPCs, using virtual network interfaces with IP addresses drawn from the VPC subnet.

PrivateLink can be connected via API to Kinesis, Service Catalog, EC2, EC2 Systems Manager and Elastic Load Balancing, with Key Management Service, CloudWatch and others to be added later. That allows customers to manage AWS services without any of that traffic traveling over the internet, and cuts down on costly egress fees.

“This is mostly about keeping the traffic within the AWS network,” Casemore said. “Customers incur additional charges when data must traverse the Internet.”
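
As an illustrative boto3 sketch, creating a PrivateLink interface endpoint looks roughly like the following; the VPC, subnet and security group IDs are placeholders, and the set of supported service names varies by region:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                          # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",  # traffic to Kinesis stays on the AWS network
    SubnetIds=["subnet-0123456789abcdef0"],                 # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],              # placeholder security group
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])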

Google addresses intra-zone latency

Customers with global footprints or latency-sensitive apps have forced many cloud vendors, not just AWS, to look more closely at networking. Google Cloud Platform, a distant third in the public cloud market behind AWS and Microsoft Azure, has made a number of moves in the past three months to bolster its networking capabilities.

GCP this month said the latest version of Andromeda, its internal software-defined network stack, will reduce intra-zone network latency between VMs by 40%. Zones are Google’s equivalent of AWS Availability Zones.

With this move, Google hopes to attract developers who prefer private hosting on bare metal over the public cloud to build latency-sensitive applications for high-performance computing, financial transactions or gaming.

Customers will have to calculate whether these improvements go far enough to address cost, bandwidth and latency, but it’s clear cloud vendors are focused on network innovations, Casemore said.

“It’s all about pulling a greater share of new and existing apps to the public cloud,” he said. “The network has certainly become an enabler and, as they’re finding out, it still requires enhancements if they want to continue to expand their footprint.”

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Public cloud security concerns wane as IaaS ramps, IAC rises

BOSTON — Enterprises have warmed up to the public cloud with the belief it can be at least as secure as, if not more secure than, on-premises infrastructure. But IT teams still need to fortify their cloud apps, and some increasingly rely on automation and infrastructure as code to do the job.

It’s taken a long time for businesses’ public cloud security concerns to subside. In fact, the security controls put in place in the public cloud are often more robust than a company’s on-premises setup, in part because enterprises can tap into the public cloud providers’ significant security investments, said Andrew Delosky, cloud architect at Coca-Cola Co.

“A hack on you is a hack on your vendors,” Delosky said here, during a presentation at Alert Logic’s Cloud Security Summit this week. “[Cloud providers] don’t want to be in the news just as much as you don’t want to be in the news.”

While public cloud security concerns, in general, have dwindled, IT security professionals still take the subject seriously, said Bob Moran, chief information security officer at American Student Assistance, a federal student loan organization based in Boston.

“I think security professionals are the ones that are uncomfortable with cloud security because they don’t understand it,” said Moran, whose company’s cloud deployment is mostly SaaS right now, but includes some trials with Amazon Web Services (AWS) infrastructure.

Adjust a security strategy for cloud

IT security professionals face a learning curve to evolve their practices and tool sets for cloud. For starters, they need to grasp the concept of shared responsibility — a model by which a public cloud provider and a customer divvy up IT security tasks.

In AWS’ shared responsibility model for infrastructure as a service (IaaS), the vendor assumes responsibility for everything from the hypervisor down, said Danielle Greshock, solutions architect at AWS, in a presentation. This means AWS secures the hardware that underpins its IaaS offering, which includes servers, storage and networks, as well as the physical security of its global data centers. AWS users are generally responsible for the security of their data, applications and operating systems, as well as firewall configurations.

However, the line between AWS’ security responsibilities and those of its users can blur and shift, depending on which services you use.

“[With AWS Relational Database Service], you don’t actually have access to the underlying server, so we patch the operating system for you,” Greshock said. This is different than a traditional IaaS deployment based on Elastic Compute Cloud instances, where users themselves are responsible for OS patches and updates.

“Knowing what part you need to worry about is probably the key to your success,” Greshock said.

Apart from reviewing shared responsibility models, IT teams can evolve their security strategies for public IaaS in other ways. Some tried-and-true tools and practices they’ve used on premises, such as user access controls, encryption and patch management, remain in play with cloud, albeit with some adjustments. For identity and access management, for example, teams will want to sync any on-premises systems, such as Active Directory, with those they use in the cloud. If they delete or alter a user ID on premises, they should implement the change in the public cloud as well.
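
As a hedged sketch of what mirroring an on-premises deprovisioning event into AWS IAM could look like (the user names are placeholders, and most organizations would drive this from a directory-sync or single sign-on tool rather than a standalone script):

import boto3

iam = boto3.client("iam")

def deprovision_iam_user(username):
    """Disable a user's AWS access after they are removed on premises."""
    # Deactivate programmatic access keys.
    for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=username, AccessKeyId=key["AccessKeyId"], Status="Inactive"
        )
    # Remove console access, if a login profile exists.
    try:
        iam.delete_login_profile(UserName=username)
    except iam.exceptions.NoSuchEntityException:
        pass

for user in ["jdoe", "asmith"]:  # placeholder: accounts removed from Active Directory
    deprovision_iam_user(user)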

But some organizations have adopted more emerging technologies or practices, such as infrastructure as code (IAC), to ease public cloud security concerns.

In traditional on-premises models, IT teams centralize control over any new resources or services that users deploy, and this should still be the case with public IaaS. But cloud’s self-service nature enables users to spin up resources on demand — sometimes without IT approval — and bypass that centralized control, said Jason LaVoie, vice president of global platform and delivery at SessionM, a customer engagement software provider based in Boston.

“With on-prem, you have an IT team with keys to the kingdom,” said LaVoie, whose company uses Amazon Web Services. “But it doesn’t always work that way with AWS.”

SessionM uses IAC to minimize the security risks in cloud self-service. IAC introduces more frequent and formal code reviews, increased automation and other practices that minimize the “human element” of public cloud resource deployment, so it helps reduce risk, LaVoie said.

Coca-Cola, which uses both AWS and Azure for IaaS, has adopted a similar approach.

“The whole infrastructure as code is such a revelation,” Delosky said. “Just being able to deploy the exact same application footprint, from an infrastructure level, every single time, no matter if you are in dev, test or production, with the same security controls … that’s a huge game-changer.”
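
As a minimal sketch of the idea (not Coca-Cola's or SessionM's actual tooling), the snippet below uses boto3 to deploy the same version-controlled CloudFormation template to dev, test and production, so every environment gets an identical footprint with identical, reviewed security controls. The template, stack names and VPC ID are invented.

import json
import boto3

TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {"VpcId": {"Type": "AWS::EC2::VPC::Id"}},
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "HTTPS only, reviewed in code",
                "VpcId": {"Ref": "VpcId"},
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                     "CidrIp": "10.0.0.0/16"}
                ],
            },
        }
    },
}

cfn = boto3.client("cloudformation")
for env in ["dev", "test", "prod"]:
    cfn.create_stack(
        StackName=f"app-{env}",
        TemplateBody=json.dumps(TEMPLATE),
        Parameters=[{"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"}],  # placeholder
    )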

Another way enterprises can evolve their security strategies for cloud is to appoint a dedicated IT staff member to oversee a cloud deployment, often with a specific focus on security or governance. Some organizations refer to this role as a cloud steward, said Adam Schepis, solutions architect at CloudHealth, a cloud management tool provider based in Boston.

Others, such as Coca-Cola, have created a Cloud Center of Excellence to unify IT and business leaders, as well as line-of-business managers, CISOs and others, to outline goals, discuss challenges and more.

“For us, that was probably the most critical thing we did,” Coca-Cola’s Delosky said.

Users plagued by iOS app security issues, according to new research

A new report shows that, despite Apple iOS’ reputation as a secure mobile operating system, users are at risk more often than it seems.

San Francisco-based mobile security software company Zimperium published its Global Threat Report from the second quarter of 2017 and highlighted iOS app security issues plaguing Apple users, finding that one in 50 iOS applications could potentially leak data to third parties.

“Enterprises have no way to detect this type of risk unless they are scanning apps for security or privacy issues,” the report stated, noting that 1,101 out of 50,000 iOS apps the researchers scanned had at least one security or privacy issue.

“Through deep analysis, Zimperium researchers found the 1,101 apps downloaded over 50 million times. Companies and individuals should be concerned if these iOS apps are on their devices and inside of their networks.”

Zimperium looked at the iOS app security risks and threats detected on zIPS-protected devices between April 1 and June 30, 2017. It categorized what it found as device threats and risks, network threats and app threats.

When studying device threats and risks, the researchers found that, so far in 2017, there have been more registered common vulnerabilities and exposures on both Android and iOS devices than in all of 2016.

“While not all vulnerabilities are severe, there were still hundreds that enabled remote code execution (such as Stagefright and Pegasus) that forced the business world to pay attention to mobile device security,” the report stated.

Zimperium also found that over 23% of iOS devices were not running the latest version of the operating system, which is somewhat unexpected, since Apple controls the update process itself. Despite that, the report also stated that the number of iOS devices with mobile malware was extremely low, at just 1%. However, Zimperium found that iOS devices “have a greater percentage of suspicious profiles, apps using weak encryption and potentially retrieving private information from devices.”

“The most concerning risks associated with iOS devices were malicious configuration profiles and ‘leaky apps,'” the report stated. “These profiles can allow third parties to maintain persistence on a device, decrypt traffic, synchronize calendars and contacts, track the device’s location and could allow a remote connection to control the device or siphon data from the device without the user’s knowledge.”

Additional findings include man-in-the-middle attacks that were detected on 80% of the scanned devices, as well as the seven most severe iOS app security issues: malware, keychain sharing, MD2 encryption, private frameworks, private info URL, reading UDID and stored info being retrieved during public USB recharges.

In other news:

  • Following the initial breach report, new developments revealed the CCleaner malware is worse than originally thought. Security company Morphisec and networking giant Cisco found and revealed that CCleaner, a Windows tool set from Avast, had been taken over by hackers who installed backdoors in the software. The companies confirmed that over 700,000 computers have been affected and now have backdoors on them. A few days after the reveal, Cisco Talos, the company’s security division, analyzed the command-and-control (C2) server to which the infected versions of CCleaner connect. “In analyzing the delivery code from the C2 server, what immediately stands out is a list of organizations, including Cisco, that were specifically targeted through delivery of a second-stage loader,” the Talos team wrote in a blog post. “Based on a review of the C2 tracking database, which only covers four days in September, we can confirm that at least 20 victim machines were served specialized secondary payloads.” According to Cisco Talos’ findings, Intel, Google, Microsoft, VMware and Cisco were among the targeted companies.
  • Media company Viacom Inc. is the latest major organization to expose sensitive information to the public due to a misconfigured AWS Simple Storage Service cloud storage bucket. According to Chris Vickery, director of cyber-risk research at UpGuard, based in Mountain View, Calif., Viacom exposed a wide array of internal resources, credentials and critical data. “Exposed in the leak are a master provisioning server running Puppet, left accessible to the public internet, as well as the credentials needed to build and maintain Viacom servers across the media empire’s many subsidiaries and dozens of brands,” UpGuard explained in a blog post. “Perhaps most damaging among the exposed data are Viacom’s secret cloud keys, an exposure that, in the most damaging circumstances, could put the international media conglomerate’s cloud-based servers in the hands of hackers.” The exposure, the research firm noted, could enable hackers to perform any number of damaging attacks through Viacom’s infrastructure.
  • The U.S. District Court for Washington, D.C., has dismissed two lawsuits filed in regard to the 2015 data breaches of the Office of Personnel Management (OPM). One of the lawsuits was filed by the American Federation of Government Employees, a federal workers union, alleging that the data breaches occurred as a result of gross negligence by federal officials. The second suit was filed by another union, the National Treasury Employees Union. It targeted the OPM’s acting director and alleged constitutional violations of the victims’ right to information privacy. This week, the court dismissed both lawsuits because neither plaintiff “has pled sufficient facts to demonstrate that they have standing.” In 2015, the OPM revealed two data breaches in which hackers stole the sensitive information of over 20 million people, mostly U.S. federal employees.

iPhone Secure Enclave firmware encryption key leaked

Despite early reports, experts agree that the leak of the iPhone Secure Enclave Processor firmware encryption key should not pose a security risk and may even ultimately improve user security.

When a hacker/researcher going by the handle “xerub” released the firmware encryption key, the initial reaction was one of panic because the iPhone Secure Enclave is responsible for storing and processing highly sensitive data, as described by Mike Ash, software engineer and fellow at Plausible Labs, in response to the debate around the FBI wanting backdoor access to Apple’s encryption:

“The Secure Enclave contains its own [unique ID] and hardware AES engine. The passcode verification process takes place here, separated from the rest of the system. The Secure Enclave also handles Touch ID fingerprint processing and matching, and authorizing payments for Apple Pay,” Ash wrote in a blog post about iPhone Secure Enclave last year. “The Secure Enclave performs all key management for encrypted files. File encryption applies to nearly all user data.”

While most iPhone system apps use Secure Enclave, and all third-party apps use it by default since iOS 7, Ash wrote, “The main CPU can’t read encrypted files on its own. It must request the file’s keys from the Secure Enclave, which in turn is unable to provide them without the user’s passcode.”

What Apple’s iPhone Secure Enclave leak means

While this sounds bad, David Schuetz, senior security consultant at NCC Group, said in his own analysis that the encryption key xerub released was specific to the GSM model of the iPhone 5S — the first Apple device with the Secure Enclave Processor — running iOS 10.3.3.

Apple reportedly told TechRepublic that decrypting the iPhone Secure Enclave firmware “in no way provides access” to user data and that Apple does not have plans to patch affected devices.

Xerub also told TechRepublic the encryption key would not impact user security but said the “public scrutiny” around the release could improve the security of the iPhone Secure Enclave.

Schuetz added that modifying the iPhone Secure Enclave firmware would not be possible because “the firmware is also signed by Apple, and the attacker would need to be able to forge the signature to get the phone to install the hacked firmware.”

“I think this is a good thing, in the long run. This should have very little practical effect on the security of individual iOS devices, unless a very significant flaw is uncovered. Even then, the potential scope of the finding may be limited to only older devices,” Schuetz wrote. “If the security of the Secure Enclave is in any way directly reduced by the disclosure of the firmware, then it wasn’t truly secure in the first place.”