Tag Archives: industry

Allianz partners with Microsoft to digitally transform the insurance industry

Allianz and Microsoft to reimagine the insurance industry experience with Azure to streamline insurance processes; Microsoft will partner with Syncier, the B2B2X insurtech founded by Allianz, to offer customized insurance platform solutions and related services

Jean-Philippe Courtois, EVP and president, Microsoft Global Sales, Marketing & Operations (left) and Christof Mascher, COO and member of the Board of Management of Allianz SE (right). Source: allianz.com

MUNICH, Germany, and REDMOND, Wash. — Nov. 14, 2019 — On Thursday, Allianz SE and Microsoft Corp. announced a strategic partnership focused on digitally transforming the insurance industry, making the insurance process easier while creating a better experience for insurance companies and their customers. Through the strategic partnership, Allianz will move core pieces of its global insurance platform, Allianz Business System (ABS), to Microsoft’s Azure cloud and will open-source parts of the solution’s core to improve and expand capabilities.

Syncier will offer a configurable version of the solution called ABS Enterprise Edition to insurance providers as a service, allowing them to benefit from one of the most advanced and comprehensive insurance platforms in the industry, reducing costs and centralizing their insurance portfolio management. This will increase efficiencies across all lines of insurance business, resulting in better experiences through tailored customer service and simplified product offerings.

“Teaming up with Microsoft and leveraging Azure’s secure and trusted cloud platform will support us in digitalizing the insurance industry,” said Christof Mascher, COO and member of the Board of Management of Allianz SE. “Through this partnership, Allianz and Syncier strive to offer the most advanced Insurance as a Service solutions on Microsoft Azure. The ABS Enterprise Edition is an exciting opportunity, both for larger insurers needing to replace their legacy IT, and smaller players — such as insurtechs — looking for a scalable insurance platform.”

“Allianz is setting the standard for insurance solutions globally,” said Jean-Philippe Courtois, EVP and president, Microsoft Global Sales, Marketing & Operations. “Together, Microsoft and Allianz are offering a solution that combines Allianz’s deep knowledge of the insurance sector with Microsoft’s trusted Azure cloud platform. By delivering an open-source, cloud-based insurance platform and software application marketplace, we will support innovation and transformation across this sector.”

Syncier’s ABS Enterprise Edition can handle insurance processes across all lines of business: property and casualty, life, health, and assistance. It can be customized for any insurance company, country and regulatory requirements. Insurers, brokers and agents adopting the platform can service clients and manage entire portfolios end to end in one system, gaining a unique 360-degree view of each client and the business.

To accelerate industry innovation, Syncier will also offer an Azure cloud-based marketplace for ready-made software applications and services tailored to the insurance sector. Such solutions could include, for example, customer service chatbots or AI-based fraud detection. The marketplace enables insurance providers to easily and quickly implement the available solutions in a plug-and-play manner.

Allianz uses ABS globally as a platform for all lines of business and along with Microsoft is committed to supporting the ABS Enterprise Edition long term as an industry solution. Today, ABS handles around 60 million insurance policies in 19 countries and is being rolled out to all Allianz entities.

About Allianz

The Allianz Group is one of the world’s leading insurers and asset managers with more than 92 million retail and corporate customers. Allianz customers benefit from a broad range of personal and corporate insurance services, ranging from property, life and health insurance to assistance services to credit insurance and global business insurance. Allianz is one of the world’s largest investors, managing around 729 billion euros on behalf of its insurance customers. Furthermore, our asset managers PIMCO and Allianz Global Investors manage more than 1.5 trillion euros of third-party assets. Thanks to our systematic integration of ecological and social criteria in our business processes and investment decisions, we hold the leading position for insurers in the Dow Jones Sustainability Index. In 2018, over 142,000 employees in more than 70 countries achieved total revenues of 132 billion euros and an operating profit of 11.5 billion euros for the group. For more information on Syncier, visit www.syncier.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Gregor Wills, Allianz, +49 89 3800 61313, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Docker Enterprise spun off to Mirantis, company restructures

In a startling turn of events, Docker as the industry knows it is no more.

Mirantis, a privately held company based in Campbell, Calif., acquired the Docker Enterprise business from Docker Inc., including Docker employees, Docker Enterprise partnerships and some 750 Docker Enterprise customer accounts. The IP acquired in the deal, announced today for an undisclosed sum, includes Docker Engine – Enterprise, Docker Trusted Registry, Docker Universal Control Plane and the Docker CLI.

“This is the end of Docker as we knew it, and it’s a stunning end,” said Jay Lyman, an analyst at 451 Research. The industry as a whole had been skeptical of Docker’s business strategy for years, particularly in the last six months as the company went quiet. The company underwent a major restructuring in the wake of the Mirantis deal today, naming longtime COO Scott Johnston as CEO. Johnston replaces Robert Bearden, who served just six months as the company’s chief executive.

“This validates a lot of the questions and uncertainty that have been surrounding Docker,” Lyman said. “We certainly had good reasons for asking the questions that we were.”

While not the end for Docker Enterprise, it appears to be the end for Docker’s Swarm orchestrator, which Mirantis will support for another two years. The primary focus will be on Kubernetes, Mirantis CEO Adrian Ionel wrote in a company blog post.

Docker Enterprise customers are already being directed to Mirantis for support, though Docker account managers and points of contact remain the same for now, as they transition over to Mirantis. Going forward, Mirantis will incorporate Docker Kubernetes into its Kubernetes as a Service offering, which analysts believe will give it a fresh toehold in public and hybrid cloud container orchestration.

However, it’s a market already crowded with vendors. Competitors include big names such as Google, which offers hybrid Kubernetes services with Anthos, and IBM-Red Hat, which so far has dominated the enterprise market for on-premises and hybrid Kubernetes management with more than 1000 customers.

A surprising exit for Docker Inc.

While the value of the deal remains unknown, it’s unlikely that Mirantis, which numbers 400 employees and is best known for its on-premises OpenStack and Kubernetes-as-a-service business, could afford a blockbuster sum equivalent to the hundreds of millions of dollars in funding Docker Inc. received since it launched Docker Engine 1.0 in 2014.

“I thought Docker would find a bigger buyer — I’m not sure Mirantis has the resources or name to do a very large deal,” said Gary Chen, an analyst at IDC.

Analysts were also surprised that Docker split off Docker Enterprise rather than being acquired as a whole, though it’s possible a second deal for Docker’s remaining Docker Hub and Docker Desktop IP could follow.

“It could be another buyer only wanted that part of the business, but Docker put so much into Docker Enterprise for quite a while — this is a complete turnaround,” Chen said.

Docker Enterprise hit scalability, reliability snags for some

As Docker looked to differentiate its Kubernetes implementation within Docker Enterprise last year, one customer who used the Swarm orchestrator for some workloads hoped that Kubernetes support would alleviate scalability and stability concerns. Mitchell International, an auto insurance software company in San Diego, said it suffered a two-hour internal service outage when a Swarm master failed and the quorum algorithm meant to elect a new master node also did not work. The outage prompted Mitchell to move its Linux containers to Amazon EKS, but members of its IT team hoped Docker Enterprise with Kubernetes support would replace Swarm for Windows containers.

However, about a month ago, a senior architect at a large insurance company on the East Coast told SearchITOperations he’d experienced similar issues in his deployment, including the software’s Kubernetes integration.

The company’s environment comprises thousands of containers and hundreds of host nodes, and according to the architect, the Docker Enterprise networking implementation can become unstable at that scale. He traced this to the product’s use of the Raft consensus algorithm, an open source protocol that maintains consistency across distributed systems, and to the way it stores data in the open source RethinkDB, which can become corrupted under high data volumes and fall out of sync with third-party overlay networks in the environment.
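
The failure mode he describes comes down to quorum: a Raft-style cluster can only elect a new leader while a strict majority of its managers is reachable. A minimal sketch of that rule (illustrative only, not Docker UCP's or RethinkDB's actual code):

```python
# Minimal sketch of the quorum rule behind Raft-style leader election.
# Illustrative only; this is not Docker UCP's or RethinkDB's implementation.

def majority(cluster_size: int) -> int:
    """A candidate needs votes from a strict majority of all members."""
    return cluster_size // 2 + 1

def can_elect_leader(cluster_size: int, reachable_nodes: int) -> bool:
    """An election succeeds only if enough members are reachable to form a quorum."""
    return reachable_nodes >= majority(cluster_size)

if __name__ == "__main__":
    # A 3-manager cluster that loses 2 managers cannot elect a new leader,
    # which is the kind of stall Mitchell International described.
    print(majority(3))             # 2
    print(can_elect_leader(3, 1))  # False -> cluster is stuck
    print(can_elect_leader(5, 3))  # True  -> still has quorum
```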

“The Docker implementation gives you the native Kubernetes APIs, but we do have concerns with how some of the core networking with their Universal Control Plane is implemented,” the architect said, speaking on condition of anonymity because he is not permitted to speak for his company in the press. “This is challenging at scale, and that carries forward into Kubernetes.”

The insurance company has been able to address this by running a greater number of relatively small Docker Enterprise clusters, but wasn’t satisfied with that as a long-term approach, and has begun to evaluate different Kubernetes distros from vendors such as Rancher and VMware to replace Docker Enterprise.

The senior architect was briefed on Mirantis’ managed service plans prior to the acquisition this week, and said his company will still move away from Docker Enterprise next year.

“We talked to Mirantis’ leadership team before [the acquisition] became public, but we don’t see a managed service as a strategic piece for us,” he said in an interview today. “I’m sure some customers will continue to ride out [the transition], but we’re not looking for a vendor to come in and manage our platform.”

Mirantis CEO pledges support, tech stability for customers

In response to Mitchell International’s report of a problem, Docker reps said last year that many customers use Docker Enterprise with Windows and Swarm without running into the issue. A company spokesperson did not respond to requests for comment about the more recent customer report of issues with Kubernetes last month.

Mirantis CEO Ionel said he hasn’t yet dug into that level of detail on the product, but that his company’s tech team will take the lead on Kubernetes product development going forward.

“Mirantis will contribute our Kubernetes expertise, including around scalability, robustness, ease of management and operation to the platform,” he said in an interview with SearchITOperations today. “That’s part of the unique value that we bring — the [Docker] brand will remain [Universal Control Plane], since that’s what customers are used to, but the technology underneath the hood is going to get an upgrade.”

At least for the foreseeable future, most Docker Enterprise customers will probably wait and see how the platform changes under Mirantis before they make a decision, consultants said.

“I know of only one Docker Enterprise customer, and I am sure they will stay on the platform, as it supports their production environment, until they see what Mirantis provides going forward,” said Chris Riley, DevOps delivery director at Cprime Inc., an Agile software development consulting firm in San Mateo, Calif.

Most enterprises have yet to deploy full container platforms in production, but most of his enterprise clients are either focused on OpenShift for its hybrid cloud support or using a managed Kubernetes service from a public cloud provider, Riley said.

Docker intends to refocus its efforts around Docker Desktop, but that product won’t be of interest to the insurance company’s senior architect and his team, who have developed their own process for moving apps from the developer desktop into the CI/CD pipeline.

In fact, the senior architect said he’d become frustrated by the company’s apparent focus on Docker Desktop over the last 18 months, while some Docker Enterprise customers waited for features such as blue-green container cluster upgrades, which Docker shipped in Docker Enterprise 3.0 in July.

“We’d been asking for ease of upgrade features for two years — it’s been a big pain point for us, to the point where we developed our own [software] to address it,” he said. “They finally started to get there [with version 3.0], but it’s a little too late for us.”

Mirantis’ Ionel said the company plans to include seamless upgrades as a major feature of its managed service offering. Other areas of focus will be centralized management of a fleet of Kubernetes clusters rather than just one, and self-service features for development teams.

Mirantis will acquire all of Docker’s customer support and customer success team employees, as well as the systems they use to support Docker Enterprise shops and all historical customer support data, Ionel said.

“Nothing there has changed,” he said. “They are still doing today what they were doing yesterday.”

Atlassian CISO Adrian Ludwig shares DevOps security outlook

BOSTON — Atlassian chief information security officer and IT industry veteran Adrian Ludwig is well aware of a heightened emphasis on DevOps security among enterprises heading into 2020 and beyond, and he believes that massive consolidation between DevOps and cybersecurity toolsets is nigh.

Ludwig, who joined Atlassian in May 2018, previously worked at Nest, Macromedia, Adobe and Google’s Android, as well as the U.S. Department of Defense. Now, he supervises Atlassian’s corporate security, including its cloud platforms, and works with the company’s product development teams on security feature improvements.

Atlassian has also begun to build DevOps security features into its Agile collaboration and DevOps tools for customers who want to build their own apps with security in mind. Integrations between Jira Service Desk and Jira issue tracking tools, for example, automatically notify development teams when security issues are detected, and the roadmap for Jira Align (formerly AgileCraft) includes the ability to track code quality, privacy and security on a story and feature level.

However, according to Ludwig, the melding of DevOps and IT security tooling, along with their disciplines, must be much broader and deeper in the long run. SearchSoftwareQuality caught up with him at the Atlassian Open event here to talk about his vision for the future of DevOps security, how it will affect Atlassian, and the IT software market at large.

SearchSoftwareQuality: We’re hearing more about security by design and applications security built into the DevOps process. What might we expect to see from Atlassian along those lines?

Ludwig: As a security practitioner, probably the most alarming factoid about security — and it gets more alarming every year — is the number of open roles for security professionals. I remember hearing at one point it was a million, and somebody else was telling me that they had found 3 million. So there’s this myth that people are going to be able to solve security problems by having more people in that space.

And an area that has sort of played into that myth is around tooling for the creation of secure applications. And a huge percentage of the current security skills gap is because we’re expecting security practitioners to find those tools, integrate those tools and monitor those tools when they weren’t designed to work well together.

It’s currently ridiculously difficult to build software securely. Just to think about what it means in the context of Atlassian, we have to license tools from half a dozen different vendors and integrate them into our environment. We have to think about how results from those tools flow into the [issue] resolution process. How do you bind it into Jira, so you can see the tickets, so you can get it into the hands of the developer? How do you make sure that test cases associated with fixing those issues are incorporated into your development pipeline? It’s a mess.

My expectation is that the only way we’ll ever get to a point where software can be built securely is if those capabilities are incorporated directly into the tools that are used to deliver it, as opposed to being add-ons that come from third parties.

SSQ: So does that include Atlassian?

Ludwig: I think it has to.

SSQ: What would that look like?

Ludwig: One area where my team has been building something like that is the way we monitor our security investigations. We’ve actually released some open source projects in this area, where the way we create alerts for Splunk, which we use as our SIEM, is tied into Jira tickets and Confluence pages. When we create alerts, a Confluence page is automatically generated, and it generates Jira tickets that then flow to our analysts to follow up on them. And that’s actually tied in more broadly to our overall risk management system.
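
Ludwig doesn't describe the implementation, but the flow he outlines (a Splunk alert that automatically generates a Confluence page and Jira tickets for analysts) could be wired up with the standard Atlassian Cloud REST APIs roughly as follows. This is a minimal sketch, not Atlassian's internal tooling; the site URLs, credentials, space key, project key and Splunk payload fields are placeholders or assumptions:

```python
import requests

# Placeholder endpoints and credentials; replace with real values.
JIRA_URL = "https://example.atlassian.net"
CONFLUENCE_URL = "https://example.atlassian.net/wiki"
AUTH = ("analyst@example.com", "api-token")

def handle_splunk_alert(alert: dict) -> None:
    """Create a Confluence page and a Jira ticket for an incoming Splunk alert."""
    # Assumes Splunk's webhook alert payload, which carries fields such as
    # "search_name" and "result".
    title = f"Security alert: {alert['search_name']}"

    # Confluence Cloud REST API: create a page in the (assumed) SEC space.
    requests.post(
        f"{CONFLUENCE_URL}/rest/api/content",
        auth=AUTH,
        json={
            "type": "page",
            "title": title,
            "space": {"key": "SEC"},
            "body": {"storage": {"value": f"<p>{alert['result']}</p>",
                                 "representation": "storage"}},
        },
    ).raise_for_status()

    # Jira Cloud REST API: open a ticket for an analyst to follow up on.
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=AUTH,
        json={
            "fields": {
                "project": {"key": "SEC"},   # assumed project key
                "summary": title,
                "description": str(alert),
                "issuetype": {"name": "Task"},
            }
        },
    ).raise_for_status()
```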

We are also working on some internal tools to make it easier for us to connect the third-party products that look for security vulnerabilities directly into Bitbucket. Every single time we do a pull request, source code analysis runs. And it’s not just a single piece of source code analysis; it’s a wide range of them. Is that particular pull request referencing any out-of-date libraries? And dependencies that need to be updated? And then those become comments that get added into the peer review process.
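
He doesn't name the scanners involved, but the out-of-date-library check he describes can be sketched as a small pull-request step. The example below assumes a Python project and the standard pip CLI; a real pipeline would post the findings back to the pull request as review comments:

```python
import json
import subprocess

def outdated_dependencies() -> list[dict]:
    """Ask pip which installed packages have newer releases available."""
    result = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def review_comments() -> list[str]:
    """Turn each outdated dependency into a comment for the peer review."""
    return [
        f"Dependency `{pkg['name']}` is out of date: "
        f"{pkg['version']} installed, {pkg['latest_version']} available."
        for pkg in outdated_dependencies()
    ]

if __name__ == "__main__":
    for comment in review_comments():
        print(comment)  # a real pipeline would post these to the pull request
```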

It’s not something that we’re currently making commercially available, nor do we have specific plans at this point to do that, so I’m not announcing anything. But that’s the kind of thing that we are doing. My job is to make sure that we ship the most secure software that we possibly can, and if there are commercial opportunities, which I think there are, then it seems natural that we might do those as well.

SSQ: What does that mean for the wider market as DevOps and security tools converge?

Ludwig: Over the next 10 years, there’s going to be massive consolidation in that space. That trend is one that we’ve seen other places in the security stack. For example, I came from Android. Android now has primary responsibility, as a core platform capability, for all of the security of that device. Your historical desktop operating systems? Encryption was an add-on. Sandboxing was an add-on. Monitoring for viruses was an add-on. Those are all now part of the mobile OS platform.

If you look at the antivirus vendors, you’ve seen them stagnate, and they didn’t have an off-road onto mobile. I think it’s going to be super interesting to watch a lot of the security investments made over the last 10 years, especially in developer space, and think through how that’s going to play out. I think there’s going to be consolidation there. It’s all converging, and as it converges, a lot of stuff’s going to die.

Extending the power of Azure AI to business users

Today, Alysa Taylor, Corporate Vice President of Business Applications and Industry, announced several new AI-driven insights applications for Microsoft Dynamics 365.

Powered by Azure AI, these tightly integrated AI capabilities will empower every employee in an organization to make AI real for their business today. Millions of developers and data scientists around the world are already using Azure AI to build innovative applications and machine learning models for their organizations. Now business users will also be able to directly harness the power of Azure AI in their line of business applications.

What is Azure AI?

Azure AI is a set of AI services built on Microsoft’s breakthrough innovation from decades of world-class research in vision, speech, language processing, and custom machine learning. What I find particularly exciting is that Azure AI provides our customers with access to the same proven AI capabilities that power Xbox, HoloLens, Bing, and Office 365.

Azure AI helps organizations:

  • Develop machine learning models that can help with scenarios such as demand forecasting, recommendations, or fraud detection using Azure Machine Learning.
  • Incorporate vision, speech, and language understanding capabilities into AI applications and bots, with Azure Cognitive Services and Azure Bot Service.
  • Build knowledge-mining solutions to make better use of untapped information in their content and documents using Azure Search.
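
As a concrete illustration of the Cognitive Services capability listed above, a developer could score customer feedback for sentiment with a short REST call. This is a minimal sketch; the resource endpoint and key are placeholders:

```python
import requests

# Placeholder Azure resource endpoint and key; replace with your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-text-analytics-key>"

def sentiment(text: str) -> dict:
    """Score a piece of text with the Azure Text Analytics sentiment API (v3.0)."""
    response = requests.post(
        f"{ENDPOINT}/text/analytics/v3.0/sentiment",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"documents": [{"id": "1", "language": "en", "text": text}]},
    )
    response.raise_for_status()
    return response.json()["documents"][0]

if __name__ == "__main__":
    result = sentiment("The new policy portal is fast and easy to use.")
    print(result["sentiment"], result["confidenceScores"])
```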

Bringing the power of AI to Dynamics 365 and the Power Platform

The release of the new Dynamics 365 insights apps, powered by Azure AI, will enable Dynamics 365 users to apply AI in their line of business workflows. Specifically, they benefit from the following built-in Azure AI services:

  • Azure Machine Learning, which powers personalized customer recommendations in Dynamics 365 Customer Insights, analyzes product telemetry in Dynamics 365 Product Insights, and predicts potential failures in business-critical equipment in Dynamics 365 Supply Chain Management.
  • Azure Cognitive Services and Azure Bot Service, which enable natural interactions with customers across multiple touchpoints with Dynamics 365 Virtual Agent for Customer Service.
  • Azure Search, which allows users to quickly find critical information in records such as accounts, contacts, and even in documents and attachments such as invoices and faxes in all Dynamics 365 insights apps.

Furthermore, since Dynamics 365 insights apps are built on top of Azure AI, business users can now work with their development teams using Azure AI to add custom AI capabilities to their Dynamics 365 apps.

The Power Platform, composed of three services – Power BI, PowerApps and Microsoft Flow – also benefits from Azure AI innovations. While each of these services is best-of-breed individually, their combination as the Power Platform is a game-changer for our customers.

Azure AI enables Power Platform users to uncover insights, develop AI applications, and automate workflows through low-code, point-and-click experiences. Azure Cognitive Services and Azure Machine Learning empower Power Platform users to:

  • Extract key phrases in documents, detect sentiment in content such as customer reviews, and build custom machine learning models in Power BI.
  • Build custom AI applications that can predict customer churn, automatically route customer requests, and simplify inventory management through advanced image processing with PowerApps.
  • Automate tedious tasks such as invoice processing with Microsoft Flow.

The tight integration between Azure AI, Dynamics 365, and the Power Platform will enable business users to collaborate effortlessly with data scientists and developers on a common AI platform that not only has industry leading AI capabilities but is also built on a strong foundation of trust. Microsoft is the only company that is truly democratizing AI for businesses today.

And we’re just getting started. You can expect even deeper integration and more great apps and experiences that are built on Azure AI as we continue this journey.

We’re excited to bring those to market and eager to tell you all about them!

Learn how to get started with your networking career

As the networking industry rapidly changes, so could your networking career. Maybe you’re just starting out, or you want to take your career to the next level. Or maybe you want to hit the reset button and start over in your career. Regardless of experience, knowledge and career trajectory, everybody can use advice along the way.

Network engineer role requirements vary depending on a candidate’s experience, education and certifications, but one requirement is constant: Network engineers should have the skills to build, implement and maintain a computer network that supports an organization’s required services.

This compilation of expert advice brings together helpful insights for network engineers at any point in their networking careers in any area of networking. It includes information about telecommunications and Wi-Fi careers and discusses how 5G may affect job responsibilities.

The following expert advice can help budding, transforming and still-learning network engineers in their networking career paths.

What roles are included in a network engineer job description?

Network engineers have a variety of responsibilities that fall within multiple categories and require varying skills. All potential network engineers, however, should have a general understanding of the multiple layers of network communication protocols, like IP and TCP. Engineers who know how these protocols work can better develop fundamental networking wisdom, according to Terry Slattery, principal architect at NetCraftsmen.

The role of a network engineer is complex, which is why it’s often divided into subcategories, each with its own responsibilities, requirements and training.

For most networking careers, certifications and job experience are comparable to advanced degrees, Slattery said. Engineers should renew their certifications every few years to ensure they maintain updated industry knowledge, he added. As of mid-2019, network engineer salaries ranged from $60,000 to $180,000 a year, though salaries vary by the candidate’s location, market, experience and certifications.

Learn more about network engineer job requirements.

What steps should I take to improve my networking career path?

As the networking industry transforms, network engineers eager to advance their networking careers have to keep up. One way to ensure engineers maintain relevant networking skills is for those engineers to get and retain essential certifications, said Amy Larsen DeCarlo, principal analyst at Current Analysis. The Cisco Certified Network Associate (CCNA) certification, in particular, provides foundational knowledge about how to build and maintain network infrastructures.

Network engineers should renew their certifications every few years, which requires a test to complete the renewal. Certifications don’t replace experience, DeCarlo said, but they assure employers that candidates have the essential, basic networking knowledge. Continuing education or specializing in a certain expertise area can also help engineers advance their networking careers, as can a maintained awareness of emerging technologies, such as cloud services.

Read more about how to advance your networking career.

Learn more about the various paths you can take in your networking career.

What are the top telecom certifications?

Different types of certifications can benefit different aspects of networking. For a telecom networking career, the three main certification categories are vendor-based, technology-based and role-based, said Tom Nolle, president of CIMI Corp. Vendor-based certifications are valuable for candidates who mostly use equipment from a single vendor. However, these certifications can be time-consuming and typically require prior training or experience.

Technology-based certifications usually encompass different categories of devices, such as wireless or security services. These include certifications from the International Association for Radio, Telecommunications and Electromagnetics and the Telecommunications Certification Organization. These certifications are best for entry-level engineers or those who want to specialize in a specific area of networking. They are also equivalent to an advanced degree, Nolle said.

Role-based certifications are more general and ideal for candidates without degrees or those who want a field technician job. Certifications can make candidates more attractive to employers, as these credentials prove the candidate has the skills and experience the employer requires. One example of this type of certification is the NCTI Master Technician, which specializes in field and craft work for the cable industry.

Dive deeper into the specifics of telecom certifications.

Why should I stay up to date with Wi-Fi training?

One of the most complicated areas of networking is wireless LAN (WLAN) — Wi-Fi, in particular. Yet, Wi-Fi is essential in today’s networking environment. As with other networking career paths, WLAN engineers should refresh their Wi-Fi training every so often to remain credible, according to network engineer Lee Badman.

The history of Wi-Fi has been complicated, and the future can be daunting. But Wi-Fi training is a helpful way to understand common issues. In the past, many issues stemmed from the lack of an identical, holistic understanding of Wi-Fi among organizations and network teams, Badman said. Without a consistent Wi-Fi education plan, Wi-Fi training was a point of both success and failure.

While some training inconsistencies still linger now, Badman recommended the Certified Wireless Specialist course from Certified Wireless Network Professionals as a starting point for those interested in WLANs. A variety of vendor-agnostic courses are also available for other wireless roles, he said.

Discover more about Wi-Fi training in networking careers.

Will 5G networks require new network engineer skills?

Mobile network generations seem to change as rapidly as Wi-Fi does, causing many professionals to wonder what 5G will mean for networking careers in the future. In data centers, job requirements won’t change much, according to John Fruehe, an independent analyst. But 5G could launch a new era for cloud-based and mobile applications and drive security changes as well.

Network engineers should watch out for gaps in network security due to this new combination of enterprise networks, cloud services and 5G, Fruehe said. However, employees working in carrier networks may already see changes in how their organizations construct and provision communication services as a result of current 5G deployments. For example, 5G may require engineers to adhere to a new, fine-grained programmability to manage the increased volume of services organizations plan to run on 5G.

Network engineer skills will be crucial in areas such as software-defined networking, software-defined radio access networks, network functions virtualization, automation and orchestration. The shift comes because manual command-line configuration will no longer suffice when engineers program devices; virtualization and automation are better suited to the task.
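
What "programming devices" looks like in practice can be as simple as pushing configuration through a library instead of typing CLI commands by hand. Below is a minimal sketch using the open source Netmiko library; the device address, credentials and configuration lines are placeholders:

```python
from netmiko import ConnectHandler

# Placeholder device details; replace with a real switch or router.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "netops",
    "password": "example-password",
}

# Configuration lines to push. A real workflow would render these from a
# source of truth (templates, intent files, an IPAM system, etc.).
config_commands = [
    "interface GigabitEthernet0/1",
    "description uplink-to-core",
    "no shutdown",
]

with ConnectHandler(**device) as conn:
    output = conn.send_config_set(config_commands)
    print(output)
```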

Explore more about 5G’s potential effect on networking careers.

With the onset of value-based care, machine learning is making its mark

In a value-based care world, population health takes center stage.

The healthcare industry is slowly moving away from traditional fee-for-service models, where healthcare providers are reimbursed for the quantity of services rendered, and toward value-based care, which focuses on the quality of care provided. The emphasis on quality over quantity also pushes healthcare organizations to manage their high-risk patients more effectively.

Making the shift to value-based care and better care management means looking at new data sources — the kind healthcare organizations won’t get just from the lab.

In this Q&A, David Nace, chief medical officer for San Francisco-based healthcare technology and data analytics company Innovaccer Inc., talks about how the company is applying AI and machine learning to patient data — clinical and nonclinical — to predict a patient’s future cost of care.

Doing so enables healthcare organizations to better allocate their resources by focusing their efforts on smaller groups of high-risk patients instead of the patient population as a whole. Indeed, Nace said the company is able to predict the likelihood of an individual experiencing a high-cost episode of care in the upcoming year with 52% accuracy.

What role does data play in Innovaccer’s individual future cost of care prediction model?

David Nace, chief medical officer, Innovaccer

David Nace: You can’t do anything at all around understanding a population or an individual without being able to understand the data. We all talk about data being the lifeblood of everything we want to accomplish in healthcare.

What’s most important, you’ve got to take data in from multiple sources — claims, clinical data, EHRs, pharmacy data, lab data and data that’s available through health information exchanges. Then, also [look at] nontraditional, nonclinical forms of data, like social media; or local, geographic data, such as transportation, environment, food, crime, safety. Then, look at things like availability of different community resources. Things like parks, restaurants, what we call food deserts, and bring all that data into one place. But none of that data is standardized.

How does Innovaccer implement and use machine learning algorithms in its prediction model?

Nace: Most of that information I just described — all the data sources — there are no standards around. So, you have to bring that data in and then harmonize it. You have to be able to bring it in from all these different sources, in which it’s stored in different ways, get it together in one place by transforming it, and then you have to harmonize the data into a common data model.

We’ve done a lot of work around that area. We used machine learning to recognize patterns as to whether we’ve seen this sort of data before from this kind of source, what do we know about how to transform it, what do we know about bringing it into a common data model.

Lastly, you have to be able to uniquely identify a cohort or an individual within that massive population data. You bring all that data together. You have to have a unique master patient index, and that’s been very difficult, because, in this country, we don’t have a national patient identifier.

We use machine learning to bring all that data in, transform it, get it into a common data model, and we use some very complex algorithms to identify a unique patient within that core population.
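
Nace doesn't detail Innovaccer's matching algorithms, but the core problem, deciding whether records from different sources describe the same person without a national patient identifier, is classic probabilistic record linkage. The following is a deliberately simplified sketch, not Innovaccer's model:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Fuzzy string similarity between two names, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted score combining name similarity with exact demographic matches."""
    score = 0.6 * name_similarity(rec_a["name"], rec_b["name"])
    score += 0.3 if rec_a["dob"] == rec_b["dob"] else 0.0
    score += 0.1 if rec_a["zip"] == rec_b["zip"] else 0.0
    return score

def same_patient(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Link two records to one master patient index entry above a threshold."""
    return match_score(rec_a, rec_b) >= threshold

if __name__ == "__main__":
    claim = {"name": "Jonathan Smith", "dob": "1975-03-02", "zip": "94107"}
    ehr = {"name": "Jon Smith", "dob": "1975-03-02", "zip": "94107"}
    print(same_patient(claim, ehr))  # True -> candidate for the same MPI entry
```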

How did you develop a risk model to predict an individual’s future cost of care? 

Nace: There are a couple of different sources of risk. There’s clinical risk, [and] there’s social, environmental and financial risk. And then there’s risk related to behavior. Historically, people have looked at claims data to look at the financial risk in kind of a rearview-mirror approach, and that’s been the history of risk detection and risk management.

There are models that the government uses and relies on, like CMS’ Hierarchical Condition Category [HCC] scoring, relying heavily on claims data and taking a look at what’s happened in the past and some of the information that’s available in claims, like diagnosis, eligibility and gender.

One of the things we wanted to do is, with all that data together, how do you identify risk proactively, not rearview mirror. How do you then use all of this new mass of data to predict the likelihood that someone’s going to have a future event, mostly cost? When you look at healthcare, everybody is concerned about what is the cost of care going to be. If they go back into the hospital, that’s a cost. If they need an operation, that’s a cost.
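
The interview stays at a conceptual level, but the forward-looking model Nace describes, estimating the likelihood of a high-cost episode in the coming year, is typically built as a supervised classifier over claims, clinical and social features. Below is a purely illustrative sketch with synthetic data and made-up feature names, not Innovaccer's model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative feature matrix: one row per member, with columns such as
# [age, chronic_condition_count, prior_year_cost, er_visits, social_vulnerability].
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
# Illustrative label: 1 if the member had a high-cost episode the following year.
y = (X[:, 2] + 0.5 * X[:, 4] + rng.normal(0, 0.2, 1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank members by predicted risk so care managers can focus on the top of the list.
risk_scores = model.predict_proba(X_test)[:, 1]
highest_risk = np.argsort(risk_scores)[::-1][:10]
print("Hold-out accuracy:", model.score(X_test, y_test))
print("Ten highest-risk members in the test set:", highest_risk)
```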

Why is predicting individual risk beneficial to a healthcare organization moving toward value-based care?

Nace: Usually, risk models are used for rearview mirror for large population risk. When the government goes to an accountable care organization or a Medicare Advantage plan and wants to say how much risk is in here, it uses the HCC model, because it’s good at saying what’s the risk of populations, but it’s terrible when you go down to the level of an individual. We wanted to get it down to the level of an individual, because that’s what humans work with.

How do social determinants of health play a role in Innovaccer’s future cost of care model?

Nace: We’ve learned in healthcare that the demographics of where you live, and the socioeconomic environment around you, really impact your outcome of care much more than the actual clinical condition itself.

As a health system, you’re starting to understand this, and you don’t want people to come back to the hospital. You want people to have good care plans that are highly tailored for them so they’re adherent, and you want to have effective strategies for managing care coordinators or managers.

Now, we have this social vulnerability index that we have a similar way of using AI to test against a population, reiterate multiple forms of regression analysis and come up with a highly specific approach to detecting the social vulnerability of that patient down to the level of a ZIP code around their economic and environmental risk. You can pull data off an API from Google Maps that shows food sources, crime rates, down to the level of a ZIP code. All that information, transportation methods, etc., we can integrate that with all that other clinical data in that data model.

We can now take a vaster amount of data that will not only get us that clinical risk, but also the social, environmental and economic risk. Then, as a health system, you can deploy your resources carefully.

Editor’s note: Responses have been edited for brevity and clarity.

CloudHealth’s Kinsella weighs in on VMware, cloud management

VMware surprised many customers and industry watchers at its annual user conference, VMworld 2018, held this week, with its acquisition of CloudHealth Technologies, a multi-cloud management tool vendor. This went down only days before CloudHealth cut the ribbon on its new Boston headquarters. Joe Kinsella, CTO and founder at CloudHealth, spoke with us about events leading up to the acquisition, as well as his thoughts on the evolution of the cloud market.

Why sell now? And why VMware?

Joe Kinsella: A year ago, we raised a [Series] D round of funding of $46 million. The reason we did that is because we had no intention of doing anything other than build a large public company — until recently. A few months ago, VMware approached us with a partnership conversation. We talked about what we could do together. It became clear that the two of us together would accelerate the vision that I set out to do six years ago. We could do what we set out to do faster, on the platform of VMware.

How will VMware and CloudHealth rationalize the products that overlap within the two companies?

Kinsella: The CloudHealth brand will be a unifying brand across their own portfolio of SaaS and cloud products. That said, in the process of doing that, there will be overlap, but also some opportunities, and we will have to rationalize that over time. There is no need to do it in the short term. [VMware] vRealize and CloudHealth are successful products. We will integrate with VMware, but we will continue to offer a choice.

What was happening in the market to drive your decision?

Kinsella: Cloud management has evolved rapidly. What drives it [is something] I call the ‘three phases of cloud adoption.’ In phase one, enterprises said they would not go to the public cloud, despite the fact that their lines of business used the public cloud. Phase two was this irrational exuberance that everything went to the public cloud. [Enterprises in phase three] have settled on a nuanced approach to leverage a broad portfolio of cloud options, which means many public clouds, many private clouds and a diverse set of SaaS products. Managing a single cloud is complex; managing [such] a diverse portfolio is incredibly complex.

What’s your view today of cloud market adoption and how the landscape is evolving?

Kinsella: Today, the majority of workloads still run on premises. But public cloud growth has been dramatic, as we all know. Amazon remains the market leader by a good amount. [Microsoft’s] Azure business has grown quickly, but a lot of that growth includes the Office 365 product as well. Google has not been a big player until recently. It’s only been in the past 12 months that we felt the Google strategy that Diane Greene started to execute in the market. Alibaba has made some big moves and is a cloud to watch. Though Amazon is still far ahead, it’s finally getting competitive.

But customers don’t really just focus on one source anymore, correct?

Kinsella: I’ve talked about the concept of the heterogenous cloud, which is building applications and business services that take advantage of services from multiple service providers. We think of them as competitors today, but instead of buying services from Amazon, Google or Azure, you might build a business service that takes advantage of services from all three. I think that’s the future. I believe these multiple cloud providers will continue to exist and be differentiated based on the services they provide.

M-Files cloud subscription turns hybrid with M-Files Online

To reflect the desire for flexibility and regulatory shifts in the enterprise content management industry, software vendors are starting to offer users options for storing data on premises or in a cloud infrastructure.

The M-Files cloud strategy is a response to these industry changes. The information management software vendor has released M-Files Online, which enables users to manage content both in the cloud and behind a firewall on premises, under one subscription.

While not the first ECM vendor to offer hybrid infrastructure, the company claims that with the new M-Files cloud system, it is the first ECM software provider to offer both under one software subscription.

“What I’ve seen going on is users are trying to do two things at once,” said John Mancini, chief evangelist for the Association of Intelligent Information Management (AIIM). “On one hand, there are a lot of folks that have significant investment in legacy systems. On the other hand, they’re realizing quickly that the old approaches aren’t working anymore and are driving toward modernizing the infrastructure.”

Providing customer flexibility

It’s difficult, time-consuming and expensive to migrate an organization’s entire library of archives or content from on premises to the cloud, yet it’s also the way the industry is moving as emerging technologies like AI and machine learning have to be cloud-based to be able to function. That’s where a hybrid cloud approach can help organizations handle the migration process.

According to a survey by Mancini and AIIM, and sponsored by M-Files, 48% of the 366 professionals surveyed said they are moving toward a hybrid of cloud and on-premises delivery methods for information management over the next year, with 36% saying they are moving toward cloud and 12% staying on premises.

“We still see customers who are less comfortable moving it all to the cloud, and there are certain use cases where that makes sense,” said Mika Javanainen, vice president of product marketing at M-Files. “This is the best way to provide our customers flexibility and make sure they don’t lag behind. They may still run M-Files on premises, but be using the cloud services to add intelligence to your data.”

The M-Files cloud system and its new online offering act as a hub for an organization’s storehouse of information.

“The content resides where it is, but we still provide a unified UI and access to that content and the different repositories,” Javanainen said.

An M-Files Online screenshot shows how the information management company brings together an organization’s content from a variety of repositories.

Moving to the cloud to use AI

While the industry is moving more toward cloud-based ECM, 60% of respondents in the AIIM survey still want some sort of on-premises storage.

“There are some parts of companies that are quite happy with how they are doing things now, or may understand the benefits of cloud but are resistant to change,” said Greg Milliken, senior vice president of marketing at M-Files. “[M-Files Online] creates an opportunity that allows users that may have an important process they can’t deviate from to access information in the traditional way while allowing other groups or departments to innovate.”

One of the largest cloud drivers is to realize the benefit of emerging business technologies, particularly AI. While AI can conceivably work on premises, that venue is inherently flawed due to the inability to store enough data on premises.

M-Files cloud computing can open up the capabilities of AI for the vendor’s customers. But for organizations to benefit from AI, they need to overcome fears of the cloud, Mancini said.

“Organizations need to understand that cloud is coming, more data is coming and they need to be more agile,” he said. “They have to understand the need to plug in to AI.”

Potential problems with hybrid clouds

Having part of your business that you want more secure to run on premises and part to run in the cloud sounds good, but it can be difficult to implement, according to Mancini.

“My experience talking to people is that it’s easier said than done,” Mancini said. “Taking something designed in a complicated world and making it work in a simple, iterative cloud world is not the easiest thing to do. Vendors may say we have a cloud offering and an on-premises offering, but the real thing customers want is something seamless between all permutations.”

Regardless of whether an organization manages content through a cloud or behind a firewall, there are undoubtedly dozens of other software systems — file shares, ERP, CRM — that businesses work with and hope to integrate their information with. The real goal of ECM vendors and those in the information management space, according to Mancini, is to get all those repositories working together.

“What you’re trying to get to is a system that is like a set of interchangeable Lego blocks,” Mancini said. “And what we have now is a mishmash of Legos, Duplos, Tinker Toys and erector sets.”

M-Files claims its data hub approach — bringing all the disparate data under one UI via an intelligent metadata layer that plugs into the other systems — succeeds at this.

“We approach this problem by not having to migrate the data — it can reside where it is and we add value by adding insights to the data with AI,” Javanainen said.

M-Files Online, which was released Aug. 21, is generally available to customers. M-Files declined to provide detailed pricing information.

Experts skeptical an AWS switch is coming

Industry experts said AWS has no need to build and sell a white box data center switch as reported last week but could help customers by developing a dedicated appliance for connecting a private data center with the public cloud provider.

The Information reported last Friday AWS was considering whether to design open switches for an AWS-centric hybrid cloud. The AWS switch would compete directly with Arista, Cisco and Juniper Networks and could be available within 18 months if AWS went through with the project. AWS has declined comment.

Industry observers said this week the report could be half right. AWS customers could use hardware dedicated to establishing a network connection to the service provider, but that device is unlikely to be an AWS switch.

“A white box switch in and of itself doesn’t help move workloads to the cloud, and AWS, as you know, is in the cloud business,” said Brad Casemore, an analyst at IDC.

What AWS customers could use isn’t an AWS switch, but hardware designed to connect a private cloud to the infrastructure-as-a-service provider, experts said. Currently, AWS’ software-based Direct Connect service for the corporate data center is “a little kludgy today and could use a little bit of work,” said an industry executive who requested his name not be used because he works with AWS.

“It’s such a fragile and crappy part of the Amazon cloud experience,” he said. “The Direct Connect appliance is a badly needed part of their portfolio.”

AWS could also use a device that provides a dedicated connection to a company’s remote office or campus network, said John Fruehe, an independent analyst.  “It would speed up application [service] delivery greatly.”

Indeed, Microsoft recently introduced the Azure Virtual WAN service, which connects the Azure cloud with software-defined WAN systems that serve remote offices and campuses. The systems manage traffic through multiple network links, including broadband, MPLS and LTE.

Connectors to AWS, Google, Microsoft clouds

For the last couple of years, AWS and its rivals Google and Microsoft have been working with partners on technology to ease the difficulty of connecting to their respective services.

In October 2016, AWS and VMware launched an alliance to develop the VMware Cloud on AWS. The platform would essentially duplicate on AWS a private cloud built with VMware software. As a result, customers of the vendors could use a single set of tools to manage and move workloads between both environments.

A year later, Google announced it had partnered with Cisco to connect Kubernetes containers running on Google Cloud with Cisco’s hyper-converged infrastructure, called HyperFlex. Cisco would also provide management tools and security for the hybrid cloud system.

Microsoft, on the other hand, offers a hybrid cloud platform called the Azure Stack. The software runs on third-party hardware and shares its code, APIs and management portal with Microsoft’s Azure public cloud to create a common cloud-computing platform. Microsoft hardware partners for Azure Stack include Cisco, Dell EMC and Hewlett Packard Enterprise.

Financial firms, vendors push self-service software delivery

The heavily regulated financial industry requires more help with software delivery than any other. In particular, self-service software delivery appeals to firms that frequently revise codebases to accommodate policy changes and other forces.

“People don’t like writing [help desk] tickets. And, often, engineers don’t want to interact with other people at all,” said Niko Kurtti, a production engineer at Ottawa-based e-commerce platform vendor Shopify, who was half-joking at the recent QCon conference in New York City. “It’s just easier to have the machine take care of it.”

A handful of companies have stepped up to address this issue. Atomist has added self-service features to its Software Delivery Machine (SDM), with its API for Software that manages the different parts of the DevOps pipeline.

“It’s more like self-service with guardrails,” said Rod Johnson, CEO and co-founder of Atomist, based in San Francisco. “They want things to be easy and quick, but also regulated.”

Atomist adheres to the policies companies uniquely apply to their systems. So, for example, if a team wants to add a security scan for errant open source code, rather than updating each microservice by hand, Atomist makes the change once and replicates it across all the system’s services. The self-service software aspect of Atomist helps developers and DevOps teams consistently create projects and avoid IT help desk tickets — or tickets with other departments in the organization — to test or add new features.

Another entry into the self-service space is LaunchDarkly, based in Oakland, Calif., which sells a management platform for developers and operations teams to control the feature lifecycle from conception to delivery. The company’s software integrates release management into the development process and focuses on delivery. It puts all the potential features into the release and allows developers to flip a switch on features and functions for different end users. This lets a common code set deliver different functions and test different code simultaneously, rather than maintaining multiple releases and code branches.
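
LaunchDarkly's SDKs handle flag evaluation in real deployments, but the underlying idea, a single code set whose behavior is switched per user segment without a new release, can be shown generically. The sketch below is illustrative only and is not LaunchDarkly's API:

```python
# Generic feature-flag gate: one codebase, different behavior per user segment.
# Illustrative only; real systems such as LaunchDarkly evaluate flags via an SDK.

FLAGS = {
    # flag name -> set of user segments that should see the new behavior
    "new-checkout-flow": {"beta-testers", "internal"},
}

def is_enabled(flag: str, user_segment: str) -> bool:
    """Check whether a feature is turned on for this user's segment."""
    return user_segment in FLAGS.get(flag, set())

def checkout(user_segment: str) -> str:
    if is_enabled("new-checkout-flow", user_segment):
        return "new checkout flow"   # code under test, shipped dark to everyone else
    return "existing checkout flow"  # default path for all other users

if __name__ == "__main__":
    print(checkout("beta-testers"))  # new checkout flow
    print(checkout("customers"))     # existing checkout flow
```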

Rod Johnson, CEO, Atomist

Other examples of companies that sell similar products include startups Netsil, which focuses on monitoring Kubernetes and Docker-based microservices apps; Mobincube, which primarily targets mobile app development; and Bonitasoft, which comes out of the business process management and workflow engine world.

Some enterprises, though, choose to skip this product class and roll out their own self-service software delivery options, with scripts and integration with native tools.

Pulumi doesn’t necessarily aim to compete directly in the automation space, but it does want to standardize cloud app development and shares the idea of defining things like configuration in code, rather than YAML. Also, CloudBees and the Jenkins community have a complementary service, Jenkins X, which integrates Kubernetes with Jenkins.
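
For example, a Pulumi program expresses infrastructure in an ordinary programming language rather than YAML. The minimal Python sketch below defines an S3 bucket; the AWS provider and bucket name are just examples, and running `pulumi up` would apply it:

```python
import pulumi
from pulumi_aws import s3

# Infrastructure expressed as ordinary Python instead of YAML:
# running `pulumi up` creates or updates this bucket to match the code.
bucket = s3.Bucket("app-artifacts")

# Export the bucket name so other stacks or scripts can reference it.
pulumi.export("bucket_name", bucket.id)
```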

Atomist addresses software delivery as a per-organization or per-team concern, rather than per project, which enables customers to apply consistent policies and governance. It provides a consistent model to automate tasks that matter to software teams, such as project creation and dependency updates.

CI/CD evolves with code automation and containers

With SDM, Atomist is creating a programmable pipeline that bridges a gap between coding languages and delivery pipelines, which some view as the next big innovation to follow CI/CD.

“Atomist is applying programming language concepts to add a new kind of automation and predictability to software delivery,” said Mik Kersten, CEO of Tasktop Technologies, a DevOps toolmaker based in Vancouver, B.C.

To date, the worlds of application code and CI/CD have been disconnected and based on completely different technologies and paradigms. Atomist’s programmable domain models span from the application to deployment, so DevOps shops can code and use automations and directly interact with events in the pipeline through Slack, Kersten noted.

The ability to code automations is particularly attractive, said one software architect for a New York investment bank, who declined to be identified. “That would save our developers and DevOps [teams] lots of time and effort,” he said.

Atomist pledged SDM’s support for Docker and Kubernetes at the DockerCon 2018 conference in San Francisco last month. With this support, any Atomist user’s SDM would respond to code change events from the Atomist platform, automatically build new Docker containers as required and deploy them into the right Kubernetes environments based on that user’s unique software delivery needs established via their own policies.
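
Atomist's SDM API itself isn't shown here, but the delivery flow described above (react to a code change, build a container image and roll it out to the right Kubernetes environment under policy) reduces to a few steps. The sketch below is illustrative only; it shells out to the docker and kubectl CLIs, and the registry, deployment name and namespace are assumptions:

```python
import subprocess

def run(cmd: list[str]) -> None:
    """Run a CLI command and fail loudly if it does not succeed."""
    subprocess.run(cmd, check=True)

def on_code_change(repo_dir: str, commit_sha: str, environment: str) -> None:
    """Build a new image for the commit and roll it out to the target cluster."""
    image = f"registry.example.com/myapp:{commit_sha}"   # assumed registry/repo

    run(["docker", "build", "-t", image, repo_dir])
    run(["docker", "push", image])

    # Policy decides which namespace/deployment this change is allowed to reach.
    run([
        "kubectl", "set", "image",
        "deployment/myapp", f"myapp={image}",
        "--namespace", environment,
    ])

if __name__ == "__main__":
    on_code_change(".", "a1b2c3d", environment="staging")
```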

“The actual management of containers within the software delivery process has been lacking in the market so far,” said Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass. “By integrating Dockerized apps and K8s into their SDM, as well as ChatOps and other tools, Atomist is looking to help operationalize container deployments, which is the next area of focus, as container applications go into broader adoption.”