
Microsoft’s new approach to hybrid: Azure services when and where customers need them | Innovation Stories

As business computing needs have grown more complex and sophisticated, many enterprises have discovered they need multiple systems to meet various requirements – a mix of technology environments in multiple locations, known as hybrid IT or hybrid cloud.

Technology vendors have responded with an array of services and platforms – public clouds, private clouds and the growing edge computing model – but there hasn’t necessarily been a cohesive strategy to get them to work together.

“We got here in an ad hoc fashion,” said Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise. “Customers didn’t have a strategic model to work from.”

Instead, he said, various business owners in the same company may have bought different software as a service (SaaS) applications, or developers may have independently started leveraging Amazon Web Services, Azure or Google Cloud Platform to develop a set of applications.

At its Ignite conference this week in Orlando, Florida, Microsoft announced its solution to such cloud sprawl. The company has launched a preview of Azure Arc, which offers Azure services and management to customers on other clouds or infrastructure, including those offered by Amazon and Google.

John JG Chirapurath, general manager for Azure data, blockchain and artificial intelligence at Microsoft, said the new service is both an acknowledgement of, and a response to, the reality that many companies face today. They are running various parts of their businesses on different cloud platforms, and they also have a lot of data stored on their own new or legacy systems.

In all those cases, he said, these customers are telling Microsoft they could use the benefits of Azure cloud innovation whether or not their data is stored in the cloud, and they could benefit from having the same Azure capabilities – including security safeguards – available to them across their entire portfolio.

“We are offering our customers the ability to take their services, untethered from Azure, and run them inside their own datacenter or in another cloud,” Chirapurath said.

Microsoft says Azure Arc builds on years of work the company has done to serve hybrid cloud needs. For example, Azure Resource Manager, released in 2014, was created with the vision that it would manage resources outside of Azure, including in companies’ internal servers and on other clouds.

That flexibility can help customers operate their services on a mix of clouds more efficiently, without purchasing new hardware or switching among cloud providers. Companies can use a public cloud to obtain computing power and data storage from an outside vendor, but they can also house critical applications and sensitive data on their own premises in a private cloud or server.

Then there’s edge computing, which stores data close to where it’s used, in between the company and the public cloud: for example, on customers’ mobile devices or on sensors in smart buildings such as hospitals and factories.


That’s compelling for companies that need to run AI models on systems that aren’t reliably connected to the cloud, or to make computations more quickly than if they had to send large amounts of data to and from the cloud. But it also must work with companies’ cloud-based, internet-connected systems.

“A customer at the edge doesn’t want to use different app models for different environments,” said Mark Russinovich, Azure chief technology officer. “They need apps that span cloud and edge, leveraging the same code and same management constructs.”

Streamlining and standardizing a customer’s IT structure gives developers more time to build applications that produce value for the business instead of managing multiple operating models. And enabling Azure to integrate administrative and compliance needs across the enterprise, automating system updates and security enhancements, brings additional savings in time and money.

“You begin to free up people to go work on other projects, which means faster development time, faster time to market,” said HPE’s Vogel. HPE is working with Microsoft on offerings that will complement Azure Arc.

Arpan Shah, general manager of Azure infrastructure, said Azure Arc allows companies to use Azure’s governance tools for their virtual machines, Kubernetes clusters and data across different locations, helping ensure companywide compliance on things like regulations, security, spending policies and auditing tools.
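For a sense of what that looks like in practice, here is a rough sketch of onboarding a server running outside Azure with the Azure Arc Connected Machine agent. This is an illustration rather than anything from the announcement; the resource group, tenant, subscription and region values are placeholders, and the exact onboarding flow may differ for the preview.

```powershell
# Sketch: run on a server outside Azure after installing the Azure Connected Machine agent.
# All IDs and names below are placeholders.
azcmagent connect `
    --resource-group "hybrid-servers-rg" `
    --tenant-id "00000000-0000-0000-0000-000000000000" `
    --subscription-id "11111111-1111-1111-1111-111111111111" `
    --location "eastus"

# Once connected, the machine shows up in Azure as a resource, so tags, Azure Policy
# and role-based access control can be applied to it much like a native Azure VM.
```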

Azure Arc is underpinned in part by Microsoft’s commitment to technologies that customers are using today, including virtual machines, containers and Kubernetes, an open source system for organizing and managing containers. That makes clusters of applications easily portable across a hybrid IT environment – to the cloud, the edge or an internal server.

“It’s easy for a customer to put that container anywhere,” Chirapurath said. “Today, you can keep it here. Tomorrow, you can move it somewhere else.”

Microsoft says these latest Azure updates reflect an ongoing effort to better understand the complex needs of customers trying to manage their Linux and Windows servers, Kubernetes clusters and data across environments.

“This is just the latest wave of this sort of innovation,” Chirapurath said. “We’re really thinking much more expansively about customer needs and meeting them according to how they’d like to run their applications and services.”

Top image: Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise, with a prototype of memory-driven computing. HPE is working with Microsoft on offerings that will complement Azure Arc. Photo by John Brecher for Microsoft.


Go to Original Article
Author: Microsoft News Center

Microsoft Power Platform adds chatbots; Flow now Power Automate

More bots and automation tools went live on the Microsoft Power Platform, Microsoft announced today. Formally introducing the tools, Microsoft said they will make data flow between applications such as SharePoint, OneDrive and Dynamics 365, and create more efficiencies with custom apps.

The more than 400 capabilities added to the Microsoft Power Platform focus on expanding its robotic process automation potential for users, as well as new integrations between the platform and Microsoft Teams, according to a blog post by James Phillips, corporate vice president of business applications at Microsoft.

Some of those include robotic process automation (RPA) tools for Microsoft Power Automate, formerly known as Flow, which makes AI tools easier to add into PowerApps. Also newly available are tools for creating user interfaces in Power Automate.

AI Builder adds a point-and-click means to fold common processes such as forms processing, object detection and text classification into apps — processes commonly used for SharePoint and OneDrive content curation.

Microsoft is adding these tools, as well as new security features to analytics platform Power BI, in part to coax customers who remain on premises into the Azure cloud, said G2 analyst Michael Fauscette.

PowerApps reduces the development work needed to create connections between systems in the cloud, such as linking content in OneDrive and SharePoint with work being done in Dynamics 365 CRM, Teams and ERP applications.

Microsoft Power Automate, a low-code app-design tool, is the new version of Flow.

Chatbots go live

Also announced as generally available at Microsoft Ignite are Power Virtual Agents, do-it-yourself chatbots on the Microsoft Power Platform.

They’ll likely first be used by customer service teams on Dynamics 365, said Constellation Research analyst R “Ray” Wang, but they could spread to other business areas such as human resources, which could use the bots to answer common questions during employee recruiting or onboarding.


While some companies may hire outside consultants and developers to build custom chatbots instead of making their own on the Microsoft Power Platform, Wang said others may try building them internally. Large call centers employing many human agents and running on Microsoft applications would be logical candidates for piloting new bots.

“I think they’ll start coming here to build their virtual agents,” Wang said. “[Bot] training will be an issue, but it’s a matter of scale. If an agent is costing you $15 an hour and the chatbot 15 cents an hour … it’s all about call deflection.”

Microsoft Power Platform evolves

PowerApps, which launched in late 2015, originally found utility with users of Microsoft Dynamics CRM who needed to automate and standardize processes across data sets inside the Microsoft environment and connect to outside platforms such as Salesforce, said Gartner analyst Ed Anderson.

Use quickly spread to SharePoint, OneDrive and Dynamics ERP users, as they found that Flow — a low-code app-design tool — enabled the creation of connectors and apps without developer overhead. Third-party consultants and developers also used PowerApps to speed up deliverables to clients. Power BI, Power Automate and PowerApps together became known as the Microsoft Power Platform a year ago.

“PowerApps are really interesting for OneDrive and SharePoint because it lets you quickly identify data sources and quickly do something meaningful with them — connect them together, add some logic around them or customized interfaces,” Anderson said.

Go to Original Article

Google Cloud networking BYOIP feature could ease migrations

Google hopes a new networking feature will spur more migrations to its cloud platform and make the process easier at the same time.

Customers can now bring their existing IP addresses to Google Cloud’s network infrastructure in all of its regions around the world. Those who do can speed up migrations, cut downtime and lower costs, Google said in a blog post.

“Each public cloud provider is looking to reduce the migration friction between them and the customer,” said Stephen Elliot, an analyst at IDC. “Networking is a big part of that equation and IP address management is a subset.”

Bitly, the popular hyperlink-shortening service, is an early user of Google Cloud bring your own IP (BYOIP).

Many Bitly customers have custom web domains that are attached to Bitly IP addresses, and switching to new addresses on Google Cloud networking would have been highly disruptive, according to the blog. Bitly also saved money via BYOIP because it didn’t have to maintain a co-location facility for the domains tied to Bitly IPs.

BYOIP could help relieve cloud migration headaches

IP address management is a well-established discipline in enterprise IT. It is one that has become more burdensome over time, not only due to workload migrations to the cloud, but also the vast increase in internet-connected devices and web properties companies have to wrangle.


AWS offers BYOIP through its Virtual Private Cloud service but hasn’t rolled it out in every region. Microsoft has yet to create a formal BYOIP service, but customers who want to retain their IP addresses can achieve a workaround through Azure ExpressRoute, its service for making private connections between customer data centers and Azure infrastructure.


Microsoft and AWS will surely come up to par with Google Cloud networking on BYOIP, eventually. But as the third-place contestant among hyperscale cloud providers, Google — which has long touted its networking chops as an advantage — could gain a competitive edge in the meantime.

IP address changes are a serious pain point for enterprise migrations of any sort, particularly in the cloud, said Eric Hanselman, chief analyst at 451 Research.

“Hard-coded addresses and address dependencies can be hard to find,” he added. “They wind up being the ticking time bomb in many applications. They’re hard to find beforehand, but able to cause outages during a migration that are problematic to troubleshoot.”


Overall, the BYOIP concept provides a huge benefit, particularly for large over-the-internet services, according to Deepak Mohan, another analyst at IDC.

“They often have IPs whitelisted at multiple points in the delivery and the ability to retain IP greatly simplifies the peripheral updates needed for a migration to a new back-end location,” Mohan said.

Go to Original Article

Consider these Office 365 alternatives to public folders

As more organizations consider a move from Exchange Server, public folders continue to vex many administrators for a variety of reasons.

Microsoft supports public folders in its latest Exchange Server 2019 as well as Exchange Online, but it is pushing companies to adopt some of its newer options, such as Office 365 Groups and Microsoft Teams. An organization pursuing alternatives to public folders will find there is no direct replacement for this Exchange feature. The reason for this is the nature of the cloud.

Microsoft set its intentions early on under Satya Nadella’s leadership with its “mobile first, cloud first” initiative back in 2014. Microsoft aggressively expanded its cloud suite with new services and features. This fast pace meant that migrations to cloud services such as Office 365 offered a different experience depending on the timing; moving to Office 365 early could mean a different feature set than waiting several months. This was the case for migrating public folders from on-premises Exchange Server to Exchange Online, which evolved over time and also coincided with the introduction of Microsoft Teams, Skype for Business and Office 365 Groups.

The following breakdown of how organizations use public folders can help Exchange administrators with their planning when moving to the new cloud model on Office 365.

Organizations that use public folders for email only

Public folders are a great place to store email that multiple people within an organization need to access. For example, an accounting department can use public folders to let department members access the shared accounting email content from Outlook.


Office 365 offers similar functionality to public folders through its shared mailbox feature in Exchange Online. A shared mailbox stores email in folders that multiple users can access.

A shared mailbox has a few advantages over a public folder, the primary one being accessibility through the Outlook mobile app or Outlook on the web. This allows users to connect from their smartphones or a standard browser to review email going to the shared mailbox. Public folder access, by contrast, requires opening the full Outlook client.
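As a brief illustration (a minimal sketch rather than anything from the article; the mailbox name, alias and user address are hypothetical), a shared mailbox can be created and shared with team members through Exchange Online PowerShell:

```powershell
# Sketch: create a shared mailbox for an accounting team and grant a user access.
# Names and addresses are placeholders.
New-Mailbox -Shared -Name "Accounting" -DisplayName "Accounting" -Alias accounting

# Full Access lets the user open the mailbox; Send As lets them send from its address.
Add-MailboxPermission -Identity "Accounting" -User "alex@contoso.com" -AccessRights FullAccess -InheritanceType All
Add-RecipientPermission -Identity "Accounting" -Trustee "alex@contoso.com" -AccessRights SendAs -Confirm:$false
```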

Organizations that use public folders for email and calendars

For organizations that rely on both email and calendars in their public folders, Microsoft has another cloud alternative that comes with a few extra perks.

Office 365 Groups not only lets users collaborate on email and calendars, but also stores files in a shared OneDrive for Business page, tasks in Planner and notes in OneNote. Office 365 Groups is another option for email and calendars that is available on any device. Group owners manage their own permissions and membership, which lifts some of the burden of security administration from the IT department.
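As a rough sketch (the group name, alias and addresses below are hypothetical), an Office 365 Group can be created and its members and owners assigned with Exchange Online PowerShell:

```powershell
# Sketch: create a private Office 365 Group, then add members and promote an owner.
# Names and addresses are placeholders.
New-UnifiedGroup -DisplayName "Accounting Team" -Alias "accounting-team" -AccessType Private

# A user must be a member before they can be promoted to owner.
Add-UnifiedGroupLinks -Identity "accounting-team" -LinkType Members -Links "lee@contoso.com","alex@contoso.com"
Add-UnifiedGroupLinks -Identity "accounting-team" -LinkType Owners -Links "lee@contoso.com"
```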

Microsoft provides migration scripts to assist with the move of content from public folders to Office 365 Groups.

Organizations that use public folders for data archiving

Some organizations that prefer to stay with a known quantity and keep the same user experience also have the choice to keep using public folders in Exchange Online.

The reasons for this preference will vary, but the most likely scenario is a company that wants to keep email for archival purposes only. The migration from Exchange on-premises public folders still requires administrators to use Microsoft’s migration scripts.

Organizations that use public folders for project communication and as a data sharing repository

The Exchange public folders feature is excellent for sharing email, contacts and calendar events. For teams working on projects, the platform shines as a way to centralize information that’s relevant to the specific project or department. But it’s not as expansive as other collaboration tools on Office 365.

Take a closer look at some of the other modern collaboration tools available in Office 365 in addition to Microsoft Teams and Office 365 Groups, such as Kaizala. These offerings extend the organization’s messaging abilities to include real-time chat, presence status and video conferencing.

Go to Original Article

Value-based care models hung up on lack of resources

A survey of more than 1,000 healthcare providers finds a lack of resources to be the biggest hurdle when shifting to a value-based care reimbursement model.

A value-based care model pays providers based on patient outcomes rather than the amount of services provided. The Centers for Medicare & Medicaid Services began promoting value-based care in 2008. Support for the initiative quickly followed with legislation, including the Affordable Care Act, which passed in 2010.

Despite the push, the shift from fee-for-service to value-based care has been slow but steady. Indeed, data analytics company Definitive Healthcare LLC found that the number of U.S. states and territories with value-based care programs has risen from three in 2011 to 48 in 2018.

This year, the company surveyed more than 1,000 healthcare leaders to determine the state of value-based care, as well as what implementation will look like in 2020.

Value-based care: Barriers and accelerators

Kate Shamsuddin, senior vice president of strategy at Definitive Healthcare, said she was surprised that 25.3% of respondents pointed to lack of resources as the biggest barrier to implementing a value-based care model, given the initiative dates back to 2008.


“We would’ve anticipated that the number of resources required to support value-based care would’ve been increasing over time to support the success of these programs and initiatives,” she said. “So that was pretty surprising to see that at the top of the list as a barrier.”

Survey takers also pointed to “gaps in interoperability” and the “unpredictability of revenue stream” as barriers to implementing value-based care programs. “Changing regulations and policies” was another barrier identified by 16.2% of respondents.

Shamsuddin was struck by the “changing regulations and policies” barrier because of the amount of visibility the federal government has provided into policy implementation. Additionally, Shamsuddin said that while changing policies is listed as a barrier, 16.1% of respondents also selected it as a factor that is accelerating the adoption of value-based care.

Almost half, 44.8%, of survey respondents cited “appropriate provider compensation and incentives” as the biggest reason why adoption of a value-based care model moved forward within their organization. In a value-based care model, providers can receive bonuses for performing above-quality care standards. Yet they can also be penalized if their performance falls below those standards.

Shamsuddin said being able to adjust provider compensation and incentives is one way to ensure all stakeholders are “growing in the same direction” when implementing a value-based care program.

“That is one I think we’ll continue to see as an accelerator, especially with healthcare systems being a little bit more, let’s call it experimental, in how they’re willing to move away from the fee-for-service model,” she said.

What CIOs should pay attention to in 2020

As value-based care model implementation evolves in 2020, Shamsuddin said it will be important for healthcare CIOs to keep an eye on federal regulation and policy, which survey takers said was both a barrier and an accelerator.  

Additionally, one of the main areas that will cause change in value-based care program implementation is a growing understanding among providers of how accountable care organizations (ACOs) and bundled payment models such as the Medicare Shared Savings Program work, according to 31.1% of survey respondents.

ACOs and bundled payment models, or alternative payment models that require providers to take on risk and share in the losses and benefits of patient care, will “evolve and become easier to understand,” making it more likely for providers to transition to a value-based care model, according to the survey.

ACOs are associations of hospitals, providers and insurers that assume medical and financial responsibility for their patients; the Medicare Shared Savings Program is a voluntary program that encourages healthcare providers to come together as an ACO. The program provides different participation options to ACOs and allows them to take on varying levels of risk and responsibility for patients.

Consolidation within healthcare will also create what Shamsuddin called a “wild card” in how effective value-based programs will be. When two health systems are thinking about combining, Shamsuddin said it will require healthcare providers to be “open and strategic” around how they’re going to bring in value-based care initiatives during a merger.

Go to Original Article

How to Select a Placement Policy for Site-Aware Clusters

One of the more popular failover clustering enhancements in Windows Server 2016 and 2019 is the ability to define the different fault domains in your infrastructure. A fault domain lets you scope a single point of failure in hardware, whether this is a Hyper-V host (a cluster node), its enclosure (chassis), its server rack or an entire datacenter. To configure these fault domains, check out the Altaro blog post on configuring site-aware clusters and fault domains in Windows Server 2016 & 2019. After you have defined the hierarchy between your nodes, chassis, racks, and sites, then the cluster’s placement policies, failover behavior, and health checks will be optimized. This blog will explain the automatic placement policies and advanced settings you can use to maximize the availability of your virtual machines (VMs) with site-aware clusters.
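As a quick refresher (a minimal sketch; the site, rack and node names are placeholders for your own environment), fault domains are created and nested with the cluster fault domain cmdlets:

```powershell
# Sketch: define two sites and a rack, then build the hierarchy for an existing cluster node.
# All names are placeholders.
New-ClusterFaultDomain -Name "Primary-Site" -FaultDomainType Site -Location "Primary datacenter"
New-ClusterFaultDomain -Name "Secondary-Site" -FaultDomainType Site -Location "Secondary datacenter"
New-ClusterFaultDomain -Name "Rack01" -FaultDomainType Rack

# The rack belongs to the primary site, and an existing cluster node belongs to the rack.
Set-ClusterFaultDomain -Name "Rack01" -Parent "Primary-Site"
Set-ClusterFaultDomain -Name "Node01" -Parent "Rack01"
```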

Site-Aware Placement Based on Storage Affinity

From reading the earlier Altaro blog about fault tolerance, you may recall that resiliency is created by distributing identical (mirrored) Storage Spaces Direct (S2D) disks across the different fault domains. Each node, chassis, rack or site may contain a copy of a VM’s virtual hard disks. However, you always want the VM to be in the same site as its disk for performance reasons, to avoid having the I/O transmitted across distance. In the event that a VM is forced to start in a separate site from its disk, the cluster will automatically live migrate the VM to the same site as its disk after about a minute. With site-awareness, the automatic enforcement of storage affinity between a VM and its disk is given the highest site placement priority.

Configuring Preferred Sites with Site-Aware Clusters

If you have configured multiple sites in your infrastructure, then you should consider which site is your “primary” site and which should be used as a backup. Many organizations will designate their primary site as the location closest to their customers or with the best hardware, and the secondary site as the failover location, which may have only enough hardware to support critical workloads. Some enterprises may deploy identical datacenters and distribute specific workloads to each location to balance their resources. If you are splitting your workloads across different sites, you can assign each clustered workload or VM (cluster group) a preferred site. Say you want your US-East VM to run in your primary datacenter and your US-West VM to run in your secondary datacenter; you could configure the following settings via PowerShell:
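(A sketch; it assumes site fault domains named “Primary-Site” and “Secondary-Site” and VM cluster groups named “US-East VM” and “US-West VM”.)

```powershell
# Pin each VM's cluster group to its preferred site.
# Site and group names are placeholders.
(Get-ClusterGroup -Name "US-East VM").PreferredSite = "Primary-Site"
(Get-ClusterGroup -Name "US-West VM").PreferredSite = "Secondary-Site"
```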

Designating a preferred site for the entire cluster will ensure that after a failure the VMs start in this location. After you have defined your sites by creating fault domains with New-ClusterFaultDomain, you can use the cluster-wide PreferredSite property to set the default location to launch VMs. Below is the PowerShell cmdlet:
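(A sketch; “Primary-Site” is a placeholder fault domain name.)

```powershell
# Set the cluster-wide default site, then read it back to confirm.
(Get-Cluster).PreferredSite = "Primary-Site"
(Get-Cluster).PreferredSite
```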

Be aware of your capacity if you usually distribute your workloads across two sites and they are forced to run in a single location, as performance will diminish with less hardware. Consider using the VM prioritization feature and disabling automatic VM restarts after a failure, as this will ensure that only the most important VMs will run. You can find more information in this Altaro blog on how to configure start order priority for clustered VMs.

To summarize, placement priority is based on:

  • Storage affinity
  • Preferred site for a cluster group or VM
  • Preferred site for the entire cluster

Site-Aware Placement Based on Failover Affinity

When site-awareness has been configured for a cluster, there are several automatic failover policies that are enforced behind the scenes. First, a clustered VM or group will always failover to a node, chassis or rack within the same site before it moves to a different site. This is because local failover is always faster than cross-site failover since it can bring the VM online faster by accessing the local disk and avoid any network latency between sites. Similarly, site-awareness is also honored by the cluster when a node is drained for maintenance. The VMs will automatically move to a local node, rather than a cross-site node.

Cluster Shared Volumes (CSV) disks are also site-aware. A single CSV disk can store multiple Hyper-V virtual hard disks while allowing their VMs to run simultaneously on different nodes. However, it is important that these VMs are all running on nodes within the same site. This is because the CSV service coordinates disk write access across multiple nodes to a single disk. In the case of Storage Spaces Direct (S2D), the disks are mirrored, so there are identical copies running in different locations (or sites). If VMs were writing to mirrored CSV disks in different locations and replicating their data without any coordination, it could lead to disk corruption. Microsoft ensures that this problem never occurs by enforcing that all VMs which share a CSV disk run on the local site and write to a single instance of that disk. Furthermore, CSV distributes the VMs across different nodes within the same site, balancing the workloads and the write requests sent to the coordinator node.

Site-Aware Health Checks and Cluster Heartbeats

Advanced cluster administrators may be familiar with cluster heartbeats, which are health checks between cluster nodes. This is the primary way in which cluster nodes validate that their peers are healthy and functioning. The nodes will ping each other once per predefined interval, and if a node does not respond after several attempts it will be considered offline, failed or partitioned from the rest of the cluster. When this happens, the host is not considered an active node in the cluster and it does not provide a vote towards cluster quorum (membership).

If you have configured multiple sites in different physical locations, then you should configure the frequency of these pings (CrossSiteDelay) and the number of health checks that can be missed (CrossSiteThreshold) before a node is considered failed. The greater the distance between sites, the more network latency will exist, so these values should be tweaked to minimize the chances of a false failover during times of high network traffic. By default, the pings are sent every 1 second (1000 milliseconds), and when 20 are missed, a node is considered unavailable and any workloads it was hosting are redistributed. You should test your network latency and cross-site resiliency regularly to determine whether you should increase or reduce these default values. Below is an example that changes the testing frequency from every 1 second to every 5 seconds and the number of missed responses from 20 to 30.
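(A sketch; CrossSiteDelay is expressed in milliseconds.)

```powershell
# Check the current cross-site heartbeat settings (defaults: 1000 ms delay, 20 missed heartbeats).
(Get-Cluster).CrossSiteDelay
(Get-Cluster).CrossSiteThreshold

# Send cross-site heartbeats every 5 seconds and allow 30 misses before a node is considered down.
(Get-Cluster).CrossSiteDelay = 5000
(Get-Cluster).CrossSiteThreshold = 30
```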

By increasing these values, it will take longer for a failure to be confirmed and for failover to happen, resulting in greater downtime. The default is 1 second x 20 misses = 20 seconds, and this example extends it to 5 seconds x 30 misses = 150 seconds.

Site-Aware Quorum Considerations

Cluster quorum is an algorithm that clusters use to determine whether there are enough active nodes in the cluster to run its core operations. For additional information, check out this series of blogs from Altaro about multi-site cluster quorum configuration. In a multi-site cluster, quorum becomes complicated since there could be a different number of nodes in each site. With site-aware clusters, “dynamic quorum” will be used to automatically rebalance the number of nodes which have votes. This means that as cluster nodes drop out of membership, the number of voting nodes changes. If there are two sites with an equal number of voting nodes, then the group of nodes assigned to the preferred site will stay online and run the workloads, while the lower-priority site will lose its votes and not host any VMs.

Windows Server 2012 R2 introduced a setting known as the LowerQuorumPriorityNodeID, which allowed you to set a node in a site as the least important, but this was deprecated in Windows Server 2016 and should no longer be used. The idea behind this was to easily declare which location was the least important when there were two sites with the same number of voting nodes. The site with the lower priority node would stay offline while the other partition would run the clustered workloads. That caused some confusion since the setting was only applied to a single host, but you may still see this setting referenced in blogs such as Altaro’s https://www.altaro.com/hyper-v/quorum-microsoft-failover-clusters/.

The site-awareness features added to the latest versions of Windows Server will greatly enhance a cluster’s resilience through a combination of user-defined policies and automatic actions. By creating the fault domains for clusters, it is easy to provide even greater VM availability by moving the workloads between nodes, chassis, racks, and sites as efficiently as possible. Failover clustering further reduces the configuration overhead by automatically applying best practices to make failover faster and keep your workloads online for longer.

Wrap-Up

Useful information yes? How many of you are using multi-site clusters in your organizations? Are you finding it easy to configure and manage? Having issues? If so, let us know in the comments section below! We’re always looking to see what challenges and successes people in the industry are running into!

Thanks for reading!


Go to Original Article
Author: Symon Perriman

Announcing AI Business School for Education for leaders, BDMs and students | Microsoft EDU

We live in an ever more digital, connected world. With the emergence of Artificial Intelligence, the opportunity to provide truly personalized, accessible learning and experiences to all students around the world is now upon us. Leaders in education have the opportunity to dramatically impact outcomes more than ever, from changing the way in which they engage with students throughout the student journey, to providing truly personalized learning, to improving operational efficiencies across the institution. At Microsoft, our mission in education is to empower every student on the planet to achieve more. Through that lens, we believe education leaders should consider opportunities to introduce new technologies like AI into their learning design and technology blueprint to expand the horizon for driving better outcomes and efficiencies for every student and institution around the world.

That’s why I’m excited to share that Microsoft’s AI Business School now offers a learning path for education. Designed for education leaders, decision-makers and even students, the Microsoft AI Business School for Education helps learners understand how AI can enhance the learning environment for all students—from innovations in the way we teach and assess, to supporting accessibility and inclusion for all students, to institutional effectiveness and efficiency with the use of AI tools. The course is designed to empower learners to gain specific, practical knowledge to define and implement an AI strategy. Industry experts share insights on how to foster an AI-ready culture and teach them how to use AI responsibly and with confidence. The learning path is available on Microsoft Learn, a free platform to support learners of all ages and experience levels via interactive, online, self-paced learning.

The Microsoft AI Business School for Education includes a number of modules across sales, marketing, technology and culture, but most importantly, it calls upon the expert insights from education leaders including:

  • Professor Peter Zemsky uses INSEAD’s Value Creation Framework to show the advantages AI presents for educational institutions and how an organization can determine the right approach that works with their strategy and goals.
  • Michelle Zimmerman, author of “Teaching AI: Exploring New Frontiers for Learning,” shares her experience as an educator and why she believes AI can transform how students learn.
  • David Kellerman of the University of New South Wales (UNSW) shares his perspective on what’s unique about AI in higher education and how using AI to transform the way institutions collaborate can help students become lifelong learners. As a key research institution in Australia, UNSW is focused on being a learning institution that collaborates across academic and operational departments as it uses AI to create a personalized learning journey for students.

The Microsoft AI Business School for Education joins a larger collection of industry-specific courses including financial services, manufacturing, retail, healthcare and government. With this holistic portfolio, the AI Business School can also help students learn about AI application across a number of industries and roles. We’ve already seen several universities and vocational colleges incorporate this curriculum into their courses across business, finance, economics and health-related degrees as a means of providing real-world examples of AI opportunity and impact.

New research has highlighted the importance of adopting AI to transform the learning experience for students. Last week at the Asian Summit on Education and Skills (ASES) in India, Microsoft and IDC unveiled the latest findings from the study “Future-Ready Skills: Assessing the use of AI within the Education sector in Asia Pacific.” The study found that Artificial Intelligence (AI) will help double the rate of innovation improvements for higher education institutions across the region. Despite 3 in 4 education leaders agreeing that AI is instrumental to an institution’s competitiveness, 68% of education institutions in the region have yet to embark on their AI journey. Those who have started integrating AI have seen improvements in student engagement, efficiency and competitiveness, as well as increased funding and accelerated innovation.

Microsoft is proud to be working with schools and institutions around the world to improve understanding of Artificial Intelligence and support leaders, educators and students in getting ready for the future, like the recent collaboration in India with CBSE to train more than 1,000 educators.


Go to Original Article
Author: Microsoft News Center

Bringing together deep bioscience and AI to help patients worldwide: Novartis and Microsoft work to reinvent treatment discovery and development – The Official Microsoft Blog

In the world of commercial research and science, there’s probably no undertaking more daunting – or more expensive – than the process of bringing a new medicine to market. For a new compound to make it from initial discovery through development, testing and clinical trials to finally earn regulatory approval can take a decade or more. Nine out of 10 promising drug candidates fail somewhere along the way. As a result, on average, it costs life sciences companies $2.6 billion to introduce a single new prescription drug.

This is much more than just a challenge for life sciences companies. Streamlining drug development is an urgent issue for human health more broadly. From uncovering new ways to treat age-old sicknesses like malaria, which still kills hundreds of thousands of people every year, to finding new cancer treatments, to developing new vaccines that prevent highly contagious diseases from turning into global pandemics, the impact in terms of lives saved worldwide would be enormous if we could make inventing new medicines faster.

As announced today, this is why Novartis and Microsoft are collaborating to explore how to take advantage of advanced Microsoft AI technology combined with Novartis’ deep life sciences expertise to find new ways to address the challenges underlying every phase of drug development – including research, clinical trials, manufacturing, operations and finance. In a recent interview, Novartis CEO Vas Narasimhan spoke about the potential for this alliance to unlock the power of AI to help Novartis accelerate research into new treatments for many of the thousands of diseases for which there is, as yet, no known cure.

In the biotech industry, there have been amazing scientific advances in recent years that have the potential to revolutionize the discovery of new, life-saving drugs. Because many of these advances are based on the ability to analyze huge amounts of data in new ways, developing new drugs has become as much an AI and data science problem as it is a biology and chemistry problem. This means companies like Novartis need to become data science companies to an extent never seen before. Central to our work together is a focus on empowering Novartis associates at each step of drug development to use AI to unlock the insights hidden in vast amounts of data, even if they aren’t data scientists. That’s because while the exponential increase in digital health information in recent years offers new opportunities to improve human health, making sense of all the data is a huge challenge.

The issue isn’t just a problem of the overwhelming volume. Much of the information exists in the form of unstructured data, such as research lab notes, medical journal articles, and clinical trial results, all of which is typically stored in disconnected systems. This makes bringing all that data together extremely difficult. Our two companies have a dream. We want all Novartis associates – even those without special expertise in data science – to be able to use Microsoft AI solutions every day, to analyze large amounts of information and discover new correlations and patterns critical to finding new medicines. The goal of this strategic collaboration is to make this dream a reality. This offers the potential to empower everyone from researchers exploring the potential of new compounds and scientists figuring out dosage levels, to clinical trial experts measuring results, operations managers seeking to improve supply chains more efficiently, and even business teams looking to make more effective decisions. And as associates work on new problems and develop new AI models, they will continually build on each other’s work, creating a virtuous cycle of exploration and discovery. The result? Pervasive intelligence that spans the company and reaches across the entire drug discovery process, improving Novartis’ ability to find answers to some of the world’s most pressing health challenges.

As part of our work with Novartis, data scientists from Microsoft Research and research teams from Novartis will also work together to investigate how AI can help unlock transformational new approaches in three specific areas. The first is about personalized treatment for macular degeneration – a leading cause of irreversible blindness. The second will involve exploring ways to use AI to make manufacturing new gene and cell therapies more efficient, with an initial focus on acute lymphoblastic leukemia. And the third area will focus on using AI to shorten the time required to design new medicines, using pioneering neural networks developed by Microsoft to automatically generate, screen and select promising molecules. As our work together moves forward, we expect that the scope of our joint research will grow.

At Microsoft, we’re excited about the potential for this collaboration to transform R&D in life sciences. As Microsoft CEO Satya Nadella explained, putting the power of AI in the hands of Novartis employees will give the company unprecedented opportunities to explore new frontiers of medicine that will yield new life-saving treatments for patients around the world.

While we’re just at the beginning of a long process of exploration and discovery, this strategic alliance marks the start of an important collaborative effort that promises to have a profound impact on how breakthrough medicines and treatments are developed and delivered. With the depth and breadth of knowledge that Novartis offers in bioscience and Microsoft’s unmatched expertise in computer science and AI, we have a unique opportunity to reinvent the way new medicines are created. Through this process, we believe we can help lead the way forward toward a world where high-quality treatment and care is significantly more personal, more effective, more affordable and more accessible.


Go to Original Article
Author: Steve Clarke

Cloud database services multiply to ease admin work by users

NEW YORK — Managed cloud database services are mushrooming, as more database and data warehouse vendors launch hosted versions of their software that offer elastic scalability and free users from the need to deploy, configure and administer systems.

MemSQL, TigerGraph and Yellowbrick Data all introduced cloud database services at the 2019 Strata Data Conference here. In addition, vendors such as Actian, DataStax and Hazelcast said they soon plan to roll out expanded versions of managed services they announced earlier this year.

Technologies like the Amazon Redshift and Snowflake cloud data warehouses have shown that there’s a viable market for scalable database services, said David Menninger, an analyst at Ventana Research. “These types of systems are complex to install and configure — there are many moving parts,” he said at the conference. With a managed service in the cloud, “you simply turn the service on.”

Menninger sees cloud database services — also known as database as a service (DBaaS) — as a natural progression from database appliances, an earlier effort to make databases easier to use. Like appliances, the cloud services give users a preinstalled and preconfigured set of data management features, he said. On top of that, the database vendors run the systems for users and handle performance tuning, patching and other administrative tasks.

Overall, the growing pool of DBaaS technologies provides good options “for data-driven companies needing high performance and a scalable, fully managed analytical database in the cloud at a reasonable cost,” said William McKnight, president of McKnight Consulting Group.

Database competition calls for cloud services

For database vendors, cloud database services are becoming a must-have offering to keep up with rivals and avoid being swept aside by cloud platform market leaders AWS, Microsoft and Google, according to Menninger. “If you don’t have a cloud offering, your competitors are likely to eat your lunch,” he said.

The Strata Data Conference was held from Sept. 23 to 26 in New York City.

Todd Blaschka, TigerGraph’s chief operating officer, also pointed to the user adoption of the Atlas cloud service that NoSQL database vendor MongoDB launched in 2016 as a motivating factor for other vendors, including his company. “You can see how big of a revenue generator that has been,” Blaschka said. Services like Atlas “allow more people to get access [to databases] more quickly,” he noted.

Blaschka said more than 50% of TigerGraph’s customers already run its namesake graph database in the cloud, using a conventional version that they have to deploy and manage themselves. But with the company’s new TigerGraph Cloud service, users “don’t have to worry about knowing what a graph is or downloading it,” he said. “They can just build a prototype database and get started.”

TigerGraph Cloud is initially available in the AWS cloud; support will also be added for Microsoft Azure and then Google Cloud Platform (GCP) in the future, Blaschka said.

Yellowbrick Data made its Yellowbrick Cloud Data Warehouse service generally available on all three of the cloud platforms, giving users a DBaaS alternative to the on-premises data warehouse appliance it released in 2017. Later this year, Yellowbrick also plans to offer a companion disaster recovery service that provides cloud-based replicas of on-premises or cloud data warehouses.

More cloud database services on the way

MemSQL, one of the vendors in the NewSQL database category, detailed plans for a managed cloud service called Helios, which is currently available in a private preview release on AWS and GCP. Azure support will be added next year, said Peter Guagenti, MemSQL’s chief marketing officer.

About 60% of MemSQL’s customers run its database in the cloud on their own now, Guagenti said. But he added that the company, which primarily focuses on operational data, was waiting for the Kubernetes StatefulSets API object for managing stateful applications in containers to become available in a mature implementation before launching the Helios service.

Actian, which introduced a cloud service version of its data warehouse platform on AWS last March, said it will make the Avalanche service available on Azure this fall and on GCP at a later date.


DataStax, which offers a commercial version of the Cassandra open source NoSQL database, said it’s looking to make a cloud-native platform called Constellation and a managed version of Cassandra that runs on top of it generally available in November. The new technologies, which DataStax announced in May, will initially run on GCP, with support to follow on AWS and Azure.

Also, in-memory data grid vendor Hazelcast plans in December to launch a version of its Hazelcast Cloud service for production applications. The Hazelcast Cloud Dedicated edition will be deployed in a customer’s virtual private cloud instance, but Hazelcast will configure and maintain systems for users. The company released free and paid versions of the cloud service for test and development uses in March on AWS, and it also plans to add support for Azure and GCP in the future.

Managing managed database services vendors

Bayer AG’s Bayer Crop Science division, which includes the operations of Monsanto following Bayer’s 2018 acquisition of the agricultural company, uses managed database services on Teradata data warehouses and Oracle’s Exadata appliance. Naghman Waheed, data platforms lead at Bayer Crop Science, said the biggest benefit of both on-premises and cloud database services is offloading routine administrative tasks to a vendor.

“You don’t have to do work that has very little value,” Waheed said after speaking about a metadata management initiative at Bayer in a Strata session. “Why would you want to have high-value [employees] doing that work? I’d rather focus on having them solve creative problems.”

But he said there were some startup issues with the managed services, such as standard operating procedures not being followed properly. His team had to work with Teradata and Oracle to address those issues, and one of his employees continues to keep an eye on the vendors to make sure they live up to their contracts.

“We ultimately are the caretaker of the system,” Waheed said. “We do provide guidance — that’s still kind of our job. We may not do the actual work, but we guide them on it.”

Go to Original Article

CIOs express hope, concern for proposed interoperability rule

While CIOs applaud the efforts by federal agencies to make healthcare systems more interoperable, they also have significant concerns about patient data security.

The Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare & Medicaid Services proposed rules earlier this year that would further define information blocking, or unreasonably stopping a patient’s information from being shared, as well as outline requirements for healthcare organizations to share data, such as using FHIR-based APIs so patients can download their healthcare data onto mobile healthcare apps.

The proposed rules are part of an ongoing interoperability effort mandated by the 21st Century Cures Act, a healthcare bill that provides funding to modernize the U.S. healthcare system. Final versions of the proposed information blocking and interoperability rules are on track to be released in November.

“We all now have to realize we’ve got to play in the sandbox fairly and maybe we can cut some of this medical cost through interoperability,” said Martha Sullivan, CIO at Harrison Memorial Hospital in Cynthiana, Ky.

CIOs’ take on proposed interoperability rule

To Sullivan, interoperability brings the focus back to the patient — a focus she thinks has been lost over the years.

She commended ONC’s efforts to make patient access to health information easier, yet she has concerns about data stored in mobile healthcare apps. Harrison’s system is API-capable, but Sullivan said the organization will not recommend APIs to patients for liability reasons.

Physicians and CIOs at EHR vendor Meditech’s 2019 Physician and CIO Forum in Foxborough, Mass. Helen Waters, Meditech executive vice president, spoke at the event.

“The security concerns me because patient data is really important, and the privacy of that data is critical,” she said.

Harrison may not be the only organization reluctant to promote APIs to patients. A study published in the Journal of the American Medical Association of 12 U.S. health systems that used APIs for at least nine months found “little effort by healthcare systems or health information technology vendors to market this new capability to patients” and went on to say “there are not clear incentives for patients to adopt it.”

Jim Green, CIO at Boone County Hospital in Iowa, said ONC’s efforts with the interoperability rule are well-intentioned but overlook a significant pain point: physician adoption. He said more efforts should be made to create “a product that’s usable for the pace of life that a physician has.”

The product also needs to keep pace with technology, something Green described as being a “constant battle.”


Interoperability is often temporary, he said. When a system gets upgraded or a new version of software is released, it can throw the system’s ability to share data with another system out of whack.

“To say at a point in time, ‘We’re interoperable with such-and-such a product,’ it’s a point in time,” he said.

Interoperability remains “critically important” for healthcare, said Jeannette Currie, CIO of Community Hospitals at Beth Israel Deaconess Medical Center in Boston. But so is patient data security. That’s one of her main concerns with ONC’s efforts and the interoperability rule, something physicians and industry experts also expressed during the comment period for the proposed rules.

“When I look at the fact that a patient can come in and say, ‘I need you to interact with my app,’ and when I look at the HIPAA requirements I’m still beholden to, there are some nuances there that make me really nervous as a CIO,” she said.

Go to Original Article