
Voting vendor ES&S unveils vulnerability disclosure program

Election system vendors have had frosty relationships with the infosec community in the past, but one company is reversing course in an effort to improve the security of its products.

At Black Hat USA 2020 Wednesday, Chris Wlaschin, vice president of systems security for Election Systems & Software (ES&S), formally announced the voting-machine manufacturer’s vulnerability disclosure program, which aims to strengthen election security by working with independent security researchers.

“This policy applies to all digital assets owned and operated by ES&S, including corporate IT networks and public-facing websites. Keep the details of any discovered vulnerabilities confidential until either they are fixed or at least 90 days have passed,” ES&S wrote in the disclosure policy.
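The policy’s confidentiality terms boil down to a simple rule: details stay private until the vulnerability is fixed or at least 90 days have passed. A minimal sketch of that rule in Python (the function name and parameters here are hypothetical illustrations, not part of the ES&S policy):

```python
from datetime import date, timedelta

# 90-day coordinated-disclosure window, as described in the policy
DISCLOSURE_WINDOW_DAYS = 90

def may_disclose(reported_on: date, is_fixed: bool, today: date) -> bool:
    """Return True if public disclosure would be permitted under such a policy:
    the vulnerability is fixed, or the 90-day window has elapsed."""
    window_elapsed = (today - reported_on) >= timedelta(days=DISCLOSURE_WINDOW_DAYS)
    return is_fixed or window_elapsed
```

For example, an unfixed flaw reported on Jan. 1 could not be disclosed on Feb. 1, but could be after 90 days, and a fixed flaw could be disclosed at any time.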

Wlaschin shared the session with Mark Kuhr, co-founder and CTO of Synack, a crowdsourced security platform that will help manage the new program. They discussed a partnership to allow for penetration testing on some ES&S products. In addition, they each shared examples of independent researchers’ work and remedies put in place through ES&S’ vulnerability disclosure program.

“Researchers are not waiting for a policy to be put in place — they are actively working on election security issues, and I’m proud to report that collaboration is working,” Wlaschin said.

Collaboration across the vendor, government and researcher communities is important in securing election systems, Kuhr said.

“From code readers, to voter registration systems to voter registration databases, there are complex systems and to tackle such a problem we need an aggressive approach and we need a united effort in order to do this,” Kuhr said.

Improvements on the horizon

During the virtual session, Kuhr announced that Synack updated its “Secure the Election” initiative with a more comprehensive penetration test, crowdsourced researchers and incentivized discovery.

“It’s going to help us push forward this idea that states should be working with that external research community to find vulnerabilities ahead of the adversary. This is a way to help the states move into a modern era of penetration testing,” Kuhr said.

In addition, ES&S partnered with Synack to test its newest-generation electronic pollbook.

“These pollbooks are in wide use across the country. They are the front line of election technology where voters enter a polling place and are checked in using the electronic pollbook. We want to make sure everything that can be done to harden these pollbooks is done, so they are as secure as they can be,” Wlaschin said.

Lessons from the past

For years, election system vendors have shunned vulnerability disclosure and bug bounty programs while declining to participate in events like Black Hat or DEF CON. Communication between election system vendors and the security research community in the past has been an obstacle, according to Matt Olney, director of threat intelligence at Cisco Talos, but implementing vulnerability disclosure policies is a useful step in overcoming that obstacle.

“What I see with my history in security is that election vendors are still on the road to security maturity, and part of that maturity is to intake vulnerability disclosures and put out the appropriate patches without it being a highly contested thing,” Olney said. “Across the board, there’s still some space in terms of ensuring there’s a vulnerability policy, working well with researchers, engaging with the research community to get the most value out of it. And there’s a lot to get out of that community without a lot of cost on the vendor side, and I think they’re still figuring that out.”

ES&S has grappled with high-profile vulnerabilities and security issues in the past. In 2018, The New York Times reported some of the vendor’s products contained a flawed remote access program called pcAnywhere. After initially denying the report, ES&S admitted it had installed pcAnywhere on its election management system (EMS) workstations for a “small number of customers between 2000 and 2006,” according to a letter sent to Sen. Ron Wyden (D-Ore.) that was obtained by Motherboard.

Last year Motherboard reported that security researchers had found additional issues with how ES&S products electronically transmit vote totals, but the company pushed back on the research.

While election security has progressed, Kuhr said there is still more to be done.

“Testing timelines are too elongated, we need to have the ability to have continuous testing on these patches and to be able to push patches to the field very quickly,” Kuhr said. “The incorporation of federal standards on this type of product is also needed. Right now, states do not have to buy voter registration systems that are rigorously security tested because there are a lot of optional requirements.”

Go to Original Article

5 ways COVID-19 changed the data storage market

The past few months have forced the normally conservative data storage world to make on-the-spot adjustments to the ways people buy and use storage.

Recent earnings reports from leading storage companies provided a look at how they adapted to the changes. While they experienced mixed results, clear buying patterns and industry changes emerged in the data storage market. Storage leaders expect many of the changes will remain in place, even after the COVID-19 threat subsides.

The recent earnings calls showed some trends accelerated, such as a move from large data center arrays to hyper-converged infrastructure (HCI) and the cloud, and a shift from Capex to Opex spending. The pandemic also forced new selling strategies, as face-to-face sales calls and conferences gave way to virtual events and virtual meetings between buyers and sellers working remotely.

One major storage CEO even experienced COVID-19 personally.

“I contracted COVID-19 in mid-March,” Pure Storage CEO Charlie Giancarlo said last week on the company’s earnings call. “And that experience has provided me with a deep personal appreciation for this virus and its impact. The changes in people’s lives and livelihoods are truly extraordinary. And our expectations of what is or will be normal are forever changed. Every day, each new report on the crisis brings an uneasy mixture of anxiety, uncertainty and hope about the future.”

Charlie Giancarlo, CEO, Pure Storage

Storage vendors confronted this new normal over the last few months, with their business prospects also filled with uncertainty. Pure came out of it better than its larger direct competitors Dell EMC, NetApp and Hewlett Packard Enterprise. Still, it joined Dell and HPE in declining to give a forecast for this quarter because of uncertainty. NetApp did not give a long-term forecast but predicts a 6% revenue drop this quarter.

The following are some ways the data storage market changed during the first quarter of COVID-19:

Arrays give way to cloud, HCI

Flash array vendor Pure’s revenues increased 12% over last year, to $367 million. Other array vendors didn’t fare so well, while HCI and services revenue grew as organizations shifted to remote work and bought storage remotely.

Dell EMC’s storage revenue fell 5% to $3.8 billion, while its Infrastructure Solutions Group fell 8% overall (servers and networking dropped 10%). But while storage, servers and networking dipped, Dell reported double-digit growth in its VxRail HCI platform that combines those IT infrastructure tiers.

NetApp revenue dropped 12% to $1.4 billion, including a 21% decline in product revenue. NetApp all-flash array revenue of $656 million dropped 3% since last year, while cloud data services of $111 million more than doubled. NetApp claims it has more than 3,500 cloud data services customers.

George Kurian, CEO, NetApp

“I would tell you that as we think about the go-forward strategic roadmap, it’s much more tied to software and cloud services,” NetApp CEO George Kurian said.

HPE storage revenues declined 16% since last year.

Hyper-converged infrastructure specialist Nutanix reported an 11% revenue increase to $318.3 million. Dell-owned VMware also reported revenue from its vSAN HCI software increased more than 20%, as did its NSX software-defined networking product.

VDI encore

It’s no surprise that the VDI expansion would lead to HCI sales, because VDI was among the first common use cases for hyper-convergence. One change since the early days of HCI is that now many of those desktops are sold as a cloud service.

Dheeraj Pandey, CEO, Nutanix

Nutanix CEO Dheeraj Pandey said the increase for VDI and desktop as a service (DaaS) in March and April “brought us back to our roots, when a much larger piece of our business supported virtual desktop workloads.”

VDI also helped flash storage catch on as a way to deal with boot storms and the peak demand generated by large numbers of virtual desktops. Not all flash vendors benefited last quarter, but Pure did.

“Certainly, VDI was one of the major use cases out there,” Pure’s Giancarlo said.

In May, NetApp acquired VDI and DaaS startup CloudJumper to address that market.

Who’s buying? And how?

COVID-19’s impact on storage buying was far from uniform. The pandemic left some industries financially devastated, while others had to expand to keep up.

Dell COO Jeff Clarke said Dell saw demand drop among SMBs and industries such as retail, manufacturing, energy and transportation. But financial services, government, healthcare and life sciences increased spending.

Kurian said NetApp also saw an increase in healthcare spending, driven by the pandemic and a need for digital imaging.

Organizations spending on storage are increasingly going to a utility model, buying storage as a service. Pure’s subscription services jumped 37% year over year to $120 million, making up one-third of its overall revenue.

“What we saw in Q1 was that the urgency was to beef up what they currently had in, and that was largely on prem,” Giancarlo said. “But they wanted the option, they didn’t want to sign on to five years of more on prem or anything along those lines. They wanted the option of being able to move to the cloud at any point in time. And that’s exactly what our Pure as-a-Service is designed to do in several respects.”

While Dell’s overall revenue was flat from last year, its recurring revenue increased 16%, to around $6 billion. That recurring revenue includes utility and as-a-service pricing.

“We have a very, very modern way to consume and digest IT with the very best products in the marketplace,” Clarke said.

Virtualized sales become common

Remote work has changed the way vendors and customers interact. As with user conferences, sales calls have become a virtual experience.

“Our teams had to be nimble and quickly embrace a new sales motion,” Dell’s Clarke said. “We successfully pivoted to all virtual engagements with hundreds of thousands of virtual customer interactions in the quarter.”

Clarke said there has been no negative impact, as he and his sales team can meet with more customers than in the past.

Nutanix, which shifted its 2020 .NEXT user conference to a virtual event and pushed it back to Sept. 8, has also moved in-person regional shows and boot camps online. Pandey said Nutanix has seen no drop-off in qualified leads for its sales team from going virtual.

“We have gone completely virtual and are seeing comparable yield in terms of qualified leads and virtual meetings for our sales organization at less than half the cost,” he said.

Cost-saving: Furloughs, pay cuts, hiring freezes

Unsure of what the immediate future will look like, IT companies are enacting cost reduction plans and realigning their teams.

Dell is implementing a global hiring freeze, reduction in consulting and contractor costs, global travel restrictions and a suspension of its 401(k) match plan.

HPE said it would enact pay cuts across the board, with the executive team taking the biggest reductions. CEO Antonio Neri also said HPE would reduce and realign the workforce as part of a cost reduction plan to save more than $1 billion over three years.

Nutanix implemented two nonconsecutive weeks of furloughs for a quarter of its employees and cut executive team members’ salaries by 10%.

Not all the vendors are reducing staff yet, though. NetApp CEO Kurian said the company reached its goal of adding 200 primary sales reps a quarter ahead of schedule.

Pure Storage’s Giancarlo said it’s his “personal mission” to avoid layoffs or furloughs through the rest of 2020, although the company did have layoffs — which he called a “rebalancing” — before COVID-19 hit. “We believe we’re going to be able to perform in such a way that we will not have layoffs or furloughs,” he said.

Despite the changes to the data storage market, one constant is that data keeps growing in volume and importance for businesses around the world.

“While we cannot predict when the world will return to normal, the enduring importance of data is clear,” Kurian said.


AI COVID-19 tech bolsters social distancing, supply chains

While the world has seen many pandemics in the past, none has been as life-changing as the struggle against the latest novel coronavirus, COVID-19. The pandemic carries significant economic and public health consequences, including widespread effects on education, e-commerce and global supply chains.

With the world’s attention on this virus, artificial intelligence researchers, companies and solution providers of all sorts are looking to apply AI and machine learning to the vast range of challenges the world faces. Many companies are applying AI capabilities to medical and health needs, while others are applying AI to the ongoing challenges faced in the economy. AI-based COVID-19 solutions are bolstering industries by supporting healthcare delivery, enterprise communication and social distancing.

AI helping keep people safe and distant

At this moment, there is no vaccine to combat the COVID-19 virus; the primary way to control its spread is through mitigation and suppression. The most effective measure so far has been to practice self-isolation and to avoid crowded areas through social distancing. Patients who test positive are told to quarantine themselves if their symptoms are manageable.

U.S.-based Athena Security is repurposing its security-focused imaging solution for the healthcare field by analyzing thermal imagery to detect and track potentially sick patients. The company uses thermal imaging combined with algorithms that analyze people’s body temperatures to flag potentially sick individuals traveling through high-traffic areas such as airports, stadiums, train stations and other locations.

Other regions have taken much more intrusive, some might say draconian, measures in monitoring and policing communities. Among the solutions employed by China was a surveillance system that used facial recognition and temperature detection to identify people who may have a fever. This technology was combined with mobile device tracking data and other information not only to spot those who were potentially sick, but to match them against facial records databases and flag everyone they might have infected. In Sichuan province, officials used AI-powered smart helmets that could identify people with a fever.

Using data analytics and big data, the Chinese government instituted a program to monitor each individual’s risk of contracting the disease, based on the person’s travel history, time spent in virus hot spots and exposure to people who had already contracted the disease. Based on this assessment, the government assigned codes of red, yellow or green to indicate whether individuals were put in quarantine or advised to self-isolate. Across China, drones were also used with thermal imaging to track infected patients, as well as to patrol public spaces for curfew compliance. This social tracking approach will probably become more commonplace as countries look to be more forceful and proactive in keeping infected people home and preventing the spread of disease.
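At its core, such a color-code system is rule-based risk scoring. The sketch below illustrates the idea only; the factors and thresholds are hypothetical stand-ins, not the actual criteria used by any government program:

```python
# Illustrative rule-based health-code assignment, loosely modeled on the
# red/yellow/green scheme described above. All thresholds are assumptions.
def assign_health_code(visited_hotspot: bool,
                       hours_in_hotspot: float,
                       exposed_to_confirmed_case: bool) -> str:
    """Return 'red' (quarantine), 'yellow' (advised self-isolation)
    or 'green' (free movement)."""
    if exposed_to_confirmed_case:
        return "red"      # direct exposure: strictest restriction
    if visited_hotspot and hours_in_hotspot >= 4:
        return "yellow"   # meaningful time in a hot spot
    return "green"
```

A person with direct exposure gets red regardless of travel history, while brief transit through a hot spot would still yield green under these illustrative thresholds.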

How AI is assisting the fight against COVID-19
AI-based technology is assisting diagnosis, detection, supply chains and telemedicine.

Handling the wave of healthcare and employment claims

When a pandemic hits, no aspect of the global economy is untouched. Health insurance providers and healthcare officials are backlogged with claims that must be processed immediately. Likewise, growing unemployment caused by work closures is resulting in an exponential increase in jobless claim filings. Substantial resources are needed to verify these claims, process them and provide benefits. Furthermore, with government staff themselves working from home and away from internal governmental systems, many of those needed benefits are stuck behind process bottlenecks that require human intervention.

RPA tools, cognitive process automation tools that use natural language processing for document handling, and more nimble solutions that can dynamically adjust to process changes are being applied to help move claims forward while minimizing human workload. While RPA adoption has moved at a fast pace over the past few years, the global pandemic and work-from-home requirements are expected to give cognitive automation even more of a push this year.

The growth of video conferencing and chatbots

Likewise, the shift to remote work and home education has sent demand for online conferencing and education platforms soaring. That surge has in turn driven up internet consumption and is taxing global broadband providers. While internet providers work to adjust to the new normal of stay-at-home workers, the growth in online platforms is presenting additional opportunities enabled by AI.

As an increasing number of employees work from home, the load on their organizations’ IT service desks is likewise increasing. Getting employees functional at home is vital to running the organization, but this is challenged by the fact that many IT service and operations staff are also working from home. As a result, companies are employing AI-based self-service solutions that can address common and critical IT service needs and resolve them autonomously. These intelligent systems can provide step-by-step instructions from IT knowledge bases, and AI-backed digital assistants can resolve routine queries, freeing up IT staff for more complex cases.

Routine healthcare has been disrupted by the closure of many traditional doctors’ offices, while hospitals must deal with more urgent needs for COVID-19 patients. As such, there has been increased demand for telemedicine and health-based chatbots that can address a wide range of health concerns. Using these chatbots and intelligent assistants, less face-to-face interaction is needed between patients and medical staff, reducing risks to both. These tools help ease the overwhelming number of patients that hospitals and medical personnel may face: bots and conversational AI can assess people with symptoms and address health needs without necessarily requiring an in-person doctor visit. Patients who can be managed at home are advised to stay home, freeing up vital resources for more severe cases.

One example of this in action is Microsoft’s Healthcare Bot service, which uses AI-enabled chatbots to provide healthcare advice and some telemedicine capabilities. The system uses a natural conversation experience to impart personal health-related information and government protocols on dealing with the pandemic.

AI and the supply chain

The demand for online commerce has increased tremendously as people shelter in place. Normal supply routes and logistics have suffered as a result of unprecedented lockdowns, the closure of nonessential services and, in extreme conditions, curfews. One way to address these restrictions is to use AI-driven technology and robotics to safely deliver supplies, medical drugs and food to those in lockdown.

Terra Drone is providing such services, notably transporting high-risk quarantine materials and medical samples to and from Xinchang County’s disease control center. This considerably reduces the risk to medical personnel of exposure to infected people or contaminated materials. Other companies are utilizing AI to help speed up their logistics and warehouse functions and deliver goods reliably and safely with little disruption to the status quo.

AI seeking cures and treatments

The White House Office of Science and Technology Policy has urged researchers to employ AI to find solutions to issues relating to COVID-19. The U.S. Centers for Disease Control and Prevention (CDC) and World Health Organization (WHO) have likewise asked AI researchers to assist in vaccine research to combat the virus. There are almost 29,000 research documents that need to be analyzed and scrutinized for information about the novel coronavirus, and computers can extract the required information much faster than humans. To meet this challenge, the Kaggle “CORD-19” competition was created to solicit potential solutions from interdisciplinary fields using the available data set.

One of the most potent capabilities of machine learning is its ability to find patterns in big data. As such, researchers are applying AI in the discovery of potential vaccines and effective treatments. Google subsidiary DeepMind, known for AlphaZero and its research into artificial general intelligence, recently put its efforts toward predicting the structures of six proteins linked to COVID-19. Vaccine research usually takes a significant amount of time, but with significant GPU-based horsepower and powerful algorithms that can make sense of tremendous amounts of big data, new vaccines could be developed faster using this AI approach.

Companies of all sizes, including startups, are jumping in to help. In February 2020, British startup BenevolentAI published two articles that identified approved drugs that could potentially be used to target and block the viral replication of COVID-19. The AI system mined a large quantity of medical records and identified patterns and signals that could point to potential solutions. The system identified a total of six compounds that could block the cellular pathways that allow the virus to replicate. The company is reaching out to potential manufacturers of the identified drugs to pursue clinical trials that can test their efficacy.

Likewise, Insilico Medicine is applying AI techniques to find treatments for COVID-19 and similar viruses. In February 2020, the company generated an extensive list of molecules that could bind a specific protein of the COVID-19 virus. Using its deep learning-based drug discovery platform, the company filtered the list down to about a hundred candidates, then selected seven molecules that medical labs could trial for viability as potential COVID-19 treatments.

Other startups such as Gero Pte. Ltd., based in Singapore, are using AI to spot potential anti-COVID-19 compounds that have previously been tested in humans. Using machine learning and AI-based pattern matching, the company identified medicines such as the generic agents niclosamide, used for parasite infections, and nitazoxanide, an antiparasitic and antiviral medicine, that could slow the new virus’s replication.

Applying AI to diagnosis and detection

A study published in the journal Radiology found that artificial intelligence-based deep learning models can accurately detect COVID-19 and differentiate it from forms of community-acquired pneumonia. The model, called the COVID-19 detection neural network (COVNet), extracts visual features from 4,356 computed tomography (CT) exams from 3,322 patients for the detection of COVID-19. To make the model more robust, community-acquired pneumonia (CAP) and non-pneumonia CT exams were included.
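COVNet itself is a deep convolutional network trained on CT exams; the sketch below shows only the final step such a three-class model performs, turning per-class scores (logits) into probabilities and a predicted label. The scores and helper names here are illustrative, not taken from the study:

```python
import math

# The three classes distinguished in the Radiology study
CLASSES = ["COVID-19", "CAP", "non-pneumonia"]

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the most likely class label and its probability."""
    probs = softmax(logits)
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]
```

Given hypothetical logits of [2.0, 0.5, -1.0], the model would predict "COVID-19" as the most likely class.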

With COVID-19 first spreading like wildfire through China, Chinese companies hurried to provide innovative solutions to tackle the problem. Infervision introduced an AI-based solution that uses a machine learning model to increase the speed of medical image analysis and assist with the diagnosis of COVID-19 in patients. The use of AI-enhanced medical imaging reduces time needed to get positive results and can handle the large number of cases that need diagnosis at great speed and efficiency. As a result, hospitals and labs with scarce resources can quickly screen suspected COVID-19 patients and expedite treatment.

In addition to analyzing radiology imagery, AI systems can handle a range of other health-related data and diagnostics. A recent study by researchers at the University of Massachusetts Amherst aims to predict illness based on cough patterns. Other AI systems listen to coughs and can potentially distinguish patients who have the coronavirus from patients whose coughs originate from other illnesses. Combining input from thermal cameras with audio from microphones can help clinics and other locations identify and segregate sick patients.

As can be seen above, the impacts of a global pandemic are widespread, impacting almost every corner of our society and economy. AI is being applied in a widespread manner as well, handling everything from the treatment and prevention of the virus to dealing with the impacts of the pandemic across the ecosystem. No doubt this is AI’s moment to shine and show how it can add transformational value across the globe.


Everbridge Critical Event Management tailored for COVID-19

47 million. That’s the number of coronavirus-related messages Everbridge sent on behalf of its users in the past week.

Everbridge Critical Event Management software is on the front lines of coronavirus IT response, aided by a specially targeted line of products and recent acquisitions.

Everbridge CTO Imad Mouline said the usage pattern for his company’s software is typically spiky. The system was built for large fluctuations in usage and can add capacity quickly.

“This is something we’re really, really good at,” Mouline said.

Other incidents have put Everbridge software to the test. For example, during Hurricane Dorian in 2019, Everbridge users sent out 14 million messages in just a few days, Mouline said, and that was in a smaller geographical area.

Everbridge takes on coronavirus with ‘Shield’

To aid employee protection and business continuity during the coronavirus pandemic, Everbridge launched COVID-19 Shield. The software as a service includes targeted pandemic data feeds and rapid deployment templates.

COVID-19 Shield uses the Everbridge Critical Event Management platform to help organizations identify risks, protect the workforce and manage disruptions to operations and supply chain.

An Everbridge dashboard shows assets that are potentially impacted by COVID-19 in the Washington D.C. area.

Everbridge has three COVID-19 service levels, which build on each other.

The entry-level “Know Your Risks” provides COVID-19 alerts featuring real-time intelligence such as case statistics, travel advisories, closures and supply chain impacts. The next level up, “Protect Your People,” manages critical response plans, automates communications and includes a potential threat feed and coronavirus-specific messaging templates.

“Protect Your Operations and Supply Chains,” which includes the other two offerings’ capabilities, automatically correlates alerts to physical assets, including buildings and people. It also initiates standard operating procedures to resolve issues and generates real-time status reports on remediation and recovery tasks.

COVID-19 Shield provides access to the Everbridge Data Sharing Private Network, where users can share information publicly and privately to facilitate enhanced local intelligence and response coordination.

Everbridge offers a “Rapid Deployment” package for governments, businesses and healthcare organizations that gets the COVID-19 Shield running in less than two days, according to the vendor. 

Mouline said the coronavirus-tailored products can help streamline communication, provide situational awareness and offer a quick form of protection.

Pricing is based on the size of the organization, for example, the number of people or assets in need of protection. Assets may include the number of office locations or supply chain elements.

The Everbridge Critical Event Management platform in total reaches more than 550 million people globally, according to the vendor, which is based in Burlington, Mass. Everbridge claims about 5,000 customers.

Learn best practices for pandemic response

Paul Kirvan, a business resilience and disaster recovery consultant, said it’s important for employees to heed messages from their businesses and government.

“Emergency notification software such as Everbridge’s is most appropriate for notifying employees of any new company policies, government notifications, reminders about social distancing and hand washing, and other messages for broad distribution,” Kirvan wrote in an email. “The same can be true for notifying remote domestic offices, overseas offices, regulatory authorities, government agencies and other important stakeholders.”

Information sharing between companies and within industry groups is invaluable, not just for status reports but also to help share insights into effective crisis and continuity strategies, said Jackie Day, a partner at consulting firm Control Risks, on a webinar last week hosted by her company and Everbridge.

Companies should also take advantage of lessons learned from others who have gone through the pandemic crisis, such as Asian organizations, said Matt Hinton, a partner at Control Risks.

While talk of a business impact analysis is often greeted with eye rolls, Hinton said, companies with one are better prepared to deal with tricky scenarios.

There is no one-size-fits-all approach.

“Your actions have to be targeted,” Everbridge’s Mouline said.

Mouline advised organizations to clearly separate informational messaging from emergency messaging, as employees are bombarded with information.


“Use the alerting capabilities sparingly,” Mouline said. “You want to communicate on a regular basis, but you want to avoid over-alerting.”

And the crisis will end at some point, Hinton noted. So organizations need to be thinking about recovery and the transition back to the office environment.

“Recovery is often that forgotten son when it comes to crisis management,” Hinton said.

Everbridge acquires three companies

Everbridge has been busy with acquisitions lately, purchasing technology that is helping coronavirus response.

The Everbridge Critical Event Management platform’s new IoT extension module uses intellectual property from technology acquisitions of Connexient and CNL Software. Critical Event Management for IoT increases the number of uses for the Everbridge platform. For example, it improves the ability to coordinate first responders and other healthcare resources based on real-time data on the broader impact of COVID-19.

Specifically, Connexient provides indoor positioning and wayfinding, with a focus on healthcare organizations. CNL offers integrations with a variety of other device types, including access control systems, building management systems, intrusion detection systems and fire panels, Mouline said. The Critical Event Management platform will send out information on needed next steps, for example sounding an alarm or locking doors.

Everbridge also acquired cell broadcast provider One2many. The resulting unified Public Warning System provides a countrywide population alerting capability. The platform enables countries to share updates on viral hotspots and pandemic best practices; coordinate first responders and healthcare resources; establish two-way communications with at-risk populations; and manage disruptions to transportation, education and other services, according to Everbridge.

The three acquired companies have each become an “Everbridge company.” Everbridge did not release terms of the acquisitions.


For Sale – Desktop PC (Beginners Gaming PC/HomeServer)

For sale is my old home server. It ran 24/7 (light to medium use) for the past 2 years and I never had any issues with it.

Fractal Design Node 804 (Holds a max 10x 3.5″ + 2x 2.5″ Hard Drives)
i5 4570 (Arctic Cooler 7 Pro)
Asus H81-Plus
16GB (2x 8GB) Crucial Ballistix @ 1600MHz (I think I got them off these forums; 0 errors in Memtest)
MSI 390X 8GB (recently removed from my main PC, again never any issues; it has recently had fresh Thermal Grizzly applied)
Samsung 840 128GB
Windows 10 Pro 64-bit (this was purchased off eBay for a couple of quid, so how legit it is, who knows – but it has reactivated today when I did a clean install; selling with no product key for the above reason)
Corsair HX750w
6x Akasa 120mm fans

PC is in full working condition; the case has a couple of minor marks (mostly on the perspex windows).
The Corsair comes with only 1x 4-way SATA power cable, 1x 4-way Molex cable and 2x 8-pin (GPU) cables – these were enough to max out the hard drive capacity of the case.

I have the box for the case, so I am willing to post at the buyer's expense.


AI, Azure and the future of healthcare with Dr. Peter Lee – Microsoft Research


Episode 109 | March 4, 2020

Over the past decade, the healthcare industry has undergone a series of technological changes in an effort to modernize it and bring it into the digital world, but the call for innovation persists. One person answering that call is Dr. Peter Lee, Corporate Vice President of Microsoft Healthcare, a new organization dedicated to accelerating healthcare innovation through AI and cloud computing.

Today, Dr. Lee talks about how MSR’s advances in healthcare technology are impacting the business of Microsoft Healthcare. He also explains how promising innovations like precision medicine, conversational chatbots and Azure’s API for data interoperability may make healthcare better and more efficient in the future.



Peter Lee: In tech industry terms, you know, if the last decade was about digitizing healthcare, the next decade is about making all that digital data good for something, and that good for something is going to depend on data flowing where it needs to flow at the right time.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: Over the past decade, the healthcare industry has undergone a series of technological changes in an effort to modernize it and bring it into the digital world, but the call for innovation persists. One person answering that call is Dr. Peter Lee, Corporate Vice President of Microsoft Healthcare, a new organization dedicated to accelerating healthcare innovation through AI and cloud computing.

Today, Dr. Lee talks about how MSR’s advances in healthcare technology are impacting the business of Microsoft Healthcare. He also explains how promising innovations like precision medicine, conversational chatbots and Azure’s API for data interoperability may make healthcare better and more efficient in the future. That and much more on this episode of the Microsoft Research Podcast.

(music plays)

Host: Peter Lee, welcome to the podcast!

Peter Lee: Thank you. It’s great to be here.

Host: So you’re a Microsoft Corporate Vice President and head of a relatively new organization here called Microsoft Healthcare. Let’s start by situating that within the larger scope of Microsoft Research and Microsoft writ large. What is Microsoft Healthcare, why was it formed, and what do you hope to do with it?

Peter Lee: It’s such a great question because when, we were first asked to take this on, it was confusing to me! Healthcare is such a gigantic business in Microsoft. You know, the number that really gets me is, Microsoft has commercial contracts with almost 169,000 healthcare organizations around the world.

Host: Wow.

Peter Lee: I mean, it’s just massive. Basically, anything from a one-nurse clinic in Nairobi, Kenya, to Kaiser Permanente or United Healthcare, and everything in-between. And so it was confusing to try to understand, what is Satya Nadella thinking to ask a “research-y” organization to take this on? But, you know, the future of healthcare is so vibrant and dynamic right now, and is so dependent on AI, on Cloud computing, big data, I think he was really wanting us to think about that future.

Host: Let’s situate you.

Peter Lee: Okay.

Host: You cross a lot of boundaries from pure to applied research, computer science to medicine. You’ve been head of Carnegie Mellon University’s computer science department, but you were also an office director at DARPA, which is the poster child for applied research. You’re an ACM fellow and on the board of directors of the Allen Institute for AI, but you’re also a member of the National Academy of Medicine, fairly newly minted as I understand?

Peter Lee: Right, just this year.

Host: And on the board of Kaiser Permanente’s School of Medicine. So, I’d ask you what gets you up in the morning, but it seems like you never go to bed! So instead, describe what you do for a living, Peter! How do you choose what hat to wear in the morning, and what does a typical day in your life look like?

Peter Lee: Well, you know, this was never my plan. I just love research, and thinking hard about problems, being around other smart people and thinking hard about problems, getting real depth of understanding. That’s what gets me up. But I think the world today, what’s so exciting about it for anyone with the research gene, is that research, in a variety of areas, has become so important to practical, everyday life. It’s become important to Microsoft’s business. Not just Microsoft, but all of our competitors. And so I just feel like I’m in a lucky position, as well as a lot of my colleagues, I don’t think any of us started with that idea. We just wanted to do research and now we’re finding ourselves sort of in the middle of things.

Host: Right. Well, talk a little bit more about computer science and medicine. How have you moved from one to the other, and how do you kind of envision yourself in this arena?

Peter Lee: Well, my joke here is, these were changes that, actually, Satya Nadella forced me to make! And it’s a little bit of a joke because I was actually honored that he would think of me this way, but it was also painful because I was in a comfort zone just doing my own research, leading research teams, and then, you know, Satya Nadella becomes the CEO, Harry Shum comes on board to drive innovation, and I get asked to think about new ways to take research ideas and get them out into the world. And then, three years after that, I get asked to think about the same thing for healthcare. And each one of those, to my mind, are examples of this concept that Satya Nadella likes to talk about, “growth mindset.” I joke that growth mindset is actually a euphemism because each time you’re asked to make these changes, you just get this feeling of dread. You might have a minute where you’re feeling honored that someone would ask you something, but then…

Host: Oh, no! I’ve got to do it now!

Peter Lee: …and boy, I was, you know, on a roll in what I was doing before, and you do spend some time feeling sorry for yourself… but when you work through those moments, you find that you do have those periods in your life where you grow a lot. And my immersion with so many great people in healthcare over the last three or four years has been one of those big growth periods. And to be recognized, then, let’s say, by the National Academies is sort of validation of that.

Host: All right, so rewind just a little bit and talk about that space you were in just before you got into the healthcare situation. You were doing Microsoft Research. Where, on the spectrum from pure, like your Carnegie Mellon roots, to applied, like your DARPA roots, did that land? There’s an organization called NeXT here I think, yeah?

Peter Lee: That’s right. You know, when I was in academia, academia really knows how to do research.

Host: Yeah.

Peter Lee: And they really put the creatives, the graduate students and the faculty, at the top of the pyramid, socially, in the university. It’s just a great setup. And it’s organized into departments, which are each named after a research area or a discipline and within the departments there are groups of people organized by sub-discipline or area, and so it’s an organizing principle that’s tried and true. When I went to DARPA, it was completely different. The departments aren’t organized by research area, they’re organized by mission, some easily assessable goal or objective. You can always answer the question, have we accomplished it yet or not?

Host: Right.

Peter Lee: And so research at DARPA is organized around those missions and that was a big learning experience for me. It’s not like saying we’re going to do computer vision research. We’ll be doing that for the next fifty years. It’s, can we eliminate the language barrier for all internet-connected people? That’s a mission. You can answer the question, you know, how close are we?

Host: Right.

Peter Lee: And so the mix between those two modes of research, from academia to DARPA, is something that I took with me when I joined Microsoft Research and, you know, Microsoft Research has some mix, but I thought the balance could be slightly different. And then, when Satya Nadella became the CEO and Harry Shum took over our division, they challenged me to go bigger on that idea and that’s how NeXT started. NeXT tried to organize itself by missions and it tried to take passionate people and brilliant ideas and grow them into new lines of business, new engineering capabilities for Microsoft, and along the way, create new CVPs and TFs for our company. There’s a tension here because one of the things that’s so important for great research is stability. And so when you organize things like you do in academia, and in large parts of Microsoft Research, you get that stability by having groups of people devoted to an area. We have, for example, say, computer networking research groups that are best in the world.

Host: Right.

Peter Lee: And they’ve been stable for a long time and, you know, they just create more and more knowledge and depth, and that stability is just so important. You feel like you can take big risks when you have that stability. When you are mission-oriented, like in NeXT, these missions are coming and going all the time. So that has to be managed carefully, but the other benefit of that, management-wise, is more people get a chance to step up and express their leadership. So it’s not that either model is superior to the other, but it’s good to have both. And when you’re in a company with all the resources that Microsoft has, we really should have both.

Host: Well, let’s zoom out and talk, somewhat generally, about the promise of AI because that’s where we’re going to land on some of the more specific things we’ll talk about in a bit, but Microsoft has several initiatives under a larger umbrella called AI for Good and the aim is to bring the power of AI to societal-scale problems in things like agriculture, broadband accessibility, education, environment and, of course, medicine. So AI for Health is one of these initiatives, but it’s not the same thing as Microsoft Healthcare, right?

Peter Lee: Well, the whole AI for Good program is so exciting and I’m just so proud to be in a company that makes this kind of commitment. You can think of it as a philanthropic grants program and it is, in fact, in all of these areas, providing funding and technical support to really worthy teams, passionate people, really trying to bring AI to bear for the greater good.

Host: Mm-hmm.

Peter Lee: But it’s also the case that we devote our own research resources to these things. So it’s not just giving out grants, but it’s actually getting into collaborations. What’s interesting about AI for Health is that it’s the first pillar in the AI for Good program that actually overlaps with a business at Microsoft and that’s Microsoft Healthcare. One way that I think about it is, it’s an outlet for researchers to think about, what could AI do to advance medicine? When you talk to a lot of researchers in computer science departments, or across Microsoft research labs, increasingly you’ll see more and more of them getting interested in healthcare and medicine and the first things that they tend to think about, if they’re new to the field, are diagnostic and therapeutic applications. Can we come up with something that will detect ovarian cancer earlier? Can we come up with new imaging techniques that will help radiologists do a better job? Those sorts of diagnostic and therapeutic applications, I think, are incredibly important for the world, but they are not Microsoft businesses. So the AI for Health program can provide an outlet for those types of research passions. And then there are also, as a secondary element, four billion people on this planet today that have no reasonable access to healthcare. AI and technology have to be part of the solution to creating that more equitable access and so that’s another element that, again, doesn’t directly touch Microsoft’s business today in Microsoft Healthcare, but is so important we have a lot to offer so AI for Health is just, I think, an incredibly visionary and wonderful program for that.

Host: Well, let’s zoom back out… um, no, let’s zoom back in. I’ve lost track of the camera. I don’t know where it is! Let’s talk about the idea of precision medicine, or precision healthcare, and the dream of improving those diagnostic and therapeutic interventions with AI. Tell us what precision medicine is and how that plays out and how are the two rather culturally diverse fields of computer science and medicine coming together to solve for X here?

Peter Lee: Yeah, I think one of the things that is sometimes underappreciated is, over the past ten to twenty years, there’s been a massive digitization of healthcare and medicine. After the 2008 economic collapse, in 2009, there was the ARRA, and there was a piece of legislation attached to that called the HITECH Act, and HITECH actually required healthcare organizations to digitize health records. And so for the past ten years, we’ve gone from something like 15% of health records being in digital form, to today, now over 98% of health records are in digital form. And along with that, medical devices that measure you have gone digital, our ability to sequence and analyze your genome, your proteome, have gone digital and now the question is, what can we do with all the digital information? And on top of that, we have social information.

Host: Yeah.

Peter Lee: People are carrying mobile devices, people talk to computers at home, people go to their Walgreens to get their flu shots.

Host: Yeah.

Peter Lee: And all of this is in digital form and so the question is, can we take all of that digital data and use it to provide highly personalized and precisely targeted diagnostics and therapeutics to people.

Host: Mm-hmm.

Peter Lee: Can we get a holistic, kind of, 360-degree view of you, specifically, of what’s going on with you right now, and what might go on over the next several years, and target your wellness? Can we advance from sick care, which is really what we have today…

Host: Right.

Peter Lee: …to healthcare.

Host: When a big tech company like Microsoft throws its hat in the healthcare ring and publicly says that it has the goal of “transforming how healthcare is experienced and delivered,” I immediately think of the word disruption, but you’ve said healthcare isn’t something you disrupt. What do you mean by that, and if disruption isn’t the goal, what is?

Peter Lee: Right. You know, healthcare is not a normal business. Worldwide, it’s actually a $7.5 trillion business. And for Microsoft, it’s incredibly important because, as we were discussing, it’s gone digital, and increasingly, that digital data, and the services and AI and computation to make good use of the data, is moving to the cloud. So it has to be something that we pay very close attention to and we have a business priority to support that.

Host: Right.

Peter Lee: But, you know, it’s not a normal business in many, many different senses. As a patient, people don’t shop, at least not on price, for their healthcare. They might go on a website to look at ratings of primary care physicians, but certainly, if you’re in a car accident, you’re unconscious. You’re not shopping.

Host: No.

Peter Lee: You’re just looking for the best possible care. And similarly, there’s a massive shift for healthcare providers away from what’s called fee-for-service, and toward something called value-based care where doctors and clinics are being reimbursed based on the quality of the outcomes. What you’re trying to do is create success for those people and organizations that, let’s face it, they’ve devoted their lives to helping people be healthier. And so it really is almost the purest expression of Microsoft’s mission of empowerment. It’s not, how do we create a disruption that allows us to make more money, but instead, you know, how do we empower people and organizations to deliver better – and receive better – healthcare? Today in the US, a primary care doctor spends almost twice as much time entering clinical documentation as they do actually taking care of patients. Some of the doctors we work with here at Microsoft call this “pajama time,” because you spend your day working with patients and then, at home, when you crawl into bed, you have to finish up your documentation. That’s a big source of burn out.

Host: Oh, yeah.

Peter Lee: And so, what can we do, using speech recognition technologies, natural language processing, diarization, to enable that clinical note-taking to be dramatically reduced? You know, how would that help doctors pay more attention to their patients? There is something called revenue-cycle management, and it’s sort of sometimes viewed as a kind of evil way to maximize revenues in a clinic or hospital system, but it is also a place where you can really try to eliminate waste. Today, in the US market, most estimates say that about a trillion dollars every year is just gone to waste in the US healthcare system. And so these are sort of data analysis problems, in this highly complex system, that really require the kind of AI and machine learning that we develop.

Host: And those are the kinds of disruptions we’d like to see, right?

Peter Lee: That’s right. Yeah.

Host: We’ll call them successes, as you did.

Peter Lee: Well, and they are disruptions though, they’re disruptions that help today’s working doctors and nurses. They help today’s hospital administrators.

(music plays)

Host: Let’s talk about several innovations that you’ve actually made to help support the healthcare industry’s transformation. Last year – a year ago – at the HIMSS conference, you talked about tools that would improve communication, the healthcare experience, and interoperability and data sharing in the cloud. Tell us about these innovations. What did you envision then, and now, a year later, how are they working out?

Peter Lee: Yeah. Maybe the one I like to start with is about interoperability. I sometimes have joked that it’s the least sexy topic, but it’s the one that is, I think, the most important to us. In tech industry terms, you know, if the last decade was about digitizing healthcare, the next decade is about making all that digital data good for something and that good for something is going to depend on data flowing where it needs to flow…

Host: Right.

Peter Lee: …at the right time. And doing that in a way that protects people’s privacy because health data is very, very personal. And so a fundamental issue there is interoperability. Today, while we have all this digital data, it’s really locked into thousands of different incompatible data formats. It doesn’t get exposed through modern APIs or microservices. It’s oftentimes siloed for business reasons, and so unlocking that is important. One way that we look at it here at Microsoft is, we are seeing a rising tidal wave of healthcare organizations starting to move to the cloud. Probably ten years from now, almost all healthcare organizations will be in the cloud. And so, with that historic shift that will happen only once, ever, in human history, what can we do today to ensure that we end up in a better place ten years from now than we are now? And interoperability is one of the keys there. And that’s something that’s been recognized by multiple governments. The US government, through the Centers for Medicare and Medicaid Services, has proposed new regulations that require the use of specific interoperable data standards and API frameworks. And I’m very proud that Microsoft has participated in helping endorse and guide the specific technical choices in those new rules.

Host: So what is the API that Microsoft has?

Peter Lee: So the data standard that we’ve put a lot of effort behind is something called FHIR. F-H-I-R, Fast Healthcare Interoperability Resources. And for anyone that’s used to working in the web, you can look at FHIR and you’ll see something very familiar. It’s a modern data standard, it’s extensible, because medical science is advancing all the time, and it’s highly susceptible to analysis through machine learning.

Host: Okay.

Peter Lee: And so it’s utterly modern and standardized, and I think FHIR can be a lingua franca for all healthcare data everywhere. And so, for Microsoft, we’ve integrated FHIR as a first-class data type in our cloud, in Azure.

Host: Oh, okay.

Peter Lee: We’ve enabled FHIR in Office. So the Teams application, for example, it can connect to health data for doctors and nurses. And there’s integration going on into Dynamics. And so it’s a way to convert everything that we do here at Microsoft into great healthcare-capable tools. And once you have FHIR in the cloud, then you also, suddenly, unlock all of the AI tools that we have to just enable all that precision medicine down the line.

Host: That’s such a Biblical reference right then! The cloud and the FHIR.

Peter Lee: You know, there are – there’s an endless supply of bad puns around FHIR. So thank you for contributing to that.

Host: Well, it makes me think about the Fyre Festival, which was spelt F-Y-R-E – just the biggest debacle in festival history.

Peter Lee: I should say, by the way, another thing that everyone connected to Microsoft should be proud of is, we have really been one of the chief architects for this new future. One of the most important people in the FHIR development community is Josh Mandel, who works with us here at Microsoft Healthcare, and he has the title Chief Architect, but it’s not Chief Architect for Microsoft, it’s Chief Architect for the cloud.

Host: Oh, my gosh.

Peter Lee: So he spends time talking to the folks at Google, at AWS, at Salesforce and so on.

Host: Right.

Peter Lee: Because we’re trying to bring the entire cloud ecosystem along to this new future.

Host: Tell me a little bit about what role bots might play in this arena?

Peter Lee: Bots are really interesting because, how many listeners have received a lab test result and have no idea what it means? How many people have received some weird piece of paper or bill in the mail from their insurance company? It’s not just medical advice, you know, where you have a scratch in your throat and you’re worried about what you should do. That’s important too, but the idea of bots in healthcare really spans all these other things. One of the most touching, a project led by Hadas Bitran and her team, has been in the area of clinical trials. So there’s a website called clinicaltrials.gov and it contains a registry describing every registered clinical trial going on. So now, if you are desperate for more experimental care, or you’re a doctor treating someone and you’re desperate for this, you know, how do you find, out of thousands of documents, and they’re complicated…

Host: Right.

Peter Lee: …technical, medical, science things.

Host: Jargon-y.

Peter Lee: Yeah, and it’s difficult. If you go to clinicaltrials.gov and type into the search box ‘breast cancer’ you get hundreds of results. So the cool project that Hadas and her team led was to use machine reading from Microsoft Research out of Hoifung Poon’s team, to read all of those clinical trials documents and create a knowledge graph and use that knowledge graph then to drive a conversational chatbot so that you can engage in a conversation. So you can say, you know, “I have breast cancer. I’m looking for a clinical trial,” and the chatbot will start to ask you questions in order to narrow down, eventually, to the one or two or three clinical trials that might be just right for you. And so this is something that we just think has a lot of potential.
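The actual system described here is driven by machine reading and a knowledge graph; as a rough sketch of just the narrowing behaviour, with invented trial records standing in for real clinicaltrials.gov entries:

```python
# Toy sketch of the chatbot's narrowing step: a handful of mocked trial
# records (real ones would come from clinicaltrials.gov) and a filter
# that applies one answered question at a time until few trials remain.
TRIALS = [
    {"id": "NCT-A", "condition": "breast cancer", "phase": 2, "recruiting": True},
    {"id": "NCT-B", "condition": "breast cancer", "phase": 3, "recruiting": True},
    {"id": "NCT-C", "condition": "breast cancer", "phase": 3, "recruiting": False},
    {"id": "NCT-D", "condition": "ovarian cancer", "phase": 2, "recruiting": True},
]

def narrow(trials, **answers):
    """Keep only trials consistent with the answers given so far."""
    return [t for t in trials
            if all(t.get(k) == v for k, v in answers.items())]

# A "conversation" is just successive filtering on the user's answers:
step1 = narrow(TRIALS, condition="breast cancer")  # 3 candidates left
step2 = narrow(step1, recruiting=True)             # 2 candidates left
step3 = narrow(step2, phase=3)                     # 1 candidate left
print([t["id"] for t in step3])  # ['NCT-B']
```

Each question the bot asks eliminates candidates, which is how hundreds of search results get narrowed to the "one or two or three" trials mentioned above; the hard part, handled by the knowledge graph in the real system, is deciding which question to ask next.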

Host: Yeah.

Peter Lee: And business-wise, there are more mundane, but also important things. Just call centers. Boy, those nurses are busy. What would happen if we had a bot that would triage and tee up some of those things and really give superpowers to those call center nurses. And so it’s that type of thing that I think is very exciting about conversational tech in general. And of course, Microsoft Research and NeXT should be really proud of really pioneering a lot of this bot technology.

Host: Right. So if I employed a bot to narrow down the clinical trials, could I get myself into one? Is that what you’re explaining here?

Peter Lee: Yeah, in fact, the idea here is that this would help, tremendously, the connection between prospective patients and clinical trials. It’s so important because, of the pharmaceutical companies and clinics that are setting up clinical trials, more than 50% fail to recruit enough participants. They just never get off the ground because they don’t get enough. The recruitment problem is so difficult.

Host: Wow.

Peter Lee: And so this is something that can really help on both ends.

Host: I didn’t even think about it from the other angle. Like, getting people in. I always just assumed, well, a clinical trial, no biggie.

Peter Lee: It’s such a sad thing that most clinical trials fail. And fail because of the recruitment problem.

Host: Huh. Well, let’s talk a little bit more about some of the really interesting projects that are going on across the labs here at Microsoft Research. So what are some of the projects and who are some of the people that are working to improve healthcare in technology research?

Peter Lee: Yeah. I think pretty much every MSR lab is doing interesting things. There’s some wonderful work going on in the Cambridge UK lab, in Chris Bishop’s lab there, in a group being led by Aditya Nori. One of the things there has been a set of projects in collaboration with Novartis really looking at new ideas about AI-powered molecule design for cellular therapies, as well as very precise dosing of therapies for things like macular degeneration and so these are, sort of, bringing the very best machine learning and AI researchers shoulder-to-shoulder with the best researchers and scientists at Novartis to really kind of innovate and invent the future. In the MSR India lab, Sriram Rajamani’s team, they’ve been standing up a really impressive set of technologies and projects that have to do with global access to healthcare and this is something that I think is just incredibly, incredibly important. You know, we really could enable, through more intelligent medical devices for example, much less well-trained technicians and clinicians to be able to deliver healthcare at a distance. The other thing that is very exciting to me there is just looking at data. You know, how do we normalize data from lots of different sources?

Host: Right.

Peter Lee: And then MSR Asia in Beijing, they’ve increasingly been redirecting some of the amazing advances that that lab is famous for in computer vision to the medical imaging space. And there are just amazing possibilities in taking images that might not be high resolution enough for a precise diagnosis and using AI to, kind of, magically improve the resolution. And so just across the board, as you go from, kind of, lab to lab, you just see some really inspiring work going on.

Host: Yeah, some of the researchers have been on the podcast. Antonio Criminisi with InnerEye, um… we haven’t had Ethan Jackson from Premonition yet.

Peter Lee: No, Premonition… Well, Antonio Criminisi and the work that he led on InnerEye, you know, we actually went all the way to an FDA 510(k) approval on the tumor segmentations…

Host: Wow.

Peter Lee: …and the components of that now are going into our cloud. Really amazing stuff.

Host: Yeah.

Peter Lee: And then Premonition, this is one of these things that is, in the age of coronavirus…

Host: Right?

Peter Lee: …is very topical.

Host: I was just going to refer to that, but I thought maybe I shouldn’t…

Peter Lee: The thing that is so important is, we talked of precision medicine before…

Host: Yeah.

Peter Lee: …but there is also an emerging science of precision population health. And in fact, the National Academy of Medicine just recently codified that as an official part of medical research and it’s bringing some of the same sort of precision medicine ideas, but to population health applications and studies. And so when you look at Premonition, and the ability to look at a whole community and get a genetically precise diagnosis of what is going on in that community, it is something that could really be a game-changer, especially in an era where we are seeing more challenging infectious disease outbreaks.

Host: I think a lot of people would say, can we speed that one up a little? I want you to talk for a minute about the broader tech and healthcare ecosystem and what it takes to be a leader, both thought and otherwise, in the field. So you’ve noted that we’re in the middle of a big transformation that’s only going to happen once in history and because of that, you have a question that you ask yourself and everyone who reports to you. So what’s the question that you ask, and how does the answer impact Microsoft’s position as a leader?

Peter Lee: Right. You know, healthcare, in most parts of the world, is really facing some big challenges. It’s at a financial breaking point in almost all developed countries. The spread of the latest access to good medical practice has been slowing in the developing world and as you, kind of, look at, you know, how to break out of these cycles, increasingly, people turn to technology. And the kind of shining beacon of hope is this mountain of digital data that’s being produced every single day and so how can we convert that into what’s called the triple aim of better outcomes, lower costs and better experiences? So then, when you come to Microsoft, you have to wonder, well, if we’re going to try to make a contribution, how do you do it? When Satya Nadella asked us to take this on, we told ourselves a joke that he was throwing us into the middle of the Pacific Ocean and asking us to find land, because it’s such a big complex space, you know, where do you go? And we had more jokes about this because you start swimming for a while and you start meeting lots of other people who are just as lost and you actually feel a little ashamed to feel good about seeing other people drowning. But fundamentally, it doesn’t help you to figure out what to work on, and so we started to ask ourselves the question, if Microsoft were to disappear today, in what ways would healthcare be harmed or held back tomorrow and into the future? If our hyperscale cloud were to disappear today, in what ways would that matter to healthcare? If all of the AI capabilities that we can deploy so cheaply on that cloud were to disappear, how would that matter? And then, since we’re coming out of Microsoft Research, if Microsoft Research were to disappear today, in what ways would that matter? And asking ourselves that question has sort of helped us focus on the areas where we think we have a right to play.
And I think the wonderful thing about Microsoft today is, we have a business model that makes it easy to align those things to our business priorities. And so it’s really a special time right now.

(music plays)

Host: Well, this is – not to change tone really quickly – but this is the part of the podcast where I ask what could possibly go wrong? And since we’ve actually just used a drowning-in-the-sea metaphor, it’s probably apropos… but when you bring nascent AI technologies, and I say nascent because most people have said, even though it’s been going on for a long time, we’re still in an infancy phase of these technologies. When you bring that to healthcare, and you’re literally dealing with life-and-death consequences, there’s not any margin for error. So… I realize that the answer to this question could be too long for the podcast, but I have to ask, what keeps you up at night, and how are you and your colleagues addressing potential negative consequences at the outset rather than waiting for the problems to appear downstream?

Peter Lee: That’s such an important question and it actually has multiple answers. Maybe the one that I think would be most obvious to the listeners of this podcast has to do with patient safety. Medical practice and medical science has really advanced on the idea of prospective studies and clinical validation, but that’s not how computer science, broadly speaking, works. In fact, when we’re talking about machine learning it’s really based on retrospective studies. You know, we take data that was generated in the past and we try to extract a model through machine learning from it. And what the world has learned, in the last few years, is that those retrospective studies don’t necessarily hold up very well, prospectively. And so that gap is very dangerous. It can lead to new therapies and diagnoses that go wrong in unpredictable ways, and there’s sort of an over-exuberance on both sides. As technologists, we’re pretty confident about what we do and we see lots of problems that we can solve, and the healthcare community is sometimes dazzled by all of the magical machine learning we do and so there can be over-confidence on both sides. That’s one thing that I worry about a lot because, you know, all over our field, not just all over Microsoft, but across all the other major tech companies and universities, there are just great technologists that are doing some wonderful things and are very well-intentioned, but aren’t necessarily validated in the right way. And so that’s something that, really, is worrisome. Going along with safety is privacy of people’s health data. And while I think most people would be glad to donate their health data for scientific progress, no one wants to be exploited. Exploited for money, or worse, you know, denied, for example, insurance.

Host: Right.

Peter Lee: And you know, these two things can really lead to outcomes, over the next decade, that could really damage our ability to make good progress in the future.

Host: So that said, we’re pretty good at identifying the problem. We may be able to start a good conversation, air quotes, on that, but this is, for me, like, what are you doing?

Peter Lee: Yeah.

Host: Because this is a huge thing, and

Peter Lee: I really think, for real progress and real transformation, that the foundations have to be right and those foundations do start with this idea of interoperability. So the good thing is that major governments, including the US government, are seeing this and they are making very definitive moves to foster this interoperable future. And so now, our role in that is to provide the technical guidance and technologies so that that’s done in the right way. And so everything that we at Microsoft are doing around interoperability, around security, around identity management, differential privacy, all of the work that came out of Microsoft Research in confidential computing…

Host: Yeah.

Peter Lee: …all of those things are likely to be part of this future. As important as confidential computing has been as a product of Microsoft Research, it’s going to be way, way more important in this healthcare future. And so it’s really up to us to make sure that regulators and lawmakers and clinicians are aware and smart about these things. And we can provide that technical guidance.

Host: What about the other companies that you mentioned? I mean, you’re not in this alone and it’s not just companies, it’s nations, and, I dare say, rogue actors, that are skilled in this arena. How do you get, sort of, agreement and compliance?

Peter Lee: I would say that Microsoft is in a good position because it has a clear business model. If someone is asking us, well, what are you going to do with our data? We have a very clear business model that says that we don’t monetize on your data.

Host: Right.

Peter Lee: But everyone is going to have to figure that out. Also, when you are getting into a new area like healthcare, every tech company is a big, complicated place with lots of stakeholders, lots of competing internal interests, lots of politics.

Host: Right.

Peter Lee: And so Microsoft, I think, is in a very good position that way too. We’re all operating as one Microsoft. But it’s so important that we all find ways to work together. One point of contact has been engineered by the White House in something called the Blue Button Developers Conference. So that’s where I’m literally holding hands with my counterparts at Google, at Salesforce, at Amazon, at IBM, making certain pledges there. And so the convening power of governments is pretty powerful.

Host: It’s story time. We’ve talked a little about your academic and professional life. Give us a short personal history. Where did it all start for Peter Lee and how did he end up where he is today?

Peter Lee: Oh, my.

Host: Has to be short.

Peter Lee: Well, let’s see, so uh, I’m Korean by heritage. I was born in Ohio, but Korean by heritage and my parents immigrated from Korea. My dad was a physics professor. He’s long retired now and my mother a chemistry professor.

Host: Wow.

Peter Lee: And she passed away some years ago. But I guess as an Asian kid growing up in a physical science household, I was destined to become a scientist myself. And in fact, they never said it out loud, but I think it was a disappointment to them when I went to college to study math! And then maybe an even bigger disappointment when I went from math to computer science in grad school. Of course they’re very proud of me now.

Host: Of course! Where’d you go to school?

Peter Lee: I went to the University of Michigan. I was there as an undergrad and then I was planning to go work after that. I actually interviewed at a little, tiny company in the Pacific Northwest called Microsoft…

Host: Back then!

Peter Lee: …but I was wooed by my senior research advisor at Michigan to stay on for my PhD, and so I stayed and then went from grad school right to Carnegie Mellon University as a professor.

Host: And then worked your way up to leading the department…

Peter Lee: Yeah. So I was there for twenty four years. They were wonderful years. Carnegie Mellon University is just a wonderful, wonderful place. And um..

Host: It’s almost like there’s a pipeline from Microsoft Research to Carnegie Mellon. Everyone is CMU this, CMU that!

Peter Lee: Well, I remember, as an assistant professor, when Rick Rashid came to my office to tell me that he was leaving to start this thing called Microsoft Research and I was really sad and shocked by that. Now here I am!

Host: Right. Well, tell us, um, if you can, one interesting thing about you that people might not know.

Peter Lee: I don’t know if people know this or not, but I have always had an interest in cars, in fast cars. I spent some time, when I was young, racing in something called shifter karts and then later in open wheel Formula Ford, and then, when I got my first real job at Carnegie Mellon, I had enough money that I spent quite a bit of it trying to get a sponsored ride with a semi-pro team. I never managed to make it. It’s hard to kind of split being an assistant professor and trying to follow that passion. You know, I don’t do that too much anymore. Once you are married and have a child, the annoyance factor gets a little high, but it’s something that I still really love and there’s a community of people, of course, at a place like Microsoft, that’s really passionate about cars as well.

Host: As we close, Peter, I’d like you to leave our listeners with some parting advice. Many of them are computer science people who may want to apply their skills in the world of healthcare, but are not sure how to get there from here. Where, in the vast sea of technology and healthcare research possibilities, should emerging researchers set their sights and where should they begin their swim?

Peter Lee: You know, I think it’s all about data and how to make something good out of data. And today, especially, you know, we are in that big sea of data silos. Every one of them has different formats, different rules, most of them don’t have modern APIs. And so things that can help evolve that system to a true ocean of data, I think anything to that extent will be great. And it is not just tinkering around with interfaces. It’s actually AI. To, say, normalize the schemas of two different data sets, intelligently, is something that we will need to do using the, kind of, latest machine learning, latest program synthesis, the kind of, latest data science techniques that we have on offer.
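As a purely illustrative sketch of the schema-normalization problem Lee describes, the toy code below pairs up columns from two hypothetical patient-record schemas by name similarity. All column names and the threshold are invented for the example; the production systems Lee alludes to would learn matches from the data itself, using machine learning or program synthesis, rather than simple string similarity.

```python
from difflib import SequenceMatcher

def match_columns(schema_a, schema_b, threshold=0.6):
    """Pair columns from two schemas by name similarity.

    A toy stand-in for ML-driven schema matching: for each column in
    schema_a, keep the schema_b column whose name is most similar,
    provided the similarity clears the threshold.
    """
    pairs = {}
    for a in schema_a:
        best, best_score = None, threshold
        for b in schema_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score > best_score:
                best, best_score = b, score
        if best is not None:
            pairs[a] = best
    return pairs

# Two hypothetical EHR schemas with different naming conventions.
ehr_a = ["patient_id", "date_of_birth", "blood_pressure"]
ehr_b = ["PatientID", "DOB", "BP_systolic"]
print(match_columns(ehr_a, ehr_b))
```

Name-based matching finds the obvious pair ("patient_id" to "PatientID") but misses abbreviations like "DOB", which is exactly why Lee argues the problem needs real intelligence rather than interface tinkering.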

Host: Who do you want on your team in the coming years?

Peter Lee: The thing that I think I find so exciting about great researchers today is their intellectual flexibility to start looking at an idea and getting more and more depth of understanding, but then evolve as a person to understanding, you know, what is the value of this in the world, and understanding that that is a competitive world. And so, how willing are you to compete in that competitive marketplace to make the best stuff? And that evolution that we are seeing over and over again with people out of Microsoft Research is just incredibly exciting. When you see someone like a Galen Hunt or a Doug Burger or a Lili Cheng come out of Microsoft Research and then evolve into these world leaders in their respective fields, not just in research, but spanning research to really competing in a highly competitive marketplace, that is the future.

Host: Peter Lee, thank you for joining us on the podcast today. It’s been an absolute delight.

Peter Lee: Thank you for having me. It’s been fun.

(music plays)

To learn more about Dr. Peter Lee and how Microsoft is working to empower healthcare professionals around the world, visit Microsoft.com/research

Author: Microsoft News Center

For Sale – For parts or complete. Desktop CAD/Photoshop etc. i7, Nvidia quadro…

Selling my project PC. Has been used (successfully) as a CCTV server for the past 18 months – 2 years without ever being pushed. All parts were bought new but no retail packaging. Please assume no warranty. No operating system installed either. Selling as we’ve now upgraded to a dedicated Xeon server. Parts listed below.

Generic desktop tower case.
Supermicro C7H270-CG-ML motherboard.
Intel i7-7700 3.6 GHz with stock cooler.
PNY Nvidia Quadro M2000 4GB.
Kingston HyperX Fury DDR4 16GB RAM (2x8GB).
Seagate SkyHawk 4TB HDD (NO OS).

Aside from the PSU this is a solid machine with decent potential. Could easily be used for gaming with one or two changes and could be used for CAD or Photoshop as is (or just change the PSU). This handled HIKVision and up to 56 cameras (we had 13 on screen at any one time, could handle more) but admittedly struggled with playback on any more than four cameras at once (all 4K cameras). The case has a dent or two in it but is entirely usable. Did intend to keep it for the Mrs for her photography but she’s bought a MacBook instead.

Cost around £2000 new. Asking £700 including postage but collection preferred (from Plymouth). Very open to offers as I’ve struggled to price this up to be honest.

Cheers, Chocky.


Dawn of a Decade: The Top Ten Tech Policy Issues for the 2020s

By Brad Smith and Carol Ann Browne

For the past few years, we’ve shared predictions each December on what we believe will be the top ten technology policy issues for the year ahead. As this year draws to a close, we are looking out a bit further. This January we witness not just the start of a new year, but the dawn of a new decade. It gives us all an opportunity to reflect upon the past ten years and consider what the 2020s may bring.

As we concluded in our book, Tools and Weapons: The Promise and the Peril of the Digital Age, “Technology innovation is not going to slow down. The work to manage it needs to speed up.” Digital technology has gone longer with less regulation than virtually any major technology before it. This dynamic is no longer sustainable, and the tech sector will need to step up and exercise more responsibility while governments catch up by modernizing tech policies. In short, the 2020s will bring sweeping regulatory changes to the world of technology.

Tech is at a crossroads, and to consider why, it helps to start with the changes in technology itself. The 2010s saw four trends intersect, collectively transforming how we work, live and learn. Continuing advances in computational power made more ambitious technical scenarios possible both for devices and servers, while cloud computing made these advances more accessible to the world. Like the invention of the personal computer itself, cloud computing was as important economically as it was technically. The cloud allows organizations of any size to tap into massive computing and storage capacity on demand, paying for the computing they need without the outlay of capital expenses. 

More powerful computers and cloud economics combined to create the third trend, the explosion of digital data. We begin the 2020s with 25 times as much digital data on the planet as when the past decade began.

These three advances collectively made possible a fourth: artificial intelligence, or AI. The 2010s saw breakthroughs in data science and neural networks that put these three advances to work in more powerful AI scenarios. As a result, we enter a new decade with an increasing capability to rely on machines with computer vision, speech recognition, and language translation, all powered by algorithms that recognize patterns within vast quantities of digital data stored in the cloud.

The 2020s will likely see each of these trends continue, with new developments that will further transform the use of technology around the world. Quantum computing offers the potential for breathtaking breakthroughs in computational power, compared to classical or digital computers. While we won’t walk around with quantum computers in our pockets, they offer enormous promise for addressing societal challenges in fields from healthcare to environmental sustainability.

Access to cloud computing will also increase, with more data centers in more countries, sometimes designed for specific types of customers such as governments with sensitive data. The quantity of digital data will continue to explode, now potentially doubling every two years, a pace that is even faster than the 2010s. This will make technology advances in data storage a prerequisite for continuing tech usage, explaining the current focus on new techniques such as optical- and even DNA-based storage.
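The arithmetic behind that comparison is easy to check: doubling every two years compounds to a 32-fold increase over a decade, ahead of the 25-fold growth of the 2010s. A quick sketch, treating each decade as exactly ten years:

```python
# Compare the 2010s' data growth (25x over the decade) with the
# projected 2020s pace (doubling every two years).
past_decade_factor = 25
annual_2010s = past_decade_factor ** (1 / 10)  # ~1.38x per year (~38% growth)
annual_2020s = 2 ** (1 / 2)                    # ~1.41x per year (~41% growth)
decade_2020s_factor = annual_2020s ** 10       # 2**5 = 32x over a decade

print(f"2010s: {annual_2010s:.2f}x per year")
print(f"2020s: {annual_2020s:.2f}x per year, {decade_2020s_factor:.0f}x per decade")
```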

The next decade will also see continuing advances in connectivity. New 5G technology is not only 20 times faster than 4G. Its innovative approach to managing spectrum means that it can support over a thousand more devices per meter than 4G, all with great precision and little latency. It will make feasible a world of ambient computing, where the Internet of Things, or IoT devices, become part of the embedded fabric of our lives, much as electrical devices do today. And well before we reach the year 2030, we’ll be talking about 6G and making use of thousands of satellites in low earth orbit.

All of this will help usher in a new AI Era that likely will lead to even greater change in the 2020s than the digital advances we witnessed during the past decade. AI will continue to become more powerful, increasingly operating not just in narrow use cases as it does today but connecting insights between disciplines. In a world of deep subject matter domains across the natural and social sciences, this will help advance learning and open the door to new breakthroughs.

In many ways, the AI Era is creating a world full of opportunities. In each technological era, a single foundational technology paved the way for a host of inventions that followed. For example, the combustion engine reshaped the first half of the 20th century. It made it possible for people to invent not just cars but trucks, tractors, airplanes, tanks, and submarines. Virtually every aspect of civilian economies and national security issues changed as a result.

This new AI Era likely will define not just one decade but the next three. Just as the impact of the combustion engine took four decades to unfold, AI will likely continue to reshape our world in profound ways between now and the year 2050. It has already created a new era of tech intensity, in which technology is reshaping every company and organization and becoming embedded in the fabric of every aspect of society and our lives.

Change of this magnitude is never easy. It’s why we live in both an era of opportunity and an age of anxiety. The indirect impacts of technology are moving some people and communities forward while leaving others behind. The populism and nationalism of our time have their roots in the enormous global and societal changes that technology has unleashed. And the rising economic power of large companies – perhaps especially those that are both tech platforms and content aggregators – has brought renewed focus to antitrust laws.

This is the backdrop for the top ten technology issues of the 2020s. The changes will be immense. The issues will be huge. And the stakes could hardly be higher. As a result, the need for informed discussion has rarely been greater. We hope the assessments that follow help you make up your own mind about the future we need collectively to help shape.

1. Sustainability – Tech’s role in the race to address climate change

A stream of recent scientific research on climate change makes clear that the planet is facing a tipping point. These dire predictions will catapult sustainability into one of the dominant global policy issues for the next decade, including for the tech sector. We see this urgency reflected already in the rapidly evolving views of our customers and employees, as well as in many electorates around the world. In countries where governments are moving more slowly on climate issues, we’re likely to see businesses and other institutions fill the gap. And over the coming decade, governments that aren’t prioritizing sustainability will be compelled to catch up.

For the tech sector, the sustainability issue will cut both ways. First, it will increase pressure on companies to make the use of technology more sustainable. With data centers that power the cloud ranking among the world’s largest users of electricity, Microsoft and other companies will need to move even more quickly than in recent years to use more and better renewable energy, while increasing work to improve electrical efficiency.

But this is just the tip of the iceberg. Far bigger than technology’s electrical consumption is “Scope 3” emissions – the indirect emissions of carbon in a company’s value chain for everything from the manufacturing of new devices to the production of concrete to build new buildings. While this is true for every sector of the economy, it’s an area where the tech sector will likely lead in part because it can. And should. With some of the world’s biggest income statements and healthiest balance sheets, look to Microsoft and other tech companies to invest and innovate, hopefully using the spirit of competition to bring out the best in each other.

This points to the other and more positive side of the tech equation for sustainability. As the world takes more aggressive steps to address the environment, digital data and technology will prove to be among the next decade’s most valuable tools. While carbon issues currently draw the most attention, climate issues have already become multifaceted. We need urgent and concerted action to address water, waste, biodiversity, and our ecosystems. Regardless of the issue or ultimate technology, insights and innovations will be fueled by data science and artificial intelligence. When quantum computing comes online, this will become even more promising.

By the middle or end of the next decade, the sustainability issue may have another impact that we haven’t yet seen and aren’t yet considering: its effect on the world’s geopolitics. As the new decade begins, many governments are turning inward and nations are pulling apart. But sustainability is an issue that can’t be solved by any country alone. The world must unite to address environmental issues that know no boundaries. We all share a small planet, and the need to preserve humanity’s ability to live on it will force us to think and act differently across borders.

2. Defending Democracy – International threats and internal challenges

Early each New Year, we look forward to the release of the Economist Intelligence Unit’s annual Democracy Index. This past year’s report updated the data on the world’s 75 nations the Economist ranks as democracies. Collectively these countries account for almost half of the world’s population. Interestingly, they also account for 95 percent of Microsoft’s revenue. Perhaps more than any other company, Microsoft is the technology provider for the governments, businesses, and non-profits that support the world’s democracies. This gives us both an important vantage point on the state of democracy and a keen interest in democracy’s health.

Looking back at the past decade, the Economist’s data shows that the health of the world’s democracies peaked in the middle of the decade and has since declined slightly and stagnated. Technology-fueled change almost certainly has contributed in part to this trend.

As we enter the 2020s, defending democracy more than ever requires a focus on digital tech. The past decade saw nation-states weaponize code and launch cyber-attacks against the civilian infrastructure of our societies. This included the hacking of a U.S. presidential campaign in 2016, a tactic Microsoft’s Threat Intelligence Center has since seen repeated in numerous other countries. It was followed by the WannaCry and NotPetya attacks in 2017, which unleashed damage around the world in ways that were unimaginable when the decade began.

The defense of democracy now requires determined efforts to protect political campaigns and governments from the hacking and leaking of their emails. Even more important, it requires digital protection of voter rolls and elections themselves. And most broadly, it requires protection against disinformation campaigns that have exploited the basic characteristics of social media platforms.

Each of these priorities now involves new steps by tech companies, as well as new strategies for and collaboration with and among governments. Microsoft is one of several industry leaders putting energy and resources into this area. Our Defending Democracy Program includes an AccountGuard program that protects candidates in 26 democratic nations, an ElectionGuard program to safeguard voting, and support for the NewsGuard initiative to address disinformation. As we look to the 2020s, we will need continued innovation to address the likely evolution of digital threats themselves.

The world will also need to keep working to solidify existing norms and add new legal rules to protect against cybersecurity threats. Recent years have seen more than 100 leading tech companies come together in a Tech Accord to advance security in new ways, while more than 75 nations and more than 1,000 multi-stakeholder signatories have now pledged their support for the Paris Call for Trust and Security in Cyberspace. The 2020s hopefully will see important advances at the United Nations, support from global groups such as the World Economic Forum, and by 2030, work on a global compact to make a Digital Geneva Convention a reality.

But the digital threats to democracy are not confined to attacks from other nations. As the new decade dawns, a new issue is emerging with potentially profound and thorny implications for the world’s democracies. Increasingly government officials in democratic nations are asking whether the algorithms that pilot social media sites are undermining the political health of their citizenries. 

It’s difficult to sustain a democracy if a population fragments into different “tribes” that are exposed to entirely different narratives and sources of information. While diverse opinions are older than democracy itself, one of democracy’s characteristics has traditionally involved broad exposure to a common set of facts and information. But over the past decade, behavioral-based targeting and monetization on digital platforms has arguably created more information siloes than democracy has experienced in the past. This creates a new question for a new decade. Namely, will tech companies and democratic governments alike need new approaches to address a new weakness for the world’s democracies? 

3.  Journalism – Technology needs to give the news business a boost

While we look to improve the health of the world’s democracies, we also need to monitor the well-being of another system playing a vital role in free societies across the globe: the independent press. For centuries, journalists have served as watchdogs for democracies, safeguarding political systems by monitoring and challenging public affairs and government institutions. As the Victorian-era historian Thomas Carlyle wrote, “There were Three Estates in Parliament; but, in the Reporters’ Gallery yonder, there sat a Fourth Estate more important far than they all.”

It’s clear that a healthy democracy requires healthy journalism, but newspapers are ailing – and many are on life support. The decline of quality journalism is not breaking news. It has been in slow decline since the start of the 20th century, with the advent of radio and later when television overtook the airwaves. By the turn of this century, the internet further eroded the news business as dotcoms like Craigslist disrupted advertising revenue, news aggregators lured away readers, and search engines and social media giants devoured both. While a number of bigger papers weathered the storm, most small local outlets were hard hit. According to data from the U.S. Bureau of Labor Statistics’ Occupational Employment Statistics, in 2018, 37,900 Americans were employed in newsrooms, down 14 percent from 2015 and down 47 percent from 2004.

The world will be hard pressed to strengthen its democracies if we can’t rejuvenate quality journalism. In the decade ahead the business model for journalism will need to evolve and become healthier, which hopefully will include partnerships that create new revenue streams, including through search and online ads. And as the world experiments with business models, we can’t forget to learn from and build on the public broadcasters that have endured through the years, like the BBC in the United Kingdom and NPR in the United States.  

Helping journalism recover will also include protecting journalists, as we’ve learned through Microsoft’s work with the Clooney Foundation for Justice. Around the world violence against journalists is on the rise, especially for those reporters covering conflict, human rights abuses, and corruption. According to the Committee to Protect Journalists, 25 journalists were killed, 250 were imprisoned, and 64 went missing in 2019. In the coming decade, look for digital technology like AI to play an important role in monitoring the safety of journalists, spotting threats, and helping ensure justice in the court of law. 

And lastly, it’s imperative that we use technology to protect the integrity of journalism. As the new decade begins, technologists warn that manipulated videos are becoming the purveyors of disinformation. These “deepfakes” do more than deceive the public, they call all journalism into question. AI is used to create this doctored media, but it will also be used to detect deepfakes and verify trusted, quality content. Look for the tech sector to partner with the news media and academia to create new tools and advocate for regulation to combat internet fakery and build trust in the authentic, quality journalism that underpins democracies around the world.

4. Privacy in an AI Era – From the second wave to the third

In the 2010s, privacy concerns exploded around the world. The decade’s two biggest privacy controversies redefined big tech’s relationships with government. In 2013, the Snowden disclosures raised the world’s ire about the U.S. Government’s access to data about people. The tech sector, Microsoft included, responded by expanding encryption protection and pushing back on our own government, including with litigation. Five years later, in 2018, the guns turned back on the tech sector after the Cambridge Analytica data scandal engulfed Facebook and digital privacy again became a top-level political issue in much of the world.

Along the way, privacy laws continued to spread around the world. The decade saw 49 new countries adopt broad privacy laws, adding to the 86 nations that protected privacy a decade ago. While the United States is not yet on that list, 2018 saw stronger privacy protections jump from Europe across the Atlantic and move all the way to the Pacific, as California’s legislature passed a new law that paves the way for action in Washington, D.C.

But it wasn’t just the geographic spread of privacy laws that marked the decade. With policy innovation centered in Brussels, the European Union effectively embarked on a second wave of privacy protection. The first wave was characterized by laws that required that web sites give consumers “notice and consent” rights before using their data. Europe’s General Data Protection Regulation, or GDPR, represented a second wave. It gives consumers “access and control” over their data, empowering them to review their data online and edit, move, or delete it under a variety of circumstances.

Both these waves empowered consumers – but also placed a burden on them to manage their data. With the volume of data mushrooming, the 2020s likely will see a third wave of privacy protection with a different emphasis. Rather than simply empowering consumers, we’re likely to see more intensive rules that regulate how businesses can use data in the first place. This will reach data brokers that are unregulated in some key markets today, as well as a focus on sensitive technologies like facial recognition and protections against the use of data to adversely impact vulnerable populations. We’re also likely to see more connections between privacy rules and laws in other fields, including competition law.

In short, fasten your seat belt. The coming decade will see more twists and turns for privacy issues.

5. Data and National Sovereignty – Economics meet geopolitics

When the combustion engine became the most important invention a century ago, the oil that fueled it became the world’s most important resource. With AI emerging as the most important technology for the next three decades, we can expect the data that fuels it to quickly become the 21st century’s most important resource. This quest to accumulate data is creating economic and geopolitical issues for the world.

As the 2020s commence, data economics are breeding a new generation of public policy issues. Part of this stems from the returns to scale that result from the use of data. While there are finite limits to the amount of gasoline that can be poured into the tank of a car, the appetite for more data to develop a better AI model is infinite. AI developers know that more data will create better AI. Better AI will lead to even more usage of an AI system. And this in turn will create yet more data that will enable the system to improve yet again. There’s a risk that those with the most data, namely the first movers and the largest companies and countries, will crowd out others’ opportunity for success.

This helps explain the critical economic issues that are already emerging. And the geopolitical dynamics are no less vital.

Two of the biggest forces of the past decade – digital technology and geopolitics – pulled the world in opposite directions. Digital technology transmitted data across borders and connected people around the world. As technology brought the world together, geopolitical dynamics pulled countries apart and kindled tensions on issues from trade to immigration. This tug-of-war explains one reason a tech sector that started the decade as one of the most popular industries ended it under scrutiny and with mounting criticism.

This tension has created a new focus that is wrapped into a term that was seldom used just a few years ago – “digital sovereignty.” The current epicenter for this issue is Western Europe, especially Germany and France. With the ever-expanding ubiquity of digital technology developed outside of Europe and the potential international data flows that can result, the protection and control of national data is a new and complicated priority, with important implications for evolving concepts of national sovereignty.

The arrival of the AI Era requires that governments think anew about balancing some critical challenges. They need to continue to benefit from the world’s most advanced technologies and move a swelling amount of data across borders to support commerce in goods and services. But they want to do this in a manner that protects and respects national interests and values. From a national security perspective, this may lead to new rules that require that a nation’s public sector data stays within its borders unless the government provides explicit permission that it can move somewhere else. From an economic perspective, it may mean combining leading international technologies with incentives for local tech development and effective sovereignty protections.

All this has also created the need for open data initiatives to level the playing field. Part of this requires opening public data by governments to provide smaller players with access to larger data sets. Another involves initiatives to enable smaller companies and organizations to share – or “federate” – their data, without surrendering their ownership or control in the data they share. This in turn requires new licensing approaches, privacy protections, and technology platforms and tools. It also requires intellectual property policies, especially in the copyright space, that facilitate this work.

During the first two decades of this century, open source software development techniques transformed the economics of coding. During the next two decades, we’ll need open data initiatives that do the same thing for data and AI.

The past year has seen some of these concepts evolve from political theory to government proposals. This past October, the German Government proposed a project called GAIA-X to protect the country’s digital sovereignty. A month later, discussions advanced to propose a common approach that would bring together Germany and France.

It’s too early to know precisely how all these initiatives will evolve. For almost four centuries, the world has lived under a “Westphalian System” defined by territorial borders controlled by sovereign states. The technology advances of the past decade have placed new stress on this system. Every aspect of the international economy now depends on data that crosses borders unseen and at the speed of light. In an AI-driven economy and data-dependent world, the movement of data is raising increasingly important questions for sovereignty in a Westphalian world. The next decade will decide how this balance is struck.

6. Digital Safety – The need to constantly battle evolving threats

The 2010s began with optimism that new technology would advance online safety and better protect children from exploitation. It ended with a year during which terrorists and criminals used even newer technology to harm innocent children and adults in ways that seemed almost unimaginable when the decade began. While the tech sector and governments have moved to respond, the decade underscores the constant war that must be waged to advance digital safety.

Optimism marked the decade’s start in part because of PhotoDNA, developed in 2009 by Microsoft and Hany Farid, then a professor at Dartmouth College. The industry adopted it to identify and compare online photos to known illegal images of child exploitation. Working with key non-profit and law enforcement groups, the technology offered real hope for turning the tide against the horrific exploitation of children. And spurred on by the British Government and others, the tech sector took additional steps globally to address images of child pornography in search results and on other services.
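PhotoDNA’s actual algorithm is proprietary, but the general idea behind this class of technology can be sketched with a much simpler “average hash.” Everything below is an illustrative toy, not PhotoDNA itself: near-duplicate images map to identical or nearby fingerprints, so a service can match uploads against a list of known hashes without comparing raw pixels byte for byte.

```python
# Illustrative "average hash", a simple perceptual hash. PhotoDNA's real
# algorithm is proprietary and far more robust; this toy only shows the
# underlying idea of matching images by fingerprint rather than exact bytes.

def average_hash(pixels):
    # pixels: a tiny grayscale image as a list of rows of 0-255 ints.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each pixel contributes one bit: brighter than average or not.
    return tuple(1 if p >= mean else 0 for p in flat)

def hamming(h1, h2):
    # Number of differing bits; a small distance suggests a likely match.
    return sum(a != b for a, b in zip(h1, h2))

# A toy 4x4 "image" and a uniformly brightened copy of it.
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [200, 200, 10, 10],
            [200, 200, 10, 10]]
brightened = [[p + 20 for p in row] for row in original]

# The fingerprints match even though the raw pixel values differ.
print(hamming(average_hash(original), average_hash(brightened)))  # 0
```

Because brightening every pixel shifts the mean by the same amount, each pixel’s above-or-below-average bit is unchanged, which is the robustness property exact byte comparison lacks.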

Yet as the New York Times reported in late 2019, criminals have subsequently used advancing video and livestreaming technologies, as well as new approaches to file-sharing and encrypted messaging, to exploit children even more horrifically. As a result, political pressure is again pushing industry to do more to catch up. It’s a powerful lesson of the need for constant vigilance.

Meanwhile, online safety threats became more multifaceted. One of the decade’s tragic days came on March 15, 2019, in Christchurch, New Zealand. A terrorist and white supremacist used livestreaming on the internet as the stage for mass shootings at two mosques, killing 51 innocent civilians.

Led by Prime Minister Jacinda Ardern, the New Zealand Government spearheaded a global multi-stakeholder effort to create the Christchurch Call. It has brought governments and tech companies together to share information, launch a crisis incident protocol, and take other steps to reduce the possibility of others using the internet in a similar way in the future.

All of this has also led to new debate about the continued virtues of exempting social media platforms from legal liability for the content on their sites. Typified by section 230 of the United States’ Communications Decency Act, current laws shield these tech platforms from responsibilities faced by more traditional publishers. As we look to the 2020s, it seems hard to believe that this approach will survive the next decade without change.

7. Internet Inequality – A world of haves and have-nots

In 2010, fewer than a third of the world’s population had access to the internet. As this decade concludes, the number has climbed to more than half. This represents real progress. But much of the world still lacks internet access. And high-speed broadband access lags much farther behind, especially in rural areas.

In an AI Era, access to the internet and broadband have become indispensable for economic success. With public discussion increasingly focusing on economic inequality, we need to recognize that the wealth disparity in part is rooted in internet inequality.

There are many reasons to be optimistic that there will be faster progress in the decade ahead. But progress will require new approaches and not just more money.

This starts with working with better data about who currently has internet access and at what speeds. Imagine trying to restore electric power to homes after a big storm without accurate data on where the power is out. Yet that’s the fundamental reality in a country such as the United States when we discuss closing the broadband gap. The country spends billions of dollars a year without the data needed to invest it effectively. And this data gap is by no means confined to North America.

Better data can make its best contribution if it’s coupled with new and better technology. The next decade will see a world of new communications technologies, from 5G (and ultimately 6G) to thousands of low Earth orbit satellites and terrestrial technologies like TV White Spaces. All of this is good news. But it will be essential to focus on where each technology can best be used, because there is no such thing as a one-size-fits-all approach for communications technology. For example, 5G will transform the world, but its signals travel shorter distances, making it less than optimal for many scenarios in rural areas.

With better data and new technology, it’s possible to bring high speed internet to 90 percent of the global population by 2030. This may sound ambitious, but with better data and sounder investments, it’s achievable. Internet equality calls for ambition on this level.

8. A Tech Cold War – Will we see a digital iron curtain down the Pacific?

The new decade begins with a tech question that wasn’t on the world’s radar ten years ago. Are we witnessing the start of a “tech cold war” between the United States and China? While it’s too early to know for certain, it’s apparent that recent years have been moving in this direction. And the 2020s will provide a definitive answer.

The 2010s saw China impose more constraints on technology and information access to its local market. This built on the Great Chinese Firewall constructed a decade before, with more active filtering of foreign content and more constraints on local technology licenses. In 2016, the Standing Committee of the National People’s Congress adopted a broad Cyber Security Law to advance data localization and enable the government to take “all necessary” steps to protect China’s sovereignty, including through a requirement to make key network infrastructure and information systems “secure and controllable.” Combined with other measures to manage digital technology that have raised human rights concerns, these policies have effectively created a local internet and tech ecosystem that is distinct from the rest of the world.

This Chinese tech ecosystem in the latter half of the decade also grew increasingly competitive. The pace and quality of innovation have been impressive. With companies such as Huawei, Alibaba, and Tencent gaining worldwide prominence, Chinese technology is being adopted more globally while its own market is less open – and at the same time that it’s subject to Chinese cyber security public policies.

As the 2010s close, the United States is responding with new efforts to contain the spread of Chinese technology. It’s not entirely different from the American efforts to contain Russian ideology and influence in the Cold War that began seven decades ago. Powered in part by American efforts to dissuade other governments from adopting 5G equipment from China, tensions heightened in 2019 when the U.S. Department of Commerce banned American tech companies from selling components to Huawei for its products.

In both Washington and Beijing, officials are entering the new decade preparing for these tensions around technology to harden. The implications are huge. Clearly, the best time to think about a Tech Cold War is before it begins. The Cold War between the United States and Soviet Union lasted more than four decades and impacted virtually every country on the planet. As we look ahead to the 2020s, the strategic questions for each country and the implications for the world are no smaller.

9. Ethics for Artificial Intelligence – Humanity needs to govern machines

For a world long accustomed to watching robots wreak havoc on the silver screen, the last few years have brought advances in artificial intelligence that still fall far short of the capabilities seen in science fiction, but are well beyond what had seemed possible when the decade began. While typically still narrow in scope, AI enters a new decade with an increasing ability to match human perception and cognition in vision, speech recognition, language translation, and machine learning based on discerning patterns in data.

In a decade that increasingly gave rise to anxiety over the impact of technology, it’s not surprising that these advances unleashed a wave of discussions focused on AI and its implications for ethics and human rights. If we’re going to empower machines to make decisions, how do we want these decisions to be made? This is a defining question not just for the decade ahead, but for all of us who are alive today. As the first generation of people to give machines the power to make decisions, we have a responsibility to get the balance right. If we fail, the generations that follow us are likely to pay a steep price.

The good news is that companies, governments, and civil society groups around the world have embraced the need to develop ethical and human rights principles for artificial intelligence. We published a set of six ethical principles at Microsoft in January 2018, and we’ve been tracking the trends. What we’re seeing is a global movement towards an increasingly common set of principles. It’s encouraging.

As we look to the 2020s, we’re likely to see at least two new trends. The first is the shift from the articulation of principles to the operationalization of ethics. In other words, it’s not sufficient to simply state what principles an organization wants to apply to its use of AI. It needs to implement this in more precise standards backed up by governance models, engineering requirements, and training, monitoring, and ultimately compliance. At Microsoft we published our first Responsible AI Standard in late 2019, spelling out many of these new pieces. No doubt we’ll improve upon it during the next few years, as we learn both from our own experience and the work of many others who are moving in a similar direction.

The second trend involves specific issues that are defining where “the rubber meets the road” for ethical and human rights concerns. The first such issue has involved facial recognition, which arguably has become a global policy issue more rapidly than any previous digital tech issue. Similar questions are being discussed about the use of AI for lethal autonomous weapons. And conversations are starting to focus on ethics and the use of algorithms more generally. This is just a beginning. By 2030, there will likely be enough issues to fill the table of contents for a lengthy book. If there’s one common theme that has emerged in the initial issues, it’s the need to bring together people from different countries, intellectual disciplines, and economic and government sectors to develop a more common vocabulary. It’s the only way people can communicate effectively with each other as we work to develop common and effective ethical practices for machines.

10. Jobs and Income Inequality in an AI Economy – How will the world manage a disruptive decade?

It’s clear that the 2020s will bring continued economic disruption as AI enables machines to replace many tasks and jobs that are currently performed by people. At the same time, AI will create new jobs, companies, and even industries that don’t exist today. As we’ve noted before, there is a lot to learn from the global economy’s transition from a horse-powered to automobile-driven economy a century ago. Like foundational technologies before it, AI will likely create something like an economic rollercoaster, with an uneven match between prosperity and distress during particular years or in specific places.

This will create many big issues, and two are already apparent. The first is the need to equip people with the new skills needed to succeed in an AI Economy. During the 2010s, technology drove globalization and created more economic opportunity for people in many developing economies around the world, perhaps especially in India and China. The resulting competition for jobs led not only to political pressure to turn inward in some developed nations, but to a recognition that economic success in the future requires more investments in education. As we saw through data published by LinkedIn, in a country like the United States there emerged a broadened interest in Europe’s approach to apprenticeships and technical skills and the pursuit of a range of post-secondary credentials. Given the importance of this trend, it’s not surprising that there was also broader political interest in addressing the educational costs for individuals pursuing these skills.

There’s every reason to believe that these trends will accelerate further in the decade ahead. If anything, expanding AI adoption will lead to additional economic ripple effects. We’re likely to see employers and governments alike invest in expanded learning opportunities. It has become a prerequisite for keeping pace.

In many ways, however, this marks the beginning rather than the conclusion of the economic debates that lie ahead. Four decades of technological change have already contributed to mounting income inequality. It’s a phenomenon that now impacts the politics of many communities and countries, with issues that range from affordable housing to tax rates, education and healthcare investments, and income redistribution.

All this raises some of the biggest political questions for the 2020s. It reminds us that history’s defining dates don’t always coincide with the start of a new decade. For example, one of the most important dates in American political history came on September 14, 1901. It was the day that Theodore Roosevelt succeeded to the United States Presidency. More than a century later, we can see that it represented the end of more than 30 years that combined advancing technology with regulatory restraint, which led to record levels of both prosperity and inequality. In important respects, it was the first day of the Progressive Era in the United States. Technology continued to progress, but in a new political age that included stronger business regulation, product liability laws, antitrust enforcement, public investment, and an income tax.

As we enter the 2020s, political leaders in many countries are debating whether to embark on a similar shift. No one has a crystal ball. But increasingly it seems like the next decade will usher in not only a new AI Era and AI Economy, but new approaches to politics and policy. As we’ve noted before, there’s a saying that “history doesn’t repeat itself, but it often rhymes.” From our vantage point, there seems a good chance that the next decade for technology and policy will involve some historical poetry.

Go to Original Article
Author: Microsoft News Center

Oracle looks to grow multi-model database features

Perhaps no single vendor or database platform over the past three decades has been as pervasive as the Oracle database.

Much as the broader IT market has evolved, so too has Oracle’s database. Oracle has added new capabilities to meet changing needs and competitive challenges. With a move toward the cloud, new multi-model database options and increasing automation, the modern Oracle database continues to move forward. Among the executives who have been at Oracle the longest is Juan Loaiza, executive vice president of mission critical database technologies, who has watched the database market evolve, first-hand, since 1988.

In this Q&A, Loaiza discusses the evolution of the database market and how Oracle’s namesake database is positioned for the future.

Why have you stayed at Oracle for more than three decades and what has been the biggest change you’ve seen over that time?

Juan Loaiza

Juan Loaiza: A lot of it has to do with the fact that Oracle has done well. I always say Oracle’s managed to stay competitive and market-leading with good technology.

Oracle also pivots very quickly when needed. How do you survive for 40 years? Well, you have to react and lead when technology changes.

Decade after decade, Oracle continues to be relevant in the database market as it pivots to include an expanding list of capabilities to serve users.

The big change that happened a little over a year ago is that Thomas Kurian [former president of product development] left Oracle. He was head of all development, and when he left, what happened is that some of the teams, like database and apps, ended up rolling up to [Oracle founder and CTO] Larry Ellison. Larry is now directly managing some of the big technology teams. For example, I work directly with Larry.

What is your view on the multi-model database approach?

Loaiza: This is something we’re starting to talk more about. The term that people use is multi-model, but we’re using a different term, converged database, because multi-model is really just one component of it.

Multi-model really refers to the different data models you can represent inside the database, but we’re also doing much more than that. Blockchain is an example of converging into the database a technology that isn’t normally even thought of as database technology. So we’re going well beyond the conventional kind of multi-model of, ‘Hey, I can do this data format and that data format.’

Initially, the relational database was the mainstream database people used for both OLTP [online transaction processing] and analytics. What has happened in the last 10 to 15 years is that there have been a lot of new database technologies to come around, things like NoSQL, JSON, document databases, databases for geospatial data and graph databases too. So there’s a lot of specialty databases that have come around. What’s happening is, people are having to cobble together a complex kind of web of databases to solve one problem and that creates an enormous amount of complexity.

With the idea of a converged database, we’re taking all the good ideas, whether it’s NoSQL, blockchain or graph, and we’re building it into the Oracle database. So you can basically use one data store and write your application to that.
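The converged-database idea can be sketched in miniature: relational rows and JSON documents living in one store, queried together by one SQL engine. SQLite stands in here purely for illustration (using its JSON functions, available in modern Python builds); Oracle’s converged database supports JSON, graph, and other models natively in a similar one-engine spirit, and the table and field names below are invented for the example.

```python
# A minimal sketch of "one data store, many models": a relational table
# whose `profile` column holds JSON documents, queried with one SQL engine.
# SQLite is used only for illustration; it is not Oracle's implementation.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, profile TEXT)"
)

conn.execute(
    "INSERT INTO customers VALUES (1, 'Acme', ?)",
    (json.dumps({"tier": "gold", "tags": ["retail", "emea"]}),),
)
conn.execute(
    "INSERT INTO customers VALUES (2, 'Globex', ?)",
    (json.dumps({"tier": "silver", "tags": ["manufacturing"]}),),
)

# One query mixes relational columns with document fields (json_extract),
# instead of stitching together results from two separate databases.
rows = conn.execute(
    "SELECT name FROM customers WHERE json_extract(profile, '$.tier') = 'gold'"
).fetchall()
print(rows)  # [('Acme',)]
```

The application writes to one store and one query language spans both shapes of data, which is the complexity reduction the converged approach is aiming at.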

The analogy that we use is that of a smartphone. We used to have a music device and a phone device and a calendar device and a GPS device and all these things and what’s happened is they’ve all been converged into a smartphone.

Are companies actually shifting their on-premises production database deployments to the cloud?

Loaiza: There’s definitely a switch to the cloud. There are two models to cloud; one is kind of the grassroots. So we’re seeing some of that, for example, with our autonomous database that people are using now. So they’re like, ‘Hey, I’m in the finance department, and I need a reporting database,’ or, ‘hey, I’m in the marketing department, and I need some database to run some campaign with.’ So that’s kind of a grassroots and those guys are building a new thing and they want to just go to cloud. It’s much easier and much quicker to set up a database and much more agile to go to the cloud.

The second model is where somebody up in the hierarchy says, ‘Hey, we have a strategy to move to cloud.’ Some companies want to move quickly and some companies say, ‘Hey, you know, I’m going to take my time,’ and there’s everything in the middle.

Will autonomous database technology mean enterprises will need fewer database professionals?

Loaiza: The autonomous database addresses the mundane aspects of running a database. Things like tuning the database, installing it, configuring it, setting up HA [high availability], among other tasks. That doesn’t mean that there’s nothing for database professionals to do.

Like every other field where there is automation, what you do is you move upstream, you say, ‘Hey, I’m going to work on machine learning or analytics or blockchain or security.’ There’s a lot of different aspects of data management that require a lot of labor.

One of the nice things that we have in this industry is there is no real unemployment crisis in IT. There’s a lot of unfilled jobs.

So it’s pretty straightforward for someone who has good skills in data management to just move upstream and do something that’s going to add more specific value than just configuring and setting up databases, which is really more of a mechanical process.

This interview has been edited for clarity and conciseness.


Tallying the momentous growth and continued expansion of Dynamics 365 and the Power Platform – The Official Microsoft Blog

We’ve seen incredible growth of Dynamics 365 and the Power Platform just in the past year. This momentum is driving a massive investment in people and breakthrough technologies that will empower organizations to transform in the next decade.

We have allocated hundreds of millions of dollars to our business cloud, which powers business transformation across markets and industries and helps organizations solve difficult problems.

This fiscal year we are also heavily investing in the people that bring Dynamics 365 and the Power Platform to life — a rapidly growing global network of experts, from engineers and researchers to sales and marketing professionals. Side-by-side with our incredible partner community, the people that power innovation at Microsoft will fuel transformational experiences for our customers into the next decade.

Accelerating innovation across industries

In every industry, I hear about the struggle to transform from a reactive to a proactive organization that can respond to changes in the market, in customer needs, and even within their own business. When I talk to customers who have rolled out Dynamics 365 and the Power Platform, the conversation shifts to the breakthrough outcomes they’ve achieved, often in very short time frames.

Customers talk about our unique ability to connect data holistically across departments and teams — with AI-powered insights to drive better outcomes. Let me share a few examples.

This year we’ve focused on a new vision for retail that unifies back office, in-store and digital experiences. One of Washington state’s founding wineries — Ste. Michelle Wine Estates — is onboarding Dynamics 365 Commerce to bridge physical and digital channels, streamline operations with cloud intelligence and continue building brand loyalty with hyper-personalized customer experiences.

When I talk to manufacturers, we often zero in on ways to bring more efficiency to the factory floor and supply chain. Again, it’s our ability to harness data from physical and digital worlds, and to reason over it with AI-infused insights, that opens doors to new possibilities. For example, Majans, the Australian-based snack food company, is creating the factory of the future with the help of Microsoft Dynamics 365 Supply Chain Management, Power BI and Azure IoT Hub — bringing Internet of Things (IoT) intelligence to every step in the supply chain, from quality control on the production floor to key performance indicators to track future investments. When everyone relies on a single source of truth about production, inventory and sales performance, the decisions employees make drive the same outcome — all made possible on our connected business cloud.

These connected experiences extend to emerging technologies that bridge digital and physical worlds, such as our investment in mixed reality. We’re working with companies like PACCAR — manufacturer of premium trucks — to improve manufacturing productivity and employee training using Dynamics 365 Guides and HoloLens 2, as well as Siemens to enable technicians to service its eHighway — an electrified freight transport system — by completing service steps with hands-free efficiency using HoloLens and two-way communication and documentation in Dynamics 365 Field Service.

For many of our customers, the journey to Dynamics 365 and the Power Platform started with a need for more personalized customer experiences. Our customer data platform (CDP) featuring Dynamics 365 Customer Insights, is helping Tivoli Gardens — one of the world’s longest-running amusement parks — personalize guest experiences across every touchpoint — on the website, at the hotel and in the park.  Marston’s has onboarded Dynamics 365 Sales and Customer Insights to unify guest data and infuse personalized experiences across its 1,500-plus pubs across the U.K.

The value of Dynamics 365 is compounded when coupled with the Power Platform. As of late 2019, there were over 3 million monthly active developers on the Power Platform, from non-technical “citizen developers” to Microsoft partners developing world-class, customized apps. In the last year, we’ve seen 700% growth in Power Apps production apps and 300% growth in monthly active users. All of those users generate a ton of data, with more than 25 billion Power Automate steps run each day and 25 million data models hosted in the Power BI service.

The impact of the Power Platform comes through in the stories our customers share with us. TruGreen, one of the largest lawn care companies in the U.S., onboarded Dynamics 365 Customer Insights and the Microsoft Power Platform to provide more proactive and predictive services to customers, freeing employees to spend more time on higher-value tasks and complex customer issue resolution. And the American Red Cross is leveraging Power Platform integration with Teams to improve disaster response times.

From Fortune 500 companies to the thousands of small and medium-sized businesses, city and state governments, schools and colleges and nonprofit organizations — Dynamics 365 and the Microsoft Cloud are driving transformative success by delivering on business outcomes.


Partnering to drive customer success

We can’t talk about growth and momentum of Dynamics 365 and Power Platform without spotlighting our partner community — from ISVs to System Integrators that are the lifeblood of driving scale for our business. We launched new programs, such as the new ISV Connect Program, to help partners get Dynamics 365 and Power Apps solutions to market faster.

Want to empower the next generation of connected cloud business? Join our team!

The incredible momentum of Dynamics 365 and Power Platform means our team is growing, too. In markets around the globe, we’re looking for people who want to make a difference and take their career to the next level by helping global organizations digitally transform on Microsoft Dynamics 365 and the Power Platform. If you’re interested in joining our rapidly growing team, we’re hiring across a wealth of disciplines, from engineering to technical sales, in markets across the globe. Visit careers.microsoft.com to explore business applications specialist career opportunities.

