John Brownstein, chief innovation officer at Boston Children’s Hospital, is mining the digital phenotype.
“It’s this idea that all of the data you generate through your interactions with technology — whether it’s social media or with the devices — all of those digital breadcrumbs can actually bring in unique insights about a patient,” Brownstein said during his talk at the recent Harvard Institute for Applied Computational Science’s annual symposium.
A patient’s digital phenotype can be culled from search queries, internet traffic data, and virtual social settings such as Facebook and Twitter. Digital data sets like these can act as signals for potential infectious disease or foodborne illness outbreaks. They can also provide broader visibility into chronic disease, drug abuse and drug-diversion activity, where medication isn’t used by the person to whom it’s prescribed.
“We always say we’re putting the public back in public health,” he said.
Building the digital phenotype
Public health research that analyzes nonclinical data sets like these, a field of study known as computational or digital epidemiology, is a big data use case. Part of the digital phenotyping process is to ingest large data sets “from as many sources that we can identify or scrape on the web,” said Brownstein, who is also a professor of biomedical informatics at Harvard Medical School.
The nonclinical internet data comes from news stories and blog posts; social media sites such as Twitter, Facebook and Instagram; and sites such as Yelp and even OpenTable, an application used to make restaurant reservations. Brownstein said he also leverages data from traditional sources, such as electronic medical records.
These troves of nonclinical internet data not only provide new early signals about public health events and populations, they also give researchers access to data at a global scale, according to Brownstein. “You can imagine that the data we have across clinical settings is very geographically refined,” he said.
Once the data has been collected, tools organize it by location and by keyword, map it to taxonomies and load it into a structured database for analysis, according to Brownstein. This is where Brownstein and his team rely on machine learning tools to separate potential signals from noise.
Making sense of the digital phenotype is no easy task. Part of the complexity is that people talk about medications or their symptoms in unexpected ways that don't map to medical, or even nonmedical, taxonomies. Typos, spelling variations, invented words and hashtags all make the data hard to organize.
“It takes a huge amount of curation and development to get to a place where we can start to organize this content and take the ways in which people talk about illness and code them to more traditional taxonomies,” he said.
An early warning system
This kind of digital phenotyping has already proved successful. Brownstein co-created HealthMap, a patient-facing public health "surveillance" tool that launched in 2006. It uses internet data such as aggregated news stories, blog posts, government websites and social data for "disease outbreak monitoring and real-time surveillance of emerging public health trends," according to its website.
“It’s a global tracking system that basically ties as many data sources as we can get access to across hundreds of thousands of websites and 15 different languages,” Brownstein said. In 2014, HealthMap picked up on the deadly Ebola outbreak in West Africa a week before an official announcement was made. The early warning signal came from a news story of a “mystery hemorrhagic fever” killing eight in Guinea.
And, as the years tick by, social media and internet data are helping produce more robust digital phenotypes. Brownstein and his team monitor traffic data, such as spikes in visits to Wikipedia's influenza page, to gauge the state of global health. They're also looking at sites such as OpenTable, where reservation cancellations could signal a potential influenza outbreak, and Yelp, which crowdsources reviews of businesses and restaurants. "I'm not sure if people know this, but 10% of Yelp reviews are food-poisoning-related," he said.
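One simple way to turn a traffic series like pageview counts into an early-warning signal is to flag days that sit far above a trailing baseline. The rolling z-score sketch below is illustrative only; the data and threshold are made up, and production systems use considerably more sophisticated models.

```python
from statistics import mean, stdev

def flag_spikes(daily_views, window=7, threshold=3.0):
    """Flag days whose pageview count is far above the trailing-window baseline."""
    spikes = []
    for i in range(window, len(daily_views)):
        baseline = daily_views[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # A day is a spike if it sits more than `threshold` standard
        # deviations above the mean of the preceding window.
        if sigma > 0 and (daily_views[i] - mu) / sigma > threshold:
            spikes.append(i)
    return spikes

# Synthetic pageview series: steady traffic, then a sudden surge on day 10.
views = [100, 98, 102, 101, 99, 103, 100, 97, 101, 100, 260, 240]
print(flag_spikes(views))  # -> [10]
```

Note that day 11 is not flagged: the surge on day 10 has already inflated the baseline's variance, one reason real outbreak-detection models need to be more careful than a plain z-score.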
In fact, reviewers often mention specific ingredients they believe caused the disease. When Brownstein and his team shared that information with the Centers for Disease Control and Prevention (CDC), the agency was incredulous, he said. After doing its own analysis, the CDC found that the reviewers were surprisingly accurate.
“From our perspective, the consumer, the patient is much smarter than we give them credit for,” Brownstein said.
Software steals the show when it comes to tech innovation today, and it overshadows any improvements to hardware. But get ready for the future data center: Hardware transformation will happen with dynamic random access memory, or DRAM, a data center staple for more than 20 years. Looking ahead, there are various options that promise increased efficiency, persistence and lower cost.
To address some of these innovations, SearchDataCenter sat down with Danny Cobb, vice president for global technology strategy at Dell Technologies. Cobb has witnessed a lot of change through the years — in his current role; at EMC, where he was a former CTO; and as a longtime technologist at Digital Equipment Corp. Cobb outlined various infrastructure technologies that will vie for IT pros’ attention in future data center plans.
You have spoken publicly about the single-level cell to multi-level cell memory evolution in the data center. What technologies will be essential to the future data center?
Danny Cobb: There is this notion of using artificial intelligence (AI) and machine learning techniques to optimize the infrastructure in real time. We are actively involved in work that thinks of these new compute models — graphics processing units, tensor processing units, field-programmable gate arrays (FPGAs), etc. — fundamentally as a service available on the fabric. You use machine learning and AI techniques to schedule workloads against the available resources in your data center. Three or four years ago, every single workload ran on row upon row of homogeneous, virtualized x86 servers. That's the homogeneous computing world.
This new world is heterogeneous computing. It is offload engines, it is accelerated AI, FPGAs being dynamically programmed in the data center. The infrastructure itself has to take on more knowledge, and we see the progression of that style of infrastructure and that style of computing and workloads in our platforms as they evolve.
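The idea of accelerators as a service on the fabric can be illustrated with a toy scheduler that places jobs on free resources by type. Cobb describes ML-driven scheduling; this greedy sketch, with invented job and resource names, shows only the placement step, not the learning.

```python
# Hypothetical resource pool: device name -> device type.
RESOURCES = {"gpu-0": "gpu", "fpga-0": "fpga", "cpu-0": "x86", "cpu-1": "x86"}

# Hypothetical job queue, each job declaring its preferred compute model.
jobs = [
    {"name": "train-resnet", "wants": "gpu"},
    {"name": "packet-filter", "wants": "fpga"},
    {"name": "web-tier", "wants": "x86"},
]

def schedule(jobs, resources):
    """Greedily place each job on a free resource of its preferred type."""
    free = dict(resources)
    placement = {}
    for job in jobs:
        match = next((r for r, kind in free.items() if kind == job["wants"]), None)
        if match:
            placement[job["name"]] = match
            del free[match]  # resource is now occupied
    return placement

print(schedule(jobs, RESOURCES))
```

A learning-based scheduler would replace the greedy `next(...)` choice with a policy trained on observed workload performance, but the resource-pool abstraction is the same.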
Disaggregation and composable infrastructure seem to be the on-premises answer to cloud computing. What is its future in the data center?
Cobb: As an IT professional, the idea is to run the most jobs, extract the most value and process the most data per unit of time and cost on that infrastructure.
The very first problem that converged infrastructure solved was that I could now buy an entire stack of IT that works together … I can predict the performance of [that], and I understand the cost of [it] and my guys don’t have to do that for me.
Now, I want to deploy these things in finer-grained, more consumable chunks of capacity. That took us to hyper-converged. Now, I can buy smaller units — a single 1U server worth of stuff, put some management and orchestration capability around that to make the hardware manageable, and put a shared storage software stack on it and have a single, consolidated storage footprint that scales out.
Today, whether it is from Intel or AMD or other architectures, fundamentally, we have tightly coupled memory to processing via DDR [double data rate] — that's a tough interface to break into if you want to pool and disaggregate memory. But there are examples in the industry and the technology roadmap that are getting us there. There is bus technology such as Gen-Z, OpenCAPI and CCIX. That is one area where we have begun to separate the traditional memory hierarchy from the processing model that will enable flexibility.
Technology like PCIe [PCI Express] has fundamentally been the I/O bus for so long and done such a great job at doubling bandwidth every two years and [cutting] latency [in half]. That's a great single-system bus, but a terrible multisystem bus. It is not truly a fabric, and it does not have the ability to configure itself and tolerate devices coming and going in real time like other fabric technologies. In the space of new buses, that is where RDMA over Ethernet and the capabilities of using that as a new intersystem fabric come into play. That also bleeds into some of those memory buses I mentioned before, whether it is CCIX or Gen-Z.
Those areas — Remote Direct Memory Access over Ethernet networks following Ethernet technology, 25 Gb to 100 Gb and the new memory bus technology — represent an entirely new innovation surface for systems.
What emerging technology has everyone’s attention?
Danny Cobb, vice president of global technology strategy, Dell Technologies
Cobb: One that is top of mind is emerging memories. Imagine you have a cost-effective, very high-performance DRAM class memory that is persistent. How does that change every place you have an IoT [internet of things] sensor out there? If I can start to buffer that in a very low-cost, persistent device, now I have elements of persistent storage out on the edge, which today I really can’t do. If I put flash out there, that’s too slow. If I put DRAM out there, then I have to put a battery with it to keep it from losing state. This will enable a whole new class of architecture that will be enabled by persistent memory living in all these dirt-cheap, fingernail-sized processing solutions that go out in all these IoT devices.
A true DRAM-replacement persistent memory — that is the disruptive step. If we make it persistent, we start to change the way we write software. We don’t write software to do POSIX reads and writes to a file system with a volume manager. Instead, I do loads and stores from a processor into memory, and that is my application. These memory-native or memory-centric workloads will start to accelerate in their adoption. We already see pieces of that today with the move to SAP HANA and in-memory data management applications that come from the transactional world into this new world.
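The load/store model Cobb describes can be approximated today with a memory-mapped file: the application mutates bytes in place rather than issuing reads and writes through a file API. This Python sketch is an analogy only, not real persistent-memory code; the file name is invented, and production code would use something like a persistent-memory library with explicit cache-line flushes.

```python
import mmap
import struct

PATH = "counter.bin"  # hypothetical file standing in for a persistent region

# Start from a zeroed 8-byte region (a pristine "persistent" counter).
with open(PATH, "wb") as f:
    f.write(b"\x00" * 8)

with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), 8) as region:
    # "Load" the current value straight out of mapped memory...
    (count,) = struct.unpack_from("<Q", region, 0)
    # ...mutate it, and "store" it back in place. No read()/write() calls
    # appear in the programming model; the update lands in the backing file.
    struct.pack_into("<Q", region, 0, count + 1)
    region.flush()  # loosely analogous to flushing CPU caches to persistence

print(count + 1)  # -> 1
```

With a true DRAM-class persistent memory, the `flush` becomes a processor cache-flush instruction and the "file" disappears entirely; the application's state simply survives power loss where it sits.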
Those are largely evolutionary steps. The revolutionary step — at least one as revolutionary as the move to multithreaded programming 20 years ago — is this persistent memory model for applications. New software and a new programming language will be written for that.
Robert Gates covers data centers, data center strategies, server technologies, converged and hyper-converged infrastructure and open source operating systems for SearchDataCenter. Follow him on Twitter @RBGatesTT or email him at firstname.lastname@example.org.
Innovation at Starbucks isn’t just about technology; it’s also about language, the coffee company’s executive vice president and CTO Gerri Martin-Flickinger said at the recent Gartner Symposium in Orlando, Fla.
For example, there is no IT function at Starbucks; it’s called Starbucks Technology.
“[Once we rebranded IT,] we started thinking about ourselves a little differently,” Martin-Flickinger said. She said by taking off the artificial restrictions they had in their thinking about what IT was and what it could become, they became something bigger. “But it wasn’t just how we felt about it; it was also how our business partners felt about it. They started thinking about us differently.”
Another word change: There are also no “employees” at the coffee giant, only “partners.” It’s all part of Starbucks’ internal rebranding that started a couple of years ago and includes a revamped workplace culture, technology investments and mission statement all centered on providing digital engagement on a global scale.
“The best technology is the technology that actually enhances human connection,” Martin-Flickinger said. “I can’t think of a better brand than Starbucks that believes in the human connection.”
The star of the company’s digital lean-in position is its popular Mobile Order and Pay app, introduced in 2015. Since its rollout, the percentage of mobile-order transactions has continued to grow each quarter. At peak times, at least 2,000 stores are seeing more than 20% of transactions coming through this channel.
But Starbucks’ digital transformation goes beyond just an app, according to Martin-Flickinger. She offered a glimpse into the present state and future of innovation at Starbucks, which includes a cloud-based platform, collaboration tools, virtual reality (VR) and conversational computing.
An integration platform
Starbucks doesn’t have a unified point-of-sale environment, a single inventory system or a single supply chain system around the globe, Martin-Flickinger said, explaining it would be incredibly difficult to enforce a standard across all of its stores and systems. Instead, some stores are deeply connected on a consistent technical stack, but others are only loosely coupled.
Gerri Martin-Flickinger, executive vice president and CTO at Starbucks
To manage this technical complexity, the company is building a cloud-based platform that will allow for integration and interconnection between diverse technical stacks and ownership models.
Martin-Flickinger gave the example of an American customer getting off a plane at Heathrow Airport in London and ordering his or her favorite drink through the mobile app. The integration platform, as she called it, takes care of issues like currency conversion, cost adjustments and sales regulations, so all the customer has to do is pick up the drink at the airport’s Starbucks.
Another example of Martin-Flickinger’s vision for the platform centers on drive-through windows. When Mobile Order and Pay customers pull up to a Starbucks drive-through, the reader board will change to show their favorite drink, offer personalized suggestions and greet them by name.
“These concepts are not far-fetched,” she said. “Everyone in this room probably has technologists who could put that together. But to put that together at scale across more than 26,000 stores across 75 countries with loosely coupled technology is a little more challenging.”
Facebook at Work, Autodesk to the fore
With so many stores in so many countries, communication and collaboration between personnel can be difficult. To connect its employees, the company has turned to a workplace collaboration platform: Facebook at Work.
“When we opened up Workplace, almost immediately we had store managers and partners telling us about the tech in their stores and ideas they had about how we could make it better or what we could do differently,” Martin-Flickinger said. “What’s really exciting is that our technologists — even our front-line technologists — have the opportunity to engage in that conversation directly.”
More than 80% of the chain’s store managers are using Workplace on a weekly basis, forming their own macro- and micro-communities, she said. Everyone in the executive team is engaged constantly on the collaboration platform, she added. Even Starbucks CEO Kevin Johnson uses it every week for a chat and a video conversation with partners.
Innovation at Starbucks also comes in the form of 3D renderings and virtual reality. Before completing final construction of a new store, the Starbucks team can tour it through detailed 3D renderings made in partnership with software company Autodesk.
These renderings, which are viewed by vice presidents and store development teams through VR headsets, give a realistic view of each new store — in the context of its surrounding environment. The Starbucks teams, for example, can view the layout at different times of day and see how the sunlight will move through the store as the day progresses. Even more impressive, Martin-Flickinger said, is teams can interact with the environment while wearing their VR headsets, moving furniture or even making a simulated beverage behind the counter.
Another emerging area Starbucks is exploring is conversational commerce. Martin-Flickinger showed an example of a customer speaking her order into her phone and a chatbot engaging with her via written text to inquire about the details of the order and place the order for her.
“By the time my kids enter their careers, my guess is keyboards will be a thing of the past,” Martin-Flickinger said.
Which American university ranks tops for innovation? Stanford? MIT? Guess again.
When it comes to innovation in academia in the United States, Arizona State University sits atop the prestigious U.S. News & World Report rankings for both 2016 and 2017. Stanford University and the Massachusetts Institute of Technology could do no better than second and third, respectively.
As ASU extends its technological reach into artificial intelligence, augmented reality, machine learning and cognitive computing, the school's computing infrastructure, a mix of cloud-based and on-premises resources that includes mainframes, continues to grow. Keeping track of thousands of servers and applications, along with the activities of tens of thousands of student and faculty users, and then collating and correlating all of that information into a single, unified, auditable view became a top priority, according to Chris Kurtz, a system architect in ASU's department of university analytics and data services.
A matter of correlated data
Diagnosing potential security issues or locating a broken multisystem integration means examining log files from each system involved in context, which requires a consistent, correlated view of the data.
“The problem we needed to solve is getting disparate logs from Windows, Linux, firewalls, switches, and more all in one place that’s easily searchable and can be audited and distributed in a protected environment,” Kurtz said. It’s all about obtaining logs from operational servers and network devices, and putting the information into the correct order chronologically or by user, to create correlated data for personnel charged with overseeing IT infrastructure operations and security. Think of it as an aggregation engine. “You want to see individual user logs and how that user transits across systems,” Kurtz said.
Those individual users add up, according to Kurtz. With more than 80,000 enrolled students and 20,000 faculty and others, ASU has a lot to keep track of. To aid with the machine data collection and collation of logs, ASU turned to Splunk Inc., a San Francisco provider of software that aims to transform machine-generated data into what the company calls “operational intelligence.”
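The aggregation engine Kurtz describes boils down to merging records from disparate systems into one chronologically ordered stream, filterable by user. The log records and field names below are hypothetical and greatly simplified; Splunk's actual indexing is far more involved.

```python
from datetime import datetime

# Hypothetical records already parsed from three different systems.
windows_log = [
    {"ts": "2017-03-01T09:00:05", "user": "jdoe", "event": "AD login"},
]
linux_log = [
    {"ts": "2017-03-01T09:00:12", "user": "jdoe", "event": "ssh to hpc-node-3"},
    {"ts": "2017-03-01T09:02:40", "user": "asmith", "event": "cron job started"},
]
firewall_log = [
    {"ts": "2017-03-01T09:00:09", "user": "jdoe", "event": "VPN tunnel up"},
]

def correlate(*sources, user=None):
    """Merge records from all sources into one chronological stream,
    optionally filtered to a single user's path across systems."""
    merged = [rec for log in sources for rec in log
              if user is None or rec["user"] == user]
    return sorted(merged, key=lambda rec: datetime.fromisoformat(rec["ts"]))

# Trace one user as they transit from Windows to the firewall to Linux.
for rec in correlate(windows_log, linux_log, firewall_log, user="jdoe"):
    print(rec["ts"], rec["event"])
```

The output shows the single-user view Kurtz wants: the Windows login, then the VPN tunnel, then the SSH session, in order, even though each record came from a different silo.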
Chris Kurtz, system architect, Arizona State University
The collated and correlated data is necessary to give IT personnel clues where to look when something goes wrong, according to Kevin Davis, Splunk’s vice president of public sector.
“Moving at the speed of any IT system and the internet, systems become massive and complex and we tend to create silos,” he said. Silos hinder visibility across the totality of IT systems, making it difficult to find problems or to track a particular user’s travels. “It’s not something you think about it until something goes wrong.”
When something does go wrong, it’s somebody’s job — or the job of a lot of people — to figure out what went wrong and get the system back up and running as fast as possible. That’s job No. 1, Davis said. “After that, you can finally do a bit of triage.”
That job is not getting any easier, said Larry Ponemon, chairman of the Ponemon Institute, a Traverse City, Mich., research firm specializing in security. “There are so many devices, trying to stop the craziness and get[ting] a listing is more than a herculean task,” he said, adding that the proliferation of IoT devices is resulting in many more device types to track, raising the difficulty level.
Kurtz said ASU first looked at several products, including the ArcSight enterprise security manager from Hewlett Packard Enterprise and Elasticsearch, offered as a service by Amazon Web Services, before settling on the Splunk software in 2012. The university now uses it to correlate infrastructure issues with user activity, something that seems obvious, but which is difficult to do when each subsystem's log exists in a vacuum.
In this edition of Weekend Reading, we’ve got stories on Microsoft’s announcement about its restructuring plan, the focus on cloud and mobile at the Worldwide Partner Conference and Microsoft’s acquisition of InMage.
On Thursday, Microsoft announced a restructuring plan to simplify its organization and align the recently acquired Nokia Devices and Services business with the company’s overall strategy. “The first step to building the right organization for our ambitions is to realign our workforce,” Microsoft CEO Satya Nadella said in an email to employees. “With this in mind, we will begin to reduce the size of our overall workforce by up to 18,000 jobs in the next year. Of that total, our work toward synergies and strategic alignment on Nokia Devices and Services is expected to account for about 12,500 jobs, comprising both professional and factory workers.” Making these decisions to change, he wrote, “are difficult, but necessary.”
At the Worldwide Partner Conference in Washington, D.C., partners heard how Microsoft is integrating the cloud into the Microsoft Partner Network, with three new cloud-focused competencies based on performance for Office 365 and Microsoft Azure to be offered to better help partners serve customers. Azure Machine Learning University, a portfolio of online self-service learning assets, was also announced; it will help partners get started with Azure Machine Learning.
Microsoft announced the acquisition of InMage, an innovator in the emerging area of cloud-based business continuity. “Our customers tell us that business continuity – the ability to backup, replicate and quickly recover data and applications in case of a system failure – is incredibly important,” said Takeshi Numoto, corporate vice president, Cloud and Enterprise Marketing, Microsoft. “CIOs consistently rank business continuity as a top priority, but often don’t have the budgets or time to do it right.”
We got to meet Adam, Project Adam, that is, at the 15th annual Microsoft Research Faculty Summit. The goal of Project Adam is to enable software to visually recognize any object. For example, if you’re a dog lover, you know how to identify different dog breeds. But, what if your smartphone could identify them faster than you? Imagine pointing your phone at a dog and asking your phone, “What kind of dog is this?” and it identifies the exact breed.
Microsoft broadened its commitment to renewable energy with the announcement it will purchase 175 megawatts of wind energy from the Pilot Hill Wind Project in Illinois as part of a 20-year agreement. It’s the second such deal in as many years and indicative of Microsoft’s growing commitment to renewable energy and sustainability. When Pilot Hill comes online next year, it will be Microsoft’s largest wind project and one of the biggest corporate wind purchases from a single facility, generating more than enough energy to power Microsoft’s Chicago datacenter.
The Internet of Things lets major elevator manufacturers know about issues before they turn into problems. ThyssenKrupp Elevator, one of the world’s leading elevator manufacturers, maintains more than 1.1 million elevators worldwide, including those at New York City’s new 102-story One World Trade Center. ThyssenKrupp has teamed up with Microsoft and CGI to create a connected, intelligent line-of-business asset monitoring system that significantly improves elevator reliability. The company has connected its elevators to the cloud, gathering data from its sensors and systems, and transforming that data into valuable business intelligence. That includes being able to go beyond preventative maintenance to predictive, and even preemptive, maintenance.
This week on the Microsoft Facebook page, we captured and shared our favorite moments with a Windows Phone.
Thanks for checking out this edition of Weekend Reading, and we’ll see you next week!
Posted by Suzanne Choney
Microsoft News Center Staff