Tag Archives: learning

Accessibility tools support Hamlin Robinson students learning from home | Microsoft EDU

More than ever, educators are relying on technology to create inclusive learning environments that support all learners. As we recognize Global Accessibility Awareness Day, we’re pleased to mark the occasion with a spotlight on an innovative school that is committed to digital access and success for all.

Seattle-based Hamlin Robinson School, an independent school serving students with dyslexia and other language-based learning differences, didn’t set a specific approach to delivering instruction immediately after transitioning to remote learning. “Our thought was to send home packets of schoolwork and support the students in learning, and we quickly realized that was not going to work,” Stacy Turner, Head of School, explained in a recent discussion with the Microsoft Education Team.

About a week into distance learning, the school moved to more robust online instruction. The school serves grades 1-8, and students in fourth grade and up use Office 365 Education tools, including Microsoft Teams, so leveraging those same resources for distance learning was natural.

Built-in accessibility features

Stacy said the school was drawn to Microsoft resources for schoolwide use because of built-in accessibility features, such as dictation (speech-to-text), and the Immersive Reader, which relies on evidence-based techniques to help students improve at reading and writing.

“What first drew us to Office 365 and OneNote were some of the assistive technologies in the toolbar,” Stacy said. Learning and accessibility tools are embedded in Office 365 and can support students with visual impairments, hearing loss, cognitive disabilities, and more.

Josh Phillips, Head of Middle School, says for students at Hamlin Robinson, finding the right tools to support their learning is vital. “When we graduate our students, knowing that they have these specific language-processing needs, we want them to have fundamental skills within themselves and strategies that they know how to use. But we also want them to know what tools are available to them that they can bring in,” he said.

For example, for students who have trouble typing, a popular tool is the Dictate, or speech-to-text, function of Office 365. Josh said that a former student took advantage of this function to write a graduation speech at the end of eighth grade. “He dictated it through Teams, and then he was able to use the skills we were practicing in class to edit it,” Josh said. “You just see so many amazing ideas get unlocked and be able to be expressed when the right tools come along.”

Supporting teachers and students

Providing teachers with expertise around tech tools also is a focus at Hamlin Robinson. Charlotte Gjedsted, Technology Director, said the school introduced its teachers to Teams last year after searching for a platform that could serve as a digital hub for teaching and learning. “We started with a couple of teachers being the experts and helping out their teams, and then when we shifted into this remote learning scenario, we expanded that use,” Charlotte said.

“Teams seems to be the easiest platform for our students to use in terms of the way it’s organized and its user interface,” added Josh.

He said it was clear in the first days of distance learning that using Teams would be far better than relying on packets of schoolwork and the use of email or other tools. “The fact that a student could have an assignment issued to them, could use the accessibility tools, complete the assignment, and then return the assignment all within Teams is what made it clear that this was going to be the right app for our students,” he said. 

A student’s view

Will Lavine, a seventh-grade student at the school, says he appreciates the stepped-up emphasis on Teams and tech tools during remote learning and says those tools are helping meet his learning needs. “I don’t have to write that much on paper. I can use technology, which I’m way faster at,” he said.

“Will has been using the ease of typing to his benefit,” added Will’s tutor, Elisa Huntley. “Normally, when he is faced with a handwritten assignment, he spends quite a bit of time refining his work using only a pencil and eraser. But when he interfaces with Microsoft Teams, Will doesn’t feel the same pressure to do it right the first time. It’s much easier for him to re-type something. His ideas are flowing in ways that I have never seen before.”

Will added that he misses in-person school, but likes the collaborative nature of Teams, particularly the ability to chat with teachers and friends.

With the technology sorted out, Josh said educators have been very focused on ensuring students are progressing as expected. He says that teachers are closely monitoring whether students are joining online classes, engaging in discussions, accessing and completing assignments, and communicating with their teachers.

Connect, explore our tools

We love hearing from our educator community and students and families. If you’re using accessibility tools to create more inclusive learning environments and help all learners thrive, we want to hear from you! One great way to stay in touch is through Twitter by tagging @MicrosoftEDU.

And if you want to check out some of the resources Hamlin Robinson uses, remember that students and educators at eligible institutions can sign up for Office 365 Education for free, including Word, Excel, PowerPoint, OneNote, and Microsoft Teams.

In honor of Global Accessibility Awareness Day, Microsoft is sharing some exciting updates from across the company. To learn more visit the links below:

Go to Original Article
Author: Microsoft News Center

Kite intros code completion for JavaScript developers

Kite, a software development tools startup specializing in AI and machine learning, has added code-completion capabilities for JavaScript developers.

San Francisco-based Kite’s AI-powered code completion technology initially targeted Python developers. JavaScript is arguably the most popular programming language, and Kite’s move should be a welcome addition for JavaScript developers, as the technology can predict the next string of code they will write and complete it automatically.

“The use of AI is definitely making low-code even lower-code for sure, and no-code even more possible,” said Ronald Schmelzer, an analyst at Cognilytica in Ellicott City, Md. “AI systems are really good at determining patterns, so you can think of them as really advanced wizard or templating systems that can try to determine what you’re trying to do and suggest code or blocks or elements to complete your code.”

Kite’s Line-of-Code Completions feature uses advanced machine learning models to cut some of the mundane tasks that programmers perform to build applications, such as setting up build processes, searching for code snippets on Google, cutting and pasting boilerplate code from Stack Overflow, and repeatedly solving the same error messages, said Adam Smith, founder and CEO of Kite, in an interview.

Kite’s JavaScript code completions are currently available in private beta and can suggest code a developer has previously used or tap into patterns found in open source code files, Smith said. The deep learning models used to inform the Kite knowledge base have been trained on more than 22 million open source JavaScript files, he said.
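Kite's deep learning models are far more sophisticated than this, but the core idea of learning next-token statistics from a corpus of code can be sketched with a toy bigram model. The corpus and function names below are purely illustrative, not Kite's actual approach:

```python
from collections import Counter, defaultdict

def train_bigram_model(token_lists):
    """Count which tokens follow each token across a corpus of code."""
    following = defaultdict(Counter)
    for tokens in token_lists:
        for cur, nxt in zip(tokens, tokens[1:]):
            following[cur][nxt] += 1
    return following

def suggest_next(model, token):
    """Return candidate completions ranked by observed frequency."""
    return [tok for tok, _ in model[token].most_common()]

# Tiny illustrative "training corpus" of tokenized source lines.
corpus = [
    "import numpy as np".split(),
    "import pandas as pd".split(),
    "import numpy as np".split(),
]
model = train_bigram_model(corpus)
print(suggest_next(model, "import"))  # ['numpy', 'pandas']
```

A production system replaces the frequency counts with a deep model that conditions on far more context, but the interface is the same: given what the developer has typed, rank likely continuations.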

Kite aims to advance the code-completion art

Unlike other code completion capabilities, Kite features layers of filtering such that only the most relevant completion results are returned, rather than a long list of completions ranked by probability, Smith said. Moreover, Kite’s completions work in .js, .jsx and .vue files and the system processes code locally on the user’s computer, rather than sending code to a cloud server for processing.
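That filtering behavior can be illustrated with a minimal sketch: rather than returning every candidate ranked by probability, keep only those that clear a relevance threshold. The snippets, scores, and threshold below are invented for illustration; Kite's actual scoring is not public:

```python
def filter_completions(candidates, threshold=0.15):
    """Keep only completions whose model score clears a relevance
    threshold, instead of returning a long probability-ranked list."""
    ranked = sorted(candidates, key=lambda c: -c[1])
    return [text for text, score in ranked if score >= threshold]

# Hypothetical (text, score) pairs from a completion model.
candidates = [
    ("os.path.join(", 0.62),
    ("os.path.exists(", 0.21),
    ("os.getcwd()", 0.09),
    ("os.remove(", 0.04),
]
print(filter_completions(candidates))  # ['os.path.join(', 'os.path.exists(']
```

The design trade-off: a threshold occasionally returns nothing, but it avoids burying the developer under low-confidence suggestions.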


Kite’s engineers took their time training the tool on the ever-growing JavaScript ecosystem and its frameworks, APIs and design patterns, Smith said. Thus, Kite works with popular JavaScript libraries and frameworks like React, Vue, Angular and Node.js. The system analyzes open source projects on GitHub and applies that data to machine learning models trained to predict the next word or words of code as programmers write in real time. This smarter programming environment makes it possible for developers to focus on what’s unique about their application.

There are other tools that offer code completion capabilities, such as the IntelliCode feature in the Microsoft Visual Studio IDE. IntelliCode provides more primitive code completion than Kite, Smith claimed. IntelliCode is the next generation of Microsoft’s older IntelliSense code completion technology. IntelliCode will predict the next word of code based on basic models, while Kite’s tool uses richer, more advanced deep learning models trained to predict further ahead to whole lines, and even multiple lines of code, Smith said.

AI systems are really good at determining patterns, so you can think of them as really advanced wizard or templating systems that can try to determine what you’re trying to do and suggest code or blocks or elements to complete your code.
Ronald Schmelzer, analyst, Cognilytica

Moreover, Kite focuses on code completion, not code correction, because programming code has to be exactly correct. If you send someone a text with autocorrect errors, the meaning of the message may still come across; but if you mistype a single letter of code, a program will not run.

Still, AI-powered code completion “is still definitely a work in progress and much remains to be done, but OutSystems and others are also looking at AI-enabling their suites to deliver faster and more complete solutions in the low-code space,” Schmelzer said.

In addition to the new JavaScript code completion technology, Kite also introduced Kite Pro, the company’s first paid offering of code completions for Python powered by deep learning. Kite Pro provides features such as documentation in the Kite Copilot, which offers documentation for more than 800 Python libraries.

Kite works as a plugin for all of the most popular code editors, including Atom, JetBrains’ PyCharm/IntelliJ/WebStorm, Spyder, Sublime Text 3, VS Code and Vim. The product is available on Mac, Windows and Linux.

The basic version of Kite is free; however, Kite Pro costs $16.60 per user, per month. Custom team pricing also is available for teams that contact the company directly, Smith said.


Enterprises struggle to learn Microsoft Sonic networking

Enterprises learning how to use Microsoft Sonic in a production environment often struggle with the lack of management tools for the open source network operating system.

Other challenges revealed this week during a panel discussion at the OCP Virtual Summit included weak support for Sonic hardware. Also, the panelists said engineers had to work hard to understand how to operate the software.

The companies that participated in the discussion included Target, eBay, T-Mobile, Comcast and Criteo. All of them plan to eventually make Sonic their primary network operating system in the data center.

In general, they are seeking vendor independence and more control over the development and direction of their networks. They expected to achieve network automation similar to Sonic customers Facebook and Microsoft, which built Sonic and gave it to the Open Compute Project (OCP) for further development.

Challenges with Microsoft Sonic

Target is at the tail end of its evaluation of Sonic. The retailer plans to use it to power a single super-spine within a data center fabric, said Pablo Espinosa, vice president of engineering. The company plans to put a small percentage of a production workload on the network operating system (NOS) in the next quarter.

Eventually, Target wants to use Sonic to provide network connectivity to hundreds of microservices running on cloud computing environments. Target has virtualized almost 85% of its data centers to support cloud computing.

Target’s engineers have experience in writing enterprise software but not code to run on a NOS. Therefore, the learning curve has been steep, Espinosa said. “We’re still building this muscle.”

As a result, Target has turned to consultants to develop enterprise features for Sonic and take it through hardware testing, regression testing and more, Espinosa said.

Online advertising company Criteo was the only panel participant to have Sonic in production. The company is using the NOS on the spine and super-spine level in one of nine network fabrics, engineering manager Thomas Soupault said. The system has 64 network devices serving 3,000 servers.

Also, the company is building a 400 Gb Ethernet data center fabric in Japan that will run only Sonic. The network will eventually provide connectivity to 10,000 servers.

One of Criteo’s most significant problems is getting support for low-level issues in the open hardware running the NOS. Manufacturers won’t support any software unless required to in the contract.

Therefore, companies should expect difficult negotiations over support for drivers, the software development kit for the ASIC, and the ASIC itself. Other areas of contention include the switch abstraction interface that comes with the device for loading the buyer’s NOS of choice, Soupault said.

“It can be tricky,” he said. “When we asked all these questions to manufacturers, we got some good answers, and some very bad answers, too.”

Soupault stopped short of blaming manufacturers. Buyers and vendors are still struggling with the support model for Sonic. “If we could clarify this area, it might help others on Sonic” and boost adoption, he said.

Network management tools for Sonic are also in their infancy. Within eBay, developers are building agents and processes on the hardware for detecting problems with links and optics, said Parantap Lahiri, vice president of data center engineering at the online marketplace. However, discovering the problems is only the first step — eBay is still working on tools for identifying the root cause of problems.

We hope that the community will come together to build the tools and make the product easier to manage [through] more visibility for the operations teams.
Yiu Lee, vice president of network architecture, Comcast

Comcast is developing a repository for streaming network telemetry that network monitoring tools could analyze to pinpoint problems, said Yiu Lee, the company’s vice president of network architecture. However, Comcast could use help from OCP members.

“We hope that the community will come together to build the tools and make the product easier to manage [through] more visibility for the operations teams,” he said.

Some startups are trying to fill the void. Network automation startup Apstra announced at the summit support for Sonic-powered leaf, spine and super-spine switches.

Going slowly with Microsoft Sonic

The panelists advised companies that want to use Sonic to start with a low-risk deployment with a clearly defined use case. They also recommended choosing engineers who are willing to learn different methods for operating a network.

Lahiri from eBay suggested that companies initially deploy Sonic on a single spine within a group. That would provide enough redundancy to overcome a Sonic failure.

Soupault advised designing a network architecture around Sonic. Criteo is using the NOS in an environment similar to that of Facebook and Microsoft, he said. “Our use case is very close to what Sonic has been built for.”

A company that wants to use the NOS also should be prepared to funnel the money saved from licensing into the hiring of people with the right skill sets, which should include understanding Linux.

Microsoft built Sonic on the open source operating system used mostly in servers. So, engineers have to know how to manage a Linux system and the containers inside it, Lahiri said.


Learn to manage Office 365 ProPlus updates

A move to the cloud can be confusing until you get your bearings, and learning how to manage Office 365 ProPlus updates takes some time to get right.

Office 365 is a bit of a confusing name. It is actually a whole suite of programs based on a subscription model, mostly cloud-based. However, Office 365 ProPlus is a suite inside a suite: a subset collection of software included in most Office 365 subscriptions. This package is the client install that contains the programs everyone knows: Word, Excel, PowerPoint and so on.

Editor’s note: Microsoft recently announced it would rename Office 365 ProPlus to Microsoft 365 Apps for enterprise, effective on April 21.

For the sake of comparison, Office 2019, Office 2016 and older versions are the on-premises managed suite with the same products, but with a much slower rollout pace for updates and fixes. Updates for new features are also slower and may not even appear until the next major version, which might not be until 2022 based on Microsoft’s release cadence.

Rolling the suite out hasn’t changed too much for many years. You can push out Office 365 ProPlus updates the same way you do other Windows updates, namely Windows Server Update Service (WSUS) and Configuration Manager. Microsoft gave the latter a recent branding adjustment and is now referring to it as Microsoft Endpoint Configuration Manager.

The Office 365 ProPlus client needs a different approach, because updates are not delivered or designed in the same way as the traditional Office products. You can still use Configuration Manager, but the setup is different.

Selecting the update channel for end users

Microsoft gives you the option to determine when your users will get new feature updates. There are five update channels: Insider Fast, Monthly Channel, Monthly Channel (Targeted), Semi-Annual Channel and Semi-Annual Channel (Targeted). Insider Fast gets updates first, Monthly Channel updates arrive on a monthly basis and Semi-Annual updates come every six months. Users in the Targeted channels get these updates first so they can report back to IT with any issues or other feedback.

You can configure the channel as part of an Office 365 ProPlus deployment with the Office Deployment Toolkit (ODT), but this only works at the time of install. There are two ways to configure the channel after deployment: Group Policy and Configuration Manager.

Using Group Policy for Office 365 ProPlus updates

Using Group Policy, you can set which channel a computer gets by enabling the Update Channel policy setting under Computer Configuration\Policies\Administrative Templates\Microsoft Office 2016 (Machine)\Updates. This corresponds to a registry value at HKLM\Software\Policies\Microsoft\office\16.0\common\officeupdate\updatebranch. The options for this value are: Current, FirstReleaseCurrent, InsiderFast, Deferred and FirstReleaseDeferred.

Update Channel policy setting
Managing Office 365 ProPlus updates from Group Policy requires the administrator to select the Enabled option in the Update Channel policy setting.

A scheduled task called Office Automatic Update 2.0, deployed as part of the Office 365 ProPlus install, reads that setting and applies the updates.

You can use standard Group Policy techniques to target policies to specific computers or apply the registry settings.
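The relationship between the friendly channel names and the updatebranch registry values can be captured in a small lookup. The mapping below is drawn from the channel names and registry options listed above; verify it against Microsoft's current documentation before relying on it in automation:

```python
# Friendly channel name -> value for the updatebranch registry entry at
# HKLM\Software\Policies\Microsoft\office\16.0\common\officeupdate.
CHANNEL_VALUES = {
    "Monthly Channel": "Current",
    "Monthly Channel (Targeted)": "FirstReleaseCurrent",
    "Insider Fast": "InsiderFast",
    "Semi-Annual Channel": "Deferred",
    "Semi-Annual Channel (Targeted)": "FirstReleaseDeferred",
}

def registry_value_for(channel):
    """Translate a friendly channel name into the updatebranch value."""
    try:
        return CHANNEL_VALUES[channel]
    except KeyError:
        raise ValueError(f"Unknown update channel: {channel!r}")

print(registry_value_for("Semi-Annual Channel"))  # Deferred
```

On a live Windows machine the same value could be read or written with the `winreg` module, but a table like this is also useful for validating Group Policy exports or Configuration Manager baselines.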

Using Configuration Manager for Office 365 ProPlus updates

You can use Configuration Manager, with ODT or Group Policy, to define which channel a client is in, but Configuration Manager can also serve as the software update point, rather than having clients rely on WSUS directly or download straight from Microsoft’s servers. With this method, you will need to ensure the Office 365 ProPlus builds for all the deployed channels are available from the software update point in Configuration Manager.

Office 365 ProPlus updates work the same way as other Windows updates: Microsoft releases the update, a local WSUS server downloads them, Configuration Manager synchronizes with the WSUS server to copy the updates, and then Configuration Manager distributes the updates to the distribution points. You need to enable the Office 365 Client product on WSUS for this approach to work.

WSUS server settings
Set up Configuration Manager to handle Office 365 ProPlus updates by selecting the Office 365 Client product on the WSUS server.

It’s also possible to configure clients just to get the updates straight from Microsoft if you don’t want or need control over them.

Caveats for Office 365 ProPlus updates

When checking a client’s channel, the Office 365 ProPlus client will only show the channel it was in during its last update. Only when the client gets a new update will it show which channel it obtained the new update from, so the registry setting is a better way to check the current configuration.

When an Office 365 ProPlus client detects an update, it will download a compressed delta update. However, if you change the client to a channel that is on an older version of Office 365 ProPlus, the update will be much larger but still smaller than the standard Office 365 ProPlus install. Also, if you change the channel multiple times, it can take up to 24 hours for a second version change to be recognized and applied.

As always with any new product: research, test and build your understanding of these mechanisms before you roll out Office 365 ProPlus. If an update breaks something your business needs, you need to know how to fix that situation across your fleet quickly.


Biometrics firm fights monitoring overload with log analytics

Log analytics tools with machine learning capabilities have helped one biometrics startup keep pace with increasingly complex application monitoring as it embraces continuous deployment and microservices.

BioCatch sought a new log analytics tool in late 2017. At the time, the Tel Aviv, Israel, firm employed a handful of workers and had just refactored a monolithic Windows application into microservices written in Python. The refactored app, which captures biometric data on how end users interact with web and mobile interfaces for fraud detection, required careful monitoring to ensure it still worked properly. Almost immediately after it completed the refactoring, BioCatch found the process had tripled the number of logs it shipped to a self-managed Elasticsearch repository.

“In the beginning, we had almost nothing,” said Tamir Amram, operations group lead for BioCatch, of the company’s early logging habits. “And, then, we started [having to ship] everything.”

The team found it could no longer manage its own Elasticsearch back end as that log data grew. Its IT infrastructure also mushroomed into 10 Kubernetes clusters distributed globally on Microsoft Azure. Each cluster hosts multiple sets of 20 microservices that provide multi-tenant security for each of its customers.

At that point, BioCatch had a bigger problem. It had to not only collect, but also analyze all its log data to determine the root cause of application issues. This became too complex to do manually. BioCatch turned to log analytics vendor Coralogix as a potential answer to the problem.

Log analytics tools flourish under microservices

Coralogix, founded in 2015, initially built its log management system on top of a hosted Elasticsearch service but couldn’t generate enough interest from customers.

“It did not go well,” Coralogix CEO Ariel Assaraf recalled of those early years for the business. “It was early in log analytics’ and log management’s appeal to the mainstream, and customers already had ‘good enough’ solutions.”

While the company still hosts Elasticsearch for its customers, based on the Amazon Open Distro for Elasticsearch, it refocused on log analytics, developed machine learning algorithms and monitoring dashboards, and relaunched in 2017.

That year coincided with the emergence of containers and microservices in enterprise IT shops as they sought to refactor monolithic applications with new design patterns. The timing proved fortuitous; since Coralogix’s relaunch in 2017, it has gained more than 1,200 paying customers, according to Assaraf, at an average deal size of $50,000 a year.

Coralogix isn’t alone among DevOps monitoring vendors reaping the spoils of demand for microservices monitoring tools — not just in log analytics, but AI- and machine learning-driven infrastructure management, or AIOps, as well. These include application performance management (APM) vendors, such as New Relic, Datadog, AppDynamics and Dynatrace, along with Coralogix log analytics competitors Elastic Inc. and Splunk.

We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs.
Tamir Amram, operations group lead, BioCatch

In fact, analyst firm 451 Research predicted that the market for Kubernetes monitoring tools will dwarf the market for Kubernetes management products by 2022 as IT pros move from the initial phases of deploying microservices into “day two” management problems. Even more recently, log analytics tools have begun to play an increasing role in IT security operations and DevSecOps.

The newly relaunched Coralogix caught the eye of BioCatch in part because of its partnership with the firm’s preferred cloud vendor, Microsoft Azure. It was also easy to set up and redirect logs from the firm’s existing Elasticsearch instance, and the Coralogix-managed Elasticsearch service eliminated log management overhead for the BioCatch team.

“We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs,” Amram said. “Now, more than half of the company works with Coralogix, and more than 80% of those who work with it use it on a daily basis.”

Log analytics correlate app changes to errors

The BioCatch DevOps team adds tags to each application update that direct log data into Coralogix. Then, the software monitors application releases as they’re rolled out in a canary model for multiple tiers of customers. BioCatch rolls out its first application updates to what it calls “ring zero,” a group of early adopters; next, to “ring one;” and so on, according to each customer group’s appetite for risk. All those changes to multiple tiers and groups of microservices result in an average of 1.5 TB of logs shipped per day.
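The ring model described above amounts to a simple lookup: each customer is assigned a ring, and a release reaches every customer whose ring the rollout has progressed past. The customer names and ring assignments below are hypothetical:

```python
# Hypothetical customer -> canary ring assignments.
CUSTOMER_RINGS = {
    "acme": 0,     # early adopter: gets releases first ("ring zero")
    "globex": 1,   # moderate risk appetite ("ring one")
    "initech": 2,  # most risk-averse: gets releases last
}

def customers_in_rollout(release_ring):
    """Customers whose ring has been reached by the current rollout stage."""
    return sorted(name for name, ring in CUSTOMER_RINGS.items()
                  if ring <= release_ring)

print(customers_in_rollout(1))  # ['acme', 'globex']
```

The benefit of the staged model is that a bad release surfaces in the early rings, where the blast radius is limited to customers who opted into more risk.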

The version tags fed through the CI/CD pipeline to Coralogix enable the tool to identify issues and correlate them with application changes made by BioCatch developers. It also identifies anomalous patterns in infrastructure behavior post-release, which can catch problems that don’t appear immediately.

Coralogix log analytics
Coralogix log analytics uses version tags to correlate application issues with specific developer changes.

“Every so often, an issue will appear a day later because we usually release at off-peak times,” BioCatch’s Amram said. “For example, it can say, ‘sending items to this queue is 20 times slower than usual,’ which shows the developer why the queue is filling up too quickly and saturating the system.”
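A heavily simplified version of that kind of alert compares per-operation timings against a historical baseline and flags anything far outside the norm. The operation names, numbers, and threshold factor below are invented for illustration; Coralogix's actual anomaly detection is far more involved:

```python
def find_anomalies(baseline, current, factor=5.0):
    """Flag operations whose current latency exceeds the historical
    baseline by more than `factor` times, reporting the slowdown ratio."""
    return {op: cur / baseline[op]
            for op, cur in current.items()
            if op in baseline and cur > factor * baseline[op]}

# Hypothetical per-operation latencies in milliseconds.
baseline = {"enqueue_item": 2.0, "fetch_profile": 10.0}
current  = {"enqueue_item": 40.0, "fetch_profile": 11.0}

print(find_anomalies(baseline, current))  # {'enqueue_item': 20.0}
```

Reporting the ratio rather than a boolean mirrors the alert quoted above: "20 times slower than usual" tells the developer not just that something broke, but how badly.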

BioCatch uses Coralogix alongside APM tools from Datadog that analyze application telemetry and metrics. Often, alerts in Datadog prompt BioCatch IT ops pros to consult Coralogix log analytics dashboards. Datadog also began offering log analytics in 2018 but didn’t include this feature when BioCatch first began talks with Coralogix.

Coralogix also maintains its place at BioCatch because its interfaces are easy to work with for all members of the IT team, Amram said. This has grown to include not only developers and IT ops, but solutions engineers who use the tool to demonstrate to prospective customers how the firm does troubleshooting to maintain its service-level agreements.

“We don’t have to search in Kibana [Elasticsearch’s visualization layer] and say, ‘give me all the errors,'” Amram said. “Coralogix recognizes patterns, and if the pattern breaks, we get an alert and can immediately react.”


AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

AWS CEO Andy Jassy and NFL Commissioner Roger Goodell
AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into infinite scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.
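As a caricature of that what-if capability, a toy Monte Carlo model can compare average risk under two candidate rules without any real players involved. The risk function and rule below are invented purely for illustration and have no biomechanical validity:

```python
import random

def injury_risk(speed_mph, impact_angle_deg):
    """Purely illustrative risk curve -- NOT a real biomechanical model."""
    severity = speed_mph * abs(impact_angle_deg) / 90.0
    return min(1.0, severity / 40.0)

def simulate(n_scenarios, rule_max_speed, seed=0):
    """Average injury risk across random impact scenarios, under a
    hypothetical rule that caps closing speed."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_scenarios):
        speed = min(rng.uniform(5, 22), rule_max_speed)
        angle = rng.uniform(0, 90)
        total += injury_risk(speed, angle)
    return total / n_scenarios

# Does capping closing speed at 15 mph lower average risk vs. no cap?
print(simulate(10_000, rule_max_speed=22) > simulate(10_000, rule_max_speed=15))  # True
```

A real digital twin replaces the toy risk curve with models fitted to the sensor, video, and injury data described above, but the workflow is the same: run many virtual scenarios, compare outcomes, and only then change the real game.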

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety

It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game.
Roger Goodell, commissioner, NFL

New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level football players the researchers looked at, 110 of them had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules, and a recent $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.

Go to Original Article
Author:

How to achieve explainability in AI models

When machine learning models deliver problematic results, they often do so in ways humans can’t make sense of, which becomes dangerous when the model’s limitations aren’t understood, particularly for high-stakes decisions. Without straightforward and simple tools that highlight explainability in AI models, organizations will continue to struggle to implement AI algorithms. Explainable AI refers to the process of making it easier for humans to understand how a given model generates its results, and of planning for cases when those results should be second-guessed.

AI developers need to incorporate explainability techniques into their workflows as part of their overall modeling operations. AI explainability can refer to the process of creating algorithms for teasing apart how black box models deliver results or the process of translating these results to different types of people. Data science managers working on explainable AI should keep tabs on the data used in models, strike a balance between accuracy and explainability, and focus on the end user.

Opening the black box

Traditional rule-based AI systems had explainability built in, since humans typically handcrafted the rules that map inputs to outputs. But deep learning techniques that use semi-autonomous neural network models can’t show how a model’s results map to an intended goal.

Researchers are working to build learning algorithms that generate explainable AI systems from data. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

“This results in black box ML techniques, which may generate accurate AI systems, but it’s harder to trust them since we don’t know how these systems’ outputs are generated,” he said. 

AI explainability often describes post-hoc processes that attempt to explain the behavior of AI systems, rather than alter their structure. Other machine learning model properties like accuracy are straightforward to measure, but there are no corresponding simple metrics for explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It’s also important for practitioners to understand the assumptions and limitations of the techniques they use for implementing explainability.
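As a concrete example of a post-hoc technique, the sketch below implements permutation importance, one common way to probe a black box: shuffle one feature at a time and measure how much the model's error grows. The toy "model" and data are illustrative assumptions, not a recommendation of any particular tool.

```python
import random

def black_box(row):
    # Stand-in for an opaque model: secretly only feature 0 matters.
    return 3.0 * row[0]

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [black_box(r) for r in X]

def mse(model, X, y):
    """Mean squared error of the model on a dataset."""
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature):
    """Error increase after shuffling one feature's column."""
    col = [r[feature] for r in X]
    random.shuffle(col)
    X_perm = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return mse(model, X_perm, y) - mse(model, X, y)

imp0 = permutation_importance(black_box, X, y, 0)
imp1 = permutation_importance(black_box, X, y, 1)
# Shuffling feature 0 hurts the model; shuffling unused feature 1 does not.
```

Note the caveat Taly raises applies here too: permutation importance assumes features are independent, so correlated features can make the scores misleading.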

“While it is better to have some transparency rather than none, we’ve seen teams fool themselves into a false sense of security by wiring an off-the-shelf technique without understanding how the technique works,” Taly said. 

Start with the data

The results of a machine learning model can often be explained by the training data itself, or by how a neural network interprets a dataset. Machine learning models often start with data labeled by humans. Data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

“What a particular neural network derives from a dataset are patterns that it finds that may or may not be obvious to humans,” said Aaron Edell, director of applied AI at AI platform Veritone.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model trained to identify sheep might learn only from pictures of farms, causing it to misinterpret sheep in other settings, or to mistake white clouds in farm pictures for sheep. Facial recognition software might be trained on the faces of a company's employees, but if those faces are mostly male or white, the data is biased.

One good practice is to train machine learning models on data that is indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identifies how long Jennifer Aniston appears in every episode of Friends should be trained on frames of actual episodes rather than Google image search results for ‘Jennifer Aniston.’ In a similar vein, it’s OK to train models on publicly available datasets, but generic pre-trained models offered as a service will be harder to explain and change if necessary.
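One lightweight way to act on this practice is to compare summary statistics of the training data against the data the model actually sees in production and flag features that have drifted. The sketch below is a minimal illustration; the feature, values, and threshold are assumptions, and real pipelines would use proper statistical drift tests.

```python
import statistics

def drift_score(train_col, prod_col):
    """Shift between training and production means, in training-std units."""
    mu_t, mu_p = statistics.mean(train_col), statistics.mean(prod_col)
    sd = statistics.stdev(train_col) or 1.0  # guard against zero spread
    return abs(mu_t - mu_p) / sd

# Illustrative feature: average frame brightness of training images
# (e.g. well-lit episode frames) vs. what the deployed model receives.
train_brightness = [0.48, 0.52, 0.50, 0.49, 0.51]
prod_brightness = [0.20, 0.22, 0.18, 0.21, 0.19]

score = drift_score(train_brightness, prod_brightness)
drifted = score > 1.0  # illustrative threshold: more than one std of shift
```

When a feature drifts this far, the honest explanation for odd model behavior is often simply that the model was never trained on data like this.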

Balancing explainability, accuracy and risk

The real problem with implementing explainability in AI is that there are major trade-offs between accuracy, transparency and risk in different types of AI models, said Matthew Nolan, senior director of decision sciences at Pegasystems. More opaque models may be more accurate, but fail the explainability test. Other types of models like decision trees and Bayesian networks are considered more transparent but are less powerful and complex.

“These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but this sometimes will sacrifice performance,” said Nolan.

Focusing on transparency can cost a business, but turning to more opaque models can leave a model unchecked and might expose consumers, customers and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets. These settings make it easier for companies to adjust the opaqueness or transparency thresholds used in their AI models, and give enterprises control to tune models to their needs or to corporate governance policy, so they can manage risk, maintain regulatory compliance and responsibly deliver customers a differentiated experience.

Data scientists should also identify when the complexity of new models is getting in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said simpler models that attain the same performance are often available, but machine learning practitioners have a tendency to reach for fancier, more advanced models.

Focus on the user

Explainability means different things to a highly skilled data scientist than to a call center worker who may need to make decisions based on an explanation. The task of implementing explainable AI is not just to foster trust in explanations but also to help end users make decisions, said Ankur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission to plan accordingly.

Teredesai said there is still no general guideline for explainability, particularly across different types of users. It’s also challenging to integrate explanations into machine learning and end-user workflows. End users typically need explanations framed as possible actions to take based on a prediction, not just as reasons for it, and this requires striking the right balance between prediction fidelity and explanation fidelity.

There are a variety of tools for implementing explainability on top of machine learning models which generate visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, vice president of product management at Aible, an automated machine learning platform. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap. Another good practice is to directly use humans in the loop to evaluate your explanations to see if they make sense to a human, said Daniel Fagnan, director of applied science on the Zillow Offers Analytics team. This can help lead to more accurate models through key improvements including model selection and feature engineering.
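A minimal sketch of that bridging idea, assuming a model that exposes per-feature contribution scores (the feature names and values below are hypothetical):

```python
def explain_in_words(prediction, contributions, top_n=2):
    """Render the top contributing features as a plain-language sentence."""
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    parts = [
        f"{name} {'raised' if value > 0 else 'lowered'} the score"
        for name, value in ranked[:top_n]
    ]
    return f"Predicted risk {prediction:.0%}: " + " and ".join(parts) + "."

# Hypothetical readmission-risk model output for one patient.
contributions = {
    "prior_admissions": +0.21,
    "age": +0.04,
    "medication_adherence": -0.11,
}
text = explain_in_words(0.73, contributions)
# e.g. "Predicted risk 73%: prior_admissions raised the score and
# medication_adherence lowered the score."
```

Sentences like this are exactly what a human-in-the-loop review can sanity-check: a domain expert reads the explanation and flags cases where the stated drivers don't match clinical intuition.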

KPIs for AI risks

Enterprises should consider the specific reasons that explainable AI matters to them when deciding how to measure explainability and accessibility. Teams should first and foremost establish a set of criteria for key AI risks, including robustness, data privacy, bias, fairness, explainability and compliance, said Dr. Joydeep Ghosh, chief scientific officer at AI vendor CognitiveScale. It’s also useful to generate appropriate metrics for key stakeholders, relevant to their needs.

External organizations like AI Global can help establish measurement targets that determine acceptable operating values. AI Global is a nonprofit organization that has established the AI Trust Index, a scoring benchmark for explainable AI similar to a FICO score. This enables firms not only to establish their own best practices, but also to compare themselves against industry benchmarks.

When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application.
Mark Stefik, Research Fellow, PARC, a Xerox Company

Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle based on the AI Trust Index. Although the tools for explainable AI are getting better, the technology is at an early research stage with proof-of-concept prototypes, cautioned Mark Stefik, a research fellow at PARC, a Xerox Company. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

“When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application,” Stefik said.


Salesforce Trailhead app makes learning more convenient

SAN FRANCISCO — Salesforce customers see the value in the Trailhead learning platform and its new mobile app.

Trailhead Go for iOS is one of two new mobile apps that Salesforce announced here at Dreamforce 2019. Trailhead Go is a mobile extension of Trailhead, Salesforce’s free customer success learning platform, which lets Salesforce users and nonusers follow different paths to learn Salesforce skills. Trailhead now also offers Amazon Partner Connect, with content on building Amazon Alexa skills and working with AWS. By the end of the year, Trailhead plans to roll out live and on-demand training videos.

Salesforce provides customer success tools to users before they even become customers. For most businesses, this model is flipped, providing these tools to users after they sign contracts, said Gerry Murray, a research director at IDC.

“It’s not only about how the product works, it’s about teaching the line-of-business people to elevate their skills or further their careers in and out of their companies,” Murray said. “Trailhead Go makes it all that more convenient.”

Making education accessible

A skills gap costs companies $1.3 trillion each year, said Sarah Franklin, general manager of Trailhead, in a keynote. While many workers think they can fill that gap with education, it has become more and more inaccessible. Over the last 20 years, student tuition has increased by 200%, and student debt has increased by 163%.

Anyone who has access to the Trailhead Go app can learn, said Ray Wang, principal analyst and founder at Constellation Research.

“You don’t have to go to school; you don’t need a computer; you just need a phone,” he said.

Customers see benefits

The personalized homepage of the Trailhead Go app shows which trails a user is working on, with a quick navigation bar at the bottom.

Supermums, based in London, equips moms with Salesforce skills through a combination of training, mentoring, work experience and job search support to get them into the Salesforce ecosystem. Trainees go through a customized six-month program where they earn 50 to 100 Trailhead badges. Trainees can benefit from the Trailhead app because they’ll be able to learn on the go, making it easier to fit into their schedules, said Heather Black, a certified Salesforce administrator and CEO of Supermums.

“[Trailhead Go] will help me complete more trails and fit it into my life while I’m busy supporting a team and juggling kids,” she said. “Trailhead Go makes this accessible to more people.”

Trailhead has also branched out beyond technical skills and into functional skills, Black said.

“It helps you develop as a person, as well as help you be successful in a Salesforce career,” she said.

Trailhead is great for helping learn the basics when people are entering the CRM world, said Sayantani Mitra, a data scientist at Goby Inc., a company that specializes in accounts payable automation.

“Read them, learn them, ask the community, ask people questions, do them multiple times,” Mitra said.

The best way to learn anything is practice, practice and practice more.
Sayantani Mitra, Data scientist, Goby

But just getting a Salesforce certification won’t get someone a job, Mitra said. They have to know what they’re doing.

“The best way to learn anything is practice, practice and practice more,” Mitra said.

Mitra plans to use the Trailhead Go app particularly on long-haul flights.

“When I go home to India … you cannot watch movies for 20 hours or sleep for 20 hours; you need something more,” she said.

Trailhead Go is generally available now for free on the Apple App Store.


SwiftStack 7 storage upgrade targets AI, machine learning use cases

SwiftStack turned its focus to artificial intelligence, machine learning and big data analytics with a major update to its object- and file-based storage and data management software.

The San Francisco software vendor’s roots lie in the storage, backup and archive of massive amounts of unstructured data on commodity servers running a commercially supported version of OpenStack Swift. But SwiftStack has steadily expanded its reach over the last eight years, and its 7.0 update takes aim at the new scale-out storage and data management architecture the company claims is necessary for AI, machine learning and analytics workloads.

SwiftStack said it worked with customers to design clusters that scale linearly to handle multiple petabytes of data and support throughput of more than 100 GB per second. That allows it to handle workloads such as autonomous vehicle applications that feed data into GPU-based servers.

Marc Staimer, president of Dragon Slayer Consulting, said throughput of 100 GB per second is “really fast” for any type of storage and “incredible” for an object-based system. He said the fastest NVMe system tests at 120 GB per second, but it can scale only to about a petabyte.

“It’s not big enough, and NVMe flash is extremely costly. That doesn’t fit the AI [or machine learning] market,” Staimer said.

This is the second object storage product launched this week with speed not normally associated with object storage. NetApp unveiled an all-flash StorageGrid array Tuesday at its Insight user conference.

Staimer said SwiftStack’s high-throughput “parallel object system” would put the company into competition with parallel file system vendors such as DataDirect Networks, IBM Spectrum Scale and Panasas, but at a much lower cost.

New ProxyFS Edge

With SwiftStack 7, the company plans to introduce a new ProxyFS Edge containerized software component next year, giving remote applications a local file system mount for data rather than requiring them to connect through a network file-serving protocol such as NFS or SMB. SwiftStack spent about 18 months creating a new API and software stack to extend its ProxyFS to the edge.

Founder and chief product officer Joe Arnold said SwiftStack wanted to utilize the scale-out nature of its storage back end and enable a high number of concurrent connections to go in and out of the system to send data. ProxyFS Edge will allow each cluster node to be relatively stateless and cache data at the edge to minimize latency and improve performance.

SwiftStack 7 will also add 1space File Connector software in November to enable customers that build applications using the S3 or OpenStack Swift object API to access data in their existing file systems. The new File Connector is an extension to the 1space technology that SwiftStack introduced in 2018 to ease data access, migration and searches across public and private clouds. Customers will be able to apply 1space policies to file data to move and protect it.

Arnold said the 1space File Connector could be especially helpful for media companies and customers building software-as-a-service applications that are transitioning from NAS systems to object-based storage.

“Most sources of data produce files today and the ability to store files in object storage, with its greater scalability and cost value, makes the [product] more valuable,” said Randy Kerns, a senior strategist and analyst at Evaluator Group.

Kerns added that SwiftStack’s focus on the developing AI area is a good move. “They have been associated with OpenStack, and that is not perceived to be a positive and colors its use in larger enterprise markets,” he said.

AI architecture

A new SwiftStack AI architecture white paper offers guidance to customers building out systems that use popular AI, machine learning and deep learning frameworks, GPU servers, 100 Gigabit Ethernet networking, and SwiftStack storage software.

“They’ve had a fair amount of success partnering with Nvidia on a lot of the machine learning projects, and their software has always been pretty good at performance — almost like a best-kept secret — especially at scale, with parallel I/O,” said George Crump, president and founder of Storage Switzerland. “The ability to ratchet performance up another level and get the 100 GBs of bandwidth at scale fits perfectly into the machine learning model where you’ve got a lot of nodes and you’re trying to drive a lot of data to the GPUs.”

SwiftStack noted distinct differences between the architectural approaches that customers take with archive use cases versus newer AI or machine learning workloads. An archive customer might use 4U or 5U servers, each equipped with 60 to 90 drives, and 10 Gigabit Ethernet networking. By contrast, one machine learning client clustered a larger number of lower-horsepower 1U servers, each with fewer drives and a 100 Gigabit Ethernet network interface card, for high bandwidth, Arnold said.
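The contrast can be made concrete with back-of-the-envelope arithmetic. The node counts and NIC speeds below are illustrative assumptions, not SwiftStack specifications; the point is only that aggregate throughput scales with node count times network speed.

```python
def aggregate_bandwidth_gbps(nodes: int, nic_gbit: int) -> float:
    """Best-case aggregate cluster bandwidth in gigabytes per second."""
    return nodes * nic_gbit / 8  # 8 bits per byte

# Dense archive shape: few fat nodes on 10 GbE.
archive = aggregate_bandwidth_gbps(nodes=10, nic_gbit=10)   # 12.5 GB/s
# ML shape: many thin nodes on 100 GbE.
ml = aggregate_bandwidth_gbps(nodes=80, nic_gbit=100)       # 1000 GB/s
```

Many thin nodes with fast NICs win on throughput to keep GPUs fed; a few dense nodes win on capacity per rack unit, which is what archiving rewards.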

An optional new SwiftStack Professional Remote Operations (PRO) paid service is now available to help customers monitor and manage SwiftStack production clusters. SwiftStack PRO combines software and professional services.


Countdown to Microsoft Global Learning Connection 2019: Two weeks to go—join us on Nov 5-6 to celebrate global learning and open students’ hearts and minds | | Microsoft EDU

The Microsoft Global Learning Connection (formerly Skype-a-Thon) event is almost here. Thousands of educators from more than 110 countries are preparing to connect their students with experts and classrooms around the world to share stories and cultural traditions, play games, and collaborate on projects. The goal is to empower young people to become more engaged global citizens and expand their horizons.

Our global community will count the virtual miles traveled after each connection. Ultimately, these will all contribute to our global goal of traveling 17 million virtual miles and connecting nearly a half-million students via Skype, Teams and Flipgrid.

This 48-hour annual event is a true celebration of the power of global learning and an opportunity to shift perspectives and foster greater empathy and compassion for our planet and each other. If you have arranged a connection, make sure to share your plans with us on social @SkypeClassroom with #MSFTGlobalConnect and #MicrosoftEDU.

And if you haven’t arranged a connection for the two days of the event, there is still time to join us.

Head to msftglobalclassroom.com to learn more about the event. We hope you will join us to connect and inspire your students on November 5 and 6.

To help you get started and plan your participation, we have gathered below all the necessary resources:

  • Download a step-by-step activity plan to help you organize your connections for the two-day event.
  • Access the teacher toolkit, which is full of resources for you and your students. This includes maps, stickers, digital passports, activity sheets, a letter to parents and more.
  • Are you interested in making the Global Learning Connection the starting point for an event at your school or getting ideas on how to tie the event with a global cause? Check out educators’ tips here.
  • Find out how to schedule connections via Skype, Teams and Flipgrid here.
  • Explore the event’s social toolkit and download ready-made templates to share your participation on social channels with our global community @SkypeClassroom with #MSFTGlobalConnect #MicrosoftEDU.

Happy Traveling!

Explore tools for Future Ready Skills

Go to Original Article
Author: Microsoft News Center