Salesforce Trailhead gets social boost for admins, developers

Salesforce recently released two new features for its training platform, Salesforce Trailhead: a publicly displayed skills graph and the option for admins, developers and other certified pros to create a vanity URL. While these two features may seem like small changes, they pave the way for Trailhead to become an alternative to the résumé or, maybe more significantly, a way to steal eyeballs from rival Microsoft’s LinkedIn professional network.

Trailhead provides hands-on training programs for all facets of Salesforce and a number of other skills that can be useful for people working with Salesforce implementations. It also allows users to develop and share their own training strategies.

Originally, Trailhead tracked users’ accomplishments, but didn’t provide any way for the users to display their accomplishments to each other or to potential employers. More recently, Salesforce Trailhead started giving users badges they could display on their profiles to represent every training program they’ve completed.

Because these training programs often include hands-on elements and actual coding projects, this gave users the ability to show off their skills. The skills graph takes this one step further, providing a pie chart for Trailhead users to prove their skills to prospective employers.

Salesforce Trailhead profile as personal brand

Alan Lepofsky, vice president and principal analyst at Constellation Research, said, “Salesforce wants to give you their Salesforce résumé. Over time, this should give people the ability to build up their personal brand.” The Salesforce résumé is built around the badges, and the public skills graph is, as Lepofsky said, just “taking it to the next level of publicity.”

“People are looking to differentiate at more than the product level,” Lepofsky added. Instead, they’re competing over extra and unexpected skills. Employers want more granular information on prospective employees’ skills than simply whether they have Salesforce experience. Being able to point to quantifiable measurements of specific skills would be an asset on the job market, Lepofsky said.

Brent Leary, co-founder and partner at CRM Essentials, said the Salesforce Trailhead badges and skills graph would “make it easy for people looking for the right kind of résumé and experience … to know that these experiences are legitimate.”

Furthermore, according to Leary, because the skills graph and badges are standardized, they give employers a useful way to compare prospective hires from different geographical locations and with different backgrounds.

A profile in Salesforce Trailhead
Recent updates to Salesforce Trailhead include a revised skills graph and badges that display the skills users have earned and the knowledge they’ve gained.

Increasing diversity of Salesforce recruiting a side benefit?

Sarah Franklin, executive vice president of development relations and general manager of Salesforce Trailhead, said she sees these new features as more than just a way for Salesforce to build its own version of LinkedIn.

Franklin said the skills graph “changes the résumé from ‘What have you done?’ to ‘What are your skills?'” A traditional résumé lists past job experience and educational history, but it doesn’t include any way to prove that job applicants have gone through training and have new skills they didn’t use at previous jobs. The purpose of the Trailhead skills graph, according to Franklin, is “if you learn [new skills] and you can show it on your skills graph, then you’re in.”

She said she hopes that, with the inclusion of the skills graph, Salesforce Trailhead will be able to increase diversity in the company’s ecosystem. A mother of daughters and a woman working in technology herself, Franklin said increasing gender diversity in the technology space is important to her. “The problems we have with diversity, they’re systemic. It’s not job recruiting. To get recruited, you need a four-year degree. To get a four-year degree, you need lots of money, probably. It’s a systemic problem that we face, and we have to change the game,” she said.

The free training isn’t just for current Salesforce users. Franklin said college students with debt and no work experience, parents returning to the workforce after raising children and people looking for a brand-new career could all be new Trailhead users. People who come to the Salesforce ecosystem by nontraditional roads can also prove their qualifications to a potential employer via the skills graph.

“We have to make it affordable to have an easy on-ramp to a career,” Franklin said.

The vanity URL makes it easier for users to point prospective employers, LinkedIn pages and Twitter profiles to their Salesforce Trailhead profiles. According to Lepofsky, Salesforce MVPs are already jockeying to be the first to claim their chosen Trailhead URLs. “People aren’t sharing business cards anymore. They’re sharing URLs,” Franklin said. The vanity URL aims to capitalize on that.

AU combines talent analytics with HR management

The use of talent analytics may be creating a need for HR staff with specialized training. One source for these skills is programs that offer master’s degrees in analytics. Another may be a new program at American University that combines analytics with HR management.

American University, or AU, is making talent analytics, which is also called people analytics, a core part of the HR management training in a new master’s degree program, said Robert Stokes, the director of the Master of Science in human resource analytics and management at AU.

Stokes said he believes AU’s master’s degree program is unique, “because metrics and analytics run through all the courses.” He said metrics are a part of that training in talent management, compliance and risk reduction, to name a few HR focus areas.

Programs that offer a master’s degree in analytics are relatively new. The first school to offer this degree was North Carolina State University in 2007. Now, more than two dozen schools offer similar programs. There are colleges that offer talent analytics training, but usually as a course in an HR program.

These master’s programs produce graduates who can meet a broad range of business analytics needs, including talent analytics.

“We definitely have interest from companies in hiring our students for their HR departments,” said Joel Sokol, the director of the Master of Science in analytics program at the Georgia Institute of Technology.  “It’s not the highest-demand business function that our students go into, of course, but it’s certainly on the list,” he said in an email.

Sokol also pointed out that one of the program’s advisory board members is a vice president of HR at AT&T.

Analytics runs through all of HR

The demand for analytics-trained graduates is high. North Carolina State, for instance, said 93% of its master’s students were employed at graduation and earned an average base salary of just over $95,000.

Interest in master’s degree analytics training follows the rise of business analytics. The interest in employing people with quantitative talent analytics skills is part of this trend.

What HR organizations are trying to do is discover “how to drive value from people data,” said David Mallon, the head of research for Bersin by Deloitte, headquartered in New York.

“It wouldn’t shock anybody” if a person from supply chain, IT or marketing “brought a lot of data to the table; it’s just how they get things done,” Mallon said. “But in most organizations, it would be somewhat shocking if the HR person brought data to the conversation,” he said.

Mallon said he is seeing clear traction by HR departments — backed up by its just-released research on people analytics maturity — to deliver better analysis. But he said only about 20% are doing new and different things with analytics.  “They have data scientists, they have analytics teams, [and] they’re using new kinds of technologies to capture data, to model data,” he said.

The march to people analytics

“Conservatively, our data shows that at least 44% of HR organizations have an HR [or] people analytics team of some kind,” Mallon said. The percentage of departments with at least someone responsible for it — even part time — may be as high as 67%, he said.

The AU program’s first class this fall has about 10 students, and Stokes said he expects it to grow as word about the program spreads. Most HR programs that provide analytics training do so under separate courses that may not be integrated with the broader HR training, he said.

The intent is to use analytics and metrics to measure and make better decisions, Stokes said. An organization, for instance, should be able to quantify how much fiscal value is delivered by a training program. This type of people analytics may still be new to many HR organizations, which may rely on surveys to assess the effectiveness of a training program.
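As a simple illustration of that kind of quantification, a back-of-the-envelope return-on-training calculation might look like the following sketch; all figures are invented for illustration.

```python
# Hypothetical back-of-the-envelope ROI for a training program.
# All figures are invented for illustration.
program_cost = 50_000    # delivery, materials and staff time ($)
annual_benefit = 80_000  # e.g., measured productivity gain ($/yr)

roi = (annual_benefit - program_cost) / program_cost
print(f"First-year ROI: {roi:.0%}")  # First-year ROI: 60%
```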

Organizations that are more mature aren’t just using surveys to try to determine employee engagement, Mallon said. They may be analyzing what’s going on in internal and external social media.

“They’re mining — they’re watching the interactions of employees in collaboration platforms and on your intranet,” Mallon said. “They’re bringing in performance data from existing business systems like ERPs and CRMs,” he said.

The best-performing organizations are using automation and machine learning to handle the routine reporting to free up time for higher-value research, Mallon said. But they are also using these tools “to spot trends that they didn’t even know were there,” he said.

Scale up your deep learning with Batch AI preview

Imagine reducing your training time for an epoch from 30 minutes to 30 seconds, and testing many different hyper-parameter configurations in parallel. Available now in public preview, Batch AI is a new service that helps you train and test deep learning and other AI or machine learning models with the same scale and flexibility used by Microsoft’s data scientists. Managed clusters of GPUs let you design larger networks and run experiments in parallel and at scale, reducing iteration time and making development easier and more productive. Spin up a cluster when you need GPUs, then turn it off when you’re done and stop the bill.

Developing powerful AI involves combining large data sets for training with clusters of GPUs for experimenting with network design and optimization of hyper-parameters. Having access to this capability as a service helps data scientists and AI researchers get results faster and focus on building better models instead of managing infrastructure. This is where Batch AI comes in as part of the Microsoft AI platform.

“Deep learning researchers require increasing computing time to train complex neural networks with big data. Large computing clusters on Microsoft Azure is one of the solutions to resolve our researchers’ pain, and Azure Batch AI will be the key solution to connect on-premises and cloud environments. Preferred Networks is excited to integrate Chainer & ChainerMN with this service.” –Hiroshi Maruyama, Chief Strategy Officer, Preferred Networks, Inc

Joseph Sirosh, Corporate Vice President of the Cloud AI Platform, spoke at the recent Microsoft Ignite conference about delivering Cloud AI for every developer with a comprehensive family of AI infrastructure in Azure, services for AI, and tools to make AI development easier. Batch AI is part of this infrastructure, making distributed computing on Azure easy for parallel training, testing, and scoring. Scale out to as many GPUs as you want.

There’s a great demo in Joseph’s Ignite talk (25 minutes in) that shows an end-to-end experience of data wrangling, training at scale, and using a trained AI model in Excel. The model was developed initially using a Data Science Virtual Machine in Azure, then scaled out to speed up experimentation, hyper-parameter tuning, and training. Using Batch AI, our data scientists were able to scale from 1 to 148 GPUs for the model, reducing training time per epoch from 30 minutes to 30 seconds. That makes a huge difference in productivity when you need to run thousands of epochs. Our data scientists were able to experiment with the network design and hyper-parameter values and see results quickly. A version of the code behind this demo will be available as a tutorial to use with Batch AI and Azure Machine Learning Services and Workbench.

What is Batch AI?

Batch AI provides an API and services specialized for AI workflows. The key concepts are clusters and jobs.

A cluster describes the compute resources you want to use; a brief code sketch of both concepts follows these lists. Batch AI enables:

  • Provisioning clusters of GPUs or CPUs on demand

  • Installing software in a container or with a script

  • Automatic or manual scaling to manage costs

  • Access to low priority virtual machines for learning and experimentation

  • Mounting shared storage volumes for training and output data

A job is the code you want to run — a command line with parameters. Batch AI supports:

  • Using any deep learning framework or machine learning tools

  • Direct configuration of options for popular frameworks

  • Priority-based job queue for sharing a GPU quota or reserved instances

  • Restarting jobs if a virtual machine becomes unavailable

  • SDK, command line, portal and tools integration
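To make the two concepts concrete, here is a minimal sketch using the azure-mgmt-batchai Python package (one way to drive the service, alongside the CLI and portal). The resource names are placeholders, and the exact model and method names are assumptions that varied across preview SDK versions, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: provision a Batch AI cluster, then submit a job to it.
# Assumes the azure-mgmt-batchai Python package; resource names are
# placeholders and model/method names varied across preview SDK versions.
from azure.common.credentials import ServicePrincipalCredentials
import azure.mgmt.batchai as batchai
import azure.mgmt.batchai.models as models

creds = ServicePrincipalCredentials(client_id='...', secret='...', tenant='...')
client = batchai.BatchAIManagementClient(creds, '<subscription-id>')

# Cluster: two low-priority NC6 nodes (one K80 GPU each), manually scaled.
cluster = client.clusters.create(
    'my-resource-group', 'nc6-cluster',
    models.ClusterCreateParameters(
        location='eastus',
        vm_size='STANDARD_NC6',
        vm_priority='lowpriority',  # cheapest option for experimentation
        scale_settings=models.ScaleSettings(
            manual=models.ManualScaleSettings(target_node_count=2)),
        user_account_settings=models.UserAccountSettings(
            admin_user_name='demo',
            admin_user_password='<password>'))).result()

# Job: a command line with parameters; here, a CNTK training script that
# the service distributes across both nodes of the cluster.
client.jobs.create(
    'my-resource-group', 'train-job-1',
    models.JobCreateParameters(
        location='eastus',
        cluster=models.ResourceId(id=cluster.id),
        node_count=2,
        std_out_err_path_prefix='$AZ_BATCHAI_MOUNT_ROOT/external',
        cntk_settings=models.CNTKsettings(
            python_script_file_path='$AZ_BATCHAI_MOUNT_ROOT/external/train.py'))).result()
```

When the run finishes, deleting the cluster or scaling it to zero nodes stops the billing, which is the “spin up, then turn it off” workflow described above.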

Building systems of intelligence

Dr. Yogendra Narayan Pandey, Data Scientist at Halliburton Landmark, used Azure Batch AI and Azure Data Lake to develop predictive deep learning algorithms for static reservoir modeling to reduce the time and risk in oil field exploration compared to traditional simulation. He shared his work at the Landmark Innovation Forum & Expo 2017.

“With the huge amounts of storage and compute power of the Azure cloud, we are entering the age of predictive model-based discovery. Batch AI makes it straightforward for data scientists to use the tools they already know. Without Azure Batch AI and GPUs, it would have taken hours if not days for each model training job to complete.”

Batch AI includes recipes for popular AI frameworks that help you get started quickly without needing to learn the details of working with Azure virtual machines, storage, and networking. The recipes include cluster and job templates to use with the Azure CLI, as well as Jupyter Notebooks that demonstrate the Python API.

End-to-end productivity

The Batch AI team is working to integrate with Microsoft AI tools including the Azure Machine Learning services and Workbench for data wrangling, experiment management, deployment of trained models, and Visual Studio Code Tools for AI.

Partners around the world are also using Batch AI to help their customers scale up their training to Azure and its powerful fleet of NVIDIA GPUs.

“We have long needed a service like Azure Batch AI. It is an appealing solution for deep learning engineers to speed up deep neural network training & hyper parameter search. I’m looking forward to creating end-to-end solutions by integrating our deep learning service CSLAYER and Azure Batch AI.”  –Ryo Shimizu, President & CEO of UEI Corporation

Getting started

We invite you to try Batch AI for training your models in parallel and at scale in Azure. We have sample recipes for popular AI frameworks to help you get started. We recommend starting with low priority virtual machines to minimize costs.

With Batch AI, you only pay for the compute and storage used for your training. There’s no additional charge for the cluster management and job scheduling. Using low priority virtual machines with Batch AI is the most cost-effective way to learn and develop until you are ready to leverage GPUs.

The team would like to hear any feedback or suggestions you have. We’re listening on Azure Feedback, Stack Overflow, MSDN, and by email.

DevOps tools training sparks IT productivity

Enterprises have a new weapon to combat the IT skills shortage where new hiring and training practices fall short.

Most IT pros agree the fastest path to IT burnout is what Amazon engineers have termed “undifferentiated heavy lifting”: repetitive, uninteresting work with little potential for wider impact beyond keeping the lights on. DevOps tools training, which teaches IT automation practices, can reduce or eliminate such mundane work and compensate for staff shortages and employee attrition.

“Automation tools aren’t used to eliminate staff; they’re used to help existing staff perform at a higher level,” said Pete Wirfs, a programmer specialist at SAIF Corp., a not-for-profit workers’ compensation insurance company in Salem, Ore., that has used Automic Software’s Automation Engine to orchestrate scripts.

The company has used Automation Engine since 2013, but last year, it calculated new application development would add hundreds of individual workflows to the IT operations workload. Instead, Wirfs said he found a way to automate database queries and use the results to kick off scripts, so a single centralized workflow could meet all the project’s needs.
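The pattern Wirfs describes (poll a database, use the results to kick off scripts from one centralized workflow) is easy to sketch in outline. The snippet below is a generic, hypothetical Python illustration, not SAIF’s actual Automic workflow; the table, task and script names are invented.

```python
# Hypothetical sketch of a database-driven dispatcher: poll a work-queue
# table and kick off the matching script for each pending row. Generic
# illustration only; table, task and script names are invented.
import sqlite3
import subprocess
import time

SCRIPTS = {
    'health_check': ['/opt/ops/health_check.sh'],
    'nightly_backup': ['/opt/ops/backup.sh', '--full'],
}

def poll_and_dispatch(db_path='ops.db'):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, task FROM work_queue WHERE status = 'pending'").fetchall()
    for row_id, task in rows:
        if task in SCRIPTS:
            subprocess.run(SCRIPTS[task], check=True)  # kick off the script
            conn.execute(
                "UPDATE work_queue SET status = 'done' WHERE id = ?", (row_id,))
    conn.commit()
    conn.close()

if __name__ == '__main__':
    while True:         # one centralized workflow serving many requests
        poll_and_dispatch()
        time.sleep(60)  # poll every minute
```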

As a result, SAIF has expanded its IT environment exponentially over the last four years with no additional operations staff. The data center can also run lights-out for a few hours each night, with automation scripts set up to handle monitoring and health checks and to route alerts to the appropriate contacts when necessary. No IT ops employees work on Sundays at SAIF at all.

“There’s no end to what we can find to automate,” Wirfs said.

DevOps tools training standardizes IT processes

SAIF’s case illustrates an important facet of DevOps tools training: standardization of a company’s tools and workflows. A move from monoliths to microservices can make an overall system more complex, but individual components become similar, repeatable units that are easier to understand, maintain and troubleshoot.

“The monoliths of the early 2000s were very complicated, but now, people are a lot more pragmatic,” said Nuno Pereira, CTO of iJET International, a risk management company in Annapolis, Md. “DevOps has given us a way to keep component complexity in check.”

DevOps tools training can also curtail the notifications that bombard IT operations pros, through centralized monitoring tools such as Cisco’s AppDynamics and LogicMonitor. These are popular among DevOps shops because they boost the signal-to-noise ratio of highly instrumented and automated environments, and they establish a standardized common ground for collaborative troubleshooting.

“[With] LogicMonitor, [we can] capture data and make it easily viewable so that different disciplines of IT can speak the same language across skill sets,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis.

Four or five years ago, problems in the production infrastructure weren’t positively identified for an average of about 30 minutes per incident, Domeier said. Now, within one to two minutes, DevOps personnel can determine there is a problem, with an average recovery time of 10 to 15 minutes, he estimated.

Standardization has been key to keeping up with ever-bigger web-scale infrastructure at DevOps bellwethers such as Google.

“If every group in a company has a different set of technologies, it is impossible to make organizationwide changes that lift all boats,” said Ben Sigelman, who built Dapper, a distributed tracing utility Google uses to monitor distributed systems. Google maintains one giant source-code repository, for example, which means any improvement immediately benefits the entire Google codebase.

“Lack of standardization is an impediment to DevOps, more than anything else,” Sigelman said.

Google has standardized on open source tools, which offer common platforms that can be used and developed by multiple companies, and this creates another force-multiplier for the industry. Sigelman, now CEO of a stealth startup called LightStep, said DevOps tools training has started to have a similar effect in the mainstream enterprise.

Will AI help?

DevOps tools training can go a long way to help small IT teams manage big workloads, but today’s efficiency improvements have their limits. Already, some tools, such as Splunk Insights, use adaptive machine-learning algorithms to give the human IT pro’s brain an artificial intelligence (AI) boost — a concept known as AIOps.

“The world is not going to get easier,” said Rick Fitz, senior vice president of IT markets for Splunk, based in San Francisco. “People are already overwhelmed with complexity and data. To get through the next five to 10 years, we have to automate the mundane so people can use their brains more effectively.”

Strong enthusiasm for AIOps has spread throughout the industry. Today’s analytics products, such as Splunk, use statistics to predict when a machine will fail or the broader impact of a change to an IT environment. However, AIOps systems may move beyond rules-based systems to improve on those rules or gain insights humans won’t come up with on their own, said Brad Shimmin, analyst with GlobalData PLC, headquartered in London. Groups of companies will share data the way they share open source software development today and enhance the insights AIOps can create, he predicted.

The implications for AIOps are enormous. Network intrusion detection is just one of the many IT disciplines experts predict will change with AIOps over the next decade. AIOps may be able to spot attack signatures or malicious user behavior that humans and today’s systems cannot detect — for example, when someone hijacks and maliciously uses an end-user account, even if the end user’s identifier and credentials remain the same.
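As a toy illustration of the idea, a system could flag an account whose activity suddenly deviates from its own history, even though the credentials are valid. The sketch below is a deliberately simplified statistical stand-in for what a real AIOps product would do with richer features and learned models.

```python
# Toy sketch of behavioral anomaly detection: flag activity that deviates
# sharply from an account's own history, even when credentials are valid.
# A real AIOps product would use far richer features and learned models.
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag `observed` if it lies more than `threshold` standard
    deviations from this account's historical mean (a z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# e.g., megabytes downloaded per day by one account
history = [12, 9, 15, 11, 14, 10, 13]
print(is_anomalous(history, 14))   # False: in line with past behavior
print(is_anomalous(history, 450))  # True: possible hijacked account
```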

But while AIOps has promise, those who’ve seen its early experimental implementations are skeptical that AIOps can move beyond the need for human training and supervision.

“AI needs a human being to tell it what matters to the business,” LightStep’s Sigelman said, based on what he saw while working at Google. “AI is a fashionable term, but where it’s most successful is when it’s used to sift through a large stream of data with user-defined filtering.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

Online training for Azure Data Lake

We are pleased to announce the availability of new, free online training for Azure Data Lake. We’ve designed this training to get developers ramped up fast. It covers the topics a developer needs to know to become productive with big data, including how to address the challenges of authoring, debugging, and optimizing at scale.

Explore the training

Click on the link below to start!

Microsoft Virtual Academy: Introduction to Azure Data Lake

Looking for more?

This training is one of many resources available to Azure Data Lake developers.

Course outline

1 | Introduction to Azure Data Lake

Get an overview of the entire Azure Data Lake set of services including HDI, ADL Store, and ADL Analytics.

2 | Introduction to Azure Data Lake Tools for Visual Studio

Since ADL developers of all skill levels use Azure Data Lake Tools for Visual Studio, review the basic set of capabilities offered in Visual Studio.

3 | U-SQL Programming

Explore the fundamentals of the U-SQL language, and learn to perform the most common U-SQL data transformations.

4 | Introduction to Azure Data Lake U-SQL Batch Job

Find out what’s happening behind the scenes, when running a batch U-SQL script in Azure.
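To give a concrete picture of what submitting a batch U-SQL script from code can look like, here is a minimal, hypothetical sketch using the azure-mgmt-datalake-analytics Python package; the account name and file paths are placeholders, and SDK details may differ by version.

```python
# Minimal sketch: submit a small U-SQL script as a batch job to an Azure
# Data Lake Analytics account. Account name and paths are placeholders;
# assumes the azure-mgmt-datalake-analytics Python package.
import uuid
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.datalake.analytics.job import DataLakeAnalyticsJobManagementClient
from azure.mgmt.datalake.analytics.job.models import (
    JobInformation, USqlJobProperties)

creds = ServicePrincipalCredentials(client_id='...', secret='...', tenant='...')
job_client = DataLakeAnalyticsJobManagementClient(
    creds, 'azuredatalakeanalytics.net')

# A tiny U-SQL transformation: read a TSV, filter rows, write a CSV.
script = r"""
@searchlog =
    EXTRACT UserId int, Query string
    FROM "/input/searchlog.tsv"
    USING Extractors.Tsv();

@filtered =
    SELECT UserId, Query
    FROM @searchlog
    WHERE Query != "";

OUTPUT @filtered
TO "/output/filtered.csv"
USING Outputters.Csv();
"""

job_id = str(uuid.uuid4())
job_client.job.create(
    'myadlaaccount', job_id,
    JobInformation(name='filter-searchlog', type='USql',
                   properties=USqlJobProperties(script=script)))
print(job_client.job.get('myadlaaccount', job_id).state)
```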

5 | Advanced U-SQL

Learn about the more sophisticated features of the U-SQL language to calculate more useful statistics and learn how to extend U-SQL to meet many diverse needs.

6 | Debugging U-SQL Job Failures

Since, at some point, all developers encounter a failed job, get familiar with the causes of failure and how they manifest themselves.

7 | Introduction to Performance and Optimization

Review the basic concepts that drive performance in a batch U-SQL job, and examine strategies available to address those issues when they come up, along with the tools that are available to help.

8 | ADLS Access Control Model

Explore how Azure Data Lake Store uses the POSIX access control model, which is very different from what users coming from a Windows background may expect.

9 | Azure Data Lake Outro and Resources

Learn about course resources.

Now available: Windows Developer training course on Desktop Bridge

We are happy to announce that the Microsoft Virtual Academy training course, Developer’s Guide to the Desktop Bridge, is now available on demand.

This video course, delivered by the Desktop Bridge Program Management team, aims to help developers understand the concepts and benefits of the Desktop Bridge tooling in the Windows 10 Universal Windows Platform. Watch the video and explore the accompanying sample code to start bringing your desktop apps to the Windows Store and to take advantage of new Windows 10 features of the Universal Windows Platform in your WPF, WinForms, MFC or other Win32 apps.

Do you want to distribute your desktop application in the Windows Store? We cover that in the course. Do you want to take advantage of modern Windows 10 app-to-app features in your existing desktop application? We cover that in the course. Do you want to light up Windows 10 features and still distribute your app to Windows 7 users? We cover that, too. Modernizing your desktop app gradually with UWP components? We cover all of these and more in the course.

There are eight modules:

  1. Intro to the Desktop Bridge
  2. Desktop App Converter
  3. Debugging and Testing Your Converted Apps
  4. Distributing Your Converted Apps
  5. Enhancing Desktop Applications with UWP Features
  6. Extending and Modernizing with UWP components
  7. What’s Next for the Desktop Bridge
  8. Post-Course Survey

For more information on the Desktop Bridge, please visit the Windows Dev Center.

Ready to submit your app to the Windows Store? Let us know!

Feedback or feature suggestions? Submit them on User Voice.