Small vendors that stand out in network automation

Incumbent vendors typically lag in delivering cutting-edge features in network management tools, so enterprises looking for advanced analytics and network automation are more likely to find them in small vendors’ products.

More advanced tools are critical to enterprises switching to software-based network management in the data center from a traditional hardware-centric model. Driving the shift are initiatives to move workloads to the cloud and digitize more internal and external operations.

In a study released this month, almost half of the 350 IT professionals surveyed by Enterprise Management Associates said they wanted advanced analytics for anomaly detection and traffic optimization.

Small vendors are addressing the demand by incorporating machine learning in network monitoring tools that search for potential problems. Examples of those vendors include Kentik and Moogsoft.

Besides more comprehensive analytics, enterprises want software that automatically configures, provisions and tests network devices. Those network automation features are vital to improving efficiency and reducing human error and operating expenses.

Gartner recently named three small vendors at the forefront of network automation: BeyondEdge, Intentionet and NetYCE.

Machine learning in network management

Moogsoft is using machine learning to reduce the number of events its network monitoring software flags to engineers. Moogsoft does that by identifying and then hiding multiple activities related to the same problem.
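
Moogsoft hasn’t published the details of its approach, but the core idea of collapsing related events into one incident can be sketched briefly. In this illustrative Python sketch, the device and check fields and the five-minute window are assumptions, not Moogsoft’s actual model:

```python
from collections import defaultdict

# Illustrative only: group raw monitoring events that share a likely root
# cause (same device and check within a short window) into one incident,
# so engineers see a single alert instead of a flood of duplicates.
WINDOW_SECONDS = 300

def correlate(events):
    """events: iterable of dicts with 'device', 'check' and 'ts' (epoch seconds)."""
    incidents = defaultdict(list)
    for event in sorted(events, key=lambda e: e["ts"]):
        # Bucket events by device/check and a coarse time window.
        key = (event["device"], event["check"], event["ts"] // WINDOW_SECONDS)
        incidents[key].append(event)
    # Surface one incident per bucket, noting how many raw events it hides.
    return [
        {"device": d, "check": c, "first_seen": evts[0]["ts"], "suppressed": len(evts) - 1}
        for (d, c, _), evts in incidents.items()
    ]

alerts = correlate([
    {"device": "sw1", "check": "link_down", "ts": 1000},
    {"device": "sw1", "check": "link_down", "ts": 1030},
    {"device": "sw1", "check": "link_down", "ts": 1090},
])
print(alerts)  # one incident hiding two duplicate events
```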

“It really helps streamline” network operations, said Terry Slattery, a network consultant at IT adviser NetCraftsmen.

Kentik, on the other hand, uses machine learning to correlate network traffic flow data generated by switches and routers that support the NetFlow protocol, Slattery said. The process can identify sources of malware or other potential security threats.

Moogsoft and Kentik use machine learning to improve specific features in their products. Vendors have yet to deploy it in broader network operations, which would likely require significant changes in network infrastructure.

Today, companies prefer to work on provisioning, monitoring and making hardware changes on a large scale. After that, they might start adding “smarts” to the network, said Jason Edelman, founder and CTO of consultancy Network to Code.

Gartner also named Network to Code as a small vendor that enterprises should consider. The consultancy’s client base includes 30 of the Fortune 500. The company specializes in the use of open source software for managing networks with a variety of vendor devices.

Gartner picks for automation

Among Gartner’s other small vendors, BeyondEdge was the only one focused on the campus network, where it competes with behemoths like Cisco and Hewlett Packard Enterprise’s Aruba.

BeyondEdge has developed overlay software for Ethernet switching fabrics and passive optical networks. The software lets enterprises create configurations based on business and application policies and then applies them at devices’ access points. BeyondEdge sells its vendor-agnostic technology through consumption-based pricing.

BeyondEdge is best suited for organizations that need to provision many ports for different classes of users, Gartner said. Those types of organizations are found in commercial real estate, hospitality, higher education and healthcare.

Intentionet and NetYCE provide tools for data center networks. The former has developed open source-based software that mathematically validates network configurations before deploying them. “This is a new capability in the market and can simultaneously enhance uptime and agility,” Gartner said.
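
Gartner’s description doesn’t say how that validation works internally, and real validators reason about network behavior rather than raw text. Purely as a sketch of the pre-deployment idea, the following checks that intended policy lines appear in a candidate config before it is pushed; the policy lines and config are made up:

```python
# A toy pre-deployment check in the spirit of config validation: verify that
# every intended policy line appears in a candidate config before pushing it.
# Real validators reason about behavior (reachability, ACL semantics), not
# raw text, so treat this strictly as a sketch; the policy lines are made up.
INTENDED_POLICIES = [
    "ip access-list extended BLOCK-TELNET",
    " deny tcp any any eq 23",
    " permit ip any any",
]

def missing_policies(candidate_config: str) -> list:
    """Return intended lines that are absent from the candidate config."""
    present = {line.rstrip() for line in candidate_config.splitlines()}
    return [p for p in INTENDED_POLICIES if p not in present]

candidate = """\
hostname router1
ip access-list extended BLOCK-TELNET
 permit ip any any
"""
gaps = missing_policies(candidate)
if gaps:
    print("Refusing to deploy; missing policies:", gaps)  # catches the deny rule
```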

NetYCE stands out for developing a straightforward UI that simplifies network configuration change management, network automation and orchestration capabilities, Gartner said.

“It provides a simple way for networking personnel — who may be novices in automation — to get up to speed quickly,” the analyst firm said.

NetYCE’s technology supports hardware from the largest established vendors. The company claims it can provide adapters for unsupported gear within two weeks, Gartner said.


Surge in digital health tools to continue post-pandemic

Health systems have rapidly rolled out digital health tools to meet the needs of both patients and providers during the COVID-19 crisis.

Interest in digital health tools will likely continue long after the pandemic ends, according to healthcare experts. The broad term refers to using technology to deliver healthcare services to patients digitally and can cover wearable devices, mobile apps and telehealth programs.

Already, healthcare systems are increasing the number of telehealth services they provide and embracing symptom checkers and tools that let practitioners keep tabs on patients remotely. The crisis has also prompted healthcare CIOs to look to contact tracing tools to manage the spread of the virus.

During a recent HIMSS webinar, four healthcare leaders discussed how the pandemic has accelerated the adoption of digital health tools and why that interest will continue after the pandemic ends.

Digital health tools help with response

Digital health tools such as telehealth programs have become a crucial element of the pandemic, especially as governments and health systems began mandating work-from-home and shelter-in-place orders, according to Bernardo Mariano Jr., CIO and director of digital health innovation at the World Health Organization in Switzerland.


But, Mariano said, more work needs to be done, including the development of an international health data exchange standard so countries can do a better job of supporting each other during a crisis such as COVID-19. For example, Mariano said, while Italy was suffering from an overload of patients at hospitals, neighboring countries may have been able to help treat patients remotely through telemedicine. The lack of an international “principle or regulation” hindered that capability, he said.

As the pandemic stretches on, Mariano said the proliferation of contact tracing technologies is also growing, with countries seeking to use the technology as part of their reopening strategies. Mariano said the COVID-19 crisis could accelerate the adoption of a global healthcare surveillance system like contact tracing that will enable countries to quickly analyze, assess and respond to outbreaks.

“The power of digital solutions to minimize the impact of COVID has never been so clear,” he said.

‘Digital front door technologies’ are key

Pravene Nath, global head of digital health strategy at Roche, a biotech company with an office in San Francisco, also cited the explosive growth of telehealth as an indicator of the impact COVID-19 has had on healthcare. While they are instrumental now, Nath also believes digital health tools will last beyond the pandemic.


Nath said the crisis is enabling healthcare systems to readily make a case for “digital front door technologies,” or tools that guide patients to the right place before stepping into a healthcare facility. A digital front door can include tools such as acute care telehealth, chatbot assessments, virtual visits, home monitoring and self-monitoring tools.

“I think the disruption here is in the access and utilization of traditional care models that’s heightened the value of digitally-driven chronic disease care management, such as platforms like MySugr for diabetes management,” he said. MySugr is an app-based digital diabetes management platform that integrates with glucose-monitoring devices.

“We think the adoption of these kinds of technologies will accelerate now as a result of the total disruption to physical access to traditional healthcare environments,” he said.

Nath said after the pandemic has passed, healthcare systems that quickly rolled out digital health technologies will need time to assess how to be “good stewards” of that technology and patient data moving forward.

Mobile app use grows

“Digital technologies play an important role in managing the crisis,” said Päivi Sillanaukee, director general of the Finland Ministry of Social Affairs and Health.


Digital health has played a role in keeping patients informed via mobile apps and other online methods. Sillanaukee said tools that provide reliable, up-to-date information to patients have reduced time-consuming calls to healthcare workers.

Finland has also begun looking into contact tracing tools, although Sillanaukee said she has seen an acceleration in discussions about patient data safety along with the contact tracing discussion.

Pandemic bypasses change management

While the benefits of digital health were evident before the crisis, such as remotely connecting patients to doctors, Benedict Tan, group chief digital strategy officer at Singapore Health Services, said the challenge has long been change management and getting buy-in from providers for digital health tools.


But COVID-19 and social distancing have changed that, suddenly presenting a need for tools such as telehealth, analytics and remote monitoring to help manage patients during the crisis, and they are showing the value of such tools, he said.

“What COVID-19 has done is accelerate, or give motivation, for all of us to work together to leverage and see the benefits of what digital health can bring to society,” he said.


Kite intros code completion for JavaScript developers

Kite, a software development tools startup specializing in AI and machine learning, has added code-completion capabilities for JavaScript developers.

San Francisco-based Kite’s AI-powered code completion technology initially targeted Python developers. JavaScript is arguably the most popular programming language, and Kite’s move should be a welcome addition for JavaScript developers, as the technology can predict the next string of code they will write and complete it automatically.

“The use of AI is definitely making low-code even lower-code for sure, and no-code even more possible,” said Ronald Schmelzer, an analyst at Cognilytica in Ellicott City, Md. “AI systems are really good at determining patterns, so you can think of them as really advanced wizard or templating systems that can try to determine what you’re trying to do and suggest code or blocks or elements to complete your code.”

Kite’s Line-of-Code Completions feature uses advanced machine learning models to cut some of the mundane tasks that programmers perform to build applications, such as setting up build processes, searching for code snippets on Google, cutting and pasting boilerplate code from Stack Overflow, and repeatedly solving the same error messages, said Adam Smith, founder and CEO of Kite, in an interview.

Kite’s JavaScript code completions are currently available in private beta and can suggest code a developer has previously used or tap into patterns found in open source code files, Smith said. The deep learning models used to inform the Kite knowledge base have been trained on more than 22 million open source JavaScript files, he said.

Kite aims to advance the code-completion art

Unlike other code completion capabilities, Kite features layers of filtering such that only the most relevant completion results are returned, rather than a long list of completions ranked by probability, Smith said. Moreover, Kite’s completions work in .js, .jsx and .vue files and the system processes code locally on the user’s computer, rather than sending code to a cloud server for processing.
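
Kite hasn’t documented the exact filtering logic, but the contrast with a long probability-ranked list can be illustrated with a toy filter. The threshold and candidate probabilities below are invented for the example:

```python
# Illustrative only: instead of returning every candidate completion ranked
# by probability, keep just the candidates that clear a relevance bar.
def filter_completions(candidates, min_prob=0.15, max_results=3):
    """candidates: list of (completion_text, probability) pairs."""
    relevant = [(text, p) for text, p in candidates if p >= min_prob]
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return [text for text, _ in relevant[:max_results]]

suggestions = filter_completions([
    ("response.json()", 0.62),
    ("response.text", 0.21),
    ("response.status_code", 0.09),   # filtered out: below threshold
    ("response.headers", 0.04),       # filtered out
])
print(suggestions)  # ['response.json()', 'response.text']
```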


Kite’s engineers took their time training the tool on the ever-growing JavaScript ecosystem and its frameworks, APIs and design patterns, Smith said. Thus, Kite works with popular JavaScript libraries and frameworks like React, Vue, Angular and Node.js. The system analyzes open source projects on GitHub and applies that data to machine learning models trained to predict the next word or words of code as programmers write in real time. This smarter programming environment makes it possible for developers to focus on what’s unique about their application.

There are other tools that offer code completion capabilities, such as the IntelliCode feature in the Microsoft Visual Studio IDE. IntelliCode provides more primitive code completion than Kite, Smith claimed. IntelliCode is the next generation of Microsoft’s older IntelliSense code completion technology. IntelliCode will predict the next word of code based on basic models, while Kite’s tool uses richer, more advanced deep learning models trained to predict further ahead to whole lines, and even multiple lines of code, Smith said.


Moreover, Kite focuses on code completion, and not code correction, because programming code has to be exactly correct. For example, if you send someone a text with autocorrect errors, the tone of the message may still come across properly. But if you mistype a single letter of code, a program will not run.

AI-powered code completion “is still definitely a work in progress and much remains to be done, but OutSystems and others are also looking at AI-enabling their suites to deliver faster and more complete solutions in the low-code space,” Schmelzer said.

In addition to the new JavaScript code completion technology, Kite also introduced Kite Pro, the company’s first paid offering of code completions for Python powered by deep learning. Kite Pro adds features such as in-editor documentation through the Kite Copilot, which covers more than 800 Python libraries.

Kite works as a plugin for all of the most popular code editors, including Atom, JetBrains’ PyCharm/IntelliJ/WebStorm, Spyder, Sublime Text 3, VS Code and Vim. The product is available on Mac, Windows and Linux.

The basic version of Kite is free; however, Kite Pro costs $16.60 per user, per month. Custom team pricing also is available for teams that contact the company directly, Smith said.


Talkdesk adds virtual agents, rebrands CCaaS suite as CX Cloud

Users of Talkdesk’s contact-center-as-a-service suite have new tools to enhance customer experience, including virtual agents, remote agent support, deeper hooks into marketing, integrations with CRM cloud platforms and connections to enterprise collaboration tools such as Slack and Microsoft Teams.

The company released 20 new features in the weeks leading up to its recent Opentalk 2020 virtual user conference, and renamed its CCaaS offering Talkdesk CX Cloud. While some of the features, such as workforce management and business continuity, either were up and running or long-planned, the COVID-19 pandemic gave rise to new ones such as CXTalent, which uses AI to pair job seekers with organizations looking to fill remote contact center roles.

For contact centers, the most significant of the new Talkdesk features revolve around the company’s foray into workforce management, said Sheila McGee-Smith, president and principal analyst at McGee-Smith Analytics. That means Talkdesk is taking on new, bigger competitors such as NICE InContact, Verint and Genesys.

“They’re building an entire workforce management suite, which includes [agent] performance management and quality monitoring,” McGee-Smith said. “It’s been on their website, but they’ve never publicly taken that step to say ‘Yeah, we’re doing this.'”

Virtual agents, collaboration connectors in Talkdesk CX Cloud

Connecting to enterprise collaboration tools helps agents find answers to customer questions more quickly, said Charanya Kannan, chief product officer at Talkdesk. Customer service cloud vendors, including ServiceNow, have introduced features to connect agents to their company’s in-house experts who help solve account problems or technical issues.

Charanya Kannan, Talkdesk chief product officer, introduces CX Cloud at the company’s Opentalk 2020 virtual user conference.

“A lot of times when customers ask questions, agents will have to communicate with the rest of the organization to get answers,” Kannan said. “At companies where some of these questions are very deep, you need to bring in your technical account manager or different people internally. This provides a mechanism to collaborate, making customer experience not just the job of the contact center employee.”

Many of Talkdesk’s customers, she added, run contact centers with 1,000 or more agents. Finding in-house experts via popular collaboration tools can be an efficient way to navigate large, multinational organizations that are in the process of moving whole IT operations to the cloud.

Other new Talkdesk CX Cloud features include connectors to CRM systems, so salespeople can see more detail about their customers’ interactions with customer service, and vice versa. Currently, Talkdesk customers connect to about 60 different CRMs, Kannan said. Salesforce is by far the most popular, followed by ServiceNow and Zendesk. About 70% of Talkdesk customers use one of those three CRMs.

“Salesforce and Talkdesk share a lot of similarities,” Kannan said, adding that they fit together well because companies that use Salesforce are already familiar with and comfortable working on an extensible multi-tenant cloud SaaS platform, which Talkdesk also is.

Salesforce added voice capabilities for contact centers to its Service Cloud offering late last year, making it a potential competitor for Talkdesk.


Las Vegas shores up SecOps with multi-factor authentication

The city of Las Vegas used AI-driven infrastructure security tools to stop an attacker in January before sensitive IT systems were accessed, but the city’s leadership bets future attempts won’t even get that far.

“Between CrowdStrike [endpoint security] and Darktrace [threat detection], both tools did exactly what they were supposed to do,” said Michael Sherwood, chief innovation officer for Las Vegas. “We had [a user] account compromised, and that allowed someone to gain short-term access to our systems.”

The city’s IT staff thwarted that attacker almost immediately in the early morning of Jan. 7. IT pros took measures to keep the attacker from accessing any of the city’s data once security monitoring tools alerted them to the intrusion.

The city has also used Okta access management tools for the last two years to consolidate user identity and authentication for its internal employees and automate access to applications through a self-service portal. Next, it will reinforce that process with multi-factor authentication using the same set of tools, in the hope that future cyberattacks will be stopped well outside its IT infrastructure.

Multi-factor security will couple a physical device — such as an employee badge or a USB key issued by the city — with usernames and passwords. This will reduce the likelihood that such an account compromise will happen again, Sherwood said. Having access management and user-level SecOps centralized within Okta has been key for the city to expand its security measures quickly based on what it learned from this breach. By mid-February, its IT team was able to test different types of multi-factor authentication systems and planned to roll one out within 60 days of the security incident.


“With dual-factor authentication, you can’t just have a user ID and password — something you know,” Sherwood said. “A bad actor might know a user ID and password, but now they have to [physically] have something as well.”
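
Okta implements this for the city, so the following is only a sketch of the principle using time-based one-time passwords via the third-party pyotp library: the server verifies a code that only the holder of the provisioned device can produce.

```python
import pyotp  # third-party: pip install pyotp

# A minimal sketch of checking a second factor, assuming the first factor
# (user ID and password) was already verified. The one-time code comes from
# something the user physically has. This illustrates the principle only;
# Okta's MFA handles this for the city.
secret = pyotp.random_base32()   # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

# In production the user reads this code off their device; here we generate
# it locally just to demonstrate verification.
code_from_device = totp.now()

if totp.verify(code_from_device):
    print("Second factor accepted: user knows the password AND has the device.")
else:
    print("Second factor rejected.")
```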

SecOps automation a shrewd gamble for Las Vegas

Las Vegas initially rolled out Okta in 2018 to improve the efficiency of its IT help desk. Sherwood estimated the access management system cut down on help desk calls relating to forgotten passwords and password resets by 25%. The help desk also no longer had to manually install new applications for users because of an internal web portal connected to Okta that automatically manages authorization and permissions for self-service downloads. That freed up help desk employees for more strategic SecOps work, which now includes the multi-factor authentication rollout.

Another SecOps update slated for this year will add city employees’ mobile devices to the Okta identity management system and introduce an Okta single sign-on service for Las Vegas citizens who use the city’s web portal.

Residents will get one login for all services under this plan, Sherwood said. “If they get a parking citation and they’re used to paying their sewer bill, it’s the same login, and they can pay them both through a shopping cart.”


Okta replaced a hodgepodge of different access management systems the city used previously, usually built into individual IT systems. When Las Vegas evaluated centralized access management tools two years ago, Okta was the only vendor in the group that was completely cloud-hosted, Sherwood said. This was a selling point for the city, since it minimized the operational overhead to set up and run the system.

Okta’s service competes with the likes of Microsoft Active Directory, OneLogin and Auth0. Las Vegas also uses Active Directory for access management in its back-end IT infrastructure, while Okta serves the customer and employee side of the organization.

“There is still separation between certain things, even though one product may well be capable of [handling] both,” he said.

Ultimately, the city would like to institute a centralized online payment system for citizens to go along with website single sign-on, and Sherwood said he’d like to see Okta offer that feature and electronic signatures as well.

“They’d have lot of opportunity there,” he said. “We can do payments and electronic signatures with different providers, but it would be great having that more integrated into the authentication process.”

An Okta representative said the company doesn’t have plans to support payment credentials at this time but that the company welcomes customer feedback.


Biometrics firm fights monitoring overload with log analytics

Log analytics tools with machine learning capabilities have helped one biometrics startup keep pace with increasingly complex application monitoring as it embraces continuous deployment and microservices.

BioCatch sought a new log analytics tool in late 2017. At the time, the Tel Aviv, Israel, firm employed a handful of workers and had just refactored a monolithic Windows application into microservices written in Python. The refactored app, which captures biometric data on how end users interact with web and mobile interfaces for fraud detection, required careful monitoring to ensure it still worked properly. Almost immediately after it completed the refactoring, BioCatch found the process had tripled the number of logs it shipped to a self-managed Elasticsearch repository.

“In the beginning, we had almost nothing,” said Tamir Amram, operations group lead for BioCatch, of the company’s early logging habits. “And, then, we started [having to ship] everything.”

The team found it could no longer manage its own Elasticsearch back end as that log data grew. Its IT infrastructure also mushroomed into 10 Kubernetes clusters distributed globally on Microsoft Azure. Each cluster hosts multiple sets of 20 microservices that provide multi-tenant security for each of its customers.

At that point, BioCatch had a bigger problem. It had to not only collect, but also analyze all its log data to determine the root cause of application issues. This became too complex to do manually. BioCatch turned to log analytics vendor Coralogix as a potential answer to the problem.

Log analytics tools flourish under microservices

Coralogix, founded in 2015, initially built its log management system on top of a hosted Elasticsearch service but couldn’t generate enough interest from customers.

“It did not go well,” Coralogix CEO Ariel Assaraf recalled of those early years for the business. “It was early in log analytics’ and log management’s appeal to the mainstream, and customers already had ‘good enough’ solutions.”

While the company still hosts Elasticsearch for its customers, based on the Amazon Open Distro for Elasticsearch, it refocused on log analytics, developed machine learning algorithms and monitoring dashboards, and relaunched in 2017.

That year coincided with the emergence of containers and microservices in enterprise IT shops as they sought to refactor monolithic applications with new design patterns. The timing proved fortuitous; since Coralogix’s relaunch in 2017, it has gained more than 1,200 paying customers, according to Assaraf, at an average deal size of $50,000 a year.

Coralogix isn’t alone among DevOps monitoring vendors reaping the spoils of demand for microservices monitoring tools — not just in log analytics, but AI- and machine learning-driven infrastructure management, or AIOps, as well. These include application performance management (APM) vendors, such as New Relic, Datadog, AppDynamics and Dynatrace, along with Coralogix log analytics competitors Elastic Inc. and Splunk.


In fact, analyst firm 451 Research predicted that the market for Kubernetes monitoring tools will dwarf the market for Kubernetes management products by 2022 as IT pros move from the initial phases of deploying microservices into “day two” management problems. Even more recently, log analytics tools have begun to play an increasing role in IT security operations and DevSecOps.

The newly relaunched Coralogix caught the eye of BioCatch in part because of its partnership with the firm’s preferred cloud vendor, Microsoft Azure. It was also easy to set up and redirect logs from the firm’s existing Elasticsearch instance, and the Coralogix-managed Elasticsearch service eliminated log management overhead for the BioCatch team.

“We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs,” Amram said. “Now, more than half of the company works with Coralogix, and more than 80% of those who work with it use it on a daily basis.”

Log analytics correlate app changes to errors

The BioCatch DevOps team adds tags to each application update that direct log data into Coralogix. Then, the software monitors application releases as they’re rolled out in a canary model for multiple tiers of customers. BioCatch rolls out its first application updates to what it calls “ring zero,” a group of early adopters; next, to “ring one;” and so on, according to each customer group’s appetite for risk. All those changes to multiple tiers and groups of microservices result in an average of 1.5 TB of logs shipped per day.

The version tags fed through the CI/CD pipeline to Coralogix enable the tool to identify issues and correlate them with application changes made by BioCatch developers. It also identifies anomalous patterns in infrastructure behavior post-release, which can catch problems that don’t appear immediately.
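
BioCatch hasn’t shared its pipeline code; as a rough sketch of the tagging idea, the snippet below stamps every structured log record with a release tag, here assumed to be injected by the CI/CD pipeline through an environment variable. The service name is hypothetical.

```python
import json, logging, os

# A minimal sketch of version-tagged structured logging, assuming the CI/CD
# pipeline injects the release tag via an environment variable. Every record
# carries the tag, so a log backend can correlate an error spike with the
# exact release that introduced it.
RELEASE = os.environ.get("RELEASE_TAG", "unknown")

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": record.name,
            "release": RELEASE,           # the correlation key
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("scoring-service")  # hypothetical service name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.error("queue write latency above threshold")
```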

Coralogix log analytics uses version tags to correlate application issues with specific developer changes.

“Every so often, an issue will appear a day later because we usually release at off-peak times,” BioCatch’s Amram said. “For example, it can say, ‘sending items to this queue is 20 times slower than usual,’ which shows the developer why the queue is filling up too quickly and saturating the system.”

BioCatch uses Coralogix alongside APM tools from Datadog that analyze application telemetry and metrics. Often, alerts in Datadog prompt BioCatch IT ops pros to consult Coralogix log analytics dashboards. Datadog also began offering log analytics in 2018 but didn’t include this feature when BioCatch first began talks with Coralogix.

Coralogix also maintains its place at BioCatch because its interfaces are easy to work with for all members of the IT team, Amram said. This has grown to include not only developers and IT ops, but solutions engineers who use the tool to demonstrate to prospective customers how the firm does troubleshooting to maintain its service-level agreements.

“We don’t have to search in Kibana [Elasticsearch’s visualization layer] and say, ‘give me all the errors,'” Amram said. “Coralogix recognizes patterns, and if the pattern breaks, we get an alert and can immediately react.”
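
Coralogix’s models are proprietary, but the flavor of a pattern-break alert can be approximated with a simple baseline comparison. This sketch flags a metric whose latest value deviates sharply from its recent history; the z-score threshold and sample numbers are illustrative.

```python
# Illustrative only: flag a metric that breaks its normal pattern by
# comparing the latest value with a rolling baseline, roughly the kind of
# check that turns "this queue is 20x slower than usual" into an alert.
from statistics import mean, stdev

def pattern_broken(history, latest, z_threshold=3.0):
    """history: recent per-interval values; latest: the newest value."""
    if len(history) < 10:
        return False               # not enough baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

writes_per_sec = [980, 1010, 995, 1002, 990, 1005, 998, 1001, 993, 1007]
print(pattern_broken(writes_per_sec, latest=48))   # True: far below the baseline
```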


7 PowerShell courses to help hone skills for all levels of expertise

PowerShell can be one of the most effective tools administrators have for managing Windows systems. But it can be difficult to master, especially when time is limited. An online PowerShell course can expedite this process by prioritizing the most important topics and presenting them in logical order.

Admins have plenty of PowerShell courses from which to choose, offered by well-established vendors. But with so many courses available, it isn’t always clear which ones will be the most beneficial. To help make the course selection process easier, here we offer a sampling of popular PowerShell courses that cater to varying levels of experience.

Windows currently ships with PowerShell 5.1, but PowerShell Core 6 is available for download, and PowerShell 7 is in preview. PowerShell Core is a cross-platform version of PowerShell that runs on multiple OS platforms. It isn’t an upgrade to Windows PowerShell, but a separate application that runs on the same system.

Some of the PowerShell courses listed here, as well as other online classes, specify the PowerShell version on which the course is based. But not all classes offer this information, and some courses provide only a range, such as PowerShell 4 or later. So, before signing up for an online course, be sure to verify the PowerShell version.

Learning Windows PowerShell

This popular PowerShell tutorial from Udemy is designed for beginners: systems admins who have no prior PowerShell experience but want to use PowerShell to manage Windows desktops and servers. The course is based on PowerShell 5, but this shouldn’t be an issue when learning basic concepts, which are the tutorial’s primary focus.


The course provides background information about PowerShell and explains how to set up the PowerShell environment, including how to configure the console and work with profiles. The course introduces cmdlets, shows how they’re related to .NET objects and classes, and explains how to build a pipeline using cmdlets and other language elements. With this information, systems admins will have the basics they need to move on to the next topic: PowerShell scripts.

The tutorial on scripting is nearly as extensive as the section on cmdlets. The course examines the details of script elements, such as variables, constants, comparison operators, if statements, looping structures and regular expressions. This is followed by details on PowerShell providers and how to work with files and folders, and then a discussion of administration basics. This course can help provide participants with a solid foundation in PowerShell so they’re ready to take on more advanced topics.

Introduction to Windows PowerShell 5.1

This Udemy tutorial is based on PowerShell 5.1, so it’s more current than the previous course. The training is geared toward both beginner PowerShell users and more experienced admins who want to hone their PowerShell skills. The course covers a wide range of topics, from understanding PowerShell syntax to managing Active Directory (AD). Participants who sign up for this course should already know how to run PowerShell, but they don’t need to be advanced users.

The course covers the basics of how to use both the PowerShell console and the Integrated Scripting Environment (ISE). It explains what steps to take to get help and find commands. This is followed by an in-depth look at the PowerShell command syntax. The material also covers objects and their properties and methods, as well as an explanation of how to build a PowerShell pipeline.

Participants can move on to the section on scripting, which starts with a discussion on arrays and variables. Users then learn how to build looping structures and conditional statements, and how to use PowerShell functions. This course demonstrates how to use PowerShell to work with AD, covering such tasks as installing and configuring server roles.

PowerShell version 5.1 and 6: Step-by-Step

This tutorial, which is one of Udemy’s highest rated PowerShell courses, is geared toward admins who want to learn how to use PowerShell to perform management tasks. The course is broad in scope and covers both PowerShell 5.1 and PowerShell Core 6. Users who sign up for this course should have a basic understanding of the Windows OS — both desktop and server versions.

Because the course covers so many topics, it’s longer than the previous two training sessions and goes into more detail. It explains the differences between PowerShell and the Windows Command Prompt, how to determine the PowerShell version and how to work with aliases. The course also examines the steps necessary to run unsupported commands and create PowerShell transcripts.

This PowerShell tutorial also examines more advanced topics, such as working with object members, creating hash tables and managing execution policy levels. This is followed by a detailed discussion about the Common Information Model (CIM) and how it can manage hard drives and work with BIOS. In addition, participants will learn how to create profile scripts, functions and modules, as well as how to use script parameters and to pause script execution. Because the course is so comprehensive, admins should come away with a solid understanding of how to use PowerShell to script their daily management tasks.

Udemy course pricing

Udemy distinguishes between personal and business users. For personal users, Udemy charges by the course, with prices for PowerShell courses ranging between $25 and $200. Udemy also offers personal users a 30-day, money-back guarantee.

Udemy also offers two business plans that provide unlimited access to its courses. The Team plan supports between five and 20 users and costs $240 per user, per year. It also comes with a 14-day trial. Contact Udemy for details regarding its Enterprise plan, which supports 21 or more users. Udemy also offers courses to help users prepare for IT certifications, supporting such programs as Cisco CCNA, Oracle Certification and Microsoft Certification.

Windows PowerShell: Essentials

Pluralsight offers a variety of PowerShell courses, as well as learning paths. A path is a series of related courses that provide users with a strategy for learning a specific technology. This path includes six courses ranging from beginner to advanced user. Participants should come away with a strong foundation in how to create PowerShell scripts that automate administrative processes. Before embarking on this path, however, they should have a basic understanding of Windows networking and troubleshooting.

The beginning courses on this path provide users with the information they need to start working with PowerShell, even if they’re first-timers. Users will learn how to use cmdlets, work with objects and get help when they need it. These courses also introduce concepts such as aliases, providers and mapping network drives. The intermediate tutorials build on the beginning courses by explaining how to work with objects and the PowerShell pipeline, and how to format output. The intermediate courses also focus on using PowerShell in a networked environment, covering such topics as CIM and Windows Management Instrumentation.

The advanced courses build on the beginning and intermediate tutorials by focusing on automation scripts. Admins will learn how to use PowerShell scripting to automate their routine processes and tasks. They’ll also learn how to troubleshoot problems in their scripts if PowerShell exhibits unusual behavior. The path approach might not be for everyone, but for those ready to invest their time in a comprehensive program, this path could prove a valuable resource.

Practical Desired State Configuration

Those not suited to a learning path can choose from a variety of other Pluralsight courses that address specific technologies. This highly rated course caters to advanced users and provides real-world examples of how to use PowerShell to write Desired State Configurations (DSCs). Those interested in the course should be familiar with PowerShell and DSC principles.

DSC refers to a new way of managing Windows Server that shifts the focus from point-and-click GUIs to infrastructure as code. To achieve this, admins can use PowerShell to build DSCs. This process is the focus of this course, which covers several advanced topics ranging from writing configurations with custom resources to building dynamic collector configurations.

The tutorial demonstrates how to use custom resources in a configuration and offers an in-depth discussion of securing DSC operations. Participants then learn how to use the DSC model to configure and manage AD, covering such topics as building domains and creating users and groups. The course demonstrates how to set up Windows event forwarding. Although not everyone is looking for such advanced topics, for some users, this course might be just what they need to progress their PowerShell skills.

Pluralsight pricing

Pluralsight doesn’t charge by the course, but rather it offers three personal plans and two business plans. The personal plans start at $299 per year, and the business plans start at $579 per user, per year. All plans include access to the entire course library. In addition, Pluralsight offers a 10-day personal free trial and, like Udemy, courses geared toward IT certification.

PowerShell 5 Essential Training

Of the 13 online PowerShell courses offered by LinkedIn Learning — formerly Lynda.com — this is the most popular. The course targets beginner and intermediate PowerShell users who are Windows systems admins. Although the course is based on PowerShell 5, the basic information is still applicable today, as with other courseware written for this version.

The material covers most of the basics one would expect from a course at this level. It explains how to set up and customize PowerShell, and it introduces admins to cmdlets and their syntax and how to find help. This is followed by installing modules and packages. The course also describes how to use the PowerShell pipeline, covering such topics as working with files and printers, as well as storing data as a webpage.

The course moves on to objects and their properties and methods. Participants can learn how to create scripts that incorporate variables and parameters so they can automate administrative tasks. Participants are also introduced to PowerShell ISE and shown how to use PowerShell remoting to manage multiple systems at once, along with practical examples of administrative operations at scale.

PowerShell: Scripting for Advanced Automation

This course, which is also offered by LinkedIn Learning, focuses on automating advanced administrative operations in a Windows network. Those planning to take the course should have a strong foundation in managing Windows environments. As its name suggests, the course is geared toward advanced users.

After a brief introduction, the course jumps into DSC automation, providing an overview of DSC and explaining how to set up DSCs. Users can learn how to work with DSC resources, push DSCs and create pull configurations. The course then moves on to Just Enough Administration, explaining JEA concepts and best practices. In this part of the course, participants learn how to create role capability files and JEA session configurations, as well as how to register JEA endpoints.

The final section of the tutorial describes how to troubleshoot PowerShell scripts. The discussion begins with an overview of PowerShell workflows and examines the specifics of troubleshooting PowerShell in both the console and ISE. The section ends with information about using the PSScriptAnalyzer tool for quality control. As with any advanced course, not all users will benefit from this information. But the tutorial could provide a valuable resource for admins looking to refine their PowerShell skills.

LinkedIn Learning pricing

LinkedIn Learning sells courses individually, offers a one-month free trial and provides both personal and business plans. Individual PowerShell courses cost between $30 and $45, and individual subscription plans start at $20 per month. Contact LinkedIn Learning regarding business plans. LinkedIn Learning also offers courses aimed at IT certifications.


Qualtrics XM adds mobile, AI, information governance

Qualtrics XM added AI and information governance tools to its customer and employee experience measurement platform this week and gave its year-old mobile app an infusion of dashboards to put data into the hands of front-line workers on the go.

In some ways, the new features show the influence of SAP, which acquired Qualtrics for $8 billion a year ago. Additions such as mobile dashboarding likely represent a step toward making Qualtrics data relevant and available to customer-facing employees who use other SAP applications, in addition to marketing and research teams, Constellation Research principal analyst Nicole France said.

Getting such data into the hands of front-line employees makes the data more likely to be effectively used.

“Simply making these tools more widely available gets people more used to seeing this type of information, and it changes behaviors,” France said, adding that new features like mobile dashboards subtly get more people involved in using real-time performance metrics. “It’s doing it in almost a subliminal way, rather than trying to make it a quick-change program.” 

A number of Qualtrics competitors have also slowly added mobile dashboarding so employees can monitor reaction to a product, customer service or employee initiatives. But they’re all trying to find the right balance, lest it degrade employee experience or cause knee-jerk reactions to real-time fluctuations in customer response, Forrester Research senior analyst Faith Adams said.

Qualtrics XM mobile-app upgrades include dashboards to convey real-time customer response data to front-line employees responsible for product and service performance.

“It can be great — but it is also one that you need to be really careful with, too,” Adams said. “Some firms have noted that when they adopt mobile, it sometimes sets an expectation to employees of all levels that they are ‘always on.'”

Both France and Adams noted that the mobile app will help sales teams keep more plugged in to customer sentiment in their territories by getting data to them more quickly.

BMW, an early adopter of the new mobile app, uses it in dealerships to keep salespeople apprised of how individual customers feel about the purchasing process during the sale, and to prevent sales from falling through, according to Kelly Waldher, Qualtrics executive vice president and general manager.

AI and information governance tools debut

Qualtrics XM also added Smart Conversations, an AI-assisted tool to automate customer dialog around feedback. Two other AI features comb unstructured data for insights; one graphically visualizes customer sentiment and the other more precisely measures customer sentiment.

Prior to being acquired by SAP, Qualtrics had built its own AI and machine learning tools, Waldher said, and will continue to strategically invest in it. That said, Qualtrics will likely add features based on SAP’s Leonardo AI toolbox down the road.  

“We have that opportunity to work more closely with SAP engineers to leverage Leonardo,” Waldher said. “We’re still in the early stages of trying to tap into the broader SAP AI capabilities, but we’re excited to have that stack available to us.”

Also new to Qualtrics XM is a set of information governance features, which Waldher said will enable customers to better comply with privacy rules in both the U.S. and Europe. Qualtrics users will be able to monitor who is using data, and how, within their organizations.

“Chief compliance officers and those within the IT group can make sure that the tools that are being deployed across the organization have advanced security and governance capabilities,” Waldher said. “SAP’s global strength, their presence in Western Europe and beyond, has strongly reinforced the path [of building compliance tools] we were already on.”

The new features are included in most paid Qualtrics plans at no extra charge, with a few of the AI tools requiring different licensing plans to use.


New AWS cost management tool, instance tactics to cut cloud bills

LAS VEGAS — Amazon continuously rolls out new discounting programs and AWS cost management tools in an appeal to customers’ bottom lines and as a hedge against mounting competition from Microsoft and Google.

Companies have grappled with nasty surprises on their AWS bills for years, with the reasons attributed to AWS’ sheer complexity, as well as the runaway effect on-demand computing can engender without strong governance. It’s a thorny problem with a solution that can come in multiple forms.

To that end, the cloud giant released a number of new AWS cost management tools at re:Invent, including Compute Optimizer, which uses machine learning to help customers right-size their EC2 instances.

At the massive re:Invent conference here this week, AWS customers discussed how they use both AWS-native tools and their own methods to get the most value from their cloud budgets.

Ride-sharing service Lyft has committed to spend at least $300 million on AWS cloud services between the beginning of this year and the end of 2021.

Lyft, like rival Uber, saw a hockey stick-like growth spurt in recent years, going from about 50 million rides in 2015 to more than 350 million a few years later. But its AWS cost management needed serious work, said Patrick Valenzuela, engineering manager.

An initial effort to wrangle AWS costs resulted in a spreadsheet, powered by a Python script, that divided AWS spending by the number of rides given to reach an average figure. The spreadsheet also helped Lyft rank engineering teams according to their rate of AWS spending, which had a gamification effect as teams competed to do better, Valenzuela said in a presentation.
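
Lyft didn’t publish the script itself; a minimal sketch of the same idea, with invented column names, team tags and figures, might look like this:

```python
import csv, io
from collections import defaultdict

# A minimal sketch in the spirit of Lyft's first-generation tool: divide AWS
# spend by ride count and rank teams by spend. Column names, team tags and
# figures are invented; real cost and usage reports are far more detailed.
RIDES_THIS_MONTH = 30_000_000  # would come from an internal business metric

sample_report = io.StringIO("""team_tag,unblended_cost
maps,412000.50
payments,268314.10
dispatch,705221.75
""")

spend_by_team = defaultdict(float)
for row in csv.DictReader(sample_report):
    spend_by_team[row["team_tag"]] += float(row["unblended_cost"])

total = sum(spend_by_team.values())
print(f"Cost per ride: ${total / RIDES_THIS_MONTH:.4f}")
for team, spend in sorted(spend_by_team.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{team:10s} ${spend:,.2f}")
```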

Within six months, Lyft managed to drop the AWS cost-per-ride figure by 40%. But it needed more, such as fine-grained data sets that could be probed via SQL queries. Other factors, such as discounts and the cost of AWS Reserved Instances, weren’t always reflected transparently in the AWS-provided cost usage reports used to build the spreadsheet.

Lyft subsequently built a second-generation tool that included a data pipeline fed into a data warehouse. It created a reporting and dashboard layer on top of that foundation. The results have been promising. Earlier this year, Lyft found it was now spending 50% less on read/writes for its top 25 DynamoDB tables and also saved 50% on spend related to Kubernetes container migrations.

 “If you want to learn more about AWS, I recommend digging into your bill,” Valenzuela said.

AWS cost management a perennial issue

While there are plenty of cloud cost management tools available in addition to the new AWS Compute Optimizer, some AWS customers take a proactive approach to cost savings, compared to using historical analysis to spot and shed waste, as Lyft did in the example presented at re:Invent.

Privately held mapping data provider Here Technologies serves 100 million motor vehicles and collects 28 TB of data each day. Companies have a choice in the cloud procurement process — one being to force teams through rigid sourcing activities, said Jason Fuller, head of cloud management and operations at Here.

“Or, you let the builders build,” he said during a re:Invent presentation. “We let the builders build.”

Still, Here had developed a complex landscape on AWS, with more than 500 accounts that collectively spun up more than 10 million EC2 instances a year. A few years ago, Here began a concerted effort to adopt AWS Reserved Instances in a programmatic manner, hoping to squeeze out waste.

Reserved Instances carry contract terms of up to three years and offer substantial savings over on-demand pricing. Here eventually moved nearly 80% of its EC2 usage into Reserved Instances, which gave it about 50% off the on-demand rate, Fuller said.
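
A quick back-of-the-envelope check shows how those two figures combine. Assuming a hypothetical all-on-demand bill, covering 80% of usage at half price lowers the blended bill by about 40%:

```python
# Back-of-the-envelope math for the figures in the article: if roughly 80%
# of EC2 usage runs on Reserved Instances at about 50% off the on-demand
# rate, the blended bill is about 40% lower than paying on demand for
# everything. The dollar amount here is illustrative, not a real AWS price.
on_demand_monthly = 1_000_000        # hypothetical all-on-demand bill
reserved_share = 0.80                # portion of usage covered by RIs
ri_discount = 0.50                   # discount vs. on-demand

blended = (on_demand_monthly * reserved_share * (1 - ri_discount)
           + on_demand_monthly * (1 - reserved_share))
print(f"Blended bill: ${blended:,.0f}")                                 # $600,000
print(f"Savings vs. on-demand: {1 - blended / on_demand_monthly:.0%}")  # 40%
```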

The results have been impressive. During the past three-and-a-half years, Here saved $50 million and avoided another $150 million in costs, Fuller said.

Salesforce is another heavy user of AWS. It signed a $400 million infrastructure deal with AWS in 2016 and the companies have since partnered on other areas. Based on its 2017 acquisition of Krux, Salesforce now offers Audience Studio, a data management platform that collects and analyzes vast amounts of audience information from various third-party sources. It’s aimed at marketers who want to run more effective digital advertising campaigns.

Audience Studio handles 200,000 user queries per second, supported by 2,500 Elastic MapReduce Clusters on AWS, said Alex Estrovitz, director of software engineering at Salesforce.

“That’s a lot of compute, and I don’t think we’d be doing it cost-effectively without using [AWS Spot Instances],” Estrovitz said in a re:Invent session. More than 85% of Audience Studio’s infrastructure uses Spot Instances, which are made up of idle compute resources on AWS and cost up to 90% less than on-demand pricing.

But Spot Instances are best suited for jobs like Audience Studio’s, where large amounts of data get parallel-processed in batches across large pools of instances. Spot Instances are ephemeral; AWS can shut them down upon a brief notice when the system needs resources for other customer jobs. However, customers like Salesforce can buy Spot Instances based on their application’s degree of tolerance for interruptions.
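
As a sketch of how a batch job might request Spot capacity with the AWS SDK for Python, the snippet below launches a one-time Spot Instance that terminates on interruption; the AMI ID, instance type and region are placeholders, and this is not Salesforce’s setup.

```python
import boto3  # AWS SDK for Python

# One way to ask for Spot capacity through the EC2 API: launch an instance
# with a spot market option and let it be terminated on interruption.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="m5.xlarge",              # placeholder instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # For batch jobs that tolerate interruptions, a one-time
            # request that terminates on reclaim is typical.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```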

Salesforce has achieved 48% savings overall since migrating Audience Studio to Spot Instances, Estrovitz said. “If you multiply this over 2,500 jobs every day, we’ve saved an immense amount of money.”


SageMaker Studio makes model building, monitoring easier

LAS VEGAS — AWS launched a host of new tools and capabilities for Amazon SageMaker, AWS’ cloud platform for creating and deploying machine learning models; drawing the most notice was Amazon SageMaker Studio, a web-based integrated development environment (IDE).

In addition to SageMaker Studio, the IDE for building, using and monitoring machine learning models, the other new AWS products aim to make it easier for non-expert developers to create models and to make those models more explainable.

During a keynote presentation at the AWS re:Invent 2019 conference here Tuesday, AWS CEO Andy Jassy described five other new SageMaker tools: Experiments, Model Monitor, Autopilot, Notebooks and Debugger.

“SageMaker Studio along with SageMaker Experiments, SageMaker Model Monitor, SageMaker Autopilot and SageMaker Debugger collectively add lots more lifecycle capabilities for the full ML [machine learning] lifecycle and to support teams,” said Mike Gualtieri, an analyst at Forrester.

New tools

SageMaker Studio, Jassy claimed, is a “fully-integrated development environment for machine learning.” The new platform pulls together all of SageMaker’s capabilities, along with code, notebooks and datasets, into one environment. AWS intends the platform to simplify SageMaker, enabling users to create, deploy, monitor, debug and manage models in one environment.

Google and Microsoft have similar machine learning IDEs, Gualtieri noted, adding that Google plans for its IDE to be based on DataFusion, its cloud-native data integration service, and to be connected to other Google services.

SageMaker Notebooks aims to make it easier to create and manage open source Jupyter notebooks. With elastic compute, users can create one-click notebooks, Jassy said. The new tool also enables users to more easily adjust compute power for their notebooks and transfer the content of a notebook.

Meanwhile, SageMaker Experiments automatically captures input parameters, configuration and results of developers’ machine learning models to make it simpler for developers to track different iterations of models, according to AWS. Experiments keeps all that information in one place and introduces a search function to comb through current and past model iterations.
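
AWS didn’t walk through the Experiments API on stage, so the following is a generic illustration, not SageMaker code: it captures each run’s parameters and results as searchable JSON records.

```python
import json, time, uuid

# Not the SageMaker Experiments API — just a generic sketch of what such
# tracking captures: input parameters, configuration and results for each
# model iteration, stored so past runs can be searched and compared.
def log_experiment(params: dict, metrics: dict, path="experiments.jsonl"):
    record = {
        "run_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "params": params,
        "metrics": metrics,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def search(path="experiments.jsonl", min_accuracy=0.0):
    with open(path) as f:
        runs = [json.loads(line) for line in f]
    return [r for r in runs if r["metrics"].get("accuracy", 0) >= min_accuracy]

log_experiment({"learning_rate": 0.1, "max_depth": 6}, {"accuracy": 0.91})
print(search(min_accuracy=0.9))
```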

AWS CEO Andy Jassy talks about new Amazon SageMaker capabilities at re:Invent 2019.

“It is a much, much easier way to find, search for and collect your experiments when building a model,” Jassy said.

As the name suggests, SageMaker Debugger enables users to debug and profile their models more effectively. The tool collects and monitors key metrics from popular frameworks, and provides real-time metrics about accuracy and performance, potentially giving developers deeper insights into their own models. It is designed to make models more explainable for non-data scientists.

SageMaker Model Monitor also tries to make models more explainable by helping developers detect and fix concept drift, which refers to the evolution of data and data relationships over time. Unless models are updated in near real time, concept drift can drastically skew the accuracy of their outputs. Model Monitor constantly scans the data and model outputs to detect concept drift, alerting developers when it detects it and helping them identify the cause.
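
How Model Monitor detects drift internally isn’t spelled out here; as a generic illustration, a two-sample Kolmogorov-Smirnov test can flag when a feature’s live distribution no longer matches the training distribution. The synthetic data below deliberately drifts.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative concept-drift check: compare the distribution of a feature
# at training time with what the model sees in production. A small p-value
# from a two-sample Kolmogorov-Smirnov test suggests the live data has
# drifted from the data the model was trained on.
rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted: drift

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Possible drift detected (KS={stat:.3f}, p={p_value:.2e})")
```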

Automating model building

With Amazon SageMaker Autopilot, developers can automatically build models without, according to Jassy, sacrificing explainability.

Autopilot is “AutoML with full control and visibility,” he asserted. AutoML essentially is the process of automating machine learning modeling and development tools.

The new Autopilot module automatically selects the correct algorithm based on the available data and use case and then trains 50 unique models. Those models are then ranked by accuracy.
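
That workflow can be mimicked, at toy scale, with scikit-learn: train a few candidate models on the same data and rank them by cross-validated accuracy. This illustrates the AutoML idea only and says nothing about Autopilot’s internals.

```python
# Not Autopilot's implementation — a toy version of the idea: train several
# candidate models on the same data and rank them by validation accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

leaderboard = sorted(
    ((name, cross_val_score(model, X, y, cv=5).mean())
     for name, model in candidates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in leaderboard:
    print(f"{name:22s} accuracy={score:.3f}")
```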

“AutoML is the future of ML development. I predict that within two years, 90 percent of all ML models will be created using AutoML by data scientists, developers and business analysts,” Gualtieri said.


“SageMaker Autopilot is a must-have for AWS, but it probably will help” other vendors as well, including AWS competitors such as DataRobot, because the move further legitimizes the automated machine learning approach, he continued.

Other AWS rivals, including Google Cloud Platform, Microsoft Azure, IBM, SAS, RapidMiner, Aible and H2O.ai, also have automated machine learning capabilities, Gualtieri noted.

However, according to Nick McQuire, vice president at advisory firm CCS Insight, some of the new AWS capabilities are innovative.

“Studio is a great complement to the other products as the single pane of glass developers and data scientists need and its incorporation of the new features, especially Model Monitor and Debugger, are among the first in the market,” he said.

“Although AWS may appear late to the game with Studio, what they are showing is pretty unique, especially the positioning of the IDE as similar to traditional software development with … Experiments, Debugger and Model Monitor being integrated into Studio,” McQuire said. “These are big jumps in the SageMaker capability on what’s out there in the market.”

Google also recently released several new tools aimed at delivering explainable AI, plus a new product suite, Google Cloud Explainable AI.
