Tag Archives: Automation

Microsoft Power Platform adds chatbots; Flow now Power Automate

More bots and automation tools went live on the Microsoft Power Platform, Microsoft announced today. In formally introducing the tools, Microsoft said they will make data flow between applications such as SharePoint, OneDrive and Dynamics 365, and create more efficiencies with custom apps.

The more than 400 capabilities added to the Microsoft Power Platform focus on expanding its robotic process automation potential for users, as well as new integrations between the platform and Microsoft Teams, according to a blog post by James Phillips, corporate vice president of business applications at Microsoft.

Those include robotic process automation (RPA) tools for Microsoft Power Automate, formerly known as Flow, as well as features that make AI tools easier to add to PowerApps. Also newly available are tools for creating user interfaces in Power Automate.

AI Builder adds a point-and-click means to fold common processes such as forms processing, object detection and text classification into apps — processes commonly used for SharePoint and OneDrive content curation.

Microsoft is adding these tools, as well as new security features to analytics platform Power BI, in part to coax customers who remain on premises into the Azure cloud, said G2 analyst Michael Fauscette.

PowerApps reduces the development work needed to connect systems in the cloud, such as linking content in OneDrive and SharePoint with work being done in Dynamics 365 CRM, Teams and ERP applications.

Microsoft Power Automate, formerly Flow
Microsoft Power Automate, a low-code app-design tool, is the new version of Flow.

Chatbots go live

Also announced as generally available at Microsoft Ignite are Power Virtual Agents, do-it-yourself chatbots on the Microsoft Power Platform.

They’ll likely first be used by customer service teams on Dynamics 365, said Constellation Research analyst R “Ray” Wang, but they could spread to other business areas such as human resources, which could use the bots to answer common questions during employee recruiting or onboarding.


While some companies may hire outside consultants and developers to build custom chatbots, Wang said others may try building them internally on the Microsoft Power Platform. Large call centers employing many human agents and running on Microsoft applications would be logical candidates for piloting new bots.

“I think they’ll start coming here to build their virtual agents,” Wang said. “[Bot] training will be an issue, but it’s a matter of scale. If an agent is costing you $15 an hour and the chatbot 15 cents an hour … it’s all about call deflection.”

Microsoft Power Platform evolves

PowerApps, which launched in late 2015, originally found utility with users of Microsoft Dynamics CRM who needed to automate and standardize processes across data sets inside the Microsoft environment and connect to outside platforms such as Salesforce, said Gartner analyst Ed Anderson.

Use quickly spread to SharePoint, OneDrive and Dynamics ERP users, as they found that Flow — a low-code app-design tool — enabled the creation of connectors and apps without developer overhead. Third-party consultants and developers also used PowerApps to speed up deliverables to clients. Power BI, Power Automate and PowerApps together became known as the Microsoft Power Platform a year ago.

“PowerApps are really interesting for OneDrive and SharePoint because it lets you quickly identify data sources and quickly do something meaningful with them — connect them together, add some logic around them or customized interfaces,” Anderson said.


Gen Z in the workforce both want and fear AI and automation

For Gen Z in the workforce, AI and automation are useful and time-saving tools, but also possible threats to job security.

Typically characterized as the demographic born between the mid-1990s and mid-2000s, Generation Z is the first generation to grow up exclusively with modern technologies such as smartphones, social media and digital assistants.

Many Gen Z-ers first experienced Apple’s Siri, released in 2011, and then Amazon’s Alexa, introduced in 2014 alongside Amazon Echo, at a young age.

The demographic as a whole tends to have a strong understanding of the usefulness of AI and automation, said Terry Simpson, technical evangelist at Nintex, a process management and automation vendor.

Gen Z in the workforce

Most Gen Z employees have confidence in AI and automation, Nintex found in a September 2019 report about a survey of 500 current and 500 future Gen Z employees. Some 88% of the survey takers said AI and automation can make their jobs easier.

This generation understands AI technology, Simpson said, and its members want more of it in the workplace.

“For most organizations, almost 68 percent of processes are not automated,” Simpson said. Automation typically replaces menial, repetitive tasks, so lack of automation leaves those tasks to be handled by employees.

Gen Z wants more automation in the workplace, even as they fear it could affect job security.

For Gen Z in the workforce, a lack of automation can be frustrating, Simpson said, especially when Gen Z-ers are so used to the ease of digital assistants and automated programs in their personal lives. Businesses generally haven’t caught up to the AI products Gen Z-ers are using at home, he said.

Yet, even as Gen Z-ers have faith that AI and automation will help them in the workplace, they fear it, too.

Job fears

According to the Nintex report, 57% of those surveyed expressed concern that AI and automation could affect their job security.

“A lot of times you may be a Gen Z employee that automation could replace what you’re doing as a job function, and that becomes a risk,” Simpson said.


Still, he added, automation can help an organization as a whole, and can ease the employees’ workloads.

“Everybody says I don’t want to lose my job to a robot, and then Outlook tells you to go to a meeting and you go,” said Anthony Scriffignano, chief data scientist at Dun & Bradstreet.

Jobs that can be easily automated may eventually be given to an automated system, but AI will also create jobs, Scriffignano said.

As a young generation, Gen Z-ers may have less to fear than other generations, however.

Younger generations are coachable and more open to change than the older generations, Scriffignano said. They will be able to adapt better to new technologies, while also helping their employers adapt, too.

“Gen Z have time in their career to reinvent themselves and refocus” their skills and career goals to better adapt for AI and automation, Scriffignano said.


Box AI, workflow automation strategies about to unfold

Box AI and workflow automation advancements that users are waiting for, and which are instrumental to the content services platform vendor’s future, will come into clearer focus this month, according to CEO Aaron Levie.

With Box AI tools at the hub of Box Skills, the company’s still-in-beta system for customizing Box applications with machine learning technology from Google, Microsoft or IBM, AI will permeate Box’s content management systems, Levie said.

“We want to make sure we continue to automate and bring intelligence to your digital business processes,” Levie said in an interview.

New Box AI tools

Levie said the company will make announcements around Box AI and workflow automation, and generally, about how Box plans to “advance the state of the digital workplace,” at the BoxWorks 2018 conference in San Francisco Aug. 29 to 30.

“We’re going to talk a lot about AI and the power of machine learning,” Levie said. “And you’re going to see more of a roadmap around workflow in Box as well, which we’re really excited about.”

Indeed, workflow and digital process automation have been a perennial question for Box in recent years, said Cheryl McKinnon, a Forrester analyst scheduled to speak at BoxWorks.

Workflow automation progress

McKinnon noted that Box, which started out as an enterprise file sync-and-share company, has tried to remedy the gap through a partnership with IBM on the Box Relay workflow automation tool and other deals (with companies like Nintex and Pegasystems). Box also recently acquired startup Progressly to improve workflow automation.


“I do expect to see deeper investment in Box’s own automation capabilities as it puts some of the expertise from recent acquisitions, such as Progressly, to work,” McKinnon said.

“Content doesn’t get created in a vacuum — embedding the content creation, collaboration and sharing lifecycle into key business processes is important to keep Box a sticky and integral part of its clients’ internal and external work activities,” she said.

In addition to Box AI and workflow automation, Levie said Box is putting a lot of emphasis on its native-cloud architecture and persuading potential customers to move from on-premises content management systems to the cloud-based content services platform model that has distinguished Box.

Box CEO Aaron Levie speaking at the BoxWorks 2017 conference.

“We’re really trying to help them move their legacy information systems, their technology infrastructure, to the cloud,” Levie said.

Box wants “to show a better path forward for managing, securing, governing and working with content, and not just using the same legacy systems, not having a fragmented content management architecture that we think is not going to enable a modern digital workplace,” Levie said.

Box vs. Dropbox and bigger foes

Meanwhile, its similarly named competitor, Dropbox, completed a successful IPO this year and is angling for the enterprise market, where Box holds the lead. Dropbox’s stock price took a hit recently, but Levie said he takes the competition seriously. Box, too, sustained a decline in its stock price earlier this year, though the stock’s value has stabilized.

“I would not dismiss them as a player in this space,” Levie said of Dropbox. “But we think we serve more or less different segments of the market. They are more consumer and SMB leaning and we are much more SMB and enterprise leaning.”

Actually, Box’s most dangerous competitive threats are from cloud giants like Microsoft and Google, McKinnon said.

They are “investing significantly in their own content and collaboration platforms, and while Box partners with both of them for integration with office productivity tools and as optional cloud storage back ends, the desire to be the single source of truth for corporate content in the cloud will put them head to head in many accounts,” she said.

ONAP Beijing release targets deployment scenarios

Deployability is the name of the game with the Linux Foundation’s latest Open Network Automation Platform architecture.

Central to the ONAP Beijing release are seven identified “dimensions of deployability,” said Arpit Joshipura, general manager of networking and orchestration at the Linux Foundation. These seven deployability factors comprise usability, security, manageability, stability, scalability, performance and resilience.

By identifying these dimensions, the Linux Foundation expects to better address and answer questions regarding documentation, Kubernetes management, disruptive testing, multisite failures and lifecycle management transactions. The goal is better consistency among ONAP deployments, Joshipura said.

Other than the standardized support for external northbound APIs that face a user’s operational and business support systems, the ONAP Beijing release had only a handful of architectural changes from the previous Amsterdam architecture, according to Joshipura. To that end, the ONAP Beijing release features four relevant MEF APIs taken from MEF’s Lifecycle Service Orchestration architecture and framework.

An additional architectural tweak involved the ONAP Operations Manager (OOM), which now works with Kubernetes and can run with any cloud provider, Joshipura said.

Arpit Joshipura, Linux Foundation

“All the projects within ONAP can become Docker containers, and Kubernetes orchestrates all of them,” he said. “It helps with management, portability and efficiencies in terms of VMs [virtual machines] needed to run them.”

The ONAP Beijing release also introduced Multi-site State Coordination Services, which ONAP dubbed MUSIC. MUSIC coordinates databases and synchronizes policies for ONAP deployments spanning multiple locations, geographies and countries, which is relevant for providers like Vodafone and Orange. The release also provided standard templates, plus virtual network function (VNF) integration and validation, related to information and data modeling.

Functional enhancements for ONAP Beijing

In addition to architecture adaptations, the ONAP Beijing release made a series of functional enhancements that include change management, hardware platform awareness and autoscaling with manual triggers. For example, the system follows policy to automatically move VNFs or add VMs if a certain location has excess compute capacity. This capability helps scale the VNFs appropriately, Joshipura said.

ONAP expects to make its next release, Casablanca, available at the end of 2018. ONAP Casablanca will continue work on operational and business support systems, in addition to adding more cross-project integration related to microservices architecture, Joshipura said. Further, ONAP Casablanca will introduce a formal VNF certification program and standardize features to support 5G and cross-cloud connectivity.

Linux Foundation reacts to Microsoft’s GitHub acquisition

Microsoft’s $7.5 billion acquisition of GitHub received a cautiously positive response from the Linux Foundation. Jim Zemlin, the Linux Foundation’s executive director, categorized the move as “pretty good news for the world of open source,” highlighting Microsoft’s expertise to make GitHub better. He did, however, stress Microsoft’s growing need to earn the trust of developers, while also acknowledging the existence of other open source developer platforms, such as GitLab and Stack Overflow.

“As we all evaluate the evolution of open source from the early days to now, I suggest we celebrate this moment,” Zemlin said about the purchase. “The multidecade progression toward the adoption and continual use of open source software in developing modern technological products, solutions and services is permanent and irreversible. The majority of the world’s economic systems, stock exchanges, the internet, supercomputers and mobile devices run the open source Linux operating system, and its usage and adoption continue to expand.”

Atomist extends CI/CD to automate the entire DevOps toolchain

Startup Atomist hopes to revolutionize development automation throughout the application lifecycle, before traditional application release automation vendors catch on.

Development automation has been the elusive goal of a generation of tools, particularly DevOps tools, that promise continuous integration and continuous delivery. The latest is Atomist and its development automation platform, which aims to automate as many of the mundane tasks as possible in the DevOps toolchain.

Atomist ingests information about an organization’s software projects and processes to build a comprehensive understanding of those projects. Then it creates automations for the environment, which use programming tools such as parser generators and microgrammars to parse and contextualize code.

The system also correlates event streams pulled from various stages of development and represents them as code in a graph database known as the Cortex. Because Atomist’s founders believe the CI pipeline model falls short, the platform takes an event-based approach, modeling everything in an organization’s software delivery process as a stream of events. The event-driven model also enables development teams to compose development flows based on events.

In addition, Atomist automatically creates Git repositories, configures systems for issue tracking and continuous integration, and creates chat channels to consolidate project notifications and deliver information to the right people.

“Atomist is an interesting and logical progression of DevOps toolchains, in that it can traverse events across a wide variety of platforms but present them in a fashion such that developers don’t need to context switch,” said Stephen O’Grady, principal analyst at RedMonk in Portland, Maine. “Given how many moving parts are involved in DevOps toolchains, the integrations are welcome.”

Mik Kersten, a leading DevOps guru and CEO at Tasktop Technologies, has tried Atomist firsthand and calls it a fundamentally new approach to manage delivery. As delivery pipelines become increasingly complex, the sources of waste move well beyond the code and into the tools spread across the pipeline, Kersten noted.

Atomist represents a new class of DevOps product that goes beyond CI, which is “necessary, but not sufficient,” said Rod Johnson, Atomist CEO and creator of the Spring Framework.

The rise of microservices, with tens or hundreds of services in a given environment, introduces trouble spots as developers collaborate on, deploy and monitor the lifecycle of those services, Johnson said.

This is particularly important for security, where keeping services consistent is paramount. In last year’s Equifax breach, hackers gained access through an unpatched version of Apache Struts; with Atomist, an organization can identify and upgrade old software automatically across potentially hundreds of repositories, Johnson said.

Rod Johnson, CEO, Atomist

Tasktop’s Kersten agreed that Atomist’s approach to developer-centric automation “goes way beyond what we got with CI.” Atomist created a Slack bot that incorporates its automation facilities, driven by a development automation engine that is reminiscent of model-driven development or aspect-oriented programming, but provides generative facilities not only for code but across project resources and other tools, Kersten said. A notification system informs users what the automations are doing.

Most importantly, Atomist is fully extensible, and its entire internal data model can be exposed in GraphQL.
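As a purely illustrative sketch (the schema and field names below are hypothetical, not Atomist's actual data model), a GraphQL query against such an exposed delivery-event model might look like:

```graphql
# Hypothetical schema: field names are illustrative only,
# not Atomist's real GraphQL data model.
query RecentFailedBuilds {
  builds(status: FAILED, last: 5) {
    repo {
      name
      owner
    }
    commit {
      sha
      author {
        login
      }
    }
    timestamp
  }
}
```

The appeal of a GraphQL surface for this kind of tool is that each team can ask for exactly the slice of the delivery graph it cares about, rather than consuming fixed REST payloads.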

Tasktop has already explored ways to connect Atomist to Tasktop’s Integration Hub and the 58 Agile and DevOps tools it currently supports, Kersten said.

Automation built into development

As DevOps becomes more widely adopted, integrating automation into the entire DevOps toolchain is critical to help streamline the development process so programmers can develop faster, said Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.


“The market to integrate automation and development will grow, as both the companies that use DevOps and the number of applications they develop increase,” he said. Atomist’s integration in the code creation and deployment process, through release and update management processes, “enables automation not just in the development process but also in day two and beyond application management,” he said.

Atomist joins other approaches such as GitOps and Bitbucket Pipelines that target the developer who chooses the tools used across the complete lifecycle, said Robert Stroud, an analyst at Forrester Research in Cambridge, Mass.

“Selection of tooling such as Atomist will drive developer productivity allowing them to focus on code, not pipeline development — this is good for DevOps adoption and acceleration,” he said. “The challenge for these tools is although new code fits well, deployment solutions are selected within enterprises by Ops teams, and also need to support on-premises deployment environments.”

For that reason, look for traditional application release automation vendors, such as IBM, XebiaLabs and CA Technologies, to deliver features similar to Atomist’s capabilities in 2018, Stroud said.

Intelligent information management the next wave for ECM

The introductions of AI, automation and an abundance of vendors have caused a seismic shift in content management — so much so that, in 2017, Gartner rebranded the space as content services, a term that covers content services applications, platforms and components.

Content management company M-Files hopes its evolution toward intelligent information management reflects where the content services industry is heading. The thinking is that, as information spreads across more silos, what the content is matters more than where it resides.

“What M-Files is doing is interesting, as it’s consistent with what I see in the industry now,” said John Mancini, chief evangelist at AIIM, a nonprofit research and education firm in Silver Spring, Md., focusing on information management. “Everyone is trying to figure out how to leverage its existing content and how to push as much as possible in the direction of the user so that content management becomes more user-defined, rather than a top-down approach.”

Focus on content type, not new thinking

The concept of focusing on what content is, rather than where it resides, isn’t unique to 2018. Intelligent information management is gaining in importance, however, as information is scattered across clouds, on-premises systems and mobile devices.


“We spent 15 years trying to convince people to care about what repository something is in, but they still don’t,” Mancini said. “But they do care if it’s a contract or a proposal or an internal document.”

It’s with that in mind that M-Files released M-Files 2018, which connects to market leaders in content storage — ranging from other content management services, like OpenText and Laserfiche, to more classical repositories, like on-premises network folders or SharePoint, and expanding out to systems of record, like Salesforce and Dynamics 365.

“There’s this trend of consumerization — the idea that the way you’ll be most productive is by allowing users to work the way they want to work,” said Greg Milliken, vice president of marketing at M-Files, based in Plano, Texas. “There’s the OpenText or Laserfiche type, where the structure is defined, and this is how we’re going to work. But everyone works a little bit differently.”

Finding content wherever it’s stored

What M-Files is touting is, with its intelligent information management platform, users will be able to natively search for content based on its metadata and locate and view any content no matter where it may be stored.

“It speaks to the trend we’re going toward, which is away from static silos and one-size-fits-all approach to something where content can be anywhere and it comes to you like a service,” Milliken said. “You can look into these repositories with a common lens.”

Being repository-agnostic requires some form of AI, and M-Files hopes its partnership with ABBYY will prove fruitful in building out its intelligent information management software.

“The way I think about it sometimes is upside-down content management,” Mancini said, referring to the repository-agnostic approach — and not a reference to Stranger Things. “Content management used to be a very top-down, orchestrated process. That colored how we all thought about the tools that individual knowledge workers used. And in that process, we tended to overcomplicate that from an end-user perspective.”

A list of the various vendors to which M-Files 2018 is connected includes other content management players, on-premises share folders and CRM systems. The connectors vary in price, according to M-Files.

Partnership adds AI to content management

Last year’s partnership between M-Files and ABBYY brought AI functionality to the vendor, which M-Files hopes benefits its users by automatically classifying and tagging documents and other content with the correct metadata, saving users time and minimizing human error.

“Where you put something comes down to the company or even the individual,” Milliken said. “Someone might put a document in a customer folder; someone else might put it in a specific project folder — now, it’s derived automatically. The idea of applying AI to a meta-driven approach is what we’ve been striving for.”

Mancini sees a metadata-driven, repository-agnostic approach as a potential transition for the content management market.

“What’s happened in the last couple of years and in surveys we’ve done, people and companies with long-term ECM [enterprise content management], the primary challenge they come back to us with is usability,” Mancini said. “The metadata-centric approach of M-Files is an attempt to do this through the prism of a knowledge worker and to see if that redefines content management.”

M-Files 2018, currently available for download, is a platform upgrade for existing customers. The AI metadata layer is an additional cost for new and existing customers, starting at $10,000 per year and varying with the size and scope of the company. The connectors to various repositories are also an additional cost, depending on the connector.

New ONAP architecture provides network automation platform

Eight months after its inception, the Open Network Automation Platform project released its first code, dubbed Amsterdam. The ONAP architecture is a combination of two former open source projects — AT&T’s Enhanced Control, Orchestration, Management and Policy and the Open-Orchestrator project.

ONAP’s November release targets carriers and service providers by creating a platform for network automation. It includes two blueprints — one for Voice over Long Term Evolution and one governing virtual customer premises equipment. Additionally, Amsterdam focuses on automating the service lifecycle management for virtual network functions (VNFs), said Arpit Joshipura, general manager of networking and orchestration at The Linux Foundation, which hosts the ONAP project.

The ONAP architecture includes three main components: design time, run time and the managed environment. Users can package VNFs according to their individual requirements, but Amsterdam also offers a VNF software developer kit (SDK) to incorporate third-party VNFs, Joshipura said.

Once services are live, the code — a combination of existing Enhanced Control, Orchestration, Management and Policy, or ECOMP, and Open-O with new software — can manage physical and virtual network functions, hypervisors, operating systems and cloud environments. The ONAP architecture integrates with existing operational and billing support systems through an external API framework.

VNF automation is a key component, Joshipura said.

“The network is constantly collecting data, analytics, events, security, scalability — all the things relevant to closed-loop automation — and then it feeds it [the data] back to the service lifecycle management,” he said. “If a VNF needs more VMs [virtual machines] or more memory or a change in priority or quality of service, all that is automatically done — no human touch required.”

Because ONAP is a collection of individual open source projects, some industry observers and potential users expressed doubts about how easy it would be to put Amsterdam to use — particularly since AT&T was originally the main ECOMP contributor. But Joshipura said ONAP reworked the code to reduce the complexity and make Amsterdam usable for the majority of users, not just specific contributors.

“Originally, yes, it was complex because it was a set of two monolithic codes. One was Open-O and the other was ECOMP,” he said. “Then, what we did was we decoupled and modularized it and we removed all the open source components. We refactored a lot of code when we added new code.”

The result is a modular platform — not a product, he said — that has many parts doing several different things. This modularity means carriers and service providers can pick and choose from the Amsterdam code or use the platform as a whole.

ONAP’s next release — Beijing, expected in 2018 — will focus on support for enterprise workloads, including 5G and internet of things (IoT).

MEF releases 3.0 framework aimed at automation, orchestration

MEF has released a new framework governing how service providers deploy network automation and orchestration.

The MEF 3.0 Transformational Global Services Framework is the latest effort by the organization to move beyond its carrier Ethernet roots. MEF is shifting its focus toward creating a foundation that service provider members can use as they move toward cloud-native environments.

MEF 3.0 is developed around four main components: standardized and orchestrated services, open lifecycle service orchestration (LSO) APIs, certification programs and community collaboration.

With the new framework, MEF is defining network services, like wavelength, IP and security, to help service providers move to cloud environments and network automation, according to Pascal Menezes, CTO at MEF, based in Los Angeles.

“A service is defined like a language that everybody can understand, whether it be a subscriber ordering it or a provider implementing it. They all agree on that language,” he said. “But how they actually implement it and what technology they use is independent and was never really defined in any specs. It defines SLA objectives, performance objectives and different classes of performances, but it doesn’t tell you how to implement.”

MEF has previously worked on orchestrating connectivity services, like wavelength and IP, and intends to deliver that work early next year, Menezes said. MEF has started developing SD-WAN orchestration standards, as well, citing its role as a bridge between connectivity layer services and other services, like security as a service and application performance, he added.

These services are automated and orchestrated via MEF LSO APIs. MEF released two LSO APIs earlier this year and will continue to develop more within MEF’s LSO reference orchestration framework. The certification programs will correlate with upcoming releases and are subscription-based, he said.

The fourth MEF 3.0 component involves what MEF calls community collaboration. This involves open source contributions, an enterprise advisory council, hackathons and partnerships with other industry groups. MEF and ONAP, for example, announced earlier this year they are working together to standardize automation and orchestration for service provider networks.

In a separate announcement this week, MEF said it plans to combine its efforts to determine how cloud applications connect with networks with work conducted by the International Multimedia Telecommunications Consortium (IMTC) to define application performance. According to Menezes, MEF will integrate existing IMTC work into its new applications committee and will take over any active projects as part of the MEF 3.0 framework. 

“IMTC has been focused on real-time media application performance and interoperability. It made a lot of sense to bring that work into MEF,” Menezes said.

Filter and query Windows event logs with PowerShell

In addition to its automation capabilities, PowerShell helps the IT staff troubleshoot problems with Windows, specifically when they need to find errors in the Windows event logs. PowerShell parses logs and has a few advantages over the numerous third-party tools at administrators' disposal. Microsoft includes PowerShell for free with Windows, which gives it a cost advantage over other vendors' products. Also, PowerShell connects deeply with the OS to provide many options to filter logs and query across multiple systems simultaneously.

Get-EventLog is the primary cmdlet administrators use to manage Windows event logs. This cmdlet shows the log's contents with the -LogName parameter, followed by the name of the desired log file.

Log files can get large, but this cmdlet cuts results down to more easily reveal relevant events.

Use this command to show a summary of available log files:

Get-EventLog -List

PowerShell returns the log names and the number of events in each. Let’s focus on the Application log, which can contain several thousand entries. This command displays the Application log events:

Get-EventLog -LogName "Application"

The command output shows the log file’s full contents, which is not helpful. To filter the results, use this example to show the 10 most recent entries:

Get-EventLog -LogName "Application" -Newest 10

Next, take the command a step further, and find the 10 most recent errors with the -EntryType parameter:

Get-EventLog -LogName "Application" -EntryType "Error" -Newest 10

PowerShell also finds specific error occurrences. Find the 10 most recent instances of event 7670 — an issue related to SQL Server database access — with the -InstanceId parameter:

Get-EventLog -LogName "Application" -InstanceId 7670 -Newest 10

Event 7670 often accompanies several other SQL Server events, such as 7671 or 7673. PowerShell can also match a range of event IDs rather than a single one. Let's say you're interested in event IDs 7670, 7671, 7672 and 7673. Use this command to view the 10 most recent SQL Server-related errors with those event IDs in the Application log:

Get-EventLog -LogName "Application" -InstanceId (7670..7673) -Newest 10

Alternatively, the command to list SQL errors — which varies based on the SQL Server version — resembles this:

Get-EventLog -LogName "Application" -EntryType "Error" -Source "SQLLocalDB 11.0" -Newest 10
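Get-EventLog can also narrow results to a time window with its -After and -Before parameters, which is handy when only recent activity matters. A quick sketch — the 24-hour window here is an arbitrary choice:

```powershell
# Show Application log errors recorded in the last 24 hours.
# -After (and -Before) accept DateTime values.
$Since = (Get-Date).AddDays(-1)
Get-EventLog -LogName "Application" -EntryType "Error" -After $Since
```

Combine -After with -EntryType or -InstanceId to zero in on a specific incident window.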

How to check logs on remote machines

PowerShell also filters log events on Windows systems across the network. The administrator must specify the -ComputerName parameter, followed by the NetBIOS name, fully qualified domain name or the target system’s IP address.

To show results from several computers, store the computer names in a variable, and then use a ForEach loop. If the server names are Server1, Server2 and Server3, for example, use these commands to query each server:

$Computers = 'Server1','Server2','Server3'

ForEach ($Computer in $Computers) {Get-EventLog -ComputerName $Computer -LogName "Application" -Newest 10}

The output does not list the name of the server with the event. To adjust this, customize the results: Append the pipe symbol, followed by the Select-Object cmdlet and the fields to display. The valid field names are EventID, MachineName, Data, Index, Category, CategoryNumber, EntryType, Message, Source, ReplacementStrings, InstanceID, TimeGenerated, TimeWritten, UserName, Site and Container.


This command returns the server name, event ID, time and description:

$Computers = 'Server1','Server2','Server3'

ForEach ($Computer in $Computers) {Get-EventLog -ComputerName $Computer -LogName "Application" -Newest 10 | Select-Object MachineName, EventID, TimeGenerated, Message}
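One unreachable server will interrupt the loop with an error. A defensive sketch — the server names are placeholders — wraps each query in Try/Catch so the remaining machines are still checked:

```powershell
# Query each server's Application log, warning on failures instead of halting.
$Computers = 'Server1','Server2','Server3'
ForEach ($Computer in $Computers) {
    Try {
        Get-EventLog -ComputerName $Computer -LogName "Application" -Newest 10 -ErrorAction Stop |
            Select-Object MachineName, EventID, TimeGenerated, Message
    }
    Catch {
        Write-Warning "Could not query $($Computer): $_"
    }
}
```

The -ErrorAction Stop turns Get-EventLog's non-terminating errors into catchable exceptions, which is what makes the Try/Catch effective here.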

These are just a few methods to parse Windows event logs with Get-EventLog. Microsoft provides an extensive list of other ways this cmdlet helps administrators troubleshoot Windows systems.

Next Steps

PowerShell commands to troubleshoot Exchange Server

Implement PowerShell’s piping capabilities to build scripts

PowerShell Gallery offers easy access to scripts

PowerShell Script: Deploy VMs, and Configure the Guest OS in one Go on Hyper-V

I've been prepping for a lot of different speaking engagements coming up in the next few months, and a very hot topic these days is the use of PowerShell and automation when it comes to Hyper-V. With this in mind, I've prepped the script below for some of these upcoming discussions and wanted to share it with the community in the hope that it helps some people.

Read the post here: PowerShell Script: Deploy VMs, and Configure the Guest OS in one Go on Hyper-V

What is mobile marketing automation?

Mobile marketing automation (MMA) encompasses a wide range of technologies and approaches intended to increase customer engagement. MMA provides insight into the user, allowing for the creation of personalized content that is both relevant to the user and delivered in an optimal manner. This, in turn, produces a better user experience, builds retention as the user becomes familiar with the relevant content served to them, and increases ROI as more and more users are active on devices and making purchases.

Key features of mobile marketing automation

Each solution varies in the features offered, but let's look at the following: push notifications, in-app messaging, A/B testing and remarketing. These tools are used to improve user experience, build retention and increase ROI.

  • Push notifications. Users respond to what is applicable to them. Notifications are a great way to provide valuable information to the user.
  • In-app messaging. This is a way to deliver alerts with the right content to a user at the appropriate time.
  • A/B testing. So many apps are downloaded and only used once. With A/B testing, you can optimize based on what is working and avoid guesswork.
  • Remarketing. With some consumers turning off notifications, creating personalized ads that follow a user's journey outside of your app is another way to keep them engaged.

Who can benefit from mobile marketing automation

Mobile marketing automation is used by various industries, such as games, media, e-commerce, health and travel, to name a few. Ultimately, anyone who wants to engage their users effectively can benefit from MMA through increased revenue and better user engagement.

New MMA partner: Swrve

To highlight the integration processes, here is an example from Swrve, the latest partner to support Windows.

Swrve is a mobile engagement platform, purpose-built for the mobile app experience owner. This platform enables mobile app experience owners to know and personally interact with each and every customer who has their apps. The platform focuses on four key areas to help owners shape the mobile experience—onboarding, nurturing, engagement, and retention—and empowers owners with the following tools in the platform:

  • real-time segmentation and targeting
  • conversation management
  • dynamic optimization
  • analytics
  • cross-channel data orchestration with existing marketing stacks

In short, Swrve can help drive monetization while building customer lifetime value.

Swrve app dashboards

The User Lifecycle dashboard provides an out-of-the-box breakdown of your users against a standard app lifecycle covering six states of activity and inactivity. It also enables you to target users in each state with relevant in-app message, push notification, or conversation campaigns.


The KPI Metrics dashboard allows you to monitor and analyze your app’s performance over the past 24 hours.


Swrve integration to Universal Windows Platform (UWP) apps

After you have installed the SDK, complete the following steps for a basic integration. Replace <app_id> and <api_key> with your app ID and API key.

Step 1: Add the following code snippet to the OnLaunched method of your application:

private async Task InitializeSwrveSDK()
{
    // Configure here anything extra like in-app custom button listener,
    // push notifications and push payload listener or dialog listeners.
    SwrveConfig config = new SwrveConfig();

    // To use the EU stack, include this in your config.
    // config.SelectedStack = Swrve.Stack.EU;

    await SwrveSDK.Instance.InitAsync(<app_id>, <api_key>, config);
}

protected override async void OnLaunched(LaunchActivatedEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
    {
        // Initialize the SDK when the app has been launched for the first time
        await InitializeSwrveSDK();
    }
}

Step 2: Add the following to your application’s OnActivated method:

protected override async void OnActivated(IActivatedEventArgs args)
{
    base.OnActivated(args);
    var rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
    {
        // Initialize the SDK when the app has been started from an activation
        await InitializeSwrveSDK();
    }
    SwrveSDK.Instance.OnActivated(args);
}

Step 3: Add the following to your application’s OnSuspending method:

private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    // Save application state and stop any background activity
    await SwrveSDK.Instance.OnSuspendingAsync();
    deferral.Complete();
}

Step 4: Add the following to your application’s OnResuming method:

private async void OnResuming(object sender, object e)
{
    await SwrveSDK.Instance.OnResumingAsync();
}

The project is now ready to compile and run in the Windows 10 simulator.

Other MMA partners supporting Windows

If you’re interested in learning more about other MMA partners supporting Windows, please check out this list:

  • Azure Mobile Engagement – Combine big data collection with real-time processing to trigger engagement scenarios according to app-user behavior and demographics. Azure Mobile Engagement answers nearly any question relevant to your business needs.  For documentation and SDK installation, please click here.
  • Localytics – A Mobile Engagement Platform that brings together User Insights, Smart Targeting & Marketing Automation to power great user engagement.  For documentation and SDK installation, please click here.
  • Urban Airship – Urban Airship provides leading brands with a market-leading mobile engagement platform and digital wallet solution that enable high-value customer relationships.  For documentation and SDK installation, please click here.