Accenture cloud tool aims to shorten decision cycle

Accenture has rolled out a tool that the company said will help customers navigate complex cloud computing options and let them simulate deployments before committing to an architecture.

The IT services firm will offer the tool, called myNav, as part of a larger consulting agreement with its customers. The myNav process starts with a discovery phase, which scans the customer’s existing infrastructure and recommends a cloud deployment approach, whether private, public, hybrid or multi-cloud. Accenture’s AI engine then churns through the company’s repository of previous cloud projects to recommend a specific enterprise architecture and cloud offering. Next, the Accenture cloud tool simulates the recommended design, allowing the client to determine its suitability.

“There’s an over-abundance of choice when the client chooses to … take applications, data and infrastructure into the cloud,” said Kishore Durg, Accenture’s cloud lead and growth and strategy lead for technology services. “The choices cause them to ponder, ‘What is the right choice?’ This [tool] will help increase their confidence in going to the cloud.”

Accenture isn’t unique among consultancies in marketing services to aid customers’ cloud adoption. But industry watchers pointed to myNav’s simulation feature as a point of differentiation.

There are many companies that offer cloud service discovery, assessment and design services for a fee, said Stephen Elliot, an analyst with IDC. “But I don’t know of any other firm that will run a simulation,” he added.

Yugal Joshi, a vice president with Everest Group, cited myNav’s cloud architecture simulator as an intriguing feature. “Going forward, I expect it to further cover custom bespoke applications in addition to COTS [commercial off-the-shelf] platforms,” he said.

Joshi, who leads Everest Group’s digital, cloud and application services research practices, said most mature IT service providers have developed some type of platform to ease clients’ journey to the cloud. “The difference lies in the vision behind the IP, the quality of the IP, articulation and the business value it can provide to clients,” he noted.

Accenture cloud simulation’s potential benefits

Elliot said myNav’s simulation is interesting because it could help customers understand the outcome of a project in advance and whether that outcome will meet their expectations.


This could help Accenture close deals faster while fostering more productive conversations with IT buyers, Elliot said. “In any case, customers will have to trust that the underlying information and models are correct, and that the outcomes in the solution can be trusted,” he said.

Customers, meanwhile, could benefit from faster cloud rollouts.

“Where Accenture myNav is focusing is leveraging the expertise Accenture has gathered over many cloud engagements,” Joshi said. “This can potentially shorten the decision-making, business-casing and the eventual cloud migration for clients.”

Customers can get to the results faster, rather than spend weeks or, potentially, months in assessment and roadmap exercises, he said. Whether the Accenture cloud platform delivers the anticipated results, however, will only become evident when successful client adoption case studies are available, he cautioned.

Durg said cloud assessments can take eight to 12 weeks, depending on the scale of the project. The migration phase could span two months and require 80 or more people. The simulation aspect of myNav, he noted, lets clients visualize the deployment “before a single person is put on a project.”

Help wanted

Accenture’s myNav tool arrives at a time when the cloud has matured (the public cloud is more than a decade old) but not completely. The multiplicity of cloud technologies introduces uncertainty and sparks enterprise conversations around skill sets and adoption approaches.

“Despite cloud being around for quite some time now, it is still not a done deal,” Joshi said. “Clients need [a] lot of hand-holding and comfort before they can migrate to, and then leverage, cloud as an operating platform [rather] than an alternative hosting model.”

Elliot added, “The market is at a point where every cloud deployment is almost a snowflake. It’s the organizational, skills and process discussions that slow projects down.”

Go to Original Article
Author:

Accelerate IoMT on FHIR with new Microsoft OSS Connector

Microsoft is expanding the ecosystem of FHIR® for developers with a new tool to securely ingest, normalize, and persist Protected Health Information (PHI) from IoMT devices in the cloud.  

Continuing our commitment to remove the barriers of interoperability in healthcare, we are excited to expand our portfolio of Open Source Software (OSS) to support the HL7 FHIR (Fast Healthcare Interoperability Resources) standard. The new IoMT FHIR Connector for Azure is available today on GitHub.


An illustration of medical data being connected to FHIR with IoMT FHIR Connector for Azure

The Internet of Medical Things (IoMT) is the subset of IoT devices that capture and transmit patient health data. It represents one of the largest technology revolutions changing the way we deliver healthcare, but IoMT also presents a big challenge for data management.

Data from IoMT devices is often high frequency and high volume, and it can require sub-second measurements. Developers have to deal with a range of devices and schemas, from sensors worn on the body and ambient data capture devices to applications that document patient-reported outcomes, and even devices that only require the patient to be within a few meters of a sensor.

Traditional healthcare providers, innovators, and even pharma and life sciences researchers are ushering in a new era of healthcare that leverages machine learning and analytics from IoMT devices. Most see a future where devices monitoring patients in their daily lives will be used as a standard approach to deliver cost savings, improve patient visibility outside of the physician’s office, and to create new insights for patient care. Yet as new IoMT apps and solutions are developed, two consistent barriers are preventing broad scalability of these solutions: interoperability of IoMT device data with the rest of the healthcare data, such as clinical or pharmaceutical records, and the security and private exchange of protected health information (PHI) from these devices in the cloud.

In the last several years, the provider ecosystem began to embrace the open source standard of FHIR as a solution for interoperability. FHIR is rapidly becoming the preferred standard for exchanging and managing healthcare information in electronic format and has been most successful in the exchange of clinical health records. We wanted to expand the ecosystem and help developers working with IoMT devices to normalize their data output in FHIR. The robust, extensible data model of FHIR standardizes the semantics of healthcare data and defines standards for exchange, so it fuels interoperability across data systems. We imagined a world where data from multiple device inputs and clinical health data sets could be quickly normalized around FHIR and work together in just minutes, without the added cost and engineering work to manage custom configurations and integration with each and every device and app interface. We wanted to deliver foundational technology that developers could trust so they could focus on innovation. And today, we’re releasing the IoMT FHIR Connector for Azure.

This OSS release opens an exciting new horizon for healthcare data management. It provides a simple tool that can empower application developers and technical professionals working with data from devices to quickly ingest and transform that data into FHIR. By connecting to the Azure API for FHIR, developers can set up a robust and secure pipeline to manage data from IoMT devices in the cloud.

The IoMT FHIR Connector for Azure enables easy deployment in minutes, so developers can begin managing IoMT data in a FHIR Server that supports the latest R4 version of FHIR:

  • Rapid provisioning for ingestion of IoMT data and connectivity to a designated FHIR Server for secure, private, and compliant persistence of PHI data in the cloud
  • Normalization and integrated mapping to transform data to the HL7 FHIR R4 Standard
  • Seamless connectivity with Azure Stream Analytics to query and refine IoMT data in real-time
  • Simplified IoMT device management and the ability to scale through Azure IoT services (including Azure IoT Hub or Azure IoT Central)
  • Secure management of PHI data in the cloud: the IoMT FHIR Connector for Azure has been developed for HIPAA, HITRUST, and GDPR compliance and in full support of requirements for protected health information (PHI)
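As a rough illustration of the normalization step above, the Python sketch below maps an invented device payload to an HL7 FHIR R4 Observation resource. Only the Observation structure and the LOINC heart-rate code (8867-4) are standard; the payload shape and mapping function are assumptions for illustration, not the connector’s actual logic.

```python
import json

def to_fhir_observation(device_reading: dict) -> dict:
    # Map a hypothetical heart-rate reading to a minimal FHIR R4 Observation.
    # The input keys ("timestamp", "bpm") are invented for this sketch.
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",
                             "display": "Heart rate"}]},
        "effectiveDateTime": device_reading["timestamp"],
        "valueQuantity": {"value": device_reading["bpm"],
                          "unit": "beats/minute"},
    }

reading = {"timestamp": "2019-07-22T14:03:00Z", "bpm": 72}
print(json.dumps(to_fhir_observation(reading), indent=2))
```

In practice the connector drives this kind of transformation from mapping templates rather than hand-written code, but the end result is a standard FHIR resource like the one above.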

To enhance scale and connectivity with common patient-facing platforms that collect device data, we’ve also created a FHIR HealthKit framework that works with the IoMT FHIR Connector. If patients are managing data from multiple devices through the Apple Health application, a developer can use the IoMT FHIR Connector to quickly ingest data from all of the devices through the HealthKit API and export it to their FHIR server.

Playing with FHIR
The Microsoft Health engineering team is fully backing this open source project, but like all open source, we are excited to see it grow and improve based on the community’s feedback and contributions. Next week we’ll be joining developers around the world for FHIR Dev Days in Amsterdam to play with the new IoMT FHIR Connector for Azure. Learn more about the architecture of the IoMT FHIR Connector and how to contribute to the project on our GitHub page.


FHIR® is the registered trademark of HL7 and is used with the permission of HL7

Go to Original Article
Author: Microsoft News Center

Jamf Protect offers visibility, protection for macOS admins

MINNEAPOLIS — Compliance and behavioral analysis features in endpoint security tool Jamf Protect may lessen IT concerns about adopting macOS devices in the enterprise.

Jamf Protect takes a kernel-less, or kextless, approach to endpoint security. The tool, announced here at the 2019 Jamf Nation User Conference (JNUC), offers day-one support of new macOS security features, insight into compliance across an organization’s fleet of macOS devices and behavior-based malware detection.

As the use of macOS in the enterprise increases, the landscape of security threats evolves, said David McIntyre, CISO and CTO of Build America Mutual, a financial services company in New York.

“There were so many more threats for Mac than I thought, so we had to add something to fight them off,” McIntyre said.

The origin of Jamf Protect

The announcement of a Jamf endpoint protection tool aligns with the company’s acquisition of Digita Security, a macOS endpoint security management company, earlier this year.

A lack of security management is one of the biggest hindrances to macOS adoption in the enterprise, said Patrick Wardle, co-founder of Digita Security and current principal security researcher at Jamf. Most enterprise organizations that consider deploying macOS devices have existing Windows machines that they manage, and as such they have a Windows-focused desktop management infrastructure.

“In an ideal world, the single pane of glass for Windows and Mac endpoint management would work, but feature parity is largely missing for the macOS components of these tools,” Wardle said.

What can Jamf Protect do?

Jamf Protect offers kextless management; instead of kernel extensions, it builds on the EndpointSecurity framework that Apple provides. Kext files extend the macOS kernel and can bloat a desktop with additional code. With the release of macOS 10.15 Catalina, Apple deprecated kernel extensions to encourage a kextless approach.

“It’ll be huge for us if we can get rid of apps that use kext files,” said Tom O’Mahoney, a systems support analyst at Home Advisor in Golden, Colo. “Hopefully that’s the future of all desktop management.”


Some kernel extensions only work with certain versions of Mac OS X and can prevent users from booting desktops after OS updates. Admins must troubleshoot this issue by searching through all of the OS’ kext files and determining which non-Apple kext file is causing the issue, as Apple automatically trusts kext files that have its developer ID.

“The kextless approach prevents a lot of issues that our current endpoint manager has with macOS updates,” said Brian Bocklett, IT engineer at Intercontinental Exchange, a financial services company in Atlanta, Ga.

Jamf Protect will also provide visibility into an organization’s entire macOS fleet. Admins can view the status of macOS devices’ security configurations and settings in the Insights tab of Jamf Protect and compare this data to endpoint security standards published by the Center for Internet Security (CIS).

Jamf Protect’s Insights tab

Michael Stover, a desktop engineer at Home Advisor, which has roughly a 90-10 split between Windows and macOS devices, said that macOS visibility is a common compliance issue.

“The CIS benchmarks are probably the biggest selling point for us,” he said. “It would be game-changing to see all that configuration data in one place and compare it to the benchmarks.”

The behavioral analysis style of macOS threat detection also drew some interest from JNUC 2019 attendees. This approach to malware detection identifies actions that files or software try to execute and searches for anomalies. If Jamf Protect finds instances of a phantom click, a common malware tactic, it can alert IT professionals to the suspicious behavior.

Jamf Protect forgoes attempts to recognize specific instances of malware; instead it recognizes the actions of potentially malicious software. Jamf Protect also detects software with an unfamiliar developer ID attempting to access data, install additional software or take actions that could invite malware onto a desktop.

“You don’t need to have every bank robber’s photo to know that someone running into a bank with a ski mask and a weapon is trying to rob that bank,” McIntyre said. 

Still, some aspects of Jamf Protect gave macOS admins pause, including the behavior analysis style of threat detection. In a Q&A after the Jamf Protect session ended, several attendees asked if the tool provides a more proactive approach for threat prevention and if Jamf Protect had any way to prevent false positives before they happen.

Spotify, for example, includes the suspicious phantom clicks as part of its UI, so users running Spotify could generate false positives. IT professionals can add exceptions to the behavioral analysis with Spotify and other similar cases, but it’s difficult to anticipate every exception they’ll need to add.
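Conceptually, this kind of exception handling is an allowlist check layered on top of behavior rules. The Python sketch below (the application identifiers, action name and labels are invented for illustration; this is not Jamf Protect’s actual logic) shows how a suspicious action can be flagged unless the originating app is a known exception:

```python
# Apps permitted to perform otherwise-suspicious actions (hypothetical IDs).
ALLOWLISTED_APPS = {"com.spotify.client"}

def evaluate_event(app_id: str, action: str) -> str:
    # Flag a synthetic ("phantom") click unless the app is allowlisted;
    # all other actions pass through.
    if action == "synthetic_click" and app_id not in ALLOWLISTED_APPS:
        return "alert"
    return "allow"

print(evaluate_event("com.example.unknown", "synthetic_click"))  # -> alert
print(evaluate_event("com.spotify.client", "synthetic_click"))   # -> allow
```

The hard part, as attendees noted, is not the check itself but anticipating every legitimate app that belongs on the allowlist.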

Additionally, some organizations require security standards far stricter than those of the CIS, and Jamf Protect doesn’t allow organizations to add their own benchmarks or customize the CIS benchmarks.

Jamf Protect is generally available as a paid subscription service for commercial U.S. customers, according to Jamf.


Using the Sysinternals Sysmon tool to check DNS queries

If you’re an IT professional with experience troubleshooting the Windows OS, then you may have used a tool from the Sysinternals suite.

The Sysinternals utilities have been around since 1996 and remain among the most popular tools for handling various tasks in Windows, from remote execution (PsExec) to examining software that starts automatically (Autoruns). Of the many tools in the Sysinternals suite, Sysmon is one of the best at providing insight into what is happening in several areas of Windows. With the addition of the DNS query logging feature, I consider Sysmon an essential tool for administrators to monitor process creations and network connections.

Deploying Sysmon to clients

Chocolatey is the de facto package manager on Windows, due to its immense repository of Windows software and its integration with PowerShell and configuration management applications. Chocolatey has Sysmon and the rest of the Sysinternals suite on its public repository.

Chocolatey doesn’t install Sysmon on a machine; it just unzips the files needed to install the Sysmon service. With some modification to the Chocolatey installation script, we can change that.

C:\Chocotemp> cat .\chocolateyInstall.ps1

$packageName = 'sysmon'
$url = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)\files\Sysmon.zip"
$checksum = 'ed271b81eee546f492f25b10cdf99ffcff5670fa502fdf21151c18157b826f39'
$checksumType = 'sha256'
$url64 = "$url"
$checksum64 = "$checksum"
$checksumType64 = "$checksumType"
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"

Install-ChocolateyZipPackage -PackageName "$packageName" `
                             -Url "$url" `
                             -UnzipLocation "$toolsDir" `
                             -Url64bit "$url64" `
                             -Checksum "$checksum" `
                             -ChecksumType "$checksumType" `
                             -Checksum64 "$checksum64" `
                             -ChecksumType64 "$checksumType64"

& ($toolsDir + '\Sysmon64.exe') /accepteula /i /h * /n

The last line of the script calls for the execution of sysmon64.exe with the arguments /accepteula /i /h * /n, which accepts the end-user license agreement, installs the Sysmon service on the local system, uses all hash algorithms and sets up logging of network connections.

When I run the command choco install sysmon -y, Chocolatey unpacks the package and the modified script installs the Sysmon service.

Set up Chocolatey to fetch Sysmon and install the service.

Use configuration files to get what you want

Once you get familiar with using Sysmon, you will want to use it with configuration files, which help filter events that Sysmon logs to weed out unnecessary information.

The IT professional who uses the handle @SwiftOnSecurity on Twitter maintains one of the more popular customized Sysmon configuration files at this repository on GitHub. It contains a lot of valuable inclusions and exclusions for those times when you need a cleaner Sysmon log. For instance, there is a section for monitoring file creation processes that includes important file extensions, such as .ps1, .bat and .vbs.
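The rules in such a configuration file are plain XML. The fragment below is a minimal illustrative sketch in the spirit of those rules, logging file-creation events only for common scripting extensions; the schema version shown is an assumption and must match your installed Sysmon build, and the real SwiftOnSecurity file is far more extensive.

```xml
<!-- Illustrative fragment: include only file-creation events for
     scripting extensions commonly abused by malware. -->
<Sysmon schemaversion="4.21">
  <EventFiltering>
    <FileCreate onmatch="include">
      <TargetFilename condition="end with">.ps1</TargetFilename>
      <TargetFilename condition="end with">.bat</TargetFilename>
      <TargetFilename condition="end with">.vbs</TargetFilename>
    </FileCreate>
  </EventFiltering>
</Sysmon>
```

A configuration like this is applied with Sysmon64.exe -c <configfile>.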

Displaying the Sysmon event log

Video: Working with the Sysinternals suite

One of the great features of Sysmon is that it puts logs in a familiar location: Windows Event Viewer. The exact location is under Applications and Services Logs > Microsoft > Windows > Sysmon > Operational. Here, we can search and filter just like any other Windows event log. For instance, to search for a specific IP address in a network connection event, right-click on the Sysmon log and choose Find. This opens a dialog to search for keywords, in this case an IP address.

Logging DNS queries in Sysmon

A recent release of Sysmon added a new feature: logging DNS queries. To test it, I browsed to Google in Chrome and saw the query logged in Sysmon as the following:

Dns query:
RuleName:
UtcTime: 2019-06-13 19:38:50.327
ProcessGuid: {17847a67-4157-5d02-0000-001048c02000}
ProcessId: 11328
QueryName: www.google.com
QueryStatus: 0
QueryResults: 172.217.10.68;
Image: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe

This brings in the ability to track if a system attempts to contact malicious sites, which can be helpful when detecting malware.
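Because the event message is simple key/value text, it is also easy to post-process outside of Event Viewer. The Python sketch below (an illustrative parser I wrote for this article’s sample event, not part of Sysmon) splits a message like the one above into a dictionary:

```python
def parse_dns_event(message: str) -> dict:
    # Split each "Key: Value" line of a Sysmon Event ID 22 message on the
    # first colon. A production parser should tolerate missing fields.
    fields = {}
    for line in message.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

sample = r"""Dns query:
RuleName:
UtcTime: 2019-06-13 19:38:50.327
ProcessId: 11328
QueryName: www.google.com
QueryStatus: 0
QueryResults: 172.217.10.68;
Image: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"""

event = parse_dns_event(sample)
print(event["QueryName"])  # -> www.google.com
```

From here, the query names could be checked against a blocklist of known-malicious domains.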

Search the Sysmon event log with PowerShell

The Get-WinEvent cmdlet is one of the most useful troubleshooting cmdlets in PowerShell for its ability to run a search in the Windows event log. Because Sysmon gets logged to the Windows event log, we can search it with PowerShell.

In the command below, we run Get-WinEvent against a remote computer (WIN10-CBB) and use -FilterHashTable to look in the Sysmon log for DNS queries only. I then pipe that output to Select-Object so that I only retrieve the message in the event. (Event ID 22 is logged when a process makes a DNS query.)

Get-WinEvent -ComputerName win10-cbb -FilterHashTable @{LogName="Microsoft-Windows-Sysmon/Operational";ProviderName="Microsoft-Windows-Sysmon";ID=22} | Select-Object -ExpandProperty Message
Use the Get-WinEvent cmdlet to search the Sysmon event log with PowerShell.

The result is that I print all of the DNS queries for this machine.


PowerShell backup scripts: What are 3 essential best practices?

Although Windows PowerShell is not a backup tool per se, it can be used to create data backups. In fact, there are several PowerShell backup scripts available for download.

For those who may be considering backing up data using PowerShell, there are several best practices to keep in mind.

Don’t use internet scripting as-is

Even though there are some good PowerShell backup scripts available for download, none of those scripts should be used as-is. At the very least, you will probably need to modify the script to instruct PowerShell as to what data should and should not be backed up, and where to save the backup.

Additionally, a script might be designed to create a full backup every time that it is run, as opposed to creating an incremental or differential backup.
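If you do want incremental behavior, the underlying logic is straightforward: record when the last backup ran and copy only files modified since then. The Python sketch below illustrates the idea under stated assumptions (the paths, function name and marker-file name are invented; real PowerShell backup scripts implement the same pattern with cmdlets such as Copy-Item):

```python
import shutil
import time
from pathlib import Path

def incremental_backup(source: Path, dest: Path) -> list:
    # Read the timestamp of the previous run from a marker file in the
    # destination (0.0 means "never ran", so everything is copied).
    marker = dest / ".last_backup"
    last_run = float(marker.read_text()) if marker.exists() else 0.0
    copied = []
    for f in source.rglob("*"):
        if f.is_file() and f.stat().st_mtime > last_run:
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # preserves timestamps
            copied.append(target)
    marker.write_text(str(time.time()))
    return copied
```

Running the function twice without changing the source copies the files once and then nothing, which is exactly the incremental behavior a downloaded full-backup script would lack.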

Be mindful of permissions

Another best practice for PowerShell backup scripts is to be mindful of permissions. When a PowerShell script runs interactively, that script inherits the permissions of the user who executed the script. It is possible, however, to force PowerShell to get its permissions from a different set of credentials by using the Get-Credential cmdlet.

This brings up an important point. When a script includes the Get-Credential cmdlet, the script will cause Windows to display a prompt asking the user to enter a set of credentials. If a script is designed to be automated, then such behavior is not desirable.

PowerShell makes it possible to export a set of credentials to an encrypted file. The file can then be used to supply credentials to a script. Such a file must, however, be carefully protected. Otherwise, someone could make a copy of the file and use it to supply credentials to other PowerShell scripts.

Don’t rely on manual script execution

Finally, with PowerShell backup scripts, try not to rely on manual script execution. While there is nothing wrong with running a PowerShell script as a way of creating a one-off backup, a script is likely to be far more useful if it is configured to automatically run on a scheduled basis.

The Windows operating system includes a built-in tool for task automation, called Task Scheduler. By using Task Scheduler, you can automatically execute PowerShell backup scripts on a scheduled basis.


Naveego launches tool for analyzing data quality and health

Naveego has launched Accelerator, a tool that analyzes data accuracy and checks the health of multiple data sources.

Naveego Accelerator checks data health by auto-profiling data sources and performing a cross-system comparison, calculating the percentage of data with consistency errors that would affect a business’s operations and profitability.

The tool then delivers results and data health metrics to analysts within minutes, according to the vendor. Users can also have Accelerator set up data quality checks to investigate issues further.
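The cross-system comparison idea can be sketched in a few lines. The toy Python illustration below (the two “systems,” their field values and the function name are invented; Naveego’s actual checks are not public) computes the share of shared records whose values disagree:

```python
def consistency_error_rate(system_a: dict, system_b: dict) -> float:
    # Join the two sources on their record keys and report the percentage
    # of shared records whose values do not match.
    shared = set(system_a) & set(system_b)
    if not shared:
        return 0.0
    mismatches = sum(1 for k in shared if system_a[k] != system_b[k])
    return 100.0 * mismatches / len(shared)

crm = {"cust-1": "active", "cust-2": "closed", "cust-3": "active"}
billing = {"cust-1": "active", "cust-2": "active", "cust-3": "active"}
print(f"{consistency_error_rate(crm, billing):.1f}%")  # -> 33.3%
```

A production tool layers profiling, matching rules and lookup tables on top of this basic idea, but the metric it reports is of the same form.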

Data cleansing has long been an important part of data management for businesses. The process fixes or removes data that is wrong, incomplete, formatted incorrectly or duplicated. Data-heavy industries, such as banking, transportation or retail, can use data cleansing to examine data for issues by using rules, algorithms and lookup tables.

Naveego’s flagship product is the Complete Data Accuracy Platform, which aims to prevent issues stemming from inaccurate data. It is a hybrid, multi-cloud platform that manages and detects data accuracy issues.

Naveego has also expanded its Partner Success Program, partnering with Frontblade Systems, H2 Integrated Solutions, Mondelio and Narwal. The Partner Success Program provides a support package for partners that includes sales personnel, technical training and expertise, and marketing and promotional support.

As an emerging vendor in the data quality software market, Naveego must compete with market giants such as Informatica and IBM.

Informatica offers a portfolio of products designed for data quality assurance, including Axon Data Governance, Informatica Data Quality, Cloud Data Quality, Big Data Quality, Enterprise Data Catalog and Data as a Service. Informatica Data Quality ensures data is clean and ready to use, and it supports Microsoft Azure and Amazon Web Services.

IBM offers a handful of data quality products, as well, including InfoSphere Information Server for Data Quality, InfoSphere QualityStage, BigQuality and InfoSphere Information Analyzer. These products work to cleanse data, monitor data quality and provide data profiling and analysis to evaluate data for consistency and quality.


AWS expands its cloud cost optimization portfolio

AWS’ latest tool aims to help customers save money and optimize their workloads on the cloud platform, and it also expands AWS’ cost management capabilities to a broader base of customers.

As an opt-in feature, Amazon EC2 now scans customer usage over the previous two weeks and creates “resource optimization recommendations” for actions to address idle and underutilized instances. AWS defines idle instances as those with maximum CPU utilization below 1%, and underutilized instances as those with maximum CPU utilization between 1% and 40%, according to a blog post.

The system recommends customers shut off idle instances entirely. For underutilized ones, AWS simulates the same level of usage on a smaller instance in the same service tier and shows customers the potential cost savings of consolidating multiple instances into one. Customers get a summary of potential resource optimizations, including estimated monthly savings, and can also download lists of recommendations.
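The thresholds described above are easy to express in code. The following Python sketch classifies an instance by its peak CPU over the lookback window; the function name and output labels are my own for illustration, not part of the AWS API:

```python
def classify_instance(max_cpu_percent: float) -> str:
    # Apply the thresholds AWS describes: below 1% peak CPU is idle,
    # 1-40% is underutilized, anything higher is left alone.
    if max_cpu_percent < 1.0:
        return "idle: consider stopping"
    if max_cpu_percent <= 40.0:
        return "underutilized: consider downsizing"
    return "ok"

print(classify_instance(0.5))   # -> idle: consider stopping
print(classify_instance(25.0))  # -> underutilized: consider downsizing
```

The value of the AWS feature is not the classification itself but the follow-on step: simulating the workload on a smaller instance type and attaching an estimated dollar saving to the recommendation.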

At present, the recommendations cover major EC2 instance families but not GPU-based ones, according to the blog.

AWS advances cloud cost optimization

The new feature bears similarity at a glance to the likes of AWS Cost Explorer and AWS Trusted Advisor, but there are differences, and it should be welcomed by customers, analysts said.


“This aligns with one of the top pain points customers highlight as they start scaling up their cloud usage, which is that optimal service selection and configuration are not easy, and suboptimal configuration results in high costs as usage increases,” said Deepak Mohan, an analyst with IDC.


With resource optimization recommendations, AWS also presents cost management features to a broader set of customers, Mohan said.

Cost Explorer gives customers report-generation tools to examine their usage over time. It also includes forecasting capabilities, but Cost Explorer is more a means to examine the past.

Trusted Advisor has a broader remit, as it looks at not just cost issues but also security and governance, fault tolerance and performance improvements. The full feature set of Trusted Advisor is only available to customers with business and enterprise-level support plans, while the new capabilities are available to all customers at no charge, Mohan noted.

Moreover, Trusted Advisor alerts admins that an instance has a poor level of utilization, which might prompt them to investigate which instance might be better, said Owen Rogers, vice president of cloud transformation and digital economics at 451 Research. By comparison, these resource optimization recommendations tell admins which instance would be a better fit to keep the application performing well but also at a lower price point.


“This is a nice free feature that I think many customers will take advantage of,” he said. “After all, if you can save money without impacting deliverables, why wouldn’t you?”

AWS has not achieved anything revolutionary here. Microsoft and Google have similar tools for cloud cost management, as do third parties such as ParkMyCloud, VMware CloudHealth and OpsRamp, Rogers added.

But AWS’ complexity with regard to prices and SKUs has long been a sore spot for customers. Its latest move ties generally into remarks Amazon CTO Werner Vogels made in a recent interview with TechTarget.

“I think there’s a big role for automation,” Vogels said. “I think helping customers make better choices there through automation and tools is definitely a path we are looking for.”


Easily integrate Skype calls into your content with the new content creators feature | Skype Blogs

Skype is used worldwide as a tool for bringing callers into a variety of different podcasts, live streams, and TV shows. Today, we made it even simpler to bring your incoming audio and video calls to life with the Skype for content creators feature.

Building off the Skype TX appliance for professional studios, we built the feature directly into the desktop app, so podcasters, vloggers, and live streamers can bring Skype calls directly into their content without the need for expensive equipment, studio setup, or multiple crew members.

From a one-on-one audio call up to a four-person group video call, incoming Skype calls are available for you to integrate into your own content.

The feature uses NewTek NDI®. You need an NDI-enabled application or device to use Skype for content creators.

There are a number of NDI-enabled software and appliances to choose from,* including:

  • NewTek TriCaster®
  • XSplit
  • OBS with the NDI plugin
  • ProPresenter
  • Wirecast
  • vMix
  • Ecamm Live for Mac
  • OvrStream

You will be able to edit, brand, and distribute your Skype content, which can then be sent to a group of friends, uploaded as a podcast or vlog, or live streamed to an audience of millions using platforms such as Facebook, YouTube, Twitch, and LinkedIn.

Skype for content creators is now available on the latest version of Skype for Windows and Mac. Visit Skype for content creators to learn more. We would love to hear from you and see what you have created using this feature; email us at [email protected].

*Third-party applications have not been checked, verified, certified, or otherwise approved or endorsed by Skype. Applications may be subject to the third-party provider’s own terms and privacy policy. Skype does not control and is not responsible for the download, installation, pricing, quality, performance, availability, support, or terms and conditions of purchase of third-party applications.

News roundup: Manage employee resource groups and more

This week’s news roundup features a tool to manage employee resource groups, a roadmap for a wellness coaching technology program and an AI-powered platform to match employees with the right insurance options.

Ready, set, engage

Espresa, which makes a platform for automating employee programs, has added new features that can track and manage employee resource groups.

Employee resource groups, which are organically formed clubs of people with shared interests, are increasingly popular in U.S. corporations. A 2016 Bentley University study found that 90% of Fortune 500 companies have employee resource groups and that 8.5% of American employees participate in at least one.

At a time when employee retention has become more critical, thanks to a very tight labor market, employee resource groups can boost employee engagement. But their grassroots nature makes them hard for both employees and HR departments to track and manage.

In many companies today, employee resource groups are managed with a cobbled-together collection of wiki pages, Google Docs and Evite invitations, said Raghavan Menon, CTO of Espresa, based in Palo Alto, Calif. And HR departments often have no idea what’s going on, when it’s happening or who is in charge.

“Today, nothing allows the employer or company to actually promote [employee resource groups] and then decentralize them to allow employees to manage and run the groups with light oversight from HR,” Menon explained.

Espresa’s new features give HR departments a web-based way to keep track of the employee resource groups, while giving the employees a matching mobile app to help them run the efforts.

“When employees are running things, they’re not going to use it if it’s an old-style enterprise app,” he said. “They want consumer-grade user experience on a mobile app.”

With Espresa, HR staff can also measure employee resource groups’ success factors, including participation and volunteer activity levels. That information can then be used to make decisions about company funding or a rewards program, Menon said.

An alternate health coach

Is it possible to help an employee with a chronic condition feel supported and empowered to make lifestyle changes using high-tech health coaching and wearable health technology? According to John Moore, M.D., medical director at San Francisco-based Fitbit, the answer is yes.

During World Congress’ 10th annual Virtual Health Care Summit in Boston, Moore outlined a health coaching roadmap designed to help HR departments and employers meet workers where they are.

“Hey, we know the healthcare experience can be really tough, and it’s hard to manage with other priorities,” he said. “We know you have a life.”

Using a health coach, wearables or a mobile phone — and possibly even looping in family and friends — an employee with a health condition is walked through the steps of setting micro-goals over a two-week period. Reminders, support and encouragement are delivered via a wearable or a phone and can include a real or virtual coach, or even a family intervention, if necessary.

The idea, Moore stressed, is to enable an HR wellness benefits program to give ownership of lifestyle changes back to the employee, while at the same time making the goals sufficiently small to be doable.

“This is different than [typical] health coaching in the workplace,” he said. “This is going to be a much richer interaction on a daily basis. And because it’s facilitated by technology, it’s more scalable and more cost-effective. We’ll be able to collect information that spans from blood pressure, to weight, to steps, to glucose activity and sleep data to get the whole picture of the individual so they can understand themselves better.”

This is an in-the-works offering from Fitbit, and it will not be limited to Fitbit-branded devices. The platform will be based on technology Fitbit acquired from Twine Health in February 2018. Moore outlined a vision of interoperability that could include everything from the pharmacy to a glucose meter to, eventually, an electronic health record system. This could work in tandem with a company's on-site or near-site health clinic and expand from there, he said.

“Technology can help break down barriers that have existed in traditional healthcare. Right now, interactions are so widely spaced, you can’t put coaches in the office every day or every week. There needs to be a way to leverage technology,” he said. “We can’t just give people an app with an AI chatbot and expect it to magically help them. The human element is still a very important piece, and we can use technology to make that human superhuman.”

HR on the go

StaffConnect has released version 2.2 of its mobile engagement platform, which includes new options for customers to create portals for easier access to payroll, training and other HR information and forms. The StaffConnect service can be used by workers in the office and by what the company calls “nondesk employees,” or NDEs.

The company’s 2018 Employee Engagement Survey showed more than one-third of companies have at least 50% of their workforce as NDEs and highlighted the challenges of keeping all employees equally informed and engaged. The survey indicated the vast majority of companies continue to use either email (almost 80%) or an intranet (almost 49%) to communicate with employees, while just 2% of companies reach out via mobile devices.

The company is also now offering a REST API to make it easier to integrate its platform into existing HR services, and it has added custom branding and expanded quiz options to boost customization.
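StaffConnect has not published endpoint details here, so as a purely hypothetical sketch, an HR integration against a REST API of this kind might construct authenticated JSON requests like the following. The base URL, `/portals` path, and payload fields are illustrative placeholders, not StaffConnect's actual API.

```python
import json
import urllib.request

# Hypothetical base URL; a real integration would use the vendor-supplied one.
BASE_URL = "https://api.example-staffconnect.test/v2"

def build_create_portal_request(token: str, name: str, category: str) -> urllib.request.Request:
    """Construct (but do not send) a POST request to create an HR portal entry."""
    payload = json.dumps({"name": name, "category": category}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/portals",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_create_portal_request("demo-token", "Payroll", "payroll")
print(req.get_method(), req.full_url)
```

Sending the request (for example with `urllib.request.urlopen`) is omitted, since the endpoint is fictional; the point is the shape of a token-authenticated JSON integration.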

StaffConnect’s new version also offers additional security options and features, including GDPR compliance and protection for data at rest.

Netflix launches tool for monitoring AWS credentials

LAS VEGAS — A new open source tool looks to make monitoring AWS credentials easier and more effective for large organizations.

The tool, dubbed Trailblazer, was introduced during a session at Black Hat USA 2018 on Wednesday by William Bengtson, senior security engineer at Netflix, based in Los Gatos, Calif. During his session, Bengtson discussed how his security team took a different approach to reviewing AWS data in order to find signs of potentially compromised credentials.

Bengtson said Netflix's methodology for monitoring AWS credentials was fairly simple and relied heavily on AWS' own CloudTrail logging service. However, Netflix couldn't rely solely on CloudTrail to effectively monitor credential activity; Bengtson said a different approach was required because of the sheer size of Netflix's cloud environment, which is 100% AWS.

“At Netflix, we have hundreds of thousands of servers. They change constantly, and there are 4,000 or so deployments every day,” Bengtson told the audience. “I really wanted to know when a credential was being used outside of Netflix, not just AWS.”

That was crucial, Bengtson explained, because an unauthorized user could set up infrastructure within AWS, obtain a user’s AWS credentials and then log in using those credentials in order to “fly under the radar.”

However, monitoring credentials for usage outside of a specific corporate environment is difficult, he explained, because of the sheer volume of data regarding API calls. An organization with a cloud environment the size of Netflix’s could run into challenges with pagination for the data, as well as rate limiting for API calls — which AWS has put in place to prevent denial-of-service attacks.

“It can take up to an hour to describe a production environment due to our size,” he said.
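The pagination and rate-limiting challenge Bengtson describes is generic to AWS list/describe APIs: results arrive a page at a time, and callers that iterate too fast get throttled. As a minimal sketch of the pattern, assuming a stand-in `fetch_page` function rather than any real AWS SDK call:

```python
import time

def fetch_page(token):
    """Stand-in for a paged API call: returns (items, next_page_token).
    A real client would call the cloud API here and could be throttled."""
    pages = {None: (["i-aaa", "i-bbb"], "t1"), "t1": (["i-ccc"], None)}
    return pages[token]

def list_all(delay: float = 0.0):
    """Walk every page of results, pausing between calls to respect rate limits."""
    items, token = [], None
    while True:
        page, token = fetch_page(token)
        items.extend(page)
        if token is None:  # no more pages
            return items
        time.sleep(delay)  # back off between calls to avoid throttling

print(list_all())  # → ['i-aaa', 'i-bbb', 'i-ccc']
```

At Netflix's scale, the delay and page count multiply into the hour-long enumeration Bengtson mentions, which is why Trailblazer avoids repeatedly describing the environment on every check.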

To get around those obstacles, Bengtson and his team crafted a new methodology that didn’t require machine learning or any complex technology, but rather a “strong but reasonable assumption” about a crucial piece of data.

“The first call wins,” he explained: when a temporary AWS credential makes its first API call, the source IP address of that call is recorded as the credential's origin. “As we see the first use of that temporary [session] credential, we're going to grab that IP address and log it.”

The methodology, which is built into the Trailblazer tool, collects the first API call IP address and other related AWS data, such as the instance ID and assumed role records. The tool, which doesn’t require prior knowledge of an organization’s IP allocation in AWS, can quickly determine whether the calls for those AWS credentials are coming from outside the organization’s environment.
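The "first call wins" idea can be sketched in a few lines. The field names below mirror CloudTrail's JSON event structure (`userIdentity.accessKeyId`, `sourceIPAddress`), but the events are hand-made samples, not real CloudTrail output, and this is an illustration of the principle rather than Trailblazer's actual implementation:

```python
# Map each temporary access key ID to the first source IP seen using it.
first_seen: dict[str, str] = {}

def check_event(event: dict) -> bool:
    """Return True if the event looks anomalous: the credential is being
    used from an IP other than the one that made its first call."""
    key = event["userIdentity"]["accessKeyId"]
    ip = event["sourceIPAddress"]
    home_ip = first_seen.setdefault(key, ip)  # first call wins
    return ip != home_ip

events = [
    {"userIdentity": {"accessKeyId": "ASIAEXAMPLE1"}, "sourceIPAddress": "10.0.1.5"},
    {"userIdentity": {"accessKeyId": "ASIAEXAMPLE1"}, "sourceIPAddress": "10.0.1.5"},
    # Same credential, different source IP: possible exfiltrated credential.
    {"userIdentity": {"accessKeyId": "ASIAEXAMPLE1"}, "sourceIPAddress": "203.0.113.9"},
]

flags = [check_event(e) for e in events]
print(flags)  # → [False, False, True]
```

The appeal of the approach is exactly what Bengtson noted: no machine learning and no prior knowledge of the organization's IP allocation, just one strong assumption about the first observed call.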

“[Trailblazer] will enumerate all of your API calls in your environment and associate that log with what is actually logged in CloudTrail,” Bengtson said. “Not only are you seeing that it’s logged, you’re seeing what it’s logged as.”

Bengtson said the only requirement for using Trailblazer is a high level of familiarity with AWS — specifically how AssumeRole calls are logged. The tool is currently available on GitHub.