Tag Archives: machine

Biometrics firm fights monitoring overload with log analytics

Log analytics tools with machine learning capabilities have helped one biometrics startup keep pace with increasingly complex application monitoring as it embraces continuous deployment and microservices.

BioCatch sought a new log analytics tool in late 2017. At the time, the Tel Aviv, Israel, firm employed a handful of workers and had just refactored a monolithic Windows application into microservices written in Python. The refactored app, which captures biometric data on how end users interact with web and mobile interfaces for fraud detection, required careful monitoring to ensure it still worked properly. Almost immediately after it completed the refactoring, BioCatch found the process had tripled the number of logs it shipped to a self-managed Elasticsearch repository.

“In the beginning, we had almost nothing,” said Tamir Amram, operations group lead for BioCatch, of the company’s early logging habits. “And, then, we started [having to ship] everything.”

The team found it could no longer manage its own Elasticsearch back end as that log data grew. Its IT infrastructure also mushroomed into 10 Kubernetes clusters distributed globally on Microsoft Azure. Each cluster hosts multiple sets of 20 microservices that provide multi-tenant security for each of its customers.

At that point, BioCatch had a bigger problem. It had to not only collect, but also analyze all its log data to determine the root cause of application issues. This became too complex to do manually. BioCatch turned to log analytics vendor Coralogix as a potential answer to the problem.

Log analytics tools flourish under microservices

Coralogix, founded in 2015, initially built its log management system on top of a hosted Elasticsearch service but couldn’t generate enough interest from customers.

“It did not go well,” Coralogix CEO Ariel Assaraf recalled of those early years for the business. “It was early in log analytics’ and log management’s appeal to the mainstream, and customers already had ‘good enough’ solutions.”

While the company still hosts Elasticsearch for its customers, based on the Amazon Open Distro for Elasticsearch, it refocused on log analytics, developed machine learning algorithms and monitoring dashboards, and relaunched in 2017.

That year coincided with the emergence of containers and microservices in enterprise IT shops as they sought to refactor monolithic applications with new design patterns. The timing proved fortuitous; since Coralogix’s relaunch in 2017, it has gained more than 1,200 paying customers, according to Assaraf, at an average deal size of $50,000 a year.

Coralogix isn’t alone among DevOps monitoring vendors reaping the spoils of demand for microservices monitoring tools — not just in log analytics, but AI- and machine learning-driven infrastructure management, or AIOps, as well. These include application performance management (APM) vendors, such as New Relic, Datadog, AppDynamics and Dynatrace, along with Coralogix log analytics competitors Elastic Inc. and Splunk.


In fact, analyst firm 451 Research predicted that the market for Kubernetes monitoring tools will dwarf the market for Kubernetes management products by 2022 as IT pros move from the initial phases of deploying microservices into “day two” management problems. Even more recently, log analytics tools have begun to play an increasing role in IT security operations and DevSecOps.

The newly relaunched Coralogix caught the eye of BioCatch in part because of its partnership with the firm’s preferred cloud vendor, Microsoft Azure. It was also easy to set up and redirect logs from the firm’s existing Elasticsearch instance, and the Coralogix-managed Elasticsearch service eliminated log management overhead for the BioCatch team.

“We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs,” Amram said. “Now, more than half of the company works with Coralogix, and more than 80% of those who work with it use it on a daily basis.”

Log analytics correlate app changes to errors

The BioCatch DevOps team adds tags to each application update that direct log data into Coralogix. Then, the software monitors application releases as they’re rolled out in a canary model for multiple tiers of customers. BioCatch rolls out its first application updates to what it calls “ring zero,” a group of early adopters; next, to “ring one;” and so on, according to each customer group’s appetite for risk. All those changes to multiple tiers and groups of microservices result in an average of 1.5 TB of logs shipped per day.

The version tags fed through the CI/CD pipeline to Coralogix enable the tool to identify issues and correlate them with application changes made by BioCatch developers. It also identifies anomalous patterns in infrastructure behavior post-release, which can catch problems that don’t appear immediately.
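
In code, such tagging can be as simple as stamping every structured log record with the release version the CI/CD pipeline injects. The Python sketch below is a generic illustration, not BioCatch’s setup or Coralogix’s actual SDK; the APP_VERSION environment variable and the field names are assumptions.

```python
import json
import logging
import os

# APP_VERSION stands in for whatever tag the CI/CD pipeline injects at deploy
# time; the JSON field names below are illustrative, not a vendor schema.
APP_VERSION = os.environ.get("APP_VERSION", "unknown")

class VersionTagFormatter(logging.Formatter):
    """Emit JSON log lines carrying the release version, so a log analytics
    backend can correlate errors with the release that introduced them."""

    def format(self, record):
        return json.dumps({
            "message": record.getMessage(),
            "level": record.levelname,
            "logger": record.name,
            "version": APP_VERSION,  # the tag the analytics tool groups by
        })

handler = logging.StreamHandler()
handler.setFormatter(VersionTagFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("item enqueued")  # -> {"message": "item enqueued", ..., "version": "1.4.2"}
```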

[Image: Coralogix log analytics uses version tags to correlate application issues with specific developer changes.]

“Every so often, an issue will appear a day later because we usually release at off-peak times,” BioCatch’s Amram said. “For example, it can say, ‘sending items to this queue is 20 times slower than usual,’ which shows the developer why the queue is filling up too quickly and saturating the system.”
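
A check like the one Amram describes boils down to comparing a recent window of measurements against a historical baseline. The sketch below is a minimal, hand-rolled illustration of that idea, not Coralogix’s algorithm; the threshold factor and the latency numbers are assumptions.

```python
from statistics import mean

def queue_latency_alert(recent_secs, baseline_secs, factor=20.0):
    """Return an alert string when the recent average per-item latency is at
    least `factor` times the historical baseline (the '20 times slower' case)."""
    current, baseline = mean(recent_secs), mean(baseline_secs)
    if baseline > 0 and current / baseline >= factor:
        return f"sending items to this queue is {current / baseline:.0f}x slower than usual"
    return None

# Toy numbers: baseline ~0.05 s per item, recent window ~1.2 s per item.
print(queue_latency_alert([1.1, 1.3, 1.2], [0.05, 0.04, 0.06]))
```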

BioCatch uses Coralogix alongside APM tools from Datadog that analyze application telemetry and metrics. Often, alerts in Datadog prompt BioCatch IT ops pros to consult Coralogix log analytics dashboards. Datadog also began offering log analytics in 2018 but didn’t include this feature when BioCatch first began talks with Coralogix.

Coralogix also maintains its place at BioCatch because its interfaces are easy to work with for all members of the IT team, Amram said. This has grown to include not only developers and IT ops, but solutions engineers who use the tool to demonstrate to prospective customers how the firm does troubleshooting to maintain its service-level agreements.

“We don’t have to search in Kibana [Elasticsearch’s visualization layer] and say, ‘give me all the errors,'” Amram said. “Coralogix recognizes patterns, and if the pattern breaks, we get an alert and can immediately react.”


For Sale – Mac Pro 2009 (4,1) – with Mojave

Spare machine has to go in order to make space…

Mac Pro 2009 (4,1) with firmware updated as (5,1) compatible
– Xeon Quad-core – 2.66GHz (to be confirmed)
– 32GB RAM (4 sticks)
– 500GB SATA HDD, brackets in all bays
– Optical drive
– ATI 5770 (up to El Capitan)
– nVidia GT 630 (1GB) (Mojave)

Still outruns the ‘Darth Vader’ MacPro (2013). With a (SATA) SSD (not included), the machine flies!

GT630 can stay for El Capitan (hence both GPU cards). Working with macOS Mojave; reportedly compatible with Catalina (but not tested).

5770 must not be present for Mojave (macOS limitation)

No monitor/display. No box. Expect scuff marks on external casing.

£350 collected from my office in the Science Park (Milton Road)


For Sale – Custom loop water cooled pc – i9 9900k, 2080ti, 32gb 3200mhz ram, 2tb nvme

Selling as it only seems to be used as my work machine rather than for playing games and creating content as intended.

Built by myself in November 2019, so the machine is only a few months old.

Only the best components were chosen when this was built.

Machine runs at 5GHz on all cores, and the GPU never sees above 50°C.

Motherboard – Asus Maximus Code

CPU – Intel i9 9900K with EK water block

GPU – MSI Ventus OC 2080 Ti with EK water block and nickel backplate

RAM – 32GB G.Skill Royal Silver 3200MHz

NVMe – 1TB WD Black

NVMe – 1TB Sabrent

PSU – Corsair 750 modular

EK nickel fittings

EK D5 standalone pump

Phanteks reservoir

6 Thermaltake Riing Plus fans with controllers

2 x 360mm x 45mm Alphacool radiators

Thermaltake acrylic tubes and liquid

Custom cables

I am based in Tadworth Surrey and the machine can be seen and inspected in person.


Deploy and configure WSUS 2019 for Windows patching needs


In this video, I want to show you how to deploy the Windows Server Update Services, or WSUS, in Windows Server 2019.

I’m logged into a Windows Server 2019 machine that is domain-joined. Open Server Manager and click on Manage, then go to Add Roles and Features to launch the wizard.

Click Next and choose the Role-based or feature-based installation option and click Next. Select your server from the server pool and click Next to choose the roles to install.

Scroll down and choose the Windows Server Update Services role, then click Add Features. There are no additional features needed, so click Next.

At the WSUS screen: If you need SQL Server connectivity, you can enable it here. I’m going to leave that checkbox empty and click Next.

I’m prompted to choose a location to store the updates that get downloaded. I’m going to store the updates in a folder that I created earlier called C:\Updates. Click Next to go to the confirmation screen. Everything looks good here, so I’ll click Install.

After a few minutes, the installation process completes. Click Close.

The next thing that we need to do is to configure WSUS for use. Go to the notifications icon and click on that. We have some post-deployment configuration tasks that need to be performed, so click on Launch Post-Installation tasks. After a couple of minutes, the notification icon changes to a number. If I click on that, then we can see the post-deployment configuration was a success.

Close this out and click on Tools, and then click on Windows Server Update Services to open the console. Select the WSUS server and expand that to see we have a number of nodes underneath the server. One of the nodes is Options. Click on Options and then click on WSUS Server Configuration Wizard.

Click Next on the Before You Begin screen and then I’m taken to the Microsoft Update Improvement Program screen that asks if I want to join the program. Deselect that checkbox and click Next.

Next, we choose an upstream server. I can synchronize updates either from another Windows Server Update Services server or from Microsoft Update. This is the only WSUS server in my organization, so I’m going to synchronize from Microsoft Update, which is the default selection, and click Next.

I’m prompted to specify my proxy server. I don’t use a proxy server in my organization, so I’m going to leave that blank and click Next.

Click the Start Connecting button. It can take several minutes for WSUS to connect to the upstream update server, but the process is finally finished.

Now the wizard asks to choose a language. Since English is the only language spoken in my organization, I’m going to choose the option to download updates in English and click Next.

I’m asked which products I want to download updates for — I’m going to choose all products. I’ll go ahead and click Next.

Now I’m asked to choose the classifications that I want to download. In this case, I’m just going to go with the defaults [Critical Updates, Definition Updates, Security Updates and Upgrades]. I’ll click Next.

I’m prompted to choose a synchronization schedule. In a production organization, you’re probably going to want to synchronize automatically. I’m going to leave this set to synchronize manually. I’ll go ahead and click Next.

I’m taken to the Finished screen. At this point, we’re all done, aside from synchronizing updates, which can take quite a while to complete. If you’d like to start the initial synchronization process, now all you have to do is select the Begin Initial Synchronization checkbox and then click Next, followed by Finish.

That’s how you deploy and configure Windows Server Update Services.
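
For repeat deployments, the same steps can be scripted rather than clicked through. The sketch below drives PowerShell from Python; Install-WindowsFeature with the UpdateServices role and wsusutil.exe postinstall are the standard commands behind the wizard and the post-installation tasks, but verify the wsusutil path on your own build.

```python
import subprocess

def run_ps(command: str) -> None:
    """Run a PowerShell command and raise CalledProcessError on failure."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )

# Install the WSUS role backed by the Windows Internal Database
# (the SQL Server checkbox left empty in the video).
run_ps("Install-WindowsFeature -Name UpdateServices -IncludeManagementTools")

# Run the post-installation tasks, pointing content storage at C:\Updates,
# the folder created ahead of time in the walkthrough.
run_ps(r'& "C:\Program Files\Update Services\Tools\wsusutil.exe" '
       r"postinstall CONTENT_DIR=C:\Updates")
```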



For Sale – £1700 – Alienware 17 R4 Laptop and Graphics Amp – i7, GTX1080, 16GB, 2x SSD, QHD 1440p 120Hz G-Sync

Selling my ‘beloved’ Alienware 17 R4.

Great machine, runs anything you can throw at it. Outstanding specification, including the screen.
Never overclocked. I have been the sole owner from new. Great condition, no damage etc., really looked after this.

Extras:

  • Alienware Graphics Amplifier (external GPU box) also included, empty, so you could upgrade the GPU if you wanted.
  • Alienware branded neoprene carry case and original box included.

Any questions, please ask.

Looking for: £1,700 (reduced from £1,800)

Techradar review link here:

Due to the price and weight, I am looking for collection, and payment via bank transfer.

Specs below:
CPU: 2.9GHz Intel Core i7-7820HK (quad-core, 8MB cache, overclocking up to 4.4GHz)
Graphics: Nvidia GeForce GTX 1080 (8GB GDDR5X VRAM); Intel HD Graphics 630
RAM: 16GB DDR4 (2,400MHz)
Screen: 17.3-inch QHD (2,560 x 1,440), 120Hz, TN anti-glare at 400 nits; Nvidia G-Sync; Tobii eye-tracking
Storage: 512GB SSD (M.2 NVME), 1TB SSD WD Blue (M.2 SATA), 1TB HDD (7,200 RPM)
Ports: 1 x USB 3.0 port, 1 x USB-C port, 1 x USB-C Thunderbolt 3 port, HDMI 2.0, Mini-DisplayPort, Ethernet, Graphics Amplifier Port, headphone jack, microphone jack, Noble Lock
Connectivity: Killer 1435 802.11ac 2×2 Wi-Fi; Bluetooth 4.1
Camera: Alienware FHD camera with Tobii IR eye-tracking
Weight: 9.74 pounds (4.42kg)
Size: 16.7 x 13.1 x 1.18 inches (42.4 x 33.3 x 3cm; W x D x H)


AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

[Image: AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.]

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.
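
As a toy illustration of the concept, the sketch below runs a crude “twin” through randomized collision scenarios. Every attribute, formula and threshold here is invented for illustration; the NFL and AWS models are far richer and are not public.

```python
import random
from dataclasses import dataclass

@dataclass
class PlayerTwin:
    """Toy digital twin: a few fields standing in for the rich real-time and
    historical data a production twin would aggregate."""
    mass_kg: float
    speed_ms: float
    helmet_model: str

def simulate_impacts(twin: PlayerTwin, trials: int = 10_000) -> float:
    """Estimate how often a randomized collision exceeds a made-up severity
    threshold -- risk-free, because only the twin takes the hits."""
    over = 0
    for _ in range(trials):
        closing_speed = twin.speed_ms + random.uniform(0.0, 9.0)  # opponent speed
        severity = 0.5 * twin.mass_kg * closing_speed ** 2        # kinetic energy proxy
        if severity > 12_000:  # invented threshold, not a real safety standard
            over += 1
    return over / trials

twin = PlayerTwin(mass_kg=110.0, speed_ms=8.0, helmet_model="model-a")
print(f"simulated impacts over threshold: {simulate_impacts(twin):.1%}")
```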

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into countless scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety


New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level football players the researchers looked at, 110 of them had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules, and a recent $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.


For Sale – Gaming Pc RTX 2080Ti, I7 9700, 32GB Dominator Platinum

What brand is the machine please?
Purchased new?
Purchased from?
Warranty remaining?
Optical drive Blu-ray?
Any bundled software?
Ancillaries or just PC unit?
Price paid?

thanks
Brian


How to achieve explainability in AI models

When machine learning models deliver problematic results, it can often happen in ways that humans can’t make sense of, and this becomes dangerous when the model’s limitations aren’t understood, particularly for high-stakes decisions. Without straightforward and simple tools that highlight explainability in AI models, organizations will continue to struggle to implement AI algorithms. Explainable AI refers to the process of making it easier for humans to understand how a given model generates its results, and to planning for cases when those results should be second-guessed.

AI developers need to incorporate explainability techniques into their workflows as part of their overall modeling operations. AI explainability can refer to the process of creating algorithms for teasing apart how black box models deliver results or the process of translating these results to different types of people. Data science managers working on explainable AI should keep tabs on the data used in models, strike a balance between accuracy and explainability, and focus on the end user.

Opening the black box

Traditional rule-based AI systems had explainability built in, since humans typically handcrafted the rules that mapped inputs to outputs. But deep learning techniques that use semi-autonomous neural network models can’t show how a model’s results map to an intended goal.

Researchers are working to build learning algorithms that generate explainable AI systems from data. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

“This results in black box ML techniques, which may generate accurate AI systems, but it’s harder to trust them since we don’t know how these systems’ outputs are generated,” he said. 

AI explainability often describes post-hoc processes that attempt to explain the behavior of AI systems, rather than alter their structure. Other machine learning model properties like accuracy are straightforward to measure, but there are no corresponding simple metrics for explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It’s also important for practitioners to understand the assumptions and limitations of the techniques they use for implementing explainability.

“While it is better to have some transparency rather than none, we’ve seen teams fool themselves into a false sense of security by wiring in an off-the-shelf technique without understanding how the technique works,” Taly said.

Start with the data

The results of a machine learning model could be explained by the training data itself, or by how a neural network interprets a dataset. Machine learning models often start with data labeled by humans. Data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

“What a particular neural network derives from a dataset are patterns that it finds that may or may not be obvious to humans,” said Aaron Edell, director of applied AI at AI platform Veritone.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model trained to identify sheep might be trained only on pictures of farms, causing it to misinterpret sheep in other settings, or to mistake white clouds in farm pictures for sheep. Facial recognition software can be trained on company faces, but if those faces are mostly male or white, the data is biased.

One good practice is to train machine learning models on data that should be indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identifies how long Jennifer Aniston appears in every episode of Friends should be trained on frames of actual episodes rather than Google image search results for ‘Jennifer Aniston.’ In a similar vein, it’s OK to train models on publicly available datasets, but generic pre-trained models as a service will be harder to explain and change if necessary.
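
One hedged way to check that training data really is indistinguishable from production data is to compare feature distributions between the two. The sketch below uses SciPy’s two-sample Kolmogorov-Smirnov test; the significance threshold and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: np.ndarray, live: np.ndarray, alpha: float = 0.01):
    """Return indices of features whose training and live distributions differ
    per a two-sample KS test; hits mean the model is being asked to score data
    unlike anything it saw in training."""
    drifted = []
    for j in range(train.shape[1]):
        _, p_value = ks_2samp(train[:, j], live[:, j])
        if p_value < alpha:
            drifted.append(j)
    return drifted

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 3))
live = train.copy()
live[:, 2] += 1.5  # simulate drift in one feature
print(drifted_features(train, live))  # -> [2]
```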

Balancing explainability, accuracy and risk

The real problem with implementing explainability in AI is that there are major trade-offs between accuracy, transparency and risk in different types of AI models, said Matthew Nolan, senior director of decision sciences at Pegasystems. More opaque models may be more accurate, but fail the explainability test. Other types of models like decision trees and Bayesian networks are considered more transparent but are less powerful and complex.

“These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but this sometimes will sacrifice performance,” said Nolan.

Focusing on transparency can cost a business, but turning to more opaque models can leave a model unchecked and might expose the consumer, customer and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets. These settings can make it easier for companies to adjust the acceptable opaqueness or transparency thresholds used in their AI models, and they give enterprises the control to tune models based on their needs or corporate governance policy, so they can manage risk, maintain regulatory compliance and offer customers a differentiated experience in a responsible way.

Data scientists should also identify when the complexity of new models is getting in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said there are often simpler models available for attaining the same performance, but machine learning practitioners have a tendency to reach for fancier, more advanced models.
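
The trade-off is easy to see with scikit-learn on a public dataset: a shallow decision tree whose full logic can be printed, next to a boosted ensemble that is typically somewhat more accurate but opaque. This is a minimal sketch of the pattern Nolan and Huang describe, not a benchmark.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree: inspectable rule by rule, at some cost in accuracy.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# A boosted ensemble: usually more accurate, much harder to explain.
boosted = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("tree accuracy:   ", tree.score(X_te, y_te))
print("boosted accuracy:", boosted.score(X_te, y_te))
# The transparent model's entire decision logic fits on one screen:
print(export_text(tree))
```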

Focus on the user

Explainability means different things to a highly skilled data scientist compared to a call center worker who may need to make decisions based on an explanation. The task of implementing explainable AI is not just to foster trust in explanations but also to help end users make decisions, said Ankur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission to plan accordingly.

Teredesai said there is still no general guideline for explainability, particularly for different types of users. It’s also challenging to integrate these explanations into machine learning and end-user workflows. End users typically need explanations framed as possible actions to take based on a prediction, rather than just reasons, and this requires striking the right balance between prediction fidelity and explanation fidelity.

There are a variety of tools for implementing explainability on top of machine learning models which generate visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, vice president of product management at Aible, an automated machine learning platform. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap. Another good practice is to directly use humans in the loop to evaluate your explanations to see if they make sense to a human, said Daniel Fagnan, director of applied science on the Zillow Offers Analytics team. This can help lead to more accurate models through key improvements including model selection and feature engineering.
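
As a concrete example of pairing a post-hoc technique with a natural-language rendering, the sketch below computes model-agnostic permutation importances with scikit-learn and summarizes the top drivers in a sentence. The phrasing and the choice of dataset are assumptions for illustration.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# Post-hoc, model-agnostic importances: shuffle each feature and measure how
# much the model's score degrades.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)

# Render the top drivers as a sentence instead of a chart, for non-experts.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)[:3]
drivers = ", ".join(f"{name} ({score:.2f})" for name, score in ranked)
print(f"This model's predictions are driven mainly by: {drivers}.")
```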

KPIs for AI risks

Enterprises should consider the specific reasons that explainable AI is important when looking towards how to measure explainability and accessibility. Teams should first and foremost establish a set of criteria for key AI risks including robustness, data privacy, bias, fairness, explainability and compliance, said Dr. Joydeep Ghosh, chief scientific officer at AI vendor CognitiveScale. It’s also useful to generate appropriate metrics for key stakeholders relevant to their needs.

External organizations like AI Global can help establish measurement targets that determine acceptable operating values. AI Global is a nonprofit organization that has established the AI Trust Index, a scoring benchmark for explainable AI similar to a FICO score. This enables firms not only to establish their own best practices, but also to compare the enterprise against industry benchmarks.


Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle based on the AI Trust Index. Although the tools for explainable AI are getting better, the technology is at an early research stage with proof-of-concept prototypes, cautioned Mark Stefik, a research fellow at PARC, a Xerox Company. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

“When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application,” Stefik said.

