Tag Archives: machine

For Sale – £1700 – Alienware 17 R4 Laptop and Graphics Amp – i7, GTX1080, 16GB, 2x SSD, QHD 1440p 120Hz G-Sync

Selling my ‘beloved’ Alienware 17 R4.

Great machine, runs anything you can throw at it. Outstanding specification, including the screen.
Never overclocked. I have been the sole owner from new. Great condition, no damage etc., really looked after this.

Extras:

  • Alienware Graphics Amplifier (external GPU box) also included, empty, so you could upgrade the GPU if you wanted.
  • Alienware branded neoprene carry case and original box included.

Any questions, please ask.

Looking for: £1,800 now £1,700

Techradar review link here:

Due to the price and weight, I am looking for collection, and payment via bank transfer.

Specs below:
CPU: 2.9GHz Intel Core i7-7820HK (quad-core, 8MB cache, overclocking up to 4.4GHz)
Graphics: Nvidia GeForce GTX 1080 (8GB GDDR5X VRAM); Intel HD Graphics 630
RAM: 16GB DDR4 (2,400MHz)
Screen: 17.3-inch QHD (2,560 x 1,440), 120Hz, TN anti-glare at 400 nits; Nvidia G-Sync; Tobii eye-tracking
Storage: 512GB SSD (M.2 NVME), 1TB SSD WD Blue (M.2 SATA), 1TB HDD (7,200 RPM)
Ports: 1 x USB 3.0 port, 1 x USB-C port, 1 x USB-C Thunderbolt 3 port, HDMI 2.0, Mini-DisplayPort, Ethernet, Graphics Amplifier Port, headphone jack, microphone jack, Noble Lock
Connectivity: Killer 1435 802.11ac 2×2 Wi-Fi; Bluetooth 4.1
Camera: Alienware FHD camera with Tobii IR eye-tracking
Weight: 9.74 pounds (4.42kg)
Size: 16.7 x 13.1 x 1.18 inches (42.4 x 33.3 x 3cm; W x D x H)

Go to Original Article
Author:

AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.
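
As a rough illustration of the kind of model described above, the sketch below fits a tiny logistic-regression model on synthetic tracking features to score pass completion probability. The features (receiver separation, throw distance) and all numbers are assumptions for illustration, not the NFL's actual inputs or model:

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model with batch gradient descent."""
    n_features = len(samples[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for x, y in zip(samples, labels):
            p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = p - y
            for i, xi in enumerate(x):
                grad_w[i] += err * xi
            grad_b += err
        weights = [w - lr * g / len(samples) for w, g in zip(weights, grad_w)]
        bias -= lr * grad_b / len(samples)
    return weights, bias

# Synthetic plays: [receiver separation in yards, throw distance in yards]
random.seed(0)
plays, completed = [], []
for _ in range(500):
    sep = random.uniform(0, 10)
    dist = random.uniform(0, 40)
    # Assumed ground truth: more separation helps, longer throws hurt.
    p_true = sigmoid(0.8 * sep - 0.15 * dist)
    plays.append([sep, dist])
    completed.append(1 if random.random() < p_true else 0)

w, b = train_logistic(plays, completed)
easy = sigmoid(w[0] * 8 + w[1] * 5 + b)    # open receiver, short throw
hard = sigmoid(w[0] * 1 + w[1] * 35 + b)   # covered receiver, deep ball
```

The on-screen probabilities fans see are produced by far richer models, but the pattern is the same: sensor-derived features in, a calibrated probability out.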

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

AWS CEO Andy Jassy and NFL Commissioner Roger Goodell
AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.
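
The digital twin pattern can be sketched in a few lines: build one virtual replica from historical attributes, then feed it through many scenarios. Everything here is hypothetical (the player attributes, the helmet attenuation figure, the injury-risk threshold); it only shows the simulation pattern, not the partnership's actual models:

```python
from dataclasses import dataclass
import random

@dataclass
class PlayerTwin:
    """Toy digital twin: a virtual player built from historical data."""
    mass_kg: float
    top_speed_ms: float
    helmet_attenuation: float  # fraction of impact energy absorbed (hypothetical)

    def impact_energy(self, speed_ms: float) -> float:
        """Kinetic energy (joules) reaching the player after helmet attenuation."""
        return 0.5 * self.mass_kg * speed_ms ** 2 * (1 - self.helmet_attenuation)

def simulate(twin: PlayerTwin, n_scenarios: int = 10_000,
             threshold_j: float = 4000) -> float:
    """Run random collision scenarios; return the fraction above a risk threshold."""
    random.seed(1)
    risky = 0
    for _ in range(n_scenarios):
        closing_speed = random.uniform(0, 2 * twin.top_speed_ms)
        if twin.impact_energy(closing_speed) > threshold_j:
            risky += 1
    return risky / n_scenarios

# Same virtual player, two hypothetical helmet designs.
baseline = simulate(PlayerTwin(mass_kg=110, top_speed_ms=9.5, helmet_attenuation=0.20))
better_helmet = simulate(PlayerTwin(mass_kg=110, top_speed_ms=9.5, helmet_attenuation=0.35))
```

Comparing the two runs shows how a rule or equipment change can be evaluated across thousands of collisions without risking a real athlete.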

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into infinite scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety

It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game.
Roger Goodell, Commissioner, NFL

New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level football players the researchers looked at, 110 of them had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules, and a recent $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.

Go to Original Article
Author:

For Sale – Gaming PC RTX 2080Ti, i7 9700, 32GB Dominator Platinum

What brand is the machine please?
Purchased new?
Purchased from?
Warranty remaining?
Optical drive Blu-ray?
Any bundled software?
Ancillaries or just PC unit?
Price paid?

thanks
Brian

Go to Original Article
Author:

How to achieve explainability in AI models

When machine learning models deliver problematic results, they often do so in ways that humans can’t make sense of, and this becomes dangerous when the model’s limitations aren’t understood, particularly for high-stakes decisions. Without straightforward and simple tools that highlight explainability in AI models, organizations will continue to struggle to implement AI algorithms. Explainable AI refers to the process of making it easier for humans to understand how a given model generates its results, and of planning for cases when those results should be second-guessed.

AI developers need to incorporate explainability techniques into their workflows as part of their overall modeling operations. AI explainability can refer to the process of creating algorithms for teasing apart how black box models deliver results or the process of translating these results to different types of people. Data science managers working on explainable AI should keep tabs on the data used in models, strike a balance between accuracy and explainability, and focus on the end user.

Opening the black box

Traditional rule-based AI systems had explainability built in, since humans handcrafted the rules that mapped inputs to outputs. But deep learning techniques using semi-autonomous neural network models can’t, on their own, show how a model’s results map to an intended goal.

Researchers are working to build learning algorithms that generate explainable AI systems from data. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

“This results in black box ML techniques, which may generate accurate AI systems, but it’s harder to trust them since we don’t know how these systems’ outputs are generated,” he said. 

AI explainability often describes post-hoc processes that attempt to explain the behavior of AI systems, rather than alter their structure. Other machine learning model properties like accuracy are straightforward to measure, but there are no corresponding simple metrics for explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It’s also important for practitioners to understand the assumptions and limitations of the techniques they use for implementing explainability.
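
One widely used post-hoc, model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model's score degrades. The sketch below treats the model as a black box; the toy model and data are invented for illustration:

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Post-hoc, model-agnostic importance: shuffle one feature at a time
    and measure how much the model's score degrades."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [row[col] for row in X]
            rng.shuffle(shuffled)
            X_perm = [row[:col] + [v] + row[col + 1:]
                      for row, v in zip(X, shuffled)]
            drops.append(baseline - metric(y, [predict(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# A "black box" that in fact ignores its second feature entirely.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

random.seed(42)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]
imp = permutation_importance(model, X, y, accuracy)
# imp[0] is large and imp[1] is near zero: the explanation recovers
# which feature the black box actually uses.
```

This also illustrates Taly's warning: the technique assumes features are independent, so correlated inputs can produce misleading attributions if the assumption is ignored.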

“While it is better to have some transparency rather than none, we’ve seen teams fool themselves into a false sense of security by wiring an off-the-shelf technique without understanding how the technique works,” Taly said. 

Start with the data

The results of a machine learning model could be explained by the training data itself, or how a neural network interprets a dataset. Machine learning models often start with data labeled by humans. Data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

“What a particular neural network derives from a dataset are patterns that it finds that may or may not be obvious to humans,” said Aaron Edell, director of applied AI at AI platform Veritone.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model meant to identify sheep might be trained only on pictures of farms, causing it to miss sheep in other settings, or to misread white clouds over farms as sheep. Facial recognition software can be trained on a company’s own employees’ faces, but if those faces are mostly male or white, the data is biased.

One good practice is to train machine learning models on data that should be indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identified how long Jennifer Aniston appears in every episode of Friends should be trained on frames of actual episodes rather than Google image search results for ‘Jennifer Aniston.’ In a similar vein, it’s OK to train models on publicly available datasets, but generic pre-trained models as a service will be harder to explain and change if necessary.   
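
A simple way to act on that practice is to compare summary statistics of the training data against the data the deployed model actually sees, and flag features that have drifted. The sketch below uses invented image statistics; the feature names and the z-score threshold are assumptions:

```python
import math

def summarize(values):
    """Mean and standard deviation of a feature column."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

def drift_alerts(train_features, live_features, z_threshold=3.0):
    """Flag features whose live mean falls far outside the training distribution."""
    alerts = []
    for name, train_vals in train_features.items():
        mean, std = summarize(train_vals)
        live_mean, _ = summarize(live_features[name])
        z = abs(live_mean - mean) / (std or 1.0)
        if z > z_threshold:
            alerts.append(name)
    return alerts

# Hypothetical image statistics: brightness looks alike, but background
# greenness differs sharply between farm training shots and live city footage.
train = {"brightness": [0.50, 0.55, 0.60, 0.52], "green_ratio": [0.70, 0.72, 0.68, 0.71]}
live  = {"brightness": [0.53, 0.57, 0.58, 0.51], "green_ratio": [0.10, 0.12, 0.09, 0.11]}
alerts = drift_alerts(train, live)
```

A crude check like this would catch the "trained on farms, deployed elsewhere" failure mode before the model's predictions have to be explained after the fact.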

Balancing explainability, accuracy and risk

The real problem with implementing explainability in AI is that there are major trade-offs between accuracy, transparency and risk in different types of AI models, said Matthew Nolan, senior director of decision sciences at Pegasystems. More opaque models may be more accurate, but fail the explainability test. Other types of models like decision trees and Bayesian networks are considered more transparent but are less powerful and complex.

“These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but this sometimes will sacrifice performance,” said Nolan.

Focusing on transparency can cost a business, but turning to more opaque models can leave a model unchecked and might expose the consumer, customer and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets. These settings make it easier for companies to adjust the acceptable amount of opaqueness or the transparency thresholds used in their AI models, and give enterprises the control to tune models to their needs or corporate governance policies, so they can manage risk, maintain regulatory compliance and give customers a differentiated experience in a responsible way.

Data scientists should also identify when the complexity of new models is getting in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said there are often simpler models available that attain the same performance, but machine learning practitioners tend to reach for fancier, more advanced models.
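
Huang's point can be made concrete with a toy comparison: on simple data, a one-rule decision stump can match a memorizing nearest-neighbour model while remaining explainable in a single sentence. The data and both models here are invented for illustration:

```python
import random

def stump_predict(x, threshold):
    """Transparent model: one human-readable rule."""
    return 1 if x[0] > threshold else 0

def nn_predict(x, train_X, train_y):
    """Opaque model: memorizes every training point (1-nearest-neighbour)."""
    nearest = min(range(len(train_X)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[nearest]

random.seed(7)
X = [[random.random(), random.random()] for _ in range(400)]
y = [1 if row[0] > 0.5 else 0 for row in X]
train_X, test_X = X[:300], X[300:]
train_y, test_y = y[:300], y[300:]

acc = lambda preds: sum(p == t for p, t in zip(preds, test_y)) / len(test_y)
stump_acc = acc([stump_predict(x, 0.5) for x in test_X])
nn_acc = acc([nn_predict(x, train_X, train_y) for x in test_X])
# Both score comparably, but only the stump's behaviour can be stated in
# one sentence: "predict 1 when feature 0 exceeds 0.5."
```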

Focus on the user

Explainability means different things to a highly skilled data scientist compared to a call center worker who may need to make decisions based on an explanation. The task of implementing explainable AI is not just to foster trust in explanations but also to help the end users make decisions, said Ankkur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission to plan accordingly.

Teredesai said there is still no general guideline for explainability, particularly for different types of users. It’s also challenging to integrate these explanations into the machine learning and end user workflows. End users typically need explanations as possible actions to take based on a prediction rather than just explanation as reasons, and this requires striking the right balance between focusing on prediction and explanation fidelity.

There are a variety of tools for implementing explainability on top of machine learning models, which generate visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, vice president of product management at Aible, an automated machine learning platform. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap.

Another good practice is to put humans in the loop to evaluate your explanations and see whether they make sense to a person, said Daniel Fagnan, director of applied science on the Zillow Offers Analytics team. This can lead to more accurate models through key improvements, including model selection and feature engineering.

KPIs for AI risks

Enterprises should consider the specific reasons that explainable AI is important when looking towards how to measure explainability and accessibility. Teams should first and foremost establish a set of criteria for key AI risks including robustness, data privacy, bias, fairness, explainability and compliance, said Dr. Joydeep Ghosh, chief scientific officer at AI vendor CognitiveScale. It’s also useful to generate appropriate metrics for key stakeholders relevant to their needs.

External organizations like AI Global can help establish measurement targets that determine acceptable operating values. AI Global is a nonprofit organization that has established the AI Trust Index, a scoring benchmark for explainable AI that works like a FICO score. This enables firms not only to establish their own best practices, but also to compare themselves against industry benchmarks.

When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application.
Mark Stefik, Research Fellow, PARC, a Xerox Company

Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle based on the AI Trust Index. Although the tools for explainable AI are getting better, the technology is at an early research stage with proof-of-concept prototypes, cautioned Mark Stefik, a research fellow at PARC, a Xerox Company. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

“When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application,” Stefik said.

Go to Original Article
Author:

Wanted – Decent GPU – £75 MAX Budget

Well my trusty AMD HD6870 is starting to show its age, so I’m looking for a decent (but not mega) upgrade.

I use my machine solely for racing SIM’s (Rfactor, Grid, Dirt, etc), not FPS, etc

I’m looking for a reasonable replacement.

Max budget is £75 delivered

Location
BRISTOL

Go to Original Article
Author:

SwiftStack 7 storage upgrade targets AI, machine learning use cases

SwiftStack turned its focus to artificial intelligence, machine learning and big data analytics with a major update to its object- and file-based storage and data management software.

The San Francisco software vendor’s roots lie in the storage, backup and archive of massive amounts of unstructured data on commodity servers running a commercially supported version of OpenStack Swift. But SwiftStack has steadily expanded its reach over the last eight years, and its 7.0 update takes aim at the new scale-out storage and data management architecture the company claims is necessary for AI, machine learning and analytics workloads.

SwiftStack said it worked with customers to design clusters that scale linearly to handle multiple petabytes of data and support throughput of more than 100 GB per second. That allows it to handle workloads such as autonomous vehicle applications that feed data into GPU-based servers.
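
Under the near-linear scaling the company describes, cluster sizing reduces to simple arithmetic. The per-node throughput and efficiency figures below are hypothetical, not SwiftStack's:

```python
import math

def nodes_for_throughput(target_gbps: float, per_node_gbps: float,
                         efficiency: float = 0.9) -> int:
    """Nodes needed to hit an aggregate throughput target, assuming
    near-linear scaling; efficiency discounts coordination overhead."""
    return math.ceil(target_gbps / (per_node_gbps * efficiency))

# Hypothetical cluster: each storage node sustains 5 GB/s over its NICs.
nodes = nodes_for_throughput(target_gbps=100, per_node_gbps=5)
```

With these assumed figures, 23 nodes would be needed to sustain 100 GB/s; the point is that aggregate throughput, not any single node, is the design target.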

Marc Staimer, president of Dragon Slayer Consulting, said throughput of 100 GB per second is “really fast” for any type of storage and “incredible” for an object-based system. He said the fastest NVMe system tests at 120 GB per second, but it can scale only to about a petabyte.

“It’s not big enough, and NVMe flash is extremely costly. That doesn’t fit the AI [or machine learning] market,” Staimer said.

This is the second object storage product launched this week to promise speeds not normally associated with object storage. NetApp unveiled an all-flash StorageGrid array Tuesday at its Insight user conference.

Staimer said SwiftStack’s high-throughput “parallel object system” would put the company into competition with parallel file system vendors such as DataDirect Networks and Panasas, and products such as IBM Spectrum Scale, but at a much lower cost.

New ProxyFS Edge

SwiftStack 7 plans to introduce a new containerized software component, ProxyFS Edge, next year to give remote applications a local file system mount for data, rather than having them connect through a network file protocol such as NFS or SMB. SwiftStack spent about 18 months creating a new API and software stack to extend its ProxyFS to the edge.

Founder and chief product officer Joe Arnold said SwiftStack wanted to utilize the scale-out nature of its storage back end and enable a high number of concurrent connections to go in and out of the system to send data. ProxyFS Edge will allow each cluster node to be relatively stateless and cache data at the edge to minimize latency and improve performance.

SwiftStack 7 will also add 1space File Connector software in November to enable customers that build applications using the S3 or OpenStack Swift object API to access data in their existing file systems. The new File Connector is an extension to the 1space technology that SwiftStack introduced in 2018 to ease data access, migration and searches across public and private clouds. Customers will be able to apply 1space policies to file data to move and protect it.

Arnold said the 1space File Connector could be especially helpful for media companies and customers building software-as-a-service applications that are transitioning from NAS systems to object-based storage.

“Most sources of data produce files today and the ability to store files in object storage, with its greater scalability and cost value, makes the [product] more valuable,” said Randy Kerns, a senior strategist and analyst at Evaluator Group.

Kerns added that SwiftStack’s focus on the developing AI area is a good move. “They have been associated with OpenStack, and that is not perceived to be a positive and colors its use in larger enterprise markets,” he said.

AI architecture

A new SwiftStack AI architecture white paper offers guidance to customers building out systems that use popular AI, machine learning and deep learning frameworks, GPU servers, 100 Gigabit Ethernet networking, and SwiftStack storage software.

“They’ve had a fair amount of success partnering with Nvidia on a lot of the machine learning projects, and their software has always been pretty good at performance — almost like a best-kept secret — especially at scale, with parallel I/O,” said George Crump, president and founder of Storage Switzerland. “The ability to ratchet performance up another level and get the 100 GBs of bandwidth at scale fits perfectly into the machine learning model where you’ve got a lot of nodes and you’re trying to drive a lot of data to the GPUs.”

SwiftStack noted distinct differences between the architectural approaches that customers take with archive use cases versus newer AI or machine learning workloads. An archive customer might use 4U or 5U servers, each equipped with 60 to 90 drives, and 10 Gigabit Ethernet networking. By contrast, one machine learning customer clustered a larger number of lower-horsepower 1U servers, each with fewer drives and a 100 Gigabit Ethernet network interface card, for high bandwidth.

An optional new SwiftStack Professional Remote Operations (PRO) paid service is now available to help customers monitor and manage SwiftStack production clusters. SwiftStack PRO combines software and professional services.

Go to Original Article
Author:

For Sale – HP EliteBook 1030 G3 | i7-8550u, 16gb, 120hz, Touch – Convertible, 256gb, 4G LTE

Selling a brand new, top-notch laptop.
It’s a brilliant machine but unfortunately I do miss the trackpoint of Thinkpads.

Specs:

  • i7-8650U, 16GB RAM, 256GB NVMe SSD, 13″ SureView 1080p IPS Touchscreen 120Hz, Harman Kardon speakers, 4G LTE
  • Brand new condition, 3 years warranty, no original box. *
  • £750
Location
London
Price and currency
750
Delivery
Prefer collection
Prefer goods collected?
I prefer the goods to be collected
Advertised elsewhere?
Advertised elsewhere
Payment method
Paypal / Bank transfer

Last edited:

Go to Original Article
Author:

Azure Sentinel—the cloud-native SIEM that empowers defenders is now generally available

Machine learning enhanced with artificial intelligence (AI) holds great promise in addressing many of the global cyber challenges we see today. These techniques give our cyber defenders the ability to identify, detect, and block malware almost instantaneously. And together they give security admins the ability to deconflict tasks, separating the signal from the noise and allowing them to prioritize the most critical tasks. That is why today I’m pleased to announce that Azure Sentinel, a cloud-native SIEM that provides intelligent security analytics at cloud scale for enterprises of all sizes and workloads, is now generally available.

Our goal has remained the same since we first launched Microsoft Azure Sentinel in February: empower security operations teams to help enhance the security posture of our customers. Traditional Security Information and Event Management (SIEM) solutions have not kept pace with the digital changes. I commonly hear from customers that they’re spending more time with deployment and maintenance of SIEM solutions, which leaves them unable to properly handle the volume of data or the agility of adversaries.

Recent research tells us that 70 percent of organizations continue to anchor their security analytics and operations with SIEM systems,1 and 82 percent are committed to moving large volumes of applications and workloads to the public cloud.2 Security analytics and operations technologies must lean in and help security analysts deal with the complexity, pace, and scale of their responsibilities. To accomplish this, 65 percent of organizations are leveraging new technologies for process automation/orchestration, while 51 percent are adopting security analytics tools featuring machine learning algorithms.3 This is exactly why we developed Azure Sentinel—a SIEM re-invented in the cloud to address the modern challenges of security analytics.

Learning together

When we kicked off the public preview for Azure Sentinel, we were excited to learn about the unique ways Azure Sentinel was helping organizations and defenders on a daily basis. We worked with our partners all along the way: listening, learning, and fine-tuning as we went. With feedback from 12,000 customers and more than two petabytes of data analyzed, we were able to examine and dive deep into a large, complex, and diverse set of data, all of which had one thing in common: a need to empower defenders to be more nimble and efficient when it comes to cybersecurity.

Our work with RapidDeploy offers one compelling example of how Azure Sentinel is accomplishing this complex task. RapidDeploy creates cloud-based dispatch systems that help first responders act quickly to protect the public. There’s a lot at stake, and the company’s cloud-native platform must be secure against an array of serious cyberthreats. So when RapidDeploy implemented a SIEM system, it chose Azure Sentinel, one of the world’s first cloud-native SIEMs.

Microsoft recently sat down with Alex Kreilein, Chief Information Security Officer at RapidDeploy. Here’s what he shared: “We build a platform that helps save lives. It does that by reducing incident response times and improving first responder safety by increasing their situational awareness.”

Now RapidDeploy uses the complete visibility, automated responses, fast deployment, and low total cost of ownership in Azure Sentinel to help it safeguard public safety systems. “With many SIEMs, deployment can take months,” says Kreilein. “Deploying Azure Sentinel took us minutes—we just clicked the deployment button and we were done.”

Learn even more about our work with RapidDeploy by checking out the full story.

Another great example of a company finding results with Azure Sentinel is ASOS. As one of the world’s largest online fashion retailers, ASOS knows they’re a prime target for cybercrime. The company has a large security function spread across five teams and two sites—but in the past, it was difficult for ASOS to gain a comprehensive view of cyberthreat activity. Now, using Azure Sentinel, ASOS has created a bird’s-eye view of everything it needs to spot threats early, allowing it to proactively safeguard its business and its customers. And as a result, it has cut issue resolution times in half.

“There are a lot of threats out there,” says Stuart Gregg, Cyber Security Operations Lead at ASOS. “You’ve got insider threats, account compromise, threats to our website and customer data, even physical security threats. We’re constantly trying to defend ourselves and be more proactive in everything we do.”

Already using a range of Azure services, ASOS identified Azure Sentinel as a platform that could help it quickly and easily unite its data. This includes security data from Azure Security Center and Azure Active Directory (Azure AD), along with data from Microsoft 365. The result is a comprehensive view of its entire threat landscape.

“We found Azure Sentinel easy to set up, and now we don’t have to move data across separate systems,” says Gregg. “We can literally click a few buttons and all our security solutions feed data into Azure Sentinel.”

Learn more about how ASOS has benefitted from Azure Sentinel.

RapidDeploy and ASOS are just two examples of how Azure Sentinel is helping businesses process data and telemetry into actionable security alerts for investigation and response. We have an active GitHub community of preview participants, partners, and even Microsoft’s own security experts who are sharing new connectors, detections, hunting queries, and automation playbooks.

With these design partners, we’ve continued our innovation in Azure Sentinel. It starts with the ability to connect to any data source, whether in Azure, on-premises, or even other clouds. We continue to add new connectors to different sources and more machine learning-based detections. Azure Sentinel will also integrate with the Azure Lighthouse service, giving service providers and enterprise customers the ability to view Azure Sentinel instances across different tenants in Azure.

Secure your organization

Now that Azure Sentinel has moved out of public preview and is generally available, there’s never been a better time to see how it can help your business. Traditional on-premises SIEMs require a combination of infrastructure costs and software costs, all paired with annual commitments or inflexible contracts. We are removing those pain points, since Azure Sentinel is a cost-effective, cloud-native SIEM with predictable billing and flexible commitments.

Infrastructure costs are reduced since you automatically scale resources as you need, and you only pay for what you use. Or you can save up to 60 percent compared to pay-as-you-go pricing by taking advantage of capacity reservation tiers. You receive predictable monthly bills and the flexibility to change capacity tier commitments every 31 days. On top of that, bringing in data from Office 365 audit logs, Azure activity logs and alerts from Microsoft Threat Protection solutions doesn’t require any additional payments.
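
The billing trade-off can be sketched with a few lines of arithmetic. The rates and volumes below are invented for illustration only and are not Azure's actual prices:

```python
def payg_cost(gb_per_day: float, rate_per_gb: float, days: int = 30) -> float:
    """Pay-as-you-go: billed per GB of data ingested."""
    return gb_per_day * rate_per_gb * days

def reserved_cost(daily_tier_price: float, days: int = 30) -> float:
    """Capacity reservation: a fixed daily price for a committed tier."""
    return daily_tier_price * days

# Hypothetical figures, for illustration only:
ingest = 500                                       # GB ingested per day
payg = payg_cost(ingest, rate_per_gb=2.50)         # $37,500/month
reserved = reserved_cost(daily_tier_price=500)     # $15,000/month
savings = 1 - reserved / payg                      # 60% in this example
```

With steady, predictable ingestion, a reservation tier wins; the 31-day flexibility mentioned above lets a team step the commitment up or down as volume changes.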

Please join me for the Azure Security Expert Series where we will focus on Azure Sentinel on Thursday, September 26, 2019, 10–11 AM Pacific Time. You’ll learn more about these innovations and see real use cases on how Azure Sentinel helped detect previously undiscovered threats. We’ll also discuss how Accenture and RapidDeploy are using Azure Sentinel to empower their security operations team.

Get started today with Azure Sentinel!

1 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
2 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
3 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019

Go to Original Article
Author: Microsoft News Center