For Sale – AMD R5 1600 gaming PC

Selling this gaming PC. The parts are used except for the case and PSU, which I bought to get it built from parts I had spare. The 4 TB drive was a refurb that wouldn’t fit in my main rig.

The specification is as follows:

Processor: Ryzen 5 1600 six-core CPU @ 3.20 GHz with Wraith cooler

Memory: 8 GB DDR4 3000 MHz RGB

Motherboard: ASRock B350 Pro4

Hard drive: 4 TB Western Digital

Solid state drive: 240 GB SSD

Power supply: 500 W Thermaltake RGB

Dedicated graphics card: PowerColor RX 580 8 GB GPU

DVD-RW: No

Case: CIT ARGB midi tower

Operating system: Windows 10 Pro

Go to Original Article
Author:

For Trade – or Sale: AMD build PC

Interested in the PC. I’m in West Wales so couldn’t collect. In theory, I suppose I could meet halfway.

Not bothered about the monitor. But the rest is fine and I can redeploy into other PCs.

I want to stick a GTX 1070 in there – is there enough room, and is there a spare 8-pin power connector?

The cooler I’m not sure about – do you think it would be OK to run without it? I’m worried about it becoming dislodged in transit.


SageMaker Studio makes model building, monitoring easier

LAS VEGAS — AWS launched a host of new tools and capabilities for Amazon SageMaker, AWS’ cloud platform for creating and deploying machine learning models. Drawing the most notice was Amazon SageMaker Studio, a web-based integrated development environment (IDE).

In addition to SageMaker Studio, the IDE for building, using and monitoring machine learning models, the other new AWS products aim to make it easier for non-expert developers to create models and to make those models more explainable.

During a keynote presentation at the AWS re:Invent 2019 conference here Tuesday, AWS CEO Andy Jassy described five other new SageMaker tools: Experiments, Model Monitor, Autopilot, Notebooks and Debugger.

“SageMaker Studio along with SageMaker Experiments, SageMaker Model Monitor, SageMaker Autopilot and Sagemaker Debugger collectively add lots more lifecycle capabilities for the full ML [machine learning] lifecycle and to support teams,” said Mike Gualtieri, an analyst at Forrester.

New tools

SageMaker Studio, Jassy claimed, is a “fully-integrated development environment for machine learning.” The new platform pulls together all of SageMaker’s capabilities, along with code, notebooks and datasets, into one environment. AWS intends the platform to simplify SageMaker, enabling users to create, deploy, monitor, debug and manage models in one environment.

Google and Microsoft have similar machine learning IDEs, Gualtieri noted, adding that Google plans for its IDE to be based on DataFusion, its cloud-native data integration service, and to be connected to other Google services.

SageMaker Notebooks aims to make it easier to create and manage open source Jupyter notebooks. With elastic compute, users can create one-click notebooks, Jassy said. The new tool also enables users to more easily adjust compute power for their notebooks and transfer the content of a notebook.

Meanwhile, SageMaker Experiments automatically captures input parameters, configuration and results of developers’ machine learning models to make it simpler for developers to track different iterations of models, according to AWS. Experiments keeps all that information in one place and introduces a search function to comb through current and past model iterations.
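As a rough illustration of the pattern Experiments automates, here is a minimal, hypothetical Python sketch of keeping every model iteration's parameters and results in one place and searching over them (the class and method names are invented for illustration; the real service does this as a managed capability):

```python
from dataclasses import dataclass

@dataclass
class Run:
    """One model-training iteration: its name, input parameters, and results."""
    name: str
    params: dict
    metrics: dict

class ExperimentLog:
    """Toy stand-in for an experiment tracker: records every run in one
    place and supports searching current and past iterations."""
    def __init__(self):
        self.runs = []

    def record(self, name, params, metrics):
        self.runs.append(Run(name, params, metrics))

    def search(self, metric, minimum):
        """Return runs whose metric meets a threshold, best first."""
        hits = [r for r in self.runs if r.metrics.get(metric, 0) >= minimum]
        return sorted(hits, key=lambda r: r.metrics[metric], reverse=True)

log = ExperimentLog()
log.record("run-1", {"lr": 0.1}, {"accuracy": 0.81})
log.record("run-2", {"lr": 0.01}, {"accuracy": 0.88})
best = log.search("accuracy", 0.85)   # only run-2 qualifies
```

The point is the workflow, not the code: every iteration's inputs and outputs are captured automatically, so comparisons don't depend on developers hand-maintaining spreadsheets.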

AWS CEO Andy Jassy talks about new Amazon SageMaker capabilities at re:Invent 2019

“It is a much, much easier way to find, search for and collect your experiments when building a model,” Jassy said.

As the name suggests, SageMaker Debugger enables users to debug and profile their models more effectively. The tool collects and monitors key metrics from popular frameworks, and provides real-time metrics about accuracy and performance, potentially giving developers deeper insights into their own models. It is designed to make models more explainable for non-data scientists.

SageMaker Model Monitor also tries to make models more explainable by helping developers detect and fix concept drift, which refers to the evolution of data and data relationships over time. Unless models are updated in near real time, concept drift can drastically skew the accuracy of their outputs. Model Monitor constantly scans the data and model outputs to detect concept drift, alerting developers when it detects it and helping them identify the cause.
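A toy sketch can make concept drift concrete. The following hypothetical Python snippet flags drift when the mean of recent inputs wanders too far from the training-time baseline; real monitors such as Model Monitor compare full distributions and many statistics, so treat this purely as an illustration of the idea:

```python
def detect_drift(baseline, recent, threshold=0.5):
    """Flag concept drift when the recent window's mean shifts from the
    baseline mean by more than `threshold` (toy illustration only)."""
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - base_mean) > threshold

training_data = [1.0, 1.2, 0.9, 1.1]   # what the model was trained on
live_data = [2.4, 2.6, 2.5, 2.7]       # the data has evolved over time
drifted = detect_drift(training_data, live_data)   # True: alert the developer
```

A model trained on the baseline distribution would keep producing outputs calibrated to it, which is exactly why unnoticed drift silently skews accuracy.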

Automating model building

With Amazon SageMaker Autopilot, developers can automatically build models without, according to Jassy, sacrificing explainability.

Autopilot is “AutoML with full control and visibility,” he asserted. AutoML essentially is the process of automating machine learning modeling and development tools.

The new Autopilot module automatically selects the correct algorithm based on the available data and use case and then trains 50 unique models. Those models are then ranked by accuracy.
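The train-many-candidates-then-rank flow can be sketched in a few lines. This hypothetical Python example evaluates toy candidate "models" on labeled examples and sorts them by accuracy; Autopilot does this across real algorithms and 50 trained models, so this only mirrors the shape of the process:

```python
def evaluate(predict, examples):
    """Fraction of (input, label) examples the candidate classifies correctly."""
    correct = sum(1 for x, y in examples if predict(x) == y)
    return correct / len(examples)

def rank_candidates(candidates, examples):
    """Evaluate every candidate model, then rank by accuracy, best first."""
    scored = [(name, evaluate(fn, examples)) for name, fn in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

data = [(0, 0), (1, 1), (2, 1), (3, 1)]
candidates = [
    ("always-one", lambda x: 1),                      # 3/4 correct
    ("threshold-at-1", lambda x: 1 if x >= 1 else 0), # 4/4 correct
]
leaderboard = rank_candidates(candidates, data)       # best model first
```

Because every candidate and its score is retained, developers can inspect why the winner won rather than accepting an opaque result, which is the "full control and visibility" claim.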

“AutoML is the future of ML development. I predict that within two years, 90 percent of all ML models will be created using AutoML by data scientists, developers and business analysts,” Gualtieri said.


“SageMaker Autopilot is a must-have for AWS, but it probably will help” other vendors also, including such AWS competitors as DataRobot because the AWS move further legitimizes the automated machine learning approach, he continued.

Other AWS rivals, including Google Cloud Platform, Microsoft Azure, IBM, SAS, RapidMiner, Aible and H2O.ai, also have automated machine learning capabilities, Gualtieri noted.

However, according to Nick McQuire, vice president at advisory firm CCS Insight, some of the new AWS capabilities are innovative.

“Studio is a great complement to the other products as the single pane of glass developers and data scientists need and its incorporation of the new features, especially Model Monitor and Debugger, are among the first in the market,” he said.

“Although AWS may appear late to the game with Studio, what they are showing is pretty unique, especially the positioning of the IDE as similar to traditional software development with … Experiments, Debugger and Model Monitor being integrated into Studio,” McQuire said. “These are big jumps in the SageMaker capability on what’s out there in the market.”

Google also recently released several new tools aimed at delivering explainable AI, plus a new product suite, Google Cloud Explainable AI.


AWS Outposts brings hybrid cloud support — but only for Amazon

LAS VEGAS — AWS controls nearly half of the public IaaS market today and, judging by the company’s rules against use of the term ‘multi-cloud,’ would be happy to have it all, even as rivals Microsoft and Google make incremental gains and more customers adopt multi-cloud strategies.

That’s the key takeaway from the start of this year’s massive re:Invent conference here this week, which was marked by the release of AWS Outposts for hybrid clouds and a lengthy keynote from AWS CEO Andy Jassy that began with a tongue-in-cheek invite to AWS’ big tent in the cloud.

“You have to decide what you’re going to bring,” Jassy said of customers who want to move workloads into the public cloud. “It’s a little bit like moving from a home,” he added, as a projected slide comically depicted moving boxes affixed with logos for rival vendors such as Oracle and IBM sitting on a driveway.

“It turns out when companies are making this big transformation, what we see is that all bets are off,” Jassy said. “They reconsider everything.”

For several years now, AWS has used re:Invent as a showcase for large customers in highly regulated industries that have made substantial, if not complete, migrations to its platform. One such company is Goldman Sachs, which has worked with AWS on several projects, including Marcus, a digital banking service for consumers. A transaction banking service that helps companies manage their cash in a cloud-native stack on AWS is coming next year, said Goldman Sachs CEO David Solomon, who appeared during Jassy’s talk. Goldman is also moving its Marquee market intelligence platform into production on AWS.

Along with showcasing enthusiastic customers like Goldman Sachs, Jassy took a series of shots at the competition, some veiled and others overt.

“Every industry has lots of companies with mainframes, but everyone wants to move off of them,” he claimed. The same goes for databases, he added. Customers are trying to move away from Oracle and Microsoft SQL Server due to factors such as expense and lock-in, he said. Jassy didn’t mention that similar accusations have been lodged at AWS’ native database services.

Jassy repeatedly took aim at Microsoft, which has the second most popular cloud platform after AWS, albeit with a significant lag. “People don’t want to pay the tax anymore for Windows,” he said.

But it isn’t as if AWS would actually shun Microsoft technology, since it has long been a host for many Windows Server workloads. In fact, it wants as much as it can get. This week, AWS introduced a new bring-your-own-license program for Windows Server and SQL Server designed to make it easier for customers to run those licenses on AWS, versus Azure.

AWS pushes hybrid cloud, but rejects multi-cloud

One of the more prominent, although long-expected, updates this week is the general availability of AWS Outposts. These specialized server racks provided by AWS reside in customers’ own data centers, in order to comply with regulations or meet low-latency needs. They are loaded with a range of AWS software, are fully managed by AWS and maintain continuous connections to local AWS regions.

The company is taking the AWS Outposts idea a bit further with the release of new AWS Local Zones. These will consist of Outpost machines placed in facilities very close to large cities, giving customers who don’t want or have their own data centers, but still have low-latency requirements, another option. Local Zones, the first of which is in the Los Angeles area, provide this capability and tie back to AWS’ larger regional zones, the company said.

Outposts, AWS Local Zones and the previously launched VMware Cloud on AWS constitute a hybrid cloud computing portfolio for AWS — but you won’t hear Jassy or other executives say the phrase multi-cloud, at least not in public.

In fact, partners who want to co-brand with AWS are forbidden from using that phrase and similar verbiage in marketing materials, according to an AWS co-branding document provided to SearchAWS.com.

“AWS does not allow or approve use of the terms ‘multi-cloud,’ ‘cross cloud,’ ‘any cloud,’ ‘every cloud,’ or any other language that implies designing or supporting more than one cloud provider,” the co-branding guidelines, released in August, state. “In this same vein, AWS will also not approve references to multiple cloud providers (by name, logo, or generically).”

An AWS spokesperson didn’t immediately reply to a request for comment.

The statement may not be surprising in the context of AWS’s market lead, but it does stand in contrast to recent approaches by Google, with the Anthos multi-cloud container management platform, and Microsoft’s Azure Arc, which uses native Azure tools but has multi-cloud management aspects.

AWS customers may certainly want multi-cloud capabilities, but can protect themselves by using portable products and technologies, such as Kubernetes at the lowest level with a tradeoff being the manual labor involved, said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

“To be fair, Azure and Google are only at the beginning of [multi-cloud],” he said.

Meanwhile, many AWS customers have apparently grown quite comfortable moving their IT estates onto the platform. One example is Cox Automotive, known for its digital properties such as Autotrader.com and Kelley Blue Book.

In total, Cox has more than 200 software applications, many of which it accrued through a series of acquisitions, and the company expects to move it all onto AWS, said Chris Dillon, VP of architecture, during a re:Invent presentation.

Cox is using AWS Well-Architected Framework, a best practices tool for deployments on AWS, to manage the transition.

“When you start something new and do it quickly you always run the risk of not doing it well,” said Gene Mahon, director of engineering operations. “We made a decision early on that everything would go through a Well-Architected review.”


AWS moves into quantum computing services with Braket

Amazon debuted a preview version of its quantum computing services this week, along with a new quantum computing research center and lab where AWS cloud users can work with quantum experts to identify practical, short-term applications.

The new AWS quantum computing managed service, called Amazon Braket, is aimed initially at scientists, researchers and developers, giving them access to quantum systems provided by IonQ, D-Wave and Rigetti.

Amazon’s quantum computing services news comes less than a month after Microsoft disclosed it is developing a chip capable of running quantum software. Microsoft also previewed a version of its Azure Quantum Service and struck partnerships with IonQ and Honeywell to help deliver the Azure Quantum Service.

In November, IBM said its Qiskit QC development framework supports IonQ’s ion trap technology, used by IonQ and Alpine Quantum Technologies.

Google recently claimed it was the first quantum vendor to achieve quantum supremacy — the ability to solve complex problems that classical systems either can’t solve or would take them an extremely long time to solve. Company officials said it represented an important milestone.

In that particular instance, Google’s Sycamore processor solved a difficult problem in just 200 seconds — a problem that would take a classical computer 10,000 years to solve. The claim was met with a healthy amount of skepticism by some competitors and other more objective sources as well. Most said they would reserve judgement on the results until they could take a closer look at the methodology involved.

Cloud services move quantum computing forward

Peter Chapman, CEO and president of IonQ, doesn’t foresee any conflicts between IonQ’s respective agreements with rivals Microsoft and AWS. AWS jumping into the fray with Microsoft and IBM will help push quantum computing closer to the limelight and make users more aware of the technology’s possibilities, he said.

“There’s no question AWS’s announcements give greater visibility to what’s going on with quantum computing,” Chapman said. “Over the near term they are looking at hybrid solutions, meaning they will mix quantum and classical algorithms making [quantum development software] easier to work with,” he said.


Microsoft and AWS are at different stages of development, making it difficult to gauge which company has advantages over the other. But what Chapman does like about AWS right now is the set of APIs that allows a developer’s application to run across the different quantum architectures of IonQ (ion trap), D-Wave (quantum annealing) and Rigetti (superconducting chips).

“At the end of the day it’s not how many qubits your system has,” Chapman said. “If your application doesn’t run on everyone’s hardware, users will be disappointed. That’s what is most important.”

Another analyst agreed that the sooner quantum algorithms can be melded with classical algorithms to produce something useful in an existing corporate IT environment, the faster quantum computing will be accepted.

“If you have to be a quantum expert to produce anything meaningful, then whatever you do produce stays in the labs,” said Frank Dzubeck, president of Communications Network Architects, Inc. “Once you integrate it with the classical world and can use it as an adjunct for what you are doing right now, that’s when [quantum technology] grows like crazy.”

Microsoft’s Quantum Development Kit, which the company open sourced earlier this year, also allows developers to create applications that operate across a range of different quantum architectures. Like AWS, Microsoft plans to combine quantum and classical algorithms to produce applications and services aimed at the scientific markets and ones that work on existing servers.

One advantage AWS and Microsoft provide for smaller quantum computing companies like IonQ, according to Chapman, is not just access to their mammoth user bases, but support for things like billing.

“If customers want to run something on our computers, they can just go to their dashboard and charge it to their AWS account,” Chapman said. “They don’t need to set up an account with us. We also don’t have to spend tons of time on the sales side convincing Fortune 1000 users to make us an approved vendor. Between the two of them [Microsoft and AWS], they have the whole world signed up as approved vendors,” he said.

The mission of the AWS Center for Quantum Computing will be to solve longer-term technical problems using quantum computers. Company officials said they have users ready to begin experimenting with the newly minted Amazon Braket but did not identify any users by name.

The closest they came was a prepared statement by Charles Toups, vice president and general manager of Boeing’s Disruptive Computing and Networks group. The company is investigating how quantum computing, sensing and networking technologies can enhance Boeing products and services for its customers, according to the statement.

“Quantum engineering is starting to make more meaningful progress and users are now asking for ways to experiment and explore the technology’s potential,” said Charlie Bell, senior vice president with AWS’s Utility Computing Services group.

AWS’s assumption going forward is that quantum computing will be a cloud-first technology, and Amazon Braket and the Quantum Solutions Lab will be the way AWS provides its users with their first quantum experience.

Corporate and third-party developers can create their own customized algorithms with Braket, which gives them the option of executing either low-level quantum circuits or fully managed hybrid algorithms, making it easier to switch between software simulators and whatever quantum hardware they select.
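To give a sense of what a "low-level quantum circuit" is, here is a hypothetical, dependency-free Python sketch of the two-qubit Bell circuit that a software simulator evaluates before a developer moves to real hardware. This is not the Braket SDK; all names here are invented and the state-vector math is written out by hand:

```python
import math

# State vector for two qubits, amplitudes ordered |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]   # start in |00>

def hadamard_q0(s):
    """Hadamard gate on qubit 0 (the left qubit in |q0 q1>):
    mixes the |0x> and |1x> amplitude pairs."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot_q0_q1(s):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot_q0_q1(hadamard_q0(state))
# Measurement probabilities: a Bell pair yields 00 or 11, each half the time.
probabilities = [round(abs(a) ** 2, 3) for a in bell]
```

A managed service runs the same kind of circuit description on a simulator or on IonQ, D-Wave or Rigetti hardware, which is why a common API across architectures matters.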

The AWS Center for Quantum Computing is based at Caltech, which has long invested in both experimental and theoretical quantum science and technology.


Halo: Reach now available for Xbox One, Windows 10 and Steam, plus Xbox Game Pass | Windows Experience Blog

343 Industries announced Tuesday that Halo: Reach has officially joined Halo: The Master Chief Collection with Xbox Game Pass and for Xbox One, Windows 10 PC and Steam.

“Since we announced Halo: Reach and the rest of The Master Chief Collection are coming to PC, we’ve been thrilled with the community response and collaboration, and are excited the game is now available to play in stunning 4K, over nine years after its original launch,” writes Brian Jarrard, Community Director at 343 Industries, on Xbox Wire. “Some of you have been closely involved in making this launch a reality and we hope to have your continued involvement with our Halo Insider program, helping to make this the best experience of Halo on PC!”
There are a variety of ways to play the game across Xbox Game Pass, Steam and Xbox One. For details on how to get Halo: Reach, what improvements you can expect in the new versions, and what’s next for Halo fans, head on over to the Xbox Wire post. Or check out the Microsoft Store page.

New Amazon Kendra AI search tool indexes enterprise data

LAS VEGAS — Amazon Kendra, a new AI-driven search tool from the tech giant, is designed to enable organizations to automatically index business data, making it easily searchable using keywords and context.

Revealed during a keynote by AWS CEO Andy Jassy at the re:Invent 2019 user conference here, Kendra relies on machine learning and natural language processing (NLP) to bring enhanced search capabilities to on-premises and cloud-based business data. The system is in preview.

“Kendra is enterprise search technology,” said Forrester analyst Mike Gualtieri. “But, unlike enterprise search technology of the past, it uses ML [machine learning] to understand the intent of questions and return more relevant results.”

Cognitive search

Forrester, he said, calls this type of technology “cognitive search.” Recent leaders in that market, according to a Forrester Wave report Gualtieri helped write, include intelligent search providers Coveo, Attivio, IBM, Lucidworks, Mindbreeze and Sinequa. Microsoft was also ranked highly in the report, which came out in May 2019. AWS is a new entrant in the niche.

“Search is often an area customers list as being broken especially across multiple data stores whether they be databases, office applications or SaaS,” said Nick McQuire, vice president at advisory firm CCS Insight.


While vendors such as IBM and Microsoft have similar products, “the fact that AWS is now among the first of the big tech firms to step into this area illustrates the scale of the challenge” to bring a tool like this to market, he said.

During his keynote, Jassy touted the intelligent search capabilities of Amazon Kendra, asserting that the technology will “totally change the value of the data” that enterprises have.

Setup of Kendra appears straightforward. Organizations will start by linking their storage accounts and providing answers to some of the questions their employees frequently query their data about. Kendra then indexes all the provided data and answers, using machine learning and NLP to attempt to understand the data’s context.
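The setup flow described above, indexing documents alongside curated answers to frequent questions, can be caricatured in plain Python. This hypothetical toy (all names invented; Kendra's actual ML-driven ranking is far more sophisticated) shows why curated FAQ answers and keyword indexing are complementary:

```python
class ToyIndex:
    """Illustrative stand-in for the setup flow: link documents and
    curated FAQ answers, then query across both."""
    def __init__(self):
        self.docs = {}   # doc_id -> lowercased text
        self.faqs = {}   # lowercased question -> curated answer

    def add_document(self, doc_id, text):
        self.docs[doc_id] = text.lower()

    def add_faq(self, question, answer):
        self.faqs[question.lower()] = answer

    def query(self, q):
        q = q.lower()
        # Curated answers win; otherwise fall back to keyword matching.
        for question, answer in self.faqs.items():
            if q in question or question in q:
                return answer
        return [doc_id for doc_id, text in self.docs.items()
                if all(word in text for word in q.split())]

index = ToyIndex()
index.add_document("hr-1", "Our vacation policy allows 20 days per year.")
index.add_faq("How many vacation days do I get?", "20 days per year.")
answer = index.query("How many vacation days do I get?")   # curated answer
docs = index.query("vacation policy")                      # keyword hits
```

What distinguishes Kendra from this caricature is exactly the contextual understanding layer: NLP models that map a natural-language question to the right passage even when the keywords don't match.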

Understanding context

“We’re not just indexing the keywords inside the document here,” Jassy said.

AWS CEO Andy Jassy announced Kendra at AWS re:Invent 2019

Meanwhile, Kendra is “an interesting move, especially since AWS doesn’t really have a range of SaaS applications which generate a corpus of information that AI can improve for search,” McQuire said.

“But,” he continued, “this is part of a longer-term strategy where AWS has been focusing on specific business and industry applications for its AI.”

Jassy also unveiled new features for Amazon Connect, AWS’ omnichannel cloud contact center platform. With the launch of Contact Lens for Amazon Connect, users will be able to perform machine learning analytics on their customer contact center data. The platform will also enable users to automatically transcribe phone calls and intelligently search through them.

By mid-2020, Jassy said, Amazon Kendra will support real-time transcription and analysis of phone calls.


Improving Tracking Prevention in Microsoft Edge – Microsoft Edge Blog

Today, we’re excited to announce some improvements to our tracking prevention feature that have started rolling out with Microsoft Edge 79. In our last blog post about tracking prevention in Microsoft Edge, we mentioned that we are experimenting with ways that our Balanced mode can be further improved to provide even greater privacy protections by default without breaking sites. We are looking to strike a balance between two goals:

Blocking more types of trackers – Microsoft Edge’s tracking prevention feature is powered by Disconnect’s tracking protection lists. We wanted to build off our initial implementation of tracking prevention in Microsoft Edge 78 and maximize the protections we offered by default by exploring blocking other categories of trackers (such as those in the Content category) in Balanced mode. These changes resulted in Microsoft Edge 79 blocking ~25% more trackers than Microsoft Edge 78.
Maintaining compatibility on the web – We knew that blocking more categories of trackers (especially those in the Content category) had the potential to break certain web workflows such as federated login or embedded social media content.
We learned through experimentation that it is possible to manage these tradeoffs by relaxing tracking prevention for organizations with which a user has established a relationship. To determine this list, we built on-device logic that combines users’ personal site engagement scores with the observation that some organizations own multiple domains that they use to deploy functionality across the web. It’s worth mentioning that this compatibility mitigation only applies to Balanced mode; Strict mode will continue to block the largest set of trackers without any mitigations.

The Chromium project’s site engagement score is a measure of how engaged a specific user is with a specific site. Site engagement scores can range from 0 (meaning a user has no relationship with a site) to 100 (meaning that a user is extremely engaged with a site). Activities such as browsing to a site repeatedly/over several days, spending time interacting with a site, and playing media on a site all cause site engagement scores to increase, whereas not visiting a site causes site engagement scores to decay exponentially over time. You can view your own site engagement scores by navigating to edge://site-engagement.
It’s also worth noting that site engagement scores are computed on your device and never leave it. This means that they are not synced across your devices or sent to Microsoft at any time.
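The decay behavior described above can be sketched with a simple exponential model. This hypothetical Python snippet (the half-life constant is an assumption for illustration; Chromium tunes its own decay parameters) shows how an unvisited site drops below the 4.1 relationship threshold mentioned later in this post:

```python
def decayed_score(score, days_idle, half_life_days=7.0):
    """Exponentially decay a site engagement score for each day the site
    goes unvisited. The 7-day half-life is an illustrative assumption."""
    return score * 0.5 ** (days_idle / half_life_days)

def still_engaged(score, days_idle, threshold=4.1):
    """Is the site still above the 'active relationship' threshold
    after some idle time?"""
    return decayed_score(score, days_idle) >= threshold

engaged_now = still_engaged(10.0, 0)     # score 10.0: engaged
engaged_later = still_engaged(10.0, 14)  # decayed to 2.5: dropped off
```

The shape of the curve is what matters: regular visits keep a site comfortably above the threshold, while a site visited once fades off the exempt list within a couple of weeks.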
Through local experimentation, we found that a site engagement score of 4.1 was a suitable threshold to define a site that a user has an active relationship with. While this value is subject to change based on user feedback and future experiments, it was selected as an initial value for two reasons:
It is low enough to ensure successful interactions with a site that a user has not previously had a history of engagement with.
It is high enough to ensure that sites a user visits infrequently will drop off the list relatively quickly.
While site engagement helps signal which sites are important to individual users, allowing third party storage access/resource loads from only these sites would not consider the fact that organizations can serve content that users care about from multiple domains, which can still result in site breakages.
Combining site engagement with organizations
In our last blog post about tracking prevention, we introduced the concept of an organization, that is, a single company that can own multiple domains related to their business (such as Org1 owning “org1.test” and “org1-cdn.test”). We also shared that in order to keep sites working smoothly, our tracking prevention implementation groups such domains together and exempts storage/resource blocks when a domain in one organization requests resources from another domain in that same organization.
In order to keep sites that users engage with working as expected while also increasing the types of trackers that we block by default, we combined the concept of an organization together with site engagement to create a new mitigation. This mitigation takes effect whenever a user has established an ongoing relationship with a given site (currently defined by a site engagement score of 4.1 or greater). For example, consider the following organization which owns two domains:
Social Org
social.example
social-videos.example
A user will be considered to have a relationship with Social Org if they have established a site engagement score of at least 4.1 with any one of its domains.
If another site, content-embedder.example, includes third-party content (say an embedded video from social-videos.example) from any of Social Org’s domains that would normally be restricted by tracking prevention, it will be temporarily allowed as long as the user’s site engagement score with Social Org’s domains is maintained above the threshold.
If a site does not belong to an organization, a user will need to establish a site engagement score of at least 4.1 with it directly before any storage access/resource load blocks imposed by tracking prevention will be lifted.
What does this mean?
By exempting sites and organizations that you have an ongoing and established relationship with from tracking prevention, we can ensure that the web services and applications you care about continue to work as you expect across the web. Leveraging site engagement also allows us to only unblock content that is likely to be important to you and reflects your current needs. This ensures that actions such as briefly visiting a site or seeing a popup aren’t enough to unblock content by themselves. If content does get unblocked due to you interacting with a site, it is always unblocked in a temporary manner that is proportional to how highly engaged you are with that site/its parent organization. By combining these exemptions with more strict blocking of trackers by default, we can provide higher levels of protection while still maintaining compatibility on the ever-evolving set of sites that you engage with.
It’s worth noting that tracking prevention, when enabled, will always block storage access and resource loads for sites that fall into the Fingerprinting or Cryptomining categories on Disconnect’s tracking protection lists. We will also not apply the site engagement-based mitigation outlined above for our most privacy-minded users who opt into tracking prevention’s Strict mode.

The best way to learn what’s changed with tracking prevention in Microsoft Edge 79 is to take a look at the table below:
Along the top are the categories of trackers as defined by Disconnect’s tracking protection list categories.
Along the left side are comparisons of the improvements made to our tracking prevention feature broken down into Basic, Balanced, and Strict.
The letter “S” in a cell denotes that storage access is blocked.
The letter “B” in a cell denotes that both storage access and resource loads (i.e. network requests) are blocked.
A “-“ in a cell denotes that no block will be applied to either storage access or resource loads.
The “Same-Org Mitigation” refers to the first mitigation that we introduced in our previous blog post and recapped above.
The “Org Engagement Mitigation” refers to the second mitigation based on site engagement that we introduced earlier in this post.

                     Advertising | Analytics | Content | Cryptomining | Fingerprinting | Social | Other | Same-Org Mitigation | Org Engagement Mitigation

Basic
Microsoft Edge 78:        -     |     -     |    -    |      B       |       B        |   -    |   -   |      Enabled        |      Not impl.
Microsoft Edge 79:        -     |     -     |    -    |      B       |       B        |   -    |   -   |      Enabled        |      N/A

Balanced
Microsoft Edge 78:        S     |     -     |    -    |      B       |       B        |   S    |   -   |      Enabled        |      Not impl.
Microsoft Edge 79:        S     |     -     |    S    |      B       |       B        |   S    |   S   |      Enabled        |      Enabled (1)

Strict (2)
Microsoft Edge 78:        B     |     B     |    -    |      B       |       B        |   B    |   B   |      Enabled        |      Not impl.
Microsoft Edge 79:        B     |     B     |    S    |      B       |       B        |   B    |   B   |      Enabled        |      Disabled

(1) Does not apply to Cryptomining or Fingerprinting categories.
(2) Strict mode blocks more resource loads than Balanced. This can result in Strict mode appearing to block fewer tracking requests than Balanced, since the trackers making the requests are never even loaded to begin with.
With our recent updates in Microsoft Edge 79, we have seen, on average, 25% more trackers blocked in Balanced mode. Close monitoring of user feedback and engagement time also showed no signs of negative compatibility impact, suggesting that the org engagement mitigation is effective at minimizing breakage on sites that users actively engage with. While this does mean that top sites have the org engagement mitigation applied more often, we believe this is an acceptable tradeoff versus compatibility, especially as more top sites are starting to give users mechanisms to transparently view, control, and delete their data.
As with all our features, we’ll continue to monitor telemetry and user feedback channels to learn more and continually improve tracking prevention in future releases. We are also exploring additional compatibility mitigations such as the Storage Access API, which we intend to experiment with in a future version of Microsoft Edge.
InPrivate Changes
In our previous blog post, we mentioned that users browsing InPrivate would automatically get Strict mode protections. Listening to user feedback, we found that this led to unexpected behavior (such as sites that worked in a normal browsing window failing to load InPrivate) and broke some important use cases. That’s why, in Microsoft Edge 79, your current tracking prevention settings are carried over to InPrivate sessions.
We are currently experimenting in our Canary and Dev channels with a switch at the bottom of our settings panel (which you can reach by navigating to edge://settings/privacy) that will allow you to re-enable Strict mode protections for InPrivate browsing by default:

We’ve also made it easier for you to view the trackers that Microsoft Edge has blocked for you. Navigate to edge://settings/privacy/blockedTrackers to test out this new experience today!

We’d love to hear your thoughts on our next iteration of tracking prevention. If something looks broken, or if you have feedback to share on these changes, please send it to us using the “smiley face” in the top right corner of the browser.
Send feedback at any time with the Send a Smile button in Microsoft Edge

As always, thanks for being a part of this journey towards a more private web!
– Scott Low, Senior Program Manager
– Brandon Maslen, Senior Software Engineer

For Sale – Lenovo desktop £80

Making some room for a gaming PC, so I’m selling one of the kids’ homework PCs. It’s a Lenovo H50-50 with a G3260, 8GB of RAM, a new 120GB SSD, and a Wi-Fi card.

In very good cosmetic condition (still has some of the plastic protection film). If you are local you can have the monitor (think 17″ or 19″) for free.

Location
Bristol
Price and currency
£80
Delivery cost included
Delivery is NOT included
Prefer goods collected?
I have no preference
Advertised elsewhere?
Advertised elsewhere
Payment method
BT, cash



AWS Access Analyzer aims to limit S3 bucket exposures

Amazon Web Services is taking another crack at mitigating S3 bucket misconfigurations and data exposures with a new tool called IAM Access Analyzer.

Announced at the re:Invent conference in Las Vegas, IAM Access Analyzer will be part of the AWS Identity and Access Management (IAM) console. The tool will alert users when an S3 bucket is configured to be publicly accessible and will offer a one-click option to block public access to ensure no unintended access.

“When reviewing results that show potentially shared access to a bucket, you can Block All Public Access to the bucket with a single click in the S3 Management console, configure more granular permissions if required, or for specific and verified use cases that require public access, such as static website hosting, you can acknowledge and archive the findings on a bucket to record that you intend for the bucket to remain public or shared,” Shasya Sharma, senior technical product manager for AWS, wrote in a blog post.

The IAM Access Analyzer console will group all publicly accessible buckets and show users whether this access is a result of an access control list (ACL), policy setting or both, as well as what permissions are enabled for that bucket. 
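The console’s one-click option corresponds to the S3 PutPublicAccessBlock API. As a hedged illustration (ours, not from the article; the bucket name is a placeholder), the same remediation can be scripted with boto3:

```python
# Hypothetical sketch: apply the equivalent of "Block All Public Access"
# to a single S3 bucket via boto3. Requires AWS credentials at runtime.

# The four settings the S3 console's one-click option enables together.
BLOCK_ALL_PUBLIC = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # restrict access for public policies
}

def block_public_access(bucket: str) -> None:
    """Block all public access on the given bucket (placeholder name)."""
    import boto3  # imported lazily; needs credentials when actually called
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration=BLOCK_ALL_PUBLIC,
    )
```

For verified public use cases such as static website hosting, you would skip this call and instead acknowledge and archive the Access Analyzer finding, as the post describes.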

S3 buckets are private by default, but that hasn’t stopped a series of high-profile data exposures caused by misconfiguration, including exposures involving data from the Department of Defense, Verizon and others. AWS has been trying for two years to mitigate S3 bucket exposures: making it clearer when buckets are public, sending emails to owners of public buckets, introducing settings to batch-change bucket permissions, and adding new tools.

AWS announced Control Tower at a re:Invent conference in Boston earlier this year as a landing page for some of these tools, such as AWS Config, which allows users to set standardized rules for S3 buckets and receive alerts if a new bucket is deployed that isn’t consistent with those rules.

Chris Vickery, director of cyber risk research at UpGuard, based in Mountain View, Calif., who has found a number of exposed S3 buckets, said IAM Access Analyzer “is definitely a step in the right direction,” but may not see wide adoption.

“The most notable aspect being that you have to know it exists and proactively turn it on,” Vickery told SearchSecurity. “Entities with massive already-existing configurations and systems may be hesitant to change things even if problems are detected, for fear of breaking the overall functionality.

“There is also the aspect of smaller operations, without sophisticated IT staff, feeling a bit overwhelmed with all the tech language, ID strings and other output,” Vickery added. “Those types of people want to simply know ‘Am I in trouble? Yes or no?’ It’s a complicated situation because Amazon doesn’t inherently know the purpose of each customer’s use.”
