Wanted – Apple iMac 27-inch 5K

I’ll have a 2015 iMac available after tomorrow.

My mate (who lives in Wales) and I decided it best to get it shipped to me, so that I can take full ownership of it, take some decent pics and confirm its condition. I tried to do this long distance last week, but it was going to prove difficult, so I took it off the classifieds.

It’s a Late 2015, 27″ 5K, 3.2GHz i5, 16GB RAM, 1TB Fusion in near-perfect condition (to be confirmed), with both inner and outer boxes, Magic Keyboard 2, Magic Mouse 2 (the rechargeable versions) and PSU. I believe he’s reset it with macOS Mojave.

Subject to inspection, it will then be up for £900 with offers considered.

Shipping would be extra but perhaps you’d consider collection from North Essex.

Keep a look out…


Windows Developer Twitch Workshop – Windows Developer Blog

On March 14th starting at 8 AM PT, we’ll be hosting a free Windows Developer Twitch Workshop for .NET developers working with the WPF, WinForms or UWP frameworks.
The day will be split into three themes across seven sessions:

Productivity: Productivity for existing developers, focused on Visual Studio XAML tooling and design-pattern framework libraries
Desktop Apps of Tomorrow: Looking ahead to how you can start exploring the latest technology for your desktop application, such as .NET Core 3, XAML Islands, MSIX and Windows 10 APIs
Extending Your Skills: Extending your skills with Forms for mobile apps, and with DevOps for desktop-application CI/CD

For the full schedule of the day’s sessions, see this detailed post on the Visual Studio blog.
We hope you tune in on March 14th. If you miss it, don’t worry: the content will be posted to the Visual Studio YouTube channel in about a week.

A progress report on digital transformation in healthcare – Microsoft Industry Blogs


It’s been an incredible year so far for the health industry. We’ve seen the dream and the opportunity of digital transformation and AI start to really take shape in the marketplace.

We saw many examples of this last month at HIMSS 2019, where many of our partners and other cloud providers are offering commoditized access to complex healthcare algorithms and models to improve clinical and business outcomes.

Trust

These examples show how cloud computing and AI can deliver on the promise of digital transformation. But for health organizations to realize that potential, they have to trust the technology—and their technology partner.

Microsoft has always taken the lead on providing cloud platforms and services that help health organizations protect their data and meet their rigorous security and compliance requirements. Recently, we announced the HIPAA eligibility and HITRUST certifications of Microsoft Cognitive Services and Office 365.

It’s crucial for health organizations to feel utterly confident not only in their technology partner’s ability to help them safeguard their data and infrastructure, and comply with industry standards, but also in their partner’s commitment to help them digitally transform their way—whatever their needs or objectives are. Our mission is to empower every person and every organization on the planet to achieve more. So whether you’re a health provider, pharmaceutical company, or retailer entering healthcare, your mission is our mission. Our business model is rooted in delivering success rather than disruption for our customers.

Interoperability

Another point of vital importance as we support the movement of healthcare as an industry—and healthcare data specifically—to the cloud is ensuring that we avoid the sins of the past, chief among them data silos.

To that end, we jointly announced with other leading cloud providers that we’re committed to healthcare data interoperability among cloud platforms and to supporting common data standards like Fast Healthcare Interoperability Resources (FHIR). And I was particularly thrilled to see the excitement in the health industry in reaction to our launch last month of Azure API for FHIR and our commitment to developing open source FHIR servers. I hope you’ll join the huge movement behind health interoperability fueled by FHIR and encourage your technologists to start actively using the open-source project to bring diverse data sets together—and to build systems that learn from those data sets.
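For a concrete sense of what “actively using” FHIR can look like, below is a minimal sketch of a standard FHIR search request against a server’s REST API. The base URL is a placeholder, a production endpoint such as Azure API for FHIR also requires OAuth authentication (omitted here), and the Patient search parameters come from the FHIR specification.

```python
# A minimal sketch of a FHIR search request against a server's standard REST
# API. FHIR_BASE is a placeholder; a real Azure API for FHIR endpoint would
# also require an OAuth 2.0 bearer token, which is omitted here for brevity.
import requests

FHIR_BASE = "https://example.azurehealthcareapis.com"  # hypothetical endpoint

def search_patients(family_name: str) -> list:
    """Search Patient resources by family name; FHIR returns a Bundle."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"family": family_name, "_count": 10},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # Each Bundle entry wraps one matching Patient resource.
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for patient in search_patients("Smith"):
        print(patient["id"], patient.get("name"))
```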

As my colleague, Doug Seven, recently wrote, interoperability helps you bring together data from disparate sources, apply AI to it to gain insights, and then enrich care team and patient tools with those insights to help you achieve your mission. That’s a crucial step in the digital transformation of health.

Teamwork

Another crucial step is supporting health teamwork. With the changing nature of care delivery, health services increasingly require coordination across multiple care settings and health professionals. So we added a new set of capabilities to our Teams platform, providing workflows that first-line clinical workers such as doctors and nurses can use to access patient information and coordinate care in real time.

The end game

Why does all of this matter? To answer that question, I always come back to the quadruple aim, which all of us in the health industry strive for: enhancing both patients’ and caregivers’ experiences, improving the health of populations, and lowering the costs of healthcare.

Empowering care teams and patients with data insights and tools that help them coordinate care—and that they and your health organization can trust—will help bring about the desired outcomes of the quadruple aim. Not only will this systemic change improve clinical and business outcomes; at an individual level, it will also enhance the day-to-day and digital experiences of clinical workers and patients alike—creating better experiences, better insights, and better care across the delivery system.

Learn more about real-world use cases for AI in the e-book: “Breaking down AI: 10 real applications in healthcare.”

Author: Steve Clarke

Cisco: Network security strategy requires IT, OT to play nice

SAN FRANCISCO — Securing IP networks that stretch from the office to the factory floor will require collaboration between IT staff and operational teams with different priorities and measurements for success.

That network security strategy advice was Cisco’s keynote message to security pros at the RSA Conference here this week. Liz Centoni, general manager of IoT at Cisco, encouraged attendees to make security the common ground of IT teams and people responsible for assembly lines.

Because both sides need security, it becomes the “bridge” between them, Centoni said. “Security is the reason that IT teams and OT [operational technology] teams are kind of forced to work together.”

She encouraged IT security pros to take the time to understand what’s important to managers on the factory floor. Their job is to ensure equipment is running as efficiently as possible, while also keeping a close eye on employee safety.

“The OT world cares about people safety; they care about equipment safety — not data loss,” Centoni said. “The OT world cares about what rolls off their production lines.”

They also worry about downtime. An assembly line that stops can cost a large manufacturer tens of thousands of dollars a minute. “That drives their behavior; that drives their actions; and that drives their decisions,” Centoni said.

IP on the factory floor

Nevertheless, with a growing number of manufacturers switching from proprietary networks to IP, operations need security. Therefore, Centoni recommended IT professionals reach out to the engineers who design the processes that keep production lines running. Together, they can create a security architecture for the factory floor.

Building partnerships is also an important piece of a successful network security strategy. For example, Centoni suggested IT teams work closely with plant managers, since they are the ones who would have to shut down a production line during a cyberattack.


Security teams should also be prepared to look for assets on the factory floor that aren’t on any inventory list. Based on Cisco’s experience with customers, Centoni estimates many companies don’t know about 40% to 50% of the equipment sitting in their environments. As a result, IT pros have to work a little harder to account for all assets before placing them in the proper network segments protected by security policies.
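As a rough, hypothetical illustration of that accounting problem, the sketch below compares devices observed on the network against an official inventory list; all names and data in it are invented.

```python
# A hypothetical sketch of the inventory gap described above: compare devices
# observed on the wire (e.g., MAC addresses harvested from switch ARP or DHCP
# logs) against the official asset list. All names and data are placeholders.
def find_unknown_assets(observed_macs, inventory_macs):
    """Return devices seen on the network that are absent from the inventory."""
    observed = {mac.lower() for mac in observed_macs}
    known = {mac.lower() for mac in inventory_macs}
    return observed - known

observed = ["00:1A:2B:3C:4D:5E", "00:1A:2B:3C:4D:5F", "AA:BB:CC:DD:EE:FF"]
inventory = ["00:1a:2b:3c:4d:5e"]

unknown = find_unknown_assets(observed, inventory)
coverage = 1 - len(unknown) / len(observed)
print(f"{len(unknown)} unknown devices; inventory covers {coverage:.0%} of what's online")
```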

“In the OT space, we’re just getting started with visibility,” Centoni said.

Cisco is working on its own bridge between IT and plant operations. In February, the company released industrial switches that customers can manage with DNA Center. The software console is a core component of Cisco’s intent-based networking portfolio, which includes switches and security for running IT networks on the campus.


For Sale – Custom-Made Gaming PC – AWD-IT Falcon Blue – High-Spec! – DELIVERY AVAILABLE

For sale: a custom-built Falcon Blue gaming computer by AWD-IT. The computer has a high specification as follows:

OS: Windows 10 Home x64
Motherboard: Asus H110M
Processor: Intel Core i5-7400 @ 3.00GHz
RAM: 8GB
Graphics card: Nvidia GTX 1060 6GB
Hard drive: 1TB
USB: 4 x 3.0, 6 x 2.0 (serial keyboard and serial mouse)
Display: VGA, 2 x DVI, 3 x DisplayPort, 1 x HDMI
Optical drive: DVD R/RW
Fans: 2, user-controlled variable speed
PSU: 500W
Case lighting: blue
Dimensions: 50cm (h) x 48cm (d) x 18cm (w)

The computer is in perfect condition and has barely been used. It has been reset and fully updated on Windows 10 (as at 06/02/19).

The protective covering is still fixed to the clear side panel. The computer comes with original box, packaging, power cable and invoice.

Screen, mouse and keyboard not included.

Price and currency: 400
Delivery: Goods must be exchanged in person
Payment method: Cash or transfer prior to taking item
Location: Bristol
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected



Data, insights and listening to improve the customer experience | Windows Experience Blog

Measuring the quality of Windows is a complex undertaking that requires gathering a variety of diagnostic signals from millions of devices within the Windows ecosystem. The insights we derive from these signals are essential to our understanding of whether our customers, starting with our Insiders and progressing to the general population of Windows users, are successfully using our products and services. In addition to all the internal testing that we do, we rely heavily on the feedback provided through diagnostic data to detect and fix problems before we release new updates of Windows to the general population, and to monitor the impact of those updates after each release.
In this installment of our quality blog series, Jane Liles and Rob Mauceri from the Windows Data Science team share some of the practices we’ve developed during the last few years to measure and improve the quality of Windows.
Measuring the impact of change
We approach each release with a straightforward question, “Is this Windows update ready for customers?” This is a question we ask for every build and every update of Windows, and it’s intended to confirm that automated and manual testing has occurred before we evaluate quality via diagnostic data and feedback-based metrics. After a build passes the initial quality gates and is ready for the next stages of evaluation, we measure quality based on the diagnostic data and feedback from our own engineers who aggressively self-host Windows to discover potential problems. We look for stability and improved quality in the data generated from internal testing, and only then do we consider releasing the build to Windows Insiders, after which we review the data again, looking specifically for failures.
Answering the question, “Is this update of Windows ready?” requires that we design and curate metrics in a way that is reliable, repeatable, precise, true and unbiased. It requires a high level of testing and iteration. We look at the metrics multiple times per week as part of our normal rhythm of the business to better understand the impact of the code changes that our engineering teams put into the product, and to make decisions about where to focus stabilization efforts.
By the time we are ready to ship to our customer base, our metrics must be, at a minimum, at or above the quality levels for the previous release, the idea being that every update should make the Windows 10 experience better. We use these testing methods and metrics throughout the release and track their performance over time, enabling us to ship with higher confidence. We then continue monitoring for quality issues that affect the ecosystem as the update reaches a broader audience.
To understand the quality of each update through data, we divide the feature set of Windows into distinct areas defined by the customer experience. Building on the ability to count unique “active” devices each month, we then focus on the success of the upgrade process and general health of the user experience. From there, we define “measures of success” for key user scenarios that exemplify a best-in-class operating system (OS) experience. For example, we measure success rates for connecting to Wi-Fi, or opening a PDF file from Microsoft Edge, or logging in using Windows Hello.
For any given update of Windows, we have thousands of “measures” (proprietary metrics built on a shared data platform) that we use at the team level to monitor the impact of the changes. Of these, we have just over 1,000 that we incorporate daily into our Release Quality View (RQV) dashboard, which gives us an “all up” product view. We use this dashboard to assess the quality of the customer experience before we ship, i.e., while our engineers and Windows Insiders are running daily and weekly pre-release builds.

We continue monitoring the RQV when the product becomes generally available to the hundreds of millions of devices used by our consumer and commercial Windows customers, using the data to spot and quickly fix issues that may arise so that Windows is always improving, even after it is released. In fact, we use measures at every stage of development to monitor the quality of each release, whether it is a feature update or a monthly servicing update. We build these “measures” from Windows diagnostic data and feedback signals to help us understand if the product is working the way it’s supposed to. The diagram below shows what one of these measures looks like.

Example measure: Average time to connect to Wi-Fi on a recent Windows 10 update: 1.503 seconds (within the goal of 3,000 ms, measured over 101.1 million connections with at least 50 percent signal quality)
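As a rough illustration (not Microsoft’s actual pipeline), a measure like the one above could be computed from connection records along these lines; the 3,000 ms goal and 50 percent signal-quality filter come from the caption, while the field names are assumptions.

```python
# A rough illustration (not Microsoft's implementation) of how the measure in
# the caption could be computed: average the connect times of qualifying
# connections and compare against the goal. The 3,000 ms goal and the 50%
# signal-quality filter come from the caption; the field names are invented.
def wifi_connect_measure(connections, goal_ms=3000, min_signal=0.5):
    times = [c["connect_ms"] for c in connections if c["signal"] >= min_signal]
    avg_ms = sum(times) / len(times)
    return {"avg_ms": avg_ms, "samples": len(times), "passing": avg_ms <= goal_ms}

sample = [
    {"connect_ms": 1400, "signal": 0.9},
    {"connect_ms": 1600, "signal": 0.7},
    {"connect_ms": 9000, "signal": 0.3},  # excluded: below 50% signal quality
]
print(wifi_connect_measure(sample))  # avg 1500 ms over 2 samples -> passing
```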
When developing the RQV, we needed to normalize our assessment across many measures, each with a different way of expressing “success.” The measures had to be flexible enough to best capture what is important, for example:

How should we characterize the experience being measured to influence the right action, for example, defining success as a low failure rate for connecting to Wi-Fi, or the percentage of devices with no failures, or the success rate of deleting browser history via Settings?
How should we express targets, such as a 99.85 percent success rate when creating a new tab in Microsoft Edge? (One possible way to normalize such unlike targets onto a shared score is sketched below.)
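Here is one hypothetical shape for a measure definition, illustrating how a single schema could normalize unlike targets (a success rate with a minimum, a latency with a maximum) onto a shared 0-100 score; none of the type or field names below come from Microsoft’s system.

```python
# A hypothetical shape for a measure definition. No names here come from
# Microsoft's actual system; they only illustrate the normalization idea.
from dataclasses import dataclass

@dataclass
class MeasureDefinition:
    name: str
    target: float           # the goal value for this measure
    higher_is_better: bool  # True for success rates, False for latencies

    def score(self, observed: float) -> float:
        """Map an observed value to 0-100; 100 means at or better than target."""
        if self.higher_is_better:
            ratio = observed / self.target
        else:
            ratio = self.target / max(observed, 1e-9)
        return min(100.0, 100.0 * ratio)

edge_tabs = MeasureDefinition("edge_new_tab_success_pct", 99.85, True)
wifi_time = MeasureDefinition("wifi_connect_ms", 3000.0, False)
print(edge_tabs.score(99.9), wifi_time.score(1503))  # both 100.0 -> passing
```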
Measures drive action
As anyone who regularly works with metrics has probably discovered, data is only as valuable as an organization’s commitment to using it to inform real decisions and drive action on a consistent basis. Having a dashboard of measures provides limited benefit unless there is a process and accountability for using it.
Today, leaders in the Windows organization meet multiple times each week to review the quality of the latest version of Windows being self-hosted by our own engineering teams and by hundreds of thousands of Windows Insiders, as represented by our RQV measures. Starting with the lowest scoring problem areas, we run down the list of areas whose measures are proportionally farthest from their targets. The engineering owners for those areas are then called on to explain what is causing the problem, who is on point to resolve it, and when they expect the quality of that area, as represented by the measures, to be back within target. We look for both build-by-build differences and the trends of measures over time.
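That triage ordering (ranking areas by their proportional shortfall from target) might look something like the sketch below; the area names, scores, and targets are invented stand-ins for RQV output.

```python
# Rank problem areas so reviews start with the worst offenders. Scores and
# targets here are invented stand-ins for RQV measure output.
areas = {
    "Wi-Fi": {"score": 97.0, "target": 100.0},
    "Edge tabs": {"score": 88.0, "target": 100.0},
    "Windows Hello": {"score": 99.5, "target": 100.0},
}

def shortfall(entry):
    # Proportional distance below target; 0 means at or above target.
    return max(0.0, (entry["target"] - entry["score"]) / entry["target"])

for name, entry in sorted(areas.items(), key=lambda kv: shortfall(kv[1]), reverse=True):
    print(f"{name}: {shortfall(entry):.1%} below target")
```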
Between these “Release Readiness” sessions, engineers and managers regularly review their measures and customer feedback, investigating failures using analytics tools to correlate cohorts of failure conditions with problems in the code. This enables them to more easily diagnose issues and fix them. Fixes that engineers check into future builds are tracked through the system, so reviewers can see when a fix will be delivered via a new build and can monitor impact as the build moves through its normal validation path: through automated quality gates, to self-hosted devices in our internal engineering “rings,” and to our Windows Insiders. Impactful regressions in quality at any of these stages—again, exposed through measures and user feedback—can halt the progression of that build to the next audience until a new fix (usually in the form of a new build) is available.
It has been exciting to grow a new team culture focused on driving quality through measurements built from our listening channels, and we continue to invest in making it better. In the early days of Windows 10, it took months of curation, review and iteration just to get an initial set of measures that were representative, reliable and trustworthy. It required patience and rigor to get to a state where the measures could tell a credible story. If a feature owner had to explain why the measures weren’t accurately describing what was happening, we had to develop the habit of asking how they were going to fix the measures and when. It didn’t happen overnight, but it did happen, thanks in part to leaders like Mike Fortin, who are passionate about us living by our measures, and to our dedicated teams of data scientists and engineers. We have learned a lot about how to express quality in a customer-centric way through quantitative measurement (at considerable scale), and we’re still learning.
Measure example: Success creating a new tab in Microsoft Edge
One of the hundreds of measures we track is the failure rate of opening new tabs in the Microsoft Edge browser. The success of this action is essential to a good customer experience, but it is also a non-trivial operation behind the scenes. This makes it a good action to track as a quality measure.
We detect that a new tab is being created by looking for a sequence of three diagnostic events: (1) TabAddedToViewModel, (2) AddNewTabCreateInstance and (3) ConsumedPreLoadedTabInstance. If each of these events occurs in order, and within the prescribed amount of time, the action is considered a success. However, there are many events that can cause the operation to fail. For example, if we see the TabAddedToViewModel event, but the sequence is not completed in time, then the action is deemed a failure.
Here is the logic behind this measure illustrated as a state model, which is the way we describe the event pattern in the measure definition:

State machine diagram for “percentage of Microsoft Edge Tab Failures” measure
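In code, that state model might look like the following minimal sketch. The three event names come from the post; the timeout value is a placeholder, since the “prescribed amount of time” is not published.

```python
# A minimal sketch of the three-event state model above. The event names come
# from the post; the timeout is a placeholder for the unpublished threshold.
EXPECTED = ["TabAddedToViewModel", "AddNewTabCreateInstance",
            "ConsumedPreLoadedTabInstance"]
TIMEOUT_MS = 5000  # hypothetical "prescribed amount of time"

def classify_tab_create(events):
    """events: list of (event_name, timestamp_ms) in arrival order."""
    step, start = 0, None
    for name, ts in events:
        if name == EXPECTED[step]:
            if step == 0:
                start = ts                # sequence begins
            elif ts - start > TIMEOUT_MS:
                return "failure"          # next event arrived too late
            step += 1
            if step == len(EXPECTED):
                return "success"          # all three events, in order, in time
    # A sequence that started but never completed counts as a failure.
    return "failure" if start is not None else "not_applicable"

print(classify_tab_create([("TabAddedToViewModel", 0),
                           ("AddNewTabCreateInstance", 40),
                           ("ConsumedPreLoadedTabInstance", 120)]))  # success
```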
We aggregate all success and failure outcomes to determine the overall reliability of the action. The percentage of failures over all successes and failures is the measure result. In this case, the “target” or goal that the Microsoft Edge team set for this measure is .15 percent, so if the average percentage of tab creation failures is .15 percent or less over a rolling seven-day period, then the measure is passing.
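The rolling-window check could then be as simple as the following sketch; the daily failure percentages are invented to be consistent with the .07 percent average cited below.

```python
# A sketch of the pass/fail check described above: average daily tab-creation
# failure percentages over a rolling seven-day window against the target.
# The daily values are invented to match the .07 percent average cited below.
daily_failure_pct = [0.06, 0.08, 0.07, 0.05, 0.09, 0.07, 0.07]  # last 7 days
TARGET_PCT = 0.15

window_avg = sum(daily_failure_pct) / len(daily_failure_pct)
status = "passing" if window_avg <= TARGET_PCT else "failing"
print(f"7-day average: {window_avg:.2f}% -> {status}")
```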
When we view the results for a measure, we compare across builds to understand the following:

How is this measure performing for this audience of devices with this specific build?
Is this feature working better in this build than in the last build we released to this audience?
How does the quality of this feature in this build compare to the latest released version of Windows?

An example comparison of results for this measure across three different builds on Feb. 24 looks like this:

Comparison of tab creation failure rate for three different Windows 10 builds
Since the measure was within its target of .15 percent, averaging .07 percent within the last seven days, it gets a score of 100 (in green).
A more detailed view of the first chart shows more information about the number of instances and the daily averages:

Detailed view of Tab Create failure measure
The grey bars at the bottom of the chart show the number of instances of that measure for that day. The confidence interval shows the range of results for that day, which on Feb. 24 was relatively narrow, with a 95 percent chance of the true mean falling within a .02 percent range that is well within target. If the measure were failing, we could drill down from the chart to get machine learning-generated information about anomalous cohorts, or patterns of conditions among the failing devices, as well as correlations with known crashes. We can also seek additional diagnostic data to understand the root cause of the problem(s), to aid engineers on the Microsoft Edge team in fixing it quickly for a future build.
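For intuition on why that interval is so narrow, here is a back-of-the-envelope calculation using the normal approximation for a proportion; the instance counts are invented.

```python
# A back-of-the-envelope version of the confidence interval mentioned above,
# using the normal approximation for a proportion. The counts are invented;
# the point is that millions of daily instances make the 95% interval around
# a ~0.07% failure rate extremely narrow.
import math

failures, total = 7_000, 10_000_000   # hypothetical one-day counts
p = failures / total                  # observed failure proportion (0.07%)
half_width = 1.96 * math.sqrt(p * (1 - p) / total)  # 95% CI half-width

print(f"failure rate {p:.4%} +/- {half_width:.4%}")
```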
Progress, and more investment
We have come a long way in our quest to better understand and improve the quality of Windows through the signals we receive from Windows Insiders and in-market usage. We will continue to look for opportunities to make our listening and analytics systems more intelligent, all in an effort to provide you, our customer, with the best possible Windows experience. We are constantly looking to apply new and improved ways to augment our instrumentation and measures—for example, investing more in customer feedback to help us identify gaps or inconsistencies in our diagnostic data-based measures and provide insight into the experiences you have on your actual devices. We’re also investing in new machine learning models to foster earlier detection through text analytics. In short, we are on a journey of continuous improvement, and we’re using data and AI to make release decisions that are more informed and customer-centric, while also maintaining your privacy and trust.