EVERYTHING HAS BEEN CLEANED WITH ANTIBACTERIAL WIPES (obviously not the GPU, RAM and SSD lol)
Due to the current situation I’m having a clearout and downgrade, so I have the below for sale, cash on collection or bank transfer. All ID details etc. can be provided.
EVGA RTX 2080 Super FTW3 Ultra Gaming £680 ONO (I also have the matching Hydro Copper block, £60). This was a step up from my 2070 Super, has full warranty with EVGA and comes with all boxes and bits; see the warranty page screenshot below for details.
Corsair Vengeance RGB Pro DDR4 16GB 3600 C18, boxed, £70 ONO (CURRENTLY UNDER OFFER AWAITING PAYMENT)
Corsair K65 LUX RGB keyboard with Glorious Gaming wrist rest £65 ONO (comes boxed with all keycaps and bits, and the original unused wrist rest)
The ThoughtSpot analytics platform has only been available for six years, but since 2014 the vendor has quickly gained a reputation as an innovator in the field of business intelligence software.
ThoughtSpot, founded in 2012 and based in Sunnyvale, Calif., was an early adopter of augmented intelligence and machine learning capabilities, and even as other BI vendors have begun to infuse their products with AI and machine learning, the ThoughtSpot analytics platform has continued to push the pace of innovation.
Now, however, ThoughtSpot is facing the same uncertainty as most enterprises as COVID-19 threatens not only people’s health around the world, but also organizations’ ability to effectively go about their business.
In a recent interview, ThoughtSpot CEO Sudheesh Nair discussed all things ThoughtSpot, from the way the coronavirus is affecting the company to the status of an IPO.
In part one of a two-part Q&A, Nair talked about how COVID-19 has changed the firm’s corporate culture in a short time. Here in part two, he discusses upcoming plans for the ThoughtSpot analytics platform and when the vendor might be ready to go public.
One of the main reasons the ThoughtSpot analytics platform has been able to garner respect in a short time is its innovation, particularly with respect to augmented intelligence and machine learning. Along those lines, what is a recent feature ThoughtSpot developed that stands out to you?
Sudheesh Nair: One of the main changes happening in the world of data right now is that the source of data is moving to the cloud. To deliver AI-based, high-speed innovation on data, ThoughtSpot was really counting on running the data in a high-speed in-memory database, which is why ThoughtSpot was mostly focused on on-premises customers. One of the major changes in the last year is that we delivered what we call Embrace. With Embrace we are able to move to the cloud and leave the data in place. This is critical because as data moves, the cost of running computations gets higher, because computing is very expensive in the cloud.
With ThoughtSpot, what we have done is deliver this on platforms like Snowflake, Amazon Redshift, Google BigQuery and Microsoft Synapse. Now, with all four major cloud data warehouses fully supported, we have the capability to serve all of our customers and leave all of their data in place. This reduces the cost to operate ThoughtSpot, so the value we deliver and the return on investment will be higher. That’s one major change.
Looking ahead, what are some additions to the ThoughtSpot analytics platform customers can expect?
Nair: If you ask people who know ThoughtSpot (and I know there are a lot of people who don’t know ThoughtSpot, and that’s OK) what we do, they will say, ‘search and AI.’ It’s important that we continue to build on that; however, one thing we’ve found is that in the modern world we don’t want search to be the first thing that you do. What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?
Let’s say you’re responsible for sales in Boston, and you told the system you’re interested in figuring out sales in Boston — that’s all you did. Now the system understands what it means to you, and then runs multiple models and comes back to you with questions you’ll be interested in, and most importantly with insights it thinks you need to know — it doesn’t send a bunch of notifications that you never read. We want to make sure that the insights we’re sending to you are so relevant and so appropriate that every single one adds value. If one of them doesn’t add value, we want to know so the system can understand what it was that was not valuable and then adjust its algorithms internally. We believe that the right action and insight should be in front of you, and then search can be the second thing you do prompted by the insight we sent to you.
What tools will be part of the ThoughtSpot analytics platform to deliver these kinds of insights?
Nair: There are two features we are delivering around this. One is called Feed, which is inspired by social media: curating insights, conversations and opinions around facts. Right now social media is all opinion, but imagine a fact-driven social media experience where someone says they had a bad quarter, someone else says it was great, and then the data shows up, so it doesn’t become an opinion based on another opinion. It’s important that it should be tethered to facts. The second one is Monitor, which is the primary feature where the thing you were looking for shows up even before you ask, in the format that you like: it could be mobile, could be notifications, could be an image.
Those two features are critical innovations for our growth, and we are very focused on delivering them this year.
The last time we spoke, we talked about the possibility of ThoughtSpot going public, and you were pretty open in saying that’s something you foresee. Now, about seven months later, where do plans for going public stand?
Nair: If you had asked me before COVID-19 I would have had a bit of a different answer, but the big picture hasn’t changed. I still firmly believe that a company like ThoughtSpot will tremendously benefit from going public, because our customers are massive, and those customers like to spend more with a public company, given the trust that comes with it.
Having said that, I talked last time about building a team and predictability, and seven months later I feel we have built an executive team that can be best in class among public companies. But going public also requires being predictable, and we’re getting to that right spot. I think the next two quarters will be somewhat fluid, which may set us back when it comes to building a plan to take the company public. But that is basically it. Taken one by one: we have good product-market fit, we have good business momentum, we have a good team, and we just need to put together the history necessary so that the business is predictable and an investor can appreciate it. That’s what we’re focused on. There might be a short-term setback because of what the coronavirus throws at us, but it’s definitely going to be a couple more quarters of work.
Does the decline in the stock market related to COVID-19 play into your plans at all?
Nair: It’s absolutely an important event and no one knows how it will play out, but when I think about a company’s future, I never think about an IPO as a matter of a few quarters. It’s something we want to do, and a couple of quarters here or there is not going to make a major difference. Over the last couple of weeks we haven’t seen any softness in demand for ThoughtSpot, but we know that a lot of our customers’ pipelines are in danger from supply impacts from China, so we will wait and see. We need to be very close to our customers right now, helping them through the process, and in that process we will learn and make the necessary course corrections.
Editor’s note: This interview has been edited for clarity and conciseness.
It’s been two years since Sea of Thieves arrived on Xbox One and Windows 10, and what years they’ve been. Since launch, we’ve seen over 10 million pirates plundering the seas, and during the last 24 months we’ve forged on with the wind in our sails to deliver an abundance of additions to the game – most notably in 2019’s Anniversary Update and with the introduction of monthly content updates from last July.
We’re incredibly proud of how far we’ve journeyed from launch, and we’re excited to continue making waves with future content updates. We’re humbled by how many player stories we’ve seen shared, and our community continually inspires us. So we can’t wait to show you all what’s on the horizon – but for now, we want to celebrate everything that’s come before!
As a thank you to our players and a celebration of all things Sea of Thieves, we’ve planned a programme of challenges and goodies kicking off this weekend and running throughout March’s content update. There’s a lot of in-game swag to be bagged for making all the right moves, so let’s take a look at the line-up!
Play Sea of Thieves free this weekend
To start with, an incentive if you’re not one of the millions of pirates who’ve joined us on the seas already: Sea of Thieves is part of the Xbox Free Play Days this weekend, and will be free for all Xbox Live subscribers to play until March 23rd! Don’t worry about being a late starter as all new pirates are eased into the game via the Maiden Voyage, a narrative-driven tutorial experience that provides guidance and information to fledgling sailors.
Enter the Heart of Fire
Let’s not forget this month’s free content update, Heart of Fire. Live since March 12th, this update brings the next fiery Tall Tale to Sea of Thieves, Athena’s Run Voyages for Pirate Legends and some brand new missiles in the form of chainshot for your cannons and throwable Blunderbombs.
Heart of Fire: Official Sea of Thieves Content Update
Bag the Anniversary Eye of Reach
What would a birthday be without a present? If you play Sea of Thieves between Thursday, March 19th and Friday, March 27th, you’ll get the very special, very golden ‘X Marks the Spot’ Eye of Reach! For those of you who will want to equip it straight away, don’t worry – the weapon will appear in your armoury immediately upon entering the game.
Snap up the skeletal Spinal Figurehead
As made famous in Rare’s ’90s fighting game Killer Instinct (and resurrected for the modern version in 2013), Spinal can be claimed for the front of your ship just by watching Sea of Thieves’ anniversary stream at mixer.com/seaofthieves on Friday, March 20th. Make sure your Microsoft and Sea of Thieves accounts are linked so that you qualify for this MixPot item, sign in and join us there from 5pm-7pm GMT!
Set sail with Ori and the Will of the Wisps
If you’re joining Sea of Thieves via Game Pass Ultimate, don’t forget you can also claim the wonderful Ori-inspired ship set to carry you into adventure. This gorgeous new livery is available exclusively to Game Pass Ultimate subscribers from March 18th, and you can see it in all its glowing glory right here:
Ancestral Ship Set Reveal Trailer – Official Sea of Thieves
Relive some of Sea of Thieves’ greatest moments
From Friday, March 20th, pirates will also be able to relive some of the greatest Sea of Thieves moments from the last two years. Take a truncated tour through The Hungering Deep, Cursed Sails and Forsaken Shores to bag cosmetics previously available only when these updates first launched. For example, if you hadn’t taken to the seas or missed your chance to bag Merrick’s drum the first time around, you’ll have the opportunity to earn it now, allowing everyone to get a taste of some of the events they might have missed from year one!
Turn the seas red with Bleeding Edge
The fun doesn’t stop there. From March 30th, you’ll also be able to unlock some awesome Bleeding Edge ship cosmetics. Pirates will be challenged throughout the week with three objectives, and motivated to complete them with stunning Bleeding Edge-inspired sail, flag and hull designs. As with the Hunter’s Haul event last month, you’ll also be able to track your progress through these objectives here on the Sea of Thieves website – so stay tuned for more info on this event which is set to see the seas turn red…
Want to find out more about Sea of Thieves? Follow us at any of our social channels below, then take the plunge and embark on an epic journey with one of gaming’s most welcoming communities!
The 2020 HIMSS Global Health Conference & Exhibition may have been canceled Thursday due to coronavirus concerns, but federal regulators wasted no time in announcing that two long-awaited health IT rules finally have been released.
The finalized interoperability and information blocking rules from the Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare and Medicaid Services (CMS) will require healthcare organizations to give patients access to data through standardized APIs within the next two years, said Don Rucker, national coordinator for ONC, during a media briefing Monday. The rules also focus on data sharing between health insurers, as well as exceptions to information blocking: situations that do not constitute healthcare organizations keeping data from patients.
Both ONC’s information blocking and interoperability rule, and CMS’ patient data access rule, were finalized amid concerns about patient privacy. Organizations, including EHR vendor Epic, voiced concerns that there weren’t enough privacy protections in place to keep patient data safe.
Proposals for the two rules were unveiled at last year’s event, and it was rumored they would drop in conjunction with President Trump’s last-minute addition to the speaker lineup for this year’s conference, which was slated to start today.
ONC’s interoperability rule
ONC’s interoperability rule mandates that healthcare organizations use FHIR-based APIs to connect patient-facing and consumer-grade apps to patient EHRs. It’s part of the Trump administration’s push to consumerize healthcare.
At the start of the year, one of the biggest EHR vendors, Epic, publicly expressed concerns about sharing patient data with third-party apps because of the lack of outlined privacy protections. During the media briefing, Rucker addressed those concerns head-on, saying that the apps will use the same secure API technology used in banking apps. Additionally, Rucker said providers will be able to let patients know in a “deliberate, straightforward way” what information they’re consenting to share through a patient authentication process.
“That is not snuck in on the side,” Rucker said. “It’s central to the way that patients allow an app to get access to their information. We’ve empowered providers to communicate the privacy issues in that process.”
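At a high level, the FHIR-based API access described here follows a standard REST pattern: an authorized app queries the EHR’s FHIR endpoint for a resource type scoped to a patient. The sketch below is illustrative only; the server URL and patient ID are hypothetical, and a real app would first complete the OAuth-style patient authentication step Rucker describes before making any request.

```python
# Minimal sketch of how a patient-facing app might form a FHIR search
# request. The base URL and patient ID below are hypothetical examples,
# not real endpoints.
from urllib.parse import urlencode

def build_fhir_query(base_url: str, resource: str, params: dict) -> str:
    """Build a FHIR REST search URL for a given resource type."""
    return f"{base_url}/{resource}?{urlencode(params)}"

# e.g. fetch a patient's medication orders (a USCDI-relevant data class)
url = build_fhir_query(
    "https://ehr.example.com/fhir",        # hypothetical FHIR server
    "MedicationRequest",                   # standard FHIR resource type
    {"patient": "12345", "_format": "json"},
)
print(url)
```

In a real deployment, the request would also carry the OAuth access token the patient granted to the app, which is what lets providers surface exactly what information is being shared.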
Rucker said a second part of the finalized ONC rule identifies activities that do not constitute information blocking, which is the interference of a healthcare organization with the sharing of health data, and establishes new rules to prevent information blocking practices by healthcare providers, developers of certified health IT and health information exchange networks, as required by the 21st Century Cures Act.
The rule also requires health IT developers to meet certification requirements to ensure interoperability. Health IT developers must comply with requirements such as assuring that they are not restricting communication about a product’s usability or security so that nurses and doctors are able to discuss safety and usability issues without being bound by what Rucker said has historically been called a “gag clause.”
The finalized ONC rule also replaces the Common Clinical Data Set (CCDS) data elements standard with the U.S. Core Data for Interoperability (USCDI) data set for the exchange of data within APIs. The USCDI is a defined set of data that includes clinical notes such as allergies and medications. The data set will support data exchange, Rucker said.
“These are standardized sets of data classes and data elements … to help improve this flow of information,” he said.
CMS patient access rule
The ONC rule goes hand in hand with the CMS rule, which aims to open data sharing between the health insurance system and patients.
Starting in 2021, the CMS patient data access rule will require all health plans that do business with the federal government to share data with patients through a standards-based API. The push to make it easier for patients to access health data follows a model CMS implemented with Blue Button 2.0, an API which gives Medicare beneficiaries the ability to connect their claims data to apps of their choosing, such as research apps.
The rule also requires health plans to make their provider directory available through an API, so patients know if their physician is in their insurance network.
“This will allow innovative third parties to design apps that will help patients evaluate which plan networks are right for them and potentially avoid surprise billing by having a clear picture of which clinicians are in network,” CMS administrator Seema Verma said during Monday’s media briefing.
Starting in 2022, Verma said insurance plans will also be required to share patient information with each other, which will enable patients to take data with them as they move between plans.
Additionally, effective six months from today, CMS is changing the participation conditions for Medicare- and Medicaid-participating hospitals as part of the rule. To ensure they are supporting care coordination for patients, Verma said the rule requires the hospitals to send admission, discharge and transfer notifications so patients receive a “timelier follow-up supporting better care and better health outcomes.”
“The Trump administration is pushing the healthcare system forward,” Verma said. “We are breaking down barriers to a seamless, data-driven healthcare system. The result of these two rules will be a more intuitive and convenient experience for American patients.”
Databases have long been used for transactional and analytics use cases, but they also have practical utility to help enable machine learning capabilities. After all, machine learning is all about deriving insights from data, which is often stored inside a database.
San Francisco-based database vendor Splice Machine is taking an integrated approach to enabling machine learning with its eponymous database. Splice Machine is a distributed SQL relational database management system that includes machine learning capabilities as part of the overall platform.
Splice Machine 3.0 became generally available on March 3, bringing with it updated machine learning capabilities. It also has a new Kubernetes-based, cloud-native model for cloud deployment, as well as enhanced replication features.
In this Q&A, Monte Zweben, co-founder and CEO of Splice Machine, discusses the intersection of machine learning and databases and provides insight into the big changes that have occurred in the data landscape in recent years.
How do you integrate machine learning capabilities with a database?
Monte Zweben: The data platform itself has tables, rows and schema. The machine learning manager that we have native to the database has notebooks for developing models, Python for manipulating the data, algorithms that allow you to model, and model workflow management that allows you to track the metadata on models as they go through their experimentation process. And finally, we have in-database deployment.
So as an example, imagine a data scientist working in Splice Machine in the insurance industry. They have an application for claims processing, and they are building out models inside Splice Machine to predict claims fraud. There’s a function in Splice Machine called deploy, and what it will do is take a table and a model and generate database code. The deploy function builds a trigger on the database table that tells the table to call a stored procedure containing the model for every new record that comes into the table.
So what does this mean in plain English? Let’s say that in the claims table, every time new claims come in, the system automatically triggers, grabs those claims, runs the model that predicts claims fraud and outputs those predictions into another table. And now, all of a sudden, you have real-time, in-the-moment machine learning that is detecting claims fraud on first notice of loss.
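The trigger-plus-stored-procedure pattern Zweben describes can be simulated in plain Python to make the flow concrete. This is an illustrative sketch only, not Splice Machine’s actual deploy API: the fraud model, table structures and threshold are all hypothetical stand-ins.

```python
# Illustrative sketch of the deploy pattern: every insert into the claims
# "table" fires a trigger that scores the new row with a model and writes
# the prediction to a separate table. All names here are hypothetical.

def fraud_model(claim: dict) -> float:
    """Stand-in for the deployed ML model: flag unusually large claims."""
    return 0.9 if claim["amount"] > 10_000 else 0.1

predictions = []  # plays the role of the predictions table

def insert_claim(claims_table: list, claim: dict) -> None:
    """Insert a row; the 'trigger' scores it immediately, in the moment."""
    claims_table.append(claim)
    predictions.append(
        {"claim_id": claim["id"], "fraud_score": fraud_model(claim)}
    )

claims = []
insert_claim(claims, {"id": 1, "amount": 250})
insert_claim(claims, {"id": 2, "amount": 50_000})
print(predictions)
```

In the real system, the trigger and stored procedure are generated SQL objects living inside the database, so scoring happens without moving data out to a separate serving layer; that co-location is what makes the "first notice of loss" scoring real-time.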
What does distributed SQL mean to you?
Zweben: So at its heart, it’s about sharing data across multiple nodes. That provides you the ability to parallelize computation and gain elastic scalability. That is the most important distributed attribute of Splice Machine.
In our new 3.0 release, we just added distributed replication. It’s another element of distribution where you have secondary Splice Machine instances in geo-replicated areas, to handle failover for disaster recovery.
What’s new in Splice Machine 3.0?
Zweben: We moved our cloud stack for Splice Machine from an old Mesos architecture to Kubernetes. Now our container-based architecture is all Kubernetes, and that has given us the opportunity to enable the separation of storage and compute. You literally can pause Splice Machine clusters and turn them back on. This is a great utility for consumption-based usage of databases.
Along with our upgrade to Kubernetes, we also upgraded our machine learning manager from an older notebook technology called Zeppelin to a newer notebook technology that has really gained momentum in the marketplace, as much as Kubernetes has in the DevOps world. Jupyter notebooks have taken off in the data science space.
We’ve also enhanced our workflow management tool, called MLflow, which is an open source tool that originated with Databricks, and we’re part of that community. MLflow allows data scientists to track their experiments and has that record of metadata available for governance.
What’s your view on open source and the risk of a big cloud vendor cannibalizing open source database technology?
Zweben: We do compose many different open source projects into a seamless and highly performant integration. Our secret sauce is how we put these things together at a very low level, with transactional integrity, to enable a single integrated system. This composition that we put together is open source, so that all of the pieces of our data platform are available in our open source repository, and people can see the source code right now.
I’m intensely worried about cloud cannibalization. I switched to an AGPL license specifically to protect against cannibalization by cloud vendors.
On the other hand, we believe we’re moving up the stack. If you look at our machine learning package, and how it’s so inextricably linked with the database, and the reference applications that we have in different segments, we’re going to be delivering more and more higher-level application functionality.
What are some of the biggest changes you’ve seen in the data landscape over the seven years you’ve been running Splice Machine?
Zweben: With the first generation of big data, it was all about data lakes, and let’s just get all the data the company has into one repository. Unfortunately, those repositories have proven, time and time again at company after company, to just be data swamps.
Data repositories work and they’re scalable, but no one is using the data, and this was a mistake for several reasons.
Instead of thinking about storing the data, companies should think about how to use the data. Start with the application and how you are going to make the application leverage new data sources.
The second reason this was a mistake is organizational: the data scientists who know AI were all centralized in one data science group, away from the application. They are not the subject matter experts for the application.
When you focus on the application and retrofit the application to make it smart and inject AI, you can get a multidisciplinary team. You have app developers, architects, subject-matter experts, data engineers and data scientists, all working together on one purpose. That is a radically more effective and productive organizational structure for modernizing applications with AI.
1. Model info…. (I’ve previously been advised against sharing serial numbers publicly so hope this is sufficient) Model Name: Mac mini Model Identifier: Macmini6,2 Processor Name: Intel Core i7 Processor Speed: 2.6 GHz
2. Yes, that’s fine. Without the monitor, I’d like £440 delivered or £420 collected from London.
3. SSD is ‘Crucial MX500 CT500MX500SSD1(Z) 500 GB (3D NAND, SATA, 2.5 Inch, Internal SSD)’ – taken from the amazon page where I bought it (I installed it myself)
4. I’m not massively keen on doing this, but aware that I have limited feedback on this forum. You can check out my eBay feedback – my username is mrcjbush – or let me know where you are as collection could be possible.
Very compact PC which has been used mainly as an HTPC. Intel 4150T, 8GB, wifi, 128GB SSD, Win10. Will consider trade with a graphics card enclosure as long as it’s TB3 compatible. Cash your way depending on model.
Having a clearout of a bunch of old computer components that have been sat doing nothing but gathering dust for the last couple of years. No warranty, as all the bits are at least two years old, but I’ve tested them today and they all appear to be working as expected. A lot of the components are top tier, so I wasn’t expecting to find anything dead.
Gigabyte G1 1070 8GB – Box and card only (no cables or manual) – £175
Palit 1070 Dual fan 8GB – Box and card only (no cables or manual) – £160
Bundle – Xeon 2603v3 Asus X99 Deluxe motherboard – Seems to have all the bits (WiFi, break out cards, back plate, box, manual etc…) – £120
Bundle – Intel G3900 8GB DDR4 (2x4GB) Asrock Z170 Fatal1ty i7 – Box, backplate and a few other little bits – £100
Corsair RM1000i – No box but comes with most cables (missing a couple of peripheral power cables for Molex/SATA; will update if I find them) – £90
Corsair RM1000x – Same as above – £90
Delivery is not included, collection from Sheffield welcome.
Pictures to come shortly.
If someone by some amazing coincidence wants all of it, then I’m happy to knock a nice amount off!