Wanted – 12-13 inch Windows Laptop

Hi all,

I’m looking for a small Windows laptop, between 12 and 13 inches, effectively for university use: Microsoft Office, internet browsing, Netflix, etc., but small/light enough to carry around and transport.

Does anyone have such a laptop they are looking to sell?

Thanks,

Ben

Location: Nottingham

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check that the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

For Sale – 6TB WD Red HDDs

I’m selling two Western Digital Red 6TB hard drives, bought from Amazon.co.uk in 2014. The drives have been used in a NAS during that time, and I’ve attached the health info of the drives for those interested. They’ve operated very well, and I’m only selling to get larger drives.

Thanks for looking.

1 SOLD

Price and currency: £125 each
Delivery: Delivery cost is included within my country
Payment method: Cash on collection, bank transfer or PayPal gift
Location: Barnet
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

For Sale – Lots of stuff: X35 G-Sync, MacBook Pro, E5-2603 v4 Xeon, iMac, Surface Laptop, 16GB 2666MHz and 2133MHz ECC memory

Hello,

I am a hoarder, and I still have a lot of stuff stored at my parents’ house that they want rid of (can’t blame them). I’m not a trader, just a hoarder, and I need some stuff gone. Offers welcome on the memory: 2x16GB 2666MHz and 2x16GB 2133MHz ECC server/workstation memory.

iMac Retina 27″, Late 2014
i5
16GB
512GB SSD
Very nice machine with a 5K screen, still looks like new

MacBook Pro
Mid 2014
i7-4770HQ
16GB
256GB SSD
Still looks like new

Surface Laptop
i5
8GB
256GB SSD
In warranty until March 2019

HP Omen X35 G-Sync display
Still sealed; I purchased two but only got around to using one, so the one for sale is still sealed (100Hz, 4ms response).

Intel Xeon E5-2603 v4
Has an HPE ProLiant case, but it can be removed if you’re not using it in an HPE server.

I can deliver items near the Sheffield area (happy to post the memory or processor).
Link to images: Imgur

Price and currency: MacBook Pro £900 / Surface Laptop £720 / Xeon E5-2603 v4 £160 / Omen X35 £660 / iMac Retina £1000
Delivery: Delivery cost is not included
Payment method: Cash on delivery preferred
Location: Sheffield
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

For Sale – Steelseries Rival 700 Optical Gaming Mouse

Hiya peeps,

Got the above for sale, having switched to a lighter/smaller mouse.

Around 18 months old from new, in good condition, and comes boxed.

Link here to the model: Rival 700 Gaming Mouse

Thanks for looking.

Price and currency: £40
Delivery: Delivery cost is included within my country
Payment method: Bank transfer/PPG
Location: Wigan
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

Data guru living with ALS modernizes industries by typing with his eyes

The couple remodeled their two-story townhouse near Guatemala City so he had everything he needed on the first floor and didn’t have to navigate stairs. Otto learned to use a trackball mouse with his foot to type with an on-screen keyboard. But it was cumbersome, and he needed Pamela nearby to move the cursor from one corner of his two 32-inch screens to another as he navigated Excel spreadsheets and Power BI dashboards.

A tracheotomy was put in his throat to help him breathe, taking away his limited speech and increasing his isolation. But when Knoke, who spends two hours a day reading blogs and researching, saw his friend Juan Alvarado’s post about the new Eye Control feature in Windows 10, he let loose with his version of a shout and immediately ordered the Tobii Eye Tracker hardware to use with the software.

Otto Knoke with his wife, daughters and sons-in-law. Photo provided by Pamela Knoke.

Alvarado, who met Knoke as a database consultant working on the ATM system Knoke had implemented, hadn’t known about Knoke’s condition until he suddenly saw him in a wheelchair one day. And fittingly, Eye Control itself began with a wheelchair.

Microsoft employees, inspired by former pro football player Steve Gleason, who had lost the use of his limbs to ALS, outfitted a wheelchair with electronic gadgets to help him drive with his eyes during the company’s first Hackathon, in 2014. The project was so popular that a new Microsoft Research team was formed to explore the potential of eye-tracking technology to help people with disabilities, leading to last year’s release of Eye Control for Windows 10.

Knoke said it was “a joy” to learn how to type with his eyes, getting the feel of having sensors track his eye movements as he navigated around the screen and rested his gaze on the elements he wanted to click. Using Eye Control and the on-screen keyboard, he now can type 12 words a minute and creates spreadsheets, Power BI dashboards and even PowerPoint presentations. Combined with his foot-operated mouse, his productivity has doubled. He plans to expand his services to the U.S., where he spent six years studying and working in the 1970s. He no longer relies on his wife’s voice, because Eye Control offers a text-to-speech function as well.

“It was frustrating trying to be understood,” Knoke said in the email interview. “After a few days of using Eye Control I became so independent that I did not need someone to interact with clients when there were questions or I needed to explain something. We have a remote session to the client’s computer, and we open Notepad and interact with each other that way.”

His wife and his nurse had learned to understand the sounds he was able to make, even with the tracheotomy restricting his vocal cords. But now he can communicate with his three grown daughters, his friends and all his customers.

Using a foot-operated mouse, Eye Control for Windows 10 and the text-to-speech function, Otto Knoke is able to communicate with his family — including his daughter, seen here — as well as with clients.

“Now when our children visit, he can be not just nodding at what they say, but he can be inside the conversation, too,” Pamela Knoke said. “He always has a big smile on his face, because he’s got his independence back.”

He’s also started texting jokes to friends again.

“It’s kind of like it brought my friend back, and it’s amazing,” Alvarado said. “Otto told me that for him, it was like eye tracking meant his arms can move again.”

Being able to text message with Eye Control has helped his business as well.

Grupo Tir, a real-estate development and telecommunications business in Guatemala, hired Knoke for several projects, including streamlining its sales team’s tracking of travel expenses with Power BI.

“Working with Otto has been amazing,” said Grupo Tir Chief Financial Officer Cristina Martinez. “We can’t really meet with him, so we usually work with texts, and it’s like a normal conversation.

“He really has no limitations, and he always is looking for new ways to improve and to help companies.”

For Sale – Logitech DiNovo Edge Keyboard and Logitech Performance MX mouse

As per the title, I am selling my Logitech DiNovo Edge keyboard and Performance MX mouse.

The keyboard is boxed with all original accessories (charging dock, power supply and Logitech Unifying USB dongle).

It works exactly as expected, and the keyboard itself is in great condition for a device which has been used pretty much daily for a few years. The only thing of note is that the lettering on a few keys has either partly or completely faded. The E, L and Left Shift keys are partly worn, and the A and S keys are almost completely gone (as shown in the photos). I was intending to get some replacement transfers for these but haven’t gotten around to it yet; they should only cost a few pounds. This doesn’t affect performance at all, of course, but it has to be mentioned as it may well be an issue for some, if not all. Connects via the included dongle or directly via Bluetooth.

I’m also including my Logitech Performance MX mouse; this just comes as it is and will be included in the box with the keyboard. Again, this has been used daily and, due to its nature, also has signs of wear, but again works perfectly. It runs on a single AA rechargeable battery (not included) and can be charged directly in the mouse via a micro USB cable, which I will include in the box, though it will not be the original one. It connects with the included Unifying dongle that comes with the keyboard.

Both the mouse and keyboard can be configured using the Logitech SetPoint software, which is easily downloaded from the Logitech site and allows you to configure a lot of the customisable buttons on both devices.

Battery life is great on both devices, and they both go roughly a month between charges with average usage. The mouse can be used whilst charging, or you can pop in another battery and charge the first at a later date. The keyboard needs to be in its dock to charge.

Both have been fully tested and are working exactly as expected prior to this listing.

Any questions, please ask, and feel free to make an offer.

Photos: IMG_2490.jpg, IMG_2497.jpg, IMG_2496.jpg, IMG_2491.jpg, IMG_2498.jpg, IMG_2499.jpg, IMG_2495.JPG

Price and currency: £100
Delivery: Delivery cost is not included
Payment method: Bank Transfer
Location: Colchester
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference

For Sale – EVGA 1080 FE & EK Water Block

Over the next few weeks (when I get time due to work!) I am slowly dismantling my PC. First up is my EVGA GTX 1080 with EK waterblock. This is an FE card. I have replaced all the thermal pads and used Thermal Grizzly paste on the chip. The card has never got above 40 degrees; it idles at 31. I will include the original housing/fan with all the screws, so you can return it to stock if needed.

Apologies for the pic, but due to the configuration I was going to remove the card only once I have a buyer, as I will have to drain the system, putting the PC out of action.

If you’re considering going to water, I may be able to help you with rads, pump/res etc.

If you need to know anything, don’t hesitate to ask.

Price and currency: £400
Delivery: Delivery cost is included within my country
Payment method: BT/PPG/Cash
Location: Chorley
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference

Data Center Scale Computing and Artificial Intelligence with Matei Zaharia, Inventor of Apache Spark

Matei Zaharia, Chief Technologist at Databricks & Assistant Professor of Computer Science at Stanford University, in conversation with Joseph Sirosh, Chief Technology Officer of Artificial Intelligence in Microsoft’s Worldwide Commercial Business


At Microsoft, we are privileged to work with individuals whose ideas are blazing a trail, transforming entire businesses through the power of the cloud, big data and artificial intelligence. Our new “Pioneers in AI” series features insights from such pathbreakers. Join us as we dive into these innovators’ ideas and the solutions they are bringing to market. See how your own organization and customers can benefit from their solutions and insights.

Our first guest in the series, Matei Zaharia, started the Apache Spark project during his PhD at the University of California, Berkeley, in 2009. His research was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in Computer Science. He is a co-founder of Databricks, which offers a Unified Analytics Platform powered by Apache Spark. Databricks’ mission is to accelerate innovation by unifying data science, engineering and business. Microsoft has partnered with Databricks to bring you Azure Databricks, a fast, easy, and collaborative Apache Spark based analytics platform optimized for Azure. Azure Databricks offers one-click set up, streamlined workflows and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts to generate great value from data faster.

So, let’s jump right in and see what Matei has to say about Spark, machine learning, and interesting AI applications that he’s encountered lately.

Video and podcast versions of this session are available at the links below. The podcast is also available from your Spotify app and via Stitcher. Alternatively, just continue reading the text version of their conversation below, via this blog post.

Joseph Sirosh: Matei, could you tell us a little bit about how you got started with Spark and this new revolution in analytics you are driving?

Matei Zaharia: Back in 2007, I started doing my PhD at UC Berkeley, and I was very interested in data center scale computing. We just saw at the time that there was an open source MapReduce implementation in Apache Hadoop, so I started early on by looking at that. Actually, the first project was profiling Hadoop workloads to identify some bottlenecks, and as part of that, we made some improvements to the Hadoop job scheduler that actually went into Hadoop, and I started working with some of the early users of it, especially Facebook and Yahoo. And what we saw across all of these is that this type of large data center scale computing was very powerful and there were a lot of interesting applications you could do with it, but the MapReduce programming model alone wasn’t really sufficient – especially for machine learning, which everyone wanted to do and where it wasn’t a good fit, but also for interactive queries and streaming and other workloads.

So, after seeing this for a while, the first project we built was the Apache Mesos cluster manager, to let you run other types of computations next to Hadoop. And then we said, you know, we should try to build our own computation engine which ended up becoming Apache Spark.

JS: What was unique about Spark?

MZ: I think there were a few interesting things about it. One of them was that it tried to be a general or unified programming model that can support many types of computations. Before the Spark project, people who wanted to do these different computations on large clusters were designing specialized engines for particular things – graph processing, SQL, custom code, ETL (which would be MapReduce) – and they were all separate projects and engines. So in Spark we kind of stepped back, looked at these, and asked whether there was any way to come up with a common abstraction that can handle these workloads. We ended up with something that was a pretty small change to MapReduce – MapReduce plus fast data sharing, which is the in-memory RDDs in Spark – and just hooking these up into a graph of computations turned out to be enough to get really good performance for all the workloads, matching the specialized engines, and also much better performance if your workload combines a bunch of steps. So that is one of the things.
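
To see the “MapReduce plus fast data sharing” idea concretely, here is a minimal PySpark sketch (the file path and log format are hypothetical): a dataset is loaded once, cached in memory, and reused across two computations instead of being re-read for each step, as chained MapReduce jobs would require.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-sharing-sketch").getOrCreate()

# Load once and cache in memory -- the "fast data sharing" between steps.
lines = spark.sparkContext.textFile("hdfs:///logs/events.txt")  # hypothetical path
errors = lines.filter(lambda l: "ERROR" in l).cache()

# Two different computations reuse the same in-memory dataset,
# instead of each one re-reading the input from disk.
total = errors.count()
by_first_field = (errors.map(lambda l: (l.split()[0], 1))
                        .reduceByKey(lambda a, b: a + b)
                        .collect())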

I think the other thing which was important is, having a unified engine, we could also have a very composable API where a lot of the things you want to use would become libraries, so now there are hundreds, maybe thousands, of third-party packages that you can use with Apache Spark which just plug into it and that you can combine into a workflow. Again, none of the earlier engines had focused on establishing a platform and an ecosystem, but that’s why it’s really valuable to users and developers: just being able to pick and choose libraries and combine them.

JS: Machine Learning is not just one single thing, it involves so many steps. Now Spark provides a simple way to compose all of these through libraries in a Spark pipeline and build an entire machine learning workflow and application. Is that why Spark is uniquely good at machine learning?

MZ: I think it’s a couple of reasons. One reason is much of machine learning is preparing and understanding the data, both the input data and also actually the predictions and the behavior of the model, and Spark really excels at that ad hoc data processing using code – you can use SQL, you can use Python, you can use DataFrames, and it just makes those operations easy, and, of course, all the operations you do also scale to large datasets, which is, of course, important because you want to train machine learning on lots of data.
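
To make that concrete, here is a small hedged sketch (the input path and column names are invented) of the kind of mixed DataFrame-and-SQL data preparation described above:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prep-sketch").getOrCreate()

# Hypothetical input: raw click events, some with missing fields.
raw = spark.read.json("s3://example-bucket/clicks/")
clean = (raw.dropna(subset=["user_id", "ts"])
            .withColumn("hour", F.hour(F.col("ts").cast("timestamp"))))

# The same data is equally reachable from SQL.
clean.createOrReplaceTempView("clicks")
spark.sql("SELECT hour, COUNT(*) AS n FROM clicks GROUP BY hour ORDER BY hour").show()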

Beyond that, it does support iterative in-memory computation, so many algorithms run pretty well inside it, and because of this support for composition and this API where you can plug in libraries, there are also quite a few libraries you can plug in that call external compute engines that are optimized to do different types of numerical computation.
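
The composition he mentions is easiest to see in MLlib’s Pipeline API, where feature preparation and an iterative algorithm chain together as stages. A minimal, self-contained sketch with a toy inline dataset standing in for real training data:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Tiny inline dataset standing in for real training data.
docs = spark.createDataFrame(
    [("spark is great", 1.0), ("slow batch job", 0.0)],
    ["text", "label"])

tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)  # iterative; benefits from in-memory data

model = Pipeline(stages=[tokenizer, tf, lr]).fit(docs)
model.transform(docs).select("text", "prediction").show()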

JS: So why didn’t some of these newer deep learning toolsets get built on top of Spark? Why were they all separate?

MZ: That’s a good question. I think a lot of the reason is probably just because people, you know, just started with a different programming language. A lot of these were started with C++, for example, and of course, they need to run on the GPU using CUDA which is much easier to do from C++ than from Java. But one thing we’re seeing is really good connectors between Spark and these tools. So, for example, TensorFlow has a built-in Spark connector that can be used to get data from Spark and convert it to TFRecords. It also actually connects to HDFS and different sorts of big data file systems. At the same time, in the Spark community, there are packages like deep learning pipelines from Databricks and quite a few other packages as well that let you setup a workflow of steps that include these deep learning engines and Spark processing steps.
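
As an illustration, the export step might look roughly like this with the spark-tensorflow-connector package – a sketch that assumes the connector is on the cluster’s classpath; the format and option names are from that project’s docs and can vary between versions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tfrecords-sketch").getOrCreate()

# Stand-in DataFrame of training features (real pipelines would build this upstream).
features_df = spark.createDataFrame([(1.0, 0.0), (0.0, 1.0)], ["x", "label"])

(features_df.write
    .format("tfrecords")               # format registered by the connector
    .option("recordType", "Example")   # rows serialized as tf.train.Example
    .save("/tmp/train.tfrecords"))     # hypothetical output path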

“None of the earlier engines [prior to Apache Spark] had focused on establishing a platform and an ecosystem.”

JS: If you were rebuilding these deep learning tools and frameworks, would you recommend that people build them on top of Spark? (i.e., instead of the current approach, where each tool handles distributed computing across GPUs on its own.)

MZ: It’s a good question. I think initially it was easier to write GPU code directly, to use CUDA and C++ and so on. Over time, the community has been adding features to Spark that will make it easier to do that in there. So, there have definitely been a lot of proposals and design work to make GPUs a first-class resource. There’s also this effort called Project Hydrogen, which is to change the scheduler to support these MPI-like batch jobs. So hopefully it will become a good platform to do that internally. I think one of the main benefits, again for users, is that they can program in one programming language, learn just one way to deploy and manage clusters, and do deep learning and the data preprocessing and analytics after that.

JS: That’s great. So, Spark – and Databricks as commercialized Spark – seems to be capable of doing many things in one place. But what is it not good at? Can you share some areas where people should not be stretching Spark?

MZ: Definitely. One of the things it doesn’t do, by design, is transactional workloads where you have fine-grained updates. So, even though it might seem like you can store a lot of data in memory and then update it and serve it, it is not really designed for that. It is designed for computations that have a large amount of data in each step. So, it could be streaming large continuous streams, or it could be batch, but it is not these point queries.
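
That distinction shows up directly in the API: a Structured Streaming query processes each micro-batch as a bulk computation over newly arrived data rather than serving fine-grained point reads and writes. A minimal sketch (the socket source is used purely for illustration):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# A continuous stream of text lines; each trigger processes a batch of new data.
lines = (spark.readStream.format("socket")
              .option("host", "localhost")
              .option("port", 9999)
              .load())

counts = (lines.select(F.explode(F.split("value", " ")).alias("word"))
               .groupBy("word")
               .count())

query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()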

And I would say the other thing it does not do is it doesn’t have a built-in persistent storage system. It is designed to be just a compute engine, and you can connect it to different types of storage; that actually makes a lot of sense, especially in the cloud, with separating compute and storage and scaling them independently. But it is different from, you know, something like a database where the storage and compute are co-designed to live together.

JS: That makes sense. What do you think of frameworks like Ray for machine learning?

MZ: There are a lot of new frameworks coming out for machine learning, and it’s exciting to see the innovation there, both in the programming models and the interfaces and how you work with them. So I think Ray has been focused on reinforcement learning, where one of the main things you have to do is spawn a lot of little independent tasks, so it’s a bit different from a big data framework like Spark where you’re doing one computation on lots of data – these are separate computations that will take different amounts of time. As far as I know, users are starting to use it and getting good traction, so it will be interesting to see how these things come about.

I think the thing I’m most interested in, both for Databricks products and for Apache Spark, is just enabling it to be a platform where you can combine the best algorithms, libraries and frameworks and so on, because that’s what seems to be most valuable to end users: they can orchestrate a workflow and program it as easily as writing a single-machine application where you just import a bunch of libraries.

JS: Now, stepping back, what do you see as the most exciting applications that are happening in AI today?

MZ: Yeah, it depends on how recent. I mean, in the past five years, deep learning is definitely the thing that has changed a lot of what we can do, and, in particular, it has made it much easier to work with unstructured data – so images, text, and so on. So that is pretty exciting.

I think, honestly, for like wide consumption of AI, the cloud computing AI services make it significantly easier. So, I mean, when you’re doing machine learning AI projects, it’s really important to be able to iterate quickly because it’s all about, you know, about experimenting, about finding whether something will work, failing fast if a particular idea doesn’t work. And I think the cloud makes it much easier.

JS: Cloud AI is super exciting, I completely agree. Now, at Stanford, being a professor, you must see a lot of really exciting pieces of work that are going on, both at Stanford and at startups nearby. What are some examples?

MZ: Yeah, there are a lot of different things. One of the things that is really useful for end users is all the work on transfer learning, and in general all the work that lets you get good results with AI using smaller training datasets. There are other approaches as well like weak supervision that do that as well. And the reason that’s important is that for web-scale problems you have lot of labeled data, so for something like web search you can solve it, but for many scientific or business problems you don’t have that, and so, how can you learn from a large dataset that’s not quite in your domain like the web and then apply to something like, say, medical images, where only a few hundred patients have a certain condition so you can’t get a zillion images. So that’s where I’ve seen a lot of exciting stuff.

But yeah, there’s everything from new hardware for machine learning, where you throw away the constraints that the computation has to be precise and deterministic, to new applications, to things like, for example, security of AI, adversarial examples and verifiability – I think they are all pretty interesting things you can do.

JS: What are some of the most interesting applications you have seen of AI?

MZ: So many different applications to start with. First of all, we’ve seen consumer devices that bring AI into every home, or every phone, or every PC – these have taken off very quickly and it’s something that a large fraction of customers use, so that’s pretty cool to see.

In the business space, probably some of the more exciting things are actually dealing with image data, where, using deep learning and transfer learning, you can actually start to reliably build classifiers for different types of domain data. So, whether it’s maps, understanding satellite images, or even something as simple as people uploading images of a car to a website and you try to give feedback on that so it’s easier to describe it, a lot of these are starting to happen. So, it’s kind of a new class of data, visual data – we couldn’t do that much with it automatically before, and now you can get both like little features and big products that use it.

JS: So what do you see as the future of Databricks itself? What are some of the innovations you are driving?

MZ: Databricks, for people not familiar, offers basically a Unified Analytics Platform, where you can work with big data, mostly through Apache Spark, and collaborate on it within an organization – so you can have different people developing, say, notebooks to perform computations, and people developing production jobs, and you can connect these together into workflows, and so on.

So, we’re doing a lot of things to further expand on that vision. One of the things that we announced recently is what we call the machine learning runtime, where we have preinstalled versions of popular machine learning libraries like XGBoost or TensorFlow or Horovod on your Databricks cluster, so you can set those up as easily as you could set up an Apache Spark cluster in the past. And then another product that we featured a lot at our Spark Summit conference this year is Databricks Delta, which is basically a transactional data management layer on top of cloud object stores that lets us do things like indexing and reliable exactly-once stream processing at very massive scale, and that’s a problem that all our users have, because all our users have to set up a reliable data ingest pipeline.
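
In code, the Delta pattern described here looks roughly like the following – a hedged sketch assuming a cluster where the Delta format is available, with hypothetical paths and a stand-in DataFrame:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()
events_df = spark.range(100).withColumnRenamed("id", "event_id")  # stand-in events

# Transactional, append-only ingest into a Delta table on object storage.
(events_df.write
    .format("delta")
    .mode("append")
    .save("/mnt/datalake/events"))     # hypothetical path

# Batch and streaming readers then see a consistent, transactional view,
# which is what enables reliable exactly-once stream processing downstream.
stream = spark.readStream.format("delta").load("/mnt/datalake/events")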

JS: Who are some of the most exciting customers of Databricks and what are they doing?

MZ: There are a lot of really interesting customers doing pretty cool things. So, at our conference this year, for example, one of the really cool presentations we saw was from Apple. So, Apple’s internal information security group – this is the group that does network monitoring, basically gets hundreds of terabytes of network events per day to process, to detect intrusions and information security problems. They spoke about using Databricks Delta and streaming with Apache Spark to handle all of that – so it’s one of the largest applications people have talked about publicly, and it’s very cool because the whole goal there – it’s kind of an arms race between the security team and attackers – so you really want to be able to design new rules, new measurements and add new data sources quickly. And so, the ease of programming and the ease of collaborating with this team of dozens of people was super important.

We also have some really exciting health and life sciences applications, so some of these are actually starting to discover new drugs that companies can actually productionize to tackle new diseases, and this is all based on large scale genomics and statistical studies.

And there are a lot of fun applications as well. Take the largest video game in the world, League of Legends: they use Databricks and Apache Spark to detect players that are misbehaving or to recommend items to people, things like that. These are all things that were featured at the conference.

JS: If you had one piece of advice for developers and customers using Spark or Databricks, or guidance on what they should learn, what would it be?

MZ: It’s a good question. There are a lot of high-quality training materials online, so I would say definitely look at some of those for your use case and see what other people are doing in that area. The Spark Summit conference is also a good way to see videos and talks and we make all of those available for free, the goal of that is to help and grow the developer community. So, look for someone who is doing similar things and be inspired by that and kinda see what the best practices are around that, because you might see a lot of different options for how to get started and it can be hard to see what the right path is.

JS: One last question – in recent years there’s been a lot of fear, uncertainty and doubt about AI in the popular press. How real are those concerns, and what do you think people should be thinking?

MZ: That’s a good question. My personal view is – this sort of evil artificial general intelligence stuff – we are very far away from it. And basically, if you don’t believe that, I would say just try doing machine learning tutorials and see how these models break down – you get a sense for how difficult that is.

But there are some real challenges that will come from AI, so I think one of them is the same challenge as with all technology which is, automation – how quickly does it happen. Ultimately, after automation, people usually end up being better off, but it can definitely affect some industries in a pretty bad way and if there is no time for people to transition out, that can be a problem.

I think the other interesting set of problems, which there is always discussion about, is basically access to data, privacy, managing the data, algorithmic discrimination – I think we are still figuring out how to handle those. Companies are doing their best, but there are also many unknowns as to how these techniques will behave, which is why we’ll see better best practices or regulations and things like that.

JS: Well, thank you Matei, it’s simply amazing to see the innovations you have driven, and looking forward to more to come.

MZ: Thanks for having me.

“When you’re doing machine learning AI projects, it’s really important to be able to iterate quickly because it’s all about experimenting, about finding whether something will work, failing fast if a particular idea doesn’t work. And I think the cloud makes it much easier.”

We hope you enjoyed this blog post. This being our first episode in the series, we are eager to hear your feedback, so please share your thoughts and ideas below.

The AI / ML Blog Team

Azure preparedness for Hurricane Florence

As Hurricane Florence continues its journey to the mainland, our thoughts are with those in its path. Please stay safe. We’re actively monitoring Azure infrastructure in the region. We at Microsoft have taken all precautions to protect our customers and our people.

Our datacenters (US East, US East 2, and US Gov Virginia) have been reviewed internally and externally to ensure that we are prepared for this weather event. Our onsite teams are prepared to switch to generators if utility power is unavailable or unreliable. All our emergency operating procedures have been reviewed by our team members across the datacenters, and we are ensuring that our personnel have all necessary supplies throughout the event.

As a best practice, all customers should review their disaster recovery plans, and all mission-critical applications should be taking advantage of geo-replication.
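
One hedged way to sanity-check this (an illustrative sketch, not an official Microsoft tool; it assumes the azure-identity and azure-mgmt-storage Python packages, and the subscription ID is a placeholder) is to list your storage accounts and flag any that are not on a geo-redundant SKU:

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Flag any storage account whose SKU is not a geo-redundant variant
# (e.g. Standard_GRS or Standard_RAGRS).
for account in client.storage_accounts.list():
    if "GRS" not in account.sku.name:
        print(f"{account.name} ({account.location}) is not geo-replicated")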

Rest assured that Microsoft is focused on the readiness and safety of our teams, as well as our customers’ business interests that rely on our datacenters. 

You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

If there is any change to the situation, we will keep customers informed of Microsoft’s actions through this announcement.

For guidance on disaster recovery best practices, see the references below: