For Sale – Boxed Intel i7 8700K CPU

Discussion in ‘Desktop Computer Classifieds’ started by Simplybefree, Aug 21, 2019 at 6:48 PM.

  1. Simplybefree


    Active Member

    Nov 18, 2009

    Having upgraded (side-graded?) to an Intel i7 9700k, I now have my Intel i7 8700k for sale. Includes the original box.

    Payment via PPG or BT, entirely up to you. Thank you!

    Price and currency: 275
    Delivery: Delivery cost is included within my country
    Payment method: BT or PPG
    Location: Wiltshire
    Advertised elsewhere?: Advertised elsewhere
    Prefer goods collected?: I have no preference

    This message is automatically inserted in all classifieds forum threads.
    By replying to this thread you agree to abide by the trading rules detailed here.
    Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

    • Landline telephone number. Make a call to check that the area code and number are correct, too
    • Name and address including postcode
    • Valid e-mail address

    DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.


Announcing Windows 10 Insider Preview Build 18965 | Windows Experience Blog

UPDATE 8/22: Hello Windows Insiders, we have released Windows 10 Insider Preview Build 18965.1005 (KB4517787) to Windows Insiders in the Fast ring. This Cumulative Update is a little bit bigger as it is designed to help us test our servicing pipeline with larger updates. This update contains nothing new. We simply updated the version numbers of several components in the OS.

Hello Windows Insiders, today we’re releasing Windows 10 Insider Preview Build 18965 (20H1) to Windows Insiders in the Fast ring.
IMPORTANT: As is normal with builds early in the development cycle, these builds may contain bugs that might be painful for some. If you take this flight, you won’t be able to switch to the Slow or Release Preview rings without doing a clean install on your PC.
If you want a complete look at what build is in which Insider ring, head over to Flight Hub. You can also check out the rest of our documentation here including a complete list of new features and updates that have gone out as part of Insider flights for the current development cycle.
Not seeing any of the features in this build? Check your Windows Insider Settings to make sure you’re on the Fast ring. Submit feedback here to let us know if things weren’t working the way you expected.

Control over restarting apps at sign-in
As some of you already know, apps have the ability to register for restart, which helps in a few situations, including enabling you to get back to what you were doing if you need to restart your PC. Previously this option was tied to the “Use my sign-in info to automatically finish setting up my device” option under Sign-in options in accounts settings. We’ve heard feedback that some of you would prefer more explicit control over when Windows automatically restarts apps that were open when you restart your PC, and with 20H1 we’re bringing that option to you.
Windows now puts you in control with a new app restart setting. When turned on, Windows automatically saves your restartable apps when signing out, restarting, or shutting down Windows, and restarts them next time you sign in. This setting is off by default and you can change it any time in Settings > Accounts > Sign-in options, and can find it by searching for “restart apps” in Start or the search box.

Feedback Hub updates
The Feedback Hub team has been hard at work lately to bring you some app updates based on your feature requests, and we have a few changes and improvements to share about the latest version that’s currently rolling out to Insiders in the Fast ring.
Feedback Search UI updates
On the Feedback section of the app, you will now more clearly see the differentiation between Problems and Suggestions, with an icon, a color, and a Problem or Suggestion label displayed above each feedback entry. We have also updated the iconography and displays for adding similar feedback to problems, upvoting suggestions, and adding comments to feedback.

Adding similar feedback
In the past, Feedback Hub allowed two kinds of participation on feedback: upvoting and adding more details. The notion of voting on Suggestions makes a lot of sense – engineers at Microsoft want to know which features the community wants us to build next, and voting on feature suggestions is a great way to see where your interest lies.
However, voting on Problems was trickier – problems are not a matter of popularity, and what helps engineers here resolve problems is having clear descriptions of how the issue arose. Feedback is especially helpful if it includes a reproduction of the problem, diagnostics that help our teams pinpoint what went wrong so they can fix issues faster. Voting on the search page often did not provide enough detail as to what was happening, and we saw that few people went into existing feedback to add their personal reproductions of the issues.
With the new Add similar feedback feature, selecting a problem with symptoms that match your own will take you to the feedback form, with the title pre-filled. You can edit the title or add your own description to let us know exactly what was happening when you encountered the problem. We’ll already have the right category selected to ensure the right feature team sees your feedback, and in our new Similar Feedback section, the feedback you selected will already be checked. As usual, the last step involves optionally adding your own reproduction of the issue or any attachments you like.
Windows Insiders Achievements 
We are excited to announce a refresh of the Windows Insider Achievements page. We’ve made achievements more discoverable by moving them from your profile page to their own landing page, and we added additional features that allow you to categorize and track your progress. Be sure to check it out today and begin unlocking badges. We would love to hear your feedback on social media by using the hashtag #Builds4Badges.

The information previously found in your profile (Device ID and User ID) is now located in the settings section of Feedback Hub.
As always, we appreciate your feedback – if you have any suggestions or problems to report, you can share them in the Feedback Hub under Apps > Feedback Hub.

• We fixed an issue resulting in the screens shown while updating Windows unexpectedly saying “managed by your organization” for some Insiders.
• We fixed an issue resulting in the taskbar unexpectedly hiding sometimes when launching the touch keyboard.
• We fixed an issue where some of the colors weren’t correct in Language Settings if using High Contrast White.
• We fixed an issue that could result in background tasks not working in certain apps.
• We fixed an issue where, if you set focus to the notification area of the taskbar via WIN+B, then opened a flyout and pressed Esc to close it, the focus rectangle would no longer show up correctly.
• We fixed an issue where on the Bluetooth & Other Settings page, the device type wasn’t read out loud when using a screen reader.
• We fixed an issue resulting in help links not being accessible when adding a new wireless display device on the Bluetooth & Other Settings page if the text scaling was set to 200%.

• Insiders may notice a new “Cloud download” option in the Windows Recovery Environment (WinRE) under “Reset this PC.” This feature is not working quite yet. We’ll let you know once it is, so you can try it out!
• There is an issue where older versions of anti-cheat software used with games can cause PCs to crash after updating to the latest 19H1 Insider Preview builds. We are working with partners on getting their software updated with a fix, and most games have released patches to prevent PCs from experiencing this issue. To minimize the chance of running into this issue, please make sure you are running the latest version of your games before attempting to update the operating system. We are also working with anti-cheat and game developers to resolve similar issues that may arise with the 20H1 Insider Preview builds and will work to minimize the likelihood of these issues in the future.
• Some Realtek SD card readers are not functioning properly. We are investigating the issue.
• We’re working on a fix for an issue resulting in the minimize, maximize, and close title bar buttons not working for certain apps. If you’re using an impacted app, Alt+F4 should work as expected to close the app if needed.
• Some WSL distros will not load (Issue #4371).
• We’re investigating reports that DWM is using unexpectedly high system resources for some Insiders.
• There’s an issue impacting a small number of Insiders which started on the previous flight, involving an lsass.exe crash and resulting in a message saying, “Windows ran into a problem and needs to restart.” We’re working on a fix and appreciate your patience.
• [Added] Text on Devices pages in Settings for “Bluetooth and Other Devices” and “Printers and Scanners” isn’t rendering correctly.
• [Added] Search isn’t working for Insiders using certain display languages, including Polish. If you are impacted by this, switching your display language to English then back to your preferred display language should resolve it.

Allergy season got you down? Stay prepared with Bing! Check out current and future pollen counts for various locations. Whether you’re planning a trip to a different city or getting ready for a local outing, use Bing to see current levels of tree, grass, and ragweed pollen.
If you want to be among the first to learn about these Bing features, join our Bing Insider Program.
No downtime for Hustle-As-A-Service,
Dona

Hackathons show teen girls the potential for AI – and themselves – AI for Business

This summer, young women in San Francisco and Seattle spent a weekend taking their creative problem solving to a whole new level through the power of artificial intelligence. The two events were part of a Microsoft-hosted AI boot-camp program that started last year in Athens, then broadened its reach with events in London last fall and New York City in the spring.

“I’ve been so impressed not only with the willingness of these young women to spend an entire weekend learning and embracing this opportunity, but with the quality of the projects,” said Didem Un Ates, one of the program organizers and a senior director for AI within Microsoft. “It’s just two days, but what they come up with always blows our minds.” (Read a LinkedIn post from Un Ates about the events.)

The problems these girls tackled aren’t kid stuff: the participants chose their weekend projects from among the U.N. Sustainable Development Goals, considered among the most difficult and highest-priority challenges facing the world.

The result? Dozens of innovative products that could help solve issues as diverse as ocean pollution, dietary needs, mental health, acne and climate change. Not to mention all those young women – 129 attended the U.S. events – who now feel empowered to pursue careers to help solve those problems. They now see themselves as “Alice,” a mascot created by the project team to represent the qualities young women possess that lend themselves to changing the world through AI.

Organizers plan to broaden the reach of these events, so that girls everywhere can learn about the possibility of careers in technology.


Author: Microsoft News Center

Pester tests help pinpoint infrastructure issues

Troubleshooting is a fact of life for the Windows administrator. Something is broken that prohibits an employee from doing important work, and it’s your job to find out what’s wrong and fix it fast.

There are many troubleshooting approaches. At one end, you have frenzied clicking around multiple tools, desperately trying to spot something that offers a clue. This hope-based approach is not the best. At the other end, you can employ a known set of tests that enable you to work methodically through the issues and get to the root of the problem. The methodical approach gets you to the answer, but the downside is that it requires multiple steps. The ideal approach combines the best of both worlds: automated troubleshooting.

Enter the Pester module for PowerShell, which provides a testing framework for PowerShell code and infrastructure setups. The advantage of troubleshooting with Pester tests is that the tests always perform in a consistent manner, which means other members of the IT team can use them to run tests. One way to extend the concept is to add suggestions for possible remedies when a test fails to teach junior admins how to troubleshoot.
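One way to sketch that idea (everything here is illustrative: the test file path, the test names, and the remedy text are hypothetical, and a Pester v4-style result object is assumed):

```powershell
## Hypothetical sketch: map each Describe block name to a remedy hint,
## then print the hint for any failed test so junior admins know what to try next.
$remedies = @{
    'Loopback Adapter' = 'Check that TCP/IP is bound to the adapter and the network stack is healthy.'
    'Server Name'      = 'Check DNS resolution with Resolve-DnsName, then confirm the server is powered on.'
}

# Run the tests quietly and capture the result object
$result = Invoke-Pester -Script @{Path = '.\Tests\NetworkTests.ps1'} -PassThru -Show None

foreach ($failure in ($result.TestResult | Where-Object {$_.Result -eq 'Failed'})) {
    Write-Warning "Test '$($failure.Name)' failed."
    if ($remedies.ContainsKey($failure.Describe)) {
        Write-Warning "Suggested remedy: $($remedies[$failure.Describe])"
    }
}
```

The remedy table doubles as lightweight documentation of your team's troubleshooting knowledge.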

The anatomy of a common troubleshooting scenario

Consider a common troubleshooting scenario: A user can’t connect to a server. You might try several tests and hope to see the following results. First, test the user’s network card by testing the loopback address.

Test-Connection 127.0.0.1 -Quiet

Then, test the local machine’s IP address (replace the placeholder with your machine’s address).

Test-Connection <local IP address> -Quiet

Test the server’s IP address (again, substitute your own value).

Test-Connection <server IP address> -Quiet

Test the connection via the server’s name. This also tests the DNS resolution from the client.

Test-Connection W19FS01 -Quiet

Figure 1 shows the expected results.

Testing network connectivity
Figure 1. Testing the network connectivity to the server named W19FS01 returns the expected results.

How could we go about automating a suite of tests of this sort? One way is to use the PowerShell Pester module. Windows PowerShell v5.1 ships with Pester v3.4; you must upgrade Pester if you use Windows PowerShell, or install the latest version if you work with PowerShell Core.

On Windows PowerShell, use the following.

Install-Module -Name Pester -Force -SkipPublisherCheck

This will install the latest version of Pester even though there is a version preinstalled with the OS.

On PowerShell Core, use the following.

Install-Module -Name Pester -Scope AllUsers -Force

Once you’ve installed Pester from the gallery, you can update it on any version of PowerShell with the following.

Update-Module -Name Pester -Force

PowerShell Core v6.2 has a problem of installing modules from the gallery into the C:\Users\<username>\Documents\PowerShell\Modules folder by default. When you use Install-Module, you can override this behavior using the AllUsers scope. Update-Module doesn’t have a Scope parameter, so the new version of the module ends up in the user area rather than the C:\Program Files\PowerShell\Modules folder. You should keep modules there to make them accessible to all users on the system. Until the PowerShell team resolves this issue, it may be best on PowerShell Core to delete old versions of the module and reinstall into the AllUsers scope rather than using Update-Module.
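The workaround can be sketched in two lines; note this is a suggestion rather than an official fix, and it removes every installed Pester version before reinstalling:

```powershell
## From an elevated PowerShell Core session: remove the copies that ended up
## in the user scope, then reinstall the module for all users.
Uninstall-Module -Name Pester -AllVersions -ErrorAction SilentlyContinue
Install-Module -Name Pester -Scope AllUsers -Force -SkipPublisherCheck
```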

Building Pester tests with Windows PowerShell

In the following example, I use Windows PowerShell for the more polished version of Test-Connection and easier access to other modules. You can use the Windows Compatibility module for PowerShell Core to use the required modules.

Let’s create the first test, pinging the loopback adapter.

## test loopback adapter
Describe 'Loopback Adapter' {
    It 'Loop back adapter should be pingable' {
        Test-Connection -ComputerName 127.0.0.1 -Quiet -Count 1 |
            Should Be $true
    }
}

The Describe keyword creates the test container with a name, analogous to defining and supplying a name to a function. The It keyword creates a test. The text following It gets echoed back when displaying the results. You can have multiple tests in a single container; for this article, I will restrict tests to one per container.

Within the It block, the syntax reduces to the following.

<test> | Should <expected result>

Our example is a test.

Test-Connection -ComputerName 127.0.0.1 -Quiet -Count 1

And here is a desired result.

Should Be $true

Most of the tests you’ll design for troubleshooting purposes will have the following result.

Should Be <expected value>

The Pester documentation (about_Pester and about_should) explains how to create other tests.
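For reference, a few of the other operators described in about_Should look like this (the values being tested are invented for illustration):

```powershell
Describe 'Other Should operators' {
    It 'matches a regular expression' {
        'PowerShell' | Should Match '^Power'
    }
    It 'compares numeric values' {
        (2 + 2) | Should BeGreaterThan 3
    }
    It 'checks that a file exists' {
        'C:\Windows\System32\cmd.exe' | Should Exist
    }
}
```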

When you run the test, you should see these results.

Describing Loopback Adapter
[+] Loop back adapter should be pingable 1.04s

The first line echoes the container name and then each test in the container is run — with success shown by [+] or a failure by [-]. The text from the It statement is echoed back with the time taken to run the test.

Let’s add some more tests.

## test local adapter
Describe 'Local Adapter' {
    It 'Local adapter should be pingable' {
        Test-Connection -ComputerName <local IP address> -Quiet -Count 1 |
            Should Be $true
    }
}

## test server adapter
Describe 'Server Adapter' {
    It 'Server adapter should be pingable' {
        Test-Connection -ComputerName W19FS01 -Quiet -Count 1 |
            Should Be $true
    }
}

The first test is for the local adapter, and the second test is for the server adapter. You’d probably want to test the default gateway in between the two tests, but my test lab doesn’t have one.

Running the three tests gives these results.

Describing Loopback Adapter
[+] Loop back adapter should be pingable 82ms

Describing Local Adapter
[+] Local adapter should be pingable 74ms

Describing Server Adapter
[-] Server adapter should be pingable 3.07s
Expected $true, but got $false.
20: Should Be $true

The first two tests passed, but pinging the server adapter failed. Notice the results included the expected result and the actual result.

The test that pings the server adapter actually performs two tests. It tests the resolution of the name of the server to an IP address and then pings that IP address. You should only test one thing at a time to determine the problem more accurately.

The tests should be changed to the following.

## test loopback adapter
Describe 'Loopback Adapter' {
    It 'Loop back adapter should be pingable' {
        Test-Connection -ComputerName 127.0.0.1 -Quiet -Count 1 |
            Should Be $true
    }
}

## test local adapter
Describe 'Local Adapter' {
    It 'Local adapter should be pingable' {
        Test-Connection -ComputerName <local IP address> -Quiet -Count 1 |
            Should Be $true
    }
}

## test server IP address
Describe 'Server IP address' {
    It 'Server IP address should be pingable' {
        Test-Connection -ComputerName <server IP address> -Quiet -Count 1 |
            Should Be $true
    }
}

## test server name
Describe 'Server Name' {
    It 'Server name should be pingable' {
        Test-Connection -ComputerName W19FS01 -Quiet -Count 1 |
            Should Be $true
    }
}

The first and second tests remain the same, while the third test is now split into a check of the server’s IP address followed by a separate check of name resolution and ping. Running these tests produced the following results.

Describing Loopback Adapter
[+] Loop back adapter should be pingable 104ms

Describing Local Adapter
[+] Local adapter should be pingable 87ms

Describing Server IP address
[-] Server IP address should be pingable 3.65s
Expected $true, but got $false.
21: Should Be $true

Describing Server Name
[-] Server name should be pingable 3.66s
Expected $true, but got $false.
29: Should Be $true

The first two tests passed, but both the third and fourth tests failed.

How to fine-tune troubleshooting with Pester tests

Windows troubleshooting is often an iterative process; you find one issue and resolve it, only to uncover another issue. In our example, you’d ideally want the tests to stop after the first failure. The tests on the server failed because it’s switched off. If you discover that you can’t ping the server, then the next step would be to check if it’s running.

If you run a file containing Pester tests, all the tests in the file will run even if some of the tests fail. To stop after the first failure, you must run the tests individually, which requires you to call the tests from another script.

Save the Pester tests as PingTests.ps1 in a folder called TroubleShooting; the files containing the tests go in a subfolder called Tests.

Make a second script called Test-ServerPing.ps1 in the Troubleshooting folder with the following.

$tests = 'Loopback Adapter', 'Local Adapter', 'Server IP address', 'Server Name'

$data = foreach ($test in $tests) {
    $result = $null
    $result = Invoke-Pester -Script @{Path = 'C:\Scripts\TroubleShooting\Tests\PingTests.ps1'} -PassThru -TestName "$test" -Show None

    $props = [ordered]@{
        'Test'            = $result.TestResult.Name
        'Result'          = $result.TestResult.Result
        'Failure Message' = $result.TestResult.FailureMessage
    }
    New-Object -TypeName PSObject -Property $props

    if ($result.FailedCount -gt 0) {break}
}

$data | Format-Table -AutoSize -Wrap

These tests are from each of the Describe statements shown earlier. We use a foreach loop to run each test from the PingTests.ps1 file. The results from each test create an output object. If a failure occurs, the testing stops so you can see where the first failure occurred without spending time on further irrelevant — at this stage — testing.

 Running Test-ServerPing.ps1 produces these results.

Test                                 Result Failure Message
----                                 ------ ---------------
Loop back adapter should be pingable Passed
Local adapter should be pingable     Passed
Server IP address should be pingable Failed Expected $true, but got $false.

Currently, the IP addresses and server name are hardcoded. Hardcoding the loopback adapter address is acceptable because it’s always 127.0.0.1. The other tests must be made generic. The easiest way to do that is to create a variable in Test-ServerPing.ps1 that stores the required information. That variable will be accessible to the Pester tests because they run in a child scope of Test-ServerPing.ps1, which becomes the following.

param (
    [string]$ServerToTest
)

function Get-NetworkInformation {
    param (
        [string]$server
    )

    $nic = Get-NetAdapter -Name LAN

    $ip = Get-NetIPAddress -AddressFamily IPv4 -InterfaceAlias LAN

    $dg = Get-NetIPConfiguration -InterfaceAlias LAN

    $props = [ordered]@{
        iIndex         = $nic.InterfaceIndex
        iAlias         = $nic.InterfaceAlias
        Status         = if ($nic.InterfaceOperationalStatus -eq 1){'Up'}else{'Down'}
        IPAddress      = $ip.IPAddress
        PrefixLength   = $ip.PrefixLength
        DefaultGateway = $dg.IPv4DefaultGateway | select -ExpandProperty NextHop
        DNSserver      = ($dg.DNSServer).ServerAddresses
        Server         = $server
        ServerIP       = Resolve-DnsName -Name $server | select -ExpandProperty IPAddress
    }
    New-Object -TypeName PSObject -Property $props
}

$netinfo = Get-NetworkInformation -server $ServerToTest
$path = 'C:\Scripts\TroubleShooting\Tests\PingTests.ps1'

$tests = Get-Content -Path $path |
    Select-String -Pattern 'Describe' |
    foreach {
        (($_ -split 'Describe')[1]).Trim('{').Trim().Trim("'")
    }

$data = foreach ($test in $tests) {
    $result = $null
    $result = Invoke-Pester -Script @{Path = $path} -PassThru -TestName "$test" -Show None

    $props = [ordered]@{
        'Test'            = $result.TestResult.Name
        'Result'          = $result.TestResult.Result
        'Failure Message' = $result.TestResult.FailureMessage
    }
    New-Object -TypeName PSObject -Property $props

    if ($result.FailedCount -gt 0) {break}
}

$data | Format-Table -AutoSize -Wrap

The script takes a parameter, ServerToTest, which you use to input the server name. The Get-NetworkInformation function discovers the information required to run the tests using the Get-NetAdapter, Get-NetIPAddress and Get-NetIPConfiguration cmdlets. My network adapters are named LAN to make access easier; you should give your adapters names that are easy to work with, especially if you have multiple adapters in a system. Resolve-DnsName gets the server’s IP address from its name. The Get-NetworkInformation function populates the $netinfo variable.

The list of tests is automatically generated from the PingTests.ps1 file by using Get-Content to read the file and Select-String to find the lines with the Describe keyword.

An additional test checks if the DNS server is available, which also illustrates how $netinfo is used.

## test DNS server
Describe 'DNSServer' {
    It 'DNS server should be available' {
        Test-Connection -ComputerName $netinfo.DNSserver -Quiet -Count 1 |
            Should Be $true
    }
}


Run the script with the server name as a parameter.

.\Test-ServerPing.ps1 -ServerToTest W19FS01

The results are shown in Figure 2.

server ping test
Figure 2. When we run Test-ServerPing.ps1, the testing stops when a failure occurs.

You can add other tests, such as testing the default gateway.

The final versions of the code used for this article are available to download. A further example of using Pester for troubleshooting — this time for troubleshooting the configuration of PowerShell remoting — is available in the same repository. There, the RemotingTests.ps1 file contains the Pester tests, and the Test-Remoting.ps1 file contains the code to run the tests and stop when a problem is detected. The code assumes Windows PowerShell but can work in PowerShell Core with some modifications.

Why troubleshooting is still an important skill

Monitoring products, such as System Center Operations Manager, can report if a server or service goes offline and may remove the need for some of the troubleshooting scenarios, but there are many scenarios where this ability is still needed. Some of these include the following:

  • There isn’t a monitoring tool in place.
  • The monitoring tool doesn’t oversee the technology at the root of the problem.
  • Distributed environments have complicated networks that may not be monitored.
  • The issue is related to a configuration problem, which is outside the scope of any monitoring product.

If you adopt this troubleshooting approach, I recommend using the Test-X naming convention for the code that runs the tests. You could convert the Test-X scripts to a module that could utilize single copies of functions, such as Get-NetworkInformation, as hidden helper functions.
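A minimal sketch of that module layout might look like the following (the module name is hypothetical, and the function bodies are elided to the steps shown earlier):

```powershell
## TroubleShooting.psm1 (hypothetical): export only the Test-X commands,
## keeping Get-NetworkInformation as a hidden helper shared by all of them.
function Get-NetworkInformation {
    param ([string]$server)
    # gather NIC, IP address, default gateway, and DNS details as shown earlier
}

function Test-ServerPing {
    param ([string]$ServerToTest)
    $netinfo = Get-NetworkInformation -server $ServerToTest
    # run the Pester ping tests, stopping at the first failure
}

Export-ModuleMember -Function Test-ServerPing
```

Because only the Test-X functions are exported, the helper stays a single copy that every troubleshooting command shares.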


Sign-in and sync with work or school accounts in Microsoft Edge Insider builds – Microsoft Edge Blog

A top piece of feedback we’ve heard from Microsoft Edge Insiders is that you want to be able to roam your settings and browsing data across your work or school accounts in Microsoft Edge. Today, we’re excited to announce that Azure Active Directory work and school accounts now support sign-in and sync in the latest Canary, Dev, and Beta channel preview builds of Microsoft Edge.

By signing in with a work or school account, you will unlock two great experiences: your settings will sync across devices, and you’ll enjoy fewer sign-in prompts thanks to single sign-on (Web SSO).

When signed in with an organizational account on any preview channel, Microsoft Edge is able to sync your browser data across all your devices that are signed in with the same account. Today, your favorites, preferences, passwords, and form-fill data will sync; in future previews, we’ll expand this to support other attributes like your browsing history, installed extensions, and open tabs. You can control which available attributes to sync, once you enable the feature from the sync settings page. Sync makes the web a more personal, seamless experience across all devices—the less time you have to spend managing your experience, the more time you’ll have to get things done.
Syncing with your work or school account is currently available for AAD Premium accounts on Windows, iOS and Android devices, and coming soon to macOS.

Once you’ve signed in to your organizational account in Microsoft Edge, we’ll use those credentials to authenticate you to websites and services that support Web Single Sign-On. This helps keep you productive by cutting down on unnecessary sign-in prompts on the web. When you access web content which is authenticated with your signed in account, Microsoft Edge will simply sign you in to the website you’re trying to access.
To try this, just navigate to a website that supports Web Single Sign-On while signed into Microsoft Edge with your work or school account. Notice that you didn’t need to sign in with your username and password—you are simply authenticated to the website and can access your content immediately. This also works on other web properties that recognize the organizational account you are signed in to.

To get started with an organizational account in Microsoft Edge, all you have to do is sign in and turn on sync. Just click the profile icon to the right of your address bar and click “Sign In” (if you’re already signed in with a personal account, you’ll have to “Add a profile” first and then sign into the new profile with your work or school account.)

At the sign-in prompt, select any of your existing work or school accounts (on Windows 10) or enter your email, phone, or Skype credentials into the sign-in field (on macOS or older versions of Windows) and sign in.
Once you’re signed in, follow the prompts asking if you want to sync your browsing data to enable sync. That’s it! To learn more about sync, check out our previous article on syncing in Microsoft Edge preview channels. You can always change your settings or disable sync at any time by clicking your profile icon and selecting “Manage profile settings.”

We are excited to bring you work/school account sign-in and sync in the Microsoft Edge Insider channels. We hope to make your everyday web surfing experience a breeze. However, we want to be sure that sign-in, as well as all the personalized experiences, actually work for you. Please give sign-in a try and let us know how you like it – or not. If you run into any issues, use the in-app feedback button to submit the details. If you have other feedback about work/school account sign-in or personalized experiences, we welcome your comments below.
Thank you for helping us build the next version of Microsoft Edge that’s right for you.
– Avi Vaid, Program Manager, Microsoft Edge
[Updated 08/23/2019 to clarify availability on platforms and AAD subscription requirements – Ed]

Machine reading comprehension with Dr. T.J. Hazen

Dr. T.J. Hazen

Episode 86, August 21, 2019

The ability to read and understand unstructured text, and then answer questions about it, is a common skill among literate humans. But for machines? Not so much. At least not yet! And not if Dr. T.J. Hazen, Senior Principal Research Manager in the Engineering and Applied Research group at MSR Montreal, has a say. He’s spent much of his career working on machine speech and language understanding, and particularly, of late, machine reading comprehension, or MRC.

On today’s podcast, Dr. Hazen talks about why reading comprehension is so hard for machines, gives us an inside look at the technical approaches applied researchers and their engineering colleagues are using to tackle the problem, and shares the story of how an a-ha moment with a Rubik’s Cube inspired a career in computer science and a quest to teach computers to answer complex, text-based questions in the real world.



T.J. Hazen: Most of the questions are fact-based questions like, who did something, or when did something happen? And most of the answers are fairly easy to find. So, you know, doing as well as a human on a task is fantastic, but it only gets you part of the way there. What happened is, after this was announced that Microsoft had this great achievement in machine reading comprehension, lots of customers started coming to Microsoft saying, how can we have that for our company? And this is where we’re focused right now. How can we make this technology work for real problems that our enterprise customers are bringing in?

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: The ability to read and understand unstructured text, and then answer questions about it, is a common skill among literate humans. But for machines? Not so much. At least not yet! And not if Dr. T.J. Hazen, Senior Principal Research Manager in the Engineering and Applied Research group at MSR Montreal, has a say. He’s spent much of his career working on machine speech and language understanding, and particularly, of late, machine reading comprehension, or MRC.

On today’s podcast, Dr. Hazen talks about why reading comprehension is so hard for machines, gives us an inside look at the technical approaches applied researchers and their engineering colleagues are using to tackle the problem, and shares the story of how an a-ha moment with a Rubik’s Cube inspired a career in computer science and a quest to teach computers to answer complex, text-based questions in the real world. That and much more on this episode of the Microsoft Research Podcast.

(music plays)

Host: T.J. Hazen, welcome to the podcast!

T.J. Hazen: Thanks for having me.

Host: Researchers like to situate their research, and I like to situate my researchers, so let's get you situated. You are a Senior Principal Research Manager in the Engineering and Applied Research group at Microsoft Research in Montreal. Tell us what you do there. What are the big questions you're asking, what are the big problems you're trying to solve, what gets you up in the morning?

T.J. Hazen: Well, I’ve spent my whole career working in speech and language understanding, and I think the primary goal of everything I do is to try to be able to answer questions. So, people have questions and we’d like the computer to be able to provide answers. So that’s sort of the high-level goal, how do we go about answering questions? Now, answers can come from many places.

Host: Right.

T.J. Hazen: A lot of the systems that you’re probably aware of like Siri for example, or Cortana or Bing or Google, any of them…

Host: Right.

T.J. Hazen: …the answers typically come from structured places, databases that contain information, and for years these models have been built in a very domain-specific way. If you want to know the weather, somebody built a system to tell you about the weather.

Host: Right.

T.J. Hazen: And somebody else might build a system to tell you about the age of your favorite celebrity and somebody else might have written a system to tell you about the sports scores, and each of them can be built to handle that very specific case. But that limits the range of questions you can ask because you have to curate all this data, you have to put it into structured form. And right now, what we’re worried about is, how can you answer questions more generally, about anything? And the internet is a wealth of information. The internet has got tons and tons of documents on every topic, you know, in addition to the obvious ones like Wikipedia. If you go into any enterprise domain, you’ve got manuals about how their operation works. You’ve got policy documents. You’ve got financial reports. And it’s not typical that all this information is going to be curated by somebody. It’s just sitting there in text. So how can we answer any question about anything that’s sitting in text? We don’t have a million or five million or ten million librarians doing this for us…

Host: Right.

T.J. Hazen: …uhm, but the information is there, and we need a way to get at it.

Host: Is that what you are working on?

T.J. Hazen: Yes, that’s exactly what we’re working on. I think one of the difficulties with today’s systems is, they seem really smart…

Host: Right?

T.J. Hazen: Sometimes. Sometimes they give you fantastically accurate answers. But then you can just ask a slightly different question and it can fall on its face.

Host: Right.

T.J. Hazen: That’s the real gap between what the models currently do, which is, you know, really good pattern matching some of the time, versus something that can actually understand what your question is and know when the answer that it’s giving you is correct.

Host: Let’s talk a bit about your group, which, out of Montreal, is Engineering and Applied Research. And that’s an interesting umbrella at Microsoft Research. You’re technically doing fundamental research, but your focus is a little different from some of your pure research peers. How would you differentiate what you do from others in your field?

T.J. Hazen: Well, I think there’s two aspects to this. The first is that the lab up in Montreal was created as an offshoot of an acquisition. Microsoft bought Maluuba, which was a startup that was doing really incredible deep learning research, but at the same time they were a startup and they needed to make money. So, they also had this very talented engineering team in place to be able to take the research that they were doing in deep learning and apply it to problems where it could go into products for customers.

Host: Right.

T.J. Hazen: When you think about that need that they had to actually build something, you could see why they had a strong engineering team.

Host: Yeah.

T.J. Hazen: Now, when I joined, I wasn’t with them when they were a startup, I actually joined them from Azure where I was working with outside customers in the Azure Data Science Solution team, and I observed lots of problems that our customers have. And when I saw this new team that we had acquired and we had turned into a research lab in Montreal, I said I really want to be involved because they have exactly the type of technology that can solve customer problems and they have this engineering team in place that can actually deliver on turning from a concept into something real.

Host: Right.

T.J. Hazen: So, I joined, and I had this agreement with my manager that we would focus on real problems. They were now part of the research environment at Microsoft, but I said that doesn’t restrict us on thinking about blue sky, far-afield research. We can go and talk to product teams and say what are the real problems that are hindering your products, you know, what are the difficulties you have in actually making something real? And we could focus our research to try to solve those difficult problems. And if we’re successful, then we have an immediate product that could be beneficial.

Host: Well in any case, you’re swimming someplace in a “we could do this immediately” but you have permission to take longer, or is there a mandate, as you live in this engineering and applied research group?

T.J. Hazen: I think there’s a mandate to solve hard problems. I think that’s the mandate of research. If it wasn’t a hard problem, then somebody…

Host: …would already have a product.

T.J. Hazen: …in the product team would already have a solution, right? So, we do want to tackle hard problems. But we also want to tackle real problems. That's, at least, the focus of our team. And there's plenty of people doing blue sky research and that's an absolute need as well. You know, we can't just be thinking one or two years ahead. Research should also be thinking five, ten, fifteen years ahead.

Host: So, there’s a whole spectrum there.

T.J. Hazen: So, there's a spectrum. But there is a real need, I think, to fill that gap between taking an idea that works well in a lab and turning it into something that works well in practice for a real problem. And that's the key. And many of the problems that have been solved by Microsoft have not just been blue sky ideas, but they've come from this problem space where a real product says, ahh, we're struggling with this. So, it could be anything. It can be, like, how does Bing efficiently rank documents over billions of documents? You don't just solve that problem by thinking about it, you have to get dirty with the data, you have to understand what the real issues are. So, one of the research problems we're focusing on is, how do you answer questions out of documents when the questions could be arbitrary and on any topic? And you've probably experienced this, if you are going into a search site for your company, that company typically doesn't have the advantage of having a big Bing infrastructure behind it that's collecting all this data and doing sophisticated machine learning. Sometimes it's really hard to find an answer to your question. And, you know, the tricks that people use can be creative and inventive but oftentimes, trying to figure out what the right keywords are to get you to an answer is not the right thing.

Host: You work closely with engineers on the path from research to product. So how does your daily proximity to the people that reify your ideas as a researcher impact the way you view, and do, your work as a researcher?

T.J. Hazen: Well, I think when you're working in this applied research and engineering space, as opposed to a pure research space, it really forces you to think about the practical implications of what you're building. How easy is it going to be for somebody else to use this? Is it efficient? Is it going to run at scale? All of these problems are problems that engineers care a lot about. And sometimes researchers just say, let me solve the problem first and everything else is just engineering. If you say that to an engineer, they'll be very frustrated because you don't want to bring something to an engineer that works ten times slower than it needs to, or uses ten times more memory. So, when you're in close proximity to engineers, you're thinking about these problems as you are developing your methods.

Host: Interesting, because those two things, I mean, you could come up with a great idea that would do it and you pay a performance penalty in spades, right?

T.J. Hazen: Yeah, yeah. So, sometimes it’s necessary. Sometimes you don’t know how to do it and you just say let me find a solution that works and then you spend ten years actually trying to figure out how to make it work in a real product.

Host: Right.

T.J. Hazen: And I’d rather not spend that time. I’d rather think about, you know, how can I solve something and have it be effective as soon as possible?

(music plays)

Host: Let’s talk about human language technologies. They’ve been referred to by some of your colleagues as “the crown jewel of AI.” Speech and language comprehension is still a really hard problem. Give us a lay of the land, both in the field in general and at Microsoft Research specifically. What’s hope and what’s hype, and what are the common misconceptions that run alongside the remarkable strides you actually are making?

T.J. Hazen: I think that word we mentioned already: understand. That’s really the key of it. Or comprehend is another way to say it. What we’ve developed doesn’t really understand, at least when we’re talking about general purpose AI. So, the deep learning mechanisms that people are working on right now that can learn really sophisticated things from examples. They do an incredible job of learning specific tasks, but they really don’t understand what they’re learning.

Host: Right.

T.J. Hazen: So, they can discover complex patterns that can associate things. So in the vision domain, you know, if you’re trying to identify objects, and then you go in and see what the deep learning algorithm has learned, it might have learned features that are like, uh, you know, if you’re trying to identify a dog, it learns features that would say, oh, this is part of a leg, or this is part of an ear, or this is part of the nose, or this is the tail. It doesn’t know what these things are, but it knows they all go together. And the combination of them will make a dog. And it doesn’t know what a dog is either. But the idea that you could just feed data in and you give it some labels, and it figures everything else out about how to associate that label with that, that’s really impressive learning, okay? But it’s not understanding. It’s just really sophisticated pattern-matching. And the same is true in language. We’ve gotten to the point where we can answer general-purpose questions and it can go and find the answer out of a piece of text, and it can do it really well in some cases, and like, some of the examples we’ll give it, we’ll give it “who” questions and it learns that “who” questions should contain proper names or names of organizations. And “when” questions should express concepts of time. It doesn’t know anything about what time is, but it’s figured out the patterns about, how can I relate a question like “when” to an answer that contains time expression? And that’s all done automatically. There’s no features that somebody sits down and says, oh, this is a month and a month means this, and this is a year, and a year means this. And a month is a part of a year. Expert AI systems of the past would do this. They would create ontologies and they would describe things about how things are related to each other and they would write rules. 
And within limited domains, they would work really, really well if you stayed within a nice, tightly constrained part of that domain. But as soon as you went out and asked something else, it would fall on its face. And so, we can’t really generalize that way efficiently. If we want computers to be able to learn arbitrarily, we can’t have a human behind the scene creating an ontology for everything. That’s the difference between understanding and crafting relationships and hierarchies versus learning from scratch. We’ve gotten to the point now where the algorithms can learn all these sophisticated things, but they really don’t understand the relationships the way that humans understand it.

Host: Go back to the, sort of, the lay of the land, and how I sharpened that by saying, what’s hope and what’s hype? Could you give us a “TBH” answer?

T.J. Hazen: Well, what’s hope is that we can actually find reasonable answers to an extremely wide range of questions. What’s hype is that the computer will actually understand, at some deep and meaningful level, what this answer actually means. I do think that we’re going to grow our understanding of algorithms and we’re going to figure out ways that we can build algorithms that could learn more about relationships and learn more about reasoning, learn more about common sense, but right now, they’re just not at that level of sophistication yet.

Host: All right. Well let’s do the podcast version of your NERD Lunch and Learn. Tell us what you are working on in machine reading comprehension, or MRC, and what contributions you are making to the field right now.

T.J. Hazen: You know, NERD is short for New England Research and Development Center…

Host: I did not know that!

T.J. Hazen: …which is where I physically work.

Host: Okay…

T.J. Hazen: Even though I work closely and am affiliated with the Montreal lab, I work out of the lab in Cambridge, Massachusetts, and NERD has a weekly Lunch and Learn where people present the work they're doing, or the research that they're working on, and at one of these Lunch and Learns, I gave this talk on machine reading comprehension. Machine reading comprehension, in its simplest version, is being able to take a question and then being able to find the answer anywhere in some collection of text. As we've already mentioned, it's not really "comprehending" at this point, it's more just very sophisticated pattern-matching. But it works really well in many circumstances. And on tasks like the Stanford Question Answering Dataset, a common benchmark that people have competed on, question answering by computer has achieved human parity.

Host: Mm-hmm.

T.J. Hazen: Okay. But that task itself is somewhat simple because most of the questions are fact-based questions like, who did something or when did something happen? And most of the answers are fairly easy to find. So, you know, doing as well as a human on a task is fantastic, but it only gets you part of the way there. What happened is, after this was announced that Microsoft had this great achievement in machine reading comprehension, lots of customers started coming to Microsoft saying, how can we have that for our company? And this is where we’re focused right now. Like, how can we make this technology work for real problems that our enterprise customers are bringing in? So, we have customers coming in saying, I want to be able to answer any question in our financial policies, or our auditing guidelines, or our operations manual. And people don’t ask “who” or “when” questions of their operations manual. They ask questions like, how do I do something? Or explain some process to me. And those answers are completely different. They tend to be longer and more complex and you don’t always, necessarily, find a short, simple answer that’s well situated in some context.

Host: Right.

T.J. Hazen: So, our focus at MSR Montreal is to take this machine reading comprehension technology and apply it into these new areas where our customers are really expressing that there’s a need.

Host: Well, let’s go a little deeper, technically, on what it takes to enable or teach machines to answer questions, and this is key, with limited data. That’s part of your equation, right?

T.J. Hazen: Right, right. So, when we go to a new task, uh, so if a company comes to us and says, oh, here's our operations manual, they often have this expectation, because we've achieved human parity on some dataset, that we can answer any question out of that manual. But when we test the general-purpose models that have been trained on these other tasks on these manuals, they don't generally work well. And these models have been trained on hundreds of thousands, if not millions, of examples, depending on what datasets you've been using. And it's not reasonable to ask a company to collect that level of data in order to be able to answer questions about their operations manual. But we need something. We need some examples of the types of questions, because we have to understand what types of questions they ask, and we need to understand the vocabulary. We'll try to learn what we can from the manual itself. But without some examples, we don't really understand how to answer questions in these new domains. But there are techniques available for this. What we refer to as transfer learning is a sort of model adaptation: how do you take an existing model and make it adapt to data from some new domain? We can actually use transfer learning to do really well in a new domain without requiring a ton of data. So, our goal is for it to need hundreds of examples, not tens of thousands.

Host: How’s that working now?

T.J. Hazen: It works surprisingly well. I’m always amazed at how well these machine learning algorithms work with all the techniques that are available now. These models are very complex. When we’re talking about our question answering model, it has hundreds of millions of parameters and what you’re talking about is trying to adjust a model that is hundreds of millions of parameters with only hundreds of examples and, through a variety of different techniques where we can avoid what we call overfitting, we can allow the generalizations that are learned from all this other data to stay in place while still adapting it so it does well in this specific domain. So, yeah, I think we’re doing quite well. We’re still exploring, you know, what are the limits?

Host: Right.
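The adaptation T.J. describes can be pictured with a toy model: a large pretrained "base" stays frozen while only a tiny task-specific head is tuned on a handful of examples. Everything below (the random base, the invented labels) is a stand-in for illustration, not the actual MRC architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this projection is a pretrained encoder. It is "frozen":
# the training loop below never updates it.
base = rng.normal(size=(16, 4))

def encode(x):
    """Frozen feature extractor standing in for a pretrained model."""
    return np.tanh(x @ base)

# Only a handful (here: 20) of labeled in-domain examples.
X = rng.normal(size=(20, 16))
y = (X[:, 0] > 0).astype(float)   # toy labels for illustration

# The small trainable head: 4 weights plus a bias.
w, b = np.zeros(4), 0.0

for _ in range(500):              # plain gradient descent on logistic loss
    h = encode(X)
    p = 1.0 / (1.0 + np.exp(-(h @ w + b)))
    grad_w = h.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    w -= 0.5 * grad_w             # the head is updated...
    b -= 0.5 * grad_b             # ...the base never is

acc = np.mean(((1.0 / (1.0 + np.exp(-(encode(X) @ w + b)))) > 0.5) == y)
print(f"training accuracy with frozen base: {acc:.2f}")
```

Freezing most of the parameters is one common way to avoid overfitting when the labeled data is tiny relative to the model.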

T.J. Hazen: And we’re still trying to figure out how to make it work so that an outside company can easily create the dataset, put the dataset into a system, push a button. The engineering for that and the research for that is still ongoing, but I think we’re pretty close to being able to, you know, provide a solution for this type of problem.

Host: All right. Well I’m going to push in technically because to me, it seems like that would be super hard for a machine. We keep referring to these techniques… Do we have to sign an NDA, as listeners?

T.J. Hazen: No, no. I can explain stuff that’s out…

Host: Yeah, do!

T.J. Hazen: … in the public domain. So, there are two common underlying technical components that make this work. One is called word embeddings and the other is called attention. Word embeddings are a mechanism where it learns how to take words or phrases and express them in what we call vector space.

Host: Okay.

T.J. Hazen: So, it turns them into a collection of numbers. And it does this by figuring out what types of words are similar to each other based on the context that they appear in, and then placing them together in this vector space, so they’re nearby each other. So, we would learn, that let’s say, city names are all similar because they appear in similar contexts. And so, therefore, Boston and New York and Montreal, they should all be close together in this vector space.

Host: Right.

T.J. Hazen: And blue and red and yellow should be close together. And then advances were made to figure this out in context. So that was the next step, because some words have multiple meanings.

Host: Right.

T.J. Hazen: So, you know, if you have a word like apple, sometimes it refers to a fruit and it should be near orange and banana, but sometimes it refers to the company and it should be near Microsoft and Google. So, we’ve developed context dependent ones, so that says, based on the context, I’ll place this word into this vector space so it’s close to the types of things that it really represents in that context.

Host: Right.
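The vector-space idea can be illustrated with hand-made toy vectors; real embeddings are learned from context and have hundreds of dimensions, but the geometry is the same: similar words sit nearby, and cosine similarity makes that measurable.

```python
import numpy as np

# Toy three-dimensional "embeddings" (invented values, not learned):
# city names cluster in one direction, color words in another.
emb = {
    "boston":   np.array([0.9, 0.8, 0.1]),
    "new_york": np.array([0.8, 0.9, 0.2]),
    "montreal": np.array([0.85, 0.75, 0.15]),
    "blue":     np.array([0.1, 0.2, 0.9]),
    "red":      np.array([0.2, 0.1, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

city_sim = cosine(emb["boston"], emb["montreal"])
cross_sim = cosine(emb["boston"], emb["blue"])
print(city_sim > cross_sim)   # prints True: cities are closer to each other
```

Contextual embeddings extend this by producing a different vector for "apple" the fruit and "apple" the company, depending on the surrounding words.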

T.J. Hazen: That’s the first part. And you can learn these word embeddings from massive amounts of data. So, we start off with a model that’s learned on far more data than we actually have question and answer data for. The second part is called attention and that’s how you associate things together. And it’s the attention mechanisms that learn things like a word like “who” has to attend to words like person names or company names. And a word like “when” has to attend to…

Host: Time.

T.J. Hazen: …time. And those associations are learned through this attention mechanism. And again, we can actually learn on a lot of associations between things just from looking at raw text without actually having it annotated.

Host: Mm-hmm.
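The attention mechanism T.J. sketches reduces to a few lines: a query vector for a question word is scored against key vectors for passage words, and a softmax turns the scores into weights. The vectors below are toy values chosen so a "who"-style query lands on the person-name token; in a real model they are learned, not hand-set.

```python
import numpy as np

def attention_weights(query, keys):
    """Scaled dot-product attention weights of one query over the keys."""
    scores = keys @ query / np.sqrt(len(query))
    e = np.exp(scores - scores.max())   # numerically stable softmax
    return e / e.sum()

# Toy vectors: the "who" query points in the same direction as the
# person-name key, so it should receive most of the weight.
query_who = np.array([1.0, 0.0])
keys = np.array([
    [0.9, 0.1],   # "Barack Obama"  (person name)
    [0.1, 0.9],   # "Tuesday"       (time expression)
    [0.2, 0.2],   # "visited"       (verb)
])

weights = attention_weights(query_who, keys)
print(weights.argmax())   # prints 0: the person-name token gets the most attention
```

A "when" query would be a vector pointing toward the time-expression key instead, and the same mechanism would shift the weight there.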

T.J. Hazen: Once we’ve learned all that, we have a base, and that base tells us a lot about how language works. And then we just have to have it focus on the task, okay? So, depending on the task, we might have a small amount of data and we feed in examples in that small amount, but it takes advantage of all the stuff that it’s learned about language from all these, you know, rich data that’s out there on the web. And so that’s how it can learn these associations even if you don’t give it examples in your domain, but it’s learned a lot of these associations from all the raw data.

Host: Right.

T.J. Hazen: And so, that’s the base, right? You’ve got this base of all this raw data and then you train a task-specific thing, like a question answering system, but even then, what we find is that, if we train a question answering system on basic facts, it doesn’t always work well when you go to operation manuals or other things. So, then we have to have it adapt.

Host: Sure.

T.J. Hazen: But, like I said, that base is very helpful because it’s already learned a lot of characteristics of language just by observing massive amounts of text.

(music plays)

Host: I’d like you to predict the future. No pressure. What’s on the horizon for machine reading comprehension research? What are the big challenges that lie ahead? I mean, we’ve sort of laid the land out on what we’re doing now. What next?

T.J. Hazen: Yeah. Well certainly, more complex questions. What we’ve been talking about so far is still fairly simple in the sense that you have a question, and we try to find passages of text that answer that question. But sometimes a question actually requires that you get multiple pieces of evidence from multiple places and you somehow synthesize them together. So, a simple example we call the multi-hop example. If I ask a question like, you know, where was Barack Obama’s wife born? I have to figure out first, who is Barack Obama’s wife? And then I have to figure out where she was born. And those pieces of information might be in two different places.

Host: Right.
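The two-hop question can be caricatured with a lookup table standing in for the retrieved passages; a real system would first have to extract these facts from text, which is the hard part.

```python
# Toy "knowledge" extracted from two different places in text.
facts = {
    ("Barack Obama", "spouse"): "Michelle Obama",
    ("Michelle Obama", "birthplace"): "Chicago",
}

def multi_hop(entity, first_relation, second_relation):
    """Answer a two-hop question by chaining two single-hop lookups."""
    bridge = facts[(entity, first_relation)]   # hop 1: find the spouse
    return facts[(bridge, second_relation)]    # hop 2: find her birthplace

print(multi_hop("Barack Obama", "spouse", "birthplace"))   # prints Chicago
```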

T.J. Hazen: So that’s what we call a multi-hop question. And then, sometimes, we have to do some operation on the data. So, you could say, you know like, what players, you know, from one Super Bowl team also played on another Super Bowl team? Well there, what you have to do is, you have to get the list of all the players from both teams and then you have to do an intersection between them to figure out which ones are the same on both. So that’s an operation on the data…

Host: Right.

T.J. Hazen: …and you can imagine that there’s lots of questions like that where the information is there, but it’s not enough to just show the person where the information is. You also would like to go a step further and actually do the computation for that. That’s a step that we haven’t done, like, how do you actually go from mapping text to text, and saying these two things are associated, to mapping text to some sequence of operations that will actually give you an exact answer. And, you know, it can be quite difficult. I can give you a very simple example. Like, just answering a question, yes or no, out of text, is not a solved problem. Let’s say I have a question where someone says, I’m going to fly to London next week. Am I allowed to fly business class according to my policies from my company, right? We can have a system that would be really good at finding the section of the policy that says, you know, if you are a VP-level or higher and you are flying overseas, you can fly business class, otherwise, no. Okay? But, you know, if we actually want the system to answer yes or no, we have to actually figure out all the details, like okay, who’s asking the question? Are they a VP? Where are they located? Oh, they’re in New York. What does flying overseas mean??

Host: Right. There are layers.

T.J. Hazen: Right. So that type of comprehension, you know, we're not quite there yet for all types of questions. Usually these things have to be crafted by hand for specific domains. So, all of these things about how can you answer complex questions, and even simple things like common sense, like, things that we all know… And so, my manager, Andrew McNamara, who was supposed to be here with us, has a favorite example: the concept of coffee being black. But if you spill coffee on your shirt, do you have a black stain on your shirt? No, you've got a brown stain on your shirt. And that's just common knowledge. That is, you know, a common-sense thing that computers may not understand.
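The "operation on the data" in the Super Bowl example above is, at bottom, a set intersection once the two rosters are in hand. The names below are invented for illustration; the hard part a real system faces is extracting the rosters from text in the first place.

```python
# Hypothetical rosters, as if extracted from two passages of text.
team_a = {"Smith", "Jones", "Garcia", "Lee"}
team_b = {"Lee", "Brown", "Garcia", "Nguyen"}

# The computation the text cannot do for you: intersect the two sets.
both = team_a & team_b
print(sorted(both))   # prints ['Garcia', 'Lee']
```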

Host: You’re working on research, and ultimately products or product features, that make people think they can talk to their machines and that their machines can understand and talk back to them. So, is there anything you find disturbing about this? Anything that keeps you up at night? And if so, how are you dealing with it?

T.J. Hazen: Well, I’m certainly not worried about the fact that people can ask questions of the computer and the computer can give them answers. What I’m trying to get at is something that’s helpful and can help you solve tasks. In terms of the work that we do, yeah, there are actually issues that concern me. So, one of the big ones is, even if a computer can say, oh, I found a good answer for you, here’s the answer, it doesn’t know anything about whether that answer is true. If you go and ask your computer, was the Holocaust real? and it finds an article on the web that says no, the Holocaust was a hoax, do I want my computer to show that answer? No, I don’t. But…

Host: Or the moon landing…!

T.J. Hazen: …if all you are doing is teaching the computer about word associations, it might think that’s a perfectly reasonable answer without actually knowing that this is a horrible answer to be showing. So yeah, the moon landing, vaccinations… The easy way that people can defame people on the internet, you know, even if you ask a question that might seem like a fact-based question, you can get vast differences of opinion on this and you can get extremely biased and untrue answers. And how does a computer actually understand that some of these things are not things that we should represent as truth, right? Especially if your goal is to find a truthful answer to a question.

Host: All right. So, then what do we do about that? And by we, I mean you!

T.J. Hazen: Well, I have been working on this problem a little bit with the Bing team. And one of the things that we discovered is that if you can determine that a question is phrased in a derogatory way, that usually means the search results that you’re going to get back are probably going to be phrased in a derogatory way. So, even if we don’t understand the answer, we can just be very careful about what types of questions we actually want to answer.

Host: Well, what does the world look like if you are wildly successful?

T.J. Hazen: I want the systems that we build to just make life easier for people. If you have an information task, the world is successful if you get that piece of information and you don’t have to work too hard to get it. We call it task completion. If you have to struggle to find an answer, then we’re not successful. But if you can ask a question, and we can get you the answer, and you go, yeah, that’s the answer, that’s success to me. And we’ll be wildly successful if the types of things where that happens become more and more complex. You know, where if someone can start asking questions where you are synthesizing data and computing answers from multiple pieces of information, for me, that’s the wildly successful part. And we’re not there yet with what we’re going to deliver into product, but it’s on the research horizon. It will be incremental. It’s not going to happen all at once. But I can see it coming, and hopefully by the time I retire, I can see significant progress in that direction.

Host: Off script a little… will I be talking to my computer, my phone, a HoloLens? Who am I asking? Where am I asking? What device? Is that so “out there” as well?

T.J. Hazen: Uh, yeah, I don’t know how to think about where devices are going. You know, when I was a kid, I watched the original Star Trek, you know, and everything on there, it seemed like a wildly futuristic thing, you know? And then fifteen, twenty years later, everybody’s got their own little “communicator.”

Host: Oh my gosh.

T.J. Hazen: And so, uh, you know, the fact that we’re now beyond where Star Trek predicted we would be, you know, that itself, is impressive to me. So, I don’t want to speculate where the devices are going. But I do think that this ability to answer questions, it’s going to get better and better. We’re going to be more interconnected. We’re going to have more access to data. The range of things that computers will be able to answer is going to continue to expand. And I’m not quite sure exactly what it looks like in the future, to be honest, but, you know, I know it’s going to get better and easier to get information. I’m a little less worried about, you know, what the form factor is going to be. I’m more worried about how I’m going to actually answer questions reliably.

Host: Well it’s story time. Tell us a little bit about yourself, your life, your path to MSR. How did you get interested in computer science research and how did you land where you are now working from Microsoft Research in New England for Montreal?

T.J. Hazen: Right. Well, I’ve never been one to long-term plan for things. I’ve always gone from what I find interesting to the next thing I find interesting. I never had a really serious, long-term goal. I didn’t wake up some morning when I was seven and say, oh, I want to be a Principal Research Manager at Microsoft in my future! I didn’t even know what Microsoft was when I was seven. I went to college and I just knew I wanted to study computers. I didn’t know really what that meant at the time, it just seemed really cool.

Host: Yeah.

T.J. Hazen: I had an Apple II when I was a kid and I learned how to do some basic programming. And then, you know, I was going through my coursework, and in my junior year, I was taking a course in audio signal processing. In that class, we got into a discussion about speech recognition, which to me was, again, Star Trek. It was something I saw on TV. Of course, now it was The Next Generation!

Host: Right!

T.J. Hazen: But you know, you watch the next generation of Star Trek and they’re talking to the computer and the computer is giving them answers and here somebody is telling me you know there’s this guy over in the lab for computer science, Victor Zue, and he’s building systems that recognize speech and give answers to questions! And to me, that was science-fiction. So, I went over and asked the guy, you know, I heard you’re building a system, and can I do my bachelor’s thesis on this? And he gave me a demo of the system – it was called Voyager – and he asked a question, I don’t remember the exact question, but it was probably something like, show me a map of Harvard Square. And the system starts chugging along and it’s showing results on the screen as it’s going. And it literally took about two minutes for it to process the whole thing. It was long enough that he actually explained to me how the entire system worked while it was processing. But then it came back, and it popped up a map of Harvard Square on the screen. And I was like, ohhh my gosh, this is so cool, I have to do this! So, I did my bachelor’s thesis with him and then I stayed on for graduate school. And by seven years later, we had a system that was running in real time. We had a publicly available system in 1997 that you could call up on a toll-free number and you could ask for weather reports and weather information for anywhere in the United States. And so, the idea that it went from something that was “Star Trek” to something that I could pick up my phone, call a number and, you know, show my parents, this is what I’m working on, it was astonishing how fast that developed! I stayed on in that field with that research group. I was at MIT for another fifteen years after I graduated. At some point, a lot of the things that we were doing, they moved from the research lab to actually being real.

Host: Right.

T.J. Hazen: So, like twenty years after I went and asked to do my bachelor’s thesis, Siri comes out, okay? And so that was our goal. They were like, twenty years ago, we should be able to have a device where you can talk to it and it gives you answers and twenty years later there it was. So, that, for me, that was a cue that maybe it’s time to go where the action is, which was in companies that were building these things. Once you have a large company like Microsoft or Google throwing their resources behind these hard problems, then you can’t compete when you’re in academia for that space. You know, you have to move on to something harder and more far out. But I still really enjoyed it. So, I joined Microsoft to work on Cortana…

Host: Okay…

T.J. Hazen: …when we were building the first version of Cortana. And I spent a few years working on that. I’ve worked on some Bing products. I then spent some time in Azure trying to transfer these things so that companies that had the similar types of problems could solve their problems on Azure with our technology.

Host: And then we come full circle to…

T.J. Hazen: Then full circle, yeah. You know, once I realized that some of the stuff that customers were asking for wasn’t quite ready yet, I said, let me go back to research and see if I can improve that. It’s fantastic to see something through all the way to product, but once you’re successful and you have something in a product, it’s nice to then say, okay, what’s the next hard problem? And then start over and work on the next hard problem.

Host: Before we wrap up, tell us one interesting thing about yourself, maybe it’s a trait, a characteristic, a life event, a side quest, whatever… that people might not know, or be able to find on a basic web search, that’s influenced your career as a researcher?

T.J. Hazen: Okay. You know, when I was a kid, maybe about eleven years old, the Rubik’s Cube came out. And I got fascinated with it. And I wanted to learn how to solve it. And a kid down the street from my cousin had taught himself from a book how to solve it. And he taught me. His name was Jonathan Cheyer. And he was actually in the first national speed Rubik’s Cube solving competition. It was on this TV show, That’s Incredible. I don’t know if you remember that TV show.

Host: I do.

T.J. Hazen: It turned out what he did was, he had learned what is now known as the simple solution. And I learned it from him. And I didn’t realize it until many years later, but what I learned was an algorithm. I learned, you know, a sequence of steps to solve a problem. And once I got into computer science, I discovered all that problem-solving I was doing with the Rubik’s Cube and figuring out what are the steps to solve a problem, that’s essentially what things like machine learning are doing. What are the steps to figure out, what are the features of something, what are the steps I have to do to solve the problem? I didn’t realize that at the time, but the idea of being able to break down a hard problem like solving a Rubik’s Cube, and figuring out what are the stages to get you there, is interesting. Now, here’s the interesting fact. So, Jonathan Cheyer, his older brother is Adam Cheyer. Adam Cheyer is one of the co-founders of Siri.

Host: Oh my gosh. Are you kidding me?

T.J. Hazen: So, I met the kid when I was young, and we didn’t really stay in touch. I discovered, you know, many years later that Adam Cheyer was actually the older brother of this kid who taught me the Rubik’s Cube years and years earlier, and Jonathan ended up at Siri also. So, it’s an interesting coincidence that we ended up working in the same field after all those years from this Rubik’s Cube connection!

Host: You see, this is my favorite question now because I’m getting the broadest spectrum of little things that influenced and triggered something…!

Host: At the end of every podcast, I give my guests a chance for the proverbial last word. Here’s your chance to say anything you want to would-be researchers, both applied and otherwise, who might be interested in working on machine reading comprehension for real-world applications.

T.J. Hazen: Well, I could say all the things that you would expect me to say, like you should learn about deep learning algorithms and you should possibly learn Python because that’s what everybody is using these days, but I think the single most important thing that I could tell anybody who wants to get into a field like this is that you need to explore it and you need to figure out how it works and do something in depth. Don’t just get some instruction set or some high-level overview on the internet, run it on your computer and then say, oh, I think I understand this. Like get into the nitty-gritty of it. Become an expert. And the other thing I could say is, of all the people I’ve met who are extremely successful, the thing that sets them apart isn’t so much, you know, what they learned, it’s the initiative that they took. So, if you see a problem, try to fix it. If you see a problem, try to find a solution for it. And I say this to people who work for me. If you really want to have an impact, don’t just do what I tell you to do, but explore, think outside the box. Try different things. OK? I’m not going to have the answer to everything, so therefore, if I don’t have the answer to everything, then if you’re only doing what I’m telling you to do, then we both, together, aren’t going to have the answer. But if you explore things on your own and take the initiative and try to figure out something, that’s the best way to really be successful.

Host: T.J. Hazen, thanks for coming in today, all the way from the east coast to talk to us. It’s been delightful.

T.J. Hazen: Thank you. It’s been a pleasure.

(music plays)

To learn more about Dr. T.J. Hazen and how researchers and engineers are teaching machines to answer complicated questions, visit

Go to Original Article
Author: Microsoft News Center

IFA 2019: Dell adds new 10th Generation Intel Core processors to XPS 13 and Inspiron systems, makes XPS 13 2-in-1 available | Windows Experience Blog

Just before the IFA tradeshow in Berlin, Dell announces the expansion of its consumer portfolio with brand new form factors and the addition of new 10th Generation Intel Core processors, delivering performance gains needed for compute-intensive, demanding, multi-thread workloads – while still efficiently handling 4K content.

XPS 13

Bingeing series or always working on the go? Turn to the XPS 13, which will now include Intel 10th Generation Core U series processors, with up to i7 hexacore chips (available in October). With the new Killer AX1650 (2×2) built on Intel Wi-Fi 6 Chipset, wireless connectivity is three times as fast as the previous generation. Along with Dell CinemaColor and Dolby Vision and an optional 4K Ultra HD InfinityEdge display, the XPS 13 will remain eye candy for those glued to their screen.

XPS 13 2-in-1

And you can now buy the XPS 13 2-in-1, a COMPUTEX d&i award winner. The 2-in-1 is also the first laptop available with Intel’s new 10th Generation Core processors, built on 10-nanometer silicon.

Inspiron 14 7000

The new Inspiron 14 7000 ultralight laptops will also have the latest 10th Generation Intel Core processors. With the new lid-open sensor, Connected Modern Standby and fingerprint reader built into the power button, the system signs on securely and starts in a flash. The laptop, made of magnesium alloy, has four-sided narrow borders with 100% sRGB color coverage, perfect for mobile multitasking.

Inspiron 27 7000

Other Inspiron systems have been upgraded or are now available (and, depending on the model, these Inspiron systems run on either Windows 10 Home or Windows 10 in S Mode):
  • Inspiron 14, 15, 17 3000; Inspiron 13 5000; Inspiron 14, 15 5000; and Inspiron 14, 15 5000 2-in-1 have been updated with the latest 10th Generation Intel Core processors.
  • Inspiron 13 7000 (available only in China, Japan and Brazil) is an ultralight laptop with a starting weight of less than 1 kg*. Made from magnesium alloy and featuring a lid-open sensor, this device offers mobility and productivity.
  • Inspiron 13, 15, 17 7000 2-in-1, first announced at CES 2019 and a Computex Design Innovation Award winner, is now available globally (North America, Europe, Middle East, Africa, Greater China and Asia Pacific Japan), with the addition of USB Type-C with Thunderbolt 3 support and an optional silver chassis.
  • Inspiron 14, 15 5000 (available only outside North America) is packed with features and built for reliability and multitasking. You can find them in the following colors (based on regional availability): Platinum Silver, Iced Lilac, Iced Mint and Iced Gold (14-inch only).
  • Inspiron 24 5000 and 27 7000 All-in-One (AIO) desktops, first announced at Computex 2019, will be available Aug. 23. They fit in your living room or dorm room as a simple and attractive entertainment system or TV replacement.
Find out more about these and other Dell announcements at IFA 2019.
* Weights vary depending on configuration and manufacturing variability.

Growing Web Template Studio – Windows Developer Blog

We’re excited to announce Version 2.0 of Microsoft Web Template Studio, a cross-platform extension for Visual Studio Code that simplifies and accelerates creating new full-stack web applications. Web Template Studio (WebTS) is a user-friendly wizard that quickly bootstraps a web application and provides step-by-step instructions to start developing. Best of all, Web Template Studio is open source on GitHub.
Our philosophy is to help you focus on your ideas and bootstrap your app in a minimal amount of time. We also strive to introduce best patterns and practices. Web Template Studio currently supports React, Vue, and Angular for frontend and Node.js and Flask for backend. You can choose any combination of frontend/backend frameworks to quickly build your project.
We want to partner with the community to see what else is useful and should be added. We know there are many more frameworks, pages, and features to be included and can’t stress enough that this is a work in progress. If there is something you feel strongly about, please let us know. On top of feedback, we’re also willing to accept PRs. We want to be sure we’re building the right thing.
Web Template Studio takes the learnings from its sister project, Windows Template Studio which implements the same concept but for native UWP applications. While the two projects target different development environments and tech stacks, they share a lot of architecture under the hood.

Install the weekly staging build: just head over to Visual Studio Marketplace’s Web Template Studio page and click “install.” You’ll also need Node and Yarn installed.
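
As a quick sanity check before installing the extension, a small shell sketch (not from the original post) can confirm both prerequisites are on your PATH:

```shell
# Sketch: verify the prerequisites mentioned above (Node and Yarn)
# are installed. Only the tool names come from the post.
MISSING=0
for tool in node yarn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool found: $("$tool" --version)"
  else
    echo "$tool is missing; install it before using Web Template Studio"
    MISSING=$((MISSING + 1))
  fi
done
```

If either tool is reported missing, install it first; the wizard’s generated projects depend on both.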

Launch WebTS by using the keyboard shortcut (Ctrl + Shift + P) and typing in Web Template Studio. This will fire up the wizard and you’ll be able to start generating a project in no time.

Step 1: Project Name and Save To destination
You don’t even have to fill in the project name and destination path as everything is now automated for you!
We’ve added a Quick Start pane for advanced users that offers a single view of all wizard steps. This lets you generate a new project in just two clicks!

Step 2: Choose your frameworks
Based on community feedback, we added new frameworks: Angular, Vue and Flask.
So now we support the following frameworks for frontend: React.js, Vue.js and Angular; and for backend: Node.js and Flask.

Step 3: Add Pages to your project
This page has been redesigned to give you a smoother experience.
To accelerate app creation, we provide several app page templates that you can use to add common UI pages into your new app. The current page templates include: blank page, grid page, list, master detail. You can click on preview to see what these pages look like before choosing them.

Step 4: Cloud Services
In this new release, we added App Service. The services we currently support cover storage (Azure Cosmos DB) and cloud hosting (App Service)!

Step 5: Summary and Create Project
This page has been redesigned. You can now see the project details on the right-side bar and you are able to make quick changes to your project before creating it.
Simply click on Create Project and start coding!

Step 6: Running your app
Click the “Open project in VSCode” link. The generated project includes helpful tips and tricks for getting the web server up and running. To run your app, just open the terminal and type “yarn install” then “yarn start” and you’re up and going! The generated web app gives you a solid starting point: it pulls real data, allowing you to quickly refactor so you can spend your time on more important tasks like your business logic.
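
The run steps above can be sketched as a short script. The folder name `my-webts-app` is a hypothetical placeholder, not from the post; substitute the path you chose in the wizard.

```shell
# Sketch of the run steps above. "my-webts-app" is a hypothetical
# project folder name chosen for illustration.
if command -v yarn >/dev/null 2>&1 && [ -d my-webts-app ]; then
  cd my-webts-app
  yarn install   # fetch frontend and backend dependencies
  yarn start     # launch the development server
else
  echo "Install Yarn and generate a project first (see steps above)."
fi
```

`yarn start` keeps the dev server running in the foreground; stop it with Ctrl+C when you’re done.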

Web Template Studio is completely open-source and available now on GitHub. We want this project to follow the direction of the community and would love for you to contribute issues or code. Please read our contribution guidelines for next steps. A public roadmap is currently available and your feedback here will help us shape the direction the project takes.
This project was proudly created by Microsoft Garage interns. The Garage Internship is a unique, startup-style program for talented students to work in groups of 6-8 on challenging engineering projects. The team partnered with teams across Microsoft along with the community to build the project. It has gone through multiple iterations to get to where it is today.

For Sale – 4U Silenced Custom Server

Here I have for sale my old lab ESXi host, which after a house move is now surplus to requirements.

The host itself is comprised of the following components:

– X-Case 4u Case
– Supermicro X9SRL-F Motherboard
– E5-2670 Xeon Processor (8 cores @ 2.60GHz)
– 16GB (2 x 8GB DIMM) 1333MHz Registered ECC Memory
– Noctua NH-U9DX i4 Xeon Cooler
– Corsair G650M Power Supply

Not latest tech by any stretch of the imagination, but certainly more than enough for the majority of home use cases.

Price and currency: £200
Delivery: Delivery cost is not included
Payment method: BACS / PPG
Location: Newbury
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.
