Announcing Windows 10 Insider Preview Build 19555 | Windows Experience Blog

Hello Windows Insiders, today we’re releasing Windows 10 Insider Preview Build 19555.1001 to Windows Insiders in the Fast ring. If you want a complete look at what build is in which Insider ring, head over to Flight Hub. You can also check out the rest of our documentation here, including a list of new features and updates.
Not seeing any of the features in this build? Check your Windows Insider Settings to make sure you’re on the Fast ring. Submit feedback here to let us know if things weren’t working the way you expected.

We’ve resolved an issue where certain external USB 3.0 drives ended up in an unresponsive state with Start Code 10 after they were attached.
The cloud recovery option for Reset this PC is now back up and running on this build.
We fixed an issue resulting in ARM64 devices not being able to upgrade to the previous build.
We’ve updated Protection History in the Windows Security app to show a loading indicator in cases where loading is taking longer than expected.
We fixed an issue with the modern print dialog not displaying the print preview correctly in certain cases in recent flights.
We fixed an issue that could result in the Start menu and apps not opening until explorer.exe had been restarted after locking then unlocking your computer while listening to music.
We fixed an alignment issue with the toggles under Windows Update > Advanced options when certain policies were enabled/disabled.

BattlEye and Microsoft have found incompatibility issues due to changes in the operating system between some Insider Preview builds and certain versions of BattlEye anti-cheat software. To safeguard Insiders who might have these versions installed on their PC, we have applied a compatibility hold on these devices from being offered affected builds of Windows Insider Preview. See this article for details.
We are aware that Narrator and NVDA users who seek the latest release of Microsoft Edge based on Chromium may experience some difficulty when navigating and reading certain web content. The Narrator, NVDA, and Edge teams are aware of these issues. Users of legacy Microsoft Edge will not be affected. NV Access has released a beta of NVDA that resolves the known issue with Edge. Further information can be found in the In Process blog post, which goes into more detail about the beta.
We’re looking into reports of the update process hanging for extended periods of time when attempting to install a new build.
We’re investigating reports that some Insiders are unable to update to newer builds with error 0x8007042b.
We’re looking into reports that some Insiders are unable to update to newer builds with error 0xc1900101.
The Documents section under Privacy has a broken icon (just a rectangle).
The IME candidate window for East Asian IMEs (Simplified Chinese, Traditional Chinese, and the Japanese IME) may sometimes fail to open. We are investigating your reports. As a workaround, if you encounter this, please change focus to another application or editing area, return to the original one, and try again. Alternatively, you can open Task Manager and end the “TextInputHost.exe” task from the Details tab; input should work afterwards.
We’re investigating reports that certain devices are no longer sleeping on idle. We have identified the root cause and are working on a fix for an upcoming flight. If your device is impacted, manually triggering sleep should work (Start > Power button > Sleep).

Did you miss the Grammys on Sunday? No worries, Bing has you covered. Check out our Grammy Award winners carousel. From Record of the Year to Pop Solo Performance, we cover it all. Select a winner and learn more about their career journey. Check it out here and let us know what you think!
If you want to be among the first to learn about these Bing features, join our Bing Insiders Program.

Help us continue to improve Microsoft Edge! Join our latest user research session today, January 30 from 11 a.m. to 12:30 p.m. PST to give us your feedback. Get more details about this session.

Microsoft Azure Peering Services Explained

In this blog post, you’ll discover everything you need to know about Microsoft Azure Peering Services, a networking service introduced during Ignite 2019.

Microsoft explains the service within their documentation as follows:

Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Office 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft has partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.

To be honest, Microsoft explains the service well, but what lies behind the explanation is much more complex. To understand Azure Peering Service and its benefits, you need to understand how peering, routing, and connectivity work for internet providers.

What Are Peering and Transit?

In the internet and network provider world, peering is the interconnection of separate, independent internet networks so that users in each network can exchange traffic. Peering, or partnering, is usually a settlement-free agreement between two providers: normally each provider pays only for its cross-connect in the datacenter and its colocation space, and neither party pays for the traffic itself. There are also special agreements, e.g. between smaller and larger providers.

Normally you have the following agreements:

  • between equal providers or peering partners – traffic upload and download between the two networks is free for both parties
  • between a larger provider and a smaller provider – the smaller provider pays a fee for the transit traffic to the larger network provider
  • providers that transit another network to reach a third-party network (upstream service) – the provider using the upstream pays a fee for the transit traffic to the upstream provider

An agreement between two or more networks to peer is instantiated by a physical interconnection of the networks, an exchange of routing information through the Border Gateway Protocol (BGP), and, in some special cases, a formalized contractual document. These documents are called peering policies and Letters of Authorization (LOAs).
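As an illustration of the BGP part of such an agreement, a minimal session definition in FRRouting might look like this (the AS numbers and neighbor address here are made up):

```
router bgp 64512
 neighbor 198.51.100.2 remote-as 64513
 !
 address-family ipv4 unicast
  neighbor 198.51.100.2 activate
 exit-address-family
```

Each side announces its own prefixes over the session, and the peering policy governs what may be announced and accepted.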

Fun Fact – As a peering partner for Microsoft, you can easily configure the peering through the Azure Portal as a free service.

As you can see in the screenshot, Microsoft is very restrictive with its routing and peering policies. That prevents unwanted traffic and protects Microsoft customers when peering for Azure ExpressRoute (AS12076).

Routing and peering policies Azure express route.

Now let’s talk a bit about the different types of peering.

Public Peering

Public peering is configured over the shared platform of an Internet Exchange Point (IXP). Internet Exchanges charge a port and/or membership fee for using their platform for interconnection.

If you are a small cloud or network provider with limited infrastructure, peering via an Internet Exchange is a good place to start. For a big player in the market, it is also a good choice because you reach smaller networks over a short path. The picture below shows an example of those prices, taken from the Berlin Commercial Internet Exchange pricing page.

Berlin Commercial Internet Exchange Pricing

Hurricane Electric offers a tool that gives you a peering map and more information about how a provider is publicly peered with other providers, but you will not get a map of private peering there. The picture below shows some examples for Microsoft AS 8075.

Microsoft AS 8075 peering

Private Peering

Private peering is a direct physical link between two networks, commonly one or more 10 GbE or 100 GbE links. The connection is made from one network to another only, and each side pays a set fee to the owner of the infrastructure or colocation that is used. Those costs are usually the cross-connects within the datacenter. That makes private peering a good choice when you need to send large volumes of traffic to one specific network: measured in price per transferred gigabyte between the two networks, it is much cheaper than public peering. When peering privately with providers, though, you may need to follow some peering policies.

A good provider also has a looking glass where you can get more insights into peerings, but we will look at this later on.

Transit and Upstream

When someone uses transit, the provider itself has no access to the destination network and therefore needs to leverage other networks or network providers to reach the destination network and destination service. Providers that offer transit are known as transit providers, with the largest networks considered Tier 1 networks. As a network provider serving cloud customers like Microsoft, you don’t want any transit routing: transitive routing through other networks is normally expensive and, worse, it adds latency and an uncontrollable space between your customers and the cloud services. So the first rule when handling cloud customers is to avoid transit routing and peer directly with cloud providers, through either private or public network interconnects at interconnect locations.

That is one reason why Microsoft works with Internet Exchanges and network and internet providers to enable services like Azure Peering Service. It gives customers more control over how they reach Microsoft services, including Azure, Microsoft 365, Xbox, etc. To understand the impact, you also need to know about service provider routing, which is what the next part of the post covers.

How Do Internet Service Providers Route Your Traffic?

When you look at routing, there are mostly only two options within a carrier network. The first is cold potato, or centralized, routing: the provider keeps the traffic as long as possible within its own network before handing it to a third party. The other option is hot potato, or decentralized, routing: the provider hands the traffic to the third party as fast as possible, mostly within the same metro.

The picture below illustrates the difference between hot and cold potato routing.

cold and hot potato routing differences

As you can see in the drawing, cold potato routing takes a longer path through the provider network, and with that a longer path to your target, e.g. Microsoft.

Those routing configurations have a large impact on your cloud performance, because every kilometer of distance adds latency: roughly 1 ms of latency is added for every 200 kilometers of distance. As a result, you will see an impact on things like voice quality during Teams meetings or synchronization issues for backups to Azure.
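That rule of thumb is easy to sanity-check: light in fiber travels at roughly 200,000 km/s, so a routing detour adds about 1 ms of one-way latency per 200 km. A quick sketch:

```python
def one_way_latency_ms(distance_km: float, fiber_speed_km_s: float = 200_000.0) -> float:
    """Approximate one-way propagation delay over fiber, in milliseconds."""
    return distance_km / fiber_speed_km_s * 1000.0

# A 600 km routing detour adds roughly 3 ms one way, 6 ms round trip --
# enough to notice in latency-sensitive workloads like voice.
print(one_way_latency_ms(600))      # 3.0
print(one_way_latency_ms(600) * 2)  # 6.0
```

Note this is propagation delay only; queuing and processing in each hop add more on top.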

Microsoft has a big agenda to address that issue for its customers and the rest of the globe. You can read more about the plans in articles from Yousef Khalidi, Corporate Vice President, Microsoft Azure Networking.

Now let’s start with Peering Services and how it can change the game.

What Is Azure Peering Service and How Does It Solve the Issue?

When you look at how the service is designed, you can see that it leverages all of Microsoft’s provider peering with AS 8075. Together with the Azure Peering Service partners, Microsoft can change the default routing and transit behavior toward its services when you use a partner provider.

Following the picture below, you can set up routing so that traffic from your network to Azure (or other networks) now uses the Microsoft global backbone instead of a transit provider without any SLA.

What is Azure Peering Services

With the service enabled, performance to Microsoft services increases and latency is reduced, depending on the provider. As you would expect, services like Office 365 or Azure AD profit from the service, but there is more. If, for example, you build your backbone on the Microsoft global transit architecture with Azure Virtual WAN and leverage internet connections from these particular providers and Internet Exchange partners, you directly boost your network performance and get a pseudo-private network. The reason is that you now leverage private or public peering with route restrictions: your backbone traffic bypasses the regular internet and flows through the Microsoft global backbone from A to B.

Let me try to explain it with a drawing.

Microsoft global backbone network

In addition to better performance, you also get an additional layer of monitoring. While the regular internet is a black box regarding data flow, performance, etc., with Microsoft Azure Peering Service you get full operational monitoring of your wide area network through the Microsoft backbone.

You can find this information in the Azure Peering Services Telemetry Data.

The screenshot below shows the launch partner of Azure Peering Services.

Launch partner of Azure Peering Services

When choosing a network provider for your access to Microsoft, you should follow this guideline:

  • Choose a provider that is well peered with Microsoft
  • Choose a provider with hot potato routing to Microsoft
  • Don’t let price alone decide the provider; a good network costs money
  • Choose Dedicated Internet Access over a regular internet connection whenever possible
  • If possible, use local providers instead of global ones
  • A good provider always has a looking glass or can provide you with default routes between a city location and other peering partners. If not, it is not a good provider to choose

So, let’s learn about the setup of the service.

How Do You Configure Azure Peering Service?

First, you need to understand that, as with Azure ExpressRoute, there are two sides to contact and configure.

You need to follow the steps below to establish a Peering Services connection.

Step 1: The customer provisions connectivity from a connectivity partner (no interaction with Microsoft). With that, you get an internet provider who is well connected to Microsoft and meets the technical requirements for performant and reliable connectivity to Microsoft. Again, you should check the partner list.
Step 2: The customer registers locations in the Azure portal. A location is defined by: the ISP/IXP name, the physical location of the customer site (state level), and the IP prefix given to the location by the service provider or the enterprise. As a service from Microsoft, you then get telemetry data such as internet route monitoring and traffic prioritization from Microsoft to the user’s closest edge location.

The registration of the locations happens within the Azure Portal.

Currently, you need to register for the public beta first. That happens with some simple PowerShell commands.

Using Azure PowerShell 
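The commands were embedded in the original post; as a sketch based on the preview-era documentation (verify the feature and namespace names against the current docs), registration with Azure PowerShell looked like:

```powershell
# Register the Peering Service preview feature, then the resource provider
Register-AzProviderFeature -FeatureName AllowPeeringService -ProviderNamespace Microsoft.Peering
Register-AzResourceProvider -ProviderNamespace Microsoft.Peering
```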

Using Azure CLI
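The equivalent registration with the Azure CLI, again as a hedged sketch from the preview-era documentation:

```shell
# Register the Peering Service preview feature, then the resource provider
az feature register --namespace Microsoft.Peering --name AllowPeeringService
az provider register --name Microsoft.Peering
```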

Afterward, you can configure the service using the Azure Portal, Azure PowerShell, or Azure CLI.

You can find the responsive guide here.

Once the service reaches general availability (GA), customers will also receive SLAs on the peering and telemetry service. Currently, there is no SLA and no support if you use the service in production.

Peering and Telemetry service

Closing Thoughts

From reading this article, you now have a better understanding of Microsoft Azure Peering Service and its use, of peering between providers, and of routing and traffic behavior within the internet. When digging deeper into Peering Service, you should now be able to develop some architectures and ideas on how to use the service.

If you have any providers that are not aware of the service or of direct peering with Microsoft AS 8075, point them to it or let them drop an email to [email protected]

When using the BGP tools from Hurricane Electric, you can get information about some providers peering with Microsoft. One thing you need to know: most of the 3,500 network partners of Microsoft peer privately with Microsoft, and the Hurricane Electric tools only observe the public peering partners.

Go to Original Article
Author: Florian Klaffenbach

Windows Server 2008 end of life means it’s time to move

Windows Server 2008 end of life is here, so will you go to the cloud, use containers or carry on with on-premises Windows Server?

Windows Server 2008 and Server 2008 R2 left support recently, giving administrators one last batch of security updates on January Patch Tuesday. Organizations that have not migrated to a supported platform will not get further security fixes for Server 2008/2008 R2 machines unless they have enrolled in the Extended Security Updates program or moved those workloads into the Microsoft cloud for three additional years of security updates. Organizations that choose to continue without support will roll the dice with machines that now present a liability.

In many instances, a switch to a newer version of Windows Server is not an option. For example, many hospitals run equipment that relies on applications that do not function on a 64-bit operating system, which rules out every currently supported Windows Server OS. In these cases, IT must keep those workloads running but keep them as secure as possible using various methods, such as isolating the machine with a segregated virtual LAN or even pulling the plug by air-gapping those systems.

What works best for your organization is based on many factors, such as cost and the IT department’s level of expertise and comfort level with newer technologies.

For some, a cloudy forecast

The decision to stay on Server 2008/2008 R2 comes with a price. Enrolling in the Extended Security Updates program requires Software Assurance, and the updates cost roughly 75% of a full Windows Server license each year.

This expense will motivate some organizations to explore ways to reduce those costs and one alternative is to push those Server 2008/2008 R2 workloads into the Azure cloud. This migration will require some adjustment as the capital expense of an on-premises system migrates to the operational expense used with the cloud consumption model.

Mentioning the word cloud doesn’t fill IT with as much apprehension as it once did, but the move might require some technological gymnastics to get certain workloads running when one component, such as the database, needs to stay on premises while the application runs in the cloud.

Some other considerations include increasing the available bandwidth to accommodate the need for lower latency when working with cloud workloads and learning how to patch and do other administrative tasks when the system is in Azure.

Application virtualization is another option

While traditional virtualization is the norm for most Windows shops, there’s a relatively new form of virtualization that is another migration option. Putting a Windows Server 2008/2008 R2 workload into a Docker container might not seem as far-fetched as it did when this technology was in its infancy.

Containers versus VMs
Because each virtual machine uses a guest operating system, VMs use more disk space than a container that shares an underlying operating system.

Microsoft added support for Windows containers on Windows Server 2016 and 2019, as well as the Semi-Annual Channel releases. The migration process puts the legacy application into a container, which then runs on top of a supported Windows Server OS.

Administrators will need to get up to speed with the differences between the two forms of virtualization, and the advantages and disadvantages of migrating a server workload to a containerized application. For example, all the containerized applications run on top of a shared kernel, which might not work in environments with a requirement for kernel isolation for sensitive workloads.
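As a rough sketch of that migration path (the base image tag, installer name, and application path below are hypothetical, and the base image must match a supported host OS version), a Windows container Dockerfile might look like:

```dockerfile
# escape=`
# Base image must match a supported Windows Server host version (hypothetical tag)
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Copy and silently install the legacy application (hypothetical installer)
COPY LegacyApp.msi C:\install\
RUN msiexec /i C:\install\LegacyApp.msi /qn

# Run the legacy application in the foreground (hypothetical path)
ENTRYPOINT ["C:\\LegacyApp\\LegacyApp.exe"]
```

The container then runs on a supported Windows Server 2016 or 2019 host, sharing that host’s kernel as described above.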

Storage Migration Service eases file server upgrades

Microsoft released Windows Server 2019 with a host of new features, including the Storage Migration Service, which attempts to reduce the friction associated with moving file servers to a newer Windows Server operating system.

One standby for many organizations is the on-premises file server that holds documents, images and other data that employees rely on to do their jobs. The Windows Server 2008 end of life put many in IT in the difficult position of upgrading file servers on this legacy server OS. It’s not as simple as copying all the files over to the new server: numerous dependencies associated with the stored data must be carried over, and at the same time the migration must avoid disrupting the business.

The Storage Migration Service runs from within Microsoft’s relatively new administrative tool called the Windows Admin Center. The feature is not limited to just shifting to a supported on-premises Windows Server version but will coordinate the move to an Azure File Sync server or a VM that runs in Azure.


Traditional, emerging topics unite in the new CCNA exam

While Cisco’s updated Cisco Certified Network Associate — or CCNA — certification track shrank to a single path and single exam, CCNA hopefuls must know a broad range of both networking basics and emerging networking technologies to pass the exam.

Cisco announced sweeping changes to its certification tracks in June 2019, and the new CCNA exam derives from one of the largest changes in Cisco history, according to Cisco author Wendell Odom. Odom, author of every CCNA Official Cert Guide, wrote two new volumes of his guides for the CCNA 200-301 exam. The singular path of the new CCNA exam is smaller overall compared to past exam versions, yet the extensive amount of material — both old and new — necessitated two volumes.

Both Volumes 1 and 2 cover various traditional networking topics, such as virtual LANs (VLANs) and basic IP services, as well as newer networking technologies, such as network automation. Odom said the new CCNA exam includes a lot for engineers to learn but also contains relevant and useful material for the current job market.

Editor’s note: The following interview was edited for length and clarity.

Can you compare details of the former and the new CCNA exams?

Wendell Odom

Wendell Odom: If you took the old CCNA Routing and Switching exam blueprint, about half those topics are in the new CCNA exam. The literal words are there. It’s not just the same topic — it’s copied-and-pasted topics from the old to the new.

Then, the new exam has topics that weren’t in any of the old. It has a few you might say came from CCNA Collaboration or CCNA Data Center. For the most part, the new topics [show] the world is changing and IT changes quickly. These are new things Cisco finds important for routing and switching, like automation and cloud. Now, it introduces intent-based networking to CCNA for the first time.

If you view the old as 100 points in volume, the new is about 75% of that — 75 points. Fifty points are old exam topics that stuck around: VLANs, VLAN trunks, IPv4 and IPv6 routing, Layer 3 filters, sub-Layer 2 filtering with port security, security protocols, basic IP services, like SNMP [Simple Network Management Protocol] and NTP [Network Time Protocol].


Now, there’s more OSPF [Open Shortest Path First] — particularly, OSPF network types. On an Ethernet interface, you’ve got two or more routers that run OSPF connected to the same Ethernet. They elect a designated router, which causes OSPF to model the connected subnet differently. It changes OSPF operation on that LAN.

That’s typical on a LAN, but if you use Ethernet in WANs — particularly point-to-point WAN links — you don’t want LAN-like OSPF behavior electing a designated router. To change that, in Cisco routers, you change the OSPF network type to point-to-point instead of the default broadcast type, which is what causes it to act like a LAN.
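As a quick sketch of that change in Cisco IOS (the interface name is hypothetical), the network type is set per interface:

```
interface GigabitEthernet0/1
 ip ospf network point-to-point
```

With the default broadcast type, the routers on the link would elect a designated router; point-to-point skips that election.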

The new Volume 1 has four chapters on wireless LANs. It’s basic: What’s an access point [AP]? What are the different wireless standards? How would you configure an AP to be a stand-alone AP? How would you do it with a wireless LAN controller? To a networker, it’s not very deep, but it’s your first step, and there’s a lot in CCNA that are first steps in learning technologies.

Now, there’s DHCP [Dynamic Host Configuration Protocol] snooping and dynamic ARP [Address Resolution Protocol] inspection. And the new CCNA exam mentions TFTP [Trivial File Transfer Protocol] and FTP specifically.


The old had basics of what I call ‘controller-based networking;’ there’s more now. It talks about underlays and overlays, which now gets you ready for software-defined access. The old and new CCNA exams have a lot about the old way to do LANs — how you build switch networks, Spanning Tree Protocol, etc.

Now, there’s REST, JSON [JavaScript Object Notation], specifically mentioned comparisons of Ansible, Puppet and Chef, as far as how they work under the covers. It doesn’t get into how to manipulate the tools, but more of which uses a push model, which uses a pull model, etc.

If you studied now for everything except newer technologies, which is 10% of the exam blueprint, it’d seem like traditional networking technology. Then, you get into newer, evolving technologies. Now, we’re pushing the baby birds out of the nest because … you’re going to get a lot of this in the CCNP Enterprise Core, etc. I’m glad some of it is in CCNA.

What questions have you gotten about the new CCNA exam?

Odom: Oddly enough, there’s not much worry about new topics. ‘Do I need to know Python?’ That’s probably most common because exam topics don’t mention Python. You think automation, and you think your first step is a programming language. You can actually learn everything in CCNA for automation without knowing Python.

People quickly zero in on technical questions: Layer 2, Layer 3 interactions. People get confused about encapsulation. OSPF concepts are more common — typically, LSAs [link-state advertisement], what those mean and whether that’s important. ‘Do I need to understand what a Type 1, Type 2 and Type 3 LSA is?’ I don’t know how important that is for the exam depending on the version. But if you’re going to use OSPF, you need to know what it is for real life.

I’m happy with how [the new CCNA exam] balances newer automation features and technologies — not overwhelming newbies with too much new and giving the foundation they need to get a real job. I think Cisco hit the right balance. People will enjoy the topics they learn, both for learning and for how it matches real jobs today. Cisco did this particular exam right.


TraceProcessor 0.3.0 – Windows Developer Blog

TraceProcessor version 0.3.0 is now available on NuGet with the following package ID: Microsoft.Windows.EventTracing.Processing.All
This release contains some feature additions and bug fixes since version 0.2.0. (A full changelog is below). Basic usage is still the same as in version 0.1.0.
The focus of this release has been in preparation for a forthcoming version 1.0.0, including many minor changes to naming and data types moving towards a finalized version 1 API.
Also, this release adds trace.UseStreaming(), which supports accessing multiple types of trace data in a streaming manner (processing data as it is read from the trace file, rather than buffering that data in memory). For example, a syscalls trace can be quite large, and buffering the entire list of syscalls in a trace can be quite expensive. The following code shows accessing syscall data in the normal, buffered manner via trace.UseSyscalls():
[code lang="csharp"]
using Microsoft.Windows.EventTracing;
using Microsoft.Windows.EventTracing.Processes;
using Microsoft.Windows.EventTracing.Syscalls;
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        if (args.Length != 1)
        {
            Console.Error.WriteLine("Usage: <trace.etl>");
            return;
        }

        using (ITraceProcessor trace = TraceProcessor.Create(args[0]))
        {
            IPendingResult<ISyscallDataSource> pendingSyscallData = trace.UseSyscalls();

            trace.Process();

            ISyscallDataSource syscallData = pendingSyscallData.Result;

            Dictionary<IProcess, int> syscallsPerCommandLine = new Dictionary<IProcess, int>();

            foreach (ISyscall syscall in syscallData.Syscalls)
            {
                IProcess process = syscall.Thread?.Process;

                if (process == null)
                {
                    continue;
                }

                if (!syscallsPerCommandLine.ContainsKey(process))
                {
                    syscallsPerCommandLine.Add(process, 0);
                }

                syscallsPerCommandLine[process]++;
            }

            Console.WriteLine("Process Command Line: Syscalls Count");

            foreach (IProcess process in syscallsPerCommandLine.Keys)
            {
                Console.WriteLine($"{process.CommandLine}: {syscallsPerCommandLine[process]}");
            }
        }
    }
}
[/code]
With a large syscalls trace, attempting to buffer the syscall data in memory can be quite expensive, or it may not even be possible. The following code shows how to access the same syscall data in a streaming manner, replacing trace.UseSyscalls() with trace.UseStreaming().UseSyscalls():
[code lang="csharp"]
using Microsoft.Windows.EventTracing;
using Microsoft.Windows.EventTracing.Processes;
using Microsoft.Windows.EventTracing.Syscalls;
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        if (args.Length != 1)
        {
            Console.Error.WriteLine("Usage: <trace.etl>");
            return;
        }

        using (ITraceProcessor trace = TraceProcessor.Create(args[0]))
        {
            IPendingResult<IThreadDataSource> pendingThreadData = trace.UseThreads();

            Dictionary<IProcess, int> syscallsPerCommandLine = new Dictionary<IProcess, int>();

            trace.UseStreaming().UseSyscalls(ConsumerSchedule.SecondPass, context =>
            {
                Syscall syscall = context.Data;
                IProcess process = syscall.GetThread(pendingThreadData.Result)?.Process;

                if (process == null)
                {
                    return;
                }

                if (!syscallsPerCommandLine.ContainsKey(process))
                {
                    syscallsPerCommandLine.Add(process, 0);
                }

                syscallsPerCommandLine[process]++;
            });

            trace.Process();

            Console.WriteLine("Process Command Line: Syscalls Count");

            foreach (IProcess process in syscallsPerCommandLine.Keys)
            {
                Console.WriteLine($"{process.CommandLine}: {syscallsPerCommandLine[process]}");
            }
        }
    }
}
[/code]
By default, all streaming data is provided during the first pass through the trace, and buffered data from other sources is not available. This example shows how to combine streaming with buffering – thread data is buffered before syscall data is streamed. As a result, the trace must be read twice – once to get buffered thread data, and a second time to access streaming syscall data with the buffered thread data now available. In order to combine streaming and buffering in this way, the example passes ConsumerSchedule.SecondPass to trace.UseStreaming().UseSyscalls(), which causes syscall processing to happen in a second pass through the trace. By running in a second pass, the syscall callback can access the pending result from trace.UseThreads() when it processes each syscall. Without this optional argument, syscall streaming would have run in the first pass through the trace (there would be only one pass), and the pending result from trace.UseThreads() would not be available yet. In that case, the callback would still have access to the ThreadId from the syscall, but it would not have access to the process for the thread (because thread to process linking data is provided via other events which may not have been processed yet).
Some key differences in usage between buffering and streaming:
Buffering returns an IPendingResult<T>, and the result it holds is available only before the trace has been processed. After the trace has been processed, the results can be enumerated using techniques such as foreach and LINQ.
Streaming returns void and instead takes a callback argument. It calls the callback once as each item becomes available. Because the data is not buffered, there is never a list of results to enumerate with foreach or LINQ – the streaming callback needs to buffer whatever part of the data it wants to save for use after processing has completed.
The code for processing buffered data appears after the call to trace.Process(), when the pending results are available.
The code for processing streaming data appears before the call to trace.Process(), as a callback to the trace.UseStreaming.Use…() method.
A streaming consumer can choose to process only part of the stream and cancel future callbacks by calling context.Cancel(). A buffering consumer is always provided a full, buffered list.
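For illustration, a streaming consumer that stops after a fixed number of items might look like this sketch (the cutoff value and variable names are assumptions):

```csharp
// Sketch: cancel future syscall callbacks after the first 1,000 items.
int count = 0;

trace.UseStreaming().UseSyscalls(context =>
{
    ++count;

    if (count >= 1000)
    {
        context.Cancel(); // no further syscall callbacks will be delivered
    }
});
```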
Sometimes trace data comes in a sequence of events – for example, syscalls are logged via separate enter and exit events, but the combined data from both events can be more helpful. The method trace.UseStreaming().UseSyscalls() correlates the data from both of these events and provides it as the pair becomes available. A few types of correlated data are available via trace.UseStreaming():
Streams correlated context switch data (from compact and non-compact events, with more accurate SwitchInThreadIds than raw non-compact events).
Streams correlated scheduled task data.
Streams correlated system call data.
Streams correlated window-in-focus data.
Additionally, trace.UseStreaming() provides parsed events for a number of different standalone event types:
Streams parsed last branch record (LBR) events.
Streams parsed ready thread events.
Streams parsed thread create events.
Streams parsed thread exit events.
Streams parsed thread rundown start events.
Streams parsed thread rundown stop events.
Streams parsed thread set name events.
Finally, trace.UseStreaming() also provides the underlying events used to correlate data in the list above. These underlying events are:
Streams parsed compact context switch events.
Streams parsed context switch events. SwitchInThreadIds may not be accurate in some cases.
Streams parsed window focus change events.
Streams parsed scheduled task start events.
Streams parsed scheduled task stop events.
Streams parsed scheduled task trigger events.
Streams parsed session-layer set active window events.
Streams parsed syscall enter events.
Streams parsed syscall exit events.
If there are other types of data that you think would benefit from streaming support, please let us know.
As before, if you find these packages useful, we would love to hear from you, and we welcome your feedback. For questions using this package, you can post on StackOverflow with the tag .net-traceprocessing, and issues can also be filed on the eventtracing-processing project on GitHub.

Breaking Changes
StartTime and StopTime have changed from DateTime to DateTimeOffset (no longer UTC but now preserving the trace time zone offset).
The following three properties on IContextSwitchIn were incorrect and have been removed: ThreadState, IsWaitModeSwapable and ThreadRank. These properties remain available from IContextSwitchOut.
Metadata has been removed. Use trace.UseMetadata instead.
OriginalFileName was removed because it may contain inaccurate data. Use IImage.OriginalFileName instead.
IImageWeakKey was removed because it may contain inaccurate data. Use IImage.Timestamp and IImage.Size instead.
WeakKey was removed because it may contain inaccurate data. Use IImage.Timestamp and IImage.Size instead.
DefaultSymCachePath was removed. Use static properties on SymCachePath instead.
DefaultSymbolPath was removed. Use static properties on SymbolPath instead.
Service snapshots were previously available from both IServiceDataSource and ISystemMetadata. They are now only available from IServiceDataSource.
Trace statistics and stack events have had their shapes made consistent with event APIs elsewhere in trace processor.


ExecutingDeferredProcedureCall was removed. Use ICpuSample.IsExecutingDeferredProcedureCall instead.
ExecutingInterruptServicingRoutine was removed. Use ICpuSample.IsExecutingInterruptServicingRoutine instead.
IsWaitModeSwapable was incorrect and has been renamed IsUserMode.
The enum RawWaitReason has been renamed KernelWaitReason.
The RawWaitReason property on IContextSwitchOut has been renamed WaitReason.
ISyscall.StartTime has been renamed to EnterTime, and ISyscall.StopTime has been renamed to ExitTime.
ErrorCode has been changed to ExitCode for consistency.
UniqueKey has been renamed to ObjectAddress for accuracy.
TimeRange has been renamed to TraceTimeRange.
DiskIOPriority has been renamed to IOPriority.
A few core types named GenericEvent* have been renamed to TraceEvent* for consistency, since they also apply to classic and unparsed events (TraceEventHeaderFlags, TraceEventHeaderProperties and TraceEventType).
Trace statistics-related types are now in the Event namespace instead of the Metadata namespace.
StackEvent-related types are now in the Event namespace instead of the Symbols namespace.
Type has been replaced by TraceEvent.HeaderType.
EventProperty has been renamed to HeaderProperties.
Core extensibility types have been moved from the .Events namespace up to the Microsoft.Windows.EventTracing namespace.
Size has been renamed to Length for consistency.
WindowsTracePreprocessor has been renamed to TraceMessage for accuracy.
IsWindowsTracePreprocessor has been renamed to IsTraceMessage for accuracy.

Data Type Updates:
Most properties on IContextSwitch, IContextSwitchOut and IContextSwitchIn have been made nullable for correctness.
uint Processor has been changed to int Processor on multiple types.
ID-like properties (for example, ProcessId and ThreadId) have been changed from uint to int for consistency with .NET.
UserStackRange is now nullable, and Base and Limit addresses have been swapped to match KernelStackRange ordering and actual Windows stack memory layout.
The type of RemainingQuantum on IContextSwitchOut has been changed from int? to long? due to observed data overflow.
Throughout the API, timestamp properties are now of type TraceTimestamp rather than Timestamp. (TraceTimestamp implicitly converts to Timestamp).

ITraceTimestampContext has a new method (GetDateTimeOffset).
EventContext is now a ref struct instead of a class.
UserData is now of type ReadOnlySpan<byte> instead of IntPtr. The associated EventContext.UserDataLength has been removed; instead use EventContext.UserData.Length.
ExtendedData is now of type ExtendedDataItemReadOnlySpan, which is enumerable, rather than IReadOnlyList<ExtendedDataItem>.
TraceEvent has been split from EventContext and moved to EventContext.Event.
ICompletableEventConsumer has been replaced by ICompletable.
EventConsumerSchedule and IScheduledEventConsumer have been replaced by ConsumerSchedule and IScheduledConsumer.
Completion requests are no longer included in trace.Use(IEventConsumer) and require a separate call to trace.UseCompletion.
PendingResultAvailability has been merged into ConsumerSchedule.
UsePendingResult has been moved into an extension method.
PreparatoryPass and MainPass have been replaced with FirstPass and SecondPass.
WindowInFocus processing will no longer throw an exception when focus change events are missing.
Generic event field parsing exceptions will no longer be thrown during processing. Instead they are thrown on access to the Fields property of the IGenericEvent. GenericEventSettings.SuppressFieldParsingExceptions has been removed.
MarkHandled and MarkWarning have been removed.
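As an illustration of the field-parsing change above, parsing errors now surface at the point where Fields is accessed rather than during processing. The variable name and exception handling below are assumptions for the sketch:

```csharp
// Sketch: with the new behavior, trace.Process() no longer throws for
// unparseable generic event fields; any exception surfaces here instead.
try
{
    var fields = genericEvent.Fields;
    Console.WriteLine($"Parsed {fields.Count} fields.");
}
catch (Exception e)
{
    Console.Error.WriteLine($"Field parsing failed: {e.Message}");
}
```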

New Data Exposed
Streaming window-in-focus data as well as parsed events are now available via trace.UseStreaming().
UseClassicEvents() now provides all classic events, not just unhandled ones.
Previously the very last ContextSwitch on each processor was omitted from IContextSwitchDataSource.ContextSwitches, as the information about the thread switching in at that time was not present. Now these context switches are included in the list with a null value for IContextSwitch.SwitchIn.
A new HypervisorPartitionDataSource has been added that exposes data about the Hyper-V partition the trace was recorded in.
TraceTimestamp now provides a .DateTimeOffset property to get the absolute (clock) time for a timestamp.
Streaming Last Branch Record (LBR) events are now available via trace.UseStreaming().
Streaming ready thread events are now available via trace.UseStreaming().
Streaming syscall data as well as parsed events are now available via trace.UseStreaming().
Streaming context switch data as well as parsed events (both standard and compact) are now available via trace.UseStreaming().
Streaming scheduled task data as well as parsed events are now available via trace.UseStreaming().
IContextSwitchOut now contains Rank (only present for the non-legacy implementation).
IContextSwitchIn now contains WaitTime (only present for the non-legacy implementation).
IScheduledTask now provides user information.
NuGet packages for individual namespaces are now available in addition to the .All packages.
Streaming thread events are now available via trace.UseStreaming().
IThread now provides BasePriority, IOPriority, PagePriority, ProcessorAffinity and ServiceId.
Bug Fixes
Thread IDs used for syscalls are now taken from a reliable data source.
An access violation that could occur on program exit has been fixed.
TraceTimestamp now implements IComparable, IEquatable and multiple comparison operators.
An event consumer can cancel future event delivery by calling EventContext.Cancel().
Scheduled tasks now support the remaining trigger types.

Announcing Windows Server vNext Insider Preview Build 19551 | Windows Experience Blog

Hello Windows Insiders! Today we are pleased to release a new Insider Preview build of the Windows Server vNext Semi-Annual Channel Datacenter and Standard editions.

This build includes a fix that enlightens National Language Support (NLS) components to be container-aware. NLS state is now instanced per container. This fix addresses scenarios in which a container OS component attempts to access data that is unavailable in the container due to instancing.

Windows Server vNext Semi-Annual Preview: the Server Core Datacenter and Standard editions are available in the 18 supported Server languages in ISO format, and in VHDX format in English only.
Matching Windows Server Core App Compatibility FoD Preview
Matching Windows Server Language Packs

Symbols are available on the public symbol server – see Update on Microsoft’s Symbol Server blog post and Using the Microsoft Symbol Server.
Containers: For more information about how to obtain and install Windows Server containers and Insider builds, click here.  Quick start information, here.
Keys: The following keys allow for unlimited activations of Windows Server Semi-Annual Channel Previews
Server Standard: V6N4W-86M3X-J77X3-JF6XW-D9PRV
Server Datacenter: B69WH-PRNHK-BXVK3-P9XF7-XD84W
The Windows Server Preview will expire July 31st, 2020.

Registered Insiders may navigate directly to the Windows Server Insider Preview download page.  See the Additional Downloads dropdown for Windows Admin Center and other supplemental apps and products. If you have not yet registered as an Insider, see GETTING STARTED WITH SERVER on the Windows Insiders for Business portal.

The most important part of a frequent release cycle is hearing what's working and what needs to be improved, so your feedback is highly valued. For Windows Server, use the Feedback Hub application on your registered Windows 10 Insider device. In the app, choose the Windows Server category and then the appropriate subcategory for your feedback. In the title of the feedback, please indicate the build number you are providing feedback on, as shown below:
[Server #####] Title of my feedback
See Share Feedback on Windows Server via Feedback Hub for specifics. We also encourage you to visit the Windows Server Insiders space on the Microsoft Tech Communities forum to collaborate, share and learn from experts.
For Windows Admin Center, Send us feedback via UserVoice. We also encourage you to visit the Windows Admin Center space on the Microsoft Tech Communities.

This is pre-release software – it is provided for use “as-is” and is not supported in production environments. Users are responsible for installing any updates made available from Windows Update. All pre-release software made available to you via the Windows Server Insider program is governed by the Insider Terms of Use.

What are some features in Microsoft Defender ATP?

Microsoft Defender Advanced Threat Protection is another layer of endpoint security available to administrators, but what it offers can be confusing due to the many features of the platform, some of which are not available on every operating system.

Microsoft Defender ATP — the name changed from Windows Defender ATP in March 2019 after Microsoft extended support to Mac systems — includes several endpoint protection features. For example, attack surface reduction uses rules — such as blocking Office communication applications from creating child processes — along with folder access controls, exploit protection and network protection to reduce the attack surface of the operating system. Microsoft also provides enhanced antivirus protection through the Azure cloud — a feature the company calls “next-generation protection” — though the older Microsoft Defender antivirus feature is still available.

Microsoft Defender ATP endpoint detection and response capabilities monitor endpoint and network events, recording certain behaviors for further analysis, detection, alerting and reporting. This functionality can highlight events that may indicate malicious activity. An agent is typically required on each endpoint for data collection and communication. Microsoft said it provides additional automation for better security intelligence updates through the Microsoft Defender ATP cloud, reducing the amount of direct attention and remediation required from systems administrators.

The enhanced reporting feature groups related alerts into “incidents,” which correlate the machines involved and related evidence to improve the IT staff’s understanding of an attack. This reduces the amount of time needed to manually analyze and assess the attack. Finally, Microsoft Defender ATP improves threat hunting with support for detecting and responding to memory-based — also known as file-less — attacks, allowing administrators to better detect and respond when these incidents occur.

Microsoft Defender ATP enhances onboarding practices for Windows Server 2019 systems. For example, machines running Windows Server 2019 can be onboarded through System Center Configuration Manager using a script. This greatly accelerates adding new servers to the platform and minimizes errors. Microsoft tied the security features in Microsoft Defender ATP more closely to Windows Server 2019 to provide additional attention to attacks that originate in the kernel and memory of the server OS.

Microsoft Defender ATP also integrates with other offerings, most notably several Azure cloud security services including Azure Security Center and Azure Advanced Threat Protection. As its name indicates, the Azure Security Center is a cloud-based security platform. It includes automated onboarding of new systems, a unified view of systems and alerts, and the capability to manage security and conduct investigations across the enterprise and in the cloud. The Azure Security Center also connects IT to the dashboard view provided by the Microsoft Defender Security Center to give IT in-depth information on alerts to determine if a breach has occurred. Also based in the Microsoft cloud, Azure Advanced Threat Protection pulls in information from the on-premises Active Directory system and handles certain security tasks, such as tracking down suspicious user activities and protecting the credentials and identities of employees.


How IoT, 5G, RPA and AI are opening doors to cybersecurity threats

“You can’t say civilization don’t advance… in every war they kill you in a new way.” – Will Rogers

Software is eating the world. Cloud, RPA and AI are becoming increasingly common and a necessary part of every business that wishes to thrive, or simply survive, in the age of digital transformation, whether to lower operational costs or to remain competitive. But as we digitalize more of our work, we open new doors for cybersecurity threats. Here, we look at the past year's technological advancements to learn how to use this progress without getting burned.


From office devices to home appliances, our “anytime, anywhere” needs require every peripheral to connect to the internet and our smartphones. But simultaneously, the new IT landscape has created a massive attack vector. SonicWall’s Annual Threat Report discovered a 217% increase in IoT attacks, while their Q3 Threat Data Report discovered 25 million attacks in the third quarter alone, a 33% increase that shows the continued relevance of IoT attacks in 2020.

IoT devices collect our private data for seemingly legitimate purposes, but when a hacker gains access to those devices, they offer the perfect means for spying and tracking. The FBI recently warned against one such example of the cybersecurity threat concerning smart TVs, which are equipped with internet streaming and facial recognition capabilities.

As governments increasingly use cyberattacks as part of their aggressive policies, the problem only gets worse. IoT devices were usually exploited for creating botnet armies to launch distributed denial-of-service attacks, but in April 2019, Microsoft announced that Russian state-sponsored hackers used IoT devices to breach corporate networks. The attackers initially broke into a voice over IP phone, an office printer and a video decoder and then used that foothold to scan for other vulnerabilities within their target’s internal networks.

Some of the hacks mentioned above were facilitated because the devices were deployed with default manufacturer passwords, or because the latest security update was not installed. But with the IoT rush, new cybersecurity threats and attack vectors emerge. “When new IoT devices are created, risk reduction is frequently an afterthought. It is not always a top priority for device makers to create security measures since no initial incentive is seen due to a lack of profit,” warned Hagay Katz, vice president of cybersecurity at Allot, a global provider of innovative network intelligence and security solutions. “Most devices suffer from built-in vulnerabilities and are not designed to run any third-party endpoint security software. For many consumers, cybersecurity has been synonymous with antivirus. But those days are long gone,” he said.

To fight these new cybersecurity threats, Katz recommended turning to a communications service provider (CSP). “Through machine learning techniques and visibility provided by the CSP, all the devices are identified. A default security policy is then applied for each device and the network is segregated to block lateral malware propagation. By simply adding a software agent on the subscriber’s existing consumer premise equipment, CSPs can easily roll out a network or router-based solution that protects all the consumer’s IoT devices.”

We also need to consider whether we really need an IoT version of everything. In the words of Ryan Trost, co-founder and CTO of ThreatQuotient who has over 15 years of security experience focusing on intrusion detection and cyber intelligence: “I can appreciate the benefits of every single student having a tablet (or equivalent) for schooling. However, I struggle to find the legitimacy of why my refrigerator needs an Internet connection, or for that matter, a video conferencing feature.”


While the next generation network takes AI, VR and IoT to new levels, it’s also creating new problems. “5G utilizes millimeter waves, which have a much shorter range than the conventional lower-frequency radio waves. This is where the source of the greatest [cybersecurity] threat in 5G infrastructure originates from,” warned Abdul Rehman, a cybersecurity editor at VPNRanks. “An attacker can steal your data by setting up a fake cell tower near your home and learn a great deal about the device you are using including location, phone model, operating system, etc. These can even be used to listen in on your phone calls.” To mitigate the risk, Rehman suggests relying on strong encryption.


We’ve previously talked about how AI is vulnerable to data poisoning attacks. As the technology advances, new forms of cybersecurity threats emerge. Voice deepfakes are one of such threats, where hackers impersonate C-level executives, politicians or other high-profile individuals. “Employees are tricked into sending money to scammers or revealing sensitive information after getting voice messages and calls that sound like they are from the CFO or other executives,” said Curtis Simpson, CISO at IoT security company Armis. “We’ve already seen one fraudulent bank transfer convert to $243,000 for criminals. Given how hard it is to identify these deepfakes compared to standard phishing attacks, I expect these operations will become the norm in the new year.”

It only takes one wrong click for a hacker to implant malware or open a backdoor. Unfortunately, that could be the undoing of all other security measures put in place to protect the network. “No one is off limits when it comes to cybersecurity threats,” warned PJ Kirner, CTO and founder of Illumio, which develops adaptive micro-segmentation technologies to prevent the spread of breaches. Children could end up installing malware on their parents’ phones. According to Kirner, “our sons and daughters will quickly become a new threat vector to enterprise security.”

Robotic process automation

A Gartner report tracked the annual growth of RPA software and projected that revenue would grow to $1.3 billion by 2019. “In 2020, [RPA] will continue its disruptive rise and become even more ingrained in our everyday lives,” predicted Darrell Long, vice president of product management at One Identity, an identity and access management provider. “However, with the rapid adoption of RPA, security has become an afterthought, leaving major vulnerabilities.” RPA technologies hold privileged data, and that makes them lucrative targets for cybercriminals. CIOs must pay close attention to the security of the RPA tools they use and the data they expose to ensure their business is not infiltrated by malicious actors.

Storage attacks

Cybercrimes are not only rising — they are also evolving. Attackers have realized that the data in storage systems is key to an organization’s operations. “Hackers are now targeting network attached storage (NAS) devices, according to the data revealed in a new Kaspersky report. This new type of attack presents a significant problem to businesses using only NAS devices to store their backups,” said Doug Hazelman, a software industry veteran with over 20 years of experience.

According to Kaspersky, there was little evidence of NAS attacks in 2018, but as hackers realized the benefits, they caught users off guard since NAS devices typically don’t run antivirus or anti-malware products. Hackers exploited this shortcoming to put 19,000 QNAP NAS devices at risk.

Organizations should keep their systems updated with the latest security patches and ensure only necessary devices are reachable from public networks. Per Hazelman’s recommendation, “to prevent cybercriminals from infecting backups with malicious software, CIOs should ensure company backups are being stored on two different media types, one of which being cloud storage, which has several benefits, including increased security.”

Reaching for the clouds


Contrary to the other technologies on this list, ransomware has largely left the cloud untouched. However, as companies continue to transition their servers and data to the cloud for more cost-efficient solutions, criminals will shift their focus. The current attacks have largely been due to cloud misconfigurations or stolen credentials, but since the cloud has become a one-stop shop for all data, it’s becoming the new battleground.

What we need to do about cybersecurity threats

By now, we’ve seen how devastating cyberattacks can be, and that the risks are steadily increasing. Security must be a priority and not an afterthought. While new technologies promise convenience and increased returns, CIOs must make sure the security risks do not outweigh the gains.



Using AI for Good with Microsoft AI

Partner Story

Celebrating priceless architecture in France

The Musée des Plans-Reliefs is bringing architecture to life using AI and mixed reality. Viewers are immersed in an experience that uses technology to recreate a vital piece of French history and culture, based on a relief map of the historic Mont-Saint-Michel.

Learn about relief map project
Author: Microsoft News Center