Microsoft is pleased to announce the recent opening of our new Quantum Materials Laboratory in Copenhagen, Denmark, on September 21. We have high expectations for the new lab. It’s where the heart of our quantum computer—the topological qubit—will be developed under the direction of Scientific Director Peter Krogstrup.
Reporting to Krogstrup is a team of skilled mechanical engineers, materials scientists, and quantum physicists. Together, they’re synthesizing ultra-clean quantum crystals, the building blocks of future quantum computers. The Copenhagen lab will supply these crystals to Microsoft Quantum labs located in Delft, the Netherlands; Sydney, Australia; Santa Barbara, California; and other locations.
It’s fitting that Copenhagen should host this groundbreaking new lab. After all, it was Danish physicist Hans Christian Oersted who in 1820 discovered the link between electricity and magnetism—a breakthrough that in time helped lead to the use of electricity to run our world. Another Danish scientist, Niels Bohr, received a Nobel prize in physics in 1922 for his work on quantum theory. Bohr later founded the Institute of Theoretical Physics in Copenhagen. Our new quantum lab will lead to discoveries that are equally groundbreaking.
Given that people such as Oersted and Bohr are household names in Denmark—with streets and parks named for them—it wasn’t surprising that the opening of our new lab was a newsworthy event. Danish Minister of Higher Education and Science Tommy Ahlers was among those attending, and later joked on Twitter about a TV interview he gave: “Everything was going fine until they asked me to explain the physics behind quantum computing!”
Beyond research and development, another role for the new Copenhagen lab is to help educate the public on the field of quantum computing. It’s been designed such that passersby, families with children, students, and others can see researchers at work behind large glass windows creating materials that will make scalable quantum computing possible. The lab’s neighbor is the Technical University of Denmark, where half of Denmark’s engineers are trained. Students there are finding inspiration in the Microsoft lab and charting their own futures around quantum computing.
The Microsoft Quantum Materials Lab’s impressive array of scientific equipment speaks to the exciting research it’s tackling. One of the problems researchers there will investigate is how to create quantum states that are more easily interpreted. “Quantum states are extremely fragile and therefore very difficult to maintain and read,” lab director Krogstrup says. “And quantum materials must be perfect. That means not one atom can lie in the wrong place—literally. This is among the things we need to do more research in.”
Quantum computing is a complex concept and can be a challenge for people to wrap their heads around. But the potential of the field is clear—creating computers far more powerful than anything available today, with the ability to solve some of the most difficult computing problems imaginable. We look forward to delivering that reality with the Quantum Materials Lab.
Communications platform as a service (CPaaS) is changing the collaboration relationship between businesses and their customers, partners and employees through embedded communications. Traditional business communication and collaboration tools were limited to serving broad use cases, according to Frost & Sullivan analyst Michael Brandenburg.
Limited vendor offerings create clunky user experiences and lack the ability to tailor products to fit specific business needs. Now with the programmability of CPaaS and embedded communications capabilities, users are no longer limited to specific vendor products, he said.
CPaaS effectively breaks down the different elements that are integral to business communications and collaboration, such as voice, video and messaging services, and makes them easily customizable. With the help of APIs, these tools can be coded and embedded into business applications to allow organizations to structure their communications around what works best for operations and workflow.
Read more about how Brandenburg believes CPaaS and embedded communications are disrupting legacy communications and collaboration technology.
SD-WAN not a threat to MPLS survival
Though the popular opinion seems to be that software-defined WAN (SD-WAN) is only a short time away from making MPLS obsolete, Nemertes Research analyst John Burke said rumors of MPLS's demise have been greatly exaggerated. Rather, it appears those adopting SD-WAN are planning to keep MPLS in some capacity.
Most organizations will change their strategic plans for MPLS. Rather than eliminating it entirely, they will likely scale MPLS back from being everywhere in the network to connecting only larger, more critical sites.
MPLS networks are highly reliable and can differentiate between traffic types with different needs. The addition of SD-WAN will help keep lower-value traffic off the MPLS network, which will help mitigate the costs associated with rising capacity needs, he said.
Read more about how Burke says an SD-WAN-MPLS strategy creates a stronger network.
Cisco strategy light on collaboration software offerings
Cisco appears to be falling back on its hardware-driven roots in its latest collaboration product announcements, despite a number of software-focused acquisitions. The announcements focus heavily on updates to existing video conferencing hardware, new headsets and its Webex Edge architecture, leaving those who were hoping for more software news feeling underwhelmed, according to Tim Banting, analyst for Global Data.
When Rowan Trollope took over as general manager of Cisco’s collaboration unit, the company seemed to focus more on software in an attempt to follow market trends. Since Trollope’s departure in May, however, Cisco has shared little about its plans for future development. A number of shakeups in senior leadership have also added to the uncertainty surrounding Cisco’s strategy on the collaboration front, Banting said.
While Cisco has stayed quiet about its strategy, competitors like Microsoft and Zoom have been delivering significant software updates. The unified communications market is rapidly changing, and with so much uncertainty around Cisco’s strategy, it’s unclear if it will be able to keep up with the competition.
Using a few PowerShell commands for Active Directory, you can streamline your administrative approach to managing groups in the enterprise.
Active Directory is the foundation of the modern Windows environment that organizes the use of devices, users and resources. You can think of Active Directory as having two aspects: data — users, groups, etc. — and service — sites, replication, etc.
Active Directory data administration can take a lot of time unless you learn to automate. In this tip, I’ll concentrate on explaining how to use PowerShell commands for Active Directory to manage groups.
The Active Directory cmdlets don’t yet work in PowerShell Core. You could use .NET classes to administer groups in PowerShell Core, but that is more difficult than working with cmdlets. This tutorial will explain how to use these cmdlets with Windows PowerShell.
Produce a proper setup for Active Directory groups
A group in Active Directory is a container for user or computer objects. A best practice is to have groups include either users or computers, but not both. Usually, a group is created to simplify the process of granting permissions: you grant access to the group once rather than granting it separately to each individual user.
Video: How to set up access to resources with groups
The following cmdlets provide the functionality needed to manage the full group lifecycle:
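New-ADGroup: creates a group
Get-ADGroup: retrieves one or more groups
Set-ADGroup: modifies a group's properties
Remove-ADGroup: deletes a group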
Group membership is managed by these cmdlets:
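Add-ADGroupMember: adds one or more users, computers or groups to a group
Get-ADGroupMember: lists the members of a group
Remove-ADGroupMember: removes members from a group
Add-ADPrincipalGroupMembership, Get-ADPrincipalGroupMembership and Remove-ADPrincipalGroupMembership: manage membership from the user or computer side
Get-ADAccountAuthorizationGroup: retrieves the security groups in an account's token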
You can view the available groups with Get-ADGroup and a wildcard filter:
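Get-ADGroup -Filter *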
The results list each group with its default properties, including name, category and scope. In a large domain, you can narrow the search with the -SearchBase parameter.
Building a group with PowerShell commands for Active Directory
GroupCategory can be either Security — a group to which permissions are assigned — or Distribution, which is used for email distribution lists. As an Active Directory administrator, you normally deal with Security groups.
The GroupScope has three possible values:
Domain Local: Contains members from any domain in the AD forest but only applies to the domain in which it was created. A Domain Local group can be nested in Domain Local groups from the same domain.
Global: Contains members of the domain in which it was created and can be applied in any domain in the forest. A Global group can be nested in a Global group from the same domain or any Domain Local or Universal group.
Universal: Contains members of and applies to any domain in the Active Directory forest. It can be nested in any Domain Local or Global group.
Creating a new group with PowerShell commands for Active Directory requires, at a minimum, the group name, category and scope:
New-ADGroup -Name SWStest1 -GroupCategory Security -GroupScope Global
Get-ADGroup -Identity SWStest1
DistinguishedName : CN=SWStest1,CN=Users,DC=Manticore,DC=org
GroupCategory     : Security
GroupScope        : Global
Name              : SWStest1
ObjectClass       : group
ObjectGUID        : b26c225e-9fe9-43c3-a2d4-362515389bae
SamAccountName    : SWStest1
SID               : S-1-5-21-759617655-3516038109-1479587680-1362
The group is created in the Users container. You can specify the OU for the group you're creating using the following command:
New-ADGroup -Name SWStest2 -GroupCategory Security -GroupScope Global -Path "OU=UserGroups,DC=Manticore,DC=org"
You can specify other parameters, such as a display name or description, when you create the group using PowerShell commands for Active Directory. You can adjust those properties using Set-ADGroup, though you’re more likely to use the cmdlet to change the group scope or category. You have a limited number of options when changing the group scope:
Domain Local: change to Universal
Global: change to Universal
Universal: change to Domain Local or Global
If you want to change a Domain Local group to a Global group, you have to do so via a Universal group:
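A minimal sketch of that two-step change, reusing the SWStest2 group from the earlier example, with the -Confirm switch so each step prompts before it runs:

# Step 1: widen the scope to Universal
Set-ADGroup -Identity SWStest2 -GroupScope Universal -Confirm
# Step 2: change the scope from Universal to Global
Set-ADGroup -Identity SWStest2 -GroupScope Global -Confirm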
Confirm
Are you sure you want to perform this action?
Performing the operation "Set" on target "CN=SWStest2,OU=UserGroups,DC=Manticore,DC=org".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): y
If you don’t want to manually confirm the removal, use the Confirm parameter:
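A minimal sketch, assuming the SWStest1 group created earlier is the one being removed:

# Suppress the confirmation prompt when deleting the group
Remove-ADGroup -Identity SWStest1 -Confirm:$false

Group membership is handled in much the same way. The sketch below, which assumes the MickGreen account used later in this tip, adds the user to the SWStest2 group and then lists every group the user belongs to; Get-ADPrincipalGroupMembership returns output like the following:

# Add the user to the group, then list the user's group memberships
Add-ADGroupMember -Identity SWStest2 -Members MickGreen
Get-ADPrincipalGroupMembership -Identity MickGreen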
distinguishedName : CN=Domain Users,CN=Users,DC=Manticore,DC=org
GroupCategory     : Security
GroupScope        : Global
name              : Domain Users
objectClass       : group
objectGUID        : 645b85eb-84d1-4046-a052-46f0eee004f1
SamAccountName    : Domain Users
SID               : S-1-5-21-759617655-3516038109-1479587680-513

distinguishedName : CN=SWStest2,OU=UserGroups,DC=Manticore,DC=org
GroupCategory     : Security
GroupScope        : Global
name              : SWStest2
objectClass       : group
objectGUID        : ff770df0-c416-45eb-b4f9-00ad39f7ea8d
SamAccountName    : SWStest2
SID               : S-1-5-21-759617655-3516038109-1479587680-1364
The last cmdlet is Get-ADAccountAuthorizationGroup, which retrieves the security groups from the specified user, computer or service account's token. The results will include all groups, such as Everyone, that are managed automatically:
Get-ADAccountAuthorizationGroup -Identity MickGreen | select name
name
----
Domain Users
Everyone
Users
Pre-Windows 2000 Compatible Access
Authenticated Users
This Organization
SWStest2
Service asserted identity
Medium Mandatory Level
You will use the *-ADGroup and *-ADGroupMember cmdlets for most of your administrative efforts. It's very rare that you'll need to use the other cmdlets mentioned in this tip.
We’re excited to announce the preview release of the Fluent XAML Theme Editor application!
As some of you may remember from our Build 2018 session this year, we previewed a tool using our new ColorPaletteResources API that allows you to set the color theme of your app through some simple color selections.
This tool is now publicly available on our GitHub repository for Insiders and anyone running pre-release builds (Windows 10 SDK Preview Build 17723 or newer) to test out!
With the preview build, you can select three major colors for both the Light and Dark themes in the right-hand properties view labeled “Color Dictionary”:
Region – The background that all the controls sit on, which is a separate resource that does not exist in our framework.
Base – Represents all our controls’ backplates and their temporary state visuals like hover or press. In general, Base should be in contrast with the background (or Region color) of your theme and with black text (if in Light theme) and white text (if in Dark theme).
Primary – This is essentially the Accent color and should contrast with mainly white text.
It is used in more choice locations to show alternate rest states for toggled controls like list selection, checkbox or radio button checked states, slider fill values, and other control parts that need to be shown as different from their default rest state once interacted with.
In addition to the three major colors for each theme, you can also expand any one of the major colors to see a list of minor colors that change the color on certain controls on a more refined level.
In the above example, you’ll notice that although my major Base color is purple in Dark Theme, I can override that second gradient value to be yellow to change the borders and disabled color of controls.
To access the detailed view of colors, simply click the chevron next to the major color buttons:
The editor will ship with some presets for you to look at to get an idea of what a theme looks like in the app.
The preset dropdown is located at the top of the Color Dictionary properties panel.
When you first launch the app, it will always be set to Default, which is the default Light and Dark theme styling for all our controls. You can select different themes like Lavender and Nighttime to get an idea of how the tool will theme our controls.
Once you’re ready to start making your own theme, just start editing the colors! Once you’ve started tweaking them, you’ll notice that the Presets ComboBox goes from the name of the preset to “Custom”:
This means that you’ve started a new temporary theme that’s “Custom.” Any changes you make will not affect any of the other Presets in that box.
Once you’re satisfied with the changes you’ve made, simply click the “Save” button and browse to your desired save point.
Similarly, you can open your saved JSON theme by clicking the “Load” button and browsing to your saved theme’s file location.
Contrast Ratio Checking
Lastly, one of the most important parts of creating your theme is making sure that both the Light and Dark variants are contrast compliant. The tool provides a small list of contrast information on the left-hand side of the color selection window when you're choosing a color.
In this window you can see your contrast with the most prevalent text color in the theme that you’re choosing to edit, in the above case black text because you are editing a Light theme color value.
When you pick a color that falls below the standard contrast ratio of 4.5:1, you’ll be alerted with red text next to your contrast value.
You can learn more about contrast ratios and their importance here.
Of course, once you've themed everything, you'll want to use it in your app! To do that you'll need to click the "Export" button at the bottom of the Color Dictionary properties panel.
That button will open a popup window with a generic, unnamed ResourceDictionary stub (seen below).
This window doesn't make anything final; if you want to make further changes to the theme and re-export them, the Export window will refresh with your changed colors.
Once you're ready to use it in your app, click the "Copy to Clipboard" button in the lower right corner and go to your UWP Visual Studio solution.
Once in Visual Studio, right-click the project in Solution Explorer, then go to Add > New Item and choose Resource Dictionary.
Name that dictionary whatever makes sense to you and click Add when you’re done.
That should generate a blank ResourceDictionary like this:
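<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

</ResourceDictionary>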
Now you can paste the exported theme code from the editor into that ResourceDictionary.
Now you have a fully customized color theme waiting to be used, so let's apply it!
To do that, you’ll want to go into your page or app.xaml (depending on how much of your app you want the theme to apply to) and merge your theme dictionary into the resources of that page or app.
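For App.xaml, that merge looks roughly like the sketch below; MyCustomTheme.xaml is a stand-in for whatever you named your dictionary:

<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <!-- Pull the exported theme dictionary into the app-wide resources -->
            <ResourceDictionary Source="MyCustomTheme.xaml"/>
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>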
Lastly, don’t forget to set the background color of your page to the RegionColor that you picked for your theme. It’s the only brush that won’t get set automatically.
Once that's in, you're done! Your theme colors will now be pervasive across your app or page, depending on where you merged the dictionary.
If instead you want to scope your theme colors to a smaller area, you can also put all that power into just a container (like Grid or StackPanel) and the theme will scope to just the controls that live within that container:
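A rough example, again with MyCustomTheme.xaml standing in for your dictionary's file name:

<Grid>
    <Grid.Resources>
        <!-- Only controls inside this Grid pick up the custom theme; everything else keeps the app-level theme -->
        <ResourceDictionary Source="MyCustomTheme.xaml"/>
    </Grid.Resources>
</Grid>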
When you export your theme, you’ll see a ResourceDictionary markup with a ColorPaletteResources definition like this:
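A trimmed illustration of what that export contains; the real file sets many more ColorPaletteResources properties, and the color values and RegionColor key shown here are illustrative rather than the tool's exact output:

<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <ResourceDictionary.ThemeDictionaries>
        <!-- Light theme palette -->
        <ResourceDictionary x:Key="Light">
            <ColorPaletteResources Accent="#FF8961CC" BaseHigh="#FF000000" BaseLow="#FFCCCCCC"/>
            <Color x:Key="RegionColor">#FFE6E6E6</Color>
        </ResourceDictionary>
        <!-- Dark theme palette ("Default" is the dark dictionary key) -->
        <ResourceDictionary x:Key="Default">
            <ColorPaletteResources Accent="#FF8961CC" BaseHigh="#FFFFFFFF" BaseLow="#FF333333"/>
            <Color x:Key="RegionColor">#FF1F1F1F</Color>
        </ResourceDictionary>
    </ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>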
ColorPaletteResources is a friendly API for our SystemColors that sit within generic.xaml and allows for those SystemColors to be scoped at any level.
If you wanted to enable this same theme to work downlevel, you would have to define each SystemColor individually with each color from your theme:
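A shortened illustration of that approach: each SystemColor key from generic.xaml gets its own Color definition per theme. Only a handful of the many keys are shown, and the hex values are placeholders standing in for the theme's actual colors:

<ResourceDictionary.ThemeDictionaries>
    <ResourceDictionary x:Key="Light">
        <!-- Redefine each framework SystemColor with the theme's Light values -->
        <Color x:Key="SystemAccentColor">#FF8961CC</Color>
        <Color x:Key="SystemBaseHighColor">#FF000000</Color>
        <Color x:Key="SystemBaseMediumColor">#FF666666</Color>
        <Color x:Key="SystemChromeLowColor">#FFF2F2F2</Color>
    </ResourceDictionary>
    <ResourceDictionary x:Key="Default">
        <!-- And again with the Dark values -->
        <Color x:Key="SystemAccentColor">#FF8961CC</Color>
        <Color x:Key="SystemBaseHighColor">#FFFFFFFF</Color>
        <Color x:Key="SystemBaseMediumColor">#FF999999</Color>
        <Color x:Key="SystemChromeLowColor">#FF171717</Color>
    </ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>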
In the above case we’re using the Lavender theme to show this down-level transition.
Warning: Although this markup format change will enable your theme to be applied across controls in earlier SDK versions, it will not work on a page, container or control scoped level. ColorPaletteResources is the API that allows scoping behavior. This markup format will only work at the app.xaml level for earlier SDKs.
To use the Fluent XAML Theme Editor preview now, head on over to our GitHub repo for the app. There you can download the source to the app and get started with what it has to offer!
It has been one week since we launched Forza Horizon 4 and we are humbled by the overwhelmingly positive feedback we continue to receive from players around the world. We are proud to say that in this first week alone, there are currently two million players driving around Forza Horizon 4’s Britain. On behalf of everyone at Playground Games, thank you for supporting Forza Horizon 4 and making the game such a success!
With such incredible global engagement with Forza Horizon 4, the game is now the highest-rated Xbox exclusive of this generation* and Forza continues to be the best-selling racing franchise of this console generation**. We also want to share some fun stats with you from the community:
More than 4.6 million hours of Forza Horizon 4 gameplay have been watched across Mixer, Twitch, YouTube and Facebook as of Oct. 9***
Players have logged more than 822.7 million miles – that’s probably not covered under manufacturer warranty
The community has settled in quite nicely living the Horizon Life, purchasing more than 4.1 million properties and owning more than 74.4 million cars
More than 377.7 million roads discovered – you’ve been looking for miles and miles and miles…
To say this is an exciting time for everyone would be an understatement, and as Ralph shared on Inside Xbox during Goodwood, there's a lot more to look forward to in the next few weeks, including our biggest fan-requested feature, Route Creator, coming on Oct. 25.
Forza Horizon 4 is now available with Xbox Game Pass and globally on Xbox One and Windows 10. Xbox Game Pass members can start playing Forza Horizon 4 Standard Edition on Xbox One and Windows 10 as part of their monthly membership. The membership includes over 100 more great games, including highly anticipated new Xbox exclusives the day they're released, all for one low monthly price.
The future of gaming is a world where you are empowered to play the games you want, with the people you want, whenever you want, wherever you are, and on any device of your choosing. Our vision for the evolution of gaming is similar to music and movies — entertainment should be available on demand and accessible from any screen. Today, I’m excited to share with you one of our key projects that will take us on an accelerated journey to that future world: Project xCloud.
Today, the games you play are very much dictated by the device you are using. Project xCloud’s state-of-the-art global game-streaming technology will offer you the freedom to play on the device you want without being locked to a particular device, empowering YOU, the gamers, to be at the center of your gaming experience.
Content and community
Ultimately, Project xCloud is about providing gamers — whether they prefer console or PC — new choices in when and where they play, while giving mobile-only players access to worlds, characters and immersive stories they haven’t been able to experience before.
To realize this vision, we know we must make it easy for developers to bring their content to Project xCloud. Developers of the more than 3,000 games available on Xbox One today, and those building the thousands that are coming in the future, will be able to deploy and dramatically scale access to their games across all devices on Project xCloud with no additional work.
About Project xCloud
Scaling and building out Project xCloud is a multi-year journey for us. We’ll begin public trials in 2019 so we can learn and scale with different volumes and locations. Our focus is on delivering an amazing added experience to existing Xbox players and on empowering developers to scale to hundreds of millions of new players across devices. Our goal with Project xCloud is to deliver a quality experience for all gamers on all devices that’s consistent with the speed and high-fidelity gamers experience and expect on their PCs and consoles.
We’ve enabled compatibility with existing and future Xbox games by building out custom hardware for our datacenters that leverages our years of console and platform experience. We’ve architected a new customizable blade that can host the component parts of multiple Xbox One consoles, as well as the associated infrastructure supporting it. We will scale those custom blades in datacenters across Azure regions over time.
We are testing Project xCloud today. The test runs on devices (mobile phones, tablets) paired with an Xbox Wireless Controller through Bluetooth, and it is also playable using touch input. The immersive nature of console and PC games often requires controls that are mapped to multiple keys, buttons, sticks and triggers. We are developing a new, game-specific touch input overlay that provides maximum response in a minimal footprint for players who choose to play without a controller.
Cloud game-streaming is a multi-faceted, complex challenge. Unlike other forms of digital entertainment, games are interactive experiences that dynamically change based on player input. Delivering a high-quality experience across a variety of devices must account for different obstacles, such as low-latency video streamed remotely, and support a large, multi-user network. In addition to solving latency, other important considerations are supporting the graphical fidelity and framerates that preserve the artist’s original intentions, and the type of input a player has available.
Microsoft — with our nearly 40 years of gaming experience starting with PC, as well as our breadth and depth of capabilities from software to hardware and deep experience of being a platform company — is well equipped to address the complex challenge of cloud game-streaming. With datacenters in 54 Azure regions and services available in 140 countries, Azure has the scale to deliver a great gaming experience for players worldwide, regardless of their location.
Developers and researchers at Microsoft Research are creating ways to combat latency through advances in networking topology, and video encoding and decoding. Project xCloud will have the capability to make game streaming possible on 4G networks and will dynamically scale to push against the outer limits of what’s possible on 5G networks as they roll out globally. Currently, the test experience is running at 10 megabits per second. Our goal is to deliver high-quality experiences at the lowest possible bitrate that work across the widest possible networks, taking into consideration the uniqueness of every device and network.
We are looking forward to learning with you during our public trials next year and sharing more details as we continue on this journey to the future of gaming with you at the center. Stay tuned!
Forty years after it was built, the mainframe responsible for Medicare Part A claims is going to be moving to the cloud, at least in part because of anticipated value-based care needs.
The United States Digital Service (USDS) was asked by the Centers for Medicare & Medicaid Services (CMS) to bring its oldest and arguably most complicated system into the 21st Century. Sweeping changes to a healthcare data center are never for the faint of heart, but this project is especially challenging, said Shannon Sartin, executive director of the USDS. “This is our most technically hairy project and it processes claims worth 4% of the [U.S.] GDP,” which points to the massive data volume involved, Sartin said. “This has an impact on the way we pay for the care of the world, and we need to bring it to the point [where it can handle] value-based care.”
At a time when both providers and payers are struggling with the harsh realities of moving complex and regulated legacy systems to the cloud, the approach the USDS is taking with the Medicare healthcare data center could prove a useful guide.
Founded in 2014, the USDS is a self-described “startup at the White House” and is made up of 150 to 200 private sector technologists who serve terms ranging from 18 months to four years. The group has tackled tech transformations at healthcare.gov, vets.gov and the IRS, among many others. Earlier this year, the USDS rolled out Blue Button 2.0, an API designed to make Medicare claims interoperable. To date, Blue Button 2.0 is in production in 10 applications, and over 1,100 developers are experimenting with it.
Project involves 10 million lines of code
Now the team has turned its attention to practically the mother of all healthcare data center challenges: a computer that runs 8 million lines of COBOL and close to 2 million lines of a customized version of assembly language. In other words, it's complicated, said Scott Haselton, a digital service expert at the USDS and the lead engineer on this project.
“Medicare pays its claims out of a series of four separate systems,” he explained. “Medicare asked us to take a look at one of the oldest, and it has a lot of issues around general maintenance due to its age, rigor and fragility.” And even if age wasn’t a factor, a mainframe isn’t the best choice for the move to value-based care, Haselton said. “Taking into account we’ve got budding value-based payments coming out of CMS, if you do a lot of [modernization], it decreases the risk you won’t pull everything off.”
So, the move to the cloud has begun, but slowly. “Our approach is to take very slow, iterative steps,” Haselton said. “CMS has tried modernization approaches in the past that were a ‘big bang,’ where you turn off one system and turn on another and hopefully everything works. Our approach this time is to take small steps where we have small deltas between changes and we can feel good about the management and security and sanity.”
To get started, Haselton and his team needed a small, low-risk piece of code that wouldn’t bring the entire healthcare data center down if things went badly. They chose to use a module that handles certain Medicare payments to inpatient rehab centers because the code was little more than a mathematical equation that didn’t require calls to a database or anything else complicated. “This is a well-defined, very stable and very simplistic piece of code,” he said, which made it easier to debug.
Sandbox environment lets developers verify
So far the strategy is working. The USDS team set up a sandbox, which is a software developer "safe zone" for experimentation, and installed a cloud environment alongside the mainframe in the healthcare data center. A commercial tool translated the COBOL into the more mainstream Java programming language and then refactored the module to function like a modern application would. After that transition, the mainframe was able to make the call to the API in the cloud, so the next step is to move this out of the sandbox and into production, Haselton said.
But the pace will remain deliberate. “We’re going to look for more modules that are self contained and work with those until we have a proof of concept,” he said. “Once we’ve done that a couple of times, we’re going to look for bigger and uglier pieces of code.”
Haselton didn’t have an estimate for when the transition will be complete. “The big thing is really trying to establish a process around how to make changes and create a playbook. The goal is five to 10 years down the line, CMS will be managing this process with five to 10 years of history on the changes they can refer to.”
I have 2x eMachines ER1401 for sale. They are in good condition, with a few scratches on the case due to their age. Everything is working fine. They come with the power supply only.
1.3 GHz AMD Athlon II Neo Processor K325
nVidia nForce 9200 Chipset
250GB 5400rpm SATA hard drive
Multi-in-One Digital Media Card Reader: MultiMediaCard, Secure Digital Card, Memory Stick, xD-Picture Card
10/100/1000 Gigabit Ethernet LAN (RJ-45 port), integrated 802.11b/g/n wireless
Will sort pictures out if there is any interest in them. Thanks for looking.
Price and currency: £40 each
Delivery: Delivery cost is included within my country
Payment method: PPG or BT
Location: Liverpool
Advertised elsewhere?: advertised elsewhere
Prefer goods collected?: I have no preference