Tag Archives: progress

Microsoft Redmond campus modernization – Construction update

Since we kicked off demolition in January 2019, there has been great progress on the Redmond campus modernization project. Check out the timelapse below to see some of the work that has been done thus far.

Credit: Skycatch

Here are some other fun facts about the construction efforts:

  • The square footage of the building demolition on east campus is equivalent to the combined area of thirty NFL football fields.
  • Concrete from the demolition would be enough to build 1.3 Empire State Buildings. One hundred percent of the concrete is being recycled, and some of it will come back to the site for use in the new campus.
  • We’ve recycled a variety of materials from the deconstructed buildings including carpets, ceiling tiles, outdoor lights and turf from the sports fields. As a result, we have diverted almost 95 percent of our demolition waste away from landfills.
  • The resources recycled from the demolition thus far include 449,697 pounds (50 truckloads) of carpet and 284,400 pounds of ceiling tiles.
  • Most of the furniture removed from the demolished buildings that will not be reused in other buildings has been donated to local charities and nonprofit startups.
  • We’ve moved 1 million cubic yards of dirt and reached the bottom of the digging area for our underground parking facility, which will consolidate traffic and make our campus even more pedestrian and bike friendly.
  • We’ve installed 51,000 feet of fiber optic cabling, which is just over 9.5 miles.
  • The Microsoft Art Program has relocated 277 art pieces, including an early Chihuly and a Ken Bortolazzo sculpture. These art pieces were placed across our Puget Sound buildings so they can continue to be enjoyed by employees and guests.
  • The drone video featured above, created by Skycatch, not only offers a unique view of the project; its images also feed into 3D models of the site that provide data to tackle challenges as they arise, plan ahead and monitor construction progress.
  • The project is actively coordinating over 100 different building information models containing over 2.8 million individual 3D building components.

We look forward to continuing this journey to modernize Microsoft’s workplaces. When completed, the project will bring collaborating teams closer together and provide an inspiring, healthy and sustainable workplace where our employees can do their best work and grow their careers.

Continued thanks for your patience and flexibility during the construction phase. As a reminder, please allow extra time to get around campus and remind visitors to do the same. Always be cautious around the construction sites and remain mindful of safety notices and instructions.

Follow updates and developments as this project progresses and view the latest renderings on Microsoft’s Modern Campus site.

Go to Original Article
Author: Microsoft News Center

Machine intelligence could benefit from child’s play

CAMBRIDGE — Current progress in machine intelligence is newsworthy, but it’s often discussed out of context. MIT’s Josh Tenenbaum described it this way: Advances in deep learning are powering machines to accurately recognize patterns, but human intelligence is not just about pattern recognition; it’s about modeling the world.

By that, Tenenbaum, a professor of cognitive science and computation, was referring to abilities humans possess, such as understanding what they see, imagining what they haven’t seen, solving problems, planning, and building new models as they learn.

That’s why, in the interest of advancing AI even further, Tenenbaum is turning to the best source of information on how humans build models of the world: children.

“Imagine if we could build a machine that grows into intelligence the way a person does — that starts like a baby and learns like a child,” he said during his presentation at EmTech 2018, an emerging technology conference hosted by MIT Technology Review.

Tenenbaum called the project a “moonshot,” one of several that researchers at MIT are exploring as part of the university’s new MIT Quest for Intelligence initiative to advance the understanding of human and machine intelligence. The “learning moonshot” is a collaborative effort by MIT colleagues, including AI experts, as well as those in early childhood development and neuroscience. The hope is to use how children learn as a blueprint to build a machine intelligence that’s truly capable of learning, Tenenbaum said.

The “quest,” as it’s aptly labeled, won’t be easy, partly because researchers don’t have a firm understanding of how learning happens, according to Tenenbaum. In the 1950s, Alan Turing, father of the Turing test for analyzing machine intelligence, presumed a child’s brain was simpler than an adult’s and akin to a new notebook full of blank pages.

“We’ve now learned that Turing was brilliant, but he got this one wrong,” Tenenbaum said. “And many AI researchers have gotten this wrong.”

Child’s play is serious business

Instead, research such as that done by Tenenbaum’s colleagues suggests that newborns are already programmed to see and understand the world in terms of people, places and things — not just patterns and pixels. It also suggests that children aren’t passive learners, but instead actively experiment with, interact with and explore the world around them.

Imagine if we could build a machine that grows into intelligence the way a person does — that starts like a baby and learns like a child.
Josh Tenenbaum, professor of cognitive science and computation, MIT

“Just like science is playing around in the lab, children’s play is serious business,” he said. “And children’s play may be what makes human beings the smartest learners in the known universe.”

Tenenbaum described his job as identifying insights like these and translating them into engineering terms. Take common sense, for example. Kids are capable of stacking cups or blocks without a single physics lesson. They can observe an action they’ve never seen before and yet understand the desired outcome and how to help achieve that outcome.

In an effort to codify common sense, Tenenbaum and his team are working with new kinds of AI programming languages that leverage the pattern recognition advances of neural networks, as well as concepts that don’t fit neatly into neural networks. One example is probabilistic inference, which enables machines to use prior events to predict future events.
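To make probabilistic inference concrete, here is a minimal sketch of using prior events to predict a future one with a simple Bayesian update. It is not drawn from Tenenbaum’s systems; the counting model, the Beta prior and every name in it are illustrative assumptions.

```python
# Minimal illustration of probabilistic inference: use prior events to
# predict a future event with a Beta-Bernoulli model. The scenario and
# the numbers are illustrative, not taken from the article.

def posterior_predictive(successes: int, failures: int,
                         prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Probability that the next trial succeeds, given past outcomes.

    With a Beta(prior_a, prior_b) prior on the success rate and the
    observed counts, the posterior is Beta(prior_a + successes,
    prior_b + failures), whose mean is the predictive probability.
    """
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

if __name__ == "__main__":
    # A block tower stayed up in 7 of 10 past stacking attempts;
    # predict the chance the next attempt stays up.
    print(f"P(next success) = {posterior_predictive(7, 3):.2f}")  # ~0.67
```

The pattern of maintaining beliefs as probabilities and updating them with each new observation is what lets a model reason sensibly about events it has seen only a few times.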

Game engines open window into learning

Tenenbaum’s team is also using game engines, which simulate a player’s experience in real time in a virtual world. Common game engine components include the graphics engine, which renders 2D and 3D images, and the physics engine, which transposes the laws of physics from the real world to the virtual one. “We think they provide first approximations to the kinds of basic commonsense knowledge representation that are built into even the youngest children’s brains,” he said.

He said the game engines, coupled with probabilistic programming, capture data that helps researchers understand what a baby knows at 10 months or a year old, but the question remains: How does a baby learn how to build engines like these?
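One way to picture the coupling of a physics engine with probabilistic programming is the toy sketch below: a crude stability rule plays the role of the physics engine, and sampling over perceptual noise supplies the probabilistic part. The rule, thresholds and names are invented for illustration and are far simpler than anything Tenenbaum’s group actually uses.

```python
import random

# Toy "intuitive physics" sketch: combine a crude physics rule with
# sampling over perceptual noise to judge whether a stacked block will
# fall. All numbers and names are illustrative.

def block_falls(observed_offset: float, half_width: float = 0.5,
                noise_sd: float = 0.1) -> bool:
    """One simulated outcome: the block tips if its true center of mass
    (observed offset plus perceptual noise) lies past the support edge."""
    true_offset = observed_offset + random.gauss(0.0, noise_sd)
    return abs(true_offset) > half_width

def probability_of_falling(observed_offset: float, samples: int = 10_000) -> float:
    """Monte Carlo estimate: run the toy physics step many times under
    different noise draws and count how often the block falls."""
    falls = sum(block_falls(observed_offset) for _ in range(samples))
    return falls / samples

if __name__ == "__main__":
    for offset in (0.1, 0.45, 0.6):
        print(f"offset={offset:.2f}  P(fall) ~ {probability_of_falling(offset):.2f}")
```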

“Evolution might have given us something kind of like game engine programs, but then learning for a baby is learning to program the game engine to capture the program of their life,” he said. “That means learning algorithms have to be programming algorithms — a program that learns programs.”

Tenenbaum called this “the hard problem of learning.” To solve it, he’s focused on the easier problem of how people acquire simple visual concepts such as learning a character from a new alphabet without needing to see it a thousand times. Using Bayesian program learning, a machine learning method, researchers have been able to program machines to see an output, such as a character, and deduce how the output was created from one example.
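The published Bayesian program learning model represents handwritten characters as generative stroke programs and searches over them; the toy sketch below only gestures at that idea under much simpler assumptions. Each candidate “program” is just a fixed stroke template, and a single example is classified by which template most probably generated it. The templates, noise model and uniform prior are all invented for illustration.

```python
import math

# One-shot classification in the spirit of Bayesian program learning:
# each candidate "program" is a crude stroke template (a list of 2D
# points), and a single new example is scored by how likely each
# program is to have generated it. Templates and noise model are
# illustrative stand-ins for the real generative stroke model.

TEMPLATES = {
    "vertical_bar":   [(0.5, 0.0), (0.5, 0.5), (0.5, 1.0)],
    "diagonal":       [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)],
    "horizontal_bar": [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)],
}

def log_likelihood(example, template, noise_sd=0.1):
    """Log P(example | program): Gaussian noise around each template point."""
    total = 0.0
    for (ex, ey), (tx, ty) in zip(example, template):
        d2 = (ex - tx) ** 2 + (ey - ty) ** 2
        total += -d2 / (2 * noise_sd ** 2)
    return total

def classify_one_shot(example):
    """Posterior over candidate programs from a single example (uniform prior)."""
    scores = {name: log_likelihood(example, t) for name, t in TEMPLATES.items()}
    norm = math.log(sum(math.exp(s) for s in scores.values()))
    return {name: math.exp(s - norm) for name, s in scores.items()}

if __name__ == "__main__":
    # A slightly wobbly vertical stroke, seen exactly once.
    new_character = [(0.52, 0.02), (0.47, 0.51), (0.55, 0.98)]
    for name, p in classify_one_shot(new_character).items():
        print(f"{name}: {p:.3f}")
```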

It’s an admittedly small step in the larger landscape of machine intelligence, Tenenbaum said. “But we also know from history that even small steps toward this goal can be big.”

MEF targets multivendor interoperability for network services

MEF this week touted its progress in multivendor interoperability by announcing its active software-defined WAN implementation project. Three SD-WAN vendors — Riverbed Technology, Nuage Networks from Nokia and VMware’s VeloCloud — are leading the MEF project, focusing on multivendor SD-WAN use cases. Software development services provider Amartus is also participating with the SD-WAN vendors.

MEF — a Los Angeles-based association with more than 200 members — launched its multivendor SD-WAN implementation project last year in an attempt to standardize services across multiple providers and technologies. But multivendor interoperability has numerous aspects, according to Joe Ruffles, global standards architect at San Francisco-based Riverbed and co-leader of the SD-WAN implementation project. Companies merge, need to partner with somebody to increase geographic reach, or simply want basic interoperability and service chaining, he said.

The implementation project lets member vendors get their hands dirty by actively testing and working through proposed SD-WAN interoperability issues, Ruffles said. Each vendor uses MEF’s cloud-based dev-test platform, MEFnet, to develop its respective SD-WAN technology. They then interconnect and orchestrate those SD-WAN implementations using MEF’s Presto API, which is part of MEF’s Lifecycle Service Orchestration (LSO) framework.

The Presto API communicates with orchestration and management to help service providers manage multiple SD-WAN implementations with a single orchestrator. Additionally, it helps create better multivendor interoperability among SD-WAN controllers and edge devices, according to Ralph Santitoro, head of SDN, network functions virtualization and SD-WAN at Fujitsu and MEF distinguished fellow.
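To show the shape of that single-orchestrator idea, here is a hypothetical sketch of one piece of orchestration code fanning the same abstract service request out to several vendors’ controllers. The endpoint paths, payload fields and controller URLs are invented placeholders, not MEF’s actual Presto interface definitions.

```python
import requests

# Hypothetical sketch of a single orchestrator driving multiple SD-WAN
# controllers through one northbound API. The endpoint paths, payload
# fields and controller URLs below are invented placeholders, not MEF's
# published Presto interface.

CONTROLLERS = {
    "vendor-a": "https://sdwan-a.example.net/api",
    "vendor-b": "https://sdwan-b.example.net/api",
}

def create_service(site_a: str, site_b: str, bandwidth_mbps: int) -> None:
    """Ask each vendor's controller to provision its leg of one service,
    using the same abstract service description for every vendor."""
    service_request = {  # hypothetical, vendor-neutral payload
        "endpoints": [site_a, site_b],
        "bandwidthMbps": bandwidth_mbps,
        "serviceType": "sd-wan-virtual-connection",
    }
    for vendor, base_url in CONTROLLERS.items():
        resp = requests.post(f"{base_url}/services", json=service_request,
                             timeout=10)
        resp.raise_for_status()
        print(f"{vendor}: service {resp.json().get('id')} created")

if __name__ == "__main__":
    create_service("branch-seattle", "dc-chicago", bandwidth_mbps=100)
```

The point is only the pattern: one vendor-neutral request with many vendor-specific controllers behind it, which is the kind of interoperability the Presto API is meant to standardize.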

“Member companies can get together and connect their appliances or run software in the environment and actually do things,” Santitoro said. “They can actually prove out different topics or items that are important to them or the industry.”

Other MEF members can build from the existing SD-WAN implementation project or suggest additional projects and issues, Ruffles said. “It’s not so much a phase as it is continuous, depending on who has an issue and who’s available to work on it,” he added.

Standardized specs lead to better automation processes

The SD-WAN implementation project work benefits more than its current participants, according to Santitoro. By “playing in the sandbox,” members can feed the knowledge learned from the testing environment into MEF’s work on SD-WAN specifications. For example, participants can more accurately define SD-WAN requirements, capabilities, architecture and what’s needed for multivendor interoperability.

“We learn by hand what has to be done, and then we use that information to make changes or additions to the API,” Ruffles said.

In addition to the SD-WAN specs, MEF this week published specs for retail and wholesale Ethernet services, subscriber and operator Layer 1 services, and IP services. These services — especially IP services — have historically been defined in various ways, Santitoro said, which can impede automation. To combat the discrepancies, MEF is defining the fundamentals of IP services and their attributes, which will then help define and build broader services.

“We’ll create things like the VPN [virtual private network] service, internet access service, private cloud access service and operator wholesale services — particularly the IP-VPN case,” said David Ball, MEF’s services committee co-chair and editor of the MEF IP services project.

These definitions and specs will then be fed into MEF’s LSO architecture to help establish a standard vocabulary, so SD-WAN buyers and sellers understand what they need or get with particular services, Santitoro said. Further, defining services and their requirements helps create standardized processes for orchestration and automation, he added.

“Automation is really about consistency and being able to create a model of a service, so services are deployed, designed and implemented in a similar fashion,” he said.
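As a rough illustration of what “creating a model of a service” can look like in practice, the sketch below defines a small, vendor-neutral service description and validates it before any orchestration happens. The class and attribute names are invented placeholders, not MEF’s published service attributes.

```python
from dataclasses import dataclass, field

# Illustrative, vendor-neutral model of an IP-VPN service. The field
# names are invented placeholders, not MEF's published service
# attributes; the point is that a shared model lets every provider
# describe, validate and deploy the "same" service in the same way.

@dataclass
class ServiceEndpoint:
    site_id: str
    access_bandwidth_mbps: int

@dataclass
class IpVpnService:
    name: str
    endpoints: list = field(default_factory=list)
    class_of_service: str = "best-effort"

    def validate(self) -> None:
        """Reject obviously inconsistent descriptions before orchestration."""
        if len(self.endpoints) < 2:
            raise ValueError("An IP-VPN service needs at least two endpoints")
        if any(e.access_bandwidth_mbps <= 0 for e in self.endpoints):
            raise ValueError("Endpoint bandwidth must be positive")

if __name__ == "__main__":
    svc = IpVpnService(
        name="hq-to-branch",
        endpoints=[ServiceEndpoint("hq-london", 500),
                   ServiceEndpoint("branch-paris", 100)],
        class_of_service="business-critical",
    )
    svc.validate()
    print(f"{svc.name}: {len(svc.endpoints)} endpoints, CoS={svc.class_of_service}")
```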

Server Core management remains a challenge for some

Server Core introduced a number of benefits to IT, but certain hurdles have stymied its progress in the enterprise.

Microsoft unveiled Server Core with Windows Server 2008. It wasn’t a new operating system, but a slimmed-down version of the full server OS. Server Core removed the GUI but kept the infrastructure functionality. This reduced the codebase and brought several advantages: a smaller attack surface, fewer patches, quicker installs and greater reliability.

But the lack of a GUI also made Server Core management a challenge. The absence of a traditional Windows interface took away the comfort level for the admin when it came to deployments and overall use of the operating system.

Administrators missed the interface because, while using the command line might not have been a complete mystery, using it to manage every aspect of the OS was new. A strong focus on PowerShell to control the OS caused further discomfort for many in IT. This new language arrived alongside Server Core, and together they left many admins feeling unwelcome in this new world.

Server Core management with PowerShell and management with the command prompt are two very different things. Beyond the syntax, command prompt scripting is linear, while PowerShell is an object-oriented language. The command prompt has been around a lot longer, but it has not kept up with the features and functionality in newer Windows operating systems. Microsoft expanded on scripting after MS-DOS with VBScript (Visual Basic Scripting Edition), but that introduced security issues from VBScript-based viruses. Microsoft developed PowerShell to provide extensive functionality with fewer security liabilities. PowerShell has cmdlets tightly integrated with Microsoft’s newest operating systems for both basic and advanced functionality — something MS-DOS and VBScript lacked.

Microsoft aids learning efforts

PowerShell is the predominant command-line language for Windows. The legacy command prompt still exists but has seen few updates to its core. Microsoft helped establish this course in later versions of Windows Server: many of the traditional server configuration wizards can produce the PowerShell code for the actions the administrator executes from the GUI. This capability changed the game for administrators with limited programming experience or little time to learn PowerShell scripting. Rather than write scripts from scratch, IT pros could take the automatically generated code and adapt it to work on other servers. This was a step up from taking code examples from the internet that only worked under very specific conditions or in specific environments.

Microsoft helped spur Server Core adoption by improving remote management in later server OS versions through its Server Manager console. While Windows Server has always offered some level of remote management, the much stronger focus in Windows Server 2012 and beyond meant the admin could use a single GUI-based server to handle Server Core management for dozens — or even hundreds — of installations of this minimal operating system over the network. This kept the GUI aspect admins were familiar with while allowing the enterprise to take advantage of more Server Core deployments. Admins did not get the full benefits of PowerShell and other automation tools, but this move helped them get started with Server Core.

When administrators start with Server Core, it’s helpful to take the long-term view: How far do you want to go with it? Some companies that implement Server Core will be content to use remote management, but PowerShell will unlock the full potential of this server OS deployment.

Admins new to PowerShell will have a bit of a learning curve to overcome, but a few things can help. Utilities such as Notepad++ make editing PowerShell code easier with syntax highlighting. Another option is Microsoft’s PowerShell Integrated Scripting Environment (ISE), which can run code blocks and commands and helps debug issues in a context-sensitive environment.

Server Core should only grow in popularity. Microsoft runs workloads on its new Azure Stack on Server Core, and administrators should consider it for the reduced patching workload alone.

In Windows Server 2016, the default installation is Server Core, and administrators need to manually select a different option to get the full Desktop Experience setup. Windows Server 2016 also removed the ability to add a desktop onto a Server Core installation after deployment.

With the enhancements to remote management, the future is clear for the Microsoft server OS — and it’s without a GUI.