The post Microsoft and PowerSchool partner to reshape the future of K-12 education appeared first on Stories.
Microsoft is making yet another push into the classroom. But this time, it’s using the buddy system, today announcing a high-profile partnership with education technology standard-bearer PowerSchool.
PowerSchool offers a prominent cloud-based K-12 student information system for school districts that handles administrative data, from student attendance to grades. More recently, PowerSchool has added its Unified Classroom product to the mix, which layers on classroom management plus learning and assessment tools behind a dashboard for teachers, students and parents.
As part of the new partnership, Microsoft says its Office 365 products will be embedded into Unified Classroom, including OneDrive, OneNote, Word, Excel, PowerPoint and OneNote Class Notebook. At the same time, PowerSchool will emphasize its use of Microsoft’s Azure cloud infrastructure to deliver its products to schools and districts.
This isn’t the first time the two companies have worked together, but it’s been at a less-formal level for light integration of PowerSchool’s products with Microsoft’s OneNote and School Data Sync. A Microsoft spokesperson tells GeekWire that some of PowerSchool’s products, notably those PowerSchool picked up through acquisition, already use Azure. But the new, official partnership appears to deepen the relationship across more products and services.
For its part, PowerSchool is no startup. It marked 20 years in business last August, and was once owned by Apple and later by education publisher Pearson. In 2015 it was acquired by Vista Equity Partners; it has since made several acquisitions of its own and now claims its products reach 30 million students in North America.
Microsoft’s new partnership with PowerSchool could conceivably give both a stronger competitive position against Google, which is wildly popular in schools with its free G Suite for Education productivity and communications products and Google Classroom management tool.
If you’re handed a note that asks you to draw a picture of a bird with a yellow body, black wings and a short beak, chances are you’ll start with a rough outline of a bird, then glance back at the note, see the yellow part and reach for a yellow pen to fill in the body, read the note again and reach for a black pen to draw the wings and, after a final check, shorten the beak and define it with a reflective glint. Then, for good measure, you might sketch a tree branch where the bird rests.
Now, there’s a bot that can do that, too.
The new artificial intelligence technology under development in Microsoft’s research labs is programmed to pay close attention to individual words when generating images from caption-like text descriptions. This deliberate focus produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation, according to results on an industry standard test reported in a research paper posted on arXiv.org.
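The "nearly three-fold boost in image quality" refers to an automated benchmark; text-to-image papers of this era typically report the inception score, which rewards generated images that a pretrained classifier finds both confidently classifiable and diverse. Below is a minimal numpy sketch of that metric, assuming you already have the classifier's class-probability outputs for each generated image (the function name and the toy inputs are illustrative, not from the paper):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception score: exp(mean KL(p(y|x) || p(y))).

    probs: (N, C) array of class probabilities for N generated images,
    e.g. softmax outputs of a pretrained classifier.
    """
    probs = np.asarray(probs, dtype=float)
    marginal = probs.mean(axis=0)  # p(y): the average prediction
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Sharp, diverse predictions score high; ambiguous ones score near 1.
diverse = np.eye(4)              # each image confidently a different class
uniform = np.full((4, 4), 0.25)  # every image is ambiguous
```

With four classes, the confidently diverse batch scores 4.0 and the ambiguous batch scores 1.0, which is why a higher score is read as "better image quality."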
The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus. Each image contains details that are absent from the text descriptions, indicating that this artificial intelligence contains an artificial imagination.
“If you go to Bing and you search for a bird, you get a bird picture. But here, the pictures are created by the computer, pixel by pixel, from scratch,” said Xiaodong He, a principal researcher and research manager in the Deep Learning Technology Center at Microsoft’s research lab in Redmond, Washington. “These birds may not exist in the real world — they are just an aspect of our computer’s imagination of birds.”
The drawing bot closes a research circle around the intersection of computer vision and natural language processing that He and colleagues have explored for the past half-decade. They started with technology that automatically writes photo captions – the CaptionBot – and then moved to a technology that answers questions humans ask about images, such as the location or attributes of objects, which can be especially helpful for blind people.
These research efforts require training machine learning models to identify objects, interpret actions and converse in natural language.
“Now we want to use the text to generate the image,” said Qiuyuan Huang, a postdoctoral researcher in He’s group and a paper co-author. “So, it is a cycle.”
Image generation is a more challenging task than image captioning, added Pengchuan Zhang, an associate researcher on the team, because the process requires the drawing bot to imagine details that are not contained in the caption. “That means you need your machine learning algorithms running your artificial intelligence to imagine some missing parts of the images,” he said.
Attentive image generation
At the core of Microsoft’s drawing bot is a technology known as a Generative Adversarial Network, or GAN. The network consists of two machine learning models, one that generates images from text descriptions and another, known as a discriminator, that uses text descriptions to judge the authenticity of generated images. The generator attempts to get fake pictures past the discriminator; the discriminator never wants to be fooled. In this adversarial back-and-forth, the discriminator pushes the generator toward perfection.
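The two-model setup can be made concrete with a toy sketch of the adversarial losses. This is a minimal illustration, not the paper's model: the generator and discriminator here are hypothetical one-parameter functions, and the text conditioning described above is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-ins for the two models: the "generator" maps noise
# to a fake sample; the "discriminator" maps a sample to the probability
# that it is real.
def generator(noise, w):
    return w * noise

def discriminator(x, v):
    return sigmoid(v * x)

def d_loss(real, fake, v, eps=1e-12):
    # Discriminator wants real samples scored near 1 and fakes near 0.
    return (-np.mean(np.log(discriminator(real, v) + eps))
            - np.mean(np.log(1.0 - discriminator(fake, v) + eps)))

def g_loss(fake, v, eps=1e-12):
    # Generator wants its fakes classified as real.
    return -np.mean(np.log(discriminator(fake, v) + eps))

# Example: fakes the discriminator confidently rejects cost the
# generator much more than fakes it accepts.
fake = generator(rng.normal(size=8), w=2.0)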
Microsoft’s drawing bot was trained on datasets that contain paired images and captions, which allow the models to learn how to match words to the visual representation of those words. The GAN, for example, learns to generate an image of a bird when a caption says bird and, likewise, learns what a picture of a bird should look like. “That is a fundamental reason why we believe a machine can learn,” said He.
GANs work well when generating images from simple text descriptions such as a blue bird or an evergreen tree, but the quality stagnates with more complex text descriptions such as a bird with a green crown, yellow wings and a red belly. That’s because the entire sentence serves as a single input to the generator, so the detailed information of the description is lost. As a result, the generated image is a blurry greenish-yellowish-reddish bird instead of a close, sharp match with the description.
As humans draw, we repeatedly refer to the text and pay close attention to the words that describe the region of the image we are drawing. To capture this human trait, the researchers created what they call an attentional GAN, or AttnGAN, that mathematically represents the human concept of attention. It does this by breaking up the input text into individual words and matching those words to specific regions of the image.
“Attention is a human concept; we use math to make attention computational,” explained He.
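The word-to-region matching can be sketched as a standard attention computation. This is an illustrative simplification of the idea, with hypothetical shapes: each image region attends over the caption's word embeddings and receives a word-weighted context vector that guides what gets drawn there:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def word_region_attention(word_emb, region_feats):
    """Attend each image region over the caption's words.

    word_emb:     (W, D) one embedding per word in the caption
    region_feats: (R, D) one feature vector per image region
    Returns (R, D): a word-weighted context vector per region.
    """
    scores = region_feats @ word_emb.T  # (R, W) region-word similarity
    weights = softmax(scores, axis=1)   # attention weights over words
    return weights @ word_emb           # (R, D) context vectors
```

When a region's features align strongly with one word, that word dominates the region's context vector, which is the mechanism that lets "yellow wings" influence only the wing region rather than the whole bird.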
The model also learns what humans call commonsense from the training data, and it pulls on this learned notion to fill in details of images that are left to the imagination. For example, since many images of birds in the training data show birds sitting on tree branches, the AttnGAN usually draws birds sitting on branches unless the text specifies otherwise.
“From the data, the machine learning algorithm learns this commonsense where the bird should belong,” said Zhang. As a test, the team fed the drawing bot captions for absurd images, such as “a red double-decker bus is floating on a lake.” It generated a blurry, drippy image that resembles both a boat with two decks and a double-decker bus on a lake surrounded by mountains. The image suggests the bot had an internal struggle between knowing that boats float on lakes and the text’s specification of a bus.
“We can control what we describe and see how the machine reacts,” explained He. “We can poke and test what the machine learned. The machine has some background learned commonsense, but it can still follow what you ask and maybe, sometimes, it seems a bit ridiculous.”
Text-to-image generation technology could find practical applications acting as a sort of sketch assistant to painters and interior designers, or as a tool for voice-activated photo refinement. With more computing power, He imagines the technology could generate animated films based on screenplays, augmenting the work that animated filmmakers do by removing some of the manual labor involved.
For now, the technology is imperfect. Close examination of images almost always reveals flaws, such as birds with blue beaks instead of black and fruit stands with mutant bananas. These flaws are a clear indication that a computer, not a human, created the images. Nevertheless, the quality of the AttnGAN images is a nearly three-fold improvement over the previous best-in-class GAN and serves as a milestone on the road toward a generic, human-like intelligence that augments human capabilities, according to He.
“For AI and humans to live in the same world, they have to have a way to interact with each other,” explained He. “And language and vision are the two most important modalities for humans and machines to interact with each other.”
In addition to Xiaodong He, Pengchuan Zhang and Qiuyuan Huang at Microsoft, collaborators include former Microsoft interns Tao Xu from Lehigh University and Zhe Gan from Duke University; and Han Zhang from Rutgers University and Xiaolei Huang from Lehigh University.
John Roach writes about Microsoft research and innovation. Follow him on Twitter.
(Reuters) – Thomson Reuters Corp (TRI.TO) on Wednesday published its debut “Top 100 Global Technology Leaders” list with Microsoft Corp (MSFT.O) in the No. 1 spot, followed by chipmaker Intel Corp (INTC.O) and network gear maker Cisco Systems Inc (CSCO.O).
The list, which aims to identify the industry’s most financially successful and organizationally sound companies, features U.S. tech giants such as Apple Inc (AAPL.O), Alphabet Inc (GOOGL.O), International Business Machines Corp (IBM.N) and Texas Instruments Inc (TXN.O) among its top 10.
Microchip maker Taiwan Semiconductor Manufacturing (2330.TW), German business software giant SAP (SAPG.DE) and Dublin-based consultant Accenture (ACN.N) round out the top 10.
The remaining 90 companies are not ranked, but the list also includes the world’s largest online retailer Amazon.com Inc (AMZN.O) and social media giant Facebook Inc (FB.O). (bit.ly/2B8eowE)
The results are based on a 28-factor algorithm that measures performance across eight benchmarks: financial, management and investor confidence, risk and resilience, legal compliance, innovation, people and social responsibility, environmental impact, and reputation.
The assessment tracks patent activity for technological innovation and sentiment in news and selected social media as the reflection of a company’s public reputation.
The set of tech companies is restricted to those that have at least $1 billion in annual revenue.
According to the list, 45 percent of these 100 tech companies are headquartered in the United States. Japan and Taiwan are tied for second place with 13 companies each, followed by India with five tech leaders on the list.
By continent, North America leads with 47, followed by Asia with 38, Europe with 14 and Australia with one.
The strength of Asia highlights the growth of companies such as Tencent Holdings Ltd (0700.HK), which became the first Asian firm to enter the club of companies worth more than $500 billion, and surpassed Facebook in market value in November.
Reuters is the news and media division of Thomson Reuters, which produced the list.
Reporting by Sonam Rai in Bengaluru, editing by Peter Henderson
Since its release in 2016, millions of players have been exploring the world of Forza Horizon 3, the highest-rated Xbox One exclusive, on Xbox One and Windows 10 PCs. We are pleased to announce that Xbox One X enhancements arrive today for Forza Horizon 3, as a free download for players with an Xbox One X.
Forza Horizon 3 on Xbox One X is powered by the same state-of-the-art ForzaTech engine at the heart of the Forza franchise, also used to develop Forza Motorsport 7 for Xbox One X, which brought native 4K resolution racing to the new platform. Forza Horizon 3 enhancements let players experience the game in native 4K along with a host of additional visual updates, including improved car reflections and shadow resolutions, improved texture detail for road and terrain surfaces, and more. In addition, 4K resolution enhancements will be fully compatible with both the Blizzard Mountain and Hot Wheels expansions for Forza Horizon 3. Whether you’re careening across the dunes of the Outback in your favorite off-roader or building up a legion of fans with death-defying stunt driving in the rainforest, the 4K-enhanced version of Forza Horizon 3 is a thrilling blend of fantastic gameplay and cutting-edge visuals.
As part of Xbox’s ongoing Inside Xbox One X Enhanced series, I chatted with Playground Games’ creative director and co-founder Ralph Fulton for insight into the studio’s work on the enhancements. Enjoy!
What specifically has your team done to enhance Forza Horizon 3 for Xbox One X?
First and foremost, this update enables Forza Horizon 3 to run in native 4K (3840×2160) on a console for the very first time. Forza Horizon 3 has always been a fantastic-looking game, but the clarity and detail of native 4K resolution really brings the vast playground of Australia to life like never before. In addition, we’ve made a number of graphical improvements to the game, such as increased shadow resolution, improved visual effects and increased level-of-detail (LOD) and draw distances, which take advantage of the power of the Xbox One X.
How do these enhancements impact the gaming experience?
These enhancements are all about the visuals. We know that our fans really value great image quality, so we’ve taken this opportunity to deliver that to them with this update on Xbox One X. On top of the obvious enhancement to native 4K, there are a number of other improvements we’ve made which really take advantage of the added definition 4K brings. Reflections are sharper and clearer, environment shadows are crisper and better defined, the quality of motion blur has been increased to make the driving experience significantly smoother, and better anisotropic filtering improves the detail visible in environment textures, particularly on the roads themselves. For me, the biggest improvement is in the combination of 4K and HDR though, especially in Forza Horizon 3‘s dynamic time-lapse skies. The sky is such a huge part of nearly every scene in the game that it affects the feel of the game a great deal, and the improvements we’ve made to reflections and shadows really complement it.
Why did your development team choose to focus on these enhancement areas?
We’ve been positively overwhelmed by the positive feedback we’ve had from the community over the last year about the quality of the gameplay in Forza Horizon 3 and the amount of fun they have with it, and we feel really proud that we’ve been able to keep that going with the high-quality Blizzard Mountain and Hot Wheels expansions, car packs and weekly Forzathon events. This was our opportunity to bring the game to native 4K on console for the first time and make a great-looking game look even better.
How do you expect Forza Horizon 3 fans will respond to seeing and playing it on Xbox One X with these enhancements?
I feel like we’re at the crest of a wave in the transition to 4K – more and more people are trading up to 4K TVs and want native 4K experiences to show what their new TV can do. That was exactly my experience when I upgraded recently, and I’ve really enjoyed getting back into some of my favourite games, like The Witcher 3 for example, as they’ve released their enhanced versions. I hope that will be the case for Forza Horizon 3 fans as well. When I’ve been playing with the update in the studio, I’ve been really blown away by the beauty of the world, like I’m seeing it for the first time, and I hope Horizon fans feel the same way too. One of my favourite things about Forza is the incredible photography which comes out of the community. I follow a bunch of Forza photography accounts on Twitter, where the creativity of Forza fans always blows me away, so I’m really looking forward to seeing the community photographs which come out of the enhanced version.
How has the process been to get the game up and running on Xbox One X?
It’s been incredibly straightforward. It took us less than a day from receiving our first Xbox One X kit to get Forza Horizon 3 up and running in 4K, and when we did we still had a lot of spare headroom on both the CPU and the GPU. The Xbox One Development Kit for the Xbox One X is by far the most developer-friendly and powerful dev kit we’ve ever worked with, which makes it a pleasure to develop for.
What enhancement were you most excited about to explore leveraging for Forza Horizon 3 on Xbox One X?
Our goal with this update was to bring Forza Horizon 3 in 4K to console for the first time, and that’s what we’ve delivered. As I mentioned earlier, as one of the first titles which featured HDR on the Xbox One S, we’ve been really excited by the visual combination of native 4K and HDR technology – it is a real, evolutionary leap in graphics.
What do 4K and HDR mean for your game, games in the future and development at your studio?
We know our players value incredible visuals, effects and image quality, and we put a huge amount of effort into delivering them with every title. Native 4K resolution, especially when combined with HDR, is a huge step forward in visual fidelity and as mass adoption of 4K displays continues it will become the standard by which game visuals are judged. For us, as a studio, the power of the Xbox One X, as well as its ease of development and straightforward compatibility across the Xbox One family of devices, has made it the lead development platform for the new title we’re currently working on. This means a couple of things. First, it means we’re developing from the ground up to take advantage of the Xbox One X’s enormous graphical horsepower, an approach which will continue to yield massive advances in visual quality. Secondly, though, it also means you’ll see improvements on Xbox One and Xbox One S as a result – we find that there’s a trickle-down effect when you develop for the most powerful hardware which brings improvements even on the less powerful machines.
Make sure to join us on the official Forza Motorsport channel on Mixer to watch as we show off the Xbox One X version of the game, starting on January 16 at 1 p.m. PST. Stay tuned to ForzaMotorsport.net for updates.
I grew up in the slums (Barrio Unión) of Caracas, Venezuela. My parents were missionaries who taught my siblings and me the importance of kindness and service to those who were less fortunate.
We didn’t have much growing up, but whatever we had we always shared with others. We set out to be that family that helped out, making compassion one of our key core values.
Kindness—the value of serving others, being aware of who has less than me and helping them—was cemented in who I became. It has become one of the two key staples of my “personal brand” and has been passed down to my kids.
My brand is also feminine. When you look up the word feminine, you see words such as gentleness, empathy, caring, sensitivity, and sweetness.
My mom raised me as a feminist and wanted to make sure that I became a strong, self-sufficient woman who—for example—didn’t even let men open doors for me.
I’m all for gender equality. And I also believe that we depend on other humans, and that we will always need to depend on others to achieve our success. I am meant to thrive with those around me, and femininity empowers me to say that I’m going to be dependent on my tribe.
I have two teenage boys, and my intent is to raise gentlemen. Yes, I can open my door, but I love when my sons or husband does it for me. Femininity is not about capability. I am guided by the philosophy that we shouldn’t do it alone.
“If you can’t love yourself, you can’t be in this harmonious place to share your brand.”
In my first role at Microsoft, I came in as a force to show I was a strong woman. However, I was always open to the idea of serving others and letting them serve me, which is a part of being both feminine and kind. This combination was confusing to people at times, but ultimately it was all about me staying strong to keep my brand evolving and staying my course.
Now—on my current team and as a storyteller for IT Showcase—my brand is received as refreshing. I can look at every relationship very methodically and see how my personal branding impacts and influences others, because we are always impacting one another, everywhere we go and especially at work.
This is something I talk about with those I coach for personal branding. I have 15 mentees across the world, including some Microsoft employees. It has been a transformative experience for me to help them find their authentic selves.
I’ve found that it has been helpful for my mentees to approach their journey through three milestones. The first is learning yourself, which combines three personality tests (Love Languages, Myers-Briggs and Emergenetics) with a self-assessment of how you spend your time.
Then there’s liking yourself, which is aspirational: it’s about who you want to be in your brand. This can help you move from accepting to actually appreciating who you’re becoming.
And last there’s loving yourself. If you can’t love yourself, you can’t be in this harmonious place to share your brand. This milestone is truly the culmination of the process.
My goal is to make an impact and leave a positive legacy. I go into work constantly asking how I will make an impact. Did I make someone smile today?
Are you a Microsoft employee with a journey to share? Drop us a line from your work email at MicrosoftLife (at) microsoft.com.
Today’s innovations in technology are opening new doors for retailers. The ability to infuse data and intelligence in all areas of a business has the potential to completely reinvent retail. Here’s a visual look at the top technologies we see enabling this transformation in 2018 and beyond, and where they’ll have the greatest impact.
Microsoft researchers have created artificial intelligence that can read a document and answer questions about it about as well as a person can. It’s a major milestone in the push to have search engines such as Bing and intelligent assistants such as Cortana interact with people and provide information in more natural ways, much like people communicate with each other.
A team at Microsoft Research Asia reached the human parity milestone using the Stanford Question Answering Dataset, known among researchers as SQuAD. It’s a machine reading comprehension dataset that is made up of questions about a set of Wikipedia articles.
According to the SQuAD leaderboard, Microsoft submitted a model on Jan. 3 that reached a score of 82.650 on the exact-match portion. Human performance on the same set of questions and answers is 82.304. On Jan. 5, researchers with the Chinese e-commerce company Alibaba submitted a score of 82.440, also about the same as a human.
The two companies are currently tied for first place on the SQuAD leaderboard, which lists the results of research organizations’ efforts.
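The exact-match portion scores a prediction as correct only when its normalized text exactly equals one of the gold answers. Below is a minimal sketch in the spirit of the official SQuAD evaluation script; the normalization rules shown (lowercasing, stripping punctuation, dropping the articles a/an/the, collapsing whitespace) are a close paraphrase, not the canonical implementation:

```python
import re
import string

def normalize(text):
    """SQuAD-style answer normalization."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse whitespace

def exact_match(prediction, gold_answers):
    """1 if the normalized prediction equals any normalized gold answer."""
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))

def em_score(predictions, gold):
    """Percentage exact match over parallel lists of predictions and
    per-question gold answer lists (scores like 82.650 are this number)."""
    pairs = list(zip(predictions, gold))
    return 100.0 * sum(exact_match(p, g) for p, g in pairs) / len(pairs)
```

For example, "The Eiffel Tower!" exactly matches the gold answer "eiffel tower" after normalization, so small surface differences don't cost a model credit.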
Microsoft has made a significant investment in machine reading comprehension as part of its effort to create more technology that people can interact with in simple, intuitive ways. For example, instead of typing in a search query and getting a list of links, Microsoft’s Bing search engine is moving toward efforts to provide people with more plainspoken answers, or with multiple sources of information on a topic that is more complex or controversial.
With machine reading comprehension, researchers say computers also would be able to quickly parse through information found in books and documents and provide people with the information they need most in an easily understandable way.
That would let drivers more easily find the answer they need in a dense car manual, saving time and effort in tense or difficult situations.
These tools also could let doctors, lawyers and other experts more quickly get through the drudgery of reading through large documents for specific medical findings or rarefied legal precedent. The technology would augment their work, leaving them with more time to apply that knowledge to treating patients or formulating legal opinions.
Microsoft is already applying earlier versions of the models that were submitted for the SQuAD dataset leaderboard in its Bing search engine, and the company is working on applying them to more complex problems.
For example, Microsoft is working on ways that a computer can answer not just an original question but also a follow-up. Let’s say you asked a system, “What year was the prime minister of Germany born?” You might want it to also understand you were still talking about the same person when you asked the follow-up question, “What city was she born in?”
It’s also looking at ways that computers can generate natural answers when doing so requires information from several sentences. For example, if the computer is asked, “Is John Smith a U.S. citizen?,” the answer may have to be drawn from a paragraph such as, “John Smith was born in Hawaii. That state is in the U.S.”
Ming Zhou, assistant managing director of Microsoft Research Asia, said the SQuAD dataset results are an important milestone, but he noted that, overall, people are still much better than machines at comprehending the complexity and nuance of language.
“Natural language processing is still an area with lots of challenges that we all need to keep investing in and pushing forward,” Zhou said. “This milestone is just a start.”
Allison Linn is a senior writer at Microsoft. Follow her on Twitter.