Tag Archives: draw

Microsoft researchers build a bot that draws what you tell it to – The AI Blog

If you’re handed a note that asks you to draw a picture of a bird with a yellow body, black wings and a short beak, chances are you’ll start with a rough outline of a bird, then glance back at the note, see the yellow part and reach for a yellow pen to fill in the body, read the note again and reach for a black pen to draw the wings and, after a final check, shorten the beak and define it with a reflective glint. Then, for good measure, you might sketch a tree branch where the bird rests.

Now, there’s a bot that can do that, too.

The new artificial intelligence technology under development in Microsoft’s research labs is programmed to pay close attention to individual words when generating images from caption-like text descriptions. This deliberate focus produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation, according to results on an industry standard test reported in a research paper posted on arXiv.org.

The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus. Each image contains details that are absent from the text descriptions, indicating that this artificial intelligence contains an artificial imagination.

“If you go to Bing and you search for a bird, you get a bird picture. But here, the pictures are created by the computer, pixel by pixel, from scratch,” said Xiaodong He, a principal researcher and research manager in the Deep Learning Technology Center at Microsoft’s research lab in Redmond, Washington. “These birds may not exist in the real world — they are just an aspect of our computer’s imagination of birds.”

The drawing bot closes a research circle around the intersection of computer vision and natural language processing that He and colleagues have explored for the past half-decade. They started with technology that automatically writes photo captions – the CaptionBot – and then moved to a technology that answers questions humans ask about images, such as the location or attributes of objects, which can be especially helpful for blind people.

These research efforts require training machine learning models to identify objects, interpret actions and converse in natural language.

“Now we want to use the text to generate the image,” said Qiuyuan Huang, a postdoctoral researcher in He’s group and a paper co-author. “So, it is a cycle.”

Image generation is a more challenging task than image captioning, added Pengchuan Zhang, an associate researcher on the team, because the process requires the drawing bot to imagine details that are not contained in the caption. “That means you need your machine learning algorithms running your artificial intelligence to imagine some missing parts of the images,” he said.

Attentive image generation

At the core of Microsoft’s drawing bot is a technology known as a Generative Adversarial Network, or GAN. The network consists of two machine learning models, one that generates images from text descriptions and another, known as a discriminator, that uses text descriptions to judge the authenticity of generated images. The generator attempts to get fake pictures past the discriminator; the discriminator never wants to be fooled. Working together, the discriminator pushes the generator toward perfection.
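The adversarial loop described above can be sketched in a few lines. This is a deliberately tiny, hypothetical example — a one-dimensional "generator" that learns to shift noise toward real data drawn from N(4, 1), playing against a logistic-regression discriminator — and not the researchers' actual model:

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only, not the paper's architecture).
# Real data: samples from N(4, 1). Generator: g(z) = z + theta (a learned shift).
# Discriminator: logistic classifier D(x) = sigmoid(w*x + b).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0          # generator parameter
w, b = 0.1, 0.0      # discriminator parameters
lr = 0.05

for step in range(3000):
    real = rng.normal(4.0, 1.0)
    fake = rng.normal(0.0, 1.0) + theta

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake) * w

print(round(theta, 2))  # should land near the real-data mean of 4
```

The generator only ever sees the discriminator's gradient, never the real data directly — which is exactly the "discriminator pushes the generator toward perfection" dynamic described above.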

Microsoft’s drawing bot was trained on datasets that contain paired images and captions, which allow the models to learn how to match words to the visual representation of those words. The GAN, for example, learns to generate an image of a bird when a caption says bird and, likewise, learns what a picture of a bird should look like. “That is a fundamental reason why we believe a machine can learn,” said He.

GANs work well when generating images from simple text descriptions such as a blue bird or an evergreen tree, but the quality stagnates with more complex text descriptions such as a bird with a green crown, yellow wings and a red belly. That’s because the entire sentence serves as a single input to the generator, so the detailed information in the description is lost. As a result, the generated image is a blurry greenish-yellowish-reddish bird instead of a close, sharp match with the description.

As humans draw, we repeatedly refer to the text and pay close attention to the words that describe the region of the image we are drawing. To capture this human trait, the researchers created what they call an attentional GAN, or AttnGAN, that mathematically represents the human concept of attention. It does this by breaking up the input text into individual words and matching those words to specific regions of the image.
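The word-level attention idea can be illustrated with a toy computation — made-up region and word vectors, with a softmax over words for each image region. This is only a sketch of the mechanism, not the AttnGAN architecture itself:

```python
import numpy as np

# Sketch of word-to-region attention: each image region attends over
# individual word embeddings instead of a single sentence vector.
# All dimensions and values here are made up for illustration.
rng = np.random.default_rng(0)

d = 8                               # embedding dimension
words = rng.normal(size=(3, d))     # e.g. embeddings for "bird", "yellow", "wings"
regions = rng.normal(size=(4, d))   # feature vectors for 4 image regions

scores = regions @ words.T                     # (regions x words) similarity
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over words, per region

context = weights @ words   # per-region word context, shape (4, d)

# Each row of `weights` is a distribution over words for that region —
# the mathematical stand-in for "paying attention" to the relevant word.
print(weights.sum(axis=1))
```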

“Attention is a human concept; we use math to make attention computational,” explained He.

The model also learns what humans call commonsense from the training data, and it pulls on this learned notion to fill in details of images that are left to the imagination. For example, since many images of birds in the training data show birds sitting on tree branches, the AttnGAN usually draws birds sitting on branches unless the text specifies otherwise.

“From the data, the machine learning algorithm learns this commonsense where the bird should belong,” said Zhang. As a test, the team fed the drawing bot captions for absurd images, such as “a red double-decker bus is floating on a lake.” It generated a blurry, drippy image that resembles both a boat with two decks and a double-decker bus on a lake surrounded by mountains. The image suggests the bot had an internal struggle between knowing that boats float on lakes and the text specification of bus.

“We can control what we describe and see how the machine reacts,” explained He. “We can poke and test what the machine learned. The machine has some background learned commonsense, but it can still follow what you ask and maybe, sometimes, it seems a bit ridiculous.”

Practical applications

Text-to-image generation technology could find practical applications acting as a sort of sketch assistant to painters and interior designers, or as a tool for voice-activated photo refinement. With more computing power, He imagines the technology could generate animated films based on screenplays, augmenting the work that animated filmmakers do by removing some of the manual labor involved.

For now, the technology is imperfect. Close examination of images almost always reveals flaws, such as birds with blue beaks instead of black and fruit stands with mutant bananas. These flaws are a clear indication that a computer, not a human, created the images. Nevertheless, the quality of the AttnGAN images is a nearly three-fold improvement over the previous best-in-class GAN and serves as a milestone on the road toward a generic, human-like intelligence that augments human capabilities, according to He.

“For AI and humans to live in the same world, they have to have a way to interact with each other,” explained He. “And language and vision are the two most important modalities for humans and machines to interact with each other.”

In addition to Xiaodong He, Pengchuan Zhang and Qiuyuan Huang at Microsoft, collaborators include former Microsoft interns Tao Xu from Lehigh University and Zhe Gan from Duke University; and Han Zhang from Rutgers University and Xiaolei Huang from Lehigh University.


John Roach writes about Microsoft research and innovation. Follow him on Twitter.

Xbox One X Shadow of War shows profound improvements over PS4 Pro

One version to rule them all?

While we can draw conclusions about PlayStation 4 Pro and Xbox One X from their respective specs sheets, real-life comparisons are somewhat thin on the ground right now. Microsoft’s new console should offer comprehensive improvements owing to more memory, higher levels of bandwidth and a big compute advantage, but to what extent will it actually matter in the homogenised world of multi-platform development? From an extended look at the Gamescom build of Shadow of War running on Xbox One X, the signs are looking good for the green team’s new hardware, with an immediately obvious, comprehensively improved presentation – possibly the most dramatic boost we’ve seen to date.

Make no mistake though, as we’ve previously discussed, the PS4 Pro version of ‘Wardor’ is no slouch. Its dynamic resolution averages out at around 1620p over the base machine’s full HD, while geometry draw distance and shadow LODs are improved. However, even after upgrading to the latest patch 1.04, the Pro still exhibits many low quality textures that stick out like a sore thumb – especially noticeable on ultra HD displays.

Implementation of the PC version’s 4K texture pack would have done wonders here, but clearly the limited 512MB of extra RAM available to Pro developers isn’t enough to house the top-tier assets. And that’s the most immediately obvious difference between PS4 Pro and Xbox One X versions of the game – what sticks out right away is that texture problems on PS4 Pro are eradicated. Through sheer capacity via Xbox One X’s 12GB of GDDR5 memory, Shadow of War offers a dramatic improvement in quality: for example, ground textures get a clear resolution bump from the soup-like results on Pro, offering a sharper, clearer presentation.


Shadow of War on Xbox One X given a thorough early analysis. This is an early build, but prospects are looking great when compared to the final PS4 Pro code.

Monolith has also brought over the dual presentation modes from PS4 Pro – and enhanced them. There’s a quality-biased offering that prioritises better visual settings like draw distance, along with a resolution mode that delivers a native 4K image. And to be clear, both options deliver better textures than PS4 Pro regardless of which you choose. Quality mode uses the best assets possible on Xbox One X, while the resolution mode uses slightly lower quality texture filtering – blurring the ground a touch, despite using the same texture setting. Even so, both modes trump PS4 Pro’s texture work (which is identical across its quality and resolution modes) in sheer clarity – it’s a big upgrade.

Additionally, whether it’s quality or resolution mode, you still get the benefit of improved ambient occlusion over a standard Xbox One. Dividing the two modes, Xbox One X gets a big boost in draw distance if you opt for the quality setting: an overview of a castle during a siege for example, shows more shadow detail and geometry rendered in from range. It prevents the pop-in you might see on other machines, and the world just feels more cohesive as a result.

In comparing PS4 Pro and Xbox One X, it makes sense to use the quality setting on both systems. Subtle as it may be, it’s evident that Microsoft’s machine gets even better draw distance settings overall on mountain-side geometry and small objects, over and above PS4 Pro’s existing enhancements. It’s a small improvement and nothing like the scale of the texture upgrades – but another advantage that shows Xbox One X veering close to PC’s best presets.

On top of the quality and resolution modes shared with Sony’s ‘supercharged’ console, Xbox One X adds another toggle that can be deployed on either preset: dynamic resolution. It’s hard-set on PS4 Pro, but Xbox One X users can disable it if they want to take their chances with a less stable performance level. Assuming that toggle is disabled, what you get in quality mode is a native resolution of 3520×1980. That’s 1980p fixed on Xbox One X – or very close to it – bringing a leap in image quality over the PS4 Pro’s ballpark 2880×1620 output.

Based on the Gamescom build, Xbox One X’s resolution mode bears mention too if you want a superior 4K picture. From what we’ve tested, a native 3840×2160 is achieved from the machine this way, fixed at that number, as long as the dynamic res checkbox is left blank. The machine forces the maximum resolution here, but at the cost of texture filtering and LOD quality we have on the quality mode, it’s maybe not the best way to go. Based on playing through Shadow of War in both quality and resolution mode with dynamic res enabled and disabled, we’d trade the clarity of the game’s 4K output for the additional features of the quality mode.

And yes, we’d enable dynamic resolution too. Quality mode with this checkbox selected can drop the pixel count to 3360×1890 – the lowest point we’ve measured so far. For reference, that’s still higher than PS4 Pro’s lowest measurement of 1512p in quality mode, and sees a 56 per cent improvement in pixel count. Unlike Sony’s machine though, it’s also capable of hitting a true native 4K on Xbox One X in the best case – provided not much is happening on-screen.
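The 56 per cent figure checks out, assuming both measurements are 16:9 frames, so that PS4 Pro's 1512p floor corresponds to 2688×1512 (the widths are our inference; the article only quotes vertical resolutions):

```python
# Verifying the pixel-count comparison quoted above.
xbox_low = 3360 * 1890   # Xbox One X quality mode, lowest measured resolution
ps4_low = 2688 * 1512    # PS4 Pro quality mode, lowest measured (1512p at 16:9)

improvement = xbox_low / ps4_low - 1
print(f"{improvement:.2%}")  # 56.25%
```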

It’s an interesting feature, and having quality with dynamic res engaged means you get all the visual bells and whistles, plus a genuine 4K picture on the occasions that it’s possible. Even when it can’t hit native UHD with all bling engaged, it’s close enough most of the time to look good on a 4K screen. Meanwhile, engaging dynamic scaling on Xbox One X’s resolution mode isn’t quite as revolutionary; you get a lower bound of around 3584×2016 wherever it senses performance is about to take a hit – while the upper bound stays at 3840×2160. And as expected, all of these numbers super-sample down to a 1080p set if you haven’t made the upgrade to 4K yet – a feature Microsoft says is common to all X titles.

Even though it’s early code, the Xbox One X build we played runs almost flawlessly for regular missions at a straight 30fps, regardless of rendering mode. However, there is one exceptional area in the Gamescom build that causes issues. The Ghasghor siege is a large-scale battle that buckles Xbox One X’s performance to around the low-20s. It’s a mission with a surplus of enemies and effects, and the bottleneck applies whether you’re on quality or resolution mode.

As a stress-test it’s fascinating to see the standard Xbox One’s adaptive v-sync used here, causing occasional screen-tear. Also curious is that the quality mode appears to push Xbox One X harder, compared to the resolution mode – creating a divide of 2-3 frames per second in matching scenes. Interestingly, engaging dynamic resolution makes no difference here, and it may be the case that the sheer number of entities is causing the console to be CPU-bound. Higher detail means more draw calls, adding to the load, perhaps explaining the performance deficit here. Again, it’s worth stressing this is still early code from months past, and things could change come release. For the rest of the package, Shadow of War hands in a solid 30fps, and the siege areas will be worth revisiting in the final code, with Pro factored into the mix too.

As things stand, Shadow of War gives us a prime, early example of why Microsoft targeted Xbox One X’s specific specs. The sense is that a higher pixel count alone isn’t enough; more importantly, this console has the extra memory resources to give those pixels more to show off – better textures, and improved LODs, for example. Additional visual options such as the ability to toggle dynamic resolution scaling are also welcome. Those who want their true 4K can have it, while those looking for more consistent performance are also catered for.

It’s the radically improved art that most obviously sets Xbox One X apart from PS4 Pro, but Monolith’s approach to 4K textures likely won’t apply to every game. At the start of the generation, 8GB of GDDR5 seemed almost overkill – a mammoth 16x increase over last-gen. That’s still a hefty chunk of memory to work with and as such, the difference in other titles may not be so pronounced. However, based on the evidence presented by Shadow of War and Rise of the Tomb Raider in particular, it’s possible that Xbox One X’s 12GB provision is a good case of forward-thinking on Microsoft’s part. We’ll report back on final code just as soon as we can.

Windows Server containers and Hyper-V containers explained

A big draw of Windows Server 2016 is the addition of containers that provide similar capabilities to those from leading open source providers. This Microsoft platform actually offers two different types of containers: Windows Server containers and Hyper-V containers. Before you decide which option best meets your needs, take a look at these five quick tips so you have a better understanding of container architecture, deployment and performance management.

Windows Server containers vs. Hyper-V containers

Although Windows Server containers and Hyper-V containers do the same thing and are managed the same way, the level of isolation they provide is different. Windows Server containers share the underlying OS kernel, which makes them smaller than VMs because they don’t each need a copy of the OS. Security can be a concern, however, because if one container is compromised, the OS and all of the other containers could be at risk.

Hyper-V containers and their dependencies reside in Hyper-V VMs and provide an additional layer of isolation. For reference, Hyper-V containers and Hyper-V VMs have different use cases. Containers are typically used for microservices and stateless applications because they are disposable by design and, as such, don’t store persistent data. Hyper-V VMs, typically equipped with virtual hard disks, are better suited to mission-critical applications.

The role of Docker on Windows Server


In order to package, deliver and manage Windows container images, you need to download and install Docker on Windows Server 2016. Docker Swarm, supported by Windows Server, provides orchestration features that help with cluster creation and workload scheduling. After you install Docker, you’ll need to configure it for Windows, a process that includes selecting secured connections and setting disk paths.

One key advantage of Docker on Windows is support for container image automation. You can use container images for continuous integration cycles because they’re stored as code and can be quickly recreated when need be. You can also download and install a module to extend PowerShell to manage Docker Engine; just make sure you have the latest versions of both Windows and PowerShell before you do so.

Meet Hyper-V container requirements

If you prefer to use Hyper-V containers, make sure you have Server Core or Windows Server 2016 installed, along with the Hyper-V role. There is also a list of minimum resource requirements necessary to run Hyper-V containers. First, you need at least 4 GB of memory for the host VM. You also need a processor with Intel VT-x and at least two virtual processors for the host VM. Unfortunately, nested virtualization doesn’t yet support Advanced Micro Devices (AMD) processors.

Although these requirements might not seem extensive, it’s important to carefully consider resource allocation and the workloads you intend to run on Hyper-V containers before deployment. When it comes to container images, you have two different options: a Windows Server Core image and a Nano Server image.

OS components affect both container types

Portability is a key advantage of containers. Because an application and all its dependencies are packaged within the container, it should be easy to deploy on other platforms. Unfortunately, there are different elements that can negatively affect this deployment flexibility. While containers share the underlying OS kernel, they do contain their own OS components, also known as the OS layer. If these components don’t match up with the OS kernel running on the host, the container will most likely be blocked.

The four-level version notation system Microsoft uses includes the major, minor, build and revision levels. Before Windows Server containers or Hyper-V containers will run on a Windows Server host, the major, minor and build levels must match, at minimum. The containers will still start if the revision level doesn’t match, but they might not work properly.
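The matching rule above is straightforward to express directly. The version strings below are illustrative examples in Microsoft's major.minor.build.revision notation, not values taken from any particular release:

```python
# Sketch of the host/container compatibility rule described above:
# major, minor and build must match; revision is allowed to differ.
def parse(version):
    """Split a 'major.minor.build.revision' string into four integers."""
    major, minor, build, revision = (int(part) for part in version.split("."))
    return major, minor, build, revision

def container_can_run(host_version, container_version):
    """True when the first three version levels match."""
    return parse(host_version)[:3] == parse(container_version)[:3]

print(container_can_run("10.0.14393.206", "10.0.14393.0"))   # True: only revision differs
print(container_can_run("10.0.14393.206", "10.0.15063.0"))   # False: build level differs
```

Note that even in the first case the article warns the container might misbehave — a mismatched revision level starts but isn't guaranteed to work properly.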

Antimalware tools and container performance

Because of shared components, like those of the OS layer, antimalware tools can affect container performance. The components or layers are shared through the use of placeholders; when those placeholders are read, the reads are redirected to the underlying component. If the container modifies a component, the container replaces the placeholder with the modified one.

Antimalware tools aren’t aware of the redirection and don’t know which components are placeholders and which components are modified, so the same components end up being scanned multiple times. Fortunately, there is a way to make antimalware tools aware of this activity. You can modify the container volume by attaching a parameter to the create callback data and checking the extra create parameter (ECP) redirection flags. The ECP will then indicate whether the file was opened from a remote layer or from a local layer.

Next Steps

Ensure container isolation and prevent root access

Combine microservices and containers

Maintain high availability with containers and data mirroring

Coding school bets on scholarships to put more women in tech jobs

To draw more women to its immersive software engineering and web development programs, the Flatiron School is granting them scholarships. The coding boot camp awards 25 women a month 50% off tuition for its online program and a $1,000 discount for every woman who attends in person at its New York campus.

The Women Take Tech scholarship program is designed to put more women in tech jobs, said Flatiron School COO Kristi Riordan. According to the school’s 2017 report on student employment, 97% of graduates get jobs; 35% of grads were women.

The program’s goal is three-pronged: to raise awareness among women about opportunities in technology, give them the confidence they’ll need to thrive in a male-dominated market and give them access to the training needed for high-paying tech careers. Tuition at the Flatiron School is $15,000 for the 15-week on-campus program and $1,500 a month for its online course. Considering many of the school’s female students are in their mid-30s and some have children, that’s not cheap.

“We believe it’s really crucial that women believe they can financially take the risk to pursue a program like this,” Riordan said.

Since launching the scholarship program in January, the school has seen the percentage of women in its online program jump from 30% to 50%.

The Flatiron School scholarships are part of a nationwide push to get more women in tech jobs. For example, Harvey Mudd College, in Claremont, Calif., retooled its curriculum to make it more accessible to students with limited computer experience. The percentage of its computer science majors who are women went from 10% to 40% in five years, and today stands at 55%, the Los Angeles Times reported in January.

Carnegie Mellon University, in Pittsburgh, has also enacted reforms, which helped increase the percentage of female comp-sci majors to nearly 50% in 2016. And President Donald Trump, who has drawn more ire than praise for his efforts at inclusivity, this week mandated that $200 million go toward technology education grants for women and minorities.

Needed: More women in IT

But technology hasn’t proven to be a friendly place for women of late. The nation’s tech mecca, Silicon Valley, is still smarting from high-profile reports of sexual harassment and bias, and some male technologists are calling women-in-tech recruitment unjust to men, The New York Times reported Saturday. Their complaints are getting louder, too, as some feel emboldened by James Damore, the Google engineer who was fired after arguing in a company post that biology could be behind why there are fewer women than men in the technology field.

According to the U.S. Department of Labor, women make up 26% of people in computer and mathematics jobs.


Getting more women in tech jobs is a good and necessary thing, Riordan said. For one, more women in the labor pool means more talent to tap. If just a fraction of the women who make up about half of society are suited for technology jobs, “think about how much you are limiting the ability to hire talent,” she said.

There’s a long-term benefit, too. Technology is edging into practically every precinct of daily life and will play a colossal role in the future of work in general, Riordan said. So tech jobs can’t be meted out mainly to one gender or the other.

“We need to make sure that we have broad sections of society who are participating in the future of work,” she said. “If we don’t help women be a participant in those future opportunities, I think we’re going to have societal instability.”

Gender diversity is also good business. Investment bank Morgan Stanley reported in a May study that companies with high diversity — a workforce consisting of close to 50% women, 50% men — delivered an average 5.4% more in revenue returns than their peers with gender imbalances.

Enter PCs, exit girls

The reason for the low percentage of women in tech jobs was explored in a 2014 National Public Radio broadcast. Throughout the 1970s and early 1980s, the percentage of women in tech jobs was inching toward 40% — until 1984, when it started to fall. That’s the same time personal computers became commercially popular — but they were marketed for boys, not girls. Eventually, computers became a guy thing. In 2014, the last year accounted for by the National Science Foundation, the percentage of women in computer science was under 20%.

“Now we’ve started to see an awareness of this,” Riordan said. “We cannot talk about technology as a gender-driven thing. It needs to be something that is accessible, and there’s an aptitude for it across those genders.”

Geared toward males for more than 30 years, technology can seem like an undiscovered country to many women, Riordan said, even those naturally inclined to it. She relayed the story of a former student who, as a freshman in college, decided to major in computer science.

“She looks around the classroom. She’s the only woman,” Riordan said. “She’s talked to in a way that made her feel like she didn’t belong.”

Eventually, she dropped out of the computer science program. After graduating college, she became a librarian with the tedious work of cataloging book metadata. Thinking there had to be a better way, the woman drew on her computer science background.

“She found a way to write a script and more efficiently catalog all of the book data,” Riordan said. The woman applied to the Flatiron School, graduated and went back to library science, this time as a software engineer at the New York Public Library.

Students take classes in computer programming at the Flatiron School, a coding boot camp in New York.

A comfortable space

Riordan said the school tries to make women feel like they’re where they should be. For instance, female students get training on how to dispel impostor syndrome — the strong feeling that they don’t deserve the job, education or opportunity they have.

“It’s crucial to help women to understand that if they have self-doubts that [technology] isn’t a field they can do, that we help them understand why they can,” Riordan said.

The school also invites female technologists in New York to speak to students about their work and careers. Flatiron School alumni and members of its engineering or teaching staff are also tapped to talk about their experiences.

The response from female students, Riordan said, has been overwhelmingly positive: “‘Yes, there are women out there,’ and ‘Yes, they can share what their experiences are like,’ and ‘Yes, it’s not necessarily as scary as any of the things that are being reported on in the press,'” she said. “There are many organizations, especially here in New York City, where women are thriving in tech and thriving in leadership roles.”