Tag Archives: draw

Jake Braun discusses the Voting Village at DEF CON

Election security continues to be a hot topic as the 2018 midterm elections draw closer, so the Voting Village at DEF CON 26 in Las Vegas set out to re-create and test every aspect of an election.

Jake Braun, CEO of Cambridge Global Advisors, based in Arlington, Va., and one of the main organizers of the DEF CON Voting Village, discussed the pushback the event has received and how he hopes the event can expand in the future.

What were the major differences between what the Voting Village had this year compared to last year?

Jake Braun: The main difference is it’s way bigger. And we’ve got, end to end, the voting infrastructure. We’ve got voter registration, a list of voters in the state of Ohio that are in a cyber range that’s basically like a county clerk’s network. Cook County, Illinois, their head guy advised us on how to make it realistic [and] make it like his network. We had that, but we didn’t have the list of voters last year.

That’s the back end of the voter process with the voter infrastructure process. And then we’ve got machines. We’ve got some new machines and accessories and all this stuff.

Then, on the other end, we’ve got the websites. This is the last piece of the election infrastructure that announces the results. And so, obviously, we’ve got the kids hacking the mock websites.

What prompted you to make hacking the mock websites an event for the kids in R00tz Asylum?

Braun: It was funny. I was at [RSA Conference], and we’ve been talking for a long time about, how do we represent this vulnerability in a way that’s not a waste of time? Because the guys down in the [Voting Village], hacking websites is not interesting to them. They’ve been doing it for 20 years, or they’ve known how to do it for 20 years. But this is the most vulnerable part of the infrastructure, because it’s [just] a website. You can cause real havoc.

I mean, the Russians — when they hacked the Ukrainian website and changed it to show their candidate won, and the Ukrainians took it down, fortunately, they took it down before anything happened. But then, Russian TV started announcing their candidate won. Can you imagine if, in November 2020, the Florida and Ohio websites are down, and Wolf Blitzer is sitting there on CNN saying, ‘Well, you know, we don’t really know who won, because the Florida and Ohio websites are down,’ and then RT — Russian Television — starts announcing that their preferred candidate won? It would be chaos.

Anyway, I was talking through this with some people at [RSA Conference], and I was talking about how it would be so uninteresting to do it in the real village or in the main village. And the guy [I was talking to said], ‘Oh, right. Yeah. It’s like child’s play for them.’

I was like, ‘Exactly, it’s child’s play. Great idea. We’ll give it to R00tz.’ And so, I called up Nico [Sell], and she was like, ‘I love it. I’m in.’ And then, the guys who built it were the Capture the Packet guys, who are some of the best security people on the planet. I mean, Brian Markus does security for … Aerojet Rocketdyne, one of the top rocket manufacturers in the world. He sells to [Department of Defense], [Department of Homeland Security] and the Australian government. So, I mean, he is more competent than any election official we have.

The first person to get in was an 11-year-old girl, and she got in in 10 minutes. Totally took over the website, changed the results and everything else.

How did it go with the Ohio voter registration database?

Braun: The Secretaries of State Association criticized us, [saying], ‘Oh, you’re making it too easy. It’s not realistic,’ which is ridiculous. In fact, we’re protecting the voter registration database with this Israeli military technology, and no one has been able to get in yet. So, it’s actually probably the best protected list of voters in the country right now.

Have you been able to update the other machines being used in the Voting Village?

Braun: Well, a lot of it is old, but it’s still in use. The only thing that’s not in use is the WinVote, but everything else that we have in there is in use today. Unlike other stuff, they don’t get automatic updates on their software. So, that’s the same stuff that people are voting on today.

Have the vendors been helpful at all in providing more updated software or anything?

Braun: No. And, of course, the biggest one sent out a letter in advance to DEF CON again this year saying, ‘It’s not realistic and it’s unfair, because they have full access to the machines.’

Do people think these machines are kept in Fort Knox? I mean, they are in a warehouse or, in some places, in small counties, they are in a closet somewhere — literally. And, by the way, Rob Joyce, the cyber czar for the Trump administration who’s now back at NSA [National Security Agency], in his talk [this year at DEF CON, he basically said], if you don’t think that our adversaries are doing exactly this all year so that they know how to get into these machines, your head is insane.

The thing is that we actually are playing by the rules. We don’t steal machines. We only get them if people donate them to us, or if we can buy them legally somehow. The Russians don’t play by the rules. They’ll just go get them however they want. They’ll steal them or bribe people or whatever.

They could also just as easily do what you do and just get them secondhand.

Braun: Right. They’re probably doing that, too.

Is there any way to test these machines in a way that would be acceptable to the manufacturers and U.S. government?

Braun: The unfortunate thing is that, to our knowledge, the Voting Village is still the only public third-party inspection — or whatever you want to call it — of voting infrastructure.

The vendors and others will get pen testing done periodically for themselves, but that’s not public. All these things are done, and they’re under [nondisclosure agreement]. Their customers don’t know what vulnerabilities they found and so on and so forth.

So, the unfortunate thing is that the only time this is done publicly by a third party is when it’s done by us. And that’s once a year for two and a half days. This should be going on all year with all the equipment, the most updated stuff and everything else. And, of course, it’s not.

Have you been in contact with the National Institute of Standards and Technology, as they are in the process of writing new voting machine guidelines?

Braun: Yes. This is why DEF CON is so great, because everybody is here. I was just talking to them yesterday, and they were like, ‘Hey, can you get us the report as soon as humanly possible? Because we want to take it into consideration as we are putting together our guidelines.’ And they said they used our report last year, as well.

How have the election machines fared against the Voting Village hackers this year?

Braun: Right, of course, they were able to get into everything. Of course, they’re finding all these new vulnerabilities and all this stuff. 

The greatest thing that I think came out of last year was that the state of Virginia wound up decommissioning the machine that [the hackers] got into in two minutes remotely. They decommissioned that and got rid of the machine altogether. And it was the only state that still had it. And so, after DEF CON, they had this emergency thing to get rid of it before the elections in 2017.

What’s the plan for the Voting Village moving forward?

Braun: We’ll do the report like we did last year. Out of all the guidelines that have come out since 2016 on how to secure election infrastructure, none of them talk about how to better secure your reporting websites or, since they are kind of impossible to secure, what operating procedures you should have in place in case they get hacked.

So, we’re going to include that in the report this year. And that will be a big addition to the overall guidelines that have come out since 2016.

And then, next year, I think, it’s really just all about, what else can we get our hands on? Because that will be the last time that any of our findings will be able to be implemented before 2020, which is, I think, when the big threat is.

A DEF CON spokesperson said that most of the local officials that responded and are attending have been from Democratic majority counties. Why do you think that is?

Braun: That’s true, although [Neal Kelley, chief of elections and registrar of voters for] Orange County, attended. Orange County is pretty Republican, and he is a Republican.

But I think it winds up being this functionally odd thing where urban areas are generally Democratic, but because they are big, they have a bigger tax base. So then, the people who run them have more money to do security and hire security people. So, they kind of necessarily know more about this stuff.

Whereas if you’re in Allamakee County, Iowa, with 10,000 people, the county auditor who runs the elections there, that guy or gal — I don’t know who it is — but they are both the IT and the election official and the security person and the whatever. You’re just not going to get the specialized stuff, you know what I mean?

Do you have any plans to try to boost attendance from smaller counties that might not be able to afford sending somebody here or plans on how to get information to them?

Braun: Well, that’s why we do the report. This year, we did a mailing of 6,600 pieces of mail to all 6,600 election officials in the country and two emails and 3,500 live phone calls. So, we’re going to keep doing that.
 
And that’s the other thing: We just got so much more engagement from local officials. We had a handful come last year. We had several dozen come this year. None of them were public last year. This year, we had a panel of them speaking, including DHS [Department of Homeland Security].

So, that’s a big difference. Despite the stupid letter that the Secretary of State Association sent out, a lot of these state and local folks are embracing this.

And it’s not like we think we have all the answers. But you would think if you were in their position and with how cash-strapped they are and everything, that they would say, ‘Well, these guys might have some answers. And if somebody’s got some answers, I would love to go find out about those answers.’

Microsoft researchers build a bot that draws what you tell it to

If you’re handed a note that asks you to draw a picture of a bird with a yellow body, black wings and a short beak, chances are you’ll start with a rough outline of a bird, then glance back at the note, see the yellow part and reach for a yellow pen to fill in the body, read the note again and reach for a black pen to draw the wings and, after a final check, shorten the beak and define it with a reflective glint. Then, for good measure, you might sketch a tree branch where the bird rests.

Now, there’s a bot that can do that, too.

The new artificial intelligence technology under development in Microsoft’s research labs is programmed to pay close attention to individual words when generating images from caption-like text descriptions. This deliberate focus produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation, according to results on an industry standard test reported in a research paper posted on arXiv.org.

The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus. Each image contains details that are absent from the text descriptions, indicating that this artificial intelligence contains an artificial imagination.

“If you go to Bing and you search for a bird, you get a bird picture. But here, the pictures are created by the computer, pixel by pixel, from scratch,” said Xiaodong He, a principal researcher and research manager in the Deep Learning Technology Center at Microsoft’s research lab in Redmond, Washington. “These birds may not exist in the real world — they are just an aspect of our computer’s imagination of birds.”

The drawing bot closes a research circle around the intersection of computer vision and natural language processing that He and colleagues have explored for the past half-decade. They started with technology that automatically writes photo captions – the CaptionBot – and then moved to a technology that answers questions humans ask about images, such as the location or attributes of objects, which can be especially helpful for blind people.

These research efforts require training machine learning models to identify objects, interpret actions and converse in natural language.

“Now we want to use the text to generate the image,” said Qiuyuan Huang, a postdoctoral researcher in He’s group and a paper co-author. “So, it is a cycle.”

Image generation is a more challenging task than image captioning, added Pengchuan Zhang, an associate researcher on the team, because the process requires the drawing bot to imagine details that are not contained in the caption. “That means you need your machine learning algorithms running your artificial intelligence to imagine some missing parts of the images,” he said.

Attentive image generation

At the core of Microsoft’s drawing bot is a technology known as a Generative Adversarial Network, or GAN. The network consists of two machine learning models, one that generates images from text descriptions and another, known as a discriminator, that uses text descriptions to judge the authenticity of generated images. The generator attempts to get fake pictures past the discriminator; the discriminator never wants to be fooled. Working together, the discriminator pushes the generator toward perfection.
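
To make the adversarial setup concrete, here is a minimal conditional GAN sketch in PyTorch. The layer sizes, the toy caption embeddings and the random stand-in data are illustrative assumptions, not Microsoft's actual model; the point is only the two-model loop in which the discriminator judges images against their captions and the generator tries to fool it.

# Minimal conditional GAN sketch (PyTorch). Sizes and random data are stand-ins.
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, IMG_DIM = 128, 100, 64 * 64 * 3

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + TEXT_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_DIM), nn.Tanh())

    def forward(self, noise, text):
        # The caption embedding conditions the generated image.
        return self.net(torch.cat([noise, text], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + TEXT_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1))  # one real/fake logit per image-caption pair

    def forward(self, image, text):
        return self.net(torch.cat([image, text], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):                      # toy training loop on random data
    text = torch.randn(16, TEXT_DIM)         # stand-in caption embeddings
    real = torch.randn(16, IMG_DIM)          # stand-in real images
    fake = G(torch.randn(16, NOISE_DIM), text)

    # Discriminator: score real image-caption pairs high, generated ones low.
    d_loss = bce(D(real, text), torch.ones(16, 1)) + \
             bce(D(fake.detach(), text), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator accept its fakes as real.
    g_loss = bce(D(fake, text), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In the system described here the models are far larger and the captions come from a learned text encoder, but the adversarial push and pull works the same way.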

Microsoft’s drawing bot was trained on datasets that contain paired images and captions, which allow the models to learn how to match words to the visual representation of those words. The GAN, for example, learns to generate an image of a bird when a caption says bird and, likewise, learns what a picture of a bird should look like. “That is a fundamental reason why we believe a machine can learn,” said He.

GANs work well when generating images from simple text descriptions such as a blue bird or an evergreen tree, but the quality stagnates with more complex text descriptions such as a bird with a green crown, yellow wings and a red belly. That’s because the entire sentence serves as a single input to the generator. The detailed information in the description is lost. As a result, the generated image is a blurry greenish-yellowish-reddish bird instead of a close, sharp match with the description.

As humans draw, we repeatedly refer to the text and pay close attention to the words that describe the region of the image we are drawing. To capture this human trait, the researchers created what they call an attentional GAN, or AttnGAN, that mathematically represents the human concept of attention. It does this by breaking up the input text into individual words and matching those words to specific regions of the image.
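
As a rough illustration of that word-to-region matching, the sketch below computes an attention map between a set of word embeddings and a grid of image-region features. The tensor shapes and the simple dot-product similarity are assumptions for illustration, not the exact AttnGAN formulation.

# Word-to-region attention sketch (PyTorch). Shapes are illustrative.
import torch
import torch.nn.functional as F

batch, n_words, n_regions, dim = 4, 12, 17 * 17, 256

words = torch.randn(batch, n_words, dim)       # one embedding per caption word
regions = torch.randn(batch, n_regions, dim)   # one feature vector per image region

# Dot-product similarity between every word and every image region.
scores = torch.bmm(words, regions.transpose(1, 2))   # (batch, n_words, n_regions)
attn = F.softmax(scores, dim=-1)                      # each word attends over regions

# Word-specific context vectors: weighted sums of region features, so each
# word can influence the part of the image it actually describes.
word_context = torch.bmm(attn, regions)               # (batch, n_words, dim)
print(word_context.shape)

Each row of attn indicates which image regions a given word is most relevant to, a computational stand-in for glancing back at the note while drawing one particular part of the bird.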

“Attention is a human concept; we use math to make attention computational,” explained He.

The model also learns what humans call commonsense from the training data, and it pulls on this learned notion to fill in details of images that are left to the imagination. For example, since many images of birds in the training data show birds sitting on tree branches, the AttnGAN usually draws birds sitting on branches unless the text specifies otherwise.

“From the data, the machine learning algorithm learns this commonsense where the bird should belong,” said Zhang. As a test, the team fed the drawing bot captions for absurd images, such as “a red double-decker bus is floating on a lake.” It generated a blurry, drippy image that resembles both a boat with two decks and a double-decker bus on a lake surrounded by mountains. The image suggests the bot had an internal struggle between knowing that boats float on lakes and the text specification of bus.

“We can control what we describe and see how the machine reacts,” explained He. “We can poke and test what the machine learned. The machine has some background learned commonsense, but it can still follow what you ask and maybe, sometimes, it seems a bit ridiculous.”

Practical applications

Text-to-image generation technology could find practical applications acting as a sort of sketch assistant to painters and interior designers, or as a tool for voice-activated photo refinement. With more computing power, He imagines the technology could generate animated films based on screenplays, augmenting the work that animated filmmakers do by removing some of the manual labor involved.

For now, the technology is imperfect. Close examination of images almost always reveals flaws, such as birds with blue beaks instead of black and fruit stands with mutant bananas. These flaws are a clear indication that a computer, not a human, created the images. Nevertheless, the quality of the AttnGAN images is nearly a three-fold improvement over the previous best-in-class GAN and serves as a milestone on the road toward a generic, human-like intelligence that augments human capabilities, according to He.

“For AI and humans to live in the same world, they have to have a way to interact with each other,” explained He. “And language and vision are the two most important modalities for humans and machines to interact with each other.”

In addition to Xiaodong He, Pengchuan Zhang and Qiuyuan Huang at Microsoft, collaborators include former Microsoft interns Tao Xu from Lehigh University and Zhe Gan from Duke University; and Han Zhang from Rutgers University and Xiaolei Huang from Lehigh University.

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

Xbox One X Shadow of War shows profound improvements over PS4 Pro

One version to rule them all?

While we can draw conclusions about PlayStation 4 Pro and Xbox One X from their respective specs sheets, real-life comparisons are somewhat thin on the ground right now. Microsoft’s new console should offer comprehensive improvements owing to more memory, higher levels of bandwidth and a big compute advantage, but to what extent will it actually matter in the homogenised world of multi-platform development? From an extended look at the Gamescom build of Shadow of War running on Xbox One X, the signs are looking good for the green team’s new hardware, with an immediately obvious, comprehensively improved presentation – possibly the most dramatic boost we’ve seen to date.

Make no mistake though, as we’ve previously discussed, the PS4 Pro version of ‘Wardor’ is no slouch. Its dynamic resolution averages out at around 1620p over the base machine’s full HD, while geometry draw distance and shadow LODs are improved. However, even after upgrading to the latest patch 1.04, the Pro still exhibits many low quality textures that stick out like a sore thumb – especially noticeable on ultra HD displays.

Implementation of the PC version’s 4K texture pack would have done wonders here, but clearly the limited 512MB of extra RAM available to Pro developers isn’t enough to house the top-tier assets. And that’s the most immediately obvious difference between the PS4 Pro and Xbox One X versions of the game: the texture problems on PS4 Pro are eradicated. Through the sheer capacity of Xbox One X’s 12GB of GDDR5 memory, Shadow of War offers a dramatic improvement in quality: for example, ground textures get a clear resolution bump from the soup-like results on Pro, offering a sharper, clearer presentation.

Shadow of War on Xbox One X given a thorough early analysis. This is an early build, but prospects are looking great when compared to the final PS4 Pro code.

Monolith has also brought over the dual presentation modes from PS4 Pro – and enhanced them. There’s a quality-biased offering that prioritises better visual settings like draw distance, along with a resolution mode that delivers a native 4K image. And to be clear, both options deliver better textures than PS4 Pro regardless of which you choose. Quality mode gets the best assets possible on Xbox One X, while the resolution mode uses slightly lower-quality texture filtering, blurring the ground slightly despite using the same texture setting. Compared to PS4 Pro’s texture work in its quality mode (which is also a match for its resolution mode), both Xbox One X modes still trump it in sheer clarity – it’s a big upgrade.

Additionally, whether it’s quality or resolution mode, you still get the benefit of improved ambient occlusion over a standard Xbox One. Dividing the two modes, Xbox One X gets a big boost in draw distance if you opt for the quality setting: an overview of a castle during a siege for example, shows more shadow detail and geometry rendered in from range. It prevents the pop-in you might see on other machines, and the world just feels more cohesive as a result.

In comparing PS4 Pro and Xbox One X, it makes sense to use the quality setting on both systems. Subtle as it may be, it’s evident that Microsoft’s machine gets even better draw distance settings overall on mountain-side geometry and small objects, over and above PS4 Pro’s existing enhancements. It’s a small improvement and nothing like the scale of the texture upgrades – but another advantage that shows Xbox One X veering close to PC’s best presets.

On top of the quality and resolution modes shared with Sony’s ‘supercharged’ console, Xbox One X adds another toggle that can be deployed on either preset: dynamic resolution. It’s hard-set on PS4 Pro, but Xbox One X users can disable it if they want to take their chances with a less stable performance level. Assuming that toggle is disabled, what you get in quality mode is a native resolution of 3520×1980. That’s 1980p fixed on Xbox One X – or very close to it – bringing a leap in image quality over the PS4 Pro’s ballpark 2880×1620 output.

Based on the Gamescom build, Xbox One X’s resolution mode bears mention too if you want a superior 4K picture. From what we’ve tested, a native 3840×2160 is achieved this way, fixed at that number, as long as the dynamic res checkbox is left blank. The machine forces the maximum resolution here, but given the cost to texture filtering and the LOD quality found in quality mode, it’s maybe not the best way to go. Based on playing through Shadow of War in both quality and resolution mode with dynamic res enabled and disabled, we’d trade the clarity of the game’s 4K output for the additional features of the quality mode.

And yes, we’d enable dynamic resolution too. Quality mode with this checkbox selected can drop the pixel count to 3360×1890 – the lowest point we’ve measured so far. For reference, that’s still higher than PS4 Pro’s lowest measurement of 1512p in quality mode, and sees a 56 per cent improvement in pixel count. Unlike Sony’s machine though, it’s also capable of hitting a true native 4K on Xbox One X in the best case – provided not much is happening on-screen.
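
Those pixel-count comparisons are easy to sanity-check. The short calculation below assumes a 16:9 frame for the 1512p figure (2688×1512), since only the vertical resolution is quoted in the analysis:

# Pixel-count check for the figures quoted above. The 2688x1512 frame is an
# assumption (16:9 at 1,512 lines); only the vertical figure is stated.
xbox_one_x_low = 3360 * 1890   # Xbox One X quality mode, lowest measured
ps4_pro_low = 2688 * 1512      # PS4 Pro quality mode, lowest measured (1512p)
native_4k = 3840 * 2160

print(xbox_one_x_low / ps4_pro_low)   # ~1.56, i.e. roughly 56 per cent more pixels
print(xbox_one_x_low / native_4k)     # ~0.77, so the low point is still ~77% of native 4K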

It’s an interesting feature, and having quality mode with dynamic res engaged means you get all the visual bells and whistles, plus a genuine 4K picture on the occasions that it’s possible. Even when it can’t hit native UHD with all the bling engaged, it’s close enough most of the time to look good on a 4K screen. Meanwhile, engaging dynamic scaling on Xbox One X’s resolution mode isn’t quite as revolutionary; you get a lower bound of around 3584×2016 wherever it senses performance is about to take a hit, while the upper bound stays at 3840×2160. And as expected, all of these numbers super-sample down to a 1080p set if you haven’t made the upgrade to 4K yet – a feature Microsoft says is common to all X titles.

Even though it’s early code, the Xbox One X build we played runs almost flawlessly at a straight 30fps in regular missions, regardless of rendering mode. However, there is one exceptional area in the Gamescom build that causes issues. The Ghasghor siege is a large-scale battle that buckles Xbox One X’s performance to around the low-20s. It’s a mission with a surplus of enemies and effects, and the bottleneck applies whether you’re on quality or resolution mode.

As a stress-test it’s fascinating to see the standard Xbox One’s adaptive v-sync used here, causing occasional screen-tear. Also curious is that the quality mode appears to push Xbox One X harder, compared to the resolution mode – creating a divide of 2-3 frames per second in matching scenes. Interestingly, engaging dynamic resolution makes no difference here, and it may be the case that the sheer number of entities is causing the console to be CPU-bound. Higher detail means more draw calls, adding to the load, perhaps explaining the performance deficit here. Again, it’s worth stressing this is still early code from months past, and things could change come release. For the rest of the package, Shadow of War hands in a solid 30fps, and the siege areas will be worth revisiting in the final code, with Pro factored into the mix too.


As things stand, Shadow of War gives us a prime, early example of why Microsoft targeted Xbox One X’s specific specs. The sense is that a higher pixel count alone isn’t enough; more importantly, this console has the extra memory resources to give those pixels more to show off – better textures and improved LODs, for example. Additional visual options such as the ability to toggle dynamic resolution scaling are also welcome. Those who want their true 4K can have it, while those looking for more consistent performance are also catered for.

It’s the radically improved art that most obviously sets Xbox One X apart from PS4 Pro, but Monolith’s approach to 4K textures likely won’t apply to every game. At the start of the generation, 8GB of GDDR5 seemed almost overkill – a mammoth 16x increase over last-gen. That’s still a hefty chunk of memory to work with and as such, the difference in other titles may not be so pronounced. However, based on the evidence presented by Shadow of War and Rise of the Tomb Raider in particular, it’s possible that Xbox One X’s 12GB provision is a good case of forward-thinking on Microsoft’s part. We’ll report back on final code just as soon as we can.

Windows Server containers and Hyper-V containers explained

A big draw of Windows Server 2016 is the addition of containers that provide similar capabilities to those from leading open source providers. This Microsoft platform actually offers two different types of containers: Windows Server containers and Hyper-V containers. Before you decide which option best meets your needs, take a look at these five quick tips so you have a better understanding of container architecture, deployment and performance management.

Windows Server containers vs. Hyper-V containers

Although Windows Server containers and Hyper-V containers do the same thing and are managed the same way, the level of isolation they provide is different. Windows Server containers share the underlying OS kernel, which makes them smaller than VMs because they don’t each need a copy of the OS. Security can be a concern, however, because if one container is compromised, the OS and all of the other containers could be at risk.

Hyper-V containers and their dependencies reside in Hyper-V VMs and provide an additional layer of isolation. For reference, Hyper-V containers and Hyper-V VMs have different use cases. Containers are typically used for microservices and stateless applications because they are disposable by design and, as such, don’t store persistent data. Hyper-V VMs, typically equipped with virtual hard disks, are better suited to mission-critical applications.
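
If you script container deployments with the Docker tooling covered in the next section, the isolation choice surfaces as a single option. Here is a small sketch using the Docker SDK for Python; it assumes a Windows Docker host, and the image tag and command are examples only.

# Running the same image as a Windows Server container (process isolation) and
# as a Hyper-V container. Assumes a Windows Docker host; image and command are examples.
import docker

client = docker.from_env()
image = "mcr.microsoft.com/windows/servercore:ltsc2016"

# Windows Server container: shares the host OS kernel.
client.containers.run(image, "cmd /c echo hello", isolation="process", remove=True)

# Hyper-V container: the same image, wrapped in a lightweight utility VM.
client.containers.run(image, "cmd /c echo hello", isolation="hyperv", remove=True)

From the command line, the equivalent switch is docker run --isolation=process or --isolation=hyperv.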

The role of Docker on Windows Server

In order to package, deliver and manage Windows container images, you need to download and install Docker on Windows Server 2016. Docker Swarm, supported by Windows Server, provides orchestration features that help with cluster creation and workload scheduling. After you install Docker, you’ll need to configure it for Windows, a process that includes selecting secured connections and setting disk paths.

One key advantage of Docker on Windows is support for container image automation. You can use container images for continuous integration cycles because they’re stored as code and can be quickly recreated when need be. You can also download and install a module to extend PowerShell to manage Docker Engine; just make sure you have the latest versions of both Windows and PowerShell before you do so.
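
As a sketch of that automation loop, the snippet below rebuilds and publishes an image with the Docker SDK for Python; the build path, tag and registry are placeholders rather than anything from the article.

# Rebuilding and publishing a container image as part of a continuous
# integration cycle. The build path, tag and registry are placeholders.
import docker

client = docker.from_env()

# Rebuild the image from the Dockerfile kept in source control.
image, build_log = client.images.build(path="./app", tag="registry.example.com/app:latest")

# Push the fresh image so other hosts (or a Docker Swarm cluster) can pull it.
client.images.push("registry.example.com/app", tag="latest")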

Meet Hyper-V container requirements

If you prefer to use Hyper-V containers, make sure you have Server Core or Windows Server 2016 installed, along with the Hyper-V role. There is also a list of minimum resource requirements necessary to run Hyper-V containers. First, you need at least 4 GB of memory for the host VM. You also need a processor with Intel VT-x and at least two virtual processors for the host VM. Unfortunately, nested virtualization doesn’t yet support Advanced Micro Devices processors.

Although these requirements might not seem extensive, it’s important to carefully consider resource allocation and the workloads you intend to run on Hyper-V containers before deployment. When it comes to container images, you have two different options: a Windows Server Core image and a Nano Server image.

OS components affect both container types

Portability is a key advantage of containers. Because an application and all its dependencies are packaged within the container, it should be easy to deploy on other platforms. Unfortunately, there are different elements that can negatively affect this deployment flexibility. While containers share the underlying OS kernel, they do contain their own OS components, also known as the OS layer. If these components don’t match up with the OS kernel running on the host, the container will most likely be blocked.

The four-level version notation system Microsoft uses includes the major, minor, build and revision levels. Before Windows Server containers or Hyper-V containers will run on a Windows Server host, the major, minor and build levels must match, at minimum. The containers will still start if the revision level doesn’t match, but they might not work properly.
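
A version check along those lines is simple to express in code. The helper below is a hypothetical example, not a Microsoft tool, and the version strings are illustrative; it just applies the major/minor/build rule described above.

# Apply the major.minor.build.revision rule described above. The helper and the
# version strings are illustrative examples only.
def check_compatibility(host_version: str, image_version: str) -> str:
    host = host_version.split(".")
    image = image_version.split(".")
    if host[:3] != image[:3]:
        return "blocked: major, minor or build level does not match"
    if host[3] != image[3]:
        return "starts, but may not work properly: revision level differs"
    return "compatible"

print(check_compatibility("10.0.14393.2189", "10.0.14393.1066"))  # revision mismatch
print(check_compatibility("10.0.16299.64", "10.0.14393.1066"))    # build mismatch, blocked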

Antimalware tools and container performance

Because of shared components, like those of the OS layer, antimalware tools can affect container performance. The components or layers are shared through the use of placeholders; when those placeholders are read, the reads are redirected to the underlying component. If the container modifies a component, the container replaces the placeholder with the modified one.

Antimalware tools aren’t aware of the redirection and don’t know which components are placeholders and which components are modified, so the same components end up being scanned multiple times. Fortunately, there is a way to make antimalware tools aware of this activity. The filter can attach to the create CallbackData and check the extra create parameters (ECP) redirection flags; the ECP will then indicate either that the file was opened from a remote layer or that it was opened from a local layer.

Next Steps

Ensure container isolation and prevent root access

Combine microservices and containers

Maintain high availability with containers and data mirroring

Coding school bets on scholarships to put more women in tech jobs

To draw more women to its immersive software engineering and web development programs, the Flatiron School is granting them scholarships. The coding boot camp awards 25 women a month 50% off tuition for its online program and a $1,000 discount for every woman who attends in person at its New York campus.

The Women Take Tech scholarship program is designed to put more women in tech jobs, said Flatiron School COO Kristi Riordan. According to the school’s 2017 report on student employment, 97% of graduates get jobs; 35% of grads were women.

The program’s goal is three-pronged: to raise awareness among women about opportunities in technology, give them the confidence they’ll need to thrive in a male-dominated market and give them access to the training needed for high-paying tech careers. Tuition at the Flatiron School is $15,000 for the 15-week on-campus program and $1,500 a month for its online course. Considering many of the school’s female students are in their mid-30s and some have children, that’s not cheap.

“We believe it’s really crucial that women believe they can financially take the risk to pursue a program like this,” Riordan said.

Since launching the scholarship program in January, the school has seen the percentage of women in its online program jump from 30% to 50%.

The Flatiron School scholarships are part of a nationwide push to get more women in tech jobs. For example, Harvey Mudd College, in Claremont, Calif., retooled its curriculum to make it more accessible to students with limited computer experience. The percentage of women among its computer science majors went from 10% to 40% in five years, and today stands at 55%, the Los Angeles Times reported in January.

Carnegie Mellon University, in Pittsburgh, has also enacted reforms, which helped increase the percentage of female comp-sci majors to nearly 50% in 2016. And President Donald Trump, who has drawn more ire than praise for his efforts at inclusivity, this week mandated that $200 million go toward technology education grants for women and minorities.

Needed: More women in IT

But technology hasn’t proven to be a friendly place for women of late. The nation’s tech mecca, Silicon Valley, is still smarting from high-profile reports of sexual harassment and bias, and some male technologists are calling women-in-tech recruitment unjust to men, The New York Times reported Saturday. Their complaints are getting louder, too, as some feel emboldened by James Damore, the Google engineer who was fired after arguing in a company post that biology could be behind why there are fewer women than men in the technology field.

According to the U.S. Department of Labor, women make up 26% of people in computer and mathematics jobs.

Getting more women in tech jobs is a good and necessary thing, Riordan said. For one, more women in the labor pool means more talent to tap. If just a fraction of the women who make up about half of society are suited for technology jobs, “think about how much you are limiting the ability to hire talent,” she said.

There’s a long-term benefit, too. Technology is edging into practically every precinct of daily life and will play a colossal role in the future of work in general, Riordan said. So tech jobs can’t be meted out mainly to one gender or the other.

“We need to make sure that we have broad sections of society who are participating in the future of work,” she said. “If we don’t help women be a participant in those future opportunities, I think we’re going to have societal instability.”

Gender diversity is also good business. Investment bank Morgan Stanley reported in a May study that companies with high diversity — a workforce consisting of close to 50% women, 50% men — delivered an average 5.4% more in revenue returns than their peers with gender imbalances.

Enter PCs, exit girls

The reason for the low percentage of women in tech jobs was explored in a 2014 National Public Radio broadcast. Throughout the 1970s and early 1980s, the percentage of women in tech jobs was inching toward 40% — until 1984, when it started to fall. That’s the same time personal computers became commercially popular — but they were marketed for boys, not girls. Eventually, computers became a guy thing. In 2014, the last year accounted for by the National Science Foundation, the percentage of women in computer science was under 20%.

“Now we’ve started to see an awareness of this,” Riordan said. “We cannot talk about technology as a gender-driven thing. It needs to be something that is accessible, and there’s an aptitude for it across those genders.”

Geared toward males for more than 30 years, technology can seem like an undiscovered country to many women, Riordan said, even those naturally inclined to it. She relayed the story of a former student who, as a freshman in college, decided to major in computer science.

“She looks around the classroom. She’s the only woman,” Riordan said. “She’s talked to in a way that made her feel like she didn’t belong.”

Eventually, she dropped out of the computer science program. After graduating from college, she became a librarian tasked with the tedious work of cataloging book metadata. Thinking there had to be a better way, the woman drew on her computer science background.

“She found a way to write a script and more efficiently catalog all of the book data,” Riordan said. The woman applied to the Flatiron School, graduated and went back to library science, this time as a software engineer at the New York Public Library.

Students take classes in computer programming at the Flatiron School, a coding boot camp in New York.

A comfortable space

Riordan said the school tries to make women feel like they’re where they should be. For instance, female students get training on how to dispel impostor syndrome — the strong feeling that they don’t deserve the job, education or opportunity they have.

“It’s crucial to help women to understand that if they have self-doubts that [technology] isn’t a field they can do, that we help them understand why they can,” Riordan said.

The school also invites female technologists in New York to speak to students about their work and careers. Flatiron School alumni and members of its engineering or teaching staff are also tapped to talk about their experiences.

The response from female students, Riordan said, has been overwhelmingly positive: “‘Yes, there are women out there,’ and ‘Yes, they can share what their experiences are like,’ and ‘Yes, it’s not necessarily as scary as any of the things that are being reported on in the press,'” she said. “There are many organizations, especially here in New York City, where women are thriving in tech and thriving in leadership roles.”