Tag Archives: face

Retail facial recognition and eye tracking the next tech wave, maybe

Your face tells your story and confirms your identity when you shop. Digitizing all that takes next-generation eye-tracking and facial recognition technology, which retailers and restaurateurs have just begun weaving into the IT mix to improve customer experience.

Stores and restaurants are testing retail facial recognition technology to help speed up checkout and ordering, users and vendors said at the recent NRF 2020 Vision: Retail’s Big Show. Eye-tracking software (see sidebar) helps retailers improve user interfaces on e-commerce sites as well as in physical stores, influencing the planograms that map product placement on store shelves.

Customers in several small restaurant chains in California and Illinois can order meals on kiosks from vendor PopID, a company that integrates NEC facial recognition and Brierly Group digital loyalty programs into the kiosks. The goal for many restaurants with self-service kiosks is to eliminate humans taking orders and running payments, said Yale Goldberg, vice president of strategy and business development at PopID, and facial recognition can speed up the process.

These are early days for retail facial recognition to connect loyalty program names and credit cards to customers, and so far, the results have been mixed. At restaurants with PopID kiosks, humans at the cash register can punch in orders and take payments in 30 seconds, while on their own, customers take an average of two and a half minutes, even with instantaneous facial ID.

Furthermore, data privacy concerns, which Goldberg refers to as the “creep factor,” can make some consumers reluctant to use the system. So far, about 20% of the restaurants’ customers opt in to PopID facial recognition for ordering and checkout.

That said, older customers — who typically have more reservations about giving up personal data than Millennial-generation and younger customers — are buying into the company’s retail facial recognition systems at about the same rate, Goldberg said.

“People are returning to these restaurants frequently, and they understand they can have a much more frictionless experience when they opt in,” Goldberg said. “Once it’s explained, people start to trust the brand and can see the benefits.”

NCR sees biometrics on rise

Customer privacy concerns about opting into retail facial recognition are a barrier to widespread acceptance, said David Wilkinson, senior vice president and general manager of global retail at NCR Corp., which provides cloud application and infrastructure support for retailers. NCR is partnering with biometric ID vendors to offer convenience stores and grocery stores checkout kiosks that use face ID, but Wilkinson characterized adoption among the company’s retail customers as low, or still in the testing phase, for now.

NCR remains agnostic on new tech such as biometric IDs, Wilkinson said, and supports as many of them as possible to meet customer demand if and when it comes. NCR also offers computer vision tools for automated age verification for the purchase of age-restricted items, which the company said can be more accurate than humans.

Biometrics in general have much promise, Wilkinson said, for matching customers to loyalty memberships and enabling quicker checkouts. Facial recognition in particular, however, may have a difficult path to acceptance in retail among consumers. Alternatives such as palm recognition for payment may eventually prove more accurate and less intrusive for payments, he said.

“I think there will be some kind of AI-driven, biometric way that we can identify ourselves at retail,” Wilkinson said. “At NCR, we can’t bet our business on a winner or a loser; that’s not the way we’re built.”

Integration woes slow progress

Integration of facial technologies hasn’t always been smooth. Customers have to relearn familiar processes like checkout. On the back end, new biometric data feeds must work their way into long-standing payment systems, or in the case of eye-tracking data, into planogram applications.

Many retailers are a few years off from getting their systems and data management working in harmony, said Jon Hughes, executive vice president at retail tech consultant REPL Group. A third have it “well sorted out,” a third are just getting started and a third are in what he called “the dark space in the middle.”

“The data’s the big problem,” Hughes said. He added that, in his view, facial recognition comes closer to true AI than many other technologies vendors call AI, which he views as basic automation without intelligence.

“I think there’s a massive opportunity, but there’s a leap of faith needed,” Hughes said. “Taking that leap of faith is really hard for some organizations, but the technology’s there.”


Major storage vendors map out 2020 plans

The largest enterprise storage vendors face a common set of challenges and opportunities heading into 2020. As global IT spending slows and storage gets faster and frequently handles data outside the core data center, primary storage vendors must turn to cloud, data management and newer flash technologies.

Each of the major storage vendors has its own plans for dealing with these developments. Here is a look at what the major primary storage vendors did in 2019 and what you can expect from them in 2020.

Dell EMC: Removing shadows from the clouds

2019 in review: Enterprise storage market leader Dell EMC spent most of 2019 bolstering its cloud capabilities, in many cases trying to play catch-up. New cloud products include VMware-orchestrated Dell EMC Cloud Platform arrays that integrate Unity and PowerMax storage, coupled with VxBlock converged and VxRail hyper-converged infrastructure.

The new Dell EMC Cloud gear allows customers to build and deploy on-premises private clouds with the agility and scale of the public cloud — a growing need as organizations dive deeper into AI and DevOps.

What’s on tap for 2020: Dell EMC officials have hinted at a new Power-branded midrange storage system for several years, and a formal unveiling of that product is expected in 2020. Then again, Dell initially said the next-generation system would arrive in 2019. Customers with existing Dell EMC midrange storage likely won’t be forced to upgrade, at least not for a while. The new storage platform will likely converge features from Dell EMC Unity and SC Series midrange arrays with an emphasis on containers and microservices.

Dell will enhance its tool set for containers to help companies deploy microservices, said Sudhir Srinivasan, the CTO of Dell EMC storage. He said containers are a prominent design feature in the new midrange storage.

“Software stacks that were built decades ago are giant monolithic pieces of code, and they’re not going to survive that next decade, which we call the data decade,” Srinivasan said. 

Hewlett Packard Enterprise’s eventful year

2019 in review: In terms of product launches and partnerships, Hewlett Packard Enterprise (HPE) had a busy year in 2019. HPE Primera all-flash storage arrived in late 2019,  and HPE expects customers will slowly transition from its flagship 3PAR platform. Primera supports NVMe flash, embedding custom chips in the chassis to support massively parallel data transport on PCI Express lanes. The first Primera customer, BlueShore Financial, received its new array in October.

HPE bought supercomputing giant Cray to expand its presence in high-performance computing, and made several moves to broaden its hyper-converged infrastructure options. HPE ported InfoSight analytics to HPE SimpliVity HCI, as part of the move to bring the cloud-based predictive tools picked up from Nimble Storage across all HPE hardware. HPE launched a Nimble dHCI disaggregated HCI product and partnered with Nutanix to add Nutanix HCI technology to HPE GreenLake services while allowing Nutanix to sell its software stack on HPE servers.

It capped off the year with HPE Container Platform, a system designed to make it easier to spin up Kubernetes-orchestrated containers on bare metal. The Container Platform uses technology from recent HPE acquisitions MapR and BlueData.

What’s on tap for 2020: HPE vice president of storage Sandeep Singh said more analytics are coming in response to customer calls for simpler storage. “An AI-driven experience to predict and prevent issues is a big game-changer for optimizing their infrastructure. Customers are placing a much higher priority on it in the buying motion,” helping to influence HPE’s roadmap, Singh said.

It will be worth tracking the progress of GreenLake as HPE moves towards its goal of making all of its technology available as a service by 2022.

Hitachi Vantara: Renewed focus on traditional enterprise storage

2019 in review: Hitachi Vantara renewed its focus on traditional data center storage, a segment it had largely conceded to other array vendors in recent years. Hitachi underwent a major refresh of the Hitachi Virtual Storage Platform (VSP) flash array in 2019. The VSP 5000 SAN arrays scale to 69 PB of raw storage, and capacity extends higher with hardware-based deduplication in its Flash Storage Modules. By virtualizing third-party storage behind a VSP 5000, customers can scale capacity to 278 PB.

What’s on tap for 2020: The VSP 5000 integrates Hitachi Accelerated Fabric networking technology that enables storage to scale out and scale up. Hitachi this year plans to phase the networking into other high-performance storage products, said Colin Gallagher, a Hitachi vice president of infrastructure products.

“We had been lagging in innovation, but with the VSP5000, we got our mojo back,” Gallagher said.

Hitachi arrays support containers, and Gallagher said the vendor is considering whether it needs to evolve its support beyond a Kubernetes plugin, as other vendors have done. Hitachi plans to expand data management features in Hitachi Pentaho analytics software to address AI and DevOps deployments. Gallagher said Hitachi’s data protection and storage as a service is another area of focus for the vendor in 2020.

IBM: Hybrid cloud with cyber-resilient storage

2019 in review: IBM brought out the IBM Elastic Storage Server 3000, an NVMe-based array packaged with IBM Spectrum Scale parallel file storage. Elastic Storage Server 3000 combines NVMe flash and containerized software modules to provide faster time to deployment for AI, said Eric Herzog, IBM’s vice president of worldwide storage channels.

In addition, IBM added PCIe-enabled NVMe flash to Versastack converged infrastructure and midrange Storwize SAN arrays.

What to expect in 2020: Like other storage vendors, IBM is trying to navigate the unpredictable waters of cloud and services. Its product development revolves around storage that can run in any cloud. IBM Cloud Services enables end users to lease infrastructure, platforms and storage hardware as a service. The program has been around for two years, and will add IBM software-defined storage to the mix this year. Customers thus can opt to purchase hardware capacity or the IBM Spectrum suite in an OpEx model. Non-IBM customers can run Spectrum storage software on qualified third-party storage.

“We are going to start by making Spectrum Protect data protection available, and we expect to add other pieces of the Spectrum software family throughout 2020 and into 2021,” Herzog said.

Another IBM development to watch in 2020 is how its $34 billion acquisition of Red Hat affects either vendor’s storage products and services.

NetApp: Looking for a rebound

2019 in review: Although spending slowed for most storage vendors in 2019, NetApp saw the biggest decline. At the start of 2019, NetApp forecast annual sales at $6 billion, but poor sales forced NetApp to slash its guidance by around 10% by the end of the year.

NetApp CEO George Kurian blamed the revenue setbacks partly on poor sales execution, a failing he hopes will improve as NetApp institutes better training and sales incentives. The vendor also said goodbye to several top executives who retired, raising questions about how it will deliver on its roadmap going forward.

What to expect in 2020: In the face of the turbulence, Kurian kept NetApp focused on the cloud. NetApp plowed ahead with its Data Fabric strategy to enable OnTap file services to be consumed, via containers, in the three big public clouds.  NetApp Cloud Data Service, available first on NetApp HCI, allows customers to consume OnTap storage locally or in the cloud, and the vendor capped off the year with NetApp Keystone, a pay-as-you-go purchasing option similar to the offerings of other storage vendors.

Although NetApp plans hardware investments, storage software will account for more revenue as companies shift data to the cloud, said Octavian Tanase, senior vice president of the NetApp OnTap software and systems group.

“More data is being created outside the traditional data center, and Kubernetes has changed the way those applications are orchestrated. Customers want to be able to rapidly build a data pipeline, with data governance and mobility, and we want to try and monetize that,” Tanase said.

Pure Storage: Flash for backup, running natively in the cloud

2019 in review: The all-flash array specialist broadened its lineup with FlashArray//C SAN arrays and denser FlashBlade NAS models. FlashArray//C extends the Pure Storage flagship with a model that supports Intel Optane DC SSD-based MemoryFlash modules and quad-level cell NAND SSDs in the same system.

Pure also took a major step on its journey to convert FlashArray into a unified storage system by acquiring Swedish file storage software company Compuverde. It marked the second acquisition in as many years for Pure, which acquired deduplication software startup StorReduce in 2018.

What to expect in 2020: The gap between disk and flash prices has narrowed enough that it’s time for customers to consider flash for backup and secondary workloads, said Matt Kixmoeller, Pure Storage vice president of strategy.

“One of the biggest challenges — and biggest opportunities — is evangelizing to customers that, ‘Hey, it’s time to look at flash for tier two applications,'” Kixmoeller said.

Flexible cloud storage options and more storage in software are other items on Pure’s roadmap. Cloud Block Store, which Pure introduced last year, is just getting started, Kixmoeller said, and is expected to generate lots of attention from customers. Most vendors support Amazon Elastic Block Store by sticking their arrays in a colocation center and running their operating software on EBS, but Pure took a different approach. Pure reengineered the back-end software layer to run natively on Amazon S3.

Government IT pros: Hiring data scientists isn’t an exact science

WASHINGTON, D.C. — Government agencies face the same problems as enterprises when it comes to turning their vast data stores into useful information. In the case of government, that information is used to provide services such as healthcare, scientific research, legal protections and even to fight wars.

Public sector IT pros at the Veritas Public Sector Vision Day this week talked about their challenges in making data useful and keeping it secure. A major part of their work currently involves finding the right people to fill data analytics roles, including hiring data scientists. They described data science as a combination of roles requiring technical as well as subject matter expertise, which often takes a diverse team to get right.

Tiffany Julian, data scientist at the National Science Foundation, said she recently sat in on a focus group involved with the Office of Personnel Management’s initiative to define data scientist.

“One of the big messages from that was, there’s no such thing as a unicorn. You don’t hire a data scientist. You create a team of people who do data science together,” Julian said.

Julian said data science includes more than programmers and technical experts. Subject experts who know their company or agency mission also play a role.

“You want your software engineers, you want your programmers, you want your database engineers,” she said. “But you also want your common sense social scientists involved. You can’t just prioritize one of those fields. Let’s say you’re really good at Python, you’re really good at R. You’re still going to have to come up with data and processes, test it out, draw a conclusion. No one person you hire is going to have all of those skills that you really need to make data-driven decisions.”

Wanted: People who know they don’t know it all

Because she is a data scientist, Julian said others in her agency ask what skills they should seek when hiring data scientists.

“I’m looking for that wisdom that comes from knowing that I don’t know everything,” she said. “You’re not a data scientist, you’re a programmer, you’re an analyst, you’re one of these roles.”

Tom Beach, chief data strategist and portfolio manager for the U.S. Patent and Trademark Office (USPTO), said he takes a similar approach when looking for data scientists.

“These are folks that know enough to know that they don’t know everything, but are very creative,” he said.

Beach added that when hiring data scientists, he looks for people “who have the desire to solve a really challenging problem. There is a big disconnect between an abstract problem and a piece of code. In our organization, a regulatory agency dealing with patents and trademarks, there’s a lot of legalese and legal frameworks. Those don’t code well. Court decisions are not readily codable into a framework.”

‘Cloud not enough’

Like enterprises, government agencies also need to get the right tools to help facilitate data science. Peter Ranks, deputy CIO for information enterprise at the Department of Defense, said data is key to his department, even if DoD IT people often talk more about technologies such as cloud, AI, cybersecurity and the three Cs (command, control and communications) when they discuss digital modernization.

“What’s not on the list is anything about data,” he said. “And that’s unfortunate because data is really woven into every one of those. None of those activities are going to succeed without a focused effort to get more utility out of the data that we’ve got.”

Ranks said future battles will depend on the ability of forces on land, air, sea, space and cyber to interoperate in a coordinated fashion.

“That’s a data problem,” he said. “We need to be able to communicate and share intelligence with our partners. We need to be able to share situational awareness data with coalitions that may be created on demand and respond to a particular crisis.”

Ranks cautioned against putting too much emphasis on leaning on the cloud for data science. He described cloud as the foundation on the bottom of a pyramid, with software in the middle and data on top.

“Cloud is not enough,” he said. “Cloud is not a strategy. Cloud is not a destination. Cloud is not an objective. Cloud is a tool, and it’s one tool among many to achieve the outcomes that your agency is trying to get after. We find that if all we do is adopt cloud, if we don’t modernize software, all we get is the same old software in somebody else’s data center. If we modernize software processes but don’t tackle the data … we find that bad data becomes a huge boat anchor or that all those modernized software applications have to drive around. It’s hard to do good analytics with bad data. It’s hard to do good AI.”

Beach agreed. He said cloud is “100%” part of USPTO’s data strategy, but so is recognition of people’s roles and responsibilities.

“We’re looking at not just governance behavior as a compliance exercise, but talking about people, process and technology,” he said. “We’re not just going to tech our way out of a situation. Cloud is just a foundational step. It’s also important to understand the recognition of roles and responsibilities around data stewards, data custodians.”

This includes helping ensure that people can find the data they need, as well as denying access to people who do not need that data.

Nick Marinos, director of cybersecurity and data protection at the Government Accountability Office, said understanding your data is a key step in ensuring data protection and security.

“Thinking upfront about what data do we actually have, and what do we use the data for are really the most important questions to ask from a security or privacy perspective,” he said. “Ultimately, having an awareness of the full inventory within the federal agencies is really the only way that you can even start to approach protecting the enterprise as a whole.”

Marinos said data protection audits at government agencies often start with looking at the agency’s mission and its flow of data.

“Only from there can we as auditors — and the agency itself — have a strong awareness of how many touch points there are on these data pieces,” he said. “From a best practice perspective, that’s one of the first steps.”

VMware’s Bitnami acquisition grows its development portfolio

The rise of containers and the cloud has changed the face of the IT market, and VMware must evolve with it. The vendor has moved out of its traditional data center niche and — with its purchase of software packager Bitnami — has made a push into the development community, a change that presents new challenges and potential. 

Historically, VMware delivered a suite of system infrastructure management tools. With the advent of cloud and digital disruption, IT departments’ focus expanded from monitoring systems to developing applications. VMware has extended its management suite to accommodate this shift, and its acquisition of Bitnami adds new tools that ease application development.

Building applications presents difficulties for many organizations. Developers spend much of their time on application plumbing, writing software that performs mundane tasks — such as storage allocation — and linking one API to another.

Bitnami sought to simplify that work. The company created prepackaged components called installers that automate the development process. Rather than write the code themselves, developers can now download Bitnami system images and plug them into their programs. As VMware delves further into hybrid cloud market territory, Bitnami brings simplified app development to the table.

“Bitnami’s solutions were ahead of their time,” said Torsten Volk, managing research director at Enterprise Management Associates (EMA), an IT analyst firm based in Portsmouth, New Hampshire. “They enable developers to bulletproof application development infrastructure in a self-service manner.”

The value Bitnami adds to VMware

Released under the Apache License, Bitnami’s modules contain commonly coupled software applications instead of just bare-bones images. For example, a Bitnami WordPress stack might contain WordPress, a database management system (e.g., MySQL) and a web server (e.g., Apache).

Bitnami takes care of several mundane programming chores. It keeps all components up to date — so if it finds a security problem, it patches that problem — and updates those components’ associated libraries. Bitnami makes its modules available through its Application Catalogue, which functions like an app store.

The company designed its products to run on a wide variety of systems. Bitnami supports Apple OS X, Microsoft Windows and Linux OSes. Its VM features work with VMware ESX and ESXi, VirtualBox and QEMU. Bitnami stacks also are compatible with software infrastructures such as WAMP, MAMP, LAMP, Node.js, Tomcat and Ruby. It supports cloud tools from AWS, Azure, Google Cloud Platform and Oracle Cloud. The installers, too, cover a wide variety of applications, including Abante Cart, Magento, MediaWiki, PrestaShop, Redmine and WordPress.

Bitnami seeks to help companies build applications once and run them on many different configurations.

“For enterprise IT, we intend to solve for challenges related to taking a core set of application packages and making them available consistently across teams and clouds,” said Milin Desai, general manager of cloud services at VMware.

Development teams share project work among individuals, work with code from private or public repositories and deploy applications on private, hybrid and public clouds. As such, Bitnami’s flexibility made it appealing to developers — and VMware.

How Bitnami and VMware fit together

VMware wants to extend its reach from legacy, back-end data centers and appeal to more front-end and cloud developers.

“In the last few years, VMware has gone all in on trying to build out a portfolio of management solutions for application developers,” Volk said. VMware embraced Kubernetes and has acquired container startups such as Heptio to prove it.

Bitnami adds another piece to this puzzle, one that provides a curated marketplace for VMware customers who hope to emphasize rapid application development.

“Bitnami’s application packaging capabilities will help our customers to simplify the consumption of applications in hybrid cloud environments, from on-premises to VMware Cloud on AWS to VMware Cloud Provider Program partner clouds, once the deal closes,” Desai said.

Facing new challenges in a new market

However, the purchase moves VMware out of its traditional virtualized enterprise data center sweet spot. VMware has little name recognition among developers, so the company must build its brand.

“Buying companies like Bitnami and Heptio is an attempt by VMware to gain instant credibility among developers,” Volk said. “They did not pay a premium for the products, which were not generating a lot of revenue. Instead, they wanted the executives, who are all rock stars in the development community.”  

Supporting a new breed of customer poses its challenges. Although VMware’s Bitnami acquisition adds to its application development suite — an area of increasing importance — it also places new hurdles in front of the vendor. Merging the culture of a startup with that of an established supplier isn’t always a smooth process. In addition, VMware has bought several startups recently, so consolidating its variety of entities in a cohesive manner presents a major undertaking.

Page Locking comes to OneNote Class Notebooks

Educators face an array of challenges, not least of which is ongoing classroom management. As more and more teachers use Class Notebooks, standalone or integrated with Microsoft Teams, the most common request we’ve heard from teachers is the ability to “lock” a page. This capability allows educators to stay in control and make the OneNote page read-only for students while still allowing the teacher to add feedback or marks. Today, we are excited to deliver on this request and begin rolling out page locking broadly to help teachers manage their classrooms and save time.

Page Locking—To further simplify classroom workflows, we are delivering on the number-one request from teachers for OneNote Class Notebooks—the ability to lock pages. With our new page locking, the following capabilities are enabled:

  • Teachers can now lock all the student pages of a distributed page as read-only after giving feedback to the student.
  • Teachers can unlock or lock individual pages by simply right-clicking on the page of an individual student.
  • Teachers using Microsoft Teams to create OneNote assignments can have the page of the OneNote assignment automatically lock as read-only when the due date/time passes.

During our early testing process, we’ve had teachers trying out the page locking in their classrooms. Robin Licato, an AP Chemistry and Forensic Science teacher from St. Agnes Academy in Houston, Texas, had this to say: “This feature is an absolute game changer. I am enjoying the ability to unlock a specific student who has an extension on an assignment due to illness or absence while keeping the page locked for students who did not complete the assignment on time!”

Scott Titmas, technology integration specialist at Old Bridge Township Public Schools in New Jersey, was also an early beta tester of the new page locking feature. “The page locking feature is extremely intuitive, easy to use, and opens a whole new world of possibilities for teachers. It will be a welcomed feature addition for all teachers. More encouraging than just this feature is the fact that Microsoft has consistently shown they listen to their users, and user voice drives the direction of product development.”

Platforms supported

Initially, we are rolling this out for OneNote for Windows 10, the OneNote 2016 desktop add-in, OneNote Online and OneNote for iPad. Most platforms provide page locking built into the toolbar. For OneNote desktop, download the new free add-in.

For additional details on which version of OneNote is required for both teacher and students, please visit this new OneNote Class Notebook page locking support article.  It is important to read this article to understand the details before rolling this out.

Important Note #1: for OneNote 2016 Desktop MSI customers, you must deploy this Public Update first before student and teacher pages will properly lock. Please work with your IT admin to ensure you properly deploy this patch first. Page Locking is not supported for OneNote 2013 Desktop clients.

Important note #2: Page Locking works best when a page is distributed or made into an assignment. For example, if students copy the pages manually from the Content Library into their own notebooks and change the page title, the teacher will have to manually right click on the student page to lock it, instead of being able to use the single checkbox to lock all pages.

Page Locking in OneNote for Windows 10

Page Locking in OneNote 2016 Desktop

Teacher right-click to unlock a page

Class Notebook Addin version 2.5.0.0

  • Page Locking support to allow teachers to make a page or set of student pages read-only
  • Bug fixes and performance improvements

We hope you enjoy these new updates! Share any feedback at @OneNoteEDU, and if you need support or help, you can file a ticket here: http://aka.ms/edusupport.

Kubernetes networking expands its horizons with service mesh

Enterprise IT operations pros who support microservices face a thorny challenge with Kubernetes networking, but service mesh architectures could help address their concerns.

Kubernetes networking under traditional methods faces performance bottlenecks. Centralized network resources must handle an order of magnitude more connections once the user migrates from VMs to containers. As containers appear and disappear much more frequently, managing those connections at scale quickly can create confusion on the network, and stale information inside network management resources can even misdirect traffic.

IT pros at KubeCon this month got a glimpse at how early adopters of microservices have approached Kubernetes networking issues with service mesh architectures. These network setups are built around sidecar containers, which act as a proxy for application containers on internal networks. Such proxies offload networking functions from application containers and offer a reliable way to track and apply network security policies to ephemeral resources from a centralized management interface.

Proxies in a service mesh handle one-time connections between microservices better than traditional networking models can. Service mesh proxies also tap telemetry information that IT admins can’t get from other Kubernetes networking approaches, such as transmission success rates, latencies and traffic volume on a container-by-container basis.
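
To make the sidecar idea concrete, here is a minimal sketch in Python rather than Envoy or Linkerd itself: a local proxy that sits beside an application container, forwards its HTTP traffic and records the kind of per-service telemetry a mesh exposes. The upstream address and port numbers are placeholders.

```python
# Minimal, illustrative sidecar-style proxy (not Envoy or Linkerd code).
# It listens next to an application container, forwards HTTP requests to it,
# and records the telemetry a service mesh typically exposes: request count,
# success count and per-request latency.
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:8080"   # assumed address of the app container
telemetry = {"requests": 0, "successes": 0, "latencies_ms": []}

class SidecarProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        start = time.monotonic()
        telemetry["requests"] += 1
        try:
            with urllib.request.urlopen(UPSTREAM + self.path, timeout=5) as resp:
                body, status = resp.read(), resp.status
                telemetry["successes"] += 1
        except urllib.error.URLError:
            body, status = b"upstream unavailable", 502
        telemetry["latencies_ms"].append((time.monotonic() - start) * 1000)
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

if __name__ == "__main__":
    # Other services reach the app only through this local proxy, so the
    # application code never has to know the network topology.
    HTTPServer(("127.0.0.1", 15001), SidecarProxy).serve_forever()
```

In a real mesh, a control plane such as Istio pushes routing and security policy to every proxy and collects this kind of telemetry centrally; the application containers themselves are untouched.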

“The network should be transparent to the application,” said Matt Klein, a software engineer at San Francisco-based Lyft, which developed the Envoy proxy system to address networking obstacles as the ride-sharing company moved to a microservices architecture over the last five years.

“People didn’t trust those services, and there weren’t tools that would allow people to write their business logic and not focus on all the faults that were happening in the network,” Klein said.

With a sidecar proxy in Envoy, each of Lyft’s services only had to understand its local portion of the network, and the application language no longer factored into its function. At the time, only the most demanding web applications required proxy technology such as Envoy. But now, the complexity of microservices networking makes service mesh relevant to more mainstream IT shops.

The National Center for Biotechnology Information (NCBI) in Bethesda, Md., has laid the groundwork for microservices with a service mesh built around Linkerd, which was developed by Buoyant. The bioinformatics institute used Linkerd to modernize legacy applications, some of them 30 years old, said Borys Pierov, a software developer at NCBI.

Any app that uses the HTTP protocol can point to the Linkerd proxy, which gives NCBI engineers improved visibility and control over advanced routing rules in the legacy infrastructure, Pierov said. While NCBI doesn’t use Kubernetes yet — it uses HashiCorp Consul and CoreOS rkt container runtime instead of Kubernetes and Docker — service mesh will be key to container networking on any platform.

“Linkerd gave us a look behind the scenes of our apps and an idea of how to split them into microservices,” Pierov said. “Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.”

Matt Klein speaks at KubeCon
Matt Klein, software engineer at Lyft, presents the company’s experiences with service mesh architectures at KubeCon.

Kubernetes networking will cozy up with service mesh next year

Linkerd is one of the most well-known and widely used tools among the multiple open source service mesh projects in various stages of development. However, Envoy has gained prominence because it underpins a fresh approach to the centralized management layer, called Istio. This month, Buoyant also introduced a better-performing, more efficient successor to Linkerd, called Conduit.

It’s still too early for any of these projects to be declared the winner. The Cloud Native Computing Foundation (CNCF) invited Istio’s developers, which include Google, IBM and Lyft, to make Istio a CNCF project, CNCF COO Chris Aniszczyk said at KubeCon. But Buoyant also will formally present Conduit to the CNCF next year, and multiple projects could coexist within the foundation, Aniszczyk said.

Kubernetes networking challenges led Gannett’s USA Today Network to create its own “terrible, over-orchestrated” service mesh-like system, in the words of Ronald Lipke, senior engineer on the USA Today platform-as-a-service team, who presented on the organization’s Kubernetes experience at KubeCon. HAProxy and the Calico network management system have supported Kubernetes networking in production so far, but there have been problems under this system with terminating nodes cleanly and removing them from Calico quickly so traffic isn’t misdirected.

Lipke likes the service mesh approach, but it’s not yet a top priority for his team at this early stage of Kubernetes deployment. “No one’s really asking for it yet, so it’s taken a back seat,” he said.

This will change in the new year. The company plans to rethink the HAProxy approach to reduce its cloud resource costs and improve network tracing for monitoring purposes. The company has done proof-of-concept evaluations around Linkerd and plans to look at Conduit, he said in an interview after his KubeCon session.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

To buy or build IT infrastructure focus of ONUG 2017 panel

NEW YORK — Among the most vexing questions enterprises face is whether it makes more sense to buy or build IT infrastructure. The not-quite-absolute answer, according to the Great Discussion panel at last week’s ONUG conference: It depends.

“It’s hard, because as engineers, we follow shiny objects,” said Tsvi Gal, CTO at New York-based Morgan Stanley, adding there are times when the financial services firm will build what it needs, rather than being lured by what vendors may be selling.

“If there are certain areas of the industry that have no good solution in the market, and we believe that building something will give us significant value or edge over the competition, then we will build,” he said.

This decision holds even if buying the product is cheaper than building IT infrastructure, he said — especially if the purchased products don’t always have the features and functions Morgan Stanley needs.

“I don’t mind spending way more money on the development side, if the return for it will be significantly higher than buying would,” he said. “We’re geeks; we love to build. But at the end of the day, we do it only for the areas where we can make a difference.”

ONUG panelists discuss buy vs. build IT infrastructure
Panelists at the Great Discussion during the ONUG 2017 fall conference

A company’s decision to buy or build IT infrastructure heavily depends on its size, talent and culture.

For example, Suneet Nandwani, senior director of cloud infrastructure and platform services at eBay, based in San Jose, Calif., said eBay’s culture as a technology company creates a definite bias toward building and developing its own IT infrastructure. As with Morgan Stanley, however, Nandwani said eBay stays close to the areas it knows.

“We often stick within our core competencies, especially since eBay competes with companies like Facebook and Netflix,” he said.

On the other side of the coin, Swamy Kocherlakota, S&P Global’s head of global infrastructure and enterprise transformation, takes a mostly buy approach, especially for supporting functions. It’s a choice based on S&P Global’s position as a financial company, where technology development remains outside the scope of its main business.

This often means working with vendors after purchase.

“In the process, we’ve discovered not everything you buy works out of the box, even though we would like it to,” Kocherlakota said.

Although he said it’s tempting to let S&P Global engineers develop the desired features, the firm prefers to go back to the vendor to build the features. This choice, he said, traces back to the company’s culture.

“You have to be part of a company and culture that actually fosters that support and can maintain [the code] in the long term,” he said.  

The questions of talent, liability and supportability

Panelists agreed building the right team of engineers was an essential factor to succeed in building IT infrastructure.

“If your company doesn’t have enough development capacity to [build] it yourself, even when you can make a difference, then don’t,” Morgan Stanley’s Gal said. “It’s just realistic.”

But for companies with the capacity to build, putting together a capable team is necessary.

“As we build, we want to have the right talent and the right teams,” eBay’s Nandwani said. “That’s so key to having a successful strategy.”

To attract the needed engineering talent, he said companies should foster a culture of innovation, acknowledging that mistakes will happen.

For Gal, having the right team means managers should do more than just manage.

“Most of our managers are player-coach, not just coach,” Gal said. “They need to be technical; they need to understand what they’re doing, and not just [be] generic managers.”

But it’s not enough to just possess the talent to build IT infrastructure; companies must be able to maintain both the talent and the developed code.

“One of the mistakes people make when building software is they don’t staff or resource it adequately for operational support afterward,” Nandwani said. “You have to have operational process and the personnel who are responding when those things screw up.”

S&P Global’s Kocherlakota agreed, citing the fallout that can occur when an employee responsible for developing important code leaves the company. Without proper documentation, the required information to maintain the code would be difficult to follow.

This means having the right support from the beginning, with well-defined processes encompassing the software development lifecycle, quality assurance and control, and code reviews.

“I would just add that when you build, it doesn’t free you from the need to document what you’re doing,” Gal said.

Configuration Manager tool regulates server updates to stop attacks

Business workers face a persistent wave of online threats — from malicious hacking techniques to ransomware — and it’s up to the administrator to lock down Microsoft systems and protect the company.

Administrators who apply Microsoft’s security updates in a timely fashion thwart many attacks effectively. IT departments use both System Center Configuration Manager and Windows Server Update Services to roll out patches, but the Configuration Manager tool’s scheduling and deployment options make it the preferred utility for this task. Admins gain control and automation over software updates to all managed systems with the Configuration Manager tool, which also handles compliance monitoring and reporting.

Why we wait to update

An organization bases its security update deployment timeline on several factors, including internal policies, strategies, staff and skill sets. Some businesses roll patches out to production servers as soon as Microsoft makes them available on Patch Tuesday, the second Tuesday each month. Other companies wait a week or even a couple months to do the same, due to stringent testing procedures.

Here’s one example of a deployment timeline:

  • Week 1: Handful of test systems (pilot)
  • Week 2: Larger pool of test systems
  • Week 3: Small pool of production servers
  • Week 4: Larger pool of production servers
  • Week 5: All systems

This scenario leaves many endpoints unpatched and vulnerable to security risks for several weeks. Microsoft has a cumulative update model for all supported Windows OSes; the company packages each month’s patches and supersedes the previous month’s release. In some cases, systems won’t be fully patched — or will remain unpatched — if a business fails to deploy the previous month’s security fixes before Microsoft releases the new updates. To avoid this situation, IT organizations should roll out the current month’s updates before the next Patch Tuesday arrives just a few weeks later.

Automatic deployment rule organizes the patch process

An automatic deployment rule (ADR) in the Configuration Manager tool coordinates the patch rollout process. An ADR provides settings to download updates, package them into software update groups, create deployments of the updates for a collection of devices and roll out the updates when it’s most appropriate.

Find the ADR feature in the Configuration Manager tool under the Software Updates menu within the Software Library module. Figure 1 shows its options.

Create a software update group
Figure 1. The automatic deployment rule feature in the Configuration Manager tool builds a deployment package to automate the update procedure.

Settings to configure specific update criteria

The admin sets the ADR options to download and package software updates with the following criteria, which are also shown in Figure 2:

  • released or revised within the last month;
  • only updates that are required by systems evaluated at the last scan;
  • updates that are not superseded; and
  • updates classified as Critical Updates, Security Updates, Feature Packs, Service Packs, Update Rollups or Updates.
Build an automatic deployment rule
Figure 2. The administrator builds the criteria for a software update group in the ADR component.

The property filter — also seen in Figure 2 — packages software updates on a granular scale to best suit the organization’s needs. In the example shown, the admin uses the property filter to only deploy updates released in the last month.
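
Conceptually, those criteria amount to four predicates applied to the update catalog at evaluation time. The sketch below is an illustrative Python model, not Configuration Manager code; the Update fields, the 31-day cutoff and the sample KB numbers are invented for the example.

```python
# Illustrative model of the ADR criteria listed above. This is not
# Configuration Manager code; the Update fields and the 31-day cutoff are
# stand-ins for "released or revised within the last month."
from dataclasses import dataclass
from datetime import datetime, timedelta

ALLOWED_CLASSIFICATIONS = {
    "Critical Updates", "Security Updates", "Feature Packs",
    "Service Packs", "Update Rollups", "Updates",
}

@dataclass
class Update:
    title: str
    classification: str
    released: datetime
    required_count: int        # systems that reported needing it at the last scan
    superseded: bool = False

def adr_selects(update: Update, now: datetime) -> bool:
    """Apply the four criteria: recent, required, not superseded, allowed class."""
    return (
        now - update.released <= timedelta(days=31)
        and update.required_count > 0
        and not update.superseded
        and update.classification in ALLOWED_CLASSIFICATIONS
    )

if __name__ == "__main__":
    evaluation_time = datetime(2020, 2, 11, 23, 0)   # second Tuesday, 11 p.m.
    catalog = [
        Update("KB0000001", "Security Updates", datetime(2020, 2, 11), 120),
        Update("KB0000002", "Security Updates", datetime(2019, 12, 10), 80),  # too old
        Update("KB0000003", "Updates", datetime(2020, 2, 11), 0),             # not required
    ]
    update_group = [u.title for u in catalog if adr_selects(u, evaluation_time)]
    print(update_group)   # ['KB0000001']
```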

In the evaluation schedule shown in Figure 3, the admin configures an ADR to assess and package software updates at 11 p.m. on the second Tuesday of each month.

ADR custom schedule
Figure 3. The admin builds a schedule to evaluate and package software updates every month at a certain time in the ADR feature of the Configuration Manager tool.

Set a maintenance window to assist users

To patch servers, use maintenance windows, which control the deployment of software updates to clients in a collection at a specific time. This meets the preferences of server owners, who cannot take certain machines down at particular times for a software update and the consequent reboot. In most cases, admins set maintenance windows to run updates overnight to minimize disruption and effects on end users.

Admins can set the deployment schedule in a maintenance window to As soon as possible since the maintenance window controls the actual rollout time. For example, assume the IT staff configured the following maintenance windows for a collection of servers:

  1. Servers-Updates-GroupA: maintenance window from 12 a.m. to 2 a.m.
  2. Servers-Updates-GroupB: maintenance window from 2 a.m. to 4 a.m.
  3. Servers-Updates-GroupC: maintenance window from 4 a.m. to 6 a.m.

If the admin sets these collections to deploy software updates with the As soon as possible flag, the servers download the Microsoft updates when they become available — it could be right in the middle of a busy workday. Instead, the update process waits until 12 a.m. for Servers-Updates-GroupA, 2 a.m. for the next group and so on. Without any deployment schedule, collections install the software updates as soon as possible and reboot if necessary based on the client settings in the Configuration Manager tool.
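
As a rough model of that gating logic (a simplified sketch, not Configuration Manager's actual scheduler; the window times and function name are invented), the deployment becomes available immediately but installs only once the collection's maintenance window opens:

```python
# Simplified model of a maintenance window gating an "As soon as possible"
# deployment. Not Configuration Manager's actual scheduler; it assumes the
# window does not cross midnight.
from datetime import datetime, time, timedelta

def install_time(available_at: datetime, window_start: time, window_end: time) -> datetime:
    """Return when the update actually installs: immediately if the deployment
    becomes available inside the window, otherwise at the next window opening."""
    if window_start <= available_at.time() < window_end:
        return available_at
    next_open = datetime.combine(available_at.date(), window_start)
    if available_at.time() >= window_end:
        next_open += timedelta(days=1)   # today's window already passed
    return next_open

if __name__ == "__main__":
    released = datetime(2020, 2, 11, 14, 30)   # patch becomes available mid-workday
    # Servers-Updates-GroupA: 12 a.m. to 2 a.m.; GroupB: 2 a.m. to 4 a.m.
    print(install_time(released, time(0, 0), time(2, 0)))   # 2020-02-12 00:00:00
    print(install_time(released, time(2, 0), time(4, 0)))   # 2020-02-12 02:00:00
```

Running the sketch shows GroupA installing at midnight and GroupB at 2 a.m., which is the staggering the three collections above are meant to produce.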

To create a maintenance window for a collection, click on the starburst icon under the Maintenance Windows tab in the collection properties. Figure 4 shows a maintenance window that runs daily from 2 a.m. to 4 a.m.

Maintenance window schedule
Figure 4. Configure a maintenance window for a collection with a recurring schedule.

In this situation, admins should configure an ADR to deploy updates with the Available flag at a specific date and time, but not make the installation mandatory until later. Users apply patches and reboot the system at their convenience. Always impress upon users why they should implement the updates quickly.

Microsoft refines features to maximize uptime

Microsoft added more flexibility to coordinate maintenance and control server uptime in version 1606 of the Configuration Manager tool. The server group settings feature the following controls:

  • the percentage of machines that update at the same time;
  • the number of the machines that update at the same time;
  • the maintenance sequence; and
  • PowerShell scripts that run before and after deployments.

Video: How to use System Center Configuration Manager to plan and execute a patching regimen for applications and OSes.

A server group uses a lock mechanism to ensure only the machines in the collection execute and complete the update before the process moves to the next set of servers. An admin can release the deployment lock manually if a patch gets stuck before it completes. Microsoft provides more information on updates to server groups.
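
A simplified model of that behavior, with invented names and no real patching logic, might look like the following; the point is simply that each batch must finish before the next one starts, and a stuck update holds the lock until an admin steps in.

```python
# Conceptual sketch of the server-group rollout described above: only a
# limited share of the collection updates at once, and the next batch starts
# only after the current one completes. Names and the update step are invented.
import math
from typing import Callable, List

def rolling_update(servers: List[str],
                   percent_at_once: int,
                   apply_update: Callable[[str], bool]) -> None:
    """Patch servers in batches sized by percent_at_once."""
    batch_size = max(1, math.ceil(len(servers) * percent_at_once / 100))
    for i in range(0, len(servers), batch_size):
        for host in servers[i:i + batch_size]:
            if not apply_update(host):
                # A stuck update holds the lock; an admin releases it manually.
                raise RuntimeError(f"rollout paused: {host} did not finish")

if __name__ == "__main__":
    demo_servers = [f"sql-{n:02d}" for n in range(1, 9)]
    rolling_update(demo_servers, percent_at_once=25, apply_update=lambda host: True)
    print("all batches completed")
```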

To develop server group settings, select the All devices are part of the same server group option in the collection properties, and then click on Settings, as seen in Figure 5.

 Set server group configuration
Figure 5. Select the All devices are part of the same server group option to configure a collection’s server group settings.

Select the preferred option for the group. In Figure 6, the admin sets the maintenance sequence. Finally, click OK, and the server group is ready.

Maintenance sequence
Figure 6. The administrator uses the server group settings to maintain control over uptime and coordinate the maintenance schedule.

For additional guidance on software update best practices, Microsoft offers pointers for the deployment process.

Next Steps

Secret Service: Culture change needed to boost security

Reduce patching headaches with these tools

Find the right patching software

Join educators from around the world as we Hack the Classroom – Saturday, Oct. 14th

Educators face an entirely new set of challenges and opportunities in today’s constantly evolving technological landscape. Digital transformation can have a profound impact on the education experience, but it’s becoming harder than ever to keep up with, identify, and incorporate the best strategies and solutions for your classroom.

Enter Hack the Classroom: It’s a live, online event designed to inspire educators, ignite new ideas, and showcase what’s possible in today’s schools and classrooms. Broadcasting live on October 14th, Hack the Classroom will bring together the latest teaching methods, tools, and technologies to spark creativity and curiosity in students and educators alike. We’ll also share tips, tricks, and inspiring stories from educators all across the globe, unlocking new ways to empower the students of today to create the world of tomorrow.

Tune in live to the event on October 14, 2017, 8:00 a.m. – 10:00 a.m. PDT.

Our theme for this Hack the Classroom event is all about:

Sparking creativity and curiosity to empower the students of today to create the world of tomorrow.

Hack the Classroom will feature classroom hacks from educators, discussions from inspiring thought leaders, and resources to help you get started. The key is to start with just a few small steps.

By attending our live, online Hack the Classroom event, you can:

  • Hear from Alan November, live from Boston, MA. Alan’s approach is to support students in becoming “problem designers” as a critical step in tapping their imagination and curiosity.  Providing a framework for lines of inquiry and “messy” problems to be developed can be a stepping stone to helping students learn how to think through increasingly complex and creative conundrums.
  • Look into Tammy Dunbar’s tech-infused classroom in Manteca Unified School District to see how technology is engaging her students and empowering them to meet high standards.  Tammy will share the top five tools to support creativity and curiosity.
  • Learn how courses on the Microsoft Educator Community can prepare you to incorporate rich STEM lessons into your classroom, whether you are an elementary or a secondary teacher, and see the results in action with a school in the UK.
  • See how students at Renton Prep are using the new video features in the Photo App to share their learning creatively.
  • See innovative class hacks from MIE Experts around the world.
  • Participate in the live studio Q&A – come with questions!
  • Receive an HTC participant badge and 500 points on our Educator Community. Once you’ve earned 1,000 points, you become a certified Microsoft Innovative Educator.

Click here to register and save your spot.

Be sure to share the event with fellow teachers, connect with us on Twitter @MicrosoftEDU and tweet out your thoughts using #MicrosoftEDU and #HackTheClassroom.

For daily ideas on how to infuse technology into your classroom, check out our Educator Community and learn how to become a certified Microsoft Innovative Educator. Our Educator Community is the place to collaborate with educators around the world and the best location for resources on using technology in the classroom. Our content is designed by educators for educators, up-to-date and ever-growing to meet your teaching needs.

We look forward to seeing you at Hack the Classroom!