Zoom faces challenges in implementing end-to-end encryption

Zoom has outlined a four-phase plan for implementing end-to-end encryption. But the company will face hurdles as it attempts to add the security protocol to its video conferencing service.

Each phase of the plan will improve security but leave vulnerabilities that Zoom plans to address in the future. However, the company’s draft white paper provides less detail about the later stages of the project.

“This is complex stuff,” said Alan Pelz-Sharpe, founder of research and advisory firm Deep Analysis. “You can’t just plug and play end-to-end encryption.”

Zoom has not said when end-to-end encryption will launch or who will get access to it. At least initially, the service will likely be available only to paid customers.

The goal of the effort is to give users control of the keys used to decrypt their communications. That would prevent Zoom employees from snooping on conversations or from letting law enforcement agencies do the same.

Zoom previously advertised its service as end-to-end encrypted. But in April, the company acknowledged that it wasn’t using the commonly understood definition of that term. The claim has provided fodder for numerous class-action lawsuits.

The first phase of the plan will change Zoom’s security protocol so that users’ clients — not Zoom’s servers — generate encryption keys. The second phase will more securely tie those keys to individual users through partnerships with single sign-on vendors and identity providers.

The third step will give customers an audit trail to verify that neither Zoom nor anyone else is circumventing the system. And the fourth will introduce mechanisms for detecting hacks in real time.
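Zoom’s draft white paper spells out the actual design. As a rough illustration of the phase-one idea (key material generated on clients rather than on servers), here is a minimal sketch using the X25519 primitives in Python’s cryptography library; this is generic Diffie-Hellman key agreement, not Zoom’s protocol.

```python
# Minimal sketch of client-side key agreement (generic X25519, not Zoom's
# protocol). Each client generates its key pair locally; only public keys
# ever leave the device, so the server cannot derive the shared secret.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each participant generates a key pair on its own machine.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Public keys are exchanged (e.g., relayed through the meeting server, which
# never sees the private halves).
shared_alice = alice_private.exchange(bob_private.public_key())
shared_bob = bob_private.exchange(alice_private.public_key())
assert shared_alice == shared_bob

# Derive a symmetric meeting key from the shared secret.
meeting_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"meeting-key",
).derive(shared_alice)
```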

One weakness is the scheme’s reliance on single sign-on vendors and identity providers to tie encryption keys to verified users. Customers that don’t use those services will be left less secure, potentially increasing the risk of meddler-in-the-middle attacks.

Zoom also won’t be able to apply the protocol to all endpoints. Excluded clients include Zoom’s web app and room systems that use SIP or H.323. Zoom also can’t apply end-to-end encryption to audio connections made through the public telephone network.

Turning on end-to-end encryption will disable certain features. Users won’t be able to record meetings or, at least initially, join before the host. These limitations are typical of end-to-end encryption schemes for video communications.

Engineers from the messaging and file-sharing service Keybase are leading Zoom’s encryption effort. Zoom acquired Keybase in early May as part of its effort to improve security and privacy.

Zoom released a draft of its encryption plan on GitHub on May 22. The company is accepting public comments on the proposal through June 5. In the meantime, Zoom is urging customers to update their apps by May 30 to get access to a more secure encryption mode called GCM.
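GCM here is Galois/Counter Mode, an authenticated mode of AES that both encrypts each payload and detects tampering on decryption. A minimal sketch of the mode itself, again with Python’s cryptography library, illustrating AES-GCM generally rather than Zoom’s media pipeline:

```python
# AES-GCM in general terms: authenticated encryption that detects tampering.
# This illustrates the mode itself, not Zoom's media-encryption pipeline.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                 # GCM needs a unique 96-bit nonce per message
plaintext = b"meeting audio frame"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption raises InvalidTag if the ciphertext was modified in transit.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```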

Zoom has been working to repair its reputation after a series of news reports in March revealed numerous security and privacy flaws in its product. An influx of users following the global outbreak of coronavirus put a spotlight on the company.

Users and security experts criticized Zoom for prioritizing ease of use over security. They also faulted the company for not being transparent enough about its encryption and data-sharing practices.

“The criticism was justified and warranted and needed because otherwise these things don’t get fixed,” said Tatu Ylonen, a founder and board member of SSH Communications Security. “I would applaud them for actually taking action fairly quickly.”

More recently, Zoom celebrated some wins. The company settled with the New York attorney general’s office, warding off a further investigation into its security practices. Zoom also got the New York City public school district to undo a ban on the product that had drawn national headlines in April.  

But the company will need to do more to win back the trust of some security-minded buyers.

“I think they’ve responded very quickly,” Pelz-Sharpe said. “But if I were advising a compliant company on a product to buy, it probably wouldn’t be on my list.”

Workspot VDI key to engineering firm’s pandemic planning

Like many companies, Southland Industries is working to accelerate its virtualization plans in the face of the coronavirus pandemic.

The mechanical engineering firm, which is based in Garden Grove, Calif., and has seven main offices across the U.S., has been using the Workspot Workstation Cloud virtual desktop service. Combined with Microsoft Azure, Workspot’s service lets engineers do design-intensive work at home and enables Southland to keep pace as technology advances. When COVID-19 emerged, the company was transitioning users in the mid-Atlantic states to virtual desktops.

Israel Sumano, senior director of infrastructure at Southland Industries, recently spoke about making the move to virtual desktops and the challenges posed by the current public health crisis.

How did your relationship with Workspot first begin?

Israel Sumano: We were replicating about 50 terabytes across 17 different locations in the U.S. real-time, with real-time file launches. It became unsustainable. So over the last five years, I’ve tested VDI solutions — Citrix, [VMware] Horizon, other hosted solutions, different types of hardware. We never felt the performance was there for our users.

When Workspot came to us, I liked it because we were able to deploy within a week. We tested it on on-prem hardware, we tested it on different cloud providers, but it wasn’t until we had Workspot on [Microsoft] Azure that we were comfortable with the solution.

For us to build our own GPU-enabled VDI systems [needed for computing-intensive design work], we probably would have spent about $4 million, and they would have been obsolete in about six years. By doing it with Microsoft, we were able to deploy the machines and ensure they will be there and upgradeable. If a new GPU comes out, we can upgrade to the new GPU and it won’t be much cost to us to migrate.

How has your experience in deploying Workspot been so far? What challenges have you met?

Sumano: It was a battle trying to rip the PCs from engineers’ hands. They had a lot of workstations [and] they really did not want to give them up. We did the first 125 between October 2017 and February 2018. … That pushed back the rest of the company by about a year and a half. We didn’t get started again until about October of 2019. By that time, everyone had settled in, and they all agreed it was the best thing we’ve ever done and we should push forward. That’s coming from the bottom up, so management is very comfortable now doing the rest of the company.

How did you convince workers that the virtualization service was worthwhile?

Sumano: They were convinced when they went home and were able to work, or when they were in a hotel room and they were able to work. When they were at a soccer match for their kids, and something came up that needed attention right away, they pulled out their iPads and were able … to manipulate [designs] or check something out. That’s when it kicked in.

In the past, when they went to a job site, [working] was a really bad experience. We invested a lot of money into job sites to do replication [there].

[With Workspot,] they were able to pick up their laptops, go to the job site and work just like they were at the office.

The novel coronavirus has forced companies to adopt work-at-home policies. What is Southland’s situation?

Sumano: We have offices in Union City [California], which is Marin County, and they were ordered to stay in place, so everyone was sent home there. We just got notice that Orange County will be sent home. Our Las Vegas offices have also been sent home.

Our job sites are still running, but having this solution has really changed the ability for these engineers to go home and work. Obviously, there’s nothing we can do about the shops — we need to have people on-hand at the shop, [as] we’re not fully automated at that level.

On the construction site, we need guys to install [what Southland has designed]. Those are considered critical by the county. They’re allowed to continue work at the job sites, but everybody from the offices has been sent home, and they’re working from home.

We hadn’t done the transition for the mid-Atlantic division to Workspot. We were planning on finishing that in the next 10 weeks. We are now in a rush and plan on finishing it by next Friday. We’re planning on moving 100 engineers to Workspot, so they’re able to go home.

How has it been, trying to bring many workers online quickly?

Sumano: I’ve been doing this a long time. I’ve implemented large virtual-desktop and large Citrix environments in the past. It’s always been a year to a year-and-a-half endeavor.

We are rushing it for the mid-Atlantic. We’d like to take about 10 weeks to do it — to consolidate servers and reduce footprint. We’re skipping all those processes right now and just enacting [virtualization] on Azure, bringing up all the systems as-is and then putting everyone onto those desktops.

Has the new remote-work situation been a strain on your company’s infrastructure?

Sumano: The amount of people using it is exactly the same. We haven’t heard any issues about internet congestion — that’s always a possibility with more and more people working from home. It’s such a small footprint, the back-and-forth chatter between Workspot and your desktop, that it shouldn’t be affected much.

What’s your level of confidence going forward, given that this may be a protracted situation?

Sumano: We’re very confident. We planned on being 100% Azure-based by December 2020. We’re well on track for doing that, except for, with what’s happening right now, there was a bit of a scramble to get people who didn’t have laptops [some] laptops. There’s a lot of boots on the ground to get people able to work from home.

Most of our data is already on Azure, so it’s a very sustainable model going forward, unless there’s a hiccup on the internet.

Editor’s note: This interview has been edited for clarity and length.

CIOs should plan for a spike in healthcare cyberattacks

Healthcare organizations face a growing risk of healthcare cyberattacks during the coronavirus pandemic.

The federal government is relaxing regulations so that providers can treat patients from home and use consumer-grade technologies like Skype and FaceTime. The measures are aimed at keeping providers and patients at home as much as possible to slow the spread of COVID-19. But there is also a downside to making healthcare more accessible: The measures are creating more points of entry into healthcare systems for cyberattackers.

Before the coronavirus outbreak, the healthcare industry was already one of the most likely industries to be attacked. The industry pays the highest cost to detect, respond to and deal with the fallout of a data breach, averaging just under $6.5 million per breach, said Caleb Barlow, president and CEO of healthcare cybersecurity firm CynergisTek.

Now in the midst of a pandemic, the healthcare industry is more vulnerable than ever, and cyber criminals are likely laying the groundwork for major healthcare cyberattacks.

“If you put yourself in the mindset of an attacker right now, now is actually not the time to detonate your attack,” Barlow said. “Now is the time to get on a system, to move laterally and to elevate your credentials, and that’s likely exactly what they’re doing. There are a lot of indicators of that. We’ve seen a significant rise in COVID-19-focused phishing, both that is targeting individuals as well as institutions.”

Healthcare systems and even the U.S. Department of Health and Human Services are seeing phishing and other similar attacks right now, but Barlow warns that healthcare CIOs and CISOs need to prepare for the more insidious healthcare cyberattacks that are coming, including ransomware.

“We have to realize that these attackers are highly motivated,” Barlow said. “Many of them, particularly with things like ransomware, are nation-state actors. These are how nation-states fund their activities. There is not going to be a plea to bad guys of, ‘Please not right now.’ It just doesn’t work that way. It is coming. Get prepared, you have a few weeks. It is that simple.”

Cyberthreats seen on the front lines

Anahi Santiago, CISO at the Delaware-based ChristianaCare health system, said there has been a rapid increase in social engineering attacks — including phishing, where bad actors appear as a trusted source and trick healthcare employees into revealing their credentials — that are testing healthcare systems during the coronavirus crisis.

Although the ChristianaCare health system has security tools to prevent phishing attacks on the organization, Santiago said home computers may not have the same protections. Additionally, Santiago said threat actors are setting up websites using legitimate coronavirus outbreak global maps to trick people into visiting those sites and, unbeknownst to them, downloading malware. While the healthcare system’s security tools block malicious websites, clinicians may not have the same types of protection at home.

CynergisTek’s Barlow said the “threat landscape has increased dramatically,” as regulations have been relaxed to enable physicians to work and treat patients remotely. That increased threat landscape includes a physician’s home network, which gives bad actors more opportunity to gain access to a healthcare institution.

As cyberattackers capitalize on this opportunity, Barlow said it’s important for health systems’ security teams to mobilize and for healthcare CIOs and CISOs to have a plan in place in case their healthcare system is breached.

Santiago echoed Barlow’s call on security teams, saying awareness and ensuring the cybersecurity posture remains intact are key to preventing these kinds of attacks.

“We have been working very closely with our external affairs folks to communicate to the organization so that our caregivers have awareness, not only around potential phishing and social engineering attacks that might come through the organization, but also to be aware at home,” she said. “We’re doing a lot of enablement for the organization, but also making sure that we’re thinking about our caregivers and their families and making sure we’re giving them the tools to be able to go home and continue to protect themselves.”

Aaron Miri, CIO at the University of Texas at Austin Dell Medical School and UT Health Austin, said he has heard of academic medical institutions and healthcare systems being under constant attack and is remaining vigilant.

“During any situation, even if it’s a Friday afternoon at 5 o’clock, you can expect to see bad actors try to capitalize,” he said. “It is an unfortunate way of the world and it’s reality, so we are always keeping watch.”

Preparing for cyberattacks

Barlow said there are a few steps healthcare security teams can take to make sure providers working at home are doing so securely.

First, he said it’s key to make sure clinicians have proper virtual private networks (VPNs) in place and that they’re set up properly. A VPN creates a safe connection between a device that could be on a less secure network and the healthcare system network.

Second, he said security teams should make sure those computers have proper protection, often referred to as endpoint security. Endpoint security ensures devices meet certain security criteria before being allowed to connect to a hospital’s network.
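The exact criteria vary by product. As a hedged illustration of the idea, the sketch below gates a connection on facts a device reports about itself; the checks, names and policy are hypothetical, not any particular endpoint-security product’s API.

```python
# Hypothetical endpoint posture check. The checks and the all-must-pass policy
# below are illustrative only, not any specific vendor's implementation.
from dataclasses import dataclass

@dataclass
class EndpointReport:
    os_patch_current: bool      # OS has recent security updates
    disk_encrypted: bool        # full-disk encryption enabled
    antivirus_running: bool     # endpoint protection agent active
    screen_lock_enabled: bool   # device locks when idle

def may_connect(report: EndpointReport) -> bool:
    """Allow the VPN connection only if the device meets every criterion."""
    return all((
        report.os_patch_current,
        report.disk_encrypted,
        report.antivirus_running,
        report.screen_lock_enabled,
    ))

home_laptop = EndpointReport(True, True, False, True)
print(may_connect(home_laptop))  # False: antivirus agent is not running
```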

The next step is getting a plan in place so that when a healthcare system is breached or hit with ransomware, it will know how to respond, he said. The plan should include how to manage a breach in light of the pandemic, when leaders of the organization are likely working from home.

“If you are hit with ransomware, how are you going to process through that, how are you going to do that when you can’t get everybody in the room … how are you going to make decisions, who are you going to work with,” he said. “Get those plans up to date.”

Retail facial recognition and eye tracking the next tech wave, maybe

Your face tells your story and confirms your identity when you shop. Digitizing all that takes next-generation eye-tracking and facial recognition technology, which retailers and restaurateurs have just begun weaving into the IT mix to improve customer experience.

Stores and restaurants are testing retail facial recognition technology to help speed up checkout and ordering, users and vendors said at the recent NRF 2020 Vision: Retail’s Big Show. Eye-tracking software helps retailers improve user interfaces on e-commerce sites as well as in physical stores, influencing the planograms that map product placement on store shelves.

Customers in several small restaurant chains in California and Illinois can order meals on kiosks from vendor PopID, a company that integrates NEC facial recognition and Brierly Group digital loyalty programs into the kiosks. The goal for many restaurants with self-service kiosks is to eliminate humans taking orders and running payments, said Yale Goldberg, vice president of strategy and business development at PopID, and facial recognition can speed up the process.

These are early days for retail facial recognition to connect loyalty program names and credit cards to customers, and so far, the results have been mixed. At restaurants with PopID kiosks, humans at the cash register can punch in orders and take payments in 30 seconds, while on their own, customers take an average of two and a half minutes, even with instantaneous facial ID.

Furthermore, data privacy concerns, which Goldberg refers to as the “creep factor,” can make some consumers reluctant to use the system. So far, about 20% of the restaurants’ customers opt in to PopID facial recognition for ordering and checkout.

That said, older customers — who typically have more reservations about giving up personal data than Millennial-generation and younger customers — are buying into the company’s retail facial recognition systems at about the same rate, Goldberg said.

“People are returning to these restaurants frequently, and they understand they can have a much more frictionless experience when they opt in,” Goldberg said. “Once it’s explained, people start to trust the brand and can see the benefits.”

NCR sees biometrics on rise

Customer privacy concerns about opting into retail facial recognition are a barrier to widespread acceptance, said David Wilkinson, senior vice president and general manager of global retail at NCR Corp., which provides cloud application and infrastructure support for retailers. NCR is partnering with biometric ID vendors to offer face ID checkout kiosks to convenience and grocery stores, but Wilkinson characterized adoption among the company’s retail customers as low, or even still in the testing phase, for now.

NCR remains agnostic on new tech such as biometric IDs, Wilkinson said, and supports as many as possible to meet customer demand if and when it comes. NCR also offers computer vision tools for automated age verification for the purchase of age-restricted items, which the company said can be more accurate than humans.

Biometrics in general have much promise, Wilkinson said, for matching customers to loyalty memberships and enabling quicker checkouts. Facial recognition in particular, however, may have a difficult path to acceptance in retail among consumers. Alternatives such as palm recognition for payment may eventually prove more accurate and less intrusive for payments, he said.

“I think there will be some kind of AI-driven, biometric way that we can identify ourselves at retail,” Wilkinson said. “At NCR, we can’t bet our business on a winner or a loser; that’s not the way we’re built.”

Integration woes slow progress

Integration of facial technologies hasn’t always been smooth. Customers have to relearn familiar processes like checkout. On the back end, new biometric data feeds must work their way into long-standing payment systems, or in the case of eye-tracking data, into planogram applications.

Many retailers are a few years off from getting their systems and data management working in harmony, said Jon Hughes, executive vice president at retail tech consultant REPL Group. A third have it “well sorted out,” a third are just getting started and a third are in what he called “the dark space in the middle.”

“The data’s the big problem,” Hughes said. He added that, in his view, facial recognition comes closer to true AI than many other technologies that vendors call AI but that he views as basic automation without intelligence.

“I think there’s a massive opportunity, but there’s a leap of faith needed,” Hughes said. “Taking that leap of faith is really hard for some organizations, but the technology’s there.”

Major storage vendors map out 2020 plans

The largest enterprise storage vendors face a common set of challenges and opportunities heading into 2020. As global IT spending slows and storage gets faster and frequently handles data outside the core data center, primary storage vendors must turn to cloud, data management and newer flash technologies.

Each of the major storage vendors has its own plans for dealing with these developments. Here is a look at what the major primary storage vendors did in 2019 and what you can expect from them in 2020.

Dell EMC: Removing shadows from the clouds

2019 in review: Enterprise storage market leader Dell EMC spent most of 2019 bolstering its cloud capabilities, in many cases trying to play catch-up. New cloud products include VMware-orchestrated Dell EMC Cloud Platform arrays that integrate Unity and PowerMax storage, coupled with VxBlock converged and VxRail hyper-converged infrastructure.

The new Dell EMC Cloud gear allows customers to build and deploy on-premises private clouds with the agility and scale of the public cloud — a growing need as organizations dive deeper into AI and DevOps.

What’s on tap for 2020: Dell EMC officials have hinted at a new Power-branded midrange storage system for several years, and a formal unveiling of that product is expected in 2020. Then again, Dell initially said the next-generation system would arrive in 2019. Customers with existing Dell EMC midrange storage likely won’t be forced to upgrade, at least not for a while. The new storage platform will likely converge features from Dell EMC Unity and SC Series midrange arrays with an emphasis on containers and microservices.

Dell will enhance its tool set for containers to help companies deploy microservices, said Sudhir Srinivasan, the CTO of Dell EMC storage. He said containers are a prominent design feature in the new midrange storage.

“Software stacks that were built decades ago are giant monolithic pieces of code, and they’re not going to survive that next decade, which we call the data decade,” Srinivasan said. 

Hewlett Packard Enterprise’s eventful year

2019 in review: In terms of product launches and partnerships, Hewlett Packard Enterprise (HPE) had a busy year in 2019. HPE Primera all-flash storage arrived in late 2019, and HPE expects customers to slowly transition from its flagship 3PAR platform. Primera supports NVMe flash, embedding custom chips in the chassis to support massively parallel data transport on PCI Express lanes. The first Primera customer, BlueShore Financial, received its new array in October.

HPE bought supercomputing giant Cray to expand its presence in high-performance computing, and made several moves to broaden its hyper-converged infrastructure options. HPE ported InfoSight analytics to HPE SimpliVity HCI, as part of the move to bring the cloud-based predictive tools picked up from Nimble Storage across all HPE hardware. HPE launched a Nimble dHCI disaggregated HCI product and partnered with Nutanix to add Nutanix HCI technology to HPE GreenLake services while allowing Nutanix to sell its software stack on HPE servers.

It capped off the year with HPE Container Platform, a system designed to make it easier to spin up Kubernetes-orchestrated containers on bare metal. The Container Platform uses technology from recent HPE acquisitions MapR and BlueData.

What’s on tap for 2020: HPE vice president of storage Sandeep Singh said more analytics are coming in response to customer calls for simpler storage. “An AI-driven experience to predict and prevent issues is a big game-changer for optimizing their infrastructure. Customers are placing a much higher priority on it in the buying motion,” helping to influence HPE’s roadmap, Singh said.

It will be worth tracking the progress of GreenLake as HPE moves towards its goal of making all of its technology available as a service by 2022.

Hitachi Vantara: Renewed focus on traditional enterprise storage

2019 in review: Hitachi Vantara renewed its focus on traditional data center storage, a segment it had largely conceded to other array vendors in recent years. Hitachi underwent a major refresh of the Hitachi Virtual Storage Platform (VSP) flash array in 2019. The VSP 5000 SAN arrays scale to 69 PB of raw storage, and capacity extends higher with hardware-based deduplication in its Flash Storage Modules. By virtualizing third-party storage behind a VSP 5000, customers can scale capacity to 278 PB.

What’s on tap for 2020: The VSP 5000 integrates Hitachi Accelerated Fabric networking technology that enables storage to scale out and scale up. Hitachi this year plans to phase in the networking to other high-performance storage products, said Colin Gallagher, a Hitachi vice president of infrastructure products.

“We had been lagging in innovation, but with the VSP 5000, we got our mojo back,” Gallagher said.

Hitachi arrays support containers, and Gallagher said the vendor is considering whether it needs to evolve its support beyond a Kubernetes plugin, as other vendors have done. Hitachi plans to expand data management features in Hitachi Pentaho analytics software to address AI and DevOps deployments. Gallagher said Hitachi’s data protection and storage as a service is another area of focus for the vendor in 2020.

IBM: Hybrid cloud with cyber-resilient storage

2019 in review: IBM brought out the IBM Elastic Storage Server 3000, an NVMe-based array packaged with IBM Spectrum Scale parallel file storage. Elastic Storage Server 3000 combines NVMe flash and containerized software modules to provide faster time to deployment for AI, said Eric Herzog, IBM’s vice president of world storage channels.

In addition, IBM added PCIe-enabled NVMe flash to Versastack converged infrastructure and midrange Storwize SAN arrays.

What to expect in 2020: Like other storage vendors, IBM is trying to navigate the unpredictable waters of cloud and services. Its product development revolves around storage that can run in any cloud. IBM Cloud Services enables end users to lease infrastructure, platforms and storage hardware as a service. The program has been around for two years, and will add IBM software-defined storage to the mix this year. Customers thus can opt to purchase hardware capacity or the IBM Spectrum suite in an OpEx model. Non-IBM customers can run Spectrum storage software on qualified third-party storage.

“We are going to start by making Spectrum Protect data protection available, and we expect to add other pieces of the Spectrum software family throughout 2020 and into 2021,” Herzog said.

Another IBM development to watch in 2020 is how its $34 billion acquisition of Red Hat affects either vendor’s storage products and services.

NetApp: Looking for a rebound

2019 in review: Although spending slowed for most storage vendors in 2019, NetApp saw the biggest decline. At the start of 2019, NetApp forecast annual sales at $6 billion, but poor sales forced NetApp to slash its guidance by around 10% by the end of the year.

NetApp CEO George Kurian blamed the revenue setbacks partly on poor sales execution, a failing he hopes will improve as NetApp institutes better training and sales incentives. The vendor also said goodbye to several top executives who retired, raising questions about how it will deliver on its roadmap going forward.

What to expect in 2020: In the face of the turbulence, Kurian kept NetApp focused on the cloud. NetApp plowed ahead with its Data Fabric strategy to enable OnTap file services to be consumed, via containers, in the three big public clouds.  NetApp Cloud Data Service, available first on NetApp HCI, allows customers to consume OnTap storage locally or in the cloud, and the vendor capped off the year with NetApp Keystone, a pay-as-you-go purchasing option similar to the offerings of other storage vendors.

Although NetApp plans hardware investments, storage software will account for more revenue as companies shift data to the cloud, said Octavian Tanase, senior vice president of the NetApp OnTap software and systems group.

“More data is being created outside the traditional data center, and Kubernetes has changed the way those applications are orchestrated. Customers want to be able to rapidly build a data pipeline, with data governance and mobility, and we want to try and monetize that,” Tanase said.

Pure Storage: Flash for backup, running natively in the cloud

2019 in review: The all-flash array specialist broadened its lineup with FlashArray//C SAN arrays and denser FlashBlade NAS models. FlashArray//C extends the Pure Storage flagship with a model that supports Intel Optane DC SSD-based MemoryFlash modules and quad-level cell NAND SSDs in the same system.

Pure also took a major step on its journey to convert FlashArray into a unified storage system by acquiring Swedish file storage software company Compuverde. It marked the second acquisition in as many years for Pure, which acquired deduplication software startup StorReduce in 2018.

What to expect in 2020: The gap between disk and flash prices has narrowed enough that it’s time for customers to consider flash for backup and secondary workloads, said Matt Kixmoeller, Pure Storage vice president of strategy.

“One of the biggest challenges — and biggest opportunities — is evangelizing to customers that, ‘Hey, it’s time to look at flash for tier two applications,'” Kixmoeller said.

Flexible cloud storage options and more storage in software are other items on Pure’s roadmap. Cloud Block Store, which Pure introduced last year, is just getting started, Kixmoeller said, and is expected to generate lots of attention from customers. Most vendors support Amazon Elastic Block Storage by sticking their arrays in a colocation center and running their operating software on EBS, but Pure took a different approach. Pure reengineered the back-end software layer to run natively on Amazon S3.

Government IT pros: Hiring data scientists isn’t an exact science

WASHINGTON, D.C. — Government agencies face the same problems as enterprises when it comes to turning their vast data stores into useful information. In the case of government, that information is used to provide services such as healthcare, scientific research, legal protections and even to fight wars.

Public sector IT pros at the Veritas Public Sector Vision Day this week talked about their challenges in making data useful and keeping it secure. A major part of their work currently involves finding the right people to fill data analytics roles, including hiring data scientists. They described data science as a combination of roles requiring technical as well as subject matter expertise, which often means building a diverse team to succeed.

Tiffany Julian, data scientist at the National Science Foundation, said she recently sat in on a focus group involved with the Office of Personnel Management’s initiative to define data scientist.

“One of the big messages from that was, there’s no such thing as a unicorn. You don’t hire a data scientist. You create a team of people who do data science together,” Julian said.

Julian said data science includes more than programmers and technical experts. Subject experts who know their company or agency mission also play a role.

“You want your software engineers, you want your programmers, you want your database engineers,” she said. “But you also want your common sense social scientists involved. You can’t just prioritize one of those fields. Let’s say you’re really good at Python, you’re really good at R. You’re still going to have to come up with data and processes, test it out, draw a conclusion. No one person you hire is going to have all of those skills that you really need to make data-driven decisions.”

Wanted: People who know they don’t know it all

Because she is a data scientist, Julian said others in her agency ask what skills they should seek when hiring data scientists.

“I’m looking for that wisdom that comes from knowing that I don’t know everything,” she said. “You’re not a data scientist, you’re a programmer, you’re an analyst, you’re one of these roles.”

Tom Beach, chief data strategist and portfolio manager for the U.S. Patent and Trademark Office (USPTO), said he takes a similar approach when looking for data scientists.

“These are folks that know enough to know that they don’t know everything, but are very creative,” he said.

Beach added that when hiring data scientists, he looks for people “who have the desire to solve a really challenging problem. There is a big disconnect between an abstract problem and a piece of code. In our organization, a regulatory agency dealing with patents and trademarks, there’s a lot of legalese and legal frameworks. Those don’t code well. Court decisions are not readily codable into a framework.”

‘Cloud not enough’

Like enterprises, government agencies also need to get the right tools to help facilitate data science. Peter Ranks, deputy CIO for information enterprise at the Department of Defense, said data is key to his department, even if DoD IT people often talk more about technologies such as cloud, AI, cybersecurity and the three Cs (command, control and communications) when they discuss digital modernization.

“What’s not on the list is anything about data,” he said. “And that’s unfortunate because data is really woven into every one of those. None of those activities are going to succeed without a focused effort to get more utility out of the data that we’ve got.”

Ranks said future battles will depend on the ability of forces on land, air, sea, space and cyber to interoperate in a coordinated fashion.

“That’s a data problem,” he said. “We need to be able to communicate and share intelligence with our partners. We need to be able to share situational awareness data with coalitions that may be created on demand and respond to a particular crisis.”

Ranks cautioned against putting too much emphasis on leaning on the cloud for data science. He described cloud as the foundation on the bottom of a pyramid, with software in the middle and data on top.

“Cloud is not enough,” he said. “Cloud is not a strategy. Cloud is not a destination. Cloud is not an objective. Cloud is a tool, and it’s one tool among many to achieve the outcomes that your agency is trying to get after. We find that if all we do is adopt cloud, if we don’t modernize software, all we get is the same old software in somebody else’s data center. If we modernize software processes but don’t tackle the data … we find that bad data becomes a huge boat anchor that all those modernized software applications have to drag around. It’s hard to do good analytics with bad data. It’s hard to do good AI.”

Beach agreed. He said cloud is “100%” part of USPTO’s data strategy, but so is recognition of people’s roles and responsibilities.

“We’re looking at not just governance behavior as a compliance exercise, but talking about people, process and technology,” he said. “We’re not just going to tech our way out of a situation. Cloud is just a foundational step. It’s also important to understand the recognition of roles and responsibilities around data stewards, data custodians.”

This includes helping ensure that people can find the data they need, as well as denying access to people who do not need that data.

Nick Marinos, director of cybersecurity and data protection at the Government Accountability Office, said understanding your data is a key step in ensuring data protection and security.

“Thinking upfront about what data do we actually have, and what do we use the data for are really the most important questions to ask from a security or privacy perspective,” he said. “Ultimately, having an awareness of the full inventory within the federal agencies is really the only way that you can even start to approach protecting the enterprise as a whole.”

Marinos said data protection audits at government agencies often start with looking at the agency’s mission and its flow of data.

“Only from there can we as auditors — and the agency itself — have a strong awareness of how many touch points there are on these data pieces,” he said. “From a best practice perspective, that’s one of the first steps.”

VMware’s Bitnami acquisition grows its development portfolio

The rise of containers and the cloud has changed the face of the IT market, and VMware must evolve with it. The vendor has moved out of its traditional data center niche and — with its purchase of software packager Bitnami — has made a push into the development community, a change that presents new challenges and potential. 

Historically, VMware delivered a suite of system infrastructure management tools. With the advent of cloud and digital disruption, IT departments’ focus expanded from monitoring systems to developing applications. VMware has extended its management suite to accommodate this shift, and its acquisition of Bitnami adds new tools that ease application development.

Building applications presents difficulties for many organizations. Developers spend much of their time on application plumbing, writing software that performs mundane tasks — such as storage allocation — and linking one API to another.

Bitnami sought to simplify that work. The company created prepackaged components called installers that automate the development process. Rather than write the code themselves, developers can now download Bitnami system images and plug them into their programs. As VMware delves further into hybrid cloud market territory, Bitnami brings simplified app development to the table.

“Bitnami’s solutions were ahead of their time,” said Torsten Volk, managing research director at Enterprise Management Associates (EMA), an IT analyst firm based in Portsmouth, N.H. “They enable developers to bulletproof application development infrastructure in a self-service manner.”

The value Bitnami adds to VMware

Released under the Apache License, Bitnami’s modules contain commonly coupled software applications instead of just bare-bones images. For example, a Bitnami WordPress stack might contain WordPress, a database management system (e.g., MySQL) and a web server (e.g., Apache).

Bitnami takes care of several mundane programming chores. It keeps all components up to date — so if it finds a security problem, it patches that problem — and updates those components’ associated libraries. Bitnami makes its modules available through its Application Catalogue, which functions like an app store.

The company designed its products to run on a wide variety of systems. Bitnami supports Apple OS X, Microsoft Windows and Linux OSes. Its VM features work with VMware ESX and ESXi, VirtualBox and QEMU. Bitnami stacks also are compatible with software infrastructures such as WAMP, MAMP, LAMP, Node.js, Tomcat and Ruby. It supports cloud tools from AWS, Azure, Google Cloud Platform and Oracle Cloud. The installers cover a wide variety of applications, including Abante Cart, Magento, MediaWiki, PrestaShop, Redmine and WordPress.
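As a concrete illustration of consuming one of these prepackaged images, the sketch below pulls and starts a Bitnami database image with the Docker SDK for Python. The bitnami/mariadb image is published on Docker Hub; treat the environment variable and tag as assumptions based on Bitnami’s documented conventions.

```python
# Sketch: pull and run a prepackaged Bitnami image with the Docker SDK for
# Python. ALLOW_EMPTY_PASSWORD follows Bitnami's documented convention, but
# verify the variables for your image and version.
import docker

client = docker.from_env()

# One call pulls the prepackaged stack and starts it; no manual installation
# of the database server, its libraries or its configuration files.
container = client.containers.run(
    "bitnami/mariadb:latest",
    environment={"ALLOW_EMPTY_PASSWORD": "yes"},   # dev/test convenience only
    ports={"3306/tcp": 3306},
    detach=True,
)
print(container.status, container.short_id)
```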

Bitnami seeks to help companies build applications once and run them on many different configurations.

“For enterprise IT, we intend to solve for challenges related to taking a core set of application packages and making them available consistently across teams and clouds,” said Milin Desai, general manager of cloud services at VMware.

Development teams share project work among individuals, work with code from private or public repositories and deploy applications on private, hybrid and public clouds. As such, Bitnami’s flexibility made it appealing to developers — and VMware.

How Bitnami and VMware fit together

VMware wants to extend its reach from legacy, back-end data centers and appeal to more front-end and cloud developers.

“In the last few years, VMware has gone all in on trying to build out a portfolio of management solutions for application developers,” Volk said. VMware embraced Kubernetes and has acquired container startups such as Heptio to prove it.

Bitnami adds another piece to this puzzle, one that provides a curated marketplace for VMware customers who hope to emphasize rapid application development.

“Bitnami’s application packaging capabilities will help our customers to simplify the consumption of applications in hybrid cloud environments, from on-premises to VMware Cloud on AWS to VMware Cloud Provider Program partner clouds, once the deal closes,” Desai said.

Facing new challenges in a new market

However, the purchase moves VMware out of its traditional virtualized enterprise data center sweet spot. VMware has little name recognition among developers, so the company must build its brand.

“Buying companies like Bitnami and Heptio is an attempt by VMware to gain instant credibility among developers,” Volk said. “They did not pay a premium for the products, which were not generating a lot of revenue. Instead, they wanted the executives, who are all rock stars in the development community.”  

Supporting a new breed of customer poses its challenges. Although VMware’s Bitnami acquisition adds to its application development suite — an area of increasing importance — it also places new hurdles in front of the vendor. Merging the culture of a startup with that of an established supplier isn’t always a smooth process. In addition, VMware has bought several startups recently, so consolidating its variety of entities in a cohesive manner presents a major undertaking.

Page Locking comes to OneNote Class Notebooks

Educators face an array of challenges, not least of which is ongoing classroom management. As more and more teachers use Class Notebooks, standalone or integrated with Microsoft Teams, the most common request we’ve heard from teachers is the ability to “lock” a page. This capability allows educators to stay in control and make a OneNote page read-only for students while still allowing the teacher to add feedback or marks. Today, we are excited to deliver on this request and begin rolling out page locking broadly to help teachers manage their classrooms and save time.

Page Locking—To further simplify classroom workflows, we are delivering on the number-one request from teachers for OneNote Class Notebooks: the ability to lock pages. With our new page locking, the following capabilities are enabled:

  • Teachers can now lock all student copies of a distributed page as read-only after giving feedback to the students.
  • Teachers can unlock or lock individual pages by simply right-clicking the page in a student’s notebook.
  • Teachers using Microsoft Teams to create OneNote assignments can have the page of the OneNote assignment automatically lock as read-only when the due date/time passes.

During our early testing process, we’ve had teachers trying out page locking in their classrooms. Robin Licato, an AP Chemistry and Forensic Science teacher at St. Agnes Academy in Houston, had this to say: “This feature is an absolute game changer. I am enjoying the ability to unlock a specific student who has an extension on an assignment due to illness or absence while keeping the page locked for students who did not complete the assignment on time!”

Scott Titmas, technology integration specialist at Old Bridge Township Public Schools in New Jersey, was also an early beta tester of the new page locking feature. “The page locking feature is extremely intuitive, easy to use, and opens a whole new world of possibilities for teachers. It will be a welcomed feature addition for all teachers. More encouraging than just this feature is the fact that Microsoft has consistently shown they listen to their users, and user voice drives the direction of product development.”

Platforms supported

Initially, we are rolling this out for OneNote for Windows 10, the OneNote 2016 desktop add-in, OneNote Online and OneNote for iPad. Most platforms will provide page locking built into the toolbar. For OneNote desktop, download the new free add-in.

For additional details on which version of OneNote is required for both teacher and students, please visit this new OneNote Class Notebook page locking support article.  It is important to read this article to understand the details before rolling this out.

Important Note #1: For OneNote 2016 Desktop MSI customers, you must deploy this Public Update first before student and teacher pages will properly lock. Please work with your IT admin to ensure you properly deploy this patch first. Page Locking is not supported for OneNote 2013 Desktop clients.

Important Note #2: Page Locking works best when a page is distributed or made into an assignment. For example, if students copy pages manually from the Content Library into their own notebooks and change the page title, the teacher will have to manually right-click the student page to lock it, instead of being able to use the single checkbox to lock all pages.

Page Locking in OneNote for Windows 10

Page Locking in OneNote 2016 Desktop

 

Teacher right-click to unlock a page

Class Notebook Addin version 2.5.0.0

  • Page Locking support to allow teachers to make a page or set of student pages read-only
  • Bug fixes and performance improvements

We hope you enjoy these new updates! Share any feedback at @OneNoteEDU, and if you need support or help, you can file a ticket here: http://aka.ms/edusupport.

Kubernetes networking expands its horizons with service mesh

Enterprise IT operations pros who support microservices face a thorny challenge with Kubernetes networking, but service mesh architectures could help address their concerns.

Kubernetes networking under traditional methods faces performance bottlenecks. Centralized network resources must handle an order of magnitude more connections once the user migrates from VMs to containers. As containers appear and disappear much more frequently, managing those connections at scale can quickly create confusion on the network, and stale information inside network management resources can even misdirect traffic.

IT pros at KubeCon this month got a glimpse at how early adopters of microservices have approached Kubernetes networking issues with service mesh architectures. These network setups are built around sidecar containers, which act as a proxy for application containers on internal networks. Such proxies offload networking functions from application containers and offer a reliable way to track and apply network security policies to ephemeral resources from a centralized management interface.
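In Kubernetes terms, a sidecar is simply a second container declared in the same pod as the application, sharing its network namespace. A minimal sketch with the official Kubernetes Python client follows; the image tags and port numbers are illustrative assumptions, and a real Envoy sidecar would also need proxy configuration and traffic-redirection rules.

```python
# Sketch of the sidecar pattern: an app container and a proxy container in one
# pod, sharing localhost. Image tags and ports are illustrative only; a real
# deployment also needs Envoy config and iptables traffic redirection.
from kubernetes import client, config

app = client.V1Container(
    name="my-service",
    image="example/my-service:1.0",          # hypothetical application image
    ports=[client.V1ContainerPort(container_port=8080)],
)

sidecar = client.V1Container(
    name="envoy",
    image="envoyproxy/envoy:v1.28.0",        # proxy handles the pod's traffic
    ports=[client.V1ContainerPort(container_port=9901)],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="my-service", labels={"app": "my-service"}),
    spec=client.V1PodSpec(containers=[app, sidecar]),
)

config.load_kube_config()                    # reads ~/.kube/config
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```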

Proxies in a service mesh handle one-time connections between microservices better than traditional networking models can. Service mesh proxies also tap telemetry information that IT admins can’t get from other Kubernetes networking approaches, such as transmission success rates, latencies and traffic volume on a container-by-container basis.
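Envoy, for example, exposes such counters through a local admin endpoint that tooling can poll. A rough sketch of computing a per-service success rate is below; the admin port and counter names follow Envoy’s documented conventions but should be treated as assumptions for any given deployment.

```python
# Rough sketch: poll a sidecar proxy's admin endpoint and compute a success
# rate. The port (9901) and counter names follow Envoy's conventions but are
# assumptions; adjust to the stats your proxy actually emits.
import requests

def envoy_stats(host="localhost", port=9901):
    """Parse Envoy's plain-text 'name: value' stats dump into a dict."""
    text = requests.get(f"http://{host}:{port}/stats").text
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(": ")
        if value.isdigit():
            stats[name] = int(value)
    return stats

stats = envoy_stats()
total = stats.get("cluster.my_service.upstream_rq_total", 0)
ok = stats.get("cluster.my_service.upstream_rq_2xx", 0)
if total:
    print(f"success rate: {ok / total:.1%}")
```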

“The network should be transparent to the application,” said Matt Klein, a software engineer at San Francisco-based Lyft, which developed the Envoy proxy system to address networking obstacles as the ride-sharing company moved to a microservices architecture over the last five years.

“People didn’t trust those services, and there weren’t tools that would allow people to write their business logic and not focus on all the faults that were happening in the network,” Klein said.

With a sidecar proxy in Envoy, each of Lyft’s services only had to understand its local portion of the network, and the application language no longer factored in its function. At the time, only the most demanding web applications required proxy technology such as Envoy. But now, the complexity of microservices networking makes service mesh relevant to more mainstream IT shops.

The National Center for Biotechnology Information (NCBI) in Bethesda, Md., has laid the groundwork for microservices with a service mesh built around Linkerd, which was developed by Buoyant. The bioinformatics institute used Linkerd to modernize legacy applications, some as many as 30 years old, said Borys Pierov, a software developer at NCBI.

Any app that uses the HTTP protocol can point to the Linkerd proxy, which gives NCBI engineers improved visibility and control over advanced routing rules in the legacy infrastructure, Pierov said. While NCBI doesn’t use Kubernetes yet — it uses HashiCorp Consul and CoreOS rkt container runtime instead of Kubernetes and Docker — service mesh will be key to container networking on any platform.

“Linkerd gave us a look behind the scenes of our apps and an idea of how to split them into microservices,” Pierov said. “Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.”

Matt Klein, software engineer at Lyft, presents the company’s experiences with service mesh architectures at KubeCon.

Kubernetes networking will cozy up with service mesh next year

Linkerd is one of the most well-known and widely used tools among the multiple open source service mesh projects in various stages of development. However, Envoy has gained prominence because it underpins a fresh approach to the centralized management layer, called Istio. This month, Buoyant also introduced a better performing, more efficient successor to Linkerd, called Conduit.

It’s still too early for any of these projects to be declared the winner. The Cloud Native Computing Foundation (CNCF) invited Istio’s developers, which include IBM, Microsoft and Lyft, to make Istio a CNCF project, CNCF COO Chris Aniszczyk said at KubeCon. But Buoyant also will formally present Conduit to the CNCF next year, and multiple projects could coexist within the foundation, Aniszczyk said.

Kubernetes networking challenges led Gannett’s USA Today Network to create its own “terrible, over-orchestrated” service mesh-like system, in the words of Ronald Lipke, senior engineer on the USA Today platform-as-a-service team, who presented on the organization’s Kubernetes experience at KubeCon. HAProxy and the Calico network management system have supported Kubernetes networking in production so far, but there have been problems under this system with terminating nodes cleanly and removing them from Calico quickly so traffic isn’t misdirected.

Lipke likes the service mesh approach, but it’s not yet a top priority for his team at this early stage of Kubernetes deployment. “No one’s really asking for it yet, so it’s taken a back seat,” he said.

This will change in the new year. The company plans to rethink the HAProxy approach to reduce its cloud resource costs and improve network tracing for monitoring purposes. It has done proof-of-concept evaluations around Linkerd and plans to look at Conduit, he said in an interview after his KubeCon session.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.