Tag Archives: Labs

NSS Labs drops antitrust suit against AMTSO, Symantec and ESET

NSS Labs ended its legal battle against the Anti-Malware Testing Standards Organization, Symantec and ESET.

On Tuesday, the independent testing firm dropped the antitrust lawsuit it filed in 2018 against AMTSO, a nonprofit organization, and several top endpoint security vendors, including Symantec, ESET and CrowdStrike. The suit accused the vendors and AMTSO of conspiring to prevent NSS Labs from testing their products by boycotting the company.

In addition, NSS Labs accused the vendors of instituting restrictive licensing agreements that prevented the testing firm from legally purchasing products for public testing. The suit also alleged AMTSO adopted a draft standard that required independent firms like NSS Labs to give AMTSO vendor members advance notice of how their products would be tested, which NSS Labs argued was akin to giving vendors answers to the test before they took it.

In May, NSS Labs and CrowdStrike agreed to a confidential settlement that resolved the antitrust suit as well as other lawsuits between the two companies stemming from NSS Labs’ 2017 endpoint protection report that included negative test results for CrowdStrike’s Falcon platform. Under the settlement, NSS Labs retracted the test results, which the firm admitted were incomplete, and issued an apology to CrowdStrike.

In August, a U.S. District Court judge for the Northern District of California dismissed NSS Labs’ antitrust claims, ruling in part that NSS Labs failed to show how the alleged conspiracy damaged the market, which is required for antitrust claims. The judge also said NSS Labs’ complaint failed to show ESET and AMTSO participated in the alleged conspiracy (Symantec did not challenge the conspiracy allegations in the motion to dismiss). The ruling allowed the company to amend the complaint; instead, NSS Labs dropped its lawsuit.

Still, the testing firm had some harsh words in its statement announcing the dismissal of the suit. NSS Labs said vendors “were using a Draft Standard from the non-profit group to demonstrate their dissatisfaction with tests that revealed their underperforming products and associated weaknesses, which did not support their marketing claims.”

“During the past year, AMTSO has made progress to be more fair and balanced in its structure, vendors have shown progress in working with testing organizations, and the market itself has had significant change and notable acquisition activity,” NSS Labs CEO Jason Brvenik said in the statement. “It is said that sunshine is the best disinfectant, and that has been our experience here. We look forward to continued improvement in the security vendor behaviors.”

AMTSO sent the following statement to SearchSecurity:

“While AMTSO welcomes NSS Lab’s decision to dismiss, its actions were disruptive, expensive, and without merit,” said Ian McShane, an AMTSO Board member and senior director of security products at Elastic. “However, we agree with its statement that ‘sunshine is the best disinfectant,’ and we’re looking forward to NSS Labs re-joining AMTSO, and to its voluntary participation in standard-based testing. We believe this will give customers a greater assurance that the tests were conducted fairly.”

AMTSO did not comment on whether the organization has made any specific changes to its structure or policies in the wake of the antitrust suit.

NSS Labs changed its approach to testing results earlier this year with its 2019 Advanced Endpoint Protection Group Test, which redacted the names of vendors that received low scores and “caution” ratings. At RSA Conference 2019, Brvenik told SearchSecurity that NSS Labs decided to take a “promote, not demote” approach that focuses on the vendors that are doing well.


HiQ Labs vs LinkedIn case OKs robot monitoring of employees

HiQ Labs Inc. has built a business of scraping and analyzing public data on LinkedIn Corp., a business networking site owned by Microsoft. LinkedIn wanted HiQ to stop, and the two ended up in federal court.

So far, LinkedIn is losing. The U.S. Court of Appeals for the Ninth Circuit ruled this week that San Francisco-based HiQ can keep using its software bots to collect that data. But even if LinkedIn drops its court effort, the issue is far from settled.

LinkedIn data is public, and anyone can view it. The lawsuit raises concerns about the use of software bots to automate social media monitoring. HiQ uses the bots to watch for profile changes, and its tools have drawn interest from the HR community. One of those tools, Keeper, can identify employees who are a potential flight risk; HR users of the service learn of that risk through individual risk scores.

The appeals court reaffirmed that public data on LinkedIn is not private. “There is little evidence that LinkedIn users who choose to make their profiles public actually maintain an expectation of privacy,” the court said. 

In a statement, LinkedIn said it is “disappointed in the court’s decision, and we are evaluating our options following this appeal.” It also said that it “will continue to fight to protect our members and the information they entrust to LinkedIn.” HiQ declined to comment.

What’s wholly public — and what isn’t

LinkedIn told the court that the data scraping is done without the consent of its members and is a violation of the Computer Fraud and Abuse Act (CFAA), an anti-hacking law. HiQ argued the information was “wholly public” and accessible to anyone.

Shain Khoshbin, an attorney at Munck Wilson Mandala, LLP in Dallas, described the court’s decision as troubling. He used a physical locker as an analogy to explain why: a person could look through a locker’s vents, “take pictures of its contents, and analyze and sell some version of that information to others — arguably whether or not the locker has a padlock, and even if the locker’s owner sends a cease and desist letter saying stop it.”

The owner of the contents of this locker “has no serious privacy expectation as to the contents of the locker,” Khoshbin said.

What seems lost in all the court decisions, Khoshbin said, is “what authority and authorization powers should be left to the owners of the data?”

It’s privacy vs. freedom

The case has split the opinion of Internet advocacy groups. The Electronic Privacy Information Center (EPIC) filed a brief arguing that the lower court erred. “Regrettably, the lower court discounted the privacy interests of users and required LinkedIn to make the personal data of LinkedIn users available to data aggregators for whatever purpose they wish. That cannot be correct.”

But the Electronic Frontier Foundation, which also filed a brief, is pleased with the outcome. It said the CFAA was designed to target people who hack into a computer, and that letting LinkedIn’s position prevail would set a precedent allowing any website to bar any software bot, a move that would hurt journalists, researchers and others.

LinkedIn has the technology to stop automated software bots from collecting its member data. Its robots.txt file, which websites use to tell bots which parts of a site they may crawl, instructs automated bots not to access its servers, except for the ones LinkedIn permits, such as Google’s search crawler. LinkedIn also uses security tools to block software bots. HiQ was fighting to keep LinkedIn from blocking its bots’ access to LinkedIn’s public information.
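To make the mechanism concrete, here is a minimal sketch using Python’s standard-library robots.txt parser; the user-agent strings and the profile URL are purely illustrative, and robots.txt is advisory rather than a technical block:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (illustrative URL).
rp = RobotFileParser()
rp.set_url("https://www.linkedin.com/robots.txt")
rp.read()

profile = "https://www.linkedin.com/in/some-public-profile"  # hypothetical path

# A permitted search crawler vs. an arbitrary scraper bot. Note that robots.txt
# only asks bots to comply; it cannot technically prevent access.
print(rp.can_fetch("Googlebot", profile))
print(rp.can_fetch("SomeScraperBot", profile))
```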

Protecting an information monopoly

Bryan Harper, manager of Schellman & Company, LLC, a global independent security and privacy compliance assessor in Tampa, Fla., said the ruling doesn’t change the ability of firms to protect themselves.

“In practical terms, companies basically continue business as usual,” Harper said. “If there is a malicious actor or a threat event that is captured by a monitoring tool, then a company should and has a duty to respond with their standard incident response procedures.” 

But “companies should not selectively target those scraping efforts simply to protect an information monopoly,” Harper said. It’s also impractical to block all bots, he said.

The takeaway is “you can’t target competitors” if the services rely on public data that isn’t considered private by its users, Harper said.

Still, the issues may be fleshed out in other lawsuits, “since this appeal involved an interim ruling on a preliminary injunction,” Khoshbin said.

The appeals court made it clear that even if the CFAA does not apply, entities that view themselves as victims may still be able to pursue other avenues, such as claims under state law, copyright infringement, misappropriation, unjust enrichment or breach of privacy, according to Khoshbin.


Startup Dgraph Labs growing graph database technology

Dgraph Labs Inc. is set to grow its graph database technology with the help of a cash infusion of venture financing.

The company was founded in 2015 as an effort to advance the state of graph database technology. Dgraph Labs’ founder and CEO Manish Jain previously worked at Google, where he led a team that was building out graph database systems. Jain decided there was a need for a high-performance graph database technology that could address different enterprise use cases.

Dgraph said July 31 it had completed an $11.5 million Series A funding round.

The Dgraph technology is used by a number of different organizations and projects. Among them is Intuit, which uses Dgraph as the back-end graph database for its open source project K-Atlas.

“We were looking for a graph database with high performance in querying large-scale data sets, fully distributed, highly available and as cloud-native as possible,” said Dawei Ding, engineering manager at Intuit.

Ding added that Dgraph’s technology stood out from both an architectural design and a performance benchmarking perspective. Moreover, he noted that being fully open source made Dgraph an even more attractive choice for Intuit’s open source software project.

The graph database landscape

Multiple technologies are available in the graph database landscape, including Neo4j, Amazon Neptune and DataStax Enterprise Graph, among others. In Jain’s view, many graph database technologies are actually graph layers, rather than full graph databases.

“By graph layer, what I mean is that they don’t control storage; they just overlay a graph layer on top of some other database,” Jain said.

So, for example, he said a common database used by graph layer-type technologies is Apache Cassandra or, in Amazon’s case, Amazon Aurora.

Screenshot: a Dgraph graph database showing all the movies directed by Steven Spielberg, their country of filming, genres, actors in those movies and the characters played by those actors.

“The problem with that approach is that to do the graph traversal or to do a graph join, you need to first bring the data to the layer before you can interact with it and do interesting things on it,” Jain commented. “So, there’s multiple back and forth steps and, therefore, the performance likely will decrease.”

In contrast, the founding principle behind Dgraph was that a graph database could scale horizontally while also improving performance, because the database itself controls how data is stored on disk.
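As a rough sketch of what that native traversal looks like in practice, here is a minimal example using Dgraph’s Python client, pydgraph; the endpoint address and the movie-graph predicate names (name@en, director.film, genre) are illustrative assumptions borrowed from Dgraph’s public movie-dataset tutorials, not any customer’s actual schema:

```python
import pydgraph

# Connect to a (hypothetical) local Dgraph Alpha node.
stub = pydgraph.DgraphClientStub("localhost:9080")
client = pydgraph.DgraphClient(stub)

# A single query resolves the director node and traverses directly to films
# and genres, because the graph engine owns the on-disk layout of nodes and edges.
query = """
{
  spielberg(func: eq(name@en, "Steven Spielberg")) {
    director.film {
      name@en
      genre { name@en }
    }
  }
}
"""

resp = client.txn(read_only=True).query(query)
print(resp.json)  # nested JSON: films, each with its genres

stub.close()
```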

Open source and the enterprise

Dgraph is an open source project and hit its 1.0 milestone in December 2017. The project has garnered more than 10,000 stars on GitHub, which Jain points to as a measure of the undertaking’s popularity.

Going a step further is the company’s Dgraph Enterprise platform, which provides additional capabilities an organization might need for support, access control and management. Jain said Dgraph Labs uses the open core model, in which the open source application is free to use, but an organization must pay for certain additional features.

Jain stressed that the core open source project is functional on its own — so much so that an organization could choose to run a 20-node Dgraph cluster with replication and consistency for free.

Why graph databases matter

A problem with typical relational databases is that with every data model comes a new table, or new schema, and, over time, that can become a scaling challenge, Jain said. He added that in the relational database approach, large data sets tend to become siloed over time as well. With a graph database, it is possible to unify disparate data sources.

As an example of how a graph database approach can help eliminate isolated data sources, Jain said one of Dgraph’s largest enterprise customers took 60 different vertical data silos stored in traditional databases and put all of them into a Dgraph database.

“Now, they’re able to run queries across all of these data sets, to be able to power not only their apps, but also power real-time analytics,” Jain said.

What’s next for Dgraph Labs

With the new funding, Jain said the plan is to open new offices for the company as well as expand the graph database technology.

One key area of future expansion is building a Dgraph cloud managed service. Another area that will be worked on is third-party integration, with different technologies such as Apache Spark for data analysis.

“Now that we have a bunch of big companies using Dgraph, they need some additional features, like, for example, encryption, and so we are putting a good chunk of our time into building out capabilities,” he said.


Cisco refused to participate in NSS Labs report on SD-WAN

Cisco refused to activate the Viptela software-defined WAN product NSS Labs bought for testing, leaving the research firm with a noticeable hole in its recent comparative report on SD-WAN vendors.

Cisco did not provide a reason for refusing to activate the product NSS Labs had purchased for between $30,000 and $40,000, NSS Labs CEO Vikram Phatak said this week. “There was no reason given other than, effectively, they didn’t want to be tested (for the NSS Labs report).”

Cisco’s action marked the first time a vendor had refused to turn on a product NSS Labs had bought for evaluation, Phatak said. Cisco’s Viptela team had initially told NSS Labs it would support the test, which led the firm to buy the product.

“That’s a first for us, candidly,” Phatak said. “And given Cisco’s ethical rules and so on — rules of conduct — I’m in shock because normally, they’re pretty straightforward to work with.”

Cisco refused to discuss the matter, saying in a statement, “We believe our customer traction, standing in the market and the continued productive innovation we’re driving speak for themselves.”

NSS Labs wants a refund

NSS Labs wants Cisco to refund the money spent on Viptela. It is hoping it can get the money back without going to court.

“I hope it doesn’t come to that,” Phatak said. “We haven’t talked to any lawyers. I’m assuming that we’ll be able to have the conversation and get our money back.”

Typically, NSS Labs buys products, and the vendors turn them on like they would for any other customer.

“If someone says they don’t want to be tested, we say, ‘That’s great, but if a product is good enough to be sold to the public, it’s good enough to be tested,'” Phatak said. “We’re going to buy it, and we’ll report to the public.”

NSS Labs noted Cisco’s refusal to activate the Viptela purchase in its SD-WAN Comparative Report, which was the company’s first SD-WAN test. Not having Cisco in the evaluation left out one of the largest SD-WAN vendors and a major tech company.

In the first quarter, London-based IHS Markit listed Cisco as No. 4 in the SD-WAN market, just behind Silver Peak. VMware was first with a 19% share, followed by Aryaka with 18%.

The NSS Labs report, released this month, compared the products of nine vendors, including VMware’s NSX SD-WAN, formerly VeloCloud. VMware is Cisco’s largest competitor.

NSS Labs had also planned to include Silver Peak in the comparison but noted it was unable to obtain the product in time for testing.

Tech companies often cite recommended ratings in NSS Labs reports in marketing materials. In April, Cisco highlighted in a blog post the organization’s “recommended” rating for the Cisco Advanced Malware Protection for Endpoints product.

Based on its recent SD-WAN tests, NSS Labs recommended products from VMware, Talari Networks and Fortinet and listed products from Citrix Systems, FatPipe Networks, Forcepoint and Versa Networks as “verified.” Tech buyers should consider recommended and verified products as candidates for purchase, according to NSS Labs.

The company issued “caution” ratings for Barracuda Networks and Cradlepoint, which means companies should not deploy their products without a comprehensive evaluation, NSS Labs said.

Dell EMC VDI helps university expand application access

By 2016, computer labs at the University of Arkansas had become so high maintenance that they took up an inordinate amount of the IT staff’s time.

It was difficult to repair hardware, update software and protect against malware on about 400 physical desktops in about eight labs. Plus, some applications were only available on certain computers in specific labs, which limited student access.

To solve these problems, the university deployed a combination of Dell EMC and VMware products to provide virtual desktops and a revamped infrastructure to support them.

“VDI greatly reduces the cost and the need for maintenance … so this frees up the IT resources across campus to do more important things,” said Stephen Herzig, director of enterprise systems for the university in Fayetteville, Ark. “And this ‘any device, anytime, anywhere’ concept that we have frees the student from geography to get the application that they need.”

The university began the Dell EMC VDI project in December 2016 and, by March 2017, had delivered virtual desktops to thin clients in several labs. Herzig’s team chose a mix of rack servers, thin clients and virtualization software for its VDI deployment.

What made the approach so innovative was the combination of these technologies to deliver scalable, flexible VDI. The university essentially developed its own type of hyper-converged infrastructure, and the vendors collaborated on that infrastructure’s delivery, before a similar bundle was commercially available. Dell EMC and its subsidiary, VMware, now offer VDI Complete, a package of hyper-converged infrastructure appliances, software and thin clients from the two vendors.

“Had we just done a plain, vanilla VDI, we wouldn’t be talking,” Herzig said. “It was the way we went about it and making all of these technologies work in concert with each other.”

‘A vision we were aligned with’

The university chose Dell R630 servers for their high density; one rack hosts 1,000 desktops and up to 2,000 applications. And for students in the schools of architecture and engineering who needed graphics-heavy apps, for example, the R730 servers allowed the virtual desktops to support GPUs. (Plus, at the same time as the Dell EMC VDI project, the university moved many of its devices from Windows 7 to Windows 10, which tends to require more graphics processing on basic applications.)

“We wanted to have the most rich experience and allow everyone on campus to be able to use [VDI], so that meant we needed to cover everybody,” said Jon C. Kelley, associate director of enterprise systems at the university. “Having a GPU for literally every desktop helped with even the base-level stuff on Windows 10.”

Before choosing Dell EMC VDI services, the team had looked at Hewlett Packard Enterprise for infrastructure and Citrix for VDI software. But IT staff members were already familiar with Dell, and they felt more attracted to Dell EMC’s philosophy, which pushed the commoditization of hardware and the value of software abstraction, Kelley said.

In addition, VMware’s vision of simplified desktop delivery with major end-user visibility, using its NSX and vRealize Suite products for cloud infrastructure management, resonated with IT, he said.

“An individual having data and needing to manipulate that data using applications — while wanting to have access to both, wherever you are — was a vision we were aligned with,” he added.

Users work in a computer lab at the university.

A bundled approach to desktop virtualization and its back-end infrastructure can help organizations reduce complexity, said Rhett Dillingham, vice president and senior analyst at Moor Insights & Strategy.

“VDI has been one of the more complicated technologies to plan for and deliver at scale,” he said. “To have a single vendor not just deliver but support that is key. The ability to call a single vendor and have them run triage and manage resolution of all issues … is a drastic simplification.”

Teamwork made the dream work

To ensure that the Dell EMC VDI project went smoothly from planning to implementation, members of the university’s communications, desktop support and IT infrastructure teams formed a group that met regularly.

“That was really crucial because we needed buy-in from desktop support,” Kelley said. “A lot of those people were pretty resistant to the VDI concept. Getting them to understand, ‘Oh, it frees up my time to do other things, and I also still have control over the imaging and things like that,’ was really key.”

The IT staff nailed down the overall architecture first, then deployed the thin clients, created a management cluster and used contractor services to help them deploy NSX. Lastly, they built the compute nodes, which provide the memory, processing and other resources for the deployment’s virtual machines, and added them to VMware Horizon to enable virtual desktop provisioning.

“The university IT team was really strong,” said Andrew McDaniel, director of VDI Ready Solutions at Dell EMC. “They got deep into the deployment and took on responsibility for doing quite a bit of the work themselves.”

The biggest challenge was figuring out how to organize billing for the on-demand access to software and services, because the university has a central IT department and several additional, distributed IT groups that support specific colleges and departments.

“When you’re using Workspace One and deploying apps, there is no static number,” Herzig said. “How do you bill for that?”

The university is experimenting with two models. With one, a group pays a specific fee per year per endpoint, and the central IT department provides and maintains the thin client and monitor, as well as covers back-end infrastructure and licensing costs. In the second, the group buys its own thin clients and monitors, which it is responsible for maintaining, and pays a reduced yearly fee per endpoint to cover infrastructure and licensing.
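To show how the two models might compare, here is a minimal back-of-the-envelope sketch; every number in it is a hypothetical placeholder, since the article does not disclose the university’s actual fees or hardware costs:

```python
# All figures are assumed for illustration only.
endpoints = 30                 # thin-client seats in a hypothetical department

# Model 1: central IT provides and maintains the thin client and monitor.
full_service_fee = 350         # assumed annual fee per endpoint, USD

# Model 2: the department buys and maintains its own hardware,
# paying a reduced annual fee for infrastructure and licensing.
infra_only_fee = 200           # assumed reduced annual fee per endpoint, USD
thin_client_cost = 400         # assumed one-time hardware cost per endpoint, USD
hardware_lifespan_years = 4    # assumed refresh cycle

model1_per_year = endpoints * full_service_fee
model2_per_year = endpoints * (infra_only_fee + thin_client_cost / hardware_lifespan_years)

print(f"Model 1 (central IT owns hardware): ${model1_per_year:,.0f}/year")
print(f"Model 2 (department owns hardware): ${model2_per_year:,.0f}/year")
```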

The Dell EMC VDI deployment was so successful that within only a couple months of deploying the first virtual desktops, the IT department amassed a long list of other groups at the university that wanted to implement them too. Herzig’s team continuously works to deliver virtual desktops to more groups and plans to implement VDI for faculty desktops and student and faculty mobile devices.

“From day one in the labs, we have had virtually no complaints,” Herzig said. “And typically if you’ve got 30 machines, you’ve got a couple that are down for one reason or another. Well, that problem is gone. The support people spend a lot less time fiddling with lab machines trying to bring them back up or solve problems or help users deal with connectivity issues or application issues.”

As more organizations see how this type of bundled approach to VDI can be successful, they may be more willing to adopt the technology, Dillingham said.

Maxta hyper-converged MxSP helps services firm fuel growth

When Larry Chapman arrived at Trusource Labs as IT manager, the technical support services provider was in the hyper-growth stage, while its IT infrastructure was stuck in neutral.

The IT infrastructure had no shared storage and relied on a single server that represented a single point of failure. Chapman decided to upgrade it in one shot with hyper-convergence. Trusource installed Maxta MxSP software-based hyper-convergence running on Dell PowerEdge servers last April when it opened a new call center.

The company started in 2013 with a call center in Austin, Texas, and added one in Limerick, Ireland, in the past year. It has since built a second call center in Austin, a 450-seat facility called “Austin North” to deal with the company’s rapid customer growth and for redundancy. Trusource plans further expansion with another call center set to open in Alpine, Texas, in 2018.

In four years, Trusource has grown to 600 employees and around $30 million in annual revenue.

“We were in hyper-growth mode from when we started until I got here,” said Chapman, who joined Trusource in mid-2016. He said when he arrived at Trusource the network consisted of “one big HP server with 40 VMs and 40 cores. Obviously, that’s a single point of failure; there was no shared storage and no additional servers.”

Chapman considered building his IT infrastructure out the traditional way, adding a dedicated storage array and more servers. But that would require adding at least one engineer to his small IT staff.

“I wasn’t sure I wanted to hire a storage engineer to calculate LUNs and do all the storage stuff,” he said. “Over the course of years, there’s a lot of salary involved there. So I started looking at new next-generation things.”

That led him to hyper-converged infrastructure, which requires no storage specialists. He checked out HCI players Nutanix, SimpliVity and Maxta’s MxSP.

Chapman ruled out SimpliVity after Hewlett Packard Enterprise bought the startup in January 2017. He worried SimpliVity OmniStack software would no longer be hardware-agnostic after the deal closed.

“I like the option to be hardware-agnostic,” he said. “I will buy my server from whoever can give me the best deal at the time. At the time I looked at SimpliVity, it was hardware-agnostic, but I didn’t think it would be in the future.”

He liked Nutanix’s appliance, but its initial cost scared him off. The price seemed especially steep compared to Maxta. Chapman chose Maxta’s freemium license option, which provides software for free and charges for maintenance. He said the Maxta hyper-converged MxSP price tag came to $54,000 for four three-node clusters. After the initial three years, he will pay $3,000 per server for support.

“I had to look at the quote a couple of times. I thought they left something off,” he said. He said a comparable setup with Nutanix would have cost around $150,000 just for the HCI appliances.

After selecting the Maxta hyper-converged software, Chapman priced servers, picking three Dell PowerEdge R530 models with 24-core processors, 120 GB of RAM and four 10 GigE interfaces for a total of $24,000. Each server has 800 GB solid-state drives for cache and six 1 TB hard disk drives in a hybrid setup.

Throw in switching and cabling, and Chapman said he ended up with his entire infrastructure for a 450-seat call center based on the Maxta hyper-converged MxSP software for $125,000.

Chapman said he was a bit leery of installing do-it-yourself software, but he followed Maxta’s checklist and did it himself anyway. Installation went smoothly.

“Forty minutes, start to finish, and boom, I was running hyper-converged infrastructure,” he said.

As part of the setup process, Maxta hyper-converged MxSP asks how many copies of data to keep on the virtual machines. Chapman said he selected three copies across his three nodes, “so no matter what combination of things I lose, as long as I have two of the servers up, the VMs will run like nothing happened.”
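A tiny sketch of the arithmetic behind that choice, assuming the simplest possible model in which a replication factor of three on a three-node cluster places one full copy of each VM’s data on every node (the actual Maxta placement logic may differ):

```python
from itertools import combinations

nodes = {"node1", "node2", "node3"}
copies = 3  # replication factor chosen at setup; here, one copy per node

# With a copy on every node, losing any single node leaves two intact copies,
# so the VMs keep running, as Chapman described.
for failed in combinations(nodes, 1):
    surviving = nodes - set(failed)
    print(f"lost {failed[0]}: {len(surviving)} copies of the data remain")
```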

That bailed him out when a parity error brought down a server, but no one even noticed until an alert went out. “Everything was still chugging along,” Chapman said. A firmware upgrade fixed the problem.

Trusource now runs its production workload on the Maxta MxSP HCI appliances.

He said Trusource does not use dedicated backup software for the Maxta hyper-converged cluster but replicates between data centers.

Chapman said his setup allows easy upgrades at no additional software cost because of the Maxta perpetual license.

“I will just take the server off line, shut it down, put a new server in, turn it on and repeat the process for each new server,” he said. “If I need bigger drives, I can just swap out drives while the system’s running. If I need more processing power, I just add another node in the cluster, another $8,000 server and I’m done.”

Kubernetes adoption sends Rancher users back to eval mode

NEW YORK — In September 2017, Rancher Labs told customers it would discontinue its Cattle orchestration and scheduling framework in favor of Kubernetes for Rancher 2.0. This disclosure sent enterprise IT teams back to the drawing board and into strategy meetings to ensure they have no surprises on production container deployments.

At Velocity Conference 2017 here last week, Rancher users — one wholly on premises and the other 100% cloud-native — shared how the vendor’s Kubernetes adoption will guide their future operations.

“My hope is that Rancher does a good job integrating their existing API and their functionality into the Kubernetes world, so it’s not really impactful [to our teams],” said Andrew Maurer, IT manager for web platform ops at Cleveland-based Dealer Tire, which serves the automotive industry.

“We’re not sure what the benefit is yet,” Maurer said, but he’s familiar with the capabilities Kubernetes offers. Dealer Tire evaluated it along with Rancher, Mesos and Docker when the company ramped up containers six months ago.

Dealer Tire’s web platform ops team touts Rancher’s easy-to-use interface versus the complex workings of Kubernetes, a feeling shared by fellow Rancher adherents at Washington, D.C.-based Social Tables, which provides SaaS products for event planners and management.

During a search to replace Amazon Web Services’ EC2 Container Service, Social Tables evaluated Kubernetes on AWS and the Kubernetes Operations (kops) provisioning tool, but ultimately landed on Rancher’s Cattle container management technology.

“We trialed Kubernetes on AWS via both Kelsey Hightower’s ‘Kubernetes the Hard Way’ and the kops provisioning tool, but ultimately were unsuccessful establishing a reliable overlay network,” said Michael Dumont, lead systems engineer in DevOps at Social Tables.

Editor’s note: Hightower, a key strategist for Google Cloud Platform, has called the “Kubernetes the Hard Way” project a way to learn how the Kubernetes components fit together with networking and role-based access control.

Now that Rancher has made the move to Kubernetes, Social Tables’ team plans to provision Kubernetes through the tech preview in Rancher 2.0 as soon as possible.

“I’m very excited to get started with the Kube-native tools, like Helm, Draft and Istio, that are rapidly maturing,” Dumont said. Best-case scenario: Rancher’s rich user interface stays the same, and Social Tables picks up the “rich, consistent” Kubernetes API for its in-house tools, he said.

Kubernetes adoption wasn’t ever off the table for even enthusiastic Rancher users. To address growth and changing needs, Dealer Tire’s IT organization as a whole likely would have reassessed its container management tool set in 2018 and given Kubernetes a second look, Maurer said.

Maurer said he will rely on Rancher’s support to help the team through the conversion, while Dumont said he currently uses the community version without paid, enterprise-level support. Kubernetes adoption is not only a technological change; it will also affect how Dealer Tire’s organization operates, including who will make decisions about deployments and architecture, who will support the new technologies and who simply needs to be aware of changes occurring.

While there’s no sign of a slowdown in Kubernetes adoption, Maurer stressed the importance of tool choice for IT operations.

“I don’t necessarily like tying our wagons and investing everything we’ve got into one technology — we get pigeonholed into whatever that technology provides,” he said. Instead, when the team needs a feature, such as secrets management, they evaluate options for what works best.

“I don’t believe that one company has everything,” Maurer said.


Azure DevTest Labs offers substitute for on-premises testing

Azure DevTest Labs brings a consistent development and test environment to cost-conscious enterprises. The service also gives admins the chance to explore Azure’s capabilities and determine other ways the cloud can assist the business.

A DevTest Lab in Azure puts a virtual machine in the cloud to verify developer code before it moves to the organization’s test environment. This practice surfaces initial bugs before operations starts an assessment. DevTest Labs gives organizations a way to investigate the Microsoft cloud platform and its compute services without incurring a large monthly cost. Look at Azure DevTest Labs as a way to augment internal tests — not replace them.

Part one of this two-part series explains Azure DevTest Labs and how to configure a VM for lab use. In part two, we examine the benefits of a cloud-based test environment.

DevTest Labs offers a preliminary look at code behavior

After you create a lab with a server VM, connect to it using the same tools you would use in an on-premises environment — Visual Studio or Remote Desktop for Windows VMs and Secure Shell (SSH) for Linux VMs. Development teams can push the code to an internal repository connected to the Azure environment and then deploy it to the DevTest Lab VM.

Use the DevTest Lab VM to check what happens to the code:

  • when no modifications have been made to infrastructure; and
  • if the application runs on different versions of an OS.

Windows Server VMs in Azure provide uniformity

An organization’s test environment often has stipulations, such as a requirement to mirror the production Windows Servers through the last patch cycle, which can hinder the development process. Azure DevTest Labs uncovers how applications behave on the latest Windows Server version. This prepares IT for any issues before the internal testing environment moves to that server OS version. IT also can use DevTest Labs to check new features of an OS before they roll it out to production.

DevTest Labs assists admins who want to study for a certification and need a home lab environment to practice and study. But building a home lab is expensive when you consider costs for storage, server hardware and software. Virtualized labs with VMware Workstation or Client Hyper-V reduce this cost, but it’s still expensive to buy a powerful laptop that can handle all the new technologies in a server OS.

Admins can stand up Windows Server 2016 in DevTest Labs to understand the capabilities of the OS and set up an automatic shutdown time. This gives employees access to capable systems for after-hours studying, and the business only pays for the time the lab runs.
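A minimal sketch of that cost argument, using made-up numbers since the article quotes no prices; the hourly rate and study schedule below are assumptions, not Azure’s actual pricing:

```python
# Hypothetical figures for illustration; real Azure rates vary by VM size and region.
hourly_rate = 0.50            # assumed pay-as-you-go cost of the lab VM, USD/hour
study_hours_per_day = 3       # lab powered on only for evening study sessions
study_days_per_month = 20

always_on_cost = hourly_rate * 24 * 30
auto_shutdown_cost = hourly_rate * study_hours_per_day * study_days_per_month

print(f"VM left running 24x7:        ${always_on_cost:,.2f}/month")
print(f"VM with auto-shutdown hours: ${auto_shutdown_cost:,.2f}/month")
```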

Azure DevTest Labs doesn’t replace on-premises testing

Many organizations have replica environments that mirror production sites, which ensures any fixes and changes will function properly when they go live. Azure DevTest Labs should not replace an on-premises test environment.

[Embedded content: steps to produce an Azure DevTest Lab.]

Implement DevTest Labs to prevent testing delays: teams can start work in DevTest Labs while they refine the items they need from operations. And because Azure is built to scale, users can add resources with a few clicks. An on-premises environment does not have the same flexibility to grow on demand, which can slow the code development process.

Production apps don’t have to stay in Azure

IT teams can also use Azure DevTest Labs to test applications or configurations and then deploy them into the company’s data center. When the test phase of development passes, shut down the DevTest Lab until it is needed again.

In addition, IT teams can turn to DevTest Labs to showcase how the business can use Azure cloud. If the company wants to work with a German organization, for example, it must contend with heavy regulations about how data is handled and who owns it. Rather than build a data center in Germany, which could be cost-prohibitive, move some apps into an Azure region that covers the European Union or Germany. This is much less expensive because the business only pays for what it uses.

Still, regulatory issues override all the good reasons to use Azure. If you’re unsure which regulatory requirements your organization needs to meet, consult Microsoft’s published list of compliance offerings. You also can examine Microsoft’s audit reports to perform a risk assessment and see if Azure meets your company’s compliance needs.

Microsoft offers a 30-day free trial of DevTest Labs. It’s a great resource for development and testing, and provides an inexpensive learning environment for administrators who want to explore current and upcoming technologies.
