
Jaguar Land Rover, BI Worldwide share GitLab migration pros and cons

Microsoft’s proposed acquisition of popular code repository vendor GitHub also thrust competitor GitLab into the spotlight. A quarter-million customers tried to move code repositories from GitHub to GitLab last week in the wake of the Microsoft news, a surge that crashed the SaaS version of GitLab.

Enterprises with larger, more complex code repositories will need more than a few days to weigh the risks of the Microsoft acquisition and evaluate alternatives to GitHub. They can, however, draw on the experience of earlier enterprise GitLab converts, who shared the pros and cons of their own migrations.

BI Worldwide, an employee engagement software company in Minneapolis, considered a GitLab migration when price changes to CloudBees Jenkins Enterprise software drove a sevenfold increase in the company’s licensing costs for both CloudBees Jenkins Enterprise and GitHub Enterprise.

GitLab offers built-in DevOps pipeline tools with its code repositories in both SaaS and self-hosted form. BI Worldwide found it could replace both GitHub Enterprise and CloudBees Jenkins Enterprise with GitLab for less cost, and made the switch in late 2017.

“GitLab offered better functionality over GitHub Enterprise because we don’t have to do the extra work to create web hooks between the code repository and CI/CD pipelines, and its CI/CD tools are comparable to CloudBees,” said Adam Dehnel, product architect at BI Worldwide.

GitLab pipelines
GitLab’s tools include both code version control and app delivery pipelines.

Jaguar Land Rover-GitLab fans challenge Atlassian incumbents

Automobile manufacturer Jaguar Land Rover, based in London, also uses self-hosted GitLab among the engineering teams responsible for its in-vehicle infotainment systems. A small team of three developers in a company outpost in Portland, Ore., began with GitLab’s free SaaS tool in 2016, though the company at large uses Atlassian’s Bitbucket and Bamboo tools.

As of May 2018, about a thousand developers in Jaguar Land Rover’s infotainment division use GitLab, and one of the original Portland developers to champion GitLab now hopes to see it rolled out across the company.

Sometimes vendors … get involved with other parts of the software development lifecycle that aren’t their core business, and customers get sold an entire package that they don’t necessarily want.
Chris Hill, head of systems engineering, Jaguar Land Rover’s infotainment systems

“Atlassian’s software is very good for managing parent-child relationships [between objects] and collaboration with JIRA,” said Chris Hill, head of systems engineering for Jaguar Land Rover’s infotainment systems. “But sometimes vendors can start to get involved with other parts of the software development lifecycle that aren’t their core business, and customers get sold an entire package that they don’t necessarily want.”

Comparing GitLab with tools such as Bitbucket and Bamboo largely comes down to personal preference rather than technical feature gaps, but Hill said he finds GitLab more accessible to both developers and product managers.

“We can give developers self-service capabilities so they don’t have to chew up another engineer’s time to make merge requests,” Hill said. “We can also use in-browser editing for people who don’t understand code, and run tutorials with pipelines and Rundeck-style automation jobs for marketing people.”

Jaguar Land Rover’s DevOps teams use GitLab’s collaborative comment-based workflow, where teams can discuss issues next to the exact line of code in question.

“That cuts down on noise and ‘fake news’ about what the software does and doesn’t do,” Hill said. “You can make a comment right where the truth exists in the code.”

GitLab offers automated continuous integration testing of its own and plugs into third-party test automation tools; both are coordinated by the GitLab Runner daemon. Runner will be instrumental in delivering more frequent over-the-air software updates to in-car infotainment systems through a third-party service provider called Redbend, which means Jaguar Land Rover vehicle owners will get automatic infotainment updates without a trip to the dealership for installation. This capability will be introduced with the new Jaguar I-Pace electric SUV in July 2018.

Balancing GitLab migration pros and cons

BI Worldwide and Jaguar Land Rover both use the self-hosted version of GitLab’s software, which means they escaped the issues SaaS customers suffered with crashes during the Microsoft GitHub exodus. They also avoided a disastrous outage that included data loss for GitLab SaaS customers in early 2017.

Still, their GitLab migrations have come with downsides. BI Worldwide jumped through hoops to get GitLab’s software to work with AWS Elastic File System (EFS), only to endure months of painful conversion from EFS to Elastic Block Store (EBS), which the company just completed.

GitLab never promised that its software would work well with EFS, and part of the issue stemmed from the way AWS handles EFS burst credits for performance. But about three times a day, response time from AWS EFS in the GitLab environment would shoot up from an average of five to eight milliseconds to spikes as high as 900 milliseconds, Dehnel said.
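The spikes Dehnel describes are easy to surface once latency samples are collected. The sketch below is illustrative, not BI Worldwide's tooling: the baseline mirrors the 5- to 8-millisecond average quoted above, and the sample data is made up.

```python
# Illustrative spike detection over storage latency samples (milliseconds).
# Baseline reflects the article's quoted 5-8 ms average; samples are invented.

def find_spikes(samples_ms, baseline_ms=8.0, factor=10.0):
    """Return (index, value) pairs where latency exceeds factor * baseline."""
    threshold = baseline_ms * factor  # 80 ms with the defaults
    return [(i, v) for i, v in enumerate(samples_ms) if v > threshold]

samples = [5.2, 6.1, 7.8, 6.4, 900.0, 5.9, 7.1, 650.0, 6.6]
print(find_spikes(samples))  # → [(4, 900.0), (7, 650.0)]
```

A few spikes of two orders of magnitude, as in the 900-millisecond case cited here, are exactly what burst-credit exhaustion tends to look like in such a trace.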

“EBS is quite a bit better, but we had to get an NFS server setup attached to EBS and work out redundancy for it, then do a gross rsync project to get 230 GB of data moved over, then change the mount points on our Rancher [Kubernetes] cluster,” Dehnel said. “The version control system is so critical, so things like that are not taken lightly, especially as we also rely on [GitLab] for CI/CD.”

GitLab is working with AWS to address the issues with its product on EFS, a company spokesperson said. For now, its documentation recommends against deployment with EFS, and the company suggests users consider deployments of GitLab to Kubernetes clusters instead.

Electron framework flaw puts popular desktop apps at risk

A vulnerability discovered in a popular app development tool has been inherited by the many desktop apps built with it.

The Electron framework uses Node.js and Chromium to build desktop apps for popular web services — including Slack, Skype, WordPress.com, Twitch, GitHub, and many more — while using web code like JavaScript, HTML and CSS. Electron announced that a remote code execution vulnerability in the Electron framework (CVE-2018-1000006) was inherited by an unknown number of apps.

Zeke Sikelianos, a designer and developer who works at Electron, wrote in a blog post that only apps built for “Windows that register themselves as the default handler for a protocol … are vulnerable,” while apps for macOS and Linux are not at risk.

Amit Serper, principal security researcher at Cybereason, said a flaw like the one found in the Electron framework “is pretty dangerous since it allows arbitrary command execution by a simple social engineering trick.”

A flaw like this is pretty dangerous since it allows arbitrary command execution by a simple social engineering trick.
Amit Serper, principal security researcher at Cybereason

“Electron apps have the ability to register a protocol handler to make it easier to automate processes for the Electron apps themselves (for example, if you’ll click a link that starts with slack:// then Slack will launch. It makes it easier to automate the process of joining a Slack group),” Serper told SearchSecurity by email. “The vulnerability is in the way that the protocol handler is being processed by the Electron app, which allows an attacker to create a malicious link to an Electron app which will execute whatever command that the attacker wanted to run.”
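The failure mode Serper describes is a classic argument-injection problem. The sketch below is not Electron's code — it is a language-neutral Python illustration of why splicing an attacker-controlled protocol URL into a command string is dangerous, and how quoting confines it; the `myapp://` scheme and `open-app` command are invented:

```python
# Illustrative only: not Electron's code. Shows why an unsanitized
# protocol URL spliced into a command string enables command injection.

import shlex

def naive_open(url):
    # Vulnerable pattern: the URL lands inside a shell command string,
    # so a crafted link can smuggle extra commands past the quotes.
    return 'open-app "%s"' % url

def safe_open(url):
    # Quote the untrusted value so it can only ever be a single argument.
    return "open-app " + shlex.quote(url)

evil = 'myapp://" && calc && "'
print(naive_open(evil))  # the && segments would run as separate shell commands
print(safe_open(evil))   # the whole URL stays one quoted argument
```

The fix Electron shipped follows the same principle: treat the incoming protocol URL as data, never as part of a command line.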

Sikelianos urged developers to update apps to the most recent version of Electron as soon as possible.

More than 460 apps have been built with the flawed Electron framework, but it is unclear how many of those apps are at risk, and experts noted that code reviews could take a while.

Security audits

Lane Thames, senior security researcher at Tripwire, said mechanisms for code reuse like software libraries, open source code, and the Electron framework “are some of the best things going for modern software development. However, they are also some of its worst enemies in terms of security.”

“Anytime a code base is in use across many products, havoc will ensue when (not if) a vulnerability is discovered. This is inevitable. Therefore, developers should ensure that mechanisms are in place for updating downstream applications that are impacted by the vulnerabilities in the upstream components,” Thames told SearchSecurity. “This is not an easy task and requires lots of coordination between various stakeholders. In a perfect world, code that gets used by many other projects should undergo security assessments with every release. Implementing a secure coding practice where every commit is evaluated at least with a security-focused code review would be even better.”

Serper said developers need to “always audit their code and be mindful to security.”

“However, in today’s software engineering ecosystem, where there is a lot of use of third party libraries it is very hard to audit the code that you are using since many developers today use modules and code that was written by other people, completely unrelated to their own project,” Serper said. “These are vast amounts of code and auditing third party code in addition to auditing your own code could take a lot of time.”

Justin Jett, director of audit and compliance at Plixer International Inc., a network analysis company based in Kennebunk, Maine, said the Electron framework flaw was significant, given that “affected applications like Skype, Slack, and WordPress are used by organizations to host and share their most critical information.”

“If these applications were to be compromised, the impact could be devastating. Developers that use third-party frameworks, like Electron, should audit their code on a regular basis, ideally quarterly, to ensure they are using an up-to-date version of the framework that works with their application and has resolved any security issues from previous releases,” Jett told SearchSecurity. “Additionally, platform developers, like Electron, should complete routine audits on their software to ensure that the developers taking advantage of their platform don’t expose users to security vulnerabilities — vulnerabilities which, left unresolved, could cause profound damage to businesses that rely on these applications.”

Top 10 blog posts of 2017 illuminate top CIO goals

SearchCIO’s most popular blog posts of 2017 point to a set of lofty — and mandatory — CIO goals: artificial intelligence, digital transformation, multicloud management. IT leaders are learning all they can about these tech trends. The aim? To help their companies gain business advantage — before their competitors do.

Readers perused posts about avoiding getting locked into relationships with public cloud vendors, the absence of a universal platform for digitally connected smart cities and the coming proliferation of AI in the workplace.

The blog posts IT leaders gravitated to in 2017 covered how to install robotic process automation technology, what copy data management software is good for and what to look for in cloud management platforms. They also revealed some of the top CIO goals of 2017. Here are the year’s 10 most-read blog posts.

10. Managing the unmanageable

IT departments today are overseeing an ever-expanding assortment of cloud services — and it’s not easy. Each service requires a different management tool, and juggling all that “is just painful,” said IBM cloud development expert Mike Edwards. In “Out of many, one hybrid cloud management platform,” Edwards gives a rundown of the functions to look for in cloud management platforms, commercial tools meant to rein in cloud chaos. Among them are integration, so they can pull together disparate computing systems; general services, including a central portal to manage all a company’s cloud services; and financial management, to track the resources consumed and how much money is spent on them.

Cloud management platform functions
This slide, from a Cloud Standards Customer Council presentation in July, outlines the capabilities of a cloud management platform.

9. Cloud, consolidated

The public cloud firmament is ruled not by a pantheon of platform providers but by a tiny clique of cloud gods. According to an August report by Forrester Research, organizations hosting IT and business operations in the cloud shouldn’t let the small number of big players — Amazon, Microsoft and Google are the top three — lull them into a one-provider strategy. Those that do may see business come to a halt should the provider experience an outage. Or they may complain bitterly if a provider raises its prices — and then grudgingly pay up. In “Forrester: Go multicloud, ditch public cloud platform lock-in,” analyst Andrew Bartels advises CIOs on ways to reap cloud benefits — and lower risk. 

8. Making data talk

Bob Rogers, chief data scientist, Intel Corp.

What makes a great data scientist? At a talk at Harvard University, Bob Rogers, chief data scientist at Intel Corp., started with what doesn’t: creating a report the business just asked for. A great data scientist needs to understand algorithms and statistics to produce analytics, of course, but “can also communicate with the stakeholders who are going to use those results.” In “What Intel’s Bob Rogers looks for when hiring data scientists,” Rogers describes the detailed “conversation” these practitioners need to have with users of data in order to dig up the insights that matter to the business.

7. Urban renewal

City CIOs examining initiatives aimed at making their cities “smart” — using data to improve municipal services — are at a frontier. There are no technical standards for how data is collected and measured. There’s no analytical data platform. There’s not even one understanding of what a smart city is. “If you go to any smart city conference, you’re going to find as many definitions of a smart city as there are attendees,” said Bob Bennett, chief innovation officer for Kansas City, Mo. The post “Smart city platform: A work in progress” reports on a conference convened to address big questions swirling around smart city projects today. The verdict? It’s too early for answers.

6. What’s my job?

Know of a chief digital officer hired to drive employee productivity and operational efficiency? Then the job description is in need of a redo, said Jim Fowler. “That’s the role of the CIO,” the vice president and CIO at General Electric said at MIT Sloan CIO Symposium in May. Fowler delineates the two roles in “CIO doesn’t play chief digital officer role at GE.” The CDO should be focused on using data to develop commercial products, Fowler said. At GE, an old company going through huge digital changes, the roles are separate and distinct. As CIO, Fowler is working toward a billion-dollar productivity target. The CDO, who happens to be his boss, William Ruh, “is focused on turning us into a $10 billion software business.”

From left to right, Peter Weill, of MIT; Jim Fowler, of GE (on screen); David Gledhill, of DBS Bank; Ross Meyercord, of Salesforce; and Lucille Mayer, of BNY Mellon, chat on stage at the MIT Sloan CIO Symposium in Cambridge, Mass., on May 24.

5. The business of AI

CIOs who want to inject AI into their companies’ lifeblood have some work to do. Today, 80% of organizations are considering use of AI or examining it, while just a small percentage are using it in their core business, according to McKinsey & Co. research. In “McKinsey: You can’t do enterprise AI without digital transformation,” McKinsey partner Michael Chui said the entire organization needs to be on board “to move the needle from a corporate standpoint.” CIOs need to build the foundation for AI in business first by determining where the potential is — and then by pushing ahead on digital efforts, digitizing infrastructure, amassing data and making it easy to access.

4. Double take

Rosetta Stone’s Mark Moseley is thankful for having been to one boring meeting. The vice president of IT at the language-learning company agreed to meet sales reps from Actifio, which sells copy data management software. “I didn’t care,” he told SearchCIO in “Waking up to benefits of copy data management software.” “I was mostly zoned out of the meeting.” Until he realized that the vendor’s product could clone an entire database in minutes — and help his development team work more efficiently. After installing the software, Moseley found he could do other useful tasks, such as virtualize his data, which allows him to store less, and spin up disaster recovery sites in the cloud.

3. ‘What I want when I want it’

Nestlé’s 100-year-old water delivery business is going through an immense transition. The prime objective used to be making sure that customers didn’t run out of bottled water before a truck delivered more. “Now it’s, ‘Make sure you deliver what I want when I want it,'” said Aymeric Le Page, vice president of business strategy and transformation at Nestlé Waters North America. The post “Nestlé builds ‘digital ecosystems’ to transform its massive bottled water biz” describes the technological and cultural changes the company is ushering in to personalize its service for convenience-obsessed customers — and it draws a critical parallel to how IT leaders should be thinking about serving the business.

2. My software, my co-worker

AI will supplant the UI, Accenture says; count on it. “Accenture: AI is the new UI” examines the consulting outfit’s prediction — the rise of software tailored to individuals rather than programming for the masses. “The standard way people built applications 20 years ago was you had one interface to serve everybody,” said Michael Biltz, managing director of Accenture Technology Vision. CIOs should start overhauling customer-facing applications, Biltz said, equipping them with technology such as voice recognition to make interacting with them “more human or natural” — and then move to internal apps, to help make employees more effective and efficient.

1. Show me the value

David Brain, RPA consultant

Companies looking to robotic process automation should ditch POC for POV — that’s alphabet soup for proof of concept and proof of value — according to RPA consultant David Brain in “Proof of value — not proof of concept — key to RPA technology.” A proof of concept may show customers that the technology works — that it can automate a certain business process. What it doesn’t show is “whether there is a business case for automation and will it deliver the scale of improvements the company wants to achieve.” A proof of value for RPA, Brain said, shows whether the technology can automate systems in the precise way they’re used in a specific company.

Azure feature updates in 2017 play catch up to AWS

Microsoft Azure already solidified its position as the second most popular public cloud, and critical additions in 2017 brought the Azure feature set closer to parity with AWS.

In some cases, Azure leapfrogged its competition, and a bevy of similar products bolstered the platform as a viable alternative to Amazon Web Services (AWS). Some Microsoft initiatives broadened the company’s database portfolio; others lowered the barrier to entry for Azure and pushed further into IoT and AI. And the long-awaited on-premises machine, Azure Stack, arrived for customers not yet ready to make their private data centers obsolete.

Like all the major public cloud providers, Microsoft Azure doubled down on next-generation applications that rely on serverless computing and machine learning. Among the new products are Machine Learning Workbench, intended to improve productivity in developing and deploying AI applications, and Azure Event Grid, which helps route and filter events in serverless architectures. Important upgrades to Azure IoT Suite included managed services for analytics on data collected through connected devices, and Azure IoT Edge, which extends Azure functionality to connected devices.

Many of those Azure features are too advanced for most corporations that lack a team of data scientists. However, companies have begun to explore other services that rely on these underlying technologies in areas such as vision, language and speech recognition.

AvePoint, an independent software vendor in Jersey City, N.J., took note of Microsoft’s continued investment this past year in Azure Cognitive Services, a turnkey set of tools customers can use to get better results from their applications.

“If you talk about business value that’s going to drive people to use the platform, it’s hard to find a more business-related need than helping people do things smartly,” said John Peluso, Microsoft regional director at AvePoint.

Microsoft also joined forces with AWS on Gluon, an open source, deep learning interface intended to simplify the use of machine learning models for developers. And the company added new machine types that incorporate GPUs for AI modeling.

Azure compute and storage get some love, too

Microsoft’s focus wasn’t solely on higher-level Azure services. In fact, the areas in which it caught up the most with AWS were in its core compute and storage capabilities.

The burstable B-Series VMs are the cheapest machines available on Azure, designed for workloads that don’t always need consistently high CPU performance, such as test and development or web servers. More importantly, they provide an on-ramp to the platform for those who want to sample Azure services.

Other Azure feature additions included the M-Series machines, which support SAP workloads with up to 20 TB of memory; a new bare-metal VM; and the incorporation of Kubernetes into Azure’s container service.

“I don’t think anybody believes they are on par [with AWS] today, but they have momentum at scale and that’s important,” said Deepak Mohan, an analyst at IDC.

In storage, Managed Disks is a new Azure feature that handles storage resource provisioning as applications scale. Archive Storage provides a cheap option to house data as an alternative to Amazon Glacier, as well as a standard access model to manage data across all the storage tiers.

Reserved VM Instances emulate AWS’ popular Reserved Instances to provide significant cost savings for advance purchases, with deeper discounts for customers that link the machines to their Windows Server licenses. Azure also added low-priority VMs — the equivalent of AWS Spot Instances — that can provide even further savings but should be limited to batch-type projects because they can be preempted.

It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS.
Jason McKay, senior vice president and CTO, Logicworks

The addition of Azure Availability Zones was a crucial update for mission-critical workloads that need high availability. It brings greater fault tolerance to the platform through the ability to spread workloads across regions and achieve a guaranteed 99.99% uptime.

“It looks to me like Azure is very much openly and shamelessly following the roadmap of AWS,” said Jason McKay, senior vice president and CTO at Logicworks, a cloud managed service provider in New York.

And that’s not a bad thing, because Microsoft has always been good at being a fast follower, McKay said. There’s a fair amount of parity in the service catalogs for Azure and AWS, though Azure’s design philosophy is a bit more tightly coupled between its services. That means potentially slightly less creativity, but more functionality out of the box compared to AWS, McKay said.

Databases and private data centers

Azure Database Migration Service has helped customers transition from their private data centers to Azure. Microsoft also added full compatibility between SQL Server and the fully managed Azure SQL database service.

Azure Cosmos DB, a fully managed NoSQL cloud database, may not see a wave of adoption any time soon, but has the potential to be an exciting new technology to manage databases on a global scale. And in Microsoft’s continued evolution to embrace open source technologies, the company added MySQL and PostgreSQL support to the Azure database lineup as well.

The company also improved management and monitoring, incorporating tools from Microsoft’s acquisition of Cloudyn, and added security features. Azure confidential computing encrypts data while in use, in addition to existing encryption options at rest and in transit, while Azure Policy added new governance capabilities to enforce corporate rules at scale.

Other important security upgrades include Azure App Service Isolated, which made it easier to install dedicated virtual networks in the platform-as-a-service layer; the Azure DDoS Protection service, which aims to protect against DDoS attacks; new capabilities that put firewalls around data in Azure Storage; and service endpoints within the Azure virtual network that limit the exposure of data to the public internet when accessing various multi-tenant Azure services.

Azure Stack’s late arrival

Perhaps Microsoft’s biggest cloud product isn’t part of its public cloud at all. After two years of fanfare, Azure Stack finally went on sale in late 2017. It brings many of the tools found on the Azure public cloud into private facilities, for customers that have higher regulatory demands or simply aren’t ready to vacate their data centers.

“That’s a huge area of differentiation for Microsoft,” Mohan said. “Everybody wants true compatibility between services on premises and services in the cloud.”

Rather than build products that live on premises, AWS joined with VMware to build a bridge for customers that want their full VMware stack on AWS either for disaster recovery or extension of their data centers. Which approach will succeed depends on how protracted the shift to public cloud becomes — and a longer delay in that shift favors Azure Stack, Mohan said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Alexa for Business sounds promising, but security a concern

Virtual assistant technology, popular in the consumer world, is migrating toward businesses with the hopes of enhancing employee productivity and collaboration. Organizations could capitalize on the familiarity of home-based virtual assistants, such as Siri and Alexa, to boost productivity in the office and launch meetings quicker.

Last week, Amazon announced Alexa for Business, a virtual assistant that connects Amazon Echo devices to the enterprise. Alexa for Business allows organizations to equip conference rooms with Echo devices that can turn on video conferencing equipment and dial into a conference via voice commands.

“Virtual assistants, such as Alexa, greatly enhance the user experience and reduce the complexity in joining meetings,” Frost & Sullivan analyst Vaishno Srinivasan said.

Personal Echo devices connected to the Alexa for Business platform can also be used for hands-free calling and messaging, scheduling meetings, managing to-do lists and finding information on business apps, such as Salesforce and Concur.

Overcoming privacy and security hurdles

Before enterprise virtual assistants like Alexa for Business can see widespread adoption, they must overcome security concerns.

“Amazon and other providers will have to do some evangelizing to demonstrate to CIOs and IT leaders that what they’re doing is not going to compromise any security,” Gartner analyst Werner Goertz said.

Amazon is well-positioned to grab this opportunity much ahead of Microsoft Cortana, Google Assistant and Apple’s Siri.
Vaishno Srinivasan, analyst, Frost & Sullivan

Srinivasan said organizations may have concerns about Alexa for Business collecting data and sharing it in a cloud environment. Amazon has started to address these concerns, particularly when connecting personal Alexa accounts and home Echo devices to a business account.

Goertz said accounts are sandboxed, so users’ personal information will not be visible to the organization. The connected accounts must also comply with enterprise authentication standards. The platform also includes administrative controls that offer shared device provisioning and management capabilities, as well as user and skills management.

Another key challenge is ensuring a virtual assistant device, like the Amazon Echo, responds to a user with information that is highly relevant and contextual, Srinivasan said.

“These devices have to be trained to enhance [their] intelligence to deliver a context-sensitive and customized user experience,” she said.

Integrating with enterprise IT systems

End-user spending on virtual assistant devices is expected to reach $3.5 billion by 2021, up from $720 million in 2016, according to Gartner. Enterprise adoption is expected to ramp up by 2019.

Goertz said Amazon had to do a lot of work “under the hood” to enable the integrations with business apps and vendors such as Microsoft, Cisco, Polycom and BlueJeans. The deep integrations with enterprise IT systems are required to enable future capabilities, such as dictating and sending emails from an Echo device, he said.

Srinivasan said Alexa for Business can extend beyond conference rooms through APIs provided by Amazon’s Alexa Skills Kit for developers.

“Thousands of developers utilize these APIs and have created ‘skills’ that enable automation and increase efficiency within enterprises,” she said.
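A custom Alexa skill ultimately returns a small JSON envelope that the service turns into speech. The sketch below builds that envelope by hand to show its shape; the meeting-room wording is invented, and real skills would typically use Amazon's SDKs rather than raw dictionaries:

```python
# Minimal Alexa skill response, built by hand to show the JSON envelope a
# custom skill returns. The meeting-room scenario is invented.

import json

def build_response(speech_text, end_session=True):
    """Wrap plain-text speech in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

resp = build_response("Okay, starting your meeting.")
print(json.dumps(resp, indent=2))
```

Everything enterprise-specific — which conference system to dial, which lights to control — lives behind the handler that produces this response, which is why the skill model extends so readily beyond the conference room.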

Taking use cases beyond productivity tools

While enterprise virtual assistants could be deployed in any type of company looking to boost productivity, Alexa for Business has already seen deployments in industries such as hospitality.

Wynn Las Vegas is equipping its rooms with Amazon Echo devices, which are managed with Alexa for Business, Goertz said. Guests of the hotel chain can use voice commands, called skills, to turn on the lights, close the blinds or order room service.

Another industry that could see adoption of virtual assistants is healthcare. Currently, Alexa for Business supports audio-only devices. But the platform could potentially support devices with a camera and display that could add video conferencing and telemedicine capabilities, Goertz said.

Alexa for Business also has the potential to disrupt the huddle room market by turning Echo devices into stand-alone conference phones, Srinivasan said.

Amazon Echo prices range from $50 to $200, and the most recent generation of devices offers improved audio quality. The built-in virtual assistant, combined with Alexa for Business and its developer ecosystem, fills a gap in the conference phone market, she wrote in a blog post.

“Amazon is well-positioned to grab this opportunity much ahead of Microsoft Cortana, Google Assistant and Apple’s Siri,” she said.

13 Questions Answered on the Future of Hyper-V

Following our hugely popular panel webinar, “3 Emerging Technologies That Will Change the Way You Use Hyper-V,” we’ve brought together all of the questions asked during both sessions (we hold two separate webinar sessions on the same topic to accommodate our European and American audiences) into one article, with extended answers on what’s around the corner for Hyper-V and related technologies.

Let’s get started!


The Questions

Question 1: Do you think IT Security is going to change as more and more workloads move into the cloud?

Answer: Absolutely! As long as we’re working with connected systems, no matter where they are located, we will always have to worry about security. One common misconception, though, is that a workload housed inside Microsoft Azure is somehow less secure. In reality, public cloud platforms have been painstakingly built from the ground up with the help of industry security experts. You’ll find that if best practices are followed, including least-access rules and just-in-time administration, the public cloud is a highly secure platform.

Question 2: Do you see any movement to establish a global “law” of data security/restrictions that are not threatened by local laws (like the patriot act)?

Answer: Until all countries of the world are on the same page, I just don't see this happening. Unfortunately, the US treats data privacy very differently than the EU. The General Data Protection Regulation (GDPR), coming in May of 2018, is a step in the right direction, but it only applies to the EU and data traversing the EU's boundaries. It will certainly affect US companies and organizations, but nothing similar is in the works in the US.

Question 3: In the SMB Space, where a customer may only have a single MS Essentials server and use Office 365, do you feel that this is still something that should move to the cloud?

Answer: I think the answer to that question depends greatly on the customer and the use case. As Didier, Thomas and I discussed in the webinar, the cloud is a tool, and you have to evaluate for each case, whether it makes sense or not to run that workload in the cloud. If for that particular customer, they could benefit from those services living in the cloud with little downside, then it may be a great fit. Again, it has to make sense, technically, fiscally, and operationally, before you can consider doing so.

Question 4: What exactly is a Container?

Answer: While not the same thing at all, it's often easiest to think of a container as a kind of ultra-stripped-down VM. A container holds an ultra-slim OS image (50-60 MB in the case of Nano Server), any supporting code framework, such as .NET, and then whatever application you want to run within the container. Containers are not the same as VMs, because Windows containers all share the kernel of the underlying host OS. However, if you require further isolation, you can use Hyper-V containers, which run a container within an optimized VM so you can take advantage of Hyper-V's isolation capabilities.

Question 5: On-Premises Computing is Considered to be a “cloud” now too correct?

Answer: That is correct! In my view, the term cloud doesn’t refer to a particular place, but to the new technologies and software-defined methods that are taking over datacenters today. So you can refer to your infrastructure on-prem as “private cloud”, and anything like Azure or AWS as “Public Cloud”. Then on top of that anything that uses both is referred to as “Hybrid Cloud”.

Question 6: What happens when my client goes to the cloud and they lose their internet service for two weeks?

Answer: The cloud, just like any technology solution, has shortcomings that can be overcome if planned for properly. If you have a mission-critical service you'd like to host in the cloud, you'll want to research ways to make that workload highly available. That could include a secondary internet connection from a different provider, or some way to make the workload accessible from the on-prem location if needed. Regardless of where the workload is, you need to plan for eventualities like this.

Question 7: What Happened to Azure Pack?

Answer: Azure Pack is still around and usable; it will just be replaced by Azure Stack at some point. In the meantime, there are integrations available that allow you to manage both solutions from your Azure Stack management utility.

Question 8: What about the cost of Azure Stack? What’s the entry point?

Answer: This is something of a difficult question. Estimates I've heard range from $75,000 to $250,000, depending on the vendor and the load-out. You'll want to contact your preferred hardware vendor for more information.

Question 9: We’re a hosting company, is it possible to achieve high levels of availability with Azure Stack?

Answer: Just like any technology solution, you can achieve the coveted four 9s of availability; the question is how much money you want to spend. You could do so with Azure Stack and the correct supporting infrastructure. However, keep in mind that your SLA is only as good as those of your supporting vendors. For example, if you sell four 9s as an SLA and the internet provider for your datacenter can only provide 99%, then you've already broken your SLA.
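The "only as good as your supporting vendors" point can be made concrete with a little arithmetic: when services depend on each other in series, their availabilities multiply, so the combined figure is always lower than the weakest link. A minimal sketch, using the illustrative 99.99% and 99% figures from the answer above (the function name is ours, not any Azure tooling):

```python
def composite_availability(*availabilities):
    """Serial dependencies multiply: every component must be up
    for the service as a whole to be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# A four-9s platform behind a 99% internet link:
combined = composite_availability(0.9999, 0.99)
# combined is about 0.9899 -- roughly 99%, nowhere near 99.99%.
```

So selling a four-9s SLA on top of a 99% ISP link is broken before any outage even happens.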

Question 10: For Smaller businesses running Azure Stack, should software vendors assume these businesses will look to purchase traditionally on-prem software solutions that are compatible with this? My company’s solution does not completely make sense for the public cloud, but this could bridge the gap. 

Answer: I think for most SMBs, Azure Stack will be fiscally out of reach. With Azure Stack you're really paying for a "cloud platform," and for most SMBs it will make more sense to take advantage of public Azure if those types of features are needed. That said, to answer your question, there are already vendors doing this. Anything that will deploy on public Azure using ARM will also deploy easily on Azure Stack.

Question 11: In Azure Stack, can I use any backup software and back up the VM to remote NAS storage or to AWS?

Answer: At release, there is no support for third-party backup solutions in Azure Stack. Right now there is a built-in flat-file backup and that is it. I suspect it will be opened up to third-party vendors at some point, and it will likely be protected in much the same way as public Azure resources.

Question 12: How would a lot of these services be applied to the K-12 education market? There are lots of laws that require data to be stored in the same country. Yet providers often host in a different country. 

Answer: If you wanted to leverage a provider's Azure Stack somewhere, you would likely have to find one that actually hosts it in the geographical region you're required to operate in. Many hosters will provide written proof of where the workload is hosted for these types of situations.

Question 13: In planning to move to public Azure, how many Azure cloud Instances would I need?

Answer: There is no hard-and-fast answer for this. It depends on the number of VMs/applications and whether you run them in Azure as VMs or in Azure's PaaS fabric. The Azure Pricing Calculator will give you an idea of VM sizes and what services are available.
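The shape of the estimate the Azure Pricing Calculator produces is simple: hourly rate per instance size, times instance count, times hours in a month. A rough sketch of that arithmetic, where the rates and size names are entirely hypothetical placeholders, not real Azure prices:

```python
# Hypothetical hourly rates for illustration only --
# real prices come from the Azure Pricing Calculator.
HOURLY_RATE = {"small_vm": 0.05, "medium_vm": 0.20, "large_vm": 0.80}
HOURS_PER_MONTH = 730  # commonly used monthly-hours approximation

def monthly_estimate(workloads):
    """workloads: mapping of VM size name -> instance count.
    Returns an estimated monthly compute cost in dollars."""
    return sum(HOURLY_RATE[size] * count * HOURS_PER_MONTH
               for size, count in workloads.items())

estimate = monthly_estimate({"small_vm": 4, "medium_vm": 2})
```

Four small and two medium instances at these placeholder rates come to $438/month; swapping workloads into PaaS services changes the line items entirely, which is why the calculator, not a rule of thumb, is the right tool.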

Did we miss something?

If you have a question on the future of Hyper-V or any of the three emerging technologies discussed in the webinar, just post in the comments below and we will get straight back to you. Furthermore, if you asked a question during the webinar that you don't see here, let us know in the comments section and we will be sure to answer it here. Any follow-up questions are also very welcome, so feel free to let us know about those as well!

As always – thanks for reading!

NSA cyberweapons report follows Kaspersky transparency plan

Kaspersky Lab launched a transparency initiative for its popular security products in order to ease fears of inappropriate ties to Russia and also released a statement explaining how the company came into possession of NSA cyberweapons.

A new statement from Kaspersky provided details regarding a recently uncovered incident where an NSA contractor reportedly put agency cyberweapons on a personal computer and that NSA malware was transmitted to Kaspersky servers.

Though others have claimed Kaspersky products could be used for spying, Kaspersky Lab has continually asserted the incident occurred because NSA cyberweapons are malware, and its products are designed to find and quarantine malware.

In the latest statement on the matter, Kaspersky explained that after a user's device was flagged for having Equation Group malware on it, the device was also found to have pirated Microsoft Office software containing malware. At some point, the individual disabled the Kaspersky product in order to run a keygen for the pirated software and was infected with malware.

“Executing the keygen would not have been possible with the antivirus enabled,” the team wrote in the Kaspersky report. “The user was infected with this malware for an unspecified period, while the product was inactive. The malware dropped from the trojanized keygen was a full blown backdoor which may have allowed third parties access to the user’s machine.”

The Kaspersky statement also noted that after an analyst processed the data gathered from the device and determined the samples to be Equation Group NSA cyberweapons, the incident was reported to Kaspersky CEO Eugene Kaspersky, who ordered all archives of the data deleted from the company's systems.

Jake Williams, founder of consulting firm Rendition InfoSec LLC in Augusta, Ga., said on Twitter he understood the rationale behind the decision to delete all traces of the NSA cyberweapons.

Kaspersky transparency plan

Prior to detailing the NSA cyberweapons incident, the company introduced a new Kaspersky transparency initiative that includes a number of components intended to help restore trust in the company: three new transparency centers, located in the U.S., Asia and Europe and all open by 2020; increased bug bounty rewards; and independent reviews of product source code and internal processes.

The U.S. Department of Homeland Security recently banned Kaspersky products from use on government systems, and a number of stories based on unnamed sources claimed connections between Kaspersky and the Russian government as well as Kaspersky products potentially searching out classified data on customer devices.

Eugene Kaspersky said in a public statement that the move was intended to prove the company has nothing to hide and “to overcome mistrust and support our commitment to protecting people in any country on our planet.”

It’s hard (or impossible) to come back from allegations from the U.S. government.
Matt Suiche, founder, Comae Technologies

“Internet balkanization benefits no one except cybercriminals. Reduced cooperation among countries helps the bad guys in their operations, and public-private partnerships don’t work like they should. The internet was created to unite people and share knowledge,” Kaspersky said. “Cybersecurity has no borders, but attempts to introduce national boundaries in cyberspace [are] counterproductive and must be stopped. We need to reestablish trust in relationships between companies, governments and citizens.”

However, the early expert reaction to the announcement noted the Kaspersky transparency plan may not go far enough. Security professionals noted that any potential evidence of spying may not be visible in Kaspersky product source code reviews and it would be necessary to review server-side code or the rulesets that govern what data is pulled from client systems to the Kaspersky Security Network (KSN).

When asked about this issue, Kaspersky Lab told SearchSecurity the initial code review would focus on products with the “biggest user base — like Kaspersky Internet Security and Kaspersky Endpoint Security for Business,” but said the KSN might be included.

“Our proposal for source code and software updates analysis suggests the access to review how our products interact with Kaspersky Security Network,” Kaspersky Lab said. “Kaspersky Lab wants to work with highly-reputable and credible independent experts who have the expertise, capability, and capacity to account for the company’s extensive code base and technology infrastructure underpinning its products and solutions, as well as the diverse sets of controls and processes that govern the company’s data processing practices.”

Matt Suiche, founder of managed threat detection company Comae Technologies, told SearchSecurity that the Kaspersky transparency “initiative is good in general,” but said it might not be enough to prove they are innocent. “It’s hard (or impossible) to come back from allegations from the U.S. government.”

MobileIron, VMware can help IT manage Macs in the enterprise

As Apple computers have become more popular among business users, IT needs better ways to manage Macs in the enterprise. Vendors have responded with some new options.

The traditional problem with Macs is they have required different management and security software than their Windows counterparts, which means organizations must spend more money or simply leave these devices unmanaged. New features from MobileIron and VMware aim to help IT manage Macs in a more uniform way.

“Organizations really didn’t have an acute system to secure and manage Macs as they did with their Windows environment. But now, what we are starting to see is that a large number of companies have started taking Mac a lot more seriously,” said Nicholas McQuire, vice president of enterprise research at CCS Insight.

Macs in the enterprise see uptick

Windows PCs have long dominated the business world, whereas Apple positioned Macs for designers and other creative workers, plus the education market. There are several reasons why businesses traditionally did not offer Macs to employees, including their pricing and a lack of strong management and security options. About 5% to 10% of corporate computers are Macs, but that percentage is growing, McQuire said.


With Macs growing in popularity, IT needs streamlined configuration methods.

There are a few potential reasons for the growth of Macs in the enterprise. Demand from younger workers is a big one, said Ojas Rege, chief strategy officer at MobileIron, based in Mountain View, Calif. In addition, because Macs don’t lose value as quickly as PCs, the difference in total cost of ownership between Macs and PCs isn’t as significant as it once was, he said.

“A lot of our customers tell us that Macs are key to the new generation of their workforce,” Rege said. “Another key is that the economics are improving.”

New capabilities help manage Macs in the enterprise

It is surprising how many people still think they do not need additional software to help secure Macs.
Tobias Kreidl, desktop creation and integration services team lead, Northern Arizona University

Windows has stayed on top in the eyes of IT partly because of its broader ecosystem of third-party management platforms. Despite some options, such as those from Jamf, the macOS management ecosystem was very limited for a long time. But as the BYOD trend took off and shadow IT emerged, more business leaders felt they could no longer limit their employees to Windows PCs.

VMware in August introduced updates to Workspace One, its end-user computing software, that allow IT to manage Macs the same way they would mobile devices. Workspace One will also have a native macOS client and let users enroll their Macs in unified endpoint management through a self-service portal, just like they can with smartphones and tablets.

MobileIron already supported macOS for basic device configuration and security. The latest improvements included these new Mac management features:

  • secure delivery of macOS apps through MobileIron’s enterprise app store;
  • per-app virtual private network connectivity through MobileIron Tunnel; and
  • trusted access enforcement for cloud services, such as Office 365, through MobileIron Access.

Mac security threats increase

At Northern Arizona University, the IT department is deploying Jamf Pro to manage and secure Macs, which make up more than a quarter of all client devices on campus. The rise in macOS threats over the past few years is a concern, said Tobias Kreidl, desktop creation and integration services team lead at the school in Flagstaff, Ariz.

The number of macOS malware threats increased from 819 in 2015 to 3,033 in 2016, per a report by AV-Test. And the first quarter of 2017 saw a 140% year-over-year increase in the number of different types of macOS malware, according to the report.
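The AV-Test figures above translate into a striking growth rate. A quick sanity check of the numbers (819 and 3,033 are the sample counts quoted from the report; the function name is ours):

```python
def pct_increase(old, new):
    """Percentage increase from an old count to a new count."""
    return (new - old) / old * 100

# macOS malware samples, 2015 -> 2016, per the AV-Test report:
macos_malware_growth = pct_increase(819, 3033)
# roughly a 270% increase in a single year
```

That is nearly a 3.7x jump year over year, which helps explain why a quarter-of-campus Mac fleet now gets dedicated management and security tooling.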

“It is surprising how many people still think they do not need additional software to help secure Macs,” Kreidl said. “[Apple macOS] is pretty good as it stands, but more and more efforts are being spent to find ways to circumvent Mac security, and some have been successful.”

CCleaner malware spread via supply chain attack

Researchers discovered a popular system maintenance tool was the victim of a supply chain attack that put potentially millions of users at risk of downloading a malicious update.

CCleaner is a tool designed to help consumers perform basic PC maintenance functions, such as removing cached files and browsing data and defragmenting hard drives. CCleaner is made by Piriform Ltd., a UK-based software maker that was acquired by antivirus company Avast Software in July. The compromised update was first discovered by Israeli endpoint security firm Morphisec following an investigation that began on Sept. 11, but the company claims it began blocking the CCleaner malware at customer sites on Aug. 20.

“A backdoor transplanted into a security product through its production chain presents a new unseen threat level which poses a great risk and shakes customers’ trust,” wrote Michael Gorelik, vice president of research and development at Morphisec in a blog post. “As such, we immediately, as part of our responsible disclosure policy, contacted Avast and shared all the information required for them to resolve the issue promptly. Customers safety is our top concern.”

The CCleaner malware gathered information about systems and transmitted it to a command and control (C&C) server; it was reportedly downloaded by users for close to one month, from Aug. 15 to Sept. 12, according to Morphisec. However, Avast noted that the CCleaner malware was limited to 32-bit systems and would only run if the affected user profile had administrator privileges.

Avast said CCleaner has more than 2 billion downloads and adds new users at a rate of 5 million per week, but because only the 32-bit and cloud versions of CCleaner were compromised, the company estimated just 2.27 million users were affected.

Impact of the CCleaner malware

A team of researchers at Cisco Talos, which included Edmund Brumaghin, threat researcher, Ross Gibb, senior information security analyst, Warren Mercer, technical leader, Matthew Molyett, research engineer, and Craig Williams, senior technical leader, discovered and analyzed the CCleaner malware soon after Morphisec. According to the Cisco Talos team, Avast unwittingly distributed legitimate signed versions of CCleaner and CCleaner Cloud which “contained a multi-stage malware payload that rode on top of the installation.”

“This is a prime example of the extent that attackers are willing to go through in their attempt to distribute malware to organizations and individuals around the world. By exploiting the trust relationship between software vendors and the users of their software, attackers can benefit from users’ inherent trust in the files and web servers used to distribute updates,” Talos researchers wrote in their analysis. “In many organizations data received from commonly software vendors rarely receives the same level of scrutiny as that which is applied to what is perceived as untrusted sources. Attackers have shown that they are willing to leverage this trust to distribute malware while remaining undetected.”

What makes this attack particularly worrying is the volume of downloads this software receives leaving a huge number of users exposed.
James Maude, senior security engineer for Avecto

James Maude, senior security engineer for Avecto, a privilege management software maker, said it was especially concerning that the CCleaner malware included the official code signature from Avast.

“Given that CCleaner is designed to be installed by a user with admin rights, and the malware was not only embedded within it but also signed by the developers own code signing certificate (giving it a high level of trust), this is pretty dangerous,” Maude told SearchSecurity via email. “This means that the malware, and therefore the attacker, would have complete control of the system and the ability to access almost anything they wanted. What makes this attack particularly worrying is the volume of downloads this software receives leaving a huge number of users exposed.”

Itsik Mantin, director of security research at security software company Imperva, said the CCleaner malware incident shows “there’s not much users can do when the vendor gets infected.”

“This hack creates a new reality where users need to assume that their desktops, laptops and smartphones are infected, which has been the reality for security officers at organizations in the last years,” Mantin told SearchSecurity. “For organizations, this does not really matter as security officers are accustomed to the reality that they should always assume the attackers are in, are looking for ways to spread the infection within the organization and are searching for business sensitive data to steal or corrupt.”

Avast response to the CCleaner malware incident

Vince Steckler, CEO of Avast Software, and Ondřej Vlček, executive vice president and general manager of the consumer business unit, released a statement saying the company remediated the issue within 72 hours of becoming aware of the problem by releasing a clean update without the malware. They also stated that Avast worked with law enforcement to shut down the CCleaner malware C&C server on Sept. 15.

The Avast execs downplayed their company’s involvement by saying they “strongly suspect that Piriform was being targeted while they were operating as a standalone company, prior to the Avast acquisition,” and that the compromise “may have started on July 3rd,” two weeks before Avast’s acquisition of Piriform was complete. Avast also claimed the compromised update took four weeks to discover due to “the sophistication of the attack.”

Avast asserted users “should upgrade even though they are not at risk as the malware has been disabled on the server side,” and claimed it was unnecessary to follow the suggestions by Talos and other experts to restore systems to a date before Aug. 15, 2017 to ensure removal of the CCleaner malware.

“Based on the analysis of this data, we believe that the second stage payload never activated, i.e. the only malicious code present on customer machines was the one embedded in the ccleaner.exe binary,” Steckler and Vlček wrote. “Therefore, we consider restoring the affected machines to the pre-August 15 state unnecessary. By similar logic, security companies are not usually advising customers to reformat their machines after a remote code execution vulnerability is identified on their computer.”

Supply chain attacks

Experts said the CCleaner malware incident should be a reminder of the dangers of supply chain attacks.

Marco Cova, senior security researcher at malware protection vendor Lastline, said the recent NotPetya attacks were another case of a supply chain attack “where an otherwise trusted software vendor gets compromised and the update mechanism of the programs they distribute is leveraged to distribute malware.”

“This is sort of a holy grail for malware authors because they can efficiently distribute their malware, hide it in a trusted channel, and reach a potentially large number of users,” Cova told SearchSecurity. “It appears that the build process of CCleaner itself was compromised: that is, attackers had access to the infrastructure used to build the software itself. This is very troublesome because it indicates that attackers were able to control a critical piece of the infrastructure used by the vendor.”

Jonathan Cran, vice president of product at Bugcrowd, told SearchSecurity the CCleaner malware issue appeared to be “less of a traditional supply chain attack and more of a case of poor vendor security. Given that the affected installer was signed as a verified safe binary by Piriform, this indicates that they didn’t realize at the time of release and that the corporate network of Piriform was likely compromised.”

Justin Fier, director for cyber intelligence and analysis at threat detection company Darktrace, said this “should come as yet another wake-up call that corporations must have visibility into how their suppliers interact with their systems, as well as a real-time assessment of their suppliers’ cyber risk.”

“The risk that companies inherit from their suppliers is a pervasive problem for cybersecurity. Quite simply, companies with a supply chain cannot avoid compromises — supply chain breaches are inevitable,” Fier told SearchSecurity. “The assessment of potential supply chain partners is often a rushed process in terms of evaluating their cyber security level, and is rarely as in-depth as it should be. While we can’t change the security posture of our supply chains, we can have a transparent relationship when it comes to cyber risk.”

Continue on PC, Timeline features raise Windows 10 security concerns

New Windows 10 syncing features should be popular among users but could lead to IT security risks.

Microsoft’s upcoming Windows 10 Fall Creators Update will include the Continue on PC feature, which allows users to start web browsing on their Apple iPhones or Google Android smartphones and then continue where they left off on their PCs. A similar feature called Timeline, which will allow users to access some apps and documents across their smartphones and PCs, is also in the works. IT will have to pay close attention to both of these features, because linking PCs to other devices can threaten security.

“It does have the potential to be a real mess,” said Willem Bagchus, messaging and collaboration specialist at United Bank in Parkersburg, W.Va. “To pick up data on another device, you have to do it securely. This has to be properly protected.”

How Continue on PC works

Continue on PC syncs browser sessions through an app for iPhones and Android smartphones. Users must be logged into the same Microsoft account in the app and on their Windows 10 PC.

When on a webpage, smartphone users can select the Share option in the browser and choose Continue on PC, which syncs the browsing session through the app. The feature is currently available as part of a preview build leading up to the Windows 10 Fall Creators Update, and the iOS app is already available in the Apple App Store.

To pick up data on another device, you have to do it securely. This has to be properly protected.
Willem Bagchus, messaging and collaboration specialist, United Bank

Microsoft did not say if the feature will allow users to continue a browsing session on their smartphone that started on their PC. Apple’s Continuity feature offers this capability, and the Google Chrome browser lets users share tabs and browsing history across multiple devices as well.  

Continue on PC could expose sensitive data when sharing web applications through synced devices, said Jack Gold, founder and principal analyst of J. Gold Associates, a mobile analyst firm in Northborough, Mass.

For example, if a user’s personal laptop that is synced to a corporate phone is stolen, the thief could access business web apps through a synced browsing session, exposing company data. If the feature is expanded to share browsing sessions from a PC to a smartphone, all it would take is someone stealing a user’s smartphone to gain access to the web apps the employee used on their PC.

“It could be something to worry about if a user loses their phone,” Gold said. “I can’t lose that device because it can sync to my PC.”

To avoid this problem, IT could use enterprise mobility management (EMM) software to blacklist the Continue on PC app altogether, or simply prevent users from sharing the browser session through the app.   

Timeline shares security issues

Originally, Timeline was supposed to be part of the Windows 10 Fall Creators Update, but now it will come out in a preview build shortly afterward, Microsoft said.

Timeline suggests recent documents and apps a user accessed on a synced smartphone and allows them to pull some of them up on their PC, and vice versa. Microsoft hasn’t disclosed which apps the feature will support.

This feature could also cause a security problem if a user loses their PC or smartphone and it gets in the wrong hands. Timeline is basically a dashboard displaying every app, document and webpage the user was in across multiple devices, so someone could access documents, apps and web apps that contain work data on the stolen device.

“Security is needed across the board,” said Bagchus, whose company plans to move to Windows 10 next year. “It absolutely has to be managed.”

EMM software should also come into play when managing this feature, he said.

IT needs to force users to have passwords on all PCs and mobile devices to protect from these instances, said Jim Davies, IT director at Ongweoweh Corp., a pallet and packing management company in Ithaca, N.Y.

“This is something that will be used by a lot of people in a lot of companies,” Davies said. “People won’t need to email themselves a link because this makes it simpler. That being said, your password is that much more important now.”

Ongweoweh Corp. plans to migrate to Windows 10 in the first quarter of 2018.

It is likely that these Windows 10 syncing features won’t be limited to smartphones, and iPads and Android tablets could gain this ability in the future, Bagchus said.

“This feature … makes productivity easier,” Bagchus said. “This will be huge.” 
