Cisco security GM discusses plan for infosec domination

Cisco believes CISOs are overwhelmed by too many security products and vendors, and the company introduced a new platform, ominously code-named Thanos, to help enterprises.

But despite being named after the Marvel Comics megavillain, Cisco’s SecureX platform isn’t necessarily designed to wipe out half of all existing security products within enterprise environments. Instead, Cisco is taking a different approach by opening up the platform, which was unveiled last month, and integrating with third parties.

Gee Rittenhouse, senior vice president and general manager of Cisco’s Security Business Group (SBG), said the aim of SecureX is to tie not only Cisco products together, but other vendor offerings as well. “We’ve been working really hard on taking the security problem and reducing it to its simplest form,” he told SearchSecurity at RSA Conference 2020 last month.

That isn’t to say that all security products are effective; many “are supposed to have a bigger impact than they actually do,” Rittenhouse said. Nevertheless, the SBG strategy for SecureX is to establish partnerships with third parties and invite them to integrate with the platform, he said, rather than Cisco trying to be everything to everyone. In this interview, Rittenhouse discusses the evolution of SecureX, how Cisco’s security strategy has shifted over the last decade and the company’s plan to change the infosec industry.

Editor’s note: This interview was edited for clarity and length.

How did the idea for SecureX come about?

Gee Rittenhouse: We thought initially if we had a solution for every one of the major threat vectors — email, endpoint, firewalls, cloud, etc. — from one vendor, Cisco, then that would be enough. You buy Cisco networking and you buy Cisco security and that transactional model will simplify the industry. And we realized very quickly that didn’t do anything except put a name on a box. Then the second thing we thought was this: What happens if we take all these different things and integrate the back end together so that when I see a threat on email, I can block on my endpoint? We stitch all this together [via the SecureX framework] on behalf of the customer, and not only does the blocking happen automatically but you also get better protection and higher efficacy. We’d tell people we had an integrated architecture. And the customers would look at us and say ‘Really? I don’t feel that. You’ve got a portal over here, and a portal over there’ and so on. And we’d say, ‘Look, we’ve worked for three years integrating this together and we have the highest efficacy.’ And they’d say, ‘Well, everybody has their numbers …’

A couple of years ago, we said we’d simplified the buying model and simplified the back end. Let’s try to simplify the user experience. But you have to be very careful with that. The classic approach is to build a platform, and everyone jumps on the platform and if you only have Cisco stuff, life is great. But, of course, there are other platforms and other products. We wanted to be precise about how we do this, so we picked a particular use case around investigations. It’s an important use case. We built this very simple investigation tool [Cisco Threat Response] that you can think about as the Google search of security. Within five seconds, you can find out that you don’t have [a specific threat] in your environment, or yes, you do and here’s how to block it and respond. The tool had the fastest rate of adoption of any of our products in Cisco’s history. It’s massively successful. More than 8,000 customers use it every day as their investigation tool.

Were you expecting that kind of adoption for Cisco Threat Response?

Rittenhouse: No. We were not. There were two things we weren’t expecting. We weren’t expecting the response in terms of usage. We thought there’d be a few customers using it. The other thing that we didn’t expect was that a whole user community came together to, for example, integrate vendor X into the tool and publish the connectors on GitHub. A whole user community has evolved around that platform and extended the capability of it. In both cases, we were quite surprised.

When we saw how that worked, saw the business model and understood how people consumed it, we attached it to everything and then said ‘Let’s take the next step’ with analytics and security postures. We asked what a day in the life of a security professional was. They’re flooded with noise and threats and alerts. They have to be able to decipher all of that — can the platform do that automatically on their behalf? That’s what we’re doing with SecureX, and the feedback has been super positive.

What kind of feedback did you get from customers prior to Cisco Threat Response and SecureX? Did they have an idea of what they wanted?

Rittenhouse: There was a lot of feedback from customers who asked us to make the front end of our portfolio simpler. But what does that actually mean? It was very generic feedback. And in fact, we struggled with the ‘single pane of glass’ approach. What typically happens with that approach is you try to do everything through it, and all of a sudden that portal becomes the slowest part of the portfolio. This actually took a lot of time and a lot of conversations with customers on how they actually work. We engaged a lot of them with design thinking, and Cisco Threat Response was the first thing to come out of those discussions, and then SecureX.

And I want to make the distinction between a platform and a single pane of glass or a portal. And we very much think of SecureX as a platform. And when you think about a platform, it’s usually something that other people can build stuff on top of, so the value to the community is other people’s contributions to it, and you get a multiplier effect. There is only a handful of true, successful platform businesses in the world; it’s very hard to attract that community and achieve that scale.

Like other recent studies, Cisco’s [2020] CISO Benchmark Report showed that many CISOs feel they have too many security products and are actively trying to reduce the number of vendors they have. Other vendors have talked about this trend and are trying to capitalize on it by becoming a one-stop security shop and pushing out other products. But with SecureX, it sounds like you’re taking a different approach by welcoming third-party vendors to the platform and being more open.

Rittenhouse: We would encourage the industry as a whole to be more open. In fact, the industry is not very open at all. One of the benefits to being open is the ability to integrate. In today’s industry, for example, let’s say you’re a security vendor and your technology says a piece of malware is a threat level 5, and I say it’s a level 2. And you’re integrated into our platform, and you’re freaking out because it’s a level 5. I ask you, ‘Rob, why do you think this? What’s the context around this? Share more.’ And until you have that open interface and integration, I just sit there and say, ‘For some reason, this vendor over here claims it’s big, but we don’t see it.’

So yes, we’re open. And I would anticipate the user experience with Cisco security products integrated together will be very different than what you would get with third parties integrated until they start to share more. And this is one of the issues you see in the SIEM and SOAR markets; they become data repositories for investigations after you get attacked. What actually happened? Let’s go back into the records and figure it out. Because of the data fidelity and the real-time nature [of SecureX] this is something you interact with immediately. It can automatically trace threats and set up workflows and bring in other team members to collaborate because you have that integrated back end.

Cisco has said it’s the biggest security vendor in the world by revenue, but most businesses probably still associate the company with networking. Now that SecureX has been introduced, what’s the strategy moving forward?

Rittenhouse: We’ve spent a lot of time on the messaging. I think more and more people recognize we’re the biggest enterprise security company. In many ways, our mission is to democratize security like [Duo Security’s] Wendy Nather said, so we want to make it invisible. We don’t want to be sending the message that you have to get this other stuff to be secure. We want it to be built into everything we do.

There have been a lot of mergers and acquisitions, especially by companies looking to increase their infosec presence. But Wendy talked during her keynote about simplifying security instead of adding product upon product. It doesn’t sound like you’re feeling the pressure to do that, though.

Rittenhouse: No. We are not a private equity firm. We buy things for a purpose. And when we buy something, we’ll be happy to tell you why.

Go to Original Article
Author:

NTFS vs. ReFS – How to Decide Which to Use

By now, you’ve likely heard of Microsoft’s relatively recent file system “ReFS”. Introduced with Windows Server 2012, it seeks to exceed NTFS in stability and scalability. Since we typically store the VHDXs for multiple virtual machines in the same volume, Hyper-V seems like a natural pairing for ReFS. Unfortunately, the two did not pair well… in the beginning. Microsoft has continued to improve ReFS in the intervening years, and it has gained several features that distance it from NTFS. With its maturation, should you start using it for Hyper-V? You have much to consider before making that determination.

What is ReFS?

The moniker “ReFS” means “resilient file system”. It includes built-in features to aid against data corruption. Microsoft’s docs site provides a detailed explanation of ReFS and its features. A brief recap:

  • Integrity streams: ReFS uses checksums to check for file corruption.
  • Automatic repair: When ReFS detects problems in a file, it will automatically enact corrective action.
  • Performance improvements: In a few particular conditions, ReFS provides performance benefits over NTFS.
  • Very large volume and file support: ReFS’s upper limits exceed NTFS’s without incurring the same performance hits.
  • Mirror-accelerated parity: Mirror-accelerated parity uses a lot of raw storage space, but it’s very fast and very resilient.
  • Integration with Storage Spaces: Many of ReFS’s features only work to their fullest in conjunction with Storage Spaces.

Before you get excited about some of the earlier points, I need to emphasize one thing: except for capacity limits, ReFS requires Storage Spaces in order to do its best work.
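
That said, nothing stops you from experimenting with ReFS on a lab volume before committing to anything. A minimal sketch using the standard storage cmdlets; the drive letter and label are placeholders and assume an empty, already-partitioned test volume:

    # Format an empty lab volume as ReFS; drive letter and label are placeholders.
    Format-Volume -DriveLetter R -FileSystem ReFS -NewFileSystemLabel 'ReFS-Lab'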

ReFS Benefits for Hyper-V

ReFS has features that accelerate some virtual machine activities.

  • Block cloning: By my reading, block cloning is essentially a form of de-duplication. But, it doesn’t operate as a file system filter or scanner. It doesn’t passively wait for arbitrary data writes or periodically scan the file system for duplicates. Something must actively invoke it against a specific file. Microsoft specifically indicates that it can greatly speed checkpoint merges.
  • Sparse VDL (valid data length): All file systems record the amount of space allocated to a file. ReFS uses VDL to indicate how much of that file has data. So, when you instruct Hyper-V to create a new fixed VHDX on ReFS, it can create the entire file in about the same amount of time as creating a dynamically-expanding VHDX. It will similarly benefit expansion operations on dynamically-expanding VHDXs (see the timing sketch after this list).
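
If you want to see the sparse VDL benefit for yourself, a rough timing comparison is easy to sketch. The drive letters, paths and size below are placeholders, and the sketch assumes one ReFS volume (R:) and one NTFS volume (N:) on the same host:

    # Time creation of the same fixed-size VHDX on an ReFS volume and on an NTFS volume.
    # Paths and size are placeholders; the copy on ReFS should finish far faster.
    Measure-Command { New-VHD -Path 'R:\Lab\fixed-refs.vhdx' -SizeBytes 100GB -Fixed }
    Measure-Command { New-VHD -Path 'N:\Lab\fixed-ntfs.vhdx' -SizeBytes 100GB -Fixed }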

Take a little bit of time to go over these features. Think through their total applications.

ReFS vs. NTFS for Hyper-V: Technical Comparison

With the general explanation out of the way, now you can make a better assessment of the differences. First, check the comparison tables on Microsoft’s ReFS overview page. For typical Hyper-V deployments, most of the differences mean very little. For instance, you probably don’t need quotas on your Hyper-V storage locations. Let’s make a table of our own, scoped more appropriately for Hyper-V:

  • ReFS wins: Really large storage locations and really large VHDXs
  • ReFS wins: Environments with excessively high incidences of created, checkpointed, or merged VHDXs
  • ReFS wins: Storage Space and Storage Spaces Direct deployments
  • NTFS wins: Single-volume deployments
  • NTFS wins (potentially): Mixed-purpose deployments

I think most of these things speak for themselves. The last two probably need a bit more explanation.

Single-Volume Deployments Require NTFS

In this context, I intend “single-volume deployment” to mean installations where you have Hyper-V (including its management operating system) and all VMs on the same volume. You cannot format a boot volume with ReFS, nor can you place a page file on ReFS. Such an installation also does not allow for Storage Spaces or Storage Spaces Direct, so it would miss out on most of ReFS’s capabilities anyway.

Mixed-Purpose Deployments Might Require NTFS

Some of us have the luck to deploy nothing but virtual machines on dedicated storage locations. Not everyone has that. If your Hyper-V storage volume also hosts files for other purposes, you might need to continue with NTFS. Go over the last table near the bottom of the overview page. It shows the properties that you can only find in NTFS. For standard file sharing scenarios, you lose quotas. You may have legacy applications that require NTFS’s extended properties, or short names. In these situations, only NTFS will do.

Note: If you have any alternative, do not use the same host to run non-Hyper-V roles alongside Hyper-V. Microsoft does not support mixing. Similarly, separate Hyper-V VMs onto volumes apart from volumes that hold other file types.

Unexpected ReFS Behavior

The official content goes to some lengths to describe the benefits of ReFS’s integrity streams. It uses checksums to detect file corruption. If it finds problems, it engages in corrective action. On a Storage Spaces volume that uses protective schemes, it has an opportunity to fix the problem. It does that with the volume online, providing a seamless experience. But, what happens when ReFS can’t correct the problem? That’s where you need to pay real attention.

On the overview page, the documentation uses exceptionally vague wording: “ReFS removes the corrupt data from the namespace”. The integrity streams page does worse: “If the attempt is unsuccessful, ReFS will return an error.” While researching this article, I was told of a more troubling activity: ReFS deletes files that it deems unfixable. The comment section at the bottom of that page includes a corroborating report. If you follow that comment thread through, you’ll find an entry from a Microsoft program manager that states:

ReFS deletes files in two scenarios:

  1. ReFS detects Metadata corruption AND there is no way to fix it. Meaning ReFS is not on a Storage Spaces redundant volume where it can fix the corrupted copy.
  2. ReFS detects data corruption AND Integrity Stream is enabled AND there is no way to fix it. Meaning if Integrity Stream is not enabled, the file will be accessible whether data is corrupted or not. If ReFS is running on a mirrored volume using Storage Spaces, the corrupted copy will be automatically fixed.

The upshot: If ReFS decides that a VHDX has sustained unrecoverable damage, it will delete it. It will not ask, nor will it give you any opportunity to try to salvage what you can. If ReFS isn’t backed by Storage Spaces’s redundancy, then it has no way to perform a repair. So, from one perspective, that makes ReFS on non-Storage Spaces look like a very high risk approach. But…

Mind Your Backups!

You should not overlook the severity of the previous section. However, you should not let it scare you away, either. I certainly understand that you might prefer a partially readable VHDX to a deleted one. To that end, you could simply disable integrity streams on your VMs’ files, as sketched below. I also have another suggestion.
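
A minimal sketch of that, using the Storage module’s file integrity cmdlets; the path is a placeholder for one of your own virtual disk files, and you would repeat the command for each file you want to exempt:

    # Check the current integrity stream setting on a VHDX, then turn it off.
    # The path is a placeholder.
    Get-FileIntegrity -FileName 'R:\VMs\Virtual Hard Disks\app01.vhdx'
    Set-FileIntegrity -FileName 'R:\VMs\Virtual Hard Disks\app01.vhdx' -Enable $false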

Do not neglect your backups! If ReFS deletes a file, retrieve it from backup. If a VHDX goes corrupt on NTFS, retrieve it from backup. With ReFS, at least you know that you have a problem. With NTFS, problems can lurk much longer. No matter your configuration, the only thing you can depend on to protect your data is a solid backup solution.

When to Choose NTFS for Hyper-V

You now have enough information to make an informed decision. These conditions indicate a good fit for NTFS:

  • Configurations that do not use Storage Spaces, such as single-disk or manufacturer RAID. This alone does not make an airtight point; please read the “Mind Your Backups!” section above.
  • Single-volume systems (your host only has a C: volume)
  • Mixed-purpose systems (please reconfigure to separate roles)
  • Storage on hosts older than 2016 — ReFS was not as mature on previous versions. This alone is not an airtight point.
  • Your backup application vendor does not support ReFS
  • If you’re uncertain about ReFS

As time goes on, NTFS will lose ground to ReFS in Hyper-V deployments. But, that does not mean that NTFS has reached its end. ReFS has staggeringly higher limits, but very few systems use more than a fraction of what NTFS can offer. ReFS does have impressive resilience features, but NTFS also has self-healing powers and you have access to RAID technologies to defend against data corruption.

Microsoft will continue to develop ReFS. They may eventually position it as NTFS’s successor. As of today, they have not done so. It doesn’t look like they’ll do it tomorrow, either. Do not feel pressured to move to ReFS ahead of your comfort level.

When to Choose ReFS for Hyper-V

Some situations make ReFS the clear choice for storing Hyper-V data:

  • Storage Spaces (and Storage Spaces Direct) environments
  • Extremely large volumes
  • Extremely large VHDXs

You might make an additional performance-based argument for ReFS in an environment with a very high churn of VHDX files. However, do not overestimate the impact of those performance enhancements. The most striking difference appears when you create fixed VHDXs. For all other operations, you need to upgrade your hardware to achieve meaningful improvement.

However, I do not want to gloss over the benefit of ReFS for very large volumes. If you have a storage volume of a few terabytes and VHDXs of even a few hundred gigabytes, then ReFS will rarely beat NTFS significantly. When you start thinking in terms of hundreds of terabytes, NTFS will likely show bottlenecks. If you need to push higher, then ReFS becomes your only choice.

ReFS really shines when you combine it with Storage Spaces Direct. Its ability to automatically perform a non-disruptive online repair is truly impressive. On the one hand, the odds of disruptive data corruption on modern systems constitute a statistical anomaly. On the other, no one that has suffered through such an event really cares how unlikely it was.

ReFS vs NTFS on Hyper-V Guest File Systems

All of the above deals only with Hyper-V’s storage of virtual machines. What about ReFS in guest operating systems?

To answer that question, we need to go back to ReFS’s strengths. So far, we’ve only thought about it in terms of Hyper-V. Guests have their own conditions and needs. Let’s start by reviewing Microsoft’s ReFS overview. Specifically the following:

“Microsoft has developed NTFS specifically for general-purpose use with a wide range of configurations and workloads, however for customers specially requiring the availability, resiliency, and/or scale that ReFS provides, Microsoft supports ReFS for use under the following configurations and scenarios…”

I added emphasis on the part that I want you to consider. The sentence itself makes you think that they’ll go on to list some usages, but they only list one: “backup target”. The other items on their list only talk about the storage configuration. So, we need to dig back into the sentence and pull out those three descriptors to help us decide: “availability”, “resiliency”, and “scale”. You can toss out the first two right away — you should not focus on storage availability and resiliency inside a VM. That leaves us with “scale”. So, really big volumes and really big files. Remember, that means hundreds of terabytes and up.

For a more accurate decision, read through the feature comparisons. If any application that you want to use inside a guest needs features only found on NTFS, use NTFS. Personally, I still use NTFS inside guests almost exclusively. ReFS needs Storage Spaces to do its best work, and Storage Spaces does its best work at the physical layer.
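
If you inherit guests and do not know what their volumes use today, a quick check costs nothing. A minimal sketch, run inside the guest; the drive letter is a placeholder:

    # List each guest volume's file system, then inspect one volume's reported capabilities.
    Get-Volume | Select-Object DriveLetter, FileSystem, FileSystemLabel, Size
    fsutil fsinfo volumeinfo C: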

Combining ReFS with NTFS across Hyper-V Host and Guests

Keep in mind that the file system inside a guest has no bearing on the host’s file system, and vice versa. As far as Hyper-V knows, VHDXs attached to virtual machines are nothing other than a bundle of data blocks. You can use any combination that works.

Go to Original Article
Author: Eric Siron

Developers could ease DevOps deployment with CircleCI Orbs

CI/CD platform provider CircleCI has introduced a suite of 20 integrations that automate deployment and were developed with prominent partners including AWS, Azure, Google Cloud, VMware and Salesforce.

These integrations, known as CircleCI Orbs, enable developers to quickly automate deployments directly from their CI/CD pipelines. CircleCI launched Orbs in November 2018, and today there are more than 1,200 listed in its registry. But users created the vast majority of them; the difference with CircleCI’s internally created orbs is that they’re backed by vendor support.

CircleCI Orbs are shareable configuration packages for development builds, said Tom Trahan, CircleCI’s vice president of business development. The orbs define reusable commands, executors and jobs so that commonly used pieces of configuration can be condensed into a single line of code, he said.

The process of automating deployment can be challenging, which is why CircleCI added this suite of out-of-the-box integrations.

Orbs have two primary benefits for developers, said Chris Condo, an analyst at Forrester Research. “They can be certified by the third parties that create them, and they are maintainable pieces of code that contain logic, actions and connections to CD [continuous delivery] capabilities,” he said.

The orbs help CircleCI operate in an increasingly competitive market that includes open-source Jenkins, the commercial CloudBees Jenkins Platform, GitLab and GitHub, as well as cloud platform providers such as AWS and Microsoft.

“When we launched Orbs, it was because our customers were asking us for a way to operate the same way that they operate within the broader open source world, particularly when you think about open source frameworks for various languages,” Trahan said. “Orbs are very similar in design to the best package managers that you see — like npm for Node.js, or like the Java library or Ruby Gems.”

These are all frameworks created so that bundles of code could be packaged up and made available to developers, which is what the CircleCI Orbs do, Trahan added.

Developers don’t want to have to “reinvent the wheel,” when they can simply access bundles of code and best practices that others have already developed, he said.

Multi-cloud trend drives need for easier deployment

Anything that removes boring configuration work from a developer’s plate is likely to be welcome, said James Governor, an analyst at RedMonk, based in Portland, Maine.

“CircleCI building out a catalog of deployment orbs makes a lot of sense, particularly as the market becomes increasingly multi-cloud oriented,” Governor said. “Enterprises want to see their vendors offer a wide range of supported platforms. The Orb approach allows for standardized, repeatable deployments and rollbacks.”

However, the process of automating deployments can be problematic for some teams because of the time it takes to write integrations with services such as AWS ECS or Google Cloud Run, Trahan said. The CircleCI deployment orbs are designed to limit the complexity and time spent creating integrations.

“Customers are asking for simpler ways to connect their dev and CD processes; Orbs helps them do that,” Forrester’s Condo said. “So I see Orbs as a very nice evolutionary step that allows teams to build maintainable abstractions between their development and deployment processes.”

How commercially successful the new suite of Orbs will be remains to be seen, but conceptually, the approach has been embraced by CircleCI users. Since their launch in November 2018, CircleCI orbs are now used by 13,000 user organizations, with around 40,000 repositories and nine million CI/CD pipelines, Trahan said.

Pricing for CircleCI’s CI/CD pipeline services is free for small teams and starts at $30 a month for teams with four or more developers. Pricing for enterprise customers starts at $3,000 a month. The orbs are free for all CircleCI users.

Go to Original Article
Author:

Google Cloud support premium tier woos enterprise customers

Google Cloud has introduced a Premium Support option designed to appeal to large enterprises through features such as 15-minute response times for critical issues.

Premium Support customers will be serviced by “context-aware experts who understand your unique application stack, architecture and implementation details,” said Atul Nanda, vice president of cloud support.

These experts will coordinate with a customer’s assigned technical account manager to resolve issues faster and in a more personalized manner, Nanda said in a blog post.

Google wanted to expand its support offerings beyond what basic plans for Google Cloud and G Suite include, according to Nanda. Other Premium Support features include operational health reviews, training, preview access to new products and more help with third-party technologies.

In contrast, Google’s other support options include a free tier that provides help with only billing issues; Development, which costs $100 per user per month with a four-hour response time; and Production, which costs $250 per user per month with a one-hour response time.

Premium Support carries a base annual fee of $150,000 plus 4% of the customer’s net spending on Google Cloud Platform and/or G Suite. Google is also working on add-on services for Premium Support, such as expanded technical account manager coverage and mission-critical support, which involves a site reliability engineering consulting engagement. The latter is now in pilot.
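
For illustration only: a customer spending a hypothetical $1 million a year across GCP and G Suite would owe the $150,000 base fee plus 4% of that spending ($40,000), or $190,000 annually, before any add-on services.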

Cloud changes the support equation

Customers with on-premises software licenses are used to paying stiff annual maintenance fees, which give them updates, bug fixes and technical support. On-premises maintenance fees can generate profit margins for vendors north of 90%, consuming billions of IT budget dollars that could have been spent on better things, said Duncan Jones, an analyst at Forrester.

“But customers of premium support offerings such as Microsoft Unified (fka Premier) Support and SAP MaxAttention express much higher satisfaction levels with value for money,” Jones said via email. “They are usually an alternative to similar services that the vendor’s SI and channel partners offer, so there is competition that drives up standards. Plus, they are optional extras so price/demand sensitivity keeps pricing at reasonable levels.” On the whole, Google’s move to add Premium Support is positive for customers, according to Jones.

But it’s clear why Google did it from a business perspective, said Grant Kirkwood, CTO of Unitas Global, a hybrid cloud services provider in Los Angeles. “Google is recognizing they need to move up the stack in terms of support to make further inroads into the enterprise space,” he said.

Microsoft today probably has the most robust support in terms of a traditional enterprise look-and-feel, while AWS’ approach is geared a bit more toward DevOps-centric shops, Kirkwood added.

“[Google is] taking a bit out of both playbooks,” he said. Premium Support could appeal to enterprises that have already done easier lift-and-shift projects to the cloud and are now rebuilding or creating new cloud-native applications, according to Kirkwood.

But as with anything, Google will have to prove its Premium Support option is worth the extra money.

“Successful [support] plans require great customer success management, highly trained technical account managers and AI-driven case management,” said Ray Wang, founder and CEO of Constellation Research.

Go to Original Article
Author:

Citrix’s performance analytics service gets granular

Citrix introduced an analytics service to help IT professionals better identify the cause of slow application performance within its Virtual Apps and Desktops platform.

The company announced the general availability of the service, called Citrix Analytics for Performance, at its Citrix Summit, an event for the company’s business partners, in Orlando on Monday. The service carries an additional cost.

Steve Wilson, the company’s vice president of product for workspace ecosystem and analytics, said many IT admins must deal with performance problems as part of the nature of distributed applications. When they receive a call from workers complaining about performance, he said, it’s hard to determine the root cause — be it a capacity issue, a network problem or an issue with the employee’s device.

Performance, he said, is a frequent pain point for employees, especially remote and international workers.

“There are huge challenges that, from a performance perspective, are really hard to understand,” he said, adding that the tools available to IT professionals have not been ideal in identifying issues. “It’s all been very technical, very down in the weeds … it’s been hard to understand what [users] are seeing and how to make that actionable.”

Part of the problem, according to Wilson, is that traditional performance-measuring tools focus on server infrastructure. Keeping track of such metrics is important, he said, but they do not tell the whole story.

“Often, what [IT professionals] got was the aggregate view; it wasn’t personalized,” he said.

When the aggregate performance of the IT infrastructure is “good,” Wilson said, that could mean that half an organization’s users are seeing good performance, a quarter are seeing great performance, but a quarter are experiencing poor performance.

With its performance analytics service, Citrix is offering a more granular picture of performance by providing metrics on individual employees, beyond those of the company as a whole. That measurement, which Citrix calls a user experience or UX score, evaluates such factors as an employee’s machine performance, user logon time, network latency and network stability.

“With this tool, as a system administrator, you can come in and see the entire population,” Wilson said. “It starts with the top-level experience score, but you can very quickly break that down [to personal performance].”

Wilson said IT admins who had tested the product said this information helped them address performance issues more expeditiously.

“The feedback we’ve gotten is that they’ve been able to very quickly get to root causes,” he said. “They’ve been able to drill down in a way that’s easy to understand.”

A proactive approach

Eric Klein, analyst at VDC Research Group Inc., said the service represents a more proactive approach to performance problems, as opposed to identifying issues through remote access of an employee’s computer.

“If something starts to degrade from a performance perspective — like an app not behaving or slowing down — you can identify problems before users become frustrated,” he said.

Klein said IT admins would likely welcome any tool that, like this one, could “give time back” to them.

“IT is always being asked to do more with less, though budgets have slowly been growing over the past few years,” he said. “[Administrators] are always looking for tools that will not only automate processes but save time.”

Enterprise Strategy Group senior analyst Mark Bowker said in a press release from Citrix announcing the news that companies must examine user experience to ensure they provide employees with secure and consistent access to needed applications.

“Key to providing this seamless experience is having continuous visibility into network systems and applications to quickly spot and mitigate issues before they affect productivity,” he said in the release.

Wilson said the performance analytics service was the product of Citrix’s push to the cloud during the past few years. One of the early benefits of that process, he said, has been in the analytics field; the company has been able to apply machine learning to the data it has garnered and derive insights from it.

“We do see a broad opportunity around analytics,” he said. “That’s something you’ll see more and more of from us.”

Go to Original Article
Author:

Cradlepoint NetCloud update avoids unnecessary data usage

Cradlepoint has introduced technology that helps customers control costs by flagging unusual increases in data use across the wireless links managed by the vendor’s software-defined WAN.

The vendor unveiled this week the latest analytics in its cloud-based Cradlepoint NetCloud management platform. Cradlepoint is aiming the technology at retailers, government agencies and enterprises that have widely distributed operations. Those organizations typically have a WAN dependent on 4G and other wireless links.

The latest algorithms determine patterns of data usage based on historical data gathered over time across a company’s wireless links, the vendor said. Cradlepoint NetCloud will notify network managers when data usage deviates from past patterns.

The feature provides early notification of surges in usage that might be unrelated to normal business operations, such as video streaming by employees or misconfigured networking gear.

Cradlepoint pitches itself as particularly useful to retailers. The company claims that 75% of the top retailers globally use its technology. Customers include David’s Bridal, which sells wedding dresses through 330 stores in North America and the United Kingdom. Another sizable retail customer is the jewelry manufacturer Pandora, which distributes its products through stores in more than 100 countries.

Companies outside of retail also use Cradlepoint technology. DSC Dredge LLC uses Cradlepoint for managing 4G LTE, 4G and 3G connectivity across its fleet of dredging machines. The company supplies the equipment in more than 40 countries for use in constructing dams and improving waterway drainage and navigability.

DSC has equipped each of its dredges with a Cradlepoint router and oversees the technology through the NetCloud management software.

Cradlepoint sells subscription-based packages that converge multiple network services on a single edge router. The bundle, for example, could include a router with Ethernet ports, and support for Wi-Fi with a guest portal and LTE integration.

Cradlepoint sells subscriptions on a one-, three- or five-year basis.

Go to Original Article
Author:

New capabilities added to Alfresco Governance Services

Alfresco Software introduced new information governance capabilities this week to its Digital Business Platform through updates to Alfresco Governance Services.

The updates include new desktop synchronization, federation services and AI-assisted legal holds features.

“In the coming year, we expect many organizations to be hit with large fines as a result of not meeting regulatory standards for data privacy, e.g., the European GDPR and California’s CCPA. We introduced these capabilities to help our customers guarantee their content security and circumvent those fines,” said Tara Combs, information governance specialist at Alfresco.

Federation Services enables cross-database search

Federation Services is a new addition to Alfresco Governance Services. Users can search, view and manage content from Alfresco and other repositories, such as network file shares, OpenText, Documentum, Microsoft SharePoint and Dropbox.

Users can also search across different databases with the application without having to migrate content. Federation Services provides one user interface for users to manage all the information resources in an organization, according to the company.

Organizations can also store content in locations outside of the Alfresco platform.

Legal holds feature provides AI-assisted search for legal teams

The legal holds feature provides document search and management capabilities that help legal teams identify relevant content for litigation purposes. Alfresco’s tool now uses AI to discover relevant content and metadata, according to the company.

“AI is offered in some legal discovery software systems, and over time all these specialized vendors will leverage AI and machine learning,” said Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis. He added that the AI-powered feature of Alfresco Governance Services is one of the first such offerings from a more general information management vendor.

“It is positioned to augment the specialized vendors’ work, essentially curating and capturing relevant bodies of information for deeper analysis.”

Desktop synchronization maintains record management policies

Another new feature added to Alfresco Governance Services synchronizes content between a repository and a desktop, along with the records management policies associated with that content, according to the company.

With the desktop synchronization feature, users can expect the same record management policies whether they access a document on their desktop computer or view it from the source repository, according to the company.

When evaluating a product like this in the market, Pelz-Sharpe said the most important feature a buyer should look for is usability. “AI is very powerful, but less than useless in the wrong hands. Many AI tools expect too much of the customer — usability and recognizable, preconfigured features that the customer can use with little to no training are essential.”

The new updates are available as of Dec. 3. There is no price difference between the updated version of Alfresco Governance Services and the previous version. Customers who already had a subscription can upgrade as part of their subscription, according to the company.

According to Pelz-Sharpe, Alfresco has traditionally competed against enterprise content management and business process management vendors. It has pivoted during recent years to compete more directly with PaaS competitors, offering a content- and process-centric platform upon which its customer can build their own applications. In the future, the company is likely to compete against the likes of Oracle and IBM, he said.

Go to Original Article
Author:

Google joins bare-metal cloud fray

Google has introduced bare-metal cloud deployment options geared for legacy applications such as SAP, for which customers require high levels of performance along with deeper virtualization controls.

“[Bare metal] is clearly an area of focus of Google,” and one underscored by its recent acquisition of CloudSimple for running VMware workloads on Google Cloud, said Deepak Mohan, an analyst at IDC.

IBM, AWS and Azure have their own bare-metal cloud offerings, which allow them to support an ESXi hypervisor installation for VMware, and Bare Metal Solution will apparently underpin CloudSimple’s VMware service on Google, Mohan added.

But Google will also be able to support other workloads that can benefit from bare metal availability, such as machine learning, real-time analytics, gaming and graphical rendering. Bare-metal cloud instances also avert the “noisy neighbor” problem that can crop up in virtualized environments as clustered VMs seek out computing resources, and do away with the general hit to performance known commonly as the “hypervisor tax.”

Google’s bare-metal cloud instances offer a dedicated interconnect to customers and tie into all native Google Cloud services, according to a blog post. The hardware has been certified to run “multiple enterprise applications,” including ones built on top of Oracle’s database, Google said.

Oracle, which lags far behind in the IaaS market, has sought to preserve some of those workloads as customers move to the cloud.

Earlier this year, it formed a cloud interoperability partnership with Microsoft, pushing a use case wherein customers could run enterprise application logic and presentation tiers on Azure infrastructure, while tying back to an Oracle database running on bare-metal servers or specialized Exadata hardware in Oracle’s cloud.

Not all competitive details laid bare

Overall, bare-metal cloud is a niche market, but by some estimates it is growing quickly.

Among hyperscalers such as AWS, Google and Microsoft, the battleground is in early days, with AWS only making its bare-metal offerings generally available in May 2018. Microsoft has mostly positioned bare metal for memory-intensive workloads such as SAP HANA, while also offering it underneath CloudSimple’s VMware service for Azure.

Meanwhile, Google’s bare-metal cloud service is fully managed by Google, provides a set of provisioning tools for customers, and will have unified billing with other Google Cloud services, according to the blog.

How smoothly this all works together could be a key differentiator for Google in comparison with rival bare-metal providers. Management of bare-metal machines can be more granular than traditional IaaS, which can mean increased flexibility as well as complexity.

Google’s Bare Metal Solution instances are based on x86 systems that range from 16 cores with 384 GB of DRAM, to 112 cores with 3,072 GB of DRAM. Storage comes in 1 TB chunks, with customers able to choose between all-flash or a mix of storage types. Google also plans to offer custom compute configurations to customers with that need.

It also remains to be seen how price-competitive Google is on bare metal compared with competitors, which include providers such as Packet, CenturyLink and Rackspace.

The company didn’t immediately provide costs for Bare Metal Solution instances, but said the hardware can be purchased via monthly subscription, with the best deals for customers that sign 36-month terms. Google won’t charge for data movement between Bare Metal Solution instances and general-purpose Google Cloud infrastructure if it occurs in the same cloud region.

Go to Original Article
Author:

When to Use SCVMM (And When Not To)

Microsoft introduced Hyper-V as a challenge to the traditional hypervisor market. Rather than a specialty hallmark technology, they made it into a standardized commodity. Instead of something to purchase and then plug into, Microsoft made it ubiquitously available as something to build upon. As a side effect, administrators manage Hyper-V using markedly different approaches than other systems. In this unfamiliar territory, we have a secondary curse of little clear guidance. So, let’s take a look at the merits and drawbacks of Microsoft’s paid Hyper-V management tool, System Center Virtual Machine Manager.

What is System Center Virtual Machine Manager?

“System Center” is an umbrella name for Microsoft’s datacenter management products, much like “Office” describes Microsoft’s suite of desktop productivity applications. System Center has two editions: Standard and Datacenter. Unlike Office, the System Center editions do not vary by the number of member products that you can use. Both editions allow you to use all System Center tools. Instead, the different editions differ by the number of systems that you can manage. We will not cover licensing in this article; please consult your reseller.

System Center Virtual Machine Manager, or SCVMM, or just VMM, presents a centralized tool to manage multiple Hyper-V hosts and clusters. It provides the following features:

  • Bare-metal deployment of Hyper-V hosts
  • Pre-defined host and virtual switch configuration combinations
  • Control over clusters, individual hosts, and virtual machines
  • Virtual machine templating
  • Simultaneous deployment of multiple templates for automated setup of tiered services
  • Granular access controls (control over specific hosts, VMs, deployments, etc.)
  • Role-based access
  • Self-service tools
  • Control over Microsoft load balancers
  • Organization of offline resources (ISOs, VHDXs, etc.)
  • Automatic balancing of clustered virtual machines
  • Control over network virtualization
  • Partial control over ESXi hosts

In essence, VMM allows you to manage your datacenter as a cloud.

Can I Try VMM Before Buying?

You can read the list above to get an idea of the product’s capabilities. But, you can’t distinguish much about a product from a simple bulleted list. You learn the most about a tool by using it. To that end, you can download an evaluation copy of the System Center products. I created a link to the current long-term version (2019). If you scroll below that, you will find an evaluation for the semi-annual channel releases. Because of the invasive nature of VMM, I highly recommend that you restrict it to a testbed of systems. If you don’t have a test environment, then it presents you with a fantastic opportunity to try out nested virtualization.
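
If you do go the nested route, here is a minimal sketch of preparing a lab VM with the Hyper-V module. The VM name is a placeholder, and the VM must be powered off before you change its processor settings:

    # Expose virtualization extensions to a test VM so it can host Hyper-V (and VMM) itself.
    # The VM name is a placeholder.
    Stop-VM -Name 'vmm-lab-host1'
    Set-VMProcessor -VMName 'vmm-lab-host1' -ExposeVirtualizationExtensions $true
    # MAC address spoofing is commonly needed so nested guests can reach the network.
    Set-VMNetworkAdapter -VMName 'vmm-lab-host1' -MacAddressSpoofing On
    Start-VM -Name 'vmm-lab-host1'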

Why Should I Use VMM to Manage my Hyper-V Environment?

Rather than trying to take you through a world tour of features that you could more easily explore on your own, I want to take this up to a higher-level view. Let’s settle one fact upfront: not everyone needs VMM. To make a somewhat bolder judgment, very few Hyper-V deployments need it. So, let’s cover the ones that do.

VMM for Management at Scale

The primary driver of VMM use has less to do with features than with scale. Understand that VMM does almost nothing that you cannot do yourself with freely-available tools. It can make tasks easier. The more hosts you have, the more work to do. So, if you’ve got many hosts, it doesn’t hurt to have some help. Of course, the word “many” does not have a universal meaning. Where do we draw the line?

For starters, we would not draw any line at all. If you’ve gone through the evaluation, you like what VMM has to offer, and the licensing cost does not drive you away, then use VMM. If you go through the effort to configure it properly, then VMM can work for even a very small environment. We’ll dive deeper into that angle in the section that discusses the disincentives to use VMM.

Server hosting providers with dozens or hundreds of clients make an obvious case for VMM. VMM makes one thing easy that nothing else can: role-based access. The traditional tools allow you to establish host administrators, but nothing more granular. If you want a simple tool to establish control for tenants, VMM can do that.

VMM solves another problem that makes the most sense in the context of hosting providers: network virtualization. The term “network virtualization” could have several meanings, so let’s disambiguate it. With network virtualization, we can use the same IP addresses in multiple locations without collision. In many contexts, we can provide that with network address translation (NAT) routers. But, for tenants, we need to separate their traffic from other networks while still using common hardware. We could do that with VLANs, but that gives us two other problems. First, we have a hard limit on the number of VLANs that can co-exist. Second, customers may want to stretch their networks, including their VLANs, into the hosted environment. With current versions of Hyper-V, we have the ability to manage network virtualization with PowerShell, but VMM still makes it easier.

So, if you manage very large environments that can make use of VMM’s tenant management, or if you have a complicated networking environment that can benefit from network virtualization, then VMM makes sense for you.

VMM for Cloud Management

VMM for cloud management really means much the same thing as the previous section. It simply changes the approach to thinking about it. The common joke goes, “the cloud is just someone else’s computer”. But, how does that change when it’s your cloud? Of course, that joke has always represented a misunderstanding of cloud computing.

A cloud makes computing resources available in a controlled fashion. Prior to the powers of virtualization, you would either assign physical servers or you’d slice out access to specific resources (like virtual servers in Apache). With virtualization, you can create virtual machines of particular sizes, which supplants the physical server model. With a cloud, at least the way that VMM treats it, you can quickly stand up all-new systems for clients. You can even give them the power to deploy their own.

Nothing requires the term “client” to apply only to external, paying customers. “Client” could easily mean internal teams. You can have an “accounting cloud” and a “sales cloud” and whatever else you need. Hosting providers aren’t the only entities that need to easily provide computing resources.

Granular Management Capabilities

I frequently see requests for granular control over Hyper-V resources. Administrators want to grant access to specific users to manage or connect to particular virtual machines. They want helpdesk staff to be able to reboot VMs, but not change settings. They want to allow different administrators to perform different functions based on their roles within the organization. I also think that some people just want to achieve a virtual remote desktop environment without paying the accompanying license fees.

VMM enables all of those things (except the VDI sidestep, of course). Some of these things are impossible with native tools. With difficulty, you can achieve some in other ways, such as with Constrained PowerShell Endpoints. VMM does it all, and with much greater ease.
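
To give a sense of what the do-it-yourself route looks like, here is a bare-bones sketch of a constrained endpoint that exposes only two Hyper-V cmdlets to helpdesk staff. The file and endpoint names are placeholders, and a production version would also need to control which users may connect:

    # Create a session configuration that exposes only two Hyper-V cmdlets,
    # then register it as a constrained endpoint on the host. Names are placeholders.
    New-PSSessionConfigurationFile -Path '.\HelpdeskHyperV.pssc' `
        -SessionType RestrictedRemoteServer `
        -VisibleCmdlets 'Get-VM', 'Restart-VM'
    Register-PSSessionConfiguration -Name 'HelpdeskHyperV' -Path '.\HelpdeskHyperV.pssc'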

The Quick Answer to Choosing VMM

I hope that all of this information provides a clearer image. When you have a large or complex Hyper-V environment, especially with multiple stakeholders that need to manage their own systems, VMM can help you. If you read through all of the above and did not see how any of that could meaningfully apply to your organization, then the next section may fit you better.

Reasons NOT to Use SCVMM?

We’ve seen the upsides of VMM. Now it’s time for a look at the downsides.

VMM Does Not Come Cheap – or Alone

You can’t get VMM by itself. You must buy into the entire suite or get nothing at all. I won’t debate the merits of the other members of this suite in this article. Whether you want them or not, they all come as a set. That means that you pay for the set. If you get the quote and feel any hesitation at paying it, then that’s a sign that it might not be worth it to you.

VMM is Heavy

Hyper-V’s built-in management tools require almost nothing. The PowerShell module and MMC consoles are lightweight. They require a bit of disk space to store and a spot of memory to operate. They communicate with the WMI/CIM interfaces to do their work.

VMM shows up at the opposite end. It needs a server application install, preferably on a dedicated system. It stores all of its information in a Microsoft SQL database. It requires an agent on every managed host.

VMM Presents its Own Challenges

VMM is not easy to install, configure, or use. You will have questions during your first install that the documentation does not cover. It does not get easier. I have talked with others that have different experiences from mine; some with problems that I did not encounter, and others that have never dealt with things that routinely irritate me. I will limit this section to the things that I believe every potential VMM customer will need to prepare for.

Networking Complexity

We talked about the powers of network virtualization earlier. That technology necessitates complexity. However, VMM makes things difficult even when you have a simple Hyper-V networking design. In my opinion, it’s needlessly complicated. You have several configuration points. If you miss one, something will not work. To tell the full story, a successful network configuration can be easily duplicated to other systems, even overwriting existing configurations. However, in smaller deployments, the negatives can greatly outweigh the positives.

General Complexity

I singled out networking in its own section because I feel that VMM’s designers could have created an equally capable networking system with a substantially simpler configuration. But, I think they can justify most of the rest of the complexity. VMM was built to enable you to run your own cloud – multiple clouds, even. That requires a bit more than the handful of interfaces necessary to wrangle a couple of hosts and a handful of VMs.

Over-Eager Problem Solving

When VMM detects problems, it tries to apply fixes. That sounds good, except that the “fixes” are often worse than the disease – and sometimes there aren’t even any problems to fix. I’ve had hosts drained of their VMs, sitting idle, all because VMM suddenly decided that there was a configuration problem with the virtual switch. Worse, it wouldn’t specify what it didn’t like about that virtual switch or propose how to remedy the problem. You’ll see unspecified problems with hosts and virtual machines that VMM won’t ignore and require you to burn time in tedious housekeeping.

Convoluted Error Messaging

A point of common frustration that you’ll eventually run into: the error messages. VMM often leaves cryptic error messages in its logs. I’ve encountered numerous messages that I could not understand or find any further information about. These cost time and energy to research. Inability to uncover what triggered something or even find an actual problem – these things eventually lead to “alarm fatigue”. You simply ignore the messages that don’t seem to matter, thereby taking a risk that you’ll miss something that does matter.

Mixed Version Limitations

With the introduction of changes in Hyper-V in the 2012 series, Microsoft directly addressed an earlier problem: simultaneous management of different versions of Hyper-V. You can currently use Hyper-V Manager and Failover Cluster Manager in the Windows 8+ and Windows Server 2012+ versions to control any version of Hyper-V that employs the v2 namespace. Officially, Microsoft says that any given built-in management tool will work with the version it was released with, any lower version that supports v2, and one version higher. The tools can only manage the features that they know about, of course, but they’ll work.

Conversely, I have not seen any version of VMM that can control a higher-level Hyper-V version. VMM 2016 controls 2016 and lower, but not 2019. Furthermore, System Center rarely releases on the same schedule as Windows Server. VMM-reliant shops that wanted to migrate to Hyper-V in Windows Server 2019 had to wait several months for the release of VMM 2019.

The Quick Answer to Choosing Against VMM

As mentioned a few times earlier in this article, the decision against VMM will largely rest on the scale of your deployment. Whether or not the problems that I mentioned above matter to you — or even apply to you — you will need to invest time and effort specifically for managing VMM. If you do not have that time, or if that effort is simply not worth it to you, then do not use VMM.

Remember that you have several free tools available: Hyper-V Manager, Failover Cluster Manager, their PowerShell modules, and Windows Admin Center.
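
As a small taste of what the free tooling already covers, a one-liner from the Hyper-V module can inventory virtual machines across several hosts; the host names are placeholders:

    # Inventory VMs across a handful of hosts with the free Hyper-V PowerShell module.
    Get-VM -ComputerName 'hv01', 'hv02', 'hv03' |
        Select-Object ComputerName, Name, State, Uptime |
        Sort-Object ComputerName, Name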

Addressing the Automatic Recommendation for VMM

Part of the impetus behind writing this article was the oft-repeated directive to always use VMM with Hyper-V. For some writers and forum responders, it’s simply automatic. Unfortunately, it’s simply bad advice. It’s true that VMM provides an integrated, all-in-one management experience. But, if you’ve only got a handful of hosts, you can get a lot of mileage out of the free management tools. Where the graphical tools prove functionally inadequate, PowerShell can pick up the slack. I know that some administrators resist using PowerShell or any other command-line tools, but they simply have no valid reasons.

I will close this out by repeating what I said earlier in the article: get the evaluations and try out VMM. Set up networking, configure hosts, deploy virtual machines, and build-out services. You should know quickly if it’s all worth it to you. Decide for yourself. And remember to come back and tell us your experiences! Good luck!


Go to Original Article
Author: Eric Siron

Google cloud network tools check links, firewalls, packet loss

Google has introduced several network monitoring tools to help companies pinpoint problems that could impact applications running on the Google Cloud Platform.

Google launched this week the first four modules of an online console called the Network Intelligence Center. The components for monitoring a Google cloud network include a network topology map, connectivity tests, a performance dashboard, and firewall metrics and insights. The first two are in beta, and the rest are in alpha, which means they are still in the early stages of development.

Here’s a brief overview of each module, based on a Google blog post:

— Google is providing Google Cloud Platform (GCP) subscribers with a graphical view of their network topology. The visualization shows how traffic is flowing between private data centers, load balancers, and applications running on computing environments within GCP. Companies can drill down on each element of the topology map to verify policies or identify and troubleshoot problems. They can also review changes in the network over the last six weeks.

— The testing module lets companies diagnose problems with network connections within GCP or from GCP to an IP address in a private data center or another cloud provider. Along with checking links, companies can test the impact of network configuration changes to reduce the chance of an outage.

— The performance dashboard provides a current view of packet loss and latency between applications running on virtual machines. Google said the tool would help IT teams determine quickly whether a packet problem is in the network or an app.

— The firewall metrics component offers a view of rules that govern the security software. The module is designed to help companies optimize the use of firewalls in a Google cloud network.

Getting access to the performance dashboard and firewall metrics requires a GCP subscriber to sign up as an alpha customer. Google will incorporate the tools into the Network Intelligence Center once they reach the beta level.

Go to Original Article
Author: