
HubStor takes on Veeam, goes virtual with backup service

HubStor is taking its backup-as-a-service platform into the virtual world.

The cloud data management provider launched its BaaS offering for VMware vSphere virtual machines, featuring backup, replication, recovery and archiving.

HubStor also plans to provide support for other virtual infrastructures, including Microsoft Hyper-V and Nutanix.

Customers seek VMware backup

The HubStor backup and archive platform already supported SaaS apps such as Microsoft 365 and Box, as well as PaaS offerings, including Amazon S3 and Azure Blob Storage.

“HubStor is one unified SaaS platform,” CEO Geoff Bourgeois said.

Bourgeois said he had thought VM backup was a saturated market, but customers at trade shows kept asking if the vendor backed up VMware. He found that many customers were following the 3-2-1 rule of backup but making it more complicated than necessary, for example by using tape.

“We can simplify the whole 3-2-1,” Bourgeois said of the cloud platform.
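The 3-2-1 rule Bourgeois refers to calls for three copies of your data, stored on two different media types, with one copy offsite. As a rough illustration of the policy itself (not HubStor code), a compliance check might look like:

```python
def satisfies_3_2_1(copies):
    """Check a backup set against the 3-2-1 rule: at least 3 copies,
    on at least 2 distinct media types, with at least 1 copy offsite.
    `copies` is a list of (media_type, is_offsite) tuples."""
    return (len(copies) >= 3
            and len({media for media, _ in copies}) >= 2
            and any(offsite for _, offsite in copies))

# A primary copy on disk, a second on NAS and a cloud copy offsite passes:
print(satisfies_3_2_1([("disk", False), ("nas", False), ("cloud", True)]))  # True
```

Cloud BaaS platforms simplify the rule because the cloud copy covers both the second medium and the offsite requirement at once.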

The HubStor support for vSphere includes backup policy administration and monitoring, immutability, encryption and cloud storage. HubStor recovers the VMs to any vSphere instance, Bourgeois said.

HubStor added backup for VMware vSphere VMs to its platform.

The HubStor VM backup is available in dedicated and shared models with pricing mainly based on cloud storage consumption.

The dedicated BaaS instance is geared toward enterprises. It provides enhanced features, including advanced security, data sovereignty controls and expansion into other HubStor offerings such as file system tiering and journaling. Pricing starts at $335 per month, with monthly storage costs typically ranging from 0.3 cents to 3 cents per gigabyte.

The shared BaaS instance is aimed at smaller businesses that only want VM backup. Pricing starts at $50 per month, with backup storage costs ranging from 0.3 cents to 3 cents per gigabyte.

HubStor’s vSphere backup became generally available in May. The vendor is targeting late July for its Hyper-V backup. Bourgeois said he’s hoping to add support for Nutanix, Azure VMs, Windows Server and SQL Server by the end of the year. Within that group of infrastructures, customers requested VMware support the most by far, he said.

Bourgeois listed Veeam as HubStor’s top competition. He acknowledged that HubStor might not be as feature-rich, but said some Veeam customers have told him they’re paying for a lot of features they don’t use and are looking for a backup product that’s cheaper and simpler. Veeam started out as a pioneer in virtual backup and has since expanded to include physical and cloud protection.

HubStor has recently converted a couple of Veeam customers to its product. Veritas is another competitor, Bourgeois said.

HubStor, based in Ottawa, claims more than 200 customers, whose protected data ranges from gigabytes to multiple petabytes.

Since the coronavirus pandemic hit in March, customers have told HubStor that they want to use more cloud. Users are looking for simplification, Bourgeois said.

HubStor customer appreciates range of functions

Virgin Hyperloop has been using HubStor for about two years. The product protects more than 100 TB for the transportation company, which is based in Los Angeles.

Cory Coles, senior systems administrator at Virgin Hyperloop, said he was initially impressed with HubStor’s e-discovery, Microsoft 365 backup, and Windows file server stubbing and cloud tiering.


“From the first time I heard about it, I found HubStor to be a collection of incredibly valuable solutions I had never seen combined into one platform before,” Coles wrote in an email. “As a startup company still adding a lot of core IT functions to our infrastructure, I needed to check a bunch of boxes and HubStor was able to do a lot with one stroke.”

Coles said it was simple to add the vSphere backup to his HubStor platform and it took about 30 minutes.

“It’s honestly probably easier than using the native Azure Backup product,” he wrote. “I can easily tier older backups to cool and archive storage, [which is] key for saving money and not upsetting my boss with an accidentally large bill.”

Coles said he would like to see application backup supported with the VM backup, “so I can back up and restore to SQL Server and Active Directory natively and on a per-object level.”


New Quest Kace products make UEM easier for IT admins

Recent updates to the Quest Kace lineup of unified endpoint management products look to ease the management of remote work by supporting more devices and providing better visibility of every device connected to the corporate network.

Quest Software Inc. announced the updates to its products — among them the Kace Systems Management Appliance (SMA) and the Kace Cloud Mobile Device Manager (MDM) — in late May. The products each handle components of endpoint management; Kace SMA, for example, automates such tasks as discovering hardware and software, while Kace MDM focuses on the management of mobile devices, such as wiping lost and stolen devices.

Quest Kace senior product manager Ken Galvin said any IT professional who believed there was a secure perimeter around their company’s network was disabused of that notion with the COVID-19 crisis.

“All of these IT admins suddenly find themselves managing remote employees who are using a different variety of company-owned and personal devices,” he said. “[The employees] are downloading and installing software from wherever.”

Galvin said every device on a network is a potential attack vector, and these devices are now in the homes of users who may not be tech-savvy.

“It’s not so much a new problem as compounding existing problems,” he said.

Such problems, Galvin said, called for better unified endpoint management (UEM) tools.


Independent analyst Eric Klein echoed the statement, saying UEM has taken on increased prominence as more employees work from home.

“IT orgs are finally recognizing that [UEM] is going to be helpful for them in the era of COVID and remote work,” he said.

Klein noted, however, that Quest may struggle to compete with bigger UEM players, even with the additional features.

Update brings changes

Galvin said two updated features — better Chromebook management for Kace SMA and Apple TV support for Kace MDM — are designed to help IT manage a breadth of devices. Among the new Chromebook management features is the ability to remotely disable a lost or stolen device.

Apple TVs may be overlooked as a potential attack vector, but Galvin noted that such devices are commonly used for kiosks and as electronic signage for schools and offices.

“It’s commonly used by customers as a display/dashboard device,” he said.

One advantage to the broader support of devices, Galvin said, is that the software is better at discovering what devices are on a corporate network, which could help IT admins on the security front.

“Part of discovery is discovering what should be on your network, but the other part is discovering what shouldn’t be there,” he said. “Part of your IT hygiene should be running a regular scan of all the IP addresses on your network.”


“If you don’t know what you have, you can’t manage it or secure it,” he added.
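The regular scan Galvin describes can be as simple as probing each address for common service ports. A minimal sketch of the idea (a basic TCP connect probe, not Quest's discovery engine — the port list is illustrative) might look like:

```python
import socket

def scan_host(ip, ports=(22, 80, 443, 3389), timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `ip`.
    Open ports hint at what kind of device is answering at that address."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if s.connect_ex((ip, port)) == 0:
                open_ports.append(port)
    return open_ports

# Run against every address in a subnet, anything answering that isn't in
# your asset inventory is worth investigating.
```

Real discovery tools go much further, fingerprinting operating systems and matching devices against an asset database, but the principle is the same: enumerate what responds, then compare it to what should be there.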

Andrew Hewitt, an analyst at Forrester Research, said remote work had revealed new pain points in managing devices.

“One of the things that’s been coming up is visibility for the unmanaged devices that are on your network — to understand where they are and bring them into compliance,” he said. “The traditional UEM [products] don’t tend to do that very well.”

Hewitt said Quest, with its emphasis on discovering devices on a network, could help IT professionals in that regard.

Using Quest

Leyla McCrary, manager of end user computing at the St. Louis-headquartered construction firm McCarthy Holdings Inc., said her firm has been using Quest’s Kace SMA for about five years. She said the product’s help desk ticketing system was attractive to the business, which has about 3,000 employees in offices across the country.


“We ended up picking Quest, which was owned by Dell at the time … because it was really user-friendly,” she said. “It gave us a lot of options with the ticketing system to customize a lot of different things.”

Rollout, she said, had gone smoothly, but it did take some time to customize the ticketing system. Each of the company’s IT groups had a different ticketing queue with different sets of rules at the time. Having to manually create those rules, she said, was the longest part of the product’s implementation.

McCrary said McCarthy Holdings has used more Kace SMA features over time. The company mainly relies on the product to manage PCs, she said, although it tracks iPads through Kace’s asset management functionality.

“We just a few weeks ago added servers,” she said. “We’ve put an agent on every server, and we’re getting ready to start patching on those servers.”

This change, she said, should make patching on those servers more user-friendly for the company’s data center team.

McCrary also noted that McCarthy Holdings was able to use Kace SMA for streamlining the new hire process. The firm’s HR team can fill out a custom form, which then automatically sends tickets to the IT groups responsible for such things as creating user accounts and asset management.

Managing a diverse set of devices

Holger Mueller, vice president and principal analyst at Constellation Research, said IT professionals must be able to manage a diverse set of devices remotely to handle the current situation. He said the tools they use must be able to accomplish this task.


“It is good to see vendors responding in both a platform reach and capabilities perspective,” he said. “Quest, with its Kace suite of products, does the former with [such things as] adding Apple TV as a [supported] device and the latter by allowing combined management and security updates to flow to devices.”

Independent analyst Klein said the change wrought by remote work has presented an opportunity for UEM vendors.

“Now is the opportunity for organizations to put their money where their mouths are in terms of supporting a remote workforce and investing in UEM,” he said. “There is more personal computer and laptop usage right now, happening in homes. Because of that, you’re going to need better Chromebook support, better Mac support.”


Ivanti updates its unified endpoint management

An update to Ivanti’s endpoint management software — Ivanti Unified Endpoint Manager 2020.1 — focuses on improving the experience of managing remote workers through such features as improved remote control and enhanced BitLocker support.

Alan Braithwaite, Ivanti’s senior director of product management, said that although those remote work features are playing an increased role during the lockdown, the company had been working on them prior to the COVID-19 outbreak.


“It just turned out very nicely,” he said, noting the increased call for such things as cloud storage for software delivery — a feature in the new update.

Liz Miller, a vice president and principal analyst at Constellation Research, said the update represented a positive expansion to Ivanti’s endpoint management and security offerings.

“As more enterprises shift to managing a remote workforce that may remain permanent, IT teams need easy-to-manage, quick-to-interpret-and-alert interfaces to manage an expansive virtual campus of endpoints and devices,” she said. “Their solution is making device management — from discovery to OS update delivery to, almost as important in these times, patch update delivery — that much easier, and making delivery of these updates and patches unobtrusive to the end user.”

Improving management tools

Braithwaite said he saw the update’s remote control improvements — allowing systems administrators to take control of devices — as improving the day-to-day experience of IT professionals. Ivanti Unified Endpoint Manager (UEM) had previously supported remote control, he said, but required administrators to do so through their company’s servers.

“Now, they can do it directly from the cloud,” he said.

The update also includes full macOS support for remote control.

Better BitLocker support would make it easier for IT to use the Windows 10 drive-encryption feature, Braithwaite said. Among the enhancements is a means to recover login information remotely.

“What if your end user loses their password? They got sent home, and usually keep their password on a sticky [note] under their desk, in case they forgot it,” he said. “That’s a real challenge for them to be able to recover that, unless you have technology like we have. We can grab that key information, store it securely and then provide it … no matter where [the users] are.”

With cloud storage for software delivery, Braithwaite said, companies could use Azure or AWS to deliver software to their employees, reducing the strain on VPNs. Demand for such a feature, he noted, predated the current explosion of remote work.

“Even before COVID-19, we had more and more requests from our customers, saying ‘We have a number of remote workers in the field, and we’re trying to move [data] from our data centers up to the cloud,'” he said.

The increasing importance of UEM

Miller said COVID-19 dramatically increased the endpoint footprint that IT departments must manage.

“[The outbreak] sent IT the almost-impossible task of ensuring productivity and uptime across these devices, but also ensuring visibility, control and compliance over an unknown universe,” she said. “Ivanti’s platform works to bring all of that into view while not disrupting the workflow and productivity of individual users.”


Independent analyst Eric Klein said the remote-work shift has underscored the need for management software.

“UEM wasn’t something that was an absolute need for companies that had a lot of their workers in-house,” he said. “Now that a lot of their workforce is remote, [companies are] going to realize they need to invest in that. There’s an opportunity in this market across the board.”

According to Miller, IT professionals would benefit from Ivanti UEM’s flexibility in automating and managing updates across a variety of devices. BitLocker in particular has been a source of intense frustration for administrators, she said, citing Reddit threads full of frustration about the feature. Help in managing it would be welcomed, she said.

Klein said the update should help IT professionals, as it adds the ability to perform management tasks — like the remote control of devices — across multiple platforms.

He questioned whether those features would gain customers for Ivanti, however, as larger competitors like Citrix and VMware also offer such capabilities.

“The messaging and marketing efforts have started to pay dividends for vendors like VMware and Citrix,” he said. “I think it’s going to be a bit harder for other vendors to get into the space.”


HR use case shows value of Oracle Analytics Cloud

By detailing the business challenges of a waste management company, Myles Gilsenan demonstrated the value Oracle Analytics Cloud can give organizations.

Gilsenan, director of Oracle business analytics at Perficient, a consulting firm based in St. Louis that works on digital projects with enterprises, spoke about Oracle Analytics Cloud (OAC) at a breakout session of Oracle Analytics’ annual user conference May 19. The conference, which began on May 12 and has sessions scheduled through August 18, was held online due to the COVID-19 pandemic.

Unifying platforms

Oracle’s analytics platform had been a patchwork of nearly 20 business intelligence products until June 2019, when the software giant streamlined its BI platform into three products — Oracle Analytics Cloud, Oracle Analytics Server and Oracle Analytics for Applications. Oracle Analytics Cloud is its SaaS offering aimed at business users and featuring natural language generation and other augmented intelligence capabilities to foster ease of use.

It’s a transformation that’s been well received.

“The Oracle Analytics Cloud has enabled Oracle to rapidly play catch-up to some of the incumbents in the analytics space,” said Mike Leone, an analyst at Enterprise Strategy Group. “It provides data-centric organizations with a cloud service anchored in simplicity. While OAC focuses on data visualization and augmented analytics, there’s a lot more under the covers — intelligent automation, recommendations, natural language querying and numerous third-party integrations.”

An Oracle Analytics Cloud dashboard shows the traffic volume per day for the Washington, D.C., area.

Similarly, Doug Henschen, an analyst at Constellation Research, said the Oracle analytics reorganization was significant.

“Oracle has done a nice job of unifying its strategy and technology across cloud, on-premises and application-integrated deployments with Oracle Analytics Cloud, Oracle Analytics Server and Oracle Analytics for Applications, respectively,” he said. “It’s all one code base.”


In addition, he added, the way the platform is packaged gives users flexibility.

“The packaging gives them a data model, data integration capabilities, dashboards and reports that are prebuilt for Oracle’s cloud ERP and [healthcare management] apps, yet all of these prebuilt features can be extended to customer-specific data and analysis requirements,” Henschen said. “It’s a way to get started quickly but without being limited to prebuilt content.”

Oracle Analytics Cloud is designed to be accessible to technical and non-technical users alike, and, ironically, it was through one organization’s difficulty getting started that Gilsenan demonstrated what he said are the platform’s ease of use and capability to deliver value quickly.

Perficient’s client, which he did not name, was a provider of waste management services including waste removal, recycling and renewable energy. One of the company’s main goals when it began using Oracle Analytics Cloud was to join human resources data from Taleo and PeopleSoft, human resources software platforms owned by Oracle.

Specifically, according to Gilsenan, the client wanted greater visibility into such HR metrics as the cost of vacant positions, the time it took to fill vacant positions, quality of hires, employee career progression and talent optimization.

“What they really wanted was to track employees from the recruiting channel all the way through career progression at the company,” he said. “And over time, they wanted to build up a data set to be able to say that people who come through a certain channel turn out to be successful employees, and they would then of course emphasize those channels.”

The company’s data, however, came from disparate systems, including one that was on premises. And when the company started trying to unify its data in Oracle Analytics Cloud, it ran into trouble.

“They had a sense that OAC is an agile, cloud-based environment, and you should be able to get value very quickly,” Gilsenan said. “There were a lot of expectations, and people were expecting to see a dashboard very, very quickly. But there were organizational things that caused issues.”

One of the biggest was that the company’s expert in the subject matter was also working on many other things and didn’t have enough time to devote to the project. Other members of the team working on the project also had competing responsibilities.

As a result, according to Gilsenan, when it started taking longer to complete the project than originally planned, company management concluded that Oracle Analytics Cloud was too complicated.

“When it came to integrating data sources, there was some technical expertise that was needed, but by and large it was the idea that they couldn’t focus,” Gilsenan said. “It was a classic organizational issue.”

Rather than a different analytics platform, what the company really needed was some outside help, according to Gilsenan. It brought in Perficient, which delivered an HR analytics system in Oracle Analytics Cloud within four weeks.

Perficient’s first step was to restore the waste management company’s confidence in Oracle Analytics Cloud by showing executives success stories. It then helped the company define success criteria, develop a plan and move into the execution phase.

Perficient helped the waste management company develop a dashboard and six reports that covered critical HR metrics such as the quality of hires and the cost of open positions.

“They became very competent in the platform, and right then and there made plans to roll out Oracle Analytics Cloud to the rest of the company [beyond HR],” Gilsenan said.

Focus on HR

While the waste management company is now using Oracle Analytics Cloud throughout its organization, HR has been a particular focus of the platform. Oracle even unveiled a specialized HR version of Oracle Analytics for Cloud HCM at the start of its virtual user conference, though that’s not the tool Perficient’s client is now using.

“Oracle is looking to deliver a more holistic approach to HR analytics,” Leone said. “They’ve spent a ton of time researching various aspects of HR to deliver a comprehensive launching pad for organizations looking to modernize HR with advanced analytics. It’s about using more data from several entities together to help accurately measure success, failure and the likelihood of each. This is where Oracle is making significant strides in helping to modernize analytical approaches.”


Small vendors that stand out in network automation

Incumbent vendors are typically behind in providing cutting-edge features in network management tools. So, enterprises looking for advanced analytics and network automation will more likely find them in small vendors’ products.

More advanced tools are critical to enterprises switching to software-based network management in the data center from a traditional hardware-centric model. Driving the shift are initiatives to move workloads to the cloud and digitize more internal and external operations.

In a study released this month, almost half of the 350 IT professionals surveyed by Enterprise Management Associates said they wanted advanced analytics for anomaly detection and traffic optimization.

Small vendors are addressing the demand by incorporating machine learning in network monitoring tools that search for potential problems. Examples of those vendors include Kentik and Moogsoft.

Besides more comprehensive analytics, enterprises want software that automatically configures, provisions and tests network devices. Those network automation features are vital to improving efficiency and reducing human error and operating expenses.

Gartner recently named three small vendors at the forefront of network automation: BeyondEdge, Intentionet and NetYCE.

Machine learning in network management

Moogsoft is using machine learning to reduce the number of events its network monitoring software flags to engineers. Moogsoft does that by identifying and then hiding multiple activities related to the same problem.

“It really helps streamline” network operations, said Terry Slattery, a network consultant at IT adviser NetCraftsmen.
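The underlying idea — collapsing many related alerts into a single incident so engineers see one problem instead of dozens of symptoms — can be illustrated with a naive time-window grouping. Moogsoft's actual correlation uses machine learning and is far more sophisticated; this is only a sketch of the concept:

```python
from collections import defaultdict

def group_events(events, window=60):
    """Naively bucket (timestamp_seconds, source, message) events into
    incidents keyed by source and a fixed time window, so several alerts
    from the same device in the same minute surface as one incident."""
    incidents = defaultdict(list)
    for ts, source, message in sorted(events):
        incidents[(source, ts // window)].append(message)
    return dict(incidents)

events = [(10, "router1", "link flap"),
          (25, "router1", "bgp down"),
          (400, "switch2", "fan fail")]
# The two router1 alerts land in the same bucket and become one incident.
```

A learned model replaces the fixed window and source key with patterns inferred from historical alert streams, which is what lets it catch relationships a static rule would miss.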

Kentik, on the other hand, uses machine learning to correlate network traffic flow data generated by switches and routers that support the NetFlow protocol, Slattery said. The process can identify sources of malware or other potential security threats.

Moogsoft and Kentik use machine learning to improve specific features in their products. Vendors have yet to deploy it in broader network operations, which would likely require significant changes in network infrastructure.

Today, companies prefer to work on provisioning, monitoring and making hardware changes on a large scale. After that, they might start adding “smarts” to the network, said Jason Edelman, founder and CTO of consultancy Network to Code.

Gartner also named Network to Code as a small vendor that enterprises should consider. The consultancy’s client base includes 30 of the Fortune 500. The company specializes in the use of open source software for managing networks with a variety of vendor devices.

Gartner picks for automation

Among Gartner’s other small vendors, BeyondEdge was the only one focused on the campus network, where it competes with behemoths like Cisco and Hewlett Packard Enterprise’s Aruba.

BeyondEdge has developed overlay software for Ethernet switching fabrics and passive optical networks. The software lets enterprises create configurations based on business and application policies and then applies them at devices’ access points. BeyondEdge sells its vendor-agnostic technology through consumption-based pricing.

BeyondEdge is best suited for organizations that need to provision many ports for different classes of users, Gartner said. Those types of organizations are found in commercial real estate, hospitality, higher education and healthcare.

Intentionet and NetYCE provide tools for data center networks. The former has developed open source-based software that mathematically validates network configurations before deploying them. “This is a new capability in the market and can simultaneously enhance uptime and agility,” Gartner said.

NetYCE stands out for developing a straightforward UI that simplifies network configuration change management, network automation and orchestration capabilities, Gartner said.

“It provides a simple way for networking personnel — who may be novices in automation — to get up to speed quickly,” the analyst firm said.

NetYCE’s technology supports hardware from the largest established vendors. The company claims to provide adapters for nonsupported gear within two weeks, Gartner said.


Getting a handle on certificate management in Windows shops

Certificate management is one thing that IT pros often forget until an application fails or resources are unavailable because a certificate was not renewed before its expiration date.

Certificates are typically used to identify a webpage as a known site to create an encrypted HTTPS session. Most static webpages don’t use them. With known secure pages, the certificate handling is often done behind the scenes.

Certificates also manage authentication and communication between systems across an organization’s network; a lapsed certificate in your data center can have serious consequences, such as preventing users from logging into Microsoft Exchange to access email and calendars.

As an administrator, you can easily check certificates in Windows by running certmgr.msc at the command prompt to open the Certificates Microsoft Management Console (MMC) snap-in tool.

On the surface, it doesn’t look too difficult to manage certificates, but problems with them have caused some of the largest applications in the world to go offline.

The Certificates MMC snap-in tool displays the installed certificates on the current Windows machine.

The most common use of certificates is to establish a secure communication tunnel with a website so that both your login information and your activity are hidden from the rest of the internet. For example, when you load LinkedIn, the site uses a certificate to encrypt communication between your machine and the site with Secure Sockets Layer.

As you start to look at the websites you visit, you are likely to find that many requiring login information use certificates to protect your privacy. These certificates are not permanent; they do expire. When I checked, the LinkedIn certificate was due to expire in September. An expired certificate will cause problems. Once you cannot establish a secure connection, a website can simply go dark until the certificate is renewed.

Like many sites on the internet, LinkedIn uses a certificate to secure the traffic between the site and its users.

While losing LinkedIn might not be drastic, what if it was the certificate to a cloud-based application you use? Or worse yet, what if it was your company’s application and now your customers can’t access their data? An expiring certificate is simple to overlook and problems with certificate management happen to even the largest of companies, including Microsoft. It costs next to nothing to renew these certificates, but once they pass their expiration date, the resulting chaos can cost money and cause embarrassment for the IT staff.

Certificates often remain out of sight, out of mind

One of the main challenges with certificates is they remain hidden in plain sight. They are not complex to deal with and often last several years.

Your IT admins are used to the hustle and critical need of many IT services that remain front of mind. Because certificates last for a long time — often, several years — their importance fades into the background; they fall off the daily list of tasks that must be completed.

It’s easy enough to check the status of your certificates in Windows, but there is no mechanism to alert you about an imminent expiration. For some sites, it’s possible to click past the warning you might see when a certificate has expired; we train our users to avoid these types of potential security risks, so why is it an option to proceed? Clicking past doesn’t work for other key functions, such as single sign-on; more automated functions will simply stop working when the certificate expires.
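Because Windows provides no built-in expiration alert, a small scheduled script can fill the gap. The sketch below retrieves a server's certificate over TLS and reports the days remaining; the hostname and 30-day threshold are illustrative choices, not a prescribed standard:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after):
    """Parse the 'notAfter' string that ssl.getpeercert() returns,
    e.g. 'Sep 30 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return expires.replace(tzinfo=timezone.utc)

def days_until_expiry(host, port=443, timeout=5):
    """Fetch the server certificate for `host` and return the number of
    days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)
    return remaining.days

# Run daily against your sites, e.g. days_until_expiry("www.example.com"),
# and raise an alert when fewer than 30 days remain.
```

Pointing a job like this at every certificate you own, including internal ones, turns a forgotten expiration date into a routine ticket instead of an outage.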

Certificate management issues happen for several reasons

Renewal of certificates is not hard and can be done by even the most junior person on your team, except for one critical piece: You need a company credit card to charge the renewal to, and those are typically not given to junior admins. The stigma of needing to ask permission to use a corporate credit card or wanting to avoid the hassle of getting reimbursed can prevent IT staff from proceeding.

Oftentimes, this certificate task falls outside the realm of IT and into the accounting department. This also means accounting receives the renewal notices, and its staff may not understand how critical those notices are until it’s too late.

If both the communication about and the payment for certificates sit outside the main IT department, then it’s up to IT to be proactive and stay on top of certificate management. You should not rely on an email or a spreadsheet to track these expiration dates. A group calendar appointment, even years out, still helps, even when turnover occurs. There are also several vendors that offer certificate management add-ons to popular monitoring tools, such as SolarWinds and Quest Software.

While you don’t want to reinvent or deploy large-scale solutions to address certificate management, it’s not something to ignore. Expired certificates can be at the root of many wide-ranging issues, yet they rarely appear on any disaster recovery or backup plan because each one is so unique. Look to incorporate certificate monitoring into existing tool sets so your staff has ample time to renew and deploy certificates before your secure connections go offline along with your customers and reputation.

Checking a certificate isn’t hard and the renewal process isn’t difficult, but remembering to stay on top of certificate management continues to evade many IT shops. Another complication is the number of certificates to keep track of. You might have multiple sites, each with its own certificate, all of which are required to make one application work. It is easy to lose track of one, which can then cause a cascade of events that leads to application failure. Co-terming certificates to line up the expiration dates would make the most sense, but that is not possible in every environment.
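Incorporating certificate monitoring into existing tool sets can start very small. Below is a minimal Python sketch, not tied to any particular monitoring product, that flags certificates nearing expiration. The inventory mapping of hostnames to notAfter strings is a hypothetical input you might build from ssl.SSLSocket.getpeercert() results or your own records:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Return whole days until a certificate's notAfter timestamp.

    not_after is the string found in the notAfter field of the dict
    returned by ssl.SSLSocket.getpeercert(), e.g. "Jun 15 12:00:00 2025 GMT".
    """
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def expiring_soon(certs: dict, warn_days: int = 30) -> list:
    """Return names of certificates that expire within warn_days days."""
    return sorted(name for name, not_after in certs.items()
                  if days_until_expiry(not_after) <= warn_days)
```

Running a check like this on a schedule, with the warning window set well past your procurement lead time, gives whoever holds the credit card time to act.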


Essential components and tools of server monitoring

Though server capacity management is an essential part of data center operations, it can be a challenge to figure out which components to monitor and what tools are available. How you address server monitoring can change depending on what type of infrastructure you run within your data center, as virtualized architecture requirements differ from on-premises processing needs.

With the capacity management tools available today, you can monitor and optimize servers in real time. Monitoring tools keep you updated on resource usage and automatically allocate resources between appliances to ensure continuous system uptime.

For a holistic view of your infrastructure, capacity management software should monitor these server components to some degree. Tracking these components can help you troubleshoot issues and predict any potential changes in processing requirements.

CPU. Because CPUs handle basic logic and I/O operations, as well as route commands for other components in the server, they’re always in use. High CPU usage can indicate an issue with the CPU, but more likely it’s a sign that the issue is with a connected component. Above 70% utilization, applications on the server can become sluggish or stop responding.

Memory. High memory usage can result from multiple concurrent applications, but it can also point to a faulty process that is normally less resource-intensive. The memory hardware component itself rarely fails, but you should investigate performance when usage rates rise.

Storage area network. SAN component issues can occur at several points, including connection cabling, host bus adapters, switches and the storage servers themselves. A single SAN server can host data for multiple applications and often span multiple physical sites, which leads to significant business effects if any component fails.

Server disk capacity. With the right amount of capacity, storage disks alleviate storage issues and reduce data bottlenecks. Problems can arise when more users access the same application that uses a particular storage location, or if a resource-intensive process runs on a server not designed for the application. If you can’t increase disk capacity, monitor it and investigate when usage rates rise, so you can optimize future usage.

Storage I/O rates. You should also monitor storage I/O rates. Bottlenecks and high I/O rates can indicate a variety of issues, including CPU problems, disk capacity limitations, process bugs and hardware failure.

Physical temperatures of servers. Another vital component to monitor is server temperatures. Data centers are cooled to prevent any hardware component problems, but temperatures can increase for a variety of reasons: HVAC failure, internal server hardware failure (CPU, RAM or motherboard), external hardware failure (switches and cabling) or a software failure (firmware bug or application process issues).

OS, firmware and server applications. The entire server software stack (BIOS, OS, hypervisors, drivers and applications) must work together to ensure optimal usage. Failed updates can lead to issues for the server and any hosted applications, a degraded user experience or downtime.
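To make the walkthrough above concrete, here is a minimal Python sketch of a threshold check. Only the 70% CPU figure comes from the guidance above; the other limits are illustrative placeholders you would tune for your own environment:

```python
# Illustrative alert thresholds; only the 70% CPU figure comes from the
# guidance above -- the rest are placeholders to tune per environment.
THRESHOLDS = {
    "cpu_pct": 70.0,      # sustained CPU utilization
    "memory_pct": 85.0,   # memory in use
    "disk_pct": 90.0,     # disk capacity consumed
    "temp_c": 80.0,       # component temperature
}

def evaluate(readings: dict) -> list:
    """Compare one server's metric readings against the thresholds."""
    return [f"{metric}={readings[metric]} exceeds {limit}"
            for metric, limit in THRESHOLDS.items()
            if readings.get(metric, 0) > limit]
```

A dedicated monitoring tool does far more (trending, forecasting, auto-remediation), but every one of them performs some version of this comparison at its core.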

Streamline reporting with software tools

Most server monitoring software tracks and notifies you of any issues with servers in your technology stack. These tools include default and custom component monitoring, automated and manual optimization features, and standard and custom alerting options.

The software sector for server monitoring covers all types of architectures as well as required depth and breadth of data collection. Here is a shortlist of server capacity monitoring software for your data center.

SolarWinds Server & Application Monitor
SolarWinds’ software provides monitoring, optimization and diagnostic tools in a central hub. You can quickly identify which server resources are at capacity in real time, use historical reporting to track trends and forecast resource purchasing. Additional functions let you diagnose and fix virtual and physical storage capacity bottlenecks that affect application health and performance.

HelpSystems Vityl Capacity Management
Vityl Capacity Management is a comprehensive capacity management offering designed to help organizations proactively manage performance and plan capacity in hybrid IT environments. It provides real-time monitoring data and historical trend reporting, which helps you understand the health and performance of your network over time.

BMC Software TrueSight Capacity Optimization
The TrueSight Capacity Optimization product helps admins plan, manage and optimize on-premises and cloud server resources through real-time and predictive features. It provides insights into multiple network types (physical, virtual or cloud) and helps you manage and forecast server usage.

VMware Capacity Planner
As a planning tool, VMware’s Capacity Planner can gather and analyze data about your servers and better forecast future usage. The forecasting and prediction functionality provides insights on capacity usage trends, as well as virtualization benchmarks based on industry performance standards.

Splunk App for Infrastructure
The Splunk App for Infrastructure (SAI) is an all-in-one tool that uses streamlined workflows and advanced alerting to monitor all network components. With SAI, you can create custom visualizations and alerts for better real-time monitoring and reporting through metric grouping and filtering based on your data center and reporting needs.


Databricks bolsters security for data analytics tool

One of the biggest challenges with data management and analytics efforts is security.

Databricks, based in San Francisco, is well aware of the data security challenge, and recently updated its Unified Analytics Platform with enhanced security controls to help organizations minimize their data analytics attack surface and reduce risks. Alongside the security enhancements, new administration and automation capabilities make the platform easier to deploy and use, according to the company.

Organizations are embracing cloud-based analytics for the promise of elastic scalability, support for more end users and improved data availability, said Mike Leone, a senior analyst at Enterprise Strategy Group. That said, greater scale, more end users and different cloud environments create myriad challenges, with security being one of them, Leone said.

“Our research shows that security is the top disadvantage or drawback to cloud-based analytics today. This is cited by 40% of organizations,” Leone said. “It’s not only smart of Databricks to focus on security, but it’s warranted.”

He added that Databricks is extending foundational security in each environment with consistency across environments and the vendor is making it easy to proactively simplify administration.

“As organizations turn to the cloud to enable more end users to access more data, they’re finding that security is fundamentally different across cloud providers,” Leone said. “That means it’s more important than ever to ensure security consistency, maintain compliance and provide transparency and control across environments.”

Additionally, Leone said that with its new update, Databricks provides intelligent automation to enable faster ramp-up times and improve productivity across the machine learning lifecycle for all involved personas, including IT, developers, data engineers and data scientists.

Gartner said in its February 2020 Magic Quadrant for Data Science and Machine Learning Platforms that Databricks Unified Analytics Platform has had a relatively low barrier to entry for users with coding backgrounds, but cautioned that “adoption is harder for business analysts and emerging citizen data scientists.”

Bringing Active Directory policies to cloud data management

Data access security is handled differently on-premises compared with how it needs to be handled at scale in the cloud, according to David Meyer, senior vice president of product management at Databricks.

Meyer said the new updates to Databricks enable organizations to more efficiently use their on-premises access control systems, like Microsoft Active Directory, with Databricks in the cloud. A member of an Active Directory group becomes a member of the same policy group with the Databricks platform. Databricks then maps the right policies into the cloud provider as a native cloud identity.

Databricks uses the open source Apache Spark project as a foundational component and provides more capabilities, said Vinay Wagh, director of product at Databricks.

“The idea is, you, as the user, get into our platform, we know who you are, what you can do and what data you’re allowed to touch,” Wagh said. “Then we combine that with our orchestration around how Spark should scale, based on the code you’ve written, and put that into a simple construct.”

Protecting personally identifiable information

Beyond just securing access to data, there is also a need for many organizations to comply with privacy and regulatory compliance policies to protect personally identifiable information (PII).

“In a lot of cases, what we see is customers ingesting terabytes and petabytes of data into the data lake,” Wagh said. “As part of that ingestion, they remove all of the PII data that they can, which is not necessary for analyzing, by either anonymizing or tokenizing data before it lands in the data lake.”

In some cases, though, there is still PII that can get into a data lake. For those cases, Databricks enables administrators to perform queries to selectively identify potential PII data records.
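Databricks hasn't published the internals of its tokenization, but the approach Wagh describes can be sketched generically. The Python snippet below is only an illustration of deterministic tokenization with a keyed hash: equal inputs map to equal tokens, so joins still work, while the raw PII never lands in the data lake. The field names and key are hypothetical:

```python
import hashlib
import hmac

# Hypothetical tokenization key; in practice keep it in a secrets manager,
# never in the data lake itself.
SECRET_KEY = b"rotate-me-regularly"

def tokenize(value: str) -> str:
    """Map a PII value to a deterministic, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict, pii_fields=("email", "name", "phone")) -> dict:
    """Tokenize known PII fields before a record lands in the data lake."""
    return {key: tokenize(val) if key in pii_fields and isinstance(val, str) else val
            for key, val in record.items()}
```

Because the same input always yields the same token, analysts can still group and join on the scrubbed columns without ever seeing the underlying values.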

Improving automation and data management at scale

Another key set of enhancements in the Databricks platform update are for automation and data management.

Meyer explained that historically, each of Databricks’ customers had essentially one workspace in which they put all their users. That model doesn’t let organizations isolate different users, however, or maintain different settings and environments for various groups.

To that end, Databricks now enables customers to have multiple workspaces to better manage and provide capabilities to different groups within the same organization. Going a step further, Databricks now also provides automation for the configuration and management of workspaces.

Delta Lake momentum grows

Looking forward, the most active area within Databricks is with the company’s Delta Lake and data lake efforts.

Delta Lake is an open source project started by Databricks and now hosted at the Linux Foundation. The core goal of the project is to enable an open standard around data lake connectivity.

“Almost every big data platform now has a connector to Delta Lake, and just like Spark is a standard, we’re seeing Delta Lake become a standard and we’re putting a lot of energy into making that happen,” Meyer said.

Other data analytics platforms ranked similarly by Gartner include Alteryx, SAS, Tibco Software, Dataiku and IBM. Databricks’ security features appear to be a differentiator.


ConnectWise threat intelligence sharing platform changes hands

Nonprofit IT trade organization CompTIA said it will assume management and operations of the Technology Solution Provider Information Sharing and Analysis Organization established by ConnectWise in August 2019.

Consultant and long-time CompTIA member MJ Shoer will remain as the TSP-ISAO’s executive director under the new arrangement. The TSP-ISAO retains its primary mission of fostering real-time threat intelligence sharing among channel partners, CompTIA said.

Nancy Hammervik, CompTIA’s executive vice president of industry relations, discussed CompTIA’s TSP-ISAO leadership role with Shoer during the CompTIA Communities and Councils Forum event this week. CompTIA conducted the event virtually after cancelling its Chicago in-person event due to the coronavirus pandemic.

Shoer said CompTIA is uniquely positioned to enhance the TSP-ISAO. “If you look at all the educational opportunities and resources that CompTIA brings to the table … those are going to be integral to this in terms of helping to further educate the world of TSPs … about the cyber threats and how to respond,” he said.

He added that CompTIA’s involvement in government policy work will contribute to the success of the threat intelligence sharing platform, as “the government is going to be key.” ISAOs were chartered by the Department of Homeland Security as a result of an executive order by former president Barack Obama in 2015.

Hammervik and Shoer also underscored that CompTIA’s commitment to vendor neutrality will help the TSP-ISAO bring together competitive companies in pursuit of a collective benefit. “We all face these threats. We have all seen some of the reports about MSPs being used as threat vectors against their clients. If we don’t … stop that, it can harm the industry from the largest member to the smallest,” Shoer said.

About 650 organizations have joined the TSP-ISAO, according to Hammervik. Membership in the organization in 2020 is free for TSP companies.

Shoer said his goal for the TSP-ISAO is to develop a collaborative platform that can share qualified, real-time and actionable threat intelligence with TSPs so they can secure their own and customers’ businesses. He said ultimately, the organization would like to automate elements of the threat intelligence sharing, but it may be a long-term goal as AI and other technologies mature.

Wipro launches Microsoft technology unit

Wipro Ltd., a consulting and business process services company based in Bangalore, India, launched a business unit dedicated to Microsoft technology.

Wipro said its Microsoft Business Unit will focus on developing offerings that use Microsoft’s enterprise cloud services. Those Wipro offerings will include:

  • Cloud Studio, which provides migration services for workloads on such platforms as Azure and Dynamics 365.
  • Live Workspace, which uses Microsoft’s Modern Workplace, Azure’s Language Understanding Intelligent Service, Microsoft 365 and Microsoft’s Power Platform.
  • Data Discovery Platform, which incorporates Wipro’s Holmes AI system and Azure.

Wipro’s move follows HCL Technologies’ launch in January 2020 of its Microsoft Business Unit and Tata Consultancy Services’ rollout in November 2019 of a Microsoft Business Unit focusing on Azure’s cloud and edge capabilities. Other large IT service providers with Microsoft business units include Accenture/Avanade and Infosys.

Other news

  • 2nd Watch, a professional services and managed cloud company based in Seattle, unveiled a managed DevOps service, which the company said lets clients take advantage of DevOps culture without having to deploy the model on their own. The 2nd Watch Managed DevOps offering includes an assessment and strategy phase, DevOps training, tool implementation based on the GitLab platform, and ongoing management. 2nd Watch is partnering with GitLab to provide the managed DevOps service.
  • MSPs can now bundle Kaseya Compliance Manager with a cyber insurance policy from Cysurance. The combination stems from a partnership between Kaseya and Cysurance, a cyber insurance agency. Cysurance’s cyber policy is underwritten by Chubb.
  • Onepath, a managed technology services provider based in Atlanta, rolled out Onepath Analytics, a cloud-based business intelligence offering for finance professionals in the SMB market. The analytics offering includes plug-and-play extract, transform and load, data visualization and financial business metrics such as EBITDA, profit margin and revenue as a percentage of sales, according to the company. Other metrics may be included, the company said, if the necessary data is accessible.
  • Avaya and master agent Telarus have teamed up to provide Avaya Cloud Office by RingCentral. Telarus will offer the unified communications as a service product to its network of 4,000 technology brokers, Avaya said.
  • Adaptive Networks, a provider of SD-WAN as a service, said it has partnered with master agent Telecom Consulting Group.
  • Spinnaker Support, an enterprise software support services provider, introduced Salesforce application management and consulting services. The company also provides Oracle and SAP application support services.
  • Avanan, a New York company that provides a security offering for cloud-based email and collaboration suites, has hired Mike Lyons as global MSP/MSSP sales director.
  • Managed security service provider High Wire Networks named Dave Barton as its CTO. Barton will oversee technology solutions and channel sales engineering for the company’s Overwatch Managed Security Platform, which is sold through channel partners, the company said.

Market Share is a news roundup published every Friday.


Updated Exchange Online PowerShell module adds reliability, speed

PowerShell offers administrators a more flexible and powerful way to perform management activities in Exchange Online. At times, PowerShell is the only way to perform certain management tasks.

But many Exchange administrators have not always felt confident in Exchange Online PowerShell’s abilities, especially when dealing with thousands of mailboxes and complicated actions. Microsoft recently released the Exchange Online PowerShell V2 module — also known as the ExchangeOnlineManagement module — to reduce potential management issues.

New cmdlets attempt to curb PowerShell problems

Moving the messaging platform to the cloud can frustrate administrators who attempt to work with the system over an unreliable remote PowerShell connection to Microsoft’s hosted email service. Microsoft said the latest Exchange Online PowerShell module, version 0.3582.0, brings enhancements and new cmdlets to alleviate performance and reliability issues, such as session timeouts or poor error handling during complex operations.

Where a spotty connection could cause errors or scripts to fail with the previous module, Microsoft added new cmdlets in the Exchange Online PowerShell V2 module to restart and attempt to run a script where it left off before issues started.

Microsoft added 10 new cmdlets in the new Exchange Online PowerShell module. One new cmdlet, Connect-ExchangeOnline, replaces two older cmdlets: Connect-EXOPSSession and New-PSSession.

Microsoft took nine additional cmdlets in the older module, updated them to use REST APIs and gave them new names using the EXO prefix:

  • Get-EXOMailbox
  • Get-EXORecipient
  • Get-EXOCASMailbox
  • Get-EXOMailboxPermission
  • Get-EXORecipientPermission
  • Get-EXOMailboxStatistics
  • Get-EXOMailboxFolderStatistics
  • Get-EXOMailboxFolderPermission
  • Get-EXOMobileDeviceStatistics

Microsoft said the new REST-based cmdlets will perform significantly better and faster than the previous PowerShell module. The REST APIs offer a more stable connection to the Exchange Online back end, making most functions more responsive and able to operate in a stateless session.

Given that administrators develop complex PowerShell scripts for their management needs, they needed more stability from Microsoft’s end to ensure these tasks execute properly. Microsoft supported those development efforts by introducing better script failure handling, with functionality that retries and resumes from the point of failure. Previously, the only option for administrators was to rerun their scripts and hope they worked the next time.
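Microsoft hasn't documented the internals of this retry-and-resume behavior, but the general checkpoint pattern behind it is easy to picture. The following Python sketch (the file name and helper are hypothetical) persists progress after each item, so rerunning after a failure skips the work already completed:

```python
import json
import os

CHECKPOINT = "checkpoint.json"  # hypothetical progress file

def process_all(items, handle):
    """Run handle() over items, persisting progress after each one so a
    rerun after a failure resumes where the previous run stopped."""
    done = set()
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = set(json.load(f))
    for item in items:
        if item in done:
            continue                      # already completed in a prior run
        handle(item)
        done.add(item)
        with open(CHECKPOINT, "w") as f:  # checkpoint after every item
            json.dump(sorted(done), f)
```

The same idea applies to a bulk mailbox operation: completed identities are recorded as the script goes, and a rerun picks up at the first unprocessed one instead of starting over.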

There are cases where querying certain properties during script execution can hurt the script’s overall response time and performance, given the size of the objects and their properties. To optimize these scenarios, Microsoft introduced a way for a PowerShell process running against Exchange Online to retrieve only the relevant object properties needed during execution. An example would be retrieving the mailbox properties most likely to be used, such as mailbox statistics, identities and quotas.

Microsoft also removed the need for the Select parameter typically used to determine which properties are needed as part of the result set. This tidies scripts and eliminates unnecessary syntax, as shown in the examples below.


# Old approach: retrieve full objects, then filter with Select
Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Select WhenCreated, WhenChanged | Export-CSV c:\temp\ExportedMailbox.csv

# New approach: request only the needed property sets up front
Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged | Export-CSV c:\temp\ExportedMailbox.csv

How to get the new Exchange Online PowerShell module

To start using the latest Exchange Online PowerShell capabilities, install or upgrade the ExchangeOnlineManagement module. From a PowerShell prompt running with administrator rights, execute the appropriate command:

# New installation, then verify the module loads:
Install-Module -Name ExchangeOnlineManagement
Import-Module ExchangeOnlineManagement; Get-Module ExchangeOnlineManagement

# Upgrade an existing installation:
Update-Module -Name ExchangeOnlineManagement
Exchange Online PowerShell module install
New Exchange Online PowerShell module users can use the Install-Module command to start working with the new cmdlets.

Exchange Online PowerShell V2 module commands offer speed boost

IT pros who use the new Exchange Online PowerShell module should see improved performance and faster response time.

We can run a short test to compare how the current version stacks up to the previous version when we run commands that provide the same type of information.

First, let’s run the following legacy command to retrieve mailbox information from an organization:

Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Select DisplayName, ProhibitSendReceiveQuota, WhenCreated, WhenChanged

The command completes in 2.3890 seconds.

Exchange Online PowerShell mailbox command
One typical use of PowerShell on Exchange Online is to use the Get-Mailbox cmdlet to retrieve information about mailboxes used by members of the organization.

This is the new version of the command that provides the same set of information, but in a slightly different format:

$RESTResult = Measure-Command { $Mbx = Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged }

The command completes in 1.29832 seconds, or almost half the time. Extrapolate these results to an organization with many thousands of users and mailboxes in Exchange Online and you can begin to see the benefit when a script takes half as much time to run.

Use the following command to get mailbox details for users in the organization:

Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged
Exchange Online mailbox details
The updated Get-ExoMailbox cmdlet fetches detailed information for a mailbox hosted in Exchange Online.

The following command exports a CSV file with details of mailboxes with additional properties listed:

Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged | Export-CSV c:\temp\ExportedMailbox.csv

Be aware of the Exchange Online PowerShell module provisions

There are several caveats Exchange administrators should know before they use the latest ExchangeOnlineManagement module:

  • The new Exchange Online PowerShell module only works on Windows PowerShell 5.1, with support coming for the cross-platform version of PowerShell.
  • Data results returned by the latest cmdlets are in alphabetic order, not chronological order, so some results may require additional formatting or adjustment.
  • The new module only supports OAuth 2.0 authentication, but the client machine will need basic authentication enabled to use the older remote PowerShell cmdlets.
  • Administrators should use the Azure AD GUID for account identity.

How to give Microsoft feedback for additional development

As Microsoft continues to improve the module, administrators will see more capabilities that make managing their Exchange Online environment with PowerShell a much better experience.

There are three avenues for users to provide feedback to Microsoft on the new PowerShell commands. The first is to report bugs or other issues encountered while running scripts from within PowerShell. To do this, run the following command:

Connect-ExchangeOnline -EnableErrorReporting -LogDirectoryPath <Path to store log file> -LogLevel All

The second option is to post a message on the Office 365 UserVoice forum.

Lastly, users can file an issue, or check on the status of one, with the Exchange Online PowerShell commands on the Microsoft Docs GitHub site.
