Tag Archives: managed

Logically acquires Carolinas IT in geographic expansion

Logically Inc., a managed service provider based in Portland, Maine, has acquired Carolinas IT, a North Carolina MSP with cloud, security and core IT infrastructure skills.

The deal continues Logically’s geographic expansion. The company launched in April, building upon the 2018 merger of Portland-based Winxnet Inc. and K&R Network Solutions of San Diego. Logically in August added a New York metro area company through its purchase of Sullivan Data Management, an outsourced IT services firm.

Carolinas IT, based in Raleigh, provides a base from which Logically can expand in the region, Logically CEO Christopher Claudio said. “They are a good launching pad,” he said.

Carolinas IT’s security and compliance practice also attracted Logically’s interest. Claudio called security and compliance a growing market and one that will continue to expand, given the challenges organizations face with regulatory obligations and the risk of data breaches.

“You can’t be an MSP without a security and compliance group,” he said.

Carolinas IT’s security services include risk assessment, HIPAA consulting and auditing, security training and penetration testing. The company’s cloud offerings include Office 365 and private hosted cloud services. And its professional services personnel possess certifications from vendors such as Cisco, Citrix, Microsoft, Symantec and VMware.

Claudio cited Carolinas IT’s “depth of talent,” recurring revenue and high client retention rate as some of the business’s favorable attributes.

Mark Cavaliero, Carolinas IT’s president and CEO, will remain at the company for the near term but plans to move on and will not have a long-term leadership role, Claudio said. But Cavaliero will have an advisory role, Claudio added, noting “he has built a great business.”

Logically’s Carolinas IT purchase continues a pattern of companies pulling regional MSPs together to create national service providers. Other examples include Converge Technology Solutions Corp. and Mission.


Effectively implement Azure Ultra Disk Storage

In August 2019, Microsoft announced the general availability of a new Managed Disks tier: Ultra Disk Storage. The new offering represents a significant step up from the other Managed Disks tiers, offering unprecedented performance and sub-millisecond latency to support mission-critical workloads.

The Ultra Disk tier addresses organizations reluctant to move data-intensive workloads to the cloud because of throughput and latency requirements.

According to Microsoft, Azure Ultra Disk Storage makes it possible to support these workloads by delivering next-generation storage technologies geared toward performance and scalability, while providing you with the convenience of a managed cloud service.

Understanding Azure Ultra Disk

Managed Disks is an Azure feature that simplifies disk management for infrastructure-as-a-service storage. A managed disk is a virtual hard disk that works much like a physical disk, except that the storage is abstracted and virtualized. Azure stores the disks as page blobs, in the form of random I/O storage objects.

To use managed disks, you only have to provision the necessary storage resources and Azure does the rest, deploying and managing the drives.

Azure offers four Managed Disks tiers: Standard HDD, Standard SSD, Premium SSD and the new Ultra Disk Storage, which also builds on SSD technologies. Ultra Disk SSDs support enterprise-grade workloads driven by systems such as MongoDB, SQL Server, SAP HANA and high-performing, mission-critical applications. The latest storage tier comes with configurable performance attributes, making it possible to adjust IOPS and throughput to meet evolving performance requirements.

Azure Ultra Disk Storage implements a distributed block storage architecture that uses NVMe to support I/O-intensive workloads. NVMe is a host controller interface and storage protocol that accelerates data transfers between data center systems and SSDs over a computer’s high-speed PCIe bus.

Along with the new storage tier, Azure introduced the virtual disk client (VDC), a simplified client that runs on the compute host. The client has full knowledge of the virtual disk metadata mappings in the Azure Ultra Disk cluster. This knowledge enables the client to communicate directly with the storage servers, bypassing the load balancers and front-end servers often used to establish initial disk connections.

With earlier Managed Disk storage tiers, the route was much less direct. For example, Azure Premium SSD storage is dependent on the Azure Blob storage cache. As a result, the compute host runs the Azure Blob Cache Driver, rather than the VDC. The driver communicates with a storage front end, which, in turn, communicates with partition servers. The partition servers then talk to the stream servers, which connect to the storage devices.

The VDC, on the other hand, supports a more direct connection, minimizing the number of layers that read and write operations traverse, reducing latency and increasing performance.

Deploying Ultra Disk Storage

Azure Ultra Disk Storage lets you configure capacity, IOPS and throughput independently, providing the flexibility necessary to meet specific performance requirements. For capacity, you can choose a disk size ranging from 4 GiB to 64 TiB, and you can provision the disks with up to 300 IOPS per GiB, to a maximum of 160,000 IOPS per disk. For throughput, Azure supports up to 2,000 MB per second, per disk.
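
As an illustration, provisioning an ultra disk with explicit performance settings might look like the following Az PowerShell sketch. The resource group, disk name, zone and values here are placeholders, and ultra disks must be created in a specific availability zone:

    # Define a 16-GiB ultra disk provisioned at 4,800 IOPS (the 300 IOPS per GiB
    # ceiling for that size) and 200 MBps of throughput.
    $config = New-AzDiskConfig -Location 'EastUS2' -Zone '2' -CreateOption Empty `
        -DiskSizeGB 16 -SkuName UltraSSD_LRS `
        -DiskIOPSReadWrite 4800 -DiskMBpsReadWrite 200

    # Create the managed disk from that configuration.
    New-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myUltraDisk' -Disk $config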

Ultra Disk Storage makes it possible to utilize a VM’s maximum I/O limits using only a single ultra disk, without needing to stripe multiple disks. You can also configure disk IOPS or throughput without detaching the disk from the VM or restarting the VM. Azure automatically implements the new performance settings in less than an hour.
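
Dialing a live disk's performance up or down is a matter of submitting new values. A sketch with the same placeholder names, here lowering IOPS to fit a smaller VM's cap:

    # Adjust provisioned performance in place; no detach or VM restart required.
    $update = New-AzDiskUpdateConfig -DiskIOPSReadWrite 3200 -DiskMBpsReadWrite 160

    Update-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myUltraDisk' -DiskUpdate $update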

To deploy Ultra Disk Storage, you can use the Azure Resource Manager, Azure CLI or PowerShell. Ultra Disk Storage is currently available in three Azure regions: East US 2, North Europe and Southeast Asia. Microsoft plans to extend to other regions, but the company has not provided specific timelines. In addition, Ultra Disk Storage supports only the ESv3 and DSv3 Azure VMs.

Azure Ultra Disk handles data durability behind the scenes. The service is built on Azure’s locally redundant storage (LRS), which maintains three copies of the data within the same availability zone. If an application writes data to the storage service, Azure will acknowledge the operation only after the LRS system has replicated the data.

When implementing Ultra Disk Storage, you must consider the throttling limits Azure places on resources. For example, you could configure your VM with a 16-GiB ultra disk at 4,800 IOPS. However, if you’re working with a Standard_D2s_v3 VM, you won’t be able to take full advantage of the storage because the VM itself is throttled to 3,200 IOPS. To realize the full benefits of Ultra Disk Storage, you need to pair the disk with a VM size whose limits can support its capabilities.

Where Ultra Disk fits in the Managed Disk lineup

Azure Managed Disks simplify disk management by handling deployment and management details behind the scenes. Currently, Azure provides the following four storage options for accommodating different workloads.

The Standard HDD tier is the most basic tier, providing a reliable, low-cost option that supports workloads in which IOPS, throughput and latency are not critical to application delivery. For this reason, the Standard HDD tier is well suited to backup and other non-critical workloads. The maximum disk size for this tier is 32,767 GiB, the maximum IOPS is 2,000 and the maximum throughput is 500 MiB per second.

The Standard solid-state drive tier offers a step up from the Standard HDD tier to support workloads that require better consistency, availability, reliability and latency. The Standard SSD tier is well suited to web servers and lightly used applications, as well as development and testing environments. The maximum disk size for this tier is 32,767 GiB, the maximum IOPS is 6,000 and the maximum throughput is 750 MiB per second.

Prior to the release of the Ultra Disks tier, the Premium SSD tier was the top offering in the Managed Disks stack. The Premium tier is geared toward production and performance-sensitive workloads that require greater performance than the lower tiers. This tier can benefit mission-critical applications that support I/O-intensive workloads. The maximum disk size for this tier is 32,767 GiB, the maximum IOPS is 20,000 and the maximum throughput is 900 MiB per second.

The Ultra Disks tier is the newest Managed Disks service available to customers. The new tier takes performance to the next level, delivering high IOPS and throughput, with consistently low latency. Customers can dynamically change performance settings without restarting their VMs. The Ultra Disks tier targets data-intensive applications such as SAP HANA, Oracle Database and other transaction-heavy workloads. The maximum disk size for this tier is 65,536 GiB, the maximum IOPS is 160,000 and the maximum throughput is 2,000 MiB per second.

Because Ultra Disk Storage is a new Azure service, it comes with several limitations. The service is available in only a few regions and works with only a couple types of VMs. Additionally, you cannot attach an ultra disk to a VM running in an availability set. The service also does not support snapshots, VM scale sets, Azure disk encryption, Azure Backup or Azure Site Recovery. You can’t convert an existing disk to an ultra disk, but you can migrate the data from an existing disk to an ultra disk.

Despite these limitations, Azure Ultra Disk Storage could prove to be an asset to organizations that plan to move their data-intensive applications to the cloud. No doubt Microsoft will continue to improve the service, extending its reach to other regions and addressing the lack of support for other Azure data services. Until that happens, some IT teams might insist that these issues be resolved before they consider migrating their workloads. In the meantime, Ultra Disk Storage promises to be a service worth watching, especially for organizations already committed to the Azure ecosystem.


ConnectWise-Continuum buyout shakes up MSP software market

ConnectWise, a provider of software for managed services providers, has acquired its competitor Continuum.

The Continuum acquisition was announced today by ConnectWise CEO Jason Magee at his company’s annual user conference, IT Nation Connect, running from Oct. 30 to Nov. 1 in Orlando, Fla. The buyout, which is poised to shake up the MSP software market, accompanies the acquisition of ITBoost, an IT documentation vendor. ConnectWise also revealed a strategic partnership with partner relationship management software provider Webinfinity to help ConnectWise partners manage their vendor alliances.

“[The Continuum acquisition] allows ConnectWise to address the growing pains of our partners and some of those pains around talent and skills shortages … [and] continues to accelerate ConnectWise in the cybersecurity area,” Magee said in a press briefing.

ConnectWise and Continuum are owned by private equity investment firm Thoma Bravo. Thoma Bravo purchased ConnectWise in February. The private equity firm also owns MSP software players (and ConnectWise-Continuum competitors) SolarWinds and Barracuda Networks.

ConnectWise’s platform spans professional services automation, remote monitoring and management (RMM), and ‘configure, price and quote’ software. Continuum’s development of a global security operations center (SOC), network operations center and help desk technologies will be “complementary” to what ConnectWise does today, Magee said.

The future of ConnectWise and Continuum’s RMM platforms, ConnectWise Automate and Continuum Command, remains in question. Magee said the respective RMM platforms “will be maintained [separately] at this point.” After the IT Nation Connect 2019 event, the companies will begin working on their overall business plan and joint roadmaps, “which to this point we have not been able to dig into much due to regulatory restraints around getting government approval of making the deal happen and so on,” he said.

Magee suggested that in the short term ConnectWise-Continuum partners could see some innovations introduced to the Automate and Command platforms. He pointed to a few potential examples, such as making Command’s LogMeIn remote control available to ConnectWise partners and adding features of Command’s automation and patching capabilities to the Automate platform. He didn’t specify the timing around implementing any changes but said partners could expect to see some in early 2020.

Although post-acquisition integration is still in the planning stage, Magee said Continuum’s CFO Geoffrey Willison will be brought on as COO at ConnectWise, and Continuum’s senior vice president of global service delivery, Tasos Tsolakis, will join as the senior vice president of service delivery “over all ConnectWise going forward.” Additionally, Magee said ConnectWise will hire a new CFO for the combined business.

“Until we have the rest of the business plan done, it is business as usual,” Magee said.

Addressing two types of MSPs

Magee said that the ConnectWise-Continuum acquisition also serves to benefit “two mindsets” that have emerged among MSPs.

The first mindset is of the do-it-yourself MSPs that build their practices by partnering, buying platforms and tools, and hiring teams to manage and service their customers. The second mindset is of “the companies and people [that] just want to go hire the general contractor, and those people are asking for someone else to manage [their customers] for them, take the hassle out of having to do all that stuff within their company or themselves.”

“This opens up a whole new world from a ConnectWise standpoint,” Magee said.

For a few years, ConnectWise has been establishing a ‘connected ecosystem’ of third-party software integrations around its platform, and the company will remain committed to that strategy. “We are still committed to the power of choice for our partners and will continue with our API-first mindset, which allows for continued partnership with the 260 and growing vendor partnerships that we have out there,” Magee said. “These are all great options for those [MSPs] that like to do it themselves.”

When asked if Magee anticipated challenges in merging the ConnectWise and Continuum communities of MSP partners, he said he didn’t expect any problems but would address any issues that may crop up to ensure “we are doing right by the communities.”

“At the end of the day, there is so much good and greatness that comes from bringing these two together that the partner communities are going to benefit tremendously.”

ITBoost, Webinfinity and cybersecurity initiative

In a move similar to MSP software vendor Kaseya’s buyout of IT Glue, ConnectWise is purchasing documentation provider ITBoost. ConnectWise said the IT documentation tool will be integrated with its product suite.

Magee said the Webinfinity partnership will help ConnectWise launch ConnectWise Engage, a tool for channel firms for simplifying vendor relationship management. ConnectWise Engage aims to give partners “the ability to receive enablement content and material or solution stack information” from their supplier partners, he noted. Additionally, ConnectWise said the Webinfinity alliance will help centralize vendor-partner touch points for areas such as deal registration, multivendor support issues, co-marketing and SKU management.

ConnectWise today also revealed a cybersecurity initiative, which Magee is calling ‘Fight Back,’ to encourage vendors, platform providers, MSPs and MSP customers to up their security awareness and capabilities.

Magee noted that ConnectWise recently achieved SOC 2 Type 2 certification and will mandate multifactor and two-factor authentication across its platforms by early 2020. The company in August rolled out its Technology Solution Provider Information Sharing and Analysis Organization, a forum for MSPs to share threat intelligence and best practices. “This is an area that ConnectWise for years has strived to be better. We are not perfect by any means, but we strive to get better,” he said.


Experts on demand: Your direct line to Microsoft security insight, guidance, and expertise – Microsoft Security

Microsoft Threat Experts is the managed threat hunting service within Microsoft Defender Advanced Threat Protection (ATP) that includes two capabilities: targeted attack notifications and experts on demand.

Today, we are extremely excited to share that experts on demand is now generally available and gives customers direct access to real-life Microsoft threat analysts to help with their security investigations.

With experts on demand, Microsoft Defender ATP customers can engage directly with Microsoft security analysts to get guidance and insights needed to better understand, prevent, and respond to complex threats in their environments. This capability was shaped through partnership with multiple customers across various verticals by investigating and helping mitigate real-world attacks. From deep investigation of machines that customers had a security concern about, to threat intelligence questions related to anticipated adversaries, experts on demand extends and supports security operations teams.

The other Microsoft Threat Experts capability, targeted attack notifications, delivers alerts that are tailored to organizations and provides as much information as can be quickly delivered to bring attention to critical threats in their network, including the timeline, scope of breach, and the methods of intrusion. Together, the two capabilities make Microsoft Threat Experts a comprehensive managed threat hunting solution that provides an additional layer of expertise and optics for security operations teams.

Experts on the case

By design, the Microsoft Threat Experts service has as many use cases as there are unique organizations with unique security scenarios and requirements. One particular case showed how an alert in Microsoft Defender ATP led to informed customer response, aided by a targeted attack notification that progressed to an experts on demand inquiry, resulting in the customer fully remediating the incident and improving their security posture.

In this case, Microsoft Defender ATP endpoint protection capabilities recognized a new malicious file in a single machine within an organization. The organization’s security operations center (SOC) promptly investigated the alert and developed the suspicion it may indicate a new campaign from an advanced adversary specifically targeting them.

Microsoft Threat Experts, who are constantly hunting on behalf of this customer, had independently spotted and investigated the malicious behaviors associated with the attack. With knowledge about the adversaries behind the attack and their motivation, Microsoft Threat Experts sent the organization a bespoke targeted attack notification, which provided additional information and context, including the fact that the file was related to an app that was targeted in a documented cyberattack.

To create a fully informed path to mitigation, experts pointed to information about the scope of compromise, relevant indicators of compromise, and a timeline of observed events, which showed that the file executed on the affected machine and proceeded to drop additional files. One of these files attempted to connect to a command-and-control server, which could have given the attackers direct access to the organization’s network and sensitive data. Microsoft Threat Experts recommended full investigation of the compromised machine, as well as the rest of the network for related indicators of attack.

Based on the targeted attack notification, the organization opened an experts on demand investigation, which allowed the SOC to have a line of communication and consultation with Microsoft Threat Experts. Microsoft Threat Experts were able to immediately confirm the attacker attribution the SOC had suspected. Using Microsoft Defender ATP’s rich optics and capabilities, coupled with intelligence on the threat actor, experts on demand validated that there were no signs of second-stage malware or further compromise within the organization. Since, over time, Microsoft Threat Experts had developed an understanding of this organization’s security posture, they were able to share that the initial malware infection was the result of a weak security control: allowing users to exercise unrestricted local administrator privilege.

Experts on demand in the current cybersecurity climate

On a daily basis, organizations have to fend off the onslaught of increasingly sophisticated attacks that present unique security challenges: supply chain attacks, highly targeted campaigns and hands-on-keyboard attacks. With Microsoft Threat Experts, customers can work with Microsoft to augment their security operations capabilities and increase confidence in investigating and responding to security incidents.

Now that experts on demand is generally available, Microsoft Defender ATP customers have an even richer way of tapping into Microsoft’s security experts and getting access to the skills, experience, and intelligence necessary to face adversaries.

Experts on demand provides insights into attacks, technical guidance on next steps, and advice on risk and protection. Experts can be engaged directly from within the Microsoft Defender Security Center, so they are part of the existing security operations experience.

We are happy to bring experts on demand within reach of all Microsoft Defender ATP customers. Start your 90-day free trial via the Microsoft Defender Security Center today.

Learn more about Microsoft Defender ATP’s managed threat hunting service here: Announcing Microsoft Threat Experts.


Cloud database services multiply to ease admin work by users

NEW YORK — Managed cloud database services are mushrooming, as more database and data warehouse vendors launch hosted versions of their software that offer elastic scalability and free users from the need to deploy, configure and administer systems.

MemSQL, TigerGraph and Yellowbrick Data all introduced cloud database services at the 2019 Strata Data Conference here. In addition, vendors such as Actian, DataStax and Hazelcast said they soon plan to roll out expanded versions of managed services they announced earlier this year.

Technologies like the Amazon Redshift and Snowflake cloud data warehouses have shown that there’s a viable market for scalable database services, said David Menninger, an analyst at Ventana Research. “These types of systems are complex to install and configure — there are many moving parts,” he said at the conference. With a managed service in the cloud, “you simply turn the service on.”

Menninger sees cloud database services — also known as database as a service (DBaaS) — as a natural progression from database appliances, an earlier effort to make databases easier to use. Like appliances, the cloud services give users a preinstalled and preconfigured set of data management features, he said. On top of that, the database vendors run the systems for users and handle performance tuning, patching and other administrative tasks.

Overall, the growing pool of DBaaS technologies provides good options “for data-driven companies needing high performance and a scalable, fully managed analytical database in the cloud at a reasonable cost,” said William McKnight, president of McKnight Consulting Group.

Database competition calls for cloud services

For database vendors, cloud database services are becoming a must-have offering to keep up with rivals and avoid being swept aside by cloud platform market leaders AWS, Microsoft and Google, according to Menninger. “If you don’t have a cloud offering, your competitors are likely to eat your lunch,” he said.

The Strata Data Conference was held from Sept. 23 to 26 in New York City.

Todd Blaschka, TigerGraph’s chief operating officer, also pointed to the user adoption of the Atlas cloud service that NoSQL database vendor MongoDB launched in 2016 as a motivating factor for other vendors, including his company. “You can see how big of a revenue generator that has been,” Blaschka said. Services like Atlas “allow more people to get access [to databases] more quickly,” he noted.

Blaschka said more than 50% of TigerGraph’s customers already run its namesake graph database in the cloud, using a conventional version that they have to deploy and manage themselves. But with the company’s new TigerGraph Cloud service, users “don’t have to worry about knowing what a graph is or downloading it,” he said. “They can just build a prototype database and get started.”

TigerGraph Cloud is initially available in the AWS cloud; support will also be added for Microsoft Azure and then Google Cloud Platform (GCP) in the future, Blaschka said.

Yellowbrick Data made its Yellowbrick Cloud Data Warehouse service generally available on all three of the cloud platforms, giving users a DBaaS alternative to the on-premises data warehouse appliance it released in 2017. Later this year, Yellowbrick also plans to offer a companion disaster recovery service that provides cloud-based replicas of on-premises or cloud data warehouses.

More cloud database services on the way

MemSQL, one of the vendors in the NewSQL database category, detailed plans for a managed cloud service called Helios, which is currently available in a private preview release on AWS and GCP. Azure support will be added next year, said Peter Guagenti, MemSQL’s chief marketing officer.

About 60% of MemSQL’s customers run its database in the cloud on their own now, Guagenti said. But he added that the company, which primarily focuses on operational data, was waiting for the Kubernetes StatefulSets API object for managing stateful applications in containers to become available in a mature implementation before launching the Helios service.

Actian, which introduced a cloud service version of its data warehouse platform on AWS last March, said it will make the Avalanche service available on Azure this fall and on GCP at a later date.

DataStax, which offers a commercial version of the Cassandra open source NoSQL database, said it’s looking to make a cloud-native platform called Constellation and a managed version of Cassandra that runs on top of it generally available in November. The new technologies, which DataStax announced in May, will initially run on GCP, with support to follow on AWS and Azure.

Also, in-memory data grid vendor Hazelcast plans in December to launch a version of its Hazelcast Cloud service for production applications. The Hazelcast Cloud Dedicated edition will be deployed in a customer’s virtual private cloud instance, but Hazelcast will configure and maintain systems for users. The company released free and paid versions of the cloud service for test and development uses in March on AWS, and it also plans to add support for Azure and GCP in the future.

Managing managed database services vendors

Bayer AG’s Bayer Crop Science division, which includes the operations of Monsanto following Bayer’s 2018 acquisition of the agricultural company, uses managed database services on Teradata data warehouses and Oracle’s Exadata appliance. Naghman Waheed, data platforms lead at Bayer Crop Science, said the biggest benefit of both on-premises and cloud database services is offloading routine administrative tasks to a vendor.

“You don’t have to do work that has very little value,” Waheed said after speaking about a metadata management initiative at Bayer in a Strata session. “Why would you want to have high-value [employees] doing that work? I’d rather focus on having them solve creative problems.”

But he said there were some startup issues with the managed services, such as standard operating procedures not being followed properly. His team had to work with Teradata and Oracle to address those issues, and one of his employees continues to keep an eye on the vendors to make sure they live up to their contracts.

“We ultimately are the caretaker of the system,” Waheed said. “We do provide guidance — that’s still kind of our job. We may not do the actual work, but we guide them on it.”


Managed services companies remain hot M&A ticket

Managed services companies continue to prove popular targets for investment, with more merger and acquisition deals surfacing this week.

Those transactions included private equity firm Lightview Capital making a strategic investment in Buchanan Technologies; Siris, a private equity firm, agreeing to acquire TPx Communications; and IT Solutions Consulting Inc. buying SecurElement Infrastructure Solutions.

Those deals follow private equity firm BC Partners’ agreement last week to acquire Presidio, an IT solutions provider with headquarters in New York. That transaction, valued at $2.1 billion, is expected to close in the fourth quarter of 2019.

More than 30 transactions involving managed service providers (MSPs) and IT service firms have closed thus far in 2019. This year’s deals mark a continuation of the high level of merger and acquisition (M&A) activity that characterized the MSP market in 2018. Economic uncertainty may yet dampen the enthusiasm for acquisitions, but recession concerns don’t seem to be having an immediate impact.

Seth Collins, managing director at Martinwolf, an M&A advisory firm based in Scottsdale, Ariz., said trade policies and recession talk have brought some skepticism to the market. That said, the MSP market hasn’t lost any steam, according to Collins.

“We haven’t seen a slowdown in activity,” he said. The LMM Group at Martinwolf represented Buchanan Technologies in the Lightview Capital transaction.

Collins said the macroeconomic environment isn’t affecting transaction multiples or valuations. “Valuations aren’t driven by uncertainty; they’re driven by the quality of the asset,” he noted.

Finding the right partner

Buchanan Technologies is based in Grapevine, Texas, and operates a Canadian headquarters in Mississauga, Ont. The company’s more than 500 consultants, engineers and architects provide cloud services, managed services and digital transformation, among other offerings.

A spokesman for Lightview Capital said Buchanan Technologies manages on-premises environments, private clouds and public cloud offerings, such as AWS, IBM Cloud and Microsoft Azure. The company focuses on the retail, manufacturing, education, and healthcare and life sciences verticals.

Collins said Buchanan Technologies founder James Buchanan built a solid MSP over the course of 30 years and had gotten to the point where he would consider a financial partner able to take the company to the next level.

“As it turned out, Lightview was that partner,” Collins added, noting the private equity firm’s experience with other MSPs, such as NexusTek.

The Siris-TPx deal, meanwhile, also involves a private equity investor and a long-established services provider. TPx, a 21-year-old MSP based in Los Angeles, provides managed security, managed WAN, unified communications and contact center offerings. The companies said the deal will provide the resources TPx needs to “continue the rapid growth” it is experiencing in unified communications as a service, contact center as a service and managed services.

Siris has agreed to purchase TPx from its investors, which include Investcorp and Clarity.

“Investcorp and Clarity have been invested with TPx for more than 15 years, and they were ready to monetize their investment,” a spokeswoman for TPx said.

IT Solutions Consulting’s acquisition of SecurElement Infrastructure Solutions brings together two MSPs in the greater Philadelphia area.

The companies will pool their resources in areas such as security. IT Solutions offers network and data security through its ITSecure+ offering, which includes antivirus, email filtering, advanced threat protection, encryption and dark web monitoring. A spokeswoman for IT Solutions said SecurElement’s security strategy aligns with IT Solutions’ approach and also provides “expertise in a different stack of security tools.”

The combined company will also focus on private cloud, hybrid cloud and public cloud services, with a particular emphasis on Office 365, the spokeswoman said.

IT Solutions aims to continue its expansion in the Philadelphia area and the broader mid-Atlantic region through hiring, new office openings and acquisitions.

“We have an internal sales force that will continue our organic growth efforts, and our plan is to continue our acquisition strategy of one to two transactions per year,” she said.

Managed services companies continue to consolidate in an active M&A market.

VMware arms cloud partners with new tools

Ahead of the VMworld 2019 conference, VMware has unveiled a series of updates for its cloud provider partners.

The VMware Cloud Provider Platform now features new tools to enhance the delivery of hybrid cloud offerings and differentiated cloud services, the vendor said. Additionally, VMware said it is enabling cloud providers to target the developer community with their services.

“Customers are looking for best-of-breed cloud that addresses their specific application requirements. … In this world, where there are multiple types of clouds, customers are looking to accelerate the deployment of the applications, and, when they are looking at cloud, what they are looking for is flexibility —  flexibility so that they can choose a cloud that best fits their workload requirements. In many ways, the clouds have to adapt to the application requirements,” said Rajeev Bhardwaj, vice president of products for the cloud provider software business unit at VMware.

Highlights of the VMware updates include the following:

  • The latest version of the vendor’s services delivery platform, VMware vCloud Director 10, now provides a centralized view for hosted private and multi-tenant clouds. Partners can also tap a new “intelligent workload placement” capability for placing “workloads on the infrastructure that best meets the workload requirements,” Bhardwaj said.
  • To help partners differentiate their services, VMware introduced a disaster-recovery-as-a-service program for delivering DRaaS using vCloud Availability; an object storage extension for vCloud Director to deliver S3-compliant object storage services; and a backup certification to certify backup vendors in vCloud Director-based multi-tenant environments, VMware said. Cohesity, Commvault, Dell EMC, Rubrik and Veeam have completed the backup certification.
  • Cloud provider partners can offer containers as a service via VMware Enterprise PKS, a container orchestration product. The update enables “our cloud providers to move up the stack. So, instead of offering just IaaS … they can start targeting new workloads,” Bhardwaj said. VMware will integrate the Cloud Provider Platform with Bitnami, which develops a catalog of apps and development stacks that can be rapidly deployed, he said. The Bitnami integration can be combined with Enterprise PKS to support developer and DevOps customers, attracting workloads such as test/dev environments onto clouds, according to VMware.

Bhardwaj noted that the VMware Cloud Provider Program has close to 4,300 partners today. Those partners span more than 120 countries and collectively support more than 10 million workloads. VMware’s Cloud Verified partners, which offer VMware software-defined data center and value-added services, have grown to more than 60 globally, VMware noted.

Managed service providers are a growing segment within the VMware Cloud Provider Program (VCPP), Bhardwaj added.

“As the market is shifting more and more toward SaaS and … subscription services, what we are seeing is more and more different types of partners” join VCPP, he said.

Partner businesses include solution providers, systems integrators and strategic outsourcers. They typically don’t build their own clouds, but “want to take cloud services from VMware as a service and become managed service providers,” he said.

Other news

  • Rancher Labs, an enterprise container management vendor, rolled out its Platinum Partner Program. Targeting partners with Kubernetes expertise, the program provides lead and opportunity sharing programs, joint marketing funds and options for co-branded content, the company said. Partners must meet a series of training requirements to qualify for the program.
  • Quantum Corp., a storage and backup vendor based in San Jose, Calif., updated its Alliance Partner Program with a new deal registration application, an expanded online training initiative and a redesigned partner portal. The deal registration component, based on Vartopia’s deal registration offering, provides a dashboard to track sales activity, the deal funnel and wins, according to Quantum. The online training for sales reps and engineers is organized by vertical market, opportunities and assets. The company also offers new options for in-person training.
  • Quisitive Technology Solutions Inc., a Microsoft solutions provider based in Toronto, launched a Smart Start Workshop for Microsoft Teams.
  • MSP software vendor Continuum cut the ribbon on a new security operations center (SOC). Located in Pittsburgh, the SOC will bolster the availability of cybersecurity talent, threat detection and response, and security monitoring for Continuum MSP partners, the vendor said.
  • Technology vendor Honeywell added Consultare America LLC and Silver Touch Technologies to its roster of Guided Work Solutions resellers. A voice-directed productivity product, Guided Work Solutions software targets small and medium-sized distribution centers.
  • Sify Technologies Ltd., an information and communications technology provider based in Chennai, India, aims to bring its services to Europe through a partnership with ZSAH Managed Technology Services. The alliance provides a “broader consulting practice” to the United Kingdom market, according to Sify.
  • US Signal, a data center services provider based in Grand Rapids, Mich., added several features to its Zerto-based disaster recovery as a service offering. Those include self-management, enterprise license mobility, multi-cloud replication and stretch layer 2 failover.
  • Dizzion, an end user cloud provider based in Denver, introduced a desktop-as-a-service offering for VMware Cloud on AWS customers.
  • LaSalle Solutions, a division of Fifth Third Bank, said it has been upgraded to Elite Partner Level status in Riverbed’s channel partner program, Riverbed Rise.
  • FTI Consulting Inc., a business advisory firm, said its technology business segment has launched new services around its RelativityOne Data Migration offering. The services include migration planning, data migration and workspace migration.
  • Mimecast Ltd., an email and data security company, has appointed Kurt Mills as vice president of channel sales. He is responsible for the company’s North American channel sales strategy. In addition, Mimecast appointed Jon Goodwin as director of public sector.
  • Managed detection and response vendor Critical Start has hired Dwayne Myers as its vice president of channels and alliances. Myers joins the company from Palo Alto Networks, where he served as channel business manager, Central U.S. and Latin America, for cybersecurity solutions.

Market Share is a news roundup published every Friday.


For Sale – Gaming PC, i7 7700k, 16GB DDR4, GTX 1080ti, 250GB SSD, 1.5TB HDD, NZXT H500 [price drop]

PC 1
In excellent condition, fully tested, cable managed and working

Ready to use

Intel i7 7700k
Zotac GTX 1080ti AMP! Extreme Core Edition
G.Skill Trident Z 16GB DDR4 2400MHz
MSI PC MATE B250 Motherboard
Samsung 850 evo 250GB SSD
Seagate 1.5TB HDD
NZXT Kraken M22 120mm Liquid Cooler
NZXT H500 Tempered Glass Case
Corsair RM850 850w 80+ Gold PSU
2x NZXT 120mm case fans (front)
Windows 10 Pro

Can deliver personally same day (~150 miles) – just ask me
Further away? Ask me…

£1350 £1180 £1125 £1080

Price and currency: 1125
Delivery: Goods must be exchanged in person
Payment method: BT / Cash
Location: Birmingham
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.


VMware Project Dimension to deliver managed HCI, edge networking

VMware is developing a managed edge appliance that has compute and storage for running applications and a software-defined WAN for connecting to the data center and public clouds.

The upcoming offering is in technical preview under the name Project Dimension. The product is a lightweight hyper-converged infrastructure system that includes the vSphere infrastructure compute stack and the vSAN software-defined storage product.

For networking, Project Dimension uses VMware’s NSX SD-WAN by VeloCloud, which VMware acquired last year. The VeloCloud SD-WAN provides connectivity to the corporate data center, SaaS or applications running on IaaS.

Project Dimension is essentially the branch version of VMware’s Cloud Foundation, which merges compute, storage and network provisioning to simplify application deployment in the data center and the Amazon and Microsoft Azure public clouds. Companies could use Project Dimension to run IoT and other software in retail stores, factories and oil rigs, according to VMware. Actual hardware for the system would come from VMware partners.

Companies already using Cloud Foundation could apply their policies and security to applications running on Project Dimension.

“There’s a lot of potential for operational simplicity. There’s the potential for improved multi-cloud management, and there’s the potential for faster time to market [for users’ applications],” said Stephen Elliot, an analyst at IDC.

But Project Dimension’s hybrid cloud approach — which lets companies run some applications at the edge, while also connecting to software running in the cloud — could eventually make it a “niche product,” said Andrew Froehlich, president of computer consultancy West Gate Networks, based in Loveland, Colo.

“While hybrid architectures are extremely common today, most businesses are looking to get to a 100% public cloud model as soon as they can,” he said. “Thus, it’s an interesting concept — and one that some can use — but I don’t see this making a significant impact long term.”

How Project Dimension works as a managed service

VMware plans to offer Project Dimension as a managed service. A company would order the service by logging into the VMware Cloud and going to its Edge Portal, where the business would choose a Project Dimension resource cluster and a service-level agreement.

Businesses would then upload the IP addresses of the edge locations, where VMware would send technicians to install the Project Dimension system. Each system would appear as a separate cluster in the Edge Portal.

VMware plans to use its cloud-based lifecycle management system to fix failures and handle infrastructure firmware and software updates. As a result, companies could focus on developing and deploying business applications without having to worry about infrastructure maintenance.

VMware, which introduced Project Dimension last week at the VMworld conference in Las Vegas, did not say when it would release the product. Also, the company did not disclose pricing.

Wanted – GPU cheap as chips

Hey all, after being a bit of a plonker, I’ve managed to break my GPU and can’t afford a new one for the next few weeks.

Not looking for much, literally the cheapest GPU I can find. Cex sell the 6850 for £20, so that should be a decent baseline for people to go on what I need. I’d sooner give someone on here the money rather than Cex or a shop.

Let me know,

Thanks.

Location: Skegness

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

How to Monitor Hyper-V Performance with PowerShell

Virtual machines can quickly lose speed and efficiency unless they are managed properly. Using PowerShell, you can monitor Hyper-V performance, stay on top of your performance levels and ensure your Hyper-V VMs are running optimally at all times.

In my last article, I demonstrated how to work with performance counters but from a WMI (Windows Management Instrumentation) perspective, using the corresponding Win32 classes with Get-CimInstance. Today I want to circle back to using Get-Counter to retrieve performance counter information but as part of a toolmaking process. I expect that when you are looking at performance counters, you do so on a very granular level. That is, you are only interested in data from a specific counter. I am too. In fact, I want to develop some tooling around a performance counter so that I can quickly get the information I need.

Getting Started

I’m using Hyper-V running on my Windows 10 desktop, but there’s no reason you can’t substitute your own Hyper-V host.

You should be able to test my code by setting your own value for $Computer.
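
Something like this is all the setup required (a minimal sketch; substitute your own host name):

    # Point the examples at your Hyper-V host. I'm using the local machine.
    $Computer = $env:COMPUTERNAME

    # Sanity check: list the Hyper-V counter sets visible on that host.
    Get-Counter -ListSet "Hyper-V*" -ComputerName $Computer |
        Select-Object -ExpandProperty CounterSetName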

Hyper-V Performance Counters

Of all the Hyper-V performance counters, the one that interests me is part of the Hyper-V Dynamic Memory VM set.

Dynamic Memory Counters

I am especially interested in the pressure related counters. This should give me an indication if the virtual machine is running low on memory. You sometimes see this in the Hyper-V management console when you look at the memory tab for a given virtual machine. Sometimes you’ll see a Low status. I want to be able to monitor these pressure levels from PowerShell.
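
Get-Counter can sample these counters directly. A quick sketch (the counter path follows the set name above; the instance names correspond to the VM names):

    # Sample Current Pressure for every VM using dynamic memory.
    Get-Counter -Counter "\Hyper-V Dynamic Memory VM(*)\Current Pressure" -ComputerName $Computer |
        Select-Object -ExpandProperty CounterSamples |
        Select-Object -Property InstanceName, CookedValue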

After a little research, I found the corresponding WMI class.
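
On my hosts the counter set maps to the class used below; because performance class names can vary, it is worth confirming the exact name with Get-CimClass before relying on it:

    # Discover the formatted-data class behind the 'Hyper-V Dynamic Memory VM' set.
    Get-CimClass -ClassName Win32_PerfFormattedData*DynamicMemory* |
        Select-Object -ExpandProperty CimClassName

    # Query the pressure values for all dynamic memory VMs on the host.
    Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM -ComputerName $Computer |
        Select-Object -Property Name, CurrentPressure, AveragePressure, MaximumPressure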

Memory Counters via WMI and CIM

As you can see, SRV2 is running a bit high. One of the benefits of using a WMI class instead of Get-Counter is that I can create a filter.
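
For example, to return only the virtual machines whose pressure has crossed a threshold, using the class found above (80 is an arbitrary cutoff):

    # Let WMI filter server-side instead of piping everything to Where-Object.
    Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM `
        -ComputerName $Computer -Filter "CurrentPressure >= 80" |
        Select-Object -Property Name, CurrentPressure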

High Memory Pressure VM

Building Tools With What We’ve Done So Far

One tool I could create would be to turn this one-line command into a function, perhaps adding the Hyper-V host as a parameter. I could set the function to run in a PowerShell scheduled job.
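
A sketch of what that function might look like (the name, parameters and defaults are my own choices):

    Function Get-VMMemoryPressure {
        [CmdletBinding()]
        Param(
            # Hyper-V host to query; defaults to the local machine.
            [Parameter(Position = 0)]
            [string]$ComputerName = $env:COMPUTERNAME,

            # Only report VMs at or above this pressure value.
            [int]$Threshold = 0
        )
        Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM `
            -ComputerName $ComputerName -Filter "CurrentPressure >= $Threshold" |
            Select-Object -Property @{Name = 'VMName'; Expression = { $_.Name }},
                CurrentPressure, AveragePressure,
                @{Name = 'ComputerName'; Expression = { $_.PSComputerName }}
    }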

Another option would be to register a WMI event subscription. This is an advanced topic that we don’t have room to cover in great detail. But here is some sample code.
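
This sketch assumes the same WMI class as above; the CSV path is a placeholder to adapt:

    # Watch every 30 seconds for any dynamic memory VM whose CurrentPressure
    # is greater than or equal to 80.
    $query = "SELECT * FROM __InstanceModificationEvent WITHIN 30
    WHERE TargetInstance ISA 'Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM'
    AND TargetInstance.CurrentPressure >= 80"

    Register-CimIndicationEvent -Query $query -SourceIdentifier VMPressure -Action {
        # Log the VM name and pressure value with a timestamp.
        $Event.SourceEventArgs.NewEvent.TargetInstance |
            Select-Object -Property @{Name = 'Time'; Expression = { Get-Date }},
                Name, CurrentPressure |
            Export-Csv -Path C:\work\VMPressure.csv -Append -NoTypeInformation
    }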

The code is checking every 30 seconds (within 30) for instances of the performance counter where the current pressure value is greater or equal to 80. I am registering the event subscription on my computer.  As long as my PowerShell session is open, any time a VM goes above 80 for Current Pressure, information is logged to a CSV file.

When using an Action scriptblock, you won’t see when the event is raised with Get-Event. The only way I can tell is by looking at the CSV file.

To manually stop watching, simply unregister the event.
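
Assuming the source identifier used above:

    # Find and remove the subscription created earlier.
    Get-EventSubscriber -SourceIdentifier VMPressure | Unregister-Event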

Using this kind of event subscription has a number of other applications when it comes to managing Hyper-V. I expect I’ll revisit this topic again.

But there’s one more technique I want to share before we wrap up for today.

Usually, I am a big believer in taking advantage of PowerShell objects in the pipeline, and using Write-Host is generally frowned upon. But there are always exceptions, and here is one of them. I want a quick way to tell if a virtual machine is under pressure, and color coding will certainly catch my eye. Instead of writing objects to the pipeline, I’ll write a string of information to the console, color coded according to the value of CurrentPressure. You will likely want to set your own thresholds; I picked values that would give me something good to display.
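
Here is a sketch of the idea (the 80/50 cutoffs are just my demo values):

    # Color-code each VM's line by its CurrentPressure value.
    Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM -ComputerName $Computer |
        ForEach-Object {
            $color = if ($_.CurrentPressure -ge 80) { 'Red' }
                     elseif ($_.CurrentPressure -ge 50) { 'Yellow' }
                     else { 'Green' }
            Write-Host ("{0,-20} CurrentPressure: {1,5}" -f $_.Name, $_.CurrentPressure) -ForegroundColor $color
        }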

It wouldn’t take much to turn this into a function and create a reusable tool.

Colorized Performance Counters

I have at least one other performance monitoring tool technique I want to share with you but I think I’ve given you plenty to try out for today so I’ll cover that in my next article.

Wrap-Up

Have you built any custom tools for your Hyper-V environment? Do you find these types of tools helpful? Would you like us to do more? Let us know in the comments section below!

Thanks for reading!