
ConnectWise threat intelligence sharing platform changes hands

Nonprofit IT trade organization CompTIA said it will assume management and operations of the Technology Solution Provider Information Sharing and Analysis Organization established by ConnectWise in August 2019.

Consultant and long-time CompTIA member MJ Shoer will remain as the TSP-ISAO’s executive director under the new arrangement. The TSP-ISAO retains its primary mission of fostering real-time threat intelligence sharing among channel partners, CompTIA said.

MJ Shoer

Nancy Hammervik, CompTIA’s executive vice president of industry relations, discussed CompTIA’s TSP-ISAO leadership role with Shoer during the CompTIA Communities and Councils Forum event this week. CompTIA conducted the event virtually after cancelling its Chicago in-person event due to the coronavirus pandemic.

Shoer said CompTIA is uniquely positioned to enhance the TSP-ISAO. “If you look at all the educational opportunities and resources that CompTIA brings to the table … those are going to be integral to this in terms of helping to further educate the world of TSPs … about the cyber threats and how to respond,” he said.

He added that CompTIA’s involvement in government policy work will contribute to the success of the threat intelligence sharing platform, as “the government is going to be key.” ISAOs were chartered by the Department of Homeland Security as a result of an executive order by former president Barack Obama in 2015.

Hammervik and Shoer also underscored that CompTIA’s commitment to vendor neutrality will help the TSP-ISAO bring together competitive companies in pursuit of a collective benefit. “We all face these threats. We have all seen some of the reports about MSPs being used as threat vectors against their clients. If we don’t … stop that, it can harm the industry from the largest member to the smallest,” Shoer said.

About 650 organizations have joined the TSP-ISAO, according to Hammervik. Membership in the organization in 2020 is free for TSP companies.

Shoer said his goal for the TSP-ISAO is to develop a collaborative platform that can share qualified, real-time and actionable threat intelligence with TSPs so they can secure their own and customers’ businesses. He said ultimately, the organization would like to automate elements of the threat intelligence sharing, but it may be a long-term goal as AI and other technologies mature.

Wipro launches Microsoft technology unit

Wipro Ltd., a consulting and business process services company based in Bangalore, India, launched a business unit dedicated to Microsoft technology.

Wipro said its Microsoft Business Unit will focus on developing offerings that use Microsoft’s enterprise cloud services. Those Wipro offerings will include:

  • Cloud Studio, which provides migration services for workloads on such platforms as Azure and Dynamics 365.
  • Live Workspace, which uses Microsoft’s Modern Workplace, Azure’s Language Understanding Intelligent Service, Microsoft 365 and Microsoft’s Power Platform.
  • Data Discovery Platform, which incorporates Wipro’s Holmes AI system and Azure.

Wipro’s move follows HCL Technologies’ launch in January 2020 of its Microsoft Business Unit and Tata Consultancy Services’ rollout in November 2019 of a Microsoft Business Unit focusing on Azure’s cloud and edge capabilities. Other large IT service providers with Microsoft business units include Accenture/Avanade and Infosys.

Other news

  • 2nd Watch, a professional services and managed cloud company based in Seattle, unveiled a managed DevOps service, which the company said lets clients take advantage of DevOps culture without having to deploy the model on their own. The 2nd Watch Managed DevOps offering includes an assessment and strategy phase, DevOps training, tool implementation based on the GitLab platform, and ongoing management. 2nd Watch is partnering with GitLab to provide the managed DevOps service.
  • MSPs can now bundle Kaseya Compliance Manager with a cyber insurance policy from Cysurance. The combination stems from a partnership between Kaseya and Cysurance, a cyber insurance agency. Cysurance’s cyber policy is underwritten by Chubb.
  • Onepath, a managed technology services provider based in Atlanta, rolled out Onepath Analytics, a cloud-based business intelligence offering for finance professionals in the SMB market. The analytics offering includes plug-and-play extract, transform and load, data visualization and financial business metrics such as EBITDA, profit margin and revenue as a percentage of sales, according to the company. Other metrics may be included, the company said, if the necessary data is accessible.
  • Avaya and master agent Telarus have teamed up to provide Avaya Cloud Office by RingCentral. Telarus will offer the unified communications as a service product to its network of 4,000 technology brokers, Avaya said.
  • Adaptive Networks, a provider of SD-WAN as a service, said it has partnered with master agent Telecom Consulting Group.
  • Spinnaker Support, an enterprise software support services provider, introduced Salesforce application management and consulting services. The company also provides Oracle and SAP application support services.
  • Avanan, a New York company that provides a security offering for cloud-based email and collaboration suites, has hired Mike Lyons as global MSP/MSSP sales director.
  • Managed security service provider High Wire Networks named Dave Barton as its CTO. Barton will oversee technology solutions and channel sales engineering for the company’s Overwatch Managed Security Platform, which is sold through channel partners, the company said.

Market Share is a news roundup published every Friday.


Q&A: SwiftStack object storage zones in on AI, ML, analytics

SwiftStack founder Joe Arnold said the company’s recent layoffs reflected a change in its sales focus but not in its core object storage technology.

San Francisco-based SwiftStack attributed the layoffs to a switch in use cases from classic backup and archiving to newer artificial intelligence, machine learning and analytics. Arnold said the staffing changes had no impact on the engineering and support team, and the core product will continue to focus on modern applications and complex workflows that need to store lots of data.

“I’ve always thought of object storage as a data as a service platform more than anything else,” said Arnold, SwiftStack’s original CEO and current president and chief product officer.

TechTarget caught up with Arnold to talk about customer trends and the ways SwiftStack is responding in an increasingly cloud-minded IT world. Arnold shared product news: SwiftStack is adding Microsoft Azure as a target for its 1space technology, which facilitates a single namespace across object storage locations for cloud platform compatibility. The company already supported Amazon S3 and Google.

SwiftStack’s storage software, which is based on open source OpenStack Swift, runs on commodity hardware on premises, but the 1space technology can run in the public cloud to facilitate access to public and private cloud data. Nearly all of SwiftStack’s estimated 125 customers have some public cloud footprint, according to Arnold.

Arnold also revealed a new distributed, multi-region erasure code option that can enable customers to reduce their storage footprint.

What caused SwiftStack to change its sales approach?

Joe Arnold, founder and president, SwiftStack

Joe Arnold: At SwiftStack, we’ve always been focused on applications that are in the data path and mission critical to our customers. Applications need to generate more value from the data. People are distributing data across multiple locations, between the public cloud and edge data locations. That’s what we’ve been really good at. So, the change of focus with the go-to-market path has been to double down on those efforts rather than what we had been doing.

How would you compare your vision of object storage with what you see as the conventional view of object storage?

Arnold: The conventional view of object storage is that it’s something to put in the corner. It’s only for cold data that I’m not going to access. But, that’s not the reality of how I was brought up through object storage. My first exposure to object storage was building platforms on Amazon Web Services when they introduced S3. We immediately began using that as the place to store data for applications that were directly in the data path.

Didn’t object storage tend to address backup and archive use cases because it wasn’t fast enough for primary workloads?

Arnold: I wouldn’t say that. Our customers are using their data for their applications. That’s usually a large data set that can’t be stored in traditional ways. Yes, we do have customers that use [SwiftStack] for purely cold archive and purely backup. In fact, we have features and capabilities to enhance some of the cold storage capabilities of the product. What we’ve changed is our go-to-market approach, not the core product.

So, for example, we’re adding a distributed, multi-region erasure code storage policy that customers can use across three data centers for colder data. It allows segments of data — data bits and parity bits — to be distributed across multiple sites, and, to retrieve data, only two of the three data centers need to be online.

How does the new erasure code option differ from what you’ve offered in the past?

Arnold: Before, we offered the ability to use erasure code where each site could fully reconstruct the data. A data center could be offline, and you could still reconstruct fully. Now, with this new approach, you can store data more economically, but it requires two of three data centers to be online. It’s just another level of efficiency in our storage tier. Customers can distribute data across more data centers without using as much raw storage footprint and still have high levels of durability and availability. Since we’re building out storage workflows that tier up and down across different storage tiers, they can utilize this one for their coldest data storage policies.
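Arnold doesn't detail the scheme, but the two-of-three property he describes can be sketched with a toy XOR parity layout. This is a hypothetical illustration only, not SwiftStack's actual erasure code; production systems typically use Reed-Solomon-style codes with more fragments:

```python
# Toy sketch of the "two of three data centers" property Arnold describes.
# Hypothetical illustration only; not SwiftStack's actual erasure code.

def encode(data):
    """Split data into two halves plus an XOR parity fragment.
    Each fragment would live in a different data center."""
    half = (len(data) + 1) // 2
    a = data[:half].ljust(half, b"\x00")
    b = data[half:].ljust(half, b"\x00")
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def decode(fragments, length):
    """Recover the original data from any two of the three fragments."""
    a, b, p = fragments
    if a is None:                      # first data half lost
        a = bytes(x ^ y for x, y in zip(b, p))
    elif b is None:                    # second data half lost
        b = bytes(x ^ y for x, y in zip(a, p))
    return (a + b)[:length]            # trim any padding

data = b"object storage payload"
fragments = encode(data)
fragments[1] = None                    # simulate one data center offline
assert decode(fragments, len(data)) == data
```

In this sketch the raw footprint is 1.5x the data size, versus 3x for keeping a full copy at every site: the efficiency gain Arnold points to, at the cost of needing two sites online for reads.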

Does the new erasure coding target users who strictly do archiving, or will it also benefit those doing AI and analytics?

Arnold: They absolutely need it. Data goes back and forth between their core data center, the edge and the public cloud in workflows such as autonomous vehicles, personalized medicine, telco and connected city. People need to manage data between different tiers as they’re evolving from more traditional-based applications into more modern, cloud-native type applications. And they need this ultra-cold tier.

How similar is this cold tier to Amazon Glacier?

Arnold: From a cost point of view, it will be similar. From a performance point of view, it’s much better. From a data availability point of view, it’s much better. It costs a lot of money to egress data out of something like AWS Glacier.

How important is flash technology in getting performance out of object storage?

Arnold: If the applications care about concurrency and throughput, particularly when it comes to a large data set, then a disk-based solution is going to satisfy their needs. Because the SwiftStack product’s able to distribute requests across lots of disks at the same time, they’re able to sustain the concurrency and throughput. Sure, they could go deploy a flash solution, but that’s going to be extremely expensive to get the same amount of storage footprint. We’re able to get single storage systems that can deliver a hundred gigabytes a second aggregate read-write throughput rates. That’s nearly a terabit of throughput across the cluster. That’s all with disk-based storage.
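Arnold's figures are consistent, assuming decimal units: 100 gigabytes per second works out to 800 gigabits per second, which is indeed nearly a terabit.

```python
# Quick check of Arnold's throughput claim (assumes decimal gigabytes).
gb_per_second = 100                    # 100 GB/s aggregate read-write throughput
gbit_per_second = gb_per_second * 8    # 8 bits per byte
print(gbit_per_second)                 # 800 Gbit/s, just shy of 1 Tbit/s
```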

What do you think of vendors such as Pure Storage offering flash-based options with cheaper quad-level cell (QLC) flash that compares more favorably price-wise to disk?

Arnold: QLC flash is great, too. We support that as well in our product. We’re not dogmatic about using or not using flash. We’re trying to solve large-footprint problems of our customers. We do have customers using flash with a SwiftStack environment today. But they’re using it because they want reduced latencies across a smaller storage footprint.

How do you see demand for AWS, Microsoft and Google based on customer feedback?

Arnold: People want options and flexibility. I think that’s the reason why Kubernetes has become popular, because that enables flexibility and choice between on premises and the public cloud, and then between public clouds. Our customers were asking for the same. We have a number of customers focused on Microsoft Azure for their public cloud usage. And they want to be able to manage SwiftStack data between their on-premises environments with SwiftStack and the public cloud. So, we added the 1space functionality to include Azure.

What tends to motivate your customers to use the public cloud?  

Arnold: Some use it because they want to have disaster recovery ready to go up in the public cloud. We will mirror a set of data and use that as a second data center if they don’t already have one. We have customers that collect data from partners or devices out in the field. The data lands in the public cloud, and they want to move it to their on-premises environment. The other example would be customers that want to use the public cloud for compute resources where they need access to their data, but they don’t want to necessarily have long-term data storage in the public clouds. They want the flexibility of which public cloud they’re going to use for their computation and application runtime, and we can provide them connections to the storage environment for those use cases.

Do you have customers who have second thoughts about their cloud decisions due to egress and other costs?

Arnold: Of course. That happens in all directions. Sometimes you’re helping people move more stuff into the public cloud. In some situations, you’re pulling down data, or maybe it’s going in between clouds. They may have had a storage footprint in the public cloud that was feeding to some end users or some computation process. The egress charges were getting too high. The footprint was getting too high. And that costs them a tremendous amount month over month. That’s where we have the conversation. But it still doesn’t mean that they need to evacuate entirely from the public cloud. In fact, many customers will keep the storage on premises and use the public cloud for what it’s good at — more burstable computation points.

What’s your take on public cloud providers coming out with various on-premises options, such as Amazon Outposts and Azure Stack?

Arnold: It’s the trend of ‘everything as a service.’ I think what customers want is a managed experience. The number of operators who are able to manage these big environments is becoming harder and harder to come across. So, it’s a natural for those companies to offer a managed on-premises product. We feel the same way. We think that managing large sets of infrastructure needs to be highly automated, and we’ve built our product to make that as simple as possible. And we offer a product to do storage as a service on premises for customers who want us to do remote operations of their SwiftStack environments.

How has Kubernetes- and container-based development affected the way you design your product?

Arnold: Hugely. It impacts how applications are being developed. Kubernetes gives an organization the flexibility to deploy an application in different environments, whether that’s core data centers, bursting out into the public cloud or pushing applications out to the edge. At SwiftStack, we need to make the data just as portable as the containerized application is. That’s why we developed 1space. A huge number of our customers are using Kubernetes. That just naturally lends itself to the use of something like 1space to give them the portability they need for access to their data.

What gaps do you need to fill to more fully address what customers want to do?

Arnold: One is further fleshing out ‘everything as a service.’ We just launched a service around that. As more customers adopt that, we’re going to have more work to do, as the deployments become more diverse across not just core data centers, but also edge data centers.

I see the convergence of file and object workflows and furthering 1space with our edge-to-core-to-cloud workflows. Particularly in the world of high-performance data analytics, we’re seeing the need for object — but it’s a world that is dominated by file-based applications. Data gets pumped into the system by robots, and object storage is awesome for that because it’s easy and you get lots of concurrency and lots of parallelism. However, you see humans building out algorithms and doing research and development work. They’re using file systems to do much of their programming, particularly in this high performance data analytics world. So, managing the convergence between file and object is an important thing to do to solve those use cases.


Samsung Galaxy Chromebook, Galaxy Book Flex Alpha hands-on

Samsung Galaxy Chromebook

Who said Chromebooks had to be inexpensive and underpowered? Certainly not Samsung, as the Galaxy maker unveiled its latest and greatest Chromebook at CES 2020, fittingly named the Samsung Galaxy Chromebook.

Looking at the spec sheet, you’d think this was a premium Windows 10 two-in-one, like those launched by Dell and HP at this same event. The Samsung Galaxy Chromebook has a stunning 13.3-inch 3840 x 2160 4K AMOLED display, powered by a 10th-generation Intel Core i5 processor, 8GB RAM and a 256GB SSD, coupled with a fingerprint reader and Wi-Fi 6 support. Samsung offers upgrades to 16GB RAM and a 1TB SSD.

It’s fanless, with an aluminum chassis that feels great and cool to the touch, and is durable, too. It comes in what Samsung calls Fiesta Red and Mercury Gray. It’s thin and light, measuring 0.55 inches thick and weighing 2.27 pounds. Ports include two USB-C ports that double as charging inputs, a microSD card slot and a 3.5 mm audio jack.

Business users might scoff at the dearth of ports. But again, this runs Chrome OS, not Windows 10. It’s designed for mobility and working on the road. In fact, Samsung claims it meets the standards of Intel’s Project Athena, meaning instant on, extended battery life and fast charging.

Samsung unveiled its Galaxy Chromebook at CES 2020.
The Samsung Galaxy Chromebook promises Windows-like power and performance.

The Samsung Galaxy Chromebook ships with an active pen for note-taking, which docks neatly in the device. This is not the same S Pen found on the Samsung Galaxy Note series of smartphones. It’s much more limited, used only for writing and drawing, and it does not support any of the hover actions or shortcuts found on recent Samsung pen-toting devices.

Is the Samsung Galaxy Chromebook overkill? With a $999 starting price, perhaps. Samsung is stepping on Google’s turf, as Google offers the premium Pixelbook and Pixelbook Go at similar price tags. But a device with these specs running this operating system, which supports Google Android apps, is also as future-proof as any laptop on the market. Chrome OS doesn’t demand much power to run smoothly, so even when 15th-generation Intel chips arrive, the Galaxy Chromebook should still be humming right along.

No word on a specific release date. Samsung claims the Galaxy Chromebook will ship in the first quarter of 2020.

The Samsung Galaxy Chromebook comes in Fiesta Red and Mercury Gray.

Samsung Galaxy Book Flex Alpha

If a $999 Chromebook is too much, how about a premium Windows 10 two-in-one for $830? That’s the Samsung Galaxy Book Flex Alpha.

Think of the Galaxy Book Flex Alpha as a scaled-down version of the Galaxy Book Flex announced last October. As with most Samsung devices, it has an outstanding display: a 13.3-inch QLED HD display that looks really sharp, at least when not compared directly to the Galaxy Chromebook’s 4K display.

It also has 10th-generation Intel Core processors — though Samsung hasn’t revealed the specific processor yet — starting with 8GB RAM and a 256GB SSD. It can be upgraded to 12GB RAM and a 512GB SSD. Ports include USB-C, full-sized USB 3 and HDMI, along with a microSD card slot and a 3.5 mm audio jack. Other specs include Wi-Fi 6 and a fingerprint reader.

The Samsung Galaxy Book Flex Alpha is nearly indistinguishable from its pricier predecessor.
The Samsung Galaxy Book Flex Alpha is an affordable 2-in-1.

Samsung claims 17.5 hours of juice from a full charge, with fast charging support.

At a glance it’s hard to distinguish from the Galaxy Book Flex, and that’s a good thing. To cut costs, Samsung opted not to include the active pen, offering it as a separate purchase. There is also no built-in Qi charging pad, or dedicated graphics options.

But for a sub-$1,000 laptop, the Samsung Galaxy Book Flex Alpha is potentially a pretty good deal. It’s cheaper than that Chromebook, anyway.


Citrix brings Workspace and micro apps to Google Cloud

Citrix Workspace platform for Google Cloud is now generally available. In an announcement, Citrix said the move would simplify tasks for IT professionals and users alike by using micro apps and unifying tasks in a single work feed.

The partnership underscores Citrix’s commitment to keep its services agnostic to support its customers’ choice in cloud providers, according to analysts.

Eric Kenney, a senior product manager at Citrix, said IT professionals are, at present, responsible for wrangling a variety of disparate products. These applications may, for example, govern security, file synchronization, file sharing and virtual desktops, and all of them could have different portals and login screens. Citrix Workspace is designed to make it easier to administer a range of end-user computing applications.

“It’s really difficult to manage all of these different vendors and resources,” he said. “With Workspace, IT professionals are able to bring these solutions together, with one partner, to deliver them to users.”

Putting these solutions and the options to manage them in one place helps both desktop administrators and users, Kenney said.

Although Workspace provides a centralized place through which Citrix products, such as Citrix Virtual Apps and Desktops and Citrix ADC, may be launched, Kenney said the platform goes beyond that. The intent, he said, is to provide a home for whatever application a company wants to deliver to its users, including homegrown and cloud-hosted offerings.

One way Workspace acts to simplify employee workloads is through the use of micro apps, or small programs that can accomplish simple tasks quickly, according to Kenney.

“An analogy we use is the office copier; it has a ton of buttons on it,” he said, noting that, with knowledge of those functions, people can collate, print double-sided copies and perform any number of specialized tasks. Most people, though, only use the big green button. “That’s a way of looking at enterprise applications; you’re using them a lot, but only for a small sliver of their functionality.”

Employees approving an expense report, for example, typically must go into a separate application to review and OK the document. Kenney said that process is less streamlined than it could be and that micro apps can integrate multiple tasks of approving an expense report into one feed, enabling workers to accomplish in seconds what used to take minutes.

“You could review and approve [the report] and never have to leave Workspace,” he said.

Workspace’s new availability also provides Citrix greater integration with Google Cloud services, among them Google’s G Suite, a collection of productivity apps. Kenney said a new cloud service, Citrix Access Control, provides administrators additional control over user actions on Google Drive documents.

For example, if a malware link is inadvertently added to a document, the Access Control settings could ensure the link is opened in an isolated browser that is safely disposed of at the end of a user session. Access Control can also restrict “copy and paste” functionality in certain documents.

Workspace isn’t just for IT

Ulrik Christensen, principal infrastructure engineer at Oncology Venture, said Citrix services, including Workspace, have made things easier for his firm. The drug development company is a global operation with offices and labs in both Denmark and the U.S., and manufacturing operations in India.

“I have four to five people in the U.S., and they’re not even in the same office,” he said, adding that the complexity of supporting the different hardware they use, including Apple machines, Windows machines and Chromebooks, has proven difficult in the past.

Moving to the kind of standardized system offered by Citrix has improved the user experience and lessened the burden on IT, Christensen said.

“It’s a lot easier if something doesn’t work,” he said. “We can help because we know the whole platform… It also made it a lot easier for IT to provide users new applications and updates.”

Security had improved as well, Christensen said. With only one way to access the company’s network, it is at less risk and the firm can be more confident that its data is protected.

Citrix continues to support cloud choice

Andrew Hewitt, an analyst at Forrester Research, said the partnership with Google Cloud makes sense for Citrix, as it bolsters one of the key tenets of its pitch to customers.

Andrew Hewitt

“Citrix’s core messaging is around experience, choice and security,” he said. “This announcement sits squarely in its desire to be an agnostic player in the [end-user computing market] that can enable enterprises to pick and choose whatever technologies they want to deploy to their end users.”


The move, Hewitt said, seems like a logical extension of past partnerships with Google.

“For example, Citrix has full API access to manage Chromebooks; it supports all the management models for Android Enterprise and provides Citrix Receiver for virtualization support on Chromebooks,” he said. “This announcement is just further deepening of the relationship with Google.”

Mark Bowker

Enterprise Strategy Group senior analyst Mark Bowker said the partnership is good for Google as well.

“Google is trying to make inroads into the enterprise,” he said, noting pushes with Chromebooks and the Chrome browser.

Bowker added, though, that enterprises must still interact with Windows frequently. By working with Citrix, then, Google can provide its users with easier access to Windows-based services.

Citrix recognizes the importance of providing its services on its customers’ cloud of choice, as shown by a recent announcement of deeper ties with AWS. Still, its closest ties are with Microsoft, Bowker said. “The strength of their integration is ultimately with Microsoft, and always has been,” he said.


Accenture cloud tool aims to shorten decision cycle

Accenture has rolled out a tool that the company said will help customers navigate complex cloud computing options and let them simulate deployments before committing to an architecture.

The IT services firm will offer the tool, called myNav, as part of a larger consulting agreement with its customers. The myNav process starts with a discovery phase, which scans the customer’s existing infrastructure and recommends a cloud deployment approach, whether private, public, hybrid or multi-cloud. Accenture’s AI engine then churns through the company’s repository of previous cloud projects to recommend a specific enterprise architecture and cloud offering. Next, the Accenture cloud tool simulates the recommended design, allowing the client to determine its suitability.

“There’s an over-abundance of choice when the client chooses to … take applications, data and infrastructure into the cloud,” said Kishore Durg, Accenture’s cloud lead and growth and strategy lead for technology services. “The choices cause them to ponder, ‘What is the right choice?’ This [tool] will help increase their confidence in going to the cloud.”

Accenture isn’t unique among consultancies in marketing services to aid customers’ cloud adoption. But industry watchers pointed to myNav’s simulation feature as a point of differentiation.

There are many companies that offer cloud service discovery, assessment and design services for a fee, said Stephen Elliot, an analyst with IDC. “But I don’t know of any other firm that will run a simulation,” he added.

Yugal Joshi, a vice president with Everest Group, cited myNav’s cloud architecture simulator as an intriguing feature. “Going forward, I expect it to further cover custom bespoke applications in addition to COTS [commercial off-the-shelf] platforms,” he said.

Joshi, who leads Everest Group’s digital, cloud and application services research practices, said most mature IT service providers have developed some type of platform to ease clients’ journey to the cloud. “The difference lies in the vision behind the IP, the quality of the IP, articulation and the business value it can provide to clients,” he noted.

Accenture cloud simulation’s potential benefits

Elliot said myNav’s simulation is interesting because it could help customers understand the outcome of a project in advance and whether that outcome will meet their expectations.


This could help Accenture close deals faster while fostering more productive conversations with IT buyers, Elliot said. “In any case, customers will have to trust that the underlying information and models are correct, and that the outcomes in the solution can be trusted,” he said.

Customers, meanwhile, could benefit from faster cloud rollouts.

“Where Accenture myNav is focusing is leveraging the expertise Accenture has gathered over many cloud engagements,” Joshi said. “This can potentially shorten the decision-making, business-casing and the eventual cloud migration for clients.”

Customers can get to the results faster, rather than spend weeks or, potentially, months in assessment and roadmap exercises, he said. Whether the Accenture cloud platform delivers the anticipated results, however, will only become evident when successful client adoption case studies are available, he cautioned.

Durg said cloud assessments can take eight to 12 weeks, depending on the scale of the project. The migration phase could span two months and require 80 or more people. The simulation aspect of myNav, he noted, lets clients visualize the deployment “before a single person is put on a project.”

Help wanted

Accenture’s myNav tool arrives at a time when the cloud has matured — the public cloud is more than a decade old — but not completely. The multiplicity of cloud technologies introduces uncertainty and sparks enterprise conversations around skill sets and adoption approaches.

“Despite cloud being around for quite some time now, it is still not a done deal,” Joshi said. “Clients need a lot of hand-holding and comfort before they can migrate to, and then leverage, cloud as an operating platform [rather] than an alternative hosting model.”

Elliot added, “The market is at a point where every cloud deployment is almost a snowflake. It’s the organizational, skills and process discussions that slow projects down.”


AWS rejects Elasticsearch trademark lawsuit claims

AWS has responded to an Elasticsearch trademark lawsuit with broad denials of its claims, but experts said an eventual settlement is not only likely, but also the best outcome for customers.

Elasticsearch Inc., or Elastic, sued AWS on Sept. 27 on grounds of false advertising and trademark infringement related to AWS’ Open Distro for Elasticsearch, AWS’ version of the popular distributed analytics and search engine. Elastic originated and serves as chief maintainer of the open source project.

AWS, with the participation of Expedia and Netflix, launched Open Distro for Elasticsearch in March. The companies said this move was necessary because Elastic’s version includes too much proprietary code inside the main open source code line. Open Distro for Elasticsearch is fully open source and licensed under Apache 2.0, according to AWS.

The Elasticsearch trademark lawsuit contends that branding for both the original Amazon Elasticsearch Service, which AWS has sold since 2015, and Open Distro for Elasticsearch violates its trademark, and that customers are “likely to be confused as to whether Elastic sponsors or approves AESS [Amazon Elasticsearch Service] and Open Distro.”

AWS filed its response to Elasticsearch’s complaint last week in U.S. District Court for the Northern District of California. The company denies all wrongdoing, demands a jury trial and offers a series of defensive arguments, one being that Elastic trademark infringement claims “are barred at least in part” under the fair use doctrine. Another asserts that Elastic gave AWS a license to use the term “Elasticsearch.”

Overall, AWS’ response to the Elasticsearch trademark lawsuit is fairly boilerplate, said Jeremy Peter Green, a New York-based attorney specializing in trademark law who reviewed it and Elastic’s original complaint.


“In the trademark world, different lawyers have different ways of doing this [but] usually law firms just have templates for these,” Green said.

For example, another AWS defense cites the doctrine of unclean hands, a legal concept that means a complainant shouldn’t be awarded relief if they have committed legal breaches of their own in a dispute.

This, too, is standard practice, according to Green. “There’s always a chance that during the discovery process, something will show up,” he said. “You’re just hedging your bets by accusing them of everything.”

Green has been evaluating options for managed Elasticsearch as part of a trademark search engine he plans to develop. AWS does seem to have sowed some consumer confusion, which is the basis of trademark infringement law, Green said.

“I like Elastic’s case here, from the perspective of both an attorney and a consumer,” he said. Elastic’s initial complaint calls for treble damages and attorney’s fees, an amount that could be significant if Elastic wins at trial.


It is likely that the parties will settle, Green added. “I think [Elastic has] a good enough case that it would be silly for [AWS] to throw a lot of money at it.”

Many Elasticsearch users also host their clusters on AWS anyway, which blurs the competitive lines. “Both of these companies have a major incentive to come to some kind of settlement,” Green said.

Experienced enterprise IT buyers are aware of the potential repercussions of intellectual property battles, according to Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. “But ultimately, it is in the interest of the sparring vendors to settle and keep customers going,” he said.


MyPayrollHR arrest brings relief, but few answers

The FBI said Monday it arrested the head of MyPayrollHR and charged him with $70 million in bank fraud. The arrest is gratifying to some victims, who saw their paychecks vanish from their bank accounts just a few weeks ago. 

But the FBI’s arrest of Michael Mann, 49, doesn’t answer many of the questions hanging over this case, including where the still-missing funds have gone. Mann ran the parent company of MyPayrollHR, which suddenly shuttered Sept. 5.

The FBI complaint, filed in U.S. District Court in Albany, accuses Mann of creating businesses that were used in a scheme to obtain loans and lines of credit. It alleges the fraudulent activity may have dated as far back as 2010 or 2011 and may have totaled about $70 million. The complaint went on to say that Mann “wished to accept responsibility for his conduct and confess to a fraudulent scheme that he had been running for years.”

Mann also told the FBI that MyPayrollHR, which was founded in 2006, was legitimate. Indeed, customers had no inkling of the fraudulent behavior until the payroll money vanished. Funds were “reversed” or withdrawn from accounts used for direct payroll deposits.

“Now that Mann has been arrested, that will help shed a little light on things,” said Brad Mete, managing partner of two recruiting and staffing firms with 800 employees in Fort Lauderdale.

What happened to the money?

“But I still don’t know where the money is — especially the taxes we paid,” said Mete, a nearly three-year customer of MyPayrollHR.

Mete has more than $75,000 in withholding taxes from one pay period that disappeared with MyPayrollHR. But the government will still want its money, he said. He manages Affinity Resource, an employment agency, and IntellaPro LLC, a professional staffing firm. He doesn’t know whether the money has disappeared or is sitting frozen in a bank account.

Mete is trying to recoup the missing withholding taxes through a fraud complaint with the bank. He is investigating possible insurance coverage.

The National Automated Clearinghouse Association (NACHA), which develops rules and standards for the automated clearinghouse (ACH), an electronic funds transfer system, said in a written response to questions from SearchHRSoftware.com that its “ongoing investigation” of the incident “continues to show that fewer than 400 companies and approximately 8,000 employees experienced unauthorized payroll reversals.”

NACHA said that, as of Sept. 19, it estimates that “about 97% of people that had unauthorized reversals have had their funds restored.” The FBI put the total number of MyPayrollHR clients at about 1,000.

New regulations may be on the way

Mete questions NACHA’s claim that only 8,000 employees were affected. “There is no way it’s that little,” he said. Out of his two firms, 600 of his 800 employees were impacted by reversals, and he knows anecdotally from other firms that they had hundreds affected.

Mete suspects the industry will downplay the impact of the MyPayrollHR incident, possibly to avoid new regulations. Regardless, new regulations may be on the way.

Last week, the New York State Senate announced a package of bills in response to MyPayrollHR. They include new criminal penalties for intentionally misappropriating payroll, tax credits for victims and restrictions on deductions from employee accounts.

NACHA defended the ACH system. It “has strong consumer protection measures in place. There are rules in place to prevent unauthorized withdrawals, and to allow consumers to be re-credited in the event that there are unauthorized withdrawals,” it said in an unsigned statement. 

“This is an unprecedented and isolated incident, and obviously, these rules were circumvented in this case,” NACHA said in its statement.

MyPayrollHR’s ACH provider was Cachet Financial Services in Pasadena, Calif. Cachet’s services include direct deposit for payroll processing firms. It provided the services to MyPayrollHR for about 12 years, it said in an earlier interview with SearchHRSoftware.com.

MyPayrollHR uploaded a file instructing Cachet to take money out of employer accounts. The money should have been put into a Cachet settlement account. But that didn’t happen. To fulfill the transaction, the ACH system took money out of Cachet’s holding account to pay employees. Cachet says it is out $26 million and is a victim of fraud.

Cachet initiated reversals to get its money back from employee accounts. Some accounts saw two reversals because the first was not coded in accordance with NACHA standards. Cachet then changed direction and began urging banks to reject both of its reversals.

Reversals were outside the rules

In its statement, NACHA said that “Cachet should not have sent any reversals in this incident. This is not permitted by the NACHA Rules, and is not in keeping with any industry standard or best practice.”

Lawsuits seeking class action status are now being filed against MyPayrollHR, and the ACH firms involved, including Cachet, which declined further comment.

The payroll problem is not completely resolved for Tanya Willis, executive director at Agape Animal Rescue in Nashville, but her organization may be in better shape than most.

Most of the seven shelter employees have been made whole by banks, which can take 45 to 60 days to fix the problem, according to Willis. One employee had a nearly $1 million deduction in a checking account. First Tennessee Bank, whose name appeared on the screenshot showing the negative $999,193.75 balance, declined to comment.

In an interview Monday, Willis said it appeared that the employee with the $1 million deduction had access to her accounts, but she wasn’t completely certain of the employee’s status.

Animal rescue is rescued by its supporters

The apparent fraud has cost Agape about $10,000 in withholding taxes for a calendar year quarter. But Agape appealed to the community for help.

“Our supporters and our donors stepped up and made us whole and we’re out saving dogs again,” Willis said. The shelter has been able to raise the money to pay their tax bill due Oct. 1, she said.

Willis worries about the for-profit businesses that can’t turn to donors to get help. “I know that there are still so many people in worse situations, and I’m thankful that we’ve been able to go to the community and raise the funds needed to get us back on track — but I want that for everybody,” she said.

Willis was contacted by the FBI, and she sent them every document she could think of to help the investigation.

Mann is cooperating with authorities

On Sept. 10, Mann met with the U.S. attorney in Albany, less than a week after MyPayrollHR closed.

Mann started cooperating with the FBI before the investigation began. It was about two-and-a-half weeks ago that his attorney, Michael Koenig at Hinckley, Allen & Snyder LLP, reached out to authorities.

In an email statement, Koenig said that he “pro-actively called the United States Attorney’s Office before any law enforcement or regulatory agency contacted Michael Mann.”

Mann “has been cooperating with authorities since that initial meeting, and will continue to do so, in order to fully and accurately detail what occurred,” Koenig said.

The five-page FBI complaint only hints at motive. The court filing said that Mann claimed “he committed the fraud in response to businesses and financial pressures, and that he used almost all of the fraudulently obtained funds to sustain certain businesses, and purchase and start new ones.”

Mann faces up to 30 years in prison and a $1 million fine, according to the Justice Department.

But the court document does provide insight into what might have triggered the sudden problem at MyPayrollHR.


Mann told authorities that Pioneer Bancorp Inc. was his largest creditor. The decision to siphon off money “was precipitated by [Mann’s] decision to route MyPayroll’s clients’ payroll payment to an account at Pioneer instead of directly to Cachet. He did this in order to temporarily reduce the amount of money he owed to Pioneer. When Pioneer froze Mann’s accounts, it also (inadvertently) stopped movement of MyPayrollHR’s clients’ payroll payments to Cachet.”

In a U.S. Securities and Exchange Commission filing Sept. 11, Pioneer described the “potentially fraudulent activity” without naming MyPayrollHR.

Much remains unsettled

A closed Facebook group for victims of MyPayrollHR now has over 2,000 members.

A moderator of the group, Melanie O’Malley, owner of O’Malley’s Oven, an Albany, N.Y., bakery, and a MyPayrollHR customer, said much remains unsettled.

“Some employees are still missing money, and employers are at a complete loss,” O’Malley said.

She described the general reaction to news of Mann’s arrest as relief.

“There are still so many questions that employers have,” O’Malley said. “I think seeing charges gives us hope that perhaps we’ll get some answers, and a sense of our chances of recompense.”


Learn to set up and use PowerShell SSH remoting

When Microsoft announced in August 2016 that PowerShell would become an open source project running on Windows, Linux and macOS, there was an interesting wrinkle related to PowerShell remoting.

Microsoft said this PowerShell Core would support remoting over Secure Shell (SSH) as well as Web Services-Management (WS-MAN). You could always call SSH binaries from PowerShell, but the announcement indicated SSH support would be an integral part of PowerShell. This opened up the ability to perform remote administration of Windows and Linux systems easily using the same technologies.

A short history of PowerShell remoting

Microsoft introduced remoting in PowerShell version 2.0 in Windows 7 and Windows Server 2008 R2, which dramatically changed the landscape for Windows administrators. They could create remote desktop sessions to servers, but PowerShell remoting made it possible to manage large numbers of servers simultaneously.

Remoting in Windows PowerShell is based on WS-MAN, an open standard from the Distributed Management Task Force. But because WS-MAN-based remoting is Windows-oriented, you needed another technology, usually SSH, to administer Linux systems.

Introducing SSH on PowerShell Core


SSH is a protocol for managing systems over a possibly unsecured network. SSH works in a client-server mode and is the de facto standard for remote administration in Linux environments.

PowerShell Core uses OpenSSH, a fork of SSH 1.2.12 that was released under an open source license. OpenSSH is probably the most popular SSH implementation.

The code required to use WS-MAN remoting is installed as part of the Windows operating system. You need to install OpenSSH manually.

Installing OpenSSH

We have grown accustomed to installing software on Windows using the wizards, but the installation of OpenSSH requires more background information and more work from the administrator. Without some manual intervention, many issues can arise.

The installation process for OpenSSH on Windows has improved over time, but it’s still not as easy as it should be. Working with the configuration file leaves a lot to be desired.

There are two options when installing PowerShell SSH:

  1. On Windows 10 1809, Windows Server 1809, Windows Server 2019 and later, OpenSSH is available as an optional feature.
  2. On earlier versions of Windows, you can download and install OpenSSH from GitHub.

Be sure your system has the latest patches before installing OpenSSH.

Installing the OpenSSH optional feature

You can install the OpenSSH optional feature using PowerShell. First, check your system with the following command:

Get-WindowsCapability -Online | where Name -like '*SSH*'
Figure 1. Find the OpenSSH components in your system.

Figure 1 shows the OpenSSH client software is preinstalled.

You’ll need to use Windows PowerShell for the installation unless you download the WindowsCompatibility module for PowerShell Core. Then you can import the Deployment Image Servicing and Management module from Windows PowerShell and run the commands in PowerShell Core.
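That path can be sketched as follows; a minimal example assuming the WindowsCompatibility module from the PowerShell Gallery:

# install the compatibility module, then proxy the Deployment Image
# Servicing and Management (DISM) cmdlets into PowerShell Core
Install-Module -Name WindowsCompatibility -Scope CurrentUser
Import-WinModule -Name DISM

# the capability cmdlets now run in PowerShell Core
Get-WindowsCapability -Online | where Name -like '*SSH*'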

Install the server feature:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~
Path :
Online : True
RestartNeeded : False

The SSH files install in the C:\Windows\System32\OpenSSH folder.

Download OpenSSH from GitHub

Start by downloading the latest version from GitHub. The latest version of the installation instructions are at this link.

After the download completes, extract the zip file into the C:\Program Files\OpenSSH folder. Change location to C:\Program Files\OpenSSH and run the install-sshd.ps1 script that ships in the zip file to install the SSH services:

powershell.exe -ExecutionPolicy Bypass -File install-sshd.ps1
[SC] SetServiceObjectSecurity SUCCESS
[SC] ChangeServiceConfig2 SUCCESS
[SC] ChangeServiceConfig2 SUCCESS

Configuring OpenSSH

After OpenSSH installs, perform some additional configuration steps.

Ensure that the OpenSSH folder is included on the system path environment variable:

  • C:\Windows\System32\OpenSSH if installed as the Windows optional feature
  • C:\Program Files\OpenSSH if installed via the OpenSSH download

Set the two services to start automatically:

Set-Service sshd -StartupType Automatic
Set-Service ssh-agent -StartupType Automatic

If you installed OpenSSH with the optional feature, then Windows creates a new firewall rule to allow inbound access of SSH over port 22. If you installed OpenSSH from the download, then create the firewall rule with this command:

New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' `
-Enabled True -Direction Inbound -Protocol TCP `
-Action Allow -LocalPort 22

Start the sshd service to generate the SSH keys:

Start-Service sshd

The SSH keys and configuration file reside in C:\ProgramData\ssh, which is a hidden folder. The default shell used by SSH is the Windows command shell. This needs to change to PowerShell:

New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
-Value "C:\Program Files\PowerShell\6\pwsh.exe" -PropertyType String -Force

Now, when you connect to the system over SSH, PowerShell Core will start and will be the default shell. You can also make the default shell Windows PowerShell if desired.

There’s a bug in OpenSSH on Windows: it doesn’t work with paths that contain a space, such as the path to the PowerShell Core executable. The workaround is to create a symbolic link that provides a path OpenSSH can use:

New-Item -ItemType SymbolicLink -Path C:\pwsh -Target 'C:\Program Files\PowerShell\6'

In the sshd_config file, un-comment the following lines:

PubkeyAuthentication yes
PasswordAuthentication yes

Add this line before other subsystem lines:

Subsystem  powershell C:\pwsh\pwsh.exe -sshs -NoLogo -NoProfile

This tells OpenSSH to run PowerShell Core.

Comment out the line:

AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys

After saving the changes to the sshd_config file, restart the services:

Restart-Service sshd
Start-Service ssh-agent

You need to restart the sshd service after any change to the config file.

Using PowerShell SSH remoting

Using remoting over SSH is very similar to remoting over WS-MAN. You can access the remote system directly with Invoke-Command:

Invoke-Command -HostName W19DC01 -ScriptBlock {Get-Process}
richard@w19dc01's password:

You’ll get a prompt for the password, which won’t be displayed as you type it.

If it’s the first time you’ve connected to the remote system over SSH, then you’ll see a message similar to this:

The authenticity of host 'servername (' can't be established.
ECDSA key fingerprint is SHA256:().
Are you sure you want to continue connecting (yes/no)?

Type yes and press Enter.

You can create a remoting session:

$sshs = New-PSSession -HostName W19FS01
richard@w19fs01's password:

And then use it:

Invoke-Command -Session $sshs -ScriptBlock {$env:COMPUTERNAME}

You can enter an OpenSSH remoting session using Enter-PSSession in the same way as a WS-MAN session. You can enter an existing session or use the HostName parameter on Enter-PSSession to create the interactive session.
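For example, either of these starts an interactive prompt on the remote machine (hostname and session variable follow the earlier examples):

# enter an existing SSH-based session
Enter-PSSession -Session $sshs

# or create the interactive session directly over SSH
Enter-PSSession -HostName W19FS01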

You can’t disconnect an SSH-based session; that’s a WS-MAN technique.

You can use WS-MAN and SSH sessions to manage multiple computers as shown in Figure 2.

The session information shows the different transport mechanism — WS-MAN and SSH, respectively — and the endpoint in use by each session.
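That view can be reproduced with Get-PSSession; a sketch assuming one session of each type, with example hostnames:

# a WS-MAN session and an SSH session side by side
$wsman = New-PSSession -ComputerName W19DC01
$ssh = New-PSSession -HostName W19FS01

# the Transport property distinguishes the two mechanisms
Get-PSSession | Select-Object Id, ComputerName, Transport, State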

Figure 2. Use WS-MAN and SSH sessions together to manage remote machines.

If you look closely at Figure 2, you’ll notice there was no prompt for the password on the SSH session because the system was set up with SSH key-based authentication.

Using SSH key-based authentication

Open an elevated PowerShell session. Change the location to the .ssh folder in your user area:

Set-Location -Path ~\.ssh

Generate the key pair:

ssh-keygen -t ed25519

Add the key file into the SSH-agent on the local machine:

ssh-add id_ed25519

Once you’ve added the private key into SSH-agent, back up the private key to a safe location and delete the key from the local machine.

Copy the id_ed25519.pub file into the .ssh folder for the matching user account on the remote server. You can create such an account if required:

$pwd = Read-Host -Prompt 'Password' -AsSecureString
Password: ********
New-LocalUser -Name Richard -Password $pwd -PasswordNeverExpires
Add-LocalGroupMember -Group Administrators -Member Richard

On the remote machine, copy the contents of the key file into the authorized_keys file:

scp id_ed25519.pub authorized_keys

The authorized_keys file needs its permissions changed:

  • Open File Explorer, right click authorized_keys and navigate to Properties – Security – Advanced
  • Click Disable Inheritance.
  • Select Convert inherited permissions into explicit permissions on this object.
  • Remove all permissions except for SYSTEM and your user account. Both should have Full control.
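The same permissions can be set from the command line with icacls; a sketch that assumes you are running in the folder containing authorized_keys:

# remove inherited permissions, then grant full control
# to SYSTEM and the current user only
icacls.exe authorized_keys /inheritance:r
icacls.exe authorized_keys /grant "SYSTEM:(F)" "$($env:USERNAME):(F)"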


You’ll see references to using the OpenSSHUtils module to set the permissions, but there’s a bug in the version from the PowerShell Gallery that makes the authorized_keys file unusable.

Restart the sshd service on the remote machine.

You can now connect to the remote machine without using a password as shown in Figure 2.

If you’re connecting to a non-domain machine from a machine in the domain, then you need to use the UserName parameter after enabling key-pair authentication:

$ss = New-PSSession -HostName W19ND01 -UserName Richard

You need the username on the remote machine to match your domain username. You won’t be prompted for a password.

WS-MAN or SSH remoting?

Should you use WS-MAN or SSH based remoting? WS-MAN remoting is available on all Windows systems and is enabled by default on Windows Server 2012 and later server versions. WS-MAN remoting has some issues, notably the double hop issue. WS-MAN also needs extra work to remote to non-domain systems.

SSH remoting is only available in PowerShell Core; Windows PowerShell is restricted to WS-MAN remoting. It takes a significant amount of work to install and configure SSH remoting. The documentation isn’t as good as it needs to be. The advantages of SSH remoting are that you can easily access non-domain machines and non-Windows systems where SSH is the standard for remote access.


VMware Cloud on AWS migrations continue to pose challenges

Hybrid cloud solutions provider Unitas Global said it is expecting an uptick in VMware Cloud on AWS migrations ahead, but noted migrations continue to pose problems.

According to the Los Angeles-based company, which provides cloud infrastructure, managed services and connectivity services, VMware Cloud on AWS has gained traction among enterprise clients with extensive VMware-based legacy infrastructure. In the past, those legacy environments proved difficult to migrate, but VMware Cloud on AWS has smoothed the journey.

“[VMware Cloud on AWS] has given us a path to migrating legacy environments to cloud with less friction,” said Grant Kirkwood, CTO at Unitas Global.

VMware Cloud on AWS has also drummed up customer interest for its disaster recovery capabilities, which can provide significant cost reductions compared with traditional enterprise DR infrastructure.  “We are seeing a lot of interest in this particular use case,” he said.

Despite its benefits, however, Kirkwood has found that VMware Cloud on AWS migrations can be problematic for some customers. The biggest challenge usually stems from enterprises’ often complexly interwoven environments. As enterprise environments evolve, they tend to amass lots of hidden dependencies, which can break during cloud migrations, he said. “So no matter how much planning you seem to do, you pick up a database or middleware application and migrate it to the cloud, and [then] five other downstream [apps] break because they were dependent on that and it wasn’t known,” Kirkwood said.

A report from Faction, a Denver-based multi-cloud managed service provider, cited cost management (51%) as the top VMware Cloud on AWS usage challenge, followed by network complexity (37%) and AWS prerequisites (27%). Faction’s report, published in August, was based on a survey of 1,156 IT and business professionals.

VMware poised for multi-cloud opportunities

While enterprise multi-cloud adoption remains in its early stages, Kirkwood said VMware has been successfully redeveloping its portfolio for when it matures.

Each of the leading public cloud providers is trying to differentiate itself based on its unique capabilities and services, he said. For the most part, enterprise customers today haven’t even scratched the surface of Google’s, AWS’ and Microsoft’s rapidly expanding menus of services. As enterprises gradually embrace more public cloud services, “being able to leverage all of them across a common data set [will be valuable] for companies that are sophisticated enough to take advantage of that,” he said.


According to Kirkwood, Google Cloud Platform (GCP) excels in AI and machine learning tooling that can be applied to large data sets. GCP is also “very competitive in large-scale storage,” he noted. Meanwhile, AWS has developed powerful analytics and behavioral tooling. Microsoft, though it “has probably the least sophisticated offerings,” provides “the path of least resistance for Microsoft-centric workloads.”

“What I think is going to be interesting to watch is how VMware adapts what they are doing to provide value across that much broader spectrum of [public cloud] services as they gain popularity,” he said.

Other news

  • Insight Enterprises, a technology solutions and services provider based in Tempe, Ariz., has completed its acquisition of PCM Inc., a provider of IT products and services. The deal expands Insight’s reach into the mid-market, especially in North America, and adds more than 2,700 salespeople, technical architects, engineers, consultants and service delivery personnel, according to the company.
  • Iland, a hosted cloud, backup and disaster recovery services provider, said it is reaching “a broader audience of enterprise customers” through a growing network of resellers and managed services providers. SMBs had been the traditional customer set for the company’s VMware-based offerings. The Houston-based company also said it has expanded its channel program. The program provides a partner portal for training, certification and sales management; a new data center in Canada for regional partners in North America; and an updated Catalyst cloud assessment tool.
  • MSP software vendor ConnectWise launched an organization that aims to boost cybersecurity among channel partners. The Technology Solution Provider Information Sharing and Analysis Organization, or TSP-ISAO, offers its members access to threat intelligence, cybersecurity best practices, and other tools and resources.
  • Accenture disclosed two acquisitions this week. The company acquired Northstream, a consulting firm in Stockholm that works with communications service providers and networking services vendors, and Fairway Technologies, an engineering services provider with offices in San Diego; Irvine, Calif.; and Austin, Texas.
  • Ensono, a hybrid IT services provider, launched a managed services offering for VMware Cloud on AWS and said it has achieved a VMware Cloud on AWS Solution Competency.
  • Sparkhound, a digital solutions firm, said its digital transformation project at paving company Pavecon involved Microsoft Office 365, SharePoint, Azure SQL Database and Active Directory. The project also drew upon Power BI for business analytics and PowerApps for creating mobile apps on Android, iOS and Windows, according to the company.
  • US Signal, a data center services provider based in Grand Rapids, Mich., unveiled its managed Website and Application Security Solution. The offering builds upon the company’s partnership with Cloudflare, an internet security company, according to US Signal. The managed website and application security offering provides protection against DDoS, ransomware, malicious bots and application layer attacks, the company said.
  • Cloud communications vendor CoreDial rolled out its CoreNexa Contact Center Certification Program. The program offers free sales and technical training on the vendor’s contact center platform.
  • Security vendor Kaspersky revealed that more than 2,000 companies have joined its global MSP program. Kaspersky launched its MSP program in 2017.
  • Service Express, a third-party maintenance provider based in Grand Rapids, Mich., has opened an office in the Washington, D.C., area. The company specializes in post-warranty server, storage and network support.

Market Share is a news roundup published every Friday.


New Everbridge CEO talks education, NC4 acquisition

The new Everbridge CEO said he wants people to understand the importance of a critical event management platform.

“It just needs to be something everyone has, because it does save lives,” said David Meredith, previously the COO of Rackspace. He took over on July 15 for Jaime Ellertson, the Everbridge CEO since 2011 who is transitioning to the role of executive chairman of the board.

“We need to get out there as the leader and we need to be more aggressive in having conversations like we’re having today, and educating people about what are the best practices, and how they can best prepare,” Meredith said.

Two weeks into his time as CEO, Everbridge acquired NC4 Inc., a risk intelligence provider that Meredith said will improve his company’s Critical Event Management (CEM) suite. The two companies had previously been partners.

“A lot of acquisitions, companies may be competing with each other or they’re maybe in an adjacent space, but they haven’t worked together very much,” said Karl Kotalik, who will be general manager of NC4 after serving as its president and CEO. “We’ve been exchanging information for years, not just together, but in combination with customers.”

The NC4 acquisition gives Everbridge 10 products it sells as a SaaS company, Meredith said. Everbridge, which is based in Burlington, Mass., claims about 4,700 enterprise customers. The company now has about 950 employees, including the entire team of more than 70 workers from NC4, which is based in El Segundo, Calif. Everbridge paid $83 million in cash and stock, and the deal is expected to fully close at the end of the third quarter.

We need to really be more proactive in terms of educating the marketplace on what can be done to keep people safe and keep businesses running.
— David Meredith, CEO, Everbridge

NC4 claimed more than 300 customers, 100 of which are in the Fortune 500. About 50% of NC4 customers were also Everbridge users. Kotalik said the acquisition will help NC4 “scale down” into Everbridge’s base of smaller companies that still need risk intelligence.

Meredith said he wants Everbridge to be for CEM what Salesforce is for customer relationship management: a "platform that really makes the ecosystem" around CRM.

“You can have one place to get all the data if you are an enterprise, or a state, local or federal government,” Meredith said. “Then if something is happening, we can move very quickly to manage that with the rest of the tools in the suite.”

We recently spoke with Meredith and Kotalik to discuss their plans for the Critical Event Management suite and NC4.

What led you to take the Everbridge CEO job?

David Meredith

David Meredith: I’ve known of Everbridge as a customer for years and was a very happy and satisfied customer. What pulled me into this role, first and foremost, is the mission of Everbridge — the mission of keeping people safe and businesses running faster. It’s a very powerful draw. We are a mission-driven company.

The technology is the leader in the space. They used to say you’d never get fired for hiring IBM in technology. And in the critical event management space, Everbridge is the leader and I think it’s safe to say you would never get fired for picking Everbridge. If you look at the ability to scale, the global reach, the resiliency, the fact that we’re a public company, our size, the breadth of our offerings, we’re the clear leader in the space, and that’s very exciting.

But I still think there’s a lot of room to grow from there.

What is it about the technology that makes Everbridge a leader?

Meredith: Everbridge has been investing heavily in building out our technology platform and doing acquisitions as well. If you look at the Critical Event Management suite, critical event management, or CEM, is an area that we're sort of a pioneer in. It starts with a single pane of glass, and this is our Visual Command Center, and that's where we can aggregate thousands and thousands of pieces of data. The ability to curate all that data, using technology, machine learning, artificial intelligence, as well as expert human analysts, [creates] that added level of validation.

Our systems are extremely scalable. We’ve moved everything to the cloud now and we’re very resilient. … We have the ability to deliver the messages when you need them in a timely manner, and we’ve got backups in place at every level of the supply chain. We’re the leader in that.

Everbridge and NC4 were partners previously — how did you work together in the past?

Karl Kotalik

Karl Kotalik: I started NC4 18 years ago, right after 9/11. … A natural partnership developed about 10 years ago because Everbridge was already emerging as the leader in mass notification and communication — at the time they called it unified communications. And NC4, our specialty, we were very focused on risk intelligence. We were emerging as the leader in real-time event incident monitoring, all hazards — everything from water main breaks and one-alarm fires and shootings up to terrorist attacks, hurricanes, tornadoes, major floods.

We were getting the information, but to deliver it at scale, to the large customers we were serving, we needed that assist from Everbridge. So, we partnered, where we did something really well, on the front end of the process, and Everbridge handled the downstream messaging, unified communications to people who needed to know. And they started getting into more response and coordination, and they’ve grown the CEM platform today.

When you’re partnered, you’re not coordinating on strategy. It’s a nice relationship, but we realized we could do so much more coming together. … Putting the two together is a killer combination. And a lot of the work had already been done because of the partnerships with these large enterprises.

How has the integration been going?

Kotalik: Everbridge has had access to every iteration and evolution of our APIs going back 10 years, and they've seen our data streams and fed them to their platforms over all these years. So, their development team, their product management team, their operations team, understand the [kind and volume of data NC4 deals in]. We're doing 700 incidents a day on critical events.

You might think of a critical event as something big, but a critical event for an individual customer could be as minor as a water main break. But [it's a major event] if it's across the street and you're a data center and you depend on that water pressure for the cooling of the equipment in the data center. The CEM platform lets you very quickly orchestrate all the mitigation steps you want to take: shutting down the servers, turning on alternate cooling systems, whatever those steps are. Not being able to do that could turn it into a disaster. It could take down your customers, and you don't want to do that.

Where do you see the CEM suite progressing? Is there anything you want to see added?

Meredith: We have a whole roadmap that we’re going to continue to be building out. There are some big market drivers that we’re tapping into. One is internet of things. There’s going to be 75 billion connected devices in the next six years. One of the things Everbridge does, in addition to keeping your employees or citizens or customers safe, we also help to keep your assets and things safe as well. And that’s going to get much more complex with the advent of more IoT.

Another big trend we see is around mobility. If you look at what’s happening with the workforce today, in the next few years, over 70% of U.S. workers are going to be mobile. If you’re trying to keep your employees safe, it’s not as simple as when everyone is just in one building, from 9 to 5. Now they’re spread everywhere, working from home and other places.

Big data is another one. I think NC4 is a great example where aggregating all that data, being able to curate it, sort through it, and get to actionable intelligence for our customers as quickly as possible, even to the point of being predictive, is going to be strategically important for us. We’re going to continue to invest and drive more analytics-type solutions out of all the data that we have and all the data that we see.

Everbridge's Global Operations Center at its headquarters in Burlington, Mass., tracks critical events worldwide, 24/7.

What are you seeing as trends in customers?

Meredith: One big trend, and another reason I was drawn to the company, is Everbridge is really creating a network effects business.

We recently announced that the state of Florida did a five-year renewal with us. So, what happens when you win a state like Florida? Over the years, we’ve added 64 of 67 counties as customers. We’ve added 26 cities, including the 10 largest in Florida, almost 50 corporations, 15 state agencies, almost 20 higher education universities [and] 29 healthcare organizations.

When you start to add all that on, it creates this network effect, where when something happens, it’s all interrelated — you’ve got emergency responders, you’ve got the state, the county, the city, transportation. If there’s a hurricane in Florida, all of these groups are impacted. Our ability to have all of them on our platform is really powerful. It’s beneficial to them, it’s beneficial to us. That’s really that ecosystem effect, that network effect we create.

We just announced that we won the country of Australia as a customer. If you think about what I just talked about with Florida, now we’re doing it for the country of Australia — the states, the cities, the healthcare, the higher education, the corporations and tying all that together.

What’s really interesting, looking forward, the European Union has come out and said all of the EU countries need to have population alerting systems in place in the next few years, so that’s an opportunity for us to take what we’ve done in Australia and other countries and now move faster in terms of spreading that in Europe.

We’re getting all this data coming in from all these sources. The data is the lifeblood of the system. As you’re looking at that Visual Command Center — and we’re getting data from our analysts, we’re getting data from the web, from our customers — it allows us to be much more accurate in terms of false positives and false negatives. There have been some highly publicized examples recently about false alarms and how disruptive that can be. With NC4, you’ve got 24/7 analysts looking at all the feeds, highly trained, highly skilled, and can say, ‘I’m looking at all my data, I’m curating all the data and this is not a critical event. This is a false alarm.’

Alternatively, minutes can save lives. And being able to shrink that time and know something is really happening, know we're getting into a critical event, and be able to get people to safety, be able to protect your assets, that is very important and has a huge impact in terms of the overall return on investment the customer makes in a platform like this.

What else have you learned as Everbridge CEO in a month and what are your short- and long-term plans?

Meredith: Having been in technology for many years now, I will say, you need great people, you need great technology; you also need timing to line up. Unfortunately, we're at a period now where the data shows events are up: weather events, cyber and malware attacks, terrorist attacks. The rate's increasing.

We’re creating a whole new category. We need to really be more proactive in terms of educating the marketplace on what can be done to keep people safe and keep businesses running. … We’ve got to be out there and educating and talking about the story. I really believe if you’re a Global 2000 or Fortune 1000 company, really every one of those companies should have technology and plans in place for what to do in the event of a critical event, whether they use Everbridge or not.

Do you think that not enough people and organizations know about what you do?

Meredith: I think that’s correct. When we go talk to a company, a lot of times, it’s not that they already have a solution, but they have maybe a couple point solutions and they’ve sort of jury-rigged some standard operating procedures. We don’t see the level of preparation that you would like to see. It’s something that you don’t want to ever have to use, but you want to have it in place.

Kotalik: We will go in to customers and they won’t even realize they can get real-time information that’s impacting their travelers, their assets, their locations, in enough time to really mitigate. When they hear the stories about how it saved lives or it reduced downtime, it stopped an event from turning into a disaster for the company because they were able to mitigate it, that helps drive our business for these less sophisticated organizations that haven’t really thought about this. They don’t think they have a big enough budget or enough people.
