Odaseva introduces high availability for Salesforce

Salesforce users will be able to continue working even if Salesforce goes down, thanks to Odaseva’s new high availability offering.

Odaseva ultra high availability (UHA) works similarly to high availability (HA) for any non-SaaS environment. If there’s a Salesforce outage, such as planned maintenance or an unexpected failure, a customer’s Salesforce account fails over to an emulated Salesforce account in Odaseva. Users can continue to view, edit and update the emulated records as normal. When Salesforce is back up, Odaseva re-synchronizes the two environments, performing what is essentially a failback.
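
Odaseva has not published the mechanics of its re-synchronization, but conceptually the failback step must reconcile records edited in the emulated environment with the recovered primary. A minimal sketch, assuming a last-writer-wins policy keyed on modification timestamps (both the policy and the field names here are assumptions, not Odaseva’s actual algorithm):

```python
def failback_merge(primary, emulated):
    """Merge records edited in the emulated environment back into the
    recovered primary; the most recently modified copy wins (assumed policy)."""
    merged = dict(primary)
    for rec_id, rec in emulated.items():
        if rec_id not in merged or rec["modified"] > merged[rec_id]["modified"]:
            merged[rec_id] = rec
    return merged

primary = {"001": {"name": "Acme", "modified": 100}}
emulated = {"001": {"name": "Acme Corp", "modified": 150},  # edited during outage
            "002": {"name": "Globex", "modified": 140}}     # created during outage
print(failback_merge(primary, emulated))
```

A real implementation would also have to handle conflicting edits on both sides, which is why the vendor describes the process as a full re-synchronization rather than a one-way copy.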

Odaseva UHA is in early access and will be released as an add-on to the Odaseva platform in early 2020. Pricing is not yet available.

Salesforce has become so mission-critical to some organizations that they can’t afford any downtime. Odaseva CEO Sovan Bin said Odaseva UHA isn’t strictly necessary for smaller businesses that can shrug off a small Salesforce outage, but there are places such as call centers that need Salesforce access 100% of the time. These organizations stand to lose hundreds of thousands of dollars for every hour they can’t conduct business, while suffering from lost opportunities and damage to their brand.

“The real damage is because you’ve stopped doing business,” Bin said.

Odaseva provides backup and data governance for Salesforce data. Developed by Salesforce certified technical architects — the highest Salesforce expertise credential — Odaseva Data Governance Cloud offers archiving and automated data compliance on top of data protection features. Odaseva claims its compliance and data governance tools differentiate it from Salesforce backup competitors such as OwnBackup and Spanning.

Data protection and backup address only the integrity of data; HA addresses its availability and accessibility. Christophe Bertrand, senior analyst at IT analyst firm Enterprise Strategy Group (ESG), said HA is lacking for SaaS application data. He said he didn’t know of any other vendor with a similar product or feature.

“Not only is it unique, other vendors aren’t even exploring HA for Salesforce,” Bertrand said.

Bertrand added that other SaaS applications such as Office 365, Box and ServiceNow also have an availability gap, even as they become mission-critical to businesses. When these services go down, companies may have to stop working. Bertrand estimated that the cost of downtime averages more than $300,000 per hour for most enterprises. Although many vendors provide backup, none has yet provided a failover/failback offering.

“Ninety-nine-point-whatever percent uptime is not enough. That’s still 15 hours of downtime per year,” Bertrand said.
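
Bertrand’s arithmetic is easy to check: converting an uptime percentage into annual downtime hours shows that even “three nines” of availability leaves meaningful gaps (the specific percentages below are illustrative; 15 hours per year corresponds to roughly 99.83% uptime):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def downtime_hours(uptime_pct):
    """Hours of downtime per year implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.8, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_hours(pct):.1f} hours down per year")
```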

Screenshot of Odaseva UHA interface
Odaseva UHA users can continue making changes to Salesforce records even if Salesforce is offline.

Odaseva also introduced some new capabilities to its platform this week. It is now integrated with Salesforce Marketing Cloud, which allows users to back up emails, leads, contact information and marketing campaign files stored in Marketing Cloud. Before this integration, customers would have to develop a backup mechanism for Marketing Cloud themselves, which would include complex processes of extracting the data and replicating it.

Odaseva also extended its compliance automation applications to cover more than GDPR. Odaseva has data privacy applications that automatically perform anonymization, right of access, right of erasure and other privacy tasks in order to keep compliant with GDPR. Automated compliance now covers CCPA, HIPAA and a number of privacy regulations in non-U.S. countries such as PIPA (Japan), PIPEDA (Canada) and POPIA (South Africa).
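
Odaseva’s implementation isn’t public, but field-level anonymization of this kind is typically done by replacing direct identifiers with irreversible values. A minimal sketch, where both the set of PII fields and the hashing scheme are assumptions for illustration, not Odaseva’s actual method:

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}  # assumed set of direct identifiers

def anonymize(record):
    """Replace PII values with a truncated one-way hash so records can
    still be joined on the hashed key but no longer identify a person."""
    return {k: hashlib.sha256(str(v).encode()).hexdigest()[:12]
            if k in PII_FIELDS else v
            for k, v in record.items()}

rec = {"name": "Jane Doe", "email": "jane@example.com", "plan": "enterprise"}
print(anonymize(rec))
```

Because the hash is deterministic, the same person maps to the same token across backups, which is what lets analytics keep working after right-of-erasure requests are honored.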

The Salesforce Marketing Cloud integration and compliance automation extensions are available immediately.

Bin said Odaseva will focus on DevOps next. Salesforce Full Sandbox environments can be natively refreshed every 29 days. To help customers accelerate development, Bin said Odaseva will come up with a way to work around that limit and enable more frequent refreshes in a future release.

Go to Original Article

Consider these Office 365 alternatives to public folders

As more organizations consider a move from Exchange Server, public folders continue to vex many administrators for a variety of reasons.

Microsoft supports public folders in its latest Exchange Server 2019 as well as Exchange Online, but it is pushing companies to adopt some of its newer options, such as Office 365 Groups and Microsoft Teams. An organization pursuing alternatives to public folders will find there is no direct replacement for this Exchange feature. The reason lies in the nature of the cloud.

Microsoft set its intentions early under Satya Nadella’s leadership with its “mobile first, cloud first” initiative back in 2014. Microsoft aggressively expanded its cloud suite with new services and features. This fast pace meant that migrations to cloud services, such as Office 365, would offer a different experience depending on their timing: moving to Office 365 at one point might yield different features than waiting several months. This was the case for migrating public folders from on-premises Exchange Server to Exchange Online, which evolved over time and coincided with the introduction of Microsoft Teams, Skype for Business and Office 365 Groups.

The following breakdown of how organizations use public folders can help Exchange administrators with their planning when moving to the new cloud model on Office 365.

Organizations that use public folders for email only

Public folders are a great place to store email that multiple people within an organization need to access. For example, an accounting department can use public folders to let department members use Outlook to access the accounting public folders and corresponding email content.

Office 365 offers similar functionality to public folders through its shared mailbox feature in Exchange Online. A shared mailbox stores email in folders that multiple users can access.

A shared mailbox has a few advantages over a public folder with the primary one being accessibility through the Outlook mobile app or from Outlook via the web. This allows users to connect from their smartphones or a standard browser to review email going to the shared mailbox. This differs from public folder access which requires opening the Outlook client.

Organizations that use public folders for email and calendars

For organizations that rely on both email and calendars in their public folders, Microsoft has another cloud alternative that comes with a few extra perks.

Office 365 Groups not only lets users collaborate on email and calendars, but also stores files in a shared OneDrive for Business page, tasks in Planner and notes in OneNote. Office 365 Groups makes email and calendars available on any device. Group owners manage their own permissions and membership, lifting some of the burden of security administration from the IT department.

Microsoft provides migration scripts to assist with the move of content from public folders to Office 365 Groups.

Organizations that use public folders for data archiving

Some organizations that prefer to stay with a known quantity and keep the same user experience also have the choice to keep using public folders in Exchange Online.

The reasons for this preference will vary, but the most likely scenario is a company that wants to keep email for archival purposes only. The migration from Exchange on-premises public folders requires administrators to use Microsoft’s migration scripts.

Organizations that use public folders for project communication and as a data sharing repository

The Exchange public folders feature is excellent for sharing email, contacts and calendar events. For teams working on projects, the platform shines as a way to centralize information that’s relevant to the specific project or department. But it’s not as expansive as other collaboration tools on Office 365.

Take a closer look at some of the other modern collaboration tools available in Office 365 in addition to Microsoft Teams and Office 365 Groups, such as Kaizala. These offerings extend the organization’s messaging abilities to include real-time chat, presence status and video conferencing.


Managed services companies remain hot M&A ticket

Managed services companies continue to prove popular targets for investment, with more merger and acquisition deals surfacing this week.

Those transactions included private equity firm Lightview Capital making a strategic investment in Buchanan Technologies; Siris, a private equity firm, agreeing to acquire TPx Communications; and IT Solutions Consulting Inc. buying SecurElement Infrastructure Solutions.

Those deals follow private equity firm BC Partners’ agreement last week to acquire Presidio, an IT solutions provider with headquarters in New York. That transaction, valued at $2.1 billion, is expected to close in the fourth quarter of 2019.

More than 30 transactions involving managed service providers (MSPs) and IT service firms have closed thus far in 2019. This year’s deals mark a continuation of the high level of merger and acquisition (M&A) activity that characterized the MSP market in 2018. Economic uncertainty may yet dampen the enthusiasm for acquisitions, but recession concerns don’t seem to be having an immediate impact.

Seth Collins, managing director at Martinwolf, an M&A advisory firm based in Scottsdale, Ariz., said trade policies and recession talk have brought some skepticism to the market. That said, the MSP market hasn’t lost any steam, according to Collins.

“We haven’t seen a slowdown in activity,” he said. The LMM Group at Martinwolf represented Buchanan Technologies in the Lightview Capital transaction.

Collins said the macroeconomic environment isn’t affecting transaction multiples or valuations. “Valuations aren’t driven by uncertainty; they’re driven by the quality of the asset,” he noted.

Finding the right partner

Buchanan Technologies is based in Grapevine, Texas, and operates a Canadian headquarters in Mississauga, Ont. The company’s more than 500 consultants, engineers and architects provide cloud services, managed services and digital transformation, among other offerings.

Valuations aren’t driven by uncertainty; they’re driven by the quality of the asset.
Seth Collins, managing director at Martinwolf

A spokesman for Lightview Capital said Buchanan Technologies manages on-premises environments, private clouds and public cloud offerings, such as AWS, IBM Cloud and Microsoft Azure. The company focuses on the retail, manufacturing, education, and healthcare and life sciences verticals.

Collins said Buchanan Technologies founder James Buchanan built a solid MSP over the course of 30 years and had gotten to the point where he would consider a financial partner able to take the company to the next level.

“As it turned out, Lightview was that partner,” Collins added, noting the private equity firm’s experience with other MSPs, such as NexusTek.

The Siris-TPx deal, meanwhile, also involves a private equity investor and long-established services provider. TPx, a 21-year-old MSP based in Los Angeles, provides managed security, managed WAN, unified communications and contact center offerings. The companies said the deal will provide the resources TPx needs to “continue the rapid growth” it is experiencing in unified communications as a service, contact center as a service and managed services.

Siris has agreed to purchase TPx from its investors, which include Investcorp and Clarity.

“Investcorp and Clarity have been invested with TPx for more than 15 years, and they were ready to monetize their investment,” a spokeswoman for TPx said.

IT Solutions Consulting’s acquisition of SecurElement Infrastructure Solutions brings together two MSPs in the greater Philadelphia area.

The companies will pool their resources in areas such as security. IT Solutions offers network and data security through its ITSecure+ offering, which includes antivirus, email filtering, advanced threat protection, encryption and dark web monitoring. A spokeswoman for IT Solutions said SecurElement’s security strategy aligns with IT Solutions’ approach and also provides “expertise in a different stack of security tools.”

The combined company will also focus on private cloud, hybrid cloud and public cloud services, with a particular emphasis on Office 365, the spokeswoman said.

IT Solutions aims to continue its expansion in the Philadelphia area and the mid-Atlantic region through hiring, new office openings and acquisitions.

“We have an internal sales force that will continue our organic growth efforts, and our plan is to continue our acquisition strategy of one to two transactions per year,” she said.

MSP market M&A chart
Managed services companies continue to consolidate in an active M&A market.

VMware arms cloud partners with new tools

Ahead of the VMworld 2019 conference, VMware has unveiled a series of updates for its cloud provider partners.

The VMware Cloud Provider Platform now features new tools to enhance the delivery of hybrid cloud offerings and differentiated cloud services, the vendor said. Additionally, VMware said it is enabling cloud providers to target the developer community with their services.

“Customers are looking for best-of-breed cloud that addresses their specific application requirements. … In this world, where there are multiple types of clouds, customers are looking to accelerate the deployment of the applications, and, when they are looking at cloud, what they are looking for is flexibility —  flexibility so that they can choose a cloud that best fits their workload requirements. In many ways, the clouds have to adapt to the application requirements,” said Rajeev Bhardwaj, vice president of products for the cloud provider software business unit at VMware.

Highlights of the VMware updates include the following:

  • The latest version of the vendor’s services delivery platform, VMware vCloud Director 10, now provides a centralized view for hosted private and multi-tenant clouds. Partners can also tap a new “intelligent workload placement” capability for placing “workloads on the infrastructure that best meets the workload requirements,” Bhardwaj said.
  • To help partners differentiate their services, VMware introduced a disaster-recovery-as-a-service program for delivering DRaaS using vCloud Availability; an object storage extension for vCloud Director to deliver S3-compliant object storage services; and a backup certification to certify backup vendors in vCloud Director-based multi-tenant environments, VMware said. Cohesity, Commvault, Dell EMC, Rubrik and Veeam have completed the backup certification.
  • Cloud provider partners can offer containers as a service via VMware Enterprise PKS, a container orchestration product. The update enables “our cloud providers to move up the stack. So, instead of offering just IaaS … they can start targeting new workloads,” Bhardwaj said. VMware will integrate the Cloud Provider Platform with Bitnami, which develops a catalog of apps and development stacks that can be rapidly deployed, he said. The Bitnami integration can be combined with Enterprise PKS to support developer and DevOps customers, attracting workloads such as test/dev environments onto clouds, according to VMware.
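
For context on the object storage extension above, “S3-compliant” means existing S3 tooling can point at the provider’s endpoint instead of AWS. A hedged sketch of how a client would be configured, assuming the common boto3 pattern rather than anything VMware-specific (the endpoint URL and credentials are placeholders):

```python
def s3_compatible_client_config(endpoint_url, access_key, secret_key):
    """Connection settings for an S3-compatible object store; standard S3
    SDKs accept a custom endpoint, so the rest of the API is unchanged."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

cfg = s3_compatible_client_config(
    "https://objects.provider.example", "ACCESS_KEY", "SECRET_KEY")
# With boto3 installed and real credentials, any S3 workflow then works:
# import boto3
# s3 = boto3.client(**cfg)
# s3.put_object(Bucket="tenant-backups", Key="vm-42.ova", Body=b"...")
print(cfg["service_name"])
```

This endpoint compatibility is what lets cloud providers resell object storage to customers whose backup and archival tools already speak the S3 API.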

Bhardwaj noted that the VMware Cloud Provider Program has close to 4,300 partners today. Those partners span more than 120 countries and collectively support more than 10 million workloads. VMware’s Cloud Verified partners, which offer VMware software-defined data center and value-added services, have grown to more than 60 globally, VMware noted.

Managed service providers are a growing segment within the VMware Cloud Provider Program (VCPP), Bhardwaj added.

“As the market is shifting more and more toward SaaS and … subscription services, what we are seeing is more and more different types of partners” join VCPP, he said.

Partner businesses include solution providers, systems integrators and strategic outsourcers. They typically don’t build their own clouds, but “want to take cloud services from VMware as a service and become managed service providers,” he said.

Other news

  • Rancher Labs, an enterprise container management vendor, rolled out its Platinum Partner Program. Targeting partners with Kubernetes expertise, the program provides lead and opportunity sharing programs, joint marketing funds and options for co-branded content, the company said. Partners must meet a series of training requirements to qualify for the program.
  • Quantum Corp., a storage and backup vendor based in San Jose, Calif., updated its Alliance Partner Program with a new deal registration application, an expanded online training initiative and a redesigned partner portal. The deal registration component, based on Vartopia’s deal registration offering, provides a dashboard to track sales activity, the deal funnel and wins, according to Quantum. The online training for sales reps and engineers is organized by vertical market, opportunities and assets. The company also offers new in-person training options.
  • Quisitive Technology Solutions Inc., a Microsoft solutions provider based in Toronto, launched a Smart Start Workshop for Microsoft Teams.
  • MSP software vendor Continuum cut the ribbon on a new security operations center (SOC). Located in Pittsburgh, the SOC will bolster the availability of cybersecurity talent, threat detection and response, and security monitoring for Continuum MSP partners, the vendor said.
  • Technology vendor Honeywell added Consultare America LLC and Silver Touch Technologies to its roster of Guided Work Solutions resellers. A voice-directed productivity product, Guided Work Solutions software targets small and medium-sized distribution centers.
  • Sify Technologies Ltd., an information and communications technology provider based in Chennai, India, aims to bring its services to Europe through a partnership with ZSAH Managed Technology Services. The alliance provides a “broader consulting practice” to the United Kingdom market, according to Sify.
  • US Signal, a data center services provider based in Grand Rapids, Mich., added several features to its Zerto-based disaster recovery as a service offering. Those include self-management, enterprise license mobility, multi-cloud replication and stretch layer 2 failover.
  • Dizzion, an end user cloud provider based in Denver, introduced a desktop-as-a-service offering for VMware Cloud on AWS customers.
  • LaSalle Solutions, a division of Fifth Third Bank, said it has been upgraded to Elite Partner Level status in Riverbed’s channel partner program, Riverbed Rise.
  • FTI Consulting Inc., a business advisory firm, said its technology business segment has launched new services around its RelativityOne Data Migration offering. The services include migration planning, data migration and workspace migration.
  • Mimecast Ltd., an email and data security company, has appointed Kurt Mills as vice president of channel sales. He is responsible for the company’s North American channel sales strategy. In addition, Mimecast appointed Jon Goodwin as director of public sector.
  • Managed detection and response vendor Critical Start has hired Dwayne Myers as its vice president of channels and alliances. Myers joins the company from Palo Alto Networks, where he served as channel business manager, Central U.S. and Latin America, for cybersecurity solutions.

Market Share is a news roundup published every Friday.


Microsoft Office 365 now available from new South Africa cloud datacenters

As Microsoft strives to support the digital transformation of organizations and enterprises around the world, we continue to drive innovation and expand into new geographies to empower more customers with Office 365, the world’s leading cloud-based productivity solution, with more than 180 million commercial monthly active users. Today, we’re taking another step in our ongoing investment to help enable digital transformation and societal impact across Africa with the general availability of Office 365 services from our new cloud datacenters in South Africa.

Office 365, delivered from local datacenters in South Africa, helps our customers enable the modern workplace and empower their employees with real-time collaboration and cloud-powered intelligence while maintaining security, compliance, and in-country customer data residency. The addition of South Africa as a new geography for Office 365 increases the options for secure cloud productivity services with customer data residency to 16 geographies across the globe, with three additional geographies also announced.

In-country data residency for core customer data helps Office 365 customers meet regulatory requirements, which is particularly important and relevant in industries such as healthcare, financial services, and government—where organizations need to keep specific data in-country to comply with local requirements. Customer data residency provides additional assurances regarding data privacy and reliability for organizations and enterprises. Core customer data is stored only in their datacenter geography (Geo)—in this case, the cloud datacenters within South Africa.

Customers like Altron and the Gauteng Provincial Government have used Office 365 to transform their workplaces. This latest development will enable them—and other organizations and enterprises adopting Office 365—to ramp up their digital transformation journey.

“Altron is committed to improving our infrastructure and embracing a strategy to become a cloud-first company to better serve our customers and empower our employees through modern collaboration. We’ve noticed a tangible difference since making the move to Office 365.”
—Debra Marais, Lead, IT Shared Services at Altron

“Office 365 is driving our modernization journey of Government ICT infrastructure and services by allowing us to develop pioneering solutions at manageable costs and create overall improvements in operations management, all while improving transparency and accountability.”
—David Kramer, Deputy Director General, ICT at Gauteng Provincial Government

Microsoft recently became the first global provider to deliver cloud services from the African continent with the opening of our new cloud datacenter regions. Office 365 joins Azure to expand the intelligent cloud service available from Africa. Dynamics 365 and Power Platform, the next generation of intelligent business applications, are anticipated to be available in the fourth quarter of 2019.

By delivering the comprehensive Microsoft cloud—which includes Azure, Office 365, and Dynamics 365—from datacenters in a given geography, we offer scalable, available, and resilient cloud services to companies and organizations while meeting customer data residency, security, and compliance needs. We have deep expertise in protecting data and empowering customers around the globe to meet extensive security and privacy requirements, including offering the broadest set of compliance certifications and attestations in the industry.

The new cloud regions in South Africa are connected to Microsoft’s other regions via our global network, one of the largest and most innovative on the planet—spanning more than 100,000 miles (161,000 kilometers) of terrestrial fiber and subsea cable systems to deliver services to customers. Microsoft is bringing the global cloud closer to home for African organizations and citizens through our trans-Arabian paths between India and Europe, as well as our trans-Atlantic systems, including Marea, the highest capacity cable to ever cross the Atlantic.

We’re committed to accelerating digital transformation across the continent through numerous initiatives and also recently announced Microsoft’s first Africa Development Centre (ADC), with two initial sites in Nairobi, Kenya and Lagos, Nigeria. The ADC will serve as a premier center of engineering for Microsoft, where world-class African talent can create solutions for local and global impact. With our new cloud datacenter regions, the ADC, and programs like 4Afrika, we believe Africa is poised to develop locally and scale for global impact better than ever before.

Learn more about Office 365 and Microsoft in the Middle East and Africa.

Author: Microsoft News Center

AWS Summit widens net with services for containers, devs

NEW YORK — AWS pledges to maintain its torrid pace of product and services innovations and continue to expand the breadth of both to meet customer needs.

“You decide how to build software, not us,” said Werner Vogels, Amazon vice president and CTO, in a keynote at the AWS Summit NYC event. “So, we need to give you a really big toolbox so you can get the tools you need.”

But AWS, which holds a healthy lead over Microsoft and Google in the cloud market, also wants to serve as an automation engine for customers, Vogels added.

“I strongly believe that in the future … you will only write business logic,” he said. “Focus on building your application, drop it somewhere and we will make it secure and highly available for you.”

Parade of new AWS services continues

Vogels sprinkled a series of news announcements throughout his keynote, two of which centered on containers. First, Amazon CloudWatch Container Insights, a service that provides container-level monitoring, is now in preview for monitoring clusters in Amazon Elastic Container Service and AWS Fargate, in addition to Amazon EKS and Kubernetes. In addition, AWS for Fluent Bit, which serves as a centralized environment for container logging, is now generally available, he said.

Serverless compute also got some attention with the release of Amazon EventBridge, a serverless event bus to take in and process data across AWS’ own services and SaaS applications. AWS customers currently do this with a lot of custom code, so “the goal for us was to provide a much simpler programming model,” Vogels said. Initial SaaS partners for EventBridge include Zendesk, OneLogin and Symantec.
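
The “custom code” EventBridge replaces is typically glue that formats and routes events between services. A sketch of the event shape a producer would send with the PutEvents API (the source and detail fields here are hypothetical; the actual send requires boto3 and AWS credentials, so that call is shown commented out):

```python
import json

def build_event_entry(source, detail_type, detail, bus="default"):
    """Build one entry for the EventBridge PutEvents API; Detail must be
    a JSON string, and Source identifies the producing application."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),
        "EventBusName": bus,
    }

entry = build_event_entry("com.example.tickets", "TicketCreated",
                          {"ticketId": "42", "priority": "high"})
# To publish (needs boto3 and AWS credentials):
# import boto3
# boto3.client("events").put_events(Entries=[entry])
print(entry["DetailType"])
```

Rules on the bus then match on fields like Source and DetailType to fan events out to targets, which is the routing logic customers previously wrote by hand.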

Focus on building your application, drop it somewhere and we will make it secure and highly available for you.
Werner Vogels, CTO, AWS

AWS minds the past, with eye on the future

Most customers are moving away from the concept of a monolithic application, “but there are still lots of monoliths out there,” such as SAP ERP implementations that won’t go away anytime soon, Vogels said.

But IT shops with a cloud-first mindset focus on newer architectural patterns, such as microservices. AWS wants to serve both types of applications with a full range of instance types, containers and serverless functionality, Vogels said.

He cited customers such as McDonald’s, which has built a home-delivery system with Amazon Elastic Container Service. It can take up to 20,000 orders per second and is integrated with partners such as Uber Eats, Vogels said.

Vogels ceded the stage for a time to Steve Randich, executive vice president and CIO of the Financial Industry Regulatory Authority (FINRA), a nonprofit group that seeks to keep brokerage firms fair and honest.

FINRA moved wholesale to AWS and its systems now ingest up to 155 billion market events in a single day — double what it was three years ago. “When we hit these peaks, we don’t even know them operationally because the infrastructure is so elastic,” Randich said.

FINRA has designed the AWS-hosted apps to run across multiple availability zones. “Essentially, our disaster recovery is tested daily in this regard,” he said.

AWS’ ode to developers

Developers have long been a crucial component of AWS’ customer base, and the company has built out a string of tool sets aimed at a broad set of languages and integrated development environments (IDEs). These include AWS Cloud9, IntelliJ, PyCharm, Visual Studio and Visual Studio Code.

VS Code is Microsoft’s lightweight, open source code editor, which has seen strong initial uptake. AWS support for all the different languages in VS Code is now generally available, Vogels said to audience applause.

Additionally, AWS Cloud Development Kit (CDK) is now generally available with support for TypeScript and Python. AWS CDK makes it easier for developers to use high-level constructs to define cloud infrastructure in code, said Martin Beeby, AWS principal developer evangelist, in a demo.

AWS seeks to keep the cloud secure

Vogels also used part of his AWS Summit talk to reiterate AWS’ views on security, as he did at the recent AWS re:Inforce conference dedicated to cloud security.

“There is no line in the sand that says, ‘This is good-enough security,'” he said, citing newer techniques such as automated reasoning as key advancements.

Werner Vogels, CTO of AWS, on stage at the AWS Summit in New York.

Classic security precautions have become practically obsolete, he added. “If firewalls were the way to protect our systems, then we’d still have moats [around buildings],” Vogels said. Most attack patterns AWS sees are not brute-force front-door efforts, but rather spear-phishing and other techniques: “There’s always an idiot that clicks that link,” he said.

The full spectrum of IT, from operations to engineering to compliance, must be mindful of security, Vogels said. This is true within DevOps practices such as CI/CD at both the external and internal levels, he said. The former involves matters such as identity and access management and hardened servers, while the latter brings in techniques including artifact validation and static code analysis.

AWS Summit draws veteran customers and newcomers

The event at the Jacob K. Javits Convention Center drew thousands of attendees with a wide range of cloud experience, from FINRA to fledgling startups.

“The analytics are very interesting to me, and how I can translate that into a set of services for the clients I’m starting to work with,” said Donald O’Toole, owner of CeltTools LLC, a two-person startup based in Brooklyn. He retired from IBM in 2018 after 35 years.

AWS customer Timehop offers a mobile application oriented around “digital nostalgia,” which pulls together users’ photographs from various sources such as Facebook and Google Photos, said CTO Dmitry Traytel.

A few years ago, Timehop found itself in a place familiar to many startups: low on venture capital and with no viable monetization strategy. The company created its own advertising server on top of AWS, dubbed Nimbus, rather than rely on third-party products. Once a user session starts, the system conducts an auction for multiple prominent mobile ad networks, which results in the best possible price for its ad inventory.

“Nimbus let us pivot to a different category,” Traytel said.


AT&T revives Office@Hand UCaaS deal with RingCentral

In an unexpected move, AT&T will continue supporting and reselling RingCentral’s unified-communications-as-a-service platform as part of a new partnership targeting the large-enterprise market. 

AT&T had informed customers in January 2018 that, within a year, it would no longer support RingCentral’s Office@Hand, a cloud-based calling and messaging platform the carrier had been selling to small and midsize businesses.

But, this week, the two companies struck a new deal: AT&T will now start selling the UCaaS product to large enterprises, as well as SMBs. Scott Velting, an associate vice president with AT&T, attributed the about-face to “rapidly changing market dynamics and technological advances.”

RingCentral previously purchased AT&T’s customer licenses for Office@Hand in an agreement worth up to $26 million. The startup cautioned investors that revenue could be “significantly and adversely affected” if too many of those customers declined to transition from AT&T to RingCentral.

While some of those customers have already migrated, the businesses that have not yet switched will be able to remain on the Office@Hand platform through AT&T, RingCentral said.

Leading up to the earlier decision to end the partnership, Office@Hand sales had stagnated, with an “immaterial” number of new subscriptions sold in all of 2017, according to RingCentral. Sales through the AT&T channel accounted for 11% of RingCentral’s revenue that year — down from 14% in 2016.

With the new deal announced this week, the technology behind Office@Hand is the same, but RingCentral appears to have developed a more robust strategy for working with service-provider partners like AT&T, said Jeremy Duke, founder and chief analyst at Synergy Research Group, based in Reno, Nevada.

RingCentral is “currently growing at more than the average growth rate for the UCaaS market, and they want to continue that growth,” Duke said. “And they see the AT&T relationship as very important to continue building that momentum.”

RingCentral has more than doubled its revenue over the past three years, from $220 million in 2014 to $501 million last year, according to federal regulatory filings.

“I think this move helps RingCentral in its efforts to move upmarket. We continue to see RingCentral posting strong growth numbers; this expanded partnership should only help,” said Irwin Lazar, analyst at Nemertes Research, based in Mokena, Ill.

AT&T partners with several other UCaaS vendors, including Cisco and BroadSoft, which was recently acquired by Cisco. It also has its own cloud communications platform, AT&T Collaborate.

Vendors target large enterprises with UCaaS

While SMBs drove most of the initial growth of the UCaaS market, more and more enterprises are also now buying cloud calling plans.

UCaaS sales in the enterprise market are growing at twice the rate of sales in the SMB market, according to data released last month by Synergy. There are now nearly 8 million UCaaS seats worldwide, a twofold increase since late 2015, the firm said.

As of the second quarter of 2018, RingCentral remains the leader, with an 18% market share, trailed by Mitel at 16%, 8×8 at 13%, Cisco at 7%, and Vonage at 7%, according to Synergy. Mitel recently acquired UCaaS vendor ShoreTel.

Still, UCaaS platforms account for less than 10% of the total PBX market, according to Synergy.

“Our data shows that larger enterprises are still laggards when it comes to adopting UCaaS, but interest continues to grow,” Lazar said.

Infosec mental health support and awareness hits Black Hat 2018

LAS VEGAS — Rather than continue being reactive to social issues, Black Hat 2018 took steps to be more proactive in addressing and bringing awareness to the topic of infosec mental health.

The Black Hat conference set up a “self-care” lounge for attendees and included two complementary sessions: one covering the infosec mental health issues of depression and burnout, and another on how the cybersecurity community can be a source of aid for those suffering from post-traumatic stress disorder (PTSD).

During “Mental Health Hacks: Fighting Burnout, Depression and Suicide in the Hacker Community,” speakers Christian Dameff, emergency medicine physician and clinical informatics fellow at the University of California, San Diego, and Jay Radcliffe, cybersecurity researcher at Boston Scientific, shared personal stories of depression and burnout, as well as ways to identify symptoms in oneself or in co-workers.

Radcliffe noted that the widely acknowledged skills gap could be a contributing factor of infosec mental health issues. 

“With global staffing shortages in information security, we’re seeing departments that should have 10 people work with five. And that increases stress,” said Radcliffe, adding that infosec workers can even have a “hero complex” that leads to taking on more work than is healthy.

Radcliffe said workers and employers should keep an eye out for common symptoms, including, “feeling cynical, no satisfaction from accomplishments, dreading going to work and no work-life balance.” He suggested options such as speaking to counselors, therapists and psychologists, and also being mindful that workers take vacations and managers ensure time off is encouraged.

In the talk, “Demystifying PTSD in the Cybersecurity Environment,” Joe Slowik, adversary hunter at Dragos Inc., expanded on those topics and talked about how working in the infosec community helped him deal with PTSD from his military service in Afghanistan.

Slowik was careful to point out that PTSD should not be confused with burnout, depression or other infosec mental health issues because, as he wrote via email, certain “solutions or mitigations that may be appropriate for one, [may not be for] others.”

“For example, it is likely advisable to tell someone to step away from work for a bit to combat burnout — but in the case of PTSD where an individual may gain empowerment or agency from doing work they love/are successful at, such a step may in fact be counterproductive (it is for me),” Slowik wrote. “Similarly, for depression, treatment may simply be a combination of taking time away, medication, and some degree of therapy, whereas successful treatment of PTSD requires more intensive interventions and likely must be ongoing and continuing to be effective. Combining all of these into the same category means very real mistakes can be made, which at best leave a situation unresolved, and at worst exacerbate it.”

Slowik added that being in the infosec community was “empowering” because it allowed him “to do well at doing good.”

Information security work has allowed me to reclaim a sense of agency by having direct, measurable, recognizable impact in meaningful affairs.
Joe Slowik, adversary hunter, Dragos Inc.

“One of the more pernicious aspects of PTSD is a loss of agency deriving from a moment of helplessness when one’s life/integrity was placed in severe danger or risk — re-experiencing this event leaves one feeling worthless and helpless in the face of adversity,” Slowik wrote. “Information security work has allowed me to reclaim a sense of agency by having direct, measurable, recognizable impact in meaningful affairs, and at least for me has been instrumental in moving beyond past trauma.”

The talks showed two sides of the security community that don’t often get talked about: how the work can be both the cause of — and the remedy for — infosec mental health issues.

The attendance for the two talks was noticeably lower than for the more technical talks. It is unclear if this was due to poor marketing, unreasonable expectations for attendance, or the social stigmas surrounding mental health issues.

Slowik said he was grateful for those who attended and noted that the lower attendance could also be attributed to his talk being “the first scheduled talk the morning after Black Hat’s infamous parties.”

“Numbers are irrelevant, as conversations after the presentation made it clear this really reached members of the audience,” Slowik wrote. “My only hope is that this talk, along with other items from the Black Hat Community track, are made publicly available since so many good lessons and observations were made in this forum and these should be shared with the wider information security community.”

Ransomware outbreak threat calls for backup and DR strategy

The ransomware outbreak threat may be subsiding somewhat, but IT managers continue to shore up their defenses. Backup and disaster recovery is a key area of emphasis.

For much of 2017, the WannaCry and NotPetya ransomware outbreaks dominated cybercrime headlines. A new report from antimalware vendor Malwarebytes said ransomware detections last year increased 90% among businesses. But by the end of 2017, the “development of new ransomware families grew stale,” as cybercriminals shifted their focus to other forms of malware, such as banker Trojans that steal financial information, according to the report, “Cybercrime Tactics and Techniques: 2017 State of Malware.”

That said, organizations are looking to bolster their ransomware outbreak protections. Front-end measures often include antivirus software, firewalls and content scanners that can intercept email attachments that appear questionable.

IT departments, however, are also looking to strengthen back-end protections that can help them recover from ransomware attacks that lock up data via encryption. Here, the emphasis is on disaster recovery strategies that let a business restore its data from a backup copy. But even here, there are risks: IT managers must ensure the backups they make are actually usable and consider how long a data restore will take in the event of an emergency.

Another level of security

The city of Milpitas, Calif., already has a number of security measures in place to defend itself from a ransomware outbreak. On the front end, the municipal government employs email filtering, spam filtering and email attachment scanning. On the back end, the city uses BackupAssist, a Windows server backup and recovery software offering for SMBs. A remote disaster recovery site provides an additional line of defense.

The city earlier this month said it layered on another element to its backup and recovery defense. Mike Luu, information services director for the city of Milpitas, said the city activated CryptoSafeGuard, a BackupAssist feature the vendor recently added to its product.

CryptoSafeGuard, according to the company, prevents infected files from being backed up and also prevents backups from becoming encrypted. Some ransomware attacks have succeeded in encrypting both an organization’s production and backup data.
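BackupAssist hasn't documented how CryptoSafeGuard detects infected files, but a common heuristic for flagging possibly encrypted files before they enter a backup set is Shannon entropy: encrypted data looks nearly random, approaching 8 bits per byte, while ordinary documents score far lower. A minimal sketch (the 7.5 threshold is an illustrative assumption, not the vendor's):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag a file whose byte distribution is close to uniform random."""
    return shannon_entropy(data) > threshold

plain = b"hello world, this is an ordinary text file" * 100
print(looks_encrypted(plain))  # → False
```

Real products combine signals like this with canary files and backup-target write monitoring; entropy alone also flags legitimately compressed files, so it is a filter, not a verdict.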

“It’s just another method of trying to protect against [ransomware],” Luu said of CryptoSafeGuard.

Luu said switching on CryptoSafeGuard was a simple matter of ticking a box on BackupAssist’s user interface. “It came along for the ride at no additional cost,” he added.

BackupAssist offers CryptoSafeGuard as part of the vendor’s BackupCare subscription package. Troy Vertigan, digital sales and marketing manager at BackupAssist, said 30% of the vendor’s customers running the latest versions of BackupAssist have activated CryptoSafeGuard since it became available in September 2017.

When backups fail

Backup plans can fall through when ransomware hits. TenCate, a maker of composite materials and armor based in the Netherlands, found that out a few years ago during the CryptoLocker ransomware outbreak. Malware entered the company’s U.S. operations through a manufacturing facility and made its way to the file server, recalled Jayme Williams, senior systems engineer at TenCate. Data ended up encrypted from the shop floor to the front office.

When TenCate attempted a data restore from Linear Tape-Open standard tape backups, the backup software the company used wasn’t able to catalog the LTO tapes — a necessary step for recovering files. Williams said some data had been copied off to disk media, but that backup tier was also unreadable. He contacted a data recovery service, which was able to extract the data from the disks.

The company’s disk-based backups weren’t frequent, so some of the data had become stale. The recovered data, however, provided a framework for rebuilding what was lost. It took two weeks to make data accessible again; even then, it wasn’t an ideal data restore because of the age of the recovered data.

One of the key lessons learned from the CryptoLocker experience was that TenCate’s security was lacking for the ransomware infection to penetrate as far as it did, Williams noted. In response, company managers have signed off on tighter security.

The other lesson: Backup and disaster recovery are different things.

Backup is not resilience.
Jayme Williams, senior systems engineer, TenCate

“Backup is not resilience,” Williams said.

That realization put TenCate on the path toward new approaches. Initially, the company, which is a VMware shop, considered the virtualization vendor’s Site Recovery Manager. But the company’s IT services partner recommended a cloud-based backup and disaster recovery offering from Zerto. The vendor replicates data from an organization’s on-site data stores to the cloud.

One factor in favor of Zerto was simplicity. Zerto helped TenCate set up a proof of concept (POC) in about 30 minutes to demonstrate replication and failover. When Williams received permission to purchase the replication service, TenCate was able to take the POC into production without reinstallation.

When a second ransomware outbreak struck TenCate, the updated security and disaster recovery system thwarted the attack. The company’s virtual machines (VMs) were shielded by Zerto’s Virtual Protection Groups and journaling technique, which Williams described as “the TiVo of the VM.” The Zerto journal lets administrators roll back a VM to a point in time just before the ransomware hit — a matter of seconds, according to Williams.
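Zerto's actual journal replicates block-level writes rather than keeping full snapshots, but the "TiVo of the VM" idea — pick a timestamp, recover the state just before it — can be sketched with a simple timestamped journal (all names and the dict-based state are illustrative):

```python
import copy

class VMJournal:
    """Keep timestamped snapshots of VM state; roll back to any point in time."""

    def __init__(self):
        self._entries = []  # list of (timestamp, state) tuples

    def record(self, timestamp: float, state: dict):
        # Deep-copy so later mutations of the live state don't corrupt history.
        self._entries.append((timestamp, copy.deepcopy(state)))

    def rollback(self, point_in_time: float) -> dict:
        """Return the latest state recorded at or before point_in_time."""
        candidates = [(t, s) for t, s in self._entries if t <= point_in_time]
        if not candidates:
            raise ValueError("no journal entry at or before that point")
        return max(candidates, key=lambda e: e[0])[1]

journal = VMJournal()
journal.record(100.0, {"files": "clean"})
journal.record(200.0, {"files": "encrypted by ransomware"})
print(journal.rollback(150.0))  # → {'files': 'clean'}
```

The operational point is the same as in TenCate's recovery: the rollback target is chosen by time, not by hunting for the last known-good backup tape.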

Time is a critical consideration in devising a ransomware mitigation strategy, noted Michael Suby, Stratecast vice president of research at Frost & Sullivan.

An overly lengthy data restore process leaves organizations vulnerable to ransomware demands, he said. A besieged organization may capitulate and pay the ransom if a drawn-out recovery would mean a greater loss of revenue or would threaten lives, as in the case of an attack against a hospital.

“Companies can still be exploited if the time to revert to those backup files is excessive,” Suby explained. “It’s not just having backup files. We have to have them readily accessible.”

A series of new IoT botnets plague connected devices

Internet of things botnets continue to plague connected devices with two new botnets appearing this week.

The first of the IoT botnets causing trouble was discovered by security researchers at Bitdefender and is called Hide ‘N Seek, or HNS. HNS was first noticed on January 10, “faded away” for a few days and then reemerged on January 20 in a slightly different form, according to Bitdefender senior e-threat analyst Bogdan Botezatu. Since then, HNS — which started with only 12 compromised devices — had amassed over 32,000 bots worldwide as of January 26. Most of the affected devices are Korean-manufactured IP cameras.

“The HNS botnet communicates in a complex and decentralized manner and uses multiple anti-tampering techniques to prevent a third party from hijacking/poisoning it,” Botezatu explained in his analysis of HNS, also noting that the bot can perform device exploits similar to those done by the Reaper botnet. “The bot embeds a plurality of commands such as data exfiltration, code execution and interference with a device’s operation.”

Botezatu also explained that HNS works sort of like a worm in that it uses a randomly generated list of IP addresses to get potential targets. The list of targets can be updated in real time as the botnet grows or bots are lost or gained. Luckily, like other IoT botnets, the HNS “cannot achieve persistence” and a device reboot will remove it from the botnet.

“While IoT botnets have been around for years, mainly used for DDoS attacks, the discoveries made during the investigation of the Hide and Seek bot reveal greater levels of complexity and novel capabilities such as information theft — potentially suitable for espionage or extortion,” Botezatu said.

Unlike other recent IoT botnets, HNS is different from the infamous Mirai malware, and is instead similar to the Hajime botnet. Like Hajime, HNS has a “decentralized peer-to-peer architecture.”

The Masuta botnets

Two other new botnets on the scene do show similarities to Mirai, however.

Masuta and its PureMasuta variant were discovered by researchers at NewSky Security and appear to be the work of the Satori botnet’s creators. The Satori botnet targeted Huawei routers earlier this month, and the Masuta botnets now target home routers as well.

According to the research from NewSky Security, Masuta shares a similar attack method with Mirai and uses weak, known or default credentials to access the targeted devices. PureMasuta is a bit more sophisticated and exploits a network administration bug uncovered in 2015 in D-Link’s Home Network Administration Protocol, which relies on the Simple Object Access Protocol to manage device configuration.

“Protocol exploits are more desirable for threat actors as they usually have a wider scope,” Ankit Anubhav, principal researcher at NewSky Security, wrote in the analysis of the botnets. “A protocol can be implemented by various vendors/models and a bug in the protocol itself can get carried on to a wider range of devices.”

PureMasuta has been infecting devices since September 2017.

In other news

  • Kaspersky Lab filed a preliminary injunction as part of its appeal against the U.S. Department of Homeland Security’s ban on the use of the company’s products in government agencies. The ban was originally issued in September 2017 in response to concerns that the Moscow-based security company helped the Russian government gather data on the U.S. through its antivirus software and other products. The ban, Binding Operational Directive (BOD) 17-01, was reinforced in December 2017 in the National Defense Authorization Act, despite offers from Kaspersky to have the U.S. government investigate its products and operations. In response to the National Defense Authorization Act, Kaspersky Lab filed a lawsuit against the U.S. government saying that the ban was unconstitutional. As part of the lawsuit, the injunction would, for now, stop the government ban on BOD 17-01.
  • The PCI Security Standards Council (PCI SSC) published new security requirements for mobile point-of-sale systems. The requirements focus on software-based PIN entry on commercial off-the-shelf (COTS) mobile devices. Requirements already exist for hardware-based devices that accept PINs, so these standards expand on them. The so-called PCI Software-Based PIN Entry on COTS (SPoC) Standard introduces a “requirement for a back-end monitoring system for additional external security controls such as attestation (to ensure the security mechanisms are intact and operational), detection (to notify when anomalies are present) and response (controls to alert and take action) to address anomalies,” according to PCI SSC CTO Troy Leach. The standard consists of two documents: the Security Requirements for solution providers, including designers of applications that accept PINs; and the Test Requirements, which “create validation mechanisms for payment security laboratories to evaluate the security” of the PIN processing apps. The SPoC security requirements focus on five core principles, according to Leach:
    • isolation of the PIN from other account data;
    • ensuring the software security and integrity of the PIN entry application on the COTS device;
    • active monitoring of the service, to mitigate against potential threats to the payment environment within the phone or tablet;
    • Required Secure Card Reader for PIN (SCRP) to encrypt and maintain confidentiality of account data; and
    • transactions restricted to EMV contact and contactless.
  • Alphabet, best known for being Google’s parent company, launched a new cybersecurity company — Chronicle. Chronicle is an offshoot of the group X and will be a stand-alone company under Alphabet. Former Symantec COO Stephen Gillett will be the company’s CEO. Chronicle offers two services to enterprises: a security intelligence and analytics platform and VirusTotal, an online malware and virus scanner Google acquired in 2012. “We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find,” Gillett said in a blog post announcing the company launch. “We are building our intelligence and analytics platform to solve this problem.” The announcement did not provide many specifics, but the launch could pose a significant threat to cybersecurity vendors that do not have access to the same resources as a company with the same parent as Google.

Master the seven key DevOps engineer skills for 2018

This will be an exciting year in DevOps. Cloud-based technologies will continue to grow in 2018, as will the use of AI in day-to-day operations. We’re going to see a renewed focus on the role of hardware in both cloud and on-premises installations. And quantum computing will start to become part of commercial computing.

All of these trends will require developers and operations professionals to acquire new DevOps engineer skills to adapt to this evolving landscape. Below, you’ll find 2018’s technological trends and the skills DevOps pros will have to develop to be viable in the coming year.

Serverless computing is here, so get used to it

The big three of web service providers — Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform — now provide serverless computing environments: AWS has Lambda, Azure has Azure Functions and Google has Cloud Functions. These technologies were significant investments; they are not going away. In fact, the big three are promoting serverless computing as a first-order way to develop for the web, particularly around the internet of things (IoT).

And so moving into 2018, key DevOps engineer skills will include understanding the basic concepts of serverless computing in terms of architecture, version control, deployment and testing. There are still outstanding problems to be solved, particularly around real-world unit testing of serverless functions in a continuous integration and continuous delivery pipeline.
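One way to blunt the unit-testing problem is to keep handlers as plain functions that can be exercised locally before any deployment. A minimal Lambda-style sketch in Python — the event shape and response format loosely follow AWS's proxy-integration convention, but the handler itself and its greeting logic are hypothetical:

```python
import json

def handler(event, context=None):
    """A minimal Lambda-style handler: return a greeting for the given name."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Because the handler is a plain function, it can be unit-tested locally
# in a CI pipeline with no cloud account involved:
def test_handler():
    resp = handler({"name": "devops"})
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"])["message"] == "hello devops"

test_handler()
print("handler test passed")
```

Local tests like this cover the business logic; integration concerns such as IAM permissions, cold starts and event-source wiring still need to be verified against the real platform.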

Get with IoT, or IoT will get you

IoT is on pace to eat the internet. Back in 2014, Business Insider predicted IoT would become the internet’s predominant technology.

This year, we’ll see even more activity. IoT will have a growing impact in two significant areas: processing and security. In terms of processing, IoT devices will emit a lot of data, which needs to be processed, and the increased demand will put a burden on infrastructure. Understanding how to accommodate the increase in volume from IoT devices is going to be an important DevOps engineer skill in 2018.

In terms of security, new practices still need to be adopted. One type of consumer hazard is home invasion, in which a nefarious agent takes over household appliances. Imagine some bad tech turning off the heating system in a house during a Boston winter. After a few hours, all the pipes in the house burst. The damage will be significant. In the commercial enterprise, things can get much worse — think a nuclear reactor.

Given the risks at hand, DevOps personnel need to get a firm understanding of the intricacies of IoT. The technologies go beyond the standard practices of controlling a standard data center. The risks are real, and the consequences of not being well-informed are significant.

[Image: an IoT smart home. Security in your smart home — and all your IoT devices — will become even more essential as code-cracking quantum computers become more readily available.]

Get ready for the resurrection of hardware

The days of using any old type of hardware to run a cloud-based VM are coming to a close, particularly as more applications are used in life-and-death situations — driverless vehicles, for example. The most telling change is the growing attraction of GPU as the processor of choice for AI and machine learning computation. Hardware is indeed making a comeback.

Cloud providers are listening. Amazon lets you attach GPUs to cloud instances, as do Azure and Google Compute Engine. Along with this GPU rise, you are also going to see companies going back to “down to the metal” installations. There are providers out there, such as Packet.net, BareMetalCloud and Storm, that offer hourly rates on actual hardware.

As specialized big data processing becomes more a part of the everyday computing workload, alternatives to multicore commodity hardware will become essential. This hardware resurrection will have a definite impact on DevOps engineer skills and practices. DevOps personnel will need to know the basics of chip architecture — for example, how is a GPU different from a CPU? We’re going to have to refresh our understanding of network hardware and architecture.

Put learning about RPA firmly on your roadmap

Robotic process automation (RPA) is the practice of applying robotic technology to automate work within a given workflow. In other words, RPA is about teaching robots to do work with, or instead of, humans.

Over the last few years, RPA has become a standard discipline on the factory floor, and it’s getting more prominent in general IT. Short of a Luddite revolution, RPA is not going away. A quote in the Institute for Robotic Process Automation primer is quite telling:  “Though it is expected that automation software will replace up to 140 million full-time employees worldwide by the year 2025, many high-quality jobs will be created for those who are able to maintain and improve RPA software.”

As hard as it is to imagine today, teaching robots is going to become an essential DevOps skill. It makes sense in a way. We’ve been automating since Day One. Applying robotic technology to physical work in physical locations such as a data center is a natural extension of DevOps activity.

Prepare for the impact of quantum computing on your security infrastructure

Quantum computing is no longer a science-fiction fantasy. It’s here. IBM has a quantum computer available for public use via the cloud. D-Wave Systems is selling quantum computers commercially. They go for around $10 million each. Google and Lockheed Martin are already customers.

Quantum computing is no longer a science-fiction fantasy. It’s here.

The key benefit of quantum computing is speed. There are still problems out there that take classical computers — computers that use standard binary processors — billions of years to solve. Decoding encrypted data is one such problem. Such complex code breaking can be done by a quantum computer in a few hundred seconds.
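The scale of that speedup depends on the algorithm: Shor's algorithm breaks RSA-style public-key encryption outright, while Grover's search gives "only" a quadratic speedup against symmetric keys — effectively halving the key length. A back-of-the-envelope sketch of what that means for brute-force search:

```python
import math

def classical_ops(key_bits: int) -> float:
    """Brute-force trials needed classically: 2^n for an n-bit key."""
    return 2.0 ** key_bits

def grover_ops(key_bits: int) -> float:
    """Grover's search needs ~sqrt(2^n) = 2^(n/2) oracle queries."""
    return 2.0 ** (key_bits / 2)

# AES-128: 2^128 classical trials vs. ~2^64 Grover queries --
# a quadratic speedup, equivalent to halving the key length.
print(math.log2(classical_ops(128)))  # → 128.0
print(math.log2(grover_ops(128)))     # → 64.0
```

This is why the common mitigation advice for symmetric encryption is simply to double key sizes (e.g., AES-256), while public-key schemes need to be replaced with post-quantum alternatives.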

The impact of quantum computing on security practices is going to be profound. At the least, quantum computing is going to allow any text-based password to be deciphered in seconds. Also, secure access techniques, such as fingerprinting and retinal scan, will be subject to hacking. Quantum computing will allow malicious actors to perform highly developed, digital impersonation in cyberspace.

To paraphrase Alan Turing in The Imitation Game, “The only way to beat a machine is with another machine.”

It’ll be the same with quantum computing. DevOps security pros — whose primary concern is providing a state-of-the-art security infrastructure — will do well to start learning how to use quantum computing. Quantum computing will provide the defensive techniques required to ensure the safety of the digital enterprise as we move into the era of infinite computing.

Get good at extreme edge cases

In the old days, we’d need a human to look over reports from automated system agents to figure out how to address anomalies. Now, with the growth of AI and machine learning, technology can identify more anomalies. The more anomalies AI experiences, the smarter it gets. Thus, the number of anomalies — aka edge cases that require human attention — is going to diminish. AI will have it covered.

But the cases that do warrant human attention are going to be harder to resolve. And the type of human needed to address an edge case will have to be very smart and very specialized, to the point that only a few people on the planet will have the qualifications necessary to do the work.

In short, AI is going to continue to grow. But there will be situations in which human intelligence is required to address issues AI can’t. Resolving these edge cases will require a very deep understanding of a very precise knowledge set, coupled with highly developed analytical skills. If part of your job is troubleshooting, start developing expertise in a well-defined specialty to a level of understanding that only a few will have. For now, keep your day job. But understand that super-specialization and extreme analysis are going to be a DevOps skill trend in the future.

Relearn the 12 Principles of Agile Software

The Agile Manifesto, released in 2001, describes a way of making software that’s focused on getting useful, working code into the hands of users as fast as possible. Since the Manifesto’s release, the market has filled with tools that support the philosophy. There have been arguments at the process level, and there are a number of permutations of Agile among project managers. Still, the 12 principles listed in the Manifesto are as relevant today as when they first appeared.

Sometimes, we in DevOps get so bogged down in the details of our work that we lose sight of the essential thinking that gave rise to our vocation. Though not a DevOps skill exactly, reviewing the 12 Principles of Agile Software is a good investment of time, not only to refresh one’s sense of how DevOps came about, but also to provide an opportunity to recommit oneself to the essential thinking that makes DevOps an important part of the IT infrastructure.