Tag Archives: Services

How to Create and Manage Hot/Cold Tiered Storage

When I was working in Microsoft’s File Services team around 2010, one of the primary goals of the organization was to commoditize storage and make it more affordable to enterprises. Legacy storage vendors offered expensive products that often consumed a majority of the IT department’s budget, and they were slow to make improvements because customers were locked in. Since then, every release of Windows Server has included storage management features that were previously only available from storage vendors, such as deduplication, replication and mirroring. These features can be used to manage commodity storage arrays and disks, reducing costs and eliminating vendor lock-in. Windows Server now offers a much-requested feature: the ability to move files between different tiers of “hot” (fast) storage and “cold” (slow) storage.

Managing hot/cold storage is conceptually similar to a computer’s memory cache, but at an enterprise scale. Files which are frequently accessed are kept on the hot storage, such as faster SSDs, while files which are infrequently accessed are pushed to the cold storage, such as older or cheaper disks. These lower-priority files also take advantage of file compression techniques like data deduplication to maximize storage capacity and minimize cost. Identical or varying disk types can be used, because the storage is managed as a pool using Windows Server’s Storage Spaces, so you do not need to worry about managing individual drives. File placement is controlled by the Resilient File System (ReFS), which optimizes and rotates data between the “hot” and “cold” storage tiers in real time based on usage. However, tiered storage is only recommended for workloads whose data is not all regularly accessed. If you have permanently running VMs or you are actively using all the files on a given disk, there is little benefit in allocating some of the disk to cold storage. This blog post will review the key components required to deploy tiered storage in your datacenter.

Overview of Resilient File System (ReFS) with Storage Tiering

The Resilient File System was first introduced in Windows Server 2012 with support for limited scenarios, but it has been greatly enhanced through the Windows Server 2019 release. It was designed to be efficient, support multiple workloads, avoid corruption and maximize data availability. More specific to tiering, ReFS automatically divides the pool of storage into two tiers: one for high-speed performance and one for maximizing storage capacity. The performance tier receives all writes on the faster disks for better performance. If those new blocks of data are not frequently accessed, the files will gradually be moved to the capacity tier. Reads will usually happen from the capacity tier, but can also happen from the performance tier as needed.

Storage Spaces Direct and Mirror-Accelerated Parity

Storage Spaces Direct (S2D) is one of Microsoft’s enhancements designed to reduce costs by allowing servers with Direct Attached Storage (DAS) drives to support Windows Server Failover Clustering. Previously, highly-available file server clusters required some type of shared storage on a SAN or used an SMB file share, but S2D allows for small local clusters which can mirror the data between nodes. Check out Altaro’s blog on Storage Spaces Direct for in-depth coverage on this technology.

With Windows Server 2016 and 2019, S2D offers mirror-accelerated parity, which is used for tiered storage; it is generally recommended for backups and less frequently accessed files rather than for heavy production workloads such as VMs. In order to use tiered storage with ReFS, you will use mirror-accelerated parity. This provides decent storage capacity by using both mirroring and a parity drive to help prevent and recover from data loss. In the past, mirroring and parity would conflict and you would usually have to select one or the other. Mirror-accelerated parity works with ReFS by taking writes and mirroring them (hot storage), then using parity to optimize their storage on disk (cold storage). By switching between these storage optimization techniques, ReFS provides admins with the best of both worlds.

Creating Hot and Cold Tiered Storage

When configuring hot and cold storage, you define the ratio between the hot and cold tiers. For most workloads, Microsoft recommends allocating 20% to hot and 80% to cold. If you are running high-performance workloads, consider allocating more hot storage to support more writes. On the flip side, if you have a lot of archival files, then allocate more cold storage. Remember that with a storage pool you can combine multiple disk types under the same abstracted storage space. The following PowerShell cmdlets show you how to configure a 1,000 GB disk to use 20% (200 GB) for performance (hot storage) and 80% (800 GB) for capacity (cold storage).
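The exact commands vary by environment; this is a minimal sketch that assumes an S2D storage pool whose friendly name begins with “S2D”, the built-in Performance and Capacity tier templates, and a hypothetical volume name:

# Create a 1,000 GB mirror-accelerated parity volume:
# a 200 GB mirrored performance (hot) tier plus an 800 GB parity capacity (cold) tier.
# On a non-clustered server, use -FileSystem ReFS instead of CSVFS_ReFS.
New-Volume -FriendlyName "TieredVolume01" `
    -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 200GB, 800GB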

Managing Hot and Cold Tiered Storage

If you want to increase the performance of your disk, you can allocate a greater percentage of the disk to the performance (hot) tier. In the following example we use PowerShell cmdlets to create a 30:70 ratio between the tiers:
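Again as a sketch, assuming the hypothetical volume created above; the tier friendly names on your system may differ, so list them first with Get-StorageTier:

# Resize the tiers to 300 GB hot and 700 GB cold (30:70).
# Growing a tier requires free space in the pool, and shrinking a tier
# may not be supported on every configuration.
Get-StorageTier -FriendlyName "TieredVolume01-Performance" | Resize-StorageTier -Size 300GB
Get-StorageTier -FriendlyName "TieredVolume01-Capacity" | Resize-StorageTier -Size 700GB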

Unfortunately, this resizing only changes the sizes of the underlying tiers; it does not change the size of the partition or volume, so you will likely also want to extend those with the Resize-Partition cmdlet, as shown below.
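For example, if the tiered volume were mounted as drive E:, a sketch like the following would extend the partition into the newly resized space:

# Extend the partition to the maximum size the resized tiers support
$maxSize = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $maxSize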

Optimizing Hot and Cold Storage

Based on the types of workloads you are running, you may wish to further tune when data is moved between hot and cold storage, which is known as the “aggressiveness” of the rotation. By default, the hot storage will wait until 85% of its capacity is full before it begins to send data to the cold storage. If you have a lot of write traffic going to the hot storage, then you want to reduce this value so that performance-tier data gets pushed to the cold storage sooner. If you have fewer write requests and want to keep data in hot storage longer, then you can increase this value. Since this is an advanced configuration option, it must be configured via the registry on every node in the S2D cluster, and it also requires a restart. Here is a sample script to run on each node if you want to change the aggressiveness so that rotation begins when the performance tier reaches 70% capacity:
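This sketch uses the registry value documented for tuning mirror-accelerated parity rotation; verify the value name against current Microsoft documentation for your build:

# Rotate data to the capacity tier once the performance tier is 70% full
# (the default threshold is 85). A reboot is required for the change to take effect.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies" `
    -Name "DataDestageSsdFillRatioThreshold" `
    -Value 70 -Type DWord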

You can apply this setting cluster-wide by using the following cmdlet:
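Here is a sketch that wraps the same registry change in Invoke-Command so it runs on every node; it assumes PowerShell remoting is enabled across the cluster:

# Apply the 70% destage threshold on all nodes in the cluster
Get-ClusterNode | ForEach-Object {
    Invoke-Command -ComputerName $_.Name -ScriptBlock {
        Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies" `
            -Name "DataDestageSsdFillRatioThreshold" `
            -Value 70 -Type DWord
    }
}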

NOTE: If this is applied to an active cluster, make sure that you reboot one node at a time to maintain service availability.

Wrap-Up

Now you should be fully equipped with the knowledge to optimize your commodity storage using the latest Windows Server storage management features. You can pool your disks with Storage Spaces, use Storage Spaces Direct (S2D) to eliminate the need for a SAN, and use ReFS to optimize the performance and capacity of these drives. By understanding the tradeoffs between performance and capacity, your organization can significantly save on storage management and hardware costs. Windows Server has made it easy to centralize and optimize your storage so you can reallocate your budget to a new project – or to your wages!

What about you? Have you tried any of the features listed in the article? Have they worked well for you? Have they not worked well? Why or why not? Let us know in the comments section below!


Go to Original Article
Author: Symon Perriman

Nectar launches Customer Experience Assurance platform

Nectar Services Corp. recently launched Nectar Customer Experience Assurance, a customer experience testing and monitoring platform for contact center and interactive voice response teams, promising to eliminate the need for legacy network monitoring platforms.

Nectar said Customer Experience Assurance offers a range of capabilities, including auto-discovery, voice recognition and simulation, dynamic call automation and load testing. These features enable contact center DevOps teams to test and discover network issues in a timely manner and to save time when launching new platforms or making configuration changes.

Nectar’s Customer Experience Assurance also offers perpetual monitoring that performs testing at regular intervals to check platforms for service availability and configuration changes, the company said. This enables contact center management teams to generate alerts and carry out historical reporting based on factors affecting customer experience (CX) metrics, such as service availability, functionality and call quality.

Nectar CX Assurance includes the following features:

  • Auto discovery enables reverse-engineering of call flows, speeding up interactive voice response (IVR) testing and providing accurate and timely customer experience monitoring.
  • Real-time alerting notifies companies via email and/or text when issues are identified.
  • Voice automation provides text-to-speech and speech recognition that, in combination with call recording, enable a high level of quality control and monitoring.
  • Voice quality scoring identifies clicks and noises, artifacts, intermittent gaps and jitter due to packet loss in audio during playback.

Nectar said Customer Experience Assurance is the first product to apply its experience in unified communications (UC) monitoring, diagnostics and reporting to the contact center environment. It is built upon Nectar’s core products for network and endpoint operations in UC, and it provides cloud-based CX testing for enterprise contact center and IVR operations.

In the CX monitoring market, Nectar competes with Oracle, Clarabridge and Integrated Research, known as IR. Oracle CX Cloud Suite offers a full set of applications from marketing to sales, and commerce to service. Clarabridge’s product stresses AI technology that provides audio transcription of agent-customer interactions, along with sentiment, tone and voice analysis for customer service conversations. IR’s Prognosis for Contact Center covers the complete contact center ecosystem from Cisco and Avaya, along with the underlying UC systems, in one platform.

Go to Original Article

Amazon Quantum Ledger Database brings immutable transactions

The Amazon Web Services Quantum Ledger Database is now generally available.

The database provides a cryptographically secured ledger as a managed service. It can be used to store both structured and unstructured data, providing what Amazon refers to as an immutable transaction log.

The new database service was released on Sept. 10, 10 months after AWS introduced it as a preview technology.

The ability to provide a cryptographically and independently verifiable audit trail of immutable data has multiple benefits and use cases, said Gartner vice president and distinguished analyst Avivah Litan.

“This is useful for establishing a system of record and for satisfying various types of compliance requirements, such as regulatory compliance,” Litan said. “Gartner estimates that QLDB and other competitive offerings that will eventually emerge will gain at least 20% of permissioned blockchain market share over the next three years.”

A permissioned blockchain has a central authority in the system to help provide overall governance and control. Litan sees the Quantum Ledger Database as satisfying several key requirements in multi-company projects, which are typically complementary to existing database systems.

Among the requirements is that once data is written to the ledger, the data is immutable and cannot be deleted or updated. Another key requirement that QLDB satisfies is that it provides a cryptographically and independently verifiable audit trail.

“These features are not readily available using traditional legacy technologies and are core components to user interest in adopting blockchain and distributed ledger technology,” Litan said. “In sum, QLDB is optimal for use cases when there is a trusted authority recognized by all participants and centralization is not an issue.”

Diagram of how AWS Quantum Ledger Database works

Centralized ledger vs. decentralized blockchain

The basic promise of many blockchain-based systems is that they are decentralized, and each party stores a copy of the ledger. For a transaction to get stored in a decentralized and distributed ledger, multiple parties have to come to a consensus. In this way, blockchains achieve trust in a distributed and decentralized way.

“Customers who need a decentralized application can use Amazon Managed Blockchain today,” said Rahul Pathak, general manager of databases, analytics and blockchain at AWS. “However, there are customers who primarily need the immutable and verifiable components of a blockchain to ensure the integrity of their data is maintained.”

For customers who want to maintain control and act as the central trusted entity, just like any database application works today, a decentralized system with multiple entities is not the right fit for their needs, Pathak said.

“Amazon [Quantum Ledger Database] combines the data integrity capabilities of blockchain with the ease and simplicity of a centrally owned datastore, allowing a single entity to act as the central trusted authority,” Pathak said.

While QLDB includes the term “quantum” in its name, it’s not a reference to quantum computing.

“By quantum, we imply indivisible, discrete changes,” Pathak said. “In QLDB, all the transactions are recorded in blocks to a transparent journal where each block represents a discrete state change.”

How the Amazon Quantum Ledger Database works

The immutable nature of QLDB is a core element of the database’s design. Pathak explained that QLDB uses a cryptographic hash function to generate a secure output file of the data’s change history, known as a digest. The digest acts as a proof of the data’s change history, enabling customers to look back and validate the integrity of their data changes.
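Conceptually, the journal behaves like a hash chain: each block’s hash covers the previous block’s hash, so altering any historical block would invalidate every later digest. Here is a toy sketch of that idea in PowerShell (an illustration only, not QLDB’s actual implementation or data format):

# Chain three pretend journal blocks together with SHA-256
$sha = [System.Security.Cryptography.SHA256]::Create()
$prevHash = ""
foreach ($block in '{"tx":1}', '{"tx":2}', '{"tx":3}') {
    # Each digest covers the previous digest plus the new block, so
    # tampering with any earlier block changes the final digest
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($prevHash + $block)
    $prevHash = [System.BitConverter]::ToString($sha.ComputeHash($bytes)) -replace '-', ''
}
$prevHash   # the final digest attests to the entire history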

From a usage perspective, QLDB supports PartiQL, an open standard query language that provides SQL-compatible access to data. Pathak said that customers can build applications with the Amazon QLDB Driver for Java to write code that accesses and manipulates the ledger database.

“This is a Java driver that allows you to create sessions, execute PartiQL commands within the scope of a transaction, and retrieve results,” he said. 

Developed internally at AWS

The Quantum Ledger Database is based on technology that AWS has been using for years, according to Pathak. AWS has been using an internal version of Amazon QLDB to store configuration data for some of its most critical systems, and has benefitted from being able to view an immutable history of changes, he said.

“Over time, our customers have asked us for the same ledger capability, and a way to verify that the integrity of their data is intact,” he said. “So, we built Amazon QLDB to be immutable and cryptographically verifiable.”

Go to Original Article

Microsoft expands its automotive partner ecosystem to power the future of mobility – The Official Microsoft Blog

Technology can help automotive companies transform into smart mobility services providers

Karl Benz and Henry Ford revolutionized transportation with the initial development and mass production of the automobile. Now, more than a century later, the automotive industry is poised to transform transportation again, with a push to develop connected, personalized and autonomous driving experiences, electric vehicles and new mobility business models from ride-sharing to ride-hailing and multimodal, smart transportation concepts.

This industry is expected to see significant growth, becoming a $6.6T industry by 2030, with disruptive business models accounting for 25 percent of all revenues, according to consulting firm McKinsey & Company. From shared vehicle services to fully electric transportation, manufacturers are developing new products and services to enable large fleets offering mobility-as-a-service, which will increasingly replace individual car ownership. This involves modernizing the in-vehicle experience with productivity, entertainment and personal assistants that are safe and secure, follow users across different transport modes, and add value for businesses and consumers alike.

This transformation requires a data-driven mindset. The automotive sector generates vast amounts of data. However, companies aren’t yet fully set up to turn it into relevant insights. Future success depends on the ability to identify and capture digital signals and evolve how the business approaches innovation. Through what we call a digital feedback loop, the entire enterprise can be connected with relevant data, whether it pertains to relationship management with customers and partners, engagement with employees, core product creation or enterprise operations, to drive the continuous improvement in products and services with which mobility companies must differentiate themselves from their competition.

We support the industry in unlocking this enormous potential by providing intelligent cloud, edge, IoT and AI services and by helping automotive companies build and extend their own digital capabilities.

To that end, this year, for the first time, Microsoft is joining the Frankfurt Motor Show (IAA) and showcasing our approach to working with the automotive industry. We want to empower automotive organizations of all sizes to transform into smart mobility services providers.

Our automotive strategy is shaped by three key principles:

  1. We partner across the industry. We are not in the business of making vehicles or delivering end-customer mobility-as-a-service offerings.
  2. We believe data should be owned by our customers, as insights from data will become the new drivers of revenue for the auto industry. We do not monetize our customers’ data.
  3. We support automotive companies as they enhance and extend their unique brand experiences to expand their relationships with their customers.

We are focusing our customer engagements, along with our extensive global partner network, to support customers’ success in the following five areas: connected vehicle solutions; autonomous driving development; smart mobility solutions; connected marketing, sales and service; and intelligent manufacturing and supply chain.

Today, we are sharing updates about our approach and expansions to our partner ecosystem across these focus areas:

  1. Empower connected vehicle solutions

The core of our connected vehicle efforts is the Microsoft Connected Vehicle Platform (MCVP). It combines advanced cloud and edge computing services with a strong partner network so automotive companies can build connected driving solutions that span from in-vehicle experiences and autonomous driving to prediction services and connectivity. In addition to our partnerships with Volkswagen and Renault-Nissan-Mitsubishi Alliance, new partners are using MCVP to do more:

  • LG Electronics’ webOS Auto platform offers an in-vehicle, container-capable OS that brings the third-party application ecosystem created for premium TVs to in-vehicle experiences. webOS Auto supports the container-based runtime environment of MCVP and can be an important part of modern experiences in the vehicle.
  • Faurecia is leveraging MCVP to create disruptive, connected and personalized services inside the Cockpit of the Future to reinvent the on-board experience for all occupants.
  • Cubic Telecom is a leading connectivity management software provider to the automotive and IoT industries globally. They are one of the first partners to bring seamless connectivity as a core service offering to MCVP for a global market. The deep integration with MCVP allows for a single data lake and an integrated services monitoring path.

Meet more partners in our MCVP blog.

Our customers are also looking to provide conversational assistants tailored to their brand and customer needs, and make them available across multiple devices and apps. The Microsoft Azure Virtual Assistant Solution Accelerator simplifies the creation of these assistants.

  2. Accelerate autonomous driving function development

We empower car makers, suppliers and mobility services providers to accelerate their delivery of autonomous driving solutions that provide safe, comfortable and personalized driving experiences, with a comprehensive set of cloud, edge, IoT and AI services and a partner-led open ecosystem that enables collaborative development across companies. We support companies of all sizes, from large enterprises such as Audi, which is leveraging Microsoft Azure to create simulations using these large volumes of data, to small and medium-sized businesses and start-ups.

Today, we are announcing Microsoft for Startups: Autonomous Driving, a program to accelerate the growth of start-ups working on autonomous driving and help them seize new business opportunities in areas such as delivery, ride-sharing and long-haul transit. Learn more about our collaboration with start-ups like Linker Networks and Udelv in our start-up blog.

This year in the Microsoft booth at IAA, Bosch, FEV, Intempora and Applied Intuition are showcasing their autonomous driving solutions.

  • FEV is overcoming the central challenge of validating automated driving functions with a data management and assessment system developed in-house, which uses Microsoft Azure.
  • Intempora has recently unveiled IVS, the Intempora Validation Suite, a new software toolchain for the testing, training, benchmarking and validation of ADAS (Advanced Driver Assistance Systems) and HAD (Highly Automated Driving) algorithms.
  • Applied Intuition is equipping engineering and product development teams with software that makes it faster, safer, and easier to bring autonomy to market.
  3. Enable creation of smart mobility solutions

Intelligent mapping and navigation services are critical to building smart mobility solutions. This is why Microsoft is partnering with companies like TomTom and Moovit.

  • TomTom is integrating their navigation intelligence services such as HD Maps and Traffic as containerized services for use in MCVP so that other in-vehicle services, including autonomous driving, can take advantage of the additional location context.
  • TomTom and Moovit are also partnering with Microsoft for a comprehensive multi-modal trip planner leveraging Azure Maps.
  • The urban mobility app Moovit, using Azure Maps, also helps people with disabilities ride transit with confidence. This project supports Microsoft’s aim to make our latest technology accessible to everyone and to foster inclusion and the use of our technology for good, so that every person on the planet can benefit from technological innovations.
  4. Empower connected marketing, sales and services solutions

With Microsoft Business Applications, our automotive partners, suppliers, and retailers can develop new customer insights and create omnichannel customer experiences. With the Microsoft Automotive Accelerator, auto companies can schedule appointments and automotive services, facilitated through proactive communications.

At IAA, we’re excited to have several partners onsite, including Annata, Adobe and Daimler:

  • Annata is leveraging our Automotive Accelerator to help automotive and equipment companies meet business challenges while taking advantage of new opportunities in the market.
  • Adobe and Microsoft’s strategic partnership and integrations allow an end-to-end customer experience management solution for experience creation, marketing, advertising, analytics, and commerce.
  • Daimler launched eXtollo, the company’s new cloud platform for big data and advanced analytics. The platform uses Azure Key Vault, a service that safeguards encryption keys and secrets, including certificates, connection strings and passwords.
  5. Provide services to build an intelligent supply chain

Driving end-to-end digital transformation requires an integrated digital supply chain, from the factory and shop floor to end-customer delivery. Microsoft works with Icertis, BMW and others to build intelligent supply chains:

  • Icertis Contract Management natively runs on Microsoft Azure and seamlessly integrates with Office 365, Teams and Dynamics 365 so customers can extend the benefits from their Microsoft technology investments.
  • BMW and Microsoft continue to develop the Open Manufacturing Platform to enable industrial manufacturers to work together to break down data silos and overcome the challenges of complex, proprietary systems that slow down production optimization.

We are looking forward to meeting you at our Microsoft booth (Hall 5, C21) or at one of our IAA sessions. On your way to Frankfurt explore our Microsoft Connected Vehicle Platform microsite.

Go to Original Article
Author: Microsoft News Center

Supporting modern technology policy for the financial services industry – guidelines by the European Banking Authority | Transform

The financial services community has unprecedented opportunity ahead. With new technologies like cloud, AI and blockchain, firms are creating new customer experiences, managing risk more effectively, combating financial crime, and meeting critical operational objectives. Banks, insurers and other services providers are choosing digital innovation to address these opportunities at a time when competition is increasing from every angle – from traditional and non-traditional players alike.

At the same time, our experience is that lack of clarity in regulation can hinder adoption of these exciting technologies, as regulatory compliance remains fundamental to financial institutions using technology they trust.  Indeed, the common question I get from customers is: Will regulators let me use your technology, and have you built in the capabilities to help me meet my compliance obligations?

Dave Dadoun, assistant general counsel for Microsoft.

With this in mind, we applaud the European Banking Authority’s (EBA) revised Guidelines on outsourcing arrangements which, in part, address the use of cloud computing. For several years now we have shared perspectives with regulators on how regulation can be modernized to address cloud computing without diminishing the security, privacy, transparency and compliance safeguards necessary in a native cloud or hybrid-cloud world. In fact, cloud computing can afford financial institutions greater risk assurance – particularly on key things like managing data, securing data, addressing cyber threats and maintaining resilience.

At the core of the revised guidelines are a set of flexible principles addressing cloud in financial services. Indeed, the EBA has been clear these “guidelines are subject to the principle of proportionality,” and should be “applied in a manner that is appropriate, taking into account, in particular, the institution’s or payment institution’s size … and the nature, scope and complexity of its activities.” In addition, the guidelines set out to harmonize approaches across jurisdictions, a big step forward for financial institutions to have predictability and consistency among regulators in Europe. We think the EBA took this smart move to support leading-edge innovation and responsible adoption, and prepare for more advanced technology like machine learning and AI going forward.

Given these guidelines reflect a modernized approach that transcends Europe, we have updated our global Financial Services Amendment for customers to reflect these key changes. We have also created a regulatory mapping document which shows how our cloud services and underlying contractual commitments map to these requirements in an EU Checklist. The EU Checklist is accessible on the Microsoft Service Trust Portal. In essence, Europe offers the benchmark in establishing rules to permit use of cloud for financial services and we are proud to align to such requirements.

Because this is such an important milestone for the financial sector, we wanted to share our point-of-view on a few key aspects of the guidelines, which may help firms accelerate technology transformation with the Microsoft cloud going forward:

  • Auditability: As cloud has become more prevalent, we think it is natural to extend audit rights to cloud vendors in circumstances that warrant it. We also think that audits are not a one-size-fits-all approach but adaptable based on use cases, particularly whether it involves running core banking systems in the cloud. Microsoft has provided innovations to help supervise and audit hyper-scale cloud.
  • Data localization: We are pleased there are no data localization requirements in the EBA guidance. Rather, customers must assess the legal, security and other risks where data is stored, as opposed to mandating data be stored strictly in Europe. We help customers manage and assess such risk by providing:
    • Contractual commitments to store data at rest in a specified region (including Europe).
    • Transparency where data is stored.
    • Full commitments to meet key privacy requirements, like the General Data Protection Regulation (GDPR).
    • Flow-through of such commitments to our subcontractors.
  • Subcontractors: The guidelines address subcontractors, particularly those that provide “critical or important” functions. Management, governance and oversight of Microsoft’s subcontractors is core to what we do. Among other things:
    • Microsoft’s subcontractors are subject to a vetting process and must follow the same privacy and governance controls we ourselves implement to protect customer data.
    • We provide transparency about subcontractors who may have access to customer data and provide 180 days’ notification of any new subcontractors as well.
    • We provide customers termination rights should they conclude a subcontractor presents a material increase in risk to a critical or important function of their operations.
  • Core platforms: We welcome the EBA’s position providing clarity that core platforms may run in the cloud. What matters is governance, documenting protocols, the security and resiliency of such systems, and having appropriate oversight (and audit rights), and commitments to terminate an agreement, if and when that becomes necessary. These are all capabilities Microsoft offers to its customers and we now see movement among leading banks to put core systems into our cloud because of the benefits we provide.
  • Business Continuity and Exit Planning: Institutions must have business continuity plans and test them periodically for critical or important functions. Microsoft has supported our customers in meeting this requirement, including by providing a Modern Cloud Risk Assessment toolkit and, in the Service Trust Portal, documentation on our service resilience architecture, our Enterprise Business Continuity Management (EBCM) team, and a quarterly report detailing results from our recent EBCM testing. In addition, we have supported our customers in preparing exit planning documentation, and we work with industry bodies like the European Banking Federation toward further industry guidance for these new EBA requirements.
  • Concentration risk: The EBA addresses the need to assess whether concentration risk may exist due to potential systemic failures in the use of cloud services (and other legacy infrastructure). However, this is balanced with understanding the risks of a single point of failure and weighing those risks against the trade-offs of existing legacy systems. In short, financial institutions should assess the resiliency and safeguards provided with our hyper-scale cloud services, which can offer a more robust approach than systems in place today. When making those assessments, financial institutions may decide to lean in more with cloud as they transform their businesses going forward.

The EBA framework is a great step forward to help modernize regulation and take advantage of cloud computing. We look forward to participating in ongoing industry discussion, such as new guidance under consideration by the European Insurance and Occupational Pensions Authority concerning use of cloud services, as well as assisting other regions and countries in their journey to creating more modern policy that both supports innovation and protects the integrity of critical global infrastructure.

For more information on Microsoft in the financial services industry, please go here.

Top photo courtesy of the European Banking Authority.

Go to Original Article
Author: Microsoft News Center

Managed services companies remain hot M&A ticket

Managed services companies continue to prove popular targets for investment, with more merger and acquisition deals surfacing this week.

Those transactions included private equity firm Lightview Capital making a strategic investment in Buchanan Technologies; Siris, a private equity firm, agreeing to acquire TPx Communications; and IT Solutions Consulting Inc. buying SecurElement Infrastructure Solutions.

Those deals follow private equity firm BC Partners’ agreement last week to acquire Presidio, an IT solutions provider with headquarters in New York. That transaction, valued at $2.1 billion, is expected to close in the fourth quarter of 2019.

More than 30 transactions involving managed service providers (MSPs) and IT service firms have closed thus far in 2019. This year’s deals mark a continuation of the high level of merger and acquisition (M&A) activity that characterized the MSP market in 2018. Economic uncertainty may yet dampen the enthusiasm for acquisitions, but recession concerns don’t seem to be having an immediate impact.

Seth Collins, managing director at Martinwolf, an M&A advisory firm based in Scottsdale, Ariz., said trade policies and recession talk have brought some skepticism to the market. That said, the MSP market hasn’t lost any steam, according to Collins.

“We haven’t seen a slowdown in activity,” he said. The LMM Group at Martinwolf represented Buchanan Technologies in the Lightview Capital transaction.

Collins said the macroeconomic environment isn’t affecting transaction multiples or valuations. “Valuations aren’t driven by uncertainty; they’re driven by the quality of the asset,” he noted.

Finding the right partner

Buchanan Technologies is based in Grapevine, Texas, and operates a Canadian headquarters in Mississauga, Ont. The company’s more than 500 consultants, engineers and architects provide cloud services, managed services and digital transformation, among other offerings.

A spokesman for Lightview Capital said Buchanan Technologies manages on-premises environments, private clouds and public cloud offerings, such as AWS, IBM Cloud and Microsoft Azure. The company focuses on the retail, manufacturing, education, and healthcare and life sciences verticals.

Collins said Buchanan Technologies founder James Buchanan built a solid MSP over the course of 30 years and had gotten to the point where he would consider a financial partner able to take the company to the next level.

“As it turned out, Lightview was that partner,” Collins added, noting the private equity firm’s experience with other MSPs, such as NexusTek.

The Siris-TPx deal, meanwhile, also involves a private equity investor and a long-established services provider. TPx, a 21-year-old MSP based in Los Angeles, provides managed security, managed WAN, unified communications and contact center offerings. The companies said the deal will provide the resources TPx needs to “continue the rapid growth” it is encountering in unified communications as a service, contact center as a service and managed services.

Siris has agreed to purchase TPx from its investors, which include Investcorp and Clarity.

“Investcorp and Clarity have been invested with TPx for more than 15 years, and they were ready to monetize their investment,” a spokeswoman for TPx said.

IT Solutions Consulting’s acquisition of SecurElement Infrastructure Solutions brings together two MSPs in the greater Philadelphia area.

The companies will pool their resources in areas such as security. IT Solutions offers network and data security through its ITSecure+ offering, which includes antivirus, email filtering, advanced threat protection, encryption and dark web monitoring. A spokeswoman for IT Solutions said SecurElement’s security strategy aligns with IT Solutions’ approach and also provides “expertise in a different stack of security tools.”

The combined company will also focus on private cloud, hybrid cloud and public cloud services, with a particular emphasis on Office 365, the spokeswoman said.

IT Solutions aims to continue its expansion plans in the Philadelphia area and mid-Atlantic regions through hiring, new office openings and acquisitions.

“We have an internal sales force that will continue our organic growth efforts, and our plan is to continue our acquisition strategy of one to two transactions per year,” she said.

Managed services companies continue to consolidate in an active M&A market.

VMware arms cloud partners with new tools

Ahead of the VMworld 2019 conference, VMware has unveiled a series of updates for its cloud provider partners.

The VMware Cloud Provider Platform now features new tools to enhance the delivery of hybrid cloud offerings and differentiated cloud services, the vendor said. Additionally, VMware said it is enabling cloud providers to target the developer community with their services.

“Customers are looking for best-of-breed cloud that addresses their specific application requirements. … In this world, where there are multiple types of clouds, customers are looking to accelerate the deployment of the applications, and, when they are looking at cloud, what they are looking for is flexibility — flexibility so that they can choose a cloud that best fits their workload requirements. In many ways, the clouds have to adapt to the application requirements,” said Rajeev Bhardwaj, vice president of products for the cloud provider software business unit at VMware.

Highlights of the VMware updates include the following:

  • The latest version of the vendor’s services delivery platform, VMware vCloud Director 10, now provides a centralized view for hosted private and multi-tenant clouds. Partners can also tap a new “intelligent workload placement” capability for placing “workloads on the infrastructure that best meets the workload requirements,” Bhardwaj said.
  • To help partners differentiate their services, VMware introduced a disaster-recovery-as-a-service program for delivering DRaaS using vCloud Availability; an object storage extension for vCloud Director to deliver S3-compliant object storage services; and a backup certification to certify backup vendors in vCloud Director-based multi-tenant environments, VMware said. Cohesity, Commvault, Dell EMC, Rubrik and Veeam have completed the backup certification.
  • Cloud provider partners can offer containers as a service via VMware Enterprise PKS, a container orchestration product. The update enables “our cloud providers to move up the stack. So, instead of offering just IaaS … they can start targeting new workloads,” Bhardwaj said. VMware will integrate the Cloud Provider Platform with Bitnami, which develops a catalog of apps and development stacks that can be rapidly deployed, he said. The Bitnami integration can be combined with Enterprise PKS to support developer and DevOps customers, attracting workloads such as test/dev environments onto clouds, according to VMware.

Bhardwaj noted that the VMware Cloud Provider Program has close to 4,300 partners today. Those partners span more than 120 countries and collectively support more than 10 million workloads. VMware’s Cloud Verified partners, which offer VMware software-defined data center and value-added services, have grown to more than 60 globally, VMware noted.

Managed service providers are a growing segment within the VMware Cloud Provider Program (VCPP), Bhardwaj added.

“As the market is shifting more and more toward SaaS and … subscription services, what we are seeing is more and more different types of partners” join VCPP, he said.

Partner businesses include solution providers, systems integrators and strategic outsourcers. They typically don’t build their own clouds, but “want to take cloud services from VMware as a service and become managed service providers,” he said.

Other news

  • Rancher Labs, an enterprise container management vendor, rolled out its Platinum Partner Program. Targeting partners with Kubernetes expertise, the program provides lead and opportunity sharing programs, joint marketing funds and options for co-branded content, the company said. Partners must meet a series of training requirements to qualify for the program.
  • Quantum Corp., a storage and backup vendor based in San Jose, Calif., updated its Alliance Partner Program with a new deal registration application, an expanded online training initiative and a redesigned partner portal. The deal registration component, based on Vartopia’s deal registration offering, provides a dashboard to track sales activity, the deal funnel and wins, according to Quantum. The online training for sales reps and engineers is organized by vertical market, opportunities and assets. The company also offers new options for in-person training.
  • Quisitive Technology Solutions Inc., a Microsoft solutions provider based in Toronto, launched a Smart Start Workshop for Microsoft Teams.
  • MSP software vendor Continuum cut the ribbon on a new security operations center (SOC). Located in Pittsburgh, the SOC will bolster the availability of cybersecurity talent, threat detection and response, and security monitoring for Continuum MSP partners, the vendor said.
  • Technology vendor Honeywell added Consultare America LLC and Silver Touch Technologies to its roster of Guided Work Solutions resellers. A voice-directed productivity product, Guided Work Solutions software targets small and medium-sized distribution centers.
  • Sify Technologies Ltd., an information and communications technology provider based in Chennai, India, aims to bring its services to Europe through a partnership with ZSAH Managed Technology Services. The alliance provides a “broader consulting practice” to the United Kingdom market, according to Sify.
  • US Signal, a data center services provider based in Grand Rapids, Mich., added several features to its Zerto-based disaster recovery as a service offering. Those include self-management, enterprise license mobility, multi-cloud replication and stretch layer 2 failover.
  • Dizzion, an end user cloud provider based in Denver, introduced a desktop-as-a-service offering for VMware Cloud on AWS customers.
  • LaSalle Solutions, a division of Fifth Third Bank, said it has been upgraded to Elite Partner Level status in Riverbed’s channel partner program, Riverbed Rise.
  • FTI Consulting Inc., a business advisory firm, said its technology business segment has launched new services around its RelativityOne Data Migration offering. The services include migration planning, data migration and workspace migration.
  • Mimecast Ltd., an email and data security company, has appointed Kurt Mills as vice president of channel sales. He is responsible for the company’s North American channel sales strategy. In addition, Mimecast appointed Jon Goodwin as director of public sector.
  • Managed detection and response vendor Critical Start has hired Dwayne Myers as its vice president of channels and alliances. Myers joins the company from Palo Alto Networks, where he served as channel business manager, Central U.S. and Latin America, for cybersecurity solutions.

Market Share is a news roundup published every Friday.

Go to Original Article

No one likes waiting on the phone for a GP appointment. So why do we still do it?

The team behind the services are experts in healthcare, as they also run Patient.Info, one of the most popular medical websites in the UK. More than 100 million people logged on to the site in 2018 to read articles about healthcare, check symptoms and learn to live a healthier life, and more than 60% of GPs in England have access to it.

They also produce a newsletter that’s sent to 750,000 subscribers, as well as around 2,000 leaflets on health conditions and 850 on medicines.

People can access Patient.Info 24 hours a day, seven days a week. It’s the same for Patient Access, but web traffic spikes every morning when people want to book appointments to see their GP. To handle that demand, Patient Access runs on Microsoft’s Azure cloud platform. As well as being reliable and stable, all patient data is protected by a high level of security – Microsoft employs more than 3,500 dedicated cybersecurity professionals to help protect, detect and respond to threats, while segregated networks and integrated security controls add to the peace of mind.

“About 62% of GP practices use Patient Access,” says Sarah Jarvis MBE, the Clinical Director behind the service. “They’re using it to manage their services, manage appointments, take in repeat medications, consolidate a patient’s personal health record and even conduct video consultations.

“Just imagine your GP being able to conduct video consultations. If you’re aged 20 to 39 you might not want or need to have a relationship with a GP because you don’t need that continuity of care.

“But imagine you are elderly and housebound, and a district nurse visits you. They phone your GP and say: ‘Could you come and visit this patient’, but the GP is snowed under and can’t get there for a couple of hours. The district nurse is also very busy and must visit someone else.

“Now, with Patient Access, a Duty Doctor can look at someone’s medical record and do a video consultation in five minutes. If the patient needs to be referred, the GP can do it there and then from inside the system. The possibilities are endless, and older people, especially, have so much to gain from this.”

Go to Original Article
Author: Microsoft News Center

Automated transcription services for adaptive applications

Automated transcription services have a variety of applications. Enterprises frequently use them to transcribe meetings, and call centers use them to transcribe phone calls into text to more easily analyze the substance of each call.

The services are widely used to aid the deaf by automatically providing subtitles to videos and television shows, as well as in call centers that enable the deaf to communicate with each other by transcribing each person’s speech.

VTCSecure and Google

VTCSecure, a startup founded several years ago and based in Clearwater, Fla., uses Google Cloud’s Speech-to-Text services to power a transcription platform that is used by businesses, non-profits and municipalities around the world to aid the deaf and hard of hearing.

The platform offers an array of capabilities, including video services that connect users to a real-time sign-language interpreter, and deaf-to-deaf call centers. The call centers, enabling users to connect via video, voice or real-time-text, build on Google Cloud’s Speech-to-Text technology to provide users with automatic transcriptions.

Google Cloud has long sold Speech-to-Text and Text-to-Speech services, which provide developers with the data and framework to create their own transcription or voice applications. For VTCSecure, the services, powered in part by speech technologies developed by parent company Alphabet Inc.’s DeepMind division, were easy to set up and adapt.

“It was one of the best processes,” said Peter Hayes, CEO of VTCSecure. He added that his company has been happy with what it considers a high level of support from Google.

Speech-to-text

Hayes said Google provides technologies, as well as development support, for VTCSecure and for his newest company, TranslateLive.

Hayes also runs the platform on Google Cloud, after doing a demo for the FTC that he said lagged on a rival cloud network.

Google Cloud’s Speech-to-Text and Text-to-Speech technology, as well as the translation technologies used for TranslateLive, constantly receive updates from Google, Hayes said.

Startup Verbit provides automated transcription services that it built in-house. While only two years old, the startup considers itself a competitor to Google Cloud’s transcription services, even releasing a blog post last year outlining how its automated transcription services can surpass Google’s.

Automatic transcription services from companies like Verbit are used by the deaf and hard of hearing

Transcription startup

Verbit, unlike Google, adds humans to the transcription loop, explained Tom Livne, co-founder and CEO of the Israel-based startup. It relies on its home-grown models for an initial transcription, and then passes those drafts off to remote human transcribers who fine-tune them, reviewing and making edits.

The combined process produces high accuracy, Livne said.

A lawyer, Livne initially started Verbit to specifically sell to law firms. However, the vendor moved quickly into the education space.

“We want to create an equal opportunity for students with disabilities,” Livne said. Technology, he noted, has long been able to aid those with disabilities.

George Mason University, a public university in Fairfax, Va., relies on Verbit to automatically transcribe videos and online lectures.

“We address the technology needs of students with disabilities here on campus,” said Korey Singleton, assistive technology initiative manager at George Mason.

After trying out other vendors, the school settled on Verbit largely because of its competitive pricing, Singleton said. As most of its captioning and transcription comes from the development of online courses, the school doesn’t require a quick turnaround, Singleton said. So, Verbit was able to offer a cheaper price.

“We needed to find a vendor that could do everything we needed to do and provide us with a really good rate,” Singleton said. Verbit provided that.

Moving forward, George Mason will be looking for a way to automatically integrate transcripts with the courses. Now, putting them together is a manual process, but with some APIs and automated technologies, Singleton said he’s aiming to make that happen automatically.

Go to Original Article

Amazon CTO Werner Vogels on transparency, developers, multi-cloud

Amazon CTO Werner Vogels is known for his work with Amazon Web Services, but he actually leads technology innovation across the entire company. In a keynote talk at this week’s AWS Summit event in New York City, he outlined new product directions and his philosophy for the future of cloud computing.

Vogels sat down with TechTarget to discuss a wide range of issues, from transparency into future development of AWS services to customers’ multi-cloud plans.

In December 2018, AWS posted a public roadmap for its container strategy on GitHub. This was seen as an unusual, maybe unprecedented move. Talk about transparency in terms of a philosophy — will we see more of this kind of thing out of AWS?

Werner Vogels: As always, with respect to customer interaction, we try to experiment. The whole thing with roadmaps is that once you produce it, you have to stick with it. And historically, we’ve always tried to be more secretive. We’ve always tried to keep the roadmap with customers under NDA. Mostly so we could have the opportunity to change our minds.

Because once you promise customers you’re going to deliver X, Y and Z in September, you have to deliver X, Y and Z in September for them.

And so I think given the tremendous interest of developers in containers, this seems like a really great space to start with giving the community access to a roadmap, knowing what’s coming. And I think definitely given our close cooperation with that group we need this sort of ecosystem. I think it was really important to show what our plans are there.

One critique of AWS is that CloudFormation lags too much with regard to support for new AWS features. In response, AWS pledged to provide more transparency around CloudFormation, including a roadmap. What’s going on from your perspective with CloudFormation?

Werner Vogels, vice president and CTO of Amazon

Vogels: Often we have a number of innovations scheduled for CloudFormation, but as you can see we put a lot of effort into the Cloud Development Kit, or CDK. One thing we’ve gotten from developers is that they prefer to write code instead of these large, declarative JSON and XML files. I showed it onstage this morning, with the demo that we did. We’ve put most of our effort in actually going the CDK route more than sort of extending CloudFormation.

Most customers have asked for new features in CloudFormation to get sort of parity with what Terraform is doing. I have great respect for HashiCorp and the speed at which they’re innovating. They’re a great partner. And as such, we’re working with CloudFormation to take it in the direction that customers are asking for.

I think overall, we’re on a good path, the right path. But I love the fact that there is a long list of requests for CloudFormation. It means that customers are passionate about it and want us to do more.

There is a sense these days that enterprises should look to be multi-cloud, not tied to a single provider, for reasons such as cost, vendor management and richer opportunities for innovation. One of your competitors, Google, hopes to be a middleman player with its Anthos multi-cloud deployment platform. What is your stance on multi-cloud, and can we see something like Anthos coming out of AWS someday?

Vogels: It depends a bit on how you define multi-cloud. If you think about if you have this one application that you want to run on any of the providers, you pretty quickly go to a lowest common denominator, which is to use a cloud as a data center. You just use instances as a service. Now you get some elasticity, you get some cost savings out of it, maybe some more reliability, but you get none of the other benefits. You can’t use any of the security tools that Amazon is giving you. Plus, you need to have your workforce, your development force able to be proficient in each and every one of these clouds that you’re using, which seems like a waste.

The few companies that I’ve seen being slightly successful with having a multi-cloud approach are ones that say, oh this is one particular thing that this particular provider is unique in and I really want to make use of that. Well, sometimes that’s as some sort of a vertical, or it might be in a particular location.

The other thing that we’re working with most of our enterprise customers is, what is an exit strategy? What do I need to do, if one moment I decide that I would like to move over to another provider? That for any large enterprise is just good due diligence. If you start using a [SaaS application], you want to know about what do we need to do to get my data out of there, if I want to move let’s say from Salesforce to Workday.

It’s the same for most large enterprises. They want to know how much work is it actually for me to actually move if I decide to go from cloud provider A to cloud provider B, or maybe bring it back on premises.

That’s something that we’ve been working on with most of our large customers, because that’s just good due diligence.

You talked about your strategy for developers today [in the AWS Summit keynote]. Are you satisfied with where AWS is with regard to developer experience?

Vogels: I’m never satisfied. I think this is mostly focused on serverless. Anything serverless is still so much in flux. We see customers building more and more complex and larger applications using only serverless components, and we’re learning from that. What are the kinds of things that customers want?

For example, when we launched [Lambda] Layers, that was purely from feedback from customers saying, ‘Hey, you know, we have this whole set of basic components that we are always using for each of our applications, but it doesn’t let us easily integrate them.’ So we built Layers for customers.
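
For teams wiring this up, the shape of the feature is easy to show. Below is a minimal sketch in TypeScript using the AWS CDK; the directory layout, runtime and function names are illustrative, not from the interview. A shared layer is built once and attached to each function that needs it.

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

class LayersDemoStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // Hypothetical directory holding the team's common utility code,
    // packaged once as a layer instead of being bundled into every function.
    const shared = new lambda.LayerVersion(this, 'SharedUtils', {
      code: lambda.Code.fromAsset('layers/shared-utils'),
      compatibleRuntimes: [lambda.Runtime.NODEJS_18_X],
    });

    // Each function references the layer; its contents appear under /opt.
    new lambda.Function(this, 'OrdersFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('src/orders'),
      layers: [shared],
    });
  }
}

const app = new cdk.App();
new LayersDemoStack(app, 'LayersDemo');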

We continue to look at how we can do these things. The same goes for building Custom Runtimes. There [are] only so many languages you can do yourself, but if there’s someone else that wants to do Haskell or Caml, or any, let’s say, less popular language, we should be able to enable them. And so we built Custom Runtimes.
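
Under the hood, a custom runtime is just an executable named bootstrap that loops against Lambda’s documented Runtime API. Here is a bare-bones sketch of that loop, written in TypeScript for illustration; a real Haskell or Caml runtime would make the same HTTP calls.

// Event loop that every custom runtime's bootstrap must implement.
const api = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`;

async function main(): Promise<never> {
  while (true) {
    // 1. Long-poll the Runtime API for the next invocation.
    const next = await fetch(`${api}/invocation/next`);
    const requestId = next.headers.get('lambda-runtime-aws-request-id')!;
    const event = await next.json();

    // 2. Run the handler; a trivial echo stands in here.
    const result = { echoed: event };

    // 3. Post the result back for this specific request id.
    await fetch(`${api}/invocation/${requestId}/response`, {
      method: 'POST',
      body: JSON.stringify(result),
    });
  }
}

main();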

Part two of TechTarget’s Q&A with Amazon CTO Werner Vogels will touch on AWS Outposts, AWS’ pace of innovation, and how customers can control cloud costs.

AWS Summit widens net with services for containers, devs

NEW YORK — AWS pledges to maintain its torrid pace of product and service innovation and to continue expanding the breadth of both to meet customer needs.

“You decide how to build software, not us,” said Werner Vogels, Amazon vice president and CTO, in a keynote at the AWS Summit NYC event. “So, we need to give you a really big toolbox so you can get the tools you need.”

But AWS, which holds a healthy lead over Microsoft and Google in the cloud market, also wants to serve as an automation engine for customers, Vogels added.

“I strongly believe that in the future … you will only write business logic,” he said. “Focus on building your application, drop it somewhere and we will make it secure and highly available for you.”

Parade of new AWS services continues

Vogels sprinkled a series of news announcements throughout his keynote, two of which centered on containers. First, Amazon CloudWatch Container Insights, a service that provides container-level monitoring, is now in preview for monitoring clusters in Amazon Elastic Container Service and AWS Fargate, in addition to Amazon EKS and Kubernetes. In addition, AWS for Fluent Bit, which serves as a centralized log router for containers, is now generally available, he said.
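
Container Insights is switched on per cluster. As a rough sketch, the setting can be flipped on an existing ECS cluster with the AWS SDK for JavaScript v3; the cluster name below is made up for illustration.

import { ECSClient, UpdateClusterSettingsCommand } from '@aws-sdk/client-ecs';

const ecsClient = new ECSClient({});

// Enable the containerInsights setting on an existing cluster.
async function enableContainerInsights(): Promise<void> {
  await ecsClient.send(new UpdateClusterSettingsCommand({
    cluster: 'orders-cluster', // hypothetical cluster name
    settings: [{ name: 'containerInsights', value: 'enabled' }],
  }));
}

enableContainerInsights().catch(console.error);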

Serverless compute also got some attention with the release of Amazon EventBridge, a serverless event bus to take in and process data across AWS’ own services and SaaS applications. AWS customers currently do this with a lot of custom code, so “the goal for us was to provide a much simpler programming model,” Vogels said. Initial SaaS partners for EventBridge include Zendesk, OneLogin and Symantec.
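
The simpler programming model boils down to publishing structured events to a bus and letting rules route them to targets. What follows is a TypeScript sketch using the AWS SDK v3; the bus name, source and detail type are invented for illustration.

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const events = new EventBridgeClient({});

// Publish one custom application event; rules on the bus decide who receives it.
async function publishOrderCreated(): Promise<void> {
  await events.send(new PutEventsCommand({
    Entries: [{
      EventBusName: 'orders-bus',     // custom bus; omit to use 'default'
      Source: 'com.example.orders',   // identifies the emitting application
      DetailType: 'OrderCreated',     // what rules typically match on
      Detail: JSON.stringify({ orderId: '1234', total: 42.5 }),
    }],
  }));
}

publishOrderCreated().catch(console.error);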

AWS minds the past, with eye on the future

Most customers are moving away from the concept of a monolithic application, “but there are still lots of monoliths out there,” such as SAP ERP implementations that won’t go away anytime soon, Vogels said.

But IT shops with a cloud-first mindset focus on newer architectural patterns, such as microservices. AWS wants to serve both types of applications with a full range of instance types, containers and serverless functionality, Vogels said.

He cited customers such as McDonald’s, which has built a home-delivery system with Amazon Elastic Container Service. It can handle up to 20,000 orders per second and is integrated with partners such as Uber Eats, Vogels said.

Vogels ceded the stage for a time to Steve Randich, executive vice president and CIO of the Financial Industry Regulatory Authority (FINRA), a nonprofit group that seeks to keep brokerage firms fair and honest.

FINRA moved wholesale to AWS and its systems now ingest up to 155 billion market events in a single day — double what it was three years ago. “When we hit these peaks, we don’t even know them operationally because the infrastructure is so elastic,” Randich said.

FINRA has designed the AWS-hosted apps to run across multiple availability zones. “Essentially, our disaster recovery is tested daily in this regard,” he said.

AWS’ ode to developers

Developers have long been a crucial component of AWS’ customer base, and the company has built out a string of tool sets aimed at a broad range of languages and integrated development environments (IDEs), including AWS Cloud9, IntelliJ, Python, Visual Studio and Visual Studio Code.

VS Code is Microsoft’s lighter-weight, open source code editor, which has seen strong initial uptake. The AWS Toolkit for VS Code is now generally available, Vogels said to audience applause.

Additionally, the AWS Cloud Development Kit (CDK) is now generally available with support for TypeScript and Python. AWS CDK makes it easier for developers to use high-level constructs to define cloud infrastructure in code, said Martin Beeby, AWS principal developer evangelist, in a demo.
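
To give a sense of what a high-level construct buys you, here is a minimal sketch, shown against the current aws-cdk-lib (v2) API rather than the v1 packages demoed at the Summit. The single construct below expands into the load balancer, Fargate service, task definition and networking that a raw template would have to spell out.

import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

class WebServiceStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // One construct synthesizes the VPC wiring, load balancer,
    // service and task definition behind the scenes.
    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Web', {
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
      },
      publicLoadBalancer: true,
    });
  }
}

const app = new cdk.App();
new WebServiceStack(app, 'WebService');

Running cdk synth on such an app emits a plain CloudFormation template, so the code-first workflow and the declarative one remain interchangeable.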

AWS seeks to keep the cloud secure

Vogels also used part of his AWS Summit talk to reiterate AWS’ views on security, as he did at the recent AWS re:Inforce conference dedicated to cloud security.

“There is no line in the sand that says, ‘This is good-enough security,'” he said, citing newer techniques such as automated reasoning as key advancements.

[Image: Werner Vogels, CTO of AWS, on stage at the AWS Summit in New York.]

Classic security precautions have become practically obsolete, he added. “If firewalls were the way to protect our systems, then we’d still have moats [around buildings],” Vogels said. Most attack patterns AWS sees are not brute-force front-door efforts, but rather spear-phishing and other techniques: “There’s always an idiot that clicks that link,” he said.

The full spectrum of IT, from operations to engineering to compliance, must be mindful of security, Vogels said. That applies to DevOps practices such as CI/CD at both the external and internal levels, he said. The former involves matters such as identity and access management and hardened servers, while the latter brings in techniques such as artifact validation and static code analysis.

AWS Summit draws veteran customers and newcomers

The event at the Jacob K. Javits Convention Center drew thousands of attendees with a wide range of cloud experience, from FINRA to fledgling startups.

“The analytics are very interesting to me, and how I can translate that into a set of services for the clients I’m starting to work with,” said Donald O’Toole, owner of CeltTools LLC, a two-person startup based in Brooklyn. He retired from IBM in 2018 after 35 years.

AWS customer Timehop offers a mobile application oriented around “digital nostalgia,” which pulls together users’ photographs from various sources such as Facebook and Google Photos, said CTO Dmitry Traytel.

A few years ago, Timehop found itself in a place familiar to many startups: low on venture capital and without a viable monetization strategy. The company created its own advertising server on top of AWS, dubbed Nimbus, rather than rely on third-party products. Once a user session starts, the system conducts an auction among multiple prominent mobile ad networks, which results in the best possible price for its ad inventory.

“Nimbus let us pivot to a different category,” Traytel said.
