Tag Archives: Services

Amazon Chime gets integration with Dolby Voice Room

Amazon Web Services has integrated its Amazon Chime online meetings software with a video hardware kit for small and midsize conference rooms made by Dolby Laboratories.

Businesses using Amazon Chime could already connect the app to software-agnostic video hardware using H.323 and SIP. But standards-based connections are generally difficult to set up and use.

The Dolby partnership gives Chime users access to video gear that is preloaded with the AWS software. However, Dolby only entered the video hardware market last year, so few Chime customers will be able to take advantage of the integration without purchasing new equipment.

Amazon Chime is far behind competing services, such as Zoom and Microsoft Teams. Both already have partnerships with leading makers of conference room hardware, such as Poly and Logitech. Also, Chime still lacks support for a room system for large meeting spaces and boardrooms.

Online meetings software must integrate with room systems to effectively compete, said Irwin Lazar, analyst at Nemertes Research. “So the Dolby announcement represents a much-needed addition to their capabilities.”

Dolby Voice Room includes a camera and a separate speakerphone with a touchscreen for controlling a meeting. The audio device’s microphone suppresses background noise and compensates for quiet and distant voices.

AWS recently expanded Chime to include a bare-bones service for calling, voicemail and SMS messaging. The vendor also earlier this year released a service for connecting on-premises PBXs to the internet using SIP.

Unlike other cloud-based calling and meeting providers, AWS charges customers based on how much they use Chime. However, Chime still trails more established offerings in the video conferencing market.

“Customers I’ve spoken to like their pay-per-use pricing model,” Lazar said. “But at this point, I don’t yet see them making a major push to challenge Microsoft, Cisco or Zoom.”

In a recent Nemertes Research study, 8% of organizations using a video conferencing service were Chime customers, seventh behind offerings from Microsoft, Cisco and others. However, only 0.6% said Chime was the primary app they used — the smallest percentage of any vendor.

Adoption of Chime has been “pretty sluggish,” said Zeus Kerravala, principal analyst at ZK Research. “But Amazon can play the long game here.” Launched in February 2017, Chime is a relatively insignificant project of AWS, a division of Amazon that generated more than $25 billion in revenue last fiscal year.


Slack services partners to help vendor target enterprises

Slack is partnering with IT services and consulting firms to help midsize or larger businesses adopt and use its team collaboration app.

It is Slack’s first significant step toward developing a partner channel that would help it compete with Microsoft and Cisco for large enterprises. Those vendors rely on an ecosystem of resellers and IT integrators to support businesses on a global scale.

But Slack’s initial partners are small and midsize organizations. The vendor has yet to recruit the world’s leading IT integrators. Until it does so, Slack will still be at a significant disadvantage against those larger rivals as it attempts to sell to businesses with tens of thousands of employees.

The move comes as financial analysts sour on Slack, worrying that the vendor will be unable to compete with Microsoft Teams in the enterprise market over the long term. Slack’s valuation has dropped from $19 billion to less than $12 billion amid a steady decline in its stock price over the past several months.

The Slack services partners will help businesses with more than 250 employees build integrations, train employees and figure out where Slack fits into their move to the cloud. Slack is launching the program with seven partners across the United States, the United Kingdom and Japan.

Slack could launch a reseller program in the future, said Rich Hasslacher, Slack’s head of global alliances and channels. But, for now, the company will pay its services partners a finder’s fee worth 8% of the first-year contract of any customer they refer to Slack.

The services partners are Robot & Pencils, Adaptavist, Abeam Consulting, Ricksoft, Rainmaker, Onix and Cprime. Slack plans to add additional partners to the program around February or March of 2020, targeting markets in continental Europe, Australia and Latin America.

Developing the right ecosystem of partners will be essential to Slack’s long-term viability, said Zeus Kerravala, principal analyst at ZK Research. Slack is more than just a messaging app. Yet, many businesses don’t understand how to take full advantage of the platform, he said.

“When you look at long-term viability, that’s always been around platforms, not products,” Kerravala said. “I think if Slack wants to go down that route, [the services partner program] is part of what they need to do.”

Developing a channel should also help Slack sell to IT departments, rather than to isolated business units and groups of end users. Slack has 12 million daily active users, but only 6 million paid seats. Winning more company-wide deployments would help Slack boost its paid user count.


ConnectWise-Continuum buyout shakes up MSP software market

ConnectWise, a provider of software for managed services providers, has acquired its competitor Continuum.

The Continuum acquisition was announced today by ConnectWise CEO Jason Magee at his company’s annual user conference, IT Nation Connect, running from Oct. 30 to Nov. 1 in Orlando, Fla. The buyout, which is poised to shake up the MSP software market, accompanies the acquisition of ITBoost, an IT documentation vendor. ConnectWise also revealed a strategic partnership with partner relationship management software provider Webinfinity to help ConnectWise partners manage their vendor alliances.

“[The Continuum acquisition] allows ConnectWise to address the growing pains of our partners and some of those pains around talent and skills shortages … [and] continues to accelerate ConnectWise in the cybersecurity area,” Magee said in a press briefing.

ConnectWise and Continuum are owned by private equity investment firm Thoma Bravo. Thoma Bravo purchased ConnectWise in February. The private equity firm also owns MSP software players (and ConnectWise-Continuum competitors) SolarWinds and Barracuda Networks.

ConnectWise’s platform spans professional services automation, remote monitoring and management (RMM), and ‘configure, price and quote’ software. Continuum’s development of a global security operations center (SOC), network operations center and help desk technologies will be “complementary” to what ConnectWise does today, Magee said.

Jason Magee, CEO of ConnectWise

The future of ConnectWise and Continuum’s RMM platforms, ConnectWise Automate and Continuum Command, remains in question. Magee said the respective RMM platforms “will be maintained [separately] at this point.” After the IT Nation Connect 2019 event, the companies will begin working on their overall business plan and joint roadmaps, “which to this point we have not been able to dig into much due to regulatory restraints around getting government approval of making the deal happen and so on,” he said.

Magee suggested that in the short term ConnectWise-Continuum partners could see some innovations introduced to the Automate and Command platforms. He pointed to a few potential examples, such as making Command’s LogMeIn remote control available to ConnectWise partners and adding Command’s automation and patching capabilities to the Automate platform. He didn’t specify the timing for implementing any changes but said partners could expect to see some in early 2020.

Although post-acquisition integration is still in the planning stage, Magee said Continuum’s CFO, Geoffrey Willison, will be brought on as COO at ConnectWise, and Continuum’s senior vice president of global service delivery, Tasos Tsolakis, will join as senior vice president of service delivery “over all ConnectWise going forward.” Additionally, Magee said ConnectWise will hire a new CFO for the combined business.

“Until we have the rest of the best of the business plan done, it is business as usual,” Magee said.

Addressing two types of MSPs

Magee said that the ConnectWise-Continuum acquisition also serves to benefit “two mindsets” that have emerged among MSPs.

The first mindset is of the do-it-yourself MSPs that build their practices by partnering, buying platforms and tools, and hiring teams to manage and service their customers. The second mindset is of “the companies and people [that] just want to go hire the general contractor, and those people are asking for someone else to manage [their customers] for them, take the hassle out of having to do all that stuff within their company or themselves.”


“This opens up a whole new world from a ConnectWise standpoint,” Magee said.

For a few years, ConnectWise has been establishing a ‘connected ecosystem’ of third-party software integrations around its platform, and the company will remain committed to that strategy. “We are still committed to the power of choice for our partners and will continue with our API-first mindset, which allows for continued partnership with the 260 and growing vendor partnerships that we have out there,” Magee said. “These are all great options for those [MSPs] that like to do it themselves.”

When asked if Magee anticipated challenges in merging the ConnectWise and Continuum communities of MSP partners, he said he didn’t expect any problems but would address any issues that may crop up to ensure “we are doing right by the communities.”

“At the end of the day, there is so much good and greatness that comes from bringing these two together that the partner communities are going to benefit tremendously.”

ITBoost, Webinfinity and cybersecurity initiative

In a move similar to MSP software vendor Kaseya’s buyout of IT Glue, ConnectWise is purchasing documentation provider ITBoost. ConnectWise said the IT document tool will be integrated with its product suite.

Magee said the Webinfinity partnership will help ConnectWise launch ConnectWise Engage, a tool for channel firms for simplifying vendor relationship management. ConnectWise Engage aims to give partners “the ability to receive enablement content and material or solution stack information” from their supplier partners, he noted. Additionally, ConnectWise said the Webinfinity alliance will help centralize vendor-partner touch points for areas such as deal registration, multivendor support issues, co-marketing and SKU management.

ConnectWise today also revealed a cybersecurity initiative, which Magee is calling ‘Fight Back,’ to encourage vendors, platform providers, MSPs and MSP customers to up their security awareness and capabilities.

Magee noted that ConnectWise recently achieved SOC Type 2 certification and will mandate multifactor and two-factor authentication across its platforms by early 2020. The company in August rolled out its Technology Solution Provider Information Sharing and Analysis Organization, a forum for MSPs to share threat intelligence and best practices. “This is an area that ConnectWise for years has strived to be better. We are not perfect by any means, but we strive to get better,” he said.


Cloud database services multiply to ease admin work by users

NEW YORK — Managed cloud database services are mushrooming, as more database and data warehouse vendors launch hosted versions of their software that offer elastic scalability and free users from the need to deploy, configure and administer systems.

MemSQL, TigerGraph and Yellowbrick Data all introduced cloud database services at the 2019 Strata Data Conference here. In addition, vendors such as Actian, DataStax and Hazelcast said they soon plan to roll out expanded versions of managed services they announced earlier this year.

Technologies like the Amazon Redshift and Snowflake cloud data warehouses have shown that there’s a viable market for scalable database services, said David Menninger, an analyst at Ventana Research. “These types of systems are complex to install and configure — there are many moving parts,” he said at the conference. With a managed service in the cloud, “you simply turn the service on.”

Menninger sees cloud database services — also known as database as a service (DBaaS) — as a natural progression from database appliances, an earlier effort to make databases easier to use. Like appliances, the cloud services give users a preinstalled and preconfigured set of data management features, he said. On top of that, the database vendors run the systems for users and handle performance tuning, patching and other administrative tasks.

Overall, the growing pool of DBaaS technologies provides good options “for data-driven companies needing high performance and a scalable, fully managed analytical database in the cloud at a reasonable cost,” said William McKnight, president of McKnight Consulting Group.

Database competition calls for cloud services

For database vendors, cloud database services are becoming a must-have offering to keep up with rivals and avoid being swept aside by cloud platform market leaders AWS, Microsoft and Google, according to Menninger. “If you don’t have a cloud offering, your competitors are likely to eat your lunch,” he said.

The Strata Data Conference was held from Sept. 23 to 26 in New York City.

Todd Blaschka, TigerGraph’s chief operating officer, also pointed to the user adoption of the Atlas cloud service that NoSQL database vendor MongoDB launched in 2016 as a motivating factor for other vendors, including his company. “You can see how big of a revenue generator that has been,” Blaschka said. Services like Atlas “allow more people to get access [to databases] more quickly,” he noted.

Blaschka said more than 50% of TigerGraph’s customers already run its namesake graph database in the cloud, using a conventional version that they have to deploy and manage themselves. But with the company’s new TigerGraph Cloud service, users “don’t have to worry about knowing what a graph is or downloading it,” he said. “They can just build a prototype database and get started.”

TigerGraph Cloud is initially available in the AWS cloud; support will also be added for Microsoft Azure and then Google Cloud Platform (GCP) in the future, Blaschka said.

Yellowbrick Data made its Yellowbrick Cloud Data Warehouse service generally available on all three of the cloud platforms, giving users a DBaaS alternative to the on-premises data warehouse appliance it released in 2017. Later this year, Yellowbrick also plans to offer a companion disaster recovery service that provides cloud-based replicas of on-premises or cloud data warehouses.

More cloud database services on the way

MemSQL, one of the vendors in the NewSQL database category, detailed plans for a managed cloud service called Helios, which is currently available in a private preview release on AWS and GCP. Azure support will be added next year, said Peter Guagenti, MemSQL’s chief marketing officer.

About 60% of MemSQL’s customers run its database in the cloud on their own now, Guagenti said. But he added that the company, which primarily focuses on operational data, was waiting for the Kubernetes StatefulSets API object for managing stateful applications in containers to become available in a mature implementation before launching the Helios service.

Actian, which introduced a cloud service version of its data warehouse platform on AWS last March, said it will make the Avalanche service available on Azure this fall and on GCP at a later date.


DataStax, which offers a commercial version of the Cassandra open source NoSQL database, said it’s looking to make a cloud-native platform called Constellation and a managed version of Cassandra that runs on top of it generally available in November. The new technologies, which DataStax announced in May, will initially run on GCP, with support to follow on AWS and Azure.

Also, in-memory data grid vendor Hazelcast plans in December to launch a version of its Hazelcast Cloud service for production applications. The Hazelcast Cloud Dedicated edition will be deployed in a customer’s virtual private cloud instance, but Hazelcast will configure and maintain systems for users. The company released free and paid versions of the cloud service for test and development uses in March on AWS, and it also plans to add support for Azure and GCP in the future.

Managing managed database services vendors

Bayer AG’s Bayer Crop Science division, which includes the operations of Monsanto following Bayer’s 2018 acquisition of the agricultural company, uses managed database services on Teradata data warehouses and Oracle’s Exadata appliance. Naghman Waheed, data platforms lead at Bayer Crop Science, said the biggest benefit of both on-premises and cloud database services is offloading routine administrative tasks to a vendor.

“You don’t have to do work that has very little value,” Waheed said after speaking about a metadata management initiative at Bayer in a Strata session. “Why would you want to have high-value [employees] doing that work? I’d rather focus on having them solve creative problems.”

But he said there were some startup issues with the managed services, such as standard operating procedures not being followed properly. His team had to work with Teradata and Oracle to address those issues, and one of his employees continues to keep an eye on the vendors to make sure they live up to their contracts.

“We ultimately are the caretaker of the system,” Waheed said. “We do provide guidance — that’s still kind of our job. We may not do the actual work, but we guide them on it.”


How to work with the WSUS PowerShell module

Many enterprises use Windows Server Update Services (WSUS) to centralize and distribute Windows patches to end-user devices and servers.

WSUS is a free server role that installs on Windows Server and syncs Windows updates locally. Clients connect to the server and download patches from it. Historically, WSUS is managed with a GUI, but with PowerShell and the PoshWSUS community module you can automate much of your WSUS work for greater efficiency. This article covers how to use some of the common cmdlets in that WSUS PowerShell module.

Connecting to a WSUS server

The first task to do with PoshWSUS is to connect to an existing WSUS server so you can run cmdlets against it. This is done with the Connect-PSWSUSServer cmdlet. The cmdlet provides the option to make a secure connection, which is normally on port 8531 for SSL.

Connect-PSWSUSServer -WsusServer wsus -Port 8531 -SecureConnection
Name Version PortNumber ServerProtocolVersion
---- ------- ---------- ---------------------
wsus 10.0.14393.2969 8530 1.20

View the WSUS clients

There are various cmdlets used to view WSUS client information. The most apparent is Get-PSWSUSClient, which shows client information such as hostname, group membership, hardware model and operating system type. The example below gets information on a specific machine named Test-1.

Get-PSWSUSClient Test-1 | Select-Object *
ComputerGroup : {Windows 10, All Computers}
UpdateServer : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer
Id : 94a2fc62-ea2e-45b4-97d5-10f5a04d3010
FullDomainName : Test-1
IPAddress : 172.16.48.153
Make : HP
Model : HP EliteDesk 800 G2 SFF
BiosInfo : Microsoft.UpdateServices.Administration.BiosInfo
OSInfo : Microsoft.UpdateServices.Administration.OSInfo
OSArchitecture : AMD64
ClientVersion : 10.0.18362.267
OSFamily : Windows
OSDescription : Windows 10 Enterprise
ComputerRole : Workstation
LastSyncTime : 9/9/2019 12:06:59 PM
LastSyncResult : Succeeded
LastReportedStatusTime : 9/9/2019 12:18:50 PM
LastReportedInventoryTime : 1/1/0001 12:00:00 AM
RequestedTargetGroupName : Windows 10
RequestedTargetGroupNames : {Windows 10}
ComputerTargetGroupIds : {59277231-1773-401f-bf44-2fe09ac02b30, a0a08746-4dbe-4a37-9adf-9e7652c0b421}
ParentServerId : 00000000-0000-0000-0000-000000000000
SyncsFromDownstreamServer : False

WSUS usually organizes machines into groups, such as all Windows 10 machines, to apply update policies. The command below counts the machines in a particular group called Windows 10 with the cmdlet Get-PSWSUSClientsInGroup:

Get-PSWSUSClientsInGroup -Name 'Windows 10' | Measure-Object | Select-Object -Property Count
Count
-----
86

How to manage Windows updates

With the WSUS PowerShell module, you can view, approve and decline updates on the WSUS server, a very valuable and powerful feature. The command below finds all the Windows 10 feature updates with the title “Feature update to Windows 10 (business editions).” The output shows various updates on my server for version 1903 in different languages:

Get-PSWSUSUpdate -Update "Feature update to Windows 10 (business editions)"  | Select Title
Title
-----
Feature update to Windows 10 (business editions), version 1903, en-gb x86
Feature update to Windows 10 (business editions), version 1903, en-us arm64
Feature update to Windows 10 (business editions), version 1903, en-gb arm64
Feature update to Windows 10 (business editions), version 1903, en-us x86
Feature update to Windows 10 (business editions), version 1903, en-gb x64
Feature update to Windows 10 (business editions), version 1903, en-us x64

Another great feature of this cmdlet is that it shows updates that arrived after a particular date. The following command lists the first five updates that were downloaded in the last day:

Get-PSWSUSUpdate -FromArrivalDate (Get-Date).AddDays(-1) | Select-Object -First 5
Title KnowledgebaseArticles UpdateType CreationDate UpdateID
----- --------------------- ---------- ------------ --------
Security Update for Microso... {4475607} Software 9/10/2019 10:00:00 AM 4fa99b46-765c-4224-a037-7ab...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM 1e489891-3372-43d8-b262-8c8...
Security Update for Microso... {4475599} Software 9/10/2019 10:00:00 AM 76187d58-e8a6-441f-9275-702...
Security Update for Microso... {4461631} Software 9/10/2019 10:00:00 AM 86bdbd3b-7461-4214-a2ba-244...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM a56d629d-8f09-498f-91e9-572...

The approval and rejection of updates is an important part of managing Windows updates in the enterprise. The WSUS PowerShell module makes this easy to do. A few years ago, Microsoft began releasing preview updates for testing purposes. I typically want to decline these updates to avoid their installation on production machines. The following command finds all updates with the string “Preview of” in the title that have not already been declined and declines them with the Deny-PSWSUSUpdate cmdlet.

Get-PSWSUSUpdate -Update "Preview of" | Where-Object { -not $_.IsDeclined } | Deny-PSWSUSUpdate
Patch IsDeclined
----- ----------
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1 on Windows Server 2008 R2 for Itanium-based Systems (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 and Server 2008 R2 for x64 (KB4512193) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0 on Windows Server 2008 SP2 for Itanium-based Systems (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 for x64 (KB4512194) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 8.1 and Server 2012 R2 for x64 (KB4512195) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 for x64 (KB4512196) True

Syncing WSUS with Microsoft’s servers

In the WSUS GUI, users can set up a daily synchronization between their WSUS server and the Microsoft update servers to download new updates. I like to synchronize more than once a day, especially on Patch Tuesday when you may get several updates in one day. For this reason, you can create a scheduled task that runs a WSUS sync hourly for a few hours per day. The script can be as simple as this command below:

Start-PSWSUSSync
Synchronization has been started on wsus.
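
If you want to run that sync on a schedule rather than manually, a minimal sketch using the built-in ScheduledTasks cmdlets might look like the following. The task name, start time, repetition window and script path are placeholder assumptions, and the referenced script is assumed to contain the Connect-PSWSUSServer and Start-PSWSUSSync commands shown above.

# C:\Scripts\Sync-WSUS.ps1 (assumed) contains:
#   Import-Module PoshWSUS
#   Connect-PSWSUSServer -WsusServer wsus -Port 8531 -SecureConnection
#   Start-PSWSUSSync
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\Sync-WSUS.ps1'
# Repeat the sync every hour for 10 hours, starting at 8 a.m.
$trigger = New-ScheduledTaskTrigger -Once -At 8am -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Hours 10)
Register-ScheduledTask -TaskName 'Hourly WSUS Sync' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest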

Performing cleanups

A WSUS server can be fickle. I have had to rebuild WSUS servers several times, and it is a pretty lengthy process because you have to download all the updates to the new server. You can often avoid a rebuild by running regular cleanups on the WSUS server. The Start-PSWSUSCleanup cmdlet performs many of these important actions, such as declining superseded updates, cleaning up obsolete updates and removing obsolete computers:

Start-PSWSUSCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates -CompressUpdates -CleanupObsoleteComputers -CleanupUnneededContentFiles
Beginning cleanup, this may take some time...
SupersededUpdatesDeclined : 223
ExpiredUpdatesDeclined : 0
ObsoleteUpdatesDeleted : 0
UpdatesCompressed : 4
ObsoleteComputersDeleted : 6
DiskSpaceFreed : 57848478722


How to Create and Manage Hot/Cold Tiered Storage

When I was working in Microsoft’s File Services team around 2010, one of the primary goals of the organization was to commoditize storage and make it more affordable to enterprises. Legacy storage vendors offered expensive products, often consuming a majority of the IT department’s budget, and they were slow to make improvements because customers were locked in. Since then, every release of Windows Server has included storage management features which were previously only provided by storage vendors, such as deduplication, replication, and mirroring. These features could be used to manage commodity storage arrays and disks, reducing costs and eliminating vendor lock-in. Windows Server now offers a much-requested feature: the ability to move files between different tiers of “hot” (fast) storage and “cold” (slow) storage.

Managing hot/cold storage is conceptually similar to computer memory cache but at an enterprise scale. Files which are frequently accessed can be optimized to run on the hot storage, such as faster SSDs. Meanwhile, files which are infrequently accessed will be pushed to cold storage, such as older or cheaper disks. These lower priority files will also take advantage of file compression techniques like data deduplication to maximize storage capacity and minimize cost. Identical or varying disk types can be used because the storage is managed as a pool using Windows Server’s storage spaces, so you do not need to worry about managing individual drives. The file placement is controlled by the Resilient File System (ReFS), a file system which is used to optimize and rotate data between the “hot” and “cold” storage tiers in real-time based on their usage. However, using tiered storage is only recommended for workloads that are not regularly accessed. If you have permanently running VMs or you are using all the files on a given disk, there would be little benefit in allocating some of the disk to cold storage. This blog post will review the key components required to deploy tiered storage in your datacenter.

Overview of Resilient File System (ReFS) with Storage Tiering

The Resilient File System was first introduced in Windows Server 2012 with support for limited scenarios, but it has been greatly enhanced through the Windows Server 2019 release. It was designed to be efficient, support multiple workloads, avoid corruption and maximize data availability. More specifically for tiering, ReFS divides the pool of storage into two tiers automatically, one for high-speed performance and one for maximizing storage capacity. The performance tier receives all the writes on the faster disk for better performance. If those new blocks of data are not frequently accessed, the files will gradually be moved to the capacity tier. Reads will usually happen from the capacity tier, but can also happen from the performance tier as needed.

Storage Spaces Direct and Mirror-Accelerated Parity

Storage Spaces Direct (S2D) is one of Microsoft’s enhancements designed to reduce costs by allowing servers with Direct Attached Storage (DAS) drives to support Windows Server Failover Clustering. Previously, highly-available file server clusters required some type of shared storage on a SAN or used an SMB file share, but S2D allows for small local clusters which can mirror the data between nodes. Check out Altaro’s blog on Storage Spaces Direct for in-depth coverage on this technology.

With Windows Server 2016 and 2019, S2D offers mirror-accelerated parity which is used for tiered storage, but it is generally recommended for backups and less frequently accessed files, rather than heavy production workloads such as VMs. In order to use tiered storage with ReFS, you will use mirror-accelerated parity. This provides decent storage capacity by using both mirroring and a parity drive to help prevent and recover from data loss. In the past, mirroring and parity would conflict and you would usually have to select one or the other. Mirror-accelerated parity works with ReFS by taking writes and mirroring them (hot storage), then using parity to optimize their storage on disk (cold storage). By switching between these storage optimization techniques, ReFS provides admins with the best of both worlds.

Creating Hot and Cold Tiered Storage

When configuring tiered storage, you define the ratio between the hot and cold tiers. For most workloads, Microsoft recommends allocating 20% to hot and 80% to cold. If you are running high-performance workloads, consider allocating more hot storage to support more writes. On the flip side, if you have a lot of archival files, then allocate more cold storage. Remember that with a storage pool you can combine multiple disk types under the same abstracted storage space. The following PowerShell cmdlets show you how to configure a 1,000 GB disk to use 20% (200 GB) for performance (hot storage) and 80% (800 GB) for capacity (cold storage).
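
A minimal sketch is below. It assumes a Storage Spaces Direct pool and the default Performance (mirror) and Capacity (parity) tier templates; the pool name and volume name are placeholders for your environment.

# Create a 1,000 GB mirror-accelerated parity volume:
# 200 GB (20%) on the mirrored performance tier and 800 GB (80%) on the parity capacity tier.
New-Volume -StoragePoolFriendlyName 'S2D on Cluster1' -FriendlyName 'TieredVolume01' -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 200GB, 800GB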

Managing Hot and Cold Tiered Storage

If you want to increase the performance of your disk, then you will allocate a greater percentage of the disk to the performance (hot) tier. In the following example we use the PowerShell cmdlets to create a 30:70 ratio between the tiers:
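
This is a sketch assuming the 1,000 GB volume created above; the per-volume tier friendly names vary by environment, so list them with Get-StorageTier first and substitute your own.

# List the per-volume storage tiers, then resize them to a 30:70 split.
Get-StorageTier | Select-Object FriendlyName, Size
Resize-StorageTier -FriendlyName 'TieredVolume01-Performance' -Size 300GB
Resize-StorageTier -FriendlyName 'TieredVolume01-Capacity' -Size 700GB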

Unfortunately, this resizing only changes the ratio between the tiers but does not change the size of the partition or volume, so you will likely also want to change these with the volume-resizing cmdlets, such as Resize-Partition.

Optimizing Hot and Cold Storage

Based on the types of workloads you are using, you may wish to further optimize when data is moved between hot and cold storage, which is known as the “aggressiveness” of the rotation. By default, the hot storage will wait until 85% of its capacity is full before it begins to send data to the cold storage. If you have a lot of write traffic going to the hot storage, then you want to reduce this value so that performance-tier data gets pushed to the cold storage more quickly. If you have fewer write requests and want to keep data in hot storage longer, then you can increase this value. Since this is an advanced configuration option, it must be configured via the registry on every node in the S2D cluster, and it also requires a restart. Here is a sample script to run on each node if you want to change the aggressiveness so that it swaps files when the performance tier reaches 70% capacity:
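
The registry value name below follows Microsoft’s mirror-accelerated parity tuning guidance for the Policies key; treat it as an assumption and verify it against the documentation for your Windows Server build before changing production nodes.

# Run on each S2D node: destage data from the performance tier once it reaches 70% full (default is 85%).
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Policies' -Name 'DataDestageSsdFillRatioThreshold' -Value 70 -Type DWord
# Reboot the node for the change to take effect.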

You can apply this setting cluster-wide by using the following cmdlet:
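
This sketch relies on the failover clustering and PowerShell remoting cmdlets and uses the same assumed registry value name as above:

# Push the threshold to every node in the cluster, then reboot one node at a time.
$nodes = Get-ClusterNode | Select-Object -ExpandProperty Name
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Policies' -Name 'DataDestageSsdFillRatioThreshold' -Value 70 -Type DWord
}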

NOTE: If this is applied to an active cluster, make sure that you reboot one node at a time to maintain service availability.

Wrap-Up

Now you should be fully equipped with the knowledge to optimize your commodity storage using the latest Windows Server storage management features. You can pool your disks with Storage Spaces, use Storage Spaces Direct (S2D) to eliminate the need for a SAN, and use ReFS to optimize the performance and capacity of those drives. By understanding the tradeoffs between performance and capacity, your organization can significantly save on storage management and hardware costs. Windows Server has made it easy to centralize and optimize your storage so you can reallocate your budget to a new project – or to your wages!

What about you? Have you tried any of the features listed in the article? Have they worked well for you? Have they not worked well? Why or why not? Let us know in the comments section below!



Nectar launches Customer Experience Assurance platform

Nectar Services Corp. recently launched Nectar Customer Experience Assurance, a customer experience testing and monitoring platform for contact center and interactive voice response teams, promising to eliminate the need for legacy network monitoring platforms.

Nectar said Customer Experience Assurance offers a range of capabilities, including auto-discovery, voice recognition and simulation, dynamic call automation and load testing. These features enable contact center DevOps teams to test and discover network issues in a timely manner and to save time when launching new platforms or making configuration changes.

Nectar’s Customer Experience Assurance also offers perpetual monitoring that performs testing at regular intervals to monitor platforms for service availability and configuration changes, the company said. This enables contact center management teams to generate alerts and carry out historical reporting based on factors affecting customer experience (CX) metrics such as service availability, functionality and call quality.

Nectar CX Assurance includes the following features:

  • Auto discovery enables reverse-engineering of call flows, which speeds up interactive voice response (IVR) testing and provides accurate and timely customer experience monitoring.
  • Real-time alerting notifies companies via email and/or text when issues are identified.
  • Voice automation provides text-to-speech and speech recognition that, in combination with call recording, enable a high level of quality control and monitoring.
  • Voice quality scoring identifies clicks and noises, artifacts, intermittent gaps and jitter due to packet loss in audio during playback.

Nectar said Customer Experience Assurance is the first product to apply its experience in unified communications (UC) monitoring, diagnostics and reporting to the contact center environment. It is built upon Nectar’s core products for network and endpoint operations in UC and provides cloud-based CX testing for enterprise contact center and IVR operations.

In the CX monitoring market, Nectar competes with Oracle, Clarabridge and Integrated Research, known as IR. Oracle CX Cloud Suite offers a full set of applications from marketing to sales, and commerce to service. Clarabridge’s product stresses AI technology that provides audio transcription of agent-customer interactions, along with sentiment, tone and voice analysis for customer service conversations. IR’s Prognosis for Contact Center covers the complete contact center ecosystem from Cisco and Avaya, along with the underlying UC systems, in one platform.


Amazon Quantum Ledger Database brings immutable transactions

The Amazon Web Services Quantum Ledger Database is now generally available.

The database provides a cryptographically secured ledger as a managed service. It can be used to store both structured and unstructured data, providing what Amazon refers to as an immutable transaction log.

The new database service was released on Sept. 10, 10 months after AWS introduced it as a preview technology.

The ability to provide a cryptographically and independently verifiable audit trail of immutable data has multiple benefits and use cases, said Gartner vice president and distinguished analyst Avivah Litan.

“This is useful for establishing a system of record and for satisfying various types of compliance requirements, such as regulatory compliance,” Litan said. “Gartner estimates that QLDB and other competitive offerings that will eventually emerge will gain at least 20% of permissioned blockchain market share over the next three years.”

A permissioned blockchain has a central authority in the system to help provide overall governance and control. Litan sees the Quantum Ledger Database as satisfying several key requirements in multi-company projects, which are typically complementary to existing database systems.

Among the requirements is that once data is written to the ledger, the data is immutable and cannot be deleted or updated. Another key requirement that QLDB satisfies is that it provides a cryptographically and independently verifiable audit trail.

“These features are not readily available using traditional legacy technologies and are core components to user interest in adopting blockchain and distributed ledger technology,” Litan said. “In sum, QLDB is optimal for use cases when there is a trusted authority recognized by all participants and centralization is not an issue.”

Diagram: how the AWS Quantum Ledger Database works

Centralized ledger vs. de-centralized blockchain

The basic promise of many blockchain-based systems is that they are decentralized, and each party stores a copy of the ledger. For a transaction to get stored in a decentralized and distributed ledger, multiple parties have to come to a consensus. In this way, blockchains achieve trust in a distributed and decentralized way.

“Customers who need a decentralized application can use Amazon Managed Blockchain today,” said Rahul Pathak, general manager of databases, analytics and blockchain at AWS. “However, there are customers who primarily need the immutable and verifiable components of a blockchain to ensure the integrity of their data is maintained.”


For customers who want to maintain control and act as the central trusted entity, just like any database application works today, a decentralized system with multiple entities is not the right fit for their needs, Pathak said.

“Amazon [Quantum Ledger Database] combines the data integrity capabilities of blockchain with the ease and simplicity of a centrally owned datastore, allowing a single entity to act as the central trusted authority,” Pathak said.

While QLDB includes the term “quantum” in its name, it’s not a reference to quantum computing.

“By quantum, we imply indivisible, discrete changes,” Pathak said. “In QLDB, all the transactions are recorded in blocks to a transparent journal where each block represents a discrete state change.”

How the Amazon Quantum Ledger Database works

The immutable nature of QLDB is a core element of the database’s design. Pathak explained that QLDB uses a cryptographic hash function to generate a secure output file of the data’s change history, known as a digest. The digest acts as a proof of the data’s change history, enabling customers to look back and validate the integrity of their data changes.

From a usage perspective, QLDB supports PartiQL, an open standard query language that provides SQL-compatible access to data. Pathak said that customers can build applications with the Amazon QLDB Driver for Java to write code that accesses and manipulates the ledger database.

“This is a Java driver that allows you to create sessions, execute PartiQL commands within the scope of a transaction, and retrieve results,” he said. 

Developed internally at AWS

The Quantum Ledger Database is based on technology that AWS has been using for years, according to Pathak. AWS has been using an internal version of Amazon QLDB to store configuration data for some of its most critical systems, and has benefitted from being able to view an immutable history of changes, he said.

“Over time, our customers have asked us for the same ledger capability, and a way to verify that the integrity of their data is intact,” he said. “So, we built Amazon QLDB to be immutable and cryptographically verifiable.”


Microsoft expands its automotive partner ecosystem to power the future of mobility – The Official Microsoft Blog

Technology can help automotive companies transform into smart mobility services providers

Karl Benz and Henry Ford revolutionized transportation with the initial development and mass production of the automobile. Now, more than a century later, the automotive industry is poised to transform transportation again, with a push to develop connected, personalized and autonomous driving experiences, electric vehicles and new mobility business models from ride-sharing to ride-hailing and multimodal, smart transportation concepts.

This industry is expected to see significant growth, becoming a $6.6T industry by 2030, with disruptive business models accounting for 25 percent of all revenues, according to consulting firm, McKinsey & Company. From shared vehicle services to fully electric transportation, manufacturers are developing new products and services to enable large fleets offering mobility-as-a-service, which will increasingly replace individual car ownership. This involves modernizing the in-vehicle experience with productivity, entertainment, and personal assistants that are safe and secure, following users across different transport modes, adding value for businesses and consumers alike.

This transformation requires a data-driven mindset. The automotive sector generates vast amounts of data. However, companies aren’t yet fully set up to turn it into relevant insights. Future success depends on the ability to identify and capture digital signals and evolve how the business approaches innovation. Through what we call a digital feedback loop, the entirety of the enterprise can be connected with relevant data, whether it pertains to relationship management with customers and partners, engagement with employees, core product creation or enterprise operations, to drive the continuous improvement in products and services with which mobility companies must differentiate themselves from their competition.

We support the industry in unlocking this enormous potential by providing intelligent cloud, edge, IoT and AI services and helping automotive companies build and extend their own digital capabilities.

To that end, this year, for the first time, Microsoft is joining the Frankfurt Motor Show (IAA) and showcasing our approach to working with the automotive industry. We want to empower automotive organizations of all sizes to transform into smart mobility services providers.

Our automotive strategy is shaped by three key principles:

  1. We partner across the industry. We are not in the business of making vehicles or delivering end mobility-as-a-service offerings.
  2. We believe data should be owned by our customers, as insights from data will become the new drivers of revenue for the auto industry. We do not monetize our customers’ data.
  3. We support automotive companies as they enhance and extend their unique brand experiences to expand their relationships with their customers.

We are focusing our customer engagements, along with our extensive global partner network, to support their success in the following five areas: connected vehicle solutions, autonomous driving development, smart mobility solutions, connected marketing, sales and service, as well as intelligent manufacturing and supply chain.

Today, we are sharing updates about our approach and expansions to our partner ecosystem across these focus areas:

  1. Empower connected vehicle solutions

The core of our connected vehicle efforts is the Microsoft Connected Vehicle Platform (MCVP). It combines advanced cloud and edge computing services with a strong partner network so automotive companies can build connected driving solutions that span from in-vehicle experiences and autonomous driving to prediction services and connectivity. In addition to our partnerships with Volkswagen and Renault-Nissan-Mitsubishi Alliance, new partners are using MCVP to do more:

  • LG Electronics’ webOS Auto platform offers an in-vehicle, container-capable OS that brings the third-party application ecosystem created for premium TVs to in-vehicle experiences. webOS Auto supports the container-based runtime environment of MCVP and can be an important part of modern experiences in the vehicle.
  • Faurecia is leveraging MCVP to create disruptive, connected and personalized services inside the Cockpit of the Future to reinvent the on-board experience for all occupants.
  • Cubic Telecom is a leading connectivity management software provider to the automotive and IoT industries globally. They are one of the first partners to bring seamless connectivity as a core service offering to MCVP for a global market. The deep integration with MCVP allows for a single data lake and an integrated services monitoring path.

Meet more partners in our MCVP blog.

Our customers are also looking to provide conversational assistants tailored to their brand and customer needs, and make them available across multiple devices and apps. The Microsoft Azure Virtual Assistant Solution Accelerator simplifies the creation of these assistants.

  2. Accelerate autonomous driving function development

We empower car makers, suppliers and mobility services providers to accelerate their delivery of autonomous driving solutions that provide safe, comfortable and personalized driving experiences with a comprehensive set of cloud, edge, IoT and AI services and a partner-led open ecosystem that enables collaborative development across companies. We support companies of all sizes, from large enterprises such as Audi, which is leveraging Microsoft Azure to create simulations using large volumes of driving data, to small and medium-sized businesses and start-ups.

Today, we are announcing Microsoft for Startups: Autonomous Driving, a program to accelerate the growth of start-ups working on autonomous driving and help them seize new business opportunities in areas such as delivery, ride-sharing and long haul transit. Learn more about our collaboration with start-ups like Linker Networks and Udelv in our start-up blog.

This year in the Microsoft booth at IAA, Bosch, FEV, Intempora and Applied Intuition are showcasing their autonomous driving solutions.

  • FEV is overcoming the central challenge of validating automated driving functions with a data management and assessment system developed in-house, which uses Microsoft Azure.
  • Intempora has recently unveiled IVS, the Intempora Validation Suite, a new software toolchain for the test, training, benchmarking and the validation of ADAS (Advanced Driver and Assistance Systems) and HAD (Highly Automated Driving) algorithms.
  • Applied Intuition is equipping engineering and product development teams with software that makes it faster, safer, and easier to bring autonomy to market.
  3. Enable creation of smart mobility solutions

Intelligent mapping and navigation services are critical to building smart mobility solutions. This is why Microsoft is partnering with companies like TomTom and Moovit.

  • TomTom is integrating their navigation intelligence services such as HD Maps and Traffic as containerized services for use in MCVP so that other in-vehicle services, including autonomous driving, can take advantage of the additional location context.
  • TomTom and Moovit are also partnering with Microsoft for a comprehensive multi-modal trip planner leveraging Azure Maps.
  • The urban mobility app Moovit, using Azure Maps, also helps people with disabilities ride transit with confidence. This project supports Microsoft’s aim to make our latest technology accessible to everyone, foster inclusion and use our technology for good so that every person on the planet can benefit from technological innovations.
  4. Empower connected marketing, sales and services solutions

With Microsoft Business Applications, our automotive partners, suppliers, and retailers can develop new customer insights and create omnichannel customer experiences. With the Microsoft Automotive Accelerator, auto companies can schedule appointments and automotive services, facilitated through proactive communications.

At IAA, we’re excited to have several partners onsite, including Annata, Adobe and Daimler:

  • Annata is leveraging our Automotive Accelerator to help automotive and equipment companies meet business challenges while taking advantage of new opportunities in the market.
  • Adobe and Microsoft’s strategic partnership and integrations allow an end-to-end customer experience management solution for experience creation, marketing, advertising, analytics, and commerce.
  • Daimler launched eXtollo, the company’s new cloud platform for big data and advanced analytics. The platform uses Azure Key Vault, a service that safeguards encryption keys and secrets, including certificates, connection strings and passwords.
  5. Provide services to build an intelligent supply chain

Driving end-to-end digital transformation requires an integrated digital supply chain, from the factory and shop floor to end-customer delivery. Microsoft works with Icertis, BMW and others to build intelligent supply chains:

  • Icertis Contract Management natively runs on Microsoft Azure and seamlessly integrates with Office 365, Teams and Dynamics 365 so customers can extend the benefits from their Microsoft technology investments.
  • BMW and Microsoft continue to develop the Open Manufacturing Platform to enable industrial manufacturers to work together to break down data silos and overcome the challenges of complex, proprietary systems that slow down production optimization.

We are looking forward to meeting you at our Microsoft booth (Hall 5, C21) or at one of our IAA sessions. On your way to Frankfurt explore our Microsoft Connected Vehicle Platform microsite.



Supporting modern technology policy for the financial services industry – guidelines by the European Banking Authority | Transform

The financial services community has unprecedented opportunity ahead. With new technologies like cloud, AI and blockchain, firms are creating new customer experiences, managing risk more effectively, combating financial crime, and meeting critical operational objectives. Banks, insurers and other services providers are choosing digital innovation to address these opportunities at a time when competition is increasing from every angle – from traditional and non-traditional players alike.

At the same time, our experience is that lack of clarity in regulation can hinder adoption of these exciting technologies, as regulatory compliance remains fundamental to financial institutions using technology they trust.  Indeed, the common question I get from customers is: Will regulators let me use your technology, and have you built in the capabilities to help me meet my compliance obligations?

Dave Dadoun, assistant general counsel for Microsoft

With this in mind, we applaud the European Banking Authority’s (EBA) revised Guidelines on outsourcing arrangements which, in part, address the use of cloud computing. For several years now we have shared perspectives with regulators on how regulation can be modernized to address cloud computing without diminishing the security, privacy, transparency and compliance safeguards necessary in a native cloud or hybrid-cloud world. In fact, cloud computing can afford financial institutions greater risk assurance – particularly on key things like managing data, securing data, addressing cyber threats and maintaining resilience.

At the core of the revised guidelines are a set of flexible principles addressing cloud in financial services. Indeed, the EBA has been clear these “guidelines are subject to the principle of proportionality,” and should be “applied in a manner that is appropriate, taking into account, in particular, the institution’s or payment institution’s size … and the nature, scope and complexity of its activities.” In addition, the guidelines set out to harmonize approaches across jurisdictions, a big step forward for financial institutions to have predictability and consistency among regulators in Europe. We think the EBA took this smart move to support leading-edge innovation and responsible adoption, and prepare for more advanced technology like machine learning and AI going forward.

Given these guidelines reflect a modernized approach that transcends Europe, we have updated our global Financial Services Amendment for customers to reflect these key changes. We have also created a regulatory mapping document which shows how our cloud services and underlying contractual commitments map to these requirements in an EU Checklist. The EU Checklist is accessible on the Microsoft Service Trust Portal. In essence, Europe offers the benchmark in establishing rules to permit use of cloud for financial services and we are proud to align to such requirements.

Because this is such an important milestone for the financial sector, we wanted to share our point-of-view on a few key aspects of the guidelines, which may help firms accelerate technology transformation with the Microsoft cloud going forward:

  • Auditability: As cloud has become more prevalent, we think it is natural to extend audit rights to cloud vendors in circumstances that warrant it. We also think that audits are not a one-size-fits-all approach but adaptable based on use cases – particularly whether it involves running core banking systems in the cloud. Microsoft has provided innovations to help supervise and audit hyper-scale cloud, including:
  • Data localization: We are pleased there are no data localization requirements in the EBA guidance. Rather, customers must assess the legal, security and other risks where data is stored, as opposed to mandating data be stored strictly in Europe. We help customers manage and assess such risk by providing:
    • Contractual commitments to store data at rest in a specified region (including Europe).
    • Transparency where data is stored.
    • Full commitments to meet key privacy requirements, like the General Data Protection Regulation (GDPR).
    • Flow-through of such commitments to our subcontractors.
  • Subcontractors: The guidelines address subcontractors, particularly those that provide “critical or important” functions. Management, governance and oversight of Microsoft’s subcontractors is core to what we do. Among other things:
    • Microsoft’s subcontractors are subject to a vetting process and must follow the same privacy and governance controls we ourselves implement to protect customer data.
    • We provide transparency about subcontractors who may have access to customer data and provide 180 days notification about any new subcontractors as well.
    • We provide customers termination rights should they conclude a subcontractor presents a material increase in risk to a critical or important function of their operations.
  • Core platforms: We welcome the EBA’s position providing clarity that core platforms may run in the cloud. What matters is governance, documenting protocols, the security and resiliency of such systems, and having appropriate oversight (and audit rights), and commitments to terminate an agreement, if and when that becomes necessary. These are all capabilities Microsoft offers to its customers and we now see movement among leading banks to put core systems into our cloud because of the benefits we provide.
  • Business continuity and exit planning: Institutions must have business continuity plans and test them periodically for the use of critical or important functions. Microsoft has supported customers in meeting this requirement, including by providing a Modern Cloud Risk Assessment toolkit and, in the Service Trust Portal, documentation on our service resilience architecture, our Enterprise Business Continuity Management (EBCM) team and a quarterly report detailing results from our recent EBCM testing. In addition, we have supported our customers in preparing exit planning documentation, and we work with industry bodies like the European Banking Federation towards further industry guidance for these new EBA requirements.
  • Concentration risk: The EBA addresses the need to assess whether concentration risk may exist due to potential systemic failures in use of cloud services (and other legacy infrastructure). However, this is balanced with understanding what the risks are of a single point of failure, and to balance those risks and trade-offs from existing legacy systems. In short, financial institutions should assess the resiliency and safeguards provided with our hyper-scale cloud services, which can offer a more robust approach than systems in place today. When making those assessments, financial institutions may decide to lean-in more with cloud as they transform their businesses going forward.

The EBA framework is a great step forward to help modernize regulation and take advantage of cloud computing. We look forward to participating in ongoing industry discussion, such as new guidance under consideration by the European Insurance and Occupational Pension Authority concerning use of cloud services, as well as assisting other regions and countries in their journey to creating more modern policy that both supports innovation while protecting the integrity of critical global infrastructure.

For more information on Microsoft in the financial services industry, please go here.

