
Companies bolster endpoint data protection for remote work

With more people working from home due to the coronavirus, some companies have had to adjust how they handle backup and business continuity.

The spread of COVID-19, which is the disease caused by the new coronavirus, created a unique challenge for data protection experts. Instead of threatening data or applications, this disaster directly affects personnel. Because of social distancing and shelter-in-place orders, many employees must work remotely. Not all businesses’ IT infrastructure can easily accommodate this shift.

In recent months, MDL AutoMation, based in Roswell, Ga., has been testing a business continuity plan for when its employees can no longer come to work. The plan includes Carbonite software installed on all laptops, Dell DDPE encryption and Absolute DDS for asset tracking and security. This level of endpoint data protection is largely unnecessary when everyone works in the office, but Eric Gutmann, MDL AutoMation's manager of infrastructure, said the company may not have that option for long.

“We will be able to continue functioning as a company with all our employees working remotely as if they were in the office,” Gutmann said.

MDL is a software company that sells car tracking capabilities to car dealerships. It has a client base of about 250 dealerships and manages 1.4 TB of data gathered from IoT devices.

Gutmann said he has VPN and remote desktop protocol (RDP) ready, and the switch to remote working and enhanced endpoint data protection is meant to be temporary. He is prepared to implement it for two months.

No going back

Marc Staimer, president of Dragon Slayer Consulting, said it’s highly unlikely that any business that implements endpoint data protection will want to go back. Endpoint data protection is a separate investment from workstation data protection and involves extra security measures such as geolocation and remote wiping. Businesses that do not already have this will need to invest time and money into such a system, and will likely want to keep it after making that investment.

Many businesses may already be in a good position to support remote work. Staimer said organizations that use virtual desktop infrastructure (VDI) do not have to worry about backing up laptops, and less data-intensive businesses can have everyone work off of the cloud. Bandwidth is also much more abundant now, eliminating what used to be a roadblock to remote work.

With SaaS-based applications such as Microsoft Office 365 and Google Docs and cloud-based storage such as OneDrive and Dropbox, teleworking isn’t complicated to implement. The difficulty, according to Steven Hill, senior analyst at 451 Research, part of S&P Global Market Intelligence, comes from making sure everything on the cloud is just as protected as anything on premises.

Unlike endpoint data protection, using the cloud is more about locking down storage being used than protecting multiple devices. Whether it’s Dropbox, OneDrive or a private cloud NAS, an administrator only has to worry about protecting and securing that one management point. Aside from native tools, third-party vendors such as Backblaze and CloudAlly can provide data protection for these storage environments.

“Rather than storing business information locally, you could dictate that everything goes to and comes from the cloud,” Hill said.

Staimer said the pandemic will make many businesses realize they don’t need all of their workers in a single location. While some organizations won’t treat the coronavirus seriously enough to implement any of these systems, Staimer expects that for many, it will be the impetus to do what they should’ve been doing.


“Coronavirus is going to change the way we work — permanently,” Staimer said.

For some businesses, the biggest challenge will be accommodating workers who cannot perform their jobs from home. They may include partners or customers, as well as a company’s employees.

KCF Technologies, based in State College, Penn., which manufactures industrial diagnostic equipment, is already invested in endpoint data protection. Myron Semack, chief infrastructure architect at KCF, said the company is cloud-centric and many of its workers can work from anywhere.

However, the business would still be impacted if it or its customers go into lockdown because of the coronavirus. Not only would KCF be unable to produce its sensor products, but any installation or project work in the field would have to be suspended. This isn’t anything IT can fix.

“Our manufacturing line employees cannot work from home, unfortunately. If they were forced to stay home, our ability to build or ship product would be impacted,” Semack said.

Go to Original Article
Author:

For Sale – 4TB Red Pro | SOLD: 2 x 2TB WD Red HDDs, 8TB Red

Are you putting up any more of these reds in the coming days?

Go to Original Article
Author:

Three Years of Microsoft Teams

Microsoft Teams at 3: Everything you need to connect with your teammates and be more productive

This week marks the third anniversary of Microsoft Teams. It’s been an incredible three years, and we’re inspired to see the way organizations across the globe are using Teams to transform the way they work. Today, we’re sharing some new Teams capabilities across a few different aspects of the Teams experience, many with a tie to meetings.

Read more

Go to Original Article
Author: Microsoft News Center

Updated Exchange Online PowerShell module adds reliability, speed

PowerShell offers administrators a more flexible and powerful way to perform management activities in Exchange Online. At times, PowerShell is the only way to perform certain management tasks.

But many Exchange administrators have not always felt confident in Exchange Online PowerShell's abilities, especially when dealing with thousands of mailboxes and complicated actions. Microsoft recently released the Exchange Online PowerShell V2 module (also known as the ExchangeOnlineManagement module) to reduce potential management issues.

New cmdlets attempt to curb PowerShell problems

Moving the messaging platform to the cloud can frustrate administrators when they attempt to work with the system using remote PowerShell without a reliable connection to Microsoft’s hosted email system. Microsoft said the latest Exchange Online PowerShell module, version 0.3582.0, brings new enhancements and new cmdlets to alleviate performance and reliability issues, such as session timeouts or poor error handling during complex operations.

Where a spotty connection could cause errors or scripts to fail with the previous module, Microsoft added new cmdlets in the Exchange Online PowerShell V2 module to restart and attempt to run a script where it left off before issues started.

Microsoft added 10 new cmdlets in the new Exchange Online PowerShell module. One new cmdlet, Connect-ExchangeOnline, replaces two older cmdlets: Connect-EXOPSSession and New-PSSession.
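For illustration, a typical session with the new module might look like the following sketch; the user principal name is a placeholder, and the commands assume the ExchangeOnlineManagement module is already installed:

```powershell
# Connect once per session; Connect-ExchangeOnline handles the session
# setup that previously required Connect-EXOPSSession or New-PSSession.
Connect-ExchangeOnline -UserPrincipalName admin@contoso.onmicrosoft.com -ShowBanner:$false

# Run REST-based cmdlets in the authenticated session.
Get-EXOMailbox -ResultSize 10 -PropertySets Minimum

# Clean up the session when finished.
Disconnect-ExchangeOnline -Confirm:$false
```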

Microsoft took nine additional cmdlets in the older module, updated them to use REST APIs and gave them new names using the EXO prefix:

  • Get-EXOMailbox
  • Get-EXORecipient
  • Get-EXOCASMailbox
  • Get-EXOMailboxPermission
  • Get-EXORecipientPermission
  • Get-EXOMailboxStatistics
  • Get-EXOMailboxFolderStatistics
  • Get-EXOMailboxFolderPermission
  • Get-EXOMobileDeviceStatistics

Microsoft said the new REST-based cmdlets will perform significantly better and faster than the previous PowerShell module. The REST APIs offer a more stable connection to the Exchange Online back end, making most functions more responsive and able to operate in a stateless session.

Given that administrators develop complex PowerShell scripts for their management needs, they need more stability from Microsoft's end to ensure these tasks execute properly. Microsoft supported those development efforts by introducing better script failure handling that retries and resumes from the point of failure. Previously, the only option for administrators was to rerun their scripts and hope they worked the next time.

In some cases, querying certain properties during script execution can significantly affect the script's overall response time and performance, given the size of the objects and their properties. To optimize these scenarios, Microsoft introduced a way for a PowerShell process running against Exchange Online to retrieve only the relevant properties of objects needed during execution, such as mailbox statistics, identities and quotas.

Microsoft removed the need to use the Select parameter, which is typically used to determine which properties appear in the result set. This neatens scripts and eliminates unnecessary syntax, as shown in the example below.

Before:

Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Select WhenCreated, WhenChanged | Export-CSV C:\temp\ExportedMailbox.csv

After:

Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged | Export-CSV C:\temp\ExportedMailbox.csv

How to get the new Exchange Online PowerShell module

To start using the latest Exchange Online PowerShell capabilities, install or upgrade the ExchangeOnlineManagement module. You can do this from a PowerShell prompt running with administrator privileges by executing one of the two following commands:

Install-Module -Name ExchangeOnlineManagement
Import-Module ExchangeOnlineManagement; Get-Module ExchangeOnlineManagement

Or:

Update-Module -Name ExchangeOnlineManagement
New Exchange Online PowerShell module users can use the Install-Module command to start working with the new cmdlets.

Exchange Online PowerShell V2 module commands offer speed boost

IT pros who use the new Exchange Online PowerShell module should see improved performance and faster response time.

We can run a short test to compare how the current version stacks up to the previous version when we run commands that provide the same type of information.

First, let’s run the following legacy command to retrieve mailbox information from an organization:

Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Select DisplayName, ProhibitSendReceiveQuota, WhenCreated, WhenChanged

The command completes in 2.3890 seconds.

One typical use of PowerShell on Exchange Online is to use the Get-Mailbox cmdlet to retrieve information about mailboxes used by members of the organization.

This is the new version of the command that provides the same set of information but in a slightly different format:

$RESTResult = Measure-Command { $Mbx = Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged }

The command completes in 1.29832 seconds, or almost half the time. Extrapolate these results to an organization with many thousands of users and mailboxes in Exchange Online and you can begin to see the benefit when a script takes half as much time to run.
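The timing comparison can be reproduced with Measure-Command; the following is a sketch, and the absolute numbers will vary with tenant size, throttling and network conditions:

```powershell
# Time the legacy cmdlet.
$legacy = Measure-Command {
    Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
        Select-Object DisplayName, ProhibitSendReceiveQuota, WhenCreated, WhenChanged
}

# Time the REST-based replacement retrieving the same information.
$rest = Measure-Command {
    Get-EXOMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox `
        -PropertySets Quota -Properties WhenCreated, WhenChanged
}

"Legacy: {0:N2}s  REST: {1:N2}s" -f $legacy.TotalSeconds, $rest.TotalSeconds
```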

Use the following command to get mailbox details for users in the organization:

Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged
The updated Get-ExoMailbox cmdlet fetches detailed information for a mailbox hosted in Exchange Online.

The following command exports a CSV file with details of mailboxes with additional properties listed:

Get-ExoMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox -PropertySets Quota -Properties WhenCreated, WhenChanged | Export-CSV C:\temp\ExportedMailbox.csv

Be aware of the Exchange Online PowerShell module provisions

There are several caveats Exchange administrators should know before they use the latest ExchangeOnlineManagement module:

  • The new Exchange Online PowerShell module works only on Windows PowerShell 5.1; support for the cross-platform version of PowerShell is coming.
  • Data returned by the new cmdlets is sorted alphabetically, not chronologically, so some results may need re-sorting or additional formatting.
  • The new module supports only OAuth 2.0 authentication, but the client machine still needs Basic authentication enabled to use the older remote PowerShell cmdlets.
  • Administrators should use the Azure AD GUID for account identity.
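Because the V2 cmdlets return results alphabetically, one way to restore a chronological view is to re-sort on the client side. A minimal sketch, assuming an active Connect-ExchangeOnline session:

```powershell
# Re-sort alphabetically ordered results by creation date (newest first).
Get-EXOMailbox -ResultSize Unlimited -Properties WhenCreated |
    Sort-Object -Property WhenCreated -Descending |
    Select-Object DisplayName, WhenCreated
```

Sort-Object runs locally, so the re-ordering adds no extra calls to Exchange Online.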

How to give Microsoft feedback for additional development

As Microsoft continues to improve the module, administrators will see more capabilities that make managing their Exchange Online environment with PowerShell a much better experience.

There are three avenues for users to provide feedback to Microsoft on the new PowerShell commands. The first is to report bugs or other issues encountered while running scripts from within PowerShell. To do this, run the following command:

Connect-ExchangeOnline -EnableErrorReporting -LogDirectoryPath <Path to store log file> -LogLevel All

The second option is to post a message on the Office 365 UserVoice forum.

Lastly, users can file an issue with the Exchange Online PowerShell commands, or check on the status of one, on the Microsoft Docs GitHub site.

Go to Original Article
Author:

The 10 most innovative workplace companies of 2020

Whether helping employees to communicate more seamlessly or helping underemployed mid-career women boost their incomes, these 10 companies are creating new ways to make working life fairer and more rewarding.


1. Microsoft

For taking the slack out of messaging communications for first-line employees

It’s hard to argue with Microsoft’s dominance: Teams has more than 20 million daily users, with 91 of the Fortune 100 utilizing the platform. Last year’s improvements included greater AI integration, as well as additional tools for first-line workers (those working in people-facing positions, like doctors, or service industry employees).

Read more about why Microsoft is one of the World’s Most Innovative Companies of 2020.

2. Mursion

For teaching EQ via VR for the likes of Coca-Cola, Nationwide, and T-Mobile

Mursion is a virtual reality training tool that combines AI and interactions with trained actors to help develop stronger soft skills among employees, such as the ability to recognize bias. Clients include companies like T-Mobile, Coca-Cola, Best Western, and Nationwide.

3. Pipeline Equity

For giving companies the tools to improve gender equality when they hire and promote


Founded in 2017, Pipeline demonstrates the connection between gender parity and economic opportunity. The subscription-based platform analyzes company-specific data to make recommendations about moves that will both increase business outcomes and improve gender balance.

4. Dropbox

For thinking outside the file folder

The company introduced numerous enhancements in 2019, including integrations with various other platforms (like Google Docs/Sheets/Slides) and a cold-storage option. Dropbox also acquired e-signature company HelloSign, and used shingled magnetic recording (SMR) to reduce energy use and storage costs at data centers.

5. DocuSign

For closing mortgages in the cloud

Document signing isn’t the sexiest topic, but it is a critical one. The company has expanded into a number of related areas, including DocuSign Rooms for Mortgage and DocuSign Identify, for verification of IDs.

6. Coda

For applying app-like functionality to document creation


A flexible document creation and management tool, Coda emphasizes the democratization of software, enabling users to build app-like solutions without a coding background. Officially launched in February 2019, Coda is already used by a number of companies, including Spotify, Cheddar, and Uber.

7. The Second Shift

For creating flexible gigs for mid-career women

The Second Shift is a small company, but it’s tackling a critical workforce problem: lack of flexible opportunities for women, especially mid-career, when many are assuming primary caregiving responsibilities for children and/or parents. The company connects employers with experienced women to fill in for positions or tackle special projects.

8. Bluecrew

For matching workers with hourly jobs with health insurance, overtime, sick pay, and workers comp

At a time when many gig-economy employers don’t offer workers comprehensive protections, Bluecrew is helping secure benefits for hourly workers at companies like Blue Bottle and Levi’s Stadium.

9. Lattice

For making performance reviews a continual process via apps


The HR software company is working with more than 1,400 companies to integrate management processes into tools employees are already using, such as Slack.

10. Samepage

For putting chat, email, files, and tasks on the same . . . you know

Samepage is emerging as a leading collaborative tool in the booming intranet market, offering such features as integrated videoconferencing and threaded chats.

Read more about Fast Company’s Most Innovative Companies:

Go to Original Article
Author: Microsoft News Center

For Sale – 4TB Red Pro | 2 x 2TB WD Red HDDs | SOLD: 8TB Red

Are you putting up any more of these reds in the coming days?

Go to Original Article
Author:

Developers vie for Oracle Cloud Infrastructure certs

Oracle hopes a new developer certification program can drive more interest in its cloud platform, which is well behind the likes of AWS, Microsoft and Google in terms of market share.

To this end, Oracle has introduced a new Developer Associate certification for Oracle Cloud Infrastructure, to help developers familiarize themselves with the platform and ultimately build modern enterprise applications there. Oracle now offers five distinct certifications for architects, operators and developers on OCI.

“We absolutely expect this certification will help to grow the number of developers in the Oracle cloud ecosystem,” said Bob Quillin, vice president of Oracle cloud developer relations. “This is a real, professional set of skills and we have a broad set of tools in the toolbox to get and keep developers up to speed.”

Oracle has struggled to attract enterprise customers to OCI, the second-generation version of its cloud platform. The company is hoping the certifications will help in that regard by teaching developers how to use the Oracle Cloud Infrastructure service APIs, command-line interface and SDKs to write applications.

Bob Quillin, VP of Oracle cloud developer relations

“Certifications matter when it comes to building software,” said Holger Mueller, principal analyst at Constellation Research in Cupertino, Calif. “It matters for enterprises to know what they can expect from developers with a certain certification level. And developers want to document their skills level with a certification.”

Passing the certification means that developers have learned OCI architecture, as well as use cases and best practices, Quillin said.

Traditional leading application platform providers such as Oracle have a massive installed base of customers and developers invested in their solutions, and those customers will continue to adopt new cloud-based solutions.


However, “Beyond that pool, developers are less motivated to climb onboard a new brand, which is very costly, so vendors are tailoring new programs and certificates to attract non-Oracle expertise developers and broaden its developer following,” said Charlotte Dunlap, principal analyst at GlobalData in Santa Cruz, Calif.

Quillin said the move to add the new certification was based on developer demand.

“We have thousands of Oracle Cloud Infrastructure certified individuals and are seeing a strong customer demand for Oracle Cloud technical skills,” he said.

This reflects strong momentum among developers who want to build new applications, as well as build extensions to existing applications and data.

“The creation of a certification is usually a sign of maturity for a platform or product,” Mueller said. “This is the case for Oracle’s cloud developer certification, which just launched. As with all certifications, popularity will determine relevance and we will see in a few quarters what the uptake will be.”

Oracle University is offering the new Developer Associate certification at $150. Oracle University is also reducing the price of all Oracle Cloud Infrastructure Associate level certifications — including architect and operations — to $150 and offering the Foundations certification at $95.

In addition, Oracle University has made all its Oracle Cloud Infrastructure learning content available at no charge. Oracle previously charged $2,995 per user subscription.  

The certification exam is 105 minutes long and contains 60 questions. It is currently available only in English.

Moreover, the developer certification is aimed at developers who have 12 or more months of experience in developing and maintaining applications. They should also have an understanding of cloud-native fundamentals and knowledge of at least one programming language.

Go to Original Article
Author:

How to fortify your virtualized Active Directory design

Active Directory is much more than a simple server role. It has become the single sign-on source for most, if not all, of your data center applications and services. This access control covers workstation logins and extends to clouds and cloud services.

Since AD is such a key part of many organizations, it is critical that it is always available and has the resiliency and durability to match business needs. Microsoft had the foresight to design AD as a distributed platform that can continue to function with little or, in some cases, no interruption in service even if parts of the system go offline. This was helpful when AD nodes were still physical servers, often spread across multiple racks or data centers to avoid downtime. So the question becomes: what's the right way to virtualize your Active Directory design?

Don’t defeat the native AD distributed abilities

Active Directory is a distributed platform, and virtualizing it carelessly can hinder that native distributed functionality. AD nodes can be placed on different hosts, and failover software will restart VMs if a host crashes, but what if your primary storage goes down? It's one scenario you should not discount.

When you undertake the Active Directory design process for a virtualization platform, you must go beyond just a host failure and look at common infrastructure outages that can take out critical systems. One of the advantages of separate physical servers was the level of resiliency the arrangement provided. While we don’t want to abandon virtual servers, we must understand the limits and concerns associated with them and consider additional areas such as management clusters.

Management clusters are often slightly lower-tier platforms — normally still virtualized — that contain only management servers, applications and infrastructure. This is where you would want to place a few AD nodes, so they sit outside the production environment they manage. The challenge with a virtualized management cluster is that it must not be placed on the same physical storage as production; sharing that storage would defeat the purpose of separation of duties. You can use more cost-effective storage platforms such as a virtual storage area network for shared storage, or even local storage.

Remember, this is infrastructure and not core production, so IOPS should not be as much of an issue because the goal is resiliency, not performance. This means local drives and RAID groups should be able to provide the IOPS required.

How to keep AD running like clockwork

One of the issues with AD controllers in a virtualized environment is time drift.

All computers have clocks and proper timekeeping is critical to both the performance and security of the entire network. Most servers and workstations get their time from AD, which helps to keep everything in sync and avoids Kerberos security login errors.

These AD servers would usually get their time from an external time source if they were physical, or from their hosts if virtualized. The AD servers would then keep time synchronized with the computer's internal clock, which is based on CPU cycles.

When you virtualize a server, it no longer has a fixed number of CPU cycles to base its time on, so the clock can drift until the server reaches out for an external time check and resets itself. Even that check can be off, because the server cannot reliably measure the passage of time between checks, which compounds the issue. Time drift can become stuck in a nasty loop, because the virtualization hosts often get their time from Active Directory.

Your environment needs an external time source that is not dependent on virtualization to keep things grounded. While internet time sources are tempting, having the infrastructure reach out for time checks might not be ideal. A core switch or other key piece of networking gear can offer a dependable time source that is unlikely to be affected by drift due to its hardware nature. You can then use this time source as the sync source for both the virtualization hosts and AD, so all systems are on the same time that comes from the same source.
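As a sketch of that configuration on the domain's PDC emulator, the built-in w32tm tool can point Windows Time at the hardware source; the switch's address below is a placeholder:

```powershell
# Point Windows Time at the core switch's NTP service instead of the
# virtualization host; 10.0.0.1 is a placeholder address.
w32tm /config /manualpeerlist:"10.0.0.1" /syncfromflags:manual /reliable:yes /update
Restart-Service w32time
w32tm /resync

# Confirm which source is in use and how far the clock has drifted.
w32tm /query /status
```

The virtualization hosts would be pointed at the same source, and the hypervisor's time-synchronization integration service is commonly disabled for domain controller guests so they rely on the domain time hierarchy instead.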

Some people will insist on a single physical server in a virtualized data center for this reason. That’s an option, but one that is not usually needed. Virtualization isn’t something to avoid in Active Directory design, but it needs to be done with thought and planning to ensure the infrastructure can support the AD configuration. Management clusters are key to the separation of AD nodes and roles.

High availability (HA) rules for Hyper-V or VMware environments are still required. Both production and management environments should have HA rules that prevent AD servers from running on the same hosts.

Rules should be in place to ensure these servers restart first and have reserved resources for proper operations. Smart HA rules are easy to overlook as more AD controllers are added and the rules configuration is forgotten.
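In a Hyper-V failover cluster, for example, anti-affinity and restart priority can be set from PowerShell; the VM names below are hypothetical:

```powershell
Import-Module FailoverClusters

# Tag both domain controller VMs with the same anti-affinity class so the
# cluster tries to keep them on different hosts.
$class = New-Object System.Collections.Specialized.StringCollection
[void]$class.Add("DomainControllers")
(Get-ClusterGroup -Name "DC01").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "DC02").AntiAffinityClassNames = $class

# Give the AD VMs high restart priority (3000 = High) so they come up first.
(Get-ClusterGroup -Name "DC01").Priority = 3000
(Get-ClusterGroup -Name "DC02").Priority = 3000
```

VMware environments can achieve the same separation with DRS VM-VM anti-affinity rules and restart priority settings in vSphere HA.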

The goal is not to prevent outages from happening — that’s not possible. It is to have enough replicas and roles of AD in the right places so users won’t notice. You might scramble a little behind the scenes if a disruption happens, but that’s part of the job. The key is to keep customers moving along without them knowing about any of the issues happening in the background.

Go to Original Article
Author:

For Sale – ASUS ROG G752VY-DH72 17-Inch Gaming Laptop

Perfect condition but out of box (I decided it's too big to use day to day)

$980 – More pics on request

Reference 90NB09V1-M00060
Model_Name G752VY-DH72
Additional Warranty 24-7 tech support
Color ROG Copper Silver
Touch No Touch Screen
LCD 17.3″ G-SYNC IPS FHD (1920*1080)
CPU Intel Quad-Core i7-6700HQ 2.6GHz (Turbo up to 3.5GHz)
Memory 32GB DDR4 (2133MHZ)
VGA NVIDIA GTX980M 4GB GDDR5
HDD 1TB (7200 RPM) + 256GB SSD (PCIEG3x4)
ODD Blu-ray Writer
OS Windows 10 (64bit)
WLAN 802.11 ac
Webcam 1.2MP HD Camera
Bluetooth Bluetooth 4.0
Keyboard Backlit Chiclet
Card Reader SD, MMC
Battery 88WHrs, 4S2P, 8 cell Li-ion Battery Pack up to 4 hours

Go to Original Article
Author:

What Exactly is Azure Dedicated Host?

In this blog post, we'll become more familiar with a new Azure service called Azure Dedicated Hosts. Microsoft announced the service in preview some time ago and will make it generally available in the near future.

Microsoft Azure Dedicated Host allows customers to run their virtual machines on a dedicated host not shared with other customers. While in a regular virtual machine scenario different customers or tenants share the same hosts, with Dedicated Host a customer no longer shares the hardware. The picture below illustrates the setup.

Azure Dedicated Hosts

With Dedicated Host, Microsoft wants to address customer concerns around compliance, security and regulation that can come up when running on a shared physical server. In the past, there was only one way to get a dedicated host in Azure: use a very large instance, such as the D64s v3 VM size. These instances were so large that they consumed an entire host, leaving no room to place other customers' VMs.

To be honest, with improvements in machine placement and larger hosts delivering much better density, there was no longer a 100% guarantee that such a host remained dedicated. These large instances are also extremely expensive, as you can see in the screenshot from the Azure Pricing Calculator.

Azure price calculator

How to Setup a Dedicated Host in Azure

Setting up a dedicated host is pretty easy. First, create a host group with your availability preferences, such as availability zones and the number of fault domains. You also need to choose a host region, a group name and so on.

How To Setup A Dedicated Host In Azure

After you create the host group, you can create a host within it. In the current preview, only the Dsv3 and Esv3 VM families are available to choose from. Microsoft will add more options soon.

Create dedicated host
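For those scripting the deployment instead of using the portal, the same steps can be sketched with the Az PowerShell module; the resource names, region and SKU below are examples:

```powershell
# Create the host group with two fault domains in availability zone 1.
New-AzHostGroup -ResourceGroupName "rg-dedicated" -Name "hg-demo" `
    -Location "eastus" -PlatformFaultDomain 2 -Zone "1"

# Create a dedicated host in the group; the preview offers Dsv3/Esv3 SKUs.
New-AzHost -ResourceGroupName "rg-dedicated" -HostGroupName "hg-demo" `
    -Name "host01" -Location "eastus" -Sku "DSv3-Type1" -PlatformFaultDomain 0

# Place a VM on that specific host via the host's resource ID.
$dedicatedHost = Get-AzHost -ResourceGroupName "rg-dedicated" -HostGroupName "hg-demo" -Name "host01"
New-AzVM -ResourceGroupName "rg-dedicated" -Name "vm01" -Location "eastus" `
    -Size "Standard_D4s_v3" -HostId $dedicatedHost.Id
```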

More Details About Pricing

As you can see in the screenshot, Microsoft added the option to use Azure Hybrid Benefit with Dedicated Host. That means you can use your on-premises Windows Server and SQL Server licenses with Software Assurance to reduce your costs in Azure.

Azure Hybrid Use Benefits pricing

Azure Dedicated Host also gives you more insight into the host, including:

  • The underlying hardware infrastructure (host type)
  • Processor brand, capabilities, and more
  • Number of cores
  • Type and size of the Azure Virtual Machines you want to deploy

An Azure customer can control all host-level platform maintenance initiated by Azure, such as OS updates. Azure Dedicated Host lets you schedule a maintenance window of up to 35 days in which these updates are applied to your host system. During this self-maintenance window, customers can apply maintenance to hosts at their own convenience.

Looking a bit deeper into the service, Azure becomes more like a traditional hosting provider that gives customers a very dynamic platform.

The following screenshot shows the current pricing for a Dedicated Host.

Azure Dedicated Host pricing details

The following virtual machine types can run on a dedicated host.

Virtual Machines on a Dedicated Host

Currently, there is a soft limit of 3,000 vCPUs for dedicated hosts per region. The limit can be raised by submitting a support ticket.

When Would I Use A Dedicated Host?

In most cases, you would choose a dedicated host for compliance reasons: you may not want to share a host with other customers. Another reason could be that you want a guaranteed CPU architecture and type; if you place your VMs on the same host, they are guaranteed to have the same architecture.

Further Reading

Microsoft already published a lot of documentation and blogs about the topic so you can deepen your knowledge about Dedicated Host.

Resource #1: Announcement Blog and FAQ 

Resource #2: Product Page 

Resource #3: Introduction Video – Azure Friday “An introduction to Azure Dedicated Hosts | Azure Friday”

Go to Original Article
Author: Florian Klaffenbach