
Monitor Active Directory replication via PowerShell

When Active Directory replication breaks down, administrators need to know quickly to prevent issues with the services and applications that Active Directory oversees.

It is important to monitor Active Directory replication to ensure the process remains healthy. Larger organizations that use Active Directory typically have several domain controllers that rely on replication to synchronize networked objects — users, security groups, contacts and other information — in the Active Directory database. Changes to the database can be made at any domain controller and must then be replicated to the other domain controllers in the Active Directory forest. If the changes are not synchronized to a particular domain controller — or all domain controllers — in an Active Directory site, users in that location might encounter problems.

For example, if an administrator applies a security policy setting via a Group Policy Object to all workstations, all domain controllers in a domain should pick up the GPO changes. If one domain controller in a particular location fails to receive this update, users in that area will not receive the security configuration.

Why does Active Directory replication break?

Active Directory replication can fail for several reasons. If network ports between the domain controllers are not open or if the connection object is missing from a domain controller, then the synchronization process generally stops working.
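As a quick first check when replication stalls, the Test-NetConnection cmdlet (built into Windows 8/Server 2012 and later) can confirm that a partner domain controller is reachable on some of the ports replication depends on. A minimal sketch; the server name is a placeholder:

# Test a few ports Active Directory replication relies on against a partner DC.
$TargetDC = "DC02.contoso.com"
ForEach ($Port in 135, 389, 445)
{
    Test-NetConnection -ComputerName $TargetDC -Port $Port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}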

Domain controllers also rely on the Domain Name System (DNS). If their DNS service (SRV) records are missing, the domain controllers cannot locate and communicate with each other, which causes a replication failure.
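One way to verify those records is to query DNS for the domain controller SRV record directly; a minimal check with Resolve-DnsName, using a placeholder domain name, looks like this:

# List the domain controllers registered in DNS for the domain (replace contoso.com with your domain).
Resolve-DnsName -Name "_ldap._tcp.dc._msdcs.contoso.com" -Type SRV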

Check Active Directory replication status manually

There are many ways to check the Active Directory replication status manually.

Administrators can run the command-line repadmin utility with the following parameters to show the replication errors in the Active Directory forest:
repadmin /replsum /bysrc /bydest /errorsonly

Administrators can also use the Get-ADReplicationPartnerMetadata PowerShell cmdlet to check the replication status; this is the approach the script later in this article uses.
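For example, a quick interactive check against a single domain controller might look like the following sketch; the DC01 server name is a placeholder:

# Show each replication partner of DC01 and the result of the last replication attempt
# (a LastReplicationResult of 0 means the last attempt succeeded).
Get-ADReplicationPartnerMetadata -Target "DC01" |
    Select-Object Server, Partner, Partition, LastReplicationResult, LastReplicationSuccess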

Use a script to check replication health

While larger organizations might have an enterprise tool, such as System Center Operations Manager, to monitor Active Directory, a PowerShell script can be a helpful supplement that alerts administrators to replication problems. Because so much of a business relies on a properly functioning Active Directory system, it can’t hurt to implement this script and have it run every day via a scheduled task. If the script finds an error, it will send an alert via email.

The system must meet a few requirements before the script executes:

  • The script must run on a computer that can reach all domain controllers.
  • The computer should run Windows Server 2012 R2 or Windows 10 and be joined to a domain in the Active Directory forest.
  • The computer must have the Active Directory PowerShell module installed.

How does the script work?

The PowerShell script uses the Get-ADReplicationPartnerMetadata cmdlet, which connects to the primary domain controller (PDC) emulator in the Active Directory forest and then collects the replication metadata for each domain controller.

The script checks the value of the LastReplicationResult attribute for each domain controller entry. If the value of LastReplicationResult is nonzero for any domain controller, the script considers this a replication failure. If this error is found, the script executes the Send-MailMessage cmdlet to send an email with the report attached as a CSV file. The script stores the replication report in C:\Temp\ReplStatus.CSV.

Modify the settings in the script to set the sender and recipient email addresses, the subject line and the message body.

PowerShell script to check replication status

The following PowerShell script helps admins monitor Active Directory for these replication errors and delivers the findings via email. Be sure to modify the email settings in the script.

$ResultFile = "C:\Temp\ReplStatus.CSV"
$ADForestName = "TechTarget.com"

# Load the Active Directory module (requires RSAT or a domain controller).
Import-Module ActiveDirectory

# Find the PDC emulator of the forest root domain; replication metadata is enumerated against it.
$GetPDCNow = Get-ADForest $ADForestName | Select-Object -ExpandProperty RootDomain | Get-ADDomain | Select-Object -Property PDCEmulator
$GetPDCNowServer = $GetPDCNow.PDCEmulator

$FinalStatus = "Ok"

# Collect replication metadata for every domain controller and partition, keeping only
# entries whose last replication attempt failed (LastReplicationResult is nonzero).
Get-ADReplicationPartnerMetadata -Target * -Partition * -EnumerationServer $GetPDCNowServer -Filter {(LastReplicationResult -ne "0")} | Select-Object LastReplicationAttempt, LastReplicationResult, LastReplicationSuccess, Partition, Partner, Server | Export-CSV $ResultFile -NoType -Append -ErrorAction SilentlyContinue

# If the CSV holds more than the header row, at least one replication failure was recorded.
$TotNow = Get-Content $ResultFile
$TotCountNow = $TotNow.Count

IF ($TotCountNow -ge 2)
{
    $AnyOneOk = "Yes"
    $RCSV = Import-CSV $ResultFile
    ForEach ($AllItems in $RCSV)
    {
        IF ($AllItems.LastReplicationResult -eq "0")
        {
            $FinalStatus = "Ok"
            $TestStatus = "Passed"
            $SumVal = ""
            $TestText = "Active Directory replication is working."
        }
        else
        {
            $AnyGap = "Yes"
            $SumVal = ""
            $TestStatus = "Critical"
            $TestText = "Replication errors occurred. Active Directory domain controllers are causing replication errors."
            $FinalStatus = "NOTOK"
            break
        }
    }
}

$TestText

IF ($FinalStatus -eq "NOTOK")
{
    ## Since some replication errors were reported, start the email procedure here...

    ### START - Modify email parameters here
    $message = @"
Active Directory Replication Status

Active Directory Forest: $ADForestName

Thank you,
PowerShell Script
"@

    $SMTPPasswordNow = "PasswordHere"
    $ThisUserName = "UserName"
    $MyClearTextPassword = $SMTPPasswordNow
    $SecurePassword = ConvertTo-SecureString -String $MyClearTextPassword -AsPlainText -Force
    $ToEmailNow = "EmailAddressHere"
    $EmailSubject = "SubjectHere"
    $SMTPServerNow = "SMTPServerName"
    $SMTPSenderNow = "SMTPSenderName"
    $SMTPPortNow = "SMTPPortHere"
    ### END - Modify email parameters here

    $AttachmentFile = $ResultFile

    $creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$ThisUserName", $SecurePassword

    # The message is always sent over SSL; remove -UseSsl if your SMTP server does not support it.
    Send-MailMessage -Credential $creds -SmtpServer $SMTPServerNow -From $SMTPSenderNow -Port $SMTPPortNow -To $ToEmailNow -Subject $EmailSubject -Attachments $AttachmentFile -UseSsl -Body $message
}

When the script completes, it generates a file that details the replication errors.

Replication error report
The PowerShell script compiles the Active Directory replication errors in a CSV file and delivers those results via email.

Administrators can run this script automatically through the Task Scheduler. Since the script takes about 10 minutes to run, it might be best to set it to run at a time when it will have the least impact, such as midnight.
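One way to create that task, assuming the script has been saved to a path such as C:\Scripts\ReplCheck.ps1 (a placeholder), is with the built-in scheduled task cmdlets:

# Register a task that runs the replication check every day at midnight.
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\ReplCheck.ps1"
$Trigger = New-ScheduledTaskTrigger -Daily -At "12:00AM"
Register-ScheduledTask -TaskName "AD Replication Check" -Action $Action -Trigger $Trigger -RunLevel Highest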

Scarab ransomware joins with Necurs botnet for faster spread

Researchers saw a surge of activity as the Scarab ransomware spread quickly to millions of victims via an email campaign run by the Necurs botnet, but updates since that initial wave have been lacking.

Ben Gibney and Roland Dela Paz, security researcher and senior security researcher, respectively, for Forcepoint Security Labs LLC, based in Dublin, reported a surge in the volume of Scarab ransomware emails blocked by security systems on Nov. 23. According to the researchers, more than 12.5 million emails were captured between 07:00 and 12:00 UTC, and the current campaign of Scarab ransomware used emails that looked like scanned documents, similar to “Locky ransomware campaigns distributed via Necurs.”

The Scarab ransomware was first seen in the wild in June, but the recent resurgence has been credited to the malware being spread via the Necurs botnet. Necurs was first discovered by cybersecurity vendors in 2012, and the botnet has grown steadily since then. The Necurs botnet was previously used to spread the Dridex banking malware and Locky ransomware, though the botnet’s activity decreased sharply following a series of raids and arrests of suspected hackers in Russia last year.

“By employing the services of larger botnets such as Necurs, smaller ransomware players such as the actors behind Scarab are able to run a massive campaign with a global reach,” Gibney and Dela Paz wrote in a blog post. “It remains a question whether this is a temporary campaign, as was the case with Jaff, or if we will see Scarab increase in prominence through Necurs-driven campaigns.”

It is still unclear whether the campaign was temporary. Forcepoint has not updated its initial figures since the Nov. 23 post, and the company had not responded to requests for more data as of this writing.

Andy Norton, director of threat intelligence at Lastline, said the Necurs botnet can be a dangerous delivery system, but as yet it has only been seen propagating ransomware.

“Necurs is so popular to push malware and ransomware because it contains lots of concealment technology like the use of packers to evade static analysis, and lots of evasion technology to avoid being discovered by behavioral malware analysis platforms,” Norton told SearchSecurity. “It is able to survive inside an enterprise security environment, making it successful as a platform for delivering other subsequent malicious payloads.”

This month on Bing: holiday shopping, photo contest, new sports and weather experiences

This month at Bing we shipped several new experiences that help you quickly find what you’re looking for.
 

Holiday Shopping

With the holidays upon us, we have new tools to make it easy for you to search and discover the best deals on gifts.

[Image: Black Friday countdown]

Bing Shopping helps you save time by bringing together products from multiple sellers across the Web into one search experience. Be it televisions, clothing and shoes, toys for your little ones, or gift baskets for your loved ones, you can compare across a wide range of products, filter your choices, compare prices, and visit the seller’s website where you can complete the purchase.

The Black Friday flyers page is a one-stop shop to help you find Black Friday ads from across major stores in the US, saving you time mapping out your Black Friday strategy. So whether you are deal hunting or browsing specific stores, bookmark this page and visit it often to discover the latest deals as they become available.

Also in time for the holidays, we’ve increased the number of delivery services we support for package tracking, expanding our coverage from USPS, UPS, and FedEx in the US to several markets outside the US, including myHermes in Great Britain and Purolator in Canada. Simply put your tracking ID in the search box, and Bing will present the latest tracking status right in the search result.

[Image: Purolator package tracking on Bing]
 

#AmplifyIngenuity Photo Contest

We also launched an #AmplifyIngenuity photo contest on 11/10 to help our users find inspiration in the ways humanity has used its intelligence to make a better future.


 

Historical weather and sports information

Bing already offers current weather and football information, but now you can also check historical statistics for these topics.

For example, if you’re planning holiday travel or even just deciding what to do at home, you can look up historical weather patterns to know what to expect.

[Image: historical weather results on Bing]

Similarly, you can now go beyond searching current NFL results and brush up on your pro football knowledge by checking the results of historical games.

[Image: historical football results on Bing]

We hope you’re as excited by these releases as we are; we’d love to hear your thoughts and feedback at User Voice!

– The Bing Team
 

Cisco cloud VP calls out trends in multicloud strategy

Large enterprises have quickly embraced multicloud strategy as a common practice — a shift that introduces opportunities, as well as challenges.

Cisco has witnessed this firsthand, as the company seeks a niche in a shifting IT landscape. Earlier this year, Cisco shuttered Intercloud Services, its failed attempt to create a public cloud competitor to Amazon Web Services (AWS). Now, Cisco’s bets are on a multicloud strategy to draw from its networking and security pedigree and sell itself as a facilitator for corporate IT’s navigation across disparate cloud environments.

In an interview with SearchCloudComputing, Kip Compton, vice president of Cisco’s cloud platforms and solutions group, discussed the latest trends with multicloud strategy and where Cisco plans to fit in the market.

How are your customers shifting their view on multicloud strategy?

Kip Compton: It started with the idea that each application is going to be on separate clouds. It’s still limited to more advanced customers, but we’re seeing use cases where they’re spanning clouds with different parts of an application or subsystems, either for historical reasons or across multiple applications, or taking advantage of the specific capabilities in a cloud.

Hybrid cloud was initially billed as a way for applications to span private and public environments, but that was always more hype than reality. What are enterprises doing now to couple their various environments?

Compton: The way we approach hybrid cloud is as a use case where you have an on-prem data center and a public data center and the two work together. Multicloud, the definition we’ve used, is at least two clouds, one of which is a public cloud. In that way, hybrid is essentially a subset of multicloud for us.

Azure Stack is a bit of an outlier, but hybrid has changed for most of our customers in terms of it not being tightly coupled. Now it is deployments where they have certain code that runs in both places, and the two things work together to deliver an application. They’re calling that hybrid, whereas in the early days, it was more about seamless environments and moving workloads between on prem and the public cloud based on load and time of day, and that seems to have faded.

What are the biggest impediments to a successful multicloud strategy?

Compton: Part of it is what types of problems do people talk about to Cisco, as opposed to other companies, so I acknowledge there may be some bias there. But there are four areas that are pretty reliable for us in customer conversations.

First is networking, not surprisingly, and they talk about how to connect from on prem to the cloud. How do they connect between clouds? How do they figure out how that integrates with their on-prem connectivity frameworks?

Then, there’s security. We see a lot of companies carry forward their security posture as they move workloads; so virtual versions of our firewalls and things like that, and wanting to align with how security works in the rest of their enterprise.

The third is analytics, particularly application performance analytics. If you move an app to a completely different environment, it’s not just about getting the functionality, it’s about being performant. And then, obviously, how do you monitor and manage it [on] an ongoing basis?

The trend we see is [customers] want to take advantage of the unique capabilities of each cloud, but they need some common framework, some capability that actually spans across these cloud providers, which includes their on-prem deployment.

Where do you draw the line on that commonality between environments?

Compton: In terms of abstraction, there was a time where a popular approach was — I’ll call it the Cloud Foundry or bring-your-own-PaaS [platform as a service] approach — to say, ‘OK, the way I’m going to have portability is I’m not going to write my application to use any of the cloud providers’ APIs. I’m not going to take advantage of anything special from AWS or Azure or anyone.’

That’s less popular because the cloud providers have been fairly successful at launching new features developers want to use. We think of it more like a microservices style or highly modular pattern, where, for my application to run, there’s a whole bunch of things I need: messaging queues, server load, database, networking, security plans. It’s less to abstract Amazon’s networking, and it’s more to provide a common networking capability that will run on Amazon.

You mentioned customers with workloads spanning multiple clouds. How are those being built?

Compton: What I referred to are customers that have an application, maybe with a number of different subsystems. They might have an on-prem database that’s a business-critical system. They might do machine learning in Google Cloud Platform with TensorFlow, and they might look to deliver an experience to their customers through Alexa, which means they need to run some portion of the application in Amazon. They’re not taking their database and sharding it across multiple clouds, but those three subsystems have to work together to deliver that experience that the customer perceives as a single application.

What newer public cloud services do you see getting traction with your customers?

Compton: A few months ago, people were reticent to use [cloud-native] services because portability was the most important thing — but now, ROI and speed matter, so they use those services across the board.


We see an explosion of interest in serverless. It seems to mirror the container phenomenon where everybody agrees containers will become central to cloud computing architectures. We’re reaching the same point on serverless, or function as a service, where people see that as a better way to create code for more efficient [use of] resources.

The other trend we see: a lot of times people use, for example, Salesforce’s PaaS because their data is there, so the consumption of services is driven by practical considerations. Or they’re in a given cloud using services because of how they interface with one of their business partners. So, as much as there are some cool new services, there are some fairly practical points that drive people’s selection, too.

Have you seen companies shift their in-house architectures to accommodate what they’re doing in the public cloud?

Compton: I see companies starting new applications in the cloud and not on prem. And what’s interesting is a lot of our customers continue to see on-prem growth. They have said, ‘We’re going to go cloud-first on our new applications,’ but the applications they already have on prem continue to grow in resource needs.

We also see interest in applying the cloud techniques to the on-prem data center or private cloud. They’re moving away from some of the traditional technologies to make their data center work more like a cloud, partially so it’s easier to work between the two environments, but also because the cloud approach is more efficient and agile than some of the traditional approaches.

And there are companies that want to get out of running data centers. They don’t want to deal with the real estate, the power, the cooling, and they want to move everything they can into Amazon.

What lessons did Cisco learn from the now-shuttered Intercloud?

Compton: The idea was to build a global federated IaaS [infrastructure as a service] that, in theory, would compete with AWS. At that time, most in the industry thought that OpenStack would take over the world. It was viewed as a big threat to AWS.

Today, it’s hard to relate to that point of view — obviously, that didn’t happen. In many ways, cloud is about driving this brutal consistency, and by having global fabrics that are identical and consistent around the world, you can roll out new features and capabilities and scale better than if you have a federated model.

Where we are now in terms of multicloud and strategy going forward — to keep customers and partners and large web scale cloud providers wanting to either buy from us or partner with us — it’s solving some of these complex networking and security problems. Cisco has value in our ability to solve these problems [and] link to the enterprise infrastructures that are in place around the world … that’s the pivot we’ve gone through.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

AWS and Microsoft announce Gluon, making deep learning accessible to all developers

New open source deep learning interface allows developers to more easily and quickly build machine learning models without compromising training performance. Jointly developed reference specification makes it possible for Gluon to work with any deep learning engine; support for Apache MXNet available today and support for Microsoft Cognitive Toolkit coming soon.

SEATTLE and REDMOND, Wash. — Oct. 12, 2017 — On Thursday, Amazon Web Services Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), and Microsoft Corp. (NASDAQ: MSFT) announced a new deep learning library, called Gluon, that allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps. The Gluon interface currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release. With the Gluon interface, developers can build machine learning models using a simple Python API and a range of prebuilt, optimized neural network components. This makes it easier for developers of all skill levels to build neural networks using simple, concise code, without sacrificing performance. AWS and Microsoft published Gluon’s reference specification so other deep learning engines can be integrated with the interface. To get started with the Gluon interface, visit https://github.com/gluon-api/gluon-api/.

Developers build neural networks using three components: training data, a model and an algorithm. The algorithm trains the model to understand patterns in the data. Because the volume of data is large and the models and algorithms are complex, training a model often takes days or even weeks. Deep learning engines like Apache MXNet, Microsoft Cognitive Toolkit and TensorFlow have emerged to help optimize and speed the training process. However, these engines require developers to define the models and algorithms up front using lengthy, complex code that is difficult to change. Other deep learning tools make model-building easier, but this simplicity can come at the cost of slower training performance.

The Gluon interface gives developers the best of both worlds — a concise, easy-to-understand programming interface that enables developers to quickly prototype and experiment with neural network models, and a training method that has minimal impact on the speed of the underlying engine. Developers can use the Gluon interface to create neural networks on the fly, and to change their size and shape dynamically. In addition, because the Gluon interface brings together the training algorithm and the neural network model, developers can perform model training one step at a time. This means it is much easier to debug, update and reuse neural networks.

“The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models require a great deal of heavy lifting and specialized expertise,” said Swami Sivasubramanian, VP of Amazon AI. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” said Eric Boyd, corporate vice president of Microsoft AI and Research. “This is why Microsoft has collaborated with AWS to create the Gluon interface and enable an open AI ecosystem where developers have freedom of choice. Machine learning has the ability to transform the way we work, interact and communicate. To make this happen we need to put the right tools in the right hands, and the Gluon interface is a step in this direction.”

“FINRA is using deep learning tools to process the vast amount of data we collect in our data lake,” said Saman Michael Far, senior vice president and CTO, FINRA. “We are excited about the new Gluon interface, which makes it easier to leverage the capabilities of Apache MXNet, an open source framework that aligns with FINRA’s strategy of embracing open source and cloud for machine learning on big data.”

“I rarely see software engineering abstraction principles and numerical machine learning playing well together — and something that may look good in a tutorial could be hundreds of lines of code,” said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. “I really appreciate how the Gluon interface is able to keep the code complexity at the same level as the concept; it’s a welcome addition to the machine learning community.”

“The Gluon interface solves the age-old problem of having to choose between ease of use and performance, and I know it will resonate with my students,” said Nikolaos Vasiloglou, adjunct professor of Electrical Engineering and Computer Science at Georgia Institute of Technology. “The Gluon interface dramatically accelerates the pace at which students can pick up, apply and innovate on new applications of machine learning. The documentation is great, and I’m looking forward to teaching it as part of my computer science course and in seminars that focus on teaching cutting-edge machine learning concepts across different cities in the U.S.”

“We think the Gluon interface will be an important addition to our machine learning toolkit because it makes it easy to prototype machine learning models,” said Takero Ibuki, senior research engineer at DOCOMO Innovations. “The efficiency and flexibility this interface provides will enable our teams to be more agile and experiment in ways that would have required a prohibitive time investment in the past.”

The Gluon interface is open source and available today in Apache MXNet 0.11, with support for CNTK in an upcoming release. Developers can learn how to get started using Gluon with MXNet by viewing tutorials for both beginners and experts available by visiting https://mxnet.incubator.apache.org/gluon/.

About Amazon Web Services

For 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 90 fully featured services for compute, storage, networking, database, analytics, application services, deployment, management, developer, mobile, Internet of Things (IoT), Artificial Intelligence (AI), security, hybrid and enterprise applications, from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world — including the fastest-growing startups, largest enterprises, and leading government agencies — to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

3 Emerging Technologies that Will Change the Way You Use Hyper-V

Hello once again everyone!

The I.T. landscape changes incredibly quickly (if you know a faster-changing industry, I’d love to know!). I.T. professionals need to know what’s coming around the corner to stay ahead of the game or risk being left behind. Well, we don’t want that to happen to you, so we’ve run down what we feel are the three most important emerging technologies that will drastically change the Hyper-V landscape.

  1. Continued Adoption of Public Cloud Platforms – It’s becoming clear that the public cloud is continuing to gain steam. It’s not just one vendor, but several, and it continues to pull workloads from on-premise to the cloud. Many people were keen to wait out this “cloud thing”, but it has become quite clear that it’s here to stay. Capabilities in online platforms such as Microsoft Azure and Amazon AWS have increasingly made it easier, more cost-effective and more desirable to put workloads in the public cloud. These cloud platforms can often provide services that most customers don’t have available on-premise, and this, paired with several other factors that we’ll talk about in the webinar, is leading to increased adoption of these platforms over on-premise installations.
  2. Azure Stack and the Complete Abstraction of Hyper-V under-the-hood – With some of the latest news and release information out of Microsoft regarding their new Microsoft Azure Stack (MAS), things have taken an interesting turn for Hyper-V. As on-premise administrators have always been used to having direct access to the hypervisor, they may be surprised to learn that Hyper-V is so far under the hood in MAS that you can’t even access it. That’s right. The hypervisor has become so simplified and automated that there is no need to directly access it in MAS, primarily because MAS follows the same usage and management guidelines as Microsoft Azure. This will bother a lot of administrators, but it’s becoming the world we live in. As such, we’ll be talking about this extensively during the webinar.
  3. Containers and Microservices and why they are a game-changer – Containers have become one of the new buzzwords in the industry. If you’re not aware, you can think of a container as similar to a VM but fundamentally different: whereas a VM virtualizes the OS and everything on top of it, a container virtualizes only the application. Much of the underlying support functionality is handled by the container host, as opposed to an OS built into a VM. For a long time it seemed that containers were going to be primarily a developer thing, but as the line between IT Pro and Dev continues to blur, containers can no longer be ignored by IT Pros, and we’ll be talking about that revelation extensively during our panel discussion (see the sketch after this list for a concrete example).
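To make the VM-versus-container distinction concrete, here’s a minimal sketch of running a Windows Server container from a PowerShell prompt, assuming Docker is already installed on the container host (the image name reflects Docker Hub naming at the time of writing):

# Pull a Windows Server Core container image; unlike a VM, no guest OS is booted.
docker pull microsoft/windowsservercore
# Run a command inside a container; it shares the host's kernel, so only the
# application layer is virtualized and startup takes seconds rather than minutes.
docker run --rm microsoft/windowsservercore powershell -Command "Get-Process | Select-Object -First 5"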

As you can see there is much to talk about, and many will be wondering how this affects them. You’re probably asking yourself questions like: “What new skills should IT Pros be learning to stay relevant?”, “Are hypervisors becoming irrelevant?”, “Will containers replace virtual machines?”, “Is the Cloud here to stay?”, “Is there still a place for Windows Server in the world?”, “What can I do now to stay relevant and what skills do I need to learn to future-proof my career?” Yep, these developments certainly raise a lot of issues, which is why we decided to take this topic further.

Curious to know more? Join our Live Webinar!

As you know we love to put on webinars here at Altaro as we find them a critical tool for getting information about new technologies and features to our viewership. We’ve always stuck to the same basic educational format and it’s worked well over the years. However, we’ve always wanted to try something a bit different. There certainly isn’t anything wrong with an educational format, but with some topics, it’s often best to just have a conversation. This idea is at the core of our next webinar along with some critical changes that are occurring within our industry.

For the first time ever, Altaro will be putting on a panel-style webinar with not one or two, but three Microsoft Cloud and Datacenter MVPs. Andy Syrewicze, Didier Van Hoye, and Thomas Maurer will all be hosting this webinar as they talk about some major changes and take your questions and feedback regarding things that are occurring in the industry today. These are things that will affect the way you use and consume Hyper-V.

Webinar Registration