Tag Archives: environments

IFDS goes hands-on with Zerto’s Kubernetes backup

As International Financial Data Services (IFDS) started containerizing more and more environments, it needed better Kubernetes backup.

IFDS first dipped its toes into containers in 2017. The company writes its own software for the financial industry, and the first container deployment was for its application development environment. With the success of the large containerized quality assurance testing environment, the company started using containers in production as well.

Headquartered in Toronto, IFDS provides outsourcing and technology for financial companies, such as investment funds’ record keeping and back-office support. It has around 2,400 employees, a clientele of about 240 financial organizations and $3.6 trillion CAD ($2.65 trillion US) in assets under administration.

Kent Pollard, senior infrastructure architect at IFDS, has been with the company for 25 years of its 33-year history and said containerizing production opened up a need for backup. One of the use cases of containers is to quickly bring up applications or services with little resource overhead and without the need to store anything. However, Pollard said IFDS’ container environment was no longer about simply spinning up and spinning down.

“We’re not a typical container deployment. We have a lot of persistent storage,” Pollard said.

Zerto recently unveiled its Zerto for Kubernetes backup product at ZertoCon 2020, but Pollard has been working with an alpha build of it for the past month. He said it is still in early stages, and he’s been giving feedback to Zerto, but he has a positive impression so far. Pollard said not having to turn to another vendor such as IBM, Asigra or Trilio for Kubernetes backup will be a huge benefit.


Pollard’s current container backup method uses Zerto to restore containers in a roundabout way. His container environment is built in Red Hat OpenShift and running in a virtualized environment. Zerto is built for replicating VMs, so Pollard can use it to restore the entire VM housing OpenShift. The drawback is this reverts the entire VM to an earlier state, when all he wanted was to restore a single container.

Pollard said, at the least, Zerto for Kubernetes instead allows him to restore at the container level. He understood the early nature of what he’s been testing and said he is looking forward to when other Zerto capabilities get added, such as ordered recovery and automated workflows for failover and testing. From his limited experience, Pollard said he believes Zerto for Kubernetes has the potential to fill his container backup needs.

Pollard said Zerto for Kubernetes will give him incentive to containerize more of IFDS’ environment. The number of containers IFDS currently has in production is still relatively small, and part of the reason Pollard won’t put more critical workloads in containers is because he can’t protect them yet.

He said there were many reasons IFDS moved to containers three years ago. With containers, IFDS is able to use its underlying hardware resources more efficiently, enabling faster responses to application load changes. Pollard also said containers improved IFDS’ security and support the company’s future move to the cloud and the buildout of a hybrid infrastructure. Zerto provided Pollard with an AWS environment to test Zerto for Kubernetes, but IFDS currently has no cloud footprint whatsoever.

IFDS first deployed Zerto in late 2014. It started as a small production deployment on a couple of VMs but became the company’s standard tool for disaster recovery. IFDS now uses Zerto to protect 190 VMs and 200 TB of storage. Pollard said he was sold after the first annual DR test, when Zerto completed the test in 30 minutes.

“We never had anything that fast. It was always hours and hours for a DR test,” he said.


Accessibility tools support Hamlin Robinson students learning from home | Microsoft EDU

More than ever, educators are relying on technology to create inclusive learning environments that support all learners. As we recognize Global Accessibility Awareness Day, we’re pleased to mark the occasion with a spotlight on an innovative school that is committed to digital access and success for all.

Seattle-based Hamlin Robinson School, an independent school serving students with dyslexia and other language-based learning differences, didn’t set a specific approach to delivering instruction immediately after transitioning to remote learning. “Our thought was to send home packets of schoolwork and support the students in learning, and we quickly realized that was not going to work,” Stacy Turner, Head of School, explained in a recent discussion with the Microsoft Education Team.

About a week into distance learning, the school moved to more robust online instruction. The school serves grades 1-8, and students in fourth grade and up use Office 365 Education tools, including Microsoft Teams. So, leveraging those same resources for distance learning was natural.

Built-in accessibility features

Stacy said the school was drawn to Microsoft resources for schoolwide use because of built-in accessibility features, such as dictation (speech-to-text), and the Immersive Reader, which relies on evidence-based techniques to help students improve at reading and writing.

“What first drew us to Office 365 and OneNote were some of the assistive technologies in the toolbar,” Stacy said. Learning and accessibility tools are embedded in Office 365 and can support students with visual impairments, hearing loss, cognitive disabilities, and more.

Josh Phillips, Head of Middle School, says for students at Hamlin Robinson, finding the right tools to support their learning is vital. “When we graduate our students, knowing that they have these specific language-processing needs, we want them to have fundamental skills within themselves and strategies that they know how to use. But we also want them to know what tools are available to them that they can bring in,” he said.

For example, for students who have trouble typing, a popular tool is the Dictate, or speech-to-text, function of Office 365. Josh said that a former student took advantage of this function to write a graduation speech at the end of eighth grade. “He dictated it through Teams, and then he was able to use the skills we were practicing in class to edit it,” Josh said. “You just see so many amazing ideas get unlocked and be able to be expressed when the right tools come along.”

Supporting teachers and students

Providing teachers with expertise around tech tools also is a focus at Hamlin Robinson. Charlotte Gjedsted, Technology Director, said the school introduced its teachers to Teams last year after searching for a platform that could serve as a digital hub for teaching and learning. “We started with a couple of teachers being the experts and helping out their teams, and then when we shifted into this remote learning scenario, we expanded that use,” Charlotte said.

“Teams seems to be the easiest platform for our students to use in terms of the way it’s organized and its user interface,” added Josh.

He said it was clear in the first days of distance learning that using Teams would be far better than relying on packets of schoolwork and the use of email or other tools. “The fact that a student could have an assignment issued to them, could use the accessibility tools, complete the assignment, and then return the assignment all within Teams is what made it clear that this was going to be the right app for our students,” he said. 

A student’s view

Will Lavine, a seventh-grade student at the school, says he appreciates the stepped-up emphasis on Teams and tech tools during remote learning and that those are helping meet his learning needs. “I don’t have to write that much on paper. I can use technology, which I’m way faster at,” he said.

“Will has been using the ease of typing to his benefit,” added Will’s tutor, Elisa Huntley. “Normally, when he is faced with a handwritten assignment, he would spend quite a bit of time refining his work using only a pencil and eraser. But when he interfaces with Microsoft Teams, Will doesn’t feel the same pressure to do it right the first time. It’s much easier for him to re-type something. His ideas are flowing in ways that I have never seen before.”

Will added that he misses in-person school, but likes the collaborative nature of Teams, particularly the ability to chat with teachers and friends.

With the technology sorted out, Josh said educators have been very focused on ensuring students are progressing as expected. He says that teachers are closely monitoring whether students are joining online classes, engaging in discussions, accessing and completing assignments, and communicating with their teachers.

Connect, explore our tools

We love hearing from our educator community and students and families. If you’re using accessibility tools to create more inclusive learning environments and help all learners thrive, we want to hear from you! One great way to stay in touch is through Twitter by tagging @MicrosoftEDU.

And if you want to check out some of the resources Hamlin Robinson uses, remember that students and educators at eligible institutions can sign up for Office 365 Education for free, including Word, Excel, PowerPoint, OneNote, and Microsoft Teams.

In honor of Global Accessibility Awareness Day, Microsoft is sharing some exciting updates from across the company. To learn more visit the links below:

Author: Microsoft News Center

The Acid Test for Your Backup Strategy

For the first several years that I supported server environments, I spent most of my time working with backup systems. I noticed that almost everyone did their due diligence in performing backups. Most people took adequate responsibility for verifying that their scheduled backups ran without error. However, almost no one ever checked that they could actually restore from a backup — until disaster struck. I gathered a lot of sorrowful stories during those years. I want to use those experiences to help you avert a similar tragedy.

Successful Backups Do Not Guarantee Successful Restores

Fortunately, a lot of the problems that I dealt with in those days have almost disappeared due to technological advancements. But, that only means that you have better odds of a successful restore, not that you have a zero chance of failure. Restore failures typically mean that something unexpected happened to your backup media. Things that I’ve encountered:

  • Staff inadvertently overwrote a full backup copy with an incremental or differential backup
  • No one retained the necessary decryption information
  • Media was lost or damaged
  • Media degraded to uselessness
  • Staff did not know how to perform a restore — sometimes with disastrous outcomes

I’m sure that some of you have your own horror stories.

These risks apply to all organizations. Sometimes we manage to convince ourselves that we have immunity to some or all of them, but you can’t get there without extra effort. Let’s break down some of these line items.

People Represent the Weakest Link

We would all like to believe that our staff will never make errors and that the people who need to operate the backup system have the ability to do so. However, as part of your disaster recovery planning, you must assume that you cannot predict the state or availability of any individual. If only a few people know how to use your backup application, then those people become part of your risk profile.

You have a few simple ways to address these concerns:

  • Periodically test the restore process
  • Document the restore process and keep the documentation updated
  • Give non-IT personnel knowledge of and practice with backup and restore operations
  • Make sure non-IT personnel know how to get help with the application

It’s reasonable to expect that you would call your backup vendor for help in the event of an emergency that prevented your best people from performing restores. However, in many organizations without a proper disaster recovery plan, no one outside of IT even knows who to call. The knowledge inside any company naturally tends to arrange itself in silos, but you must make sure to spread at least the bare minimum information.

Technology Does Fail

I remember many shock and horror reactions when a company owner learned that we could not read the data from their backup tapes. A few times, these turned into grief and loss counselling sessions as they realized that they were facing a critical — or even complete — data loss situation. Tape has its own particular risk profile, and lots of businesses have stopped using it in favour of on-premises disk-based storage or cloud-based solutions. However, all backup storage technologies present some kind of risk.

In my experience, data degradation occurred most frequently. You might see this called other things, my favourite being “bit rot”. Whatever you call it, it all means the same thing: the data currently on the media is not the same data that you recorded. That can happen just because magnetic storage devices have susceptibilities. That means that no one made any mistakes — the media just didn’t last. For all media types, we can establish an average for failure rates. But, we have absolutely no guarantees on the shelf life for any individual unit. I have seen data pull cleanly off decade-old media; I have seen week-old backups fail miserably.

Unexpectedly, newer technology can make things worse. In our race to cut costs, we frequently employ newer ways to save space and time. In the past, we had only compression and incremental/differential solutions. Now, we have tools that can deduplicate across several backup sets and at multiple levels. We often put a lot of reliance on the single copy of a bit.

How to Test your Backup Strategy

The best way to identify problems is to break-test to find weaknesses. Leveraging test restores will help identify backup reliability and help you solve these problems. Simply put, you cannot know that you have a good backup unless you can perform a good restore. You cannot know that your staff can perform a restore unless they perform a restore. For maximum effect, you need to plan tests to occur on a regular basis.

Some tools, like Altaro VM Backup, have built-in tools to make tests easy. Altaro VM Backup provides a “Test & Verify Backups” wizard to help you perform on-demand tests and a “Schedule Test Drills” feature to help you automate the process.


If your tool does not have such a feature, you can still use it to make certain that your data will be there when you need it. It should have some way to restore a separate or redirected copy. So, instead of overwriting your live data, you can create a duplicate in another place where you can safely examine and verify it.

Test Restore Scenario

In the past, we would often simply restore some data files to a shared location and use a simple comparison tool. Now that we use virtual machines for so much, we can do a great deal more. I’ll show one example of a test that I use. In my system, all of these are Hyper-V VMs. You’ll have to adjust accordingly for other technologies.

Using your tool, restore copies of:

  • A domain controller
  • A SQL server
  • A front-end server dependent on the SQL server

On the host that you restored those VMs to, create a private virtual switch. Connect each virtual machine to it. Spin up the copied domain controller, then the copied SQL server, then the copied front-end. Use the VM connect console to verify that all of them work as expected.
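As a minimal sketch of that isolated test network, assuming Hyper-V, the Hyper-V PowerShell module and hypothetical restored VM names (DC-restored, SQL-restored, App-restored), the steps above might look like this:

Import-Module Hyper-V

# Create an isolated (private) switch so the restored copies can never touch production
New-VMSwitch -Name 'RestoreTest' -SwitchType Private

# Attach each restored VM to the private switch
'DC-restored', 'SQL-restored', 'App-restored' | ForEach-Object {
    Connect-VMNetworkAdapter -VMName $_ -SwitchName 'RestoreTest'
}

# Bring the tiers up in dependency order: domain controller, then SQL, then front end
Start-VM -Name 'DC-restored'
Start-Sleep -Seconds 180   # give the copied domain controller time to come up
Start-VM -Name 'SQL-restored'
Start-Sleep -Seconds 120
Start-VM -Name 'App-restored'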

Create test restore scenarios of your own! Make sure that they match a real-world scenario that your organization would rely on after a disaster.


Author: Eric Siron

Get back on the mend with Active Directory recovery methods

Active Directory is the bedrock of most Windows environments, so it’s best to be prepared if disaster strikes.

AD is an essential component in most organizations. You should monitor and maintain AD, such as clearing out user and computer accounts you no longer need. With routine care, AD will run properly, but unforeseen issues can arise. There are a few common Active Directory recovery procedures you can follow using out-of-the-box technology.

Loss of a domain controller

Many administrators see losing a domain controller as a huge disaster, but the Active Directory recovery effort is relatively simple — unless your AD was not properly designed and configured. You should never rely on a single domain controller in your domain, and large sites should have multiple domain controllers. Correctly configured site links will keep authentication and authorization working even if the site loses its domain controller.

You have two possible approaches to resolve the loss of a domain controller. The first option is to try to recover the domain controller and bring it back into service. The second option is to replace the domain controller. I recommend adopting the second approach, which requires the following actions:

  • Transfer or seize any flexible single master operation (FSMO) roles to an active domain controller (see the PowerShell sketch after this list). If you seize a role, then you must ensure that the old role holder is never brought back into service.
  • Remove the old domain controller’s account from AD. This will also remove any metadata associated with the domain controller.
  • Build a new server, join it to the domain, install Active Directory Domain Services and promote it to a domain controller.
  • Allow replication to repopulate the AD data.
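For the role seizure, a minimal PowerShell sketch, assuming the RSAT ActiveDirectory module and a hypothetical surviving domain controller named DC02, might look like this:

# Move all five FSMO roles to DC02; -Force seizes them if the old holder is unreachable
Move-ADDirectoryServerOperationMasterRole -Identity 'DC02' `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster `
    -Force

# Metadata cleanup: on current Windows Server versions, deleting the failed domain
# controller's computer account in Active Directory Users and Computers (or using
# ntdsutil's metadata cleanup menu) removes the associated metadata.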

How to protect AD data

Protecting data can go a long way to make an Active Directory recovery less of a problem. There are a number of ways to protect AD data. These techniques, by themselves, might not be sufficient. But, when you combine them, they provide a defense in depth that should enable you to overcome most, if not all, disasters.

First, enable accidental deletion protection on all of your organizational units (OUs), as well as user and computer accounts. This won’t stop administrators from removing an account, but they will get a warning, which might prevent an accident.

Select the option to protect from accidental deletion when creating an organizational unit in AD Administrative Center.
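A minimal sketch of enabling that protection in bulk, assuming the ActiveDirectory module and the article’s example sphinx.org domain, could look like this:

# Flag every OU in the domain as protected from accidental deletion
Get-ADOrganizationalUnit -Filter * -SearchBase 'DC=sphinx,DC=org' |
    Set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true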

Recover accounts from the AD recycle bin

Another way to avoid trouble is to enable the AD recycle bin. This is an optional feature used to restore a deleted object.

Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target sphinx.org -Confirm:$false

After enabling the feature, you may need to refresh AD Administrative Center to see it. Once enabled, the recycle bin can’t be disabled.

Let’s run through a scenario where a user, whose properties are shown in the screenshot below, has been deleted.

An example of a typical user account in AD, including group membership

To check for deleted user accounts, run a search in the recycle bin:

Get-ADObject -Filter {objectclass -eq 'user' -and Deleted -eq $true} -IncludeDeletedObjects

The output for this command returns a deleted object, the user with the name Emily Brunel.

An AD object found in the recycle bin

For a particularly volatile AD, you may need to apply further filters to identify the account you wish to restore.

If you have a significant number of objects in the recycle bin, use the object globally unique identifier (GUID) to identify the object to restore.

Get-ADObject -Filter {ObjectGUID -eq '73969b9d-05fa-4b45-a667-79baba1ac9a3'} `
-IncludeDeletedObjects -Properties * | Restore-ADObject

The screenshot shows the restored object and its properties, including the group membership.

Restoring an AD user account from recycle bin

Generate AD snapshots

The AD recycle bin helps restore an object, but what do you do when you restore an account with incorrect settings?

To fix a user account in that situation, it helps to create AD snapshots to view previous settings and restore attributes. Use the following command from an elevated prompt:

ntdsutil snapshot 'Activate Instance NTDS' Create quit quit

The Ntdsutil command-line tool installs with AD and generates the output in this screenshot when creating the snapshot.

The command-line output when creating an AD snapshot

You don’t need to take snapshots on every domain controller. The number of snapshots will depend on the geographic spread of your organization and the arrangement of the administration team.

The initial snapshot captures the entire AD. Subsequent snapshots capture incremental changes. How often you take snapshots should reflect how frequently the data in your AD changes.
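As an illustrative sketch that is not from the article, assuming the ScheduledTasks module on a current Windows Server release, you could schedule a nightly snapshot on a chosen domain controller like this:

# Run ntdsutil every night at 02:00 to create an AD snapshot
$action  = New-ScheduledTaskAction -Execute 'ntdsutil.exe' `
    -Argument 'snapshot "Activate Instance NTDS" Create quit quit'
$trigger = New-ScheduledTaskTrigger -Daily -At '02:00'
Register-ScheduledTask -TaskName 'AD-Snapshot' -Action $action -Trigger $trigger `
    -User 'SYSTEM' -RunLevel Highest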

Restore data from a snapshot

In this test scenario, let’s assume that the group memberships of a user account have been incorrectly changed. Run the following PowerShell commands to remove the user’s group memberships:

Remove-ADGroupMember -Identity finance -Members (Get-ADUser -Identity EmilyBrunel) -Confirm:$false
Remove-ADGroupMember -Identity department1 -Members (Get-ADUser -Identity EmilyBrunel) -Confirm:$false
Remove-ADGroupMember -Identity project1 -Members (Get-ADUser -Identity EmilyBrunel) -Confirm:$false

You need to identify the snapshot from which you will restore the data. The following command lists the snapshots:

ntdsutil snapshot 'List All' quit quit
The Ntdsutil utility produces a list of the available AD snapshots.

To mount the snapshot, run the following command:

ntdsutil snapshot "mount f828eb4e-3a06-4bcb-8db6-2b07b54f9d5f" quit quit

Run the following command to open the snapshot:

dsamain -dbpath 'C:\$SNAP_201909161530_VOLUMEC$\Windows\NTDS\ntds.dit' -ldapport 51389

The Dsamain utility gets added to the system when you install AD Domain Services. Note that the console you use to mount and open the snapshot is locked.

Mount and open the AD snapshot.

When you view the group membership of the user account in your AD, it will be empty. The following command will not return any output:

Get-ADUser -Identity EmilyBrunel -Properties memberof | select -ExpandProperty memberof

When you view the same account from your snapshot, you can see the group memberships:

Get-ADUser -Identity EmilyBrunel -Properties memberof -Server TTSDC01.sphinx.org:51389  | select -ExpandProperty memberof
CN=Project1,OU=Groups,DC=Sphinx,DC=org
CN=Department1,OU=Groups,DC=Sphinx,DC=org
CN=Finance,OU=Groups,DC=Sphinx,DC=org

To restore the group memberships, run the following:

Get-ADUser -Identity EmilyBrunel -Properties memberof -Server TTSDC01.sphinx.org:51389  | select -ExpandProperty memberof | 
ForEach-Object {Add-ADGroupMember -Identity $_ -Members (Get-ADUser -Identity EmilyBrunel)}

This reads the group memberships from the snapshot copy of the account and adds the user back into those groups in your production AD.

Your user account now has the correct group memberships:

Get-ADUser -Identity EmilyBrunel -Properties memberof | select -ExpandProperty memberof
CN=Project1,OU=Groups,DC=Sphinx,DC=org
CN=Department1,OU=Groups,DC=Sphinx,DC=org
CN=Finance,OU=Groups,DC=Sphinx,DC=org

Press Ctrl-C in the console in which you ran Dsamain, and then unmount the snapshot:

ntdsutil snapshot "unmount *" quit quit

Run an authoritative restore from a backup

In the last scenario, imagine you lost a whole OU’s worth of data, including the OU. You could do an Active Directory recovery using data from the recycle bin, but that would mean restoring the OU and any OUs it contained. You would then have to restore each individual user account. This could be a tedious and error-prone process if the data in the user accounts in the OU changes frequently. The solution is to perform an authoritative restore.

Before you can perform a restore, you need a backup. We’ll use Windows Server Backup because it is readily available. Run the following PowerShell command to install:

Install-WindowsFeature -Name Windows-Server-Backup

The following code will create a backup policy and run a system state backup:

Import-Module WindowsServerBackup
$wbp = New-WBPolicy

$volume = Get-WBVolume -VolumePath C:
Add-WBVolume -Policy $wbp -Volume $volume

Add-WBSystemState $wbp

$backupLocation = New-WBBackupTarget -VolumePath R:
Add-WBBackupTarget -Policy $wbp -Target $backupLocation

Set-WBVssBackupOptions -Policy $wbp -VssCopyBackup

Start-WBBackup -Policy $wbp

Within that policy, the following command adds the system state, including the AD database, to the backup:

Add-WBSystemState $wbp

The following code creates a scheduled backup of the system state at 8 a.m., noon, 4 p.m. and 8 p.m.

Set-WBSchedule -Policy $wbp -Schedule 08:00, 12:00, 16:00, 20:00
Set-WBPolicy -Policy $wbp

In this example, let’s say an OU called Test with some critical user accounts got deleted.

Reboot the domain controller on which you performed the backup, and go into Directory Services Restore Mode. If your domain controller is a VM, you may need to use Msconfig to set the boot option rather than using the F8 key to get to the boot options menu. Then select the most recent backup and start an authoritative system state recovery:

$bkup = Get-WBBackupSet | select -Last 1
Start-WBSystemStateRecovery -BackupSet $bkup -AuthoritativeSysvolRecovery

Type Y, and press Enter to restore to the original location.

At the prompt, restart the domain controller to boot back into recovery mode.

You need to mark the restored OU as authoritative by using Ntdsutil:

ntdsutil
C:\Windows\system32\ntdsutil.exe: activate instance NTDS
Active instance set to "NTDS".
C:\Windows\system32\ntdsutil.exe: authoritative restore
authoritative restore: restore subtree "ou=test,dc=sphinx,dc=org"

A series of messages will indicate the progress of the restoration, including the number of objects restored.

Exit Ntdsutil:
authoritative restore: quit
C:\Windows\system32\ntdsutil.exe: quit

Restart the domain controller. Use Msconfig before the reboot to reset to a normal start.

The OU will be restored on your domain controller and will replicate to the other domain controllers in AD.

A complete loss of AD requires intervention

In the unlikely event of losing your entire AD forest, you’ll need to work through Microsoft’s AD forest recovery guide. If you have a support agreement with Microsoft, then this would be the ideal time to use it.


How to keep VM sprawl in check

During the deployment of virtual environments, the focus is on the design and setup. Rarely are the environments revisited to check if improvements are possible.

Virtualization brought many benefits to data center operations, such as reliability and flexibility. One drawback is that it can lead to VM sprawl: the generation of more and more VMs that contend for a finite amount of resources. VMs are not free; storage and compute have a real capital cost. This cost gets amplified if you look to move these resources into the cloud. It’s up to the administrator to examine the infrastructure resources and make sure these VMs have just what they need, because the costs never go away and typically never go down.

Use Excel to dig into resource usage

One of the fundamental tools you need for this isn’t Hyper-V or some virtualization product — it’s Excel. Dashboards are nice, but there are times you need the raw data for more in-depth analysis. Nothing can provide that like Excel.

Most monitoring tools export data to CSV format. You can import this file into Excel for analysis. Shared storage is expensive, so I always like to see a report on drive space. It’s interesting to see what servers consume the most drive space, and where. If you split your servers into a C: for the OS and D: for the data, shouldn’t most of the C: drives use the same amount of space? Outside of your application install, why should the C: drives vary in space? Are admins leaving giant ISOs in the download folder or recycle bin? Or are multiple admins logging on with roaming profiles?
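As a minimal sketch of producing that kind of raw data yourself, assuming Hyper-V hosts and the Hyper-V PowerShell module, you could export per-VM virtual disk usage to a CSV for Excel:

Import-Module Hyper-V

# Collect the size and actual on-disk usage of every virtual hard disk, per VM
Get-VM | Get-VMHardDiskDrive | ForEach-Object {
    $vhd = Get-VHD -Path $_.Path
    [pscustomobject]@{
        VM        = $_.VMName
        Disk      = $_.Path
        MaxSizeGB = [math]::Round($vhd.Size / 1GB, 1)
        UsedGB    = [math]::Round($vhd.FileSize / 1GB, 1)
    }
} | Export-Csv -Path .\vm-disk-usage.csv -NoTypeInformation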

Whatever the reason, runaway C: drives can chew up your primary storage quickly. If it is something simple, such as ISO files that should have been removed, keep in mind that this affects your backups as well. You can just buy additional storage in a pinch, and because many of us in IT are on autopilot, it’s easy to not give drive space issues a second thought.

Overallocation is not as easy to correct

VM sprawl is one thing, but when was the last time you looked at what resources you allocated to those VMs to see what they are actually using? The allocation process is still a bit of a guess until things get up and running fully. Underallocation is often noticed promptly and corrected quickly, and everything moves forward.


Do you ever check for overallocation? Do you ever go back and remove extra CPU cores or RAM? In my experience, no one ever does. If everything runs well, there’s little incentive to make changes.
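A minimal sketch of such a check, assuming Hyper-V with dynamic memory (MemoryDemand is only populated for running VMs), compares what each VM was given with what it actually asks for:

# List assigned vs. demanded memory and vCPU count to spot overallocation candidates
Get-VM | Select-Object Name, ProcessorCount,
    @{n='AssignedGB'; e={[math]::Round($_.MemoryAssigned / 1GB, 1)}},
    @{n='DemandGB';   e={[math]::Round($_.MemoryDemand / 1GB, 1)}} |
    Sort-Object DemandGB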

Some in IT like to gamble and assume everything will run properly most of the time, but it’s less stressful to prepare for some of these unlikely events. Is it possible that a host or two will fail, or that a network issue strikes your data center? You have to be prepared for failure and at a scale that is more than what you might think. We all know things will rarely fail in a way that is favorable to you. A review process could reveal places that could use an adjustment to drain resources from overallocated VMs to avoid trouble in the future.

Look closer at all aspects of VM sprawl to trim costs

Besides the resource aspect, what about the licensing cost? With more and more products now licensed by core, overallocation of resources has an instant impact on the initial application cost, and it gets worse: the annual maintenance costs pick at your budget and drain your resources for no gain if you cannot tighten your resource allocation.

One other maintenance item that gets overlooked is reboots. When a majority of Windows Server deployments moved from hardware to virtualization, the runtime typically increased. This increase in stability brought with it an inadvertent problem. Too often, busy IT shops without structured patching and reboot cycles only performed these tasks when a server went offline, which — for better or worse — created a maintenance window.

With virtualization, the servers tend to run for longer stretches and show more unique issues. Memory leaks that might have gone unnoticed before — because they were reset during a reboot — can affect servers in unpredictable ways. Virtualization admins need to be on alert to recognize behaviors that might be out of the norm. If you right-size your VMs, you should have enough resources for them to run normally and still handle the occasional spikes in demand. If you see your VMs requiring more resources than normal, this could point to resource leaks that need to be reset.

Often, the process to get systems online is rushed, which leads to VM sprawl and overlooks any attempt at optimization. This can be anything from fixing overallocations to simple cleanup. If this isn’t done, you lose out on ways to make the environment more efficient, losing both performance and capacity. While this all makes sense, it’s important to follow through and actually do it.


Satellite connectivity expands reach of Azure ExpressRoute across the globe

Staying connected to access and ingest data in today’s highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often need to operate in remote, rural, or other isolated locations with poor network connectivity.

With the cloud now the de facto and primary target for the bulk of application and infrastructure migrations, access from remote and rural locations becomes even more important. The path to realizing the value of the cloud starts with a hybrid environment that can access resources over dedicated and private connectivity.

Network performance for these hybrid scenarios from rural and remote sites becomes increasingly critical. Globally connected organizations, the explosive number of connected devices and the data they put in the cloud, emerging areas such as autonomous driving, and traditional remote locations such as cruise ships are all directly affected by connectivity performance. Other examples requiring highly available, fast and predictable network service include managing supply chain systems from remote farms or transferring data to optimize equipment maintenance in aerospace.

Today, I want to share the progress we have made to help customers address and solve these issues. Satellite connectivity addresses challenges of operating in remote locations.

Microsoft cloud services can be accessed with Azure ExpressRoute using satellite connectivity. With commercial satellite constellations becoming widely available, new solutions architectures offer improved and affordable performance to access Microsoft.

Infographic: high-level architecture of ExpressRoute and satellite integration

Microsoft Azure ExpressRoute, with one of the largest networking ecosystems in the public cloud, now includes satellite connectivity partners, bringing new options and coverage.

SES will provide dedicated, private network connectivity from any vessel, airplane, enterprise, energy or government site in the world to the Microsoft Azure cloud platform via its unique multi-orbit satellite systems. As an ExpressRoute partner, SES will provide global reach and fibre-like high-performance to Azure customers via its complete portfolio of Geostationary Earth Orbit (GEO) satellites, Medium Earth Orbit (MEO) O3b constellation, global gateway network, and core terrestrial network infrastructure around the world.

Intelsat’s customers are the global telecommunications service providers and multinational enterprises that rely on our services to power businesses and communities wherever their needs take them. Now they have a powerful new tool in their solutions toolkit. With the ability to rapidly expand the reach of cloud-based enterprises, accelerate customer adoption of cloud services, and deliver additional resiliency to existing cloud-connected networks, the benefits of cloud services are no longer limited to only a subset of users and geographies. Intelsat is excited to bring our global reach and reliability to this partnership with Microsoft, providing the connectivity that is essential to delivering on the expectations and promises of the cloud.

Viasat, a provider of high-speed, high-quality satellite broadband solutions to businesses and commercial entities around the world, is introducing Direct Cloud Connect service to give customers expanded options for accessing enterprise-grade cloud services. Azure ExpressRoute will be the first cloud service offered to enable customers to optimize their network infrastructure and cloud investments through a secure, dedicated network connection to Azure’s intelligent cloud services.

Microsoft wants to help accelerate scenarios by optimizing the connectivity through Microsoft’s global network, one of the largest and most innovative in the world.

ExpressRoute for satellites directly connects our partners’ ground stations to our global network using a dedicated private link. But what, more specifically, does it mean for our customers?

  • Using satellite connectivity with ExpressRoute provides dedicated and highly available, private access directly to Azure and Azure Government clouds.
  • ExpressRoute provides predictable latency through well-connected ground stations, and, as always, maintains all traffic privately on our network – no traversing of the Internet.
  • Customers and partners can harness Microsoft’s global network to rapidly deliver data to where it’s needed or augment routing to best optimize for their specific need.
  • Satellite and a wide selection of service providers will enable rich solution portfolios for cloud and hybrid networking solutions centered around Azure networking services.
  • With some of the world’s leading broadband satellite providers as partners, customers can select the best solution based on their needs. Each of the partners brings different strengths, for example, choices between Geostationary (GEO), Medium Earth Orbit (MEO) and, in the future, Low Earth Orbit (LEO) satellites, geographical presence, pricing, technology differentiation, bandwidth, and others.
  • ExpressRoute over satellite creates new channels and reach for satellite broadband providers, through a growing base of enterprises, organizations and public sector customers.

With this addition to the ExpressRoute partner ecosystem, Azure customers in industries like aviation, oil and gas, government, peacekeeping, and remote manufacturing can deploy new use cases and projects that increase the value of their cloud investments and strategy.

As always, we are very interested in your feedback and suggestions as we continue to enhance our networking services, so I encourage you to share your experiences and suggestions with us.

You can follow these links to learn more about our partners Intelsat, SES, and Viasat, and learn more about Azure ExpressRoute from our website and our detailed documentation.

Author: Microsoft News Center

Microsoft Azure Dev Spaces, Google Jib target Kubernetes woes

To entice developers to create more apps on their environments, major cloud platform companies will meet them where they live.

Microsoft and Google both released tools to help ease app development on their respective platforms, Microsoft Azure and the Google Cloud Platform. Microsoft’s Azure Dev Spaces and Google Jib help developers build applications for the Kubernetes container orchestrator and Java environments and represent a means to deliver simpler, developer-friendly technology.

Microsoft’s Azure Dev Spaces, now in public preview, is a cloud-native development environment for the company’s Azure Kubernetes Service (AKS), where developers can work on applications while connected with the cloud and their team. These users can build cloud applications with containers and microservices on AKS and do not deal with any infrastructure management or orchestration, according to Microsoft.

As Kubernetes further commoditizes deployment and orchestration, cloud platform vendors and public cloud providers must focus on how to simplify customers’ implementation of cloud-native development methods — namely DevOps, CI/CD and microservices, said Rhett Dillingham, an analyst at Moor Insights & Strategy in Austin, Texas.

“Azure Dev Spaces has the potential to be one of Microsoft’s most valuable recent developer tooling innovations, because it addresses the complexity of integration testing and debugging in microservices environments,” he said.


With the correct supporting services, developers can fully test and deploy in Microsoft Azure, added Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.

“This would benefit the developer, as it eases the process of container development by allowing them to see the results of their app without having to set up a Docker or Kubernetes environment,” he said.

Meanwhile, Google’s Jib containerizer tool enables developers to package a Java application into a container image with the Java tools they already know to create container-based advanced applications. And like Azure Dev Spaces, it handles a lot of the underlying infrastructure and orchestration tasks.

It’s about simplifying the experience … the developer is eased into the process by using existing tools and eliminating the need to set up Docker or Kubernetes.
Edwin Yuen, analyst, Enterprise Strategy Group

Integration with Java development tools Maven and Gradle means Java developers can skip the step to create JAR, or Java ARchive, files and then containerize them, Yuen said.

“Like Azure Dev Spaces, it’s about simplifying the experience — this time, not the laptop jump, but the jump from JAR to container,” he said. “But, again, the developer is eased into the process by using existing tools and eliminating the need to set up Docker or Kubernetes.”

Jib also extends Google’s association with the open source community to provide Java developers an easy path to containerize their apps while using the Google Cloud Platform, Yuen added.

Jitterbit Harmony update brings API management to iPaaS

Multi-SaaS environments are common in enterprises today, and so are connection challenges between those environments. Jitterbit promises to simplify SaaS integrations with the latest version of its enterprise iPaaS platform.

The Harmony Summer ’18 release adds API lifecycle management and hundreds of self-service integration templates. It also features point-and-click integration and API management capabilities to accommodate both non-IT knowledge workers and experienced integrators and API developers.

In this era of BizDevOps, individual business departments frequently help implement integration between cloud applications, said Neil Ward-Dutton, research director for U.K.-based MWD Advisors. Jitterbit has worked to provide an integration platform as a service (iPaaS) that caters not only to IT specialists, but also to less technical staff, he said.

Jitterbit’s expanded recipe book

With over 500 new, prebuilt and certified recipes, the Harmony Summer ’18 release aims to help less technical users quickly build integrations for common combinations of applications, Ward-Dutton said. Jitterbit recipes enable endpoint connections between enterprise SaaS apps, such as Amazon Simple Storage Service, Box, NetSuite, Salesforce and others.

In the past, Jitterbit Harmony enabled IT specialists to build integration recipes for business teams, Ward-Dutton said. The new set of development templates goes a step further to provide a library of easy-to-use, certified content, with a guarantee of certified General Data Protection Regulation compliance.

Jitterbit, overall, does a good job of spanning core IT and citizen integrator audiences.
Neil Ward-Dutton, research director, MWD Advisors

“Jitterbit, overall, does a good job of spanning core IT and citizen integrator audiences, and its [recipes are] more consumable than more hardcore tech platforms, like those from MuleSoft and TIBCO,” Ward-Dutton said. “However, others like Boomi, Scribe Online and SnapLogic are pretty comparable.”

For Skullcandy Inc., Jitterbit’s prebuilt integration templates help it accelerate deployment and enable live integrations in weeks instead of months. “We were able to connect and automate our business processes with SAP [Business] ByDesign, EDI [electronic data interchange], FTP, email, databases — you name it,” said Yohan Beghein, IT director for the device vendor, based in Park City, Utah. With the updated Jitterbit Harmony iPaaS platform, Skullcandy processes millions of transactions, transforms high volumes of information with logic and synchronizes data across all systems, he said.

API management brings integration control

API integration is one of Jitterbit Harmony’s strong points, but its API management features lagged behind the aforementioned competitors; this latest release brings Harmony in line with many other players in this space, Ward-Dutton said.


HotSchedules, a restaurant software vendor based in Austin, Texas, uses Harmony’s improved API integration and management features to quickly and accurately aggregate and manage data from many different sources. Without these capabilities, the HotSchedules operations team would have to use several different systems to understand the health of customers’ APIs and integrations, said Laura McDonough, vice president of operations at HotSchedules. “If the data isn’t accurate, our customer success team would be making decisions based on incorrect data,” she said.

With API development, integration and management capabilities on a single platform, it’s easier to expose data from existing apps and drive real-time integration, said Simon Peel, Jitterbit’s chief strategy officer. The new Jitterbit Harmony release enables full API lifecycle management from any device, including security control management, user authentication and API performance monitoring, and provides alerts about API processes.

The Summer ’18 release is available to new users on a 30-day trial basis.

Juniper Contrail battles Cisco ACI, VMware NSX in the cloud

SAN FRANCISCO — Juniper Networks has extended its Contrail network virtualization platform to multicloud environments, competing with Cisco and VMware for the growing number of enterprises running applications across public and private clouds.

The Juniper Contrail Enterprise Multicloud, introduced this week at the company’s NXTWORK conference, is a single software console for orchestrating, managing and monitoring network services across applications running on cloud-computing environments. The new product, which won’t be available until early next year, would compete with the cloud versions of Cisco’s ACI and VMware’s NSX.

Also at the show, Juniper announced that it would contribute the codebase for OpenContrail, the open source version of the software-defined networking (SDN) overlay, to The Linux Foundation. The company said the foundation’s networking projects would help drive OpenContrail deeper into cloud ecosystems.

Contrail Enterprise Multicloud stems, in part, from the work Juniper has done over several years with telcos building private clouds, Juniper CEO Rami Rahim told analysts and reporters at the conference.

“It’s almost like a bad secret — how embedded we have been now with practically all — many — telcos around the world in helping them develop the telco cloud,” Rahim said. “We’ve learnt the hard way in some cases how this [cloud networking] needs to be done.”

Is Juniper’s technology enough to win?

Technologically, Juniper Contrail can compete with ACI and NSX, IDC analyst Brad Casemore said. “Juniper clearly has put considerable thought into the multicloud capabilities that Contrail needs to support, and, as you’d expect from Juniper, the features and functionality are strong.”

Cisco and VMware have marketed their multicloud offerings aggressively. As such, Juniper will have to raise and sustain the marketing profile of Contrail Enterprise Multicloud.
Brad Casemore, analyst, IDC

However, Juniper will need more than good technology when competing for customers. A lot more enterprises use Cisco and VMware products in data centers than Juniper gear. Also, Cisco has partnered with Google to build strong technological ties with the Google Cloud Platform, and VMware has a similar deal with Amazon.

“Cisco and VMware have marketed their multicloud offerings aggressively,” Casemore said. “As such, Juniper will have to raise and sustain the marketing profile of Contrail Enterprise Multicloud.”

Networking with Juniper Contrail Enterprise Multicloud

Contrail Enterprise Multicloud comprises networking, security and network management. Companies can buy the three pieces separately, but the new product lets engineers manage the trio through the software console that sits on top of the centralized Contrail controller.

For networking in a private cloud, the console relies on a virtual network overlay built on top of abstracted hardware switches, which can be from Juniper or a third party. The system also includes a virtual router that provides links to the physical underlay and Layer 4-7 network services, such as load balancers and firewalls. Through the console, engineers can create and distribute policies that tailor the network services and underlying switches to the needs of applications.

Contrail Enterprise Multicloud capabilities within public clouds, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, are different because the provider controls the infrastructure. Network operators use the console to program and control overlay services for workloads through the APIs made available by cloud providers. The Juniper software also uses native cloud APIs to collect analytics information. 

Other Juniper Contrail Enterprise Multicloud capabilities

Network managers can use the console to configure and control the gateway leading to the public cloud and to define and distribute policies for cloud-based virtual firewalls.

Also accessible through the console is Juniper’s AppFormix management software for cloud environments. AppFormix provides policy monitoring and application and software-based infrastructure analytics. Engineers can configure the product to handle routine networking tasks.

The cloud-related work of Juniper, Cisco and VMware is a recognition that the boundaries of the enterprise data center are being redrawn. “Data center networking vendors are having to redefine their value propositions in a multicloud world,” Casemore said.

Indeed, an increasing number of companies are reducing the amount of hardware and software running in private data centers by moving workloads to public clouds. Revenue from cloud services rose almost 29% year over year in the first half of 2017 to more than $63 billion, according to IDC.

VMware NSX-T gets support for Pivotal Container Service

VMware has updated its version of NSX for non-vSphere environments, adding integration with the Pivotal Container Service and the latest iteration of Pivotal Cloud Foundry to the network virtualization software.

VMware introduced NSX-T 2.1 on Tuesday. Through NSX-T, Pivotal Container Service, or PKS, brings support for Kubernetes container clusters to vSphere, VMware’s virtualization platform for the data center. PCF is an open source cloud platform as a service (PaaS) that developers use to build, deploy, run and scale applications.

VMware developed the Cloud Foundry service that is the basis for PCF. Pivotal Software, whose parent company is Dell Technologies, now owns the PaaS, which Pivotal licenses under Apache 2.0.

VMware NSX-T was introduced early this year to provide networking and security management for non-vSphere application frameworks, OpenStack environments, and multiple KVM distributions.

Support for KVM underscores VMware’s recognition that the virtualization layer in Linux is a force in cloud environments. As a result, the vendor has to provide integration with vSphere for VMware to extend its technology beyond the data center.

Kubernetes cluster support in VMware NSX-T

VMware NSX-T integration with PKS is significant because of the extensive use of Kubernetes in public, private and hybrid cloud environments. Kubernetes, which Google developed, is used to automate the deployment, scaling, maintenance, and operation of multiple Linux-based containers across clusters of nodes. Google, VMware and Pivotal developed PKS.

VMware has said it plans to add Docker support in NSX-T. Docker is another popular open source software platform for application containers.

VMware NSX-T is a piece of the vendor’s strategy for spreading its technology across the branch, WAN, cloud computing environments, and security and networking in the data center. Essential to its networking plans is the acquisition of SD-WAN vendor VeloCloud, which VMware plans to complete by early next year.

VMware expects to use VeloCloud to take NSX into the branch and the WAN. “What VeloCloud offers is really NSX everywhere,” VMware CEO Pat Gelsinger told analysts last week, according to a transcript published by the financial site Seeking Alpha.

Gelsinger held the conference call after the company released earnings for the fiscal third quarter ended Nov. 3. VMware reported revenue of $1.98 billion, an increase of 11% over the same period last year. Net income grew to $443 million from $319 million a year ago.