
Cloud adoption a catalyst for IT modernization in many orgs

One of the biggest changes for administrators in recent years is the cloud. Its presence requires administrators to migrate from their on-premises way of thinking.

The problem isn’t the cloud. After all, there should be less work if someone else looks after the server for you. The arrival of the cloud has brought to light some of the industry’s outdated methodologies, which is prompting this IT modernization movement. Practices in many IT shops were not as rigid or regimented before the cloud came along because external access was limited.

Changing times and new technologies spur IT modernization efforts

When organizations were exclusively on premises, it was easy enough to add finely controlled firewall rules that allowed only certain connections in and out. Internal web-based applications did not need HTTPS; plain HTTP worked fine. You did not have to muck around with certificates, which always seem difficult to comprehend. Anyone on your network was authorized to be there, so it didn’t matter if data was unencrypted. The risk wasn’t worth the effort, a lot of us told ourselves, and the users would have no idea anyway.

You would find different ways to limit the threats to the organization. You could implement 802.1X, which only allowed authorized devices on the network. This reduced the chances of a breach because the attacker would need both physical access to the network and an approved device. Active Directory could be messy; IT had a relaxed attitude about account management and cleanup, which was fine as long as everyone could do their job.


The pre-cloud era allowed for a lot of untidiness and shortcuts, because the risk of these things affecting the business in a drastic way was smaller. Administrators who stepped into a new job would routinely inherit a mess from the last IT team. There was little incentive to clean things up; just keep those existing workloads running. Now that there is increased risk with exposing the company’s systems to the world via cloud, it’s no longer an option to keep doing things the same way just to get by.

One example of how the cloud forces IT practices to change is the default configuration when you connect on-premises Active Directory to Microsoft’s Azure Active Directory. The synchronization tool syncs every Active Directory object to the cloud unless you apply filtering, and the official documentation states that this is the recommended configuration. Think about that: every overlooked account with a basic password that leaked years ago, such as in the LinkedIn breach, is now in the cloud and exposed to login attempts from anyone in the world. Those accounts went from a forgotten mess swept under the rug to a ticking time bomb, waiting for attackers to hit a successful login as they spin through their lists of millions of username and password combos.
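
Before enabling that synchronization, it helps to know exactly which neglected accounts would be pushed to the cloud. The following is a minimal PowerShell sketch, not an official procedure; it assumes the ActiveDirectory RSAT module, and the cutoff date is an arbitrary example:

  # List enabled accounts whose passwords have not changed since an example cutoff date.
  # Assumes the ActiveDirectory module; adjust the date to suit your own audit.
  Import-Module ActiveDirectory

  $cutoff = Get-Date '2016-01-01'

  Get-ADUser -Filter 'Enabled -eq $true' -Properties PasswordLastSet |
      Where-Object { $_.PasswordLastSet -and $_.PasswordLastSet -lt $cutoff } |
      Select-Object SamAccountName, PasswordLastSet |
      Sort-Object PasswordLastSet

Any account on that list is a candidate for a forced password change, or for disabling, before it is synced anywhere.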

Back on the HTTP/HTTPS side, users now want to work from home or anywhere they have an internet connection. They also want to do it from any device, such as a personal laptop, mobile phone or tablet. Exposing internal websites was once, and still is in many scenarios, a case of poking a hole in the firewall and hoping for the best. With an unencrypted HTTP site, all data pushed to and from that endpoint, from anything the user sees to anything they enter, such as a username and password, is at risk. Your users could be working from a free McDonald’s Wi-Fi connection or at any airport in the world, and it’s not hard for attackers to set up fake relay access points and read any traffic that is not encrypted. Look up WiFi Pineapple for more information about the potential risks.

How to accommodate your users and tighten security

As you can see, it’s easy to end up in a high-risk situation if IT focuses on making users happy instead of securing the company. So how do you make the transition to a safer environment? At a high level, there are several immediate actions to take:

  • Clean up Active Directory. Audit accounts, disable ones not in use and organize your organizational units so they are clear and logical. Implement an account management process from beginning to end. (A PowerShell sketch for finding stale accounts follows this list.)
  • Review your password policy. If you have no other protection, cycle passwords regularly and enforce some level of complexity. Then look at methods of added protection, such as the multifactor authentication (MFA) that Azure Active Directory provides, which can do away with password cycling. You can also combine MFA with conditional access so that a user on your trusted network or a trusted device doesn’t get prompted for MFA at all. The choice is yours.
  • Review and report on account usage. When something is amiss with account usage, you should know as soon as possible so you can take corrective action. Technologies such as the identity protection feature in Azure Active Directory issue alerts and remediate suspicious activity, such as a login from a location that is not typical for that account.
  • Implement HTTPS on all sites. You don’t have to buy a certificate for each individual site to enable HTTPS. Save money and generate them yourself if the site is only for trusted computers on which you can deploy the certificate chain. Another option is to buy a wildcard certificate to use everywhere. Once the certificate is deployed, you can expose the sites you want with Azure Active Directory Application Proxy rather than open ports in your firewall. This gives the added benefit of forcing an Azure Active Directory login to apply MFA and identity protection before the user gets to the internal site, regardless of the device and where they are physically located.
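
As a starting point for the Active Directory cleanup and usage review, a short script can surface accounts that have not logged on recently. This is an illustrative sketch only; it assumes the ActiveDirectory RSAT module, and the 90-day window and CSV path are example values:

  # Find enabled user accounts that have not logged on in the last 90 days.
  # Review the exported list before disabling anything.
  Import-Module ActiveDirectory

  Search-ADAccount -AccountInactive -UsersOnly -TimeSpan 90.00:00:00 |
      Where-Object { $_.Enabled } |
      Select-Object Name, SamAccountName, LastLogonDate |
      Export-Csv -Path .\stale-accounts.csv -NoTypeInformation

Disabling flagged accounts, rather than deleting them outright, keeps the cleanup reversible while the new account management process beds in.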

These are a few of the critical areas to think about when shifting your mindset from on premises to the cloud. It’s only a basic overview of where to look more closely; there’s a lot more to consider, depending on the cloud services you plan to use.


Know your Office 365 backup options — just in case

Exchange administrators who migrate their email to Office 365 reduce their infrastructure responsibilities, but they must not ignore areas related to disaster recovery, security, compliance and email availability.

Different businesses rely on different applications for their day-to-day operations: healthcare companies use medical records systems to treat patients, while a manufacturing plant needs its ERP system to track production. But generally speaking, most businesses, regardless of their vertical, rely on email to communicate with co-workers and customers. If the messaging platform goes down for any amount of time, users and the business suffer. A move to Microsoft’s cloud-based collaboration platform introduces new administrative challenges, such as determining whether the organization needs an Office 365 backup product.

IT pros tasked with all things related to Exchange Server administration, from system uptime, mailbox recoverability, system performance and maintenance to user setups and general reactive system issues, will have to adjust when they move to Office 365. Many of the responsibilities related to system performance, maintenance and uptime shift to Microsoft. Unfortunately, not all of these outsourced activities meet the expectations of Exchange administrators, and some admins will resort to alternative methods to ensure their systems have the right protections to avoid serious disasters.


To keep on-premises Exchange running with high uptime, Exchange admins rely on setting up the environment with adequate redundancies, such as virtualization with high availability, clustering and proper backup if a recovery is required. In a hosted Exchange model with Office 365, email administrators rely heavily on the hosting provider to manage those redundancies and ensure system uptime. However, despite the promised service-level agreements (SLAs) by Microsoft, there are still some gaps that Exchange administrators must plan for to get the same level of system availability and data protection they previously experienced with their legacy on-premises Exchange platform.

Hosted email in Exchange Online, which can be purchased as a stand-alone service or as part of Office 365, has certainly attracted many companies. Microsoft did not provide exact numbers in its most recent quarterly report, but estimates put the figure at around 180 million Office 365 commercial seats. Given the popularity of the platform, one would assume Microsoft offers an Office 365 backup option, at minimum for the email service. Microsoft does, but not in the way Exchange administrators know backup and disaster recovery.

Microsoft does not have backups for Exchange Online

Microsoft provides some level of recoverability for mailboxes stored in Exchange Online. If a user loses email, the Exchange administrator can restore the deleted items through the Deleted Items and Recoverable Items folders in Outlook or restore an entire mailbox with PowerShell.

The Undo-SoftDeletedMailbox PowerShell command recovers the deleted mailbox, but there are some limitations. The command is only useful when a significant number of folders have been deleted from a mailbox and the recovery attempt occurs within 30 days. After 30 days, the content is not recoverable.
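
For a mailbox that is still within that window, the recovery itself is brief. The following is a rough sketch from an Exchange Online PowerShell session, with placeholder addresses rather than anything specific to your tenant:

  # List soft-deleted mailboxes that are still within the 30-day recovery window.
  Get-Mailbox -SoftDeletedMailbox | Select-Object DisplayName, PrimarySmtpAddress, WhenSoftDeleted

  # Recover one of them (the addresses here are placeholders).
  Undo-SoftDeletedMailbox -SoftDeletedObject jdoe@contoso.com -WindowsLiveID jdoe@contoso.com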

Due to this limited backup functionality, many administrators look to third-party Office 365 backup vendors such as SkyKick, BitTitan, Datto and Veeam to expand their backup and recovery capabilities beyond the 30 days that Microsoft offers. At the moment, this is the only way for Exchange administrators to satisfy their organization’s backup and disaster recovery requirements.

Microsoft promises 99.9% uptime with email

No cloud provider is immune to outages and Microsoft is no different. Despite instances of service loss, Microsoft guarantees at least 99.9% uptime for Office 365. This SLA translates into no more than nine hours of downtime per year.

For most IT executives, this guarantee does not absolve them of the need to plan for possible downtime. Administrators should investigate the costs and the technical abilities of an email continuity service from vendors, including Mimecast, Barracuda or TitanHQ, to avoid trouble from unplanned outages.

Email retention policies can go a long way for sensitive content

The ability to define different types of data access and retention policies is just as important as backup and disaster recovery for organizations with compliance requirements.

Groups that need to prevent accidental email deletion will need to work with the Office 365 administrator to set up the appropriate hold policies or archiving configuration to protect that content. These are native features in Exchange Online, and administrators must build familiarity with them to ensure they can meet the legal requirements of the different groups in their organization.
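
As one illustration of those native controls, placing a mailbox on litigation hold is a single command from an Exchange Online PowerShell session. This is a hedged sketch; the mailbox identity is a placeholder and the seven-year duration is only an example, not legal guidance:

  # Place a mailbox on litigation hold so deleted and edited items are preserved.
  # The identity is a placeholder; 2555 days is an example seven-year hold.
  Set-Mailbox -Identity jdoe@contoso.com -LitigationHoldEnabled $true -LitigationHoldDuration 2555

  # Confirm the hold took effect.
  Get-Mailbox -Identity jdoe@contoso.com | Select-Object LitigationHoldEnabled, LitigationHoldDuration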

Define backup retention policies to meet business needs

For most backup offerings for on-premises Exchange, storage is always a concern for administrators. Since it is generally the dictating factor behind the retention period of email backup, Exchange admins have to keep disk space in mind when they determine the best backup scheme for their organization. Hourly, daily, weekly, monthly and quarterly backup schedules are influenced by the amount of available storage.

Office 365 backup products for email from vendors such as SkyKick, Dropsuite, Acronis and Datto ease the concerns related to storage space. This gives the administrator a way to develop the best protection scheme for their company without the added worry of wondering when to purchase additional storage hardware to accommodate these backups.


Notre Dame uses N2WS Cloud Protection Manager for backup

Coinciding with its decision to eventually close its data center and migrate most of its workloads to the public cloud, the University of Notre Dame’s IT team switched to cloud-native data protection.

Notre Dame, based in Indiana, began its push to move its business-critical applications and workloads to Amazon Web Services (AWS) in 2014. Soon after, the university chose N2WS Cloud Protection Manager to handle backup and recovery.

Now, 80% of the applications used daily by faculty members and students, as well as the data associated with those services, live in the cloud. The university protects more than 600 AWS instances, and that number is growing fast.

In a recent webinar, Notre Dame systems engineer Aaron Wright talked about the journey of moving a whopping 828 applications to the cloud, and protecting those apps and their data.  

N2WS, which was acquired by Veeam earlier this year, is a provider of cloud-native, enterprise backup and disaster recovery for AWS. The backup tool is available through the AWS Marketplace.

Wright said Notre Dame’s main impetus for migrating to the cloud was to lower costs. Moving services to the cloud would reduce the need for hardware. Wright said the goal is to eventually close the university’s on-premises primary data center.

“We basically put our website from on premises to the AWS account and transferred the data, saw how it worked, what we could do. … As we started to see the capabilities and cost savings [of the cloud], we were wondering what we could do to put not just our ‘www’ services on the cloud,” he said.

Wright said Notre Dame plans to move 90% of its applications to the cloud by the end of 2018. “The data center is going down as we speak,” he said.

We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers.
Aaron Wright, systems engineer, Notre Dame

As a research organization that works on projects with U.S. government agencies, Notre Dame owns sensitive data. Wright saw the need for centralized backup software to protect that data but could not find many good commercial options for protecting cloud data, until he came across N2WS Cloud Protection Manager through the AWS Marketplace.

“We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers,” he said. By comparison, Wright said his team deployed Cloud Protection Manager in less than an hour.

Wright said N2WS Cloud Protection Manager has rescued Notre Dame’s data at least twice since the installation. One incident came after Linux machines failed to boot following a patch, and engineers restored data from snapshots within five minutes. Wright said his team used the snapshots to find and detach a corrupted Amazon Elastic Block Store volume, and then manually created and attached a new volume.
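
The general shape of that volume swap can be scripted. Below is a rough sketch using the AWS Tools for PowerShell; every ID and the device name are placeholders, not values from Notre Dame’s environment:

  # Create a replacement volume from a known-good snapshot (all IDs are placeholders).
  $newVol = New-EC2Volume -SnapshotId snap-0123456789abcdef0 -AvailabilityZone us-east-1a

  # Detach the corrupted volume from the instance.
  Dismount-EC2Volume -VolumeId vol-0aaaaaaaaaaaaaaaa

  # Attach the restored volume in its place.
  Add-EC2Volume -VolumeId $newVol.VolumeId -InstanceId i-0bbbbbbbbbbbbbbbb -Device '/dev/xvdf'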

In another incident, Wright said the granularity of the N2WS Cloud Protection Manager backup capabilities proved valuable.

“Back in April-May 2018, we had to do a single-file restore through Cloud Protection Manager. Normally, we would have to have taken the volume and recreated a 300-gig volume,” he said. Locating and restoring that single file so quickly allowed him to resolve the incident within five minutes.

Microsoft Teams roadmap introduces telephony, interoperability

Many Skype for Business users probably won’t migrate to Microsoft Teams over the next year, because they are concerned about the lack of telephony features in the new chat-based workspace in Office 365, according to one industry expert.

The Microsoft Teams roadmap, released this week, promises a slew of Skype for Business features over the coming months. But Microsoft won’t fully roll out many of the telephony features until late next year.

“The biggest concern is Microsoft won’t deliver a lot of the telephony set until well into 2018,” said Irwin Lazar, a Nemertes Research analyst.

The Microsoft Teams roadmap details several Skype for Business features that will be rolled into Teams to help users prepare for a migration. Enterprise calling features — such as call park, group call pickup, location-based routing and shared-line appearance — are not expected until the fourth quarter of next year.

Additionally, the Microsoft Teams roadmap does not offer any new insights into Microsoft’s collaboration strategy, Lazar said. The roadmap, however, does provide customers with a timeline of when they can expect to see certain telephony features in Teams.

“It provides more clarity and will help companies plan for an eventual transition to Teams,” Lazar said.

Microsoft Teams roadmap: A bumpy ride?

The biggest concern is Microsoft won’t deliver a lot of the telephony set until well into 2018.
Irwin Lazar, analyst at Nemertes Research

Microsoft announced last month that Teams would replace Skype for Business Online to become the main communications client within Office 365. The announcement left many organizations questioning the migration process and the quality of telephony within Teams.

After Microsoft posted a blog announcing the roadmap, several users commented to share their thoughts. Some users are happy about the change. They lauded the upcoming Microsoft Teams features, the integration between Skype and Teams, and how the roadmap helps organizations plan for the migration and improve adoption.

Other users, however, remain skeptical.

“Honestly I am very disappointed you are moving in this direction, I miss the days of a small simple interface like the old school Communicator,” John Gooding posted in response to Microsoft’s blog. “We tried Slack and Teams, and it was fun for 30 minutes then it turned into a productivity drag.”

Messaging, meetings and more

The Microsoft Teams roadmap focuses on messaging, meetings and calling capabilities within the application. Lazar said the roadmap will help organizations with their user-awareness and adoption programs, and it will help them plan training for users as features become available.

Messaging. As a messaging-centric application, Teams already offers persistent, one-on-one and group chat. Features such as the ability to import contacts from Skype for Business, unified presence and messaging policies are expected to be available by the end of the first quarter of 2018. Microsoft expects to add screen sharing and federation between companies by the end of the second quarter of 2018.

Meetings. Teams includes meeting capabilities such as screen sharing and capturing chats in the channel after a meeting. Later this quarter, Microsoft will debut audio conferencing in over 90 countries, meeting support in the Edge and Google Chrome web browsers, and call-quality analytics.

Microsoft will introduce meeting room support with Skype Room Systems, cloud video interoperability with third-party devices and support for the Surface Hub by the end of the second quarter of 2018.

Calling. Later this year, Microsoft plans to introduce voicemail, call forwarding, e911 support, Skype for Business to Teams calling, and IT policies for Teams interoperability. In the second quarter next year, Microsoft will enable customers to use their existing telecom voice line to activate calling services in Office 365. Additional capabilities such as call queues and one-to-one to group call escalation with Teams, Skype for Business and PSTN participants will also be available.

Additional Microsoft Teams features will roll out in the second quarter of 2018, including recording and storing meetings, meeting transcriptions and the ability to search key terms.

In an effort to clear up confusion over its collaboration roadmap, Microsoft will also update the names of its PSTN Calling, PSTN Conferencing and Cloud PBX services. PSTN Calling will be renamed Calling Plan, PSTN Conferencing will be named Audio Conferencing, and Cloud PBX will be called Phone System.

As for features not yet on the Microsoft Teams roadmap, Lazar said he’d like to see an announcement that customers running on-premises Skype for Business will be able to use the cloud-based Teams for telephony.

Navigate the Microsoft roadmap and keep IT pros challenged

IT managers face multiple challenges. Not only do they need to migrate applications to new server OSes or deploy new web servers, they also must retain a team of talented Windows administrators despite financial limitations.

Windows-based system admins and IT pros typically land in one of three camps: those who stick closely to the Microsoft roadmap, those who keep up with specific upgrades as needed and those who sit back and wait for the hype to cool. Most shops find themselves in the second camp, and unless the work stays interesting, their highly qualified IT workers will seek out other, more exciting opportunities.

Shops that hug the Microsoft roadmap

In Microsoft’s perfect world, all the engineers in its environments would be PowerShell MVPs, know how to work with the latest server OS and freshly baked tools from Azure, and find ways to ramp customers into the cloud to cut costs and improve agility. This group of early adopters is familiar with Microsoft’s latest technologies, and will implement them upon release.

These organizations cannot wait to get out of the data center. Usually, they provide applications and services across a wide range of cloud providers and will crank out new services for any platform a customer uses. There are not many companies that change platforms quite that fast, but there are probably a few who want that new whiz-bang feature for development.

Microsoft — and many who work with the company’s offerings — would love to move everyone to the cloud, giving users access to the latest technologies. There are potential savings with hardware and compute on demand. The cloud promises more flexibility for companies that need to scale up and scale out quickly.


Companies that partner with Microsoft are usually close to the cutting edge, if not falling over it; they want to keep engineers knowledgeable on the latest Microsoft technologies. This benefits customers, since someone has to know how to implement features such as Storage Spaces and do it in a timely fashion. In addition, administrators can lean on the partner instead of paying for additional support from Microsoft.

But, in reality, a business must consider the benefits of a cloud migration and how it affects the bottom line. The move might be more cost-efficient, but many businesses need a proof of concept that showcases that the benefits outweigh the drawbacks.

Shops that work with what they have

Certain shops have fallen off the Microsoft roadmap; these organizations rarely, if ever, upgrade to a newer version or technology. They use what works until it doesn’t work anymore. These shops cannot move an old application to a new platform for a number of reasons, including support requirements.

For other organizations, systems work as they always have in the past, and that’s good enough. Some admins fear change, and many end users prefer to remain comfortable with the status quo.

Companies have various reasons to reside in this camp. But, typically, it boils down to money. It can be expensive to get a legacy application to work on a newer server OS or to upgrade the hardware that cranks out the widgets, so they choose to leave things as they are.

Shops that upgrade — eventually

Many organizations fall somewhere between cutting edge and status quo. These businesses move technology forward as they need to — or as their budget allows. But they do so at a controlled pace to appease employees, users, shareholders and IT teams.

Technology professionals need to find the middle ground and work to keep organizations on a forward path. They must ensure businesses are far enough ahead of the end-of-support lifecycle to avoid trouble as products fall by the wayside.

Microsoft realizes some customers cannot afford to follow its roadmap and immediately move to the latest technologies due to either financial or business constraints. For IT professionals who need to keep customers and their applications running, this is both a blessing and a curse. Legacy technology, with its predictable quirks and behaviors, enables IT to provide long-term support. But that stick-with-what’s-comfortable mentality often prevents the organization from changing gears to take advantage of benefits that new tools and services can bring.

All shops should make IT staff feel important

Businesses that support legacy technologies should make time for IT staff to work on compelling tasks that expand their skill set and make them feel valued. Plenty of organizations still run Windows Server 2008 R2 but might not use its advanced features, such as BranchCache and DirectAccess. Give admins a project to put these technologies into production if they can help reduce the number of help desk tickets.

PowerShell is another established technology that administrators can learn to automate tasks on Windows systems. As Microsoft builds on PowerShell’s interoperability features, admins can lean on that expertise to manage Linux servers and cloud resources.
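
As a small example of the kind of task worth automating, the sketch below collects free disk space from a handful of servers. The server names are placeholders, and it assumes CIM (WinRM) access to each host:

  # Report free space on fixed drives across a list of servers.
  # Server names are placeholders; requires CIM (WinRM) connectivity to each host.
  $servers = 'SRV01', 'SRV02', 'SRV03'

  foreach ($server in $servers) {
      Get-CimInstance -ComputerName $server -ClassName Win32_LogicalDisk -Filter 'DriveType=3' |
          Select-Object @{ Name = 'Server'; Expression = { $server } },
                        DeviceID,
                        @{ Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 1) } }
  }

Starting with read-only reporting scripts like this one lets admins build automation habits before graduating to scripts that change systems.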

Make room for newer Microsoft technologies to keep talented IT personnel engaged. For example, administrators can implement the Server Management Tools suite, a free, Azure-based remote management service, to gain experience with Microsoft’s cloud platform.

These measures won’t pair administrators with the latest technologies, but they should nudge them into more compelling areas. And that can benefit the organization as a whole.
