Cloud adoption a catalyst for IT modernization in many orgs

One of the biggest changes for administrators in recent years is the cloud. Its presence requires administrators to migrate from their on-premises way of thinking.

The problem isn’t the cloud. After all, there should be less work if someone else looks after the server for you. The arrival of the cloud has brought to light some of the industry’s outdated methodologies, which is prompting this IT modernization movement. Practices in many IT shops were not as rigid or regimented before the cloud came along because external access was limited.

Changing times and new technologies spur IT modernization efforts

When organizations were exclusively on premises, it was easy enough to add finely controlled firewall rules to allow only certain connections in and out. Internal web-based applications did not need HTTPS; plain HTTP worked fine. You did not have to muck around with certificates, which always seem difficult to comprehend. Anyone on your network was authorized to be there, so it didn’t matter if data was unencrypted. The risk, a lot of us told ourselves, wasn’t worth the effort, and the users would have no idea anyway.

You would find different ways to limit the threats to the organization. You could implement 802.1X, which only allowed authorized devices on the network. This reduced the chances of a breach because the attacker would need both physical access to the network and an approved device. Active Directory could be messy; IT had a relaxed attitude about account management and cleanup, which was fine as long as everyone could do their job.

The pre-cloud era allowed for a lot of untidiness and shortcuts, because the risk of these things affecting the business in a drastic way was smaller. Administrators who stepped into a new job would routinely inherit a mess from the last IT team. There was little incentive to clean things up; just keep those existing workloads running. Now that there is increased risk with exposing the company’s systems to the world via cloud, it’s no longer an option to keep doing things the same way just to get by.

One example of how the cloud forces IT practices to change is the default configuration of Azure AD Connect, the tool that synchronizes on-premises Active Directory with Microsoft’s Azure Active Directory. Unless you apply filtering, it syncs every Active Directory object to the cloud, and the official documentation states that this is the recommended configuration. Think about that: every overlooked account still using a basic password that leaked years ago in the LinkedIn breach is now in the cloud for use by anyone in the world. Those accounts went from a forgotten mess swept under the rug to a ticking time bomb, waiting for attackers to land a successful login as they spin through lists of millions of username and password combinations.

Back on the HTTP/HTTPS side, users now want to work from home or anywhere else they have an internet connection. They also want to do it from any device, such as a personal laptop, mobile phone or tablet. Exposing internal websites was once a case of poking a hole in the firewall and hoping for the best, and it still is in many scenarios. With an unencrypted HTTP site, everything pushed to and from that endpoint is at risk, from anything the user sees to anything they enter, such as a username and password. Your users could be working from a free McDonald’s Wi-Fi connection or from any airport in the world. It’s not hard for attackers to set up fake relay access points, listen to the traffic and read anything that is not encrypted. Look up the Wi-Fi Pineapple for more information about the potential risks.

How to accommodate your users and tighten security

As you can see, it’s easy to end up in a high-risk situation if IT focuses on making users happy instead of on company security. How do you make the transition to a safer environment? At a high level, there are several immediate actions to take:

  • Clean up Active Directory. Audit accounts, disable those no longer in use and organize your organizational units so they are clear and logical. Implement an account management process from beginning to end; the first sketch after this list shows a simple stale-account audit.
  • Review your password policy. If you have no other protection, cycle passwords regularly and enforce some level of complexity. Then look at added protections such as multifactor authentication (MFA), which Azure Active Directory provides and which can do away with password cycling. You can also combine MFA with conditional access so that a user on your trusted network or on a trusted device doesn’t even get an MFA prompt. The choice is yours.
  • Review and report on account usage. When something is amiss with account usage, you should know as soon as possible so you can take corrective action. Technologies such as Azure Active Directory’s identity protection feature issue alerts and remediate suspicious activity, such as a login from a location that is not typical for that account.
  • Implement HTTPS on all sites. You don’t have to buy a certificate for each individual site to enable HTTPS. Save money and generate them yourself if a site is only for trusted computers on which you can deploy the certificate chain; the second sketch after this list shows one way to do that. Another option is to buy a wildcard certificate to use everywhere. Once the certificate is deployed, you can expose the sites you want with Azure Active Directory Application Proxy rather than open ports in your firewall. This gives the added benefit of forcing an Azure Active Directory login, which applies MFA and identity protection before the user gets to the internal site, regardless of the device and where the user is physically located.
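
To make the Active Directory cleanup item concrete, here is a minimal sketch of a stale-account audit, assuming the third-party ldap3 Python package; the domain controller hostname, base DN and credentials are placeholders.

```python
# Sketch: list enabled accounts that have not logged on in roughly 90 days.
# Assumes the ldap3 package (pip install ldap3); the hostname, base DN and
# credentials are placeholders.
from datetime import datetime, timedelta, timezone

from ldap3 import SUBTREE, Connection, Server

BASE_DN = "DC=corp,DC=example,DC=com"  # hypothetical domain
STALE_AFTER = timedelta(days=90)

# lastLogonTimestamp is a FILETIME: 100-nanosecond ticks since Jan. 1, 1601.
cutoff = datetime.now(timezone.utc) - STALE_AFTER
cutoff_filetime = int(
    (cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc)).total_seconds() * 10_000_000
)

server = Server("dc01.corp.example.com", use_ssl=True)
conn = Connection(server, user="CORP\\auditor", password="replace-me", auto_bind=True)

# Enabled user accounts whose replicated last logon is older than the cutoff.
conn.search(
    search_base=BASE_DN,
    search_filter=(
        "(&(objectCategory=person)(objectClass=user)"
        f"(lastLogonTimestamp<={cutoff_filetime})"
        "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))"  # skip disabled accounts
    ),
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "lastLogonTimestamp"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```

Accounts the query returns are candidates for the disable-and-review step of your account management process, not for immediate deletion; lastLogonTimestamp replicates only every couple of weeks, so treat the results as a starting point.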

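For the HTTPS item, the sketch below generates a self-signed certificate and key for an internal site, assuming the Python cryptography package; the hostname and output file names are placeholders, and clients on the trusted network still need the certificate deployed to them before they will trust the site.

```python
# Sketch: generate a self-signed certificate for an internal site.
# Assumes the cryptography package (pip install cryptography); the hostname
# and output file names are placeholders.
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

HOSTNAME = "intranet.corp.example.com"  # hypothetical internal site

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, HOSTNAME)])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed, so subject and issuer are the same
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(HOSTNAME)]), critical=False)
    .sign(key, hashes.SHA256())
)

with open("intranet.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("intranet.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```
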
These are a few of the critical aspects to think about when changing your mindset from on premises to cloud. This is a basic overview of the areas that deserve a closer look; there’s a lot more to consider, depending on the cloud services you plan to use.

OVHcloud expands US footprint with channel partner program

Europe’s largest cloud provider has stepped up its channel presence in the U.S. with a new partner program.

OVHcloud, which has its global headquarters in Roubaix, France, broke into the U.S. market only recently, following its acquisition of VMware’s vCloud Air business. With its U.S.-based partner program, launched this week, OVHcloud hopes to expand the reach of its IaaS portfolio, spanning VMware-based hosted private cloud, bare-metal server and public cloud options.

“I think we are the best-kept technical secret nobody has heard of,” said David Wigglesworth, chief revenue officer at OVHcloud, which has its U.S. corporate headquarters in Reston, Va.

The OVHcloud channel program features four partner tiers with incremental benefits and requirements. Partners will have access to training; marketing support, such as market development funds; and sales activity planning, the company said.

OVHcloud has already signed a handful of U.S.-based partners, including solution provider FusionStorm. Wigglesworth noted that OVHcloud’s offerings will also be sold by VMware sales reps.

HPE updates Partner Ready program

Hewlett Packard Enterprise (HPE) this week unveiled enhancements to the Partner Ready program, which now features increased rewards for resellers.

According to HPE, resellers can now earn increased rebates and other incentives for selling products in “high-growth markets,” specifically storage, composable infrastructure, hyper-converged technology, software and consumption services. The Partner Ready program enhancements will take effect on Nov. 1, HPE said.

“Because of the changes that we have made, resellers and other partner business types ought to look at this as an opportunity to redouble their focus and efforts around HPE because it will prove to be very rewarding,” said Terry Richardson, vice president of North American channels and alliances at HPE.

HPE’s products that align with the market opportunities it is targeting include HPE Nimble Storage and 3PAR for storage, SimpliVity and Synergy for composable and hyper-converged technology, and GreenLake and Datacenter Care for service-led offerings.

HPE said it will also enhance the Partner Ready program with the following features:

  • Further simplifications. For example, partners will now receive rewards from the first sale without gates, caps or targets, HPE said.
  • A push to increase HPE’s consumption-based offerings. HPE said it will roll out a new competency for the HPE GreenLake suite of pay-per-use on-premises offerings, in addition to a rebate for partners enabled in consumption-based models.
  • Expanded technical enablement, especially around HPE’s high-growth market opportunities.

RapidFire Tools expands MSP offerings

RapidFire Tools Inc., based in Atlanta, is expanding its managed services provider (MSP) offerings with InDoc, a tool that provides web-based access to clients’ network data.

InDoc, an Amazon cloud-based portal, is built into RapidFire Tools’ Network Detective Reporter product for scheduling automated network scans and reports. MSP technicians can use InDoc to obtain network data via desktop or mobile devices and can also store information on the portal, such as client-specific notes, remediation procedures, checklists and passwords. Information is stored in an encrypted data store. InDoc employs additional layers of encryption for confidential data and passwords. The tool includes a usage log that provides an audit trail of technicians who have accessed data and when they did so, according to RapidFire Tools.

Michael Mittel, CEO at RapidFire Tools, said MSPs account for more than 95% of the company’s business, noting service providers “are still a growing part of our business.”

He said RapidFire Tools now has more than 6,000 MSPs using its tools worldwide. He added the company is expanding its offerings to include products that MSPs can resell to their customers.

InDoc will be offered to existing Network Detective Reporter customers as a free upgrade. New subscribers to Network Detective Reporter will receive InDoc as a value-added feature at no extra charge. An unlimited amount of data can be stored via InDoc for each MSP location subscribing to Network Detective Reporter. The company said MSPs with multiple locations should purchase separate Network Detective Reporter subscriptions for each office.

Kaseya reports 30% growth, MSP signings

Kaseya cited MSP growth as it reported year-over-year growth in excess of 30% and projected annual bookings of more than $250 million.

The IT infrastructure management solutions provider said uptake of the latest version of its remote monitoring and management product, VSA, has exceeded the company’s expectations. In the first several months since its release, more than 300 organizations have adopted the technology, according to Kaseya. In addition, the company said about 400 MSPs have signed up for Unitrends MSP thus far in 2018. Kaseya acquired Unitrends, a business continuity and disaster recovery (DR) technology vendor, in May.

Other news

  • Sungard Availability Services, a DR and cloud service provider in Wayne, Pa., has expanded its Payment Card Industry Data Security Standard certification, offering compliant production and DR services on AWS and its managed private cloud.
  • AWS introduced a new program for channel partners in the public sector to grow their cloud businesses within a 110-day time frame. The AWS Public Sector Partner Transformation Program offers a cloud-readiness assessment, training and enablement resources.
  • HCL Technologies, a global technology company that provides infrastructure services, is partnering with ScienceLogic’s IT operations management technology. The partnership addresses the need for automated IT operations among HCL’s enterprise clients embarking on digital transformation projects, according to ScienceLogic. The arrangement also lets HCL’s DRYiCE division use ScienceLogic’s SL1 Automation Engine.
  • Cloud communications provider Avoxi unwrapped a global partner program. Through the program, partners can provide Avoxi’s virtual numbers in more than 120 countries for packaged minutes. Avoxi said it will soon roll out a new partner portal featuring partner sales and management tools and materials.
  • Calligo, a cloud infrastructure services provider based in St. Helier, Jersey, is offering Microsoft Azure Stack services from its Toronto branch. The company said the Toronto installation is its fifth Azure Stack deployment.
  • PSIGEN Software Inc., a document capture, business process automation and document management solutions developer, has inked a distribution pact with Access Control Devices Inc. (ACDI). PSIGEN Software, based in Madison, Ala., said ACDI will serve as its exclusive distributor in North America and Latin America. ACDI, a PaperCut Authorized Solution Center, targets the office equipment reseller channel.

Market Share is a news roundup published every Friday.

AWS Direct Connect updates help globe-spanning users

With a nod from AWS, customers with an international presence can now more simply establish secure network connections for workloads that span multiple regions.

An update to AWS Direct Connect enables enterprises to establish a single dedicated connection across multiple Amazon Virtual Private Clouds (VPCs) and cut down on administrative tasks. Enterprises have clamored for this capability, as the previous approach required them to set up unique connections in each region and peer VPCs across regions.

This feature, called AWS Direct Connect Gateways, is critical for large companies that want business continuity, with data and applications available across AWS regions, said Brad Casemore, an analyst with IDC.

“This is a critical capability for them as they set up direct connections to AWS services,” he said. “They want to ensure they can work across zones as dynamic application requirements dictate.”

All the major public cloud vendors have their own flavor of a dedicated networking service for enterprise customers to improve security, bandwidth and performance. These new AWS Direct Connect Gateways are global objects that exist across all public regions, with inter-region communication occurring on the AWS network backbone.

At Onica, an AWS consulting partner in Santa Monica, Calif., most of its enterprise customers have requested this capability because of the challenges created by the old model, said Kevin Epstein, Onica CTO.

Previously, users had to rely on IPsec virtual private networks to achieve the same result. That could still create real problems if, say, a master database is in one region and services in other regions rely on that database. Users must either replicate that database across AWS regions or suffer a degree of latency that’s unacceptable for certain workloads.

Amazon built its AWS regions to be self-contained to avoid cascading failures, and while that model helped limit the impact of the major AWS outage earlier this year, it hampers customers in other ways, Epstein said.

In the past, when other vendors added similar capabilities, AWS argued that segmentation between regions was the best way to operate on its platform securely. These gateways represent a change in that strategy.

“This to me is the first major step in nodding to the global players and saying, ‘We understand the challenges and we’re going to take down those barriers for you,'” Epstein said.

AWS Direct Connect Gateways require IP address ranges that don’t overlap and all the VPCs must be in the same account. Amazon said it plans to add more flexibility here eventually.
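
As a rough illustration, a minimal sketch of wiring up one of the new gateways with boto3 might look like the following; the gateway name, Amazon-side ASN and virtual private gateway ID are placeholders, and it assumes a virtual private gateway is already attached to each VPC you want to reach.

```python
# Sketch: create a Direct Connect gateway and associate it with a VPC's
# virtual private gateway. Assumes boto3; all names and IDs are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Direct Connect gateways are global objects; one gateway can front VPCs in
# multiple regions, provided their address ranges don't overlap.
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dx-gateway",
    amazonSideAsn=64512,
)
gateway_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# Repeat this association for each VPC (all in the same account) by pointing
# at the virtual private gateway attached to that VPC.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gateway_id,
    virtualGatewayId="vgw-0123456789abcdef0",
)
```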

The overlap issue may be a problem for large startups that haven’t considered IP address spacing, but it shouldn’t cause too many problems at large enterprises that already have a mature outlook on network allocation, Epstein said.

And while these gateways focus on connections into the cloud, Amazon is also making network changes within its cloud. AWS PrivateLink creates endpoints inside a VPC, reachable through network interfaces and IP addresses within a VPC subnet.

PrivateLink can be connected via API to Kinesis, Service Catalog, EC2, EC2 Systems Manager and Elastic Load Balancing, with Key Management Service, CloudWatch and others to be added later. That allows customers to manage AWS services without any of that traffic traveling over the Internet and to cut down on costly egress fees.
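
As a hedged sketch of that model, creating an interface endpoint for Kinesis with boto3 might look like this; the VPC, subnet and security group IDs are placeholders.

```python
# Sketch: create a PrivateLink interface endpoint for Kinesis so traffic to
# the service stays on the AWS network. Assumes boto3; IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```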

“This is mostly about keeping the traffic within the AWS network,” Casemore said. “Customers incur additional charges when data must traverse the Internet.”

Google addresses inter-zone latency

Customers with global footprints or latency sensitive apps have forced many cloud vendors, not just AWS, to look closer at networking. Google Cloud Platform, a distant third in the public cloud market behind AWS and Microsoft Azure, has made a number of moves in the past three months to bolster its networking capabilities.

GCP this month said the latest version of Andromeda, its internal software-defined network stack, will reduce intra-zone network latency between VMs by 40%. Zones are Google’s equivalent of AWS availability zones.

With this move, Google hopes to attract developers who prefer bare-metal private hosting over the public cloud for building latency-sensitive applications such as high-performance computing, financial transactions or gaming.

Customers will have to calculate whether these improvements go far enough to address cost, bandwidth and latency, but it’s clear cloud vendors are focused on network innovations, Casemore said.

“It’s all about pulling a greater share of new and existing apps to the public cloud,” he said. “The network has certainly become an enabler and, as they’re finding out, it still requires enhancements if they want to continue to expand their footprint.”

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].