
SAP: Partners are the key to customer success

Customer success was the main focus of the SAP Global Partner Summit Online, a virtual conference held this week.

SAP Global Partner Summit Online is a gathering of SAP executives, partners and customers who convene to discuss innovations and resources.

Partners are the key to customer success and happiness, said Karl Fahrbach, who was appointed SAP’s first chief partner officer about a year ago. Partners provide a variety of services for SAP customers, including consulting on and implementing systems, as well as developing and marketing applications built on platforms like SAP Cloud Platform, or extensions to systems like SAP SuccessFactors.

“Customer success means that we recognize that, in order to make our customers successful, we need to do it with our partners,” Fahrbach said. “The role of the partner has changed within SAP. It’s no longer about sales with our reselling partners or implementation with our services partners.”

He stressed that partners are key players in advancing SAP’s idea of the intelligent enterprise, a broad vision of advanced enterprise systems that allow companies to transform old business processes or develop new business models.

The initiative to rely on partners as the driving force for customer success comes from the top levels of SAP, a point SAP CEO Christian Klein emphasized in his streamed keynote address.

“Everyone at SAP has to understand that customer success is not about the point of sale,” Klein said. “It continues across the sales lifecycle, and partners play a vital role in that. So, we have to double down on that.”

Klein vowed that SAP would develop tools and programs to simplify and automate partner interactions.

“We owe our ecosystem a much better experience than in the past,” he said.

Focus on implementation quality

At the summit, SAP unveiled new initiatives and enhancements to existing programs that are designed to help partners better serve SAP customers.

For implementation partners, SAP debuted the new Partner Delivery Quality Framework (PDQF), an initiative designed to help partners implement higher-quality projects faster, Fahrbach said.

Karl Fahrbach, chief partner officer, SAP

The PDQF consists of three components: project delivery, partner skills and post-sales management. The first component looks at project delivery quality and establishes feedback loops to ensure that an implementation is on track and adoption is successful.

“You can see in real time how the implementation is going, what’s being deployed, how the adoption is going, because this is key to see if this customer will be successful or not,” he said. “We’re going to share that information with the partner to make sure that we are transparent, and we support the partner in delivering that quality.”

The second component consists of investments in certifications and skills that partners can use to make sure the project quality is high. The third component focuses on the partner’s post-sales management. An SAP team of partner delivery managers will work with partners’ project managers to deliver quality standards and resolve escalations.

SAP partners will also now have free access to the same testing and demo systems that SAP uses internally to develop and demonstrate projects for customers.

This will enable partners to build applications that integrate various SAP platforms, like S/4HANA, SAP Ariba, SAP SuccessFactors, and SAP S/4HANA Cloud, in a test and demo environment that they previously had to pay for, Fahrbach said.

“They will be able to show end-to-end scenarios of the intelligent enterprise without having any additional costs,” he said. “The partners have been asking if they can get the same environments that SAP uses to do the demos, and now they have free access. This will improve the economics for the partners because it’s free, and the quality of the demos will improve as well.”

A quicker path to validated apps

For independent software vendor (ISV) partners that develop SAP-based applications and extensions, SAP unveiled the Partner Solution Progression framework. The initiative enables partners to quickly develop SAP validated products and make them available on the SAP App Center, an online marketplace for applications and SAP product extensions, according to SAP.

Having apps that are validated and well-supported by SAP can be vital to an ISV’s success, and the Partner Solution Progression framework allows ISVs to gradually advance the technical and business quality of their applications. Once a partner puts a validated app on the SAP App Center, it can grow into the Partner Spotlight program that includes more go-to-market support. If the partner’s strategy and app success continue to improve, the app is eligible to be invited to SAP Endorsed Apps, an SAP premium certification initiative.

Christian Klein, CEO, SAP

The idea is to make it much easier for partners to get applications on the SAP App Center and show that they are valuable innovative products, Klein said.

“Business on the SAP App Center has quadrupled, but it took way too long for partners to become a partner in the App Center and to onboard their solution until they make their first dollar in revenue,” Klein said. “We have significantly improved how you become a partner and how you publish in the App Center.”

COVID-19 concerns addressed

When the COVID-19 crisis began earlier in the year, SAP launched a virtual partner advisory council to examine how the crisis might affect the partners’ business and determine what they need to do to address it, Fahrbach said.

One result was a decision to help partners deal with cash-flow issues and credit access, he said. SAP postponed SAP PartnerEdge program fees until later in the year and will not raise annual maintenance fees. SAP PartnerEdge is a program for ISVs that provides resources to help design, develop and bring applications to market.

“We also launched credit service options to make sure that partners have access to credit and have revised commercial guidelines for the cloud,” Fahrbach said.

To that end, partners can now use the Cloud Platform Enterprise Agreement (CPEA), a consumption-based pricing model that was previously available only through SAP’s direct sales force. CPEA meters a customer’s use of SAP systems on the SAP Cloud Platform so that customers are charged only for what they use.

“This will provide our partners the ability to be flexible in the way customers consume our software, which is especially important these days with COVID-19,” Fahrbach said.

Proof will be in the pudding

It’s important that SAP’s messaging on the role of partners is coming directly from recently installed CEO Christian Klein, said Shaun Syvertsen, CEO and managing partner of ConvergentIS, an SAP partner based in Calgary, Alta.

“The idea that Klein has recognized and reinforced with his teams that partners should not feel like SAP services is directly competing with them is important,” Syvertsen said. “Certainly for a few years that was a dramatic trend as SAP was really doubling down on services and growing the services teams and sales positioning, so that’s a remarkable shift, and I think it’s a really healthy one.”

SAP partners would often see similar and competing products coming from SAP product management, and it will be interesting to see if this changes, Syvertsen said.

“The idea that an ecosystem matters is something that we’ve heard from Klein over several years, and there has been a tone of being more open to that. So, now we’ll see if some of those behaviors change within the organization to honor some of the investments the partners have made,” he said. “For example, there’s Sodales Solutions [an SAP partner that develops extensions to SAP SuccessFactors]. If SAP comes out with a new module for SuccessFactors that does what Sodales does, that’s not a good sign for anybody. Those are the kinds of things I’m watching for.”

SAP can do more to boost innovative partners

The partner program initiatives are a welcome development, but SAP could do even more to highlight smaller niche players that build emerging technology or industry expertise into their applications, said Jon Reed, analyst and co-founder of Diginomica.com, an enterprise applications news and analysis site.

Jon Reed, co-founder, Diginomica.com

“This is a time when companies are largely pausing on major software upgrades, but they are eager to extend their platforms with impactful apps and analytics that can get up and running quickly,” Reed said.

Many of SAP’s partners have offerings that fit this bill but do not get enough exposure. Some, like Sodales Solutions, have gained visibility this year, but there needs to be more like that, he said.

Joshua Greenbaum, principal at Enterprise Applications Consulting, agreed that the proof will be in the pudding for SAP’s partner relations.

Joshua Greenbaum, principal, Enterprise Applications Consulting

“The spirit is willing in SAP at the top, and we’ll have to wait to see how everything goes,” Greenbaum said. “They are truly dedicated to the proposition that SAP can’t compete without a healthy and vigorous ecosystem, and I think they really mean that, but unfortunately the best practices have not been best for the partners. They’ve been best for SAP in the past, so this is going to be a real wait and see.”

The trajectory path for partners with the Partner Solution Progression framework is perhaps the best development, he said.

“It took a while to articulate the value of having that trajectory to follow to the partners,” he said. “The key is that SAP has to do good by existing partners, but also make it an enticing ecosystem for new partners — and their reputation isn’t that good. With Fahrbach in charge and Klein’s vision, the pieces are there, but these are complicated, inbred cultural behaviors that need to be modified, and that takes time.”


New ServiceNow workflows extend into more markets

ServiceNow sharpened its focus on vertical markets this week with new workflows for the telecommunications, financial services and healthcare industries designed to make it easier for users to implement digital transformation projects.

Built on the company’s flagship Now Platform, the new telecommunications applications, called Proactive Customer Care and Automated Service Assurance, aim to help service providers manage customer requests as well as identify and resolve technical network problems more quickly and in a more cost-effective way.

“Telcos is a good space to go into given the push ServiceNow has made in the customer service management, IT service management and asset and operations management markets,” said Thomas Murphy, senior director and analyst with Gartner. “Now they can marry supporting customers more closely to what is happening with a complex infrastructure stack.”

Pursuing vertical markets through partnerships is something newly appointed ServiceNow CEO Bill McDermott did successfully as CEO at SAP, Murphy noted, and is a step in the right direction. The strategy, he said, brings closer together ServiceNow’s platform-oriented approach with the domain expertise of the Global System Integrators (GSIs) such as Deloitte and KPMG.

ServiceNow CEO Bill McDermott, pictured at SAP Sapphire 2019, co-led SAP until October 2019, when he took over the helm of ServiceNow.

“They’re going after industries that are on the brink of the most radical operational transformation because of emerging technologies,” said Geoff Woollacott, senior strategy consultant and principal analyst at Technology Business Research in Hampton, N.H. “Companies operating in all three of those markets are inextricably linked with their customers, so these [new products] should appeal to them.”

Now platform to receive AI infusions

In his keynote this week at the company’s virtual Knowledge 2020 Digital Experience event, McDermott said the Now platform continues to be the foundation of the company’s strategy. But as importantly, the company will continue to deliver new workflows laced with AI, machine learning and virtual agents that utilize the core capabilities of the Now Platform.


“We are in a whole new world where it is necessary for a platform approach to incorporate things like AI and predictive analytics,” McDermott said. “Most customer service issues can be and need to be resolved without human intervention. [AI] enables people to do what they were born to do — innovate — and only get involved on the services side when they are really needed.”

Despite the rapid evolution of cloud and availability of AI technologies, the workflows of many companies remain disconnected, both internally and externally, from their users and business partners.

“Most businesses are way too siloed, so you have to connect the whole value chain, which gives customers what they need to work effectively across those silos,” McDermott said.

While ServiceNow continues to strengthen its position in the IT services business, according to most analysts, there are several formidable software and telecommunications companies, such as Verizon and Nokia, as well as Cisco, looking to grab a share of that market. Additionally, analysts point out that IBM and Microsoft both have relationships with telecom companies like AT&T along with software that would make them competitors in this market.

ServiceNow, however, does have AT&T as a customer. At Knowledge 2020, the telecommunications company said ServiceNow helped its customer service agents manage user inquiries from the office and, once the COVID-19 pandemic hit the U.S. in March, from home as well.

Over the next 12 months, AT&T expects to deploy ServiceNow’s Agent Workspace along with a number of ServiceNow workflows to improve the online experience with customers in resolving a variety of issues, said Sorabh Saxena, executive vice president of global operations and services at AT&T Business.

More ServiceNow partnerships in the works

In January, ServiceNow signed a deal making Accenture its go-to-market partner for all of its telecommunications products. ServiceNow created the new telecommunications offerings jointly with a number of telco companies, such as British Telecom (BT). BT is also working with ServiceNow as a design partner, advising on operators' requirements for communication networks.

The company also plans to develop a number of healthcare and life science products to help organizations automate a variety of clinical and business workflows. ServiceNow also penned a go-to-market partnership with KPMG to deliver those offerings. As part of the agreement, KPMG will help shape ServiceNow’s product roadmap as well as contribute to the development of healthcare workflow products including physician onboarding and credentialing. These offerings are expected to be available sometime in 2021, according to the company.

ServiceNow also rolled out Financial Services Operations, which digitizes user requests such as ordering a replacement card or inquiring about payments. The new offering gives IT professionals working in operations a single system that provides insights across systems of record to manage processes and allow users to collaborate across departments.

Earlier this year, ServiceNow formed a go-to-market alliance with Deloitte to deliver financial products. Two of those products are Deloitte’s Complaints Management offering and its Small Business Administration Paycheck Protection Program Forgiveness Solution, which addresses a number of customer banking issues.


Looker analytics platform adding app development capability

Looker is maintaining a focus on application development as it continues to add new features to its analytics platform six months after its last major release and three months after it finally joined forces with Google Cloud.

The vendor, which was founded in 2012 and is based in Santa Cruz, Calif., was acquired by Google for $2.6 billion in June 2019, just four days before Tableau was purchased by Salesforce for $15.7 billion. Unlike Tableau, however, which serves a largely on-premises customer base and delivers platform updates quarterly, Looker is entirely cloud-based and therefore, beyond its one major update each year, delivers new and upgraded features throughout the year.

Looker 7, released in November 2019, included a new application development framework and enhanced embedded BI capabilities. Since then, Looker has kept adding to its set of tools for application developers, enhancing the power of its no-code query capabilities and providing new ways to embed analytics into the applications customers use in their everyday workflows.

“Developers are their bread and butter,” said Mike Leone, senior analyst at Enterprise Strategy Group. “It’s all about enabling developers to seamlessly, intelligently and rapidly incorporate analytics at scale into modern applications. This is and has been a top priority for Looker.”

Meanwhile, as Looker has continued to build up its analytics platform, the vendor’s acquisition was finalized. The purchase closed so recently, however, that there hasn’t yet been any obvious evidence of collaboration between Looker and Google Cloud, analysts said.


“I have not seen anything yet to suggest that they’ve made a dramatic change yet in their approach,” said Dave Menninger, research director of data and analytics research at Ventana Research.

He added, however, that Looker and Google Cloud share a lot of similarities and the two are a natural fit. In particular, the way Looker uses its LookML language to enable developers to build applications without having to write complex code fits in with Google Cloud’s focus.

“Looker has found a good partner in Google in the sense that Looker is really targeted at building custom apps,” Menninger said. “Looker is all about the LookML language and constructing these analyses, these displays that are enhanced by the LookML language. And a large part of Google, the Google Cloud Platform division, is really focused on that developer community. So Looker fits into that family well.”

Leone, meanwhile, also said he’s still waiting to see Google’s influence on Looker but added that he expects to hear more about their integration in the near future.

And collaboration, according to Pedro Arellano, Looker’s vice president of product marketing, is indeed on the horizon. The two are working together on new features. And given that Looker is entirely cloud-based, that it had a strong partnership with Google Cloud before they joined forces and that the two shared 350 customers, Looker’s integration into the Google Cloud portfolio is proceeding more rapidly than it might have if Looker had a host of on-premises customers.

“It’s exciting to talk with the product teams and understand where the potential integration points are and think about these really exciting things that we’ll be able to develop, some things that I expect will be out in a relatively short amount of time,” Arellano said. “That work is happening, and it’s absolutely something we’re doing today.”

As for the features Looker has added to the analytics platform since last fall, one of the key additions is the Slack integration the vendor unveiled when Looker 7 was released but that was still in beta testing at the time. The tool delivers insights directly into customers’ Slack conversations.

Beyond the Slack integration, Looker has added to its extension network, which is its low-code/no-code tool set for developers. Among the latest new tools are the Data Dictionary, which pulls up metadata about fields built by developers using the LookML model and displays them in a digestible format, as well as tools that help developers customize user interfaces and create dashboard extensions such as adding a chat widget.

In terms of query power, Looker has developed what it calls aggregate awareness, a feature that uses augmented intelligence and machine learning to reduce the amount of time it takes a user to run a query and helps them run more focused queries.

“We really think of Looker as a platform for developing and building and deploying any kind of data experience that our customers might imagine,” Arellano said. “We recognize that we can’t anticipate all the data experiences they might come up with. We’re very focused on the developers because these are the people that are building those experiences.”

In addition to the new features Looker has added since the release of Looker 7, the vendor put together the Looker COVID-19 Data Block, a free hub for data related to the ongoing pandemic that includes data models and links to public sources such as Johns Hopkins University, the New York Times and the COVID Tracking Project. The hub uses LookML to power frequent updates and deliver the data in prebuilt dashboards.

“This was an opportunity to do good things with technology and with data,” Arellano said.

As Looker continues to enhance its analytics platform, one of the next areas the vendor says it will focus on is the platform’s mobile capabilities.

Mobile has long been a difficult medium for BI vendors, with data difficult to digest on the small screens of phones and tablets. As a result, many have long ignored mobile. Recently, however, vendors such as Yellowfin and MicroStrategy have made significant investments in their mobile capabilities, and Arellano said that Looker plans to offer an improved mobile experience sometime in the second half of 2020.

That fits in with what Leone expects from Looker now that it’s under the Google Cloud umbrella, which is a broadening of the vendor’s focus and capabilities.

“I think, individually, they were behind a few of the leaders in the space, but the Google acquisition almost instantly brought them back on par with direct competition,” he said. “Google’s influence will be beneficial, especially around the ideas of democratizing analytics/insights, faster on-ramp and a much wider vision that incorporates a powerful AI vision.”


RTO and RPO: Understanding Disaster Recovery Times

You will focus a great deal of your disaster recovery planning (and rightly so) on the data that you need to capture. The best way to find out if your current strategy does this properly is to try our acid test. However, backup coverage only accounts for part of a proper overall plan. Your larger design must include a thorough model of recovery goals, specifically Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Ideally, a restore process would contain absolutely everything. Practically, expect that to never happen. This article explains the risks and options of when and how quickly operations can and should resume following systems failure.

Table of Contents

Disaster Recovery Time in a Nutshell

What is Recovery Time Objective?

What is Recovery Point Objective?

Challenges Against Short RTOs and RPOs

RTO Challenges

RPO Challenges

Outlining Organizational Desires

Considering the Availability and Impact of Solutions

Instant Data Replication

Short Interval Data Replication

Ransomware Considerations for Replication

Short Interval Backup

Long Interval Backup

Ransomware Considerations for Backup

Using Multiple RTOs and RPOs

Leveraging Rotation and Retention Policies

Minimizing Rotation Risks

Coalescing into a Disaster Recovery Plan

Disaster Recovery Time in a Nutshell

If a catastrophe strikes that requires recovery from backup media, most people will first ask: “How long until we can get up and running?” That’s an important question, but not the only time-oriented problem that you face. Additionally, and perhaps more importantly, you must ask the question: “How much already-completed operational time can we afford to lose?” The business-continuity industry represents the answers to those questions in the acronyms RTO and RPO, respectively.

What is Recovery Time Objective?

Your Recovery Time Objective (RTO) sets the expectation for the answer to, “How long until we can get going again?” Just break the words out into a longer sentence: “It is the objective for the amount of time between the data loss event and recovery.”

Recovery Time Objective RTO

Of course, we would like to make all of our recovery times instant. But, we also know that will not happen. So, you need to decide in advance how much downtime you can tolerate, and strategize accordingly. Do not wait until the midst of a calamity to declare, “We need to get online NOW!” By that point, it will be too late. Your organization needs to build up those objectives in advance. Budgets and capabilities will define the boundaries of your plan. Before we investigate that further, let’s consider the other time-based recovery metric.

What is Recovery Point Objective?

We don’t just want to minimize the amount of time that we lose; we also want to minimize the amount of data that we lose. Often, we frame that in terms of retention policies — how far back in time we need to be able to access data. However, failures usually cause a loss of systems during run time. Unless all of your systems continually duplicate data as it enters the system, you will lose something. Because backups generally operate on a timer of some sort, you can often describe that potential loss in a time unit, just as you can with recovery times. We refer to the maximum total acceptable amount of lost time as a Recovery Point Objective (RPO).

Recovery Point Objective RPO

As with RTOs, shorter RPOs are better. The shorter the amount of time since a recovery point, the less overall data lost. Unfortunately, reduced RPOs take a heavier toll on resources. You will need to balance what you can achieve against what your business units want. Allow plenty of time for discussions on this subject.
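
To make both metrics concrete, here is a minimal Python sketch that measures the RPO and RTO actually achieved during an incident and compares them with stated objectives. The timestamps and objective values are illustrative assumptions, not figures from any real outage.

    from datetime import datetime

    # Assumed incident timeline; all timestamps are illustrative only.
    last_recovery_point = datetime(2020, 5, 18, 2, 0)    # last usable backup or replica
    failure_time        = datetime(2020, 5, 18, 9, 47)   # systems went down
    service_restored    = datetime(2020, 5, 18, 13, 15)  # users back online

    achieved_rpo = failure_time - last_recovery_point    # completed work that was lost
    achieved_rto = service_restored - failure_time       # downtime

    rpo_objective_hours = 8   # assumed objective
    rto_objective_hours = 4   # assumed objective

    print(f"Data loss window: {achieved_rpo} (objective: {rpo_objective_hours} h)")
    print(f"Downtime:         {achieved_rto} (objective: {rto_objective_hours} h)")
    print("RPO met:", achieved_rpo.total_seconds() <= rpo_objective_hours * 3600)
    print("RTO met:", achieved_rto.total_seconds() <= rto_objective_hours * 3600)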

Challenges Against Short RTOs and RPOs

First, you need to understand what will prevent you from achieving instant RTOs and RPOs. More importantly, you need to ensure that the critical stakeholders in your organization understand it. These objectives mean setting reasonable expectations for your managers and users at least as much as they mean setting goals for your IT staff.

RTO Challenges

We can define a handful of generic obstacles to quick recovery times:

  • Time to acquire, configure, and deploy replacement hardware
  • Effort and time to move into new buildings
  • Need to retrieve or connect to backup media and sources
  • Personnel effort
  • Vendor engagement

You may also face some barriers specific to your organization, such as:

  • Prerequisite procedures
  • Involvement of key personnel
  • Regulatory reporting

Make sure to clearly document all known conditions that add time to recovery efforts. They can help you to establish a recovery checklist. When someone requests a progress report during an outage, you can indicate the current point in the documentation. That will save you time and reduce frustration.

RPO Challenges

We could create a similar list for RPO challenges as we did for RTO challenges. Instead, we will use one sentence to summarize them all: “The backup frequency establishes the minimum RPO”. In order to take more frequent backups, you need a fast backup system with adequate amounts of storage. So, your ability to bring resources to bear on the problem directly impacts RPO length. You have a variety of solutions to choose from that can help.
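
As a quick illustration of that sentence, the arithmetic below sketches the worst-case RPO implied by a backup schedule; the interval and runtime values are assumptions you would replace with your own.

    # Worst case: a failure hits just before the next backup completes, so the exposure
    # is one full interval plus the time the backup itself needs to finish.
    backup_interval_min = 60   # assumed: backups run hourly
    backup_runtime_min  = 15   # assumed: each backup takes 15 minutes to complete

    worst_case_rpo_min = backup_interval_min + backup_runtime_min
    print(f"Minimum achievable RPO with this schedule: about {worst_case_rpo_min} minutes")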

Outlining Organizational Desires

Before expending much effort figuring out what you can do, find out what you must do. Unless you happen to run everything, you will need input from others. Start broadly with the same type of questions that we asked above: “How long can you tolerate downtime during recovery?” and “How far back from a catastrophic event can you re-enter data?” Explain RTOs and RPOs. Ensure that everyone understands that RPO refers to a loss of recent data, not long-term historical data.

These discussions may require a fair bit of time and multiple meetings. Suggest that managers work with their staff on what-if scenarios. They can even simulate operations without access to systems. For your part, you might need to discover the costs associated with solutions that can meet different RPO and RTO levels. You do not need to provide exact figures, but you should be ready and able to answer ballpark questions. You should also know the options available at different spend levels.

Considering the Availability and Impact of Solutions

To some degree, the amount that you spend controls the length of your RTOs and RPOs. That has limits; not all vendors provide the same value per dollar spent. But, some institutions set out to spend as close to nothing as possible on backup. While most backup software vendors do offer a free level of their product, none of them makes their best features available at no charge. Organizations that try to spend nothing on their backup software will have high RTOs and RPOs and may encounter unexpected barriers. Even if you find a free solution that does what you need, no one makes storage space and equipment available for free. You need to find a balance between cost and capability that your company can accept.

To help you understand your choices, we will consider different tiers of data protection.

Instant Data Replication

For the lowest RPO, only real-time replication will suffice. In real-time replication, every write to live storage is also written to backup storage. You can achieve this many ways, but the most reliable involve dedicated hardware. You will spend a lot, but you can reduce your RPO to effectively zero. Even a real-time replication system can drop active transactions, so never expect a complete shield against data loss.

Real-time replication systems have a very high associated cost. For the most reliable protection, they will need to span geography as well. If you just replicate to another room down the hall and a fire destroys the entire building, your replication system will not save you. So, you will need multiple locations, very high speed interconnects, and capable storage systems.

Short Interval Data Replication

If you can sustain a few minutes of lost information, then you usually find much lower price tags for short-interval replication technology. Unlike real-time replication, software can handle the load of delayed replication, so you will find more solutions. As an example, Altaro VM Backup offers Continuous Data Protection (CDP), which cuts your RPO to as low as five minutes.

As with instant replication, you want your short-interval replication to span geographic locations if possible. But, you might not need to spend as much on networking, as the delays in transmission give transfers more time to complete.

Ransomware Considerations for Replication

You always need to worry about data corruption in replication. Ransomware adds a new twist but presents the same basic problem. Something damages your real-time data. None the wiser, your replication system makes a faithful copy of that corrupted data. The corruption or ransomware has turned both your live data and your replicated data into useless jumbles of bits.

Anti-malware and safe computing practices present your strongest front-line protection against ransomware. However, you cannot rely on them alone. The upshot: you cannot rely on replication systems alone for backup. A secondary implication: even though replication provides very short RPOs, you cannot guarantee them.

Short Interval Backup

You can use most traditional backup software in short intervals. Sometimes, those intervals can be just, or nearly, as short as short-term replication intervals. The real difference between replication and backup is the number of possible copies of duplicated data. Replication usually provides only one copy of live data — perhaps two or three at the most — and no historical copies. Backup programs differ in how many unique simultaneous copies that they will make, but all will make multiple historical copies. Even better, historical copies can usually exist offline.

You do not need to set a goal of only a few minutes for short interval backups. To balance protection and costs, you might space them out in terms of hours. You can also leverage delta, incremental, and differential backups to reduce total space usage. Sometimes, your technologies have built-in solutions that can help. As an example, SQL administrators commonly use transaction log backups on a short rotation to make short backups to a local disk. They perform a full backup each night that their regular backup system captures. If a failure occurs during the day that does not wipe out storage, they can restore the previous night’s full backup and replay the available transaction log backups.
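
That SQL example boils down to choosing the right restore chain. The sketch below models the selection in Python: given a hypothetical backup catalog (the file names and times are made up for illustration), it picks the most recent full backup taken before the failure and the log backups to replay after it.

    from datetime import datetime

    # Hypothetical backup catalog for one database; names and times are illustrative.
    backups = [
        {"type": "full", "taken": datetime(2020, 5, 17, 23, 0), "file": "sun_full.bak"},
        {"type": "log",  "taken": datetime(2020, 5, 18, 9, 0),  "file": "log_0900.trn"},
        {"type": "log",  "taken": datetime(2020, 5, 18, 10, 0), "file": "log_1000.trn"},
        {"type": "log",  "taken": datetime(2020, 5, 18, 11, 0), "file": "log_1100.trn"},
    ]

    failure = datetime(2020, 5, 18, 11, 30)

    # Restore sequence: the most recent full backup taken before the failure,
    # then every surviving log backup taken after that full, replayed in order.
    fulls = [b for b in backups if b["type"] == "full" and b["taken"] < failure]
    last_full = max(fulls, key=lambda b: b["taken"])
    logs_to_replay = sorted(
        (b for b in backups if b["type"] == "log" and last_full["taken"] < b["taken"] < failure),
        key=lambda b: b["taken"],
    )

    print("Restore:", last_full["file"])
    for log in logs_to_replay:
        print("Replay :", log["file"])

    # Work done after the last surviving log backup (11:00) is lost, so the
    # achieved RPO in this scenario is roughly 30 minutes.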

Long Interval Backup

At the “lowest” tier, we find the oldest solution: the reliable nightly backup. This usually costs the least in terms of software licenses and hardware. Perhaps counter-intuitively, it also provides the most resilient solution. With longer intervals, you also get longer-term storage choices. You get three major benefits from these backups: historical data preservation, protection against data corruption, and offline storage. We will explore each in the upcoming sections.

Ransomware Considerations for Backup

Because we use a backup to create distinct copies, it has some built-in protection against data corruption, including ransomware. As long as the ransomware has no access to a backup copy, it cannot corrupt that copy. First and foremost, that means that you need to maintain offline backups. Replication requires essentially constant continuity to its replicas, so only backup can work under this restriction. Second, it means that you need to exercise caution when you execute restore procedures. Some ransomware authors have made their malware aware of several common backup applications, and they will hijack it to corrupt backups whenever possible. You can only protect your offline data copies by attaching them to known-safe systems.

Using Multiple RTOs and RPOs

You will need to structure your systems into multiple RTO and RPO categories. Some outages will not require much time to recover from. Some will require different solutions. For instance, even though we tend to think primarily in terms of data during disaster recovery planning, you must consider equipment as well. If your sales division prints its own monthly flyers and you lose a printer, then you need to establish RTOs, RPOs, downtime procedures and recovery processes just for those print devices.

You also need to establish multiple levels for your data, especially when you have multiple protection systems. For example, if you have both replication and backup technologies in operation, then you will set one pair of RTO/RPO values for times when replication works, and another pair for when you must resort to long-term backup. That could happen due to ransomware or some other data corruption event, but it can also happen if someone accidentally deletes something important.

To start this planning, establish “Best Case” and “Worst Case” plans and processes for your individual systems.
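
One way to start is to record those best-case and worst-case targets per system in a simple structure that everyone can review. The sketch below is a minimal, assumed example; the system names and hour values are placeholders rather than recommendations.

    # Assumed two protection tiers: replication (best case) and long-interval backup (worst case).
    recovery_matrix = {
        "erp_database":   {"best": {"rto_h": 1, "rpo_h": 0.1}, "worst": {"rto_h": 24, "rpo_h": 24}},
        "file_shares":    {"best": {"rto_h": 4, "rpo_h": 1},   "worst": {"rto_h": 48, "rpo_h": 24}},
        "sales_printers": {"best": {"rto_h": 8, "rpo_h": 0},   "worst": {"rto_h": 72, "rpo_h": 0}},
    }

    for system, tiers in recovery_matrix.items():
        print(f"{system}: best case RTO {tiers['best']['rto_h']} h / RPO {tiers['best']['rpo_h']} h; "
              f"worst case RTO {tiers['worst']['rto_h']} h / RPO {tiers['worst']['rpo_h']} h")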

Leveraging Rotation and Retention Policies

For your final exercise in time-based disaster recovery designs, we will look at rotation and retention policies. “Rotation” comes from the days of tape backups, when we would decide how often to overwrite old copies of data. Now that high-capacity external disks have reached a low-cost point, many businesses have moved away from tape. You may not overwrite media anymore, or at least not at the same frequency. Retention policies dictate how long you must retain at least one copy of a given piece of information. These two policies directly relate to each other.

Backup Rotation and Retention

In today’s terms, think of “rotation” more in terms of unique copies of data. Backup systems have used “differential” and “incremental” backups for a very long time. The former is a complete record of changes since the last full backup; the latter is a record of changes since the last backup of any kind. Newer backup systems add “delta” and deduplication capabilities. A “delta” backup operates like a differential or incremental backup, but within files or blocks. Deduplication keeps only one copy of a block of bits, regardless of how many times it appears within an entire backup set. These technologies reduce backup time and storage space needs… at a cost.
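
To see why that distinction matters for rotation, the following sketch compares how many pieces a restore needs under an incremental scheme versus a differential one. The weekly schedule is an assumed example, not a recommendation.

    # Assumed weekly schedule: one full backup on day 0, partial backups afterward.
    # Incremental: a restore needs the full plus every partial taken since it.
    # Differential: a restore needs the full plus only the newest partial.
    schedule = ["full", "partial", "partial", "partial", "partial", "partial", "partial"]

    def restore_set(day: int, mode: str) -> list:
        """Return the indices of the backups needed to restore as of `day`."""
        last_full = max(i for i in range(day + 1) if schedule[i] == "full")
        partials = list(range(last_full + 1, day + 1))
        if mode == "incremental":
            return [last_full] + partials      # the whole chain must be intact
        return [last_full] + partials[-1:]     # only the newest partial is needed

    print(restore_set(6, "incremental"))   # [0, 1, 2, 3, 4, 5, 6] -- long, fragile chain
    print(restore_set(6, "differential"))  # [0, 6] -- two files, less exposure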

Minimizing Rotation Risks

All of these speed-enhancing and space-reducing improvements have one major cost: they reduce the total number of available unique backup copies. As long as nothing goes wrong with your media, then this will never cause you a problem. However, if one of the full backups suffers damage, then that invalidates all dependent partial backups. You must balance the number of full backups that you take against the amount of time and bandwidth necessary to capture them.

As one minimizing strategy, target your full backup operations to occur during your organization’s quietest periods. If you do not operate 24 hours per day, that might allow for nightly full backups. If you have low volume weekends, you might take full backups on Saturdays or Sundays. You can intersperse full backups on holidays.

Coalescing into a Disaster Recovery Plan

As you design your disaster recovery plan, review the sections in this article as necessary. Remember that all operations require time, equipment, and personnel. Faster backup and restore operations always require a trade-off of expense and/or resilience. Modest lengthening of allowable RTOs and RPOs can result in major cost and effort savings. Make certain that the key members of your organization understand how all of these numbers will impact them and their operations during an outage.

If you need some help defining RTO and RPO in your organization, let me know in the comments section below and I will help you out!


Author: Eric Siron

Windows IIS server hardening checklist

Default configurations for most OSes are not designed with security as the primary focus. Rather, they concentrate on ease of setup, use and communications. Therefore, web servers running default configurations are obvious targets for automated attacks and can be quickly compromised.

Device hardening is the process of enhancing web server security through a variety of measures to minimize its attack surface and eliminate as many security risks as possible in order to achieve a much more secure OS environment.

Because web servers are constantly attached to the internet and often act as gateways to an organization’s critical data and services, it is essential to ensure they are hardened before being put into production.

Consult this server hardening checklist to ensure server hardening policies are correctly implemented for your organization’s Windows Internet Information Services (IIS) server.

General

  • Never connect an IIS server to the internet until it is fully hardened.
  • Place the server in a physically secure location.
  • Do not install the IIS server on a domain controller.
  • Do not install a printer.
  • Use two network interfaces in the server: one for admin and one for the network.
  • Install service packs, patches and hot fixes.
  • Run Microsoft Security Compliance Toolkit.
  • Run IIS Lockdown on the server.
  • Install and configure URLScan.
  • Secure remote administration of the server, and configure for encryption, low session timeouts and account lockouts.
  • Disable unnecessary Windows services.
  • Ensure services are running with least-privileged accounts.
  • Disable FTP, Simple Mail Transfer Protocol and Network News Transfer Protocol services if they are not required.
  • Disable Telnet service.
  • Disable ASP.NET state service if not used by your applications.
  • Disable Web Distributed Authoring and Versioning if not used by the application, or secure it if it is required.
  • Do not install Microsoft Data Access Components (MDAC) unless specifically needed.
  • Do not install the HTML version of Internet Services Manager.
  • Do not install Microsoft Index Server unless required.
  • Do not install Microsoft FrontPage Server Extensions (FPSE) unless required.
  • Harden the TCP/IP stack.
  • Disable NetBIOS and Server Message Block — closing ports 137, 138, 139 and 445.
  • Reconfigure recycle bin and page file system data policies.
  • Secure CMOS (complementary metal-oxide semiconductor) settings.
  • Secure physical media — CD-ROM drive and so on.

Accounts

  • Remove unused accounts from the server.
  • Disable Windows Guest account.
  • Rename Administrator account, and set a strong password.
  • Disable IUSR_Machine account if it is not used by the application.
  • Create a custom least-privileged anonymous account if applications require anonymous access.
  • Do not give the anonymous account write access to web content directories or allow it to execute command-line tools.
  • If you host multiple web applications, configure a separate anonymous user account for each one.
  • Configure ASP.NET process account for least privilege. This only applies if you are not using the default ASP.NET account, which is a least-privileged account.
  • Enforce strong account and password policies for the server.
  • Enforce two-factor authentication where possible.
  • Restrict remote logons. (The “access this computer from the network” user right is removed from the Everyone group.)
  • Do not share accounts among administrators.
  • Disable null sessions (anonymous logons).
  • Require approval for account delegation.
  • Do not allow users and administrators to share accounts.
  • Do not create more than two accounts in the administrator group.
  • Require administrators to log on locally, or secure the remote administration system.

Files and directories

  • Use multiple disks or partition volumes, and do not install the web server home directory on the same volume as the OS folders.
  • Contain files and directories on NT file system (NTFS) volumes.
  • Put website content on a nonsystem NTFS volume.
  • Create a new site, and disable the default site.
  • Put log files on a nonsystem NTFS volume but not on the same volume where the website content resides.
  • Restrict the Everyone group — no access to \WINNT\system32 or web directories.
  • Ensure website root directory has deny write access control entry (ACE) for anonymous internet accounts.
  • Ensure content directories have deny write ACE for anonymous internet accounts.
  • Remove resource kit tools, utilities and SDKs.
  • Remove any sample applications or code.
  • Remove IP address in header for Content-Location.

Shares

  • Remove all unnecessary shares, including default administration shares.
  • Restrict access to required shares — the Everyone group does not have access.
  • Remove administrative shares — C$ and Admin$ — if they are not required. (Microsoft System Center Operations Manager — formerly Microsoft Systems Management Server and Microsoft Operations Manager — requires these shares.)

Ports

  • Restrict internet-facing interfaces to port 443 (SSL).
  • Run IIS Lockdown Wizard on the server.

Registry

  • Restrict remote registry access.
  • Secure the local Security Account Manager (SAM) database by implementing the NoLMHash Policy.

Auditing and logging

  • Audit failed logon attempts.
  • Relocate and secure IIS log files.
  • Configure log files with an appropriate file size depending on the application security requirement.
  • Regularly archive and analyze log files.
  • Audit access to the MetaBase.xml and MBSchema.xml files.
  • Configure IIS for World Wide Web Consortium extended log file format auditing.
  • Consider using SQL Server to analyze web logs.

Sites and virtual directories

  • Put websites on a nonsystem partition.
  • Disable Parent Paths setting.
  • Remove any unnecessary virtual directories.
  • Remove or secure MDAC Remote Data Services virtual directory.
  • Do not grant included directories read web permission.
  • Restrict write and execute web permissions for anonymous accounts in virtual directories.
  • Ensure there is script source access only on folders that support content authoring.
  • Ensure there is write access only on folders that support content authoring and these folders are configured for authentication and SSL encryption.
  • Remove FPSE if not used. If FPSE are used, update and restrict access to them.
  • Remove the IIS Internet Printing virtual directory.

Script mappings

  • Map extensions not used by the application to 404.dll — .idq, .htw, .ida, .shtml, .shtm, .stm, .idc, .htr, .printer.
  • Map unnecessary ASP.NET file type extensions to HttpForbiddenHandler in Machine.config.

ISAPI filters

  • Remove unnecessary or unused ISAPI filters from the server.

IIS Metabase

  • Restrict access to the metabase by using NTFS permissions (%systemroot%\system32\inetsrv\metabase.bin).
  • Restrict IIS banner information (disable IP address in content location).

Server certificates

  • Ensure certificate date ranges are valid (see the sketch after this list).
  • Only use certificates for their intended purpose. For example, the server certificate is not used for email.
  • Ensure the certificate’s public key is valid, all the way to a trusted root authority.
  • Confirm that the certificate has not been revoked.
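
As a sketch of how the date-range item above might be automated, the Python snippet below connects to a server with full chain validation and reports the certificate's validity window. The host name is a placeholder, and the check does not cover revocation, which still needs to be verified separately.

    import socket
    import ssl
    from datetime import datetime, timezone

    def check_certificate(host: str, port: int = 443) -> None:
        """Connect with chain validation and report the certificate's date range."""
        context = ssl.create_default_context()  # validates the chain against trusted roots
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        not_before = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notBefore"]), timezone.utc)
        not_after = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), timezone.utc)
        now = datetime.now(timezone.utc)
        print(f"{host}: valid {not_before:%Y-%m-%d} to {not_after:%Y-%m-%d}")
        print("Currently within the valid date range:", not_before <= now <= not_after)

    check_certificate("example.com")  # placeholder host; replace with your own server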

Machine.config

  • Map protected resources to HttpForbiddenHandler.
  • Remove unused HttpModules.
  • Disable tracing: <trace enable="false"/>.
  • Turn off debug compiles: <compilation debug="false" explicit="true" defaultLanguage="vb">.



Forus Health uses AI to help eradicate preventable blindness

Big problems, shared solutions

Tackling global challenges has been the focus of many health data consortiums that Microsoft is enabling. The Microsoft Intelligent Network for Eyecare (MINE) – the initiative that Chandrasekhar read about – is now part of the Microsoft AI Network for Healthcare, which also includes consortiums focused on cardiology and pathology.

For all three, Microsoft’s aim is to play a supporting role to help doctors and researchers find ways to improve health care using AI and machine learning.

“The health care providers are the experts,” said Prashant Gupta, Program Director in Azure Global Engineering. “We are the enabler. We are empowering these health care consortiums to build new things that will help with the last mile.”

In the Forus Health project, that “last mile” started by ensuring image quality. When members of the consortium began doing research on what was needed in the eyecare space, Forus Health was already taking the 3nethra classic to villages to scan hundreds of villagers in a day. But because the images were being captured by minimally trained technicians in areas open to sunlight, close to 20% of the images were not of high enough quality to be used for diagnostic purposes.

“If you have bad images, the whole process is crude and wasteful,” Gupta said. “So we realized that before we start to understand disease markers, we have to solve the image quality problem.”

Now, an image quality algorithm immediately alerts the technician when an image needs to be retaken.

The same thought process applies to the cardiology and pathology consortiums. The goal is to see what problems exist, then find ways to use technology to help solve them.

“Once you have that larger shared goal, when you have partners coming together, it’s not just about your own efficiency and goals; it’s more about social impact,” Gupta said.

And the highest level of social impact comes through collaboration, both within the consortiums themselves and when working with organizations such as Forus Health who take that technology out into the world.

Chandrasekhar said he is eager to see what comes next.

“Even though it’s early, the impact in the next five to 10 years can be phenomenal,” he said. “I appreciated that we were seen as an equal partner by Microsoft, not just a small company. It gave us a lot of satisfaction that we are respected for what we are doing.”

Top image: Forus Health’s 3nethra classic is an eye-scanning device that can be attached to the back of a moped and transported to remote locations. Photo by Microsoft. 

Leah Culler edits Microsoft’s AI for Business and Technology blog.

Author: Microsoft News Center

Microsoft closes IE zero-day on November Patch Tuesday

Administrators will need to focus on deploying fixes for an Internet Explorer zero-day and a Microsoft Excel bug as part of the November Patch Tuesday security updates.

Microsoft issued corrections for 75 vulnerabilities, 14 rated critical, in this month’s releases, which also delivered fixes for Windows operating systems, Microsoft Office and Office 365 applications, Edge browser, Exchange Server, ChakraCore, Secure Boot, Visual Studio and Azure Stack.

In addition to these November Patch Tuesday updates, administrators should also look at the Google Chrome browser to fix a zero-day (CVE-2019-13720) reported by Kaspersky Labs researchers. Google corrected the flaw in build 78.0.3904.87 released on Oct. 31 for Windows, Mac and Linux systems.

Microsoft plugs Internet Explorer zero-day

The Internet Explorer zero-day (CVE-2019-1429), rated critical for Windows client systems and moderate for the server OS, covers the range of browsers from Internet Explorer 9 to 11. The flaw is a memory corruption vulnerability that could let an attacker execute code remotely on a system in the context of the current user. If that user is an administrator, then the attacker would gain full control of the system.

On a system run by a user with lower privileges, the attacker would need to do additional work through another exploit to elevate their privilege. Organizations that follow least privilege will be less susceptible to the exploit until administrators can roll out the update to Windows systems. Exposure to the zero-day can occur in several scenarios, from visiting a malicious website to opening an application or Microsoft Office document that contains the exploit.

“[There are] a few different ways to exploit [the IE zero-day], such as going to a site that allows user-contributed content like ads that can be injected with this type of malicious content to serve up the attack,” said Chris Goettl, director of product management and security at Ivanti, a security and IT management vendor based in South Jordan, Utah.

Chris Goettl, director of product management and security, Ivanti

Organizations can take nontechnical measures, such as implementing training that instructs users on how to avoid suspicious emails and websites, but the best way to prevent exploitation is to roll out the security update as quickly as possible because the vulnerability is under active attack, Goettl said.

Microsoft resolved a security feature bypass in Microsoft Excel 2016/2019 for macOS systems (CVE-2019-1457) rated important that had been publicly disclosed. The security update corrects a bug that did not enforce the macro settings for Excel documents. A user who opened a malicious Excel worksheet would trigger the exploit when it runs a macro. Microsoft’s advisory stipulated the preview pane is not an attack vector for this vulnerability.

Other security updates worth noting for November Patch Tuesday include:

  • A critical servicing update to ChakraCore to correct three memory corruption bugs (CVE-2019-1426, CVE-2019-1427 and CVE-2019-1428) that affect the Microsoft Edge browser in client and server operating systems. The remote code execution vulnerability could let an attacker run arbitrary code in the context of the current user to obtain the same user rights.
  • A remote code execution vulnerability in Exchange Server 2013/2016/2019 (CVE-2019-1373) that would let an attacker run arbitrary code. The exploit requires a user to run a PowerShell cmdlet. The update corrects how Exchange serializes its metadata.
  • A critical remote code execution vulnerability (CVE-2019-1419) in all supported Windows versions related to OpenType font parsing in the Windows Adobe Type Manager Library. An attacker could exploit the bug either by having a user open a malicious document or go to a website embedded with specially crafted OpenType fonts.
  • Microsoft resolved nine vulnerabilities affecting the Hyper-V virtualization platform. CVE-2019-0719, CVE-2019-0721, CVE-2019-1389, CVE-2019-1397 and CVE-2019-1398 relate to critical remote code execution bugs. CVE-2019-0712, CVE-2019-1309, CVE-2019-1310 and CVE-2019-1399 are denial-of-service flaws rated important.

Microsoft shares information on Trusted Platform Module bug


Microsoft also issued an advisory (ADV190024) for a vulnerability (CVE-2019-16863) in the Trusted Platform Module (TPM) firmware. The company indicated there is no patch because the flaw is not in the Windows OS or a Microsoft application, but rather in certain TPM chipsets. Microsoft said users should contact their TPM manufacturer for further information.
TPM chips stop unauthorized modifications to hardware and use cryptographic keys to detect tampering in firmware and the operating system.

“Other software or services you are running might use this algorithm. Therefore, if your system is affected and requires the installation of TPM firmware updates, you might need to reenroll in security services you are running to remediate those affected services,” the advisory said.

The flaw affects TPM firmware based on the Trusted Computing Guidelines specification family 2.0, according to Microsoft.

Microsoft releases more servicing stack updates

For the third month in a row, Microsoft released updates for the servicing stack for Windows client and server operating systems. Microsoft does not typically give a clear deadline for when a servicing stack update needs to be applied, but it has given as little as two months in some instances, Goettl said.

Servicing stack updates are not part of the cumulative updates for Windows but rather are installed separately.

Researchers say first BlueKeep exploit attempts underway

In security news beyond the November Patch Tuesday security updates, the first reports of the BlueKeep exploit targeting users began at the end of October when security researcher Kevin Beaumont spotted hacking attempts using the RDP flaw on his honeypots and reported the findings on his blog.

On May Patch Tuesday, Microsoft corrected the critical remote code execution flaw (CVE-2019-0708) dubbed BlueKeep that affects Windows 7 and Windows Server 2008/2008R2 systems. Due to the “wormable” nature of the vulnerability, many in IT felt BlueKeep might surpass the impact of the WannaCry outbreak. At one point there were more than a million public IPs running RDP that were vulnerable to a BlueKeep attack, which should serve as a wake-up call for IT to tighten up lax RDP practices, Goettl said.

“People should just be a little bit more intelligent about how they’re using RDP. You are opening a gateway into your network,” Goettl said. “There are people who have public-facing RDP that’s not behind a VPN, doesn’t require authentication. There are about four or five things people can do to better secure RDP services, especially when they’re exposing it to public IPs, but they’re just not doing it.”


SwiftStack 7 storage upgrade targets AI, machine learning use cases

SwiftStack turned its focus to artificial intelligence, machine learning and big data analytics with a major update to its object- and file-based storage and data management software.

The San Francisco software vendor’s roots lie in the storage, backup and archive of massive amounts of unstructured data on commodity servers running a commercially supported version of OpenStack Swift. But SwiftStack has steadily expanded its reach over the last eight years, and its 7.0 update takes aim at the new scale-out storage and data management architecture the company claims is necessary for AI, machine learning and analytics workloads.

SwiftStack said it worked with customers to design clusters that scale linearly to handle multiple petabytes of data and support throughput of more than 100 GB per second. That allows it to handle workloads such as autonomous vehicle applications that feed data into GPU-based servers.

Marc Staimer, president of Dragon Slayer Consulting, said throughput of 100 GB per second is “really fast” for any type of storage and “incredible” for an object-based system. He said the fastest NVMe system tests at 120 GB per second, but it can scale only to about a petabyte.

“It’s not big enough, and NVMe flash is extremely costly. That doesn’t fit the AI [or machine learning] market,” Staimer said.

This is the second object storage product launched this week to claim performance not normally associated with the technology. NetApp unveiled an all-flash StorageGrid array Tuesday at its Insight user conference.

Staimer said SwiftStack’s high-throughput “parallel object system” would put the company into competition with parallel file system players such as DataDirect Networks and Panasas and products such as IBM Spectrum Scale, but at a much lower cost.

New ProxyFS Edge

SwiftStack 7 plans call for a new containerized ProxyFS Edge software component next year that gives remote applications a local file system mount for data, rather than requiring them to connect through a network file protocol such as NFS or SMB. SwiftStack spent about 18 months creating a new API and software stack to extend its ProxyFS to the edge.

Founder and chief product officer Joe Arnold said SwiftStack wanted to utilize the scale-out nature of its storage back end and enable a high number of concurrent connections to go in and out of the system to send data. ProxyFS Edge will allow each cluster node to be relatively stateless and cache data at the edge to minimize latency and improve performance.

SwiftStack 7 will also add 1space File Connector software in November to enable customers that build applications using the S3 or OpenStack Swift object API to access data in their existing file systems. The new File Connector is an extension to the 1space technology that SwiftStack introduced in 2018 to ease data access, migration and searches across public and private clouds. Customers will be able to apply 1space policies to file data to move and protect it.
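For context, applications written against the S3 API interact with such storage the same way they would with any S3-compatible endpoint. The snippet below is a minimal sketch of that pattern using boto3; the endpoint URL, bucket name, credentials and object keys are placeholders, not SwiftStack specifics:

```python
# Minimal sketch: an application written against the S3 API reading data
# that a file connector could surface from an existing file system.
# Endpoint, bucket, credentials and keys are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List objects that the connector maps from file system paths.
response = s3.list_objects_v2(Bucket="project-data", Prefix="renders/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Fetch one object the same way any S3 client would.
body = s3.get_object(Bucket="project-data", Key="renders/frame-0001.exr")["Body"].read()
```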

Arnold said the 1space File Connector could be especially helpful for media companies and customers building software-as-a-service applications that are transitioning from NAS systems to object-based storage.

“Most sources of data produce files today and the ability to store files in object storage, with its greater scalability and cost value, makes the [product] more valuable,” said Randy Kerns, a senior strategist and analyst at Evaluator Group.

Kerns added that SwiftStack’s focus on the developing AI area is a good move. “They have been associated with OpenStack, and that is not perceived to be a positive and colors its use in larger enterprise markets,” he said.

AI architecture

A new SwiftStack AI architecture white paper offers guidance to customers building out systems that use popular AI, machine learning and deep learning frameworks, GPU servers, 100 Gigabit Ethernet networking, and SwiftStack storage software.

“They’ve had a fair amount of success partnering with Nvidia on a lot of the machine learning projects, and their software has always been pretty good at performance — almost like a best-kept secret — especially at scale, with parallel I/O,” said George Crump, president and founder of Storage Switzerland. “The ability to ratchet performance up another level and get the 100 GBs of bandwidth at scale fits perfectly into the machine learning model where you’ve got a lot of nodes and you’re trying to drive a lot of data to the GPUs.”

SwiftStack noted distinct differences between the architectural approaches that customers take with archive use cases versus newer AI or machine learning workloads. An archive customer might use 4U or 5U servers, each equipped with 60 to 90 drives, and 10 Gigabit Ethernet networking. By contrast, one machine learning client clustered a larger number of lower horsepower 1U servers, each with fewer drives and a 100 Gigabit Ethernet network interface card, for high bandwidth, he said.

An optional new SwiftStack Professional Remote Operations (PRO) paid service is now available to help customers monitor and manage SwiftStack production clusters. SwiftStack PRO combines software and professional services.

How to keep VM sprawl in check

During the deployment of virtual environments, the focus is on the design and setup. Rarely are the environments revisited to check if improvements are possible.

Virtualization brought many benefits to data center operations, such as reliability and flexibility. One drawback is that it can lead to VM sprawl: the generation of ever more VMs that contend for a finite amount of resources. VMs are not free; storage and compute carry a real capital cost, and that cost gets amplified if you look to move these resources into the cloud. It’s up to the administrator to examine the infrastructure and make sure these VMs have just what they need, because the costs never go away and typically never go down.

Use Excel to dig into resource usage

One of the fundamental tools you need for this isn’t Hyper-V or some virtualization product — it’s Excel. Dashboards are nice, but there are times you need the raw data for more in-depth analysis. Nothing can provide that like Excel.

Most monitoring tools export data to CSV format. You can import this file into Excel for analysis. Shared storage is expensive, so I always like to see a report on drive space. It’s interesting to see what servers consume the most drive space, and where. If you split your servers into a C: for the OS and D: for the data, shouldn’t most of the C: drives use the same amount of space? Outside of your application install, why should the C: drives vary in space? Are admins leaving giant ISOs in the download folder or recycle bin? Or are multiple admins logging on with roaming profiles?

Whatever the reason, runaway C: drives can chew up your primary storage quickly. If it is something simple, such as ISO files that should have been removed, keep in mind that this affects your backups as well. You can buy additional storage in a pinch, and because many of us in IT run on autopilot, it’s easy not to give drive space issues a second thought.
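If your monitoring tool exports per-drive usage to CSV, a few lines of scripting can run the same outlier hunt outside of Excel. The following is a minimal sketch that assumes hypothetical column names ("server", "drive", "used_gb"); real exports will differ:

```python
# Minimal sketch of a drive space review from a monitoring CSV export.
# Column names are hypothetical; adjust them to your tool's output.
import pandas as pd

df = pd.read_csv("drive_usage_export.csv")

# Compare C: drive consumption across servers to spot outliers.
c_drives = df[df["drive"] == "C:"]
baseline = c_drives["used_gb"].median()

# Flag servers whose C: drive uses 50% more space than the median.
outliers = c_drives[c_drives["used_gb"] > baseline * 1.5]
print(outliers.sort_values("used_gb", ascending=False)[["server", "used_gb"]])
```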

Overallocation is not as easy to correct

VM sprawl is one thing, but when was the last time you compared the resources you allocated to those VMs with what they actually use? The allocation process is still a bit of a guess until things are fully up and running. Underallocation is usually noticed promptly and corrected quickly, and everything moves forward.

Do you ever check for overallocation? Do you ever go back and remove extra CPU cores or RAM? In my experience, no one ever does. If everything runs well, there’s little incentive to make changes.

Some in IT like to gamble and assume everything will run properly most of the time, but it’s less stressful to prepare for some of these unlikely events. Is it possible that a host or two will fail, or that a network issue will strike your data center? You have to be prepared for failure, and at a larger scale than you might expect. We all know things rarely fail in a way that is favorable to you. A review process could reveal places that need adjustment and let you reclaim resources from overallocated VMs to avoid trouble in the future.
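A lightweight way to start such a review is to compare allocated resources against observed peaks from a monitoring export. The sketch below assumes hypothetical column names and thresholds; adjust both to your environment:

```python
# Minimal sketch of an overallocation review. Columns "vm", "vcpu_allocated",
# "cpu_peak_pct", "ram_allocated_gb" and "ram_peak_gb" are hypothetical names
# for data most monitoring tools can export.
import pandas as pd

vms = pd.read_csv("vm_utilization_export.csv")

# Flag VMs whose peak CPU stays under 25% or whose peak RAM use is less
# than half of the allocation -- candidates for reclaiming resources.
overallocated = vms[
    (vms["cpu_peak_pct"] < 25)
    | (vms["ram_peak_gb"] < vms["ram_allocated_gb"] * 0.5)
]

print(overallocated[["vm", "vcpu_allocated", "cpu_peak_pct",
                     "ram_allocated_gb", "ram_peak_gb"]])
```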

Look closer at all aspects of VM sprawl to trim costs

Besides the resource aspect, what about the licensing cost? With more and more products now licensed by core, overallocation of resources has an instant impact on the upfront application cost, but it gets worse. It’s the annual maintenance costs that pick away at your budget and drain your resources for no gain if you cannot tighten your resource allocation.
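To see how quickly that adds up, here is a back-of-the-envelope calculation with purely hypothetical figures for per-core licensing and annual maintenance:

```python
# Hypothetical numbers purely for illustration: per-core licensing plus
# annual maintenance paid on cores a VM never needed.
license_per_core = 3000       # hypothetical one-time cost per core
maintenance_rate = 0.22       # hypothetical annual maintenance as a share of license
extra_cores = 4               # cores allocated beyond what the workload uses
years = 3

wasted = extra_cores * license_per_core * (1 + maintenance_rate * years)
print(f"Cost of {extra_cores} unneeded cores over {years} years: ${wasted:,.0f}")
```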

One other maintenance item that gets overlooked is reboots. When the majority of Windows Server deployments moved from physical hardware to virtualization, uptime typically increased. This increase in stability brought with it an inadvertent problem. Too often, busy IT shops without structured patching and reboot cycles only performed these tasks when a server went offline, which — for better or worse — created a maintenance window.

With virtualization, servers tend to run for longer stretches and surface more unusual issues. Memory leaks that might have gone unnoticed before — because a reboot reset them — can affect servers in unpredictable ways. Virtualization admins need to be on alert for behaviors that are out of the norm. If you right-size your VMs, they should have enough resources to run normally and still handle the occasional spike in demand. If your VMs start requiring more resources than usual, that could point to a leak that only a reset will clear.

Often, the process of getting systems online is rushed; it leads to VM sprawl and skips any attempt at optimization, whether that means correcting overallocations or doing simple cleanup. Skip that work and you lose out on ways to make the environment more efficient, giving up both performance and capacity. The advice all makes sense; the hard part is following through and actually doing it.

Oracle Cloud Infrastructure updates home in on security

SAN FRANCISCO — Oracle hopes a focus on advanced security can help its market-lagging IaaS gain ground against the likes of AWS, Microsoft and Google.

A new feature called Maximum Security Zones lets customers denote enclaves within their Oracle Cloud Infrastructure (OCI) environments that have all security measures turned on by default. Resources within the zones are limited to configurations that are known to be secure. The system will also prevent alterations to configurations and provide continuous monitoring and defenses against anomalies, Oracle said on the opening day of its OpenWorld conference.

Through Maximum Security Zones, customers “will be better protected from the consequences of misconfigurations than they are in other cloud environments today,” Oracle said in an obvious allusion to recent data breaches, such as the Capital One-AWS hack, which have been blamed on misconfigured systems that gave intruders a way in.

“Ultimately, our goal is to deliver to you a fully autonomous cloud,” said Oracle executive chairman and CTO Larry Ellison, during a keynote. 

“If you spend the night drinking and get into your Ford F-150 and crash it, that’s not Ford’s problem,” he said. “If you get into an autonomous Tesla, it should get you home safely.”

Oracle wants to differentiate itself and OCI from AWS, which consistently promotes a shared responsibility model for security between itself and customers. “We’re trying to leapfrog that construct,” said Vinay Kumar, vice president of product management for Oracle Cloud Infrastructure.

“The cloud has always been about, you have to bring your own expertise and architecture to get this right,” said Leo Leung, senior director of products and strategy at OCI. “Think about this as a best-practice deployment automatically. … We’re going to turn all the security on and let the customer decide what is ultimately right for them.”

Oracle’s Autonomous Database, which is expected to be a big focal point at this year’s OpenWorld, will benefit from a new service called Oracle Data Safe. The service provides a set of controls for securing the database beyond built-in features such as always-on encryption, and it will be included in the cost of Oracle Database Cloud services, according to a statement.

Finally, Oracle announced Cloud Guard, which it says can spot threats and misconfigurations and “hunt down and kill” them automatically. It wasn’t immediately clear whether Cloud Guard is a homegrown Oracle product or made by a third-party vendor. Security vendor Check Point offers an IaaS security product called CloudGuard for use with OCI.

Starting in 2017, Oracle began to talk up new autonomous management and security features for its database, and the OpenWorld announcements repeat that mantra, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. “Security is too important to rely solely on human effort,” he said.

OCI expansions target disaster recovery, compliance

Oracle also said it will broadly expand OCI’s global cloud footprint, with the launch of 20 new regions by the end of next year. The rollout will bring Oracle’s region count to 36, spread across North America, Europe, South America, the Middle East, Asia-Pacific, India and Australia.

This expansion will add multiple regions in certain geographies, allowing for localized disaster recovery scenarios as well as improved regulatory compliance around data location. Oracle plans to add multi-region support in every country where it offers OCI and claimed this approach is superior to the practice of offering multiple availability zones in a single region.

Oracle’s recently announced cloud interoperability partnership with Microsoft is also getting a boost. The interconnect that ties together OCI and Azure, now available in Virginia and London, will also be offered in the Western U.S., Asia and Europe over the next nine months, according to a statement. In most cases, Oracle is leasing data center space from providers such as Equinix, according to Kumar.

SaaS vendors are another key customer target for Oracle with OCI. To that end, it announced new integrated third-party billing capabilities for the OCI software marketplace released earlier this year. Oracle also cited SaaS providers who are taking advantage of Oracle Cloud Infrastructure for their own underlying infrastructure, including McAfee and Cisco.

There’s something of value for enterprise customers in OCI attracting more independent software vendors, an area where Oracle also lags behind the likes of AWS, Microsoft and Google, according to Mueller.

“In contrast to enterprises, they bring a lot of workloads, often to be transferred from on-premises or even other clouds to their preferred vendor,” he said. “For the IaaS vendor, that means a lot of scale, in a market that lives by economies of scale: More workloads means lower prices.”
