Tag Archives: consultant

OpenText-Salesforce integration gaining momentum

Australia is home to many deadly animals, said Sean Potter, senior consultant of group insurance at MLC Australia, and he ought to know. He traffics in life insurance data, helping connect the company’s enterprise content with agents, underwriters, adjusters and other employees via a Salesforce-OpenText integration.

Dangerous creatures native to Australia include seven of the world’s 10 most venomous snakes, Potter said, as well as crocodiles and sharks, which also inflict human casualties. But the deadliest animal is the wombat, a typically mellow, adorable, furry, herbivorous marsupial.

“Basically, it’s a block of concrete on legs,” Potter said. “It has this really unnerving habit of walking in the middle of country roads. It’s a nocturnal beast … and cars crash into them.”

It’s actuarial data like this — as well as particulars on MLC’s 1.5 million customers — for which Potter’s team and consultants from Infosys had to find a new home when parent company National Australia Bank spun MLC off as an independent company and eventually sold 80% of its life insurance business to the Japanese insurer Nippon Life.

MLC’s reboot with OpenText-Salesforce integration

The company started its IT reboot, which comprises 27 new technology platforms, in mid-2017.

For claims data and customer policy data, the IT team chose an OpenText-Salesforce integration built on OpenText Content Suite, on top of which sits OpenText Extended ECM Platform 16.2 middleware. It in turn connects to Salesforce via OpenText Extended ECM for Salesforce, a tool available on the Salesforce AppExchange.


“Basically the [OpenText] Extended ECM platform is the layer above the content suite that enables it to integrate with your leading applications,” said Ihsan Hall, founder of the consultancy Qellus. “So if [those] applications are SAP or Salesforce, you’re going to be using Extended ECM platform as your tool set to do those integrations.”

This piece of the overall MLC enterprise IT build was in its final six weeks of testing in mid-July, Potter said during a presentation at OpenText Enterprise World in Toronto.

“We know it works, and we’ve got our first customers coming on the platform,” he said.

Picking a mix of tools to connect several enterprise IT layers through to the Salesforce end-user interface was a daunting task, Potter said. The whole project — starting from scratch for a 5,000-employee company — “is a massive undertaking for us and, I’d imagine, for any organization,” he said.

Wombat crossing sign.
Furry, docile wombats, ironically, are the source of many life insurance claims Down Under, due to the car crashes they cause — according to MLC Australia, which recently rebooted its IT stack with an OpenText-Salesforce integration.

Data management: The biggest challenge

One particularly thorny content management issue was ID provisioning and data access controls, which MLC needed to set up and enforce. One example: Australia has stringent health data privacy regulations that dictate that agents aren’t privy to the same information that underwriters and adjusters might be.

The IT group needed to pick tools that were simple, flexible, scalable and also maintained and documented compliance for customers’ financial and medical data.

While old, familiar applications were popular choices among employees for the new enterprise IT tech stack, the company also wanted some level of cloud integration. So the team built a hybrid mix of cloud and on-premises IT tools, with the end goal of enabling customers to file claims quickly with a phone call.

After surveying its options, MLC decided to use an OpenText-Salesforce integration built on bedrock OpenText content management, connecting customer data to the Salesforce front end via OpenText Extended ECM for Salesforce, a connector available on AppExchange.

Separately, MLC automates claims processing via ClaimVantage, another Salesforce AppExchange tool that taps into the customer data set and was already familiar to MLC employees from prior use.

Curb stress from Exchange Server updates with these pointers

In my experience as a consultant, I find that few organizations have a reliable method to execute Exchange Server updates.

This tip outlines the proper procedures for patching Exchange that can prevent some of the upheaval associated with a disruption on the messaging platform.

How often should I patch Exchange?

In a perfect world, administrators would apply patches as soon as Microsoft releases them. This doesn’t happen for a number of reasons.

Microsoft has released patches and updates for both Exchange and Windows Server that have caused trouble on those systems. Many IT departments have long memories and will let those bad experiences keep them from staying current with Exchange Server updates. This is detrimental to the health of Exchange and should be avoided. With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Another wrinkle in the update process is that Microsoft releases Cumulative Updates (CUs) for Exchange Server on a quarterly schedule. CUs are updates that add functionality enhancements to the application.


Microsoft plans to release one CU for Exchange 2013 and 2016 each quarter, but they do not provide a set release date. The CUs may be released on the first day of one quarter, and then on the last day of the next.

Rollup Updates (RUs) for Exchange 2010 are also released quarterly. An RU is a package that contains multiple security fixes, while a CU is a complete server build.

For Exchange 2013 and 2016, Microsoft supports the current and previous CU. When admins call Microsoft for a support case, the company will ask them to update Exchange Server to at least the N-1 CU — where N is the latest CU, N-1 refers to the previous CU — before they begin work on the issue. An organization that prefers to stay on older CUs limits its support options.

Because CUs are the full build of Exchange 2013/2016, administrators can deploy a new Exchange Server from the most recent CU. For existing servers, updating with a newer CU for that version should work without issue.

Microsoft only tests a new CU deployment with the last two CUs, but I have never had an issue with an upgrade with multiple missed CUs. The only problems I have seen when a large number of CUs were skipped had to do with the prerequisites for Exchange, not Exchange itself.

Microsoft releases Windows Server patches on the second Tuesday of every month. As many administrators know, some of these updates can affect how Exchange operates. There is no set schedule for other updates, such as .NET. I recommend a quarterly update schedule for Exchange.

How can I curb issues from Exchange Server updates?

Just as every IT department is different, so is every Exchange deployment. There is no single update process that works for every organization, but these guidelines can reduce problems with Exchange Server patching. Even if a company has an established patching process, it’s worth reviewing that method if it’s missing any of the advice outlined below.

  • Back up Exchange servers before applying patches. This might be common sense for most administrators, but I have found it is often overlooked. If a patch causes a critical failure, a recent backup is the key to the recovery effort. Some might argue that there are Exchange configurations — such as Exchange Preferred Architecture — that do not require this, but a backup provides some reassurance if a patch breaks the system.
  • Measure the performance baseline before an update. How would you know if the CPU cycles on the Exchange Server are too high after an update if this metric hasn’t been tracked? The Managed Availability feature records performance data by default on Exchange 2013 and 2016 servers, but Exchange administrators should review server performance regularly to establish an understanding of normal server behavior.
  • Test patches in a lab that resembles production. When a new Exchange CU arrives, it has been through extensive testing. Microsoft deploys updates to Office 365 long before they are publicly available. After that, Microsoft gives the CUs to its MVP community and select organizations in its testing programs. This vetting process helps catch the vast majority of bugs before CUs go to the public, but some will slip through. To be safe, test patches in a lab that closely mirrors the production environment, with the same servers, firmware and network configuration.
  • Put Exchange Server into maintenance mode before patching. If the Exchange deployment consists of redundant servers, then put them in maintenance mode before the update process. Maintenance mode is a feature of Managed Availability that turns off monitoring on those servers during the patching window. There are a number of PowerShell scripts in the TechNet Gallery that put servers into maintenance mode, which helps administrators streamline the application of Exchange Server updates.

Prevent Exchange Server virtualization deployment woes

There are other measures administrators should take to keep the email flowing.

In my work as a consultant, I find many customers get a lot of incorrect information about virtualizing Exchange. These organizations often deploy Exchange on virtual hardware in ways that Microsoft does not support or recommend, which results in major performance issues. This tip will explain the proper way to deploy Exchange Server on virtual hardware and why it’s better to avoid cutting-edge hypervisor features.

When is Exchange Server virtualization the right choice?

The decision to virtualize a new Exchange deployment would be easy if the only concerns were technical. This choice gets difficult when politics enter the equation.

Email is one of the more visible services provided by an IT department. Apart from accounting systems, companies rely on email more than any other information technology. Problems with email availability can affect budgets, jobs — even careers.

Some organizations spend a sizable portion of the IT department budget on the storage systems that run under the virtual platform. It may be a political necessity to use those expensive resources for high-visibility services such as messaging even when it is less expensive and overall a better technical answer to deploy Exchange on dedicated hardware. While I believe that the best Exchange deployment is almost always done on physical hardware — in accordance with the Preferred Architecture guidelines published by the Exchange engineering team — a customer’s requirements might steer the deployment to virtualized infrastructure.

How do I size my virtual Exchange servers?

Microsoft recommends sizing virtual Exchange servers the same way as physical Exchange servers. My recommendations for this procedure are:

  • Use the Exchange Server Role Requirements Calculator as if the intent was to build physical servers.
  • Take the results, and create virtual servers that are as close as possible to the results from the calculator.
  • Turn off any advanced virtualization features in the hypervisor.

Why should I adjust the hypervisor settings?

Some hypervisor vendors say that the X or Y feature in their product will help the performance or stability of virtualized Exchange. But keep in mind these companies want to sell a product. Some of those add-on offerings are beneficial, some are not. I have seen some of these vaunted features cause terrible problems in Exchange. In my experience, most stable Exchange Server deployments do not require any fancy virtualization features.

What virtualization features does Microsoft support?

Microsoft’s support statement for virtualization of Exchange 2016 is lengthy, but the essence is to make the Exchange VMs as close to physical servers as possible.

Microsoft does not support features that move a VM from one host to another unless the failover event results in a cold boot of the Exchange Server. The company also does not support features that allow resource sharing among multiple VMs running virtualized Exchange.

Where are the difficulties with Exchange Server virtualization?

The biggest problem with deploying Exchange on virtual servers is that it’s often impossible to follow proper deployment procedures, specifically validating the storage IOPS of a new Exchange Server with Jetstress. This tool checks that the storage hardware delivers enough IOPS to Exchange for a smooth experience.

Generally, a virtual host will use shared storage for the VMs it hosts. Running Jetstress on a new Exchange VM on that storage setup will cause an outage for other servers and applications. Due to this shared arrangement, it is difficult to gauge whether the storage equipment for a virtualized Exchange Server will provide sufficient performance.  

While it’s an acceptable practice to run Exchange Server on virtual hardware, I find it often costs more money and performs worse than a physical deployment. That said, there are often circumstances outside of the control of an Exchange administrator that require the use of virtualization.

To avoid trouble, try not to veer too far from Microsoft’s guidelines. The farther you stray from the company’s recommendations, the more likely you are to have problems.

Why your KPI methodology should use ‘right brain’ words

CAMBRIDGE, Mass. — Despite the appeal of trendy technologies like artificial intelligence, one consultant is encouraging CIOs to go back to business intelligence basics and rethink their key performance indicator methodology.

Mico Yuk, CEO and co-founder of the consultancy BI Brainz Group in Atlanta, said companies are still struggling to make key performance indicators actionable — and not for lack of trying. It turns out the real stumbling block isn’t data, it’s language.

“KPIs are a challenge because of people, not because of measurements. A lot of problems that exist with KPIs are in the way that people interpret them,” she said in an interview at the recent Real Business Intelligence Conference.

Yuk sat down with SearchCIO and talked about how her key performance indicator (KPI) methodology, known as WHW, breaks KPIs into simple components and why her research drove her to consider the psychological impact of KPI names. This Q&A has been edited for brevity and clarity.

You recommend teams should have at least three but no more than five KPIs. What’s the science behind that advice?

Mico Yuk, CEO and co-founder of BI Brainz, on KPIs and KPI methodology.

Mico Yuk: There’s a book from Franklin Covey called The 4 Disciplines of Execution. It’s a fantastic book. In it, he talks about not having more than three WIGs — wildly important goals — per team. He did a study and proved that over a long period of time — a year, three months, six months, the typical KPI timeframe for tracking, monitoring and controlling — human beings can only take action and be effective on three goals. Other research firms have said five to eight KPIs are important. Today, I tell people that most report tracking of KPIs is done on mobile devices. It’s been proven that human beings get over 30,000 ads per day, and half of those exist on their phones. You are constantly competing for people’s attention. With shorter attention spans, you have to be precise, you have to be exact, and when you have your user’s attention, you have to make sure they have exactly what they need to take action or you’ll lose them.

The KPI methodology you subscribe to is called WHW. What is WHW?

Yuk: WHW stands for What, How and When. We took Peter Drucker’s SMART concept. Everybody knows him. He’s the ‘If you can’t see it, then you can’t measure it’ guy. His methodology is called SMART, which stands for specific, measurable, accurate, results-oriented, and time-oriented. He says you have to have all five elements in your KPI in order for it to be useful. We said we’re going to look at what Drucker was recommending, extract those elements and turn them into a sentence structure. To do this you take any KPI and ask yourself three questions: What do you need to do with it? That’s the W. By how much? That’s the H. By when? That’s the W. You use those answers to rename your KPI so that it reads like this: the action, the KPI name, the how much, and the when. That is SMART hacked.
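In code terms, the WHW rename amounts to filling a fixed sentence template with answers to the three questions. A minimal sketch in Python — the function and parameter names here are my own illustration, not part of Yuk’s materials:

```python
def whw_rename(action, kpi_name, how_much, by_when):
    """Build a WHW-style KPI name: the action, the KPI name, the how much, the when."""
    return f"{action} {kpi_name} by {how_much} by {by_when}"

# The raw KPI "profit margin" becomes an actionable sentence:
print(whw_rename("Grow", "profit margin", "25%", "2017"))
# → Grow profit margin by 25% by 2017
```

The point of the template is that every renamed KPI carries its own action, target and deadline, so a dashboard label is readable without extra context.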

Why do you find WHW to be a better KPI methodology?

Yuk: It’s easier. We don’t think one KPI methodology is necessarily better than another. Using OKRs [Objectives and Key Results] is equally effective, for instance. But we do find that having just a sentence where someone can plug in words is much faster. Imagine going to a company and saying, ‘You have 20 KPIs. We’re going to transform all of them.’ Some of the methodologies require quite a bit of work to get that done. We find that when we show companies a sentence structure and they are able to just answer, answer, answer and see the transformation, it’s an ‘ah-ha’ moment for them. Not to mention there’s the consumption part of it. Now that you’re specific, it also makes it easier to break that big goal down into smaller goals for people to consume.

You’ve said it’s important to rename KPIs, but that the language you use is equally important. What led you to that conclusion?

Yuk: We are data visualization specialists, but when we started nine years ago we found that [our visualizations] were becoming garbage in, garbage out. We kept saying, ‘This looks great, but it’s not effective. Why?’ We then [looked at] what we were visualizing, and we realized that the KPIs we were visualizing were the problem — not the type of charts, not the color, not the prettiness. That led us to say, ‘We’ve got to look at these KPIs closely and figure out how to make these KPIs smarter.’ That was our shared challenge. That led us into learning more about ‘right-brain wording,’ learning about simplicity, learning about exactly what the perfect KPI looks like after we evaluated as many methodologies as we could find on the market. What we concluded is that it all starts with your KPI name.

What is “right-brain wording”?

Yuk: If you go online and you look up right brain versus left brain [wording], there are amazing diagrams. They show that your right brain controls your creativity while your left brain is more analytical. Most developers use the left side of their brains — analytics, coding, all that complex stuff. The artists of the world, the creatives who may not be able to understand complex math, they use the right part of their brain. But what you find on the creative side is that there is a cortex activity that happens when you use certain words that [are] visceral. We found that it is one thing to rename your KPIs, but it is another thing to get [the wording right] so that it resonates with people.

Let’s take profit margin as an example, and let’s say that after you use our WHW hack, the revised KPI name is ‘increase profit margin by 25% by 2017.’ If I were to ask you to visualize the word increase, you would probably see an arrow that points up. OK, it’s a visual but not one that you have an emotional connection to — it’s a left-brain, analytical word. But if I ask you to visualize a right-brain word like grow, I guarantee you’ll see a green leaf or plant in your brain. What happens in your brain is, because you’re thinking of a leaf, there’s an association that happens. Most people have a personal experience with the word grow — a memory of some kind. But they don’t have the same relationship with the word increase. Because of the association, users are more likely to remember and take action on that KPI. User adoption of KPIs and taking action is a problem. If you take the time to wordsmith or rename the KPIs so that they’re more right-brain-oriented, you can transform how your users act and react to them.

How can CIOs help make employees feel connected to top-line goals with KPIs?

Yuk: After we finish wordsmithing the KPI’s name, we focus on impact. A CIO in New York told me a long time ago, ‘One of the most important things you need to remember is that everybody has to be tuned into WIFM.’ And I asked, ‘What’s that?’ He said, ‘What’s in it for me?’

The good thing about transforming a KPI into the WHW format is that it now has the action, the KPI name, the how much and the by when, all in the name. You are now able to take that 25% [profit margin goal] and set deadlines and break it down, not just for the year, but by month, by quarter and even by day. You can break it down to the revenue streams that contribute [to the goal] and see what percentage those revenue streams contribute. That’s where you can individualize expectations and actions.
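The breakdown Yuk describes is simple arithmetic. As a rough sketch — assuming, for illustration only, an even linear split of the annual goal (real targets might be weighted by seasonality or by revenue stream):

```python
def split_goal(annual_pct, periods):
    """Evenly split an annual percentage goal across sub-periods (simple linear split)."""
    return round(annual_pct / periods, 2)

annual_goal = 25.0  # e.g. a 25% profit-margin goal for the year

print(split_goal(annual_goal, 4))   # quarterly share: 6.25
print(split_goal(annual_goal, 12))  # monthly share: 2.08
```

Those per-period numbers are what let a manager set individual deadlines and compare each person’s progress against the overall goal.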

You tend to find two things. Not only can you individualize expectations, but you can also say, now that you have that individual goal, I can show you how it ties back into the overall goal and how other people are performing compared to you. People innately want to be competitive. They want to be on top — the leaderboard syndrome.

Those two elements are keys to having impact with your KPIs. Again, it’s a bit more psychological, but KPIs aren’t working. So we dug deep into the more cognitive side to try to figure out how to make them resonate with people and the [psychological] rabbit hole goes very deep. Start with the name.