Enterprise IT news

CMS creates chief health informatics officer position

The Centers for Medicare and Medicaid Services created a chief health informatics officer position geared toward driving health IT strategy development and technology innovation for the department.

According to the job description, the chief health informatics officer (CHIO) will be charged with developing “requirements and content for health-related information technology, with an initial focus on improving innovation and interoperability.”

The chief health informatics officer position will develop a health IT and information strategy for CMS and the U.S. Department of Health and Human Services, as well as provide subject-matter expertise for health IT information management and technology innovation policy.

Applying health informatics to IT

The position also entails working with providers and vendors to determine how CMS will apply health informatics methods to IT, as well as acting as a liaison between CMS and private industry to lead innovation, according to the job description.

A candidate must have at least one year of “qualifying specialized experience,” including experience using health informatics data to examine, analyze and develop policy and program operations in healthcare programs; offering guidance on program planning to senior management for an organization; and supervising subordinate staff.

Pamela Dixon, co-founder and managing partner of healthcare executive search firm SSi-SEARCH, based in Atlanta, said a chief health informatics officer must have all the skill sets of a chief medical information officer and more. Dixon said a CHIO must be a strategic systems thinker, with the ability to innovate, a strong communicator and a “true leader.”

“The role could and should unlock the key to moving technology initiatives through healthcare dramatically faster, dramatically more effective,” Dixon said.

Finding the right balance

Eric Poon, who has served as Duke University Health System’s chief health information officer for the last three and a half years, said a successful informatics professional enables individuals within an organization to achieve quality improvement and patient safety goals with technology. Poon oversees clinical systems and analytics teams and ensures data that’s been gathered can be used to support quality initiatives and research.

One of the most significant challenges Poon said he faces is determining how to balance resources between the day-to-day and “what’s new,” along with making data accessible in a “high-quality way,” so faculty and researchers can easily access the data to support their work in quality improvement and clinical research. Being successful means creating a bridge between technology and individuals within the organization, Poon said.

“I would like them to say that we are making it possible for them to push the envelope with regards to data science and research and data exchange,” Poon said. “I also like to think we will have innovators who are coming up with new apps, new data science, machine learning algorithms that are realigning how we engage patients and how we are really becoming smart about how to use IT to move the needle in quality and safety … and patient health in a cost-effective way.”

Emerging roles important for change

Dixon said new and emerging leadership roles are important because they make organizations think about both what they need or want the individual to accomplish and what the organization itself could accomplish with the right person.

“The actual title is less important,” she said. “There are CHIOs that might just as easily carry the title chief innovation officer or chief transformation officer or chief data officer, depending on their focus. The important thing is that we encourage and foster growth, value and innovation by creating roles that are aimed at doing just that.”

The creation of a chief health informatics officer position and the push to focus on health IT within CMS are part of a larger initiative started earlier this year, after the Trump administration announced MyHealthEData, which allows patients to take control of their healthcare data and have it follow them on their healthcare journey.

Johnathan Monroe, director of the CMS media relations group, said the organization will be accepting applications for the chief health informatics officer position until July 20.

Notre Dame uses N2WS Cloud Protection Manager for backup

Coinciding with its decision to eventually close its data center and migrate most of its workloads to the public cloud, the University of Notre Dame’s IT team switched to cloud-native data protection.

Notre Dame, based in Indiana, began its push to move its business-critical applications and workloads to Amazon Web Services (AWS) in 2014. Soon after, the university chose N2WS Cloud Protection Manager to handle backup and recovery.

Now, 80% of the applications used daily by faculty members and students, as well as the data associated with those services, live in the cloud. The university protects more than 600 AWS instances, and that number is growing fast.

In a recent webinar, Notre Dame systems engineer Aaron Wright talked about the journey of moving a whopping 828 applications to the cloud, and protecting those apps and their data.  

N2WS, which was acquired by Veeam earlier this year, is a provider of cloud-native, enterprise backup and disaster recovery for AWS. The backup tool is available through the AWS Marketplace.

Wright said Notre Dame’s main impetus for migrating to the cloud was to lower costs. Moving services to the cloud would reduce the need for hardware. Wright said the goal is to eventually close the university’s on-premises primary data center.

“We basically put our website from on premises to the AWS account and transferred the data, saw how it worked, what we could do. … As we started to see the capabilities and cost savings [of the cloud], we were wondering what we could do to put not just our ‘www’ services on the cloud,” he said.

Wright said Notre Dame plans to move 90% of its applications to the cloud by the end of 2018. “The data center is going down as we speak,” he said.

As a research organization that works on projects with U.S. government agencies, Notre Dame owns sensitive data. Wright saw the need for centralized backup software to protect that data but could not find many good commercial options for protecting cloud data, and he ultimately found N2WS Cloud Protection Manager through the AWS Marketplace.

“We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers,” he said. By comparison, Wright said his team deployed Cloud Protection Manager in less than an hour.

Wright said N2WS Cloud Protection Manager has rescued Notre Dame’s data at least twice since it was installed. One incident came when Linux machines failed to boot after a patch was applied, and engineers restored data from snapshots within five minutes. Wright said his team used the snapshots to find and detach a corrupted Amazon Elastic Block Store volume, and then manually created and attached a new volume.
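
Based on Wright’s description, a rough sketch of that kind of manual recovery with the AWS SDK for Python (boto3) might look like the following. The instance, volume and device identifiers are hypothetical, and this illustrates the general AWS workflow rather than anything N2WS ships.

```python
# Illustrative sketch: replace a corrupted EBS volume with one created from
# its most recent snapshot, then reattach it. All identifiers are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"      # hypothetical instance
BAD_VOLUME_ID = "vol-0badbadbadbadbad0"  # hypothetical corrupted volume
DEVICE = "/dev/xvdf"
AZ = "us-east-1a"

# Find the newest snapshot taken of the corrupted volume.
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "volume-id", "Values": [BAD_VOLUME_ID]}],
)["Snapshots"]
latest = max(snapshots, key=lambda s: s["StartTime"])

# Detach the corrupted volume and wait until it is released.
ec2.detach_volume(VolumeId=BAD_VOLUME_ID, InstanceId=INSTANCE_ID, Force=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[BAD_VOLUME_ID])

# Create a replacement volume from the snapshot and attach it in its place.
new_vol = ec2.create_volume(SnapshotId=latest["SnapshotId"], AvailabilityZone=AZ)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device=DEVICE)
```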

In another incident, Wright said the granularity of the N2WS Cloud Protection Manager backup capabilities proved valuable.

“Back in April-May 2018, we had to do a single-file restore through Cloud Protection Manager. Normally, we would have to have taken the volume and recreated a 300-gig volume,” he said. Locating and restoring that single file so quickly allowed him to resolve the incident within five minutes.

Chief data officer role: Searching for consensus

Big data continues to be a force for change. It plays a part in the ongoing drama of corporate innovation — in some measure, giving birth to the chief data officer role. But consensus on that role is far from set.

The 2018 Big Data Executive Survey of decision-makers at more than 50 blue-chip firms found 63.4% of respondents had a chief data officer (CDO). That is a big uptick from 2012, when survey participants were asked the same question and only 12% had a CDO. But this year’s survey, which was undertaken by business management consulting firm NewVantage Partners, disclosed that the background for a successful CDO varies from organization to organization, according to Randy Bean, CEO and founder of NewVantage, based in Boston.

For many, the CDO is likely to be an external change agent. For almost as many, the CDO may be a long-trusted company hand. The best CDO background could be that of a data scientist, line executive or, for that matter, a technology executive, according to Bean.

In a Q&A, Bean delved into the chief data role as he was preparing to lead a session on the topic at the annual MIT Chief Data Officer and Information Quality Symposium in Cambridge, Mass. A takeaway: Whatever it may be called, the chief data officer role is central to many attempts to gain business advantage from key emerging technologies. 

Do we have a consensus on the chief data officer role? What have been the drivers?

Randy Bean: One principal driver in the emergence of the chief data officer role has been the growth of data.

For about a decade now, we have been into what has been characterized as the era of big data. Data continues to proliferate. But enterprises typically haven’t been organized around managing data as a business asset.

Additionally, there has been a greater threat posed to traditional incumbent organizations from agile data-driven competitors — the Amazons, the Googles, the Facebooks.

Organizations need to come to terms with how they think about data and, from an organization perspective, to try to come up with an organizational structure and decide who would be a point person for data-related initiatives. That could be the chief data officer.

Another driver for the chief data officer role, you’ve noted, was the financial crisis of 2008.

Bean: Yes, the failures of the financial markets in 2008-2009, to a significant degree, were a data issue. Organizations couldn’t trace the lineage of the various financial products and services they offered. Out of that came an acute level of regulatory pressure to understand data in the context of systemic risk.

Banks were under pressure to identify to regulators a single person who could address questions about data’s lineage and quality. As a result, banks took the lead in naming chief data officers. Now, we are into a third or fourth generation in some of these large banks in terms of how they view the mandate of that role.

Isn’t that type of regulatory driver somewhat spurred by the General Data Protection Regulation (GDPR), which recently went into effect? Also, for factors defining the CDO role, NewVantage Partners’ survey highlights concerns organizations have about being surpassed by younger, data-driven upstarts. What is going on there?

Bean: GDPR is just the latest of many previous manifestations of this. There have been the Dodd-Frank regulations, the various Basel reporting requirements and all the additional regulatory requirements that go along with classifying banks as ‘too large to fail.’

That is a defensive driver, as opposed to the offensive and innovation drivers that are behind the chief data officer role. On the offensive side, the chief data officer is about how your organization can be more data-driven, how you can change its culture and innovate. Still, as our recent survey finds, there is a defensive aspect, even there. Increasingly, organizations perceive a threat coming from all kinds of agile, data-driven competitors.

You have written that big data and AI are on a continuum. That may be worthwhile to emphasize, as so much attention turns to artificial intelligence these days.

Bean: A key point is that big data has really empowered artificial intelligence.

AI has been around for decades. One of the reasons it didn’t gain traction sooner is that, as a learning mechanism, it requires large volumes of data. In the past, data was only available in subsets or samples or in very limited quantities, and the corresponding learning on the part of the AI was slow and constrained.

Now, with the massive proliferation of data and new sources — in addition to transactional information, you also now have sensor data, locational data, pictures, images and so on — that has led to the breakthrough in AI in recent years. Big data provides the data that is needed to train the AI learning algorithms.

So, it is pretty safe to say there is no meaningful artificial intelligence without good data — without an ample supply of big data.

And it seems to some of us, on this continuum, you still need human judgment.

Bean: I am a huge believer in the human element. Data can help provide a foundation for informed decision-making, but ultimately it’s the combination of human experience, human judgment and the data. If you don’t have good data, that can hamper your ability to come to the right conclusion. Just having the data doesn’t lead you to the answer.

One thing I’d say is, just because there are massive amounts of data, it hasn’t made individuals or companies any wiser in and of itself. It’s just one element that can be useful in decision-making, but you definitely need human judgment in that equation, as well.

New Elastifile CEO intensifies startup’s cloud focus

New Elastifile CEO Erwan Menard said he plans to intensify the startup’s focus on scale-out, enterprise-grade file storage for the public cloud, as he tries to fuel the company’s growth phase.

The stronger public cloud emphasis will mean changes to the product strategy that Elastifile initially laid out when emerging from stealth in April 2017. For instance, Elastifile designed its distributed file system to run on flash storage. But, Menard said, Elastifile’s software will be available with spinning HDDs and SSDs in public clouds, although on-premises deployments will continue to require flash.

Prior to joining Elastifile, Menard was president and COO at object storage vendor Scality. He previously held the same positions at DataDirect Networks, a storage vendor that caters to high-performance computing. Menard also served as vice president and general manager of Hewlett-Packard’s communications and media solutions business unit and in various leadership roles at Alcatel-Lucent.

The newly appointed Elastifile CEO recently replaced founder Amir Aharoni, who remains with the startup as chairman. Aharoni was unable to relocate from Israel to the United States, “where we want the growth to be led from,” Menard said as part of this Q&A. Elastifile’s sales and marketing office is located in Santa Clara, Calif., and its research and development arm is in Herzliya, Israel.

What are your primary areas of focus for the next year and beyond?

Erwan Menard: We were born upon the idea that file storage is here to stay, because a number of workloads in enterprises rely on it, and that file storage should be addressed in a software-defined manner designed for flash. That was the initial DNA of the company, from a product point of view.

Now, if we look at the market, we’re observing a growing demand for enterprise-class file storage in the cloud. If you look at the data that’s going into public clouds, there’s either very cold data for archival or disaster recovery purposes, or there’s hot data in very small quantities for workloads that are compute-centric. But there is a huge piece missing, which is all the data residing on NAS in the data center. Why aren’t those data and associated workloads in the cloud yet? Because there’s no decent enterprise-grade file storage service in public clouds.

At Elastifile, we spent four years developing a modern-age, software-defined file system for flash. And we’re taking that intellectual property and focusing on adding a strong, enterprise-grade file system to Amazon and Google and Azure. It’s two clicks on Google Launcher, which is their marketplace. We automatically provision a scale-out file system. We definitely aim at doing the same thing on the other clouds if customers choose Azure and Amazon. This is going to happen in the next few months.

Elastifile has a flash requirement with on-premises deployments. Is flash a requirement in the public cloud?

Menard: We designed for flash, because silicon is taking over infrastructure. But you can effectively run it on classic disk. In Google terminology, you can run on so-called PDs, [or] persistent disks, which are groups of SSDs at Google Cloud. Or, you can run it on standard PDs, [or] standard persistent disks, which are effectively classic HDDs. We run on both.

The good thing about designing for flash is that we’re able to provide significantly better performance than other solutions out there in the cloud. For example, we are able to provide much better performance than Amazon Elastic File [System] storage. I want to think that’s because we designed for the flash era.

Does the ability to run on HDDs extend to on-premises Elastifile deployments?

Menard: No. The on-prem deployment option is to run on bare-metal SSDs.

What significant features are in the Elastifile 2.7 release?

Menard: We are updating the [Google] Launcher experience. That experience is going to be significantly simpler in the way you install. The people who are touching our products in the data center are typically storage admins. In the cloud, sometimes it’s an application developer who happens to need storage, or someone who is even less technical. And the first impression people have with the product is extremely important in their decision to adopt it or not.

Also part of the package is what we call CloudConnect. It’s a tool that allows you to migrate your data from any NAS in your data center to any cloud. When people are absolutely convinced about the benefits of running stuff in the cloud, they often struggle with moving the data to the cloud. Most of the tools on the market tend to go from one certain type of NAS to one certain type of cloud destination. We’ve done a tool to go from any to any, and that tool is part of the subscription to our product.

Can users buy CloudConnect as a separate product?

Menard: No. Our goal isn’t to become a data-mover company. Our goal is to facilitate adoption of the cloud. The Elastifile software is available as a subscription. And, as part of that, you get CloudConnect.

Can we expect more partnerships, such as Elastifile’s OEM deal with Dell EMC signed last year? Do customers want to pick their own hardware and take a do-it-yourself approach, or do they prefer to buy your storage software bundled with hardware?

Menard: I think people want to buy software only, because that unlocks the value chain and allows them to commoditize the hardware and separate software and hardware from a procurement point of view. I think there’s a market for software only in the data center — do it yourself — that is for sophisticated organizations who decided to continue developing their data center for whatever strategic or regulatory reasons.

That being said, I think the overall trend is effectively slightly different. At the whole market level, the trend is to go to the cloud. The data center is less and less an area where you want to experiment with complicated things. If anything, you want to consume very simple offerings.

So, I think those two trends coexist — sometimes in the same enterprise account. Frankly, our focus is on the cloud, because this is the next frontier. We’re much more involved in conversations around lifting and shifting stuff to the cloud.

Do people want to move everything to the cloud, or do you think the hybrid model will win out?

Menard: I’m not comfortable with the word ‘hybrid,’ because I’m not sure people are clear on what it means. If hybrid means I have a full stack — application, infrastructure — that’s delivering a certain business outcome in the data center, and I want to replicate that in the cloud, that scenario does exist.

We have a customer in common with Google, called eSilicon. They are doing chipset design. They’ve augmented the capacity of their data center on a per-project basis. They don’t size for the peak. They size for a lower load. And they run the peak activities in the cloud. They did it with us because they didn’t need to modify their application at all when running it in the cloud. That’s a bursting scenario. I run peak activities in the cloud and continue running baseline activities in the data center.

Another scenario we see happening is people who are lifting and shifting an entire workload to the cloud. And that creates a period of time where both workloads are in the data center and in the cloud — the target being to run everything in the cloud. If we want to call that hybrid, then hybrid does exist.

Do you think you may have customers that run your software only in the cloud?

Menard: Absolutely. Four years ago, when we were all focusing on the software-defined data center, we were all undersizing the speed at which workloads could move to the cloud.

Is that why you plan to focus less on OEM partnerships and more on getting your software to work better with more clouds?

Menard: Absolutely.

Are customers moving their applications to the public cloud? Or, are they just moving their data and leaving the applications running on premises?

Menard: I think the only case where it makes sense to move the data without the application is when you’re looking at archiving or disaster recovery. The object stores of public clouds do a great job at that. When you talk about hot data, having an application running in the data center and tapping into a data pool in the cloud may look great on a slide, but I don’t think it makes economic sense.

Which vendors do you go up against in competitive scenarios?

Menard: In the cloud right now, the de facto standard — but it’s a fairly low one — is Amazon EFS [Elastic File System]. Another option is, of course, the status quo: using the same vendor you’ve been using for decades in the data center and trying to make that work in the cloud. We’ve seen announcements by the likes of NetApp in that regard. While it’s probably a good defensive play, it’s very hard with products designed many years ago for the data center to truly take advantage of the cloud. It’s going to come with a level of complexity and cost that’s probably not viable in the long run.

Microsoft bills Azure network as the hub for remote offices

Microsoft’s foray into the rapidly growing SD-WAN market could solve a major customer hurdle and open Azure to even more workloads.

All the major public cloud platforms have increased their networking functionality in recent months, and Microsoft’s latest service, Azure Virtual WAN, pushes the boundaries of those capabilities. The software-defined network acts as a hub that links with third-party tools to improve application performance and reduce latency for companies with multiple offices that access Azure.

IDC estimates the software-defined wide area network (SD-WAN) market will hit $8 billion by 2021, as cloud computing continues to proliferate and employees must access cloud-hosted workloads from various locations. So far, the major cloud providers have left that work to partners.

But this Azure network service solves a big problem for customers that make decisions about network transports and integration with existing routers, as they consume more cloud resources from more locations, said Brad Casemore, an IDC analyst.

“Now what you’ve got is more policy-based, tighter integration within the SD-WAN,” he said.

Azure Virtual WAN uses a distributed model to link Microsoft’s global network with traditional on-premises routers and SD-WAN systems provided by Citrix and Riverbed. Microsoft’s decision to rely on partners, rather than provide its own gateway services inside customers’ offices, suggests it doesn’t plan to compete across the totality of the SD-WAN market, but rather provide an on-ramp to integrate with third-party products.

Customers can already use various SD-WAN providers to easily link to a public cloud, but Microsoft has taken the level of integration a step further, said Bob Laliberte, an analyst at Enterprise Strategy Group in Milford, Mass. Most SD-WAN vendors are building out security ecosystems, but Microsoft already has that in Azure, for example.

This could also simplify the purchasing process, and it would make sense for Microsoft to eventually integrate this virtual WAN with Azure Stack to help facilitate hybrid deployments, Laliberte said.

The Azure Virtual WAN service is billed as a way to connect remote offices to the cloud, and also to each other, with improved reliability and availability of applications. But that interoffice linkage also could lure more companies to use Azure for a whole host of other services, particularly customers just starting to embrace the public cloud.

There are still questions about the Azure network service, particularly around multi-cloud deployments. It’s unclear if customers trust Microsoft — or any single hyperscale cloud vendor — at the core of their SD-WAN implementation, as their architectures spread across multiple clouds, Casemore said.

Azure updates boost network security, data analytics tools

Microsoft also introduced an Azure network security feature this week, Azure Firewall, which lets users create and enforce network policies across multiple endpoints. The stateful firewall protects Azure Virtual Network resources and maintains high availability without any restrictions on scale.

Several other updates include an expanded Azure Data Box service, still in preview, which provides customers with an appliance onto which they can upload data and ship directly to an Azure data center. These types of devices have become a popular means to speed massive migrations to public clouds. Another option for Azure users, Azure Data Box Disk, uses SSDs to transfer up to 40 TB of data spread across five drives. That’s smaller than the original box’s 100 TB capacity, and better suited to collect data from multiple branches or offices, the company said.

Microsoft also doubled the query performance of Azure SQL Data Warehouse to support up to 128 concurrent queries, and waived the transfer fee for migrations to Azure of legacy applications that run on Windows Server and SQL Server 2008/2008 R2, for which Microsoft will end support in July 2019. Microsoft also plans to add features to Power BI for ingestion and integration across BI models, similar to Microsoft customers’ experience with Power Query for Excel.

Rackspace colocation program hosts users’ legacy servers

Rackspace’s latest service welcomes users’ legacy gear into Rackspace data centers and, once it’s in place, gives the vendor a golden opportunity to sell those customers additional services.

The Rackspace Colocation program primarily targets midsize and larger IT shops that want to launch their first cloud initiative, or sidestep the rising costs to operate their own internal data centers. Many of these IT shops have just begun to grapple with the realities of their first digital transformation projects. They must choose where to position key applications, from private clouds to microservices that run on Azure and Google Cloud.

Some Rackspace users run applications on customized hardware and operating systems that are not supported by public clouds, while others have heavily invested in hardware and want to hold onto those systems for another five years to get their full value, said Henry Tran, general manager of Rackspace’s managed hosting and colocation business.

Customers that move existing servers into Rackspace’s data centers gain better system performance from closer proximity to Rackspace’s infrastructure. This gives Rackspace a chance to upsell those customers add-on interconnectivity and other higher-margin services.

“[The Rackspace Colocation services program] is a way to get you in the door by handling all the mundane stuff, but longer term they are trying to get you to migrate to their cloud,” said Cassandra Mooshian, senior analyst at Technology Business Research Inc. in Hampton, N.H.

Green light for greenfield colocation services

There are still many enterprise workloads that run in corporate data centers, so there are a lot of greenfield opportunities to pursue in colocation services. Roughly 60% of enterprises don’t use colos today, and the colocation market should grow around 8% annually through 2021, said Dan Thompson, a senior analyst at 451 Research. “There is still a lot of headroom for companies to migrate to colocation and/or cloud,” he said.

Other colocation service providers have expanded with various higher-margin cloud and other managed services, but Rackspace has chosen a different path.

“They’ve had hosting and cloud services for a while but are now moving in the direction of colocation,” 451 Research’s Thompson said. “This speaks loudly to the multi-cloud and hybrid cloud world we are living in.”

Rackspace’s acquisition of Datapipe in late 2017 initiated its march into colocation, with the ability to offer capabilities and services to Datapipe customers through Microsoft’s Azure Stack, VMware’s Cloud on AWS and managed services on Google’s Cloud platform. In return, Rackspace gained Datapipe’s colocation services and data centers, giving it a market presence on the U.S. West Coast and in Brazil, China and Russia.

Rackspace itself was acquired in late 2016 by private equity firm Apollo Global Management LLC, which gave the company some financial backing and freedom to expand its business.

Team collaboration secondary in Workplace by Facebook app

Facebook pitches Workplace as a team collaboration app, but businesses have found the product more useful as an intranet that helps build community across large workforces with many remote and part-time employees.

In recent months, Facebook has stepped up efforts to position its business platform as a competitor to cloud-based collaboration apps like Slack and Microsoft Teams. Recently, for example, the social media company added third-party business software integrations to Workplace by Facebook and made it easier to deploy instant messaging.

But the Workplace users interviewed for this story do not have the platform integrated with many business apps and have not seen widespread adoption of Workplace Chat, the messaging tool.

Instead, most of those Workplace users continue to rely on platforms like Microsoft Skype for Business for unified communications (UC), while using Workplace primarily for companywide announcements and for promoting collaboration across departments.

Facebook arranged interviews with Weight Watchers, Farmers Insurance and the World Wildlife Fund (WWF) for this story. Heineken USA and Rooftop Housing Group, a 200-person nonprofit based in Evesham, England, were contacted independently. More than 30,000 organizations use the Workplace by Facebook app.

Workplace by Facebook app a better intranet

Only 10% of Weight Watchers employees work at a desk in an office. The World Wildlife Fund has 80 offices around the world. Two-thirds of Heineken workers in the United States are based out of regional offices, which they visit once or twice a week.

These organizations turned to the Workplace by Facebook app because it was a mobile-centric platform that most employees would intuitively know how to use based on the popularity of consumer Facebook.

“For someone who only works two hours a week for the company, we wanted them to be able to intuitively get what the platform was, understand how to use it and take to engaging in it,” said Stacie Sherer, senior vice president of corporate communications at Weight Watchers.

Similar to consumer Facebook, Workplace lets users like, comment on and share posts. Since the organizations deployed Workplace, employees engage with company news more frequently and are more likely to post updates about their own team’s work, the users said.

“Whether you’re in the field, or whether you’re working in finance, or whether you’re working in an administrative role, it has allowed [staff] to feel more part of WWF and our work,” said Kate Cooke, head of network communications at the World Wildlife Fund. (The platform is free for nonprofits.)

The tool has increased collaboration among teams and departments that would have otherwise never interacted. Weight Watchers employees based in different parts of the country have discussed best practices for helping clients. Recently, the Armenian branch of WWF posted about a communications campaign that other offices ended up copying.

Business integrations aren’t central to how companies use Workplace

In May, Facebook unveiled roughly 50 integrations with SaaS apps such as Jira, HubSpot and SurveyMonkey, following the lead of platforms like Slack, Microsoft Teams and Cisco Webex Teams. But for the most part, the organizations interviewed for this story haven’t begun taking advantage of those integrations.

The users, however, do have Workplace integrated with cloud storage apps, such as Box and Google Drive, and web conferencing platforms, such as Zoom, which can be used to live stream meetings and events to Workplace. Those integrations had been available before the May announcement.

Microsoft, Cisco and Slack have marketed their team collaboration apps as hubs for getting work done. Those apps let users, for example, approve expense reports and message with colleagues from the same interface.

The Workplace by Facebook app offers similar functionality, but users are not adopting the app primarily for that reason.

“We really focused it on that engagement perspective to start and really using it as a communication channel,” said Jacqueline Leahy, director of internal corporate communications at Heineken USA. “We have not started to really use it in terms of managing projects.”

Workplace Chat adoption lags

None of the Workplace users rely on the app as their primary instant messaging platform. Most have other UC clients deployed, such as Microsoft Skype for Business, and don’t view Workplace as a replacement for those tools.

At Weight Watchers, for example, the technology and product teams use Slack, integrated with Confluence and Jira, while others in the organization communicate through WhatsApp or text messaging. Sherer said the company was looking into boosting adoption of Workplace Chat.

In fact, Workplace may be inadvertently contributing to a communication channel overload within some organizations. Rooftop Housing Group, for example, now has three or four different ways to instant message, including Workplace Chat, Microsoft Skype for Business and a Mitel softphone client.

“We now need to find organizational defaults,” said John Rockley, the nonprofit’s head of communications and marketing. “Otherwise, we’ve got too many separate channels.”

Ticketmaster breach part of worldwide card-skimming campaign

The attack that caused the Ticketmaster breach of customer information last month was actually part of a widespread campaign that’s affected more than 800 e-commerce sites.

According to researchers at the threat intelligence company RiskIQ Inc., the hacking group known as Magecart has been running a digital credit card-skimming campaign that targets third-party components of e-commerce websites around the world.

At the end of June, ticket sales company Ticketmaster disclosed that it had been compromised and user credit card data had been skimmed. A report by RiskIQ researchers Yonathan Klijnsma and Jordan Herman said the Ticketmaster breach was not an isolated incident, but was instead part of the broader campaign run by the threat group Magecart.

“The target for Magecart actors was the payment information entered into forms on Ticketmaster’s various websites,” Klijnsma and Herman wrote in a blog post. “The method was hacking third-party components shared by many of the most frequented e-commerce sites in the world.”

A digital credit card skimmer, according to RiskIQ, uses scripts injected into websites to steal data entered into forms. Magecart “placed one of these digital skimmers on Ticketmaster websites through the compromise of a third-party functionality supplier known as Inbenta,” the researchers said, noting specifically that Ticketmaster’s network was not directly breached.

RiskIQ has been tracking the activities of Magecart since 2015 and said attacks by the group have been “ramping up in frequency and impact” over the past few years. Ticketmaster and Inbenta are not the only organizations that have been affected by this threat.

According to Klijnsma and Herman, Inbenta’s custom JavaScript code was “wholly replaced” with card skimmers by Magecart.

“In the use of third-party JavaScript libraries, whether a customized module or not, it may be expected that configuration options are available to modify the generated JavaScript. However, the entire replacement of the script in question is generally beyond what one would expect to see,” they wrote.
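
Because the attack hinged on a trusted third-party script being silently and wholly replaced, one common safeguard is to pin externally hosted scripts to a known-good hash, in the spirit of the browser Subresource Integrity mechanism, and alert when the served file changes. Below is a minimal monitoring sketch along those lines; the script URL and expected digest are hypothetical placeholders, and neither RiskIQ nor Ticketmaster described using this particular approach.

```python
# Minimal sketch: detect wholesale replacement of a third-party script by
# comparing its SHA-384 digest against a pinned, known-good value (the same
# digest format browsers use for Subresource Integrity).
# The URL and expected digest below are hypothetical placeholders.
import base64
import hashlib

import requests

SCRIPT_URL = "https://thirdparty.example.com/widget.js"  # hypothetical
EXPECTED_SRI = "sha384-REPLACE_WITH_KNOWN_GOOD_DIGEST"   # hypothetical


def sri_digest(content: bytes) -> str:
    """Return an SRI-style sha384 digest for the given script body."""
    return "sha384-" + base64.b64encode(hashlib.sha384(content).digest()).decode()


def check_script(url: str, expected: str) -> bool:
    """Fetch the script and flag any change from the pinned digest."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    actual = sri_digest(resp.content)
    if actual != expected:
        # In practice this would alert an on-call engineer or block a deploy.
        print(f"ALERT: {url} changed (expected {expected}, got {actual})")
        return False
    return True


if __name__ == "__main__":
    check_script(SCRIPT_URL, EXPECTED_SRI)
```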

RiskIQ also noted that the command-and-control servers to which the skimmed data is sent have been active since 2016, though that doesn’t mean the Ticketmaster websites were affected the entire time.

The Ticketmaster breach is just “the tip of the iceberg,” according to Klijnsma and Herman.

“The Ticketmaster incident received quite a lot of publicity and attention, but the Magecart problem extends to e-commerce sites well beyond Ticketmaster, and we believe it’s cause for far greater concern,” they wrote. “We’ve identified over 800 victim websites from Magecart’s main campaigns making it likely bigger than any other credit card breach to date.”

In other news:

  • The U.K.’s Information Commissioner’s Office (ICO) is fining Facebook £500,000 — more than $600,000 — for failing to protect its users’ data from misuse by Cambridge Analytica. The ICO is also going to bring criminal charges against the parent company of Cambridge Analytica, which gathered the data of millions of Americans before the 2016 presidential election. The ICO has been investigating data privacy abuses like the one by Cambridge Analytica — which has since gone out of business — and its investigations will continue. The fine brought against Facebook is reportedly the largest ever issued by the ICO and the maximum amount allowed under the U.K.’s Data Protection Act.
  • Apple will roll out USB Restricted Mode as part of the new iOS 11.4.1 release. USB Restricted Mode prevents iOS devices that have been locked for over an hour from connecting with USB devices that plug into the Lightning port. “If you don’t first unlock your password-protected iOS device — or you haven’t unlocked and connected it to a USB accessory within the past hour — your iOS device won’t communicate with the accessory or computer, and, in some cases, it might not charge,” Apple explained. Apple hasn’t provided the reason for this feature, but it will make it more difficult for forensics analysts and law enforcement to access data on locked devices.
  • Security researcher Troy Hunt discovered an online credential stuffing list that contained 111 million compromised records. The records included email addresses and passwords that were stored on a web server in France. The data set Hunt looked at had a folder called “USA” — though it has not been confirmed whether all the data came from Americans — and the files had dates starting in early April 2018. “That one file alone had millions of records in it and due to the nature of password reuse, hundreds of thousands of those, at least, will unlock all sorts of other accounts belonging to the email addresses involved,” Hunt said. The site with this information has been taken down, so it’s no longer accessible. Hunt also said there’s no way to know which websites leaked the credentials and suggested users adopt password managers and make their passwords stronger and more unique.

IBM blockchain apps starter pack targets developer disparity

Blockchain has emerged as one of the hottest trends in IT, and as such, it suffers the familiar plight of other big IT trends. There just aren’t enough developers to meet the demand to build blockchain apps.

To help boost the number of blockchain developers, Big Blue recently opened up the blockchain platform it released last summer to new developers, including beginners with no previous knowledge of blockchain. The IBM Blockchain Platform Starter Plan helps individual developers, startups and enterprises build blockchain proofs of concept quickly and affordably. The package includes samples, tutorials and videos to help developers learn the basic concepts of blockchain and then build blockchain apps.

For $500 a month, the IBM Blockchain Platform Starter Plan includes access to the IBM Cloud compute infrastructure, the open source Hyperledger Fabric blockchain framework and the Hyperledger Composer developer tools to run the blockchain ledger. IBM also offers a set of development, operational and governance tools to make it simpler to set up and run a blockchain network. Starter plan customers also get $500 in IBM Cloud credits when they sign on, said Kathryn Harrison, IBM Blockchain offering director.

Blockchain is a distributed database ledger that manages transactions and tracks assets. It can enable a network of users who wish to securely record, verify and execute transactions. That security is what draws everyone’s interest, but few blockchain application developers have the skills to match.
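
To illustrate the ledger idea only, and not IBM’s Hyperledger Fabric code itself, the hash chaining that makes recorded transactions hard to alter after the fact can be sketched in a few lines:

```python
# Toy hash-chained ledger: each block commits to the previous block's hash,
# so tampering with an earlier transaction invalidates every later block.
# Illustrative only; platforms such as Hyperledger Fabric add consensus,
# identity and smart-contract layers on top of this basic idea.
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(), "transactions": transactions, "prev_hash": prev})


def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))


ledger: list = []
append_block(ledger, [{"asset": "bond-123", "owner": "Fund A"}])
append_block(ledger, [{"asset": "bond-123", "owner": "Bank B"}])

print(verify(ledger))                             # True: the chain is intact
ledger[0]["transactions"][0]["owner"] = "Fund C"  # tamper with history
print(verify(ledger))                             # False: later blocks no longer match
```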

“While there are a lot of developers that want to get in this space, there aren’t a lot of developers qualified to work on the core of a lot of these protocols from a security perspective,” said Chris Pacia, lead backend developer at OB1, at the recent QCon New York 2018 conference. OB1, based in Centreville, Va., is the parent company of OpenBazaar, an online marketplace that uses cryptocurrency.

Blockchain apps: The ‘cloud’ of the 21st century

Blockchain expertise is the top request among more than 5,000 skills on Upwork, the Mountain View, Calif.-based company that matches freelance workers with employers. Demand for blockchain expertise on Upwork surged more than 6,000% year over year in the first three months of 2018.

In a recent Gartner study of nearly 300 CIOs of organizations with ongoing blockchain initiatives, 23% of respondents said that blockchain requires the most new skills to implement of any technology area, and another 18% said blockchain skills are the most difficult to find.

New York City-based Global Debt Registry (GDR), a fintech provider of asset certainty solutions, adopted the IBM blockchain starter plan to build its collateral pledge registry, which enables lenders to check the collateral positions of the institutional investors to which they lend money. For example, if Goldman Sachs lends money to a hedge fund and that hedge fund pledges a set of assets to the bank, the fund may also approach JPMorgan Chase & Co. and try to pledge the same set of assets. GDR’s registry would check to see whether those assets are double-pledged, said Robert Brown, CTO of Global Debt Registry.

Brown’s team saw blockchain as a good fit because it’s essentially a set of data shared among a group of companies in an ecosystem. GDR, which started with no blockchain expertise, evaluated different blockchain options and selected Hyperledger because it was built from the ground up as a private blockchain. “We have a set of institutional investors and banks, and they don’t want to have their data in the open,” Brown said.

The IBM blockchain starter plan’s tools helped GDR developers build blockchain apps and get up and running quickly on the IBM Cloud, he said.

“Hyperledger Composer let us write our smart contracts in JavaScript, which is a language we’re familiar with,” Brown said. “The API was straightforward to deal with. Composer also has a modeling language that lets you define your data structures and signatures for the objects you create. The tools make it easy to get going.”

OB1’s Pacia said he is hopeful about projects like IBM’s starter plan but worries whether it will be enough to overcome the low number of people with blockchain expertise. “I’ve seen other efforts to kind of like train people and slowly bring them along so that they can contribute at that type of high level. But it does take a high level of training to do this securely,” he said.

Broadcom acquisition of CA seeks broader portfolio

The out-of-the-blue Broadcom acquisition of CA Technologies has analysts scratching their heads about how the two companies’ diverse portfolios weave together strategically, and how customers might feel the impacts — beneficial or otherwise.

CA’s strength in mainframe and enterprise infrastructure software, the latter of which is a growing but fragmented market, gives chipmaker Broadcom another building block to create an across-the-board infrastructure technology company, stated Hock Tan, president and CEO of Broadcom.

But vaguely worded statements from both companies’ execs lent little insight into potential synergies and strategic short- or long-term goals of the $18.9 billion deal.

One analyst believes the deal is driven primarily by financial and operational incentives, and whatever technology synergies the two companies create are a secondary consideration for now.

“The operating margins from mainframes are very healthy and that fits very well with Broadcom’s financial model,” said Stephen Elliot, an analyst at IDC.

The bigger issue will be Broadcom’s ability to manage the diverse software portfolio of a company the size of CA. To date, Broadcom’s acquisition strategy has focused almost exclusively on massive deals for hardware companies, in areas such as storage, wireless LAN and networking. “The question is, is this too far of a reach for them? Customers are going to have to watch this closely,” Elliot said.

The overall track record of acquisitions that combine hardware-focused companies and large software companies is not good, Elliot noted. He pointed to the failures of Intel’s acquisition of LANDesk and Symantec’s purchase of Veritas.

Broadcom’s ability to manage CA’s complex and interwoven product portfolio is another concern.

“As far as I can see, Broadcom has little or no visible prior execution or knowledge about a complicated and nuanced software and technology arena such as the one CA addresses … that includes DevOps, agile and security,” said Melinda Ballou, research director for IDC’s application life-cycle management program. “Infrastructure management would be more in their line of work, but still very different.”

Broadcom’s acquisition of CA also fills a need to diversify, particularly in the aftermath of its failed attempt to buy Qualcomm earlier this year, which was blocked by the Trump administration for national security reasons.

“They need to diversify their offerings to be more competitive given they primarily focus on chips, networking and the hardware space,” said Judith Hurwitz, president and CEO of Hurwitz & Associates LLC. “CA has done a lot of work on the operational and analytics side, so maybe [Broadcom] is looking at that as a springboard into the software enablement space.”

Hurwitz does see opportunities for both companies to combine their respective products, particularly in network management and IoT security. And perhaps this deal portends more acquisitions will follow, potentially among companies that compete directly or indirectly with CA. Both Broadcom and CA have pursued growth through numerous acquisitions in recent years.

“You could anticipate Broadcom goes on a spending spree, going after other companies that are adjacent to what CA does,” Hurwitz said. “For example, there was talk earlier this year that CA and BMC would merge, so BMC could be a logical step with some synergy there.”