Forbes’ Google Cloud migration rooted in trust, cost savings

Forbes says its online audience grew from 15 million users per month in 2012 to 120 million in 2018, a growth spike that ultimately prompted a large-scale move off on-premises systems and into Google Cloud.

The Google Cloud migration now supports all three aspects of Forbes’ business: content, sales and its publishing infrastructure, according to chief digital officer Salah Zalatimo.

“[In 2020], we’re going to be continuing to mature our business model and diversify our revenue,” Zalatimo said. “Google Cloud is about giving us flexibility. We’re going to be able to establish new products and test new features really quickly.”

Forbes used to run its publishing operations on a heavily customized, on-premises, WordPress-based system, with a front end bogged down by an accumulation of legacy code.

The number of actual people those audience figures represent is likely lower, as Google Analytics defines a user as a browser endpoint. Thus, an individual who read Forbes content on both a phone and a laptop would be counted twice.

Still, the scale involved led to Forbes’ 2018 decision to build a new, custom publishing platform. At that point, the company determined it wanted to make a wholesale push into the cloud centered around one primary provider.

Google won Forbes’ business for several reasons, including pricing, incentives to help its Google Cloud migration and a lower-pressure approach to sales, according to Zalatimo. “We didn’t have to make any hard commitments.”

While Forbes has a relationship with Microsoft as an Office 365 shop, it quickly ruled out Azure. “We talked to them, but the pricing was just too high,” he said.

Forbes also met with sales teams at AWS, where it initially hosted the new publishing platform, but ultimately decided that Google provided the most ease of use and the best level of automation for its needs. Forbes moved the publishing platform as part of its Google Cloud migration during the first half of 2019.

Forbes has moved most of its digital infrastructure into containers and orchestrates them with Google Kubernetes Engine. It also uses the Istio service mesh to wrangle microservices. Google Cloud Storage underpins the system, and Google Cloud Pub/Sub supports serverless operations.
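
Forbes’ internal event pipeline isn’t public, but publishing an event to Google Cloud Pub/Sub with the official Python client follows a well-documented pattern. A minimal sketch — the project ID, topic name and payload below are invented for illustration:

```python
# pip install google-cloud-pubsub
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic; Forbes' actual layout is not public.
topic_path = publisher.topic_path("my-gcp-project", "article-events")

# Pub/Sub messages are raw bytes; keyword arguments become attributes.
future = publisher.publish(
    topic_path,
    b'{"slug": "example-story", "action": "publish"}',
    origin="cms",
)
print(f"Published message ID: {future.result()}")
```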

Forbes estimates that the move to GCP has saved 50 engineer-hours per week thanks to efficiencies and automation. Regression testing and new feature deployment times have dropped by 58%, according to the company.

In addition, Forbes is using Google’s AI and machine learning features to train models that suggest headlines, track trending topics and improve reader engagement.

Google Cloud hones enterprise chops

Former Oracle executive Thomas Kurian came aboard as Google Cloud CEO in November 2018. Since then, Kurian has moved to build out Google’s enterprise cloud sales and support organizations — areas where it had lagged behind competitors. Forbes’ experience on this front has been positive, Zalatimo said.

“They are still maturing as an enterprise provider, and we knew that going in,” he said. “But they knew that going in, too.”

Forbes did work with a services partner to help with the Google Cloud migration, but Google’s account representatives were “extremely involved,” he added. “We always had access, even if it was less traditional.” As one example, Forbes’ teams might find themselves having to call a salesperson in order to escalate a technical issue, Zalatimo said.

Forbes is using a wide array of Google Cloud offerings, including its audience analytics platform, BigQuery data warehouse, Hangouts meeting software and authentication service, all of which are well-established.

But of all the cloud providers with which Forbes works, Google stands out as keen on engaging customers very early on in the new product development process, he added. “A lot of [vendors] talk about it, and say they want to do it, but I don’t think a lot of companies actually do.”

Weighing Google’s influence

Walmart and other large retailers have shunned doing business with AWS, given how closely they compete with its parent company in e-commerce. Google’s kingmaker positions in general web search as well as the hugely influential Google News service might make a media company such as Forbes similarly think twice about making heavy investments in its technology.

Forbes did factor this into its decision, according to Zalatimo. “Our options [were] either to lean in or lean away,” he said. “At the end of the day, they do carry the leverage. As an independent publisher, we really don’t. So, if you can’t beat them, join them.”

The company is taking part in the Google News Initiative, where Google works with publishers on new product development and other collaborative efforts.

Forbes benefits from this relationship with Google — but not to the extent it gets any special insights into the Google News algorithm, which can heavily affect a publisher’s traffic when changes are made. “They are like Fort Knox about this,” he said.

BigID: New privacy regulations have ended ‘the data party’

The ‘data party’ era of enterprises indiscriminately collecting, storing and selling users’ personal information is coming to an end, according to BigID.

A New York-based startup, BigID was formed in 2015 with the goal of improving enterprise data management and protection in the age of GDPR and the California Consumer Privacy Act (CCPA). The company, which won the 2018 Innovation Sandbox Contest at RSA Conference, recently raised $50 million in Series C funding. Now BigID is expanding its mission to help enterprises better understand and control their data amid new privacy regulations.

BigID co-founder and chief product officer Nimrod Vax talks with SearchSecurity about how new regulations have effectively ended the data party. He also discusses BigID’s launch, its future and whether data protection is getting easier or harder.

Editor’s note: This interview has been edited for length and clarity.

How was BigID founded?

Nimrod Vax: Dimitri [Sirota, CEO] and I were the company’s two founders. At my last kind-of real job I was head of the identity product line at CA, and at the time CA acquired Dimitri’s company, Layer 7 Technologies. That’s how we met, so we got to work together on challenges of customers around identity management and security. After we left CA, at the time, there was a big surge of breaches of personal information through incidents like the Ashley Madison scandal and LinkedIn and Twitter. And what was really surprising about those breaches was that they were breaches of what you would think is very sensitive information. It wasn’t nuclear plans or anything; it was really just lists of names and addresses and phone numbers, but it was millions and billions of them. The following year, there were four billion personal records stolen. And the question that we asked ourselves was that with all of these security tools that are out there, why are these breaches still happening? And we learned that data protection tools that were available at the time and even today were not purposely built to protect and discover and manage personal information. They were really very generic and were not built for that. And also, these scandals kind of raised visibility and awareness of privacy. The legislation has picked up and we have GDPR coming and later CCPA, so we’ve identified the opportunity to help software organizations address those needs and meet the requirements of these regulations.

What does BigID do?

Vax: BigID’s aim is to help organizations better understand what data they store about their customers and in general, and then allow them to take action on top of that: comply with regulations, better protect the data and better manage it to get more value out of it. In order to do that, BigID is able to connect to all data sources. We have over 60 different connectors to all the things you could think of in an IT organization: all of the relational databases, all of the unstructured data sources, semistructured data, big data repositories, anything in AWS, business applications like SAP, Salesforce, Workspace, you name it. We connect to anything, and then search for and classify the data. We first and foremost catalog everything, so you have a full catalog of all the data that you have. We classify that data and tell you what type of data it is — where do you have user IDs? Where do you have phone numbers? We help to cluster it, so we can find similar types of data without knowing anything about the data itself; just knowing that content is similar to other data helps cluster it. Our claim to fame is our ability to correlate it. We can find not just Social Security numbers but whose Social Security number it is, and that allows you to distinguish between customer data, American data, European resident data, children’s or adults’ information, and also to know whose data it is for access rights and who to notify regarding a breach.
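
BigID’s classifiers and correlation engine are proprietary, but the two-step idea Vax describes — pattern-based classification, then correlating a found value back to a known identity — can be sketched in a few lines of Python. The patterns, identity index and field names below are all invented for illustration:

```python
import re

# Toy patterns; production classifiers are far more sophisticated.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

# A toy identity index mapping known values back to data subjects --
# the correlation step that answers "whose data is this?"
IDENTITY_INDEX = {"123-45-6789": {"name": "Jane Doe", "residency": "EU"}}

def classify_and_correlate(text):
    findings = []
    for label, pattern in PATTERNS.items():
        for value in pattern.findall(text):
            findings.append({
                "type": label,
                "value": value,
                "subject": IDENTITY_INDEX.get(value),  # None if unknown
            })
    return findings

print(classify_and_correlate("Customer SSN 123-45-6789, phone 555-867-5309"))
```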

The solution is built to run on premises, but it’s modern enterprise software. It’s completely containerized, automatically scales up and down and doesn’t require any agents on the endpoint; it connects using open APIs, and we don’t copy the data — that’s important because we don’t want to create a security problem. We also don’t want to incur a lot of additional storage.

And lastly, and I think this is very important, the discovery layer is all exposed through a well-documented set of APIs, so you can query that information and make it accessible to applications, and we build applications on top of that.

We’re obviously generating more and more user data every single day. Does data protection and data governance become exponentially harder as time goes on? And if so, how do you keep up with that explosion of user data?

Vax: One of the problems that led to BigID was the fact that organizations now have the knowledge and technology that allow them to store unlimited amounts of data. If you look at big data repositories, it’s all about storing truckloads of data; organizations are collecting as much as they can and they’re never deleting the data. That is a big challenge for them, not only to protect the data but even to gain value from the data. Information flows into the organization through so many different channels — from applications, from websites and from partners. Different business units are collecting data and they’re not consolidating it, so all the goodness of the ability to process all that data comes with a burden. How do I make more use of that data? How do I consolidate the data? How do I gain visibility into the data I own and have access to? That complexity requires a different approach to data discovery and data management, and that approach first requires you to be big data native; you need to be able to run in those big data repositories natively and not have to stream the data outside like the old legacy tools; you need to be able to scan data at the source, at the ingestion point, as data flows into these warehouses. What we recently introduced [with Data Pipeline Discovery] is the ability to scan data streams in services like Kafka or [AWS] Kinesis so as the data flows into those data lakes, we’re able to classify that data and understand it.
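
The internals of Data Pipeline Discovery aren’t public, but the pattern Vax describes — classifying records in flight, before they land in the data lake — can be sketched with an ordinary Kafka consumer. A minimal sketch using the kafka-python library; the topic, broker address and pattern are assumptions:

```python
# pip install kafka-python
import re
from kafka import KafkaConsumer

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Hypothetical topic and broker; real deployments would scan every ingest stream.
consumer = KafkaConsumer(
    "ingest-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: v.decode("utf-8", errors="replace"),
)

# Classify each record as it flows past, rather than after it lands.
for record in consumer:
    if SSN.search(record.value):
        print(f"PII found in partition {record.partition} at offset {record.offset}")
```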

Regarding the CCPA, how much impact do you think it will have on how enterprise data is governed?

Vax: We’re seeing that effect already, and it goes back to the data party that’s been happening in the past five years. There’s been a party of data where organizations have collected as much data as they wanted without any liabilities or guardrails around them. Now, the CCPA and GDPR bring that additional layer of governance. You can still collect as much information as you want, but you need to protect it. You have obligations to the people from whom you are collecting the data, and that brings more governance to the data process. Now organizations need to be much more careful about that. The organization needs to have more visibility into the data not because it’s good to have but because the regulations require it; you can’t protect, you can’t govern, and you can’t control what you don’t know, so that’s the big shift in approach that the CCPA brings to the table. Organizations are already getting prepared for that. We’re already seeing organizations take it very seriously; they don’t want to be the first ones to be dinged by the regulation. It’s not even the financial impact. It’s more the reputational impact they are concerned about; nobody wants to be on the board of shame of the CCPA. They want to send a message to their customers that they care about privacy — not that they’re careless about it. I think that’s the big impact that we’re seeing.

What do the next 12 months look like for the company?

Vax: We’re growing rapidly both in product and in staff and in general — I think we’re about 150 people now. Last year, I think we were less than 30. We’re continuing to grow, and that growth is in two areas: on the product side and on extending to additional audiences. We are continuing to invest in our core discovery capabilities. We’re also building more apps. We’re going to solve more difficult problems in privacy and security and governance. We’re also extending to new audiences. Today, we are primarily focusing on building solutions or offerings for developers so that they can leverage our API and building process. For the next area, we are focusing on putting built-in privacy into the applications seamlessly with zero friction.

Google expands multiple Chrome password protection features

Google’s Chrome browser will now warn users if their passwords have been exposed in a data breach.

Google this week expanded Chrome password protection features, which are intended to reduce the risk of phishing sites that prompt users to enter their passwords and other sensitive information, according to the company. New protections, which were introduced Tuesday for Chrome 79, include stolen password warnings, real-time and predictive phishing protections and new profile representations for shared devices.

Phishing attacks and data breaches are on the rise. According to the 2019 State of the Phish Report by Proofpoint, 83% of information security professionals surveyed said they experienced phishing attacks in 2018, up from 76% who said the same in 2017.

New Chrome password protection features

Previously, Google offered Chrome password protection extensions such as Password Alert and Password Checkup, which warn users if they enter a username and password that are no longer safe because they have appeared in a data breach known to the company. In October, the Password Checkup extension became a built-in feature of the password manager in Google Account and the Chrome browser, where users can scan their saved passwords.

According to a blog post by AbdelKarim Mardini, senior product manager, Google now offers warnings as users browse the web in Chrome. When users enter their credentials into a website, Chrome will alert them if their username and password have been compromised in a data breach and recommend that they change their credentials.
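
Chrome’s breach check uses a private Google protocol that hashes and encrypts the credentials before lookup, so it can’t be reproduced directly. The underlying k-anonymity idea — send only a short hash prefix, compare the suffixes locally — is illustrated by the public Have I Been Pwned range API:

```python
import hashlib
import requests

def breach_count(password):
    """Return how many times a password appears in known breaches,
    via the k-anonymity range API at api.pwnedpasswords.com."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix ever leaves the machine.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(breach_count("password123"))  # a large number, unsurprisingly
```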

Chrome’s stolen password warning on mobile

Users can control this feature in Chrome Settings under Sync and Google Services.

In addition, Google enhanced its list of known phishing domains. Google Safe Browsing maintains a list of malicious websites that was previously updated every 30 minutes, a lag that allowed some phishing campaigns to slip through by quickly switching domains. With this week’s update, Chrome checks any site a user visits on desktop in real time, removing the 30-minute delay, and offers phishing warnings for unsafe sites.
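
The real-time check is built into the browser, but Google’s public Safe Browsing Lookup API (v4) shows the kind of query involved. A sketch, assuming an API key issued through the Google Cloud Console:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def check_url(url):
    body = {
        "client": {"clientId": "example-app", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": API_KEY},
        json=body,
    )
    resp.raise_for_status()
    # An empty JSON object means the URL matched no threat list.
    return resp.json().get("matches", [])

# Google publishes a harmless test URL for this API.
print(check_url("http://malware.testing.google.test/testing/malware/"))
```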

Chrome’s alert of suspected phishing sites

According to Google, this feature is enabled for users who turn on the “Make searches and browsing better” setting in Chrome.

Chrome also expanded its predictive phishing protection, which is intended to warn users who are signed in to Chrome and have Sync enabled if they enter their Google Account passwords into a site suspected of phishing by Google.

Tuesday’s update expands the protection to users who are signed in to Chrome but do not have Sync enabled. The feature will also work for all passwords stored in Chrome’s password manager.

The new sign-in indicator in Chrome

Lastly, Chrome will now show the photo and username of the profile that a user is currently using on a device. The feature is intended to help users make sure they are creating and saving passwords to the right profile when using Chrome’s password manager, according to the company.

Odaseva introduces high availability for Salesforce

Salesforce users will be able to continue working even if Salesforce goes down, thanks to a new high availability offering from Odaseva.

Odaseva ultra high availability (UHA) works similarly to high availability (HA) for any non-SaaS environment. If there’s a Salesforce outage, such as planned maintenance or an unexpected failure, a customer’s Salesforce account would fail over to an emulated Salesforce account in Odaseva. Users can continue to view, edit and update the emulated records as normal. When Salesforce is back up, Odaseva will resynchronize the two environments, performing what is essentially a failback.
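
Odaseva has not published how its resynchronization works, so the following is a conceptual sketch only: any failback step must merge edits made against the emulated copy during the outage back into the primary, here with a naive last-writer-wins rule.

```python
def resync(primary, standby):
    """Merge records edited in the standby (emulated) copy back into
    the primary. Each record is {'value': ..., 'modified': epoch_seconds}."""
    merged = dict(primary)
    for key, record in standby.items():
        # Keep the standby edit only if it is newer than the primary's copy.
        if key not in merged or record["modified"] > merged[key]["modified"]:
            merged[key] = record
    return merged

primary = {"acct-1": {"value": "old address", "modified": 100}}
standby = {"acct-1": {"value": "edited during outage", "modified": 250}}
print(resync(primary, standby))  # the outage-era edit wins
```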

Odaseva UHA is in early access and will be released as an add-on to the Odaseva platform in early 2020. Pricing is not yet available.

Salesforce has become so mission-critical to some organizations that they can’t afford any downtime. Odaseva CEO Sovan Bin said Odaseva UHA isn’t strictly necessary for smaller businesses that can shrug off a small Salesforce outage, but there are places such as call centers that need Salesforce access 100% of the time. These organizations stand to lose hundreds of thousands of dollars for every hour they can’t conduct business, while suffering from lost opportunities and damage to their brand.

“The real damage is because you’ve stopped doing business,” Bin said.

Odaseva provides backup and data governance for Salesforce data. Developed by Salesforce-certified technical architects — the highest Salesforce expertise credential — Odaseva Data Governance Cloud offers archiving and automated data compliance on top of data protection features. Odaseva claims its compliance and data governance tools differentiate it from Salesforce backup competitors such as OwnBackup and Spanning.

Data protection and backup only address the integrity of data, but HA addresses its availability and accessibility. Christophe Bertrand, senior analyst at IT analyst firm Enterprise Strategy Group (ESG), said HA is lacking for SaaS application data. He said he didn’t know any other vendor with a similar product or feature.

“Not only is it unique, other vendors aren’t even exploring HA for Salesforce,” Bertrand said.

Bertrand added that other SaaS applications such as Office 365, Box and ServiceNow also have an availability gap, even as they become mission-critical to businesses. When these services go down, companies may have to stop working. Bertrand estimated that the cost of downtime averages more than $300,000 per hour for most enterprises. Although many vendors provide backup, no one has yet provided a failover/failback offering.

“Ninety-nine-point-whatever percent uptime is not enough. That’s still 15 hours of downtime per year,” Bertrand said.

Odaseva UHA users can continue making changes to Salesforce records even if Salesforce is offline.

Odaseva also introduced some new capabilities to its platform this week. It is now integrated with Salesforce Marketing Cloud, which allows users to back up emails, leads, contact information and marketing campaign files stored in Marketing Cloud. Before this integration, customers would have to develop a backup mechanism for Marketing Cloud themselves, which would include complex processes of extracting the data and replicating it.
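
Marketing Cloud has its own extraction APIs, but the flavor of such a do-it-yourself backup — query everything, then replicate the result to object storage — can be sketched against the core Salesforce REST API with the simple_salesforce library. The credentials, object selection and bucket name are placeholders:

```python
# pip install simple-salesforce boto3
import json

import boto3
from simple_salesforce import Salesforce

# Placeholder credentials.
sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Extract: query_all pages through every matching record.
records = sf.query_all("SELECT Id, Email FROM Contact")["records"]

# Replicate: land a point-in-time snapshot in object storage.
boto3.client("s3").put_object(
    Bucket="my-backup-bucket",  # hypothetical bucket
    Key="salesforce/contacts.json",
    Body=json.dumps(records).encode(),
)
```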

Odaseva also extended its compliance automation applications to cover more than GDPR. Odaseva has data privacy applications that automatically perform anonymization, right of access, right of erasure and other privacy tasks to stay compliant with GDPR. Automated compliance now covers CCPA, HIPAA and a number of privacy regulations in non-U.S. countries, such as PIPA (Japan), PIPEDA (Canada) and POPIA (South Africa).

The Salesforce Marketing Cloud integration and compliance automation extensions are available immediately.

Bin said Odaseva will focus on DevOps next. Salesforce Full Sandbox environments can be natively refreshed every 29 days. To help customers accelerate development, Bin said Odaseva will come up with a way to work around that limit and enable more frequent refreshes in a future release.

Redis Labs eases database management with RedisInsight

The robust market of tools to help users of the Redis database manage their systems just got a new entrant.

Redis Labs disclosed the availability of its RedisInsight tool, a graphical user interface (GUI) for database management and operations.

Redis is a popular open source NoSQL database that is increasingly being used in cloud-native Kubernetes deployments as users move workloads to the cloud. Open source database use is growing quickly, according to recent reports, as flexible, open systems that can meet varied needs become a common requirement.

Among the challenges often associated with databases of any type is ease of management, which Redis is trying to address with RedisInsight.

“Database management will never go out of fashion,” said James Governor, analyst and co-founder at RedMonk. “Anyone running a Redis cluster is going to appreciate better memory and cluster management tools.”

Governor noted that Redis is following a tested approach, by building out more tools for users that improve management. Enterprises are willing to pay for better manageability, Governor noted, and RedisInsight aims to do that.

RedisInsight based on RDBTools

The RedisInsight tool, introduced Nov. 12, is based on the RDBTools technology that Redis Labs acquired in April 2019. RDBTools is an open source GUI for users to interact with and explore data stored in a Redis database.

Over the last seven months, Redis added more capabilities to the RDBTools GUI, expanding the product’s coverage for different applications, said Alvin Richards, chief product officer at Redis.

One of the core pieces of extensibility in Redis is the ability to introduce modules that contain new data structures or processing frameworks. So for example, a module could include time series, or graph data structures, Richards explained.

“What we have added to RedisInsight is the ability to visualize the data for those different data structures from the different modules,” he said. “So if you want to visualize the connections in your graph data for example, you can see that directly within the tool.”

RedisInsight overview dashboard

RDBTools is just one of many different third-party tools that exist for providing some form of management and data insight for Redis. There are some 30 other third-party GUI tools in the Redis ecosystem, though lack of maturity is a challenge.

“They tend to sort of come up quickly and get developed once and then are never maintained,” Richards said. “So, the key thing we wanted to do is ensure that not only is it current with the latest features, but we have the apparatus behind it to carry on maintaining it.”

How RedisInsight works

For users, getting started with the new tool is relatively straightforward. RedisInsight is a piece of software that needs to be downloaded and then connected to an existing Redis database. The tool ingests all the appropriate metadata and delivers the visual interface to users.
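
RedisInsight itself is a packaged GUI, but the metadata it surfaces comes from ordinary Redis commands. A minimal sketch with the redis-py client, assuming a local instance:

```python
# pip install redis
import redis

r = redis.Redis(host="localhost", port=6379)

# Server-level stats, similar to what an overview dashboard shows.
info = r.info()
print(info["redis_version"], info["used_memory_human"], info["connected_clients"])

# Walk the keyspace incrementally with SCAN rather than a blocking KEYS *.
for key in r.scan_iter(count=100):
    print(key, r.type(key), r.memory_usage(key))
```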

RedisInsight is available for Windows, macOS and Linux, and also available as a Docker container. Redis doesn’t have a RedisInsight as a Service offering yet.

“We have considered having RedisInsight as a service and it’s something we’re still working on in the background, as we do see demand from our customers,” Richards said. “The challenge is always going to be making sure we have the ability to ensure that there is the right segmentation, security and authorization in place to put guarantees around the usage of data.”

Datrium opens cloud DR service to all VMware users

Datrium plans to open its new cloud disaster recovery as a service to any VMware vSphere users in 2020, even if they’re not customers of Datrium’s DVX infrastructure software.

Datrium released disaster recovery as a service with VMware Cloud on AWS in September for DVX customers as an alternative to potentially costly professional services or a secondary physical site. DRaaS enables DVX users to spin up protected virtual machines (VMs) on demand in VMware Cloud on AWS in the event of a disaster. Datrium takes care of all of the ordering, billing and support for the cloud DR.

In the first quarter, Datrium plans to add a new Datrium DRaaS Connect offering for VMware users who deploy vSphere infrastructure on premises and do not use Datrium storage. Datrium DRaaS Connect software would deduplicate, compress and encrypt vSphere snapshots and replicate them to Amazon S3 object storage for cloud DR. Users could set backup policies and categorize VMs into protection groups, setting different service-level agreements for each one, Datrium CTO Sazzala Reddy said.
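
Datrium hasn’t published its snapshot format, so the following is a conceptual sketch of the pipeline described — compress, encrypt, then replicate to Amazon S3. Deduplication (content-addressed chunking) is omitted for brevity, and the bucket and key layout are invented:

```python
# pip install boto3 cryptography
import zlib

import boto3
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # in practice, a durable managed key

def ship_snapshot(snapshot_bytes, snapshot_id):
    """Compress, encrypt and replicate one vSphere snapshot to S3."""
    compressed = zlib.compress(snapshot_bytes, 9)
    encrypted = fernet.encrypt(compressed)
    boto3.client("s3").put_object(
        Bucket="dr-snapshots",  # hypothetical bucket
        Key=f"snapshots/{snapshot_id}",
        Body=encrypted,
    )

ship_snapshot(b"vm disk delta bytes...", "vm-042/2019-12-12T00-00Z")
```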

A second Datrium DRaaS Connect offering will enable VMware Cloud users to automatically fail over workloads from one AWS Availability Zone (AZ) to another if an Amazon AZ goes down. Datrium stores deduplicated vSphere snapshots on Amazon S3, and the snapshots are replicated to three AZs by default, Datrium chief product officer Brian Biles said.

Speedy cloud DR

Datrium claims system recovery can happen on VMware Cloud within minutes from the snapshots stored in Amazon S3, because it requires no conversion from a different virtual machine or cloud format. Unlike some backup products, Datrium does not convert VMs from VMware’s format to Amazon’s format and can boot VMs directly from the Amazon data store.

“The challenge with a backup-only product is that it takes days if you want to rehydrate the data and copy the data into a primary storage system,” Reddy said.

Although the “instant RTO” that Datrium claims to provide may not be important to all VMware users, reducing recovery time is generally a high priority, especially to combat ransomware attacks. Datrium commissioned a third party to conduct a survey of 395 IT professionals, and about half said they experienced a DR event in the last 24 months. Ransomware was the leading cause, hitting 36% of those who reported a DR event, followed by power outages (26%).

The Orange County Transportation Authority (OCTA) information systems department spent a weekend recovering from a zero-day malware exploit that hit nearly three years ago on a Thursday afternoon. The malware came in through a contractor’s VPN connection and took out more than 85 servers, according to Michael Beerer, a senior section manager for online system and network administration of OCTA’s information systems department.

Beerer said the information systems team restored critical applications by Friday evening and the rest by Sunday afternoon. But OCTA now wants to recover more quickly if a disaster should happen again, he said.

OCTA is now building out a new data center with Datrium DVX storage for its VMware VMs and possibly Red Hat KVM in the future. Beerer said DVX provides an edge in performance and cost over alternatives he considered. Because DVX disaggregates storage and compute nodes, OCTA can increase storage capacity without having to also add compute resources, he said.

Datrium cloud DR advantages

Beerer said the addition of Datrium DRaaS would make sense because OCTA can manage it from the same DVX interface. Datrium’s deduplication, compression and transmission of only changed data blocks would also eliminate the need for a pricy “big, fat pipe” and reduce cloud storage requirements and costs over other options, he said. Plus, Datrium facilitates application consistency by grouping applications into one service and taking backups at similar times before moving data to the cloud, Beerer said.

Datrium’s “Instant RTO” is not critical for OCTA. Beerer said anything that can speed the recovery process is interesting, but users also need to weigh that benefit against any potential additional costs for storage and bandwidth.

“There are customers where a second or two of downtime can mean thousands of dollars. We’re not in that situation. We’re not a financial company,” Beerer said. He noted that OCTA would need to get critical servers up and running in less than 24 hours.

Reddy said Datrium offers two cost models: a low-cost option with a 60-minute window and a “slightly more expensive” option in which at least a few VMware servers are always on standby.

Pricing for Datrium DRaaS starts at $23,000 per year, with support for 100 hours of VMware Cloud on-demand hosts for testing, 5 TB of S3 capacity for deduplicated and encrypted snapshots, and up to 1 TB per year of cloud egress. Pricing was unavailable for the upcoming DRaaS Connect options.

Other cloud DR options

Jeff Kato, a senior storage analyst at Taneja Group, said the new Datrium options would open up to all VMware customers a low-cost DRaaS offering that requires no capital expense. He said most vendors that offer DR from their on-premises systems to the cloud force customers to buy their primary storage.

George Crump, president and founder of Storage Switzerland, said data protection vendors such as Commvault, Druva, Veeam, Veritas and Zerto also can do some form of recovery in the cloud, but it’s “not as seamless as you might want it to be.”

“Datrium has gone so far as to converge primary storage with data protection and backup software,” Crump said. “They have a very good automation engine that allows customers to essentially draw their disaster recovery plan. They use VMware Cloud on Amazon, so the customer doesn’t have to go through any conversion process. And they’ve solved the riddle of: ‘How do you store data in S3 but recover on high-performance storage?’ “

Scott Sinclair, a senior analyst at Enterprise Strategy Group, said using cloud resources for backup and DR often means either expensive, high-performance storage or lower cost S3 storage that requires a time-consuming migration to get data out of it.

“The Datrium architecture is really interesting because of how they’re able to essentially still let you use the lower cost tier but make the storage seem very high performance once you start populating it,” Sinclair said.

Microsoft self-service policy for Office 365 raises concerns

Office 365 admins must sacrifice some degree of control as Microsoft allows end users to purchase certain capabilities themselves for Power Platform products.

Microsoft Power Platform includes Power BI, PowerApps and Microsoft Flow, which have business intelligence, low-code development and workflow capabilities, respectively. These applications are included in most Office 365 enterprise subscriptions. Previously, only administrators could purchase licensing for an organization.

On Oct. 23, Microsoft announced that it would roll out self-service purchasing to U.S. cloud customers starting Nov. 19.

Widespread adoption of the SaaS model has already caused significant communication gaps between IT and end users, said Reda Chouffani, vice president of development at Biz Technology Solutions, a consulting firm in Mooresville, N.C.

“Now introducing this and knowing that Microsoft has over 140 million business subscribers that are empowered to make purchasing decisions on certain apps within the suite … that will make it where more of these [communication issues] will occur, and IT is not going to take it lightly,” he said.

Users with non-guest user accounts in a managed Azure Active Directory tenant will be able to make purchases directly with a credit card, according to a recent Microsoft FAQ. IT administrators can turn off the self-service purchasing policy through PowerShell’s MSCommerce module, however, according to an update this week from Microsoft. Microsoft also extended the rollout date to Jan. 14, 2020, to give admins more time to prepare for the change.

The decision to allow IT to disable the capability likely came about from customer pushback about security concerns, said Willem Bagchus, messaging and collaboration specialist at United Bank, based in Parkersburg, W.Va.

IT admins may still be deterred by the self-service purchasing capability, because some may not be aware they can turn it off via PowerShell, Bagchus said.

“For a small-business IT admin who does everything by themselves or depends on the web only for [PowerShell] functions, it’ll be a bit of a challenge,” he added.

Security, licensing and support concerns

Security remains a top concern for many Office 365 customers, said Doug Hemminger, director of Microsoft services at SPR, a technology consulting firm in Chicago. Midsize and large businesses will be scrambling to turn the self-service purchasing capability off, he said.

“A lot of companies are worried about the data access issues that those users may inadvertently expose their company to,” Hemminger said. “Monitoring is a key part of implementing a certain environment and making sure that governance is in place, so many companies that I work with don’t want to give their employees the ability to go out and buy their own licenses.”

Office 365 admins can apply data management and access policies to Microsoft self-service purchases, which may alleviate some security concerns. End users do not need administrator approval before purchasing an application with a credit card, however.

“Most users will not think twice before purchasing something if it’s going to help them, which means that security may not necessarily be top of mind,” Chouffani said. “That can make it very difficult, because now everybody can pick their product of choice without truly doing some sort of due diligence and evaluation.”

Others said Microsoft will handle security issues properly.

“Microsoft has proved to me that they’re very serious about security,” Bagchus said. “Anything that may happen from a security perspective, [Microsoft] will be on top of it right away.”

When it comes to licensing, organizations need to administer checks and balances, Chouffani said.

Self-service purchasers can access a limited view of the Microsoft 365 admin center and assign licenses to other end users, according to the Microsoft FAQ.

“Licensing is the least of our worries,” said Daniel Beato, director of technology at TNTMAX, an IT consultancy based in Wyckoff, N.J. “The user can do their own licensing; they will pay with their own credit card or even the company credit card.”

Employees will likely be held responsible for company purchases, however, when an organization reviews its finances, Beato said.

It is also unclear who is expected to provide end-user support when an application fails, Chouffani said.

Microsoft will provide standard support for self-service purchasers, according to the company.

A ‘smart decision for Microsoft’

Microsoft’s self-service policy is a smart one for the company, said Mark Bowker, a senior analyst at Enterprise Strategy Group in Milford, Mass.

“In the world we live in today, employees need access to applications to get their jobs done,” he said. “Today’s application environment is very, very dynamic.”

Unlike other Office 365 products, such as Word and Excel, Power Platform applications aren’t widely used, Bowker said. Instead, they are used mainly by niche employees such as corporate developers and data analytics professionals.

“I think overall this will be a good thing,” Bagchus said. “More users and more installations will improve a product.”

Communication is key

No matter their personal feelings on the Microsoft self-service policy, Office 365 admins should be prepared for the changes and adjust accordingly.

Admins should have a good relationship with their organization’s Microsoft sales representative and keep in regular contact with a point person for updates, Bagchus said.

“That way you won’t get blindsided,” he said. “You can evolve with it.”

IT should also collaborate with end users to understand the needs of the business and to be a part of the solution, Chouffani said.

Data silos and culture lead to data transformation challenges

For many users, making full use of data for analytics and business intelligence is harder than it should be, due to a number of data transformation challenges.

Data challenges arise not only in the form of data transformation problems, but also with broader strategic concerns about how data is collected and used.

Culture and data strategy within organizations are key causal factors of data transformation challenges, said Gartner analyst Mike Rollings.

“Making data available in various forms and to the right people at the right time has always been a challenge,” Rollings said. “The bigger barrier to making data available is culture.”

The path to overcoming data challenges is to create a culture of data and fully embrace the idea of being a data-driven enterprise, according to Rollings.

Rollings has been busy recently talking about the challenges of data analytics, including taking part in a session at the Gartner IT Symposium/Xpo, held Oct. 20-24 in Orlando, where he also detailed some of the findings from the Gartner Chief Data Officer (CDO) survey.

Among the key points in the study is that most organizations have not included data and analytics as part of documented corporate strategies.

“The primary challenge is that data and data insights are not a central part of business strategy,” Rollings said.

Often, data and data analytics are actually just byproducts of other activities, rather than being the core focus of a formal data-driven architecture, he said. In Rollings’ view, data and analytics should be considered assets that can be measured, managed and monetized.

“When we talk about measuring and monetizing, we’re really saying, do you have an intentional process to even understand what you have,” he said. “And do you have an intentional process to start to evaluate the opportunities that may exist with data, or with analysis that could fundamentally change the business model, customer experience and the way decisions are made.”

Data transformation challenges

The struggle to make the data useful is a key challenge, said Hoshang Chenoy, senior manager of marketing analytics at San Francisco-based LiveRamp, an identity resolution software vendor.

Among other data transformation challenges is that many organizations still have siloed deployments, where data is collected and remains in isolated segments.

“In addition to having siloed data within an organization, I think the biggest challenge for enterprises to make their data ready for analytics are the attempts at pulling in data that has previously never been accessed, whether it’s because the data exists in too many different formats or for privacy and security reasons,” Chenoy said. “It can be a daunting task to start on a data management project but with the right tech, team and tools in place, enterprises should get started sooner rather than later.”

How to address the challenges

The early promise of data warehouse and data lake technologies was to make data easier to use.

But despite technology advances, there’s still a long way to go to solving data transformation challenges, said Ed Thompson, CTO of Matillion, a London-based data integration vendor that recently commissioned a survey on data integration problems.

The survey of 200 IT professionals found that 90% of organizations see barriers to making data available for insights. The study also found data volumes growing as fast as 100% a month at some organizations.

When an executive team starts to get good quality data, what typically comes back is a lot of questions that require more data. The continuous need to ask and answer questions is the cycle that is driving data demand.

“The more data that organizations have, the more insight that they can gain from it, the more they want, and the more they need,” Thompson said.

HashiCorp Consul plays to multi-platform strength with Azure

Microsoft Azure users will get a hosted version of the HashiCorp Consul service mesh as multi-platform interoperability becomes a key feature for IT shops and cloud providers alike.

Service mesh is an architecture for microservices networking that uses a sidecar proxy to orchestrate and secure network connections among complex ephemeral services. HashiCorp Consul is one among several service mesh control planes available, but its claim to fame for now is that it can connect multiple VM-based or container-based applications in any public cloud region or on-premises deployment, through the Consul Connect gateway released last year.

HashiCorp Consul Service on Azure (HCS), released to private beta this week, automatically provisions clusters that run Consul service discovery and service mesh software within Azure. HashiCorp site reliability engineers also manage the service behind the scenes, but it’s billed through Azure and provisioned via the Azure console and the Azure Managed Applications service catalog.

The two companies unveiled this expansion to their existing partnership this week at HashiConf in Seattle, and touted their work together on service mesh interoperability, which also includes the Service Mesh Interface (SMI), released in May. SMI defines a set of common APIs that connect multiple service mesh control planes such as Consul, Istio and Linkerd.

Industry watchers expect such interconnection — and coopetition — to be a priority for service mesh projects, at least in the near future, as enterprises struggle to make sense of mixed infrastructures that include legacy applications on bare metal along with cloud-native microservices in containers.

“The only way out is to get these different mesh software stacks to interoperate,” said John Mitchell, formerly chief platform architect at SAP Ariba, a HashiCorp Enterprise shop, and now an independent digital transformation consultant who contracts with HashiCorp, among others. “They’re all realizing they can’t try to be the big dog all by themselves, because it’s a networking problem. Standardization of that interconnect, that basic interoperability, is the only way forward — or they all fail.”

Microsoft and HashiCorp talked up multi-cloud management as a job for service mesh, but real-world multi-cloud deployments are still a bleeding-edge scenario at best among enterprises. However, the same interoperability problem faces any enterprise with multiple Kubernetes clusters, or assets deployed both on premises and in the public cloud, Mitchell said.

“Nobody who’s serious about containers in production has just one Kubernetes cluster,” he said. “Directionally, multiplatform interoperability is where everybody has to go, whether they realize it yet or not.”

The tangled web of service mesh interop

For now, Consul has a slight edge over Google and IBM’s open source Istio service mesh control plane, in the maturity of its Consul Connect inter-cluster gateway and its ability to orchestrate VMs and bare metal in addition to Kubernetes-orchestrated containers. Clearly, it’s pushing this edge with HashiCorp Consul Service on Azure, but it won’t be long before Istio catches up. Istio Gateway and Istio Multicluster projects both emerged this year, and the ability to integrate virtual machines is also in development. Linkerd has arguably the best production-use bona fides in VM-based service mesh orchestration. Consul and Istio use the same Envoy data plane, which will make differentiation between them in the long term even more difficult.

“Service mesh will become like electricity, just something you expect,” said Tom Petrocelli, an analyst at Amalgam Insights in Arlington, Mass. “The vast majority of people will go with what’s in their preferred cloud platform.”

HCS could boost Consul’s profile, given Microsoft’s strength as a cloud player — but it will depend more on how the two companies market it than its technical specifications, Petrocelli said. At this stage, Consul doesn’t interoperate with Azure Service Fabric, Microsoft’s original hosted service mesh, which is important if it’s to get widespread adoption, in Petrocelli’s view.

“I’m not really going to get excited about something in Azure that doesn’t take advantage of Azure’s own fabric,” he said.

Without Service Fabric integration to widen Consul’s appeal to Azure users, it’s unlikely the market for HCS will pull in many new customers, Petrocelli said. Also, whether Microsoft positions HCS as its service mesh of choice for Azure, or makes it one among many hosted service mesh offerings, will decide how widely it is used, in his estimation.

“If [HCS] is one of many [service mesh offerings on Azure], it’s nice if you happen to be a HashiCorp customer that also uses Azure,” Petrocelli said.

New integration brings Fuze meetings to Slack app

An integration unveiled this week will make it easier for Slack users to launch and join Fuze meetings. Zoom Video Communications Inc. rolled out a similar integration with Slack over the summer.

Slack is increasingly making it clear that it intends to incorporate voice and video capabilities into its team messaging app through integrations and partnerships, rather than by attempting to build the technology on its own.

Fuze Inc.’s announcement also underscores how big a player Slack has become in the business collaboration industry. Fuze, a cloud unified communications (UC) provider, opted to partner with Slack, even though it sells a team messaging app with the same core capabilities.

The integration lets users launch Fuze meetings by clicking on the phone icon in Slack, instead of typing a command. They will also see details about an ongoing meeting, such as how long it’s been going on and who’s participating.

Furthermore, Slack’s Microsoft Outlook and Google Calendar apps will let users join scheduled Fuze meetings with one click. Slack previously announced support for that capability with Zoom, Cisco Webex and Skype for Business.

No formal partnership

Slack gave Fuze special access to the set of APIs that made the latest integrations possible, said Eric Hanson, Fuze’s vice president of marketing intelligence. But the companies later clarified there was no formal partnership between them.

The vendors apparently miscommunicated about how to frame this week’s announcement. Within hours on Tuesday, Fuze updated a blog post to remove references to a “partnership” with Slack, instead labeling it as an “integration.”

In contrast, Slack and Zoom signed a contract to align product roadmaps and marketing strategies earlier this year.

In the future, Fuze hopes to give users the ability to initiate phone calls through Slack. Previously, Slack said it would enable such a feature with Zoom Phone, the video conferencing provider’s new cloud calling service.

Slack declined to comment on any plans to expand the Fuze integration.

“There are still some things that Slack hasn’t made available through this set of APIs yet,” Hanson said. “They have a roadmap in terms of where they want to take this.”

Making it easier for users to pick and choose

The voice and video capabilities natively supported in Slack are far less advanced than those available from main rival Microsoft Teams, an all-in-one suite for calling, messaging and meetings. But users want to be able to easily switch between messaging with someone and talking to them in real time.

By integrating with cloud communications vendors like Fuze and Zoom, Slack can focus on what it does best — team-based collaboration — while still connecting to the real-time communications services that customers need, said Mike Fasciani, analyst at Gartner.

“One of Slack’s advantages over Microsoft Teams is its ability and willingness to integrate with many business and communications applications,” Fasciani said.

Fuze also competes with Microsoft Teams. Integrations with Slack should help cloud UC providers sell to the vendor’s rapidly expanding customer base. Slack now has more than 100,000 paid customers, including 720 enterprises that each contribute more than $100,000 per year in revenue.

“Even though Fuze has its own [messaging] app, it doesn’t have anywhere near the market share of Slack,” said Irwin Lazar, analyst at Nemertes Research. “I think this shows Slack’s continued view that they don’t want to compete directly with the voice/meeting vendors.”
