Tag Archives: flexibility

M-Files cloud subscription turns hybrid with M-Files Online

To reflect users’ desire for flexibility and regulatory shifts in the enterprise content management industry, software vendors are starting to offer options for storing data on premises or in a cloud infrastructure.

The M-Files cloud strategy is a response to these industry changes. The information management software vendor has released M-Files Online, which enables users to manage content both in the cloud and behind a firewall on premises, under one subscription.

While not the first ECM vendor to offer hybrid infrastructure, the company claims that, with the new M-Files cloud system, it is the first ECM software provider to offer both deployment options under one software subscription.

“What I’ve seen going on is users are trying to do two things at once,” said John Mancini, chief evangelist for the Association of Intelligent Information Management (AIIM). “On one hand, there are a lot of folks that have significant investment in legacy systems. On the other hand, they’re realizing quickly that the old approaches aren’t working anymore and are driving toward modernizing the infrastructure.”

Providing customer flexibility

It’s difficult, time-consuming and expensive to migrate an organization’s entire library of archives or content from on premises to the cloud, yet it’s also the way the industry is moving as emerging technologies like AI and machine learning have to be cloud-based to be able to function. That’s where a hybrid cloud approach can help organizations handle the migration process.


According to a survey by Mancini and AIIM, and sponsored by M-Files, 48% of the 366 professionals surveyed said they are moving toward a hybrid of cloud and on-premises delivery methods for information management over the next year, with 36% saying they are moving toward cloud and 12% staying on premises.

“We still see customers that are less comfortable moving it all to the cloud, and there are certain use cases where that makes sense,” said Mika Javanainen, vice president of product marketing at M-Files. “This is the best way to provide our customers flexibility and make sure they don’t lag behind. They may still run M-Files on premises, but be using the cloud services to add intelligence to their data.”

The M-Files cloud system and its new online offering act as a hub for an organization’s storehouse of information.

“The content resides where it is, but we still provide a unified UI and access to that content and the different repositories,” Javanainen said.

An M-Files Online screenshot shows how the information management company brings together an organization’s content from a variety of repositories.

Moving to the cloud to use AI

While the industry is moving more toward cloud-based ECM, 60% of respondents in the AIIM survey still want some sort of on-premises storage.

“There are some parts of companies that are quite happy with how they are doing things now, or may understand the benefits of cloud but are resistant to change,” said Greg Milliken, senior vice president of marketing at M-Files. “[M-Files Online] creates an opportunity that allows users that may have an important process they can’t deviate from to access information in the traditional way while allowing other groups or departments to innovate.”

One of the largest cloud drivers is to realize the benefit of emerging business technologies, particularly AI. While AI can conceivably run on premises, that approach is constrained by how much data organizations can realistically store on premises.

M-Files cloud computing can open up the capabilities of AI for the vendor’s customers. But for organizations to benefit from AI, they need to overcome fears of the cloud, Mancini said.

“Organizations need to understand that cloud is coming, more data is coming and they need to be more agile,” he said. “They have to understand the need to plug in to AI.”

Potential problems with hybrid clouds

Running the parts of the business that need tighter security on premises while running the rest in the cloud sounds good, but it can be difficult to implement, according to Mancini.

“My experience talking to people is that it’s easier said than done,” Mancini said. “Taking something designed in a complicated world and making it work in a simple, iterative cloud world is not the easiest thing to do. Vendors may say we have a cloud offering and an on-premises offering, but the real thing customers want is something seamless between all permutations.”

Regardless of whether an organization manages content in the cloud or behind a firewall, there are undoubtedly dozens of other software systems — file shares, ERP, CRM — that businesses work with and hope to integrate their information with. The real goal of ECM vendors and others in the information management space, according to Mancini, is to get all those repositories working together.

“What you’re trying to get to is a system that is like a set of interchangeable Lego blocks,” Mancini said. “And what we have now is a mishmash of Legos, Duplos, Tinker Toys and erector sets.”

M-Files claims its data hub approach — bringing all the disparate data under one UI via an intelligent metadata layer that plugs into the other systems — succeeds at this.

“We approach this problem by not having to migrate the data — it can reside where it is and we add value by adding insights to the data with AI,” Javanainen said.

M-Files Online, which was released Aug. 21, is generally available to customers. M-Files declined to provide detailed pricing information.

Google adds single-tenant VMs for compliance, licensing concerns

Google’s latest VM runs counter to standard public cloud frameworks, but its added flexibility checks off another box for enterprise clients.

Google Cloud customers can now access sole-tenant nodes on Google Compute Engine. The benefits for these single-tenant VMs, currently in beta, are threefold: They reduce the “noisy neighbor” issue that can arise on shared servers; add another layer of security, particularly for users with data residency concerns; and make it easier to migrate certain on-premises workloads with stringent licensing restrictions.

The public cloud model was built on the concept of multi-tenancy, which allows providers to squeeze more than one account onto the same physical host and thus operate at economies of scale. Early customers happily gave up the advantages of dedicated hardware in exchange for less infrastructure management and the ability to quickly scale out.

But as more traditional corporations adopt public cloud, providers have added isolation capabilities to approximate what’s inside enterprises’ own data centers, such as private networks, virtual private clouds and bare-metal servers. Single tenancy applies that approach down to the hardware level, while maintaining a virtualized architecture. AWS was the first to offer single-tenant VMs with its Dedicated Instances.

Customers access Google’s single-tenant VMs the same way as its other compute instances, except they’re placed on a dedicated server. The location of that node is either auto-selected through a placement algorithm, or customers can manually select the location at launch. These instances are customizable in size, and are charged per second for vCPU and system memory, as well as a 10% sole-tenancy premium.
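
To make that concrete, the sketch below shows one way to place a VM on a sole-tenant node group programmatically, assuming the google-api-python-client library, Application Default Credentials and an existing node group. The project, zone, image and node-affinity label key are hypothetical placeholders rather than values from this article; check the current sole-tenant node documentation for the exact label to use.

```python
# A minimal sketch (not an official Google sample), assuming the
# google-api-python-client package and Application Default Credentials.
from googleapiclient import discovery

PROJECT = "my-project"        # hypothetical project ID
ZONE = "us-central1-a"        # zone that hosts the sole-tenant node group

compute = discovery.build("compute", "v1")

instance_body = {
    "name": "sole-tenant-vm-1",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    # Node affinity pins the VM to a dedicated (sole-tenant) node group;
    # without this block, Compute Engine schedules the VM on shared hosts.
    # The affinity key below is an assumption for illustration.
    "scheduling": {
        "nodeAffinities": [{
            "key": "compute.googleapis.com/node-group-name",
            "operator": "IN",
            "values": ["my-node-group"],   # hypothetical node group name
        }]
    },
}

operation = compute.instances().insert(
    project=PROJECT, zone=ZONE, body=instance_body
).execute()
print("Started operation:", operation["name"])
```

Aside from the node affinity setting, the instance is created the same way as any other Compute Engine VM, which matches how Google describes the feature.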

Single-tenant VMs another step for Google Cloud’s enterprise appeal

Google still lags behind AWS and Microsoft Azure in public cloud capabilities, but it has added services and support in recent months to shake its image as a cloud valued solely for its engineering. Google must expand its enterprise customer base, especially with large organizations in which multiple stakeholders sign off on use of a particular cloud, said Fernando Montenegro, a 451 Research analyst.

Not all companies will pay the premium for this functionality, but it could be critical to those with compliance concerns, including those that must prove they’re on dedicated hardware in a specific location. For example, a DevOps team may want to build a CI/CD pipeline that releases into production, but a risk-averse security team might have some trepidations. With sole tenancy, that DevOps team has flexibility to spin up and down, while the security team can sign off on it because it meets some internal or external requirement.

“I can see security people being happy that, we can meet our DevOps team halfway, so they can have their DevOps cake and we can have our security compliance cake, too,” Montenegro said.


A less obvious benefit of dedicated hardware involves the lift and shift of legacy systems to the cloud. A traditional ERP contract may require a specific set of sockets or hosts, and it can be a daunting task to ensure a customer complies with licensing stipulations on a multi-tenant platform because the requirements aren’t tied to the VM.

In a bring-your-own-license scenario, these dedicated hosts can optimize customers’ license spending and reduce the cost to run those systems on a public cloud, said Deepak Mohan, an IDC analyst.

“This is certainly an important feature from an enterprise app migration perspective, where security and licensing are often top priority considerations when moving to cloud,” he said.

The noisy neighbor problem arises when a user is concerned that high CPU or IO usage by another VM on the same server will impact the performance of its own application, Mohan said.

“One of the interesting customer examples I heard was a latency-sensitive function that needed to compute and send the response within as short a duration as possible,” he said. “They used dedicated hosts on AWS because they could control resource usage on the server.”

Still, don’t expect this to be the type of feature that a ton of users rush to implement.

“[A single-tenant VM] is most useful where strict compliance/governance is required, and you need it in the public cloud,” said Abhi Dugar, an IDC analyst. “If operating under such strict criteria, it is likely easier to just keep it on prem, so I think it’s a relatively niche use case to put dedicated instances in the cloud.”

Announcing the general availability of Azure Event Grid

Modern applications are taking maximum advantage of the agility and flexibility of the cloud by moving away from monolithic architectures and instead using a set of distinct services, all working together. This includes foundational services offered by a cloud platform like Azure (Database, Storage, IoT, Compute, Serverless Functions, etc.) and application-specific services (inventory management, payment services, manufacturing processes, mobile experiences, etc.). In these new architectures, event-driven execution has become a foundational cornerstone. It replaces cumbersome polling for communication between services with a simple mechanism. These events could include IoT device signals, cloud provisioning notifications, storage blob events, or even custom scenarios such as new employees being added to HR systems. Reacting to such events efficiently and reliably is critical in these new app paradigms.

Today, I am excited to announce the general availability of Azure Event Grid, a fully managed event routing service that simplifies the development of event-based applications.

  • Azure Event Grid is the first of its kind, enabling applications and services to subscribe to all the events they need to handle, whether they come from Azure services or from other parts of the same application.
  • These events are delivered through push semantics, simplifying your code and reducing your resource consumption. You no longer need to continuously poll for changes, and you only pay per event. The service scales dynamically to handle millions of events per second.
  • Azure Event Grid provides multiple ways to react to these events, including serverless offerings such as Azure Functions or Azure Logic Apps, Azure Automation, or even custom webhooks for your code or third-party services. This means any service running anywhere can publish events and subscribe to reliable Azure events.

We make it easy to react to Azure native events and build modern apps anywhere, on premises and in the cloud, without restricting you to use only our public cloud services. This is unique to Azure Event Grid.

Here is how it works:

[Embedded video: how Azure Event Grid works]
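
To make the push model concrete, here is a minimal sketch of publishing a custom event to an Event Grid custom topic over HTTPS with Python’s requests library, using the hypothetical “new employee added to an HR system” scenario mentioned above. The topic endpoint, access key and event type are placeholders you would replace with your own; the envelope fields follow the Event Grid event schema.

```python
import json
import uuid
from datetime import datetime, timezone

import requests

# Hypothetical placeholders: copy the real endpoint and key from your topic.
TOPIC_ENDPOINT = "https://<your-topic>.<region>.eventgrid.azure.net/api/events"
TOPIC_KEY = "<topic-access-key>"

def publish_employee_added(employee_id: str, department: str) -> None:
    """Publish a custom 'employee added' event to an Event Grid topic."""
    event = {
        # Standard Event Grid envelope fields.
        "id": str(uuid.uuid4()),
        "subject": f"hr/employees/{employee_id}",
        "eventType": "Contoso.HR.EmployeeAdded",        # custom event type
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "data": {"employeeId": employee_id, "department": department},
        "dataVersion": "1.0",
    }
    response = requests.post(
        TOPIC_ENDPOINT,
        headers={"aeg-sas-key": TOPIC_KEY, "Content-Type": "application/json"},
        data=json.dumps([event]),   # Event Grid expects an array of events
        timeout=10,
    )
    response.raise_for_status()

if __name__ == "__main__":
    publish_employee_added("e-1042", "engineering")
```

Subscribers, whether an Azure Function, a Logic App or a custom webhook, then receive the event through push delivery without ever polling the topic.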

In the days since we announced public preview, we have seen many customers find innovative uses for Azure Event Grid and we’ve been blown away by all the great feedback from customers and the community. 

  • Outotec used Azure Event Grid to rearchitect their hybrid integration platform:

“Azure Event Grid enabled us to simplify the architecture of our cloud-based enterprise wide hybrid integration platform, by making it easy to reliably respond to events and changes in the global business data without polling.”

– Henri Syrjäläinen, Director of Digital Enterprise Architecture, Outotec Oyj

  • Paycor unified their human capital management applications using Azure Event Grid:

“Event Grid empowers Paycor to provide a unified experience to our customers, across the suite of our human capital management applications.  It becomes the backbone for an event driven architecture, allowing each application to broadcast and receive events in a safe, reliable way.  It solves many of the operational and scalability concerns that traditional pub-sub solutions cannot.”

– Anthony Your, Director of Architecture, Paycor, Inc.

  • Microsoft Devices supply chain team utilized Azure Event Grid as part of its serverless pipeline to optimize operations and reduce time to market. The details are described in this Microsoft supply chain serverless case study.

Here is what we have newly available since our preview:

  • Richer scenarios enabled through integration with more services: Since preview, we have added General Purpose Storage and Azure IoT Hub as new event publishers and Azure Event Hubs as a new destination (great for archival, streaming, and buffering of events). IoT Hub adds support for device lifecycle events, such as device creation and device deletion, which can then be handled in a serverless manner. These new integrations simplify the architecture and expand the possibilities for your applications, whether they are in the cloud or on premises. Please see the full current list of Azure Event Grid service integrations for details and regional availability. We will continue to add more services throughout the year.

[Image: Event Grid service integrations]

  • Availability in more regions: Azure Event Grid is globally available in the following regions: West US, East US, West US 2, East US 2, West Central US, Central US, West Europe, North Europe, Southeast Asia, and East Asia with more coming soon.
  • Increased reliability and service level agreement (SLA): We now have a 24-hour retry policy with exponential backoff for event delivery. We also offer industry-leading 99.99% availability with a financially backed SLA for your production workloads. With today’s announcement, you can confidently build your business-critical applications to rely on Azure Event Grid.
  • Better developer productivity: Today, we are also releasing new Event Grid SDKs to streamline development. Management SDKs are now available for Python, .NET, and Node.js, with support for Go, Ruby, and Java coming soon. The publish SDK is now available for .NET, with support for Python, Node.js, Go, Ruby, and Java coming soon. Additionally, we have made it easier to consume events by simply fetching the JSON schema of all supported event types from our event schema store. This removes the burden on the subscriber of understanding and deserializing the events (see the consumer sketch after this list).
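
On the consuming side, here is a minimal sketch of a custom webhook endpoint, assuming Python and Flask (Event Grid itself only requires an HTTPS endpoint, so both choices are illustrative). It completes the one-time subscription validation handshake and returns a 2xx status to acknowledge delivery, so the retry policy described above only kicks in when the handler fails; the blob-created branch is purely hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/events", methods=["POST"])
def handle_events():
    """Webhook endpoint for an Event Grid subscription."""
    for event in request.get_json():
        # Event Grid validates a new webhook subscription by sending a
        # validation event whose code must be echoed back.
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": event["data"]["validationCode"]})

        if event.get("eventType") == "Microsoft.Storage.BlobCreated":
            print("New blob:", event["data"].get("url"))   # illustrative only
        else:
            print("Unhandled event:", event.get("eventType"), event.get("subject"))

    # A 2xx response acknowledges delivery; non-2xx triggers Event Grid retries.
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)
```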

With today’s GA, I think you will find that Azure Event Grid becomes a critical component in your serverless application. Go ahead, give it a try with this simple and fun Event Grid Quickstart. Remember, the first 100,000 events per month are on us!

Here are some other samples/tutorials to help you get started:

  • Build serverless applications
    • Use IoT Hub and Logic apps to react to device lifecycle events [doc | video]
    • Instantly pick up and resize images in Blob Storage using a function [doc]
  • Automate your infrastructure operations
    • Appropriately tag VMs as they are spun up and send a notification to your Microsoft Teams channel [doc]
  • Facilitate communication between the different pieces of your distributed applications
    • Stream data from Event Hubs to your data warehouse [doc]

To learn more, please join us for our upcoming webinar on Tuesday, February 13, 2018. 

Register here: Building event-driven applications using serverless architectures.

Thanks,

Corey

How does Data Protection Manager 2016 save and restore data?

By default, System Center Data Protection Manager (DPM) 2016 stores backup data in a storage pool on the DPM server. But administrators have the flexibility to put those backups on storage that is located — and partitioned — elsewhere.

To get started, IT administrators install a DPM agent on every computer they want to protect, then add each machine to a protection group in DPM. A protection group is a collection of computers that share the same protection settings or configurations, such as the group name, protection policy, disk target and replica method.

After the agent installation and configuration process, DPM produces a replica for every protection group member, which can include volumes, shares, folders, Exchange storage groups and SQL Server databases. System Center Data Protection Manager 2016 builds replicas in a provisioned storage pool.

After DPM generates the initial replicas, its agents track changes to the protected data and send that information to the DPM server. DPM will then use the change journal to update the file data replicas at the intervals specified by the configuration. During synchronization, any changes are sent to the DPM server, which applies them to the replica.
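
As a rough, generic illustration of that change-tracking idea (this is not DPM’s actual implementation), the sketch below hashes fixed-size blocks of a protected file, compares them with the hashes recorded at the last synchronization, and applies only the changed blocks to a replica file. The block size and hashing scheme are arbitrary choices for the example.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks, an arbitrary size for illustration

def block_hashes(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(path, previous_hashes):
    """Yield (block_index, block_bytes) for blocks that differ from the last sync."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                yield index, block
            index += 1

def apply_to_replica(replica_path, changes):
    """Patch the replica in place with only the changed blocks."""
    with open(replica_path, "r+b") as replica:
        for index, block in changes:
            replica.seek(index * BLOCK_SIZE)
            replica.write(block)

# Example flow: record hashes at the last sync, then ship only what changed.
# last_sync = block_hashes("protected.dat")
# ... protected.dat is modified ...
# apply_to_replica("replica.dat", changed_blocks("protected.dat", last_sync))
```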

DPM also periodically checks the replica for consistency with block-level verification and corrects any problems in the replica. Administrators can set recovery points for a protection group member to create multiple recoverable versions for each backup.

Application data backups require additional planning

Application data protection can vary based on the application and the selected backup type. Administrators need to be aware that certain applications do not support every DPM backup type. For example, Microsoft Virtual Server and some SQL Server databases do not support incremental backups.


For a synchronization job, System Center Data Protection Manager 2016 tracks application data changes and moves them to the DPM server, similar to an incremental backup. Updates are combined with the base replica to form the complete backup.

For an express full backup job, System Center Data Protection Manager 2016 uses a complete Volume Shadow Copy Service snapshot, but transfers only changed blocks to the DPM server. Each full backup creates a recovery point for the application’s data.

Generally, incremental synchronizations are faster to back up but can take longer to restore. To balance the time needed to restore content, DPM periodically creates full backups that integrate any collected changes, which speeds up a recovery. DPM can support up to 64 recovery points for each file member of a protection group, and up to 448 express full backups with 96 incremental backups for each express full backup for application data.

The DPM recovery process is straightforward regardless of the backup type or target. Administrators select the desired recovery point with the Recovery Wizard in the DPM Administrator Console. DPM will restore the data from that point to the desired target or destination. The Recovery Wizard will denote the location and availability of the backup media. If the backup media — such as tape — is not available, the restoration process will fail.

IT pros navigate the software-defined data center market

Software-defined infrastructure promises flexibility and agility in the data center, but many IT pros still struggle with challenges such as cost concerns and implementation issues.

The software-defined data center (SDDC) aims to decouple hardware from software and automate networking, compute and storage resources through a centralized software platform. IT can either implement this type of data center in increments by deploying software-defined networking, storage and compute separately, or in one fell swoop. IT pros at Gartner’s data center conference this month in Las Vegas said their organizations are interested in SDDC to address changing storage needs.

Early in SDDC’s foray into the IT landscape, IT pros generally used software-defined infrastructure for a single application or region. But in the past 18 months or so, more organizations have expanded their use of software-defined infrastructure from one application to general-purpose infrastructure across the business, said Daniel Bowers, research director at Gartner.

“That’s a shift,” he said. “That means software-defined is going from a niche technology — great for certain applications — to the mainstream.”

Why SDDC?

As interest levels increase, adoption in the software-defined data center market is on the rise. By 2023, 85% of large global enterprises will require the programmatic capabilities of an SDDC, up from 25% today, according to Gartner.

Some IT teams are evaluating the software-defined data center market as their higher-ups demand innovation, including one financial services company.

“Our CIO is increasingly demanding to move in a software-defined direction,” said an infrastructure architect at the company, who requested anonymity because he was not authorized to speak with the media.

The company’s IT strategy is to shift away from a traditional, scale-up, monolithic storage model and toward a scale-out storage model, which enables IT to buy more storage in smaller chunks. The company also aims to update its “big, flat network” through software-defined networking’s automation and orchestration capabilities, the infrastructure architect said.

Currently, the company’s IT department struggles to deliver adequate test environments to its developers. It aims to close those gaps by spinning up an entire test environment through APIs. When developers are finished testing, they can spin it down, rinse and repeat.

A software-defined data center is a perfect match for an API-driven infrastructure, the infrastructure architect said. With the click of a few buttons, programmers can provision the temporary development environments they need to build applications.

For others, software-defined infrastructure is a secondary solution to an accidental problem. Wayne Morse, a network administrator and systems analyst at Jacobs Technology, an IT services company based in Dallas, runs local storage across 24 servers.

“The problem is, we’re running out of disk space on any individual server, and we need to share those resources across multiple servers,” he said.

IT didn’t implement a SAN due to cost issues, Morse said. Now, the company needs distributed storage across the data center to share resources — and software-defined storage (SDS) is a way to achieve that.

SDDC challenges

But one of the most significant advantages of an SDDC — the ability to implement it gradually — can also be its biggest downfall.

“[Software-defined storage] needs to be a part of a bigger picture,” said Julia Palmer, a research director at Gartner. “It’s very difficult, because all of the components of software-defined are developed separately.”

For Morse, that means a limited network could hinder the capabilities of SDS. He is considering upgrading the company’s network to take full advantage of SDS storage-sharing features.

Other organizations see the advantages of software-defined, but costs keep actual adoption just out of reach.


Walt Bainey, the director of infrastructure operations at Kent State University in Kent, Ohio, has looked at the software-defined data center market for years, but only from afar. That’s because his IT team doesn’t roll out a lot of compute storage or make constant changes to the network.

“We are more static,” Bainey said. “The costs of implementing and purchasing the products to make [an SDDC] happen are greater than the actual need.”

Still, one ideal use case for SDDC would be the university’s research computing cluster, which provides the infrastructure that supports the research needs of professors, researchers and students. There, the IT team could license a smaller footprint of hardware, software and networking components to cut costs, Bainey said. Through software and scripts, IT can provide resources such as servers and file shares and automate routine tasks to build out the environment’s compute, storage and networking components.

“We could have our faculty members and professors self-serve and dole out things they want by spinning them up and spinning them down,” Bainey said. “I think there’s a huge advantage in that type of scenario, but we’re not there yet.”