Tag Archives: problems

Government IT pros: Hiring data scientists isn’t an exact science

WASHINGTON, D.C. — Government agencies face the same problems as enterprises when it comes to turning their vast data stores into useful information. In the case of government, that information is used to provide services such as healthcare, scientific research and legal protections, and even to fight wars.

Public sector IT pros at the Veritas Public Sector Vision Day this week talked about their challenges in making data useful and keeping it secure. A major part of their work currently involves finding the right people to fill data analytics roles, including hiring data scientists. They described data science as a combination of roles requiring both technical and subject matter expertise, which often means a diverse team is needed to succeed.

Tiffany Julian, data scientist at the National Science Foundation, said she recently sat in on a focus group involved with the Office of Personnel Management’s initiative to define the data scientist role.

“One of the big messages from that was, there’s no such thing as a unicorn. You don’t hire a data scientist. You create a team of people who do data science together,” Julian said.

Julian said data science includes more than programmers and technical experts. Subject experts who know their company or agency mission also play a role.

“You want your software engineers, you want your programmers, you want your database engineers,” she said. “But you also want your common sense social scientists involved. You can’t just prioritize one of those fields. Let’s say you’re really good at Python, you’re really good at R. You’re still going to have to come up with data and processes, test it out, draw a conclusion. No one person you hire is going to have all of those skills that you really need to make data-driven decisions.”

Wanted: People who know they don’t know it all

Because she is a data scientist, Julian said, others in her agency ask her what skills they should seek when hiring data scientists.

You don’t hire a data scientist. You create a team of people who do data science together.
Tiffany Julian, data scientist, National Science Foundation

“I’m looking for that wisdom that comes from knowing that I don’t know everything,” she said. “You’re not a data scientist, you’re a programmer, you’re an analyst, you’re one of these roles.”

Tom Beach, chief data strategist and portfolio manager for the U.S. Patent and Trademark Office (USPTO), said he takes a similar approach when looking for data scientists.

“These are folks that know enough to know that they don’t know everything, but are very creative,” he said.

Beach added that when hiring data scientists, he looks for people “who have the desire to solve a really challenging problem. There is a big disconnect between an abstract problem and a piece of code. In our organization, a regulatory agency dealing with patents and trademarks, there’s a lot of legalese and legal frameworks. Those don’t code well. Court decisions are not readily codable into a framework.”

‘Cloud not enough’

Like enterprises, government agencies also need to get the right tools to help facilitate data science. Peter Ranks, deputy CIO for information enterprise at the Department of Defense, said data is key to his department, even if DoD IT people often talk more about technologies such as cloud, AI, cybersecurity and the three Cs (command, control and communications) when they discuss digital modernization.

“What’s not on the list is anything about data,” he said. “And that’s unfortunate because data is really woven into every one of those. None of those activities are going to succeed without a focused effort to get more utility out of the data that we’ve got.”

Ranks said future battles will depend on the ability of forces on land, air, sea, space and cyber to interoperate in a coordinated fashion.

“That’s a data problem,” he said. “We need to be able to communicate and share intelligence with our partners. We need to be able to share situational awareness data with coalitions that may be created on demand and respond to a particular crisis.”

Ranks cautioned against putting too much emphasis on leaning on the cloud for data science. He described cloud as the foundation on the bottom of a pyramid, with software in the middle and data on top.

“Cloud is not enough,” he said. “Cloud is not a strategy. Cloud is not a destination. Cloud is not an objective. Cloud is a tool, and it’s one tool among many to achieve the outcomes that your agency is trying to get after. We find that if all we do is adopt cloud, if we don’t modernize software, all we get is the same old software in somebody else’s data center. If we modernize software processes but don’t tackle the data … we find that bad data becomes a huge boat anchor that all those modernized software applications have to drag around. It’s hard to do good analytics with bad data. It’s hard to do good AI.”

Beach agreed. He said cloud is “100%” part of USPTO’s data strategy, but so is recognition of people’s roles and responsibilities.

“We’re looking at not just governance behavior as a compliance exercise, but talking about people, process and technology,” he said. “We’re not just going to tech our way out of a situation. Cloud is just a foundational step. It’s also important to understand the recognition of roles and responsibilities around data stewards, data custodians.”

This includes helping ensure that people can find the data they need, as well as denying access to people who do not need that data.

Nick Marinos, director of cybersecurity and data protection at the Government Accountability Office, said understanding your data is a key step in ensuring data protection and security.

“Thinking upfront about what data do we actually have, and what do we use the data for, are really the most important questions to ask from a security or privacy perspective,” he said. “Ultimately, having an awareness of the full inventory within the federal agencies is really the only way that you can even start to approach protecting the enterprise as a whole.”

Marinos said data protection audits at government agencies often start with looking at the agency’s mission and its flow of data.

“Only from there can we as auditors — and the agency itself — have a strong awareness of how many touch points there are on these data pieces,” he said. “From a best practice perspective, that’s one of the first steps.”


Kubernetes tools vendors vie for developer mindshare

SAN DIEGO — Kubernetes solves many problems as a container orchestration technology, but it adds complexity in other areas, particularly for developers who need better Kubernetes tools.

Developers at the KubeCon + CloudNativeCon North America 2019 event here this week noted that although native tooling for development on Kubernetes continues to improve, there’s still room for more.

“I think the tooling thus far is impressive, but there is a long way to go,” said a software engineer and Kubernetes committer who works for a major electronics manufacturer and requested anonymity.

Moreover, “Kubernetes is extremely elegant, but there are multiple concepts for developers to consider,” he said. “For instance, I think the burden of the onboarding process for new developers and even users sometimes can be too high. I think we need to build more tooling as we flesh out the different use cases that communities bring out.”

Developer-oriented approach

Enter Red Hat, which introduced an update of its Kubernetes-native CodeReady Workspaces tool at the event.

Red Hat CodeReady Workspaces 2 enables developers to build applications and services on their laptops that mirror the environment they will run in production. And onboarding is but one of the target use cases for the technology, said Brad Micklea, vice president of developer tools, developer programs and advocacy at Red Hat.

The technology is especially useful in situations where security is an issue, such as bringing in new contracting teams or using offshore development teams where developers need to get up and running with the right tools quickly.

I think the tooling thus far is impressive, but there is a long way to go.
Anonymous Kubernetes committer

CodeReady Workspaces runs on the Red Hat OpenShift Kubernetes platform.

Initially, new enterprise-focused developer technologies are generally used in experimental, proof-of-concept projects, said Charles King, an analyst at Pund-IT in Hayward, Calif. Yet over time those that succeed, like Kubernetes, evolve from the proof-of-concept phase to being deployed in production environments.

“With CodeReady Workspaces 2, Red Hat has created a tool that mirrors production environments, thus enabling developers to create and build applications and services more effectively,” King said. “Overall, Red Hat’s CodeReady Workspaces 2 should make life easier for developers.”

In addition to popular features from the first version, such as an in-browser IDE, Lightweight Directory Access Protocol support, Active Directory and OpenAuth support as well as one-click developer workspaces, CodeReady Workspaces 2 adds support for Visual Studio Code extensions, a new user interface, air-gapped installs and a shareable workspace configuration known as Devfile.

“Workspaces is just generally kind of a way to package up a developer’s working workspace,” Red Hat’s Micklea said.

Overall, the Kubernetes community is primarily “ops-focused,” he said. However, tools like CodeReady Workspaces help to empower both developers and operations.

For instance, at KubeCon, Amr Abdelhalem, head of the cloud platform at Fidelity Investments, said the way he gets teams initiated with Kubernetes is to have them deliver on small projects and move on from there. CodeReady Workspaces is ideal for situations like that because it simplifies developer adoption of Kubernetes, Micklea said.

Such a tool could be important for enterprises that are banking on Kubernetes to move them into a DevOps model to achieve business transformation, said Charlotte Dunlap, an analyst with GlobalData.

“Vendors like Red Hat are enhancing Kubernetes tools and CLI [Command Line Interface] UIs to bring developers with more access and visibility into the ALM [Application Lifecycle Management] of their applications,” Dunlap said. “Red Hat CodeReady Workspaces is ultimately about providing enterprises with unified management across endpoints and environments.”

Competition for Kubernetes developer mindshare

Other companies that focus on the application development platform, such as IBM and Pivotal, have also joined the Kubernetes developer enablement game.

Earlier this week, IBM introduced a set of new open-source tools to help ease developers’ Kubernetes woes. Meanwhile, at KubeCon this week, Pivotal made its Pivotal Application Service (PAS) on Kubernetes generally available and also delivered a new release of the alpha version of its Pivotal Build Service. The PAS on Kubernetes tool enables developers to focus on coding while the platform automatically handles software deployment, networking, monitoring, and logging.

The Pivotal Build Service enables developers to build containers from source code for Kubernetes, said James Watters, senior vice president of strategy at Pivotal. The service automates container creation, management and governance at enterprise scale, he said.

The build service brings technologies such as Pivotal’s kpack and Cloud Native Buildpacks to the enterprise. Cloud Native Buildpacks address dependencies in the middleware layer, such as language-specific frameworks. Kpack is a set of resource controllers for Kubernetes. The Build Service defines the container image, its contents and where it should be kept, Watters said.

Indeed, Watters believes it just might be game over in the Kubernetes tools space because Pivotal owns the Spring Framework and Spring Boot, which appeal to a wide swath of Java developers; Spring is “one of the most popular ways enterprises build applications today,” he said.

“There is something to be said for the appeal of Java in that my team would not need to make wholesale changes to our build processes,” said a Java software developer for a financial services institution who requested anonymity because he was not cleared to speak for the organization.

Yet, in today’s polyglot programming world, the programming language is less of an issue, as teams can switch languages at will. For instance, Fidelity’s Abdelhalem said his teams focus less on specific tools and more on overall technology and strategy to determine what fits in their environment.


Google cloud network tools check links, firewalls, packet loss

Google has introduced several network monitoring tools to help companies pinpoint problems that could impact applications running on the Google Cloud Platform.

Google launched this week the first four modules of an online console called the Network Intelligence Center. The components for monitoring a Google cloud network include a network topology map, connectivity tests, a performance dashboard, and firewall metrics and insights. The first two are in beta, and the rest are in alpha, which means they are still in the early stages of development.

Here’s a brief overview of each module, based on a Google blog post:

— Google is providing Google Cloud Platform (GCP) subscribers with a graphical view of their network topology. The visualization shows how traffic is flowing between private data centers, load balancers, and applications running on computing environments within GCP. Companies can drill down on each element of the topology map to verify policies or identify and troubleshoot problems. They can also review changes in the network over the last six weeks.

— The testing module lets companies diagnose problems with network connections within GCP or from GCP to an IP address in a private data center or another cloud provider. Along with checking links, companies can test the impact of network configuration changes to reduce the chance of an outage.

— The performance dashboard provides a current view of packet loss and latency between applications running on virtual machines. Google said the tool would help IT teams determine quickly whether a packet problem is in the network or an app.

— The firewall metrics component offers a view of rules that govern the security software. The module is designed to help companies optimize the use of firewalls in a Google cloud network.

Getting access to the performance dashboard and firewall metrics requires a GCP subscriber to sign up as an alpha customer. Google will incorporate the tools into the Network Intelligence Center once they reach the beta level.


Forus Health uses AI to help eradicate preventable blindness – AI for Business

Big problems, shared solutions

Tackling global challenges has been the focus of many health data consortiums that Microsoft is enabling. The Microsoft Intelligent Network for Eyecare (MINE) – the initiative that Chandrasekhar read about – is now part of the Microsoft AI Network for Healthcare, which also includes consortiums focused on cardiology and pathology.

For all three, Microsoft’s aim is to play a supporting role to help doctors and researchers find ways to improve health care using AI and machine learning.

“The health care providers are the experts,” said Prashant Gupta, Program Director in Azure Global Engineering. “We are the enabler. We are empowering these health care consortiums to build new things that will help with the last mile.”

In the Forus Health project, that “last mile” started by ensuring image quality. When members of the consortium began researching what was needed in the eyecare space, Forus Health was already taking the 3nethra classic to villages to scan hundreds of villagers in a day. But because the images were being captured by minimally trained technicians in areas open to sunlight, close to 20% of them were not of high enough quality to be used for diagnostic purposes.

“If you have bad images, the whole process is crude and wasteful,” Gupta said. “So we realized that before we start to understand disease markers, we have to solve the image quality problem.”

Now, an image quality algorithm immediately alerts the technician when an image needs to be retaken.

The same thought process applies to the cardiology and pathology consortiums. The goal is to see what problems exist, then find ways to use technology to help solve them.

“Once you have that larger shared goal, when you have partners coming together, it’s not just about your own efficiency and goals; it’s more about social impact,” Gupta said.

And the highest level of social impact comes through collaboration, both within the consortiums themselves and when working with organizations such as Forus Health that take that technology out into the world.

Chandrasekhar said he is eager to see what comes next.

“Even though it’s early, the impact in the next five to 10 years can be phenomenal,” he said. “I appreciated that we were seen as an equal partner by Microsoft, not just a small company. It gave us a lot of satisfaction that we are respected for what we are doing.”

Top image: Forus Health’s 3nethra classic is an eye-scanning device that can be attached to the back of a moped and transported to remote locations. Photo by Microsoft. 

Leah Culler edits Microsoft’s AI for Business and Technology blog.


How to tackle an email archive migration for Exchange Online


A move from on-premises Exchange to Office 365 also entails determining the best way to transfer legacy archives. This tutorial can help ease migration complications.


A move to Office 365 seems straightforward enough until project planners broach the topic of the email archive migration.


Not all organizations keep all their email inside their messaging platform. Many organizations that archive messages also keep a copy in a journal that is archived away from user reach for legal reasons.

The vast majority of legacy archive migrations to Office 365 require third-party tools and must follow a fairly standardized process to complete the job quickly and with minimal expense. Migrating mailboxes to Office 365 first and the archive second is the fastest way to gain the benefits of Office 365 before the archive reingestion completes.

An archive product typically scans mailboxes for older items and moves those to longer term, cheaper storage that is indexed and deduplicated. The original item typically gets replaced with a small part of the message, known as a stub or shortcut. The user can find the email in their inbox and, when they open the message, an add-in retrieves the full content from the archive.
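To make the stub mechanism concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the class names, the in-memory archive store and the archive URL format are assumptions, not any vendor’s actual data model.

from dataclasses import dataclass

@dataclass
class Message:
    message_id: str
    subject: str
    body: str

@dataclass
class Stub:
    message_id: str
    subject: str
    archive_url: str  # the mail client add-in follows this link to fetch the full item

def archive_message(msg: Message, archive_store: dict) -> Stub:
    # Move the full message into cheaper, indexed, deduplicated storage
    # and return the small stub that replaces it in the mailbox.
    archive_store[msg.message_id] = msg
    return Stub(msg.message_id, msg.subject,
                archive_url=f"https://archive.example.com/items/{msg.message_id}")

def open_stub(stub: Stub, archive_store: dict) -> Message:
    # What the add-in does when the user opens a stubbed item: retrieve the full content.
    return archive_store[stub.message_id]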

Options for archived email migration to Office 365

The native tools to migrate mailboxes to Office 365 cannot handle an email archive migration. When admins transfer legacy archive data for mailboxes, they usually consider the following three approaches:

  1. Export the data to PST archives and import it into user mailboxes in Office 365.
  2. Reingest the archive data into the on-premises Exchange mailbox and then migrate the mailbox to Office 365.
  3. Migrate the Exchange mailbox to Office 365 first, then perform the email archive migration to put the data into the Office 365 mailbox.

Option 1 is not usually practical because it takes a lot of manual effort to export data to PST files. The stubs remain in the user’s mailbox and add clutter.

Option 2 also requires a lot of labor-intensive work and uses a lot of space on the Exchange Server infrastructure to support reingestion.

That leaves the third option as the most practical approach, which we’ll explore in a little more detail.

Migrate the mailbox to Exchange Online

When you move a mailbox to Office 365, it migrates along with the stubs that relate to the data in the legacy archive. The legacy archive will no longer archive the mailbox, but users can access their archived items. Because the stubs usually contain a URL path to the legacy archive item, there is no dependency on Exchange to view the archived message.

Some products add buttons to restore an individual message into the mailbox; these will not work because the legacy archive product won’t know where Office 365 is without further configuration. That configuration is not usually necessary, because the next stage is to migrate the archived data into Office 365.

Transfer archived data

Legacy archive solutions usually have a variety of policies for what happens with the archived data. You might configure the system to keep the stubs for a year but make archive data accessible via a web portal for much longer.

There are instances when you might want to replace the stub with the real message. There might be data that is not in the user’s mailbox as a stub but that users want on occasion.


We need tools that not only automate the data migration, but also understand these differences and can act accordingly. The legacy archive migration software should examine the data within the archive and then run batch jobs to replace stubs with the full messages. In this case, you can use the Exchange Online archive as a destination for archived data that no longer has a stub.

Email archive migration software connects via the vendor API. The software assesses the items and then exports them into a common temporary format — such as an EML file — on a staging server before connecting to Office 365 over a protocol such as Exchange Web Services. The migration software then examines the mailbox and replaces the stub with the full message.
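The flow described above might look something like the following Python sketch. The vendor API and the Exchange Web Services client are represented by hypothetical objects (archive_api, ews_client) with made-up method names; only the EML staging step uses the standard library as written.

import email.message
from pathlib import Path

STAGING = Path("staging")
STAGING.mkdir(exist_ok=True)

def export_to_eml(archived_item: dict) -> Path:
    # Write an archived item to a temporary EML file on the staging server.
    msg = email.message.EmailMessage()
    msg["Subject"] = archived_item["subject"]
    msg["Message-ID"] = archived_item["message_id"]
    msg.set_content(archived_item["body"])
    path = STAGING / f"{archived_item['message_id']}.eml"
    path.write_bytes(msg.as_bytes())
    return path

def migrate_mailbox(mailbox: str, archive_api, ews_client) -> None:
    # Replace each stub in the mailbox with the full message from the legacy archive.
    for stub in ews_client.find_stubs(mailbox):        # hypothetical EWS wrapper call
        item = archive_api.fetch(stub.message_id)      # hypothetical vendor API call
        eml_path = export_to_eml(item)
        ews_client.import_message(mailbox, eml_path)   # upload the full message
        ews_client.delete_item(mailbox, stub.item_id)  # remove the stub it replaces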

Image: An example of a third-party product’s dashboard detailing the migration progress of a legacy archive into Office 365.

Migrate journal data

With journal data, the most accepted approach is to migrate the data into the hidden recoverable items folder of each mailbox related to the journaled item. The end result is similar to using Office 365 from the day the journal began, and eDiscovery works as expected when following Microsoft guidance.

For this migration, the software scans the journal and creates a database of the journal messages. The application then maps each journal message to its mailbox. This process can be quite extensive; for example, an email sent to 1,000 people will map to 1,000 mailboxes.

After this stage, the software copies each message to the recoverable items folder of each mailbox. While this is a complicated procedure, it’s alleviated by software that automates the job.
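As a rough illustration of that fan-out, the following Python sketch maps journaled messages to mailboxes and then replays them into each mailbox’s recoverable items folder. The copy_to_recoverable_items callable is a hypothetical stand-in for whatever the migration product actually uses.

from collections import defaultdict

def build_journal_map(journal_messages):
    # Map each mailbox to the journal messages that involve it,
    # covering the sender and every recipient.
    mailbox_map = defaultdict(list)
    for msg in journal_messages:
        for mailbox in [msg["sender"]] + msg["recipients"]:
            mailbox_map[mailbox].append(msg["message_id"])
    return mailbox_map

def replay_journal(mailbox_map, copy_to_recoverable_items):
    # Copy each mapped message into the recoverable items folder of its mailbox.
    for mailbox, message_ids in mailbox_map.items():
        for message_id in message_ids:
            copy_to_recoverable_items(mailbox, message_id)

A single message journaled with 1,000 recipients produces 1,000 entries in the map, which is why this stage of the migration can take so long.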

Legacy archive migration offerings

There are many products tailored for an email archive migration. Each has its own benefits and drawbacks. I won’t recommend a specific offering, but I will mention two that can migrate more than 1 TB a day, which is a good benchmark for large-scale migrations. They also support chain of custody, which audits the transfer of all data.

TransVault has the most connectors to legacy archive products. Almost all the migration offerings support Enterprise Vault, but if you use a product that is less common, then it is likely that TransVault can move it. The TransVault product accesses source data either via an archive product’s APIs or directly to the stored data. TransVault’s service installs within Azure or on premises.

Quadrotech Archive Shuttle fits in alongside a number of other products suited to Office 365 migrations and management. Its workflow-based process automates the migration. Archive Shuttle handles fewer archive sources, but it does support Enterprise Vault. Archive Shuttle accesses source data via API and agent machines with control from either an on-premises Archive Shuttle instance or, as is more typical, the cloud version of the product.


Mobile Sharing & Companion Experiences for Microsoft Teams Meetings – Microsoft Garage

Research into Computer-Supported Collaborative Work has explored problems of disengagement in video meetings and device conflict since the 1990s, but good solutions that could work at scale have been elusive. Microsoft Research Cambridge UK had been working on these issues when the 2015 Hackathon arose as an opportunity to highlight for the rest of the company that just a few simple and dynamic device combinations might provide users with the means to solve the issues themselves.

While we had explored some research prototypes in late 2014 and early 2015, for the Hackathon we decided to use a vision video with the goal of getting the attention of the Skype product group, because we knew that the idea would have the most impact as an infrastructural feature of an existing product rather than as a new stand-alone product. We called the video “Skype Unleashed” to connote breaking free of the traditional one person per endpoint model.

Image: Turning the hackathon video into a working proof-of-concept.

When we won the Business category, our prize was meeting with the sponsor of the Business category, then-COO Kevin Turner.  We scrambled to build a proof-of-concept prototype, which at first we jokingly referred to as “Skype Skwid”, a deliberate misspelling of “squid”, because it was like a body that had lots of tentacles that could reach out to different other things. However, we realized that we needed an official project name, so we became “Project Wellington”. This was a related inside joke, because the largest squid in the world is the Colossal Squid, and the largest specimen in the world is in the Museum of New Zealand Te Papa Tongarewa… in Wellington, New Zealand.

So as Project Wellington we went to meet Kevin Turner, who also invited Gurdeep Singh Pall, then-CVP for Skype, in November 2015. Both immediately saw the relevance of the concepts and Gurdeep connected us to Brian MacDonald’s incubation project that would become Microsoft Teams.

Brian also understood right away that Companion Experiences could be an innovative market differentiator for meetings and a mobile driver for Teams. He championed the integration of our small Cambridge group with his Modern Meetings group as a loose v-team. The Modern Meetings group was exceptionally welcoming, graciously showing us the ropes of productization and taking on the formidable challenge of helping us socialize the need for changes at all levels of the product, from the media stack to the middle tier to all clients. We, in turn, learned a lot about the cadence of production, scoping, aligning with the needs of multiple roadmaps, and the multitude of issues required to turn feature ideas into releasable code.

Through 2016 and 2017 we worked on design iterations, usability testing, and middle tier and client code. We were thrilled when first glimpses of roving camera and proximity joining were shown at Build 2017, and then announced as officially rolling out at Enterprise Connect 2018.

Image: The combined research and product team.

We are very excited to see these features released. We are also excited to close the research loop by evaluating our thesis that dynamic device combinations will improve hybrid collaboration in video meetings, and doing research ‘in the wild’ at a scale unimaginable by most research projects. Microsoft is one of only a handful of institutions that can make research possible that will improve the productivity of millions of people daily. So as well as releasing product features, we are exceptionally proud of the model of collaboration itself. And, indeed, we are continuing to collaborate with Microsoft Teams even after these features are released, as we now have a tremendous relationship with a product group that understands how we work and values our help.

To come full circle, then, it was Satya Nadella’s emphasis on the Hackathon as a valuable use of company time, and The Garage’s organization of the event itself, that allowed ideas well outside a product group to be catapulted to the attention of people who could see its value and then provide a path to making it happen.

If you would like to find out more about this project, connect with Sean Rintel on LinkedIn or follow @seanrintel on Twitter.

A data replication strategy for all your disaster recovery needs

Meeting an organization’s disaster recovery challenges requires addressing problems from several angles, based on specific recovery point and recovery time objectives. Today’s tight RTO and RPO expectations mean almost no data loss and almost no downtime.

To meet those expectations, businesses must move beyond backup and consider a data replication strategy. Modern replication products offer more than just a rapid disaster recovery copy of data, though. They can help with cloud migration, using the cloud as a DR site and even solving copy data challenges.

Replication software comes in two forms. One is integrated into a storage system, and the other is bought separately. Both have their strengths and weaknesses.

An integrated data replication strategy

The integrated form of replication has a few advantages. It’s often bundled at no charge or is relatively inexpensive. Of course, nothing in life is really free: the customer pays extra for the storage hardware in order to get the “free” software. In addition, at scale, storage-based replication is relatively easy to manage. Most storage system replication works at the volume level, so one job replicates the entire volume, even if there are a thousand virtual machines on it. And finally, storage system-based replication is often backup-controlled, meaning the replication job can be integrated with and managed by backup software.

There are, however, problems with a storage system-based data replication strategy. First, it’s specific to that storage system. Consequently, since most data centers use multiple storage systems from different vendors, they must also manage multiple replication products. Second, the advantage of replicating entire volumes can be a disadvantage, because some data centers may not want to replicate every application on a volume. Third, most storage system replication inadequately supports the cloud.

Stand-alone replication

IT typically installs stand-alone replication software on each host it’s protecting or implements it into the cluster in a hypervisor environment. Flexibility is among software-based replication’s advantages. The same software can replicate from any hardware platform to any other hardware platform, letting IT mix and match source and target storage devices. The second advantage is that software-based replication can be more granular about what’s replicated and how frequently replication occurs. And the third advantage is that most software-based replication offers excellent cloud support.

While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well.

At a minimum, the cloud is used as a DR target for data, but it can also serve as an entire disaster recovery site, not just a copy. This means IT can instantiate virtual machines in the cloud, using cloud compute in addition to cloud storage. Some approaches go further with cloud support, allowing replication across multiple clouds or from the cloud back to the original data center.

The primary downside of a stand-alone data replication strategy is that it must be purchased separately, because it isn’t bundled with storage hardware. Its granularity also means dozens, if not hundreds, of jobs must be managed, although several stand-alone data replication products have added the ability to group jobs by type. Finally, there isn’t wide support from backup software vendors for these products, so any integration is a manual process requiring custom scripts.
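A toy Python model of the management tradeoff discussed above: volume-level replication produces one job per volume regardless of how many VMs live on it, while per-VM software replication produces one job per VM that can be tuned or grouped. All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class ReplicationJob:
    source: str
    target: str
    group: str = "default"  # stand-alone tools increasingly let jobs be grouped by type

def storage_array_jobs(volumes):
    # Array-based replication: one job per volume, no matter what the volume holds.
    return [ReplicationJob(source=v, target=f"dr-array/{v}") for v in volumes]

def stand_alone_jobs(vms_by_volume):
    # Software-based replication: one job per VM, so counts grow but each job is granular.
    return [ReplicationJob(source=vm, target=f"cloud/{vm}", group=volume)
            for volume, vms in vms_by_volume.items() for vm in vms]

volumes = ["vol1"]
vms = {"vol1": [f"vm{i}" for i in range(1000)]}
print(len(storage_array_jobs(volumes)))  # 1 job covers the entire volume
print(len(stand_alone_jobs(vms)))        # 1,000 jobs to manage, tune or group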

Modern replication features

Modern replication software should support the cloud and support it well. This requirement draws a line of suspicion around storage systems with built-in replication, because cloud support is generally so weak. Replication software should have the ability to replicate data to any cloud and use that cloud to keep a DR copy of that data. It should also let IT start up application instances in the cloud, potentially completely replacing an organization’s DR site. Last, the software should support multi-cloud replication to ensure both on-premises and cloud-based applications are protected.

Another feature to look for in modern replication is integration into data protection software. This capability can take two forms: The software can manage the replication process on the storage system, or the data protection software could provide replication. Several leading data protection products can manage snapshots and replication functions on other vendors’ storage systems. Doing so eliminates some of the concern around running several different storage system replication products.

Data protection software that integrates replication can either be traditional backup software with an added replication function or traditional replication software with a file history capability, potentially eliminating the need for backup software. It’s important for IT to make sure the capabilities of any combined product meet all backup and replication needs.

How to make the replication decision

The increased expectation of rapid recovery with almost no data loss is something everyone in IT will have to address. While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well. The pros and cons of both an integrated and stand-alone data replication strategy hinge on the environment in which they’re deployed.

Each IT shop must decide which type of replication best meets its current needs. At the same time, IT planners must figure out how that new data replication product will integrate with existing storage hardware and future initiatives like the cloud.

Wanted – Anyone selling an I5 processor and Motherboard?

Hi guys,

My son is having problems with bottlenecking after I bought him a GTX 1060 GPU, so we want to replace his AMD A8-7600 with something a bit more suitable.

It doesn’t have to be current so please let me know what you have to offer.

Many thanks,

Paul

Location: Haydock, Merseyside


Price drop! Cooler Master 700W PSU & Lide 210 Scanner

Cooler Master 700W PSU for sale.
This PSU is a couple of years old. It has worked fine with no problems for me. No high-pitched whine or any other issues. Has a connection for pretty much everything. Upgrading my PC for something a little more powerful, so this good girl can go to a good home. Normally I don't recommend 2nd-hand PSUs, but she's good and is just wasting space doing nothing (I already have a spare PSU for testing).

£35 including postage.

LiDE 210 Scanner (A4 size).
Love this…
