
For Sale – Synology DiskStation DS416j with 2x2TB HDD

It’s the 2016 model and it’s been on most of the time for four years.

The drives are a couple of years older than that; they were in a 2-bay NAS prior.

This is their model:
Seagate ST2000DL003 3.5-inch 2TB hard drive (SATA 6Gb/s, 64MB cache, 5900RPM).

They are not new by any means but they have been faultless for their whole lives, error free and awesome.

Although it’s been on 24/7, I only use the NAS a couple of times a month, so the drives haven’t been hammered.


How Riot Games upped its enterprise data governance game

League of Legends is among the most popular online games in the world. The game generates a lot of data that its developer, Riot Games, needs to manage and govern.

That data includes game and player data, as well as enterprise and corporate data. As a result, Riot Games must deal with a myriad of enterprise data governance challenges, including data ownership.

Managing data at the video game vendor, based in Los Angeles, is the task of Chris Kudelka, technical product manager of data governance.

“As we started to grow, we had data ownership problems,” Kudelka said. “A lot of really well-meaning people in the company were producing data so they could measure, understand and do the right thing for the player.”

As Riot Games has grown, the biggest challenge has become a lack of clarity about who owns the data in the enterprise and what it was originally intended for. Another challenge is that Riot Games has expanded its portfolio beyond League of Legends to include games such as Legends of Runeterra and Valorant, with more in the works.

So the company has begun to deal with data management across multiple game titles, as well as understanding the different data needs for each specific game.

Riot Games uses the Alation Data Catalog to ingest and categorize data from different repositories.

Using a data catalog for enterprise data governance

As a data engineer, Kudelka noted that a recurring concern is being able to identify what data an organization has and how it can all be categorized consistently. Riot Games started out with an informal data dictionary, relying on a listing of the various data types in a repository. But that approach wasn’t scaling, and the game developer needed to find a better model.
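To picture the difference, a bare data dictionary is little more than a list of fields and types, while a catalog-style entry also records who owns a dataset and why it exists. The sketch below is purely illustrative — the DatasetEntry structure and its field names are hypothetical, not Alation’s or Riot Games’ actual schema — but it shows the kind of ownership and stewardship metadata the richer record adds.

```python
from dataclasses import dataclass, field
from typing import Optional

# Informal data dictionary: just names and types, no ownership context.
informal_dictionary = {
    "player_id": "string",
    "match_duration_s": "integer",
    "region": "string",
}

# A catalog-style entry (hypothetical schema) adds the human metadata
# that makes governance possible: who owns it, who stewards it, why it exists.
@dataclass
class DatasetEntry:
    name: str
    source_system: str             # the warehouse or repository it lives in
    description: str               # what the data was originally intended for
    owner: Optional[str] = None    # accountable team or person
    steward: Optional[str] = None  # person who curates definitions and quality
    tags: list = field(default_factory=list)

entry = DatasetEntry(
    name="match_summary",
    source_system="analytics_warehouse",
    description="Per-match aggregates used for player experience metrics",
    owner="game-analytics-team",
    steward="data-governance",
    tags=["gameplay", "curated"],
)
print(entry)
```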

As we started to grow, we had data ownership problems. A lot of really well-meaning people in the company were producing data so they could … do the right thing for the player.
Chris Kudelka, Technical product manager of data governance, Riot Games

Kudelka said Riot Games began looking at different technologies, including open source approaches, to try to tackle the problem of keeping an up-to-date repository of metadata. The company needed a central repository of data to serve as a “source of truth” for data within the enterprise, providing the authoritative best version of data.

That process led to Riot Games initially deploying the Alation Data Catalog, while it continued to work with other platforms, such as data warehouse vendor Vertica and analytics visualization vendor Tableau.

Riot Games first introduced the data catalog into the organization in 2017 and started building an enterprise data governance initiative on top of that in 2019.

Setting up an enterprise data catalog

Getting the Alation Data Catalog running was a straightforward process of setting up the service accounts to ingest and profile data. For Riot Games, however, developing an enterprise data governance program involved more than just a data catalog.

“You know, it wasn’t all sunshine and rainbows just because we got a data catalog,” Kudelka quipped.

Kudelka recalled that when he and his team first offered the data catalog to Riot Games employees, it didn’t gain much traction. What was still missing was the human element and process for data curation. He noted that once Riot Games had a means to identify all the data, it still needed to identify the right people within the organization to add ownership and more definition to the data.

Defining enterprise data governance

At first, it was hard for Riot Games staff members to put their names on data assets, because they thought it would add more work to their jobs. Kudelka’s team had to alleviate that concern by defining data stewardship and how it could help employees.

“For us, data governance is about formalizing and recognizing the relationships to data that already exist,” Kudelka said.

For Riot Games employees, Kudelka said he wanted them to understand that they are already producing and using data. The goal of data governance was to recognize and formalize that work and bring best practices to light.

So, for organizations looking to enable enterprise data governance, Kudelka advised that users be at the center.

“It’s important to have a very people-focused approach and make sure that stewardship is really well defined when you introduce a data catalog, because that will help really make the engagement accelerate,” he said.


Matt Blaze warns of election security challenges amid COVID-19

Election security is the most challenging problem Matt Blaze said he has ever encountered in his career — and that was before the COVID-19 pandemic.

During his keynote address for the Black Hat USA 2020 virtual conference, the security researcher warned that the current election infrastructure faces a myriad of threats, from simple denial-of-service attacks to tampering with vote tallies or deleting records.

“Unfortunately, these attacks are not merely theoretical,” Blaze said, citing analysis from leading computer scientists and security researchers. “In fact, every current voting system that’s been examined is terrible in some way and probably exploitable.”

Blaze, McDevitt Chair in Computer Science and Law at Georgetown University, has studied the security of modern voting systems and election infrastructure for more than 20 years. He explained how integrity and accuracy issues have plagued voting machines for decades, citing the punch-card ballots in Florida from the 2000 presidential election.

But the move to more electronic and software-based voting machines hasn’t resolved those issues, he said. In fact, in some cases, it’s worsened them by introducing paperless voting technology or complex blockchain-based software that doesn’t ultimately improve security. In addition, Blaze said election security must account for more than just the software code within voting machines, such as county election management software on the back end.

I don’t think I’ve ever encountered a problem that’s harder than the security and integrity of civil elections.
Matt Blaze, Professor, Georgetown University

“I don’t think I’ve ever encountered a problem that’s harder than the security and integrity of civil elections,” he said. “It’s fundamentally orders of magnitude more difficult and more complex than almost anything else you can imagine.”

It wasn’t all bad news for election security, however. Blaze noted two important developments, starting with a term called software independence coined by Ron Rivest, co-inventor of the RSA algorithm and co-founder of RSA Security. In 2006, Rivest co-authored a research paper that outlined how voting systems could be designed in a way that would prevent an undetectable change or flaw in the software code from altering the election results.

More recently, Philip Stark, professor of statistics and associate dean at the University of California, Berkeley, developed a model for what’s called “risk-limiting audits,” which samples a subset of election results to ensure the results are accurate and haven’t been subjected to errors or tampering.
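To make the audit idea concrete, here is a simplified sketch of a BRAVO-style ballot-polling audit for a two-candidate contest, one well-known risk-limiting audit method from this line of research: ballots are sampled at random and a likelihood ratio accumulates evidence that the reported winner really won, stopping once the risk limit is satisfied. The vote totals below are invented, and real audits handle many complications this omits (invalid ballots, multiple contests, stratified samples).

```python
import random

def bravo_audit(reported_winner_share, ballots, risk_limit=0.05, max_draws=5000):
    """Simplified BRAVO-style ballot-polling audit for a two-candidate contest.

    reported_winner_share: reported vote share of the winner (must be > 0.5)
    ballots: list of True (ballot for reported winner) / False (for the loser)
    Returns True if the sample confirms the outcome at the given risk limit.
    """
    s_w = reported_winner_share
    t = 1.0  # likelihood ratio: reported share vs. the null hypothesis of a tie
    sample = random.sample(ballots, min(max_draws, len(ballots)))
    for draws, for_winner in enumerate(sample, 1):
        t *= (s_w / 0.5) if for_winner else ((1 - s_w) / 0.5)
        if t >= 1 / risk_limit:
            print(f"Outcome confirmed after {draws} ballots")
            return True
    print("Risk limit not met; escalate (e.g. toward a full hand count)")
    return False

# Made-up example: 60,000 ballots, reported winner share of 55%.
population = [True] * 33000 + [False] * 27000
random.shuffle(population)
bravo_audit(reported_winner_share=0.55, ballots=population)
```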

Still, Blaze noted that threat actors targeting U.S. elections may not even be trying to swing vote tallies one way or another. “Foreign-state adversaries are a little different from traditional attackers in an election system because they might not want to choose a winner,” he said. “They may be satisfied with simply disrupting the overall process and casting doubt on the legitimacy of the outcome and making it difficult to vote or to know who won. And that is both an easier goal and one for which many different kinds of attacks [are possible] than the kinds of attacks where you want to pick the winner.”

Matt Olney, director of threat intelligence at Cisco Talos, wrote a research paper on election security, published last month, that in part analyzed how threat actors broke into voter registration databases in many states prior to the 2016 presidential election and caused disruption. Olney told SearchSecurity he expects those kinds of attacks to continue as November approaches.

“We have certainly seen, in cooperating with our partners in the election space, the same sort of attacks attempted we had seen before. What is hard to do sometimes is to say, ‘Is this attack state-sponsored, or is it part of a broader campaign?'” he said. “I would anticipate that a lot of the same sort of activity would happen.”

During his Black Hat USA 2020 keynote, Matt Blaze said the scale of U.S. election infrastructure presents enormous challenges.

COVID-19 and mail-in ballots

Blaze said there are new challenges for election infrastructure this year beyond the hacking threats and security issues of the past.

“This is a very different talk than I would’ve given four or five months ago,” he said, adding that the “wrinkle” of COVID-19 has further complicated an already challenging situation for U.S. elections.

The logistics of mass mail-in voting are daunting, he said, because it will entail not only printing and mailing out more ballots, but also checking signatures and finding places to properly store and secure the returned ballots. Those are “labor-intensive” issues for many state and local governments that may not have the funding or resources to scale up their absentee ballot systems.

Election officials, therefore, will have to “prepare for a very wide range of scenarios that may not come to fruition,” which could mean printing large numbers of mail-in ballots that ultimately won’t be used or preparing for large volumes of in-person voting that may not happen.

Despite those challenges, Blaze said there was reason for some optimism. Many of these logistical problems are familiar to computer scientists and researchers, he said, who can provide expertise and assistance. Blaze ended his keynote with a call to action for the infosec community to call their local election officials and ask how to help, whether as a poll worker, ballot signature judge or IT support.

“The optimistic note is: We can do this,” he said, “but we need to engage now.”


Data theft in ransomware attacks may change disclosure game

The data theft and shaming tactic initiated by several ransomware groups, most notably Maze, has blurred the line between ransomware attacks and data breaches, forcing some enterprises into disclosing incidents when they would not normally go public.

Security researchers, analysts and IT risk assessors agree that companies most likely would not disclose a traditional ransomware attack unless legally required to do so. Jared Phipps, vice president of worldwide sales engineering for SentinelOne, said public disclosure of traditional ransomware attacks is rare. “I would say for every one ransomware incident that’s disclosed, there’s probably 100 that are not,” he said. “A vast majority of ransomware attacks are undeclared because there is no data shaming involved.”

But as attackers turn to stealing data and threatening public release on top of the ransomware attack, enterprises are left with fewer choices. While data shaming can lead to embarrassment for the victims, it’s the data theft that ultimately compels them to go public.

Public disclosure is typically required when certain types of data are accessed or stolen, such as personally identifiable information (PII) and payment card industry (PCI) data.

Most U.S. states and many international regions have some form of breach disclosure requirement when personal and sensitive information of citizens has been accessed or revealed inappropriately, Rapid7 chief data scientist Bob Rudis said.

“Once attackers moved from encrypt and ransom to overtly steal, encrypt and threaten public disclosure (I say ‘overtly’ since it is likely many attackers who commit ‘just’ ransomware attacks also stole data) any organization who did not disclose the breach, in accordance with the regulations in the jurisdictions they operate in, would be liable to incur fines and other penalties so it is highly unlikely they would have tried to keep the theft and ransom breaches involving citizen PII private,” Rudis said via email. “However, in the theft and ransom cases where company secrets and other data not involving citizen PII were stolen, many organizations have chosen to walk the fine line of not revealing the breach and just paying the ransom to avoid embarrassment.”

A typical post on the Maze ransomware ‘news’ site will list the victim organization, URLs, network names and the amount of data stolen.

According to Emsisoft threat analyst Brett Callow, companies used to be able to choose whether to disclose an incident, as well as the timing of the disclosure.

“Ransomware groups’ name-and-shame tactics have now taken that decision away from them. Unless companies pay to avoid being listed on a leak site, incidents invariably become public knowledge very quickly. In fact, groups likely use this to their advantage as it puts additional pressure on companies to settle and settle quickly,” Callow said.

Over the course of 2020, Emsisoft has seen a sharp increase in ransomware-data theft combinations. In March, the vendor published a blog post arguing that ransomware attacks should be treated the same as data breaches and victims should publicly disclose incidents immediately.

Unless companies pay to avoid being listed on a leak site, incidents invariably become public knowledge very quickly.
Brett Callow, Threat analyst, Emsisoft

“Given that there’s no legal requirement to disclose ransomware incidents (unlike data breaches, which must be disclosed) there’s little motivation for companies to come forward and admit they’ve been hit with ransomware. Many ransomware groups — including Maze, DoppelPaymer, Sodinokibi and Nemty — have been using techniques that enable them to extract a victim’s data to a remote server, where it can be processed, read and used however they deem fit,” Emsisoft wrote in the blog.

Shifts in disclosure practices?

It’s unclear if the new approach to ransomware attacks has fundamentally shifted how enterprises handle disclosure, especially since ransomware gangs like Maze often force victims’ hands by publicly disclosing attacks for them. Rudis said a review of recent theft, ransom and shaming attacks suggests disclosure would have been required for a large percentage of victims. “The ‘shaming’ component would ultimately not have been a forcing factor and very likely only sped up the eventual disclosure timetable.”

The recent trend of threat actors targeting larger companies that are required to make public disclosures may have skewed the perception of breach disclosure, according to Bill Siegel, CEO of Coveware.

“Public companies typically have to make some SEC or other public regulatory disclosure. Public companies were not really being targeted that much 18 months ago, so it’s not 100% clear if the disclosures are because of the data exfiltration and name-shame boards, or just because larger companies with an actual duty to disclose publicly are being impacted,” Siegel said.

While publicly traded companies are required to report incidents, the more telling cases involve enterprises that aren’t subject to such regulations. Several recent attacks committed by Maze ransomware affiliates, for example, have targeted smaller, private companies. Alex Burkardt, vice president of field engineering at data security vendor Vera, said such attacks put companies that wouldn’t otherwise disclose a ransomware attack in an awkward position.

“For publicly traded companies it’s business as usual; they’re going to report it anyway, but it gets more attention because ransomware is in the news,” Burkardt said. “The ones who have brand damage at stake are more resistant to release negative information about the company proactively.”

However, ending up on a ransomware gang’s “news” site doesn’t necessarily mean an organization will admit to an attack. For example, 3D imaging company Faro Technologies was listed on REvil’s leak site in May. Threat actors claimed to have stolen several terabytes of Faro’s corporate data and threatened to publish it online, but the company was subsequently delisted from REvil’s site in early June after the cybercriminals announced they had a “buyer” for the data.

It’s unclear who the buyer was. A Faro spokesperson told SearchSecurity last month that the company was aware of the REvil post and “we are continuing to review it.” But since that time, Faro has made no public statements confirming or denying that a ransomware attack or breach took place.

“The typical reasons for a company being delisted are either that the ransom was paid or that the company requested delisting as a condition of entering into negotiations,” Callow said.

A lack of public disclosure could land victims in hot water. For example, a class action lawsuit was recently filed against New York accounting firm BST & Co. CPAs LLC on behalf of patients from Community Care Physicians whose data was stolen from the firm and published to Maze’s leak site. While BST first learned of the Maze attack on Dec. 7, 2019, the company didn’t alert affected parties until Feb. 14, 2020; the lawsuit claims the accounting firm didn’t provide patients with a prompt notification of the incident.

Nick DeLena, principal at DGC, a Boston-based accounting and consulting firm, said disclosure practices may depend on a variety of factors beyond the type of ransomware attack.

“If you’re in certain regulated industries, the reporting requirements are very stringent, but we know most companies in the U.S. are not in those very regulated industries,” DeLena said. “There’s no explicit requirement, and then you’re left to what states are regulating for disclosures. If you’re hit and not in a regulated industry and in a relaxed state, maybe they aren’t going to report it because it would be bad for their reputation.”

Bypassing backups

With the onslaught of ransomware attacks over the last decade, enterprises have turned to backup products and services to protect their data. But another challenge occurs when sensitive data is compromised and exfiltrated.

“The challenge is that backup solutions and other proactive measures that help mitigate don’t change the fact that if an attack compromises 10 medical records, there’s still a HIPAA violation. If they release it, even if they have backups, that data is not supposed to be public, so the magnitude of the damage is the same,” Burkardt said.

Stealing the data, no matter how efficient backups are, provides criminals with additional leverage and monetization options. “Should the company not pay the ransom, the data can be sold. In fact, it may be sold even if the company does pay,” Callow said.

While some attribute the data theft and shaming tactics to better backup practices, Siegel believes that profit maximization was ultimately what motivated ransomware gangs to evolve.

“The increase in conversion rates is not because backups are better — it’s because that extra one company did not want to be on a name-shame site, so they paid even though they did not need decryptors from the threat actor,” Siegel said. “In the first half of 2020, more companies that would not have considered paying considered it because of the risk of brand damage if they are named-shamed.”

But Siegel believes the trend may have peaked. “Over the past few months, more and more companies that don’t need decryptors are digging in their heels and realizing that the PR hit passes very quickly.”

DeLena has also observed a shift. “Before it was ‘We will never disclose because it will make us look bad,’ but now everyone is getting hit,” he said.

Threat researchers say it’s unclear if incorporating data theft and exposure has been a net positive for ransomware gangs because it takes more time, effort and resources to achieve lateral movement, locate sensitive data and exfiltrate it without being detected. Phipps said the data shaming and exfiltration may even add more complications because now an incident is treated as a breach, which requires a more formal investigation and response and could delay payment for threat actors.

“I think the reason actors are doing it is because they are hoping for a higher payout, but they aren’t getting the results they’re hoping for,” he said. “It’s probably being counterproductive for what a threat actor is doing, but it is bringing awareness to the larger number of breaches that are happening.”

But Phipps said ransomware attacks are still rising dramatically and more threat actors are embracing the tactic of data theft and shaming.

“It’s very profitable. There’s billions and billions of dollars. There’s going to be a lot more investment and ingenuity,” he said. “Maze is one type, but you’re going to see more and more people trying to make money off of this.”


Best of VMworld 2020 Awards: Rules and criteria

TechTarget’s Best of VMworld Awards recognize the most outstanding products at VMware’s annual user conference.

You can view last year’s Best of VMworld U.S. winners here.

TechTarget is now accepting nominations for the Best of VMworld 2020 Awards. The nomination window will remain open until 5 p.m. PST on Wednesday, July 29, 2020. The winners will be virtually announced by SearchServerVirtualization. Before nominating a product, please read the official rules and awards criteria.

A team of expert judges — consisting of editors, independent analysts, consultants and users — will evaluate the nominated products and select winners in the following categories:

  • Virtualization and Cloud Infrastructure
  • DevOps and Automation
  • Networking
  • Resilience and Recovery
  • Security
  • Digital Workspace

See the full category descriptions and judging criteria below.

A Best of Show winner will also be selected from the individual category winners; nomination forms cannot be submitted for this category. If you submit nominations for multiple products, complete one form for each product. The same product may not be entered in multiple categories.

Only products that have shipped and are available between July 25, 2019, and July 29, 2020, will be considered for this year’s awards. The product must be generally available before the submission period closes. Products that are generally available after July 29, 2020, will be eligible for next year’s awards. All product nomination forms must include a link to a public announcement or press release containing the official product general availability date.

Nominations must also include the name and contact information for at least one customer reference. This customer must have access to the exact version of the generally available product for which the nomination is submitted (i.e., if the nomination is for version 1.2 of the product, the customer must have access to 1.2). Customer references may be contacted by judges, but their names and contact information will not be shared or published.

In previous years, only vendors with a physical booth presence were eligible to participate in the Best of VMworld Awards. This year’s event will be digital; vendors must be a sponsor of the digital event or have arranged a contracted presence at VMworld prior to the cancellation of the physical event. Not all nominees will be contacted directly by judges or interviewed in person.

Submissions that fail to follow rules and regulations will be disqualified.

If you have questions about your eligibility or the Best of VMworld Awards nomination process, email [email protected].

Best of VMworld 2020 Awards judging criteria

Judges will evaluate products in each category based on the following areas:

Innovation. Does the product introduce new capabilities or significant improvements? Does it break new ground?

Performance. Does the product perform to a degree that it could improve overall data center operation?

Ease of integration into environment. How easily does the product integrate with other products? Can the product operate effectively in heterogeneous environments?

Ease of use and manageability. Is the product easy to install? Are the product’s functions clear and easy to learn and run? Will the product scale to accommodate growth?

Functionality. Does the product deliver as promised? Does it provide greater or more useful functionality than others in its category?

Value. Does the product represent a cost-effective solution? Can its return on investment be easily justified?

Fills a market gap. What needs does the product uniquely meet? What problems does it solve?

Best of VMworld 2020 Awards categories

Please review the category descriptions carefully before nominating products. Remember, a product can only be entered in one category. If you believe a product is eligible for more than one category, use your best judgment in choosing a category, drawing from the examples included in the category descriptions and customer expectations. Judges have the authority to reassign products entered into the wrong category.

Virtualization and Cloud Infrastructure. Eligible entrants include hardware products designed to enable organizations to build virtual infrastructures, including compute and storage hardware. Examples consist of storage arrays, private and hybrid cloud infrastructures, and hyper-converged infrastructure appliances, as well as software products designed to manage or virtualize hardware, such as software-defined storage.

DevOps and Automation. Eligible entrants include products that help operations teams deploy and support applications in VMs and containers on premises and in the cloud, as well as products that monitor, track and manage on-premises or cloud-based workloads, or that enable workload migration across cloud platforms. Examples include products that monitor performance, troubleshoot workload availability, and automate workload deployment, scaling or configuration.

Networking. Eligible entrants are hardware and software technologies that enhance networking in virtual or cloud infrastructures and/or enable and optimize virtualized networks. Examples include network switches, routers, and software or services that enhance networking in a virtual or cloud environment, such as software-defined networking and software-defined WANs.

Resilience and Recovery. Eligible entrants include software products or cloud services — such as disaster recovery as a service — that are designed to back up, restore and replicate data and/or achieve fault tolerance in a virtual server or cloud infrastructure.

Security. Eligible entrants monitor and protect hypervisors, cloud workloads, guest operating systems, and virtual networks and enforce security best practices.

Digital Workspace. Eligible entrants include products that secure mobile devices, applications and content while enabling mobile productivity, or software and hardware platforms that deliver or enhance the delivery of desktops and applications to various endpoints.

To submit a product for consideration, please fill out the nomination form.


Microsoft integration adds more AI to Tibco analytics suite

Tibco continues to add to the augmented intelligence capabilities of its analytics platform, most recently revealing that Tibco Spotfire and Tibco Data Science now support Microsoft Azure Cognitive Services.

Azure Cognitive Services is a Microsoft service that lets application developers embed AI and machine learning capabilities in their software. Spotfire is Tibco’s chief business intelligence tool for data visualization, while Data Science is a BI tool focused less on visualization and more on hardcore data analysis; the two can be used together or independently of one another.
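For readers unfamiliar with the service, this is roughly what “embedding” a Cognitive Services capability looks like from code. The sketch below calls the Text Analytics sentiment endpoint over REST; the resource name, key and API version are placeholders, and the exact URL path can vary by service version, so treat it as an illustration of the pattern rather than a depiction of Tibco’s integration.

```python
import requests

# Placeholders -- substitute your own Azure Cognitive Services resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-subscription-key>"

def analyze_sentiment(texts):
    """Send documents to the Text Analytics sentiment endpoint (v3.0 assumed)."""
    url = f"{ENDPOINT}/text/analytics/v3.0/sentiment"
    headers = {"Ocp-Apim-Subscription-Key": API_KEY,
               "Content-Type": "application/json"}
    body = {"documents": [{"id": str(i), "language": "en", "text": t}
                          for i, t in enumerate(texts, 1)]}
    response = requests.post(url, headers=headers, json=body)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_sentiment(["The new dashboard is fantastic.",
                                "Load times have been frustrating lately."])
    for doc in result.get("documents", []):
        print(doc["id"], doc["sentiment"])
```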

Tibco, founded in 1997 and based in Palo Alto, Calif., is adding support for Azure Cognitive Services following other AI investments in its analytics platform. In January 2020, the vendor added to the natural language generation capabilities of Spotfire via an integration with Arria NLG Studio for BI, and in the fall of 2019 it unveiled new products and added to existing ones with the credo of AI everywhere.

Meanwhile, the vendor’s addition of native support for Azure Cognitive Services, revealed June 2, comes after Tibco expanded the multi-cloud capabilities of its analytics platform through an integration with Microsoft Azure late in 2019; it already had an integration with Amazon Web Services and supports Google Cloud, among other cloud service providers.

“We don’t believe that AI is a marketing tool or a marketing term,” said Matt Quinn, Tibco’s COO. “We see that AI can actually be used as a foundational element in people’s systems, and so working with Microsoft, doing this integration, is all about us being able to use our own technology, inside of our own products, as a foundational layer.”

An organization’s data is displayed on a sample Tibco dashboard.

AI, meanwhile, is an area where Tibco should be focused, according to Rick Sherman, founder and managing partner of Athena IT Solutions.

“With Spotfire, AI is definitely where they should be,” he said. “AI, machine learning and data science is where they’re great. They’re geared to sophisticated users, and if you’re doing a deeper dive, doing serious visualizations, Tibco is a way you want to go.”

Beyond simply adding a new integration, Tibco’s move to enable application developers to embed AI and machine learning capabilities by using Azure Cognitive Services continues the vendor’s process of expanding its analytics platform.

While some longtime BI vendors have struggled to maintain an innovative platform, Tibco, after losing some momentum in the early 2000s, has been able to remain among the top vendors with a suite of BI tools that are considered innovative.

We see that AI can actually be used as a foundational element in people’s systems, and so working with Microsoft, doing this integration, is all about us being able to use our own technology, inside of our own products, as a foundational layer.
Matt Quinn, COO, Tibco

Tibco’s platform is entirely cloud-based, which allows the vendor to deliver new and upgraded features without having to roll out a major update each time, and its partnership strategy gives it the ability to embed products such as Azure Cognitive Services and Arria NLG Studio for BI without having to develop them in-house.

“Tibco has really evolved into a much more partner-centric company,” Quinn said. “We realize we are part of a broader ecosystem of tools and technologies, and so these partnerships that we’ve created are pretty special and pretty important, and we’ve been really happy with the bidirectional [nature] of those, especially the relationship with Microsoft. It’s clear that they have evolved as we have evolved.”

As far as motivation for the addition of Azure Cognitive Services to the Tibco analytics platform, Quinn said it’s simply about making data scientists more productive.

Customers, he added, were asking for the integration, while Tibco had a preexisting relationship with Microsoft that made adding Azure Cognitive Services a natural fit.

“Data scientists use all sorts of tools from all different walks of life, and because of our integration heritage we’re really good at integrating those types of things, so what we’re doing is we’re opening up the universe of all the Microsoft pieces to this data science group that just wants to be more productive,” Quinn said. “It enhances the richness of the platform.”

Similarly, Sherman said that the new integration is a positive move for data scientists.

Tibco’s acquisitions in recent years, such as its 2018 purchase of Scribe Software and its 2019 purchase of SnappyData, helped advance the capabilities of Tibco’s analytics platform, and now integrations are giving it further powers.

“They’re doing some excellent things,” Sherman said. “They’re aiming at deeper analytics, digging deeper into data science and data engineering, and this move to get their analytics closer to their data science makes a heck of a lot of sense.”

In the coming months, Quinn said, Tibco plans to continue adding integrations to expand the capabilities of its analytics platform. In addition, ease of use will be a significant focus for the vendor.

Meanwhile, ModelOps, the practice of managing models throughout their lifecycle as they move into production, will be a new area of emphasis for Tibco.

“ModelOps is really the set of things you have to do to take a model, whether it’s AI or just plain data science, and move it into the real world, and then how do you change it, how do you evolve it, who needs to sign off on it,” Quinn said. “For Tibco it’s great because it really brings together the data science piece with the hardcore engineering stuff that people have known us for.”


Wanted – LGA 1150 Motherboard

I have an MSI mini-ITX board (MSI H97 AC) which I am in the process of removing from my small PC.
I have the box and most of the gubbins that came with it.

One of the tabs to remove the RAM is broken, but it does not stop the RAM being removed or re-seated.
One of the Wi-Fi antennas may be missing – I’ll have to check the other box, as I have two of these PCs.
How does £35 inc delivery sound?

Wanted – Microsoft Surface Pro 1,2,3,4…

Hi

I have a Surface 3, barely used; it’s been sat in a cupboard for most of its life, with occasional usage when I went to the States.

Comes with the original box, black keyboard and the clicky pen; no damage to the unit at all.

Not sure what it’s worth; I paid a gazillion bucks for it at the time, but I suspect it’s worth a fraction of that now.

Not sure if there’s a “how many hours has it been used” function, but it would be interesting to know; my gut says it’s virtually new in real terms.

Thanks


The Acid Test for Your Backup Strategy

For the first several years that I supported server environments, I spent most of my time working with backup systems. I noticed that almost everyone did their due diligence in performing backups, and most people took adequate responsibility for verifying that their scheduled backups ran without error. However, almost no one ever checked that they could actually restore from a backup — until disaster struck. I gathered a lot of sorrowful stories during those years. I want to use those experiences to help you avert a similar tragedy.

Successful Backups Do Not Guarantee Successful Restores

Fortunately, a lot of the problems that I dealt with in those days have almost disappeared thanks to technological advancements. But that only means you have better odds of a successful restore, not a zero chance of failure. Restore failures typically mean that something unexpected happened to your backup media. Things that I’ve encountered:

  • Staff inadvertently overwrote a full backup copy with an incremental or differential backup
  • No one retained the necessary decryption information
  • Media was lost or damaged
  • Media degraded to uselessness
  • Staff did not know how to perform a restore — sometimes with disastrous outcomes

I’m sure that some of you have your own horror stories.

These risks apply to all organizations. Sometimes we manage to convince ourselves that we have immunity to some or all of them, but you can’t get there without extra effort. Let’s break down some of these line items.

People Represent the Weakest Link

We would all like to believe that our staff will never make errors and that the people who need to operate the backup system have the ability to do so. However, as part of your disaster recovery planning, you must assume that you cannot predict the state or availability of any individual. If only a few people know how to use your backup application, then those people become part of your risk profile.

You have a few simple ways to address these concerns:

  • Periodically test the restore process
  • Document the restore process and keep the documentation updated
  • Give non-IT personnel knowledge of, and practice with, backup and restore operations
  • Make sure non-IT personnel know how to get help with the application

It’s reasonable to expect that you would call your backup vendor for help in the event of an emergency that prevented your best people from performing restores. However, in many organizations without a proper disaster recovery plan, no one outside of IT even knows who to call. The knowledge inside any company naturally tends to arrange itself in silos, but you must make sure to spread at least the bare minimum information.

Technology Does Fail

I remember many shock and horror reactions when a company owner learned that we could not read the data from their backup tapes. A few times, these turned into grief and loss counselling sessions as they realized that they were facing a critical — or even complete — data loss situation. Tape has its own particular risk profile, and lots of businesses have stopped using it in favour of on-premises disk-based storage or cloud-based solutions. However, all backup storage technologies present some kind of risk.

In my experience, data degradation occurred most frequently. You might see this called other things, my favourite being “bit rot”. Whatever you call it, it all means the same thing: the data currently on the media is not the same data that you recorded. That can happen just because magnetic storage devices have susceptibilities. That means that no one made any mistakes — the media just didn’t last. For all media types, we can establish an average for failure rates. But, we have absolutely no guarantees on the shelf life for any individual unit. I have seen data pull cleanly off decade-old media; I have seen week-old backups fail miserably.
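One low-tech defence against this kind of silent degradation is to record a cryptographic checksum of each backup file when it is written and compare it again before you need to restore. The sketch below does that with SHA-256 and a simple JSON manifest; the paths are hypothetical, and it is meant as an illustration of the idea, not a replacement for your backup product’s own verification features.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream the file so large backup images don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(backup_dir, manifest_path="manifest.json"):
    """Record a checksum for every file in the backup set at backup time."""
    manifest = {str(p): sha256_of(p)
                for p in Path(backup_dir).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path="manifest.json"):
    """Re-hash each file later and report anything that no longer matches."""
    manifest = json.loads(Path(manifest_path).read_text())
    bad = [p for p, expected in manifest.items()
           if not Path(p).is_file() or sha256_of(p) != expected]
    print("All backup files verified" if not bad else f"Degraded or missing: {bad}")
    return bad

# Hypothetical usage:
# write_manifest(r"D:\Backups\weekly")   # at backup time
# verify_manifest()                      # before you rely on the backup
```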

Unexpectedly, newer technology can make things worse. In our race to cut costs, we frequently employ newer ways to save space and time. In the past, we had only compression and incremental/differential solutions. Now, we have tools that can deduplicate across several backup sets and at multiple levels. We often put a lot of reliance on the single copy of a bit.
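The “single copy of a bit” point is easy to see in a toy deduplicating store: identical chunks from different backup sets are stored once and referenced many times, so one corrupted chunk can silently break every backup that references it. The sketch below is a deliberately simplified illustration (fixed-size chunks, in-memory store), not how any particular backup product works.

```python
import hashlib

CHUNK_SIZE = 4096
chunk_store = {}   # hash -> chunk bytes (each unique chunk stored exactly once)

def backup(data: bytes):
    """Return the recipe (list of chunk hashes) for one backup set."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(h, chunk)   # deduplicate: keep only the first copy
        recipe.append(h)
    return recipe

def restore(recipe):
    return b"".join(chunk_store[h] for h in recipe)

# Two "backup sets" that share most of their content.
monday = b"A" * 20000
tuesday = b"A" * 20000 + b"B" * 4096
r1, r2 = backup(monday), backup(tuesday)
print("unique chunks stored:", len(chunk_store))   # far fewer than chunks referenced

# Corrupt the single stored copy of one shared chunk...
shared = r1[0]
chunk_store[shared] = b"\x00" * CHUNK_SIZE
# ...and both backups now restore incorrectly.
print(restore(r1) == monday, restore(r2) == tuesday)   # False False
```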

How to Test your Backup Strategy

The best way to identify problems is to break-test to find weaknesses. Test restores help you verify backup reliability and catch these problems before they matter. Simply put, you cannot know that you have a good backup unless you can perform a good restore, and you cannot know that your staff can perform a restore unless they perform a restore. For maximum effect, you need to plan tests to occur on a regular basis.

Some products, like Altaro VM Backup, have built-in tools to make tests easy. Altaro VM Backup provides a “Test & Verify Backups” wizard to help you perform on-demand tests and a “Schedule Test Drills” feature to help you automate the process.

The Test & Verify Backups wizard in Altaro VM Backup.

If your tool does not have such a feature, you can still use it to make certain that your data will be there when you need it. It should have some way to restore a separate or redirected copy. So, instead of overwriting your live data, you can create a duplicate in another place where you can safely examine and verify it.

Test Restore Scenario

In the past, we would often simply restore some data files to a shared location and use a simple comparison tool. Now that we use virtual machines for so much, we can do a great deal more. I’ll show one example of a test that I use. In my system, all of these are Hyper-V VMs. You’ll have to adjust accordingly for other technologies.

Using your tool, restore copies of:

  • A domain controller
  • A SQL server
  • A front-end server dependent on the SQL server

On the host that you restored those VMs to, create a private virtual switch. Connect each virtual machine to it. Spin up the copied domain controller, then the copied SQL server, then the copied front-end. Use the VMConnect console to verify that all of them work as expected.
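If you want to script those isolation steps on a Hyper-V host, they boil down to creating a private switch and reconnecting each restored VM’s network adapter before powering it on. The sketch below drives the standard Hyper-V PowerShell cmdlets (New-VMSwitch, Connect-VMNetworkAdapter, Start-VM) from Python via subprocess; the VM and switch names are placeholders for your own restored copies, and it assumes the script runs elevated on the restore host.

```python
import subprocess

# Placeholder names -- substitute the restored copies in your own environment.
SWITCH = "RestoreTest-Private"
RESTORED_VMS = ["DC01-restored", "SQL01-restored", "APP01-restored"]  # start order matters

def ps(command):
    """Run a PowerShell command and fail loudly if it errors."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Private switch: restored copies can talk to each other but not to production.
ps(f"if (-not (Get-VMSwitch -Name '{SWITCH}' -ErrorAction SilentlyContinue)) "
   f"{{ New-VMSwitch -Name '{SWITCH}' -SwitchType Private }}")

for vm in RESTORED_VMS:
    ps(f"Connect-VMNetworkAdapter -VMName '{vm}' -SwitchName '{SWITCH}'")
    ps(f"Start-VM -Name '{vm}'")
    print(f"Started {vm} on isolated switch {SWITCH}")
```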

Create test restore scenarios of your own! Make sure that they match a real-world scenario that your organization would rely on after a disaster.


Author: Eric Siron