Tag Archives: time

Eye transplant nonprofit turns to supply chain modeling

Time is vital to Eversight, a nonprofit that helps restore sight through donations, transplants and research. The organization, headquartered in Ann Arbor, Mich., recovers and transports organs and tissue throughout the country, using them for research and transplants.

The organization provides about 8,000 tissues per year for transplants.

Recovering tissue is extremely time-sensitive: there is only a small window in which to recover and then use the tissue. So, to operate within those constraints, Eversight turned in October 2019 to Llamasoft, a predictive analytics and supply chain management vendor, for more advanced supply chain modeling.

Time limits

“We have some time frames that are very stringent,” said Ryan Simmons, director of clinical services at Eversight.

Tissue may be recovered up to 24 hours after a donor dies, Simmons said. However, surgeons prefer to recover the tissue within 12 hours. Recovered eye tissue may then be stored for up to seven days.

“Surgeons typically want to use that tissue transplant within four or five days of a patient passing,” Simmons said.

In addition to the time limitations, it’s impossible to predict when, and how much, tissue will become available.

“We can’t just make an order,” Simmons said. “We have to predict the best that we can.” Typically, he added, only one parcel gets shipped at a time.

Temperature also is a critical factor. The tissue has to remain at a set temperature or it could be damaged.

Modeling

Previously, to help predict demand, Eversight relied on Microsoft Excel. That worked somewhat, Simmons said, but the venerable spreadsheet program couldn’t handle advanced supply chain modeling and prediction.

Many of Llamasoft’s customers used Excel in the past, said Ryan Purcell, director of global impact at Llamasoft.

“Typically, [users] will use Excel until they hit the breaking point,” he said.

Eversight hit that point last year. Simmons, who is working on a master’s degree in supply chain management at Michigan State University, came across Llamasoft in a class.

“It seemed like a very powerful program,” Simmons said. He contacted Llamasoft and found that the vendor was a good fit for Eversight. 

Using Demand Guru, a demand modeling program from Llamasoft, Eversight is working on creating better demand forecasts. With Supply Chain Guru, a supply chain modeling and management program, the organization is creating models to plan better routes and optimize for cost and speed.

Because Eversight didn’t begin working with Llamasoft in earnest until the fall of 2019, many of its models have not yet been completed. However, creating models has been fairly easy, Simmons said, and the few that are done seem to work well.

“Learning to do the modeling, that wasn’t too big of a challenge,” he said.

PowerShell tutorials capture attention of admins in 2019

As 2019 reaches its end, it’s time to look back at the tips and tutorials published this year that mattered the most to the Windows Server audience.

Microsoft has redoubled its efforts to make PowerShell the overarching management tool for workloads no matter where they reside. Interest in automation and in PowerShell tutorials that explain how to streamline everyday tasks continues to resonate with readers. In this top-five compilation of the 2019 tips that picked up the most page views, nearly all the articles focus on PowerShell, from learning advanced text manipulation techniques to plumbing the features in the newer, open source version initially dubbed PowerShell Core. But the article that claimed the top spot indicates many administrators have their eyes on a relatively new way to manage resources in their organization.

5. Windows Compatibility module expands PowerShell Core reach

The first PowerShell Core release, version 6.0, arrived in January 2018, but it was a step back in many ways for administrators who were used to the last Windows PowerShell version, 5.1. The new version, developed to also run on Linux and macOS, lost a fair amount of functionality due to the switch from the Windows-based .NET Framework to the cross-platform .NET Core. The end result was that a fair number of cmdlets administrators needed to do their jobs did not run on PowerShell Core.

With any project of this size, there will be growing pains. Administrators can continue to use Windows PowerShell, which Jeffrey Snover said will always be supported by Microsoft and should serve IT workers faithfully for many years to come. But to ease this transition, the PowerShell team released a Windows Compatibility module in late 2018 to close the functionality gap between the Windows and open source versions of PowerShell. This tip digs into the background of the module and how to use it on PowerShell Core to work with some previously incompatible cmdlets.
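For a rough sense of how the module works, here is a minimal sketch. It assumes the WindowsCompatibility module as published to the PowerShell Gallery in late 2018 and a local Windows PowerShell 5.1 installation; ServerManager stands in for any module that does not run natively on .NET Core.

# Sketch: surface Windows PowerShell-only cmdlets inside PowerShell Core.
Install-Module -Name WindowsCompatibility -Scope CurrentUser

# Proxy a Windows-only module into this session; its cmdlets run in a
# background Windows PowerShell 5.1 process and results are marshaled back.
Import-WinModule -Name ServerManager

# Or run a one-off command in the compatibility session directly.
Invoke-WinCommand -ScriptBlock { Get-WindowsFeature -Name Web-Server }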

4. How to use the PowerShell pending reboot module

Among the many perks of PowerShell is its extensibility. Administrative functions that were once out of reach — or laborious to accomplish — can magically appear once you download a module from the PowerShell Gallery. Install a module and you get several new cmdlets to make your administrative life just a bit easier.

For example, after Patch Tuesday rolls around and you’ve applied Microsoft’s updates to all your systems, the patching process generally is not complete — and the system not fully protected from the latest threats — until the machine reboots. A Microsoft field engineer developed a pending reboot module that detects whether a Windows system has a restart pending. These insights can help you see which users might require a nudge to restart their machines so your patching efforts don’t go for naught. This tip explains how to install and use the pending reboot cmdlet.
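As a minimal sketch of what that looks like in practice, assuming the PendingReboot module from the PowerShell Gallery, which exposes a Test-PendingReboot cmdlet (the server names are placeholders):

# Sketch: flag machines that still need a post-patch restart.
Install-Module -Name PendingReboot -Scope CurrentUser

# Check the local machine; returns an object with an IsRebootPending property.
Test-PendingReboot

# Query several servers and keep only those awaiting a restart.
Test-PendingReboot -ComputerName 'SRV01', 'SRV02' |
    Where-Object IsRebootPending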

3. How to configure SSL on IIS with PowerShell    

Among its many uses in the enterprise, Windows Server also functions as a web server. Microsoft shops can use the Internet Information Services role in Windows Server to serve up content and host web applications for use across the internet or in a company’s private intranet.

To keep threat actors from sniffing out traffic between your IIS web server and the clients, add HTTPS to encrypt data transmissions to and from the server. HTTPS works in tandem with Transport Layer Security (TLS), the more secure successor to Secure Sockets Layer (SSL). Most in IT still refer to the certificates that facilitate the encrypted transmissions between servers and clients as SSL certificates. This tip explains how to use PowerShell to deploy a self-signed TLS/SSL certificate to an IIS website, which can come in handy if you spin up a lot of websites in your organization.
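The general shape of the procedure, as a hedged sketch: the hostname and site name below are placeholders, and it assumes the IIS role and its WebAdministration module are installed.

# Sketch: create a self-signed certificate and bind it to an IIS site.
Import-Module WebAdministration

$cert = New-SelfSignedCertificate -DnsName 'www.contoso.local' -CertStoreLocation 'Cert:\LocalMachine\My'

# Add an HTTPS binding to the site, then attach the certificate to it.
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
$binding = Get-WebBinding -Name 'Default Web Site' -Protocol https
$binding.AddSslCertificate($cert.Thumbprint, 'My')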

2. Hone your PowerShell text manipulation skills

It might seem like a basic ability, but learning how to read text from and write text to files using PowerShell opens up more advanced avenues of automation. There are several cmdlets tailored for use with text files to perform several tasks, including importing text into a file that can then be manipulated for other uses. It’s helpful to know that while text might look the same, Windows PowerShell and PowerShell Core have different ways of dealing with encoding. This tip covers some of the finer details of working with text in PowerShell to broaden your automation horizons.
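For instance, here is a minimal sketch of a common pattern: read a text file, filter it, and write the result back out with an explicit encoding. The file names are placeholders, and pinning -Encoding matters because Windows PowerShell's default output encodings differ from PowerShell Core's BOM-less UTF-8.

# Sketch: round-trip a text file with the encoding made explicit.
$lines = Get-Content -Path .\servers.txt

# Keep only lines mentioning 'prod', uppercase them, and write them out.
$lines |
    Where-Object { $_ -match 'prod' } |
    ForEach-Object { $_.ToUpper() } |
    Set-Content -Path .\prod-servers.txt -Encoding UTF8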

1. Azure AD Premium P1 vs. P2: Which is right for you?

Most organizations require some way to manage their resources — from user accounts to physical devices, such as printers — and Active Directory has been the tool for the job for many years. Based on Windows Server, the venerable on-premises identity and access management platform handles authentication and authorization to ensure users get the right permissions to access what they need.

Microsoft unveiled its cloud-based successor to Active Directory, calling it Azure Active Directory, in 2013. While similar, the two products are not a straight swap. If you use Microsoft’s cloud, either for running VMs in Azure or using the collaboration apps in Office 365, then you’ll use Azure Active Directory to manage resources on those platforms. This tip digs into the differences between the two higher-end editions of Azure Active Directory to help you decide which one might work best for your organization.

For Sale – AOC AGON AG251FZ 240Hz 24.5″ LED FHD (1920×1080) Freesync 1ms Gaming monitor

Don’t have time for competitive gaming anymore. Purchased brand new from Amazon 5 months ago, still have original packaging and power adaptor. The DisplayPort cable which came with it was faulty so this includes the one I bought.

Club3D CAC-2067 DisplayPort to DisplayPort 1.4/HBR3 Cable DP 1.4 8K 60Hz 1m/3.28ft, Black

I’ll be moving from Durham to Staffordshire soon so collection is available from Durham up to the 13th of December, after that date from Staffordshire.

How to repair Windows Server using Windows SFC and DISM

Over time, system files in a Windows Server installation might require a fix. You can often repair the operating system without taking the server down by using Windows SFC or the more robust and powerful Deployment Image Servicing and Management commands.

Windows System File Checker (SFC) and Deployment Image Servicing and Management (DISM) are administrative utilities that can alter system files, so they must be run in an administrator command prompt window.

Start with Windows SFC

The Windows SFC utility scans and verifies version information, file signatures and checksums for all protected system files on Windows desktop and server systems. If the command discovers missing protected files or alterations to existing ones, Windows SFC will attempt to replace the altered files with a pristine version from the %systemroot%\System32\dllcache folder.

The system logs all activities of the Windows SFC command to the %Windir%\Logs\CBS\CBS.log file. If the tool reports any nonrepairable errors, then you’ll want to investigate further. Search the log for the word corrupt to find most problems.
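For example, one quick way to pull the System File Checker entries (tagged [SR]) that mention corruption out of that log from PowerShell:

# Sketch: show the most recent SFC log entries that mention corruption.
Select-String -Path "$env:windir\Logs\CBS\CBS.log" -Pattern '\[SR\].*corrupt' |
    Select-Object -Last 20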

Windows SFC command syntax

Open a command prompt with administrator rights and run the following command to start the file checking process:

C:\Windows\System32>sfc /scannow

The /scannow parameter instructs the command to run immediately. It can take some time to complete — up to 15 minutes on servers with large data drives is not unusual — and usually consumes 60%-80% of a single CPU for the duration of its execution. On servers with more than four cores, it will have a slight impact on performance.

The Windows SFC /scannow command examines protected system files for errors.

There are times Windows SFC cannot replace altered files. This does not always indicate trouble. For example, recent Windows builds have included graphics driver data that was reported as corrupt, but the problem is with Windows file data, not the files themselves, so no repairs are needed.

If Windows SFC can’t fix it, try DISM

The DISM command is more powerful and capable than Windows SFC. It also checks a different file repository — the %windir%\WinSXS folder, aka the “component store” — and is able to obtain replacement files from a variety of potential sources. Better yet, the command offers a quick way to check an image before attempting to diagnose or repair problems with that image.

Run DISM with the following parameters:

C:\Windows\System32>dism /Online /Cleanup-Image /CheckHealth

Even on a server with a huge system volume, this command usually completes in less than 30 seconds and does not tax system resources. Unless it finds some kind of issue, the command reports back “No component store corruption detected.” If the command finds a problem, this version of DISM reports only that corruption was detected, but no supporting details.

Corruption detected? Try ScanHealth next

If DISM finds a problem, then run the following command:

C:\Windows\System32>dism /Online /Cleanup-Image /ScanHealth

This more elaborate version of the DISM image check will report on component store corruption and indicate if repairs can be made.

If corruption is found and it can be repaired, it’s time to fire up the /RestoreHealth directive, which can also work from the /online image, or from a different targeted /source.

Run the following commands using the /RestoreHealth parameter to replace corrupt component store entries:

C:\Windows\System32>dism /Online /Cleanup-Image /RestoreHealth

C:\Windows\System32>dism /source: /Cleanup-Image /RestoreHealth

You can drive file replacement from the running online image easily with the same syntax as the preceding commands. But it often happens that local copies aren’t available or are no more correct than the contents of the local component store itself. In that case, use the /source directive to point to a Windows image file — a .wim file or an .esd file — or a known, good, working WinSXS folder from an identically configured machine — or a known good backup of the same machine to try alternative replacements.

By default, the DISM command will also try downloading components from the Microsoft download pages; this can be turned off with the /LimitAccess parameter. For details on the /source directive syntax, the TechNet article “Repair a Windows Image” is invaluable.
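For example, here is a sketch of that combination, pointing /source at the install.wim on mounted installation media. The drive letter and image index are placeholders for your own media.

# Sketch: repair the running image from a known-good install.wim rather than
# Windows Update; /LimitAccess blocks the fallback to Microsoft's servers.
dism /Online /Cleanup-Image /RestoreHealth /Source:WIM:D:\sources\install.wim:1 /LimitAccess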

DISM is a very capable tool well beyond this basic image repair maneuver. I’ve compared it to a Swiss army knife for maintaining Windows images. Windows system admins will find DISM to be complex and sometimes challenging but well worth exploring.

Microsoft’s history and future strategy beyond 2020

In 1975, a 20-year-old Bill Gates stated a bold ambition that many at the time thought naïve: his fledgling startup would help put a computer on every desk and in every home, running Microsoft software.

Gates and his co-founder, Paul Allen, never quite realized that ambition. They did, however, grow “Micro-Soft” from a tiny, underfunded company selling a BASIC interpreter for the Altair 8800 into a $22.9 billion company selling operating systems, applications and tools, with a 90% share of the microcomputer software market by the year 2000 — 25 years into Microsoft’s history. Close enough.

That year, Microsoft was among the top five most valuable companies with a market cap of $258 billion. Fast forward to fiscal year 2020: The company’s market cap is slightly over $1 trillion and it now tops the list of the world’s most valuable companies.

One could surmise that Microsoft’s trip to the top of the heap was predictable given its position in the fast-growing computer industry over the past two decades. But the journey between the two mountain peaks saw dramatic changes to the company’s senior management and bold technology bets that strayed far from the products that made it rich and famous. It also helped that many of its archrivals made strategic missteps that opened doors of opportunity.

Microsoft’s history marked by leadership changes

There are the more obvious reasons Microsoft remained near or at the top of the most influential companies in high tech: the arrival of Satya Nadella, who took over from Steve Ballmer, and the subsequent refocusing from proprietary products to open source and a cloud-first strategy.

But as important to the company’s success as the right people rising to the top and the right set of priorities is an often-overlooked factor: Microsoft’s slow and steady progress in convincing its mammoth user base to buy its core products through long-term, cloud-based subscriptions.

“If you want to point to one thing that’s kept the company’s revenues growing over the past 20 years it is subscription selling,” said Rob Helm, managing vice president of research at Directions On Microsoft, an independent analysis firm in Kirkland, Wash. “It’s not very exciting but the company has used all kinds of clever tactics to shift users to this model. And they aren’t done yet.”

The move to subscription selling, an initiative that originated before Steve Ballmer took over the day-to-day operations of the company, started with its Licensing 6.0 and Software Assurance program in 2002. The program got off to a slow start, mainly because users were unaccustomed to buying their products through long-term licensing contracts, said Al Gillen, group vice president in IDC’s software development and open source practice.

“That program wasn’t used much at all. Hardly anyone back then used subscriptions to buy software,” Gillen said.

But with the arrival of the cloud, Office 365 and Microsoft 365 in particular, longer-term cloud licensing skyrocketed.

“[Microsoft] became very enthusiastic about the cloud because for them, it was yet another way of moving its customers to long-term subscriptions,” Helm said.

The biggest obstacle to moving many of its customers to cloud-based subscriptions was Microsoft’s own success. For decades of Microsoft’s history, the company was wedded to its business model of selling a stack of products that included the operating system, applications and utilities, and signing up major hardware suppliers to sell the stack bundled with their systems. The company grew rich with this model, but by the mid-2000s, trouble was brewing.

Under Ballmer’s reign, the software stack model became outdated with the encroaching age of the cloud and open source software led by a new raft of competitors like AWS and Google. Nadella saw this and knew it was time to accelerate the company’s cloud-based subscription business.

So, although Ballmer famously monkey-danced his way across a stage shouting “developers, developers, developers” at a Microsoft 25th anniversary event in 2000, it was Nadella who knew what Windows and non-Windows developers wanted in the age of the cloud — and how to go about enlisting their cooperation.

“Satya decisively moved away from the full stack model and went to an IaaS model more resembling that of AWS,” Helm said. “This pivot allowed them to deliver cloud services much faster than it could have otherwise. But they could not have done this without cutting Windows adrift from the rest of the stack,” he said.

Open source fork in the road

Microsoft’s pivot to supporting Linux and open source in 2018 happened “just in time,” according to Gillen. While the company had open source projects underway as far back as the 2004 to 2005 timeframe, including a project to move SQL Server to Linux, it took a long while before open source was accepted across Microsoft’s development groups.

“A developer [inside Microsoft] once told me he would show up for meetings and the Windows guys would look at him and say, ‘We don’t want you at this meeting, you’re the open source guy,’” Gillen said. “Frankly, looking back, they were lucky to make the transition at all.”

What made the transition to long-term cloud subscriptions easier for Microsoft and its users was the combination of cloud and Linux vendors who already had such licensing in play, according to Gillen.

“It became more palatable to users because they were getting more exposure to it from a variety of sources,” he said.

Microsoft has so fully embraced Linux, not just by delivering Linux-compatible products and cloud-based services but also through acquisitions such as GitHub, the world’s largest open source code repository, that it is becoming hard to remember a time in Microsoft’s history when the company was despised by the open source community.

“If you are a 28-year-old programmer today, you aren’t aware of a time Microsoft was hated by the Linux world,” Gillen said. “But I’d argue now that Microsoft is as invested in open source software as any large cloud vendor.”

Microsoft cloud subscriptions start with desktops

One advantage Microsoft had over its more traditional competitors in moving to a SaaS-based subscription model was that it started by moving desktop applications to the cloud, rather than the server-based applications that companies like IBM, Oracle and SAP had to convert.

“Microsoft’s desktop software is an easier conversion to the cloud and to move into a subscription model,” said Geoff Woollacott, senior strategy consultant and principal analyst at Technology Business Research Inc. “The degree of difficulty with desktops compared to an on-prem, server-based database that has to be changed to a microservices, subscription-based model is much harder.”

While Microsoft has downplayed the strategic importance of Windows in favor of Azure and cloud-based open source offerings, analysts believe the venerable operating system’s life will extend well into the future. IDC’s Gillen estimated that the Windows desktop and server franchise is likely still in excess of $20 billion a year, which is more than 15% of the company’s overall revenues of $125.8 billion for fiscal 2019 ended June 30.

“Large installed bases have longevity that goes far beyond what most people want them to have. Just look at mainframes as an example,” Gillen said. “I’d argue that 60% to 70% of Windows apps are going to be around 10 to 15 years from now, which means Windows doesn’t go away.”

And those Windows apps are all being infused with AI capabilities, in the Azure cloud, on servers and desktops. Microsoft is also making it easier for developers to create AI-infused applications, with tools like AI Builder for PowerApps.

Microsoft’s AI, quantum computing future in focus

While Gillen and other analysts are hesitant to predict the future of Windows and its applications 20 years from now, most believe the company will spend a generous amount of time focused on quantum computing.

At its recent Ignite conference, Nadella and other Microsoft executives made it clear they’ll have a deep commitment to quantum computing over the next couple of decades. The company delivered its first quantum software a couple of years ago and, surprisingly, is developing a processor capable of running quantum software with plans to deliver other quantum hardware components.

Over the coming years, Microsoft plans to build quantum systems to solve a wide range of issues, from complex scientific problems to the challenges facing advanced enterprises, said Julie Love, senior director of quantum computing at Microsoft.

“To realize that promise we will need a full system that scales with hundreds and thousands of qubits. It’s why we have this long-term deep development effort in our labs across the globe,” she said.

Microsoft’s first steps toward introducing quantum technology for enterprise use will be to combine quantum algorithms with classical algorithms to improve speed and performance and introduce new capabilities into existing systems. The company is already beta testing this approach with Case Western Reserve University, where quantum algorithms have tripled the performance of MRI machines.

“Quantum is going to be a hybrid architecture working alongside classical [architectures],” Love said. “If you look at the larger opportunities down the road, the most promise [for quantum computing] is in chemistry, science, optimization and machine learning.”

Unlike Bill Gates, Satya Nadella isn’t promising a quantum computer on every desktop and in every home by 2040. But you can assume that as long as Gates has an association with the company, it will make an earnest effort to put Microsoft software on those desktops that do.

Ransomware attacks shaking up threat landscape — again

Ransomware is changing the threat landscape yet again, though this time it isn’t with malicious code.

A spike in ransomware attacks against municipal governments and healthcare organizations, coupled with advancements in the back-end operations of specific campaigns, has concerned security researchers and analysts alike. The trends are so alarming that Jeff Pollard, vice president and a principal analyst at Forrester Research, said he expects local, state and city governments will be forced to seek disaster relief funds from the federal government to recover from ransomware attacks.

“There’s definitely been an uptick in overall attacks, but we’re seeing municipality after municipality get hit with ransomware now,” Pollard said. “When those vital government services are disrupted, then it’s a disaster.”

In fact, Forrester’s report “Predictions 2020: Cybersecurity” anticipates that at least one local government will ask for disaster relief funding from its national government in order to recover from a ransomware attack that cripples municipal services, whether they’re electrical utilities or public healthcare facilities.

Many U.S. state, local and city governments have already been disrupted by ransomware, including a massive attack on Atlanta in March 2018 that paralyzed much of the city’s non-emergency services. A number of healthcare organizations have also been shut down by ransomware attacks, including a network of hospitals in Alabama.

The increase in attacks on municipal governments and healthcare organizations has been accompanied by another trend this year, according to several security researchers: Threat actors are upping their ransomware games.

Today’s infamous ransomware campaigns share some aspects with the notable cyberattacks of 20 years ago. For example, the ILOVEYOU worm used a simple VBScript to spread through email systems and even overwrote random files on infected devices, which forced several enterprises and government agencies to shut down their email servers.

But today’s ransomware threats aren’t just using more sophisticated techniques to infect organizations — they’ve also built thriving financial models that resemble the businesses of their cybersecurity counterparts. And they’re going after targets that will deliver the biggest return on investment.

New approaches

The McAfee Labs Threats Report for August showed a 118% increase in ransomware detections for the first quarter of this year, driven largely by the infamous Ryuk and GandCrab families. But more importantly, the vendor noted how many ransomware operations had embraced “innovative” attack techniques to target businesses; instead of using mass phishing campaigns (as Ryuk and GandCrab have), “an increasing number of attacks are gaining access to a company that has open and exposed remote access points, such as RDP [remote desktop protocol] and virtual network computing,” the report stated.

“The concept of ransomware is no longer the concept that we’ve historically known it as,” Raj Samani, chief scientist at McAfee, told SearchSecurity.

Sophos Labs’ 2020 Threat Report, which was published earlier this month, presented similar findings. The endpoint security vendor noted that since the SamSam ransomware attacks in 2018, more threat actors have “jumped on the RDP bandwagon” to gain access to corporate networks, not just endpoint devices. In addition, Sophos researchers found more attacks using remote monitoring and management software from vendors such as ConnectWise and Kaseya (ConnectWise’s Automate software was recently used in a series of attacks).

John Shier, senior security advisor at Sophos, said certain ransomware operations are demonstrating more sophistication and moving away from relying on “spray and pray” phishing emails. “The majority of the ransomware landscape was just opportunistic attacks,” he said.

That’s no longer the case, he said. In addition to searching for devices with exposed RDP or weak passwords that can be discovered by brute-force attacks, threat actors are also using that access to routinely locate and destroy backups. “The thoroughness of the attacks in those cases is devastating, and therefore they can command higher ransoms and get a higher percentage of payments,” Shier said.

Jeremiah Dewey, senior director of managed services and head of incident response at Rapid7, said his company began getting more calls about ransomware attacks with higher ransom demands. “This year, especially earlier in the year, we saw ransomware authors determine that they could ask for more,” he said.

With the volume of ransomware attacks this year, experts expect that trend to continue.

The ransomware economy

Samani said the new strategies and approaches used by many threat groups show a “professionalization” of the ransomware economy. But there are also operational aspects, particularly with the ransomware-as-a-service (RaaS) model, that are exhibiting increased sophistication. With RaaS campaigns such as GandCrab, ransomware authors make their code available to “affiliates” who are then tasked with infecting victims; the authors take a percentage of the ransoms earned by the affiliates.

In the past, Samani said, affiliates were usually less-skilled cybercriminals who relied on traditional phishing or social engineering tactics to spread ransomware. But that has changed, he said. In a series of research posts on Sodinokibi, a RaaS operation that experts believe was developed by GandCrab authors, McAfee observed the emergence of “all-star” affiliates who have gone above and beyond what typical affiliates do.

“Now you’re seeing affiliates beginning to recruit individuals that are specialists in RDP stressing or RDP brute-forcing,” Samani said. “Threat actors are now hiring specific individuals based on their specialties to go out and perform the first phase of the attack, which may well be the initial entry vector into an organization.”

And once they achieve access to a target environment, Samani said, the all-stars generally lie low until they achieve an understanding of the network, move laterally and locate and compromise backups in order to maximize the damage.

Sophos Labs’ 2020 Threat Report also noted that many ransomware actors are prioritizing which drives, files and documents to encrypt first. Shier said it’s not surprising to see ransomware campaigns increasingly use tactics that rely on human interaction. “What we’ve seen starting with SamSam is more of a hybrid model — there is some automation, but there’s also some humans,” he said.

These tactics and strategies have transformed the ransomware business, Samani said, shifting it away from the economies-of-scale approach of old. “All-star” affiliates who can not only infect the most victims but also command the biggest ransoms are now reaping the biggest rewards. And the cybercriminals behind these RaaS operations are paying close attention, too.

“The bad guys are actively monitoring, tracking and managing the efficiency of specific affiliates and rewarding them if they are as good as they claim to be,” Samani said. “It’s absolutely fascinating.”

Silver linings, dark portents

There is some good news for enterprises amid the latest ransomware research. For one, Samani said, the more professional ransomware operations were likely forced to adapt because the return on investment for ransomware was decreasing. Efforts from cybersecurity vendors and projects like No More Ransom contributed to victims refusing to pay, either because their data had been decrypted or because they were advised against it.

As a result, ransomware campaigns were forced to improve their strategies and operations in order to catch bigger fish and earn bigger rewards. “Return on investment is the key motivator to the re-evolution or rebirth of ransomware,” Samani said.

Another positive, according to Shier, is that not every ransomware campaign or affiliate has the necessary skills to emulate a SamSam operation, for example. “In terms of other campaigns implementing similar models and techniques, it’s grown in the past 18 months,” he said. “But there are some limitations there.”

On the downside, Shier said, cybercriminals often don’t even need that level of sophistication to achieve some level of success. “Not everyone has the technical expertise to exploit BlueKeep for an RDP attack,” he said. “But there’s enough exposed RDP [systems] out there with weak passwords that you don’t need things like BlueKeep.”

In addition, Samani said the ransomware operations that earn large payments will be in a position to improve even further. “If you’ve got enough money, then you can hire whoever you want,” Samani said. “Money gives you the ability to improve research and development and innovate and move your code forward.”

In order to make the most money, threat actors will look for the organizations that are not only most vulnerable but also the most likely to pay large ransoms. That, Samani said, could lead to even more attacks on government and healthcare targets in 2020.

Shier said most ransomware attacks on healthcare companies and municipal governments still appear to be opportunistic infections, but he wouldn’t be surprised if more sophisticated ransomware operations begin to purposefully target those organizations in order to maximize their earnings.

“[Threat actors] know there are organizations that simply can’t experience downtime,” Shier said. “They don’t care who they are impacting. They want to make money.”

With Time on its hands, Meredith drives storage consolidation

After Meredith Corp. closed its $2.8 billion acquisition of Time Inc. in January 2018, it adopted the motto “Be Bold. Together.”

David Coffman, Meredith’s director of enterprise infrastructure, took that slogan literally. “I interpreted that as ‘Drive it like you stole it,'” said Coffman, who was given a mandate to overhaul the combined company’s data centers that held petabytes of data. He responded with an aggressive backup and primary storage consolidation.

The Meredith IT team found itself with a lot of Time data on its hands, and in need of storage consolidation because a variety of vendors were in use. Meredith was upgrading its own Des Moines, Iowa, data center at the time, and Coffman’s team standardized technology across legacy Time and Meredith. It dumped most of its traditional IT gear and added newer technology developed around virtualization, convergence and the cloud.

Although Meredith divested some of Time’s best-known publications, it now publishes People, Better Homes and Gardens, InStyle, Southern Living and Martha Stewart Living. The company also owns 17 local television stations and other properties.

The goal is to reduce its data centers to two major sites in New York and Des Moines with the same storage, server and data protection technologies. The sites can serve as DR sites for each other. Meredith’s storage consolidation resulted in implementing Nutanix hyper-converged infrastructure for block storage and virtualization, Rubrik data protection and a combination of Nasuni and NetApp for file storage.

“I’ve been working to merge two separate enterprises into one,” Coffman said. “We decided we wanted to go with cutting-edge technologies.”

At the time of the merger, Meredith used NetApp-Cisco FlexPod converged infrastructure for primary storage and Time had Dell EMC and Hitachi Vantara in its New York and Weehawken, N.J. data centers. Both companies backed up with Veritas NetBackup software. Meredith had a mixture of tape and NetBackup appliances and Time used tape and Dell EMC Data Domain disk backup.

By coincidence, both companies were doing proofs of concept with Rubrik backup software on integrated appliances and were happy with the results.

Meredith installed Rubrik clusters in its Des Moines and New York data centers, as well as in a large Birmingham, Ala., office, after the merger. They protect Nutanix clusters at all those sites.

“If we lost any of those sites, we could hook up our gear to another site and do restores,” Coffman said.

Meredith also looked at Cohesity and cloud backup vendor Druva while evaluating Rubrik Cloud Data Management. Coffman and Michael Kientoff, senior systems administrator of data protection at Meredith, said they thought Rubrik had the most features and they liked its instant restore capabilities.

Coffman said Cohesity was a close second, but he didn’t like that Cohesity includes its own file system and bills itself as secondary storage.

“We didn’t think a searchable file system would be that valuable to us,” Coffman said. “I didn’t want more storage. I thought, ‘These guys are data on premises when I’m already getting yelled at for having too much data on premises.’ I didn’t want double the amount of storage.”

Coffman swept out most of the primary storage and servers from before the merger. Meredith still has some NetApp for file storage, and Nasuni cloud NAS for 2 PB of data that is shared among staff in different offices. Nasuni stores data on AWS.

Kientoff is responsible for protecting the data across Meredith’s storage systems.

“All of a sudden, my world expanded exponentially,” he said of the Time aftermath. “I had multiple NetBackup domains all across the world to manage. I was barely keeping up on the NetBackup domain we had at Meredith.”

Coffman and Kientoff said they were happy to be rid of tape, and found Rubrik’s instant restores and migration features valuable. Instead of archiving to tape, Rubrik moves data to AWS after its retention period expires.

Rubrik’s live mount feature can recover data from a virtual machine in seconds. This comes in handy when an application running in a VM dies, but also for migrating data.

However, that same feature is missing from Nutanix. Meredith is phasing out VMware in favor of Nutanix’s AHV hypervisor to save money on VMware licenses and to have, as Coffman put it, “One hand to shake, one throat to choke. Nutanix provided the opportunity to have consolidation between the hypervisor and the hardware.”

The Meredith IT team has petitioned for Nutanix to add a similar live mount capability for AHV. Even without it, though, Kientoff said backing up data from Nutanix with Rubrik beats using tapes.

“With a tape restore, calling backup tapes from off-site, it might be a day or two before they get their data back,” he said. “Now it might take a half an hour to an hour to restore a VM instead of doing a live mount [with VMware]. Getting out of the tape handling business was a big cost savings.”

The Meredith IT team is also dealing with closing smaller sites around the country to get down to the two major data centers. “That’s going to take a lot of coordinating with people, and a lot of migrations,” Coffman said.

Meredith will back up data from remote offices locally and move them across the WAN to New York or Des Moines.

Kientoff said Rubrik’s live restore capability is a “killer feature” for the office consolidation project. “That’s where Rubrik has really shone for us,” he said. “We recently shut down a sizeable office in Tampa. We migrated most of those VMs to New York and some to Des Moines. We backed up the cluster across the WAN, from Tampa to New York. We shut down the VM in Tampa, live mounted in New York, changed the IP address and put it on the network. There you go — we instantly moved VMs from one office to another.”

Guarding the shop: Rewind backup protects e-commerce data

It’s the most wonderful time of the year for e-commerce … that is, until your site goes down and customers can’t shop anymore.

That’s where Rewind backup comes in.

Rewind provides backup for e-commerce sites hosted on Shopify and BigCommerce.

“Most people don’t know they need a backup,” Rewind CEO Mike Potter said.

For example, an e-commerce business that uses Shopify and deletes a product or blog post is not covered just because it’s in the cloud. As with cloud-based applications such as Microsoft Office 365 and Salesforce, the provider protects its infrastructure, but not always your data.

However, in Office 365, for example, users have a place for deleted items that they can access if they delete an email by mistake. That’s not the case in a lot of e-commerce platforms where “there is no trash bin,” Potter said.

Potter, who is also a founder of Ottawa-based Rewind, said he’s lost data before, so he understands the pain. Launched four years ago, Rewind had one customer lose everything right before Christmas but restored the store to a safe point in time from before the incident.

As a way to bring the backup issue to the forefront, this holiday season Rewind is offering a free version of its data protection software. Rewind: One-Time enables retailers to conduct a free one-time backup of up to 10,000 products and related data in their online stores. The Rewind backup offer is available for BigCommerce and Shopify merchants.

After an incident, Rewind: One-Time users can restore their data to the time they installed the product.

The one-time backup for BigCommerce includes product, brand, category, option set and option data, while the Shopify backup includes products, product images, custom collections and smart collections. The backups are stored indefinitely in the Rewind Vault, which is hosted in various Amazon regions. Data is encrypted in transit and at rest.

It’s the first time Rewind has offered this one-time backup.

“There needs to be a way for everyone to have protection in this holiday season,” Potter said.

A jump forward with Rewind backup

For Crossrope, an online jump rope seller and workout provider based in Raleigh, N.C., “it’s the biggest season of the year,” said digital marketing specialist Andy Lam.

“To have Rewind as a tool for backing up, it just gives us peace of mind,” Lam said.

Before adopting Rewind, Crossrope made a change to its theme code late one workday that broke the site. Customers couldn’t add items to their carts, and the company lost out on orders and revenue in the process.

The company had a manual backup saved from 30 days prior and spent a lot of time trying to restore the site manually.

“That kickstarted trying to find a better solution,” Lam said.

Crossrope heard from BigCommerce, its e-commerce platform of choice, about Rewind backup. It was the first backup company that Crossrope contacted.

“Because they were a full-fledged cloud backup tool, it was a no-brainer,” Lam said.

Now if there are any incorrect changes like the previous incident, Crossrope can “rewind” to a known good point in time, in just a couple of clicks. The company has been using Rewind backup for about four months and hasn’t had a major incident. Rewind performs daily backups for Crossrope, which Lam said is enough.

Rewind backup enables merchants to restore their stores to a safe point in time.

“Now we feel safe,” Lam said. “I know they’re covering a lot of bases for us.”

While Rewind can restore the code in a couple of clicks, Lam said he is hoping the backup vendor can speed up product restoration.

A Rewind recap

Though e-commerce data loss can result from malicious acts and third-party integrations, human error is a common cause.

“We’ve seen everything,” Potter said. (Think of a cat jumping on a keyboard.) “You don’t get any warnings you’re going to have a disaster.”

Rewind claims more than 10,000 small and medium-sized enterprises as customers.

If they want backups more recent than the one-time protection, Rewind: One-Time users can upgrade to one of the paid options during the holiday season or beyond. Pricing ranges from $9 to $299 per month, depending on the size of the store and the number of orders. Many customers perform a daily Rewind backup, Potter said.

The Rewind: One-Time offer is available through Dec. 31, 2019. Customers who use it will have access to that backup indefinitely.

Rewind also provides backup for Mailchimp email marketing and QuickBooks Online accounting data.

For Sale – ***FINAL Price Drop*** Asus FX503VM-EN184T 15.6 Gaming Laptop

Only 10 months old so in perfect condition. Only selling as I don’t have the time to play it much.
Have added the extra 1TB HD myself for extra storage. Will fully factory reset the laptop beforehand.

Asus FX503VM-EN184T 15.6 Gaming Laptop GTX1060 8GB RAM 256GB SSD + 1TB HD Win 10

Screen Size: 15.6"
Screen Type: Full HD display, 120Hz
Screen Resolution: 1920 x 1080
Backlight: LED
Processor: Intel Core i5 (7th Gen) 7300HQ, 2.5 GHz (Max Turbo 3.5 GHz), Quad Core
Graphics: NVIDIA GeForce GTX 1060 3GB GDDR5
Storage: 256GB SSD + 1TB HD
Memory: 8GB
Operating System: Windows 10 Home
LAN: Fast Gigabit Ethernet
Bluetooth: v4.1
USB Ports: 2 x USB 3.0, 1 x USB 2.0
Display Ports: 1 x HDMI
Extra Features: Integrated HD Webcam with Microphone, Backlit Keyboard

Price and currency: £575 (reduced from £650, then £600)
Delivery:
Delivery cost is included within my country
Payment method: PPG or BT
Location: Colchester, Essex
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected
