Announcing Windows 10 Insider Preview Build 18358 | Windows Experience Blog

Hello Windows Insiders, today we are releasing Windows 10 Insider Preview Build 18358 (19H1) to Windows Insiders in the Fast ring.
If you are looking for a complete look at what build is in which Insider ring – head on over to Flight Hub. You can also check out the rest of our documentation here including a complete list of new features and updates that have gone out as part of Insider flights for the current development cycle (which currently is 19H1).

FOR GAMERS: We have addressed an issue with Game Mode that could degrade game streaming and recording quality.
Here’s the latest on trying out our new Windows gaming technology:

Still haven’t had a chance to get the game State of Decay for free (for a limited time)? We’ve added even more slots! Whether you’ve tried it in earlier builds or haven’t had the chance yet, these instructions have everything you need.
Installed the Insider version of State of Decay already? We’ll be trying out another update later today. To get it, launch the Store app, click […] and then “Downloads and Updates”. Once installed, you shouldn’t see any difference in the game – it’s just a test update – but please let us know if anything doesn’t work!

We fixed an issue that could result in the thumbnails in Alt + Tab sometimes becoming offset.
We fixed an issue where certain upgrade paths could result in the contents of the Recycle Bin being left under Windows.old.
We fixed an issue resulting in upgrades failing at 18% or 25% and rolling back for some Insiders.
We fixed an issue resulting in some Insiders experiencing green screens with error KERNEL_SECURITY_VIOLATION.
We fixed an issue resulting in some apps using the Windows Installer failing to install recently.

Microsoft Store app updates do not automatically install on Build 18356 and higher. As a workaround, you can manually check for and install updates via the Microsoft Store app: open the app, then select “…” > “Downloads and updates” > “Get updates”.
Launching games that use anti-cheat software may trigger a bugcheck (GSOD).
Creative X-Fi sound cards are not functioning properly. We are partnering with Creative to resolve this issue.
Some Realtek SD card readers are not functioning properly. We are investigating the issue.
We’re investigating an issue preventing VMware from being able to install or update Windows Insider Preview builds. Hyper-V is a viable alternative if available to you.

If you install any of the recent builds from the Fast ring and switch to the Slow ring, optional content such as enabling developer mode will fail. You will have to remain in the Fast ring to add/install/enable optional content. This is because optional content will only install on builds approved for specific rings.

To extend our container technology to other browsers and provide customers with a comprehensive solution to isolate potential browser-based attacks, we have designed and developed Windows Defender Application Guard extensions for Google Chrome and Mozilla Firefox.
How it works
The extensions for Google Chrome and Mozilla Firefox automatically redirect untrusted navigations to Windows Defender Application Guard for Microsoft Edge. The extension relies on a native application that we’ve built to support the communication between the browser and the device’s Application Guard settings.
When users navigate to a site, the extension checks the URL against a list of trusted sites defined by enterprise administrators. If the site is determined to be untrusted, the user is redirected to an isolated Microsoft Edge session. In the isolated Microsoft Edge session, the user can freely navigate to any site that has not been explicitly defined as trusted by their organization without any risk to the rest of the system. With our upcoming dynamic switching capability, if the user tries to go to a trusted site while in an isolated Microsoft Edge session, the user is taken back to the default browser.
To configure the Application Guard extension under managed mode, enterprise administrators can follow these recommended steps:

Ensure devices meet requirements.
Turn on Windows Defender Application Guard.
Define the network isolation settings to ensure a set of trusted sites is in place.
Install the new Windows Defender Application Guard companion application from the Microsoft Store.
Install the extension for Google Chrome or Mozilla Firefox browsers provided by Microsoft.
Restart the devices.
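For step 2 above, one way to turn on Application Guard on a single test device is from an elevated PowerShell prompt. This is only a sketch for testing; enterprise deployments would normally enable the feature through Group Policy or MDM policy instead, and a restart is still required either way:

```powershell
# Enable the Application Guard optional feature on a test machine.
# Requires an elevated prompt and a supported Windows 10 Enterprise/Pro build.
Enable-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard -NoRestart

# Restart to complete installation (step 6 above):
# Restart-Computer
```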

Intuitive user experience
We designed the user interface to be transparent to users about Windows Defender Application Guard being installed on their devices and what it does. We want to ensure that users are fully aware that their untrusted navigations will be isolated and why.

When users initially open Google Chrome or Mozilla Firefox after the extension is deployed and configured properly, they will see a Windows Defender Application Guard landing page.
If there are any problems with the configuration, users will get instructions for resolving any configuration errors.
Users can initiate an Application Guard session without entering a URL or clicking on a link by clicking the extension icon on the menu bar of the browser.

Where to get it
The Windows Defender Application Guard extension for Google Chrome and Mozilla Firefox is rolling out to Windows Insiders today and will be generally available very soon. It is available to users on Windows 10 Enterprise and Pro SKUs, version 1803 or later.

Submit feedback here. Contact our team if you have any questions.

We have locked down the inbox apps in 19H1. These simplified versions of some of the inbox apps are what will ship with 19H1 when it is released. As a result, Insiders may have noticed that some features have disappeared from these apps. This was probably most noticeable with the Photos app. Insiders can get these features back by going into the settings of an inbox app like Photos and clicking the “Join preview” button.
No downtime for Hustle-As-A-Service,
Dona

Three ways analytics are improving clinical outcomes – Microsoft Industry Blogs


According to Accenture Digital Health Technology Vision 2017, 84 percent of healthcare executives believe artificial intelligence (AI) will revolutionize the way they gain information. And many health organizations are already taking advantage of technologies such as AI and advanced analytics to gain insights that help them improve clinical treatment processes and outcomes. In fact, there’s a broad spectrum of use cases for clinical analytics.

Anticipating patient needs

Among the first ways that health organizations have applied clinical analytics: looking for gaps in care and better predicting patient needs.

That’s becoming especially important with managed care models where health organizations receive reimbursement based not just on episodic health services but also on factors like length of stay (LOS) and readmission rates. Health systems are taking advantage of analytics to help them correlate staffing with anticipated patient needs and better coordinate care so they can improve patient outcomes and reduce LOS and readmission rates.

For example, Steward Health Care analyzed multiple types of data—such as CDC, flu, seasonality, and social data—using Microsoft Azure Machine Learning to predict patient volume so they could staff accordingly.

The results have been impressive. The private hospital operator can predict volumes one to two weeks out with 98 percent accuracy. And it reduced the average LOS for patients by one and a half days. In other words, improved nurse scheduling is helping patients get better faster. It has also increased patient satisfaction. All this, plus: Steward Health Care is saving $48 million per year.

Empowering care teams with predictive care guidance

The next level up in clinical analytics is predictive care guidance. A great example comes from Ochsner Health System, where they’ve integrated AI into patient care workflows.

Care teams there get “pre-code” alerts through an Azure-based platform (from our partner Epic) so they can proactively intervene sooner to help prevent emergency situations. The AI tool analyzes thousands of data points to predict which patients face immediate risks.

“It’s like a triage tool,” says Michael Truxillo, Medical Director of the Rapid Response and Resuscitation Team at Ochsner Medical Center, in this article. “A physician may be supervising 16 to 20 patients on a unit and knowing who needs your attention the most is always a challenge. The tool says, ‘Hey, based on lab values, vital signs, and other data, look at this patient now.’”

During a 90-day pilot project with the tool, Ochsner reduced the hospital’s typical number of codes (cardiac or respiratory arrests) by 44 percent. That incredible number demonstrates the impact AI-driven predictive care guidance can have on clinical outcomes.

Accelerating rare disease diagnoses

Yet another example on the clinical analytics continuum is the work we’re doing with Shire and EURORDIS to accelerate the diagnosis of rare disease. Together, we’ve formed The Global Commission to End the Diagnostic Odyssey for Children with a Rare Disease. As part of the commission’s efforts, phenotypic data (the physical presentation of a person) and genomic data are analyzed to gain insights that could help physicians identify and diagnose patients with a rare disease more quickly.

On average, it takes five years before a rare disease patient—of which approximately half are children—receives the correct diagnosis. Harnessing the power of AI-driven clinical analytics, the alliance aims to shorten the multi-year journey that patients and families endure before receiving a rare disease diagnosis. And that’s one of the most important issues affecting the health, longevity, and well-being for those patients and families.

Those are just a few examples of how AI and advanced analytics can transform healthcare and improve clinical outcomes.

Together with our partners, we’re dedicated to learning and growing alongside our customers and helping them achieve the quadruple aim through clinical analytics and other cloud-based health solutions. We’re also committed to helping them meet their security needs and safeguard the privacy of PHI. And our customers have peace of mind when innovating with us thanks to our Shared Innovation Principles that provide clarity around co-creating technology. We value our customers and partners’ expertise and don’t seek to own it. Rather, we help them monetize their technology assets.

However your health organization wants to use—or advance your use of—clinical analytics, you can learn how to take advantage of AI tools and see more real-world use cases in the e-book: Breaking down AI: 10 real applications in healthcare.

Go to Original Article
Author: Steve Clarke

Citrix data breach report raises more questions

Citrix alerted the FBI to an incident involving attackers gaining unauthorized access to the company’s internal network, but details beyond that are hard to pin down.

According to the official disclosure announcement written by Stan Black, CSIO of Citrix, the company notified the FBI that “international cyber criminals” accessed the company’s internal network via a password spraying attack wherein malicious actors brute force logins with commonly used passwords.

“While our investigation is ongoing, based on what we know to date, it appears that the hackers may have accessed and downloaded business documents. The specific documents that may have been accessed, however, are currently unknown,” Black wrote in a blog post. “At this time, there is no indication that the security of any Citrix product or service was compromised.”

Beyond it being unclear what documents were affected in the Citrix data breach, the company did not mention how long the attackers had access to the Citrix internal network. In December, Citrix also forced many ShareFile users to reset passwords, noting the “constant increase in internet-account credential theft and the risk of credential stuffing attacks.” However, Citrix said the company itself was not breached.

A Los Angeles-based cybersecurity research company called Resecurity claimed to have more information on the Citrix data breach and said the attackers gained access to somewhere between six and 10 TB worth of sensitive information, “including e-mail correspondence, files in network shares and other services used for project management and procurement.”

According to Resecurity, the attack was carried out by an Iranian threat group known for targeting government agencies and oil and gas companies. Resecurity claimed in a blog post that it reached out to Citrix in December to share an “early warning notification” about the attack, but in an interview with NBC News, Resecurity president Charles Yoo also said the threat group originally accessed Citrix’s network 10 years ago and persisted ever since.

These details about the Citrix data breach could not be verified in any way. Initially, Resecurity’s blog post on the incident did not contain any technical evidence, nor did the company respond to requests for comment. Resecurity updated its post Monday with additional information and claims, including IP addresses supposedly from Iran, as well as darkened screenshots that appeared to show a list of email accounts and other information, including partially visible names, for approximately two dozen Citrix employees.

Citrix refused to comment on the claims made by Resecurity or if there was any relationship between the two companies.

“As disclosed on Friday, we have launched a comprehensive forensic investigation into the incident with the help of leading third-party experts and will communicate additional details when we have credible, actionable information,” a Citrix spokesperson said in a statement. “We have no comment on Resecurity’s claims at this time.”

Resecurity was incorporated in 2015 by Andrei Komarov, former CIO of InfoArmor, according to public documents first discovered by Twitter user “Deacon Blues.” The company’s web presence is fairly thin; the Resecurity website includes just two blog posts and a handful of news posts dating back to Sept. 1. The website contains no information about products, services or research.

The company page doesn’t list any of the employees beyond a news post about Ian Cook, director at Corbels Security Services, based in East Sussex, U.K., being named a strategic advisor. LinkedIn lists eight employees of Resecurity, though Komarov is not one of them. Of those eight employees, only three are listed as being in the L.A. area — Resecurity doesn’t mention any other offices — and Yoo’s profile includes no information beyond his position at Resecurity.

Komarov has been connected to a questionable breach report in the past with the 2013 Yahoo mega breach. InfoArmor saw data from the Yahoo breach being sold on the deep web in August 2016, published information about the data and supplied the data to law enforcement three months before Yahoo disclosed the 2013 breach. Komarov and InfoArmor never directly contacted Yahoo about the data.

George Avetisov, CEO and co-founder of identity and access management vendor HYPR, expressed concern about how much faith to put into the Resecurity claims.

“I have no knowledge of Resecurity, but it looks pretty suspicious. They have a very short history to be working with a software company as prominent as Citrix, they have minimal background in this space, the founder has no visible online presence, and industry insiders have been questioning the legitimacy of the company,” Avetisov said. “Simply put — nobody has heard of them.”


For Sale – Virtualisation PC /media PC

Hi mate, I may have low feedback but this would be the third time I have sent something before I have received payment. The last sale was over 450 pounds’ worth of stuff and then I had to chase for two days to get payment. So I would prefer not to.
I also stated that the price doesn’t include delivery, so we will have to work something out there.
The case is in good condition; I just didn’t put it on properly for the first pic, and the third was to show how the SSD was seated lol

Forgot to mention it’s got an Intel PCIe NIC card too.

One back panel is missing

No warranty except the SSD, which I bought a few months back from CeX; it should have two years left and I’m happy to help with that.

Whereabouts are you based, as I travel a lot for work?

PayPal would work too. I’ll get the post up for the last big sale too; I just haven’t received feedback for that yet.


Hone your PowerShell text manipulation skills

If you are interested in task automation, then learning how to use the *-Content cmdlets for effective PowerShell text manipulation will help with advanced infrastructure management efforts.

Part of your automation activities include modifying text files, which used to mean just flat text, such as files you’d create with the Notepad application. These days, the concept of text includes CSV, HTML, JSON, XML and even Markdown files. PowerShell works with all those file types — and YAML is on the horizon for a future PowerShell Core release — but this tutorial will focus on working with flat text files.

There are a number of cmdlets available for PowerShell text manipulation: Add-Content, Clear-Content, Get-Content and Set-Content. Also, the Out-File cmdlet can create a text file or write to one. You need to be aware of the changes between the Windows PowerShell and PowerShell Core versions — especially with encoding — to manage your files across platforms and across applications.

Creating text files in PowerShell

You can use both Add-Content and Set-Content to create a text file, but both require content. Start by storing some sample text in the $txt string variable, as shown in the screenshot.

Create text file
Create populated text files with PowerShell.

All three of these commands create a text file:

Set-Content -Path C:\test\test1.txt -Value $txt
Add-Content -Path C:\test\test2.txt -Value $txt
Out-File -FilePath C:\test\test3.txt -InputObject $txt

You can also use Out-File directly on the pipeline:

Get-Process | Out-File -FilePath C:\test\test4.txt

But if you try:

Get-Process | Set-Content -Path C:\test\test5.txt

Your content will look like this:

System.Diagnostics.Process (ApplicationFrameHost)

To avoid this, convert the data to strings before writing to the file:

Get-Process | Out-String | Set-Content -Path C:\test\test6.txt

You can use Out-File rather than the *-Content cmdlets to avoid the extra processing required with the Out-String cmdlet.

Clearing content from a text file

There are times when it’s helpful to know how to clear the contents of a file so the file is empty and available for reuse.

clear text contents
Use the Clear-Content cmdlet to remove the content in a file but preserve its permissions.

The Clear-Content cmdlet executes in one less step compared to deleting and recreating the empty file. Another advantage is it preserves the permissions on the file.
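A minimal sketch of this behavior, using a temporary file rather than the C:\test examples:

```powershell
# Create a file, then empty it in place; the file object and its
# permissions survive, only the contents are removed.
$path = Join-Path ([System.IO.Path]::GetTempPath()) 'clear-demo.txt'
Set-Content -Path $path -Value 'some content'
Clear-Content -Path $path

$length = (Get-Item $path).Length   # 0 - the file still exists, but is empty
Remove-Item $path
```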

Modifying content in a text file

Creating text files is a useful activity, but using a script to add content to a file is even more valuable. This script creates a file with 10 server names, one per line:

1..10 |
ForEach-Object {
    if ($_ -lt 10) {
        $_ | Join-String -OutputPrefix 'Server0' | Add-Content -Path C:\test\Servers.txt
    }
    else {
        $_ | Join-String -OutputPrefix 'Server' | Add-Content -Path C:\test\Servers.txt
    }
}

The code uses Join-String, which is a feature introduced in PowerShell Core v6.2 preview 3, to create a server name and write it to the file. You could use the following code as an alternative that works in any PowerShell version:

1..10 |
ForEach-Object {
    "Server{0:00}" -f $_ | Add-Content -Path C:\test\Servers.txt
}

With either method, the file will look the same.

Text file contents
Displaying the content of the Servers.txt file.

Perform PowerShell text manipulation with two cmdlets

You have two options to modify file content: Add-Content and Set-Content.

Add-Content is nondestructive. It adds content to the end of the file:

11..15 |
ForEach-Object {
    "Server{0:00}" -f $_ | Add-Content -Path C:\test\Servers.txt
}

This code adds another five server names to the text file created earlier.

Add-Content cmdlet
Additional content added to the Servers.txt file with the Add-Content cmdlet

Set-Content replaces the current content with new content:

$servers = 1..12 |
ForEach-Object {
    "NewServer{0:00}" -f $_
}

Set-Content -Path C:\test\Servers.txt -Value $servers

This produces the results shown in the following screenshot:

Set-Content cmdlet
Displaying the contents of the Servers.txt file after Set-Content replaces the file contents

You may need to use some additional parameters available with Add-Content and Set-Content:

  • NoNewLine concatenates additional text on a single line.
  • Stream specifies an alternate data stream.
  • AsByteStream creates the file as a stream of bytes.

Use care when using different PowerShell versions

The Add-Content, Set-Content and Get-Content cmdlets have an encoding parameter that controls the way PowerShell writes the file. If you only use a single version of PowerShell and files that you create will only be read by PowerShell, then you don’t need to worry about this.

If you use multiple versions of PowerShell, if your PowerShell files will be read by another application — possibly on a different OS — or if you want to use PowerShell to read files created by other applications, then you may need to be aware of encoding.

Just to add another wrinkle to the encoding story, the default encoding changed in PowerShell Core 6.0.

Windows PowerShell uses a mixture of encoding, including ASCII and UTF-16, which may lead to issues when you try to read the files you create. For the most part, PowerShell tends to figure out the encoding for files, but other applications may not be so forgiving.

PowerShell Core 6.0 standardized on UTF-8 without a byte order mark (UTF8NoBOM) as the default encoding. The following cmdlets use UTF8NoBOM in PowerShell Core 6.0 and later: Add-Content, Export-Clixml, Export-Csv, Export-PSSession, Format-Hex, Get-Content, Import-Csv, Out-File, Select-String, Send-MailMessage and Set-Content.

New-ModuleManifest was moved to the UTF8NoBOM standard in PowerShell Core 6.1.

The PowerShell team recommends explicitly stating the encoding with the -Encoding parameter.
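For example, a round trip with an explicit encoding, sketched against a temporary file; the same -Encoding value works on Set-Content, Add-Content and Get-Content:

```powershell
# Write and read back with an explicit UTF-8 encoding so the file is
# interpreted the same way across PowerShell versions and other tools.
$path = Join-Path ([System.IO.Path]::GetTempPath()) 'encoding-demo.txt'
Set-Content -Path $path -Value 'Server01' -Encoding UTF8
$text = Get-Content -Path $path -Encoding UTF8
Remove-Item $path
```

Note that in Windows PowerShell, -Encoding UTF8 writes a byte order mark, while PowerShell Core defaults to UTF-8 without one.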

Working with the Get-Content cmdlet parameters

Creating and modifying files are useful, but at some stage, you will need to read the contents of a file. The file with the server names is a good example of an instance where you create a file used in your automation efforts.

The Get-Content cmdlet reads files. A typical scenario is to read the file and perform some action on each server:

Get-Content -Path C:\test\Servers.txt |
ForEach-Object { Test-Connection -TargetName $_ -Ping -IPv4 -Count 1 }

What may not be apparent is that the default action reads the file as an array:

$servers = Get-Content -Path C:\test\Servers.txt

The PowerShell pipeline unravels arrays and treats each item as a separate object. If the file contents were read as a single block of text, then you’d need to perform additional processing to separate the lines of text. If you don’t want the whole file, you can use the TotalCount — aliased as Head and First — parameter to read the first n lines of the file:

Get-Content -Path C:\test\Servers.txt -TotalCount 4

The Tail parameter reads the last n lines of the file:

Get-Content -Path C:\test\Servers.txt -Tail 3

For large files, you may need to use the ReadCount parameter to control the number of lines sent through the pipeline at one time. The Raw parameter ignores newline characters and returns the entire contents of the file as a single string:

$servers = Get-Content -Path C:\test\Servers.txt -Raw

The length of the string matches the file size. If you attempt to index into the string, you’ll get individual characters rather than lines.
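A sketch of the difference between the default array output and -Raw, again using a temporary file in place of the C:\test examples:

```powershell
# Default Get-Content returns an array of lines; -Raw returns one string.
$path = Join-Path ([System.IO.Path]::GetTempPath()) 'raw-demo.txt'
1..3 | ForEach-Object { "Server{0:00}" -f $_ } | Set-Content -Path $path

$lines = Get-Content -Path $path        # array: $lines[0] is 'Server01'
$raw   = Get-Content -Path $path -Raw   # string: $raw[0] is the character 'S'
Remove-Item $path
```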

Some other parameters of interest include:

  • Wait keeps the file open while checking and displaying new content once per second. Use CTRL+C to close the file.
  • Stream gets the contents of the specified alternate file stream.
  • AsByteStream dictates that the content should be read as a stream of bytes. This parameter was added in PowerShell Core 6.0. For Windows PowerShell, use the Byte parameter for the same result.


What’s new with Seeing AI

Saqib Shaikh holds his camera phone in front of his face with Seeing AI open on the screen

By Saqib Shaikh, Software Engineering Manager and Project Lead for Seeing AI

Seeing AI provides people who are blind or with low vision an easier way to understand the world around them through the cameras on their smartphones. Whether in a room, on a street, in a mall or an office – people are using the app to independently accomplish daily tasks like never before. Seeing AI helps users read printed text in books, restaurant menus, street signs and handwritten notes, as well as identify banknotes and products via their barcode. Leveraging on-device facial-recognition technology, the app can even describe the physical appearance of people and predict their mood.

Today, we are announcing new Seeing AI features for the enthusiastic community of users who share their experiences with the app, recommend new capabilities and suggest improvements. Inspired by this rich feedback, here are the updates rolling out to Seeing AI to enhance the user experience:

  • Explore photos by touch: Leveraging technology from Azure Cognitive Services, including Custom Vision Service in tandem with the Computer Vision API, this new feature enables users to tap their finger to an image on a touch-screen to hear a description of objects within an image and the spatial relationship between them. Users can explore photos of their surroundings taken on the Scene channel, family photos stored in their photo browser, and even images shared on social media by summoning the options menu while in other apps.
  • Native iPad support: For the first time we’re releasing iPad support, to provide a better Seeing AI experience that accounts for the larger display requirements. iPad support is particularly important to individuals using Seeing AI in academic or other professional settings where they are unable to use a cellular device.
  • Channel improvements: Users can now customize the order in which channels are shown, enabling easier access to favorite features. We’ve also made it easier to access the face recognition function while on the Person channel, by relocating the feature directly on the main screen. Additionally, when analyzing photos from other apps, the app will now provide audio cues that indicate Seeing AI is processing the image.

Since the app’s launch in 2017, Seeing AI has leveraged AI technology and inclusive design to help people with more than 10 million tasks. If you haven’t tried Seeing AI yet, download it for free on the App Store. If you have, please share your thoughts, feedback or questions with us at [email protected], or through the Disability Answer Desk and Accessibility User Voice Forum.

Author: Steve Clarke

Druva Phoenix serves up new DRaaS features

Backup vendor Druva is bolstering its disaster recovery as a service around its Phoenix and CloudRanger cloud backup apps and Amazon Web Services.

Druva this week said it would add the following features:

  • runbook automation;
  • failback to AWS or on-premises sites;
  • automated DR testing;
  • cross-region replication; and
  • mobility of virtual machines (VMs) and virtual private clouds for Phoenix and CloudRanger.

The features are due to roll out incrementally over the next few months.

Druva Phoenix first launched in 2014 as a software-as-a-service cloud backup and archive product. The vendor added DRaaS capability into Druva Phoenix in 2016. Druva acquired AWS backup company CloudRanger in mid-2018 and added it to its Cloud Platform, along with Druva Phoenix and InSync backup. The vendor added automated DR testing to CloudRanger in late 2018.

Mike Palmer, chief product officer of Druva, based in Sunnyvale, Calif., said around 30% of Druva Phoenix customers have adopted DRaaS. Now, those customers are requesting more features to keep their data secure.

“We’re seeing an uptick in demand for this type of solution as compensating for ransomware attacks,” Palmer said.

Because Druva creates a separate, off-premises data repository from an organization’s primary infrastructure, it creates an air gap from which to recover from a cyberattack. For its DRaaS, Druva charges a flat $20 fee per VM, regardless of the size of the VM.

Palmer said Druva Phoenix can help customers meet recovery time objectives (RTOs) of a few minutes and recovery point objectives (RPOs) of an hour. It is possible to achieve shorter RPOs from other technologies and vendors, but that comes at a greater cost.

Steven Hill, senior analyst at 451 Research, said organizations should evaluate how mission-critical their data is and think about how quickly they need it back when choosing from many of the DRaaS providers in the market.

screenshot of Druva Phoenix recovery workflow
Druva Phoenix has already received some performance enhancements in the background.

“The whole idea of being able to port your application to the cloud and let it continue from when it went down is a pretty big deal,” Hill said. “But that’s where the cost comes in. The shorter you want your RTO and RPO, the more money you spend on it.”

Mark Jaggers, senior director analyst at research and advisory firm Gartner, gave a similar warning.


“While many technologies today have the potential to deliver very low RPOs, users should always understand what their requirements really are. It can be expensive and unneeded to achieve an RPO that is not justified by the business requirements,” Jaggers said.

The sweet spot RPO for the best cost-benefit ratio is at around the one-hour mark, Hill said, and an RPO of 30 minutes to an hour is reasonable for most businesses. Backing that up, Jaggers cited a research survey of DRaaS subscribers performed by Gartner that found 83% of respondents reported their lowest RPOs were one hour or less.

Palmer said runbook automation will be available in April, and failback capabilities will roll out in May. Because the product is being delivered as a service, customers will not have to upgrade or install a patch to use the features once they become available.

Other feature updates coming to Druva Cloud Platform will extend beyond backup and DR to areas such as analytics, data governance, e-discovery and long-term retention via Amazon storage tiers.

“You’ll see Druva evolve not just from data protection into business continuity, as we are doing today, but into compliance, archive and analytics over time,” Palmer said.

Druva faces steep competition in DRaaS, including some of the largest IT companies. Gartner lists 10 DRaaS vendors on its latest DRaaS Magic Quadrant, with Microsoft and Iland listed as leaders and IBM and Sungard Availability Services in the visionaries quadrant. Druva is not on the list of 10.


For Sale – HP Desktops and Acer Monitors for Sale

8 Units- HP Desktop Computer Elite 8000 Core 2 Duo E8400 (3.00 GHz) 4 GB DDR3 160 GB HDD Windows 7 Professional 64-Bit

1 Unit- ASUS VS207T-P Black 19.5″ 5ms Widescreen LED Backlight LCD Monitor

7 Units- Acer K2 K202HQL Abd (UM.IX3AA.A04) Black 19.5″ Widescreen LED Backlight Monitors – LCD Flat Panel

All were purchased refurbished and were never used thereafter. All computers come with keyboard and mouse.

Whole Lot Available for $1200, pm if interested.

– Andrew




Price and currency: 1200
Delivery: Goods must be exchanged in person
Payment method: venmo, cash
Location: brooklyn, ny
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

