Kids get ‘eye-opening’ classroom lessons through video calls that span millions of miles – Stories

Horio and her students, in turn, have shared their knowledge of what to do during natural disasters, as they did in a Skype session on earthquake safety with Tran Thi Thuy’s class in rural Vietnam.

Tran, a teacher who says she was raised in the poorest family in her village, bought her own Wi-Fi router to bring the world to her students through Skype. She says the sessions, which are in English, take her kids beyond the grammar and vocabulary that are traditionally taught, to actually speak the language themselves, even though they’re in an area where native speakers are rare. Talking with people in other regions helps students learn to understand different accents as well, Horio adds.

Students at the Escuela del Deporte San Juan school in Puerto Rico broke out in spontaneous music jams over Skype calls with classes around the world when the electricity came back on after Hurricane Maria. Photo provided by their teacher, Darlene Colon.

Darlene Colon, who teaches technology and English as a second language to middle schoolers at a public school for athletes in Puerto Rico, has started a monthly rotation of Spanish and English sessions with classes in Texas and Kentucky. The kids are helping each other with pronunciation, she said — a welcome diversion for her students, who are still struggling from the impact of Hurricane Maria last year.

Colon’s school building was destroyed in the hurricane, about two months before last year’s Skype-a-Thon. A hallway in a neighboring school became her classroom, and even though the area still didn’t have electricity, she found a way to charge her computer and boosted the data service on her phone so she could connect her class for the event.

“The kids didn’t have anything, no electricity or water. Some had lost their homes, family members and friends, and they were very stressed,” she says. “So this helped them forget their troubles and just enjoy themselves.”

When the electricity came back on in time for Christmas, her students’ joy was so infectious that a spontaneous music jam and dance broke out during a Skype session with a class in New York.

“Everyone was so happy, and even though all the news was about the chaos here, we were able to show that wasn’t necessarily the whole story,” she says. “Even though it was difficult for us, we were willing to continue and participate.”

Above all, teachers say the Skype sessions highlight how much the kids have in common with each other – such as Horio’s Japanese students discovering they had the same favorite Dragon Ball comic book character as a class in Spain.

“We’ve had sessions with 78 different countries,” says Kumar, the STEM teacher in India. “But in spite of all that diversity, the sense of commonality is what comes through to the kids. When they laugh and share things, that’s what they identify: oneness within diversity.”

Google Cloud Scheduler brings job automation to GCP

As an old-time TV pitchman who sold a popular rotisserie oven once said, sometimes, you just like to set it and forget it.

In a sense, that’s what Google Cloud Platform wants customers to do with the workloads they run on GCP through Cloud Scheduler, a managed service built on the widely used open source job scheduling tool, cron.

Many GCP customers already schedule work with plain cron, but that carries some logistical overhead. For one, developers must manage the underlying infrastructure that cron runs on and restart jobs manually if one doesn't complete properly. And checking whether a cron job has run successfully involves some manual labor.
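For context, a self-managed cron job on a VM looks something like the sketch below; the script path and schedule are hypothetical, and the developer owns the VM, the retry logic and any success checks.

# A hypothetical crontab entry: run a report script every night at 02:00.
# Retries, alerting and infrastructure upkeep all fall on the developer.
0 2 * * * /opt/jobs/nightly_report.sh >> /var/log/nightly_report.log 2>&1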

Google Cloud Scheduler, now in beta, masks that complexity, according to the vendor. It resends jobs until they execute successfully and supports fault tolerance for the Cloud Scheduler instance itself, with the option to deploy it in multiple GCP regions. Customers can invoke schedules through a UI, command-line interface or API, and they can monitor jobs’ status through the UI. Cloud Scheduler uses a serverless architecture, so customers only pay for job invocations, as needed.
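As a rough sketch of the command-line path, the service is exposed during the beta through gcloud's scheduler commands; the job name, schedule and URI below are illustrative, not prescriptive, and assume a project already set up for Cloud Scheduler.

# A hypothetical HTTP job: call an endpoint every night at 02:00.
gcloud beta scheduler jobs create http nightly-report \
    --schedule="0 2 * * *" \
    --uri="https://example.com/tasks/nightly-report" \
    --http-method=POST

# Check job status from the CLI instead of inspecting cron logs by hand.
gcloud beta scheduler jobs list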

Google Cloud Scheduler checks another box for enterprise appeal

Compared with its competitors, Google is late to the game with Cloud Scheduler, although it has had a manually configured cron service for App Engine.

Microsoft’s Azure Scheduler service became generally available in late 2015, but it will be replaced by Azure Logic Apps. The latter has a broader functional intent and scope than Cloud Scheduler, with additional capabilities for application and process integration, data integration and B2B communication, among others.

AWS rolled out similar scheduling capabilities with its Batch service in late 2016, and users can also run AWS Lambda functions on cron-style schedules.

[Google Cloud Scheduler] helps DevOps [teams] focus on higher-level problems, rather than basic plumbing.
Holger Mueller, vice president and principal analyst, Constellation Research

Still, Cloud Scheduler is another indication of GCP’s ambitions to attract more business from enterprises, which run large quantities of regular jobs, such as database updates and reports.

While Google encourages customers to use Cloud Scheduler for App Engine workloads on GCP, the service also works with any HTTP/S endpoint or Publish/Subscribe messaging topic. One example of the former is an on-premises enterprise application that exposes back-end data to a cloud service via HTTP/S.

Publishers take many forms, such as a sensor installed at a remote oil rig. As the sensor generates various types of messages, the publish/subscribe approach sends them to a broker system, which then forwards them on to subscribers in real time. This approach can save time and effort by eliminating the maintenance of a slew of point-to-point integrations, and it makes sense for use cases such as IoT. Google offers a publish/subscribe service for GCP.
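A scheduled job can also feed that pattern directly: the hedged sketch below publishes to a Pub/Sub topic on a fixed interval, with the topic name and message body purely illustrative.

# A hypothetical Pub/Sub job: publish to the 'sensor-poll' topic every five
# minutes; any subscribers to the topic receive the message in near real time.
gcloud beta scheduler jobs create pubsub sensor-poll-job \
    --schedule="*/5 * * * *" \
    --topic=sensor-poll \
    --message-body="poll"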

Pricing starts at $0.10 per job, per month, with three free jobs per month. It's difficult to compare Cloud Scheduler's cost with that of Azure Scheduler, for example, which has a much more granular pricing model.

Tools such as Google Cloud Scheduler, AWS Batch and Azure Scheduler can reduce IT tasks, but as with any infrastructure investment, enterprises must weigh the efficiency and ease of use of automation against vendor and tool lock-in concerns.

“If you’re running a lot of Google services, in general, or are building next-generation applications, this can provide significant operational time savings,” said Holger Mueller, vice president and principal analyst with Constellation Research in Cupertino, Calif. “It helps DevOps [teams] focus on higher-level problems, rather than basic plumbing.”

Windows 10 Quality approach for a complex ecosystem – Windows Experience Blog

Today we are re-releasing the October 2018 Update after pausing to investigate a small but serious issue.   This is the first time in Windows 10’s “Windows as a Service” history that we have taken such an action, and as such it has naturally led to questions about the work we do to test and validate Windows quality before we begin rolling it out broadly.
While our measurements of quality show improving trends on aggregate for each successive Windows 10 release, if a single customer experiences an issue with any of our updates, we take it seriously. Today, I will share an overview of how we work to continuously improve the quality of Windows and our Windows as a Service approach.  As part of our commitment to being more transparent about our approach to quality, this blog will be the first in a series of more in-depth explanations of the work we do to deliver quality in our Windows releases.
Critical to any discussion of Windows quality is the sheer scale of the Windows ecosystem, where tens of thousands of hardware and software partners extend the value of Windows as they bring their innovation to hundreds of millions of customers.  With Windows 10 alone we work to deliver quality to over 700 million monthly active Windows 10 devices, over 35 million application titles with greater than 175 million application versions, and 16 million unique hardware/driver combinations. In addition, the ecosystem delivers new drivers, firmware, application updates and/or non-security updates daily. Simply put, we have a very large and dynamic ecosystem that requires constant attention and care during every single update.  That all this scale and complexity can “just work” is key to Microsoft’s mission to empower every person and every organization on the planet to achieve more.
Windows 10 marked a change in how we develop, deliver and update Windows: What we call “Windows as a Service.”  We shifted the responsibility for base functional testing to our development teams in order to deliver higher quality code from the start. We also changed the focus of the teams that still report to me who are responsible for end-to-end validation, and added a fundamentally new capability to our approach to quality: the use of data and feedback to better understand and intensely focus on the experiences our customers were having with our products across the spectrum of real-world hardware and software combinations. This combination of testing, engagement programs, feedback, telemetry, real-life insight across complex environments and close partner engagement proved to be a powerful approach that enabled us to focus our feature innovation and monthly updates to deliver improved quality.
This data-driven listening approach has also allowed us to track our quality differently. Over the last three years one of our key indicators of product quality – customer service call and chat volumes – has steadily dropped even as the number of machines running Windows 10 increased.  Another key indicator we track is the Net Promoter Scores (NPS), where we ask people to rate their Windows experience and track the ratio of “promoters” and “detractors.”  Today, the Windows 10 April 2018 Update has the highest Net Promoter rating of any version of Windows 10.  While we are encouraged by these improving quality trends at scale, we also understand that the trend doesn’t matter if you happen to be one of the people experiencing an issue.  Our goal is to provide everyone with only the best experiences on Windows, and we take all feedback seriously.  We are committed to learn from each occurrence, and to rigorously apply the lessons to improve both our products and the transparency around our process.
Continued product quality improvement trend: declining customer incident rate

Testing in Redmond
Our approach to product quality begins by listening to customers and representing their feedback in all aspects of our development process. First, we invest in customer and partner planning feedback to help us shape and frame our product specifications, including defining both required testing and success metrics.  We employ a wide variety of automated testing processes as we develop features, allowing us to detect and correct issues quickly.  We are regularly looking to address gaps in tests and often find them based upon our internal experiences and issues we note with our insiders.  This suite of automated tests grows over time. The most fundamental of these tests must pass for features and code to “integrate up” into the main Windows build that will eventually ship to customers.  In a future blog we’ll detail the extensive testing we do in-house, but it is safe to say that testing is a key part of delivering Windows.
Internally, Windows has what we call an aggressive “self-host” culture.  “Self-host” means that employees working on Windows run the latest internal versions on their machines to ensure they are living with Windows.  The “aggressive” part refers to the tenacious push to make sure local teams run their own builds and pursue any issues found. A strong self-host culture is a source of pride for those of us working on Windows.
Engaging our partners
Given the breadth of our shared ecosystem, testing goes well beyond our Redmond campus and extends around the world to dedicated Windows test labs as well as the facilities of our many partners.  In this way we validate in-market and in-development releases at scale through arrangements with key testing partners including:

  • External testing labs with global, continuous coverage for application compatibility, hardware and peripherals
  • ISVs for a range of apps, including anti-virus (AV)
  • OEMs that partner with us to test and ensure quality across a vast array of systems, devices and drivers

Our approach to the driver ecosystem alone will make for an interesting blog post in this series.  High-quality device drivers are key to a great experience, and we must engage closely with our hardware partners to deliver drivers at scale.  The chart below shows the driver volume we can face month to month, with June peaking to almost 15,000 drivers delivered into the ecosystem!
Monthly driver volume via Windows Update

Engaging our customers
Delivering a quality product requires that we engage customers to provide feedback on our designs and plans. That is why we created the Windows Insider program at the beginning of Windows 10. Beyond the valuable insights Insiders share about the experience, the Windows Insider program expands the volume and diversity of device usage well beyond what we can obtain in our own controlled environments. With Windows Insiders we gain fresh insights and feedback on user experience, compatibility, performance and more. Insider populations are balanced between pre-release and release preview versions that receive cumulative quality updates for drivers and many applications.
We also engage with commercial customers through many programs. The Windows Insider Program for Business allows organizations to access Insider Preview builds to validate apps and infrastructure ahead of the next public Windows release.   The program helps IT professionals give us feedback on the features they use to deploy and manage Windows in an organization. The program has grown by 43 percent in the past six months, and we just introduced Olympia v2, which provides a complete Microsoft 365 deployment and management testing environment.  We also have an invitation-only program for large enterprise customers, the Technology Adoption Preview (TAP) program, that lets customers provide early feedback on product updates via real-world product testing to help us identify issues during the development process.
During development and stabilization of an update, we use all our engagement programs to identify and fix issues that often only emerge in real-life settings. We also continue automated and manual testing, as do our partners across the ecosystem.  We compare quality to previous flights and releases based on all the feedback tools available to us including Feedback Hub and social media. When we are confident in the user update experience, we begin to cautiously release a feature update to our customers.
Data-driven decision making
One of the most critical advances we have made in our approach to quality is to continually improve the data-driven decisions we can make about the reliability of our product in-development and in-market. We begin by using detailed dashboards and metrics to evaluate the builds that we install daily on our own PCs, and only when we have clear evidence that measurable quality is at an acceptable level do we begin to start sharing flights for feedback with Windows Insiders. Our dashboards and metrics then scale to the volume of quality data and feedback we receive once a product is being flighted or is in-market.  We obsess over these metrics as we strive to improve product quality, comparing current quality levels across a variety of metrics to historical trends and digging into any anomaly.  Our data-driven approach is implemented with the highest standards of data privacy protection for our valued customers; you can learn more about that via our Privacy at Microsoft page.  
Rollout principles
Part of our “Windows as a Service” evolution is that we do not “ship” Windows the same way we did before Windows 10.  We leverage our real-time detection and response capabilities to roll out Windows in a careful and data-driven way, and this represents some of the most impactful changes we have made to improve the Windows experience.
The first principle of a feature update rollout is to only update devices that our data shows will have a good experience. One of our most recent improvements is to use a machine learning (ML) model to select the devices that are offered updates first. If we detect that your device might have an issue, we will not offer the update until that issue is resolved.
Second among our principles is to start slowly – to prioritize the update experience over rollout velocity. When a new feature update release is available, we first make it available to a small percentage of “seekers,” users who take action to get the updates early.
Third, we monitor carefully to learn about new issues. We do this by watching our telemetry, closely partnering with our customer service team to understand what customers report to us, analyzing feedback logs and screenshots directly through our Feedback Hub, and listening to signals sent through social media channels. If we find a combination of factors that results in a bad experience, we create a block that prevents similar devices from receiving an update until a full resolution occurs. We continue to look at ways to improve our ability to detect issues, especially high-impact issues where there is low volume and potentially weak signals.  To improve our capability to recognize low-volume issues, we recently added the ability for users to indicate the impact or severity of issues when they provide us feedback.
Responsive and transparent
Even a multi-element detection process will miss issues in an ecosystem as large, diverse and complex as Windows. While we will always work diligently to eliminate issues before rollout, there is always a chance an issue may occur.  When this happens, we strive to minimize the impact and respond quickly and transparently to inform and protect our customers.  Our focus until now has been almost exclusively on detecting and fixing issues quickly, and we will increase our focus on transparency and communication. We believe in transparency as a principle and we will continue to invest in clear and regular communications with our customers when there are issues. Of course, we have a responsibility to protect customers, and in some cases (e.g. zero-day exploits) we prioritize that protection over transparency until security updates are released and we can again be clear about an issue.
 Just the beginning…
We are working on many fronts to ensure our customers have the best, most secure experience on Windows. While we do see positive trends, we also hear clearly the voices of our users who are facing frustrating issues, and we pledge to do more. We will step up our efforts to prevent issues and to respond quickly and openly when issues do arise.  We intend to leverage all the tools we have today and focus on new quality-focused innovation across product design, development, validation and delivery.  We look forward to sharing more about our approach to quality and emerging quality-focused innovation in future posts.
Updated November 13, 2018 10:33 am

Resuming the rollout of the Windows 10 October 2018 Update – Windows Experience Blog

In early October, we paused the rollout of the Windows 10 October 2018 Update as we investigated isolated reports of users missing files after updating.  We take any case of data loss seriously, and as I noted on October 9, we have thoroughly investigated and resolved all related issues.
In addition to extensive internal validation, we have taken time to closely monitor feedback and diagnostic data from our Windows Insiders and from the millions of devices on the Windows 10 October Update, and we have no further evidence of data loss.  Based on this data, today we are beginning the re-release of the October Update by making it available via media and to advanced users who seek to manually check for updates.
As with all Windows releases, we will continue to carefully study the results, feedback and diagnostic data before we begin offering the update in phases to more devices in the coming weeks and months.
While the April Update had the fastest Windows 10 update rollout velocity, we are taking a more measured approach with the October Update, slowing our rollout to more carefully study device health data. We will offer the October Update to users via Windows Update when data shows your device is ready and you will have a great experience. If we detect that your device may have an issue, such as an application incompatibility, we will not install the update until that issue is resolved, even if you “Check for updates,” so you avoid encountering any known problems. For those advanced users seeking to install the update early by manually using “Check for updates” in settings, know that we are slowly throttling up this availability, while we carefully monitor data and feedback.
We plan to add a Windows update status dashboard in the coming year to provide more information on any issues that lead to update blocks. For this current October Update rollout we will be providing regular updates for notable issues on the public Windows 10 update history page.
For our commercial customers, the re-release date of the Windows 10 version 1809 will also be today, November 13, 2018 (this includes Windows Server 2019 and Windows Server, version 1809). This date marks the revised start of the servicing timeline for the Semi-Annual Channel (“Targeted”) release.  As previously announced and beginning with this release, feature updates that release around September will have a 30-month servicing timeline.  Just as we phase our consumer rollout, we recommend IT administrators begin to validate that apps, devices and infrastructure used by their organization work well with the new release before broadly deploying. Windows 10, version 1809 is now available through Windows Server Update Services, Windows Update for Business and System Center Configuration Manager’s phased deployment.  For additional information please see the latest IT Pro Blog.
Updated November 13, 2018 4:48 pm

For Sale – Powermac G4 Quicksilver

Next to get shot of is a Powermac G4 Quicksilver from 2001/2002. In excellent condition for its age, running a clean install of Mac OS X Panther 10.3.

733MHz CPU
256MB RAM (1 slot free)
80GB IDE HDD
3 free PCI slots
CD drive
Boots to desktop with known admin acc and password.

No box and weighs a tonne so collection only.

No idea what this is worth so am open to offers.

Price and currency: 35
Delivery: Goods must be exchanged in person
Payment method: Cash, BT or PPG
Location: North Essex
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected


People’s online social circles are becoming riskier, new Microsoft research shows – Microsoft on the Issues

Bullying, unwanted contact and receiving unwelcome sexual images and messages were the most prominent risks in our latest digital civility research and, while strangers still pose the majority of online threats, data show a distinct rise in risk-exposure from people’s own social circles.

According to preliminary results from our latest study, 63 percent of online risks were sourced from strangers and people whom respondents knew only online – largely unchanged from the previous year. Meanwhile, 28 percent of online risks came from family and friends, up 11 points. In addition, findings revealed a relationship between risk-exposure and familiarity with the perpetrator: respondents who had met their abuser in real life were almost twice as likely to experience an online risk. More disheartening were indications that people were targeted because of their personal characteristics, namely gender, age and physical appearance.

These are some early findings from Microsoft’s latest study, “Civility, Safety and Interactions Online – 2018,” which measured attitudes and perceptions of teens and adults in 22 countries[1] about the online risks they face[2] and how their interactions impact their lives. As with previous years’ surveys, full and final results will be made available on international Safer Internet Day on Feb. 5. We chose to make these results available today in conjunction with World Kindness Day to emphasize the need for more civil and respectful interactions both online and off.

Examining the risk categories: Reputational, behavioral, sexual and personal/intrusive  

In 2017, results showed that people’s digital interactions and responses to online risks appeared to be improving, but what was surprising was that many of those targeted for abuse said their offenders came from their immediate families and social circles. We decided to take a closer look at some of these findings this year, and we found that the unsettling trend was continuing. Indeed, negative experiences from family, friends and acquaintances were up 4 percent, 7 percent and 2 percent, respectively, while a new classification of perpetrators – colleagues and coworkers – accounted for 9 percent of people’s unpleasant interactions online.

As for the nature of online risks across and within the four risk categories – reputational, behavioral, sexual and personal/intrusive – 40 percent of respondents experienced behavioral risks and unwanted contact (a personal and intrusive risk); just over one-third (34 percent) reported negative experiences of a sexual nature, and 28 percent said they fell victim to hoaxes, scams or fraud, another personal and intrusive risk. Interestingly, 60 percent of those who experienced a behavioral risk also experienced unwanted contact and, coincidentally, 60 percent of those who experienced unwanted contact also experienced a behavioral risk.

Perpetrators of risk

Bullying seemed to define the behavioral category. Nearly all respondents who reported experiencing a behavioral risk were targets of name-calling, purposeful embarrassment or some other form of bullying. Unwanted contact was characterized by repeated attempts to contact the target, with more than four in 10 respondents reporting at least one form of repeated unwanted contact. Receipt of unwelcome sexual imagery and messages dominated the sexual risk category, with another nearly four in 10 experiencing repeated attempts to start a romantic relationship. Finally, the commonly experienced hoaxes, scams and fraud risk was led by false and misleading information: fake news and internet hoaxes were the most common type, far outpacing fake anti-virus scams. More detailed findings across all of these individual risks and risk categories will be released on Safer Internet Day 2019.

Get ready for Safer Internet Day 2019: Pledge to be more respectful online

On World Kindness Day and in gearing up for Safer Internet Day, we’re again encouraging global internet users to pledge to engage responsibly online. Follow the example of the 15 impressive teens that served on our inaugural Council for Digital Good, and take our Digital Civility Challenge:

  1. Live the Golden Rule by acting with empathy, compassion and kindness in every interaction, and treating everyone you connect with online with dignity and respect.
  2. Respect differences, honor diverse perspectives and when disagreements surface, engage thoughtfully, and avoid name-calling and personal attacks.
  3. Pause before replying to things you disagree with, and don’t post or send anything that could hurt someone, damage reputations or threaten someone’s safety.
  4. Stand up for yourself and others by supporting those who are targets of online abuse or cruelty, reporting threatening activity and preserving evidence of inappropriate or unsafe behavior.

Find more great advice from our council members here, and visit our website and resources page for help in handling almost any online safety situation. For more regular news and information, you can connect with us on Facebook and Twitter. However you choose to learn and get involved, make this World Kindness Day count when it comes to safer and healthier online interactions.

[1] Countries surveyed:  Argentina, Belgium, Brazil, Canada*, Chile, Colombia, France, Germany, Hungary, India, Ireland, Italy, Malaysia, Mexico, Peru, Russia, Singapore*, South Africa, Turkey, the United Kingdom, the United States and Vietnam. (* Indicates the first time this country has been included in this research.)

[2] In the latest study, the 21 risks break down as follows:

  • Reputational – “Doxing” and damage to personal or professional reputations
  • Behavioral – Being treated meanly; experiencing trolling, online harassment or bullying; encountering hate speech and microaggressions
  • Sexual – Sending or receiving unwanted “sext” messages and making sexual solicitations; receiving unwanted sexual attention – a new risk added in this latest research, and being a victim of sextortion or non-consensual pornography (aka “revenge porn”), and
  • Personal / Intrusive – Being the target of unwanted contact, experiencing discrimination, swatting, misogyny, exposure to extremist content/recruiting, or falling victim to hoaxes, scams or fraud.


Announcing Altaro VM Backup v8

It’s that time of the year again, when I have the distinct pleasure of announcing another new version of our flagship backup and disaster recovery product, Altaro VM Backup! Our awesome development team has been hard at work over the last several months, and now we’re officially launching version 8 of the product! We couldn’t be more excited! In this article, you can find out about the major new feature added to the product, or if you’d prefer, you can jump straight over to the download page and try it out for yourself!

What’s New in Altaro VM Backup v8?

When you’re talking about backup, talk of RTO (Recovery Time Objective) always comes into play. Your RTO is the amount of time within which data or a service needs to be restored after a disaster, and as a backup company we take it pretty seriously. In one of our more recent 7.x releases, we announced support for 5-minute RTOs. If you follow that thought process further, you end up talking about replication capabilities and DR. That’s EXACTLY where we’ve taken the product with this release, by adding WAN-Optimized Replication to our solution stack.

WAN-Optimized Replication

Altaro VM Backup v8 WAN Optimized Replication

WAN-Optimized Replication builds on Altaro’s robust data protection platform by allowing system admins to get configured VMs back up and running at a remote location in minimal time should disaster strike. Changes on a protected VM are replicated to the remote site as often as every 5 minutes (assuming your infrastructure can handle that). As such, little data will be lost, and the affected VM can be brought up on a virtualization host at the DR site with minimal work.

In order to accommodate this need, we’ve added functionality into the Altaro Offsite Server. As of the release of version 8 you simply install an instance of the Altaro Offsite Server directly onto a Hyper-V “DR Host” at the offsite location. This Offsite Server instance will accept the incoming replication and allow you to boot any replication-enabled VM quickly.

This functionality is contained in the Unlimited Plus licensing tier and is automatically included for those customers that are part of the Altaro VM Backup for MSPs Program.

Support for Windows Server 2019


V8 may support Windows Server 2019, but much of this depends on Microsoft’s re-release of Windows Server 2019 following the now-known issues that occurred shortly after its initial launch.

Replication will not be supported for Windows Server 2019 at launch and will be added at a later time.

Things to be aware of…

  • WAN-Optimized Replication will only be available for Hyper-V at launch, with support for VMware-based VMs to follow shortly
  • Replication is supported for Hyper-V 2012/R2 or 2016 at this time
  • VMs running on a 2012/R2 Hyper-V host must be replicated to an Altaro Offsite Server running on a 2012/R2 Hyper-V Host
  • VMs running on a 2016 Hyper-V host must be replicated to an Altaro Offsite Server running on a 2016 Hyper-V Host
  • Orchestrated Failover and Fail-back will be introduced in a future release with a release date TBD

Wrap-Up

We hope you’re as excited about this release as we are, and we’re looking forward to seeing how these new capabilities help you solve day-to-day business problems.

If you’d like to know more, here is a full breakdown of the new features in Altaro VM Backup v8.

If you’re interested in downloading a trial of version 8 you can do so HERE!

We hope you enjoy this release!

Thanks for reading!

Windows Server 2019 Now Available

Introduction

Windows Server 2019 is once again generally available. You can pull the new Windows Server 2019 images—including the new ‘Windows’ base image—via:

docker pull mcr.microsoft.com/windows/servercore:1809
docker pull mcr.microsoft.com/windows/nanoserver:1809
docker pull mcr.microsoft.com/windows:1809

Just like the Windows Server 2016 release, the Windows Server Core container image is the only Windows base image in our Long-Term Servicing Channel. For this image we also have an ‘ltsc2019’ tag available to use:

docker pull mcr.microsoft.com/windows/servercore:ltsc2019

The Nanoserver and Windows base images continue to be Semi-Annual Channel releases only.

(2:02PM PST – All new Windows base images are live)

FAQ

Q: I am seeing “no matching manifest for unknown in the manifest list entries” when I try to pull the image. What do I do?
A: Users will need to be running the latest version of Windows (Windows Server 2019 or the Windows 10 October 2018 Update) in order to pull and run the container images. Since older versions of Windows do not support running newer versions of containers, we disallow users from pulling an image they could not run.
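One hedged way to avoid the error is to match the image tag to the host build before pulling; the output line below is illustrative, and the build numbers map to the releases named in this post (17763 is the October 2018/1809 release, 14393 is Windows Server 2016).

#Check the host build from a command prompt, then pull the matching tag
ver
#-> Microsoft Windows [Version 10.0.17763.xxx]  => the 1809 images will run
docker pull mcr.microsoft.com/windows/servercore:1809

#On an older host (e.g. build 14393), pull the tag that matches that release instead
docker pull mcr.microsoft.com/windows/servercore:ltsc2016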

MCR is the de facto container source

You can now pull any Windows base image:tag combination from the MCR (Microsoft Container Registry). Whether you’re using a container based on the Windows Server 2016 release, version 1709, version 1803 or any tag in between, you should change your container pull references to the MCR source. Example:

#Here’s the old string for pulling a container
docker pull microsoft/windowsservercore:ltsc2016
docker pull microsoft/nanoserver:1709

#Change the string to the new syntax and use the same tag
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
docker pull mcr.microsoft.com/windows/nanoserver:1709

Or, update your dockerfiles to reference the new image location:

#Here’s the old string to specify the base image
FROM microsoft/windowsservercore:ltsc2016

#Here’s the new, recommended string to specify your base image. Use whichever tag you’d like
FROM mcr.microsoft.com/windows/servercore:ltsc2016

We want to emphasize the MCR is not the place to browse for container images; it’s where you pull images from. Docker Hub continues to be the preferred medium for container image discovery. Steve Lasker’s blog post does a great job outlining the unique value proposition the MCR will bring for our customers.

The Windows Server 2019 VM images for the Azure gallery will be rolling out within the next few days and will come packaged with the most up-to-date Windows Server 2019 container images.

Deprecating the ‘latest’ tag

We are deprecating the ‘latest’ tag across all our Windows base images to encourage better container practices. At the beginning of the 2019 calendar year, we will no longer publish the tag; we’ll yank it from the available tags list.

We strongly encourage you to instead declare the specific container tag you’d like to run in production. The ‘latest’ tag is the opposite of specific; it doesn’t tell the user anything about what version the container actually is apart from the image name. You can read more about version compatibility and selecting the appropriate tag on our container docs.
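As a minimal sketch of that practice, declare the exact tag in your dockerfile rather than an unpinned reference; the ltsc2019 tag shown here is the one announced above.

#Pin the exact version you intend to run in production instead of 'latest'
FROM mcr.microsoft.com/windows/servercore:ltsc2019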

Conclusion

For more information, please visit our container docs at aka.ms/containers. What other topics & content would you like to see written about containers? Let us know in the comments below or send me a tweet.

Cheers,

Craig Wilhite (@CraigWilhite)

Dell EMC VxRail HCI builds on VMware Cloud Foundation

Dell EMC is tying its converged storage systems more closely to VMware.

Dell EMC and VMware — both subsidiaries of Dell Technologies — introduced the fruits of joint engineering projects this week at VMworld 2018 Europe. The vendors previewed an addition to Dell EMC VxRail hyper-converged infrastructure and enhancements to the Dell EMC VxBlock converged infrastructure.

The VMware Cloud Foundation on VxRail appliances is scheduled for general availability in 2019. Storage enhancements include automated network configuration with Dell EMC SmartFabric Services, which allows VxRail to directly communicate with VMware NSX.

For VxBlock 1000 converged systems, Dell EMC added tools to build infrastructure as a service, giving customers the ability to launch vRealize management packs directly from the VxBlock Central interface.

Dell EMC also expanded its open networking with the S5200 family of 25 Gigabit Ethernet (GbE) switches. VMware shops can use the top-of-rack switches to create 100 GbE data fabrics for NSX-virtualized traffic moving across racks.

The companies launched a beta program to extend VMware Cloud on Amazon Web Services to on-premises Dell EMC VxRail environments.

Dell EMC: Steady demand for converged, hyper-converged

Hyper-converged infrastructure (HCI) is defined as computing, networking, storage and virtualization, packaged on an integrated appliance. The initial use cases centered on virtual desktop infrastructure and backup, but vendors are starting to position HCI for hybrid clouds.

Dell EMC VxRack HCI appliances are turnkey, rack-scale systems that package Dell EMC PowerEdge servers and VMware vSAN storage software. VMware Cloud Foundation also includes vSphere, NSX software-defined networking and software-defined data center (SDDC) in an integrated software package.

Dell EMC VxRail 4.7 marks a tighter product cadence with VMware, said Jon Siegal, a Dell EMC vice president of product marketing.

“With this version, we will [integrate] the latest vSAN vSphere features in VxRail within 30 days of the latest vSAN vSphere release. Customers will get new functionality more quickly,” Siegal said.

VMware partners with all large server vendors to sell vSAN as part of hyper-converged infrastructure, and Dell EMC sells other HCI products that don’t include vSAN. For instance, the Dell EMC XC Series uses software from VMware’s HCI rival, Nutanix. But Dell EMC VxRail is the main HCI focus for Dell and VMware.

Rich Gagnon, CIO of the city of Amarillo, Texas, identified the tight integration between VMware and Dell EMC as a major factor in Amarillo’s picking VxRail for its primary infrastructure platform. Amarillo currently has three VxRail clusters, with another one planned for its airport IT upgrade.

Gagnon said he also evaluated Nutanix before implementing VxRail earlier this year.

“A lot of it had to do with the fact that Dell EMC and VMware are one company, and they’re not going away,” he said of his decision to pick VxRail.

In a move aimed at edge environments, Siegal said two-node Dell EMC VxRail entry deployments with flexible vSAN licensing will be available in 2019.

Converged infrastructure is sold as individual hardware components from validated OEM partners. Dell EMC VxBlock is based on Cisco servers and networking with VMware software, with the flexibility to match Dell EMC PowerMax, Unity, XtremIO and Isilon NAS.

“We still see a world where customers continue to use converged and hyper-converged infrastructure in their data centers. We made it easier to manage both [forms of convergence] from a single lens,” Siegal said.

Dell EMC SmartFabric network automation

We still see a world where customers continue to use converged and hyper-converged infrastructure in their data centers.
Jon Siegal, vice president of product marketing, Dell EMC

VMware Cloud Foundation on VxRail embeds Dell EMC SmartFabric Services to automate configuration of storage networking. SmartFabric automatically creates VxRail clusters and allows VxRail Manager to manage them with the vCenter client.

Current Dell EMC VxRail customers can get VxRail 4.7 as a free download. New node deployments or expanding VxRail environments will have to wait until December.

Adding network automation to Dell EMC VxRail addresses issues common to hyper-convergence, said Eric Hanselman, a chief analyst at 451 Research.

“Networking is like the Rodney Dangerfield of the data center. It’s been the hardest piece to automate, right up there with storage in terms of risk aversion. Being able to bind the Dell EMC operational pieces to the vRealize orchestration [can have] significant operational benefits,” Hanselman said.

The Dell EMC S5200 top-of-rack switch was designed for VMware SDDC. It provides a physical network underlay to support NSX clients and services. Hanselman said Dell EMC’s 25 GbE switch is a natural progression, as “people march toward more cost-effective 100 GbE” storage networking in hyper-scale environments.

“There are a whole set of capabilities the switch offers for greater configurability and manageability, but the big thing is getting to 25 [GbE] lanes.”

VMware executives touched on Project Dimension at Dell Technologies World in May, laying out plans to deliver SDDC as an on-premises managed service on physical storage infrastructure. Customers can sign up for the beta project, which is expected to be available in 2019.

Storage Media Group Editorial Director Dave Raffo contributed to this story.
