
Wanted – Ryzen Setup (CPU / MB / RAM) & NVIDIA GPU

Hi all,

My 3570k is getting a bit long in the tooth – time to look for an upgrade. Looking for a Ryzen CPU, Motherboard and RAM combo.

Also looking for a decent NVIDIA GPU – something better than an RX 470. As much as I love the card (and AMD cards overall), the issues with drivers and getting ReLive to work reliably are a deal breaker for me and they’re driving me nuts. Never had any issues with ShadowPlay, so I’m looking to go back to the green side. Don’t mind a few gens old.

Basically all I do these days is play Modern Warfare anyhow! Would like something to be able to run it on Medium or above at 70FPS+.

Not looking to spend the world, but offer up and let me know what you have.

Thank you!


Remote access is just one of many COVID-19 IT challenges

The coronavirus pandemic caught quite a few organizations by surprise and the effects may linger long after the quarantine has ended.

IT workers scrambled to stand up technical services and hunt down enough laptops for employees, many of whom were working remotely for the first time. In addition to dealing with technical issues, administrators had to execute time-sensitive deployment projects while explaining the basics: connecting to a VPN, using multifactor authentication and muting the mic during a Zoom meeting.

Few organizations had the finances or the technical ability to quickly stand up a virtual desktop infrastructure environment to provide access to business applications that had been available only to office users. As a temporary measure, some companies made do with their existing hardware and Windows Server licenses to spin up several Remote Desktop Session Host servers and paid for the client access licenses to provide this remote access. Microsoft offered advice to help IT shops that needed to accommodate a sudden influx of users free up bandwidth for critical VPN systems.

Our advisory board members shared their thoughts about the ongoing coronavirus pandemic, how it affected their operations and what lessons they’ve learned during this transition period.

Tips to help IT weather this pandemic storm

Reda Chouffani: For many people working remotely for the first time, experiencing computer problems without an IT person nearby to assist can be overwhelming. There are several ways IT leaders can prepare their teams to expedite support and minimize some common pitfalls of working from home.


Here are a few things IT can consider to ease this transition:

  • Get the right internet speed. While many households have access to broadband, connecting to video conferences or remoting into an office computer is not always guaranteed to go smoothly. Employees who work from home need to make sure they have a reliable wireless setup, because a relatively slow connection will disrupt video conferencing and other cloud services. One way to ease network traffic at home is to put limits on internet use during work hours, such as restricting streaming video services used by other household members.
  • Invest in the right hardware. Because having IT staff physically at the computer to troubleshoot hardware issues may be out of the question, IT departments quickly recognized the importance of sending employees home with reliable laptops and other equipment. This is even more critical today because a broken laptop may leave an employee unable to work for several days while they await a replacement.
  • Train users on communication tools. Since many companies moved to a remote work setup, real-time chat applications, intranets for posting announcements, conferencing products such as WebEx or Microsoft Teams, and file-sharing services have all increased in importance. Companies that had little use for these tools before the coronavirus pandemic have had to readjust quickly and invest in training so users fully understand the tools they are now expected to use.
  • Find other ways to ease the transition. The silver lining of this pandemic is that it gives IT workers a great opportunity to deliver meaningful tools and education during challenging times. Some IT departments have engaged with their users in creative ways beyond the typical troubleshooting exchanges, sending updates through newsletters and sharing daily tips and fun ways to use the new technology, such as jazzing up Zoom or Microsoft Teams backgrounds or using Snapchat filters on a video conference.

Some shops had to scramble to set up security measures

Adam Fowler: Coronavirus changed a lot of priorities. Having the resources — both physical devices and support staff — to rapidly send everyone home was the biggest struggle for many companies. Many people hadn’t worked from home beyond the occasional email from their phone, so staffers were thrown into the deep end to work out what they had and what they needed.

A lot of effort went to the real basics: What sort of internet do I have, where can I plug in my laptop, how do I get this screen working? It’s much harder to talk someone through these issues over the phone when the end user is not familiar with the equipment or the technology.

It was a high-pressure changeover. Everyone needed to keep working, so having to deal with plugging in your own cables or understanding why the wireless connection isn’t working can be frustrating for an end user. Setting up a video conference and choosing which device to use for the speaker and microphone was enough to frustrate people who were already stressed by the great unknown of the coronavirus pandemic.

Security, of course, was another big focus. I expect a lot of companies got caught out by their “we just won’t give people remote access” setup because it was cheaper and easier to manage. There’s a lot of setup work involved in configuring multifactor authentication and poking holes in firewalls in a short period of time. Microsoft’s Azure Active Directory and multifactor authentication/conditional access are in a pretty good state right now, including user onboarding, so the timing of those services’ availability was one positive outcome.

If this pandemic had occurred five years ago, then the IT world would have been in a much worse place. We would have been able to set up remote access, but it would have been less secure. There were products available but at a much higher premium than they are now, along with limited vendor support.


Lack of hardware hampered some efforts

Brian Kirsch: For quite some time, IT has been on a path to reduce hardware, add more cloud services and optimize wherever possible. This helped trim budgets and was a necessary evolution for IT, but then COVID-19 hit. Few of us could have predicted how crucial that hardware was until we needed it to provide services for remote workers.

At my school, we needed to put together a remote lab environment for IT students. We were able to forklift in everything the 600 students needed in a few days instead of a yearlong rollout. It isn’t perfect, but it does the job.

That seems to be the state of things in IT today: It’s not ideal, but it works. We finished the lab quickly because we had the necessary hardware. The project was on our agenda, but COVID-19 expedited the process. We heard anecdotal stories of other schools that struggled to set up similar environments because they didn’t have enough physical servers; their remote access systems broke down when they could not support all the users.

This server shortage isn’t just something that hit higher education. Many IT shops that worked hard to reduce their data center footprint are now laboring to get those systems back so they can provide the services their users need. In addition to servers and other data center hardware, laptops and mobile technology are in short supply, causing prices to shoot through the roof. IT teams continue to struggle to get the technology they need to help employees who need a way to remotely access their organization’s resources.

It’s safe to say few disaster recovery plans had this kind of scope in mind when they were created. Both people and technology are advancing at speeds that were nonexistent before this pandemic, because a once-in-a-lifetime event was not on anyone’s radar.

I am seeing people who have never used remote technology not just getting by, but flourishing as they fully embrace the tools and gain confidence using them. The technology to support them won’t go away when this pandemic is over, so we might just see a reduction in the physical offices we once thought were so necessary to do our jobs.


Will working from home be the new normal after COVID-19?

Nathan O’Bryan: It’s time for corporate America to embrace remote work on a large scale. While the social distancing orders meant to curb the spread of COVID-19 are the most recent and probably most attention-grabbing reason, they’re not the only one. Organizations just starting down the road of supporting remote workers have many challenges to address. The place to start is to define what your employees working from home need, then determine how your organization can secure those resources.

Multiple studies have shown that people are more productive working from home. I know many managers find that difficult to believe, but that’s the case when the remote worker has the proper setup. There are many other compelling reasons to let employees work from home, such as higher morale, less turnover and fewer sick days. Embracing a remote work arrangement can also save businesses a significant amount in office expenses.

We have the IT infrastructure, laptops and phone systems to support a remote workforce for many jobs, but that’s just one piece of the puzzle. Organizations need to build the internal culture, security practices and teamwork norms to support remote work that complies with corporate standards and industry best practices. While this can be a significant undertaking, there is no doubt that it is a necessary one to survive in this time of social distancing.

Protecting your organization’s data is always a primary concern in these situations. Many security policies have been built around the assumption that users will access data from the organization’s physical location, which is not compatible with this new world of remote workers. The IT team will need to rethink how authorized users can access that data from remote locations.


Splice Machine 3.0 integrates machine learning capabilities into its database

Databases have long been used for transactional and analytics use cases, but they also have practical utility to help enable machine learning capabilities. After all, machine learning is all about deriving insights from data, which is often stored inside a database.

San Francisco-based database vendor Splice Machine is taking an integrated approach to enabling machine learning with its eponymous database. Splice Machine is a distributed SQL relational database management system that includes machine learning capabilities as part of the overall platform.

Splice Machine 3.0 became generally available on March 3, bringing with it updated machine learning capabilities, a new cloud-native, Kubernetes-based model for cloud deployment and enhanced replication features.

In this Q&A, Monte Zweben, co-founder and CEO of Splice Machine, discusses the intersection of machine learning and databases and provides insight into the big changes that have occurred in the data landscape in recent years.

How do you integrate machine learning capabilities with a database?


Monte Zweben: The data platform itself has tables, rows and schema. The machine learning manager that we have native to the database has notebooks for developing models, Python for manipulating the data, algorithms that allow you to model and model workflow management that allows you to track the metadata on models as they go through their experimentation process. And finally we have in-database deployment.

So as an example, imagine a data scientist working in Splice Machine working in the insurance industry. They have an application for claims processing and they are building out models inside Splice Machine to predict claims fraud. There’s a function in Splice Machine called deploy, and what it will do is take a table and a model to generate database code. The deploy function builds a trigger on the database table that tells the table to call a stored procedure that has the model in it for every new record that comes in the table.

So what does this mean in plain English? Let’s say in the claims table, every time new claims come in, the system automatically fires the trigger, grabs those claims, runs the model that predicts claims fraud and outputs those predictions into another table. And now all of a sudden, you have real-time, in-the-moment machine learning that is detecting claims fraud on first notice of loss.

What does distributed SQL mean to you?

Zweben: So at its heart, it’s about sharing data across multiple nodes. That provides you the ability to parallelize computation and gain elastic scalability. That is the most important distributed attribute of Splice Machine.

In our new 3.0 release, we just added distributed replication. It’s another element of distribution where you have secondary Splice Machine instances in geo-replicated areas, to handle failover for disaster recovery.

What’s new in Splice Machine 3.0?

Zweben: We moved our cloud stack for Splice Machine from an old Mesos architecture to Kubernetes. Now our container-based architecture is all Kubernetes, and that has given us the opportunity to enable the separation of storage and compute. You literally can pause Splice Machine clusters and turn them back on. This is a great utility for consumption-based usage of databases.

Along with our upgrade to Kubernetes, we also upgraded our machine learning manager from an older notebook technology called Zeppelin to a newer notebook technology that has really gained momentum in the marketplace, as much as Kubernetes has in the DevOps world. Jupyter notebooks have taken off in the data science space.

We’ve also enhanced our workflow management tool, MLflow, an open source tool that originated with Databricks; we’re part of that community. MLflow allows data scientists to track their experiments and keeps that record of metadata available for governance.

What’s your view on open source and the risk of a big cloud vendor cannibalizing open source database technology?

Zweben: We do compose many different open source projects into a seamless and highly performant integration. Our secret sauce is how we put these things together at a very low level, with transactional integrity, to enable a single integrated system. This composition that we put together is open source, so that all of the pieces of our data platform are available in our open source repository, and people can see the source code right now.

I’m intensely worried about cloud cannibalization. I switched to an AGPL license specifically to protect against cannibalization by cloud vendors.

On the other hand, we believe we’re moving up the stack. If you look at our machine learning package, and how it’s so inextricably linked with the database, and the reference applications that we have in different segments, we’re going to be delivering more and more higher-level application functionality.

What are some of the biggest changes you’ve seen in the data landscape over the seven years you’ve been running Splice Machine?

Zweben: With the first generation of big data, it was all about data lakes, and let’s just get all the data the company has into one repository. Unfortunately, that has proven time and time again, at company after company, to just be data swamps.

The data repositories work and they’re scalable, but no one uses the data, and this was a mistake for several reasons.


Instead of thinking about storing the data, companies should think about how to use the data. Start with the application and how you are going to make the application leverage new data sources.

The second reason why this was a mistake was organizational: The data scientists who know AI were all centralized in one data science group, away from the application. They are not the subject matter experts for the application.

When you focus on the application and retrofit the application to make it smart and inject AI, you can get a multidisciplinary team. You have app developers, architects, subject-matter experts, data engineers and data scientists, all working together on one purpose. That is a radically more effective and productive organizational structure for modernizing applications with AI.


MariaDB X4 brings smart transactions to open source database

MariaDB has come a long way from its MySQL roots. The open source database vendor released its new MariaDB X4 platform, providing users with “smart transactions” technology that supports both analytical and transactional workloads.

MariaDB, based in Redwood City, Calif., was founded in 2009 by the original creator of MySQL, Monty Widenius, as a drop-in replacement for MySQL, after Widenius grew disillusioned with the direction that Oracle was taking the open source database.

Oracle acquired MySQL via its acquisition of Sun Microsystems in 2008. Now, in 2020, MariaDB still uses the core MySQL database protocol, but the MariaDB database has diverged significantly in other ways that are manifest in the X4 platform update.

The MariaDB X4 release, unveiled Jan. 14, puts the technology squarely in the cloud-native discussion, notably because MariaDB is allowing for specific workloads to be paired with specific storage types at the cloud level, said James Curtis, senior analyst of data, AI and analytics at 451 Research.

“There are a lot of changes that they implemented, including new and improved storage engines, but the thing that stands out are the architectural adjustments made that blend row and columnar storage at a much deeper level — a change likely to appeal to many customers,” Curtis said.

MariaDB X4 smart transactions converge database functions

The divergence from MySQL has ramped up over the past three years, said Shane Johnson, senior director of product marketing at MariaDB. In recent releases, MariaDB has added Oracle database compatibility, which MySQL does not include, he noted.

In addition, MariaDB’s flagship platform provides a database firewall and dynamic data masking, both features designed to improve security and data privacy. The biggest difference today, though, between MariaDB and MySQL is how MariaDB supports pluggable storage engines, which gain new functionality in the X4 update.


Previously when using the pluggable storage engine, users would deploy an instance of MariaDB for transactional use cases with the InnoDB storage engine and another instance with the ColumnStore columnar storage engine for analytics, Johnson explained.

In earlier releases, a Change Data Capture process synchronized those two databases. In the MariaDB X4 update, transactional and analytical features have been converged in an approach that MariaDB calls smart transactions.

“So, when you install MariaDB, you get all the existing storage engines, as well as ColumnStore, allowing you to mix and match to use row and columnar data to do transactions and analytics, very simply, and very easily,” Johnson said.

MariaDB X4 aligns cloud storage

Another new capability in MariaDB X4 is the ability to more efficiently use cloud storage back ends.

“Each of the storage mediums is optimized for a different workload,” Johnson said.

For example, Johnson noted that Amazon Web Services’ S3 is a good fit for analytics because of its high availability and capacity. He added that for transactional applications with row-based storage, Amazon Elastic Block Store (EBS) is a better fit. The ability to mix and match both EBS and S3 in the MariaDB X4 platform makes it easier for users to consolidate both analytics and transactional workloads in the database.

“The update for X4 is not so much that you can run MariaDB in the cloud, because you’ve always been able to do that, but rather that you can run it with smart transactions and have it optimized for cloud storage services,” Johnson said.

MariaDB database as a service (DBaaS) is coming

MariaDB said it plans to expand its portfolio further this year.

The core MariaDB open source community project is currently at version 10.4, with plans for version 10.5, which will include the smart transactions capabilities, to debut sometime in the coming weeks, according to MariaDB.

The new smart transaction capabilities have already landed in the MariaDB Enterprise 10.4 update. The MariaDB Enterprise Server has more configuration settings and hardening for enterprise use cases.

The full MariaDB X4 platform goes a step further with the MariaDB MaxScale database proxy, which provides automatic failover, transaction replay and a database firewall, as well as utilities that developers need to build database applications.

Johnson noted that new features traditionally tend to land in the community version first but, as it happened, during this cycle MariaDB developers were able to get the features into the enterprise release more quickly.

MariaDB has plans to launch a new DBaaS product this year. Users can already deploy MariaDB to a cloud of their choice on their own, and MariaDB also has a managed service that provides full management of a MariaDB environment.

“With the managed service, we take care of everything for our customers, where we deploy MariaDB on their cloud of choice and we will manage it, administer it, operate it and upgrade it,” Johnson said. “We will have our own database as a service rolling out this year, which will provide an even better option.”


OneDrive Personal Vault and expandable storage now available worldwide

Microsoft OneDrive has long been an innovation leader in cloud storage, and today we’re excited to launch a new feature that gives you greater security for your files in the cloud. This summer, we announced OneDrive Personal Vault, which uses identity verification to protect your most important files. Now we’re happy to share that Personal Vault is available worldwide on all OneDrive consumer accounts. Additionally, we have more OneDrive news to share on expandable storage options, automatic folder backup, and dark mode—read on to learn more.

Meet Personal Vault

Personal Vault is a protected area in OneDrive that can only be accessed with a strong authentication method or a second step of identity verification, such as your fingerprint, face, PIN, or a code sent to you via email or SMS.1 Personal Vault gives you an added layer of protection for your most important files, photos, and videos—for example, copies of documents such as your passport, driver’s license, or insurance information—should someone gain access to your account or device.

Plus, this added security won’t slow you down. You can quickly access your important documents, photos, and files with confidence wherever you are, on your PC, OneDrive.com, or your mobile device.2

Beyond a second layer of identity verification, Personal Vault also includes the following security measures:

  • Scan and shoot—Using the OneDrive app, you can scan documents or shoot photos directly into your Personal Vault, keeping them off less secure areas of your device, like your camera roll.
  • Automatic locking—No need to worry about whether you left your Personal Vault or your files open—both will close and lock automatically after a period of inactivity.3
  • BitLocker encryption—On Windows 10 PCs, OneDrive automatically syncs your Personal Vault files to a BitLocker-encrypted area of your local hard drive.4
  • Restricted sharing—To prevent accidental sharing, files in Personal Vault and shared items moved into Personal Vault cannot be shared.

Taken together, these security measures help ensure that Personal Vault files are not stored unprotected on your PC, and your files have additional protection, even if your Windows 10 PC or mobile device is lost, stolen, or someone gains access to it or to your account.


Personal Vault is the latest advancement in OneDrive’s suite of security features, which also includes file encryption at rest and in transit, suspicious sign-in monitoring, ransomware detection and recovery, mass file deletion notification and recovery, virus scanning on downloads for known threats, password protected sharing links, and version history for all file types.

Personal Vault is now available worldwide

To start using Personal Vault, look for the Personal Vault icon in your OneDrive and simply click or tap it. If you’re using OneDrive’s free or standalone 100 GB plan, you can store up to three files in Personal Vault. Office 365 Personal and Office 365 Home subscribers can store as many files as they want in Personal Vault, up to their storage limit.


Learn more in this Personal Vault podcast on Intrazone.

Backing up your folders just got easier

We made it easy to back up your important folders to OneDrive—so your files are protected and available even if something happens to your PC. With PC folder backup you can choose to automatically back up files in your Desktop, Documents, or Pictures folders to OneDrive. Now you don’t have to worry about protecting your work—OneDrive will do it for you.

You can also access your backed-up files even when you’re away from your PC—just use the OneDrive mobile app or go to OneDrive.com. Plus, saving your files to OneDrive allows you to view and restore previous versions of your files up to 30 days in the past.

PC folder backup is now more deeply integrated with the newest version of Windows 10, so you can easily enable it during Windows setup or updates. The feature is included with all OneDrive consumer plans and is available on Windows 7, 8, and 10 PCs with the OneDrive sync app. Learn more about PC folder backup.

OneDrive fans rejoice—additional storage is now available!

In June, we announced that we would deliver on one of the most requested OneDrive features of all time—more storage options. Now you can add storage to your existing Office 365 subscription in 200 GB increments, starting at $1.99 per month.5 Learn more about OneDrive additional storage.

Dark mode is now available on OneDrive iOS

We’re also thrilled to announce that the OneDrive mobile app on iOS 13 now supports dark mode. This dramatic new look is both easy on the eyes and lets you take full advantage of an OLED screen to save battery life. To try it out, simply set your iOS 13 device to Dark Appearance in Settings > Display and Brightness and then open the OneDrive app.


Let us know what you think

To let us know what you think or share your thoughts and ideas, visit OneDrive UserVoice. To learn more about all the advanced protection features included in Office 365 Home and Office 365 Personal subscriptions, see our support page.

Notes:
1 Face and fingerprint verification requires specialized hardware including a Windows Hello capable device, fingerprint reader, illuminated IR sensor, or other biometric sensors and capable devices.
2 OneDrive for Android requires Android 6.0 or later; OneDrive for iOS requires iOS 11.3 or later.
3 Automatic locking interval varies by device and can be set by the user.
4 Requires Windows 10 version 1903 or above.
5 Additional storage only available to Office 365 Home and Office 365 Personal subscribers. For Office 365 Home subscribers, only the primary subscription holder may purchase additional storage, and only for that user’s account.


A Look Inside Tools and Weapons: The Promise and the Peril of the Digital Age

A day we’ve long anticipated has finally arrived. Today, the new book that Carol Ann Browne and I have written, Tools and Weapons: The Promise and the Peril of the Digital Age, is published by Penguin Press and Hodder & Stoughton in North America and in English-language markets around the world. We chose the phrase “Tools and Weapons” to capture the paradox of technology. While tech companies like Microsoft create products and services to serve humanity, that same tech is being weaponized to inflict harm. And more indirectly, many of the issues people debate today, like income inequality, trade, immigration and globalization, are all enabled and fueled by technology.

These challenges affect us all, no matter where we live, fostering a new age of anxiety. Tools and Weapons starts with the proposition that if your technology changes the world, you bear a responsibility to help the world navigate these changes. We wrote the book to make these issues more accessible to people and to examine ways to address them.

As we worked on the book, Carol Ann and I reflected on several stories drawn from current events, issues faced by Microsoft, and history. Why history? As we delved into the issues, we realized most have parallels from the past. The horse lost its job to the car, trains forced interstate regulation, the public revolted against the radio in the 1940s, and people feared that early cameras and the advent of street lamps would invade their privacy. But what’s different today is the speed of change. In a way, the issues created by today’s technology aren’t unprecedented, things are just moving a lot faster.

Tools and Weapons opens with a tour of what has become the world’s filing cabinet – the cloud. While the cloud is the underpinning of almost every aspect of society, most people don’t understand what it truly is: a massive fortress of concrete and steel. And while there is no cloud without a data center, these complexes are shrouded in mystery. We realized that to understand how the world really works today, you need to visit a data center. That’s why we open the book by taking the reader on the type of tour that typically is available only to a few industry insiders.

I hope that when people read this book, they will gain not only a better understanding of the forces changing our world, but also a sense that there is a promising way forward. It is a path that requires the entire technology sector to change and take on more responsibility. It’s also a path that requires governments to do more, to move faster and change as well. Fundamentally, it’s a path that requires that we work together in very concrete ways to bring together people who create technology, people who use technology, people who govern technology, and people who are impacted by it. As the book illustrates with concrete and colorful stories, we believe that this will provide the best approach to address issues that range from privacy and security to the development of artificial intelligence and the impact of technology on our jobs and international relations between nations, including the U.S. and China.

And there is one other thing that was very near and dear to our hearts. For all of us who like to read, as Microsoft CEO Satya Nadella has said, we all buy more books than we start and start more books than we finish. We had a clear goal throughout our writing process: to write a book that people will enjoy reading. I hope you enjoy it. Please tell us what you think on LinkedIn or Twitter.

Tools and Weapons is available today in the English language at retailers including Amazon, Barnes & Noble and international booksellers. Editions in additional languages will publish in the coming months. To learn more, visit the Tools and Weapons website and register for public events in your city.



How to manage Windows with Puppet

IT pros have long aligned themselves with either Linux or Windows, but it has grown increasingly common for organizations to seek the best of both worlds.

For traditional Windows-only shops, the thought of managing Windows systems with a server-side tool made for Linux may be unappealing, but Puppet has increased Windows Server support over the years and offers capabilities that System Center Configuration Manager and Desired State Configuration do not.

Use existing Puppet infrastructure

Many organizations use Puppet to manage Linux systems and SCCM to manage Windows servers. SCCM works well for managing workstations, but admins could manage Windows more easily with Puppet code. For example, admins can easily audit a system configuration by looking at the code manifests, as the sketch below shows.
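
To illustrate that auditability, here is a minimal sketch of what a Windows baseline might look like as a manifest. The class name, service and directory are illustrative examples, not configurations from the article:

# Hypothetical Windows baseline. Reading the manifest top to bottom
# doubles as an audit of the node's intended configuration.
class profile::windows_baseline {

  # The Windows Update service must be running and start at boot.
  service { 'wuauserv':
    ensure => 'running',
    enable => true,
  }

  # A standard tools directory must exist.
  file { 'C:/Tools':
    ensure => 'directory',
  }
}

Because the desired state is declared rather than scripted, reviewing a node's configuration is a matter of reading (or diffing) the manifest instead of inspecting the machine.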

Admins manage Windows with Puppet agents installed on Puppet nodes. They use modules and manifests to deploy node configurations. If admins manage both Linux and Windows systems with Puppet, it provides a one-stop shop for all IT operations.

Combine Puppet and DSC for greater support

Admins need basic knowledge of Linux to run a Puppet master service. A Puppet master isn’t strictly required, because admins can write manifests on the nodes and apply them locally, but that is likely not a scalable option. For purely Windows-based shops, training in both Linux and Puppet will make taking the Puppet plunge easier. It requires more time to set up and configure Windows systems in Puppet the same way they would be configured in SCCM, so admins should design the code before users start writing and deploying Puppet manifests or DevOps teams add CI/CD pipelines.


DSC is one of the first areas admins look to when managing Windows with Puppet code. The modules are written in C# or PowerShell. DSC has no native monitoring GUI, which makes getting an overall view of a machine’s configuration complex. Puppet, in its enterprise version, has native support for web-based reporting; admins can also use a free open source option, such as Foreman.

Due to the number of community modules available in the PowerShell Gallery, DSC receives the most Windows support for code-based management, but admins can combine Puppet with DSC to get complete coverage for Windows management. Puppet contains native modules as well as a DSC module with common PowerShell DSC modules built in. Admins may also use the dsc_lite module, which can call almost any DSC module from Puppet, including modules maintained completely outside of Puppet, as the example below shows.
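
As a rough sketch of how that looks, a dsc_lite declaration names the DSC resource and the PowerShell module that ships it, then passes the properties through. The feature being installed here is just an example:

# Using dsc_lite's generic 'dsc' type to invoke the WindowsFeature DSC
# resource from the PSDesiredStateConfiguration PowerShell module.
dsc { 'iis_install':
  resource_name => 'WindowsFeature',
  module        => 'PSDesiredStateConfiguration',
  properties    => {
    'ensure' => 'Present',
    'name'   => 'Web-Server',
  },
}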

How to use Puppet to disable services

Administrators can use Puppet to run and disable services. Using native Puppet support, without a DSC Puppet module, admins can write a manifest that ensures the Netlogon, BITS and W3SVC services are running after every Puppet run. Place the name of each Windows service in the Puppet array $svc_name:

$svc_name = ['netlogon', 'BITS', 'W3SVC']

service { $svc_name:
  ensure => 'running',
}
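
The disable case from the section heading uses the same resource type with the opposite settings. A minimal sketch, with the Print Spooler service standing in as an arbitrary example:

# Stop the Print Spooler service and prevent it from starting at boot.
service { 'Spooler':
  ensure => 'stopped',
  enable => false,
}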

In the next example, the Puppet DSC module ensures that the Web Server Windows feature is installed on the node and reboots the node if a pending reboot is required.

dsc_windowsfeature { 'webserverfeature':
  dsc_ensure => 'present',
  dsc_name   => 'Web-Server',
}

reboot { 'dsc_reboot':
  message => 'Puppet needs to reboot now',
  when    => 'pending',
  onlyif  => 'pending_dsc_reboot',
}


For Sale – Corsair HX1200 PSU

Selling a Corsair HX1200 that’s been sitting on the shelf for too long now. Bought in Nov 2017 from Scan, so it still has warranty remaining.

As new condition and only used for a month or so, with full box and papers etc.

Looking for £140 inc special delivery

Price and currency: 140
Delivery: Delivery cost is included within my country
Payment method: Cash or Bank Transfer
Location: Ingatestone, Essex
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference
