Category Archives: Microsoft Blog

Microsoft Teams room system ecosystem expands

Polycom and HP have partnered to release a room system for Microsoft Teams, giving businesses yet another option for outfitting conference rooms to support video conferencing through Microsoft’s cloud-based collaboration app.

Microsoft also this week announced that it was bundling the licenses needed to operate room systems into a single monthly bill, an offering that should make it easier and cheaper to set up those devices. Previously, businesses had to buy up to five separate licenses; now, it’s $15 per device, per month.

In conference rooms, Microsoft faces competition from Zoom and Cisco. The former has cultivated partnerships with a wide range of hardware vendors to support Zoom Rooms, while the latter manufactures its own video and audio endpoints.

The newest Microsoft Teams room system combines the HP Elite Slice mini-computer with the Polycom Trio conference room phone and the Polycom EagleEye USB camera. The system, which also works with Skype for Business, is targeted at medium-sized conference rooms (for roughly 10 people), but businesses can buy extra microphones for use in larger spaces.

Room kits like the one from Polycom and HP come with components that are designed, or at least tested, to work together, making them more reliable than a homemade mix of off-the-shelf components, said Rob Arnold, analyst at Frost & Sullivan.

“It also demonstrates that Microsoft’s move away from requiring Surface Pro as the room controller is providing more opportunities for partners (in this case for HP laptops?), which will also create more options for customers,” Arnold said.

In addition to Polycom and HP, hardware vendors Lenovo, Crestron and Logitech have released room systems for Microsoft Teams and Skype for Business. (Microsoft has stopped adding new features to Skype for Business and is encouraging customers to switch to Teams.)

However, the certification process for all Office 365 room systems is still under the Skype for Business brand. Over the summer, Microsoft released a software update that let Skype for Business room systems connect to Teams.

The vendor is expected to unveil a certification for Microsoft Teams room systems sometime next year with support for Teams-only features, including integration with the AI voice assistant Microsoft Cortana.

Meanwhile, Polycom, BlueJeans and Pexip have partnered with Microsoft to develop cloud-based interoperability services that let businesses connect third-party, standards-based endpoints to Teams. Pexip also supports an on-premises version of the gateway.

These offerings should appeal to businesses that want to get the most out of the hardware they already own while transitioning to Teams, said Tom Arbuthnot, principal solutions architect at Modality Systems, a Microsoft-focused systems integrator.

However, some organizations may feel constrained by the limited video interoperability options for Teams compared with what’s available for Skype for Business.

“The Teams ecosystem is maturing at a rapid clip but still doesn’t equal the breadth and depth of the longer-tenured [Skype for Business] ecosystem,” Arnold said. “It will get there, though.”


Wanted – Basic Desktop PC for Light Gaming

I’ve got a complete PC… it was very expensive when built… just gathering dust.

Plus I’m in Nottingham.

Quick specs from memory:

ANTEC Mesh Air Midi CASE with ANTEC 620 WATT PSU.

ASUS ROG MAXIMUS 5 1155 Motherboard

Intel Quad Core I7-2600K 3400MHz 8MB Cache LGA1155

8GB DDR3 1600MHZ CORSAIR Vengeance LP Memory.

256GB SAMSUNG 830 Series SSD

1GB Gigabyte GTX 560Ti OC, 40nm, 4000MHz GDDR5


Probably need to upgrade the video card… but the rest of the specs will more than support a 1070 Ti. If you go to a 1080, I’d up the memory to 16GB.

Windows 10 genuine installed…

Would like £300 for it though…


‘State of Decay 2’ soundtrack now available on vinyl | Windows Experience Blog

The 4 million players of “State of Decay 2” have been enjoying the game’s soundtrack by BAFTA award-winning and Billboard/MTV VMA-nominated Danish composer Jesper Kyd, a score available on iTunes, Spotify and Amazon. Starting Dec. 18, fans can purchase the “State of Decay 2” Special Edition – Double Vinyl for $39.99. The double vinyl includes 10 previously unreleased tracks in addition to the atmospheric guitar and analog-synth score heard in the game.
To celebrate the release, Xbox Wire caught up with Kyd to ask him some questions from the “State of Decay 2” player community. Head on over to read the interview.
Updated December 19, 2018 1:12 pm

Free PowerShell Script for Hyper-V: Integration Services on Older Systems



<#
.SYNOPSIS
Installs the Hyper-V integration services into offline local virtual machines.

.DESCRIPTION
Installs the Hyper-V integration services into offline local virtual machines.

Built specifically to work on Windows Server 2012 R2 guests. Modify the default filenames and override $Path to use a different set of Integration Services.

Use the -Verbose switch for verification of successful installations.

.PARAMETER VM
The name or virtual machine object(s) (from Get-VM, etc.) to update.

.PARAMETER Path
A valid path to the update CABs. MUST have sub-folders named amd64 and x86 with the necessary update files.
You can override the file names by editing the script.

.PARAMETER x86
Use the x86 update instead of x64.

.PARAMETER Try32and64
Attempt to install both the 32-bit and 64-bit updates. Use if you're not sure of the bitness of the contained guest.

.EXAMPLE
C:\PS> Update-VMIntegrationServices -VM vm04

Installs the x64 updates on the VM named vm04.

.EXAMPLE
C:\PS> Get-VM | Update-VMIntegrationServices -Try32and64

Attempts to update all VMs. Will try to apply both 32 and 64 bit to see if either is applicable.

.NOTES
Author: Eric Siron
Version 1.0, December 11, 2018
Released under MIT license
#>

#requires -Version 4
#requires -RunAsAdministrator
#requires -Module Hyper-V

[CmdletBinding(DefaultParameterSetName = 'Default')]
param(
    [Parameter(Mandatory = $true, Position = 1, ValueFromPipeline = $true, ParameterSetName = 'Default')]
    [Parameter(Mandatory = $true, Position = 1, ValueFromPipeline = $true, ParameterSetName = 'x86Only')]
    [Parameter(Mandatory = $true, Position = 1, ValueFromPipeline = $true, ParameterSetName = 'TryBoth')]
    [Object[]]$VM,

    [Parameter(ParameterSetName = 'Default')]
    [Parameter(ParameterSetName = 'x86Only')]
    [Parameter(ParameterSetName = 'TryBoth')]
    [String]$Path = [String]::Empty,

    [Parameter(ParameterSetName = 'x86Only')][Switch]$x86,

    [Parameter(ParameterSetName = 'TryBoth')][Switch]$Try32and64
)

begin
{
    Set-StrictMode -Version Latest

    $DefaultPath = Join-Path -Path $env:SystemRoot -ChildPath 'vmguestsupport'
    $x64UpdateFile = ''    # file name of the x64 update CAB under the amd64 sub-folder; edit to match your files
    $x86UpdateFile = ''    # file name of the x86 update CAB under the x86 sub-folder; edit to match your files

    if ([String]::IsNullOrEmpty($Path)) { $Path = $DefaultPath }
    $Path = (Resolve-Path -Path $Path -ErrorAction Stop).Path

    $UpdateFiles = New-Object -TypeName System.Collections.ArrayList
    if ($x86 -or $Try32and64)
    {
        $OutNull = $UpdateFiles.Add((Resolve-Path -Path (Join-Path -Path $Path -ChildPath $x86UpdateFile) -ErrorAction Stop).Path)
    }
    if (-not $x86)    # add the x64 update unless running in x86-only mode
    {
        $OutNull = $UpdateFiles.Add((Resolve-Path -Path (Join-Path -Path $Path -ChildPath $x64UpdateFile) -ErrorAction Stop).Path)
    }
}

process
{
    if ($VM.Count -eq 0) { return }

    $VMParamType = $VM[0].GetType().FullName
    switch ($VMParamType)
    {
        'Microsoft.HyperV.PowerShell.VirtualMachine' {
            # preferred condition, so do nothing; just capture the condition
        }
        'System.String' {
            $VM = Get-VM -Name $VM
        }
        default {
            Write-Error -Message ('Cannot work with objects of type {0}' -f $VMParamType) -ErrorAction Stop
        }
    }

    foreach ($Machine in $VM)
    {
        Write-Progress -Activity 'Adding current integration components to VMs' -Status $Machine.Name -Id 7 # ID chosen so that it doesn't collide with Add-WindowsPackage or *-DiskImage
        if ($Machine.State -eq [Microsoft.HyperV.PowerShell.VMState]::Off)
        {
            $VMHDParams = @{
                VM                 = $Machine;
                ControllerType     = [Microsoft.HyperV.PowerShell.ControllerType]::IDE;
                ControllerNumber   = 0;
                ControllerLocation = 0
            }
            if ($Machine.Generation -eq 2)
            {
                $VMHDParams.ControllerType = [Microsoft.HyperV.PowerShell.ControllerType]::SCSI
            }

            $VHDPath = [String]::Empty
            $VHDDrive = Get-VMHardDiskDrive @VMHDParams
            if ($VHDDrive) { $VHDPath = $VHDDrive.Path }
            if ([String]::IsNullOrEmpty($VHDPath))
            {
                Write-Warning ('VM "{0}" has no primary hard drive' -f $Machine.Name)
                continue
            }

            $DiskNum = (Mount-VHD -Path $VHDPath -Passthru).DiskNumber
            $DriveLetters = (Get-Disk $DiskNum | Get-Partition).DriveLetter
            if ((Get-Disk $DiskNum).OperationalStatus -ne 'Online')
            {
                Set-Disk -Number $DiskNum -IsOffline $false
                Set-Disk -Number $DiskNum -IsReadOnly $false
            }

            # Install the patch
            $TargetDriveLetter = $null
            foreach ($DriveLetter in $DriveLetters)
            {
                if (Test-Path ($DriveLetter + ':\Windows'))
                {
                    $TargetDriveLetter = $DriveLetter
                    break
                }
            }

            if ($TargetDriveLetter)
            {
                foreach ($UpdateFile in $UpdateFiles)
                {
                    try
                    {
                        $OutNull = Add-WindowsPackage -PackagePath $UpdateFile -Path ($TargetDriveLetter + ':\') -ErrorAction Stop
                    }
                    catch
                    {
                        # Add-WindowsPackage writes to the warning and the error stream on errors, so let its warning speak for itself.
                        # Only include more information for an unnecessary patch.
                        if ($_.Exception.ErrorCode -eq 0x800f081e)
                        {
                            Write-Warning 'This package is not applicable'
                        }
                    }
                }
            }
            else
            {
                Write-Error -Message ('No drive on VM {0} has a Windows folder' -f $Machine.Name) -ErrorAction Continue
            }

            Dismount-VHD -Path $VHDPath
        }
        else
        {
            Write-Warning -Message ('{0} cannot be updated because it is not in an Off state' -f $Machine.Name)
        }
    }
}


Author: Eric Siron

Competition win a steppingstone in the greater journey to create sustainable farming – Microsoft Research

From left to right: Thomas Follender Grossfeld, Kenneth Tran, Chetan Bansal, and David Katzin of Team Sonoma


The cucumber plants, their leaves wide and green and veiny, stood tall in neat rows, basking in the Netherlands sunlight shining through the glass panes of their greenhouses. Hopes were high for the plants—a bountiful crop in just four months using as few resources as possible. With the right amount and type of care, they’d produce vegetables for consumers to enjoy. To the casual observer, though, it might have seemed like the plants had been left to their own devices. Greenhouse staff passed through to harvest or adjust cameras and other electrical devices inside, but human contact from those responsible for determining how much water, nutrition, and light the plants received was nonexistent. That was the point.

This spring, Wageningen University & Research and corporate sponsor Tencent challenged researchers, scientists, and experts from across sectors: Build the greenhouse of the future. Motivated by potential strain on traditional methods of food production as a result of a growing world population and seeing a solution in greenhouses that are operational sans on-site human expertise, organizers asked competition participants to use artificial intelligence to maximize cucumber production while minimizing resources—and to do so remotely.

Nine months later, Microsoft Research’s Team Sonoma, led by Principal Research Engineer Kenneth Tran, beat out four other interdisciplinary teams to win the Autonomous Greenhouse Challenge, creating an agent that produced more than 55 kilograms of cucumber per square meter with a net profit of €25/m2.

“This was the first time worldwide cucumbers were grown in greenhouses remotely on AI,” says challenge coordinator Silke Hemming. “We at Wageningen University & Research were excited to collaborate with different teams on this exciting international challenge. Team Sonoma was able to beat the manual-grown reference operated by good Dutch growers. They not only reached the highest net profit, but the jury also ranked them highest on total sustainability.”

With a net profit 17 percent higher, Sonoma was the only AI team to best the reference expert growers, and its net profit was 25 percent higher than that of the second-place team, led by researchers at Tencent AI Lab. Net profit counted the most toward overall performance in the competition, while algorithm novelty and capacity accounted for 30 percent and sustainability—based on efficiency in energy, water, CO2, and pesticide usage—accounted for 20 percent.

A greater journey

For Tran and Microsoft, the work demonstrated in the competition is part of a larger commitment to deploying cloud, Internet of Things, and AI technologies to protect and sustain the planet and its natural resources. In July 2017, Microsoft launched AI for Earth to support individuals and organizations doing work in water, agriculture, biodiversity, and climate change with grants, education, and further collaboration. The initiative’s strategic approach and funding has since been expanded, and the gains being made, especially in the area of data-driven farming, have been impressive. FarmBeats is among the projects receiving recognition for its impact, and Tran and another Team Sonoma member, Senior Research Software Engineer Chetan Bansal, are also contributors to that work.

While FarmBeats is improving data collection outdoors with sensors, drones, and other devices for more sustainable farming, Team Sonoma’s work is in controlled environment agriculture (CEA), an enclosed system of farming that allows growers to determine and execute optimal settings for such environmental factors as light, temperature, humidity, and CO2 concentration.

Tran’s interest in CEA as a research area was piqued in 2017, about a year before he had heard about the Autonomous Greenhouse Challenge. As a member of the Reinforcement Learning group with Microsoft Research AI, he and his colleagues explore the machine learning technology’s potential for real-world application. Not only is CEA’s ability to have meaningful impact attractive—a more efficient, accessible means to meeting the nutritional demands of the earth’s population—but it is also a great training ground for reinforcement learning models. CEA offers contained scenarios in which to work and an abundance of data, the collection of which is relatively quick and easy thanks to sensing technology and IOT.

“The state of the art of reinforcement learning is notoriously data hungry, so it is critical that we focus on new sample-efficient algorithms,” says Tran. “To faster close the gap, though, we also need environments where we can collect a lot of data easily and affordably.”

Tran (center) with collaborators from Sananbio at a vertical farm inside the company’s Xiamen, China, facility.


The application focus of Sonoma—the project name for Tran and his colleague’s overall work in the area, as well as the name for the greenhouse challenge team—has been greenhouse and vertical farming, both of which have the potential for safer, faster food production with less use of the resources that have been literally the foundation of traditional farming—land and water. According to the greenhouse competition website, indoor cultivation such as greenhouse and vertical farming can decrease water requirements by up to 90 percent, needs one-tenth the space to produce the same amount of crop as more traditional farming, and can thrive on less pesticides and chemicals. This promising solution requires a workforce of indoor farming experts that might be outpaced by the demand for indoor farming, though, and Tran has made it Sonoma’s goal to help create autonomy in the space.

“AI can help both scaling the expert knowledge in developed countries such as the Netherlands to developing countries, but also improve upon the expert growers,” says Tran.

To reach the Sonoma goal, Tran leads with what he describes as a bottom-up, top-down approach.

“By bottom up, we mean doing novel research in reinforcement learning, the very fundamental reinforcement learning research, and also doing the application-centric research simultaneously,” he explains. “Applicable reinforcement learning research is still at a very, very early stage, so there is a lot of ground for new research to be done. For the top-down aspect, our approach is to seek collaboration with domain experts from around the world.”

The Sonoma project members reflect that philosophy: Tran and Bansal, both of Microsoft Research, represent the AI side. On the plant science side and from partnership institutes are research scientist Xiuming Hao of Agriculture and Agri-Food Canada (AAFC) and Chieri Kubota, professor of controlled environment agriculture at The Ohio State University, among other collaborators.

A competition along the way

It was Tran’s strong belief in the power of collaboration that led to a Team Sonoma for the international Autonomous Greenhouse Challenge. He was visiting Wageningen University & Research—a leading partnership in food production—in March 2018 to explore opportunities for collaboration. During the meeting, his Wageningen counterparts mentioned the challenge, and he brought it back to his team at Microsoft Research. They were all intrigued.

“We started reading about the competition and got excited,” recalls Tran. “It was a great opportunity to get our feet wet and quickly. There was already a strong commitment from multiple key players in the area—Wageningen University & Research, Tencent, Intel, and Delphy and AgroEnergy, among them—working toward a shared goal and a shared vision. Plus, there was guaranteed support from Wageningen staff during the course of the competition.”

And Team Sonoma was formed: Tran from the Microsoft Research Redmond lab; Bansal from the Microsoft Research India lab; Thomas Follender Grossfeld and Vincent van Wingerden of Microsoft Nederland; David Katzin, a Wageningen PhD student; and Hong Phan, a PhD student from the University of Copenhagen. Sonoma was one of five teams selected from a pool of 15 for the main event after a pre-competition “hackathon” that included a virtual growing challenge and a presentation of their approach.

Each team received 96 square meters of greenhouse space at the Wageningen University & Research campus in Bleiswijk in the Netherlands. Each greenhouse was outfitted with the same system of actuators, including ventilation, heating and artificial lighting, and sensors to measure, among other things, temperature, moisture, and energy consumption. Teams were also permitted to install additional sensors and monitoring equipment.

Sonoma installed additional cameras but only one additional sensor—a leaf-wetness sensor, which was not among the preinstalled competition sensors. The team chose the sensor for increased monitoring of humidity and moisture, two factors that lead to crop-damaging pests and disease. Teams were permitted inside their greenhouse only once, to set up their extra cameras and sensors. Throughout the competition, from September 1 to mid-December, they ran their AI frameworks remotely.

But for Bansal, the distance from the site of data collection wasn’t as challenging as the means of data collection itself. The team needed to design a system that could account for Murphy’s law—whatever can go wrong will go wrong.

“The data is coming from multiple sources—there are sensor boxes, cameras, the API being used by the greenhouse,” says Bansal. “All of them can and have failed, so the question is how you’re able to detect it and how fast you’re able to react.”

“We had to deal with all of those issues and design a system that was resilient to all of them, but I think that’s part of designing any real-world control system,” Bansal added.
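A resilient pipeline of the kind Bansal describes usually comes down to tracking the last good sample from each feed and flagging any source that goes quiet, so the system can react quickly. The following Python sketch is illustrative only; the source names, timestamps, and five-minute threshold are hypothetical, not details of Team Sonoma's system:

```python
# Minimal staleness watchdog: flag any data source whose most recent
# good sample is older than a threshold. Source names and the threshold
# are invented for illustration.
def stale_sources(last_seen, now, max_age_s=300):
    """last_seen maps source name -> timestamp of its last good sample."""
    return sorted(src for src, ts in last_seen.items() if now - ts > max_age_s)

# Example: the camera feed has gone quiet while the others are current.
last_seen = {"sensor_box": 1000.0, "camera": 700.0, "greenhouse_api": 990.0}
print(stale_sources(last_seen, now=1010.0))  # -> ['camera']
```

In practice each ingest path (sensor boxes, cameras, the greenhouse API) would update its own `last_seen` entry on every successful read, and the watchdog would run on a timer and trigger alerts or fallbacks.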

The Sonoma AI approach

Sonoma chose to build its framework around approximate Bayesian model-based reinforcement learning.

“We bet on model-based RL because we think it is sample-efficient and generalizable,” says Tran. “Sample efficiency is critical for real-world applications. Standard RL algorithms require a huge number of trials—in the millions—to train a good agent, even in simple environments. This is not a big deal in games, where RL has shown success, because an agent can play as many games as it needs. In real-world applications, we cannot afford to run millions of failed trials. So we need to think differently about RL.”

For reinforcement learning to be a viable solution for today’s societal challenges, the team determined, the agent must be initialized to perform as strongly as any existing system and must be able to learn and improve over time, with no limit on its capacity to reach optimality. The team conceived a framework incorporating these features.

The framework begins by training a probabilistic dynamics model. This model learning is analogous to building a simulator, which helps the agent to plan by imagining. In addition, by way of imitation learning, the agent is initialized to perform like an existing expert policy. From there, the agent will operate on a continual model-based policy optimization process, improving its overall performance with every environmental interaction.
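The shape of that loop can be sketched in a few lines of Python. This toy example is illustrative only, not Team Sonoma's code: the "greenhouse" is a single biomass number, the probabilistic dynamics model is reduced to the average observed growth per action, and planning is a one-step search through the learned model, with an assumed expert action as the fallback policy.

```python
# Toy model-based control loop: learn a dynamics model from logged
# transitions, then plan by "imagining" each action through the model.
# The environment, actions, and expert policy are invented for illustration.

def true_step(state, action):
    # Hidden dynamics: growth peaks at a moderate resource level (action 2).
    return state + {0: 0.0, 1: 1.0, 2: 2.0, 3: 1.0}[action]

def fit_model(transitions):
    # Stand-in for the probabilistic dynamics model: average the
    # observed growth for each action.
    growth = {}
    for s, a, s2 in transitions:
        growth.setdefault(a, []).append(s2 - s)
    return {a: sum(v) / len(v) for a, v in growth.items()}

EXPERT_ACTION = 1  # imitation fallback: the "safe" expert choice

def plan(model, state):
    if not model:
        return EXPERT_ACTION  # no data yet: behave like the expert
    # Imagine one step ahead per action; take the best predicted outcome.
    return max(model, key=lambda a: state + model[a])

# Collect experience, fit the model, then plan with it.
transitions, s = [], 0.0
for _ in range(10):
    for a in (0, 1, 2, 3):
        s2 = true_step(s, a)
        transitions.append((s, a, s2))
        s = s2

model = fit_model(transitions)
print(plan(model, s))  # the learned model recovers action 2 as best
```

The real framework replaces each piece with something far richer (a learned probabilistic simulator, imitation of expert growers, and continual policy optimization over multi-step rollouts), but the control flow is the same: model, imitate, then improve with every environmental interaction.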

From left: Research scientist Xiuming Hao, greenhouse vegetable specialist Shalin Khosla and Tran at Agriculture and Agri-Food Canada’s Harrow Research and Development Centre in Ontario


For the greenhouse challenge, data from around the greenhouse, such as weather conditions, and from sensors and images from inside the greenhouse were input into the agent, which then determined the intensity and distribution of artificial lighting; the amount of water, CO2, and nutrients to give the plants; and greenhouse temperature. The framework chose settings based on what it learned would result in the most biomass.

“The team has successfully incorporated the current best knowledge and best practices on crop cultivation and management and on greenhouse environmental control into its greenhouse AI control system,” says Xiuming Hao, the Agriculture and Agri-Food Canada research scientist who collaborates with the Sonoma project. “The team started with a high plant density system identified from previous model data, and adjusted the AI climate control based on crop performance and weather conditions over the crop growing period to allow for the best performance of this high density/high production system.”

Tran describes the team’s strategy as conservative. The competition setup allowed for only one trial, and there was not much data existing before the challenge; because of that, its strategy relied on a hand-crafted expert policy alone without resorting to reinforcement learning for continuous learning and improvement—yet.

“By working with domain experts and leveraging their knowledge, as well as the capabilities of our AI agent, together we were able to produce better results within a short time frame,” says Tran. “And this is just the beginning; there is a lot of room for growth, literally and figuratively.”

Author: Steve Clarke

IT industry trends home in on containers, AIOps and CI/CD

Container orchestration, AIOps and CI/CD tools are familiar to enterprise DevOps pros, but expect to see them put to everyday use among enterprises in the new year, IT experts said.

Automation is the theme that ties these IT industry trends together, and will stem from the centralized infrastructure platforms IT ops departments put into production in 2018. These platforms will help IT practitioners at mainstream companies finally shift their focus from tactical break/fix tasks to a more strategic role as site reliability engineers (SREs).

“We’re doubling down on platform and deployment automation work, and positioning the SRE team to act as an enablement function to the rest of the company,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses in Minneapolis. “We’re looking for more consistency to what we’re building and how we’re operating platforms.”

Container management maturity goes mainstream

The foundation for IT industry trends in 2019 lies in the fact that containers and container orchestration will finally become commonplace in enterprise IT shops. Container technology has matured to the point where it is viable for enterprise production use, mainstream enterprise vendors have standardized container management tools on the open source Kubernetes framework, and IT pros have had time to learn the technology and figure out how to use it with their corporate applications.

One of the last frontiers for container maturity in the enterprise is full-fledged container orchestration support for Microsoft applications. Docker Inc. launched a program to modernize enterprise Windows applications through its swarm mode orchestrator, and pledged to add equal support for Kubernetes on Windows once it stabilizes upstream. That stable release is now slated for the first quarter of 2019.

Meanwhile, enterprises have begun to dig in to the ways Windows workloads can benefit from containers, a technology originally invented for Linux.


“Windows has been lagging Linux in containers, but now it may be able to leapfrog ahead,” said Richard Fong, senior software engineering manager at Mitchell International, an auto insurance software company in San Diego. “We’re ironing out how to do DevOps automation for Windows, where container images are still bigger than on lightweight Linux operating systems.”

Container maturity also means that container management products, and the skills to use them effectively, will both be in high demand in 2019. The market for container management tools will swell to $2.1 billion in 2019, and reach $4.3 billion by 2022, estimates 451 Research in its “Market Monitor” study on application containers issued in December 2018. And interest in Kubernetes among job searchers jumped 173% in 2018 compared to 2017, according to job listings site Indeed’s annual “Occupation Spotlight” report.

A 451 Research ‘Market Monitor’ report predicts huge growth for containers in 2019 and beyond

AIOps gets real

As DevOps shops break down enterprise applications into microservices and deploy them onto container platforms, the number of moving parts within them increases exponentially, as does the complexity of incident response. Artificial intelligence for IT operations, or AIOps, has been a trendy topic among IT monitoring vendors since late 2017, but those products have struggled to live up to their claims, enterprise IT buyers said.

In part, this has to do with product maturity, but also improvements in data management in the repositories of information those tools analyze.

“The data we have now on IT systems is cleaner than ever, because it’s generated through automated systems at the speed of the digital world, rather than humans,” said a senior architect at a large insurance company on the East Coast, who spoke on condition of anonymity because he is not authorized to represent his employer in the press. “Many tool vendors also advertised [AIOps] capabilities to gain attention before they were really mature, but that’s shifted to tangible things we can try now versus slideware.”

The senior architect declined to name the vendors his company will consider, but other DevOps pros have their eye on machine learning features in application performance monitoring tools from vendors such as New Relic and Instana.

“We’ve been building app profiles to work with anomaly detection systems, making sure they’re emitting the right messages,” said Brad Linder, DevOps and big data evangelist at Dish Technologies, the engineering arm of Dish Network in Englewood, Colo. The firm will review AIOps tools from Instana and its competitors in 2019, he said.

Some SREs argue that platforms should be ‘boring’ — made of simple components and deployed in repeatable patterns to reduce the need for machine learning analysis. But Linder said he still sees AIOps tools as useful, regardless of how highly automated the infrastructure becomes.

“The platforms are boring, but there are still glitches in the matrix to watch out for,” he said. “Also, AI can be used to detect good things — application services that are more popular among users than we anticipated, for example.”

AI will also become a common feature of database operations.

“Oracle has already begun offering cloud databases that it calls autonomous, and I expect that in the coming year that you’ll see the other big database [vendors] start adding [machine learning] to make databases more self-driven,” said Tony Baer, an analyst at Ovum. “The trend will start in the cloud where the database vendor has control of the infrastructure, but I know of at least one database player that will start offering some self-running capabilities on prem.”

DevOps teams test fresh takes on CI/CD

As container management and AI improve IT infrastructure automation, application deployment tools will evolve to capitalize on those advances. Thus, enterprise DevOps teams will explore new approaches to CI/CD in 2019 that surpass traditional systems such as Jenkins.

Some mainstream companies have already switched in 2018 from Jenkins to Netflix’s Spinnaker platform, primarily because it supports automated canary application deployment patterns and rollbacks. Other open source CI/CD tools that build on container infrastructure have made it onto the enterprise DevOps radar, such as Drone, Harness, and Intuit’s Argo CD.

“Spinnaker works, but it looks like a big hammer with lots of options that’s a bit too heavy for my liking,” Linder said. Instead, he said he’s interested in Harness for its data analytics features.

SPS’s Domeier, meanwhile, said his company has moved from Jenkins to Drone for CI/CD.

“We had more and more problems with our old Jenkins patterns and pipelines,” he said. “Drone configuration is done as code and it’s more container-native [than Jenkins] — we would be unlikely to use it if we were trying to ship large JVMs or WAR files that weren’t containerized.”

SPS engineers favor Drone’s smooth integration with GitHub code repositories, Domeier added. The Jenkins community also offers Jenkins X, which improves its integration with Kubernetes.


For Sale – HTPC – Fractal Design Case, 1TB SSD, 16GB RAM, SFX PSU (ITX Form Factor)

Just gathering interest at the moment to see if anyone wants this setup. If someone is interested, I will have to backup my drives before sending out.

I have for sale my beloved HTPC which has been used over the past 2 years as a Plex Media Server. It is quite highly spec’d and runs my 4K content without a hiccup. Please see specs and individual prices below:

Fractal Design Node 804 PC Case (Windowed) – £60
Intel i5 4590 Processor – £65
MSI Z87i AC ITX Motherboard – £70
HyperX 16GB DDR3 RAM – £80
SanDisk Ultra II 960GB SSD – £110
Western Digital 2TB 2.5″ HDD – £50
Corsair SFX (SFF) Gold Power Supply – £75
Cooler Master Hyper 212 Processor Cooler – £15

All the prices above have been based on sold listings on eBay minus fees etc. to make it fair. I’ve obviously paid a lot more for all these components, so will take a massive hit on this.

Buyer can come round and see it all working before buying, or I can dispatch. I have all the boxes for all the items.

Price and currency: £520
Delivery: Delivery cost is included within my country
Payment method: PPG, BT or Cash
Location: Birmingham
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.


Introducing Project Mu – Windows Developer Blog

The Microsoft Devices Team is excited to announce Project Mu, the open-source release of the Unified Extensible Firmware Interface (UEFI) core leveraged by Microsoft products including both Surface and the latest releases of Hyper-V. UEFI is system software that initializes hardware during the boot process and provides services for the operating system to load. Project Mu contributes numerous UEFI features targeted at modern Windows-based PCs. It also demonstrates a code structure and development process for efficiently building scalable and serviceable firmware. These enhancements allow Project Mu devices to support Firmware as a Service (FaaS). Similar to Windows as a Service, Firmware as a Service optimizes UEFI and other system firmware for timely quality patches that keep firmware up to date and enables efficient development of post-launch features.

When first enabling FaaS on Surface, we learned that the open source UEFI implementation TianoCore was not optimized for rapid servicing across multiple product lines. We spent several product cycles iterating on FaaS, and have now published the result as free, open source Project Mu! We are hopeful that the ecosystem will incorporate these ideas and code, as well as provide us with ongoing feedback to continue improvements.

Project Mu includes:

A code structure & development process optimized for Firmware as a Service
An on-screen keyboard
Secure management of UEFI settings
Improved security by removing unnecessary legacy code, a practice known as attack surface reduction
High-performance boot
Modern BIOS menu examples
Numerous tests & tools to analyze and optimize UEFI quality

We look forward to engagements with the ecosystem as we continue to evolve and improve Project Mu to our mutual benefit!
Check out Project Mu Documentation and Code here:
Updated December 19, 2018 1:03 pm

2 takeaways from Pagosa Springs Medical Center HIPAA settlement

Failure to terminate former employees’ access to company information can be a costly mistake. That’s a lesson Pagosa Springs Medical Center, a critical access hospital in Pagosa Springs, Colo., learned the hard way.

Pagosa Springs Medical Center has been fined $111,400 by the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS). The settlement stemmed from a 2013 complaint that alleged a former employee was able to access the hospital’s web-based scheduling calendar, which contained patient protected health information (PHI), after the employee was terminated.

An OCR investigation revealed that the PHI of 557 individuals was impermissibly disclosed not only to the former employee, but to the web-based scheduling calendar vendor as well, according to an HHS news release. Google, the vendor named in the resolution agreement, and Pagosa Springs Medical Center did not have a HIPAA-required business associate agreement in place, the release said.


“The fact that Google missed it here, as well as Pagosa, is pretty distressing,” said Kate Borten, a HIPAA and health information privacy and security expert.

Borten described the HIPAA violations as “significant.” They include failure to recognize a HIPAA business associate and sign a contractual agreement with the associate before patient information was exchanged, as well as not having reasonable and effective termination processes established.

Both entities should have recognized that the proper written agreement ensuring Google would appropriately safeguard patient PHI wasn’t in place, Borten said.

Two major takeaways

Borten said healthcare CIOs should view the settlement as a stark reminder about system security. She listed two major takeaways for them:

  1. Use HIPAA settlements as a proactive tool

Any HIPAA-covered entity or HIPAA business associate should be on the HHS mailing list to receive notifications whenever settlements like Pagosa Springs Medical Center’s are announced.

“Management, whether it’s CIOs, CISOs, somebody needs to be designated to be on that mailing list to read those cases like this one,” she said. “They can be used as a way to check and educate your own organization.”

Borten said CIOs can obtain a wealth of information from resolution agreements and corrective action plans. The plans should be seen as tools to conduct internal security checks or to review policies.

She also described resolutions as a training tool for management. In the case of Pagosa Springs Medical Center, the lesson is how to engage with business associates. Borten said formally recognizing business associates and making sure appropriate contracts are in place are crucial to keeping patient information private and secure.

  2. Develop clear policies that outline responsibility

Borten said healthcare organizations often lack policies to directly inform hospital management about terminations not only for direct staff, but for employees of third-party companies granted access to the organization’s systems.

She advised that internal management, not security, be responsible for keeping in touch with third-party companies. Hospital management should be made aware when anyone who has been given access to the hospital’s systems has been terminated.

“It’s that manager or financial director’s responsibility to say, ‘Remember, you have to tell me as soon as any of your employees are terminated, any employee who has access to our systems,'” Borten said. “That is not necessarily the norm today anywhere. And I think that’s a big gap.”

Pagosa Springs Medical Center agrees to corrective action plan

Along with the fine, the hospital agreed to adopt a two-year corrective action plan to settle potential HIPAA violations. As part of that plan, Pagosa Springs Medical Center has agreed to update its policies and procedures, along with its security management and business associate agreement. The organization has agreed to then train its workforce regarding updated policies and procedures.

At the time of OCR’s investigation, Pagosa Springs Medical Center provided more than 17,000 hospital and clinic visits annually and employed more than 175 individuals.


For Sale – Mac Pro 6-Core Dual GPU 3.5GHz Intel Xeon E5 Processor 16GB 1866MHz, Late 2013 BNIB

Mac Pro 6-Core Dual GPU, 3.5GHz 6-Core Intel Xeon E5 processor, 16GB 1866MHz DDR3 – Late 2013 BNIB.

Brand spanking new, complete, still sealed in the original box. Insurance replacement for one that was damaged in a storm.

This thing is still in the packing case, all nicely wrapped & ready for shipping, with the model, reference & serial number on the packaging. I will post photos with my username & will be happy to provide a link to my eBay profile, which has extensive, 100% positive feedback (with lots of high value items sold) and through which I can, obviously, be contacted for confirmation that I am who I say I am… Hopefully that should be sufficient to meet any potential buyer’s requirements.

I don’t really want to unwrap it & break the seal because… Well, it’s new… & they never go back in the box the same way as they came out!

I am currently in France, but get back to Manchester on a semi-regular basis, so a local meet may be possible. I am also happy to use an escrow service for any potentially nervous buyer.

Tech specs below (& on the Apple site):

6-Core and Dual GPU

◦ 3.5GHz 6-Core Intel Xeon E5 processor

◦ 16GB 1866MHz DDR3 ECC memory

◦ Dual AMD FirePro D500 with 3GB GDDR5 VRAM each

◦ 256GB PCIe-based flash storage

◦ 3.5GHz 6-Core Intel Xeon E5 with 12MB L3 cache and Turbo Boost up to 3.9GHz

◦ Configurable to 3.0GHz 8-core processor with 25MB L3 cache or 2.7GHz 12-core processor with 30MB L3 cache

◦ 16GB (four 4GB) of 1866MHz DDR3 ECC memory

◦ Configurable to 32GB (four 8GB) or 64GB (four 16GB)

◦ Dual AMD FirePro D500 graphics processors with 3GB of GDDR5 VRAM each

◦ Configurable to dual AMD FirePro D700 with 6GB of GDDR5 VRAM each

Display Support

◦ Connect up to three 4K displays or up to six Thunderbolt displays

Storage

◦ 256GB PCIe-based flash storage

◦ Configurable to 512GB or 1TB

Size and Weight

◦ Height: 25.1 cm (9.9 inches)

◦ Diameter: 16.7 cm (6.6 inches)

◦ Weight: 5 kg (11 pounds)

Audio

◦ Combined optical digital audio output/analog line-out mini-jack

◦ Headphone mini-jack with headset support

◦ HDMI port supports multichannel audio output

◦ Built-in speaker

Connections and Expansion

◦ Four USB 3 ports

◦ Six Thunderbolt 2 ports

◦ Dual Gigabit Ethernet ports

◦ One HDMI 1.4 port

Electrical and Operating Requirements

◦ Line voltage: 100-120V AC or 200-240V AC (wide-range power supply input voltage)

◦ Frequency: 50Hz to 60Hz single phase

◦ Maximum continuous power: 450W

◦ Operating temperature: 10° to 35° C (50° to 95° F)

◦ Storage temperature: –40° to 47° C (–40° to 116° F)

◦ Relative humidity: 5% to 95% non-condensing

◦ Maximum operating altitude: 5,000 metres (16,400 feet)

◦ Typical acoustic performance, sound pressure level (operator position): 12 dBA when idle

I’m looking for £2,000… £1,800 + Shipping please…

Shipping options;

Colissimo International – Tracked, fully insured, 3-5 days delivery = £40 GBP
Colissimo Expert – Tracked, fully insured, 1-2 days delivery = £60 GBP
Collection/meet up = Free, gratis, zip, zero, zilch, nil, nada, nowt! ;0)

Cheers for looking!

Mr H.

Price and currency: £1,800 GBP
Delivery: Delivery cost is not included
Payment method: BT
Location: Limousin, France
Advertised elsewhere?: Now advertised elsewhere
Prefer goods collected?: I have no preference

