Azure IoT at Build: making IoT solutions easier to develop, more powerful to use

Ran Tech / Microsoft 3/9/17

IoT is transforming every business on the planet, and that transformation is accelerating. Companies are harnessing billions of IoT devices to help them find valuable insights into critical parts of their business that were previously not connected—how customers are using their products, when to service assets before they break down, how to reduce energy consumption, how to optimize operations, and thousands of other use cases limited only by companies’ imagination.

Microsoft is leading in IoT because we’re passionate about simplifying IoT so any company can benefit from it quickly and securely.

Last year we announced a $5 billion commitment, and this year we highlighted the momentum we are seeing in the industry. This week, at our premier developer conference, Microsoft Build in Seattle, we’re thrilled to share our latest innovations that further simplify IoT and dramatically accelerate time to value for customers and partners.

Accelerating IoT

Developing a cloud-based IoT solution with Azure IoT has never been faster or more secure, yet we’re always looking for ways to make it easier. From working with customers and partners, we’ve seen an opportunity to accelerate on the device side.

Part of the challenge we see is the tight coupling between the software written on devices and the software that has to match it in the cloud. To illustrate this, it’s worth looking at a similar problem from the past and how it was solved.

Early versions of Windows faced a challenge in supporting a broad set of connected devices like keyboards and mice. Each device came with its own software, which had to be installed on Windows for the device to function. The software on the device and the software that had to be installed on Windows had a tight coupling, and this tight coupling made the development process slow and fragile for device makers.

Windows solved this with Plug and Play, which at its core was a capability model that devices could declare and present to Windows when they were connected. This capability model made it possible for thousands of different devices to connect to Windows and be used without any software having to be installed on Windows.

IoT Plug and Play

Late last week, we announced IoT Plug and Play, which is based on an open modeling language that allows IoT devices to declare their capabilities. That declaration, called a device capability model, is presented when IoT devices connect to cloud solutions like Azure IoT Central and partner solutions, which can then automatically understand the device and start interacting with it—all without writing any code.
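
To make the idea concrete, here is a hypothetical sketch of what a device capability model might contain, written as Python building the JSON document. The field names and the model identifier are illustrative assumptions only; the actual IoT Plug and Play modeling language defines its own schema.

```python
import json

# Hypothetical capability model for a thermostat. Every field name below
# is illustrative -- the real modeling language defines its own schema.
capability_model = {
    "@id": "urn:contoso:thermostat:1",  # hypothetical model identifier
    "displayName": "Thermostat",
    "telemetry": [
        {"name": "temperature", "schema": "double", "unit": "celsius"}
    ],
    "properties": [
        {"name": "firmwareVersion", "schema": "string", "writable": False}
    ],
    "commands": [
        {"name": "reboot"}
    ],
}

# A device would present a document like this when it connects, letting
# the cloud solution discover its telemetry, properties, and commands
# without any device-specific code being written.
payload = json.dumps(capability_model)
print(payload)
```

The point of the declaration is that the cloud side only needs to understand the modeling language, not each individual device.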

IoT Plug and Play also enables our hardware partners to build IoT Plug and Play compatible devices, which can then be certified with our Azure Certified for IoT program and used by customers and partners right away. This approach works with devices running any operating system, be it Linux, Android, Azure Sphere OS, Windows IoT, RTOSs, and more. And all of our IoT Plug and Play support is open source as always.

Finally, Visual Studio Code will support modeling an IoT Plug and Play device capability model as well as generating IoT device software based on that model, which dramatically accelerates IoT device software development.

We’ll be demonstrating IoT Plug and Play at Build, and it will be available in preview this summer. To design IoT Plug and Play, we’ve worked with a large set of launch partners to ensure their hardware is certified ready:


Certified-ready devices are now published in the Azure IoT Device Catalog for the preview, and while Azure IoT Central and Azure IoT Hub will be the first services integrated with IoT Plug and Play, we will add support for Azure Digital Twins and other solutions in the months to come. Watch this video to learn more about IoT Plug and Play and read this blog post for more details on IoT Plug and Play support in Azure IoT Central.

Announcing IoT Plug and Play connectivity partners

With increased options for low-power networking, the role of cellular technologies in IoT projects is on the rise. Today we’re introducing IoT Plug and Play connectivity partners. Deep integration between these partners’ technologies and Azure IoT simplifies customer deployments and adds new capabilities.

This week at Build, we are highlighting the first of these integrations, which leverages Trust Onboard from Twilio. The integration uses security features built into the SIM to automatically authenticate and connect to Azure, providing a secure means of uniquely identifying IoT devices that works with current manufacturing processes.

These are some of the many connectivity partners we are working with:


Making Azure IoT Central more powerful for developers

Last year we announced the general availability of Azure IoT Central, which enables customers and partners to provision an IoT application in 15 seconds, customize it in hours, and go to production the same day—all without writing code in the cloud.

While many customers build their IoT solutions directly on our Azure IoT platform services, we’re seeing a groundswell of customers and partners who like the rapid application development Azure IoT Central provides. And, of course, Azure IoT Central is built on the same great Azure IoT platform services.

Today at Build, we’re announcing a set of new features that speak to how we’re enabling and simplifying Azure IoT Central for developers. We’ll show some of these innovations, such as new personalization features that make it easy for customers and partners to modify Azure IoT Central’s UI to conform with their own look and feel. In the Build keynote, we’ll show how Starbucks is using this personalization feature for their Azure IoT Central solution connected to Azure Sphere devices in their stores.

We’ll also demonstrate Azure IoT Central working with IoT Plug and Play to show how fast and easy it is to build an end-to-end IoT solution, with Microsoft still wearing the pager and keeping everything up and running so customers and partners can focus on the benefits IoT provides. Watch this video to learn more about the Azure IoT Central announcements.

The growing Azure Sphere hardware ecosystem

Azure Sphere is Microsoft’s comprehensive solution for easily creating secured MCU-powered IoT devices. Azure Sphere is an integrated system that includes MCUs with built-in Microsoft security technology, an OS based on a custom Linux kernel, and a cloud-based security service. Azure Sphere delivers secured communications between device and cloud, device authentication and attestation, and ongoing OS and security updates. Azure Sphere provides robust defense-in-depth device security to limit the reach and impact of remote attacks and to renew device health through security updates.

At Build this week, we’ll showcase a new set of solutions such as hardware modules that speed up time to market for device makers, development kits that help organizations prototype quickly, and our new guardian modules.

Guardian modules are a new class of device built on Azure Sphere that protect brownfield equipment, mitigating risks and unlocking the benefits of IoT. They attach physically to brownfield equipment with no equipment redesign required, processing data and controlling devices without ever exposing vital operational equipment to the network. Through guardian modules, Azure Sphere secures brownfield devices, protects operational equipment from disabling attacks, simplifies device retrofit projects, and boosts equipment efficiency through over-the-air updates and IoT connectivity.

The seven modules and devkits on display at Build are:

  • Avnet Guardian Module. Unlocks brownfield IoT by bringing Azure Sphere’s security to equipment previously deemed too critical to be connected. Available soon.
  • Avnet MT3620 Starter Kit. Azure Sphere prototyping and development platform. Connectors allow easy expandability options with a range of MikroE Click and Grove modules. Available May 2019.
  • Avnet Wi-Fi Module. Azure Sphere-based module designed for easy final product assembly. Simplifies quality assurance with stamp hole (castellated) pin design. Available June 2019.
  • AI-Link WF-M620-RSC1 Wi-Fi Module. Designed for cost-sensitive applications. Simplifies quality assurance with stamp hole (castellated) pin design. Available now.
  • SEEED MT3620 Development Board. Designed for comprehensive prototyping. Available expansion shields enable Ethernet connectivity and support for Grove modules. Available now.
  • SEEED MT3620 Mini Development Board. Designed for size-constrained prototypes. Built on the AI-Link module for a quick path from prototype to commercialization. Available May 2019.
  • USI Dual Band Wi-Fi + Bluetooth Combo Module. Supports BLE and Bluetooth 5 Mesh. Can also work as an NFC tag (for non-contact Bluetooth pairing and device provisioning). Available soon.

For those who want to learn more about the modules, you can find specs for each and links to more information on our Azure Sphere hardware ecosystem page.

See Azure Sphere in action at Build

Azure Sphere is also taking center stage at Build during Satya Nadella’s keynote this week. Microsoft customer and fellow Seattle-area company Starbucks will showcase how it is testing Azure IoT capabilities and guardian modules built on Azure Sphere within select equipment to enable partners and employees to better engage with customers, manage energy consumption and waste reduction, ensure beverage consistency, and facilitate predictive maintenance. The company’s solution will also be on display in the Starbucks Technology booth.

Announcing new Azure IoT Edge innovations

Today, we are announcing the public preview of Azure IoT Edge support for Kubernetes. This enables customers and partners to deploy an Azure IoT Edge workload to a Kubernetes cluster on premises. We’re seeing Azure IoT Edge workloads being used in business-critical systems at the edge. With this new integration, customers can use the feature-rich and resilient infrastructure layer that Kubernetes provides to run their Azure IoT Edge workloads, which are managed centrally and securely from Azure IoT Hub. Watch this video to learn more.
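
As a rough illustration of what running an edge workload on Kubernetes looks like, the sketch below describes a single edge module as a Deployment manifest, expressed as a Python dict. The module name, container image, and labels are made-up assumptions; the actual Azure IoT Edge integration generates and manages its own resources from IoT Hub.

```python
# Hypothetical sketch: one edge module described as a Kubernetes
# Deployment. Names and image are illustrative assumptions only.
edge_module_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "temperature-filter"},  # hypothetical module
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"module": "temperature-filter"}},
        "template": {
            "metadata": {"labels": {"module": "temperature-filter"}},
            "spec": {
                "containers": [{
                    "name": "temperature-filter",
                    # Hypothetical registry and tag:
                    "image": "contoso.azurecr.io/temperature-filter:1.0",
                }]
            },
        },
    },
}
print(edge_module_deployment["kind"])
```

The appeal of the integration is that the scheduling, restarts, and scaling come from Kubernetes, while the module configuration is still managed centrally from IoT Hub.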

Additional IoT Edge announcements include:

  • Preview of Azure IoT Edge support for Linux ARM64 (expected to be available in June 2019).
  • General availability of IoT Edge extended offline support.
  • General availability of IoT Edge support for Windows 10 IoT Enterprise x64.
  • New provisioning capabilities using X.509 certificates and SAS tokens.
  • New built-in troubleshooting tooling.

A common use case for IoT Edge is transforming cameras into smart sensors that understand the physical world and enable a digital feedback loop: finding a missing product on a shelf, detecting damaged goods, and so on. These scenarios require demanding computer vision algorithms that deliver consistent and reliable results, large-scale streaming capabilities, and specialized hardware for faster processing to provide real-time insights to businesses. At Build, we’re partnering with Lenovo and NVIDIA to simplify the development and deployment of these applications at scale. With the NVIDIA DeepStream SDK for general-purpose streaming analytics, a single IoT Edge server running on Lenovo hardware can process up to 70 channels of 1080p/30fps H.265 video to offer a cost-effective, faster time-to-market solution.
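
Some back-of-the-envelope arithmetic puts that claim in perspective: 70 channels of 1080p video at 30 frames per second works out to roughly 4.4 billion pixels per second decoded and analyzed on a single server.

```python
# Aggregate pixel throughput implied by the claim of 70 channels of
# 1080p video at 30 frames per second on one IoT Edge server.
channels = 70
width, height = 1920, 1080
fps = 30

pixels_per_second = channels * width * height * fps
print(f"{pixels_per_second / 1e9:.1f} billion pixels/s")  # 4.4 billion
```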

This summer, NVIDIA DeepStream SDK will be available from the IoT Edge marketplace. In addition, Lenovo’s new ThinkServer SE350 and GPU-powered “tiny” edge gateways will be certified for IoT Edge.

Announcing Mobility Services through Azure Maps

Today, an increasing number of apps built on Azure are designed to take advantage of location information in some way.

Last November, we announced a new platform partnership for Azure Maps with the world’s number-one transit service provider, Moovit. What we’re achieving through this partnership is similar to what we’ve already built with TomTom. At Build this week, we’re announcing Azure Maps Mobility Services, which will be a set of APIs that leverage Moovit’s APIs for building modern mobility solutions.

Through these new services, we’re able to integrate public transit, bike shares, scooter shares, and more to deliver transit route recommendations that allow customers to plan their routes leveraging the alternative modes of transportation, in order to optimize for travel time and minimize traffic congestion. Customers will also be able to access real-time intelligence on bike and scooter docking stations and car-share-vehicle availability, including present and expected availability and real-time transit stop arrivals.
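
The core of such a multimodal recommendation is simple to sketch: given candidate itineraries across transport modes, pick the one that minimizes travel time. The itineraries and durations below are made-up sample data, not output from the Mobility Services APIs.

```python
# Toy illustration of the decision these mobility APIs enable: choose
# the itinerary with the shortest total travel time across modes.
# All data below is made-up sample data.
itineraries = [
    {"modes": ["walk", "bus"], "minutes": 34},
    {"modes": ["bike-share", "light-rail"], "minutes": 27},
    {"modes": ["scooter-share"], "minutes": 41},
]

best = min(itineraries, key=lambda i: i["minutes"])
print(best["modes"])  # the bike-share + light-rail option wins here
```

A real solution would also weigh the live availability data the services expose, such as how many bikes remain at a docking station.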

Customers can use Azure Maps for IoT applications—or any application that uses geospatial or location data, such as apps for field service, logistics, manufacturing, and smart cities. Retail apps may integrate mobility intelligence to help customers access their stores or plan future store locations that optimize for transit accessibility. Field services apps may guide employees from one customer to another based on real-time service demand. City planners may use mobility intelligence to analyze the movement of occupants to plan their own mobility services, visualize new developments, and prioritize locations in the interests of occupants.

You can stay up to date about how Azure Maps is paving the way for the next generation of location services on the Azure Maps blog, and if you’re at Build this week, be sure to visit the Azure Maps booth to see our mobility and spatial operations services in action.

Simplifying development of robotic systems with Windows 10 IoT

Microsoft and Open Robotics have worked together to make the Robot Operating System (ROS) generally available for Windows 10 IoT. Additionally, we’re making it even easier to build ROS solutions in Visual Studio Code by adding upcoming support for Windows, debugging, and visualization to a community-supported Visual Studio Code extension. Read more about the integration between Windows 10 IoT and ROS.

Come see us at Build

If you’re in Seattle this week, you can see some of these new technologies in our booth, and even play around with them at our IoT Hands-on Lab. I’ll also be hosting a session on our IoT Vision and Roadmap. Stop by to hear more details about these announcements and see some of these exciting new technologies in action.

Go to Original Article
Author: Microsoft News Center

Midrange NetApp flash looks to steal Dell EMC’s thunder

NetApp has fired a salvo at archrival Dell EMC, launching a midrange NVMe flash system as part of an OnTap OS upgrade.

The 2U NetApp A320 flash building block expands its All-Flash Fabric-Attached Storage (AFF) A-Series lineup, which spans four models available in varying capacities. The multiprotocol fabric-attached storage (FAS) is the NetApp flash flagship, supporting Fibre Channel block and CIFS, NFS and SMB file storage.

NetApp integrated NVMe fabric support in FAS arrays last year, coupled with back-end SAS SSDs for storage. AFF A320 controllers connect to an external shelf of dual-ported NVMe SSDs via remote direct memory access. The system contains no internal storage.

An A320 system provides 734 TB of raw capacity. NetApp claims users can scale to 35 petabytes of effective flash in a maximum 24-node cluster with data reduction.
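
The arithmetic behind that scaling claim is worth spelling out. Assuming each A320 contributes two controllers, so a 24-node cluster holds 12 systems (this pairing is our assumption, not stated in the article), the claimed 35 PB of effective capacity implies roughly a 4:1 data-reduction ratio.

```python
# Rough arithmetic behind the scaling claim. Assumption (not stated in
# the article): each A320 contributes two controller nodes, so a
# 24-node cluster holds 12 systems.
raw_tb_per_system = 734
systems_in_max_cluster = 24 // 2                              # 12 systems
raw_pb = raw_tb_per_system * systems_in_max_cluster / 1000    # ~8.8 PB raw
effective_pb = 35                                             # claimed

implied_reduction = effective_pb / raw_pb
print(f"~{implied_reduction:.1f}:1 data reduction implied")   # ~4.0:1
```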

Most major enterprise storage vendors have introduced NVMe support similar to what NetApp has done across its AFF arrays. The NVMe standard offers substantial latency reductions by sending commands across a PCIe bus. That breaks the logjam of SCSI commands routed through host bus adapters.

The NetApp flash news comes one week after Dell EMC reiterated its plan to launch a new midrange array under its Dell Power branding this year. Dell EMC claims its Unity array is NVMe-ready, but it isn’t shipping with NVMe flash.

“Our competitor is talking about it, and we’ve already delivered it,” said Joel Reich, NetApp’s executive vice president of products and operations.

Eric Burgener, research vice president at analyst firm IDC, said the AFF A320 is an early indicator of NetApp’s flash plans.

“Think of the A320 as a building block. NetApp is introducing a new form factor that it ultimately will use to build the entire AFF line. It’s the strategic way they’re going to build these systems,” Burgener said.

OnTap tunes NetApp flash for cloud management

OnTap 9.6 enables wire-encrypted data to be streamed between local NetApp flash arrays and NetApp Cloud Volumes. DevOps teams use NetApp Cloud Volumes to run NetApp file storage natively in AWS, Google Cloud Platform (GCP) and Microsoft Azure.

NetApp said encryption can be applied on aggregate data or on specific volumes. Reich said that, prior to OnTap 9.6, NetApp customers needed to install an IPsec network bump to encrypt data.

The OnTap update also expands NetApp FabricPool object storage cloud targets, adding GCP and Alibaba. The software identifies inactive data and automatically tiers it to lower-cost object storage. NetApp FabricPool already supports Amazon Glacier and Azure Blob storage.


For Sale – Mac Pro 3,1 Dual Quad-Core Intel Xeon

Mac Pro 3,1 (early 2008) Dual 2.8GHz Quad-Core Intel Xeon – total of Eight Cores
OS installed – 10.11.6 (El Capitan) (upgradeable with tricks)
Two drives: 128GB Micron SSD, 320GB (2.5inch) HDD ; two bays spare (with brackets) to fit most 3.5″ SATA drives
32GB RAM (DDR2 667MHz) (eight 4GB RAM sticks) (can use 800MHz ones but these were obtained cheaply)
ATI Radeon HD 5770 (2x mini-DP, 1 DVI) (upgraded from the standard HD 2600 XT)
No wireless card fitted (you can have one which I purchased for £10 but did not install – unknown state)
No box (sorry!)

This is a very capable machine, used with little stress on all its components, well kept, cleaned but expect marks, scratches, etc. It is possible to upgrade the OS (tips and tricks available with an online search). Only going due to my company cornering me into a Windows laptop.

Cash and collect only (heavy machine!) – so buyer can inspect to heart’s content.

Price and currency: 200
Delivery: Delivery cost is not included
Payment method: Cash
Location: Cambridge
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected



Windows 10 SDK Preview Build 18890 available now! – Windows Developer Blog

Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18890 or greater). The Preview SDK Build 18890 contains bug fixes and under-development changes to the API surface area.
The Preview SDK can be downloaded from the developer section on Windows Insider.
For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
This build of the Windows SDK will install ONLY on Windows 10 Insider Preview builds.
To assist with script access to the SDK, the ISO can also be accessed through the following static URL:

Message Compiler (mc.exe)

Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. If it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. Otherwise, if the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).
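
The encoding-selection order described above can be sketched in Python. This is a loose analogue only: mc.exe is a Windows tool, and "mbcs" stands in here for the current code page (CP_ACP).

```python
import sys

def pick_encoding(raw: bytes, assume_utf16le: bool = False) -> str:
    """Mirror the .mc decision order: BOM first, then the -u flag,
    then the current code page (approximated by a platform default)."""
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"            # UTF-8 BOM
    if raw.startswith(b"\xff\xfe"):
        return "utf-16-le"            # UTF-16LE BOM
    if assume_utf16le:                # analogue of mc.exe's -u parameter
        return "utf-16-le"
    # "Current code page": mbcs on Windows, UTF-8 elsewhere.
    return "mbcs" if sys.platform == "win32" else "utf-8"

print(pick_encoding(b"\xef\xbb\xbfMessageId=1"))           # utf-8-sig
print(pick_encoding(b"MessageId=1", assume_utf16le=True))  # utf-16-le
```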

Windows Trace Preprocessor (tracewpp.exe)

Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Removal of IRPROPS.LIB

In this release, irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.UI.Composition.Particles {
  public sealed class ParticleAttractor : CompositionObject
  public sealed class ParticleAttractorCollection : CompositionObject, IIterable, IVector
  public class ParticleBaseBehavior : CompositionObject
  public sealed class ParticleBehaviors : CompositionObject
  public sealed class ParticleColorBehavior : ParticleBaseBehavior
  public struct ParticleColorBinding
  public sealed class ParticleColorBindingCollection : CompositionObject, IIterable, IMap
  public enum ParticleEmitFrom
  public sealed class ParticleEmitterVisual : ContainerVisual
  public sealed class ParticleGenerator : CompositionObject
  public enum ParticleInputSource
  public enum ParticleReferenceFrame
  public sealed class ParticleScalarBehavior : ParticleBaseBehavior
  public struct ParticleScalarBinding
  public sealed class ParticleScalarBindingCollection : CompositionObject, IIterable, IMap
  public enum ParticleSortMode
  public sealed class ParticleVector2Behavior : ParticleBaseBehavior
  public struct ParticleVector2Binding
  public sealed class ParticleVector2BindingCollection : CompositionObject, IIterable, IMap
  public sealed class ParticleVector3Behavior : ParticleBaseBehavior
  public struct ParticleVector3Binding
  public sealed class ParticleVector3BindingCollection : CompositionObject, IIterable, IMap
  public sealed class ParticleVector4Behavior : ParticleBaseBehavior
  public struct ParticleVector4Binding
  public sealed class ParticleVector4BindingCollection : CompositionObject, IIterable, IMap
}

Painlessly deploy Azure File Sync with PowerShell

Organizations trying to choose between the manageability of the cloud and the accessibility of on-premises files can use Microsoft’s Azure File Sync to get the best of both worlds, and it only takes three steps — preparation, installation of an Azure File Sync agent and setting endpoints — to set up.

Azure File Sync connects on-premises users to the centralized cloud file shares in Azure Files. The service uses local caches to provide users with the same performance level of an on-premises file server. An Azure File Sync agent on Windows Server 2012 R2, 2016 or 2019 can automatically replicate data and sync files to an Azure storage account.

There are a few different ways you can deploy Azure File Sync on Windows Server and ease storage management, but PowerShell offers a straightforward way to set up an Azure File Sync agent. The commands in each step are easy to copy and reuse.

Step 1: Prepare to deploy Azure File Sync

Azure File Sync agents will not run while the Internet Explorer Enhanced Security Configuration setting is enabled, so temporarily disable it during setup. Run the PowerShell code below to set the registry keys, then stop and restart the Explorer process.

$AdminKey = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}'
$UserKey = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}'
Set-ItemProperty -Path $AdminKey -Name 'IsInstalled' -Value 0
Set-ItemProperty -Path $UserKey -Name 'IsInstalled' -Value 0
Stop-Process -Name Explorer

After disabling IE Enhanced Security Configuration, create a file share in Azure to sync to. To simplify the deployment process, assign the resource group, storage account and name to variables. Use the Get-AzureRmStorageAccount cmdlet and assign it to the storage account variable.

$resourceGroup = 'TechSnipsBackEnd'
$storageAccountName = 'techsnips'
$storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccountName

You will need a key to access the storage account. Use the Get-AzureRmStorageAccountKey cmdlet and pipe the result to the Select-Object cmdlet to select just the first key and read its Value property. Assign the first key's value to the storage key variable and pull the storage context.

$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $storageAccount.ResourceGroupName -Name $storageAccount.StorageAccountName | select -first 1).Value
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccount.StorageAccountName -StorageAccountKey $storageKey

With the storage context, create a share — named myfileshare below — using the New-AzureStorageShare cmdlet and store it in the share variable.

$share = New-AzureStorageShare -Name myfileshare -Context $storageContext

Step 2: How to install the Azure File Sync agent

Next, install the agent on your server using Invoke-WebRequest to download it from Microsoft. Use the Invoke-Item cmdlet to run the executable, then accept the defaults for the installed agent. Note which directory you install it in, and close any other open PowerShell sessions before installing.

$downloadUri = ''
Invoke-WebRequest -Uri $downloadUri -OutFile 'C:\filesyncagent.exe'
Invoke-Item 'C:\filesyncagent.exe'
Follow the instructions in the Storage Sync Agent Setup Wizard to deploy Azure File Sync agents.

The agent installs to a default path of C:\Program Files\Azure\StorageSyncAgent. Assign that to the $agentPath variable and the region to the $region variable, which must be the same region as the storage account.

Create the service in the SharedLab resource group and assign the name tsstorsync. Once you’ve assigned all variables, use the Import-Module cmdlet to get the module that was installed with the agent.

$agentPath = 'C:\Program Files\Azure\StorageSyncAgent' ## Make sure to change this if the default path was not used
$region = 'East US 2' ## This needs to be in the same region as the storage account
$resourceGroup = 'SharedLab'
$storageSyncName = 'tsstorsync'
Import-Module "$agentPath\StorageSync.Management.PowerShell.Cmdlets.dll"

With the module imported, you now have the commands to set up the file sync service. Query the Azure subscription where the file sync service will be set up and use that subscription and tenant ID to authenticate to the storage sync service. Once authenticated, create a new sync service using New-AzureRmStorageSyncService. You should see a result returned in your console.

$subscription = Get-AzureRmSubscription -SubscriptionName 'TechSnips'
Login-AzureRmStorageSync -SubscriptionId $subscription.Id -ResourceGroupName $resourceGroup -TenantId $subscription.TenantID -Location $region
New-AzureRmStorageSyncService -StorageSyncServiceName $storageSyncName
The command New-AzureRmStorageSyncService should return a result to your console.

Now that the storage sync service exists, register the Windows Server to the storage sync service. Use the Register-AzureRmStorageSyncServer command and specify the service name you just created.

$registeredServer = Register-AzureRmStorageSyncServer -StorageSyncServiceName $storageSyncName

Next create the sync group with the New-AzureRmStorageSyncGroup command, providing the name of the group and the name of the sync service you created earlier.

$syncGroupName = 'TechSnipsSyncGroup'
New-AzureRmStorageSyncGroup -SyncGroupName $syncGroupName -StorageSyncService $storageSyncName

Step 3: Create the cloud and server endpoints

Use the New-AzureRmStorageSyncCloudEndpoint cmdlet to create a cloud endpoint. Use PowerShell splatting to provide the parameters for the cmdlet. Variables simplify reusing values in your scripts instead of typing them in again.

$parameters = @{
    StorageSyncServiceName   = $storageSyncName
    SyncGroupName            = $syncGroupName
    StorageAccountResourceId = $storageAccount.Id
    StorageAccountShareName  = 'myfileshare'
}
New-AzureRmStorageSyncCloudEndpoint @parameters

The last step is to set up server endpoints, so local servers can connect to the cloud endpoint. Run the New-AzureRmStorageSyncServerEndpoint command with the parameters below. Reuse the variables you’ve already created. Now you will specify the path to sync, shown as the local path on the C drive in a folder called FileSyncDemo.

New-AzureRmStorageSyncServerEndpoint -StorageSyncServiceName $storageSyncName -SyncGroupName $syncGroupName -ServerId $registeredServer.Id -ServerLocalPath 'C:\FileSyncDemo'

Once you deploy Azure File Sync server endpoints, you are done. Now start dropping files and folders into your synced folder and watch them show up in the Azure storage account.


Business Applications ISV news at Build 2019 – Microsoft Dynamics 365

Microsoft Build 2019 is here, and thousands of developers are learning about the latest technologies, sharing best practices with colleagues, and writing lots of code. Last week I wrote about business applications sessions to see at Build, and on Monday I discussed updates for helping ISVs (independent software vendors) and developers be successful from both a business and a code perspective.

A few years ago, Microsoft introduced an IP co-sell program to help our digitally transforming enterprise customers get the software they needed from our Azure ISV partners. Many of these customers were moving to the cloud, so we started by focusing on Azure. During this time roughly 3,000 ISVs have generated over $5 billion in partner revenue from the collaboration between Azure sellers and ISVs. Following up on last month’s announcement about an upcoming program for business applications ISVs, Scott Guthrie announced the expansion of the IP co-sell program to include Dynamics 365 and Power Platform partners. Including these products and Azure in the program will make it easier for ISVs and Microsoft sellers to collaborate in serving our joint enterprise customers.

Enterprise customers increasingly see software as core to their business and are developing a deeper understanding of their software needs. Beyond person-to-person sales engagements, there will be times when they want to buy an app directly while having confidence in the quality of what they’re receiving. Like consumers, enterprises are familiar with getting their software through marketplaces, and the ability for ISVs to transact through marketplaces lowers the barrier to reaching enterprise customers. On Monday, we announced that SaaS transaction capabilities have been added to AppSource and the Azure Marketplace. As part of this, we will enable transactability support for Dynamics 365 and Power Platform partner apps over the coming months. This is just the start of what we are doing with the marketplace, and now is the perfect time to get familiar with our partner program that launches in July and to learn more about the benefits of being a Dynamics 365 and Power Platform partner.

The Power Platform provides great general-purpose tooling for creating business apps, while the Common Data Model (CDM) creates a standard representation of that data. There are times when an industry-focused solution and data model are a better starting point than a blank slate. For these cases, Dynamics 365 Industry Accelerators provide industry-specific implementations, and we’re announcing private previews in automotive and financial services. The accelerator for the automotive industry enables you to build connected customer experiences based on a proven common data model designed to transform consumer experiences and enable smart mobility services. In financial services, we have built accelerators to help you develop banking solutions in the retail and commercial space with enhanced ways to engage customers and provide an improved customer banking experience. Automotive and financial services accelerators join previously announced accelerators including healthcare, higher education, and nonprofit. Sign up to learn more about how you can participate or help us build the next set of industry accelerators.

During my talk at Build, I showed how all of this fits together through three phases: data at the core, empowering domain experts to be citizen developers, and enabling developers and ISVs to build depth solutions. We used healthcare as a relatable end-to-end scenario and showcased how we’re infusing AI into all three phases in a way that every industry can start to take advantage of, moving from BI to AI. Lastly, I was delighted to welcome two customers on stage that are delivering innovative solutions. HandsFree Health™ is a new startup founded by senior executives in healthcare, including the former President of Aetna. They built an innovative new home healthcare device using Microsoft AI technologies spanning speech, bots, vision, and a companion Xamarin mobile app that we showed on stage. We also took a deeper look at what ISVs can do with Indegene, a global healthcare solutions provider building the next generation of cloud applications for life sciences with the Dynamics 365 Healthcare Accelerator.

If you are at Build or following along at home, I have created a list of some Dynamics 365 and Power Platform sessions that you might be interested in. Enjoy your Build 2019 experience and we look forward to seeing how you build, extend, or connect Dynamics 365 and the Power Platform.



Author: Microsoft News Center

OpenShift architecture bakes in CoreOS Operator automation

BOSTON — Red Hat OpenShift Container Platform version 4.1 holds tempting infrastructure automation features for IT ops pros, but moving large OpenShift 3 environments to the next version will take careful planning.

The new OpenShift architecture will become generally available in two weeks when Red Hat delivers version 4.1, skipping version 4.0. It integrates CoreOS Operators into the core management features of the Red Hat Kubernetes-based container management platform, with the goal of simplifying and automating ongoing operations tasks for enterprise shops with hundreds or thousands of pods, clusters and projects to worry about on Kubernetes. CoreOS Operators help users package, deploy and manage an application on Kubernetes.

Previous versions of OpenShift had largely focused on the developer self-service experience, but this release is aimed squarely at an IT operations audience, Red Hat Summit attendees noted here this week.

“We’ve kept our cluster footprint small on purpose, and we don’t want to spin up thousands of clusters without being confident in our ability to operate them,” said James Anderson, senior principal enterprise architect at Sabre Corp., a travel industry software provider in Southlake, Texas. “OpenShift [4.1] builds in a lot of support for day-two operations that will make that easier.”

CoreOS automation features define updated OpenShift architecture

Red Hat demonstrated automated features in the updated OpenShift architecture that function at the application and infrastructure layers of the platform, such as an Operator for Microsoft SQL Server that automates rolling upgrades, as well as a highly anticipated feature known as OpenShift Installer, which automates the setup of OpenShift clusters — from application containers down to bare-metal hardware.


“Operators should be able to help with stateful apps in containers,” Anderson said. “We want to run Couchbase [NoSQL databases] using the Operator. Right now, we run it outside [OpenShift], because we’re not confident about running it in production.”

The OpenShift 4.1 user interface also now includes the following:

  • a multi-cluster view;
  • Kubernetes monitoring visualizations based on Prometheus and Grafana; and
  • visibility into the Istio service mesh based on the Kiali open source project, as well as into virtual and bare-metal hosts.

Multi-cluster visibility and the default high-availability (HA) cluster configuration that comes with OpenShift Installer will give commercial aircraft maker Airbus more confidence when it converts apps that need built-in resilience to containers.

“We’re waiting for version 4 to target HA, because we don’t want to set things up in version 3 when the new version is coming out soon,” said Colin Richards, cloud and container architect for Airbus, based in France. “The new UI will also help us manage HA clusters more easily instead of having to log in to each one.”

OpenShift 4.1 also bakes in Red Hat Enterprise Linux CoreOS, a lightweight container-optimized version of Red Hat’s Linux distribution. CoreOS is now the mandatory operating system for OpenShift master nodes, and IT pros said it could ease day-to-day container operations.

“The way CoreOS is designed means it could reduce downtime for upgrades and make it easier to roll back to previous versions,” said an engineer with a financial services company in Europe who requested anonymity. “It maintains two views of a virtual file system and swaps between them on reboot.”

Red Hat’s OpenShift senior principal product manager, Tushar Katarki, demonstrates an application migration tool at Red Hat Summit.
OpenShift architecture overhaul calls for careful migration plans

While the advances in the OpenShift architecture have broad appeal to IT pros, getting from version 3 to version 4.1 and higher could be a big undertaking. Red Hat officials demonstrated a wizard for application migration built into the OpenShift dashboard that’s based on Heptio’s Velero open source project. But that demo involved manually setting source and target clusters and used the example of migrating two namespaces and 12 pods — a far cry from the scale at which many enterprise OpenShift shops operate.

“We have 1,300 projects, and that demo still had you manually doing every one of the process migrations,” said a director of Linux middleware and hosting services at a university in the Southeast who watched the demo. “We’re going to have to think about how we go about that process, and we also want to make the upgrade with no URL changes to our clusters. We’ll have to figure out how to make that work.”

Red Hat officials said maintaining cluster URLs will depend on how users implement the global domain name system features in OpenShift. For large-scale app migrations, users can automate the process through the Velero API, but must create their own tools to do so.
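One way such home-grown tooling might look, as a hedged sketch: generate the Velero CLI backup and restore commands per namespace rather than clicking through the wizard. The namespace names, backup naming scheme, and the `ocp3`/`ocp4` kubeconfig contexts below are illustrative assumptions, not part of Red Hat's tooling.

```shell
#!/bin/sh
# Sketch: emit the Velero commands to migrate a list of namespaces
# (OpenShift projects) from a source cluster to a target cluster.
# The context names 'ocp3'/'ocp4' and the namespaces are assumptions.
gen_migration_cmds() {
    for ns in "$@"; do
        # Back up the namespace from the source (OpenShift 3) cluster...
        echo "velero backup create migrate-$ns --include-namespaces $ns --kubecontext ocp3"
        # ...then restore that backup into the target (OpenShift 4.1) cluster.
        echo "velero restore create --from-backup migrate-$ns --kubecontext ocp4"
    done
}

gen_migration_cmds billing inventory reporting
```

Emitting the commands for review before executing them keeps a 1,300-project migration auditable, and the same loop can be pointed at a namespace list pulled from the cluster.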

Industry analysts predict OpenShift shops will have enough incentive to make the upgrade, even if the process turns out to be painful.

“It’s a pretty radical shift, but ultimately, it will be worth it,” said Tom Petrocelli, analyst at Amalgam Insights in Arlington, Mass. “I constantly hear from IT pros that large clusters and rapid deployments are too hard to do without automation.”


Wanted – WFH SFF or Chromebox (budget)

Discussion in ‘Desktop Computer Classifieds‘ started by IceAx, Apr 17, 2019.

  1. IceAx

    Active Member

    Jan 19, 2006

    Morning all

    Does anyone have a cheapie SFF or chromebox they are looking to shift for working from home.

    just needs to be able to run a browser with tabs open etc – would probably run linux on SFF.


    Location: Warrington





Announcing Windows 10 Insider Preview Build 18894 | Windows Experience Blog

Hello Windows Insiders, today we are releasing Windows 10 Insider Preview Build 18894 (20H1) to Windows Insiders in the Fast ring.
IMPORTANT: As is normal with builds early in the development cycle, these builds may contain bugs that might be painful for some. If you take this flight, you won’t be able to switch to the Slow or Release Preview rings without doing a clean install on your PC. If you wish to remain on 19H1, please change your ring settings via Settings > Update & Security > Windows Insider Program *before* taking this flight. See this blog post for details.
If you are looking for a complete look at what build is in which Insider ring – head on over to Flight Hub. You can also check out the rest of our documentation here including a complete list of new features and updates that have gone out as part of Insider flights for the current development cycle.

File Explorer improvements
We’ve heard your feedback asking for increased consistency and for it to be easier to find your files. Over the next few days we’ll be starting to roll out a new File Explorer search experience – now powered by Windows Search. This change will help integrate your online OneDrive content with the traditional indexed results. The rollout will start with a small percentage of Insiders, and then we’ll expand it as we validate the quality of the experience.
What does that mean for you? Once you have the new experience, as you type in File Explorer’s search box, you’ll now see a dropdown populated with suggested files at your fingertips that you can pick from.
These improved results can be launched directly by clicking the entry in the new suggestions box, or if you want to open the file location, just right-click the entry and there’ll be an option to do so. If you need to use commands or dig deeper into non-indexed locations, you can still press enter and populate the view with the traditional search results.

We’ve also updated the design, so now as soon as you click the search box in File Explorer (or press CTRL+E to set focus to it), you’ll see the dropdown list with your search history.
If you encounter any issues, or have any feedback, file them under “Files, Folders, and Online Storage” > “File Explorer” in the Feedback Hub.
NOTES: You may notice in the screenshot, we’ve made the File Explorer search box wider so the suggestions dropdown has a bit more room to show results – that’s not a new option, but we figured you might want to know how to do it: just move your mouse to the starting border of the search box, and your mouse should turn into a resizing double arrow cursor. Just click down and drag the search box to be a bit wider.
Accessibility improvements

  • Table reading improvements: Narrator is now more efficient when reading tables. Header information is not repeated when navigating within the same row or column. Entering and exiting tables is also less verbose.
  • Narrator web page summary: There’s a new command in Narrator to give a webpage summary (Narrator + S). Currently this command gives information about hyperlinks, landmarks, and headings.
  • Magnifier text cursor setting: Windows Magnifier can now keep the text cursor in the center of the screen, making it easier and smoother to type. Centering on the screen is on by default and can be changed in the Magnifier settings.

  • We fixed an issue resulting in scrolling with the mouse wheel or touchpad not working reliably across the system in the last few flights.
  • We fixed an issue where opening the Memory Integrity page in Windows Security would crash the app.
  • We fixed an issue where the Windows Update icon in the taskbar system tray was not optimized for high DPI.
  • We fixed a recent issue where the “Add someone else to this PC” window would crash if an MSA-attached user tried to add a local user to the PC.
  • We fixed a typo in the WIN+(period) kaomoji section category names.
  • We fixed a race condition that could result in users getting stuck with an outdated version of the search relevancy logic, impacting subsequent search results.
  • We fixed an issue where the Start menu wasn’t launching if the “continue experiences on this device” group policy was disabled.
  • We fixed an issue where navigating with the Narrator + R command got stuck in PowerPoint Edit View.
  • Narrator no longer reads “null” after each command listed in the Narrator + F2 list.
  • We fixed a problem where Narrator was at low volume and could not be increased.

  • Due to an OS bug, the Your Phone app will not work on this build (20H1 Build 18894). This issue does not impact the Your Phone app if you are in the Slow and Release Preview rings. We expect to have this issue resolved in the next flight to the Fast ring. We appreciate your patience.
  • Older versions of anti-cheat software used with games may cause PCs to crash after updating to the latest 19H1 Insider Preview builds. We are working with partners to get their software updated with a fix, and most games have released patches to prevent this issue. To minimize the chance of running into it, make sure you are running the latest version of your games before updating the operating system. We are also working with anti-cheat and game developers to resolve similar issues that may arise with the 20H1 Insider Preview builds and to minimize the likelihood of these issues in the future.
  • Some Realtek SD card readers are not functioning properly. We are investigating the issue.
  • If you use Remote Desktop to connect to an enhanced session VM, taskbar search results will not be visible (just a dark area) until you restart searchui.exe.
  • We’re investigating reports that on certain devices with fast startup enabled, night light doesn’t turn on until after a restart. (Note: the problem occurs on a cold reboot or power off/power on. If night light doesn’t turn on, use Start > Power > Restart to work around it.)
  • There’s a noticeable lag when dragging the emoji and dictation panels.
  • Tamper Protection may be turned off in Windows Security after updating to this build. You can turn it back on.
  • Some features on the Start menu and in All apps are not localized in languages such as FR-FR, RU-RU, and ZH-CN.
  • In the Ease of Access settings, selecting a color filter may not take effect right away unless the color filters option is turned off and back on again.
  • The IME candidate window for East Asian IMEs (Simplified Chinese, Traditional Chinese, and the Japanese IME) may sometimes not open. We are investigating the issue. In the meantime, ending the “WindowsInternal.ComposableShell.Experiences.TextInput.InputApp.exe” task from the Details tab in Task Manager should unblock you if you experience this issue.
  • We are aware of an issue with the Bopomofo IME where the character width suddenly changes from Half width to Full width, and we are investigating.

If you install builds from the Fast ring and switch to either the Slow ring or the Release Preview ring, optional content such as enabling developer mode will fail. You will have to remain in the Fast ring to add/install/enable optional content. This is because optional content will only install on builds approved for specific rings.

No downtime for Hustle-As-A-Service,
Dona