App and avatar item gifting is here – Windows Experience Blog

Today, we are excited to announce that in addition to digital games, Xbox Live Gold and Xbox Game Pass subscriptions, we have expanded digital gifting to include apps and avatar items sold in the Microsoft Store.
Here’s how to start sending digital gifts in the Microsoft Store in three easy steps:

Head over to the Microsoft Store on your Windows 10 PC, Xbox console or online.
Navigate to the product that you would like to purchase as a gift.
Select “Buy as gift” and enter the email address of the gift recipient. (On Xbox One, you can choose a Gamertag from your list of Xbox Live friends.)

The gift recipient will receive a code for their product along with instructions on how to redeem the code. (On Xbox One, gift recipients will receive a system message with a clickable redemption button.)
There are a few things to keep in mind as you’re gifting apps, games and other content:

There are limits to the number of discounted products you can buy. Gift purchasers can only buy two (2) of the same discounted product – and a total of ten (10) discounted products – every 14 days.  There are no limits for gift purchases made at full price.
Gifting of Xbox 360 and Xbox original games, Xbox original avatar items, pre-orders, free products and consumable downloadable game content such as virtual currency is not allowed.
Gift recipients can only redeem gift tokens in the country or region where they were purchased.
The gifts are sent to recipients as soon as they are purchased. Currently, you can’t time the delivery of the gift for a specific date and time.

Most apps and avatar items are available for gifting today, Oct. 19, and all apps and avatar items should be available for gifting by Tuesday, Oct. 23.
Digital gifting has been one of the top asks from our fans, and we are pleased that we’ve made it even better! Please continue to give us feedback on features you would like to see on Microsoft Store, using the Windows Feedback Hub app or the Xbox Ideas site.

Buying renewable energy should be easy — here’s one way to make it less complex – Microsoft on the Issues

By Brian Janous, Microsoft General Manager of Energy and Sustainability; Kenneth Davies, Microsoft Director of Innovation for Energy Strategy & Research; and Lee Taylor, cofounder and CEO, REsurety

It would be difficult to overestimate the impact that corporate procurement of renewable energy, primarily through power purchase agreements (PPAs), has had on the overall renewable market. In less than a decade, renewable energy created from corporate PPAs went from zero to more than 13 gigawatts in the U.S. alone. Microsoft is one of the largest players in this market, growing from a 110-megawatt wind project in Texas in 2013 to a portfolio of more than 1.2 gigawatts across six states and three continents.

This rapid growth, both within our portfolio and beyond, is because these deals are good for business. Renewable energy agreements help companies meet the sustainability commitments customers increasingly expect and – if structured properly – do so in a way that provides a hedge against the risk of rising electricity costs on the open market. The fuel for renewable energy projects – the wind and the sun – is free, enabling a fixed price over the length of the agreement. However, as the market has matured, it has become clear that other risks and complexities exist within the PPA structure that may inhibit a PPA’s effectiveness as a risk management tool. The failure to simplify this complex process and mitigate the risk assumed by the buyer could endanger the corporate procurement market, causing it to slow or stall out completely.

We want to see continued growth of renewables. That is why today, Microsoft and REsurety, along with their partners at Nephila Climate (“Nephila”) and Allianz Global Corporate & Specialty, Inc.’s Alternative Risk Transfer unit (Allianz), announced a new solution that mitigates those risks. We’re calling it a volume firming agreement (VFA), and Microsoft, in addition to co-developing it, will become the first adopter.

The concept of a VFA has its roots in late 2010, when Nephila Capital approached several of the first corporate renewable energy buyers with the idea of helping them manage the risks inherent in PPAs. At the time, however, the idea was just that. Unable to find a corporate buyer willing to put in the effort to help co-develop what would become the VFA, Nephila elected instead to sponsor an MBA project at the Tuck School of Business at Dartmouth College, led by Lee Taylor. Upon graduation, Taylor turned that concept into a company, REsurety. In 2016, Nephila and REsurety finally found that corporate partner in Microsoft, when we signed a PPA with Allianz for the output of the 178-megawatt Bloom wind project in Kansas. This was the first Proxy Generation PPA, winning honors as North American Wind Project of the Year, and laying the groundwork for today’s VFA.

VFAs are intended to be a simple fix to a big challenge with renewable energy PPAs, namely that these deals expose the buyer to all the weather-related risks of power production, and the inherent intermittent nature of wind and solar means there are hourly issues to be addressed. Put simply, the power needs of buyers are static but the power from the project varies on a day-to-day, hour-to-hour basis.

While it’s true that the fixed-price nature of PPAs provides the buyer some protection against a long-term increase in price, the hourly variability of wind and solar creates near-term complexity and risk. In periods when the wind or solar project is producing more than average, the market value of this energy is often lower due to the impact of additional supply in the market. Conversely, in periods when it is producing less than average, the market price is often high. In other words, volume and price move inversely. This variability and its financial impact are difficult to manage for even the savviest energy buyers and a substantial deterrent to smaller companies, as well as retailers, looking to engage in the renewables market.
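
To make the volume-price interaction concrete, here is a toy illustration (the numbers are invented for the example, not drawn from any real project): with two equally likely hours, one windy and one calm, the actual market value of the generation falls short of what average volume times average price would suggest.

# Toy numbers (invented for illustration): two equally likely hours.
$windyHour = @{ MWh = 150; Price = 18 }   # high output, low price
$calmHour  = @{ MWh = 50;  Price = 42 }   # low output, high price

# Actual market value of the generation: 150*18 + 50*42 = $4,800
$actual = $windyHour.MWh * $windyHour.Price + $calmHour.MWh * $calmHour.Price

# Naive estimate from averages: 200 MWh total * $30/MWh average = $6,000
$naive = ($windyHour.MWh + $calmHour.MWh) * (($windyHour.Price + $calmHour.Price) / 2)

"Actual value: $actual; naive estimate: $naive"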

But what is undesirable to buyers is very attractive to others, namely insurance companies whose core business revolves around taking weather-related risks, including temperature, rain, snow and wind. VFAs effectively remove from buyers the risk of how future weather conditions will impact the financial value of a PPA, reallocating it to parties who want that risk.

As the market for VFAs and similar products grows, we believe it will create new incentives for those who now bear these risks to procure storage resources and other assets capable of physically balancing the intermittency of renewables. Through the aggregation of risk, these insurers will be able to procure resources at economies of scale that even Microsoft is unable to achieve. In that way, today’s financial firming solution is tomorrow’s physical firming solution, accelerating the adoption of storage and other resources required to eventually transition to a 100 percent carbon-free power generation system.

VFAs are not a replacement for PPAs, nor are they a product Microsoft is selling. They are contracts that simply sit atop new PPAs, or existing PPAs, mitigating the risk to the buyer. Microsoft has signed three of these contracts with Allianz, in conjunction with their partners at Nephila, covering three wind projects in the U.S. in Texas, Illinois and Kansas, totaling almost 500 megawatts. As Microsoft continues to purchase renewable energy to power our operations, we anticipate utilizing VFAs to firm the energy and match our consumption on an hourly basis.

At Microsoft, we are committed to driving a more sustainable future beyond our own four walls. That is why our corporate energy commitments are far broader than just megawatts. We intend to support and enable the transformation of the energy sector using our buying power and innovations so everyone can benefit. REsurety is also focused on enabling the growth of renewable energy by providing tools to understand and manage risks.

The partnership between our two organizations leverages deep expertise in markets, risk and the challenges buyers face in these markets. That is why we’re confident that innovations like the VFA will make it cheaper and easier to procure renewable energy, enabling corporate buyers of all sizes, as well as retailers, to play a role in enabling the transition from fossil fuels to clean energy.

We invite other corporate buyers to take a more in-depth look at our white paper expounding on the role of Proxy Generation PPAs in the implementation of VFAs, co-authored by Microsoft, REsurety and Orrick, Herrington and Sutcliffe LLP, available today here, or contact us. We’re looking forward to a future where even more corporations can participate in the renewable energy market, which would be a big step toward a low-carbon future for the planet.

What are 3 best practices for backing up Office 365 data?

This question is topical and timely. The adoption of Office 365 is massive.

With two-thirds of organizations using Exchange Online and nearly 80% of organizations with at least 100 users on OneDrive for Business, Microsoft is a clear leader in email and online collaboration. It’s likely the product your organization uses or will use in the near future, so you need to come up with a strategy for backing up Office 365.

Data within Office 365 is just as susceptible to corruption, loss and manipulation as the on-premises equivalent, so you should be thinking about how to best protect it. It’s also been demonstrated that data within Office 365 can become the target of ransomware attacks.

Let me begin by defining what I mean by Office 365. Generally speaking, we’re talking about Exchange Online, SharePoint Online and OneDrive data, based on the current iterations of Office 365, its backup integration points and the third-party software vendors that can back up Office 365 data.

Here are three best practices for backing up Office 365.

Don’t assume Microsoft will help. Office 365 operates in a shared responsibility model. Microsoft is responsible for the uptime of its services, while users are responsible for the access and control of their data. Backing up Office 365 is left to the user.

Go beyond Exchange Online. You need to determine if any part of your organization uses SharePoint or OneDrive, a list that will expand over time. Microsoft is working hard to get everyone on Microsoft Teams, so I suspect it’ll shortly be providing a way to back that up as well.
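
As a rough sketch of that discovery step, the PowerShell below inventories Exchange Online mailboxes plus SharePoint and OneDrive for Business sites so you can scope what needs protecting. This is a sketch assuming the current Microsoft management modules are installed; the admin URL is a placeholder for your tenant, not part of any particular backup product.

# Sketch: inventory what would need backing up. Assumes the ExchangeOnlineManagement
# and Microsoft.Online.SharePoint.PowerShell modules are installed; replace the
# admin URL with your tenant's.
Connect-ExchangeOnline
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'

$mailboxes = Get-Mailbox -ResultSize Unlimited
$sites     = Get-SPOSite -Limit All
$oneDrives = Get-SPOSite -IncludePersonalSite $true -Limit All -Filter "Url -like '-my.sharepoint.com/personal/'"

"{0} mailboxes, {1} SharePoint sites, {2} OneDrive sites to protect" -f $mailboxes.Count, $sites.Count, $oneDrives.Count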

Treat backups like you would on premises. In the case of backups, “there is no cloud” is the right mentality. Look at the recovery of each mailbox, OneDrive user and SharePoint site as if it resided on premises by using the same data sets, frequencies and recovery strategies.

The biggest issue is to get your executive team to realize that just because the organization has adopted Office 365 doesn’t mean its data is protected. There are plenty of third-party products that support backing up Office 365 data, most of them roughly at parity with one another. For some of you, it may just be a matter of purchasing the Office 365 option for your existing backup platform.

Whatever the case, the time to start backing up Office 365 is now.

Gartner Symposium 2018: Digital business models shift IT priorities

ORLANDO, Fla. — Gartner Symposium 2018 has made clear that modern, successful companies are no longer dabbling in digital business models; they have moved on to scaling their digital businesses.

And as digital business maturity reaches a tipping point, CIOs must secure a foundation for the next stage of digital evolution: educating the board about these business model changes, according to Andy Rowsell-Jones, vice president and research director at Gartner’s CIO and executive leadership research team.

A business model is defined as the way an enterprise creates, delivers and captures value, including what it does for its consumers and how it does it, Rowsell-Jones told the audience at the Gartner Symposium 2018 conference.

Forty-nine percent of CIOs reported undergoing a business model change, according to Gartner’s 2019 CIO Agenda report that surveyed 3,102 CIOs worldwide. These business model changes are being driven by digital maturity and evolving consumer demands that make customer engagement vital, Rowsell-Jones said.

“We are now into sustaining innovation, but it is much more difficult, because it’s [about] continuous release, continuous integration and continuous productization of the things that we do,” he said.

That’s the story CIOs should be telling their board of directors, he added: Explain digital transformation to them in a way they will find relevant enough to take action.

Measuring ROI on digital and focusing on cybersecurity

With businesses scaling digital to reduce service costs, it is important to measure ROI on digital initiatives. Rowsell-Jones noted that the CIO Agenda report found 89% of “top performers” — businesses that have a mature digital business model — measure this type of ROI.

These top performers have better consumer feedback and consumer engagement than everyone else, he said. The metrics or key performance indicators their organizations use to measure ROI on their digital investments include details like the number of consumers that use their apps and Net Promoter Score.

It’s not about putting in fixes to stop the bad guys; it’s about education and behavioral change.
Andy Rowsell-Jones, vice president and research director, Gartner’s CIO and executive leadership research team

“It is now digital at scale; you have to measure this stuff, because it’s not a bunch of experiments anymore,” he said. “Businesses must combine back-office and front-office capabilities to deliver a superior outcome.”

Top-performing CIOs are also making cybersecurity a priority, he said. They focus on building security into product design, while improving awareness and recovery practices.

They also prioritize digital infrastructure and operations when determining cybersecurity processes, including product and service delivery, Rowsell-Jones added.

“A lot of the security … is about behavior; it’s not about technology,” he said. “It’s not about putting in fixes to stop the bad guys; it’s about education and behavioral change.”

Investing right

Initiating this type of business model change also means CIOs must widen their typical product-centric approach, he said. They should make sure the introduction of technologies, ideas and tools is done in a sequenced way, he told the audience at Gartner Symposium 2018.

“It really means that it is a delivery mechanism that is very comfortable delivering version 1, 1.1, 1.2, 1.3 and 1.4,” he said. “When you’re making the transition from project to product, it’s about investing in DevOps, investing in culture change; it’s about attracting talent and building the tools … it’s about educating stakeholders.”

CIOs are also changing their investment patterns, adopting digital technologies like AI, data analytics and cloud to change the services they offer their customers, he said.

There has been a 270% increase in AI adoption since 2015, the CIO Agenda report found, while CIOs are reducing investments in legacy technologies, like on-premises infrastructure.

“[It’s about] rebalancing your technology portfolio toward digital transformation,” he said. “This is a CIO’s serenity prayer: Please give me the resources necessary to make the investments that I need to make. Please give me the courage and influence I need to make the cuts in existing technology that I need to make, and give me the wisdom to tell the difference.”

For Sale – 2x eMachines ER1401

Hi….

I have 2x eMachines ER1401 for sale. They are in good condition, with a few scratches on the case due to their age. Everything works fine. They come with the power supply only.

Specs:

1.3 GHz AMD Athlon II Neo Processor K325
nVidia nForce 9200 Chipset
250GB 5400rpm SATA hard drive
Multi-in-One Digital Media Card Reader: MultiMediaCard, Secure Digital Card, Memory Stick, xD-Picture Card
10/100/1000 Gigabit Ethernet LAN (RJ-45 port), integrated 802.11b/g/n wireless

£40 each.

Will sort pictures out if there is any interest in them. Thanks for looking.

Price and currency: £40 each
Delivery: Delivery cost is included within my country
Payment method: PPG or BT
Location: Liverpool
Advertised elsewhere?: advertised elsewhere
Prefer goods collected?: I have no preference

How to use PowerShell JEA for task delegation

Delegating administrative tasks outside of the IT department opens an organization to security risks, but Just Enough Administration sets boundaries to perform certain jobs without requiring full administrative rights.

Many IT pros use PowerShell scripts and functions for daily tasks. But, normally, only senior system administrators can run them, because these scripts can change essential company systems and data.

Administrators can use the PowerShell JEA service to delegate commands with specific parameters to other users to complete basic admin tasks. For example, PowerShell JEA can set up a constrained endpoint that will enable the HR department to log into the domain controller and create users in Active Directory with minimal security risk.

Set the PowerShell JEA role capabilities

If you don’t have one, create a group that includes the HR employees who need limited administrative access. Setting up a Just Enough Administration (JEA) constrained endpoint takes three steps: create a role capability file, build a session configuration file and register the JEA configuration.

JEA uses role-based access control. Each JEA endpoint, a domain controller in this case, will hold a role called ADUserManager. Users assigned this role can establish a PowerShell remoting session to a domain controller but are limited to using the New-ADUser cmdlet with specific parameters.

First, create a PowerShell module called AdUserManager on the domain controller. Create the folder, the script module and the module manifest.

$roleName = 'AdUserManager'

# Create a folder for the module
$modulePath = Join-Path $env:ProgramFiles "WindowsPowerShell\Modules\$roleName"
$null = New-Item -ItemType Directory -Path $modulePath

# Create an empty script module and module manifest. At least one file in the module folder must have the same name as the folder itself.
$null = New-Item -ItemType File -Path (Join-Path $modulePath "$roleName.psm1")
New-ModuleManifest -Path (Join-Path $modulePath "$roleName.psd1") -RootModule "$roleName.psm1"

(Video: Learn to set permissions on administrative tasks.)

Next, create a subfolder called RoleCapabilities inside the module folder, which will hold the configuration for the ADUserManager role.

$rcFolder = Join-Path $modulePath "RoleCapabilities"
$null = New-Item -ItemType Directory $rcFolder

Next, create the role capability file from a hash table of configuration values for the role. When the role runs in a PowerShell session, it imports the Active Directory module and exposes only the New-ADUser and Get-ADUser cmdlets. We limit this even further by allowing just the specific parameters HR users need to create new users.

$rcCapFilePath = Join-Path -Path $rcFolder -ChildPath "$roleName.psrc"
$roleCapParams = @{
    Path = $rcCapFilePath
    ModulesToImport = 'ActiveDirectory'
    VisibleCmdlets = @{
        Name = 'New-ADUser'
        Parameters = @{ Name = 'GivenName' },
            @{ Name = 'Surname' },
            @{ Name = 'Name' },
            @{ Name = 'AccountPassword' },
            @{ Name = 'ChangePasswordAtLogon' },
            @{ Name = 'Enabled' },
            @{ Name = 'Department' },
            @{ Name = 'Path' }
    },
    @{
        Name = 'Get-ADUser'
        Parameters = @{
            Name = 'Filter'
        }
    }
}
New-PSRoleCapabilityFile @roleCapParams

Build the session configuration file

Next, create the session configuration file, which maps the role definition to a particular session configuration. The commands below build the session configuration file and set the session type to RestrictedRemoteServer to make a constrained endpoint.

In the example below, the session configuration file maps the LAB\ADUserManagers Active Directory group to the role capability file. We use the RunAsVirtualAccount parameter to avoid changing group memberships.

$sessionFilePath = Join-Path -Path $rcFolder -ChildPath "$roleName.pssc"
$params = @{
    SessionType = 'RestrictedRemoteServer'
    Path = $sessionFilePath
    RunAsVirtualAccount = $true
    RoleDefinitions = @{ 'LAB\ADUserManagers' = @{ RoleCapabilities = $roleName } }
}

New-PSSessionConfigurationFile @params

Next, run a test using the Test-PSSessionConfigurationFile cmdlet.

Test-PSSessionConfigurationFile -Path $sessionFilePath
True

It should come back as true.

Register the PowerShell JEA configuration

Next, register the session configuration on the domain controller to allow HR users to establish a remote session to the domain controller.

Register-PSSessionConfiguration -Path $sessionFilePath -Name $roleName -Force

WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Plugin

Type            Keys                                Name
----            ----                                ----
Container       {Name=ADUserManager}                ADUserManager

You can see the session configuration via Get-PSSessionConfiguration.

Get-PSSessionConfiguration -Name ADUserManager

Name          : AdUserManager
PSVersion     : 5.1
StartupScript :
RunAsUser     :
Permission    : LAB\ADUserManagers AccessAllowed

Test the JEA endpoint

To make sure everything works, create a PSCredential object with the Get-Credential cmdlet and pass that credential along with the name of the registered session configuration.

$nonAdminCred = Get-Credential
Enter-PSSession -ComputerName LABDC -ConfigurationName ADUserManager -Credential $nonAdminCred

You should now be in a PowerShell remoting session running as the virtual account on the domain controller. Use Get-Command to see all of the commands available. You will see some commands that you did not assign, such as Clear-Host and Get-Help, which is fine, as these are standard in any constrained session.
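
To go one step further, this is roughly what an end-to-end test from the HR user’s workstation could look like, using Invoke-Command against the constrained endpoint. The OU path and user details below are illustrative, not from the original walkthrough:

# Sketch: create a test account through the JEA endpoint. The OU path and
# user details are examples; adjust for your domain.
$securePw = Read-Host -AsSecureString -Prompt 'Initial password'

Invoke-Command -ComputerName LABDC -ConfigurationName ADUserManager -Credential $nonAdminCred -ScriptBlock {
    New-ADUser -Name 'Test User' -GivenName 'Test' -Surname 'User' `
        -AccountPassword $using:securePw -ChangePasswordAtLogon $true `
        -Enabled $true -Department 'HR' -Path 'OU=Staff,DC=lab,DC=local'
}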

How to Install the Azure Stack Development Kit (ASDK)

In the last article, we discussed hybrid cloud scenarios and the differences between private, public and hybrid clouds. We also looked at how the cloud works in general and why you should use cloud services instead of setting up local environments, often called private clouds. In this article, we’ll dive deeper into how you can test Azure Stack yourself.

Azure Stack Refresher

Microsoft Azure offers worldwide cloud services across more than 50 regions and over 150 datacenters, connected by Microsoft’s own backbone. For scenarios where you explicitly need cloud services on-premises, Microsoft offers Microsoft Azure Stack: a downsized version of Azure that runs the same stack of technologies as public Azure, but on a smaller footprint (4-12 servers in one rack).

Azure Stack is bound to hardware that has been tested and certified for it. The certified hardware vendors are:

  • HPE
  • DELL-EMC
  • Lenovo
  • Cisco
  • Huawei
  • Wortmann
  • Fujitsu

The typical scenarios for Azure Stack are:

  • Disconnected/Edge Scenarios (e.g. running Azure Services on a ship or a plane)
  • Data privacy reasons (data cannot leave the company location)
  • Modern cloud app development on-premises

As mentioned above, Azure Stack software and hardware must be bought together as an appliance, a so-called “integrated system”. There is no software download for Azure Stack itself. For testing (PoC) purposes, though, it is not practical to order 4-12 servers only to decide later whether Azure Stack is a good fit. Microsoft therefore offers a trial version of Azure Stack, called the Azure Stack Development Kit (ASDK). It is not bound to any physical hardware vendor and does not provide high availability or performance guarantees, but you can download and install it yourself for test/dev scenarios.

Hardware Requirements for the Azure Stack Development Kit

The hardware requirements for an ASDK Azure Stack host are as follows:

  • Disk drives, operating system – Minimum and recommended: 1 OS disk with at least 200 GB available for the system partition (SSD or HDD)
  • Disk drives, general development kit data* – Minimum: 4 disks, each with at least 140 GB of capacity (SSD or HDD); Recommended: 4 disks, each with at least 250 GB. All available disks are used.
  • Compute: CPU – Minimum: dual-socket, 12 physical cores (total); Recommended: dual-socket, 16 physical cores (total)
  • Compute: Memory – Minimum: 96 GB RAM; Recommended: 128 GB RAM (the minimum to support PaaS resource providers)
  • Compute: BIOS – Hyper-V enabled (with SLAT support)
  • Network: NIC – Windows Server 2012 R2 certification required for the NIC; no specialized features required
  • HW logo certification – Minimum: Certified for Windows Server 2012 R2; Recommended: Certified for Windows Server 2016

* You need more than this recommended capacity if you plan on adding many of the marketplace items from Azure.

Data disk drive configuration: All data drives must be of the same type (all SAS, all SATA or all NVMe) and the same capacity. If SAS disk drives are used, they must be attached via a single path (no MPIO; multi-path support is not provided).
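
Before deployment, it is worth confirming how Windows classifies the candidate drives. A quick sketch using the built-in storage cmdlets:

# Show bus type, media type and size for each physical disk; the data drives
# must all share the same BusType and capacity.
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, MediaType,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } } |
    Format-Table -AutoSize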

HBA configuration options

  • (Preferred) Simple HBA
  • RAID HBA – Adapter must be configured in “pass-through” mode
  • RAID HBA – Disks should be configured as Single-Disk, RAID-0

Supported bus and media type combinations

  • SATA HDD
  • SAS HDD
  • RAID HDD
  • RAID SSD (If the media type is unspecified/unknown*)
  • SATA SSD + SATA HDD
  • SAS SSD + SAS HDD
  • NVMe

* RAID controllers without pass-through capability can’t recognize the media type. Such controllers mark both HDD and SSD as Unspecified. In that case, the SSD is used as persistent storage instead of caching devices. Therefore, you can deploy the development kit on those SSDs.

To double-check that all hardware requirements are met, you can run the prerequisite check script:

https://go.microsoft.com/fwlink/?LinkId=828735&clcid=0x409
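
If the link resolves to a standalone script, fetching and running it could look like this minimal sketch (the local file name is an assumption; the fwlink may resolve to a differently named script):

# Sketch: download and run the prerequisite checker.
$checker = Join-Path $env:TEMP 'asdk-prechecker.ps1'
Invoke-WebRequest -Uri 'https://go.microsoft.com/fwlink/?LinkId=828735&clcid=0x409' -OutFile $checker
& $checker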

How to Install the Azure Stack Development Kit

Now let’s go on and install the ASDK on your server. You will find the download here:

https://azure.microsoft.com/en-us/overview/azure-stack/development-kit/?v=try

The download may take some time, as it is about 10 GB in size.

After the download has finished, you can extract it to a folder of your choice.

The CloudBuilder.vhdx provides the source for all Azure Stack services and is the basis for the installation. Place this VHDX in a destination folder of your choice and start the ASDK installer script, which is available here: https://docs.microsoft.com/en-us/azure/azure-stack/asdk/asdk-install.

Once the installer is running, choose the button on the left to prepare the host to boot from CloudBuilder.vhdx as the next step.

On the next screen, choose the folder where the VHDX file sits and, optionally, point to a folder with drivers specific to your hardware. These will be integrated into the VHDX during the preparation phase.

Now we need to configure the administrator account and the password, which will be set identically for all accounts in Azure Stack. Finally, you have the choice of setting a computer name, a time zone, and the static IP configuration.

As the Azure Stack ASDK supports only one NIC, you will have to choose the appropriate one; all others will be disabled by default.

When setting the time server IP, choose a reliable time server, as time issues are among the most common causes of errors during the installation; a misconfigured time server is the source of roughly 80% of failed deployments. If you have no other available, time.windows.com should help.
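
To check and, if needed, set the host’s time source before you start, the built-in w32tm tool helps. A sketch, run from an elevated prompt:

# Check the current time source and sync status.
w32tm /query /status

# Point the host at time.windows.com and force a resync.
w32tm /config /manualpeerlist:"time.windows.com" /syncfromflags:manual /update
w32tm /resync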

Now the system is mounting the VHDX file.

Finally, you just need to choose a server reboot option and it will boot into the VHDX. The first step of the deployment is done.

NOTE: If additional hardware drivers are needed, install them manually after this reboot.

After the reboot, the original drive C: will be mounted as D:, so you will always have the chance to access these files (for missing drivers or anything else).

After choosing the left button again, you will have the following deployment options:

  • Disconnected (using AD FS only): no public Azure connectivity is needed, but you lose multi-tenancy.
  • Connected (to Azure or Azure China): probably the most-used option; Azure AD provides identity management for Azure Stack itself (not for the tenants, just the administrative side)

Now let’s choose the corresponding Azure AD directory and set the local administrator password.

Let’s configure networking again.

Define the IP information once again and, optionally, the VLAN ID and DNS forwarder.

Once all steps are done, the installation should start properly when you click Deploy.

After logging on with your Azure credentials, the script should run for approximately 4-5 hours, depending on the performance of your ASDK host.

When the deployment is finished, the installer shows a completion screen.

If there are any issues that may have stopped the deployment, you can check the logfiles or just run the setup again using the /rerun parameter.

With this second basic step finished, we now need to configure Azure Stack to be fully functional.

To check if Azure Stack is working properly, you can check the following URLs:

Admin Portal: https://adminportal.local.azurestack.external

In the administrator portal, you can do things such as:

  • Manage the infrastructure (including system health, updates, capacity, etc.)
  • Populate the marketplace
  • Create subscriptions for users
  • Create plans and offers

Tenant Portal: https://portal.local.azurestack.external/

NOTE: You will need to use the account you configured during the deployment setup to login to these portals.
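
As a quick sanity check that both portal endpoints resolve and answer on HTTPS, a sketch to run from the ASDK host (or any machine that resolves the azurestack.external names):

# Confirm the portal endpoints resolve and accept TCP connections on 443.
'adminportal.local.azurestack.external', 'portal.local.azurestack.external' |
    ForEach-Object {
        Test-NetConnection -ComputerName $_ -Port 443 |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }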

Wrap-Up

So, the installation took a while, didn’t it? Wonder why? Take a look at the VMs now running on the host; this is what you’ve just deployed. Makes sense now, doesn’t it?
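
You can list them yourself with the Hyper-V cmdlets. A quick sketch:

# List the Azure Stack infrastructure VMs the deployment created on the host.
Get-VM | Sort-Object Name |
    Select-Object Name, State,
        @{ Name = 'MemoryGB'; Expression = { [math]::Round($_.MemoryAssigned / 1GB, 1) } } |
    Format-Table -AutoSize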

From an architecture perspective, it’s important to keep in mind that this is a sandbox installation. Connecting outside services to the ASDK, or to any of the components inside it, is virtually impossible; all traffic flows through the BGPNAT VM. Additionally, Azure Stack is not served with ongoing updates in this type of deployment, so to stay on the latest build you’ll need to reinstall each month.

There are many other tasks still to set up and configure (e.g., registration with Azure; accounts, plans and offers; resource providers), which we will look at in the next chapter.

To summarize, the Azure Stack ASDK is a great way to start with Azure Stack and will help you figure out whether it could fit into your existing environment. The next step would be a physical multi-node PoC, after talking to your hardware vendor of choice.

Want to learn more about the benefits of using Azure Stack? Watch our free on-demand webinar, Future-proofing your Datacenter with Microsoft Azure Stack.

What about you? Have you had a chance to test Azure Stack yet? What were your thoughts? Let us know in the comments section below!

Public cloud management tools lacking, research finds

Thinking about moving applications to the public cloud? If so, you better also think about the management and monitoring tools you’ll need to keep track of those assets. The bad news? Finding the right public cloud management approach isn’t a simple task.

Shamus McGillicuddy, an analyst at Enterprise Management Associates in Boulder, Colo., recently detailed EMA research that found 72% of 250 network managers said they need new public cloud management tools to oversee their public cloud deployments. Sixteen percent said they were using incumbent tools, while 14% said they were still looking for ways to satisfy their cloud management requirements.

“This is not a trivial adjustment,” McGillicuddy said. “Tools are often the foundation of the network team’s established processes and best practices. When the cloud forces them to acquire new management solutions, there will be pain.”

Several issues contribute to the public cloud management challenge. First is securing access to public cloud services; legacy hub-and-spoke WANs based on a central security foundation don’t perform well in a cloud-centric environment. Monitoring and troubleshooting internet-based cloud connectivity is another problem, McGillicuddy said, citing EMA’s research.

Read the rest of McGillicuddy’s discussion on public cloud management challenges.

Making the network resilient

Network engineer Brian Keys took a look at network resiliency and why it’s so difficult for enterprises to have a network that’s highly available. For one thing, nobody wants to pay for the technology necessary to achieve that goal. Additionally, finding architects with the experience to design a highly available network isn’t easy.

Still, Keys said, enterprises can take steps to improve their network’s reliability. The use of uninterruptible power supplies is a good approach. So are redundant links for branch office connectivity. But knowing which techniques are necessary and which ones are just nice to have requires careful study.

“A competent network designer should be able to tell with a high degree of certainty just how resilient the network is and in which ways,” Keys said. “Probably the toughest part is to explain to upper management the pros and cons of the new proposal and get their buy-in.”

Read what else Keys has to say about network resiliency.

When to accept an exception (Hint: Hardly ever)

Ivan Pepelnjak at ipSpace had some thoughts about the wisdom of making exceptions in your network’s design or operation. The conclusion: Don’t. Or try not to.

Recounting an experience as a passenger of an airline that had decided to delay his flight while waiting for a late connecting flight, Pepelnjak said that while the intent might have been good, the result wasn’t. His flight, and many others, were delayed for hours because of airport congestion. If the airline hadn’t waited for the incoming flight to land, his trip would have left on time.

Network operations face some of the same challenges, he said.

“How many times did you modify your network design or implement a one-off solution to accommodate an exception request coming in at the last minute? How many times did you mess up your network trying to do that?” he wrote.

While it’s necessary to be flexible, network operators also must weigh the risk involved with accepting an exception. Make sure you remember the wider effect before considering a last-minute request.

Read Pepelnjak’s thoughts on network exceptions and how good intentions can go wrong.
