
Zoom faces challenges in implementing end-to-end encryption

Zoom has outlined a four-phase plan for implementing end-to-end encryption. But the company will face hurdles as it attempts to add the security protocol to its video conferencing service.

Each phase of the plan will improve security but leave vulnerabilities that Zoom plans to address in the future. However, the company’s draft white paper provides less detail about the later stages of the project.

“This is complex stuff,” said Alan Pelz-Sharpe, founder of research and advisory firm Deep Analysis. “You can’t just plug and play end-to-end encryption.”

Zoom has not said when end-to-end encryption will launch or who will get access to it. At least initially, the service will likely be available only to paid customers.

The goal of the effort is to give users control of the keys used to decrypt their communications. That would prevent Zoom employees from snooping on conversations or from letting law enforcement agencies do the same.

Zoom previously advertised its service as end-to-end encrypted. But in April, the company acknowledged that it wasn’t using the commonly understood definition of that term. The claim has provided fodder for numerous class-action lawsuits.

The first phase of the plan will change Zoom’s security protocol so that users’ clients — not Zoom’s servers — generate encryption keys. The second phase will more securely tie those keys to individual users through partnerships with single sign-on vendors and identity providers.

The third step will give customers an audit trail to verify that neither Zoom nor anyone else is circumventing the system. And the fourth will introduce mechanisms for detecting hacks in real time.

One weakness is the scheme’s reliance on single sign-on vendors and identity providers to tie encryption keys to user identities. Customers that don’t use those services will be left less secure, potentially increasing the risk of meddler-in-the-middle attacks.

Zoom also won’t be able to apply the protocol to all endpoints. Excluded clients include Zoom’s web app and room systems that use SIP or H.323. Zoom also can’t apply end-to-end encryption to audio connections made through the public telephone network.

Turning on end-to-end encryption will disable certain features. Users won’t be able to record meetings or, at least initially, join before the host. These limitations are typical of end-to-end encryption schemes for video communications.

Engineers from the messaging and file-sharing service Keybase are leading Zoom’s encryption effort. Zoom acquired Keybase in early May as part of its effort to improve security and privacy.

Zoom released a draft of its encryption plan on GitHub on May 22. The company is accepting public comments on the proposal through June 5. In the meantime, Zoom is urging customers to update their apps by May 30 to get access to a more secure encryption protocol called GCM.

Zoom has been working to repair its reputation after a series of news reports in March revealed numerous security and privacy flaws in its product. An influx of users following the global outbreak of coronavirus put a spotlight on the company.

Users and security experts criticized Zoom for prioritizing ease of use over security. They also faulted the company for not being transparent enough about its encryption and data-sharing practices.

“The criticism was justified and warranted and needed because otherwise these things don’t get fixed,” said Tatu Ylonen, a founder and board member of SSH Communications Security. “I would applaud them for actually taking action fairly quickly.”

More recently, Zoom celebrated some wins. The company settled with the New York attorney general’s office, warding off a further investigation into its security practices. Zoom also got the New York City public school district to undo a ban on the product that had drawn national headlines in April.  

But the company will need to do more to win back the trust of some security-minded buyers.

“I think they’ve responded very quickly,” Pelz-Sharpe said. “But if I were advising a compliant company on a product to buy, it probably wouldn’t be on my list.”


The Complete Guide to Scale-Out File Server for Hyper-V

This article will help you understand how to plan, configure and optimize your SOFS infrastructure, primarily focused on Hyper-V scenarios.

Over the past decade, it seems that an increasing number of components are recommended when building a highly-available Hyper-V infrastructure. I remember my first day as a program manager at Microsoft when I was tasked with building my first Windows Server 2008 Failover Cluster. All I had to do was connect the hardware, configure shared storage, and pass Cluster Validation, which was fairly straightforward.

Figure 1 – A Failover Cluster with Traditional Cluster Disks

Nowadays, the recommended cluster configuration for Hyper-V virtual machines (VMs) adds further management layers: Cluster Shared Volumes (CSV) disks, plus a clustered file server that hosts the file path used to access them, known as a Scale-Out File Server (SOFS). While the SOFS provides the fairly basic functionality of keeping a file share online, understanding this configuration can be challenging even for experienced Windows Server administrators. To see the complete stack which Microsoft recommends, scroll down to the figures throughout this article. This may appear daunting, but do not worry, we’ll explain what all of these building blocks are for.

While there are management tools like System Center Virtual Machine Manager (SCVMM) that can automate the entire infrastructure deployment, most organizations need to configure these components independently. There is limited content online explaining how Scale-Out File Server clusters work and best practices for optimizing them. Let’s get into it!

Scale-Out File Server (SOFS) Capabilities & Limitations

A SOFS cluster should only be used for specific scenarios. The following features have been tested and are either supported, supported but not recommended, or not supported with the SOFS.

Supported SOFS scenarios

  • File Server
    • Deduplication – VDI Only
    • DFS Namespace (DFSN) – Folder Target Server Only
    • File System
    • SMB
      • Multichannel
      • Direct
      • Continuous Availability
      • Transparent Failover
  • Other Roles
    • Hyper-V
    • IIS Web Server
    • Remote Desktop (RDS) – User Profile Disks Only
    • SQL Server
  • System Center Virtual Machine Manager (VMM)

Supported, but not recommended SOFS scenarios

  • File Server
    • Folder Redirection
    • Home Directories
    • Offline Files
    • Roaming User Profiles

Unsupported SOFS scenarios

  • File Server
    • BranchCache
    • Deduplication – General Purpose
    • DFS Namespace (DFSN) – Root Server
    • DFS Replication (DFSR)
    • Dynamic Access Control (DAC)
    • File Server Resource Manager (FSRM)
    • File Classification Infrastructure (FCI)
    • Network File System (NFS)
    • Work Folders

Scale-Out File Server (SOFS) Benefits

Fundamentally, a Scale-Out File Server is a Failover Cluster running the File Server role. It keeps the file share path (\\ClusterStorage\Volume1) continually available so that it can always be accessed. This is critical because Hyper-V VMs use this file path to access their virtual hard disks (VHDs) via the SMB3 protocol. If this file path is unavailable, then the VMs cannot access their VHDs and cannot operate.

Additionally, it also provides the following benefits:

  • Deploy Multiple VMs on a Single Disk – SOFS allows multiple VMs running on different nodes to use the same CSV disk to access their VHDs.
  • Active / Active File Connections – All cluster nodes will host the SMB namespace so that a VM can connect or quickly reconnect to any active server and have access to its CSV disk.
  • Automatic Load Balancing of SOFS Clients – Since multiple VMs may be using the same CSV disk, the cluster will automatically distribute the connections. Clients can connect to the disk through any cluster node, so they are sent to the server with the fewest file share connections. By distributing the clients across different nodes, the network traffic and its processing overhead are spread out across the hardware, which maximizes performance and reduces bottlenecks.
  • Increased Storage Traffic Bandwidth – Using SOFS, the VMs will be spread across multiple nodes. This also means that the disk traffic will be distributed across multiple connections which maximizes the storage traffic throughput.
  • Anti-Affinity – If you are hosting similar roles on a cluster, such as two active/active file shares for a SOFS, these should be distributed across different hosts. Using the cluster’s anti-affinity property, these two roles will always try to run on different hosts, eliminating a single point of failure (see the configuration sketch after this list).
  • CSV Cache – SOFS files which are frequently accessed will be copied locally on each cluster node in a cache. This is helpful if the same type of VM file is read many times, such as in VDI scenarios.
  • CSV CHKDSK – CSV disks have been optimized to skip the offline phase, which means that they will come online faster after a crash. Faster recovery time is important for high availability since it minimizes downtime.
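For reference, here is a minimal PowerShell sketch of how the anti-affinity and CSV cache behaviors above are typically configured. The group names and the 512 MB cache size are placeholders, not values from this article.

```powershell
# Requires the FailoverClusters module (RSAT) on a cluster node.
Import-Module FailoverClusters

# Anti-affinity: tag two similar roles so the cluster tries to keep
# them on different nodes whenever possible.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("FileServerRoles") | Out-Null
(Get-ClusterGroup -Name "SOFS-A").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "SOFS-B").AntiAffinityClassNames = $class

# CSV cache: reserve 512 MB of RAM per node for read caching (0 disables it).
(Get-Cluster).BlockCacheSize = 512
```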

Scale-Out File Server (SOFS) Cluster Architecture

This section will explain the design fundamentals of Scale-Out File Servers for Hyper-V. The SOFS can run on the same cluster as the Hyper-V VMs it is supporting, or on an independent cluster. If you are running everything on a single cluster, the SOFS must be deployed as a File Server role directly on the cluster; it cannot run inside a clustered VM, because that VM cannot start without access to the File Server. Neither the VM nor the virtualized File Server could start up, since each depends on the other.

Hyper-V Storage and Failover Clustering

When Hyper-V was first introduced with Windows Server 2008 Failover Clustering, it had several limitations that have since been addressed. The main challenge was that each VM required its own cluster disk, which made the management of cluster storage complicated. Large clusters could require dozens or hundreds of disks, one for each virtual machine. This was sometimes not even possible due to limitations created by hardware vendors which required a unique drive letter for each disk. Technically you could run multiple VMs on the same cluster disk, each with their own virtual hard disks (VHDs). However, this configuration was not recommended, because if one VM crashed and had to fail over to a different node, it would force all the VMs using that disk to shut down and fail over to other nodes. This caused unplanned downtime, so as virtualization became more popular, a cluster-aware file system known as Cluster Shared Volumes (CSV) was created. See Figure 1 (above) for the basic architecture of a cluster using traditional cluster disks.

Cluster Shared Volume (CSV) Disks and Failover Clustering

CSV disks were introduced in Windows Server 2008 R2 as a distributed file system that is optimized for Hyper-V VMs. The disk must be visible to all cluster nodes, use NTFS or ReFS, and can be created from pools of disks using Storage Spaces.

The CSV disk is designed to host VHDs from multiple VMs from different nodes and run them simultaneously. The VMs can distribute themselves across the cluster nodes, balancing the hardware resources which they are consuming. A cluster can host multiple CSV disks and their VMs can freely move around the cluster, without any planned downtime. The CSV disk traffic communicates over standard networks using SMB, so traffic can be routed across different cluster communication paths for additional resiliency, without being restricted to use a SAN.

A Cluster Shared Volumes disk functions similarly to a file share hosting the VHD file, since it provides storage and controls access. Virtual machines access their VHDs much like clients access a file in a file share, using a path like \\ClusterStorage\Volume1. This file path is identical on every cluster node, so as a VM moves between servers it will always be able to access its disk using the same file path. Figure 2 shows a Failover Cluster storing its VHDs on a CSV disk. Note that multiple VHDs for different VMs on different nodes can reside on the same disk, which they access through the SMB share.
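As a rough illustration, the following PowerShell adds an available cluster disk to Cluster Shared Volumes; the disk name "Cluster Disk 1" is a placeholder. After this, every node sees the same C:\ClusterStorage\Volume1 mount point.

```powershell
Import-Module FailoverClusters

# Convert an available cluster disk into a Cluster Shared Volume.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Verify the CSV; each node exposes it under C:\ClusterStorage\VolumeN.
Get-ClusterSharedVolume | Select-Object Name, State | Format-Table -AutoSize
```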

Figure 2 – A Failover Cluster with a Cluster Shared Volumes (CSV) Disk

Scale-Out File Server (SOFS) and Failover Clustering

The SMB file share used for the CSV disk must be hosted by a Windows Server File Server. However, the file share should also be highly-available so that it does not become a single point of failure. A clustered File Server can be deployed as a SOFS through Failover Cluster Manager as described at the end of this article.

The SOFS will publish the VHD’s file share location (known as the “CSV Namespace”) on every node. This active/active configuration allows clients to access their storage through multiple pathways. It provides additional resiliency and availability: if one node crashes, the VM will temporarily pause its transactions until it quickly reconnects to the disk via another active node, but it remains online.

Since the SOFS runs on a standard Windows Server Failover Cluster, it must follow the hardware guidance provided by Microsoft. One of the fundamental rules of failover clustering is that all the hardware and software should be identical. This allows a VM or file server to operate the same way on any cluster node, as all the settings, file paths, and registry entries will be the same. Make sure you run the Cluster Validation tests and follow Altaro’s Cluster Validation troubleshooting guidance if you see any warnings or errors.
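If you prefer PowerShell over Failover Cluster Manager, validation can be run as shown in this sketch; the node names are placeholders.

```powershell
Import-Module FailoverClusters

# Run the full validation test suite against the intended cluster nodes
# and review the generated HTML report for warnings or errors.
Test-Cluster -Node "Node1", "Node2", "Node3"
```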

The following figure shows a SOFS deployed in the same cluster. The clustered SMB shares create a highly-available CSV namespace allowing VMs to access their disk through multiple file paths.

Figure 3 – A Failover Cluster using Clustered SMB File Shares for CSV Disk Access

Storage Spaces Direct (S2D) with SOFS

Storage Spaces Direct (S2D) lets organizations deploy small failover clusters with no shared storage. S2D generally uses commodity servers with direct-attached storage (DAS) to create clusters that mirror their data between local disks to keep their states consistent. These S2D clusters can be deployed as Hyper-V hosts, storage hosts, or in a hyper-converged configuration running both roles. The storage uses Scale-Out File Servers to host the shares for the VHD files.

In Figure 4, a SOFS cluster is shown which uses storage spaces direct, rather than shared storage, to host the CSV volumes and VHD files. Each CSV volume and its respective VHDs are mirrored between each of the local storage arrays.
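A minimal sketch of the S2D flow described above, assuming the cluster already exists and has passed validation; the volume name and size are placeholders.

```powershell
Import-Module FailoverClusters

# Claim the nodes' local disks and build the Storage Spaces Direct pool.
Enable-ClusterStorageSpacesDirect

# Carve a mirrored, cluster-shared ReFS volume out of the S2D pool.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 2TB
```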

Figure 4 – A Failover Cluster with Storage Spaces Direct (S2D)

Infrastructure Scale-Out File Server (SOFS)

Windows Server 2019 introduced a new Scale-Out File Server role called the Infrastructure File Server. It functions like the traditional SOFS, but it is designed to support only Hyper-V infrastructure workloads, with no other types of roles. There can also be only one Infrastructure SOFS per cluster.

The Infrastructure SOFS can be created manually via PowerShell or automatically when it is deployed by Azure Stack or System Center Virtual Machine Manager (SCVMM). This role will automatically create a CSV namespace share using the syntax \\InfraSOFSName\Volume1. Additionally, it will enable the Continuous Availability (CA) setting for the SMB shares, also known as SMB Transparent Failover.
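Here is a sketch of the manual PowerShell route mentioned above, based on my reading of the Windows Server 2019 documentation; the cluster and role names are placeholders.

```powershell
Import-Module FailoverClusters

# Create the single Infrastructure SOFS role for this cluster.
# The CSV namespace share (e.g. \\InfraSOFS01\Volume1) is created
# automatically with Continuous Availability (SMB Transparent Failover).
Add-ClusterScaleOutFileServerRole -Cluster "HVCluster01" -Infrastructure -Name "InfraSOFS01"
```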

Figure 5 – Infrastructure File Server Role on a Windows Server 2019 Failover Cluster

Cluster Sets

Windows Server 2019 Failover Clustering introduced the management concept of cluster sets. A cluster set is a collection of failover clusters that can be managed as a single logical entity. It allows VMs to move seamlessly between clusters, which lets organizations create a highly available infrastructure with almost limitless capacity. To simplify management, a single namespace can be used to access the cluster set. This namespace can run on a SOFS for continual availability, and clients will automatically be redirected to the appropriate location within the cluster set.
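For illustration only, a sketch following the Windows Server 2019 cluster sets documentation as I recall it; the management cluster, member cluster, and SOFS names are placeholders, and you should verify the parameter names against the current docs.

```powershell
# Create the cluster set master and its unified SOFS namespace
# (run against the management cluster that will host the cluster set).
New-ClusterSet -Name "CSMASTER" -NamespaceRoot "SOFS-CLUSTERSET" -CimSession "SET-CLUSTER"

# Join an existing failover cluster, referencing its Infrastructure SOFS.
Add-ClusterSetMember -ClusterName "CLUSTER1" -CimSession "CSMASTER" -InfraSOFSName "SOFS-CLUSTER1"
```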

The following figure shows two Failover Clusters within a cluster set, both of which are using a SOFS. Additionally, a third independent SOFS is deployed to provide highly-available access to the cluster set itself.

Figure 6 – A Scale-Out File Server with Cluster Sets

Guest Clustering with SOFS

Acquiring dedicated physical hardware is not required for the SOFS, as it can be fully virtualized. When a cluster runs inside VMs instead of on physical hardware, this is known as guest clustering. However, you should not run a SOFS inside a VM for which it provides the namespace; the cluster could end up unable to start that VM because it cannot access the VM’s own VHD.

Microsoft Azure with SOFS

Microsoft Azure allows you to deploy virtualized guest clusters in the public cloud. You will need at least two storage accounts, each with a matching number and size of disks. It is recommended to use at least DS-series VMs with premium storage. Since this cluster is already running in Azure, it can also use a cloud witness for its quorum.

You can even download an Azure VM template which comes as a pre-configured two-node Windows Server 2016 Storage Spaces Direct (S2D) Scale-Out File Server (SOFS) cluster.

System Center Virtual Machine Manager (VMM) with SOFS

Since the Scale-Out File Server has become an important role in virtualized infrastructures, System Center Virtual Machine Manager (VMM) has tightly integrated it into its fabric management capabilities.

Deployment

VMM makes it fairly easy to deploy SOFS throughout your infrastructure on bare-metal or Hyper-V hosts. You can add existing file servers under management or deploy each SOFS throughout your fabric. For more information visit:

When VMM is used to create a cluster set, an Infrastructure SOFS is automatically created on the Management Server (if it does not already exist). This file share will host the single shared namespace used by the cluster set.

Configuration

Many of the foundational components of a Scale-Out File Server can be deployed and managed by VMM. This includes the ability to use physical disks to create storage pools that can host SOFS file shares. The SOFS file shares themselves can also be created through VMM. If you are also using Storage Spaces Direct (S2D) then you will need to create a disk witness which will use the SOFS to host the file share. Quality of Service (QoS) can also be adjusted to control network traffic speed to resources or VHDs running on the SOFS shares.

Management Cluster

In large virtualized environments, it is recommended to have a dedicated management cluster for System Center VMM. The virtualization management console, database, and services are highly available so that they can continually monitor the environment. The management cluster can use a unified storage namespace that runs on a Scale-Out File Server, granting additional resiliency to the storage and its clients.

Library Share

VMM uses a library to store files which may be deployed multiple times, such as VHDs or image files. The library uses an SMB file share as a common namespace to access those resources, which can be made highly-available using a SOFS. The data in the library itself cannot be stored on a SOFS, but rather on a traditional clustered file server.

Update Management

Cluster patch management is one of the most tedious tasks which administrators face as it is repetitive and time-consuming. VMM has automated this process through serially updating one node at a time while keeping the other workloads online. SOFS clusters can be automatically patched using VMM.

Rolling Upgrade

Rolling upgrade refers to the process where infrastructure servers are gradually updated to the latest version of Windows Server. Most of the infrastructure servers managed by VMM can be included in the rolling upgrade cycle, which functions like the Update Management feature. Different nodes in the SOFS cluster are sequentially placed into maintenance mode (so their workloads are drained), updated, patched, tested and reconnected to the cluster. Workloads gradually migrate to the newly upgraded nodes while the older nodes wait to be updated, until all the SOFS cluster nodes are running the latest version of Windows Server.

Internet Information Services (IIS) Web Server with SOFS

Everything in this article so far has referenced SOFS in the context of being used for Hyper-V VMs. SOFS is gradually being adopted by other infrastructure services to provide high-availability to their critical components which use SMB file shares.

The Internet Information Services (IIS) Web Server is used for hosting websites. To distribute the network traffic, multiple IIS servers are usually deployed. If they have any shared configuration information or data, this can be stored on the Scale-Out File Server.

Remote Desktop Services (RDS) with SOFS

The Remote Desktop Services (RDS) role has a popular feature known as user profile disks (UPDs) which allows users to have a dedicated data disk stored on a file server. The file share path can be placed on a SOFS to make access to that share highly-available.
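As a sketch, this is roughly how an existing RDS session collection’s user profile disks can be pointed at a SOFS share; the collection name, share path and size cap are placeholders.

```powershell
# Requires the RemoteDesktop module on an RDS deployment server.
Import-Module RemoteDesktop

# Store user profile disks on a continuously available SOFS share.
Set-RDSessionCollectionConfiguration -CollectionName "Desktops" `
    -EnableUserProfileDisk -MaxUserProfileDiskSizeGB 20 `
    -DiskPath "\\SOFS01\UserProfileDisks"
```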

SQL Server with SOFS

Certain SQL Server roles are able to use a SOFS to make their SMB connections highly available. Starting with SQL Server 2012, the SMB file server storage option is offered for SQL Server databases (including Master, MSDB, Model and TempDB) and the database engine. The SQL Server itself can be standalone or deployed as a failover cluster installation (FCI).

Deploying a SOFS Cluster & Next Steps

Now that you understand the planning considerations, you are ready to deploy the SOFS. From Failover Cluster Manager, launch the High Availability Wizard and select the File Server role. Next, select the File Server Type. Traditional clustered file servers use the “File Server for general use” option; for SOFS, select “Scale-Out File Server for application data”.

The interface is shown in the following figure and described as, “Use this option to provide storage for server applications or virtual machines that leave files open for extended periods of time. Scale-Out File Server client connections are distributed across nodes in the cluster for better throughput. This option supports the SMB protocol. It does not support the NFS protocol, Data Deduplication, DFS Replication, or File Server Resource Manager.”
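The PowerShell equivalent of the wizard is a useful reference; this is a sketch only, and the role name, share name and Hyper-V host accounts below are placeholders.

```powershell
Import-Module FailoverClusters

# Create the Scale-Out File Server role (the "application data" option).
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a continuously available SMB share on a CSV for Hyper-V to use.
New-Item -Path "C:\ClusterStorage\Volume1\VMStore" -ItemType Directory
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
    -ScopeName "SOFS01" -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\HyperVHost01$", "CONTOSO\Hyper-V Admins"

# Mirror the share permissions onto the folder's NTFS/ReFS ACL.
Set-SmbPathAcl -ShareName "VMStore"
```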

Figure 7 – Installing a Scale-Out File Server (SOFS)

Now you should have a fundamental understanding of the use and deployment options for the SOFS. For additional information about deploying a Scale-Out File Server (SOFS), please visit https://docs.microsoft.com/en-us/windows-server/failover-clustering/sofs-overview. If there’s anything you want to ask about SOFS, let me know in the comments below and I’ll get back to you!

Author: Symon Perriman

Microsoft announces it will be carbon negative by 2030 – Stories

REDMOND, Wash. — Jan. 16, 2020 — Microsoft Corp. on Thursday announced an ambitious goal and a new plan to reduce and ultimately remove its carbon footprint. By 2030 Microsoft will be carbon negative, and by 2050 Microsoft will remove from the environment all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975.

At an event at its Redmond campus, Microsoft Chief Executive Officer Satya Nadella, President Brad Smith, Chief Financial Officer Amy Hood, and Chief Environmental Officer Lucas Joppa announced the company’s new goals and a detailed plan to become carbon negative.

“While the world will need to reach net zero, those of us who can afford to move faster and go further should do so. That’s why today we are announcing an ambitious goal and a new plan to reduce and ultimately remove Microsoft’s carbon footprint,” said Microsoft President Brad Smith. “By 2030 Microsoft will be carbon negative, and by 2050 Microsoft will remove from the environment all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975.”

The Official Microsoft Blog has more information about the company’s bold goal and detailed plan to remove its carbon footprint: https://blogs.microsoft.com/?p=52558785.

The company announced an aggressive program to cut carbon emissions by more than half by 2030, both for our direct emissions and for our entire supply and value chain. This includes driving down our own direct emissions and emissions related to the energy we use to near zero by the middle of this decade. It also announced a new initiative to use Microsoft technology to help our suppliers and customers around the world reduce their own carbon footprints and a new $1 billion climate innovation fund to accelerate the global development of carbon reduction, capture and removal technologies. Beginning next year, the company will also make carbon reduction an explicit aspect of our procurement processes for our supply chain. A new annual Environmental Sustainability Report will detail Microsoft’s carbon impact and reduction journey. And lastly, the company will use its voice and advocacy to support public policy that will accelerate carbon reduction and removal opportunities.

More information can be found at the Microsoft microsite: https://news.microsoft.com/climate.

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications, (425) 638-7777, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

Author: Microsoft News Center

Startup Uplevel targets software engineering efficiency

Featuring a business plan that aims to increase software engineering efficiency and armed with $7.5 million in venture capital funding, Uplevel emerged from stealth Wednesday.

Based in Seattle and founded in 2018, Uplevel uses machine learning and organizational science to compile data about the daily activity of engineers in order to ultimately help them become more effective.

One of the main issues engineers face is a lack of time to do their job. They may be assigned a handful of tasks to carry out, but instead of being allowed to focus their attention on those tasks they’re instead being bombarded by messages, or mired in an overabundance of meetings.

Uplevel aims to improve software engineering efficiency by monitoring messaging platforms such as Slack, collaboration software like Jira, calendar tools, and code repository software such as GitHub. It then compiles the data and is able to show how engineers are truly spending their time — whether they’re being allowed to do their jobs or instead being prevented from it by no fault of their own.

“I kept seeing pain around engineering effectiveness,” said Joe Levy, co-founder and CEO of Uplevel. “Engineers are often seen as artists, but what they’re trying to manage from a business perspective can be tough. If we can help engineers be more effective, organizations can be more effective without having to throw more bodies at the problem.”

Beyond arming the engineers themselves with data to show how they can be more effective, Uplevel attempts to provide the leaders of engineering teams the kind of information they previously lacked.


While sales and marketing teams have reams of data to drive the decision-making process — and present when asked for reports — engineering teams haven’t had the same kind of solid information.

“Sales, marketing, they have super detailed data that leads to understanding, but the head of engineering doesn’t have that same level of data,” Levy said. “There are no metrics of the same caliber [for engineers], but they’re still asked to produce the same kind of detailed projections.”

As Uplevel emerges from stealth, one of its challenges, as with all startups, will be to demonstrate how it provides something different from what’s already on the market.

Without differentiation, its likelihood of success is diminished.

But according to Vanessa Larco, a partner at venture capital investment firm New Enterprise Associates with an extensive background in computer science, what Uplevel provides is something that indeed is unique.

“This is really interesting,” she said. “I haven’t seen anything doing this exact thing. The value proposition of Uplevel is compelling if it helps quantify some of the challenges faced by R&D teams to enable them to restructure their workload and processes to better enable them to reach their goals. I haven’t seen or used the product, but I can understand the need they are fulfilling.”

Similarly, Mike Leone, analyst at Enterprise Strategy Group, believes Uplevel is on to something new.

“There are numerous time-based tracking solutions for software engineering teams available today, but they lack a comprehensive view of the entire engineering ecosystem, including messaging apps, collaboration tools, code repository tools and calendars,” he said. “The level of intelligence Uplevel can provide based on analyzing all of the collected data will serve as a major differentiator for them.”

Uplevel developed from a combination of research done by organizational psychologist David Youssefnia and a winning hackathon concept from Dave Matthews, who previously worked at Microsoft and Hulu. The two began collaborating at Madrona Venture Labs in Seattle to hone their idea of how to improve software engineering efficiency before Levy, also formerly of Microsoft, and Ravs Kaur, whose previous experience includes time at Tableau and Microsoft, joined to help Uplevel go to market.

Youssefnia serves as chief strategy officer, Matthews as director of product management, and Kaur as CTO.

A sample chart from Uplevel displays the distractions faced by an organization’s software engineering team.

Uplevel officially formed in June 2018, attracted its first round of funding in September of that year and its second in April 2019. Leading investors include Norwest Venture Partners, Madrona Venture Group and Voyager Capital.

“Their fundamental philosophy was different from what we’d heard,” said Jonathan Parramore, senior data scientist at Avalara, a provider of automated tax compliance software and an Uplevel customer for about a year. “Engineering efficiency is difficult to measure, and they took a behavioral approach and looked holistically at multiple sources of data, then had the data science to meld it together. I’d say that everything they promised they would do, they have delivered.”

Still, Avalara would eventually like to see more capabilities as Uplevel matures.

“They have amazing reports they generate by looking at the data they have access to, but we’d like them to be able to create reports that are more in real time,” said Danny Fields, Avalara’s CTO and executive vice president of engineering. “That’s coming.”

Moving forward, while Uplevel doesn’t plan to branch out and offer a wide array of products, it is aiming to become an essential platform for all organizations looking to improve software engineering efficiency.

As it builds up its own cache of information about improving software engineering efficiency it will be able to share that data — masking the identity of individual organizations — with customers so that they can compare the efficiency of their engineers versus those of other organizations.

“The goal we’re focused on is to be the de facto platform that is helping engineers do their job,” Levy said. “We want to be a platform they can’t live without, that every big organization is reliant on.”


How to Configure Failover Clusters to Work with DR & Backup

As your organization grows, it is important to plan not only a high-availability solution to maintain service continuity, but also a disaster recovery solution in the event that the operations of your entire datacenter are compromised. High availability (HA) allows your applications or virtual machines (VMs) to stay online by moving them to other server nodes in your cluster. But what happens if your region experiences a power outage, hurricane or fire? What if your staff cannot safely access your datacenter? During times of crisis, your team will likely be focused on the well-being of their family or home, and not particularly interested in the availability of their company’s services. This is why it is important to not only protect against local crashes but also to be able to move your workloads between datacenters or clouds, using disaster recovery (DR). Because you will need access to your data in both locations, you must make sure that the data is replicated and kept consistent between them. The architecture of your DR solution will influence the replication solution you select.

Basic Architecture of a Multi-Site Failover Cluster

This three-part blog post will first look at the design decisions to create a resilient multi-site infrastructure, then in future posts the different types of replicated storage you can use from third parties, along with Microsoft’s DFS-Replication, Hyper-V Replica, and Azure Site Recovery (ASR), and backup best practices for each.

Probably the first design decision will be the physical location of your second site. In some cases, this may be your organization’s second office location, and you will not have any input. Sometimes you will be able to select the datacenter of a service provider who allows cohosting. When you do have a choice, first consider the distance between these locations. Make sure that the two sites are on separate power grids. Then consider what type of disasters your region is susceptible to, whether that is hurricanes, wildfires, earthquakes or even terrorist attacks. If your primary site is along a coastline, then consider finding an inland location. Ideally, you should select a location that is far enough away from your primary site to avoid a single disaster affecting both sites. Some organizations even select a site that is hundreds or thousands of miles away!

At first, selecting a cross-country location may sound like the best solution, but with added distance comes added latency.  If you wish to run different services from both sites (an active/active configuration), then be aware that the distance can cause performance issues as information needs to travel further across networks. If you decide to use synchronous replication, you may be limited to a few hundred miles or less to ensure that the data stays consistent.  For this reason, many organizations choose an active/passive configuration where the datacenter which is closer to the business or its customers will function as the primary site, and the secondary datacenter remains dormant until it is needed. This solution is easier to manage, yet more expensive as you have duplicate hardware which is mostly unused. Some organizations will use a third (or more) site to provide greater resiliency, but this adds more complexity when it comes to backup, replication and cluster membership (quorum).

Now that you have picked your sites, you should determine the optimal number of cluster nodes in each location.  You should always have at least two nodes at each site so that if a host crashes it can failover within the primary site before going to the DR site to minimize downtime.  You can configure local failover first through the cluster’s Preferred Owner setting.  The more nodes you have at each site, the more local failures you can sustain before moving to the secondary site.
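A minimal sketch of setting preferred owners so a workload attempts local failover first; the group and node names are placeholders.

```powershell
Import-Module FailoverClusters

# List the primary-site nodes first so local failover is preferred
# before the role moves to the DR site.
Set-ClusterOwnerNode -Group "VM01" -Owners "SiteA-Node1", "SiteA-Node2", "SiteB-Node1", "SiteB-Node2"

# Confirm the preferred owner list.
Get-ClusterOwnerNode -Group "VM01"
```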

Use Local Failover First before Cross-Site Failover

It is also recommended that you have the same number of nodes at each site, ideally with identical hardware configurations.  This means that the performance of applications should be fairly consistent in both locations and it should reduce your maintenance costs.  Some organizations will allocate older hardware to their secondary site, which is still supported, but the workloads will be slower until they return to the primary site.  With this type of configuration, you should also configure automatic failback so that the workloads are restored to the faster primary site once it is healthy.
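As a sketch, automatic failback can be enabled through the cluster group properties exposed in PowerShell; the group name and failback window hours below are placeholders.

```powershell
Import-Module FailoverClusters

$group = Get-ClusterGroup -Name "VM01"
$group.AutoFailbackType    = 1   # 1 = allow automatic failback, 0 = prevent it
$group.FailbackWindowStart = 1   # earliest hour (local time) failback may begin
$group.FailbackWindowEnd   = 5   # latest hour failback may begin
```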

If you have enough hardware, then a best practice is to deploy at least three nodes at each site so that if you lose a single node and have local failover there will be less of a performance impact.  In the event that you lose one of your sites in a genuine disaster for an extended period of time, you can then evict all the nodes from that site, and still have a 3-node cluster running in a single site.  In this scenario, having a minimum of three nodes is important so that you can sustain the loss of one node while keeping the rest of the cluster online by maintaining its quorum.

If you are an experienced cluster administrator, you probably identified the problem with having two sites with an identical number of nodes: maintaining cluster quorum. Quorum is the cluster’s membership algorithm, which ensures that there is exactly one owner of each clustered workload. It is used to avoid a “split-brain” scenario, where a partition between two sets of cluster nodes (such as between two sites) lets two hosts independently run the same application, causing data inconsistency during replication. Quorum works by giving each cluster node a vote, and a majority (51% or more) of voters must be in communication with each other to run all of the workloads. So how is this possible with the recommendation of two balanced sites with three nodes each (6 total votes)?

The most common solution is to have an extra vote in a third site (7 total votes).  So long as either the primary or secondary site can communicate with that voter in the third site, that group of nodes will have a majority of votes and operate all the workloads.  For those who do not have the luxury of the third site, Microsoft allows you to place this vote inside the Microsoft Azure cloud, using a Cloud Witness Disk.  For a detailed understanding of this scenario, check out this Altaro blog about Understanding File Share Cloud Witness and Failover Clustering Quorum in the Microsoft Azure Cloud.
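A sketch of the two witness options mentioned above; the storage account name, access key and share path are placeholders.

```powershell
Import-Module FailoverClusters

# Option 1: a Cloud Witness in an Azure storage account (Windows Server 2016+).
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"

# Option 2: a file share witness hosted in a third site.
# Set-ClusterQuorum -FileShareWitness "\\witness-server\ClusterWitness"
```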

Use a File Share Cloud Witness to Maintain Quorum

If you are familiar with designing a traditional Windows Server Failover Cluster, you know that redundancy of every hardware and software component is critical to eliminate any single point of failure.  With a disaster recovery solution, this concept is extended by also providing redundancy to your datacenters, including the servers, storage, and networks.  Between each site, you should have multiple redundant networks for cross-site communications.

You will next configure your shared storage at each site and set up cross-site replication between the disks, using either a third-party replication solution such as Altaro VM Backup, or Microsoft’s Hyper-V Replica or Azure Site Recovery. These configurations will be covered in the subsequent blog posts in this series. Finally, make sure that the entire multi-site cluster, including the replicated storage, does not fail any of the Cluster Validation Wizard tests.
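If you choose Hyper-V Replica for cross-site replication, the flow looks roughly like this sketch; the VM and host names and the port are placeholders, and the DR host is assumed to already accept replication (configured with Set-VMReplicationServer on that host).

```powershell
Import-Module Hyper-V

# On the primary host: enable replication for a VM to the DR host.
Enable-VMReplication -VMName "VM01" -ReplicaServerName "drhost01.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Send the initial copy of the VHDs to the replica server.
Start-VMInitialReplication -VMName "VM01"
```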

Again, we’ll be covering more regarding this topic in future blog posts, so keep an eye out for them! Additionally, have you worked through multi-site failover planning for a failover cluster before? What things went well? What were the troubles you ran into? We’d love to know in the comments section below!


Author: Symon Perriman

For Sale – HP EliteBook 840 G2 i5-5200U 2.2GHz, 8GB RAM, 120GB SSD, Windows 10 Pro 1903

Hi guys
I have HP EliteBook 840 G2 for sale.

I got it more than a year ago (second hand) and the plan was to use it as a media center PC and manage wifi security cameras from it. However, I got the smart TV and the media center idea faded away and the security cameras plan did not work (was much easier to have a dedicated cctv system)

It is well looked after and is in very good and clean condition.
There are no cracks or scratches on top side, but a few scratches on the bottom part of the case (please see pictures).

This model has the rubberized top lid and it is difficult to clean, as it attracts finger prints. I cleaned the lid the best I could, but it will need cleaning again once it is delivered. I tried connecting to 1080P TV over ‘DP to HDMI cable’ (not included in sale) and it plays Netflix at 1080p and passes 5.1 sound to TV (AV amplifier).
Tested the battery several times: full charge and play music videos from youtube at full screen, it lasted over 4 hours with windows managing power settings automatically.

Delivery cost is included within my country, will be 2-3 day delivery from parcel2go and fully insured.

Spec:
HP EliteBook 840 G2 14″ (not touch screen)
Intel Core i5-5200U 2.20GHz
8GB (2x4GB) RAM
120GB Kingston SSD 300v – was installed new and only used for several days
Intel HD Graphics 5500 (1920 x 1080)
Windows 10 Pro 64bit 1903
It has 4 USB 3.0 ports, LAN port, DP ports, wifi and BT

Backlit keyboard
Original HP charger included



Price and currency: 150
Delivery: Delivery cost is included within my country
Payment method: Paypal gift, Bank transfer
Location: Manchester
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference



Are you ready for the Exchange 2010 end of life?

Exchange Server 2010 end of life is approaching — do you have your migration plan plotted out yet?

Exchange Server 2010 reached general availability on November 9, 2009, and has been the cornerstone of the collaboration strategy for many organizations over the last decade. Since that time, Microsoft also produced three releases of Exchange Server, with Exchange Server 2019 being the most recent. Exchange Server 2010 continues to serve the needs of many organizations, but they must look to migrate from this platform when support ends on January 14, 2020.

What exactly does end of support mean for existing Exchange Server 2010 deployments? Your Exchange 2010 servers will continue to operate with full functionality after this date; however, Microsoft will no longer provide technical support for the product. In addition, bug fixes, security patches and time zone updates will no longer be provided after the end-of-support date. If you haven’t already started your migration from Exchange Server 2010, now is the time to start by seeing what your options are.

Exchange Online

For many, Exchange Online — part of Microsoft Office 365 — is the natural replacement for Exchange Server 2010. This is my preferred option.

The cloud isn’t for everyone, but in many instances the reasons organizations cite for not considering the cloud are based on perception or outdated information, not reality.

A hybrid migration to Exchange Online is the quickest way to migrate to the latest version of Exchange that is managed by Microsoft. Smaller organizations may not need the complexity of this hybrid setup, so they may want to investigate simpler migration options. Not sure which migration option is best for you? Microsoft has some great guidance to help you decide on the best migration path.
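For reference, a hybrid remote-move batch in Exchange Online PowerShell looks roughly like this sketch; the batch name, endpoint, delivery domain and CSV path are placeholders and assume a hybrid configuration and migration endpoint already exist.

```powershell
# Run in Exchange Online PowerShell after the Hybrid Configuration Wizard
# has created a migration endpoint to the on-premises organization.
New-MigrationBatch -Name "Wave1" -SourceEndpoint "Hybrid-Endpoint" `
    -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" `
    -CSVData ([System.IO.File]::ReadAllBytes("C:\Migration\Wave1.csv"))

# Start the batch and check on its progress.
Start-MigrationBatch -Identity "Wave1"
Get-MigrationBatch -Identity "Wave1" | Format-List Status, TotalCount, SyncedCount
```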

The cloud isn’t for everyone, but in many instances the reasons organizations cite for not considering the cloud are based on perception or outdated information, not reality. I often hear the word “compliance” as a reason for not considering the cloud. If this is your situation, you should first study the compliance offerings on the Microsoft Trust Center. Microsoft Office 365 fulfills many industry standards and regulations, both regionally and globally.

If you decide to remain on premises with your email, you also have options. But the choice might not be as obvious as you think.

Staying with Exchange on premises

Exchange Server 2019 might seem like the clear choice for organizations that want to remain on premises, but there are a few reasons why this may not be the case.

Migrating from Exchange 2010 to Exchange 2016

First, there is no direct upgrade path from Exchange Server 2010 to Exchange Server 2019. For most organizations, this migration path involves a complex multi-hop migration. You first migrate all mailboxes and resources to Exchange Server 2016, then you decommission all remnants of Exchange Server 2010. You then perform another migration from Exchange Server 2016 to Exchange Server 2019 to finalize the process. This procedure involves significant resources, time and planning.
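The mailbox moves in each hop use the same cmdlets; below is a minimal sketch of the 2010-to-2016 hop, with placeholder database and batch names.

```powershell
# Run from the Exchange 2016 Management Shell.
# Move all mailboxes from an Exchange 2010 database to an Exchange 2016 database.
Get-Mailbox -Database "EX2010-DB01" -ResultSize Unlimited |
    New-MoveRequest -TargetDatabase "EX2016-DB01" -BatchName "2010to2016"

# Track progress; repeat the same pattern later for the 2016-to-2019 hop.
Get-MoveRequest -BatchName "2010to2016" | Get-MoveRequestStatistics |
    Select-Object DisplayName, Status, PercentComplete
```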

Another consideration with Exchange Server 2019 is licensing. Exchange Server 2019 is only available to volume license customers via the Volume Licensing Service Center. This could be problematic for smaller organizations without this type of agreement.

Organizations that use the unified messaging feature in Exchange Server 2010 have an additional caveat to consider: Microsoft removed the feature from Exchange Server 2019 and recommends Skype for Business Cloud Voicemail instead.

For those looking to remain on premises, Exchange Server 2019 has some great new features, but it is important to weigh the benefits against the drawbacks, and the effort involved with the migration process.

Microsoft only supports Exchange Server 2019 on Windows Server 2019. For the first time, the company supports Server Core deployments, which is now the recommended deployment option. In addition, Microsoft made it easier to control external access to the Exchange admin center and the Exchange Management Shell with client access rules.

Microsoft made several key improvements in Exchange Server 2019. It rebuilt the search infrastructure to improve indexing of larger files and search performance. The company says the new search architecture will decrease database failover times. The MetaCacheDatabase feature increases the overall performance of the database engine and allows it to work with the latest storage hardware, including larger disks and SSDs.

There are some new features on the client side as well. Email address internationalization allows support for email addresses that contain non-English characters. Some clever calendar improvements include making “do not forward” work without the need for an information rights management deployment, and the option to cancel or decline meetings that occur while you’re out of office.

What happens if the benefits of upgrading to Exchange Server 2019 don’t outweigh the drawbacks of the migration process? Exchange Server 2016 extended support runs through October 2025, making it a great option for those looking to migrate from Exchange Server 2010 and stay in support. The simpler migration process and support for unified messaging makes Exchange Server 2016 an option worth considering.


Transition to value-based care requires planning, communication

Transitioning to value-based care can be a tough road for healthcare organizations, but creating a plan and focusing on communication with stakeholders can help drive the change.

Value-based care is a model that rewards the quality rather than the quantity of care given to patients. The model is a significant shift from how healthcare organizations have functioned, placing value on the results of care delivery rather than the number of tests and procedures performed. As such, it demands that healthcare CIOs be thoughtful and deliberate about how they approach the change, experts said during a recent webinar hosted by Definitive Healthcare.

Andrew Cousin, senior director of strategy at Mayo Clinic Laboratories, and Aaron Miri, CIO at the University of Texas at Austin Dell Medical School and UT Health Austin, talked about their strategies for transitioning to value-based care and focusing on patient outcomes.

Cousin said preparedness is crucial, as organizations can jump into a value-based care model, which relies heavily on analytics, without the institutional readiness needed to succeed.  

“Having that process in place and over-communicating with those who are going to be impacted by changes to workflow are some of the parts that are absolutely necessary to succeed in this space,” he said.

Mayo Clinic Labs’ steps to value-based care

Cousin said his primary focus as a director of strategy has been on delivering better care at a lower cost through the lens of laboratory medicine at Mayo Clinic Laboratories, which provides laboratory testing services to clinicians.

Andrew Cousin, senior director of strategy, Mayo Clinic Laboratories

That lens includes thinking in terms of a mathematical equation: price per test multiplied by the number of tests ordered equals total spend for that activity. Today, much of a laboratory’s relationship with healthcare insurers is measured by the price per test ordered. Yet data shows that 20% to 30% of laboratory testing is ordered incorrectly, which inflates the number of tests ordered as well as the cost to the organization, and little is being done to address the issue, according to Cousin.

That was one of the reasons Mayo Clinic Laboratories decided to focus its value-based care efforts on reducing incorrect test ordering.

To mitigate the errors, Cousin said the lab created 2,000 evidence-based ordering rules, which will be integrated into a clinician’s workflow. There are more than 8,000 orderable tests, and the rules provide clinicians guidance at the start of the ordering process, Cousin said. The laboratory has also developed new datasets that “benchmark and quantify” the organization’s efforts.  

To date, Cousin said the lab has implemented about 250 of the 2,000 rules across the health system, and has identified about $5 million in potential savings.

Cousin said the lab crafted a five-point plan to begin the transition. The plan was based on its experience in adopting a value-based care model in other areas of the lab. The first three steps center on what Cousin called institutional readiness, or ensuring staff and clinicians have the training needed to execute the new model.

The plan’s first step is to assess the “competencies and gaps” of care delivery within the organization, benchmarking where the organization is today and where gaps in care could be closed, he said.

The second step is to communicate with stakeholders to explain what’s going to happen and why, what criteria they’ll be measured on and how, and how the disruption to their workflow will result in improving practice and financial reimbursement.

The third step is to provide education and guidance. “That’s us laying out the plans, training the team for the changes that are going to come about through the infusion of new algorithms and rules into their workflow, into the technology and into the way we’re going to measure that activity,” he said.

Cousin said it’s critical to accomplish the first three steps before moving on to the fourth step: launching a value-based care analytics program. For Mayo Clinic Laboratories, analytics are used to measure changes in laboratory test ordering and assess changes in the elimination of wasteful and unnecessary testing.

The fifth and final step focuses on alternative payments and collaboration with healthcare insurers, which Cousin described as one of the biggest challenges in value-based care. The new model requires a new kind of language that the payers may not yet speak.

Mayo Clinic Laboratories has attempted to address this challenge by taking its data and making it as understandable to payers as possible, essentially translating clinical data into claims data.     

Cousin gave the example of showing payers how much money was saved by intervening in over-ordering of tests. Presenting data as cost savings can be more valuable than documenting how many units of laboratory tests ordered it eliminated, he said.

How a healthcare CIO approaches value-based care

UT Health Austin’s Miri approaches value-based care from both the academic and the clinical side. UT Health Austin functions as the clinical side of Dell Medical School.

Aaron Miri, CIO at the University of Texas at Austin Dell Medical School and UT Health Austin

The transition to value-based care in the clinical setting started with a couple of elements. Miri said, first and foremost, healthcare CIOs will need buy-in at the top. They also will need to start simple. At UT Health Austin, simple meant introducing a new patient-reported outcomes program, which aims to collect data from patients about their personal health views.

UT Health Austin has partnered with Austin-based Ascension Healthcare to collect patient reported outcomes as well as social determinants of health, or a patient’s lifestyle data. Both patient reported outcomes and social determinants of health “make up the pillars of value-based care,” Miri said.  

The effort is already showing results, such as a 21% improvement in the hip disability and osteoarthritis outcome score and a 29% improvement in the knee injury and osteoarthritis outcome score. Miri said the improvement comes from being more proactive about patient outcomes both before and after discharge.

For the program to work, Miri and his team need to make the right data available for seamless care coordination. That means making sure proper data use agreements are established between all UT campuses, as well as with other health systems in Austin.

Value-based care data enables UT Health Austin to “produce those outcomes in a ready way and demonstrate that back to the payers and the patients that they’re actually getting better,” he said.

In the academic setting at Dell Medical School, Miri said the next generations of providers are being prepared for a value-based care world.

“We offer a dual master’s track academically … to teach and integrate value-based care principles into the medical school curriculum,” Miri said. “So we are graduating students — future physicians, future surgeons, future clinicians — with value-based at the core of their basic medical school preparatory work.”


For Sale – ROG Swift PG348Q

Hi,

Selling this as the GTX 980 Ti/ i7 4770K @ 4.5GHz I have {and don’t plan on upgrading any time soon} doesn’t do this screen full justice, depending on the game, of course. Sticking with playing via TV, despite the input lag difference and lack of GSync, of which the hype does live up to!

For instance Assassin’s Creed Origins runs great by my standards, with various tweaks to settings {scaled back resolution 20% and most settings on Med}. This averages a very smooth 60FPS, helped along with the addition of GSync.

This and Battlefield 1 are the most demanding games I have, with ACO being the most since there is so much to render in the world, which is absolutely massive and looks stunning; a great way to be a virtual tourist in Egypt and climb the pyramids, which I didn’t get to do while actually in Egypt!

Take a peek at the pictures attached to give a rough idea of the quality image it puts out, some in~game via FRAPS.

Screen is in perfect condition, zero dead pixels, and only missing a small panel on the back of the stand. Also has no box, hence collection only.

ROG SWIFT PG348Q | ROG – Republic Of Gamers | ASUS United Kingdom

Price and currency: £500
Delivery: Goods must be exchanged in person
Payment method: Cash on collection
Location: Manchester
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected


