Cybercriminals see red as Microsoft hacks for good – Middle East & Africa News Center

Businesses in the Middle East and Africa are key targets for cyber attackers.

Data published by PwC in 2016 suggested that Middle East businesses were considerably more likely to suffer from cybercrime than the global average. Even though increased awareness and investment saw attacks in the United Arab Emirates (UAE) decline during the first half of 2018, cyber criminals still managed to steal close to Dh4 billion from victims in 2017, while the average amount of time consumers in the UAE lose dealing with online crime is rising steeply.

Businesses in South Africa are also falling victim to repeated ransomware attacks, with more than half of them hit by ransomware in 2017, according to a survey by Sophos. Further North, Kenya has been targeted by hackers in several major attacks over the past couple of years.

Hackers are increasingly taking advantage of “low-hanging fruit” as the cost of circumventing security measures goes up. Botnets continue to impact millions of computers globally, infecting them with old and new forms of malware, while ransomware continues to be a popular method used by cybercriminals to solicit and, in several cases, successfully obtain money from victims.

“We continue to see high profile cyberattacks land in the headlines around the world,” says David Weston, principal security group manager at Microsoft, who leads the Device Security and Offensive Security Research team, also known as the Red Team.

“Cryptocurrency mining, ransomware and other scareware are reaching new levels of sophistication.”

Microsoft’s own Red Team

Despite the continuous cybersecurity threats, only 33 percent of organisations have a cyber-incident response plan in place, and most companies are still not adequately prepared for, or do not even fully understand, the risks they face.

It’s for this reason that Microsoft is committed to helping businesses secure their environments and protect their customers. One way the company is working to achieve this is through its Red Team, led by Weston, who is visiting the Middle East this month.

“The Red Team operates like the world’s most sophisticated attackers: Gathering intelligence about their target, finding strings of vulnerabilities and then building the most refined exploits,” explains Weston. “Once their attack is complete, they work with their colleagues to identify and build disruptors to block the attack.”

The idea came about when Weston was at a hacking competition known as Pwn2Own and noticed a pattern common to many companies, including Microsoft: they would release software to the public, and then hackers would attack it. The so-called “white hats” would tell these companies about the vulnerabilities they found, but the “black hats” found and exploited these vulnerabilities themselves.

Weston says, “I knew we needed to be more aggressive in our approach, so I devised a plan: Disrupt this cycle by creating a team of internal hackers at Microsoft who would mimic the tactics and techniques of the most advanced hackers. Their goal would be to attack Windows 10 and its apps to make them better – to find and fix the toughest vulnerabilities before the bad guys.”

The Red Team’s advanced threat protections identify nearly a billion threats per day across endpoints. This helps Microsoft stay ahead of the game as hackers become increasingly sophisticated.

The impact of AI and cloud on cybersecurity

Weston also highlights the impact of artificial intelligence (AI) on security.

“AI is filling critical gaps in cybersecurity,” he explains. “It will continue to advance cybersecurity; improve efficacy, detection and response; and bring us closer to being truly predictive and preventing attacks before they even occur.”

However, cybercriminals will continue to advance and adapt, just as the industry continues to advance and adapt. It’s for this reason businesses are being urged to move to the cloud, adopt modern platforms, and embrace comprehensive identity, security and management solutions.

“Most businesses aren’t as prepared as they could be. We can all do better, and that’s why we believe cloud is a security imperative to secure today’s modern workplace,” says Weston.

The Red Team is making inroads to ensure Microsoft software is as secure as possible for its customers. However, businesses in the Middle East and Africa that are embracing digital transformation to remain relevant in their markets should prioritise four key initiatives to ensure they are secure: implementing cyber resilience strategies; developing cybersecurity skills; protecting data privacy; and integrating cyber risk.

“At Microsoft, we recommend that everyone must be proactive in their cybersecurity efforts. Better protection equals better prevention, detection and remediation,” says Weston.

Finalized TLS 1.3 update has been published at last

The finalized version of TLS 1.3 was published last week, following a lengthy draft review process.

The Internet Engineering Task Force (IETF) published the latest version of the Transport Layer Security protocol used for internet encryption and authentication on Friday, Aug. 10, 2018, after starting work on it in April 2014. The final draft, version 28, was approved in March. It replaces the previous standard, TLS 1.2, which was published in RFC 5246 in August 2008. Originally based on the Secure Sockets Layer protocol, the new version of TLS has been revised significantly.

“The protocol [TLS 1.3] has major improvements in the areas of security, performance, and privacy,” IETF wrote in a blog post.

Specifically, TLS 1.3 “provides additional privacy for data exchanges by encrypting more of the negotiation handshake to protect it from eavesdroppers,” compared with TLS 1.2, IETF explained. “This enhancement helps protect the identities of the participants and impede traffic analysis.”

TLS 1.3 also has forward secrecy by default, so sessions recorded today will remain secure even if long-term keys are compromised in the future, according to IETF.

“With respect to performance, TLS 1.3 shaves an entire round trip from the connection establishment handshake,” IETF wrote in its blog post announcing the finalized protocol. “In the common case, new TLS 1.3 connections will complete in one round trip between client and server.”

As a result, TLS 1.3 is expected to be faster than TLS 1.2. It will also remove outdated cryptography, such as the RSA key exchange, 3DES and static Diffie-Hellman, and thus free TLS 1.3 of the vulnerabilities that plagued TLS 1.2, such as FREAK and Logjam.

“Although the previous version, TLS 1.2, can be deployed securely, several high profile vulnerabilities have exploited optional parts of the protocol and outdated algorithms,” IETF wrote. “TLS 1.3 removes many of these problematic options and only includes support for algorithms with no known vulnerabilities.”

And, as Mozilla explained in a blog post, “TLS 1.3 is designed in cooperation with the academic security community and has benefitted from an extraordinary level of review and analysis. This included formal verification of the security properties by multiple independent groups; the TLS 1.3 RFC cites 14 separate papers analyzing the security of various aspects of the protocol.”

TLS 1.3 has already been widely deployed, according to Mozilla. The Firefox and Google Chrome browsers have draft versions deployed, with final version deployments on the way. And Cloudflare, Google and Facebook have also partially deployed the protocol.
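
For administrators who want to require the new protocol on their own Windows clients, the sketch below shows one hedged approach from Windows PowerShell. It is only an illustration: the Tls13 value and the behavior shown assume .NET Framework 4.8 or later and an operating system whose TLS stack actually supports TLS 1.3, and the target URL is a placeholder.

```powershell
# Minimal sketch (assumes Windows PowerShell 5.1 on .NET Framework 4.8+ and OS-level TLS 1.3 support).
# Force the .NET web stack to negotiate only TLS 1.3 for subsequent requests in this session.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls13

# Requests made through the classic .NET stack will now fail rather than fall back to TLS 1.2 or older.
Invoke-WebRequest -Uri 'https://www.example.com/' -UseBasicParsing | Select-Object StatusCode
```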

How to Architect and Implement Networks for a Hyper-V Cluster

We recently published a quick tip article recommending the number of networks you should use in a cluster of Hyper-V hosts. I want to expand on that content to make it clear why we’ve changed practice from pre-2012 versions and how we arrive at this guidance. Use the previous post for quick guidance; read this one to learn the supporting concepts. These ideas apply to all versions from 2012 onward.

Why Did We Abandon Practices from 2008 R2?

If you dig on TechNet a bit, you can find an article outlining how to architect networks for a 2008 R2 Hyper-V cluster. While it was perfect for its time, we have new technologies that make its advice obsolete. I have two reasons for bringing it up:

  • Some people still follow those guidelines on new builds — worse, they recommend it to others
  • Even though we no longer follow that implementation practice, we still need to solve the same fundamental problems

We changed practices because we gained new tools to address our cluster networking problems.

What Do Cluster Networks Need to Accomplish for Hyper-V?

Our root problem has never changed: we need to ensure that we always have enough available bandwidth to prevent choking out any of our services or inter-node traffic. In 2008 R2, we could only do that by using multiple physical network adapters and designating traffic types to individual pathways. Note: It was possible to use third-party teaming software to overcome some of that challenge, but that was never supported and introduced other problems.

Starting from our basic problem, we next need to determine how to delineate those various traffic types. That original article did some of that work. We can immediately identify what appear to be four types of traffic:

  • Management (communications with hosts outside the cluster, ex: inbound RDP connections)
  • Standard inter-node cluster communications (ex: heartbeat, cluster resource status updates)
  • Cluster Shared Volume traffic
  • Live Migration

However, it turns out that some clumsy wording caused confusion. Cluster communication traffic and Cluster Shared Volume traffic are exactly the same thing. That reduces our needs to three types of cluster traffic.

What About Virtual Machine Traffic?

You might have noticed that I didn’t say anything about virtual machine traffic above. The same would be true if you were working up a different kind of cluster, such as SQL. I certainly understand the importance of that traffic; in my mind, service traffic takes priority over all cluster traffic. Understand one thing: service traffic for external clients is not clustered. So, your cluster of Hyper-V nodes might provide high availability services for virtual machine vmabc, but all of vmabc’s network traffic will only use its owning node’s physical network resources. So, you will not architect any cluster networks to process virtual machine traffic.

As for preventing cluster traffic from squelching virtual machine traffic, we’ll revisit that in an upcoming section.

Fundamental Terminology and Concepts

These discussions often go awry over a misunderstanding of basic concepts.

  • Cluster Name Object: A Microsoft Failover Cluster has its own identity separate from its member nodes known as a Cluster Name Object (CNO). The CNO uses a computer name, appears in Active Directory, has an IP, and registers in DNS. Some clusters, such as SQL, may use multiple CNOs. A CNO must have an IP address on a cluster network.
  • Cluster Network: A Microsoft Failover Cluster scans its nodes and automatically creates “cluster networks” based on the discovered physical and IP topology. Each cluster network constitutes a discrete communications pathway between cluster nodes.
  • Management network: A cluster network that allows inbound traffic meant for the member host nodes and typically used as their default outbound network to communicate with any system outside the cluster (e.g. RDP connections, backup, Windows Update). The management network hosts the cluster’s primary cluster name object. Typically, you would not expose any externally-accessible services via the management network.
  • Access Point (or Cluster Access Point): The IP address that belongs to a CNO.
  • Roles: The name used by Failover Cluster Management for the entities it protects (e.g. a virtual machine, a SQL instance). I generally refer to them as services.
  • Partitioned: A status that the cluster will give to any network on which one or more nodes does not have a presence or cannot be reached.
  • SMB: ALL communications native to failover clustering use Microsoft’s Server Message Block (SMB) protocol. With the introduction of version 3 in Windows Server 2012, that now includes innate multi-channel capabilities (and more!)

Are Microsoft Failover Clusters Active/Active or Active/Passive?

Microsoft Failover Clusters are active/passive. Every node can run services at the same time as the other nodes, but no single service can be hosted by multiple nodes. In this usage, “service” does not mean those items that you see in the Services Control Panel applet. It refers to what the cluster calls “roles” (see above). Only one node will ever host any given role or CNO at any given time.

How Does Microsoft Failover Clustering Identify a Network?

The cluster decides what constitutes a network; your build guides it, but you do not have any direct input. Any time the cluster’s network topology changes, the cluster service re-evaluates.

First, the cluster scans a node for logical network adapters that have IP addresses. That might be a physical network adapter, a team’s logical adapter, or a Hyper-V virtual network adapter assigned to the management operating system. It does not see any virtual NICs assigned to virtual machines.

For each discovered adapter and IP combination on that node, it builds a list of networks from the subnet masks. For instance, if it finds an adapter with an IP of 192.168.10.20 and a subnet mask of 255.255.255.0, then it creates a 192.168.10.0/24 network.

The cluster then continues through all of the other nodes, following the same process.
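
To see the cluster networks that this discovery process produced, you can query them directly. The sketch below is generic; the names shown will be whatever defaults or renames exist in your cluster.

```powershell
# Minimal sketch: list the networks failover clustering built automatically, with their subnets and roles.
Import-Module FailoverClusters

Get-ClusterNetwork |
    Select-Object Name, Address, AddressMask, State, Role |
    Format-Table -AutoSize

# Map each cluster network back to the per-node adapters and IPs it was built from.
Get-ClusterNetworkInterface |
    Select-Object Node, Name, Network, Address |
    Sort-Object Network, Node
```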

Be aware that not every node needs to have a presence in a given network in order for failover clustering to identify it; however, the cluster will mark such networks as partitioned.

What Happens if a Single Adapter has Multiple IPs?

If you assign multiple IPs to the same adapter, one of two things will happen. Which of the two depends on whether or not the secondary IP shares a subnet with the primary.

When an Adapter Hosts Multiple IPs in Different Networks

The cluster identifies networks by adapter first. Therefore, if an adapter has multiple IPs, the cluster will lump them all into the same network. If another adapter on a different host has an IP in one of the networks but not all of the networks, then the cluster will simply use whichever IPs can communicate.

As an example, see the following network:

The second node has two IPs on the same adapter and the cluster has added it to the existing network. You can use this to re-IP a network with minimal disruption.

A natural question: what happens if you spread IPs for the same subnet across different existing networks? I tested it a bit and the cluster allowed it and did not bring the networks down. However, it always had the functional IP pathway to use, so that doesn’t tell us much. Had I removed the functional pathways, then it would have collapsed the remaining IPs into an all-new network and it would have worked just fine. I recommend keeping an eye on your IP scheme and not allowing things like that in the first place.

When an Adapter Hosts Multiple IPs in the Same Network

The cluster will pick a single IP in the same subnet to represent the host in that network.

What if Different Adapters on the Same Host have an IP in the Same Subnet?

The same outcome occurs as if the IPs were on the same adapter: the cluster picks one to represent the host in that network and ignores the rest.

The Management Network

All clusters (Hyper-V, SQL, SOFS, etc.) require a network that we commonly dub Management. That network contains the CNO that represents the cluster as a singular system. The management network has little importance for Hyper-V, but external tools connect to the cluster using that network. By necessity, the cluster nodes use IPs on that network for their own communications.

The management network will also carry cluster-specific traffic. More on that later.

Note: Hyper-V Replica uses the management network.

Cluster Communications Networks (Including Cluster Shared Volume Traffic)

A cluster communications network will carry:

  • Cluster heartbeat information. Each node must hear from every other node within a specific amount of time (1 second by default). If it cannot hear from enough nodes to maintain quorum, it will begin failover procedures. Failover is more complicated than that, but the details are beyond the scope of this article.
  • Cluster configuration changes. If any configuration item changes, whether to the cluster’s own configuration or the configuration or status of a protected service, the node that processes the change will immediately transmit to all of the other nodes so that they can update their own local information store.
  • Cluster Shared Volume traffic. When all is well, this network will only carry metadata information. Basically, when anything changes on a CSV that updates its volume information table, that update needs to be duplicated to all of the other nodes. If the change occurs on the owning node, less data needs to be transmitted, but it will never be perfectly quiet. So, this network can be quite chatty, but will typically use very little bandwidth. However, if one or more nodes lose direct connectivity to the storage that hosts a CSV, all of its I/O will route across a cluster network. Network saturation will then depend on the amount of I/O the disconnected node(s) need(s).
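
If you suspect that the last condition above has occurred and a node is pushing its CSV I/O across a cluster network, you can check for redirected access. The sketch below is a minimal illustration; the volume names and output depend entirely on your cluster.

```powershell
# Minimal sketch: check whether any node is accessing a Cluster Shared Volume in redirected mode
# (i.e., routing its storage I/O across a cluster network instead of its own storage path).
Import-Module FailoverClusters

Get-ClusterSharedVolumeState | Format-Table Name, Node, StateInfo -AutoSize

# StateInfo of "Direct" is the healthy case; "FileSystemRedirected" or "BlockRedirected" means that
# node's CSV traffic is traversing a cluster network.
```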

Live Migration Networks

That heading is a bit of a misnomer. The cluster does not have its own concept of a Live Migration network per se. Instead, you let the cluster know which networks you will permit to carry Live Migration traffic. You can independently choose whether or not those networks can carry other traffic.

Other Identified Networks

The cluster may identify networks that we don’t want to participate in any kind of cluster communications at all. iSCSI serves as the most common example. We’ll learn how to deal with those.

Architectural Goals

Now we know our traffic types. Next, we need to architect our cluster networks to handle them appropriately. Let’s begin by understanding why you shouldn’t take the easy route of using a singular network. A minimally functional Hyper-V cluster only requires that “management” network. Stopping there leaves you vulnerable to three problems:

  • The cluster will be unable to select another IP network for different communication types. As an example, Live Migration could choke out the normal cluster heartbeat, causing nodes to consider themselves isolated and shut down
  • The cluster and its hosts will be unable to perform efficient traffic balancing, even when you utilize teams
  • IP-based problems in that network (even external to the cluster) could cause a complete cluster failure

Therefore, you want to create at least one other network. In the pre-2012 model, we could designate specific adapters to carry specific traffic types. In the 2012 and later model, we simply create at least one additional network that allows cluster communications but not client access. Some benefits:

  • Clusters of version 2012 or newer will automatically employ SMB multichannel. Inter-node traffic (including Cluster Shared Volume data) will balance itself without further configuration work (a quick way to verify this appears after this list).
  • The cluster can bypass trouble on one IP network by choosing another; you can help by disabling a network in Failover Cluster Manager
  • Better load balancing across alternative physical pathways
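
As promised above, here is a quick way to verify that SMB multichannel is actually spreading inter-node traffic. This is a hedged sketch; run it on a cluster node while CSV or Live Migration traffic is flowing and expect your own names and addresses in the output.

```powershell
# Minimal sketch: confirm SMB multichannel is carrying inter-node traffic across multiple interfaces.
Get-SmbMultichannelConnection | Format-Table -AutoSize
# Multiple client/server IP pairs per remote node indicate traffic balancing across more than one
# cluster network.

# The SMB sessions themselves; a dialect of 3.0 or later is required for multichannel.
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, NumOpens
```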

The Second Supporting Network… and Beyond

Creating networks beyond the initial two can add further value:

  • If desired, you can specify networks for Live Migration traffic, and even exclude those from normal cluster communications. Note: For modern deployments, doing so typically yields little value
  • If you host your cluster networks on a team, matching the number of cluster networks to physical adapters allows the teaming and multichannel mechanisms the greatest opportunity to fully balance transmissions. Note: You cannot guarantee a perfectly smooth balance

Architecting Hyper-V Cluster Networks

Now we know what we need and have a nebulous idea of how that might be accomplished. Let’s get into some real implementation. Start off by reviewing your implementation choices. You have three options for hosting a cluster network:

  • One physical adapter or team of adapters per cluster network
  • Convergence of one or more cluster networks onto one or more physical teams or adapters
  • Convergence of one or more cluster networks onto one or more physical teams claimed by a Hyper-V virtual switch

A few pointers to help you decide:

  • For modern deployments, avoid using one adapter or team for a cluster network. It makes poor use of available network resources by forcing an unnecessary segregation of traffic.
  • I personally do not recommend bare teams for Hyper-V cluster communications. You would need to exclude such networks from participating in a Hyper-V switch, which would also force an unnecessary segregation of traffic.
  • The most even and simple distribution involves a singular team with a Hyper-V switch that hosts all cluster network adapters and virtual machine adapters. Start there and break away only as necessary.
  • A single 10 gigabit adapter provides far more bandwidth than several gigabit adapters combined. If your hosts have both, don’t even bother with the gigabit.

To simplify your architecture, decide early:

  • How many networks you will use. They do not need to have different functions. For example, the old management/cluster/Live Migration/storage breakdown no longer makes sense. One management and three cluster networks for a four-member team does make sense.
  • The IP structure for each network. For networks that will only carry cluster (including intra-cluster Live Migration) communication, the chosen subnet(s) do not need to exist in your current infrastructure. As long as each adapter in a cluster network can reach all of the others at layer 2 (Ethernet), then you can invent any IP network that you want.

I recommend that you start off expecting to use a completely converged design that uses all physical network adapters in a single team. Create Hyper-V network adapters for each unique cluster network. Stop there, and make no changes unless you detect a problem.
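
To make that concrete, here is a hedged sketch of one way to build that converged design with an LBFO team, a single Hyper-V switch, and one management-OS virtual adapter per cluster network. Every team, switch, adapter, and VLAN value below is an assumption for illustration; substitute your own, and note that the Dynamic load-balancing mode assumes 2012 R2 or later.

```powershell
# Minimal sketch of a fully converged Hyper-V cluster network build (all names are placeholders).

# 1. Team all physical adapters ('pNIC1'..'pNIC4' stand in for your adapter names).
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'pNIC1','pNIC2','pNIC3','pNIC4' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# 2. Bind a Hyper-V virtual switch to the team without auto-creating a management adapter.
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'ConvergedTeam' -AllowManagementOS $false

# 3. Create one management-OS virtual adapter per cluster network.
'Management','Cluster1','Cluster2','Cluster3' | ForEach-Object {
    Add-VMNetworkAdapter -ManagementOS -Name $_ -SwitchName 'ConvergedSwitch'
}

# 4. (Optional) Tag each virtual adapter with a VLAN if your networks are VLAN-separated.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Cluster1' -Access -VlanId 20
```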

Comparing the Old Way to the New Way (Gigabit)

Let’s start with a build that would have been common in 2010 and walk through our options up to something more modern. I will only use gigabit designs in this section; skip ahead for 10 gigabit.

In the beginning, we couldn’t use teaming. So, we used a lot of gigabit adapters:

There would be some variations of this. For instance, I would have added another adapter so that I could use MPIO with two iSCSI networks. Some people used Fiber Channel and would not have iSCSI at all.

Important Note: The “VMs” that you see there means that I have a virtual switch on that adapter and the virtual machines use it. It does not mean that I have created a VM cluster network. There is no such thing as a VM cluster network. The virtual machines are unaware of the cluster and they will not talk to it (if they do, they’ll use the Management access point like every other non-cluster system).

Then, 2012 introduced teaming. We could then do all sorts of fun things with convergence. My very least favorite:

This build takes teams to an excess. Worse, the management, cluster, and Live Migration teams will be idle almost all the time, meaning that 60% of this host’s networking capacity will be generally unavailable.

Let’s look at something a bit more common. I don’t like this one, but I’m not revolted by it either:

A lot of people like that design because, so they say, it protects the management adapter from problems that affect the other roles. I cannot figure out how they perform that calculus. Teaming addresses any probable failure scenarios. For anything else, I would want the entire host to fail out of the cluster. In this build, a failure that brought the team down but not the management adapter would cause its hosted VMs to become inaccessible because the node would remain in the cluster. That’s because the management adapter would still carry cluster heartbeat information.

My preferred design follows:

Now we are architected against almost all types of failure. In a “real-world” build, I would still have at least two iSCSI NICs using MPIO.

What is the Optimal Gigabit Adapter Count?

Because we had one adapter per role in 2008 R2, we often continue using the same adapter count in our 2012+ builds. I don’t feel that’s necessary for most builds. I am inclined to use two or three adapters in data teams and two adapters for iSCSI. For anything past that, you’ll need to have collected some metrics to justify the additional bandwidth needs.

10 Gigabit Cluster Network Design

10 gigabit changes all of the equations. In reasonable load conditions, a single 10 gigabit adapter moves data more than 10 times faster than a single gigabit adapter. When using 10 GbE, you need to change your approaches accordingly. First, if you have both 10GbE and gigabit, just ignore the gigabit. It is not worth your time. If you really want to use it, then I would consider using it for iSCSI connections to non-SSD systems. Most installations relying on iSCSI-connected spinning disks cannot sustain even 2 Gbps, so gigabit adapters would suffice.

Logical Adapter Counts for Converged Cluster Networking

I didn’t include the Hyper-V virtual switch in any of the above diagrams, mostly because it would have made them more confusing. However, I would use a team hosting a Hyper-V virtual switch to carry all of the logical adapters necessary. For a non-Hyper-V cluster, I would create a logical team adapter for each role. Remember that on a logical team, you can only have a single logical adapter per VLAN. The Hyper-V virtual switch has no such restriction. Also remember that you should not use multiple logical team adapters on any team that hosts a Hyper-V virtual switch; some of the resulting behavior is undefined and your build might not be supported.

I would always use these logical/virtual adapter counts:

  • One management adapter
  • A minimum of one cluster communications adapter up to n-1, where n is the number of physical adapters in the team. You can subtract one because the management adapter acts as a cluster adapter as well

In a gigabit environment, I would add at least one logical adapter for Live Migration. That’s optional because, by default, all cluster-enabled networks will also carry Live Migration traffic.

In a 10 GbE environment, I would not add designated Live Migration networks. It’s just logical overhead at that point.

In a 10 GbE environment, I would probably not set aside physical adapters for storage traffic. At those speeds, the differences in offloading technologies don’t mean that much.

Architecting IP Addresses

Congratulations! You’ve done the hard work! Now you just need to come up with an IP scheme. Remember that the cluster builds networks based on the IPs that it discovers.

Every network needs one IP address for each node. Any network that contains an access point will need an additional IP for the CNO. For Hyper-V clusters, you only need a management access point. The other networks don’t need a CNO.

Only one network really matters: management. Your physical nodes must use that to communicate with the “real” network beyond. Choose a set of IPs available on your “real” network.

For all the rest, the member IPs only need to be able to reach each other over layer 2 connections. If you have an environment with no VLANs, then just make sure that you pick IPs in networks that don’t otherwise exist. For instance, you could use 192.168.77.0/24 for something, as long as that’s not a “real” range on your network. Any cluster network without a CNO does not need to have a gateway address, so it doesn’t matter that those networks won’t be routable. It’s preferred, in fact.
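
Continuing the assumed names from the earlier converged sketch, the addressing might look like the following. This is purely illustrative: the management adapter gets a routable address and gateway from your “real” network, while the cluster-only adapters get invented, non-routed subnets with no gateway and no DNS registration.

```powershell
# Minimal sketch of one node's IP scheme (every address, prefix, and alias is an assumption).

# Management: routable network, default gateway, carries the cluster's access point.
New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 192.168.10.21 `
    -PrefixLength 24 -DefaultGateway 192.168.10.1

# Cluster-only networks: invented subnets that exist nowhere else, no gateway.
New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster1)' -IPAddress 192.168.77.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias 'vEthernet (Cluster2)' -IPAddress 192.168.78.21 -PrefixLength 24

# Keep the cluster-only adapters out of DNS; only the management adapter should register.
Set-DnsClient -InterfaceAlias 'vEthernet (Cluster1)','vEthernet (Cluster2)' `
    -RegisterThisConnectionsAddress $false
```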

Implementing Hyper-V Cluster Networks

Once you have your architecture in place, you only have a little work to do. Remember that the cluster will automatically build networks based on the subnets that it discovers. You only need to assign names and set them according to the type of traffic that you want them to carry. You can choose:

  • Allow cluster communication (intra-node heartbeat, configuration updates, and Cluster Shared Volume traffic)
  • Allow client connectivity to cluster resources as well as cluster communications (you cannot choose client connectivity without cluster communication)
  • Prevent participation in cluster communications (often used for iSCSI and sometimes connections to external SMB storage)

As much as I like PowerShell for most things, Failover Cluster Manager makes this all very easy. Access the Networks tree of your cluster:

I’ve already renamed mine in accordance with their intended roles. A new build will have “Cluster Network”, “Cluster Network 1”, etc. Double-click on one to see which IP range(s) it assigned to that network:

Work your way through each network, setting its name and what traffic type you will allow. Your choices:

  • Allow cluster network communication on this network AND Allow clients to connect through this network: use these two options together for the management network. If you’re building a non-Hyper-V cluster that needs access points on non-management networks, use these options for those as well. Important: The adapters in these networks SHOULD register in DNS.
  • Allow cluster network communication on this network ONLY (do not check Allow clients to connect through this network): use for any network that you wish to carry cluster communications (remember that includes CSV traffic). Optionally use for networks that will carry Live Migration traffic (I recommend that). Do not use for iSCSI networks. Important: The adapters in these networks SHOULD NOT register in DNS.
  • Do not allow cluster network communication on this network: Use for storage networks, especially iSCSI. I also use this setting for adapters that will use SMB to connect to a storage server running SMB version 3.02 in order to run my virtual machines. You might want to use it for Live Migration networks if you wish to segregate Live Migration from cluster traffic (I do not do or recommend that).
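
The same renaming and role assignments can be scripted if you prefer. The sketch below uses the Role property on each cluster network (3 = cluster and client, 1 = cluster only, 0 = none); the network names are placeholders for whatever your build created.

```powershell
# Minimal sketch: rename cluster networks and set their allowed traffic via PowerShell.
# Role values: 3 = cluster and client, 1 = cluster only, 0 = no cluster communication.
Import-Module FailoverClusters

(Get-ClusterNetwork -Name 'Cluster Network 1').Name = 'Management'
(Get-ClusterNetwork -Name 'Management').Role = 3

(Get-ClusterNetwork -Name 'Cluster Network 2').Name = 'Cluster1'
(Get-ClusterNetwork -Name 'Cluster1').Role = 1

(Get-ClusterNetwork -Name 'Cluster Network 3').Name = 'iSCSI'
(Get-ClusterNetwork -Name 'iSCSI').Role = 0
```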

Once done, you can configure Live Migration traffic. Right-click on the Networks node and click Live Migration Settings:

Check a network’s box to enable it to carry Live Migration traffic. Use the Up and Down buttons to prioritize.
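
If you would rather script the Live Migration selection, the cluster stores it as an exclusion list on the “Virtual Machine” resource type. The sketch below is hedged: the allowed-network names are placeholders, and it assumes the MigrationExcludeNetworks parameter available on 2012 and later clusters.

```powershell
# Minimal sketch: allow Live Migration only on chosen cluster networks by excluding all others.
Import-Module FailoverClusters

$allowed  = 'Cluster1','Cluster2'   # placeholder names for the networks that may carry Live Migration
$excluded = Get-ClusterNetwork | Where-Object { $allowed -notcontains $_.Name }

# The cluster keeps a semicolon-separated list of excluded network IDs on the "Virtual Machine" resource type.
Get-ClusterResourceType -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([string]::Join(';', $excluded.ID))
```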

What About Traffic Prioritization?

In 2008 R2, we had some fairly arcane settings for cluster network metrics. You could use those to adjust which networks the cluster would choose as alternatives when a primary network was inaccessible. We don’t use those anymore because SMB multichannel just figures things out. However, be aware that the cluster will deliberately choose Cluster Only networks over Cluster and Client networks for inter-node communications.

What About Hyper-V QoS?

When 2012 first debuted, it brought Hyper-V networking QoS along with it. That was some really hot new tech, and lots of us dove right in and lost a lot of sleep over finding the “best” configuration. And then, most of us realized that our clusters were doing a fantastic job balancing things out all on their own. So, I would recommend that you avoid tinkering with Hyper-V QoS unless you have tried going without and had problems. Before you change QoS, determine what traffic needs to be tuned or boosted. Do not simply start flipping switches, because the rest of us already tried that and didn’t get results. If you need to change QoS, start with this TechNet article.

Your thoughts?

Does your preferred network management system differ from mine? Have you decided to give my arrangement a try? How did you get on? Let me know in the comments below; I really enjoy hearing from you guys!

Next Generation Washington 2018 – Microsoft on the Issues

Last January, we published for the first time a blog that outlined the positions Microsoft would be advocating as we walked the halls of our state capitol in Olympia. As we pointed out, public interest groups had called for greater transparency by companies, and we had concluded that they made a good point. People seemed to appreciate last year’s publication, so we’re taking the same step this year.

We appreciate the hard work and personal sacrifices our state legislators make each year. As we move forward in this year’s session, I’ve sketched Microsoft’s priorities in Washington state below. You’ll find our thoughts on several issues, including education and workforce development, climate change, rural economic development, the Cascadia Corridor, the Special Olympics and a few others. No doubt you may agree with some of our positions more than others. But regardless of the substance of the issue, we hope you’ll find this helpful.

Strengthening education and workforce development

The short 2018 legislative session provides an opportunity to build on the accomplishments that our legislators achieved last year in Olympia during the six-month session. Last year they passed a landmark bipartisan budget designed to inject an additional $7.3 billion into schools over the next four years. As we noted last July, this continued a trend that began with the McCleary decision in 2012. That year, the state spent $13.4 billion per biennium (two years) on K-12 education. By the 2019-2021 biennium, the state will spend $26.6 billion on K-12 education. Much of the new funding is based on student need, which helps to close stubborn opportunity gaps for many students in high-poverty schools.

While the state’s Supreme Court has acknowledged the importance of this progress, it has also called on the legislature to accelerate this spending increase. As a result, this is an important priority for this legislative session, and we hope it can be addressed effectively.

At the same time, it’s critical that our legislators take additional steps to address other education and workforce development needs. Technology is changing jobs and people will need to develop new skills to succeed in the future. For the people of our state to be successful, we need to continue to increase high school graduation rates and then provide a path towards a post-secondary credential, whether that’s an industry certification, a college degree or some other credential. The state has set the important goal of helping 70 percent of Washingtonians between the ages of 25 and 44 achieve a post-secondary degree or credential by 2023. Today, that figure is only 51 percent, with larger deficits among important racial, geographic and economic segments. In short, we have a lot of work to do.

One of our best opportunities is to invest in a strong career-connected learning strategy that will give young people learning and training programs providing the skills and credentials they need to pursue our state’s jobs. Microsoft has been a strong supporter of Gov. Jay Inslee’s goal of connecting 100,000 young people with career-connected learning opportunities. I’ve co-chaired – along with MacDonald-Miller’s Perry England – the governor’s task force to address this issue. We’ve learned from the business, labor, education and policy leaders involved what an important opportunity our state has to lead the nation in better preparing our young people for the full range of jobs across the state. I’m excited about the recommendations we’re finalizing and will present to the governor and public next month. I hope our legislators will support the governor as he continues to lead the state on this issue, and I hope that companies across the business community and organized labor groups will work closely with our educators to make these opportunities real for our young people.

While we undertake this new career-connected learning initiative, it’s also important for the 2018 session to address two areas of narrower but vital unfinished business left over from last year. The first is to provide $3 million in supplemental funding to complete the doubling – to over 600 – of computer science degree capacity at the University of Washington’s Paul G. Allen School of Computer Science and Engineering. We’re exceedingly fortunate as a state to have in the Allen School one of the world’s premier computer science departments located in the middle of a region that is creating so many computer science jobs.

I chaired the effort that completed the fundraising in 2017 to build a second computer science building at the University of Washington. Microsoft was a major contributor, as was the state itself, Amazon, Zillow, Google and so many generous individuals. Now that we’ve raised over $70 million of private money to build this building, we’re hoping the legislature will allocate $3 million so the university can fill it with Washington students.

Finally, we have a key opportunity to continue to help remove financial barriers for lower-income students to pursue college degrees in high demand STEM and healthcare fields by advancing and funding the Washington State Opportunity Scholarship (WSOS).

Microsoft has supported WSOS and I’ve chaired the program’s board since it was founded in 2011. Thanks to the terrific leadership of Naria Santa Lucia, our executive director, and some remarkable partners across the state, the WSOS program is already leading the nation with its innovative work to match private sector contributions with state funding and services for our scholars. More than 3,800 students who grew up in Washington state are attending colleges in the state on these scholarships this year. Because the program continues to grow, just over 1,750 of this total are new scholars added this year. And consider this: 60 percent of this new group are female, 72 percent are among the first generation in their families to attend college and 73 percent are students of color. It helps put our state at the forefront of national efforts to create better opportunities for young people of all backgrounds.

The legislature can take two additional steps this session to help the WSOS achieve even more. The first is to include an additional appropriation in its supplemental budget to match the increasing level of private donations to the program. And the second is to authorize WSOS to provide new support to students looking to pursue industry certification and associate degree programs in STEM-related fields at our state’s 34 community and technical colleges.

Addressing climate change

A second important issue on the state legislature’s agenda this year is one of the broadest issues for the planet: climate change. As a company, Microsoft is focusing on new ways we can use artificial intelligence and other technology to help address this problem, including through our AI for Earth program. This builds on ongoing work to ensure our local campus and our datacenters worldwide use more green energy. This includes setting an internal price on carbon we charge our business units, purchasing renewable energy and establishing extensive commuting and carpooling programs.  As companies across the tech sector help address climate issues, we believe that Washington state has a key role to play as well.

We applaud Gov. Inslee for his longstanding commitment to this issue, as well as the work of several legislators who have emerged as leaders in addressing it. Washington is already one of the lowest carbon emitters per capita, in part because of the important clean energy investments made by Washington businesses and families. But we all need to do more.

We hope the legislature will work with stakeholders across the state to drive reductions in total carbon emissions, while minimizing economic disruptions, creating new job opportunities and addressing the water infrastructure needs that are so vital for the eastern part of our state. In 2017, we saw the value of diverse interests coming together to craft a balanced solution that makes Washington a leader on paid family leave. A similar collaborative approach can help us forge progress in addressing climate change.

Supporting rural Washington

One of the issues we learned a lot more about in 2017 was the importance of expanding opportunities for people in rural communities in the United States. The more we’ve learned, the more passionate we’ve become. We’re now working with state governments across the country, and we hope that our own legislature will take the steps needed to ensure we can do work in our home state that matches the work we’re doing and witnessing elsewhere.

One of the issues that deserves more attention is the broadband gap in the state and across the country. As we see firsthand every day, cloud computing and artificial intelligence are reshaping the economy and creating new opportunities for people who can put this technology to use. But it’s impossible to take advantage of these opportunities without access to broadband. Today there are 23.4 million Americans in rural counties who lack broadband access. Many rural communities, especially east of the Cascades, lack adequate broadband. Whether they are parents helping their children with homework, veterans seeking telemedicine services, farmers looking to explore precision agriculture or small business owners wanting to create jobs, people in these communities are at a disadvantage to those living in cities with high-speed connectivity.

The goal of Microsoft’s Airband initiative is to help close this rural connectivity gap by bringing broadband connectivity to 2 million people in rural America by 2022. Through our direct work with partners, we will launch at least 13 projects in 13 states this year, including Washington. We believe the public sector also has a vital role to play, including the investment of matching funds to support capital equipment projects. Today, 11 states have earmarked funds to extend broadband service to their rural communities. But Washington is not one of them.

We hope the legislature will act this year to join the ranks of other states advancing rural broadband connectivity. Encouragingly, the legislature this year has taken a first step in this direction by recently adopting a capital budget with $5 million in grants for the Community Economic Revitalization Board to support the expansion of rural broadband. However, the bill’s funding did not extend to projects addressing the homework gap or providing telemedicine capacity. Legislative leaders have stated they will make supplemental adjustments to the biennial budget in the upcoming weeks. We hope the legislature will continue to pursue a more expansive use of rural broadband funds and will reestablish the Rural Broadband Office in the Department of Commerce. This office would then lead state planning to prioritize and sequence the delivery of high-quality broadband access to unserved and underserved communities.

Advancing the Cascadia Corridor

The legislature can also act in 2018 to build on the growing momentum to advance the Cascadia Corridor. The past year saw several important advances in this area by leaders in Washington, British Columbia and Oregon. This included new education and research partnerships, businesses working more closely together and transportation initiatives, all supported by government leaders across Washington state. Gov. Inslee, King County Executive Dow Constantine and University of Washington President Ana Mari Cauce have all played key leadership roles, which we greatly appreciate.

At the same time, urban congestion is making even more compelling the need to spread economic growth more broadly to more areas around the Puget Sound. One way to do that is to strengthen transportation ties from Vancouver to Seattle to Portland. The ability to move more quickly would help spur growth in places from Bellingham and Anacortes to Tacoma and Olympia, among others.

While we believe there is an important role for future investments in autonomous vehicles and highway improvements to accommodate them, we also believe there are vital steps the legislature can take in another area – high-speed rail. The construction of a high-speed rail connection between Portland and Vancouver, B.C. would be a game changer. An initial high-level report already supports the concept and Gov. Inslee has proposed a more detailed study of potential ridership, routes and financing. We support funding for this additional feasibility analysis, and in light of the recent Amtrak tragedy, urge lawmakers to examine both economic opportunities and public safety requirements.

Other issues

There are three additional issues that deserve continuing attention in Olympia and across the state this year. These are:

  • Immigration. Over the past year, state Attorney General Bob Ferguson has emerged as a national leader in addressing the urgent needs of people who have come to our state from other countries. We greatly appreciate the role he and his staff have played in helping to protect our employees, who work for us lawfully and in full compliance with federal laws. As we look ahead, we remain concerned not only about steps taken over the past year but by new steps that could come in the year ahead. These could impact our employees and families, as well as many others across the state. We’re grateful that we live in a state that has an attorney general who is committed to continuing efforts, if needed, to bring these types of issues properly before the courts.
  • Criminal justice system improvements. We hope that officials across the state will continue to build on the steps taken last year in this area. In 2017 the legislature provided $1.2 million in additional funding for the state’s Criminal Justice Training Center (CJTC) to improve situational de-escalation capabilities and build stronger trust between law enforcement and communities. Microsoft is supporting this with a $400,000 investment through 2019 to pilot the Center’s 21st Century Police Leadership program. We’re grateful that CJTC Executive Director Sue Rahr – a nationally recognized expert in policing and a long-time law enforcement leader in our state – is leading this work. We are also working with leaders in our state’s court system to build technology solutions that will help judges improve fairness and just outcomes in legal financial obligations. We look forward to continuing to pursue additional advances in 2018 with a wide range of partners.
  • Net neutrality. Like most tech companies and many consumers, we’re also concerned about the decision by the Federal Communications Commission (FCC) to rescind net neutrality rules. We’ve long supported net neutrality rules at the federal level, and we endorsed the FCC’s adoption of strong net neutrality protection in 2015. Given the federal government’s withdrawal of net neutrality protections, we believe it’s appropriate and helpful for the legislature to adopt at the state level the rules that the FCC rescinded. We hope the legislature will include a provision that will sunset these rules automatically if the FCC re-adopts rules that are the same or substantially similar in the future. This would create a long-term incentive for all stakeholders to move net neutrality in the United States back to the place where it can be governed effectively at the national level.

Ending on a very bright note – the 2018 Special Olympics USA Games are coming to Seattle!

Finally, as we contemplate the challenges of our time, there’s one thing we should all get excited about. The Special Olympics USA Games will take place in Seattle. On July 1, 4,000 athletes and coaches from across the country will arrive to compete in 14 sports. They include many remarkable athletes and their families, and the games will be broadcast nationally on ESPN.

It will be one of the largest sporting events ever held in Seattle, with more than 50,000 spectators expected. Microsoft is honored to be the presenting sponsor of the games, I’m thrilled to serve as the honorary chair, and we thank (and even salute!) lawmakers for the $3 million in state support, further demonstrating our state’s commitment to showcasing the power of diversity.

While many of the issues I’ve noted above call for leadership by our legislators and other officials, the Special Olympics provide an opportunity for individual leadership by every one of us. Literally.

The Special Olympics has played a transformative role in the lives of athletes with intellectual disabilities and has become a global movement of acceptance and inclusion. Through sports, health, school and youth engagement, the organization brings people around the world together, with and without intellectual disabilities, to foster tolerance, unity and respect.

I hope you’ll join in to make the USA Games a special moment not only for the athletes and their families, but for all of us who live in Washington state. Please join us at the opening ceremonies on July 1. It will be an event to remember. Or, attend one of the 14 sports events that will take place around Puget Sound. Consider volunteering to help, showing our local hospitality to our visitors while learning more about how we can all learn from each other in new ways.

We also believe the USA Games can provide another opportunity as well. As we prepare for the event, one of the themes we’ve adopted is “Seattle as a city of inclusion.” We’re hoping that local employers will join together not only to encourage employee volunteerism, but to learn more about programs like the one that we’ve benefited from at Microsoft that has helped us recruit, hire and develop some sensational employees who also happen to deal with autism every day. As we’ve learned, talent flourishes all around us, but sometimes we need to look around a bit more broadly to appreciate how we can benefit from it – and how we can help other people along the way.

*        *        *

As we look to the months ahead, there’s no doubt that 2018 will bring its share of twists, turns, and even challenges. But when we look at what our legislators can accomplish and what the rest of us can contribute, there is no shortage of opportunities. Let’s make the most of them together!

As always, we welcome your thoughts on our ideas.

Tags: Brad Smith, Cascadia Corridor, education, employment, Environment, legislation, Next Generation Washington

Microsoft tops Thomson Reuters top 100 global tech leaders list

(Reuters) – Thomson Reuters Corp (TRI.TO) on Wednesday published its debut “Top 100 Global Technology Leaders” list with Microsoft Corp (MSFT.O) in the No. 1 spot, followed by chipmaker Intel Corp (INTC.O) and network gear maker Cisco Systems Inc (CSCO.O).

The list, which aims to identify the industry’s top financially successful and organizationally sound organizations, features U.S. tech giants such as Apple Inc (AAPL.O), Alphabet Inc (GOOGL.O), International Business Machines Corp (IBM.N) and Texas Instruments Inc (TXN.O), among its top 10.

Microchip maker Taiwan Semiconductor Manufacturing (2330.TW), German business software giant SAP (SAPG.DE) and Dublin-based consultant Accenture (ACN.N) round out the top 10.

The remaining 90 companies are not ranked, but the list also includes the world’s largest online retailer Amazon.com Inc (AMZN.O) and social media giant Facebook Inc (FB.O). (bit.ly/2B8eowE)

The results are based on a 28-factor algorithm that measures performance across eight benchmarks: financial, management and investor confidence, risk and resilience, legal compliance, innovation, people and social responsibility, environmental impact, and reputation.

The assessment tracks patent activity for technological innovation and sentiment in news and selected social media as the reflection of a company’s public reputation.

The set of tech companies is restricted to those that have at least $1 billion in annual revenue.

According to the list, 45 percent of these 100 tech companies are headquartered in the United States. Japan and Taiwan are tied for second place with 13 companies each, followed by India with five tech leaders on the list.

By continent, North America leads with 47, followed by Asia with 38, Europe with 14 and Australia with one.

The strength of Asia highlights the growth of companies such as Tencent Holdings Ltd (0700.HK), which became the first Asian firm to enter the club of companies worth more than $500 billion, and surpassed Facebook in market value in November.

Reuters is the news and media division of Thomson Reuters, which produced the list.

Reporting by Sonam Rai in Bengaluru, editing by Peter Henderson


Managing first-hop router complexity with an IPv6 prefix

Ivan Pepelnjak, blogging in IPSpace, looked into RFC 8273 published by the Internet Engineering Task Force in December 2017. The RFC describes a process in which a first-hop router allocates a unique IPv6 prefix for each host attached to a subnet and sends responses to unicast MAC addresses to indicate that each host is the only host on its subnet. Pepelnjak said the complex IPv6 prefix process seems baffling. “Unfortunately, there are good reasons we need this monstrosity,” he said.

According to Pepelnjak, to meet legal requirements, internet service providers (ISPs) need to be able to identify unique customers by their IPv6 addresses. ISPs cannot use the identity association for non-temporary addresses provision of version 6 of the Dynamic Host Configuration Protocol to control address allocation. As for using a unique IPv6 prefix for every host, Pepelnjak said it wastes half of the address bits. On the other hand, the ideas presented in RFC 8273 keep the client stack simpler because DHCPv6 isn’t needed, but they don’t reduce complexity for the first-hop router.

Read more of Pepelnjak’s ideas on IPv6 prefix usage. 

Why CISOs change jobs so often

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., began 2018 with a blog about turnover among chief information security officers (CISOs). The effectiveness of CISOs determines the success of cybersecurity initiatives at many organizations. However, studies suggest the average CISO tenure is as little as one to two years. ESG research with ISSA found that 38% of CISOs change jobs because they are offered higher compensation packages elsewhere. The survey gathered data from 343 CISOs.

The survey also found 36% of CISOs leave because an organization’s culture isn’t focused on cybersecurity. In other cases, CISOs left because they weren’t included in executive management or in decisions by the board of directors or because cybersecurity budgets weren’t commensurate with the size of an organization. “Clearly, money matters to CISOs but they also want to work for executives who are willing to fund, participate in, and cheerlead cybersecurity efforts across the entire organization. In lieu of this commitment, the CISO is as good as gone,” Oltsik added.

Dig deeper into Oltsik’s thoughts on CISO turnover.

When rebooting isn’t enough

Lee Badman, writing in WiredNot, said the old adage of “just reboot it” is usually innocuous advice when trouble crops up with a piece of technology. From smartphones to automatic transmissions, the advice often holds true. “But when it comes to expensive, supposedly high-end networking components, should we have the same tolerance for the need to reboot as a ‘fix’?” he said. Rebooting an important network component could result in hundreds of clients losing service.

When it comes to rebooting, Badman said that some leading wireless LAN (WLAN) systems require an access point (AP) reboot to enact a config change. Rebooting APs is needed when traffic stalls in a cell and when the relationship between radios breaks down. With some remote office switches, a code upgrade often means the switch won’t restart. The only answer is to manually power cycle the switch, which often involves sending an engineer to an underserved remote office. “I’m willing to occasionally reboot my consumer-grade gadgetry, but that allowance generally does not extend to work where real dollars get spent on beefy equipment. Sadly, too much enterprise-grade networking gear is starting to feel like it belongs on the shelves of Wal-Mart based on its code quality,” Badman said.

Explore more of Badman’s ideas about rebooting to resolve WLAN issues.

Kaspersky sheds more light on Equation Group malware detection

Kaspersky Lab published a lengthy report that shed further light on its discovery of Equation Group malware and its possession of classified U.S. government materials.

The antivirus company, which has been under intense scrutiny by government officials and lawmakers this year, disclosed that classified materials were transmitted to Kaspersky’s network between September 11, 2014 and November 17, 2014. In a previous explanation, the company said Kaspersky antivirus software detected malware on a computer located in the greater Baltimore area. Kaspersky later discovered a 7zip archive on the computer that had Equation Group malware and other materials with U.S. government classified markings.

Kaspersky’s new investigation details were issued in response to several media reports that claimed Russian state-sponsored hackers used Kaspersky’s antivirus software to identify and locate U.S. government data. The reports claimed that in 2015 an NSA contractor’s system was compromised by Russian hackers using Kaspersky antivirus scans, which led to a massive leak of confidential NSA files and Equation Group malware. The news reports also claimed Israeli intelligence penetrated Kaspersky’s network in 2014 and found classified NSA materials on its network.

The Equation Group was an APT group that was first identified by Kaspersky researchers in 2015 and later linked to the U.S. National Security Agency (NSA) in 2016 following disclosures by the hacking group known as the Shadow Brokers.

New details in Kaspersky’s investigation

Thursday’s report provided new details about the computer with Equation Group malware, which was believed to be the NSA contractor’s system. Kaspersky did not confirm or deny these reports, saying its software anonymizes users’ information and divulging details about the specific user in this case would violate its ethical and privacy standards.

The Kaspersky investigation revealed the suspected NSA contractor’s computer was “compromised by a malicious actor on October 4, 2014” as a result of a backdoor Trojan known as Smoke Loader or Smoke Bot. The compromise occurred within the nearly two-month window, from Sept. 11 to Nov. 17, 2014, during which Kaspersky’s software was detecting and scanning malware on the computer.

Kaspersky said it believes the user turned Kaspersky’s antivirus software off at some point during that time frame in order to install a pirated version of Microsoft Office, which allowed Smoke Loader to activate. The report also noted Smoke Loader was attributed to a Russian hacker in 2011 and was known to be distributed on Russian hacker forums.

Kaspersky said once the classified markings were discovered in the 7zip archive materials, all data except the malware binaries was deleted under order of CEO Eugene Kaspersky. The company also said it “found no indication the information ever left our corporate networks.”

Kaspersky’s report appeared to suggest the threat actors who reportedly found the classified NSA data and Equation Group malware likely did so by hacking the computer directly with Smoke Loader and not, as media reports claimed, by hacking into Kaspersky’s network and abusing the company’s antivirus technology.

The company also said it’s possible the computer had other malware on it that Kaspersky didn’t detect.

“Given that system owner’s potential clearance level, the user could have been a prime target of nation states,” the report stated. “Adding the user’s apparent need for cracked versions of Windows and Office, poor security practices, and improper handling of what appeared to be classified materials, it is possible that the user could have leaked information to many hands. What we are certain about is that any non-malware data that we received based on passive consent of the user was deleted from our storage.”

Thursday’s report followed comments from Jeanette Manfra, assistant secretary for cybersecurity and communications at the U.S. Department of Homeland Security, who told the House Science, Space and Technology Oversight Subcommittee earlier this week that there was no conclusive evidence that Kaspersky software had been exploited to breach government systems.

Policy changes

The report also contained new information about how Kaspersky responded to the 2014 Equation Group malware discovery and the company policy changes that followed.

“The reason we deleted those files and will delete similar ones in the future is two-fold; We don’t need anything other than malware binaries to improve protection of our customers and secondly, because of concerns regarding the handling of potential classified materials,” the report states. “Assuming that the markings were real, such information cannot and will not [be] consumed even to produce detection signatures based on descriptions.”

Kaspersky said that those concerns led to the adoption of a new policy for the company that requires all analysts to “delete any potential classified materials that have been accidentally collected during anti-malware research or received from a third party.”

The report didn’t say whether or not Kaspersky ever notified the NSA or other government agencies about the Equation Group malware it discovered or the classified data contained in the 7zip archive. In a previous statement on the situation, the company stated, “As a routine procedure, Kaspersky Lab has been informing the relevant U.S. government institutions about active APT infections in the USA.” It’s also unclear why, after finding the classified U.S. government files, the company never disclosed Equation Group was connected to the NSA.

Kaspersky has not responded to requests for comment on these questions.

The company responded to media reports that claimed threat actors used Kaspersky antivirus scans to hunt for classified markings.

“We have done a thorough search for keywords and classification markings in our signature databases,” Kaspersky said. “The result was negative: we never created any signatures on known classification markings.”

Kaspersky did, however, acknowledge that a malware analyst created a signature for the word “secret” based on the discovery of the TeamSpy malware in 2013, which used a wildcard string pattern based on several keywords, including “secret.” The company hypothesized that a third party may have either misinterpreted the malware signature or maliciously used it against Kaspersky to spread false allegations.
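
As a toy illustration only, not Kaspersky’s actual signature format, a keyword-driven wildcard of the kind described above might look like the following sketch, which also shows why such a pattern can sweep up ordinary documents that merely contain a flagged word.

```python
# Toy illustration of a keyword-based wildcard signature (not Kaspersky's
# actual detection logic). The patterns below are invented for this example.
import fnmatch

KEYWORD_PATTERNS = ["*secret*.doc*", "*pass*.xls*"]

def matches_signature(filename: str) -> bool:
    """Return True if the file name matches any keyword wildcard."""
    name = filename.lower()
    return any(fnmatch.fnmatch(name, pattern) for pattern in KEYWORD_PATTERNS)

print(matches_signature("q3_budget.xlsx"))             # False
print(matches_signature("project_SECRET_notes.docx"))  # True -- swept up even
                                                       # if the file is harmless
```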

MEF releases Sonata and Presto APIs, partners with ONAP

MEF published the Sonata and Presto APIs and said it would work together with the Open Network Automation Platform to determine how automation and orchestration can be managed in software-based environments using MEF APIs as a foundation. The groups made the announcements this week at the SDN NFV World Congress in The Hague, Netherlands.

“MEF and ONAP both see a future where we have services delivered by service providers that span multiple operators, operator domains and technology domains — such as 5G, optical, packet WAN and so on,” said Daniel Bar-Lev, MEF’s director for the office of the CTO.

To operate efficiently, providers need these services to be automated and orchestrated, he added. By aligning their approaches — ONAP from the implementation side and MEF from the conceptual definition side — he said the organizations can provide end-to-end service consistency and avoid market fragmentation.

Keeping silos to a minimum

“But we need to be aware that with so many organizations, players, projects and acronyms, we don’t simply get rid of old silos and create new silos,” Bar-Lev said. “Because if everybody’s doing things their own way, then all we do is create new islands of implementation that will need to be joined up.”

MEF, formerly known as the Metro Ethernet Forum, represents service providers worldwide. ONAP, a Linux Foundation project, focuses on projects designed to help service providers employ automation, virtual functions and other technologies within their operations.

Combined, the two groups have more than 250 members — a number that forms a good portion of the market, according to Bar-Lev. Part of the collaboration includes an agreement between MEF and ONAP on a defined set of northbound APIs — such as MEF’s previously published LSO Legato APIs — that will be used for any ONAP instantiation, he said.

“When we’ve achieved that, it means whoever uses ONAP can then also take advantage of all the east-west APIs we’re defining, because ONAP doesn’t deal with east-west — it really focuses on a single operational domain,” he said.

The two groups will focus on APIs for now, moving to federated information models and security and policy objectives in the future, Bar-Lev said.

MEF releases LSO Sonata and Presto APIs and SDKs

Earlier this week, MEF released two of its open APIs within its Lifecycle Service Orchestration (LSO) Reference Architecture and Framework. The Sonata and Presto APIs and their corresponding software development kits (SDKs) are now available for use among MEF service-provider members and other associated MEF programs.

The LSO Sonata API will be used to reduce the time it takes for a service provider to check or order connectivity from an operator. Today, that process is often performed manually, and it can take months. Sonata automates how these requests are handled, thereby significantly reducing the time frame, Bar-Lev said.
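
As a rough sketch of what that machine-to-machine exchange could look like (the host name, endpoint path and payload fields below are my own placeholders, not the actual Sonata specification), a buyer’s ordering system might check serviceability with a partner operator in a single API call.

```python
# A hypothetical sketch of the kind of inter-provider serviceability check
# LSO Sonata is meant to automate. The host name, endpoint path and payload
# fields are illustrative placeholders, not the actual MEF specification.
import requests

PARTNER_API = "https://wholesale-operator.example.net/mef-sonata"  # placeholder

def check_serviceability(site_a: str, site_b: str, bandwidth_mbps: int) -> dict:
    """Ask a partner operator whether it can deliver connectivity between two sites."""
    payload = {
        "productType": "carrier-ethernet-line",  # illustrative product name
        "siteA": site_a,
        "siteB": site_b,
        "bandwidthMbps": bandwidth_mbps,
    }
    response = requests.post(
        f"{PARTNER_API}/serviceability-checks", json=payload, timeout=30
    )
    response.raise_for_status()
    return response.json()  # e.g. {"serviceable": true, "leadTimeDays": 10}

# What used to be a weeks-long manual exchange becomes one API round trip:
# result = check_serviceability("London-Docklands", "Frankfurt-Ostend", 1000)
```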

Some larger operators have already implemented machine-to-machine automation, Bar-Lev said, but there is no consensus describing how the required APIs should look. This means those operators need to “reinvent the wheel” for each new part of the process, he added.

MEF service-provider members, including AT&T, Orange and Colt Technology Services, worked with MEF and its LSO framework over a six-month period to determine the best ways to take advantage of the new APIs.

“They reached the consensus and created SDK material that is useful for service-provider IT departments throughout the world to take as starting points to see how they would adapt their back-end system to take advantage of these APIs,” Bar-Lev said.

The LSO Presto API resides within the service provider domain and enables more standardized orchestration for SDN and programmable network environments.

“It takes an abstracted, horizontal layered approach, which means when you orchestrate, it doesn’t matter which technology domain you have or which vendor you have,” Bar-Lev said. “Instead of developing multiple APIs per vendor, per technology domain — which isn’t scalable — you’re able to use these well-defined APIs.”

MEF members have already used the Presto API in implementations with OpenDaylight and Open Network Operating System, he added.

Agency shifts in corporate communication organizational structure

There’s a meme floating around the DevOps landscape. It’s called Conway’s Law, published by Melvin Conway in 1968. It goes like this: “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

In other words, a big, bloated and rigid organization will design big, bloated and rigid software.

History has shown Conway to be right. That’s the bad news.

The good news is we’ve learned from our mistakes. Today, we understand that decentralization is the key to designing a better system. Companies made up of small, self-directed teams are more productive than large, bureaucratic and command-based organizations. A new corporate communication organizational structure allows for greater collaboration and exposure for good ideas, and it ultimately enables DevOps.

Makes sense. It’s the Agile and DevOps way of life.

But when I do the math, something weird sticks out. Granted, small teams work best. But if you look at the major players in DevOps and software development — Google, Microsoft, Oracle, Amazon and Facebook — you find organizations with significant headcounts. The teams might be smaller, but these companies still have a lot of employees.

What gives? How do these companies do DevOps? You still have managers, you still have contributors, and you still have office politics and departmental agendas.

So, what’s different now?

In order to answer this question, I need to share a lesson I learned from a preschool teacher:

Any typical classroom problem can be broken down into three elements: the student, the teacher and the physical classroom. Correcting the problem is going to require changing one of those three elements. I have a choice: Change the teacher, change the student or change the physical classroom. Given my options, I am going to choose to change the physical classroom every time. It’s the easiest thing to do.

In other words, the easiest way to change an organization is to change the physical infrastructure in which members act. Remember, Conway said an enterprise designs a system that’s an exact copy of its corporate communication organizational structure. Changing an organization’s communication structure changes the organization and changes the system the organization designs.

Because their physical communication structure supports and fosters decentralization, these big companies are able to decentralize and still maintain cohesiveness as a single organization.

The old way of communicating

Back around 1970, typical corporate communication organizational structures were opaque, bureaucratic and command-based. The physical mediums for the exchange of information were paper and voice. If a company had a product to release or a problem to solve, bosses at the top got together in a meeting to hash things out. Individual bosses talked one-on-one on the phone.

If your company had telephonic conferencing capability, bosses from different geographical locations shared a common phone call. Maybe some interoffice memos were exchanged. Bosses talked to bosses to determine the big picture. Then, they segmented work and distributed it to subordinate bosses for implementation. These subordinate bosses in turn delegated work to their own subordinates or, if they were line managers, to the contributors in their workgroups.

As authoritarian and edict-driven as the process seemed, it made sense. The communication structure in force at the time allowed few other options. Again, the primary instruments of communication were paper and voice.

A voice-based corporate communication organizational structure is synchronous. You have to be in the meeting to get the information available. Before voicemail, you had to answer the phone and be on a call. A communication structure based on paper allows for asynchronicity. Still, you have to be able to get to the paper in order to get the necessary information.

Remember, these were the days when system specifications lived in reams of paper organized in binders that sat on bookshelves. There was no such thing as full-text search. If you didn’t know about the binder, or couldn’t find it, you were sunk.

Typically, the group that created the information was also the group that controlled access to it. Hence the dynamic of “manager as gatekeeper,” as in the following example:

Need schematics for component Y? Go talk to Marvin. He manages the group making the component. He’ll be able to introduce you to someone who can help you out.

If Marvin didn’t think you should have access to the schematic, you’d have your boss negotiate access to the information with Marvin. That’s how the typical interface between subsystems worked: discovery and negotiation.

The world goes digital

Then, along comes magnetic storage, email and the internet. The world goes digital. The corporate communication organizational structure changes. Information distribution goes from being closed to open. Magnetic storage and standard documentation formats allow for text search and retrieval. Email allows conversations to become asynchronous, yet timely. The internet allows for easy, standardized network connectivity. Programming becomes distributed. The physical communication structure goes from being opaque and proprietary to open and standardized. No longer do you need Marvin to get the spec. Today, you do a search on GitHub or the company wiki.

This openness created a new dynamic: bottom-up contribution managed by an authority based on acceptance.

Remember, back in the days of voice- and paper-based communication, work was done by management decree. The bosses set the priority, determined the means of implementation and managed production. It had to be that way by virtue of the primitive physical communication structure. If Pam, a contributor way down in the org chart, had an idea for a better way to do things, the interface in play made it hard for that better idea to emerge.

First, for Pam to get started, she needed to have access to all the information required to formulate a sound idea. Then, after Pam created the idea, she had to convince her boss the idea was worth spending time and political capital to move it up the command chain. The whole process was laborious. The social and technical interfaces between groups were unique and hard to use. The communication structure of the time squashed many a good idea.

Today, all Pam needs to do in order to come up with a better idea is to download the project from GitHub — private or public repos — and figure out how it all works. If she gets jammed up, she can email the project contributors directly. Pam opens a feature branch, implements her idea, provides proof that her implementation is actually an improvement — documentation and tests — and then creates a pull request.

The project’s maintainers review Pam’s work in the pull request. If there’s a problem, they comment. If all is well, the pull request is accepted and the code is merged into a deployment branch. In this scenario, the project maintainers are the authority. The maintainers express authority by way of acceptance. Nobody told Pam what to do. She figured it out for herself by doing her own discovery and using her creativity against the information gathered. The GitHub interface between Pam and the project is open, standardized and clear.
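
For concreteness, the final step of that flow might look something like the sketch below. It uses GitHub’s public REST endpoint for creating pull requests; the repository, branch names and token are placeholders rather than details from the article.

```python
# A sketch of the last step in Pam's workflow: opening a pull request once her
# feature branch is pushed. Uses GitHub's REST API for creating pull requests;
# the repository, branch names and token are placeholders.
import os

import requests

def open_pull_request(owner: str, repo: str, head: str, base: str,
                      title: str, body: str) -> str:
    """Create a pull request and return its URL."""
    response = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "head": head, "base": base, "body": body},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["html_url"]

# Hypothetical usage: Pam proposes her change; the maintainers decide whether to merge.
# url = open_pull_request("example-org", "example-project",
#                         head="pam/faster-import", base="main",
#                         title="Speed up the import pipeline",
#                         body="Benchmarks and tests included.")
```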

In terms of corporate communication organizational structure, GitHub is and was a game changer. GitHub is the change to the physical classroom described by the preschool teacher above. It changes the actors’ behavior.

The team is the product

When I first heard about Conway’s Law, I was reminded of the book, Software for Your Head, written by Jim and Michele McCarthy. The basic premise of the book is team equals product.

Jim McCarthy wrote, “You can find every attribute of the product in the team and every attribute of the team in the product.”

According to the McCarthys, a product is a direct reflection of the people making it. Again, Jim McCarthy wrote, “For example, if the software is slow, the team is loaded with bureaucracy and can’t make decisions.”

When I think about Conway and the McCarthys, I’ve come to understand that Conway did a good job of describing the relationship between organizational structure and system design in terms of the constraints inherent in communication structures. But his paper doesn’t go into the human interaction dynamics present in collaborative activity.

The McCarthys pick up where Conway left off. Just as GitHub provides a playground with a standard set of rules by which any interested party can collaborate to improve a product at the artifact level, the McCarthys provide a way for small, transient, self-directed teams to work together more efficiently in the cognitive-emotional domain. The McCarthys understand the impact of emotions on thinking. The result of their thinking is The Core Protocols. It’s worth the read.

Human interaction and the new corporate communication organizational structure

I am a big fan of The Core Protocols, particularly in light of the growing prevalence of automation in DevOps and the enterprise. More automation means less human interaction, and I can’t help but wonder if the skills and sensitivities required for effective human collaboration will diminish due to lack of exercise.

Those of us writing DevOps automation code are accustomed to negotiating software, not people. The total control we experience writing an automation script doesn’t map well onto the skills required for effective human interaction. Many would rather hit the figurative mute button than work to achieve full communication with another human being.

Just as GitHub provides a communication structure for the effective decentralization of physical software production within large organizations, The Core Protocols might very well be the foundation for the next step in effective human collaboration: efficiently combining thinking and feeling within a corporate communication organizational structure that’s worth being copied.

Take a Lap on Some of Your Favorite Tracks, Reimagined in Forza Motorsport 7

Earlier this week, the folks at ComputerBild published a feature on tracks in Forza Motorsport 7. Featuring a discussion with Turn 10 Studios creative director Dan Greenawalt, the article pointed out the many cutting-edge systems that Turn 10 uses to bring the tracks to life with new weather scenarios and alternate times of day that literally cast the tracks of Forza Motorsport 7 in a brand-new light. If you haven’t already done so, check out the article now.

Suzuka Circuit, Completely Rebuilt

It’s one thing to read about the changes coming to tracks in Forza Motorsport 7, but it’s a very different thing to see them for yourself. Suzuka is a prime example; like all of the real-world tracks in Forza Motorsport 7, Suzuka is officially licensed and completely rebuilt with high-res assets designed to look fantastic at native 4K and on the entire family of Xbox consoles. Not only is the track completely rebuilt and updated, Suzuka also features wet weather conditions for the first time in Forza history. Imagine tackling the “S Curves” at full speed, or barreling around the harrowing 130R, only this time battling the dynamic puddles that line the edges of the circuit, and the blinding spray of the cars in front of you. The same challenge that real-world drivers face at Suzuka lap after lap will now be your challenge as well.

Returning Fan-Favorite, Maple Valley

I might be wrong, but I could swear I heard an audible gasp of excitement on the Interwebz back at E3 when we confirmed that Maple Valley would be returning with Forza Motorsport 7. Is there a more lauded, more beloved track in the Forza universe? Whether you’re looking to test your sideways skills on one of Forza’s best drifting tracks, or you want to push the edges of speed on its sweeping corners, Maple Valley is a masterpiece in every respect. Like Suzuka, it will also feature wet weather conditions – another first in Forza history.

I want to make sure that you all understand what I mean when referring to conditions like weather and time of day. In Forza Motorsport 7, we’ve built tracks with a central goal in mind: Making every time you return to a track a unique experience. That goal manifests itself in a variety of ways.

Rapidly Changing Dynamic Weather

There’s no such thing as a simple “rain” setting in Forza Motorsport 7. Not for Sebring or the Nürburgring, or Brands Hatch, or any other track where wet conditions are available. Instead, the team has created a system that can smoothly transition through multiple weather conditions per track, and those conditions can (and often will) change throughout a race. You might start off with gray skies and fog on a track like Sebring, only to find yourself in the middle of a thunderstorm two laps later. The lights might go green at Silverstone during a light rain, only to find drivers in dry conditions by the end of Lap 2. As in the real world, conditions change and sometimes change quickly, and it’s up to drivers to react to those changes.

Bringing Time of Day to Life

Those dynamic conditions extend to time of day too. Turn 10 is building on the sky technology that was first seen in Forza Horizon 3, capturing real skies that bring life, motion, and color to every track in the game. Check out the screenshot of the observation tower at the Circuit of the Americas against a darkening sky – one glance is all it takes to recognize a Texas sky at dusk. Even Laguna Seca – a track that has been in Forza Motorsport since the very first game; a track that all of us have driven hundreds, if not thousands of laps on – feels completely new in Forza Motorsport 7.

Whether you’re talking time of day or the weather you’re driving in, it all comes back to that central goal: every race should feel unique. When you’re playing through the Forza Driver’s Cup single-player campaign, you’ll experience that first-hand. Take a race at a track like Silverstone as an example. Maybe the first time you play it, you’ll battle the elements in a typical British downpour. Go back and revisit that same race in the campaign, and your conditions may be completely different; in fact, you may not encounter rain at all. The developers at Turn 10 have introduced probability into the various race condition scenarios, meaning there is a percentage chance that the weather conditions might (or might not) change. One race, things will go from bad to worse; the next time around, conditions might stay in your favor. It’s that element of chance – and the need to prepare for whatever the race throws at you – that promises to make racing in Forza Motorsport 7 so exciting.
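
As a toy illustration of that element of chance (the states and percentages are made up for the example, not Turn 10’s actual tuning), a per-lap probability roll might look like this:

```python
# A toy sketch (made-up states and probabilities, not Turn 10's system) of the
# idea described above: the weather is rolled probabilistically as a race
# progresses, so rerunning the same event can produce different conditions.
import random

# Hypothetical per-lap transition probabilities between weather states.
TRANSITIONS = {
    "dry":          {"dry": 0.80, "light rain": 0.15, "thunderstorm": 0.05},
    "light rain":   {"dry": 0.30, "light rain": 0.50, "thunderstorm": 0.20},
    "thunderstorm": {"dry": 0.10, "light rain": 0.40, "thunderstorm": 0.50},
}

def simulate_race(start="light rain", laps=5):
    """Roll the weather once per lap and return the sequence of conditions."""
    conditions = [start]
    for _ in range(laps - 1):
        states, weights = zip(*TRANSITIONS[conditions[-1]].items())
        conditions.append(random.choices(states, weights=weights)[0])
    return conditions

print(simulate_race())  # e.g. ['light rain', 'light rain', 'dry', 'dry', 'thunderstorm']
```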

For me, weather and time of day are a game-changer, adding the kind of variety and challenge that was only hinted at in previous versions of the game. The unpredictable conditions you drive in will affect your race more than ever before, and all of it runs at the rock-solid 60 fps that Forza Motorsport fans expect.

See below for all 32 tracks available to play in Forza Motorsport 7, and as always, stay tuned to ForzaMotorsport.net for more Forza Motorsport 7 goodness as we draw closer to launch, and don’t forget to pre-order Ultimate Edition for early access on Sept. 29.

Brands Hatch

Circuit of the Americas

Daytona International Speedway

Dubai Circuit

Homestead-Miami Speedway

Maple Valley Raceway

Autodromo Internazionale del Mugello

Nürburgring

Rio de Janeiro

Sebring International Raceway

Silverstone Racing Circuit

Circuit de Spa-Francorchamps

Suzuka Circuit

Virginia International Raceway

Yas Marina Circuit

Bernese Alps

Mount Panorama Circuit

Circuit de Catalunya

Hockenheim-Ring

Indianapolis Motor Speedway

Sonoma Raceway

Mazda Raceway Laguna Seca

Le Mans Circuit de la Sarthe

Lime Rock

Long Beach

Autodromo Nazionale Monza

Test Track Airfield

Prague

Road America

Road Atlanta

Top Gear

Watkins Glen