Tag Archives: past

For Sale – For parts or complete. Desktop CAD/Photoshop etc. i7, Nvidia quadro…

Selling my project PC. Has been used (successfully) as a CCTV server for the past 18 months – 2 years without ever being pushed. All parts were bought new but no retail packaging. Please assume no warranty. No operating system installed either. Selling as we’ve now upgraded to a dedicated Xeon server. Parts listed below.

Generic desktop tower case.
Supermicro C7H270-CG-ML motherboard.
Intel i7-7700 3.6 GHz with stock cooler.
PNY Nvidia Quadro M2000 4GB.
Kingston HyperX Fury DDR4 16GB RAM (2x8GB).
Seagate SkyHawk 4TB HDD (no OS).
AcBel 300W PSU.

Aside from the PSU, this is a solid machine with decent potential. It could easily be used for gaming with one or two changes and could be used for CAD or Photoshop as is (or just change the PSU). It handled Hikvision and up to 56 cameras (we had 13 on screen at any one time and it could handle more) but admittedly struggled with playback on any more than four cameras at once (all 4K cameras). The case has a dent or two in it but is entirely usable. I did intend to keep it for the Mrs for her photography but she’s bought a MacBook instead.

Cost around £2000 new. Asking £700 including postage but collection preferred (from Plymouth). Very open to offers as I’ve struggled to price this up to be honest.

Cheers, Chocky.

Go to Original Article
Author:

Dawn of a Decade: The Top Ten Tech Policy Issues for the 2020s

By Brad Smith and Carol Ann Browne

For the past few years, we’ve shared predictions each December on what we believe will be the top ten technology policy issues for the year ahead. As this year draws to a close, we are looking out a bit further. This January we witness not just the start of a new year, but the dawn of a new decade. It gives us all an opportunity to reflect upon the past ten years and consider what the 2020s may bring.

As we concluded in our book, Tools and Weapons: The Promise and the Peril of the Digital Age, “Technology innovation is not going to slow down. The work to manage it needs to speed up.” Digital technology has gone longer with less regulation than virtually any major technology before it. This dynamic is no longer sustainable, and the tech sector will need to step up and exercise more responsibility while governments catch up by modernizing tech policies. In short, the 2020s will bring sweeping regulatory changes to the world of technology.

Tech is at a crossroads, and to consider why, it helps to start with the changes in technology itself. The 2010s saw four trends intersect, collectively transforming how we work, live and learn. Continuing advances in computational power made more ambitious technical scenarios possible both for devices and servers, while cloud computing made these advances more accessible to the world. Like the invention of the personal computer itself, cloud computing was as important economically as it was technically. The cloud allows organizations of any size to tap into massive computing and storage capacity on demand, paying for the computing they need without the outlay of capital expenses. 

More powerful computers and cloud economics combined to create the third trend, the explosion of digital data. We begin the 2020s with 25 times as much digital data on the planet as when the past decade began.

These three advances collectively made possible a fourth: artificial intelligence, or AI. The 2010s saw breakthroughs in data science and neural networks that put these three advances to work in more powerful AI scenarios. As a result, we enter a new decade with an increasing capability to rely on machines with computer vision, speech recognition, and language translation, all powered by algorithms that recognize patterns within vast quantities of digital data stored in the cloud.

The 2020s will likely see each of these trends continue, with new developments that will further transform the use of technology around the world. Quantum computing offers the potential for breathtaking breakthroughs in computational power, compared to classical or digital computers. While we won’t walk around with quantum computers in our pockets, they offer enormous promise for addressing societal challenges in fields from healthcare to environmental sustainability.

Access to cloud computing will also increase, with more data centers in more countries, sometimes designed for specific types of customers such as governments with sensitive data. The quantity of digital data will continue to explode, now potentially doubling every two years, a pace that is even faster than the 2010s. This will make technology advances in data storage a prerequisite for continuing tech usage, explaining the current focus on new techniques such as optical- and even DNA-based storage.
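
A quick back-of-the-envelope check, assuming the two-year doubling rate holds for a full decade, shows why that pace outstrips the roughly 25-fold growth of data in the 2010s noted above:

```latex
% Growth over one decade if digital data doubles every two years (assumed steady rate)
\[
  \text{doublings per decade} = \frac{10}{2} = 5,
  \qquad
  \text{growth factor} = 2^{5} = 32 > 25 .
\]
```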

The next decade will also see continuing advances in connectivity. New 5G technology is not only 20 times faster than 4G; its innovative approach to managing spectrum also means it can support over a thousand more devices per meter than 4G, all with great precision and low latency. It will make feasible a world of ambient computing, where the Internet of Things, or IoT devices, become part of the embedded fabric of our lives, much as electrical devices are today. And well before we reach the year 2030, we’ll be talking about 6G and making use of thousands of satellites in low earth orbit.

All of this will help usher in a new AI Era that likely will lead to even greater change in the 2020s than the digital advances we witnessed during the past decade. AI will continue to become more powerful, increasingly operating not just in narrow use cases as it does today but connecting insights between disciplines. In a world of deep subject matter domains across the natural and social sciences, this will help advance learning and open the door to new breakthroughs.

In many ways, the AI Era is creating a world full of opportunities. In each technological era, a single foundational technology paved the way for a host of inventions that followed. For example, the combustion engine reshaped the first half of the 20th century. It made it possible for people to invent not just cars but trucks, tractors, airplanes, tanks, and submarines. Virtually every aspect of civilian economies and national security issues changed as a result.

This new AI Era likely will define not just one decade but the next three. Just as the impact of the combustion engine took four decades to unfold, AI will likely continue to reshape our world in profound ways between now and the year 2050. It has already created a new era of tech intensity, in which technology is reshaping every company and organization and becoming embedded in the fabric of every aspect of society and our lives.

Change of this magnitude is never easy. It’s why we live in both an era of opportunity and an age of anxiety. The indirect impacts of technology are moving some people and communities forward while leaving others behind. The populism and nationalism of our time have their roots in the enormous global and societal changes that technology has unleashed. And the rising economic power of large companies – perhaps especially those that are both tech platforms and content aggregators – has brought renewed focus to antitrust laws.

This is the backdrop for the top ten technology issues of the 2020s. The changes will be immense. The issues will be huge. And the stakes could hardly be higher. As a result, the need for informed discussion has rarely been greater. We hope the assessments that follow help you make up your own mind about the future we need collectively to help shape.

1. Sustainability – Tech’s role in the race to address climate change

A stream of recent scientific research on climate change makes clear that the planet is facing a tipping point. These dire predictions will catapult sustainability into one of the dominant global policy issues for the next decade, including for the tech sector. We see this urgency reflected already in the rapidly evolving views of our customers and employees, as well as in many electorates around the world. In countries where governments are moving more slowly on climate issues, we’re likely to see businesses and other institutions fill the gap. And over the coming decade, governments that aren’t prioritizing sustainability will be compelled to catch up.

For the tech sector, the sustainability issue will cut both ways. First, it will increase pressure on companies to make the use of technology more sustainable. With data centers that power the cloud ranking among the world’s largest users of electricity, Microsoft and other companies will need to move even more quickly than in recent years to use more and better renewable energy, while increasing work to improve electrical efficiency.

But this is just the tip of the iceberg. Far bigger than technology’s electrical consumption are “Scope 3” emissions – the indirect emissions of carbon in a company’s value chain for everything from the manufacturing of new devices to the production of concrete for new buildings. While this is true for every sector of the economy, it’s an area where the tech sector will likely lead, in part because it can. And should. With some of the world’s biggest income statements and healthiest balance sheets, look to Microsoft and other tech companies to invest and innovate, hopefully using the spirit of competition to bring out the best in each other.

This points to the other and more positive side of the tech equation for sustainability. As the world takes more aggressive steps to address the environment, digital data and technology will prove to be among the next decade’s most valuable tools. While carbon issues currently draw the most attention, climate issues have already become multifaceted. We need urgent and concerted action to address water, waste, biodiversity, and our ecosystems. Regardless of the issue or ultimate technology, insights and innovations will be fueled by data science and artificial intelligence. When quantum computing comes online, this will become even more promising.

By the middle or end of the next decade, the sustainability issue may have another impact that we haven’t yet seen and aren’t yet considering: its effect on the world’s geopolitics. As the new decade begins, many governments are turning inward and nations are pulling apart. But sustainability is an issue that can’t be solved by any country alone. The world must unite to address environmental issues that know no boundaries. We all share a small planet, and the need to preserve humanity’s ability to live on it will force us to think and act differently across borders.

2. Defending Democracy – International threats and internal challenges

Early each New Year, we look forward to the release of the Economist Intelligence Unit’s annual Democracy Index. This past year’s report updated the data on the world’s 75 nations the Economist ranks as democracies. Collectively these countries account for almost half of the world’s population. Interestingly, they also account for 95 percent of Microsoft’s revenue. Perhaps more than any other company, Microsoft is the technology provider for the governments, businesses, and non-profits that support the world’s democracies. This gives us both an important vantage point on the state of democracy and a keen interest in democracy’s health.

Looking back at the past decade, the Economist’s data shows that the health of the world’s democracies peaked in the middle of the decade and has since declined slightly and stagnated. Technology-fueled change almost certainly has contributed in part to this trend.

As we enter the 2020s, defending democracy more than ever requires a focus on digital tech. The past decade saw nation-states weaponize code and launch cyber-attacks against the civilian infrastructure of our societies. This included the hacking of a U.S. presidential campaign in 2016, a tactic Microsoft’s Threat Intelligence Center has since seen repeated in numerous other countries. It was followed by the WannaCry and NotPetya attacks in 2017, which unleashed damage around the world in ways that were unimaginable when the decade began.

The defense of democracy now requires determined efforts to protect political campaigns and governments from the hacking and leaking of their emails. Even more important, it requires digital protection of voter rolls and elections themselves. And most broadly, it requires protection against disinformation campaigns that have exploited the basic characteristics of social media platforms.

Each of these priorities now involves new steps by tech companies, as well as new strategies for collaboration with and among governments. Microsoft is one of several industry leaders putting energy and resources into this area. Our Defending Democracy Program includes an AccountGuard program that protects candidates in 26 democratic nations, an ElectionGuard program to safeguard voting, and support for the NewsGuard initiative to address disinformation. As we look to the 2020s, we will need continued innovation to address the likely evolution of digital threats themselves.

The world will also need to keep working to solidify existing norms and add new legal rules to protect against cybersecurity threats. Recent years have seen more than 100 leading tech companies come together in a Tech Accord to advance security in new ways, while more than 75 nations and more than 1,000 multi-stakeholder signatories have now pledged their support for the Paris Call for Trust and Security in Cyberspace. The 2020s hopefully will see important advances at the United Nations, support from global groups such as the World Economic Forum, and by 2030, work on a global compact to make a Digital Geneva Convention a reality.

But the digital threats to democracy are not confined to attacks from other nations. As the new decade dawns, a new issue is emerging with potentially profound and thorny implications for the world’s democracies. Increasingly government officials in democratic nations are asking whether the algorithms that pilot social media sites are undermining the political health of their citizenries. 

It’s difficult to sustain a democracy if a population fragments into different “tribes” that are exposed to entirely different narratives and sources of information. While diverse opinions are older than democracy itself, one of democracy’s characteristics has traditionally involved broad exposure to a common set of facts and information. But over the past decade, behavioral-based targeting and monetization on digital platforms has arguably created more information siloes than democracy has experienced in the past. This creates a new question for a new decade. Namely, will tech companies and democratic governments alike need new approaches to address a new weakness for the world’s democracies? 

3.  Journalism – Technology needs to give the news business a boost

While we look to improve the health of the world’s democracies, we also need to monitor the well-being of another system playing a vital role in free societies across the globe: the independent press. For centuries, journalists have served as watchdogs for democracies, safeguarding political systems by monitoring and challenging public affairs and government institutions. As the Victorian-era historian Thomas Carlyle wrote, “There were Three Estates in Parliament; but, in the Reporters’ Gallery yonder, there sat a Fourth Estate more important far than they all.”

It’s clear that a healthy democracy requires healthy journalism, but newspapers are ailing – and many are on life support. The decline of quality journalism is not breaking news. It has been in slow decline since the start of the 20th century, with the advent of radio and later as television overtook the airwaves. By the turn of this century, the internet further eroded the news business as dotcoms like Craigslist disrupted advertising revenue, news aggregators lured away readers, and search engines and social media giants devoured both. While a number of bigger papers weathered the storm, most small local outlets were hard hit. According to data from the U.S. Bureau of Labor Statistics’ Occupational Employment Statistics, 37,900 Americans were employed in newsrooms in 2018, down 14 percent from 2015 and down 47 percent from 2004.

The world will be hard pressed to strengthen its democracies if we can’t rejuvenate quality journalism. In the decade ahead the business model for journalism will need to evolve and become healthier, which hopefully will include partnerships that create new revenue streams, including through search and online ads. And as the world experiments with business models, we can’t forget to learn from and build on the public broadcasters that have endured through the years, like the BBC in the United Kingdom and NPR in the United States.  

Helping journalism recover will also include protecting journalists, as we’ve learned through Microsoft’s work with the Clooney Foundation for Justice. Around the world violence against journalists is on the rise, especially for those reporters covering conflict, human rights abuses, and corruption. According to the Committee to Protect Journalists, 25 journalists were killed, 250 were imprisoned, and 64 went missing in 2019. In the coming decade, look for digital technology like AI to play an important role in monitoring the safety of journalists, spotting threats, and helping ensure justice in the court of law. 

And lastly, it’s imperative that we use technology to protect the integrity of journalism. As the new decade begins, technologists warn that manipulated videos are becoming purveyors of disinformation. These “deepfakes” do more than deceive the public; they call all journalism into question. AI is used to create this doctored media, but it will also be used to detect deepfakes and verify trusted, quality content. Look for the tech sector to partner with the news media and academia to create new tools and advocate for regulation to combat internet fakery and build trust in the authentic, quality journalism that underpins democracies around the world.

4. Privacy in an AI Era – From the second wave to the third

In the 2010s, privacy concerns exploded around the world. The decade’s two biggest privacy controversies redefined big tech’s relationships with government. In 2013, the Snowden disclosures raised the world’s ire about the U.S. Government’s access to data about people. The tech sector, Microsoft included, responded by expanding encryption protection and pushing back on our own government, including with litigation. Five years later, in 2018, the guns turned back on the tech sector after the Cambridge Analytica data scandal engulfed Facebook and digital privacy again became a top-level political issue in much of the world.

Along the way, privacy laws continued to spread around the world. The decade saw 49 new countries adopt broad privacy laws, adding to the 86 nations that protected privacy a decade ago. While the United States is not yet on that list, 2018 saw stronger privacy protections jump from Europe across the Atlantic and move all the way to the Pacific, as California’s legislature passed a new law that paves the way for action in Washington, D.C.

But it wasn’t just the geographic spread of privacy laws that marked the decade. With policy innovation centered in Brussels, the European Union effectively embarked on a second wave of privacy protection. The first wave was characterized by laws that required that web sites give consumers “notice and consent” rights before using their data. Europe’s General Data Protection Regulation, or GDPR, represented a second wave. It gives consumers “access and control” over their data, empowering them to review their data online and edit, move, or delete it under a variety of circumstances.

Both these waves empowered consumers – but also placed a burden on them to manage their data. With the volume of data mushrooming, the 2020s likely will see a third wave of privacy protection with a different emphasis. Rather than simply empowering consumers, we’re likely to see more intensive rules that regulate how businesses can use data in the first place. This will reach data brokers that are unregulated in some key markets today, and bring a sharper focus on sensitive technologies like facial recognition and on protections against the use of data to adversely impact vulnerable populations. We’re also likely to see more connections between privacy rules and laws in other fields, including competition law.

In short, fasten your seat belt. The coming decade will see more twists and turns for privacy issues.

5. Data and National Sovereignty – Economics meet geopolitics

When the combustion engine became the most important invention a century ago, the oil that fueled it became the world’s most important resource. With AI emerging as the most important technology for the next three decades, we can expect the data that fuels it to quickly become the 21st century’s most important resource. This quest to accumulate data is creating economic and geopolitical issues for the world.

As the 2020s commence, data economics are breeding a new generation of public policy issues. Part of this stems from the returns to scale that result from the use of data. While there are finite limits to the amount of gasoline that can be poured into the tank of a car, the desire for more data to develop a better AI model is infinite. AI developers know that more data will create better AI. Better AI will lead to even more usage for an AI system. And this in turn will create yet more data that will enable the system to improve yet again. There’s a risk that those with the most data, namely the first movers and largest companies and countries, will overtake others’ opportunity for success.

This helps explain the critical economic issues that are already emerging. And the geopolitical dynamics are no less vital.

Two of the biggest forces of the past decade – digital technology and geopolitics – pulled the world in opposite directions. Digital technology transmitted data across borders and connected people around the world. As technology brought the world together, geopolitical dynamics pulled countries apart and kindled tensions on issues from trade to immigration. This tug-of-war explains one reason a tech sector that started the decade as one of the most popular industries ended it under scrutiny and with mounting criticism.

This tension has created a new focus that is wrapped into a term that was seldom used just a few years ago – “digital sovereignty.” The current epicenter for this issue is Western Europe, especially Germany and France. With the ever-expanding ubiquity of digital technology developed outside of Europe and the potential international data flows that can result, the protection and control of national data is a new and complicated priority, with important implications for evolving concepts of national sovereignty.

The arrival of the AI Era requires that governments think anew about balancing some critical challenges. They need to continue to benefit from the world’s most advanced technologies and move a swelling amount of data across borders to support commerce in goods and services. But they want to do this in a manner that protects and respects national interests and values. From a national security perspective, this may lead to new rules that require that a nation’s public sector data stays within its borders unless the government provides explicit permission that it can move somewhere else. From an economic perspective, it may mean combining leading international technologies with incentives for local tech development and effective sovereignty protections.

All this has also created the need for open data initiatives to level the playing field. Part of this requires opening public data by governments to provide smaller players with access to larger data sets. Another involves initiatives to enable smaller companies and organizations to share – or “federate” – their data, without surrendering their ownership or control in the data they share. This in turn requires new licensing approaches, privacy protections, and technology platforms and tools. It also requires intellectual property policies, especially in the copyright space, that facilitate this work.

During the first two decades of this century, open source software development techniques transformed the economics of coding. During the next two decades, we’ll need open data initiatives that do the same thing for data and AI.

The past year has seen some of these concepts evolve from political theory to government proposals. This past October, the German Government proposed a project called GAIA-X to protect the country’s digital sovereignty. A month later, discussions advanced to propose a common approach that would bring together Germany and France.

It’s too early to know precisely how all these initiatives will evolve. For almost four centuries, the world has lived under a “Westphalian System” defined by territorial borders controlled by sovereign states. The technology advances of the past decade have placed new stress on this system. Every aspect of the international economy now depends on data that crosses borders unseen and at the speed of light. In an AI-driven economy and data-dependent world, the movement of data is raising increasingly important questions for sovereignty in a Westphalian world. The next decade will decide how this balance is struck.

6. Digital Safety – The need to constantly battle evolving threats

The 2010s began with optimism that new technology would advance online safety and better protect children from exploitation. It ended with a year during which terrorists and criminals used even newer technology to harm innocent children and adults in ways that seemed almost unimaginable when the decade began. While the tech sector and governments have moved to respond, the decade underscores the constant war that must be waged to advance digital safety.

Optimism marked the decade’s start in part because of PhotoDNA, developed in 2009 by Microsoft and Hany Farid, then a professor at Dartmouth College. The industry adopted it to identify and compare online photos to known illegal images of child exploitation. Working with key non-profit and law enforcement groups, the technology offered real hope for turning the tide against the horrific exploitation of children. And spurred on by the British Government and others, the tech sector took additional steps globally to address images of child pornography in search results and on other services.
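
PhotoDNA itself is proprietary, but the general pattern it popularized (hash an image robustly, then compare that hash against a database of hashes of known illegal images) can be sketched in a few lines. The sketch below uses a simple average hash, hypothetical file names, and an assumed Hamming-distance threshold purely for illustration; it is not PhotoDNA’s actual algorithm or parameters.

```python
from PIL import Image

def average_hash(path, size=8):
    """A simple 64-bit perceptual hash: shrink, grayscale, then mark each
    pixel as above/below the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known images (file names are placeholders).
known_hashes = {average_hash("known_image_1.jpg"), average_hash("known_image_2.jpg")}

def matches_known_image(path, threshold=5):
    """Flag an upload if its hash is within `threshold` bits of any known hash.
    The threshold here is an illustrative assumption, not a PhotoDNA parameter."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)

print(matches_known_image("uploaded_image.jpg"))
```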

Yet as the New York Times reported in late 2019, criminals have subsequently used advancing video and livestreaming technologies, as well as new approaches to file-sharing and encrypted messaging, to exploit children even more horrifically. As a result, political pressure is again pushing industry to do more to catch up. It’s a powerful lesson of the need for constant vigilance.

Meanwhile, online safety threats become more multifaceted. One of the decade’s tragic days came on March 15, 2019 in Christchurch, New Zealand. A terrorist and white supremacist used livestreaming on the internet as the stage for mass shootings at two mosques, killing 51 innocent civilians.

Led by Prime Minister Jacinda Ardern, the New Zealand Government spearheaded a global multi-stakeholder effort to create the Christchurch Call. It has brought governments and tech companies together to share information, launch a crisis incident protocol, and take other steps to reduce the possibility of others using the internet in a similar way in the future.

All of this has also led to new debate about the continued virtues of exempting social media platforms from legal liability for the content on their sites. Typified by section 230 of the United States’ Communications Decency Act, current laws shield these tech platforms from responsibilities faced by more traditional publishers. As we look to the 2020s, it seems hard to believe that this approach will survive the next decade without change.

7. Internet Inequality – A world of haves and have-nots

In 2010, fewer than a third of the world’s population had access to the internet. As this decade concludes, the number has climbed to more than half. This represents real progress. But much of the world still lacks internet access. And high-speed broadband access lags much farther behind, especially in rural areas.

In an AI Era, access to the internet and broadband have become indispensable for economic success. With public discussion increasingly focusing on economic inequality, we need to recognize that the wealth disparity in part is rooted in internet inequality.

There are many reasons to be optimistic that there will be faster progress in the decade ahead. But progress will require new approaches and not just more money.

This starts with working with better data about who currently has internet access and at what speeds. Imagine trying to restore electric power to homes after a big storm without accurate data on where the power is out. Yet that’s the fundamental reality in a country such as the United States when we discuss closing the broadband gap. The country spends billions of dollars a year without the data needed to invest it effectively. And this data gap is by no means confined to North America.

Better data can make its best contribution if it’s coupled with new and better technology. The next decade will see a world of new communications technologies, from 5G (and ultimately 6G) to thousands of low Earth orbiting satellites and terrestrial technologies like TV White Spaces. All of this is good news. But it will be essential to focus on where each technology can best be used, because there is no such thing as a one-size-fits-all approach for communications technology. For example, 5G will transform the world, but its signals travel shorter distances, making it less than optimal for many scenarios in rural areas.

With better data and new technology, it’s possible to bring high speed internet to 90 percent of the global population by 2030. This may sound ambitious, but with better data and sounder investments, it’s achievable. Internet equality calls for ambition on this level.

8. A Tech Cold War – Will we see a digital iron curtain down the Pacific?

The new decade begins with a tech question that wasn’t on the world’s radar ten years ago. Are we witnessing the start of a “tech cold war” between the United States and China? While it’s too early to know for certain, it’s apparent that recent years have been moving in this direction. And the 2020s will provide a definitive answer.

The 2010s saw China impose more constraints on technology and information access to its local market. This built on the Great Chinese Firewall constructed a decade before, with more active filtering of foreign content and more constraints on local technology licenses. In 2016, the Standing Committee of the National People’s Congress adopted a broad Cyber Security Law to advance data localization and enable the government to take “all necessary” steps to protect China’s sovereignty, including through a requirement to make key network infrastructure and information systems “secure and controllable.” Combined with other measures to manage digital technology that have raised human rights concerns, these policies have effectively created a local internet and tech ecosystem that is distinct from the rest of the world.

This Chinese tech ecosystem also grew increasingly competitive in the latter half of the decade. The pace and quality of innovation have been impressive. With companies such as Huawei, Alibaba, and Tencent gaining worldwide prominence, Chinese technology is being adopted more globally even as China’s own market remains less open – and at the same time that it’s subject to Chinese cyber security policies.

As the 2010s close, the United States is responding with new efforts to contain the spread of Chinese technology. It’s not entirely different from the American efforts to contain Russian ideology and influence in the Cold War that began seven decades ago. Powered in part by American efforts to dissuade other governments from adopting 5G equipment from China, tensions heightened in 2019 when the U.S. Department of Commerce banned American tech companies from selling components to Huawei for its products.

In both Washington and Beijing, officials are entering the new decade preparing for these tensions around technology to harden. The implications are huge. Clearly, the best time to think about a Tech Cold War is before it begins. The Cold War between the United States and Soviet Union lasted more than four decades and impacted virtually every country on the planet. As we look ahead to the 2020s, the strategic questions for each country and the implications for the world are no smaller.

9. Ethics for Artificial Intelligence – Humanity needs to govern machines

For a world long accustomed to watching robots wreak havoc on the silver screen, the last few years have brought advances in artificial intelligence that still fall far short of the capabilities seen in science fiction, but are well beyond what had seemed possible when the decade began. While typically still narrow in scope, AI enters a new decade with an increasing ability to match human perception and cognition in vision, speech recognition, language translation, and machine learning based on discerning patterns in data.

In a decade that increasingly gave rise to anxiety over the impact of technology, it’s not surprising that these advances unleashed a wave of discussions focused on AI and its implications for ethics and human rights. If we’re going to empower machines to make decisions, how do we want these decisions to be made? This is a defining question not just for the decade ahead, but for all of us who are alive today. As the first generation of people to give machines the power to make decisions, we have a responsibility to get the balance right. If we fail, the generations that follow us are likely to pay a steep price.

The good news is that companies, governments, and civil society groups around the world have embraced the need to develop ethical and human rights principles for artificial intelligence. We published a set of six ethical principles at Microsoft in January 2018, and we’ve been tracking the trends. What we’re seeing is a global movement towards an increasingly common set of principles. It’s encouraging.

As we look to the 2020s, we’re likely to see at least two new trends. The first is the shift from the articulation of principles to the operationalization of ethics. In other words, it’s not sufficient to simply state what principles an organization wants to apply to its use of AI. It needs to implement this in more precise standards backed up by governance models, engineering requirements, and training, monitoring, and ultimately compliance. At Microsoft we published our first Responsible AI Standard in late 2019, spelling out many of these new pieces. No doubt we’ll improve upon it during the next few years, as we learn both from our own experience and the work of many others who are moving in a similar direction.

The second trend involves specific issues that are defining where “the rubber meets the road” for ethical and human rights concerns. The first such issue has involved facial recognition, which arguably has become a global policy issue more rapidly than any previous digital tech issue. Similar questions are being discussed about the use of AI for lethal autonomous weapons. And conversations are starting to focus on ethics and the use of algorithms more generally. This is just a beginning. By 2030, there will likely be enough issues to fill the table of contents for a lengthy book. If there’s one common theme that has emerged in the initial issues, it’s the need to bring together people from different countries, intellectual disciplines, and economic and government sectors to develop a more common vocabulary. It’s the only way people can communicate effectively with each other as we work to develop common and effective ethical practices for machines.

10. Jobs and Income Inequality in an AI Economy – How will the world manage a disruptive decade?

It’s clear that the 2020s will bring continued economic disruption as AI enables machines to replace many tasks and jobs that are currently performed by people. At the same time, AI will create new jobs, companies, and even industries that don’t exist today. As we’ve noted before, there is a lot to learn from the global economy’s transition from a horse-powered to automobile-driven economy a century ago. Like foundational technologies before it, AI will likely create something like an economic rollercoaster, with an uneven match between prosperity and distress during particular years or in specific places.

This will create many big issues, and two are already apparent. The first is the need to equip people with the new skills needed to succeed in an AI Economy. During the 2010s, technology drove globalization and created more economic opportunity for people in many developing economies around the world, perhaps especially in India and China. The resulting competition for jobs led not only to political pressure to turn inward in some developed nations, but to a recognition that economic success in the future requires more investments in education. As we saw through data published by LinkedIn, in a country like the United States there emerged a broadened interest in Europe’s approach to apprenticeships and technical skills and the pursuit of a range of post-secondary credentials. Given the importance of this trend, it’s not surprising that there was also broader political interest in addressing the educational costs for individuals pursuing these skills.

There’s every reason to believe that these trends will accelerate further in the decade ahead. If anything, expanding AI adoption will lead to additional economic ripple effects. We’re likely to see employers and governments alike invest in expanded learning opportunities. It has become a prerequisite for keeping pace.

In many ways, however, this marks the beginning rather than the conclusion of the economic debates that lie ahead. Four decades of technological change have already contributed to mounting income inequality. It’s a phenomenon that now impacts the politics of many communities and countries, with issues that range from affordable housing to tax rates, education and healthcare investments, and income redistribution.

All this raises some of the biggest political questions for the 2020s. It reminds us that history’s defining dates don’t always coincide with the start of a new decade. For example, one of the most important dates in American political history came on September 14, 1901. It was the day that Theodore Roosevelt succeeded to the United States Presidency. More than a century later, we can see that it represented the end of more than 30 years that combined advancing technology with regulatory restraint, which led to record levels of both prosperity and inequality. In important respects, it was the first day of the Progressive Era in the United States. Technology continued to progress, but in a new political age that included stronger business regulation, product liability laws, antitrust enforcement, public investment, and an income tax.

As we enter the 2020s, political leaders in many countries are debating whether to embark on a similar shift. No one has a crystal ball. But increasingly it seems like the next decade will usher in not only a new AI Era and AI Economy, but new approaches to politics and policy. As we’ve noted before, there’s a saying that “history doesn’t repeat itself, but it often rhymes.” From our vantage point, there seems a good chance that the next decade for technology and policy will involve some historical poetry.

Go to Original Article
Author: Microsoft News Center

Oracle looks to grow multi-model database features

Perhaps no single vendor or database platform over the past three decades has been as pervasive as the Oracle database.

Much as the broader IT market has evolved, so too has Oracle’s database. Oracle has added new capabilities to meet changing needs and competitive challenges. With a move toward the cloud, new multi-model database options and increasing automation, the modern Oracle database continues to move forward. Among the executives who have been at Oracle the longest is Juan Loaiza, executive vice president of mission critical database technologies, who has watched the database market evolve, first-hand, since 1988.

In this Q&A, Loaiza discusses the evolution of the database market and how Oracle’s namesake database is positioned for the future.

Why have you stayed at Oracle for more than three decades and what has been the biggest change you’ve seen over that time?

Juan Loaiza

Juan Loaiza: A lot of it has to do with the fact that Oracle has done well. I always say Oracle’s managed to stay competitive and market-leading with good technology.

Oracle also pivots very quickly when needed. How do you survive for 40 years? Well, you have to react and lead when technology changes.

Decade after decade, Oracle continues to be relevant in the database market as it pivots to include an expanding list of capabilities to serve users.

The big change that happened a little over a year ago is that Thomas Kurian [former president of product development] left Oracle. He was head of all development, and when he left, some of the teams, like database and apps, ended up rolling up to [Oracle founder and CTO] Larry Ellison. Larry is now directly managing some of the big technology teams. For example, I work directly with Larry.

What is your view on the multi-model database approach?

Loaiza: This is something we’re starting to talk more about. The term that people use is multi-model, but we’re using a different term: converged database. The reason is that multi-model is really just one component of it.

Multi-model talks about the different data models you can represent inside the database, but we’re also doing much more than that. Blockchain is an example of converging into the database a technology that isn’t normally even thought of as database technology. So we’re going well beyond the conventional kind of multi-model of, hey, I can do this data format and that data format.

Initially, the relational database was the mainstream database people used for both OLTP [online transaction processing] and analytics. What has happened in the last 10 to 15 years is that there have been a lot of new database technologies to come around, things like NoSQL, JSON, document databases, databases for geospatial data and graph databases too. So there’s a lot of specialty databases that have come around. What’s happening is, people are having to cobble together a complex kind of web of databases to solve one problem and that creates an enormous amount of complexity.

With the idea of a converged database, we’re taking all the good ideas, whether it’s NoSQL, blockchain or graph, and we’re building it into the Oracle database. So you can basically use one data store and write your application to that.

The analogy that we use is that of a smartphone. We used to have a music device and a phone device and a calendar device and a GPS device and all these things and what’s happened is they’ve all been converged into a smartphone.
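
As a concrete illustration of what writing an application against “one data store” can look like, here is a minimal sketch using the python-oracledb driver to keep a relational table and a JSON document table side by side in the same Oracle database and query across both with plain SQL. The connection details, table design, and JSON fields are illustrative assumptions, not details from the interview.

```python
import oracledb  # python-oracledb driver

# Placeholder connection details for illustration only.
conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# A relational table and a JSON document table in the same database.
cur.execute("""
    CREATE TABLE orders (
        order_id NUMBER PRIMARY KEY,
        customer VARCHAR2(100),
        total    NUMBER)""")
cur.execute("""
    CREATE TABLE order_events (
        event_id NUMBER PRIMARY KEY,
        doc      VARCHAR2(4000) CHECK (doc IS JSON))""")

cur.execute("INSERT INTO orders VALUES (:1, :2, :3)", [1, "Acme", 250])
cur.execute(
    "INSERT INTO order_events VALUES (:1, :2)",
    [1, '{"order_id": 1, "status": "shipped", "carrier": "DHL"}'],
)

# One SQL query spans the relational row and the JSON document.
cur.execute("""
    SELECT o.customer, JSON_VALUE(e.doc, '$.status')
    FROM orders o
    JOIN order_events e
      ON JSON_VALUE(e.doc, '$.order_id' RETURNING NUMBER) = o.order_id
    ORDER BY o.order_id""")
for customer, status in cur:
    print(customer, status)

conn.commit()
conn.close()
```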

Are companies actually shifting their on-premises production database deployments to the cloud?

Loaiza: There’s definitely a switch to the cloud. There are two models to cloud; one is kind of the grassroots. So we’re seeing some of that, for example, with our autonomous database that people are using now. So they’re like, ‘Hey, I’m in the finance department, and I need a reporting database,’ or, ‘hey, I’m in the marketing department, and I need some database to run some campaign with.’ So that’s kind of a grassroots and those guys are building a new thing and they want to just go to cloud. It’s much easier and much quicker to set up a database and much more agile to go to the cloud.

The second model is where somebody up in the hierarchy says, ‘Hey, we have a strategy to move to cloud.’ Some companies want to move quickly and some companies say, ‘Hey, you know, I’m going to take my time,’ and there’s everything in the middle.

Will autonomous database technology mean enterprises will need fewer database professionals?

Loaiza: The autonomous database addresses the mundane aspects of running a database. Things like tuning the database, installing it, configuring it, setting up HA [high availability], among other tasks. That doesn’t mean that there’s nothing for database professionals to do.

Like every other field where there is automation, what you do is you move upstream, you say, ‘Hey, I’m going to work on machine learning or analytics or blockchain or security.’ There’s a lot of different aspects of data management that require a lot of labor.

One of the nice things that we have in this industry is there is no real unemployment crisis in IT. There’s a lot of unfilled jobs.

So it’s pretty straightforward for someone who has good skills in data management to just move upstream and do something that’s going to add more specific value than just configuring and setting up databases, which is really more of a mechanical process.

This interview has been edited for clarity and conciseness.

Go to Original Article
Author:

Tallying the momentous growth and continued expansion of Dynamics 365 and the Power Platform – The Official Microsoft Blog

We’ve seen incredible growth of Dynamics 365 and the Power Platform just in the past year. This momentum is driving a massive investment in people and breakthrough technologies that will empower organizations to transform in the next decade.

We have allocated hundreds of millions of dollars to our business cloud, investments that power business transformation across markets and industries and help organizations solve difficult problems.

This fiscal year we are also heavily investing in the people that bring Dynamics 365 and the Power Platform to life — a rapidly growing global network of experts, from engineers and researchers to sales and marketing professionals. Side-by-side with our incredible partner community, the people that power innovation at Microsoft will fuel transformational experiences for our customers into the next decade.

Accelerating innovation across industries

In every industry, I hear about the struggle to transform from a reactive organization into a proactive one that can respond to changes in the market, customer needs, and even within its own business. When I talk to customers who have rolled out Dynamics 365 and the Power Platform, the conversation shifts to the breakthrough outcomes they’ve achieved, often in very short time frames.

Customers talk about our unique ability to connect data holistically across departments and teams — with AI-powered insights to drive better outcomes. Let me share a few examples.

This year we’ve focused on a new vision for retail that unifies back office, in-store and digital experiences. One of Washington state’s founding wineries — Ste. Michelle Wine Estates — is onboarding Dynamics 365 Commerce to bridge physical and digital channels, streamline operations with cloud intelligence and continue building brand loyalty with hyper-personalized customer experiences.

When I talk to manufacturers, we often zero in on ways to bring more efficiency to the factory floor and supply chain. Again, it’s our ability to harness data from physical and digital worlds, and reason over it with AI-infused insights, that opens doors to new possibilities. For example, Majans, the Australia-based snack food company, is creating the factory of the future with the help of Microsoft Dynamics 365 Supply Chain Management, Power BI and Azure IoT Hub — bringing Internet of Things (IoT) intelligence to every step in the supply chain, from quality control on the production floor to key performance indicators that track future investments. When everyone relies on a single source of truth about production, inventory and sales performance, the decisions employees make drive the same outcome — all made possible on our connected business cloud.

These connected experiences extend to emerging technologies that bridge digital and physical worlds, such as our investment in mixed reality. We’re working with companies like PACCAR — manufacturer of premium trucks — to improve manufacturing productivity and employee training using Dynamics 365 Guides and HoloLens 2, as well as Siemens to enable technicians to service its eHighway — an electrified freight transport system — by completing service steps with hands-free efficiency using HoloLens and two-way communication and documentation in Dynamics 365 Field Service.

For many of our customers, the journey to Dynamics 365 and the Power Platform started with a need for more personalized customer experiences. Our customer data platform (CDP) featuring Dynamics 365 Customer Insights, is helping Tivoli Gardens — one of the world’s longest-running amusement parks — personalize guest experiences across every touchpoint — on the website, at the hotel and in the park.  Marston’s has onboarded Dynamics 365 Sales and Customer Insights to unify guest data and infuse personalized experiences across its 1,500-plus pubs across the U.K.

The value of Dynamics 365 is compounded when coupled with the Power Platform. As of late 2019, there are over 3 million monthly active developers on the Power Platform, from non-technical “citizen developers” to Microsoft partners developing world-class, customized apps. In the last year, we’ve seen 700% growth in Power Apps production apps and 300% growth in monthly active users. All of those users generate a ton of data, with more than 25 billion Power Automate steps run each day and 25 million data models hosted in the Power BI service.

The impact of the Power Platform comes through in the stories our customers share with us. TruGreen, one of the largest lawn care companies in the U.S., onboarded Dynamics 365 Customer Insights and the Microsoft Power Platform to provide more proactive and predictive services to customers, freeing employees to spend more time on higher-value tasks and complex customer issue resolution. And the American Red Cross is leveraging Power Platform integration with Teams to improve disaster response times.

From the Fortune 500 companies below to the thousands of small and medium sized businesses, city and state governments, schools and colleges and nonprofit organizations — Dynamics 365 and the Microsoft Cloud are driving transformative success delivering on business outcomes.

24 business logos of Microsoft partners

Partnering to drive customer success

We can’t talk about the growth and momentum of Dynamics 365 and the Power Platform without spotlighting our partner community — the ISVs and system integrators that are the lifeblood of driving scale for our business. We launched new programs, such as the new ISV Connect Program, to help partners get Dynamics 365 and Power Apps solutions to market faster.

Want to empower the next generation of connected cloud business? Join our team!

The incredible momentum of Dynamics 365 and Power Platform means our team is growing, too. In markets around the globe, we’re looking for people who want to make a difference and take their career to the next level by helping global organizations digitally transform on Microsoft Dynamics 365 and the Power Platform. If you’re interested in joining our rapidly growing team, we’re hiring across a wealth of disciplines, from engineering to technical sales, in markets across the globe. Visit careers.microsoft.com to explore business applications specialist career opportunities.

Go to Original Article
Author: Microsoft News Center

For Sale – Mac Mini 2011 i7 FAULTY

My mac mini has developed a fault with (I believe) the dedicated GPU (see photo). It doesn’t get past the boot up screen.

I personally don’t have the time (or inclination) to want to try to fix this.

The spec of this machine is:

  • Mac Mini mid-2011
  • 2.7GHz dual-core Intel Core i7
  • 8GB RAM (2 x 4GB sticks)
  • 500GB hard drive
  • AMD Radeon HD 6630M graphics processor

As far as I can tell, it’s only the GPU that’s causing problems, all the ports, wifi, bluetooth all work.

This is being sold as NOT WORKING and therefore no returns accepted, it might be right for someone who has the time and tools to attempt a fix. Note that as I couldn’t get past the boot screen I have opened this up to get the drive out to recover data and then wipe it.

Please do ask questions if you’re interested.

Go to Original Article
Author:

These innovations are driving collaboration in the Cascadia region | Microsoft On The Issues

As far as enviable commutes go, a short hop in a seaplane, flying over water and past snow-capped mountains, is up there.

Connecting Seattle and Vancouver, a recently launched flight route is a testament to the growing ties between the two cities.

The two-way trading relationship between Canada and the United States remains one of the largest in the world – and the links between British Columbia and Washington state are growing. In 2016, the launch of the Cascadia Innovation Corridor formalized the connection. And a July 2019 study also found that a high-speed rail line connecting Vancouver, Seattle and Portland could bring $355 billion in economic growth in the region.

Here are a few of the ways this region is coming together.

Innovation at scale

Microsoft, along with many other business, academic and research institutions, has been working to maximize the opportunities the corridor presents – and the Canadian Digital Technology Supercluster consortium is one example.

Bringing together names in tech, healthcare and natural resources, this consortium hopes to advance technologies by developing innovation and talent. It will also be a boon to the local economy, with the goal of creating 50,000 B.C. jobs over the next 10 years, fuelling growth across multiple sectors and expanding opportunity across the region.

A meeting of minds

Home to some of the world’s leading research and medical organizations, the Cascadia region is also aiming to become a global leader in biomedical data science and health technology innovation.

Accelerating cancer research has been a key target. Working in collaboration with the Fred Hutchinson Cancer Research Center, Microsoft has established the Cascadia Data Discovery Initiative, which is tackling the barriers that make research breakthroughs difficult, such as data discovery and access.

Microsoft’s partnership with BC Cancer is taking another approach to finding a cure for the disease. Using Azure, scientists can collaboratively analyze vast amounts of data, accelerating the pace of research. Interns from the Microsoft Garage program have been working to take this a step further, using the HoloLens platform to create mixed reality tools to help researchers visualize the structure of a tumor.

Inspiring the next generation

Work is also happening at the grass-roots level, helping to create the next generation of graduates ready to build the technologies of the future. Through a partnership with Microsoft, the British Columbia Institute of Technology is delivering a first-of-its-kind mixed-reality curriculum, with the goal of training students for jobs in digital media and entertainment along the Cascadia Corridor.

British Columbia students are also benefiting from a Microsoft initiative to help high schools build computer science programs. The TEALS program first started in Washington state in 2009 and recently expanded to B.C. It pairs computer science professionals with teachers, giving schools the training and support to help their students build skills for in-demand local careers.

A lesson for others

The Cascadia Corridor is already helping Vancouver, Seattle and the region achieve more than they could do independently.

A steering committee established at the end of 2018 will help build on these economic opportunities, grow human capital in the region, invest in and expand transport and infrastructure, and foster an ecosystem that encourages innovation.

For more on the Cascadia Corridor and other Microsoft work follow @MSFTIssues on Twitter.

Go to Original Article
Author: Microsoft News Center

What is the Hyper-V Core Scheduler?

In the past few years, sophisticated attackers have targeted vulnerabilities in CPU acceleration techniques. Cache side-channel attacks represent a significant danger, and that danger is magnified on a host running multiple virtual machines: one compromised virtual machine can potentially retrieve information held in cache for a thread owned by another virtual machine. To address such concerns, Microsoft developed its new “HyperClear” technology pack. HyperClear implements multiple mitigation strategies. Most of them work behind the scenes and require no administrative effort or education. However, HyperClear also includes the new “core scheduler”, which might require action on your part.

The Classic Scheduler

Now that Hyper-V has multiple schedulers, its original one has earned the “classic” label. I wrote an article on that scheduler some time ago. The advanced schedulers do not replace the classic scheduler so much as they hone it, so you need to understand the classic scheduler in order to understand the core scheduler. A brief recap of the earlier article:

  • You assign a specific number of virtual CPUs to a virtual machine. That sets the upper limit on how many threads the virtual machine can actively run.
  • When a virtual machine assigns a thread to a virtual CPU, Hyper-V finds the next available logical processor to operate it.

To keep it simple, imagine that Hyper-V assigns threads in round-robin fashion. Hyper-V does engage additional heuristics, such as trying to keep a thread with its owned memory in the same NUMA node. It also knows about simultaneous multi-threading (SMT) technologies, including Intel’s Hyper-Threading and AMD’s recent advances. That means that the classic scheduler will try to place threads where they can get the most processing power. Frequently, a thread shares a physical core with a completely unrelated thread — perhaps from a different virtual machine.

Risks with the Classic Scheduler

The classic scheduler poses a cross-virtual machine data security risk. It stems from the architectural nature of SMT: a single physical core can run two threads but has only one cache.

In my research, I discovered several attacks in which one thread reads cached information belonging to the other. I did not find any examples of one thread polluting the other’s data, but I also did not see anything explicitly preventing that sort of assault.

On a physically installed operating system, you can mitigate these risks with relative ease by leveraging antimalware and following standard defensive practices. Software developers can make use of fencing techniques to protect their threads’ cached data. Virtual environments make things harder because the guest operating systems and binary instructions have no influence on where the hypervisor places threads.

The Core Scheduler

The core scheduler makes one fairly simple change to close the vulnerability of the classic scheduler: it never assigns threads from more than one virtual machine to any physical core. If it can’t assign a second thread from the same VM to the second logical processor, then the scheduler leaves it empty. Even better, it allows the virtual machine to decide which threads can run together.


We will walk through implementing the scheduler before discussing its impact.

Implementing Hyper-V’s Core Scheduler

The core scheduler has two configuration points:

  1. Configure Hyper-V to use the core scheduler
  2. Configure virtual machines to use two threads per virtual core

Many administrators miss that second step. Without it, a VM will always use only one logical processor on its assigned cores. Each virtual machine has its own independent setting.

We will start by changing the scheduler. You can change the scheduler at a command prompt (cmd or PowerShell) or by using Windows Admin Center.

How to Use the Command Prompt to Enable and Verify the Hyper-V Core Scheduler

For Windows and Hyper-V Server 2019, you do not need to do anything at the hypervisor level. You still need to set the virtual machines. For Windows and Hyper-V Server 2016, you must manually switch the scheduler type.

You can make the change at an elevated command prompt (PowerShell prompt is fine):
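The switch in question is hypervisorschedulertype, and the value we want here is “core”:

    bcdedit /set hypervisorschedulertype core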

Note: if bcdedit does not accept the setting, ensure that you have patched the operating system.

Reboot the host to enact the change. If you want to revert to the classic scheduler, use “classic” instead of “core”. You can also select the “root” scheduler, which is intended for use with Windows 10 and will not be discussed further here.

To verify the scheduler, just run bcdedit by itself and look at the last line:

bcdedit
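The line you’re looking for will resemble this (exact spacing will differ):

    hypervisorschedulertype Core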

bcdedit will show the scheduler type by name. It will always appear, even if you disable SMT in the host’s BIOS/UEFI configuration.

How to Use Windows Admin Center to Enable the Hyper-V Core Scheduler

Alternatively, you can use Windows Admin Center to change the scheduler.

  1. Use Windows Admin Center to open the target Hyper-V host.
  2. At the lower left, click Settings. In most browsers, it will hide behind any URL tooltip you might have visible. Move your mouse to the lower left corner and it should reveal itself.
  3. Under Hyper-V Host Settings sub-menu, click General.
  4. Underneath the path options, you will see Hypervisor Scheduler Type. Choose your desired option. If you make a change, WAC will prompt you to reboot the host.


Note: If you do not see an option to change the scheduler, check that:

  • You have a current version of Windows Admin Center
  • The host has SMT enabled
  • The host runs at least Windows Server 2016

The scheduler type can be changed even if SMT is disabled on the host. However, you will need to use bcdedit to see it (see the previous sub-section).

Implementing SMT on Hyper-V Virtual Machines

With the core scheduler enabled, virtual machines can no longer depend on Hyper-V to make the choice to use a core’s second logical processor. Hyper-V will expect virtual machines to decide when to use the SMT capabilities of a core. So, you must enable or disable SMT capabilities on each virtual machine just like you would for a physical host.

Because of the way this technology developed, the defaults and possible settings may seem unintuitive. New in 2019, newly-created virtual machines can automatically detect the SMT status of the host and hypervisor and use that topology. Basically, they act like a physical host that ships with Hyper-Threaded CPUs — they automatically use it. Virtual machines from previous versions need a bit more help.

Every virtual machine has a setting named “HwThreadsPerCore”. The property belongs to the Msvm_ProcessorSettingData CIM class, which connects to the virtual machine via its Msvm_Processor associated instance. You can drill down through the CIM API using the following PowerShell (don’t forget to change the virtual machine name):
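Here is a sketch of that drill-down; the association hops (Msvm_ComputerSystem to Msvm_Processor to Msvm_ProcessorSettingData) reflect my reading of the class model described above:

    # Placeholder VM name -- change this to your virtual machine's name
    $vmName = 'demo-vm'

    # The Msvm_ComputerSystem instance whose ElementName matches the VM name represents the VM
    $vm = Get-CimInstance -Namespace root/virtualization/v2 -ClassName Msvm_ComputerSystem -Filter "ElementName='$vmName'"

    # Walk from the VM to its virtual processors, then from each processor to its setting data
    Get-CimAssociatedInstance -InputObject $vm -ResultClassName Msvm_Processor |
        ForEach-Object { Get-CimAssociatedInstance -InputObject $_ -ResultClassName Msvm_ProcessorSettingData } |
        Select-Object -Property InstanceID, HwThreadsPerCore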

The output of the cmdlet will present one line per virtual CPU. If you’re worried that you can only access this setting via such a verbose technique, hang in there! I only wanted to show you where this information lives on the system. You have several easier ways to get to and modify the data. I want to finish the explanation first.

The HwThreadsPerCore setting can have three values:

  • 0 means inherit from the host and scheduler topology — limited applicability
  • 1 means 1 thread per core
  • 2 means 2 threads per core

The setting has no other valid values.

A setting of 0 makes everything nice and convenient, but it only works in very specific circumstances. Use the following to determine defaults and setting eligibility:

  • VM config version < 8.0
    • Setting is not present
    • Defaults to 1 if upgraded to VM version 8.x
    • Defaults to 0 if upgraded to VM version 9.0+
  • VM config version 8.x
    • Defaults to 1
    • Cannot use a 0 setting (cannot inherit)
    • Retains its setting if upgraded to VM version 9.0+
  • VM config version 9.x
    • Defaults to 0

I will go over the implications after we talk about checking and changing the setting.

You can see a VM’s configuration version in Hyper-V Manager and PowerShell’s Get-VM:
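For example, a quick one-liner that lists every VM on the host with its configuration version:

    Get-VM | Select-Object -Property Name, Version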


The version does affect virtual machine mobility. I will come back to that topic toward the end of the article.

How to Determine a Virtual Machine’s Threads Per Core Count

Fortunately, the built-in Hyper-V PowerShell module provides direct access to the value via the *-VMProcessor cmdlet family. As a bonus, it simplifies the input and output to a single value. Instead of the above, you can simply enter:
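Note that the cmdlet family exposes the value under the slightly different name HwThreadCountPerCore; the VM name below is a placeholder:

    Get-VMProcessor -VMName 'demo-vm' | Select-Object -Property HwThreadCountPerCore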

If you want to see the value for all VMs:
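One way to do it is to pipe every VM on the host through the processor cmdlet:

    Get-VM | Get-VMProcessor | Select-Object -Property VMName, HwThreadCountPerCore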

You can leverage positional parameters and aliases to simplify these for on-the-fly queries:
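For instance (the VM name is again a placeholder, and select is the built-in alias for Select-Object):

    Get-VMProcessor 'demo-vm' | select HwThreadCountPerCore
    (Get-VMProcessor 'demo-vm').HwThreadCountPerCore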

You can also see the setting in recent versions of Hyper-V Manager (Windows Server 2019 and current versions of Windows 10). Look on the NUMA sub-tab of the Processor tab. Find the Hardware threads per core setting:


In Windows Admin Center, access a virtual machine’s Processor tab in its settings. Look for Enable Simultaneous Multithreading (SMT).


If the setting does not appear, then the host does not have SMT enabled.

How to Set a Virtual Machine’s Threads Per Core Count

You can easily change a virtual machine’s hardware thread count. For either the GUI or the PowerShell commands, remember that the virtual machine must be off and you must use one of the following values:

  • 0 = inherit, and only works on 2019+ and current versions of Windows 10 and Windows Server SAC
  • 1 = one thread per hardware core
  • 2 = two threads per hardware core
  • All values above 2 are invalid

To change the setting in the GUI or Windows Admin Center, access the relevant tab as shown in the previous section’s screenshots and modify the setting there. Remember that Windows Admin Center will hide the setting if the host does not have SMT enabled. Windows Admin Center does not allow you to specify a numerical value. If unchecked, it will use a value of 1. If checked, it will use a value of 2 for version 8.x VMs and 0 for version 9.x VMs.

To change the setting in PowerShell:
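For a single VM (the name is a placeholder, and remember the VM must be off):

    Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2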

To change the setting for all VMs in PowerShell:
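A sketch that pipes every VM on the host into the same cmdlet:

    Get-VM | Set-VMProcessor -HwThreadCountPerCore 2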

Note on the cmdlet’s behavior: If the target virtual machine is off, the setting will work silently with any valid value. If the target machine is on and the setting would have no effect, the cmdlet behaves as though it made the change. If the target machine is on and the setting would have made a change, PowerShell will raise an error. You can include the -PassThru parameter to receive the modified vCPU object:
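For example:

    Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2 -PassThru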

Considerations for Hyper-V’s Core Scheduler

I recommend using the core scheduler in any situation that does not explicitly forbid it. I will not ask you to blindly take my advice, though. The core scheduler’s security implications matter, but you also need to think about scalability, performance, and compatibility.

Security Implications of the Core Scheduler

This one change instantly nullifies several exploits that could cross virtual machines, most notably in the Spectre category. Do not expect it to serve as a magic bullet, however. In particular, remember that an exploit running inside a virtual machine can still try to break other processes in the same virtual machine. By extension, the core scheduler cannot protect against threats running in the management operating system. It effectively guarantees that these exploits cannot cross partition boundaries.

For the highest level of virtual machine security, use the core scheduler in conjunction with other hardening techniques, particularly Shielded VMs.

Scalability Impact of the Core Scheduler

I have spoken with one person who was left with the impression that the core scheduler does not allow for oversubscription. They called into Microsoft support, and the engineer agreed with that assessment. I reviewed Microsoft’s public documentation as it was at the time, and I understand how they reached that conclusion. Rest assured that you can continue to oversubscribe CPU in Hyper-V. The core scheduler prevents threads owned by separate virtual machines from running simultaneously on the same core. When it starts a thread from a different virtual machine on a core, the scheduler performs a complete context switch.

You will have some reduced scalability due to the performance impact, however.

Performance Impact of the Core Scheduler

On paper, the core scheduler looks like it should have a severely deleterious effect on performance. It reduces the number of possible run locations for any given thread, and synthetic benchmarks show a noticeable performance reduction when compared to the classic scheduler. A few points:

  • Generic synthetic CPU benchmarks drive hosts to abnormal levels using atypical loads. In simpler terms, they do not predict real-world outcomes.
  • Physical hosts with low CPU utilization will experience no detectable performance hits.
  • Running the core scheduler on a system with SMT enabled will provide better performance than the classic scheduler on the same system with SMT disabled.

Your mileage will vary. No one can accurately predict how a general-purpose system will perform after switching to the core scheduler. Even a heavily-laden processor might not lose anything. Remember that, even in the best case, an SMT-enabled core will not provide more than about a 25% improvement over the same core with SMT disabled. In practice, expect no more than a 10% boost. In the simplest terms: switching from the classic scheduler to the core scheduler might reduce how often you enjoy a 10% boost from SMT’s second logical processor. I expect few systems to lose much by switching to the core scheduler.

Some software vendors provide tools that can simulate a real-world load. Where possible, leverage those. However, unless you dedicate an entire host to guests that only operate that software, you still do not have a clear predictor.

Compatibility Concerns with the Core Scheduler

As you saw throughout the implementation section, a virtual machine’s ability to fully utilize the core scheduler depends on its configuration version. That impacts Hyper-V Replica, Live Migration, Quick Migration, virtual machine import, backup, disaster recovery, and anything else that potentially involves hosts with mismatched versions.

Microsoft drew a line with virtual machine version 5.0, which debuted with Windows Server 2012 R2 (and Windows 8.1). Any newer Hyper-V host can operate virtual machines of its version all the way down to version 5.0. On any system, run  Get-VMHostSupportedVersion to see what it can handle. From a 2019 host:
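The cmdlet needs no arguments when run against the local host:

    Get-VMHostSupportedVersion

On a 2019 host, the returned list runs from configuration version 5.0 up through 9.0, with 9.0 flagged as the default; the exact set of intermediate versions may vary with patch level.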

So, you can freely move version 5.0 VMs between a 2012 R2 host and a 2016 host and a 2019 host. But, a VM must be at least version 8.0 to use the core scheduler at all. So, when a v5.0 VM lands on a host running the core scheduler, it cannot use SMT. I did not uncover any problems when testing an SMT-disabled guest on an SMT-enabled host or vice versa. I even set up two nodes in a cluster, one with Hyper-Threading on and the other with Hyper-Threading off, and moved SMT-enabled and SMT-disabled guests between them without trouble.

The final compatibility verdict: running old virtual machine versions on core-scheduled systems means that you lose a bit of density, but they will operate.

Summary of the Core Scheduler

This is a lot of information to digest, so let’s break it down to its simplest components. The core scheduler provides a strong inter-virtual machine barrier against cache side-channel attacks, such as the Spectre variants. Its implementation requires an overall reduction in the ability to use simultaneous multi-threaded (SMT) cores. Most systems will not suffer a meaningful performance penalty. Virtual machines have their own ability to enable or disable SMT when running on a core-scheduled system. All virtual machine versions prior to 8.0 (WS2016/W10 Anniversary) will only use one logical processor per core when running on a core-scheduled host.

Go to Original Article
Author: Eric Siron

For Sale – ASUS ROG GTX980 Poseidon Platinum

Selling my GTX980 as I rarely use my PC for gaming anymore. It’s played everything I have chucked at it in the past and is still a brilliant card. It has the hybrid cooler; I’ve always run it on air, but it can be used in a proper water-cooled rig. Just removed from the PC today.

Here’s the official Asus page: POSEIDON-GTX980-P-4GD5 | Graphics Cards | ASUS United Kingdom
There are plenty of reviews to back up the performance.

Price is £150 plus postage; it’s a good-sized box and quite hefty. I could deliver if you are local.


Price and currency: £150
Delivery: Delivery cost is not included
Payment method: BT, Cash if being delivered
Location: Holywell, Tyne and Wear
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference
