Azure preparedness for Hurricane Florence

As Hurricane Florence continues its journey to the mainland, our thoughts are with those in its path. Please stay safe. We’re actively monitoring Azure infrastructure in the region. We at Microsoft have taken all precautions to protect our customers and our people.

Our datacenters (US East, US East 2, and US Gov Virginia) have been reviewed internally and externally to ensure that we are prepared for this weather event. Our onsite teams are prepared to switch to generators if utility power is unavailable or unreliable. All our emergency operating procedures have been reviewed by our team members across the datacenters, and we are ensuring that our personnel have all necessary supplies throughout the event.

As a best practice, all customers should review their disaster recovery plans, and all mission-critical applications should take advantage of geo-replication.
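
As an illustration (not part of Microsoft’s advisory), the sketch below shows one way to provision geo-replicated storage from Python with the track-2 azure-mgmt-storage SDK; the subscription ID, resource group, account name and region are placeholders.

    # Minimal sketch: create a storage account that uses read-access geo-redundant
    # storage (RA-GRS), so data is replicated to a paired region and readable there.
    # Assumes azure-identity and azure-mgmt-storage are installed and the resource
    # group already exists; all names below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    subscription_id = "<subscription-id>"
    client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

    poller = client.storage_accounts.begin_create(
        resource_group_name="dr-demo-rg",        # placeholder
        account_name="drdemostorage001",         # placeholder, must be globally unique
        parameters={
            "location": "eastus2",
            "kind": "StorageV2",
            "sku": {"name": "Standard_RAGRS"},   # geo-replicated, readable secondary
        },
    )
    account = poller.result()
    print(account.name, account.sku.name)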

Rest assured that Microsoft is focused on the readiness and safety of our teams, as well as our customers’ business interests that rely on our datacenters. 

You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

If there is any change to the situation, we will keep customers informed of Microsoft’s actions through this announcement.

For guidance on disaster recovery best practices, see the references below: 

Jake Braun discusses the Voting Village at DEF CON

Election security continues to be a hot topic, as the 2018 midterm elections draw closer. So, the Voting Village at DEF CON 26 in Las Vegas wanted to re-create and test every aspect of an election.

Jake Braun, CEO of Cambridge Global Advisors, based in Arlington, Va., and one of the main organizers of the DEF CON Voting Village, discussed the pushback the event has received and how he hopes the event can expand in the future.

What were the major differences between what the Voting Village had this year compared to last year?

Jake Braun: The main difference is it’s way bigger. And we’ve got, end to end, the voting infrastructure. We’ve got voter registration, a list of voters in the state of Ohio that are in a cyber range that’s basically like a county clerk’s network. Cook County, Illinois, their head guy advised us on how to make it realistic [and] make it like his network. We had that, but we didn’t have the list of voters last year.

That’s the back end of the voter process with the voter infrastructure process. And then we’ve got machines. We’ve got some new machines and accessories and all this stuff.

Then, on the other end, we’ve got the websites. This is the last piece of the election infrastructure that announces the results. And so, obviously, we’ve got the kids hacking the mock websites.

What prompted you to make hacking the mock websites an event for the kids in R00tz Asylum?

Braun: It was funny. I was at [RSA Conference], and we’ve been talking for a long time about, how do we represent this vulnerability in a way that’s not a waste of time? Because the guys down in the [Voting Village], hacking websites is not interesting to them. They’ve been doing it for 20 years, or they’ve known how to do it for 20 years. But this is the most vulnerable part of the infrastructure, because it’s [just] a website. You can cause real havoc.

I mean, the Russians — when they hacked the Ukrainian website and changed it to show their candidate won, and the Ukrainians took it down, fortunately, they took it down before anything happened. But then, Russian TV started announcing their candidate won. Can you imagine if, in November 2020, the Florida and Ohio websites are down, and Wolf Blitzer is sitting there on CNN saying, ‘Well, you know, we don’t really know who won, because the Florida and Ohio websites are down,’ and then RT — Russian Television — starts announcing that their preferred candidate won? It would be chaos.

Anyway, I was talking through this with some people at [RSA Conference], and I was talking about how it would be so uninteresting to do it in the real village or in the main village. And the guy [I was talking to said], ‘Oh, right. Yeah. It’s like child’s play for them.’

I was like, ‘Exactly, it’s child’s play. Great idea. We’ll give it to R00tz.’ And so, I called up Nico [Sell], and she was like, ‘I love it. I’m in.’ And then, the guys who built it were the Capture the Packet guys, who are some of the best security people on the planet. I mean, Brian Markus does security for … Aerojet Rocketdyne, one of the top rocket manufacturers in the world. He sells to [Department of Defense], [Department of Homeland Security] and the Australian government. So, I mean, he is more competent than any election official we have.

The first person to get in was an 11-year-old girl, and she got in in 10 minutes. Totally took over the website, changed the results and everything else.

How did it go with the Ohio voter registration database?

Braun: The Secretaries of State Association criticized us, [saying], ‘Oh, you’re making it too easy. It’s not realistic,’ which is ridiculous. In fact, we’re protecting the voter registration database with this Israeli military technology, and no one has been able to get in yet. So, it’s actually probably the best protected list of voters in the country right now.

Have you been able to update the other machines being used in the Voting Village?

Braun: Well, a lot of it is old, but it’s still in use. The only thing that’s not in use is the WinVote, but everything else that we have in there is in use today. Unlike other stuff, they don’t get automatic updates on their software. So, that’s the same stuff that people are voting on today.

Have the vendors been helpful at all in providing more updated software or anything?

Braun: No. And, of course, the biggest one sent out a letter in advance to DEF CON again this year saying, ‘It’s not realistic and it’s unfair, because they have full access to the machines.’

Do people think these machines are kept in Fort Knox? I mean, they are in a warehouse or, in some places, in small counties, they are in a closet somewhere — literally. And, by the way, Rob Joyce, the cyber czar for the Trump administration who’s now back at NSA [National Security Agency], in his talk [this year at DEF CON, he basically said], if you don’t think that our adversaries are doing exactly this all year so that they know how to get into these machines, your head is insane.

The thing is that we actually are playing by the rules. We don’t steal machines. We only get them if people donate them to us, or if we can buy them legally somehow. The Russians don’t play by the rules. They’ll just go get them however they want. They’ll steal them or bribe people or whatever.

They could also just as easily do what you do and just get them secondhand.

Braun: Right. They’re probably doing that, too.

Is there any way to test these machines in a way that would be acceptable to the manufacturers and U.S. government?

Braun: The unfortunate thing is that, to our knowledge, the Voting Village is still the only public third-party inspection — or whatever you want to call it — of voting infrastructure.

The vendors and others will get pen testing done periodically for themselves, but that’s not public. All these things are done, and they’re under [nondisclosure agreement]. Their customers don’t know what vulnerabilities they found and so on and so forth.

So, the unfortunate thing is that the only time this is done publicly by a third party is when it’s done by us. And that’s once a year for two and a half days. This should be going on all year with all the equipment, the most updated stuff and everything else. And, of course, it’s not.

Have you been in contact with the National Institute of Standards and Technology, as they are in the process of writing new voting machine guidelines?

Braun: Yes. This is why DEF CON is so great, because everybody is here. I was just talking to them yesterday, and they were like, ‘Hey, can you get us the report as soon as humanly possible? Because we want to take it into consideration as we are putting together our guidelines.’ And they said they used our report last year, as well.

How have the election machines fared against the Voting Village hackers this year?

Braun: Right, of course, they were able to get into everything. Of course, they’re finding all these new vulnerabilities and all this stuff. 

The greatest thing that I think came out of last year was that the state of Virginia wound up decommissioning the machine that [the hackers] got into in two minutes remotely. They decommissioned that and got rid of the machine altogether. And it was the only state that still had it. And so, after DEF CON, they had this emergency thing to get rid of it before the elections in 2017.

What’s the plan for the Voting Village moving forward?

Braun: We’ll do the report like we did last year. Out of all the guidelines that have come out since 2016 on how to secure election infrastructure, none of them talk about how to better secure your reporting websites or, since they are kind of impossible to secure, what operating procedures you should have in place in case they get hacked.

So, we’re going to include that in the report this year. And that will be a big addition to the overall guidelines that have come out since 2016.

And then, next year, I think, it’s really just all about, what else can we get our hands on? Because that will be the last time that any of our findings will be able to be implemented before 2020, which is, I think, when the big threat is.

A DEF CON spokesperson said that most of the local officials that responded and are attending have been from Democratic majority counties. Why do you think that is?

Braun: That’s true, although [Neal Kelley, chief of elections and registrar of voters for] Orange County, attended. Orange County is pretty Republican, and he is a Republican.

But I think it winds up being this functionally odd thing where urban areas are generally Democratic, but because they are big, they have a bigger tax base. So then, the people who run them have more money to do security and hire security people. So, they kind of necessarily know more about this stuff.

Whereas if you’re in Allamakee County, Iowa, with 10,000 people, the county auditor who runs the elections there — that guy or gal, I don’t know who it is — is the IT person, the election official, the security person and everything else. You’re just not going to get the specialized stuff, you know what I mean?

Do you have any plans to try to boost attendance from smaller counties that might not be able to afford sending somebody here or plans on how to get information to them?

Braun: Well, that’s why we do the report. This year, we did a mailing of 6,600 pieces of mail to all 6,600 election officials in the country and two emails and 3,500 live phone calls. So, we’re going to keep doing that.
 
And that’s the other thing: We just got so much more engagement from local officials. We had a handful come last year. We had several dozen come this year. None of them were public last year. This year, we had a panel of them speaking, including DHS [Department of Homeland Security].

So, that’s a big difference. Despite the stupid letter that the Secretary of State Association sent out, a lot of these state and local folks are embracing this.

And it’s not like we think we have all the answers. But you would think if you were in their position and with how cash-strapped they are and everything, that they would say, ‘Well, these guys might have some answers. And if somebody’s got some answers, I would love to go find out about those answers.’

Microsoft Ignite 2018 conference coverage

Introduction

Microsoft continues to gain market momentum, fueled in part by an internal culture shift and the growing popularity of the Azure cloud platform that powers the company’s Office 365 product.

When CEO Satya Nadella took the helm in 2014, he made a concerted effort to turn the company away from its proprietary background to win over developers and enterprises with cloud and DevOps ambitions.

To reinforce this new agenda, Microsoft acquired GitHub, the popular software development platform, for $7.5 billion in June and expanded its developer-friendly offerings in Azure — from Kubernetes management to a Linux-based distribution for use with IoT devices. But many in IT have long memories and don’t easily forget the company’s blunders, which can wipe away any measure of good faith at a moment’s notice.

PowerShell, the popular automation tool, continues to experience growing pains after Microsoft converted it to an open source project that runs on Linux and macOS systems. As Linux workloads on Azure continue to climb — around 40% of Azure’s VMs run on Linux according to some reports — and Microsoft releases Linux versions of on-premises software, PowerShell Core is one way Microsoft is addressing the needs of companies with mixed OS environments.

While this past year solidified Microsoft’s place in the cloud and open source arenas, Nadella wants the company to remain on the cutting edge and incorporate AI into every aspect of the business. The steady draw of income from Azure and Office 365 — the latter with more than 135 million users — as well as its digital transformation agenda, has proven successful so far. So what’s in store for 2019?

This Microsoft Ignite 2018 guide gives you a look at the company’s tactics over the past year along with news from the show to help IT pros and administrators prepare for what’s coming next on the Microsoft roadmap. 

1. Latest news on Microsoft

Recent news on Microsoft’s product and service developments

Stay current on Microsoft’s new products and updated offerings before and during the Microsoft Ignite 2018 show.

2. A closer look

Analyzing Microsoft’s moves in 2018

Take a deeper dive into Microsoft’s developments with machine learning, DevOps and the cloud with these articles.

3. Glossary

Definitions related to Microsoft products and technologies

SmartBear-Zephyr deal spotlights software quality tools shake-up

Consolidation continues to reshape the software quality tools landscape as vendors seek to wean app dev teams off legacy tools for digital transformation initiatives.

The latest shake-up is SmartBear’s acquisition this week of Zephyr, a San Jose, Calif., maker of real-time test management tools, primarily for Atlassian’s Jira issue tracking tool, and for continuous testing and DevOps. This follows the Somerville, Mass., company’s deal in May to acquire Hiptest, in Besancon, France, to enhance continuous testing for Agile and DevOps teams.

Highlight on support for Atlassian’s Jira

Atlassian, Slack and GitHub provide three of the top ecosystems that developers use for ancillary development tools, said Ryan Lloyd, SmartBear’s vice president of products. Atlassian Marketplace’s overall revenue this past year was $200 million, according to Atlassian financial reports. Zephyr for Jira is the top-grossing app on the Atlassian Marketplace, with more than $5 million in revenue since 2012.

Zephyr strengthens SmartBear’s portfolio with native test management inside Jira, and the Zephyr Enterprise product represents a modern replacement for Quality Center, HPE’s former software now owned by Micro Focus, Lloyd said.

Meanwhile, Hiptest supports behavior-driven development, and overlaps a bit with Zephyr, said Thomas Murphy, a Gartner analyst in Spokane, Wash.

SmartBear’s portfolio of software quality tools also includes SoapUI, TestComplete, SwaggerHub, CrossBrowserTesting, Collaborator and AlertSite.

Girding for the competition

SmartBear’s moves echo those of other vendors in the software quality tools space as they fill out their portfolios to attract customers from legacy test suites, such as Micro Focus’ Quality Center and Mercury Interactive, to their platforms, Murphy said. They also want to tap into Jira’s wide adoption and teams that seek to shift to more agile practices in testing.

Other examples in the past year are Austrian firm Tricentis’ acquisition of QASymphony, and Idera, in Houston, which acquired the TestRail and Ranorex Studio test management and automation tools from German firm Gurock Software and Austria’s Ranorex GmbH, respectively.

However, vendors that assemble tools from acquisitions often end up with overlaps in features and functions, as well as very different user experience environments, Murphy said.

“They have a little feeling that they have different tool stacks for different types of users,” he said. “But I believe the more you can drive consistent look and feel that is best, especially as you push from teams up to the enterprise.”

Test management is a key part of a company’s ability to develop, test and deploy quality software at scale. Modern software quality tools must help organizations through digital transformation, yet continue to adapt to the requirements of cloud-scale companies.

“Organizations must get better at automation, they must have tools that support them with figuring out testable requirements on through to code quality testing, unit testing, exploratory testing, functional, automation and performance testing,” Murphy said. “This story has to be built around a continuous quality approach.”

No-code and low-code tools seek ways to stand out in a crowd

As market demand for enterprise application developers continues to surge, no-code and low-code vendors seek ways to stand out from one another in an effort to lure professional and citizen developers.

For instance, last week’s Spark release of Skuid’s eponymous drag-and-drop application creation system adds on-premises, private data integration, a new Design System Studio, and new core components for tasks such as creation of buttons, forms, charts and tables.

A suite of prebuilt application templates aims to help users build and customize bespoke applications for areas such as sales force automation, recruitment and applicant tracking, HR management and online learning.

And a native mobile capability enables developers to take the apps they’ve built with Skuid and deploy them on mobile devices with native functionality for iOS and Android.

“We’re seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform,” said Ray Wang, an analyst at Constellation Research in San Francisco.

“People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly.”

Skuid — named for an acronym, Scalable Kit for User Interface Design — was originally an education software provider, but users’ requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications. That became the Skuid platform and the company pivoted to no-code, said Mike Duensing, CTO of Skuid in Chattanooga, Tenn.

Quick Base adds Kanban reports

Quick Base Inc., in Cambridge, Mass., recently added support for Kanban reports to its no-code platform. Kanban is a scheduling system for lean and just-in-time manufacturing. The system also provides a framework for Agile development practices, so software teams can visually track and balance project demands with available capacity and ease system-level bottlenecks.

The Quick Base Kanban reports enable development teams to see where work is in process. They also let end users interact with their work and update its status, said Mark Field, Quick Base director of products.

Users drag and drop progress cards between columns to indicate how much work has been completed on software delivery tasks to date. This lets them track project tasks through stages or priority, opportunities through sales stages, application features through development stages, team members and their task assignments and more, Field said.

Datatrend Technologies, an IT services provider in Minnetonka, Minn., uses Quick Base to build the apps that manage technology rollouts for its customers, and finds the Kanban reports handy.

“Quick Base manages that whole process from intake to invoicing, where we interface with our ERP system,” said Darla Nutter, senior solutions architect at Datatrend.

Previously, Datatrend kept data on work in progress through four stages (plan, execute, complete and invoice) in a table report with no visual representation; with the Kanban reports, users can see what they have to do at any given stage and prioritize work accordingly, she said.

“You can drag and drop tasks to different columns and it automatically updates the stage for you,” she said.
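
Conceptually, each card in such a report is just a record with a stage field, and dragging it between columns is an update to that field. Here is a toy Python sketch of that idea (the class and stage names are illustrative, not Quick Base’s API):

    # Toy sketch: a Kanban card is a record with a stage, and a drag between
    # columns is just an update to that field. Names are illustrative only.
    from dataclasses import dataclass
    from typing import List

    STAGES = ["Plan", "Execute", "Complete", "Invoice"]

    @dataclass
    class Card:
        title: str
        stage: str = "Plan"

    def move_card(card: Card, new_stage: str) -> None:
        if new_stage not in STAGES:
            raise ValueError(f"Unknown stage: {new_stage}")
        card.stage = new_stage          # the "drag and drop", in data terms

    def column(cards: List[Card], stage: str) -> List[Card]:
        return [c for c in cards if c.stage == stage]

    cards = [Card("Site survey"), Card("Rack install", stage="Execute")]
    move_card(cards[0], "Execute")
    print([c.title for c in column(cards, "Execute")])   # both cards now in Execute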

Like the Quick Base no-code platform, the Kanban reports require no coding or programming experience. Datatrend’s typical Quick Base users are project managers and business analysts, Nutter said.

For most companies, however, the issue with no-code and low-code systems is how fast users can learn and then expand upon it, Constellation Research’s Wang said.

“A lot of low-code/no-code platforms allow you to get on and build an app but then if you want to take it further, you’ll see users wanting to move to something else,” Wang said.

OutSystems sees AI as the future

OutSystems plans to add advanced artificial intelligence features to its products to increase developer productivity, said Mike Hughes, director of product marketing at OutSystems in Boston.

“We think AI can help us by suggesting next steps and anticipating what developers will be doing next as they build applications,” Hughes said.

OutSystems uses AI in its own tool set, as well as links to publicly available AI services to help organizations build AI-based products. To facilitate this, the company launched Project Turing and opened an AI Center of Excellence in Lisbon, Portugal, named after Alan Turing, who is considered the father of AI.

The company also will commit 20% of its R&D budget to AI research and partner with industry leaders and universities for research in AI and machine learning.

Confluent Platform 5.0 aims to mainstream Kafka streaming

The Confluent Platform continues to expand on capabilities useful for Kafka-based data streaming, with additions that are part of a 5.0 release now available from Confluent Inc.

Created by former LinkedIn data engineers who helped build the Kafka messaging framework, Confluent Platform aims to make real-time big data analytics accessible to a wider community.

Part of that effort takes the form of KSQL, a Kafka-native SQL query engine and language that Confluent created in 2017 to bring easier SQL-style queries to analytics on Kafka streaming data.

Version 5.0 of the Confluent Platform, commercially released on July 31, seeks to improve disaster recovery with more adept handling of application client failover, to enhance IoT capabilities with MQTT proxy support, and to reduce the need to use Java for programming streaming analytics with a new GUI for writing KSQL code.

Data dips into mainstream

Confluent Platform 5.0’s support for disaster recovery and other improvements is useful, said Doug Henschen, a principal analyst at Constellation Research. But the bigger value in the release, he said, is in KSQL’s potential for “the mainstreaming of streaming analytics.”

Besides the new GUI, this Confluent release upgrades the KSQL engine with support for user-defined functions, which are essential parts of many existing SQL workloads. Also, the release supports handling nested data in popular Avro and JSON formats.
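
As a rough illustration of what nested-data support looks like in practice (the server address, topic and schema below are assumptions, not an example from Confluent), a KSQL statement that reads nested JSON fields can be submitted to the KSQL server’s REST endpoint from Python:

    # Rough sketch: define a stream over a JSON topic with a nested struct, then
    # derive a stream that pulls out the nested fields. Server URL, topic name and
    # schema are assumptions for illustration.
    import requests

    KSQL_SERVER = "http://localhost:8088"   # assumed KSQL server endpoint

    statements = """
    CREATE STREAM orders (
      orderid VARCHAR,
      customer STRUCT<name VARCHAR, address STRUCT<zipcode VARCHAR>>
    ) WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

    CREATE STREAM orders_flat AS
      SELECT orderid,
             customer->name AS customer_name,
             customer->address->zipcode AS zipcode
      FROM orders;
    """

    resp = requests.post(
        f"{KSQL_SERVER}/ksql",
        headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
        json={"ksql": statements, "streamsProperties": {}},
    )
    print(resp.status_code, resp.json())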

“With these moves Confluent is meeting developer expectations and delivering sought-after capabilities in the context of next-generation streaming applications,” Henschen said.

That’s important because web, cloud and IoT applications are creating data at a prodigious rate, and companies are looking to analyze that data as part of real-time operations. The programming skills required to do that level of development remain rare, but, as big data ecosystem software like Apache Spark and Kafka finds wider use, simpler libraries and interfaces are appearing to link data streaming and analytics more easily.

Kafka, take a log

At its base, Kafka is a log-oriented publish-and-subscribe messaging system built to handle the data generated by burgeoning web and cloud activity at social media giant LinkedIn.
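
To make the publish-and-subscribe model concrete, here is a minimal sketch using the confluent-kafka Python client; the broker address and topic name are placeholders.

    # Minimal publish/subscribe sketch with the confluent-kafka Python client.
    # Broker address and topic name are placeholders.
    from confluent_kafka import Consumer, Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})
    producer.produce("page-views", key="user-42", value='{"url": "/home"}')
    producer.flush()                      # block until the message is delivered

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "analytics-app",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["page-views"])
    msg = consumer.poll(5.0)              # wait up to 5 seconds for a message
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()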

The core software has been open sourced as Apache Kafka. Key originators of the Kafka messaging framework, including Jay Kreps and Neha Narkhede, left LinkedIn in 2014 to found Confluent, with the stated intent of building on core Kafka messaging for further enterprise purposes.

Joanna Schloss, Confluent’s director of product marketing, said Confluent Platform’s support for nested data in Avro and JSON will enable greater use of business intelligence (BI) tools in Kafka data streaming. In addition, KSQL now supports more complex joins, allowing KSQL applications to enhance data in more varied ways.

She said opening KSQL activity to view via a GUI makes KSQL a full citizen in modern development teams in which programmers, as well as DevOps and operations staff, all take part in data streaming efforts.

“Among developers, DevOps and operations personnel there are persons interested in seeing how Kafka clusters are performing,” she said. Now, with the KSQL GUI, “when something arrives they can use SQL [skills] to watch what happened.” They don’t need to find a Java developer to interrogate the system, she noted.

Making Kafka more accessible for applications

KSQL is among the streaming analytics capabilities of interest to Stephane Maarek, CEO at DataCumulus, a Paris-based firm focused on Java, Scala and Kafka training and consulting.

Maarek said KSQL has potential to encapsulate a lot of programming complexity, and, in turn, to lower the barrier to writing streaming applications. In this, Maarek said, Confluent is helping make Kafka more accessible “to a variety of use cases and data sources.”

Moreover, because the open source community that supports Kafka “is strong, the real-time applications are really easy to create and operate,” Maarek added.

Advances in the replication capabilities in Confluent Platform are “a leap forward for disaster recovery, which has to date been something of a pain point,” he said.

Maarek also said he welcomed recent updates to Confluent Control Center, because they give developers and administrators more insights into the activity of Kafka cluster components, particularly schema registry and application consumption lags — the difference between messaging reads and messaging writes. The updates also reduce the need for administrators to write commands, according to Maarek.

Data streaming field

The data streaming field remains young, and Confluent faces competition from established data analytics players like IBM, Teradata and SAS Institute, Hadoop distribution vendors like Cloudera, Hortonworks and MapR, and a variety of specialists such as MemSQL, SQLstream and Striim.

“There’s huge interest in streaming applications and near-real-time analytics, but it’s a green space,” Henschen said. “There are lots of ways to do it and lots of vendor camps — database, messaging-streaming platforms, next-gen data platforms and so on — all vying for a piece of the action.”

However, Kafka often is a common ingredient, Henschen noted. Such ubiquity helps put Confluent in a position “to extend the open source core with broader capabilities in a commercial offering,” he said.

Curious About Windows Server 2019? Here Are the Latest Features Added

Microsoft continues adding new features to Windows Server 2019 and cranking out new builds for Windows Server Insiders to test. Build 17709 has been announced, and I got my hands on a copy. I’ll show you a quick overview of the new features and then report my experiences.

If you’d like to get into the Insider program so that you can test out preview builds of Windows Server 2019 yourself, sign up on the Insiders page.

Ongoing Testing Requests

If you’re just now getting involved with the Windows Server Insider program or the previews for Windows Server 2019, Microsoft has asked all testers to try a couple of things with every new build:

  • In-place upgrade
  • Application compatibility

You can use virtual machines with checkpoints to easily test both of these. This time around, I used a physical machine, and my upgrade process went very badly. I have not been as diligent about testing applications, so I have nothing of importance to note on that front.

Build 17709 Feature 1: Improvements to Group Managed Service Accounts for Containers

I would bet that web applications are the primary use case for containers. Nothing else can match containers’ ability to strike a balance between providing version-specific dependencies while consuming minimal resources. However, containerizing a web application that depends on Active Directory authentication presents special challenges. Group Managed Service Accounts (gMSA) can solve those problems, but rarely without headaches. 17709 includes these improvements for gMSAs:

  • Using a single gMSA to secure multiple containers should produce fewer authentication errors
  • A gMSA no longer needs to have the same name as the system that hosts the container(s)
  • gMSAs should now work with Hyper-V isolated containers

I do not personally use enough containers to have meaningful experience with gMSA. I did not perform any testing on this enhancement.

Build 17709 Feature 2: A New Windows Server Container Image with Enhanced Capabilities

If you’ve been wanting to run something in a Windows Server container but none of the existing images meet your prerequisites, you might have struck gold in this release. Microsoft has created a new Windows Server container image with more components. I do not have a complete list of those components, but you can read what Lars Iwer has to say about it. He specifically mentions:

  • Proofing tools
  • Automated UI tests
  • DirectX

As I read that last item, I instantly wanted to know: “Does that mean GUI apps from within containers?” Well, according to the comments on the announcement, yes*. You just have to use “Session 0”. That means that if you RDP to the container host, you must use the /admin switch with MSTSC. Alternatively, you can use the physical console or an out-of-band console connection application.

Commentary on Windows Server 2019 Insider Preview Build 17709

So far, my experiences with the Windows Server 2019 preview releases have been fairly humdrum. They work as advertised, with the occasional minor glitch. This time, I spent more time than normal and hit several frustration points.

In-Place Upgrade to 17709

Ordinarily, I test preview upgrades in a virtual machine. Sure, I use checkpoints with the intent of reverting if something breaks. But, since I don’t do much in those virtual machines, they always work. So, I never encounter anything to report.

For 17709, I wanted to try out the container stuff, and I wanted to do it on hardware. So, I attempted an in-place upgrade of a physical host. It was disastrous.

Errors While Upgrading

First, I got a grammatically atrocious message that contained false information. I wish that I had saved it so I could share it with others who might encounter it, but I must have accidentally deleted my notes. The message started out with “Something happened” (it didn’t say what happened, of course), then asked me to look in an XML file for information. Two problems with that:

  1. I was using a Server Core installation. I realize that I am not authorized to speak on behalf of the world’s Windows administrators, but I bet no one will get mad at me for saying, “No one in the world wants to read XML files on Server Core.”
  2. The installer didn’t even create the file.

I still have not decided which of those two things irritates me the most. Why in the world would anyone actively decide to build the upgrade tool to behave that way?

Problems While Trying to Figure Out the Error

Well, I’m fairly industrious, so I tried to figure out what was wrong. The installer did not create the XML file that it talked about, but it did create a file called “setuperr.log”. I didn’t keep the entire contents of that file either, but it contained only one line error-wise that seemed to have any information at all: “CallPidGenX: PidGenX function failed on this product key”. Do you know what that means? I don’t know what that means. Do you know what to do about it? I don’t know what to do about it. Is that error even related to my problem? I don’t even know that much.

I didn’t find any other traces or logs with error messages anywhere.

How I Fixed My Upgrade Problem

I began by plugging the error messages into Internet searches. I found only one hit with anything resembling useful information. The suggestions were largely useless, but the guy managed to fix his own problem by removing the system from the domain. How in the world did he get from that error message to disjoining the domain? Guesswork, apparently. Well, I didn’t go quite that far.

My “fix”: remove the host from my Hyper-V cluster. The upgrade worked after that.

Why did I put the word “fix” in quotation marks? Because I can’t tell you that it actually fixed the problem. Maybe it was just a coincidence. The upgrade’s error handling and messaging were so horrifically useless that, without duplicating the whole thing, I cannot conclusively say that one action resulted in the other. “Correlation is not causation”, as the saying goes.

Feedback for In-Place Upgrades

At some point, I need to find a productive way to express this to Microsoft. But for now, I’m upset and frustrated at how that went. Sure, it only took you a few minutes to read what I had to say. It took much longer for me to retry, poke around, search, and prod at the thing until it worked, and I had no idea that it was ever going to work.

Sure, once the upgrade went through, everything was fine. I’m quite happy with the final product. But if I were even to start thinking about upgrading a production system, and I thought there was even a tiny chance that it would dump me out right at the start with some unintelligible gibberish and send me on a luck-of-the-draw scavenger hunt, then there is a zero percent chance that I would even attempt an upgrade. Microsoft says that they’re working to improve the in-place upgrade experience, but the evidence I saw led me to believe that they don’t take this seriously at all. XML files? XML files that don’t even get created? Error messages that would have set off 1980s-era grammar checkers, and that don’t even mean anything? This is the upgrade experience that Microsoft is eager to show off? No thanks.

Microsoft: the world wants legible, actionable error messages. The world does not want to go spelunking through log files for vague hints. That’s not just for an upgrade process either. It’s true for every product, every time.

The New Container Image

OK, let’s move on to some (more) positive things. Many of the things that you’ll see in this section have been blatantly stolen from Microsoft’s announcement.

Once my upgrade went through, I immediately started pulling down the new container image. I had a bit of difficulty with that, which Lars Iwer of Microsoft straightened out quickly. If you’re trying it out, you can get the latest image with the following:

Since Insider builds update frequently, you might want to ensure that you only get the build version that matches your host version (if you get a version mismatch, you’ll be forced to run the image under Hyper-V isolation). Lars Iwer provided the following script (stolen verbatim from the previously linked article, I did not write this or modify it):
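
As a rough illustration of the same idea (this is not the original script), a Python sketch might read the host build number from the registry and pull a matching tag; the image name and tag scheme here are assumptions:

    # Illustrative only -- not the original script. Reads the host build number
    # from the registry and pulls an Insider image tag that matches it.
    # The image name and tag scheme are assumptions.
    import subprocess
    import winreg

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
    )
    build, _ = winreg.QueryValueEx(key, "CurrentBuildNumber")

    image = f"mcr.microsoft.com/windowsinsider:10.0.{build}"   # hypothetical tag
    subprocess.run(["docker", "pull", image], check=True)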

Trying Out the New Container Image

I was able to easily start up a container and poke around a bit:

Testing out the new functionality was a bit tougher, though. It solves problems that I personally do not have. Searching the Internet for, “example apps that would run in a Windows Server container if Microsoft had included more components” didn’t find anything I could test with either (That was a joke; I didn’t really do that. As far as you know). So, I first wrote a little GUI .Net app in Visual Studio.

*Graphical Applications in the New Container Image

Session 0 does not seem to be able to show GUI apps from the new container image. If you skimmed up to this point and you’re about to tell me that GUI apps don’t show anything from Windows containers, this links back to the (*) text above. The comments section of the announcement article indicates that graphical apps in the new container will display on session 0 of the container host.

I don’t know if I did something wrong, but nothing that I did would show me a GUI from within the new container style. The app ran just fine — it shows up under Get-Process — but it never shows anything. It does exactly the same thing under microsoft/dotnet-framework in Hyper-V isolation mode, though. So, on that front, the only benefit that I could verify was that I did not need to run my .Net app in Hyper-V isolation mode or use a lot of complicated FROM nesting in my dockerfile. Still no GUI, though, and that was part of my goal.

DirectX Applications in the New Container Image

After failing to get my graphical .Net app to display, I next considered DirectX. I personally do not know how to write even a minimal DirectX app. But, I didn’t need to. Microsoft includes the very first DirectX-dependent app that I was ever able to successfully run: dxdiag.

Sadly, dxdiag would not display on session 0 from my container, either. Just as with my .Net app, it appeared in the local process list and docker top. But, no GUI that I could see.

However, dxdiag did run successfully, and would generate an output file:

Notes for anyone trying to duplicate the above:

  • I started this particular container with 
    docker run -it mcr.microsoft.com/windowsinsider
  • DXDiag does not instantly create the output file. You have to wait a bit.

Thoughts on the New Container Image

I do wish that I had more experience with containers and the sorts of problems this new image addresses. Without that, I can’t say much more than, “Cool!” Sure, I didn’t personally get the graphical part to work, but a DirectX app from within a container? That’s a big deal.

Overall Thoughts on Windows Server 2019 Preview Build 17709

Outside of the new features, I noticed that they have corrected a few glitchy things from previous builds. I can change settings on network cards in the GUI now and I can type into the Start menu to get Cortana to search for things. You can definitely see changes in the polish and shine as we approach release.

As for the upgrade process, that needs lots of work. If a blocking condition exists, it needs to be caught in the pre-flight checks and show a clear error message. Failing partway into the process with random pseudo-English will extend distrust of upgrading Microsoft operating systems for another decade. Most established shops already have an “install-new-on-new-hardware-and-migrate” process. I certainly follow one. My experience with 17709 tells me that I need to stick with it.

I am excited to see the work being done on containers. I do not personally have any problems that this new image solves, but you can clearly see that customer feedback led directly to its creation. Whether I personally benefit or not, this is a good thing to see.

Overall, I am pleased with the progress and direction of Windows Server 2019. What about you? How do you feel about the latest features? Let me know in the comments below!

At OpenText Enterprise World, security and AI take center stage

OpenText continues to invest in AI and security, as the content services giant showcased where features from recent acquisitions fit into its existing product line at its OpenText Enterprise World user conference.

The latest Pipeline podcast recaps the news and developments from Toronto, including OpenText OT2, the company’s new hybrid cloud/on-premises enterprise information management platform. The new platform brings wanted flexibility while also addressing regulatory concerns with document storage.

“OT2 simplifies for our customers how they invest and make decisions in taking some of their on-premises workflows and [porting] them into a hybrid model or SaaS model into the cloud,” said Muhi Majzoub, OpenText executive vice president of engineering and IT.

Majzoub spoke at OpenText Enterprise World 2018, which also included further updates on how OpenText plans to integrate Guidance Software’s features into its endpoint security offerings following the September 2017 acquisition of Guidance.

OpenText has a rich history of acquiring companies and using the inherited customer base as an additional revenue or maintenance stream, as content management workflows are often built up over decades on complex legacy systems.

But it was clear at OpenText Enterprise World 2018 that the Guidance Software acquisition filled a security gap in OpenText’s offering. One of Guidance’s premier products, EnCase, seems to have useful applications for OpenText users, according to Lalith Subramanian, vice president of engineering for analytics, security and discovery at OpenText.

In addition, OpenText is expanding its reach to Amazon AWS, Microsoft Azure and Google Cloud, but it’s unclear if customers will prefer OpenText offerings to others on the market or if current customers will migrate to public clouds.

“It comes down to: Will customers want to use a general AI platform like Azure, Google, IBM or AWS?” said Alan Lepofsky, principal analyst for Constellation Research. “Will the native AI functionality from OpenText compare and keep up? What will be the draw for new customers?”

Chief data officer role: Searching for consensus

Big data continues to be a force for change. It plays a part in the ongoing drama of corporate innovation — in some measure, giving birth to the chief data officer role. But consensus on that role is far from set.

The 2018 Big Data Executive Survey of decision-makers at more than 50 blue-chip firms found 63.4% of respondents had a chief data officer (CDO). That is a big uptick since survey participants were asked the same question in 2012, when only 12% had a CDO. But this year’s survey, which was undertaken by business management consulting firm NewVantage Partners, disclosed that the background for a successful CDO varies from organization to organization, according to Randy Bean, CEO and founder of NewVantage, based in Boston.

For many, the CDO is likely to be an external change agent. For almost as many, the CDO may be a long-trusted company hand. The best CDO background could be that of a data scientist, line executive or, for that matter, a technology executive, according to Bean.

In a Q&A, Bean delved into the chief data role as he was preparing to lead a session on the topic at the annual MIT Chief Data Officer and Information Quality Symposium in Cambridge, Mass. A takeaway: Whatever it may be called, the chief data officer role is central to many attempts to gain business advantage from key emerging technologies. 

Do we have a consensus on the chief data officer role? What have been the drivers?

Randy Bean: One principal driver in the emergence of the chief data officer role has been the growth of data.

For about a decade now, we have been into what has been characterized as the era of big data. Data continues to proliferate. But enterprises typically haven’t been organized around managing data as a business asset.

Additionally, there has been a greater threat posed to traditional incumbent organizations from agile data-driven competitors — the Amazons, the Googles, the Facebooks.

Organizations need to come to terms with how they think about data and, from an organization perspective, to try to come up with an organizational structure and decide who would be a point person for data-related initiatives. That could be the chief data officer.

Another driver for the chief data officer role, you’ve noted, was the financial crisis of 2008.

Bean: Yes, the failures of the financial markets in 2008-2009, to a significant degree, were a data issue. Organizations couldn’t trace the lineage of the various financial products and services they offered. Out of that came an acute level of regulatory pressure to understand data in the context of systemic risk.

Banks were under pressure to identify a single person to regulators to address questions about data’s lineage and quality. As a result, banks took the lead in naming chief data officers. Now, we are into a third or fourth generation in some of these large banks in terms of how they view the mandate of that role.

Isn’t that type of regulatory driver somewhat spurred by the General Data Protection Regulation (GDPR), which recently went into effect? Also, for factors defining the CDO role, NewVantage Partners’ survey highlights concerns organizations have about being surpassed by younger, data-driven upstarts. What is going on there?

Bean: GDPR is just the latest of many previous manifestations of this. There have been the Dodd-Frank regulations, the various Basel reporting requirements and all the additional regulatory requirements that go along with classifying banks as ‘too large to fail.’

That is a defensive driver, as opposed to the offensive and innovation drivers that are behind the chief data officer role. On the offensive side, the chief data officer is about how your organization can be more data-driven, how you can change its culture and innovate. Still, as our recent survey finds, there is a defensive aspect, even there. Increasingly, organizations perceive a threat coming from all kinds of agile, data-driven competitors.

You have written that big data and AI are on a continuum. That may be worthwhile to emphasize, as so much attention turns to artificial intelligence these days.

Bean: A key point is that big data has really empowered artificial intelligence.

AI has been around for decades. One of the reasons it didn’t gain traction sooner is that, as a learning mechanism, it requires large volumes of data. In the past, data was only available in subsets or samples or in very limited quantities, and the corresponding learning on the part of the AI was slow and constrained.

Now, with the massive proliferation of data and new sources — in addition to transactional information, you also now have sensor data, locational data, pictures, images and so on — that has led to the breakthrough in AI in recent years. Big data provides the data that is needed to train the AI learning algorithms.

So, it is pretty safe to say there is no meaningful artificial intelligence without good data — without an ample supply of big data.

And it seems to some of us, on this continuum, you still need human judgment.

Bean: I am a huge believer in the human element. Data can help provide a foundation for informed decision-making, but ultimately it’s the combination of human experience, human judgment and the data. If you don’t have good data, that can hamper your ability to come to the right conclusion. Just having the data doesn’t lead you to the answer.

One thing I’d say is, just because there are massive amounts of data, it hasn’t made individuals or companies any wiser in and of itself. It’s just one element that can be useful in decision-making, but you definitely need human judgment in that equation, as well.

For Sale – Z77 Sabretooth + 16GB DDR3 Memory

Hey Guys,

The clear out continues. I acquired these in a purchase about 3 months ago. Originally I had the dream of moving back to ATX and making a nice-looking rig, but I am losing my man cave. Collection from NW London and/or WC1X is an option. Also willing to meet in Zone 1 somewhere or on my way back on the Met Line.

First up we have the highly coveted Asus Z77 Sabretooth board. This comes boxed (box is a little tatty) and includes the I/O shield and all sorts of accessories, including PCI-E socket blankers, additional unused fans, etc. The board is a little dusty, but nothing that a can of compressed air won’t fix.

Since purchase, the board has been in a rig at work that I would just mess about with, running various Linux distros. No issues whatsoever. I’ve just stripped down the machine and placed the CPU protector back in.

Now for the price: these sell for silly money on eBay. I don’t want the hassle of eBay, and I would rather a member of the community grabbed a good deal. This board is dying to go into a nice rig where it can be shown off!

Price: 110 GBP (Excluding Delivery)

Next up we have:

Corsair 16GB DDR3 (4 x 4GB set) Vengeance LP Black Kit 1600MHz 9-9-9-24 Timings

Price: 70 GBP (Excluding Delivery)

Price and currency: 110, 70
Delivery: Delivery cost is not included
Payment method: BT, PP (Buyer Pays Fees)
Location: WEMBLEY
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected
