What’s new with the Exchange hybrid configuration wizard?

Exchange continues to serve as the on-ramp into Office 365 for many organizations. One big reason is the hybrid capabilities that connect on-premises Exchange and Exchange Online.

If you use Exchange Server, it’s not difficult to join it to Exchange Online for a seamless transition into the cloud. Microsoft refined the Exchange hybrid configuration wizard to remove a lot of the technical hurdles to shift one of the more important IT workloads into Exchange Online. If you haven’t seen the Exchange hybrid experience recently, you may be surprised about some of the improvements over the last few years.

Exchange hybrid setups have come a long way

I started configuring Exchange hybrid deployments the first week Microsoft made Office 365 publicly available in June 2011 with the newest version of Exchange at the time, Exchange 2010. Setting up an Exchange hybrid deployment was a laborious task. Microsoft provided a 75-page document with the Exchange hybrid configuration steps, which would take about three workdays to complete. Then I could start the troubleshooting process to fix the innumerable typos I made during the setup.

In December 2011, Microsoft released Exchange 2010 Service Pack 2, which included the Exchange hybrid configuration wizard. The wizard reduced that 75-page document to a few screens of information that cut down the work from three days to about 15 minutes. The Exchange hybrid configuration wizard did not solve all the problems of an Exchange hybrid deployment, but it made things a lot easier.

What the Exchange hybrid configuration wizard does

The Exchange hybrid configuration wizard is just a PowerShell script that runs all the necessary configuration tasks. The original hybrid configuration wizard completed seven key tasks:

  1. verified prerequisites for a hybrid deployment;
  2. configured Exchange federation trust;
  3. configured relationships between on-premises Exchange and Exchange Online;
  4. configured email address policies;
  5. configured free/busy calendar sharing;
  6. configured secure mail flow between the on-premises and Exchange Online organizations; and
  7. enabled support for Exchange Online archiving.
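
Under the hood, the wizard stores its results as a hybrid configuration object in Active Directory, so you can check what it set up afterward from the Exchange Management Shell. A minimal sketch, assuming a reasonably current Exchange version (property names can vary slightly between releases):

    # Illustrative check of the object the wizard creates; run in the on-premises Exchange Management Shell.
    Get-HybridConfiguration | Format-List Domains, Features, TlsCertificateName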

How the Exchange hybrid configuration wizard evolved

Since the initial release of the Exchange hybrid configuration wizard, Microsoft expanded its capabilities in multiple ways with several major improvements over the last few years.

Exchange hybrid configuration wizard decoupled from service pack updates: This may seem like a minor change, but it’s a significant development. Having the Exchange hybrid configuration wizard as part of the standard Exchange update cycle meant that any updates to the wizard had to wait until the next service pack update.

Now the Exchange hybrid configuration wizard is an independent component from Exchange Server. When you run the wizard, it checks for a new release and updates itself to the most current configuration. This means you get fixes or additional features without waiting through that quarterly update cycle.

Minimal hybrid configuration: Not every migration has the same requirements. Sometimes a quicker migration with fewer moving parts is needed, and Microsoft offered an update in 2016 for a minimal hybrid configuration feature for those scenarios.

The minimal hybrid configuration helps organizations that cannot use the staged migration option, but want an easy switchover without worrying about configuring extras, such as free/busy calendar availability federation.

The minimal hybrid configuration leaves out the following functionality from a full hybrid configuration:

  • cross-premises free/busy calendar availability;
  • Transport Layer Security secured mail flow between on-premises Exchange and Exchange Online;
  • cross-premises eDiscovery;
  • automatic Outlook on the web (OWA) and ActiveSync redirection for migrated users; and
  • automatic retention for archived mailboxes.

If these features aren’t important to your organization and speed is of the essence, the minimal hybrid configuration is a good option.

Recent update goes further with setup work

Microsoft designed the Exchange hybrid configuration wizard to migrate mailboxes without interrupting the end user’s ability to work. The wizard gives users a full global address book, free/busy calendar availability and some of the mailbox delegation features used with an on-premises Exchange deployment.

A major new addition to the hybrid configuration wizard is its ability to transfer some of the on-premises Exchange configurations to the Exchange Online tenant. The Hybrid Organization Configuration Transfer feature pulls configuration settings from your Exchange organization and does a one-time setup of the same settings in your Exchange Online tenant.

Microsoft expanded the abilities of Hybrid Organization Configuration Transfer in November 2018 so it configures the following settings: ActiveSync Mailbox Policy, Mobile Device Mailbox Policy, OWA Mailbox Policy, Retention Policy, Retention Policy Tag, ActiveSync Device Access Rule, ActiveSync Organization Settings, Address List, DLP Policy, Malware Filter Policy, Organization Config and Policy Tip Configuration.

The Exchange hybrid configuration wizard only handles these settings once. If you make changes in your on-premises Exchange organization after you run the Exchange hybrid configuration wizard, those changes will not be replicated in the cloud automatically.
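
If you want to catch that kind of drift, a periodic manual comparison is one option. A rough sketch, assuming an on-premises Exchange Management Shell plus the ExchangeOnlineManagement module, using a cmdlet prefix to keep the two sets of cmdlets apart; the retention policy check here is just an example, so substitute whichever settings matter to you:

    # Hypothetical drift check: compare on-premises retention policies with the tenant's copies.
    $onPrem = Get-RetentionPolicy | Select-Object -ExpandProperty Name
    Connect-ExchangeOnline -Prefix Cloud        # Exchange Online cmdlets load as Get-CloudRetentionPolicy, etc.
    $cloud = Get-CloudRetentionPolicy | Select-Object -ExpandProperty Name
    Compare-Object $onPrem $cloud               # anything listed here changed after the wizard ran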

Ecstasy programming language targets cloud-native computing

While recent events have focused on Java and how it will fare as computing continues to evolve to support modern platforms and technologies, a new language is targeted directly at the cloud-native computing space — something Java continues to adjust to.

This new language, known as the Ecstasy programming language, aims to address programming complexity and to enhance security and manageability in software, which are key challenges for cloud app developers.

Oracle just completed its Oracle Open World and Oracle Code One conferences, where Java was dominant. Indeed, Oracle Code One was formerly known as JavaOne until last year, when Oracle changed its name to be more inclusive of other languages.

Ironically, Cameron Purdy, a former senior vice president of development at Oracle and now CEO of Xqiz.it (pronounced “exquisite”), based in Lexington, Mass., is the co-creator of the Ecstasy language. Purdy joined Oracle in 2007, when the database giant acquired his previous startup, Tangosol, to attain its Coherence in-memory data grid technology, which remains a part of Oracle’s product line today.

Designed for containerization and the cloud-native computing era

Purdy designed Ecstasy for what he calls true containerization. It will run on a server, in a VM or in an OS container, but that is not the kind of container that Ecstasy containerization refers to. Ecstasy containers are a feature of the language itself, and they are secure, recursive, dynamic and manageable runtime containers, he said.

For security, all Ecstasy code runs inside an Ecstasy container, and Ecstasy code cannot even see the container it’s running inside of — let alone anything outside that container, like the OS, or even another container. Regarding recursivity, Ecstasy code can create nested containers inside the current container, and the code running inside those containers can create their own containers, and so on. For dynamism, containers can be created and destroyed dynamically, but they also can grow and shrink within a common, shared pool of CPU and memory resources. For manageability, any resources — including CPU, memory, storage and any I/O — consumed by an Ecstasy container can be measured and managed in real time. And all the resources within a container — including network and storage — can be virtualized, with the possibility of each container being virtualized in a completely different manner.

Overall, the goal of Ecstasy is to solve a set of problems that are intrinsic to the cloud:

  • the ability to modularize application code, so that some portions could be run all the way out on the client, or all the way back in the heart of a server cluster, or anywhere in-between — including on shared edge and CDN servers;
  • to make code that is portable and reusable across all those locations and devices;
  • to be able to securely reuse code by supporting the secure containerization of arbitrary modules of code;
  • to enable developers to manage and virtualize the resources used by this code to enhance security, manageability, real-time monitoring and cloud portability; and
  • to provide an architecture that would scale with the cloud but could also scale with the many core devices and specialized processing units that lie at the heart of new innovation — like machine learning.

General-purpose programming language

Ecstasy, like C, C++, Java, C# and Python, is a general-purpose programming language — but its most compelling feature is not what it contains, but rather what it purposefully omits, Purdy said.

For instance, all the aforementioned general-purpose languages adopted the underlying hardware architecture and OS capabilities as a foundation upon which they built their own capabilities, but additionally, these languages all exposed the complexity of the underlying hardware and OS details to the developer. This not only added to complexity, but also provided a source of vulnerability and deployment inflexibility.

As a general-purpose programming language, Ecstasy will be useful for most application developers, Purdy said. However, Xqiz.it is still in “stealth” mode as a company and in the R&D phase with the language. Its design targets all the major client device hardware and OSes, all the major cloud vendors, and all of the server back ends.

“We designed the language to be easy to pick up for anyone who is familiar with the C family of languages, which includes Java, C# and C++,” he said. “Python and JavaScript developers are likely to recognize quite a few language idioms as well.”

Ecstasy is heavily influenced by Java, so Java programmers should be able to read lots of Ecstasy code without getting confused, said Mark Falco, a senior principal software development engineer at Workday who has had early access to the software.

“To be clear, Ecstasy is not a superset of Java, but [it] definitely [has] a large syntactic intersection,” Falco said. “Ecstasy adds lots and lots onto Java to improve both developer productivity, as well as program correctness.” The language’s similarity to Java also should help with developer adoption, he noted.

However, Patrick Linskey, a principal engineer at Cisco and another early Ecstasy user, said, “From what I’ve seen, there’s a lot of Erlang/OTP in there under the covers, but with a much more accessible syntax.” Erlang/OTP is a development environment for concurrent programming.

Falco added, “Concurrent programming in Ecstasy doesn’t require any notion of synchronization, locking or atomics; you always work on your local copy of a piece of data, and this makes it much harder to screw things up.”

Compactness, security and isolation

Moreover, a few key reasons for creating a new programming language for serverless, cloud and connected-device apps are compactness, security and isolation, he added.

“Ecstasy starts off with complete isolation at its core; an Ecstasy app literally has no conduit to the outside world, not to the network, not to the disk, not to anything at all,” Falco said. “To gain access to any aspect of the outside world, an Ecstasy app must be injected with services that provide access to only a specific resource.”
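
A sketch of what that injection model looks like in code, approximating Ecstasy's published hello-world style; this is an illustration, not code from Xqiz.it. The module asks for a Console, and it is entirely up to the enclosing container whether, and with what, that request is satisfied:

    module HelloContainer {
        void run() {
            @Inject Console console;    // the container decides what implementation, if any, gets injected
            console.println("hello from inside an Ecstasy container");
        }
    }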

“The Ecstasy runtime really pushes developers toward safe patterns, without being painful,” Linskey said. “If you tried to bolt an existing language onto such a runtime, you’d end up with lots of tough static analysis checks, runtime assertions” and other performance penalties.

Indeed, one of the more powerful components of Ecstasy is the hard separation of application logic and deployment, noted Rob Lee, another early Ecstasy user who is vice president and chief architect at Pure Storage in Mountain View, Calif. “This allows developers to focus on building the logic of their application — what it should do and how it should do it, rather than managing the combinatorics of details and consequences of where it is running,” he noted.

What about adoption?

However, adoption will be the “billion-dollar” issue for the Ecstasy programming language, Lee said, noting that he likes the language’s chances based on what he’s seen. Yet, building adoption for a new runtime and language requires a lot of careful and intentional community-building.

Cisco is an easy potential candidate for Ecstasy usage, Linskey said. “We build a lot of middlebox-style services in which we pull together data from a few databases and a few internal and external services and serve that up to our clients,” he said. “An asynchronous-first runtime with the isolation and security properties of Ecstasy would be a great fit for us.”

Meanwhile, Java aficionados expect that Java will continue to evolve to meet cloud-native computing needs and future challenges. At Oracle Code One, Stewart Bryson, CEO of Red Pill Analytics in Atlanta, said he believes Java has another 10 to 20 years of viability, but there is room for another language that will better enable developers for the cloud. However, that language could be one that runs on the Java Virtual Machine, such as Kotlin, Scala, Clojure and others, he said.

The use of technology in education has pros and cons

The use of technology in education continues to grow, as students turn to AI-powered applications, virtual reality and internet searches to enhance their learning.

Technology vendors, including Google, Lenovo and Microsoft, have increasingly developed technology to help pupils in classrooms and at home. That technology has proved popular with students in elementary education and higher education, and has been shown to benefit independent learning efforts, even as critics have expressed worry that it can lead to decreased social interactions.

Lenovo, in a recent survey of 15,000 technology users across 10 countries, reported that 75% of U.S. parents who responded said their children are more likely to look something up online than ask for help with schoolwork. In China, that number was 85%, and in India, it was 89%.

Taking away stress

According to vendors, technology can augment the schoolwork help that busy parents give their children.

“Parenting in general is becoming a challenge for a lot of the modern families as both parents are working and some parents may feel overwhelmed,” said Rich Henderson, director of global education solutions at Lenovo, a China-based multinational technology vendor.

If children can learn independently, that can take pressure and stress off of parents, Henderson continued.

Independent learning can include searching for information on the web, querying using a virtual assistant, or using specific applications.

About 45% of millennials and younger students find technology “makes it much easier to learn about new things,” Henderson said.

Many parents, however, said in the survey that they felt the use of technology in education, while beneficial to their children’s learning, also led to decreases in social interactions. When children use technology to look up answers instead of consulting parents, teachers or friends, parents worried that “their children may be becoming too dependent on technology and may not be learning the necessary social skills they require,” according to the survey.

At the same time, however, many parents felt that the use of technology in education would eventually help future generations become more independent learners.

“Technology has certainly helped [children learn] with the use of high-speed internet, more automated translation tools. But we can’t ignore the fact that we need students to improve their social skills, also,” Henderson said. “That’s clearly a concern the parents have.”

Yet, despite the worries, technology vendors have poured more and more money into the education space. Lenovo itself sells a number of hardware and software products for the classroom, including infrastructure to help teachers manage devices in a classroom, and a virtual reality (VR) headset and software to build a VR classroom.

The VR classroom has benefited students taking online classes, giving them a virtual classroom or lab to learn in.

Google in education

Meanwhile, Google, in an Aug. 15 blog post, promoted the mobile learning application Socratic, which it had quietly acquired last year. The AI-driven application, released for iOS, can automatically solve mathematical and scientific equations when users take photos of them. The application can also search for answers to questions posed in natural language.

The use of technology in education provides benefits and challenges for students.

Also, Socratic features reference guides to topics frequently taught in schools, including algebra, biology and literature.

Microsoft, whose Office suite is used in many schools around the world, sells a range of educational and collaborative note-taking tools within its OneNote product. The tool, which includes AI-driven search functions, enables students to type in math equations, which it will automatically solve.

While apparently helpful, the increased use of technology in education, as well as the prevalence of AI-powered software for students, has sparked some criticism.

The larger implications

Mike Capps, CEO of AI startup Diveplane, which sells auditable, trainable, “transparent” AI systems, noted that the expanding use of AI and automation could make basic skills obsolete.

Many basic skills, including typing and driving, could eventually end up like Latin — learnable, potentially useful, but unnecessary.

AI systems could increasingly help make important life decisions for people, Capps said.

“More and more decisions about kids’ lives are made by computers, like college enrollment decisions and what car they should buy,” Capps said.

Azure preparedness for Hurricane Florence

As Hurricane Florence continues its journey to the mainland, our thoughts are with those in its path. Please stay safe. We’re actively monitoring Azure infrastructure in the region. We at Microsoft have taken all precautions to protect our customers and our people.

Our datacenters (US East, US East 2, and US Gov Virginia) have been reviewed internally and externally to ensure that we are prepared for this weather event. Our onsite teams are prepared to switch to generators if utility power is unavailable or unreliable. All our emergency operating procedures have been reviewed by our team members across the datacenters, and we are ensuring that our personnel have all necessary supplies throughout the event.

As a best practice, all customers should consider their disaster recovery plans and all mission-critical applications should be taking advantage of geo-replication.
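
For Azure Storage, for example, geo-redundancy is a per-account setting. A minimal sketch using the Az PowerShell module, with placeholder names; Standard_GRS keeps an asynchronous copy of the account's data in the paired region:

    # Illustrative only: create a storage account with geo-redundant storage.
    New-AzStorageAccount -ResourceGroupName "MyResourceGroup" -Name "mygrsaccount" `
        -Location "eastus" -SkuName Standard_GRS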

Rest assured that Microsoft is focused on the readiness and safety of our teams, as well as our customers’ business interests that rely on our datacenters. 

You can reach our handle @AzureSupport on Twitter; we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in the Azure portal.

If there is any change to the situation, we will keep customers informed of Microsoft’s actions through this announcement.

For guidance on disaster recovery best practices, see the references below:

Jake Braun discusses the Voting Village at DEF CON

Election security continues to be a hot topic, as the 2018 midterm elections draw closer. So, the Voting Village at DEF CON 26 in Las Vegas wanted to re-create and test every aspect of an election.

Jake Braun, CEO of Cambridge Global Advisors, based in Arlington, Va., and one of the main organizers of the DEF CON Voting Village, discussed the pushback the event has received and how he hopes the event can expand in the future.

What were the major differences between what the Voting Village had this year compared to last year?

Jake Braun: The main difference is it’s way bigger. And we’ve got, end to end, the voting infrastructure. We’ve got voter registration, a list of voters in the state of Ohio that are in a cyber range that’s basically like a county clerk’s network. Cook County, Illinois, their head guy advised us on how to make it realistic [and] make it like his network. We had that, but we didn’t have the list of voters last year.

That’s the back end of the voter process with the voter infrastructure process. And then we’ve got machines. We’ve got some new machines and accessories and all this stuff.

Then, on the other end, we’ve got the websites. This is the last piece of the election infrastructure that announces the results. And so, obviously, we’ve got the kids hacking the mock websites.

What prompted you to make hacking the mock websites an event for the kids in R00tz Asylum?

Braun: It was funny. I was at [RSA Conference], and we’ve been talking for a long time about, how do we represent this vulnerability in a way that’s not a waste of time? Because the guys down in the [Voting Village], hacking websites is not interesting to them. They’ve been doing it for 20 years, or they’ve known how to do it for 20 years. But this is the most vulnerable part of the infrastructure, because it’s [just] a website. You can cause real havoc.

I mean, the Russians — when they hacked the Ukrainian website and changed it to show their candidate won, and the Ukrainians took it down, fortunately, they took it down before anything happened. But then, Russian TV started announcing their candidate won. Can you imagine if, in November 2020, the Florida and Ohio websites are down, and Wolf Blitzer is sitting there on CNN saying, ‘Well, you know, we don’t really know who won, because the Florida and Ohio websites are down,’ and then RT — Russian Television — starts announcing that their preferred candidate won? It would be chaos.

Anyway, I was talking through this with some people at [RSA Conference], and I was talking about how it would be so uninteresting to do it in the real village or in the main village. And the guy [I was talking to said], ‘Oh, right. Yeah. It’s like child’s play for them.’

I was like, ‘Exactly, it’s child’s play. Great idea. We’ll give it to R00tz.’ And so, I called up Nico [Sell], and she was like, ‘I love it. I’m in.’ And then, the guys who built it were the Capture the Packet guys, who are some of the best security people on the planet. I mean, Brian Markus does security for … Aerojet Rocketdyne, one of the top rocket manufacturers in the world. He sells to [Department of Defense], [Department of Homeland Security] and the Australian government. So, I mean, he is more competent than any election official we have.

The first person to get in was an 11-year-old girl, and she got in in 10 minutes. Totally took over the website, changed the results and everything else.

How did it go with the Ohio voter registration database?

Braun: The Secretaries of State Association criticized us, [saying], ‘Oh, you’re making it too easy. It’s not realistic,’ which is ridiculous. In fact, we’re protecting the voter registration database with this Israeli military technology, and no one has been able to get in yet. So, it’s actually probably the best protected list of voters in the country right now.

Have you been able to update the other machines being used in the Voting Village?

Braun: Well, a lot of it is old, but it’s still in use. The only thing that’s not in use is the WinVote, but everything else that we have in there is in use today. Unlike other stuff, they don’t get automatic updates on their software. So, that’s the same stuff that people are voting on today.

Have the vendors been helpful at all in providing more updated software or anything?

Braun: No. And, of course, the biggest one sent out a letter in advance to DEF CON again this year saying, ‘It’s not realistic and it’s unfair, because they have full access to the machines.’

Do people think these machines are kept in Fort Knox? I mean, they are in a warehouse or, in some places, in small counties, they are in a closet somewhere — literally. And, by the way, Rob Joyce, the cyber czar for the Trump administration who’s now back at NSA [National Security Agency], in his talk [this year at DEF CON, he basically said], if you don’t think that our adversaries are doing exactly this all year so that they know how to get into these machines, your head is insane.

The thing is that we actually are playing by the rules. We don’t steal machines. We only get them if people donate them to us, or if we can buy them legally somehow. The Russians don’t play by the rules. They’ll just go get them however they want. They’ll steal them or bribe people or whatever.

They could also just as easily do what you do and just get them secondhand.

Braun: Right. They’re probably doing that, too.

Is there any way to test these machines in a way that would be acceptable to the manufacturers and U.S. government?

Braun: The unfortunate thing is that, to our knowledge, the Voting Village is still the only public third-party inspection — or whatever you want to call it — of voting infrastructure.

The vendors and others will get pen testing done periodically for themselves, but that’s not public. All these things are done, and they’re under [nondisclosure agreement]. Their customers don’t know what vulnerabilities they found and so on and so forth.

So, the unfortunate thing is that the only time this is done publicly by a third party is when it’s done by us. And that’s once a year for two and a half days. This should be going on all year with all the equipment, the most updated stuff and everything else. And, of course, it’s not.

Have you been in contact with the National Institute of Standards and Technology, as they are in the process of writing new voting machine guidelines?

Braun: Yes. This is why DEF CON is so great, because everybody is here. I was just talking to them yesterday, and they were like, ‘Hey, can you get us the report as soon as humanly possible? Because we want to take it into consideration as we are putting together our guidelines.’ And they said they used our report last year, as well.

How have the election machines fared against the Voting Village hackers this year?

Braun: Right, of course, they were able to get into everything. Of course, they’re finding all these new vulnerabilities and all this stuff. 

The greatest thing that I think came out of last year was that the state of Virginia wound up decommissioning the machine that [the hackers] got into in two minutes remotely. They decommissioned that and got rid of the machine altogether. And it was the only state that still had it. And so, after DEF CON, they had this emergency thing to get rid of it before the elections in 2017.

What’s the plan for the Voting Village moving forward?

Braun: We’ll do the report like we did last year. Out of all the guidelines that have come out since 2016 on how to secure election infrastructure, none of them talk about how to better secure your reporting websites or, since they are kind of impossible to secure, what operating procedures you should have in place in case they get hacked.

So, we’re going to include that in the report this year. And that will be a big addition to the overall guidelines that have come out since 2016.

And then, next year, I think, it’s really just all about, what else can we get our hands on? Because that will be the last time that any of our findings will be able to be implemented before 2020, which is, I think, when the big threat is.

A DEF CON spokesperson said that most of the local officials that responded and are attending have been from Democratic majority counties. Why do you think that is?

Braun: That’s true, although [Neal Kelley, chief of elections and registrar of voters for] Orange County, attended. Orange County is pretty Republican, and he is a Republican.

But I think it winds up being this functionally odd thing where urban areas are generally Democratic, but because they are big, they have a bigger tax base. So then, the people who run them have more money to do security and hire security people. So, they kind of necessarily know more about this stuff.

Whereas if you’re in Allamakee County, Iowa, with 10,000 people, the county auditor who runs the elections there, that guy or gal — I don’t know who it is — but they are both the IT and the election official and the security person and the whatever. You’re just not going to get the specialized stuff, you know what I mean?

Do you have any plans to try to boost attendance from smaller counties that might not be able to afford sending somebody here or plans on how to get information to them?

Braun: Well, that’s why we do the report. This year, we did a mailing of 6,600 pieces of mail to all 6,600 election officials in the country and two emails and 3,500 live phone calls. So, we’re going to keep doing that.
 
And that’s the other thing: We just got so much more engagement from local officials. We had a handful come last year. We had several dozen come this year. None of them were public last year. This year, we had a panel of them speaking, including DHS [Department of Homeland Security].

So, that’s a big difference. Despite the stupid letter that the Secretary of State Association sent out, a lot of these state and local folks are embracing this.

And it’s not like we think we have all the answers. But you would think if you were in their position and with how cash-strapped they are and everything, that they would say, ‘Well, these guys might have some answers. And if somebody’s got some answers, I would love to go find out about those answers.’

Microsoft Ignite 2018 conference coverage

Introduction

Microsoft continues to gain market momentum fueled in part by an internal culture shift and the growing popularity of the Azure cloud platform that powers the company’s popular Office 365 product.

When CEO Satya Nadella took the helm in 2014, he made a concerted effort to turn the company away from its proprietary background to win over developers and enterprises with cloud and DevOps ambitions.

To reinforce this new agenda, Microsoft acquired GitHub, the popular software development platform, for $7.5 billion in June and expanded its developer-friendly offerings in Azure — from Kubernetes management to a Linux-based distribution for use with IoT devices. But many in IT have long memories and don’t easily forget the company’s blunders, which can wipe away any measure of good faith at a moment’s notice.

PowerShell, the popular automation tool, continues to experience growing pains after Microsoft converted it to an open source project that runs on Linux and macOS systems. As Linux workloads on Azure continue to climb — around 40% of Azure’s VMs run on Linux according to some reports — and Microsoft releases Linux versions of on-premises software, PowerShell Core is one way Microsoft is addressing the needs of companies with mixed OS environments.
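
A PowerShell Core script can check where it is running and branch accordingly, which is part of what makes it workable in mixed environments. A small example, using the automatic variables available in version 6.x and later:

    # Run from a pwsh session; these variables do not exist in Windows PowerShell 5.1.
    $PSVersionTable.PSEdition                   # reports "Core" on the cross-platform builds
    if ($IsLinux) { "Linux host" }
    elseif ($IsMacOS) { "macOS host" }
    else { "Windows host" }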

While this past year solidified Microsoft’s place in the cloud and open source arenas, Nadella wants the company to remain on the cutting edge and incorporate AI into every aspect of the business. The steady draw of income from its Azure product and Office 365 — more than 135 million users — as well as its digital transformation agenda, has proven successful so far. So what’s in store for 2019?

This Microsoft Ignite 2018 guide gives you a look at the company’s tactics over the past year along with news from the show to help IT pros and administrators prepare for what’s coming next on the Microsoft roadmap. 

1. Latest news on Microsoft

Recent news on Microsoft’s product and service developments

Stay current on Microsoft’s new products and updated offerings before and during the Microsoft Ignite 2018 show.

2. A closer look

Analyzing Microsoft’s moves in 2018

Take a deeper dive into Microsoft’s developments with machine learning, DevOps and the cloud with these articles.

3. Glossary

Definitions related to Microsoft products and technologies

SmartBear-Zephyr deal spotlights software quality tools shake-up

Consolidation continues to reshape the software quality tools landscape as vendors seek to wean app dev teams off legacy tools for digital transformation initiatives.

The latest shake-up is SmartBear’s acquisition this week of Zephyr, a San Jose, Calif., maker of real-time test management tools, primarily for Atlassian’s Jira issue tracking tool, and for continuous testing and DevOps. This follows the Somerville, Mass., company’s deal in May to acquire Hiptest, in Besancon, France, to enhance continuous testing for Agile and DevOps teams.

Highlight on support for Atlassian’s Jira

Atlassian, Slack and GitHub provide three of the top ecosystems that developers use for ancillary development tools, said Ryan Lloyd, SmartBear’s vice president of products. Atlassian Marketplace’s overall revenue this past year is $200 million, according to Atlassian financial reports. Zephyr for Jira is the top-grossing app on the Atlassian Marketplace, with more than $5 million in revenue since 2012.

Zephyr strengthens SmartBear’s portfolio with native test management inside Jira, and the Zephyr Enterprise product represents a modern replacement for Quality Center, HPE’s former software now owned by Micro Focus, Lloyd said.

Meanwhile, Hiptest supports behavior-driven development, and overlaps a bit with Zephyr, said Thomas Murphy, a Gartner analyst in Spokane, Wash.

SmartBear’s portfolio of software quality tools also includes SoapUI, TestComplete, SwaggerHub, CrossBrowserTesting, Collaborator and AlertSite.

Girding for the competition

SmartBear’s moves echo those of other vendors in the software quality tools space as they fill out their portfolios to attract customers from legacy test suites, such as Micro Focus’ Quality Center and Mercury Interactive, to their platforms, Murphy said. They also want to tap into Jira’s wide adoption and teams that seek to shift to more agile practices in testing.

Other examples in the past year are Austrian firm Tricentis’ acquisition of QASymphony, and Idera, in Houston, which acquired the TestRail and Ranorex Studio test management and automation tools from German firm Gurock Software and Austria’s Ranorex GmbH, respectively.

However, vendors that assemble tools from acquisitions often end up with overlaps in features and functions, as well as very different user experience environments, Murphy said.

“They have a little feeling that they have different tool stacks for different types of users,” he said. “But I believe the more you can drive consistent look and feel that is best, especially as you push from teams up to the enterprise.”

Test management is a key part of a company’s ability to develop, test and deploy quality software at scale. Modern software quality tools must help organizations through digital transformation, yet continue to adapt to the requirements of cloud-scale companies.

“Organizations must get better at automation, they must have tools that support them with figuring out testable requirements on through to code quality testing, unit testing, exploratory testing, functional, automation and performance testing,” Murphy said. “This story has to be built around a continuous quality approach.”

No-code and low-code tools seek ways to stand out in a crowd

As market demand for enterprise application developers continues to surge, no-code and low-code vendors seek ways to stand out from one another in an effort to lure professional and citizen developers.

For instance, last week’s Spark release of Skuid’s eponymous drag-and-drop application creation system adds on-premises, private data integration, a new Design System Studio, and new core components for tasks such as creation of buttons, forms, charts and tables.

A suite of prebuilt application templates aims to help users build and customize a bespoke application, such as salesforce automation, recruitment and applicant tracking, HR management and online learning.

And a native mobile capability enables developers to take the apps they’ve built with Skuid and deploy them on mobile devices with native functionality for iOS and Android.

“We’re seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform,” said Ray Wang, an analyst at Constellation Research in San Francisco.

“People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly.”

Skuid — named for an acronym, Scalable Kit for User Interface Design — was originally an education software provider, but users’ requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications. That became the Skuid platform and the company pivoted to no-code, said Mike Duensing, CTO of Skuid in Chattanooga, Tenn.

Quick Base adds Kanban reports

Quick Base Inc., in Cambridge, Mass., recently added support for Kanban reports to its no-code platform. Kanban is a scheduling system for lean and just-in-time manufacturing. The system also provides a framework for Agile development practices, so software teams can visually track and balance project demands with available capacity and ease system-level bottlenecks.

The Quick Base Kanban reports enable development teams to see where work is in process. It also lets end users interact with their work and update their status, said Mark Field, Quick Base director of products.

Users drag and drop progress cards between columns to indicate how much work has been completed on software delivery tasks to date. This lets them track project tasks through stages or priority, opportunities through sales stages, application features through development stages, team members and their task assignments and more, Field said.

Datatrend Technologies, an IT services provider in Minnetonka, Minn., uses Quick Base to build the apps that manage technology rollouts for its customers, and finds the Kanban reports handy.

“Quick Base manages that whole process from intake to invoicing, where we interface with our ERP system,” said Darla Nutter, senior solutions architect at Datatrend.

“Previously, we kept data of work in progress through four stages (plan, execute, complete and invoice) in a table report with no visual representation, but with these reports users can see what they have to do at any given stage and prioritize work accordingly,” she said.

“You can drag and drop tasks to different columns and it automatically updates the stage for you,” she said.

Like the Quick Base no-code platform, the Kanban reports require no coding or programming experience. Datatrend’s typical Quick Base users are project managers and business analysts, Nutter said.

For most companies, however, the issue with no-code and low-code systems is how fast users can learn and then expand upon it, Constellation Research’s Wang said.

“A lot of low-code/no-code platforms allow you to get on and build an app but then if you want to take it further, you’ll see users wanting to move to something else,” Wang said.

OutSystems sees AI as the future

OutSystems plans to add advanced artificial intelligence features to its products to increase developer productivity, said Mike Hughes, director of product marketing at OutSystems in Boston.

“We think AI can help us by suggesting next steps and anticipating what developers will be doing next as they build applications,” Hughes said.

OutSystems uses AI in its own tool set, as well as links to publicly available AI services to help organizations build AI-based products. To facilitate this, the company launched Project Turing and opened an AI Center of Excellence in Lisbon, Portugal, named after Alan Turing, who is considered the father of AI.

The company also will commit 20% of its R&D budget to AI research and partner with industry leaders and universities for research in AI and machine learning.

Confluent Platform 5.0 aims to mainstream Kafka streaming

The Confluent Platform continues to expand on capabilities useful for Kafka-based data streaming, with additions that are part of a 5.0 release now available from Confluent Inc.

Created by former LinkedIn data engineers who helped build the Kafka messaging framework, Confluent Platform aims to make real-time big data analytics accessible to a wider community.

Part of that effort takes the form of KSQL, a Kafka-savvy SQL query engine and language that Confluent created in 2017 to open Kafka streaming data to easier SQL-style analytics.
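
The result is that a continuous query over a Kafka topic can be expressed in a few lines of KSQL rather than as a Java streams application. The statements below are only an illustration; the topic, stream and column names are made up:

    -- Expose a Kafka topic as a stream, then aggregate it continuously in one-minute windows.
    CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
      WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

    SELECT page, COUNT(*) AS views
    FROM pageviews
    WINDOW TUMBLING (SIZE 1 MINUTE)
    GROUP BY page;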

Version 5.0 of the Confluent Platform, commercially released on July 31, seeks to improve disaster recovery with more adept handling of application client failover, to enhance IoT abilities with MQTT proxy support, and to reduce the need to use Java for programming streaming analytics with a new GUI for writing KSQL code.

Data dips into mainstream

Confluent Platform 5.0’s support for disaster recovery and other improvements is useful, said Doug Henschen, a principal analyst at Constellation Research. But the bigger value in the release, he said, is in KSQL’s potential for “the mainstreaming of streaming analytics.”

Besides the new GUI, this Confluent release upgrades the KSQL engine with support for user-defined functions, which are essential parts of many existing SQL workloads. Also, the release supports handling nested data in popular Avro and JSON formats.

“With these moves Confluent is meeting developer expectations and delivering sought-after capabilities in the context of next-generation streaming applications,” Henschen said.

That’s important because web, cloud and IoT applications are creating data at a prodigious rate, and companies are looking to analyze that data as part of real-time operations. The programming skills required to do that level of development remain rare, but, as big data ecosystem software like Apache Spark and Kafka find wider use, simpler libraries and interfaces are appearing to link data streaming and analytics more easily.

Kafka, take a log

At its base, Kafka is a log-oriented publish-and-subscribe messaging system created to handle the data created by burgeoning web and cloud activity at social media giant LinkedIn.

The core software has been open sourced as Apache Kafka. Key Kafka messaging framework originators, including Jay Kreps, Neha Narkhede and others, left LinkedIn in 2014 to found Confluent, with the stated intent to build on core Kafka messaging for further enterprise purposes.

Joanna Schloss, Confluent’s director of product marketing, said Confluent Platform’s support for nested data in Avro and JSON will enable greater use of business intelligence (BI) tools in Kafka data streaming. In addition, KSQL now supports more complex joins, allowing KSQL applications to enhance data in more varied ways.

She said opening KSQL activity to view via a GUI makes KSQL a full citizen in modern development teams in which programmers, as well as DevOps and operations staff, all take part in data streaming efforts.

“Among developers, DevOps and operations personnel there are persons interested in seeing how Kafka clusters are performing,” she said. Now, with the KSQL GUI, “when something arrives they can use SQL [skills] to watch what happened.” They don’t need to find a Java developer to interrogate the system, she noted.

Making Kafka more accessible for applications

KSQL is among the streaming analytics capabilities of interest to Stephane Maarek, CEO at DataCumulus, a Paris-based firm focused on Java, Scala and Kafka training and consulting.

Maarek said KSQL has potential to encapsulate a lot of programming complexity, and, in turn, to lower the barrier to writing streaming applications. In this, Maarek said, Confluent is helping make Kafka more accessible “to a variety of use cases and data sources.”

Moreover, because the open source community that supports Kafka “is strong, the real-time applications are really easy to create and operate,” Maarek added.

Advances in the replication capabilities in Confluent Platform are “a leap forward for disaster recovery, which has to date been something of a pain point,” he said.

Maarek also said he welcomed recent updates to Confluent Control Center, because they give developers and administrators more insights into the activity of Kafka cluster components, particularly schema registry and application consumption lags — the difference between messaging reads and messaging writes. The updates also reduce the need for administrators to write commands, according to Maarek.

Data streaming field

The data streaming field remains young, and Confluent faces competition from established data analytics players like IBM, Teradata and SAS Institute, Hadoop distribution vendors like Cloudera, Hortonworks and MapR, and a variety of specialists such as MemSQL, SQLstream and Striim.

“There’s huge interest in streaming applications and near-real-time analytics, but it’s a green space,” Henschen said. “There are lots of ways to do it and lots of vendor camps — database, messaging-streaming platforms, next-gen data platforms and so on — all vying for a piece of the action.”

However, Kafka often is a common ingredient, Henschen noted. Such ubiquity helps put Confluent in a position “to extend the open source core with broader capabilities in a commercial offering,” he said.

Curious About Windows Server 2019? Here Are the Latest Features Added

Microsoft continues adding new features to Windows Server 2019 and cranking out new builds for Windows Server Insiders to test. Build 17709 has been announced, and I got my hands on a copy. I’ll show you a quick overview of the new features and then report my experiences.

If you’d like to get into the Insider program so that you can test out preview builds of Windows Server 2019 yourself, sign up on the Insiders page.

Ongoing Testing Requests

If you’re just now getting involved with the Windows Server Insider program or the previews for Windows Server 2019, Microsoft has asked all testers to try a couple of things with every new build:

  • In-place upgrade
  • Application compatibility

You can use virtual machines with checkpoints to easily test both of these. This time around, I used a physical machine, and my upgrade process went very badly. I have not been as diligent about testing applications, so I have nothing of importance to note on that front.

Build 17709 Feature 1: Improvements to Group Managed Service Accounts for Containers

I would bet that web applications are the primary use case for containers. Nothing else can match containers’ ability to strike a balance between providing version-specific dependencies while consuming minimal resources. However, containerizing a web application that depends on Active Directory authentication presents special challenges. Group Managed Service Accounts (gMSA) can solve those problems, but rarely without headaches. 17709 includes these improvements for gMSAs:

  • Using a single gMSA to secure multiple containers should produce fewer authentication errors
  • A gMSA no longer needs to have the same name as the system that hosts the container(s)
  • gMSAs should now work with Hyper-V isolated containers
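
In practice, a gMSA reaches a container through a credential spec file handed to Docker at run time. A rough outline of that flow, with placeholder account, file and image names, assuming the host is allowed to retrieve the gMSA's password and has Microsoft's CredentialSpec PowerShell module installed:

    # Create a credential spec file for an existing gMSA, then start a container that uses it.
    New-CredentialSpec -AccountName WebAppGmsa    # the cmdlet reports where it wrote the JSON; the name may differ
    # With build 17709, the gMSA no longer has to share a name with the container host.
    docker run --security-opt "credentialspec=file://WebAppGmsa.json" -d --name webfrontend mywebapp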

I do not personally use enough containers to have meaningful experience with gMSA. I did not perform any testing on this enhancement.

Build 17709 Feature 2: A New Windows Server Container Image with Enhanced Capabilities

If you’ve been wanting to run something in a Windows Server container but none of the existing images meet your prerequisites, you might have struck gold in this release. Microsoft has created a new Windows Server container image with more components. I do not have a complete list of those components, but you can read what Lars Iwer has to say about it. He specifically mentions:

  • Proofing tools
  • Automated UI tests
  • DirectX

As I read that last item, I instantly wanted to know: “Does that mean GUI apps from within containers?” Well, according to the comments on the announcement, yes*. You just have to use “Session 0”. That means that if you RDP to the container host, you must use the /admin switch with MSTSC. Alternatively, you can use the physical console or an out-of-band console connection application.

Commentary on Windows Server 2019 Insider Preview Build 17709

So far, my experiences with the Windows Server 2019 preview releases have been fairly humdrum. They work as advertised, with the occasional minor glitch. This time, I spent more time than normal and hit several frustration points.

In-Place Upgrade to 17709

Ordinarily, I test preview upgrades in a virtual machine. Sure, I use checkpoints with the intent of reverting if something breaks. But, since I don’t do much in those virtual machines, they always work. So, I never encounter anything to report.

For 17709, I wanted to try out the container stuff, and I wanted to do it on hardware. So, I attempted an in-place upgrade of a physical host. It was disastrous.

Errors While Upgrading

First, I got a grammatically atrocious message that contained false information. I wish that I had saved it so I could share it with others that might encounter it, but I must have accidentally deleted my notes. The message started out with “Something happened” (it didn’t say what happened, of course), then asked me to look in an XML file for information. Two problems with that:

  1. I was using a Server Core installation. I realize that I am not authorized to speak on behalf of the world’s Windows administrators, but I bet no one will get mad at me for saying, “No one in the world wants to read XML files on Server Core.”
  2. The installer didn’t even create the file.

I still have not decided which of those two things irritates me the most. Why in the world would anyone actively decide to build the upgrade tool to behave that way?

Problems While Trying to Figure Out the Error

Well, I’m fairly industrious, so I tried to figure out what was wrong. The installer did not create the XML file that it talked about, but it did create a file called “setuperr.log”. I didn’t keep the entire contents of that file either, but it contained only one line error-wise that seemed to have any information at all: “CallPidGenX: PidGenX function failed on this product key”. Do you know what that means? I don’t know what that means. Do you know what to do about it? I don’t know what to do about it. Is that error even related to my problem? I don’t even know that much.

I didn’t find any other traces or logs with error messages anywhere.

How I Fixed My Upgrade Problem

I began by plugging the error messages into Internet searches. I found only one hit with any useful information. The suggestions were largely useless. But, the guy managed to fix his own problem by removing the system from the domain. How in the world did he get from that error message to disjoining the domain? Guesswork, apparently. Well, I didn’t go quite that far.

My “fix”: remove the host from my Hyper-V cluster. The upgrade worked after that.

Why did I put the word “fix” in quotation marks? Because I can’t tell you that actually fixed the problem. Maybe it was just a coincidence. The upgrade’s error handling and messaging was so horrifically useless that without duplicating the whole thing, I cannot conclusively say that one action resulted in the other. “Correlation is not causation”, as the saying goes.

Feedback for In-Place Upgrades

At some point, I need to find a productive way to express this to Microsoft. But for now, I’m upset and frustrated at how that went. Sure, it only took you a few minutes to read what I had to say. It took much longer for me to retry, poke around, search, and prod at the thing until it worked, and I had no idea that it was ever going to work.

Sure, once the upgrade went through, everything was fine. I’m quite happy with the final product. But if I were even to start thinking about upgrading a production system and I thought that there was even a tiny chance that it would dump me out at the first light with some unintelligible gibberish to start a luck-of-the-draw scavenger hunt, then there is a zero percent chance that I would even attempt an upgrade. Microsoft says that they’re working to improve the in-place upgrade experience, but the evidence I saw led me to believe that they don’t take this seriously at all. XML files? XML files that don’t even get created? Error messages that would have set off 1980s-era grammar checkers? And don’t even mean anything? This is the upgrade experience that Microsoft is anxious to show off? No thanks.

Microsoft: the world wants legible, actionable error messages. The world does not want to go spelunking through log files for vague hints. That’s not just for an upgrade process either. It’s true for every product, every time.

The New Container Image

OK, let’s move on to some (more) positive things. Many of the things that you’ll see in this section have been blatantly stolen from Microsoft’s announcement.

Once my upgrade went through, I immediately started pulling down the new container image. I had a bit of difficulty with that, which Lars Iwer of Microsoft straightened out quickly. If you’re trying it out, you can get the latest image with the following:
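
The plain pull is a one-liner; the repository name is the same one referenced later in this article, and docker falls back to the latest tag when none is given:

    docker pull mcr.microsoft.com/windowsinsider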

Since Insider builds update frequently, you might want to ensure that you only get the build version that matches your host version (if you get a version mismatch, you’ll be forced to run the image under Hyper-V isolation). Lars Iwer provided the following script (stolen verbatim from the previously linked article, I did not write this or modify it):
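
If you would rather not copy that script from the announcement, the lines below sketch the same idea; they are my approximation, not the original. Check the host's build number, then compare it against the build baked into the image so you know whether process isolation will work:

    # Hypothetical stand-in for the version-matching script, not the original.
    $hostBuild = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuildNumber
    docker pull mcr.microsoft.com/windowsinsider
    docker inspect mcr.microsoft.com/windowsinsider --format '{{.OsVersion}}'   # compare this with $hostBuild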

Trying Out the New Container Image

I was able to easily start up a container and poke around a bit:

Testing out the new functionality was a bit tougher, though. It solves problems that I personally do not have. Searching the Internet for, “example apps that would run in a Windows Server container if Microsoft had included more components” didn’t find anything I could test with either (That was a joke; I didn’t really do that. As far as you know). So, I first wrote a little GUI .Net app in Visual Studio.

*Graphical Applications in the New Container Image

Session 0 does not seem to be able to show GUI apps from the new container image. If you skimmed up to this point and you’re about to tell me that GUI apps don’t show anything from Windows containers, this links back to the (*) text above. The comments section of the announcement article indicate that graphical apps in the new container will display on session 0 of the container host.

I don’t know if I did something wrong, but nothing that I did would show me a GUI from within the new container style. The app ran just fine — it shows up under Get-Process — but it never shows anything. It does exactly the same thing under microsoft/dotnet-framework in Hyper-V isolation mode, though. So, on that front, the only benefit that I could verify was that I did not need to run my .Net app in Hyper-V isolation mode or use a lot of complicated FROM nesting in my dockerfile. Still no GUI, though, and that was part of my goal.

DirectX Applications in the New Container Image

After failing to get my graphical .Net app to display, I next considered DirectX. I personally do not know how to write even a minimal DirectX app. But, I didn’t need to. Microsoft includes the very first DirectX-dependent app that I was ever able to successfully run: dxdiag.

Sadly, dxdiag would not display on session 0 from my container, either. Just as with my .Net app, it appeared in the local process list and docker top. But, no GUI that I could see.

However, dxdiag did run successfully, and would generate an output file:
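
For anyone who has not used it that way, dxdiag can write its report to a file instead of opening its dialog, which is presumably what is happening here. A sketch of the kind of invocation involved, run from a PowerShell prompt inside the container; the output path is arbitrary:

    dxdiag /t C:\dxdiag-report.txt
    # The file is not written instantly; give it a moment, then read it back.
    Get-Content C:\dxdiag-report.txt -TotalCount 20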

Notes for anyone trying to duplicate the above:

  • I started this particular container with 
    docker run -it mcr.microsoft.com/windowsinsider
  • DXDiag does not instantly create the output file. You have to wait a bit.

Thoughts on the New Container Image

I do wish that I had more experience with containers and the sorts of problems this new image addresses. Without that, I can’t say much more than, “Cool!” Sure, I didn’t personally get the graphical part to work, but a DirectX app from within a container? That’s a big deal.

Overall Thoughts on Windows Server 2019 Preview Build 17709

Outside of the new features, I noticed that they have corrected a few glitchy things from previous builds. I can change settings on network cards in the GUI now and I can type into the Start menu to get Cortana to search for things. You can definitely see changes in the polish and shine as we approach release.

As for the upgrade process, that needs lots of work. If a blocking condition exists, it needs to be caught in the pre-flight checks and show a clear error message. Failing partway into the process with random pseudo-English will extend distrust of upgrading Microsoft operating systems for another decade. Most established shops already have an “install-new-on-new-hardware-and-migrate” process. I certainly follow one. My experience with 17709 tells me that I need to stick with it.

I am excited to see the work being done on containers. I do not personally have any problems that this new image solves, but you can clearly see that customer feedback led directly to its creation. Whether I personally benefit or not, this is a good thing to see.

Overall, I am pleased with the progress and direction of Windows Server 2019. What about you? How do you feel about the latest features? Let me know in the comments below!