Tag Archives: contest

Calling all game devs: The Dream.Build.Play 2017 Challenge is Here!

Dream.Build.Play is back! The long-running indie game development contest was on hiatus for a few years, so it’s high time for it to make a resounding return.

The Dream.Build.Play 2017 Challenge is the new contest: It just launched on June 27, and it challenges you to build a game and submit it by December 31 in one of four categories. We’re not super-picky — you can use whatever technology you like, as long as your game fits one of the challenge categories and you publish it as a Universal Windows Platform (UWP) game. It’s up to you to build a quality game that people will line up to play.

The four categories are:

Cloud-powered game – Grand Prize: $100,000 USD

Azure Cloud Services hands you a huge amount of back-end power and flexibility, and we think it’s cool (yes, we’re biased). So, here’s your shot at trying Azure out and maybe even winning big. Build a game that uses Azure Cloud Services on the back end, like Service Fabric, Cosmos DB, containers, VMs, storage, and analytics. Judges will give higher scores to games that use multiple services in creative ways — and will award bonus points for Mixer integration.

PC game – Grand Prize: $50,000 USD

Building on Windows 10, for Windows 10? This is the category for you. Create your best UWP game that lives and breathes on Windows 10 and is available to the more than 450 million users through the Windows Store. It’s simple: Create a game with whatever technology you want and publish it in the Windows Store. We’ll look favorably on games that add Windows 10 features such as Cortana or Inking because we really want to challenge you.

Mixed Reality game – Grand Prize: $50,000

Oh, so you want to enhance this world you live in with something a little…augmented? Virtual? Join us in the Mixed Reality challenge and build a volumetric experience that takes advantage of 3D content in a virtual space. You’ll need to create your game for Windows Mixed Reality, but you can use technology like Unity to get you kickstarted. Oh, and don’t forget the audio to really immerse us in your world.

Console game – Grand Prize: $25,000

Console gamers unite! Want to try your hand at building a game for Xbox? This category is your jam. Your UWP game will be built for the Xbox One console family and must participate in the Xbox Live Creators Program, with at least Xbox Live presence integrated. Consideration will be given to games that incorporate more Xbox Live services, such as leaderboards and statistics.

There are some important dates to be aware of:

  • June 27: Competition opens for registration
  • August 2: Team formation and game submission period opens
  • December 31: Game submission period closes
  • January 2018: Finalists announced
  • March 2018: Winners awarded

We have big things planned for you. Maybe some additional contests and challenges, maybe some extra-cool prizes for the finalists, maybe some extra-cool interviews and educational materials. Once you register, we’ll keep you updated via email, but also keep an eye on our Windows Developer social media accounts.

As I mentioned earlier, you can pretty much use whatever technology you want. Create something from the ground up in JavaScript or XAML or C++ and DirectX. Leverage one of our great middleware partners like Unity, GameMaker, Cocos2D or MonoGame. Or do a bit of both: do your own thing and incorporate the Mixer APIs, Vungle or any one (or more) of our other partners. The biggest thing we want from you is a fun game that’s so enjoyable for us to play that we forget we’re judging it!

Speaking of that, you might be wondering how we judge the games. We have four “big bucket” criteria for you to aim for:

  • Fun Factor – 40%: Bottom line – your game needs to be fun. That doesn’t mean it has to be cutesy or simple. Fun comes in many forms, but we can’t forget what we’re aiming for here – a great game. Take us for a ride!
  • Innovation – 30%: And while you’re taking us on that ride, surprise us! We’re not looking for a clone of an existing game or a tired theme that has been done a bazillion times before. Mash-up two genres. Take a theme and turn it on its head. Don’t restrict your innovation to the game, but also the technology you’re using and how you’re using it. Think outside the box when you incorporate Windows features, or how you can creatively use a service like Mixer.
  • Production Quality – 20%: Games have to be fun and we want them to be innovative, but if they don’t run, then they’re just not ready to be called a game. This scoring criterion is all about making sure your framerate is right, you have audio where you should, you’ve catered for network instability and more. Give us every opportunity to get to your game and enjoy it the way you intended.
  • Business Viability/Feasibility – 10%: And of course, what’s your plan to engage your gaming customers? Do you have a good revenue-generating plan (e.g., in-app purchases, premium charges, marketing, rollouts, etc.)? That’s stuff you might not normally think about, but we’re gonna make you. Because we care.

If you want to get started with UWP game development, you can try our Game Development Guide.

Want more? Check out the introductory .GAME episode.

So, what are you waiting for? Get in there and register!

Building in 10k: Designing for Optimization and Performance

Editor’s note: This is the third in a series of posts from the team that built the 10k Apart contest page, exploring the process of building for interoperability, accessibility, and progressive enhancement in less than 10kB.

In the previous post in this series, I talked a lot about how the 10k Apart contest began to take shape in terms of markup. While I was busy doing that, Stephanie Stimac was tucking into the project from the design end of things.

I’ll step back for a bit and let her talk about her process.

Where do I begin?

As I started poking through the wireframes and information architecture documents for the 10k Apart website, I realized I would have to approach this a little bit differently than other microsites I’ve designed in the past. I knew I needed to keep the code light and avoid excessive use of unique margins, padding, fonts, and font sizes. I needed to identify the patterns in the wireframes and focus on creating shared properties where possible—font-family, font-size, margin, and padding. Consistency and repetition would be key in keeping the CSS as small as possible.

Incidentally, consistency is a key tenet of design, so we had some solid alignment there.

With this in mind I set about creating three “themes” in Illustrator, following the style tiles approach championed by Samantha Warren. I’ll circle back to the style tiles, but first I want to touch on a few other things that I was thinking about during the design process.

What can I do in terms of typography?

10kB is not a lot to work with when you start to think about fonts. Heck, most fonts clock in at double or triple that weight for a single face! Custom fonts were out the window. Instead, I focused on picking a pair of fonts—serif and sans-serif—that were web safe, were system fonts, or had a similar looking fallback font.

I considered Times New Roman—for a hot second—as the primary header and logotype font, but my personal aesthetic could not allow it. Georgia is thicker and softer around the edges. It sits a little more heavily on the page. But in the event Georgia isn’t available, falling back to Times New Roman wouldn’t be the worst thing in the world. We took it a bit further though and came up with a well-supported font stack for all serif text:

font-family: Georgia, Times, "Times New Roman", serif;

For body copy and other content blocks longer than a headline, my original style tiles used Arial. It’s pretty universally supported (and almost equally despised). In the end, though, we realized we could offer something a little better to our readers and built a stack around Segoe UI. We still use Arial in the rather exhaustive fallback list though:

font-family: "Segoe UI", Frutiger, "Frutiger Linotype",
             "Dejavu Sans", "Helvetica Neue", Arial,
             sans-serif;  

If you’re looking to use default fonts like this, CSS Font Stack is an incredibly useful resource. It contains information about availability on both Windows and Mac and offers suggestions for alternatives as well as font size recommendations.

Screen capture showing Font Stacks, with title and body text styles open in a selection of different fonts.

Can I compress colors?

Color was a little bit trickier to work with. The color scheme is limited to three main colors with either a shade or a tint of one of those as an alternative for hover events, so you know when you’ve hovered or clicked on an item. I chose a limited palette, again, in order to reduce the amount of code it would require to build.

Once I picked my colors, I took things a bit further—some might say too far ;-)—and tuned them to create hex codes that were either alternating characters (e.g., #232323) or that used only a three-character hex code (e.g., #ccc).

The light and dark greys were not a problem; they were easy to pick with alternating characters. The initial blue and orange swatches took more time to match up with a three-digit hex code. When I converted the six-digit code to a three-digit code, the color difference was sometimes staggering. So I went through and tweaked a few of my initial colors in order to get as close as possible to the ones I wanted.
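For instance, a six-digit value can shift just slightly to land on a three-digit shorthand (the values below are illustrative, not the site’s actual palette):

/* Close to the original blue, but compressible */
a { color: #2e77bb; } /* the color I wanted… */
a { color: #27b;    } /* …nudged to #2277bb, which compresses to three characters */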

Of course, through all of this I was checking my color contrast to ensure the design would not cause accessibility issues. This led to further color adjustments along the way. Our goal in terms of color contrast was WCAG Level AA and we were able to achieve that. Of course it helped that we used rather large fonts across the board—a font size equivalent of 20px was our baseline.

Screen capture of Lea Verou's Contrast Ratio checker, showing contrast ratio comparisons for different color pairs.

Lea Verou’s Contrast Ratio tool.

How many themes should I create?

Even with all of these constraints, I had a bunch of ideas for how the site could look. I ended up sharing three approaches with the team, using style tiles to convey how each would apply to the various patterns and components throughout the site. They all shared many characteristics, but with varying color palettes, different header designs, and slightly different typography (Georgia rather than Times New Roman, for instance). As I was very confident in the overall design of patterns like projects, forms, and such, color was the primary differentiator across the three themes.

The green monochromatic style was developed after looking at An Event Apart and the original 10k Apart website from 2010. The colors were mostly muted with the exception of the buttons, which were red to draw your eye. Ultimately this direction didn’t provide enough contrast between elements with the monochromatic scheme.

Using that as a baseline, I created an alternate approach with a more serious, rigid tone. This option provided more contrast between the green and gold. I brightened up the green because the muted green did not offer enough contrast with the gold. I also introduced a dark grey to help punch up the contrast between elements.

The third theme ditched the green completely. I have a hard time using green and red together without a design screaming Happy Holidays! and I also wanted this third approach to have a completely different tone than the other two. In the end, I opted for a palette based on orange and a blue. The inspiration for this palette came from the call-to-action buttons on An Event Apart’s website, which are a bright, vivid orange.

Screen capture showing the An Event Apart page with vivid orange call-to-action buttons.

I toned the orange down just a little bit and paired it with a blue that wasn’t too heavily saturated in order to balance the orange. The tone of the design changed completely—the orange immediately gave the design more energy. It made the whole design feel more creative and inviting. Ultimately, I think that’s why it was the direction we decided we should go.

We did end up using the header style from the monochromatic design in the end. Normally I don’t like to mix ideas from different designs, but without that big block of color at the top, the whole design felt lighter. It didn’t feel like a compromise… it just worked.

Screen capture showing the final style tile for 10K Apart

I’m normally a full page mock-up kind of designer, but I found that the style tiles worked really well. Perhaps it was because this was such a small site with only a handful of patterns, but they helped speed the visual design process along. And I was happy to see how well they translated into the final, realized design in code—after all, that’s where it really matters.

What about that hero illustration?

With the basic design handled (and Aaron busily implementing it), I turned my attention to the homepage hero. I knew I wanted the illustration to be a metaphor for the 10k Apart site and for the entries that would eventually be submitted to it. I also liked the old-time contraption design used in the 2010 edition of the contest.

Screen capture of the 10K Apart 2010 page, with a hero illustration of a pedal-powered flying machine.

After seeing her amazing talk on SVG animations at SmashingConf, we approached Sarah Drasner about getting involved with the illustration. After all, what could be cooler (or smaller) than an animated hero shot in SVG? She was enthusiastic about the contest, and the challenge of building something both really cool and really small, so we set to work.

After a brief chat, we were all loving the idea of something antique and interactive. And so Sarah went away for a few days and came back with an early version of the hero you see today. She even figured out how to get the size of the SVG under 10kB while still keeping it interactive and highly detailed. (Aaron will discuss how it’s lazy-loaded in a future post.)

Sarah nailed the concept of a vintage contraption that required a lot of machinations to pop out something simple. And that’s really what this process and this contest are all about: working hard, iterating and re-iterating until we come up with something small, simple, and amazing!

What did we learn?

Stephanie’s walkthrough covered a lot of territory, so here’s a little summary of the takeaways:

  • Consistency matters — not only are consistent designs more usable and cohesive, they’re also smaller;
  • Fonts can be a bandwidth hog — even single typefaces can be quite large, consider using web safe or standard system fonts;
  • “Web Safe” doesn’t have to be boring — just because you’re using a web safe font stack doesn’t mean you can’t get creative, you can even take things further by exploring larger font stacks with options that cater to the different platforms;
  • Colors can compress — repetitive hex values and hexadecimal shorthand can help make your CSS smaller;
  • Contrast is key — design is not painting a pretty picture, people need to use your work so make sure they can read it;
  • Design systems, not pages — style tiles are a great tool for exploring design themes and get you to think in terms of a design system rather than a collection of pages; and
  • SVG is amazing — it’s incredible how much you can do with SVG illustrations: they’re small, they scale, and they can even be animated.

Where to next?

With a beautiful and highly usable design direction in place, it’s time to write some CSS. Stay tuned!

Aaron Gustafson and Stephanie Stimac

Building in 10k: Markup for Accessibility, Clarity, and Affordance

Editor’s note: This is part two in a series of posts from the team that built the 10k Apart contest page, exploring the process of building for interoperability, accessibility, and progressive enhancement in less than 10kB.

In the previous post in this series, I talked a lot about how the 10k Apart contest site began to materialize in terms of structure and content. Once the planning was done, I was finally able to start building the site in earnest.

Where do I begin?

Generally, the first step I take in building any site is to drop some sample content into a fresh HTML document and then work my way through it, marking things up as I go. In the case of this project, I decided that sample content would be all of the patterns I had identified in the wireframes. It seemed like a solid starting point that would get me pretty far in terms of building out the bits I’d need to flesh out the full site.

First things first, I copied in the content from the wireframes. Then I began asking one simple question over and over: How can I use markup to make this better?

The purpose of markup is twofold:

  1. To give structure to a document; and
  2. To convey (and enhance) the meaning of the document’s content.

With that question in mind, I began to work my way, bit by bit, through the document, always on the lookout for opportunities to make it a little more meaningful, a little more usable. I was, of course, ever-mindful of that looming 10kB limit too. Markup tends to be pretty small, but I wanted to minimize cruft and focus on markup that actually added value to the document.

What’s the minimum viable structure?

Every page needs structure, so I started there. First off, I dropped in a minimal HTML5 shell:

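It looked something like this (the id value and title here are placeholders, not necessarily what we shipped):

<!DOCTYPE html>
<html lang="en" id="tenk">
  <head>
    <meta charset="utf-8">
    <title>10k Apart</title>
  </head>
  <body>
  </body>
</html>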

Have I told you how thankful I am for that simplified DOCTYPE? It’s worth noting that I added an id to the html element to act as a site identifier. This is a practice I picked up years ago from Eric Meyer as a helpful option for folks who rely on user styles and want to override a specific site’s styles rather than every site’s.

With the minimal shell in place, I set to work marking up the content in my patterns document. I began by surveying the content and identifying key regions like the site banner, navigation, primary content, and footer. All of these pieces have semantic meaning and they also have associated HTML elements. Let’s step through them, one by one.

A “banner” is introductory content for a web page. ARIA granted us the ability to identify a site’s banner using role="banner", so it would be completely reasonable (and accessible) to mark up the site’s banner like this:

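The inner content below is just a placeholder:

<div role="banner">
  <!-- introductory site content -->
</div>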

Incidentally, HTML5 introduced the header element, which operated in a similar way. Semantically, it’s intended for introductory content. The header element can be used directly in the document body as well as within sectioning elements (like article and section). What’s really cool is that the first header encountered in the body (but not inside a sectioning element) is exposed to assistive technology as the site’s banner. That means we can skip adding the role and just do this:

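Again with placeholder content:

<header>
  <!-- introductory site content -->
</header>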

Why do we care about the semantic equivalence? Well, assistive technologies can use landmark roles like “banner” to enable a user to move around more quickly in a document.

The next thing I needed to address in the document was the navigation. There’s an ARIA role for that: role="navigation". However, there’s also a corresponding HTML5 tag: nav, which is a bit less verbose. Done and done. Well, almost. In order to identify the purpose of the navigation, I can enlist another ARIA property: aria-label:

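The label text matches what the screen readers announce below:

<nav aria-label="Main Navigation">
  <!-- navigation links -->
</nav>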

This ensures assistive technology exposes the purpose of the navigation to users. Edge with Narrator, for example, will read out “Main Navigation, navigation, navigation landmark”.

NVDA with Firefox would read this out as “Main Navigation, navigation landmark.”

Next up is the primary content. ARIA denotes that with role="main", but the newer main element accomplishes the same thing more succinctly.

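So the primary content region is simply:

<main>
  <!-- primary content -->
</main>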

Finally there’s the footer. HTML5 has the footer element which, like header, can operate in either a sectioning context or a page context. And like header, the first footer encountered in the body (again, provided it’s not a descendent of a sectioning element) will automatically be granted a semantic value equivalent to ARIA’s “contentinfo” role. That role denotes meta information about the content, like copyright. Just what we need!

Rolled all together, the document structure was simple and semantic, with nary a div in sight:

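Reassembled from the pieces above — the nav-inside-header placement is one reasonable arrangement, and the id value is a placeholder:

<!DOCTYPE html>
<html lang="en" id="tenk">
  <head>
    <meta charset="utf-8">
    <title>10k Apart</title>
  </head>
  <body>
    <header>
      <!-- site banner content -->
      <nav aria-label="Main Navigation">
        <!-- navigation links -->
      </nav>
    </header>
    <main>
      <!-- primary content -->
    </main>
    <footer>
      <!-- copyright and other meta information -->
    </footer>
  </body>
</html>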

Where can I provide more meaning?

With the basic document structure in place, I began to look at the individual patterns with an eye for where I could add value using markup. In many cases, simple tried and true elements like headings, paragraphs, and lists made the most sense. For example, the navigation is really a list of links. The order really doesn’t matter, so I used a ul:

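With illustrative labels and URLs standing in for the real ones:

<nav aria-label="Main Navigation">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/faq/">FAQ</a></li>
    <li><a href="/rules/">Official Rules</a></li>
  </ul>
</nav>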

The cool thing about that approach is that when assistive technology encounters that navigation landmark, users get even more useful information. So, going back to the NVDA example from earlier, this would be read as “Main Navigation, navigation landmark. List with three items.” That’s super helpful!

HTML’s native semantics can do a ton to super-charge our documents like this. Remember folks, not everything should be a div.

What needs to be identified?

The “common wisdom” within the web design world is to avoid using the id attribute. The oft-cited reason for this is that in a CSS selector, an id selector (e.g., #foo) carries a lot of weight in specificity calculations and often leads designers to create unnecessarily over-specific selectors. That is 100% true, but I think it gives id a bad rap. Modern attitudes toward id often remind me of the antagonism many of us felt for table when we were trying to get folks to move from table-based layouts to CSS. It took the best of us down some pretty weird rabbit holes, when in fact the table element is absolutely the best choice for marking up tabular content.

The id attribute is used to identify a specific element on the page. You can use it for CSS, sure, but it has other uses as well:

  1. It creates an anchor to that element in the document (try it out by adding “#comments” to the address bar of your browser);
  2. It creates a simple and inexpensive means of document traversal in JavaScript: document.getElementById();
  3. It creates a reference-able identifier useful for associating the identified element with another one via for, aria-labelledby, aria-describedby, and other attributes.
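A quick sketch of all three uses in one place (the names here are hypothetical):

<section aria-labelledby="comments">
  <h2 id="comments">Comments</h2>
  <p>No comments yet.</p>
</section>

<script>
  // 1: https://example.com/page#comments jumps right to the heading.
  // 3: the section above is labeled by the heading via aria-labelledby.
  // 2: inexpensive document traversal in JavaScript:
  var comments = document.getElementById('comments');
</script>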

As with many things, id can certainly be overused, but that doesn’t mean you shouldn’t use it. In terms of the 10k Apart site, I opted to use it in the forms (which I will discuss shortly) and extensively on the FAQ page and the Official Rules page.

On both of those pages, I used id to create anchor points so I could point folks directly to certain sections. On the FAQ page, which is relatively long, I used short id values like “size”, “js” and “a11y” (short for “accessibility”). The Official Rules are a bit longer, so in order to save some bits, I opted to use the section character (“§”) as a prefix to the id values. If you look at the source of that page, you’ll see id values like “§-1”, “§-2”, and “§-3”. They may look weird, but they are valid id values.
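In practice, that looks something like this (the heading text is made up for illustration):

<h2 id="§-2">Section 2</h2>

<!-- elsewhere, a link straight to it -->
<a href="#§-2">Section 2 of the Official Rules</a>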

What are the common patterns?

The id attribute is great for identifying specific elements, but there are lots of instances where elements share a function. One example of that is the gallery of projects and the gallery of judges. And so I chose to classify the two lists as being of an ilk, using a class of, you guessed it, “gallery”.

Another example of elements that are related are the various project instances. Each instance is a “project”, but a project has certain characteristics when it is part of a gallery and other characteristics when it’s on its own page. Each instance shares the class of “project” but can also receive a modifier class to denote its context. I chose to follow the BEM syntax for classifying elements, but there are numerous other ways to accomplish the same goal. Here’s what I came up with:

  • .project — Used for all projects;
  • .project--page — Used for the project on its own page;
  • .project--hero — Used for the Grand Prize winner in the final version of the homepage;
  • .project--winner — Used for projects that have won a prize; and
  • .project--preview — Used for the JavaScript-based live preview accompanying the entry form.

Since I’m on the subject, I also used the BEM approach to keep my styles modular. Continuing with the project example, a project instance could have multiple properties associated with it:

  • .project__author — The person who submitted the project;
  • .project__name — The name they gave the project; and
  • .project__won — The prize a project won

Doing this allowed me to isolate project-related styles to specific elements based solely on their purpose, rather than the markup they happened to use (which might change from instance to instance, depending on the needs of the page).
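Put together, a single project in a gallery might be marked up something like this (the elements and content are illustrative):

<li class="project project--winner">
  <h3 class="project__name">Sample Project</h3>
  <p class="project__author">Jane Developer</p>
  <p class="project__won">People’s Choice</p>
</li>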

How can I make the forms better?

Content, the official rules, and the projects themselves are all obviously quite important to this site, but without a usable entry form, the whole project is kinda pointless. Making the site’s forms more usable started in the planning phase, with eliminating unnecessary fields; that removed a lot of the noise from the entry form. The next step was humanizing the language, which I mentioned in the last post. Practically, it meant moving away from terse and rather useless labels like “Name” toward more conversational labels that actually beg a response, like “What’s your name?”.

With solid, conversational labels in place, I then went about associating them with the various form controls. To start with, I marked all of the standard fields up as input elements with no type assignment (meaning they all defaulted to “text”). I then marked up each bit of label text in a label element and then tied the two together using a for attribute on the label that referenced the id attribute of the associated field. I did the same with the select, textarea, and checkbox controls. Here’s an example:

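Along these lines, with the label text from the conversational approach and an illustrative name attribute:

<label for="n">What’s your name?</label>
<input id="n" name="name">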

I opted to use single character id values to save room so I could spend additional markup characters on additional enhancements.

Next, I went through the form and looked for fields that were asking for specific kinds of structured information, like an email address or a URL. When that was the case, I used the corresponding input type. Here’s an early instance of the email field, for example:

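Something like:

<label for="e">What’s your email address?</label>
<input id="e" name="email" type="email">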

My next pass added attributes to signify if a field was required. To get the greatest amount of coverage in this area, I doubled up the required attribute with aria-required="true". It’s redundant, but not all assistive tech/browser combinations treat them as equivalent… yet.

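So the email field picked up both attributes:

<input id="e" name="email" type="email" required aria-required="true">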

I used the next pass to control how content could be entered into the fields in order to provide a better experience for folks in browsers that support autocorrection, automatic capitalization, and the like. (If you’re interested, there’s a great overview of best practices for streamlining data entry, especially on mobile.) For the “name” field, that meant turning off autocorrect and setting it to autocapitalize words.

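On the name field, that combination looks something like:

<input id="n" name="name" required aria-required="true" autocorrect="off" autocapitalize="words">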

Then I went through and provided some silly placeholders to suggest the kind of content a user should be entering into the field.

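For example (my stand-in here is tamer than the real placeholders):

<input id="n" name="name" required aria-required="true" autocorrect="off" autocapitalize="words" placeholder="e.g., Ada Lovelace">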

And my final pass added support for auto-complete, which enables users of supporting browsers to fill out the form even more quickly by identifying the specific information the browser should supply for each field.

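By this point the name and email fields had grown to something along these lines:

<input id="n" name="name" required aria-required="true" autocorrect="off" autocapitalize="words" placeholder="e.g., Ada Lovelace" autocomplete="name">

<input id="e" name="email" type="email" required aria-required="true" placeholder="e.g., ada@example.com" autocomplete="email">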

If you’re intrigued by this idea and want to know more, you should definitely read Jason Grigsby’s treatise on autocomplete.

It may not seem like much in the grand scheme of things, but those minor markup tweaks add so much to the experience. And if a particular browser or browser/assistive tech combo doesn’t support a feature, no big deal. An input can be as simple as a text input and it’s ok. We’re still doing server-side validation, so it’s not like users who can’t take advantage of these enhancements will be left out in the cold.

And speaking of server-side validation, I should probably spend a few minutes talking about how that factors into the markup.

What if the form has an error?

Let’s say you forget to enter your name for some reason and your browser doesn’t pay attention to the HTML5 field types and attributes. The server will catch the error and return you to the form with all of the info you entered intact. It also summarizes the errors and provides some useful messaging about the error and how to fix it or why it’s important.

First off, a summary of errors will appear at the top of the form:

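Its structure is roughly this, with illustrative wording:

<div role="alert" tabindex="0">
  <p>Whoops! There was a problem with your entry:</p>
  <ul>
    <li><a href="#n">Please tell us your name.</a></li>
  </ul>
</div>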

Here we have an introductory message followed by a list of errors. Each error contains a link that anchors you directly to the corresponding field. (Yay id!) The whole message also has an ARIA role of “alert” indicating users of assistive tech should be informed of this content immediately after the page loads. A tabindex of 0 is also added to ensure it receives focus so the contents are read.

Drilling down to the field itself, we have the label, the field, and the error message. The label and the field are already associated (via the for/id connection I covered earlier), but I needed to associate the error now too. In order to do that, I used aria-describedby, which also makes use of id.

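Roughly like so (the error copy is illustrative):

<label for="n">What’s your name?</label>
<strong id="e-n">Please tell us your name.</strong>
<input id="n" name="name" aria-invalid="true" aria-describedby="e-n">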

In this case aria-describedby points to a single element (strong#e-n), but the attribute can actually reference multiple id values, separated by spaces. It’s worth noting I’ve also marked the field as invalid for assistive tech using aria-invalid="true".

Again, a teensy-tiny tweak to the markup that pays huge dividends in terms of usability.

Oh yeah, and one last thing: the form works perfectly in Lynx.

Screen capture showing the 10K Apart contest form in the Lynx web browser

What did we learn?

I covered a ton of stuff in this piece, so here’s a little summary of takeaways if you’re a fan of that sort of thing:

  • Markup for structure and semantics — Choose your HTML elements with purpose;
  • Labeling elements reduces ambiguity — aria-label can (and should) be your friend;
  • Don’t shy away from id — You don’t have to use it in your CSS, id has a bunch of other uses;
  • Classify similar elements — that’s what the class attribute is for, the naming approach is up to you;
  • Associate form labels with controls — no excuses… just do it;
  • Use the right field type — HTML5 offers a bunch of options, get familiar with them;
  • Indicate required fields — do it in your content and in your form controls;
  • Manage autocorrection and capitalization — your virtual keyboard users will thank you;
  • Suggest possible responses — but keep in mind the placeholder attribute is a suggestion, not a label;
  • Facilitate auto-completion — again, your users will thank you; and
  • Handle errors well — make it easy for users to find and correct errors.

Where to next?

With a solid set of markup patterns in place, I built out the site in earnest, following along with the wireframes. Once that was done, the next step was to design the thing. Well, technically some of the design had already begun while I was working on markup, but you know what I mean. Designer Stephanie Stimac will be joining me to talk about that process in the next post. Stay tuned!

Aaron Gustafson, Web Standards Advocate

Building in 10k: Content and Strategy

Editor’s note: This is the first in a series of posts from the team that built the 10k Apart contest page, exploring the process of building for interoperability, accessibility, and progressive enhancement in less than 10kB.

When Jeffrey Zeldman first approached me about bringing back the 10k Apart contest, my mind began to race. I’d been a judge on the 2011 incarnation of the contest, so I had seen it from that angle before, but he presented me with an amazing opportunity… not just to bring it back, but to evolve it into the 10k Apart contest our industry so desperately needs today.

The 10k Apart (and the 5k before it) have never operated in a vacuum. They’ve always taken the pulse of industry trends and then challenged us to do more, to do better, and with less. In 2010, the contest pushed us to embrace HTML5. In 2011, we were challenged to make our work responsive. And so I began to ask myself: what are the challenges we, as an industry, are struggling with today? Anyone who has followed my work could probably have guessed my answer: progressive enhancement.

And so I put together a list of changes to the contest that would help us move away from the very JavaScript-heavy entries I’d judged last time toward entries that would be usable by anyone. Entries that respected low-bandwidth connections, were considerate of users who rely on assistive technology, and embraced the inherent flexibility and malleability of the Web. And, of course, entries that broke the stereotype of progressively-enhanced projects being “boring”.

I was so excited when Jeffrey and Eric Meyer responded to my suggestions with overwhelming enthusiasm. Their encouragement challenged me to think about the rules I’d drafted to govern the contest and inspired me to make the contest site abide by those very same rules as a way of demonstrating how progressive enhancement can enable our web projects to do so much more. It’s not a yoke, holding us back; it’s a powerful philosophy that challenges us to look at experience as a continuum and pushes us to think outside of our comfortable high-tech bubble.

This is the first in a series of posts about the process of building the 2016 10k Apart contest site. I, and the wonderful team that helped me make it a reality, wanted to share what we did, but moreover why we did it. We’ll talk about the sacrifices we made in designing and building the site as well as the ways markup, style, and script took a simple transactional site and gave it the polish it so richly deserved.

Thanks for joining us on this journey…

What are you here to do?

Before tucking into code, information architecture, or even copy, I took some time to stroll through previous incarnations of the contest. I pored over the structure of the sites and examined the tone of voice they used, of course, but I also took the time to examine the purpose of every page. Who were the different audiences coming to the sites? What did they want to do? Did their goals change over time?

Asking all of these questions helped me to break the site’s audience into two main camps: Folks interested in entering the contest and folks interested in spectating. I also recognized that the motivations and goals of these groups would change as the contest progressed. Folks who entered might become spectators once they’d submitted their entry. Some spectators might be inspired to enter the contest themselves after seeing someone else’s project. And so I set about organizing the site to not only support these different, potentially overlapping audiences, but to make it easy to transition back and forth between them.

To accomplish this, the site would need to evolve with our audience through several phases:

  1. Launch – When we don’t have any entries (which is what is live as I am writing this);
  2. In progress – When we have entries that we want to show off, but the contest is still open (this phase will be kicking in soon);
  3. Close – When the contest is over and we aren’t accepting new entries, but instead focus on highlighting the folks that entered and ask you to vote for your favorites; and
  4. Winner announcement – When we celebrate the awesome works judged by our expert panel and you, the community, to be the best of the best.

With that outline in place, I began putting together lists of the sorts of content we would need on each page in each phase of the contest. I was ruthless in stripping away the cruft and the nice-to-have bits. In many ways, I followed the model Luke Wroblewski wrote about in Mobile First, by focusing on the core purpose of each page. I got rid of any bit of content or UI that was a distraction from that purpose or that simply failed to propel people toward accomplishing their goal on that page. Each page, each step in the process, was ruthlessly stripped to its essence. And then the real work began.

How do we talk to one another?

Steph Hay is often quick to point out that every interface is a conversation we’re having with our users. With that in mind, I set about authoring copy that embodied that idea. I wanted your experience of the site to be just like sitting down next to me and talking about the contest.

In their book Nicely Said, Kate Kiefer Lee and Nicole Fenton offer a ridiculous amount of great advice on not just how to write, but how to write for people. In it, they talk about writing like you speak and even go so far as to recommend you read your work aloud. Looking to the future of “headless” UIs—Cortana, Alexa, Siri—and the current crop of screen reading options out there, it was pretty obvious that this was not only good advice… it was beta testing!

I applied the wisdom I learned from these amazing content strategists (and no doubt countless others) to everything from the page titles and body copy all the way down to form labels and error messaging. I read the content aloud and in many cases had my computer read it to me as well. I know there’s room for improvement (there always is), but I’m pretty happy with the way it turned out.

Where are the patterns?

Once I had drafted copy for each page in the site, I began to organize the content into basic wireframes. In the interest of time, I focused on wireframes for large-screen views of each page, making note of the content priorities so I would know how to arrange things on smaller screens as well.

Example wireframe from https://a-k-apart.com

While working on the wireframes, I made notes (some mental, some in the wireframes themselves) about where we could shave off some page size or where certain content was helpful, but more of an enhancement than core to the purpose of the page. I also looked for places where we could use markup or style to improve the experience greatly for very little cost. HTML5 input types, for example.

For more complicated interactions, I used Interface Experience Maps (IxMaps, which are effectively flowcharts) to outline the different ways users might be able to experience an interface. For example, on an entry page the most important information is the name of the project, the name of the person (or people) who made it, the description of the project, and a link to view it live. A screenshot, while helpful, is unnecessary. So I used an IxMap to explore lazy loading the screenshots only when JavaScript was available.

IxMap exploration of lazy loading screenshots when JavaScript is available. This IxMap depicts lazy loading a picture element and supplying alternate image formats for browsers that support WebP.

In that exploration, it dawned on me that I didn’t have to settle for only one image format—I could lazy load a picture element and supply alternate image formats for browsers that support WebP (which tends to be much smaller than JPGs and PNGs). That’s one of the reasons I love IxMaps: they allow for low-cost exploration and discoveries like this.
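As a rough sketch of that approach (the data- attribute names, class, and file paths here are hypothetical, not the site’s actual code):

<div class="project__shot" data-webp="shot.webp" data-fallback="shot.jpg" data-alt="Screenshot of the project">
</div>

<script>
  // With JavaScript available, upgrade each placeholder to a picture
  // element that offers WebP with a JPG fallback.
  var shots = document.querySelectorAll('[data-webp]');
  for (var i = 0; i < shots.length; i++) {
    var picture = document.createElement('picture');
    picture.innerHTML =
      '<source srcset="' + shots[i].getAttribute('data-webp') + '" type="image/webp">' +
      '<img src="' + shots[i].getAttribute('data-fallback') + '" alt="' +
      shots[i].getAttribute('data-alt') + '">';
    shots[i].appendChild(picture);
  }
</script>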

Once the wireframes and IxMaps were complete, I went through them and teased out repeating patterns and unique components, copying them to a new page in the wireframe document. In doing so, I also found opportunities to reuse enhancements I’d come up with for specific page components. Taken together, this pattern guide gave our designer, Stephanie Stimac, a good overview of the site’s overall UI needs. It helped her avoid the trap of designing pages and let her create a design system instead. Stephanie will join me to talk about the design process in the third installment of this series.

What did we learn?

They may seem obvious, but in case you’re a fan of takeaways, here are a few relating to content, strategy, and information architecture:

  • Minimize distractions – Focus on the core purpose of every page, get rid of anything that is not supportive or detracts from that;
  • Write like you speak — Write for people like you would speak to them in person and read your work aloud;
  • Set content priorities — Ensure the flow is right and leads your users where they want (or need) to go;
  • Look for opportunities to enhance — Some supportive content may be nice to have but non-essential, consider removing it by default and bringing it back in certain scenarios; and
  • Look for patterns — Focus on creating a design system rather than individual pages.

Where to next?

With a good set of bones in place, the next step for me was to tuck into the markup, but that’s the story for another post. Stay tuned!

Aaron Gustafson, Web Standards Advocate

What would you do with 10kB?

Sixteen years ago, Stewart Butterfield conceived of a contest that would test the mettle of any web designer: The 5k. The idea was that entrants would build an entire site in 5kB of code or less. Its aim was to force us to get creative by putting a bounding box on what we could do:

Between servers and bandwidth, clients and users, HTML and the DOM, browsers and platforms, our conscience and our ego, we’re left in a very small space to find highly optimal solutions. Since the space we have to explore is so small, we have to look harder, get more creative; and that’s what makes it all interesting.

The 5k contest ran from 2000 until 2002. In 2010, An Event Apart and Microsoft revived the idea with an updated limit and a new name: 10k Apart. Staying true to its roots, this new incarnation, which ran for two years, continued to push designers and developers to get creative within a pretty extreme (though slightly expanded) limit while incorporating new goodies like HTML5 and responsive design.

Today we’re thrilled to announce that the 10k Apart contest is back and brings with it a handful of new challenges:

  1. Each page must be usable in 10kB or less. The 10kB limit no longer applies to the size of a ZIP archive of your entry; the 10kB limit now applies to the total initial download size of the baseline experience of each page in your project. When we say “baseline experience,” we’re talking small screen devices running older, less capable browsers. The 10kB limit will apply to every page and whatever assets it loads by default; that means images, CSS, JavaScript, and so on.
  2. Progressive enhancement is the name of the game. Your project should start with a super-basic, bare-bones-but-usable experience that will work no matter what (including without JavaScript). You can use clever CSS and JavaScript techniques to enhance that experience as it makes sense to do so. For example: You might lazy load an image using JavaScript if the screen size is above a certain threshold or when certain other conditions are met (there’s a sketch of this just after the list). Entries that depend entirely on JavaScript to render the front-end won’t be accepted. If you need a primer on progressive enhancement, consult the pages of A List Apart.
  3. Back ends are in this year. In previous iterations, each entry comprised client-side code submitted via ZIP file. Over time, that limitation led to an over-reliance on JavaScript for rendering. No more. This year, you can create dynamic experiences that work without front-end JavaScript using Node, PHP, Python or .Net. You will submit your entry as a public GitHub repository (so we can all learn from your awesome code) and we’ll spin up a dedicated Azure instance running the appropriate stack.
  4. Entries should be accessible. In line with the philosophy of progressive enhancement, your entry should be usable by the broadest number of users possible. Accessibility is not a checklist, but if you’re clueless about where to start, these techniques can offer some guidance.
  5. Nothing comes for free. In previous years, we gave a pass if you wanted to use jQuery or load some fonts from Typekit. This year we decided to change it up, not because we don’t love these products (we do), but because we wanted to force every piece of code, every asset, to fight for its place in your entry. Anything you add should be added with purpose.
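To make the lazy-loading example in rule #2 concrete, here’s a sketch of that kind of conditional enhancement — the asset path and breakpoint are hypothetical:

<script>
  // The page is already usable without this; we only add a decorative
  // image when the screen is wide enough to justify the extra bytes.
  if ('matchMedia' in window &&
      window.matchMedia('(min-width: 37.5em)').matches) {
    var main = document.querySelector('main');
    var img = document.createElement('img');
    img.src = '/i/hero.jpg'; // hypothetical asset
    img.alt = '';
    main.insertBefore(img, main.firstChild);
  }
</script>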

As with previous editions, your entry should use web standards and work in all modern browsers. You can use HTML, CSS, and JavaScript features and APIs that don’t have across-the-board support as long as you do so in keeping with the progressive enhancement philosophy. In other words, your entry can’t depend on that technology or feature in order to be usable.

All of this may sound like a tall order, but it’s entirely possible. In fact, the site we built for the contest also abides by these rules. We’ll touch on some of the techniques we used (and concessions we made) in building the site in future posts.

If you’ve read this far, you might be wondering What’s in it for me? Well, bragging rights, of course, but we’ve got some awesome prizes too! We’re giving away $10,000 to the top three entries, plus tickets to An Event Apart, complete collections of A Book Apart titles, and copies of my book too. Complete details of the prizes are over on the contest site.

We’ve lined up an amazing group to judge the entries this year too: Rachel Andrew, Lara Hogan, Mat Marquis, Heydon Pickering, Jen Simmons, and Sara Soueidan will all be putting your entry through its paces and peering under the hood at your code. There’s also a People’s Choice award which will be based on votes you cast. Voting will open September 5th and run through October 14th.

The contest opens today and we will accept entries until 5pm Pacific Time on September 30th. Everything you should need to know about the contest, eligibility, etc. is up on the 10k Apart site, but if you have additional questions, you can always reach out.

We can’t wait to see what you come up with! Happy coding!

Aaron Gustafson, Web Standards Advocate

What is Windows Remote Arduino and What Can It Do?

This post is an overview of the Windows Remote Arduino library, one of the technologies used in the World’s Largest Arduino Maker Challenge.

Windows Remote Arduino is an open-source Windows library that allows any Windows 10 device – be it a Windows Phone, Surface, PC, or even Raspberry Pi 2 – to remotely control an Arduino. The library enables developers to integrate their Arduino sensors into their Windows projects, as well as offload tasks too heavy or complicated for the Arduino to Windows.

You can find the Windows Remote Arduino library on our GitHub page in a repository titled “remote-wiring,” and you can learn more about Arduino here.

Windows Remote Arduino is capable of controlling the following Arduino functions:

  • GPIO – Analog and Digital I/O
    • Digital write
    • Digital read
    • Analog write (PWM)
    • Analog read
    • Setting pin mode
    • Receiving events when pin values are reported or changed
  • Send and receive data between devices over I2C

For advanced users, Windows Remote Arduino also enables custom commands via Firmata SysEx – more information can be found here.

A closer look at the technology

Now that we’ve seen a bit of what Windows Remote Arduino can do, let’s explore how the technology works. In this section we explain the thought process behind the design of the library, take a close look at the library’s structure, and review a simple code sample. Full hardware and software setup instructions can be found here.

Design Decisions

Let’s discuss the design of the Windows Remote Arduino API. Commands like pinMode and digitalWrite are so familiar to Arduino developers that rather than reinvent them, we chose to adhere to what is already familiar.

Our API was crafted to mirror the familiar Arduino Wiring API as much as possible. There are changes, of course, as the WinRT framework is fundamentally very different from the basic Wiring language used in Arduino sketches. However, with a bit of reorganization, it is possible to use the logic and commands of an Arduino sketch in a Windows 10 UWP app using Windows Remote Arduino.

After designing the API, a protocol was needed that would facilitate communication between Windows 10 and an Arduino – Firmata was the obvious choice. Firmata is a widely accepted open-source protocol that has been implemented in many languages, including Arduino Wiring. The Firmata library for Arduino is even included in the Arduino IDE by default.

Architecture

With the above design decisions, the Windows Remote Arduino library was built like a three-layer cake. Physical communication, as the bottom layer, is necessary to allow raw data to be exchanged between Windows and the Arduino device. Above the communication layer is the protocol layer, which decodes the raw incoming data into meaningful messages. On the very top is the surface API, which abstracts away all protocol messages and allows for remote control of the Arduino.

Diagram: the three-layer architecture of Windows Remote Arduino — physical communication at the bottom, the protocol layer in the middle, and the surface API on top.

Development with the library

For all basic use cases, the RemoteDevice class contained within the RemoteWiring layer is the main class that the developer will interact with. A RemoteDevice must be constructed with one of the IStream implementations (UsbSerial, BluetoothSerial, NetworkSerial, or DfRobotBleSerial) contained within the Stream layer. After invoking begin() on the Stream object, all remaining API calls are made through the RemoteDevice instance. A developer can set the modes or states of pins, read the values of digital or analog pins, initiate I2C communications to other devices, and even drive servos just by using this single class.

Advanced behaviors, such as SPI transactions, are also possible through SysEx commands. SysEx commands allow developers to write complex or custom code in the StandardFirmata sketch file that can be executed with Remote Arduino. There is a guide for these advanced behaviors on GitHub.

A look at the code

This section follows a simple sample that blinks an LED remotely using Windows Remote Arduino. A complete walkthrough for a similar project can be found at the Hackster post here. You can also check out the video below for a glance at what the sample enables:

For more information on setup, refer to the Get Started page and this guide on using Bluetooth with Windows Remote Arduino.

With setup complete, we can start a new Universal Windows Application project in Visual Studio and import the Windows Remote Arduino NuGet package using the NuGet Package Manager. To install it via the Package Manager Console:

  1. Open the Package Manager Console by clicking the “Tools” menu.
  2. Hover over “NuGet Package Manager.”
  3. Choose “Package Manager Console.”
  4. Enter Install-Package Windows-Remote-Arduino.

You can also search for and install the NuGet by selecting “Manage NuGet Packages for Solution” under the NuGet Package Manager menu and searching for “Remote Arduino” on the “Browse” tab.

With the NuGet installed, we transfer the code below to a fresh Visual Studio solution. Additional details are provided below this code section – any line or section marked with //(#) will be further analyzed.



public sealed partial class MainPage : Page
{
    private RemoteDevice arduino;
    private BluetoothSerial bluetooth;

    public MainPage()
    {
        this.InitializeComponent();

        bluetooth = new BluetoothSerial( "RNBT-5A60" );      //(1)  
        arduino = new RemoteDevice( bluetooth ); //(2)
 
        arduino.DeviceReady += Arduino_DeviceReady; //(3)
        arduino.DeviceConnectionFailed += Arduino_DeviceConnectionFailed; //(4)
 
        bluetooth.begin(); //(5)
    }

    private void Arduino_DeviceConnectionFailed( string message )
    {
        Debug.WriteLine( message );
    }
 
    private void Arduino_DeviceReady()
    {
        arduino.pinMode( 13, PinMode.OUTPUT ); //(6)
        loop();
    }
 
    private async void loop()
    {
        int DELAY_MILLIS = 1000;
 
        while( true )
        {
            // toggle pin 13 to a HIGH state and delay for 1 second
            arduino.digitalWrite( 13, PinState.HIGH ); //(7)
            await Task.Delay( DELAY_MILLIS );
 
            // toggle pin 13 to a LOW state and delay for 1 second
            arduino.digitalWrite( 13, PinState.LOW ); //(7)
            await Task.Delay( DELAY_MILLIS );
        }
    }
}


  1. First, we construct our connection object. In this case, I provide the name of the Bluetooth device directly in the constructor as a string. You could also enumerate all devices by invoking the listAvailableDevicesAsync() function (which is available in the UsbSerial and DfRobotBleSerial classes), then construct a BluetoothSerial object by passing in one of the returned DeviceInformation objects. Other IStream implementations like UsbSerial and NetworkSerial have different function signatures for their constructor. For example, UsbSerial can accept a DeviceInformation object in its constructor, but also allows for VID and PID strings to be specified, and even VID only.
  2. Now we construct a RemoteDevice object by passing in an object which implements the IStream interface—in this case our BluetoothSerial object. The RemoteDevice constructor requires an IStream object. This is the communication stream that it will use to send and receive data. Valid options are BluetoothSerial, UsbSerial, NetworkSerial, and DfRobotBleSerial (for Bluetooth LE devices).
  3. Once we’ve constructed our RemoteDevice, we then initialize the object’s event handlers. This first line specifies a callback function that will be invoked when the connection and handshaking process is complete. This function must match the RemoteDeviceConnectionCallback delegate. In the example above, our handler (Arduino_DeviceReady) gets a single pin on the Arduino ready for use and then calls a loop() function (familiar from Arduino sketches).
  4. The next line specifies a callback function that is invoked if the connection process fails. This function needs to match the RemoteDeviceConnectionCallbackWithMessage delegate (which takes one Platform::String argument). In the example above, our handler (Arduino_DeviceConnectionFailed) writes the error message given by its single argument to the Debug console.
  5. Then, we begin the connection process by calling the begin() function (which may have different parameters given your connection choice). When invoked, the IStream class will either locate or use a provided DeviceInformation object depending on which constructor was used to create the class. Next, it will open the connection stream by invoking the necessary Windows APIs. When the connection is established, the RemoteDevice class will automatically begin the handshaking process with the device. This process involves sending a special type of Firmata protocol message called a “capability query,” which the device should respond to by listing all of the pins it has and their capabilities. When this message is completely received and correctly parsed, the RemoteDevice class will fire the DeviceReady event. In this case, the DeviceReady event will cause the Arduino_DeviceReady function from our example above to be invoked.
  6. This line will first verify that the pin and state are valid before sending a Firmata protocol message via the connected IStream class. This message will instruct the Arduino to switch a pin to the specified mode (in this case, pin 13 to “OUTPUT”). RemoteDevice will also cache the pin’s mode value to keep track of the state of the connected device.
  7. These two lines will first verify that the pin is valid and in the correct mode. If so, a Firmata protocol message will be sent using the connected IStream class that will instruct the Arduino to switch this pin to the specified state.

A more complex sample using Windows Remote Arduino

Now that you’ve seen a simple starter sample using Windows Remote Arduino, let’s take a look at something more complicated. Below is a video of an LED curtain powered by the Windows Remote Arduino technology – see how the library enables an Arduino to exceed its typical capabilities:

Where you can expand Windows Remote Arduino

Windows Remote Arduino already has many potential uses, and the library is open-source and available on our GitHub page – any developer interested in expanding this technology is more than welcome. Below are some details on how we would expand the library.

Adding SPI support

There are two communication methods – I2C and SPI – that microcontrollers typically use to talk to other devices, and one or the other is commonly required by sensors, shields, and other hardware that have their own MCUs. Each method has its pros and cons, and both are widely supported on Arduino.

Currently, the Windows Remote Arduino library depends on the Firmata protocol in order to function. One drawback of Firmata is that it has no native SPI support – SPI transactions are only possible in Windows Remote Arduino through advanced SysEx commands. To support SPI natively, we would need to update the Windows Remote Arduino library, and Firmata itself would need to define an SPI standard and update its implementation.

Fortunately, the three-layer architecture of the library would allow the Firmata layer to be swapped relatively easily with another protocol implementation. From there, the RemoteDevice class could be altered to accept the new protocol, or a fresh implementation of RemoteDevice could be written to utilize the new protocol layer.
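
To make the idea concrete, here is a purely hypothetical sketch of what a minimal swappable protocol interface might look like. None of these names exist in the library today, and a real design would need to cover the full feature set that RemoteDevice relies on:

// Hypothetical illustration only – the interface and method names below are invented.
public interface class IProtocolLayer
{
    // Adopt an existing connection stream
    void begin(Microsoft::Maker::Serial::IStream ^connection);

    // The kinds of operations RemoteDevice issues today...
    void setPinMode(uint8_t pin, uint8_t mode);
    void writeDigitalPin(uint8_t pin, bool state);

    // ...plus first-class SPI transactions, which Firmata currently lacks
    void spiTransaction(const Platform::Array<uint8_t> ^writeData,
                        Platform::WriteOnlyArray<uint8_t> ^readData);
};

RemoteDevice (or a parallel implementation) would then be written against this interface rather than against Firmata directly.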

We will always consider any pull requests submitted against the Windows Remote Arduino library. If you’re a developer with a bright new idea for the future of the library, hack away and let us know.

Links to explore further

For more information on Windows Remote Arduino, start with the library’s GitHub page mentioned above, which hosts the full source code.

The World’s Largest Arduino Maker Challenge

Now that you’ve learned the ins and outs of Windows Remote Arduino, it’s time to put your newly learned skills to the test. The World’s Largest Arduino Maker Challenge is a great opportunity to make use of the library.

The competition’s title is no overstatement – with more than 3,000 participants and 1,000 submitted project ideas in just the preliminary phase, this is truly the World’s Largest Arduino Maker Challenge. The contest is brought to you by Microsoft, Hackster.io, Arduino, Adafruit, and Atmel.

The parameters of the contest are simple – develop a UWP (Universal Windows Platform) app that connects with an Arduino. Windows Remote Arduino and Windows Virtual Shields for Arduino are two recommended ways of establishing this connection. Check out the contest site for more details.

We hope you take this opportunity to learn more about the library and submit something great for the World’s Largest Arduino Maker Challenge. We can’t wait to see what you make!

Written by Devin Valenciano (Program Manager) and Jesse Frush (Software Engineer) from Windows and Devices Connected Everyday Things team

What is Windows Virtual Shields for Arduino and What Can It Do?

This post is a general overview of the Windows Virtual Shields for Arduino library, one of the technologies being used in the World’s Largest Arduino Maker Challenge. If you have not heard about the contest, we have more information at the bottom of this post.

If you’ve used an Arduino, you’re familiar with the concept of a shield. Each shield has a specialized purpose (e.g. a temperature shield, an accelerometer shield), and building a device with multiple shields can be complex, costly, and space-inefficient. Now imagine that you can use a low-cost Windows Phone as a compact set of shields. Your Arduino sketch would be able to access hundreds of dollars worth of sensors and capabilities in your Windows Phone through easy-to-use library calls.

This is exactly what the Windows Virtual Shields for Arduino library enables for developers. And that’s not even the best part: this technology works on all Windows 10 devices, so you can use the sensors and capabilities of your PC and Surface as well. The Arduino can also offload computationally expensive tasks like speech recognition and web parsing to the Windows 10 companion device!

Now let’s take a closer look at the technology. You can find the Windows Virtual Shields for Arduino library on our GitHub page – this is the library that will be included in your Arduino sketch. You will also need to install a Universal Windows Application on your Windows 10 companion device to surface its sensors and capabilities. This application can be downloaded from the Microsoft Store. Additionally, the open-source code for the Store application can be found here.

You can control the following sensors and capabilities from an Arduino using the Windows Virtual Shields for Arduino library:

Sensors:

  • Accelerometer
  • Compass
  • GPS
  • Gyrometer
  • Light Sensor
  • Orientation

Capabilities:

  • Screen
  • Touch
  • Camera
  • Email
  • Microphone
  • Notifications
  • SMS
  • Speech-to-Text
  • Speech Recognition
  • Vibration
  • Web

Let’s take a look at a simple sample

Now that you know what Windows Virtual Shields for Arduino is, let’s talk about how to use the library.

Quick setup

The full setup instructions can be found here. Briefly, the software setup includes:

  1. Downloading the Windows Virtual Shields for Arduino library from the Arduino Library Manager in the Arduino IDE.

[Screenshot: installing the Windows Virtual Shields for Arduino library from the Arduino Library Manager]

  2. Downloading the Windows Virtual Shields for Arduino Store application on your Windows 10 companion device from here.
  3. Connecting your Arduino to your Windows 10 companion device with a USB, Bluetooth, or network connection.
  4. Writing and deploying your sketch in the Arduino IDE.

Hello Virtual Shields

A skeleton “Hello World” application using Windows Virtual Shields for Arduino looks like this:



#include <ArduinoJson.h>
#include <VirtualShield.h>
#include <Text.h>

VirtualShield shield; // identify the shield
Text screen = Text(shield); // connect the screen

void setup()
{

   shield.begin(); // begin communication

   screen.clear(); // clear the screen
   screen.print("Hello Virtual Shields");
}

void loop()
{
}


As you can see, using a Virtual Shield is simple. In the sketch above, we include the necessary libraries and declare a VirtualShield object. We then declare a specific shield object – a Text object named screen – to represent the screen of the Windows 10 companion device in use. The program begins communication with the companion device, clears its screen, and prints the line “Hello Virtual Shields” on the freshly cleared screen.

A glimpse at the architecture

Now that we’ve seen a simple sample, we can take a deeper dive into the architecture at play.

The communication between the Arduino library and the Microsoft Store app travels over a USB, Bluetooth, or network connection. The protocol is JSON-based, built on the efficient open-source ArduinoJson library. This is what a simple transaction looks like across the wire (Arduino on the left, Windows 10 companion device on the right):

[Diagram: a simple JSON request/response exchange between the Arduino and the Windows 10 companion device]

This is a simplified illustration of the basic communication enabled by Windows Virtual Shields for Arduino.

A more complex sample with sensors

Let’s take a look at a more realistic example that includes sensors. All sensors in the Windows Virtual Shields for Arduino library support the four functions listed below; a short sketch after the list illustrates them:

  • get – This function is a one-time data request to a sensor. An example would be reading acceleration values off of an accelerometer.
  • start – The start function begins a series of get calls, performed at a specified interval. The interval could be determined by time (read accelerometer data every second) or value change (report if the acceleration changes by 1 unit).
  • onChange – This function is exactly like start, except it does not report a data reading the first time it is called. It will begin reporting based on a chosen time interval or change in sensor reading.
  • stop – This function ends a running start or onChange request.
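
As a concrete sketch of these four calls, the fragment below exercises them against the accelerometer. The Accelerometer class and header names, and the start(interval, delta) parameter order, are assumptions patterned on the GPS sample later in this post.

#include <ArduinoJson.h>
#include <VirtualShield.h>
#include <Accelerometer.h>   // header name assumed

VirtualShield shield;
Accelerometer accelerometer = Accelerometer(shield);  // class name assumed

void accelerometerEvent(ShieldEvent* shieldEvent)
{
    // A reading has arrived; the reading property names are assumptions
}

void setup()
{
    shield.begin();
    accelerometer.setOnEvent(accelerometerEvent);

    accelerometer.get();           // one-time reading
    accelerometer.start(1000, 0);  // report every 1000 ms (interval/delta order assumed)
    // accelerometer.onChange(0, 1.0);  // or: report only when a value changes by 1 unit
    // accelerometer.stop();            // end a running start or onChange
}

void loop()
{
    shield.checkSensors();  // required to receive responses and fire callbacks
}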

Working with GPS

With a base knowledge of how sensors work in Windows Virtual Shields for Arduino, we can take a look at something more specific. The following sample will explore how to pull GPS readings from a Windows 10 device onto an Arduino.

The code for this example is shown below:



#include <ArduinoJson.h>
#include <VirtualShield.h>
#include <Text.h>
#include <Geolocator.h>

VirtualShield shield;
Text screen = Text(shield);
Geolocator gps = Geolocator(shield);

void gpsEvent(ShieldEvent* shieldEvent)
{
    // If there is a sensor error (errors are negative)... display message
    if (shieldEvent->resultId < 0) {
        screen.printAt(3, "Sensor doesn't exist");
        screen.printAt(4, "or isn't turned on.");

        screen.printAt(6, "error: " + String(shieldEvent->resultId));
        return;
    }

    String lat = String("Lat: ") + String(gps.Latitude);
    String lon = String("Lon: ") + String(gps.Longitude);
    screen.printAt(3, lat);
    screen.printAt(4, lon);
}

void setup()
{
    shield.begin();

    screen.clear();
    screen.printAt(1, "Basic GPS Lookup");

    gps.setOnEvent(gpsEvent);
    // Check GPS whenever a reading changes by 0.01 degrees (roughly 0.7 miles of latitude)
    gps.start(0, 0.01);
}

void loop()
{
    shield.checkSensors();
}


In setup, we register gpsEvent via gps.setOnEvent so that it is invoked whenever a response is received, and we start the GPS with a delta of 0.01 degrees. Then, in loop, we call checkSensors; this call is required in order to receive responses and process callbacks for any sensor or capability. Finally, the gpsEvent function prints latitude and longitude readings every time the GPS senses a shift greater than our specified delta (0.01 degrees of latitude or longitude).

Here you can really start to see the power of Windows Virtual Shields for Arduino – it’s simple and easy to pull data from the Windows 10 companion device, and the device unifies a large collection of sensors and actuators that would otherwise be complex and costly.

A glimpse at the architecture

In the graphic below, we explore the underlying architecture of the GPS communication sample:

[Diagram: the architecture underlying the GPS communication sample]

An end-to-end project

Now that we’ve seen how sensors and simple screen capabilities work with Windows Virtual Shields for Arduino, we can take a look at a more complete project.

Check out this simple Hackster.io project to see the library in action.

A quick look at more complex capabilities

So we’ve sent text to a companion screen, and we know how to get sensor readings. That’s a good start, but we’ve just scratched the surface of what Windows Virtual Shields for Arduino is capable of. In this section, we’ll take a brief glimpse at some of the more advanced capabilities your Arduino can control on your Windows 10 companion device.

Graphics

Basic graphics instructions and events are handled the same way as sensors. A rectangle instruction (id = screen.fillRectangle(80, 120, 70, 70, YELLOW)) would produce the following communication:

[Code block: the JSON message generated by the fillRectangle instruction]

And pressing and releasing the rectangle on the Windows 10 companion device would send back events tied to that id.
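
Here is a sketch of how that round trip might look in an Arduino sketch. The return value of fillRectangle and the YELLOW constant come from the snippet above; the setOnEvent wiring and the id field on ShieldEvent are assumptions patterned on the sensor samples:

#include <ArduinoJson.h>
#include <VirtualShield.h>
#include <Text.h>

VirtualShield shield;
Text screen = Text(shield);

int rectangleId = 0;

void screenEvent(ShieldEvent* shieldEvent)
{
    // Touch events come back tied to the id of the element that was pressed
    // (the id field and the event wiring here are assumptions)
    if (shieldEvent->id == rectangleId) {
        screen.printAt(8, "Rectangle touched!");
    }
}

void setup()
{
    shield.begin();
    screen.setOnEvent(screenEvent);

    screen.clear();
    rectangleId = screen.fillRectangle(80, 120, 70, 70, YELLOW);
}

void loop()
{
    shield.checkSensors();
}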

Speech

The speech functionality of Windows Virtual Shields for Arduino includes Text-to-Speech and Speech Recognition. Here we see another huge advantage of Windows Virtual Shields for Arduino – we can leverage the computational power and APIs of the Windows 10 companion device to enable speech scenarios.

Text-to-Speech is simple and can be initiated by a command such as speech.speak("Hello World"). This particular command will make the Windows 10 companion device speak the words “Hello World”.

The Speech Recognition functionality returns an indexed list of choices. Issuing the request recognition.listenFor("yes,no") would return an event where 1="yes", 2="no", and 0=unrecognized (negative values indicate errors). The event can also account for groupings, such as recognizing a variety of words ("yes", "yeah", "ok") as the single option "yes". Recognition can also handle open text, but it is limited to 100 characters due to the memory and buffer size of an Arduino.
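
A minimal sketch tying the two together might look like the following. The Speech and Recognition class and header names, and the use of resultId to carry the returned index, are assumptions based on the calls quoted above and on the GPS sample:

#include <ArduinoJson.h>
#include <VirtualShield.h>
#include <Speech.h>        // header name assumed
#include <Recognition.h>   // header name assumed

VirtualShield shield;
Speech speech = Speech(shield);                 // class name assumed
Recognition recognition = Recognition(shield);  // class name assumed

void recognitionEvent(ShieldEvent* shieldEvent)
{
    // Assumed: resultId carries the 1-based index of the recognized choice
    if (shieldEvent->resultId == 1) {
        speech.speak("You said yes");
    } else if (shieldEvent->resultId == 2) {
        speech.speak("You said no");
    }
}

void setup()
{
    shield.begin();
    recognition.setOnEvent(recognitionEvent);

    speech.speak("Say yes or no");   // Text-to-Speech
    recognition.listenFor("yes,no"); // indexed Speech Recognition
}

void loop()
{
    shield.checkSensors();
}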

Web

You can also use the web capability to retrieve a web page and parse it before returning a web event to Arduino. This is really useful, as most web pages are larger than the entire Arduino onboard memory. The parsing engine uses an abbreviated instruction set to fully parse a web page.

The following code retrieves a weather dump from NOAA as a JSON blob, then parses the JSON to retrieve the current weather.



String url = "http://forecast.weather.gov/MapClick.php?lat=47.6694&lon=-122.1239&FcstType=json";  
String parsingInstructions = "J:location.areaDescription|&^J:time.startPeriodName[0]|&^J:data.weather[0]";  
web.get(url, parsingInstructions); 


The event returns “Redmond WA|This Afternoon|Chance Rain”. As with speech, Windows Virtual Shields for Arduino moves expensive tasks to the companion device, allowing for more free memory on the Arduino.
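
Receiving that parsed result on the Arduino might look like the sketch below. As with the other samples, the setOnEvent wiring is assumed, and the result field used to read the returned text is a guess – check the library source for the exact event surface:

#include <ArduinoJson.h>
#include <VirtualShield.h>
#include <Text.h>
#include <Web.h>   // header name assumed

VirtualShield shield;
Text screen = Text(shield);
Web web = Web(shield);   // class name assumed

void webEvent(ShieldEvent* shieldEvent)
{
    // Assumed: the parsed, pipe-delimited text arrives on the event,
    // e.g. "Redmond WA|This Afternoon|Chance Rain"
    screen.printAt(2, shieldEvent->result);   // field name is a guess
}

void setup()
{
    shield.begin();
    web.setOnEvent(webEvent);

    String url = "http://forecast.weather.gov/MapClick.php?lat=47.6694&lon=-122.1239&FcstType=json";
    String parsingInstructions = "J:location.areaDescription|&^J:time.startPeriodName[0]|&^J:data.weather[0]";
    web.get(url, parsingInstructions);
}

void loop()
{
    shield.checkSensors();
}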

Where we want to expand Windows Virtual Shields for Arduino

Windows Virtual Shields for Arduino has already come so far, but there are many ways in which we could improve the technology further. The great part is, the library is open-source – any developer interested in expanding this technology is more than welcome. All of the code is available from our GitHub page.

Let’s take a look at three areas we would want to expand upon, if time were no obstacle!

  1. First, we would add even more sensors and capabilities. NFC would make an excellent addition – it would enable scenarios like unlocking a door with your phone or transferring NFC payments. Other desired sensors and capabilities include an iris scanner (e.g. on a Lumia 950 XL), FM radio control, and device-side geofencing (rather than reading the GPS and coding the logic yourself on the Arduino). Cortana integration would be another welcome addition.
  2. Next, we could improve existing sensors and capabilities. For example, GPS returns a latitude, longitude, and altitude, and the sensor can be triggered on a settable delta. However, that delta is singular and applies to all properties equally. If you wanted to monitor a small change in altitude but not in latitude or longitude – or to ignore a large change in altitude (e.g. entering a tall building) – the sensor system would need more than one delta.
  3. A third option would be to expand the scope. The Universal Windows Application currently connects to only one device at a time, but we can imagine scenarios where multiple Arduinos connect to a single app, such as a home control system (e.g. self-registering heating ducts that open and close depending on where you are).

And of course, there are countless other ways in which this technology can evolve. Explore it yourself, and see what you can build!

The World’s Largest Arduino Maker Challenge

Now that you’ve learned the ins and outs of Windows Virtual Shields for Arduino, it’s time to put your newly learned skills to the test. The World’s Largest Arduino Maker Challenge would be a great opportunity to make use of the library.

The competition’s title is no overstatement – with more than 3,000 participants and 1,000 submitted project ideas in just the preliminary phase, this is truly the World’s Largest Arduino Maker Challenge. The contest is brought to you by Microsoft, Hackster.io, Arduino, Adafruit, and Atmel.

The parameters of the contest are simple – participants must develop a Universal Windows Application (UWA) that connects with an Arduino. Windows Virtual Shields for Arduino and Windows Remote Arduino are two recommended ways of establishing this connection. Check out the contest site for more details.

We hope you take this opportunity to learn more about the library and submit something great for the World’s Largest Arduino Maker Challenge!  We can’t wait to see what you make!

Written by Devin Valenciano (Program Manager) and Jim Gale (Principal Software Engineering Lead) from Windows and Devices Connected Everyday Things team