
Skip User Research Unless You’re Doing It Right — Seriously

Is your research timeless? It’s time to put disposable research behind us

Focus on creating timeless research. (Photo: Aron on Unsplash)

“We need to ship soon. How quickly can you get us user feedback?”

What user researcher hasn’t heard a question like that? We implement new tools and leaner processes, but try as we might, we inevitably meet the terminal velocity of our user research — the point at which it cannot be completed any faster while still maintaining its rigor and validity.

And, you know what? That’s okay! While the need for speed is valuable in some contexts, we also realize that if an insight we uncover is only useful in one place and at one time, it becomes disposable. Our goal should never be disposable research. We want timeless research.

Speed has its place

Now, don’t get me wrong. I get it. I live in this world, too. Being first to market, first to patent or first to copyright obviously requires an awareness of speed. Speed of delivery can also be the actual mechanism by which you get rapid feedback from customers.

I recently participated in a Global ResOps workshop. One thing I heard loud and clear was the struggle for our discipline to connect into design and engineering cycles. There were questions about how to address the “unreasonable expectations” of what we can do in short time frames. I also heard that researchers struggle with long and slow timelines: Anyone ever had a brilliant, generative insight ignored because “We can’t put that into the product for another 6 months”?

The good news is that there are methodologies such as “Lean” and “Agile” that can help us. Our goal as researchers is to use knowledge to develop customer-focused solutions. I personally love that these methodologies, when implemented fully, incorporate customers as core constituents in collaborative and iterative development processes.

In fact, my team has created an entire usability and experimentation engine using “Lean” and “Agile” methods. However, this team recognizes that letting speed dictate user research is a huge risk. If you cut corners on quality, customer involvement, and adaptive planning, your research could become disposable.

Do research right, or don’t do it at all

I know, that’s a bold statement. But here’s why: When time constraints force us to drop the rigor and process that incorporate customer feedback, the research we conduct loses its validity and, ultimately, its value.

The data we gather from exercises that over-index on speed are decontextualized and disconnected from the other relevant insights we’ve collected over time and across studies. We need to pause and question whether this one-off research adds real value and contributes to an organization’s growing understanding of its customers when we know it may skip steps critical to identifying insights that transcend time and context.

User research that takes time to get right has value beyond the moment for which it was intended. I’m betting you sometimes forgo conducting research if you think your stakeholders believe it’s too slow. But, if your research uncovered an insight after v1 shipped, you could still leverage that insight on v1+x.

For example, think of the last time a product team asked you, “We’re shipping v1 next week. Can you figure out if our customers want or need this?” As a researcher, you know you need more time to answer this question in a valid way. So, do you skip this research? No. Do you rush through your research, compromising its rigor? No. You investigate anyway and apply your learnings to v2.

To help keep track of these insights, we should build systems that capture our knowledge and enable us to resurface it across development cycles and projects. Imagine this: “Hey Judy, remember that thing we learned 6 months ago? Research just reminded me that it is applicable in our next launch!”
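
As a very rough sketch of what such a system could look like, here is a minimal, hypothetical insights repository in Python. The record fields, tags, and example insight are purely illustrative assumptions on my part, not a description of any particular tool.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Insight:
    """One research insight, captured once so it can be reused later."""
    summary: str                                  # the finding, in plain language
    source_study: str                             # where the evidence came from
    captured_on: date                             # when we learned it
    tags: set[str] = field(default_factory=set)   # products, personas, behaviors


class InsightRepository:
    """Stores insights so they can be resurfaced in later development cycles."""

    def __init__(self) -> None:
        self._insights: list[Insight] = []

    def add(self, insight: Insight) -> None:
        self._insights.append(insight)

    def resurface(self, *tags: str) -> list[Insight]:
        """Return past insights that share any of the requested tags."""
        wanted = set(tags)
        return [i for i in self._insights if i.tags & wanted]


# The thing we learned six months ago comes back when we plan the next launch.
repo = InsightRepository()
repo.add(Insight(
    summary="New users abandon setup when asked to create an account first",
    source_study="Onboarding usability study",
    captured_on=date(2017, 9, 1),
    tags={"onboarding", "activation", "v1"},
))

for insight in repo.resurface("onboarding"):
    print(f"{insight.captured_on}: {insight.summary}")
```

The implementation itself doesn’t matter; what matters is that each insight carries durable metadata, so a query months later can pull it back into a new project.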

That’s what we’re looking for: timeless user insights that help our product teams again and again and contribute to a curated body of knowledge about our customers’ needs, beliefs, and behaviors. Ideally, we house these insights in databases, so they can be accessed and retrieved easily by anyone for future use (but that’s another story for another time). If we only focus on speed, we lose sight of that goal.

Creating timeless research

Here’s my point: we’ll always have to deal with requests to make our research faster, but once you or your user research team has achieved terminal velocity with any given method, stop trying to speed it up. Instead, focus on capturing each insight, leveling it up to organizational knowledge, and applying that learning in the future. Yes, that means when an important insight doesn’t make v1, go ahead and bring it back up to apply to v2. Timeless research is really about building long-term organizational knowledge and curating what you’ve already learned.

Disposable research is the stuff you throw away after you ship. To be truly lean, get rid of that wasteful process. Instead, focus your research team’s time on making connections between past insights, then reusing and remixing them in new contexts. That way, you’re consistently providing timeless research that overcomes the need for speed.

Have you ever felt pressure to bypass good research for the sake of speed? Tell me about it in the comments, or tweet @insightsmunko.


To stay in-the-know with what’s new at Microsoft Research + Insight, follow us on Twitter and Facebook. And if you are interested in becoming a user researcher at Microsoft, head over to careers.microsoft.com.

Wanted – 2016/2017 | 13/15 MacBook Pro with min. 8GB RAM and 256GB SSD

I work in central Milton Keynes, so I’m happy to meet there, just up the road from the train station. Prices vary on eBay; I saw one go for £1517 plus postage (I can’t remember the condition or warranty status of that one), so I’m sure we could go back and forth on eBay prices. Feel free to make an offer, but I’m not in a rush to move it on.

Happy to send first if need be, although I’d prefer, say, 10% of the payment upfront. I do have some trading ratings on here, so I’m not starting fresh! I’d obviously pass on my home and work details to you if we go down this route.


Synology DS416j NAS (4-bay) and Synology DS213j NAS (2 bay)

My rejig of my storage continues, so I’ve got two Synology NAS units for sale. If you’re looking at this ad, you’re probably already aware of Synology’s great operating system and its ease of use, so I won’t bore you with that.

First up is a Synology DS416j 4-bay NAS. It’s boxed with PSU, LAN cable, screws etc. It’s in amazing condition – the only issue I can see is that it has a few micro scratches on the bezel – it has a shiny gloss plastic bezel (the type which seems to pick out…


FS: Intel 5820k CPU, MSI X99A mobo, 16GB (4x4GB) Corsair DDR4 & DDR3 RAM, Asus Essence STX

I’m selling my Core i7-5820k (if you’re not familiar with Intel’s enthusiast range of CPUs, this doesn’t have an in-built GPU) with an MSI X99A SLI Plus motherboard and 16GB (4x4GB) of Corsair Vengeance RAM (the RAM is a warranty replacement direct from Corsair and is still factory sealed). It’s been water-cooled since I got it in early 2016 and very lightly overclocked to run at 3.5GHz since then (which it did at very low temperatures). Looking for £300 all in, delivered. I’d…


Microsoft researchers build a bot that draws what you tell it to – The AI Blog

If you’re handed a note that asks you to draw a picture of a bird with a yellow body, black wings and a short beak, chances are you’ll start with a rough outline of a bird, then glance back at the note, see the yellow part and reach for a yellow pen to fill in the body, read the note again and reach for a black pen to draw the wings and, after a final check, shorten the beak and define it with a reflective glint. Then, for good measure, you might sketch a tree branch where the bird rests.

Now, there’s a bot that can do that, too.

The new artificial intelligence technology under development in Microsoft’s research labs is programmed to pay close attention to individual words when generating images from caption-like text descriptions. This deliberate focus produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation, according to results on an industry standard test reported in a research paper posted on arXiv.org.

The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus. Each image contains details that are absent from the text descriptions, indicating that this artificial intelligence contains an artificial imagination.

“If you go to Bing and you search for a bird, you get a bird picture. But here, the pictures are created by the computer, pixel by pixel, from scratch,” said Xiaodong He, a principal researcher and research manager in the Deep Learning Technology Center at Microsoft’s research lab in Redmond, Washington. “These birds may not exist in the real world — they are just an aspect of our computer’s imagination of birds.”

The drawing bot closes a research circle around the intersection of computer vision and natural language processing that He and colleagues have explored for the past half-decade. They started with technology that automatically writes photo captions – the CaptionBot – and then moved to a technology that answers questions humans ask about images, such as the location or attributes of objects, which can be especially helpful for blind people.

These research efforts require training machine learning models to identify objects, interpret actions and converse in natural language.

“Now we want to use the text to generate the image,” said Qiuyuan Huang, a postdoctoral researcher in He’s group and a paper co-author. “So, it is a cycle.”

Image generation is a more challenging task than image captioning, added Pengchuan Zhang, an associate researcher on the team, because the process requires the drawing bot to imagine details that are not contained in the caption. “That means you need your machine learning algorithms running your artificial intelligence to imagine some missing parts of the images,” he said.

Attentive image generation

At the core of Microsoft’s drawing bot is a technology known as a Generative Adversarial Network, or GAN. The network consists of two machine learning models, one that generates images from text descriptions and another, known as a discriminator, that uses text descriptions to judge the authenticity of generated images. The generator attempts to get fake pictures past the discriminator; the discriminator never wants to be fooled. Working together, the discriminator pushes the generator toward perfection.
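
To make that division of labor concrete, here is a minimal, text-conditioned GAN training step sketched in PyTorch. The layer sizes, the flattened-image representation, and the caption embedding are simplified assumptions for illustration only; they are not the architecture described in the Microsoft paper.

```python
# Hypothetical, simplified text-conditioned GAN training step (PyTorch).
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, IMG_DIM = 128, 100, 64 * 64 * 3  # illustrative sizes

# Generator: caption embedding + random noise -> image.
generator = nn.Sequential(
    nn.Linear(TEXT_DIM + NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),
)

# Discriminator: caption embedding + image -> probability the image is real.
discriminator = nn.Sequential(
    nn.Linear(TEXT_DIM + IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(caption_emb: torch.Tensor, real_img: torch.Tensor) -> None:
    batch = caption_emb.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_img = generator(torch.cat([caption_emb, noise], dim=1))

    # Discriminator learns to accept real images and reject generated ones.
    opt_d.zero_grad()
    real_score = discriminator(torch.cat([caption_emb, real_img], dim=1))
    fake_score = discriminator(torch.cat([caption_emb, fake_img.detach()], dim=1))
    d_loss = (bce(real_score, torch.ones_like(real_score)) +
              bce(fake_score, torch.zeros_like(fake_score)))
    d_loss.backward()
    opt_d.step()

    # Generator learns to produce images the discriminator accepts as real.
    opt_g.zero_grad()
    fooled = discriminator(torch.cat([caption_emb, fake_img], dim=1))
    g_loss = bce(fooled, torch.ones_like(fooled))
    g_loss.backward()
    opt_g.step()

# One step on random stand-in data; a real pipeline feeds captioned photos.
train_step(torch.randn(8, TEXT_DIM), torch.rand(8, IMG_DIM) * 2 - 1)
```

Repeated over a large dataset of captioned images, that push and pull is what gradually forces the generator’s output to look real for the given text.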

Microsoft’s drawing bot was trained on datasets that contain paired images and captions, which allow the models to learn how to match words to the visual representation of those words. The GAN, for example, learns to generate an image of a bird when a caption says bird and, likewise, learns what a picture of a bird should look like. “That is a fundamental reason why we believe a machine can learn,” said He.

GANs work well when generating images from simple text descriptions, such as a blue bird or an evergreen tree, but the quality stagnates with more complex descriptions, such as a bird with a green crown, yellow wings and a red belly. That’s because the entire sentence serves as a single input to the generator, so the detailed information in the description is lost. As a result, the generated image is a blurry greenish-yellowish-reddish bird instead of a close, sharp match with the description.

As humans draw, we repeatedly refer to the text and pay close attention to the words that describe the region of the image we are drawing. To capture this human trait, the researchers created what they call an attentional GAN, or AttnGAN, that mathematically represents the human concept of attention. It does this by breaking up the input text into individual words and matching those words to specific regions of the image.
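
To give a feel for what “matching those words to specific regions” can mean computationally, here is a minimal dot-product attention sketch in PyTorch. It is a simplified stand-in for illustration; the exact attention formulation used in AttnGAN is the one defined in the paper.

```python
# Hypothetical word-to-region attention sketch (PyTorch).
import torch
import torch.nn.functional as F

def word_region_attention(word_feats: torch.Tensor,
                          region_feats: torch.Tensor) -> torch.Tensor:
    """Build a word-aware context vector for every image region.

    word_feats:   (num_words, dim)   one vector per word in the caption
    region_feats: (num_regions, dim) one vector per spatial region of the image
    returns:      (num_regions, dim) attention-weighted word context per region
    """
    scores = region_feats @ word_feats.T   # similarity of each region to each word
    weights = F.softmax(scores, dim=1)     # each region's attention over the words
    return weights @ word_feats            # a region drawing the wings leans on "yellow wings"

# Toy example: a 5-word caption attended over an 8x8 grid of image regions.
context = word_region_attention(torch.randn(5, 256), torch.randn(64, 256))
print(context.shape)  # torch.Size([64, 256])
```

The generator can then condition each region of the image on its own context vector instead of on one blurred summary of the whole sentence.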

“Attention is a human concept; we use math to make attention computational,” explained He.

The model also learns what humans call commonsense from the training data, and it pulls on this learned notion to fill in details of images that are left to the imagination. For example, since many images of birds in the training data show birds sitting on tree branches, the AttnGAN usually draws birds sitting on branches unless the text specifies otherwise.

“From the data, the machine learning algorithm learns this commonsense where the bird should belong,” said Zhang. As a test, the team fed the drawing bot captions for absurd images, such as “a red double-decker bus is floating on a lake.” It generated a blurry, drippy image that resembles both a boat with two decks and a double-decker bus on a lake surrounded by mountains. The image suggests the bot had an internal struggle between knowing that boats float on lakes and the text specification of bus.

“We can control what we describe and see how the machine reacts,” explained He. “We can poke and test what the machine learned. The machine has some background learned commonsense, but it can still follow what you ask and maybe, sometimes, it seems a bit ridiculous.”

Practical applications

Text-to-image generation technology could find practical applications acting as a sort of sketch assistant to painters and interior designers, or as a tool for voice-activated photo refinement. With more computing power, He imagines the technology could generate animated films based on screenplays, augmenting the work of animators by removing some of the manual labor involved.

For now, the technology is imperfect. Close examination of the images almost always reveals flaws, such as birds with blue beaks instead of black and fruit stands with mutant bananas. These flaws are a clear indication that a computer, not a human, created the images. Nevertheless, the quality of the AttnGAN images is a nearly three-fold improvement over the previous best-in-class GAN, and it serves as a milestone on the road toward a generic, human-like intelligence that augments human capabilities, according to He.

“For AI and humans to live in the same world, they have to have a way to interact with each other,” explained He. “And language and vision are the two most important modalities for humans and machines to interact with each other.”

In addition to Xiaodong He, Pengchuan Zhang and Qiuyuan Huang at Microsoft, collaborators include former Microsoft interns Tao Xu from Lehigh University and Zhe Gan from Duke University; and Han Zhang from Rutgers University and Xiaolei Huang from Lehigh University.


John Roach writes about Microsoft research and innovation. Follow him on Twitter.

For Sale – Intel NUC i3 – 6GB RAM – 120GB

I have decided to sell my Intel NUC DC3217BY due to lack of use.

If you’re not familiar with these, this is a fully featured Windows micro PC in a form factor no bigger than a typical Android TV box.

It’s in excellent condition, still has the protective film on the top casing. Complete with original power supply, but sadly I no longer have the box it came in.

Specs:
Intel Core i3-3217U
6GB RAM
120GB mSATA SSD
Windows 10 Pro x64

Here is a link to a review for more information:
Intel Next Unit of Computing (NUC – DC3217BY) Review

£125 posted.

Price and currency: £125
Delivery: Delivery cost is included within my country
Payment method: PPG or BT
Location: Towcester
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference


Future of data storage technology: Transformational trends for 2018

Sometimes big changes sneak up on you, especially when you’re talking about the future of data storage technology. For example, when exactly did full-on cloud adoption become fully accepted by all those risk-averse organizations, understaffed IT shops and disbelieving business executives? I’m not complaining, but the needle of cloud acceptance tilted over sometime in the recent past without much ado. It seems everyone has let go of their fear of cloud and hybrid operations as risky propositions. Instead, we’ve all come to accept the cloud as something that’s just done.

Sure, cloud was inevitable, but I’d still like to know why it finally happened now. Maybe it’s because IT consumers expect information technology will provide whatever they want on demand. Or maybe it’s because everything IT implements on premises now comes labeled as private cloud. Influential companies, such as IBM, Microsoft and Oracle, are happy to help ease folks formerly committed to private infrastructure toward hybrid architectures that happen to use their respective cloud services.

In any case, I’m disappointed I didn’t get my invitation to the “cloud finally happened” party. But having missed cloud’s big moment, I’m not going to let other obvious yet possibly transformative trends sneak past as they go mainstream with enterprises in 2018. So when it comes to the future of data storage technology, I’ll be watching the following:

  • Containers arose out of a long-standing desire to find a better way to package applications. This year we should see enterprise-class container management reach maturity parity with virtual machine management — without giving up any of the advantages containers have over VMs. Expect modern software-defined resources, such as storage, to be delivered mostly in containerized form. When combined with dynamic operational APIs, these resources will deliver highly flexible, programmable infrastructures. This approach should enable vendors to package applications and their required infrastructure as redeployable units — that is, blueprinted or specified in editable and versionable manifest files — enabling full environment and even data center-level cloud provisioning. Being able to deploy a data center on demand could completely transform disaster recovery, to name one use case.
  • Everyone is talking about AI, but it is machine learning that’s slowly permeating just about every facet of IT management. Although there’s a lot of hype, it’s worth figuring out how and where carefully applied machine learning could add significant value. Most machine learning is conceptually an advanced form of pattern recognition, so think about where automatically identifying complex patterns would reduce time and effort (a minimal sketch of this idea follows the list below). We expect the increasing availability of machine learning algorithms to give rise to storage management processes that can learn and adjust operations and settings to optimize workload services, quickly identify and fix the root causes of abnormalities, and broker storage infrastructure and manage large-scale data to minimize cost.
  • Management as a service (MaaS) is gaining traction when looking at the future of data storage technology. First, every storage array seemingly comes with built-in call-home support replete with management analytics and performance optimization. I predict the interval for most remote vendor management services will quickly drop from today’s daily batch to five-minute streaming. I also expect cloud-hosted MaaS offerings to become the way most shops manage their increasingly hybrid architectures, and many will start to shift away from the burdens of on-premises management software. It does seem that all the big and even small management vendors are quickly ramping up MaaS versions of their offerings. For example, this fall, VMware rolled out several cloud management services that are basically online versions of familiar on-premises capabilities.
  • More storage arrays now have in-cloud equivalents that can be easily replicated and failed over to if needed. Hewlett Packard Enterprise Cloud Volumes (Nimble); IBM Spectrum Virtualize; and Oracle cloud storage, which uses Oracle ZFS Storage Appliance internally, are a few notable examples. It seems counterproductive to require in-cloud storage to run the same or a similar storage OS as on-premises storage to achieve reliable hybrid operations. After all, a main point of a public cloud is that the end user shouldn’t have to care, and in most cases can’t even know, if the underlying infrastructure service is a physical machine, virtual image, temporary container service or something else.

    However, there can be a lot of proprietary technology involved in optimizing complex, distributed storage activities, such as remote replication, delta snapshot syncing, metadata management, global policy enforcement and metadata indexing. When it comes to hybrid storage operations, there simply are no standards. Even the widely supported Amazon Web Services Simple Storage Service API for object storage isn’t actually a standard. I predict cloud-side storage wars will heat up, and we’ll see storage cloud sticker shock when organizations realize they have to pay both the storage vendor for an in-cloud instance and the cloud service provider for the platform.

  • Despite the hype, nonvolatile memory express (NVMe) isn’t going to rock the storage world, given what I heard at VMworld and other fall shows. Yes, it could provide an incremental performance boost for those critical workloads that can never get enough, but it’s not going to be anywhere near as disruptive to the future of data storage technology as NAND flash was to HDDs. Meanwhile, NVMe support will likely show up in most array lineups in 2018, eliminating any particular storage vendor advantage.

    On the other hand, a bit farther out than 2018, expect new computing architectures purpose-built around storage-class memory (SCM). Intel’s initial releases of its “storage” type of SCM — 3D XPoint deployed on PCIe cards and accessed using NVMe — could deliver a big performance boost. But I expect an even faster “memory” type of SCM, deployed adjacent to dynamic RAM, to be far more disruptive.
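
Picking up the machine learning item from the list above: much of the promised value really is pattern recognition applied to operational telemetry. Here is a minimal sketch, in Python, that flags anomalous storage latency with a rolling z-score. The metric, window and threshold are illustrative assumptions; real MaaS analytics use far richer models, but the underlying idea is the same.

```python
# Hypothetical rolling z-score anomaly detector for storage latency samples.
import numpy as np

def flag_anomalies(latency_ms: np.ndarray, window: int = 12,
                   threshold: float = 3.0) -> np.ndarray:
    """Return indices of samples that deviate sharply from the recent pattern."""
    anomalies = []
    for i in range(window, len(latency_ms)):
        recent = latency_ms[i - window:i]
        mean, std = recent.mean(), recent.std()
        if std > 0 and abs(latency_ms[i] - mean) > threshold * std:
            anomalies.append(i)
    return np.array(anomalies)

# Toy five-minute latency samples with one injected spike.
latency = np.random.normal(loc=2.0, scale=0.1, size=100)
latency[60] = 9.0  # e.g., a saturated controller or failing disk
print(flag_anomalies(latency))  # typically prints [60]
```

The interesting operational work starts after the flag is raised: correlating the pattern with configuration changes, brokering capacity and, ideally, fixing the root cause automatically.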

How did last year go by so fast? I don’t really know, but I’ve got my seatbelt fastened for what looks to be an even faster year ahead, speeding into the future of data storage technology.