Hardware and HCI

Experiments with Un-tethered Interaction: a video by this guy from the Microsoft Hardware group. (Yes, they have one!)

A talk about developing hardware and how it relates to HCI, especially mobile usage. It's interesting to hear a hardware point of view. If you want to learn about wearable or swallowable computing, check it out.

People are great interfaces.

Characters Everywhere, a video feed from Stanford, illustrates its own point: why is it so much more fun to listen to a (good) lecture than to read an article? Some quotes:

– The world is easy to learn in because it has a great interface: people. (They even personalise themselves.)
– She then concludes that a human interface for the internet would be a “Holy Grail”. I think she’s missing the point.
– The single most powerful thing you can do to increase learning is to give someone a one-to-one tutor.

Which makes me wonder: what are the elements of speech that aren’t reproduced in an article? Maybe trying to add those to the web (we’ve got video, audio, the works!) would be better than trying to recreate a human…

Some interfaces are kind of nice already: the Dell Service Assistant.

Another interesting point is how conversations are predictable – constrained. She gives the example of the entertainment industry, which is very good at making us say and do certain things. What are the lessons here for web design? (Websites are conversations too, in some ways.)

Before seeing this, I was kind of against trying to simulate humans, maybe because I hadn’t seen it work properly, and many demonstrations focus on technical issues (language recognition, 3D, …) – but I forgot about the “willing suspension of disbelief” – people want to believe.

Still, I think the social computing approach is a lot more interesting.

Now I’m thinking: when I send an automatic email that kind of sounds like a real response, that’s the same thing. When I edit discussions to steer them in a certain direction, that’s influencing a conversation.

Mmmm…
You can try some things out here.

The fascination with ethnographic techniques continues

Maybe it’s just my D844 exam (I can recommend this course – you can study from home) coming up this week.

From Intel ethnography:
Design Ethnography: Using Custom Ethnographic Techniques to Develop New Product Concepts (CHI 97)
Engineering Ethnography in the Home (CHI 96)
Getting Out of the Box: Ethnography Meets Real Life: Applying Anthropological Techniques to Experience Research (UPA 01)
From the dreams of children to the future of technology (The Independent – newspaper article)

There is lots more at this Google search for “Intel ethnography”.

Other good queries on Google:
IBM ethnography
Nokia ethnography
hewlett packard ethnography

Research, deliverables, process and the team

I was playing around with this fairly stream-of-consciousness kind of diagram to get a clearer picture of how the different types of research and deliverables you do as an IA feed into the process and the team.

The diagram definitely needs work, but I’d like to know how it compares to your practical experience as an IA. I’m not trying to define the perfect process here, just trying to clarify how things work in the messy real world. So if you have comments on how you work in a team, what deliverables and research and processes you find useful / not useful, go ahead!

I haven’t found a good way of showing the iterations and feedback between all the different research and team members yet. The two darker bits are the two main documents for signoff. (Visual design gets signed off as well)

It’s a bit big and messy, but look at it this way: at least it’s not a Venn diagram ;)

overview.gif

Common form myths

Forms That Work – Questions and Answers is about how to design forms for the web. It’s a bit vague on tips though, and sometimes even plain clueless.

When answering a question about how to deal with error messages, they say things like “The cursor appears in the ‘error’ field (if technically possible)”. That’s not just clueless about web technology (JavaScript can put the cursor in any field), it’s badly thought through advice (what if you have more than one error field? Pick the top one on the page?). The advice really seems to come from people who have never designed a form, or else they’d discuss things like client side versus server side input validation. A bit disappointing.

Anyways, a few common myths about forms in my experience – I’m sure there’s more:

1. “Client side validation (javascript) is always a good thing.” I’ve found that with certain types of longer forms, it just confuses the user to have popups tell them there are problems with their forms. The user has to scroll up, fix it, scroll down, click submit again, and up comes another warning. That’s frustrating. In cases like that (long, complex forms on one page), it’s better to have server side validation only, and an error page that only shows the incorrect fields, with good explanations.
2. “It’s a web standard to indicate required fields with an asterisk * and most people understand that.” No they don’t. I’ve been surprised by how many people do not know that a field with an asterisk is required. For most forms, I’d recommend (if most fields are required) putting “optional” next to optional fields, or (if most fields are optional) putting “required” next to the required fields. Of course, often you’ll be able to separate them out into separate sections (a required section first, and an optional section later), which is even better for the user.
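To make point 1 concrete, here’s a minimal sketch (in Python, with made-up field names and rules) of the server-side-only approach: validate the whole form on the server and return just the incorrect fields, each with a plain-language explanation.

```python
# Sketch of server-side-only validation: no JavaScript popups, just a
# response listing the fields that failed, with good explanations.
# Field names and rules here are invented for illustration.
import re

RULES = {
    "email": (lambda v: re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", v),
              "Please enter a valid email address, like name@example.com."),
    "postcode": (lambda v: v.strip() != "",
                 "Please fill in your postcode."),
}

def validate(form: dict) -> dict:
    """Return {field: explanation} for every field that failed a rule."""
    errors = {}
    for field, (check, explanation) in RULES.items():
        if not check(form.get(field, "")):
            errors[field] = explanation
    return errors

# The error page would then render *only* the fields in `errors`,
# pre-filled with what the user typed, instead of a chain of alert boxes.
```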

A common objection to making fields clearly optional is that it reduces the amount of data the marketing department can work with (these optional fields are usually for marketing purposes). But what are the numbers like? Experimenting with your form, the way it’s worded, the way it’s laid out, can bring a high level of opt-ins even though the information isn’t required. Assuming making things vague will increase the value of your marketing data doesn’t seem right to me.

Good looking people use Intel toys!

I talked to good man Ben from partially.org (check out the animations!) yesterday, who was carrying an Intel Play Digital Camera.

intelplaycamera.gif

Now this little baby is a miracle of design, usability and just plain coolness. You can take a few minutes of video or audio, you can do stop-motion animation, and it has just five buttons or so. Ben told me the software is amazingly easy to use as well. I’ve been trying to locate one for sale on the web, but no such luck so far…

At Liga1.com I pointed before at the ethnographers Intel employs (like Genevieve Bell) to better understand humans and technology. I didn’t really understand which products they were working on. Now I know. Check out some of their research.

Advanced user path tracking

User tracking with Indigeo, a system to capture user interactions on the web and compare them with optimal paths. I chatted to James about this about a year ago, and now it’s kinda finished.

It gives the visitor a task, and then tracks how they go through the site, including which link they click and how long they spend on a page.

You can then do funky stuff like analyzing what your users did versus an optimal path, or playing back how they went through the site.

“The system requires just a standard web browser – no plugins, no java, no javascript. All interaction is tracked on the server using a proxy application which detects incoming web requests and returns a page where all hyperlinks have an ID attached to them. This is transparent to the visitor.”
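The link-rewriting trick described in that quote can be sketched roughly like this (a toy version in Python, not Indigeo’s actual code): the proxy tags each hyperlink in a served page with a unique ID, so the next incoming request tells the server exactly which link was clicked.

```python
# Toy version of the proxy's link-rewriting step. Every <a href="...">
# gets a tracking ID appended as a query parameter; when the browser
# requests that URL, the server knows which link the visitor clicked.
import re
from itertools import count

_ids = count(1)

def tag_links(html: str, session: str) -> str:
    def add_id(match):
        url = match.group(1)
        sep = "&" if "?" in url else "?"
        return f'href="{url}{sep}_track={session}-{next(_ids)}"'
    return re.sub(r'href="([^"]+)"', add_id, html)

page = '<a href="/home">Home</a> <a href="/about?x=1">About</a>'
print(tag_links(page, "s42"))  # every href now carries _track=s42-<n>
```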

James is looking for feedback, so head over and send him some!

Metaphors and mental models

Interview with the original Xerox dudes: Bringing Design to Software

“The purpose of computer metaphors, in general, and particularly of graphical or icon-oriented ones, is to let people use recognition rather than recall.”

I’ve been thinking about how mental models (the way users think about things) are about categories (how users organise things) or metaphors (how they explain things).

The category based mental models are really useful for information architecture. The metaphor based mental models aren’t used very often – or else I don’t understand them very well. I don’t really see how I’d use them in web design, except for the obvious and most often flawed examples of using metaphors like “rooms” for navigating websites.

IAwiki: EricScheid’s page

IAwiki: EricScheid‘s page is pretty interesting.

“The Japanese have a saying that only young boys or old men should be allocated the task of removing the leaves from their beautiful gardens. The reason is that both have intrinsic reasons for being a little bit, um, imperfect in the execution of this duty, and the occasional leaf will remain in the garden scape. This natural imperfection is considered a form of beauty.”

But do they want me?

Oddpost is very cool, but I can’t help thinking “Do they want my business?” It just seems so much like a net-savvy game that I have the suspicion they really won’t be around in six months. Do they have a business plan?

I think that’s the reason I’m not subscribing to the trial. (Or maybe it’s because I don’t need it right now…)

Ghetto-ising your discussions

I just realised something: I always knew that splitting up your discussions like this (WebAIM Accessibility Training Forum) was a bad thing. It’s better to keep the crowd (and the topics) mixed and, if you want to organise things, to retroactively tag discussions with certain topics.

Now I know why: it’s for the same reason that a mixed mailing list is usually better than a highly specialized one. You turn your discussion into a sterile ghetto if you separate everything out cleanly, because that assumes people act like machines, are interested in one topic only, and so on.
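As a toy illustration of retroactive tagging (all thread titles and topic names invented): one mixed discussion list, with topics attached to threads after the fact, instead of a separate forum per topic.

```python
# Toy model of retroactive tagging: threads live in one mixed pool,
# and a thread can carry several topics at once, which a
# forum-per-topic split can't express.
threads = [
    {"title": "Alt text for charts", "tags": set()},
    {"title": "CSS layout tricks", "tags": set()},
]

def tag(thread, *topics):
    """Attach topics to a thread after the discussion has happened."""
    thread["tags"].update(topics)

def by_topic(topic):
    return [t["title"] for t in threads if topic in t["tags"]]

tag(threads[0], "accessibility", "data-visualisation")
print(by_topic("accessibility"))  # ['Alt text for charts']
```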

The tools keep coming

TextArc.org Home

“A TextArc is a visual representation of a text—the entire text (twice!) on a single page. Some funny combination of an index, concordance, and summary, it uses the viewer’s eye to help uncover meaning. A more detailed overview is available.”

The tools keep coming. There’s a slow build-up of tools that help with this kind of analysis. However, there is no interaction between them. Software designed to support social research, like the tool above, should be able to export and import data in XML, so you can use multiple tools on the same data. One day maybe…
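As a sketch of what that interoperability could look like: a made-up XML shape for coded segments of a recording, with export and import round-tripping in Python. The schema is invented for illustration; no such shared standard exists.

```python
# Sketch of the export/import idea: if each analysis tool could dump its
# coded segments in an agreed XML shape, another tool could pick them up.
# This schema is invented; it is not a real interchange standard.
import xml.etree.ElementTree as ET

def export_segments(segments):
    """segments: list of (start_second, end_second, keyword) tuples."""
    root = ET.Element("analysis")
    for start, end, keyword in segments:
        ET.SubElement(root, "segment",
                      start=str(start), end=str(end), keyword=keyword)
    return ET.tostring(root, encoding="unicode")

def import_segments(xml_text):
    root = ET.fromstring(xml_text)
    return [(int(s.get("start")), int(s.get("end")), s.get("keyword"))
            for s in root.findall("segment")]

data = [(0, 30, "greeting"), (31, 90, "negotiation")]
assert import_segments(export_segments(data)) == data  # round-trips cleanly
```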

Transana.org

Transana.org

“Transana is designed to facilitate the transcription and analysis of video data. It provides a way to view video, create a transcript, and link places in the transcript to frames in the video. It provides tools for identifying and organizing analytically interesting portions of videos, as well as for attaching keywords to those video clips. It also features database and file manipulation tools that facilitate the organization and storage of large collections of digitized video. ”

And it’s free as well :)

My take on the Google API

Google Web APIs

It will work. I wasn’t much of a fan of the traditional web-services hype (the view that web services would be everything). But now I can see a different world.

Say I’m a developer. I can easily imagine building a website and using APIs from Google, Yahoo, eBay and a few other sites, all on one website. Even a simple one. The APIs just make my life easier, that’s all, and allow me to offer a better website to my client.

If my traffic is low, I’ll pay maybe US$10-30 per API per year, and hopla, they’re making money. Maybe I’ll pay $99 to Google to use up to 100,000 queries a month on x domain names. Or maybe I’ll pay autotranslate.com US$2 per automatic translation over SOAP (and charge this to my clients). It’s cheaper and easier than getting a translation program, installing it on my server and keeping it up to date.
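As a back-of-the-envelope check on those numbers (all prices are the hypothetical ones from the paragraph above, not real Google or autotranslate.com rates):

```python
# Toy cost model for a developer combining a few paid APIs.
# All prices are the hypothetical ones from the text, not real rates.
FLAT_PER_YEAR = {"google": 99, "yahoo": 30, "ebay": 30}   # assumed $/year
PER_CALL = {"autotranslate": 2.00}                         # assumed $/call

def yearly_cost(flat_apis, metered_usage):
    """Flat subscriptions plus per-call charges for one year."""
    flat = sum(FLAT_PER_YEAR[a] for a in flat_apis)
    metered = sum(PER_CALL[a] * n for a, n in metered_usage.items())
    return flat + metered

# e.g. all three flat APIs plus 50 translations over the year:
print(yearly_cost(["google", "yahoo", "ebay"], {"autotranslate": 50}))
# 99 + 30 + 30 + 2.00*50 = 259.0
```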

Faceted metadata XML format

Here’s the thing: I’ve been thinking about this and playing around with it, and I think I might make a simple XML format (it’s half done) to publish faceted metadata that will:

– let you build your faceted metadata
– let you publish it on the web if you like
– let you import other people’s taxonomies, or parts of them
– let you merge with other people’s taxonomies

Anyone can then write software that takes a map and generates navigation like:

– “More about faceted metadata on Bloug, Moresmarter, Noisebetweenstations”. The links would show a list of pages about faceted metadata on those sites, as defined by their authors (if they have published their taxonomy, that is). The system knows that my topic “faceted metadata” is the same as your topic “faceted metadata” because I’ve pointed my published topic to your published topic (or to another published topic that you have pointed to as well).
– Also simpler faceted metadata navigation, like “Similar articles”.

For example: Lou Rosenfeld might publish a really good taxonomy for the field of information architecture, then I could import that and merge that with my personal taxonomy for my weblog.
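A rough sketch of what import-and-merge could mean mechanically. The XML shape, topic IDs and “sameAs” pointer are all invented here; this is just to make the idea tangible.

```python
# Sketch of importing someone else's published taxonomy and merging it
# with your own. The XML shape and topic IDs are made up; "sameAs" is
# the invented pointer that links my topic to an equivalent published one.
import xml.etree.ElementTree as ET

lou = """<taxonomy>
  <topic id="ia/faceted-metadata" label="Faceted metadata"/>
  <topic id="ia/thesauri" label="Thesauri"/>
</taxonomy>"""

mine = """<taxonomy>
  <topic id="blog/facets" label="Facets" sameAs="ia/faceted-metadata"/>
  <topic id="blog/weblogs" label="Weblogs"/>
</taxonomy>"""

def load(xml_text):
    """Map topic id -> its attributes."""
    return {t.get("id"): t.attrib for t in ET.fromstring(xml_text)}

def merge(a, b):
    """Union of two taxonomies; later entries win on id collisions."""
    merged = dict(a)
    merged.update(b)
    return merged

topics = merge(load(lou), load(mine))
# "blog/facets" now carries sameAs="ia/faceted-metadata", so navigation
# software can treat the two topics as the same subject across sites.
```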

Is there any interest in this? Is anyone else excited about this? The concepts are based on XTM (topic maps). I don’t want to use XTM itself for a variety of reasons that I might elaborate on later; I don’t think it would work (i.e. it’s too complex, too flexible).

Charging for content

WriteTheWeb: The problem with charging for content

“The ones that do stand a chance are the ones that are significantly, obviously different.”

Charging for regularly updated content is an uphill battle. If I can get the Village Voice for free, or a Times or Wired magazine for $10 a year, how are you going to compete with that? The problem is one of scale.

Then again, charging for specialised content that doesn’t have to be rewritten every week (i.e. it keeps some value) is pretty doable, I would think.