Social networks and the wireframe as a boundary object.

“The objects that mediate the ties between people” is a powerful concept. On LinkedIn, the “recommendation” is an object that truly connects people, creates lock-in and adds long-term value (more than the network itself: the “friend” connection, for example).

From that post: “Think about the object as the reason why people affiliate with each specific other and not just anyone. For instance, if the object is a job, it will connect me to one set of people whereas a date will link me to a radically different group. This is common sense but unfortunately it’s not included in the image of the network diagram that most people imagine when they hear the term ‘social network.’”

Exactly. Social networks with objects that connect people are stronger, longer-lasting, and provide more value.

Social science has more to say about these objects: they can connect different domains of expertise. When they do that, they’re called boundary objects: objects that are used by different communities, and each community attaches different meaning to the object.

The wireframe (an IA deliverable) is a boundary object, and this power to connect different groups (designers, coders, business people) through a shared object that has different meanings for each group is (I think) one of the reasons why the practice of information architecture has been such a success.

How exactly boundary objects and object-centered social networks fit together, I’m not sure. We’ll figure it out :)

Now that’s writing with balls. Just tell the truth: “We’ve stopped issuing new Ethnio accounts until November while we make the product amazing instead of promising but buggy.”

Reddit says: “The reason we didn’t display more than 100 comments in the past and still don’t display more than 100 now is because of bandwidth and rendering time (both ours and yours).”

I think that’s a mistake. First: the bandwidth of a zipped list of a few hundred comments is VERY small: a few K only, often smaller than a single image. Second, displaying a long page with comments is fast: here’s an example (380+ comments). I’m thinking about the same issue (you have to cap it somewhere; there’s no point in displaying 1,000 comments on one page), but as long as no threads hit 1,000 comments, I think I won’t bother. I don’t think the Reddit solution of only showing “good” comments is right: it breaks the conversation flow.
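To back up that bandwidth claim, here’s the back-of-the-envelope version. The per-comment size and compression ratio are my own rough assumptions, not Reddit’s numbers:

```javascript
// Rough, hypothetical numbers: assume an average comment is ~400 bytes
// of HTML, and that gzip compresses repetitive markup about 5:1.
const comments = 380;             // a long thread, like the example above
const bytesPerComment = 400;      // assumed average
const gzipRatio = 5;              // assumed compression ratio

const rawKB = (comments * bytesPerComment) / 1024;
const gzippedKB = rawKB / gzipRatio;

console.log(rawKB.toFixed(0) + " KB raw, ~" + gzippedKB.toFixed(0) + " KB gzipped");
// -> 148 KB raw, ~30 KB gzipped
```

Even with pessimistic assumptions, a whole long thread compresses down to roughly the size of one medium image.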

Lately I’ve been obsessed with the power of lists and leaderboards. I think smart lists are very, very powerful and this is something that can make or break a website. There’s almost a book in there, but I’ll keep that one for a rainy day :) Here’s a good description by Lucas of the (ex) webJay algorithm, including a discussion of feedback loops (what’s popular stays popular because it’s on the most-popular list).
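I don’t know webJay’s actual formula, but a common way to dampen that feedback loop is time decay: divide raw popularity by a power of the item’s age, so a fresh item can outrank an old hit that’s coasting on its list position. A minimal sketch (the gravity value and sample data are made up):

```javascript
// score = votes / (ageInHours + 2)^gravity
// Higher gravity makes the leaderboard churn faster; the +2 keeps
// brand-new items from dividing by something near zero.
function score(votes, ageInHours, gravity = 1.8) {
  return votes / Math.pow(ageInHours + 2, gravity);
}

const items = [
  { title: "old hit", votes: 500, ageInHours: 48 },
  { title: "rising newcomer", votes: 40, ageInHours: 1 },
];

// Sort by decayed score, highest first.
items.sort((a, b) => score(b.votes, b.ageInHours) - score(a.votes, a.ageInHours));
console.log(items.map(i => i.title));
// -> [ 'rising newcomer', 'old hit' ]
```

The decay term is exactly the anti-feedback-loop knob: without it, the sort is by raw votes and the list freezes.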

The “Star Wars” project is alive and well: “The glossary of acronyms provided by the Pentagon to students of missile defense is a list of abbreviations, like SBX or MIRACL (Mid Infrared Advanced Chemical Laser). The glossary is typed on letter-size paper with single-spaced entries and common type size. It is 327 pages long.”

Wrong strategy LinkedIn!

LinkedIn’s “platform” will reportedly have to “approve” all apps that get on it, in order to make sure it stays nice and business-y.

Wrong strategy. Having to be approved will keep developers away. I understand the need to keep things clean and business-y, but there are much better ways to do that. Putting in constraints *before* the app even gets in is stupid: it’s shooting your own platform in its foot. They should look at the ecosystem as a social system: bad stuff will get in. Instead of heavy guarding at the gates, they should smartly police inside, plus cleverly encourage the types of apps they want in their system (featured-app lists, making sure the app leaderboards are weighted the right way, etc.). Please LinkedIn, we need Facebook competitors. Don’t shoot yourself in the foot by taking the “easy” solution to the app-moderation problem.

(Enough mixed metaphors for you?)

I wish Google Analytics would do one more thing: let me log “events”: short text notes with dates, so I can say “added this feature on this date”, or “moved to a different host”, or “was featured in the NYT”, and so on. And then let me display those events against the stats, to easily see where they may have affected traffic. Easy to implement, not too hard to adjust the UI, come on guys! Pulease.
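Since Analytics doesn’t do this (as of this writing), here’s roughly what I mean, as a sketch. The data shapes and numbers are invented for illustration:

```javascript
// Hypothetical data: daily visit counts, plus a hand-kept event log.
const dailyVisits = {
  "2007-10-01": 1200,
  "2007-10-02": 1250,
  "2007-10-03": 4100, // the spike I'd want explained
  "2007-10-04": 2300,
};
const events = [
  { date: "2007-10-03", note: "featured in the NYT" },
];

// The merge step: print each day's traffic with any annotation next to
// it, so the cause of a spike sits right beside the number.
for (const [date, visits] of Object.entries(dailyVisits)) {
  const hit = events.find(e => e.date === date);
  console.log(date, visits, hit ? "<-- " + hit.note : "");
}
```

The whole feature is one join between two tiny tables; the only real work is drawing the annotations on the traffic graph.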

Is there any way, using jQuery and the Taconite plugin, to return HTML that’s not valid XML? I tried using CDATA, but it doesn’t seem to work.
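I can’t speak to Taconite’s internals, but one generic workaround (independent of any plugin API) is to entity-escape the messy HTML on the server so the XML wrapper stays well-formed, then unescape it client-side before injecting it into the page. The escaping half, as a sketch:

```javascript
// Escape the five XML-significant characters so arbitrary (even
// invalid) HTML can ride inside an XML document as plain text.
// Note: '&' must be replaced first, or the other entities get mangled.
function escapeXml(html) {
  return html
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

console.log(escapeXml('<br>unclosed & "messy"'));
// -> &lt;br&gt;unclosed &amp; &quot;messy&quot;
```

On the client you’d reverse the substitutions (or read the node’s text content) before handing the string to the DOM, which happily accepts HTML that an XML parser would reject.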

jQuery question

I have a very strange jQuery problem: stuff works fine on my local install, but doesn’t work on the live install. For example, sign up at poorbuthappy.com, then go to http://poorbuthappy.com/test/ and click the “add friend” button. It should be ajaxy, but it’s not (well, kinda, you’ll see: the button just greys out, when it should change to a “remove friend” button). The PHP script is reached, but the jQuery “success” function isn’t fired. And some of my other jQuery bits work locally but not live, while others work in both. It’s weird… any ideas?
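For anyone who wants to help debug: the first thing I’d wire up is an error callback, since jQuery skips the success handler silently when the request fails or (with a dataType set) the response doesn’t parse. A sketch — the URL is hypothetical and the status classification is my own:

```javascript
// Plain helper: turn an XHR status code into a readable diagnosis.
function classifyAjaxResult(status) {
  if (status >= 200 && status < 300) return "success";
  if (status === 0) return "request never completed (network, redirect, or cross-domain?)";
  return "server returned " + status;
}

// Browser wiring (needs jQuery in the page; URL is made up):
// $.ajax({
//   url: "/test/addfriend.php",
//   dataType: "json",  // success is also skipped if this fails to parse
//   success: function (data) { console.log("ok", data); },
//   error: function (xhr) {
//     console.log(classifyAjaxResult(xhr.status), xhr.responseText);
//   }
// });
console.log(classifyAjaxResult(200));
// -> success
```

A local-vs-live difference in the response (different headers, an extra PHP warning breaking the JSON, a redirect) would produce exactly the “script is reached but success never fires” symptom.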

When Mozy starts backing up (2 gigs of automatic backup for free!), at some point it says “reticulating splines”. What does that mean?

Scancafe sounds pretty good for the photographers among us: you send them your old slides and negatives, and they send back a DVD with high-quality scanned images. For $25 you can get about 100 slides scanned.

Digg’s dead.

When you actually look at the stuff on Digg’s homepage (today), it’s boooring. Dead, filled with “7 ways to X” and similar SEO spam. Digg’s out, as far as I can tell. The Yahoo homepage has a few similar problems, btw. (Specifically, check the URLs. The spamminess jumps out.)

My sites on Mediatemple suddenly got REAL slow a few days ago. Luckily they fixed the problem with more hardware, memory and some smart tricks, and the good side of this is that now those sites are really fast. Let’s hope they keep it that way.

It has become more and more clear to me that the biggest problem enterprises face in their IT selection is the software vendors, who try to sell monolithic “solutions” that almost never deliver as promised. There is an inherent conflict of interest. Regardless of the advantages of a closely controlled environment, it’s almost bound to go wrong, because there is so much incentive to make it go wrong. So what’s the solution? Loosely coupled services that work well and are adopted bottom-up can take care of a large part of an enterprise’s software needs. For the bits that do need to be closely controlled, I guess we’ll need the SAPs of this world for a while longer.

(Damn these cryptic posts with no examples! Oh well, no time.)

Google is pointing search traffic to their own properties, and now Yahoo’s following suit.

I wonder why nobody has written about Google poisoning their search results with video links that without fail point back to YouTube, a video company they own. And now Yahoo is doing the same: adding “useful” results to search that point back to their own properties. Have the search engines decided that objectivity is no longer required? I always thought getting into the content game (buying YouTube) was a mistake for Google. Don’t get me wrong though: I think the evolving “mixed” search results are brilliant, I just think that the search engine that makes an effort to send traffic to properties it doesn’t own will have a much better chance of winning.

(I also think Yahoo needs to totally redo the yahoo.com homepage. It drives a lot of traffic I’m sure, but it’s really quite bad. I’ll keep that analysis for another post.)