Beth Kanter points to blogger Brian Kelly's page of blog experiments. Here, Brian records experiments he's running on his blog and their results. For example, he looks at what happens when he has a guest blogger or when he adds a widget to his sidebar. A typical experiment write-up looks like this:
- Sonific WordPress sidebar widget
  - Reasons for experiment: To explore possible benefits of this particular plugin and, more importantly, to investigate some of the more general issues related to the use of plugins, including policies regarding their provision, user benefits, and usability issues.
  - Details: The Sonific WordPress plugin was installed in the right-hand sidebar on 22 January 2007, and the song was updated on 29 January 2007. The widget was removed on 1 March 2007.
It's an interesting, more formal way to monitor what happens when Brian tries new things on his blog. This sparked a couple of thoughts for me.
First, I like the idea of taking a more formal approach to running experiments on your blog. I think I'd structure things a little differently, though. Like Brian, I'd include a description of the experiment and my reasons for running it. But instead of setting up a separate page just for my experiments, I'd run them as individual blog posts, tagged as "blog experiment." That way I could also seek reader feedback when it was appropriate for a particular experiment. For example, if I wanted to experiment with changing the layout of my blog, I'd want to include a poll and allow readers to weigh in through the comments on that post. If someone wanted to see all of my blog experiments, they could then use the tag to find them.
Beth brings up a good point about doing this kind of experiment--that Google Analytics would be a great tool for evaluating the impact. (Note--Beth has just put up a fabulous screencast and primer on using Analytics.) In writing up my results, I'd most likely include whatever I got from email and comment feedback, as well as any pertinent Analytics data. I'd have to think carefully, though, about which results to look at. For example, if I'm going to change my design, that might have an impact on the bounce rate, so I'd probably want to keep an eye on that, in addition to whatever reader feedback I got. If I found that a design change meant that people left my site more frequently, I'd want to re-evaluate that change. (Correction--per Beth's note in the comments, bounce rates aren't as meaningful for blogs as other metrics are. I wrote this post before I had a chance to take a close look at her Analytics materials, so I spoke out of turn on that one.)
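The before-and-after comparison described above could be done in a few lines of code once you've exported daily figures from Analytics. Here's a minimal sketch; the dates, bounce rates, and the `change_date` are all made-up numbers for illustration, not real data from any blog:

```python
from datetime import date

# Hypothetical daily bounce rates exported from Google Analytics:
# (date, bounce rate as a fraction of visits). Illustrative numbers only.
daily_bounce = [
    (date(2007, 2, 20), 0.61),
    (date(2007, 2, 21), 0.58),
    (date(2007, 2, 22), 0.63),
    (date(2007, 2, 25), 0.72),
    (date(2007, 2, 26), 0.70),
    (date(2007, 2, 27), 0.74),
]

# The (hypothetical) day the design change went live.
change_date = date(2007, 2, 24)

def average_bounce(rows, start=None, end=None):
    """Average the bounce rate over rows falling in [start, end)."""
    rates = [rate for day, rate in rows
             if (start is None or day >= start)
             and (end is None or day < end)]
    return sum(rates) / len(rates)

before = average_bounce(daily_bounce, end=change_date)
after = average_bounce(daily_bounce, start=change_date)
print(f"average bounce rate before: {before:.2f}, after: {after:.2f}")
```

If the "after" average were noticeably higher, that would be the signal (along with reader feedback) to re-evaluate the change--though per Beth's caveat, bounce rate alone isn't the best yardstick for a blog.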
I think I'd also be clearer about the results. In the example above, Brian indicates that he removed the widget he added to his site, but he doesn't say why. Maybe he skipped that because he already knows his reasons, but one of the values of sharing experiments is that others can learn from your experience. So I'd want to indicate why I decided to make a change, so that others could be forewarned.
The final question here is what kinds of experiments it makes sense to run. Obvious choices are layout and design changes that try to make it easier to find things on your site and to navigate through it. Other possibilities include:
- Using guest bloggers (as Brian did)
- Adding a widget or some other element to your site
- Playing around with posting frequency (Beth tried posting less last week and found that people started to worry about what had happened to her)
- Creating landing pages
- Adding a beginner's guide to your blog
- Adding multimedia, like a podcast or videocast
The great thing about blogging is that you can play around with a lot of things without a huge investment of time or resources. I like the idea of being more formal about the process, though, not only as a way to measure impact for myself, but also as a way to share information with other people. It would be really cool, actually, to create a wiki for sharing blog experiments so that we could start to create collective knowledge about strategies for blogging and the results of those experiments. Hmmm. . . another thing to add to my "some day" list.
In the meantime, what blog experiments have you run? What happened? And do you have a process for documenting and evaluating what you try?