
Suggestions for Improving Conferences

My last post (where a conference organizer had explained that they wanted me to speak not because of my speaking ability but because of the name of the company I worked for) has drawn some really interesting feedback both from conference organizers and from conference speakers. Some of it’s been in email conversations, but you can see some good stuff in the comments to the blog post.

It’s touched on several things that I’d really like to see improved about games-industry conferences. Here’s my personal list of high-level changes I’d like to see, with a detailed explanation of each:

  • Transparency of the selection process
  • Organizers to encourage/support publishing of session transcripts online
  • Conference websites from each year to be archived online forever; all URLs to be kept valid forever
  • The list of all submitted talk proposals to be made public each year
  • Simple feedback given on rejected talks
  • Written feedback given on accepted talks

Transparency

Personally, if I could see only one change then I’d love to see more transparency post-selection about what got selected and why. For people on the inside, I’m sure it all makes sense, and that the few short paragraphs posted on e.g. http://gdconf.com at RFP time “explain it all”, but to the rest of us it’s a giant black box into which go thousands of submissions and out of the other end a couple of hundred are chosen. And that’s it. We know nothing more.

(Incidentally, this year’s GDC has a new selection process that provides slightly more transparency to the speakers who are submitting talks. Instead of one “all-or-nothing” submission, there are now two stages, where the selection committee get to give you some feedback halfway through. We’ll have to wait until March 2009 to see how it turns out, but even though it’s just a small change, I already think it will result in a substantial improvement.)

More generally, I think increased transparency from conference organizers could go a long way to making big improvements in perception at a very small additional cost to the conference.

Without transparency, the organizers are making major decisions (accepting/rejecting talks) but forcing people to guess, theorize, and assume about why each decision goes the way it does. People pick up whatever observations they CAN make (e.g. Brian’s point about speaking at every AGDC until the year CMP took over), and they make reasonable extrapolations from there. The problem is, none of us has any idea whether Brian’s observation was due to deliberate decisions or random chance. But one of the other observations (that we can all make independently) is that the process is very opaque, and that allows us to invent all sorts of interpretations. It’s not fair, perhaps, but many people, starved of other data points to feed their understanding, start asking “if you’ve got nothing to hide, why are you hiding it? You must have a hidden agenda here”.

I’m not saying that’s fair. The more I’ve seen of the internal workings of closed organizations, the more I’ve realised the truth of the statement: “Never mistake mediocrity for malice – to the external observer their symptoms are largely indistinguishable”. (i.e. the “mediocrity” is that no-one is perfect, and there will be times when a selection panel is unlucky and accidentally chooses some bad sessions; this is inevitable, but from the outside it’s easy to wonder whether it was deliberate.)

Transcripts

Transcripts could be published during or after the conference. If there were concerns, a moratorium could be requested – e.g. don’t post until 2 weeks after the conference – although I think that would be largely unnecessary. People don’t travel hundreds (or thousands) of miles just to sit in a lecture theatre: the value of a conference is a lot more than the talks themselves.

Transcripts have a whole bunch of benefits, but they are time-consuming to produce. I’ve been writing transcripts of every talk I attended this year (GDC 2008, ION 2008, etc), and adding my own commentary. Having attendees voluntarily do this is one approach, but right now that’s neither recognized nor encouraged by organizers.

So, … “support” could be as simple as publishing an RSS feed of all known session transcripts written by third parties – Darius Kazemi and I did this for ION 2008 earlier this year, and are planning to expand it for future conferences. It would be nice if conference organizers would do the collation and stick it on the conference website front page, for example.
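As a concrete illustration, here’s a minimal sketch of such a feed, assuming nothing more than a hand-collated list of third-party transcript posts (the titles and URLs below are invented for illustration – this is not the actual ION 2008 tooling). It uses only the Python standard library:

```python
# Minimal sketch: turn a hand-collated list of third-party session
# transcripts into an RSS 2.0 feed a conference site could publish.
# All titles/URLs are hypothetical placeholders.

import xml.etree.ElementTree as ET

TRANSCRIPTS = [
    ("Keynote transcript", "http://example.com/ion2008-keynote"),
    ("Server-architecture panel transcript", "http://example.com/ion2008-servers"),
]

def build_feed(conference: str, items) -> str:
    """Build an RSS 2.0 document listing one <item> per transcript."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"{conference}: session transcripts"
    ET.SubElement(channel, "link").text = "http://example.com/transcripts"
    ET.SubElement(channel, "description").text = (
        f"Third-party transcripts of {conference} sessions"
    )
    for title, url in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
    return ET.tostring(rss, encoding="unicode")

if __name__ == "__main__":
    print(build_feed("ION 2008", TRANSCRIPTS))
```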

Site URLs / Archives

  1. URLs are forever. They never change. This is a fundamental design rule of the Uniform Resource Locator concept. If I bookmark a page NOW, then when I click on it in 12 months’ time it should still be the same page.
  2. Time-sensitive sites (like conferences, where the front page, the newsfeed of “events happening at the conference today”, the list of speakers, the list of sessions, the session abstracts, etc. are ALL only valid for a one-week period and then never change) should be preserved intact as soon as they expire / the event is over. e.g. in 2008, you should be able to browse the 2007 website, intact (see the sketch after this list).
  3. Important industry publications – where “publication” includes “a talk given to an audience” – need to be:
    1. Referenceable (you can easily find a link to the publication, save that link, and add it as a footnote / bibliography reference in your own future publications)
    2. Indexable (you can search on the content – at the very least the abstracts, but ideally the full text)
    3. Retrievable (other people can re-read / re-experience the publication in future, e.g. when following up on your references)
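To make points 1 and 2 concrete, here’s a minimal sketch (my own invention, not any conference’s real setup) of a URL scheme that can honour them: every page lives under a per-year namespace whose files are frozen once the event ends, and only the un-dated “current” URLs redirect – so a bookmark to a dated page keeps working in 12 months’ time:

```python
# Sketch of a never-break-URLs scheme for a conference site.
# ARCHIVE_ROOT holds one frozen directory per year (hypothetical path).
# /2007/sessions.html is served verbatim forever; bare /sessions.html
# merely redirects into the current year's namespace.

import os
from wsgiref.simple_server import make_server

ARCHIVE_ROOT = "/var/www/conference"  # hypothetical: one subdir per year
CURRENT_YEAR = "2008"

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    parts = path.strip("/").split("/", 1)

    if not parts[0].isdigit():
        # Un-dated URL: temporary redirect (not permanent!) into the
        # current year, because next year it will point somewhere new.
        start_response("302 Found",
                       [("Location", f"/{CURRENT_YEAR}/{path.lstrip('/')}")])
        return [b""]

    # Dated URL: serve the frozen archive for that year, forever.
    filename = os.path.join(ARCHIVE_ROOT, parts[0],
                            parts[1] if len(parts) > 1 else "index.html")
    try:
        with open(filename, "rb") as f:
            body = f.read()
        start_response("200 OK", [("Content-Type", "text/html")])
        return [body]
    except FileNotFoundError:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not found"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```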

Games-industry conferences in general tend to do poorly on those aspects. Mainstream technology-industry conferences do much better, especially at understanding and appreciating the reference / bibliography / “archive of knowledge for future use” aspect. I’m not sure I’ve explained that very well, so I’m going to run through some concrete examples using GDC. Not because GDC is especially bad, but because it’s amassed a bigger and broader knowledge repository than any other conference, so it’s easier to find examples:

  • URLs from each year are routinely broken, or overwritten with conflicting info the following year. For instance, view any session abstract from last year (here’s one copy/pasted from my Conferences page on this site: https://www.cmpevents.com/GD07/a.asp?option=C&V=11&SessID=3889), and then click on the “VIEW ALL SESSIONS” link in the middle of the navbar. It links to a generic URL that has already been overwritten with the 2008 sessions. The list of 2007 sessions is now inaccessible.
  • Even though GDC’s organizers record most/all sessions with some combination of audio and video, the audio/video versions of each talk can only be downloaded for a limited period of time (about half a year, IIRC – which is nothing considering people like me are still regularly referencing talks from 2000). After that, I’m not sure what happens – I doubt they get deleted, but I’ve been unable to access old ones that I had previously been able to download.
  • There’s no indexable copy of the session *contents*. If your google-fu is good enough, you can find the PowerPoint download links for recent GDC talks, but if you then take key phrases from those talks and put them into Google, you get zero hits; i.e. Google is not indexing them – and GDC’s organizers provide no interface for doing such a search through their own site.

Publication of all submitted *and rejected* proposals

When the AGDC vote-for-a-session took place, I found several interesting and useful pieces of information just by looking through the submissions – information that, to my knowledge, CMP/GDC had never released before. I guess that’s simply because no-one realised it would be useful. Fair enough – one of the benefits of general transparency is that organizers don’t have to be able to see into the future to work out what will be useful; they just let people use all the information as they see fit. These are concrete things that, as far as I can see, have no value in being kept secret and clear value in being shared, since they enable speakers to improve their current and future talks.

For instance, there were several topic titles that many people had independently submitted on. Most of them were not the things I’d have expected there to be duplication on; knowing that helps me to understand next year what other people are already providing good submissions for, and helps ME to find interesting/new/different things instead. More diversity = more choice for next year’s selection committee = (probably) increased quality and interest for attendees? I don’t know, but it seems likely.

As another example, I already had a session accepted by the advisory board, but by browsing the other session titles that were rejected (and, in fact, by looking at the votes they received), I was able to get some feel for the context of what the audience and organizers wanted / didn’t want.

Feedback on accepted talks

Taking that further, here’s an example of where more transparency *could* help: I have never spoken at a conference where the organizers explained to me at acceptance time why they accepted my submission (I assume because of the logistics of commenting on every accepted talk – they just don’t have the time/energy). If organizers could let speakers know what they particularly liked about a talk, there’s a better chance the speakers will fulfil those expectations. I’ve spoken to organizers much later on – usually at the start of or during the conference – about why they chose my talk, and sometimes the reasons were substantially different from what I had assumed.

Assuming there is some note-taking that occurs during the selection process, it would be great to have even just one or two sentences of feedback for each accepted talk. The organizers are, to a certain extent, taking a huge risk on these speakers, so the investment of time to take down those notes is probably worth it to reduce the risks of a bad talk.

Feedback on rejected talks

This one is harder, because there’s an order of magnitude more of them, and many may get rejected very quickly. I wonder if a simple numerical rating system could be used, so that selection-committee members still don’t have to spend much time rejecting a talk, but can at least give the speaker a rough indication of why it was rejected.

Perhaps a 1–5 score against each aspect, where 1 means “this is the only reason I rejected it”, and 5 means “I would have accepted the talk on this aspect, had I not already rejected it for other reasons”:

  • 1..5: Non-innovative or duplicate subject matter (aka “your talk is dull and has nothing new; the content is too basic or a waste of time”)
  • 1..5: Insufficient evidence of speaker’s ability to communicate (aka “we’re not sure you can speak in public well enough for a conference of our size; get more experience at speaking, or give us more evidence that you CAN speak well next time”)
  • 1..5: Inappropriate subject matter for this conference / coming from this speaker (aka “your topic has nothing to do with computer games / you appear to have no personal knowledge or understanding of this subject area, come back when you know what you’re talking about”)

(Those are intentionally a little vague and all-encompassing, for two reasons: firstly, to preserve some ambiguity when talks are really bad and no-one wants to insult the submitter; and secondly, so that people filling in the feedback don’t have to put much effort into narrowing down PRECISELY why they rejected it – they can do it very quickly and loosely.)
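To show how little machinery this would need, here’s a rough sketch of capturing those three scores and generating the one-line-per-aspect note a rejected speaker might receive. (This is my own illustration, not any conference’s actual tooling; the axis names follow the list above, and everything else is hypothetical.)

```python
# Rough sketch: three 1-5 scores per rejected talk, rendered as a short
# note for the speaker. Axis names mirror the bullet list above;
# everything else is hypothetical.

from dataclasses import dataclass

AXES = {
    "novelty": "Non-innovative or duplicate subject matter",
    "speaking": "Insufficient evidence of speaker's ability to communicate",
    "relevance": "Inappropriate subject matter for this conference/speaker",
}

@dataclass
class RejectionFeedback:
    # 1 = this aspect is why we rejected it; 5 = this aspect was fine.
    novelty: int
    speaking: int
    relevance: int

    def summary(self) -> str:
        """One line per axis -- fast and loose, but better than silence."""
        lines = []
        for field, description in AXES.items():
            score = getattr(self, field)
            if not 1 <= score <= 5:
                raise ValueError(f"{field} score must be 1-5, got {score}")
            lines.append(f"{description}: {score}/5")
        return "\n".join(lines)

if __name__ == "__main__":
    # Example: a talk rejected mainly for covering well-trodden ground.
    print(RejectionFeedback(novelty=1, speaking=4, relevance=5).summary())
```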

I’m working on the assumption that “fast and loose” feedback would be better than the current standard at most conferences, which is simply “no feedback at all”.

7 replies on “Suggestions for Improving Conferences”

On transcripts, wouldn’t it be great if conferences could provide an incentive to write them? Something along the lines of “10% off next year’s registration if you post 3 transcripts of 500+ words each on your blog”.

Granted, there’d be HUGE problems with defining what constitutes a blog, who counts as a real journalist (you shouldn’t get the discount if you’re paid to do it anyway), etc. But in the end, by posting transcripts you are promoting their conference, so I think it would make sense if you could work out the pesky details.

I would much rather have conferences publish audio (and maybe even video) of talks than worry too much about transcripts. When I’m unable to attend a talk, that’s the way I prefer to consume it. Transcripts plus slides might be a good start though.

I like all of your suggestions, but I don’t expect we’ll see much change on most of them. I don’t know about the big shows, but ION is small enough that basically everyone involved in the conference is doing it part-time. Advisors are fitting in advisor meetings and voting on sessions around their crazy schedules. Even the conference director has a day job. Everything you suggest (except maybe publishing the proposals) would significantly add to the time it takes to put on the conference in the first place.

Of all of those, the one I would most like to see is feedback on rejected sessions. GDC has rejected me something like 8 times now. Once or twice I’ve received a generic “You were rejected, better luck next year” letter. Most of the time I just don’t show up on the schedule, so I assume they rejected me. Neither of those outcomes tells me what I need to do to actually get accepted the next time I apply. It’s hard to debug a process when you get no feedback whatsoever. Sadly, we didn’t have time to do that for ION last year and probably won’t this year either.

I empathize with you in wanting feedback on rejected sessions. I’ve been rejected as a GDC speaker at least half a dozen times, and I’m left wondering: Was my proposal poorly put together? Was there overlap with another talk? Was my topic not interesting enough? Do I not work for the right company? Do I have some sort of stigma?

Having run ION for two years, I now see things from a much different perspective. We have to reject many talks, and some of the talks we reject are potentially very good. Often the process involves ranking all the potential sessions, and accepting the top N, where we’ve predetermined N based on the number of concurrent sessions and conference duration. Other factors that we consider are: Have we already accepted a talk (higher on the list) for this topic? Is the speaker already approved for too many other sessions?

Ironically, the proposals closest to the cutoff line – hence the best of the rejected talks – are the ones where it’s hardest to explain why they were rejected. They simply didn’t make the cut. Most of the “close calls” didn’t have any single aspect to criticize – they just weren’t at the top of the list.

The ones at the bottom of the list are pretty easy to explain. They were marketing pitches rather than information. The speaker was known to be a poor presenter (either from previous speaker feedback or because someone had heard them present at another conference). They didn’t have the experience to talk knowledgeably about their subject.

There are two problems, from my point of view, with providing feedback. First, as I described, we would only have meaningful explicit feedback on the worst sessions. The ones that nearly made the cut? What could we say – “you were almost good enough, but not quite”? The second problem is that selecting sessions is a tremendous amount of work, along with gathering and organizing all the data on the approved sessions and speakers. Providing feedback to the rejected speakers would be inviting them to participate in a dialog about how to make their session proposals better for next year. Or worse, they might want to revise their proposal to address the issues that figured into its rejection. In essence it would be a massive rathole diversion. We wouldn’t be able to keep up with all the questions and feedback.

I’m not sure if there’s a better way to provide transparency without opening this can of worms. Maybe we could just give information on the overall ranking of proposals. Then, you would know that if your proposal was ranked 82nd, and there were 80 sessions, you were very close. Of course, this might also be very humiliating to someone whose proposal was ranked 184th out of 184 total.

Just as a further comment to my statement:

“Providing feedback to the rejected speakers would be inviting them to participate in a dialog about how to make their session proposals better for next year.”

This isn’t intrinsically bad. I believe sharing this kind of information would be beneficial all around. The problem is that it is prohibitively impractical – at least for us.

Yeah, I foresaw there would be a lot of fear about the implicit invitation to a dialog.

But … that is a recurring problem with all online businesses, and there are many generic solutions that people have come up with.

For instance, you could provide a forum for all the speakers, where you do not post but which allows them to have that dialog among themselves (“I got told I was rejected for X. Why?”).

Generally, thanks for the insights. From my end, what it comes down to is: right now, conferences are doing NOTHING to help the speakers; are there some small-enough-to-be-practical steps that organizers might try which could give disproportionately large benefit to the speakers? I believe there are. (Your numerical ranking, for instance, is a good idea – anything that gives “some” indication is better than no indication at all.)

Heck, we’re computer-game developers. The concept of feedback-directed-optimization is bread and butter to us, and we have relied on shortcuts and cheats and hacks to get our products to market for the best part of 30 years. This should be easy!

Awesome brain dump!

Happy that we are doing a lot of the things you suggest with our approach to conferencing. Admittedly, we don’t have the same scale/scope as GDC…

Given we are using WordPress as the website for the Leadership Forum (so, built-in archiving and automatic RSSing), some of what you recommend is done by default. Regarding transcripts, well, we just posted all the video for free via Google Video (or hi-res via purchased DVD). But, since we knew that would take a while, we actually got a team of volunteers to live-blog sessions during the conference and post directly to the conference website.

Regarding transparency, I’m all for it. As you’ve suspected, it largely comes down to time/effort. Though, for this year’s Leadership Forum, we did make an effort to send basic reject feedback.

FYI, web site is at:
http://www.igda.org/leadership/

Jason
