My last post (where a conference organizer had explained that they wanted me to speak not because of my speaking ability but because of the name of the company I worked for) has drawn some really interesting feedback both from conference organizers and from conference speakers. Some of it’s been in email conversations, but you can see some good stuff in the comments to the blog post.
It’s touched on several things that I’d really like to see improved about games-industry conferences. Here’s my personal list of high-level changes I’d like to see, with a detailed explanation of each:
- Transparency of selection process
- Organizers to encourage/support the publishing of session transcripts online
- Conference websites from each year to be archived online forever; all URLs to be kept valid forever
- List of all submitted talk proposals to be made public each year
- Simple feedback given to rejected talks
- Written feedback given to accepted talks
Transparency of selection process

Personally, if I could see only one change, I’d love to see more transparency post-selection about what got selected and why. For people on the inside I’m sure it all makes sense, and the few short paragraphs posted on e.g. http://gdconf.com at RFP time “explain it all”, but to the rest of us it’s a giant black box: thousands of submissions go in, a couple of hundred come out the other end, and that’s it. We know nothing more.
(Incidentally, this year’s GDC has a new selection process that provides slightly more transparency to the speakers who are submitting talks. Instead of one “all-or-nothing” submission, there are now two stages, with the selection committee giving you some feedback halfway through. We’ll have to wait until March 2009 to see how it turns out, but I suspect that even this small change will result in a substantial improvement.)
More generally, I think increased transparency from conference organizers could go a long way to making big improvements in perception at a very small additional cost to the conference.
Without transparency, the organizers are making major decisions (accepting/rejecting talks) but forcing people to guess, theorize, and assume about why each decision goes the way it does. People pick up whatever observations they CAN make (e.g. Brian’s point about speaking at every AGDC until the year CMP took over), and make reasonable extrapolations from there. The problem is, none of us has any idea whether Brian’s observation reflects deliberate decisions or random chance. But one observation we can all make independently is that the process is very opaque, and that allows us to invent all sorts of interpretations. It’s not fair, perhaps, but when people are starved of other data points, many start asking: “If you’ve got nothing to hide, why are you hiding it? You must have a hidden agenda here.”
I’m not saying that’s fair. The more I’ve seen of the internal workings of closed organizations, the more I’ve realised the truth of the statement: “Never mistake mediocrity for malice – to the external observer their symptoms are largely indistinguishable”. (The “mediocrity” here is that no-one is perfect: there will be times when a selection panel is unlucky and accidentally chooses some bad sessions. That’s inevitable – but from the outside, how do you tell it apart from something deliberate?)
Publishing of session transcripts

Transcripts could be published during or after the conference. If there were concerns, a moratorium could be requested – e.g. don’t post until two weeks after the conference – although I think that would be largely unnecessary. People don’t travel hundreds (or thousands) of miles just to sit in a lecture theatre: the value of a conference is a lot more than the talks themselves.
Transcripts have a whole bunch of benefits, but they are time-consuming to produce. I’ve been writing transcripts of every talk I attended this year (GDC 2008, ION 2008, etc), and adding my own commentary. Having attendees voluntarily do this is one approach, but right now that’s neither recognized nor encouraged by organizers.
So “support” could be as simple as publishing an RSS feed of all known session transcripts written by third parties – Darius Kazemi and I did this for ION 2008 earlier this year, and we’re planning to expand it for future conferences. It would be even better if conference organizers did the collation themselves and put it on the conference website’s front page, for example.
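To show how little work this would be for an organizer, here’s a minimal sketch of collating third-party transcript links into an RSS 2.0 feed, using only Python’s standard library. The transcript titles, URLs, and feed address are hypothetical placeholders, not the real ION 2008 feed:

```python
# Sketch: build a minimal RSS 2.0 feed of third-party session transcripts.
# All titles and URLs below are made-up placeholders for illustration.
import xml.etree.ElementTree as ET

transcripts = [
    {"title": "ION 2008: Keynote (transcript by A. Blogger)",
     "link": "http://example.com/ion2008-keynote-transcript",
     "description": "Full transcript with commentary."},
    {"title": "ION 2008: MMO design panel (transcript by B. Writer)",
     "link": "http://example.com/ion2008-mmo-panel",
     "description": "Partial transcript; Q&A summarized."},
]

def build_feed(items, feed_title="ION 2008 session transcripts"):
    """Return an RSS 2.0 document (as a string) linking each transcript."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = feed_title
    ET.SubElement(channel, "link").text = "http://example.com/transcripts"
    ET.SubElement(channel, "description").text = \
        "Collated links to session transcripts written by attendees."
    for it in items:
        item = ET.SubElement(channel, "item")
        for tag in ("title", "link", "description"):
            ET.SubElement(item, tag).text = it[tag]
    return ET.tostring(rss, encoding="unicode")

feed_xml = build_feed(transcripts)
```

The output is a static XML file, so an organizer could regenerate and upload it whenever a new transcript turns up – no server-side machinery required.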
Site URLs / Archives
- URLs are forever. They never change. This is a fundamental design principle of the Uniform Resource Locator concept. If I bookmark a page NOW, then when I click on it in 12 months’ time it should still be the same page
- Time-sensitive sites (like conferences, where the front page, the newsfeed of “events happening at the conference today”, the list of speakers, the list of sessions, the session abstracts, etc – ALL are only valid for a one-week period and then never change) should be preserved intact as soon as they expire / the event is over. e.g. in 2008, you should be able to browse the 2007 website, intact
- Important industry publications – “publication” includes “a talk given to an audience” – need to be:
- Referenceable (you can easily find a link to the publication, save that link, and add it as a footnote / bibliography reference in your own future publications)
- Indexable (you can search on the content – at the very least the abstracts, but ideally the full text)
- Retrievable (other people can re-read / re-experience the publication in future, e.g. when following up on your references)
Games-industry conferences in general tend to do poorly at those aspects. Mainstream technology-industry conferences do much better, especially at understanding and appreciating the reference / bibliography / “archive of knowledge for future use” aspect. I’m not sure I’ve explained that very well, so I’m going to run through some concrete examples using GDC – not because GDC is especially bad, but because it’s amassed a bigger and broader knowledge repository than any other conference, so it’s easier to find examples:
- URLs from each year are routinely broken, or overwritten with conflicting info the following year. For instance, view any session abstract from last year (here’s one copy/pasted from my Conferences page on this site: https://www.cmpevents.com/GD07/a.asp?option=C&V=11&SessID=3889), and then click on the “VIEW ALL SESSIONS” link in the middle of the navbar. It links to a generic URL that has already been overwritten with the 2008 sessions. The list of 2007 sessions is now inaccessible.
- Even though GDC’s organizers record most/all sessions with some combination of audio and video, the audio/video versions of each talk can only be downloaded for a limited period of time (about half a year, IIRC – which is nothing considering people like myself are still regularly referencing talks from 2000). After that, I’m not sure what happens – I doubt they get deleted, but I’ve been unable to access old ones that I had previously been able to download
- There’s no indexable copy of the session *contents*. If your Google-fu is good enough, you can find the PowerPoint download links for recent GDC talks, but if you then take key phrases from those talks and put them into Google, you get zero hits; i.e. Google is not indexing them – and GDC’s organizers provide no interface for such a search on their own site
Publication of all submitted *and rejected* proposals
When the AGDC vote-for-a-session took place, I found several interesting and useful pieces of information just by looking through the submissions – information that CMP/GDC have never released before (to my knowledge), I guess simply because no-one realised it would be useful. Fair enough – one of the benefits of general transparency is that organizers don’t have to see into the future to work out what will be useful; they just let people use all the information as they see fit. These are concrete things that, as far as I can see, have no value in being kept secret and clear value in being shared, since they enable speakers to improve their current and future talks.
For instance, there were several topic titles that many people had independently submitted. Most of them were not the things I’d have expected duplication on; knowing that helps me to understand next year what other people are already providing good submissions for, and helps ME to find interesting/new/different things instead. More diversity = more choice for next year’s selection committee = (probably) increased quality and interest for attendees? I don’t know, but it seems likely.
For instance, I already had a session accepted by the advisory board, but by browsing the other session titles that were rejected (and, in fact, by looking at the votes they received), I was able to get some feel of the context of what the audience and organizers wanted / didn’t want.
Feedback on accepted talks
Taking that further, here’s an example of where more transparency *could* help: I have never spoken at a conference where the organizers explained to me at acceptance time why they accepted my submission (I assume because of the logistics of commenting on every accepted talk – they just don’t have the time/energy). If organizers let speakers know what they particularly liked about a talk, there’s a better chance the speakers will fulfil expectations. I’ve spoken to organizers much later on – usually at the start of or during the conference – about why they chose my talk, and sometimes the reasons were substantially different from what I had assumed.
Assuming there is some note-taking that occurs during the selection process, it would be great to have even just one or two sentences of feedback for each accepted talk. The organizers are, to a certain extent, taking a huge risk on these speakers, so the investment of time to take down those notes is probably worth it to reduce the risks of a bad talk.
Feedback on rejected talks
This one is harder, because there’s an order of magnitude more of them, and many may get rejected very very quickly. I wonder if a simple numerical rating system could be used, so that selection committee members still don’t have to spend much time rejecting a talk, but can at least give a rough indication to the speaker why they rejected it.
Perhaps a 1–5 system per aspect, where 1 means “this was the reason I rejected it” and 5 means “this aspect was fine – I’d have accepted the talk on this aspect alone if I hadn’t already rejected it for other reasons”:
- 1..5: Non-innovative or duplicate subject matter (aka “your talk is dull and has nothing new; the content is too basic or a waste of time”)
- 1..5: Insufficient evidence of speaker’s ability to communicate (aka “we’re not sure you can speak in public well enough for a conference of our size; get more experience at speaking, or give us more evidence that you CAN speak well next time”)
- 1..5: Inappropriate subject matter for this conference / coming from this speaker (aka “your topic has nothing to do with computer games / you appear to have no personal knowledge or understanding of this subject area, come back when you know what you’re talking about”)
(those are intentionally a little vague and all-encompassing for two reasons. Firstly to preserve some ambiguity when talks are really bad and no-one wants to insult the submitter, and secondly so that people filling in the feedback don’t have to put much effort into narrowing down PRECISELY why they rejected it – they can do it very quickly and loosely).
I’m working on the assumption that “fast and loose” feedback would be better than the current standard of most conferences which is simply “no feedback at all”.
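To make the “fast and loose” idea concrete, here’s a minimal sketch of how such per-axis rejection scores could be recorded and turned into a one-line note for the submitter. The axis names and summary wording are my own labels for the three aspects above, not anything any conference actually uses:

```python
# Sketch: "fast and loose" rejection feedback on three 1-5 axes.
# Axis names and message wording are hypothetical labels for the
# three aspects proposed above (novelty, speaking ability, relevance).
AXES = ("novelty", "speaking_evidence", "relevance")

def feedback_summary(scores):
    """Turn per-axis 1-5 scores into a one-line note for the submitter.

    1 = this axis was the reason for rejection;
    5 = fine on this axis, rejected for other reasons.
    """
    for axis in AXES:
        if not 1 <= scores[axis] <= 5:
            raise ValueError(f"{axis} score must be 1..5")
    # The lowest-scoring axis is the main thing to improve next year.
    weakest = min(AXES, key=lambda a: scores[a])
    detail = "; ".join(f"{a}={scores[a]}" for a in AXES)
    return f"Weakest axis: {weakest} ({scores[weakest]}/5); {detail}"

# Example: a relevant, well-presented talk rejected mainly for covering
# well-trodden ground.
note = feedback_summary({"novelty": 2, "speaking_evidence": 4, "relevance": 5})
```

A committee member fills in three numbers per rejection – seconds of work – and the submitter still learns which axis sank the talk.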