With only 250 tickets available, I guess a lot of people in Brighton will be getting one of these today:
Dear adam martin
I’m sorry to inform you that your application to attend TEDxBrighton on 21st January has been unsuccessful.
As the first TEDxBrighton event, and offering free tickets, we have had a huge level of interest and the ticket application was very oversubscribed. … hope that in the future we might be able to offer a TEDxBrighton event with a larger capacity than the 250 this one can host.
Selection criteria in 2011…
It was an unusual process for a public event – the tickets are free, but there are very few of them, and to be “allowed” a ticket you had to go through a review process, answering questions ranging from the obvious, like “who are you?”, to the bizarre, like “what’s your favourite web-site?”.
I remember at the time thinking it seemed very reasonable at the start, but increasingly invasive and judgemental towards the end. You want to allow/deny access based on the personal reading habits of the visitors? IMHO that comes perilously close to opening a can of worms that conference organizers should be steering clear of.
But it’s a brand with a very high reputation, so I ran with it, intrigued to see what would happen. I felt I had as good a chance as anyone – the conference is taking place in my home city, very close to where I live, and many of the TED themes have been a big part of my career and background.
Now that it’s done, I’m rather disappointed. (And of course disappointed not to be attending the conference!) For such a high level of invasiveness, and an arrogant (although justified!) approach of “don’t call us, we’ll call you … but only if we like you enough”, I was expecting at least *some* kind of feedback :). This is the age of feedback, A/B testing, validation, and openness.
(c.f. my post the other day on UK Education and the A-Level blacklists: on the whole, those institutions that are holding-back info about public decisions tend to be frowned on these days)
What were their criteria? Who did they accept, and who did they reject? Why?
It’s not who you choose, it’s *how* you choose them
Over the years, I’ve become innately suspicious of any and all selection processes that aren’t fully “open”: ones with the judging criteria clearly documented in advance, and ideally with actual (theoretical) examples of good and bad submissions.
Partly … because of my own experience as a judge. I’ve judged or helped judge everything from obscure community programming contests, through game-design contests with cash prizes, to competitions giving hundreds of thousands of dollars in cash funding to new businesses.
Every time the judging criteria were given to candidates in advance, the overall quality of submissions was massively better, across the board. Every time the criteria were vague or secretive, the volume of crappy submissions was depressingly high.
…speaking of which, I still have some user-submitted game ideas from 6 months ago that I promised to review publicly and critique on this blog. Every time I fire up the laptop for a long journey, I pull them out and go over them again, and I can only apologise profusely that most of them are still unpublished. A New Year’s resolution for me, perhaps?