Category Archives: server admin

How to fix: upgrading Apache to 2.4 / PHP 7 breaks WordPress

WordPress had a critical update recently, and I got tonnes of emails (one from each blog I run) demanding I upgrade NOW. So I did – and upgraded Apache to the latest version while I was at it.

Oh dear. All sites offline. First:

Unable to connect

…then, when I fixed Apache, I got:

“Your PHP installation appears to be missing the MySQL extension which is required by WordPress.”

What happened, and how do I fix it?

Apache 2.4 upgrade is a bit dodgy in Debian

The Powers That Be decided to mess around with core parts of the config files. The right thing to do would have been to add some interactive part in the upgrade script that said: “By the way, I’ve made all your websites broken and inaccessible, because they need to be in a new subfolder. Shall I move them for you?”

Here’s the reason and the quick-fix too

The Apache 2.4 upgrade pulls in PHP 7.0, replacing PHP 5

PHP 5 is old, very old. Historically, PHP has also been managed in a fairly shoddy manner, very cavalier with regards to upgrades, compatibility, safety, security.

So … the standard way to run PHP is to have a separate folder on your server for each “version” of PHP. Everyone does this; PHP is so crappy that you have little alternative.

But this also means that when Debian “upgrades” to PHP 7, there is no warning that a new config file – specific to PHP 7 – has been created, and that your existing config file is now ignored.

This is wrong in all ways, but it’s forced upon Linux users by the crapness of PHP. If PHP weren’t so crap, we’d have a single global PHP config file – /etc/php/config.ini – and maybe small override files per version. But nooooooo – can’t do that! PHP is far too crap.

(did I say PHP is crap yet? Decent language, great for what it was meant for – but the (mis)management over the years is truckloads of #facepalm)

So, instead, you need to copy your PHP 5 ini over the top of your PHP 7 ini – or at least “diff” them and find the settings that are “off by default” in PHP 7 but must be “on” … e.g. MySQL!

Enable them. (Note: the commented-out line in a stock php.ini uses the Windows naming, php_mysqli.dll; on a Debian server the module is mysqli.so.) Change this:

;extension=mysqli.so

to this:

extension=mysqli.so

…and restart Apache. Suddenly WordPress is back online!

/etc/init.d/apache2 restart
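The hunt for “off by default” settings is easy to script; here’s a sketch using a stand-in file (on a real Debian box the inis live at /etc/php5/apache2/php.ini and /etc/php/7.0/apache2/php.ini):

```shell
# Stand-in php.ini for illustration; point at the real file on your server.
ini=$(mktemp)
cat > "$ini" <<'EOF'
extension=gd.so
;extension=mysqli.so
;extension=curl.so
EOF

# List extensions that are present but disabled (leading ";"):
grep '^;extension=' "$ini" | sed 's/^;//'
```

On Debian the cleaner route (package names from memory – check apt-cache first) is apt-get install php7.0-mysql followed by phpenmod mysqli, then the Apache restart above.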

Site disappeared? Apache ARGH!

Transferring server to new home.

Allowed it to install latest versions of core software.

Turns out … the lovely people at Apache have made major site-breaking changes to their config system, so that upgrading WILL break existing sites.

Took an hour to discover that :(.

Webmail on your Debian server: exim4 + dovecot + roundcube

2015 UPDATE: I discovered that dovecot now uses MUCH longer passwords than it used to, and the database tables I’d found online WILL FAIL to authenticate (they truncate your passwords!). Fixed below
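The fix comes down to widening the password column so longer hashes aren’t truncated – a hedged sketch, with hypothetical table/column names (yours will differ, depending which guide you followed):

```sql
-- Hypothetical names. Newer dovecot password schemes (SSHA512, BLF-CRYPT,
-- etc.) produce hashes far longer than the 32/64-character columns the old
-- online guides created.
ALTER TABLE users MODIFY password VARCHAR(255) NOT NULL;
```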

95% of linux configuration on Debian servers is simple, well-documented, well-designed, easy to do, with only a tiny bit of reading of docs.

Sadly, “making email work” makes up most of the other 5%: nearly impossible, very badly designed, badly packaged/documented. This OUGHT to take an hour or two; in practice it takes ONE WEEK to set up. WTF? In 2014? Unacceptable!

So I took several incomplete/broken guides, dozens of pages of help and advice, and synthesized this complete, step-by-step guide. This should get you the webmail you actually want (!) in an hour or less.

Continue reading

WordPress’s Akismet pushes Database to 30x real size

Like half the planet, I use WordPress as a blogging platform. There are many good things about it. One of those used to be Akismet (if you ignore the slightly unpleasant sales strategy: it’s free, if you give WordPress some personal details) – but today I hit a very serious bug in Akismet. I’ve had to delete Akismet – but WordPress’s coders don’t clean up after themselves, so you have to do some manual clean-up too. Read on.
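For reference, the bloat lives in WordPress’s commentmeta table, and the manual clean-up is along these lines (hedged sketch – standard wp_ table prefix assumed; back up first):

```sql
-- Remove Akismet's per-comment metadata rows (the bulk of the bloat),
-- then reclaim the space.
DELETE FROM wp_commentmeta WHERE meta_key LIKE 'akismet_%';
OPTIMIZE TABLE wp_commentmeta;
```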
Continue reading

Unity: Git source control – a basic .gitignore

The most popular hit for “unity gitignore” is a post on the official Unity forums that was written by someone who doesn’t seem to fully understand how git works. Which was a little disappointing.

Before you commit anything to git, you MUST go into Unity’s menus and enable the “metadata” option, i.e. visible meta files (it’s in the editor settings, and moves around a bit between versions). Without that, Unity’s internal data-bugs will corrupt your project if you ever merge (happens to us approx 1 time in 3 if we forget at the start of a project).

After a bit of trial and error, here’s a basic .gitignore for Unity that seems to work, and cut down the commit sizes by a factor of 4 immediately:

UPDATE: updated with a tried-and-tested gitignore that covers more things. Do please note the big WARNING though, or you’ll only have yourself to blame…

##################
# Unity ignores:
#
# !!! WARNING !!!
#
# … you MUST convert Unity to using Metafiles *before* you start using this .gitignore file,
# or you WILL lose data!
#
#####
#
# !!! WARNING !!!
#
# … don’t forget to [git add “*”] (quotes are required!) when adding new files, or git will silently ignore them
#

# OS X only (not needed on other platforms)
.DS_Store
*.swp
*.Trashes

# All platforms
Library
Temp
*.csproj
*.pidb
*.unityproj
*.sln
*.userprefs
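One way to sanity-check the rules before trusting them with a real project is git check-ignore, in a throwaway repo:

```shell
# Throwaway repo purely to test the ignore rules.
repo=$(mktemp -d)
cd "$repo"
git init -q
cat > .gitignore <<'EOF'
Library
Temp
*.csproj
*.pidb
*.unityproj
*.sln
*.userprefs
EOF

git check-ignore -v Library Temp/cache Game.csproj       # all reported ignored
git check-ignore Assets/Player.cs || echo "Assets/Player.cs not ignored (good)"
```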

Ruby on Rails dead. All sites p0wned. GitHub shoots the messenger?

Two things here: if you run any Rails site, check out the security hole ASAP if you haven’t already. You might be safe – but given that even GitHub wasn’t, I’d double check if I were you. (The Rails community seemingly isn’t patching it – and there’s nothing recent on the Security list. Which leaves me going: WTF? The evidence is right there on GitHub of how bad this is right now, in the wild).

Secondly … what just happened? Apart from doom and gloom and “the end of every unpatched Rails site on the planet”, there’s a fun story behind this one. As someone put it, “it’s the whitest of white-hat attacks” (i.e. the “attacker”’s motives appear extremely innocent – but foolish and naive)

It seems that GitHub got hit by the world’s nastiest security hole, in Rails – trivial to take advantage of, and utterly lethal. The hole appears to allow pretty much anyone, any time, to do anything, anywhere – while PRETENDING to be any other user of the system. So, for instance, in the attack itself, someone inserted arbitrary source code into a project they had no right to.

Hmm. That’s bad. It effectively destroys GitHub’s entire business (it’s already fixed, don’t worry)

But it gets worse … it’s a flaw in the RoR framework, not GitHub itself (although apparently GitHub’s authors were supposed to know about the flaw by reading the Rails docs, as far as I can tell from a quick glimpse at the background). Rails authors have (allegedly) known about it and underestimated how bad it is in the wild, and left Rails completely open with zero security by default.

So, allegedly, the same attack works for most of the web’s large Web 2.0 sites – any of them that run on Rails.

WTFOMGBBQ!

Who was the perpetrator of this attack? Ah, well…

Homakov made an impossible issue: a post that GitHub’s database believed was created 1,000 years in the future.

Classy. Dangerous (high risk of someone calling the police and the lawyers), but if people won’t believe you, and *close* your issues, claiming it’s not that important, what more amusing way to prove them wrong?

Whoops, shouldn’t have done that

I can’t state this strongly enough: never attack a live system. Just … don’t.

Any demonstration of a security flaw has to be done very carefully – people have been arrested for demonstrating a flaw allegedly *at the owner’s request*, because in some jurisdictions it’s technically a crime even if you’re given permission. In general, security researchers never show a flaw on a real system – they explain how to, and do it on a dummy system, so no-one can arrest them.

(why arrest the researcher? Usually seems to be no reason beyond ass-covering by executives and lawyers, and a petty vindictiveness)

Homakov appears to have been ignorant of this little maxim, hence I’m writing it here, let as many people as possible know: never attack a live system (unless you’re very sure the owners and the police won’t come after you)!

GitHub’s response

On the plus side, they fixed it within hours, on a weekend. And then proceeded to tell every single user what had happened. And did so in a clever way – they put a block on all GitHub accounts that practically forces you to read their “here’s what happened, but we’ve fixed it” message. They could have kept it quiet.

Which is all rather wonderful and reassuring.

On the minus side, IMHO they rather misrepresented what actually happened, portraying it more as a malicious attack, and something they fixed, rather than what it was – the overspill from an argument between developers on some software that GitHub uses.

And they initially reported they’d “suspended” the user’s account. Normally I’d support this action – generally it’s a bad idea to let it be known you’ll accept attacks and not fight back. But in this case it appears that GitHub didn’t read the f***ing manual, and the maintainers apparently (based on reading their tickets on the GitHub DB) refused to accept it was a serious problem – and apparently didn’t care that one of their own high-profile clients was wide open and insecure. The attack wasn’t even against GitHub per se – it was against the Rails team who weren’t acting. IF it had e.g. been a defacement of GitHub’s main site, that would have been different, both in impact and in intent. Instead, the attack appears to be a genuinely dumb act by someone being naive.

Seems that GitHub agreed – although their reporting is a bit weak: it happened days ago, but they never thought to edit any of their material and back-link it.

“Now that we’ve had a chance to review his activity, and have determined that no malicious intent was present, @homakov’s account has been reinstated.”

…and it’s pleasing to see that their reaction included a small mea culpa for being unclear in what they expect (although anyone dealing with security ought to be aware of this stuff as “standard practice”, sometimes it’s not security experts who find the holes):

“We haven’t been as clear as we should have been on how to responsibly disclose security problems, and for that I’m sorry. To prevent future confusion about security-related account suspension, and to make explicit our stance on responsible disclosure, we have added a section entitled Responsible Disclosure of Security Vulnerabilities to our Security policy.”

Rails’s response

I’d expect: shame, weeping, and BEGGING the web world to forgive their foolishness. I’m not sure, but it’s going to be interesting to watch. As of right now, the demos of the flaw are still live. I particularly like one commenter’s:

drogus closed the issue 5 days ago

kennyj commented

5 days ago

“I’m closing it (again).
@drogus was close it, but it still open.
github bug?”

Closed

kennyj closed the issue 5 days ago

“github bug?” LOL, no – massive security flaw :).

ImageMagick followup: they’re not going to fix it

Response from ImageMagick folks, when I asked them to either re-instate the working binaries, … or stop building as Lion-only:

“We only host and maintain current versions of ImageMagick on one OS
release level. We have a small development team and do not have the
time to support multiple releases and multiple OS levels. The fix is to
download the MacPorts version of ImageMagick which runs under Leopard.
Another solution would be to donate a Mac with Leopard installed so we
can create binaries. We only have one Mac and it hosts Lion.”

Fine – it’s their software, they can do whatever they want. And I think they’ve done a great thing over the years by sharing this command-line tool with the world.

Except … their alternatives aren’t as reasonable as they sound.

Firstly, MacPorts is incredibly difficult to use (even as a former sysadmin and programmer, I find it painful). Simply put: I know it will take me at least a day to get that working, possibly several.

Secondly, deliberately deleting their own working software, and replacing it with non-working software, is deeply irresponsible. If this is how they approach the overall product, how long before you get “caught out” as a user when they pull some other rug out from under you? “Using ImageMagick today? Well – get it while you can, because tomorrow, they might arbitrarily delete it.” (this is what just happened to me: in the space of a few weeks, the first version I downloaded was deleted and replaced with a knowingly-broken version. My backup copy got corrupted, and I thought I could re-download from the web – nope!)

When I asked them about this, they pointed out that the version from a few weeks ago had a bug which was a potential security hole. Fine, so they should discourage people from using it – but that doesn’t excuse *deleting* it, and providing only upgrade paths that are painful or expensive (Lion is not free).

It pains me to say this – as noted above, I think the IM product has been a great thing – but I have to conclude:

Don’t use ImageMagick. Just when you need it, it’s liable to let you down.

As for me, I see no other choice but to give Adobe more money, buying a more expensive copy of Photoshop that I don’t really need. I can’t afford to waste days fiddling around with MacPorts – and not even be guaranteed of success. I just need to do one, tiny, simple operation (an image resize!), but unless I can find a kind person who’s got an archived copy of ImageMagick, it’s not going to happen :(.

Oh, well.

ImageMagick: no longer runs on OS X, except Lion

The IM maintainers seem to be taking a leaf out of Apple’s book: if you don’t purchase the latest Apple OS upgrade (that most people don’t need), you can no longer use their software.

If you follow their 4-line install instructions, you’ll get:

dyld: Library not loaded: /usr/X11/lib/libpng15.15.dylib
Referenced from: …. /ImageMagick-6.7.2/bin/convert
Reason: image not found

…because that precise version of that library isn’t included in OS X generally – it’s only part of Lion – and it isn’t bundled with ImageMagick either. Yet ImageMagick has, for some reason, been compiled to refuse to run with anything except that precise version. Why? No explanation on the download page. I assume someone just wasn’t paying attention and linked against a specific library. If so, it’s very frustrating that a simple noob mistake has locked out everyone who’s not running Lion. Sigh.
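If you want to see the hard-wired dependency for yourself, otool lists what a binary was linked against (sketch – run it on the downloaded binary):

```shell
# Every dylib the binary links against; the /usr/X11/lib/libpng15.15.dylib
# line is the Lion-only dependency that dyld is complaining about.
otool -L /path/to/ImageMagick-6.7.2/bin/convert
```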

Yes – I could go and download the source, and debug it, and fix it myself, because it’s Open Source. But if I’m going to consider that, then it would be cheaper to:

  1. Buy a new Mac
  2. Buy an extra copy of Photoshop
  3. Buy lots of RAM
  4. Throw away IM

And what about if I were running this on a server (which, after all, is what IM is really here for)? Basically, I’d just be screwed :(. Unless I was happy building from source, with all the pain and suffering that entails.

Overall, it gives me the strong feeling: stop using ImageMagick. It’s too risky. Which is very sad, because in the past it’s been very popular in some (server) teams I’ve worked on, where it helped us do lots of great things (and, IIRC, some of the people I worked with contributed some small minor fixes back to IM. Although this was so long ago I might be imagining that).

Follow T=Machine posts via Twitter

I’ve setup a dedicated twitter account that will auto-tweet the most interesting posts from the blog:

http://twitter.com/tmachineorg

…when I get time, I’ll see if I can configure it to auto-tweet particular tags (i.e. “games industry”, “entity systems”, and “startup advice”). Until then, it’ll be most/all the posts.

(this is for all the people who’ve given up on RSS readers. I know how you feel, I’ve not bothered reading RSS since Google killed NNW. RSS is still awesome for writing apps, but sometimes Twitter is better for humans :) )

PS: I couldn’t register /tmachine because of a Hard D00D who takes photos of himself semi-naked, but apparently hasn’t worked out how to type a single tweet yet. Doh

The nature of a Tech Director in games … and the evils of DevOps

Spotted this (the notion of “DevOps”) courtesy of Matthew Weigel – a term I’d fortunately missed out on until now.

It seems to come down to: Software Developers (programmers who write apps that a company sells) and Ops people (sysadmins who manage servers) don’t talk enough and don’t respect each other; this causes problems when they need to work together. Good start.

But I was feeling a gut feel of “you’ve spotted a problem, but this is a real ugly way to solve it”, and feeling guilty for thinking that, when I got down to this line in Wikipedia’s article:

“Developers apply configuration changes manually to their workstations and do not document each necessary step”

WTF? What kind of amateur morons are you hiring as “developers”? Your problem here is *nothing* to do with “DevOps” – it’s that you have a hiring manager (maybe your CTO / Tech Director?) who’s been promoted way above their competency and is allowing people to do the kind of practices that would get them fired from many of the good programming teams.

Fix the right problem, guys :).

Incidentally – and this will be a long tangent about the nature of a TD / Tech Director – … my “gut feel” negativity about the whole thing came from my experience that any TD working in large-scale “online” games *must be* a qualified SysAdmin. If they’re not, they’re not a TD – they’re a technical developer who hasn’t (yet) enough experience to be elevated to a TD role; they are incapable (through no fault of their own – simply lack of training / experience) of fulfilling the essential needs of a TD. They cannot provide the over-arching technical caretaking, because they don’t understand one enormous chunk of the problem.

I say this from personal experience in MMO dev, where people with no sysadmin experience stuck out like a sore thumb. Many network programmers on game-teams had no sysadmin experience (which in the long term is unforgivable – any network coder should be urgently scrambling to learn + practice sysadmin as fast as they can, since it’s essential to so much of the code they write) – and it showed, every time. In the short term, of course, a network coder may be 4 months away from having practiced enough sysadmin. In the medium term, maybe they’ve done “some” but not enough to be an expert on it – normally they’re fine, but sometimes they make a stupid mistake (e.g. being unaware of just how much memcached can do for you).

And that’s where the TD-who-knows-sysadmin is needed. Just like the TD is supposed to do in all situations – be the shallow expert of many trades, able to highlight problems no-one else has noticed, or use their usually out-dated yet still useful experience to suggest old ways of solving new problems that current methods fail to fix. And at least be able to point people in the right direction.

…but, of course, I was once (long ago) trained in this at IBM, and later spent many years in hardcore sysadmin both paid and unpaid (at the most extreme, tracking and logging bugs against the linux kernel) so I’m biased. But I’ve found it enormously helpful in MMO development that I know exactly how these servers will *actually* run – and the many tricks available to shortcut weeks or months of code that you don’t have to write.

A polite request from a wishes-to-be sponsor …

Notes to advertisers: checking the author’s name, email address, and what the blog is about, and acknowledging how odd their advertising attempt is – these are all good things. You’d be surprised (or depressed) how often people cold-contact me without doing any of the above. I almost feel sorry that I had to refuse…

“Hello Adam,

My name is [] from []. I was just wondering if you can write a short review about our site. Although I know your site is about Video Gaming, everybody needs car insurance, and I hope a short article in between your main content would not be a big deal. As a ‘thank you for your time’, we’ll give you a $25 gift certificate to GameStop.

Look forward to hearing back from you.”

…but given my day-rate is well over $1500 (and I’m working flat-out already) … a $25 gift cert isn’t really appealing. Sorry!

PS: the *webserver* (not the blog) is configured to block + redirect any traffic coming from insurance sites, so I’m not concerned at the impact on SEO traffic flowing this way for the fact I’ve now quoted those (bad) magic words. They’ll be a bit surprised – the current redirect code is (indirectly thanks to my Alma Mater) “[this webserver] is a teapot” (an obscure reference to the web-server that *was also* a filter-coffee machine, many years ago)

PPS: most sites about SEO are also blocked + redirected, so I’m no longer afraid of those 3 letters either…

PPPS: I post these emails mainly because so few normal people talk about this stuff (as opposed to adsense “professional” web-marketers, who talk about nothing else), and I think it’s an important topic. What’s “good” advertising? What’s “bad”? How should you approach a website when cold-calling about ads? What should a site-owner consider acceptable terms? etc…

ModSecurity updated anti-spam marketer rule

After a little tweaking, my rule is growing, and proving extremely effective:

# bad websites: domains which regularly or overwhelmingly feature spam
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(yijiezi|yourhcg|lukejaten|squidoo|answerbag|jvlai|chaohuis|cledit|bait)" "t:lowercase,deny,nolog,status:500"

# porn and gambling: they make much cash out of random visitors
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(holdem|poker|casino|porn|girlz|pussy|penis|babe|exposed|sex)" "t:lowercase,deny,nolog,status:500"

# fake / illegal designer clothing and luxury goods
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(shop|store|cheap|gossip|handbag|money|deluxe|sunglass|chanel|replica|buy|sale|furniture)" "t:lowercase,deny,nolog,status:500"

# celebrity gossip and trying to make money out of children, I guess
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(miley|bieber|pokemon)" "t:lowercase,deny,nolog,status:500"

# side-effects of Republican America?
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(health|dental|pills|treatment|seller)" "t:lowercase,deny,nolog,status:500"

# side-effects of weakly-regulated investment markets?
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(forex|realty|invest|loans)" "t:lowercase,deny,nolog,status:500"

# the people that created this problem
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(seo)" "t:lowercase,deny,nolog,status:500"

# webhosting and bodybuilding: apparently, these industries are as commoditized as porn and gambling – LOL
SecRule REQUEST_HEADERS:REFERER "http://[^/]*(download|hosting|videos|bodybuilding|bodybuild)" "t:lowercase,deny,nolog,status:500"

Incidentally, I looked into using wordlists for this, but they don’t work. The most effective anti-spam is to look at the domain-names – these sites are trying to get good rankings for their domains, not for specific pages. Apart from the spam-friendly sites, where it’s a combination of both.

So … sadly … we need the regexp so that we can target the domain-name specifically. If ModSecurity were better (documented) I’m sure it could easily do that. I suspect it *does* do that already, but with their shotgun approach to documentation, it could take days or weeks to find out :).
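The domain-vs-path targeting is easy to sanity-check outside ModSecurity, since it’s a plain regex (a grep sketch):

```shell
# Same shape as the SecRules above: the keyword has to appear in the host
# part of the Referer, i.e. between "http://" and the first "/".
# (ModSecurity's t:lowercase handles case; grep would need -i for that.)
pat='http://[^/]*(holdem|poker|casino)'

echo 'http://texasholdem.example.com/page' | grep -Eq "$pat" && echo blocked
echo 'http://example.com/poker-strategy'   | grep -Eq "$pat" || echo allowed
```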

Safe login on OS X: using an SSH key from a USB key/thumbdrive

I like computer security to be EASY and SECURE.

I hate passwords, and I use them rarely if at all. Instead, I use digital keys as much as possible (i.e. something based on a physical key stored on a removable USB drive that I take with me wherever I go). Like using a physical key, it’s much easier.

Sadly, OS X has a version of SSH that tries to be “too clever” while actually being “annoyingly unhelpful”. If you attempt to use a key from a removable drive, you get this error message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for 'login-key-for-tmachine.ssh' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: (key-name)
Permission denied (publickey).

(emphasis mine).

While it’s delightfully verbose, and tells you exactly what’s happened, it’s also a bit misleading. It says “WARNING” when it actually means “ERROR”, since the ssh system at this point deliberately stops itself. But, more importantly, it’s an error that you cannot evade under OS X. With OS X, all removable media has “Permissions 0777”.

Fortunately, there’s a workaround. Using this good but not-quite-detailed-enough article, I got most of the way there.

I had two problems, things that article omits.

Firstly, you are no longer “allowed” to edit /etc/fstab on OS X. Don’t try it. Instead, there’s a new command-line editor called “vifs” (hmm. vi-for-fstab, perhaps? :)) which works fine.

Secondly, the USB Drive I’m using has a space in the Label name. /etc/fstab uses spaces as a reserved character (I knew this), but … what do you write instead? (I didn’t know this).

I tried (and failed with):

  1. “My Drive”
  2. My\ Drive
  3. My Drive

…and with some creative googling, eventually found an example fstab with spaces in a label name. Aha!

  1. My\040Drive

i.e. replace spaces with “\040” – that’s the octal ASCII code for a space, the same escape convention /etc/fstab uses on Linux (not unicode, as I’d first guessed)

…and now it all works as intended. Yay.
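Incidentally, the escape is easy to sanity-check, since printf speaks the same octal notation:

```shell
# \040 is octal ASCII 32, i.e. a space - the standard fstab escape
# (the same convention Linux's /etc/fstab uses).
printf 'My\040Drive\n'
# prints: My Drive
```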

HOWTO: Prevent SEO scam Referrer traffic … AND … Install Mod-Security on Debian

UPDATE: there were several bugs in my original version – by Debian standards, ModSecurity is damn hard to configure correctly, mainly because the Debian packager has left out so much that’s essential! This version is fully tested and working…

Mod Security is an awesome, open-source product for Apache that will protect your webserver against attackers, using a custom rules-language that lets you easily filter for any kind of website attack. Even better, it comes with a pre-built (and regularly updated) set of “official” default rules for cutting out the majority of common internet attacks.

But, pretty shocking … I tried 10 different tutorials / HOWTOs for this, and each one was wrong. Out of the 10, 6 of them led to fundamentally insecure / misconfigured systems.

Mostly it’s the vendor’s fault for providing huge long-winded webpages in place of basic install instructions. Partly, it’s the Debian packager’s fault for both mis-packaging, and also “forgetting” to document what they’d done (e.g. most of the READMEs are empty. Grr!). Whatever. Here’s my HOWTO for doing it correctly, and picking up the excellent default security rules; it *should* work with most installs of Debian.
Continue reading

Don’t use BitBucket – broken OpenID authentication

We’re starting a new client project, and the client uses Mercurial exclusively, all through BitBucket.

BitBucket has a stupid user-accounts system, that demands you invent a globally-unique username. Oh dear lord – how amateurish are you guys?

Aha! BUT! … they have a (very subtle) link to let you use OpenID instead. Phew! My day is saved – I don’t have to be “dodgy-69-sucker-11111” just in a desperate attempt to work around a naive website architect.

OpenID FAIL

Except … once you’ve sacrificed your private account details to Atlassian, they … don’t allow you to login. It reports “success” but tells you that you’re not allowed to use OpenID to access the site, you STILL have to create a non-OpenID account, using a globally unique ID.

I’m sure they’re doing “something” with OpenID, but I get the impression that the folks at BitBucket don’t grok what most of the world is using it for…

How do I take back my Identity, you fraudsters?

Well, Atlassian won’t help you there.

Fortunately, Google did…

Google’s UI designers FTW

I used Google as my OpenID source this time around. And, *fortunately*, Google’s process for de-authorizing a website is very simple.

I usually assume Google’s UI is great, and I usually only blog about it when it fails badly, but here’s an example where it works beautifully.

(hint: there’s a shortcut – but Google might change the link in future. You can go directly to: https://www.google.com/accounts/IssuedAuthSubTokens)

Just go to your account page (https://www.google.com/accounts/), and *right at the top of the page* (thanks, Google!) is a link to all your authorized websites – it’s in a big white space on its own, VERY easy to find.

WordPress note: Curl != Curl

Gah. The world of PHP modules is a horrid mess. And sites going OAuth-compulsory are highlighting of late just how much so…

I just had a plugin fail, with no error message, even though I had it all installed correctly, and all pre-requisites.

After much messing about (much wasted time), it turns out that this plugin needs:

  1. libcurl (which is not the same as the curl command-line tool)
  2. php5-curl (the PHP binding, which is not the same as libcurl)

So … when a WP module claims it needs “curl”, it could mean any of three things: the command-line tool, the C library, or the PHP binding. I knew about the first two, but not the third. Even if it says it needs “libcurl”, that’s still not specific enough.

In this case, the WP module embedded a 3rd-party OAuth module that used “the other libcurl” – so it needed *both* of them. Ha!
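A quick way to tell which of the flavours you actually have (Debian commands; a sketch):

```shell
which curl                  # the command-line tool
dpkg -l | grep libcurl      # the C library
php -m | grep -i curl       # the PHP binding - what WP plugins usually need
apt-get install php5-curl   # installs the binding if it's missing
```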

Low-cost publishing = easy-to-kill content

One great achievement of the web is the huge reduction in barriers to publishing. But the flipside is that we now see extremely low incentives for publishers to keep content “live”. Back when it cost money to publish info, you had good reasons to *keep* your content live once it had been published; you had a revenue stream to protect.

Nowadays, with publishing costing nothing, it’s often un-monetized. All it takes is the slightest increase in hassle for the publisher, and they’re better off killing the content entirely.

That’s the case with a site I just shut down. A small, incomplete – yet moderately valuable – resource for iPhone Developers, with a few thousand unique visitors a month. Too small to be worth monetizing, so I hadn’t. I was eating the (very small) hosting and support costs, until someone abused the site, and those “support costs” became non-trivial.

iPhoneDevelopmentFAQ – history

I created this site at the start of 2009, because there was no good FAQ for iPhone Development (AFAIAA there still isn’t; even today, the nearest you can get is StackOverflow. SO is great, but … a lot of subjects are “forbidden” under the site terms, and the site-search is very weak).

I set it up to be low maintenance, and to allow multiple people to moderate it (very similar lines to SO, but slightly less open, and a lot more “niche”).

In the past two weeks, after more than a year of “no active moderation”, we saw forged posting credentials and then pointless offensive questions. First rule of running a passive website: leave it configured to report (surreptitiously) on all unusual activity, so you can see if it gets out of hand / abused / attacked / etc.

Options

Deleting offensive content requires only a couple of minutes (to remember the password, login, and hit delete).

Checking what happened with the forged credential (probably unrelated) is more like half a day to a couple of days. I could audit the code, audit whatever 3rd-party PHP libraries were being referenced, and almost certainly plug the hole (or holes).

Or … I could do what I actually did: two lines of typing, and Apache kills the site. In a way, it’s a bit sad – it had background traffic of a few thousand uniques a month – and the whole thing is now gone.
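For the curious, on Debian “two lines” is plausibly something like this (a sketch – the site name here is hypothetical):

```shell
a2dissite iphonedevfaq      # disable the site's Apache config
/etc/init.d/apache2 reload  # make it live: the site is gone
```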

The fragility of niche interests

At the end of the day, I get *zero benefit* from this site. I pay a tiny amount for the web-hosting and the domain-hosting, so it’s almost free, and I’m happy to leave it running for the benefit of the thousands of visitors each month.

But if it’s going to start costing me hundreds (or thousands) of dollars in lost time when I would otherwise have been doing paid contract work (every hour not working is an hour’s salary lost) … then the balance switches and (as in this case) I’m obviously going to kill the site.

I expect that the people who abused the site were just being thoughtless, and probably wouldn’t have ever gone back anyway. But I can’t afford the time to make sure.

Ultimately: Who has the time for this? A handful of callous acts just killed a repository of info.

GetClicky sucks: an Analytics service going out of business?

A year or so ago I did a roundup of the major free Web Analytics services. I was interested to see how Google Analytics had affected the market: was there a market left any more?

One of the trials I signed up for I found so useful I carried on using after I’d written the review. GetClicky had a lot less information than some services – including GA – and less detail than the free tools I already run on all my websites (e.g. AWstats). But it was a lot more user-friendly, presenting the most critical information all at once on a single screen.

Today I finally started disabling GetClicky on my sites; the company has forcibly blocked my site from their service. Why? Because I had a week of heavy traffic *while I was using the premium version which allows unlimited traffic*. That’s it. I stayed within their requirements, but I was banned anyway. That suggests to me that their company is in trouble…
Continue reading

Making MediaWiki secure (and fixing some config annoyances)

(this assumes you are running Debian on your server; if not, I suggest you switch)

Mediawiki. One of the world’s less secure wikis? Probably. I use and install it a lot, and it’s usually “the compromise wiki”: it’s weak at a lot of things, but it’s the “least worst overall” a lot of the time. Here’s my current standard fixes and tweaks.

Continue reading