“the j stands for Jagua palm”

21 October, 2014

Fix Apache & PHP after upgrading to OSX Yosemite

After waiting all night for OSX Yosemite to install, I finally booted into the new version of the OS… to find that Apache and PHP were no longer configured. Great. I remember a similar thing happening with the Mavericks update, so here is the list of steps I followed to get things working again. Your results may vary, but this may at least help.

  1. First you need to sort out the Apache config file /etc/apache2/httpd.conf, which has been reset to the default. Your old config file will be in the same directory, with a suffix like ~previous or .pre-update. In my case, I had to uncomment these lines again:
    LoadModule rewrite_module libexec/apache2/mod_rewrite.so
    LoadModule php5_module libexec/apache2/libphp5.so

    I also needed to change AllowOverride None to AllowOverride All in the <Directory "/Library/WebServer/Documents"> section.

  2. Then turn on Apache:
    sudo apachectl restart
  3. PHP’s config file has also disappeared, but again your old one will be in /etc/ with a suffix as above. In my case I started fresh with:
    sudo cp -a /etc/php.ini.default /etc/php.ini

    The only thing I added was:

    date.timezone = "Europe/Stockholm"

    Restart Apache again.

  4. Next, reinstall PEAR. Clear out anything named pear in your home directory, then:
    cd /usr/lib/php
    sudo php install-pear-nozlib.phar

    Then add the path where PEAR installed to php.ini:

    include_path = ".:/usr/lib/php/pear"
  5. I wanted to install the Mongo PECL package, which apparently requires the Command Line Tools, so:
    xcode-select --install

    And then finally I can install it:

    sudo pecl install mongo

    Add the following line to /etc/php.ini:

    extension=mongo.so

    Then restart Apache one more time.
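
For reference, after step 1 the relevant parts of my httpd.conf looked roughly like this (a sketch; Yosemite ships Apache 2.4, so your Directory section will also contain an access rule along the lines of Require all granted):

```apache
LoadModule rewrite_module libexec/apache2/mod_rewrite.so
LoadModule php5_module libexec/apache2/libphp5.so

<Directory "/Library/WebServer/Documents">
    AllowOverride All
    Require all granted
</Directory>
```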

5 April, 2013

The sad unreliability of Ubuntu One

I started using Ubuntu One more or less when it was first released. Admittedly it was pretty slow in the beginning, but they seemed to improve their speeds a lot and eventually I began to pay for extra space and use Ubuntu One exclusively for all my cloud syncing – some 3,000 files from my entire Documents folder, and around 1,500 pictures. I used two Ubuntu machines and I thought U1 worked pretty well in making sure that I always had the most recent versions of everything on both machines.

I first noticed a problem with the syncing when I discovered, by accident, that a folder which showed up on the U1 web interface was not on my computer. I tried various things to get it to sync: all the command line options to u1sdtool, restarting, stopping/starting syncing, etc. Eventually I wrote about it on Ask Ubuntu, and ended up getting in touch with U1 support. Their solution was essentially to clear all the cached syncing info on my machine and start again. Admittedly, this worked (although it did require that U1 scan and compare every single file again). I got the missing folder to sync, and everything seemed OK.

Things seemed OK for a few months. Then a few weeks ago I got a new machine, a MacBook Pro. I still use Ubuntu at work, and since there is a U1 client for OSX I thought there would be no problem in continuing to use Ubuntu One for my syncing. This is when things really started going downhill. The initial sync on my Mac worked fine – essentially it’s just downloading everything, which is pretty straightforward. But then I began to notice that some changes would not get picked up by U1. Say I deleted a file from my Mac: it would still appear in the web interface, even though the U1 client told me that everything was up-to-date. This is really when I began to stop trusting it. Again I tried all the command line options for refreshing the sync folders – nothing. When I contacted U1 support again, they had exactly the same solution: delete the cached data and re-sync. I did, it took its time re-checking every single file, and again things seemed OK. But then I would add or delete some other file and notice that Ubuntu One had once again failed to notice the change. There are things you can do to force it to notice, like restarting the computer or un-checking and re-checking the “Sync locally” checkbox inside the client. But that defeats the whole purpose.

To make things worse, I’m now starting to notice the same erratic syncing behaviour on my Ubuntu machine too. At this point I have absolutely no idea whether a single complete version of all my files even exists anywhere. Every computer I have used U1 on has some copy of my files, but none is ever 100% complete and up-to-date. It’s a mess. There are just too many files to check manually. I have backups, and I hope that when I look for a file and find that Ubuntu One has lost it, I can recover it by digging into those backups. But that’s hardly a solution. I absolutely cannot trust Ubuntu One anymore.

But I still want a cross-platform syncing solution. iCloud doesn’t have an Ubuntu client (and I haven’t heard good things about it anyway). Neither does Google Drive, although they keep promising one “soon”. Dropbox has clients for both platforms and is starting to look like a genuinely viable alternative. I guess its popularity compared to U1 means it’s more reliable. But it’s going to take some work to move everything over, and I really want to avoid switching.

11 January, 2013

Markdown, Pandoc and GitHub

I love writing in Markdown, and in general I try to always write in Markdown and then convert into HTML/TeX. Pandoc is a fantastic tool for converting from Markdown to other formats, and since it is so versatile I would like to use it for everything. I also use GitHub a lot, which has an automatic renderer for Markdown documents.

Unfortunately, Pandoc’s Markdown (PM) and GitHub Flavored Markdown (GFM) are not identical, and I find myself constantly torn between the two, trying to satisfy both. I typically have some code repository hosted on GitHub, with at least one main readme file written in Markdown format. When browsing the repository through the GitHub website, this readme file is automatically converted to HTML. Since this is often the first and only documentation for my code, it is important to me that it renders correctly.

I often want to also convert my Markdown document locally into a self-contained HTML file, and sometimes TeX too, and for this Pandoc is just the best. But herein begin the differences in syntax support:


Tables

GFM likes “pipe tables”, as defined in Markdown Extra:

| Item      | Value |
| --------- | -----:|
| Computer  | $1600 |
| Phone     |   $12 |
| Pipe      |    $1 |

However, the stable releases of Pandoc (1.9.x) support a bunch of other table types, but not pipe tables. The latest Pandoc (1.10.x) does thankfully support them, so my current solution is to use the development version of Pandoc, compiled from source. This means my Makefile might not be portable, but at least I know it works for me (though arguably I shouldn’t depend on Pandoc in the first place).
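
To keep the Makefile at least honest about this, a small guard can refuse to build with a too-old Pandoc. This is my own sketch, nothing official – it just compares dotted version numbers field by field:

```shell
# A portability guard (my own sketch): succeed only if version $1 is at least
# version $2, comparing dotted fields numerically.
version_at_least() {
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)
    [ "$lowest" = "$2" ]
}

version_at_least "1.10.1" "1.10" && echo "pipe tables OK"
version_at_least "1.9.4"  "1.10" || echo "pandoc too old for pipe tables"
```

In a Makefile this could gate the conversion targets on the version reported by `pandoc --version`.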

Definition lists

Quite simply, GFM does not support definition lists. They are defined in Markdown Extra, however, and Pandoc handles them like a champ. At least GFM tends to degrade gracefully in this case, so definition lists don’t bother me too much.
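
For reference, Markdown Extra’s definition-list syntax (which Pandoc also accepts) looks like this:

```markdown
Apple
:   A fruit, and a computer company.

Orange
:   A fruit, and a telecommunications company.
```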

Pre/post code

When building a standalone HTML or TeX file, you will definitely need to include some before and after code around your actual content. You could have completely separate files for this, and then glue them together in a Makefile. But sometimes this seems like overkill for a simple </body></html>, and I just want to stick them at the bottom of my Markdown file and be done with it. In fact GFM will happily ignore HTML tags, but will still display the content of something like <title>Hello!</title>. And if you try to include some TeX code it only gets worse.
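
One way to sidestep this is to keep the glue out of the Markdown entirely and have Pandoc inject it at build time (the flags are Pandoc’s own; the file names here are just placeholders of mine):

```shell
# Build a standalone HTML file, letting Pandoc supply <html>/<body> and
# injecting our own fragments, instead of embedding raw HTML in the Markdown.
pandoc --standalone \
  --include-in-header header-extra.html \
  --include-after-body footer.html \
  -o readme.html readme.md
```

GFM never sees the glue files, so the readme still renders cleanly on GitHub.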


Maybe it’s my fault for expecting too many different things from a simple language. But with a Master’s thesis looming, I’m currently thinking through my writing options. While I love the idea of writing in Markdown and using Pandoc to convert to TeX, this lack of a standard really bothers me, and I can’t help wondering if I might be safer with something like txt2tags, which my professor swears by.

18 September, 2012

A MySQL Unicode collation for Maltese

If you’ve tried to store and retrieve Maltese text in a MySQL database before, you may have noticed that there is no way to sort it correctly according to the Maltese alphabet.

The utf8_unicode_ci collation treats g and ġ etc. as interchangeable, which is of course not right. You can try utf8_bin, but since this sorts by Unicode codepoint, ċ, ġ, ħ and ż get sorted after the letter z – which is even worse (although it does at least mean you can search for ħ without getting h back too).
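
You can reproduce the codepoint problem outside MySQL too; a byte-wise sort (effectively what utf8_bin does) pushes the dotted letters past z:

```shell
# Byte-wise sort of some Maltese words: ħobż and żagħżugħ land after zalza,
# because ħ (U+0127) and ż (U+017C) encode above ASCII z in UTF-8.
printf 'żagħżugħ\nabbati\nzalza\nħobż\n' | LC_ALL=C sort
```

This prints abbati, zalza, ħobż, żagħżugħ – clearly not the Maltese alphabetical order.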

What you really need is a custom collation for the Maltese alphabet. There isn’t one built-in, but luckily MySQL makes adding custom collations relatively painless. So I went ahead and implemented such a collation for Maltese, and called it utf8_maltese_ci. You can find the code, along with detailed installation and usage instructions at the GitHub repository for the Maltese MySQL collation.

30 March, 2012


Ever since switching to Ubuntu from Windows, I’d never really found a text editor I was truly satisfied with. I spent most of my time using either Geany or GEdit, and while both are quite fine, somehow neither ever felt complete.

So a few weeks ago, after meeting probably the most hardcore power user I’ve ever known (he types in Dvorak on a blank keyboard), I decided to embark on the long voyage towards becoming proficient with the legendary text editor, Emacs.

At first it felt incredibly masochistic, like the computer science equivalent of cutting your wrists just to feel alive. Yet after two weeks, the benefits are slowly beginning to become apparent. I’m still not sure if I have become a convert yet. The pain is still there of course, but somehow that almost serves to convince me that I am in fact doing the right thing.

25 July, 2011

How I revived my disappearing Seagate GoFlex Home DLNA/UPnP server

My family bought a Seagate GoFlex Home 2TB network-attached drive for streaming videos directly to our Samsung TV via DLNA/UPnP. Everything worked fine for a while, until suddenly one day it just refused to show up on the TV anymore. After eliminating all the home networking factors (cables, IPs, DHCP etc.), I noticed that when restarting the device it actually showed up briefly on the TV, but with no files on it – only to disappear after a few seconds.
Note that the device still behaved normally as a network device, i.e. when browsing using Windows Networking/SAMBA we could still access all our files normally. It was only the DLNA service that did not seem to be functioning.

Combing through the Seagate forums (and the web in general) I found that others have had similar issues but no real solutions seem to have emerged. So a little more investigation had to be done. The GoFlex Home web interface did not report anything untoward, and fiddling with all the settings in the preferences did not seem to have any effect.

So, like all good hackers I got my hands dirty and gained SSH access to the device.

Getting SSH access to the device

As described here and here, to gain access to your GoFlex Home via SSH you will need:

  1. An SSH client (obviously)
  2. The IP address of your GoFlex Home
  3. The administrator username & password
  4. Your device’s product key, which you can find by clicking About GoFlex Home in the bottom left of the web interface
  5. Confidence using the Linux command line

So, you open an SSH connection to your device with this specially-formed username USERNAME_hipserv2_seagateplug_XXXX-XXXX-XXXX-XXXX, where USERNAME is your username and XXXX-XXXX-XXXX-XXXX is your product key. On a Linux/Mac terminal this could look something like this (note the username, product key and IP will be different):

ssh john_hipserv2_seagateplug_FKSU-FJDU-DOWU-OSHD@

Once you’re in, go straight into root – you will need it before long anyway – with:

sudo -s

Of course poking around as root is dangerous and could irreversibly mess up your device, but if you’re still reading then you probably already knew that.

Restarting the DLNA service

At this point I poked around, trying to find some logs or anything which could give me an idea of what the problem was. The name of the service which actually provides the DLNA server is minidlna, for which a totally invaluable reference can be found here. I tried to access the MiniDLNA log with

tail /tmp/minidlna/minidlna.log

but was told that the log was unavailable. Curiously, running ls -l in the directory reported that the file had no size, permissions, or modification date; so that wasn’t much help. I then tried to find the status of the MiniDLNA service with

/etc/init.d/minidlna.init status

which told me that the PID in /mnt/tmpfs/var/run/minidlna.pid did not match that of any running process, implying that the daemon had crashed. Made sense so far, but when I tried to restart the service with

/etc/init.d/minidlna.init restart

the service would attempt to restart, but instantly crash again as described above. The same thing happened when manually stopping and starting the service. Some trial and error later, I discovered that MiniDLNA’s temporary folder needs to be forcibly unmounted, like so:

umount /tmp/minidlna

Restarting the service again after doing this finally did the trick: checking the service’s status again as above now reported that MiniDLNA was running, and everything showed up normally on my TV.
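
The check-and-fix sequence above can be sketched as a small script (the paths are as reported on the device; `daemon_alive` is a helper name of my own):

```shell
# Does the pid file exist and name a process that is actually running?
daemon_alive() {
    [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

pidfile=/mnt/tmpfs/var/run/minidlna.pid
if daemon_alive "$pidfile"; then
    echo "minidlna is running"
else
    echo "minidlna is down; needs: umount /tmp/minidlna && /etc/init.d/minidlna.init restart"
fi
```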

Rebuilding the database

If the steps above still don’t fix your problem, you likely need to get MiniDLNA to rebuild its media database, with the command

/usr/sbin/minidlna -f /etc/miniupnpd/minidlna.conf -R -d

This time, I was given an error message about being unable to open the sqlite database file, /tmp/minidlna/files.db. Attempts to force-delete the file manually also failed, and I finally had to resort to manually unmounting the minidlna directory with

umount /tmp/minidlna

This happily worked, and I was then finally able to rebuild the database with the command given above. This will scan your media folders and rebuild the database in MiniDLNA’s debug mode, which means it will spit out lots and lots of output messages to the console. After maybe 5 minutes or so, it finally told me the media library scan was complete, and lo and behold I could once again access my GoFlex Home via my DLNA-enabled TV.

Now, I am still unclear about what happens when I restart my device. The first time I tried this (after rebuilding the media database) my device ran into the exact same problem as before! This time I logged in via SSH again, and ran the MiniDLNA daemon in debug mode but without rebuilding the library:

/usr/sbin/minidlna -f /etc/miniupnpd/minidlna.conf -d

This successfully started the service again, and allowed me to cleanly exit my SSH connection and access the DLNA server via my TV again. However, it did take many minutes until all my files showed up, so I am assuming that MiniDLNA was in fact rebuilding the media database itself.


Despite finally managing to get things working as described above, it turns out that every time my GoFlex Home is restarted, the MiniDLNA daemon crashes in the same way, and I am forced to fire up an SSH connection to sort things out. I have as yet found no way of permanently fixing the issue, but since our device is basically online 24/7, it’s not too much of a problem.

Of course, I realise the steps here are not for the faint-hearted, but after exhausting all the “user-friendly” ways of restoring the device, this is the only sure-fire way I have found.

25 July, 2010

Enabling compression with GoDaddy Shared Hosting

Compression of HTML, CSS and JavaScript is quite important for improving your site’s speed and should always be used.

Often you will find that all you need to do is add a line similar to the following to your .htaccess file:

AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/x-javascript text/javascript application/javascript
Source: StackOverflow

However, if you’re on a GoDaddy shared hosting account you may have realised that this doesn’t work. GoDaddy’s help page recommends that you paste this code in all your PHP pages:

<?php if (substr_count($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip')) ob_start("ob_gzhandler"); else ob_start(); ?>

That’s fine, but from my understanding this will not compress your CSS or JavaScript. However, I found a solution here, which involves some .htaccess trickery to compress all your CSS and JavaScript files automatically. Enjoy!
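
As a rough illustration of why this is worth the trouble: gzip uses the same DEFLATE algorithm as the Apache filter, and repetitive text like CSS compresses dramatically (the sample file here is made up for the demo):

```shell
# Generate ~3 KB of repetitive CSS-like text and compare raw vs compressed size.
for i in $(seq 1 100); do printf 'body { margin: 0; padding: 0; }\n'; done > /tmp/sample.css
raw=$(wc -c < /tmp/sample.css)
gz=$(gzip -c /tmp/sample.css | wc -c)
echo "raw: ${raw} bytes, gzipped: ${gz} bytes"
```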

2 March, 2010

An interesting way to avoid publishing your email address..

Random Tech / 8:11 pm

Check out this page: http://blacksapphire.com/antispam/

Not only does this guy require you to answer a mini quiz to prove you are human, but he also generates a new email address for every person who wishes to contact him (which I imagine is only valid for a limited amount of time), so that he never has to give out a single, spammable email address.

I thought it was quite inventive!

29 December, 2009

Windows 7 Backup – bloody useless

As someone whose studies and work are totally computer-based, backup is an important issue for me. Until now I’ve relied on simple directory synchronisation – I can specify exactly what I want and know exactly what is going on. I still use Directory Sync Pro, which is just great for most purposes.

However, since starting to use Windows 7, I’ve been wanting to give the inbuilt Backup & Restore feature a try as a second backup. So I found a nice empty hard drive to dedicate to this purpose, and went ahead.

First I just chose all the default options, which essentially back up all your user profile stuff and make a “system image” (which it’s a bit vague about, but anyway). To start with it says it’s copying the files, which takes absolutely ages (hours and hours). The computer is usable during that time, but sometimes it really slows down. Also there’s no pause button. There’s stop, but I have yet to see it actually work :/

Anyway, then it finally gets to the system image, which takes even longer… long story short, for some reason mine would never complete. It would always get stuck at 83%, even after a whole night of working. At this point the computer is unusable – I pretty much had to do a hard reboot. I tried again and it did exactly the same thing, practically freezing at 83%, then another hard reboot.

So I cleared everything and decided to try again, choosing “custom” this time. I selected my libraries and 2 folders in C:\, with no system image, since that seemed to be the problem before. This time it works for about an hour, getting to 25%, then suddenly finishes, saying the backup was incomplete because it skipped some files. When I look at the log to find out what it skipped, it’s some non-existent file: C:\Windows\System32\config\systemprofile\Web Development

Last chance, I thought. I changed the backup settings and removed some insignificant stuff like the contacts folder, then started the backup again. It took 6 whole hours to get to 100%… but then complained about the same file!

At this point I lost hope. There’s definitely something in the backup, but it never “completes”, always complaining about that file. So honestly I don’t know how much I can trust it…

My conclusion: you’re probably better off with something else. Anyway if anyone has had similar (or different) luck with it I’d love to know…

26 November, 2009

Use your iPod without iTunes using SharePod

So I bought an iPod Nano a while back – actually a 1st-generation model. I was happy using it with iTunes for a while, but as time went by I started to hate iTunes (on Windows) more and more… to the point where I got rid of it once and for all.

I resorted to Windows Media Player, which is actually pretty decent and BLOODY FAST! Compared to iTunes… I mean there’s absolutely no contest.

Anyway, I wanted to use my iPod but would absolutely never install iTunes again, so I was looking for a good way to manage my iPod files without it… and lo and behold, I found SharePod.

SharePod is absolutely wonderful – there’s no installation; you just run a single executable which does all the config work for you, and then all you do is drag and drop your music to and from your iPod, just like it was a normal, non-closed-off-by-Apple personal music player.

Lovely… I definitely recommend it! Oh, plus it’s completely free – but not the “ok what’s the catch” sort of free, but the “all the best software is free” kind 😉
