Posts

Showing posts from January, 2013

iTunes Movie/TV downloads are frustrating

I often use iTunes for movie rentals and some TV shows on my Windows-based media center PC. I do this because in Australia, iTunes has the best collection of shows available (especially if using the loophole to connect to the US store).

Problem is, iTunes is one temperamental piece of software... At times, movies and shows download at say 1.5MB/s, which is more than fast enough to live-stream HD content without buffering. But then the download speed inexplicably drops to say 200KB/s, and you're stuck halfway through the show you're watching.

I had this happen last night: the download got to 1.25GB of a 1.35GB show, then the speed dropped to the point where I had to stop watching. iTunes showed an estimated 4 minutes to complete, but speeds fluctuated between 140KB/s and 1.5MB/s, and it took over 30 minutes to finish.

I ruled out internet connection issues, as I could start downloading a different show on the same computer in iTunes which would …

Recursive SQL for Hierarchical Data

There's a common need to store hierarchical data in a relational database. For example, imagine you want to model Departments, where each Department can have zero or more sub-departments (and sub-departments can also have sub-departments).

The easiest way to do this is to create a Department table with the following three fields:

department_id
name
parent_id

You would then define a self-referential foreign-key constraint linking parent_id to department_id on the same table.

So top-level departments have a null parent_id, and every sub-department has a parent_id pointing to its parent department's record. Pretty straightforward...
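As a concrete sketch in PostgreSQL syntax (the column types are my assumptions; only the column names come from the list above):

CREATE TABLE department (
  department_id serial PRIMARY KEY,
  name          text NOT NULL,
  parent_id     integer REFERENCES department (department_id)
);

Leaving parent_id nullable is what allows top-level departments to exist.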

The problem arises when you want to start using this data. For example, suppose you have an Employee record that belongs to a particular department, and you want to find out whether that department sits under some particular top-level department.

Typically you could do this in your application code, by writing a recursive function to exe…
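The excerpt cuts off there, but given the title, the alternative it's building toward is letting the database do the recursion. PostgreSQL has supported recursive common table expressions (WITH RECURSIVE) since 8.4; here's a sketch against the schema above, with a made-up department id, that walks from an employee's department up to the root:

WITH RECURSIVE ancestors AS (
    SELECT department_id, name, parent_id
    FROM department
    WHERE department_id = 42   -- the employee's department (example id)
  UNION ALL
    SELECT d.department_id, d.name, d.parent_id
    FROM department d
    JOIN ancestors a ON d.department_id = a.parent_id
)
SELECT * FROM ancestors;

The row in the result with a null parent_id is the top-level department.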

PostgreSQL Shared Buffers

The default postgresql.conf settings are optimized for systems from 1998... i.e. the default memory settings are very, very low for modern hardware. This means that even though you may be running PostgreSQL on a machine with 8GB of RAM, if you haven't explicitly configured it to use that memory, it will probably default to something like 32MB. It will then do a lot of disk I/O when it could instead cache most of your data in RAM and run much faster.

One of the settings you need to tweak is shared_buffers, which is commonly recommended to be between 20% and 25% of total available memory. So if you have 8GB of RAM and most of it is free, shared_buffers should be around 2GB (instead of the default 16MB or 32MB).
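In postgresql.conf that's a one-line change (the 2GB figure just follows the 25%-of-8GB rule of thumb above):

shared_buffers = 2GB   # roughly 25% of an 8GB machine

Note that shared_buffers only takes effect after a full restart, not a reload.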

One issue, though, is that by default most Linux kernels will not allow a shared memory segment that large. So when you try to restart postgres, you'll get an error along the lines of:

FATAL: could not create shared memory segment: 
DETAIL: Failed system call …
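The DETAIL line is truncated above, but the usual culprit on Linux systems of that era is the kernel's SHMMAX limit on System V shared memory. A common fix (the values here are illustrative, sized for a bit over a 2GB segment) is to raise the limits in /etc/sysctl.conf:

kernel.shmmax = 2415919104   # max segment size in bytes (~2.25GB)
kernel.shmall = 589824       # total shared memory, in 4kB pages

then apply them with "sysctl -p" and restart postgres.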

Foreign Key Indexes

One of our PostgreSQL databases started slowing down considerably over three years of operation. To debug what was going on, we edited the /etc/postgresql/8.4/main/postgresql.conf file, set the parameter "log_min_duration_statement = 200", saved the file, and ran "service postgresql reload".

This starts logging every query that takes longer than 200ms to execute; the output lands in the /var/log/postgresql directory. You may need to tweak the threshold up or down to surface the most important queries.

Once you find some queries that take a long time to execute, copy them from the log, run "psql" and connect to your database. Then run "EXPLAIN ANALYZE the_slow_query;". This will show exactly what postgres does to run the query, and how long each step takes.

In our case, there were a number of slow SELECT queries with many nested JOIN statements.

One possible cause for this is that by default, postgresql creates indexes for primary keys, …
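The excerpt is cut off, but the gotcha it appears to be leading up to is well known: PostgreSQL indexes primary keys automatically, but not foreign keys. If the JOINs are on unindexed foreign-key columns, each one can force a sequential scan. The fix is a manual index; the table and column names below are illustrative:

CREATE INDEX employee_department_id_idx ON employee (department_id);

Re-running EXPLAIN ANALYZE afterwards should show the sequential scan replaced by an index scan.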

Fitbit Aria Setup Issues

I recently got a Fitbit Aria smart scale. It basically just syncs your weight and body-fat percentage to the Fitbit website, so you can keep a long-term history of your ups and downs (a bit overrated, to be honest).

Anyhow, the initial setup of the scale is pretty straightforward. Just download the app from the fitbit.com website and follow the easy prompts. This gets your scale onto your home WiFi and links it to your Fitbit profile, all transparently and without any mucking about.

However, every week or fortnight, my scale somehow seems to reset itself to factory defaults. It switches back to pounds mode and forgets who I am, so I need to use the setup utility again to re-link it.

The utility instructs you to remove one battery for 10 seconds and then put it back in for the scale to enter setup mode. I've found this doesn't work very reliably. I usually have to take out all four batteries, wait around 20 seconds, put them back in, quickly flip the scale right-side…

Chocolatey, apt-get for Windows

If there's one thing I sometimes miss about Linux when I'm working in Windows, it's an easy-to-use package manager such as apt-get or yum for installing and updating common tools.

There's no denying that typing "apt-get install git" at a console is easier and quicker than going to a website, finding the download link for the correct version, downloading it, running the installer, and clicking through the wizard prompts.

Thankfully, there's a third-party project attempting to bring similar functionality to Windows -- http://chocolatey.org/

Chocolatey provides a command-line tool that enables you to install common Windows utilities as simply as "cinst git" or "cinst vlc", etc. No prompts, no manual downloads, no mucking about.

The best thing about this is that installing Chocolatey itself is done via a single command line too (just visit their website; it's on the front page). This means that you can write a very simple script file (or n…
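For instance, a tiny batch file can rebuild a fresh machine's toolset in one go (only git and vlc appear above; the third package is just an example):

:: setup.bat -- assumes Chocolatey itself is already installed
cinst git
cinst vlc
cinst notepadplusplus

Run it once and walk away; each package downloads and installs silently.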

Picasa 2013 Review (v3.9)

I've been using Picasa for a while as my primary long-term solution for cloud backup and sharing of my photos. That is, I use the desktop application to import all my photos and organize them into folders, then upload those folders to Picasa Web Albums (Google+). I also sometimes create Albums (filtered sets of photos, basically virtual folders), which I can share on Google+ with individuals or circles. This is a quick review of the features I find useful, and those I wish would be improved or implemented.

First, the good:
Photo import and management is easy and intuitive. I like the folder-based workflow which mirrors the physical file-system. When I have photos to import, I can create a new folder in my top-level Picasa storage location directly in Windows Explorer or Finder, and simply copy the new files across. I can then switch back to Picasa and they're instantly there. So I don't need to mess around with the Import Wizard or other interfaces. I like destructi…