Transcoding DV Tapes

Today we enjoy half decent high definition video recording capabilities in pretty much any device with an imaging sensor in it. About ten years ago that wasn’t the case; back then people bought so-called DV cameras. These DV cameras recorded roughly DVD resolution footage with limited compression onto digital video (DV) tapes, usually 60 minutes per tape. Some awkward choices were made in the DV format, particularly that it was recorded at 50 interlaced fields per second, which can be deinterlaced to either 25 or 50 actual frames per second.

Most DV cameras allowed the content of a DV tape to be directly transferred to a computer fitted with a FireWire 400 interface; a 60 minute tape would result in slightly less than 15 GB of data transferred. This is fairly easy to do on a modern Linux system (keep in mind that the transfer is 1:1, so a 60 minute tape requires 60 minutes to transfer entirely):

# dvgrab --rewind --showstatus --timestamp --autosplit --size 20000 --format qt

While most media players on Linux will play these captured video fragments directly, many media players on other platforms won’t. Also, the resulting files are ridiculously large, so they aren’t particularly handy to keep around indefinitely. Typically you’ll want to transcode them into something more efficient, for example (at its simplest):

# mkdir mp4
# for F in *.mov; do \
    avconv \
      -i $F -filter:v yadif=1 -r 50 \
      -c:v libx264 -preset:v fast -profile:v main -level:v 31 -tune:v film -g 50 -crf 18 \
      -c:a ac3 -b:a 320k \
      mp4/${F}.mp4; \
  done

The above command will transcode all captured video fragments to high quality H.264/AVC video and AC3 (Dolby Digital) audio. The effective video quality is controlled via the CRF parameter; transcoding a set of video fragments totaling about 60 minutes resulted in the following sizes for me:

CRF     17      18      19      20      21      22      23
Size    4.9 GB  4.1 GB  3.4 GB  2.8 GB  2.4 GB  2.0 GB  1.7 GB

Keep in mind that CRF encoding tries to keep the quality constant, so if you have very unsteady or action rich footage the resulting files may end up bigger for you. Also keep in mind that some filesystems don’t allow single files to exceed 2 GB, so if you have a single continuous piece of footage of 60 minutes you probably should use CRF 23; otherwise I’d recommend sticking to CRF 18 for very good quality.

Ideally we’d like our filesystem modification dates to match the recording timestamp, and having some metadata in each file can be helpful at times as well, so alternatively we might transcode like so:

# mkdir mp4
# for F in *.mov; do \
    avconv \
      -i $F -filter:v yadif=1 -r 50 \
      -c:v libx264 -preset:v fast -profile:v main -level:v 31 -tune:v film -g 50 -crf 18 \
      -c:a ac3 -b:a 320k \
      -metadata title="$(echo ${F} | sed 's#dvgrab-##' | sed 's#.mov##' | tr '_' ' ' | tr '-' ':')" \
      -metadata artist="John Doe" \
      -metadata album="Holiday" \
      -metadata description="JVC GR-XXXX" \
      mp4/PAL50_$(echo ${F} | sed 's#dvgrab-##' | sed 's#.mov##' | tr -d '.' | tr -d '-').MP4; \
    touch \
      -t $(echo ${F} | sed 's#dvgrab-##' | sed 's#.mov##' | tr -c '[0-9]\n' '_' | awk -F '_' '{print $1 $2 $3 $4 $5 "." $6}') \
      mp4/PAL50_$(echo ${F} | sed 's#dvgrab-##' | sed 's#.mov##' | tr -d '.' | tr -d '-').MP4; \
  done

Lastly you might want to burn the resulting video files to a DVD, and generating a good ISO image is easy enough as well:

# cd mp4
# md5sum *.MP4 > MD5SUMS.TXT
# genisoimage -J -l -V HOLIDAY_2005 -o ../HOLIDAY_2005.ISO .

Most vaguely recent Blu-ray players should be able to play the files on the resulting disc.
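
If you then want to burn that image straight to a disc, growisofs (from dvd+rw-tools) can do it in one go; /dev/dvd below is just a placeholder for your burner’s actual device node:

# growisofs -dvd-compat -Z /dev/dvd=../HOLIDAY_2005.ISO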

KVM Basics

Not everybody likes having a full invasive install of VMware Workstation, VirtualBox or virt-manager for that matter. Luckily KVM can be installed without pulling in too many dependencies or having to kludge in poorly maintained kernel modules.

Using KVM bare isn’t particularly difficult. First you’ll need to create a disk image (in this particular example 50 GB, thin provisioned):

qemu-img create -f qcow2 disk.img 50G
Formatting 'disk.img', fmt=qcow2 size=53687091200 encryption=off cluster_size=65536 ...

Next we’ll need to start KVM (assuming a Windows guest OS), for example:

kvm -m 2048 -localtime -monitor stdio -soundhw ac97 -usb -usbdevice tablet -hda disk.img

The -m 2048 parameter assigns 2 GB of virtual RAM. The -localtime parameter should be included for non-UNIX guest operating systems that do not store the system clock as GMT. The -monitor stdio parameter allows KVM to be controlled using its monitor interface, presented on the terminal it was started from. The -soundhw parameter selects which audio hardware KVM should emulate; the optimal choice depends heavily on the guest operating system. The -usb -usbdevice tablet parameters tell KVM to emulate a tablet pointer, which at least for Windows allows decent mouse performance without requiring a guest driver.
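
For the initial guest OS installation you’d typically also attach an installation ISO and boot from it once; install.iso here is just a placeholder for your actual installation image:

kvm -m 2048 -localtime -monitor stdio -soundhw ac97 -usb -usbdevice tablet -hda disk.img -cdrom install.iso -boot d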

Once KVM is started, you’ll notice its monitor interface popping up on the terminal.

QEMU 2.0.0 monitor - type 'help' for more information

With this monitor interface you’ll be able to control the KVM virtual machine, for example changing/ejecting an emulated floppy disk image:

(qemu) change floppy0 myfloppy.img
(qemu) eject floppy0

Or configuring the KVM virtual machine to (re)boot from a CD-ROM image:

(qemu) change ide1-cd0 mycdrom.iso
(qemu) boot_set d 
(qemu) system_reset
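
Since the disk image is qcow2, the monitor can also store and restore complete machine snapshots inside that image (the snapshot name is arbitrary):

(qemu) savevm clean-install
(qemu) info snapshots
(qemu) loadvm clean-install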

Obviously there is more to KVM’s monitor interface, and of course it accepts a help command, which will provide you with an elaborate list of possibilities and options.

Video Sharpening Before Encoding

Often after encoding a video and playing it back, it seldom looks as good as I’d expect, especially compared to professionally produced DVDs for example. While there are likely a multitude of differences, one of them seems to be acutance (perceived sharpness), which can be enhanced by applying some sharpening after scaling the source material, before encoding the video.

The following small script encodes a source video to an anamorphic widescreen PAL DVD resolution WebM file at a nominal (total) bitrate of 2 Mbit/sec, while applying some sharpening:

#!/bin/sh
INPUT="$1"
SCALE_OPTS="-sws_flags lanczos -s 720x576 -aspect 16:9"
SHARP_OPTS="-vf unsharp=3:3:0.5:3:3:0"
VIDEO_OPTS="-vcodec libvpx -g 120 -lag-in-frames 15 -deadline good -profile 0 -qmax 51 -qmin 11 -slices 4 -b 1800000 -maxrate 3800000"
AUDIO_OPTS="-acodec libvorbis -ac 2 -ab 192000"

avconv -i "${INPUT}" ${SCALE_OPTS} ${SHARP_OPTS} ${VIDEO_OPTS}           -an -pass 1 -f webm -y "out-${INPUT}.webm"
avconv -i "${INPUT}" ${SCALE_OPTS} ${SHARP_OPTS} ${VIDEO_OPTS} ${AUDIO_OPTS} -pass 2 -f webm -y "out-${INPUT}.webm"


Now what particularly matters are the unsharp parameters, which can be divided into two triplets: the first set of three controls luma (brightness/greyscale information) and the second set of three controls chroma (color information). In each triplet the first two parameters are the horizontal and vertical matrix dimensions (i.e. the evaluation area); in our case a small matrix of 3 by 3 is the only configuration that makes sense. A 5 by 5 matrix is possible but will give an exaggerated effect and halo artifacts. The last parameter in each triplet is the strength (respectively 0.5 for luma, and 0 for chroma), which means we’re enhancing acutance for luma and leaving the chroma unmodified. For luma, strength values between 0.5 and 1.0 are likely to be the useful range, depending on taste and source material. Typically you’d want to leave chroma be, but in some cases you could use it as a poor man’s denoising method by specifying negative values, which effectively turns it into a blur.
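
For example, a hypothetical variant of the SHARP_OPTS line above that sharpens luma a bit more aggressively while slightly blurring chroma as a crude denoise could look like this (the exact values are just an illustration, tune them to your source material):

SHARP_OPTS="-vf unsharp=3:3:0.8:3:3:-0.5"   # illustrative values: stronger luma sharpening, mild chroma blur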


Missing Memory Card Icons on Ubuntu

Depending on your type of memory card reader, you may have noticed the following on Ubuntu (and possibly other distributions too). When you connect a USB flash drive to your system an icon pops up informing you the drive has been mounted. When you insert (for example) an SD card into your cardreader, another entry may pop up using the same icon as the flash drive.

While this isn’t the biggest problem in the world, it’s certainly a nuisance, as you’d need to hover over each icon to see the tooltip explaining which device it represents. Ideally you’d want the SD card to show up with an appropriate SD card icon.

Which icon is displayed ultimately depends on disk management done by udisks and more importantly udev. In /lib/udev/rules.d/80-udisks.rules (do NOT modify this file) we find the following rules:

SUBSYSTEMS=="usb", ENV{ID_MODEL}=="*SD_Reader*", ENV{ID_DRIVE_FLASH_SD}="1"
SUBSYSTEMS=="usb", ENV{ID_MODEL}=="*Reader*SD*", ENV{ID_DRIVE_FLASH_SD}="1"
SUBSYSTEMS=="usb", ENV{ID_MODEL}=="*CF_Reader*", ENV{ID_DRIVE_FLASH_CF}="1"
SUBSYSTEMS=="usb", ENV{ID_MODEL}=="*SM_Reader*", ENV{ID_DRIVE_FLASH_SM}="1"
SUBSYSTEMS=="usb", ENV{ID_MODEL}=="*MS_Reader*", ENV{ID_DRIVE_FLASH_MS}="1"

The above rules are matched against the model strings the devices report to the kernel. With one of my cardreaders, this sadly doesn’t match:

$ dmesg | grep -i Direct-Access
scsi 12:0:0:0: Direct-Access     Generic  Compact Flash    0.00 PQ: 0 ANSI: 2
scsi 12:0:0:1: Direct-Access     Generic  SM/xD-Picture    0.00 PQ: 0 ANSI: 2
scsi 12:0:0:2: Direct-Access     Generic  SDXC/MMC         0.00 PQ: 0 ANSI: 2
scsi 12:0:0:3: Direct-Access     Generic  MS/MS-Pro/HG     0.00 PQ: 0 ANSI: 2

To create new rules, we first need to figure out which USB vendor/product IDs belong to the cardreader. You can identify USB devices attached to your computer like so:

$ lsusb
Bus 002 Device 012: ID 048d:1345 Integrated Technology Express, Inc. Multi Cardreader

Just run the command once before attaching the device and once after, and look for the differences; typically it’ll be the last device in the list. Once we have this information, create a new file (replace pmjdebruijn with your own nickname, using exclusively alphanumeric characters):

$ sudo nano -w /etc/udev/rules.d/80-udisks-pmjdebruijn.rules

In this file we put the following lines:

# ITE, Hama 00055348 V4 Cardreader 35 in 1 USB
SUBSYSTEMS=="usb", ATTRS{idVendor}=="048d", ATTRS{idProduct}=="1345", ENV{ID_INSTANCE}=="0:0", ENV{ID_DRIVE_FLASH_CF}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="048d", ATTRS{idProduct}=="1345", ENV{ID_INSTANCE}=="0:1", ENV{ID_DRIVE_FLASH_SM}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="048d", ATTRS{idProduct}=="1345", ENV{ID_INSTANCE}=="0:2", ENV{ID_DRIVE_FLASH_SD}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="048d", ATTRS{idProduct}=="1345", ENV{ID_INSTANCE}=="0:3", ENV{ID_DRIVE_FLASH_MS}="1"

You’ll notice the idVendor and idProduct values coming from the lsusb line above; the ID_INSTANCE values need to match the LUNs from the dmesg lines above. Once you’re done, double-check and save the file, and then you can reload the udev rules:

$ sudo udevadm control --reload-rules

Any newly mounted memory cards should get a proper icon now.
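
If you want to double-check that a rule actually applies, you can query the udev properties for the relevant device node (replace /dev/sdk with whatever device your card shows up as) and look for the matching ID_DRIVE_FLASH_* variable:

$ udevadm info --query=property --name=/dev/sdk | grep ID_DRIVE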

Not all cardreaders may be as easy as illustrated above; for example I have a wonderful cardreader that provides no useful information at all:

$ lsusb
Bus 002 Device 009: ID 05e3:0716 Genesys Logic, Inc. USB 2.0 Multislot Card Reader/Writer
$ dmesg | grep -i Direct-Access
scsi 6:0:0:0: Direct-Access     Generic  STORAGE DEVICE   9744 PQ: 0 ANSI: 0
scsi 6:0:0:1: Direct-Access     Generic  STORAGE DEVICE   9744 PQ: 0 ANSI: 0
scsi 6:0:0:2: Direct-Access     Generic  STORAGE DEVICE   9744 PQ: 0 ANSI: 0
scsi 6:0:0:3: Direct-Access     Generic  STORAGE DEVICE   9744 PQ: 0 ANSI: 0
scsi 6:0:0:4: Direct-Access     Generic  STORAGE DEVICE   9744 PQ: 0 ANSI: 0

In such a case, you’ll need to experiment by actually inserting various types of memory cards and checking which device got mounted and which LUN it is. In the following example I inserted an SD card, which got mounted as sdk, which turns out to be LUN 0:2, the value we need for the ID_INSTANCE entry:

$ mount | grep media
/dev/sdk1 on /media/FC30-3DA9 type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks)
$ dmesg | grep sdk
sd 12:0:0:2: [sdk] Attached SCSI removable disk
sd 12:0:0:2: [sdk] 248320 512-byte logical blocks: (127 MB/121 MiB)
sd 12:0:0:2: [sdk] No Caching mode page present
sd 12:0:0:2: [sdk] Assuming drive cache: write through
sd 12:0:0:2: [sdk] No Caching mode page present
sd 12:0:0:2: [sdk] Assuming drive cache: write through
 sdk: sdk1

Another peculiarity (or feature) of this drive is that it has 5 LUNs instead of 4; this is because it actually has two SD card slots, one for full size SD cards and one for microSD cards. In the end, after some fiddling, I ended up with:

# Genesys Logic, Conrad SuperReader Ultimate
SUBSYSTEMS=="usb", ATTRS{idVendor}=="05e3", ATTRS{idProduct}=="0716", ENV{ID_INSTANCE}=="0:0", ENV{ID_DRIVE_FLASH_CF}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="05e3", ATTRS{idProduct}=="0716", ENV{ID_INSTANCE}=="0:1", ENV{ID_DRIVE_FLASH_SM}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="05e3", ATTRS{idProduct}=="0716", ENV{ID_INSTANCE}=="0:2", ENV{ID_DRIVE_FLASH_SD}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="05e3", ATTRS{idProduct}=="0716", ENV{ID_INSTANCE}=="0:3", ENV{ID_DRIVE_FLASH_MS}="1"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="05e3", ATTRS{idProduct}=="0716", ENV{ID_INSTANCE}=="0:4", ENV{ID_DRIVE_FLASH_SD}="1"

Ransomware Unlocker

Some days ago, I came across a Windows machine where a lot of files were renamed (locked-filename.[four random lowercase alphabetic characters]) and were no longer readable by the respective applications. In one of the home directories there was a randomly named executable which many anti virus agents didn’t see as dangerous (including the anti virus agent installed on the machine in question, namely AVG). We checked using VirusTotal; back then only 15 (or so) anti virus products would detect the file in question, and now (as I’m writing this) the detection rate has gone up to 29 anti virus products.

But, regardless of the origin, we were still stuck with lots of unreadable “locked” files. Now, we had access to religiously kept backups, so we could have just reverted to the day before, but I opted to have some fun first.

I transferred a few sample files (two Word documents and a WAVE audio file, both the locked versions and the originals from backup) to a Linux machine. Running the file(1) utility on the locked files, they were all identified as data, while the originals were clearly identified as Word documents and WAVE audio. So I was pretty sure something had changed in the contents. Next up I ran strings(1) on the locked and original version of one of the Word documents, and strings(1) returned plain text in both cases. So I knew the files weren’t entirely scrambled, but since file(1)‘s main mechanism to identify data formats is looking at the first few bytes of a file, the obvious theory was that only the first part of these files was scrambled.

After searching a bit for a nice binary diff utility I found vbindiff(1), because xdelta(1) wasn’t cutting the mustard. Running vbindiff(1) on one of the Word documents (diffing the original against the locked version), it became immediately apparent that the first 4096 bytes were scrambled. Same story for the other Word document, but it was less obvious for the WAVE audio file. The difference is that classic Word documents (.doc, not .docx) have headers with lots of 0x00 and 0xFF bytes in them. Now, within the same locked file, multiple 0x00 bytes weren’t scrambled into the same byte value, so some form of crypto (with a key) was being applied. When looking at two different original Word documents I noticed that a large part of the header was nearly identical between the two documents. So I took a look at the two respective locked Word documents, and those were largely identical too. From this we can infer that the private key used to encrypt the first 4096 bytes is most likely static between locked files (at least on this system).
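
Incidentally, a quick way to confirm that only the first 4096 bytes differ is cmp(1), which can list every differing byte offset; if the last reported offset is 4096 or less, the remainder of the file is untouched (original.doc and locked.doc are of course placeholder names here):

$ cmp -l original.doc locked.doc | tail -n 1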

Considering the fact that a simple static private key seems to have been used, and the fact that locked files weren’t even entirely encrypted, my guess was that the algorithm probably wouldn’t be very sophisticated either. Would it be a simple XOR operation? To find out, I wrote some quick and dirty FreePascal code to read the first 4096 bytes from an original file and the first 4096 bytes from a locked file, and XOR them against each other, effectively outputting the private key (at least, that was the theory). After I ran said utility against my three sample files, the resulting private key was identical in all three cases (even for the WAVE audio file). So I was right: it was a simple XOR operation using a static private key.

The next challenge was writing another small utility (again in FreePascal) which reads the first 4096 bytes from a locked file, XORs them with the data from my generated private key file, and writes the result to an unlocked file; after the first 4096 bytes, the rest of the file is copied verbatim. After running this new utility on all of my samples, the resulting unlocked files were identical to the original files. So it really worked; it was that simple.

I built both above utilities in FreePascal, because the Pascal language is what I fall back to whenever I have to code up something quickly. A nice side effect is that the FreePascal code should be fairly portable. You can download the sources here.

Now on an even more amusing note: if you place a 4096 byte file containing only zero bytes on your filesystem before the ransomware is activated, it will most likely by accident generate its own private key, as 0x00 XOR key = key.
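
Creating such a zero-filled canary file is trivial; canary.bin is just an arbitrary name:

$ dd if=/dev/zero of=canary.bin bs=4096 count=1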

How I Screencast on Ubuntu

I’ve been screencasting a while now, mostly about Darktable. And from time to time people ask me how I do it, and what software and hardware I use. So here goes nothing…

Since my main topic is Darktable (a free software photography application), my target audience is primarily photographers who use free software, which means my videos should be easy to view on random Linux/BSD desktops. Considering the only video/audio codecs available on most newly installed Linux desktops are Theora and Vorbis respectively, these were going to be my primary publishing formats. The fact that Theora and Vorbis have been the longest supported formats for HTML5 video is a big plus too (since Firefox 3.6 if I recall correctly). And I surely didn’t want to motivate/require anybody to install Flash to view my videos.

Another point of concern was audio quality. In general when watching other people’s screencasts, the often poor audio quality was the biggest annoyance for me, especially for longer videos where I don’t want to listen to 20 minutes of someone talking through static noise. So I went a bit overboard with this and bought an M-Audio FastTrack MkII (which is plain USB Audio, no special driver required) and a RØDE NT1a Kit, which I later on mounted to a RØDE PSA1 Studio Arm.

Which brings me to my choice of recording application. I can’t say I tried them all, but recording with ffmpeg seemed to slow down my machine too much, so I settled on recordmydesktop, and more particularly the gtk-recordmydesktop frontend. After some experimenting I found recording just a part of my screen (1920×1200) to be a nuisance, so I settled on doing all screencasts on my laptop, recording fullscreen (1280×800). The recordmydesktop application defaults to recording 15 frames per second, which seems to be fine for my purposes. It defaults to recording audio at a 22050 Hz sampling rate; being a sucker for audio quality I changed that to 48000 Hz, which is commonly used on DVDs and in other professional audio applications. One of recordmydesktop’s potential disadvantages is that it only encodes its capture to Ogg/Theora/Vorbis (.ogv), which luckily for me really isn’t an issue at all. I do max out the encoding quality to 100% for both audio and video.

When publishing my screencasts on the web I just use the HTML5 video support of modern browsers. I use the .ogv file produced by recordmydesktop directly; I don’t re-encode to reduce the bitrate or anything, as the bitrate is already acceptable to begin with, and I don’t want to degrade the quality any more than I have to.

While in the past I only provided the .ogv, I recently also caved and now provide an .mp4 (H264/AAC) fallback video, to support the ever increasing ubiquity of tablets, and secondarily to support browsers like Safari which don’t support free media formats like Ogg/Theora/Vorbis out of the box. So now I’m using ffmpeg to transcode my videos, however there are a couple of concerns here. My original recordings were made at a resolution of 1280×800, while most tablets (and most importantly the original iPad) only support video up to 1280×720 (H264 Level 3.1), so they would likely choke on it. That said, in many cases it’s not very useful to have 1280×800 on most tablets anyway, as 1024×768 is a common resolution for 10″ tablets. So I settled on resizing my screencasts to 1024×640 (which also reduces the bitrate a bit, in the process making it more suitable for mobile viewing).

Initially I tried to encode the audio using the MP3 audio codec, however iPads seem to dislike that, while Android tablets handled it just fine. So I had to go with AAC, and while Ubuntu’s ffmpeg isn’t built with FAAC support, it does have the libvo_aacenc AAC encoder, which isn’t as good as FAAC, but it had to do. So in the end my conversion command line is this:

avconv -i input.ogv -sws_flags bicubic -s 1024x640 -vcodec libx264 -coder 1 \
-flags +loop -cmp +chroma -partitions +parti8x8+parti4x4+partp8x8+partb8x8 \
-me_method umh -subq 8 -me_range 16 -g 250 -keyint_min 25 -sc_threshold 40 \
-i_qfactor 0.71 -b_strategy 2 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 \
-refs 5 -directpred 3 -trellis 1 -flags2 +bpyramid+mixed_refs+wpred+dct8x8+fastpskip \
-wpredp 2 -rc_lookahead 50 -coder 0 -bf 0 -refs 1 -flags2 -wpred-dct8x8 \
-level 30 -maxrate 10000000 -bufsize 10000000 -wpredp 0 -b 1200k \
-acodec libvo_aacenc -ac 1 -ar 48000 -ab 128k output.mp4

lzma -2

When compressing data we all know we have several options: gzip, most commonly used, or bzip2 when we need better ratios. More recently we have also seen the introduction of lzma, which is horribly slow by default but gets us amazing ratios.

That said, when compressing your next file, you might want to consider trying lzma -2; it compresses slightly better than bzip2 --best, while still being approximately as fast as gzip --best… Seems like a nice trade-off…
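
For example, on some tarball of your own (timings and ratios will obviously vary with the data being compressed):

$ time gzip --best -c backup.tar > backup.tar.gz
$ time bzip2 --best -c backup.tar > backup.tar.bz2
$ time lzma -2 -c backup.tar > backup.tar.lzma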

SSSU for HP Command View EVA

So today I was migrating our HP SSSU utility from an x86 to an x86-64 machine, only to find out I couldn’t execute the utility anymore on the x86-64 machine. I wanted to look up the version number of the utility to find a matching x86-64 version, but I was too lazy to log onto the old machine, so I used strings(1) to find its version number, only to find a crazy surprise:

SSSU for HP StorageWorks Command View EVA
9.2.0
Version: %s
Build: %s
Error closing https connection
Error closing https connection
Total regression differences = %u
Press return to exit
%.6d%c
                           oooo$$$$$$$$$$$$oooo
                      oo$$$$$$$$$$$$$$$$$$$$$$$$o
                   oo$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$o         o$   $$ o$
   o $ oo        o$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$o       $$ $$ $$o$
oo $ $ '$      o$$$$$$$$$    $$$$$$$$$$$$$    $$$$$$$$$o       $$$o$$o$
'$$$$$$o$     o$$$$$$$$$      $$$$$$$$$$$      $$$$$$$$$$o    $$$$$$$$
  $$$$$$$    $$$$$$$$$$$      $$$$$$$$$$$      $$$$$$$$$$$$$$$$$$$$$$$
  $$$$$$$$$$$$$$$$$$$$$$$    $$$$$$$$$$$$$    $$$$$$$$$$$$$$  '''$$$
   '$$$''''$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$     '$$$
    $$$   o$$$$$$$$M$i$c$h$e$l$&$R$o$g$e$r$$$$$$$$$$$$$$$$$$$     '$$$o
   o$$'   $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$       $$$o
   $$$    $$$$$$$$$$$$$$$$$$w$e$r$e$$$$$$$$$$$$$$$$$$$$' '$$$$$$ooooo$$$$o
  o$$$oooo$$$$$  $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$   o$$$$$$$$$$$$$$$$$
  $$$$$$$$'$$$$   $$$$$$$$$$$$$$$h$e$r$e$$$$$$$$$$$$     $$$$''''''''
 ''''       $$$$    '$$$$$$$$$$$$$$$$$$$$$$$$$$$$'      o$$$
            '$$$o     '''$$$$$$$$$$$$$$$$$$'$$'         $$$
              $$$o          '$$''$$$$$$''''           o$$$
               $$$$o                                o$$$'
                '$$$$o      o$$$$$$o'$$$$o        o$$$$
                  '$$$$$oo     ''$$$$o$$$$$o   o$$$$''
                     ''$$$$$oooo  '$$$o$$$$$$$$$'''
                        ''$$$$$$$oo $$$$$$$$$$
                                ''''$$$$$$$$$$$
                                    $$$$$$$$$$$$
                                     $$$$$$$$$$'
                                      '$$$''

This is obviously a reference to The Hitchhiker’s Guide to the Galaxy (why?).

Pimping Unity Ever So Slightly

So like many I feared Unity; now, having met it and given it a serious go, I’m slowly starting to like it. And I’m sticking with it for at least a few weeks before passing final judgement. That said, here are a few small tips:

Currently, by default, the icons in the Launcher are all “backlit” with a background color. You can turn that off so that only running programs are backlit (to make that fact extra obvious). You can make this the system-wide default like so:

$ sudo  gconftool-2 --direct \
  --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
  --type int --set /apps/compiz-1/plugins/unityshell/screen0/options/backlight_mode 1

Next, the Launcher auto-hides whenever you move a window into it (for example when you maximize a window). I don’t really like that, since I have plenty of horizontal screen estate anyhow, so I’d like the Launcher to permanently claim its space (again as a system-wide default):

$ sudo  gconftool-2 --direct \
  --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
  --type int --set /apps/compiz-1/plugins/unityshell/screen0/options/launcher_hide_mode 0
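
If you’d rather not touch the system-wide defaults, the same keys can be set for just your own account by dropping sudo and the --direct/--config-source parts, for example:

$ gconftool-2 --type int --set /apps/compiz-1/plugins/unityshell/screen0/options/launcher_hide_mode 0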

Some other tips, you might find handy:

If you want to move Launcher buttons around on the Launcher, you need to drag the button off the Launcher before you can move it. This makes moving buttons around a very conscious action, preventing the mouse-challenged among us from accidentally moving them around.

Also, you might have noticed that left clicking a Launcher button will only start an application once. So for apps you might want to start multiple times, like a Terminal, you’d need to search for the app to start it a second time, which would be rather time consuming. Luckily the Unity designers thought about this: left click will start an application only once (preventing people from accidentally starting most apps twice), but if you do want to start an app twice or thrice you can just middle click the Launcher button.

And last but not least, this wallpaper may be of use to you.

Crispy Font Rendering On Ubuntu

Like others I’m not a stranger to useless ranting. So regardless of what’s default, some people like their font rendering nice and crispy. Usually you’d go to the Appearance dialog, the Fonts tab, click on Details and select Subpixel Smoothing and Full Hinting.

However, there are two caveats. New users still get the default fuzzy rendering, and some apps like to ignore your own font rendering settings to a degree; at times Firefox and OpenOffice.org have been guilty of this (IIRC), as they actually seem to use your system-wide fontconfig settings.

So to change the system wide default font rendering settings for GNOME:

$ gconftool-2 --direct \
    --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
    --type string --set /desktop/gnome/font_rendering/hinting full
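
The subpixel smoothing part lives under the same gconf tree; assuming the antialiasing key accepts rgba (as it does on GNOME 2 as far as I know), the system-wide default would be set like so:

$ gconftool-2 --direct \
    --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults \
    --type string --set /desktop/gnome/font_rendering/antialiasing rgba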

You can change the system fontconfig settings like so:

$ sudo -s
# cd /etc/fonts/conf.d
# rm 10-hinting-slight.conf
# ln -s ../conf.avail/10-hinting-full.conf
# exit
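
Along the same lines, if your fontconfig version ships a sub-pixel preset in conf.avail, you can enable it the same way:

$ sudo -s
# cd /etc/fonts/conf.d
# ln -s ../conf.avail/10-sub-pixel-rgb.conf
# exit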