Ransomware Unlocker

A few days ago, I came across a Windows machine where a lot of files had been renamed (locked-filename.[four random lowercase alphabetic characters]) and were no longer readable by their respective applications. In one of the home directories there was a randomly named executable which many antivirus agents didn’t consider dangerous (including the antivirus agent installed on the machine in question, namely AVG). We checked using VirusTotal; back then only 15 (or so) antivirus products would detect the file in question, and now (as I’m writing this) the detection rate has gone up to 29.

But, regardless of the origin, we were still stuck with lots of unreadable “locked” files. Now, we had religiously maintained backups, so we could have just reverted to the day before, but I opted to have some fun first.

I transferred a few sample files (two Word documents and a WAVE audio file, both the locked versions and the originals from backup) to a Linux machine. Running the file(1) utility on the locked files identified them all as plain data, while the originals were clearly identified as Word documents and WAVE audio. So I was pretty sure something had changed in the contents. Next up I ran strings(1) on the locked and original versions of one of the Word documents, and strings(1) returned plain text in both cases. So I knew the files weren’t entirely scrambled, and since file(1)’s main mechanism for identifying data formats is looking at the first few bytes of a file, the obvious theory was that only the first part of these files had been scrambled.

After searching a bit for a nice binary diff utility I found vbindiff(1), because xdelta(1) wasn’t cutting the mustard. Running vbindiff(1) on one of the Word documents (diffing the original against the locked version), it became immediately apparent that the first 4096 bytes were scrambled. Same story for the other Word document, though it was less obvious for the WAVE audio file. The difference is that classic Word documents (.doc, not .docx) have headers with lots of 0x00 and 0xFF bytes in them. Within the same locked file, multiple 0x00 bytes weren’t scrambled to the same byte value, so some form of crypto (with a key) was being applied. Looking at two different original Word documents I noticed that a large part of the header was nearly identical between the two, and the two respective locked Word documents were largely identical there too. From this we can infer that the key used to encrypt the first 4096 bytes is most likely static across locked files (at least on this system).

Considering that a simple static key seemed to have been used, and that locked files weren’t even entirely encrypted, my guess was that the algorithm probably wouldn’t be very sophisticated either. Would it be a simple XOR operation? To find out, I wrote some quick and dirty FreePascal code to read the first 4096 bytes from an original file and the first 4096 bytes from the corresponding locked file, and XOR them against each other, effectively outputting the key (at least, that was the theory). After I ran said utility against my three sample files, the resulting key was identical in all three cases (even for the WAVE audio file). So I was right: it was a simple XOR operation using a static key.
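
For illustration, here is a minimal sketch of that key-recovery step, re-expressed in Python (the actual utility was FreePascal; the file names are just examples):

BLOCK = 4096

# Read the first 4096 bytes of a known-good original and of its locked twin.
with open("original.doc", "rb") as f:
    plain = f.read(BLOCK)
with open("locked.doc", "rb") as f:
    scrambled = f.read(BLOCK)

# For a plain XOR cipher, plaintext XOR ciphertext yields the keystream itself.
key = bytes(p ^ s for p, s in zip(plain, scrambled))

with open("key.bin", "wb") as f:
    f.write(key)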

The next challenge was writing another small utility (again in FreePascal) which reads the first 4096 bytes from a locked file, XORs them with the data from my generated key file, and writes the result to an unlocked file; after processing the first 4096 bytes, the rest of the file is copied verbatim. After running this new utility on all of my samples, the resulting unlocked files were identical to the originals. So it really worked. It was that simple.
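
And a matching sketch of the unlocker (again, the original was FreePascal): XOR the first 4096 bytes with the recovered key, then copy the remainder untouched:

import shutil

BLOCK = 4096

with open("key.bin", "rb") as f:
    key = f.read(BLOCK)

with open("locked.doc", "rb") as src, open("unlocked.doc", "wb") as dst:
    head = src.read(BLOCK)
    # Undo the XOR on the scrambled header...
    dst.write(bytes(b ^ k for b, k in zip(head, key)))
    # ...and copy the never-encrypted remainder verbatim.
    shutil.copyfileobj(src, dst)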

I built both of the above utilities in FreePascal, because Pascal is the language I fall back to whenever I have to code something up quickly. A nice side effect is that the FreePascal code should be fairly portable. You can download the sources here.

On an even more amusing note: if you place a 4096-byte file containing only zero bytes on your filesystem before the ransomware is activated, it will most likely generate its own key for you by accident, as 0x00 XOR key = key.
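
Creating such a decoy file is a one-liner; a quick sketch in Python (the file name is arbitrary):

# A 4096-byte file of zero bytes; once "locked", its contents would be the
# keystream itself, since 0x00 XOR key = key.
with open("decoy.bin", "wb") as f:
    f.write(b"\x00" * 4096)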

ColorHug red shift workaround

As most of you probably know already, there is a cool (and affordable) little colorimeter available now called the ColorHug, and it’s open-source too (including all companion software).

As the ColorHug’s firmware is still being improved, some people have noticed that a profile created with the ColorHug shifts their display excessively to the red, possibly due to a slight measurement inaccuracy.

A display profile generally consists of two main parts. First there is the vcgt (sometimes also called the VideoLUT), which is loaded and applied by X11 itself; this is usually a correction for a display’s white point (and this is where it goes wrong). The second part is the gamma+matrix (essentially gamma/hue/saturation correction). So to avoid the red shift we have to skip the first part of profile creation.

To prepare for this particular procedure, I recommend you (try to) do the following:

  1. Note down your display’s old settings (if you care to go back to them).
  2. Reset your display’s settings to factory defaults.
  3. Adjust the display’s brightness to a comfortable level (you often don’t really need maximum brightness).
  4. Leave contrast at the manufacturer’s default (generally a good idea).
  5. Change the display’s color temperature to 6500K if possible (you might notice your display shift a bit toward yellow).

Then execute the following commands in a terminal:

# targen -v -d 3 -G -f 64 make_model
# ENABLE_COLORHUG=1 dispread -v -y l make_model
# colprof -v -A "Make" -M "Model" -D "Make Model" \
          -C "Copyright (c) 2012 John Doe. Some rights reserved." \
          -q l -a G make_model

The above commands skip vcgt creation (which dispcal would normally handle), take a fairly simple set of measurements and generate a fairly basic ICC profile. This simplicity gets us increased robustness in the profile’s creation at the expense of potential accuracy. To be honest, I wouldn’t be surprised if commercial vendors use a similar strategy in their entry-level colorimetry products for the consumer market.

You’ll need to either manually import the resulting profile into GNOME Color Manager (to set up the profile at login), or directly configure it in programs like the GIMP. You can load an image like this in GIMP to check whether the resulting profile makes sense. Do mind that GIMP has color management disabled by default, so you need to set it up in the Preferences.
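
If memory serves, the profile produced by colprof (make_model.icc) can also be imported from the commandline with GNOME Color Manager’s importer:

# gcm-import make_model.icc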

Even with the above method, the resulting profile may still be a bit off in the reds (though this will only be visible in color-managed applications). If that’s still an issue for you, you could try the Community Average CCMX, or possibly my Dell U2212HM CCMX, with which I’ve gotten decent results on non-Dell displays too.

How I Screencast on Ubuntu

I’ve been screencasting for a while now, mostly about Darktable. From time to time people ask me how I do it, and what software and hardware I use. So here goes nothing…

Since my main topic is Darktable (a free software photography application), my target audience is primarily photographers who use free software, which means my videos should be easy to view on a random Linux/BSD desktop. Considering that the only video and audio codecs available on most newly installed Linux desktops are Theora and Vorbis respectively, these were going to be my primary publishing formats. The fact that Theora and Vorbis have been the longest-supported formats for HTML5 video is a big plus too (since Firefox 3.6, if I recall correctly). And I surely didn’t want to motivate (let alone require) anybody to install Flash to view my videos.

Another point of concern was audio quality. When watching other people’s screencasts, the often poor audio quality was the biggest annoyance for me, especially for longer videos where I don’t want to listen to 20 minutes of someone talking through static noise. So I went a bit overboard with this and bought an M-Audio FastTrack MkII (which is plain USB Audio, no special driver required) and a RØDE NT1a Kit, which I later mounted on a RØDE PSA1 Studio Arm.

Which brings me to my choice of recording application. I can’t say I tried them all, but recording with ffmpeg seemed to slow down my machine too much, so I settled on recordmydesktop, and more particularly the gtk-recordmydesktop frontend. After some experimenting I found recording just a part of my screen (1920×1200) to be a nuisance, so I settled on doing all screencasts on my laptop, recording fullscreen (1280×800). The recordmydesktop application defaults to recording 15 frames per second, which seems to be fine for my purposes. It also defaults to recording audio at a 22050 Hz sampling rate; being a sucker for audio quality I changed that to 48000 Hz, which is commonly used on DVDs and in other professional audio applications. One of recordmydesktop’s potential disadvantages is that it only encodes its capture to Ogg/Theora/Vorbis (.ogv), which luckily for me isn’t an issue at all. I do max out the encoding quality to 100% for both audio and video.
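
For reference, the same settings expressed on the commandline (the GUI simply wraps these flags; if I read the manpage right, 63 and 10 are the maximum video and audio quality values) would look something like:

# recordmydesktop --fps 15 --freq 48000 --v_quality 63 --s_quality 10 -o screencast.ogv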

When publishing my screencasts on the web I just use the HTML5 video support of modern browsers. I use the .ogv file produced by recordmydesktop directly; I don’t re-encode to reduce the bitrate or anything, as the bitrate is already acceptable to begin with, and I don’t want to degrade the quality any more than I have to. While in the past I only provided the .ogv, I recently caved in and now also provide an .mp4 (H264/AAC) fallback video, primarily to support the ever increasing ubiquity of tablets, and secondarily to support browsers like Safari which don’t support free media formats like Ogg/Theora/Vorbis out of the box.

So now I’m using ffmpeg to transcode my videos; however, there are a couple of concerns here. My original recordings were done at a resolution of 1280×800, while most tablets (most importantly the original iPad) only support video up to 1280×720 (H264 level 3.1), so they would likely choke on it. That said, in many cases it’s not very useful to have 1280×800 on most tablets anyway, as 1024×768 is a common resolution for 10″ tablets. So I settled on resizing my screencasts to 1024×640 (which also reduces the bitrate a bit, in the process making them more suitable for mobile viewing).

Initially I tried to encode the audio with the MP3 audio codec; however, iPads seem to dislike that, while Android tablets handled it just fine. So I had to go with AAC, and while Ubuntu’s ffmpeg isn’t built with FAAC support, it does have the VisualOn AAC encoder (libvo_aacenc), which isn’t as good as FAAC, but it had to do. So in the end my conversion commandline is this:

avconv -i input.ogv -sws_flags bicubic -s 1024x640 -vcodec libx264 -coder 1 \
-flags +loop -cmp +chroma -partitions +parti8x8+parti4x4+partp8x8+partb8x8 \
-me_method umh -subq 8 -me_range 16 -g 250 -keyint_min 25 -sc_threshold 40 \
-i_qfactor 0.71 -b_strategy 2 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -bf 3 \
-refs 5 -directpred 3 -trellis 1 -flags2 +bpyramid+mixed_refs+wpred+dct8x8+fastpskip \
-wpredp 2 -rc_lookahead 50 -coder 0 -bf 0 -refs 1 -flags2 -wpred-dct8x8 \
-level 30 -maxrate 10000000 -bufsize 10000000 -wpredp 0 -b 1200k \
-acodec libvo_aacenc -ac 1 -ar 48000 -ab 128k output.mp4

Darktable 1.0 Screencast Library (Addition)

Since I did my last darktable 0.9 screencast library, some things have changed. So at the very least this warranted an update screencast.

Darktable 1.0 Update (download)

Darktable Archiving & Backup (download)

These are the first screencasts that should be viewable on most tablet devices too, albeit with slightly degraded quality.

Color Management (On Linux)

There seems to be a lot of confusion about what color management is, what it is supposed to do, and most particularly how to use it on Linux. While most information below is generically applicable, in cases where I have to be specific I’ll focus on Ubuntu/GNOME/Unity.

The first thing to get out of the way is the simple question of what color management is supposed to do for you. Color management is used to get consistent and reliable results from device to device. So if I take an image with my color-managed camera, display it on my color-managed display and print it with my color-managed printer, it should look nearly the same everywhere. This means it doesn’t per se make your image look better (whatever that may mean to you). Also, what color management can’t do for you is make crappy equipment better than it is. Any color management solution always has to work within the bounds of the equipment it is managing. Of course any color management solution tries to compensate for a device’s limits as best it can, but there are inherent limits, and when these limits are hit, colors aren’t accurately reproduced anymore.

Now we need to get some terminology straight. Calibration is modifying a device’s characteristics to match a specification (for example changing the brightness of a display). Characterization is recording a device’s behavior for correction in software. These terms are often incorrectly used interchangeably (even by me, excuse me when I do). The end result of characterization is a (standard) ICC color profile.

While pretty much any device can be color managed, I’ll focus on displays for the rest of this article.

Now to color manage a display you need a device that can “read” (characterize) your display’s characteristics. There are two types of devices you can use for this: colorimeters and spectrophotometers. Colorimeters are the most common devices for characterizing displays, as they are fairly affordable (100-200 EUR range). Colorimeters do have their limits; they are in essence just a very special purpose digital camera with only a handful of pixels. While I personally never had any issues, I’ve read about older colorimeters having trouble with new kinds of display technology like LED-backlit displays, and some entry-level colorimeters may not work as well with professional wide gamut displays (more on that later). The other option is a spectrophotometer. These devices are rather pricey; entry-level spectrophotometers like the ColorMunki Photo are priced slightly below 400 EUR (if you see a device priced significantly lower, it’s likely not a true spectrophotometer). Spectrophotometers read the full spectrum of the light they receive, which produces much more detailed information, so they are unlikely to be fooled by new display technologies. Most spectrophotometers also include a reference light source, which means they can illuminate (for example) paper, so they can be used to profile printers (combined with ink & paper) as well.

So now we’ll need to explain some more concepts. First I’ll tell you how silly talking about (for example) RGB 245/0/0 really is. Imagine owning a car and being stuck with an empty tank. Using your last few drops of petrol you drive to a petrol station. Say you live in Europe, and you tell the attendant “I’ll have 40”, so the attendant fills up your car with 40 liters of petrol. If someone living in the US says the same thing to an attendant at a US petrol station, they’ll get 40 gallons of petrol. You could say “well, I did tell you properly, it’s RGB…”, but RGB by itself really doesn’t mean anything at all. RGB just tells you that you are defining colors in three components: red, green and blue. It doesn’t say anything about what the reddest red or the greenest green is, and let’s not forget about the bluest blue. To define this, the concept of colorspaces was created: a colorspace defines the range of colors a device can reproduce, which is also called a device’s gamut.

RGB colorspaces are defined in terms of the CIE XYZ colorspace, because XYZ encompasses all colors the average human eye can see; all RGB colorspaces are defined as subsets of XYZ. More importantly, in the late nineties two of the most important colorspaces were defined: sRGB (by Microsoft and HP) and AdobeRGB (by, erhm… well… Adobe). sRGB was more or less defined as the common denominator of most affordable displays; these days anything not explicitly defined in a different colorspace is assumed to be in sRGB. AdobeRGB, on the other hand, was defined to encompass many more colors, its main goal being to cover most colors professional printing solutions can reproduce.
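
To make that concrete, here is a minimal sketch converting an 8-bit sRGB triplet to CIE XYZ, using the standard sRGB (D65) definition; it’s the matrix below that pins down what “reddest red” actually means:

def srgb_to_xyz(r, g, b):
    """Convert an 8-bit sRGB triplet to CIE XYZ (standard sRGB/D65 definition)."""
    def linearize(c):
        c /= 255.0
        # Undo the sRGB transfer curve (roughly a 2.2 gamma).
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # These coefficients are the sRGB primaries expressed in XYZ; they are
    # what gives a value like 245/0/0 an absolute meaning.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

print(srgb_to_xyz(245, 0, 0))  # AdobeRGB 245/0/0 would map to a different XYZ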

Next to defining RGB in terms of a colorspace, there is still the issue that the human eye does not experience light in a linear fashion, so we need gamma encoding to keep images from looking like a murky mess. These days gamma 2.2 is universally accepted as the standard for displays. There are some caveats though: I own a cheap netbook, and its display for example seems to have a native gamma of about 1.8, which means it lacks contrast.
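
A quick illustration of why a native gamma of 1.8 looks flat on content mastered for 2.2 (this is just arithmetic, no assumptions beyond the two gamma values):

# The same encoded mid-gray (0.5) produces different light output depending
# on the display's gamma:
print(0.5 ** 2.2)  # ~0.218 of maximum luminance (the 2.2 standard)
print(0.5 ** 1.8)  # ~0.287: midtones come out brighter, so images look flatter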

And then there is the issue of white points, since there is no such thing as “just” white. For most purposes a white point of 6500K (this is at least true for both sRGB and AdobeRGB) is good as a standard neutral white. Higher temperatures in Kelvin make a display look blue (common with laptop displays), and lower temperatures make a display look more yellow.

And last there is the question of luminance, which is a snazzy term for brightness. If your work isn’t color critical, just set your display to a comfortable level (usually not too bright); if your work is color critical, it’s common to calibrate your display to 120 cd/m2.

That said, there are some common issues to address. As I said, the result of characterization is an ICC profile. ICC profiles usually have the file extension .icc, or .icm on Windows. Depending on the software which generated the profile, profiles can be either version 2 or version 4, and at least on Linux (but also true for older proprietary software) many programs may not properly apply version 4 profiles, so it’s best to stick with version 2 profiles for the time being. Luckily ArgyllCMS, the premier open-source profiling suite, generates version 2 profiles by default.

Also, you need to be aware that most web browsers aren’t color management aware (Safari & Firefox are the exceptions, when properly configured). The W3C specified that “the web” should be in sRGB. This basically means you should only upload sRGB images to websites; if you upload images that are not sRGB, they may not look as intended to your viewers (depending on which browser they use). The common problem is that people upload AdobeRGB images to the web and get complaints that the images look desaturated (since the web browser assumes them to be sRGB, even though they are not).

Now back to display profiling: there are several ways to accomplish this on Linux. I’ve talked in the past about doing it manually with ArgyllCMS, which is a suite of commandline tools. There are however some front-ends available, the two most important being dispcalGUI and GNOME Color Manager. Both tools have their own target audience: dispcalGUI caters to advanced users who really know color management inside out, while GNOME Color Manager caters to entry-level users and tries to make everything as easy as possible. To be blunt, if everything in this article isn’t really obvious to you, your best bet is probably GNOME Color Manager. It generally provides sane defaults, and guides you through the process using a Wizard *cough* Druid (or what’s-it-called?)…

Next, some information on the general anatomy of display profiles. Display profiles have three important components: the VCGT, the TRCs and the XYZ matrix. The first bit, the VCGT (also often called the VideoLUT), is a lookup table designed to correct your display’s white point and potential aberrations between the R, G and B channels. The VCGT is loaded into your X11 driver, and only works if your driver is in 24-bit mode. When the VCGT is being loaded into X11 (usually in the login manager or just after logging in) you should see the colors of the display shift a bit. The VCGT is the only part of the profile which benefits all applications (as it’s applied by X11); the other two parts have to be actively applied by each application (if properly configured, more on that later). Next we have the TRCs, which basically model your display’s gamma curves. And last, the XYZ matrix determines what maximum red, green and blue are for your particular display. It’s possible to have an XYZ LUT instead to get a more detailed correction, however I’d never recommend this, as not all applications properly apply an XYZ LUT.

Since the last two parts (the TRCs and the XYZ matrix) need to be applied by your color management aware applications, these need to be properly configured. To make this easier there is the XICC specification, which allows an “active” profile to be loaded into X11 as well. This is very rudimentary though: it just means the file is loaded into the _ICC_PROFILE atom (which is basically like an environment variable), so it can easily be picked up by color management aware applications. On Linux, applications most commonly use the LittleCMS library to apply the TRCs and the XYZ matrix.
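
For illustration, this is roughly what a color management aware application does with the display profile, sketched with Pillow’s LittleCMS bindings (the profile path is hypothetical; a real application would fetch the profile from the _ICC_PROFILE atom instead):

from PIL import Image, ImageCms

im = Image.open("photo.jpg")  # assumed to be sRGB
srgb = ImageCms.createProfile("sRGB")
# Hypothetical path; in practice the profile comes from the _ICC_PROFILE atom.
display = ImageCms.getOpenProfile("/usr/share/color/icc/mydisplay.icc")
# LittleCMS applies the profile's TRCs and XYZ matrix during this transform.
corrected = ImageCms.profileToProfile(im, srgb, display,
                                      renderingIntent=ImageCms.INTENT_PERCEPTUAL)
corrected.show()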

GNOME Color Manager (via GNOME Settings Daemon) makes sure that a profile’s VCGT gets loaded into the X11 display driver, as well as setting the _ICC_PROFILE atom. You can check if the _ICC_PROFILE atom has been properly set using xprop:

# xprop -display :0.0 -len 14 -root _ICC_PROFILE

It’s known that proprietary (nVidia/ATi) drivers can cause problems; dual-head setups can complicate things as well.

Now, some applications do color management by default (assuming the _ICC_PROFILE atom has been properly set); this includes, for example, Eye of GNOME and Darktable. Other applications, like Firefox and GIMP, seem to ignore the _ICC_PROFILE atom by default.

To check whether a profile is being applied, you need a good test image to evaluate; I can highly recommend SmugMug’s Calibration Print for this. In GIMP’s particular case, load this image into GIMP, go to Edit, then Preferences, and open the Color Management section. Then check the “Try to use the system monitor profile” box while looking at the image. In most cases you should see a change (if not, use xprop to check the _ICC_PROFILE atom), and more importantly you should be able to distinguish the top gray patches from each other.

Last, there is the issue of images that were adjusted on uncalibrated displays, which is true for probably 99% of all images on the web. If the author had a low contrast unmanaged display, it’s likely they increased the contrast of a particular image, and when you look at that image on your color managed display (with proper contrast), it may look too contrasty. On the flipside, if the author had a high contrast unmanaged display, it’s likely they decreased contrast, and the same image on your color managed display may look devoid of contrast. So it’s not weird to see discrepancies between managed and unmanaged setups.

With the above text I hope to have shed some light on color management in general and on some of the particular issues regarding its use on Linux.

My Notebook Display Is Too Bluish

I’ve been posting a fair amount about photography, imaging and color management lately. While colorimetry can be a good solution to display issues, a lot of people don’t want to take it that far.

So say you’ve just gotten a new notebook, and like many notebooks its display looks a tad bluish, and you don’t want to invest in a full blown color management solution. There is a fairly simple way to address this issue, at least to an extent, and it’s called xgamma (note that xgamma might not work if your X11 setup is in 16-bit mode, which is very unlikely on a modern system).

Before making any changes it’s a good idea to get a good image to evaluate the changes with. I can highly recommend the SmugMug Calibration Print. So open the calibration print in your favorite image viewer, and do:

# xgamma -rgamma 1.0 -ggamma 1.0 -bgamma 0.9

You should see your display shift in color. Lots of notebook displays also tend to lack contrast, so in theory you can use xgamma to compensate for that too:

# xgamma -rgamma 0.9 -ggamma 0.9 -bgamma 0.8

Check the calibration print again, and make sure you can clearly distinguish all the grey patches at the top of the image.

When you reboot your machine these settings will be lost. The best way I’ve found to automatically apply them is via what’s called XDG Autostart, which is basically a set of .desktop files that are run during session startup. Most big desktop environments (GNOME/XFCE/KDE) support it.

So, put the following into /etc/xdg/autostart/xgamma.desktop:

[Desktop Entry]
Encoding=UTF-8
Name=Set display gamma corrections
GenericName=Set display gamma corrections
Comment=Applies display gamma corrections at session startup
Exec=xgamma -rgamma 0.9 -ggamma 0.9 -bgamma 0.8
Terminal=false
Type=Application
Categories=

Now reboot, and watch your gamma settings being applied during each new X11 login.

Please beware that the above corrections are ballpark corrections; for real accuracy you really need to do proper color management.

Darktable Unity Progress

Usually I don’t do a lot of “real” coding for Darktable, but I had some time on my hands today, and I implemented basic Unity integration for Darktable. Since I wasn’t familiar with libunity, nor really familiar with the depths of Darktable’s code (let alone CMake), the implementation took me about two hours.

That said, have a look at the results:


You can also download the video for offline viewing if you prefer.

By the way, the bug I mentioned at the end of the video has been mitigated, which is a chic way of saying I kludged it so you won’t be bothered by it. But it’s not truly fixed.

Darktable 0.9 Screencast Library (Addition)

Since I did my last darktable 0.7 screencast library, some things have changed. So at the very least this warranted an update screencast.

The downside is that I was recovering from a cold, so these screencasts sound a bit rough:

Darktable 0.9 Update (download)

Darktable B&W Film Emulation (download)

Darktable Denoising (download)

Darktable Spot Removal (download)

One small mistake: it’s actually possible to remove spots by right-clicking them.

Simulating Analog Black & White

There are millions of black & white photo plugins available, some simple, some complex. When I recently got back a batch of actual developed black & white film, I investigated my scans to see how to emulate the effect (and possibly how to automate it).

The simplest approach I’ve been able to come up with involves blurring and decreasing contrast (with output levels). It can be automated with ImageMagick like so:

# convert input.jpg -gaussian-blur 1x1 -filter triangle -resize 3000x2000 -level \!15%,\!95%,1.0 -colorspace gray -gaussian-blur 5x1 output.jpg

Please note that this doesn’t involve noise simulation yet, which seems to be hard to do with ImageMagick (tips are welcome). Also note that I’m resampling to 6 megapixels for convenience; you can use any resolution, assuming you roughly scale the 5-pixel Gaussian blur along with it.
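
Since tips are welcome: one rough way to approximate grain outside ImageMagick is to add mild Gaussian noise with NumPy and Pillow. This is only a sketch; the noise strength below is an arbitrary guess that would need tuning against real scans:

import numpy as np
from PIL import Image

im = np.asarray(Image.open("output.jpg").convert("L"), dtype=np.float32)
# Add mild Gaussian noise as a crude stand-in for film grain; sigma=6 is a
# starting point, not a measured value.
noisy = np.clip(im + np.random.normal(0.0, 6.0, im.shape), 0, 255)
Image.fromarray(noisy.astype(np.uint8)).save("grainy.jpg")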


Upstart: don’t mess with the rc job

Recently I’ve been fiddling a bit with Upstart, and in general I’m positive about the experience. Upstart offers a lot of flexibility and simple but very welcome features like real service supervision with respawning capabilities. There are a few downsides:

Relatively few SysV init scripts have been converted to Upstart jobs. This is logical, since there are a lot of scripts to convert and all of them need testing before they can be considered production ready. That said, the progress over time in this area isn’t particularly overwhelming either.

Upstart is much harder to troubleshoot when unexpected things happen; this is to a degree inherent to Upstart’s parallel, event-based nature. Adding the following options to your kernel parameters does help a bit: ‘nosplash INIT_VERBOSE=yes init=/sbin/init noplymouth -v’.

As far as I know there is no way to have SysV init scripts depend on Upstart jobs. This was to be expected, since it’s pretty hard to implement sensibly, but considering my first point (the fact that lots of SysV scripts still have to be converted), it can be annoying. My advice: don’t fiddle with the rc job! I did, and it gave me grief (Upstart hung on reboots), causing me to waste a day figuring out what went wrong. If you need a SysV script to depend on an Upstart job, remove the SysV script and convert it to an Upstart job yourself.
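
For what it’s worth, here is a minimal sketch of what such a converted job could look like (the service name, path and flag are hypothetical and would need to be adapted to the actual service):

# /etc/init/mydaemon.conf: hypothetical conversion of a SysV init script
description "My converted daemon"

# Start once local filesystems are mounted, stop on shutdown/reboot.
start on filesystem
stop on runlevel [016]

# Let Upstart supervise the process and respawn it if it dies.
respawn

# The daemon must stay in the foreground for supervision to work
# (otherwise an 'expect fork' or 'expect daemon' stanza would be needed).
exec /usr/sbin/mydaemon --foreground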