How to flatten a directory on Linux and macOS

If you want to flatten a directory with lots of deeply nested files (for example ./2012/06/09/images/previews/200x200/image1.jpg becomes ./image1.jpg), you can run this simple command:

find target/ -mindepth 2 -type f -exec mv -i '{}' target/ ';'

All the files in target’s subdirectories will be moved directly under target. If multiple files have the same name (target/hello.txt, target/backup/hello.txt, target/hello/english/hello.txt), mv will ask you before overwriting each one:

overwrite ./hello.txt? (y/n [n])

The default option is “no”, so you can just hold the Enter key to say no to all overwrites.
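
Once the files are moved, the subdirectories are left behind empty. If you want to remove those too, the following should work (both GNU and BSD find support -empty and -delete, and -delete implies a depth-first traversal, so whole trees of empty directories disappear in one pass):

find target/ -mindepth 1 -type d -empty -delete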

Ubuntu: apt-get -f fails, “no space left on device”, apt-get autoremove doesn’t work

After running into this issue on two different servers, I thought I’d write a tutorial for everyone else.

What’s the problem?

First, you try to install or upgrade some packages with apt-get. For some reason, it fails with an error like this one:

You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
 linux-image-extra-4.4.0-112-generic : Depends: linux-image-4.4.0-112-generic but it is not going to be installed
 linux-image-extra-4.4.0-93-generic : Depends: linux-image-4.4.0-93-generic but it is not going to be installed
 linux-image-generic : Depends: linux-image-4.4.0-112-generic but it is not going to be installed
                       Recommends: thermald but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

So you type apt-get -f install, and that fails too, this time with an error like this one:

Unpacking linux-image-4.4.0-112-generic (4.4.0-112.135) ...
dpkg: error processing archive /var/cache/apt/archives/linux-image-4.4.0-112-generic_4.4.0-112.135_amd64.deb (--unpack):
 cannot copy extracted data for './boot/System.map-4.4.0-112-generic' to '/boot/System.map-4.4.0-112-generic.dpkg-new': failed to write (No space left on device)
No apport report written because the error message indicates a disk full error
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
...
E: Sub-process /usr/bin/dpkg returned an error code (1)

You do a bit of research and find something about running out of inodes, so you type df -i, but you are using under 10% of the inodes on all devices. Another post suggests running apt-get autoremove, but that gives you the same error message as before:

You might want to run 'apt-get -f install' to correct these.
The following packages have unmet dependencies:
 linux-image-extra-4.4.0-112-generic : Depends: linux-image-4.4.0-112-generic but it is not installed
 linux-image-extra-4.4.0-93-generic : Depends: linux-image-4.4.0-93-generic but it is not installed
 linux-image-generic : Depends: linux-image-4.4.0-112-generic but it is not installed
                       Recommends: thermald but it is not installed
E: Unmet dependencies. Try using -f.

So what’s the f***ing deal?!

How to solve that problem

Long story short, you don’t have enough space to fit the new Linux kernel, so you have to delete some of the old ones. You can delete them manually, but it’s a long, tricky process, and if you’re like me, you only understand half of what’s going on and you just want the damn thing to work.
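
You can confirm the diagnosis with df. On many servers, /boot is a small separate partition, and it’s usually the one that fills up (adjust the paths if your layout differs):

df -h /boot /    # if /boot shows 100% use, old kernels are eating your space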

You might have found a similar solution elsewhere, usually based on apt-get purge, but once more you’ll get one of the error messages above:

Unmet dependencies. Try 'apt-get -f install' with no packages

Well, you were almost there! All you need to do is use dpkg instead of apt-get purge.

Before you try that, run this command to see which packages will be removed. This command has no side effects and will not delete anything:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'
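
If you’re curious what that one-liner actually does, here is the same pipeline split into commented stages (functionally equivalent, just easier to read):

dpkg -l 'linux-*' |                        # list every package whose name starts with "linux-"
  sed '/^ii/!d' |                          # keep only installed packages (status "ii")
  sed '/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d' |   # drop lines matching the running kernel's version (e.g. 4.4.0-112)
  sed 's/^[^ ]* [^ ]* \([^ ]*\).*/\1/' |   # extract the package name (third column)
  sed '/[0-9]/!d'                          # keep only names that contain a version number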

Make sure that your current kernel version is not in that list. You can check your running kernel version by running uname -r.

Once you are sure that you want to delete these kernel packages, run this command to invoke dpkg --remove on each of them:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs dpkg --remove

After that, you can run apt-get -f install without problems, then do whatever you wanted to do in the first place. See, Linux isn’t that complicated!

Help! That didn’t work!

If you run into an error like this one:

gzip: stdout: No space left on device
E: mkinitramfs failure cpio 141 gzip 1
...
Errors were encountered while processing:
 linux-image-extra-4.4.0-31-generic
 linux-image-extra-4.4.0-47-generic
 linux-image-extra-4.4.0-75-generic
 linux-image-extra-4.4.0-79-generic
 linux-image-extra-4.4.0-81-generic
 linux-image-extra-4.4.0-83-generic
 linux-image-extra-4.4.0-87-generic
 linux-image-extra-4.4.0-89-generic

…don’t panic! Some of the kernels were likely removed anyway, so you can run apt-get -f autoremove to free up some space, and then run the long command above again.

tl;dr

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs dpkg --remove
apt-get -f install

How to fix “READ ERROR” on Fujifilm cameras

You might get a grey screen that says “READ ERROR” when you try to view your photos on a Fujifilm camera. I had this issue on my X-T10, but it happens on all Fujifilm X-series cameras.

The issue is caused by hidden files and folders that your computer creates on the memory card when you read it, such as the .DS_Store files on OS X. Unfortunately, the bright minds at Fujifilm did not account for this when designing their cameras, and the problem has remained unfixed for years.

The easiest solution is to lock your SD card (with the physical write-protect switch on its side) before reading it on your computer. The other solution is to use software that prevents the creation of hidden metadata files on external devices. This problem has been documented elsewhere.
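
On OS X, you can also strip the hidden files from a mounted card before ejecting it. Here is a sketch, assuming your card is mounted at /Volumes/UNTITLED (substitute your card’s volume name):

dot_clean -m /Volumes/UNTITLED                     # merge and delete the ._* AppleDouble files (dot_clean ships with OS X)
find /Volumes/UNTITLED -name '.DS_Store' -delete   # remove Finder metadata files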

If you are already getting the “READ ERROR” message, the only solution is to format your memory card. The long-term solution is to stop buying Fujifilm cameras.

How to fix ImportMismatchError in Python

If you get an ImportMismatchError when running a Python app, it’s likely that you have stale Python bytecode files (*.pyc) from a different environment. For example, this happens when I run my Python unit tests inside a Docker container and then try to run them again in PyCharm.

Here is an example error message:

/.../env/bin/python2.7 "/Applications/PyCharm CE.app/Contents/helpers/pycharm/_jb_pytest_runner.py" --target app_test.py::TestAppThing
Testing started at 10:51 AM ...
Launching py.test with arguments app_test.py::TestGroundTruthGet in /Users/xxxxxxx/Projects/.../app
Traceback (most recent call last):
  File "/.../env/lib/python2.7/site-packages/_pytest/config.py", line 362, in _importconftest
    mod = conftestpath.pyimport()
  File "/.../env/lib/python2.7/site-packages/py/_path/local.py", line 680, in pyimport
    raise self.ImportMismatchError(modname, modfile, self)
ImportMismatchError: ('module.name.here', '/app/filename.py', local('/Users/xxxxxxx/.../filename.py'))
ERROR: could not load /Users/xxxxxxx/.../filename.py

Process finished with exit code 0
Empty test suite.

The solution to this error is simple: delete all *.pyc files in your project. You can do it with one command:

find . -name \*.pyc -delete

Your Python code should now run properly.
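
The traceback above is from Python 2. On Python 3, compiled files live inside __pycache__ directories instead, so if you hit the same error there, remove those as well (a variant of the same approach):

find . -name __pycache__ -type d -prune -exec rm -rf '{}' +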

How to prevent Chrome from changing text color when printing

Chrome has a feature that turns light text to black before printing. For instance, if you have a light gray paragraph on the page, it will print as black text.

While this feature might ensure that unoptimized pages are readable when printed, you might want to disable it so colors come out the way you intended.

To ensure that background images, background colors, and text colors remain the same when printing, use the following CSS rule:

body { -webkit-print-color-adjust: exact; }
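
Note that the unprefixed standard version of this property is print-color-adjust; if you also care about non-WebKit browsers, it’s worth declaring both forms.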

Office 2011 activation server unavailable? Here’s the fix

Almost a year after freshly installing macOS, the time finally came for me to need the festering pile of ██ that is Microsoft Office 2011. Naturally, something was broken.

The activation screen refused to accept the license key I had used 5 minutes earlier to download the installer, stating that the activation server was temporarily unavailable.

The solution was to look for updates to Office 2011. After the latest update was installed, the key was accepted and my installation was activated. If that doesn’t work, you can always activate it by phone.

Fixing the “protocol error: bad pack header” error in git

When trying to push in git, you might get the following error message:

fatal: internal server error
remote: internal server error
fatal: protocol error: bad pack header

In my case, it was because I had pulled from a corrupted remote Gerrit repository, then tried to push the corrupted data back after the remote had been repaired. Everyone who had pulled the corrupted repository ran into the same error.

Fortunately, the fix is really easy. All you have to do is run these four commands, which recreate your remote-tracking refs without losing your remote configuration:

cp .git/config .git/config.backup   # back up your remote configuration
git remote remove origin            # remove origin and its corrupted remote-tracking refs
mv .git/config.backup .git/config   # restore the configuration, bringing origin back
git fetch                           # download fresh, uncorrupted refs

This fix comes from Jordan Klassen.

Automatically tagging people for code review in Gerrit

If your Gerrit workflow requires you to tag people for code review, or if you always end up tagging the same people, there is a way to tag them by default on all your commits.

To do so, edit .git/config to include the following lines, substituting the email address with your colleague’s:

[remote "origin"]
    ...
    receivepack = git receive-pack --reviewer "other.reviewer@company.com"
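
With that in place, a normal push to Gerrit will add the reviewer automatically. Assuming the usual refs/for/<branch> convention, that push looks like this:

git push origin HEAD:refs/for/master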

Fixing Jasmine “encountered a declaration exception”

If you ever come upon a Jasmine unit test that fails with “encountered a declaration exception”, it’s likely because you’ve made a simple, subtle mistake. Every few weeks I would get that error and, having forgotten how I fixed it the previous time, spend another 30 minutes fixing it.


Here’s the code that caused it:

describe('should be hidden when the scope is destroyed', () => {
  this.scope.$broadcast('$destroy');
  expect(this.scope.helpTooltip.hide).toHaveBeenCalledWith();
});

Can you spot the problem? It’s simple: I used describe instead of it. Let’s fix this…

it('should be hidden when the scope is destroyed', () => {
  this.scope.$broadcast('$destroy');
  expect(this.scope.helpTooltip.hide).toHaveBeenCalledWith();
});

Voilà! No more error! Sometimes, the solution is that simple.

How to improve Angular performance by disabling debugging


Angular gives elements classes such as ng-scope and ng-isolate-scope to help tools such as Batarang and Protractor retrieve debugging information. In production, this debugging information is unnecessary, and disabling it can speed up your client-side application.

This is how you disable debugging information in Angular:

myApp.config(['$compileProvider', function ($compileProvider) {
  $compileProvider.debugInfoEnabled(false);
}]);
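
If you later need the debug info back on a production page, AngularJS also provides angular.reloadWithDebugInfo(), which you can call from the browser console.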