The Advisory Boar
I'm a dinosaur. I've been using the same fvwm2 configuration since 1996.
I've tried other window managers, with varying levels of sincerity, but
I drift back to comfortable old fvwm2 with no window decorations sooner
or later. (I got along really well with Blackbox, but it was easier to
switch back than to hack the code to cycle forward and back through a
stack of windows with RaiseLower, a feature I like.)
I've also tried hard to get along with the default desktop on successive
versions of Ubuntu (both GNOME and Unity), but a day or two is all I can
stand. But there are some things about the Ubuntu interface that I don't
want to give up or reinvent, especially on my laptop (the keyring, some
indicator applets, nm-applet, the screensaver, etc.). Being an adaptable
dinosaur, I now run fvwm2 as my GNOME window manager.
Thanks to a detailed explanation written by dedicated xmonad users,
changing "xmonad" to "fvwm2" in a few places was all the hard work I
needed to do. I had an ~/.xsession file that ran
a few startup commands already, so I put the following into
/usr/share/xsessions/xsession.desktop to add an "xsession" option in the
session drop-down at the login screen:
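(Approximately; I'm quoting the standard format rather than my exact
file. The important lines are Exec and Type.)

    [Desktop Entry]
    Name=xsession
    Comment=Run the user's ~/.xsession script
    Exec=/etc/X11/Xsession
    Type=Application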
Then I created /usr/share/gnome-session/sessions/fvwm.session to
describe the session to gnome-session.
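Its contents were adapted from the xmonad version, and looked more or
less like this (the component names varied between GNOME releases, so
treat this as an approximation):

    [GNOME Session]
    Name=fvwm
    RequiredComponents=gnome-settings-daemon;
    RequiredProviders=windowmanager;notifications;
    DefaultProvider-windowmanager=fvwm
    DefaultProvider-notifications=notification-daemon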
That was enough to make "gnome-session --session=fvwm" work, and that's
what the last line of my .xsession runs. The other bit worth mentioning
is stalonetray, which
provides a home to nm-applet and friends.
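Putting it together, my ~/.xsession amounts to something like this (the
startup commands are illustrative rather than an exact copy of mine):

    # ~/.xsession: a few startup programs, then the session itself
    stalonetray &
    exec gnome-session --session=fvwm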
One of the two most annoying things about
my Thinkpad X120E is that
the touchpad buttons are flush with the outer edge of the chassis, and
very easy to press inadvertently. I like the touchpad, so the
option of disabling it in the BIOS or with "synclient TouchpadOff=1"
did not appeal to me.
After reading the synclient man page, I was forced to accept that there
was no easy way to disable just the hardware buttons. That left digging
into the source code of the X.Org Synaptics driver ("apt-get source
xserver-xorg-input-synaptics", and I had to install xorg-dev,
xserver-xorg-dev, and xutils-dev as well).
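For anyone who wants to dig the same way, the excavation starts like
this; the one-line change described afterwards is a paraphrase of the
idea, not the exact patch.

    # fetch the driver source and the headers needed to build it
    apt-get source xserver-xorg-input-synaptics
    sudo apt-get install xorg-dev xserver-xorg-dev xutils-dev

The driver's event-processing code fills in a hardware-state structure
with the physical button state, and forcing the relevant fields to zero
there (something like "hw->left = hw->right = 0") makes it ignore the
buttons while leaving the pad itself alone.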
My mother has been using my old Lexmark printer for many years, but it
is no longer possible to find toner cartridges for it (which is such a
shame, because it's a good printer). When the last cartridge became so
flaky that she could no longer print her tickets, she asked me to find
a new printer for her.
I thought about a cheap Samsung ML-16xx laser printer, but my recent
experience with SPL led me to settle on a Brother laser printer
instead. This printer ticks many of my boxes: it has Ethernet support,
automatic duplex printing (surprising, for a relatively inexpensive
printer), and a proper output tray. The downside is that it supports
only PCL6, not PostScript.
It was easy to set up the printer under Ubuntu 11.10. I chose the
generic PCL6 printer driver, and everything just worked. Delightful.
(Brother's web site does have some CUPS drivers for Linux, but I did
not bother to try them out.)
Not surprisingly, the printed output looks fine too.
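(If you prefer the command line to the printer configuration dialog,
the equivalent is something like the following; the address is a
placeholder, and lpinfo will tell you the exact driver string your CUPS
has.)

    lpinfo -m | grep -i pcl          # find a generic PCL 6/PCL XL driver
    sudo lpadmin -p brother -E -v socket://192.168.1.50:9100 \
        -m "driver-string-from-lpinfo"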
I have lived without a printer or scanner for many years, but the number
of things I need to print and scan has grown to the point where going to
the market each time is painful. I am a firm believer in buying printers
with PostScript and network support, but our needs are modest and do not
justify spending enough to get a "real" printer. So I resigned myself
to paying the price in time spent wrestling with CUPS instead.
I found two or three MFPs that suited my budget online, but was unable
to find anything about Linux support for those models.
Eventually, I chose the smallest one, the
SCX-3201G, based on some positive reports about the SCX-3200 series.
Fortunately, it was easy to make it work. Thanks to tweedledee's
Samsung Unified Linux Driver Repository and the odd forum post, I
installed the PPD file and the SPL filter under Ubuntu
11.04. Printing with CUPS and scanning with SANE both work fine now.
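The setup boiled down to something like this (the repository line is
from the SULDR instructions as I remember them, and the package names
may have changed since):

    # /etc/apt/sources.list.d/suldr.list
    deb http://www.bchemnet.com/suldr/ debian extra

    sudo apt-get update
    sudo apt-get install samsungmfp-driver samsungmfp-scanner

    # sanity checks: CUPS should see the printer, SANE the scanner
    lpstat -p
    scanimage -L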
The printer itself works all right. You can tell it's meant for low
volumes. There's no output tray—it just spits paper out from the front,
and there's a non-zero risk that it'll get sucked back into the input
tray below. I would have been happier with a "real" printer, but this
one works well enough that I'm glad to have it anyway.
Update: I'm glad I don't need to print photographs. LibreOffice
and the GIMP print fine, but output is very dark and the quality is a
bit disappointing even at 1200dpi. The fault may lie with the printer,
the driver, or GIMP—or a combination thereof. The GNOME image viewer
causes the printer to spit out several mostly-empty pages with a few
control characters. I assume some CUPS incantation is needed, but I'm
happy to ignore the problem entirely. Text and line-art print fine.
Update: Sometimes, printing a PDF will also print many pages of
garbage. Most of the time, printing it a second time will work fine, but
some files always result in garbage. Unfortunately, I have not found any
way to predict when it might happen. I blame the interaction between
CUPS and Samsung's SPL filter. I have set "LogLevel debug" in
cupsd.conf, and will keep an eye on the logs.
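(For reference, the whole debugging setup is just this; the log path is
the Ubuntu default.)

    # /etc/cups/cupsd.conf
    LogLevel debug

    # then restart CUPS and watch the log while printing
    sudo service cups restart
    tail -f /var/log/cups/error_log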
<subliminal>Life is short. Get a printer with PostScript and network
support.</subliminal>
The downside of
always using SSL for web
sites that require authentication is the need to buy SSL certificates.
I usually don't need anything stronger than "domain validation" (which
assures you that you're talking to the server you think you're talking
to, but says nothing about how trustworthy that server may be). I'm not
a fan of the current PKI, but there are now many more choices for cheap
SSL certificates than there were a few years ago.
The last time I bought a "proper" certificate was early last year, when
I upgraded the
30-day trial certificate I was using in development to a
certificate for production. That was fast and painless, and cost about
$40. (I had also used RapidSSL a few years before that.)
Recently, I learned that my new registrar
(to whom I have now transferred all my domains from GoDaddy) is a
reseller for various SSL certificate providers, including GeoTrust (the
CA behind RapidSSL). Their pricing is very attractive, and I ordered a
three-year RapidSSL certificate for $9.95/year today. That was fast and
painless too (and it didn't include the phone verification step that my
earlier RapidSSL purchases did).
I'm happy with RapidSSL so far, but I still look forward to the day when
I can distribute encryption-only certificates through the DNS.
When I first started receiving more email than I could deal with in my
inbox, I reluctantly cobbled together a .procmailrc from bits of other
people's configuration files. procmail was a mystery, but my needs were
simple, and although I never graduated to liking its syntax, I learned
to get along with it reasonably well over time.
Then I read Simon Cozens' article about
filtering with Mail::Audit, and I was delighted by the thought of
writing a clear and understandable Perl program (ha!) to sort email. I
had switched away from procmail to a simple Mail::Audit script almost
before I had reached the end of the article.
My little program was convenient. Rather than having a separate recipe
for each mailing list, the code would derive the mailbox name from the
List-Id or Mailing-List header field. (For all I know, procmail may be
able to do this too, but none of the examples show how to do it, and I
never tried very hard to figure it out.) That code kept me happy for
almost ten years.
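The heart of it was something like this (a simplified sketch rather
than the script itself; the mailbox locations are illustrative):

    use strict;
    use warnings;
    use Mail::Audit;

    my $mail = Mail::Audit->new(emergency => "$ENV{HOME}/Mail/emergency");

    # Derive the mailbox name from the list headers instead of keeping
    # a separate recipe for every list.
    my $list = $mail->get("List-Id") || $mail->get("Mailing-List");
    if ($list && $list =~ /<?([\w-]+)[.@]/) {
        $mail->accept("$ENV{HOME}/Mail/lists/$1");
    }

    # accept() exits after delivery, so we get here only for the rest
    $mail->accept("$ENV{HOME}/Mail/inbox");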
Yesterday, I installed Ubuntu 11.04, which ships with Perl 5.10.1; by
coincidence, the official end of support for Perl 5.10 was announced a
day earlier, along with the release of Perl 5.14. I decided to leave the
system Perl alone, and use perlbrew to
maintain my own Perl installation. But I knew that would take hours
over my reduced bandwidth, and I didn't want to wait that long to read
my mail.
So I had the bright idea of installing Mail::Audit in ~/perl just to be
able to run my mail filter. I downloaded Mail-Audit-2.225.tar.gz from a
CPAN mirror and ran Makefile.PL, only to be warned that File::HomeDir,
File::Tempdir, and MIME::Entity were missing. I tried to install using
the system CPAN.pm, only to find still more dependencies: Test::Tester,
File::Which, Test::NoWarnings, Test::Script, Probe::Perl, MIME-tools,
IPC::Run3, and Test::Deep. Something amongst that collection failed to
install, and I was left looking at several screenfuls of failing tests.
Do I understand that the use of File::HomeDir and IPC::Run3 probably
allows users of Windows and VMS to use Mail::Audit? Yes. Am I glad that
all of these modules have comprehensive test suites? Yes. Could I have
fixed the problem? Sure, if I had spent some time investigating. But my
newly-installed gnome-terminal wouldn't scroll back far enough, and I
suddenly remembered that procmail, inscrutable as always, was already
installed.
Ten minutes and some regex-surgery on my filter script later, I had
cobbled together enough of a .procmailrc to begin reading my mail.
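For the curious, the procmail version of the List-Id trick turned out
to be short enough (a sketch; procmail's \/ operator captures the rest
of the match into $MATCH):

    :0
    * ^List-Id:.*<\/[^.>]+
    lists/$MATCH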
(I'm not overly fond of CPAN dependencies, but this post is about my
mail filter growing into something that demanded more attention than
procmail, not about Mail::Audit having too many dependencies per se.
For comparison, I am a reasonably happy user of Email::Stuff in some
of my apps, and that has even more dependencies.)
A month ago, I returned from a work trip to Kolkata to find my computer
dead. None of the tricks learned during its eight years of service could
coax it back to life, and I was forced to visit Nehru Place the next day
to buy a new motherboard. I was disappointed by the lack of variety in
the models available, but had too little time to explore. I wanted an
Asus P7H55D-M Pro,
but had to settle for the motherboard that RR Systems unearthed after
much consultation on the Nehru Place
bush telephone. I got an i3-540 CPU and 8GB of RAM with it, and had to
buy a Radeon 4350 PCIe video card too (since the P7H55 doesn't support
the on-die graphics of the i3/i5 processors).
I was too busy with work to do more than install the new hardware and
continue to use the existing (32-bit) Ubuntu 10.04 installation. Given
my track record of upgrading, I may have left it that way for a year or
two, but for two things—the thought of half my RAM being unused was sad,
and the machine wouldn't boot reliably. The latter problem was difficult
to pin down, but I finally isolated it to the Via VT6415 IDE controller.
Sometimes the kernel would hang just after enumerating the IDE devices
(one of which was my root disk). Disabling the controller solved the
problem, but meant I had to set up a new installation on a SATA disk.
Last night, I finally installed Ubuntu 11.04 (whose slick new installer
does work in the background while waiting for you to answer questions!),
and got my machine up and running with surprisingly little trouble. The
proprietary ATI fglrx video driver continues to be horribly broken, but
video performance has improved dramatically even without it (but I don't
know if that's because of improvements in the open-source radeon driver,
or something else). Installing LTSP and booting 32-bit clients worked
flawlessly. The only thing I haven't figured out how to do yet is to
switch back to using fvwm2 as my window manager, but that can wait.
And now all of that lovely 8GB of RAM is accessible.
I'm working on a web application that is running behind a reverse proxy.
Most people will use HTTP to access it, but anyone who wants to log in
must use HTTPS until they log out, to avoid leaking plaintext passwords
and session cookies. This is a brief note about the configuration. I'm
using Mojolicious and Apache 2.2's mod_proxy, but the implications of
providing mixed HTTP/HTTPS access through a reverse proxy are relevant
to other implementations.
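To give the flavour of it, here is a trimmed-down sketch of the Apache
side (hostnames, ports, and certificate paths are placeholders, and
mod_headers and mod_proxy must be enabled):

    <VirtualHost *:80>
        ServerName app.example.org
        RequestHeader set X-Forwarded-Proto "http"
        ProxyPass / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>

    <VirtualHost *:443>
        ServerName app.example.org
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/app.crt
        SSLCertificateKeyFile /etc/ssl/private/app.key
        # Tell the backend this request arrived over HTTPS, so it can
        # mark session cookies secure and generate https:// URLs
        RequestHeader set X-Forwarded-Proto "https"
        ProxyPass / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>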
A few weeks ago, I was asked to help solve a tricky DNS problem on a
laptop that needed to connect to two openvpn networks. The first network
had been set up long ago, and was outside our administrative control. It
forced the client's /etc/resolv.conf to point to a nameserver that was
accessible over the VPN and served a bogus .example-org TLD zone in
addition to acting as a general resolver.
The problem arose when the machine was configured to connect to a new
VPN. Here, clients were configured to connect to vpn.example.org. When
they were in the office, the DHCP-supplied nameserver would resolve that
to 10.0.0.1. Outside the office, example.org's "real" nameserver would hand
out the server's external IP address. This worked fine, but if the user
connected to the old VPN first, the forced nameserver change meant that
vpn.example.org no longer resolved to 10.0.0.1 inside the office. But if we
left resolv.conf unchanged, names in .example-org could not be resolved.
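One way to untangle this sort of mess (a sketch of the general
approach, not necessarily what we ended up deploying) is to run a local
dnsmasq, point /etc/resolv.conf at 127.0.0.1 so that neither VPN can
clobber it, and route each zone to the right server:

    # /etc/dnsmasq.conf (all addresses are placeholders)
    server=/example-org/10.8.0.1      # bogus TLD, via the old VPN's nameserver
    server=/example.org/10.0.0.53     # office zone, via the office nameserver
    resolv-file=/etc/resolv.conf.real # everything else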
I just moved the OBI web
site to my new hosted server at
Hetzner and—because I couldn't retain my old /29 netblock—to a new
IP address as well. Here are a few notes about the migration, including
the stupid (but harmless) mistake I made.
The first thing I should have done was to lower the TTL on the A records
for orientalbirdimages.org and www.orientalbirdimages.org. I thought I
had set them to 1 hour, but they were actually set to 172000s (~2 days).
This meant I had to wait much longer before I could be sure the move was
complete; see below for more.
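(Checking is a one-liner; the TTL is the second column, and the output
below is illustrative:)

    $ dig +noall +answer orientalbirdimages.org A
    orientalbirdimages.org. 172000 IN A 192.0.2.1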
The web site comprises some HTML and PHP files, a small MySQL database,
and a couple of gigabytes of uploaded images. I started by copying the
Apache VirtualHost definition and all the files to the new server. Then
I disabled the old site and replaced it with an "OBI will be back soon"
message (to prevent changes to the database as it was being copied). I
copied the MySQL database and recreated it on the new server. All that
remained was to change the DNS to point to the new server's address.
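In terms of commands, the copy amounted to something like this (paths
and names are placeholders):

    # copy the site files and the uploaded images
    rsync -az /var/www/obi/ newserver:/var/www/obi/

    # dump the database and recreate it on the new server
    mysqldump --opt obi | ssh newserver mysql obi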