The Advisory Boar

By Abhijit Menon-Sen <ams@toroid.org>

Disabling touchpad buttons on the Thinkpad X120E

2012-07-22

One of the two most annoying things about my Thinkpad X120E is that the touchpad buttons are flush with the outer edge of the chassis, and very easy to press inadvertently. I like the touchpad, so the option of disabling it in the BIOS or with "synclient TouchpadOff=1" did not appeal to me.

After reading the synclient man page, I was forced to accept that there was no easy way to disable just the hardware buttons. That left digging into the source code of the X.Org Synaptics driver ("apt-get source xserver-xorg-input-synaptics", and I had to install xorg-dev, xserver-xorg-dev, and xutils-dev as well).

The code is quite pleasant to read, and a single pass through synaptics.c and some quick grepping suggested a likely approach. ReadInput() handles each packet received from the device, and it calls SynapticsGetHwState(), which in turn calls a device-specific ReadHwState() function (ALPS, PS/2, etc.) to fill in a SynapticsHwState struct. All I did was to set the left and right click flags to 0 after this call.

--- synaptics.c~ 2012-07-22 13:40:01.522703354 +0530
+++ synaptics.c 2012-07-22 12:30:16.498811737 +0530
@@ -1255,8 +1255,10 @@
 SynapticsGetHwState(InputInfoPtr pInfo, SynapticsPrivate *priv,
 		    struct SynapticsHwState *hw)
 {
-    return priv->proto_ops->ReadHwState(pInfo, priv->proto_ops,
+    Bool s = priv->proto_ops->ReadHwState(pInfo, priv->proto_ops,
 					&priv->comm, hw);
+    hw->left = hw->right = 0;
+    return s;
 }

I built and installed the result (by copying src/.libs/synaptics_drv.so to /usr/lib/xorg/modules/input), and now I have a working touchpad with disabled buttons. Tap-to-click is implemented in software, so it works perfectly. The trackpoint is a separate device altogether, so its buttons (just above the trackpad) work fine too.

One oddity is that tap-to-click doesn't work at the lightdm screen, but it works fine inside GNOME. I didn't bother trying to figure out why.

Brother HL-2250DN and Linux

2012-04-29

My mother has been using my old Lexmark printer for many years, but it is no longer possible to find toner cartridges for it (which is such a shame, because it's a good printer). When the last cartridge became so flaky that she could no longer print her tickets, she asked me to find a new printer for her.

I thought about a cheap Samsung ML-16xx laser printer, but my recent experience with SPL led me to settle on the Brother HL-2250DN instead. This printer ticks many of my boxes: it has Ethernet support, automatic duplex printing (surprising, for a relatively inexpensive printer), and a proper output tray. The downside is that it supports only PCL6, not PostScript.

It was easy to set up the printer under Ubuntu 11.10. I chose the generic PCL6 printer driver, and everything just worked. Delightful. (Brother's web site does have some CUPS drivers for Linux, but I did not bother to try them out.)

Not surprisingly, the printed output looks fine too.

Samsung SCX-3201G MFP and Linux

2012-04-01

I have lived without a printer or scanner for many years, but the number of things I need to print and scan has grown to the point where going to the market each time is painful. I am a firm believer in buying printers with PostScript and network support, but our needs are modest and do not justify spending enough to get a "real" printer. So I resigned myself to paying the price in the form of wrestling with CUPS instead.

I found two or three MFPs that suited my budget on Flipkart, but was unable to find anything about Linux support for those models. Eventually, I chose the smallest one, the Samsung SCX-3201G, based on some positive reports about the SCX-3200 series.

Fortunately, it was easy to make it work. Thanks to tweedledee's Samsung Unified Linux Driver Repository and the odd forum post, I installed the PPD file and the SPL filter under Ubuntu 11.04. Printing with CUPS and scanning with SANE both work fine now.

The printer itself works all right. You can tell it's meant for low volumes. There's no output tray—it just spits paper out from the front, and there's a non-zero risk that it'll get sucked back into the input tray below. I would have been happier with a "real" printer, but this one works well enough that I'm glad to have it anyway.

Update: I'm glad I don't need to print photographs. LibreOffice and the GIMP print without incident, but the output is very dark and the quality is a bit disappointing even at 1200dpi. The fault may lie with the printer, the driver, or GIMP—or a combination thereof. The GNOME image viewer causes the printer to spit out several mostly-empty pages with a few control characters. I assume some CUPS incantation is needed, but I'm happy to ignore the problem entirely. Text and line-art print fine.

Update: Sometimes, printing a PDF will also print many pages of garbage. Most of the time, printing it a second time will work fine, but some files always result in garbage. Unfortunately, I have not found any way to predict when it might happen. I blame the interaction between CUPS and Samsung's SPL filter. I have set "LogLevel debug" in cupsd.conf, and will keep an eye on the logs.
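(For the curious: the change is a single line in the server configuration, and on Ubuntu the resulting chatter lands in /var/log/cups/error_log.)

```
# /etc/cups/cupsd.conf
LogLevel debug
```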

<subliminal>Life is short. Get a printer with PostScript and Ethernet.</subliminal>

Buying an SSL certificate

2012-02-29

The downside of always using SSL for web sites that require authentication is the need to buy SSL certificates. I usually don't need anything stronger than "domain validation" (which assures you that you're talking to the server you think you're talking to, but says nothing about how trustworthy that server may be). I'm not a fan of the current PKI, but there are now many more choices for cheap SSL certificates than there were a few years ago.

The last time I bought a "proper" certificate was early last year, when I upgraded the FreeSSL 30-day trial certificate I was using in development to a RapidSSL certificate for production. That was fast and painless, and cost about $40. (I had also used RapidSSL a few years before that.)

Recently, I learned that Namecheap (to whom I have now transferred all my domains from GoDaddy) is a reseller for various SSL certificate providers, including GeoTrust (the CA behind RapidSSL). Their pricing is very attractive, and I ordered a three-year RapidSSL certificate for $9.95/year today. That was fast and painless too (and it didn't include the phone verification step that my earlier RapidSSL purchases did).

I'm happy with RapidSSL so far, but I still look forward to the day when I can distribute encryption-only certificates through the DNS.

Back to procmail

2011-05-16

When I first started receiving more email than I could deal with in my inbox, I reluctantly cobbled together a .procmailrc from bits of other people's configuration files. procmail was a mystery, but my needs were simple, and although I never graduated to liking its syntax, I learned to get along with it reasonably well over time.

Then I read Simon Cozens' article about Mail filtering with Mail::Audit, and I was delighted by the thought of writing a clear and understandable Perl program (ha!) to sort email. I had switched away from procmail to a simple Mail::Audit script almost before I had reached the end of the article.

My little program was convenient. Rather than having a separate recipe for each mailing list, the code would derive the mailbox name from the List-Id or Mailing-List header field. (For all I know, procmail may be able to do this too, but none of the examples show how to do it, and I never tried very hard to figure it out.) That code kept me happy for almost ten years.
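The derivation amounted to something like the following (sketched here in Python rather than the original Perl; the header values and the regex are simplified for illustration):

```python
import re

def mailbox_for(headers):
    """Guess a mailbox name from a mailing-list header, so that e.g.
    'List-Id: <dbi-users.perl.org>' is filed under 'dbi-users'."""
    for field in ("List-Id", "Mailing-List"):
        value = headers.get(field)
        if value is None:
            continue
        # The mailbox name is the first dot- or @-delimited token.
        m = re.search(r"<?([A-Za-z0-9_-]+)[.@]", value)
        if m:
            return m.group(1).lower()
    return None  # no list headers: leave the message in the inbox
```

One recipe's worth of logic covers every list, which is what kept the script so small.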

Yesterday, I installed Ubuntu 11.04, which ships with Perl 5.10.1; by coincidence, the official end of support for Perl 5.10 was announced a day earlier, along with the release of Perl 5.14. I decided to leave the system Perl alone, and use perlbrew to maintain my own Perl installation. But I knew that would take hours with my reduced bandwidth, and I didn't want to wait until then to read my mail.

So I had the bright idea of installing Mail::Audit in ~/perl just to be able to run my mail filter. I downloaded Mail-Audit-2.225.tar.gz from a CPAN mirror and ran Makefile.PL, only to be warned that File::HomeDir, File::Tempdir, and MIME::Entity were missing. I tried to install using the system CPAN.pm, only to find still more dependencies: Test::Tester, File::Which, Test::NoWarnings, Test::Script, Probe::Perl, MIME-tools, IPC::Run3, and Test::Deep. Something amongst that collection failed to install, and I was left looking at several screenfuls of failing tests.

Do I understand that the use of File::HomeDir and IPC::Run3 probably allows users of Windows and VMS to use Mail::Audit? Yes. Am I glad that all of these modules have comprehensive test suites? Yes. Could I have fixed the problem? Sure, if I had spent some time investigating. But my newly-installed gnome-terminal wouldn't scroll back far enough, and I suddenly remembered that procmail, inscrutable as always, was already installed.

Ten minutes and some regex-surgery on my filter script later, I had cobbled together enough of a .procmailrc to begin reading my mail.
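The recipes themselves are unremarkable—one stanza per list, nowhere near as general as the old script. Something along these lines (the mailbox and list names are illustrative):

```
MAILDIR=$HOME/mail
DEFAULT=$MAILDIR/inbox
LOGFILE=$MAILDIR/procmail.log

:0:
* ^List-Id:.*dbi-users
dbi-users

:0:
* ^Mailing-List:.*perl5-porters
perl5-porters
```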

(I'm not overly fond of CPAN dependencies, but this post is about my mail filter growing into something that demanded more attention than procmail, not about Mail::Audit having too many dependencies per se. For comparison, I am a reasonably happy user of Email::Stuff in some of my apps, and that has even more dependencies.)

Ubuntu 11.04 on an Asus P7H55

2011-05-15

A month ago, I returned from a work trip to Kolkata to find my computer dead. None of the tricks learned during its eight years of service could coax it back to life, and I was forced to visit Nehru Place the next day to buy a new motherboard. I was disappointed by the lack of variety in the models available, but had too little time to explore. I wanted an Asus P7H55D-M Pro, but had to settle for the Asus P7H55 that RR Systems unearthed after much consultation on the Nehru Place bush telephone. I got an i3-540 CPU and 8GB of RAM with it, and had to buy a Radeon 4350 PCIe video card too (since the P7H55 doesn't support the on-die graphics of the i3/i5 processors).

I was too busy with work to do more than install the new hardware and continue to use the existing (32-bit) Ubuntu 10.04 installation. Given my track record of upgrading, I may have left it that way for a year or two, but for two things—the thought of half my RAM being unused was sad, and the machine wouldn't boot reliably. The latter problem was difficult to pin down, but I finally isolated it to the Via VT6415 IDE controller. Sometimes the kernel would hang just after enumerating the IDE devices (one of which was my root disk). Disabling the controller solved the problem, but meant I had to set up a new installation on a SATA disk.

Last night, I finally installed Ubuntu 11.04 (whose slick new installer does work in the background while waiting for you to answer questions!), and got my machine up and running with surprisingly little trouble. The proprietary ATI fglrx video driver continues to be horribly broken, but video performance has improved dramatically even without it (though I don't know whether that's because of improvements in the open-source radeon driver, or something else). Installing LTSP and booting 32-bit clients worked flawlessly. The only thing I haven't figured out how to do yet is to switch back to using fvwm2 as my window manager, but that can wait.

And now all of that lovely 8GB of RAM is accessible.

Mixing HTTP and HTTPS access to an application

2011-02-10

I'm working on a web application that is running behind a reverse proxy. Most people will use HTTP to access it, but anyone who wants to log in must use HTTPS until they log out, to avoid leaking plaintext passwords and session cookies. This is a brief note about the configuration. I'm using Mojolicious and Apache 2.2's mod_proxy, but the implications of providing mixed HTTP/HTTPS access through a reverse proxy are relevant to other implementations.
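As a taste of the shape of the thing, the HTTPS side of such a proxy looks roughly like this in Apache 2.2 (the server name, backend port, and certificate paths here are invented for illustration, not taken from my actual configuration):

```
<VirtualHost *:443>
    ServerName app.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/app.pem
    SSLCertificateKeyFile /etc/ssl/private/app.key

    ProxyPass        / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
    # Requires mod_headers; the backend consults this to decide
    # whether the request really arrived over TLS, so it can mark
    # session cookies secure and generate https:// links.
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>
```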

Read more…

Stomping on resolv.conf

2011-01-10

A few weeks ago, I was asked to help solve a tricky DNS problem on a laptop that needed to connect to two openvpn networks. The first network had been set up long ago, and was outside our administrative control. It forced the client's /etc/resolv.conf to point to a nameserver that was accessible over the VPN and served a bogus .example-org TLD zone in addition to acting as a general resolver.

The problem arose when the machine was configured to connect to a new VPN. Here, clients were configured to connect to vpn.example.org. When they were in office, the DHCP-supplied nameserver would resolve that to 10.0.0.1. Outside the office, example.org's "real" nameserver would hand out the server's external IP address. This worked fine, but if the user connected to the old VPN first, the forced nameserver change meant that vpn.example.org no longer resolved to 10.0.0.1 inside office. But if we left resolv.conf unchanged, names in .example-org could not be resolved.

Read more…

Moving the OrientalBirdImages.org web site

2010-10-26

I just moved the OBI web site to my new hosted server at Hetzner and—because I couldn't retain my old /29 netblock—to a new IP address as well. Here are a few notes about the migration, including the stupid (but harmless) mistake I made.

The first thing I should have done was to lower the TTL on the A records for orientalbirdimages.org and www.orientalbirdimages.org. I thought I had set them to 1 hour, but they were actually set to 172000s (~2 days). This meant I had to wait much longer before I could be sure the move was complete; see below for more.

The web site comprises some HTML and PHP files, a small MySQL database, and a couple of gigabytes of uploaded images. I started by copying the Apache VirtualHost definition and all the files to the new server. Then I disabled the old site and replaced it with an "OBI will be back soon" message (to prevent changes to the database as it was being copied). I copied the MySQL database and recreated it on the new server. All that remained was to change the DNS to point to the new server's address.
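In outline, the copy was nothing more exotic than rsync and mysqldump (the host and path names below are invented):

```
# Static files, PHP, and the uploaded images
rsync -az /var/www/obi/ newserver:/var/www/obi/

# The database (after creating an empty 'obi' database on the new server)
mysqldump obi | ssh newserver mysql obi
```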

Read more…

Resetting the Lexmark E323N network configuration

2010-09-12

Many years ago, I bought a Lexmark E323N laser printer (600dpi, 19ppm) because it was the cheapest printer I could find that came with Ethernet and PostScript support. I used it for a long time and was happy with it. When I moved away from home, I left it connected to my switch—along with a DSL modem and a wireless access point—so that my mother could use it.

Fast forward a few years. The DSL modem had died and been replaced. The switch had died and been replaced. The WAP died, and the Netgear WGR614 bought to replace it had four Ethernet ports, and could thus replace the switch as well. But it was a router, not a bridge, and so it wanted its internal and external networks numbered differently. The upshot was that the printer's IP address needed to change from 10.0.0.4 to 192.168.1.4.

No problem. I added a 10.0.0.0/8 address and host route to my netbook's eth2, which let me connect to the printer's administrative interface and change its address in the network settings menu. Alas, I forgot all about the separate "access control" menu, which was set to deny requests from outside 10.0.0.0/8. When the printer came back up, it would respond to ping from 192.168.1.x but discard TCP packets because of the access filter. If I used a 10.0.0.x address, it threw away all packets because they came from a source that was not on the printer's own network.
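(The temporary plumbing was just two commands along these lines—10.0.0.5 is an arbitrary free address, and the host route points at the printer's old address:)

```
ip addr add 10.0.0.5/8 dev eth2
ip route add 10.0.0.4/32 dev eth2
```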

Read more…