The Advisory Boar
Here's how I configured Postfix to relay mail from email@example.com through
smtp-relay.gmail.com:587 using the credentials set up for firstname.lastname@example.org
on Google Apps.
There are three parts to this: making Postfix relay mail based on the
sender address, teaching it to authenticate to gmail, and configuring
gmail to accept the relayed mail. (Postfix was already configured to
send outgoing mail directly.)
I created /etc/postfix/relay_hosts with the following contents:
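The original file isn't reproduced here, but a sender-to-relay map along these lines is what's needed (the addresses are the placeholders from above; the brackets suppress MX lookups):

    email@example.com    [smtp-relay.gmail.com]:587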
Then I ran «postmap /etc/postfix/relay_hosts» and pointed main.cf at the resulting map.
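A sketch of the main.cf line (sender_dependent_relayhost_maps is the standard Postfix parameter for choosing a relay host by sender address):

    sender_dependent_relayhost_maps = hash:/etc/postfix/relay_hosts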
SMTP SASL authentication
I created /etc/postfix/sasl_passwords (mode 0600) with the following contents:
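The original entry isn't reproduced here, but the map is keyed by the relay host (in the same form as the relay map above) and holds the username and password (the password below is a placeholder):

    [smtp-relay.gmail.com]:587    firstname.lastname@example.org:secret-password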
Then I ran «postmap /etc/postfix/sasl_passwords» and added the following to main.cf:
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwords
smtp_sasl_security_options = noanonymous
That enables SMTP AUTH in the Postfix SMTP client and tells Postfix
where to look up the username and password for a domain.
Gmail will accept SMTP AUTH only in a TLS session, so TLS client support
must be configured in Postfix (which means setting smtp_tls_security_level
to "may"). But even once that's done, gmail advertises only the
following authentication mechanisms:
250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH
I didn't want to worry about OAUTH, so PLAIN was the only reasonable
choice. Postfix will not use plaintext authentication mechanisms by
default, so I also had to remove "noplaintext" from the default value for
smtp_sasl_security_options.
As an additional precaution, I also set a per-destination TLS policy to
change the default from "may" to "encrypt" for the relay host.
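Combined with the client TLS setting above, that amounts to something like this (a sketch; smtp_tls_policy_maps is the usual mechanism for per-destination policies, and the map's file name here is my own choice):

    smtp_tls_security_level = may
    smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

and in /etc/postfix/tls_policy (run through postmap as usual):

    [smtp-relay.gmail.com]:587    encrypt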
When I tried to send mail through the relay, Postfix wasn't able to authenticate:
SASL authentication failure: No worthy mechs found
SASL authentication failed; cannot authenticate to server smtp-relay.gmail.com[188.8.131.52]: no mechanism available
Google considers password authentication to be “less secure”, and you
have to explicitly enable it on the
less secure apps settings page.
There are more secure alternatives,
but I was happy to take the path of least resistance here.
I did that and tried again, only for mail to bounce with this error:
Invalid credentials for relay [184.108.40.206]. The IP address you've
registered in your G Suite SMTP Relay service doesn't match domain of
the account this email is being sent from. If you are trying to relay
mail from a domain that isn't registered under your G Suite account
or has empty envelope-from, you must configure your mail server
either to use SMTP AUTH to identify the sending domain or to present
one of your domain names in the HELO or EHLO command. For more
information, please visit https://support.google.com/a/answer/6140680#invalidcred
This message is misleading, as I found out by using openssl's s_client
to establish a TLS session and then authenticating by hand. SMTP AUTH
succeeded, but MAIL FROM was subsequently rejected.
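For anyone who wants to repeat that experiment, the manual session went roughly like this (a sketch using the placeholder addresses from above; AUTH PLAIN takes the base64 encoding of the NUL-separated username and password):

    $ printf '\0%s\0%s' firstname.lastname@example.org 'the-password' | base64
    $ openssl s_client -connect smtp-relay.gmail.com:587 -starttls smtp -crlf
    EHLO example.org
    AUTH PLAIN <base64 output from the printf above>
    MAIL FROM:<email@example.com>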
I followed the link in the message, which led me to the documentation
for the SMTP relay service.
The Google Apps admin console doesn't use sensible URLs, but I followed
the breadcrumb trail to an “Advanced settings” page where I was able to
edit the SMTP relay service settings to set “Allowed senders” to “Only
addresses in my domains”, as well as to “Require SMTP authentication”
and “Require TLS encryption”. Remember to “Save” the changes.
The error I got was because the “Only accept mail from specified IP
addresses” option was checked for this particular domain. I could have
added the IP address of my server to the list, but SMTP authentication
was what I wanted to use anyway.
One of my contributions to Postgres 9.5 (back in 2015) was a two-stage
optimisation of the CRC computation code. First, switching to a faster
algorithm; and second, using the Intel SSE4.2 CRC instructions where
available. I was delighted to have the opportunity to implement such a
dramatic performance improvement (CRC computation used to be at the top
of the profile on every streaming replica by some distance).
Optimising something by writing assembly (even if it was only a couple
of instructions, later replaced by compiler intrinsics) is always fun,
but here the algorithm change was also a substantial improvement, in
that it used a lookup table to process eight input bytes at a time. This
technique is known as “slicing-by-N” (where N depends on the size of the
lookup table), and was originally described here:
Frank L. Berry, Michael E. Kounavis, "Novel Table Lookup-Based
Algorithms for High-Performance CRC Generation", IEEE Transactions on
Computers, vol. 57, no. 11, pp. 1550-1560, November 2008.
This paper, having been published in a prestigious IEEE journal, is of
course not easily available for download (not when I looked in 2015, and
apparently not today). I was able to find what I needed to implement the
technique thanks to other reference materials, notably including
Stephan Brumme's Fast CRC32 page
(now considerably expanded since 2015), but I never actually got to read
what Messrs. Kounavis and Berry had to say about their technique.
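Since the paper itself is hard to come by, here is a minimal sketch of what slicing-by-8 amounts to in practice, reconstructed from the secondary sources mentioned above. It uses the reflected CRC-32C (Castagnoli) polynomial; it is an illustration, not the PostgreSQL code:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Slicing-by-8 CRC computation, using the reflected CRC-32C
     * (Castagnoli) polynomial. Illustrative sketch only.
     */
    static uint32_t crc_table[8][256];

    static void
    crc_init(void)
    {
        /* Table 0 is the classic one-byte-at-a-time (Sarwate) table. */
        for (int i = 0; i < 256; i++)
        {
            uint32_t c = (uint32_t) i;

            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0x82F63B78 : (c >> 1);
            crc_table[0][i] = c;
        }

        /* Table t gives the effect of a byte followed by t zero bytes. */
        for (int i = 0; i < 256; i++)
            for (int t = 1; t < 8; t++)
                crc_table[t][i] = (crc_table[t - 1][i] >> 8) ^
                    crc_table[0][crc_table[t - 1][i] & 0xFF];
    }

    /* Call crc_init() once before using this. */
    uint32_t
    crc32c_slice8(const unsigned char *p, size_t len)
    {
        uint32_t crc = 0xFFFFFFFF;

        /* Main loop: fold eight input bytes into the CRC per iteration. */
        while (len >= 8)
        {
            uint32_t lo = crc ^ (p[0] | (p[1] << 8) | (p[2] << 16) |
                                 ((uint32_t) p[3] << 24));
            uint32_t hi = p[4] | (p[5] << 8) | (p[6] << 16) |
                          ((uint32_t) p[7] << 24);

            crc = crc_table[7][lo & 0xFF] ^ crc_table[6][(lo >> 8) & 0xFF] ^
                  crc_table[5][(lo >> 16) & 0xFF] ^ crc_table[4][lo >> 24] ^
                  crc_table[3][hi & 0xFF] ^ crc_table[2][(hi >> 8) & 0xFF] ^
                  crc_table[1][(hi >> 16) & 0xFF] ^ crc_table[0][hi >> 24];
            p += 8;
            len -= 8;
        }

        /* Tail: plain one-byte-at-a-time (Sarwate) update. */
        while (len-- > 0)
            crc = (crc >> 8) ^ crc_table[0][(crc ^ *p++) & 0xFF];

        return crc ^ 0xFFFFFFFF;
    }

The tail loop is exactly the Sarwate update; the main loop does the same work eight bytes at a time, trading 8 KB of lookup tables for far fewer iterations, which is where the speedup comes from.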
Recently, I had occasion to look at CRC32 implementations again, and I
found a different paper that I had looked at briefly the last time around:
Cyclic Redundancy Check Generation Using Multiple Lookup Table Algorithms
by Indu I. and Manu T.S. from TKM Institute of Technology, Kollam, in
Kerala (my mother's home state in South India). I remember noting that
there was something odd about the paper, but not having time enough to
give it more than a passing glance. This time, I spent a while reading
it, and it's certainly very odd.
ABSTRACT: The primary goal of this paper is to generate cyclic
redundancy check (CRC) using multiple lookup table algorithms. A compact
architecture of CRC algorithm (Slicing-by-N algorithm) based on multiple
lookup tables (LUT) approach is proposed. This algorithm can ideally
read large amounts of data at a time, while optimizing their memory
requirement to meet the constraints of specific computer architectures.
The focus of this paper is the comparison of two algorithms. These two
algorithms are Slicing by-N-algorithm and Sarwate algorithm, in which
slicing by-N-algorithm can read arbitrarily 512 bits at a time, but
Sarwate algorithm, which can read only 8 bits at a time. This paper
proposes the generation of CRC using slicing by 8 algorithm. In this,
message bits are chunked to 8 blocks. All are processed at a time.
Proposed Slicing-by-8 algorithm can read 64 bits of input data at a time
and it doubles the performance of existing implementations of Sarwate algorithm.
Is this paper claiming to have invented the slicing-by-N
algorithm? It's hard to tell from the blather in the abstract, but going
through the remaining blather (an effort that, in retrospect, I cannot
in good conscience commend to the reader) suggests that this is indeed the claim:
Recently time is the major concern. So in order to
process large amount of data at a time, Multiple Lookup
based approach is more efficient. Multiple Lookup based
approach contains five CRC algorithms, called Slicing by-N
algorithm (N ϵ 4, 8, 16, 32, 64), which is used to read up to
512 bits at a time. So performance of the system should be
increased. Here proposing Slicing by-8 algorithm to read 64
bits at a time. Here proposed an efficient design of CRC
generator using Slicing by-N algorithm (N=8). In this
algorithm, input message stream is sliced into N slices and
each slice has 8 bits. So using this Slicing by-8 algorithm, it
can read 64 bits at a time and it triples the performance of
existing implementation of Sarwate algorithm.
Oho, so it triples the performance of existing implementations of the
Sarwate algorithm, does it? Funny the abstract claims a paltry doubling
in performance then. The paper goes on to describe CRC computation with
block diagrams, and then has some more blather about VHDL and MATLAB and
some screenshots of “simulation waveforms”, all of which seems to amount
to showing that the various CRC algorithms produce the same results and
that processing more input bytes at a time is faster than not doing so.
I made judicious use of the fast-forward button to reach the conclusion,
which begins with
The design of CRC generator using Multiple Look Up based approach is
proposed. In this paper, slicing by-8 algorithm is designed, and
compares this algorithm with the existing algorithms, that is, with
Sarwate algorithm and LFSR method.
So yeah, they're claiming in a slightly roundabout way to have invented
the slicing-by-8 CRC algorithm. However, the authors cite the Kounavis
and Berry paper anyway, perhaps so that any criticism can be blamed on
some sort of unfortunate misunderstanding. I didn't find any citations
of this paper in the minute or two I spent on the search, but Citeseer
and Researchgate link to it, and it's quite prominent in search results,
so it's probably only a matter of time before someone cites it.
The paper was published in "International Journal of Modern Engineering
Research” (ijmer.com) in 2012; the journal's name alone reminded me of
the scammers, Academic Journals Online,
whom I encountered a few years ago. IJMER does not, however, appear to
be one of the AJO journals. Perhaps it's a second cousin.
Unfortunately, the authors include no contact information in the paper,
so I was not able to send them a link to this page.
I wanted to restore a clean Raspbian image on the Raspberry Pi that was
running in the battery room. I could have gone and got it, but I would
have had to climb a ladder.
So I thought “Why not just remount the filesystems 'ro' and dd the image over the card?”
I remounted / and /boot (after "systemctl isolate rescue.target", which
stopped sshd but left my ssh session running) read-only, and dd'ed the
image from an NFS mount onto /dev/mmcblk0. The dd worked fine. I used
/proc/sysrq-trigger afterwards to sync and reboot (I couldn't run any
binaries after the dd, which wasn't much of a surprise). The machine
fell off the network as expected…
…and never came back up. So I climbed up the ladder and brought the Pi
down to my desk and plugged it into my monitor. It did start to boot,
and got a fair distance before the kernel panicked. I didn't bother to
try to figure out the problem, just extracted and rewrote the SD card;
and all was well thereafter.
But I still think my plan ought to have worked.
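For the record, the sequence was roughly the following (a sketch; the NFS path is hypothetical):

    systemctl isolate rescue.target     # stops everything but the existing ssh session
    mount -o remount,ro /boot
    mount -o remount,ro /
    dd if=/mnt/nfs/raspbian.img of=/dev/mmcblk0 bs=4M
    # No new binaries will run after the dd, but shell builtins still work:
    echo s > /proc/sysrq-trigger        # emergency sync
    echo b > /proc/sysrq-trigger        # reboot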
The mains power supply in Lweshal is dismal.
There are frequent outages, of course—the transformer in the village
blew up earlier this year, and we had no power for a week. Two or three
times in the summer (when forest fires were burning everywhere) a tree
fell on the line and cut off power for a few days. There's a big fuse
near Mauna which seems to keep melting down. But none of that is really
a surprise in a remote area.
The unpleasant surprise was how bad the supply could be when there's no
outage. For some reason, extreme voltages are quite common. I've seen
the mains voltage at a record low of 25V for several hours once, and
we've had whole days when it stayed around 60–90V—voltages so low that
the electricity meter stayed off, even though our 9W LEDs inside would
light up. Free power!
High voltages don't last nearly as long, but we've seen spikes of 300V
and more on occasion. It's difficult to decide which condition is more
destructive. High voltages fry appliances, but persistent low voltages
where some lights appear to work encourage people to draw more current
than their circuits can safely carry—and in a place where people use
1.0mm² wire even for 16A circuits, and nobody has any earthing, that
isn't something to be taken lightly.
Either way, voltage fluctuations blew up our UPS twice. The first time
we didn't have any sort of voltage regulator installed. After having to
pay for a new logic board, we installed a custom-made "constant voltage
transformer" (a big auto-transformer with a voltage meter). It clicked a
lot to indicate its displeasure, and we had to take it back to the shop
to make it cut off the output altogether if the voltage was too low (but
why didn't it do that to begin with?). Then the next fluctuation killed
the UPS again.
In such a dire situation, only a device with a genuine superhero name
could possibly save us, and the Accurex certainly delivers on that
front. I bought one from Amazon, and we
installed it upstream of the main distribution board. It doesn't do any
voltage regulation, just cuts off the output beyond the predefined low
and high voltage thresholds. Here it is in action.
It has worked correctly in various low-voltage conditions (we've had a
130V supply for most of the past two days). It has high- and low-voltage
bypass modes that I have never tried, and an optional output timer that
restores power to the house only if the power stays on for two minutes.
It's useful that it displays the input voltage (even when the output is
cut off), and the 32A circuit breaker is very handy when we're working
on the distribution board.
Other Amazon customers assured me that the device makes no noise during
operation, but of course it does. It clicks away merrily, but it's a
small price to pay for reliable voltage limits.
Update (2017-04-23): Our low- and high-voltage records for the
Accurex are 43V and 592V respectively (both voltages persisted for some
hours before returning to normal).
My mother called to tell me that people were complaining that mail sent
to her address at one of my domains (menon-sen.com) was bouncing. Here's
an excerpt from the bounce message she sent me:
DNS Error: 27622840 DNS type 'mx' lookup of menon-sen.com responded
with code SERVFAIL
I thought it was just a temporary DNS failure, but just for completeness
I tried to look up the MX for the domain, and got a SERVFAIL response. I
checked WHOIS for the domain and was horrified to find this:
Name Server: FAILED-WHOIS-VERIFICATION.NAMECHEAP.COM
Name Server: VERIFY-CONTACT-DETAILS.NAMECHEAP.COM
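Both checks take only a moment to repeat from a shell (a sketch):

    $ dig MX menon-sen.com                      # status: SERVFAIL in the response header
    $ whois menon-sen.com | grep -i 'name server'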
In a near-panic (because this meant email to one of my work addresses
was also being bounced), I checked a bunch of stuff: No, the whois
details for the domain were not incorrect (nor had they been changed
recently). No, Namecheap had not sent me any whois verification mail
about the domain. No, Namecheap had not sent me any notification that it
was going to suspend the domain. No, the Namecheap admin page didn't say
anything about the domain having been suspended.
I couldn't find any relevant articles in the support knowledgebase, so I
opened an emergency ticket with Namecheap support. They responded in an
hour, and helped to resolve the problem immediately. They did admit that
I didn't receive a notification because of an error on their part:
We have double-checked contact details on the domain in question and
registrant details appeared to be missing on the domain due to a
one-time glitch at our end. That is the reason you have not received
verification email. Please accept our most genuine apologies for the
inconvenience caused you.
I have always found Namecheap support to be responsive and helpful. I do
appreciate their candour and the prompt response in this case as well,
but I am deeply shaken that their system has no controls in place to
prevent a domain from being suspended without any sort of notification
(especially since they were sending me notifications about other domains
registered under the same account in the same time period).
I don't know when exactly the domain was suspended. I have actually lost
mail because of this incident—and at least one of them was an important
response to some mail I sent. But thanks to my mother's correspondents,
I think the problem was discovered before very long. I cannot afford to
worry about this happening for my other domains that are up for renewal
in the near future. If the same thing had happened to toroid.org, it
would have been catastrophic.
I have been a happy customer of Namecheap for more than five years, and
recommended it to any number of friends during that time. Along with
one much more expensive alternative, it's by far the best of the dozen or so
registrars I've used over the past two decades. I have no idea where to
move my domains, but I'll start keeping an eye out for an alternative.
Update, moments after writing the above: my friend Steve points
out that there's something to be said for having a vendor who admits to
their errors honestly; and only a pattern of errors rather than a single
incident would justify moving my domains away to an unknown registrar.
A few days from now, I hope to be able to properly appreciate Steve's
wisdom in this matter. Meanwhile, I'm saved from precipitous actions by
the fact that I haven't the faintest idea where to migrate anyway.
I have never had a refrigerator that was not subject to periodic power
failures. The severity and frequency of the outages varied from several
small interruptions per day to extended power failures lasting sixteen
hours or more; the former could be ignored, while the latter usually
meant throwing everything out and starting afresh.
As I grew up and started working with computers, a succession of power
backup devices entered my life, and I eventually became accustomed to
“uninterrupted” power, but it was strictly rationed. I was never able to
connect anything but the computers and networking equipment to the UPS,
and certainly nothing like a refrigerator.
So I have never experienced refrigeration as it is meant to be.
Until now. Thanks to our solar power setup, we have been able to keep
our refrigerator running without interruptions for several weeks on end.
Suddenly it feels as though we have a magical new refrigerator in which
food doesn't spoil. Coriander and green chillies stay fresh and usable
for days. Cream skimmed off the top of boiled milk is something we can
collect for the rare fettuccine alfredo. Our precious cheese collection
is something we can enjoy at leisure. These days we don't have much in
the way of leftovers, and we can use fresh vegetables from our kitchen
garden often enough that we store only a few in the refrigerator, but
everything remains usable for an absurdly long time.
Today is apparently an occasion that has something to do with a water monster.
I'm not very clear about the details, but there's a crocodile (or half a
crocodile) involved in some way, and that's good enough for me. So in
honour of the water monster, we cleaned the fridge today. Nothing was
spoiled, and the dreaded “fridge smell” was very faint. The fridge is
now spotless, and the monster is appeased.
Sometimes the most mundane of insights can seem profound if it comes
from experience: modern refrigeration is pretty nice.
I have more than a passing interest in VPN software, and have looked at
and used many different implementations over the years. I haven't found
much to cheer about, which led me to write tappet for my personal use.
I've been reading about Wireguard
for the past few weeks, and I really like it so far. It follows through
on many of the same goals that I had with tappet, and goes much further
in areas important to more widespread adoption. The author, Jason
Donenfeld, articulates the project's design goals in his presentations.
Keeping the code small and easy to review was a primary consideration
for me (tappet is under a thousand lines of code, not including NaCl).
By this measure, Wireguard does an admirable job of staying small at
around 15,000 lines including crypto code and tests.
When I wrote tappet, the Noise protocol framework
did not exist in a usable (or recommended) form. Wireguard's adoption of
this framework brings a host of desirable properties that tappet lacks,
notably including perfect forward secrecy.
One of my major frustrations with OpenVPN is the extraordinary time it
takes to establish a TLS connection on a high-latency link. Very often,
when tethered via GPRS, it will retry forever and never
succeed. Tappet goes to the other extreme—it requires zero
setup for encrypted links (at the expense of perfect forward secrecy).
Wireguard restricts its handshake to a single round-trip, which is an
entirely acceptable compromise in practice.
Wireguard runs in the kernel, thereby avoiding the need to copy packets
in and out of userspace. I didn't care nearly as much about performance.
Tappet is fast enough in userspace that it keeps up with the fastest link
I've tried it on (42.2 Mbps DC-HSPA+), and I didn't need anything more.
Wireguard accepts multiple peers per interface, while tappet is limited
to setting up point-to-point encrypted links. The former is obviously
more practical in realistic deployments. (On the other hand, Wireguard
is a Layer-3 VPN, while tappet operates at L2 and forwards Ethernet
frames instead of IP packets. How much that matters depends on the application.)
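To make the multiple-peers point concrete, this is roughly what such an interface looks like in wg-quick's configuration format (a sketch; the keys, addresses, and endpoint are placeholders):

    [Interface]
    Address = 10.0.0.1/24
    ListenPort = 51820
    PrivateKey = <this host's private key>

    [Peer]
    PublicKey = <first peer's public key>
    AllowedIPs = 10.0.0.2/32
    Endpoint = peer1.example.org:51820

    [Peer]
    PublicKey = <second peer's public key>
    AllowedIPs = 10.0.0.3/32

Each peer is identified by its public key, and AllowedIPs serves as both routing and access control for that peer.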
I look forward to a time when I can use Wireguard in production.
We bought dark kitchen towels to wipe our iron woks, which tend to leave
rust-coloured stains—at least temporarily. But Ammu got her hands on one
of them, and made it much too pretty to wipe anything with.
On the twelfth day after Christmas, my true love said to me, “This
wordpress theme won't let me save any customisations. Can you take a
look at it?”
The theme customisation menu in WordPress displays various options in
the left sidebar, and a live preview of the changes on the right. You
can edit things in the menu and see what they look like, and there's a
"Save and Publish" button at the top. But the button remained stuck at
"Saved" (greyed-out), and never detected any changes. Nor was the menu
properly styled, and many other controls were inoperative.
We found other
reports of the problem,
but no definitive solution. Disabling certain plugins fixed the problem
for some people, but that didn't help us—hardly any plugins were active
anyway, and none of the problematic ones were installed.
We looked at network requests for the page in the Chrome developer
console, and saw a series of 404 responses for local CSS and JS
resources within the theme's extension directory.
Here's one of the failing URLs:
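The exact URL isn't reproduced here, but it had this general shape (hypothetical host and theme name; the /wp prefix comes from the Debian packaging described below):

    https://example.org/wp/var/lib/wordpress/wp-content/themes/some-theme/css/style.css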
That /var/lib/wordpress certainly didn't belong in the URL, so we went
grepping in the code to see how the URL was being generated. It took us
quite some time to figure it out, but we eventually found this code that
was used to convert a filesystem path to the static resources into a URL
(slightly edited here for clarity):
site_url(str_replace(ABSPATH, '/', $_extension_dir))
(Summary: Take /a/b/c ($_extension_dir), replace /a/b/ (ABSPATH) with a
single /, and use the resulting /c as a relative URL.)
ABSPATH was set to /usr/share/wordpress/, but the extension dir was
under /var/lib/wordpress/, so it's no surprise that stripping ABSPATH
from it didn't result in a valid URL. Not that doing search-and-replace
on filesystem paths is the most robust way to do URL generation in the
first place, but at least we could see why it was failing.
The Debian package of WordPress is… clever. It places the code under
/usr/share/wordpress (owned by root, not writable by www-data), but
overlays /var/lib/wordpress/wp-content for variable data, with Apache aliases along these lines:
Alias /wp /usr/share/wordpress
Alias /wp/wp-content /var/lib/wordpress/wp-content
This is a fine scheme in principle, but it is unfortunately at odds with
WordPress standard practice, and the Debian README mentions that liberal
applications of chown www-data may be required to soothe the itch.
Unfortunately, it also means that themes may not be installed under
ABSPATH, which usually doesn't matter… until some theme code
makes lazy and conflicting assumptions.
The eventual solution was to ditch /usr/share/wordpress and use only
/var/lib/wordpress for everything. Then ABSPATH was set correctly, and
the URL generation worked. (We tried to override the definition of
ABSPATH in wp-config.php, but it's a constant apparently set by the PHP
code that loads WordPress before wp-config.php is consulted.)
In the end, however, I couldn't quite make up my mind whether to blame
the Debian maintainers of WordPress for introducing this overlay scheme,
or the theme developers for generating URLs by doing string manipulation
on filesystem paths, or the WordPress developers for leaving static file
inclusion up to theme developers in the first place.
Well, why not all three?
I remember, as a child, reading about the discovery of the
cave paintings in Altamira
by an eight-year-old, and her wonder at seeing bison and other animals
seeming to dance in the flickering light of her torch.
Despite my fascination with palaeolithic rock art, I had never seen any.
I had read about cave paintings at Lakhudiyar near Barechhina in Almora
district, the best-known of Uttarakhand's many such sites. It's not far
from where we live, but not close enough for a casual visit either. We
had an opportunity to stop for a few minutes on a recent drive past.
It's not really a cave, just an overhanging rock face; and it's
a far cry from Altamira. In fact, it looks a little like it might have
been the work of a bored schoolboy waiting for a bus home. But there's
an ASI “protected heritage site” notice-board, so it must be legit…
Notice the obvious (and accurate) “hairpin bend” road sign in the centre
of the image. The paintings are a bit repetitive, and unfortunately the
ones closer to the ground are quite worn. Here's a video that shows more
of the rock face:
Here's another video that
shows the approach to Lakhudiyar.