Google Rejection and SRS

A few domains are hosted on my server, and I have an exim4 setup where I can create virtual addresses by dropping text files in a directory, with the destination address inside each text file. This is convenient because it lets me redirect everything to Gmail inboxes and manage everything in one place.
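A sketch of what such a directory-based alias router can look like with exim's redirect driver (the router name and file layout here are illustrative assumptions, not an exact reproduction of my config):

```
# One file per virtual address: /etc/exim4/virtual/<domain>/<local_part>
# contains the destination address (e.g. someone@gmail.com)
virtual_aliases:
  driver = redirect
  domains = +local_domains
  require_files = /etc/exim4/virtual/$domain/$local_part
  data = ${readfile{/etc/exim4/virtual/$domain/$local_part}}
```

Adding a new virtual address is then just `echo someone@gmail.com > /etc/exim4/virtual/example.com/info`.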

However, the virtual addresses do sometimes receive spam, which they promptly forward to Gmail; Google is unhappy about that, and it tanks my server's reputation.

This came to a head yesterday when I noticed legitimate emails being rejected with a 550 at the SMTP dialog level, which is bad because they are not resent, so we start losing important mail.

I installed an additional spam filter, but crucially (I think; it remains to be seen whether this really helps) I found this page, which describes the Sender Rewriting Scheme. The explanation makes sense: from Google's point of view, after a forward from my server I'm delivering email for the originating domain, and my server is not a designated sender for those domains.
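To make the rewriting concrete, here is a rough Python sketch of how an SRS0 forward address is built, mirroring the hashing and timestamp logic of the exim config below (the secret, function name, and base32 details are illustrative, not a reference implementation):

```python
import hashlib
import hmac
import time

SECRET = b"SRS_SECRET"  # placeholder for the real shared secret
BASE32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"  # RFC 4648 base32 alphabet

def srs0_address(local_part, domain, forward_domain, now=None):
    """Rewrite sender local_part@domain into an SRS0 address at forward_domain:
    SRS0=<hash>=<timestamp>=<domain>=<local_part>@<forward_domain>"""
    # Day counter modulo 1024, encoded as two base32 characters
    days = int(now if now is not None else time.time()) // 86400 & 0x3FF
    timestamp = BASE32[(days >> 5) & 31] + BASE32[days & 31]
    # Truncated HMAC-MD5 over the lowercased original address, like ${l_4:...}
    digest = hmac.new(SECRET, f"{local_part}@{domain}".lower().encode(),
                      hashlib.md5).hexdigest()[:4]
    return f"SRS0={digest}={timestamp}={domain}={local_part}@{forward_domain}"
```

Bounces sent to the SRS0 address can then be validated (hash and age) and decoded back to the original sender, which is exactly what the routers below do.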

A bit of googling led me to another page describing how to set up SRS with an SRS client, but unfortunately the config suggested there made a few assumptions that didn't hold for my system. At the bottom of that page, though, there's a link to this exim bug, which describes a pure-exim4 solution with three routers and a new transport. This was easy to adapt, and it seems to be working well.

# /etc/exim4/conf.d/router/110_srs
# (router names here are arbitrary)
srs_forward:
  driver =    dnslookup
  domains =   ! +local_domains
  transport = ${if eq {$local_part@$domain} \
                      {$original_local_part@$original_domain} \
                   {remote_smtp} {remote_forwarded_smtp}}

srs_bounce:
  driver =    redirect
  senders =   :
  domains =   +local_domains
  condition = ${if match {$local_part} \
                         {^(?i)SRS0=([^=]+)=([A-Z2-7]+)=([^=]*)=(.*)\$} \
                {${if and { {<= {${eval:$tod_epoch/86400 - ${base32d:$2} & 0x3ff}} \
                                {10}} \
                            {eq {$1} \
                                {${l_4:${hmac{md5}{SRS_SECRET}{${lc:$4@$3}}}}}} \
                          } \
                         {true}{false} \
                 }} \
                 {false}}
  data =    ${sg {$local_part} \
                 {^(?i)SRS0=[^=]+=[^=]+=([^=]*)=(.*)\$} \
                 {\$2@\$1}}

srs_bounce_invalid:
  driver =    redirect
  senders =   :
  domains =   +local_domains
  condition = ${if match {$local_part} \
                         {^(?i)SRS0=([^=]+)=([^=]+)=([^=]*)=(.*)\$} \
                 {true}{false}}
  allow_fail
  data =    :fail: Invalid SRS recipient address


# transport
remote_forwarded_smtp:
  driver =      smtp
  # Rewrites the envelope sender to SRS0=<hash>=<timestamp>=<orig domain>=<orig local part>@<our domain>,
  # the same format the srs_bounce router verifies; $qualify_domain stands in for the forwarding domain here.
  return_path = SRS0=${l_4:${hmac{md5}{SRS_SECRET}{${lc:$sender_address_local_part@$sender_address_domain}}}}=${base32:${eval:$tod_epoch/86400 & 0x3ff}}=$sender_address_domain=$sender_address_local_part@$qualify_domain
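One thing this config takes for granted: SRS_SECRET is an exim macro that has to be defined somewhere in the main configuration, for example (path follows Debian's split-config layout; the value is obviously a placeholder):

```
# /etc/exim4/conf.d/main/000_localmacros
SRS_SECRET = replace-with-a-long-random-string
```

The same secret must be used for generating and verifying the hash, so keep it stable across config regenerations.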

TIL - Fzf Alt-C

The title of this post is somewhat cryptic, but what I learned today is awesome.

I was looking at fzf's page to see how I could better integrate it with vim. I've been happily using fzf in bash for a while now, primarily leveraging the ctrl-r history fuzzy search, and also occasionally ctrl-t to insert a fuzzily-searched file in the command line. Today I learned about:

Files and directories

Fuzzy completion for files and directories can be triggered if the word before the cursor ends with the trigger sequence, which is by default **.

# Files under the current directory
# - You can select multiple items with TAB key
vim **<TAB>

# Files under parent directory
vim ../**<TAB>

# Files under parent directory that match `fzf`
vim ../fzf**<TAB>

# Files under your home directory
vim ~/**<TAB>

# Directories under current directory (single-selection)
cd **<TAB>

# Directories under ~/github that match `fzf`
cd ~/github/fzf**<TAB>

Process IDs

Fuzzy completion for PIDs is provided for the kill command. In this case there is no trigger sequence; just press the TAB key after the kill command.

# Can select multiple processes with <TAB> or <Shift-TAB> keys
kill -9 <TAB>

But the one that REALLY blew my mind was this:

ALT-C - cd into the selected directory

  • Set FZF_ALT_C_COMMAND to override the default command
  • Set FZF_ALT_C_OPTS to pass additional options
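For example, in ~/.bashrc (the find filter and preview command below are just plausible choices of mine, not fzf's defaults):

```shell
# Skip .git internals when ALT-C lists candidate directories,
# and preview each directory's contents in the fzf window
export FZF_ALT_C_COMMAND='find . -mindepth 1 -type d -not -path "*/.git/*"'
export FZF_ALT_C_OPTS='--preview "ls -1 {}"'
```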

This is brilliant; I might get so used to this that I'll become entirely unable to cd into a directory manually.

Hugo Conversion

I finally bit the bullet and decided to get rid of WordPress, after 16 years. There's no benefit for my personal blog in running a dynamic platform: I don't have any users or dynamic content, and comments, which might be the main attraction, are not something I really use.

I still need to figure out how to replace the dynamic contact form, though.

To achieve this, I initially tried to follow the Hugo tutorial, but it really only covers a very incomplete set of basics. I had more luck with this other tutorial, and the theme it recommends actually has some interesting sample posts that describe how to create a Hugo theme.

I also looked at this to figure out some intricacies of markdown image rendering.

I’ll continue migrating content to the new site, and running both in parallel until ready to switch.

TIL - Python f-string expression value

There's a Python module called icecream, which provides a nice ic() function to print an expression and its value, but in a discussion about it I learned that a quick equivalent can be achieved with Python f-strings by appending "=" to the expression (the "=" specifier was added in Python 3.8; f-strings themselves have been around since 3.6):

>>> print(f"{d['key'][1]=}")
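A self-contained example (the dictionary is made up for illustration):

```python
d = {"key": ["a", "b", "c"]}

# Appending = inside the braces prints the expression text followed by its repr
print(f"{d['key'][1]=}")  # prints: d['key'][1]='b'
```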

new minidlna server broke my network


We added a new device which can expose a connected USB drive via DLNA; internally it uses minidlna, which relies on SSDP for service discovery. For some strange reason that rendered my *existing* minidlna server (hosted on a Raspberry Pi) invisible. Researching the problem, it looks like neighbor discovery (which didn't happen before, as there were no other devices) uses a multicast address that my rpi was blocking (its firewall only allows traffic via the local network and a VPN gateway). My theory is that the new minidlna device took over as "primary", then couldn't find other peers, so the old server was no longer visible. The solution was to allow the specific multicast address used by SSDP, 239.255.255.250.

iptables -F
#Tunnel interface
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A OUTPUT -o tun+ -j ACCEPT
#Localhost and local networks (the LAN subnet below is an example; adjust to your network)
iptables -A INPUT -s 127.0.0.0/8 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.0/8 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT
iptables -A OUTPUT -d 192.168.1.0/24 -j ACCEPT
#Multicast for minidlna/SSDP
iptables -I OUTPUT -d 239.255.255.250 -j ACCEPT
iptables -I INPUT -d 239.255.255.250 -j ACCEPT
#Allow VPN establishment, this is the port in the config's #remote
iptables -A OUTPUT -p udp --dport 1198 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp --sport 1198 -m state --state ESTABLISHED,RELATED -j ACCEPT
#Drop everything else
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP

convert color pdf to monochrome

gs \
 -sOutputFile=output.pdf \
 -sDEVICE=pdfwrite \
 -sColorConversionStrategy=Gray \
 -dProcessColorModel=/DeviceGray \
 -dCompatibilityLevel=1.4 \
 -dBATCH \

Jump Bike invasion in Montreal and why Uber sucks


As Uber's Jump electric bike offering arrives in Montreal, a very common sight has started appearing: orange Jump bikes parked everywhere, in parks, on sidewalks, tied to private property or public structures, sometimes blocking the way for pedestrians.

Bad parking spot, blocks the sidewalk, out-of-area

Uber's permit stipulates that bikes have to be parked in designated spots only; a "designated spot" is a bike rack, not any random pole or fence. Of course, users park wherever is convenient, and of course, Uber washes its hands of any responsibility by claiming "it's the user's responsibility to park bikes in the appropriate spot". Yet, for all their billions of dollars and technology, they are unwilling to enforce compliance with the permit the city has granted in any way.

I say "unwilling" because it's entirely feasible to use technology to ensure compliance; I gave this some thought and came up with at least three solutions in five minutes, all using technologies well within Uber's reach and areas of expertise. They could locate and identify the allowed parking spots and areas, and refuse to release a bike from a ride unless it's properly parked. Valid spots could be identified by analyzing Google Street View imagery (using AI, with which Uber is quite adept), by sending a human to note the locations of valid parking structures, or by using AI to infer common and likely parking spots based on the density and frequency of bikes parked nearby.

Interestingly, a much smaller company (Lime, of electric scooter fame) has also just arrived in Montreal, with similar restrictions on where scooters can be parked. They came up with yet another solution: upon finishing a ride, one must take a picture of the scooter showing it correctly parked in a designated area, and upload it.

Another one yay

That very simple solution is easy and cheap to implement, and it keeps users honest while ensuring everyone complies with the conditions set by the city. Honesty, however, is something that escapes Uber; it’s been shown time and time again that they will do the bare minimum necessary, and sometimes not even that if they perceive that the level of enforcement will allow them to get away with just shifting blame to users.

In my opinion, the city should mandate that Uber implement measures to curb badly-parked bicycles, seize badly-parked bikes and impose hefty fines on each one that is found, and ultimately (because these measures will NOT make Uber relent), just rescind their permit. A predatory, disrespectful company like Uber should NOT be allowed to operate in our city.

Using virsh to add a volume (disk) to an existing vm


I have a second hard disk mounted under /thepool, and I want to make “virtual disks” in there and be able to mount them on any of my virsh-defined virtual machines.

The root filesystem of the VM resides on a different storage pool (uvtool) on a fast SSD, but for bulk storage I don't want to fill up the SSD with crap.

# Create the storage pool, under "/thepool" where the big disk is mounted
virsh pool-define-as disk-pool dir - - - - "/thepool/libvirt-pool/"
virsh pool-build disk-pool
virsh pool-autostart disk-pool
virsh pool-start disk-pool
# Check it's there
virsh pool-list
# Create a volume inside the pool, qcow2 format
virsh vol-create-as disk-pool juju-zfs-pool.qcow2 64G --format qcow2
# Attach it to the VM
virsh attach-disk juju /thepool/libvirt-pool/juju-zfs-pool.qcow2 vdc --persistent --subdriver qcow2
# Now inside the vm, /dev/vdc exists and can be formatted/partitioned and mounted as normal