Mirage Minimum Viable GCE with valid TLS certificate!

In a previous installment we went over deploying a static TLS web site to Google Compute Engine.

It works but you have to click through the disconcerting certificate warning. Wouldn’t it be nice if our all-OCaml unikernel application went ahead and obtained a trusted TLS certificate from Let’s Encrypt all by itself?

To that end, I present to you the GitHub repo mbacarella/mirage-mvgce.

The scripts in that repo assume you’ve followed the steps in the previous post: you already have a GCE instance with the static TLS web site example attached, and the gcloud environment set up.

The ./publish.sh script will build your unikernel, and then do the GCE manipulations necessary to detach the boot image from the GCE instance, create a new image for your unikernel, attach it to your instance, and then boot it back up.
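
I haven’t reproduced publish.sh here, but the GCE side of that cycle boils down to something like the sketch below. Treat it as an illustration only: the resource names, zone and flags are my assumptions (reusing $MY_PROJECT and $MY_REGION from the tutorial further down); the script in the repo is authoritative.

# Stop the instance, then detach and delete the old boot disk and image.
gcloud compute instances stop mirage-test --zone $MY_REGION-a
gcloud compute instances detach-disk mirage-test --disk mirage-test --zone $MY_REGION-a
gcloud compute disks delete mirage-test --zone $MY_REGION-a --quiet
gcloud compute images delete mirage-test --quiet
# Upload the freshly built unikernel and turn it back into a boot disk.
gsutil cp image.tar.gz gs://$MY_PROJECT
gcloud compute images create mirage-test --source-uri gs://$MY_PROJECT/image.tar.gz
gcloud compute disks create mirage-test --image mirage-test --zone $MY_REGION-a
gcloud compute instances attach-disk mirage-test --disk mirage-test --boot --zone $MY_REGION-a
gcloud compute instances start mirage-test --zone $MY_REGION-a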

Tutorial: minimum viable Mirage unikernel web server on GCE

If tens of millions of lines of C code in your application stack keep you from sleeping easily at night, as they do me, you may be interested in changing your web services to all-OCaml Mirage unikernels. The complete argument for considering this has been made elsewhere.

What I’d like to do here is provide a minimal example that runs on a commonly available cloud provider, Google Compute Engine (GCE), on the most affordable GCE instance type, f1-micro. This assumes you already have an OCaml opam environment set up, and that you’re on Ubuntu on x86-64.

Firstly, you’ll need something called Solo5, for gluing unikernels into virtualization platforms:

cd ~/src
git clone https://github.com/Solo5/solo5
cd solo5
./configure.sh
make
# no install needed, we'll reference the tools here soon

Now get the Mirage tutorial code, then build and package the unikernel.

The example we’re using here is a very basic, static content, HTTPS-enabled web site:

opam install mirage --yes
 
cd ~/src/
git clone https://github.com/mirage/mirage-skeleton
cd mirage-skeleton/applications/static_website_tls
mirage configure -t virtio --dhcp true
make depends
make
# yields a payload called https.virtio

virtio is the virtualization platform you want Mirage to target: a paravirtualized I/O interface used by Linux KVM (among other hypervisors) to share I/O devices efficiently between guest and host. Then, we use Solo5 to package the unikernel:

~/src/solo5/scripts/virtio-mkimage/solo5-virtio-mkimage.sh -f tar -- \
  image.tar.gz https.virtio --ipv4-only=true

image.tar.gz is the disk image that our GCE instance will boot from.

The rest of these instructions are how to set up the GCE instance to boot this image.

Go to https://cloud.google.com/ and create an account if you don’t have one. Then set up the Google Cloud SDK by following the instructions here.

gcloud init
export MY_PROJECT=mirage-test-12345 
# This name above needs to be unique across all gcloud users in the world.
export MY_REGION=us-west1
gcloud projects create $MY_PROJECT

You must enable billing for this project to proceed. Visit https://cloud.google.com/ , log into the Console, select $MY_PROJECT from the dropdown in the top left, and then click Billing. Connect it to a credit card you have on file.

gcloud config set project $MY_PROJECT
gcloud compute addresses create my-address1 --region $MY_REGION
export MY_IP_ADDRESS=..EXTERNAL_IP from output of previous step..
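
If you’d rather not copy the address by hand, gcloud can print it for you using its --format projection (this assumes the address name created above):

export MY_IP_ADDRESS=$(gcloud compute addresses describe my-address1 \
  --region $MY_REGION --format='get(address)')
echo $MY_IP_ADDRESS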

If you have a DNS domain somewhere, you might want to assign a friendly name to the $MY_IP_ADDRESS above.

Now, upload image.tar.gz to a Google Cloud Storage bucket, then turn it into a VM image.

gsutil mb gs://$MY_PROJECT
gsutil cp image.tar.gz gs://$MY_PROJECT
gcloud compute images create mirage-test --source-uri gs://$MY_PROJECT/image.tar.gz

Your GCE instance is locked down to the world by default. Create firewall rules to allow access to the pre-configured static_website_tls ports 8080 and 4433.

gcloud compute firewall-rules create http --allow tcp:8080
gcloud compute firewall-rules create https --allow tcp:4433

Now boot the image!

gcloud compute instances create mirage-test --image mirage-test \
   --address $MY_IP_ADDRESS --zone $MY_REGION-a --machine-type f1-micro

If your request can’t be satisfied in the current GCE region, try changing the -a to a -b, or -b to a -c, and so on, until you find an availability zone with enough capacity. You should see output like this on success:

NAME         ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
mirage-test  us-west1-c  f1-micro                   10.168.0.2   $MY_IP_ADDRESS  RUNNING

Visit https://$MY_IP_ADDRESS:4433/ and accept the untrusted certificate. You should see the Mirage Hello World.
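
If you’d rather check from the command line, curl works too; -k tells it to accept the as-yet-untrusted certificate:

curl -vk https://$MY_IP_ADDRESS:4433/
curl -v http://$MY_IP_ADDRESS:8080/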

If you have any trouble, visit the GCE console, click on your mirage-test instance, and view the serial console. Successful output looks like this:

Serial port 1 (console) output for mirage-test
SeaBIOS (version 1.8.2-google)
Total RAM Size = 0x0000000026600000 = 614 MiB
CPUs found: 1     Max CPUs supported: 1
found virtio-scsi at 0:3
virtio-scsi vendor='Google' product='PersistentDisk' rev='1' type=0 removable=0
virtio-scsi blksize=512 sectors=2097152 = 1024 MiB
drive 0x000f2510: PCHS=0/0/0 translation=lba LCHS=1024/32/63 s=2097152
Sending Seabios boot VM event.
Booting from Hard Disk 0...
 
SYSLINUX 6.04 20200816 Copyright (C) 1994-2015 H. Peter Anvin et al
Loading unikernel.bin... ok
            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
Solo5: Bindings version v0.6.8
Solo5: Memory map: 613 MB addressable:
Solo5:   reserved @ (0x0 - 0xfffff)
Solo5:       text @ (0x100000 - 0x488fff)
Solo5:     rodata @ (0x489000 - 0x52dfff)
Solo5:       data @ (0x52e000 - 0x82ffff)
Solo5:       heap >= 0x830000 < stack < 0x265fd000
Solo5: Clock source: KVM paravirtualized clock
Solo5: PCI:00:03: unknown virtio device (0x8)
Solo5: PCI:00:04: virtio-net device, base=0xc040, irq=11
Solo5: PCI:00:04: configured, mac=42:01:0a:a8:00:05, features=0x204399a7
Solo5: PCI:00:05: unknown virtio device (0x4)
Solo5: Application acquired 'service' as network device
2021-11-09 23:31:32 -00:00: INF [netif] Plugging into service with mac 42:01:0a:a8:00:05 mtu 1500
2021-11-09 23:31:32 -00:00: INF [ethernet] Connected Ethernet interface 42:01:0a:a8:00:05
2021-11-09 23:31:32 -00:00: INF [ipv6] IP6: Starting
2021-11-09 23:31:32 -00:00: INF [dhcp_client_lwt] Lease obtained! IP: 10.168.0.5, routers: 10.168.0.1
2021-11-09 23:31:32 -00:00: INF [ARP] Sending gratuitous ARP for 10.168.0.5 (42:01:0a:a8:00:05)
2021-11-09 23:31:32 -00:00: INF [udp] UDP interface connected on 10.168.0.5
2021-11-09 23:31:32 -00:00: INF [tcpip-stack-direct] stack assembled: mac=42:01:0a:a8:00:05,ip=10.168.0.5
2021-11-09 23:31:32 -00:00: INF [https] listening on 4433/TCP
2021-11-09 23:31:32 -00:00: INF [http] listening on 8080/TCP
2021-11-09 23:37:58 -00:00: INF [https] [7] serving //34.102.123.54:4433/.
2021-11-09 23:37:58 -00:00: INF [https] [7] serving //34.102.123.54:4433/favicon.ico.
2021-11-09 23:38:45 -00:00: INF [https] [8] closing

Costs

As of this writing, in the Google Cloud us-west1 region, an f1-micro instance costs about $3.88 per month and the final deployed image consumes 4.4MB (that’s megabytes with an M) of storage, though GCE rounds this up to a whole gigabyte, which costs $0.02 per month.

The future

~~In a follow-up post I’ll describe the workflow for updating the image and also for integrating Let’s Encrypt support.~~

To see how to update your running image, and to replace the static TLS web server and its untrusted certificate with one that fetches a trusted certificate from Let’s Encrypt, see this follow-up post.

Thank yous!

The information here was adapted from instructions in the DreamOS project provided by the illustrious @dinosaure, with some hand-holding from the OCaml Labs Slack #mirage community.

Parler may still be hosted in the US

Parler was recently kicked off of Amazon AWS for TOS violations. Parler claims that the big tech cloud providers weren’t willing to do business with them and, with no place left to go, the site went offline.

Recently the site came back online in limited form. Some have observed they’re enlisting the help of a Russian-owned cloud service called DDoS-Guard. I have no prior understanding of DDoS-Guard, but the emerging narrative is that they’re now hosted in Russia and this is causing a bit of a stir.

Based on my analysis, the argument that they’re now hosted in Russia is not well supported. This is not an exhaustive analysis; rather, treat it as a starting point for second-guessing the dominant narrative.

One can look up parler.com in DNS and see that it resolves to 190.115.31.151, and one can look up that IP address in a WHOIS database to see what company owns the IP address. When I do this, I get a result that implies it’s a company based out of Belize.

% whois -h whois.arin.net 190.115.31.151
 
...
 
inetnum:     190.115.16.0/20
status:      allocated
aut-num:     AS262254
owner:       DDOS-GUARD CORP.
ownerid:     BZ-DALT-LACNIC
responsible: Evgeniy Marchenko
address:     1/2Miles Northern Highway, --, --
address:     -- - Belize - BZ
country:     BZ
phone:       +7 928 2797045
owner-c:     HUN8
tech-c:      HUN8
abuse-c:     HUN8
inetrev:     190.115.16.0/24
nserver:     NS1.DDOS-GUARD.NET
nsstat:      20210124 AA
nslastaa:    20210124
nserver:     NS2.DDOS-GUARD.NET
nsstat:      20210124 AA
nslastaa:    20210124

Verifying the authenticity of information in IP registrations is outside of my wheelhouse, but it would appear that DDoS-Guard is involved in some way: an entity registered in Belize owns the IP address, and that entity may be directed by the Russian-owned DDoS-Guard.

Aside: 190.115.31.151 reverse resolves to ddos-guard.net. So, that’s a non-WHOIS-based clue that they may be involved. And if you do an HTTP HEAD on that IP address, the server identifies itself as being involved with ddos-guard.net. None of this so far is solid proof that parler.com has an actual relationship with DDoS-Guard the company, but there’s paperwork that points that way.
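
If you want to reproduce those two checks yourself, something along these lines does it:

# Reverse DNS for the IP that parler.com resolves to.
host 190.115.31.151
# HTTP HEAD against the bare IP; the response headers mention ddos-guard.net.
curl -sI http://190.115.31.151/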

Anyway, does the fact that a Russian internet service is working with them mean they’re now hosted out of Russia? Not necessarily. Going by the name, I’m assuming DDoS-Guard is a service you sign up with to protect your service from internet attacks. To be good at this, they would probably distribute your service across multiple datacenters (I’ll call them POPs from here on out) around the world to eliminate single points of failure under attack. A similar service, Cloudflare, has an impressive map, for example.

Probing how widely distributed Parler’s service is, it seems they’re not distributed very widely at all.

When I run a traceroute from my residential internet service at home in Oregon, the traceroute starts going dark somewhere on NTT’s backbone around NYC.

oregon % traceroute 190.115.31.151
traceroute to 190.115.31.151 (190.115.31.151), 30 hops max, 60 byte packets
...
 6  be-10847-pe02.seattle.wa.ibone.comcast.net (68.86.86.226)  24.509 ms  20.180 ms  21.082 ms
 7  ae-31.a00.sttlwa01.us.bb.gin.ntt.net (129.250.66.149)  26.071 ms  26.060 ms  26.049 ms
 8  ae-14.r05.sttlwa01.us.bb.gin.ntt.net (129.250.5.133)  94.289 ms  94.278 ms  94.268 ms
 9  ae-2.r22.sttlwa01.us.bb.gin.ntt.net (129.250.2.44)  27.087 ms ae-1.r22.sttlwa01.us.bb.gin.ntt.net (129.250.2.9)  27.075 ms ae-2.r22.sttlwa01.us.bb.gin.ntt.net (129.250.2.44)  25.983 ms
10  ae-11.r20.nwrknj03.us.bb.gin.ntt.net (129.250.6.176)  94.453 ms  94.218 ms  94.210 ms
11  ae-1.r00.nycmny13.us.bb.gin.ntt.net (129.250.2.174)  94.202 ms  94.421 ms  94.425 ms
12  * * *
13  * * *
14  * * *

Note the “nyc” in the final hops along the route that reply. These names could be completely fabricated (naturally), but backbone ISPs tend to not screw around with that.

Of course, that doesn’t necessarily mean they’re entirely hosted in NYC. It also doesn’t mean that after the trail goes dark it doesn’t continue into Russia. Perhaps DDoS-Guard directs me to their presence in NYC because it’s closest to me in Oregon. Am I wrong to focus on this one IP address? Were I to resolve their name from another part of the world, would I get a different IP and be sent to a different POP?

Nope!

From a cloud provider in Hong Kong:

hongkong % host parler.com
parler.com has address 190.115.31.151

From a cloud provider in Tokyo:

tokyo % host parler.com
parler.com has address 190.115.31.151

From a cloud provider in Chicago

chicago % host parler.com
parler.com has address 190.115.31.151

From a cloud provider in Dublin:

dublin % host parler.com
parler.com has address 190.115.31.151

All the same, which is pretty odd for a DDoS protection service.

While it’s possible to use a single IP address to publish a service across multiple locations, that generally only works for “session-less” services based on UDP. DNS can be replicated this way using anycast IP addresses. For session-oriented stuff, like a social networking web site (and mobile app), you need to use TCP, which makes replicating a single IP globally much harder. (TCP-based anycast exists, though in my experience it isn’t common for general-purpose TCP.)

What you would normally do in this situation is resolve a name to a different IP address based on where in the world the name was queried from. If someone looks up the IP from Hong Kong, you would reply with an IP that maps to a POP that’s closest to Hong Kong. And if someone looks up from Europe, you’d dispatch to an IP closest to Europe. That’s not happening here.
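
One cheap way to spot-check for that kind of geo-DNS, without renting servers around the world, is the EDNS Client Subnet extension, which lets you ask a resolver to answer as if the query came from some other network. This is only a sketch: the prefixes below are placeholder documentation ranges (substitute real prefixes from the regions you care about), and not every resolver honors the hint; I’m using Google’s 8.8.8.8 here as an example.

dig +short parler.com @8.8.8.8 +subnet=198.51.100.0/24
dig +short parler.com @8.8.8.8 +subnet=203.0.113.0/24

If the answers come back identical regardless of the subnet hint, that’s consistent with the host lookups above: no geo-DNS in play.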

It’s probably not for lack of resources on DDoS-Guard’s part. Their network map (scroll down) shows they have POPs all over the world.

So, where in the world are they located?

A traceroute from Hong Kong also heads towards NYC:

hongkong % traceroute 190.115.31.151
traceroute to 190.115.31.151 (190.115.31.151), 30 hops max, 60 byte packets
...
11  ae-11.r20.nwrknj03.us.bb.gin.ntt.net (129.250.6.176)  191.798 ms  196.203 ms  190.617 ms
12  ae-1.r00.nycmny13.us.bb.gin.ntt.net (129.250.2.174)  192.586 ms  185.942 ms  192.661 ms
13  * * *
14  * * *
15  * * *

From Tokyo, the trail disappears near twelve99.net’s Hong Kong onramp:

tokyo % traceroute 190.115.31.151
traceroute to 190.115.31.151 (190.115.31.151), 30 hops max, 60 byte packets
...
18  snge-b2-link.telia.net (62.115.176.136)  81.262 ms  81.243 ms  82.168 ms
19  hnk-b1-link.ip.twelve99.net (62.115.116.147)  81.928 ms  81.851 ms  81.601 ms
20  * * *
21  * * *
22  * * *

From a host in Chicago, it also appears to go dark around twelve99.net’s NYC on-ramp:

chicago % traceroute 190.115.31.151
traceroute to 190.115.31.151 (190.115.31.151), 30 hops max, 60 byte packets
...
 5  te0-7-0-25.rcr21.b002281-5.ord03.atlas.cogentco.com (38.142.64.193)  1.528 ms  1.604 ms  1.639 ms
 6  be2409.ccr41.ord03.atlas.cogentco.com (154.54.29.65)  1.768 ms  1.558 ms  1.552 ms
 7  telia.ord03.atlas.cogentco.com (154.54.12.38)  1.354 ms  1.542 ms  1.339 ms
 8  * * *
 9  nyk-b3-link.ip.twelve99.net (62.115.139.151)  19.058 ms  19.041 ms nyk-b3-link.ip.twelve99.net (62.115.140.223)  19.876 ms
10  * * *
11  * * *

From the host in Dublin, the trail appears to go dark after routing through Amsterdam:

dublin % traceroute 190.115.31.151
...
25  slou-b1-link.telia.net (195.12.255.242)  25.141 ms  25.157 ms  25.143 ms
26  ldn-bb1-link.ip.twelve99.net (62.115.120.74)  18.279 ms ldn-bb4-link.ip.twelve99.net (62.115.117.122)  15.722 ms ldn-bb1-link.ip.twelve99.net (62.115.117.192)  18.512 ms
27  adm-bb4-link.ip.twelve99.net (62.115.134.26)  16.582 ms  17.376 ms adm-bb3-link.ip.twelve99.net (213.155.136.99)  18.577 ms
28  adm-b1-link.ip.twelve99.net (62.115.136.195)  18.286 ms adm-b1-link.ip.twelve99.net (62.115.137.65)  17.184 ms adm-b1-link.ip.twelve99.net (62.115.136.195)  18.804 ms
...

The above traceroutes alone don’t clearly prove that all traffic must head towards and terminate in NYC, but they’re not clearly pointing away from NYC either. You would expect that if they were actually hosted in, say, Russia, the traceroutes wouldn’t look like this at all.

Here’s another technique. We can try to triangulate from a couple of locations based on ping latency. When I do a TCP-level “ping” on the HTTP port from each location, the round-trip latency is consistent with the amount of time it takes to access something on the east coast of the US. That is, about 100ms round-trip from Oregon, 100ms round-trip from Europe, and almost 400ms round-trip from Hong Kong.
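
For the curious, curl’s timing variables make a serviceable TCP-level “ping”: %{time_connect} reports the time from the start of the transfer until the TCP connection to the remote host completes, which is roughly the round-trip figure I mean here.

# Five samples of TCP connect time (in seconds) to the HTTP port.
for i in 1 2 3 4 5; do
  curl -o /dev/null -s -w '%{time_connect}\n' http://190.115.31.151/
done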

Let’s assume NYC is the POP that’s serving parler.com. This isn’t a huge leap of faith; their network map mentioned earlier certainly indicates they have one in NYC. This still doesn’t mean it’s necessarily hosted in NYC. There are obviously lots of tricks providers can play. DDoS-Guard could be providing an on-ramp in NYC that then fetches the content from somewhere else in the world (Russia!?), acting as a caching proxy for the content. Since Parler’s customers tend to be US-centric, there’s no reason to provision them on other POPs. But… if you’re already providing an on-ramp in NYC, why not also host the content in NYC?

At a minimum, I think the claim that Parler is now hosted in Russia is not very strong. That they might still be hosted in the US would line up with what I’m seeing here.

I don’t have a particular dog in this fight, though as an aside, it’d be kind of sad if you couldn’t be on the internet anymore just because Amazon, Google, and Microsoft told you to get lost. I’m simply pointing out that if we’re going to freak out about Russia being involved in some way, it should be accurate freaking out.

Further research

This is about all the energy I want to invest in this controversy. If you’re interested in continuing, rocks I would turn over include:

  1. Better traceroutes. In addition to ICMP based traceroutes, I also did tcptraceroute which wasn’t much different (so I didn’t mention it), though it’s possible newer tools and public registries can reveal more information about the topology.

  2. Confirming whether or not they’re using TCP anycast; it’s unlikely, but possible, and it would blow a big hole in my claims. This BGP registration confirms DDoS-Guard is directly connected to a router that’s in NYC for that IP. Can you find another? (See the whois sketch after this list for a starting point.)

  3. Doing deeper HTTPS-level transactions into their site that bypass caching, so you can possibly identify a lower-bound latency on moving from their public endpoint to their content server; if it’s low enough, it could confirm they have authoritative content in, say, NYC, but if it’s too large it doesn’t disprove that it’s in NYC (they could simply be slow).
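
On point 2, a convenient starting place is Team Cymru’s IP-to-ASN whois service, which maps an address to the AS announcing it; from there you can go hunting for other locations announcing the same prefix:

# The leading space and -v are part of Team Cymru's query syntax.
whois -h whois.cymru.com " -v 190.115.31.151"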

NYT threatens to out Slate Star Codex author

This is such terrible news. As seen on Slate Star Codex: NYT is Threatening my Safety by Revealing my Real Name so I am deleting the blog.

I’m posting this mainly as a signal boost, and also to express sadness that if you want to publish ideas that are not clearly on the right side of history, you can’t do it without risking your job and/or risking your personal safety. Free speech is for rich people or for people who are advancing an approved narrative.

Even if you’re extremely polite, and a genuinely wonderful person, and do a lot of high-effort research and carefully weigh all of the facts and evidence presented, which SSC (and a big proportion of its related communities) bend over backwards to do, your platform’s days on the internet are numbered.

UPDATE: happy to report Slate Star Codex has returned! Here at Astral Codex Ten! He merely had to abandon his current medical practice and become dependent on his writing to support him in the face of de-anonymization. Maybe it will turn out for the best.

Consider switching to zsh by watching this slick video

bash has been my command shell for my entire Linux-using life. It was the default on the first Linux distribution I had ever tried and I never questioned it.

On a recent project I was forced to use a shared account (which is bad practice in general, but that’s another story) and noticed that zsh was the default shell. After my initial revulsion at how weird it was, I stumbled across a surprisingly pleasant feature: command history was actually, like, good? I started typing a long command and forgot the rest of the arguments. At that point I hit the up arrow to start scrolling through my command history, but zsh actually went ahead and searched my command history for the best match. A trivial but pleasant feature.
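
What I ran into was presumably the shared account’s configuration (oh-my-zsh sets up something similar out of the box), but if you want that behavior in a stock zsh, a couple of lines in ~/.zshrc get you most of the way. A sketch only; the arrow-key escape codes vary by terminal:

# Up/down arrows search history for commands starting with what's already typed.
bindkey '^[[A' history-beginning-search-backward
bindkey '^[[B' history-beginning-search-forward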

Intrigued, I wondered what else zsh could do. Now the usual way you learn about anything IT related is to Google search it, but having done a lot of home projects the past few years I discovered late what contractors and hobbyists have known for a long time: do a YouTube DIY video search instead.

So, I tried it out for zsh and found this slick, high-effort video by a pretty cool dude with 115,000 views on why zsh rules, why you should switch, and step-by-step on how to do it. Check it out. The rest of this post is my marveling at how cool the internet is.

Yes, the internet might be better than ever? I’m blown away that something as boring as an alternative command-shell video is so popular. If you told me 115,000 people in the world cared about this topic and found this video I never would’ve believed it, at first blush. As bizarre and eclectic and unpopular as your interests might be, there are still probably hundreds of thousands of people in the world just like you.

Long-term internet users have complained for years about how dumbed down it has become with each billion people added. And while it’s true that the common surface of the internet is stupider than ever, being able to connect with a large community of people who share an interest that fewer than, say, 1 in 50,000 people have is incredible.

Go forth and find your soul tribe, weirdos.

Feed your second brain

How many times have you read a super interesting book, full of fascinating insights and facts, and then, years later, you are lucky if you remember even 1% of what you read?

I don’t know about you, but this makes me angry. I think it’s a waste of life, a squandering of potential. Instead of having all of the facts you’ve ever learned resident in your working memory, ready to make new mind-blowing connections all over the place, you have to settle with being unsurprised most of the time.

The human condition sucks. We don’t yet know how to make our own brains awesome, but maybe we can make an auxiliary second-brain-like assistant that can help.

dex told me about this sweet note-taking service called Roam, which is dedicated to enabling second-brain kind of stuff in the spirit of Zettelkasten. I’ve been using it for a few days now and I’m stoked.

I don’t have any mind-blowing insights to share yet as I’m still booting my second brain with facts. But I do feel, at least, a change in behavior. I want to attack my reading list, in pursuit of interesting food for my second brain.

Currently open books on my reading list that I’m fascination-mining are Seeing Like a State and The Hungry Brain.

Changing spark plugs on a 2006 Ford Econoline 4.6L V8 engine

The hippie van’s odometer recently ticked 100,000. According to ancient mechanic lore, machines last longer if you do preventative maintenance on them. As a long-term oriented kind of guy, I looked through the service schedule to see what needed to be done.

Although my camper van exudes the ineffable and defies being put in a neat little box, the base of it is a common machine: a 2006 Ford Econoline E-150 cargo van with a 4.6L V8 engine. At the factory they install spark plugs in these things, and after 100,000 miles they need to be replaced.

I asked the local mechanic for help. They said it would take them about half a day of labor to do the spark plugs and responded with a quote for $550. Incredible! Doesn’t changing spark plugs take an hour tops? The auto shop said that in the typical car that’s true, but in my particular van the spark plugs are really hard to get to. Suspicious, I said I’d have to think about it.

Researching on my own, I found this sweet YouTube video where a mechanic describes the procedure and also the parts involved to do this on the exact engine in my exact model van!

Okay, so given how long these videos (the one above and the related video) are, maybe changing them really is a huge pain in the butt. The cool mechanic in the video even used the words “getting it done in a weekend”. Also, unlike the average car with a 4-cylinder engine, my van is a V8, which means double the work there alone (V8 engine = 8 cylinders = 8 spark plugs to replace). Worse still, the van in the video has a 5.4L engine, whereas mine is a 4.6L, which is reportedly even more difficult to deal with.

Furthermore, and I don’t normally recommend this, read the YouTube comments on those videos and you’ll see another Ford mechanic chime in. There’s a running joke in the Ford mechanic community that spark plugs in these vans are so annoying that you’re supposed to leave them for the next shift to handle.

Okay, so that auto shop wasn’t pulling my leg after all. By now I was in too deep though! According to those videos, no expensive tools or equipment are required, nor any particularly fancy displays of dexterity or grace. Since the bulk of the cost of changing spark plugs is fidgeting inside the engine compartment, surely that’s work I could just do myself and save some cash. Right? Right.

Oh, did I mention I’ve never even so much as changed my own oil before?

Fortunately, I have some things going for me.

  1. This work is normally done by humans, which means it can be done by me, because I’m human too!
  2. I have the internet at my disposal which should connect me to a global community of experts if I run into problems. Besides, it doesn’t require even so much as lifting the car. Changing my own spark plugs should be well within my capabilities.
  3. The van is not our primary mode of transportation, so it’s okay if it goes down for awhile. I can move as slowly and as carefully as I need to. The pressure to get this done quickly is off.
  4. I can always take it to a mechanic if I screw it up. I probably won’t screw it up so badly that they’ll want that much more money than they originally quoted me (so long as I go to a different mechanic, so they won’t have that “so you’ve come crawling back” attitude).

Having resolved to do this myself, I carefully re-watched the video above (and the related video) and ordered the parts.

  1. Spark plugs ($20); these are the things that ignite the air/fuel mixture in the cylinders
  2. Coils ($100); they’re rubber heads that protect the plugs and also bridge the electrical system to the spark plugs
  3. A tiny tube of anti-seize lubricant ($6); you smear a small amount of this on the new plugs before you insert them
  4. Dielectric compound ($20). Smear this inside of the new coil heads before you install it.

Well, almost. I still have to round out my tool collection a bit, so I dropped by my local hardware store (True Value Eugene represent!) and picked up:

  1. Socket specifically made for spark plugs. They have little rubber bushings inside that grab onto the plug when you unscrew them so you can pull them out.
  2. “Universal” joint. Oddly named; it reduces the risk of cracking the spark plug if you twist the socket wrench at an odd angle of attack. Pretty much impossible to work in the cramped quarters of the Econoline van engine compartment without one.
  3. A 4″ extender (or two) for the socket wrench, so that I had clearance for the wrenching action
  4. Disposable nitrile gloves, always a good idea for when handling things coated in grime

Oh, I also happened to have an air compressor (found for $30 on Craigslist!) left over for a different project. The mechanic in the video above stresses that it’s important to blow the compartment out no matter how clean it looks so you don’t risk letting debris fall into your cylinders while the engine is exposed. Yowza!

With all of the tools and parts in hand, I sketched out a plan of action while I waited for a several day stretch of clear and sunny weather.

To start, I would remove the air intake (disconnecting the air intake sensor) and do a thorough initial blowing out of the engine compartment. I had recently taken this van to Burning Man, so the inside was practically caked on with alkaline playa dust.

With the front portion blown out, I went into the driver/passenger compartment, removed the center console (“doghouse”), and blew it out from the inside.

The procedure I’d follow for replacing each spark plug is

  1. Disconnect the spark plug coil from power. There’s a little tab on the (tang?) connector that lets you disengage them.
  2. Blow thoroughly around each coil with the air compressor
  3. Unscrew the bolt holding on the coil. Start it off with a socket wrench to break it loose, then switch to a screwdriver with a socket bit on the end so you don’t have to try to maneuver the whole socket wrench inside of the compartment.
  4. Remove the coil, exposing the spark plug
  5. Use the air compressor again to blow out the area around the exposed spark plug
  6. Use the special socket wrench to unscrew and pull out the spark plug out of the cylinder (thanks, rubber bushing!)
  7. Blow out around the area a final time, being careful not to direct anything into the engine
  8. Prepare a new spark plug: unwrap it, apply a small dollop of anti-seize, and carefully ease it into the cylinder; they’re delicate, so avoid having them clang around
  9. Hand-tighten the spark plug (by twisting the socket on the end of an extender) as much as possible, 90% of the way or more, before slipping the wrench onto the extender and putting your elbow into it
  10. Prepare a new coil: unwrap it, apply a dollop of dielectric compound to the head, and push it down into the cylinder. It should enclose the head of the spark plug. If you pull the plug back out, it may even make a popping noise.
  11. Bolt the coil down, hand tightening the bolt with the screwdriver. I specifically didn’t use the socket wrench for this because I didn’t want to risk breaking the bolt head off
  12. Reconnect the coil to power
  13. Start the van’s engine and listen for any misfires. If it idles smoothly, it’s probably fine.

There. One spark plug changed. Repeat seven more times.

In total it took about 7 1/2 hours of physical struggle spread out over 2 days. Not counted was the research, and also scratching my head in the middle, wondering about strategy. Each plug was in a unique spot and required a different approach to get out. Some of the plugs were quite challenging to reach, and the space was so cramped you can only get like one click of ratcheting action, so it can take what feels like forever to unscrew them. Lots of stuff was in the way, like a thing called a “fuel rail”, but I didn’t feel comfortable removing it because I didn’t want to end up even deeper out of my element.

My procedure above worked, although it had a benign but alarming flaw. About halfway through the spark plug replacement I noticed the check engine light was on. Concerned that I screwed up a plug and without really thinking I revved the engine to see if I could maybe force a misfire. At that point a wrench icon lit up in my dashboard, which meant the van’s computer had switched to limp-home mode: the engine revved down, no matter how hard I pushed on the pedal. Shit. Something’s definitely wrong. Am I going to have a really expensive repair bill after all?

Calming down a bit, I took stock that it wasn’t misfiring at idle. Something else must be wrong. Probably I knocked something else out while fidgeting under the hood. Perhaps if I could probe the on-board diagnostic computer it could give me more detail behind that check engine light.

On-board diagnostic computer? Yup! A little piece of legislation called the Clean Air Act (via its 1990 amendments) requires all cars sold in the United States to come with an on-board diagnostics computer that monitors emissions. Better yet, these computers have a standard interface (“OBD2”) and you can buy tools to read them. Nowadays the tools are quite cheap, only $12 for a Bluetooth-enabled one that would beam detailed sensor information directly to my smartphone. Sweet! I ordered one and went back to finishing the plugs.

A few days later the OBD2 reader arrived. I connected it to the diagnostic port on the hippie van (below the steering wheel) and paired it with a compatible free app (Torque Lite for Android, in this case). Two faults appeared on the screen. Looking into them further, neither of them had anything to do with spark plugs! Yes!

The faults had to do with the air intake sensor being disconnected, which, you will remember, I had to move out of the way to get to the plugs in the first place. Automobile engines dynamically combine air and fuel inside each cylinder at varying ratios, depending on environmental factors and performance requirements, to maintain a certain degree of efficiency. With the air intake sensor disconnected, the engine computer had no reliable way to make those decisions, so it kicked into limp-home mode, essentially ordering me to take it to a mechanic.

Of course, since I had finished the plug replacement, re-installed the air intake and reconnected the air intake sensor, the condition that led to the faults was eliminated. The check engine light was simply a ghost representing a past problem. I cleared the faults (from the Torque app) and the check engine light went away and didn’t come back. Van seems to run fine now.

Total cost: under $200 in parts and tool upgrades, which means I saved about $350 by doing the labor myself.

Wait a minute though! Didn’t I spend about a day at the office worth of labor on this? In the end, did I really save any money?

That’s the wrong way to look at it. Doing it myself was part of the adventure! A fundamental part of a tune-up on my van! On my terms! If I could properly do this repair, and even reason through a scary spot instead of bailing out and taking it to experts at the first hint of trouble, what other seemingly out-of-reach car repair work am I capable of?

Cast in this light, outsourcing this work so that I could spend a day with my feet up watching Netflix (or whatever) would mean missing out on this important personal growth opportunity. I’m super stoked that I did it and can’t wait for the next even more daunting car maintenance or repair task.

Adding solar power to the camper van

We added a solar power system to the camper van. It was delightfully easy.

First, we sized the system to our modest electrical needs: power to run the exhaust fan, charge some cell phones, and plug in random knick-knacks that take AC adapter plugs.

We settled on a 100 watt solar panel, a 35Ah lead-acid battery, and a small inverter. A 35Ah battery at 12 volts stores roughly 420 watt-hours of energy when fully charged. The modest-sized battery only takes a few hours of sunlight to charge fully. It won’t run an air conditioner or a power tool, but for minimalist camper-lifestyle stuff it works perfectly.

Next, selecting stuff to buy. It turns out there are kits that take away a lot of the guesswork. I bought this kit by Renogy. Also this inverter and this battery.

Let’s look through the list of components.

  • Solar panel: everyone knows what a solar panel does. It’s a panel that takes in sunlight and puts out electricity; in this case, DC power at 12 volts, the same thing the battery works with.

  • Lead-acid battery: everyone knows what a battery does. It stores energy. Lithium-ion batteries are all of the rage these days, and last significantly longer, but they’re also more expensive. Although lead-acid batteries are a little more risky (they’re full of acid) and don’t last as long, the lower cost meant we could get this system off of the ground more easily.

  • Charge controller: a device that takes power from the solar panel and uses it to charge a battery. Why can’t you plug the solar panel directly into the battery? Well, you could, but then there’d be nothing to tell it to stop charging the battery once it’s full. Batteries don’t like being over-charged. It’s apparently dangerous, even. So that’s why there’s a charge controller. It watches the line to the battery: when the battery is empty it charges it at full blast. When the battery is full, the controller backs off so it won’t overcharge.

  • Inverter: changes DC power to AC power, so we can plug in devices that expect AC wall-style power.

Here’s a diagram of how we wired it.

Probably the hardest part was figuring out how to run the cables from the solar panel on the outside to the charge controller and battery on the inside. There’s a lot written on the internet on how to run cables through things and waterproof them. You can seemingly spend a sizable amount of money on special adapters. What I ultimately settled on was something much simpler: grommets and caulk. I drilled a hole slightly bigger than the thickness of the cable, put a 20 cent rubber grommet into the hole, and then stuffed the cable through the grommet. I sized a grommet just barely big enough that it took some force to push the cable through. In some cases I had to moisten the cable a bit to make it slide through more easily.

Once all of the cables were connected and cut to size, I clamped them down so they wouldn’t slide and applied caulk over the grommets from both the roof side and the inside. I let the caulk dry and then applied it again. Seems fine so far.

It works. We’ve gone on extended camping trips, had power for our phones, and been able to keep ourselves cool at night by having airflow without ever having to risk the van’s startup battery to do it. Pretty excited by the result and it cost about $300 in all.

Here’s what it looks like mounted to the “roof rack”. You can spend a small fortune on these too, from brands like Thule and Yakima, but what I settled on here was a $60 commercial ladder rack found on eBay.

Inside, tucked away, we have the charge controller, inverter and battery.

The only part of this system that I’m not fully thrilled with is that there’s no intelligent crossover between the van’s power systems. Ideally, once the leisure battery was charged, the solar panel would work on topping up the van’s starter battery if needed. It would also be nice to charge the leisure battery off of the engine alternator, while otherwise ensuring that neither battery can drain the other.

I’m still evaluating options for when I do a power system refresh, but in the meantime there’s plenty to be happy with: free and clean mobile power!