

Using liberty-minded opensource tools, and using them well

Squid Reverse Proxy on a VPS for a Self-Hosted Gateway

Setting up a reverse proxy on a VPS to act as a gateway for all traffic that wants to reach the services I self-host at my house. The interesting part is redirecting traffic to different ports on my home router based on which subdirectory site the visitor navigates to - while also correctly serving a blog on the root of the site. Whew, that was a mouthful.

Clone the code. Here's the repo.

Cache Peers

So in the squid.conf file, three basic directives stood out to me:

  • acl
  • cache_peer
  • cache_peer_access

The first two each define a name that we then wire together in the last one.


acl

acl is the Access Control List directive that starts setting up which server serves which content. The first argument, called aclname in the documentation, sets the name of the rule that you’re defining here. Make up whatever you want.

Next, the acltype argument tells Squid how to match. Using urlpath_regex ^/{{ }} lets us define the URL-path pattern that triggers the rule we just named.
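As a sketch (the aclname gitea_path and the path are made up for illustration, not from my actual config), a rule that matches everything under /gitea would look like:

```
acl gitea_path urlpath_regex ^/gitea
```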


cache_peer

This is the easy one. It sets the IP of the “peer”. It also specifies which port that peer will be contacted on. So instead of 80 or 443, I’ll be setting it to the port that I’ve NAT’ed each server to on my router.

This also associates whichever peer_name you want with that IP address. That’s specified with a name={{ }} in this line.

At the end of the line, since all of my peers (read: webservers) use TLS, make sure to put ssl there so that Squid knows it will be connecting to a TLS server.

Also, if you’re going to be passing usernames and passwords over https, login=PASS needs to be at the end of that line as well. Otherwise, Squid just intercepts the credentials and never passes them along.
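Putting those pieces together, a full cache_peer line for one of the NAT’ed servers might look like the following sketch (the IP, port, and peer name are placeholders, not my actual values):

```
cache_peer 192.0.2.1 parent 8443 0 no-query originserver name=gitea_peer ssl login=PASS
```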


cache_peer_access

So here’s where it all comes together. We need to say “OK, if the request matches one of the subdirectories listed here, send it to that specific server. Else, go to the primary server.” (Because I’m serving a blog on the root of my site. Presumably the one you’re reading this on.)

The example from the documentation itself that covers this specific scenario:

acl ${aclname} urlpath_regex ^/foo

cache_peer ip.of.server1 parent 80 0 no-query originserver name=${peer_name_1}
cache_peer_access ${peer_name_1} deny ${aclname}

cache_peer ip.of.server2 parent 80 0 no-query originserver name=${peer_name_2}
cache_peer_access ${peer_name_2} allow ${aclname}
cache_peer_access ${peer_name_2} deny all

As you can see, we need to explicitly deny the subdirectory requests to the root server, and explicitly allow them to the other server, then immediately deny that server all other access.

Also, this works just like iptables or Ansible: it is evaluated top-down, and once a match is made, an action is taken. Because of that, I chose to add a global deny all at the bottom of each statement before the next cache_peer declaration.


https_port

This one deals with a couple of things at once. First, it sets the port to listen on. Right now, we want that to be 443, as we will be serving https traffic exclusively.

Now, that would be easy enough, but we’ll be terminating the request’s SSL/TLS connection at the proxy, not the end server. Each end server will be using its own TLS connection between the proxy and itself.

https_port 443 accel defaultsite={{ ansible.domain }} cert=${certificate}.pem key=${key}.pem

Redirect http to https

I can set up an acl to see if someone’s connecting over http:

acl ssl_redirect localport 80

If so, I need that request to be denied and answered with a 302 that takes them to the https version:

http_access deny ssl_redirect <some other acl>
deny_info https://%H%R <that other acl>

Where %H is the server’s hostname, and %R is the path that the client requested from that host.
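Assembled into one piece, the whole redirect could look like this sketch (the port 80 listener line and the aclname are illustrative, and the second, narrowing acl from above is omitted here - adjust to your own config):

```
http_port 80 accel defaultsite=example.tld
acl ssl_redirect localport 80
http_access deny ssl_redirect
deny_info https://%H%R ssl_redirect
```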


http_access

This directive is mandatory - if there is no http_access rule specified, then “the default is to deny the request”. Also, it is a good idea to have a “deny all” entry at the end of your access lists to avoid potential confusion.
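In practice, the tail end of the access list looks something like this (our_sites is a made-up aclname standing in for whatever you intend to serve):

```
# allow what you actually serve first...
http_access allow our_sites
# ...then explicitly fall through to a deny
http_access deny all
```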


The only config file I had to play with was squid.conf. Once I figured out how it worked, it was easy to loop over a couple of dictionaries and extract values from them.

They had to be dictionaries because I needed to associate ports with subdirectory sites and server names - especially when it got to the second and third rounds of cache_peer_access lists. But either way, it was stupidly simple to set up with a little help from conditional nesting.
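The actual role does this with Jinja2 templates, but the idea can be sketched in plain Python: loop over a dictionary of services to emit the acl / cache_peer / cache_peer_access triplets. None of the names, IPs, or ports below are my real values - they’re placeholders for illustration.

```python
# Hypothetical sketch of the dictionary-driven config generation
# (the real thing is a Jinja2 template in the Ansible role).
peers = {
    "gitea": {"port": 8443, "path": "gitea"},
    "nextcloud": {"port": 9443, "path": "nextcloud"},
}
router_ip = "192.0.2.1"  # placeholder home-router IP

lines = []
for name, opts in peers.items():
    # one acl per subdirectory site
    lines.append(f"acl {name}_path urlpath_regex ^/{opts['path']}")
    # one peer per NAT'ed port on the router
    lines.append(
        f"cache_peer {router_ip} parent {opts['port']} 0 "
        f"no-query originserver name={name} ssl login=PASS"
    )
    # allow only this site's paths, deny everything else
    lines.append(f"cache_peer_access {name} allow {name}_path")
    lines.append(f"cache_peer_access {name} deny all")

config = "\n".join(lines)
print(config)
```

Each service ends with its own deny all, matching the top-down evaluation noted earlier.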

Let’s Encrypt

Install certbot from EPEL (verbose for tutorial purposes):

# yum install epel-release
# yum install certbot

Stop squid

# systemctl stop squid

Run certbot standalone with the tls-sni-01 challenge (it binds to port 443) for the existing domains:

# certbot certonly --standalone \
          --preferred-challenges tls-sni-01 \
          --rsa-key-size 4096 \
          -d \
          -d \
          -d \

In the future, it may be wise to not run it as root, but it worked out, and my certs got put somewhere in /etc/letsencrypt/live/<domain-name>.

In fact, the two options I needed were cert= and key= on the https_port line, pointed at the fullchain.pem and privkey.pem files in that directory.


Cipher Suites

After testing my site at SSL Labs, I found that I had a ‘C’ rating. Huh, not good. I had a couple of things to disable (like SSLv2-3) and a couple to add (PFS/DHKE). So I added a few options to the https_port parameter. They are in my repo, in the default variables. There are a lot of them, but a good list of which ones are worth including was published by ssllabs.
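For illustration, the hardened https_port line ends up in the neighborhood of the following (the domain, file paths, and cipher string here are placeholders - pull the real list from ssllabs or my repo’s default variables):

```
https_port 443 accel defaultsite=example.tld cert=/etc/squid/fullchain.pem key=/etc/squid/privkey.pem options=NO_SSLv2,NO_SSLv3 cipher=EECDH+AESGCM:EDH+AESGCM:!aNULL:!MD5
```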

When I’m testing internally, I can’t access my site (obviously), so I run nmap to get the cipher suite information:

# nmap -sV --script ssl-enum-ciphers -p 443 squid.vmlab

If this doesn’t return any ciphers that mention ECDHE, then it’s probably failing silently. Yeah, how shitty is that - nothing in the log file. Oh well, the fix isn’t that hard; it’s just a matter of replacing dh-params=<file> with tls-dh=[<curve>:]<file>. However, which curve should you pick?
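Concretely, the replacement on the https_port line looks something like this sketch (the curve and file paths are examples - pick your own):

```
https_port 443 accel defaultsite=example.tld cert=/etc/squid/cert.pem key=/etc/squid/key.pem tls-dh=prime256v1:/etc/squid/dhparams.pem
```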

SafeCurves, or, how to choose safe curves for elliptic-curve crypto

There’s not a whole lot of research going into cryptography nowadays. Psyche! It’s a heavily contentious area with mystery, sex, intrigue, backstabbing, backdoor-ing, and all types of shenanigans. Luckily there are still those out there who can put together a good TL;DR. One of those sites is SafeCurves. And out of all the curves listed there, I want to employ secp256k1. It’s a good compromise, and as long as it remains the curve for bitcoin, I will have no qualms using it.

Unfortunately, the openssl version included in CentOS is only 1.0.1e, which supports only these ECC curves:

  • secp384r1 : NIST/SECG curve over a 384 bit prime field
  • secp521r1 : NIST/SECG curve over a 521 bit prime field
  • prime256v1 : X9.62/SECG curve over a 256 bit prime field

Of which the first two are not even listed on that page, and the third is labelled manipulatable. Hmm, I don’t like that at all.

Installing openssl from source

Was there really any question that I was going to do the needful? But even so, because Squid isn’t compiled against the new version of OpenSSL, it won’t work. Therefore, we’ll probably have to wait for CentOS 8 to get a Squid version that works with secp256k1. I could compile Squid from source too, but if I do, it’ll be a little later down the road.

Anyway, the walkthrough is exactly what you’d expect - just installing the “Development Tools” group, downloading the latest openssl tarball, and compiling and installing it. There’s also a bit of linking to be done after that, but it’s not too hard. Confirm it’s working with openssl version and openssl ecparam -list_curves. After that, we are able to set the tls-dh parameter to secp256k1:</abs/path/to/dhparams.pem> - or whichever curve fits your style. (Once again, only if you compiled Squid against this version of OpenSSL.)
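Roughly, the steps in that walkthrough look like this sketch (the version number and install prefix are examples - grab whatever tarball is current):

```
# yum groupinstall "Development Tools"
# curl -O https://www.openssl.org/source/openssl-1.1.1.tar.gz
# tar xzf openssl-1.1.1.tar.gz && cd openssl-1.1.1
# ./config --prefix=/usr/local/openssl shared
# make && make install
# echo /usr/local/openssl/lib > /etc/ld.so.conf.d/openssl-local.conf
# ldconfig
# /usr/local/openssl/bin/openssl version
# /usr/local/openssl/bin/openssl ecparam -list_curves
```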

After enabling/disallowing the extra ciphers, I now have a grade of ‘A’ - short of ‘A+’ owing to the fact that I don’t have CAA enabled for my DNS, which is something additional that needs to be added as well.

Redirecting hobbithole

It’s easy enough to redirect an old domain to a new one. Since I still have my DNS services pointed at the gateway server, in order to redirect to the correct DNS name, I just had to put in another ACL. This ACL had to catch the old domain name, and redirect to the new domain name. Easy enough, right?

acl old_domain dstdomain <old-domain.tld>

http_access deny old_domain
deny_info https://<new-domain.tld>%R old_domain

Now, the caveat here is that the redirect will not prevent cert errors if the client uses an https:// prefix and the server only has the new site’s cert. This is easy to solve as long as you can get a cert that covers both the old and the new domains.
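Reusing the certbot invocation from earlier, a combined cert could be requested like so (the domains are the same placeholders as above):

```
# certbot certonly --standalone \
          --preferred-challenges tls-sni-01 \
          -d <new-domain.tld> \
          -d <old-domain.tld>
```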

Testing in VMLAB

In vmlab, I don’t have a great way to test; however, I can get away with a little hackery. Anytime I call ansible_domain in a playbook, it returns either vmlab or islab, depending on the network. So since squid.vmlab returns vmlab for its ansible_domain, and I put vmlab in my hosts file at the same IP address, I can navigate my browser there whenever necessary with just https://vmlab. This also satisfies squid’s ACLs, which force the URL to match.
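For reference, the hosts-file hackery is just a line like this (the IP is a placeholder for the squid VM’s address):

```
192.0.2.50   vmlab squid.vmlab
```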

However, in my playbooks for the servers it will be proxying, I have to remember to put down the common name for the cert as vmlab so it will pass squid’s ACLs on that end.