It is fairly easy to create a Linux proxy host that forwards traffic from other hosts that don't have direct access to the internet. This is a great and simple solution for keeping your backend workers off of the public internet to avoid attacks while still allowing outbound traffic from them.
Here are the steps to configure this setup…
Worker
Configuring the worker that does not have direct access to the internet
DNS
Ensure that the host is using externally reachable DNS servers (this may not be needed in all cases)
Edit /etc/resolvconf/resolv.conf.d/base and add…
nameserver 8.8.8.8
nameserver 8.8.4.4
Reload config files for DNS
$ sudo resolvconf -u
Networking
Change default gateway to IP address of proxy host
$ ip route del default
$ ip route add default via 192.168.3.1
Making the settings persist through a reboot
Default Route Changes
On Ubuntu you would edit your interfaces file, /etc/network/interfaces and update your private network interface block to include the following…
up ip route del default
up ip route add default via 192.168.3.1
… it would then look something like this …
auto eth2
iface eth2 inet static
address 192.168.0.2
netmask 255.255.255.0
up ip route del default
up ip route add default via [PROXY_IP_ADDRESS]
Proxy Host
Configuring the proxy host to allow the worker to proxy its traffic through it
Networking
Add iptables rules to enable NAT and forwarding with masquerade.
You can add them via the command line (the *nat and *filter table markers belong only in the saved rules file, not on the command line)…
iptables -t nat -A POSTROUTING -o [PUBLIC_INTERFACE] -j MASQUERADE
iptables -A FORWARD -i [PUBLIC_INTERFACE] -o [PRIVATE_INTERFACE] -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i [PRIVATE_INTERFACE] -o [PUBLIC_INTERFACE] -j ACCEPT
The static config that this produces looks a little different from the commands used to create it.
You can use $ iptables-save > iptables.rules to dump your rules to a file called iptables.rules. You can then use this file to programmatically load the rules upon boot.
OR
You can create an iptables file /etc/iptables.rules to load the rules from…
*nat
-A POSTROUTING -o [PUBLIC_INTERFACE] -j MASQUERADE
COMMIT
*filter
-A FORWARD -i [PUBLIC_INTERFACE] -o [PRIVATE_INTERFACE] -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i [PRIVATE_INTERFACE] -o [PUBLIC_INTERFACE] -j ACCEPT
COMMIT
…then use a script that runs iptables-restore to restore the rules at boot (note that restoring will wipe any existing rules not present in /etc/iptables.rules).
Enable forwarding at the OS level so that it persists across reboots by editing /etc/sysctl.conf, uncommenting net.ipv4.ip_forward, and setting it to 1
Enable forwarding at the OS level by running…
echo 1 > /proc/sys/net/ipv4/ip_forward
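For reference, the persistent setting in /etc/sysctl.conf is a single line; after editing, you can apply it immediately with sudo sysctl -p:

```
net.ipv4.ip_forward=1
```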
To set up proxying of SSH connections through the proxy host to the backend workers, do the following
Add iptables rules to proxy the SSH traffic to the appropriate hosts (note that these go under the nat table; do not add another *nat line if one already exists)
$PUBLIC_INTERFACE_NICKNAME refers to a user-defined chain: in a rule like -A INPUT -i eth2 -j privnet, the nickname would be privnet and eth2 would be the private interface
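The original rules are not reproduced here, but as a hypothetical sketch (the port 2222 and the worker address 192.168.0.2 are placeholders), a nat-table entry in /etc/iptables.rules that forwards inbound SSH to a worker might look like:

```
*nat
-A PREROUTING -i [PUBLIC_INTERFACE] -p tcp --dport 2222 -j DNAT --to-destination 192.168.0.2:22
COMMIT
```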
Making the IPTables changes persist through reboot
On Ubuntu, add a bash script named iptablesload to /etc/network/if-pre-up.d/ (this will wipe any existing rules not present in /etc/iptables.rules)
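A minimal sketch of such an iptablesload script, assuming the rules were saved to /etc/iptables.rules, would be:

```
#!/bin/sh
# Restore the saved ruleset before the interface comes up.
# Note: iptables-restore replaces the entire ruleset with the file contents.
iptables-restore < /etc/iptables.rules
exit 0
```

Remember to make the script executable with chmod +x so that the if-pre-up.d hook will run it.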
Debugging SSH connection issues can be tricky and frustrating.
Common Issues & Causes
ssh_exchange_identification: Connection closed by remote host
SSHD keys are corrupt
Connection to host does not complete due to network issue
The signature for the remote host in known_hosts is not correct
There is a problem with the SSH Daemon on the remote host
Debugging Steps
Check hosts.deny and hosts.allow and ensure that you are not blocking the client (and that the client is explicitly allowed, if an allow list is in use)
Check the MaxStartups value in /etc/ssh/sshd_config; the default start value is 10, but something like 10:30:60 (start randomly dropping unauthenticated connections at 10, and drop all of them at 60) is a bit safer against SSH brute-force attacks
Run ssh in debug mode.
This will help to expose problems with things like keys and auth types.
ssh -vvv my.host.com
Watch logs on remote server (if possible)
Run sshd on a separate port with debug logging to the console
This is a very useful step. Start a second ssh daemon on the remote host on a different port so that you can keep troubleshooting while connected remotely; a high port number also allows you to run the sshd process as a non-root user.
/usr/sbin/sshd -p 2121 -D -d -e
Explanation
-p Set the listening port.
-D Do not detach; sshd stays in the foreground, which allows for easy monitoring.
-d Enable debug mode.
-e Write logs to standard error instead of the system log.
After a few kernel upgrades, the /boot partition can fill up quickly if yours is 100M like mine. It's quite painful to remove packages one by one to free up space so that you can continue upgrading. Here's a helpful one-liner to clean up all unused kernel packages…
for i in `dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'`; do sudo apt-get -y purge $i; done
There are a few staple commands that we use as engineers to troubleshoot issues on Linux machines and servers. Some of these unfortunately do not translate directly to OSX's underlying UNIX-based system. Fortunately, there are equivalent commands for most of them!
Networking
It is very handy to be able to determine what ports are listening on a box, or not. It’s also helpful to be able to determine which process and binary is using that port.
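For example (the exact flags assume reasonably recent versions of each tool), listing listening TCP ports along with the owning process looks like this on each platform:

```
# Linux (ss is the modern replacement for netstat)
sudo ss -tlnp
# Linux (older systems)
sudo netstat -tlnp
# OSX
sudo lsof -iTCP -sTCP:LISTEN -n -P
```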
If you've set up an OpenVPN server for multiple OS tenants, you might have noticed that your OSX clients connect and receive their DNS settings from the server just fine. Your Linux clients, however, if running resolvconf or openresolv, may not work as easily. Luckily there is a simple and easy fix.
The Fix
In the client's OpenVPN config file add the following…
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
You can also add this on the server side configuration in the Custom Directives section.
Note, however, that if you put this on the server side, all of your clients will get this change, and the file referenced above may not exist on their machines, which will cause problems.
GitHub previously had a feature in their GUI allowing you to compare two different commits, tags or branches. The shortcuts to this feature in the GUI have been removed for some reason but the ability to do this is still there.
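The comparison view is still reachable directly by URL; with placeholders for the owner, repository, and the two refs, the pattern is:

```
https://github.com/[OWNER]/[REPO]/compare/[BASE]...[HEAD]
```

Either ref can be a branch name, a tag, or a commit SHA.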
Currently there is a fairly aggravating bug in the free version of Opscode's (now called Chef, to make it even easier to find on Google) chef-server. This bug is exposed when a dependency of one of your cookbooks depends on a cookbook whose dependency is not met.
Symptoms
Chef-server starts consuming 100% CPU
Chef-server becomes unresponsive for periods of time or until erchef is restarted
What is causing this bug?
The way that I understand it is that the depsolver that existed in older versions of chef-server was removed and it was the piece that was keeping us from hanging upon unresolved dependencies.
Needs Clarification
Is the bug triggered by missing dependencies at any depth in the dependency chain, or only at the second level and below?
Is the bug triggered when the dependency cookbook exists on the server but the version constraint is not met, or only when the cookbook does not exist on the chef-server at all?
What is the fix and when will it be pushed to a stable release?
The fix is to add back the depsolver that was used in the old chef-server, with some tweaks and testing. As of the writing of this article, this has been committed to master but has not yet made it into a stable release. It will not be released in any of the 11.0.X releases but is planned for 11.1.0. Huge thanks to Ho-Sheng for shedding some light on this topic!
Working in OpsWorks has been a generally positive experience. There are a few things that I would like to see changed, like the ability to position your own custom chef recipes before OpsWorks' built-in recipes. There was also a bit of a sore spot when we decided to automate running a few rake tasks from Jenkins on a particular host in a particular stack. To solve this issue, I've created a Ruby gem that has a dual purpose. In its current form it allows you to easily list all of your stacks, see the nodes in each stack with their IPs, and also SSH directly to a host using its stack name and hostname.
After setting up my dot files on different computers and accounts a million and a half times, I decided to make life a little easier and automate a few things. I've created a GitHub repository that contains all of my dot files, with a script that semi-intelligently links them for you. The .bash_profile (.bashrc on Linux) and .vimrc are set up such that they install most of their dependencies for OSX or Linux on their own. My goal is to use the same dot files on all computers and OS's and have them work the same, doing whatever smartness is needed in the background so that I/you don't have to think about it.
I'll add more detail about what they're doing a little at a time, as well as some of the things that I hope to do with them down the road. Please share the tips and tricks in your own dotfiles, as I'd love to incorporate as much awesomeness as possible!
irssi is a wonderful chat client. I'm closing it and reopening it all of the time, so it's nice when it automatically connects to the server that you want, identifies your nickname, and joins your channels. Here is how to do it.
How to configure irssi to auto connect to a server
Add the server or servers that you want to connect to
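As a sketch (the network name, server address, and port are placeholders), this is done from within irssi with /network add and /server add; for example:

```
/network add [NETWORK]
/server add -auto -network [NETWORK] irc.example.net 6667
```

The -auto flag is what makes irssi connect to that server automatically on startup.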