If you know the network name of the device, you can use dscacheutil to query the device and get its IP address.
For example, I have a NAS on my network, but the NAS requires an older display connector that I generally don’t keep around. Unfortunately it lives on a network whose IP space we don’t control. When the IP of the device changes, it hasn’t been easy to find its address. This is what I used to find its IP.
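A sketch of the lookup, assuming the NAS advertises itself as mynas.local (substitute your device's network name):

```shell
# Query the Directory Service cache for the device's host record
dscacheutil -q host -a name mynas.local
```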
The message means that the trust store that you specified, or that was specified for you, could not be opened due to access or permission problems, or because it doesn’t exist.
To fix this, you need the ca-certificates-java package, which is not explicitly installed by the Oracle JDK/JRE. It may also already be installed, but you still have to manually run its configuration.
The solution is to run the configuration for this package. Make sure to install the package if it hasn’t already been installed.
sudo /var/lib/dpkg/info/ca-certificates-java.postinst configure
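If the package is missing, install it first (Debian/Ubuntu):

```shell
sudo apt-get install ca-certificates-java
```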
To compare…
This command:
ssh -i ~/.ssh/id_rsa me@myserver hostname
will give you Received disconnect from myserver: 2: Too many authentication failures for me
However, if you add IdentitiesOnly=yes like so…
ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa me@myserver hostname
you will see…
myserver
Generally this is caused by having more than 5 keys loaded in ssh-agent.
You can remove keys by running ssh-add -d ~/.ssh/[key]
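To make this stick, you can also set IdentitiesOnly per-host in ~/.ssh/config (myserver here is a placeholder host):

```
Host myserver
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_rsa
```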
Get all containers’ IP addresses along with the container name
for docker
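A sketch using docker inspect with a Go template (works on recent Docker versions):

```shell
# Print each running container's name followed by its IP address(es)
docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' $(docker ps -q)
```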
for docker compose
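The same idea, scoped to the containers of the current compose project:

```shell
# docker-compose ps -q lists only this project's container IDs
docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' $(docker-compose ps -q)
```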
Alias for getting the docker VM IP
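A sketch, assuming docker-machine with a machine named default:

```shell
# Add to your .bash_profile / .bashrc
alias dockerip='docker-machine ip default'
```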
If you’re running docker on OSX more than just a little, you’ve probably run into an issue where you’re building a container and it fails due to an error that looks something like the following…
From this point on, you won’t be able to build new containers or images until… 1) you delete existing containers or images, or 2) completely delete the virtual disk image that contains all of your docker containers and images.
OK, so let’s look at what this error is telling us. The context is important here.
On OSX, when building a docker container or image, the work is being done inside of a virtual machine. If you’ve looked at /var/lib/docker… on your local machine, you may have noticed that it’s either not there, or it is but it isn’t full. This is because the /var/lib/docker folder that the error is referring to lives inside of the virtual machine in which docker for mac or docker-machine is doing its building.
The virtual file system lives on your disk here: /Users/philip/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
By default this virtual disk is 20G.
To confirm that this is really your issue, here are the steps that I used. Your disk size and space used will be different as I ran these after resolving the issue. Either way, this will give you a good idea of how it all works.
If you check the size of this file…
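Something like this, using the path from above:

```shell
ls -lh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
```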
… you’ll notice that it’s 22G
Then if you take a peek at the space used from within a container, it’s slightly different but fairly close… (pay attention to the root partition in this context)
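One way to peek (a sketch; any small image with df works):

```shell
# df from inside a container reflects the Docker VM's disk, not your Mac's
docker run --rm alpine df -h /
```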
While I was having this issue, I was unable to run this command due to lack of space so you may not be able to do this until you fix the issue.
To fix this issue, we will need to do the following.
Let’s go ahead and stop docker so that the disk image is not in use while we resize. This may not be required, but better safe than sorry…
To launch the virtual machine, you’ll need QEMU or something that can boot from an ISO and mount a qcow2 image. For this example, I’m using QEMU.
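If you don’t already have QEMU, it can be installed via Homebrew (assuming you use brew):

```shell
brew install qemu
```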
To expand the disk image, we’ll use the qemu-img util packaged with Docker for MacOS. If you can’t find this on your system, you should be able to get this from the qemu package.
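The resize itself might look like this (+5G grows the image by 5G):

```shell
qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +5G
```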
If you would like to expand it more or less, you can change the +5G at the end of the command as needed.
Visit http://gparted.org/download.php and download the gparted-live ISO for your architecture. In this case, I downloaded gparted-live-0.27.0-1-amd64.iso.
Here we run qemu and launch a virtual machine adding our Docker.qcow2 disk image as a drive.
While launching, I saw a warning stating overlayfs: missing 'workdir'. You can safely ignore this. Just be patient and let it finish booting.
It may take a bit for the machine to completely come up so give it some time…
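A sketch of the QEMU invocation — flags may need adjusting for your setup:

```shell
qemu-system-x86_64 -m 1024 \
  -boot d -cdrom gparted-live-0.27.0-1-amd64.iso \
  -hda ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
```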
In GParted…
Start docker back up however you normally would start it.
At this point you can go back and run your commands to check space used from within the VM and confirm that the available size has increased as expected. If it did, you should be good to go!
Look in /dev for your serial USB device
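For example:

```shell
# Serial USB devices typically show up as ttyUSB* (or ttyACM*)
ls -l /dev/ | grep -i usb
```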
In my case it was ttyUSB0. If you unplug and re-connect the PLM, it may show up as a different device, such as ttyUSB1, so make sure to check this if things stop working after you reconnect it.
To test the PLM, we’ll be using Insteon Terminal which you will get from GitHub.
Let’s install the dependencies for the Insteon Terminal, which we will use to test and ensure that the connection between the PLM and your computer is working.
Install ant, default-jdk and librxtx-java
sudo apt-get install ant default-jdk librxtx-java
Add the user accessing the PLM device to the correct groups to allow permission. If the user accessing the PLM is openhab, use the following. You will need to check the current owner of the /dev/[yourdevice] file and the /run/lock folder, and change dialout and lock below appropriately.
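Assuming the device file’s group is dialout and /run/lock’s group is lock, a sketch:

```shell
sudo usermod -a -G dialout openhab
sudo usermod -a -G lock openhab
```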
Reboot. I’ve found that I have to reboot in order to connect to the modem properly. This should not be the case, but it is the simplest solution for now. It’s possible that there is a permissions issue, dependency loading issue, or something along those lines that gets resolved by a reboot.
Clone the Insteon Terminal repo from GitHub
git clone https://github.com/pfrommerd/insteon-terminal.git
Copy the example config file and edit appropriately
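The gist is to copy the repo’s example config and point it at your device — the file names here are assumptions, so check the repo’s README for the exact names:

```shell
cd insteon-terminal
# Copy the example configuration (name assumed; check the repo)
cp init.py.example init.py
# Edit init.py and set the PLM serial device, e.g. /dev/ttyUSB0
```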
./insteon-terminal
A successful connection looks like this…

philip@cube:~/github/insteon-terminal$ ./insteon-terminal
Insteon Terminal
Connecting
Connected
Terminal ready!
If the terminal cannot open the serial port, you’ll see gnu.io.NoSuchPortException instead…

philip@cube:~/github/insteon-terminal$ ./insteon-terminal
Insteon Terminal
Connecting
gnu.io.NoSuchPortException
Terminal ready!
Continue to troubleshoot why you cannot connect to your PLM modem device
Determine the amount of space on each node and other storage-related stats
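One way, assuming Elasticsearch is listening on localhost:9200, is the _cat allocation API:

```shell
# Shows disk used/available and shard counts per node
curl -s 'http://localhost:9200/_cat/allocation?v'
```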
When you spin up a single node cluster, the default setting for number of replicas is 1. This means that the cluster is going to try to create a second copy of each shard. This is not possible as you only have one node in the cluster. This keeps your cluster (single node) in the yellow status and it will never reach green. A node can function this way but it is annoying to not see a green state when everything is actually healthy.
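To let a single-node cluster go green, you can set number_of_replicas to 0 for existing indices — a sketch assuming localhost:9200 (the Content-Type header is required on newer Elasticsearch versions):

```shell
curl -s -XPUT 'http://localhost:9200/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'
```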
When you run out of disk, shards will not have been allocated and your cluster will likely be stuck in status RED. To recover, you need to find out which shards are unassigned and assign them manually.
Check your cluster’s health and the status of unassigned shards
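A sketch, assuming localhost:9200:

```shell
curl -s 'http://localhost:9200/_cluster/health?pretty'
```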
Display the indices health
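For example:

```shell
curl -s 'http://localhost:9200/_cat/indices?v'
```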
Display shards
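For example:

```shell
curl -s 'http://localhost:9200/_cat/shards?v'
```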
Display all unassigned shards and reason for being unassigned
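The _cat/shards API can include the unassigned reason as a column:

```shell
curl -s 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED
```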
or
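On newer Elasticsearch versions, the allocation explain API gives a more detailed answer:

```shell
curl -s 'http://localhost:9200/_cluster/allocation/explain?pretty'
```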
When deploying services to Kubernetes, a certificate has to be injected into the container via a secret. It doesn’t make sense to have each container renew its own certificates, as its state can be wiped at any given time.
Build a service within each Kubernetes namespace that handles renewing all certificates used in that namespace. This service would kick off the request to renew each cert at a predetermined interval. It would then accept all verification requests ( GET request to domain/.well-known/acme-challenge ) and respond as necessary. After being issued the new certificate, it would recreate the appropriate secret which contains that certificate and initiate a restart of any container or service necessary.
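Recreating the secret from a freshly issued certificate might look like this sketch (secret name, namespace, and file paths are placeholders):

```shell
kubectl delete secret example-com-tls --namespace my-namespace
kubectl create secret tls example-com-tls \
  --cert=fullchain.pem --key=privkey.pem \
  --namespace my-namespace
```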
To automate the creation and renewal of certificates, we will need to create a container with Letsencrypt to request creation or renewal of each certificate, Nginx to receive and confirm domain validation, and scripts to push the generated certificates to secrets in Kubernetes. This container will be deployed to Kubernetes as a daemonset and should run in each of your Kubernetes clusters.
Previously, to use SSL/TLS in Kubernetes, we had to set up some sort of SSL/TLS termination proxy. With the addition of a few new features in Kubernetes 1.2 Ingress, we’re able to do away with the proxy and allow Kubernetes to handle this task.
chef-client -z elasticsearch-cluster.rb
Using the knife ssl check command, check the status of SSL between you and your chef server.
knife ssl check
The precompiled versions of ruby from RVM are pointing at /etc/openssl/certs when looking for their CA certificate file. Newer versions of OSX have moved their certs to a different directory, possibly /usr/local/etc/openssl/certs if you’ve installed openssl from brew or some other source.
Reinstall ruby from source.
rvm reinstall 2.2.1 --disable-binary
Uninstall all the chef gems
gem uninstall chef chef-zero berkshelf knife-solo
Reinstall ChefDK
Oftentimes you need to run the same task in bash against a number of different arguments. Loops in bash can make this very quick and easy.
One of the simplest ways you can do this is with a one-liner, as follows
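For example, a sketch iterating over a few placeholder hostnames:

```shell
for host in web1 web2 web3; do
  echo "Checking ${host}"
done
```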
You can also predefine an array to use later like this
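A sketch with placeholder values:

```shell
# Define the array first...
hosts=(web1 web2 web3)
# ...then loop over it later
for host in "${hosts[@]}"; do
  echo "${host}"
done
```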
Or, to do this on one line
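The same thing collapsed onto one line:

```shell
hosts=(web1 web2 web3); for host in "${hosts[@]}"; do echo "${host}"; done
```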
You can use ranges with seq
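For example (seq START [STEP] END generates a range of numbers):

```shell
for i in $(seq 1 5); do
  echo "iteration ${i}"
done

# With a step of 2: prints 0 2 4 6 8 10
for i in $(seq 0 2 10); do
  echo "${i}"
done
```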
If you need a counter you could do something like this
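A sketch with placeholder items:

```shell
count=0
for word in alpha beta gamma; do
  count=$((count + 1))
  echo "${count}: ${word}"
done
```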
There are a few shortcuts that make life easier when working with file and directory permissions. Here are a few.
When you want to recursively change permissions in a directory, you will want to change the file permissions separately from the directory permissions. You can accomplish this by using two different find commands piped to xargs as follows.
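A sketch, assuming 755 for directories and 644 for files under a placeholder path:

```shell
find /path/to/dir -type d | xargs chmod 755
find /path/to/dir -type f | xargs chmod 644
```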
or
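The same result using find’s -exec instead of piping to xargs:

```shell
find /path/to/dir -type d -exec chmod 755 {} \;
find /path/to/dir -type f -exec chmod 644 {} \;
```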
Three permission triads
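As an illustration (not from the original post), a mode string such as rwxr-xr-- splits into three triads:

```
rwx  r-x  r--
 |    |    |
user group other
```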
Each triad
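Within a triad, each position has a fixed meaning (illustration):

```
r w x
| | +-- execute (or search, for directories)
| +---- write
+------ read
```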
Above, you can see that permissions can be changed using u, g, o and a. These represent references to User, Group, Other and All.

- (u)ser: The user is the owner of the file. The user of a file or directory can be changed with the chown command. Read, write and execute privileges are individually set for the user with 0400, 0200 and 0100 respectively. Combinations can be applied as necessary, e.g. 0700 is read, write and execute for the user.
- (g)roup: A group is the set of people that are able to interact with that file. The group set on a file or directory can be changed with the chgrp command. Read, write and execute privileges are individually set for the group with 0040, 0020 and 0010 respectively. Combinations can be applied as necessary, e.g. 0070 is read, write and execute for the group.
- (o)ther: Represents everyone who isn’t an owner or a member of the group associated with that resource. Other is often referred to as “world”, “everyone”, etc. Read, write and execute privileges are individually set for other with 0004, 0002 and 0001 respectively. Combinations can be applied as necessary, e.g. 0007 is read, write and execute for other.
- (a)ll: Represents everyone.
The operator controls adding or removing of mode bits:

- + adds the specified file mode bits to the existing file mode bits of each file
- - removes the specified file mode bits from the existing file mode bits of each file
- = adds the specified bits and removes unspecified bits, except the setuid and setgid bits set for directories, unless explicitly specified
Modifiers:

- r read
- w write
- x execute (or search, for directories)
- X execute/search only if the file is a directory or already has the execute bit set for some user
- s setuid or setgid (depending on the specified references)
- S setuid or setgid (depending on the specified references) without the executable bit (or search, for directories) set
- t restricted deletion flag or sticky bit
These values never produce ambiguous combinations; each sum represents a specific set of permissions. More technically, this is an octal representation of a bit field – each bit references a separate permission, and grouping 3 bits at a time in octal corresponds to grouping these permissions by user, group, and others.
SUID / Set User ID : A program is executed with the file owner’s permissions (rather than with the permissions of the user who executes it).
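Setting the setuid bit might look like this (the file name is a placeholder):

```shell
chmod u+s /usr/local/bin/myprog   # symbolic
chmod 4755 /usr/local/bin/myprog  # octal equivalent
```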
SGID / Set Group ID : Files created in the directory inherit its GID. In other words, when a directory is shared between users and setgid is set on that shared directory, anything created inside it gets the same group owner as its parent directory.
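Setting the setgid bit on a shared directory (the path is a placeholder):

```shell
chmod g+s /srv/shared    # symbolic
chmod 2775 /srv/shared   # octal equivalent
```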
Sticky Bit : It is mainly used on folders to prevent deletion of a folder and its contents by other users, even if they have write permissions. If the sticky bit is enabled on a folder, the folder can be deleted only by the owner of the folder and the superuser (root). This is a security measure to prevent deletion of critical folders where others have full permissions.
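Setting the sticky bit (the path is a placeholder):

```shell
chmod +t /srv/shared     # symbolic
chmod 1777 /srv/shared   # octal equivalent; this is how /tmp is typically set
```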
'S' = the directory's setgid bit is set, but the execute bit isn't set.
's' = the directory's setgid bit is set, and the execute bit is set.
These are represented in the ls -la (list all files in long format) output as follows
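For illustration (assumed listings, not from the original post), the special bits appear in ls -la output like this:

```
-rwsr-xr-x  setuid executable ('s' in the user triad)
-rwxr-sr-x  setgid executable ('s' in the group triad)
drwxrwsr-x  setgid directory
drwxrwxrwt  sticky-bit directory (e.g. /tmp)
```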
Here are the steps to configure this setup…
Edit /etc/network/interfaces and update your private network interface block to include the following…
… it would then look something like this …
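An illustrative sketch — the addresses and interface name are placeholders, and the pre-up line is the common Debian/Ubuntu way to load saved iptables rules when the interface comes up:

```
auto eth1
iface eth1 inet static
    address 10.0.0.5
    netmask 255.255.255.0
    pre-up iptables-restore < /etc/iptables.rules
```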
You can add them via command line…
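For example (placeholder subnet and interface), rules allowing only the private subnet on the private interface:

```shell
iptables -A INPUT -i eth1 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -i eth1 -j DROP
```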
The static config that this produces looks a little different than the commands used via command line to create it.
You can use iptables-save > iptables.rules to dump your rules to a file called iptables.rules. You can then use this file to programmatically load the rules upon boot.
OR
You can create an iptables file, /etc/iptables.rules, to load the rules from…
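A minimal sketch of an iptables-save style rules file (the single ACCEPT rule is a placeholder):

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
COMMIT
```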
…then use the following bash script to restore the rules (this will wipe any existing rules not in /etc/iptables.rules):
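A minimal restore script:

```shell
#!/bin/sh
# Load the saved ruleset, replacing whatever is currently active
iptables-restore < /etc/iptables.rules
exit 0
```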
To set up proxying of ssh connections through the proxy host to the backend workers, do the following
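One way to do this is with ProxyCommand in ~/.ssh/config (hostnames and user are placeholders):

```
Host proxy
    HostName proxy.example.com
    User me

Host worker1 worker2
    User me
    ProxyCommand ssh -W %h:%p proxy
```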
-A INPUT -i eth2 -j privnet

where the interface nickname would be privnet and eth2 would be the private interface.

On Ubuntu, add the following bash script, named iptablesload, to /etc/network/if-pre-up.d/ (this will wipe any existing rules not in /etc/iptables.rules).
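A sketch of the script (remember to make it executable with chmod +x):

```shell
#!/bin/sh
# /etc/network/if-pre-up.d/iptablesload
iptables-restore < /etc/iptables.rules
exit 0
```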
- Check that the entry in known_hosts is correct
- Check hosts.deny and hosts.allow and ensure that you are not blocking the client, or allowing the client if necessary
- Check the MaxStartups value in /etc/ssh/sshd_config; the default is 10, but something like 10:30:60 is a bit safer against ssh brute force attacks
After a few kernel upgrades, the /boot partition can fill up quickly if yours is 100M like mine. It’s quite painful to remove package by package to free up some space so that you can continue upgrading. Here’s a helpful one-liner to clean up all unused kernel packages…
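On current Debian/Ubuntu systems, one safe approach that purges old, unused kernel packages (and other no-longer-needed packages) is:

```shell
sudo apt-get autoremove --purge
```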
It is very handy to be able to determine which ports are listening on a box, and which aren’t. It’s also helpful to be able to determine which process and binary is using a given port.
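For example, with netstat (flags: tcp, udp, listening, process, numeric):

```shell
sudo netstat -tulpn
```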
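Or with lsof, to see which process holds a specific port (80 here is a placeholder):

```shell
sudo lsof -i :80
```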
If you’ve set up an OpenVPN server for multiple OS tenants, you might have noticed that your OSX clients connect and receive their DNS settings from the server just fine. Your Linux clients, however, if running resolvconf or openresolv, may not work as easily. Luckily there is a simple and easy fix.
In the client’s OpenVPN config file, add the following…
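A commonly used fix on distributions that ship the update-resolv-conf helper with the openvpn package (verify the script exists at this path on your machine):

```
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
```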
You can also add this on the server side configuration in the Custom Directives
section.
Note, however, that if you put this on the server side, all of your clients will get this change, and the file that we are referencing above may not exist on their machines, which will cause problems.
This is the basic URL format for doing a compare
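The format looks like this (owner, repo and refs are placeholders):

```
https://github.com/<owner>/<repo>/compare/<base>...<head>
```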
Some of the available options for compare are undocumented. (append these changes to the end of the URL)
Ignore whitespace changes
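Append w=1 to the compare URL:

```
https://github.com/<owner>/<repo>/compare/<base>...<head>?w=1
```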
Currently there is a fairly aggravating bug in the free version of Opscode’s (now called Chef, to make it even easier to find on Google) chef-server. This bug is exposed when a dependency of one of your cookbooks depends on a cookbook whose dependency is not met.
The way I understand it, the depsolver that existed in older versions of chef-server was removed, and it was the piece that was keeping us from hanging on unresolved dependencies.
The fix is to add the depsolver that was used in the old chef-server back with some tweaks and testing. This has been committed to master but has not been added to a stable release yet. As of the writing of this article, the fix has yet to be released. It will not be released in any of the 11.0.X releases but is planned to be added in 11.1.0. Huge thanks to Ho-Sheng for shedding some light on this topic!
You can build chef-server from the repo that contains the fix from here. https://github.com/opscode/omnibus-chef-server/commit/06d37db491f0040621f844354b2599631fb62e6b
There is some more documentation on building nightlies here. http://docs.opscode.com/api_omnitruck.html
The main bug report and discussion thread are here. https://tickets.opscode.com/browse/CHEF-3921
The main piece of code is located here. https://github.com/opscode/chef_objects/commit/a3133ced037d1e508ff18723ad9a6f2b94dea1ea
This is where it gets pulled into the erchef binary. https://github.com/opscode/erchef/commit/316a09c0657fab6ff4eb2b9222ab84336a5f039a
Working in OpsWorks has been a generally positive experience. There are a few things that I would like to see changed, like the ability to position your own custom chef recipes before OpsWorks’ built-in recipes. There was also a bit of a sore spot when we decided to automate running a few rake tasks from Jenkins on a particular host in a particular stack. To solve this issue, I’ve created a Ruby gem that serves a dual purpose. In its current form it allows you to easily list all of your stacks, see the nodes in each stack with their IPs, and also SSH directly to a host using its stack name and hostname.
To give it a shot simply… gem install owssh
The gem in use…
The code lives here… git@github.com:phutchins/owssh.git
After setting up my dot files on different computers and accounts a million and a half times, I decided to make life a little easier and automate a few things. I’ve created a GitHub repository that contains all of my dot files, with a script that semi-intelligently links them for you. The .bash_profile (.bashrc on linux) and .vimrc are set up such that they install most of their dependencies for OSX or linux on their own. My goal is to use the same dot files on all computers and OSs and have them work the same, doing whatever smartness is needed in the background so that I/you don’t have to think about it.
I’ll add more detail about what they’re doing a little at a time, as well as some of the things that I hope to do with them down the road. Please share the tips and tricks in your dotfiles, as I’d love to incorporate as much awesomeness as possible!
Add the server or servers that you want to connect to
/SERVER ADD -auto -network freenode irc.freenode.net 6667
If you want to automatically set and identify your nickname with nickserv you can do it like this…
/NETWORK ADD -autosendcmd "/^nick phutchins;/^msg nickserv identify mysecurepassword;wait 2000" freenode
To join a channel
/CHANNEL ADD -auto #powerline freenode
To join a channel with a password
/CHANNEL ADD -auto #awesomesecretchannel freenode supersecurepass