Finding the IP address of a device on your network is generally an easy thing to do. That isn’t always the case, though, when the device is a network device or share using mDNS, also known as Bonjour on OSX.
If you know the network name of the device, you can use dscacheutil to query the device and get its IP address.
dscacheutil -q host -a name [network name].local
For example, I have a NAS on my network, but the NAS requires an older display connector that I generally don’t keep around. Unfortunately it lives on a network where we don’t control the IP space. When the IP of the device changes, it’s not been so easy to find its address. This is what I used to find its IP.
$ dscacheutil -q host -a name storjnas.local
name: storjnas.local
ip_address: 10.150.50.50
ip_address: 10.150.50.50
root@elasticsearch-1:/opt/elasticsearch/elasticsearch# ./bin/elasticsearch-plugin install repository-gcs
-> Downloading repository-gcs from elastic
Exception in thread "main" javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1926)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1921)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1920)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1490)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at org.elasticsearch.plugins.InstallPluginCommand.downloadZip(InstallPluginCommand.java:279)
at org.elasticsearch.plugins.InstallPluginCommand.downloadZipAndChecksum(InstallPluginCommand.java:322)
at org.elasticsearch.plugins.InstallPluginCommand.download(InstallPluginCommand.java:231)
at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:210)
at org.elasticsearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:195)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:69)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.Command.main(Command.java:88)
at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:47)
The Reason
The message means that the trust store you specified (or that was specified for you) could not be opened due to access or permission problems, or because it doesn’t exist.
To fix this, you need the ca-certificates-java package, which is not installed automatically with the Oracle JDK/JRE. Even when it is installed, you may still have to run its configuration step manually.
The Solution
The solution is to run the configuration for this package. Make sure to install the package if it hasn’t already been installed.
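On Debian/Ubuntu, something like the following should do it; the postinst path is where Debian-based systems keep the package’s configure script, but verify it on your system:

```shell
# install the package that provides the Java trust store
sudo apt-get install ca-certificates-java

# run its configuration step manually to (re)build /etc/ssl/certs/java/cacerts
sudo /var/lib/dpkg/info/ca-certificates-java.postinst configure
```

After that, re-running the plugin install should no longer throw the SSLException.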
SSH by default will always try all keys known by the agent in addition to any identity files. To keep this from happening, you can specify IdentitiesOnly=yes in addition to specifying a particular key to use when authenticating.
To compare…
This command:
ssh -i ~/.ssh/id_rsa me@myserver hostname
will give you Received disconnect from myserver: 2: Too many authentication failures for me
However, if you add IdentitiesOnly=yes like so…
ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa me@myserver hostname
you will see…
myserver
SSH Agent
Generally this is caused by having more than 5 keys loaded in ssh-agent.
You can remove keys by running ssh-add -d ~/.ssh/[key]
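To make this permanent for a particular host, you can set it in ~/.ssh/config instead of passing -o each time; the host and key names here are just examples:

```
Host myserver
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_rsa
```

With this in place, a plain `ssh me@myserver hostname` will only offer that key.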
If you’re running docker on OSX more than just a little, you’ve probably run into an issue where you’re building a container and it fails due to an error that looks something like the following…
Error response from daemon: mkdir /var/lib/docker/tmp/docker-builder983959066: no space left on device
From this point on, you won’t be able to build new containers or images until…
1) you delete existing containers or images
or
2) completely delete the virtual disk image that contains all of your docker containers and images
The Breakdown
Ok, so let’s look at what this error is telling us. The context is important here.
On OSX, when building a docker container or image, the work is being done inside of a virtual machine. If you’ve looked at /var/lib/docker… on your local machine, you may have noticed that it’s either not there, or it is but it isn’t full. The reason for this is because the /var/lib/docker folder that the error is referring to lives inside of the virtual machine in which docker for mac or docker-machine is doing its building.
The virtual file system lives on your disk here: /Users/philip/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
By default this virtual disk is 20G.
Troubleshooting
To confirm that this is really your issue, here are the steps that I used. Your disk size and space used will be different as I ran these after resolving the issue. Either way, this will give you a good idea of how it all works.
Checking the Size of the Virtual Disk Image File
If you check the size of this file…
$ ls -altrh
total 47009328
-rw-r--r-- 1 philip staff 36B Nov 17 11:14 nic1.uuid
-rw-r--r-- 1 philip staff 0B Nov 17 11:14 lock
drwxr-xr-x 83 philip staff 2.8K Jan 1 13:09 log/
lrwxr-xr-x 1 philip staff 12B Jan 3 14:39 tty@ -> /dev/ttys004
-rw-r--r-- 1 philip staff 5B Jan 3 14:39 pid
-rw-r--r-- 1 philip staff 17B Jan 3 14:39 mac.0
-rw-r--r-- 1 philip staff 5B Jan 3 14:39 hypervisor.pid
drwxr-xr-x 21 philip staff 714B Jan 3 14:39 ../
drwxr-xr-x 12 philip staff 408B Jan 3 14:39 ./
-rw-r--r-- 1 philip staff 705B Jan 3 14:39 syslog
-rw-r--r-- 1 philip staff 64K Jan 3 20:12 console-ring
-rw-r--r-- 1 philip staff 22G Jan 4 07:11 Docker.qcow2
… you’ll notice that it’s 22G
Check Space Used From Within the VM
Then if you take a peek at the space used from within a container, it’s slightly different but fairly close…
(pay attention to the root partition in this context)
While I was having this issue, I was unable to run this command due to lack of space so you may not be able to do this until you fix the issue.
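Since I couldn’t run it at the time, the output isn’t shown here, but one way to check from inside the VM is to run df in a throwaway container (assuming you have, or can pull, a small image such as alpine):

```shell
# / inside the container sits on the VM's disk, so this shows
# the VM's root partition size and usage
docker run --rm alpine df -h /
```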
The Fix
To fix this issue, we will need to do the following.
Stop Docker
Expand the disk image size
Launch a VM in which we run GParted against the Docker.qcow2 image
Expand the partition to use the additional space added to the disk image
Exit the VM and restart docker
Stop Docker
Let’s go ahead and stop docker so that the disk image is not in use while we resize it. This may not be required, but better safe than sorry…
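For Docker for Mac, quitting the app from the menu bar works; from a terminal, something like this should do the same (assuming the app is named “Docker”):

```shell
osascript -e 'quit app "Docker"'
```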
Install QEMU
To launch the virtual machine, you’ll need QEMU or something that can boot from an ISO and mount a qcow2 image. For this example, I’m using QEMU.
brew install qemu
Expand the disk image size
To expand the disk image, we’ll use the qemu-img util packaged with Docker for MacOS. If you can’t find this on your system, you should be able to get this from the qemu package.
If you would like to expand it more or less, you can change the +5G on the end of the command as needed.
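The resize command itself looks something like this; this is a sketch assuming the default Docker for Mac qcow2 path mentioned earlier:

```shell
cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/

# grow the virtual disk by 5G; change +5G as needed
qemu-img resize Docker.qcow2 +5G

# confirm the new virtual size
qemu-img info Docker.qcow2
```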
Download the GParted Live Image
Visit http://gparted.org/download.php and download the gparted-live ISO for your architecture. In this case, I downloaded gparted-live-0.27.0-1-amd64.iso.
Launch the VM Running GParted
Here we run qemu and launch a virtual machine adding our Docker.qcow2 disk image as a drive.
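The qemu invocation is roughly as follows; a minimal sketch, assuming the ISO was downloaded to ~/Downloads and you’re in the directory containing Docker.qcow2:

```shell
qemu-system-x86_64 -m 512 \
  -drive file=Docker.qcow2,format=qcow2 \
  -cdrom ~/Downloads/gparted-live-0.27.0-1-amd64.iso \
  -boot d
```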
When prompted, you’ll select the options for booting GParted Live.
Select don’t touch keymap (unless you know what you’re doing)
The next step should default to 33 (US English); change it if needed, otherwise hit enter
For mode, select start X & GParted automatically which should be default
Click on the GParted icon
While launching, I saw a warning stating overlayfs: missing 'workdir'. You can safely ignore this. Just be patient and let it finish booting.
It may take a bit for the machine to completely come up so give it some time…
select the partition (it should be the largest one and should match the size you’ve seen when inspecting the image)
right click the partition and select resize/move
resize it to use the full amount of space allocated for the disk by dragging the right side of the darkened box to the far right of the block
click resize
click apply
close GParted and exit the VM
Start Docker
Start docker back up however you normally would start it.
Confirm it Worked
At this point you can go back and run your commands to check space used from within the VM and confirm that the available size has increased as expected. If it did, you should be good to go!
Plug the PLM into a USB port on your server/computer
Find the PLM Device
Look in /dev for your serial USB device
philip@cube:/dev$ ls | grep USB
ttyUSB0
In my case it was ttyUSB0. If you unplug and re-connect the PLM, it may show up as a different device, such as ttyUSB1, so make sure to check if things stop working after you reconnect it.
Test the PLM
To test the PLM, we’ll be using Insteon Terminal which you will get from GitHub.
Let’s install the dependencies for the Insteon Terminal, which we will use to test and ensure the connection between the PLM and your computer is working.
Install ant, default-jdk and librxtx-java
sudo apt-get install ant default-jdk librxtx-java
Add the user accessing the PLM device to the correct groups to allow permission
If the user accessing the PLM is openhab use the following. You will need to check the current owner of the /dev/[yourdevice] file and the /run/lock folder and change dialout and lock below appropriately.
usermod -a -G dialout openhab
usermod -a -G lock openhab
Reboot
I’ve found that I have to reboot in order to connect to the modem properly. This should not be the case, but it is the simplest solution for now. It’s possible that there is a permissions issue, dependency loading issue, or something along those lines that gets resolved by a reboot.
Clone the Insteon Terminal repo from GitHub
git clone https://github.com/pfrommerd/insteon-terminal.git
Copy the example config file and edit appropriately
$ cd insteon-terminal
$ cp init.py.example init.py
$ vi init.py # Or use your editor of choice here
$ # Follow the instructions in the config file to configure Insteon Terminal using the PLM device we found earlier
Launch Insteon Terminal
From the insteon-terminal folder, launch the terminal
./insteon-terminal
At this point you should see something like the following...
Elasticsearch is a wonderful and powerful search and analytics engine. Operating an ES cluster or node is generally fairly easy and straightforward; however, there are a few situations where the resolution to seemingly common issues is not so clear. I will gather my notes and helper scripts here in an effort to help others better understand and resolve these certain issues and configurations quickly.
Stats
Determine the amount of space on each node and other storage related stats
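The _cat/allocation API gives a quick per-node view of disk usage; this assumes a node listening on localhost:9200:

```shell
# shards, disk used, disk available, and disk total per node
curl 'localhost:9200/_cat/allocation?v'
```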
When you spin up a single node cluster, the default setting for number of replicas is 1. This means that the cluster is going to try to create a second copy of each shard. This is not possible as you only have one node in the cluster. This keeps your cluster (single node) in the yellow status and it will never reach green. A node can function this way but it is annoying to not see a green state when everything is actually healthy.
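One way to clear the yellow status on a single-node cluster is to set number_of_replicas to 0 across your indices; this assumes a node on localhost:9200:

```shell
# tell the cluster not to expect replica copies of any shard
curl -XPUT 'localhost:9200/_settings' -H 'Content-Type: application/json' -d '
{
  "index": {
    "number_of_replicas": 0
  }
}'
```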
When you run out of disk, shards will not be allocated and your cluster will likely be stuck in status RED. To recover, you need to find out which shards are unassigned and assign them manually.
Commands
Check your cluster’s health and the status of unassigned shards
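Something like the following, again assuming a node on localhost:9200:

```shell
# overall cluster health, including counts of unassigned shards
curl 'localhost:9200/_cluster/health?pretty'

# list all shards and filter for the unassigned ones
curl 'localhost:9200/_cat/shards?v' | grep UNASSIGNED
```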
When deploying services to Kubernetes, a certificate has to be injected into the container via a secret. It doesn’t make sense to have each container renew its own certificates, as its state can be wiped at any given time.
Solution
Build a service within each Kubernetes namespace that handles renewing all certificates used in that namespace. This service would kick off the request to renew each cert at a predetermined interval. It would then accept all verification requests (GET requests to domain/.well-known/acme-challenge) and respond as necessary. After being issued the new certificate, it would recreate the appropriate secret containing that certificate and initiate a restart of any container or service necessary.
Spec
SSL Renewal Container
To automate the creation and renewal of certificates, we will need to create a container with Let’s Encrypt to request creation or renewal of each certificate, Nginx to receive and confirm domain validation, and scripts to push the generated certificates to secrets in Kubernetes. This container will be deployed to Kubernetes as a daemonset and should run in each of your Kubernetes clusters.
Container Creation & Setup
Nginx
LetsEncrypt (CertBot)
Pushing Secrets
kubectl
Access?
Restarting Services
kubectl
Domain List Configuration
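As a sketch of the “Pushing Secrets” step above, the renewal container could recreate a TLS secret with kubectl after each issuance; the secret, file paths, and namespace here are hypothetical:

```shell
# build the secret manifest from the newly issued cert/key and
# apply it, so the secret is updated in place if it already exists
kubectl create secret tls example-com-tls \
  --cert=/etc/letsencrypt/live/example.com/fullchain.pem \
  --key=/etc/letsencrypt/live/example.com/privkey.pem \
  --namespace=my-namespace \
  --dry-run -o yaml | kubectl apply -f -
```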
SSL Ingress in Kubernetes
Previously, to achieve SSL/TLS in Kubernetes, we had to set up some sort of SSL/TLS termination proxy. With the addition of a few new features in Kubernetes 1.2 Ingress, we’re able to do away with the proxy and allow Kubernetes to handle this task.
The precompiled versions of ruby from RVM are pointing at /etc/openssl/certs when looking for their CA certificate file. Newer versions of OSX have moved their certs to a different directory, possibly /usr/local/etc/openssl/certs if you’ve installed openssl from brew or some other source.
The Solution
Reinstall ruby from source.
rvm reinstall 2.2.1 --disable-binary
Uninstall all the chef gems
gem uninstall chef chef-zero berkshelf knife-solo
Oftentimes you need to run the same task in bash against a number of different arguments. Loops in bash can make this very quick and easy.
One of the simplest ways you can do this is as a one-liner:
$ for i in one two three four; do echo $i; done
one
two
three
four
You can also predefine an array to use later like this
files=("/tmp/file_one" "/tmp/file_two" "/tmp/file_three")
for i in "${files[@]}"
do
  echo $i
done
Or, to do this on one line
$ files=("/tmp/file_one" "/tmp/file_two" "/tmp/file_three"); for i in "${files[@]}"; do echo $i; done
/tmp/file_one
/tmp/file_two
/tmp/file_three
You can use ranges with seq
for year in $(seq 2000 2013); do echo $year; done
2000
2001
2002
2003
2004
2005
2006
2007
2008
2009
2010
2011
2012
2013
If you need a counter you could do something like this
#!/bin/bash

# declare an array variable
declare -a array=("one" "two" "three")

# get length of an array
arraylength=${#array[@]}

# use for loop to read all values and indexes
for (( i=1; i<${arraylength}+1; i++ ));
do
  echo $i" / "${arraylength}" : "${array[$i-1]}
done
File Permissions
There are a few shortcuts that make life easier when working with file and directory permissions. Here are a few.
When you want to recursively change permissions in a directory, you will want to change the file permissions separately from the directory permissions. You can accomplish this by using two different find commands piped to xargs as follows.
$ find * -type d -print0 | xargs -0 chmod 0755  # for directories
$ find . -type f -print0 | xargs -0 chmod 0644  # for files
or
$ find /path/to/directory -type d -exec chmod g+rsx '{}' \;
$ find /path/to/files -type f -exec chmod g+rsx '{}' \;
Three permission triads
first triad    what the owner can do
second triad   what the group members can do
third triad    what other users can do
Each triad
first character r: readable
second character w: writable
third character x: executable
s or t: executable and setuid/setgid/sticky
S or T: setuid/setgid or sticky, but not executable
References, Operators and Modifiers
Above, you can see that permissions can be changed using u, g, o and a. These represent references to User, Group, Other and All.
+ (u)ser:
+ The user is the owner of the file. The user of a file or directory can be changed with the chown command.
+ Read, write and execute privileges are individually set for the user with 0400, 0200 and 0100 respectively. Combinations can be applied as necessary eg: 0700 is read, write and execute for the user.
+ (g)roup:
+ A group is the set of people that are able to interact with that file. The group set on a file or directory can be changed with the chgrp command.
+ Read, write and execute privileges are individually set for the group with 0040, 0020 and 0010 respectively. Combinations can be applied as necessary eg: 0070 is read, write and execute for the group.
+ (o)ther:
+ Represents everyone who isn’t an owner or a member of the group associated with that resource. Other is often referred to as “world”, “everyone” etc.
+ Read, write and execute privileges are individually set for the other with 0004, 0002 and 0001 respectively. Combinations can be applied as necessary eg: 0007 is read, write and execute for other.
+ (a)ll:
+ Represents everyone
The operator is what is used to control adding or removing modifiers:
+ + Adds the specified file mode bits to the existing file mode bits of each file
+ - Removes the specified file mode bits from the existing file mode bits of each file
+ = adds the specified bits and removes unspecified bits, except the setuid and setgid bits set for directories, unless explicitly specified.
Modifiers
+ r read
+ w write
+ x execute (or search for directories)
+ X execute/search only if the file is a directory or already has execute bit set for some user
+ s setuid or setgid (depending on the specified references)
+ S setuid or setgid (depending on the specified references) without the executable bit (or search for directories) set
+ t restricted deletion flag or sticky bit
Octal
The read bit adds 4 to its total (in binary 100),
The write bit adds 2 to its total (in binary 010), and
The execute bit adds 1 to its total (in binary 001).
These values never produce ambiguous combinations; each sum represents a specific set of permissions. More technically, this is an octal representation of a bit field – each bit references a separate permission, and grouping 3 bits at a time in octal corresponds to grouping these permissions by user, group, and others.
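For example, 754 breaks down as 7 (4+2+1 = rwx) for the user, 5 (4+1 = r-x) for the group, and 4 (r--) for other. You can see this with GNU stat on Linux (the stat flags differ on OSX):

```shell
touch /tmp/perm_demo
chmod 754 /tmp/perm_demo

# GNU stat: %a prints the octal mode, %A prints the symbolic triads
stat -c '%a %A' /tmp/perm_demo
# prints: 754 -rwxr-xr--
```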
SetUID, SetGID and the Sticky Bit
SUID / Set User ID : A program is executed with the file owner’s permissions (rather than with the permissions of the user who executes it).
SGID / Set Group ID : Files created in the directory inherit its GID. That is, when a directory is shared between users and setgid is set on that shared directory, anything created inside it gets the same group owner as its parent directory.
$ chmod g+s [directory]
$ chmod 2750 [directory]
Sticky Bit : It is used mainly on folders to prevent deletion of the folder and its contents by other users, even when they have write permissions. If the sticky bit is set on a folder, the folder can be deleted only by its owner or the superuser (root). This is a security measure to suppress deletion of critical folders on which others have full permissions.
’S’ = The directory’s setgid bit is set, but the execute bit isn’t set.
’s’ = The directory’s setgid bit is set, and the execute bit is set.
These are represented in the ls -la (list all files in list format) by the following
Permissions Meaning
--S------ SUID is set, but user (owner) execute is not set.
--s------ SUID and user execute are both set.
-----S--- SGID is set, but group execute is not set.
-----s--- SGID and group execute are both set.
--------T Sticky bit is set, but other execute is not set.
--------t Sticky bit and other execute are both set.