ForDoDone
https://fordodone.com
Tales from the Command Line...

nginx map simple example
https://fordodone.com/2019/10/14/nginx-map-simple-example/ (Mon, 14 Oct 2019)

Using the nginx map directive is a powerful way to add headers only if they don’t already exist. Let’s start with a pseudo-code example:

map $ice_cream_flavor $toppings {
  default '';
  '~choc' 'sprinkles';
  '~vanil' 'strawberries';
}

server {
  add_toppings $toppings;
}

Add strawberries to vanilla and vanillatastic ice cream, add sprinkles to chocolate and chocoloco ice cream, and add nothing to other flavors. add_toppings does nothing if $toppings is empty. The tilde starts a regular expression match, and $ice_cream_flavor would be an nginx internal variable.

nginx add headers if not already set

Using a regular expression that matches one or more characters (i.e. the header is already set) along with the $upstream_http_* internal variables to inspect response headers from an upstream service, we can apply add_header ONLY if the proxied upstream service (like PHP-FPM) has not already set the header. This avoids nginx sending duplicate headers when the upstream service sets them itself, because add_header has no effect if the value it is supposed to add is an empty string.

map $upstream_http_access_control_allow_origin $proxy_header_acao {
    default '*';
    '~.' "";
}

map $upstream_http_access_control_allow_headers $proxy_header_acah {
    default 'Authorization,Accept,Content-Type,Origin,X-API-VERSION,X-Visitor-Token,X-Agent-Token,X-Auth-Token';
    '~.' "";
}

map $upstream_http_access_control_allow_methods $proxy_header_acam {
    default 'GET, PUT, PATCH, POST, DELETE, OPTIONS, HEAD';
    '~.' "";
}

server {
...
        add_header 'Access-Control-Allow-Origin' $proxy_header_acao always;
        add_header 'Access-Control-Allow-Headers' $proxy_header_acah always;
        add_header 'Access-Control-Allow-Methods' $proxy_header_acam always;

...
}
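
A quick way to spot-check the result from the outside is to request a proxied path and count the CORS headers; the URL below is just a placeholder for your own upstream-backed endpoint:

# each Access-Control-* header should appear exactly once, whether set by nginx or by the upstream
curl -sI https://api.example.com/some/path | grep -i '^access-control-'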
gracefully stop php laravel sqs worker in Docker on ECS Fargate
https://fordodone.com/2019/02/08/gracefully-stop-php-laravel-sqs-worker-in-docker-on-ecs-fargate/ (Fri, 08 Feb 2019)

Using AWS SQS to process asynchronous messages is a great way to handle scheduled jobs and work that doesn’t need to happen in real time inside your user-driven application. Containerizing a PHP Laravel app and using an orchestration service like ECS Fargate allows you to easily run thousands of job queue workers in an effectively infinite and embarrassingly parallel fashion.

php artisan queue:work sqs

If your work queue is inconsistent in depth and rate (i.e. “bursty”) you’ll find you need to scale out and scale in containers based on how much work is available. Starting containers is no problem; just swipe your credit card and ECS delivers. The problem comes when you need to scale in and stop containers that are no longer needed, because ECS stops the php workers mid-job.

During autoscaling actions, when the ECS agent stops tasks it sends the equivalent of a docker stop to each container in the task. Under the covers it is sending the Unix process signal SIGTERM to the process inside the running container (PID 1). After the SIGTERM is sent, the ECS agent waits 30 seconds for the process to exit, and if the process is still running after 30 seconds, the ECS agent gives up and sends a SIGKILL. Sending SIGTERM (or SIGKILL) to the php process running the worker makes it exit immediately. This is expected but problematic, because whatever the worker was working on is halted mid-job.

One solution to this problem is to wrap the php worker command inside a bash script and use a trap to catch the SIGTERM and give the worker some time to stop processing SQS messages and exit gracefully. A trap catches the signal sent to the script, but it does not interrupt what the process is currently doing; the trap waits until the current foreground command finishes, then it executes. Simply running the php worker with a trap is not enough, because the queue worker does not exit between jobs, and php artisan queue:work sqs is a long-running process. Because of this we use an infinite loop (while true; do ...; done) and the --once flag to “single-run” php workers over and over. This means that for every SQS message (or empty receive) a new one-off php process is run. Doing it this way means the trap can execute (and exit the script) between jobs, when the current job finishes processing. Something like this:

#!/bin/bash
exit_trap(){
  echo "received SIGTERM, exiting..."
  exit 0
}

trap exit_trap SIGTERM

while true
do
  php artisan queue:work sqs --once
done
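
A rough way to test the behavior locally, assuming the script above is the container’s entrypoint (the image and container names here are made up for the test):

# docker stop sends SIGTERM, waits up to the grace period, then sends SIGKILL
docker build -t laravel-sqs-worker .
docker run -d --name worker-test laravel-sqs-worker
docker stop --time 30 worker-test
# the last log lines should show the current job finishing and then "received SIGTERM, exiting..."
docker logs worker-test | tail -n 5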

caveat emptor

  • Running php workers with --once means the entire framework has to bootstrap for every message, which may add some extra processing time. But honestly, if your framework takes a long time to load you have bigger problems.
  • Running a new php process for every message can sidestep the “leaky” nature of php, laravel, or problematic code, where long-running php worker processes slowly consume more and more memory.
  • Workers must go from listening, to working, to finished with the current job within 30 seconds (or less depending on the default receive message wait time). If the worker takes longer than 30 seconds it will receive a SIGKILL mid-job and die.
  • Using the EC2 Launch Type instead of Fargate will allow you to tweak the docker stop grace period (see the snippet after this list). This value is not configurable with the Fargate Launch Type and you are stuck with 30 seconds.
  • Bash is used here, and PID 1 becomes a bash script instead of a php command.
  • It’s even more important for jobs to be idempotent, and to have the ability to be re-run trivially at any time.
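
On the EC2 Launch Type the grace period is controlled by the ECS agent on the container instance; a minimal sketch of raising it (Amazon Linux 2 style service restart shown):

# allow containers up to 2 minutes to exit after SIGTERM before the agent sends SIGKILL
echo "ECS_CONTAINER_STOP_TIMEOUT=120s" | sudo tee -a /etc/ecs/ecs.config
sudo systemctl restart ecs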
awslogs agent running inside Fargate container
https://fordodone.com/2019/01/29/awslogs-agent-running-inside-fargate-container/ (Wed, 30 Jan 2019)

Make sure the awslogs launcher script passes the correct environment variables in order to use the ECS Task Role attached to the Fargate task:

/usr/bin/env -i ECS_CONTAINER_METADATA_URI=${ECS_CONTAINER_METADATA_URI} AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI} HTTPS_PROXY=$HTTPS_PROXY HTTP_PROXY=$HTTP_PROXY NO_PROXY=$NO_PROXY AWS_CONFIG_FILE=/var/awslogs/etc/aws.conf HOME=/root /usr/bin/nice -n 4 /var/awslogs/bin/aws --debug logs push --config-file /var/awslogs/etc/awslogs.conf --additional-configs-dir /var/awslogs/etc/config

These are the important ones:

# env | grep URI
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/050390f5-611b-429e-ac2e-e1485fe808ac
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/e4656067-72ba-4506-99ed-fee2bdc4aaed
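
A quick sanity check from inside the container: hitting the task metadata endpoint with that relative URI should return a JSON credential document (AccessKeyId, SecretAccessKey, Token) for the attached Task Role.

# if this returns credentials, the awslogs agent has everything it needs
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"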

Connect to MongoDB Atlas Replica Set via SSH Tunnels
https://fordodone.com/2018/03/09/connect-to-mongodb-atlas-replica-set-via-ssh-tunnels/ (Fri, 09 Mar 2018)

Using an ssh tunnel to connect to a MongoDB Atlas hosted replica set is not straightforward. There are multiple members of the replica set, access restrictions based on IP, and no SSH access into the Mongo instances themselves. Here’s an example project that uses a docker container to connect to a remote, hosted MongoDB replica set with an ssh tunnel through a bounce box with an authorized IP. You’re kidding, right? Find the working project here.
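
For a rough idea of just the tunneling portion, something like this forwards one local port per replica set member through the authorized bounce box (the Atlas hostnames, bounce box, and local ports are all hypothetical):

ssh -N -f fordodone@bounce.example.com \
  -L 27017:cluster0-shard-00-00.abcde.mongodb.net:27017 \
  -L 27018:cluster0-shard-00-01.abcde.mongodb.net:27017 \
  -L 27019:cluster0-shard-00-02.abcde.mongodb.net:27017

The catch is that replica-set-aware drivers still try to reach members by the hostnames advertised in the replica set config, which is why a plain port forward on its own usually isn’t enough.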

list contents of all docker volumes
https://fordodone.com/2017/12/15/list-contents-of-all-docker-volumes/ (Fri, 15 Dec 2017)

To list the contents of a docker named volume, run a temporary container with the volume mounted into it, then do a directory listing. Loop over all the volumes to see what each one holds.

~$ for i in `docker volume ls -q`; do echo "volume: ${i}"; docker run --rm -it -v ${i}:/vol alpine:latest ls /vol; echo; done;
volume: 140f898b1c69b85585942aa7f25cf03eba6ac66125d4a122e2fe99455c4a1a3f

volume: 1fa7b49173076a3a1fdb07ea7ce65d7187ff80e8b1a56e2fa667ebbbc0543f3a
dump.rdb

volume: 5564a11a1945567ffcc231145c01c806afe13a02b3e0a548f1504a1cd36c9374
dump.rdb

volume: 6d0b313416430d2abc0c872b98fd4180bbda4d14560c0a5d98f534f33b792164
app                 ib_buffer_pool      private_key.pem
auto.cnf            ib_logfile0         public_key.pem
ca-key.pem          ib_logfile1         server-cert.pem
ca.pem              ibdata1             server-key.pem
client-cert.pem     mysql               sys
client-key.pem      performance_schema

volume: 8b08da4a38ca8f5924b90220db8c84384d90fe331a953a5aaa1a1944d826fc68

volume: 976239a3528b8b0b074b6b7438552e1d22c4f069cf20582d2250fcc1c068dc4f
dump.rdb

volume: aac262a93155286f4c551271d8d2a70f81ed1ca4cd56925e94006248e458895e
dump.rdb

volume: b385ebee063b72350fbc1158788cbb43d7da9b37ec95196b74caa6b22b1c115b
dump.rdb

volume: c73f2001f829e5574bd4246b2ab7a261a3f4d9a7ef89997765d7bf43883e5c24
dump.rdb

volume: ce73f9f85c475b1fd9cf4fede20fd04250ee7702e83db67c29c7118055275c28
dump.rdb

volume: foovolume1

volume: efc8a8855ac2c13d83c23573aebfd53e15072ec68d23e2793262f662ea0ae308

volume: foovolume2
auto.cnf                     ibdata1
ca-key.pem                   mysql
ca.pem                       performance_schema
client-cert.pem              private_key.pem
client-key.pem               public_key.pem
ib_buffer_pool               server-key.pem
ib_logfile0                  sys
ib_logfile1

volume: foovolume3

volume: phpsockettest
php-fpm.sock

volume: foovolume4
auto.cnf            ib_logfile0         public_key.pem
ca-key.pem          ib_logfile1         server-cert.pem
ca.pem              ibdata1             server-key.pem
client-cert.pem     mysql               sys
client-key.pem      performance_schema  
ib_buffer_pool      private_key.pem

volume: testvol
asdf   asdf1  asdf2

This installation has some test files, backing files from a few different mysql databases, a unix socket, redis files, and empty volumes.
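
If you just want to know where a volume lives on the docker host rather than what is inside it, docker volume inspect can print the backing directory (using one of the volume names from the listing above; the path shown is the default local driver location):

~$ docker volume inspect -f '{{ .Mountpoint }}' foovolume1
/var/lib/docker/volumes/foovolume1/_data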

connect to ssh tunnel on Mac host from inside docker container
https://fordodone.com/2017/11/02/connect-to-ssh-tunnel-on-mac-host-from-inside-docker-container/ (Fri, 03 Nov 2017)

In some rare edge cases you may want to temporarily connect to a remote service on a different network from within a docker container. In this example I needed to provide a solution to connect from a docker container on a Mac laptop to a database hosted in a subnet far far away. Here’s a proof of concept.

set up the ssh tunnel from the Mac host:

ssh -L 56789:10.0.100.200:3306 fordodone@1.2.3.4

This starts an ssh tunnel with TCP port 56789 open on localhost, forwarding through the ssh tunnel to port 3306 on a host with IP address 10.0.100.200 on the remote network.

start a container:

docker run --rm -it alpine:latest sh

This runs an interactive shell in a temporary docker container from the alpine:latest image.

inside the container add mysql client for testing:

/ # apk add --update --no-cache mysql-client

This fetches a temporary package index and installs mysql-client.

create a database connection:

/ # mysql -h docker.for.mac.localhost --port 56789 -udbuser -p

Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 675073323
Server version: 5.6.27-log MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> 
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| awesome_app        |
| innodb             |
| mysql              |
| performance_schema |
+--------------------+
5 rows in set (0.08 sec)

MySQL [(none)]> 

The magic here comes from the special docker.for.mac.localhost hostname. The internal Docker for Mac DNS resolver uses this entry to return the internal IP address used by the Mac host. Tell the mysql client to use the port-forwarded TCP port with --port 56789, and the mysql client inside the container connects through the ssh tunnel to the remote database.
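
The same test can be collapsed into a single throwaway container run, just stringing together the commands from above:

docker run --rm -it alpine:latest sh -c \
  'apk add --update --no-cache mysql-client && \
   mysql -h docker.for.mac.localhost --port 56789 -udbuser -p -e "show databases;"'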

using docker-compose to prototype against different databases
https://fordodone.com/2017/09/28/using-docker-compose-to-prototype-against-different-databases/ (Thu, 28 Sep 2017)

Greenfield projects come with the huge benefit of not having any existing or legacy code or infrastructure to navigate when designing an application. In some ways, having a greenfield app land in your lap is the thing a developer’s dreams are made of. Along with the amazing opportunity of a “start from scratch” project comes a higher level of creative burden. The goals of the final product dictate the software architecture, and in turn the systems infrastructure, both of which have yet to be conceived.

Many times this question (or one similarly themed) arises:

“What database is right for my application?”

Often there is a clear and straightforward answer to the question, but in some cases a savvy software architect might wish to prototype against various types of persistent data stores.

This docker-compose.yml has a node.js container and four data store containers to play around with: MySQL, PostgreSQL, DynamoDB, and MongoDB. They can be run simultaneously, or one at a time (see the usage example after the compose file), making it perfect for testing these technologies locally during the beginnings of the application software architecture. The final version of your application infrastructure is still a long way off, but at least it will be easy to test drive different solutions at the outset of the project.

version: '2'
services:
  my-api-node:
    container_name: my-api-node
    image: node:latest
    volumes:
      - ./:/app/
    ports:
      - '3000:3000'

  my-api-mysql:
    container_name: my-api-mysql
    image: mysql:5.7
    #image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: secretpassword
      MYSQL_USER: my-api-node-local
      MYSQL_PASSWORD: secretpassword
      MYSQL_DATABASE: my_api_local
    volumes:
      - my-api-mysql-data:/var/lib/mysql/
    ports:
      - '3306:3306'

  my-api-pgsql:
    container_name: my-api-pgsql
    image: postgres:9.6
    environment:
      POSTGRES_USER: my-api-node-local-pgsqltest
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_DB: my_api_local_pgsqltest
    volumes:
      - my-api-pgsql-data:/var/lib/postgresql/data/
    ports:
      - '5432:5432'

  my-api-dynamodb:
    container_name: my-api-dynamodb
    image: dwmkerr/dynamodb:latest
    volumes:
      - my-api-dynamodb-data:/data
    command: -sharedDb
    ports:
      - '8000:8000'

  my-api-mongo:
    container_name: my-api-mongo
    image: mongo:3.4
    volumes:
      - my-api-mongo-data:/data/db
    ports:
      - '27017:27017'

volumes:
  my-api-mysql-data:
  my-api-pgsql-data:
  my-api-dynamodb-data:
  my-api-mongo-data:
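
Bringing up one data store at a time (or all of them) is just a matter of naming services on the command line, for example:

# app container plus a single data store
docker-compose up -d my-api-node my-api-pgsql

# everything at once
docker-compose up -d

# stop containers but keep the named volumes for next time
docker-compose stop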

I love Docker. I use Docker a lot. And like any tool, you can do really stupid things with it. A great piece of advice comes to mind when writing a docker-compose project like this one:

“Just because you can, doesn’t mean you should.”

This statement elicits strong emotions from both halves of a sysadmin brain. The first shudders at the painful thought of running multiple databases for an application (local or otherwise), and the other shouts “Hold my beer!” Which half will you listen to today?

generate uuid bash
https://fordodone.com/2017/08/16/generate-uuid-bash/ (Wed, 16 Aug 2017)

$ head /dev/urandom | tr -dc 'a-f0-9' | head -c 32 | sed -e 's/\(.\{8\}\)\(.\{4\}\)\(.\{4\}\)\(.\{4\}\)\(.\{12\}\)/\1-\2-\3-\4-\5/'
1ac5f675-6fd0-959c-4095-d44db2d64a49

silly, but easy.
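
Worth noting: the output above is random hex arranged in the 8-4-4-4-12 shape rather than a spec-compliant version 4 UUID (no version or variant bits are set). If the tools happen to be installed, either of these produces a proper RFC 4122 UUID directly:

# util-linux
uuidgen

# or straight from the kernel on Linux
cat /proc/sys/kernel/random/uuid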

ssh deploy key for continuous delivery
https://fordodone.com/2017/06/16/ssh-deploy-key-for-continuous-delivery/ (Fri, 16 Jun 2017)

One pattern I see over and over again when looking at continuous delivery pipelines is the use of an ssh client and a private key to connect to a remote ssh endpoint. Triggering scripts, restarting services, or moving files around could all be part of your deployment process. Keeping a private ssh key “secured” is critical to limiting access to your ssh-accessible resources to authorized parties. Whether you use your own in-house application (read “unreliable mess of shell scripts”), Travis CI, Bitbucket Pipelines, or some other CD solution, you may find yourself wanting to store an ssh private key for use during deployment.

Bitbucket Pipelines already has a built-in way to store and provide ssh deploy keys; however, this is an example alternative if you want to roll your own. The steps are pretty simple. We create an encrypted ssh private key, its corresponding public key, and a 64-character passphrase for the private key. The encrypted private key and public key get checked into the repository, and the passphrase gets stored as a “secured” Bitbucket Pipelines variable. At build time, the private key gets decrypted into a file using the Bitbucket Pipelines passphrase variable. The ssh client can now use that key to connect to whatever resources you need it to.

#!/bin/bash
set -e

if [ ! `which openssl` ] || [ ! `which ssh-keygen` ] || [ ! `which jq` ] || [ ! `which curl` ]
then
  echo
  echo "need ssh-keygen, openssl, jq, and curl to continue"
  echo
  exit
fi

# generate random string of 64 characters
echo
echo "generating random string for ssh deploy key passphrase..."
DEPLOY_KEY_PASSPHRASE=`< /dev/urandom LC_CTYPE=C tr -dc A-Za-z0-9#^%@ | head -c ${1:-64}`

# save passphrase in file to be used by openssl
echo
echo "saving passphrase for use with openssl..."
echo -n "${DEPLOY_KEY_PASSPHRASE}" >passphrase.txt

# generate encrypted ssh rsa key using passphrase
echo
echo "generating encrypted ssh private key with passphrase..."
openssl genrsa -out id_rsa_deploy.pem -passout file:passphrase.txt -aes256 4096
chmod 600 id_rsa_deploy.pem

# decrypt ssh rsa key using passphrase
echo
echo "decrypting ssh private key with passphrase to temp file..."
openssl rsa -in id_rsa_deploy.pem -out id_rsa_deploy.tmp -passin file:passphrase.txt
chmod 600 id_rsa_deploy.tmp

# generate public ssh key for use on target deployment server
echo
echo "generating public key from private key..."
ssh-keygen -y -f id_rsa_deploy.tmp > id_rsa_deploy.pub

# remove unencrypted ssh rsa key
echo
echo "removing unencrypted temp file..."
rm id_rsa_deploy.tmp

# ask user for bitbucket credentials
echo
echo "PUT IN YOUR BITBUCKET CREDENTIALS TO CREATE/UPDATE PIPLINES SSH KEY PASSPHRASE VARIABLE (password does not echo)"
echo
echo -n "enter bitbucket username: "
read BBUSER
echo -n "enter bitbucket password: "
read -s BBPASS
echo

# bitbucket API doesn't have "UPSERT" capability for creating(if not exists) or updating(if exists) variables
# get variable if exists
echo
echo "getting variable uuid if variable exists"
DEPLOY_KEY_PASSPHRASE_UUID=`curl -s --user ${BBUSER}:${BBPASS} -X GET -H "Content-Type: application/json" https://api.bitbucket.org/2.0/repositories/fordodone/pipelines-test/pipelines_config/variables/ | jq -r '.values[]|select(.key=="DEPLOY_KEY_PASSPHRASE").uuid'`

if [ "${DEPLOY_KEY_PASSPHRASE_UUID}" == "" ]
then
  # create bitbucket pipeline variable
  echo
  echo "DEPLOY_KEY_PASSPHRASE variable does not exist... creating DEPLOY_KEY_PASSPHRASE"
  curl -s --user ${BBUSER}:${BBPASS} -X POST -H "Content-Type: application/json" -d '{"key":"DEPLOY_KEY_PASSPHRASE","value":"'"${DEPLOY_KEY_PASSPHRASE}"'","secured":"true"}' https://api.bitbucket.org/2.0/repositories/fordodone/pipelines-test/pipelines_config/variables/
  echo
  echo

else
  # update existing bitbucket pipeline variable by uuid
  # use --globoff to avoid curl interpreting curly braces in the variable uuid
  echo
  echo "DEPLOY_KEY_PASSPHRASE variable exists... updating DEPLOY_KEY_PASSPHRASE"
  curl --globoff -s --user ${BBUSER}:${BBPASS} -X PUT -H "Content-Type: application/json" -d '{"key":"DEPLOY_KEY_PASSPHRASE","value":"'"${DEPLOY_KEY_PASSPHRASE}"'","secured":"true","uuid":"'"${DEPLOY_KEY_PASSPHRASE_UUID}"'"}' "https://api.bitbucket.org/2.0/repositories/fordodone/pipelines-test/pipelines_config/variables/${DEPLOY_KEY_PASSPHRASE_UUID}"

fi


# after passphrase is stored in bitbucket remove passphrase file
echo
echo "DEPLOY_KEY_PASSPHRASE successfully stored on bitbucket, removing passphrase file..."
rm passphrase.txt

echo 
echo "KEY ROLL COMPLETE"
echo "add, commit, and push encypted private key and corresponding public key, update ssh targets with new public key"
echo "  ->  git add id_rsa_deploy.pem id_rsa_deploy.pub && git commit -m 'deploy ssh key roll' && git push"
echo

We use a bitbucket username and password to authenticate the person running the script; they need access to insert the new ssh deploy key passphrase as a “secured” variable using the bitbucket API. The person running the script never sees the passphrase and doesn’t care what it is. This script can be re-run easily to update the key pair and passphrase. It’s easy and fast, because when you need to roll a compromised key, you should never have to remember that damn openssl command that you have used your entire career but somehow have never memorized.

Here’s how you could use the key in a Bitbucket Pipelines build container:

#!/bin/bash
set -e

# store passphrase from BitBucket secure variable into file
# file is on /dev/shm tmpfs in memory (don't put secrets on disk)
echo "creating passphrase file from BitBucket secure variable DEPLOY_KEY_PASSPHRASE"
echo -n "${DEPLOY_KEY_PASSPHRASE}" >/dev/shm/passphrase.txt

# use passphrase to decrypt ssh key into tmp file (again in memory backed file system)
echo "writing decrypted ssh key to tmp file"
openssl rsa -in id_rsa_deploy.pem -out /dev/shm/id_rsa_deploy.tmp -passin file:/dev/shm/passphrase.txt
chmod 600 /dev/shm/id_rsa_deploy.tmp

# invoke ssh-agent to manage keys
echo "starting ssh-agent"
eval `ssh-agent -s`

# add ssh key to ssh-agent
echo "adding key to ssh-agent"
ssh-add /dev/shm/id_rsa_deploy.tmp

# remove tmp ssh key and passphrase now that the key is in ssh-agent
echo "cleaning up decrypted key and passphrase file"
rm /dev/shm/id_rsa_deploy.tmp /dev/shm/passphrase.txt

# get ssh host key
echo "getting host keys"
ssh-keyscan -H someserver.fordodone.com >> $HOME/.ssh/known_hosts

# test the key
echo "testing key"
ssh someserver.fordodone.com "uptime"

It uses a tmpfs memory-backed file system to store the key and passphrase, and ssh-agent to add the key to the session. How secure is secure enough? Whether you use the built-in Pipelines ssh deploy key, or this method to roll your own and store a passphrase in a variable, or store the ssh key as a base64 encoded blob in a variable, or however you do it, you essentially have to trust the provider to keep your secrets secret.
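
For comparison, the base64-blob approach mentioned above could look roughly like this; DEPLOY_KEY_B64 is a hypothetical secured variable name:

# locally: encode the encrypted private key and paste the output into a secured Pipelines variable
base64 < id_rsa_deploy.pem

# in the build container: reconstruct the key file on the memory-backed filesystem
echo "${DEPLOY_KEY_B64}" | base64 -d > /dev/shm/id_rsa_deploy.pem
chmod 600 /dev/shm/id_rsa_deploy.pem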

There are some changes you could make to all of this, but it’s good boilerplate. Other things to think about:

    rewrite this in python and do automated key rolls once a day with Lambda, storing the dedicated bitbucket user/pass and git key in KMS.
    do you really need ssh-agent?
    you could turn off strict host key checking instead of using ssh-keyscan (see the one-liner after this list)
    could this be useful for x509 TLS certs?
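
The quick-and-dirty version of the strict host key checking item above, trading MITM protection for one less moving part:

ssh -o StrictHostKeyChecking=no someserver.fordodone.com "uptime"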
make Makefile target for help or usage options
https://fordodone.com/2017/04/17/make-makefile-target-for-help-or-usage-options/ (Mon, 17 Apr 2017)

Using make and Makefiles with a docker-based application development strategy is a great way to track shortcuts and allow team members to easily run common docker or application tasks without having to remember the syntax specifics. Without a "default" target make will attempt to run the first target (the default goal). This may be desirable in some cases, but I find it useful to have make just print out a usage and require the operator to specify the exact target they need.


#Makefile 
DC=docker-compose
DE=docker-compose exec app

.PHONY: help
help:
	@sh -c "echo ; echo 'usage: make <target> ' ; cat Makefile | grep ^[a-z] | sed -e 's/^/            /' -e 's/://' -e 's/help/help (this message)/'; echo"

docker-up:
	$(DC) up -d

docker-down:
	$(DC) stop

docker-rm:
	$(DC) rm -v

docker-ps:
	$(DC) ps

docker-logs:
	$(DC) logs

test:
	$(DE) sh -c "vendor/bin/phpunit"

Now without any arguments make outputs a nice little usage message:


$ make 

usage: make <target> 
            help (this message) 
            docker-up
            docker-down
            docker-rm
            docker-ps
            docker-logs
            test
$

This assumes a bunch of things, like calling make from the correct directory, but it's a good working proof of concept.
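
One small workaround for the "correct directory" assumption is make's -C flag, which changes into the given directory before reading the Makefile (the path here is made up):

make -C ~/projects/my-app docker-up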
