awslogs agent running inside Fargate container

Make sure the awslogs launcher script passes the correct environment variables so the agent can use the ECS task role attached to the Fargate task:

/usr/bin/env -i \
  ECS_CONTAINER_METADATA_URI=${ECS_CONTAINER_METADATA_URI} \
  AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI} \
  HTTPS_PROXY=$HTTPS_PROXY HTTP_PROXY=$HTTP_PROXY NO_PROXY=$NO_PROXY \
  AWS_CONFIG_FILE=/var/awslogs/etc/aws.conf HOME=/root \
  /usr/bin/nice -n 4 /var/awslogs/bin/aws --debug logs push \
  --config-file /var/awslogs/etc/awslogs.conf \
  --additional-configs-dir /var/awslogs/etc/config

These are the important ones:

# env | grep URI
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/050390f5-611b-429e-ac2e-e1485fe808ac
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/e4656067-72ba-4506-99ed-fee2bdc4aaed
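
To confirm the task role is actually reachable from inside the container, hit the credentials endpoint yourself. A minimal sanity check, assuming the standard Fargate credentials address of 169.254.170.2:

# returns temporary credentials (AccessKeyId, SecretAccessKey, Token) for the task role
curl -s http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}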

using docker-compose to prototype against different databases

Greenfield projects come with the huge benefit of having no existing or legacy code or infrastructure to navigate when designing an application. In some ways, having a greenfield app land in your lap is the thing a developer’s dreams are made of. But along with the amazing opportunity of a “start from scratch” project comes a higher level of creative burden. The goals of the final product dictate the software architecture, and in turn the systems infrastructure, both of which have yet to be conceived.

Many times this question (or one similarly themed) arises:

“What database is right for my application?”

Often there is a clear and straightforward answer to the question, but in some cases a savvy software architect might wish to prototype against various types of persistent data stores.

This docker-compose.yml has a node.js container and four data store containers to play around with: MySQL, PostgreSQL, DynamoDB, and MongoDB. They can be run all at once or one at a time (see the commands after the compose file), which makes it perfect for testing these technologies locally while the application architecture is still taking shape. The final version of your application infrastructure is still a long way off, but at least it will be easy to test drive different solutions at the outset of the project.

version: '2'
services:
  my-api-node:
    container_name: my-api-node
    image: node:latest
    volumes:
      - ./:/app/
    ports:
      - '3000:3000'

  my-api-mysql:
    container_name: my-api-mysql
    image: mysql:5.7
    #image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: secretpassword
      MYSQL_USER: my-api-node-local
      MYSQL_PASSWORD: secretpassword
      MYSQL_DATABASE: my_api_local
    volumes:
      - my-api-mysql-data:/var/lib/mysql/
    ports:
      - '3306:3306'

  my-api-pgsql:
    container_name: my-api-pgsql
    image: postgres:9.6
    environment:
      POSTGRES_USER: my-api-node-local-pgsqltest
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_DB: my_api_local_pgsqltest
    volumes:
      - my-api-pgsql-data:/var/lib/postgresql/data/
    ports:
      - '5432:5432'

  my-api-dynamodb:
    container_name: my-api-dynamodb
    image: dwmkerr/dynamodb:latest
    volumes:
      - my-api-dynamodb-data:/data
    command: -sharedDb
    ports:
      - '8000:8000'

  my-api-mongo:
    container_name: my-api-mongo
    image: mongo:3.4
    volumes:
      - my-api-mongo-data:/data/db
    ports:
      - '27017:27017'

volumes:
  my-api-mysql-data:
  my-api-pgsql-data:
  my-api-dynamodb-data:
  my-api-mongo-data:
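
To spin up the whole stack, or just the data store you want to test against, the usual docker-compose commands apply. For example, using the service names defined above:

# start everything in the background
docker-compose up -d

# or start only the node container plus one data store, e.g. PostgreSQL
docker-compose up -d my-api-node my-api-pgsql

# tear down the containers; add -v to remove the named data volumes as well
docker-compose down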

I love Docker. I use Docker a lot. And like any tool, you can do really stupid things with it. A great piece of advice comes to mind when writing a docker-compose project like this one:

“Just because you can, doesn’t mean you should.”

This statement elicits strong emotions from both halves of a sysadmin’s brain. One half shudders at the painful thought of running multiple databases for an application (local or otherwise), and the other shouts “Hold my beer!” Which half will you listen to today?

AWS Lambda function to call bash

Use this node.js snippet to run a bash command from a Lambda function. In this case bash just curls a URL, but you could also package an entire bash script with the function and call that instead.

var child_process = require('child_process');

exports.handler = function(event, context) {
  // spawn bash to run the command; stdio 'inherit' sends the child's output
  // to the function's stdout, which ends up in the CloudWatch log stream
  var child = child_process.spawn('/bin/bash', [ '-c', 'curl --silent --output - --max-time 1 https://fordodone.com/jobtrigger || true' ], { stdio: 'inherit' });

  // report success or failure back to Lambda based on the bash exit code
  child.on('close', function(code) {
    if(code !== 0) {
      return context.done(new Error("non zero process exit code"));
    }

    context.done(null);
  });
};
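
Once deployed, you can kick it off manually with awscli to make sure the curl fires. A quick test, assuming a hypothetical function name of runBashCurl (substitute your own):

# invoke the function and print its response payload
# (runBashCurl is a placeholder; use your actual function name)
aws lambda invoke --function-name runBashCurl out.json
cat out.json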

AWS find unused instance reservations

AWS Trusted Advisor has some good Cost Optimization metrics for looking at your infrastructure portfolio and potential savings; however, it doesn’t offer a good way to see underutilized reservations. Using awscli you can check your reservations against what is actually running in order to find reservations that are going unused:

#!/bin/bash
# super quick hack to see difference between reserved and actual usage for RI

IFS="
"

first="true"
for az in us-east-1b us-east-1d
do
  if [ "$first" == "true" ]
  then
    echo "AvailabilityZone InstanceType Reserved Actual Underprovisioned"
    first="false"
  fi
  for itype in `aws ec2 describe-reserved-instances --filters Name=state,Values=active Name=availability-zone,Values=$az | grep InstanceType | sed -e 's/ //g' | cut -d'"' -f 4 | sort -u`
  do
    rcount=$(aws ec2 describe-reserved-instances --filters Name=state,Values=active Name=availability-zone,Values=$az Name=instance-type,Values=$itype | egrep 'InstanceCount' | awk '{print $NF}' | awk 'BEGIN{i=0}{i=i+$1}END{print i}')
    icount=$(aws ec2 describe-instances --filter Name=availability-zone,Values=$az Name=instance-type,Values=$itype | grep InstanceId | wc -l)

    echo "$az $itype $rcount $icount" | awk '{ if($4<$3){alert="YES"} print $1" "$2" "$3" "$4" "alert}'

  done;

done | column -t

Output:

$ compare_reserved_instances_vs_actual.sh
AvailabilityZone  InstanceType  Reserved  Actual  Underutilized
us-east-1b        m3.2xlarge    2         0       YES
us-east-1b        m3.medium     2         8
us-east-1b        m3.xlarge     11        4       YES
us-east-1b        m4.xlarge     2         46
us-east-1d        m3.medium     2         8
us-east-1d        m3.xlarge     6         0       YES
us-east-1d        m4.xlarge     1         17
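
The grep/sed/cut pipeline works, but awscli's built-in --query (JMESPath) support can pull the same fields with fewer moving parts. A rough sketch of the same idea, not a drop-in replacement for the script above (note that regional reservations have no AvailabilityZone, so this only lines up cleanly for zonal reservations):

#!/bin/bash
# list active reservations as: AZ  instance-type  reserved-count
aws ec2 describe-reserved-instances \
  --filters Name=state,Values=active \
  --query 'ReservedInstances[].[AvailabilityZone,InstanceType,InstanceCount]' \
  --output text

# list running instances as: AZ  instance-type (one line per instance)
aws ec2 describe-instances \
  --filters Name=instance-state-name,Values=running \
  --query 'Reservations[].Instances[].[Placement.AvailabilityZone,InstanceType]' \
  --output text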