
secretcli – an interface for the AWS Secrets Manager

I love the AWS Secrets Manager, but have found the awscli tools for it to be a bit bulky. Most of the time all I want to do is download a file, edit it, and upload it again- and even then I usually just want to change a single value inside my “secret”.

The secretcli project makes all of that trivial, allowing all of these actions to be done with a single command (a rough sketch of the underlying calls follows the list)-

  • Download an existing Secret to a file,
  • Upload a file to replace an existing Secret,
  • Create new Secrets that are ready to work as JSON data stores,
  • Retrieve a single value from a Secret,
  • Add, edit, or remove a single value from a Secret.
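
Under the hood these all map to a handful of Secrets Manager calls. As a rough sketch of what the single-value workflow involves (this is illustrative boto3 code, not secretcli's actual implementation, and the secret name and key are hypothetical):

# Hedged sketch of editing one value in a JSON secret; not secretcli's real code.
import json
import boto3

client = boto3.client("secretsmanager")

# Download the current secret and parse it as JSON.
secret = json.loads(client.get_secret_value(SecretId="my-app")["SecretString"])

# Change a single value, then upload the result as a new version.
secret["database_password"] = "new-password"
client.put_secret_value(SecretId="my-app", SecretString=json.dumps(secret))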

Like all of my projects, you can find this one on GitHub, and contributions are always welcome.


urlparser – a simple python program for extracting info from URLs

I regularly run into the need to use part of a URL inside of shell scripts- such as extracting the hostname and port from a URL in order to check whether the service is reachable- and got a bit tired of screwing with regex. The parse component of Python's urllib library is a great tool for this, so I wrote urlparser to expose it directly from the command line and tossed it up on PyPI for others to use.
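
For a sense of what urlparser exposes, this is the underlying urllib.parse call (the CLI's exact flags may differ; check the project's README):

# The standard library call urlparser wraps; attribute names are urllib's own.
from urllib.parse import urlparse

parts = urlparse("https://example.com:8443/health?verbose=1")
print(parts.hostname)  # example.com
print(parts.port)      # 8443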


ec2details, the missing EC2 Instance Metadata API

When working with the AWS EC2 service programmatically I’ve repeatedly run into a simple problem- how can I get up-to-date metadata about the various instance types?

It turns out this simple problem does not actually have a simple solution. AWS offers their Bulk API, which has all the information about every EC2 instance offering in a single giant JSON file, but parsing it with Python 3.6 will produce an OOM error on machines with only 2GB of RAM, and actually getting the desired data out of it is not a trivial task. The AWS Query API requires AWS credentials and specific IAM roles (and has almost no documentation), making it overly burdensome to use.

Despite that I’ve built support for the AWS Bulk API into at least two projects. While contemplating doing it for a third I decided it made more sense to simply build a better API for EC2 instance details, with a few goals in mind-

  • Information about each instance type should be easy to access.
  • The data should include hardware specs, prices, and regional availability.
  • The data should be accessible to pretty much any programming language.
  • The data should be reasonably up to date.
  • The API should have high availability and decent security (SSL).
  • Hosting this should not cost me a fortune, even if it gets popular.

In the end I built a “static” API hosted on GitHub Pages. Every six hours CircleCI kicks off a job to download and process the Bulk API data, generating two files (JSON and YAML) with a cleaned up version of the instance data indexed by instance type. If the files differ from what is already stored in git then CircleCI commits the new files and pushes them back up to GitHub, so the API is never more than six hours out of date from the information available from AWS. Using GitHub Pages has some real benefits as well, with built-in SSL and the Fastly CDN. The whole system requires no direct hosting on my behalf, and will stay up to date without any need for me to intervene as long as AWS does not change the format of their giant JSON file. Since the whole thing is stored in git it also accumulates historical data as a matter of course, showing exactly when changes have occurred.
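
Consuming the API is a one-liner in most languages. A quick Python sketch (the endpoint path here is an assumption- check the documentation for the canonical URL):

# Hypothetical endpoint; see the ec2details docs for the real URL.
import requests

url = "https://tedivm.github.io/ec2details/api/ec2instances.json"
instances = requests.get(url).json()

# The data is indexed by instance type.
print(instances["m5.large"])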

The whole project is, of course, available on GitHub. The API itself, with documentation, is on GitHub Pages.


GitConsensus now available as a GitHub App

Last year I introduced a way to manage open source projects with GitConsensus, an open source GitHub bot that anyone can download and run.

Today running GitConsensus is even easier with the availability of a new GitConsensus GitHub App. Developers can now add GitConsensus directly to their repositories simply by enabling it in GitHub and then adding the consensus files. There are even example consensus rules covering a variety of options, from anarchy to oligarchy (with a set of recommended rules that works well for most projects).

In order to get this all working I also had to build a new GitHub Apps Python Library. This library extends the excellent github3.py library, making it easier to turn projects already built with it into full-fledged GitHub Apps.


Manage GitHub Pull Requests with gitconsensus

This weekend I dug into the GitHub API to build gitconsensus, which lets communities create truly democratic projects using Reactions as a voting mechanism. Projects can define consensus rules (minimum age of a pull request, quorum for votes, threshold needed for passing) using a YAML file in their project root. Pull Requests that meet the consensus rules will get merged, and those that still do not meet them after a certain amount of time will get closed.

The YAML file itself is pretty straightforward (a sketch of how the quorum and threshold rules evaluate follows the example)-

# .gitconsensus.yaml
# Add extra labels for the vote counts and age when merging
extra_labels: false

# Do not count any vote from a user who votes for multiple options
prevent_doubles: true

# Minimum number of voters
quorum: 5

# Required percentage of yes votes (ignoring abstentions)
threshold: 0.65

# Only process votes by contributors
contributors_only: false

# Only process votes by collaborators
collaborators_only: false

# When defined only process votes from these github users
whitelist:
  - alice
  - bob
  - carol

# Number of days after last action (commit or opening the pull request) before the pull request can be merged
mergedelay: 3

# Number of days after last action (commit or opening the pull request) before the pull request is autoclosed
timeout: 30
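
To make the quorum and threshold rules concrete, here is a minimal sketch of how a vote tally might be evaluated (this is illustrative, not gitconsensus's actual implementation):

# Hedged sketch of vote evaluation; abstention handling is simplified.
def passes_consensus(yes_votes, no_votes, quorum=5, threshold=0.65):
    votes = yes_votes + no_votes  # threshold ignores abstentions, per the config
    if votes < quorum:
        return False
    return (yes_votes / votes) >= threshold

print(passes_consensus(7, 2))  # True: 9 voters, ~78% yes
print(passes_consensus(2, 1))  # False: quorum of 5 not met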

The project is available now on PyPI and can be installed using pip.


Introducing jsonsmash – work with large JSON files easily

Over the last year I’ve run into some pretty massive JSON files. One recent example is from AWS, which publishes a 120MB file containing a list of their available services- one they have yet to provide documentation for. Attempting to open that in a standard editor is not going to be pleasant, and while tearing it apart with something like jq is certainly an option it doesn’t feel like the best approach.

That’s why I’ve built jsonsmash, an emulated shell that lets users browse through their massive JSON files as if they were actually filesystems. It uses shell commands any Linux user would already be familiar with, making it very straightforward to figure out the schema of a JSON object (even if it is a few hundred megabytes) and pull out specific data.
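
The core idea- navigating a parsed JSON document with filesystem-style paths- can be sketched in a few lines of Python (jsonsmash itself is a Node.js package and does considerably more):

# Illustrative path resolution over parsed JSON; not jsonsmash's actual code.
import json

def resolve(document, path):
    node = document
    for part in path.strip("/").split("/"):
        # Lists are addressed by numeric index, objects by key.
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node

doc = json.loads('{"services": [{"name": "ec2", "regions": ["us-east-1"]}]}')
print(resolve(doc, "services/0/name"))  # ec2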

Development is on GitHub and the package is published on npmjs.


Stash v0.14 Released with PSR-6 Support

Release v0.14.1 is a major update to Stash, and quite likely the last release line before v1.0.0.

The biggest addition is support for PSR-6. Stash now implements the various interfaces natively, allowing it to be directly injected into PSR-6 compliant libraries.

This did require a few API changes (which prompted some cleanup of deprecated functions), so please make sure to review the release notes.

Stash remains one of the best tested caching libraries out there, and this release improves on that significantly. In addition there have been optimizations and improvements in PHP7 support, including behind-the-scenes switching between the APC and APCu functions.

The Symfony Stash Bundle has also been updated, with version v0.6.1 utilizing the Stash v0.14 line.


New Release of Stash and Stash Bundle

Stash 0.13.1

  • Dropped support for PHP 5.3.
  • Updated dependencies.
  • Removed various PHP warnings (exceptions are still thrown where needed).
  • Various optimizations, such as reduced function calls during repeated operations.
  • Added “isPersistent” method to driver classes.

Stash Bundle v0.5.1

  • Dropped support for PHP 5.3.
  • Added ‘logger’ config parameter to caches so that loggers may be injected when each cache is created.
  • Compatibility updates for our upstream projects.

Backing Up with Puppet and rsnapshot

One of my favorite backup tools has always been rsnapshot. It’s based on rsync and uses a nice trick with hardlinks to maintain incremental backups that are also full backups. It runs using a basic configuration file and a series of cron jobs. This is Unix as it’s meant to be- extremely lightweight while also being very powerful.
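
The hardlink trick is worth a moment of appreciation. Each snapshot directory looks like a complete copy, but unchanged files are hard links into the previous snapshot, so they cost no additional disk space. A simplified Python illustration (a rough analogue of what rsnapshot does with rsync and hard links- not its actual mechanism):

# Toy demonstration of hardlinked snapshots; paths here are made up.
import os

os.makedirs("snap.0", exist_ok=True)
with open("snap.0/data.txt", "w") as f:
    f.write("unchanged file\n")

# The "new" snapshot links to the old file instead of copying it.
os.makedirs("snap.1", exist_ok=True)
os.link("snap.0/data.txt", "snap.1/data.txt")

# Both snapshots appear complete, yet share one copy on disk.
print(os.path.samefile("snap.0/data.txt", "snap.1/data.txt"))  # True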

I am rather picky with how it is set up though. I don’t like leaving root open over ssh, which means a sudo based solution is needed on the client side. I’m also rather paranoid, which means I like my backup solutions to be read only. And I don’t like all of my machines running off of a single rsnapshot configuration, since a failure for the script to run on one machine means it won’t run on the ones after it.

For years I had a set of scripts to handle this, but in the days of configuration management that seems almost silly. To make life easier I’ve put this all in a Puppet module.

There are quite a few features to this module that make it stand out.


Github Enterprise Backups with Puppet

The amount and value of data stored on GitHub Enterprise servers is quite large, and backing them up is rather important. To keep people from resorting to desperate restores from cloned repositories GitHub provides a tool for backing up and restoring GHE installs. To make this even easier I’ve put together a Puppet module that takes care of the basic setup and configuration of the GHE backups tool.

Combining this module with your existing Puppet environment makes GHE backups easy-

class { 'ghebackups':
  ghe_hostname => 'github.example.net',
}

In fact, if your network resolves “github” via your search domains you can skip the ghe_hostname parameter entirely.

Storing backups with a custom location and number of snapshots is not much more difficult-

file { '/backups/github':
  ensure => 'directory'
}->

class { 'ghebackups':
  ghe_hostname      => 'github.example.net',
  ghe_data_dir      => '/backups/github',
  ghe_num_snapshots => 72,
}

This module also keeps the backup utility up to date and takes care of some other minor issues around installing this package.

For full documentation view the module on Puppet Forge, and as always contributions are welcome on GitHub.

