Automate Your Life: Banking

I have been encouraging my team to automate everything.

To live by this principle, I have started to automate my bank spreadsheet. I keep a spreadsheet with all of my banking transactions, which comes in handy should I need to investigate an old transaction. My current sheet covers the last nine years, and until now I had been manually copying the details over.

My bank allows statements to be exported as CSV files.

Here is a simple bash one-liner that puts the data into the format I want:

cat filename.csv | awk -F, '{print $1 "," $5 "," $7 "," $6 }' | sed '1d' | tail -r
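Note that tail -r is the BSD flavour of tail that ships with macOS; on a GNU/Linux box the same reversal can be done with tac (a sketch using the same placeholder filename):

cat filename.csv | awk -F, '{print $1 "," $5 "," $7 "," $6 }' | sed '1d' | tac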

I like the newest transactions at the bottom and credits before debits.

The result is much easier to import into a Google Sheet than to fight with OpenOffice.

I have multiple current accounts and rebalance the main current account to a fixed amount at the end of each month. The remainder is moved into an offset mortgage. Credit card bills get paid (when due) from the mortgage account. This maximises the offset benefit.

How to Document Microservices

Here is a great article on using dot to document microservices:

https://articles.microservices.com/an-alternative-way-of-visualizing-microservice-architecture-837cbee575c1

I have forked and extended the gist to:

digraph architecture {
  rankdir=LR;

  // Storage – #303F9F (dark blue)
  node[fillcolor="#303F9F" style="filled" fontcolor="white"];
  database[label="DB"];
  cache[label="Redis"];

  // Client-side Apps – #FFEB3B (yellow)
  node[fillcolor="#FFEB3B" style="filled" fontcolor="black"];
  front_end[label="Front-end App"];
  extension[label="Browser Extension"];

  // Microservices – #C8E6C9 (light green)
  node[fillcolor="#C8E6C9" style="filled" fontcolor="black"];
  photos_ms[label="Photos MS"];
  chats_ms[label="Chats MS"];
  friends_ms[label="Friends MS"];

  // API Gateways – #FFCCBC (light orange)
  node[fillcolor="#FFCCBC" style="filled" fontcolor="black"];
  auth_api[label="Auth API"];
  my_app_api[label="Main API"];

  // 3rd-party APIs – #CFD8DC (light grey)
  node[fillcolor="#CFD8DC" style="filled" fontcolor="black"];
  facebook_api[label="Facebook API"];

  // Rabbit MQ – #FF0000 (red)
  node[fillcolor="#FF0000" style="filled" fontcolor="white"];

  subgraph client_side_apps {
    front_end -> {auth_api, my_app_api};
    extension -> {auth_api, my_app_api};
    {rank=same; front_end, extension, auth_api};
  }

  subgraph api_gateways {
    my_app_api -> {photos_ms, chats_ms, friends_ms};
  }

  subgraph microservices {
    photos_ms -> {database};
    chats_ms -> {database, cache};
    friends_ms -> {database, facebook_api};
  }
}

This is all you need to generate an image from the above text:

dot architecture.dot -Tpng > architecture.png

You can even add annotations to the lines by adding [label="My Link"] to a connection.

These are ideal to add to the README.md of GitHub projects.

I especially like the idea of keeping a diagram, and the source used to create it, under source control.

This requires Graphviz to be installed, which can be done as follows:

# Mac
brew install graphviz

# Windows
choco install graphviz

# Linux
sudo apt-get install graphviz

For the purists who want to keep their machines clean:

# Docker
cat file.dot | docker container run --rm -i vladgolubev/dot2png > file.png

How Jenkinsfiles Really Work

I only recently encountered the Jenkinsfile format. Previously I had used the clunky Jenkins UI or the cleaner Circle CI options.

Some of my colleagues had described it as using a special Groovy declarative syntax. It is in fact simply Groovy code using some neat tricks.

Groovy allows a closure (what other languages may call a lambda) to be used as a function parameter:

def block(Closure closure) {
    closure.call()
}

This can then be used as follows:

block( { print('hello') } )

Groovy allows the closure to be moved outside the brackets as a code block:

block() {
    print 'hello'
}

Here I have started to use the Groovy trick of dropping brackets. In fact you can also drop the empty leading parentheses:

block {
    print 'hello'
}

This is beginning to look like the pipeline or stage steps from a Jenkinsfile.

You can even add parameters:

def wrap(marker, Closure closure) {
    println marker
    closure.call()
    println marker
}

// Which can be used as:

wrap('name') {
    // something …
}

Jenkinsfiles are code pretending to be config, with the added benefit of being able to become code again when needed.

Painless is Painful

Elasticsearch 5.6 reached its end-of-life date on Monday. I work on a project that uses Elasticsearch extensively. The downside is that we have recently had a refresh of the development team, leaving us with no more than six months' exposure to a two-to-four-year-old code base.

The first discovery of the upgrade (to 6.6.1) was the new restriction that you need to be explicit about content types on every request. This is not too difficult, but it does require a few changes.

The second big discovery was that an index may no longer contain multiple types. This can be resolved by adding your own type field and using it as a discriminator.
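As a rough sketch of how that discriminator ends up being used (the index, field, and value names here are invented, and the request also shows the now-mandatory explicit Content-Type header):

curl -s -H 'Content-Type: application/json' \
     -X POST 'http://localhost:9200/my_index/_search' \
     -d '{
           "query": {
             "bool": {
               "filter": [ { "term": { "doc_type": "chat" } } ]
             }
           }
         }'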

The third discovery was `application/x-ndjson`, which is used in the bulk update process. This content type takes a list of JSON items, each terminated with a newline, forming a rough equivalent of a CSV file. For bulk updates you send pairs of lines: an action/metadata line followed by a body line with the details.
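A minimal sketch of what a bulk request looks like (the index, ids and fields are all made up; note that the bulk API insists on a trailing newline, which the heredoc provides):

curl -s -H 'Content-Type: application/x-ndjson' \
     -X POST 'http://localhost:9200/_bulk' \
     --data-binary @- <<'NDJSON'
{ "index" : { "_index" : "my_index", "_type" : "_doc", "_id" : "1" } }
{ "doc_type" : "chat", "message" : "hello" }
{ "index" : { "_index" : "my_index", "_type" : "_doc", "_id" : "2" } }
{ "doc_type" : "photo", "url" : "https://example.com/1.jpg" }
NDJSON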

The fourth discovery is the new scripting language called Painless. This replaces the Groovy scripting we had previously used. Despite various claims, it is not a drop-in replacement. It is a pared-down Java with almost no syntactic sugar. Adding arrays? Nope. Converting to sets or lists? Nope. String split requires regex support to be enabled (which the documentation advises against). Passing in parameters requires the barely documented params object. There is no CLI or compiler to test scripts with, just error messages that try to point you in the right direction. The name Painless itself makes it hard to search for. I understand what it is trying to be (efficient and secure) but it comes across as clumsy.
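For illustration, here is roughly what passing parameters in via that params object looks like (a sketch against a 6.x update endpoint, with an invented index and field):

curl -s -H 'Content-Type: application/json' \
     -X POST 'http://localhost:9200/my_index/_doc/1/_update' \
     -d '{
           "script": {
             "lang": "painless",
             "source": "ctx._source.view_count += params.increment",
             "params": { "increment": 5 }
           }
         }'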

Investigating Amplify

Here are my notes on creating a React app in Amplify, following the steps here: https://aws-amplify.github.io/docs/js/react

To make this interesting I have started with an AWS user that has zero access rights; rights will be added as needed. I am developing this on a Mac.

I started by installing the Amplify CLI:

npm install -g @aws-amplify/cli

This produced five warnings:

npm WARN deprecated circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.

npm WARN deprecated kleur@2.0.2: Please upgrade to kleur@3 or migrate to 'ansi-colors' if you prefer the old syntax. Visit <https://github.com/lukeed/kleur/releases/tag/v3.0.0> for migration path(s).

npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue

node-pre-gyp WARN Using request for node-pre-gyp https download

npm WARN graphql-import@0.4.5 requires a peer of graphql@^0.11.0 || ^0.12.0 || ^0.13.0 but none is installed. You must install peer dependencies yourself.

I now have @aws-amplify/cli@1.1.7 installed.

Step 2 configure:

amplify configure

This fails with:

SyntaxError: Unexpected token ...
    at createScript (vm.js:74:10)
    at Object.runInThisContext (vm.js:116:10)
    at Module._compile (module.js:533:28)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:503:32)
    at tryModuleLoad (module.js:466:12)
    at Function.Module._load (module.js:458:3)
    at Module.require (module.js:513:17)
    at require (internal/module.js:11:18)

nvm reveals that I am using Node 8.0.0.

Now I move Node up to the current LTS version (10.15.3), repeat the CLI install (same as before), and retry the configure step.

This time it prompts me to log in to AWS and create a new user.

Apparently the minimum access rights are: AdministratorAccess

The user has been created and the access key stored in a profile.

Now I need to ensure that create-react-app is installed (JavaScript is very fashion-conscious and all the cool kids now use yarn):

yarn global add create-react-app
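Before running amplify init there needs to be an app to initialise inside, so create the skeleton React app and change into it (the app name here is just a placeholder):

create-react-app amplify-demo
cd amplify-demo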

Now on to amplify init.

Here I get prompted for a project name (matching the app name helps).

Also:

Environment (dev)
Preferred editor (VS Code)
Type of app (javascript)
Framework (react)

There are several others here but the defaults work fine.

It then prompts you for a profile to use and then starts doing the AWS magic.

This creates all of the local config without yet pushing it to the cloud.
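If you want to see exactly what has been generated locally before anything is pushed, the CLI can report it (the exact output varies by CLI version):

amplify status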

The next command is:

amplify add hosting

There are two options here: dev using HTTP and prod using HTTPS.

For now I am going with dev and have followed the defaults.

The following will deploy the app:

amplify publish

This will create the application in S3, with the resources provisioned through CloudFormation.

My trivial app is now deployed.

Next step is to add authentication.

amplify add auth
amplify push
yarn add aws-amplify aws-amplify-react

I added in the required boilerplate code and then called: amplify publish

Testing account creation hits one of my pet hates: invisible limits on password entry systems. The default Cognito password rules require one capital letter and one symbol (on top of the lowercase letters and numbers I had already included).

The next trick will be to learn how to configure the sign-up and login screens and the verification codes, but this is outside the scope of this exercise.

To avoid leaving yourself with Amazon bills, there is the option of:

amplify delete

Mischief Managed.

Another Useful Dependabot Command

Commands are sent to dependabot by commenting on a pull request.

@dependabot ignore this major version

This command prevents Dependabot from suggesting any further upgrades within the current major version of that dependency until it has been manually advanced past it. This is really useful if you want to keep Node on an LTS version and don't want to keep being notified of bleeding-edge changes.

Analysing Pending Pull Requests

If you have been working with GitHub in a team that uses PRs (especially if you are using Dependabot) then you will need to find a way to manage PRs across a number of projects. This gets more complex if you are working on a microservices project with tens of repositories.

The following is a great site that can be used to help you manage this:

https://ghpr.herokuapp.com

To make life even easier here is a script to aggregate some stats.

First visit the site. Copy all the text and put it into a file called pr.txt.

grep TEAMNAME pr.txt | sort | uniq -c | sort -r | awk '{print "https://github.com/" $2 "/pulls " $1}'

I typically post this to my team’s slack channel once per day.
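To take the automation a step further, the same output can be posted to Slack through an incoming webhook. This is just a sketch: it assumes an incoming webhook has already been configured for the channel, uses jq to do the JSON escaping, and the webhook URL below is a placeholder.

grep TEAMNAME pr.txt | sort | uniq -c | sort -r \
  | awk '{print "https://github.com/" $2 "/pulls " $1}' \
  | jq -Rs '{text: .}' \
  | curl -s -X POST -H 'Content-Type: application/json' \
         --data-binary @- 'https://hooks.slack.com/services/T000/B000/XXXX'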

Is Tagless Final a Mistake?

Yesterday I attended a talk at LDN Functionals Meetup.

One of the talks was about Tagless Final.

This is an article on Tagless Final:

https://blog.scalac.io/exploring-tagless-final.html

To quote the article:

“Tagless Final allows you to build a subset of the host language which is sound, typesafe and predictable. When designed properly this subset makes it easy to write correct programs and hard to write incorrect ones. In fact, invalid states can’t be expressed at all! Later, the solution written in the hosted language is safely run to return the value to the hosting language.”

A key point of this is that “invalid state can’t be expressed”.

The question then is: how do you do validation, or work with the rest of the world that is not part of your nice clean typesafe utopia?

The samples that were shown during last night's presentation seemed to throw an exception if the data was ever found to be invalid. This works, up to a point. The downside is that working with anything else would require a wrapping anti-corruption layer. By the time you have tested this integration layer as well as your provably correct core, you may as well not have used Tagless Final.

Perfect Storm

Some years ago I worked on a custom type system in C#. This was deliberately designed to allow invalid state to be represented. The value of this was a pluggable validation system that could explain exactly what was wrong, returning results in exactly the same form as its rules engine.

The application was defined in terms of Aggregates that formed trees of components. Each component could have properties attached, and each of these properties could be set to a string value. Even an integer age field could be set to “XXX”. This would result in the field being invalid, yet still having a value, which made it much easier to pass data back to the user for correction and allowed error messages to be more detailed.

These Aggregates had rulesets attached that allowed sophisticated defaults, calculations, and validations to take place. Reference data existed in what DDD would call a distinct Bounded Context. The downside is that DDD would label this an Anemic Domain Model, but the rulesets are far from that: they allow far more sophisticated processing, so constraints on the data could get tighter the further through a process it travelled. For example, an order entry system could capture a user's shopping list (buy carrots) and during later stages enrich it to include the details (7 carrots, a specific variety, weight, and price). The same model would handle both cases yet have different rules applied.

The whole tree of data could be stored in a Work In Progress table until the first set of validation was complete, allowing the user to be presented with the initial data and all of the validation errors.

The type system and validation rules were just one part of the system. It also included code generation to create the model, database, and views. It had a Data Access Layer that could retrieve and store these models to a database. Rules could be imported and exported to a spreadsheet so that the specific rules were known to the user.

Having rules to validate a domain model makes it really easy to work with. Tests using rulesets are trivial. Everything is immutable: you enter data, validate, and get back a set of validation messages.

The production system that was built with this had only four production bugs in its first three years of use. The validation rulesets meant that it was not possible to store data that was not valid (by the defined set of rules). The same rules had been used to clean the historical data that was imported into the new system, so there were no errors in the existing data. It was also possible to reload and revalidate the database at any time, which would surface persistence errors (or manual data corrections).

Some of the ideas used in this project were reimplemented in the Perfect Storm project: https://github.com/chriseyre2000/perfectstorm